From YouTube: Jenkins GSoC git plugin office hours 2020-03-24
Description
Jenkins project office hours for the git plugin project ideas that have been suggested as part of Google Summer of Code 2020.
A
All right, I'm Mark Waite, and we're recording this. This is a Google Summer of Code office hours session for the Jenkins git plugin. Thanks very much for being here. We want the bulk of this session to be question and answer: you ask a question, and I'll give an answer, or I'll declare that I don't know the answer, and that's okay too. But we'll do mostly question and answer. Before we get to the question and answer, though:
A
What we'll do first is each of us will introduce ourselves: your name, where you're from, where you are in school, those things, so that we get to know each other. Then I will do a brief overview of the two project ideas. Those are not the only things that could be project plans, but those are the two that I had in mind, and then we can talk about the questions that you have. So let's go first: I'm Mark Waite.
A
I am, let's see, a Jenkins contributor. I maintain the git plugin and the git client plugin and have done so for a number of years. So that you understand my biases: I started maintaining the git plugin because I felt a little grumpy about other people breaking the git plugin with their changes. The way I got involved was I started writing a bunch of tests. I wrote a bunch of tests and submitted pull requests, wrote a bunch of tests and submitted more pull requests, and the current maintainers said, hey...
A
"You know, we're kind of tired of this. Should we just make him a maintainer?" And they did, and so I became a maintainer, and as I kept maintaining, they sort of faded into the background to do other things, until it became the case that I was the primary maintainer. And yes, I'm still fixated on tests. I still care very deeply about not breaking things, and that means that I will bias towards not taking changes rather than taking changes that don't have tests.
A
If a change is proposed and it has no tests, then unless it's a very compelling change, I'm unlikely to write the tests myself, because I expect an author to write the tests as part of the exercise of writing the change. I've spent some years in programming, but I spent about 20 to 25 years managing, so I'm also imperfect and still learning about programming. So don't be surprised if you learn something and I learn something in the interaction; that's perfectly fine. We've shaken our heads...
A
I've shaken my head personally, dismayed, at some things. For instance, Rishabh had submitted something and I realized: oh, this is a long-standing bug that I had left in the tests and had not fixed. It's really embarrassing, but I'm going to be embarrassed and just admit I made a mistake. So let's go ahead. Why don't we have Rishabh... do you want to introduce yourself next? Oh, your video is off and there's no microphone for you, so let's go on, and I don't know how to pronounce your names.
B
So my name is Josh Jane, and currently I'm pursuing my master's at San Diego State University. I'm from India, but currently I'm in San Diego in the U.S. I usually get asked why I decided to pursue a master's: I was looking to increase my knowledge. So yeah, I came here, and it created a new, different set of challenges, and it's been good so far. It's been six months for me in the U.S. right now.
C
Yeah, so I'm currently pursuing a bachelor's in control engineering in New Delhi. My degree is in control engineering, but I realized my interest was in computer programming, so I took a lot of electives and tried to build my path towards computer programming and get internships, and slowly, I think, I've been able to make that shift. A few of my changes were in Jenkins COBOL, mostly, and I've looked at the community so far; the community is absolutely amazing, everybody.
A
Thank you. Well, so you and I have a shared history in one part, then. My degree is actually in mechanical engineering, and as I was graduating from the university, during recruiting I had to tell the recruiter I had no desire to do mechanical engineering: I wanted to play with computers for the rest of my life. It turned out that that company wanted someone who wanted to play with computers with a degree in mechanical engineering. So a degree in control engineering is a good choice; there are lots of places where we do software. Rishabh?
D
I'm Rishabh Latonia, and I've been studying computer science engineering. Basically, it's a dual degree, computer science engineering and an MBA, a five-year course at the Thapar Institute of Engineering and Technology; it's in Patiala, India. Right now I am based in Noida, where I just completed my internship recently at a big data analytics company.
D
So, my interest: I started open source contribution at the company where I was interning. I was not aware of open source contribution; I always thought that whatever contributions we made were either a client request for the company or for the benefit of the company's software. But one of the mentors I had at the company motivated me to take one of the features we had in-house and contribute it to the open source project we were using as well.
D
That is really the moment where I started to understand the benefits. The way I coded changed drastically, because I learned how the open source community helps you improve the way you code, the style of your code, everything. So that was the point where I started open source contribution. Then, I think around February, I started wanting to make changes in the git plugin and the git client plugin.
A
Great, thank you, thanks very much. All right, so I think the next topic, then, should be a quick review of the project ideas. There are two, and I'm going to share my screen for this, because I think it may help, just to remind me and remind everybody: okay, here are the things that we had in mind. So let me find Google Summer of Code, bring it onto my screen, and then share my screen. While I'm getting that brought up...
A
One of the reminders that Oleg Nenashev suggested to me was to make sure that all student submissions provide all of the sections that are mandated by the Google Summer of Code outline. Be sure you're very careful as you're preparing your proposals. Rishabh, I haven't reviewed your proposal yet to look for that specifically, but I will be looking for it specifically within the next one or two days. Be sure it includes all the sections, because they expect every section.
A
It shows that you've read the details and that you follow the description rigorously. So don't miss the opportunity to use those hints. And as a reviewer, this is my first year being a mentor, so I will tend to make mistakes as well. You'll have to watch for yourself too, to assure that your proposal is in as good a condition as possible as you're getting it ready.
A
Excellent, and that's what I would hope; that's great to hear. So I'm going to share my screen now, and here we go, okay. The two project ideas that I had offered were git plugin performance improvements and git repository caching on agents. Let's look at git plugin performance improvements first. The idea here is that the git plugin, or rather the git client plugin, has two implementations inside of it.
A
One uses command line git, and one uses a Java-based implementation of git, JGit. Command line git must be invoked from Java by forking a new process: creating a new process, starting that process, communicating with that process, and getting its information back. And on Windows especially, there have been times in the past, at least, where the cost of starting a process and running it has been quite a bit higher than the cost of starting and running a process on Linux.
A
So my thought was, there may be real benefits on some of the operations where the cost of the call to command line git is overshadowed by the cost of starting the process; JGit might be faster there. And so the idea was to take the Java microbenchmark harness (JMH) and use it to run tests comparing the command line git implementation and the JGit implementation.
A
This
is
just
there's
an
optimization
thing
that
needs
to
be
done
here
and
then
the
the
the
other
talks
about
performance
comparison
using
using
jmh
and
some
QuickStart
ideas,
some
newbie
friendly
issues
that
are
available
and
who
so
before
we
do
any
questions
I
wanted
to
welcome
it.
Maybe
the
best
is:
do
we
want
to?
Are
there
questions
on
this?
One,
specifically
that
you
would
like
to
ask,
and
we
should
discuss
before
we
go
to
the
next
set
of
questions.
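The process-startup overhead described above can be illustrated with a plain timing harness. The real project would use JMH against the git client plugin's implementations; this simplified stand-in (class name, dummy tasks, and iteration counts are all illustrative, not anything from the plugin) only shows the shape of the measurement, including an unmeasured warm-up pass standing in for JMH's warm-up phase.

```java
// Simplified timing sketch: the real comparison would use JMH benchmarks
// calling the git client plugin's command-line and JGit implementations.
public class TimingSketch {
    /** Runs the task `iterations` times and returns average nanoseconds per run. */
    public static long averageNanos(Runnable task, int iterations) {
        // A few unmeasured runs stand in for JMH's warm-up phase.
        for (int i = 0; i < 3; i++) {
            task.run();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task.run();
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // Hypothetical stand-ins for the two implementations under test.
        Runnable inProcess = () -> Math.sqrt(42.0);      // JGit-style in-process call
        Runnable forkStyle = () -> new ProcessBuilder(); // CLI-style setup (no real fork here)
        System.out.println(averageNanos(inProcess, 100));
        System.out.println(averageNanos(forkStyle, 100));
    }
}
```

A real benchmark would replace the dummy tasks with calls that actually fork a `git` process versus calling JGit in-process, which is exactly where JMH's statistical rigor becomes necessary.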
D
Actually
I
have
some
questions,
but
they
are
related
to
the
proposal.
I
propose
solutions
and
the
implementations.
Would
you
like
to
discuss
that
right
now,
but
I
think
it's
going
to
be
a
I,
don't
know
maybe
a
ten
minute
or
maybe
a
fifteen
minute
discussion,
so
I
don't
want
to
overshadow
other
people.
So
let
me
at
I'm
too
much
that's.
A
And
that's
a
fair
question:
I
think
I
would
like
to
get
to
your
questions
your
sob,
but
I
would
offer.
Rather,
let's
let's
look
at
high-level
before
we
talk
about
specific
details,
we
will
get
to
your
questions.
Absolutely
I.
Think
that's
the
right
thing
to
do
for
this
session,
but
implementation
details.
I
wasn't
true.
We
should
do
here
before
I
review,
the
other
one.
A
Great, all right. Well, then let me do the next overview second, and then we will do open questions on both of them. So, git repository caching on agents starts from the realization that a single workspace on a Jenkins agent is probably mostly a copy of other workspaces that are on the same agent, particularly with multibranch jobs, for instance, where we use multibranch jobs to test the Jenkins git plugin.
A
So
the
master
branch,
the
stable,
stable
3x
branch
and
every
pull
requests
are
all
derived
from
the
same
basic
repository
and
cloning
that
full
copy
of
the
repository
for
every
workspace
seems
like
it's
wasteful,
and
there
are
things
that
we
might
do
to
avoid
a
full
clone
of
every
every
time.
We
need
it
because
there
are
probably
existing
copies
of
that
repository
somewhere
on
that
disk.
Already
that
we
could
use
as
a
reference
repository
and
when
git
uses
a
reference
repository,
a
reference
repository
allows
the
local
copy
to
be
updated
from
local
objects.
A
So,
there's
a
pull
request
that
was
proposed
a
few
years
ago,
Poe
request
502,
which
offers
one
one
variant
of
this,
but
I
think
there
are
several
different
ways.
This
could
be.
This
could
be
done.
It
could
be
done
by,
for
instance,
following
following
a
technique
where
we
cache
things
always
to
the
local
agent
in
some
central
cache.
It
could
be
done
by
looking
on
the
local
agent
for
through
an
index
of
repositories
that
are
in
workspaces,
trying
to
find
an
existing
workspace.
There
are
several
different
ways.
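The reference-repository mechanism under discussion is git's own `--reference` option to `git clone`. A minimal sketch of how a plugin might compose that command line, assuming a hypothetical `build` helper and illustrative paths (this is not the git plugin's actual API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of composing a clone command that uses git's built-in
// reference-repository support; objects already present in the cache
// are borrowed instead of being fetched again over the network.
public class ReferenceCloneSketch {
    /** Builds "git clone --reference <cache> <url> <dir>" as an argument list. */
    public static List<String> build(String cachePath, String remoteUrl, String workspaceDir) {
        List<String> args = new ArrayList<>();
        args.add("git");
        args.add("clone");
        if (cachePath != null) {
            args.add("--reference");
            args.add(cachePath);
        }
        args.add(remoteUrl);
        args.add(workspaceDir);
        return args;
    }
}
```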
B
What I've seen done elsewhere is using a cache on the master and using that as a mechanism to update all the local copies, okay. So I was thinking, if we have a reference repository, like in the advanced clone feature, would there be overlap between both of these? Would there be a sort of overlap between these two pieces of functionality: using a reference repository in the advanced clone behaviour, and having this cache functionality as well?
A
There
certainly
is
overlap
potential
and
it's
those
are
two
very
good
concepts.
So
the
mercurial
plugin
I
love
it's
it's
concept
that
the
person
who
wrote
it
is
brilliant
Jesse
glicks
work
on
it
is
is
absolutely
brilliant.
So
what
what
the
concept
that
came
to
me
was
if,
if
we,
if
we
used
the
cache
on
the
master
and
copied
to
the
agent,
the
agent
and
the
master
distance,
is
probably
much
shorter,
they're,
probably
much
nearer
than
the
agent
to
the
git
repository
right
then
to
the
central
git
repo,
so
there's
benefit
there.
A
However,
the
the
copy
from
master
to
agent
is
probably
still
not
not
as
fast
as
copy
on
the
local
disk
of
the
agent.
So
the
thought
I
had
was
okay.
We
might
use
copy
from
master
to
agent
as
as
the
beginning,
primate
primed
the
cache.
By
going
from
master
to
agent,
we
then
got
to
ask
the
remote
repository:
okay,
give
me
the
real
objects
and
that
will
that
will
populate
them.
So
that
would
be
one
approach
would
be
master
to
agent
and
then
go
ask
the
remote
repository
to
give
us
the
latest
objects.
So.
B
Instead,
I
was
I
was
more
keen
on
going
to
the
mouse
and
how
the
most
you
will
have
done
it
using
like
taking
updating
the
master
first
like
if
we
see
a
request
that
that
is
making
a
record
at
his
meeting
Erica
that
making
that
has
changes
and
needs
a
copy
new
copy.
So
it
would
update
the
master
cache
first
then
use
the
loop
update,
the
local
cache
parallely
and
then
use
it
from
there.
So
then,
all
the
all,
the
local,
all
the
local
agents
that
have
the
that
cache
would
have
been
updated.
B
A
Well, I like the way you're thinking. Let me see if I can say it back to you, to be sure that I've understood it. I think your vision was: if there are many jobs running on agents, they would each make a request to the central cache on the master, saying "I want this repository." They would tell the master, "I want the repository that is at this location," but they're actually asking the master.
A
The
master
then
performs
the
request
to
the
the
actual
remote
repository
and
then
delivers
the
to
the
many
requesters
that
are
out
yeah
that
that
seems
that
seems
viable.
You
would
then
have
a
single
reader
from
the
master
to
the
remote
repository
that
populates,
the
cache
on
the
master
and
many
many
readers
to
the
agents
and
and
since
there's
only
one
real
repository
master
repository
off
on
github
or
bitbucket.
B
Was
hoping
that
this
doesn't
break
the
underlying
architecture,
because
if
we
see
how
the
reference
reference
supposedly
has
been
used
like
in
the
existing
at
once
clean
behavior,
we
already
have
a
functionality
of
giving
reference
repository
every
on
every
agent.
So
I
was
hoping.
It
doesn't
break
this
or
some
way
it
might
break
it
or
it
would
be
an
Olaf,
maybe
somewhere
I
I.
A
Don't
think
it
would
break
it
because
the
reference
repository
concept
is
built
right
into
get
and
so
get
itself
does.
Does
those
reference
repositories
we're
just
using
a
facility
that
GUID
already
has
now
it
may
not?
It
may
not
give
you
the
maximum
disk
savings
or
the
maximum
data
transfer
savings.
A
If
you
don't
use
a
reference
repository
on
the
agent,
so
so
the
the
the
the
one
to
one
to
many
that
you
described
where
git
repository
to
master
was
one
one
chain
of
request
and
then
many
check
requests
to
the
agent
will
tend
to
copy
all
of
the
objects.
Many
times
to
the
agent.
You
could
consider
one
to
one
to
one
to
many
where
you
say:
I'm
gonna
go
from
Massa
from
git
repository
to
master
one
request,
master
agent,
cache
repository
one
request
and
then
then
spread
that
use.
A
With PR 502, one of its blocking points was that when I did interactive testing with it, it had concurrency issues that I did not know how to solve. All I saw was that it had concurrency problems, and those concurrency problems were serious enough that I would not release it to production. And that was just me; I know how to stress the thing, but I am certainly not a thousand people using it.
B
I was wondering what would be a good proposal: laying out this architecture of having one-to-one, like from remote to master, master to agent, and then out to other job instances and other agents running in parallel at the level of that particular agent. So, laying out the architecture, or writing some template, maybe using some sort of coding mechanism to display what you are trying to achieve.
A
So, the master caches: we would want the safety check that the master caches are in fact full copies, yeah, right? Because we don't want a narrow refspec, we don't want a shallow clone. It's not a checkout, so we don't have to worry about sparse checkout, but if it's shallow, then one of the actions should probably be to deepen it.
B
That would override the fast checkout, shallow copy, whatever the user had requested, and that would clone the complete repository on the master at a particular location given by the user. And all the agents, everybody, would know that this would always be the single point from which they can pull to get the updates, instead of having it defined randomly, or maybe generated somewhere in a workspace on the master.
A
That's
that
certainly
also,
however,
I
believe
and
I
could
be
wrong
on
this,
because
the
that
that
section
of
the
code
was
actually
implemented
by
Steven
Connolly,
with
inputs
from
Jessie
Glick,
so
I'm,
not
as
fluent
in
the
in
the
multi
branch
code
as
I'd
like
to
be,
but
my
I
think
what
you're
describing
is
only
a
slight
variation
on
the
caches
that
are
already
in
the
get
plugin
from
on
the
sitting
on
the
master.
So
all
you're
doing
what
you
just
described.
A
And your point is valid; you may want to put that into the plan, saying: hey, the intent is a safety check that the cached copies are not shallow and not, let's say, not shallow and not a narrow refspec. And if it turns out that you confirm that the depth and/or the refspec is under user control, you may then have to say: I've got to create a new cache concept; I've got to have a new cache on the master which is not under user control.
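The safety check being described could be sketched as follows. `CacheState` and `repairCommand` are hypothetical names invented for illustration; the repair itself relies on git's standard `git fetch --unshallow`, which converts a shallow clone into a full one.

```java
// Sketch of the cache safety check: a cache entry must be a full clone,
// so a shallow cache is repaired (deepened) before it is reused.
public class CacheSafetySketch {
    public enum CacheState { FULL, SHALLOW }

    /** Returns the git command needed to make the cache usable, or null if it already is. */
    public static String repairCommand(CacheState state) {
        if (state == CacheState.SHALLOW) {
            // "git fetch --unshallow" deepens a shallow clone into a full one.
            return "git fetch --unshallow";
        }
        return null; // full copies need no repair
    }
}
```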
D
So, Mark, I have a question related to this discussion. When you said that we could have a different cache which the user doesn't know about: will that not create problems for users who have large repositories? Because the cache would be growing bigger, and they would not know what is occupying their disk space, and if they're not able to maintain it... I am not sure how we maintain our cache.
A
The assumption is valid, yes; it certainly is a valid concern. A Jenkins master is quite commonly a large consumer of disk space, and that's an accepted reality. Now, let's take the example I had with a previous employer, where we didn't like it, we weren't terribly proud of it, but we had a git repository that was 20 gigabytes, and with a 20 gigabyte git repository it mattered a lot where we put that thing, right?
A
We had to make sure we never cloned it as anything other than shallow; we never did anything but use a narrow refspec, and it created all sorts of limitations on us for disk space. So yes, we do have to be aware of that. As an example, the base test repository for testing the repository caching should probably be the Linux kernel; it's an excellent choice, and it starts at a gigabyte, and it's got great history.
A
So the Linux kernel is the poster child of big repositories with lots and lots of commits, and so yes, it's an excellent choice, and it reminds us that it's not unreasonable to have a one or two gigabyte repository that is maintained by people who are very serious about using it. Now, the 20 gig repository that I had, we were actually not serious about using it; we had people who were checking large binaries in to it, but Linus does not typically check large binaries into that repository.
D
I'm not sure why it's still showing, yeah. So the first question is: with the benchmarking, what if I want to give this as an option, an additional behavior, to the git plugin user? The rationale behind this decision is that the git plugin and the git client plugin have a very broad audience, and whatever performance changes we make will come out of this study and its performance test suite.
D
Possibly
we
cannot
get
create
test
cases
for
every
and
abuse
case.
The
users
will
have
because
it's
impossible
cause.
We
have
drawn
audience
so
what
I?
What
I
thought
was
that
initially
we
could
have
it
as
an
additional
behavior,
and
here
I
have
created
an
implementation
and
there's
this
prototype
and
after
maybe
a
release
or
two
once
we
know
that
from
the
user
feedback
that
what
we,
if
replacing
get
with
get
CLI
get
with
Jake,
it
is
actually
giving
them
a
considerable
boost
in
performance.
D
Then
maybe
we
could
shift
it
from
an
addition
behavior
to
something
which
is
happening
inside
and
they
don't.
They
don't
know
about
it.
As
a
mandatory
feature,
although
it
makes
sense
that
performance
improvement
should
be,
should
not
be
a
concern
for
the
user,
but
what
I
thought
was
since
this?
This
is
a
GD
shop
project
with
with
a
person
and
I'm
implementing
the
changes
who
doesn't
have
a
considerable
experience
with
each
lesbian
I
would
I
would
want
to
have
a
safe.
D
A
A
A
There are also techniques in the community that could allow us to gather data from users, if you're interested in that. There are ways to do telemetry, where users could optionally report back to us for a fixed period on their experience, automatically. So your take is not only good, it's very well suited, okay.
D
So
it's
going
to
be
and
get
a
cm
extension
a
class,
an
implementation
which
will
be
called
performance
improvement
option.
What
it
will
basically
do
is
it
will
decorate
the
environment,
media
business,
the
vibrant
global,
in
my
variable
we
will
have,
it
is
basically
add
one
flag
of
get
with
comments,
lag
boolean
flag.
So
if
a
user
chooses
to
enable
performance
wherever
we
have
modified
the
code
through
the
results
of
our
benchmarking
study,
wherever
we
have
modified
the
code,
we
will
we
will
have
checks.
If
Flagg
is
true,
we
will
basically
implement
the
implement.
D
Our
will
select
the
implementation
which
is
performing
better
according
to
us
and
if,
if
the
boolean
says
no,
then
we
will
will
use
the
default
footpath.
So
it's
basically
how
every
decorate
function
works
and
then
I've
just
showed
how
I'm,
just
so
over
short
for
steps
how
I'm,
implementing
and
I'm
sure,
once
you
review
the
proposed,
then
you'll,
you
see
the
implementation,
you
know.
So
what
do
you
think
this
is
the
right
way
to
do
it
or
would
you
would
you
have
some
concerns
or
criticism
regarding
this
approach?
I
think.
A
This
is
a
fine
way
to
do
it.
I'm
not
sure
I
would
use
an
environment
variable,
because
the
mere
exists
in
the
existence
of
the
decorator
in
this
case
is
already
a
flag.
You
probably
I
mean
if
you
look
I
believe
there
are
other
other
decorators
available
already
implemented
like
wipe
workspace
or
like
shallow
clone,
which
they
they
don't
use.
An
environment
variable
to
record
the
state
of
the
decorator,
the
true
or
false
nosov
it
they
just
use
an
internal
variable
is
there?
Is
there
was
there?
A
D
I think it was the first thing which came into my mind; I thought this was the thing to do, because I thought that the environment variable would be shared everywhere and I would not have issues accessing it. Of course, if I have an internal variable, at least a private one, I wouldn't have that problem either. But at the time when I was thinking about the solution, I thought that I could access my own variable everywhere.
A
I'm not sure that the code you insert will be any less aware of a "git performance" flag than it would be of an internal variable, a method or field on the class. Either way, every time you want to ask a question of the extension, you're going to have to see whether the extension is there, and so I think the overhead for you will be about the same in this case.
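The internal-variable alternative being discussed might look like this plain-Java sketch. It stands in for a GitSCMExtension without using the Jenkins API, and all names are illustrative: the extension's presence plus an internal boolean field selects the code path, with no environment variable involved.

```java
// Sketch of the decorator idea: a plain class standing in for a
// GitSCMExtension whose presence (plus an internal boolean) selects
// the implementation. Names are invented for illustration.
public class PerformanceOptionSketch {
    private final boolean useOptimizedPath;

    public PerformanceOptionSketch(boolean useOptimizedPath) {
        this.useOptimizedPath = useOptimizedPath;
    }

    /** extension == null models "the user did not add the behavior at all". */
    public static String chooseImplementation(PerformanceOptionSketch extension) {
        if (extension != null && extension.useOptimizedPath) {
            return "optimized"; // e.g. the path chosen by the benchmarking study
        }
        return "default";       // the existing code path
    }
}
```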
A
This environment variable would be available, and users could see if it was changing back and forth between using JGit and git. So there is some communication with the user by choosing to use an environment variable. I'm not sure that's healthy, because we may not want the user to actually know which one we're doing; we may want to hide that from them. But oftentimes I've made the mistake of thinking...
Then again, this is something I don't know; if you've already done testing and already assured it, you'll get a much better answer during the implementation phase than I could offer right now, because I'm just offering my guess, right? You'll learn during it. Okay, so the plan does not require that you have the code; if I understand correctly, the plan does not require that you're proposing final code. Rather, you're proposing proof that you've thought carefully about these steps, and you've thought enough to realize...
D
Okay, okay, that's good. The second thing I would like to ask is... I guess you're probably right, because I performed performance benchmarking on git fetch, and I'm not sure that the results I'm getting are correct. I used a roughly 320 MB repository for git fetch, and I compared CLI git with JGit, and I have seen more than one minute of difference in execution time. Is that correct?
A
That's quite believable. Considering the investment that has been made in the code that is git, versus JGit: the community behind git is dramatically larger than the community behind JGit. There's just no kidding ourselves that they are not very differently sized communities.
A
The
community
that
is
behind
get
includes
people
who
have
worked
on
Linux
file
systems
and
have
worked
on
on
at
the
kernel
level
for
a
very
long
time,
and
therefore
they
tend
to
choose
things
and
linus
chose
things
in
initial
implementation
that
were,
in
fact
very,
very
well
tuned
to
the
Linux
file
system.
So,
if
command,
it
is
dramatically
faster
than
Jake.
Yet
I
am
NOT
the
least
surprised,
particularly
on
large
repositories.
Now,
if
it's
the
other
direction,
where
you
find
a
on
large
repositories,
Jake,
it
is
significantly
faster
than
command
line.
Get.
D
I did find one observation which was interesting to me when I was performing this git fetch benchmark: for a repository whose size is very low, I have a 34 KB repository, JGit was performing better than CLI git in terms of average execution time, and for other tiny repositories as well I saw that JGit was performing better. But since this was kind of an anomalous behavior, I wanted to check...
D
If
this
is,
if
this
is
right
or
not
so
what
I
did
I
was
I
was
a
little
apprehensive
of
the
fact
that,
since
we
are
using,
the
jjh
framework
applies
JVM
forming
before
they
run
the
performance
benchmarks,
so
I
will
I
was
a
little
I
had
a
doubt
that
if
the
bomb
obsession
is
actually
giving
Jacob
this
boost
of
performance,
so
what
I
did
was
to
check
it
this
to
confirm.
If
this
was
the
right,
behavior
I
tried
a
different
mode
of
performing.
D
This
is
called
a
code
start
performance
benchmarking,
which
basically
means
that
I
don't
bomb
up
the
JVM
enough
I,
actually
don't
wobble
I,
just
I
just
start
counting
the
execution
time
right
from
the
test.
I've
written
the
get
fetched
and
I
actually
found
that
Jake.
It
was
dent
slower
when
the
devil's
nobama
Jake,
it
was
not
faster.
D
So if the JVM is not warmed up, then JGit does not perform better than git. And I am not sure right now how I would get to know, in the real code, whether the JVM is warmed up. I would assume the JVM is warmed up by the time I reach the git fetch part, so basically I think it's a fair assumption that JGit would perform better than git under a certain size of repository.
A
What you just described is exactly the kind of sensitivity analysis that I was hoping for in this, and what you described aligns very well with what I assumed would be the result. Because what happens in your test is, let's see: there's a cost to fork a new process; the cost of forking a new process is relatively constant, and on small repositories it may actually dominate the cost of the operation.
A
Now, you used an operation which is a network operation, so it introduces network variability and network slowdown in addition. And yet you still saw that the cost of the fork and the cost of communicating between processes was a significant portion of the total cost, up to a certain threshold. For me, that may be an argument that, in choosing these tune-ups, we would consider using some form of local estimate of the size of the remote repository. I don't know how you would do that local estimate of sizes, but it seems like if you've got a local copy somewhere in a cache, there's an easy estimate, or you could use heuristics. You can imagine all sorts of guesses as to how big the remote repository is, to tune which implementation should be used.
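The size heuristic suggested here could be sketched as a simple threshold check: below some size, in-process JGit wins because fork overhead dominates; above it, command line git wins. The cutoff value is invented for illustration and would really come from the benchmark data.

```java
// Sketch of a repository-size heuristic for choosing between the
// JGit and CLI git implementations. The threshold is illustrative.
public class SizeHeuristicSketch {
    /** Hypothetical cutoff; a real value would come from benchmark results. */
    static final long THRESHOLD_BYTES = 10L * 1024 * 1024; // 10 MiB

    public static String chooseImplementation(long estimatedRepoSizeBytes) {
        // Small repos: fork overhead dominates, prefer in-process JGit.
        // Large repos: CLI git's tuned object handling wins.
        return estimatedRepoSizeBytes < THRESHOLD_BYTES ? "jgit" : "cli-git";
    }
}
```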
A
And they have very different characteristics. The IBM mainframe is a completely different way of looking at things, actually, and so this is very interesting in terms of, oh, ARM64 on my Raspberry Pi, and its file system has completely different behavior than the file systems you have on your Mac, for instance.
A
You're on the right track; you're doing exactly the right thing in thinking about what the axes of performance evaluation are and what the sensitivity along each axis is. Repository size is clearly one. The architecture or the operating system of the computer seems like another one, particularly with Windows, where the cost to fork a process, at least at one time, was much higher on Windows than it was on Linux.
A
That's a very good insight; it certainly could matter. I had not thought of that, and I think that's a valid thing to check. I know that the CloudBees support team has published recommended guidelines, based on their experience, for what the best JVM settings are for Jenkins. So JVM parameters are a very good item to evaluate, okay.
D
Okay, and the next question I had was about the git double-fetch issue, which is another known performance issue. The fix I provided was basically a flag which avoids the second fetch. You had said that you had some additional tests to check whether that solution creates any kind of loss of information, and I've included the solution in this proposal.
D
Would
you
give
me
some
more
pointers
so
that
I
can
maybe
test
the
efficiency
of
the
solution
more
or
would
you
recommend
me
to
look
for
another
solution,
maybe
some
kind
of
an
argument
matcher
which
basically
a
class
which
matches
the
argument
like
we
have
a
clone
command
initially,
and
then
we
check,
if
that,
if
we
have
the
same
clone
commodity
of
the
same
clone
command,
then
we
would
avoid
the
texting
patch.
If
that's
not
the
case,
maybe
we
do
something
else.
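The argument-matcher idea can be sketched as a comparison of the previous and requested fetch arguments: a second fetch is redundant when its remote and refspecs exactly match the fetch that was just performed. The class and method names are assumptions for illustration, not the plugin's actual fix.

```java
import java.util.List;

// Sketch of the "argument matcher" check for the double-fetch issue:
// skip a fetch whose arguments match the fetch that just completed.
public class RedundantFetchSketch {
    /** True when repeating the fetch could not bring in anything new. */
    public static boolean isRedundant(String prevRemote, List<String> prevRefspecs,
                                      String nextRemote, List<String> nextRefspecs) {
        return prevRemote.equals(nextRemote) && prevRefspecs.equals(nextRefspecs);
    }
}
```

A narrower refspec on the second fetch would make the two argument lists differ, so the check correctly falls through to performing the fetch in that case.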
A
I'm happy to share the things that I created to do my initial testing. It's not completed, but the thing I created to do my initial testing was more about me doing interactive testing, not automated testing. I have a strong personal bias towards first exploring something interactively, and then I'll express it as a test.
A
It is a public GitHub repository that I use that defines a full and complete Jenkins with a number of interesting jobs in it, and some of those jobs are exactly to test for this case. I'll post that in the Gitter channel. That's a very good one, where we're saying, hey, look here: here is this job, this job, and this job that can be used as checks for that specific case, the double fetch.
D
Yeah, so I also wrote a micro benchmark test for the solution I implemented for this redundant fetch. I had two baseline tests, where I had one test which had a narrow refspec, and then I tested git fetch... basically, I had two git fetches in that test, one with the narrow refspec and one with the wide refspec, and after that I removed one
D
git fetch, to basically show that if we just have one git fetch, with the narrow refspec and with the wide refspec, and then compare those four tests, you see there's a reasonable increase in the performance when we remove one git fetch. So what I found out was that for three of the repositories I chose, whose sizes are basically on the order of slightly over a KB or an MB, and then I think 40 MB or 50 MB, for those three repositories...
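The four-way comparison described here ({one fetch, two fetches} x {narrow refspec, wide refspec}) can be sketched as a plain timing harness. This is a sketch under stated assumptions: `fetchOnce` is a stand-in that simulates work instead of invoking a real git fetch, and a real benchmark (such as the JMH one discussed in this session) would also need warm-up iterations and a live repository.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FetchBenchmark {
    // Stand-in for "git fetch <refspec>"; in the real benchmark this would
    // invoke the git client plugin against an actual repository.
    static void fetchOnce(String refspec) {
        // Simulate more work for a wildcard refspec, which pulls in more refs.
        long iterations = refspec.contains("*") ? 2_000_000 : 1_000_000;
        long sink = 0;
        for (long i = 0; i < iterations; i++) sink += i;
        if (sink == -1) System.out.println(); // keep the loop from being optimized away
    }

    static long timeMillis(int fetches, String refspec) {
        long start = System.nanoTime();
        for (int i = 0; i < fetches; i++) fetchOnce(refspec);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        String narrow = "+refs/heads/master:refs/remotes/origin/master";
        String wide   = "+refs/heads/*:refs/remotes/origin/*";
        Map<String, Long> results = new LinkedHashMap<>();
        results.put("1 fetch, narrow refspec", timeMillis(1, narrow));
        results.put("1 fetch, wide refspec",   timeMillis(1, wide));
        results.put("2 fetches, narrow refspec", timeMillis(2, narrow));
        results.put("2 fetches, wide refspec",   timeMillis(2, wide));
        results.forEach((label, ms) -> System.out.println(label + ": " + ms + " ms"));
    }
}
```

With this structure, the expected result is what the measurements above showed for small and medium repositories: the two-fetch rows take roughly twice as long as their one-fetch counterparts.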
D
When I removed the second git fetch, that was all there was; the execution time was reduced by 50%. So I was a little apprehensive about whether that would actually happen, because initially, when I was looking at the issue, I thought, okay, if I just remove one git fetch, I would improve the performance on the order of... basically, on the order of... but I don't understand it.
A
D
Okay, one thing I did notice as well was that for the large repository, on the order of 324 MB, the test did not give me a very remarkable difference between all the four tests. They did not give me remarkable differences, which was a little confusing to me, because I think this should not depend on the size of the repository if I'm removing one git fetch. Oh no...
A
No, it very much does depend on the size of the repository. It should, anyway. So I'm not sure I'm reading your graph correctly. So on the graph, is it that the... is the group of four bars... can you zoom closer so I can see what the axes are, or just describe it. You don't have to zoom in; just go ahead and describe it, yeah.
D
A
D
I saw the average execution time, which was approximately in seconds per operation. Okay, so I think I am going to correct myself; it's the reverse. It's the reverse! Yes, the first two tests don't have the second git fetch; they just have one git fetch command, and the last two tests have both git fetch commands. That is what is happening. Of course, the time is increased, so both of those tests would have both git fetches.
A
Got it, okay. So the way I'm interpreting the data is that the topmost group of four and the second-most group of four are probably both without a redundant fetch, and then the next group of four and the bottommost group of four are both with a redundant fetch. Yeah.
D
A
And I would have expected the results that are shown for the tiny, small, and medium-sized repositories. Let's call those the dark blue, less blue, and a little bit less blue, but not the green one. Those results seem expected. But of course, those are sizes of repositories that aren't the example for this particular... right, the forty megabyte repository. Okay, that's getting interesting, and I can't explain why the removal of the redundant fetches did not dramatically improve the... oh, no, no, wait, what did you say? Sure.
A
Can... yes, right, of course. Well, maybe let's try this, let's discuss. So fetching a forty megabyte repository is probably dominated, on the first fetch, by data transfer of the objects, right? It's getting forty megabytes of data. So it may be that we need to have you tune this benchmark, because most fetches into Jenkins git workspaces are not the first population of the workspace, and this is the first-population case, right?
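The distinction drawn here, a first fetch into an empty workspace versus a fetch into an already-populated one, matters because only the first fetch transfers the whole history. A benchmark that wants to separate the two cases could use a check like this; it is a minimal sketch (the real plugin tracks workspace state through its own APIs, not a bare directory test):

```java
import java.io.File;
import java.nio.file.Files;

public class WorkspaceState {
    // A fetch into an empty workspace must transfer every object, so its
    // cost scales with total repository size; a fetch into an already
    // populated workspace transfers only the objects that are new.
    static boolean isFirstFetch(File workspace) {
        return !new File(workspace, ".git").isDirectory();
    }

    public static void main(String[] args) throws Exception {
        File workspace = Files.createTempDirectory("ws").toFile();
        System.out.println(isFirstFetch(workspace)); // true: no .git directory yet

        new File(workspace, ".git").mkdir();
        System.out.println(isFirstFetch(workspace)); // false: workspace looks populated
    }
}
```

A benchmark tuned this way would measure the incremental case separately, which is the one most Jenkins jobs actually hit on every build after the first.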