From YouTube: Platform SIG 2020 06 18
Description
Jenkins Platform Special Interest Group meeting, June 18, 2020. Topics included Google Summer of Code progress on the git plugin performance improvement project.
A: Then we are delighted to have Rishabh with us to talk about his progress on the git plugin performance improvement Google Summer of Code project: interesting results, good results, and fascinating progress. So we'll give him as much time as he needs, ask questions, record notes, etc. Then Alex and I are going to talk briefly about the AdoptOpenJDK for Docker image change, work through what it means, do some brief discussion, and we'll conclude. Are there any other topics we should add to the agenda?
A: No? Okay, great. All right, so let's do the review of action items. I have the action to switch the meeting URL to use the CDF Zoom account; I've done that. Rishabh, you will notice that you can share the screen without me having to do anything. That is one of the results of switching to use the CDF Zoom account.

C: That's great!

A: I still had the open action item to open a Jenkins Enhancement Proposal for Docker operating system support. I apologize; yes, I will get that done.
A: Oh no, I take it back. The one I'm in progress on is the next one, so we still need to make some further progress on the Docker build rework PR, because this one gives us a better structure. Alex, this is the Docker manifest and parallel build changes; I haven't done anything on that yet, so nothing yet to report.
C: Okay, git plugin performance improvement. As Mark mentioned, I'm the Google Summer of Code student; Rishabh is my name. The objective of the project is simple: to improve the performance of the git plugin, and we've identified two ways to do it. The first is to use JMH, the Java Microbenchmark Harness, which is a framework that provides us a safe environment to run benchmarks.
C: So we want to use benchmarks to identify known or unknown performance issues with the existing git implementations we have, that is, the CLI git implementation and the pure Java implementation, JGit. So this is the first objective. We have already created git fetch benchmarks, we have some great insights from those benchmarks, and we are going to implement those inside the git plugin. The second is to fix the existing performance issues we have in the git plugin, and I'm going to discuss them further in the presentation.
C: So what have we done and what are we doing? The first thing we have done is to integrate the JMH modules inside the git client plugin. Basically, I've added the benchmarks to the test module of the plugin, and now the benchmarks can be run on ci.jenkins.io, which gives us a wider selection of platforms where we can run the benchmarks and have a comprehensive result profile. And how it looks: it's basically like this, a sample pipeline.
C: So we build the repository, then we check it out, we run the benchmarks, and then we have a JSON report which we can feed into a visualizer, where we can see the benchmarks visually: how things are going and how the operations are performing. So this is the first thing we've done.
C: There is a hypothesis that removing the second fetch would actually improve the git plugin's performance, so for that I initially created benchmarks with git fetch. The benchmarks were simple. The first benchmark is a baseline, a single git fetch with a narrow refspec, which basically means that it is trying to fetch one single branch, the master branch, from one single repository. The second test was also a baseline single git fetch test, but with a wider refspec.
C: That means all the branches. Then the third and fourth tests are double fetches with the same thing: a narrow refspec and a wider refspec. Even before fixing the issue, what we could see from the benchmarks was, as you can see (this is an interactive chart), that as I progress from test 1 and test 2 to test 3 and test 4 for each repository, the execution time increases as I move towards test 3 and test 4. Yes, Mark? Okay.
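The four benchmark shapes described here could be sketched roughly as follows. This is only an illustration, not the project's actual benchmark code: it assumes the JMH annotations are on the classpath (e.g. via the test harness dependency) plus a local CLI git installation, and the remote URL and helper method are made up for the example.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)
public class FetchBenchmark {

    // Hypothetical remote; the real benchmarks ran against repositories
    // of several sizes (<1 MB, 5 MB, 90 MB, 300 MB).
    static final String REMOTE = "https://github.com/example/example.git";
    static final String NARROW = "+refs/heads/master:refs/remotes/origin/master";
    static final String WIDE = "+refs/heads/*:refs/remotes/origin/*";

    Path workDir;

    @Setup
    public void init() throws IOException, InterruptedException {
        // Fresh empty repository pointing at the remote under test.
        workDir = Files.createTempDirectory("fetch-bench");
        run("git", "init");
        run("git", "remote", "add", "origin", REMOTE);
    }

    void run(String... cmd) throws IOException, InterruptedException {
        new ProcessBuilder(cmd).directory(workDir.toFile()).inheritIO().start().waitFor();
    }

    @Benchmark // test 1: baseline, single fetch, narrow refspec (master only)
    public void singleFetchNarrow() throws Exception {
        run("git", "fetch", "origin", NARROW);
    }

    @Benchmark // test 2: baseline, single fetch, wide refspec (all branches)
    public void singleFetchWide() throws Exception {
        run("git", "fetch", "origin", WIDE);
    }

    @Benchmark // test 3: the redundant double fetch, narrow refspec
    public void doubleFetchNarrow() throws Exception {
        run("git", "fetch", "origin", NARROW);
        run("git", "fetch", "origin", NARROW);
    }

    @Benchmark // test 4: the redundant double fetch, wide refspec
    public void doubleFetchWide() throws Exception {
        run("git", "fetch", "origin", WIDE);
        run("git", "fetch", "origin", WIDE);
    }
}
```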
C: Test 1, test 2, test 3, and test 4 are basically benchmarks. Test 1 and test 2 are benchmarks which measure the execution time for a single git fetch; test 3 and test 4 are benchmarks which measure the execution time for double git fetches. What you see here: the x-axis is the repository size and the y-axis is the average execution time, which increases with repository size. I have four repositories. The sizes are: repo one is less than 1 MB, the second is 5 MB, the third is 90 MB, and the fourth is 300 MB.
C: While we're iterating, the dots are basically running through the benchmarks I have done and the results of those benchmarks. So what you can see here is that once I go from tests 1 and 2 to tests 3 and 4 with repositories three and four, you can see a remarkable increase in the execution time, and with repos one and two there is not much of an increase, which gives us a hint that, with a small repository size, the incremental fetch would not add too much performance overhead.
C: We added a boolean to check that once we're using the first fetch, we will not use the second fetch. So once we did that, are we seeing any change in performance? To see that, I used profiling with Java Flight Recorder on JDK 11. I ran the Jenkins WAR with the updated git plugin, with my fix for the redundant fetch issue, on JDK 11 with JFR, Java Flight Recorder, which is a profiling tool that comes with JDK 11 and has very low performance overhead.
C: So what you can see here: I took two repositories. One is the jenkins.io repository, about 40 MB in size, and the other is the Samba git repository, which is near about 300 MB. With these two repositories I performed a simple checkout build, and what you can see is that for the smaller repository there is not much of a difference; it's basically a 10-second difference. Without the fix it's 2 minutes 55 seconds, and with the fix it's 2 minutes 46 seconds.
C: So there's a 10-second decrease in the execution time of the git fetch calls for the plugin overall. And with the second, much larger repository, 300 MB in size, there is approximately a two-minute difference once we remove the second fetch, which I think is a considerable improvement in performance, and if we increase the size of the repository I'm sure we will have much larger differences. I've actually done this twice, to confirm whether my profiling results were correct or not.
A: Okay, so then your Java Flight Recorder could not have any impact on the actual operations performed by CLI git, because it's a separate subprocess. That gives me even more confidence in your benchmark numbers and your measurements, because the profiling from the Java Flight Recorder component only pauses or impacts its own process; it can't touch the git command-line process.
C
So
so
for
this
issue,
this
is
this
is
what
we've
done.
So
the
next
step
forward
for
us
is
to
implement
the
performance
improvement
inside
the
plugins,
and
to
do
that,
we've
figured
out
two
steps
right
now.
The
first
is
to
provide
a
compatibility
switch
to
the
users.
The
switch
is
going
to
be
basically
for
users.
We
were
assuming
that
once
we
once
we
add
the
improvements.
Whatever
we
get
from
the
benchmarks,
we
had
those
improvements
inside
the
plug-in.
There
might
be
cases
where
the
users
functionality
might
be
affected.
C: Performance might be affected in some ways we did not anticipate, so we are providing a switch, and it's going to look roughly like this. It's in "Configure System" in Jenkins: inside the git plugin section you will have a checkbox which says "revert performance improvement changes", and once checked it will revert to the old behavior. So this is the first step.
C: The second step is to selectively switch between the implementations, that is, CLI git or JGit. From the benchmarks we found out that, for git fetch, the size of the repository is the biggest parameter affecting its performance. For example, one insight we have is that for a repository less than 5 MB in size, JGit is going to perform better than CLI git, and for a larger repository, CLI git performs considerably better.
C: It performs way better than JGit. So if we want to implement that inside the git plugin, we need to estimate the repository size before creating the client, and so we've had some discussions on using heuristics, like using git ls-remote to calculate the number of branches we have without cloning the repository inside the plugin. We can use a rough estimate, so that we know that maybe 50 branches means a large repository, and a smaller number means a smaller size.
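A minimal sketch of the ls-remote heuristic described above, assuming CLI git is available; the 50-branch threshold, class name, and method names are illustrative, not the plugin's actual code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class RepoSizeHeuristic {

    // Run `git ls-remote --heads <url>` and collect its output lines.
    // Needs CLI git and network access, so this part is only a sketch.
    static List<String> lsRemoteHeads(String url) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("git", "ls-remote", "--heads", url).start();
        List<String> lines = new ArrayList<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                lines.add(line);
            }
        }
        p.waitFor();
        return lines;
    }

    // Each remote branch appears as one "<sha>\trefs/heads/<name>" line.
    static long countBranches(List<String> lsRemoteLines) {
        return lsRemoteLines.stream().filter(l -> l.contains("\trefs/heads/")).count();
    }

    // Illustrative threshold: many branches hints at a large repository,
    // where the benchmarks showed CLI git outperforming JGit.
    static String chooseImplementation(long branchCount) {
        return branchCount >= 50 ? "git" : "jgit";
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
                "1a2b\tHEAD",
                "1a2b\trefs/heads/master",
                "3c4d\trefs/heads/develop");
        System.out.println(countBranches(sample));
        System.out.println(chooseImplementation(countBranches(sample)));
    }
}
```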
C: Another option could be to use the existing REST APIs exposed by GitHub or GitLab, which will tell you the size of the repository. So we could also do that. We were thinking about how to do it; currently it's in process, and any suggestions on how to estimate the repository size without cloning the repository first would be appreciated.
A: Okay, so the first cycle, then, is the redundant fetch. You've already got a target where we know that there's a significant performance improvement to be gained, and this redundancy affects both command-line git and JGit, so it's redundant in both cases. It's not just one or the other; they're both doing this redundant fetch?

C: Yes.
A: On the JMH-based benchmarking, the challenge there was that you've got to figure out, before having performed the full operation, which implementation should actually execute the operation. So before doing a fetch, you need to make a decision: shall we use CLI git or shall we use JGit?

C: Yes.
C: Okay, so yes, there is an option: the Jenkins test harness already has the JMH harness as a dependency. So there's a version you need to have in the POM for that particular dependency; I don't remember the version, I think it's 2.5x for the Jenkins test harness. Once you have that, you don't need anything else.
C: Can I actually show you how to write a benchmark? So yes, like a JUnit test, it's a little similar: you annotate it with a @Benchmark annotation, and you need a benchmark runner as well, which will identify the benchmarks and run them with some options. The options can be how you want the results (average time or throughput), how much you will warm up the benchmarks before you run them, the time unit, the JVM forks you want. All of those options are included in the runner class. So how do you run that? There's a Maven profile, jmh-benchmark, so your benchmarks will be run from that command and you will have a generated JSON report; and if you want to integrate the benchmarks, you can run them on ci.jenkins.io.
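As a sketch of the shape just described (assuming the JMH dependency, org.openjdk.jmh, is already on the classpath via the test harness; the class name and the placeholder workload are illustrative):

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.results.format.ResultFormatType;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class SampleBenchmark {

    @Benchmark
    public double work() {
        // Placeholder workload; a real benchmark would call the git operation.
        return Math.log(System.nanoTime());
    }

    public static void main(String[] args) throws Exception {
        Options options = new OptionsBuilder()
                .include(SampleBenchmark.class.getSimpleName())
                .mode(Mode.AverageTime)          // report average time per operation
                .timeUnit(TimeUnit.MILLISECONDS) // result time unit
                .warmupIterations(5)             // warm up before measuring
                .measurementIterations(5)
                .forks(2)                        // separate JVM forks
                .resultFormat(ResultFormatType.JSON)
                .result("jmh-report.json")       // JSON report to feed the visualizer
                .build();
        new Runner(options).run();
    }
}
```

With the Maven profile, these options would typically come from the runner or annotations configured in the project rather than a hand-written main method like this.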
A: Thank you. So, Rishabh, would you show the Jenkinsfile? I'm enamored with how elegant the work from the Role Strategy plugin made it for this, but then we found some interesting issues that made me beg Rishabh to do some extensions. So in the end, in the git client plugin, he's going to show us what the Jenkinsfile looks like.
C: Yes, so this is the step which enables running the benchmarks, and we provide the name of the JSON file we want as output. What was happening before was that for every pull request, for every branch, the benchmarks were running, and they have a considerable duration; they run for maybe an hour, depending on the kind of benchmarks we have. It takes time, so it's an unnecessary addition for people who do not want to test the performance.
A: See, for me, Alex, the treat here was that somebody else had taken the time. Last year the Role Strategy plugin had taken the time to create this pipeline shared library step, called runBenchmarks, and so Rishabh was able, without doing anything more than using runBenchmarks, to get it to execute on the ci.jenkins.io infrastructure. And then, for my needs, he also gets the ability to put conditionals in there, so we don't have to run it on any branches except specific targets.
A: I'm going to switch on screen share and let's take a look at the next topic. Okay, I think the next topic is around the Alpine image. Oh, I didn't take any notes. Rishabh, if you'd insert some notes here on your results, so that we've got them in the text as well; the recording of the meeting will be posted shortly after the meeting. Okay.
A: It would be great if you're willing to do it; that would be wonderful, I think. So: AdoptOpenJDK for Docker on Alpine, Debian Buster, and CentOS. Alex, this was for me an excuse to just have a conversation with you, to be sure that I've understood what we're expecting in PR 956 and what we should be reviewing.

B: Sure.
A
So
as
far
as
I
can
tell
it's
giving
us
a
nice
integrated
new
structure
where
we
say
we
start
with
the
jdk
target
is
the
first
part
of
the
directory
name,
then
the
operating
system
and
then
the
B.
What
do
you
call
it?
B
JVM,
optimization
technology,
whether
a
hotspot
or
open
j9?
Yes,
that's
great,
and
so
so
then
what
I
was
trying
to
decode
was.
How
does
that
map
to
tags
in
the
repository?
So,
for
instance,
I
was
looking
for
a
Debian
squeeze
based
version
of
this
and
didn't
detect
one.
A
If
is
that,
have
I
misunderstood,
there's
buster,
so
there's
Debian
10
Buster
in
the
slim
image,
but
there
doesn't
appear
to
be
a
Buster.
Another
is
Buster,
but
somehow
or
other
I
didn't
find
that
label.
Okay,
so
I!
Oh,
this
is
JDK.
11
I
was
looking
for
JDK
8
with
Buster.
So
what
you've
done
here
is
if
using
Java
11
I
get
the
newest
version
of
Debian,
not
the
old
version
of
Debian.
Well,.
A: And so that's this here, which in particular with Alpine is a real win for us, because it gives us a newer version of the Alpine-based operating system: instead of 3.9, it's now 3.11.

B: Correct, whatever their latest is, yeah, exactly.

A: And since the Alpine project has stopped... I guess that's the wrong way to say it: the OpenJDK project is no longer publishing updates to the Alpine image, right? It's stuck on Java 8u212.
A: Excellent, okay, thank you. All right. So one of my pleas to the viewers of the video is that this is a great excuse to do additional testing, to help us as we evaluate this. The intent here is not to alter what we're delivering in the sense of the base operating system version, but we are altering which base image we're using. So instead of using OpenJDK, we'll use AdoptOpenJDK.

B: Right, got it.

A: Okay, excellent.