From YouTube: GSoC 2020 - Coding Phase 2 demos, Part I (Jul 29, 2020)
Description
At this meeting, GSoC students in the Jenkins project will present their projects. Each student will do a quick project overview and a live demo of the current project status.
Agenda:
* GSoC 2020 Introduction by Oleg Nenashev
* Git Plugin Performance Improvements by Rishabh Budhouliya
* GitHub Checks API for Jenkins Plugins by Kezhi Xiong
* External Fingerprint Storage by Sumit Sarin
Full presentation abstracts and links can be found here: https://docs.google.com/document/d/1F9JVUEQyTL_JI8-_WPV-vMbuX94ZENjffsVlPOptPKE/edit#heading=h.ibikyjoj0fur
A: Just to introduce Google Summer of Code: Google Summer of Code is the world's biggest open source mentorship program. It has thousands of students each year, and the Jenkins project is proud to participate.
A: So yeah, that's basically the summary. Just to clarify, this year we have two organizations participating in GSoC. One is Jenkins, the umbrella organization for the Jenkins and Jenkins X projects, but our own umbrella organization, the Continuous Delivery Foundation, also participates in Google Summer of Code: there are projects for Spinnaker and Screwdriver. And if you go through the list, even the Linux Foundation participates in GSoC, so there are a lot of organizations this year.
A: In our case, we present the projects in the Jenkins organization, and you can find the list here. There is the custom Jenkins distribution build service, the machine learning plugin, Git plugin performance improvements, YAML support for Jenkins Windows services, the GitHub Checks API, external fingerprint storage, and also enhancements and additions for Jenkins X. These are the projects we have right now. We are three months into GSoC: we had one month of community bonding and two months of coding, and basically all projects are ready to be presented.
A: We have great demos to show. If you're interested in GSoC, we have public mailing lists, we have Gitter, and we have lots of regular office hours during the summer time frame. Usually we do them on demand if somebody is interested, but if you want to participate, we are happy to host them. Also, every project has its own channels. How to find them: you can go to our projects page, and there you can find all the information about the Google Summer of Code projects.
A: So, for example, if you're interested in the custom Jenkins distribution build service, you go here and you can find project details, references to materials and the current phase, and also communication channels. All our projects would appreciate feedback and evaluation by users, so if you're interested in them, please use these channels to contact the teams.
A: Okay, what else do we have? For Jenkins X: Jenkins X is separate, it's in its own community channel, so if you're interested in that, please join their Slack; it's not on Gitter like the majority of our communication channels. Okay, before we start with the demos, I would like to thank all the participants in GSoC: all the students, mentors, org admins, and also all the reviewers and other community members who participated this year, because we've got a lot of feedback from reviews, for example Jenkins core reviews.
A: We also got a lot of feedback on the developer mailing lists, and hopefully we will get a lot of feedback from users. So, thanks to everyone who participates in GSoC this year. Let's start with the demos. Today we have three demos: the Git plugin performance improvements, the GitHub Checks API, and external fingerprint storage. The next three demos will happen tomorrow at the same time, so if you're interested to know more about the other projects, please join us tomorrow.
A: For each demo we will basically have a small introduction, a demo, and then discussion and Q&A. So if you have any questions, please feel free to ask. We are doing this meeting on Zoom, so basically everyone can unmute themselves and ask questions. And that's it: if you watch this recording and you want to ask any questions or find out more, please use our Gitter channel or the project channels, which will be communicated during the presentations as well.
A: Okay then, let's proceed with the phase one demos. I'll just stop sharing my screen, and the first presentation is the Git plugin performance improvements. Are you ready?
B: I hope you all can see my screen. (Yes, we can.) Okay, welcome everyone. This is the phase two review for the Git plugin performance improvement project. I am Rishabh Budhouliya.
B: A brief summary of what we have done in the project: the singular aim of the project is to improve the performance of the git plugin. The essence of phase one was to differentiate the performance between the two git implementations we have inside the git plugin, which are git and JGit. We used micro-benchmarking principles to do that, and we used JMH as the framework; it provides the environment to design, implement and analyze benchmarks. So we implemented a benchmarking module inside the git client plugin.
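The warm-up/measurement pattern that JMH applies to Java benchmarks can be sketched in miniature. This is an illustrative Python model of the principle, not the actual benchmark module from the git-client-plugin:

```python
import time

def benchmark(fn, warmup=5, iterations=20):
    """Micro-benchmark fn: run warm-up iterations first (discarded, so
    caches and runtime optimizations settle), then average the measured
    iterations -- the same idea JMH automates for Java code."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return sum(samples) / len(samples)

# Example workload standing in for a git operation under test.
avg = benchmark(lambda: sum(range(10_000)))
print(f"average execution time: {avg:.6f} s")
```

In JMH the same separation is expressed with warm-up and measurement iteration annotations; the point is that only the post-warm-up samples are reported.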
B
To
do
that
now,
one
of
the
major
experiments
we
did
was
to
compare
the
performance
of
get
fetch
as
the
gate
operation
for
git
and
j
get,
and
what
we
found
out
was
that
there
is
a
strong
correlation
between
the
performance
or
get
fetch
with
the
size
of
a
repository.
And
so
what
was
not?
That
obvious
in
this
in
the
results,
was
that
jet's
nature
of
performance
changes
after
a
certain
size
of
repository.
B
So
so
we
found
out
that
during
the
phase
one
and
we
also
fixed
the
double
fetch
issue
in
the
checkout
step
for
the
get
plugin
now.
The
second
phase
was
about
this
was
about
implementing
the
insights
we've
gained
from
the
benchmarks
inside
the
git
plugin,
and
to
do
that,
we've
created
a
new
functionality
called
the
git
tool
chooser,
which
is
basically
it's
going
to
recommend
the
optimal
gate
implementation.
It's
going
to
try
to
recommend
the
optimal
gate,
implementation
for
a
particular
remote
repository.
B
The
second
thing
we
wanted
to
do
was
to
expand
the
scope
of
benchmarking.
We
are
doing
for
multiple
repository
parameters
like
branches
commits
or
tags,
and
we
wanted
to
see
the
consistency
of
our
results
across
multiple
platforms,
so
I'll
start
with
the
git
tool
chooser.
As I've explained, it's basically going to recommend a git implementation which is optimized on the basis of the repository the plugin is using. What does it need to do that?
B: In your Jenkins instance you may have a branch source plugin, like GitHub, GitLab, Bitbucket or Gitea, or you can have a multi-branch project. If you have any of them, you can use this functionality to improve performance. How? This is a two-part answer. The first part: from the insight we've gained from the benchmarks, we have a size rule.
B: Above a certain size, we know which implementation is going to perform better than the other; that is the first part. The second part is the architecture of the class: if you have a multi-branch project within the Jenkins instance, we can use the cache stored in the workspace to estimate the size of the repository and then recommend the optimal git tool, which is the implementation.
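The size rule can be sketched as a simple decision function. The threshold, units and function name here are placeholders for illustration; the talk does not state the exact cut-off the git tool chooser ships with:

```python
def recommend_git_tool(estimated_size_kib, threshold_kib=5_000_000):
    """Recommend a git implementation from an estimated repository size.

    Small repositories favour JGit (no process fork per operation); past
    some size, command-line git outperforms it. The threshold here is a
    made-up placeholder, not the value used by the plugin.
    """
    if estimated_size_kib is None:
        # No workspace cache and no hosting-API data: keep the user's choice.
        return "user-default"
    return "jgit" if estimated_size_kib < threshold_kib else "git"

print(recommend_git_tool(200_000))      # small repo -> "jgit"
print(recommend_git_tool(10_000_000))   # large repo -> "git"
```

The size estimate itself would come from the workspace cache or, via the extension point mentioned later, from the hosting provider's REST API.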
B: I'd like to show you how that is going to happen. This feature has not been released yet, so this demo is on my local machine.
I created two projects. I cannot show you a live demo, because profiling to see the performance results would take time. So I created two projects, and in both projects, as a user, I have chosen JGit as the implementation I want to use for the repository, which is Ruby, around 400-500 MB.
B
So
now
what
is
happening?
What
is
the
difference
between
the
projects
in
this
project?
I'm
not
using
the
get
tool.
Chooser,
it's
not
there
and
for
the
second
one
we're
using
the
get
tool
chooser.
So
now,
what
is
the
difference
in
terms
of
the
expectation
for
user?
B: What kind of difference are we seeing in performance? I profiled this Jenkins instance using Java Flight Recorder; I attached it to the Jenkins instance. This is the performance thread for the project where we don't have the git tool chooser, and what you see here is the thread execution for git fetch. It is taking around five minutes to execute that step, which is the majority of what the checkout step takes.
B: With the introduction of the git tool chooser, what you'll see is that the fetch is going to take just a minute, less than two minutes. This is what is going to happen if we include this functionality within the git plugin, and this will happen even if you chose JGit as the implementation to perform the git operations.
B: I think this is what the git tool chooser wants to do. Now, there are some challenges which we've faced and are still facing. The first: we've discussed this, but we still want to decide whether to give the user an option to opt in to this feature at the global configuration level, or at a much tighter scope, at the project level.
B: The second is that, since we depend on other plugins to get the size, we have exposed an extension point which, upon implementation, can communicate with the REST APIs of those providers. We need to implement that to have support across GitHub, GitLab, Gitea and Bitbucket.
B
The
other
challenge
we
have
is
that
jk
doesn't
support
lfs
checkout
in
shadow
checkout,
so
we
need
to
make
sure
that
we
don't
recommend
something
which
would
break
existing
use
cases
now.
The
second
part
of
the
proj
of
this
phase,
the
progress
is
that
we
wanted
to
expand
the
benchmarking
experiments
we
were
doing
so.
The
first
thing
we
did
was
that,
as
of
now,
we've
mostly
tested
any
kind
of
git
operations
performance
with
the
size
of
the
repository
in
these
experiments.
B: What we see is similar, in the sense that JGit's nature is similar when we talk about git fetch's performance against the variation in repository size: JGit changes its nature after a certain increase in the number of branches, as you can see here. But we can also see that, for fewer than 100 branches, the performance overhead is kind of negligible, because the execution time we are measuring here is in milliseconds per operation.
B
So,
in
terms
of
the
whole
plugins
performance,
it
would
not
make
much
difference
when
we
are
talking
about
branches
less
than
100
branches,
if
you're
talking
about
more
than
that,
it
would
still
at
some
point,
maybe
at
least
half
of
a
second
possibly
so
so
we're
not
thinking
of
using
this
as
a
parameter
to
gain
any
actionable
insight
now.
The
second
is
with
the
number
of
commits,
so
what
we
can
see
is
there
the
the
nature
is
different
for
both
of
the
implementations
jake,
it
is
jacket
or
gate.
B: JGit is performing better than git there, though, so that is something to note. The third experiment was with tags. With tags, what we see is that the correlation factor, quantitatively the amount by which an increase in the number of tags affects the performance of git fetch, is much higher for both implementations than it is for branches or commits.
B: For a thousand tags or more, there is almost half a second added to the git fetch operation, so we would like to add this to the current tool we have. We're not sure how we're going to add it right now, because we need to check whether the repository size experiments are significant enough that we don't need to include this parameter, or whether we should. That is something we have to explore.
B: These are the results with those parameters. Another experiment we did was to check whether the benchmark results we've gained from a single platform are consistent across multiple platforms. We needed to see that, and that's important for us. So we compared the performance of a git fetch operation using a 400 MB repository, and the platforms we used are Windows, FreeBSD 12, and an IBM s390x workstation.
B: The most important observation here, and I'd like you all to concentrate on the second graph, is the red line. This red line marks the difference in performance between JGit and git, and if you observe, this line remains almost constant across all the platforms, whether it's FreeBSD, an IBM workstation, or Windows.
B: That makes us conclude that if we had based our benchmark results on, let's say, a Linux instance, our estimation would not vary: our recommendation would not vary across multiple platforms, which is a great thing. The next phase: the most important thing for us is to release the features we've added in phase one and phase two, and that includes solving the current challenges we have.
B
We
need
more
test
cases
and
more
support
from
other
plugins.
We
need
to
implement
those
extensions
extension
points
we've
provided
apart
from
that
we'd
like
to
explore
other
areas
of
git
plug-in
to
improve
the
performance
if
we
can
find
and
if
we
cannot-
and
we
have
time
we
might
implement
git
clone
inside
git
plugin.
Currently
we
do
a
get
in
it,
plus
git
fetch
step
instead
of
git
cloud.
So
we
might
look
into
that.
We
haven't
discussed
much
about
it.
We
would
right
now,
after
the
start
of
this
phase.
B: So yes, that is it from my side. Any questions?
C: Yes, this one: you said you're comparing branches and tags, and you said one of them has a lot more impact on the performance, but it wasn't clear to me which one.
B: With tags we are seeing a greater impact, and here's how we can see that. The y-axis is the execution time for git fetch, in milliseconds per operation. If you look at branches, for a practical number of branches, say a hundred, or fewer than 500, you're not seeing much of a performance overhead; it's less than a quarter of a second. But with tags, if we increase the tags past a point, we see an increase of half a second or maybe more.
B: To be more sure about it, I would actually like to calculate it. In this research I've used a factor called the Pearson correlation coefficient, which quantifies the relationship, if there's a linear relationship, between the two parameters we're discussing here, the first being the tags and the second being the git fetch performance. It would quantify the relationship between them, and then we might be able to say more confidently what kind of impact it is showing.
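For reference, the Pearson correlation coefficient mentioned here can be computed directly from two samples. The tag counts and fetch times below are made-up illustrative data, not the project's measurements:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples:
    covariance of x and y divided by the product of their standard
    deviations; +1 means a perfect positive linear relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: number of tags vs. git fetch time in milliseconds.
tags = [10, 100, 500, 1000, 2000]
fetch_ms = [120, 150, 260, 480, 900]
r = pearson(tags, fetch_ms)
print(round(r, 3))
```

A coefficient near 1 would support the claim that fetch time grows roughly linearly with the number of tags; values near 0 would suggest no linear relationship.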
D: And I think it's maybe important to note that those compare against the impact of size, which was from the earlier phase. I guess it's not a comparison with size, but size has been the primary thing Rishabh has been looking at, and this is looking at additional factors. And in terms of platforms, primary development has been on Linux and Mac, so those other platforms were to also make sure we would see the same results, because we were seeing those on Mac and Linux already.
B: We haven't compared git clone performance versus git fetch, but actually that's an interesting thing we could do. Right now we were just thinking that we say we clone a repository, but we actually perform a git init plus git fetch there, so that's something I would explore. I haven't compared their performance.
E: We have anecdotal information from one or more bug reports in JIRA which claim that the choice to use git init plus git fetch is actually less efficient than using git clone. Now, the benchmarking that I did two or three years ago on that bug report did not support the assertion the bug report was making, but we have users who say clone is faster than init plus fetch.
E: So, Rishabh, to Oleg's earlier question on the release plan: we certainly will do a release including the changes; we're excited by them and looking forward to them. Expect that a portion of the changes will probably release within the next week or two, and the full set shortly, if not by the end of the project then shortly thereafter.
A
It
okay,
there
is
no
other
questions
thanks
a
lot
for
the
presentation
and
I
suggest
to
move
on
so
the
next
presenter
is
casual
and
he
will
present
github
checks.
Api
for
jenkins,
plugins.
F: Hello everyone, I'm Kezhi. I'm going to talk about the GitHub Checks API plugin; my mentors are Ullrich and Tim. First, what we have added since phase one: we have a general API, now hosted in the checks-api plugin, and we also added an implementation for the GitHub checks API, hosted in the github-checks plugin. We have now released both of the plugins.
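The split between a general API and a GitHub-specific implementation can be sketched as an extension-point pattern. All class and method names below are illustrative stand-ins, not the plugins' actual Java types:

```python
class ChecksPublisher:
    """General API, as in the checks-api plugin: consumers (e.g. the
    warnings plugin) publish checks without knowing the SCM platform."""
    def publish(self, conclusion: str, summary: str) -> str:
        raise NotImplementedError

class GitHubChecksPublisher(ChecksPublisher):
    """Concrete implementation, as in the github-checks plugin."""
    def publish(self, conclusion, summary):
        return f"github check run: {conclusion} - {summary}"

class NullPublisher(ChecksPublisher):
    """Fallback when no implementation applies: publishing is a no-op."""
    def publish(self, conclusion, summary):
        return "skipped"

def find_publisher(scm: str) -> ChecksPublisher:
    # In Jenkins the concrete publisher is discovered via an extension
    # point based on the job's SCM source; this lookup is a toy version.
    return GitHubChecksPublisher() if scm == "github" else NullPublisher()

print(find_publisher("github").publish("success", "No new issues"))
```

The design choice this models: consumer plugins depend only on the general API, so new SCM platforms can be supported later by adding implementations, without touching the consumers.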
F: In phase two we actually used the checks API in practice. First we used it in the warnings plugin, to report the quality gate. As you can see, there are many green checks and a red X here; they represent the quality gates from the different tools, and here are the messages, like the new issues, or no new issues in total. If you want to see more details, we'll show you statistics about the issues by severity. The next thing is annotations.
F: We have already merged this feature in the warnings plugin. If you want to try it, you just need to update the warnings plugin to 8.4.0 and also install the checks-api plugin and the github-checks plugin, and then just use the warnings plugin in a project that uses GitHub branches, maybe a multi-branch project or a GitHub organization project.
F
But
if
you
feel
I'll
just
feel
terrible
about
this
feature
and
if
you
feel
terrible
about
to
see
so
many
warnings
or
issues
for
your
code,
you
can
definitely
disable
it
and
you
can
skip
the
public
checks
just
like
other
options,
and
you
can
also
skip
it
in
in
the
pipeline
script
in
default.
The
way
enable
this
feature-
and
the
next
part
is
the
code
coverage
api
plugin.
F
We
use
that
to
first
to
report
the
coverage
trend.
So,
in
the
message
part
you'll
see
like
the
nine
branch.
Next,
the
line
coverage
against
the
target
branch.
Normally
it's
the
master
branch
and
you'll
see
the
branch
coverage
against
the
not
successful,
build,
and
there
will
be
some
links
to
the
reference
build
and
also
you
have
a
coverage
healthy
score.
You
you
can
control
this
score
by
setting
the
threshold
for
the
coverage
when
one
configure
the
the
plugin-
and
we
also
add
some
details
about
different
coverages,
like
report
a
group
but
most
useful.
F: Here is a short message, and in the details those targets link directly to the reference build, and here is a link to the coverage action page. The Coverage API reports coverage in a recursive way, but we believe it's too complicated to use such a recursive style in the GitHub UI; there would be too many of those reports. And this is the reference build.
F: Let's talk about what comes later. I want to first talk about the plan for phase three, where we will add pipeline support, and we will also add rerun requests through the checks API, and some other tweaks for this plugin.
A: Yeah, for that we already have a plugin which is based on comments; there is a GitHub comment plugin, if I recall correctly, but that plugin isn't very active at the moment. If you plan something more complicated, it would be awesome. And for the record, even this demo looks great; I'm looking forward to updating my instances.
A: Yeah, thanks to Tim, we already started a discussion about adopting this feature on ci.jenkins.io, so this will be soon available to Jenkins plugin developers and contributors.
A: Hopefully. Thanks to Ullrich and Tim for working on the pipeline library patches. It was a long road to get this pull request merged, but hopefully we will get it over the line, so that we have something to show to Jenkins contributors as well.
H: Oh, and the other thing: what pipeline support means here is that there will be things like steps that users can use to add their own checks, so inside of their pipeline or their pipeline library they can easily interact with the checks API.
A: Yeah, warnings-ng is our key to all static analysis features, so just by supporting it you support a huge number of use cases right away. And there is code coverage.
A: Oh, for what it's worth, Kezhi has already covered it, and the phase one blog post is out, right? So if you open the blog post, you can see there is a sample of how to do that, but there is some optimization that of course would be helpful.
B: Thanks everybody for joining this presentation. I'll be presenting today the external fingerprint storage project, which is one of the projects under GSoC and Jenkins this year, and I'm very glad to be a part of it. I want to extend my thanks to all the mentors, Andrey and Mike; they've been awesome in helping me out with this project. So I'll begin this presentation; we have a number of topics on the agenda.
B
I
started
with
a
small
personal
introduction,
so
I'm
sumit
I'm
one
of
those
two
and
I'm
the
student
for
this
project
and
I'm
currently
pursuing
a
bachelor's
in
instrumentation
and
control
engineering.
I
started
contributing
to
jenkins
in
december
2019
and
I
started
with
the
fingerprints
engine
and
that's
why
it
led
me
to
you
know
being
part
of
this
project
so
I'll
just
do
a
quick
phase.
One
recap,
because
I
think
there
are
new
people
also,
and
that
would
help
everybody
get
familiar
with
what
exactly
fingerprints
are
so
file.
B: File fingerprinting inside Jenkins is a way to track which version of a file or build is being used inside the Jenkins ecosystem. As a small example, say team A builds a.jar and team B builds b.jar, and b.jar has a dependency on a.jar. Team B finds that there's some issue in b.jar, so team A needs to fix it, and now they need to figure out which particular version of a.jar they're using.
B
So
fingerprinting
engine
allows
this
version
tracking
to
happen
across
jobs
and
bits
you
know,
so
you
can
basically
fingerprint
your
artifacts
or
files.
Anything
that's
related
to
these.
These
artifacts
that
are
being
created
by
builds
right,
so
I'll
just
show
a
small
example
to
show
exactly
how
this
ui
inside
fingerprint
exists
right.
So
I
have
over
here.
I
have
two
jobs
right,
a
and
b,
and
what
b
does
is
it
copies
the
artifact
that
a
produces
right?
So
if
I
just
trigger
a
build
or
for
a.
B
And
I
I
look,
I
can
see
here
that
I
can
go
to
see
fingerprints
right
and
it
has
it's
producing
the
artifact
or
txt,
and
I
can
see
that
its
usage
has
been
in
job
is
build
number
three
right
and
if
I
trigger
the
you
know
a
build
for
b
and
I
go
to
see
fingerprints,
I
can
see
that
it's,
it
has
a
dot
excuse.
Original
owner
was
the
was
created
by
job
a's,
build
three,
and
I
can
see
all
the
versions
where
this
particular
artifact
was
used.
B: So that's just a small intro to the fingerprinting engine inside Jenkins.
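The tracking described above can be modeled in a few lines. Jenkins fingerprints are MD5 checksums of files; this toy store records which (job, build) pairs touched each checksum. The class and job names are illustrative, not Jenkins APIs:

```python
import hashlib
from collections import defaultdict

class FingerprintStore:
    """Toy model of Jenkins fingerprinting: an MD5 checksum identifies a
    file's exact contents, and the store records every (job, build) that
    produced or used it."""
    def __init__(self):
        self.usages = defaultdict(list)

    @staticmethod
    def fingerprint(data: bytes) -> str:
        return hashlib.md5(data).hexdigest()  # Jenkins uses MD5 checksums

    def record(self, data: bytes, job: str, build: int) -> str:
        fp = self.fingerprint(data)
        self.usages[fp].append((job, build))
        return fp

store = FingerprintStore()
artifact = b"contents of a.jar"
fp = store.record(artifact, "team-a", 3)   # team A's build produces a.jar
store.record(artifact, "team-b", 7)        # team B's build consumes it
print(store.usages[fp])
```

Because the checksum depends only on the bytes, any build anywhere in the instance that touches the same file maps to the same fingerprint, which is what makes the cross-job version tracking work.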
We saw the UI. Now, the disadvantage of the current fingerprint engine is that it's basically storing these fingerprint files inside local storage, and as we move towards cloud-native Jenkins, we want to externalize the storage of fingerprints.
B
So
the
main
idea
behind
this
project
is
to
was,
to
you
know,
provide
an
api
that
can
allow
extern
the
plug-ins
to
come
in
and
they
can
support.
You
know
different
types
of
storage,
plug-ins
like
a
radius,
plug-in
or
mysql
fingerprint
source
plug-in,
and
these
fingerprints
can
then
be
stored
inside
these
instances
and
basically
the
dependence
on
the
disk
storage
of
jenkins.
Lessons
right
and
you
know
so.
B: We built the Redis fingerprint storage plugin in phase one, and we created that API in Jenkins core and released it in 2.242, and we have a JEP for it, JEP-226, where all the design decisions are listed. So what did we do this phase? One of the stories we targeted in this phase was fingerprint cleanup.
B: What used to happen with local storage is that sometimes builds get deleted from Jenkins, and if a fingerprint does not have any pointer to any build, it does not make sense to store it; we need to delete that fingerprint, because it's just occupying extra space. So there is a periodic job that runs on Jenkins which cleans up these build-less fingerprints. But that capability was not exposed to the external storages. Now this feature is implemented: we have introduced new methods in the API for plugin developers, so the plugin has to implement "iterate and clean up fingerprints", and it can iterate over its fingerprints.
B: This method will be called by Jenkins core, and again it's up to the plugin to clean these fingerprints; we have provided a "clean fingerprint" method it can call for cleaning a given fingerprint. We released this feature in Jenkins 2.246, actually 2.248, because that build had some problems, so it was in 2.248. Fingerprint cleanup and this particular API were then consumed by the Redis fingerprint storage plugin.
B
This
is
the
reference
implementation
that
we
work
on
simultaneously,
so
inside
the
the
plug-in
we
actually
use
cursor.
So,
basically,
now
we
need,
to
you
know
crawl,
the
entire
fingerprint
database
inside
the
radius.
So
you
know
we
used
curses
because
did
we
get
an
added
advantage
that
they
don't
block?
B
So
it's
not
a
blocking
operation
and
you
know
that
it's
better
than
actually
are
you
doing
something
like
a
fetch
all
so
so
that's
how
we
implemented
cleanup
inside
reddish
fingerprints
and
also
we
gave
the
users
the
you
know,
feature
to
disable
fingerprint
cleanup,
because
since
these
fingerprints
are
now
in
an
external
storage
and
external
storages,
are
you
know
a
lot
of
times
many
they're
very
cheap,
so
it
makes
sense.
You
know
to
actually
not
have
an
extra
performance
overhead.
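The cursor-based crawl can be modeled like this. Note that real Redis SCAN cursors are opaque tokens with weaker ordering guarantees, not plain offsets; this simplified sketch only shows the iterate-in-pages pattern that avoids blocking the server the way a fetch-all would:

```python
def scan(store, cursor, count=2):
    """Minimal model of Redis SCAN: return a page of keys plus the next
    cursor; a returned cursor of 0 means the iteration is complete."""
    keys = sorted(store)
    page = keys[cursor:cursor + count]
    next_cursor = cursor + count if cursor + count < len(keys) else 0
    return next_cursor, page

store = {f"fingerprint:{i}": i for i in range(5)}
cursor, seen = 0, []
while True:
    # Each call does a small bounded amount of work, so other clients of
    # the server are never starved -- unlike KEYS *, which blocks.
    cursor, page = scan(store, cursor)
    seen.extend(page)
    if cursor == 0:
        break
print(len(seen))
```

With redis-py the equivalent loop is typically written with `scan_iter()`, which hides the cursor bookkeeping.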
B
So
so
it's
now
up
to
the
users
to
actually
you
know
they
can,
if
they
want,
they
can
receive
cleaner,
fingerprint
cleanup
it's
up
to
them.
So
fingerprint
cleanup
was
one
of
the
stories
we
targeted.
Another
story
we
targeted
was
fingerprint
migration.
So
earlier
with
the
redis
plug-in,
what
happens
was
what
happened
was
inside,
in
fact
with
any
storage,
plug-in
or
whatsoever.
B
So
the
old
fingerprints
that
are
already
in
the
system,
and
then
you
know
you,
you
go
ahead
and
you
install
the
redistinguished
storage
plug-in
what
happens
to
this
old
fingerprint
so
earlier
they
used
to
remain
on
the
system,
and
that
was
a
drawback.
Now
we
have
implemented
migration.
How
we've
done
it
is?
Basically,
we
have
implemented
a
kind
of
lazy
migration,
so
whenever
these
fingerprints
are
used,
we
transfer
them
to
the
new
external
storage.
B
So
you
know
we
don't
create
huge
performance
bottlenecks
where
we
are
you
know
taking
on
along
or
we're
transferring
all
the
fingerprints
from
the
local
source
to
the
external
storage
at
one
go.
So
that's
fingerprint
migration.
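The lazy migration described above can be sketched as a storage wrapper. This is an illustrative model of the move-on-first-access idea, not the plugin's actual code:

```python
class LazyMigratingStorage:
    """Toy model of lazy fingerprint migration: each fingerprint is
    copied from the old local storage to the new external storage only
    when it is actually touched, so there is no big up-front transfer."""
    def __init__(self, local, external):
        self.local, self.external = local, external

    def load(self, fp_id):
        if fp_id in self.external:           # already migrated
            return self.external[fp_id]
        fp = self.local.pop(fp_id, None)
        if fp is not None:                   # migrate on first access
            self.external[fp_id] = fp
        return fp

local = {"md5-a": "fingerprint A", "md5-b": "fingerprint B"}
store = LazyMigratingStorage(local, external={})
store.load("md5-a")                          # only md5-a moves
print(sorted(store.external), sorted(store.local))
```

The trade-off this models: migration cost is spread across normal usage instead of one long blocking pass, at the price of the two storages coexisting until every old fingerprint has been touched.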
It's not yet released; it's still under review in Jenkins core. And then there was the fingerprint storage descriptor. What happened earlier with the Redis plugin?
B: Basically, we have now introduced a fingerprint storage descriptor, which allows the plugins to be configured from a drop-down. Earlier, as soon as the plugin was installed, the storage was changed by default, so there was no option to toggle between storages. Now, with the drop-down, you can actually choose.
B: You can even have multiple storage plugins installed, but you can choose the one that you want. That was some refactoring that we did, and it was released in 2.248. We also improved the testing for the Redis plugin: we introduced connection tests, authorization tests and web UI tests, and we ensured that configuration-as-code is also supported, so you can use JCasC to configure the plugin, and we introduced those tests as well.
B
You
know
so,
as
I
said,
we
clean
up
api
and
storage.
Descriptor
was
released
into
it
for
eight
and
the
plugins
point
one
zero
point.
One
rc2
release
has
also
happened
so
now
you
can
directly
install
the
program
from
the
update
center
right,
so
so
yeah
we
have
now.
We
now
have
the
plugin
on
the
plugins.jenkins.io
also
and
yeah
so,
and
we've
had
two
rc
releases,
so
yeah,
that's
so
please
I
would
recommend
everybody
to
go
ahead
check
this
plugin
out.
B
Let
us
know
you
know
if
you
can
face
any
bugs
any
issues
and
next,
so
I
now
move
on
to
the
demo.
So
whatever
I
talked
about
you
know
so
how
how
the
so
that
we
can
see
how
it
happens
right.
So
so
what
I'll
do
is
quickly
I'll,
create
a
new
item
so
I'll
create
a
job.
It's
called
demo
I'll,
make
it
a
freestyle
project
and
I'll
add
a
build
step
to
execute.
Shell
is
an
echo
I.
B
And
then
I'll
add
a
post
build
up
action
for
recording
fingerprints
so
for
the
demo.txt
file,
I'll
hit,
apply
and
I'll
just
save
right.
So
now
I
have
this
job
right
demo.
So
at
the
moment
I
don't
have
an
external
storage
configured.
So
this
is
the
local
storage
right.
So
if
I
start
a
build
for
this
and
just
a
quick
question,
you
can
see
the
screen
also
right
awesome.
B
So
let's
go
ahead
and
see
where
the
single
print
is
right.
So
right
now
in
my
fingerprints
folder,
I
have
two
fingerprints.
So
let's
see
this
right.
So
this
is
the
demo
fingerprint
that
just
got
created
and
we
can
see
that
it
was
used
in
build
one
for
demo
right.
So
now
I'll
take
you
to
the
configuration.
B
So
if
you
go
to
the
configuration
page
for
jenkins,
I
have
the
plugin
already
installed
so
inside
the
fingerprints
you
know
so
the
first.
So
one
of
the
implementation
I
talked
about
right
was
this
descriptor
that
we
made
so
now
latest
fingerprint
source
can
be
selected
right
from
this
menu
and
before
I
actually
configure
this
I'll
just
start
a
local
redis
server.
On
my
machine
right.
B
Right
so
I
was
over
running
here
and
I'll
just
start:
a
command
line
interface
to
the
server.
So
if
I
see
which
fingerprints
I
have
so
it's
empty
right
now,
right
and
now,
if
I
do
a
tester
disconnection,
I
get
a
success
rate.
So
now
I
can
go
ahead
and
hit
apply
and
hit
save.
So
now
you
know
the
external
fingerprint
storage
is
configured.
So
ideally
what
should
happen
right?
What
we
want
to
happen?
So
if
we
may,
if
we
go
back
here,
I
can
still
see
this
fingerprint
right.
B: So I get build two, and if I quickly check the fingerprints here, everything is working fine: builds one and two have used this particular fingerprint. And if I go here, you'll notice that the fingerprint file got deleted; I have just one fingerprint now, which is the one from earlier. And if I query the server, I now have an entry for this fingerprint: if I just do a GET, I can see this fingerprint in the Redis server.
B
So
this
was
what
exactly
I
talked
about
when
I
mentioned
migration
that
we
have
implemented.
Third
thing
was
cleanup
right,
so
if
I
go
back
to
the
configuration
page
so
yeah,
so
this
is
the
option,
for
you
know
disabling
things.
So
at
the
moment
fingerprint
cleanup
is
disabled,
so
I'll
just
go
ahead
and
enable
it
and
I'll
hit
apply
and,
let's
see
hey
so
at
the
moment,
no
cleanup
should
happen
because
you
know
that
fingerprint
has
two
builds
associated
with
it.
B
So what I'll do is I'll just delete these builds, okay, and I'll delete build number one also.
B
Now if I go back, there's no build associated with that fingerprint, and if I go ahead and query, it's gone. So fingerprint cleanup happens. Just a small side note: fingerprint cleanup normally happens daily, once a day, but for demo purposes I have decreased that interval, so everything now happens every 10 seconds. That's why it happened so quickly. So that's the fingerprint cleanup API that I talked about in my presentation.
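The cleanup pass described above can be modelled in a few lines. This is a hedged sketch of the idea only, not the plugin's real implementation: a fingerprint is deleted once none of the builds recorded in its usages still exist. The data shapes are assumptions.

```python
def cleanup(fingerprints, existing_builds):
    """Delete every fingerprint none of whose recorded usages still
    matches an existing build (sketch of the periodic cleanup pass)."""
    for md5, fp in list(fingerprints.items()):
        still_used = any(
            build in existing_builds.get(job, set())
            for job, builds in fp["usages"].items()
            for build in builds
        )
        if not still_used:
            del fingerprints[md5]

# Mirror the demo: two builds reference the fingerprint, so cleanup
# leaves it alone; once both builds are deleted, cleanup removes it.
fps = {"abc123": {"usages": {"demo": [1, 2]}}}
builds = {"demo": {1, 2}}
cleanup(fps, builds)
print("abc123" in fps)   # True -- builds still exist, nothing deleted
builds["demo"].clear()   # delete builds one and two, as in the demo
cleanup(fps, builds)
print(fps)               # {} -- fingerprint cleaned up
```

In the real plugin this pass runs on a schedule (daily by default, shortened to 10 seconds for the demo) rather than being invoked by hand.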
B
Yeah, so that was about what we did this phase. The next step is working on a new reference implementation. As you can guess, we're going with Postgres this time, and there is a new set of challenges that come with Postgres, because basically, until now, we store these fingerprints as blobs, and that's easier than a relational database, and we want to decouple this.
B
So basically, what we are trying to do is define a certain schema for these fingerprints and store them in a relational database. What this will allow is, one, that you can use this Postgres plugin, and also, for new plugin developers who want to use a relational database, they can then use this reference plugin as a base to build more plugins.
B
So that's our idea for the Postgres plugin. Some things that are farther away, maybe, are tracing. Basically, this plugin and the API are made in such a way that multiple Jenkins instances can be configured to use a single storage.
B
So that is something that might be worth exploring further down the line. So yeah, that's about it from my side, and before I start the Q&A, I just have some links on the last page, if anybody wants to check them out. So I'll open up the discussion for Q&A.
A
So I think that it's a great improvement to the pluggable storage system, because when we talk to developers, people automatically say they don't use fingerprints. But when we look at usage statistics, many people actually have file fingerprints and credentials fingerprints enabled, and I believe that with better storage and user experience we can actually provide greater traceability and observability features to Jenkins.
E
So, Sumit, are there any things that you have learned from this that we should apply in general to the externalization of other storage components as well? Certainly, there are lots of places where Jenkins stores things where we would consider doing externalization. Are there any things that you want to share? Thankfully, you've got Oleg as your mentor, so he's had lots of experience in that space.
B
Right, so yeah, with cloud native, I think this is an area where a lot of externalization stories are actively happening, and this is one of them. I think one facet of the answer is that, yes, we figured out how to make these APIs, and as we develop more plugins and add more features,
B
We
realize
that
how
well
or
how
bad
our
original
api
was.
So
I
think
that
api
can
act
as
a
you
know,
a
reference
to
the
future
externalization
stories,
but
I
also
think
that
another
facet
is
that
you
know
all
these
stories
are
unique
in
their
own
sense.
Some
of
these
stories,
you
know,
are,
are
more
difficult
to
implement
with
you
know
that
certain
consoles,
if
you
go
to
configurations
you
need
them
at
startup.
So
that's
another
challenge.
I
think
all
of
them
have
separate
challenges
associated
with
them.
B
But
yes,
I
I
think.
As
far
as
learning
goes
yeah,
I
think
we
made
us
a
decent
api
and
I
think
you
know
time
will
tell
how
you
know
if
it's
holds
well
or
not.
A
Yeah, it's on my list. I actually started updating the Cloud Native SIG materials. We just restarted it in May, but right after restarting, we went on a kind of summer break, although we have a few meetings planned for August, so stay tuned. All these materials will be updated, for example for configurations. Now I would rather say that we have Jenkins Configuration as Code; I'm not sure whether we really want to invest in pluggable configuration storage.
A
It's
a
subject
for
the
discussion,
but
other
stories
still
need
to
be
implemented
and
for
me,
fingerprints
is
actually
a
great
story
because,
firstly,
it's
isolated,
so
it
can
be
done
in
a
feasible
amount
of
time,
like
sumit
demonstrated
during
this
project.
It
still
provides
us
a
lot
of
insights
and
exp
experience
how
it
could
be
done
as
plugins
with
database
with
api
changes,
so
architecturally
wise.
I
think
that
this
project
is
already
a
total
success
and
yeah
thanks
to
sumit.
We
already
have
everything
landed
in
the
jig
score.
A
So
now
it's
a
matter
of
reference
implementation
and
amount
of
additional
features
we
could
get
out
of
it
because,
for
example,
querying
fingerprints
for
data
like
let's
say
acquiring
by
timestamps,
squaring
by
particular
events.
B
So yeah, these fingerprints: you can add a facet to them, and once there is a facet, it can decide if it wants to block the deletion of the fingerprint. If that happens, then even cleanup won't delete it. So if you have such a facet that is blocking the deletion, that is one way to ensure that the fingerprint never gets deleted.
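The facet veto described above can be sketched like this. The facet class and method names here are illustrative assumptions, not the actual Jenkins `FingerprintFacet` API; the point is only that the cleanup pass consults facets before deleting.

```python
class KeepForeverFacet:
    """Hypothetical facet that vetoes deletion of its fingerprint."""
    def is_deletion_blocked(self):
        return True

def cleanup(fingerprints):
    """Cleanup pass that skips any fingerprint carrying a blocking facet,
    even when the fingerprint has no remaining build usages."""
    for md5, fp in list(fingerprints.items()):
        blocked = any(f.is_deletion_blocked() for f in fp.get("facets", []))
        if not fp["usages"] and not blocked:
            del fingerprints[md5]

# Two unused fingerprints; only the one without a blocking facet is removed.
fps = {
    "aaa": {"usages": [], "facets": [KeepForeverFacet()]},
    "bbb": {"usages": [], "facets": []},
}
cleanup(fps)
print(sorted(fps))   # ['aaa'] -- the blocked fingerprint survives
```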
A
If not, thanks to everyone, and we finished this meeting in time. So, just to repeat what we discussed at the beginning of the meetup: if you have any questions, we have a Jenkins Gitter channel.
A
Questions
looks
like
not
so
then
join
us
tomorrow.
I
will
have
another
session
with
three
presentations
and
yeah
thanks
to
all
students,
mentors
and
the
other
contributors
who
work
on
jsoc,
it's
just
the
middle
of
the
project,
but
we
can
already
see
great
demos
by
all
the
students
and
it's
a
pleasure
to
see
how
the
project
evolves.
This.