From YouTube: 2021 02 25 Jenkins Contributor Summit Closing Session
Description
Includes presentations from the five tracks: Securing the Jenkins Delivery Pipeline; Containers and Platforms; Cloud Native; Events and Advocacy; and Documentation.
Also includes a demonstration of a prototype Pipeline graph visualization plugin that renders the Pipeline like Blue Ocean without requiring Blue Ocean.
A: Welcome, everyone: this is the Jenkins Contributor Summit, the closing session for February 25th, 2021. We're delighted that you're here with us; thanks very much for being part of the summit. Thank you for your participation in the tracks, and thanks especially to those who led the tracks: Oleg Nenashev, Alyssa Tong, and the other track leads. Thank you very, very much for what you've done; we're very grateful for it. So, let's take up today's agenda.
A: First, we'll invite Oleg Nenashev to discuss Securing the Jenkins Delivery Pipeline and the experiences and observations there. Then I'll take on Containers and Platforms, Carl will discuss Cloud Native, Alyssa will discuss Advocacy and Events, and then I'll take on Documentation. So let's go ahead with Oleg. I think you'll probably prefer to share your own screen for navigation purposes, so I'm going to.
B: Our track covered Jenkins delivery pipelines, in particular plugin pipelines, and also other components like Jenkins libraries. One of the main focuses during the discussion was JEP-229, created by Jesse Glick, which provides continuous delivery for plugins. We reviewed what steps are needed to get this JEP over the line, including documentation and support for manual releases.
B: There were also discussions about the schema, and about the impact on the Gradle flow, which would need a similar implementation if we wanted to adopt this across the entire plugin ecosystem. The second part was about security scanning tools: what would we like to adopt in our pipelines, and how do we adopt it? So the action plan for us is finalizing JEP-229, then the scans which have been evaluated by Daniel.
B: It will be great to finish that and make it available to developers. Same for Snyk: it's available, but we need to configure it so that maintainers can use it if they want. And we also want to evaluate other security tools available on the GitHub Actions Marketplace.
A: So can you help me, or help us, understand in a little more detail the security scanning tools for the pipeline? The Containers track talked about Snyk security scanning of Docker images, but I'm assuming that's not what this is describing. Can you give me a little more detail on that?
B: So Snyk has multiple flavors; actually, there are multiple products offered by the company, but their core product is basically security scanning and dependency analysis, including licenses, vulnerabilities, etc. And this is what the Linux Foundation offers as part of LFX. LFX is a platform for projects that offers different tools, and one of them is security scanning.
A: So it's got source code analysis, something like an advanced form of FindBugs, where it's actually looking for issues in the sources. Okay.
B: There are also quirks in the packaging system. Basically, what happens in pom.xml is that you declare a dependency on a plugin as a JAR; the maven-hpi-plugin then does some tricks to determine whether it's a plain JAR or a plugin, and does the packaging based on that. So for a security tool which is not aware of the maven-hpi-plugin, there is no way to see whether something is a real dependency or a plugin dependency on Jenkins core.
A: Got it, thank you. So, on a different topic: the CodeQL scans. I was one of the early experimenters with those and found them quite positive, because in my case they knew things specifically about the Jenkins code base, about problems that are not generic but are very much "this is a type of problem we see in Jenkins."
A: So what does it mean to widely roll those out? Does that mean they'll be more available? Will plugin maintainers still need to choose to subscribe, or is it envisioned that this would be turned on for everybody, and suddenly we've got a lot of noise? What's the idea there?
E: So even once I make the Jenkins-specific CodeQL warnings, or detectors, public, they would still need to be configured on a per-repository basis, as would any security scan that integrates with GitHub code scanning. The current approach, where the scan runs on project infrastructure and I just update the default branch's findings periodically, does not have this limitation, but obviously isn't as nice as having statuses integrated into pull requests and a scan run on every commit. So I'm not sure there's a great solution here, other than perhaps programmatically creating pull requests that add the action, and basically advertising it that way.
E: So yes, I would like to distinguish between CodeQL in general, which is just yet another static code analysis tool that you may know from lgtm.com, and the Jenkins security scan. I basically use CodeQL, rip out everything that is about general Java APIs, and add a bunch of Jenkins stuff in there.
E: You can already have CodeQL: because it's public and provided by GitHub, you just need to add an action and you have it. This is the future I envision for the Jenkins security scan based on CodeQL as well, but we're not there yet, because so far the rules are sort of in private beta, as I mentioned before.
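For reference, wiring stock CodeQL into a repository is, as E says, just a matter of adding an action. A minimal sketch of such a workflow, using the v1 actions current at the time; this runs GitHub's stock Java queries, not the private-beta Jenkins-specific rules:

```yaml
# .github/workflows/codeql.yml -- minimal sketch of enabling GitHub's
# hosted CodeQL scanning on a Java (Jenkins plugin) repository.
name: CodeQL
on: [push, pull_request]
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Initialize the CodeQL tools, selecting the Java query suite.
      - uses: github/codeql-action/init@v1
        with:
          languages: java
      # Attempt to build the project automatically (Maven is detected).
      - uses: github/codeql-action/autobuild@v1
      # Run the analysis and upload results to GitHub code scanning.
      - uses: github/codeql-action/analyze@v1
```

Findings then show up in the repository's Security tab and as pull request checks, which is the integration E describes as the big selling point.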
E: I think it would integrate with pull requests, because it would use the same APIs as the existing GitHub security scans use, and that's a big selling point for those, but I have not yet tried to integrate it to that level.
A: Oh, all right, thanks very, very much. So I think the next step, then, is that some portion of this gets incorporated into the Jenkins roadmap next week, or tomorrow-ish. Oleg, you've sort of been our lead on the roadmap; how would you see that working?
A: Okay, so mine is Containers and Platforms. The key results here were: first, we want to continue the roadmap work that we had actually outlined already in 2020. That roadmap work included delivering multi-platform Docker images, with support for arm64, s390x (mainframes), and ppc64le.
A: Now, the motivation for s390x and PowerPC may not be obvious to this group. The crucial motivation is that IBM is investing engineering talent in helping us improve our Docker images, and so we think it's a good, healthy change to say: let's find a way to do multi-platform builds. We'll get arm64 as a result of that, along with the s390x mainframe and the ppc64le platform.
A: The idea being that we want to encourage community contributors: by seeing that an image is up for adoption, they might see a benefit in being involved. As an example of the complexity here: I'm very interested in the Debian image and somewhat interested in the Alpine image, so those two get my attention, but I don't pay any attention to the CentOS image. And yet we know there are people who value that image very much, but we've not communicated, "hey, we need additional maintainers for that."
A: We've also found, thanks to Daniel for highlighting this one, that our Docker image build process is just too slow, and we've got solid agreement that we need to accelerate the speed at which we can build Docker images. That may involve optimizations in build technique; it may involve reducing the number of images we build. We're open to all sorts of things like that, but we think accelerating the Docker image build process is good for the health of the project and good for image maintenance.
A: The additional item we've added is that we would like to regularly scan Docker images for issues. I've been running a prototype with Snyk, and, okay, the number of image issues they report is somewhat dismaying: the base Debian image has a lot of issues that they report as unresolved with no mitigation, so it's going to be a little complicated for us to use that and find a way to handle the volume.
A: Then, in terms of cloud, we see that we're continuing to use the Helm charts for the Jenkins controller, and we know that we're using more and more Helm charts for Jenkins infrastructure. We think those continue, and we look forward to the cloud-native work that's going on around Kubernetes operators.
E: Yes: how would the image ownership be exposed to administrators or users? For plugins it's directly visible in Jenkins, in the plugin manager, as well as on the plugins site. But if I'm a normal Jenkins administrator and I docker pull or docker run, how would I be informed that the image is currently unmaintained?
A: That one's not clear to us. One of the ideas we discussed, but haven't had any implementation or proof of concept for, was: should we consider adding what I think is called an administrative monitor, a warning that says, "hey, you're running a Docker image, and we've checked back and see that it's marked as up for adoption"?
A: We did think that we could do badges on the READMEs pretty easily. What we weren't sure of is whether the administrative monitor idea was worth considering. I'm open to suggestions there; do you have recommendations?
F: If I can just add something: one thing we also discussed was to introduce some concept where we would just stop building an image for a specific distribution or a specific use case. So maybe, if we realize that we don't have a maintainer for a specific image, let's say for three months, and we are not merging PRs, we can just say we stop providing that image. Today there is no way to get notified about whether an image is maintained or not maintained anymore, except by no longer being able to download it.
E: I mean, you pull the latest tag anyway, so people would just end up complaining that their image variant doesn't exist yet, and we would get dozens and dozens of people reporting it. And then we're like, "well, nobody's maintaining it, so we're no longer publishing it," and they're like, "well, thank you for not telling us."
E: I don't think that's a particularly practical approach. I like the idea of the admin monitor. I've thought about that recently, in the context that it would be useful to be able to tell Jenkins how it's being run, through a startup parameter, and to integrate that into the default packaging; the images could then say, "I'm this kind of Docker image variant," so that could be combined with this. The other thought is: if I run the image, there's probably a startup shell script of some sort.
G: Yes, there are label schemas; Gareth is more the specialist on that part, but there are two standard label schemas, the OCI one and a second one. They provide a set of recommended labels, like the URL of the git repository where the source used to build a given image is located, the date and time of the build, and the commit of that specific repository, and there are some more. Here, though, I'm not speaking about a standard label but about a custom label.
G: That would apply to all the Docker images that we provide as a community, and the label could be something like io.jenkins.communitysupport equals a string, where the string could be "supported", "deprecated", or "supported, security alerts only". The idea would be to only update the label of a given image when we know there is an issue, in particular a security issue. It won't fix all the issues, but it's one of the numerous ways we have to communicate the information, in addition to what you already said.
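As a sketch of what G describes: a community image could carry both the standard OCI image-spec annotations and the custom support label. Note that the io.jenkins.communitysupport label name and its values are only a proposal from this discussion, not a published convention, and the values below are placeholders:

```dockerfile
# Hypothetical sketch, not an actual jenkinsci Dockerfile.
FROM jenkins/jenkins:lts

# Standard OCI image-spec annotations: source repo, commit, build time.
LABEL org.opencontainers.image.source="https://github.com/jenkinsci/docker" \
      org.opencontainers.image.revision="<git commit sha>" \
      org.opencontainers.image.created="2021-02-25T00:00:00Z"

# Proposed custom community-support label; possible values discussed were
# along the lines of "supported", "deprecated", "supported,security-alerts".
LABEL io.jenkins.communitysupport="supported"
```

An administrator could then read the label with `docker inspect` on a pulled image, which is one way the support status could surface without changing Jenkins itself.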
B: You'd need to update the image to check the labels, so it would need to self-check and print some warnings, maybe.
G: We already override images when there is an upstream update of the parent image. Tags are not immutable on Docker Hub, and they should not be considered immutable; it's really important to advertise that tags can be overridden for important updates. And so, since we can override a tag, we can override the label, but that's some machinery to add, which we would try first with the new images.
G: But still, there is an API on Docker Hub, so a script could list all the existing tags and update them, and produce a JSON file that maps the status of each tag. This is something that can technically be done; it's a bit of time to spend, but it's not a complex task. It's only a matter of time.
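A rough sketch of the script G describes. The Docker Hub v2 tags endpoint is real; the status strings mirror the proposed label values from the discussion, and the prefix-based mapping rules are invented placeholders for illustration:

```python
import json
import urllib.request

# Real, unauthenticated Docker Hub v2 endpoint listing tags of a repository.
HUB_TAGS_URL = "https://hub.docker.com/v2/repositories/{repo}/tags?page_size=100"


def fetch_tag_names(repo):
    """Fetch tag names for e.g. repo='jenkins/jenkins' (needs network access;
    a real script would also follow the paginated 'next' links)."""
    with urllib.request.urlopen(HUB_TAGS_URL.format(repo=repo)) as resp:
        payload = json.load(resp)
    return [entry["name"] for entry in payload["results"]]


def tag_status_map(tag_names, supported_prefixes, deprecated_prefixes):
    """Map each tag to a support status string; the rules here are
    illustrative only, the project would define its own."""
    status = {}
    for tag in tag_names:
        if any(tag.startswith(p) for p in deprecated_prefixes):
            status[tag] = "deprecated"
        elif any(tag.startswith(p) for p in supported_prefixes):
            status[tag] = "supported"
        else:
            status[tag] = "unknown"
    return status
```

The resulting dict could be dumped with `json.dump` to produce the JSON file mapping the status of each tag that G mentions.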
A: All right. So I have one more slide: things that are not on the Platforms and Containers 2021 list. I'm intentionally noting here that we've got a roadmap item for Java 15+ support, and it was not discussed. My recommendation is that we do not plan to address this during 2021, even if Java 17 releases; I think we have other things that are higher priority and higher value to the Jenkins project than Java 17 support. Now, I'm open to your comments here, because I fear this may be very controversial.
D: I was just going to say, to my knowledge it works: the Jenkins core tests pass on Java 15. There are possibly edge cases that don't work, but Pipeline works. We're starting to see people with higher-than-Java-11 versions appearing in the usage stats now. When JNA was updated in 2.274, that made Pipelines start working, and I ran the...
B: No, I think that if we can at least test it in our pipelines, it's something we should do, and once we see that it actually works, I mean Java 17 LTS, we can just mark it as preview support, similarly to how we did with Java 11.
A: Good, okay. Now, I have not been following the Java 15-and-beyond release schedule. Are there those on the call who know the estimated timeline for Java 17? Is it likely in 2021, or is it more likely 2022? Autumn of 2021? Okay, so we likely will see it this calendar year. Okay.
H: So at the Cloud Native SIG track, we discussed some of the initiatives that we're already working on within the Cloud Native SIG, which you're all welcome to join; it meets on Fridays. It's quite exciting, because we're really looking at how to refine and shape a Jenkins cloud-native future. So there are a lot of ideas, which is nice. You can go to the next slide.
H: One of the things I really liked about this track is that we had people who aren't necessarily able to make it to the regular Friday SIG meeting, and so we had different approaches, different concerns, and different suggestions and ideas about how to move forward, which I thought was really cool. So we looked back at the existing roadmap, and it became clear that some of our users struggle with the restart times of Jenkins.
H: Basically, when it fails, the restart times hurt, and there were different ways of approaching how to improve this. Pluggable storage was definitely brought up, and this is already on the roadmap. If you look at the cloud-native pluggable storage page on jenkins.io, there are actually already a lot of JEPs on this, a lot, and this discussion has been ongoing for some time, so we went over what had been discussed.
H
What
pocs
were
were
already
in
existence
and
other
ways
of
approaching
the
problem
or
a
way
to
improve,
say,
restore
time
and
one
of
the
ideas
that
gareth
has
been
looking
at
was
improving
web
the
webhooks,
but
I'm
gonna.
Let
gareth
talk
more
about
that,
because
he's
done
an
awesome
poc
on
that,
so
that
that
was
an
interesting
approach
to
that
problem.
And
I'm
relatively
new
to
this
that
particular
problem
of
plugable
storage
was
jenkins.
H
So
it
was
really
interesting
discussion
for
me
and
then
we
also
discussed
some
of
the
initiatives
that
we're
working
on
in
the
cloud
native
sig
around
looking
at,
I
guess
integrations
with
pipelines,
so
integrations
with
text
on.
So
we
were
looking
at
the
tecton,
client,
plugin
and
also
cloud
events
plug-in.
So
the
cloud
events
discussion
was
quite
interesting
because
it
very
much
it
really
brings
together
sort
of
a
way
of
having
greater
interoperability.
H
Also,
there's
a
lot
of
discussion
with
how
tecton
is
doing
cloud
events.
We
briefly
discussed
that
this
is
not
particularly
the
genius
project
but
really
interesting.
We
discussed
the
four
keys
open
source
project
which
takes
card
events
from
different
ci
cd
tools
and
will
create
a
really
nice
dashboard
for
you,
looking
at
some
key
door
metrics.
H
So
that
was
a
very
nice
use
case
of
how
you
would
use
cloud
events.
I
will
likely
bring
that
forward
into
gsoc
and
let
the
students,
especially
the
student.
If
we
have
students
working
on
the
cloud
events
plugin,
they
can
then
use
this
project
for
testing
some
of
their
work
and
seeing
immediately
the
you
know
how
their
work
is
beneficial,
so
that
was
quite
cool
and
then
yeah.
So
then
we
discussed
more
about
dna
cloud
events
and
support.
Okay,
I
will
let
gareth
speak
more,
especially
about
his
web
hook.
Work.
I: Sure, yeah. So one of the frustrations that was coming up was Jenkins losing all of its webhook events when it's being restarted. If a restart takes somewhere between five and ten minutes, which is quite a common thing, you're going to lose information, especially if your Jenkins is quite busy: builds aren't going to trigger, and then you have to go and manually kick them off again. So it's a bit annoying. We felt we could probably solve this quite easily by offloading the webhook handling to a sort of store-and-forward webhook relay, which would work with Kubernetes quite well. So I created a little POC to see if it would work, and yeah, it does work quite nicely. I'll pop the link to the POC and the demo in the chat if people are interested.
I
It's
currently
using
like
a
back
off
strategy
to
relay
webhooks.
So
if
it
can't
connect,
it
will
just
keep
retrying
and
retry
gets
longer
and
longer
up
until
like
a
max
time
that
you
can
set-
and
hopefully
jenkins,
can
recover
in
that
period
of
time.
But
we
should
be
able
to
add
a
series
of
strategies
in
there
quite
easily
one
of
them.
We're
thinking
is
to
write
the
webhook
to
a
crd
and
just
read
from
those
crds.
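The capped exponential backoff Gareth describes can be sketched as follows. His POC is written in Go; this is just a Python illustration of the arithmetic, and the parameter names are placeholders:

```python
def backoff_delays(base=1.0, factor=2.0, max_delay=60.0, max_elapsed=300.0):
    """Yield successive retry delays for relaying a webhook: the delay grows
    exponentially, capped at max_delay, and retrying stops once the total
    time waited would exceed max_elapsed (the configurable max Gareth
    mentions, within which Jenkins hopefully recovers)."""
    delay, elapsed = base, 0.0
    while elapsed + delay <= max_elapsed:
        yield delay
        elapsed += delay
        delay = min(delay * factor, max_delay)
```

A relay loop would sleep for each yielded delay between delivery attempts, and give up (or fall back to another strategy, such as persisting the webhook to a CRD) once the generator is exhausted.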
I: That would allow us to run multiple instances of the webhook relay. It's written in Go, so it's very small, with a very small footprint: it runs with very low memory and CPU overhead, so you can just pop it in the same namespace as Jenkins; it exposes its own ingress, and you can...
A: So how will that interact with the automatic webhook registration that happens in Jenkins already? Would that somehow be magically included in the automatic registration, or do I have to change my webhooks to point at this alternate endpoint?
D: There's an advanced option, if you are using that, which I think a lot of places don't. If you click Advanced, there's an "alternate host name for Jenkins" setting, so you can put in an alternate host name and then click "re-register all hooks". I'm not sure if it will delete your old ones if your host name changes, but...
J: So that's that for DevOps World's community track. Much like previous years, CloudBees has given us a track which will consist of 24 sessions, and we can do whatever we want with it.
J: Is there a specific message that we want to amplify? This is our chance to do it here, because DevOps World is our biggest event of the year. And then, what other things do we want to cover within the community? What do we want this track to look like? I'm hoping that we can come together and identify or strategize this. Also, the people who are signing up for this will be reviewing proposed talk submissions through the CFP.
J: Diversity in the Jenkins community: the goal here is to establish diversity and inclusion. We know that what we're typically seeing is that the people who are currently involved are pretty much the same usual suspects, right? Similar in nationality, gender, age, etc. So this begs the question: what can we do to bring a wider, broader range of people to contribute to the project? And we know there's just so much to do; there are so many ways to participate.
J: And so far, the plan here is: increase the outreach efforts to bring more people to Jenkins; reinforce Google Summer of Code, which I think is doing excellently year by year; mentor the She Code Africa contribution event in April; and highlight the people of the Jenkins project. The thought there is that there are so many people doing great things, contributing in great ways, but we're not really highlighting them.
B: Maybe one question: for 2021, would we like to organize any specific hackathons or community events?
A: Good point, yes, and I think we absolutely should. It feels like we've got Hacktoberfest, which we can predict will arrive. And we had great results last year with the UI/UX hackfest; that was in, what was it, May?
H: Yeah, at cdCon. It's just an idea for the moment, but it would actually be so fun. I wouldn't want to exclude other projects, but it would be really fun to do a hackathon around the time of cdCon with Jenkins and Tekton, because there was such a fun discussion in the cloud-native track. Basically, I realized with the Tekton Client plugin that one of the things we're trying to do with Jenkins is actually to run Tekton pipelines, at least partially.
K: Yeah, if the idea is outreach efforts to bring people to Jenkins, and you want diversity, look for them where they are. I'm based in South Florida, so there are a lot of Latinos and Hispanics here: hang out with the universities in that area. Same thing in Detroit, right, or New Orleans.
H: Yes, I think Damien is entirely correct, and thank you for saying that, Damien; I think that's wonderful. Certainly, I've been very involved with codebar. I've been involved in the London area, because I live in London, but codebar is actually a global organization, so we have a lot of chapters around the world, in different countries and cities. That is an interesting place to start to engage. And certainly, I would say, in almost any large city there are a lot of tech meetup groups, like a ton, and people really are always trying to learn, so it is good to do outreach. It doesn't just have to be a Jenkins outreach group or a CI/CD outreach group; certainly consider, on a broader level, just developers who are trying to learn, and bring Jenkins to them. That is a great idea.
A: Okay, next topic, then: documentation. Documentation was a reminder for me of how worldwide our organization is. We intentionally did two tracks, the east track and the west track, because the west track was in the middle of the night for the east track, and it worked quite well, so thanks very much to the participants there. First step: we want to continue the 2020 roadmap work.
A: We've had over 600 plugins migrate their documentation, and that's great, but we have over eighteen hundred, so we've still got quite a ways to go on plugin doc migration. Likewise, the wiki document migration will continue. The pace is relatively slow there, because the work requires skill in assessing the accuracy of wiki documentation that in many cases could be as much as 10 or 12 years old.
A
We
know
that
we
want
to
continue
terminology
cleanup.
We've
got
deprecated
terms
that
are
used
in
many
places
in
user,
visible
strings
that
we'd
like
to
replace
and
that's
actually
a
good
candidate
as
a
project
as
part
of
the
chico
africa
effort
in
gen
in
in
april
terminology.
Cleanup
would
be
a
good
one
for
that.
Likewise,
cleanup
of
the
screenshots
that
are
on
the
jenkins.io
site.
Right
now,
we've
got
screenshots
that
are
based
on
jenkins
versions
prior
to
2.27.1.
A: Where we're getting with search: we used to search with an embedded version of Elasticsearch inside the plugin site API. Gavin, in the last few days, has switched it over to use an Algolia-based search engine that is providing the search results for us. The improvement is nice. It's still being worked on; there are still some surprises, some things that we're having to work out, but it looks very positive.
A
The
intent
is
that
olivier
and
gavin
will
work
together
to
register
us
for
the
algolia
open
source
program
and
we'll
use
the
plug-in
sites
at
site
as
an
experiment
to
prepare
for
the
eventual
use
of
that
same
indexing
technology
on
the
www.jenkins.io
site
next
or
any
questions.
So
far.
Sorry,
I
should
have
asked
for
questions
see
if
anybody
had
concerns
or
issues
they
wanted
to
raise.
A: We need to know whether we have the right structure for our documentation. So what we're going to do over the course of the next several months, in the office hours for the Documentation special interest group, is look at where this content and that content would go. Jonathan Morris had started the process for us, and the thousand-plus pages that we had have been initially reviewed; we'll do a more detailed review and work through that. Now, one more piece of that: we've currently got a Confluence wiki.
A: So one of the ideas that was discussed was: what if we batch-transform those wiki pages to AsciiDoc, host them inside a GitHub repository, and then redirect the wiki to that site? The idea is that this would let us incrementally and stepwise use those AsciiDoc files, eventually, as the basis for content on www.jenkins.io.
A: Your observation is correct. Right now, all of the content is found through Google searches; there isn't a site search for www.jenkins.io at all. We envision that being there, but right now the solution for typical users is Google search, or Bing search, or the search engine of your choice.
B: Finally stopping Confluence would be a great thing, especially if you're able not only to migrate the documentation but actually to renew or remove obsolete documentation, because many pages are just not relevant anymore. So even as you do such a migration, maybe review pages and remove the ones we do not want to keep.
D: Cool. So this was based on a hack project Cliff Meyers did about four months ago: he lifted the graph component out to a standalone view on the job. So it's called the Pipeline Graph View plugin. I think there's a screenshot in here, but it's all based off of this.
D
So
I've
created
three
sample
jobs
with
three
different
sort
of
pipelines.
We've
got
a
like
a
very
empty
pipeline.
That's
just
got
a
few
stages,
so
you'll
see
here,
they're,
just
completely
empty
kind
of
standardised,
looking
pipeline.
If
we
go
in
here
onto
a
build,
I've
installed
the
blue
ocean
plug-in
just
so.
I
can
compare
it
back
to
the
regular
one,
but
you
go
here
pipeline
graph,
so
you've
got
the
whole
blue
ocean
graph.
Here
you
go
to
a
failed,
build
with
a
few
different
varieties.
D
It's
got
so
if
we
go
back
here,
you'll
see
it's
got
a
variety
of
different
stages,
a
unstable
success,
eric
in
court
and
a
full-on
failure.
D
D
And
then-
and
you
see
it's
mostly
working
the
only
bit-
that's
not
working
is-
I
haven't
quite
managed
to
identify
the
second
nested
stage
properly,
but
apart
from
that
skip
parallel
parallel
nested
stages
label
on
the
branch-
and
this
is
this-
is
a
this-
is
a
regular
problem
in
blue
ocean.
If
you've
ever
tried
to
use
matrix,
it's
unreadable
without
hovering
over
it,
which
is
probably
quite
an
easy
fix,
but
another
problem
with
blue
ocean
is
that
this
graph
is
actually
in
a
different
repo
in
a
different
package.
D
So
even
if
someone
were
to
try
and
fix
this
they'd
have
to
try
and
change
the
repo
another
repo
and
try
and
get
someone
to
release
that
package
and
then
get
it
into
bluemotion,
so
very
high
barrier
to
entry,
so
just
on
the
code
so
cliff
has
just
lifted
this
out
of
that
package,
he's
converted
it
to
typescript
from
javascript
and
from
jsx
to
tsx,
so
very
minimal
changes
just
ripping
stuff
out.
That's
not
needed.
D: I've lifted some very minimal pieces from Blue Ocean: the pipeline node graph visitor, just to recreate the graph, plus the minimum supporting code needed by the node graph visitor, with some unneeded code removed. Then there's just a simple action, which pulls in the icon and has a web method to load the graph; it's just a GET request which loads the JSON. And then there's some very gnarly code to convert the pipeline node graph visitor's output: the response has a list of parents, but the graph wants a list of children, so you have to reverse the graph and rejoin it. It's pretty nasty code; it's probably very hacky, but it mostly works. And then there are just a couple of sample pipelines in the tests, and some tests to make sure it's all building properly.
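The parents-to-children conversion Tim mentions amounts to reversing the edge direction of the graph. A Python sketch of the idea (the real code is TypeScript, and the node shape here is a simplified stand-in for the actual API response):

```python
def reverse_parent_graph(nodes):
    """nodes: list of {'id': ..., 'parents': [...]} records, as in a response
    that lists each node's parents. Returns a children-adjacency map,
    which is the direction a top-down graph renderer wants."""
    # Start every node with an empty child list so leaves are present too.
    children = {node["id"]: [] for node in nodes}
    for node in nodes:
        for parent in node["parents"]:
            # Each "node -> parent" edge becomes a "parent -> node" edge.
            children[parent].append(node["id"])
    return children
```

The "rejoin" step in the plugin then walks this children map to rebuild the renderable stage/branch structure.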
C: Could we have something at the job level? When you use the Pipeline Stage View plugin, for example, it shows at the job level, so you can see multiple builds going, or at least see the current build. Maybe we could have something at the top level that just embeds the graph of the last build, or something like that.
D: Yeah, it'd probably be a separate component, just a simplified graph, kind of like the existing Pipeline Stage View thing but not as complex and not as heavy; that doesn't perform very well at all, it's far too heavy. Just a much nicer one, probably without log support, which is a lot lighter.
D: Yeah, the plan was just to look at polling while the build is running; then, once it's complete, you never need to re-render it, and possibly even store the state in a file on disk, because you never need to recalculate it once the job has completed, either.
A: Yeah, so historical graphs don't have to be recomputed; I mean, the job is done, it's not going to change. Now, you indicated the build view would be done without the log. Is that because Pipeline Stage View's log rendering is expensive, or painful, or not terribly useful?
D: It's very slow if you ever load any logs. A lot of people use Pipeline Stage View because it's good to be able to see a combination of history and timings across builds on average, but it also pulls in the logs and just a whole bunch of stuff. It's very slow until it's cached, a really poor user experience, but it kind of works.
G: I think this will be really rewarding for introducing new contributors, because it's visual. I don't know the amount of knowledge required to put your hands on such a thing, but it's TypeScript, which, let's say, the usual contributor might not know, or might not be expert in. But technically, and in terms of the feedback loop, it could be really rewarding and interesting to call for help on that part. What do you think? It's only an idea, but I'd really like your advice.
D: Yeah, and the bigger part is where you take it next: do you put in SCM information? Do you want a new logs viewer? And, like the suggestion before, recreating a better job widget to visualize the jobs over time. So this one's not too far away from an MVP, but is it this again, or is it another one where you extend it with some of the other suggestions?
A: Okay. So, on coordinating that in the UX SIG and encouraging people to come join the UX SIG and get involved: is the repository public, Tim, so that they could already start exploring it, or is it not yet ready?
D: So I've pushed everything to my fork of Cliff's repo. I plan to probably host it, maybe next week; yeah, probably host it next week and release it, if it's nearly ready. Hopefully, we'll see if I can fix the last couple of minor issues today, if I have time. But yeah, for now it's all on my fork.
D: So these here, these are just the stage names, if that's what you're asking about.
D: Yeah, it doesn't care that they're not unique.
D: They do get different IDs in the graph, so if you inspect the metadata there are different IDs that are used, but apart from that, no, they're not unique.
A: Any other topics to review here? I guess I have one shameless plug: Jenkins 2.277.1 will release March 10th; that's the expectation, anyway. We have about two weeks to do verification of the release candidate, and Tim has published the release candidate. Please take it up, test it, and explore it. This is a major change, right: we've got the XStream unfork, we've got the Acegi-to-Spring-Security improvements, we've got the jQuery update, and we've got tables-to-divs in this release. This is a major, major release, and a great chance to help us test it.
A: All right, thanks, everybody. The recording will be available separately. Thank you for joining the contributor summit. I will send a retrospective survey to everyone who registered (we had 69 or so register) and encourage you to complete it. That survey will go out likely early next week. We'd like to try this again, we think, but we want to learn how to do it better. Thanks a bunch.