From YouTube: End User Panel (full session)
Description
Jenkins Contributor Summit June 25, 2021 End User Panel (full)
End user panel presentations from Andrey Babushkin, Ioannis Moutsatsos, Ivan Fernandez, and Victor Martinez
A
Okay, so today we have our first Jenkins end user panel. The idea is to have an inverted discussion: not contributors presenting to users, but the end users talking to contributors, presenting their experiences and expectations from Jenkins, and then the contributors on the call just asking questions and providing some feedback. Again, everyone is welcome to participate in this discussion, either by voice or by chat. And I suggest that we start with Andrey, because he was the first to respond. So, Andrey.
B
Yeah, hi, my name is Andrey Babushkin, and currently I work for Intel on the OpenVINO toolkit project, and we use Jenkins. We have used Jenkins since the inception of our project; as far as I remember, that's 2018, and the oldest Jenkins version I was able to find is 2.1 or so. So we have seen many updates, we've seen how JCasC was created, we've seen UI improvements.
B
I think we have had upgrade issues only once, and I can't remember when that was last. So this part of Jenkins is very, very good, and most of the issues we experience, I think, are connected with the fact that OpenVINO is not a Java project. So we don't have any Java experience, and when something goes wrong with Jenkins, or in a Jenkins pipeline, there are huge stack traces mentioning some strange concepts.
B
Deep inside the pipeline CPS code, and that's a bit confusing. Other issues we have: our pipelines are so big that we were forced to split them into a few separate jobs, because we can't just put all the stages in a single pipeline, use the parallel step, run all builds on all the Linux flavors, on Windows, and on Mac, and then run the tests. Because when you try to upload test results to a Jenkins build, there's no way to separate the different test executions in the Jenkins test report. And the other issue is that sometimes we need more powerful build dependencies than just the upstream/downstream relationship.
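The single-pipeline layout being described, with everything fanned out through the parallel step, might look like the following sketch. This is an illustration only; the agent labels (`linux`, `windows`, `mac`), stage names, and commands are assumptions, not taken from the talk:

```groovy
// Illustrative sketch only: one Declarative Pipeline fanning the build
// out over OS-labeled agents with the parallel step.
pipeline {
    agent none
    stages {
        stage('Build and test everywhere') {
            parallel {
                stage('Linux') {
                    agent { label 'linux' }
                    steps { sh 'make build test' }
                }
                stage('Windows') {
                    agent { label 'windows' }
                    steps { bat 'build.cmd && test.cmd' }
                }
                stage('macOS') {
                    agent { label 'mac' }
                    steps { sh 'make build test' }
                }
            }
        }
        stage('Publish results') {
            agent { label 'linux' }
            // Every platform feeds the same JUnit report here, which is
            // the aggregation problem described above: the classic view
            // offers no clean way to separate results per platform.
            steps { junit '**/test-results/*.xml' }
        }
    }
}
```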
B
Sometimes we want to specify that this build depends on this build and on that build, and currently we cannot do this in our multibranch pipeline. So, basically, that's all that I was thinking of for a few days after the end user / UX panel announcement. So I think that's all.
A
B
I may have seen it, but I haven't had a chance to try it.
A
Yeah, so why I'm asking is because it actually supports splitting reports by various factors and tags and programming languages, if you want. So if the user experience today is what you would like to see, maybe it could be a good reference for Andrey. I believe that the JUnit plugin currently uses GitHub issues, and Tim, who is on the call, is currently one of the maintainers of the JUnit plugin.
B
Yeah, but test reports are just one example of why we needed to split our pipeline into a few jobs. The other thing is the amount of logs we need to see. We've tried to put this into one big pipeline, but just imagine: you have parallel steps with Ubuntu, CentOS, Debian, Windows, and macOS, and inside each parallel stage...
C
So if you use the classic UI, the JUnit report will show you reports by stage and group things by stage, so you can actually have a slightly better overview.
B
No, Blue Ocean is actually better, because in each of the build jobs and test jobs we split our pipeline into a few stages, like copy artifacts, unpack artifacts, run tests, write results, something like that. And if we see this in the classic UI, we just see the latest stage, and in the latest stage...
C
I'm not talking about the stage overview or anything; I'm purely talking about the test result reports, as in: if you go to the build's /testReport, you will get, kind of... I'm just trying to pull up an example on my instance, so I can tell you exactly, rather than just going "I think it works like this" based on my memory.
A
While we're talking about that, about the job relations: what did you expect, something like an acyclic dependency graph? Or how would you like the jobs to be executed? What's your main problem with the current triggering?
B
Actually, I think... I saw a GitHub Actions pipeline recently, and in GitHub Actions we can specify stage dependencies: a stage can depend on two or more stages, so this stage will be executed only after all the stages it depends on have been executed. So, something like that.
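For reference, the GitHub Actions mechanism being described here is the `needs:` key: a job lists the jobs it depends on, and it starts only after all of them have completed. A minimal sketch with illustrative job names and commands:

```yaml
# Minimal GitHub Actions sketch of the `needs:` mechanism described
# above. Job names and commands are illustrative.
jobs:
  build-linux:
    runs-on: ubuntu-latest
    steps:
      - run: make build
  build-windows:
    runs-on: windows-latest
    steps:
      - run: build.cmd
  test:
    # Starts only after BOTH upstream jobs have completed successfully.
    needs: [build-linux, build-windows]
    runs-on: ubuntu-latest
    steps:
      - run: make test
```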
A
Personally, I think that the Jenkins pipeline engine supports it in principle, but it requires a significant rework of how our DSLs are implemented, so right now there is no way to actually implement them in Jenkins. You can just have parallel jobs, which basically all start from the very beginning. Then you could probably use the Join or Milestone plugins to actually do some dependencies, but it would be quite complicated.
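One workaround available today, sketched below with illustrative job names, is a coordinator pipeline that fans out with `parallel` and then uses the `build` step so the dependent job runs only after all of its upstreams succeed:

```groovy
// Hypothetical coordinator pipeline emulating fan-in dependencies with
// the stock `parallel` and `build` steps. Job names are illustrative.
node {
    stage('Upstream builds') {
        // Both upstream jobs run concurrently; if either fails, the
        // parallel step fails and the dependent stage never runs.
        parallel(
            linux:   { build job: 'build-linux' },
            windows: { build job: 'build-windows' }
        )
    }
    stage('Dependent job') {
        // Reached only when every upstream build succeeded, i.e. this
        // job effectively "depends on" both of the jobs above.
        build job: 'integration-tests'
    }
}
```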
B
We
actually
have
tried
to
use,
I
don't
know
the
name
of
the
plugin,
but
it
has
added
stage
like
awaits
or
something
like
that,
but
it
seems
there
was
some
bug
in
this
plugin
and
we
have
received
some
deadlocks
in
our
pipelines,
so
we
stopped
using
that,
but
actually
we
use
jenkins
not
only
for
continuous
integration
purposes.
We
run
many
many
tests
in
our
nightly
and
weekly
validation
cycles
and
our
weekly
cycle
is
about
3
000
jenkins
bills,
and
it's
quite
a
lot.
A
Okay, any questions for Andrey about use cases, or should we move on? Because we still have an opportunity to discuss particular topics, for example during the ignite talks, if someone wants to hack DAG support for pipeline in a few hours. But yeah, any questions, comments?
A
So
thanks,
andre
and
yeah,
I
think
that
we
could
try
a
deeper
dive
later
so,
for
example,
if
your
team
will
want
to
join
together
so
that
we
deep
dive
into
your
topics,
actually
I
had
an
honor
to
go
to
new
zealand
and
present
there
at
one
of
meetups
and
yeah.
I
know
that
there
is
a
lot
of
jenkins
users
there,
so
we
could
organize
something
specifically
to
the
different
use
case.
For
example,
if
you're
interested
in
jcask
and
pipeline,
you
can
invite.
B
A
Welcome to the club! Yeah, okay, thanks a lot, and let's move to Ioannis. Would you like to introduce yourself?
D
Sure. So, hello, everybody, nice being in this group. I am a life and data scientist working for a pharmaceutical company, and for a minute I wanted to sort of forget everything you know about Jenkins and actually remember everything about Jenkins.
D
But what we're talking about is actually outside the standard DevOps operations and all. Like, is it possible for me to share slides, or can you?
D
But back in 2013 I discovered Jenkins myself. I'm a trained PhD molecular biologist, but I went back to school and got a master's in software engineering, and I was for a long time interested in software development. And I'm now at an interesting intersection of medicine and data science, which makes a lot of these things really, really interesting.
D
So back in 2013 I discovered Jenkins, and I have been using it since then, but we've been using it for a totally different application. A few years ago we published this paper in the scientific literature, where we introduced Jenkins as a platform for scientific data and image processing applications. It has nothing to do with actual compilation of code, testing code, and so on, but nonetheless it uses all of the capabilities of Jenkins.
D
So I really want to start by thanking a lot of people who have been sort of fundamental in this process. And, interestingly enough, my boss at the time was called Jeremy Jenkins. You know, over the years I've met many of the Jenkins contributors, and very nice people in the group like Oleg and Marky, and even Kohsuke, who visited Novartis a few years ago.
D
Just as importantly, my colleague who is now in New Zealand, Bruno Kinoshita, who developed some of the key plugins for this, and the participants in GSoC 2020 last year, where we developed a machine learning plugin for Jenkins. So why use Jenkins for life science applications? Really, there are a lot of standardized things that Jenkins offers that are key enablers, such as the accessibility of the jobs via the web portal, the freestyle parameterized jobs, easy deployment, and, you know, the super-rich plugin ecosystem.
D
I'm not going to read this whole list, but these are what I call sort of the standard enablers of Jenkins that have made this possible. And the benefit this offers is that life and data science pipelining really requires integrating a lot of different utilities, applications, and custom script tools, and Jenkins is able to do all of that. Finally, we have developed this concept of one-page web apps on a shoestring.
D
People
can
go
to
a
jenkins
job
interface
and
be
able
to
execute
an
entire
data
analysis
or
data
ingestion
and
processing
and
parsing
in
a
very
reproducible
way.
That
leaves
a
really
good
what
we
call
data
provenance
path,
where
we
can
always
determine
where
the
data
came
from
and
finally,
through
this
similar
web
portal,
we're
able
to
to
share
this
data
with
others
and
and
and
collaborate.
D
Nonetheless,
you
know
there
is
a
kind
of
any
impedance
mismatch
between
develop
and
operations
and
science
and
just
always
as
a
kind
of
funny
point.
I
bring
this.
This
word
artifact
that
we're
using
in
gen
games
and,
of
course,
artifact
is
used
with
the
idea
of
something
that
jenkins
creates,
but
for
science
this
is
really
a
serious
observation
and
a
bad
thing,
something
that
you
do
not
want.
So
you
know
just
that.
D
It's a really kind of simple example of nomenclature where things are different. But let's look specifically at pipelines, jobs, and builds. For developers, we check out code from the SCM, the pipelines are more consistent and continuous, and the jobs require very few parameters.
D
The builds are almost always deleted, and the artifacts are automatically tested. On the scientific side, though, there's no such thing as the concept of an SCM for data and instruments. The files are all over the place, whether on a particular instrument or on a local network drive. The pipelines are really discontinuous; they consist of an ad hoc mix of Jenkins jobs.
D
Different
tasks
are
encapsulated
in
separate
jobs
that
need
to
provide
input
and
output
to
each
other.
The
builds
are
almost
never
deleted,
because
this
is
really
primary
data
that
you're
generating
it's
not
a
kind
of
a
you're,
not
superseding
all
the
data
or
old
jars
or
old,
builds,
and
the
artifacts
are
really
inspected,
annotated
and
curated
by
the
scientists,
rather
than
in
an
automatic
way.
D
You know, Andrey mentioned the Blue Ocean project, and I had some questions about its status, because it really looked interesting at the beginning: it starts approaching some of the requirements that scientists have around visual editors for configuring jobs. But I have tried to use it, and I realized that actually it's more for the build stage and works at a larger, not-so-granular level, so that it isn't useful for configuring parameters. And we use a lot of the freestyle parameterized jobs, which is not very common for developers.
D
So what we're missing, still, is sort of this configuration exploration and dependency management, understanding where these things are. What you see on the right-hand side is a kind of my attempt to roll my own: these are actually the parameters in a particular job, and they depend on each other, and they depend on Groovy scripts and Scriptler scripts that are executed as part of the job.
D
So this is sort of, you know, our own version of trying to understand the configuration better, but it would be great if we had a kind of better-supported tool. Search and metadata are still issues. I think in the standard version of Jenkins, searching for artifacts across different builds is still very difficult; build-level metadata is not searchable and is not generated very easily, and the same goes across builds.
D
I
call
it
relational
builds
where
you
know
a
downstream
build
will
depend
from
two
or
three
upstream
builds,
and
it's
very
difficult
to
sort
of
document
that,
and
it's
even
more
difficult
to
do
a
cascade,
which
we
would
like
to
do.
If
you,
if
you
delete
a
primary
artifact
on
which
a
bunch
of
analysis
are
dependent
on
stream,
you
would
like
to
have
the
opportunity
to
at
least
identify
those
validate
them
and
delete
them.
D
Here is a concept that is critical for what we're doing and is sorely missing from Jenkins. What we call it is the interactive pre-build: a lot of activity going on before you even start the build. And this has to do with the fact that starting a complicated analysis in R, Python, image processing, whatever, requires the selection of a bunch of parameters that may or may not be appropriate for the analysis, without going through a full build cycle.
D
These are some examples of the kind of artifacts we're talking about: images, scientific analyses that you visualize through graphs, and even, you know, data tables and so on. And all the build does at the end is archive and report these pre-build artifacts.
D
So, for example, here you can see there is a report with six different pre-build artifacts that were generated using different algorithms and different parameters. And we have managed, and this is the amazing thing about Jenkins, it's still sort of one of the greatest joys to work with it, because you can get it to do a lot of different things, right, even these pre-builds whose concept I think is missing from it now.
D
Something that may not go over well with a lot of people is: please don't let security block functionality. You know, Groovy script execution and inline JavaScript and HTML are key for the kind of things that we're doing, and we have been struggling and struggling to maintain their functionality under the present scheme of security improvements.
D
Finally, I would like to say that, you know, we're talking about a lot of big companies using Jenkins and about a lot of big Jenkins installations, but for the life sciences and data sciences we cannot forget how Jenkins would fit into sort of the environment of an academic lab, where there is an academic lab doing some research, they need to deal with their data and their laboratory instruments, and they do have one developer there.
D
It
would
be
great
if
that
developer
can
apply
some
of
these,
these
kind
of
jobs
that
we
are
developing
for
life,
science,
integration
and
data
science
in
a
rather
easy
way-
and
that's
it-
I
I
will
leave
you
with
a
set
of
preferences
and
and
if
anyone
is
interested
in
hearing
a
little
bit
more
about
this,
I
think
we
have
an
ignite
session
on
applications
of
jenkins
and
data
sciences.
D
A little bit later, then, we'll go into a little bit more detail on this. And again, thank you for the opportunity to speak on this, on behalf of perhaps voices that you've never heard before. So, thank you, Alex, for inviting me.
A
Yeah, and thanks a lot for the feedback. If you want to do an extended session, the Jenkins online meetup always welcomes you. And yeah, there are a lot of good points definitely worth discussing; I especially appreciate the point about security, and things like Blue Ocean.
A
We
discussed
them
a
lot
of
the
previous
summits
and
I
think
that
it's
a
really
valid
points
from
an
user
standpoint
who
actually
want
to
keep
junkies
as
a
framework
for
use
cases
like
your
informatics
or
whatever
way
you
still
don't
get
like
games
out
of
the
box.
But
you
want
to
use
a
power
assistant,
automation,
engine.
B
Yeah, I just want to add something about security. I need to make a confession, because since the beginning of our project we have used the Permissive Script Security plugin, and we use it just because we are not creating new plugins. Instead, we put all our custom functions, GitHub API integrations, GitLab API integrations, things like that, into our Jenkins shared library, and...
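The shared library approach keeps custom Groovy out of sandboxed job scripts, since code in a trusted global library runs without script approval. A hypothetical example of such a library step; the step name, endpoint, and token handling are all illustrative, not the team's actual code:

```groovy
// vars/notifyGitHub.groovy in a trusted global shared library.
// Trusted library code is not sandboxed, so nothing here queues up
// for script approval. Names and endpoint are illustrative.
def call(String repo, String sha, String state) {
    // Raw java.net usage like this would trigger approval requests
    // if it appeared inline in a sandboxed Jenkinsfile.
    def conn = new URL("https://api.github.com/repos/${repo}/statuses/${sha}").openConnection()
    conn.requestMethod = 'POST'
    conn.doOutput = true
    conn.setRequestProperty('Authorization', "token ${env.GITHUB_TOKEN}")
    conn.outputStream.withWriter { it << "{\"state\": \"${state}\"}" }
    echo "GitHub status API responded: ${conn.responseCode}"
}
```

A Jenkinsfile would then load the library with `@Library('my-shared-lib') _` and simply call `notifyGitHub('org/repo', env.GIT_COMMIT, 'success')`.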
A
I understand the point, and unfortunately we don't have Wadeck or Daniel here, so we can't discuss this topic even more. One question before we go on: how much time do you need, approximately, for the follow-up discussion? Because we have some time constraints.
A
Yeah, so basically we have 10 more minutes. So would you like to continue the feedback from Ioannis for now, or rather take it offline? Because there is a lot of feedback to discuss, and I think it would be rather feasible to have something like a one-hour session, or maybe a half-hour session, together with whomever is interested, and talk.
E
A
Okay, so let's continue, then.
E
Yeah, I want to make a point about that: they try to avoid the script security issues by using the Jenkins shared library. We have a really big shared library, and we don't need to approve any script or use that plugin for anything. We manage to do everything that we want without having to worry about the Script Security plugin, with scripts, with binaries, with other things. So I think it is not required to use the plugin.
E
You can always find some way to do the same thing in the best way, that is, while keeping the Script Security plugin in place.
A
There are some plugins, like Pipeline Utility Steps, or things like the Node API iterator; for me, for example, it was always the case when I needed to do custom scheduling.
B
There's too much code already to rewrite, right? So that's why we keep using the Permissive Script Security plugin: it's too hard for us to rewrite our old code in order not to interfere with security.
D
In our case, a lot of the scripts are part of the parameters in the UI. We're using the Active Choices plugin, which creates sort of interactive cascading parameters and creates for us these HTML and JavaScript elements that we're interested in for more interaction, introducing graphics libraries, scientific libraries, imaging, and so on. And for all of those, we need to go and approve the scripts.
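For context, the cascading behavior comes from Active Choices "reactive" parameters: each parameter is backed by a Groovy script that can read the values of the parameters it references, and on a sandboxed controller these are exactly the scripts that queue up for approval. A minimal sketch with illustrative parameter names and values:

```groovy
// Illustrative script body of an Active Choices Reactive Parameter
// named CELL_LINE, configured to reference an upstream parameter
// ASSAY. The returned list becomes the dropdown's choices.
if (ASSAY == 'imaging') {
    return ['HeLa', 'U2OS', 'A549']        // choices for imaging assays
} else if (ASSAY == 'sequencing') {
    return ['K562', 'GM12878']             // choices for sequencing runs
} else {
    return ['-- select an assay first --'] // fallback before ASSAY is set
}
```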
E
A current task that is used in all the pipelines, and if we have some regression wherever we move that stuff, we revert the current one to the previous one and fix the issue in time. We release the library between five and ten times a week, and we have managed to keep the library versioned without any issues in the three years, more or less, that we have been using it.
D
I should say that I don't think I have deactivated the Script Security plugin or used that other plugin, the permissive permissions one, whatever it is called. But we have been tinkering with what was called the OWASP renderer, or something like that, in the startup of Jenkins, so that it will allow HTML and things like that to be rendered.
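If that renderer is the OWASP Markup Formatter (the `antisamy-markup-formatter` plugin), the same setting can also be pinned at startup with Configuration as Code; a sketch, assuming that plugin is what's meant:

```yaml
# Configuration-as-Code sketch selecting the formatter's "Safe HTML"
# mode, so job and parameter descriptions may contain sanitized HTML.
# Assumes the antisamy-markup-formatter plugin is installed.
jenkins:
  markupFormatter:
    rawHtml:
      disableSyntaxHighlighting: false
```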
G
So this one on the top is, like you were saying, about visibility. I want to be the voice as well of some of the users that we have at Elastic: some of them find it really hard, when you run so many things in parallel in a pipeline, to debug what's going on. So that's one of the issues we hear about quite often: how to make this easier to debug from the console output, potentially, because sometimes it doesn't even reload correctly, and then the logs in the UI.
G
And also about usability, I think that's also a good point, the one about making life easier for the end user: how to restart a particular stage when the build, or that particular pipeline, failed. It's not working in all the cases, and that's probably one of the key areas as well: how we can make people's way of working with Jenkins pipelines easier, so that they don't really need to wait, because, as I already hear, some builds take hours.
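Worth noting alongside this: the existing Restart from Stage feature applies only to Declarative Pipelines, and stashes vanish by default when a build ends, so restarted stages often need `preserveStashes`. A sketch with illustrative stage names and commands:

```groovy
// Restart from Stage works only with Declarative Pipelines: any
// top-level stage can be restarted from the UI without rerunning the
// stages before it. preserveStashes keeps stashed files from recent
// runs so a restarted stage can still unstash the original artifacts.
pipeline {
    agent any
    options { preserveStashes(buildCount: 5) }
    stages {
        stage('Build') {
            steps {
                sh 'make build'
                stash name: 'binaries', includes: 'out/**'
            }
        }
        stage('Test') {
            steps {
                unstash 'binaries'
                sh 'make test'
            }
        }
    }
}
```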
A
Yeah, for that, one of our approaches, as I said, is that we could use the ignite talks, because we don't have so many ignite talks submitted at the moment. So we could just have your session there after the ignite talks and just do a deep dive. Oh, here's somebody coming, so he missed the most interesting part, but yeah.
A
So yeah, thanks a lot. We have something like one minute before we start the breakout rooms, and yeah, right now I'm not 100% sure how Mark configured them, so I'm trying to figure it out at the moment.
A
So I believe, let me do them in this room, but I don't see breakouts configured, to be honest. So I'll stop the recording, and then we can figure it out together if needed. So thanks a lot, and again, we will be doing another session with Elastic during the ignite talks. So thanks a lot.