From YouTube: End User Panel Ioannis Moutsatsos
Description
Jenkins Contributor Summit June 25, 2021 End User Panel - Ioannis Moutsatsos
A: But what we're talking about is actually outside the standard DevOps operations. Is it possible for me to share slides, or can you... yeah.
B: You can just share your screen. Okay, you should be able to do that. If not, I will fix it, but you should have permission.
A: Good, so I put these slides together just so that you have a frame of reference later on, if you want to go back and refresh your mind on some of these things that may be a little bit outside the standard realm of what we're doing with Jenkins.
A: Back in 2013 I discovered Jenkins myself. I'm a trained PhD molecular biologist, but I went back to school and got a master's in software engineering, and I was for a long time interested in software development. I'm now at an interesting intersection of medicine and data science, which makes a lot of these things really, really interesting.
A: So back in 2013 I discovered Jenkins, and I have been using it since then, but for a totally different application. A few years ago we published a paper in the scientific literature where we introduced Jenkins as a platform for scientific data and image processing applications. It has nothing to do with actually compiling code, testing code, and so on, but nonetheless it uses all of the capabilities of Jenkins.
A: So I really want to start by thanking a lot of people who have been fundamental in this process. Interestingly enough, my boss at the time was called Jeremy Jenkins. And over the years I've met many of the Jenkins contributors, very nice people in the group, like Oleg and Mark, and even Kohsuke, who visited us a few years ago.
A: And Jesse. Most importantly, my colleague Bruno Kinoshita, who is now in New Zealand and who developed some of the key plugins for this, and the participants in GSoC 2020 last year, where we developed a machine learning plugin for Jenkins.
A: So why use Jenkins for life science applications? There are a lot of standardized things that Jenkins offers that are key enablers, such as the accessibility of jobs via a web portal, the freestyle parameterized jobs, easy deployment, and the super-rich plugin ecosystem. I'm not going to read this whole list, but these are what I call the standard enablers of Jenkins that have made this possible. And the benefit this offers: life and data science pipelining really requires the integration of a lot of different utilities, applications, and custom script tools, and Jenkins is able to do all of that.
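As a rough illustration of that integration style, here is a minimal Jenkinsfile sketch of a parameterized job wrapping a custom analysis script; the script path, tool, and parameter names are invented for illustration, not taken from the talk:

```groovy
// Minimal sketch: a parameterized pipeline wrapping a custom scientific
// script. Paths, parameters, and tool names are illustrative only.
pipeline {
    agent any
    parameters {
        string(name: 'PLATE_ID', defaultValue: 'P0001',
               description: 'Assay plate to process (hypothetical)')
        choice(name: 'NORMALIZATION',
               choices: ['none', 'z-score', 'percent-of-control'],
               description: 'Normalization method (hypothetical)')
    }
    stages {
        stage('Analyze') {
            steps {
                // Run the external analysis tool reproducibly, with logs kept.
                sh "Rscript analysis/process_plate.R '${params.PLATE_ID}' '${params.NORMALIZATION}'"
            }
        }
        stage('Archive') {
            steps {
                // Preserve outputs as build artifacts for later inspection.
                archiveArtifacts artifacts: 'results/**', fingerprint: true
            }
        }
    }
}
```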
A: People can go to a Jenkins job interface and execute an entire data analysis, or data ingestion, processing, and parsing, in a very reproducible way. That leaves a really good path of what we call data provenance, where we can always determine where the data came from. And finally, through that same web portal, we're able to share this data with others and collaborate.
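One simple way to leave such a provenance trail is to have every build write a small metadata manifest alongside its results. A sketch follows, assuming the Pipeline Utility Steps plugin for `writeJSON`; the field names are an invented, not a standard, schema:

```groovy
// Sketch: record where a build's data came from as a JSON artifact.
// Field names are illustrative; adapt them to your own provenance needs.
stage('Provenance') {
    steps {
        script {
            def manifest = [
                job      : env.JOB_NAME,
                build    : env.BUILD_NUMBER,
                inputs   : params.toString(),   // the parameters that produced this data
                upstream : currentBuild.getBuildCauses('hudson.model.Cause$UpstreamCause')
            ]
            writeJSON file: 'results/provenance.json', json: manifest
        }
        archiveArtifacts artifacts: 'results/provenance.json', fingerprint: true
    }
}
```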
A: Nonetheless, there is a kind of impedance mismatch between development and operations on one side and science on the other. As a kind of funny point, I bring up this word "artifact" that we're using in Jenkins. Of course, artifact is used there with the idea of something that Jenkins creates, but for science an artifact is really a spurious observation and a bad thing, something that you do not want. So, you know, just that.
A: A really simple example of nomenclature where things are different. But let's look specifically at pipelines, jobs, and builds. For developers, we check out code from the SCM, the pipelines are more consistent and continuous, and the jobs require very few parameters.
A: The builds are almost always deleted and the artifacts are automatically tested. On the scientific side, though, there's no such thing as an SCM for data and instruments. The files are all over the place, whether on a particular instrument or on a local network drive. The pipelines are really discontinuous: they consist of an ad hoc mix of Jenkins jobs.
A: Different tasks are encapsulated in separate jobs that need to provide input and output to each other. The builds are almost never deleted, because this is really primary data that you're generating; you're not superseding old data the way you supersede old JARs or old builds. And the artifacts are inspected, annotated, and curated by the scientists, rather than handled in an automatic way.
A: Another sort of impedance mismatch here is around job configuration. For developers, we're moving more and more to pipeline as code.
A
You
know
andrew
mentioned
the
blue
ocean
project,
and
I
had
some
questions
around
its
status
because
it
really
looked
interesting
at
the
beginning
because
it
starts
approaching
the
some
of
the
requirements
that
scientists
have
around
visual
editors
for
configuring,
the
jobs,
but
I
have
tried
to
use
it,
and
I
realized
that
actually,
it's
more,
for
you
know
kind
of
the
the
built
stage
and
in
in
larger
sort
of
not
so
granular
that
it
is
useful
for
configuring
parameters
and
we
use
a
lot
of
the
freestyle
parameterized
jobs,
which
is
not
very
common
for
the
developers.
A: So what we're still missing is this configuration exploration and dependency management, understanding where these things are. What you see on the right-hand side is a kind of attempt of mine to roll my own: these are the parameters in a particular job, and they depend on each other, and they depend on Groovy scripts and Scriptler scripts that are executed as part of the job.
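Interdependent parameters like these are what the Active Choices (uno-choice) plugin, one of the plugins Bruno Kinoshita worked on, provides: a parameter's choices are computed by a Groovy script and re-evaluated when the parameters it references change. A minimal sketch, assuming that plugin is installed; the parameter names and choice lists are invented:

```groovy
// Sketch of reactive, interdependent job parameters using the
// Active Choices plugin. Names and choice lists are illustrative.
properties([
    parameters([
        [$class: 'ChoiceParameter',
         name: 'ASSAY',
         choiceType: 'PT_SINGLE_SELECT',
         script: [$class: 'GroovyScript',
             script: [classpath: [], sandbox: true,
                      script: "return ['cell-viability', 'high-content-imaging']"]]],
        // PLATE re-runs its Groovy script whenever ASSAY changes.
        [$class: 'CascadeChoiceParameter',
         name: 'PLATE',
         choiceType: 'PT_SINGLE_SELECT',
         referencedParameters: 'ASSAY',
         script: [$class: 'GroovyScript',
             script: [classpath: [], sandbox: true,
                      script: '''
                          if (ASSAY == 'cell-viability') {
                              return ['CV-001', 'CV-002']
                          }
                          return ['HCI-001', 'HCI-002']
                      ''']]]
    ])
])
```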
A: So this is our own version of trying to understand the configuration better, but it would be great if we had a better-supported tool. Search and metadata are still issues. I think in the standard version of Jenkins, searching for artifacts across different builds is still very difficult; build-level metadata is not searchable and is not generated very easily, and the same thing goes across builds.
A
I
call
it
relational
builds
where
you
know
a
downstream
build
will
depend
from
two
or
three
upstream
builds,
and
it's
very
difficult
to
sort
of
document
that,
and
it's
even
more
difficult
to
do
a
cascade,
which
we
would
like
to
do.
If
you,
if
you
delete
a
primary
artifact
on
which
a
bunch
of
analysis
are
dependent
on
downstream,
you
would
like
to
have
the
opportunity
to
at
least
identify
those
validate
them
and
delete
them.
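Jenkins does not model such relationships directly, but they can be approximated by walking the build records. A sketch against the core Jenkins model API, intended for the script console or a system Groovy step; the job name and build number are invented, and it only finds direct upstream causes, not transitive ones:

```groovy
// Sketch: list downstream builds triggered by a given upstream build,
// e.g. as candidates for review before a cascade delete.
import hudson.model.Cause
import hudson.model.Job
import jenkins.model.Jenkins

def upstreamJob = 'ingest-plate-data'   // illustrative job name
def upstreamBuildNumber = 42            // illustrative build number

def dependents = []
Jenkins.instance.getAllItems(Job).each { job ->
    job.builds.each { build ->
        def cause = build.getCause(Cause.UpstreamCause)
        if (cause?.upstreamProject == upstreamJob &&
            cause?.upstreamBuild == upstreamBuildNumber) {
            dependents << "${job.fullName}#${build.number}"
        }
    }
}
println "Builds depending on ${upstreamJob}#${upstreamBuildNumber}: ${dependents}"
```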
A: Here is a concept that is critical for what we're doing and is sorely missing from Jenkins. We call it interactive pre-builds: a lot of activity goes on before you even start the build. This has to do with the fact that starting a complicated analysis, in R, Python, image processing, whatever, requires the selection of a bunch of parameters that may or may not be appropriate for the analysis before you commit to going through a full build cycle.
A: These are some examples of the kind of artifacts we're talking about: images, scientific analyses that you visualize through graphs, even data tables and so on. All the build does at the end is archive and report these pre-build artifacts.
A
So,
for
example,
here
you
can
see
there's
a
report
with
six
different
pre-built
artifacts
that
have
are
using
different
algorithms
and
different
parameters
to
generate,
and
you
know
we
have
managed-
and
this
is
the
amazing
thing
about
jenkins-
that's
still
sort
of
it's
one
of
the
greatest
joys
to
work
with
it,
because
you
know
you
can
you
can
get
it
to
do
a
lot
of
different
things
right,
even
even
these
pre-builds
that
I
think
the
concept
is
missing
from
it
now.
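Jenkins has no first-class pre-build concept, but the pipeline `input` step can approximate the interactive part: pause, present the candidate outputs, and let the scientist pick one before anything is archived. A minimal sketch; the stage names, paths, and candidate list are invented:

```groovy
// Sketch: approximate an "interactive pre-build" with the input step.
// A human reviews candidate analyses and promotes one to a build artifact.
stage('Review pre-build artifacts') {
    steps {
        script {
            // Pause the build and ask for a decision; with a single
            // parameter, input returns the chosen value directly.
            def choice = input(
                message: 'Six candidate analyses were generated. Keep which one?',
                parameters: [
                    choice(name: 'CANDIDATE',
                           choices: ['algo-A/params-1', 'algo-A/params-2',
                                     'algo-B/params-1', 'algo-B/params-2',
                                     'algo-C/params-1', 'algo-C/params-2'],
                           description: 'Pre-build artifact to promote (hypothetical)')
                ])
            // Promote only the approved candidate.
            sh "cp -r 'prebuilds/${choice}' results/approved"
        }
    }
}
stage('Archive') {
    steps {
        archiveArtifacts artifacts: 'results/approved/**', fingerprint: true
    }
}
```

One caveat: `input` holds the running build (and, when used inside a node, an executor) while waiting for the human, which is part of why a true pre-build concept would still be valuable.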
A: Something that may not go down well with a lot of people: please don't let security kill functionality. Groovy script execution and inline JavaScript and HTML are key to the kind of things that we're doing, and we have been struggling and struggling to maintain their functionality under the present scheme of security improvements. I know that Bruno has been very good at fixing security warnings and so on, but it's just the nature of what we're doing.
A: Finally, I would like to say: we're talking about a lot of big companies using Jenkins, and about a lot of big Jenkins installations, but for the life sciences and data sciences we cannot forget how Jenkins would fit into the environment of an academic lab.
A: There is an academic lab doing some research; they need to deal with their data and their laboratory instruments, and they have one developer there. It would be great if that developer could apply, in a rather easy way, some of these kinds of jobs that we have been developing for life science integration and data science. And that's it.
A: I will leave you with a set of references, and if anyone is interested in hearing a little bit more about this, I think we have an Ignite session on applications of Jenkins in the data sciences a little bit later, where we'll go into a little more detail. And again, thank you for the opportunity to speak on this, on behalf of voices that perhaps you've never heard.
B: And thanks a lot for your feedback. If you want to do an extended session, our Jenkins Online Meetup always welcomes you. And yeah, there are a lot of good points definitely worth discussing; I especially appreciate the point about security, and, yeah, things like Blue Ocean.
B: We discussed them at a lot of the previous summits, and I think it's a really valid point from the standpoint of users who actually want to use Jenkins as a framework for use cases like bioinformatics or whatever, where you're still going to get the gains out of the box, but you want to use it as a powerful automation engine.