From YouTube: Containers and Platforms Track 2021 02 24
Description
Jenkins Contributor Summit Containers and Platforms Track Feb 24, 2021.
See https://docs.google.com/document/d/1KDieVAgbeo9xTNVnOWYo0igJBy_8Qj2yCoMu4cRR4nc/edit?usp=sharing for notes taken during the session
B: I know we briefly talked about that when there was the whole rate-limits issue and the questions around Docker Hub, and things like that.
A: Okay, a good topic, and great that we've got Olivier here. He and I had a recent conversation about possibly using multiple container registries. So that's the idea. The question is the concept and use of multiple registries, strengths and weaknesses. Because I assume that Podman can use an image from Docker Hub as well, can't they?
B: Yep, it's already registered by default. When you go to pull down an image, you can choose whether it's searching docker.io or quay.io.
A: I see, okay. Olivier, any topics that you'd like to add beyond this current list?
A: Okay, all right. In terms of ordering, I would like to be sure we stay absolutely within two hours, and I'm willing to consider even less than that, but let's put the highest-priority things first. So for me this one, Docker image support for multiple platforms, is quite high priority, because I'm interested in arm64 and Jim's interested in s390.
A
I
think
testing
and
validation
are
both
high
value
because
of
their
relationship
to
image
and
delivering
images
at
all.
So
I
propose
we
put
those
two
pretty
high
on
the
list.
Any
objections,
okay,
great
all
right,
so
I'm
going
to
move
them
to
the
top
of
the
heap.
This
maintainers
topic.
I
would
like
to
be
sure
we
get
to
it.
I'm
less
concerned.
C
Personally
would
be
interested
to
to
to
briefly
talk
about
publishing
images,
because
I
mean
I
think
it
can
be.
We
can
quickly
talk
about
that.
Okay,
I
don't
think
it's
a
big
big
change,
but
I
just
want
to
I
mean
we
are
clear
there:
okay,.
A: Great. So, first topic: Docker image support for multiple platforms. I think the crucial thing here is that we need, or we have, equipment to do the builds. We have the equipment.
C
For
ci
jenkins,
o
is
not
configured
to
push
to
the,
I
would
say,
prediction
organization.
The
reason
to
that
is
because
seattle
jenkins
io,
is
a
public
instance,
and
we
don't
want
to
take
the
risk
to
be.
C
I
mean
to
have
zero
day
whatever,
so
we
use
a
different
instance
named
trust.ci,
which
is
only
available
from
a
private
location,
so
basically
for
the
harm
64,
it's
pretty
simple,
because
we
just
have
to
copy
paste
the
configuration
to
have
them
on
amazon,
which
is
fine
regarding
the
s
390,
I'm
not
sure
if
we
just
reduce
the
same
cluster.
C
Maybe
maybe
we
could
move
that
to
infrared
ci,
because
damien
did
some
work
there
to
to
build
images
and
to
test
images.
So
I
mean
you
work
on
the
shard
library
that
build
docker
images,
test,
docker
images
and
so
on
so
and
it's
an
infrared
ci
sorry
configure
to
use
that,
but
the
specificity
of
infrared
tr
right
now
is
it's
running
on
kubernetes,
so
which
means
that
we
would
have
to
put
more
configuration
to
provision
instances.
C: Just to clarify, the Jenkins running on the Kubernetes cluster is the same Jenkins as everything else. The specificity is that we have access to the Kubernetes cluster, so we can schedule pods, which is faster than provisioning a Windows machine. So that's why, basically, we prefer to build images using Kubernetes pods. But obviously that does not work for a different architecture, because the cluster does not provide nodes for those architectures.
D
It's
just
easier
to
provide
the
configuration
yeah
s.
The
s390x
is
having
official
support
for
humanity
since
1.20.
I
don't
know
for
irm
on
ppc
irm
64..
It
should
work,
but
I'm
not
completely
convinced.
The
question
is
more:
is
it
possible
to
add
custom
static
nodes
to
the
existing
azure
managed
cluster.
D
The
the
proposal
area
and
if
you
want,
I
can't
start
the
topic
here
with
an
issue,
but
the
idea
will
be
to
have
a
kubernetes
cluster
but
a
specif
with
for
specific
architecture,
maybe
a
second
one.
So
from
jenkins
point
of
view,
it's
just
kubernetes
and
pods
templates.
That's
the
only
resource
we
have
to
manage
and
on
the
infrastructure
side
we
should
be
able
to
provide
kubernetes
static
nodes
directly.
C: Also on that topic, I have three concerns. First, we have to double-check that AKS itself supports that, because AKS is the specific managed cluster that we are using on Azure. So that's the first limitation. And if AKS supports it, we also have to be sure that the version that we are using supports it. So to me it sounds easier to just configure Jenkins with the right SSH key, and to SSH in.
D
Okay,
I
would
so
the
idea
of
having
a
kubernetes
cluster.
Maybe
it
it's
not
the
aks,
it
will
be
cool
because
it
will
avoid
spreading
the
maintenance
of
the
kubernetes
control
plane,
but
the
hypothesis
of
at
one
moment
on
time
we
can
reach
a
nokia
s
or
aks
cluster
and
that
we
can
add
these
specific
nodes.
D
That
is
that
that
cluster
could
be
reused
across
our
different
instances.
Let's
say
we
could
have
a
namespace
for
infront
and
namespace
for
trustee,
for
instance.
The
idea
is
that
both
jenkins
should
be
able
to
reach
the
cluster
and
build
on
their
own
namespace
and
reusing
the
specificity
of
the
architecture
by
using
annotation
and
toleration
on
the
template
of
each,
but
we
will
have
only
one
kubernetes
setup
and
which
each
jenkins
instance
could
connect
to.
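The setup Damien describes could be sketched roughly like this agent pod template. The namespace, taint key and values are illustrative placeholders, not the project's actual configuration:

```yaml
# Hypothetical agent pod template for a shared arch-specific node pool.
# Each Jenkins instance schedules into its own namespace; the toleration
# lets the pod land on tainted arm64 nodes that other workloads avoid.
apiVersion: v1
kind: Pod
metadata:
  namespace: infra-ci          # trusted.ci would use its own namespace
spec:
  nodeSelector:
    kubernetes.io/arch: arm64  # standard label set by the kubelet
  tolerations:
    - key: "arch"              # illustrative taint placed on the special nodes
      operator: "Equal"
      value: "arm64"
      effect: "NoSchedule"
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest
```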
B: I was going to say: I haven't used AKS, but internally at IBM I've been testing multi-arch clusters on Kubernetes for a long time; that's what I do for the majority of my day job. So I can tell you it works well. Again, I don't know about AKS.
B: The only thing that you need to be careful of is, if you have generic definitions, when you go to do your deployments you need to make sure that whatever image you are using is multi-arch, or that your registry is aware and knows to pull down the architecture-specific containers. That's where a lot of people run into trouble, yeah.
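One way to check Jim's point, that an image you depend on actually publishes an entry for every architecture you target, is to inspect its manifest list. A sketch, assuming a Docker CLI with manifest support and network access to the registry:

```shell
# Print the per-platform entries behind a multi-arch tag; each os/arch pair
# the registry serves shows up here. If the target arch is missing, pulls on
# that node fail or fall back to a single-arch image.
docker manifest inspect jenkins/jenkins:lts | grep -E '"(architecture|os)"'
```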
C: What Jim is suggesting is a really good suggestion, but, as Damien mentioned, right now we have pods that only work on Windows, for instance, so we have a specific pod annotation that just explicitly says so.
B: I know in Kubernetes, you mentioned labels for Windows and such, there's also, I think it's still in beta, an arch selector you can utilize too. I can look up the specific tag, or whatever you want to call it. As for the software you guys were looking for: was it Skopeo?
A: Well, that's a good question. I thought we had.
C: And once we hit a limitation, we investigate configuring Kubernetes directly for that.
D: And I would add something: the setup of adding non-Kubernetes agents on infra.ci is something we need, because currently we build Docker images using img, since we don't want to run nested containerization. Even with Docker rootless there are still some hiccups and security concerns, and we are testing only the tag of the images using a specific tool.
D: If we want to execute docker run commands somewhere, whatever the architecture, we need static agents. We should consider that, for security reasons, it's not possible on infra.ci now with Kubernetes, so it could be totally worth it to start learning and applying setups for static agents, agents that are not Kubernetes.
A: Okay. So I think what I've heard then is: we're okay with initially going with static SSH build agents for PPC and for s390x, with the equipment we've got access to already; and the dynamically launched cloud agents that we're already using on ci.jenkins.io conceptually should be available as well, from infra or from trusted.
C: Yes, exactly. So in the case of trusted.ci we just have to copy-paste the configuration, I mean using the web user interface. For infra we have to put that in the JCasC config file. Obviously I would prefer to have that in the JCasC on infra, because we have to work on trusted.ci and ci to put the configuration in a file anyway. So if we already do that on infra.ci, we simplify future
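For reference, a static SSH agent declared in Configuration as Code might look like the fragment below. The host, credentials ID and labels are placeholders, not the real infra values:

```yaml
# Hypothetical JCasC fragment declaring a permanent agent reached over SSH.
jenkins:
  nodes:
    - permanent:
        name: "s390x-docker-1"
        labelString: "s390x linux docker"
        remoteFS: "/home/jenkins"
        numExecutors: 1
        launcher:
          ssh:
            host: "s390x-agent.example.com"
            port: 22
            credentialsId: "s390x-agent-ssh-key"
```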
A: Work, great, okay. So I think I caught the notes correctly then. All right, so it feels to me like I've heard a description of a path forward to give us s390, arm64 and PPC support short-term with static build agents, and with the option in the future to become more flexible and use Kubernetes if we wish. Did I say it correctly?
D: Thanks for the comment, Jim. Yeah, it's a node selector, and we can select the architecture. It's built into Kubernetes, yeah.
C: Something that I noticed in the past, since I'm using AKS at times: often, when a new feature is available, you need to provision the cluster from scratch. So sometimes, depending on what you're looking for, we may have to replace something. I mean, everything is automated, so it's always a good time to test that we can re-provision everything from scratch. But yeah, it's not as simple as just...
A: Okay, so nothing else? I think we've covered Docker image support, then. Okay, with that one, ready to move on to the next topic. Okay.
C: Just one last point for the Docker topic: what I would do is first try to configure infra.ci; once we have the right configuration and we are sure that it worked on infra.ci, we configure trusted.ci to use it.
C: The reason for that is, first, that trusted.ci is quite critical, I mean it's a critical instance, and also we have to work on having the JCasC configuration anyway. So I think that would be a good way to go, yeah.
D: It's still better to run it that way, at least for clarity, because you will have, at least on the file system, a clear separation between the workspaces of each agent; the ports and the processes will be separated. It's still a good practice, but we have to assume that multi-tenancy won't be easy to get there.
A
Right
right,
but
so
it's
it,
one
of
the
crucial
things
is
docker
command
line
access
is
required
and,
as
an
example,
the
the
agents
that
I
run
cannot
run
docker
on
those
machines,
and
it
was
it
was
intentional.
I
I
don't
think
cia.jenkins.io
agents
either
have
that.
So
these
these
are
distinct
in
that
sense
that
they
need
to
be
able
to
execute
docker
commands.
C: Because what I fear in this case is: the way I always consider ci.jenkins.io is as not trusted, just because it's publicly accessible, because whatever may happen, I mean, right? So I would not feel comfortable running a Docker daemon reachable from trusted.ci and from ci.jenkins.io, basically.
C: So I mean, I'm fine with using infra.ci and trusted.ci, because infra.ci is only available from the VPN, so the risk is lower. But ci.jenkins.io: I mean, we really take care of updating it all the time and so on, we take good care of that service, but you never know what can happen.
D
I
would
add
also
that
infrastr
is
now
completely
working
with
github
app
setup,
which
means
that
a
contributor
can
see
the
repository
of
the
code
of
the
source
code
of
whatever
docker
image
we
want
to
build
on
share.
They
will
have
feedback
from
the
build,
even
if
they
cannot
access
the
infra
ci
and,
in
particular
the
github
app
with
jenkins
setup,
provide
the
build
outputs
on
anything
which
is
a
which
could
fail
on
the
jenkins
build.
D
You
have
the
output
on
the
github
check,
which
means
these
github
are
available
for
any
public
contributor
without
requiring
access
to
infra
ci.
So
we
can
completely
have
a
setup,
a
welcoming
setup
for
contributor
keeping
the
specific
static
agent
on
infrared
or
trusted
behind
the
vpn
not
publicly
available.
D
So
it's
not
mandatory
to
have
these
machines
on
ci
the
junk
inside
we
can
totally
set
up.
The
building
in
france
hosted
an
infrared
for
the
build
and
test
and
then
trusted
or
whatever,
wherever
you
want
for
the
publication
that
it's
a
kind
of
intermediate,
because
these
are
specific
machines
and
we
cannot
apply
large-scale
security
setup
like
we
can
with
kubernetes.
A
Okay,
so
so
it
seems
like
we've
got,
we've
got
an
action
item
there
that
we
will
for
to
to
address
olivier's
concern.
We
do
need
to
ask
for
a
separate,
a
separate
thing
right,
a
separate
machine
or
a
separate
cluster
whatever
it
is
a
separate
thing
so
that
the
untrusted
environment,
I
think
that
makes
sense.
The
untrusted
environment
that
is
ci.jenkins.io
and
the
trusted
environment
that
is
infra
or
trusted.ci.jenkins.org
do
not
risk
contaminating
each
other
through
their
docker,
their
docker
command.
A
Now
ci.jenkins.io
is
not
currently
allowed
to
execute
docker
commands
on
these
on
on
s390
or
on
powerpc
olivier.
So
so
the
permission
isn't
there
so,
but,
but
I
think
it's
still
a
valid
point
to
say.
If
we
in
the
future
decided
to
enable
docker
for
ci.jenkins.io
on
these
machines,
we
would
now
be
risking
contamination
between
infra
or
trusted,
and
the
untrusted
ci
server.
C
And
so
so
now
that
I
put
the
question
get
away,
do
we
really
need
to
have
ci
jk
in
the
io,
with
those
architecture.
D
And
I
will
say:
kubernetes
will
be
a
nice
solution
here,
meaning
that,
if
you
need
to
run
a
container
on
this
specific
architecture
in
the
future,
with
all
the
image
built
on
infra
after
a
few
months,
then
as
soon
as
we
are
able
to
add
kubernetes
clusters
to
ci
jenkins
io,
it's
easy
to
ask
for
requirement.
I
want
to
run
that
image
on
a
restricted
environment
for
that
show
texture,
but.
B: There are two other topics I want to mention here; I think one of them is actually going to be covered later on. My first question was for ARM: have we decided? Are we specifically just targeting arm64 (ARMv8, I think, the kind of new-ish ARM), not ARM 32?
B
Are
we
just
supporting
that
or
are
we
supporting?
You
know
the
other
versions
of
arm
as
well.
A
So
the
the
only
ones
that
are
interesting
to
me
are
are
ones
that
identify
themselves
specifically
as
aarch
64
in
linux,
okay,
yeah,
so
that's
and
and
given
that
today
on
windows
we
officially
declared
we
only
do.
We
only
deliver
a
64-bit
installer
today
on
linux,
we
are
very
blunt
about
hey.
You
need
to
use
64-bit,
so
so
I
don't.
I
actually
know
it
works
on
32-bit
arm
because
I
happen
to
have
raspberry
pi's
in
my
basement.
The
work,
but
that's
that's
irrelevant
to
the
whole,
to
the
for
the
project.
64-Bit
is
all
we
need.
B: Okay. The only other thing is, I think you have it down there under maintenance: have we decided on what images we are going to support with multi-arch? I know it was brought up multiple times in the SIG meetings, the platforms meeting, that we are, of course, doing the Jenkins main image, but I know there was talk of the agents and the other kind of helper images. It looks like you have them right down there. Okay.
A: I think at least the topics that were of concern for me we've already covered in the earlier discussion. As long as they're available in some place where consuming plugins or Jenkins core could decide to run tests, that seems good enough to me. Anything else that people are worried about? Damien, you've led some specific efforts here on testing and validation.
D
Now,
right
now
the
the
binaries
we
are
using
for
building
and
testing
on
constrained
environment
and
to
avoid
using
docker
are
intel
only
but
written
in
go
so
they
could
be
cross
compiled.
So
that
means
that,
right
now
we
will
need
a
full
docker
engine
for
these
specific
architectures
for
building
and
testing.
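The cross-compilation Damien refers to needs no special hardware; for a Go tool the idea is roughly as follows. The target list and paths are illustrative, not the project's actual build script:

```shell
# Build the same Go source for several target architectures from one machine.
# GOOS/GOARCH are honored by the standard Go toolchain; linux/arm64, s390x
# and ppc64le are all officially supported targets.
for arch in amd64 arm64 s390x ppc64le; do
  GOOS=linux GOARCH="$arch" go build -o "bin/tool-linux-$arch" ./cmd/tool
done
```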
D: One is a binary compliant with BuildKit, and the other is Google's container-structure-test tool, which is a command line that parses a YAML file and runs a set of checks on a Docker image: either using only the tar driver, so it only checks the content of the image and the metadata, if you don't have Docker or don't want to use it for security reasons, or eventually connecting to Docker to run a bunch of tests in a behavior-driven-development style.
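A container-structure-test configuration of the kind described is a small YAML file; this sketch checks a few things one might assert about a Jenkins image. The paths and expected values are illustrative:

```yaml
# Hypothetical config for container-structure-test. With the tar driver only
# file and metadata checks run (no Docker daemon needed); commandTests need
# a runtime to execute inside the image.
schemaVersion: 2.0.0

fileExistenceTests:
  - name: "jenkins.war present"
    path: "/usr/share/jenkins/jenkins.war"
    shouldExist: true

commandTests:
  - name: "git installed"
    command: "git"
    args: ["--version"]
    expectedOutput: ["git version .*"]

metadataTest:
  exposedPorts: ["8080", "50000"]
```

It would be invoked with something like `container-structure-test test --image jenkins/jenkins:lts --config tests.yaml`, optionally with `--driver tar` to skip the Docker daemon.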
D
So
these
are
two
tools
that
can
be
completely
compiled
or
cross-compiled,
because
this
architecture
are
officially
supported
by
golong.
So
there
should
be
no
issue
on
that
or
one
bug
or
the
other.
But
it's
the
only
thing
is
that
we
need
to
start
with
docker
and
then
we'll
go
ahead,
securing
and
securing
right.
Now
this
is
mandatory
and
we
need
to
take
measure.
According
to
this.
A: No scanning yet. Yes, we need it.
C: So maybe you wanted to briefly talk about that now. As Mark said, people from time to time use scanning tools to identify vulnerabilities. Maybe one or two weeks ago, we were discussing adding that to the process when we build images. My, not fear, but what I feel is: if you put such tools in place, but nobody looks at the outputs and tries to really solve what we see, then it's useless. And each time I run such tools...
C: You cannot fully automate the results, because you have reported vulnerabilities that are not really vulnerabilities, and so for each output you need to read it really carefully to say: okay, this one is dangerous; this one is not really dangerous. And until now, I really don't know how to deal with that.
B: No, I was going to say it's not something I've done personally. I've seen a couple of repositories, you know, open-source communities, start implementing it. I don't even think AdoptOpenJDK has implemented it yet; they might have their own internal stuff, but not any of the more popular tools.
B: Is there not, again, I haven't really played around with it, but obviously it will slow down, as Mark rightly said, the build process. But is there not a way to build it into the CI/CD publishing kind of pipeline, where, you know, you go along, build them all and then test them, and then, if there is an error or if there are concerns, open a PR or open an issue on the repository, somehow automatically dump the log output, or link the log output, and tag a couple of the key members to address it?
A: And I can scan after the fact. I think it's valid to consider Olivier's observation that having a scanning run that we then ignore doesn't help. So it may be that part of the maintainer role is: if you agree to maintain, you agree to read security findings, and you agree to think about them and resolve them in the system. For me, that just hasn't come to full fruition.
C: Such features are usually provided by a registry, but that kind of feature does not come for free. So, for instance, I think Docker Hub provides that, but it's not part of the open-source program.
A: I thought that Linux Foundation security, that LFX, had something. We've got the benefit that Andrew Grimberg is online from the Linux Foundation; I don't know if he's actively listening at the moment, but I thought there was a security offering. Oh, so, I thought there was a webinar recently; I missed it, unfortunately. Andrew, on LFX Security services: is image scanning part of that, do you know?
B: That would actually work for you guys. But have you thought about blocking? You know, obviously scanning after you publish the images is definitely an option, but have you thought about just blocking the publishing if an issue is found?
A: When I looked at the output from Snyk for the images I was testing, there were always comments there, and for many of them there would be a place where they say "issue: no known remediation." So they found something in a library that was in the image, but there was no known fix, not even from the operating-system vendor.
C: It would just be a waste of time. But on the other side, if you just say "run security tools, but don't fail if you find something wrong," then you will not look at the outputs. So that's, right now, the challenge that I have. It's really like having a lot of unit tests that fail, where some are not relevant to your project, and each time I have to look at each entry, relevant or not. And so, yeah, that's what I fear about automating that.
D: For the security scanning: are we able, with tools like Snyk or Trivy, to put a threshold first, or to ignore certain rules? Because that's what you would do with lint: if you want to ensure that lint is applied, but you know that a specific rule doesn't apply, you can disable it. Most of the time the pattern is inline, or in a file: you explicitly disable that particular rule for a specific context.
D: So that's why, inline, next to the command, you add a comment explaining why. For instance: okay, we know that the package blah-blah is subject to that particular issue, but we don't have any case where it applies with that image, so we disabled that particular rule for that particular package.
D: But I don't know if the security scanning tools provide such a feature for ignoring. If they do, we should totally add the blocking CI and add ignore rules, because it will complete our knowledge and it will provide something that is like a test.
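Trivy, at least, does support this pattern: findings listed in a `.trivyignore` file are skipped, so the scan can be made blocking while triaged false positives stay documented. The CVE ID below is a made-up placeholder:

```shell
# .trivyignore holds one CVE ID per line; comments record the triage decision.
cat > .trivyignore <<'EOF'
# libfoo finding: code path not reachable from this image (triaged 2021-02)
CVE-2021-0000
EOF

# Fail the build (non-zero exit) only on findings not covered by the ignore file.
trivy image --exit-code 1 --severity HIGH,CRITICAL jenkins/jenkins:lts
```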
D: It means there is something to be done: either removing the test or fixing it. But the second point here is: I don't know if we can do a diff between the rules and the issues that come from the parent image, which we cannot do anything about. The instructions that we run when we build the official Jenkins image inherit from the OpenJDK base or whatever.
D: If we do a diff, we can see what our image is adding in terms of security issues, and then those issues are valuable, because they point us to something that we installed. It could be a package that we installed that we don't actually need, so that could be an indicator; those would be real findings, and the goal would be to remove a lot of the noise so we only focus on what matters. So maybe, if it's possible, because again, I don't know if technically we can do it.
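The diff Damien proposes can be sketched with nothing more than sorted CVE lists: scan the base image and the final image separately, then keep only what our own layers introduced. The CVE IDs here are made-up stand-ins for real scanner output:

```shell
# Stand-in scan results; in practice these lists would come from running the
# scanner (e.g. with JSON output) against the base and final images.
printf 'CVE-2020-0001\nCVE-2020-0002\n' | sort > base.cves
printf 'CVE-2020-0001\nCVE-2020-0002\nCVE-2021-1234\n' | sort > image.cves

# comm -13 prints lines unique to the second file: findings our layers added.
comm -13 base.cves image.cves
```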
A: Yeah, I haven't... I think that's ingenious; I've just never tried it. I think that's a very good idea, because what you just described is: hey, the packages we added in our Debian packaging, for instance when we add git, or, let's pick another one, GnuPG (gpg), we add them because Jenkins runs better with them installed, but that means we've now accepted them as an additional security liability over the AdoptOpenJDK image that is our base image. Yeah, I think that's a very good idea.
D
The
proposal
here
will
be
the
same
as
when
you
add
linting,
to
a
project:
that's
starting
with
a
non-blocking,
of
course,
because
we
don't
want
to
be
blocked
on
our
current
capability
to
deliver
right
and
we
do
a
threshold
like
for
the
free
next
month.
We
run
the
security
and
we
see
what
we
can
do
with
it
in
three
months.
If
no
one,
absolutely
no
one,
either
from
the
infra
team
or
the
community
did
anything,
then
we
can
stop
using
it
because
it
means
that
what
olivier
described
is
true.
A
That,
okay,
all
right
so
next
topic,
then
publishing
docker
images.
C
I
guess
I
guess
we
want
to
merge
this
next
point
as
well.
So
yes,
so
my
point
was
to
publish
to
kuwait
that
I
own,
I'm
not
sure
to
understand.
So
this
is
something
that
I've
been
thinking.
When
I
was
looking
for
docker
hub
solution,
I
mean
when
they
stopped
providing
the
free
tier,
but
now
that
we
have
something
in
place,
I'm
really
wondering
why
we
would
have
to
publish
to
kuwait
at
io
as
well,
because
it
supports
military
architecture.
It's
support,
but
man
as
well.
B
Yeah,
so
I
I'm
just
starting
to
dive
into
the
whole
pod
man
experience
jenkins.
I
don't
know
if
you
draw
it.
I
even
know
sorry,
not
jenkins
doc.
I
don't
know
if
you
know
drop
support
for
s390
and
power
going
forwards.
B
They
still
have
like,
I
think,
the
last
version
publicly
accessible
via
their
servers.
I
think
there's
a
couple
like
custom
builds,
but
you
know
what's
the
valuability
of
those
you
know,
but
I
think
it's
at
like
1809.
B
So
you
know
any
of
the
new
docker
features
going
19.00,
which
I
think
includes
bildax
a
couple
other
like
beta
features
and
stuff
like
that
aren't
going
to
work
on
s390
on
ppc.
It
doesn't
really
matter
too
much
if
they're,
just
remote
agents,
if
you're
just
using
it
to
like,
do,
builds
and
stuff
like
that.
It
doesn't
really
change
the
functionality.
B
But
if
you
did
docker
compose
and
stuff
like
that,
but
internally,
my
team
is
looking
and
evaluating
and
switching
over
to
podman
and
as
I've
done,
that
I'm
starting
to
understand
the
ecosystem
and
stuff
like
that
and
hence
kwai
io
is
one
of
the
the
new
kind
of
parts
to
that
ecosystem.
Being
another
docker
repository,
and
I
think
right
now
there
isn't
too
many
differences
with
quayo
and
docker.
I
mean,
even
when
you
go
to
do
a
pull
of
an
image.
B
Podman
has
a
little
like
command
line,
option
to
either
search
docker.io,
which
is
the
docker
hub
registry
or
quayio,
and
you
can
pick
which
either
one
just
search,
but
I
think
it
would
be
nice
to
give
users
an
option
to
pull
down
from
coilo
and
I
think
there's
a
couple
cool
features
being
added
and
additional
things
being
added
to
podman
and
build
off,
which
I
don't
know
if
you
guys
looked
at
build
up.
B
But
it's
like
the
the
build
engine
to
podman
and
I
think
those
features
are
probably
going
to
be
working
with
coil
or
loosely
coupled
with
way
io.
So
I
I
thought,
it'd
be
interesting
to
look
into
exploring
and
pushing
images
there
and
getting
a
presence
on
kwai
io.
B
I
haven't
really
found
out
one
of
the
one
of
the
issues
I
haven't
really
looked
into.
Is
you
have
the
official
images
from
docker
hub
right
that
whole
official
image,
repo
and
stuff
like
that.
B
Yeah
I
know-
and
I
haven't
really
seen
if
there
was
a
equivalence
or
kind
of
comparison
on
kw,
so
it
might
be
good
to
establish
some
sort
of
presence
on
koi
io.
You
know
marking
down
hey
here's
the
official
jenkins
images
before
someone
else
might
come
along
and
do
stuff
that
might
confuse
users
and
stuff.
So
that's
really
my
my
point.
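If the project ever did mirror to a second registry, a tool like skopeo can copy an already-built image between registries without a local daemon; a sketch, with a hypothetical quay.io repository name:

```shell
# --all copies every entry of a multi-arch manifest list, not just the
# architecture of the machine running the copy.
skopeo copy --all \
  docker://docker.io/jenkins/jenkins:lts \
  docker://quay.io/jenkins/jenkins:lts
```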
C
So
I
can,
I
guess
so
that's
a
good
point.
I
can.
I
could
definitely
register
the
name
on
credit.
I
o
my
my
my
fear
here
is
if
it
does
not
really
add
major
values
to
how
to
maintain
both
registries,
I
mean
on
our
side,
you
slow
down
the
process
from
a
configuration
point
of
view.
It
does
not
change
because
we
can
just
add
one
additional
tag.
C: I mean, we just provide one additional credential and one additional tag, and that's fine. But ultimately we'll have to push artifacts to a new location, so it will slow down security releases, LTS releases and so on. Also, once we start pushing images on quay.io, the community will expect us to keep maintaining them. So if, for some reason, we don't want to use those images anymore, then we'll have to write a blog post to communicate about that.
C
So
that's
why
we
that's
why
we
created
the
jenkins
foreign
organization
on
docker
hub,
so
we
could
have.
We
could
experiment
and
we
could
have
yeah.
We
could
experiment
with
images,
but
in
this
case
I
just
fear
that
we
are
just
putting
too
much
constraint
without
getting
values
from
that.
C
So
if,
if
tomorrow,
it
appears
that
we
need
to
use
credit,
I
o
for
having
more
architecture.
Maybe
that
would
be
a
good
reason
to
switch
to
credit
io.
But
for
now
I
think,
as
long
as
the
community
is
happy
with
images
from
docker
hub,
we
have
the
process
in
place
to
push
there.
B
So
yeah,
no,
I
I
think
I
know
that's
perfectly
acceptable.
I
just
want
to
throw
it
out
there,
but
I
think,
like
you
mentioned,
you'll
register
your
account
or
the
jenkins
account
over
there
at
least
have
some
sort
of
presence
or
username.
So
when,
if
you,
if
you
do
down
the
line,
you
need
to
push
there
or
want
to
push
there,
there
won't
be
any
hassle.
I
guess.
A
For
image,
building
is
and,
and
damien
had
mentioned,
img
my
simple
mind
had
only
been
thinking
about
the
docker
way
of
doing
these
builds.
Do
we
as
a
as
a
container
and
platforms
track,
need
to
think
about
any
changes
specific
to
those
or
is
it?
Is
it
sufficient
to
say,
okay,
we
we
will
build
with
an
image
builder,
it
could
be
podman
with
builda,
it
could
be
img,
it
could
be
docker,
but
people
will
use
the
containers
in
any
case,
or
is
that
simplicity
not
actually
there.
B
No,
I
I
think,
I
think,
if
you
guys
are
supporting
the
open
container
initiative.
Oci
you
guys
are
fine.
Docker
has
both
support
for
oci
and
podman
does
too
so
with
the
oci.
That's
the
whole
purpose
of
being
able
to
be
transportable
across
any
kind
of
containerized
platform.
B
So,
and
as
far
as
I
know,
the
images
you
guys
are
building
are
oci
client,
so
there
shouldn't
be
any
issues.
Yep
great.
A: Thanks for that clarity. Okay. So now, publishing Docker images for me had one additional aspect, and I guess it's a separate topic: the speed at which we build our images, or the delay between the arrival of, let's take the security team as the very specific case, a security issue that is detected and resolved, and the moment they can announce it and publish the image. And I've heard noise from the security team that, hey, they'd like our image publishing to be faster. Is that already covered in pending pull requests?
D: So, in order to be sure that this gets optimized: right now we need to start measuring the build time. We need to check the trends, maybe put them on Datadog if that's not already the case, and start acting on the pipeline, to be sure that we identify: is it the build time that's taking long? Is it the test? Is it the publication? Is it all three of them? We need measurements. So, whatever we do to optimize this, we need to validate it each time.
B: One question I had about that is about accelerating the Docker build process, and I don't really know much about how your machines work, and I know we talked about static agents for s390 and Power.
B: But I know one of the big things in Docker builds is caching of the different layers, and I think, for the majority of at least the Jenkins main image that I've been working on, right...
B
It's
pretty
much
the
same
for
the
beginning
layers
and
then
later
on,
when
you
copy
the
binary
or
do
a
couple
of
downloads,
those
might
change.
But
if
we
have
some
sort
of
captioning
mechanism
that's
saved
in
between
the
builds
or
some
sort
of
way
to
do
that.
That
should
speed
up
things
a
lot.
B
But
I
know
there
are
some
issues
with
caching,
if
done
improperly.
In
terms
of
you
know,
game
messy
and
stuff.
C
So
the
main
limitation
that
we
have
right
now
is
the
scripts
which
builds
the
grammage.
The
starting
sequence
build
one
image,
then
the
second
one
and
so
on.
So
the
the
good
thing
is,
the
same
script
is
run
on
the
same
virtual
machine,
but
it's
definitely
not
optimized,
and
we
just
I
mean
it's
typically,
the
kind
of
example
where
someone
work
on
something-
and
we
just
organically,
put
new
features
to
the
data
script
without
really
taking
the
time
to
refactor
it.
So
that's
what
we
have
to
do
today.
B
Yeah,
I
I
I
do
have
a
pr
open
from
the
multi-arch
builds
and
you're
right,
where
the
j,
like
part
of
the
build
trip,
is
looping
over
and
building
the
last,
like,
I
think,
30
builds
of
jenkins.
You
kind
of
keep
those
ones
fresh
and
do
os
refreshing
and
stuff
like
that,
and
those
do
cache
very
well
because
after
you
build
the
first,
you
know
half
of
all
the
docker
containers.
It's
really
not
much
changing,
but
I
guess
I
I
don't
know
it's.
B
I
guess
it's
really.
That's
that
first
build
that
will
probably
take
longer
and
then
each
one
after
that
should
be
all
cash
and
stuff,
but
it
would
it
would
take
a
good
amount
redesigning
to
kind
of
really
optimize
it.
But
I
don't
know
if
that's
where
things
are
taking
the
time
or
you
know
I
I
think
we
just
mentioned
reporting,
building
and
tracking
and
seeing
where
actually
the
slowdown
is.
D
Yeah, the issue with caching is that we might have problems with releases. Most of the time we like to build a release from scratch, to be sure that any package index inside the image is freshly downloaded, etc., and the cache invalidation in docker depends, at least, on the content of the RUN instruction inside the Dockerfile. So if we don't change the instruction, with caching we might lose the update of packages internally.
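One common way to get both behaviors (fresh package indexes on release builds, cache hits otherwise) is a build argument that changes the RUN layer's cache key when bumped. This is a hedged sketch, not what the project necessarily does; the image, package, and argument names are invented for illustration.

```shell
# Write a Dockerfile whose package layer can be invalidated on demand
# by changing the REFRESHED_AT build argument.
cat > Dockerfile.release <<'EOF'
FROM debian:buster-slim
# Bump REFRESHED_AT (e.g. to the release date) to force a fresh apt-get update
ARG REFRESHED_AT=unset
RUN echo "refreshed at ${REFRESHED_AT}" && \
    apt-get update && \
    apt-get install -y --no-install-recommends openjdk-11-jre-headless && \
    rm -rf /var/lib/apt/lists/*
EOF
# A release build would then pass a new value, e.g.:
#   docker build --build-arg REFRESHED_AT=2021-02-24 -f Dockerfile.release .
```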
D
The issue, however, is that, yeah, right now I don't know if we are using BuildKit. But with BuildKit there is absolutely no caching issue, and we could have some caching at least on the base images. Also, right now we have different Dockerfiles for each image, while if we were able to run the docker builds on a powerful machine with a single Dockerfile that contained a lot of multi-stage builds...
D
If we enable BuildKit, it's far more efficient than trying to distribute the build on multiple machines, because BuildKit is really impressive in its capacity to parallelize, to pull different layers at the same time. It has no issue building a complete graph of dependencies and building as much of the graph as possible in parallel. And I've had a lot of improved times on most of my setups with this, on setups that are far bigger than this one. So that could also be an opportunity.
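The single multi-stage Dockerfile idea can be sketched roughly as below. This is illustrative only: the stage layout and names are assumptions, not the project's real Dockerfiles. With `DOCKER_BUILDKIT=1`, BuildKit builds independent stages concurrently and only the stages a target needs.

```shell
# Write an example multi-stage Dockerfile whose leaf stages share a base.
cat > Dockerfile.multistage <<'EOF'
# syntax=docker/dockerfile:1
FROM debian:buster-slim AS base
RUN apt-get update && apt-get install -y --no-install-recommends openjdk-11-jre-headless

FROM base AS jdk11-image
COPY jenkins.war /usr/share/jenkins/jenkins.war

FROM base AS slim-image
COPY jenkins.war /usr/share/jenkins/jenkins.war
EOF
# BuildKit would build the independent leaf stages in parallel, e.g.:
#   DOCKER_BUILDKIT=1 docker build --target jdk11-image -f Dockerfile.multistage .
```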
B
With the new docker pull limits implemented, is your account, for when we actually go to do the builds and publishing, are you using, like, a premium account to get around that? So, like, if we had to pull down alpine, or we had to pull down, you know, debian:buster-slim and stuff like that, we're not gonna hit those limits? Because I've been hitting limits with my free account. I didn't know.
C
We are not, because we are sponsored. I mean, we are part of the sponsored open-source plan from Docker, basically. Okay.
B
I was going to suggest, if we are hitting those limits, or also just to speed things up, the caching of those images somewhere; like, okay, let's cache, you know, debian buster. But again with caching, you know, you might miss... I don't know. I haven't really found a good docker cache for myself anyways, so I don't know what's out there.
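One hedged option for caching base images locally is a pull-through registry mirror configured in the Docker daemon. The mirror URL below is a placeholder, and this snippet only writes the config fragment; it does not touch a running daemon.

```shell
# Write an example daemon.json fragment pointing docker at a local mirror.
cat > daemon.json.example <<'EOF'
{
  "registry-mirrors": ["https://mirror.example.internal:5000"]
}
EOF
# The mirror itself can be a stock registry running in pull-through mode:
#   docker run -d -p 5000:5000 \
#     -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2
```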
D
That's correct. Right now, in terms of measurement, I've added a link in the conversation, which is the GitHub Actions feedback from Jenkins for the official Jenkins controller image. It's nice because we have a detail of all the timings, but the first thing here is that the build part is implied.
D
There is a build step which seems generic; it took 9 minutes, and then we have 15 minutes per test. So we need to be sure that the tests are not implying a rebuild each time, and since we're using a Makefile, you see where we might have different issues there. So, let's say, that should be our first topic: being sure that we have a clear separation between build, test and deployment.
A
It feels like, Damien, what you described is: we need to measure, and we've got preliminary measures already visible through the GitHub Actions, or the GitHub checks. And then we start looking at what things we could do to accelerate. You mentioned BuildKit; now, BuildKit is not something that I'm familiar with. So that's a higher-level thing over docker.
D
Don't worry, it's a two-step process. First, the shorter path should be setting the DOCKER_BUILDKIT environment variable to the value 1 inside the Jenkinsfile, or better, in the Makefile that takes care of building the image. Once we do that, we can immediately measure the gain in time; we might gain or might not, I don't know, but that will be for later. Most of the time, where BuildKit is better out of the box is in evaluating whether a layer needs to be rebuilt or not; it's really efficient.
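The "shorter path" just described can be sketched as a one-line change to the Makefile that drives the build. The target and image names here are placeholders, not the project's real Makefile.

```shell
# Generate an example Makefile that exports DOCKER_BUILDKIT=1 for all
# recipes (printf guarantees the tab that make requires).
printf 'export DOCKER_BUILDKIT=1\n\nbuild:\n\tdocker build -t jenkins/jenkins:example .\n' \
  > Makefile.example
cat Makefile.example
```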
B
I do worry. I haven't actually played around with BuildKit, and I did see BuildKit when I was exploring buildx. But I do worry about s390 and Power being, you know... you mentioned last December; BuildKit might have been in a couple of the beta builds, but it might not be available on s390 or Power.
C
That's the kind of thing that we can ask the platform for; we can open a support ticket, and sometimes it works. I mean, I already retrieved the jenkins name on other systems previously.
A
All right, so: maintainers for existing docker images. The problem statement here is that we've got docker images that are in various stages of being cared for, or not being cared for, and the idea for the proposed Jenkins Enhancement Proposal is that we use the GitHub code owners facility.
A
For images, so that people would understand, as an example: I'm not paying significant attention to the CentOS images; I pay a lot of attention to the Debian images, and I pay a little bit of attention to the Alpine images. But we need somebody else, who's interested in CentOS, to take on CentOS. So that's the concept here. Then the notion is: we need that for controllers, that's the one I pay a lot of attention to; for agents, where I don't pay nearly enough attention; and then for infrastructure images, because those are also crucial to us, and we use dedicated infrastructure images for specific tasks. If I understand correctly, Damien and Olivier, can you help illuminate that for me a little more there? Yes.
D
So, leaving aside the renaming of the agent that went from JNLP to inbound-agent (for ssh-agent, I don't know if it has been renamed, but I would expect outbound-agent), that means we have a dependency graph for all the agent-related images.
D
We now have a base image that only specifies the minimum for an agent, which means a jenkins user, an OpenJDK, and a download of a version of the agent.jar. There is also an entrypoint script whose role is to start an agent: you pass it the parameters to connect to your controller, it will download the latest agent, and that's all. Then, from this...
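As a very rough sketch, such an entrypoint downloads the remoting jar from the controller and starts it with the connection parameters. This is an editor's illustration, not the real jenkins-agent script: the parameter positions and variable names are invented, though the `/jnlpJars/agent.jar` path and the `-jnlpUrl`/`-secret` options are the standard remoting ones.

```shell
# Write a toy inbound-agent entrypoint: fetch agent.jar, then exec it.
cat > entrypoint.example.sh <<'EOF'
#!/bin/sh
# $1 = controller URL, $2 = agent secret, $3 = agent name (hypothetical layout)
JENKINS_URL="$1"; SECRET="$2"; AGENT_NAME="$3"
curl -sO "${JENKINS_URL}/jnlpJars/agent.jar"
exec java -jar agent.jar \
  -jnlpUrl "${JENKINS_URL}/computer/${AGENT_NAME}/jenkins-agent.jnlp" \
  -secret "${SECRET}"
EOF
chmod +x entrypoint.example.sh
```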
D
Then there is a bunch of images inherited from the older JNLP agents.
D
They are actually in the jenkinsci/jnlp-agents repository, and these images are published to Docker Hub. It's a library of images that provides a kind of base, ready-to-go inbound agent with specific tools for a specific language: for instance jnlp-agent-maven with JDK 8, jnlp-agent-ruby, and Python 2 / Python 3 agents. There are Windows ones also, like a Core runtime.
D
They don't have a readme, because by default Docker Hub fetches the readme from the image repository, and the publication process of such images doesn't push one. If we keep them, we must have a readme with at least a badge that says "we need a maintainer", or that names the maintainer, and the source code out there. Because right now these images look really suspicious: as a Docker Hub browser, I see an image with no readme, no information about the repository where the Dockerfile comes from.
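The minimal readme being asked for might look like the sketch below: a maintenance badge plus a link back to the source repository. The badge URL and repository path are placeholders, not existing artifacts.

```shell
# Write an example Docker Hub readme with a "maintainer wanted" badge
# and a pointer to where the Dockerfile lives.
cat > README.example.md <<'EOF'
# jnlp-agent-example

![maintenance](https://img.shields.io/badge/maintainer-wanted-red)

Source: https://github.com/jenkinsci/jnlp-agents (the Dockerfile lives here)

This image is up for adoption; contact the Jenkins developers mailing
list if you rely on it and can help maintain it.
EOF
```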
D
I mean, it could be a bitcoin miner for all I know. So the idea, for all the images (the controller has one, but for the others we need to check), is that we need to be sure we have a readme, as the correct way to evaluate the trust a user can put in them. It's not perfect, of course: even a readme does not guarantee that the content of the image has not been tampered with by a malicious user. But still, in terms of image documentation, the user experience is mandatory.
B
I was gonna say also, I know that on a couple of Docker Hub instances and stuff like that, people post images that have a readme that usually links back to the GitHub for more information. But then, if you can include a link or something like that, in the GitHub or on the Docker Hub page, to where the images get built, like if it is a public build line or whatever, that would help clarify and build that trust, right? Like, you can see: hey, this is coming from Jenkins.
D
I don't remember the name, but there are a few services that provide an embedded badge that you can put in markdown. They are SaaS services that run the equivalent of a docker inspect, and so they provide you a web view of the given image on Docker Hub with an analysis of each layer. Docker also provides such a service, but adding this would be at least the minimum.
D
So: "if you use it, contact us to help us maintain it, or risk having this image deprecated in the future". That could also be a place to note security concerns, if there are known security issues. And also, the point here: we have the readme, which is part of the process for all images, but the issue we have with the JNLP agent images is that we have people using these images, but not only that.
D
Is it expected to use SSH to be connected to and started, or is it expected to be JNLP, or WebSocket? And now, with these images, we have the concept of at least a fifth dimension: do you want Maven 3, Maven 4, Python 2, Python 3, Python 2.7, Python 3.whatever? I mean, that's a lot to maintain; it's an exponential effort, while the goal of using docker images, or even better Kubernetes pods, is to have minimalistic and isolated images.
D
So there is that whole topic of: do we need these images? And if we need them, there is no problem. When I say "we", it's as a community: do we have community users who rely on these images? Are they willing to maintain them? Because maintaining, testing, and checking security for all these images is the cost. We can be sure that we need the controller and the agent images, but do we really need these images?
C
Do we really need to maintain every image? I mean, it's the same for the default Jenkins image: we maintain an image for Debian, we maintain an image for CentOS, and for Alpine. At the end, I mean, what are the benefits of doing that?
C
From a Jenkins point of view, I don't think that it's the responsibility of the Jenkins project to maintain every image for every combination of scenarios. And so I think having code owners would already simplify things, because something that I find challenging is that we have many different images.
C
We have many people opening PRs, but following and reviewing PRs takes time, and once the PR is merged, then people assume that we will maintain those images forever; and I mean, it does not work this way. So, for instance, last time we had to build the docker images for the stable release, it took us like one hour. And again, do we have to build this image for everything? Does it really make sense? So I would...
C
I would definitely be interested in having code owners for specific images, but, more importantly, I would also like to have a procedure where we could remove images. Because, for instance, when you go to Docker Hub you'll see, for Blue Ocean for instance, I think, there are Blue Ocean docker images where the latest tag points to a very old version. And so I've already had people in the past saying: oh look, I've tried that docker image and it was not working. And so it was just an unused image.
C
So I think we should have a more aggressive process to delete images that we don't want to distribute anymore. So...
A
I know that I don't have a solution. I have certainly seen that even with the Jenkins images: our Debian 10 or Debian 9 controller image was based, for a full year, on an image that the OpenJDK project had stopped maintaining, right. So we were based on a tag that they had just stopped maintaining, and so we were locked backwards in time on that year-old JDK version. But I don't think there is a way to remove an image; once it's published, it's published, right?
C
So you cannot remove the image from an end user's machine, but at least you can remove the tag. You cannot remove the image, but you can remove the tag. So by removing the tag: if a person is using the digest, they will still be able to retrieve the image; but if that person is looking for a specific tag, let's say the latest Blue Ocean image, then that person won't be able to download that image.
A
Yeah, it just feels different than how I'm used to thinking of code we've delivered in other places, where we don't actively prevent you from running an outdated Jenkins version even though we know it contains security vulnerabilities. Leaving the tags there seems harmless to me, in that we're not gaining much by deleting a tag, are we?
D
Then it's already good enough to say: these images are considered deprecated and won't have any maintenance, and these images are known to embed security flaws; we recommend you use that other one, whatever. So you don't remove the tag, but you share the knowledge with the community that they should not use it. Then it's, as you say, their choice to keep using it or not.
B
Mark, didn't you put a proposal together for what platforms and what OSes we support, and deprecation and stuff like that? I think that would be good to bring up.
A
Absolutely, and that's the general theme here, that specific idea, that concept. So you're absolutely right, Jim, I think this is the right place for that. And the ideas that have been added here, adopt-this-image and code owners, and including it for agents, not just for controllers, feel like a really healthy thing to do. So for me, it feels like this should be one of those things.
B
I think having, like, a document that really spells out what the deprecation path is, you know, especially if we're upgrading between Debian versions, you know, stretch to buster and stuff like that: how that process works, and what the tags mean. I know, I think with Alex we talked about the new tagging scheme, at least for the main image, and the Windows images are starting to use that. Talk about, like: okay, hey, the jenkins:debian tag, right, what does that point to? You know, because that changes at any given time, right. Whether that's, okay, the latest Debian, right: okay, that points to stretch, and then we eventually need to move it to the next LTS. So detailing how that works, and what the current one is, and stuff like that would be extremely helpful for the community, and a good place to add those deprecations.
B
You know, like: hey, this is deprecated, here's the security concern, please use this alternative image if you want to stay on Debian, or something like that, right.
C
I want to add, where we talk about the tags, the example that I have here: if you pull the jenkinsci/jenkins docker image, you're running the version 2.151, and so it's a very old version. And so, if someone is not necessarily aware that that image is not maintained anymore, that person will just try to test Jenkins using the latest tag. And it's not the latest version; it's a very old image.
D
I have a proposal for this: building the latest image just one more time, where the entrypoint starts with a sleep of 100 seconds and a message that says "this image is unmaintained and insecure, you should check another tag". Just to be sure that the latest tag keeps existing; but if it takes two minutes to start, then you will be really compelled to use another, correct version.
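The proposal above can be sketched as a final-rebuild entrypoint wrapper. The wording and the 100-second delay come from the discussion; the script itself is a hypothetical illustration.

```shell
# Write an entrypoint that warns loudly and stalls before starting
# whatever command the image would normally run.
cat > deprecated-entrypoint.sh <<'EOF'
#!/bin/sh
echo "WARNING: this image is unmaintained and may contain security flaws." >&2
echo "WARNING: please switch to a maintained tag." >&2
sleep 100
exec "$@"
EOF
chmod +x deprecated-entrypoint.sh
```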
B
People have to check the logs anyway to grab the, whatever, API key or password, I forget what it is. So it'd be cool to add that deprecation notice somehow into those images. I don't know how you would go about doing it, but I like your idea of: okay, hey, if we deprecate an image, let's rebuild it one last time and add the deprecation notice into that message somehow. I think that would be really, really cool, and also a good way to communicate to users that, hey...
A
Okay, well, so we've actually already got... I think it's an interesting idea. We've got a facility already that will alert users of security vulnerabilities, right: we've got an admin monitor that appears and says "you're running a vulnerable version", and that's already there, and we've got admin monitors that show that you're running outdated plugin versions. So maybe we ought to consider a way of saying: hey, this docker image is also somehow outmoded, and add an admin monitor for that.
D
I have two points here. The first one is an idea from Garrett, because, I think... I'm not sure if Garrett is there, he...
D
There are two kinds of label schemes, metadata associated with the docker images, that you can tune and that are defined by OCI or another scheme. And I don't see any label stuff on the official docker images; but adding these labels would help discoverability, by implementing at least the standard OCI scheme, and maybe adding information like the Jenkins version, the date and time, and the git commit as metadata.
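As a hedged example: the keys under `org.opencontainers.image.*` below are from the OCI annotation spec, while the `jenkins.version` key and all values are invented for illustration.

```shell
# Write an example Dockerfile carrying standard OCI labels plus a
# hypothetical project-specific one.
cat > Dockerfile.labels <<'EOF'
FROM debian:buster-slim
LABEL org.opencontainers.image.source="https://github.com/jenkinsci/docker" \
      org.opencontainers.image.revision="abc1234" \
      org.opencontainers.image.created="2021-02-24T00:00:00Z" \
      org.opencontainers.image.description="Jenkins controller (example)" \
      jenkins.version="2.277.1"
EOF
```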
D
Of course, this metadata can be tampered with. However, it helps discoverability when you check the information about an image without running it. So we could also use this as a support to complement the detailed tagging syntax: by stating the base operating system, the package list or the tools list if something is installed, and adding a bunch of metadata to explain the intention in a different way.
B
That'd be awesome. Do you have a link to the OCI or the labeling standard? That'd be really awesome. I honestly haven't really played around with labels, but that sounds really cool. I know Skopeo, I think that's the name, is one of the tools you can use to, you know, interact with remote images without pulling them down. So I imagine you can look at labels and decide whether, hey, this is the image I actually do want, based on those.
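The real command (shown as a comment, since it needs network access) would be something like `skopeo inspect docker://docker.io/jenkins/jenkins:lts`, which returns JSON that includes a `Labels` object without pulling the image. Below we only demonstrate reading a label out of a saved sample of that shape; the sample content is made up.

```shell
# The network-dependent step, for reference:
#   skopeo inspect docker://docker.io/jenkins/jenkins:lts > inspect-sample.json
# Simulate its output with a hand-written sample, then extract a label.
cat > inspect-sample.json <<'EOF'
{
  "Name": "docker.io/jenkins/jenkins",
  "Labels": {
    "org.opencontainers.image.revision": "abc1234"
  }
}
EOF
grep -o '"org.opencontainers.image.revision": "[^"]*"' inspect-sample.json
```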
F
Yes, there are. I'll put the individual links in, but yeah, there's the Open Container spec and there's the Label Schema; they're the two different ones.
D
Thanks. And I'm taking the opportunity to also share with everyone the tool that Garrett has written. It's a command line tool that we started using on the infra that does a diff between two images in terms of labels, which is really useful when you rely on the labels to understand what the content of these images is.
B
I know one of the things we do, or one of the improvements I did in the build scripts (it's in my PR), was grabbing certain info down from Docker Hub's API, instead of enabling some beta features on the command line; and one of them was using the remote inspect and stuff like that to gain more information. Because I don't want to have to pull down... one of the things I was doing in the build script was avoiding pulling down images.
B
If I didn't need to, to kind of speed things up. We do have some of that in the build script already, but I was trying to be more aggressive about checking whether we actually need to build something again, and stuff like that. So the labels could be something really useful if we start implementing them in the images.
D
A Dockerfile sample that says: FROM whatever image, RUN apt-get install such-and-such packages. And I feel that, for all the JNLP agents, instead of maintaining the images in the future, if we don't have code maintainers, one of the solutions on the deprecation path could be providing a Dockerfile generator, or at least some documentation that says: okay, if you want to install some tools on your inbound agent, here is how to do it. An example with Maven: FROM jenkinsci/inbound-agent, then RUN apk add maven, or on Debian, apt-get install maven, or curl...
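The documentation example sketched in that turn, written out as a concrete Dockerfile. The base image tag is illustrative; the package manager commands are the standard ones mentioned.

```shell
# Write an example "extend the inbound agent" Dockerfile adding Maven.
cat > Dockerfile.maven-agent <<'EOF'
FROM jenkins/inbound-agent:latest
USER root
# Debian-based tag: apt-get; on an Alpine-based tag use: RUN apk add maven
RUN apt-get update && apt-get install -y --no-install-recommends maven \
    && rm -rf /var/lib/apt/lists/*
USER jenkins
EOF
```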
D
...whatever version of Maven, and check the sha. And this would be the idea for a Dockerfile generator, where you would have a form that would be: hey, I want a Jenkins agent, inbound or outbound; the operating system should be Windows, Debian, whatever; and I want Maven, Terraform, whatever, select. And then it's only textual: it's a library of textual RUN instruction blocks that could generate the final file. This can be done in JavaScript on the static website; it could be done in numerous ways.
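A toy version of that generator idea, to make the "library of textual RUN blocks" concrete. Only two base flavors are wired up, and the base image tags are placeholders.

```shell
# Assemble a Dockerfile from an OS flavor and a list of tools.
gen_dockerfile() {
  os="$1"; shift
  case "$os" in
    debian) echo "FROM jenkins/inbound-agent:latest"
            pkg="apt-get update && apt-get install -y" ;;
    alpine) echo "FROM jenkins/inbound-agent:alpine"
            pkg="apk add --no-cache" ;;
  esac
  echo "USER root"
  for tool in "$@"; do
    echo "RUN $pkg $tool"
  done
  echo "USER jenkins"
}
gen_dockerfile debian maven git > Dockerfile.generated
cat Dockerfile.generated
```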
D
So this is long term, but in the short term, having a first piece of documentation that says: hey, how can you extend that image? Because this is information a lot of users need; they end up using these shared images, but in fact, what is their real need? They want to customize their agents and run them with docker containers or pods. So if we show them how to do that, they will take care of building, security scanning, testing, and maintaining these images on their own, because these are their specific needs.
A
Damien, I like the concept. It feels like it supports the idea of: hey, if we have no maintainer, we mark the thing as up for adoption, and up for adoption means it's not actively being maintained, right. And then, if we have even better ways to hint: here's how you can overcome the "up for adoption" by adopting it yourself for your specific needs, here are the steps you follow. I like that. Good, okay.
A
Anything else on that topic before we go to what appears to be our concluding topic? We've got not more than 20 minutes; I've promised myself I will stop at two hours. So, "official Helm chart for Jenkins" was one, and then I'd like a few minutes at the end to just give you all a description of what I think I'm going to say tomorrow, summarizing this track.
C
Because right now we have two kinds of Helm charts in the project. We...
C
The official Jenkins Helm charts: this one is using GitHub Pages, so we just use that to get a page where we publish. I think they have a process in place; Jared contributed to that quite recently. I mean, I think it works, so I don't think it's something that we need to bring here. We may sign the chart, but I'm not sure; I don't see the value in doing that. On the other side, we also have Helm charts in the Jenkins infra projects, and those are not versioned.
C
It's not a big deal for us at the moment, but yeah, we may... I got access to a Helm chart repository on the JFrog service, on repo.jenkins-ci.org, so we could push our Helm chart there; then we would have everything under repo.jenkins-ci.org. But I mean, for the official Jenkins Helm charts, again, they are already using that endpoint, the community is using it, so I would not change the endpoint anyway.
B
One thing I noticed with, like, Kubernetes and stuff: Helm charts are still very popular, but I know there's been an uptick in operators, and having a Jenkins operator might be kind of cool. You know, an operator is more about managing the life cycle over time, while my understanding of Helm charts is that it's more just, like, a single install, which I think you can update, but...
C
Yeah, but there is already a Jenkins operator; that thing was pushed by VirtusLab and Red Hat. You have a discussion on the dev mailing list about that. Basically, they propose to collaborate to develop the Jenkins operator.
B
I didn't know that, but that'd be cool. I know the operator, I imagine, is only for x86, pulling those images; but it'd be kind of cool to see, as we make progress on the other, you know, points and multi-arch support, looking at updating the operator to enable multi-arch support. That'd be kind of cool.
F
So, something that came up on the cloud native track about an hour ago that maybe should be discussed here is: when you're using this Helm chart and you're applying updates...
F
Quite often there is a period of time during which Jenkins is not available, and you're likely to miss webhook events coming from something like GitHub.
F
So there was some talk about whether we could create something external to Jenkins, but included optionally inside the Jenkins Helm chart, that basically does some kind of store-and-forward of webhook events to Jenkins internally; something that's fast to restart, low memory, yeah, nice and lightweight.
D
I also have a, let's say, nice-to-have, but not mandatory.
D
It's that I've used the Helm chart three times over the past year, and each time I ended up having to search for the correct value, because the values had changed and there is a drift. It's classical for all Helm charts of the world: the default values.yaml file, which is the source of truth for default values, and the documentation associated with it tend to drift apart. And for the official Helm chart there...
D
There is another chart I'm still using; two years ago, what we started there was improving the CI process that was linting the whole Helm chart, by adding a dumb shell script. And I'm sure we can find or build something that checks that, if a given value is defined in the default values file, then the same value exists in the associated readme.
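A dumb sketch of that check: every top-level key in values.yaml must be mentioned in the README, otherwise the CI job would fail. Sample files are created inline so the script is self-contained; a real check would point at the chart's actual files and cover nested keys too.

```shell
# Create tiny sample inputs (the `agent` key is deliberately undocumented).
cat > values.sample.yaml <<'EOF'
controller:
  image: jenkins/jenkins
agent:
  enabled: true
EOF
cat > README.sample.md <<'EOF'
| Key | Description |
|-----|-------------|
| controller | Controller settings |
EOF
# Extract top-level keys and verify each appears in the README.
missing=0
for key in $(sed -n 's/^\([A-Za-z][A-Za-z0-9_-]*\):.*/\1/p' values.sample.yaml); do
  grep -q "$key" README.sample.md || { echo "undocumented value: $key"; missing=1; }
done
[ "$missing" -eq 0 ] || echo "CI would fail here"
```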
D
In the markdown, just to be sure that the CI fails if you change a default value without updating the documentation. Because even for the best person in the world, if that person is exhausted and changes something, forgetting the documentation happens all the time, and it's not a human issue; since any human can make that error, having a tool that helps us with that improves the user experience. Because I've been pretty frustrated, and I know how to deep dive into how a Helm chart works.
D
Someone who is new to that environment will just retain the experience that Helm and Jenkins are painful, so let's use something else. So it's not mandatory, but it could really be something easy; not that complicated. It's not easy, but it's not complicated to implement, and it could have some value for the newcomers there.
D
A topic that was brought to me yesterday by someone who dropped into the room, and I already had this discussion during the first day's event: they asked if we can use Kustomize, with a K, which is an alternative to Helm, to install. So right now I naively pointed them to the fact that Helm can export the result of the chart as pure YAML, so you can have a helm command that outputs a bunch of YAML that can then be piped to Kustomize.
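A sketch of that Helm-to-Kustomize trick. The Helm render itself (shown as a comment, since it needs helm and network access) would use `helm template`; the kustomization.yaml written below then treats the rendered YAML as a plain resource. Chart, release, and patch names are placeholders.

```shell
# Network/tool dependent step, for reference:
#   helm template my-jenkins jenkins/jenkins > rendered.yaml
# Write a kustomization that layers a local patch over the rendered chart.
cat > kustomization.yaml <<'EOF'
resources:
  - rendered.yaml
patchesStrategicMerge:
  - my-patch.yaml
EOF
# Then apply with kubectl's built-in Kustomize support:
#   kubectl apply -k .
```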
D
But I don't know if we should; it's the same debate: should we provide something for every package manager? I don't know. But maybe mentioning the trick of the Helm-to-YAML output piped to Kustomize could provide an easy entry solution. At least, I don't know all the pros and cons of using Kustomize, but it is built into the kubectl binary out of the box since a few versions.
D
Yes, and as a complement: pull requests to the Jenkins-on-Kubernetes documentation on the official docs. I...
C
By the way, I'm just wondering about the best way to highlight the Jenkins operator and the Jenkins Helm charts. Maybe we should put that on the download page; I don't know if it's already there.
A
So, in tomorrow's closing session I intend to present a summary from this. I've got, let's say, 10 to 15 minutes; actually, no, more like five to seven minutes. And in that, what I was thinking to do is: I want to put yellow highlights on the things that I think should be mentioned, right. So I'd like to mention this one that we intend to do, that sort of thing.
A
All right, and you are welcome to chime in tomorrow and correct me or update. We're again going to do it as a Zoom meeting, not as a webinar, so that you will all be able to make your voices heard. And intentionally, this is part of the contributor summit, not just a presentation to the world. So join us tomorrow for that, and looking forward to it.