From YouTube: 2021 07 30 Platform SIG
Description
July 30, 2021 Jenkins platform special interest group meeting. Topics include Docker, Java 11, Arm support, and more.
A
Welcome to the Jenkins platform special interest group. It's the 30th of July. A reminder that we abide by the Jenkins code of conduct in our meetings, so be nice to each other. Today's agenda items: open action items, then Java 11 as the default in all our images, pending Docker changes, and that's it. Aditya, is there anything you would like to add to the agenda?
A
Okay, all right. The first action item for me is that I'm going to open the Jenkins enhancement proposal today. This document has been out for review for an extended period, and it still has a lot of things that I need to add. I've got an open question from Tim Jacomb, which is good, so we can address it there. Then we've got an additional JEP, which won't happen today.
A
Any questions on the action items? No? Okay. So then the next piece of the story is Java 11 as the default in all of our images, and that's the Jenkins enhancement proposal, this thing right here. This will be submitted later today as a pull request to the Jenkins enhancement proposal GitHub repository, and we'll start the review and discussion process.
B
Yes. So let's just say that, okay, everyone gives a thumbs up and a plus one, and now we are going to have this. What is the process, and how will all the Jenkins images start using JDK 11? Will it be that a new update is available, and then everyone will download that update and use it? How does the process work? Basically, what is the flow of the whole thing?
B
Yes, yes, it does answer my question. This is what I was looking for. So I had a follow-up question. From the user's point of view, it's just that they got a new update, so they need not worry about the transition. But there might be a case where some functionality might break, right? So if some software does not support JDK 11, how are those cases handled?
A
...to 2.302.1, they will also implicitly transition from Java 8 to Java 11 running their controller. Now, that may surprise them. That's why we're going to communicate it: we're going to blog about it, we're going to inform them, because with changes like those it's important that they know this is happening. And even with all of our communication, we still expect there will be people who will be dismayed, or who will say:
A
"Oh, I need Java 8 for this specific reason." And so one of the proposals here is that we will add image tags for Java 8, so that if a user says "oh, I cannot upgrade to Java 11," they will have a place where they can change their Dockerfile, or change the Docker image they're using, to instead use a different image. They could use latest-jdk8 or lts-jdk8.
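As a sketch of the opt-out this would give users (tag names as proposed above; treat the exact commands as illustrative):

```shell
# If you cannot move to Java 11 yet, point your deployment at the
# proposed Java 8 tag instead of the (now Java 11) default tag:
docker pull jenkins/jenkins:lts-jdk8    # instead of jenkins/jenkins:lts
docker run --rm -p 8080:8080 jenkins/jenkins:lts-jdk8
```

The same substitution works in a Dockerfile `FROM` line for users who build a customized controller image.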
A
Thank you. Now, there's a good question here from Tim Jacomb. He asked: should we retire the CentOS images? It's a valid one, because CentOS 8 is no longer maintained in the way that it was maintained previously. CentOS 8 has become a very different distribution, and updates are no longer arriving to CentOS 8 in the same way they did before.
A
Therefore, it's becoming out of date, and as it becomes out of date, that's a risk to our users. So Tim's question about whether we should retire those images is a very good one, and I'll be addressing it, because I think he's right. We should acknowledge that we're going to retire it, and we're going to switch to something based on a Linux distribution called AlmaLinux, which is an open source, community-inspired distribution based on the same concepts that CentOS used to use.
A
Just describing the Java 11 transition that we're going to be doing, and it's described in this draft Jenkins enhancement proposal that I'll be submitting today. So Damien, I'll rely on you and others, as part of the review team, to look at this thing next week.
A
And we'll have discussions about it, I'm sure. Tim Jacomb has provided some good feedback today, and I've got some things that I need to tune in it before I submit it as a pull request. It's an attempt to describe what we should do, and then we'll plan to implement it over the course of the next few weeks.
C
Okay, so we made two main changes in the build process. The first one is regarding the build step. We have a build process which is quite straightforward.
C
If a contributor opens a pull request on the Docker repository, jenkinsci/docker, then there is a pipeline executed on ci.jenkins.io which first builds and then tests all the image variants, both Linux and Windows, all on Intel. As of today, the build part was a shell script: it iterated through a list of images and built them. The test part uses bats; I will come back to that one. It iterates over the same list and runs the test harness on each one.
C
That was before our changes. For the first step, most of the work on the build part was done by Tim, while I worked on the test part. We split, let's say, the major portion, and we used each other's knowledge for reviewing, so there were always two people involved. That way the knowledge is shared, not held by only one person. And we had a bunch of reviews from Mark, from Alex Earl, and from Olivier. So, the build part:
C
If you have an earlier version of Docker, or if you don't see buildx enabled in the docker info output of your Docker engine, it means you need to manually install the plugin. The plugin is a single binary to install on the client side, so there is no change on a remote Docker machine, and you don't need to restart Docker. It's only a client-side plugin. That plugin, as highlighted in the official Docker documentation, is a new implementation of the way the docker build commands are implemented.
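A quick way to check for the plugin as described above (standard buildx CLI commands; exact availability depends on your Docker version):

```shell
# Verify the buildx CLI plugin is installed on the client
docker buildx version

# The engine side needs nothing special; recent clients also list
# buildx among the client plugins in `docker info`
docker info | grep -i buildx
```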
C
Personally, I don't really mind: YAML allows comments too, but HCL allows something really powerful, variable injection without templating. So HCL seems a first-class choice, especially when we have a bunch of images like in our repository. It's a monorepo that generates a collection of images.
C
So by using the HCL format, it's easier for us to inject a single tag and have integrated variable management inside the file. What is the goal of that file? The goal of that file is to be a manifest that lists, exhaustively, all the images we expect to build in that repository, in a single place which Docker understands. It only depends on Docker, and Docker is the only actor that should need to be able to read it.
C
It allows us to specify the Dockerfile location. For instance, you have Dockerfile.windows for the Maven image, or, when you have a bunch of Dockerfiles in different directories, you can have one entry that points directly to each. And finally, you can add multiple names for a given image, which means that if you build the Dockerfile which is in the directory 8/debian, then you can specify that the image built there will have different aliases.
C
It will be jenkins/jenkins:lts-jdk8, and it will also be jenkins/jenkins:lts-debian, and so on. You can add as many aliases as you want if you want different naming conventions, especially in our case where we have different dimensions: the operating system used as the base OS, eventually the architecture, the JDK, and the main line of Jenkins (weekly, LTS, latest, etc.).
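A minimal sketch of such a bake manifest (the target name, paths, and tags here are illustrative, not the exact jenkinsci/docker contents):

```hcl
# docker-bake.hcl: the declarative build manifest read by Docker itself
group "linux" {
  targets = ["debian_jdk8"]
}

target "debian_jdk8" {
  context    = "."
  dockerfile = "8/debian/Dockerfile"   # per-variant Dockerfile location
  tags = [
    # one image, several aliases
    "jenkins/jenkins:lts-jdk8",
    "jenkins/jenkins:lts-debian",
  ]
  platforms = ["linux/amd64", "linux/arm64"]
}
```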
C
So it allows us to put the naming matrix in an exhaustive and declarative way. As we saw during the publish where we forgot a tag, the change was easy: it was one tag to add to a collection, and that's all, and we just ran the build again. That replaces a bunch of shell files iterating over loops, another part of the list, an iteration in the Jenkinsfile, and so on. So the goal is to have something that is portable and repeatable.
C
So Tim changed the whole build process. We are still using some shell script, but the goal is to have as much information as possible in the docker bake file, and the rest is only calling the docker buildx bake command. That command parses the file and, based on the entry parameter, selects either all the images or just a group of images. For the groups, we can define whatever groups we want; it's only a matter of adding a group inside the file. Or we can target only a specific image and filter the rest out.
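Invocation then looks roughly like this (the group and target names are whatever the bake file defines; the ones below are illustrative):

```shell
# Build every target declared in docker-bake.hcl
docker buildx bake

# Build one group, or one specific target, as defined in the file
docker buildx bake linux
docker buildx bake debian_jdk8
```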
C
So that's the main work. The good side effect of using this is that it accelerated the builds even more, because when you run docker buildx bake, Docker by default will try to parallelize all the images. Whatever list of images you ask it to build, it will analyze them, parse the Dockerfiles, including multi-stage Dockerfiles, see whether images depend on each other, create a tree, and then try to build in parallel. So while it's pulling one image, it can already run another layer.
C
Next, the second part is related to the tests. We did the build first: we deployed the build change and did a full weekly release with that new build step. The idea was one pull request at a time; we didn't want big-bang changes. First one major change, then one weekly release, and then we can iterate.
C
So the test step took almost 13 to 15 minutes, at least, for each image, and sometimes longer, when the build picked a machine that was already in use, with no cache, or with low power, depending on the AWS or Azure hypervisors. And by reading the tests we saw different things. First, we were using an unmaintained version of the bats shell framework, so we had to switch from the original project to the newly maintained project.
C
So initially we wanted to at least optimize the shell script so that, instead of iterating sequentially over each test, we would run all the files in the background and then have the shell script wait. But with its jobs flag, bats can do that for us; the only requirement was GNU parallel. So we had to install it on all the build agents, that's the only requirement, and we tuned the Makefile.
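The naive background-and-wait approach described first can be sketched in plain shell (the test names below are placeholders; bats' jobs flag, backed by GNU parallel, replaces this hand-rolled version):

```shell
#!/bin/sh
# Launch each (placeholder) test file in the background...
for t in test_a test_b test_c; do
  (
    echo "running $t"   # stand-in for invoking one bats file
  ) &
done
# ...then block until every background job has finished
wait
echo "all tests finished"
```

The drawback of this sketch, as noted later in the discussion, is that nothing throttles the number of concurrent jobs, which is exactly the resource-exhaustion problem a job pool avoids.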
C
So it's kind of automatic, and you can disable it with a variable if you see issues, because maybe there are side effects that we didn't catch earlier. So if any contributor or CI maintainer sees issues, there is a variable to disable the parallel testing. Now, the consequence on the tests was that almost all of them started to fail, because we had a lot of tests with duplicated logic, or tests that depended on each other within a given file. That was there for historical reasons.
C
That is how the test harness was designed initially, a few years ago. So after a discussion with a few members, it looked like there was, in fact, no reason to keep things sequential, because most of the time the sequentiality came from a test building a custom image. It would say: okay, I have to test jenkins/jenkins:lts-alpine, which was just built with buildx before, and then I have to ensure that I can build that image. But that was already done.
C
What we did is that, instead of having one machine where we build all the images and then another machine where we test and rebuild all the images, we shifted it: we have one machine for one platform. Let's say Jenkins for Alpine with JDK 8; on that same machine, docker buildx builds only Alpine and then runs all the test harness in parallel. Just not building from scratch again during the test phase allowed us to gain two to three minutes per test on eighty percent of the tests.
C
So we went from thirty minutes to six, and then further, by revamping the tests and ensuring they were all independent from each other. That was a bunch of shell scripting to do, and we cleaned up some tests that no longer made sense. They made sense when the author wrote them a few years ago, but now they were duplicating logic or even testing nothing. So we removed those tests.
C
And finally, an additional benefit. There was quite an issue, and it was the reason why the tests were not run in parallel before: with shell scripts you were able to completely exhaust the resources of the machine. If you throw out a bunch of uncontrolled tasks that are quite heavy, mainly on CPU, then the CPU starts to be over-allocated, and we saw the consequences: some agents were cutting the connection to our Jenkins controller.
C
It's able to control the allocation of the resources and to manage the pool of tests, mainly based on the number of CPUs, which means you don't risk any resource exhaustion. Everything is managed with your kernel, and so there is no MacBook Pro or laptop struggling to keep running, and no more disconnections of the agent. So: stabilized pipelines.
C
So yeah, with the combination of these two, we had to revamp most of the Makefile, so there might be some side effects. We saw some contributors hitting issues that we tried to fix as we went, and there might still be some. So if you want to contribute and you see issues with the Makefile, don't hesitate to open an issue, because we might have broken something. I'm not a machine.
C
So making the tests reuse the correct image was the most complicated part; it involved exporting the docker buildx bake configuration as JSON on stdout and parsing that. But outside of this, it was quite easy, and we are really happy with what we learned and with the results, because the time for the Linux images went from 20 to 25 minutes to build and test, to under four minutes now for the combination of both.
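The export step mentioned above relies on bake's ability to print its resolved configuration as JSON; a hedged sketch of that kind of plumbing (the jq query is illustrative):

```shell
# Dump the resolved bake definition as JSON on stdout, without building
docker buildx bake --print

# Example: list the declared target names, so a test harness can map
# each test to the image it should reuse instead of rebuilding it
docker buildx bake --print 2>/dev/null | jq -r '.target | keys[]'
```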
C
Windows is building and testing sequentially for now, so we have to wait and find a way. We see two possible roads from here. The first one is a naive optimization using PowerShell scripting; it's already PowerShell iterating, so we could try to parallelize there. PowerShell is way more powerful than shell in its ability to control parallel resources in the background, and provides more features. So that could be a first step, but we need a contributor at ease with PowerShell scripting.
C
Or maybe we could directly benefit from the rest of the work. However, docker buildx is not ready to support the Windows platform right now, so yeah, that's the state. We need to spend more time experimenting on that part, but there is time that we could gain here, because now the Windows part is the slowest.
A
What an amazing result. Thank you, Damien, that is absolutely exceptional. So yeah, I am delighted with the results. One of the questions for me is: okay, this was for the controller. We've also got the inbound agent, outbound agent, and agent images, and I believe we've got Jenkins infra images. Do you have any insight there? Are we considering making the change on those, or does it not make sense to make the change on the agent build process?
C
However, regarding the Jenkins inbound and outbound agents, we will have to discuss that with the community. Jenkins infra is something that we use for our own use case, so we can experiment, and we have a bunch of images, so we will see the same speed improvement, for sure, given that we have a collection of images. In the case of inbound and outbound, I don't know; the question has to be asked: do we want to move all the images into the same repository, given that there are dependencies between them?
A
Okay, I hadn't considered that. So there is an attribute of today's situation, with three separate GitHub repositories, one for the base agent, one for the inbound agent and one for the outbound agent, that might be better suited by docker buildx bake in a single repository. I think that's what you're saying. But then we've got to deal with the different tagging patterns, or naming patterns, in each of those subsets.
C
So that's also a benefit of using docker buildx. docker buildx is by default a standalone replacement for the default docker build command. That is, you have a binary, docker, which is the Docker client, and you have a Docker engine, which runs on your local machine on Linux, or inside a virtual machine with Docker Desktop for Windows or macOS, and sometimes you have a remote engine.
C
So not only do you get those kinds of benefits, but buildx also introduced the concept of a worker. A worker is a daemon running on the Docker engine where you want to build, or on any container engine you want to use, and it allows you to build containers without needing root access to the remote machine.
C
Sorry, I'm back. So with the concept of a worker, that means a local Docker client is able to send work to a bunch of remote buildx workers, which can be a remote Kubernetes cluster, and that means a single docker buildx command is able to send build requests to different machines.
C
So one Arm, one Intel, one PowerPC, as soon as you have a collection of remote docker buildx daemons. I'm not completely sure about the wording; the documentation is much clearer than me. But the idea is that you have one Dockerfile and one Docker client, and you have a bunch of remote machines; buildx allows you to manage that cluster. And with that in mind, Docker added a nice feature, which is enabling QEMU, the emulator.
C
So using that feature, docker buildx is able to spawn, in five or six seconds, a QEMU-backed worker which is able to build in parallel for all the supported architectures. So what did Tim do? He simply added the supported and expected architectures to the docker bake file, saying: okay, for jenkins debian we want not only Intel but also Arm, etc.
C
So that was just an array to append to, and we had to install QEMU and ensure it's enabled on all the agents. Enabling QEMU is quite easy: it's a Docker container to run with privileges, which registers an instruction handler in the kernel, and then that's it. And with the bake change and the QEMU emulation...
C
Okay, sorry. So with QEMU installed and the bake change, we were able to build successfully, for one week on ci.jenkins.io, all these architectures' images, without requiring any architecture-specific agent, which is quite a good step. It's an emulator, so maybe one day we will find issues, because we built an image with some binary compiled inside it or whatever, but most of the time we only download binaries and packages for a given architecture, and we rely on the upstream operating system packaging: Debian packages, Alpine packages, and so on.
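The QEMU registration described above usually boils down to one privileged container run; a sketch under the assumption that the helper image from the buildx documentation is used:

```shell
# Register QEMU emulators for foreign architectures via binfmt_misc
docker run --privileged --rm tonistiigi/binfmt --install all

# After that, one machine can build for several platforms, e.g.:
docker buildx build --platform linux/amd64,linux/arm64,linux/s390x .
```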
C
So those were the last changes and fixes, which are quite hot because they landed this morning, Europe time. So thanks to Tim for all the effort and the heavy lifting here. It's really interesting to see our ability to go forward on that subject. So anyone interested in that part, don't hesitate to raise your hand, propose help, or reach out on Discourse.
A
Thanks very much, Damien, that's excellent! I don't have any further questions, although I admit I am looking forward to having an Arm image. I'm a little jazzed about System/390 and PowerPC, just because they're so exotic to me. So thank you very much, that's great. Now, I assume there is no plan for Arm 32, for instance, and that this is all 64-bit work?
A
Well, I guess that's a good point. We had chosen long ago, I guess a year or two ago, that officially the Jenkins project only runs on 64-bit JVMs. So I've answered my own question, really. Yes, it runs on 32-bit virtual machines, but fundamentally the project does not test Windows 32-bit, for instance, and we certainly don't test any Linux 32-bit, and so there's no reason for us to make an exception here. It's all 64-bit, all the time.
C
Given the amount of work I've put in, I wanted to write a mail. I'm still not a maintainer of the image, and I would like to volunteer to be part of the maintenance team, especially given the amount of code I changed there.
A
Just send a... so, I asked to become a volunteer maintainer of the Docker images a while ago, so you can find an example. I think it just gets sent to the Jenkins developers mailing list. I will immediately plus-one it, and I am confident that Tim Jacomb will give his immediate yes vote, and then you'll be granted permission.
A
We'll end our meeting now, then. Thanks, everybody. The recording will be posted shortly, within 24 hours. Thanks all, have a great weekend. Thank you. Bye-bye.