From YouTube: 2021 08 03 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A
Yes, so hello everyone, welcome to the Jenkins infrastructure team meeting for the first week of August. Let's start with a few announcements. The weekly 2.305 has been released, so congratulations to the release officer and everyone involved.
B
I haven't double-checked by going through the weekly release checklist yet, so I still have to check that the Docker image arrived. I assumed that you and Tim were discussing the Docker image build, but I haven't checked to see if it's there yet, so we need, whoops, Mark to run the weekly release checklist.
A
CentOS 7, CentOS 8, AlmaLinux, and Rocky Linux. Excellent, okay, good, so yeah, that sounds good. That means no publication issue, so the last elements that we tackled with Tim have been fixed. For the record, the issue depended on the kind of hypervisor, VM, and cloud provider.
B
Okay, all right. So multi-arch images have been built, but not yet published.
A
Now the second announcement: we have to prepare for the next LTS release, which will be at the end of August. That one might be a bit tricky. Mark, I will let you underline if there are elements related to the infrastructure, but the JDK 11 change is the main one, as far as I know.
B
latest, lts, lts-alpine, -slim: all examples of that.
A
And okay, so if you see this recording and you are already using the Docker images, you can start as of today by adding the -jdk8 suffix if you want to keep using that image, so you will be sure. That is always a good practice, but yeah, don't let this change bite you; you can already change the tag. The images are the same, they are just aliases, so you can start right now to pin your dependencies, and then take the time to switch to JDK 11.
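A minimal sketch of the pinning advice above, assuming the jenkins/jenkins image on Docker Hub and its -jdk8/-jdk11 tag suffixes (the exact tag names here are an assumption, not a quote from the meeting):

    # Pin the JDK explicitly instead of relying on floating aliases
    # such as "lts" or "latest", which will move to JDK 11.
    docker pull jenkins/jenkins:lts-jdk8     # stay on JDK 8 deliberately
    docker pull jenkins/jenkins:lts-jdk11    # or opt in to JDK 11 today

Since the suffixed tags are aliases of the same images, switching to them costs nothing now and avoids surprises when the default alias moves.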
B
So, good insight, thank you. That's all that I had. There are certainly other changes; the changelog has not been generated yet, but we'll likely do that in next week's documentation office hours with Dhiraj, Joda McRoberts, and Kristin Whetstone.
A
Okay, I have one more add-on for us, for the infra. We need to audit all the changes we made to the whole release process, including both the normal process and the Docker image publication, because, as far as I remember, for the latest LTS we forgot some changes that had been made on the weekly, and so we had to cherry-pick. I think it was you, Mark, who recorded that issue during the LTS release.
B
Right, well, so the Docker images we build from a single branch, right, so that should be okay, because the Docker build processes are doing both LTS and weekly from a single branch. However, your point is correct: for the release and package repositories for the Jenkins core release, we need to be sure that we've kept those up to date, and that's a checklist item that I have not added. So let me give myself an action item.
B
Add it to the Jenkins release checklist, because it should be there, and I had said I would do it and then I failed to do it.
A
No problem: there is no shame in asking for help or asking for delegation on this. Great, okay. So these were the announcements. Then, about the weekly activity of the infra team: first of all, the progress on Docker. With the recent changes (docker buildx and so on) that we discussed last meeting, so if you're interested look at the previous recording, this week Tim started to work on the Docker agent, and I'm helping him on that part.
A
The idea is to use docker buildx as the default builder for the jenkinsci/docker-agent repository, which produces the base image jenkins/agent on Docker Hub. That image is the foundation for the inbound and outbound agents (the former JNLP agent) and a bunch more images; it's the base image that provides the Java agent client on a bunch of different Linux distributions.
A
We are one test away from merging that part, so it will be finished soon; the remaining failure is only a side effect of the parallelization of the tests, and we have identified the issue. We see exactly the same results, and the time for the build part alone has shrunk fourfold, from four minutes to one.
A
The impact on the tests is not that much, only ten percent, but overall the build and test across all the platforms on Linux is now at four or five minutes instead of 20 before, so the same kind of improvement overall. Windows will be the next part. It also helps to clarify, as we did for the controller, the list of supported images and tags, so making JDK 11 the default will be easier.
A
So here we are. I think the next step, next week, will be the inbound and outbound agents. A discussion has been started on IRC about maybe merging all the repositories. Right now I'm collecting all the knowledge from Oleg and from other former contributors, and that will include you, Mark, because I want to confirm the reasons why we split the repositories, to see if there is a compatibility concern or not.
A
Less maintenance pain, easier contributor setup: a lot of benefits could be gained, plus a centralized configuration of all the images we have, which could especially help when there is a security issue. Don't worry, we won't take any decision before starting an email thread on Discourse to collect advice and see what the community thinks. Right now we are just trying to understand the original reasons, so we can then see whether or not it makes sense to push that subject.
A
Yes, because buildx is able to understand the dependencies between images. There is a keyword that allows you to say that this image depends on that image, and it supports both the explicit keyword and multi-stage Dockerfiles. Since we are using a bit of both, all our use cases are supported, so docker buildx will be able to see the whole tree, and we can release everything at the same time when we change the JDK, for instance. So the time between when a change is requested and when it ships should shrink.
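A hedged sketch of that dependency idea, not the team's actual configuration: with docker buildx bake, one target can consume another target's output through the contexts keyword (assuming a buildx version recent enough to support it), while multi-stage Dockerfiles express the same dependency inside a single file. All file, target, and image names below are made up:

    # docker-bake.hcl (hypothetical): "agent" builds on top of "base",
    # so bake knows the whole tree and rebuilds both in one shot.
    cat > docker-bake.hcl <<'EOF'
    target "base" {
      dockerfile = "Dockerfile.base"
      tags       = ["example/agent-base:jdk11"]
    }
    target "agent" {
      dockerfile = "Dockerfile.agent"
      contexts   = { base = "target:base" }  # explicit dependency keyword
      tags       = ["example/agent:jdk11"]
    }
    EOF
    docker buildx bake agent  # builds base first, then agent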
B
Yeah, that sounds very promising. Thank you, Damien, and congratulations to you and to Tim; what a great outcome, and I'm looking forward to more. This is really wonderful. And I agree about how easy it was to handle the tags, thanks to that declarative syntax that specifies which things are being built. That made it much easier: I understood it, I could make the addition, and it just worked. Brilliant.
A
So now it's running on the Oracle Cloud on an ARM-based machine, which is good enough for its role here: downloading things and rsync. The synchronization of the artifacts has been improved by Olivier, so we should see a shorter time between when a plugin is released and the moment it's available on the archive and then on the subsequent mirrors.
A
So we don't use the staging branch anymore; we explained why during the previous meeting. We don't have a staging environment to validate the changes, so it was only slowing us down without testing anything, and we decided to be able to deploy to production in a faster way. If we break something, we are able to deploy a hotfix much faster and with confidence.
A
There is still some work to improve the test harness, and we are working on that, but the change was already alleviating: there were, let's say, four pull requests, and Olivier was able to deploy all four to production the same day, which is far more frequent and faster than what we used to do. It allows us to be way more responsive on that part. Secondly, we started to update, as we go, a bunch of the dependencies we use in the Puppet stack, after upgrading the Puppet master to the latest LTS minor version.
A
We have a bunch of gem dependencies in that repo, so we are updating them as we go, especially the Puppet modules; Olivier did a brilliant job starting that part too. We are still struggling with the Serverspec part, which is why staging and tests were not there for acceptance testing. So we are working on that part and preparing the upcoming Puppet and Hiera upgrades.
B
Yes, I did have one question. Oh, I thought you raised your hand; oh, that was a clap. I had one question. I was accustomed, for a brief period at least, to seeing archives.jenkins.io in the list of mirrors provided by the mirror stats, and when I looked recently it's not in the list of mirrors any longer. I wasn't sure if it's intentionally or accidentally not in the list; it used to be there, at least for a brief period.
B
It was at the very bottom of the list, so if no other location had it, we would seek it on archives.jenkins.io. Any guidance there, Damien?
A
So, about the fact that it's not visible on the mirror list: Olivier told me that it's the current behavior. We'll confirm that once we have stopped the Rackspace machine. I understand that the change to be made here might be impossible to roll back, so that's why we want to be sure that we won't have to roll back.
A
So now the next one is ci.jenkins.io. Since the past week we have been able to deliver the configuration as code as defined; we had to do that a bit earlier than planned. I don't know who the culprit is, and I don't want to know, but I think someone on the team, or among the people with admin access to ci.jenkins.io, made a mistake. I'm not sure if it was a plugin upgrade or someone messing with the agent and cloud configuration, but the result appeared before the previous weekly release.
A
I think it was a plugin update that did not finish, but I'm not sure, because the configuration had been deleted by Jenkins in the XML files. So we had to deploy the CasC configuration support through Puppet, which allowed us to reuse the configuration snapshot that had been taken the day before, so we did not lose anything; it was a kind of backup. And so we were able, in less than one hour, to bring ci.jenkins.io back with the correct configuration for the agents, validated by Tim.
A
We have done one subsequent pull request to update the references of the machines related to the Docker builds. We had to update the VM templates, so Tim updated the file, we were able to merge it on the jenkins-infra repository, and it automatically deployed the new version on ci after a reload. And thanks to Tim's careful review, we finally implemented a CasC reload, because initially we were doing a Jenkins safe restart, and the inconvenience of a safe restart is that the UI and the webhook endpoint are not available for 20 to 30 seconds. On ci it's quite fast.
A
However, it's still unavailable during that window. So now, if we change the agent configuration, the configuration-as-code reload does not stop the service, at least for the agent scope; I don't know about the other CasC scopes, that has to be verified. But it is really useful, because we can update without being scared of bringing ci.jenkins.io down, which is a good improvement.
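A sketch of the kind of zero-downtime reload being described, assuming the standard Configuration as Code plugin endpoint; the URL and token are placeholders, and enabling the endpoint requires the casc.reload.token system property on the controller:

    # Reload JCasC without a safe-restart, so the UI and the webhook
    # endpoint stay up while the configuration is re-applied.
    curl -X POST \
      "https://ci.example.org/reload-configuration-as-code/?casc-reload-token=${CASC_RELOAD_TOKEN}"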
A
So for the next step, now that CasC is in place: first, I have to check with the security team that the whole process they follow when they update ci.jenkins.io is in sync with the changes we need and the changes we plan to make. We need to be in sync on that part, to avoid someone thinking they are able to change the configuration and then their changes not being persisted by our system. So we need to double-check that everything is okay and whether there are some points that we forgot. Then the second step.
A
It's because Tim saw a bunch of errors on the builds during the previous days on ci.jenkins.io, when rebuilding the BOM or launching the BOM builds. The BOM builds have a tendency to start a bunch of agents, and most of those agents were ACI, and they were struggling for CPU, mostly because these are shared machines.
A
So, since the ACI agents are containers, our idea is to start with some pull request of the BOM, which is a great candidate, because if it works there then all the other builds should work: it's one of the worst cases. So we want to try an experiment with the BOM on a specific pull request that will use Kubernetes agents. Our goal is to take the configuration of the ACI agents and translate it into pod templates with the same Docker images, and run these on the current AKS cluster.
A
We might be able to have something; however, in terms of performance we don't know, because the machines are static. We only have three machines, no auto-scaling yet, because we decided to have a static capacity to avoid bad surprises on the budget. So it's a step-by-step process. I understand that it can be frustrating for developers, so sorry if it slows you down, but our goal is also to not break the existing developer workflows.
A
We want to start with specific, surgical pull requests and then grow from there. The reason is that if it works with Kubernetes and the only issue is the capacity of the cluster, we now have the DigitalOcean and Scaleway sponsorships, so we will be adding two new Kubernetes clusters, also static but on different providers, and we can start mixing the workloads. So right now the stretch goal is a specific pull request with specific labels.
A
They can be defined at the administration level in Jenkins or in a shared library; that depends on the use case. But we, the infra team, are still responsible for providing these predefined pod templates as a service, like we provide ACI, and they can be associated with a label, meaning the pod template is used if such a label is requested.
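A rough illustration of what such an admin-level pod template could look like as a Configuration as Code fragment for the Kubernetes plugin; every name, label, and image below is hypothetical, not the team's actual configuration:

    # casc-kubernetes.yaml (hypothetical fragment)
    cat > casc-kubernetes.yaml <<'EOF'
    jenkins:
      clouds:
        - kubernetes:
            name: "aks-experiment"
            templates:
              - name: "bom-k8s-experiment"
                label: "bom-k8s-experiment"       # opt-in label for the PR
                containers:
                  - name: "maven"
                    image: "example/agent:jdk11"  # same image as the ACI agents
    EOF

A build then opts in simply by requesting that label from its Jenkinsfile, without knowing anything about the pod definition itself.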
A
I don't know, I have no knowledge of this and I'm not sure it's possible, but I'm interested in knowing what the rule is if Jenkins has both ACI and Kubernetes clouds with the same label. Does it try to schedule first on ACI and then on Kubernetes, or how does it work? I assume the algorithm is close to "try to reuse whatever has succeeded as much as possible", so ACI would still have some weight.
A
That's also why we want to first start with specific labels: we will define pod templates at the administration level with specific labels, different from the existing builds, and then the pull request will use these specific labels, so we can be sure that we only use these pods for this PR. And then we will decide based on the results.
B
Excellent, okay, thank you, that clarifies it for me: the responsibility to define the pod templates remains with the infra team, and everyone else is just consuming those templates, not having to define them and learn all the complexities of what it means to define a correct pod template.
A
Exactly. There is an abstraction layer here, either a pipeline library or Jenkins Configuration as Code, so developers don't have to care about that part. However, developers right now are able to use pods if they are able to edit the Jenkinsfile: if you're the maintainer of a plugin, you can already try your own on this.
A
And we have Falco running, and we are going to add more and more restrictive rules on the cluster. For instance, and I think that's going to be the next step, we will have an allowlist of the images that can run. So if you want to try whatever new image, you should not be able to; we already disabled some things.
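As a sketch of how an image allowlist could be expressed as a Falco rule; the list contents and rule name are invented for illustration, not the project's actual ruleset:

    # falco-allowlist.yaml (hypothetical rule file)
    cat > falco-allowlist.yaml <<'EOF'
    - list: allowed_agent_images
      items: [docker.io/jenkins/agent, docker.io/jenkins/inbound-agent]

    - rule: Unexpected container image on CI cluster
      desc: Alert when a container runs an image outside the allowlist
      condition: container and not container.image.repository in (allowed_agent_images)
      output: "Non-allowlisted image started (image=%container.image.repository)"
      priority: WARNING
    EOF

Note that Falco as shown only alerts on violations; actually blocking an image would require an additional enforcement layer such as an admission controller.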
A
Also, these clusters are stateless, using AKS or whatever, so the idea is that weekly the cluster would be completely thrown away: the machines would be destroyed and then new ones would be created and configured. But right now we still have to discuss that. We should also be able to efficiently implement credential rotations.
A
So this is a multi-layer situation, but we have to sync with the security team as well, to see what the requirements are: what they feel is mandatory, what is important, and what might be less so, because sometimes something can be important from our point of view as maintainers of the infra, but there are other topics that are more concerning for them.
A
That's why we have to sync with them. All right. And finally, I started today to work on something we discussed: compatibility with Terraform 1.0. That includes the Datadog and AWS Terraform projects. As of today these are tiny projects, so it's mainly a modules update, a bunch of syntax fixes, and preparing a new version of the Docker image we use for running Terraform on our CI.
A
Why Terraform 1.0? Because we are only two versions away, so there should not be that many changes, but 1.0 is a kind of LTS: if you have a Terraform project compliant with 1.0, the Terraform syntax should not change for the upcoming three years. That's the reason why we need to make that effort: to be sure the maintenance can then be spread across multiple years.
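A small sketch of the kind of upgrade check this involves, assuming the standard Terraform CLI; the project directory is a placeholder:

    # Refresh providers and modules against the new version,
    # then let Terraform flag any 1.0 syntax problems.
    terraform -chdir=./aws init -upgrade
    terraform -chdir=./aws validate
    terraform -chdir=./aws plan   # confirm no unexpected changes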
C
I had a question, but it is absolutely fine if we take this asynchronously on IRC as well. On the point about the replacement of archives.jenkins.io, where the rsync synchronization of the archive has been improved: I wanted to know how the infra team makes sure that the synchronization is in place. I just wanted to know the architecture behind that, but it is absolutely fine if we take this as an asynchronous answer later.
A
There are two synchronizations: one that happens every 15 minutes and that triggers an alert if it does not run, and a full synchronization of everything every hour or every three hours, I don't remember exactly. Everything is based on rsync, and it runs each time there is a new package distribution: so a plugin is updated, or core or whatever is updated on the reference server.
A
So it's, let's say, rsync, shell, and a timestamp based on a text file.
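A rough sketch of the mechanism as described (rsync plus a timestamp text file); every host, path, and interval below is a placeholder:

    # Hypothetical 15-minute sync job: mirror the release tree, then
    # stamp a marker file that monitoring can read to alert on stalls.
    rsync -av --delete /srv/releases/ mirror.example.org:/srv/releases/ \
      && date -u +%s > /srv/releases/.last-sync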
A
No problem: you can still go to the jenkins-infra repository, and don't hesitate to look at the code, specifically for the archive part. It's a Puppet manifest, and you will see all the shell scripts listed there.