From YouTube: 2022 03 01 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A
Let's get started with announcements. The announcement I had on my draft notes is that today's weekly release has been released successfully. You can grab version 2.337 of Jenkins, and the Docker image is available. Our infra CI system should be upgraded during the upcoming hours.
B
No announcement.
A
Okay, so let's get started. If it's not available, it's because I might have concluded too quickly that it's available, because of some pull requests.
A
Okay, I'm just adding the notes related to the weekly release; we'll explain later if we have time, otherwise it will be deferred. If it's okay for you, let's get started. First of all, big thanks to Stefan for taking care of a major ci.jenkins.io issue earlier today, where the Maven container agents weren't starting at all.
A
So we had a build queue with a bunch of jobs waiting to be processed. I'm the root cause, because yesterday, while solving other incidents, I upgraded all packages and all plugins of ci.jenkins.io and triggered a reboot. But this morning it seems that no container was spawned at all, with no error in the logs, as Stefan saw, so it smells like a race condition somewhere. There were a lot of stack traces, by the way. While I'm thinking of it, if you don't mind, asynchronously: could you upload the screenshots of the error stack to the help desk issue? There wasn't any sensitive data in those screenshots, so you can totally go ahead.
A
First, a reboot of the container, then a reboot of the VM, to see if the problem happens again and if we can see any error log, in case we can reproduce the issue; if it's a race condition, it might be random. There is still an area where we don't know and we cannot conclude, but it smells like upgrading all the plugins yesterday could have triggered it.
A
We cannot be sure. The main reason is that yesterday we had a Puppet run failing for ci.jenkins.io.
A
The issue was not in the Puppet agent of the machine, but in the fact that the Puppet agent asked the Puppet master to trigger a historical backup of the configuration files, which changes each time there is a change in the GitHub repository. The agent pulls the files from the code repository through the Puppet master, and if a change is detected, the Puppet agent informs the Puppet master, which takes snapshots of the different versions, so we can always roll back, even if we lose the GitHub source.
A
We are limited by the disk space on the Puppet master, of course, but it rotates correctly: it should not use more than 10 gigabytes on that machine for that specific feature, so we won't fill the hard drive. But the snapshot backup failed on the Puppet master, because the location it was trying to back up to was owned by root instead of the puppet user.
A
Then there were five levels of subdirectories, each one named with a single letter; it looks like a cache laid out as a tree, and I assume we went down a bad part of the tree. So I recursively applied the correct ownership and permissions, and that fixed the issue. As far as I noticed, each file now has the same, correct permissions and mask.
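That recursive repair can be sketched roughly like this. This is a minimal illustration only: the paths, modes, and tree layout are hypothetical stand-ins (the actual fix on the Puppet master also reassigned ownership to the puppet user, which requires root and is not shown here).

```python
import os
import stat
import tempfile

def fix_tree_permissions(root, file_mode=0o640, dir_mode=0o750):
    """Recursively reapply permissions on every directory and file
    under `root`, in the spirit of a `chmod -R` style repair."""
    for dirpath, dirnames, filenames in os.walk(root):
        os.chmod(dirpath, dir_mode)
        for name in filenames:
            os.chmod(os.path.join(dirpath, name), file_mode)

# Build a small throwaway tree mimicking the single-letter cache layout.
base = tempfile.mkdtemp()
nested = os.path.join(base, "a", "b", "c", "d", "e")
os.makedirs(nested)
target = os.path.join(nested, "backup.conf")
with open(target, "w") as f:
    f.write("example")
os.chmod(target, 0o600)  # simulate the "wrong" state after the incident

fix_tree_permissions(base)
print(oct(stat.S_IMODE(os.stat(target).st_mode)))  # 0o640
```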
A
But in fact, we were missing a specific OpenSSL extended attribute expected by OpenVPN. Thanks to Olivier: Olivier jumped on a call to help us, and he pointed us to the right location. It seems it was not as easy as he, and we, were thinking, but we were able to make it a success, and it has been documented.
A
So, unless I missed something on the to-do list from yesterday, don't hesitate, Hervé and Stefan, if you see anything missing in the new doc; I might have missed a small part. But now we are sure that we know how to do it: we have the low-level commands. We can still improve the easyvpn Golang CLI to take care of that specific case, but that's one known difference, and I made sure that both of them have access to the embedded secrets and that everything has been written down.
A
It was two days of delayed jobs: there was a queue of 40 builds waiting, all for the GitHub reports and infra reports jobs; let's see that later. However, a user had issues, so he opened an issue, and we got to the root cause: trusted.ci wasn't able to spawn virtual machine agents on Azure, because the credential used for Azure had expired.
A
We tried, I tried, to fix my failures over the past months by adding these expiration reminders. Now, I'm sure we are missing some, so I'm already sorry in advance for the upcoming ones. But as a matter of fact, we tried; we should be strict, and I try to restrict myself: each time I see that kind of incident, I immediately add a notification on the team's shared calendar with a link to the help desk issue.
A
I took the opportunity to remove all the applications named with patterns like "codevalet" or "tyler", and all the Azure applications that were more than two years old with secrets expired for two years or more, which means those applications weren't used, so let's close them. There are still one or two that I would like to audit properly, but I made sure that there were no more expired credentials for service principal applications, and we spent some time this morning making sure that Stefan and Hervé have the same rights as I do.
A
We need to finish that part on the Azure portal, so that one of them (meaning not me, with my full admin stack) is able to rotate the last sensitive credential in that area, which is the infra.ci Azure Packer credential used to build virtual machines with Packer. That one expires in two weeks. So a calendar notification has been added today, and the goal is that Hervé or Stefan should be able to rotate the credential and update it, for the sake of knowledge sharing, and to ensure that they have the same permissions on Azure.
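The "calendar reminder before expiry" habit described above boils down to a simple check. Here is a small sketch of that idea; the credential names and dates are purely illustrative, not the real Azure inventory.

```python
from datetime import date, timedelta

# Hypothetical inventory of service-principal secrets and expiry dates.
credentials = {
    "infra.ci-azure-packer": date(2022, 3, 15),
    "trusted.ci-azure-vm-agents": date(2024, 1, 1),
}

def expiring_soon(creds, today, window_days=30):
    """Return credential names whose secret expires within the window,
    i.e. the ones that need a calendar reminder filed before they
    break builds, as happened with the trusted.ci Azure credential."""
    limit = today + timedelta(days=window_days)
    return sorted(name for name, expiry in creds.items() if expiry <= limit)

print(expiring_soon(credentials, today=date(2022, 3, 1)))
# ['infra.ci-azure-packer']
```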
C
But since we don't have that many people in this Azure directory, I don't think it's really a problem: more manipulation, but not that much.
A
Seems
that
terraform
might
be
able
to
manage
this
part
and
we
unforced
enabled
unenforced
authentication
for
any
user
that
need
to
access
the
azure
portal,
because.
C
What we need to do is to be able to measure costs. I'd have to ask Kevin, or someone else, to tell us why we don't see the billing page. Second, there is some documentation to be completed, and for the sponsoring we should add a mention on the sponsors page, and maybe make a blog post to announce it more globally.
A
Great job! It seems the cluster is eating a lot, especially when a build queue suddenly comes in as a wave of builds. So that means it works very well and transparently, at least from what we can see. Maybe some users don't think so; in that case, please report it if there is an issue.
A
Second,
just
a
reminder,
I
don't
know
if
everyone
received
the
email
from
digital
nation
yesterday,
since
we
enabled
the
automatic
patch
upgrade
for
the
cluster,
they
send
us
yesterday
an
email
the
same
seven
days
that
will
be
patch
day.
So
our
digital
lesson
cluster
should
be
upgraded
in
six
days
now.
Just
a
reminder.
A
The policy is almost the same as Azure's: as soon as they have validated a patch version of Kubernetes, they publish it as their own version, but sometimes they also backport fixes onto a more recent version if they can apply them to the current one, especially for security issues, which is really practical. That's why it's enabled, and it should not cause any outage unless something breaks seriously on their side, because, as Hervé pointed out, there is the surge.
A
It's what is called the surge upgrade: they start by allocating a bunch of machines on the new version, then they migrate our workload, and then they remove the old ones. We pay way less than what we really consume; during the surge we pay a bit more, but that's worth the no-outage default. And if it doesn't work, we still have EKS handling the builds. So again, really great job on that part, Hervé, and thanks for that. So now, time to write about it.
A
That instance is managed like trusted.ci and cert.ci. Stefan helped and took the lead on that topic, so thanks a lot. That topic really went clearly out of scope, for numerous reasons. The first one is that we have no explanation: neither they nor we are able to explain it, but as soon as the LTS update was applied and the Jenkins instance restarted in the container, the "clouds" markup in the config.xml was removed.
A
However, that was a good opportunity for Stefan and me to share knowledge on how all the VM agent templates work. We were able to cover a lot of elements, so it was good for team bonding and team knowledge sharing. It was also a good opportunity, thanks to a bunch of ideas that Stefan submitted, to propose, let's say, an improvement in the way we manage labels and tools.
A
So
there
should
be
a
formalization
of
this
proposal
that
should
be
applied
to
cigo
if
it's
okay
for
the
end
user,
but
the
idea
will
be
to
keep
the
dimension
expressed
through
the
agent
labels
to
kernel
related
element,
for
instance
the
operating
system,
linux
or
windows,
the
the
cloud,
the
kind
is
it
a
virtual
machine
or
container.
All
of
these
elements
are
canal
related.
A
We were able, thanks to what you showed me a few weeks ago, Mark, to define a kind of logical pattern for the tools, because we were blocked a few months earlier on how to express, let's say, a dependency on the kind of label or operating system.
A
The tools were trying to download a JDK instead of using the locally installed one, even though all of our Linux and Windows virtual machine templates provide three JDKs: 8, 11 and 17. So why download a tool whose provenance we cannot control and check during builds? It slows builds down, and in terms of security it's not really nice. So instead, we try to use the local one.
A
However,
we
could
still
use
some
fallbacks
to
download
or
to
use
other
paths
for
the
machines
that
are
not
managed
by
our
template,
such
as
the
power
pc
machine,
for
instance,
so,
convention
fallback
system,
and
since
we
use
puppet
templating
or
elm
templating
we
can
in
we
can
cover
all
corner
cases
and
better.
We
duplicate
it.
Thanks
to
the
template,
gdk8
and
gdecat-8,
so
we
could
add
added
value
for
the
end
user
if
they
do
a
typo
and
the
jdk.
That
would
still
be
covered
with
exactly
the
same
definition.
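The convention discussed here (normalize label typos like "jdk8" versus "jdk-8" onto one definition, prefer the pre-installed JDK, and fall back for unmanaged machines) can be sketched as follows. The paths, the fallback scheme, and the function names are illustrative assumptions, not the real template values.

```python
# Hypothetical map of what the Linux/Windows VM templates pre-install.
PREINSTALLED = {
    "jdk8": "/opt/jdk-8",
    "jdk11": "/opt/jdk-11",
    "jdk17": "/opt/jdk-17",
}

def normalize(label):
    """Fold 'jdk-8', 'JDK8', etc. onto the canonical 'jdk8' form,
    so a typo in the label still hits the same tool definition."""
    return label.lower().replace("-", "")

def resolve_jdk(label, managed=True):
    """Return the JDK location for an agent label: the local,
    provenance-controlled install on template-managed machines,
    a download fallback on unmanaged ones (e.g. the PowerPC agent)."""
    tool = normalize(label)
    if tool not in PREINSTALLED:
        raise ValueError(f"unknown tool label: {label}")
    if managed:
        return PREINSTALLED[tool]
    return f"download://{tool}"

print(resolve_jdk("jdk-8"))                 # /opt/jdk-8
print(resolve_jdk("jdk17", managed=False))  # download://jdk17
```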
A
So, even though things quickly went out of scope, it was clearly a great learning session, and it's a great opportunity to improve the CI service we provide for our contributors. And the security team agreed to upgrade their pipelines accordingly to use them, so that's cool. That's really cool: less work for us. So sorry, security team, if you hear this, but they volunteered and told us that it's okay for them. They haven't started to use the Windows image yet, but I hope it will be useful for them.
B
Oh, hi Damien. I added some notes there about implied labels: our duplication of "jdk8" and "jdk-8" could actually be resolved by the Implied Labels plugin, which lets us say that one label implies another. However, it's not configurable as code with JCasC yet, and so that gets in our way. I think the technique you used is the most "configuration as code".
A
One: what about the infra reports job that has to be migrated from trusted.ci to infra.ci? Hervé has split the workload on that one. There are two matters. The most important one is switching the GitHub bot user to a GitHub App and fixing the permission issues, whose consequence is the issue 2788 that was raised.
A
So
we
have
to
work
on
that
part
to
avoid
authenticating
to
jenkins
ci
organization
on
github,
with
a
technical
user
and
use
instead
of
github
app,
so
work
in
progress
on
that
part.
That
means
changing
the
way
we
retrieve
the
token
so
that
part
is
handled
by
rv
currently
and
on
my
site,
I'm
working
on
since
that
job
need
to
run
regularly
and
trusted.
Ci
is
facing
peaks,
as
recommended
by
danielle
migrating,
that
job
to
infrasi
instead
is
better.
A
That
allows
us
to
also
gain
some
money
because
it
can
be
long
running
for
some
parts,
so
using
pods
on
a
virtually
infinite
capacity
adapted
to
the
right
sizing
on
infra
ci
will
help
us
and
wouldn't
block
other
builds
and
trust
it
bump
and
bruises.
I
won't
go
into
details
unless
you
want
to
discuss
that,
but
we
have
reached
a
point
where
the
containerless
rootless,
img
used
to
build
docker
without
docker
on
kubernetes
pods
isn't
working
anymore,
so
either
we
change
tool
or
thanks
to
the
work
that
stefan
did.
A
It's
four
past
four,
so
is
there
any
other
priority
topic
for
the
rest,
these
are
miners,
so
we
can
delay
to
next
week
unless
there
are
import.
A
Yeah, that one: it's done, I've closed the issues. IRC notification, and the topic of the prepared agents: it works, thanks Hervé. The missing piece was the Puppet master, which is half automated, half manually managed, and we had to run the Puppet agent on the machine manually.
B
I have to drop; see you later, Mark. Thanks!
A
The Docker image is up, updated to 2.337. Sorry: three, three, seven. Nice, thanks!