From YouTube: 2023 03 07 Jenkins Infra Meeting
A
Going around the table, we've got myself (Damien Duportal), Mark Waite, Stéphane Merle, Bruno, Martin, and Kevin Martins. We are six. Yes, that's a good start. Announcements: no weekly release planned today — the plan changed. Yesterday the weekly was moved to tomorrow, because there's been a pre-announcement of a security advisory and the weekly release is part of that advisory.
C
No — in order. The release pattern is: we need to deliver a security release for the weekly. At the same time, we do the LTS, so that weekly users aren't forced to switch to LTS.
A
Or at least the main branch — the master branch, in the case of Jenkins — comes first. So, most of the time on the hidden repository that the security team uses, the pattern is always the same; it's like fixing a nasty bug. First you start on the master branch and ensure that the weekly — or at least the master branch — has a correct CI and doesn't suffer from the issue anymore, whether it's a security issue or just a bug. Then you consider the backports to the LTS lines that were forked from a previous weekly.
A
So then you backport: you cherry-pick the commits, if possible. Sometimes you need to adapt things as well; that can be a complicated process. So tomorrow will be release day: all these releases where the backports happen, including the weekly branch, will happen tomorrow. It's also the day of a new LTS — hence the announcement. So, no weekly for today; back to work for us.
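The backport flow just described — land the fix on master first, then cherry-pick it onto the LTS line — can be sketched with plain git. This is a minimal sandbox illustration (the repository, branch, and file names are invented; the real flow happens on the security team's private repository):

```shell
#!/bin/sh
set -e
# Illustrative throwaway repository, not the actual Jenkins release repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com && git config user.name dev

echo "v1" > core.txt
git add core.txt && git commit -qm "initial"
git branch stable-lts                # LTS line, forked from an earlier weekly

echo "fixed" > core.txt              # 1. fix lands on the default (master) branch first
git commit -qam "Fix nasty bug"
fix=$(git rev-parse HEAD)

git checkout -q stable-lts           # 2. then backport: cherry-pick onto the LTS branch
git cherry-pick -x "$fix"            # -x records the original commit id in the message
cat core.txt                         # prints: fixed
```

When the fix doesn't apply cleanly to the older LTS code, this is the point where the "adapt things" step happens — resolving the cherry-pick conflicts by hand.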
A
Reminder: tomorrow will be a big day, with a security advisory, a new LTS, the previous LTS being updated, and the weekly being updated. I wonder if some plugins are also concerned by the security advisory — I don't know, so let's check. So the rundown is: updated weekly, former LTS, new LTS.
C
Oh — actually, I guess I have a question. Remind me how ci.jenkins.io plugins are managed: I did an upgrade of the plugins on ci.jenkins.io over the weekend. Was it okay that I did it from the user interface, or should I have done it from a configuration-as-code file?

A
No. So, the next weekly should happen next week, the 14th of March. I don't remember the expected number — 2.39-something; three, maybe.
A
First of all, what tasks were we able to finish during that milestone? I'm taking them in the order on my screen, not in order of priority. The Azure service principal credential would have expired tomorrow — for the publick8s cluster, I guess — so we were able to rotate the credential and update the cluster. That required an announcement, because the LDAP and release CI services had to be restarted, and that can take from one to five minutes.
A
We haven't seen any complaints. We sent messages by email two hours prior to the operation — it could have been one day before — and we didn't see any complaint, but for sure there was some outage. Stéphane and I saw that ci.jenkins.io was already under heavy load, so the amount of requests queued and waiting while reconnecting to LDAP took its toll on the performance of the system, but it was still working as expected — just a bit slower during those five minutes.
A
Mainly, as pointed out by Daniel Beck, repo.jenkins-ci.org is the one that could be impacted, but we don't have HA for the LDAP. Next issue: "attempt to skip artifact caching proxy failed". That wrapper issue was opened by Basil and has been closed. There were two main issues, which you can see below.
A
First, the root cause was the fourth one here: a 502 Bad Gateway error, due to the persistent data for the ACP instances — some of them were already full at 50 gigabytes, so we had to increase the size of the persistent volume. That was an opportunity for us: losing the data inside this persistent volume is not a problem, it's a cache that we rewrite. We use persistence so that we don't re-download everything all the time.
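For context, growing a persistent volume of that kind is, on storage classes that allow volume expansion, a one-line patch. A minimal sketch, assuming an invented PVC name, namespace, and target size — not our actual manifests:

```shell
# Request a bigger volume; the storage class must set allowVolumeExpansion: true.
kubectl patch pvc acp-cache -n artifact-caching-proxy \
  --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

# Watch the resize being applied (it may sit in FileSystemResizePending
# until the pod remounts the volume).
kubectl get pvc acp-cache -n artifact-caching-proxy -w
```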
A
The second part was an issue in the way the skip-caching-proxy label and the pipeline library were processed; that was fixed by Hervé really quickly to unblock the blocked builds. Both of these separate issues are closed, and we were able to confirm yesterday that every build reported as broken by one or the other issue is working as expected again. That includes the failed plugin builds due to a missing Windows configuration file — that was the same issue.
A
Just to note that we opened an issue to track the work on monitoring the usage of this persistent volume. I'm not sure if we already have this in Datadog and it's only a matter of adding an alert, or if we have to add a probe to monitor it, or do something else. We need to do this — not immediately, because we increased the volume, but it's something to keep in mind.
B
Yeah, and now that it's live again, the extension indexer is using the artifact caching proxy. I think most of the plugins have now landed in this mirror. So, yeah.
C
And I think — oh, go ahead, Damien. No? Sorry. I think that may hint that the artifact caching proxy is probably becoming almost the size of our data volumes on repo.jenkins-ci.org, right? It's almost becoming a full and complete cache, thanks to things like the backend extension indexer that builds every single plugin — tragic as that may be. Okay, thanks.
A
Okay. Last week I was off most of the week, but Stéphane and I already took the issue — a PagerDuty alert that was back there again; good catch, folks. It looks like on Azure, the Windows Server templates, when given a big disk, try to create C and D drives partitioned inside the disk.
A
Just a reopening and reclosing of the Maven 3.9.0 issue. As I pointed out, the Maven used by the backend extension indexer — and eventually other jobs — was still a 3.8 version. We should automate the update to Maven 3.9.0. We are currently debating and nitpicking about whether this should be synchronized with Puppet or be autonomous, which means we're almost there — it's a good sign when you're down to nitpicking.
A
"Old email provider deleted": that was a user asking for a change of the email associated with their jenkins.io account. They were able to give enough proof, and the risk was quite low because that person does not maintain any plugin. We checked the emails, and that person also discussed it with us, so the amount of proof was good enough to trust that challenge.
A
The team took care of that. We used the issue to have an audit trail log, and we took care of not updating the configuration of release.ci until today — that should stay the same until tomorrow. And that's all. Finally, "unable to login": that's a classical one. Someone opened an issue with an account problem, and we don't know their username, we don't know their email — they just didn't fill in the form. So I closed the issue, because it wasn't filled in as expected.
A
We asked for proof from the Jira/Atlassian side, for the Atlassian organization. The person says they are part of the Opsgenie organization on github.com. I guess I will write that even if they can prove they are under that GitHub organization, I don't mind, but that email should still be valid under Atlassian, so they have to contact their internal IT — because here it's about taking over the maintenance of a plugin that hasn't been updated for months, if not years.
A
We have one issue which is not a direct action expected from the Jenkins infra team, so thanks, Basil and team, for taking care of discussing with the maintainer about migrating one of his plugins inside the Jenkins organization.
B
In 2019.
A
I don't think so. Okay — so if no one objects, that one will move to the upcoming milestone, and we'll have to check on AWS which accounts are still CloudBees accounts.
A
It was the ability to connect to Azure from trusted.ci, which runs on AWS. Stéphane and I were able to rotate the credential and apply it, but we wanted to start managing these credentials as code.
A
With the Terraform Azure provider, we should at least have an audit log of when we rotate the credential. The issue we had is the permission model that we use: we don't want the technical user managing Terraform Azure resources to be an administrator of the organization, so that if that account is compromised, it cannot access the billing — and a lot of other issues we want to limit.
A
We have the same for cert.ci — exactly the same. The difference in the case of cert.ci is that we want to switch that credential to no credential at all, using a capability named workload identity management, which is possible in the case of cert.ci because it runs inside Azure itself. So, if we start managing it with Terraform — or directly in the Azure UI — we can tell the system…
A
…that any request sent from that virtual machine will be associated with that account — no need to insert a credential inside Jenkins. The team confirmed that it should work with the Azure virtual machine plugin inside Jenkins, so we need to validate that assumption. It's a two-step process: first, start managing the cert.ci virtual machine with Terraform, then create the workload identity and test it. The same issue remains open, because the current cert.ci credential was generated manually to unblock the Jenkins security team.
A
We had the same — and I forgot, so let me add a comment here, and if no one objects I will add an issue — for ci.jenkins.io, whose Azure credential also expired during the weekend. That's the issue Mark mentioned a bit earlier, that you tried to fix. It wasn't clear from the error message whether it was related to the way ci.jenkins.io works or something else — it was something else, in that case. Thanks, Mark, for taking care of that.
A
So let me write this down. I did exactly the same: rotated the credential, inserted the new credential, restarted the virtual machine, and checked that everything works again. So, the ci.jenkins.io Azure credential — the idea is that if it worked for cert.ci, we should do the same for ci.jenkins.io, which is also an Azure virtual machine and therefore also a candidate for workload identity management. That means no credential — better, no need to rotate it. Same as cert.ci: temporary credential, and a candidate for workload identity management.
A
Mark — jenkins.io has signing-certificate-related issues; we have two open issues right now. What is the status? Did you have any news about the DigiCert renewal?
C
My apologies — I have the action item to send them a message. I've received no response from them to Stéphane's and my attempts, and I haven't yet asked them again, so I will do that today. Sorry about that; I've got to raise it with them. We've now got, what is it, 21 days or less before it expires.
A
Two open issues, yeah. So, Stéphane, is it okay for you if you keep working on that with Mark? The goal is that I'm there as a fallback for Mark — is that okay for you, Stéphane?
A
Okay, so more news next week, we'll see — thanks, Mark. Okay, these issues will keep being moved from milestone to milestone, so I will add them to the next milestone.
A
Okay — which ones did you finish or work on since last week?
A
Okay, the pipeline steps generator. I've got a question here — an honest question: were we able to check the impact on JFrog?
A
So, nice work — that means we should continue working on this element. But that's a really important question, because the time spent on this one might not be worth the effort. In this case, though, it was.
A
I saw one major benefit of Hervé's work on this — let's say exotic — repository: Hervé was able to find out how we retrieve remote objects, using URL connections and such. We are using repo.jenkins-ci.org directly, and the jobs were using old Java forms of HTTP clients. So at least finishing this element will allow us to prepare the future, for increasing the Java version from JDK 8 or 11 to 17 or even more, which would have been a blocker right now.
A
So at least it's a kind of cleanup project — that's the value I see there. So, Hervé, I absolutely defer to you, and if you see that it's useful to continue on these elements, please go ahead; your time is well spent on that. I think it's still worth the effort. The ACP bandwidth metric was the top priority; now that it has decreased, it's still important to get these done.
A
Looks good, cool. So, on the topic of ACP, or JFrog: "realign the repo.jenkins-ci.org mission". Today's publick8s operation, with the LDAP restart, shows that we need to find a way to have a highly available LDAP — still to do. I started yesterday to work on a local LDAP with a set of test data that is already in the OpenLDAP image, and I'm—
A
Wait — I want to see how to fine-tune the detection of when one of the replicas goes away, because of maintenance or a crash: how does it behave, and how much time does it take before the load balancer is able to switch to the other one, in the context of Kubernetes?
A
And we can accept that accounts.jenkins.io is down for five minutes — the time for the right instance to be restarted. Right now it's still not a problem, but it will become one if we need to enable authentication, which is the next topic. It seems, Mark, given the top consumers you saw, that we might still have to go down the path of eventually enabling authentication for the mirror repositories.
C
Lots of requests from a few IP addresses in the high lists to the maven repo1 cache. But Basil's point was that we need to understand whether those requests to the maven repo1 cache are in fact generated by Jenkins-related activity, or whether they are just someone asking for a copy of Maven repo1. I think he's got a good point there, and I think there are ways to answer that question, but it will need some further looking at the data — because, as Basil said, we don't want to enable authentication…
A
That makes sense. So we need to challenge JFrog on that part, and I believe we maybe have to ask Stephen Chin first again, to give them a status showing that we made some efforts. What do you think, Mark?
C
I think it's not so much that we need to challenge them; rather, we need to do the data analysis and bring the analysis of the data to them, to show: look, here was our usage before, here's our usage now; here were our key consumers before, here is the reduction of those key consumers now. And that brings in the dismaying part — that the largest single consumer is still the largest single consumer, and we're still working on that topic with them as to what we do about it.
C
JFrog, right — and that's why we've got that conversation with JFrog: look, we've got this large consumer; our attempts to find them have failed; our attempts to appeal to the abuse-reporting organization of their ISP have failed; and we're sort of out of options we can take, because we don't have control of the networking endpoints on that service.
C
Well, there may be — so I may need further help, and I'll ping the infra team separately, about being sure that the IP addresses in the report are not ours. There was one that I saw in the report, from DigitalOcean in Frankfurt, Germany, that may in fact be one of ours. I'll look at the most recent data — it just arrived today, covering the last 12 days — so I have a great excuse to do some more data analysis.
A
Okay, so we need to check the difference between our repository that mirrors Maven Central and Maven Central itself. Okay — so that one definitively goes to the next milestone.
A
We had issues, but we also had to closely monitor the capacity of ci.jenkins.io to process builds, particularly the BOM builds, which are again more and more frequent. I assume the LTS and the security advisory might have an impact: given two or three core versions, this increases the number of possible builds and plugin envelopes.
A
DigitalOcean allowed us to increase our limits. We haven't checked the impact on spending yet, so we'll have to closely monitor that on DigitalOcean.
A
We can also start monitoring a bit more closely, and study whether we could vertically scale each Kubernetes node. Right now each node is able to host three pods at the same time, given the memory and CPU limits we use. So we could study increasing the size of these machines: given that builds are more and more frequent, that could help us keep the same number of nodes while handling more capacity. And about that issue…
A
On ci.jenkins.io, you can check it — anyone can, even logged out. Let me try in real time: I'm not logged in, so it's public information, and you can see a diagram. The colors are explained in the legend: in this case, the gray one is the number of builds in the build queue waiting for an executor, and the green one is the number of online executors.
A
So that's the first step, to give something actionable to the developers: if availability is slow, they can check this one and see, "oh, I see that we currently have 300 builds in the build queue — that's why you are waiting." The next topic, and the next issue: we will have to separate the workload capacity between BOM and plugins.
A
Is that clear? Any questions, objections, anything unclear on that topic? Thank you. So, better to close this one — close it and open a separate issue to split the workloads. One major issue that has been solved — that was the emergency: the update center job was failing, due mainly to the guy speaking.
A
We had dependencies on a command named blobxfer, which is a kind of rsync for Azure blob/bucket storage, and when we worked on the Let's Encrypt update to support Azure DNS, it broke Python. That big machine, pkg.origin.jenkins.io, is currently used for synchronizing the different plugin updates — even though the update center runs on trusted.ci, we use an agent that connects to pkg to run something that pushes to the mirrors — and that command, blobxfer…
A
…was installed manually and not managed, so we missed the point where it broke. We were able to fix it by playing around with Python, and we have fixed it on the machine; it was able to get back to work. Now, before closing the issue, we still have to track the installation of blobxfer: we moved it into all the shell scripts as a checked requirement, so the developer of the script can now control the version that will be used.
A
We now need to tell Puppet to check that requirement and install it if needed. We opened a separate issue to ensure that all the blobxfer calls get replaced by the new az command line, which doesn't use Python: it's statically compiled, easier to install, and has way more features — the latest version of blobxfer dates from September 2021.
A
During the two or three days while the update center failed, some of the plugin updates were missed by the synchronization. So a plugin is seen as tagged and released, its HPI file is uploaded on repo.jenkins-ci.org, but it's not available on the download server.
A
It sounds like that time window wasn't big enough, so we will have to run it a few more times right now. But there is also a manual version, which used to be the former method. It's described in the runbooks (private documentation): we have to run the update center project locally on our own machine, so we won't mess with trusted.ci and the current update center.
A
That operation should generate a JSON file with the list of missing plugins, and we can then, on pkg/origin, as part of the procedure, upload that file and run the synchronization script one time, concurrently with the current one. That should fix it, at least for this case. So definitively, this issue goes to the next milestone as a priority.
C
Thank you, thanks very much. So I assume it would not help to have a list of exactly which — the process you were describing sounds like it will generate the list of which plugins were missed and will then synchronize them. Because I could read the old email messages that I get from repo, which tell me what's been released, but that relies on me doing a good job of reading, and it sounds like the tooling will do a much better job of that.
A
Right. We had this one on hold because of all the activity and incidents; as far as I can tell we are back on track to continue the migration of the clusters. Can you give us a status of what you should be able to work on — the migration of privatek8s and publick8s as well?
B
So there are two or three services we can move first.
B
Yeah — the two Twitter bots, oops. But we can run the migration, and then the procedure; the next day we can, I think — there's no prejudice to shutting them down.
A
And then also release.ci.jenkins.io — that will be the next step.
A
I will let you write down on the issue what you plan for that, once you have finished with the rest. Is that okay? Yes? Cool.
A
We have a few issues now. "Create an updatecli manifest to update the Kubernetes cluster depending on the CI config": that's the part where we increase the capacity of ci.jenkins.io. We have two locations where we set the maximum number of pods per cluster — one on the Jenkins configuration side, the other on the Kubernetes quota side. We need both to be sure that it doesn't behave unexpectedly.
A
Yes — and we don't want them to be able to access the credentials either, because this credential implies some risk. The more people have access to it, even inside the private network, the more likely someone could have a compromised machine that tries to authenticate to release.ci and extract elements.
C
It's not occurred again. So I was surprised — I don't understand why — and I'm not overly worried about it.
A
Yes, let me add the issue. Okay, so I never used that — where is the…
A
…problem. Thanks. We'll have to analyze this one to see what happened, what the message is, etc., because it depends on the kind of agent: if it's container agents that are filling the disk, that one might not be easy to solve.
C
The reason I'm a little worried here is that we've just recently added the AWS SDK plugins as dependencies managed by the bill of materials, and the AWS SDK plugins are huge — so we could in fact have increased our disk use. That was my only concern there.

A
Makes sense. All right, so for that one I need to check that "keep forever" is set, and I will add "keep". Okay, got it, thanks.
D
We
did
upgrade
everywhere,
the
fdcli
2
is
0.46,
I,
think
and
and
this
one
is
back
in
the
correct
two
spaces.
In
addition,
we
still
need
to
check
if,
if
it
repair
the
the
one
that
were
broken
with
four
indentation
I've,
not
seen
any
version
of
bad
ones
since
then,.
D
About five minutes before this meeting, I had a green run of my version of the updatecli flow running through a Jenkinsfile. I'm doing it in three steps. The first step is the new updatecli — the new Jenkinsfile, sorry — dedicated to the updatecli version. Then I ask for the controller to deal with that file, and if it's working fine, I will remove the GitHub Action that was doing that for us until now.
A
No
problem,
finally,
the
issue
I
mentioned
earlier
open
by
RV.
If
no
one
objects
given
the
walk,
the
loads
I
will
move
it
to
the
backlog.
The
goal
is
to
ensure
that
we
replace
all
blob
XFL
to
Azure
CLI
call,
which
will
require
refining
the
exhaustive
list
of
these
elements.
Machines
templates
then
ensuring
that
we
have
Azure
CLI
and
then
changing
them,
one
of
the
user,
keeping
in
mind
that
some
might
be
hard
to
test
or
verify
until
it's
freely
tested,
particularly
the
new
plugin
synchronization
to
mirrors
or
the
Jenkins
score
release.
A
If
you,
if
I
forgot
some,
we
have
a
proving
I'm,
not
a
spammer,
so
it's
an
account
issue
so
that
one
will
be
added
and
we
will
see
what
is
the
requests
migrate,
update,
Jenkins
IU
to
another
clouds
that
machine
PKG
origin
Jenkins
is
an
upcoming
machine,
is
a
mission
that
should
be
migrated
out
of
AWS
as
soon
as
possible,
especially
given
the
last
changes
we
had,
we
will
have
to
go
back
on
this
one,
given
the
cost
for
the
bandwidth
that
it
cost
us
three
to
four
K
per
month.
A
That
one
will
be
worked
on,
because
some
of
the
blob
XFL
fixes
for
the
update
Center
will
be
part
of
this
one.
That's
why
I'm
mentioning
it
here
valid
SSL
certificate
for
search,
CI
jenkinsayo.
A
I saw "move the remaining ACI workloads to Kubernetes, to stop using ACI at all" — that requires adding a Windows node pool. If it's okay for everyone, let's wait until we finish migrating release.ci into the correct cluster, and then we'll see what we can do. And one last bit that might drive our month of March: the Ubuntu migration. That one is important and I want to migrate it; it's currently on the backlog. Let me add it to the meeting notes.
A
The Ubuntu 20.04 campaign — the bad twist is that Ubuntu Bionic (18.04), which we use almost everywhere on our virtual machines, is end of life in April this year. We might have…
A
The good thing is that most of our Puppet infrastructure, as demonstrated by Stéphane and Hervé, works very well on Ubuntu 20 and Ubuntu 22, but we will have to migrate everything to a recent version. My proposal is to focus on Ubuntu 22, because it's the latest LTS and because Ubuntu 20 was a mess with Python packages. For instance, the createrepo tool used for generating the Red Hat repositories of Jenkins doesn't exist in any form on Ubuntu 20 and is not installable or compilable — it breaks due to the way Python packages are done on that distribution.
A
A note about pkg.origin.jenkins.io: most of the testing can be done since we use the Docker image for local Puppet testing, so we should start by checking which packages — such as createrepo and the rest — exist on the current machine, and see if we can find a new way. The only way to really test it will be to migrate the virtual machine in the future, which implies taking a snapshot of the current file system, upgrading the machine, and restarting it.
A
So if it's okay for everyone — I've mentioned that I'm not sure who will be able to work on it in the upcoming milestone, but for sure we'll have to work on it during the month of March — I propose we start the Ubuntu 22 campaign as part of that milestone. Some work has already been done, and then, in the following milestone, we'll work on migrating to Azure.
C
Like the Blue Ocean container, we have operating-system container images — like the CentOS 7 controller image — that eventually we want to end-of-life, because their upstream is end of life.
C
The info here is just to say that a Jenkins Enhancement Proposal will be coming. It will propose extending Jenkins core with a way to disclose to users that something is approaching end of life and then, based on a date stamp, that it has reached end of life — and it will have to have a way to represent things that are not immediately obvious.
C
So there's some discussion needed there, but just be aware that end of life is getting attention from the Platform SIG and the Docs SIG. And, I guess, maybe one more announcement while I'm here doing announcements — Bruno and Kevin: the second item is that the Docs SIG has decided that around April or May we will transition the installation documentation from describing how to install with Java 11 to describing how to install with Java 17.
C
Java 11 will continue to be supported, but we will make that transition because we know that Debian 12 will not deliver Java 11 at all. Now, that doesn't affect the Jenkins project, because we deliver Temurin, and Temurin will work just fine on Debian 12 — but we don't want two sets of instructions.
C
I'm not really ready to say that last statement that Damien just put in there, and I want everyone to know I didn't say that — but I appreciate Damien's leading.