From YouTube: 2023 04 11 Jenkins Infra Meeting
Let's get started with announcements. The weekly release is okay: it has been done up to the signed WAR, which went well, then we hit a hiccup during the packaging. Restarting the build fixed the issue and I've just triggered the container builds, so I expect the release, with the last backlog items, to be finished in the upcoming hours.
A note about that: I think it would be worth opening an issue about the problem that has been occurring during the packaging step. I'm opening the jenkinsci/packaging repository, which has scripts for the different kinds of packages; we are looking at the script in charge of publishing the WAR once generated.
The reason is that the WAR variable points to a mount point behind which there is a blob storage account, which is kind of like S3, but for Azure. It's not a POSIX-compliant system: it's an object storage, not a file system.
We use Kubernetes with a CSI driver, which gives the impression you are browsing a file system directly while in fact it's sending requests to a remote HTTP server, so it's not fully POSIX. That driver uses the dreaded CIFS system from Microsoft, which may or may not work; what is sure is that this CIFS implementation is not POSIX.
So everything here tries to run system calls which are POSIX, but the implementation seems to panic. Not only do we get a "permission denied" error, which makes no sense since the permissions are fully 777 on that file system, but it also reports timeouts, which is the worst one: writing a file times out. That's something you haven't seen in years, right? So yeah, I think it's worth an issue to explain that we have to retry in that case; it happens from time to time.
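To make the retry concrete, here is a minimal sketch (not the actual packaging script) of wrapping a flaky write in a retry loop that tolerates the spurious permission-denied and timeout errors described above; the function names and the retriable error set are assumptions for illustration.

```python
import errno
import time

def retry_io(operation, attempts=3, delay=1.0,
             retriable=(errno.EACCES, errno.ETIMEDOUT)):
    """Retry an I/O callable when the non-POSIX mount surfaces spurious
    permission-denied or timeout errors; re-raise anything else."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except OSError as exc:
            if exc.errno not in retriable or attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff

# Illustrative flaky operation: fails once with EACCES, then succeeds.
calls = {"n": 0}
def flaky_write():
    calls["n"] += 1
    if calls["n"] == 1:
        raise OSError(errno.EACCES, "Permission denied")
    return "written"

result = retry_io(flaky_write, delay=0)
```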
Baseline selection for the next LTS should happen by next Wednesday, so 2.400 or 2.401 are the likely candidates.
They gave us enough money to continue at the defined rate, which clearly covers the month of March, where we overused DigitalOcean due to the AWS issues. So thanks, Hervé: that's really good news, and it's also a great opportunity to see if we can continue the sponsorship.
A
We've
closed
the
issue
in
the
ldesk,
because
that
issue
was
only
about
infrastructure
tracking,
but
we
expect
eventually
stop
me
if
I'm
incorrect,
but
the
blog
post
greets
them,
because
that's
really
nice
of
them
to
help
us
on
that
area
and
to
be
so
so
quick
to
do.
That
is
that
correct
survey.
Kevin.
Thanks to your work, we were able to ensure that every agent is now mounting /tmp and the default .m2 repository folder, even if the latter is not always used by Maven builds because we sometimes specify another one. They are now mounted as emptyDirs: a directory directly mounted from the virtual machine hosting the containers, as opposed to writing by default inside the container file system, which is terrible because if you try to write on /home/jenkins, for instance, that will be written on a low-performance system.
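For illustration, here is roughly what such an agent Pod could declare — the container name, image, and mount paths are hypothetical, not our actual pod templates: /tmp and the Maven repository as emptyDir volumes, which Kubernetes backs with the node's disk and wipes when the Pod stops.

```python
# Sketch of an agent Pod spec mounting scratch paths as emptyDir volumes,
# so writes land on the host VM's disk instead of the container's
# copy-on-write filesystem. Expressed as a plain dict for illustration.
agent_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "spec": {
        "containers": [{
            "name": "maven",
            "image": "maven:3-eclipse-temurin-17",
            "volumeMounts": [
                {"name": "tmp", "mountPath": "/tmp"},
                {"name": "m2-repo", "mountPath": "/home/jenkins/.m2"},
            ],
        }],
        "volumes": [
            {"name": "tmp", "emptyDir": {}},
            {"name": "m2-repo", "emptyDir": {}},
        ],
    },
}

# Sanity check: every volumeMount must reference a declared volume.
declared = {v["name"] for v in agent_pod["spec"]["volumes"]}
mounted = {m["name"]
           for c in agent_pod["spec"]["containers"]
           for m in c["volumeMounts"]}
assert mounted <= declared
```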
Since we run three Pods at the same time, and since once a Pod is stopped its emptyDir is cleaned up, we were able to say: instead of 200 gigabytes per machine on AWS, we can decrease to 90 gigabytes. That should allow us to save some bucks. It's not a lot, but it's worth not using too much. On DigitalOcean it's a bit different: 200 gigabytes is the default size of the machine.
We cannot decrease it, so we have way more space on the DigitalOcean Kubernetes nodes than on AWS, and the issue about running out of space for the BOM builds is definitely closed. Thanks, Hervé, as usual. If you see anything related to disk space usage for the BOM builds on ci.jenkins.io, or for any other builds running on containers, please open a helpdesk issue: it might or might not be related to this.
Next, a problem finding an artifact in a remote repository: Hervé was able to fix the issue for these users. It's the second or third time we have users building a plugin that uses artifacts from a repository which is not ours, so we need to add an exception.
Hervé now expects from me a method to get the list of the mirror repositories that we have on JFrog. That's an API call that anyone can do — it doesn't require authentication — but I need to share it, and we will continue on an upcoming issue. The goal is to check with the Jenkins security team whether each of these repositories is acceptable, and whether we should mirror them or keep the exception.
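As a sketch of consuming that unauthenticated API call: Artifactory's repository listing returns a JSON array whose entries carry a `type` field, which can be filtered for remote (mirror) repositories. The repository names below are made up for illustration; this only shows the response parsing, not the HTTP call itself.

```python
import json

def remote_repositories(payload):
    """Return the keys of remote (mirror) repositories from the JSON
    body of Artifactory's repository-listing endpoint, sorted."""
    return sorted(r["key"] for r in json.loads(payload)
                  if r.get("type") == "REMOTE")

# Sample response shape; repository names are invented for illustration.
sample = json.dumps([
    {"key": "incrementals", "type": "LOCAL"},
    {"key": "maven-central-mirror", "type": "REMOTE"},
    {"key": "some-vendor-mirror", "type": "REMOTE"},
])
```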
Note that it's not a problem to have the exception in the settings, because the goal of the ACP (artifact caching proxy) is to decrease the bandwidth from the JFrog instance. If we have an exception like this one, it means our agents connect directly to the other repositories and don't consume through JFrog, so it's not a problem for the goal of the ACP itself.
A
It's
a
problem
to
maintain
the
list
of
exception,
though,
because
that
could
cause
issues
like
this
one
and
also
that's
the
point
about
the
that
could
be
discussed
on
the
plug-in
IL
score
area.
Should
we
score
a
plugin?
Should
we
add
one
new
score
that
will
say:
hey?
If
you
don't
use
the
g
frog
mirror
repository,
meaning
with
infrastructure
and
Jenkins
security
analysis.
Then
you
might
lose
a
bit
of
scoring
or
if
you
are
maybe
a
positive
one.
Yep, next issue that we've gone through: the accounts and passwords one. We were able to successfully renew the signing certificate for Jenkins core, so congratulations to everyone: that was a huge team effort. We did it with the 2.400 version and with the latest Jenkins LTS, along with updated GPG keys. So now we know how to run it, and the expirations of both the GPG key and the DigiCert code signing certificate are in three years, both of them, so we will change the two of them at the same time.
There should soon be a postmortem on what we could improve, including doing it six months in advance next year, so we are sure it's not late. The goal will be to avoid reaching the expiration date when we switch the keys.
Thanks, Stéphane, for taking care of that huge one, which generated a lot of discussion, changes and fixes. We had leftovers: around sixty gigabytes of old backups and the like; we had 100 gigabytes of non-discarded build logs; and a lot of builds were storing a lot of archived artifacts on the file system. So we cleaned up everything we could.
Everything has been done here, so the issue was closed because we were able to go below the 80 percent usage threshold. Issues have been opened for all of the follow-up fixes, so I will come back to this later.
Okay, let's proceed. We have a lot of ongoing issues and new issues as well. For the issues we have there, the goal is a kanban rule: do we keep working on each one, or do we postpone it? I propose to postpone the repo.jenkins-ci.org migration: I haven't had time during the past three weeks to work on this topic, namely whether the high-availability setup would hold up if we enabled authentication on the mirrors right now.
Ubuntu 22.04 upgrade campaign: that went pretty well. Hervé and I were able to deliver this one for the agents, so now all the ci.jenkins.io agents are using Ubuntu 22.04. Everything went well, with one tiny exception.
Switching to Ubuntu 22.04 broke some Ansible test cases in the packaging when using the old Amazon Linux 2. That might be related to the systemd and cgroups updates: Ubuntu 22.04 features cgroups v2, which changed the way control groups are run by the underlying container runtime, and it's not the only major upgrade. Thanks to Basil, though, the work has been done nicely, especially bumping the Amazon Linux operating system version, which works very well, alongside other JDK-related fixes.
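When debugging this kind of container difference, a quick check of which cgroup flavour a host runs can help. Here is a simplified heuristic, sketched as a pure function over `/proc/mounts` content so it can be shown offline (real detection has more edge cases, such as hybrid setups):

```python
def cgroup_version(mounts):
    """Guess the cgroup version from /proc/mounts content: a cgroup2
    filesystem with no v1 'cgroup' mounts means cgroups v2 (unified)."""
    fstypes = {line.split()[2] for line in mounts.splitlines() if line.split()}
    if "cgroup2" in fstypes and "cgroup" not in fstypes:
        return 2
    return 1 if "cgroup" in fstypes else 0

# Illustrative /proc/mounts excerpts (paths real, content abbreviated).
ubuntu_2204 = "cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec 0 0"
amazon_linux_2 = "cgroup /sys/fs/cgroup/memory cgroup rw,memory 0 0"
```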
Cool. So you will have low bandwidth this week, so I don't expect you to spend time on Ubuntu 22.04.
If it's okay for you, I plan to check and eventually upgrade the node groups we have on our Kubernetes clusters: check the Ubuntu version, if any, on AKS — I think it's AKS — and eventually DigitalOcean. If I see there is a possibility to upgrade the underlying node groups, I will start the operations to do it during this week. Any objection on this one? No? Great. And eventually docker-openvpn: I'm sure this one uses it as a base image.
Okay, let's continue with the tasks: document the code signing certificate renewal process. That one will migrate to the next milestone. The pull request is open, so I'm waiting for a review approval, and if everything goes well we merge it; worst case, we have a few changes to make to the doc. That one automatically moves to the next milestone.
For now, I'm stuck with a problem that I don't quite understand, but I've opened an issue with Packer, so I'm hoping to get some directions to follow. I'm stuck with arm64 versus amd64 not being allowed to be used.
We don't have access to the SendGrid-configured email sending server for accounts.jenkins.io, so we cannot check when an email doesn't reach a remote machine. So, if it's okay for you, Hervé, I will comment on this issue: the goal is that we now have access to the Mailgun account, at least for now.
The amount of email is low, so we should stay in the free tier, as I understand it. I need to create accounts for both of you, Stéphane and Hervé, and then Hervé should be able to update the configuration of accounts.jenkins.io to switch to Mailgun. We should then be able to work with that user and solve upcoming issues. Is that okay for you, Hervé?
So I'm actioning that, and I will take care of commenting and reporting on that issue, so you should be able to start working on it as soon as I've sent the Mailgun account.
We have an issue about the artifact caching proxy being unreliable. There were two errors: one on the BOM builds running on DigitalOcean, so we should be able to check it again — we will have to diagnose that first case a bit more. The second was when trying to use all the steps of the ACP VMs; it looks like a lot of network errors, so there are more issues incoming.
I will move it to next week and we'll continue diagnosing, because there isn't anything obvious: it's a low-level thing, especially in the network area. Is anyone willing to take some time on it? By default, I will. One of the main actionables we have here is to change the network where the ci.jenkins.io agents running in Azure are spawned. The goal is to move them to a network closer to the ACP server and see whether the issues keep happening on Azure, and likewise for DigitalOcean.
Let's remove that one from here. The goal is to install the Launchable command line on our Packer images, at least for Linux, to be sure it's available already and doesn't need to be installed each time. Ideally, if you are able to install it on Windows as well, that will help Basil a lot. I'm moving it to the next milestone.
Stéphane, thanks for opening that issue about the migration of trusted.ci.jenkins.io from AWS to Azure. There are three goals. The main one is keeping control of our infrastructure by moving sensitive machines into clouds that any Jenkins infra team member can manage. The AWS account still in use is provided by CloudBees, which is very kind of them because they pay the bill, but it doesn't allow non-CloudBees employees to access the management of these machines.
So the main concern here is safety: moving trusted.ci.jenkins.io and its associated machines, which are in charge of generating the update center, deploying jenkins.io and some other trusted tasks. The goal is to move them into a dedicated network on Azure virtual machines, so we should be able to streamline the management.
Thanks, Stéphane; the to-do list about the expected tasks looks really good. So, are you okay to work on this next milestone? Yes.
An issue that is almost closable: there was a problem with the automatic renewal of the certificate for updates.jenkins.io and jenkins-ci.org. The certificate has been renewed, and we have an event in the calendar in two months to check the next renewal.
The last step before that: we need to enable logging of the crontab-triggered certbot renewal to the syslog system on the virtual machines. That's an option in the Puppet module we use, instead of having a `certbot renew` quiet mode that doesn't help us know what happened.
Most probably, the failure in automatic renewal came from the breakage I caused last month when updating all the Python installations and certbot versions, but we don't really know: we don't have any logs showing the error.
That's why we'll need to be careful next time. Once we are sure the result of the `certbot renew` command is written to syslog, we apply the change, wait 24 hours, check the log, and we should see certbot say: hey, I've tried to renew this certificate and it's not about to expire.
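Once the renewal results land in syslog, a small parser could summarize the outcome per certificate during that 24-hour check. The sketch below invents its own log message format for illustration — it is not certbot's actual wording:

```python
import re

# Hypothetical syslog line shape: "... certbot[pid]: <domain>: <outcome>"
LINE = re.compile(r"certbot\[\d+\]: (?P<domain>\S+): (?P<outcome>renewed|not due)")

def renewal_report(syslog):
    """Map each certificate domain to its last reported outcome."""
    report = {}
    for line in syslog.splitlines():
        m = LINE.search(line)
        if m:
            report[m.group("domain")] = m.group("outcome")
    return report

sample = """\
Apr 11 03:00:01 vm certbot[4242]: updates.jenkins.io: not due
Apr 11 03:00:02 vm certbot[4242]: jenkins-ci.org: renewed
"""
```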
First of all, I tried to detail something that was discussed privately, because it was CloudBees-internal due to the AWS accounts. I've now published an excerpt of the discussion. The goal is to decrease what we consume on AWS; moving the trusted.ci.jenkins.io virtual machine is one of these elements, so there is an issue with a lot of details on the short-term levers for that milestone.
We have work in progress on cleaning up the snapshots created by Packer: that should save almost a thousand dollars per month.
Once that's finished, we have work on trying to optimize the BOM builds. For this milestone, the goal will be to split the node pools between BOM builds and plugin builds, so we will be able to check the CPU and memory usage and see whether we can optimize packing the pods, or maybe move the workload towards the clouds.
We saw 30 terabytes of outbound bandwidth per month, and we don't have to pay for that outgoing bandwidth on DigitalOcean. We wanted to use Oracle a few months ago, but the partnership with Oracle is still — it's not bad, but it's still a tiny partnership. So we prefer going with DigitalOcean right now, because they are really at ease with us, and then we will see about extending things to make that service highly available in the future. But right now the goal is to avoid spending three to six thousand dollars per month.
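As a back-of-the-envelope check on that range, assuming a typical public-cloud internet egress price of about $0.09/GB (an assumption; actual rates vary by provider, region, and volume tier):

```python
# Rough monthly egress cost for 30 TB of outbound traffic.
GB_PER_TB = 1000          # decimal units, as cloud billing uses
monthly_egress_tb = 30    # figure quoted in the meeting
rate_per_gb = 0.09        # assumed egress price, USD per GB

monthly_cost = monthly_egress_tb * GB_PER_TB * rate_per_gb
# About $2,700/month, consistent with the "three to six thousand
# dollars per month" range mentioned above.
```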
That should be a huge win. We also have the trusted.ci.jenkins.io migration, which should help us as well. So that's the state we're at. The proposal is that we start with these elements and then iterate the week after, so that one moves automatically to the next planning session. It will be a long-running issue; sorry for that, folks.
Finally, one last issue related to ci.jenkins.io and the disk-full incident we had. We saw a lot of outbound bandwidth. To summarize the discussion: we want to use the Azure Artifact Manager plugin, which will ensure that ci.jenkins.io starts archiving artifacts inside an Azure bucket. The goal is to reduce the pressure, in terms of I/O, on the data disk used by the controller, and to decrease the storage needs.
Why will that help us with the outbound bandwidth? Because we should be able to measure really precisely the outbound bandwidth caused by stash/unstash towards the Kubernetes clusters on AWS and DigitalOcean, compared to the data downloaded directly through the web UI of ci.jenkins.io. That one first requires configuring the artifact caching proxy; then we will have more discussions, as we have different options that we need to report on that issue.
We need — thanks to both of you for that work when you checked the disk issues — in order to ease operations, to add labels or elements that will help us immediately detect which virtual machines and which metrics matter in Datadog searches.
The documentation shows we can use configuration variables to enable AWS automatic detection and to force some labels, which we can add with the name of the machine or the service; that will help. So for that one, I propose we move it to the upcoming milestone, because it's one or two lines. Any objection? Agreed. I'm in the mood for doing a lot of Datadog these days.
A
That
one
is
related
to
CI,
Jenkins,
IU
disc,
full.
The
goal
is
to
install
the
global,
build
discorder
and
add
a
global,
build
discording
policy
by
default,
unless
pipelines
or
job
configuration
says
something
else
that
should
help
because,
based
on
what
we
saw,
some
of
the
builds
didn't
add
any
so
having
a
global
build
disorder
will
help
a
lot
on
reducing
the
amount
of
storage,
so
that
one
is
also
a
plugin
installation.
How do you feel about this one?
Got it. I've added it to the next milestone and Hervé volunteered. Let's see — no obligation to finish this within the upcoming milestone, especially with the low bandwidth you have. Sounds good, folks?