From YouTube: 2023 07 11 Jenkins Infra Meeting
A
Okay, hello everyone, welcome to the Jenkins infrastructure weekly team meeting. Today will be overloaded, because we cancelled the meeting last week: as far as I could tell I was the only one available to run it, and we had production operations planned that day, so the combination of all of these made it harder.
A
Between last week and this week I haven't had time to finish a comment, but I started an issue about the release process. The goal is to track it somewhere. We already have a helpdesk issue for infrastructure about moving the whole publication process to release.ci, which is something being discussed; but today we want to automate the manual tasks where we create an annotated tag and publish the draft release.
A
That is something that changed compared to six months ago, for instance, and it's a manual step documented in the release process. So I've opened an issue because these steps could be automated, at least for the weekly release. Mark, I saw the description you added about whether we can or cannot automate it for the LTS core release, which should be doable; but for security releases that will need an exchange with the Jenkins security team.
B
Well, my apologies that that race condition is there. I thought it was valuable enough to do the pull from Git for jenkins.io even with the race condition, because I was aware of it when I proposed it, and I accepted the race. But obviously, if we lost the race once, we need to implement a fix.
A
Yep. So last week the release went well; this week we had two issues: the one we just mentioned, and another one. Since we upgraded Kubernetes last week, I ran a replay of the packaging build, because I wanted to check that the agents were properly created on the corresponding node pool, to avoid a bad surprise today. Packaging then failed due to that replayed build.
A
So the initial build on a given branch, for the packaging process at least (and I'm sure the release has the same issue): the first build fails and needs to be retried a second time. Each time we create a new LTS, the .1 of that LTS has this problem, and the replay I did on the master branch cleaned up the previous versions. So today's weekly packaging build failed for the same reason: a parameter didn't have its default value, so it was empty, and the shell script fails if it's empty.
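The failure mode described above, a shell step blowing up on an empty parameter, can be guarded against explicitly. A minimal sketch, assuming nothing about the real job; the parameter name here is hypothetical:

```shell
# Fail fast, with a clear message, when a required parameter arrives empty,
# instead of letting a later command misbehave on an empty string.
require_param() {
  name="$1"
  value="$2"
  if [ -z "$value" ]; then
    echo "ERROR: parameter '$name' is empty" >&2
    return 1
  fi
  printf '%s\n' "$value"
}

# Or fall back to a default when the parameter is unset or empty:
# JENKINS_VERSION="${JENKINS_VERSION:-latest}"
```

The `${VAR:-default}` form is the lighter fix the discussion alludes to: give the replayed build a sane default instead of an empty string.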
A
Let me add this to an open issue, or comment on an existing one, to track this race condition.
B
I've confirmed that the container image downloads and builds successfully, so that part was successful even after the race condition got exercised; the rebuild must have worked. That's great, thanks. And the changelog has been merged, thanks to Kevin for proposing changes to it. There was an oddity in the changelog that sometimes happens, where the automated changelog generates unexpected things; I didn't do the investigation to figure out why, I just fixed it.
A
Just a note about the changelogs: I'm impressed, because for a year now we have had the weekly release and the changelog done the same day, every week. That's really impressive. It sounds normal today, but trust me, a year and a half ago it wasn't easy to hold this kind of timeline. So congratulations to everyone involved in the process, that's really efficient.
A
That's all for me on the weekly releases for both weeks; anything to add, questions, or things to relay? Regarding the announcements: next week I will be off, so I need you folks either to run the meeting or to cancel it.
A
What do you think, Hervé, will you be available?
B
Not yet. So 2.401.3 will release soon; let's see, the baseline or release candidate is today, so it releases in two weeks, on the 26th of July. Today or tomorrow the new baseline will be selected, and the new baseline is likely 2.414, because 2.413 looked very good and I don't expect any surprises from 2.414.
A
Perfect. There is an advisory, announced publicly earlier today, for tomorrow.
A
So that means the Jenkins security team might need to access ci.jenkins.io. I was late on pinging them about that element; thanks Hervé for catching it, because otherwise they would have been slowed down in the process. They will use the new ci.jenkins.io. If any of the current plugin versions are concerned, they would have to update them; otherwise they only need trusted.ci.jenkins.io to be up so they can publish the Jenkins website when needed.
B
But shouldn't we actively assume that a restart of ci.jenkins.io will be required, just in case? I mean, we don't know; they haven't told us which plugins are involved, they'll probably disclose which plugins, but it feels safe to just say we're going to restart ci.jenkins.io tomorrow. And then, if we don't, we smile and say we didn't.
A
Yep, absolutely. Is there a volunteer to open the status entry here?
B
And I assume we set it for a time tomorrow. Let's see, did they say in the public announcement the time they're publishing? No.
A
Okay, so let's remove it then, and we can start with the huge list of things that have been done. I will try to be as fast as possible, I promise. Thanks Hervé for the work on integrating ci.jenkins.io observability into Datadog using the Datadog plugin.
A
So now we have full observability integrated in Datadog, and we can start doing things such as monitoring builds: for instance, monitoring the builds in the infra acceptance tests, which could alert us when an agent cannot be spawned, or when the package cannot be installed, or many other things. These jobs will be really helpful to warn us in case of issues. It will also help us answer questions like: why are the BOM builds taking so much time for simple sh steps? With the telemetry we can find out.
A
While we are on ci.jenkins.io: it runs on a new virtual machine since last week, in a new network, in a new resource group, with new hardware which is way more powerful and cheaper. And it runs on a network which does not have any overlap and has the potential to get IPv6.
A
So ci.jenkins.io is not reachable over IPv6 yet, but it could be in the future. The former resources have been cleaned up; we did a lot of cleanup, thanks Hervé for the help there. And if you used to access ci.jenkins.io with SSH, you need to look at the new runbook, as the Jenkins security team has to do today, because there is a new configuration: it's just a hostname change, and you need to use the new VPN, the private one, because we changed the network.
A
Any questions on that part? Okay, let's continue. There was an issue when releasing the jellydoc Maven plugin; thanks Basil for fixing the issue inside the plugin. It was due to the JDK version and a set of transitive problems. Now it's fully working with JDK 11, maybe even 17, I'm not sure, but I know it's not JDK 8 anymore. Thanks Basil for that.
A
Hervé, can you give us a word about the Windows Server 2022 agents on trusted.ci? Why did we do that, and what problem did we solve?
C
Two weeks ago I started to work on creating Windows Server 2022 images, to be able to provide the current bundle. ci.jenkins.io already had 2022 agents configured, but trusted.ci didn't, so we added them to trusted.ci.
A
Thanks, it's nice work. I haven't checked the result, whether the images were published, but I think you did.
C
Yeah, and I've used them to test the 2022 version of the inbound agent. It's already covering everything; I just have to finish a refactoring of the build process. Once the pull request is ready, creating the 2022 inbound Docker agent will be as simple as adding an agent type in the configuration.
A
Cool. There is a part that should now move to the regular SIG meeting, because we did what we needed to prove that the infrastructure side did its work. Anyway, we still need Windows Server 2022 for the infrastructure, because we have the release packaging that builds the official Windows MSI during the core releases. It's running on 2019 images, not even LTS, so we will need to build, or use, the 2022 inbound agent so we can upgrade the node pool.
A
This should also be a target for the brand-new 2022 agents. We are building custom images today with Docker on Windows that inherit from the images Hervé is publishing, so we will update our own builds in the future. That would allow us to switch every system to Windows Server 2022 LTS.
A
We will need to keep the 2019 builds, but these can be built from a 2022 server with Docker. It will use a virtual machine; it's a bit slower, but a different isolation level (not a full virtual machine). So that means we can move everything to 2022, as recommended by Azure.
B
So we've adopted it: we're using the repository that was created. We've got agreement between me and Alex Brandes that we'll continue using it, and it's got content. I'm sure we'll do more with it as time goes on; right now it's just a content archive, but it could certainly be posted to some place like governance.jenkins.io or archive.jenkins.io, etc. Its initial purpose is met, and I am pleased to announce that one action item that was on my list for probably two years is gone.
B
No, because Alex and I haven't settled yet, and I'm the note taker for the governance meeting right now. My strong preference is still Google Docs. I appreciate that everybody likes HackMD, but editing markdown live in a meeting is more difficult for me, so I'm still prone to keep using Google Docs.
C
I was asking because I saw the content from 2020 to 2023 is one single markdown file, and I was about to propose splitting it per day or per month, like the other archive. With that, I don't know, you might have to adapt your process for archiving.
B
No problem, it's trivial. If we want to split it to a file per meeting, that's easy to do, and I fully support that if it's easier; I have no problem with writing an individual file. That's what I did with the most recent notes: when I had to update them, I extracted the content, ran it, and it was easy, so happy to do that. I'm fine if we split it to an individual meeting per file; if that's easier for people, I am happy with that. Okay.
A
Okay, let's continue. We had an issue: the Build Monitor plugin failing with 401 Unauthorized. That one was due to the repository-permissions-updater build failures on trusted.ci. I don't recall if the timeline matched the outage we had last Friday or something else, but that was the cause, and as soon as the build ran again properly the problem was fixed. Then "update GitHub user to authorized committers": thanks, I guess it was Alex, someone added the permissions, so thanks for this one. Next: the test history page on ci.jenkins.io is inaccessible.
A
Okay, so that issue was related to the other S3 one: there were too many builds, so the page mentioned by Alex was in error. It was the proxy timeout on Apache, because the page took two to four minutes to load at the beginning, due to another issue we'll see later. We were able to rotate the builds, back to an effective limit of 50 builds in the history of the master branch of Jenkins core, and now the load time is 30 to 40 seconds, which is under a minute.
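The rotation mentioned above is what a build discarder normally enforces. A minimal declarative-pipeline sketch; the limit of 50 matches the number mentioned in the meeting, everything else is illustrative:

```groovy
pipeline {
  agent any
  options {
    // Keep only the 50 most recent builds so history pages stay fast
    buildDiscarder(logRotator(numToKeepStr: '50'))
  }
  stages {
    stage('build') {
      steps { sh 'echo hello' }
    }
  }
}
```

With the discarder in place, old build records (and their stashed artifacts) are pruned automatically instead of accumulating until pages time out.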
A
So yeah, thanks for reporting; it also pointed to an issue on the JUnit plugin that could have been related. Now it's working, so no more problem.
A
There were requests to add users so they can access ci.jenkins.io. Then: "releasing to Incrementals yields 503", two weeks ago. The Incrementals publisher was failing because someone bumped the Node.js version while trying to clean up the image and, of course, that person, named Damien Duportal, which is me, didn't test it thoroughly.
A
So it started to fail. It has been fixed. It also had a secondary issue: it authenticates against the ci.jenkins.io API to retrieve information about the build artifacts. That authentication is not mandatory, but can be used for API rate limiting; I'm still having mixed feelings about it. And I don't know how or why, but with the ci.jenkins.io migration to the new virtual machine, the token of the technical user we use disappeared: there was no token defined for that user on ci.jenkins.io anymore.
A
So the client presented that token as a password and was, of course, rejected, because that's not the right password to use, so effectively it failed. We created a new token on ci.jenkins.io, I added the instructions, and it was pushed to our secret store; once we deployed that... okay, sorry Mark, go ahead.
B
I think you missed one part of that story, which was that the Incrementals publisher was not running Dependabot at all, so it was way behind. You found that problem and fixed it, and of course that meant there was a flurry of updates, and I blindly approved many of them. So there was this terrifying period of blind approval, because CI passed but CI was doing almost no verification. Thank you to all involved in fixing that.
B
I guess I've got a question: is there a way we could safety-check our Dependabot configurations, to be sure that where Dependabot is enabled it's not showing an error? Because I know plugins I maintain have had that: on occasion I'll make a mistake and break Dependabot, and of course it stops submitting pull requests because I broke it. A separate topic, though.
A
That's for another time, but I will say I don't know if we can catch whether Dependabot itself is broken. We could check for the last image update, which was two years ago for the Incrementals publisher, or a year and a half, and check for the presence of recent activity. I'm not sure if there is something in the GitHub API for it; for updatecli you could look at the last builds, but yeah.
A
Whatever solution we choose, we need an exhaustive list of the repositories to check, and the more repositories we create, the more we need to update that list. That can be okay, but it's a complicated topic, and I don't know how Dependabot reports that.
C
I would say: a cron running over every repository, checking whether there is a Dependabot config, and adding the repository to the list to check.
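Hervé's cron idea can be sketched as a small script. The local-checkout check below is concrete; the surrounding loop over organization repositories (how they are listed and cloned) is only an assumption, shown in comments:

```shell
# Return success when a checkout contains a Dependabot configuration.
has_dependabot_config() {
  dir="$1"
  [ -f "$dir/.github/dependabot.yml" ] || [ -f "$dir/.github/dependabot.yaml" ]
}

# Print a one-line report for a repository checkout.
report_repo() {
  if has_dependabot_config "$1"; then
    echo "$1: dependabot config present"
  else
    echo "$1: MISSING dependabot config"
  fi
}

# A cron job could iterate over every organization repository, e.g.:
# for repo in $(gh repo list jenkins-infra --limit 200 --json name -q '.[].name'); do
#   git clone --depth 1 "https://github.com/jenkins-infra/$repo" "/tmp/$repo"
#   report_repo "/tmp/$repo"
# done
```

This only detects a missing config file, not a broken one, which matches the caveat raised in the discussion that a silently broken Dependabot is harder to catch.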
B
Well, and we have similar cases in other things. We've got a backend-extension-indexer bug report right now that is highlighting that its tests are not testing anything useful, and we had a failure in the pipeline-steps-doc-generator where its tests were not testing anything useful either. So that's a common pattern: there are a number of places where we would benefit from more tests.
A
Okay, but now it's working well, so thanks everyone involved; I think six people were involved in fixing that issue. Next: branch strategies differ from the prior configuration, a consequence of the migration as well. Since we had to recreate a Jenkins home copied from the former one (we weren't able to reuse the former one), we applied a lot and a lot of cleanup, and we also had an issue with the S3 buckets that archive artifacts.
A
We'll see that later. This means we need to think about using Job DSL for defining the jobs on ci.jenkins.io as code. That kind of element would be easier to update and easier to plan, because each time you trigger a scan you hit the GitHub API rate limit, which effectively breaks all the builds on ci.jenkins.io.
A
Okay, thanks everybody for the IPv6 support: now get.jenkins.io is properly set up everywhere, and all of the services running publicly on the new cluster are available through IPv6.
C
Good point: it's working on its own load-balancer service, so we could enable IPv6 for the LDAP service too, but we don't see the need right now.
A
s390x: I've closed that old issue, because since we upgraded the machine two weeks ago we don't need to replace it. By the way, the agent was offline; I don't remember exactly, but there was a quick fix, I think we had to restart ci.jenkins.io and it was upgraded at that moment. I don't have anything else to add on this one, and we have an open issue for managing it as code in the future.
A
Next one: javac vanished from the Docker highmem image. We had an issue on the templates that has been fixed; it was related to the agent configuration and their content, so thanks Alex for that. IPv6 again: that has been validated with the new cluster, as we mentioned, and during the changes we introduced some misconfiguration, because we were using an RFC-compliant DNS resolver in the organization.
A
There was a mysterious issue about someone saying "I want to install it, it doesn't download", and that's all the information we had, so I closed it for lack of information. Thanks Hervé, because that could have been an issue with a mirror, so it made sense to move it to the helpdesk and close it as not planned. Also, we only have one agent and there is no need for us to have one; additionally, per the discussion we had with Olivier and Daniel two years ago...
A
...running an agent that we don't control is risky for the trusted.ci part. The initial need was to provide Docker images for that CPU architecture, and we have been using QEMU to provide these images for at least a year, so there is no need for it; the native machine running on ci.jenkins.io is perfectly fine. It would also allow native tests: if we ever have a plugin or an artifact that needs to run on native hardware without emulation (that can happen), then that would be the way to go.
A
Finally, I closed an old issue about importing and managing AWS resources, because we already manage the Kubernetes cluster and we are moving away from AWS. We have three virtual machines left, and after that we won't need it. That's why I closed the issue: no need to spend our time on this one.
A
Okay, work in progress. We had a new issue about the upgrades: I saw there's been a default setting on Jira that has been changed. I'm not really sure what it means in Jira terms, so in theory I could do it, but I have no idea what it involves.
B
I don't know if we can do it. Daniel Beck is a Jira admin, so if there's anybody who knows how to do it, it's Daniel Beck, and it may just be as simple as us saying "plus one" to set the behavior: Daniel, can you do it? Because, like you, Damien, I am a Jira administrator, but I'm scared as can be to make administrative-level changes, because I don't have the expertise that Daniel does.
A
AWS: decrease AWS costs. Short term, I didn't have time, but I need to do it later today: I need to report on the June billing and the status of our spending. I guess it stayed almost the same over the past three weeks; I haven't seen anything, and we have alerts in place (we saw them fire two months ago), so a peak of increase would have been reported to us. So I need to report and give a status here.
A
Then, for the other tasks on that issue about decreasing costs, I propose that we close it and create a new one just to focus on the new items. The new steps will be to remove the three virtual machines that are currently running and consuming half of the monthly credits: one is the updates.jenkins.io packaging machine, and then we have two of the tiny machines with very little usage.
A
Yes, okay: closed in favor of a new one with a tighter scope.
A
I need to report for June first; okay, almost closable but not closed yet. Kubernetes 1.25: we did the heavy lifting, including a big general outage of the cluster, which we had to recreate from scratch. First, good work Hervé, because you worked a lot to recreate a brand-new production cluster in less than half a day.
A
So thanks a lot for the support and the work here. I still owe a postmortem, Mark; I was thinking about writing it as a jenkins.io blog post, because the impact was clearly outside our team, so I would prefer doing it that way, afterwards. Is that okay for you?
A
So we need to finish the postmortem. It was an issue with an accumulation of configuration changes directly related to IPv6 and the way it works with Kubernetes.
A
But
we
also
learn
and
propose
some,
let's
say
improvements
so,
for
instance,
the
public
IP
that
were
deleted
changing
effectively
all
the
external
DNS
resolution
of
everyone
trying
to
download.
That
should
not
happen
in
the
future,
so
we
were
able
to
find
thanks
to
our
early
researchers
logs.
So
we
have
the
concept
of
we
can
lock
resources,
so
they
cannot
be
deleted.
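A minimal Terraform sketch of such a lock, assuming the azurerm provider and an existing public-IP resource; all names here are illustrative, not the actual jenkins-infra resources:

```hcl
# Prevent accidental deletion of the public IP that external DNS points at.
resource "azurerm_management_lock" "public_ip_cannot_delete" {
  name       = "do-not-delete-public-ip"
  scope      = azurerm_public_ip.lb_inbound.id
  lock_level = "CanNotDelete"
  notes      = "External DNS records target this IP; deleting it breaks downloads."
}
```

With `CanNotDelete`, any attempt to delete the locked resource (or a resource group sweep that includes it) fails until the lock is removed first.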
A
What
will
happen
next
time
if
we
delete
the
kubernetes
cluster
on
the
public?
Ip
are
part
of
one
of
the
automatically
managed
resource
groups.
Then
the
deletion
of
the
resource
Group
will
fail
at
the
ends,
because
the
public
IP
has
log
that
say
do
not
delete.
So
we
have
added
that
as
a
security
and
its
managers
code,
that's
a
minimum
Improvement
and,
as
Tim
said
so
I,
don't
know.
If
you
see
this
one
early
yeah.
A
...we can avoid that transitive, implicit deletion in the future. So I propose we keep it. We can also add a comment directly on the load balancer, even if it's the default value, pointing here and saying: you can add them in another resource group if you have to recreate it next time.
A
Another improvement: we have a technical administrative user used to administrate the cluster from our Kubernetes management system. That technical user is a service account, and the only way to create it was a script on my machine, which is absolutely not sustainable. So I've added the required element in Terraform.
A
Since
now
we
have
managers
code,
all
the
cluster
that
should
create
that
that
user
for
us
and
generate
the
cube
config
as
a
sensitive
output
like
every
mentioned
during
the
past
few
months,
so
we
can
only
copy
and
pass
the
output
if
we
have
admin
access
to
terraform.
That
should
be
quite
the
Improvement.
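A sketch of what such a sensitive output could look like; the local value the kubeconfig comes from is hypothetical, standing in for however the service-account credentials are assembled:

```hcl
# Expose the generated kubeconfig only on explicit request, e.g.
# `terraform output -raw kube_config` (requires access to the state).
output "kube_config" {
  description = "Kubeconfig for the cluster-admin service account"
  value       = local.admin_kube_config
  sensitive   = true
}
```

Marking the output `sensitive` keeps it out of plan and apply logs while still letting an administrator retrieve it deliberately.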
A
And finally, I need to open the issue for the next Kubernetes upgrade, which I propose we try to do before September, ideally, just to be sure we are in good shape. The issue will start with: what are the deprecations after 1.25? That will help us schedule a timeline, with the same elements we had this time. Once these tasks are done we can close the issue; the rest of the upgrade has been done successfully.
A
For that one, we will need to switch to Puppet 7, not Enterprise, because as of today the problem is the following: the Enterprise version of Puppet doesn't support Jammy yet (there is a huge ticket with a lot of people asking for it), but the open-source version works. Why do we use Enterprise? Because we were able to use it for free for the first 10 or 12 machines as an open-source project, but it doesn't give us any feature we have actually used during the past two years.
A
We
used
to
work
with
the
web
UI,
which
is
Enterprise,
but
we
haven't
done
it
so
by
default.
That
means
we
should
migrate
from
Puppet
6
Enterprise
to
pupet
7
line
or
eventually
puppet
8.
I,
don't
care
about
open
source,
and
then
we
will
have
the
puppet
server
support
for
Jimmy.
The
agent
is
working
on
Jamila.
A
We
mentioned
a
few
meeting
earlier
about
using
ansible.
Instead,
that
could
have
allowed
us
to
change
the
Paradigm
and
not
needing
that
virtual
machine
anymore.
However,
with
the
recent,
let's
say,
heated
discussion
on
the
Reddit
area
regarding
not
only
the
operating
system
around
Reddit
ecosystem,
but
ansible,
which
could
be
suddenly
Less
open
source
I'm
now
having
second
talked
about
moving
away
from
puppets,
there
are
other
solutions
that
could
fit
but
yeah.
So
that's
why
I
propose
that
we
focus
on
during
the
summer
upgrading
to
pupet
7.
A
Okay,
so
I
propose.
We
keep
that
issue
on
the
upcoming
Milestone
because,
as
we'll
see
a
bit
later
in
two
bullets
points
we
will
have
to
work
on.
The
updates
Jenkins
is
that,
okay
for
everyone.
A
Okay,
next
one
cigsaw
fails
to
delete
stashed
artifact
with
access
denied,
so
that
one
was
preventing
the
build
decoder
to
work
effectively
on
sea
agent
in
sayu,
and
that
was
a
chain
of
problem.
Then
this
problem
has
been
fixed
so
Airway.
Do
you
want
to
explain
just
a
bit
what
we
did?
What
is
left
to
be
done.
C
We reverted to the plugin's default configuration, which is to not delete anything from the S3 storage from Jenkins, as it's a security risk, as underlined by JC. That's also why these options are specified via JVM options, which is cumbersome, but for a reason: to discourage people from using them.
C
I think we can close this issue, since the production problem is fixed, and we might have to open a new one to put in place a service to discard all the artifacts and stashes on S3 via AWS lifecycle policies.
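The lifecycle-policy approach Hervé mentions could look roughly like the fragment below, applied with `aws s3api put-bucket-lifecycle-configuration`; the 30-day window and the empty prefix are illustrative, not agreed values:

```json
{
  "Rules": [
    {
      "ID": "expire-stashes-and-artifacts",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 }
    }
  ]
}
```

This shifts cleanup from Jenkins (which the default plugin configuration deliberately forbids) to the bucket itself, so objects age out regardless of what the controller does.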
A
Thanks everybody. Now the next one, the big one, which will be split between Hervé and me. The goal is to migrate the virtual machine hosting updates.jenkins.io (the update center) and pkg.origin.jenkins.io, the backend that Fastly uses to serve, cache, and distribute the package files through its CDN network. That one is running on an old machine on AWS that costs us money, is locked to Ubuntu 18, and creates a lot of weird issues and maintenance challenges, let's say.
A
The
goal
for
us
is
to
get
away
from
the
current
pattern.
There
are
multiple
solution
right
now,
irvi
is
studying
if
we
can
use
cloudflare
air
tools
service
for
the
update,
Center.
A
So
the
challenge
here
is:
can
we
replace
an
Apache
web
server
that
we
run
from
Aegis,
which
is
not
highly
available
and
subject
to
issues
to
something
else
where
we
have
a
CDN
network
of
something
which
is
an
S3
bucket-like
or
compliance
that
could
be
used
to
distribute?
That's
the
challenge?
We
have
multiple
issues
and
elements
to
discuss.
I
propose
that
we
don't
go
on
details
here,
but
everybody
is
working
on
that
area.
C
It seems so, but there are a lot of redirections generated by the htaccess there, and I need to see why they exist; maybe I'll ask Daniel for a tour of this code and all the stuff around the update center.
A
And
one
of
the
compensation
that
we
talked
about
because
the
goal
for
us
is
to
find
a
way
to
decrease
the
outbound
bond,
with
represented
by
the
Json
updates
on
the
file
that
I've
served
to
the
end
users.
That
part
cost
a
lot.
So
it's
not
the
amount
of
requests
that
we
need
to
solve.
That's
the
bandwidth,
that's
what
we
are
paying
for
and
additionally,
High
availability
is
needed
and
one
of
the
elements
that
could
be
studied,
unbox,
you
might
have.
A
Okay, the update-center index: if we have this, we can always keep our own Apache system, even as a container, highly available because it's built and integrated on one of our clusters, and instead of serving the file it would redirect to Cloudflare R2, for instance. So we would control the entry point: we would not have to point the domain name at Cloudflare, and the redirect would act like what we do with the mirrors. The difference with the mirrors, and why we don't use the get.jenkins.io technique...
A
Io
techniques.
Is
that
first,
the
redirection
but
get
the
mirror
bits
as
an
Apache
in
from
so
we
we
might
be
able
to
use
it
with
different
us
name,
though.
The
thing
is
that
what
we
call
mirror
is
something
we
control,
because
the
reason
of
not
putting
a
date
Center
on
mirror
bits
as
far
as
I
can
remember,
but
maybe
that
can
be
changed,
is
that
if
we
want
to
invalidate
on
the
data
center
cache,
we
need
to
control
the
invalidation
process,
which
we
cannot
with
sponsor
based
mirrors
that
are
pulling
the
data
from
us.
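The redirect idea sketched above, in Apache terms; the hostnames and paths are purely illustrative:

```apache
# Keep serving small, frequently-invalidated files (update-center JSON) locally,
# but send big package downloads to object storage via a 302 redirect.
RewriteEngine On
RewriteRule ^/download/(.+)$ https://packages.example.r2.dev/$1 [R=302,L]
```

The entry point (and therefore DNS and cache invalidation) stays under the project's control, while the heavy bandwidth is paid by the object-storage side.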
A
Okay. On my side, I've started working on pkg.origin.jenkins.io, which is the second service running on that machine. The goal here is to move that service to Azure, on our public and private infrastructure.
A
That
means.
Instead,
if
we
have
all
the
data
during
the
packaging
process
that
will
run
directly
on
the
Pod,
that
is
releasing
and
packaging
everything
that
will
have
access
to
the
real
data
generated,
Debian
and
sentos
package
inside
the
bucket,
and
that
bucket
will
be
immediately
available
through
the
public
service
on
the
public
cluster.
C
Yeah, I've looked at it. Gavin said that his repo was way more up to date than the one in jenkins-infra, but I didn't see any difference.
A
I'll have to spend time on that, but thanks for the heads-up; that says we can work on it, and Gavin already did some work, so we need to help him in that area. The proposal for migrating the applications to arm64 got delayed until Stefan is back.
A
We had a discussion and now everything is clear; I took notes, so we should be able to resume when he is back from holidays, when I will be back as well. The goal is to have arm64 for all the static services on publick8s. Now that everything has been upgraded and migrated, that will be Stefan's main priority, because it will help decrease costs in different areas.
B
So that'll get some focus today and tomorrow for me: I've got proposed changes to the root POM for both Jenkins core and plugins, and then we've got to start evaluating what that means, etc., and dealing with the bumps and bruises of it.
A
Thanks
so
I'm
keeping
it
and
finally
artifact
caching,
proxy
and
reliable
I
was
late,
but
the
goal
is
to
open
a
pull
request
on
ath
based
on
former
basil
walk.
The
goal
is
to
start
using
ath
on
every
part
ACP
on
every
part
of
the
acceptance
test.
Earnest
builds
not
only
the
initial
generation,
so
get
a
store
to
draft
pull
request
and
see
if
it
break
the
ACP
on
Azure
or
if
it's
working
now
with
the
new
network.
A
I
think
that's
all
for
the
current
element.
We
have
new
issues
so
improve
data
dog
ingestion
thanks,
everybody
I
think
that's
to
keep
track
of
what
we
could
do
with
data
dog
in
the
future.
A
That
one
will
we
treated
it.
I've
opened
backup
infrastructure
data.
I
was
sure
we
had
an
issue
that
you
wrote
and
issued
away,
but
I
wasn't
able
to
find
it.
So
maybe
that's
a
duplicate
but
I
try
to
put
as
much
information
as
possible
here
to
go
in
the
different
element
we
could
have
that
was
triggered
by
the
cigu
former
Resource
Group
deletion.
B
I could say I've taken care of it. I've got to do the research to find out what broke; I am reasonably confident I'm the one who broke it, and therefore it's perfectly justified that I should be the one who fixes it. But I suspect I merged a Dependabot change, and the Dependabot change passed all the tests, which hints that more tests are needed. So there will be some research, and the outcome of that will eventually be tests that check that particular attribute and then only allow pull requests if that test passes.
A
And I will give details on what we said. Do we have other new issues? "CI agents missing some metadata" from two weeks ago; I missed this one. Oh yeah, okay: we have users from China who see low performance when trying to reach either the update center or the download mirrors. Even though we have mirrors in China, more than we knew, which could be interesting...
A
So
as
we
say
to
that
person
they
cannot
use
archives
or
PKG.
There
is
a
lot
of
lemon.
So
one
of
the
proposal
we
add
the
Air
2
plot
flare
as
ability
to
project
copies
inside
China,
so
that
could
be
a
way
to
help
our
China
users.
Does
we
talk
to
that
person?
We
need
a
sponsor
inside
China,
because
there
are
a
lot
of
unknown
knowledge.
There
are
a
lot
of
things
that
we
don't
know
about
how
it
used
to
live
there.
A
And finally, "remove...": we already treated this one, so I don't see other new issues here. Do you have other topics to mention?
B
So I raised an issue that may be interesting to infrastructure, and I'd like an opinion; I'm sorry that I didn't solicit your opinion already. pkg.jenkins.io and mirrors.jenkins.io both have installation instructions in their Debian and Red Hat subdirectories. If you pick Debian or Red Hat, either of debian-stable or debian, you see a nice page that gives you some installation instructions, and they're generally helpful.
B
They are nice and simple, but they only tell you about Java 11, and we want you to use Java 17; but Java 17 isn't available on some Debian machines, and on and on, there's an awful lot. So the temptation I've had is to propose that pkg.jenkins.io should redirect, for that page, to this page, and stop trying to present those simplified instructions, because simplified instructions are inevitably the wrong instructions for some set of users. I'm open to your opinions there, and we could do it separately; it doesn't have to be today.
B
It's just that I've realized this page worked really well when there was only one Debian release to support. We now have 10, 11 and 12, with very different Java versions on them. When there was only one Red Hat version, the Red Hat page worked, but we now have eight and nine, and we have other variants like Rocky and Alma and Oracle, and it's no longer a nice and simple world.
B
I mean, the other benefit there is that it will stop the enumeration of directory contents that happens on some of those pages. I just don't think there's enough value to people in listing directory contents; I don't remember if it's mirrors or pkg that actually lists them, but I'm not persuaded that it's valuable.
A
Make
sense,
I
think
that's
worth
opening
an
issue
to
store
the
discussion
here.
Right,
don't
forget
about
it,
because
I
mean
that
makes
sense.
That
will
help
us,
especially
we
had
a
confusing
moment
when
high
about
finding
which
index
HTML
genetic
from
where
I
don't
know.
If
you
remember
the
gpg
key
rotation,
the
Jenkins
design,
I
forgot
the
name,
things
Eder
and
footer,
so
yeah
that
will
help
definitively
yeah
it.
It.
A
Okay, so tomorrow then. Okay, I'm closing the issue here; we just need a quick sync between the two of you. See you in two weeks, everyone watching this recording. Bye-bye.