From YouTube: 2023 06 25 Jenkins Infra Meeting
A: Hello everyone, welcome to the Jenkins infrastructure weekly team meeting. This week we are on the 25th of July 2023. Last week we didn't have a weekly meeting because it was a holiday for a lot of us, so today will be a two-week milestone conclusion. The next weekly meeting should be next week, on the 1st of August; as far as I can tell everyone should be there.
A: Yes, because tomorrow, as we'll explain later, we'll have a security advisory on Jenkins core, and so the main branch is locked. That was announced publicly yesterday, I think on the mailing list. That's why we try not to publish a core release before a security release, to avoid anything going wrong.
A: Okay, do you have other announcements, folks?
C: Yeah, but I heard people were also thinking of 415 becoming the LTS baseline, but finally the decision is made and it will be 414. Okay, well.
B: The discussions were really good about that topic, because 416 will include a security fix that must be backported, and there was an issue that James Nord detected that will result in a backport as well. So there will be some backports to 414.1. That's good! We're really proud that people are finding things that need to be backported. That's helping.
A: Okay, so August will be a busy month then. So the announcement: tomorrow we will have a high severity security release. As announced, the 26th of July, tomorrow, confirmed.
A: Tomorrow. So that means, since it's tomorrow, we should try as much as possible to not release anything, and we will. So I need someone to help prepare. That can be me, but that can be someone else; we need someone to take care of opening the usual status.jenkins.io message, because ci.jenkins.io will be down during the restarts. Unless someone volunteers, I will take care of doing this.
A: That's all we have. Then we will have to follow the updates, update all the images (there will be an issue), and we'll deploy. The goal is that by tomorrow, end of day in Europe, we will have upgraded all of our controllers, and when I say all, it's all the release lines covered by the security advisory.
A: Does that look good to you? No questions? Next: major events. I don't know when I last tracked this. If you have any upcoming major event where you know some team member will be present, I don't mind you sharing it. I don't know what the plans are. Mark, do you have events in mind?
A: I've closed the issue, but as mentioned earlier today, it's worth checking the username in the Datadog logs of the account app, to see what error message could have triggered the anti-spam system. The goal is to be sure of the reason why the anti-spam system blocked that person.
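For reference, the log check described here could be done through the Datadog Logs Search API (v2). This is a hedged sketch: the service name and query string are placeholders, not the real account-app facets, and the request is only prepared, not sent (it would need real API and application keys).

```python
import json
import urllib.request

# Build a Datadog Logs Search (API v2) request to look for the account app's
# error messages around a given username, so we can see which anti-spam rule
# fired. Query and service name are illustrative placeholders.
payload = {
    "filter": {
        "query": "service:accountapp blocked_username",  # placeholder query
        "from": "now-15d",
        "to": "now",
    },
    "page": {"limit": 25},
}

request = urllib.request.Request(
    "https://api.datadoghq.com/api/v2/logs/events/search",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "DD-API-KEY": "<redacted>",
        "DD-APPLICATION-KEY": "<redacted>",
    },
    method="POST",
)

print(request.full_url)
# urllib.request.urlopen(request)  # would return the matching log events
```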
A: Next: a user wanted a new category and asked for a bit more permissions for the token, so that has been done. Any administrator of the community Discourse can do it; I am at least an administrator there, I don't know for the others, but everything was done. So unless you have questions, I am going to move forward.
A: Okay, "rename pipeline log for the CloudWatch plugin"; wow, that's quite the pipeline. Thanks to the people who took care of that. That was renaming a repository in the jenkinsci organization, and I assume the corresponding change on ci.jenkins.io was made. Next up: a user cannot download the jenkins.war file and plugins. I don't remember the reason for the error for that user.
A: Okay, so that person had issues due to their internal firewall, most probably due to the public IP changes. I guess that happened during the Kubernetes upgrade two weeks ago, or maybe not, but that would be one of the use cases where, since we changed the public IPs, the firewalls are keeping the old IP that had been trusted as okay, and the new IP isn't on their system yet. That's why we need to take care of communicating and avoiding changing these IPs, says the guy who broke production.
A: Okay, so I understand, Mark, that you held off on that setting to keep the sanity of our users, is that right?
A: Thanks for taking care of this one. We had five issues closed as not planned: users trying to create a Jenkins account on accounts.jenkins.io, which makes sense on paper, but each was either for testing Jenkins or blocked by the anti-spam system. It's not always clear!
A: So thanks everybody for taking care of this, Hervé and Mark. I think it's related: we have at least two cases with the anti-spam system as far as I can tell, and we might have more. So let's check if there is something wrong, or what the reason is. My guess is the public IP change, but I don't remember exactly, so let's see. Any questions on these topics?
A: Okay, so: work in progress. Most of the work done during the past two weeks was on these tasks, and some are almost closable; let's take them one by one. First, a report about the upgrade to Kubernetes 1.25, because we didn't meet last week. During the beginning of that milestone, we planned to upgrade to Kubernetes 1.25 with one cloud provider per day.
A: Everything went properly until the last one, which is publick8s. The upgrade in itself went very well, but as part of the upgrade, the change of the node pool, combined with the particular network setup we had, put the cluster in a state where it wasn't able to reach some IPs, causing some mayhem when we tried to fix it.
A: The first positive thing is that all the work that Hervé and Olivier did during the past years on the public cluster, with automation and infrastructure as code, showed that we are able to recreate it from scratch, including restoring the LDAP backup. I have to admit I was quite freaked out by the deletion of the LDAP persistent volume. So the cluster was recreated, and as part of that recreation we realized that the public IPs were changed automatically. That is a subsequent topic that, thanks to Tim, we were able to see.
A: The cluster cascade-deleted its child resource group, and we were under the impression that the public IPs had to be in the same resource group as the node pool. As Tim showed, we have ways of having the public IP in another resource group, with the correct annotation on the Kubernetes setup, which we didn't know back then. So when we deleted the cluster, the public IPs were deleted; they had been created in the same resource group because we had to recreate the cluster as close as possible to the former one.
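For reference, the annotation discussed here can be sketched as a minimal LoadBalancer Service manifest, written as a Python dict for illustration. The `service.beta.kubernetes.io/azure-load-balancer-resource-group` annotation is the real AKS mechanism for pointing at a public IP held in another resource group; the resource group name and IP value below are placeholders.

```python
import json

# Minimal sketch of a Service that uses a pre-allocated static public IP
# living in its own resource group, so deleting/recreating the cluster (and
# its node resource group) no longer deletes the IP.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {
        "name": "public-nginx-ingress",
        "annotations": {
            # resource group holding the static public IP (real AKS annotation)
            "service.beta.kubernetes.io/azure-load-balancer-resource-group": "public-ips-rg",
        },
    },
    "spec": {
        "type": "LoadBalancer",
        "loadBalancerIP": "203.0.113.10",  # the pre-allocated static IP (placeholder)
        "ports": [{"port": 443, "targetPort": 443}],
    },
}

print(json.dumps(service, indent=2))
```

Pre-allocating the IP in a dedicated resource group also avoids the cascade-delete problem described above.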
A: But now we have an upcoming issue, tracked as a subsequent topic, that could allow us to not only keep a lock on the public IPs, but also move them to a proper resource group, to avoid blocking deletion of the cluster in the future. So thanks Tim and Hervé for pointing out these elements, and thanks Mark for the help on that topic. The consequence is that we had to change the public IPs.
A: A blog post has been published about it, to communicate to end users, and this issue is closable, except that I need to open the issue to plan the 1.26 upgrade with the date and time and link this one. That's the last mile. All the other improvements that we've seen have their own issues to be tracked, because they're outside the topic of the upgrade, but these are things that we should work on to improve the next upgrades or the next production outage.
A: No, it's because during the cluster recreation, the javadoc service wasn't triggering a scale-up of the cluster. Maybe it's because we didn't wait enough time, or I didn't diagnose it; I started by changing the node pool label to force it back to Intel. We recreated everything on Friday, and the next Monday or Tuesday I fixed the problem by moving it back to arm64 after scaling manually, like you did the first time. Okay.
A: So yeah, good point, good question, but it should be in the same state as you left it when you went on vacation.
E: So we need to create a service on the publick8s cluster, with a persistent volume to mount the htaccess files generated by the update center script.
E: The script is triggered by a build on trusted.ci.jenkins.io to update the version in the htaccess redirection. The plan is to retrieve these files, put them in the Apache server configuration running on the publick8s cluster, and redirect with HTTP to the content stored in Cloudflare R2 buckets.
E: Another benefit of Cloudflare here is that they can deploy a bucket in China, so this will help those users, and we can also use this with mirrorbits later to have a better location for our users.
E: Anyway, we can do that. I think it can be beneficial for both Jenkins and Cloudflare.
A: Longer term; that's a discussion we had about the China projection. Right now we will start with only one bucket in the US, and we will start moving the service, because the main goal, as a reminder, is to get that service away from AWS to avoid paying five to seven thousand dollars per month in egress fees.
A: The question we had two weeks ago was: is Jenkins able to follow an HTTP redirection for that? And the answer found by Hervé is yes. Because we have all of these htaccess files he mentioned that are generated by trusted.ci, that allows us to keep control of the domain names updates.jenkins.io and updates.jenkins-ci.org, the certificate, and the redirection rules.
A: We keep that control, and then we trigger an HTTP redirection so that the real egress bandwidth is served by a service where we don't pay, but we keep control of the initial service, which is not a reverse proxy; otherwise we would still be serving the content and paying egress fees. That's the subtle difference, and it's not an easy one!
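The redirect-versus-reverse-proxy distinction described here can be sketched with a tiny HTTP server: the service under our domain answers only with small 301 responses, so the artifact bytes (and the egress bill) come from the redirect target. The bucket hostname is a placeholder.

```python
import http.server
import http.client
import threading

# Placeholder for the externally hosted bucket that actually serves the bytes.
BUCKET = "https://example-bucket.r2.example.com"

class Redirector(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # We never read or stream the artifact ourselves (that would be a
        # reverse proxy, and we'd pay the egress); we only point the client away.
        self.send_response(301)
        self.send_header("Location", BUCKET + self.path)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

# http.client does not follow redirects, so we can observe the raw 301.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/update-center.json")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))
server.shutdown()
```

Because the 301 response is a few hundred bytes regardless of artifact size, the domain, certificate, and redirect rules stay under our control while the bandwidth does not.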
A: Considering that pattern, we can think about improving the life of our users all across the world. We mention China because that's where it's most visible today, but that could be an improvement for everyone. We could use a mirror redirector, like we have for get.jenkins.io, except in that case we must manage and fill the content of the mirrors ourselves.
A: We could think about having a bucket in US East and a bucket in China, on the R2 Cloudflare China network, and then we could instantiate a new mirrorbits service for updates.jenkins.io that says: oh, I see you are located in China, so the redirect will be to the update-center.json served inside the China network. But the problem we have with the existing service is that we don't manage the mirrors, for instance the universities'.
A: In that case, that would be another mirrorbits instance that would allow us to serve the proper update-center file from the location we seek, but we would control filling these elements. So trusted.ci today generates the content and then runs an rsync to the virtual machine. In that scenario, in the long-term future, trusted.ci will generate the contents and run `aws s3 cp`, or a local cp on a file system, copying to all the locations where it's needed, and then we can update the mirror.
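The fan-out copy step described above could be sketched like this: generate the update-center files once, then build one copy command per mirror location. Bucket names and the source path are hypothetical placeholders, and the commands are only printed, not executed.

```python
# Hypothetical locations: one primary bucket plus one on the China network.
SOURCE_DIR = "/srv/update-center"
DESTINATIONS = [
    "s3://updates-us-east",  # primary bucket (placeholder name)
    "s3://updates-cn",       # China-network bucket (placeholder name)
]

def sync_commands(source, destinations):
    """Build one `aws s3 cp --recursive` command per destination."""
    return [
        ["aws", "s3", "cp", "--recursive", source, dest]
        for dest in destinations
    ]

for cmd in sync_commands(SOURCE_DIR, DESTINATIONS):
    print(" ".join(cmd))
```

In a real trusted.ci job these commands would run via subprocess (or a pipeline sh step) after the generation step, replacing today's single rsync.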
D: We can even add a CDN on top of that, to manage the ones that cannot have any...
D: Exactly. Okay, I like the way we handle that. And as a last resort, if we cannot have the bucket close to the client, we still have the ability, depending on the price and the system, to use a CDN as the last solution. Because we have the solution; in fact, we are handling that.
A: Okay, so that's the main priority for Hervé, because that's quite the cost and we want to get rid of that virtual machine. That's our new top priority now. Next issue: artifact caching proxy unreliable. If I understand correctly, that issue is closable; we are waiting for Basil just to do a final validation, since it was tested and merged one or two weeks ago. So we use the new ACP, the artifact caching proxy, on all the ATH builds now, so that should decrease a bit more.
A: What we download from JFrog Artifactory, is that correct, Hervé, or did I miss something? Thanks for taking care of that. That also underlines something hidden: now we no longer have the overlap issues, and we have almost removed everything; we just have a few elements left. I will comment on this a bit later, but that proved that the network and the virtual machine we use on ci.jenkins.io as of today are now stable enough to sustain ACP.
A: Closable, waiting for a final confirmation from developers; thanks again for this one. We have an issue: a Jenkins server is unable to download plugins from updates.jenkins.io. Okay, so the user seems to have issues connecting to the Aachen university mirror that the redirector sends them to. Both Mark and I have checked that it's not a problem on our mirror or on the Aachen university one; the files are there.
A: So the user has to contact Aachen University; nothing else from us. We cannot do any patch on the mirror redirector because it's based on their geolocation, so there is nothing we can do about that as far as I can tell. Next: issue while creating a Jenkins infrastructure account. I guess, Hervé, that's the one you mentioned this morning? Oh no, it's a new one.
A: Okay, so we have a potential new contributor, so we have to take care of this one because the user wants to create an account. I've assigned myself this issue. Hervé, are you okay to assign this to yourself, since I asked you about checking the logs? And do you mind creating their account to unblock them in the short term? In the long term we keep the issue open to keep the log research as a working item. Is that okay for you?
E: I don't think we have a problem. I see 80 warnings for the last 15 days. It seems normal to me, since there are not only account issues but other stuff in these warnings.
A: So, thanks to the pointer from Hervé, we concluded as a team that we could temporarily disable the spot mode for the agents, and we will do it for the upcoming week until the next milestone, and we'll see if it changes the behavior of the ATH builds. If it doesn't change the behavior, that means we have to search for the root causes, because some elements feel like the same problem we have with the BOM builds, as if some threads were stuck while trying to garbage collect or manage the agent connections.
A: And finally, if removing spot instances improves things, then we will have to study carefully the size of the instances, because, as pointed out by Hervé, we chose that instance size because as spot it's nine times cheaper with a low eviction rate, but as on-demand it will clearly be way more expensive. So we have to check whether, without spot, it's still the best fit for this workload.
A: Oh okay, no, I can take care of this one. I might need help, Mark; just in case, I might ask you for help tomorrow, if that's okay for you.
A: Okay, next issue is ci.jenkins.io failing for a Jenkins plugin after changing the Jenkinsfile. I'm a bit annoyed by this one, to be quite honest. We discussed it; it's the third issue, as far as I can tell, from that plugin maintainer. They say they have a problem, we work on different elements to fix their problem, and we don't get any feedback until two or three weeks later, when they open a new issue.
A: But since this user hasn't carefully followed the repository-permissions-updater elements, they have managed their whole system themselves, and because that plugin was a fork migrated to our organization, there might be a combination of parameters at play. That would help us a lot if the user were responsive, which they are not. So I propose that we either keep that issue open or close it. If we keep it open, we have to tell the user: please, we need your help, because it's a weird case, and you need to be responsive.
A: We cannot fully help, because it's a bit infra, a bit jenkinsci organization administrator, a bit ci.jenkins.io administrator; I mean, it's cross-team, so yeah, we need them to help.
B: It was that ci.jenkins.io did not realize they were an untrusted person. Okay, so that's different from what I had. I had a case where GitHub thought I was not trusted, or Dependabot thought I was not trusted, but then I was allowed to merge the pull request myself. So my situation is different; thanks, nothing to do with this. Okay.
A: The suggestion from Tim makes sense, but we need the user to answer. So is that okay, if we keep it open, and we don't expect the Jenkins infra team to work on it unless there is something that points to the ci.jenkins.io setup? I mean, that's the only user, so there is something really odd here. I guess Tim had the right feeling, but yeah, I don't see anything we can do to help them more than what we did.
A: Next issue: Artifactory bandwidth reduction options. Can you give us a quick summary of what changed during the past week on that topic?
B: Yeah, so I had a discussion with James Nord, where he identified a very rapidly implementable technique to reduce bandwidth use without requiring a release of all POM files and without requiring adoption of new POM file releases for everyone, and so I've scheduled a session for tomorrow to discuss it in more depth. It's described in this issue ticket. The idea is: we just password-protect our cache of Maven Central.
B: That's the only thing we password-protect in this effort, but that is by 10x the largest data volume of any repository that we cache. So it's that one change, and because that repository is automatically included as a fallback by Maven, our stopping caching it will just cause the builds to revert to asking Maven Central for the artifacts, which is where everyone else asks anyway.
B: That only works for exactly one repository, and it happens to be a major, high-visibility repository. It doesn't work for JGit, it doesn't work for many others, but for this one it just so happens that the data volume from Maven Central in our measurements is 10x greater than the next largest mirrored repository. So it's a very interesting choice. If this works, we should implement it.
B: The question then from the security team, or the question we'll broach with the security team tomorrow, is: is it okay that we're relying on Maven Central for artifacts, and that we are requiring ourselves to pass through potentially other repositories in the search process? Immediately it will just be repo.jenkins-ci.org falling back to Central, but if JFrog requires more than that, then we have to put in intermediate steps, and there's some danger there on the supply chain side. It feels comfortable, feels reasonable; we'll discuss it further tomorrow.
C: So, I'm sorry, I have another dumb question, which may be linked. You say that it would be password-protected from now on. Okay, so who will use the password? I mean, will our Jenkins plugin builds use that password, so that it doesn't change anything, or will we also use Maven Central for the Jenkins plugin builds, which means that our builds could take some more time?
A: A user will never be able to see the password and get it in any way; it will be hidden somewhere that is not reachable. They will only be able to get the password to connect to ACP today. That means they could reuse it at home and use ACP from outside, but we should quickly shift to ACP being restricted only to ci.jenkins.io agents.
A: One thing, Mark: we should be careful about the tests suggested by James.
A: We need to do that test as a first layer, but we still need to plan a brownout for when we enable authentication only on the upstream Maven Central repo inside the tree, because the mirrors will return a different HTTP answer once they ask for a password, and the fallback behavior of the whole search system could change if it receives a different answer from the remote. I'm sure James is absolutely right, and that should be enough.
B: Wholehearted agreement there; we must not implement this without a brownout first, and we may need multiple brownouts to be confident that it's behaving the way we expect, absolutely. There's no question in my mind that once we get agreement on the concept, a brownout is the next step, and then we define, I think carefully, the things we want to test during the brownout. Because do we need to make artifact caching proxy changes in order to do the brownout, or can we run it unmodified?
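One way to frame what the brownout needs to verify is the difference in HTTP answers clients will see: a resolver that falls back to the next repository on a 404 may behave differently on a 401 or 403. This is a hedged sketch; the classification below is an assumption to validate during the brownout, not documented Maven or Artifactory behavior.

```python
# Status codes the cache could return once password protection is enabled.
FALLBACK_EXPECTED = {404}        # "not found here, try the next repository"
NEEDS_VERIFICATION = {401, 403}  # "denied": fallback behavior must be tested

def brownout_verdict(status: int) -> str:
    """Classify a response from the password-protected Maven Central cache."""
    if status in FALLBACK_EXPECTED:
        return "fallback assumed"
    if status in NEEDS_VERIFICATION:
        return "verify fallback during brownout"
    if 200 <= status < 300:
        return "served from cache"
    return "unexpected; investigate"

for code in (200, 401, 404):
    print(code, "->", brownout_verdict(code))
```

During the brownout, recording which of these codes the proxy actually emits, and whether builds still resolve from Central, would answer the question raised above.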
A: Cool, so that's good news. I don't say we shouldn't have a highly available LDAP, but at least we won't be forced into doing it at the last minute, if this is working. Next: ATH builds failing due to denied outbound requests during tests. That one is a consequence of the ci.jenkins.io virtual machine being migrated to a new virtual machine two to three weeks ago; we also moved the agents, which changed region, subnets, and connection method from SSH to inbound.
A: Eventually we could think about allowing SSH to the github.com public IPs, or eventually some GitLab, but the goal is to be sure that we don't have agents trying to do weird things. It's not absolute, of course; one can forge a weird thing through it. I mean, I've used SSH through port 443 to bypass firewall rules in my previous organization, so I don't say it's impossible, but defense in depth states that, by default, if you don't need it, forbid it.
A: Also, the big deprecation message "please avoid using public GPG key servers" is a message that should say: hey, avoid key servers using the HKP protocol; instead, let's copy the public GPG key next to the Dockerfile and use it. All of this information has been passed along; I need to check, but that one is closable.
A: Any questions, things unclear, suggestions, objections on the whole firewall, HKP, ATH thing? Okay. Next issue: AWS summer 2023. I've closed the previous issue, which was spring 2023, because we were already able to decrease the bill. So now here is the usage of June 2023; that's a follow-up issue. The next actionables we have: there are four for that summer. For me summer runs until mid-September, so for the upcoming month and a half, more or less, we'll have to work on these four issues. They're a kind of epic issues.
A: One is what we already described: moving the AWS machine away to decrease the outbound egress costs. That's the updates.jenkins.io update center index moving to Cloudflare and elsewhere. As you can see on the diagrams, that's the data-transfer-out bytes; I haven't given more details, it's on the other issues, but it's only the updates.jenkins.io virtual machine.
A: Please note that we also have pkg.origin.jenkins.io, but since that one has the Fastly CDN in front, it shouldn't be an egress consumer; it shouldn't generate a lot of outbound bandwidth because Fastly protects us. That one should also migrate to Azure; that's a separate topic, but part of that bullet.
A: Also, we have two more quick wins: we have two virtual machines, for the services usage.jenkins.io and census.jenkins.io, that could be moved inside Kubernetes or to other virtual machines. We need to evaluate both services, but these are running on AWS and there is no need for that. Since the Azure billing is now way below the limits, we can afford moving these machines, and that will make us less coupled to AWS. Mark, we have a question: do you remember what the purpose of the census.jenkins.io service is? Because we are not sure.
A: No worries then, it was just in case you had it in mind. So that means, for the whole team: we want to check census.jenkins.io. usage.jenkins.io is quite easy, as there is already an issue open for that, for years, by Olivier himself. So I think we can start working on these two and then we will evaluate the rest. Thanks, Hervé. Now, for the issue about the S3 artifact caching, as we saw two weeks ago: basically, with the S3 artifact manager on ci.jenkins.io:
A: We should not let the plugin delete artifacts when the build is rotated, for a lot of reasons, so we disabled that behavior, which is the default and recommended behavior. That means we need to create a garbage collector system in the bucket where the ci.jenkins.io artifacts are stored, and we should define rules such as: delete any artifact that is older than one month, for instance. The goal for us is not to have a bad surprise in a year and a half with the AWS billing.
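The garbage-collector rule described here maps naturally onto an S3 lifecycle configuration. A hedged sketch, with placeholder bucket name and prefix; the boto3 `put_bucket_lifecycle_configuration` call that would apply it is shown commented out.

```python
import json

# Lifecycle rule expiring build artifacts after 30 days ("older than one
# month"), plus cleanup of interrupted multipart uploads. The prefix and
# bucket name are hypothetical placeholders.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-build-artifacts",
            "Status": "Enabled",
            "Filter": {"Prefix": "artifacts/"},  # placeholder prefix
            "Expiration": {"Days": 30},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-ci-artifacts", LifecycleConfiguration=lifecycle)

print(json.dumps(lifecycle, indent=2))
```

A lifecycle rule runs inside S3 itself, so nothing on the Jenkins side has to enumerate and delete old objects.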
A: So yeah, these are the issues. I propose that we keep that top-level issue on the milestone; we have the updates.jenkins.io work, and then we will consider the others. Is that okay for you? If anyone has time, you can start on the other issues, but right now the first one is the top priority, because it's the most visible. Any questions, objections, things unclear, suggestions on that topic?
A: Unless you want to pair with Stefan or with me, or take that topic, of course; but the goal is this one. The second parallel task is to take all the elements that were created, see if the Docker image is built or if it's missing something, like the proper pipeline function on infra.ci or the proper permissions on Docker Hub; check all the elements until we have a valid image published on each tag creation on the repository.
A: The pull requests that we cannot merge are piling up on the Puppet module, because they require Puppet 7. So Puppet 7 will be a topic to treat in August; I don't think we should focus on it in the upcoming milestone, and we can defer it to the month of August, if that's okay for all of you. The second machine not on Ubuntu 22.04 is the pkg update virtual machine that we mentioned earlier.
A: The goal is to move the whole package generation, including the 500 gigabytes of packages, from AWS to Azure, and so the process that we use on release.ci to build and package the core of Jenkins should be able to run all its steps locally inside the agent, instead of one of the steps being a remote SSH command.
A: Once it's moved to that agent, we can take care of changing the Docker image from Ubuntu 18 to Ubuntu whatever, or whatever system with the proper tooling. But right now we need to migrate that data; that's the first step, and that will require moving the pkg.origin.jenkins.io service to a new Apache service on publick8s, with the same ideas as what we described for updates.jenkins.io.
B: It's that Daniel Beck asked a question that needs to go to the Linux Foundation, and I've got to raise the question to them. As far as I can tell, the issue is absolutely resolved. Daniel's question was: does the Linux Foundation, for future cases, need to be sure that their caching system does not cache too aggressively?
B: Okay, so the cache lifespan, the cache time-to-live, may need to be reduced. I had seen something similar in the hour after the Linux Foundation had brought issues.jenkins.io back online, but this report from this user was a day later. Neither Daniel nor I could duplicate it, but it's worth raising the question to the LF. I have the action to do that.
A: Okay, cool. So that means we can keep you assigned to the issue, move it to the next milestone, and wait for the feedback. Cool, yes, thanks for the clarification. All right, take care of it, thanks Mark. So, in the list of new items, whether to be triaged or to be considered for the upcoming milestone, there is the open topic of moving the public IPs of the cluster that we mentioned earlier, the IPs that were deleted.
A: Once we know the effects of moving the public IPs and updating the load balancer annotation so it knows where the new IP is located, and we know the behavior of the load balancer, we can plan the operation on the current public IPs. That might be a brownout; it will be worth planning and announcing it, because it could cut connections for one or two minutes. The goal is to add this to the current backlog.
A: Okay, next one: mirrors.jenkins-ci.org is missing some necessary metadata files, which prevents it from being added as an APT repo. I forgot to paste the link, but I will update that. I wanted to mention this one because we told the user we will delay it. So right now:
A: This issue is blocked because we first need to migrate pkg.origin somewhere else, and the part about the repo is complicated, because we need to be able to provide a CDN for the packages inside China and other networks, and there is an issue related to the amount of data that is stored on the mirrors. We have archives.jenkins.io right now.
A: That acts as a default fallback, but it seems that the APT repo pattern that we use only has indexes on the top-level domain, not on the mirror domains. So there might be some elements, such as allowing the mirroring of the package indexes, but that problem is the same as for the update center.
A: It means that these indexes must be updated almost immediately when we change them due to a security release, like tomorrow's. That's why Olivier never went down that road. So that issue is blocked by the work that everybody is doing on updates.jenkins.io, because that means we could have our own instance of updates/packages with a mirror redirector system that we control.
E: I think it's a low-priority issue. Okay, it's less than ten dollars per month.
A: Good point. Do you mind adding a comment on the ATH issue that James opened? So that next week, when we check the feedback from the one week without spot instances, if we want to persist with that, we will have a message to remind us that we need to implement the garbage collector along with that change.
A: So the primary topic is to check which services can be migrated, like javadoc, to the arm64 node pool to decrease the operational cost of the cluster. So you have to put your hands back on the subject, write things down, share with the team, and work on it once you have located the services, including announcement, validation, etc. And the second one is working on the node toleration maintenance.
A: Just one last one before the IP restriction: a Kubernetes cluster-admin service account for infra.ci, defined as code. That's an issue I opened as a consequence of the Kubernetes 1.25 upgrade; I realized that I never shared the shell script. When we create a brand new Kubernetes cluster, we need a technical user with administration permissions, so that we can have infra.ci.jenkins.io installing and managing charts as administrator of the cluster. That one was most of the time created manually by someone named Damien Duportal, with a shadow shell script on his machine. So the goal of that issue is that it should be done by Terraform when Terraform creates the cluster.
Once the cluster is created, it should create the accounts and prepare a sensitive output that we can immediately put into infra.ci.
However,
we
still
have
two
services
that
we
identify
that
are
running
inside
the
Whole
Net
public.
Whatever
Network
one
is
VPN,
Jenkins
IU,
it's
easy.
We
just
want
to
get
rid
of
that
service
and
forget
about
it.
So
remove
the
virtual
machine,
the
code
and
Etc.
We
still
need
to
clean
up
resources,
so
an
issue
is
required
for
removing
that
service
answered
the
CIA
jenkinsayo,
which
is
the
Jenkins
controller
used
by
the
Jenkins
security
team.
A: I chose to defer the deletion of this one because of the security release tomorrow, because they require that controller to be able to work. That's why we didn't work on it two weeks ago. So now, once they have done the security release, if we don't have any security release, even privately shared with them, in the upcoming two weeks, then we can migrate cert.ci to the new network and clean up all the resources of the old network.
A: Networks: the reason is that this issue is about trusted.ci, which is in its own virtual network, a brand new one somewhere, which is neither the public nor the private one that we have today, but another one, and the fact that we want to restrict SSH access to that trusted.ci. We have the same pattern for cert.ci, but the restriction is not the same; we don't have the same people on both services.
A: We have an overlap: Daniel Beck or Wadeck needs to reach both, but not every member of the Jenkins security team needs to reach trusted.ci, and not everyone from the infra team, or at least our usual contributors, should reach cert.ci. So they need either separated subnets inside the private network, or their own virtual network. In any case, today cert.ci users reach cert.ci using the VPN.