From YouTube: 2023 01 24 Jenkins Infra Meeting
A
B
I haven't checked the rest of the checklist because, with ci.jenkins.io down, we can't merge the documentation change. But I assume Kevin will, or has already, updated it to confirm the weekly changelog. Okay, we just need to wait for ci.jenkins.io to return before we finish that job.
C
A
Next security release today: it was announced yesterday on the mailing lists by Daniel. It's currently being run, so the binaries were deployed one or two hours ago. ci.jenkins.io is currently down; I assume they have a checklist, and until the checklist is finished they won't bring ci.jenkins.io up.
A
So it works. We created the new VPN only for the new private network; right now it's only for reaching infra.ci.jenkins.io.
A
We manually added members of the team: Stefan, Hervé and I as the first wave, and then the second wave included Alex, Mark, Gavin, Daniel and Wadeck. I haven't checked with Daniel and Wadeck; they are not going to access infra.ci often, but they said it made sense for the upcoming future.
A
Why
don't
we
migrate
everyone
automatically
it's
a
kind
of,
let's
say
every
three-year
cleanup.
We
have
a
lot
of
users
that
have
access
to
the
current
VPN
that
will
be
Legacy
soon,
I
hope.
So
we
try
to
track
down
and
audit
the
person
that
really
need
access.
Now,
it's
a
kind
of.
If
you
don't
need
access,
your
access
will
be
deleted
in
the
future
and
there
is
no
problem
on
asking
for
an
access
if
it
Justified.
A
So we don't bother people, but we also don't keep people that are not using it or are no longer participating in the project. Mark, you confirmed that it works, so I'll close the issue. Were you able to use the new certificate that you generated, whose end of life should be in one year, no?
B
A
Any question on that topic? No? Okay. Just a reminder that the new VPN does not require you to have a new certificate: you can reuse the same private certificate that only you have access to. The only technical item we have to do is add a per-user config that allocates a private IP in the virtual network, for you and only for you; then we rebuild the new image and deploy it. And then you have access, but you keep the same authentication and the same certificate. As an end user, it's easier for everyone.
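For illustration, the per-user allocation described here can be sketched as an OpenVPN client-config-dir entry (the directory layout, username and subnet below are hypothetical, not the actual infra values):

```shell
# Hypothetical sketch: OpenVPN reads one file per client common-name from a
# client-config-dir; "ifconfig-push" pins a static private IP for that user.
mkdir -p ccd
printf 'ifconfig-push 10.8.0.12 255.255.255.0\n' > ccd/jdoe
cat ccd/jdoe
```

Rebuilding and redeploying the VPN image with the new file is then enough: the user keeps the same certificate and authentication.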
B
C
E
A
Thanks, Stefan. Okay, so next topic: Kubernetes management, jobs stuck in the executor queue. So last week we migrated infra.ci onto the new network, onto the new cluster. That was quite the migration, and that kind of migration is never without road bumps, because infra.ci manages itself. It's a chicken-and-egg issue, so we had to bootstrap from scratch. Of course we always forget about things, especially since the previous clusters were set up manually — and that was the case for years and months — while the new cluster's history is fully as-code.
A
Thanks to the work of the team. Which means that, yeah, of course, we had some changes. One worth mentioning is the amount of available IOPS for the data directory used by the Jenkins controller, which is IO-bound: the number of IOPS was clearly lower than it was in the previous setup.
A
We thought that by using the same setup we would get the same result, but it appeared that we might have done some things manually. The result was that, when trying to deploy a new image again — as we usually do for plugin or core updates, at least once a week — the time required for the whole system to deploy the new image, restart the controller, and have the controller reconnect to the agents was greater than the timeout of the task itself.
A
So it was immediately rolled back, and we kept having one and then two restarts, leading to a lot of pipeline issues: while trying to check the state of a pipeline, or restoring, retrying or continuing a pipeline, it was suddenly stopped again by an external signal. That was clearly corrupting the pipeline state XML files on the file system and creating a lot of IOPS, and the more it recurred since last Friday, the more the amount of available IOPS decreased.
A
We still have to watch carefully over the upcoming two weeks whether we get bursts, because maybe we will need to increase the QoS: you can increase the quality of the device, and also the QoS, meaning way more IOPS, but it costs a bit more. That's why we go step by step. Also, as a team, we spent the whole day yesterday adding other safety measures; the timeout period for the Jenkins infra controller itself, because it's a sensitive topic, is now greater.
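For context, on Azure the IOPS quota of a managed disk follows its performance tier, and raising the tier is a one-line change (resource group and disk name below are placeholders; higher tiers cost more, which is the trade-off mentioned here):

```shell
# Hypothetical sketch: raise the performance tier of an Azure managed disk to
# get more baseline IOPS without resizing it (tier names are P10, P20, P30, ...).
az disk update \
  --resource-group my-rg \
  --name jenkins-data-disk \
  --set tier=P30
```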
A
We might benefit from upgrading the pipelines to use retry in the future, but right now we stopped there because we had other pressing concerns. So it has stabilized; it hasn't been an issue, and we will finally validate a second time with today's weekly release, because we'll generate a new Docker image. So later today, or tomorrow, we might crash the controller again — or not. I trust that it won't, because Stefan and I tested it late yesterday, so that should be okay.
A
So
thanks
Alex
for
opening
that
issue,
I
see
that
Alex
is
an
early
person
because
he
opened
the
issue
way
before
I
even
woke
up.
So
any
question
on
the
topic:
okay,
there's
been
VPN
access
request
for
Adriano,
we
haven't
finished
the
task,
yet
another
issue
was
open.
The
goal
is
for
him
is
to
be
able
to
access
the
database
used
for
plug-in
IL
scoring
a
VPN
was
the
first
step.
It's
not
an
easy
task,
so
we
are
currently
working
on
it,
but
at
least
he
has
access
to
the
VPN
access.
A
There's been an issue about Jenkins Artifactory not being accessible. It was a user request, and in fact the user had an issue on their own side, a problem on their end. So yes, they recovered their access and confirmed it's okay — nothing more about this one. Same for someone who was blocked by the anti-spam system.
A
So we never received an answer from that person, so I created the account with the email they provided — which should have received the notification — and closed the issue. No answer, so yeah, I guess they are using it; otherwise it will be an empty account for nothing. And the archived components: nothing to say about this one. Any question about the work we did? Okay, now status: just a reminder about the top priority, following the meeting that Mark had with JFrog last week. — Yeah, sorry, I fell asleep on the couch, I'm…
A
Really sorry, so I wasn't able to join; it was too late for me, my bad. But the outcome is that, first, we now have metrics from the new JFrog platform. These metrics are retrieved by Mark, and we should have them weekly. They cover all the requests, so we have an evaluation of the outbound bandwidth and of downloads per repository and per IP.
A
First thing JFrog told us: blocking IPs — Mark, stop me if this is wrong — blocking IPs might be counter-productive, or at least ineffective, because in most of the patterns they see, as soon as they block an IP, either globally or per repo or whatever, people can use another public IP, and it's a cat-and-mouse game. So that would be ineffective for us, which means we still need to work on enabling authentication on the normal repositories.
A
So that's the top priority for JFrog: decreasing the outbound bandwidth there, because JFrog needs us to get under — yeah, to show that we make progress. The second topic, based on the data we received: it looks like the amount of data downloaded from the Jenkins side is clearly greater than expected, which means the ACP project — the artifact caching proxy that we worked on in the past months — is clearly going back to the top of the priorities, at the same level, because that's something where we can immediately tell JFrog: look.
A
Our goal is to check whether the current ACP instance in DigitalOcean is responding correctly — is it currently working — and we are going to try to start using it as the default one. Even if it creates, in the short term, a bit of cross-cloud bandwidth cost for us, it will clearly be important to show figures to JFrog.
A
Secondly, Stefan and I also plan to work on fixing the AWS ACP instance, because we're only missing a certificate; that should be easy to fix, and then we will have two. So at least we will have the AWS agents — which consume a lot — using ACP in AWS and in DigitalOcean. Based on which of the two works best, we could even decide — that's a good tip that Mark shared with me last week — to change the amount of workload between AWS and DigitalOcean: we can control the amount of pods.
A
The thing is, it's related to the ACP in Azure: with the IP overlap on the current public network, we need a brand-new set of networks. Private was done, public got started, and we don't need to migrate everything immediately to get started with a new ACP in Azure — we only need the new cluster that Hervé is working on, with the new ACP instance.
A
So
that's
all
we
divided
the
work
that
will
be
the
top
priority
for
us
in
the
upcoming
week.
The
rest
will
be,
let's
say,
a
day-to-day
work,
but
we
should
not
try
more
tasks
in
that
context.
So,
first
of
all,
lyric
and
I,
can
you
give
us
a
status
of
private
Gates
migration,
so
the
cluster
itself.
D
A
So release.ci will be quite the topic, in the sense that we will need to create, in the new cluster, node pools that used to be on the public cluster. We already spent some time making sure that we can run two Jenkins controllers — infra.ci and release.ci — in the same cluster, with different agents in different namespaces, just to be sure that we have good naming conventions and good isolation between everything. That will require a bit of infrastructure work, and also communication to the release team; we will drive the knowledge sharing here, because we will have to communicate to them the requirements for the next weekly and the next LTS. I'm not sure we will be able to do it before the upcoming LTS, but if we can, that will be a nice thing. It depends, though, on the ACP results that we will have in the upcoming days, because I would prefer to run an LTS .3 version on a new setup rather than a brand-new LTS — for me that feels safer, because, yeah, it's not the same amount of moving pieces.
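A minimal sketch of the isolation idea: one namespace per controller and per agent pool, so the two controllers share the cluster without stepping on each other (the namespace names below are hypothetical, not the actual conventions):

```shell
# Hypothetical sketch: generate one Namespace manifest per controller and per
# agent pool; RBAC and NetworkPolicy objects per namespace would complete it.
for ns in infra-ci release-ci infra-ci-agents release-ci-agents; do
  cat > "${ns}.yaml" <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: ${ns}
EOF
done
ls *.yaml
```

Applying these with `kubectl apply -f` would then be the cluster-side step.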
D
It has just been created. I'm working out a little issue around the storage class, as the Terraform job doesn't seem to be able to connect to this new cluster. But that doesn't surprise me, as I haven't had the time to put its credential in the secrets, so I've temporarily commented this section out.
D
It's a work in progress, but progressing all right.
D
I can connect to this new cluster; I will deploy an ingress on it and…
A
I got one question, because it's on that topic. We have a stack that Olivier deployed and used over months and years: metric-collection and log-collection stacks — metrics only, I believe — built with Prometheus and Grafana.
A
All that stack is inside the current prod publick8s and is collecting metrics only for the cluster itself; it was built when it was the only cluster. We haven't used that stack for at least a year, even for double checking, because we now have Datadog collecting everything successfully, and we use Datadog.
A
The reason for having Datadog and Grafana in parallel — per Olivier's feedback when I started working with him on that topic — was just in case Datadog got rid of us. However, right now the Prometheus deployment is absolutely broken: it doesn't work, we don't have metric collection anymore, and Grafana had been down regularly for days before I realized — which means no one is using it.
A
So, since that setup is using storage and compute cycles and would have to be migrated to one of the new private or public clusters, I have a proposal that I just want to validate with everyone here — and I also want to double check with Olivier one more time — but I plan to propose deletion of this system. A clean, soft deletion: it's still running on the current cluster and I won't delete it automatically, but we won't create new deployments on the two new clusters.
A
D
A
Yeah, I vote for not migrating them, personally. I don't say it's not useful; I just say we already have Datadog, and the day we reinstall that whole stack we will have to plan it and write down the reason why we did it this way: it had been deployed manually, then imported manually into Helm just to be sure it's inside Helm, but there have been so many breaking changes in the Prometheus and Grafana charts that we were not able to keep up. So that's why. Is it okay, then, to erase it — is it okay for you not to migrate it? We'll see Olivier's feedback during FOSDEM then.
A
No
okay,
if
you
add
a
VPN
for
public
network
I,
propose
to
delay
that
one.
The
use
case
for
such
a
VPN
will
be
to
allow
people
such
as
to
access
the
kubernetes
services
that
are
in
the
public
network
without
granting
them
access
to
the
infrastia
release.
Ci
private
Network.
A
D
A
Another task that I propose to keep working on, on trusted.ci: we did some work along with Stefan. We created brand-new multibranch jobs to check for the tags, and to build and deploy the tags, of our three Docker agent images.
A
The
goal
was
to
avoid
that
hold
tag
being
rebuilt
and
deployed
as
the
latest
image,
because
each
tag
is
deployed
as
the
tag
itself
and
that's
the
latest,
because
we
expect
a
tag
to
be
on
a
built
only
once
while
playing
around
so
Stefan.
Do
you
want
to
explain
the
root
cause
that
we
found
around
git
pulling?
Are
you
okay
or
do
you
want
me
to
explain?
Nah.
A
That's cheating, okay. So, some of the old tags came from an age when a multibranch pipeline wasn't used for that repository, and they had a keyword around git polling; and once you start git polling, then regularly — in a multibranch pipeline the leaf job is just a pipeline job — it is polling by itself.
A
You can set whatever settings you want at the multibranch-scanning level to say "don't rebuild this one": the polling keeps existing inside the sub-job as long as the job exists. So that's why we went on to create brand-new jobs, to see if that avoided the polling. Pro tip: we almost did it, except we still don't understand why some of the jobs were triggered at least once, starting the polling again on the new jobs — but just a few — and we did a nice trick.
A
E
I see — it's a bit like a cron job at the user level instead of the system level: you don't see them, but they are running behind your back at the user level, and in that case they apply to each pipeline instead of the whole thing. So whatever you do on the multibranch, it doesn't purge the one which is programmatically set by the cron at the pipeline level — am I right?
A
That's correct. So we haven't seen the parasite jobs since last week. Now, the next step, for the upcoming seven days: the old jobs were disabled and we have the new ones, so we will have to carefully watch the incoming Docker agent images before closing the issue. There should be new tags soon, because we had operating-system updates on the three images.
A
None? Okay. For the renewed signing certificate for Jenkins, I propose to delay it by one week, until FOSDEM, since we will physically meet with Olivier there. Is it okay, Mark, if we delay the work that Juan Hai had to do? I will ask him and interview him about the whole certificate signing and the constraints, then we report back on the issues so we can act. The end is in March, so that leaves us February to act on this one.
B
A
Let's ask Olivier directly. Any question? Okay — so, the Playwright agent image. Thanks, Stefan, for the job you did here; can you remind me what the next steps were?
E
A
So Gavin did everything on the Docker builder side, so that's okay. You opened the pull request on the Node.js image, which tests that npx playwright can be installed without requiring sudo — the initial issue — and as far as I can tell, I released a version of the Packer images that has the Playwright stuff.
A
It is deployed on Kubernetes, but not on ci.jenkins.io — also almost done; we'll do it once it's merged. By the way, expect in the upcoming week an update of the three available JDKs, because JDK 8, 17 and 11 have been updated. Pull requests are already opened; we have a minor testing thing to fix, but that will mean a new release of the Packer images, 56.0, with the updated JDKs.
A
The issue "source index.html" goes back to the backlog; I didn't have time to focus on this one. It's a request from Gavin: there is one index.html file on an Azure bucket, and if we could source-control it, we could update it easily. That requires adding a new workflow, process or pipeline that will deploy it to Azure from infra.ci. It's not complicated, but it requires a bit of time that we don't have right now.
A
There are four of them; I keep it in the current milestone. We need to check for the duplicated cluster.
A
We were waiting for today's advisory, and then we will test the deployment on trusted. Be aware that it will have an impact on all the virtual machines using Let's Encrypt, because we changed the Let's Encrypt profile that we use: it will be able, in the future, to support the DNS challenge instead of HTTP, which will be nice for the private machines such as trusted.
A
That
will
be
the
first
one
so
one
when
we
will
deploy
this
pull
request,
please
don't
merge
it
because
we
will
have
to
first
disable
to
pay
attention
everywhere
and
then
enable
It
Machine
by
machine.
So
first
testing
on
trusted
CI,
then
on
other
machines
to
see
the
impact.
The
main
impact
on
existing
configuration
is
third
bot
will
be
upgraded
from
0.23
to
1.27..
A
No
breaking
change
if
you
are
using
HTTP
challenge,
which
is
the
case
everywhere,
trusted
the
first
one
use
DNS
challenge
in
the
poop
head
setup,
but
we
never
know
I
might
have
messed
things
up
despite
the
testing.
So
that's
why
doing
one
machine
at
one
machine
will
avoid
propagate
Mayhem
but
be
aware,
don't
merge
that
pair?
Please.
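For reference, a DNS-01 request with a modern certbot looks roughly like this (the domain and the use of the manual plugin are illustrative; 0.23 predates parts of this workflow, which is part of why the upgrade matters):

```shell
# Hypothetical sketch: request a certificate via the DNS-01 challenge, which
# works for private machines that expose no public HTTP endpoint.
certbot certonly \
  --manual \
  --preferred-challenges dns \
  -d trusted.ci.jenkins.io
```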
A
B
Yeah, let me — we've got that doc, I've got that Google doc that you and I had started. Let me start there; let's put it there. It's a good place for me to think about detailing out all sorts of things, because we've got to give an announcement, and we've got our goal: by mid-February we do our first brownout, by mid-March a second brownout, and we go live by end of March. So yeah, I'll be happy to take that action, you bet.
A
Okay. So on my side, I'm enabling again — so, searching again for a solution for the HA LDAP.
A
And I'll share it with the team, and then — I don't know who will try to implement it, or I can hand it over, or we can pair — but right now, if it's okay, I prefer working with Stefan on the ACP for these two; then I can delegate tasks to Stefan, and I will do the initial work for HA, unless someone wants to take this one — I don't mind. That's only a proposal, just so that we divide the work.
A
We need to help Adriano with the VPN access, to access the database by IP; except we need to add the correct VPN routing in his configuration. That's the element that Stefan proposed to show Mark about the new VPN; it will be the same technical element, but on the former VPN we will need to add routing rules on the client side — on everyone's client side. That could be interesting.
A
Basil opened an issue; I wasn't able to look at it, but we need to look at it in the upcoming days, because it's blocking him, as a user of the platform, from releasing a new Jenkins test on this HtmlUnit — I'm not sure what this project is or how it works. At first glance, it looks like the automated credential in the CD process wasn't up to date when it tried.
A
Okay, so in both cases it was the repository permission updater. That's a job that runs every three hours and updates the permissions, because the credentials are only valid for four hours; it runs every three hours, so maybe there's been a discrepancy in the timing. So, good catch. Is it okay if we add that one to the upcoming milestone? Because we should unblock Basil — maybe it will be nothing, worst case.
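The timing window here is easy to state: with credentials valid for four hours and a refresh job every three, the safety margin is only one hour, so a delayed or failed run can leave a stale credential in place. A tiny sketch of that arithmetic:

```shell
# Credential TTL vs. refresh cadence: worst-case credential age at refresh
# time is REFRESH seconds, leaving TTL - REFRESH seconds of safety margin.
TTL=$((4 * 3600))
REFRESH=$((3 * 3600))
MARGIN=$((TTL - REFRESH))
echo "margin_seconds=$MARGIN"
```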
A
Yep, and that's a really, really good help for the mirror system. We had an issue from one of our users that had been closed and then was reopened. That user uses a tool that checks the metadata of the RPM repository, and that RPM system lists a bunch of versions of Jenkins. In their case they are mirroring everything — that's the behavior — and some of the old versions are not mirrored anymore. However, we should have at least the archive mirror referenced on the mirror, so I'm not sure what fails and how.
A
My first guess is that we might have a synchronization script that only copies files that are not older than a certain number of months, and then you get a sliding window of files copied to the mirror reference. I guess for the RPMs we should always have all the old RPMs in that case — and same for the Debian and old packages, they should always be available — so I'm not sure why it's failing, but I'll check.
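The suspected behavior can be reproduced with a `find -mtime` filter, which is a common way such sliding windows appear in sync scripts (the paths, file names and 7-day cutoff below are hypothetical):

```shell
# Hypothetical sketch: a sync that only copies files modified in the last
# 7 days silently drops older packages from the destination mirror.
mkdir -p src dst
touch -d '30 days ago' src/jenkins-2.100.rpm   # old release
touch src/jenkins-2.387.rpm                    # recent release
find src -name '*.rpm' -mtime -7 -exec cp {} dst/ \;
ls dst
```

Only the recent package lands in `dst`, matching the symptom the user reported.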
A
B
A
B
A
The mirror system — the reference mirror system — is not a VM; it's inside get.jenkins.io, in the Kubernetes cluster. There is an Azure bucket with all the files, like a one-terabyte bucket, so that bucket should not be cleaned of all the RPM or Deb packages, right? So the hash of the file is still in the mirror database, but the only mirror serving that one would be archives.jenkins.io, because it has the files.
A
Thank you. So that's why that issue exists, and that's the reason why the team reopened it: it is an issue, but it's an edge-case issue, because we don't have a lot of users doing that. So I told the user mid-February; we should not forget about this one, but I propose not to work on it, because we have much more pressing topics.
A
So we are in the campaign of handling a GitHub Actions deprecation: if you had that string printed on the stdout of a custom action — same thing as a shared library — that mechanism was deprecated in favor of writing key=value pairs into the file whose path is provided by the GITHUB_OUTPUT environment variable. That's what GitHub tells us to do. But in the case of that GitHub Action, the Go command line, if running in GitHub Actions, printed that string, and we can see that the string is still treated as valid.
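The migration described here amounts to replacing the deprecated `::set-output` workflow command with a write to the `$GITHUB_OUTPUT` file. A self-contained sketch, faking the file that GitHub normally provides:

```shell
# Deprecated: printing a workflow command to stdout.
#   echo "::set-output name=version::1.2.3"
# Replacement: append key=value lines to the file GitHub points to via the
# GITHUB_OUTPUT environment variable (faked with mktemp here for the demo).
GITHUB_OUTPUT="$(mktemp)"
echo "version=1.2.3" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```

Any step or custom action printing `::set-output` needs the same one-line change.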