From YouTube: 2023 06 06 Jenkins Infra Meeting
A: Too long, didn't read: we had an issue with the LDAP, obviously, per Murphy's law, at the worst moment of the release publication, which led us to a partial release.
A: So, instead of trying to fix that and forgetting half of the things, it has been decided, with the help of the usual release suspects as I named them, to trigger a brand new release and declare it 2.408, which was easier, and it was resolved in one hour or less. Thanks everyone for the support on this one. That's all; we are watching it, and then the usual steps.
A: Another announcement: in the upcoming days we're going to proceed with a migration of the LDAP and get.jenkins.io mirror public services to a new cluster. The date and time will be announced, but I think that's important enough to mention, because some of these services could be critical to end users.
C: Nope, 2.0... oh! Actually, yes: get.jenkins.io is going to be used by Docker containers.
A: It will be announced with the Jenkins Docker image, yeah.
C: And it will release on... let's see: we are, I believe, at release candidate in one week and final release in three weeks.
E: Yeah, I'm helping; getting arm64 to be known by more people is a good thing from my point of view, of course. Yeah, congrats.
A: Stefan, do we have another issue, or are you okay to create one? If we don't have one, we will list the upcoming candidates for that migration to arm64, because I'm sure we have other services there.
A: An issue to open? I don't think so; I don't remember. So your role is to check whether we already have one, and if we don't, then you will work together to select the others. The criteria will be: already migrated to the new cluster, and requiring synchronization with Hervé, to be sure we don't step on his toes. We can wait for the end of the full migration if we don't feel comfortable with that.
A: Given it's being worked on right now, it will be hard to really have an order of magnitude until we have finished everything, and it will be hard to distinguish whether we pay less because of the work everybody did on selecting the proper node pool topology, the proper network, and the proper machine size for the node pools, or whether it's related to arm64. The answer will be: because of both; but it will be hard to distinguish.
A: Okay, so Stefan, you have to really sync with the rest of the team when you do operations because, as you saw, we might have surprises when migrating to an arm64 pod... node pool, sorry. Just to be sure that we don't interfere with each other: the priority goes to Hervé for the migration if there is a conflict in terms of timing. Next issue: you can create an account... sorry, the Belgian Air Force is training Ukrainian pilots on the F-16, so yeah, they are flying around at low altitude. I closed the issue because we never heard back from the user.
That's the user who claimed they never received the emails; but thanks to the work we did, Stefan, we can see that Mailgun or SendGrid, I don't remember which, but the email system that we gained access to, sent the email to the email provider.
A: So we repeatedly asked the user to either use another email domain or contact their email administrators to see why the emails are not delivered. But as we saw, the email is sent and acknowledged by the email server of that domain, so there is nothing else we can do about it. And most of the time the user answers with a two- or three-week delay. So yep, as we said, they can reopen, but I don't see anything else we can do. Do you?
B: I think we spent more than enough time with this user. They received some mails, and they might have received the last one but didn't check back.
A: We gave them two paths they can use. I'm sorry, because maybe the user wants to contribute or open issues in the Jenkins project and this is slowing them down or blocking them from doing so, but I don't see anything else we can do here. So let's see. At least the outcome of your work in that area, with Stefan, is that now we are able to observe whether the emails are sent or not; so good job, team. That's still really useful.
B: But I haven't... I didn't have the opportunity to restart ci.jenkins.io, because every time there were some BOM builds running.
A: So thanks everybody for taking care of this one. Puppet master: migrate the virtual machine to Azure, a surprise migration that wasn't planned at all. Last week's summary: as part of the planned Ubuntu 22.04 campaign, we updated the OSUOSL machines. With that plan we tried edamame and lettuce; that went well. Then we started to upgrade the Puppet master VM, which failed and never restarted. We had to ask OSUOSL for help, and they helped really quickly.
A: That wasn't possible, or too risky, so the emergency choice: because we weren't able to deploy new configuration changes across our virtual machines due to that problem, after discussing quickly with the team, I took the decision to create a new virtual machine on Azure to host the Puppet master. That machine is using Ubuntu 20.04; it should be the only machine to stay on focal, because the goal was also to stop using bionic as much as possible since it's out of date. That machine has been created.
A: One of the proposals I have, and I'm raising it right now, is to tell OSUOSL that we ask them for a snapshot of the virtual machine, and they can then release this machine to avoid consuming resources on their cluster. And if we need resources on their side, we send them a request for brand new machines. That will avoid us having legacy machines with legacy systems that have been upgraded over the years, and it will release the resources for them.
A: I propose we do it in two different email threads, though, because ideally, if they can share with us the snapshot of the machine before deleting it, we could put this snapshot in the archive bucket on Azure that we created, which is encrypted. So in any case, if we need an old thing for whatever reason, we would have the file system of this machine available. Looks good to you, folks? Okay, so I will take care of opening an issue to release lettuce.
A: At the end of the migration we re-ran the job successfully; the user confirmed they were okay, so no problem. That was an outage, sorry for the inconvenience; it was short-term notice for that migration, announced in the morning for the afternoon, so room for improvement for the announcement next time, but otherwise nothing special there, unless someone has a proposal or something to point out here.
A: Okay, related to the trusted.ci migration to Azure: "ci.jenkins.io plugin agents are not allocated". That issue has been closed, but what we tend to see is that when the ci.jenkins.io controller restarts, it looks like there is a blocking somewhere inside the controller that forbids Jenkins from trying to create new pod agents.
A: Stefan did a complete job of checking whether it was related to the spot virtual machine allocation in the Kubernetes cluster that supports the BOM builds. It wasn't: we didn't see any problem on the Azure console, there weren't any limits on pods, CPU, or memory, and no errors were happening on the autoscaler.
A: We didn't see any issue on the lower-level infrastructure, but what we saw is that Jenkins stopped trying to create pods after a certain amount of time, and after the restart it tried nothing, while the other jobs were still creating new pods. So that could be related to a behavior of the Kubernetes plugin, or to the BOM builds; these are the two suspects, because we have right now three Kubernetes cloud setups on ci.jenkins.io.
A: Each Kubernetes cloud has its own OkHttp thread pool for the Kubernetes client. So either the problem is located in the way the Kubernetes client works inside the plugin, which could explain why that cloud is somehow facing a peak of activity at restart and it breaks something, or it could be related to the setup of the BOM builds job, which might not be set up to survive controller restarts; in that case it might be a bug in the way the agents and their processes should be stopped and released.
A: So there is something weird, and we don't know whether there is a lock or another issue here; it's hard to pin it on infrastructure at first sight, but maybe there is something non-obvious. I propose, as we mentioned on the issue, that when we see such a problem we collect a thread dump of the controller. As we saw in a team exercise earlier today, collecting a support bundle from the top-level item on the left is enough, because the thread dumps are present; then we keep these bundles and analyze them afterwards.
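As a sketch of that collection step (host, user, and token are placeholders, not the actual infra credentials; the support bundle from the UI remains the primary path), a thread dump can also be grabbed from a terminal while the controller is responsive:

```shell
# Hypothetical commands; adjust host, user, and API token to your setup.
# Jenkins exposes a thread dump for administrators at /threadDump:
curl -su "$USER:$API_TOKEN" https://ci.jenkins.io/threadDump -o threaddump.txt

# Or, with shell access to the controller host, ask the JVM directly:
jcmd "$(pgrep -f jenkins.war)" Thread.print > threaddump.txt
```

Keeping dumps from several occurrences makes it easier to compare the state of the OkHttp pools across incidents.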
A: When you see that problem: a controller restart, or a reload from disk — that one is the safer option, and reloading from disk unblocks the problem most of the time. If the reload doesn't work, don't hesitate to restart the controller: add a shutdown message, restart, and then it should start the BOM builds again.
A: Do you have anything else to say about this one? So it's closed, because the problem was solved after a restart and the BOM builds are back again. Invite Mustafa into the Jenkins infra team: thanks everybody, I think you took care of this one; that's usual operation, if there's anything to say on this one, thanks. And we had an LTS release last week, so of course we upgraded all of our controllers less than 24 hours after the official release; thanks everyone involved in this.
A: We had an issue related to an account: someone made a mistake, or did not answer, and was trying to reset the password of an account that doesn't exist; hence it looks like a mismatch between our system and someone else's system. Thanks everybody for taking care of this one. And now back to the tasks that we are working on, for each task.
A: As usual, we have to state whether we can continue working on them in the next milestone. We'll need to balance the amount of tasks I'm working on, because on Friday I might be off half the day; I don't know if some of you have days off during the upcoming milestone? Yes...
A: There is a weird issue here: a user is opening pull requests and was set up as administrator of the plugin repository, but their pull requests were still seen as untrusted by ci.jenkins.io. I'm sure we missed something obvious, so we replayed, multiple times, pull requests that changed the Jenkinsfile, because their goal was to test exactly that: touching that file in an untrusted pull request means the change is not taken into account and needs approval. Initially they didn't have the correct permission and they weren't using the correct git commit setup; that has been fixed, and the issue remains.
A: Now that the pull requests mark the user as untrusted, there is no way this can be changed, as per Tim. I'm not sure myself; I remember that we were able to just rescan the repository, and that should change that behavior when scanning pull requests, but Tim says no, so I might have a confused memory of what we did on the infrastructure a few months ago.
A: So the proposal is: we let the user merge to main, create a new release, and then open a new pull request with their new permissions to see where the problem comes from. We checked whether it's related to yesterday's GitHub issue. Most probably we have an edge case where the user is a direct admin of the repository but outside the Jenkins organization, so maybe GitHub reports a wrong collaborator status when the repository is scanned by Jenkins; but that's just a wild assumption.
A: So I hope what Tim did by setting the proper permission will help and unblock the user. I propose that we add this issue for watching only; I don't think there is any action expected, so we add it to the next milestone. Is that okay for you? Okay, let's watch the result of the permission change.
B: Yes, let me check... we have only four services remaining to migrate: the LDAP service, currently in migration, and then we have mirrorbits, the plugin site, and jenkins.io.
A: Okay, so four services left, cool; remaining to migrate, almost there then. Yes, jenkins.io and the plugin site; so that means the earlier ones were migrated successfully.
A: Yep, new LDAP installation required.
A: Oh, what did I do? I changed, as you said, the file storage. That's important to know, I think, more for Stefan: we created the storage account, and as we found out, we also need to specify the file storage, which is a sub-element. A storage account allows you to create buckets, which are blob storages, but they can be of different types, and we need the specific type of storage: file storage, which maps to SMB mounts.
B: Now we need to, on the new LDAP... I'll try to restore a recent existing dump.
B: If that works, I'll continue the migration by switching the ldap.jenkins.io name to the new one, and update the migration status announcement while working on it; so that's the next step.
A: So no writes will happen on the LDAP base, and I will be able to do a backup of the existing one and restore it to the new one without any new write or modification.
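A backup-and-restore of that kind is usually done offline; as a minimal sketch, assuming OpenLDAP (slapd) on both servers (paths, the database number, and service names are assumptions, not the actual infra configuration):

```shell
# Hypothetical offline LDAP copy with OpenLDAP tooling.
systemctl stop slapd                      # freeze writes on the old server
slapcat -n 1 -l backup.ldif               # dump database 1 to an LDIF file
# ...copy backup.ldif to the new server, then, on the new server:
slapadd -n 1 -l backup.ldif               # load the LDIF into the new slapd
chown -R openldap:openldap /var/lib/ldap  # fix ownership before starting
systemctl start slapd
```

Stopping slapd (or otherwise blocking writes) before `slapcat` is what guarantees the dump and the restored copy are identical.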
A: Next one: ci.jenkins.io uses a new VM instance type. The goal is to transplant every change that has been done on the new trusted.ci VM on Azure, up to the last one, to the new ci.jenkins.io virtual machine, which is not used yet. Once validated and bootstrapped, there will be a requirement for an initial data copy. I tried to take a snapshot of the current ci.jenkins.io: I was able to create a new data disk from it and mount it on the new VM, but the Terraform definition of a data disk requires you to define the snapshot...
A: ...on the data disk, and the state writes down explicitly that the data disk comes from the snapshot. Which is not the easiest way, because afterwards we want to delete the snapshot, since we won't need it anymore a few weeks after the migration, and that will leave the data disk in a state where removing the snapshot recreates the data disk, because the reference doesn't exist anymore; so it would be recreated empty.
A: If you try to import the data disk created from the snapshot, Terraform says that the attribute must be "Copy", because the disk has been initialized with the create option "Copy". That attribute is immutable, so that means in Terraform you need to define "Copy", and setting "Copy" requires a second attribute, the source resource ID, which is the ID of the snapshot. If you remove the snapshot, the definition...
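The constraint being described can be sketched like this (resource and variable names are placeholders, not the actual infra code); `create_option` and `source_resource_id` are set at creation time and cannot change afterwards, so deleting the snapshot later leaves a dangling reference that makes Terraform plan a destroy-and-recreate of the (then empty) disk:

```hcl
# Hypothetical example of a managed disk restored from a snapshot.
resource "azurerm_managed_disk" "ci_data" {
  name                 = "ci-jenkins-io-data"              # placeholder name
  location             = azurerm_resource_group.ci.location
  resource_group_name  = azurerm_resource_group.ci.name
  storage_account_type = "StandardSSD_LRS"
  create_option        = "Copy"                            # immutable
  source_resource_id   = azurerm_snapshot.ci_data.id       # snapshot reference
  disk_size_gb         = 512
}
```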
A: Migrate trusted.ci to Azure: that has been done successfully. There was one last task — there were two last tasks as of this week; one has been done: the network has been restricted and checked everywhere. The last element, which might become a subsequent issue, is to select the proper size for the virtual machine ephemeral agents, to use the same as ci.jenkins.io that we changed a few weeks ago, because right now we are using costly instances and we don't have the same quota in the new region where the new VM is.
A: So yes, nice work, Stefan, thanks for the work; we were able to launch this one and it's been working well since Friday. So now we will have to watch for requests from the security team to access trusted.ci, because they will need to provide their public IP; that's a new change from the former instance. The VMs have been stopped on AWS but not deleted. I propose that we wait until the next LTS release or security core release, so that will probably be end of June, and we'll delete them in July.
A: Okay, so that one is obviously in the next milestone; that's the only one. Last change: the Ubuntu 22.04 upgrade campaign. Right now for this one we have one candidate, which would be cert.ci.jenkins.io. The proposal is to do an in-place upgrade because it's on Azure, so the easiest way is taking a snapshot of the system, since we control the virtual machine; we do the upgrade, we change the puppet infra config to use the newer Docker version, and that should be okay. I propose to... if anyone is interested.
A: Upgrade to Kubernetes 1.25: I've pushed this one back due to the puppet, trusted.ci, and LDAP migrations. I propose to put it back out of the milestone, because except for reading the changelog and preparing the next step, no one will have much time for this one, and I propose that we go back to starting the cluster upgrades in two or three weeks depending on our availability; because with PTO, maybe we won't have time, Stefan, but maybe we can start with the DOKS cluster.
A: So yeah, back to the backlog. Did you have time to spend on the "install and configure the Datadog plugin on ci.jenkins.io" task?
A: Okay, as Stefan and I saw, we might need to prioritize these tasks once LDAP is done, because there is a lot — and when I say a lot, it's a lot of error logs on ci.jenkins.io, due to the plugin unsuccessfully trying to contact the agent.
A: Are we even using the plugin? Because the simplest fix would be uninstalling it. But if you find just one configuration setting that stops the plugin from trying to contact the agent, that might be worth it, to avoid triggering a restart.
A: If you need time, we can work on this; Thursday should be okay for me.
A: Support Linux containers when running on a Windows VM: in my initial manual tests I wasn't able to have a running Docker Desktop. I was able to install Docker Desktop, but it was always failing to start the Linux WSL backend. I hadn't tried installing WSL beforehand; I was assuming that Docker Desktop was installing it, and I guess that's the problem I had in the manual tests. So maybe I should just install WSL, spin up Debian, and then start Docker Desktop.
Back to the backlog for me, because yeah, too many things, and I haven't spent too much time on this one. So unless someone wants to try it themselves, the goal is to have Docker Desktop up and running on Windows Server 2022, ideally as code with a Packer image.
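The manual sequence just described could look like this as a sketch (the distro choice and the winget package are assumptions, to be reproduced later in the Packer template):

```shell
# In an elevated PowerShell session on Windows Server 2022:
wsl --install -d Debian          # install WSL and a Debian distro (reboot likely needed)
wsl --set-default-version 2      # Docker Desktop requires the WSL 2 backend

# Then install Docker Desktop and let it pick up the WSL 2 backend:
winget install Docker.DockerDesktop
```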
A: Artifact caching proxy reliability, related to the ci.jenkins.io agents-on-VM migration to a new network: still progressing. I want to really point out that the ACP is still working well, as per Hervé's work on Datadog, which allows him to parse the logs and check the amount of data that is served by the ACP instead of the repository.
A: Real metrics... and that's a lot; so the ACP is still doing a lot of work and saving precious bandwidth for JFrog. I didn't have time, and we absolutely need to help the user: make the JitPack repository available. So, in order to unblock the user, we need to add a new exception on the ACP for that repository ID.
A: So they tried to use the jar dependency trick: you point Maven to a local directory that is used alongside the other repositories. If you follow the naming convention that we had, it will be ignored, won't use the ACP, and will use the local files. But yeah, the real issue is that the user wants to upload a jar to Artifactory, and they don't have the permission.
A: Yes, they proposed to use it as a proxy; that would mean adding one more mirror. But right now we are trying to restrain the number of mirrors, so the proposal is to add instead a new exception for that repository.
A: If the ID of the repository in the pom.xml is, I don't know, "non-cached-something" — it could be "external" or "non-jenkins" or, you know, we could use a generic pattern: if they had a certain prefix, it would automatically not be cached. What do you think? Yeah.
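For illustration only (the mirror ID and URL are placeholders, not the real ACP settings), Maven's mirror configuration in `settings.xml` already supports per-repository exclusions, which is the mechanism such an exception would likely rely on:

```xml
<!-- Hypothetical settings.xml fragment: route everything through the
     caching proxy except the excluded repository ID. -->
<mirrors>
  <mirror>
    <id>acp</id>
    <url>https://acp.example.org/maven</url>
    <!-- "!jitpack" excludes the repository with ID "jitpack" -->
    <mirrorOf>*,!jitpack</mirrorOf>
  </mirror>
</mirrors>
```

Note that `mirrorOf` exclusions match repository IDs exactly, so a generic prefix convention would need each ID listed, or handling on the proxy side.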
B: There is a problem: I've suggested a probe for the plugin health scoring detecting third-party repositories. So we could use that to see and check which plugins are using a third-party external repository. Yep.
A: To avoid the ACP caching third-party repositories: let's add JitPack as an exception as the short-term proposal, and a proposal for the plugin health score about third-party repos. Is that okay for everyone, for this one?
A: Okay, are you okay to take this one, or is that okay for you? Cool, thanks, so I'm adding it to the next milestone. Next: Artifactory bandwidth reduction options. That one was opened by Mark, so we will need to plan brownouts. A brownout is a planned blackout of a service for a short amount of time, to see if everything explodes, or to identify problems that we weren't able to identify at first. The first brownout will be tried on JFrog, targeting the Git proxy repository.
A: To avoid... to forbid these users from using us as a free mirror. Don't feel too safe on this one, because once users discover they can still use the repo.jenkins-ci.org/public repository — it's a virtual repository that includes all of these dependencies — then part of the traffic will shift to that repository. But the goal is to iterate.
A: I mean, the person who does that won't do it accidentally, but the users that are using this one might be doing it accidentally, so better to shut down these accesses, have a centralized point, and then iterate and see the impact on the bandwidth reduction. So we will need to announce the blackout. Ideally the Git one should happen this week, planning the Maven repo one next week, and the others after that. So we should be at mid-June with the first feedback to give to JFrog.
A: Good for everyone? So I will update... I will comment on it after the last exchange with Mark, and then, if it's okay, we will proceed most probably Thursday afternoon (morning for the US). That's the proposal I'm making here. Okay for everyone?
A: We have another issue in triage state: that one is about installing the Matomo chart. We might need to see if it's okay for everyone.
A: We will update this one and we will link everything, because the other issue is mixed up with the Google Analytics one. Yep, I need to clean up the issues and follow up on the initial assessment from Gavin. We need to assess the amount of storage, the requirements for the database, and the requirements for the entry point — do we need the backend, the frontend — just to be sure we use the correct ingress. Gavin already gave us the required answers, so we need to assess them as a team.
A: Ideally, use our existing PostgreSQL; worst case, we still have a flexible MySQL server instance on Azure. Worst case... that would be a fallback if it doesn't work.
A: Bruno, Stefan, just for you: yeah, I'm not sure that would be a good candidate at first sight for arm64, because it's running PHP, and I have absolutely no knowledge of PHP support for arm64. That might work, that might not, I don't know. So if you want to try arm64, we have to check with Gavin; but I guess Gavin was using DigitalOcean, since he worked there, so I'm not sure he has arm64 virtual machines for his tests. It could be interesting to evaluate, at least for the front and middleware stages. Yeah.
A: So that could be interesting. Only MySQL is supported.
A: Talking today about the new triage issues: we had "the ci.jenkins.io repository scan failed with a stack trace". That one should be closable; we'll take care of it. Yesterday GitHub changed something in their API, and all the Jenkins instances in the world using the GitHub Branch Source, so native GitHub, were showing that error when the technical user set up for the GitHub scanning, whether organization folder or multibranch, had the maintainer status.
A: Ah, I'm not sure whether it's a user with the maintainer status or the maintainers themselves; I've seen two different cases, so I'm not absolutely sure of that one. But we had an issue that was on GitHub's side, and they were really efficient: once the error was reported — so always report errors when you hit such a problem — they rolled back the change, and they took additional measures so that it won't happen again.
A: "Package availability dashboard is in error": I think it's a consequence of the cleanup that has been done on Datadog. We have a package... sorry, a dashboard that is relying on a metric that doesn't exist anymore. Is it because there is a specific label, because we changed the metric, or because we stopped collecting that metric? I don't know; this has to be checked.
A: So it's not a priority, but yeah, if anyone has time to check, I propose we add it as a bonus to the upcoming milestone. Is that okay for everyone? Yes, I can take it; I will let you assign it to yourself if you have time. Yes, I'm removing triage for this one. I've opened an issue: while trying to migrate uplink, we realized that we have a server, a single server that cannot be replicated, and it has a huge database.
A: It takes seven to eight hours to dump the data and three to four hours to restore it, with an ideally optimized pg_dump, on a machine with 16 CPUs all used at a hundred percent during the dump, so it's already highly parallelized. The main problem is the structure of the data: there is one big table with tons of records, and there is no way you can parallelize that.
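The client-side approach described can be sketched like this (host, user, and database names are placeholders); the directory format is what enables parallel jobs, and since parallelism is per table, one giant table is still handled by a single worker:

```shell
# Parallel dump: directory format (-Fd) is required for --jobs (-j).
pg_dump -h old-host.example.org -U uplink -Fd -j 16 -f ./uplink.dump uplink

# Parallel restore into the new server from the same directory archive.
pg_restore -h new-host.example.org -U uplink -j 16 -d uplink ./uplink.dump
```

This per-table limit is why a server-side migration is expected to be much faster here than any client-side dump tuning.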
A: We could try to create the proper indexes in the future to improve the restore time, but the most efficient way would be migrating to a flexible server. Azure has a server-side migration tool for that: for 80 to 100 gigabytes of data in the database, they claim a one-hour-only migration, with the service being shut down — clearly way more efficient than us doing a client-side pg_dump.
A: So the proposal is that we work on migrating uplink, with a planned outage, to a flexible server, and then eventually either migrate the database from the flexible server to our current instance, or migrate directly to our current instance. I haven't checked in detail whether both are possible or whether we need a two-step process. Why migrate from flexible to flexible? Because these instances... from Single Server to flexible, yes, but the flexible server allows you to create replication between multiple flexible instances; that's why I assume a two-step process. First we...
A: Do you have other new issues or things to add to the pipe?