From YouTube: 2023 04 04 Jenkins Infra Meeting
A: Is that the last one? Okay. So let's start with the usual announcements. Today's weekly is not out yet, due to the current DigiCert issues. Today's release is blocked at the verify stage: the war file has been generated, tagged and pushed as usual, and it used the new GPG key that we added last week. However, it is still using the old DigiCert code-signing certificate, and that is why the verify step fails.
A: So now we have different options. We will discuss later the choices we have: we might have to trigger a new weekly release or not, or we could finish the packaging; it depends on what we want to achieve. That's the discussion for later. Once the packaging step is done and we have decided whether to fully use the new DigiCert certificate or not, then we can proceed to the usual delivery bits.
A: The second announcement is that we have the new DigiCert certificate, and that's the good news. It should be okay for tomorrow's LTS release.
A: Nope? Okay. So that has been a really, really packed week. Thanks mainly to Hervé and Stefan for the huge work you did on all of these tasks, and to Mark for taking care of the certificates. Those greetings are required, because that week was pretty hard. Among the tasks that we were able to run and finish: the Robobutler service has been sunset, removed from Puppet, removed from the virtual machine, cleaned up, and its repository and image have been archived. We haven't seen any errors.
A: That instance now has automated certificate renewal using Azure DNS and Let's Encrypt; everything is managed as code and it's working. So the certificate we set up a few months ago, which was bound to expire on the 11th of April, is now renewed every three months, even though it is a private instance.
A: That was also the opportunity to clean up the DNS with Stefan and Hervé. Not only did we have the current DNS records for that CI instance to migrate to the new child zone, where the permissions are restricted, but there were also former unused, let's say legacy, records that have been removed.
A: Someone proved that they weren't a spammer, so we created the account with no spamming issue. I assume the spam flag came from their IP, which was marked as abusive because it is a public IP. I'm not sure why, but the anti-spam kept being triggered for this person, since I was able to create the account on my own. The only difference is that I reused the information they gave, so maybe they have something in their web browser that was sending spam-like requests to the accounts application; I'm not really sure.
A
Anyway,
the
account
was
created
so
I,
let
the
user
to
the
thing
social
link,
updated
on
GitHub
organizations.
That's
a
new
feature,
thanks
Alex
and
Tim,
and
their
way
that
has
been
updated
on
our
GitHub
organization,
page
Miss.
So, someone was trying to create an account and they weren't using the correct email. I purposely avoided giving out the correct email; later that person can get the correct email or create a new account. I don't want to risk any account takeover in that area.
A: Okay, congratulations Hervé again: we were able to close the "Introduce artifact caching proxy for ci.jenkins.io" issue. It had been reopened due to problems that are tracked in another issue, but the work for the ACP is there. We finished all the tasks that were to be finished last week, so that one is considered closed. Now it's not about creating a new service; it's about finding why the service, in some edge cases, is not working as expected, which is a different topic.
A: So we now have a number: when we need to run an organization scan on the whole Jenkins organization for the plugins, it takes 1 hour and 20 minutes and succeeds; and if it's not running at the same time as 10 or 12 other BOM builds, then it succeeds without any problem. The main challenge here is that it has to pause, because of the rate limits imposed on the GitHub API, so it needs a 30-minute pause in the middle; that's why it takes so much time.
A: Otherwise it would be more like 40 to 45 minutes. There might be a solution, but I haven't found any, apart from maybe using a different GitHub App; I'm not sure. In theory we should not have any API rate limit with the GitHub App for this kind of operation, but it sounds like a GitHub Branch Source plugin limitation inside Jenkins. I assume it comes from the time we were using a token instead of a GitHub App, but I don't have enough knowledge in that area. Anyway, it works, as a general matter of fact.
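The pacing problem described here is generic: rather than bursting through the quota and then sitting through one long 30-minute pause, calls can be spread evenly across the rate-limit window. A minimal sketch of that idea; the function and the numbers are illustrative, not what the Jenkins jobs actually do:

```python
def pause_between_requests(remaining: int, reset_in_seconds: float) -> float:
    """Spread the remaining API quota evenly over the time left in the
    current rate-limit window, instead of bursting and then sitting
    through one long pause in the middle of the scan."""
    if remaining <= 0:
        # Quota already exhausted: nothing to do but wait for the reset.
        return reset_in_seconds
    return reset_in_seconds / remaining

# 900 calls left and 1800 s (30 min) until the window resets:
print(pause_between_requests(900, 1800.0))  # 2.0 seconds between calls
print(pause_between_requests(0, 120.0))     # 120.0 (wait out the reset)
```

The real remaining/reset values would come from the `X-RateLimit-Remaining` and `X-RateLimit-Reset` response headers of the GitHub API.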
A: We were able to get some data, but now we don't have enough information, especially since we changed and shifted the workload from EC2 to Azure and DigitalOcean and back. So I propose to close this one and see if it comes back, because it wasn't reproduced after the pull request was merged. It could have been a network issue or something else.
A: Maven 3.9.1 was released and deployed, so thanks to everyone involved in that step. It's generally available and we sent an email to the mailing list.
A: Thanks a lot everybody for helping the Jenkins security team, and specifically Hervé for taking that one alone. You were able not only to grant access for two new security team members to release.ci, but also to restrict, or at least clean up, some part of the RBAC model. We might need more restrictions there, but thanks Daniel for raising that, and Hervé for taking care of that step.
A: May I ask you to take care of opening the issue and starting the discussion, so we can ask Daniel for a double check? Is that okay for you? Yes? Cool. Don't forget to add it to this week's milestone; I haven't created it yet, but yeah, thanks. I haven't heard back from the security team, so I assume it's okay.
A: Okay, we were able to close the big issue as well. Congratulations, Hervé.
A: We now have a brand new private AKS cluster, a private Kubernetes cluster in Azure, which is hosting our private workloads, releases and bots that don't require public access. It's completely managed as code, so now we can iterate. It's able to run builds on Linux, on big Linux machines and on Windows, using two different subnets for the agents to avoid any security issues.
A: So, great work. Just a question again, sorry: are we able to clean up the former cluster, or do we still have that manual task to run?
A: That was a long-running pull request, but that one triggered the full organization scan and put ci.jenkins.io into an outage. So the lesson learned for everyone is that, before triggering an organization scan, we must let the others know in advance; and if we accidentally trigger one, no stress, just cancel it before ci.jenkins.io becomes unresponsive, especially when we already have more than a thousand builds in the queue. There might be improvements there, of course, but that's the status for today.
A: Finally, we finished the work around Azure credentials for ci.jenkins.io; that was worked on by Stefan and Hervé. Now, the same as for the other controllers, the credentials required to trigger virtual machines from ci.jenkins.io are managed as code and have a clear expiration date, so we should be able to track it next time.
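Since the credentials now carry an explicit expiration date, tracking them can be as simple as a scheduled check against that date. A minimal sketch, with illustrative names and dates, not the real secrets:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days remaining before a credential or certificate expires.
    `not_after` is the ISO date recorded next to the managed secret."""
    expiry = datetime.fromisoformat(not_after).replace(tzinfo=timezone.utc)
    return (expiry - now).days

# The certificate mentioned earlier was bound to expire on 11 April:
meeting_day = datetime(2023, 4, 4, tzinfo=timezone.utc)
print(days_until_expiry("2023-04-11", meeting_day))  # 7
```

A check like this, run daily and wired to an alert when the count drops below, say, 30, is the kind of monitoring the retrospective below asks for.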
A: Okay, so that was all for the jobs that we did. We closed a few issues that were out of subject; those have been closed as "not planned". And finally, we can go on to the work in progress.
So, first of all, we had an issue with the ACP in a certain case only: only when using DigitalOcean, with BOM builds that have 200 to 300 parallel steps, each one running on a different pod.
A: When we have that, our clusters are scaled to the maximum, and after 90 minutes of that maximum workload we start having weird issues, only on DigitalOcean. So it's not even sure that it's our ACP setup, because why don't we have this on AWS or Azure, which sustain the same kind of workload?
A: So we are trying to find the differences, which could be related to really low-level topics. There have been multiple areas; thanks Basil for pointing out that it could be related to the maximum number of connections: despite what the metrics say, it's still important to check.
A: Hervé, may I ask your help to re-trigger a build forced onto DigitalOcean, using the ACP with the new settings? Or did you have time to check the result of this one?
A: Okay. So right now there is no blocker, because thanks to Hervé's work during the past days (I think it was Friday), when the builds are running on DigitalOcean we go directly to JFrog. It's a bit more bandwidth, but that allows developers to run the big amount of BOM builds that we are having. So, currently not blocked, no ACP.
A: It sounds like there are different solutions; I'm not sure if there's a willingness to change the whole setup or to just ask the question, I'm not really sure right now. We are going to check this one, because the DigitalOcean subject might need to be stopped in a few days, so maybe we won't have to dig into that one. So I propose that, outside of the build Hervé just mentioned, where we are checking with the new settings, we don't spend too much time here.
A: The GPG key expires on March the 13th, so that issue is only there waiting for tomorrow's LTS before we can close it: right now the weekly Debian and Red Hat repositories are signed with the new key, but not the LTS ones. That will be okay tomorrow and that issue should be closed; it will mechanically move to the next milestone. Any questions?
B: Yeah, and I have the action item from the board meeting on Monday to be sure we convene a retrospective, to figure out what we do to prevent that in the future. We may want an admin monitoring months ahead, etc. There are lots of things to improve. My apologies: we made a bunch of mistakes on that PGP expiry and on the code signing certificate. We'll get better.

A: Thanks, Mark.
A: Unusual activity seems to be more and more the normal activity. With the EC2 issues we had and the shift of traffic and workloads, we consumed 8K dollars instead of 1.5K during the past month on DigitalOcean, to handle the big amount of builds. So we only have a few days of credits left; I think we should be under the thousand dollars, and that's my credit card on this one, so I will stop the service once we have depleted all the credits.
A: Hopefully Hervé was able to communicate that to DigitalOcean today, so we should have an answer. The question is: are they okay to extend the credits or not? We'll see. Maybe that will be the end of the DigitalOcean platform; I don't know.
A: If that's the case, we will have to search for another sponsor; and if it's not the case, we still have to search for a sponsor to handle the increasing amount of builds from the developers. That's a good sign for the community, but it's quite the nightmare for us: we have to search for funding in that area.
A: Okay, so let's wait for an answer from them about adding credits. Again, we are very thankful to DigitalOcean for helping us, because we really had the opportunity to shift our workloads directly to DigitalOcean with almost no pain; that was quite easy. The only issue (and it's not even sure it's DigitalOcean's responsibility) is the ACP; but yes, the API and docs were quite nice. So again, thanks for what you did for the Jenkins project; I hope we will be able to continue.
A: We have a user saying that the password reset email is not coming; we tried multiple times and they don't receive it. The problem is that the Jenkins infra team discovered that accounts.jenkins.io sends email through SendGrid SMTP.
B: And SendGrid is definitely in the list of infrastructure that we use. We'll have to reread that page to be sure.
A: Okay, the main goal is: can you ping Kohsuke, just to be sure that he sees two different persons asking him to check SendGrid? Because I might have missed something, but I need help to get an answer.
A: And I will communicate with the user, but right now we cannot analyze why the user is not receiving email. My proposal is that we wait until the end of the week, and if we don't have any news from Kohsuke or anyone else that could help us get access to SendGrid, I propose that we shift accounts.jenkins.io to Mailgun, since we have access there. At least we would be able to monitor the state of the emails that could have been refused by the remote users' mail servers. Does that make sense?
A: We have work in progress on using Azure ARM64 virtual machines. As a reminder, Azure announced in December 2022 that virtual machines with ARM CPUs were GA, so now we can use them. That's the good news, because last month we shifted all the EC2 virtual machine workloads of our private controllers to Azure VMs, to decrease the AWS bill, to have less bandwidth cost, and to have centralized management inside Azure.
A: A question that we had privately: it was just an idea, and we are going to open it up more and more. We were thinking about being able to use these ARM64 machines on both AWS and Azure, for the workloads that shouldn't be concerned by the kind of CPU. I think you asked about the BOM. The question is: are the BOM and the whole PCT steps, these 300 steps that cost us a lot, a candidate?
A: So that could be interesting for this one, and eventually for the plugin CI, but not all plugins. So that could also be something proposed to the developers, saying: by default we might want to shift to ARM, unless you need a specific Intel binding; in that case, we should provide something in the pipeline library for buildPlugin.
B: And we certainly could already provide an argument in the pipeline library that allows you to opt in to arm64, right? Or opt in to "platform independent". Maybe arm64 is the wrong choice, but saying: hey, as far as I know this plugin is platform independent, pick any one. And it would allow us then, on occasion, to run on System/390 if we wanted.
A: Of course, building Jenkins itself and running the ATH won't work on ARM64. In the case of the ATH, it's not only because acceptance tests should run against real life (we could add specific acceptance tests only for ARM), but you have to know that most of the Docker images used in the acceptance test runs are Intel images, and the required work for the ATH would clearly be too high for now.
A: That's also something for us inside infra to think about: all of our workloads should be able to run on ARM, to decrease costs across our Kubernetes management and Terraform projects. Most of the tools that we're using are statically compiled with Golang, so ARM is one of the targets and we can use binaries for this platform.
A: The next task is the same kind: it's a long-running task. We have to upgrade all of our Ubuntu instances off 18.04 and, ideally, off 20.04; the goal is to update everything we can to Ubuntu 22.04. So thanks Hervé, you did the first release of the Packer image using that new Ubuntu version. We are going to first roll out that version on our private controllers, verify that we can run Docker and our tools without an issue, and once it's okay, then we should be able to roll it out to ci.jenkins.io.
A: So I propose, if it's okay for you, that we deploy as soon as we can to infra.ci; and my proposal, Hervé, if it's okay for you, is to roll out to ci.jenkins.io with an announcement to developers on Thursday. So tomorrow, Wednesday, will be the LTS, and on Thursday it will be Ubuntu 22.04 for most of the agents.
A: Does that make sense for you? Is it okay for you? Cool. The next step (obviously that will be from the team, so it can be Hervé or someone else) will be planning to migrate all of the Linux node pools that we use on all of our Kubernetes clusters, so that the underlying machines switch to Ubuntu 22.04.
A: Okay, so "document the code signing certificate renewal process" and "renew the DigiCert certificates": these ones are quite the hot topics right now. I propose that we give a summary of what we did today (Mark, Stefan, Hervé), and what the upcoming next steps are so that we can go for the release. Is that okay if we discuss that now? Yeah.
A: So, the document we have shared among the four of us on a private channel, because of the sensitivity of the data that we exchange: we were able to upload the new DigiCert certificate, after fighting with transforming it between different OpenSSL certificate formats. But it sounds like Azure accepted the certificate, was able to parse it and determined all the meta information, which is a good direction.
A: So we should be able to update the documentation soon, once it's finished. There were just a few missing elements compared to what we had; it wasn't enough, but it was good enough for us to get started. So now we need to be sure that the new DigiCert certificate we have added inside the vault is able to be used for two elements.
A: That's signing with the correct root (thanks Mark for checking this one), otherwise it keeps confusing me. Today the weekly has been generated and signed with the former certificate; the verify step, which is the last step of the release pipeline, failed. Now it's time to decide what we do with the packaging: do we want to start packaging this version?
A: So let me write this down. Today's weekly is released with the old cert. The packaging is currently configured to generate no MSI at all for today's weekly. The goal was to avoid a stressful message with that MSI, because it would have been signed by the old certificate, and every user downloading it would have had a big red error message on Windows, which is pretty scary. We don't want that; that's why we said better not to have an MSI at all.
A: Now, do we want to trigger the package build as-is, or do we want to roll back Mark's change that disabled the MSI generation, add a second change that uses the new DigiCert, start the packaging, and check that we can produce an MSI with the new certificate? Meaning we can validate that the actions we took in the past hours are okay with the new certificate, at least. So the jar would be signed with the older one (not verifiable), but the MSI should be okay.
B: That's the first step. So, to be sure that I captured the alternatives: alternative one is to continue the packaging as-is, with no MSI, right? Yep. Alternative two, let's say it this way: add the MSI back to the packaging.
B: Three is possible even if we choose one or two, that's correct. And really, we could do one and then we could even attempt two, right? But the problem with attempting two after one is... no, I take it back. We choose either one or two, and then three. We cannot do both one and two, because as soon as we've performed the packaging step, if it's successful, it will not repackage that same release again, exactly.
A: Yeah, the same here. My goal in proposing alternative two right now is to have an intermediate step to validate, as we said, the certificate state: is the new certificate correctly encoded and usable?
A: Okay. Anyway, there is a risk that, even with alternative two working as expected, we could still have a bad surprise: the way packaging consumes the DigiCert certificate is different from how Maven consumes it, so Maven could fail when it tries to read the file. But at least we'd have a smoke test with alternative two: if it smokes, it's bad, and Maven will fail immediately, before generating any data, as far as I can understand. And if I remember right, we can start it again, exactly.
C: That would be great, to finish before midnight, to be precise.
B: I think, yeah, we certainly want him informed about what we're doing, and asking for a pause on the launch of tomorrow's LTS. Tim's a good one to ask, and copy Kris Stern on it. Okay, and we probably should copy Alex Brandes as well, just because Alex has been a frequent release lead.
B: Yeah, we would have to create a new changelog, but the changelog is pretty easy to do and that's a very low-risk item. Yes, people will wonder why the changelog is so brief, and we'll put a banner on it that says: this changelog is so brief because we were verifying the MSI.
C: But alternative two is not generating a new weekly version; it's packaging the one that we got, with the same version. The only one that does is alternative three, which will trigger a new version; but that one will be packaged correctly with everything, with the new cert. So it's not only a check: it's the real good one, with the new cert, not expired.
B: Well, so if alternative two is successful, it will result in a signed jar file, or signed war file, and a signed MSI. If alternative two is... or, no, I take it back, that's wrong: alternative two only validates the MSI signing, not the war file, you're correct. So one of the changes in alternative three's changelog, as you said Stefan, would be: the war file is again signed.
A: So let's go for that. I'm taking it on: we'll ask validation from Tim before proceeding, and we'll let Kris Stern, as the LTS lead, and Alex know.
A: I think that's all. So, as you said Mark, the board asked for a postmortem for both the GPG key and the code signing certificate, which I believe we must do, the earlier the better.
A: Yeah, and I see this proposal from a user as just frustration from someone not being able to finish their day-to-day job, which I can understand. There are consequences of not saying no to end users sometimes. But okay, do we have other elements in that area about the signing certs, on the briefing? Okay. Any question, any objection, anything unclear that you want to bring up on these topics? Nope.
A: So now, the next issue: the repo.jenkins-ci.org augmentation. It's easy: nothing was done. I started on the cluster and then we had a lot of the other tasks, so I didn't have time. Anyone interested in helping find a highly available setup that we can run on virtual machines or Kubernetes is absolutely welcome to help here.
A: Yeah, I still have that Helm chart to test locally. Finally, an issue I saw just before the meeting: you were able to send a pull request, Hervé, on that topic, which should allow us to close that part. It's about the disk space of the container agents used for the BOM builds, which triggered issues. We don't have an issue today, but it's an improvement, especially for performance. Can you remind me of the status on this one? I've absolutely forgotten.
D: As they are currently mounted as an overlay.
A: Okay, will you be able to sync up on this one after, or maybe tomorrow? I'm wondering about the testing process; I believe we should be able to test it. So yeah, is that okay, or do you want to discuss it there?
A: Just to let you know that we might have to select an annoying path, or slow path (annoying because slow), to be sure that we don't break all the agents at the same time. I will give you the elements, because I feel like I wasn't clear: I didn't take the time to explain the testing path there, so I wasn't expecting the test part to be yours. The goal is to discuss and learn all together how we could test this kind of element in production without breaking everything.
A: A few new issues, if it's okay for you. We received an alert about the update center certificate that needs to be renewed. That was done one year ago; we have two months left, but I propose that we do it in advance. So, if it's okay (thanks Stefan for changing the calendar alert into an issue), I'm going to add this one to the next milestone; we'll have to work on that.
A: I'm looking at the new issues, so that one will be under "new". Let me edit the notes; I will format the notes afterwards.
A: We have three virtual machines that allow trusted.ci to run, currently on EC2. We want to move that workload to Azure, so we can have centralized management and lower costs.
A: We might need to move only two virtual machines, depending on the, let's say, network security solution we select. The third one, called "bounce", is an SSH bastion; the question is whether we can have the same pattern. We could have an Azure bastion, because they provide it as a cloud resource, or we could use a VPN; that's something to be discussed. But at least starting the Azure Terraform resources for two new virtual machines, with the associated data disks, installed with Ubuntu 22.04 inside their own private network, would be a great start.
A: So is that okay for you to take this one, Stefan? Yes. Then the sensitive part will be the permanent agents, once you are able to manage these two machines with Terraform and Puppet; they are empty inside the private network as a first step. We will have to think carefully with Daniel, and Daniel is here, about the way the update center generation works, because the cache is using hard links, so we will have to rethink it and migrate.
A: We have identified two potential sources of outbound bandwidth. The first one is people browsing ci.jenkins.io. At first sight that's just a few web pages, but, as some people already mentioned, it includes downloading the logs of a BOM build; and I'm sure that both Mark and Hervé can confirm that clicking on "Full console output" takes some time, because it's 30 to 40 megabytes of log output for BOM builds. So that could be one source.
A: It looks like we have the Apache logs parsed in Datadog, so we need to go on Datadog and run whatever magical request, or put them in SQLite and run whatever; I'm neither a Python nor a SQL person, so I will happily use the Datadog interface or a shell. The other source, which could also be quite big, is the stash: let's say, for a BOM build, we are stashing what is called the "mega war".
A: So if you generate it on DigitalOcean, your stash sends it to Azure; the stash is only inbound, which is okay. But then, for every AWS or DigitalOcean BOM PCT build, you have to unstash it two or three hundred times: that is outbound bandwidth from the controller, and that is a lot of data. So one of the solutions is using what we call an external artifact manager.
A: These are alternative implementations. Instead of zipping the files on the agent and sending them to the controller (which is what stash or archiveArtifacts do), and then having unstash send them back to an agent through the same protocol inverted, you'd say: oh, let's use that S3 bucket. So the agent copies the data to the S3 bucket, and the other agents are able to reuse it from the same S3 bucket.
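A back-of-the-envelope comparison of the two approaches; the helper function and the 300 MB figure are illustrative, not measured values from the meeting:

```python
def controller_outbound_gb(artifact_mb: float, unstash_count: int,
                           external_store: bool) -> float:
    """Rough outbound traffic from the controller for one build.

    With the built-in stash/unstash, every unstash is relayed through
    the controller; with an external artifact manager the agents fetch
    straight from the bucket, so the controller sends almost nothing.
    """
    if external_store:
        return 0.0
    return artifact_mb * unstash_count / 1024  # MB -> GB

# A 300 MB "mega war" unstashed by 300 parallel PCT steps:
print(controller_outbound_gb(300, 300, external_store=False))  # 87.890625
print(controller_outbound_gb(300, 300, external_store=True))   # 0.0
```

Roughly 88 GB of controller egress per BOM build versus none, which is why moving the stash to an object store looks attractive.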
A: That could be used or not, I don't know; that would require... yeah, I don't think it has been built for that use case, but that's an option. Can I ask you to add a comment on the issue? Yes. So that will allow us to move the trusted.ci virtual machines without any other added cost, if we're able to decrease the bill here.
A: Any question on this one? Any objection to adding it to the next milestone? Then we had some non-important issues, but I just wanted to mention them; I'm going to move them. "Add Launchable to agents": Basil is working on Launchable and it requires tooling; that's the proposal, and I created that issue just to keep track. It's not a priority now, it's a low priority, but we can do it.
A: That one, thanks Hervé for adding it: a script to lock and unlock Azure resource groups. We almost deleted the ci.jenkins.io production last week, due to a human mistake in Terraform, and Azure provides a way to lock some critical elements. So the direction we are all going, the consensus, is having an external script where we identify the sensitive elements and lock them against any change from Terraform.
A: Once it's code-managed, or even today while it's manually managed, any change that would involve deleting and recreating the resource, whatever change we accidentally want to make, will be blocked and forbidden, even if the user has the permission. I think it's a bit too early to spend time on that, we already have enough, but let's keep it in mind; that should be done, the same as the backups: the more operations we are doing, the more critical it starts to be, so better to plan for it.
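In practice, the lock script could be little more than a wrapper around the Azure CLI's management locks. A minimal sketch, assuming `az` is installed and logged in; the resource-group names are illustrative placeholders, not the real ones:

```python
import subprocess  # only needed if the commands are actually executed

# Hypothetical list; the real sensitive resource groups live elsewhere.
SENSITIVE_RESOURCE_GROUPS = ["example-prod-rg", "example-data-rg"]

def lock_command(resource_group: str) -> list[str]:
    """Build the Azure CLI call that puts a CanNotDelete lock on a
    resource group, so deletions fail even for users with permission."""
    return [
        "az", "lock", "create",
        "--name", f"lock-{resource_group}",
        "--resource-group", resource_group,
        "--lock-type", "CanNotDelete",
    ]

for rg in SENSITIVE_RESOURCE_GROUPS:
    print(" ".join(lock_command(rg)))
    # subprocess.run(lock_command(rg), check=True)  # uncomment to apply
```

An "unlock" counterpart (`az lock delete`) would let an operator deliberately remove the lock before an intentional destroy, which is the whole point: accidental deletes fail, deliberate ones need an extra step.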
A: Better to anticipate than to suffer for not having had the time. Then we have an issue that is also for later, so no priority: removing credentials and using workload identity management. We could already do it for some of our instances today. Okay, that's, let's say, a bonus.
C: Don't you want to speak about the discussion you had to get budget from the CDF, I think, or Amazon?
A: Oh correct, I see, thanks. Yes, good point: we received an answer from Docker open source, despite us closing the issue, because they went back on sunsetting the free teams. They confirmed that jenkinsciinfra and jenkins4eval are part of the open source program.
A: They were just added, without the label advertising it publicly. Thanks Hervé for the reminder, good point. I confirmed that that's what we want, because there is no need, and I would say it's even safer for us, not to advertise this publicly; not because it's a secret, but because these two organizations are using images for our own usage, in the case of Jenkins.
A: So I don't see the point. The goal is not to publish that for external usage, because the effort of helping and supporting people on our use cases is worth nothing. So that's up for debate; we can always ask them to add the badge, but... As Hervé checked, we have more than 50 million pulls on some of our images in the infra organization, so that means these images are popular. That's bothering me: it means people are building on them.
A: The richest open source program in the world, right? So, jokes aside: no, that's bad for them, but my proposal (and again, you can disagree, but think about it) is that I don't want us to be advertising "hey, let's reuse these tools". We don't even have time to support our own use case, so supporting our use case on someone else's machine is... yeah, anyway. That could be a way to get some contributions, though, so mixed feelings about that.
A: Please don't advertise this element; this one must not be advertised. I can accept it for jenkinsciinfra, but if you advertise jenkins4eval, yeah, let's say it this way: you will have the Jenkins security team coming for you. They will find your address, they will find you, and we will never hear about you again. Jenkins4eval is only untrusted workloads; please don't advertise it. It's only for testing, in short time windows; the content there cannot be assured to be safe to run.
A: Okay, so before I close, I just want to thank everyone here for the huge work you have done during the past two weeks, all of you, honestly. These have been two really, really challenging weeks, with a lot and a lot of things, and I'm really happy, because two years ago Olivier was alone, suffering, and he did that for years. Yeah, I know the amount of time and the difficulties we all have; we all have a lot of work, and we don't all work together the same way.