From YouTube: 2023 04 18 Jenkins Infra Meeting
C: The changelog has not been merged yet; there are some minor updates needed. Kevin is out ill today, so I'll make those changes and merge them after this meeting.
A: One, two, three. Okay, let's get started. The next weekly should be 2.402, next week, on the 25th of April. And the next LTS is 2.387.3, yes, or .4? Right. Yes.
C: Scheduled for May 3, releasing May 3, with the release candidate on April 19, tomorrow. Backporting has started; Kris Stern is the release lead.
A: Okay, so let's have a quick look at the tasks we were able to finish during the past milestone. We had a plugin update on ci.jenkins.io to help the core developers: an update of the Code Coverage plugin, so that's done. Thanks, Hervé, for managing the Jenkins release Twitter account suspension issue.
D: Yeah, we were on the previous plan, and the account was still active, but it would have been cut in 40 days if we didn't change the account type. I searched a little bit before finding how to do that, because it wasn't obvious at all. Another great execution from Twitter.
D: It's around 1,500 tweets per month allowed with this plan, so we're good, unless we have a lot of developers producing a lot again.
D: No, I administered the GitHub issue. At first I thought it was for Brick Insight, but it wasn't, because I modified my message. But yeah, the story: the repository wasn't in the selected repositories of Renovate, so all I did was add it.
A: Cool, thanks for helping. Thanks, Mark and Hervé, for the move of the tracked items into the jenkins-infra repository. I assume we've...
D: Done nothing. I don't know how to pronounce this username, but he has done the work.
A: Nothing else to do on this one. We were able to ensure that Apache logs are collected by Datadog on our machines; that was required to measure the amount of data served by ci.jenkins.io, at least. That means the update center logs are now streaming into Datadog, if you want to extract some information about the data served by the update center as well. So the next step is learning Datadog in order to build a dashboard that will measure the amount of data served by Apache.
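The dashboard would essentially aggregate the response-size field of the Apache access logs. A minimal sketch of that aggregation, assuming the standard Combined Log Format (field names and sample lines here are illustrative, not taken from the actual ci.jenkins.io logs):

```python
import re

# Combined Log Format: after the quoted request come the status code
# and the response size in bytes ("-" when no body was sent).
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3}) (\d+|-)')

def bytes_served(lines):
    """Sum response sizes from Apache access-log lines, skipping
    entries without a body (size "-")."""
    total = 0
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(3) != "-":
            total += int(m.group(3))
    return total
```

In Datadog the same result would come from a log pipeline parsing the size attribute and a dashboard widget summing it, rather than ad-hoc scripting.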
A: Jobs with tasks using Azure blob storage are failing. The issue was opened by Alex; thanks for mentioning that. That is really tied to Tim and me trying the Azure Artifact Manager; there is an issue a bit later about that topic, so that's why I closed the issue, saying that we have to follow up. So let's treat that one as a duplicate, but the takeaway is that some random jobs were failing on the archive-artifact steps.
A
Do
we
disabled
immediately
teammates
started
helping
us
on
that
specific
area,
so
it
only
happened
during
a
Time
window,
24
hours.
That
might
happen
again,
but
that
time
we
will
have
to
communicate,
sent
an
email
and
I
realized
that
my
email
was
blacklisted
by
Google
Groups,
even
if
I'm
an
admin
of
the
mailing
list
windows.
So
next
time,
I
promise
I
will
check
that
the
email
is
received
self-improvement
anyway.
Next
issue
update
Center
doesn't
build
because
agent
1
is
offline.
A: So I cleaned up the Puppet code and removed any mention of non-existing virtual machines. Cucumber was the virtual machine serving the jenkins.io website years ago; it's been three or four years that that machine is long gone. So I decided to remove the mention of cucumber's keys, because they were used by trusted.ci to upload the generated jenkins.io website through rsync. That hasn't been the case for four or five years.
A: Did you call it Cucumber 2? No? That could have been a fun one. Next time, I promise, I will. Components to be archived: done, nothing else to do; only an administrator can do that.
A: Alex Brandes, a.k.a. NotMyFault, has been added to the copy editors team, as validated on the mailing list. Also, as today's side note, I've added him to the Jenkins infra organization group named docker-images, so he is now a maintainer of all the docker-* images that we build. The reason is that he's helping us a lot in taking care of the dependencies and fixing issues on most of these images; that's why I decided it. He still cannot push or deploy himself.
A: Congrats, Alex, on that trust level, and thanks for the help. We had issues on the virtual machines with Docker on Windows, both 2019 and 2022, that were blocking the whole test framework of the official Jenkins images. That issue is gone and fixed. That was a tricky one; I don't want to go into details, everything is written within the issue. And now we have an up-to-date Docker, which is not Mirantis but Community Edition, and we have everything required for the images.
A: By the way, there is still a weird issue: there are things we cannot create with PowerShell inside Windows containers on our inbound agent image. There are some PowerShell commands that fail only on Windows Server, that I cannot reproduce on my machine, and that do not fail when run in interactive mode.
A: Monitoring: improved Datadog tagging for our virtual machines. That was a request from Stefan, and it makes a lot of sense. Instead of identifying an infrastructure virtual machine by the OS name, I mean ip-172-dot-something-dot-something, where it's not really clear that it's trusted.ci, it's better to use human-readable names. So it's done and validated on Datadog: you can search, and combined with the Apache logs, that makes it easier for us when we have an operation or a failure on the infrastructure.
A: There was a Jira locked-accounts issue; that's fixed. As part of the campaign of cleaning up the AWS account to spend fewer credits, we finished the garbage collecting; that was an old issue. Right now the garbage collecting is a bit aggressive, because it sometimes deletes images used in production, which is unexpected.
A: It should not. I still need to comment on the related issue, but the three of us saw that we were able to gain 60 bucks per month on the account, which is, yeah, 60 bucks per... no, sorry, daily. That's better: per day. So I would say it should be around 1.5K per month of savings. We already see that on the forecast, and the garbage collecting ensures that the snapshot and AMI storage costs won't come back again.
A: Okay, the documentation of the code-signing renewal process, for ourselves or our successors in three years, has been done, validated and reviewed, so the whole GPG and certificate signing should be okay. Good. Thanks, Morrow, for catching the issue on updating kimci.org; the certificate has been renewed.
A: In the system logs, we saw that the crontab runs once a day at six in the morning, UTC time. The crontab, on all the virtual machines with Let's Encrypt, runs the certbot renew command, and that certbot renew should have detected the glitch. When we ran the command manually, it succeeded in renewing the certificates. Certbot renew, I mean; my bad.
A: No? Oh, thank you. That's yeah, that's another issue. So we dug a bit, and it seems like the Puppet module and the way it's working is not correct. However, we saw there were some leftovers of a former certbot package that were installing their own crontab, which was running at the same time.
A: The thing is, we cannot remove the -q flag: it's hardcoded inside the Puppet module, in the crontab. That's quite annoying, so my proposal is that we wait for the upcoming three months and see if it happens again. If it happens again, then we will eventually have to disable the Puppet feature and create the crontab ourselves.
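For context on why a daily cron is normally silent: certbot only attempts a renewal when a certificate is within its renewal window (30 days before expiry by default), so most daily runs are no-ops. A minimal sketch of that decision, assuming the default 30-day window:

```python
import datetime as dt

RENEW_WINDOW_DAYS = 30  # certbot's default renew_before_expiry

def renewal_due(not_after: dt.datetime, now: dt.datetime) -> bool:
    """Return True when a certificate expiring at `not_after`
    falls inside the renewal window and should be renewed."""
    return (not_after - now) <= dt.timedelta(days=RENEW_WINDOW_DAYS)
```

This is why the quiet (-q) daily `certbot renew` run and a second leftover crontab can coexist unnoticed for a long time: both are usually doing nothing.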
A: So thanks, Mark. I think it will be worth it to create the same kind of monitoring on Datadog in the future; we are still missing time for that, but yeah, thanks for monitoring this.
A: The deadline is the end of May, so this year we are starting quite early. Thanks, Stefan, for taking care of that.
A: Sign it? Oh yeah, maybe they only have to sign, but they have a critical private key, which is a credential. Last year I even decided not to grant myself access to this one, because the CRL behind it and the certificate authority are valid until 2028. That's still five years from now, which means the more people have access to this, the more people could play around with the update center. So the goal is to have as few people as possible with access to that credential.
A
So
we
have
asked
Olivier
is
was
still
bothered
by
the
fact.
We
cannot
have
the
key,
so
he
is
asking
if
I
can
have
the
key
as
well
encrypted
to
my
name,
that
I
will
put
on
the
restaurant
restricted
machine
Mark.
Is
it
okay
if
we
submit
that
proposal
to
the
Jenkins
board?
Yes,.
A: Okay, so I've got to ask the board, and I'll ask Olivier to send me a new certificate so we can renew it. But we still have some time.
A
So
as
soon
as
we
have
the
certificate
Stefan,
you
can
continue
working
on
this.
Thanks
for
checking
the
documentation
sounds
like
it's
good.
We
were
able
to
write
it
properly
last
year.
That's
also
your
job,
so
thanks
and
yeah,
let's
see
in
one
or
two
weeks,
I'm
adding
that
One
automatically
to
next
Milestone.
If
that's,
okay,
for
you,
that's
perfect,
any
question.
A: So, on that topic: we don't have access to the SendGrid cloud account that is currently used by accounts.jenkins.io, so we cannot monitor whether these emails were greylisted or whether there was a sending issue with the respective provider.
A: So we asked KK and we got different answers. First, KK was able to grant us access to the Mailgun account, where Andrew Taylor already had access, so now the four of us have access. So, please, a reminder: use a personal account and enable multi-factor authentication on Mailgun, everyone here.
A: Mailgun doesn't seem to be used. Earlier today, KK also answered that the SendGrid account has a plan.
A: That would be interesting in terms of access control for admins, and all of you wondered about the potential costs. As far as I can tell, SendGrid is around 50 bucks per month. I don't know who is paying for this; I assume it's KK, but maybe not. I never got an answer from KK on that particular topic, so maybe it's not paid. I got a billing spreadsheet from Olivier that mentioned 40, like 40.90... no, it's 14 or 15 bucks per month for SendGrid, but that's the only information I have. So the proposal, made by, I think, Hervé or Stefan, was: can we change the email-sending provider on accounts.jenkins.io? The real question behind this is the amount of email and the cost that we will have. Are we able to switch to Mailgun, since we have access? Or would we need a bigger SendGrid instance, which could cost us money?
D: The security problem is that there is a 2FA on it. Yeah, it's a problem, as one phone number has to be dedicated to it. I was thinking maybe we could get a 3DO account for that.
A: I think you're probably right. The fact that you checked SendGrid on Azure: if it's 20 bucks per month, I mean, for that amount of email, that's absolutely an option; it's not costly. My question is, I don't know the plans from Mailgun. Are you okay to check and compare? Because we have a Mailgun account with multiple administrators already today, so why not choose it, if we are in the free plan? If...
B: That's one of the main prohibitions and concerns for Mailgun, so even on shared IPs they are drastically gating senders with insecure practices. So I'm pretty confident in the quality of those shared IPs.
B: Gmail: even as an admin of the group, it's still sent to spam. Of course, there is absolutely no way to prevent that, even with a dedicated IP. I know how they work in that area, and that's really hard, and they do a great job; that's all I can say. Of course, that's not foolproof.
A: Okay, so the shared-IP challenge and the pricing look like the challenges to solve here. Is that okay for you? Yes. So if you see a solution, you can go. As infrastructure officer, I absolutely trust you on that part. If you have a doubt, or want someone else to help on the decision, don't hesitate to ask. Looks good.
A
Okay
using
the
Azure
Sun
grid
will
mean
eventually
creating
it
manually
and
then
importing
it
on
terraform
is
possible
to
have
a
configuration
management.
If
you
go
the
other
way.
The
cost
of
20
bucks
per
month
is
acceptable
inside
the
Azure
billing,
and
if
you
go
to
mail
gun
accounts,
then
go
ahead
and
update.
The
goal
is
to
unblock
the
users
and
being
able
to
have
a
run
book
explaining.
If
you
have
an
issue
with
email
sending
go
there
or
go
there
and
then
we
can.
We
can
help
users.
A: Okay, so we realized that we could decrease the Azure billing. We had two options. The first option on Azure cost was using an artifact manager to ensure that the outbound bandwidth of ci.jenkins.io could be decreased. So that's what has been done on the BOM builds.
A: Jesse Glick was able to merge a proposal where most of the stash steps during the build aren't done anymore. I still need to comment on the issue, but we saw an impact: the outbound bandwidth decreased due to this change. The forecast shows we're around 600 instead of the 1300 of last month. It could be worth checking the month before, though, because we had quite unusual activity last week; but clearly the BOM is one of the culprits.
A: The artifact manager is an attempt to decrease that outbound bandwidth, because on Azure, the outbound bandwidth paid for from Azure buckets is clearly lower than from a virtual machine. One of the main reasons is that, by default, blob storage, when it is public, like the one we would have there to serve the archived artifacts of ci.jenkins.io, uses the Azure CDN. You can disable it, but it's enabled by default; that's why Microsoft is able to decrease the cost, because it's served from their CDN network.
A: How does it work? The artifact manager, once installed and configured, redirects the archive-artifact and stash/unstash operations right from the ci.jenkins.io virtual machine to an Azure blob bucket. Then, on ci.jenkins.io, when you click on an artifact because you want to download it, for instance any plugin's archived generated .hpi file, which is a few megabytes, you get a link valid for one hour. So when you click, you get an HTTP redirect from ci.jenkins.io to the Azure content delivery network, and ci.jenkins.io is not serving the data, only the redirects. In the background, each time someone issues a request for an artifact stored on blob storage, a new token is generated for it, so these are a bunch of temporary tokens. That's a really smart behavior.
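The temporary tokens described here work like Azure's shared access signatures: a time-limited, HMAC-signed URL that the storage side can verify without state. A minimal generic sketch of the idea (this is an illustration of the mechanism, not Azure's actual SAS format; the host and key are made up):

```python
import hashlib, hmac, time
from urllib.parse import urlencode

SECRET = b"storage-account-key"  # hypothetical shared key

def signed_url(blob_path: str, ttl_seconds: int = 3600) -> str:
    """Build a time-limited download URL: expiry + HMAC over path and expiry."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{blob_path}:{expires}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    query = urlencode({"se": expires, "sig": sig})
    return f"https://example.blob.core.windows.net{blob_path}?{query}"

def verify(blob_path: str, expires: int, sig: str) -> bool:
    """Storage side: reject expired or tampered links."""
    if time.time() > expires:
        return False
    payload = f"{blob_path}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the signature covers both the path and the expiry, the controller can hand out a fresh short-lived link per request while serving nothing but the redirect itself.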
A: I don't know for this one; I assume it depends on where you are. If you are inside Azure, for instance on Azure virtual machine agents, then the answer is no; but for the AWS or DigitalOcean agents, yes, I believe. In any case, CDN or not, the goal is to have these files served by another service than ci.jenkins.io itself: less pressure on the Apache server, less storage, and fewer threads serving requests.
A: We saw data in the bucket. I initially thought that my initial configuration was wrong, but Tim, thanks Tim, checked that it was working. We were able to reproduce it, and with the few builds that we tried, it worked every time from our local controller tests. Which means there is an issue when scaling up with ci.jenkins.io, and/or an issue in the ci.jenkins.io setup.
A: So an update of that plugin, fixing some of these issues but not all, has been released and deployed to ci.jenkins.io. So the proposal is, assuming a proper communication to developers, that we try again using the Azure Artifact Manager, and we will see, and enable debug logging, at least for the administrators, to see if the same behavior happens again, to help the plugin developer pin down the issue. Because it's expected to work, and it should be transparent.
A: My question is: how does it behave if we have both systems installed and set up at the same time on a given controller? Because technically, you can do that from the UI. I'm interested in knowing how it works: how does Jenkins select one or the other? I don't know, it's written nowhere, and no one was able to give me a proper answer. So I propose: let's try locally.
A: The question I want to raise is: maybe, in order to keep an equilibrium, if we cannot make the Azure plugin work, we can still use an S3 bucket. That means the artifacts would be sent from Azure to Amazon each time; but most of these artifacts are in fact generated by AWS or DigitalOcean agents, so they would be copied from DigitalOcean or Amazon directly to S3, and ci.jenkins.io will only issue a redirect to AWS when someone requests the artifacts. So that should still allow us to decrease the bandwidth; that could be a solution. Most of our builds happen in AWS anyway, so that's the area where we are. So for now, the proposal is to start with Azure first.
B: I was pretty happy, because I managed to have the Packer build for the Azure arm64. But for now it's bumping into an error, due to not being able to overwrite VM images and machine images within the Azure gallery: if you've got the same version, it's forbidden to override the old one. That's a problem, not for production usage: when we issue a tag it should be okay. But when we do PRs and builds on main, we are using the dev gallery and the staging gallery, and with those we run into the problem, where I'm stuck, of already-existing images. So we are working around it by looking...
A: The next step for us will be to be able to start using, on the jenkins.io infrastructure, our own Docker image builds on arm64, or to switch to the all-in-one image, and to move most of our workloads to arm64.
A: On the Azure costs: spot instances. We checked, and the price is, alas, only two times better. Which means that if a spot instance is reclaimed, then a single retry removes all the benefit; and the non-benefit is developers seeing their builds being retried, especially on the ATH.
A: The thing is, in the case of plugins that build on virtual machines requiring Docker: most of these plugins need Docker and have 30-to-60-minute builds. It's not a quick mvn clean install. You have the ATH: most of the integration tests use Docker, need a virtual machine, and could be reclaimed by spot. Same thing for the ATH itself: some ATH branches take just a few minutes and could be a good customer for a spot instance, but some take six hours.
A
So
you
don't
want
a
six
hour
process
to
be
canceled
by
a
spot
tree
claim
so
using
spot
for
the
ath
is
not
a
good
idea
and
the
proposal
is,
we
might
need
to
propose
new
templates
with
spots
and
then
we
could
either
opt-in
or
opt
out,
but
in
the
case
of
ADH
that
will
need
a
bit
of
revamp
inside.
It's
not
easy.
It's
not
an
easy
way
to
gain
money.
So
spot
instance
for
virtual
machines
in
Azure
might
not
be
interesting
enough
for
cig
and
kinsayu
agents.
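The break-even reasoning above can be sketched with a toy expected-cost model (illustrative numbers only, and a deliberately simplified retry model):

```python
def expected_cost(on_demand_price: float, discount: float, eviction_prob: float) -> float:
    """Expected cost of one build on spot, assuming an evicted build is
    fully retried once on spot and the second attempt succeeds."""
    spot_price = on_demand_price * (1 - discount)
    return spot_price * (1 + eviction_prob)

# With only a 2x price advantage (discount = 0.5), a certain eviction
# brings the expected cost right back to the on-demand price, so one
# retry removes all the benefit -- plus the developer time wasted.
```

With the 10 to 15 percent eviction rates mentioned later in the meeting, spot stays cheaper per build on average, but the long-running ATH jobs make the variance unacceptable.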
A: Then, unless you have questions about Azure costs: right now we should be just under 10K this month with the current workload, meaning all the virtual machines of ci.jenkins.io are running on Azure. That's the current status, so we should be okay; and then we will need to decrease ci.jenkins.io's spending on Azure. That will be the next step.
B: I'm defining, within Terraform, the VMs that we need to spawn on Azure for trusted.ci, and I'm on the network side right now. So I have two PRs on two repositories, because we have azure and azure-net; right now I'm trying to have both of them working together, with creation of the network within azure-net and usage of that network in azure. But yes, that's my main task.
A
That
happened.
Do
you
need
a
review
or
unblocking
on
that
task
on.
B
The
upcoming
days
or
I
will
probably
need
to
to
check
with
you
if,
if
I
did
correctly
for
the
for
the
mind
of
of
the
of
the
net
tomorrow,
if
you,
if
you're
available.
A: Okay, that should be doable, thanks. So: we were able to successfully start, and partially run, a BOM build on a new set of node pools. The assumption is that we are able to show that we can decrease the costs of the BOM builds. There are different layers; the first layer here is targeting not blocking the plugin builds that need containers when there is a BOM build, or a storm of BOM builds.
A: So we updated the existing node pool that the BOM and plugins are using on AWS to decrease the cost of a single pod, initially. That doesn't mean we'll decrease the cost globally, because it depends on all the parameters; but still, we were able to decrease the cost by 20 percent. And we also changed the spot eviction rate, because most of the instance sizes we were using had a 10, sometimes 15, percent eviction rate, which was visible on the BOM builds with a lot of agents.
A: So we have different solutions here, but it seems like we need a self-made solution for stashing and unstashing. That means we should ensure that everything is running on that newer node pool, and that we should use either an EBS volume or S3 buckets: we build one time, we copy it there, and then we can reuse it. That's an optimization. The good and positive thing is that now we have a way to measure specifically the behavior of the BOM builds.
A
So,
let's,
let's
iterate
on
that
part
important
takeaway
for
us
as
administrator
as
underlined
by
Jesse,
the
bomb
will
currently
use
label
the
label
allocate
an
agent
and
we
as
admin,
implement
the
interface
contract
that
the
label
is
by
saying.
Oh,
it's
a
pod
template.
If
the
label
is
GDK
17,
it's
that
that
template
or
this
one
or
this
one
the
test
we
did.
We
directly
specify
the
bottom
plate
method
in
the
pipeline
itself,
so
we
don't
use
that
label
contract
by
the
Admin.
A
The
advantage
of
that
new
of
that
second
method
is
that
it
surfaces
the
Pod
eviction
to
the
developer
and
the
bid
logs,
which
wasn't
the
case
before
it
was
only
on
the
controller
logs
so
hidden
from
Developers,
and
the
thing
is,
with
the
spot
instances,
I
eviction
rate
and
some
and
also
CPU
eviction,
because
using
too
much
CPU
to
do,
we
have
a
limit
of
four
CPU.
Sometimes
we
saw
peaks
in
5
and
6
requested,
so
the
system
kill
the
Pod,
and
then
you
have
a
reattempt.
A
So
we
are
working
on
still
studying
the
metrics
on
this
area,
but
what
was
surfaced
by
the
last
builds
yesterday
on
this
night
is
that
there
is
an
issue
on
ciden
kinsayo.
We
don't
know
if
it's
Jenkins,
if
it's
the
kubernetes
plugin,
if
it's
a
world
setup,
fits
the
topology
Azure
to
AWS,
but
sh
steps
on
the
bomb
builds.
A
For
instance,
a
single
curl
request,
a
check
for
the
ACP
availability
that
should
take
a
few
seconds.
It
takes
two
three
four,
sometimes
five
minutes
for
the
controller
to
establish
the
connection
to
the
agent
run,
the
command
and
report
back
so
10
minute,
building
the
mega
war
3
to
15
minutes
for
running
all
the
intermediate
steps
and
then
15
to
20
minutes
of
running
the
PCT.
That
means
each
branches
of
the
280
branches
is
currently
taking
30
to
45
minutes
each
at
the
same
time.
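The per-branch figures quoted above add up roughly as follows (a toy sum of the numbers given in the meeting, not a measurement):

```python
def branch_duration_minutes(megawar=10, intermediate=(3, 15), pct=(15, 20)):
    """Rough lower/upper bound in minutes for one BOM branch,
    using the figures quoted: megawar build + intermediate steps + PCT."""
    low = megawar + intermediate[0] + pct[0]
    high = megawar + intermediate[1] + pct[1]
    return low, high
```

With 280 branches each in that range running concurrently, even a few minutes of per-step connection overhead multiplies into a large total.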
C: We were just having a discussion prior to this meeting about BOM cost reductions. I'll be sending an email message proposing to significantly reduce BOM execution costs by changing what we execute and when we execute it. So the idea I'm going to take is: I'm going to propose that we only run a very lightweight step on each pull request, and that, in order to run the bigger tests, the tests we currently run on every pull request, you will have to apply either a label or a comment to the pull request.
C
And
what
will
what
I'm
proposing
to
do
is
we'll
use
an
an
octopus
merge
to
combine
many
pull
requests
into
a
single
build
so
that
we
cut
the
costs
we
get
that
we
get
the
smoke
test.
That
happens.
It
takes
from
five
to
20
minutes
to
run
the
smoke
test
and
the
smoke
test
tells
us
important
things,
but
then
the
proposal
will
be,
let's
only
run
the
bigger
set
of
tests,
the
ones
we
run
on
every
pull
request.
A
Absolutely
that's
absolutely
a
good
way
because
it
can
be
done
now
later
and
it
doesn't.
It
doesn't
predate
or
create
any
kind
of
problems
with
the
order,
optimization
right,
we
still
need
to
fix
that
issue.
I
mean
300,
parallel
elements
in
the
build.
You
should
not
takes
minute
for
Jenkins
instance
that
size
there
is
something
abnormal
here,
but
still
yeah.
That
will
that's
a
really
good
idea
if
we
are
able
to
to
drive
this
thanks.
Everybody
thanks
Mark
for
that
idea.
C: Now, I did realize, just as I was describing it, that there's an exception case there. When a developer wants to evaluate a prototype, they need to get full execution, like a regular pull request. So I think what we would do is say: if the label says dependencies, only run the small test, only run the smoke test. Yeah, right, we'll discuss that on the developers list, I think, for that discussion.
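The label-gated selection being proposed could be sketched as a tiny decision function (the label names here are hypothetical placeholders, not the names the actual proposal will use):

```python
def suite_for(labels: set[str]) -> str:
    """Choose which BOM test suite to run for a pull request:
    explicit opt-in gets the full suite, everything else (including
    routine dependency bumps) gets the smoke test only."""
    return "full" if "full-test" in labels else "smoke"
```

The CI pipeline would read the PR's labels (or a trigger comment) and branch on the returned value.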
C: Yes, yeah. If that's okay with you, I'll do that; that's a better way to do it. It gives us a very solid place to track and discuss. Good, I'll do that.
A
Do
that
the
discussion
in
the
mailing
list
is
clearly
or
better,
but
yeah.
Putting
tracks
record
for
audits
to
serve
as
support
for
us
is
important.
A
That's
absolutely
worth
it
folks,
every
so!
No
sorry!
So
that's
all
for
the
bum.
We
had
we
weren't
able
to
reproduce
ACP
issues
with
artifact
caching,
proxy
on
digital
ocean.
Only
only
digital,
listen
for
the
bomb.
I,
don't
speak
about
ath
in
Azure.
That's
another
topic!
A: However, Mark, we are not sure, and we might need your help understanding each PCT shell step. Once you have the megawar and you run the PCT, it's a jar file that is called with a few parameters in the shell script, and we don't know what that process is doing. Is it calling Maven and building things, or is it doing something else?
A
For
the
ath
issue
that
basil
reported
that's
different
topic,
that's
not
the
same
job
and
that's
not
the
same
network.
That's
not
the
same
cloud
and
that's
not
the
same
error
message.
It
sounds
like
that.
It's
when
we
start
having
too
much
digitizen
parallel
steps,
then
it's
it
start.
We
might
have
a
limit
in
the
in
the
system.
In
the
case
of
the
ath,
that's
TCP
connection
ratios,
that's
absolutely
the
network.
A
Okay,
tiny
tasks
make
environment
and
description
feels
mandatory
for
bug
type
issues.
I
propose
to
remove
that
issue
from
Milestone
Alex
opened
it
that's
worth
a
discussion
with
the
Jenkins
score.
I,
don't
feel
like
it's
the
Jenkins
infrastructure
team
role.
A: The Puppet agent keeps updating the GPG key. Each time we have a weekly release, there is something on the system that writes the value of the new key (the new key file has -2023 in its name) to the old file, while Puppet keeps updating it the other way around. So we are spammed by this one, and we have to find which part of the weekly process does it. I was able to pinpoint it to the mirror scripts two weeks ago, but they're not the only ones doing that, so I misunderstood something or missed something. That's minor.
A: The arm64 VM isn't available: same thing, the garbage collection of the Packer AMIs is a bit too drastic. So the proposal, which should be quick, is: before deleting an AMI, let's check the configuration, the public configuration files from ci.jenkins.io and infra.ci, and if the AMI is referenced within them, don't delete it. That one should be easy to implement.
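The "check the configuration before deleting" guard could look like this minimal sketch (the config text shown is hypothetical; a real implementation would fetch the public configuration files and parse them properly rather than substring-match):

```python
def amis_safe_to_delete(all_amis: set[str], config_texts: list[str]) -> set[str]:
    """Return the AMI ids not referenced in any controller configuration text.
    Pass the contents of the public config files from ci.jenkins.io and infra.ci."""
    return {
        ami
        for ami in all_amis
        if not any(ami in text for text in config_texts)
    }
```

The garbage collector would then only delete the returned set, never an AMI still referenced by a controller.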
A: On ci.jenkins.io, define a default build discarder: I've installed the plugin, but I haven't looked yet. I will want to do a session with you folks, because we need it.
A: We need to decide and communicate. But also, I don't know if you remember, the two of you, when we checked the details of the orphaned-item policy in the organization scanning on ci.jenkins.io, which defines, when a repository or a branch is deleted, how many elements, or for how much time an element, should be kept. I discovered there is something named a build strategy, which provides the build-rotation settings we were searching for, and which is a bit different from the orphaned-item policy.
A
So
maybe
we
could.
We
could
have
not
only
this
at
top
level
by
default,
but
we
could
Define
one
policy
for
each
GitHub
top
level
element
in
cigen
kinsayo,
and
we
missed
that
one
last
time.
So
if
it's
okay
for
you
I
will
want
to
take
30
minutes
because
I
remember
early
last
week
before
the
devops,
you
mentioned
that
setting
like
this
one
that
I
warned
you
about
not
applying
not
immediately.
A: So the next step is setting a default build-discarder policy and communicating about that to developers, saying: hey, now it's only five minutes.
D: The issue from before, I haven't had time to check it; I don't have the agent. My pull request is ready and the community checks are okay, so when it's merged, I will be able to update the pipeline library to use launchable if installed, or install it if not. Good.
A
I
was
only
able
to
start
a
pull
request
on
the
docker
open,
VPN
image.
I
haven't
checked
the
result
yet.
A: We need to change the system disk to an SSD on ci.jenkins.io. As you told us, Hervé, importing into Terraform could be done in one shot, so we should be able to audit and check all the resource settings on the hardware, especially the SSD issue.
A
Then
we
that
will
allow
us
to
enable
disk
snapshots
on
the
Jenkins
home.
So
we
could
get
rid
of
the
job
config
history.
There
is
the
network
migration
that
we
mentioned
earlier.
That
could
help
also
related
to
the
Azure
Port
I've
asked
timiacom
if
we
could
shrink
the
amount
of
azure
virtual
machine
templates
and
another
one
interesting
to
share
with
you,
everybody,
because
I
think
you
were
the
first
person
to
ask
me
about
that.