From YouTube: 2023 04 25 Jenkins Infra Meeting
A: [He] won't be available today, so let's get started with the announcements. The weekly 2.402, at least the WAR, was published. I haven't checked the package and Docker image status, but I assume it will be available soon.
C: We have a little problem with the weekly, so...
A: Right now, DigitalOcean is currently handling the builds for ci.jenkins.io, so that should be okay. The worst case was we had 15 builds in the queue, and these builds are being created, so the impact is not high, but still an impact. What was done on ci.jenkins.io is up to date.
C: I relaunched the packaging for the weekly; it hit an error, like usual.
A: The root cause of the issue is a combination of human and technical problems. The human one: we wanted to configure that cluster to use the permission administration system to map the new AWS accounts that CloudBees provided us. We did the change on the cluster hosting the ACP (artifact caching proxy) for AWS last week, which went very well; and me, I wanted to have access from my local machine in order to test the work I'm doing on the BOM builds, and in that process I sent the pull requests as usual.
A: The technical error is because that cluster was created with a Terraform system; but it was created, and that's the real human error (I'm sure I'm the one who did it, but I can't remember), with the AWS technical user account, on a machine, instead of using the remote system for CI, because that was broken and we had to go quick. So the owner of the cluster was a user that has since been deleted, and we failed, even with the help of a CloudBees admin, to recreate the same user.
A: So what happened is that when we tried to reconfigure that cluster to manage identities: the mechanism uses what is called a Kubernetes ConfigMap, which helps Kubernetes map its technical and administrator users to the official AWS ones. It's only for the RBAC, so it's not the security of authentication; you are already authenticated. And what happened is that that cluster was created with automatic management of that ConfigMap; then there's been a major upgrade of the Terraform module that absolutely stopped managing this, so we had to stop managing it.
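For context, the mapping described here is the standard EKS "aws-auth" ConfigMap mechanism. A minimal sketch, assuming the Python kubernetes client and a hypothetical role ARN, of what re-adding an admin entry looks like; note it only works while some identity can still authenticate with RBAC rights, which is exactly what was lost in this incident:

    # Sketch: re-add an admin mapping to the EKS "aws-auth" ConfigMap.
    # "aws-auth"/"kube-system" are the standard EKS names; the role ARN and
    # username are hypothetical placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # needs a kubeconfig that can still authenticate
    v1 = client.CoreV1Api()

    cm = v1.read_namespaced_config_map("aws-auth", "kube-system")
    extra = (
        "\n- rolearn: arn:aws:iam::123456789012:role/infra-admin  # hypothetical\n"
        "  username: infra-admin\n"
        "  groups:\n"
        "    - system:masters\n"
    )
    cm.data["mapRoles"] = cm.data.get("mapRoles", "") + extra
    v1.replace_namespaced_config_map("aws-auth", "kube-system", cm)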
A: So now the cluster says: oh, that empty list is the list of administrators able to do anything, and the list is empty, because the user has been deleted. So no one is able to connect to the cluster; we were locked out. The only exception would have been the user who created that cluster, which was deleted. So no super admin, no more admin: we are locked out. The only thing we can do is delete the cluster. Yeah, quite an adventure; that should be operational again later today.
A: Anyway, we have DigitalOcean handling the builds, so if I'm not able to finish today, I will finish, without a lot of stress, tomorrow. The impact will be: if we have BOM builds until it's back to normal, those BOM builds will wait and we might have a big queue, so be careful. Hey, why do we use DigitalOcean since Friday? That's the thing I want to mention now; I think we can start stating it publicly. That's the third announcement.
A: What a week. So Friday, end of day in Europe, DigitalOcean sent us an abuse report: an automatic system, fail2ban, mentioned an IP identified as one of our droplets, at least at the time the fail2ban report is pointing at, and that IP was used for a brute-force SSH attack.
A: So, first thing: that's the reason why we disabled DigitalOcean, because this was the only remediation we were able to apply, and we have been waiting since the incident until today, but we never had any feedback from the DigitalOcean support or security team. So it's been four days; if I exclude the weekend, it's two working days. Was yesterday a day off in the US, Mark?
A: So problem number one is that we were unable to detect anything malicious. There was only one BOM build, with three agents, running on that cluster at that moment, for the whole lifecycle of the machine; no logs at all, except the usual logs of BOM builds, so we had to read each line, line by line, for these agents. It did its work: no SSH traffic, and the pull request and the change is a Maven dependency. So for sure it's not the BOM builds.
A: So if someone was able to find such a flaw, I mean, they deserve to break our platform; and, without joking, there is no harm in that area, except it's annoying to be blacklisted for one hour. Then we checked the technical system on Kubernetes, assuming that the Kubernetes node was compromised, meaning not the application pod but the host. Datadog did not report any network request using the SSH protocol, nor using the IP that was reported as the target of the brute-force attack.
C: Any graphical view of the network outbound flow from DigitalOcean? I think they don't provide it, but I didn't find anything.
A: They do, and it was empty for that, because we were able to see the outbound flow of the agent on port 50000, connecting back to the controller using the TCP-level protocol; we were able to see DNS resolution outbound calls, but no SSH call at all.
C: In DigitalOcean? You found that...
A: Oh, no! No! Sorry: in... oh, sorry, Datadog, my bad.
A: Because, yeah, DigitalOcean does not provide that kind of metrics, yeah. We tried to remediate by enabling more restrictive firewall rules. The goal would be to say: hey, these machines and these pods should only be allowed to go to the internet on port 80 or 443 for HTTP/HTTPS, or the inbound agent protocols. The thing is, DigitalOcean does not allow you to customize the firewall rules: there is a managed firewall, its lifecycle is driven by the Kubernetes cluster, and each time you change the rules...
A: After two or three minutes, the rules are reset, and of course the default rules are: allow all outbound protocols to all IPs. So that's why I asked them for help, like: okay, I really want to restrict the outbound traffic from my Kubernetes cluster, but I think you have a feature issue on your product there. That's a lesson; not a nice feedback, but that's a feedback, and a request for enhancement to them.
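For reference, the kind of restriction being asked for maps roughly onto DigitalOcean's public firewall API as in the sketch below; as explained above, though, the cluster-managed firewall resets any such customization after a few minutes, so this is illustrative only (the token, firewall name and droplet IDs are hypothetical):

    # Sketch: an outbound allowlist (HTTPS only) via DigitalOcean's firewall API.
    # The API token, firewall name and droplet ID are hypothetical placeholders.
    import requests

    TOKEN = "REDACTED"  # hypothetical API token
    resp = requests.post(
        "https://api.digitalocean.com/v2/firewalls",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "name": "restrict-outbound",   # hypothetical
            "droplet_ids": [12345678],     # hypothetical agent droplet
            "outbound_rules": [{
                "protocol": "tcp",
                "ports": "443",
                "destinations": {"addresses": ["0.0.0.0/0", "::/0"]},
            }],
        },
        timeout=30,
    )
    resp.raise_for_status()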
A: So that's why we decided, over the weekend, to disable the cluster entirely, and we just re-enabled it, so we'll see if we get targeted again. My gut feeling, wild guess, is that the time that the fail2ban report gives is not what it says; I think they might have a timezone issue, and since these IPs are shared between customers, that could have been another customer. But that's a wild guess and I cannot prove it. So that's why we asked them for help.
A: We will write down an issue on this milestone with the traces we have, because we didn't receive any answer from DigitalOcean. So first we keep on, as I mentioned in the issue description. That's also, for us, an incentive to think: should we continue using the Kubernetes cluster in DigitalOcean? Because we have a lot of obstacles; it cannot scale down to zero, for a CI system.
A: That's quite an issue: that costs between 200 and 800 bucks per month, depending on the size of the machines, and it forbids us to try using bigger machines to pack more pods, because that one machine would cost us a lot even if it's doing nothing. So this, plus the firewall rules that we cannot apply, means we cannot properly secure the Kubernetes cluster. It starts to be an annoyance for our use case; so instead, I'm bringing back the idea that came from Gavin Mogan.
A: Since there is a Jenkins plugin to spin up virtual machines on DigitalOcean, we could switch: instead of using a Kubernetes cluster, remove that Kubernetes cluster from DO and instead build custom virtual machines. We can only support Linux, but at least we could have all the ARM machines that are now only on Azure; we could use DigitalOcean for that instead.
A: That's a proposal on the table right now; no decision yet, because we have other things to fix, thanks to me. So those are the major elements. Is there any question, or anything unclear, on these three announcements? If...
A: AKS could be used; we could create a cluster in AKS. That would have the benefit of supporting Windows node pools and ARM64 node pools, so the same feature set as AWS. But that means the money we spend on the virtual machines today on Azure... first we need to move them to DigitalOcean, so we have enough room for...
A: I took a shortcut for this one: instead of first enabling the full automatic backups and then etc., since the backups cost money, it's better to have a backup based on an initial snapshot which is as small as possible. And, as we discussed during the week, we might want to create a new data disk for the incoming new ci.jenkins.io virtual machine. With the effort that Stefan and Jean-Marc and I did on the data disk and the data of the Jenkins home, we were able to shrink the data from almost, yeah, almost 600 gigabytes, a full disk...
A: That was three weeks ago; today we only use 250, so more than a half decrease, and that might decrease even more with the new S3 artifact manager; we'll see a bit later. So the idea is: better to... I took a snapshot of the current data, just to be sure: if Job Config History broke something, we could go back in time. That was the most important thing done for that task. The next step will be: hey, let's create a new data disk, which will be cheaper; yes, half of the size, and trust me...
A: These Premium SSDs cost something. And then we can start the backups from that initial 250 gigabytes, instead of the 600 that we had before. That's the shortcut I took, validated in a 1:1 with Tim Jacomb, just to be sure it wasn't crazy. The reason why I rushed that issue is that I had hoped it could help regarding the BOM pipeline steps that are slow as hell; the thing is, I tried, and it's not helping here. It's helping another area, it's still valuable, but it's not decreasing [those step times].
A: So, we had JAR components archived: after a long, long discussion and exchanges, we chose AWS S3 instead of Azure Blob Storage for the artifact manager on ci.jenkins.io. So the goal is: on ci.jenkins.io, when there is a stash/unstash or archive-artifacts operation, it's sent to an S3 bucket. Most of the, let's say, heavy builds, the BOM builds and the bulk of the plugin builds, are already running on AWS, so the agents stash to an S3 bucket and unstash on the other side.
A: But there is no perfect solution unless we use a single cloud, because whatever cloud we select for storing and archiving this data, we will have outbound and inbound traffic. So here, at least, we won't store the data on ci.jenkins.io and we won't put pressure on it; and for the BOM builds it helps, it helps a lot, because the stash/unstash operations are not only heavy, but this also allows us to go back to the pattern where we build the WAR one time and then we distribute it.
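To illustrate that build-once-then-distribute idea: in practice this is the S3 artifact manager plugin doing stash/unstash against a bucket, not hand-rolled code, but a minimal Python sketch of the same flow (the bucket and keys are hypothetical) would be:

    # Sketch of "build the WAR one time, then distribute it" through S3.
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-ci-artifacts"  # hypothetical bucket

    # On the agent that built the WAR: stash it once.
    s3.upload_file("target/jenkins.war", BUCKET, "bom/build-1234/jenkins.war")

    # On every test agent: unstash the same WAR instead of rebuilding it.
    s3.download_file(BUCKET, "bom/build-1234/jenkins.war", "jenkins.war")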
A: Today it's less than one percent, so, nice; we already have metrics, so that one is okay, and yeah, we can proceed. Request for installing the Gitpod GitHub application on the Jenkins repository for jenkins.io: that's been done, so the benefit is that users should be able to spin up Gitpod.
A: We now have a default build discarder policy. So if a job on ci.jenkins.io does not explicitly specify a build discarder policy, then only the five last builds will be kept.
A: It's only if there is none applied to the job configuration. That one might have zero effect, because we use GitHub organization scanning, and I realized that last time the three of us tried and checked, we missed the build policy: we looked at the orphaned item policy, which covers GitHub repositories inside the organization and then branches inside multibranch projects, and stopped there. And there is a specific trait, which is named the build discarder policy, that we missed. So we might have to rescan and rebuild and think about that setting; that's the setting.
A: It was more than a day ago.
D: Yeah, absolutely no need to look it up, Damien; I was just concerned whether there was a performance issue, because I assume it sweeps all builds across the whole system, and the system is large. So if there were a performance issue, we would have heard about it by now. Oh...
A: No problem. Next: the virtual machine that used to run either JIRA or Confluence, I never remember which, has been cleaned up. That was confusing, because there was a lot of wiki data. Following the alerting changes done last week by the team, that one started to alert: it was using a lot of disk with unused data, so it was cleaned up; thanks everybody for that part. ci.jenkins.io: we switched the hard drive to a standard SSD.
A: So that explains why the new SSD is not automatically used at the old size: we increased the size from 50 to 64, because we pay the same. So instead of trying to tinker with that system disk, the proposal that everyone made independently (and when we shared, we said: oh, we all agree) is that, as part of the Ubuntu 22.04 campaign, we will create a brand new virtual machine, with a new data disk and the new setup already in place. That should be the next ci.jenkins.io.
A: [It will be] on the correct network, and we will do a snapshot and then a synchronization of the data disk. So we will use the new data disk I mentioned before, with a backup policy, everything managed as code, so we should be able to trash that machine when the time comes; hoping that will also increase performance.
A: First point: it's alerting at 80% and at 90%, so there might be a misunderstanding in the way we have to set up Datadog; more work is required on that one. And that helped us track [a disk] that was full, as I said, and also some of our node pools on the private cluster used for infra.ci and release.ci.
A: We were able to help a user with creating an account. There's been an issue with the DNS record of repo.maven.apache.org, outside our system, and we had to fix missing virtual machine images.
A: Let me use correct indentation. First: email issues. We have accounts that haven't received their emails for some time. Let me write: accounts.jenkins.io and email errors.
A: So for this one, the solution is: we are going to migrate the SendGrid SMTP system, the one that is currently used by accounts.jenkins.io to send email. We don't have access to it: as per Kohsuke's feedback, there can only be one account at a time on the free plan, as pointed out by Tim and Olivier. Within this week, the idea will be to use the SendGrid managed system on Azure, or the Mailgun account that we have access to.
A: Our goal is to have an email-sending system that we can access and manage, so we can diagnose the issue for these users; that's the goal. Let me update... there is a second issue; I will edit later to add this one. So, there's been a first step: Hervé, earlier today, was able to enable the use of the Mailgun account. So most of the emails were sent, except the ones to Google: free mails (Gmail) were blocked. There is a whatever-email-system thing; I don't have any knowledge of that.
A: We tried with Mailgun, but it's a DMARC issue with Gmail, so right now we reverted to the old SendGrid, so the service continues to work. Question for you, Stefan: do you think that we will have to switch back to using Mailgun in order to diagnose and fix the issue?
C: ...but the problem is that they had some errors, some legacy DMARC record, that was stopping mail from being sent. In fact, there is already something in the system, in the DNS, for the DMARC, that's stopping the sending of mail; so the fact that we got blocked is normal, because we asked for it, okay, but we have to work around it, because it's not only the DMARC.
A: That's quite clear; does my note capture what you said?
A: Okay, thanks; the explanation is clear and makes sense to me. So I assume that... so everybody had troubles; so that's why we rolled back as soon as he was able. We will have to think... I will note this, that he has to anticipate that part, so you should be able to get some availability to help him, because I want to attend: I will learn a lot, but my knowledge in email is close to zero. I don't even know what DMARC is, so...
C: I stopped working on email just when that came out, and we were more using the other ones, so I'm brand new to that too, yeah.
A: But my knowledge is close to zero, so I'll probably delegate that to Hervé and you, and I will learn in the process, for sure, with pleasure. Stefan, a word on the trusted.ci migration from AWS to Azure: what is the status?
C: I'm still, right now, on the one VM, which is the bounce, operated by Terraform. That's a brand new one with nothing in there; I'm starting with only this one, because I first started with every VM, and that was way too much. So I had to lower my target and iterate; so I'm back to one VM, and I'm almost ready for a second review.
C: On the network, we need to put some firewalls and stuff, and then we will go ahead with another VM, and then the last one.
A: We discovered that currently there is a resource group and a network used for the VM agent of the actual trusted.ci, which is running on AWS; and the thing is that that resource group and its resources are in the US East region on Azure, while most of our networks and setup, and the new VM, will be in East US 2. So we could start updating that part in parallel with what you are doing; then we wouldn't have the agent far away from the controller.
C: And we chose, yes, East US 2, to be close to the private k8s, absolutely.
A: Thanks. Work in progress now, on my side, about splitting the BOM builds from the others; the status is as commented on the issue.
A: We tried with a new node pool; that worked very well, and we were able to experiment. That new node pool, different from the current one, was using bigger machines: instead of three builds at the same time, it's able to handle 23 at the same time. The goal was first having fewer machines, for cost purposes, and the ability to treat a single BOM build in less than 30 minutes.
A: To deliver the split: the experiment with a bigger node pool is not good enough yet, so I'm not delivering it for now. The value is, again, ensuring that any build that requires a Linux container, and which is not a BOM build, doesn't have to wait for resources if there are BOM builds currently being run; only the BOM builds should compete with each other.
A: No, it worked, though I spent one hour debugging and trying it, because OpenSSL... I had to fix minor issues that were creating errors here. Because OpenSSL went from 1.1.1 to 3.something, I had to fine-tune the certificate authorities: with the new OpenSSL, the interaction with the LDAP client is a bit different, and it's not checking the hashes of the CA certs in the same way; it was able to use directly the CA cert from one of the lists, so the CA cert is the Let's Encrypt X1.
A: In that case, for LDAP, and now as part of the image build, we need to specify an environment variable in the LDAP environment, so it goes directly to the file on the storage instead of searching for the hash of the CA SSL certificate.
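A minimal sketch of that kind of setting, assuming the OpenLDAP client's standard LDAPTLS_CACERT variable; the certificate path and server below are hypothetical, and the actual variable used in the image build may differ:

    # Sketch: point the LDAP client at the CA file directly, instead of letting
    # OpenSSL search the hashed certificate directory.
    import os
    import subprocess

    env = dict(os.environ)
    env["LDAPTLS_CACERT"] = "/etc/ssl/certs/isrg-root-x1.pem"  # hypothetical path

    subprocess.run(
        ["ldapsearch", "-x",
         "-H", "ldaps://ldap.example.org",   # hypothetical server
         "-b", "dc=example,dc=org"],
        env=env, check=True,
    )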
A: So that one is a wrapper task, I'll see about that, because there is no dedicated action regarding this one. So, if it's okay, I won't add this issue to the milestone; it doesn't deserve to have a wrap-up. It's a kind of epic, so it doesn't make sense; I'm only using it to reference all the tasks that have a direct impact on bumping to Ubuntu 22.04. Is that okay?
A: Absolutely. So that was yesterday: it was 14, and today it was even lower. The impact: we were able to clean up the snapshots, and to keep them clean, the snapshots of the AMIs. So that's already 80 bucks per day; that's already 1.5 to 2K per month.
A: Also, we changed the instance kind used for the node pool and, as you can see, that has a direct effect: we were able to decrease drastically, because these node pools are cheaper, they have fewer chances of being evicted as spot instances, and the usage is clearly better for the kind of workload we have. So we used the same kind for the new node pool I mentioned earlier; we could even optimize by watching the metrics next month, but it already has an impact.
A: So next month we will have to carefully watch this area. I hope that, by moving the update center and trusted.ci, we should be able to remove four to five K per month from that bill; that should allow us to make it sustainable. So we made good efforts that are visible, but it's not enough; it's positive, but we need to keep working on this one.
A: No expected action next week, so I won't put that on the next milestone; that will be on the backlog, because there is no immediate action for us right now on that specific topic.
A: Make environment and description fields mandatory: I saw before the meeting that there is an action expected from the Jira administrators. The consensus was reached, so I need to check if I can do it, or if I should ask Mark or Daniel, because I'm not totally sure about that kind of setup; so I might ask them for help, but by default I take care of it, as administrator.
A: I'll go check, and ask in the issue if needed. A word about renewing the update center certificates; thanks, Stefan, for preparing that part, so I was able to meet with Olivier.
A: So initially we wanted to ask the board if I can have access to the CA key, but the board meeting has been delayed, so we won't have an answer before the 8th or 9th of May. So I asked Olivier yesterday if he can at least generate a six-month valid certificate for the update center, instead of the usual one year. Why six months? Because that will force us to reconsider the whole access to the key in six months.
A: I haven't heard from Olivier; I should have met him today physically, but he had a cancellation, so I will try to harass him a bit, just so he can unblock us. Still no emergency, but I would prefer doing it before it expires, to avoid issues; so stay tuned. I'm moving that issue to wait for a new cert from Olivier, and for the next board for the key. In the meantime, nothing to do: I will remove that from the next milestone, and if I have news from Olivier I will add it back to the next milestone.
A: So I've opened an issue about ci.jenkins.io using a new VM instance. The goal is, as we said earlier, to create a brand new virtual machine, like what Stefan is doing on trusted.ci; that's the same pattern, but for CI. That one might be a bit different: no need for a bounce or a private network, that's a public instance. It will be Ubuntu 22.04, with a new data disk, Azure backup policies and security groups; and then we should be able to configure it and try the Puppet part. Stefan...
A: Is it okay if I start this one as soon as possible, to work on the Puppet parts running on Ubuntu 22.04? So, while you create the infrastructure for the three trusted VMs, I will work on this one, so you should be able to have a working Puppet module, because right now Puppet might fail on Ubuntu 22.04.
A: By Puppet, I mean the Puppet profile that is in charge of creating a Jenkins controller. We know for a fact, based on Hervé's work, that Puppet works properly, because the private VPN machine is already Ubuntu 22.04; so it's already working on that part, but we still need to test way more profiles and stuff.
A: Good, yeah, perfect, so we can parallelize. By the time you bootstrap the Puppet agent on this machine, the work should already be done for you. Is that okay, if we split the work like this? Yeah, perfect, okay.
A: Migrate Google Analytics to V4: I'm also waiting for Olivier. Olivier gave me administration permission on the property on Google Analytics that needs updating to V4. However, in order to migrate the current property to V4, I need to be administrator of the whole Analytics space, not only the property; so yeah, I'm waiting for him. I will also try to harass him, nicely. Maybe.
A: Google Analytics... [unclear]... the Puppet agent doing some noise: it's not an important task, I didn't have time for this one. Just...
A: Add Launchable to agents: Hervé had an interesting exchange with Basil. It appears that there is a need for also installing Launchable on Windows.
A
So
right
now
it's
the
case
for
the
windows
VM
already
not
the
case
for
the
content
Windows
container
on
ACI
basil
at
the
High
Hopes
on
decreasing
time
for
some
builds
and
test
of
some
EV
workloads
once
it
will
be
done
so
Airway
will
work
with
after
the
email
on
that
topic
to
provide
launchable
command
line,
install
on
ACI
agents
hoping
to
unblock
Brazil
on
that
part,
and
that
includes
a
bit
of
pipeline
Library
setup
thanks
everybody
thanks
buddy
for
that
work,
that's
really
useful
because
we
could
decrease
the
the
machine
time
on
the
infrastructure
and
so
the
money,
absolutely
Windows.
A: Windows-wise, it's already on all other agents on ci.jenkins.io, the Windows VMs. A word about the artifact caching proxy being unreliable: there are two kinds of errors; I'm keeping that one on the milestone. The first kind of errors is related to DigitalOcean, and we haven't had any of these errors for weeks, because DigitalOcean is not used, or almost not. Splitting the BOM builds to their own node pool on AWS means we won't use DigitalOcean for BOM builds anymore, so that issue will be gone.
A: However, thanks, Tim, for the new feature published: the Azure VM agents plugin now supports the inbound protocol. That means we should be able, at any moment, to configure the agents to contact ci.jenkins.io, which is public. So we should try this to migrate the agents, like we should do for trusted; but in the case of CI, that should be easier with the new plugin.
C: The problem that I slipped into was with the gallery, the image gallery within Azure, which cannot have the same image with the same version: no overwrite. I started to look at lockable resources to deal with that. The problem is only for the development and staging galleries, not for the production one; but as we have to do PRs and work on our computers, that's kind of annoying. I had to stop to go to Puppet, and I...
A: ...but at least you learned all the lock machinery for advanced use cases, so you should be able to help users in the future; I'm sure Jean-Marc will be happy to learn about it anyway. So now, thanks to Hervé's fix, I was able to verify something that did not work months ago.
A: The version for the gallery used to require a strict semantic version, without any suffix. But now that constraint has been relaxed: as Hervé found, we can add a suffix, and I think he did something even more...
A: ...even smarter: I think it's not using a suffix at all. There is a convention for the version: for the dev pull requests, it's 0.<build number>.<randomly generated value>, which is always different; and for staging, it's 1.<build number>.something.
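A small sketch of a convention like the one described; the exact scheme used by the image builds may differ, and the function name is hypothetical:

    # Sketch: dev/PR builds get 0.<build>.<random>, always unique, while
    # staging builds get 1.<build>.0.
    import random

    def gallery_version(build_number: int, staging: bool) -> str:
        if staging:
            return f"1.{build_number}.0"
        # The random patch segment keeps every PR build version unique,
        # so the gallery never sees the same image version twice.
        return f"0.{build_number}.{random.randint(0, 2**31 - 1)}"

    print(gallery_version(42, staging=False))  # e.g. 0.42.1537228672
    print(gallery_version(42, staging=True))   # 1.42.0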
A: That should help a lot with the cleanup, because we didn't get any more issues, as confirmed by the HashiCorp person actually co-working on that topic, with the fix that you did, Stefan. So now we don't export a virtual machine and then add that virtual machine template, as...
A: ...that is forbidden for ARM64. So now we directly export to the Shared Image Gallery: fewer resources, less time for the export builds. It's already measurable; it's a two-to-five-minute gain on each template, so that's nice. And also, now we have the GC cleanup; so, yeah, that should be able to work again, we should be able to retry. I propose we don't put that on the milestone right now, just to not swamp you; but if you are bored, don't hesitate: try again, something good for you!
A: Oh yes, and we had an issue... thanks for the work on that, you did great, because we should be able to use ARM64, and I think we should also be able to open the blog post once we have succeeded on the first usage on Azure. Microsoft will be really happy to hear about that, I'm sure; don't worry, Bruno or Kevin will be happy.
A: We had an issue opened by a user (thanks for that) that reports slow pages on get.jenkins.io, mainly the list of previous releases. We did a first set of updates, configuration changes, that fixed the problem for /war-stable, which is the LTS.
A: The Datadog integration will be, and should be, enabled: there is a set of annotations to set on the pod of the mirrorbits service behind get.jenkins.io, so that it will expose, on a private endpoint, the data required for Datadog; Datadog will then be able to scrape that private endpoint and report metrics for Apache. We should be able to track the slow requests based on these metrics; that's the main good feeling.
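For illustration, Datadog's autodiscovery works through pod annotations; a sketch of patching them onto the deployment with the Python kubernetes client, where the container name, namespace and Apache status URL are hypothetical:

    # Sketch: annotations telling the Datadog node agent to scrape an Apache
    # mod_status endpoint exposed by the pod on a private URL.
    from kubernetes import client, config

    annotations = {
        "ad.datadoghq.com/httpd.check_names": '["apache"]',
        "ad.datadoghq.com/httpd.init_configs": "[{}]",
        "ad.datadoghq.com/httpd.instances":
            '[{"apache_status_url": "http://%%host%%/server-status?auto"}]',
    }

    config.load_kube_config()
    client.AppsV1Api().patch_namespaced_deployment(
        name="mirrorbits",           # hypothetical deployment name
        namespace="get-jenkins-io",  # hypothetical namespace
        body={"spec": {"template": {"metadata": {"annotations": annotations}}}},
    )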
A: So there is an option of using NFS for that file system volume instead of CIFS, but that might cost; we might eventually use NFS for those filesystems, because that issue is also a packaging issue on the private cluster, where there are some retries: when it's trying to write a file, it gets permission denied; why, it's not clear, and we have to retry. That issue has existed for years; it appears from time to time.
A: Also, there is a remediation that could be done using nginx ingress caching; I mean, these pages are updated once a week, that's all, so even once a day is okay. So we could use an ingress rule, since we have nginx in front of that service, that will cache these pages specifically, like we do with the ACP for those pages. That was also the opportunity to clean up the service: fewer replicas, more realistic limits for the CPU and memory, so we consume a bit less.
A: So we need to update the Helm chart of this application so they don't use the system pool: either an anti-affinity, or we could add a taint on the system pool and only configure the system workloads to run on the tainted pool. The only worrying part, and we need to check the AKS docs carefully before that, as they might have recommendations, is that we realized we have only one node with both CoreDNS containers on it.
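The taint idea matches the CriticalAddonsOnly taint that the AKS documentation recommends for system node pools; on AKS it would normally be set on the node pool itself rather than per node, but as a sketch against a single node (node name hypothetical):

    # Sketch: taint a system node so only workloads tolerating
    # CriticalAddonsOnly (e.g. CoreDNS and other system pods) schedule there.
    from kubernetes import client, config

    config.load_kube_config()
    client.CoreV1Api().patch_node(
        "aks-system-12345678-vmss000000",  # hypothetical node name
        {"spec": {"taints": [{
            "key": "CriticalAddonsOnly",
            "value": "true",
            "effect": "NoSchedule",
        }]}},
    )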
A: What else do we have? We have: use WebSocket for agents. That one, I propose not to work on it this week; that's part of the performance of ci.jenkins.io. The goal is, for the inbound agents, especially the Kubernetes ones: instead of contacting the controller through the TCP port 50000, they should use HTTPS with a WebSocket protocol connection, which has better retries, should use less memory, should also decrease the number of disconnections, and should be easier to monitor as well. I...
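Concretely, that switch is a launch-flag change on the inbound agent; a sketch of starting a remoting agent over WebSocket instead of the TCP port 50000, where the URL, name and secret are hypothetical:

    # Sketch: launch a Jenkins inbound agent using WebSocket over HTTPS
    # (the -webSocket remoting flag) instead of the TCP port 50000.
    import subprocess

    subprocess.run([
        "java", "-jar", "agent.jar",
        "-url", "https://ci.example.org/",  # hypothetical controller URL
        "-name", "example-agent",           # hypothetical agent name
        "-secret", "REDACTED",
        "-webSocket",
    ], check=True)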
A: Yes and no; it's hard to tell, because it's hard to monitor the TCP protocol, while WebSocket might or might not help; but yeah, we could. The thing is that, right now, we have to create a new instance, and then we will have to update the Puppet configuration of the Apache server, because we need to enable WebSocket at the Apache level. Okay: Azure billing shows a huge cost due to outbound bandwidth; so we decreased it, we only consume [a fraction] of what we consumed.
A: We only have one week of effect of the artifact manager, so we will have to wait a full month to see the results. And the last one: we have the publick8s cluster. As discussed with Hervé, that one will move also; we need to start working on this one. I'm thinking particularly of starting to migrate, service by service, to the new cluster, because we need to get rid of this one.
A: One of the first to be migrated, for me, will be the LDAP; that's a critical one. So we should start with this one.