From YouTube: 2023 05 16 Jenkins Infra Meeting
A: I saw the changelog was published and marked. Is that correct? Correct.
A: I only made a minor mistake on the Linux builds; that has been fixed, thanks for the review, so the next weekly should work properly. So, the next step with the new system: since we have the release documentation updated, the next step will be for us, the infrastructure team, to communicate with the security team so they can update their documentation, do the same, and know the prerequisites.
A: That could be for automation, that could be through numerous elements, but we get the first step. Why am I mentioning this during the infrastructure SIG meeting? It's because for us, infrastructure, we kept having issues with the image being rebuilt and published, changing the checksum, and that impacts us in our usages for ci.jenkins.io, trusted.ci, cert.ci and release.ci, four of our five or six controllers, but it also impacts us in terms of supply-chain security.
A: So now that we build a tag, we can start something. I remember Hervé asking for SBOM capability, so publishing the SBOM of a given build. Now that the ground is set, we can start working on this one for real; it's ready to go, so anyone interested in contributing that to the Jenkins controller image can get started.
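The meeting does not pin down any tooling, but as a rough illustration of what "publishing the SBOM of a given build" could mean, here is a minimal sketch that assembles a CycloneDX-style document for a hypothetical controller image. The image name, the component list, and generating it at tag time are all assumptions; a real pipeline would derive the components with a scanner against the built image.

```python
import json
import uuid
from datetime import datetime, timezone

def make_sbom(image, components):
    """Assemble a minimal CycloneDX-style SBOM document for a container
    image. In a real pipeline the component list would come from a
    scanner run against the built image; here it is passed in directly."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "metadata": {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": {"type": "container", "name": image},
        },
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

# Hypothetical usage: build the document for a tagged image and print it,
# so it could be published next to that image.
sbom = make_sbom("jenkins/jenkins:weekly", [("example-lib", "1.2.3")])
print(json.dumps(sbom, indent=2))
```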
A: Okay, so that means we are ready to deploy that weekly core release to our systems: the weekly one, ci.jenkins.io and infra.ci.
A: Second announcement: today we had the security advisory, which went fine as usual. At least this time it was plugins only; we had some plugins that were subject to the advisory, so it has been updated on ci.jenkins.io, which is the public instance. I already did the trusted.ci and cert.ci updates, and I saw a pull request merged on our Docker Jenkins weekly image.
A: That was done for that specific reason. I did the same during the past advisory two weeks ago, and the last one remaining to deploy will be jenkins-lts. So, specifically for the Docker Jenkins weekly image, that will be both the weekly core release and the plugins update for the security advisory. Ideally, we should try to deploy it later today.
A: Is it okay, Stéphane? Will you be able to do it after this meeting?
A: During the past milestone we had those minor fix-ups on weekly.ci.jenkins.io; thanks Alex for taking care of that. Just a word: weekly.ci.jenkins.io is a public demonstrator based on the weekly core release, with some CI UX and new language elements and some features, and it appears that some setup was done manually during the past months. So, with the work currently being done by Hervé on migrating instances, we migrated weekly.ci last week, and it appeared that, yeah...
A: So, Mark, that might be a security feature, or an accidental security feature: we forgot to open the firewall with the new outbound IP for the new instance. It had an error when connecting to LDAP, starting the security realm with a connection error, and the rendering of the top banner, the custom message written in HTML, came out as plain text. As soon as we fixed the LDAP part, without a restart, after the first login the HTML was enabled again, visible and rendered as HTML.
B: Well, okay, I am one of the causes of the manual configuration of that system, having done that manual configuration. When I saw things were flawed, I went into the Manage Jenkins, Configure System page and switched the formatter from no formatting to valid HTML, but...
A: Exactly, we all have that spirit. Okay, thanks; that one was weird, but yeah. So now it's fixed, and Alex confirmed weekly.ci is in a correct state, so that should be okay for us. Next: closing the issue about the huge cloud cost due to outbound bandwidth.
A: It's been three weeks now, and the metrics on Azure show that adding the S3 artifact manager on ci.jenkins.io clearly decreased the outbound bandwidth a lot. We still have a bit, but it's way different and now it's sustainable, so I was able to close this.
A: We had someone blocked by the spam-account protection. More and more, we have users saying they are blocked as spam accounts, and every time, when you look in the Datadog logs for the accounts app, you see the reason is "Cookie": these persons tried to create an account in the past five or fifteen minutes with the same web browser, and they never logged out.
A: Each time we see that, it's a user who tried something, had an error, or made a typo in the email, that kind of thing. So I created the issue, but I wanted to share that piece of knowledge with all of you folks. A long-term solution would be getting rid of the accounts.jenkins.io application and using something built for that, Keycloak or whatever application could do it, but that's a long-term future; we have plenty of things to do until then.
D: Keeping the current situation is fine. I mean, for me it's an indicator of people not reading the instructions properly, and you cannot fight against that, except by using an application with a user path that is way easier for them. But in that case, I don't see a reason for finding a solution for people who mistyped their email in a field.
A: Okay, but maybe I'm wrong; it's just a proposal. If you feel like there are a lot of spam accounts, then maybe we could start thinking about it. But here, most of the time, they can retry 24 hours later. Although, yeah, I saw one user, discussed in a one-on-one: someone with a web browser and tons of tabs who never quits their browser, so the session never ends. So even retrying 24 hours later, since there is no session cleanup, I think that web browser still had the cookie within its TTL.
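The throttling described above, where a repeated signup attempt from the same browser cookie within a short window is refused with the reason "Cookie", can be sketched as follows. The class name, the fifteen-minute window, and the in-memory store are illustrative assumptions, not the actual accounts app implementation:

```python
import time

COOLDOWN_SECONDS = 15 * 60  # upper bound of the "five or fifteen minutes"

class SignupThrottle:
    """Refuse a signup when the same browser cookie already attempted an
    account creation within the cooldown window, mirroring the "Cookie"
    reason seen in the logs."""

    def __init__(self, cooldown=COOLDOWN_SECONDS):
        self.cooldown = cooldown
        self.last_attempt = {}  # cookie id -> timestamp of last attempt

    def check(self, cookie_id, now=None):
        """Return (allowed, reason); record the attempt when allowed."""
        now = time.time() if now is None else now
        last = self.last_attempt.get(cookie_id)
        if last is not None and now - last < self.cooldown:
            return False, "Cookie"
        self.last_attempt[cookie_id] = now
        return True, None

throttle = SignupThrottle()
print(throttle.check("browser-1", now=0))   # prints: (True, None)
print(throttle.check("browser-1", now=60))  # prints: (False, 'Cookie')
```

A browser that is never closed keeps the same cookie, which is why even a much later retry can still trip the check until the session or cookie expires.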
A: Next: builds aborted when scaling down. That was a user error that has been fixed. The nugget of knowledge here is that when the buildPlugin pipeline-library method is used, it has a failFast parameter enabled by default, which means any failure on one of the branches will immediately stop all the other branches. The end user can tune that (the parameter is exposed through the function), but it might lead to weird behavior like this one: here we have a pipeline specialist, a Jenkins expert, and even that expert was caught in that trap.
A: Hervé, so you finished the task about Launchable on the agents and you were able to update the pipeline library. Is that correct? And are there other tasks or feedback about that topic?
A: Cool, thanks. Next issue, "temporary name resolution failure on plugin BOM builds": that one I took on myself to close, because the work you folks did last week allowed us to remove that error. I haven't seen that error anymore on BOM builds. We haven't fixed the root-cause issue, but now it's not present anymore.
A: The root cause is the CoreDNS component, the local DNS server inside the ci.jenkins.io Kubernetes cluster on AWS, either having too many issues or having temporary failures to resolve outside domain names.
A: We were able to track down that the Datadog agents on that cluster were, I don't know the English word, but they were sending a lot of requests to CoreDNS, because our Datadog agents were probing all of our services as part of the Datadog probes. That's the historical system. So Hervé worked on this part and was able to disable them. As a team we discussed that topic and we decided that the clusters where ci.jenkins.io runs builds should not have Datadog probes for our services, particularly since, in the past two years, Datadog introduced something named Synthetics that allows Datadog to run their own probes on their own systems. You can select regions and cloud providers in different locations in the world, so there are fewer good reasons for us to run our own probes and consume CPU and network.
B: We had previously been installing the Datadog agent on every pod, or every container, that we were starting inside the Kubernetes cluster as a ci.jenkins.io agent?
A: At the machine level, not on every container.
B: Interesting, okay. And that was making, for each machine in that set of all machines... So in the case of the BOM, that could be a hundred or more machines, right? It could be very large, but they were enough to overwhelm the DNS. Interesting, cool.
A: Absolutely.
A: Another issue: disk space for the system pool. That one was tricky. As pointed out by Stéphane a few weeks ago, after the initial cleanup work on Datadog, we had some monitors alerting us that we had passed the 80 percent threshold of disk usage. That threshold is important on Linux, because it means your performance is decreasing; even with an SSD, most of the time you need 20 percent free on your disk, at least for the system pool on that cluster.
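The 80 percent rule of thumb above is easy to turn into a check; a minimal sketch, where the path and threshold are illustrative:

```python
import shutil

def disk_usage_ratio(total, used):
    """Fraction of the disk currently in use."""
    return used / total

def over_threshold(path="/", threshold=0.80):
    """True when the filesystem holding `path` is past the threshold."""
    usage = shutil.disk_usage(path)
    return disk_usage_ratio(usage.total, usage.used) > threshold

# 85 GiB used out of 100 GiB trips the 80 percent alert:
print(disk_usage_ratio(100, 85) > 0.80)  # prints: True
```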
A
That
was
weird
because
that's
the
default
system
pool
and
when
using
terraform
or
Azure.
If
you
change
the
default
system
pool
it
want
to
recreate
the
cluster,
because
the
the
life
cycle
is
tied,
however,
we
were
able
to
find
a
trick
Solution
by
creating
a
secondary
system,
pool
draining
everything,
then
removing
the
old
one.
A: Everything was done manually, either through the Azure command line or the UI, and in the end, if you change only the naming in Terraform, Terraform is tricked into thinking the current system pool, which is the first one in the list of system pools, is the default one, and just like that it goes through. So we were able to increase the disk space for the system pool, and with a deep dive...
A: We don't pay more for these disks, because we have set up ephemeral machines, so we now use all the available ephemeral storage for the OS disk. In the context of Kubernetes, the OS disk does not survive a virtual machine restart or reschedule, but we don't care, because that's Kubernetes with autoscaling. So that's why we use that one. Next, the issue about Jenkins failing after changes in a Jenkinsfile: I don't remember this one, but it's closed; I think a user had issues.
A: Another account issue: wrong email on an account. We have a plugin maintainer; we did a bit more than expected for the infra scope, but we helped that user get access to be able to release their plugin. The trick here is that it is an old plugin that hasn't been updated in years, and the associated technical account was part of the...
A: There was a security issue, Mark, in 2020, before I joined, and a lot of LDAP accounts that were marked as unused for months or years were disabled on the Keycloak and LDAP sides, and the consequences of that were still there. That was creating a set of minor issues, and the user also wanted to do a manual release of the plugin.
A: So that's not the best idea. They had issues configuring their Maven, and I asked for them; they were essentially discovering that the Artifactory UI changed. Thanks Bruno, thanks Mark, for taking care of that. Our documentation on jenkins.io for developers needs to be updated as soon as possible, at least removing the UI steps for now, and we need to discuss with JFrog, because you cannot get the Maven configuration settings file through the UI anymore; you must use the command line. That's the takeaway. So, Mark...
A: The goal is to avoid confusion for now, and then we can iterate once we have found the solution. Dropping a note would mean: the way we used to do it with the UI is currently broken in Artifactory; that might come back, or not. So everyone knows you have to use the command line, and go.
A: Thanks, Hervé. Looks like you didn't hit any issue migrating the bots application from the system pool to the Linux pool; nothing to report on that one.
A: We had an issue with the credential for Artifactory Maven. I'm asking for a review on the repository-permissions-updater Jenkinsfile; I opened a pull request a few months ago. My proposal is to do at least one retry when the job fails, because the job runs every four or six hours.
A: So if the job fails, and nobody notices and triggers a new one, that means the automatic CD tokens for Artifactory reach their end of life during one to three hours, leading to that kind of issue. We have one from time to time; most of the time it's okay, but my proposal is to do only one retry. Daniel did not really object, but said it would be better to have this built in inside the RPU system, because most of the time it fails because of GitHub being on holidays, like last week. I just don't have the bandwidth and the knowledge to build it into the RPU Java application. So right now I propose we add the retry with the commands in the Jenkinsfile, to avoid that kind of support for us.
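In a Jenkinsfile this is typically done with the built-in `retry` step wrapped around the failing stage; as a language-neutral sketch of the "retry once" idea (the function and the flaky step are illustrative):

```python
def run_with_retry(step, retries=1):
    """Run `step`, re-running it up to `retries` extra times on failure
    before letting the exception propagate."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return step()
        except Exception:
            if attempts > retries:
                raise

calls = {"n": 0}

def flaky_deploy():
    # Fails once (say, an expired CD token), then succeeds on the retry.
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("transient failure")
    return "deployed"

print(run_with_retry(flaky_deploy))  # prints: deployed
```

A single retry is enough here because the failures described are transient: the next scheduled run would have succeeded anyway, hours later.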
A: And finally, that one was tricky. I thought that with Maven 3, being able to specify a local Maven repository for artifacts was forbidden or deprecated. It appears I was mistaken. So all this time, when we build and specify the ACP (artifact caching proxy) system... I did not even talk about that case, but it appeared we have it. In Maven 3, officially, you can specify a local file path as a local repository; it's added to the list and of course it works like every other artifact repository, meaning that by default it was proxied by the ACP, so the users had their builds failing. So the idea was to define an ID, a technical ID. Thanks for the help, everyone, on that part. We should now document this one: if you need a local repository with files, you need to use the specific ID that is excluded from the ACP on our system.
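The exclusion mechanism relies on Maven's mirror matching; a hypothetical `settings.xml` fragment is shown below. The proxy URL and the `local-files` repository ID are made up for illustration; only the `mirrorOf` syntax (a `*` wildcard with `!id` exclusions) is standard Maven.

```xml
<!-- Hypothetical settings.xml fragment: route everything through the
     caching proxy EXCEPT the repository declared with the agreed ID. -->
<settings>
  <mirrors>
    <mirror>
      <id>artifact-caching-proxy</id>
      <url>https://proxy.example.org/repository/public/</url>
      <!-- "*" mirrors all repositories; "!local-files" excludes the
           repository with that ID from the mirror. -->
      <mirrorOf>*,!local-files</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```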
A: That happened; now back to the work in progress. Install and configure the Datadog plugin on ci.jenkins.io: just a note, we didn't discuss this one last week, it appeared during the week, but that's a really nice idea from Hervé, because it will allow us to monitor a lot of things on ci.jenkins.io.
A: We are not using Datadog on ci.jenkins.io for the specific Jenkins metrics; we used to have Prometheus and the plugin metrics. We need to check whether those plugins have been removed, because we removed the Prometheus platform. And now we saw on infra.ci that the data in Datadog, when sent by the native plugin, is really useful and provides additional information for us in terms of observability of Jenkins.
A: That could help on numerous topics, including the BOM slowness, the fact that we cannot have 300 BOM parallel steps at the same time, otherwise ci.jenkins.io just dies for one hour, but other topics too. So that's why we have added it to the milestone, because it's essential for us to observe that. Hervé, your turn: what's the status of that task?
A: If you need help on that part, please ask on the usual channels, but yeah, it's going in the right direction, because you already set up the whole Jenkins configuration as code. That's obviously only enabled for CI; we don't enable it for trusted.ci and cert.ci. So that's really a lot of work. We don't want to send data from within the cert and trusted controllers, but we do want it for ci.jenkins.io.
A: Yes, you did the hard work; it's now a matter of finding the right network path. So that's not a lot of config, and there is no blocker here from my point of view. Nice work, let's continue.
A: Use a new VM instance type: the new virtual machine and its environment were created earlier today, and I'm currently working on running the Puppet agent. The goal is to have a generation 2 virtual machine that costs a bit less than the current one, with better CPU performance and better system disk performance. We'll see if that one helps in the BOM area; there might be issues due to the HDD system and the OS disk not being local on the current one. I think it's a network-attached machine under the hood.
A: I expect to continue working on this one and to be able to plan the migration, either in the upcoming milestone or in two weeks maximum; I'm quite optimistic on that part. That will require an interruption on ci.jenkins.io to do the whole migration, but that will be one hour, no more, with notice.
C: It should be, as well as I am, because I did the handover. So, as I told you, I will update everyone: we are at the point where we have the three VMs, we have the network with the subnet, we have the security groups and the openings for the ports. So everything should be ready to get the Puppet installation of the software.
A: I validated almost all the security-group rules, and the Puppet configuration is ready to roll, so I'm adding the three machines to the Puppet list of machines. We should be able, later today or tomorrow, to start the first initial Puppet run. If that works, then I'm waiting to finish the security rules before starting the initial migration of the data, the Jenkins home and the permanent agent. The tricky part will be to find a way to test the new generation.
A: Azure billing, excessive consumption on East US: these are the virtual machine agents. We changed the kind of instance and the setup; thanks for helping me and reviewing that area. I checked yesterday, and the billing seems to decrease a lot. We are using spot instances of a kind that is ten times cheaper as spot, so clearly it's worth the effort. There might be hiccups, especially for the Jenkins core or long-running builds, if they get too many agent disconnections. I saw some, but I cannot evaluate whether it's a lot, whether it's blocking, annoying, slowing, or whether it's working as expected. We should wait one or two weeks before seeing a real impact on the billing. We have solutions if the spot instances are breaking either the acceptance-test-harness builds or the other kinds: we could have different sets of virtual machines. The cost in any case is cheaper; we were able to remove the elements.
A: There is an improvement coming, though: with the work of Tim Jacomb, we should be able to quickly use inbound agent mode for these machines instead of SSH. That will clearly increase stability if we have spot instances, because the time to detection by Jenkins is clearly shorter when an inbound agent drops the connection.
A: That is, in most cases. In the case of SSH, the inbound protocol is wrapped inside the SSH protocol: Jenkins communicates with the agent through a Java native SSH client, and the setup of both TCP and SSH in that implementation has a keep-alive that is way longer, so it takes way more time to detect that a classical agent disconnected and did not reconnect properly.
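The detection time the speaker describes comes down to keep-alive tuning. As a generic illustration (not the actual Jenkins remoting or SSH client code), here is how aggressive TCP keep-alive settings shorten dead-peer detection on a plain socket; the timing values are arbitrary:

```python
import socket

def make_keepalive_socket(idle=30, interval=10, count=3):
    """Create a TCP socket whose dead peer is detected after roughly
    idle + interval * count seconds, instead of the much longer OS
    defaults (on Linux, two hours before the first probe)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Per-socket timing knobs are Linux-specific, hence the guards.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock
```

A stack that leaves these at OS defaults, as the SSH path apparently does here, can sit on a dead connection for a long time before Jenkins notices the agent is gone.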
C: Yes, for the infra part it's running quite well. I did switch the default VM used by the Packer building process, and it's now using an ARM64 machine, so we should save some money on that, because it's cheaper. And I also set up an agent, an ARM64 Azure VM, on ci.jenkins.io, with the same image and everything. So we are ready to try to use it as much as possible to save some...
A: The question is whether there is another issue for tracking ARM64 on ci.jenkins.io, no?
A: ...to check the issue and refresh the status. Okay, cool. The next major task is migrating our public Kubernetes workload to the new cluster, publick8s, in the new network. Takeaway one: we will pay less money with that new cluster. Takeaway two: we will have a non-overlapping network with better performance. Takeaway three: that cluster is IPv6 compliant, so we should be able to publish the services running on this cluster over IPv6, for our Indian friends and everybody. What's the status, and did I miss something?
D: It's also a great way to remember how these services are running and what their needs and differences are, like PostgreSQL databases and so on. So it will also be a way to do some cleanups, for example of the DNS records, most of which are not in a good state.
A: A word on the PostgreSQL side: right now we have, let's say, three services that are using a PostgreSQL database. We tried a lot of things, but it appeared that we need to create a new instance on the new network, and for the instance migration, for these three, we will have to stop the service, dump the database, import it on the new instance, and start the service from the new cluster. Stéphane worked on that for accounts.jenkins.io a few months ago, and the same for Keycloak, when we migrated them from AWS to Azure. So we will have a service stop, but that's okay if we just do the announcement properly. We can start with Keycloak, because the only people impacted by Keycloak will be us; so, Hervé, you should be able to do it without further announcement, just let us know at least one hour before, as an internal synchronization. But you also discovered...
A: ...some services are using separate databases that in any case will need to be migrated, because they were using, let's say, the legacy PostgreSQL managed service in Azure. So we might need to first create a new instance on the new network and migrate these instances as well, so we can clean up the former services.
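The stop, dump, import, start sequence above can be captured as an ordered command list; a hedged sketch where the service names, hosts, and the use of `kubectl scale` are illustrative assumptions (only the `pg_dump`/`pg_restore` usage is standard PostgreSQL tooling):

```python
def migration_commands(service, old_host, new_host, dump="/tmp/db.dump"):
    """Ordered commands for a stop / dump / import / start migration
    window. The service name doubles as the database name for brevity."""
    return [
        # 1. stop the service so no writes land during the dump
        ["kubectl", "scale", "deployment", service, "--replicas=0"],
        # 2. dump from the old instance (custom format, for pg_restore)
        ["pg_dump", "-h", old_host, "-Fc", "-f", dump, service],
        # 3. import into the instance on the new network
        ["pg_restore", "-h", new_host, "-d", service, dump],
        # 4. start the service again, this time from the new cluster
        ["kubectl", "--context", "new-cluster",
         "scale", "deployment", service, "--replicas=1"],
    ]

for cmd in migration_commands("keycloak", "old-pg.internal", "new-pg.internal"):
    print(" ".join(cmd))
```

The point of the ordering is the announced downtime window: the service stays stopped between steps 1 and 4 so the dump is consistent.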
A: May I ask you to open an issue describing this? We might have a few buckets for which the question stands; it's not mandatory for the migration, as far as I can tell. Then we can have the discussion about the cost and the understanding there, and we might add Tim to the discussion thread; he might have insight on that part. That might have an impact, I don't know if it's positive, negative or none, on the way the Kubernetes clusters are mounting the PVCs, using version 1 or version 2. I'm almost sure that, for instance, if you want to use NFS instead of the default Samba mounting for buckets, you need version 2...
A: So, version 2 rather than version 1 for that storage to be created. Good catch, thanks Hervé. Stéphane, you opened an issue about the leftover disks to clean up on DigitalOcean, whose names start with "pvc-" something, yes?
A: That could be something left over from an experiment, from either Hervé or me, for the ACP DOKS public cluster, or something else. I propose, if it's okay with you, that we check it together during the upcoming days: we take a coffee, we look at the state, and if we don't know, we can just delete it, because there isn't any sensitive data in DigitalOcean yet.
C: On the status, I will start, because I did the first step. I struggled with Packer to be able to build some images for DigitalOcean, to be used as template images for the VMs that we would spawn as Jenkins agents from the controllers. I did manage to get a nice image on Intel/AMD, I pushed my code online and built one image. So now I'm handing the baby back to Hervé, to play around with the Jenkins plugin to spawn the actual VMs.
A: Okay. So, as I explained on the issue, the idea is to stop using Kubernetes agents on DigitalOcean, because that cluster cannot autoscale to zero and we cannot control the outbound security groups at a low level; we cannot forbid SSH outbound from the instances. Both of these are major reasons for us to switch to virtual machines. This is what we intended to do at the beginning of the partnership, almost two years ago, so we are back to that initial assessment. That might be better, at least for the billing.
A: So that's why we started these elements. The limitation is that we only have Linux Intel/AMD machines; we don't have ARM Linux and we don't have Windows machines on DigitalOcean. But still, that could help, especially with the spot versus non-spot issue we mentioned earlier about the virtual machine agents on ci.jenkins.io. Any question, or anything to add on that part? Cool, thanks Hervé. Could you describe the next issue, about cleaning up, importing and managing the Datadog monitors in Terraform?
D: ...but instead adding the creation of a file somewhere in a public place from these jobs, so we can monitor and observe these jobs without opening access. Interesting.
A: For good reasons: it's because we need both. The probes need to be on all the infrastructure, on each machine. "Each machine" means the virtual machines managed by Puppet, so you have to define them in Puppet, and some machines are Kubernetes nodes in the node pools, so for those you need a DaemonSet with the Datadog Helm chart. That's why you have a duplication of the probes.
A: So there is a good reason for that, and the question is more: do we need custom probes? That's the real question. Most of these probes could be replaced by Datadog Synthetics as of today, which would mean fully delegating all of that to Datadog. But the choice of having them on Kubernetes, on virtual machines, or both is not really an interesting question: if we need them, then we install them everywhere.
A: Thanks. Artifact caching proxy reliability: I haven't had time; no, I worked a bit on the inbound agent part. The goal is to move the agents to a network closer to the ACP on Azure. That would solve the issue, because DNS is now served there and DigitalOcean is not used by the BOM anymore, so we should be able to stay within the limits of the ACP behavior.
A: So the next step is checking that issue again once the agents are on a closer network. I intend to work on that in the upcoming weeks, now that we have validated that the agents are working as expected.
A: So all of these issues should be worked on; that's already a lot. I just want to quickly cover a few new issues that are marked as triage. Is that okay for everyone? I propose we keep them as triage, just read the titles, and see if they are mandatory or if we can postpone triage to next week. Is that okay for you?
A: We defer that one because Stéphane is going to be on holidays, and it targets the stateless services that, as Hervé mentioned, are currently starting to migrate to the new cluster. So that means, Stéphane, you weren't fast enough: now you have to wait for the migration to be complete.
A: So I will postpone this one; I'll leave it as triage. The point is, you can in any case start creating the node pool, but you won't be able to migrate the services until the migration is fully finished for these services. Does that look good to you? Just a reminder: the idea is that some of our services, such as javadoc, are just a web server that serves files from the file system.
A: So there is an opportunity here to run these services on machines that are based on ARM64 instead of Intel, because we use nginx or Apache httpd, which exist on both architectures and have good performance on both, but the cost is clearly cheaper when using ARM64: it consumes less energy and costs less. So that's a good thing for numerous reasons, and the proposal is to use them in node pools; the interest for us is to start managing ARM64 node pools on Kubernetes.
A: Upgrade to Kubernetes 1.25: that one is required to finish the Ubuntu 22.04 campaign, because that's the only way to migrate the Kubernetes nodes to that Ubuntu. I would like to drive this upgrade; the previous upgrades were driven by either Hervé or Stéphane, or both, but I'm interested in driving this one, just because it's something I want; there is nothing hidden there.
A: Exactly, we don't have a lot of choice on Azure: if you have Kubernetes up to 1.24, you will have Ubuntu 18 below it. It's a custom kernel with custom enforcement, so the security issues are backported by Azure, but still, I would prefer having Ubuntu 22, especially with regard to the control-groups behaviors, because Ubuntu 22 features a new control-group major version.
A: Yeah, so maybe having the wrapper issue here is still good, because it's about two different repositories, and we don't have a convenient way to use the equivalent of what an epic is on Jira. That issue would be an epic.
A: GitHub Projects does not allow that in an easy way, because it adds an additional full component on top. So that's why it's okay: it's not on a milestone, so we don't have any actionable here, and it stays in the discussion area.
A: DigitalOcean leftover disks to clean up: that one is part of a milestone, so that's okay; we can remove the triage label after this meeting. Remove the Docker pull credential for the Kubernetes clusters: the work from everyone on Datadog showed that Datadog was the last component requiring a credential for pulling on the Kubernetes clusters, and it doesn't anymore, because they defaulted to the gcr.io registry. They still provide an image on Docker Hub and you can switch to it, but that's not the default. So that means we should be able to clean up some code.
A: DigitalOcean virtual machine agents, okay. "Add a garbage collector to the Jenkins Kubernetes clusters": that one is an answer to the concern from our friends at CloudBees, who are paying for our AWS accounts. They were concerned that if we increase the maximum limits of available resources for the BOM builds and don't take care of cleaning up pods, some pods could be left running. That's what happened with the virtual machines in March, so we cannot say it will never happen for the pods as well.
A: So that's why that one will be a safety net. I proposed different solutions on the issue, but the idea is a process that deletes, at least once a day, all the remaining pods, because we don't have any kind of usage on AWS that should keep a pod up and running for more than six to eight hours. So yeah, if we remove all the pods that are detected as more than one day old, we should never have any problem.
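The one-day cutoff above is straightforward to express; a minimal sketch where the pod list is plain data standing in for whatever the real cluster API returns (the names and the helper are illustrative):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=1)

def pods_to_delete(pods, now=None, max_age=MAX_AGE):
    """Select pods older than `max_age`.

    `pods` is a list of (name, creation_time) pairs. The one-day cutoff
    matches the reasoning above: no legitimate build on that cluster runs
    for more than six to eight hours, so anything older is a leftover."""
    now = now or datetime.now(timezone.utc)
    return [name for name, created in pods if now - created > max_age]

now = datetime(2023, 5, 16, 12, 0, tzinfo=timezone.utc)
pods = [
    ("bom-build-1", now - timedelta(hours=30)),  # leftover, reap it
    ("bom-build-2", now - timedelta(hours=2)),   # recent, keep it
]
print(pods_to_delete(pods, now=now))  # prints: ['bom-build-1']
```

Run once a day (from a cron job or a scheduled pipeline, for instance), such a filter bounds how long an orphaned pod can keep billing resources.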