From YouTube: 2021 10 26 Jenkins Infra Meeting
B: It's already time, so we can go. So welcome everyone to this week's infrastructure team meeting; we are the 26th of October. Let me share my screen, so you will see the notes in real time. I can do it... I'm almost... Oh! That's it! Okay, thanks! So, no announcements today unless I'm missing something, except: don't forget to register as a voter for the Jenkins election, and to vote. Just a reminder.
A: I saw noted in the Jenkins release channel on IRC that next week's LTS is a security release, and Daniel asked for being more conservative about merges to the master branch of the Jenkins core repository.
B: And on my side I have a personal note, because last week I was responsible for trying to fix something to help the release, which ended with finishing the release late at night with Wadeck, due to my changes. So yeah, let's try not to merge anything, even if we have something broken, to let Daniel release quickly, and then we fix the service afterwards, unless it's something strictly required for him.
A: The LTS and the weekly will both happen on Wednesday, and that's why we disable the weekly on Tuesday.
B: That's it for me. Okay, cool, so we can go ahead. We just finished the upgrade of the NGINX ingress on the AKS cluster. That was not an easy one: we ended up with almost a one-hour outage on some services.
B: We had a bunch of issues. Half of these issues were mainly because we had too many elements to put in that upgrade, but it was mandatory. So we were overwhelmed by the amount of tasks, and we ended up forgetting some tiny ones. But yeah, when you have 100 lines and you miss the last one, you miss it. Now, that's not an issue.
B: There is room for improvement in the amount of, let's say, Helm chart code that we have. It was hard to get into, at least that's a personal opinion. There were a lot of elements, and it was hard to know which ones were in production, which ones weren't, and which part to focus on. That would be a nice and easy improvement for us in the future.
B: However, that was a hard one, because the migration path for using the new ingress modules was not easy. We had good surprises and bad surprises. It should be okay right now: we are using the latest NGINX version and the latest ingress, and we should be able to upgrade Kubernetes to the next version without risking a deprecation, at least for the ingress element.
C: It would be nice to see how we can improve in the future. The work done today... I mean, it was difficult to plan everything in advance because of the size of the team, but it would be nice to identify how we can do better in the future, to avoid losing time.
B: While working on that board, we had some issues also caused by the plugin site. It's installed with two back-end replicas and one front-end, and the issues are related to DNS: the second replica was always in DNS error, unable to resolve some DNS names, without any logic to it.
B: So the image is relatively recent, it's less than one month old, but the base image it's built upon has not been updated in two years. So I've sent a pull request, and right now I'm trying to release it; I will try to deploy a version, that will be interesting to do. It was using jdk9-alpine, which hasn't been updated in two years. Since the alpine tag is not mentioned anymore on the JDK Docker page, I assume they dropped support for it, for sure, two years ago.
B: Running Java on Alpine Linux was a nightmare. It's better now, but I understand. So we switched back to the slim version, which is built on the OpenJDK 8 image. It's still not Adoptium, but it will still be more recent, and built on Debian. Since it's DNS, we don't have formal proof that it's related to Alpine; however, Alpine 3.9 was really well known in the Kubernetes area to have some issues, so I've added some links. So the idea is: we drop Alpine and we're going to try with this one.
B: So that means upgrading and maybe breaking the plugin site, so we will have to update the status accordingly. For that, we have the Fastly part in front, so we should be okay; but it's the back end for search, so we might break the search, though.
B: Yeah, that one was pretty important. Maybe we missed something, so I really hope we won't break anything here. That also means the plugin site is not on the standard Docker image build, on the whole automated system that creates automated releases and updates everything. So we might have some work in that area.
C: Just for context, we may work on automating the current Docker image process there. We use hadolint to lint the Dockerfile, we use updatecli to update; I mean, we have quite a lot of automation in place. The only thing we need to do is to update the Jenkinsfile so we use the new process, because on that image, at this stage, I think we are just running manual commands.
B: Okay, on the topic of AWS cost management: I need to add the two screenshots I took this morning, which I already shared internally since it's the CloudBees-sponsored account. We saw a decrease in the daily cost on the AWS account, which is quite visible.
B: The next step: we should work this week on using spot instances for both the agent VMs and the EKS cluster, as we said last week. Those are the next priority steps after the NGINX ingress. And we had the DigitalOcean part: did you have any feedback, Olivier, on DigitalOcean? Okay, which we might want to escalate with Kevin?
B: I don't know if he's still involved in that part or not. Also, we should send her a ping reminder by email, since it has been one week. I will take care of that, if it's okay for you.
B: Yeah, yes, I just want to gather all the elements that were shared, because not everyone was involved in all the parts. I'm trying to aggregate everything for all of you, to prepare a plan that will be shared, so anyone...
B: ...can take over if needed, because there were too many subjects between the internal CloudBees issues and the email exchanges with Amazon, CloudBees, and the CDF. That's why I just want to be sure we don't mess it up. Because the risk is, if we have an account which is not tied to a credit card that can afford the payment, which is 10k per month as of today, that means stopping trusted.ci, pkg, and a bunch of automation. So the outage could be quite important. Right, right.
B: Okay, the next issue: we should have done that, but with the NGINX ingress that took more time than expected, we had to delay it. So the idea is to continue the effort; Olivier especially has a bunch of issues that might be either closed, taken over by someone else, or stopped being worked on.