From YouTube: 2021 10 08 Jenkins Infra Meeting
A
Hi everybody, welcome to this new Jenkins infrastructure meeting. Today we are six, and that's awesome; hi everybody. The first thing I want to announce before we start the meeting: we have a new player in the game. He just joined the community team at CloudBees to work with us on the Jenkins project, and he will start helping us on the infrastructure.
A
Don't overwhelm him with stuff, just one thing at a time; at the moment he is still learning and following along. On to the topics: we have quite a few topics on the agenda.
A
The first one, where I misunderstood the results: the Jira maintenance was not enough to fix the root cause. Unicode still does not work in Jira at this stage, so I have to do a follow-up there. I need to contact Anton to see what the next steps would be.
A
Yeah, anyway, that's something we have to figure out next week. The next epic is about the Jenkins core security update. We got a security release on Wednesday, for both the weekly and the stable lines. It was not really smooth, for several reasons.
A
So
daniel
was
not
available
to
help
us
on
wednesday,
so
we
had
to
do
the
security
release
with
biotech,
and
so
it
was
not
up
to
date
with
this
working
environment
and
at
the
same
time
we
discovered
many
issues.
That
was
not
that
were
not
related
to
it,
but
I
think
there
one
of
them
was
so
issues
with
pacquiao
so
that
that
was
what's
the
first
one.
That
was
an
interesting
one.
We
struggled
to
allow
paddock
to
connect
on
packet.
Searching
that
are,
you
mark
could
not
connect
on
package.js.
A
Basically, all the accounts that had been added to pkg.jenkins.io manually, after we stopped the Puppet agent, were broken; we found the issue and fixed it. The second issue affected the update center job, which is a requirement to do a release, whatever the type. It was broken because the Jenkins agent could not find the Java path. That issue was related to some work done by Damien, and Damien was able to identify the root cause there as well. We also know that we rely on a Jenkins instance named trusted.ci, which is not defined as code, which means that when we make an improvement on one machine, that improvement does not reach trusted.ci.
B
That's wrong, sorry to cut in: trusted.ci is now managed. The reason is that I pushed a change aimed at fixing issues that appeared a few days ago, but that change was worse than the thing it tried to fix. Otherwise I would not have pushed a change on the day of a release, so I'm the culprit there. But yeah, we have configuration as code and we have synchronization now, so if we push something that does not work, it won't work anywhere.
A
So that was another issue. I was not able to follow the rest of the release, but it appears that it took longer than I expected. That security release was not a smooth ride.
C
Yeah
so
buttock
fallonia
has
been
gathering
a
retrospective
document
and
I
added
a
couple
of
items
to
it
or
one
item
to
it
related
to
the
weekly
release
checklist
that
doesn't
exist
and
that
detected
some
gaps.
B
And I spent a few hours just waiting in case he had an issue. For instance, he did not have the right to merge on jenkins.io, or to push directly to the master branch of jenkins.io. Everything is written in his retrospective, but those are also authorization issues, which means we can definitely improve by having a checklist of things to verify beforehand: especially, do you have the authorization here or there?
A
Yeah, and there is one other element which was also very annoying: we don't have a very stable way to disable the weekly release. I know that Daniel disabled the weekly release multiple times, and each time we made a change to the instance, even a minor one, disabling the job was reverted. Even several hours before the security release, even on Tuesday, we had to disable the weekly release because it had been re-enabled again, and the problem here is mainly because we configure everything as code.
A
Everything is defined in a Git repository, and the instance always tries to go back to that definition. That means that if we want to disable the weekly release, we'll have to look into it at some point.
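Since everything is defined in a Git repository and reconciled back to that definition, a durable disable has to live in the job definition itself rather than in the UI. A minimal sketch in Job DSL, assuming the jobs are seeded that way; the job name and repository URL here are hypothetical, not the project's actual ones:

```groovy
// Hypothetical Job DSL definition: disabling the job here survives
// reconciliation, because the seed job re-applies this state on every run.
pipelineJob('weekly-release') {
    // Persistently disabled until this line is removed and merged.
    disabled(true)
    definition {
        cpsScm {
            scm {
                git('https://github.com/example/release-jobs.git') // hypothetical repo
            }
            scriptPath('Jenkinsfile')
        }
    }
}
```

With this approach, a UI toggle still gets reverted on the next reconciliation, but a one-line pull request flips the state in a way that sticks.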
A
Exactly
any
last
command
to
that
topic.
A
No
okay
right
so
the
next
one
that
I
want
to
bring
in
this
case.
It's
mainly
for
damien,
so
damien
has
been
working
a
lot
on
the
aks
cluster
demand.
Do
you
want
to.
B
I
will
go
back
on
that.
It
took
down
the
case
cluster
during
one
hour
and
a
half,
and
now
with
the
new
limit,
which
is
the
we
hit
the
rate
limit
on
the
docker
hub,
the
reason,
and
even
if
we
authenticate
with
the
free
plan
that
we
have,
we
will
steal
it
because
it's
the
rate
limit
per
ip.
We
have
a
private
network
of
machines.
All
the
requests
come
from
the
same
public,
ip
from
docker
point
of
view.
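Docker Hub advertises the pull budget in `ratelimit-limit` / `ratelimit-remaining` response headers using the form `100;w=21600`, i.e. a limit per window of seconds. A small sketch of what sharing one public IP means for a fleet of agents; the header format follows Docker's documentation, while the fleet size below is an assumption for illustration only:

```python
def parse_rate_limit(header: str) -> tuple[int, int]:
    """Parse a Docker Hub rate-limit header value such as '100;w=21600'
    into (pull_limit, window_seconds)."""
    limit, _, window = header.partition(";w=")
    return int(limit), int(window)


def pulls_per_machine(header: str, machines: int) -> float:
    """Because the limit is counted per source IP, every machine behind
    the same NAT gateway shares a single pull budget."""
    limit, _window = parse_rate_limit(header)
    return limit / machines


# Anonymous-tier example value: 100 pulls per 6-hour window.
limit, window = parse_rate_limit("100;w=21600")
print(limit, window)                          # -> 100 21600

# Hypothetical fleet: 20 agents behind one egress IP.
print(pulls_per_machine("100;w=21600", 20))   # -> 5.0
```

The arithmetic makes the problem concrete: the budget does not grow with the cluster, so every node added behind the same egress IP shrinks each node's share.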
A
Can you not use multiple IPs, like a range of IPs, for egress connections?
B
We
could,
but
we
will
pay
for
that.
Almost
the
same
cost
as
paying
a
full
enterprise
docker
subscription
account,
so
the
I
would
prefer
going
the
direction
of
having
the
docker
images
pushed
on
the
docker
hub
and
somewhere
else,
like
whatever,
because
the
cost
per
gigabyte
is
really
almost
nothing
for
a
docker
registry
we
could
use
or
whatever.
So
we
would
have
each
image
on
two
lock
different
locations
yeah.
So
the
thing
is
on
most
long
term.
It's
more
the
question
of.
B
Do
we
really
need
that
size
in
auto
scaling,
because
that
costs
a
lot.
We
are
currently
working
on
the
cost
for
aws,
because
mostly
what
we
stopped
paying
on
the
3
000
bucks
per
month
that
we
gain
on
azure
have
moved
to
amazon.
B
So
the
question
is
more:
do
we
need
that
much?
We
know
that
we
can
auto
scale
so
now
the
next
step
will
be
to
add
more
sponsoring
capacity.
We
have
digital
ocean
and
scale
way
that
are
waiting
for
us.
Even
if
it's
two
nodes
cluster,
we
have
the
two
osu
sl
machines
that
could
be
revamped
as
a
kubernetes.
B
That's
the
reason
so
we
have,
we
can
have
a
bunch
of
tiny
clusters
instead
of
one
big
that
scales,
especially,
we
have
static
machines.
We
have
sponsorships.
The
second
thing
is
most
of
the
build
peaks.
These
days
come
from
specific
builds
like
the
bomb
and
in
the
case
of
the
bomb,
all
the
pull
requests
of
the
bomb
are
rebuilt
once
per
week,
which
means
every
week
during
the
weekend
we
have
a
peak
of
600,
builds
waiting
and
most
of
the
time
it's
not
even
needed.
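One way to trim that weekend peak, sketched here with Declarative Pipeline: keep the scheduled trigger, but let timer-initiated runs skip the expensive stage, so that only pushes and manually requested re-runs do the full work. The stage name and schedule below are hypothetical, not the BOM's actual configuration:

```groovy
// Hypothetical Jenkinsfile sketch: cron-triggered runs skip the heavy stage.
pipeline {
    agent any
    triggers {
        cron('H H * * 6') // hypothetical weekly schedule
    }
    stages {
        stage('Full test suite') {
            when {
                // Run the expensive stage only when a person or a push
                // requested it, not on the weekly timer.
                not { triggeredBy 'TimerTrigger' }
            }
            steps {
                echo 'Running the full test matrix...'
            }
        }
    }
}
```

The `triggeredBy` condition keeps the timer useful for lightweight checks while sparing the cluster the 600-build burst.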
B
So these are the next steps: trying to have different sources for the Docker images, to avoid rate limiting; adding more clusters, to have different cloud sources for the containers, so that if AKS breaks, like it did this morning, we can still handle container workloads; and finally, disabling builds that are not needed.
A
Considering that someone can just re-run the checks from a PR, isn't it better to stop rebuilding PRs all the time? Because if someone wants to work on an old PR, they can just ask to re-run the checks.
D
Yeah, they do have stuff for things like remote build caches, and the daemon will run in CI as well, but it just doesn't really help with ephemeral agents. We disabled the daemon a long time ago at work, on CI, just because it was crashing and causing problems. It's probably better now, and they do say you can have it on, but even crashing once every 50 builds was enough that we turned it off. There has certainly been a lot of work done on it, though, and remote build caches and such are supposed to make that easier. But is this just for Gradle builds?
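The two knobs mentioned here are standard Gradle build-environment settings: the daemon can be turned off on throwaway CI agents, and the build cache turned on. A minimal sketch, assuming the settings go in the agent's `gradle.properties`; whether they help depends on the workload:

```properties
# gradle.properties on a CI agent (sketch)

# No long-lived daemon on ephemeral agents: each build pays JVM startup,
# but avoids the daemon-crash failure mode described above.
org.gradle.daemon=false

# Let Gradle reuse task outputs via the build cache instead.
org.gradle.caching=true
```

A remote cache node would additionally be declared in `settings.gradle` via a `buildCache { remote(...) }` block, so ephemeral agents can still reuse outputs produced by earlier builds.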
C
Just regular Gradle? Okay. We use Gradle for everything, though. Got it. So I may beg for your help after I've done some initial experimenting, just to understand. They seemed very interested in working with us and getting involved with the Jenkins project.
A
Okay,
thanks
next
topic
is
about
the
recent
change,
so
the
let's
encrypt
root
certificate
changed
I
mean
that
was
nothing
new.
It
just
actually
totally
deprecated
old
version.
We
discovered
that
it
affected
us
on
adaptation.
Zio,
not
because
I
mean
the
certificate
that
we
were
using
were
already
signed
with
a
new
one.
It's
just
like
in
the
ldap
configuration
we
were
enforcing
the
wrong
road
certificates,
so
we
fixed
so
that
affected
us
last
weekend.
We
fixed
that
monday
morning.
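The fix amounts to pointing the LDAP client's TLS trust setting at the current root instead of the retired one. A sketch using OpenLDAP client configuration; the file path is an illustrative assumption, not the project's actual layout:

```
# /etc/ldap/ldap.conf (sketch)
# Trust the current Let's Encrypt root (ISRG Root X1) rather than a
# pinned copy of the chain rooted in the expired DST Root CA X3.
TLS_CACERT /etc/ssl/certs/ISRG_Root_X1.pem
```

Which root a served chain actually terminates at can be checked with `openssl s_client -connect <host>:636 -showcerts` before and after the change.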
A
So
that's
one
thing:
we
still
have
some
action
to
do
with
the
fastly
account,
because
apparently
one
of
the
certificates
that
we
are
using
fastly
is
still
signed
with
the
old
root
certificate.
I
received
the
notification
by
email,
so
this
is
something
that
I
have
to
look
there.
A
But
otherwise
it
did
not.
I
mean
that's
the
only
thing
that
affected
us,
because
we
are
using
its
encrypt
almost
everywhere
in
our
infrastructure,
and
then
everything
went
well.
Another
funny
story
was
yesterday.
The
next
epic
is
about
the
rackspace
account,
so
we
officially
remove
old.
We
officially
removed
the
machine
there.
So
just
for
the
story,
we
switched
from
rackspace
to
oracle
clouds
back
in
august
because
we
could
use
the
new
harm
machine
provided
by
oracle
and
because
we
don't
pay
an
oracle,
the
network
bandwidth.
A
We
were
able
to
reduce
the
cost
from
700
to
25
dollars
a
month,
and
so
we
stopped
the
our
rackspace
machine
at
that
time,
and
now
we
still
sorry
we
stopped
using
it
the
rackspace
machine
at
that
time.
One
week
ago
we
stopped
the
machine,
and
so
we
did
not
discover
any
issues,
so
we
decided
to
delete
the
machine
and
we
discovered
that
we
had
to
follow
a
security
procedure
to
delete
that
machine.
So
we
had
to
call
one
wonderful
number
provide
our
identity.
A
We
went
and
redirected
someone
else.
I
mean
all
that
to
say
that
that
machine
is
now
officially
gone,
so
kk
should
stop
being
built
on
that
account,
which
is
nice
for
him
question
yeah.
There
is
no
nothing.
I
see
that
we
have
sorry
do.
A
No,
no,
no
so
run
right.
That's
right
explains
that
was
nice,
because
rackspace
sponsored
the
changes
project
for
very
long
time,
because
that
machine
starts,
I
think
in
2014
with
ubuntu
12.,
so
the
sponsoring
ended
last
year
in
march,
and
because
that
was
a
very
old
machine
that
was
pretty
expensive
for
us.
So
now
we
don't
have
anything
on
rackspace
anymore,
which
will
simplify
the
building.
C
They had stopped sponsoring, then renewed us for a period of 12 months, and that renewal ended last March. So there was a period where they had sponsored us for years and years and then dropped the sponsorship; we were surprised. After, I think, six or eight months, Olivier asked for it and they granted another sponsorship, but that ended, and then they declined to renew any sponsorship.
A
Weird
they
didn't,
they
did
not
notified
us
that
they
would
not
renew
the
sponsorship.
So
I
was
just
collecting
the
various
costs
and
discovered
that
we
had
to
pay
invoices
on
the
rackspace
account
since
months
like
because
yeah,
I
review
all
our
account
every
two
three
months
to
see
how
it
stands
with
the
cost
and
so
yeah
the
notification,
but
yeah
anyway.
So
one
account
less
in
the
project,
and
you
know
to
be
that
you
want
to
bring.
B
So, in that case it was low cost; it was less than...
B
They were just standing there, with function names making it clear they were not used anymore. And the same on AWS, based on cost: I'm currently preparing something for the next weekly about breaking down the cost on AWS between the kinds of instances on ci.jenkins.io, whether the costs come from the EKS workload, from the VMs, or from the low-memory instances. At the end of the month we should see improvement in that area.
A
Last call, last epic: we're done, seven minutes before the end of the meeting, so that's awesome. Thanks for your time, and have a great weekend. Goodbye.