From YouTube: 2021 06 01 Jenkins Infra Meeting
Description
Jenkins infrastructure meeting from June 1, 2021
A: Hi everybody, welcome to this new Jenkins infrastructure meeting. Today on the agenda we'll talk about postmortems. In fact, looking at the agenda, we have quite a lot of small topics to cover, so I think it would be better if we take them one by one. The first thing, before we start: we published a new weekly release today and everything went well. That's great, considering last week's issue, which I'll quickly mention in the postmortem.
A: So the first topic that I want to briefly cover: over the last week we had two issues. Last week we had an issue with the weekly release, which Damien briefly mentioned; what happened was related to the recent upgrade of the Kubernetes cluster.
A: The upgraded cluster does not handle paths in Windows containers the same way. That's what Damien troubleshot and identified last week, and the fix was quite easy.
A: If you want to understand a little bit more, and that's where I want to come to, Damien wrote a really nice postmortem. If you have access to HackMD: we have now started writing postmortem documents there. This is the one describing the weekly packaging issue; we were able to release every package except the Windows one, because of the path issue. Damien explains everything here, and that's really nice.
A: So that document is now available: we push the documents to GitHub, to the jenkins-infra documentation repository. We have started pushing every document that we write on HackMD to that Git repository, and we really consider the Git repository the source of truth. So there is now a postmortem directory where we list the different postmortems.
A: If you're curious about the Windows issue specifically, it's explained here. As I said, we had the path issue on Windows, and during the investigation Damien triggered the wrong job a second time, which generated a second weekly release. The second weekly release had the same contents; it was just the wrong documentation on our site. So if you have some input: Damien suggested some improvements that we could make in the short term, medium term and long term, so feel free to participate here.
A: We usually leave one week after the outage during which people can provide input, because it's easier to take it into account then; once the document has been published to the Git repository, we only go back to it for specific needs. The second thing interesting to note is that Damien is proposing a specific template for these documents, in which he specifies the chronology, the impact of the issue, a bunch of technical elements, what went wrong, and finally the different improvements that he made.
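For readers who want a concrete picture, a minimal sketch of such a postmortem skeleton could look like the following (the section names are inferred from the description above; the exact headings used in the jenkins-infra documentation repository may differ):

    # Postmortem: <title of the incident>

    ## Chronology
    <timeline of the incident, in UTC>

    ## Impact
    <who and what was affected, and for how long>

    ## Technical details
    <relevant logs, versions, configuration>

    ## What went wrong
    <root cause analysis>

    ## Improvements
    <short-term, medium-term and long-term actions>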
A: I think it's a really nice template as-is. We had another issue over the weekend, and Damien and Mark reused the same pattern for that document. That document is also available in the postmortem directory; it was related to ci.jenkins.io. Just a quick note for Mark:
A: We can use tag information here directly, so you can specify tags; in this case it's a postmortem, and that helps us identify documents directly inside HackMD. So we have postmortem, we have maintenance, and... what were the directories? We have postmortems, meetings, maintenance and runbooks. We also agreed with Damien to use a state tag to specify whether we are still open to input or not.
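As an illustration, HackMD reads tags from a heading line near the top of a note, so the convention described here could look like this (the exact tag values are an assumption, not confirmed in the meeting):

    ###### tags: `postmortem` `state: open`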
A: The idea is to collect all the feedback that people may have; once we consider the incident closed and we no longer accept feedback, we just consider the document closed and archive it for the future. The final thing is, once we consider a document closed, we create a version of it and then push the document directly to the Git repository, and again we really consider the Git repository the source of truth.
A: Once we are ready with a document we publish it, and then we don't go back to it. We still have some improvements to make on this Git repository to simplify browsing the content, because it's only Markdown or AsciiDoc, but so far I'm quite happy with the amount of documentation that we have been able to put here.
A: You wrote the documents; the only thing that I did was add the tags. I specified the state of the document as closed, because we are not waiting for input anymore, and then I committed the change: I created a version in HackMD and then pushed it to the Git repository.
A: That's one thing that I did. Another interesting feature that I like in HackMD is the built-in linting. You see the red button: it automatically runs a Markdown linter on the Markdown, but it also runs, what's the name again, a checker that can also fix small wording issues and things like that. Grammar checking.
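As a hedged aside, an equivalent Markdown check can be run locally against the documentation repository; a minimal sketch (the meeting does not say which linter HackMD actually runs, and markdownlint-cli here is an assumption):

    # lint every Markdown file in the repository (markdownlint-cli assumed)
    npx markdownlint-cli '**/*.md'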
A: I really like the fact that you both created the postmortem document first, so it was easy to identify what went wrong, and also that you created an incident on the status page. When we have an incident on the status page, we don't really have to wait for a PR review, because the goal is to announce the incident as quickly as possible. That's what you did over the weekend and last week, so that was really great.
A: Sounds good. The next topic is Freenode. I won't explain the current situation between Freenode and Libera and all the other IRC networks. On the Jenkins project side, we started to migrate from Freenode to Libera, so we created the equivalent channels and so on. I think from the Jenkins infrastructure point of view, we are almost ready to stop using Freenode.
A: The last element that needed to be recreated was the Puppet notification. I looked at the Puppet server just before the meeting, restarted it, and now notifications are sent to Libera. So I think we can all leave the Jenkins infra channel on Freenode.
A: So I think we are officially ready, because even Element now provides a bridge to Libera, so everybody can move. At least, I'm ready to disconnect from Freenode; that was the last piece I was waiting for. Any questions before we move to the Discourse topic? Okay, the next topic is about Discourse.
A: If you are familiar with Discourse: we are really looking for feedback. We don't want to communicate too broadly right now, because we are still experimenting, first with the authentication mechanism, how people can authenticate on the service, and second with the way we organize categories. So regarding authentication, we started a discussion on Discourse, which you can now join, and we are wondering... where is that...
A: The main question we are asking ourselves is: do we rely on Discourse to handle the accounts, so that, for instance, when someone creates an account, their username, email address and so on are all stored in Discourse? Or do we rely on a third-party SSO, like the one provided by the Linux Foundation? If we rely just on Discourse, we can enable, from Discourse, integration with GitHub, Google, Twitter, LinkedIn and Facebook, and so the question becomes: if we decide to just rely on Discourse, which social networks do we want to integrate with?
A: I don't want to give my own opinion in this meeting, but what I do want to share is: if you have some insight and want to participate in the discussion, feel free to join us on community.jenkins.io and provide some arguments about which ones we should choose and which ones we shouldn't select. So that's one main area. The second area is the categories and the way we organize them; at the moment the organization is a draft.
A: If you look at the categories, we have 'Using Jenkins' with subcategories, discussions around the Jenkins community with subcategories as well, the different ways to contribute, and providing feedback on the site itself. At the moment most of the discussion is in the site feedback category, because it's the beginning and we are all trying to understand how it works. So if you want to provide some feedback on the categories, feel free to join us and discuss. And if you don't want to participate through Discourse, we still have...
A: No? It sounds like we can continue. The next topic is about Azure and the Azure cost. I looked at the previous invoice and we are still above 10k. I spent a little time trying to understand where the cost comes from, and we definitely have 6,000 spent on ci.jenkins.io, which is quite a lot, mainly on virtual machines and container instances.
A: That's where most of the cost is going; the other areas did not decrease. I was also surprised to see that the cost of the Kubernetes cluster that we have running on Azure did not decrease, which is something I thought would happen, because we stopped running mirrors on that cluster and I was therefore expecting a smaller network bandwidth cost.
A: But apparently that's not the case, so I guess there are other services generating traffic on that cluster. I should spend more time understanding how we can save some money on that account, but it seems to me that the biggest cost is definitely coming from ci.jenkins.io, so we should make better use of that service.
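As a hedged pointer for anyone who wants to dig into that kind of cost breakdown, the Azure CLI can list raw consumption records for a given period; a minimal sketch (the date range is an example, not taken from the meeting):

    # list raw consumption records for May 2021 (example range)
    az consumption usage list --start-date 2021-05-01 --end-date 2021-05-31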
B: And is it that we need to set upper bounds on the Azure Container Instances? I mean, the...
A: Something that I think we should prioritize and work on: Damien configured an AKS cluster on ci.jenkins.io with a fixed number of nodes, so maybe we would be able to better control the cost by relying on that Kubernetes cluster for our containers, as a replacement for the Azure Container Instances.
C: The thing is, we cannot decrease the amount of workload on that instance. For sure we have a certain number of plugin builds, and the more plugins, core versions and releases we have, the more builds we will have. The question is more about how we can ensure that we see an effect on the cost, because right now we are still blind about ci.jenkins.io: we don't have a dashboard that can help us. I think the information is present in the system, given the amount of metrics and dashboards.
C: There are different levers here, so I'm not really sure which one to pull, and being able to measure, to have a clear measurement to see, if we apply something, whether it changes anything: that's the part where I'm still not sure where we stand. We shared that concern with the people from Elastic during their demonstration last week, so maybe that could be an interesting avenue in terms of observability, because they have the data at the Jenkins level. The thing is, infrastructure-level metrics alone are hardly going to help us on that topic.
A: And as Damien mentioned, it's quite difficult to identify how to reduce the cost, because it takes several days, and sometimes several weeks, to see the changes in the Azure portal; that's just how it works.
A: We also had the outage with ACI, the Azure Container Instances. Mark, maybe you want to provide more insight regarding that outage?
B: No, I just had to roll back several plugins, and Tim Jacomb has stated that he'll take a look at it. He wasn't able to duplicate it, apparently, so it needs more investigation into what's at the root of it. My solution was to roll back five plugins, and that rollback was successful.
A: Thank you. The next topic can be dropped, because we briefly talked about it during the Azure cost discussion: we haven't worked on it with Elastic yet, and we still have to organize our meeting with them. It's still on the agenda and there is still a plan, but we'll have to work on that.
A: The last topic that I briefly want to cover is archives.jenkins.io, which is now available over an rsync connection. This was a prerequisite for our mirror infrastructure. To go back to what it is about: archives.jenkins.io is a mirror that contains every artifact generated by the Jenkins project since the beginning, so that includes some very old artifacts, and until now we couldn't use it from get.jenkins.io.
A: That is because, in order to add it as a mirror, mirrorbits needs either an FTP connection or an rsync connection to collect file metadata, like when a file was changed and so on. So we had to temporarily remove archives.jenkins.io from the mirror infrastructure, and now the archive is available over rsync.
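As a hedged illustration of what mirrorbits gets from this (the rsync module name 'archives' is an assumption, not confirmed in the meeting), listing the archive's file metadata over rsync could look like:

    # list files with timestamps and sizes from the archive mirror
    # (module name 'archives' is hypothetical)
    rsync --list-only rsync://archives.jenkins.io/archives/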
A: So we have a list of connections that we allow; basically, the allowed connections are the ones coming from pkg.origin.jenkins.io and get.jenkins.io, and that's it. In the future, if a third-party mirror wants to mirror every file, it will also be able to get the files from that specific mirror, and that would be helpful in order to increase the number of mirrors.
A: What does that mean? For instance, if you now look at the output of get.jenkins.io, here: this is something that started working just before the meeting, so it's not totally ready yet, but you can see that the mirror is now listed, and it's considered down. I have to investigate why, but the idea is that this mirror should have a very low priority and only be used when no other mirror is available.
A: An example: go to get.jenkins.io and say you want to download a Debian package, but a very old one. I mean, I'm pretty sure it's unlikely that you want to download this specific version, but copy the link here: for instance, for the hudson_1.300 version, at the moment only archives.jenkins.io has it, and that's fine.
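For anyone who wants to reproduce this check, mirrorbits, the software behind get.jenkins.io, can report which mirrors serve a given file via a mirrorlist query appended to the file URL; a sketch with a hypothetical file path:

    # show which mirrors currently serve this (hypothetical) file
    curl 'https://get.jenkins.io/debian/hudson_1.300_all.deb?mirrorlist'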
A: The reason why it's needed is not to download Hudson, but because the mirror maintained by the OSUOSL network can only contain 100 gigabytes of data, which is around one year of files. That means we delete files older than roughly a year, and third-party mirrors have different approaches to fetching data from the OSUOSL network: either they keep a strict copy of the OSUOSL content, or they just keep downloading files without ever deleting them.
A: So sometimes you have files that are, let's say, two years old but only available from specific mirrors, because those mirrors don't have a policy of deleting old files, whereas other mirrors only let you download the newest versions. That's why it was important to have archives.jenkins.io available for the older files. So again, it's not about the extreme cases, but about the files in between. Any questions?
A: So, it won't be able to handle the load? Definitely not by itself: the traffic that goes to get.jenkins.io is on the order of terabytes per day, and one single machine cannot handle that load.
A: The thing is, archives is a fallback for the older versions of the plugins; the new plugins only come from the new mirrors. I'm not sure if I can visualize that configuration for you now... you won't be able to visualize it, but the current fallback for the mirrors is to rely on the OSUOSL network. Actually, I guess I know how to visualize this.
A: Yes, yeah. But what I mean by old files is files older than one year, and even more. The idea is really: if there are no files available from the mirrors, then ask archives, and if we realize that we are putting too much pressure on archives, then we'll deploy an additional machine.