From YouTube: 2021 03 02 Jenkins Infra Meeting
A: Hi everybody. So let's start this new Jenkins infrastructure meeting. Considering the fact that we had the contributor summit last week, we have quite a lot of topics to talk about today. The first one is that we made progress with Algolia: the plan is to use Algolia to improve search results, both on the plugin site and on the documentation at www.jenkins.io.
A: So first, on the plugin site, it's now enabled, so let me just show you this one. Now, when we go to the plugins.jenkins.io website, you can see the "search by Algolia" box, and basically what it does is return more results. So, for instance, if you type just "gi", it will show you the git plugins. It's more powerful than what we had before, and there is also more that we are looking to improve.
A: We also have some analytics right now in the Algolia console, sorry, and one of the topics that we want to analyze is what kind of information people are searching for, either because it's not correctly documented, or maybe there is a missing plugin, or whatever. So yeah, we have more analytics information now, and we have to see what we can do with it; the next step is to look into that.
B: On that, I was impressed that if you look at this one, already on this page we see one hit, notice, at number four, in the "searches without results".
B: Somebody searched for the words "git" and "plugin" and got no result, and so it's like, oops, that's a bad sign, because that should have had hits. So already we're seeing hints in this: oh, there are things we need to tune and improve. Yeah, and that's quite a surprise, right, oops. Why would adding the word "plugin" suddenly make it less useful?
A: Yeah, we still have to list, or at least to understand, all the analytics that we have by default, in case we have to turn some of them off, but right now we are really still at the beginning. I was in contact with Algolia employees and they were really happy to sponsor us. They are using Jenkins internally and they are really happy with it. So that was a really good conversation we had last week, and yeah, we should add Algolia to the sponsoring page as well.
A: The plan is to continue working on the documentation; the next step is www.jenkins.io. It appears that they have a specific offer for documentation, I think it's DocSearch, so it's slightly different, but yeah. First, that's on the roadmap. I'm not sure if Mark or rk or Gavin will look at it.
A: Any questions before we move to the next topic? No? Awesome. So the next topic is about PagerDuty. I added a few people to the PagerDuty system: I added Garrett, Damien, and Mark Waite. If you just look at it, right now we have four different time slots. Basically, layer one and layer two are more European time zones, while layers three and four are US time zones.
A: So basically it covers the morning and part of the afternoon, and then the afternoon and the evening. Basically, what I did for Europe, because that was quite easy, I did this.
A: Oops. So we already had Daniel and Arno in the loop, and I added Damien and Garrett. Basically, you'll be on call once a week, that's the default behavior, and then the next thing is, if you get notified and you don't respond, then I'll be notified at the end of the day.
A: Anyway, we have the same for the US time zone, but we don't have a lot of people there right now. For layer four it's Kohsuke, who does not respond anymore; Tyler, from time to time, part time, depending on the gravity, on how important the notification is; same for Andrew; and I just added Mark Waite to the loop. So for the US we try to catch at least the most important issues.
A: Let's say when the website is down. When we are on call, the idea is to work on the system during our day-to-day working hours. So typically, if something goes wrong in the middle of the night for me, I usually try to find someone in the US to deal with the issue. Otherwise, we just delay until we have the time to work on the issues, most of the time.
A: I think we can just quickly do it now, because it's quite simple. The only thing PagerDuty does is receive a notification from Datadog and then forward it to you. The way you get the notification depends on how you want to be notified: when you go to your profile, you can provide an email address, you can provide a phone number for SMS, or you can install the PagerDuty application on your phone.
A: I used to provide my phone number, but I stopped doing that because I found it quite intrusive. I receive an SMS when something goes wrong and, more importantly, I have the PagerDuty app installed on my phone. So if something goes wrong and I'm not available, I can just acknowledge that the issue is there, and I'll work on it when I have some time, let's say the day after or in two days, something like that. That's the setup I prefer, but yeah.
A: Basically, the next step is, when you get notified about something, we have a specific git repository named jenkins-infra/runbooks, where typically we document the kind of things that we do, depending on the situation. And so the process now is to first check whether the documentation there is still relevant, and if not, either open a GitHub issue so we can track that some documentation is missing, or write it, or whatever. The idea is that, ideally, someone on call should be able to solve the issue with the correct documentation.
A: The challenge that we have is that we have two kinds of infrastructure. We have virtual machines, with some people comfortable with those machines and some people having access to them, and then we also have the whole Kubernetes environment, with different people who are comfortable with that environment as well.
A: One last mention on PagerDuty, which is also the next point: over the weekend we got a lot of notifications on PagerDuty. Basically, what I did is I enabled more monitoring on Friday afternoon, which is not something that I should have done, and it just spammed people over the weekend.
A: I was only on call on Sunday, so I only discovered it on Sunday, and then I logged in to #jenkins-infra and discovered that Daniel was complaining about the amount of notifications from PagerDuty. But yeah, that was my fault, which I will explain in the next topic. Any question before I continue? So, from the Datadog point of view: basically, what I did is something that I had wanted for a while, which was to monitor the mirrors.
A: I wanted to be able to detect when an issue happens on a mirror, whatever the issue might be. Typically, the reason for that is when we have people complaining because they cannot install a plugin, even if the Jenkins services are running fine; sometimes the root cause is just a mirror, to be honest. So I just enabled basic HTTP monitoring for those services, and it turned out that those services were not reliable over the weekend; some were really slow.
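For context, a minimal sketch of this kind of check using the Datadog Agent's `http_check` integration; the mirror URL and check name below are placeholders, not the actual jenkins-infra configuration:

```yaml
# conf.d/http_check.d/conf.yaml on a Datadog Agent host
init_config:

instances:
  # One instance per mirror endpoint to watch (hypothetical URL).
  - name: mirror-example
    url: https://mirror.example.org/jenkins/
    timeout: 5                          # seconds before reporting a failure
    check_certificate_expiration: true
    days_warning: 14                    # warn two weeks before TLS expiry
```

A Datadog monitor on the resulting service check can then page the on-call person through the PagerDuty integration, typically by mentioning a `@pagerduty-<service>` handle in the monitor's notification message.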
A: And the challenge that we had here was: we received a notification, I acknowledged the notification, then the problem was gone because the mirror went back to its normal state, and then 15 minutes after that the issue reappeared. So it was a different alert, a different notification, and so yeah, we just got spammed over the weekend. So I disabled PagerDuty alerting for those mirrors, because the final goal is not to be, I mean, we cannot do anything about those mirrors.
A: So that's basically the rule. But yeah, for Garrett and Damien: if it becomes an issue for you to stay on PagerDuty, because ideally it should not be a problem, then instead of just ignoring alerts, just tell me, or let's first work on reducing the number of alerts instead of just ignoring the issues.
A: Next topic, Damien. So Joseph proposed a PR to switch to Traefik for the ingress. Traefik is a different web server with more features compared to the nginx we have right now, and this is something that I had wanted to test in production for a while. Joseph started working on it, and Damien finalized the PR. So we now have both ingresses in place: one for the private network, one for the public network.
A: No, sorry: once we decide to switch from nginx to Traefik, nobody will notice, because we have many stateless applications on the Kubernetes cluster at the moment.
C: So the idea is that we can do A/B testing, or an A/B deployment, here, so the risk is quite low, in the sense that if we see something go wrong when migrating a given service, we can always roll it back to nginx.
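Assuming both controllers watch standard Kubernetes `Ingress` resources, this per-service A/B migration can be sketched as nothing more than flipping the `ingressClassName` field; the service name and host here are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app              # hypothetical service
spec:
  ingressClassName: traefik      # roll back by setting this to "nginx"
  rules:
    - host: example.jenkins.io   # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 8080
```

Rolling back is the reverse one-line change, which is what keeps the migration risk low.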
C
Nginx
won't
go
away
until
we
have
moved
everything
and
we
are.
We
are
sure
that
traffic
fulfill
all
the
needs,
so
starting
with
a
private,
is
a
good
exercise,
because
if
you
break
things
it
will
break
only
for
us,
yeah
and
yeah
josef
did
a
really
good
work
on
that
part.
He
did
all
the
heavy
lifting.
C
In
term
of
feature,
the
the
let's
say,
the
feature
we're
doing
with
ingenix
loan
are
almost
the
same,
but
with
nginx
we
had
to
add
more
components.
One
of
the
things
I
have
in
mind
is,
for
instance,
the
certificate
management
with
cert
manager
for
the
ingress
specific
part,
which
is
less
heavy.
It's
less
code
to
maintain
so
the
feature
set
is
the
same.
There
is
no
new
feature,
it
will
just
ease
to
have
less
code
and
less
configuration
to
manage
for
the
same
feature
set.
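For illustration, Traefik's built-in ACME support, which can replace a separate cert-manager deployment for the ingress-specific certificates, lives in its static configuration; the resolver name, contact email, and storage path below are placeholders:

```yaml
# Traefik v2 static configuration (e.g. traefik.yml)
certificatesResolvers:
  letsencrypt:                   # arbitrary resolver name
    acme:
      email: infra@example.org   # placeholder contact address
      storage: /data/acme.json   # where issued certificates are stored
      tlsChallenge: {}           # use the TLS-ALPN-01 challenge
```

A router then opts in to automatic certificates with `tls.certResolver: letsencrypt` in its dynamic configuration.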
A
Okay,
and
and
and
also
a
traffic,
provide
feature
that
we
needed
in
the
past
that
we
don't
need
for
now,
but
we
may
need
that
again
in
the
future,
like
oh
proxy
and
stuff
like
that,
so
it's
not
like.
We
need
it
right
now.
It's
just
like
in
terms
of
feature.
It
provides
more
feature,
and
so
I
prefer
to
have
it
enough
in
advance
when,
when
we
have
the
time
to
work
on
that,
then
when
we
really
need
something
in
particular,
because
yeah.
C
Yeah
another
point
here
is
that
traffic
support
dynamic
configuration
not
only
from
kubernetes,
but
we
could
add
different
backend
provider
on
the
same
instance,
so
the
ingress
will
do
an
ingress
but
multisystem.
So
you
could
imagine
having
different
docker
instances
like
you
have
a
virtual
machine
with
docker,
but
which
is
on
a
private
network.
It
will
be
easy
to
auto
configure
this,
so
you
can
still
do
this
with
kubernetes,
but
you
have
to
manage
the
configuration
by
yourself.
A
Next
topic
that
I
that
I
put
here
was
damian
is
fine-tuning
the
jenkins
master,
so
basically
infrared
ci
and
soon
release
that
ci.
So
we
are
affected
by
a
weird
issue
with
the
ldap
connection,
so
basically
from
time
to
time,
ldap
connection
times
out
time
out,
damien
has
been
looking
at
that
he
did
not
find
anything
interesting.
So
what
he
did.
He
installed
the
advisory
plugin
to
collect
some
information
to
improve
the
to
fine-tune
the
jenkins
instance.
A
Right
now,
yeah
right
now,
this
this
was
on
on
the
memory
settings.
I
like
made
a
good
suggestion
that
maybe
four
gigabyte
was
not
enough
for
infra,
I
think
for
now.
It
is
because
at
least
we
are
using
graph
and
now
to
monitor
some
integration
matrix
and
nothing
tell
us
that
we
don't
have
enough
memory
or
cpu
at
the
moment.
This
is
something
that
may
change
in
the
future,
but
at
least
for
the
moment,
yeah,
that's
fine,
but
the
idea
is
to
have
one
change
at
a
time
regarding
the
adapt.
C
Yes,
the
tuning
that
have
been
applied
are
only
keeping
the
same
resource
usage
as
defined
today.
It's
just
optimization
based
on
the
current
state
in
order
to
change
the
state,
ej,
adding
or
removing
cpus
or
memory
so
that
you
can
instance
fine
analyzing
the
ldap
issues
that
are
tightly
related
to
the
garbage
collection
inside
the
gvm.
The
goal
is
to
have
precise,
metrics
and
all
the
information
source
on
that
says
that
we
need
fine,
grained
information
and
matrix,
and
so
we
will
have
to
add
them
most
of
the
time.
C
It
comes
from
the
cloud
based
knowledge
or
jenkins,
6
knowledge,
but
that
will
be
the
next
steps,
because
almost
everyone
that
I
ask
for
help
told
me
you
need
to
measure
which
makes
sense.
So
the
goal
is
to
okay,
let's
optimize
what
we
have
right
now
and
see
if
we
have
a
change
and
then
before
going
in
dichotomy
analysis,
we
could
just
measure
precisely
what
we
need
and
want
to
check,
and
then
we
will
iterate
based
on
that.
C
Thus,
the
the
information
which
is
really
important
here
is
that,
given
the
split
state
of
jenkins
between
gdk8
and
11,
the
m
chart
we
provide
is
try
to
be
agnostic.
But
most
of
the
gvm
option
that
we
are
using
should
be
a
good
default
and
same
default
for
any
jenkins
instance.
Most
of
the
time-
and
this
will
be
a
topic
in
the
future
as
a
feature
for
the
helm
chart
by
providing
the
set
of
gvm
flags
that
are
known
to
be
a
good
rule
of
thumb.
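As a sketch of what such a rule-of-thumb set could look like in a chart values file; the `controller.javaOpts` key and the specific flags chosen here are assumptions for illustration (commonly cited Jenkins JVM tuning flags), not what the chart actually ships:

```yaml
# values.yaml fragment for a Jenkins helm deployment (assumed key names)
controller:
  resources:
    limits:
      memory: "4Gi"     # matches the 4 GB discussed above
  javaOpts: >-
    -XX:+UseG1GC
    -XX:+ExplicitGCInvokesConcurrent
    -XX:+ParallelRefProcEnabled
    -XX:+AlwaysPreTouch
```

Baking a vetted set like this into the chart would give every instance the same, known-good GC behavior by default.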
A
Jenkins
with,
I
think
something
basically
that
should
go
back
to
the
hem
chart
is
to
by
default,
to
use
the
gtk11
version
because
on
the
hem
chart
with
by
default
running
containers,
so
it
makes
sense
to
use
java
11
instead
of
by
default,
deploying
java
8.
But
this
is
a
some
topic
specific
to
the
upstream
charts,
but
I
think
it
would
make
sense
to
by
default,
use
the
java
11
on
the
default
time
chart
and
have
specific
default
parameters.
D: Something to keep in mind: there is a recent report about a memory leak on Java 11. So if you observe the same on your instances, or on other instances, it's important, because it might be a real issue. It's related to the Pipeline CPS plugin.

A: Okay, good to know.
A
Sorry,
sorry
about
that,
I
have
quite
a
lot
of
activity
inside
and
sorry,
so
the
last
topic
is
about
the
jenkins
inbound
agent.
So
several
weeks
ago
we
got
issues
where
so.
Basically,
we
are
using
specific
inbound
agent
on
ci
the
jenkins
that
I
o,
which
by
default,
are
using
the
default
user
by
the
upstream
image
so
most
of
the
time
roots,
and
so
damian,
damien
and
kara
work
on
those
images
to
by
default
usage
jenkins
agent.
So
the
idea
is
to
not
be
using
the
root
user.
By
default.
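On the Kubernetes side, the same intent can also be enforced from the agent pod definition; a hypothetical sketch, where the image name is a placeholder and 1000 is assumed to be the UID of the `jenkins` user in the image:

```yaml
# Hypothetical Jenkins agent pod template enforcing a non-root user
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: jnlp
      image: example/inbound-agent:latest   # placeholder image
      securityContext:
        runAsUser: 1000      # the "jenkins" user inside the image
        runAsNonRoot: true   # refuse to start if the image resolves to root
```

With `runAsNonRoot: true`, a regression back to a root-based image fails fast at scheduling time instead of silently running builds as root.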
A
There
was
some
discussions
about.
Should
we
maintain
those
images
on
jenkins,
ci
or
jenkins?
You're
from
the
first
step
is
because
the
needs
is
for
the
jenkins
and
fra
project.
We
first
built
and
published
those
images
on
the
jenkins
sync
for
organization
under
hub.
We
did
not
have
the
time
to
update
ciao
chances.
A
Are
you
yet
because
of
the
contributor
submit
last
week,
and
but
at
least
those
images
are
now
published
on
the
cohub,
so
yeah
in
the
coming
days,
we'll
have
to
update
the
hydrogen
kits
that
I
o
this
configuration
is
done
manually.
A
For
now,
we
still
have
to
work
on
the
other
jenkins,
but
yeah,
but
I
put
I
mean
I
put
some
links
to
the
docker
hub
and
the
kitchen
repository.
The
next
step
is
to
see
how
we
can
publish
tags
for
changing
ci
as
well,
so
the
community
can
use
that,
but
what
we
want.
What
we
fear
here
is
that
we
just
create
a
new
set
of
images
that
the
community
expect
us
to
maintain
the
problem.
A: The thing is, we are quite a lot of versions behind; I think we are using a one-year-old version. A lot of things have happened; the one that I'm personally most excited about is the following.
A
In
the
past.
You
had
to
specify
tmi
that
you
wanted
to
use
now
you
can
specify
a
pattern.
So
let's
say
you
want
to
always
have
the
same
image
based
on
that
specific
tag,
which
means
that
you
don't
have
to
build.
You
don't
have
to
modify
the
jenkins
configuration
each
time
that
a
new
mi
is
available.
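A hypothetical Configuration as Code sketch of that idea; the exact field names for the filter-based lookup (`amiFilters` and friends) are assumptions here, not verified against the plugin's schema:

```yaml
# Assumed JCasC fragment for the EC2 plugin (illustrative only)
jenkins:
  clouds:
    - amazonEC2:
        name: "aws"
        region: "us-east-1"
        useInstanceProfileForCredentials: true
        templates:
          - description: "linux agent"
            type: "t3.large"
            # Instead of a fixed "ami: ami-..." id, match a pattern so the
            # newest image with this name is picked up automatically:
            amiFilters:
              - name: "name"
                values: "jenkins-agent-*"
```

The point is the last stanza: the cloud resolves the newest matching AMI at provisioning time, so publishing a new image requires no Jenkins configuration change.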
A
It's
just
a
lot
easier.
This
is
the
way
it
works
with
the
azure
plugin
and
I
find
it
more
convenient
to
use.
But
this
thing
was
introduced
several
months
ago.
So
when
you
have
some
time,
let's
just
pump
the
version
and
see
see
what
happens
thanks.
C: They mostly do bare metal, but they also have a managed Kubernetes service, which is CNCF compliant, and object storage. So they are open to starting the discussion; it could be interesting to check.
C: I was thinking initially, and mostly, of the managed Kubernetes service, because it could add additional capacity for ci.jenkins.io in the future, and, being outside the existing cluster, it could be a great way to have a fallback, because we can add multiple Kubernetes clouds and reuse the same pod templates.
C: They are heavy users of open source Jenkins, so I'm trying to get them to give a testimonial, at least on "Jenkins Is The Way", and it looks like they are interested. They could provide their EC2-like service with a few credits, and the EC2 plugin is also able to handle their own API, which they use internally, so that could also be interesting. I've started the discussion on both. I don't know the amounts, but yeah, diversifying sponsorship is always good.