From YouTube: 2021 02 09 Jenkins Infra Meeting
A: So, we are ready. Hi everybody, welcome to this new Jenkins infrastructure meeting, the first one after FOSDEM. The good thing is nothing major crashed during FOSDEM. I'm not sure who put the FOSDEM results on the agenda, or whether it was meant for this meeting, but let's briefly talk about it. It went great. I followed the Jenkins stand and the CI/CD dev room. I think there are things that could have been improved, because we couldn't stream demos, so people had to join the Jitsi call, but at least for the CI/CD dev room it went really well. I saw some numbers posted on the web mentioning 33,000 attendees over the weekend, with a peak of 8,000 on the first day. That's a pretty impressive number, especially considering it was the first time that system was used. So yeah, that was a great weekend.
Regarding the Jenkins stand, I have the feeling that we had fewer attendees compared to previous years. I think it's probably because people had to join the call. Usually at FOSDEM we have people who just show up at the booth to see the demos but don't really want to engage, and in this format we could not see those people, so we only saw the people who actually started a discussion.
So it's hard to say, from my point of view, whether there were more or fewer attendees compared to the previous year, but at least nothing major broke during FOSDEM from an infrastructure standpoint. I mean, except the VPN, but that was not critical for the Jenkins community, so we could still do demos and such. That was really great. Any questions before I continue? No? Sounds good. So let's briefly talk about the VPN outage.
Last Wednesday we discovered that the VPN would not work anymore. The incident started after we updated the OpenLDAP Docker image, which was really weird, so we tried to replicate the issue locally. We identified that the problem was related to TLS: for some reason we could establish a connection from the ldap client on the VPN machine to the LDAP server, but we could not use the LDAP plugin used by OpenVPN to establish the connection from the VPN to OpenLDAP. We investigated with Garrett and Damien, and what we discovered was in the LDAP configuration.
The LDAP plugin from OpenVPN was configured to use an old LDAP CA. Previously, in OpenLDAP, we were using a specific CA (I'm not sure which one), then back in June we switched to Let's Encrypt, and for some reason it kept working until now, and then stopped working. So our guess is that a default configuration in OpenLDAP refuses connections coming with the wrong CA. Basically, we fixed that this morning.
So now everything is back on track, which is fine. In terms of incidents, it was annoying but not a major issue, because we had workarounds and we did not really rely on the infrastructure running inside the VPN, so it was okay to just delay that work for a few days.
We also identified improvements for the OpenVPN setup, so we can now easily reproduce the environment locally using a Docker Compose file. Damien opened a pull request, which is already merged, containing fixes so we can use a self-signed certificate locally and easily replicate the production environment.
Any questions before I move to the next topic? The next thing I want to briefly mention: I sent an email on the mailing list this afternoon. I've been monitoring the status of Serverion, the mirror server. The IP was stable and the DNS record was stable, so I put that mirror back into our infrastructure.
I had to remove the configuration and put it back again because, for some reason, mirrorbits would not let me update the location: it would not detect the new location of the public IP. So I suspect it only configures the location the first time we add a mirror; removing and re-adding the mirror made it discover the correct location. It should be back soon, and if anything goes wrong we'll have to investigate. We still have to monitor that mirror. I haven't had the time to open a PR for that; it should be a pretty quick fix.
B: [untranscribed]

A: No, if you look at my screen, I'm listing the mirrors, and it's correctly detected in the Netherlands. For the moment it detects the mirror as down, but it should come back.
Basically, the way mirrorbits detects whether a node is up or down is that it takes a random file and tests whether that file is present on the mirror. The problem we have right now is that almost every mirror only contains part of the files, so if it happens to test a very old one, the mirror is detected as down, which is not correct. So sometimes it can take some time to add those mirrors to the pool.
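The up/down probe just described can be sketched in a few lines of Python. This is an illustrative simulation of the behavior, not mirrorbits' actual code, and the file names are made up:

```python
import random

def probe_mirror(mirror_files, source_files, rng=None):
    """Pick a random file from the source tree and report whether the
    mirror serves it -- a healthy-but-partial mirror can fail the probe,
    which is the problem described above."""
    rng = rng or random.Random()
    candidate = rng.choice(sorted(source_files))
    return candidate in mirror_files

# The source tree keeps old and new artifacts; the mirror only syncs recent ones.
source = {"jenkins-1.0.war", "jenkins-2.263.war", "jenkins-2.277.war"}
partial_mirror = {"jenkins-2.263.war", "jenkins-2.277.war"}

# False whenever the randomly picked file happens to be the old one.
print(probe_mirror(partial_mirror, source))
```

A full mirror always passes the probe and an empty one always fails, but a partial mirror's result depends on the random pick, which is why such mirrors are sometimes reported as down.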
An improvement would be to configure archives.jenkins.io as the source for the mirrors, so every mirror would download every file. It would simplify mirrorbits management, but from an end-user point of view it would change nothing, because nobody really cares about downloading ancient artifacts and binaries anyway. The archive layout contains every artifact generated under the Hudson and Jenkins projects.
Well, yeah, people are just relying on the recent files, so this would just be a quick win. I mean, it's not mandatory, but it would be a way to simplify the management of mirrorbits.
Unless you have any questions, I'll move to the next topic, which is Jenkins updates. Last week, because of FOSDEM, because I had issues with updatecli, and because we had issues with the VPN, we accumulated quite a lot of pending PRs to update the various Jenkins instances in our infrastructure. I've been waiting before merging those PRs, because when we merge PRs that define which Docker image we use, it automatically restarts the Jenkins instances. I didn't want to do it today, because we had the weekly release today, so both release.ci and trusted.ci were affected.
I think we have a stable release coming in the next few days, tomorrow, right? Yeah, so it's definitely not the right time to play with Jenkins updates. We'll probably wait until Thursday before merging the PRs related to Jenkins.
B: [untranscribed]

A: To be honest, I wouldn't bother with that specific version, so I would just wait for Thursday. Okay. While we are talking about Jenkins, I would also like to update the EC2 plugin on ci.jenkins.io.
There is one feature I discovered that, from my point of view, had been missing for a while. With the EC2 plugin we had to specify a specific AMI, which means that each time we built a new AMI we had to manually update the Jenkins configuration, which was cumbersome. With the more recent version of the EC2 plugin we can now filter and fetch the latest version. So I would like to use that, which means we would always be using the latest AMI.
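The selection the plugin effectively performs can be illustrated as: among the images matching a filter, take the one with the newest CreationDate. The records below mimic the shape of entries in EC2's DescribeImages response, but the IDs, names and dates are invented:

```python
# Invented records shaped like entries of EC2's DescribeImages response.
images = [
    {"ImageId": "ami-0aaa", "Name": "jenkins-agent-20210105", "CreationDate": "2021-01-05T10:00:00.000Z"},
    {"ImageId": "ami-0bbb", "Name": "jenkins-agent-20210201", "CreationDate": "2021-02-01T09:30:00.000Z"},
    {"ImageId": "ami-0ccc", "Name": "jenkins-agent-20201120", "CreationDate": "2020-11-20T16:45:00.000Z"},
]

def latest_ami(matching_images):
    """ISO-8601 timestamps sort lexicographically, so a plain string
    comparison picks the most recently created image."""
    return max(matching_images, key=lambda image: image["CreationDate"])

print(latest_ami(images)["ImageId"])  # → ami-0bbb
```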
But yeah, we'll probably wait until Thursday, to be sure that everything is fine before taking those services down.
B: [untranscribed]

A: Great. The next topic is about PagerDuty. Last week I did a demo for Garrett, Damien and Kara about how to use PagerDuty.
I realized at that time that I did not have permission to invite people. Basically, something we would like to do now is start using PagerDuty again: we'd like to have more people in the loop, and also fix either the incidents or the monitoring. Right now we have a few people receiving the monitoring alerts, but they don't necessarily answer, so we're just ignoring alerts. The idea would be to start using it properly again.
I sent a bunch of invites today, in case other people are interested in participating in the on-call rotation. Basically, what we try to do is only be on call during our working hours, because we have enough people in different time zones to look at issues around the clock. So yeah, that's the idea.
The priority right now, when we get a notification for an issue, is to identify whether that issue is relevant. If it's not relevant, we update the check accordingly so we don't get notified again. If it's relevant and we don't have documentation for it, we try to update the documentation so that other people can deal with that issue. So the idea is to have more checks, and to document the way we handle PagerDuty issues. We already have a Git repository with documentation, so this would be a perfect moment to update it.
C: In that context, the technical work has been done by Kara, and she took the opportunity to add a test harness which is common to all the images, because all the images from the jnlp-agents repository share the same expectations in terms of behavior: the same entrypoint, the same default user. Most of these images are built inheriting everything from the upstream tool images, like Golang, Maven or PowerShell, and some files or elements are copied or duplicated from the Jenkins inbound-agent images.
So, the agent JAR file, a script... the resulting image might differ from the inbound agent in terms of behavior or content. This is why Kara wrote this test harness: to be sure that any deviation from the general behavior we expect from all these images is detected really early in the process.
So with this we're able to deliver. However, there is still one blocking concern: the naming. After some discussions, we made a proposal about moving the jnlp-agent images that were in the jenkinsci namespace on Docker Hub.
However, the discussion on the mailing list raised the issue that there are some people from the community consuming these images, so it looks like there is still an expectation of keeping the jenkinsci namespace. So no decision has been taken; I think the discussion was delayed due to the VPN outage and FOSDEM. However, we need to take a decision, so the proposal would be the following.
First, we deliver that change for the Jenkins infrastructure: we rename the images, including the move to the jenkinsciinfra namespace, and we announce the deprecation of the old image names. Then the idea is that we can totally reintroduce, next month or in the following months, a new jenkins inbound-agent-something image.
If, and only if, an image has been documented and there is someone from the community willing to help maintain it. That's the proposal. So if no one comes forward to maintain these images, then the images won't be provided; they will just disappear, because no one is using them. The goal is to ensure some level of quality, because providing images that are updated once a year is a disservice to everyone: to us as maintainers, but also to the community, because they will keep using these images assuming they will work.
Well, they won't, and I don't even mention the security issues, because if we ran a security scan on these images, the result would be really, really bad. Some things that were raised in that discussion: first, do we really want to provide these images at all, given that we already have several dimensions to maintain (JDK version, operating system, inbound versus outbound agents), and this adds yet another dimension? We want Maven, but do we want to maintain both Maven 3 and the incoming Maven 4? Do we want to add two versions of Ruby?
C: That's really important. That's also the reason why I propose that we move them to jenkinsciinfra: because we know why we use them. And then, if the community asks questions, that means there are people using them; they have interesting use cases, and it's important to capture those use cases to build the correct artifacts for them.
Maybe they will want these images back, but we need to be sure not to waste our time there, because it's a complex topic, in particular around maintenance and updates. Some improvements that we see upcoming, which might not be priorities but came out of this discussion and the work Kara did: adding a specific test harness per image, because the Golang image has some differences from the Maven image. So we need to improve the process so that it can build and test images in parallel efficiently, with first a common test harness and then a specific test harness per image.
Second improvement: thinking about all the images we produce on the Jenkins CI side, all the agents and even the controller, and maybe using the Container Structure Test harness there. The goal would be to test half of the features, or expected behaviors, with Container Structure Test, which is really fast to run, and delegate only the complex acceptance tests to the already existing test harness built on bats.
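As a rough picture of the kind of shared expectations such a common harness verifies. The real harnesses are bats scripts and Container Structure Test YAML; this Python sketch, and the concrete entrypoint and user values in it, are illustrative only:

```python
# Shared expectations every agent image should satisfy; the concrete
# values here are illustrative, not the project's real ones.
COMMON_EXPECTATIONS = {
    "Entrypoint": ["/usr/local/bin/jenkins-agent"],
    "User": "jenkins",
}

def violations(image_config):
    """Return the common expectations an image's config does not meet."""
    return [key for key, want in COMMON_EXPECTATIONS.items()
            if image_config.get(key) != want]

maven_image = {"Entrypoint": ["/usr/local/bin/jenkins-agent"], "User": "jenkins"}
golang_image = {"Entrypoint": ["/usr/local/bin/jenkins-agent"], "User": "root"}

print(violations(maven_image))   # → []
print(violations(golang_image))  # → ['User']
```

Running the same checks against every image's metadata is what makes a behavioral drift (a wrong default user, a changed entrypoint) surface early in the build process.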
Finally, we have two ideas that are completely exploratory, but I want to mention them. If we have to maintain all these dimensional builds, maybe the Dockerfile is not the correct tool for that job. Packer is able to build images, and we could use Packer to maintain that matrix in the future.
C: The other idea: I want a base operating system (Windows, Alpine, Debian, whatever), then I want JDK 8, 11, maybe 15 in the future, and then I also want to customize with supported tools from a list, let's say Terraform, Golang, whatever. You click on what you want, and local JavaScript in your web browser generates a Dockerfile. It's completely static, does not need a backend or a database; it's only built on the file system, and it produces the result.
The Dockerfile recipes would say: okay, we recommend you build your image with that template, with pinned versions and eventually checksums, and the "database" would be a JSON file served alongside the static HTML files. So we could totally host that as a static service, and that could be a great service to provide to the community, because the idea here would be to provide a recipe and not a cooked meal, because a cooked meal can go rotten if we keep it outside the fridge for months.
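The static generator idea boils down to assembling a Dockerfile from the selected matrix entries. Here is a minimal sketch in Python rather than browser JavaScript; the base image, package names and the `install-tool` helper are placeholders, not real recipes:

```python
def generate_dockerfile(base_image, jdk, tools):
    """Assemble a Dockerfile from the chosen matrix entries."""
    lines = [f"FROM {base_image}"]
    # Placeholder install steps; a real recipe would pin versions and checksums.
    lines.append(f"RUN apt-get update && apt-get install -y openjdk-{jdk}-jre-headless")
    for tool in tools:
        lines.append(f"RUN /usr/local/bin/install-tool {tool}")
    lines.append("USER jenkins")
    return "\n".join(lines)

print(generate_dockerfile("debian:buster", 11, ["golang", "terraform"]))
```

Because the output is a plain string computed from the selections, the whole thing can run client-side with no backend, exactly as described: the "recipe, not cooked meal" approach.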
A: But yeah, just to clarify: the alternate ways to maintain those images that we're considering are just suggestions, and if the community wants to participate in them, that would be nice. But from an infrastructure standpoint, we don't have any plan to spend too much time on them. They are more like ideas that we are exploring.
I don't think the Jenkins infra project should focus on building Jenkins agents for the community, because that's definitely a complex situation. I just want to explain why it's challenging to build these Jenkins images. The Jenkins project builds the inbound agent.
So we have a small Docker image containing the inbound agent that can establish a connection with the controller. But we also use images for Maven, Ruby and Python, and for those images we had a choice: either maintain our own images and manage ourselves the way we install Maven, Python and Ruby, or reuse upstream. That's basically what we did: for Ruby, we decided to use the upstream Ruby Docker image.
For Python, we use the upstream Python Docker image. But because we don't control the way those images are built, some use Alpine, some use Debian, some use CentOS. And because, in our case, we want those inbound agents to run with the jenkins user, it means that, depending on the operating system, we have different ways to create the user.
That's where the challenge comes from, and it also varies based on the version of Java. That's what Damien explained with the matrix of Dockerfiles. But again, we have to see how we can use it; it's definitely challenging. So that's why we had that discussion.
Should we build inbound agents with different tools for the community, or do we just build them for the Jenkins infra project? Obviously, building them for the Jenkins infra project means we build what we need; if we want to deprecate something, it's a decision on our side. But on the other hand, once we push and publish an image on the Jenkins Docker Hub organization, people assume that we are shipping and maintaining those images, which is not necessarily the case.
So if you are interested in maintaining those images, we would like to put the person in the CODEOWNERS file, to really identify who's responsible for building those images. Maybe just one image, if that's all you're interested in maintaining, and that's fine as well. But we definitely have to clean up and simplify those inbound agents; that's why we are working on this at the moment.
C: Just an aside regarding the Jenkins infra project: that problem will be solved in a different way as soon as we are able to switch to Kubernetes agents, because with the concept of a pod with multiple containers, you only need highly specialized Docker containers, and you don't need these combined images anymore. So as soon as every Jenkins infra job using these images is switched to Kubernetes-based agents, we won't have to maintain these images for our own use on our own infrastructure, and that should be a good indicator of our progress.
C: These can totally be provided through pod templates for the Kubernetes agents, so we can provide them via the pipeline library or pod templates, but the goal would be to get rid of these images. That could be a good indicator of simplification of the tooling we're using. Yep.
A: Yes, you did, that's awesome. We have a release coming tomorrow. And the last topic before we close this meeting is about Oracle Cloud. Have you created that account yet? So you created the account; do you need some help to deploy a mirror?
B: [untranscribed]

A: If you have some time on Friday, I would... okay, all right, great, sure. Because basically what I fear is that, once you start the account, the time we can use under the sponsoring program starts counting down, and we always have excuses to delay work. So let's pick a date and commit to it. All right, I'll schedule some time with you for Friday.
Awesome. Then thank you for participating in this infra meeting. I propose we stop here and continue the discussion on our IRC channel. Thanks for your time.