From YouTube: 2021 03 09 Jenkins Infra Meeting
A: Hi everybody, welcome to this Jenkins infrastructure meeting. We have quite a lot of different topics to cover today, so let's start with the first one. It's a small reminder related to the Docker Hub organization jenkins4eval. As a quick reminder, that Docker Hub organization was put in place so that people could quickly iterate on experiments or build Docker images for specific usages.
A: Since the beginning it was clear for everybody that this was an untrusted Docker Hub organization, because anybody can push images there, as long as they open the right pull request. I saw that several people started using those images. Again, if we need long-running Docker images, then we need to start a conversation to see whether we can host them on the jenkins or jenkinsciinfra organization, where the process to publish images is better, basically. So yeah, just a quick reminder.
A: The second topic I want to briefly talk about here is hosting a new set of Docker images. Someone used the CDF account to ask for it, which I think is fine: they requested that the Jenkins infra project start building a specific Windows Server Core image containing AdoptOpenJDK, targeting Windows version 1909.
A: While initially I was in favor, as long as that person was maintaining the Dockerfile, the process and so on, the reality is that it puts strong constraints on the infrastructure we are using. Basically, we would need to build a Packer image for that specific Windows version, maintain those Packer images, and then update the build process and so on. And that, considering that this version is quite close to the LTS one that we are already maintaining...
B: We need to write a policy or something to say that we will support the LTS version, because otherwise we will be doing this every six months with new Windows versions.
C: Well, I think it should be part of that Docker image Jenkins Enhancement Proposal that we discussed at the contributor summit: Docker images must have a code owner before we adopt or accept them, and we will start flagging them as up for adoption when the code owner goes inactive, so when they don't have a code owner. But I still have to write that.
A: The next topic is about ci.jenkins.io. Last Friday I made quite a lot of modifications there. First of all, I did some credentials cleanup: I removed credentials that were not reported as used. Obviously that does not mean those credentials were not in use, so maybe I removed credentials that were effectively being used.
A: I also took that opportunity to bump the EC2 plugin, so we are now running the latest version. What I noticed after the update was that the EC2 configuration was a huge mess: wrong credentials, EC2 clouds configured to deploy resources in Japan, and stuff like that. So it took me a while to reconfigure all the images.
A
We
had
a
discussion
with
damien
as
the
fact
that
we
definitely
need
the
gcas
configuration
for
the
ci
touching
installatio,
while
it
wouldn't
necessarily
make
sense
to
deploy
ci
on
communities
right
now,
it
would,
I
mean,
would
already
be
able.
I
would
already
be
nice
to
just
configure
that
instance.
Let's
say
from
puppets:
it's
not
a
huge
works,
because
we
already
have
everything
in
place.
We
have
the
perpet
agent.
A
We
have
the
process
there,
so
we
just
have
to
put
in
place
the
the
right
templating
and
it's
just
erp
templates,
so
maybe
I'll
start
working
with
daemon
on
that.
I
don't
think
we
need
a
lot
of
work.
Team
jacob
already
prepared
a
lot
of
things
last
year,
while
I'm
not
sure
that
we
will
be
able
to
read,
reuse
everything
because
the
gcas
plugin
evolved
since
then
yeah.
It's
already
a
good
start.
A: Still, while we were talking about the EC2 plugin, I also saw people complaining a little bit more about timeout issues. I have the feeling that people stopped complaining in the past, but I don't know whether the problem was still there and people simply stopped complaining, or whether the problem was gone and has reappeared.
A: I'm just wondering whether we could just redeploy ci.jenkins.io on Amazon and have the agents in the same region; maybe that would solve it, because we already have the process to deploy it. I mean, ci.jenkins.io was on Amazon previously, so we already have everything in place to switch back to that location. We can still use that, definitely. But this brings me to another topic: Damien has been working on deploying EKS on Amazon, so this means we could replace Azure Container Instances with Kubernetes pods.
A: Good point, because something I also wanted to test was whether different regions would give us better performance. I know that the controller is running on Azure in US East, and on the other side Amazon provides different regions in US East, so maybe we could use us-east-2. I mean, I'm not sure whether this would improve the network performance.
F: Something else to consider about these timeouts is that most of the time they seem related to the agent-to-controller connection. I know for a fact that WebSocket has solved part of the former JNLP-over-TCP troubles, but WebSocket is still a tricky protocol to deal with between the controller and the agent, and we should check that part as well. Also, do we have a load balancer between the EC2 agents and ci.jenkins.io? In that case we should check how this load balancer is acting at the TCP level.
A: We don't have a load balancer at this level, I mean, not one that we deploy ourselves.
F: Okay, so then we should start measuring the TCP connections, and the kernel setup of the VM hosting ci.jenkins.io as well, and/or maybe check the logs that we already have about timeouts, to be sure what kind of timeout it is. Can we measure it? Is it a timeout while starting the VM? Is it a timeout when the VM tries to connect back?
A: Or the other way around. The challenge that we have here is that those agents are dynamic and we don't have any monitoring there, so we don't collect any data right now, and also sometimes it works and sometimes it does not. So maybe one of the solutions would be to add Datadog agents there, so we could start collecting information.
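A minimal sketch of the kind of measurement discussed here, timing only the TCP handshake from an agent toward the controller. This is an illustration under assumptions, not an existing infra script; the `ci.jenkins.io:443` target in the comment is just an example endpoint.

```python
import socket
import time

def measure_tcp_connect(host: str, port: int, timeout: float = 5.0) -> float:
    """Time a single TCP handshake to host:port, in seconds.

    Raises OSError (including socket.timeout) on failure, which is
    itself a useful signal when chasing agent connection timeouts.
    """
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# Illustrative usage from an agent (hostname and port are assumptions):
# print(measure_tcp_connect("ci.jenkins.io", 443))
```

Run periodically from a dynamic agent's startup script, something like this would give a first data point (handshake latency, or a hard failure) even before a full monitoring agent is installed on those ephemeral machines.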
A: I just have to double-check about the credentials, because it means that if we do those experiments on ci.jenkins.io... yeah, compared with the other Jenkins instances, ci.jenkins.io is not a very trusted instance.
A: So the reason why I feel uncomfortable switching it directly to Kubernetes is that we already have some processes in place to manage that instance. For instance, Daniel, when you do a security release, you SSH on that machine, restart the pod and stuff like that. And so, considering all the other things we have to work on, I'm not sure that switching to Kubernetes would bring enough value to that instance right now.
A: Agents, so it depends on the labels, depending on what you need. If you just need to run something with Maven, then we just provision a Maven container and you run the workload there. But if, for some reason, you need to run a lot of tests and you need an EC2 machine, I mean a virtual machine, for specific use cases, then we need the EC2 plugin. So right now we need both, because it depends on what we test.
A: I'm pretty sure that the ATH tests need a full machine, but yeah, it depends. I mean, on that topic I don't have a strong opinion. I think we can provide both, and both have their uses.
F: Okay, do we understand correctly that potentially the ACI agents could be replaced by Kubernetes agents, which could be on EKS?
A: That's what I said, like five minutes ago: if the work on EKS is ready and we can start using that cluster for ci.jenkins.io, that would mean that every resource would be in Amazon, in the same AWS account. So if we have the EC2 agents in a region and the EKS cluster in the same account, we could easily move ci.jenkins.io to Amazon.
A
So
we
would
have
everything
in
the
same
place
and
so
obviously
the
the
timer
tissue
I
mean
the
the
response.
Time
will
be
smaller
and
so
the
reason
why
I
was
saying
that
is
because,
in
the
puppets
in
the
puppet
configuration
everything
was
running
on
amazon
in
the
past,
so
we
were
using
ec2.
We
were
using
the
name
but
yeah
the
equivalent
of
aci,
and
we
were
using
scs.
A
A: Everything was running on Amazon, and then we did the migration to Azure; that's why we started using Azure virtual machines and ACI for the containers, and the controller was put on Azure. And now we are doing the migration back. The good point that you raise here, Damien, is that we don't know what the future will be. So if we can run as much as we can on Kubernetes, this gives us the freedom to move between cloud vendors. But yeah, for now...
B: Are there any regional problems around having infra.ci and release.ci not being in the same area as...?
A: They are just independent. So right now we have multiple Jenkins instances; what they all have in common is that they fetch the code from the same GitHub organization and then push artifacts. But, for instance, we don't generate artifacts from ci.jenkins.io that are allowed to be pushed to a different location.
A
I
mean,
for
instance,
for
release
the
ci
or
infra.ci.
It's
fully
running
on
communities,
so
we
just
provisioned,
but
when
we
need
that
cert.ci
and
trust.ci
have
a
much
lower
usage,
I
mean
they
just
provision
those
from
time
to
time
just
to
do
some
specific
tests,
but
yeah
cici.
The
trick
is
definitely
the
biggest
one
but
yeah.
We
try
to
keep
everything
there
independent
of
everything
else.
A: So the next point is, for me, working with Damien to configure JCasC there. This will definitely help us in the future when we have issues like the one we had on Friday, when we have to reconfigure everything, audit it, and try to understand what changed and when. So this will definitely simplify the management of the configuration there. And then we'll probably try to identify how we can add a monitoring agent to the dynamic agents, so how we can add monitoring to those agents.
A: So if you don't have any more questions, I'll go directly to the next point. I mentioned EKS: Damien worked on Terraform code to provision an EKS cluster. We are almost there, as far as I know. What remains is the credentials that we would have to put in place, so how to automate that. No, sorry, we still have some small configuration left for that cluster.
A: Let's say the Datadog agent, or basically whatever we need there. But ultimately this cluster should only be used by ci.jenkins.io. So we are just thinking about the best way to configure that cluster, and we also have to create an account that we can use to connect between ci.jenkins.io and that cluster.
A: The next topic is about the ingress controller. This one is related to the main cluster that we have right now. Historically we've been using the NGINX ingress controller; the ingress controller is the service that receives HTTP requests and then forwards them to the different websites, like the javadoc site, the plugin site, the main website and so on.
A: (Your VPN is broken? Let's look at that afterwards, sorry for that.) So those NGINX controllers that are still running are still using the Helm v2 charts. We have to move to Helm v3, and we took this opportunity to deploy Traefik, to experiment with Traefik as the ingress controller. So the plan is now to switch the private services, like infra.ci or release.ci, just to validate that everything works as expected, and then we'll start switching the public services.
C: Yeah, I was just going to alert people that I intend to schedule a session with you and me and probably Damien, to work through what it takes to make the on-duty, the PagerDuty, experience better. Yesterday I again got five or six or seven alerts telling me about weird response times that I didn't quite know what to do about, and I'd like to learn how to handle it better. So I'm just going to schedule a session.
A: Okay, let's plan this session, I'll do that. Just for context: we have monitoring in place to detect many different issues, and one of them is a website getting slow. What we see more and more often at the moment is a situation where a website is slow, we get an alert, and then the issue resolves by itself after 15 minutes, and so we get a lot of notifications saying that some services are slow.
A: Okay, we don't have any other topics on the agenda, so I propose to...
F: I have one last one that just arrived five minutes before the meeting; that's why I haven't put it in the notes. I have two pieces of feedback about the two cloud companies. The company named Outscale is providing an EC2-compliant offering, or so I've been told. And I have feedback from Scaleway: they are okay, but they need us to help them define the amount of resources we plan to use in a Kubernetes cluster for the agents. And I have exactly the same requirement for sizing the EKS cluster nodes correctly.
F: That should be an interesting topic. So if we can schedule a discussion or a meeting or whatever, or tickets in Jira with this information... I'm sure there are already some documentation and metrics available, but yes, I need help on that topic, in order to be sure how many machines we are going to pay for on AWS, and how much we can ask from Scaleway as well.
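To make that sizing conversation concrete, here is a back-of-the-envelope sketch; every number in it is hypothetical, not an actual Jenkins infra figure. It derives a worker-node count from the expected peak of concurrent agent pods:

```python
import math

def nodes_needed(peak_pods: int, pod_cpu: float, pod_mem_gb: float,
                 node_cpu: float, node_mem_gb: float,
                 overhead: float = 0.1) -> int:
    """Estimate how many worker nodes fit `peak_pods` concurrent agents.

    `overhead` reserves a fraction of each node for the kubelet and
    system daemons; the result is bounded by whichever resource
    (CPU or memory) runs out first.
    """
    usable_cpu = node_cpu * (1 - overhead)
    usable_mem_gb = node_mem_gb * (1 - overhead)
    by_cpu = math.ceil(peak_pods * pod_cpu / usable_cpu)
    by_mem = math.ceil(peak_pods * pod_mem_gb / usable_mem_gb)
    return max(by_cpu, by_mem)

# Hypothetical: 20 concurrent 2 vCPU / 4 GiB agents on 8 vCPU / 32 GiB nodes
print(nodes_needed(20, 2, 4, 8, 32))  # -> 6 (CPU-bound in this example)
```

The same arithmetic would apply to both the EKS node sizing and whatever Scaleway asks for, which is presumably why the two exercises can share one set of numbers.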
B: And I was just going to mention the switching over of infra.ci and release.ci to the newly built Docker images; they're still kind of blocked on the versioning stuff, which is still blocked on the Jenkins Pipeline Unit test changes.
C: And the answer is yes, we can depend on a pipeline library by... I think we can depend on a pipeline library by commit SHA, at least.
C: Got it. So it's delivering a jar file, Tim, not just some Groovy code.
D: Yeah, I wouldn't bother with that. I mean, can you inline something into the library temporarily, or even just comment out the part of the test that fails? I'm not sure specifically how bad it is. Just say it's crude, but once the library is bumped, uncomment it.