From YouTube: 2023 05 02 Jenkins Infra Meeting
A: Let's get started with the announcements. The weekly core release 2.404 is out, at least the release packages and the Docker image, I assume, as usual. We still have the release checklist items to finish, the changelog and the changelog docs, as usual; they will be done a bit later, but the goal for us is to deploy that new version. Hopefully that will fix an issue we are seeing on our CI, where Jenkins is having issues with Unicode characters in GitHub pull requests, only since the 2.403 from last week.
A: We added one last week, so I assume it will be in a few weeks; I will put a note unless someone can find it. That should be at least three weeks before any release for us.
A: We had a contributor, a new maintainer of a new plugin. They looked like they had some issues, and we were able to help them release the first version of their plugin. Now it's more a discussion about versioning, so we can consider this issue closed. Next, "unable to create accounts": we had a user who tried to create an account; I'm not really sure about this one.
I think, for this one, they had the issue where the logs of the application mentioned a cookie. In that case, when you see that cookie, as per the source code, it means the user already had a session on accounts.jenkins.io in their web browser. That one doesn't happen often; usually it happens when you have multiple accounts and you have not logged out of your previous account when you try to create a new one. It's considered spam, because it means you are already trying to create multiple accounts in an automated way in the same web browser session.
A: So now we are four persons having access to the CA, the certificate authority in charge of signing this certificate. That means anyone from the team is able to generate a new certificate, usually once a year, but only four persons in the world have the ability to sign the requests for generating a new certificate. It's now Kohsuke, the creator of Jenkins; Oleg Nenashev; Olivier Vernin; and now myself as infrastructure officer. That CA is valid for five years, which means the four of us have to take care of that credential for the five upcoming years. So that one is okay.
We were able to generate it and unblock the website and all the jobs; everything is back, no issue, and nothing was broken, of course.
A: Oh, I still need to update the calendar notification for next year, though. Then, updates: we also updated the controllers to the latest LTS. Last Wednesday we had an LTS release, so we updated all our controllers to that LTS version, along with the plugins. Side note: that one embedded an Azure virtual machine plugin update which had a breaking change, so we were bitten by this one, but we were able to fix that on all the controllers in less than one hour.
A: It looks like, when we have a new weekly or LTS release, we need the Update Center to regenerate for the new version, and for that you need a new plugin release. So there is that time window between a new core release, either LTS or weekly, and the release of another plugin; the Update Center does not regenerate just for the core release. That's what Mark explained, and the reason why there was no action for us.
A: The key here, just to be sure everyone follows me, because I might have been too quick: since we weren't able to successfully build the Update Center, no new plugins were updated, so the new core release wasn't taken into account. As soon as the Update Center was updated with the new certificate, everything went back to normal.
A: Any questions so far? Okay. Then we had "don't send PagerDuty notifications on Datadog warning notices". Thanks, Hervé, for taking care of this one; that's way fewer alerts for us, especially when we have a machine using, let's say, 80, 81 or 85 percent of the hard drive. It's only a warning; it's not blocking, it's not an emergency, it's only a penalty for the I/O performance. And thanks for adding the note that reminds us every day to check the Datadog monitors for warnings; that's nice.
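As a rough illustration of that warning-versus-emergency distinction, the logic behind such a monitor can be sketched like this; the exact thresholds are assumptions for illustration, not the real Datadog configuration:

```shell
# Hypothetical sketch (not the actual Datadog monitor): disk usage in the
# low 80s is only a warning, an I/O performance penalty to review during
# working hours, not an emergency worth paging anyone.
disk_alert_level() {
  usage=$1                     # percent of the drive used
  if [ "$usage" -ge 90 ]; then
    echo critical              # page on-call
  elif [ "$usage" -ge 80 ]; then
    echo warning               # review later, no page
  else
    echo ok
  fi
}

disk_alert_level 81    # warning
disk_alert_level 95    # critical
```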
A: That's all for the tasks we acted on unplanned. We had the Debian packages one: a user was accidentally using a URL for the Debian packages that works but should not be used, and they have been told to use the official URL instead. So they should not have any issue in the future.
A
That
I
looks
like
thanks
for
taking
care
of
that
folks
forgot,
username
and
password
someone
messed
up
mixed
our
ldesk
with
their
company
ldesk,
so
they
wanted
to
get
access
to
their
own
Jenkins,
but
yeah,
nothing.
We
can
do
for
these
people
and
again
missing
Update
Center
for
two
dot,
whatever
that's
what
that
explained
earlier,
that
one
is
focused
only
on
the
update,
Center
generation
and
the
other
is
a
consequence
of
the
certificate.
A: Okay, so let's move to the work in progress. First of all, taking them in the order on the left: "increase disk space for the system pool". We have a private Kubernetes cluster hosting private Jenkins controllers, and the system pool is a node pool, a collection of virtual machines where the technical services run. You can see them as the plugins of the cluster, in charge of taking care of the data, for instance; that's what is running on these virtual machines.
A: That looks like an Azure constraint, which means we cannot increase the disk. We want to try, because the documentation is not really, let's say, rich, and we are missing information. We might be able to use either a special Terraform attribute letting the Azure API create and recycle between two system node pools, or we can try creating a second system node pool on our own to see if it works. I'm still not sure how we could do that, but we might need to recreate; sorry, changing this resource requires recreation.
A: So another angle here is that we don't need that much data. Most of the data consumed is the Docker images of the services running there, so I propose that, instead of focusing on this, we try moving workloads. We discovered another issue: some of our bots are running on these node pools and they should not; they should run on the Linux pool. So if we don't have them on the system pool anymore, we expect the disk usage to decrease.
A: So the proposal is that we first move the workloads that should not run on that system pool; eventually we might need to add a specific system pool for that. Once we don't have any of these images, the bot images and the rest, and we only leave the Azure default DNS and CSI pods, then we shouldn't have any warning anymore.
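A minimal sketch of how a workload can be pinned away from the system pool; the Deployment name, image, and pool label value below are assumptions for illustration, not the actual manifests:

```yaml
# Hypothetical manifest fragment: a nodeSelector keeps this bot on the
# "linux" user node pool instead of the AKS system pool. All names here
# are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-bot
spec:
  selector:
    matchLabels:
      app: example-bot
  template:
    metadata:
      labels:
        app: example-bot
    spec:
      nodeSelector:
        agentpool: linux   # AKS exposes the node pool name as this label
      containers:
        - name: bot
          image: example-bot:latest
```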
A: So let's start moving the workloads to decrease disk usage. By the way, we took the opportunity to upgrade the Linux pool disk size, which was set to 50 gigabytes; we were using 80, 81 percent of it, so we were just on the limit of the warning.
A nice reminder from Stefan: in Azure, the way you pay for the hard drives or the SSDs is that you pay for the next tier limit. We were using 50 gigabytes; that means we can increase up to the limit of 64 gigabytes and we will pay the same.
A: You don't pay depending on the exact size; you pay by tier (I forgot the English word), and the tiers are 32 and 64. So whether you have a 33-gigabyte disk or a 64-gigabyte one, you pay the same; that's why we increased it to 64. That should also remove the last warning for these machines.
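The tier-based billing Stefan described can be sketched as follows; the tier boundaries follow Azure's published managed-disk sizes, but treat the exact list as an assumption here:

```shell
# Hedged sketch of Azure managed-disk billing: you pay for the next tier
# boundary (32 GiB, 64 GiB, ...), not the exact provisioned size, so a
# 33 GiB disk and a 64 GiB disk cost the same.
billed_tier_gib() {
  size=$1
  for tier in 4 8 16 32 64 128 256 512 1024; do
    if [ "$size" -le "$tier" ]; then
      echo "$tier"
      return
    fi
  done
}

billed_tier_gib 50    # 64: why resizing from 50 GiB up to 64 GiB is free
billed_tier_gib 33    # 64
```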
A: The pom.xml had a Maven repository entry with a specific id that points to local files, so if you build locally on a machine, that works. What I discovered is that our setup with the mirroring is trying to retrieve the artifact over HTTP from the ACP, the artifact caching proxy. I would have expected Maven to detect the "file://" prefix and infer that it's local, so it shouldn't be mirrored, but it doesn't seem to work that way.
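A hedged sketch of the kind of settings.xml mirror rule that would exclude such a local repository from the caching proxy; the repository id and proxy URL are assumptions, not the real configuration:

```xml
<!-- Hypothetical settings.xml fragment: the "!local-files" exclusion in
     mirrorOf keeps Maven from routing the pom's file:// repository
     (assumed id: local-files) through the artifact caching proxy. -->
<settings>
  <mirrors>
    <mirror>
      <id>artifact-caching-proxy</id>
      <url>https://acp.example.org/repository/proxy/</url>
      <mirrorOf>*,!local-files</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```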
A: So, once merged, we should be able to try this one and validate the issue. So yeah, I need a review, that's all, and then I will take care of testing that and go back to the end users.
A: On the data migration, may I ask you: can you also start the Puppet part? Okay.
C: Both Hervé and I know how to do that. I don't know, sorry, just for information, I don't know if he or if you... let's see, but both of us should be able to, no problem. It's written that you need help to bootstrap this part; makes sense.
A: Any question on that one? Nope? Okay. "Can't access Artifactory account": another user, for another plugin release. Most of the time we have issues there. We have you, Kevin, and Bruno; I might have requested that, I'm making the request without thinking, but we might need help, and we might need to catch these two contributors of two different plugins to check.
A: When you do a manual Maven release, that one does not seem to work anymore through the UI, so we might need to check with them, or at least remove the part of the documentation that guides users step by step with screenshots, because it's a brand-new version of Artifactory. The curl command line still works perfectly, so that one is fixed. As for the rest, this user seems to have had a lot of hiccups.
A: Right now it might seem exceptional, in the sense that they have been advertising, since December, a brand-new web UI that you can switch to, and it looks like the page where you go to generate the Maven settings file, despite sticking to the classic UI, is now using the new UI. I think that's a recent and unexpected breaking change, but I might be wrong. So there is that fix about the UI, because for that one we need to fix the documentation.
A: Even if we say "hey, it's command line only", I don't mind. The other one is: maybe we have room for improvements, but we need time to check on this one; I just wanted to share it with both of you. Cool, thank you. If you have any questions, don't hesitate, now or later, on these topics. Both users were able to release their plugins.
A: I haven't closed this one because I haven't checked the last feedback from the user, though I should, so that one might be closed today or tomorrow. No action expected, okay.
A: "Failed to deploy artifacts": that one should be closed as well; I need to check one last time, if that's okay for you.
A: No answer from the user; I will wait 24 hours before closing without an answer. I'm not sure what that person's problem is, honestly, because they were even given the link to visit.
E: I haven't seen any mail sent after the password reset in the logs.
A
Okay,
thanks
for
checking,
can
you
hide
a
comment
on
the
issue
to
tell
that
no
email
was
there.
The
account
exists
with
this
email,
I.
A: So, no action expected on this one. "Past releases sites are taking a long time to load": I will move that one back to the backlog, because it's working and we were able to give the end user a solution for the specific problem. Initially it was answering 503 errors and was slow as well. The errors have been gone for three weeks now, and one of the two links is working really fast while the other is slow. We need to profile it, to see where the performance bottlenecks are and how we can act.
A: For that we need to use Datadog, so that's something on Hervé's side, if I'm not mistaken. Is that correct?
A: So yeah, Hervé, are you in charge of connecting the Apache of the mirror, get.jenkins.io, with Datadog, to collect Apache-specific metrics?
A: Okay, are you willing to do it, or should we take the issue?
A: That's the idea. I prefer, if you don't know, that it's a no; then we don't take it and we do what we can. Next one, unless there is a question on the performance issues. Oh, I haven't shared the solution.
That person was trying to programmatically list the weekly releases, and they used the link from the download page on jenkins.io, the website. The idea here is that I pointed them to the maven-metadata.xml file from Artifactory, which is the source of truth.
A
That's
the
first
thing
when
we
have
a
new
world
released
there,
so
they
they
should
only
have
to
curl
that
XML
file
parsitting
XML
and
extract
the
list.
We
already
have
Shell
Code
doing
that
for
the
official
Docker
images.
That's
what
I
pointed
to
that
person
looks
like
they
are
using
golang,
but
I
mean
that's
a
file
to
get
and
powers
and
treats
so
they
have
a
way
to
have
the
correct
source
of
true
with
acceptable
performances.
That's
why
it's
not
a
broker
anymore.
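The curl-and-parse approach can be sketched as follows; the metadata URL points at the public Jenkins releases repository, and the parsing is demonstrated on an inline sample of the metadata format so the sketch stands alone without network access:

```shell
# Sketch of listing releases from Artifactory's maven-metadata.xml, the
# source of truth. Real usage (network required):
#   curl -sSL "https://repo.jenkins-ci.org/releases/org/jenkins-ci/main/jenkins-war/maven-metadata.xml" | list_versions

list_versions() {
  # keep the <version>...</version> entries and strip the tags
  grep -o '<version>[^<]*</version>' | sed 's/<[^>]*>//g'
}

# offline demonstration on a sample of the metadata format
cat <<'EOF' | list_versions
<metadata>
  <versioning>
    <versions>
      <version>2.403</version>
      <version>2.404</version>
    </versions>
  </versioning>
</metadata>
EOF
```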
A: Then Google Analytics V4: I haven't heard back from Olivier. We need to migrate some Google Analytics properties to the new version. Olivier gave me administrator permission on that property, but we need full administrator, and it does not seem that Olivier has full admin, so yeah, we'll see. At the same time, Kevin proposed again to use Matomo, which would be a self-hosted analytics platform instead of Google Analytics.
A: That could be interesting, because he tried it successfully in parallel for the past months. So I asked him if he can open an issue for Matomo listing the technical requirements, to see how and on which cluster we should host it. Does it need a database? Is it self-hosted, etc.? How much data do we need? That would mean a new service on the publick8s cluster.
A: But yeah, worst case, we install Matomo and start from scratch on a new set of data; I mean, we can always request the data before we start using Matomo. So that one will go back to the backlog until we have an answer from either Kevin or Olivier, because there is no action expected from us; it should go back on the milestone if we have work to do. Worst case, Google Analytics will continue working; it will automatically migrate to V4 in June, I think, or July.
A: Even though there is a scary message here, the details in the administration console tell me it will be automatically migrated, but eventually with new settings; that's why they recommend you do it manually. I have no idea how that piece of crap works, honestly. That's why, if we could get away from a Google piece-of-crap service, I would be really happy, to be quite honest. Yeah, we might lose a few features, but I mean, collecting data for analytics is not something we should be doing anyway.
A: I never did it, so it might be useful, but I have no idea what data to extract. I assume it's something about the journey inside the website, to track users and their path, but no idea; I'm not sure what the goal would be, honestly. Worth asking the question, unless... maybe it's a naive question; I don't have knowledge of these tools.
B: That's not for us; that's for marketing purposes, yeah. That's not for infrastructure. The idea is to point out where in the world we are trending, where there is, I don't know, interest. And there is a lot and a lot of data in Google Analytics, so someone who knows how it works can extract some sense from it. But, as you said, we are not the ones who will use it. Likewise.
A: Okay, so yeah. I will ask Kevin on the IRC channel; he might have answers for that, and I assume Mark too.
A: Okay, next issue currently open: "use a new virtual machine instance type for ci.jenkins.io". Work in progress; sorry, I've left the lock on the Terraform state and I need to start working on it again. I was able to find a downsized instance with a bit less memory. The price difference is low, like 40 bucks per month for the VM itself, but that would be a virtual machine of the latest generation.
A: Same idea, yeah: I have almost everything ready for creating an empty shell, along the same lines as what Stefan did with trusted. So we should be able to do that soon. VM ready to be created; needs pair review. It's almost done.
A: No? Okay, next topic: "migrate applications from the system pool to the Linux pool on privatek8s". The scope is the bots that we are hosting, I assume. We weren't able to work on this today, and it was a long weekend in France, so I assume we still have to work on this one. Is it still okay for you, Hervé, to take it this week?
A: Long weekend indeed. That one should solve the issue of the system pool using too much data, as a reminder, so that one is important. Next, "maxed out, can't create account": as usual, I will pass on this one; it needs to be checked. "Add the table to the agents array": could you give us a quick status? It's your big central task.
A: Okay, can you check with Basil the next steps for testing? If it doesn't work, the fallback will be using Windows Server Core, and the only impact is a longer startup time for the agent on ACI. That's something we are going to have in the future anyway, because we need to move these images to Packer, which requires building on Windows Server Core; we cannot use Nano Server for our all-in-one agents. So at some point we will have to migrate to Windows Server Core, in a perfect world.
A: We should do that once the ACI workloads are moved to Kubernetes workloads, because that means once we have the image on a node, the image will be cached, which is not the case with ACI, which pulls the image on each call. That should make it pretty invisible. But if we have to, we'll have to start with Windows Server Core at first.
A: Is that okay for you, or do you want to hand over the task? Maybe, because it has been a lot for you on that part.
A: However, based on the latest changes that Tim Jacomb made on the Azure VM plugin, we should be able to already migrate the ci.jenkins.io agents, at least the virtual machines, to the same network as the ACP. That's why I'm keeping that issue open; that should be a separate task, because the ci.jenkins.io virtual machine migration might be done quickly or might take weeks, I'm still not sure. So, if you don't see any objection, I will work on migrating.
A: No question? Yep. The next one: "add publick8s cluster". Is it okay for you to start working on this one? I don't remember; I might have said I wanted to start on it, but I did not have any time during the past week.
A: Okay, we can do one service per week, or something like that; as a rule of thumb, that should not be a problem for most of these services.
A: Okay, is it okay if I add you? And if you need help for a review, don't hesitate to ask. Looks good for you? "Clean up and import unmanaged Datadog monitors in Terraform": can you remind me of the status of this one?
E: I haven't worked on this. There are several unmanaged or duplicated Datadog monitors.
A: So we keep it for the next milestone. "Temporary name resolution failure in plugin BOM builds": we need more investigation on the CoreDNS embedded in the EKS clusters.
A: We still need to understand why the CoreDNS component is not able to resolve all the outbound DNS requests. Is it because it's not powerful enough? Was it a transient error on the AWS network? We don't know; we need to dig deeper. There isn't anything obvious for now. We talked about security groups blocking the outbound DNS requests, but it does not look like that's the case, so we will have to experiment a bit more with a debug container on the node pools and see what happens.
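One way to run such a debug container, sketched as a plain manifest; the pod name and nodeSelector are assumptions, and the image is the dnsutils image commonly used in Kubernetes DNS debugging guides:

```yaml
# Hypothetical debug pod to test DNS resolution from inside the suspect
# node pool, e.g. after it starts:
#   kubectl exec -it dns-debug -- nslookup repo.jenkins-ci.org
apiVersion: v1
kind: Pod
metadata:
  name: dns-debug
spec:
  nodeSelector:
    kubernetes.io/os: linux   # narrow to the suspect node pool as needed
  containers:
    - name: dnsutils
      image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
      command: ["sleep", "infinity"]
  restartPolicy: Never
```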
B: I managed to try the new Azure ARM64 VMs that are built by Packer, and they are now in use on ci.jenkins.io.
A
Is
that
okay
for
you
to
then
continue
on
CI
joint
games,
IO
and
and
had
the
new
template
and
remove
the
former
AWS?
Yes,.
A: Looks good for you to add this to the upcoming milestone? Thank you. On the upcoming tasks, I'm going to work on this one because I already started the Ubuntu 22.04 upgrade campaign. I had removed it because I knew I was off, so no time to work on it.
So the next step here will be the node pools on AKS, since we are playing around with the system pools and everything. The goal is to start creating a new system pool and new node pools and migrate.
A: Do you have other topics from the backlog that look important to you and that you would want to work on in the upcoming milestone?