From YouTube: 2022 04 12 Jenkins Infra Meeting
A: Let's start with a few announcements. First, we had the outage and the suspicious activity on Sunday; more on that later. We are now sure that no harm was done, except someone mining for five minutes before we cut all accesses. There will be a public communication later in the week; we are finishing the triage and feedback, but yeah, that's the information.
A: I forgot to mention the Jenkins release: no worries, it should become an automated release for that part in the upcoming weeks, so crossing fingers, but everything went fine as far as I can tell from my point of view. However, I assume that Mark (or someone on the release team) needs to check all the boxes on the release checklist.
A: Thanks Mark. Now the announcement: there has been a security advisory today, with a bunch of security issues on plugins. So please update the plugins. That's already done on our public instances, weekly.ci.jenkins.io and ci.jenkins.io, and already done also on the private instances on Kubernetes, for release and infra.
A: One last announcement: it's been two or three weeks since we started seeing random slowness on the updates.jenkins.io service. I still need to write a detailed issue on the helpdesk that will explain the different findings. Right now Mark and I are the persons on duty; we tend to restart the Apache service when it goes too far.
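A minimal sketch of what such an on-call latency check could look like. The update-center.json endpoint is the one served by updates.jenkins.io; the threshold and the printed alerting are my assumptions, not the team's actual tooling:

```python
import time
import requests

URL = "https://updates.jenkins.io/update-center.json"
THRESHOLD_SECONDS = 5.0  # assumption: "too far" as a concrete number

start = time.monotonic()
try:
    response = requests.get(URL, timeout=30)
    elapsed = time.monotonic() - start
    if elapsed > THRESHOLD_SECONDS:
        print(f"SLOW: {URL} answered {response.status_code} in {elapsed:.1f}s")
    else:
        print(f"OK: answered in {elapsed:.1f}s")
except requests.RequestException as exc:
    print(f"FAILED: {exc}")
```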
A: We have action points to fix that. The main one I'm announcing now, but it should be communicated in a written and public way later today or tomorrow. Worst case, we are going to sunset the old mirror system, which is HTTP only, and redirect all mirroring to the current production system running on Kubernetes.
A: That means that the mirrors will be forced, or redirected, to HTTPS. So if you are out there using HTTP only, you will have trouble; but we are in 2022, so there is no sane reason not to use HTTPS for such a thing. That will be the subject of a blog post, a communication, an email notification, and maybe more, so stay tuned. But that's the whole idea of that announcement.
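For mirror operators, the new behaviour could be verified with a probe like this sketch; the host and path are hypothetical examples, and the exact redirect status code is an assumption, not something confirmed in the meeting:

```python
import requests

# Plain-HTTP request on purpose; redirects are not followed so that the
# Location header can be inspected directly.
response = requests.get(
    "http://get.jenkins.io/war-stable/",  # hypothetical example URL
    allow_redirects=False,
    timeout=30,
)
print(response.status_code)              # expected: a 3xx redirect
print(response.headers.get("Location"))  # expected: an https:// URL
```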
A: So today I will focus on the done items and, eventually, the work in progress. Since two members of the team are ill and off, the bandwidth was clearly lower than in the past weeks, so I won't take any hard decisions for the upcoming weeks.
A: Still, there are a bunch of issues that have been fixed. As usual, I'm taking them from the list on the left; I just synchronized it before. So, account recovery: we tend to have people losing their passwords, so we have to point them to the correct documentation page so they can autonomously get their account back or reset the password. Sometimes it's because they haven't connected in years, so they even have to update their email.
A: We went back to the normal security expectations there, to avoid some web exploits, so that has been closed. Finally, in the area of security issues, I did an emergency upgrade of the NGINX chart and cert-manager last Friday to fix the OpenSSL CVE.
A: So thanks a lot, Hervé, for that help. We had the LTS upgrade last week as planned; that LTS has been applied in less than 48 hours everywhere. It took some time for ci.jenkins.io, because we had to wait for a big queue of builds to be processed, but no issue whatsoever. For this one, an old plugin was archived, thanks Tim for that. We had issues around the repository-permissions-updater not being synced; I'm the culprit, I will discuss that later.
B: Yeah, I think you ought to highlight the story there: we're moving something off of the trusted infra into infra.ci, where more of us have access. So I think you're doing the right thing; that was just a minor bump, yeah. So, looking forward to your comments there, no problem.
A: Yeah, let's go. Later, there's been a request for VPN access from Alex, because he is the LTS release officer, or release lead; I don't know what the right term is.
A: That's really cool; that means we have incoming VPN access. The good thing is that, thanks to him, we were finally able to update the last pieces of the documentation for VPN access. That's some part we had missed, so I hope that it'll be smooth for Alex; let's see. Thanks Tim and Hervé for spinning up the public instance weekly.ci.jenkins.io, which features the weekly releases: a public instance where we can demonstrate externally the new design library elements. So thanks folks; we have that instance running with a minimalistic set of UI plugins.
A: It's synchronized with infra.ci; it's the same image for now. Another plugin was archived, so thanks to the people involved in that, because I don't know the rules for that part. Hervé and I removed the last pieces of the Evergreen legacy infrastructure: even though it should have been deleted, there were still some databases and a resource group on Azure. Not really expensive, but still good to clean up. That's the reference I had been searching for for months, and Hervé was able to find it for us. So thanks everybody.
A: So now the main work in progress. First, in order to contact the Docker open source program to ask for open source coverage on the jenkinsci organization, not only the jenkins one, we had to clean up, as a prerequisite, all the members of the organizations that we are using for different classes of images. Most of the time we had between 8 and 12 seats used for each organization, while we should only have three. So we have documented the new pattern in the private runbook documentation, and now, for each of the organizations owned by the infra team, the Jenkins officer and the backup human will in each case be the owners, the co-owners. They must have 2FA authentication enabled on Docker Hub, so they are not subject to credential stealing if the password leaks. And the third seat should be not an owner, but a technical user.
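A rough audit sketch for that three-seats rule. The Docker Hub v2 endpoint shape, the token handling, and the response field names are my assumptions, not verified against current documentation:

```python
import requests

ORG = "jenkins"   # one of the Docker Hub organizations discussed above
TOKEN = "<jwt-from-docker-hub-login>"  # assumption: a token obtained beforehand
MAX_SEATS = 3     # two human co-owners plus one technical user

# Assumption: Docker Hub's v2 API lists organization members at this path.
response = requests.get(
    f"https://hub.docker.com/v2/orgs/{ORG}/members/",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
response.raise_for_status()
members = response.json().get("results", [])
print(f"{ORG}: {len(members)} seats used")
if len(members) > MAX_SEATS:
    print("Too many seats used; compare against the runbook pattern.")
```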
A: What else do we have? Migrating rating.jenkins.io: Stefan was able to create the ingress on it, and before getting the flu he was working on migrating the database, so good job Stefan. It's delayed to next week, of course, time for him to heal. Docker open source program: I'm currently discussing, on the official Docker Slack channel, with the manager of the team in charge of the open source program.
A: He clearly told me they might give us a short-term solution for now to unblock us, but he understands, and it's a great use case for them: understanding what end users could do in the upcoming months, or what some end users are already doing but don't tell them. So that's mainly a product management discussion, and then I will issue the official request for extending the open source program to our organizations and users.
A: What about the email alias for press? I've contacted both the Linux Foundation and Mailgun and am waiting for feedback. I expect the Linux Foundation to create the mail server so we can move the MX record to their system; and in the case of Mailgun, I asked them if we can recover the accounts, or at least if they can export the list of emails, given that we prove our identity through adding a DNS record or whatever security measure.
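Once the new mail server exists, the MX move could be checked with a sketch like this, using the dnspython package; the domain is the obvious candidate, and the expected target exchange is deliberately left open:

```python
import dns.resolver  # pip install dnspython

# List the current MX records, lowest preference (most preferred) first.
answers = dns.resolver.resolve("jenkins.io", "MX")
for record in sorted(answers, key=lambda r: r.preference):
    print(record.preference, record.exchange)
# After the migration, the preferred exchange should point at the Linux
# Foundation's mail system rather than Mailgun.
```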
A
Gc
for
packer,
so
now
all
the
development
image
are
garbage
cleaned.
So
we
gain
some
money.
Not
it's
not
that
much
but
great
job.
Stefan,
I
gave
him
the
requested
information
just
before
the
weekend,
so
when
he
will
be
back,
he
will
continue
on
the
staging
and
production
images.
A
Yep
I
forgot
this
one.
Thanks.
Mark
monitoring
builds
on
private
instances
that
one
is
delayed.
Erv
and
stefan
were
planning
to
work
on
that,
but
since
they
are
obviously
not
in
good
shape,
delayed
on
one
or
two
weeks,
depending
on
their
bandwidth,
when
they
will
be
back
so
no
action
done
on
this
one
same
for
kubernetes
upgrade
to
1.21.
A
On
my
side,
I
was
able
to
clean
up
a
bit
the
artifact
caching
proxy,
so
we
have
docker
image
which
is
published
and
up
to
date
now
and
tracked
by
update
cli.
The
next
step
is
to
find
a
way
to
test
that
correctly
and
then
diff
propose
and
validate
with
a
real-life
user
of
see
agent
in
cio.
What
they
think
about
that
solution.
A
No
at
first
I
want
to
provide
one
instance
per
cloud,
so
one
on
digital
ocean
and
one
on
aws
for
now
and
ceo,
it
behave.
That's
the
pattern.
I
want
to
propose
one
cache
per
cloud,
so
we
don't
deal
with
network
latency,
but
we
have
to
deal
with
distributed.
Caching,
which
means
a
given
build,
could
have
could
take
longer
to
build
if
it's
scheduled
on
another
cloud
because
it
will
have
to
re-cache,
but
that
should
be
good
enough
for
now,
given
that
only
the
released
artifact
will
be
cached
excellent.
Thank
you.
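A minimal sketch of that one-cache-per-cloud routing; the proxy hostnames and the CLOUD environment label are hypothetical, the point is only that an agent uses the cache living in its own cloud, accepting a cold cache when a build lands on the other provider:

```python
import os

# Hypothetical proxy endpoints, one per cloud provider.
ARTIFACT_PROXIES = {
    "digitalocean": "https://do.artifact-proxy.example.org",
    "aws": "https://aws.artifact-proxy.example.org",
}

def proxy_for_agent() -> str:
    """Return the artifact-caching proxy co-located with this agent."""
    # Assumption: the agent template exposes its cloud in a CLOUD variable.
    cloud = os.environ.get("CLOUD", "aws")
    return ARTIFACT_PROXIES[cloud]

print(proxy_for_agent())
```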
A
Okay
on
that
area,
so
I've
asked
for
more
details,
but
it
looks
like
that
there
is
what
is
called
the
nexus
client,
I'm
not
sure
about.
What
is
this,
because
nexus
is
a
server
for
me,
so
maybe
it's
a
local
nexus
instance
mirroring
or
republic
repo,
and
it
looks
like
that
this
instant
is
causing
a
lot
of
requests
on
gfrog
artifactory,
as
they
ask.
A
I
assume
it
should
be
a
great
and
big
user
of
jenkins
development
or
someone
working
a
lot
of
chain
games,
but
we
need
more
information
before
being
able
to
find
who
it
could
be
and
maybe
contact
them
if
they
have
a
contacts.
So
I'm
waiting
for
g
frog
from
feedback,
but
it
seems
like
that.
The
recent
slowness
on
the
report
and
qt
org
were
caused
by
that
big
peak
of
so.
Let's
see,
we
need
more
information
to
be
able
to
compute
or
contact
anyone.
A: With no answer in the upcoming 10 days, I will try again, doing some mayhem by merging the pull request and seeing if it breaks anything. No worries on the state, because the advantage of this regular task is that if anything is broken, I shut it down and let trusted.ci redo it again. There is no past state to reconcile, so it's quite easy to go back; no need for backups. It's just that some plugin maintainers won't be able to publish for three or four hours, time for the system to heal itself.
A
And
finally,
irv
and
I
are
working
on
an
issue
on
update
cli,
which
is
causing
delay
on
the
automated
update
system
named
update
cli.
We
cooked
an
issue
on
the
recent
version
where
they
fixed
the
issue.
So
it's
time
to
release
and
publish
that
should
allow
how
hone
gave
in
to
publish
a
new
version
of
the
plugin
site
in
production.
A: Not at all; it's only that our updatecli system sees changes but fails to open the pull request proposing the new changes, which makes the production updates slower, because we need to do them manually when we see the request failing.

C: Got it. So it's a bug on updatecli, the upstream tool you are using?

A: The maintainer was able to fix the bugs, so we are only waiting for everything to be published, released, and deployed.

C: Excellent, thank you. Thanks for the clarity.

A: No problem. So yeah, that's a lot of work in progress.
A: Docker repo credentials for VM agents is blocked, because Stefan and I have to work together to put in the new credentials, since we rotated them all; so, delayed. Migrating updates.jenkins.io to another cloud: that one is an old issue, I already mentioned it. The idea would be to find a way to avoid this $3k per month of bandwidth on AWS for the update center JSON.
A
There
had
been
nice
and
interesting
discussion
and
proposals
that
were
never
pushed
on
the
iep
from
olivier
verna
about
storing
that
json
file
on
azure
bucket.
So
at
least
we
don't
have
to
deal
with
a
web
server
that
could
be
limited
it
because
we
have
to
tune
the
tcp
when
there
is
a
workload
peak
so
using
web
server
from
azure
could
be
interesting
because
it's
self
managed
and
self
scaled
that
could
be
run
on
kubernetes,
so
we
can
move
to
oracle
and
finally,
rv
was
discussing
with
daniel.
A: So there wasn't any immediate reason for not doing that. That could help us a lot and diminish the workload on updates.jenkins.io, but we need to be sure. So let's let Daniel and the team wind down from their huge week with the security advisory, and ask them again next week. That could be a nice way to solve this, because it would remove a lot of constraints on the web service.
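A sketch of that proposal as described, publishing the JSON to Azure Blob Storage with the azure-storage-blob package; the connection string and container name are placeholders, not the real resources:

```python
from azure.storage.blob import BlobServiceClient  # pip install azure-storage-blob

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="updates", blob="update-center.json")

# Upload the generated file, replacing the previous version.
with open("update-center.json", "rb") as data:
    blob.upload_blob(data, overwrite=True)
print("published to", blob.url)
```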
A: We have an issue; oh, I will add that issue for me for next week. Our Jira instance, hosted by the Linux Foundation, has reached the end-of-life phase, which means in six months it won't be updated anymore. So I need to ask the Linux Foundation as soon as possible if they can upgrade to the latest Jira LTS, which should reach end of life in October 2023.