From YouTube: 2021 09 17 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A
Hi everybody, welcome to this new Jenkins infrastructure meeting. We didn't have one for several weeks now, so we have quite a lot to cover during this session. The first item, I'm not sure if it's an announcement or not, but I want to remind you that Hacktoberfest is coming pretty soon, and the Jenkins infra project is definitely a good candidate to receive contributions from people.
A
The next thing that I want to share with you: Anton from the Linux Foundation will do a maintenance on the Jira database. He has to change the charsets in order for Unicode to work in issues, comments, and descriptions. The service should be down for two hours maximum, but, yeah, we never really know for sure with Jira. That maintenance will happen on the 5th of October 2021, which is the day before the next stable release. He will do the maintenance at 4:00 PM UTC.
A
Next topic, unless you have a question regarding the Jira maintenance; I mean, it's not a big deal. Mark, if you could take the notes, that would simplify things a lot for me, thank you. So the next topic that I want to cover, briefly cover because it's not all about infrastructure, is that we have the election coming.
A
I wrote a blog post to announce the beginning of the election. We had a bunch of discussions about it in the Jenkins community, so I just took all the feedback there and wrote a blog post. I would like to publish it on Monday, so we officially start the election process. If you have any feedback, please put it there so we can adjust. I put two links: the first is a link to the PR to publish the blog post, and the second one:
A
I created a small tool that I named EFD, for "email from Discourse". It's just a small tool to retrieve the email addresses of the members of a specific Discourse group.
A
The idea is that when people join that specific group, they agree to participate in the election; they register for the election, and then we fetch the email addresses from that group, so we can send them an invitation to participate on the Condorcet voting system service. Everything is explained in the blog post, so that's it. As for the small tool, I don't think it will evolve a lot once the election is over, but it may be useful for different purposes; that's why I created it.
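For illustration, here is a minimal sketch of what a tool like that has to do, assuming the stock Discourse API endpoints for listing group members and the admin-only user email endpoint; the instance URL, group name, and credentials below are placeholders, not the actual EFD configuration:

```python
import requests

# Placeholder values for illustration; the real tool targets whichever
# Discourse instance and group the election registration uses.
BASE = "https://community.example.org"
GROUP = "election-voters"
HEADERS = {
    "Api-Key": "<admin-api-key>",  # admin key: member emails need admin scope
    "Api-Username": "system",      # user the API key acts as
}

def group_member_usernames(group: str) -> list[str]:
    """List the usernames in a Discourse group, following pagination."""
    usernames, offset = [], 0
    while True:
        resp = requests.get(
            f"{BASE}/groups/{group}/members.json",
            params={"offset": offset, "limit": 50},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        members = resp.json().get("members", [])
        if not members:
            return usernames
        usernames += [m["username"] for m in members]
        offset += len(members)

def email_of(username: str) -> str:
    """Fetch a user's primary email address (admin credentials required)."""
    resp = requests.get(f"{BASE}/u/{username}/emails.json",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["email"]

if __name__ == "__main__":
    for name in group_member_usernames(GROUP):
        print(email_of(name))
```

The admin-scoped API key is what gates access to the email addresses; without it, Discourse only exposes usernames.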
A
The next topic is about Keycloak. Due to the Confluence security issues that affected us two weeks ago, we asked people to reset their passwords using Keycloak, on beta.accounts.jenkins.io, and we identified several issues with the service. One of them is that it's using SendGrid to send the emails, and there is a problem with SendGrid.
A
We have to find, we need to find, an improvement regarding the SendGrid plan. The second annoying thing with that SendGrid account is that we are limited to only two people who can have access to it, which at the moment is Kohsuke and me. Ideally, I would like to have more people, or at least all the people who can help with Jenkins infra and email issues. Tim Jacomb made a very good suggestion earlier today, which is to integrate SendGrid with our Azure account.
A
In that case, we would be billed directly from Azure, and that should not really affect the Azure invoice. So that's really nice, and we would also be able to use the Azure Active Directory for authentication, so more people would be able to access the SendGrid account. That's something I'm planning to work on in the coming days or weeks. So, to come back to Keycloak: that was one of the issues, the emails sent from Keycloak. The second one:
A
We still have some improvements to make there. I'm not sure what the current state is, or whether people are still affected by it. What I noticed in my case is that sometimes we have timeout issues when we try to update a user from Keycloak, but that's something that I need to investigate.
A
Any questions? No? Sounds good. So, yeah, the SendGrid point: I covered it during the Keycloak topic. The next topic is about Azure, a quick update. We successfully reduced the cost of the Azure account: the last invoice was something like $7,000, which is really nice, and I would like us to continue on that path.
A
So we end the year with a good balance, because in the previous months we were above $10,000 per month, which is the limit that the CDF asked us to keep, and now the idea is to keep that number low.
A
So that's something that we need to work on. DigitalOcean allows us to use a small Kubernetes cluster with three nodes. We still have room for improvement there, so if we can justify more budget with DigitalOcean, I think we can get it; that was just a first estimation when we talked with DigitalOcean.
A
So, definitely, the Azure account cost went down; the Amazon one increased, though not as much as the Azure one decreased, I think. We still have some fine tuning to do there, like the size of the machines, the number of machines, and so on. That's something that I have to double check, but the limit on the Amazon account should be quite high at the moment. That's something that we may reduce, but, yeah, that's just a number that we have to adjust.
A
The next topic is about Docker images. I know that Damien has been working on that; unfortunately, he's not here to give an update, so I will not risk trying to explain all the details myself. I guess it's just better to keep that topic for next week.
A
DigitalOcean, we briefly covered that one. So I propose to just continue with the wiki exports. As you know, two weeks ago we were affected by...

B
Olivier, could you scroll your screen up so that we've got...

A
Sure.
Thanks for the reminder, Mark. So, as I was saying, two weeks ago we were affected by major security issues with wiki.jenkins.io, on Confluence, and that's a service that we have not really maintained for years now. Most of the data that we have there is quite old, but we never really found the time to migrate the content elsewhere. So, the current state of the machine: the service is stopped, our Confluence is stopped.
A
We restored a copy to another location, a bigger machine where we can do some experiments. Now the goal is to find the best way to extract all the content that we have on Confluence, and then to migrate that content to the right location. Gavin has been doing some work on that topic: he created a Git repository, jenkins-infra/plugins-wiki-docs.
A
He exported to that Git repository the plugin documentation that was not yet exported. So the plugin documentation issue can be considered solved now, but we still have a lot of other pages on Confluence, such as meeting notes, organization and contributor pages, summits, and quite a lot of other information.
A
The only caveat that I have at this time is that the exported files don't necessarily match what you see. You should see my screen here: the CSS is slightly different from what you have on the wiki, and the URL as well. For example, in this case you can see that we have a small checksum page with some odd HTML, so we may have some cleanup to do. What I was suggesting was just to export every page to an HTML page and then upload them to archives.jenkins.io.
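As a hedged sketch of the kind of cleanup such an export implies (the directory layout and archive URL below are assumptions for illustration, not the actual export tooling), one pass could rewrite absolute wiki links so the archived HTML pages keep pointing at each other:

```python
from pathlib import Path

# Placeholder locations; the real export layout and archive URL may differ.
EXPORT_DIR = Path("wiki-export")                 # HTML dumped from Confluence
OLD_PREFIX = "https://wiki.jenkins.io"           # links as they appear in the dump
NEW_PREFIX = "https://archives.jenkins.io/wiki"  # assumed archive location

def rewrite_links() -> None:
    """Rewrite absolute wiki URLs in every exported HTML page in place."""
    for page in EXPORT_DIR.rglob("*.html"):
        html = page.read_text(encoding="utf-8", errors="replace")
        patched = html.replace(OLD_PREFIX, NEW_PREFIX)
        if patched != html:
            page.write_text(patched, encoding="utf-8")

if __name__ == "__main__":
    rewrite_links()
```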
A
Any questions?

B
I'm still hoping that we could get some way of easily doing redirects from wiki.jenkins.io URLs to archives.jenkins.io, given the URLs stay the same. It looks promising, even if we dropped the /wiki part, but I'll take it up with you separately, Olivier. I think you're on the right track: we need to extract the HTML so that we don't lose it.

A
Okay, okay, yeah, we can.
B
Okay, so the Artifactory topic was mine, that I put in. I see that we had a failure to download an artifact from repo.jenkins-ci.org.
A
We were not allowed to do that amount of requests against repo.jenkins-ci.org, so at that time we put in place a proxy cache on the CI environment. Maybe that's something that we could do again, but that's something I have to investigate with Damien, because everything is running on this cluster, so we could deploy a cache local to the agents, because that's definitely the problem here.
C
I'm naturally not keen on such a solution, to be quite honest, because of the combination of the management time to spend on such an instance, the I/O that it requires, the cost in terms of infrastructure, and the amount of problems that caching can cause for developers. That's a special area where the developers cannot control the cache, which means, unless we... yeah, I don't know. I had a pretty bad experience with that in a corporate setting before.
D
We used to have Bintray as our upstream, but we were having Bintray reliability issues, and so we just deployed an Artifactory on our CI cluster, and it's been pretty much completely hands-off and it just worked.
A
What I did in the past was to deploy an nginx proxy cache on the cluster as well, so that was pure caching in that case. But the problem here is that we don't really control the upstream: we can open issues with JFrog, but that instance is fully sponsored, so we don't have any SLA with them.
C
Yeah, fair. It's just that we just exited a CVE issue on something that we dumped somewhere and didn't touch for a few years. So, honestly, right now it feels like we are overwhelmed by the amount of tasks, so, yeah, I would prefer asking JFrog to increase their SLA, or maybe consider another service, or consider running our own Artifactory from scratch, without the help of JFrog.
C
Yeah, that would be an idea, and also to check with Daniel and the security people, because maybe the argument for spawning an instance would be: we need to have our own tiny Artifactory as a service inside the infra, especially for the release and pre-release, in order to build them. So if we have to build this, then that means we will have to spawn two or three instances, and in that case the cost would be worth it.
A
The problem, if you have one instance, is that in the current situation we have some agents on Azure, somewhat less than the usual amount, but we have a bunch on Amazon and we're going to have agents on DigitalOcean soon. So the suggestion here is just to build a Docker image that is just a proxy cache for the releases: no configuration, no authentication, just read-only. We would deploy it on DigitalOcean and Amazon, and elsewhere if we end up using a different cloud provider.
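A minimal sketch of such a read-only proxy cache, assuming repo.jenkins-ci.org as the upstream and a local disk cache (the port and cache path are placeholders; the real image would more likely wrap an off-the-shelf cache such as nginx, as was done on the CI cluster before):

```python
import http.server
import urllib.error
import urllib.request
from pathlib import Path

# Assumed values for illustration; the real image would bake these in.
UPSTREAM = "https://repo.jenkins-ci.org"
CACHE = Path("/var/cache/artifacts")

class ReadOnlyProxyCache(http.server.BaseHTTPRequestHandler):
    """Serve GET requests from a local disk cache, filling it from upstream."""

    def do_GET(self) -> None:
        # NOTE: sketch only; a real proxy must sanitize self.path against
        # directory traversal before mapping it onto the filesystem.
        local = CACHE / self.path.lstrip("/")
        if not local.is_file():
            try:
                with urllib.request.urlopen(UPSTREAM + self.path) as resp:
                    local.parent.mkdir(parents=True, exist_ok=True)
                    local.write_bytes(resp.read())
            except urllib.error.HTTPError as err:
                self.send_error(err.code)
                return
        body = local.read_bytes()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.ThreadingHTTPServer(("", 8080), ReadOnlyProxyCache).serve_forever()
```

Because the cache is read-only and unauthenticated, every agent pool can share one instance per cloud; the only state to manage is the cache directory itself.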
A
I mean, I know, I saw some discussion with JC as well, because that's a recurring topic; we are not the only ones to have that kind of issue, so, yeah.
C
I mean, if the issue on JFrog happens once or twice a year, it's totally worth it not to waste time on maintaining something else, or paying for something else. We have too many services right now; there are more production and QA issues that we should focus on, unless it really brings more value for the users. That's what I mean: we just have to balance correctly.
D
Yeah, I haven't seen that too much recently. There have certainly been, off and on, quite a lot of issues ever since they upgraded us to the new platform, just random instability. I think we're having more problems with core builds and remoting builds; there are a lot of random issues happening in the tests.
D
Possibly since we moved to the Kubernetes agents; it was kind of hard to tell, because they were having different issues on ACI.
A
Okay, perfect, thanks for your insights. Any last comments on repo.jenkins-ci.org?
A
So that's a discussion I saw happening on the Jenkins developer mailing list many times; it's a recurring issue, because we faced the same problem three or four years ago with repo.jenkins-ci.org, which was not able to handle the load. And then we realized, after a while, that the proxy cache performance was bad compared to the improvements that JFrog had made.
A
So we decided to remove the proxy cache at some point, and maybe the situation will just improve on JFrog's side, because, as we mentioned, they migrated our service to a newer platform. So maybe that's the time for them to better control, I mean to better size, the infrastructure below the service. So maybe we can just wait a few weeks and see how it evolves. We have two minutes left before the end of the meeting, so I propose that we quickly cover...
C
Please review my last PR, the one that enables automatic upgrades of the VMs on ci.jenkins.io. If it works, it will automatically open a pull request with the latest image builds today, and once that second pull request is merged, it will update ci.jenkins.io. So let's test the full end-to-end; then next week I'll focus on the team's feedback.