From YouTube: 2020 08 11 Jenkins Infra Meeting
Description
Jenkins infrastructure meeting, August 11, 2020, with topics including an LDAP outage, tomorrow's security releases, Jira upgrade plans, the identity management transition to the Linux Foundation, and more
B
All right, welcome everybody. This is the Jenkins infrastructure meeting; it's the 11th of August 2020. We've got a number of topics on the agenda. Olivier, do you want me to share the agenda while we go through this? Sure. Okay, so I'm going to go ahead and share my screen, and you've got the first topic, Olivier, so let's bring it to you.
C
Yep, I do, yeah. I would just like to exit full screen... yes, perfect. So basically, the outage that we had this morning is in the certificate in the Kubernetes resource configuration.
C
So basically, what we had in the past is that we used to have a certificate that we bought from GoDaddy, and then the certificate expired. Three months ago we moved to a Let's Encrypt certificate, so the certificate is renewed every three months. But the LDAP configuration needs three components: the ca.crt, the key, and obviously the certificate for LDAP. And cert-manager does not provide a ca.crt.
C
So I had to retrieve that information from the Let's Encrypt website. Basically, what happened is that, because of the way the LDAP image was designed, we needed to pass those three parameters in the configuration when the container started. The problem is that, because of how Kubernetes mounts volumes, we needed to have those three components in the same volume.
C
So what I did is I slightly modified the Docker image to fetch the ca.crt from a different directory, so that I didn't need to have the ca.crt in the Kubernetes resource. But to go faster, I just manually modified the secret in order to add that information to the secret on the Kubernetes side as well, even if the container does not use it. So if you look at the two PRs that I created this morning: first, I removed the mention of ca.crt from the StatefulSets.
C
So we do not try to mount that information, because it does not exist in the secret, and it should not. That's one of the things. The second thing is that I updated the Docker image to not fail if it cannot find the ca.crt in the expected location.
C
So that's basically how I found it, and it was quite simple. I tried to connect to one of our services, Jira. Initially I thought that my password had been changed for some reason, or compromised, whatever. Then I realized that the LDAP container was not running anymore, so I just looked at what happened in the Kubernetes cluster, and indeed the LDAP container was not running.
C
So basically, what I did this morning to solve this, because we needed the LDAP service to be running again: I just modified the Kubernetes secret that contained the certificate and the private key, and I also added the ca.crt to the secret. So I did the same fix that I did three months ago. I opened two PRs; the two PRs are documented in the Google Doc here. Once we merge those two PRs, it should be done.
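The manual fix described here, adding a `ca.crt` entry next to the existing certificate and key in the Kubernetes secret, can be sketched roughly as below. The secret name, namespace, and file contents are assumptions for illustration, not the project's actual resource names.

```shell
# Minimal sketch, assuming hypothetical names: build the JSON merge patch
# that adds a ca.crt entry to an existing Kubernetes TLS secret, so the
# LDAP pod can mount tls.crt, tls.key and ca.crt from the same volume.
printf 'fake-letsencrypt-root' > ca.crt   # stand-in for the real CA file
CA_B64=$(base64 < ca.crt | tr -d '\n')    # secret data entries must be base64
PATCH="{\"data\":{\"ca.crt\":\"$CA_B64\"}}"
echo "$PATCH"
# Applied against the (assumed) secret with:
#   kubectl patch secret ldap-tls -n ldap --type=merge -p "$PATCH"
```

Because the patch is a merge, the existing `tls.crt` and `tls.key` entries are left untouched.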
C
It should be fixed. The second main issue that we may have, though it is mitigated at the moment by the way we use the LDAP image, is that because we switched to Let's Encrypt certificates, the certificate is renewed every three months, and the container needs to be restarted within those three months. In practice we rebuild the LDAP image every week to pick up all the latest updates, so the container is restarted anyway.
B
So that's consistent with our Jira container, which we must restart every two weeks. If we don't restart it every two weeks, it has exorbitant disk usage and causes itself problems. So Jira container restarts, at least for me right now, are just a fact of life. Good, okay.
C
But in the case of LDAP it should not be an issue, because there is a health check on it. If the LDAP container does not work for some reason, the container is simply rebooted, and it just takes a few seconds for the container to restart. So that's one of the risks, and we still have things to improve there. But anyway, once we move the LDAP system to the Linux Foundation, we won't have to work on that.
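The health check mentioned here could look something like the following Kubernetes liveness probe. This is a sketch only; the port and timings are illustrative assumptions, not the values actually deployed.

```yaml
# Hedged sketch: a liveness probe that restarts the LDAP container when it
# stops answering. Port and timings are assumptions for illustration.
livenessProbe:
  tcpSocket:
    port: 389            # standard LDAP port
  initialDelaySeconds: 15
  periodSeconds: 30
  failureThreshold: 3    # restart after roughly 90s of failed checks
```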
C
So for now I will just merge my two PRs regarding LDAP once we are ready. Because we have ongoing releases at the moment, I don't want to impact the work being done there, so I prefer to hold for a few days before merging the PRs.
B
Okay, super. Olivier, thanks very much for that status. Are we ready for the next topic, then: upcoming releases tomorrow. For this one, Olivier and Daniel Beck are going to monitor delivery tomorrow, and I'm acting as backup if for some reason Olivier were unavailable. If Olivier becomes unavailable, Daniel's plan is to start a little later, so that he can do it while I'm normally awake rather than starting at roughly midnight my time.
B
But we don't expect any surprises there. We believe the infrastructure is stable, steady, and ready to go. Upgrade guides have been written; the upgrade guide and release notes haven't been merged yet, but they look good. And I think, Alex, this is the first delivery of the Windows installer on LTS.
C
No, that sounds great. Also, we have a bunch of PRs on the release Git repositories, and I think that we should not merge them before tomorrow. We have to be sure that we do not change this process; if we want to change it, then we have to wait for the next weekly, because that is less critical than the security release. I mean, we don't merge anything before tomorrow, and more globally, we don't merge anything that could impact the release process. That's why, for example, we do not merge the LDAP changes right now.
C
On that topic, I want to highlight something that happened, and I was really surprised it worked now, because it did not in the past in Kubernetes. Last week, while Daniel was preparing the release, I accidentally merged a PR that updates the Jenkins version used by the release, so I updated the release Jenkins. The master was restarted, and it did not impact the agents: the agents were able to correctly reconnect to the master and do the work as usual.
C
Something that was working, I mean; in the past we secured this agent so that...
C
Right, while we talk about the release, there is one topic that I put at the end of the agenda that I would like to mention here; I will just copy-paste it here. So basically, I've been investigating different ways to reduce the cost of the Azure accounts. One of the main costs on Azure, taking about 30% of the cost, is the packages that we store on Azure and the bandwidth to fetch the packages from there.
C
So right now, when you go to pkg.jenkins.io and you try to download a package, there is an htaccess rule that redirects the request to our Azure file storage, which means that the Jenkins project is paying for the bandwidth for everything. It would be better to just rely on our mirror infrastructure.
C
So that was one of the motivations why I investigated mirrorbits. Mirrorbits is deployed on get.jenkins.io; while we still have some improvements to do there, I think that for packages and keys it should be ready. There is something that I would like to keep in mind for tomorrow's release.
C
The behavior with mirrorbits is that it builds the hash for every file and then compares that hash on the remote. So if, for some reason, the file hashes between mirrorbits and the remote mismatch, then it falls back to another specific endpoint that I configured. For that fallback endpoint I configured our archives server, which we own, because we control it and we know that we have all the old packages available there. And so there is something that I would like to test tomorrow when we do the release.
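The hash-then-fallback behavior described above can be sketched as a small shell check. The file names and contents below are stand-ins, not the real mirrorbits implementation, which does this comparison internally per mirror.

```shell
# Minimal sketch of the mirrorbits-style check described above: compare a
# mirror's copy of a file against the reference checksum, and fall back to
# the archives server on mismatch. Files and names here are stand-ins.
printf 'jenkins.war contents'  > reference.war    # the canonical file
printf 'truncated or stale'    > mirror_copy.war  # what a bad mirror serves

ref_hash=$(sha256sum reference.war   | cut -d' ' -f1)
mir_hash=$(sha256sum mirror_copy.war | cut -d' ' -f1)

if [ "$ref_hash" = "$mir_hash" ]; then
  echo "serve from mirror"
else
  echo "fall back to archives server"   # the controlled fallback endpoint
fi
```

With the mismatching contents above, the check takes the fallback branch; identical files would take the mirror branch.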
C
I would like to update the pkg.jenkins.io htaccess rules so that they redirect not to Azure, but to get.jenkins.io, to download the packages. I created a Jira ticket about that work, because the pkg.jenkins.io origin server is no longer managed by Puppet; at least at the moment the Puppet agent is disabled, so it is just a manual fix. I documented in the ticket all the files that need to be modified. So what I suggest is just to do that, for example, as a first iteration.
C
It depends on the mirror you get; it really depends on how you are routed, because in this case you are really downloading from it. For me it was faster to use a local mirror, because there are two mirrors near me, one in Germany and one in Holland, but I guess it will be different for other people.
C
We have six mirrors at the moment. And all of those, are they HTTPS-enabled? Yes, they are HTTPS-only. So let me share my screen so I can give you a look.
C
You can specify parameters in the URL, so I'm going to add the parameter that gives me the mirror list. In this case, for this specific file, you can see that no mirror contains the file, because that file is really old; it's like one of the first Jenkins releases. No, it was still Hudson. So if you try to download it, then you will fetch the package from the archives server, and we do not test the hash or anything; we just serve it.
C
That's the fallback service. On the other side, if we want to test, let's say, one of the latest versions, this one... I'm going to take a link, same mirror list: everything is happening on HTTPS, and right now we have six mirrors that contain that specific file, with the right hash matching the one on mirrorbits. And obviously, once we do the release, the first thing, because it's part of the process, is that we update the mirrors we control ourselves, but then the third-party mirrors sync later.
C
Those, I mean, will be updated once the files are available. They are all working on HTTPS only anyway, because I disabled HTTP; the purpose of get.jenkins.io is to work on HTTPS only. But yeah, that's what it looks like. And something that I would also like to test now is having stats about how many people download those... oops.
C
And in the .htaccess file, we have a rewrite condition: if it is a binary .deb, then redirect to the Azure blob storage endpoint (a blob.core.windows.net URL). What I want to do, basically, is to replace this rule with this one.
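The rule change being described might look roughly like this in the `.htaccess` file. The URL patterns, the storage account name, and the flags are assumptions for illustration, not the production configuration.

```apache
# Hedged sketch of the proposed .htaccess change (illustrative paths/hosts).
# Old behaviour: send .deb downloads straight to the Azure blob storage,
# so the project pays for the bandwidth:
#   RewriteRule ^(binary/.+\.deb)$ https://<account>.blob.core.windows.net/$1 [R=302,L]
# Proposed behaviour: hand the request to mirrorbits on get.jenkins.io and
# let it choose a mirror (or the archives fallback):
RewriteRule ^(binary/.+\.deb)$ https://get.jenkins.io/debian/$1 [R=302,L]
```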
C
Yes, get.jenkins.io: that's basically what I want to do. It is what I tried several months ago while I was working on this. I tried to... oops, I will just revert this today.
C
What I tried several months ago was to reuse the initial rewrite rule to redirect to the old mirror host, but basically it fails, because that host does not support HTTPS, and the Debian package manager will fail; you must have HTTPS. Okay, yeah. So that's the change that I would like to make. So just a reminder: tomorrow we test whether get.jenkins.io already contains the right packages, and if it does, I will start to make the change, just manually at first.
C
But we are not there yet. In the case of Tim and me, because we are quite close in terms of location, for us it's a little bit faster. Something that we have to keep in mind is that the Azure blob storage that we are using is located in East US 2. So, for example, probably for you, Mark, it would be faster, but we have mirrors in your location; but, for example, for people from China, or Asia more globally...
C
Yeah, and something regarding the mirrors: we regularly had people proposing mirrors, and we declined in the past. The main reason for that is that we initially thought we had enough money to distribute the packages ourselves, because then you have full control of the pipeline. But today we have to find alternatives, and relying on the mirrors is maybe more interesting. So we could definitely promote that and have more mirrors in the future.
C
Okay, then I will just stop sharing. There is one last topic; it's regarding... I can just continue on this one, Mark, if it's okay for you? Sure.
C
But basically, what's happening right now is that we have two ways configured in the infrastructure, and it's not consistent between the distribution packages. Either we push a symlink on the mirrors, and then, if you go to, let's say, windows/latest, you are redirected using the symlink; or we are using htaccess. So, for example, in the case of Windows, we had the two configured at the same time, and so, depending on the mirror or location...
C
Sorry, depending on your mirror: sometimes it was using the symlink, and sometimes it was using the htaccess, because we have mirrors that disable htaccess, basically. And so we have to find a way to maintain that link.
C
Personally, and this is just a personal opinion, I would like to not use htaccess anymore, because it puts a strong dependency on Apache; it means that everyone who provides a mirror needs to use Apache in order to work with our infrastructure.
B
Thank you. All right, so next topic; we are almost out of our 30 minutes, so very brief: the Jira upgrade plan. We are currently running Jira 7.13, which will reach end of support at the end of November 2020.
B
As part of that transition, we're meeting with them today; others are welcome to join if they wish. It's a conversation about what the limits are of what they are willing to do, what they can do for us, etc.
C
No, I think it's just better to stick to the schedule. I wanted to organize a session to talk more about the Puppet infrastructure, and I wanted to do a small demo, because last week I gave Tim access to the machines, but we probably won't have the time to talk about that today.
B
All right, thanks everyone. Let's go ahead and call an end to this meeting. I will post the recording to the Jenkins YouTube channel. Thanks very much, thanks for your time, have a good day. Bye.