From YouTube: 2021 03 23 Jenkins Infra Meeting
A: Hi everybody, welcome to this new Jenkins infrastructure meeting. Today on the agenda we mainly have to cover a few minor outages that happened over the weekend, or over the past few days. The first one I want to mention: over the weekend, the job used to provide permissions to publish new plugins was broken. That job runs on trusted.ci, which means the new workflow to automatically release plugins was broken for the people who tried to use it. That said, only five plugins are using that workflow.
A: So that was not a major issue, but still annoying for those people who tried to release a new plugin; I think one of them hit it. I haven't really looked at what the root cause was, so maybe Gareth can help on this, but at least it reminds us that trusted.ci is a very protected Jenkins instance, where only a few people have access, which means that when we have an issue on that instance, it takes us a long time to discover it. One way to detect problems would be to configure notifications, like sending an email when a job is failing, or notifying IRC. So we definitely have to investigate some options for being notified about failing jobs on that instance.
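As a rough illustration of the notification idea discussed above, a declarative pipeline job can send an email from its `post` section when a build fails. This is only a sketch: the recipient address and the stage contents are made-up placeholders, not the actual trusted.ci job.

```groovy
// Sketch: email on failure from a declarative pipeline.
// 'infra-team@example.org' and the stage body are illustrative assumptions.
pipeline {
    agent any
    stages {
        stage('Do the work') {
            steps {
                sh './run-the-job.sh' // placeholder for the real job logic
            }
        }
    }
    post {
        failure {
            // Built-in mail step: runs only when the build result is FAILURE.
            mail to: 'infra-team@example.org',
                 subject: "FAILED: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                 body: "See ${env.BUILD_URL} for details."
        }
    }
}
```

An IRC or chat notification would sit in the same `post { failure { ... } }` block, which is what makes this pattern easy to extend later.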
A: Just to remind people, trusted.ci is a Jenkins instance running in a private location where we run really few jobs, but those are quite important. Typically, we build the Jenkins Docker images and we generate data for the jenkins.io website. We have a job named release-permission-updater, which handles permissions and grants, for instance, plugin maintainers the right to publish artifacts on our Artifactory service. Typically, that was the job that was failing over the weekend.
A: So the outage on that specific machine had some downstream issues, but everything is back to normal. The good point is that it gave us an opportunity to test the status service, status.jenkins.io. Daniel opened a PR there for tracking the status of the issue. What we identified is that really few people had the ability to merge PRs there; I was not the only one, but really few people could merge a PR.
A: So now I fixed that by allowing more teams: typically people from the core team and people who are currently on call, so in the future more people will be able to merge PRs there. Otherwise, the process to report an issue was pretty easy to use and pretty easy to review. We identified another small issue: we don't mention the time zone in the tickets.
A: I think the tool can handle the time zone; I saw a PR regarding that. I have to double-check whether it is already included in the version of the tool that we are using. The fallback would be to use UTC if we cannot have dynamic time zones. I tried to configure it for our time zone, and maybe it's already the case, but I have to double-check.
A: Any questions regarding this outage? While we are talking about issues on trusted.ci, I just spotted another issue right before the meeting: the plugin site is not updated anymore. The job that generates the plugin site data has been failing for 16 hours now, and the error is pretty obvious: it says that we are trying to fetch data that does not exist on the API; there is a GraphQL error.
A: What surprised me is that in the past we had a monitoring check that detected how old the data from the plugin site was, and this time the check did not trigger. So I have to double-check whether the API that we are monitoring changed, which is possible, because I think we switched from an Elasticsearch query to GraphQL.
B: Well, that monitoring check we're using seems aligned with what Gareth had lobbied for earlier: that we should be user-centered. If the plugin site's not updating, that's a matter of user perception; the users won't see things that arrive. So that makes it, to me at least, a very valuable check. Gareth, help me on that: is that the kind of thing you were thinking of, or is that still not user-facing enough?
C: Yeah, I think that would make sense. I mean, the lack of a plugin being available to download is the problematic thing, isn't it? That's something that somebody would experience.
C: Just going back to the outage of the weekend: whilst we could monitor the failure of the job, what we're actually interested in is not necessarily the failure of the job, but the failure of being able to release. I'm not sure how we would find this out, but it's the particular GitHub Action that triggers the release that is failing, because it can't upload the artifact.
C: That's what the user would experience; that is the problem. The job could fail while the credentials are still valid for, I don't know, six hours; I'm not sure how often they are rotated, but six hours, 12 hours.
A: So another option would be to monitor how long the job has been failing. If no job succeeds within an hour, then the severity of the issue increases, but we should not trigger an alert on any single failure, because a run may fail for some transient reason. The thing is, right now we don't have any notifications configured on our Jenkins instances: we don't send emails or notify IRC.
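The "alert only when the job has been failing for a while" idea can be sketched as a small check against the Jenkins JSON API, which exposes `lastSuccessfulBuild` with a millisecond timestamp. The base URL and threshold below are hypothetical; this is a sketch of the approach, not the monitoring we actually run.

```python
import json
import time
from urllib.request import urlopen

def is_stale(last_success_epoch_ms: int, now_epoch_s: float,
             max_age_hours: float) -> bool:
    """True when the last successful build is older than the threshold."""
    age_hours = (now_epoch_s - last_success_epoch_ms / 1000.0) / 3600.0
    return age_hours > max_age_hours

def check_job(base_url: str, max_age_hours: float = 1.0) -> bool:
    """Fetch lastSuccessfulBuild via the Jenkins JSON API and apply is_stale.

    base_url is something like
    https://trusted.ci.example/job/release-permission-updater (hypothetical).
    """
    url = f"{base_url}/api/json?tree=lastSuccessfulBuild[timestamp]"
    with urlopen(url) as resp:
        data = json.load(resp)
    last = data.get("lastSuccessfulBuild")
    if last is None:  # the job has never succeeded: definitely alert
        return True
    return is_stale(last["timestamp"], time.time(), max_age_hours)
```

A cron job or monitoring probe could call `check_job` and only page when it returns `True`, which avoids alerting on a single flaky run.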
A: I tried to use the Datadog plugin in the past, but that was not really successful, so we removed it for now. I think I'll end up configuring the IRC notifications. The reason why I like IRC is that, if we can keep the volume of notifications pretty small, it's useful to have that information on IRC, because we always have one person available who can look at it.
A: We've been using IRC notifications for the Puppet master for a while, and this has been really useful. That is because we don't have a lot of notifications, maybe one or two per month, so we just have to double-check that we don't spam.
B: And do those notifications go to #jenkins-infra, or where? For the Puppet changes, I mean.
A: Sometimes you see something like "puppet failed on that machine", and it's pretty obvious that, for some reason, the Puppet apply did not work on that machine. Most of the time it's just a full disk, or something like trying to install a package that does not exist. There are different reasons why the Puppet run would fail, but because it does not happen regularly, we usually fix the issue within 15 minutes.
A: We see the notification, look at it, fix it, and that's it. Another thing we should start doing is migrating some of the jobs running on trusted.ci to infra.ci.
A: Those I have in mind, for instance, are the job that generates the javadoc and the job that generates the jenkins.io website. Those could be easily moved to infra.ci.
C: It's just that the image at the moment isn't ready, and it downloads the plugins each time. But after the chat with Olivier this morning, that will soon be rectified.
A: So yeah, that was a minor permission issue that I had to fix this morning. As Gareth mentioned, he has been working on a process to automatically build the Jenkins Docker image for the Jenkins infra project. The idea is that each time we release a new version, a new weekly or a new stable version, we fetch that information and build a new image, and we also do so each time a new version is available for a plugin.
A: We update the Docker image containing that new plugin version, so the idea is to directly ship a Docker image with everything packaged, for stability. In the current configuration, without the work that has been done by Gareth, we use JCasC to install everything, and we use the Helm chart; in one part of the Helm chart configuration we list the plugins that we need, and each time the Docker container starts, it tries to reinstall every plugin.
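For readers unfamiliar with that setup, the plugin list in the Helm chart values looks roughly like this. The key names follow the jenkinsci/helm-charts convention and may differ by chart version, and the plugin versions shown are made up:

```yaml
# Excerpt of a hypothetical values.yaml for the Jenkins Helm chart.
controller:
  # Every plugin listed here is downloaded and installed again each time
  # the container starts, which slows down startup and fails entirely
  # if the download mirror is unavailable at that moment.
  installPlugins:
    - kubernetes:1.29.2
    - workflow-aggregator:2.6
    - configuration-as-code:1.47
```

Baking the plugins into the image instead removes the startup-time dependency on the mirrors.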
A: So typically what happens is that, for some reason, and it has already happened in the past, we have to restart the Jenkins instance, and then we cannot install plugins because there is an issue with a third-party mirror, and so it does not make sense.
A: It would take us like 15 minutes to start the service, even if we didn't change the plugins, just because by default we remove the plugins from the disk and try to reinstall them; it slows down the startup time. So that's what we want to change. Basically, what we were missing is git tags, or at least a way to clearly identify which Docker image contains which version of Jenkins and which plugins, so we can roll back in case of issues. This is something that should be solved pretty quickly.
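A minimal sketch of the "everything baked in and identifiable" idea: pin the Jenkins base image, install the plugins at build time, and encode the combination in the image tag. The base image tag, plugin versions, and resulting tag below are all illustrative assumptions, not the actual infra setup:

```dockerfile
# Illustrative: bake the plugins into the image so startup needs no mirror,
# and tag the result so the Jenkins/plugin combination is identifiable.
FROM jenkins/jenkins:2.277.1-lts
# jenkins-plugin-cli ships with recent official images and resolves
# plugin dependencies at build time rather than at container startup.
RUN jenkins-plugin-cli --plugins \
      kubernetes:1.29.2 \
      configuration-as-code:1.47
# Built e.g. as: docker build -t myorg/jenkins:2.277.1-20210323 .
```

With a tag per build, rolling back after a bad plugin update is just redeploying the previous tag.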
A: On another topic: some contributors are not active anymore, but at the same time I would like to keep them in the Jenkins GitHub organization, so we can still notify them if we expect some PR reviews or whatever. So I've been thinking of creating a group named "alumni"; by default, every person who doesn't contribute to the project anymore would just be put in that specific group. We would still be able to notify them if we need some reviews, and their PRs would still be considered.
A: But at the same time, we would need someone more active to merge their PRs. The people that I have in mind are people like Tyler; Kohsuke and Marky Jackson officially stepped down, and we have a lot of people in the Jenkins GitHub organization who haven't contributed to the project for a really long time.
A: The idea is to give them read permission to public repositories, not private repositories. One thing I'm maybe a bit concerned about is notifications: if they are read-only on every git repository, by default they might receive a notification each time someone opens a PR, but then it would be the person's responsibility to opt out.
A: So that applies to public repositories, but having them in a team would allow us to notify them: you could just mention the team name and they would receive a notification specifically. You wouldn't use the team for permissions, though. But, for instance, when you open a PR and you want to assign that PR to someone, or request a reviewer, that person needs to be in the organization.
D: I think that, unless they've contributed to the repo before, or changed the file recently, they shouldn't get any notifications just by having read access.
A: Okay, that's my only fear, and if they don't get notifications, then that's perfect. The thing is, we have quite a lot of people that would be in that scenario. They would still have the small Jenkins infra logo on their profile as well. Anyway, I'll send a notification about this later today.
D: I was just going to ask: has anyone done the sync work? It says five plugins were affected while trusted.ci was down over the weekend; has someone synced the plugins that were released while it was down?
D: When trusted.ci was down, whatever plugins were released from, I think, Friday to Monday won't have been published to the update site, updates.jenkins.io.
D: The update center by default only syncs the last three hours of releases, and it relies on the fact that it runs every three minutes. There are flags that you can pass to the update center to increase that window, or you can sync everything, but syncing everything takes hours.
D: The update center output is built every three minutes: each run builds and outputs a JSON file with the releases of the last three hours, and then it runs a script on the package server.
D: Yeah, it can be run manually, but it's easiest to just let the update center build the list for you; or you can manually check Artifactory if you want, but that's not the easiest.
D: It's up to you; it's just a flag that you pass to it, for how many hours of releases you want it to output.
A
Is
it
is
it
also?
Is
it
also
the
script
that
uploads
artifacts
to
get
the
jenkins
that
are
you,
yeah,
yeah?
Okay,
right,
I
see
which
crypt
I'm
thinking
of.
A
Okay,
I'll
look
at
it.
I
think
it
would
be
easier
to
check
just
keep
yeah
when
manually
run
the
script
I'll
think
with
daniel
as
one.
D: Yeah, Daniel first ran it on Thursday; there was an outage last week as well, when I think the certificate validity got under 30 days. So basically the same thing happened last week, and Daniel did the sync last time. But then trusted.ci went down again on Friday, and it needs doing again.
A: That's a good point; you remind me of something I forgot to mention. When the job that updates the update center runs, it tests that the certificate of the update center is valid, I mean, valid for at least one month.
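That kind of validity check can be sketched with a small pure helper. The date format below matches what Python's `ssl.getpeercert()` returns for `notAfter`, and the 30-day threshold mirrors the rule described above; the actual job may implement this differently.

```python
from datetime import datetime, timedelta

# Format used by ssl.getpeercert()['notAfter'], e.g. 'Apr 21 12:00:00 2021 GMT'
CERT_DATE_FMT = "%b %d %H:%M:%S %Y %Z"

def cert_expires_soon(not_after: str, now: datetime, min_days: int = 30) -> bool:
    """True when the certificate expires within min_days of `now`.

    A job could call this and fail (or warn) when it returns True,
    prompting a certificate rotation before users are affected.
    """
    expiry = datetime.strptime(not_after, CERT_DATE_FMT)
    return expiry - now < timedelta(days=min_days)
```

Failing hard on this condition is what broke the job here; a softer design would emit a warning notification first and only fail much closer to expiry.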
A: We have to rotate the update center certificate, and we are planning to do that next week, on the 29th of March. But because the current certificate is expiring in less than one month, the job was failing, so Daniel just temporarily removed that condition about failing if the certificate is expiring soon.
A: I just have to generate the certificate, upload it, and update the configuration on trusted.ci. I have done that multiple times, and each time it takes me 15 minutes or so, but I'll work on that next Monday.
A: We had a small hiccup earlier today with a timeout issue on the Jenkins agents, so we had to retrigger the build, but everything went fine the second time. The thing is, we don't often have issues with the release environment, so it's not a common issue, but still, we have to keep an eye on it.
D: Yeah, it's going to expire the day before, looking at my calendar, unless something gets changed.
B: I mean, the job's going to fail the day before. We could ask Daniel, or we could submit a pull request to give it 12 days instead of 14, so that we don't have to have you working on a weekend, Olivier, to do that certificate rotation. I'd rather stay with the 29th as the rotation day, if it's okay with you; otherwise we can do the certificate rotation on Friday. And I'm more prone to Monday, just personally; we announced it for Monday, so I just assume Monday.
A: Okay, thanks everybody for your time, and see you on IRC. Have a great day, bye.