From YouTube: 2023 05 30 Jenkins Infra Meeting
A: Our collaborative notes are ready, perfect: let's get started with announcements. The weekly release 2.407 has been released successfully, at least for the packaging part, including the Docker image. So that means, Stefan, whenever you want, you can update the weekly on infra.ci.
A: Cool. Mark, just to double check, because we mentioned it one or two weeks ago: it looks like creating the tag on the jenkinsci/docker repository was effective, and within about 10 minutes after that, only the new release version was pushed to Docker Hub successfully. Is that correct?

C: Yes.
C: So, I created the 2.407 tag in my local repository and then did a `git push --tags`. Unfortunately, that pushed four tags, not one, because I apparently had some latent tags sitting in my working repository that were not on the remote. I promptly deleted three of the four tags, because they were junk tags that didn't belong there, but I was worried that I might have damaged the build process. But, Damien...
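The pitfall described here can be reproduced locally. A minimal sketch (all repository paths and tag names are invented for illustration): `git push --tags` sends every local tag, while naming the tag pushes exactly the one you created.

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
git remote add origin "$tmp/remote.git"
git push -q origin HEAD

git tag 2.407      # the intended release tag
git tag junk-1     # latent local tags that must not reach the remote
git tag junk-2

git push -q origin 2.407        # pushes exactly one ref: refs/tags/2.407
git ls-remote --tags origin     # lists only 2.407, not the junk tags
```

Using the explicit tag name (or `git push --follow-tags`, which only pushes annotated tags reachable from the pushed commits) avoids leaking stale local tags.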
A: Those safety checks worked exactly as designed in the setup of the job on trusted.ci. The current setup is: discover all existing tags; if a tag is removed, then remove it immediately from the build history; and finally, don't trigger a build for tags older than three days. Which means that if, by error, someone changes a tag from the past, which is the situation you were in, that won't rebuild and override the existing image. Right? Okay.
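The three-day age filter mentioned above can be sketched as a small shell filter over the repository's tags (the tag names and the exact mechanism are illustrative; the real job is configured in the trusted.ci pipeline, not this script):

```shell
set -e
cutoff=$(date -d '3 days ago' +%s)   # GNU date
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m init
git tag fresh-tag                    # created just now, so it passes the filter

# Only tags created within the last 3 days become build candidates, so
# re-pushing or mistakenly changing an old tag never overwrites a
# published image.
git for-each-ref refs/tags --format='%(creatordate:unix) %(refname:short)' |
while read -r created tag; do
  if [ "$created" -ge "$cutoff" ]; then
    echo "build candidate: $tag"
  fi
done
```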
A: But that will be a topic for the SIG Platform. Okay, of course that's the next step, but now for the infrastructure report: today we have demonstrated that we are no longer overriding existing tags unless we specifically trigger a build, and that allows contributors to have their pull requests merged way faster.
A: And now, what that means for infrastructure: the next step for us will be... I propose we wait for tomorrow's LTS, and next week we discuss again the topic of automating the release, the part that builds the Docker content made from the release. That could be as simple, for today, as release.ci creating the tag once it has finished the packaging, or maybe it could start earlier. But at this moment in time, the weekly release process should be able to create the tag on the repository by itself.
A: So, proposal: wait until next week to discuss this one. Good for you? Yes. Let me take a note of that element. Do you have other announcements?
C: So you may see noise in online forums, you may see noise in various places, saying: "hey, Jenkins is now telling me that they're not going to support CentOS 7 anymore". The correct answer to that is yes, that is accurate: beginning mid-November of 2023, the Jenkins project will no longer support CentOS 7.
C: Yeah, Red Hat Enterprise Linux 7 is the base, but the vast majority of users are probably actually using CentOS, not Red Hat, not Oracle Linux and not Scientific Linux.

A: Nice, it's picked, and there will be a blog post. Isn't it today? Isn't it, Bruno? So I'll publish the blog post we've got.
A: So please don't break the infrastructure tomorrow. I haven't seen any advisories published, so we should probably not expect public advisories on the mailing list, and I don't know about the next major events.
A: Okay, so then, the task that we were able to finish: we invited a student from the GSoC project to the plugin repository concerned and the associated team. We had to work on different areas because we weren't sure about the initial need, but it looks good for Adrien. So I've closed the issue.
A: I guess that's correlated with the rise of ChatGPT, and I'm afraid that maybe ChatGPT could send people to create an account on ci.jenkins.io. I'm not sure how to deal with that; no, I'm not really interested in doing things with ChatGPT. So anyone with a good idea, knowledge or skills on that part could help us. Maybe my theory is wrong, but if that's the case, influencing ChatGPT to tell people not to open issues there, to not redirect users to open issues, could be a great thing. Maybe.
A: Yeah, you can't create an account there anymore. Thanks, Mark, for handling that. Thanks, Stefan, for working on ensuring that the old JS properties and Bootstrap 4 were removed from all of our controllers, with all the involved chaos that we can have between the Docker images, the Puppet-managed and the manually managed everywhere. Anything to add on the topic, Stefan?
B: No, it was just in case I forgot something.

A: No, I don't think there is a task, feedback or postmortem to do on this task, right? No? Cool, thanks for that. I see you closed the Azure ARM64 virtual machine.
A: The next step for me will be to check the impact on the AWS billing, most probably later this week or at the beginning of the new month. I think that will be a minor change compared to the other costs, but I'm sure it will be visible this month.
A: Another issue: jumping to "Auto-link references for core". I have no idea what this issue is; I assume it's something that has been done between Alex and Tim. That was a request for the Jenkins project.
A: Is that correct?

C: That is correct, and it's working quite well. The 15-plus plugins that I maintain are all now using auto-links, and I've confirmed they work for those plugins that use the Jenkins Jira as their bug tracker; this makes it a little easier. Most of our work in Jenkins infra is not tracked in the Jenkins Jira, right, we've intentionally switched to GitHub issues, so this doesn't help us as much, but plugin maintainers like me benefit from it if they choose to enable it themselves.
A: Cool, thanks for the explanation, I had forgotten what it was about; now it's way more clear. Next, the combination "clean up, import and manage Datadog monitoring in Terraform". I'm going to be the voice of Hervé here, so thanks Hervé, even if you're not there. There have been numerous ways of creating Datadog monitors. Monitors are objects in the Datadog API and UI that allow creating thresholds and conditions, to alert us when these thresholds are crossed or when these conditions are met.
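Managing such a monitor as code can look like the following sketch with the Datadog Terraform provider. The resource name, message handle and thresholds are illustrative assumptions, not the project's actual configuration:

```hcl
# Illustrative monitors-as-code sketch (not the jenkins-infra config).
resource "datadog_monitor" "inode_usage" {
  name    = "Free inodes low on {{host.name}}"
  type    = "metric alert"
  message = "Inode usage is high. Notify @infra-team" # hypothetical handle
  query   = "max(last_5m):avg:system.fs.inodes.in_use{*} by {host,device} > 0.9"

  monitor_thresholds {
    warning  = 0.8
    critical = 0.9
  }
}
```

Keeping monitors in a Terraform repository makes threshold changes reviewable and reproducible instead of being hand-edited in the UI.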
A: So we did a big cleanup on that part: everything is managed as code from our Terraform repository. So thanks for that big cleanup, everybody. We removed false positives, and there was a hidden task behind it, based on the feedback from both Hervé and Stefan: when we reached a "no disk space left" error, I don't remember on which service, we were still having a lot of free disk space, and we all realized that it was full on inodes.
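The failure mode described here is easy to check for: a filesystem can report "No space left on device" while `df -h` still shows free bytes, because the inode table is exhausted. A quick diagnostic (GNU coreutils `df`):

```shell
# Free bytes vs. free inodes: both can run out independently.
df -h /    # byte usage
df -i /    # inode usage: Inodes / IUsed / IFree / IUse% columns

# Extract the inode usage percentage for / (column layout of GNU df -i)
df -i / | awk 'NR==2 {print $5}'
```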
A: So now, thanks to your actions, folks, we have a monitor that not only monitors the free disk space but also the free inodes per device, which will alert us in the future and avoid blocking a service like that problem did. So, nice iteration, folks, I'm really proud. And one last issue; we don't need to spend more time on this one. Now let's jump on the work in progress.
A: The status is that we migrated a few services during the past milestones, and they are now running properly on the new cluster. Here you can see the list of what we have already migrated: Javadoc; Keycloak, which was migrated last week without any problem; and the incrementals publisher service, which is basically a webhook receiver that receives messages from the pipeline library on ci.jenkins.io.
A: That one was migrated with a 10-minute downtime, after letting everyone know. Oh, by the way, we should think about making it highly available: it's a stateless service, so adding a replica wouldn't hurt.
A: Then there are the little icons that you click to vote and rate the releases, which are used for telemetry sent from Jenkins controllers all over the world so we can get some statistics. Both of these services have the same topology: a PostgreSQL database and two replicas running on the cluster, so we are going to migrate both at the same time, in one or two hours.
A: Next one: migrate trusted.ci from AWS to Azure. We both worked on this one, Stefan and I; Stefan was my rubber duck. The workload has been migrated to the new virtual machine successfully, so the next step is to run the effective migration of trusted and see what happens.
A: We might start tomorrow, but yeah, as a good and really wise tip from Stefan: maybe avoid doing this on the day of the LTS, or even a few hours after, because I'm sure the LTS will be done quickly. So that's why the proposal is for Thursday.
Until then, the next step will be to taint the virtual machines, which means destroying and recreating the machines from scratch, to clean up any temporary traces that we might have left, without crushing the data already migrated to the data disk.
A: We won't, because the data disk won't be tainted, only the virtual machines, and then we will see what the next steps will be. There might be some fine-tuning afterwards, especially on the security groups, but now we have reached the same quality level and the same feature set as what we have on AWS, so we should be able to proceed with the next steps.
A: Most critical is the update center, but that should be quick to build and republish; and the second one will be the RPU. Most of the issues we should have after that migration will be IP openings on different firewalls, because it used to be another machine, so the new IPs used by its agents and its controller have changed, of course. So, for instance, when we want to push the updates from the virtual machine, we might need to update the configuration.
A: The plan includes, most probably tomorrow, one day before the operation: status.jenkins.io will be updated and an email will be sent to the Jenkins mailing list, because that one has quite an impact, so better to communicate. Even if everything works, I prefer letting people know.

C: Good point, thanks for the reminder, that's important.
A: Next task: install and configure the Datadog plugin on ci.jenkins.io. Hervé and I worked a bit on how to make the Datadog plugin, installed inside the ci.jenkins.io container, communicate over UDP with the Datadog agent running on the host machine. Behind it, it's mostly a question of setting up the agent to listen on the proper network interface so the container can reach it on the host, because by default the agent only listens on localhost, which is not available.
A: The localhost of the host machine is not available from a container, so it's only a matter of finding the proper Puppet setup for the agent: the agent's YAML configuration file will be updated to listen on the proper network interface. Hervé told me he will be able to continue working on this in the next milestone, so I will keep the issue there. He did some successful manual tests on the machine, which have been overridden since then by the agents.
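For reference, the agent-side change being discussed typically amounts to one setting in the Datadog agent configuration. This is a hedged sketch, not the actual Puppet-managed file:

```yaml
# /etc/datadog-agent/datadog.yaml (illustrative fragment)
# Accept DogStatsD UDP traffic from non-localhost sources, so a container
# can reach the agent via the host's bridge/network interface instead of
# the host's localhost (which is unreachable from inside the container).
dogstatsd_non_local_traffic: true
```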
A: We had a question about adding a JitPack repository, so that a contributor could build their own plugin. I haven't had time to look at this one. Most probably, we will act in the short term by adding an exception on the ACP (artifact caching proxy): instead of the ACP trying to get it from our JFrog repository, the build will directly bypass the ACP for that specific repository.
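On the Maven side, such a per-repository exception is usually expressed with a mirror exclusion. A sketch, assuming a repository id of `jitpack` and an illustrative proxy URL (neither is confirmed by the meeting):

```xml
<!-- settings.xml fragment: route everything through the caching proxy,
     except the repository id "jitpack", which bypasses it. -->
<settings>
  <mirrors>
    <mirror>
      <id>acp</id>
      <name>Artifact caching proxy</name>
      <url>https://repo.example.jenkins.io/public/</url>
      <mirrorOf>*,!jitpack</mirrorOf>
    </mirror>
  </mirrors>
</settings>
```

The `!id` syntax in `mirrorOf` is standard Maven and excludes that repository from the mirror rule.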
A: The idea will be to plan a Kubernetes cluster upgrade next Monday for DOKS and cik8s, the two clusters used by ci.jenkins.io, because these don't have anything related to high availability, load balancers or persistent volumes. They don't have these three features, so that could be an easy place to start.
A: "ACP is unreliable": we didn't work on this one, so that was a time management mistake. For this one, right now, the next steps will be to be able to have inbound agents for the Azure VM agents on ci.jenkins.io.
A: Why are these three machines special? They are virtual machines or bare-metal machines, I'm not sure; I think they are virtual machines hosted by the OSUOSL organization, at Oregon State University. The question is: if we upgrade the distribution in place and reboot, will the kernel work with the virtual machine hypervisor behind it?
B: I started a PR to define, oh sorry, to define the new node pool, and the main problem, not really a problem, was to find the correct machine type to use as an ARM node.
B: It should go ahead now, and maybe we will have to work a little on a new node pool for the Intel one, to just rename the node pools and have something more coherent, yep.
A: Homogeneous. Is it okay for you to prioritize once the node pool is created successfully? Because, you know, with Terraform and Azure, we know that sometimes the plan says "I should create these resources", and when it's time to create them, it fails with whatever error, and then you have to iterate. So once it's created successfully, your proposal is that you start working on it.
A: Again, the goal of that issue is to execute some of our workloads, mostly websites or static websites, on ARM64 machines instead of Intel, so we can decrease the cost per request, or the absolute cost, of these workloads. Now, the backlog: let's cover the triage and new issues. Mark, I think you can start, because I saw you opened an issue about BOM problems earlier today. Is...
C: That correct? Yes. And I think you mentioned something that may help in your earlier comments, I'm not sure. So what we see is that attempts to release the Jenkins plugin bill of materials failed over the weekend, on three different attempts. Each attempt failed, taking from seven and a half up to nine hours to attempt the run; previous releases that had been successful with this configuration took six hours. So there may be some change in the last seven or eight days that has caused things to become slower.
C: We will take some actions on the bill of materials side. Right now we're supporting four release lines: 2.361, 2.375, 2.387 and 2.401. We will very soon drop 2.361, so that should reduce our run time somewhat right there, immediately. But there may have been other changes that are worth further consideration in the infra team; the problem here may go away just with the changes we'll make on the bill of materials to reduce from four configurations to three, but if it doesn't, we'll then need help.
A: Okay, so we will still need to look at the logs and see what happened, because of, eventually, the spot instances eviction rate that could have grown on these instances; we can check this. And also, yeah, I fear that we will still be stuck with the contention for resources on ci.jenkins.io when there are BOM builds, and when steps start to take absolutely unexpected times: simple steps that should take seconds take minutes.
C: Yeah, so there are plans already for the BOM to take some actions. Tim Jacomb even replied to that one saying: "hey, we don't even need to wait for the release of 2.401.1, we could drop 2.361 immediately". And I think it's a valid statement, because 2.361 has no known security vulnerabilities and users should be running 2.375 or newer; actually, they should be running 2.387 by now.
A: Yes, he created a repository and we removed it because we had no news for months on that topic, even after asking him, so he recreated everything. So we have to edit this, and I will just scan the configuration state of that repository and everything. I assume it's part of replacing Google Analytics with Matomo, which is back in the pipe. Yes, that will help us to not depend on it, because it looks like even Olivia doesn't have access on Google Analytics, so yeah.
A: They don't have enough permissions to grant me admin to migrate some of the properties; these are objects inside the Google Analytics API, as I understand. So we will have to wait until July for the automatic migration by Google Analytics themselves. But there was a discussion there: Hervé was willing to help us to have our own Matomo service.
A: He has been running one for the past two years for updatecli, and Gavin also on his own, so that should be a service for us on our cluster. The next step will be: we need, and I will ask explicitly again here, to know what we need for running it on the cluster, because there is no reason for hosting and building a Docker image if we don't run it somewhere, and we need to know the requirements for running Matomo in production, based on their experience. So by default I'm adding it to the next milestone.
A: About the Artifactory bandwidth assessment: oh, I forgot to work on this today, oh crap, we have a bit too many things. So, the idea is, after a meeting with JFrog last week, that we have two brownout sessions to do. A brownout will be us changing a major setting on the repositories and seeing the effects on both the builds on the infrastructure and the builds from outside contributors. A brownout is between a blackout and, I don't know, does "whiteout" exist? So, between nominal condition and everything broken.
A: The goal is, for one hour, and we let users know a few days ago that on that day, during one hour, we will change that setting, which might have an impact and will most probably break your builds, because we want to see how it breaks. The first one, once validated one time with the JFrog repository, will be to see if we can disable the maven-repo1 repository by making it private, only for Artifactory. That's a repository that is used in the "public" virtual repository that everyone should use.
A: The second brownout will be removing maven-repo1 even from the "public" virtual repository at all, and seeing the impacts. But that one will need more details, because one thing is for sure: if the abusive use cases switch from the direct maven-repo1 and realize they can use "public", that will just shift the problem from one repo to the other. And we need to see.
A: Why do we have a mirror of Maven repo1 today? That one might need some fine-tuning of the ACP, though, because if the ACP cannot find an artifact, then it will need either to fail abruptly, and then we fix the pom.xml dependency, or eventually to directly have the ACP downloading artifacts from Maven repo1 instead of our JFrog repo, caching everything, to still keep the caching on our infrastructure.
A: And the Maven repo1 brownout for the fifth or the sixth; is that okay? Yes.
A: Yes, that's the same thing: Maven repo1. We don't have administrative access on Maven repo1, that's it.
C: There was one that was raised in chat just minutes ago by Gavin Mogan, on the...
A: Real-time check, perfect, thanks. Yeah, I added it to the milestone, but I forgot to cherry-pick it to the notes. Right, let me add it; it's Artifactory.
A: And so, "find a way to monitor jobs for private controllers". It's kind of the next logical step after the ci.jenkins.io connection to the Datadog plugin, which should give way more information to Datadog about the internals of Jenkins, the amount of failing jobs; that could allow us to monitor critical jobs.
A: For instance, when the BOM takes more than 10 hours, we could have a monitor in Datadog letting us know; that's a practical example. But for some private and sensitive controllers, such as trusted.ci and release.ci, as infrastructure officer I refuse to enable the Datadog plugin at that level of detail. We can have the Datadog agent sending virtual machine metrics, saying "oh, that sensitive virtual machine is using a lot of CPU"; that information is okay. Sending internals of a Jenkins controller could or could not be okay, depending on what is accidentally set up by someone unexpected.
A: An unexpected backup of credentials in Datadog is a scenario that could happen and that we don't want; specifically for the update center, we don't want an unexpected backup of the update center certificates, right? So we need to find a way. There used to be a proposal by Daniel, it might have been in an issue or a private conversation, I can't remember, so I've shared it with Hervé.
A: The idea will be that each of these sensitive jobs that we want to monitor will need a post-build step that just writes a few selected pieces of information, the date when it ran, the status, whatever information that is not sensitive, inside, I think, a public bucket we have: a file with JSON values for the reports, which we can write without any risk for the safety of this controller.
A: The status of the latest update center build or RPU builds, and then build the Datadog monitor that says: if, after 15 minutes, the update center's last successful build hasn't been updated, then send an alert. We can build that kind of two-step process. So that's not top priority, but that would be really useful for us to track these jobs, and it helps developers, because we can fix the element before it all happens.
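The two-step process described above can be sketched in shell. The file name, field names and 15-minute threshold are illustrative; the real job would upload the file to the public bucket and the staleness check would live in a Datadog monitor:

```shell
# Step 1 (post-build step on the sensitive controller): write only
# non-sensitive facts - job name, status, timestamp.
STATUS_FILE=update-center-status.json   # hypothetical file name
printf '{"job":"update-center","status":"SUCCESS","timestamp":%s}\n' \
  "$(date +%s)" > "$STATUS_FILE"

# Step 2 (external monitor): alert if the last successful build record
# is older than 15 minutes (900 seconds).
last=$(sed -n 's/.*"timestamp":\([0-9]*\).*/\1/p' "$STATUS_FILE")
age=$(( $(date +%s) - last ))
if [ "$age" -gt 900 ]; then
  echo "ALERT: update-center status is stale (${age}s old)"
else
  echo "OK: updated ${age}s ago"
fi
```

This keeps all credentials and internals on the private controller; only the small JSON report crosses the trust boundary.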
A: I've got one last item that needs to be tracked as a new issue; I will take care of opening it and adding it. We received a pull request from Alex; I will sync with Alex on the implementation to see if we have to do it. Alex and Tim are heavily working on weekly.ci.jenkins.io, which is a public instance and a public demonstrator of the new Jenkins design, UX, UI, etc., mainly the design language thing, and they want to make anyone able to have the system-read permission.
A: So we can show the UI of the system administration, the new UI, which is a valid and legit use case. The thing is that it could, and would, risk people being able to access some encrypted credentials; even if the credentials are encrypted, that would give them some specific permission. We are not completely sure, but I'm not really willing to try, because we are in a sensitive area: giving read access to the system.
A: Configuration read should give you access to the JCasC export, as far as I can tell, which is an export containing the encrypted credentials; all credentials could be in some fields, I don't know exactly how the permissions work. But as a matter of safety, my proposal is: I don't want to block that new thing, but first I would prefer to stop using LDAP authentication for that instance and switch to the local Jenkins user database.
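In Configuration-as-Code terms, the proposed switch could look like the fragment below. This is a sketch of the standard JCasC local security realm; the user id and the environment variable name are assumptions, not weekly.ci's actual configuration:

```yaml
jenkins:
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${WEEKLY_CI_ADMIN_PASSWORD}"  # hypothetical variable name
```

With a local user database there is no LDAP bind credential on the instance at all, so a leaked JCasC export cannot expose it.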
A: So we would have an admin and a shared password for the administration, so there would be no chance of exposing the LDAP bind password. And the second credential that could be risky is the GitHub App token, but, as Tim said, we could remove it, or it's fine to have that risk there, because it's a really fine-grained GitHub App.
So my proposal is to change the configuration of weekly.ci so it doesn't use LDAP anymore and it doesn't use any credential at top level, unless there are public credentials for demonstration.
A: Because on paper that permission shouldn't expose credentials, but once a credential is exposed, that's annoying, because that's a public instance, so yeah, better safe. And also, I want to suggest to Tim and Alex that we could create new agents...