From YouTube: 2023 03 21 Jenkins Infra Meeting
B: It was placed about an hour ago. Okay, the final changelog has not been posted yet, but Kevin has created the draft updates to make it ready for final. I don't know when that will be ready, because last I checked ci.jenkins.io was still down while the in-progress security update for plugins was running.
A: We had the Jenkins plugin security advisory today, which has just been released. So, as we said, ci.jenkins.io is back. The advisory was published on jenkins.io a few minutes ago, so everything looks good; I haven't seen any errors.
A: Upcoming calendar: next week, on the 28th of March, will be the next weekly release, as usual.
B: 2.378, you say? The upcoming LTS is 2.387, and the next weekly is 2.397.
A: Okay, perfect, perfect. Are there any other announcements or calendar events that you folks want to talk about?
A: Oh okay, let's proceed with the work that we were able to finish during the past milestone. So, first of all, a public apology: Stefan and I left poor Hervé alone last week, because we were on holidays and we absolutely gave no reminder to anyone. So you might have been a bit alone last week, and I'm sincerely sorry for that; we'll try better next time. So, self-improvement for the people going on holidays, in particular Damien Duportal.
C: I usually brag about being on holidays every day for a month beforehand, so usually everybody knows. It's like my birthday: I remind people all the time.
A: The last mile was being able to properly define a full Terraform-as-code automation, so we now have a credential with an explicit expiration date in the Terraform code, which is publicly available. So we should be able to monitor that; I don't know if we will have time for that, but at least it's visible publicly.
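Not something the team committed to here, but as a sketch of what monitoring such a date could look like: a few lines comparing the declared expiration date against a warning threshold. The date, variable names, and threshold below are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical values: the real end date lives in the public Terraform code,
# and 30 days is an arbitrary warning threshold.
CREDENTIAL_END_DATE = datetime(2024, 3, 21, tzinfo=timezone.utc)
WARNING_THRESHOLD = timedelta(days=30)

def days_until_expiry(end_date: datetime) -> int:
    """Whole days left before the credential expires (negative if already expired)."""
    return (end_date - datetime.now(timezone.utc)).days

remaining = days_until_expiry(CREDENTIAL_END_DATE)
if remaining < WARNING_THRESHOLD.days:
    print(f"WARNING: credential expires in {remaining} days")
```

Wired into a cron job or a CI step, a check like this would surface the expiration well before it bites.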
A: Also on that issue, I've mentioned a topic that has been raised by numerous people, including Hervé and Tim Jacomb, and I've created a new issue for it. The goal is to use workload identity management inside Azure in order to avoid having to manage credentials at all. That issue has been created with no milestone; we won't work on it right now. We would just remove that password, with the whole expiration, rotation, calendar events, etc., and instead it would be automatically managed by the cloud. That would be a nice improvement, but it wasn't in the scope of this issue.
A: There was the hypothesis of splitting the workload onto different node pools for the BOM builds and the other builds. Alas, right now, since today and until the end of the month, we have removed a lot of the AWS agents, because we went clearly above the billing limits on the AWS account, and adding different node pools for the BOM is not something we should work on right now, because we don't have the capacity; we had to decrease the capacity.
A: Just a note: given the decreased capacity and the risk here, we might need to do a drastic reduction in the way the BOM builds are done. We might need to add a lock system to ensure that only one BOM build runs at a time, whether it's on the main branch or on pull requests.
A: So that has to be checked during the upcoming weeks. Right now, the workload shift was done to DigitalOcean, where we have credits. So that's why we closed the issue, unless someone objects; we can reopen the issue at any moment, but if we reopen it, that will be an isolated topic.
B: Agreed, and if we need to use Lockable Resources or something like that to say that we want only one BOM build running at a time, I think that's perfectly fine. Also, when a flurry of pull requests arrives from Dependabot for the BOM, it can certainly be overwhelming to see a thousand-plus jobs in the queue and 200 virtual machines allocated on ci.jenkins.io.
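The "only one BOM build at a time" idea is plain mutual exclusion. As an illustration only, the sketch below shows it with a Unix advisory file lock in Python; on the Jenkins side, the Lockable Resources plugin's lock step mentioned above would play this role across the whole controller, and run_bom_build here is a stand-in.

```python
import fcntl
import time

def run_bom_build() -> None:
    """Stand-in for the real BOM build; it just sleeps."""
    time.sleep(1)

# Advisory file lock (Unix only): one process at a time enters the critical
# section; everyone else blocks until the current holder releases it.
with open("/tmp/bom-build.lock", "w") as lock_file:
    fcntl.flock(lock_file, fcntl.LOCK_EX)
    try:
        run_bom_build()
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)
```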
A: Next step, and that one Stefan knew about: same problem as cert.ci, but in that case it's for our Packer job. We need Packer to be able to spin up Azure virtual machines to build our agent templates, so we need a credential for that. That credential has now been defined as code, with the same pattern as cert.ci. So next time it expires, same problem: it has been renewed for one year, with the calendar reminder, etc., and we could study using workload identity management in the future.
A: For the same reason. The main difference between the two is the set of permissions required, and they are certainly different in the case of Packer. The main challenge is that we use Packer with the default Azure mode, which creates an Azure resource group each time it builds: it creates an ephemeral resource group, puts everything for that build inside it, and removes it at the end of the build.
A: We have now restricted the credential, because being able to create and delete resource groups at the whole-subscription level was risky. So now we have one resource group where Packer puts everything. It's a bit less practical for the Packer process itself; however, it ensures that Packer only has write access inside that resource group, so Packer is no longer able to reach the rest of the infrastructure, and we had to fine-tune Packer and the credential for that.
A: The pkg.origin SSL certificate: in the long series of consequences of my mistake of updating Python 3 on that machine, that one is another item on the list that has been solved. Thanks Mark for your certificate monitoring; we fixed it by adding the missing pip package. Yeah, without your monitoring we would have waited for the expiration, and that would have been worse.
A: Clearly, we have room for improvement in how the infrastructure team does monitoring. We didn't have the time, or maybe the priority, for that; both are reasons. But now it has been fixed, the renewal worked, and the positive consequence is that a bunch of SSL certificates were renewed all over the platform, showing that the Let's Encrypt system is still working as expected.
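For reference, this kind of certificate watch boils down to a small check like the sketch below; the host is only an example, and this is not the actual monitoring tooling mentioned above.

```python
import socket
import ssl
from datetime import datetime, timezone

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Fetch the TLS certificate served by host and return days until it expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Mar 21 12:00:00 2024 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

print(cert_days_remaining("jenkins.io"))  # example host
```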
A: We have one issue that I closed after generating the notes, or right before; I forgot to post the message. One of the plugin developers hit a 401 error when releasing their plugin; they retried a few hours later and it worked. The reason has been identified: the Repository Permissions Updater job runs on trusted.ci every three hours, and the TTL of the tokens used to publish plugins is five hours; every time the build runs successfully, it regenerates the tokens at the end.
A: So if one run fails, you get a one-hour window during which you cannot release: the previous token was generated six hours before the next successful run, one hour past its five-hour TTL. So the user's problem is solved and we have tracked it down. I took it on me to close the issue and to open a pull request on the repository-permissions-updater which aims to retry the steps at least one time, because these failures have, most of the time, at least over the past months, been due to a GitHub API error, let's say a 502 or 503 server-side error.
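The actual change is against the repository-permissions-updater pipeline itself; purely to illustrate the retry idea, here is a sketch in Python rather than the job's own code, with a hypothetical endpoint, retrying once on 5xx responses.

```python
import time
import requests

def get_with_retry(url: str, attempts: int = 2, backoff_seconds: float = 5.0) -> requests.Response:
    """Retry a GitHub API call on 5xx server-side errors (502, 503, ...)."""
    for attempt in range(1, attempts + 1):
        response = requests.get(url, timeout=30)
        if response.status_code < 500:
            return response  # success or a client error: no point retrying
        if attempt < attempts:
            time.sleep(backoff_seconds)
    return response

print(get_with_retry("https://api.github.com/rate_limit").status_code)
```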
A: Daniel suggested that we might improve the native Java/Groovy code of the Repository Permissions Updater job instead of the pipeline. I got no objection on either; I understand one can be better, but I would need help on the second one: I'm not at ease writing Java code for this, and I don't want to spend that much time on it. So we have a pull request with a request for review; the usual process. So: issue closed, unless someone objects, of course.
A: No questions? Issues that have been closed with no work on them, or at least only analysis: it sounds like there was an issue in the Docker Hub description of the Jenkins inbound agent image that Alex reported, but someone fixed it, or maybe it was a mix-up; we don't really know. We checked earlier today and it sounds like it's gone. I wasn't watching it every day, so someone might already have fixed it and we missed it; I don't know, but Alex closed the issue saying it's gone.
A: Okay, now we had one, two, three, four account issues. Most of the time the people don't answer; thanks Hervé for managing all of them. No answer one, two, or three weeks afterwards, so my proposal is that we close them, and the users can feel free to reopen them if they need to. One of the four was flagged as sensitive a few weeks ago, because it's someone asking for a takeover of the Ops Journey plugin.
A: Now, work in progress; I'm taking them in the order of the list here. "Could not create account": someone had issues with their account, so we tried assisting the person. The person says they've been blocked by the anti-spam system, but that system normally leaves a Java stack trace in the logs, and neither Hervé nor I were able to find such a stack trace.
A: I don't know. Anyway, in that case we try to reset the password, and the person is guided to send us an email. We then check, through the full back-and-forth email exchange, whether that person is a human and is not trying to take over the account; if that's not the case, we can reset the email on the account directly.
A: The main issue today: we worked on EC2. It started with a lot of EC2 agents on ci.jenkins.io being marked as in a broken state in the left column. It looks like there have been numerous issues with the EC2 plugin, starting with the idle termination time, the time the garbage collector for these agents waits before passing and checking again. It was 30 minutes by default when JCasC wasn't specifying any value, which we didn't, because we used to have agents deleted as soon as a job had used them.
A: When the job finished, we used to have the agent deleted immediately. I don't know when it changed, but it became 30 minutes: once a job finishes, you have to wait up to 30 minutes, worst case, before something passes and checks whether the agent gets deleted or not. That led to a lot of broken states, mostly because of spot instances, but not only, which is still something we can't explain. The consequence is that the AWS account went clearly above its budget.
A: So we took a drastic action by removing every kind of EC2 agent and decreasing the availability. We still have to fully switch the infrastructure from AWS virtual machines to Azure virtual machines; that one will be definitive once done. So we should be able to limit the amount of billing for this month, and we should be back to normal next month, or maybe not; that will be discussed in April. But right now we absolutely need to avoid crossing the 16k top limit.
A: The next issue is from James Nord, about agent instabilities on ci.jenkins.io. I couldn't figure out how to use Datadog efficiently; none of us were able to find the metrics, but the metrics should be collected, so maybe they are deleted after a time window. We need to fine-tune that, but yeah, there is a lot of information on that issue. To summarize, one of the builds failed with a weird message.
A: Clearly, it's not an OOM kill, because we didn't see any 137 exit code, and none of the usual events of this kind were shown in the AWS or Azure console. In that case it was on AWS, and there was no alert about a pod or container being OOM-killed.
A: We didn't see anything, nor any kind of retry on that job, so that sounds like an edge case. James's build was finally rebuilt two or three times, and the pull request was merged.
A: So we don't know what happened, and we weren't able to observe it. The thing is, the timeline fell during the 2,000 builds of the BOM, so it is in that gray area. That is the gut feeling, and only a gut feeling; I have no formal proof and no way to prove where it comes from. So yeah, not sure what to do with this one, but any idea is welcome. I don't know how to go further here, because we have a lot of hypotheses and assumptions, but nothing factual or measurable, right?
A: So my proposal: I asked James whether he was able to see that problem on another build, another plugin. The last thing for us to do is to check the spot history inside the AWS console; we have a history there, so we can see whether some spot instances were reclaimed in that timeframe. That would explain the problem.
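That lookup can also be scripted with the AWS SDK; below is a rough boto3 sketch. The region and the exact status codes are illustrative, and configured AWS credentials are assumed.

```python
import boto3

# Spot request status codes that indicate AWS reclaimed the instance.
RECLAIM_CODES = {"instance-terminated-no-capacity", "instance-terminated-by-price"}

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
response = ec2.describe_spot_instance_requests()

for request in response["SpotInstanceRequests"]:
    status = request.get("Status", {})
    if status.get("Code") in RECLAIM_CODES:
        print(request["SpotInstanceRequestId"], status.get("UpdateTime"), status["Code"])
```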
A: Unless someone else wants to keep it or has details? Nobody? Okay. We have the Azure credential for ci.jenkins.io; that one was due two or three weeks ago. ci.jenkins.io wasn't able to spin up Azure virtual machines and the ACI containers: same expired-credential problem. A manual credential has been generated short-term, valid until June; now we have to put that one as code. The tricky part on this one might be around the ACI.
A: We will need permissions not only on the virtual machine resource group but also on the ACI resource group for the Windows containers. The rest is the same as cert.ci, along with the same long-term improvement: that should be a candidate for workload identity management in the future. So yep, that one moves automatically to the upcoming milestone.
A: I took this one. I thought it was related to DigiCert, that we had to generate the PGP key with DigiCert, but that might not be the case. We should be able to handle this one quickly, at least for...
B: I think the first thing Olivier would want is: let's read the materials that were documented three years ago, go through them, and be sure that we understood what they say; then, if we have a question, we talk to him. I'm confident, given the experience with the DigiCert thing, where Mark said, hey, we don't have any documentation, and Olivier then said: yes, we do, it's right here, look at this page, read this page. And it was all right there. So I suspect the same applies here.
A: And also, of course, we have to focus on, depending on the workflow: do we need to trigger a new LTS and core weekly release to sign a new version with the new key, if we need a new...
B: ...key. And that, I think, we won't need to. At least, the last time we didn't: what we did was announce it. We did a blog post the last time, which said the signing key is changing, please install the new signing key. Now, I don't know if that kind of message will be needed again, or if we found a way in our last exercise to allow that to be avoided this time. I'm sorry, it was three years ago; I don't remember.
A: No problem. Okay, my question to them will be more about the impact of renewing the expiration date of an existing GPG key versus creating a new GPG key. We want to know the consequences; we'll be interested in the difference in terms of security.
D: We noticed that the BOM builds are using emptyDir volumes for what they have to do, and with emptyDir we are using the node's available disk space, the cluster node disk, which was 20 gigabytes.
D: And the builds need more than that, so they were failing. We increased the cluster nodes' disk size to 200 gigabytes and increased the IOPS and the disk type. We now need to maybe decrease that disk to 90 gigabytes, which should be enough, since there are at most three builds on each node at the same time, and after a BOM build we can see that 23 gigabytes are used; three builds at roughly 23 gigabytes each is about 69 gigabytes, under the 90 gigabyte budget.
A: So, in the case of our builds: the Kubernetes plugin on Jenkins, as proven by Jesse, creates an emptyDir for the workspace where the build happens, which is good. However, in a default build, when you run Maven for instance, it creates and writes data in the home directory, in ~/.m2/repository by default, and that is written inside the container's layered filesystem. So if you need to read or write a lot of data, the performance there would be really bad, so we might want to explicitly mount the Jenkins home, or at least the Maven repository, as well.
A: Also, the /tmp directory is neither a tmpfs nor an emptyDir, so we might want to define it as a tmpfs RAM disk. That will take a bit of memory, but we have margin on these machines at full scale: we use 24 gigabytes of the 32, so we have a few gigabytes left. Mounting /tmp with something like 500 megabytes should be enough.
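To make the two proposed mounts concrete, here is a sketch built with the kubernetes Python client: an emptyDir on node disk for the Maven local repository, and a memory-backed, size-capped emptyDir for /tmp. The names, paths, and sizes are examples, not the actual pod template.

```python
from kubernetes import client

# emptyDir on the node disk for ~/.m2/repository, so heavy Maven I/O
# bypasses the container's layered filesystem.
m2_volume = client.V1Volume(
    name="m2-repo",
    empty_dir=client.V1EmptyDirVolumeSource(),
)

# tmpfs-backed emptyDir for /tmp: a small RAM disk, capped so it cannot
# consume the node's memory headroom.
tmp_volume = client.V1Volume(
    name="tmp",
    empty_dir=client.V1EmptyDirVolumeSource(medium="Memory", size_limit="500Mi"),
)

volume_mounts = [
    client.V1VolumeMount(name="m2-repo", mount_path="/home/jenkins/.m2/repository"),
    client.V1VolumeMount(name="tmp", mount_path="/tmp"),
]
```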
A: So, just a note: we don't have any more disk-full issues for the BOM. Can you confirm that you didn't see any more issues since this operation? Confirmed.
B: We cleared the backlog of 20 pull requests over the weekend and released a new version of the BOM early Monday, or sometime Monday.
B: So it took quite a bit. Well, it was a lovely weekend exercise to try to get that thing cleared, and I'm glad that we did it, because it meant that there were 36 dependency changes in the most recent release of the BOM. 36 is probably three times larger than our typical BOM release, so yeah, it was time to catch up. Thanks very much for doing it.
B: Well, okay, I admit, one of the things to catch up on among those 20 pull requests actually combined six pull requests into one. So I did make some attempt to minimize machine usage, but it was still at least 10 cycles of the bill of materials; from that you can approximate how many machines we spun up. Yes, it was a lot.
A: Okay, so I will move that issue to the next milestone; is that okay for you? We are under the assumption that you need help, and no problem, we can pair or triple on this one. The next issue is the private Kubernetes cluster: that cluster has been created and we already have some services running on it. Can you remind us of the status on this one?
D: The plan is ready. We now just need to decide when we execute it.
D: We need to move from this cluster to the private one. The next one, obviously... I don't know; let me see, it's here. But there are some really big services.
A: Okay, approximately how much time should the operation take for you?
A: No problem. So then, that means: is it okay for you to prioritize, first, adding this to the plan and preparing the message, so people know what they have to do? For the people who check, like Kristen, we checked release.ci today for the weekly: what will they have to do until next Tuesday in order to check and monitor the next weekly release?
A: So, were you able to check the availability of the Windows pool? Was it able to spin up agents and so on?
A: Cool. Just one last thing, because it's on my mind: we need to check the Azure side for the Vault credentials. Just think about this one; we will talk about it, but it's important.
A: Yes, important steps to check; we have the VPN. Okay, okay, let's...
A: I don't mind taking this one, just because I sent them the email last time; that's why, unless someone wants to take the subject, and I don't mind either. If they say no, for the Jenkins infra the alternative will be to use GHCR, as suggested by Hervé. That could ease the access control, and it could scope the images per repository, since we already have a kind of RBAC permission system inside GitHub.
A: Sure, we need to write down the roles, but RBAC is already managed per repository there. The risk in that case, outside the fact that we would only have to change the name and create GitHub tokens, which should be okay, would be the pipeline library change: instead of using the pre-built Docker Hub credential, we would use the GitHub application token, valid for one hour and generated on each call. So we should just wrap the step with the GitHub app credential; that should be okay.
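For reference, the GitHub App token flow mentioned here works roughly as in the sketch below: a short-lived JWT signed with the app's private key is exchanged for an installation token that GitHub expires after one hour. The IDs and key path are hypothetical.

```python
import time
import jwt       # PyJWT
import requests

APP_ID = "123456"              # hypothetical GitHub App id
INSTALLATION_ID = "7890123"    # hypothetical installation id
private_key = open("github-app.pem").read()

# Short-lived JWT (here 9 minutes) signed with the app's private key.
now = int(time.time())
app_jwt = jwt.encode(
    {"iat": now - 60, "exp": now + 540, "iss": APP_ID},
    private_key,
    algorithm="RS256",
)

# Exchanged for an installation access token, valid for one hour.
response = requests.post(
    f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
    headers={"Authorization": f"Bearer {app_jwt}", "Accept": "application/vnd.github+json"},
    timeout=30,
)
token = response.json()["token"]
```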
A: The remaining risk is more about rate limits and costs on GHCR. I don't know the limitations there; maybe it's free for us, since we are sponsored, but that remains to be checked. I mean, GitHub Actions has limits on GHCR as well. For Jenkins, however, we'll see; we'll discuss it. Is there any objection if I send an email to ask the developers?
A: We talked about this: we have a service that should be sunsetted and removed, the "rubber butler" one. That should be an easy task; it's a set of Puppet manifests. I propose to move it outside the next milestone, given the workload, but that one is easy to pick up if someone is bored and has some time to spend. Is that okay for you? Let's move on, then. Anyway, I am not sure what the Gatsby plugin Jenkins layout issue is about; do you remember?
D: Yes, it's about what has been done, and you weren't sure whether we should create a separate GitHub app just for the semantic-release action. It reads as not needed, but I'm not sure.
A: Okay, I see, yep.
A: The risk here is: if one application tries to write or perform a release on another repository that uses the same GitHub app, then you could have cross-repository writes, which should not be possible by default. That's why I made my proposal of creating multiple GitHub apps, one per repository, but that could also be one per area. The question is more: is it acceptable if, for instance, in our case, the jenkins.io components that you see on the screen share one app?
A: For the Jenkins infrastructure, that looks like the same area; I would put that under the umbrella of whatever front-end web stuff. The thing is, manually managing all those GitHub apps starts to be problematic, and, as you correctly underlined the last time, now I remember, the amount of manual management and credentials to update could be a counter-effect of trying to separate elements properly, unless we have GitHub app management as code, in an automated way.
A: Your argument makes sense, Hervé, and I propose that you close this issue. It's okay that we don't separate the GitHub apps, because that would be a whole other topic. Does everyone agree on this one, or do you think we should...?
A: If it's okay for all of you, I will close the issue with a message about the trade-off here. Does that make sense for all of you? Thanks Hervé. That means the issue is fixed, technically and functionally speaking, and we only have to write down the trade-off in terms of security and permissions. Thanks for handling that; to be closed.
A: These people need to be added both to the VPN, the new VPN eventually, I'm crossing my fingers for this one, but they need a VPN account in any case, and then to the release.ci RBAC system. Also, we will need to check the RBAC configuration on release.ci, because right now anyone with VPN access who is authenticated has too many permissions.
A: That one, the root cause is fixed; I even closed it. Yet there are some Puppet management tasks left, to ensure that we have the correct blobxfer installed everywhere, and we have to perform a blobxfer upgrade migration that could have consequences on a lot of scripts; but I think we will have to do this one quickly.
A: Okay, so if it's okay for everyone, the definition of done I've set for that issue will be the second item here, meaning putting into the infrastructure code the way to ensure that blobxfer is installed; it should be explicit in Puppet, because it was done manually before. Once that's done, we can close the issue; that will be the improvement that avoids reproducing the issue.
A: Then, the code signing certificate renewal process; that one is quick. We are waiting for the lawyers of the Linux Foundation to discuss with the lawyers of DigiCert and the lawyers of the CDF, and once done, the folks from the CDF will come back to Mark with a new DigiCert certificate to sign the MSI of Jenkins and the jar.
A: That means we will take the new private certificate and put it into the Azure vault, which is used by release.ci through the vault endpoints. And, as per my remark earlier, release.ci uses it to sign the jar and the Windows package, and it needs to be encrypted inside the Azure vault password system. So that should be only a vault update: we have to unseal the vault, put in the new version, seal it again, and wait for the next release to trigger, eventually trying a replay with a dummy command.
A: So once we are sure that javadoc is okay, the last mile for us will be to check... Can I ask you to drive that topic? Then you should be able to close the issue. The goal is to check the measures from JFrog, to see our top IPs and whether they are hitting JFrog less than they were in December, at least; Mark might be able to help on that as well. And if we see that we have decreased and we are not at the top, whatever that means, then we can proceed and close.
A: Work in progress; I wasn't able to work on this one. I need to focus on how to have a replicated LDAP, especially inside Kubernetes.
A: We received the notification from GitHub, or whatever system is used; the Jenkins project had three days to comply, so it was handled by the Jenkins board yesterday, and all of the plugins related to Compuware and BMC have been removed because of the copyright claim. There's been an email sent to the Compuware people by the board, and Mark also copied the content of the email onto the issue. There is no infrastructure action right now; it's only to track this, and Daniel did that as part of the helpdesk process.
A: No, no, yeah; I'm just agreeing with my eyes, sorry. Okay, so we have two account issues, and one that should be easy if not already done; okay, so we leave them, and that one should be closed. Perfect. Any other topics, issues, or things that I could have missed?