From YouTube: 2023 01 31 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A
Around the table today we are six: myself Damien Duportal, Bruno Verachten, Mark Waite, Kevin Martens, Stéphane Merle, and Hervé Le Meur. Do we have six bullets? Yes, we have six bullets in the notes.
A
Another announcement: the artifact caching proxy. That's the top-priority topic for today, but I propose that we just announce it now and get into details in the upcoming tasks. We plan, today or tomorrow, to enable the artifact caching proxy for all plugin builds on ci.jenkins.io.
A
Just for information, I proposed earlier to Hervé and Stefan that we might create a dedicated issue just for the switch from opt-in to opt-out, because the issue will serve as a kind of documentation. One of the main things is that if anyone has an issue, it will be easier to have every contributor add a comment with a link to the specific build, and the initial body of the issue will say:
A
"If your build is slow, relaunch it once or twice, check this and this, and you will see the real timing impact there." Because, yeah, there might be a feeling of a slow build that could be due either to an agent taking time to be spawned, which is absolutely unrelated to the ACP itself, or to the variability of performance in tests, since the perceived timing is that of the Maven command that builds and runs the tests.
A
We don't have an easy way, except going into the build output and looking at the timestamps of two specific lines.
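As a sketch of that manual technique (the log lines below are invented for illustration; the real ci.jenkins.io output format may differ), the delta between two timestamped lines can be computed like this:

```shell
# Hypothetical sketch: given timestamped build-log lines, compute the seconds
# spent between two phases. The log text below is invented for illustration.
log='2023-01-31T10:00:05 [INFO] Downloading dependencies
2023-01-31T10:03:25 [INFO] Building project'
start=$(printf '%s\n' "$log" | grep 'Downloading' | cut -d' ' -f1)
end=$(printf '%s\n' "$log" | grep 'Building' | cut -d' ' -f1)
# GNU date parses the ISO-8601 timestamps into epoch seconds
elapsed=$(( $(date -d "$end" +%s) - $(date -d "$start" +%s) ))
echo "dependency resolution took ${elapsed}s"
```

The same two-grep pattern works against a log fetched from the build console, as long as timestamps are enabled on the pipeline.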
Otherwise we don't have a way to say it took that much time to download dependencies, that much time to build, and that much time to test. The information is there, but not easily available. So the goal is to provide a quick manual on the dedicated issue, just to help developers.
A
Thanks Mark, so we'll look at that right after. So next week, Tuesday, we will have the 2.390 release.
A
Thanks Eric, thanks Stefan. The next LTS is next week, Wednesday if I'm correct, so 2.375.3.
A
We haven't seen that infamous git polling log again. We had the release that happened on the Docker agent, which was picked up automatically as expected, and there are two upcoming updates on the SSH agent and inbound agent images later today. So everything is fine so far: the former job has been archived, and it has been backed up, so now let's watch it over time. But yeah, good news.
A
I hope we shouldn't have any more latest images embedding an already two-years-old agent.jar version. Thanks Stefan for the help, and everyone involved in that one. No questions?
A
Valid certificate for trusted.ci: it was already done last week for the initial certificate, and now the renewal is automated as code using the DNS challenge.
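For context, a DNS-01 renewal of this kind can be exercised by hand with certbot; this is only a hedged sketch of the general mechanism (the Jenkins infrastructure drives it through its infrastructure-as-code, not an interactive call, and the domain here is taken from the discussion):

```shell
# Hypothetical sketch of a DNS-01 (DNS challenge) certificate request.
# --manual prompts for the TXT record; automation would use a DNS plugin instead.
certbot certonly \
  --manual --preferred-challenges dns \
  -d cert.ci.jenkins.io
```

The DNS challenge is what allows renewal without exposing an HTTP endpoint, which is why it fits hosts that are not publicly reachable.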
So, next step: we should be able to set as code the whole renewal for cert.ci.jenkins.io. What was new is that we weren't managing the DNS challenge for Let's Encrypt in the Puppet virtual machine architecture; we were using the DNS challenge only in our Kubernetes cluster.
A
So we should do the same for cert.ci.jenkins.io, so we will benefit from automatic renewal of the certificate that had been created manually. That was also a big team effort across different areas. Thanks team, thanks everybody, thanks Stefan, thanks Mark, and thanks Wadeck and Daniel for pointing out the correct element.
A
Any questions? Next: remove the duplicated handlebars plugin from our instances. We had a warning message on the Jenkins controllers, like every user. We removed the plugin either manually, for the virtual machines (we went to the UI, uninstalled the plugin, restarted the controller, and that's okay), or, for the Docker images (it was present on the weekly), it was removed from the Docker image, and once the new Docker image without the plugin was installed, we were able to uninstall it properly.
A
But I forgot to record how we did that, so I think that should be documented outside Jenkins; let's say it's help for the person writing that, but we could help in that area.
B
That's a topic Kevin and I have. We know that there's that Docker pull request to document how to uninstall a plugin, and I think ultimately we'll want that on www.jenkins.io as part of the official documentation. It's complicated, and we probably want to find a better way to do it for container images, but right now we should describe what we have.
A
I have a different... oh, it's just that it wouldn't work: as soon as you restart, if the plugin is still in the Docker image, then it will be reinstalled into the Jenkins home. I have a solution: you create a temporary RAM disk mounted on JENKINS_HOME/plugins, and you're done. I mean, that's not bad.
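A minimal sketch of that idea (the image name and the `/var/jenkins_home` path assume the standard `jenkins/jenkins` image layout; adjust to your controller):

```shell
# Mount a tmpfs (RAM-backed) filesystem over the plugins directory, so every
# container restart starts again from the exact plugin set baked into the image.
docker run --rm \
  --mount type=tmpfs,destination=/var/jenkins_home/plugins \
  -p 8080:8080 \
  jenkins/jenkins:lts
```

Any plugin installed interactively lands on the tmpfs and disappears on restart, which is precisely the reset-to-image behavior discussed here.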
A
It's
not
a
faster
startup
for
Gen
games
because
you
need
to
see
create
the
files,
however,
that
I'm
sure
that
anyone
try
to
install
a
plugin
for
experimentation.
You
just
have
to
restart
the
container
or
the
Pod,
and
you
still,
you
again
have
the
exact
set
of
plugin
from
the
image.
Interesting
idea
I
like
that.
That
could
be
a
recommendation,
but
you
need
enough
memory
to
load
the
world
plugin
set
and
compressed
in
memory.
That's
the
and.
B
A
Artifact caching proxy: both AWS and DigitalOcean on one side, and Azure on the other side, have been fixed. I'm not sure it's worth going into details on what happened; a lot of things changed. The last well-known state was working well at the beginning of December, and then the JFrog migration changed a lot of things, so we had to fix these elements.
A
We also had a low-level issue with Kubernetes on AWS: I broke the cluster by updating the Terraform module a bit brutally, and it had, let's say, major changes that were not backward compatible. DigitalOcean was still working well, and in the case of Azure we faced something we don't understand: it was slow as hell, and suddenly it worked.
A
So the good thing is that now everything works and we can proceed. Everything has been logged on these issues, so thanks a lot again for the team effort. Hervé did a lot of the heavy lifting in 2022, and Stefan helped me a lot with pairing last week, so thanks everyone on that element.

A
The uplink dashboard wasn't accessible; we fixed it by killing the replica, and it actually restarted.
A
Also, thanks folks. The thing is, we can't tell if any data was lost.
A
By data lost I mean not the database itself, but the data sent by telemetry to the service while it was in that weird state; the data might be lost if it wasn't committed to the database. It sounds like the logs were incomplete, so we don't really understand what happened there. Thanks Daniel for pointing that out and helping us; now the three of us have administration access to the uplink dashboard service.
B
I'm aware that there is a Sentry service, and that Tyler was deeply involved in it. Baptiste Mathus, I believe, is also aware of it, because there was a part of it that was used for the thing called Essentials, but I don't know if Baptiste's involvement was actually in this particular aspect of it or just related to Essentials.
A
For me, it's another database to be migrated into our flexible instance. The reason for changing is that it is on a standard PostgreSQL instance that has some limitations. Since then, we have created a new PostgreSQL flexible server instance, which already hosts three databases: the plugin health score, the rating service, and Keycloak.
A
So that will be a matter of creating the database, migrating the data, and changing the configuration of the application. The advantage of the new kind of server, PostgreSQL Flexible, is that it's flexible, as its name implies: it has a high-availability option, and it's also a single entry point for us to manage databases.
A
It
costs
less.
The
only
downside
that
we
discovered
is
that
you
cannot
use
multiple
virtual
Network
on
Azure
to
reach
the
same
database
instance.
So
when,
like
some
people
there,
you
try
to
migrate
some
services
from
one
network
to
the
other,
you
will
have
Interruption
of
service,
so
you
need
to
find
creative
way
for
not
breaking
the
service
more
to
come
in
the
future.
A
Sorry, that was about work already done; now we switch to the work being done. We had an issue about renewing the DigiCert certificate for Jenkins, so we absolutely need to work on this one. The idea was to work with Olivier and Mark, especially Olivier if we can see him. Mark, I saw that you updated it and you had information.
B
Yeah, I think so. I've confirmed that I can log into DigiCert, and I can see the certificate that was issued. So I think you and I, Damien, can do it and create a runbook for ourselves; it just needs some more investigation. I'd propose you and I spend some time tomorrow and possibly Thursday. And then, if I remember correctly, you're at FOSDEM already on Friday, correct? So yeah, if tomorrow and Thursday we make progress, great.
B
If we have questions, you can always ask Olivier when you see him at FOSDEM. Okay, so for me it feels like we're at a good point. We've got the weekly release done now, so tomorrow we could start on it, and the next weekly release will be a few days away yet. So if we make some mistake, we have time to recover.
A
Yeah, okay. Is it okay if we delay the usage of the new certificate until after the LTS? Because...
B
I'm
I'm
I'm
perfectly
fine
with
that,
then
I
would
think
what
we
ought
to
do
is:
let's
accept
that
you
and
I
will
work
on
it
after
the
LTS
next
week,
because
I
don't
want
to
touch
the
the
working
configuration.
I
can
do
research
that
doesn't
modify
the
working
configuration,
but
I
don't
want
to
alter
anything
in
the
working
configuration
until
after
that
LTS.
That
sounds
fine
to
me.
A
Thanks Mark for documenting this. So, same: let's work on it after. I'll take care of moving the issues to the correct milestone, I don't want to bother you! Okay, next issue, work in progress: bump the Terraform module for AWS EKS. Bumping that module version still had consequences. As of today it has been updated with the fixes we had to do on the EKS cluster last week to make the AWS ACP work again.
A
Some of these elements were done manually to quickly fix the issue; before closing that issue, we still have Terraform changes to do. The main problem that has been documented is that the load balancer tries to automatically find the backend IPs.
A
So the latest version should have everything required to manage these tags, but right now I've moved it manually, which means we could still have AWS breaking again, so I documented everything on that issue. If anyone sees an issue and I'm not there, please look carefully at what has been done manually on that issue. It's basically going to the AWS console and removing a tag: you select the load balancer, then you go to the associated network, you remove the tag, and five minutes after, it's fixed automatically; you don't have anything else to do.
A
So we saw random, let's say, times for the first builds. What is sure is that one, two, eventually three builds are a bit slower for dependency resolution, because you have to fill the cache, and that depends on where the agent running your Maven command is located. Even for a given build you can have different agents; for instance, if you run a Windows container, it will spawn on Azure and reach one of the two Azure ACPs.
A
Can you explain the deployment plan that we want to follow: for which project, which scope, to deploy the ACP by default?
C
Currently, we intend to enable it in buildPlugin, the pipeline function used by almost every plugin in the jenkinsci organization, and in other repositories like the various Jenkins ones.
B
And the BOM build is enormous: that's the one that runs, what, 100 or 200 concurrent parallel tasks, each one checking out large chunks of code and performing many tests. So I think the bandwidth improvements should be substantial, right? That's got to be expensive on bandwidth, because it's enormously expensive on compute.
A
Makes sense. The plan is that, as we said, we'll add a dedicated issue and send an email for the plugins' ACP activation.
C
The functionality I want to add is the ability to skip the artifact caching proxy by adding...
A
...an opt-out label. Another opt-out that works today is changing the Jenkinsfile of the project, either on a pull request or on the principal branches: the call to buildPlugin should have useArtifactCachingProxy set to false. That is the other way of opting out that works today, but it is a bit more cumbersome than adding a label on the pull request.
A
These two, the label and the Jenkinsfile useArtifactCachingProxy argument, give the choice; that provides different ways of opting out if there is any issue. So right now, at least for a transition period, we let developers opt out if they feel like it's causing mayhem.
A
B
Good
plan
and
I
I
can
report
that
the
8
or
10
or
12
plugins,
that
had
artifact
caching
proxy
enabled
are
all
building
successfully
multiple
times
and
show
no
performance.
Regression
I
don't
care
if
they
improve
performance.
I
I,
don't
see
anything
that
I
could
statistically
call
a
performance
regression.
So
that's
great
foreign.
A
Also, we confirmed, first Stefan and I, and then Hervé and I, we have checked as a team that we have access to a lot of metrics in Datadog. We can watch resource usage from the point of view of the pods, and we have access to logs. The access logs of the container allow us to know, for a given request, whether it was served from the cache or not, and we have the timing; we are able to correlate the timing if it was served from JFrog.
A
Also, as Hervé pointed out this morning, we also have the ability to check on the Azure portal the metrics of the file system used for the caching, which is dedicated. We could eventually have to resize it: right now it's 50 gigabytes and we use only one, but it might increase, so we have to follow that part, and the IOPS, the number of I/O operations per second. Right now we have, let's say, the default low-cost class for IOPS, which has a threshold.
A
It
can
burst
punctually,
but
the
quality
of
the
associated
file
system
is
not
that
high.
However,
for
now
we
cannot
know,
because
when
we
use
the
fuel
plugin,
we
need
to
see
a
full
fully
fledged
cigen
games,
I
always
all
built.
So
we
will
have
to
watch
this
metrics
from
kubernetes
data
point
of
view
from
Azure
System
point
of
view
and
follow
them
on
the
future.
The
mode
will
increase
the
usage,
but
what
we
see
now
is
that
the
engine
X
uses
a
lot
of
memory
to
have
the
most
frequently
used.
...cached items in memory, served by itself. The machine behind it is also caching the file system, like Linux naturally does, so that part is another layer of caching from memory, which protects us and makes us safe from reaching too many IOPS. But with the BOM I think it will be more impactful; we currently over-provision the size of the container to be sure.
B
So, to be sure I understood: you're already comfortable that you've got Datadog-based measurements to help understand the performance characteristics of the caching proxy. So if we were to overload it, you would see that indicated in the Datadog metrics that you're gathering? Yeah.
C
Exactly
we
have,
we
have
liver
to
improve
performance
as
right
now,
the
disk
we
are
using
low
iOS
and
we
can
quickly
increase
their
class.
So
if
needed,
we
can.
We
have
some.
We
are
good.
C
I think everything that is opted in and served from this, or just buildPlugin, will, right, yeah, greatly increase the load. Okay.
A
Okay, so my proposal is that we have the last pull request for the label. Once that pull request is merged, we can immediately send an email to the developers, and then wait 24 hours before really applying the change. Is that okay? It leaves one full day if anyone complains or objects: "no, that is dangerous", or "there is something important, I don't want to use the ACP".

A
We can add them, and we put them in the same loop as Olivier and Tyler, as we say; it depends on the objection, realistically, so we'll see. Yeah, I understand that it could be really frustrating if someone says "I got that issue" and we are not able to deliver, but we have to listen to the users. So we open the discussion and we see if there is an objection; if there is no objection, then we can proceed.
B
We've
had
this
before
right.
We
had
a.
We
had
a
variant
of
this
before
without
any
complaints,
so
so
Olivia
had
deployed
something
like
this
previously
and
we
we
actually
reverted
that
deployment
when
we
saw
that
it
didn't
seem
to
to
help
performance.
At
that
time
we
didn't
care
about
bandwidth,
consumption
and
so
so
I
think
I
think
we're
ready
to
announce
it,
and
if
there
are
objections,
yes,
we
give
time
for
objections,
but
it
would
take
a
very
significant
objection
to
stop
us
from
deploying
this.
A
If we give them fresh money, I'm sure they will accept. Okay, anyway, is there anything else about the ACP? Okay, so the next priority is working on these items, and then we will start experimenting on the BOM, and we will report the status next week. Given we are at FOSDEM, we might have some delay. Since we have a weekly report from JFrog, next week should already show some things, but maybe one week is not enough, so we might have to wait two weeks before seeing real progress.
A
So,
are
we
cooked
that
some
of
our
GitHub
actions
that
are
not
necessarily
up
to
date?
In
that
example,
we
use
tidbex
gitobab
token
action.
The
proposal
is
to
enable
dependables
on
these
repositories,
so
the
pandabot
will
easily
keep
track
of
that.
A
So, the issue mentions the list of services; we might need to update it. Some services will be migrated to the new public cluster. Some services will be migrated to the new private cluster, such as release.ci, after the code-signing challenge and after the LTS, if that's okay for everyone. Some services will not be migrated; we'll just double-check with Olivier, but the proposal is to not migrate the Grafana/Prometheus stack, because we are not using it. Prometheus seems to be in a bad shape (our installation, not the software itself).
A
We will destroy the whole cluster, releasing these resources, and then we could start in the future the topic of whether we need a Grafana stack for collecting traces for ci.jenkins.io, for instance, or additional metrics, logs, and tracing collection that we cannot implement, or don't want to, with Datadog. That will be another topic, but right now the goal is to avoid duplication when one of the two tools is not used by us anymore. It was used in the past, and it has been a great help, but we haven't used it for at least one year.
C
It's okay, it's okay! It's almost the same for the private cluster.
C
That's been prepared, but yeah, it's on standby while we are going through the first element.
A
Playwright was on 55, oh yeah, and the agent template on 56.
A
Provide updated JDKs: for all JDKs we had the Temurin latest update, for 8, 11, 17 and 19. That was also done along with the Playwright one, so thanks a lot for that one.
A
And finally, the last issue is realigning the jenkinsci org repository permissions. We focused on the ACP. I found a Helm chart that will allow us, on that topic, to have an HA LDAP, so I'm currently playing around with that on a local cluster if possible. That's not for everyone; I propose that we might want to test that new HA LDAP on the new public Kubernetes cluster.
B
Yeah, I've still got that action item, Damien. Right now, the analysis of the reports indicates that the realignment won't actually fix the majority of issues. The reporting that we're seeing hints that the accesses this would change are not the major problems, and so we're trying to resolve the major problems by other means.
A
Any
question
on
that:
nope:
okay:
we
had
a
few
new
issues,
so
update
Appling,
Jenkins
IO
project
dependencies.
So
that's
one
is
not
priority,
but
thanks
for
opening
this
one,
because
while
trying
to
operate
a
blink,
we
saw
that
that
image
has
not
been
updated
since
month
is,
if
not
years,
it's
still
using
node.js9,
which
is
it's
as
if
I
had
Jenkins
running
with
gdka6
I'm,
just
a
bit
exaggerating,
but
still
it's
not
the
LTS
version
of
node.
Of
course,
bumping.
A
That
version
might
have
side
effects
so
that
need
a
bit
of
time
and,
of
course,
all
the
internal
dependencies
within
that
image.
Is
that
correct
or
where
did
I
miss
something
else.
A
We'll
have
to
create
issues
in
term
of
planning.
I
assume
we
won't
be
working
on
that
one
for
the
coming
two
weeks
is
that
correct.
A
And "remove a jenkins.io page so it is not accessible and indexed anymore", a fun one. It sounds like the deployment process does not delete pages when deploying; it just adds content. So each time we remove a page from jenkins.io, we need a manual operation. I don't see that as a problem, but it has to be documented on the jenkins.io website.
C
Of course, but I asked Daniel if he knew why we have it like that, and it's just because...

A
Automating deletion has only led to losing data, which means you need to build a certain level of trust before having a fully automated thing. That's also my point of view, because yeah, you have a website in a Docker image: on paper that works, but as soon as you need to roll back, it's complicated. The issue with jenkins.io is that it's not one machine with one container and one instance; that would be easy to automate.
A
In
that
case,
in
the
case
of
Jenkins
IO,
you
have
a
distributed
system
that
has
local
cache
that
sometimes
read
on
the
centralized
bucket
where
the
file
should
be
deleted,
and
then
you
have
another
distributed
system
of
CDN
that
can
be
decached,
so
you
have
two
distributed
systems
that
depends
on
each
other.
Trust
me.
If
you
want
to
automate
deletion
on
these
cases,
you
are
losing
your
safety
nets.
A
We are in sync. My goal was to provide elements to understand why; now we can think about how to automate it successfully. For me, the first and immediate step we should take is to document that process, given the huge work and activity that happened during the past months by Kevin and Mark and the whole docs SIG.
A
That
kind
of
thing
will
happened
more
and
more
and
once
we
have
documented
it
and
we
are
at
ease
on
the
procedure,
then
we
can
automate
easily
and
I'm
sure
you
will
have
brilliant
ideas
that
we
never
talked
about
for
doing.
That.
I
have
no
adopted
that,
but
we
need
to
reach
some
layers
of
trust
in
ourselves
before.
C
One
one
I
thought
about
is
automatically
purchased
the
new
as
a
ability
to
page
looking
at
the
modified
file
in
the
pull
request
and
running
a
curve
Purge
on
this
URL.
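A sketch of that idea; the mapping from repository path to published URL is an assumption for illustration, and the exact PURGE endpoint and any authentication depend on the CDN configuration:

```shell
# Hypothetical sketch: map a changed repository file to its published URL and
# purge it from the CDN. The content/ -> URL mapping is an assumption.
page_url() {
  f="$1"
  url="https://www.jenkins.io/${f#content/}"   # drop the content/ prefix
  printf '%s\n' "${url%.adoc}/"                # .adoc sources publish as directories
}
# For every file modified by the pull request, send an HTTP PURGE to the CDN:
#   git diff --name-only origin/master...HEAD -- 'content/*.adoc' \
#     | while read -r f; do curl -s -X PURGE "$(page_url "$f")"; done
page_url content/doc/book/installing.adoc
```

The purge loop itself is left commented out, since it needs a checkout and network access; the testable part is the path-to-URL mapping.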
A
One of the ideas that was cancelled by Olivier was to build the jenkins.io Docker image with the doc root within it. That would have removed the need for that bucket, and it would have been fully stateless on our cluster, removing one constraint. That worked very well, but there was the time required to build and deploy the image, and the constraint of auditability with Kubernetes management records.
A
Okay, a new issue: frequent PagerDuty alerts that disk space is below one gigabyte. We spoke about that two weeks ago; I've put it in written form, and now we have actions on that, mainly updating ci.jenkins.io and also the controller setup, to be sure that all agent virtual machines are ephemeral, especially the Windows ones. And then, if we keep having the alert, we will have to find if there are failed builds, and if there are, we will have to increase the disk size. That's the summarized version.
C
Okay, I opened this question about what will happen when Atlassian starts billing their users, but it's a question for later.