From YouTube: 2023 09 19 Jenkins Infra Meeting
Description
No description was provided for this meeting.
A
Hello everyone, welcome to the Jenkins infrastructure team meeting. Today is the 19th of September 2023, and around the table we have myself, Damien Duportal, with Mark Waite, Stéphane Merle and Kevin Martens. It looks like Hervé Le Meur is ill, so we'll try to speak on his behalf for the tasks he worked on. We'll try to share a bit of information, but since I was also off during the past week, I will try my best.
A
Let's start with announcements. No weekly this week, no weekly today, because tomorrow a Jenkins core security advisory will be published. It has been announced, it's public and official: a core advisory tomorrow on both LTS and weekly. So the automated release process had started, but it was changed to not perform anything.
A
I checked this earlier today. It didn't run the release as expected, but via an unexpected error: instead of an echo of a message and an error failing the pipeline, there was a pipeline syntax error, because the echo message was not in the proper block. But it did the job as expected, so it's okay. I love it when a plan comes together.
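The intended guard, an echo of the reason plus a deliberate error step rather than a syntax failure, can be sketched in shell. The variable name RELEASE_DISABLED and the message are invented for illustration; they are not the real job's configuration.

```shell
# Hypothetical release guard: when releases are disabled, explain why
# and fail the pipeline instead of publishing anything.
# RELEASE_DISABLED is an assumed variable name, not the real one.
release_gate() {
  if [ "${RELEASE_DISABLED:-false}" = "true" ]; then
    echo "Release skipped: core security advisory scheduled for tomorrow"
    return 1  # deliberate failure, nothing gets published
  fi
  echo "Performing release"
}
```

On the real infrastructure this logic lives in the release pipeline itself; the point is only that the message and the failure are two separate, well-formed steps.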
B
So does that mean more work for Stéphane and Hervé?
A
Yes, it does. And Bruno, yes, yeah, Bruno and Stéphane, yep. Everybody is healed, so yep, all the work for you guys, thanks. Other announcements, a question? No? Okay, so the next weekly is tomorrow.
A
What do we have next? Major events: we have DevOps World that happened last week, I believe. We have one episode in October, is that correct, Mark? No, Chicago, first of September.
A
Yes, so thanks to Hervé and Stéphane for helping, and continuing to help, on identifying the issues that could be good first issues for newcomers to Jenkins participating in Hacktoberfest. We can continue if you have any question on that topic. We haven't written a document yet; that could be a nice thing to add. A public document about Hacktoberfest that we could pin on the issues could be interesting.
A
Yep? Okay, so what were we able to finish during the past milestone? We have three closed issues, and one issue closed with no answer from the end user.
A
I closed this one earlier today because, after Mark's and Hervé's feedback, there was no answer from the user except a frustrated message with no information. So I don't know what the problem is and how we can help; that's why I decided to close it. Of course the user can reopen it as usual, but without technical information it's unclear what is expected from us.
A
"Unexpected long delay uploading artifacts to S3 buckets": so, Mark, it looks like Jesse Glick was right about the fact that it happened during an AWS S3 outage, which explains the long upload times.
A
What we saw is no regression. We weren't able to measure properly whether it had an effect on performance, but the build time of a single BOM build is around 3 hours and 15 minutes, almost four hours, and that seems quite constant. This month it increased a bit because we added more tests, but we haven't seen any big difference. We still see that when a lot of parallelized workspaces are running, all the concurrent stash operations on the agents sometimes take time, but yeah.
B
Yes, though I did have a surprise: I launched a BOM build about 12 or 14 hours ago, and it died after about eight and a half hours of run time. So it's not a perfect world yet, but this one, as far as I can tell, is resolved. I still need to analyze what went wrong that ultimately caused this failure.
A
Okay, I see two short-term improvements that we could run. I had my long-running task about trying bigger machines to have way more builds, but in terms of cost we saw that this one was unchanged, or was increasing the cost of a single build. So I'm not sure it's the right solution until the concurrency problem is solved in the Jenkins internals, but it's really hard to understand the problem. However, we could apply the same trick as the one Stéphane did on the ACP.
A
I see quite often that we have a lot of spot instance reclaims leading to agents being removed, and that is slowing things down. I don't know if it's slowing down one percent of the time, or five, I don't know the effect; but having a system that changes the tags of an agent from a spot instance to an on-demand instance could be interesting, to see if it has an effect.
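The relabeling idea could be as small as rewriting an agent label's suffix; a toy sketch follows. The label names are invented placeholders, not the real pool labels.

```shell
# Toy sketch: derive the on-demand variant of a spot agent label by
# rewriting its suffix. Labels here are invented examples.
to_ondemand() {
  printf '%s\n' "$1" | sed 's/-spot$/-ondemand/'
}
to_ondemand "linux-arm64-spot"      # prints: linux-arm64-ondemand
to_ondemand "linux-arm64-ondemand"  # unchanged: linux-arm64-ondemand
```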
A
It was more about planting seeds than really calling for actions. Great: the SSL certificate issue has been solved. The problem is that there has been some confusion for two years about which component, between Fastly and our public cluster, is managing certificates and receiving requests for the apex domain jenkins.io. www.jenkins.io points to Fastly, that's a fact, and its certificate is managed by Fastly; the open question was the jenkins.io apex.
A
There has been confusion numerous times, and in order to remove it, since the last DNS change when we worked on migrating to the new cluster, we decided as a team that jenkins.io apex requests are sent to the cluster, which redirects to www.jenkins.io on Fastly. So it's not a lot of workload, that's okay: it's only an HTTP redirect, which means we terminate TLS for the jenkins.io apex domain on the public cluster.
A
So, in order to solve that issue, after confirming what the culprit was for certificates not renewing automatically inside the cluster: the Order was there, it was trying but failing to create the ACME challenge in the DNS records, because the DNS record was owned by Fastly. By removing every reference to the apex domain on Fastly, now everything is managed from the public cluster, and it renewed automatically.
D
Cloudflare R2 buckets, which are like S3 from AWS but with free egress, which is why we chose them at first. We'll add a corresponding job on infra.ci and restrict its configuration, then start an initial job on an empty state, from the main branch, to have an initial Terraform state. Then we'll be able to create the resources we want, which are several R2 buckets for the different versions they propose, and the corresponding Cloudflare zone records.
D
So I'll post updates on the issue. Cool.
A
Cool. That looks obvious, but I prefer asking. That's nice work, really happy with the outcome, so we should be able to have something soon. Nice job. Yeah, you can leave the meeting if you feel physically bad, don't hesitate. Thanks for taking the time to share your progress here.
A
Okay, on my side I'm still blocked on the maven-hpi-plugin. Thanks to the help of Basil, many thanks for the pointers, it appears that the initial strategy we had to fix the integration tests won't work as expected. So now we are exploring other options. I admit I'm a bit lost; I'm trying a lot of things based on the official Maven and all the tools of that project. The core of the problem is that we have errors on ci.jenkins.io which are really hard to reproduce locally, and I don't understand why.
A
I think I have something. It looks like the way the mrm (Maven mock repository manager) plugin merges the settings.xml files depends on the way the settings.xml files are passed to the outer Maven invocation. When you run Maven by default, it will look at $HOME/.m2/settings.xml; this is where I tend to put the files. But in fact, on ci.jenkins.io, we don't do that: we change the MAVEN_ (underscore something) environment variable, and we specify additional flags, such as -s with the path to the temporary settings.xml file. When the top-level Maven is invoked, it automatically reads that variable and complements the flags, which means that when you read the build output on ci.jenkins.io, you don't see that flag.
A
You don't see the -s explicitly because it's internal to Maven. The only way to see it is to run Maven in debug mode, or to print the effective POM, which takes the settings.xml into account. And it appears that I can reproduce the problem when using that form on my machine, while if I use the proper settings.xml in the default location, it's properly merged. That smells like either a bug or a weird behavior of the mrm plugin.
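That lookup order can be simulated in a few lines. Here `resolve_settings` is an illustrative stand-in, and `MAVEN_ARGS` is assumed as the name of the "MAVEN_ something" variable mentioned above (Maven 3.9+ does read a variable of that name, but treat it as an assumption in this sketch).

```shell
# Stand-in for the settings lookup discussed above: an explicit
# "-s <path>" injected through the environment wins over the implicit
# $HOME/.m2/settings.xml default that a local run would use.
resolve_settings() {
  case "${MAVEN_ARGS:-}" in
    *'-s '*) printf '%s\n' "${MAVEN_ARGS#*-s }" ;;      # CI-style flag
    *)       printf '%s\n' "$HOME/.m2/settings.xml" ;;  # local default
  esac
}
MAVEN_ARGS=''
resolve_settings                       # the default location
MAVEN_ARGS='-s /tmp/ci-settings.xml'
resolve_settings                       # the injected temporary file
```

Running Maven's effective-settings or effective-POM output is how to confirm which file the real build actually picked up.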
B
Well, for me, I think that if opting out of the ACP for that example resolves it, we've got other things that I think are more valuable. Let's use the opt-out, and if it works, can we call it good enough? Yes, we will probably have to solve it properly later, but I'm okay with opting out for that one thing.
A
Yeah, the thing is that Basil, like me, discovered that the error is still present even without the mrm plugin. So first we need to stabilize it before thinking about keeping it or using it.
A
So that's all for the Artifactory bandwidth. I'm keeping it open because it's still an action for us to fix the HPI plugin. Mark, do you remember when we meet JFrog next time, to see with them the effects they can see? I don't believe it's...
B
Oh, I don't see it on a calendar, but I think we did set one up. We did plan one. Are you sure? I'm not sure, maybe not; I'll check with Lori Larusso.
A
Okay, Stéphane, your turn about the high availability of replicated services on the public cluster: affinity and PDB. Can you give us a heads-up on this one?
C
Yeah. Anti-affinity alone didn't really do the trick.
C
For the second node, I'm thinking of forcing the minimal number of nodes to two on the autoscaling in Azure, at least for the arm64 pool, but I think for every pool it would be good enough too; and to enforce, everywhere we have more than one application instance, having at least the PDB and anti-affinity, to make sure that if we have to recycle a node, every application with two instances will still be working on the others.
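A PodDisruptionBudget of the kind described might look like the following; the application name and labels are placeholders, not the real charts' values.

```shell
# Print a placeholder PodDisruptionBudget: with two replicas spread by
# anti-affinity, minAvailable=1 keeps one pod serving while a node is
# recycled. All names and labels are illustrative only.
PDB_MANIFEST=$(cat <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-app
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: example-app
EOF
)
printf '%s\n' "$PDB_MANIFEST"
```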
A
It was added between the last team meeting and now; that's why I'm putting a note. Just finishing: "remove account request field from Jira login page". I didn't have time to look into this. Someone, a contributor, proposed help; I'm not really sure what to do. I have some days off this week, so I might have some time to spend on this one, and ask others for help.
A
It requires Jenkins admin rights. Stéphane, your turn for the two last ones: arm64 migration of services.
C
That's on hold, because we don't want to migrate applications that could be disrupted by node recycling. So first the anti-affinity and PDB, and scaling to two on the autoscaling, before continuing the migration.
C
And I might also prioritize the change on the shared pipeline library to have only one pass, on main, instead of main and tag, to publish the images. Okay, that saves cost and time.
A
"Pipeline library optimization", okay, I believe so. Any comment on this one? Oh, I'm confusing it with another one. I believe you added a list of the potential services that could be migrated to arm64.
C
I did one that week already, and that list was not exclusive... no, exhaustive.
A
No problem, it's just to have a status. Yeah, okay. So now, Hervé, if you're still here and still okay, let's go on to the new issues that we've added since the last meeting on the current milestone. In that case: removing pages of jenkins.io when we deploy a new version, removing the pages that are not part of the new build anymore.
D
But to be sure we will not lose content, I've first made a snapshot, and I've put in place a backup of this storage account, onto another storage account. I commented on it: most of the differences seem okay, but there is also some documentation about the library which is not there, probably because of the issue with the backend extension indexer.
D
Okay, so I've asked in the issue whether we should resolve this issue first, or that issue first. Both need more work: it will require a refactoring of the backend extension indexer, or a complete rewrite. I saw some ping-pong between some Jenkins maintainers; I don't really know what's the state of it, so yeah.
B
So yes, it's already broken. Jesse Glick has offered an alternative that will be much faster in terms of the generation process, and much smarter about doing the generation. Right now that thing is very heavyweight, so part of me wonders if we should choose to disable the backend extension indexer completely, and then queue up a task for Hacktoberfest to rewrite it, because it's maybe a little useful right now, but becoming less useful all the time.
D
I tried the diff with an option to ignore file name case, but I don't know why I have more results, so yeah.
B
Well, and I like that, that's a great answer; an answer that my ignorance of who uses htaccess would never have produced. If that's only used by Apache servers, and we long ago stopped using Apache servers to serve this site, then no problem, yeah.
A
I believe so, Hervé. That said, okay, I want to be absolutely careful, because we could break a lot of pages. Like the second one, no problem. The tricky ops person in me will most probably run a cp of the current bucket content on production, at the POSIX filesystem level, then run the delete; and if anything goes wrong, we have a copy that we can move back to the initial state without waiting for a new site build. I don't mind checking on my own; I think we need another view from someone managing the jenkins.io pages, because I don't really know the link between the docs and the pages.
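The rollback-friendly sequence described here, sketched against temporary directories: the real operation would target the bucket's POSIX mount, and the destructive sync is reduced to a plain delete for illustration.

```shell
# Safety-copy-then-delete sketch: keep a full POSIX-level copy before
# the destructive step, so a rollback is a single move, not a rebuild.
work=$(mktemp -d)
mkdir -p "$work/site"
echo "old page" > "$work/site/page.html"

cp -a "$work/site" "$work/site.bak"  # full copy before deleting anything
rm -f "$work/site/page.html"         # stand-in for the sync-with-delete

# Rollback: restore the previous state without waiting for a new build.
rm -rf "$work/site"
mv "$work/site.bak" "$work/site"
cat "$work/site/page.html"           # prints: old page
```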
A
I'm not... I think I have the same view as yours, or even an older view than yours, but I don't mind helping. My only request is that I prefer that we plan the operation: we announce that we will clean up the jenkins.io website on status.jenkins.io, just to be doubly sure, so that if there is something going wrong, yeah, it will have been announced. Is that okay for you, Hervé?
B
Yeah, I like your suggestion. I think we ought to put on Kevin Martens, or Kevin Martens and me, the responsibility to do the review from the documentation side; because, hey, Kevin's the docs officer, and I've had some experience with the documentation. So let's put him and me as the ones to do that check.
D
I think we already have that: each pull request on jenkins.io creates a temporary staging website, on... I don't remember exactly, correct? Yep, so we already have staging websites to compare.
A
Okay, so I'm going to add this one to the new milestone. Is there something else to add? No, all good for me. Okay, so you will have to propose a planned date, Hervé, unless you don't want to drive the next steps, that's your choice; and in any case we'll answer your call for help for a second review of the technical part.
A
Okay, next topic. Stéphane, you created the issue "speed up the Docker image library". Can you just remind us of the goal and why you want to work on this?
C
I discovered, when I was working on the shared pipeline library, that we need to handle the building of the images with a Docker bake; and in fact it's a two-round process. The first one, building from main, builds "latest" and publishes a tag, a GitHub tag, which triggers another build on the tag side from the infra; and that pass, that time, publishes the tag of the image on Docker Hub.
C
We discovered that we hit that quite often, and removing it will save us time. Also, my main preoccupation with that was how to handle a manual tag: how a manual tag will be triggered if we remove the automatic triggering for tags. You answered that, but I'm sorry, I didn't get the full extent of your answer, so we need to talk through everything; my English is the problem there.
A
Git tag triggering, so yeah. The idea is that first we all have to understand: we have discovery, and triggering, when you have a tag in Jenkins, and both require specific configuration. Today our Docker images on infra.ci are all set up to do both. Discovering a tag means: when you receive a webhook, you scan the repository, and Jenkins can say "oh, I see there is a tag", and it creates a pipeline in the Tags tab of the multibranch pipeline.
A
It creates a pipeline for that tag, but it doesn't trigger that tag by default. The goal is that if you migrate to Jenkins and it scans the repository again, by default you shouldn't have something trying to rebuild all tags, which makes sense. So that's the discovery, and that one is not a problem: at any moment, a user can manually go to a tag and trigger that build. If we focus on that part, that means what you have to handle in the pipeline library is the fact that, if the git tag environment variable is set, then you might want to have a specific behavior, such as: "oh, in that case, I already know the version requested by the user, so I won't trigger the automatic version detection, and I will use the environment variable", the git tag or GitHub tag variable, which is set. If it's not set...
A
But that means we will have to first disable, on infra.ci, the automatic triggering. That automatic trigger is an additional job-configuration-level item that says: oh, if you discover a tag, and that tag points to a commit which is newer than three days old, then automatically trigger a build. And that one you don't want: otherwise, when you push the git tag, that will trigger a build that will rebuild the same tag, and that will be a loop.
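In the pipeline library, the whole branch-versus-tag behavior reduces to one check; a sketch follows. TAG_NAME is assumed here as the variable name exposed to tag builds, and the fallback value is a stub for the automatic detection.

```shell
# Sketch of the version resolution described above: a tag build reuses
# the pushed tag, a main-branch build falls back to automatic version
# detection (stubbed out here). TAG_NAME is an assumed variable name.
resolve_version() {
  if [ -n "${TAG_NAME:-}" ]; then
    printf '%s\n' "$TAG_NAME"      # tag build: trust the pushed tag
  else
    printf '%s\n' "auto-detected"  # main build: compute the next version
  fi
}
unset TAG_NAME
resolve_version      # prints: auto-detected
TAG_NAME='1.2.3'
resolve_version      # prints: 1.2.3
```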
A
I might be wrong; that's the first analysis, and maybe I'm forgetting something. But I believe your question is the proper one, because we should still reserve ourselves the ability to manually trigger a build. That should be handled by the pipeline library, but that will be only a rare case. Most of the time, our main target is: we run the main branch build, and that will automatically create a new tag.
A
And my proposal is that your first version should just focus on the git tag as if it was the main branch: when we want a new release, we trigger a main branch build that will push a new tag. We start with this one, and then we implement the git tag support only if we see we need that case, because I don't believe we have had that case for months.
A
For get.jenkins.io: since a few months, the mirror was marked out of sync by our get.jenkins.io, which was the case last time we checked, but that was months or even years ago. The person who asked contacted us and said "hey, it's up to date", so we don't know when, and whether we did something wrong, or if they had some delay and then fixed it. I don't know what happened in the past, but right now it's updated quite regularly.
B
Yeah, about 18 months or two years ago, we subscribed, as the Jenkins project, to the Oracle initiative as they were launching their cloud, and we used it to save money. However, that was a 12-month or an 18-month limited thing, and now we're back to paying full price for Oracle resources. Rather than having one more full-price cloud provider, we think the best choice is: let's stop using Oracle Cloud, and we'll switch to using other locations.
B
So I've got the action item to figure out how to pay the existing Oracle Cloud bill; it can be paid by the Jenkins project, or it could be donated by another company, whatever. We've got archives.jenkins.io as the only currently running resource in Oracle Cloud; all the other resources are shut down now. But that one does need to move; Cloudflare R2 is a candidate.
B
archives.jenkins.io looks an awful lot like a web server just delivering a whole bunch of files, infrequently. If not, we know how to move it to other cloud providers: AWS, Azure, DigitalOcean, etc. It's not a heavy lift: a lot of files, but the files get copied, and then we start the web server and we're done. Did that cover what you needed, Damien, or is there something you'd like me to discuss?
A
For me it's perfect. For the others: does that explain everything for you? Cool. So I have two issues to create for that topic, then: one for archives.jenkins.io, and one for the cleanup of the rest of the projects; we have Terraform projects, states and documentation.
A
The data on that server is the same as on the bucket that is used by get.jenkins.io to check the fingerprint of a file, to know to which mirror it redirects. So, theoretically, we should have the same data on both.
So
the
proposal
I'm
making
is
on
short
term,
is
to
add
an
HTTP
server
or
add
viewers
on
an
existing
HTTP
server
to
that
bucket
on
Azure,
which
mean
it
will
effectively
move.
A
The second idea, and I believe Hervé made the same proposal yesterday during a team meeting, is that we could use Cloudflare or a DigitalOcean Space as a mirror to host that one. If it's DigitalOcean, that could work: since we removed DigitalOcean from the update center, which is moving fully to Cloudflare, that means I could build on top of the work that Hervé did around the Helm chart.
A
So
we
could
deploy
a
mirror
of
that
data
on
digital
ocean
that
will
increase
the
cost
and
digital
listen,
but
it
will
avoid
any
risk
on
the
Azure
Port.
So
it's
a
bit
more
work
a
bit
more
slower
because
we
need
to
synchronize
all
file
one
time
from
either
archive
or
Azure
to
digitalocean
one
time,
but
then
we
have
a
mirror
on
digital
Ocean
and
the
on
bandwidth
could
be
charged
from
here.
I
haven't
checked
the
amount
of
data
that
goes
out
of
of
this
I.
A
So I will add the issues, and we will add these issues to the current milestone. That's a top-priority topic, because we want to find a way to avoid paying more.
A
So, Mark, your turn.
B
All right, so this one, and Damien, if you could open the diagram so that people can see it on screen: this one will be raised as a discussion on the Jenkins developer list and user list, just proposing a transition from our current way of supporting Java to an eventual 2+2+2 model. Java releases now come out on a consistent clock.
B
However, the Jenkins project developers really don't want to support three Java releases concurrently. The idea is: let's only support two concurrently, and in order to do that, we're proposing this 2+2+2 model. For the first two years of a Java release, we will support the Java release but not require it; it's not the mandatory minimum.
B
The idea... this is not a commitment, this is not a "yes, everyone's agreed on this". This is an idea that I'm proposing, and it will be discussed further. The hope, though, is that by early October we'll be able to have agreement from people that this is a good model for us to go towards; and Basil Crow has agreed to do a blog post announcing Java 21 support, announcing the end of life of Java 11 in October of 2024, and announcing this transition.
A
I believe that, as infrastructure, it looks like it's okay, given the way we were able to release Java 21 support and the fact that it's now delivered. My proposal is that, in the idea of supporting that kind of effort, whether or not that will be the plan, or the plan might be changed for the Jenkins project, we try, ideally before the end of the year, to move all our controllers and agents to JDK 21, to support the overall effort.
B
I like that idea; that's one that I hadn't considered. One of the things Basil noted in governance yesterday is that there are activities in the first two years of supporting a Java release that are not really described here: things like install the pre-release before it's available as a release, install the release, announce the end of life of the preceding release. All those things that we need to do, a detailed view of one of these bars and a look at what the sequences are, are one of the things that should be added.
A
Thanks for that proposal, that makes sense. That's really leading edge for Jenkins; we haven't had that in recent years. Personally, I really like that proposal.
A
So yeah, let's challenge ourselves, for the infrastructure, to support that effort as much as possible. We might want to start with private controllers, or eventually start with ci.jenkins.io on its own; it depends. I propose that we first wait for JDK 21 to be used with its official, non-early-access release.