From YouTube: 2023 08 22 Jenkins Infra Meeting
A: Let's get started with the announcements. We were able to successfully release the weekly core 2.419 last week with no problem, but today we did not successfully release version 2.420.
A: I even looked at this; ideally we should re-launch it. I don't know, maybe it could be a consequence of using JDK 17 on the agents, though I doubt it. Maybe it's in the memory management. I haven't looked at the metrics of the bots; there are CPU and memory metrics, and maybe the memory usage is different. I've got to check.
A: Will it be the 29th of August, is that correct? We have a new LTS line that should be released tomorrow.
A: So since we provide different operating systems and architectures than we used to with that old version, we decided to rebuild the whole set for each version to avoid any risk. Let's say if you have AlmaLinux, that's okay, but if you have Debian you have an old version. The consequence is that the checksums of these images changed.
B: And I think we've got users asking: could we also republish older weeklies to repair it? So we had one requesting: hey, 2.419 is the current, the most recent weekly. Is it feasible for us to rebuild, to republish all the way back, you know, several weeks back, or is that just technically infeasible?
A: The answer is no for the weeklies: they can use either 2.419 or the upcoming 2.420.
D: Eventually, could we modify the pipeline to not push on latest in certain cases?
A: So that's why, yeah, we publish a given release atomically. I'm sorry for the end users, but I would be interested in the use case and the reason why they cannot use the latest weekly version. Since it's updated once a week, they will be at least two weeklies late. I mean, given we have a new LTS tomorrow, and given the number of the upcoming LTS, I would suggest they wait for Thursday and either switch to the new LTS or switch to the latest weekly.
B: Okay, so that means a given tag is atomic per OS and platform and must be rebuilt in the proper order, right, to ensure latest... got it. Or at least to ensure the latest weekly is rebuilt.
A: And yeah, that's the problem, and we still need to find a solution to protect ourselves from this scenario. About your question, can we discard them: we can delete the tags on Docker Hub, brilliantly simple, but that means this version would be gone.
A: The problem is that the Docker registry does not allow that. Eventually on Artifactory, maybe, but even on Artifactory I'm not sure it's possible; it's only for Maven or Node.js artifacts. The Docker registry does not allow locking a given tag. And the thing is that here we have a lot of restrictions, but the restrictions we can put in place are only about whether Jenkins is behaving properly, and here it's really a Jenkins bug, or something really weird. Okay.
A: Yeah, I will add messages, because Tim had a proposal to protect ourselves even more than today. Right now we are not publishing the master branch, so that's a good thing. The problem is the old tags: we set up the job to not build tags older than three days, and still they were built, which doesn't make sense, so I might be missing something. That's really, really weird. As for Tim's suggestion, I wasn't able to find the multibranch job traits he was mentioning.
A: So the last thing would be protecting ourselves at the pipeline level. We have the environment variable with the tag, so we could fail the pipeline. No, that one won't work, because the old tag doesn't have the pipeline change. So yeah, that's a nightmare. I would advise against removing the tags; that's the same kind of problem as removing Docker tags: removing tags from GitHub, especially when they are associated with GitHub releases.
A: Yeah, I'm a bit sad about this; it's really not trustable. We'll see what we can do, and I will report next week on that topic. Is that okay for everyone? Yes.
B: Nomination is allowed, exactly. Oh, self-nomination is allowed, but the crucial thing for me is that the infrastructure officer position will be up for consideration as part of this election; we elect new officers every year, likewise the documentation officer. So Kevin and Damien, you are welcome to continue and nominate yourselves, and the rest of us will happily nominate you. But if you say "I'm not sure I want to do that", then you need to go looking for somebody who'll take your place as a candidate.
A: That's not easy for me to say, as the current officer trying to stay officer, but yes, it's important to have people presenting the project with ideas and changes if needed. In any case, don't forget, during the second period, between the 18th of September and November, to register yourself as a voter. Yes, the condition, the threshold: I suppose it's the same as last year, Mark? Yes.
B: Oh okay, I think we've got one more major event though. Yes: Java 21 support, or rather Java 21, arrives in our Jenkins containers today and tomorrow. Right now it's still an early-access version, and saying "support" is too strong a term, right? Java 21 Early Access: it will be in Jenkins, yeah, as Early Access. Good, perfect. And so it's expected to be available today and tomorrow.
A: So that's great news, and a reminder for the infrastructure team to start testing it on our controllers in production.
A: I would prefer to wait for the initial release, because it doesn't have a CVE process in place right now, so running this publicly might be a bit risky.
B
October
September
September.
A: Okay for everyone? Okay, so let's start with the work we were able to finish during the last milestone. Let me open the link. This was a two-week milestone with a lot of tiny tasks, but not a lot of people available at the same time, so overall we had the workload of one week. Let's start with "unable to release plugin, 401"; that one was opened today.
A: It's a contributor that tried too early to release their plugin using the automatic CD process, while their token wasn't created yet by the Repository Permission Updater. So we explained and told them to wait three hours. No feedback, but everything looked good: they had a token, it was valid, and I tested the token myself. It was just a matter of timing; they tried 10 minutes too early.
A: Yeah, nothing else to say, except that we have a bunch of different agent definitions everywhere. Last time we were able to cut them by half, and I was able to reduce that sprawl by 20 this year. I hope that for JDK 21 we will be able to have a centralized location: one for Linux and one for Windows.
A: So yeah, it's a personal minor failure: I was hoping we would be able to finish the all-in-one image before JDK 17. We weren't, for a lot of good reasons, so I'm just a bit sad; it was challenging. But everything is working well. I hope we will see operational benefits in memory usage, but that still has to be measured so we can see the effect on the service.
A: Okay, next: a repository of a plugin that was archived because the product behind it was closed two years ago, Maven incremental command. That was not related to infrastructure at all. However, about the instructions, it's the same kind of thing as the plugin release case; that's a note for Mark and for Kevin: the mvn incremental command instructions look good on paper, but there is still a bug, and yeah.
A: Not only would fixing the bug help, but the plugin documentation could also be improved by saying: if you have an error, you have to look at the detailed instructions here. So that's just a link and an explanation, because in that case the user carefully followed the instructions from jenkins.io and still had an issue, because their pom.xml was missing something, because they played with it. That wasn't an easy one, but a good opportunity to dig into how Maven works with Jenkins.
A: Next, as pointed out by Tim for the jenkinsci/docker repository: based on the work done by Bruno and Stefan about locating the proper Temurin 21 binaries, Tim was able to find some hidden releases which are in fact official releases by Temurin, preview releases with a fixed name on the release and tag name, which will allow automation and tracking of updating the version until the final version is out. Okay, so Stefan, does that mean we should...
A: We checked together. Right now, for the infra, we are using the nightly builds, while the preview Docker image will use the weekly. That's the only difference, so we might want to put our hands on this one if we have time, but that's not a mandatory topic. In any case, I'm not opening an issue; if we have time we'll do it, and until the official release, where we will have to do something, we can use the weekly with its fixed tag name, allowing tracking with updatecli. Any question on JDK 21?
A: A lot of Puppet magic. "Not able to log into Artifactory": that's the problem with some users that need a manual change of their setup on Artifactory in order to use the LDAP password instead of whatever local password they have. Still not fixed; I will come back on the topic. Still open, but the issue for that particular user was fixed; right now it's still a manual process. Next: CD releases failing with an invalid token on trusted.ci.jenkins.io.
A: That one happened last week, as far as I remember, and it's because the administration token used by the Repository Permission Updater job, which allows it to connect to the JFrog API to create and reset tokens and permissions on Artifactory, that top-level admin token, was expired. So of course the plugin release build was failing with 401, because it wasn't presenting any valid token.
A: So a new token was generated once we understood. I've updated the calendar, because the infra team wasn't in charge of that token. Last time it was a token valid for three years, created before any of us was working here; it was done by Daniel and Olivier, and they didn't have the calendar. So now it's on the calendar, it's only valid for one year, I've added plenty of notifications, and I also showed Stefan on trusted.
A: Any questions, or things to add on this one? Okay. We had three issues closed, as usual with no answer back from the user.
B: So, Artifactory bandwidth reduction: we've got agreement from JFrog that if we can successfully stop mirroring Central, the Maven Central repository, that will be sufficient in terms of their expectations for us to reduce our bandwidth use. The reason they were willing to do that is because our analysis showed that it is the majority of the misused mirrors, and since it's the majority, they said that's fine.
B: It's also a change that we believe we can make without requiring new releases of POMs, without requiring new releases of plugins, without requiring changes to 2000-plus GitHub repositories. So we like that they were willing to compromise on that; we're very, very grateful. We have more work to do to assess: we did an initial brownout and found positive results in the ways we'd hoped, but some surprising messages need more investigation. We'll report our latest results to JFrog in a meeting on Thursday or Friday, Damien and I.
A: No, that's still the same; I didn't have time to spend on that one either. So I hope we will have time Thursday to prepare for Friday.
A: Okay, so then let's continue. A new issue that I've added around the public cluster: we had failures on one of our websites and another one, I don't remember which. In fact, whether we use Intel or Arm for these services, they all have the same problem: we don't define an anti-affinity, which means all the replicas of a given service can be scheduled on the same machine. Sometimes they land on different nodes, sometimes on the same one. It's something that has existed for months.
A: It has always existed, because our Helm charts are not taking that into account. What happened is that during a regular update process I wasn't careful, because I assumed these services were replicated, so I triggered the operating system updates for security patches, and that resulted in, yeah, the service being shut down for the time the new machine took to start. It was only one or two minutes, but it was caught by our security team, because they were testing things last week during their security advisory.
A
So
that
issue
has
been
open
to
track
this.
That
means
we
will
have
to
update
M
chart
by
m
chart
to
Adam
taffinity,
which
should
have
the
result
of
scaling
at
least
the
RM
based
pods
to
two
machine.
At
the
same
time,.
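The anti-affinity being discussed can be sketched as a Deployment fragment; this is a minimal illustration only, assuming a standard Kubernetes chart, and the label key and value (`app: javadoc`) are placeholders, not the project's actual chart values:

```yaml
# Illustrative Deployment fragment: ask the scheduler to avoid placing two
# replicas of the same service on the same node.
spec:
  replicas: 2
  template:
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" still schedules when only one node is available;
          # requiredDuringSchedulingIgnoredDuringExecution would hard-fail instead.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: javadoc
```

With `topologyKey: kubernetes.io/hostname`, the term counts pods per node, which is exactly the "two replicas on one machine" failure mode described above.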
B: Okay, so Damien, I need to be sure I understood what you just said. So we've got today's configuration, maybe running a single pod to serve javadoc.jenkins.io or something like that. No, they...
C: We need it on two different computers, and the chance of having the two pods on the same host is bigger with Arm, because we have a small number of those nodes. So we should only need one node for the load, but we should have two because of that kind of problem.
A: Yeah, I'm not sure if we are able to work on this in the upcoming week, but I think we should. Personally I would advise, Stefan, that once you have finished the arm64 Docker image builds, you move to this one before going back to the migration of workloads to arm64.
A: So now the new users have the proper setup, which means we only have to find a way to treat the other thousands of users and make sure they have the proper setup. I didn't have time to spend on this one, given that we only had one user asking for that this week, but we still need to write a batch script. Of course I asked them, and they said: oh, you have a nice API, you could use it. Yes, so that means some bash script will be written soon.
A: I will be happy to share the burden or delegate to someone, but only an administrator can validate and test that. So yeah, unless you want to set up a full Artifactory instance on your machine, plus the LDAP, the connection between both, and the test set of users... better to test in production.
A
Okay,
so
I
will
continue
working
on
this
topic
this
week.
I
hope
I
started
something
on
the
go
this
week,
so
I,
that's
a
long
time,
plugin
site
build
commonly
failed
on
in
first
CI
when
accessing
plugins.
Last
time,
I
checked
before
ever
you
went
to
holidays
you
this.
You
detected
that
every
three
hours
we
had
a
peak
of
502
errors
when
the
site
was
generated
from
fastly,
plugin,
backend
I.
Don't
think
neither
you
want
higher
time
to
to
dive
on
that
topic.
A
So
with
people
not
dive
more.
A
We
are
not
sure
if
it's
fastly
or
sending
the
error
or
if
it's
firstly,
answering
an
error
due
to
an
error
on
the
back
ends.
So
yeah
got
a
dip
on
this
topic.
We
are
not
at
ease
with
the
plug
inside
architecture.
To
be
quite
honest,
I'm
not
sure
if
we
have
time
right
now,
are
they
I
believe
you
made
a
proposal
and
maybe
it's
already
done?
Can
you
remind
me
if
you
already
implemented
a
retry
on
the
build
setup.
A
Proposed
retry
and
pipeline
on
short
term
is
that
okay,
you
to
check
and
if
it's
not
done
to
add
a
retry
on
the
build
I
know
it's
always
for
builds
like
this
one.
When
we
generate
something
that
the
same
for
Apu
I
know,
that's,
we
have
a
lot
of
developers
that
say
hey.
A
We
could
Implement
free,
try
natively
inside
the
batch
jobs
and
whatever,
but
on
the
infrastructure
side
we
don't
really
have
the
expertise
to
jump
on
these
elements
on
a
language
on
the
domain
that
we
not
always
master
so
at
least
on
short
term,
adding
a
retry,
that's
not
the
best,
but
one
two
three
retry
eventually
avoids
errors
most
of
the
time,
so
the
goal
is
to
decrease
the
errors,
making
them
a
bit
more
invisible,
though
that's
most
of
the
time.
The
problem,
but
now
that
one
is
not
is
visible.
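The short-term mitigation described here, wrapping a flaky build step in a bounded retry, can be sketched in shell; `flaky_step` is a stand-in for the real build command, not part of the actual pipeline:

```shell
#!/bin/sh
# Bounded retry: run a command up to $1 times, backing off between attempts.
retry() {
  max="$1"; shift
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "failed after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1   # brief back-off before the next attempt
  done
}

# Stand-in for a flaky build step: succeeds on the 3rd attempt.
marker="/tmp/retry_marker.$$"
flaky_step() {
  echo x >> "$marker"
  [ "$(wc -l < "$marker")" -ge 3 ]
}

retry 5 flaky_step && echo "build step succeeded"   # prints "build step succeeded"
rm -f "$marker"
```

The same shape transposes to a Jenkins declarative pipeline's built-in `retry(n) { ... }` block, which is presumably what the proposal referred to.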
D: We first looked, I've looked for the moment, at S3-like buckets as volumes; most solutions are using s3fs.
D
To
mount
S3
as
fuse
system,
which
is
a
no-go
for
us
as
the
template
running
with
privileged
context,
education,
so
I've
looked
at
visual
activity
decommissioned
the
mirror
we
had
previously,
as
you
ordered,
mirror
after
jenkins.io,
which
is
using
the
same
storage
record,
which
was
using
the
same
storage
account
as
mirror
bits.
Foreign.
D
To
try
to
add
this
here,
two
buckets
mirror
with
its
own:
your
url
HTTP
URL,
and
this
URL
running
on
another
service.
D
And
the
next
step
will
will
be
to
transplant
the
mini
SDR
access
redirection
to
get
similar
behaviors
and
the
current
updates
searching
Instagram.
B: Well, so, I'm sorry, I was thinking from the provider side, from the side that we provide. So I have an update site inside my private network, and it reads from an rsync server that is, I think, one of the mirrors of get.jenkins.io, but the central server is providing both rsync and FTP. Is that what you're saying, Hervé? I'm trying to understand, or is it...
A: And there is also the other thing you mentioned, Mark; they also had the same confusion initially. The mirrors that we provide can download the data we publish on get.jenkins.io, so the HPI files, using rsync, but they also need to provide an rsync server themselves. So they have both client and server, because we need to be able to scan them; or FTP is the alternative, of course.
A
So
that's
why
we
have
today
that
tricky
topic,
if
we
use
as
the
S3
bucket
in
cloudflare
or
on
Amazon
or
somewhere
else,
we
have
a
cheap
HTTP
server
that
we
can
replicate
and
the
storage
cost
nothing,
but
mirobit
is
not
able
to
scan
it.
As
everybody
explain
so
in
the
case
of
the
update,
Center
The
Proposal
I
made
12
is
to
say,
hey,
let's
provide
the
Earth
sync
server
on
the
reference,
because
we
control
the
updates.
It's
atomic.
A: So if mirrorbits points to the rsync server on the reference data, it's not a problem: it will think the file is there, and we don't have the risk, or the risk is really low, of a discrepancy between Cloudflare and the reference. That is not possible with get.jenkins.io, of course, because we don't control when a given mirror has run their rsync to get the data, so we need to carefully scan each of the mirrors at any moment.
A
It's
only
because
we
control
the
update
of
the
mirrors
in
the
case
of
Update
Center
that
that's
mandatory
for
us
in
any
case,
for
security
reason,
of
course,
so
that's
okay
to
get
started
with
this
one.
We
have
other
Alternatives,
we
can
stop
using
saying
we
can
say
no
more
buckets.
We
will
use
a
machine
or
whatever
service
somewhere
in
the
world
and
use
these
mirrors
for
at
least
moving
a
current
update
Center
to
Azure
on
first
type.
A
Cloudflare
is
only
there
because
it's
cheap,
that
would
have
been
a
news
console
and
they
have
a
foot
on
the
China
Network.
But
if
we
don't
have
an
easy
solution
or
if
it
doesn't
work
as
I
said,
then
let's
use
a
dumb
virtual
machine
with
storage
web
server
and
everything
in
digital
7
and
Azure
on
different
locations.
A
Is
not
quite
often
updated.
The
code
is
quite
simple.
That
will
be
a
nice
feature
request
to
say:
hey,
why
not
adding
S3
as
a
scanning
protocol,
which
means
we
could
directly
say
hey,
we
have
has
three
compliant
buckets
and
a
mirror
bit
could
directly
take
care
of
the
scanning
avoiding
having
to
provide
a
nursing
for
these
servers.
All
the
S3
providers,
AWS
digitalocean
scaleway,
VH
cloudflare.
They
all
provide
both
S3
and
HTTP
protocol.
A: Okay, unless there is a question or objection, next topic. Nothing to do except to say it's a GitHub organization permission managed by the Jenkins admins, where they only use our tracker for centralization. A word about the next one: we have two issues that are linked to each other, migrating cert.ci to the new network, and restricting to VPN users only the access to both trusted.ci and the new cert.ci.
A: Tim already suggested that we should create a Terraform module, since it's the first instance of the same topology; in particular, trusted and cert will be exactly the same topology at the network level. So in order to achieve a proper VPN restriction, I chose to refactor into a Terraform module, and now cert.ci and trusted are using that module. So now I should be able to spin up cert.ci's new virtual machine, and we'll migrate its data onto this one.
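A refactoring like the one described, one Terraform module instantiated for each controller that shares the network topology, could look roughly like this; the module path, variable names, and CIDRs are all illustrative, not the real jenkins-infra code:

```hcl
# Illustrative only: one shared module, two instances with identical topology.
module "trusted_ci" {
  source              = "./modules/controller-network" # hypothetical module path
  controller_name     = "trusted.ci"
  subnet_cidr         = "10.0.1.0/24"
  vpn_allowed_sources = [var.vpn_subnet] # restrict inbound access to VPN users
}

module "cert_ci" {
  source              = "./modules/controller-network"
  controller_name     = "cert.ci"
  subnet_cidr         = "10.0.2.0/24"
  vpn_allowed_sources = [var.vpn_subnet]
}
```

Factoring the topology into a module means the VPN restriction is defined once and applied identically to both controllers, which is the point being made here.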
A
And
this
one
blocked
by
below
issue
30.
migrate,
Pikachu
origin
service
from
AWS
to
Azure,
so
I
didn't
had
any
time
to
work
on
it.
It's
the
same
idea
as
what
are
we
is
doing.
I
wanted
to
walk
on
the
on
the
foot
of
Airways,
especially
with
The
Walking
Dead,
on
improving
the
M
charts,
since
we
will
have
holidays
and
eventually
take
a
takeover
of
the
updates
to
in
kinsayu
I
propose
to
move
that
issue
on
the
backlog
until
Update
Center.
A
Until
we
have
a
clearer
view,
it's
on
the
good
direction,
but
I
want
I,
don't
want
to
follow,
ervis,
walk
and
then
having
to
Chen
two
or
three
times,
because
we
had
bad
surprises
in
the
update
Center,
so
I
prefer
that
we
are,
we
start
to
have
something.
We
are
close
to
a
stable
situation
for
date,
Center
thanks
to
to
that
job,
so
yeah
I
prefer
having
less
things
but
doing
them.
Sequentially,
but
properly
is
that
okay
for
everyone,
foreign.
A
Looking
command
prompts
to
avoid
confusion
between
Services,
Stefan
I.
Believe
you
wanted
to
work
on
this
one,
but
you
didn't
had
time.
A: Still working on it. Next issue: ATH builds commonly unresponsive. There were different ideas that block us from closing the issue: build the ATH on spot instances on the first try and then fall back to on-demand.
A
If
that's
just
a
few
pipeline
code
line
Stefan.
Is
it
okay
if
I
take
over
this
one
for
billing
reasons
on
Azure
I
just
want
to
have
this
implemented
before
every
developer
come
back
from
early
days
and
start
pics
of
ADH
builds
of.
A: Next item, arm64: can you give us an update? Yeah.
C: That's easy: I'm working on the shared pipeline library to be able to build our images on AMD and Arm.
A: I believe right now you are switching to the use of Docker bake files, Docker bake for Linux at least, yes, to allow arm64. Windows on arm64 is not a thing for our cloud providers, so that will be Windows on Intel, and Linux on Arm and Intel. And the benefit of Docker bake: it's a bit of a revamp and changes the behavior of the pipeline library, but the benefit is that it also means we can build images for PowerPC or s390x.
A
If
you
want
to
run
custom
agent
or
custom
services
on
such
machines,
meaning
if
you
have
a
sponsor
giving
us
this
kind
of
machines,
we
can
use
them
for
production
workloads,
especially
for
aha
services.
That
mean
we
could
have
one
Intel
machine
and
then
replicas
on,
let's
say
cheaper
machines
in
the
future.
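The Docker Bake switch mentioned here can be illustrated with a minimal `docker-bake.hcl`; the target name, tag, and platform list are examples, not the pipeline library's real configuration:

```hcl
# Example bake file: one target cross-built for several platforms with
# `docker buildx bake`. Adding ppc64le or s390x is one more list entry.
group "default" {
  targets = ["agent"]
}

target "agent" {
  context    = "."
  dockerfile = "Dockerfile"
  platforms  = ["linux/amd64", "linux/arm64", "linux/s390x", "linux/ppc64le"]
  tags       = ["example.org/agent:latest"]
}
```

Running `docker buildx bake agent` then builds all listed platforms in one invocation (given a buildx builder with the matching QEMU emulators registered), which is what makes the extra architectures nearly free once the bake file exists.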
A
C
A
A
A
A
A
The
odds
limit
will
be
October
20th
to
2023,
so
that
means
we
should
plan
1.26
in
September.
Just
as
a
reminder.