From YouTube: 2023 05 23 Jenkins Infra Meeting
A: So, announcements: as far as I understand, the one for today is about core. The weekly release has been delayed, but it's fixed and the job is running. Is that what you were saying?
A: That refers to the announcement we had last week: on the 16th we had the security advisory. We already talked about that, and we don't have other announcements on the mailing list.
A: Next, major events: none that I'm aware of. Is there any major event that you are aware of, folks? No.
A: So, let's get started with that item. It's part of work done in the Platform SIG meeting, but it impacts the team too. We changed the way the official Docker image of the Jenkins controller is built, about 10 days ago. We used to have a script, run on each push to master, that ran on our private trusted.ci instance. For the Linux image, the script was in charge of, okay:
A: Let's check the two last versions of the weekly line and the two last versions of the LTS line, and check on Docker Hub that each of the definitions of the image — the different operating systems, the different CPUs, the different tags — are all published. If they are not, then it fails and tries to republish the image. On paper that looks really good, and it has worked, somehow, for the past years.
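The check described above can be sketched roughly like this — a minimal sketch, assuming the Docker Hub v2 tags endpoint, with an illustrative tag matrix (the real jenkinsci/jenkins build matrix is larger):

```shell
#!/bin/sh
# Hypothetical sketch of the old publication check: for one Jenkins version,
# enumerate the expected image tags and report any that are not published.
# The variant list is illustrative, not the real build matrix.

# Build the list of expected Docker tags for one Jenkins version.
expected_tags() {
  version="$1"
  for variant in alpine-jdk11 alpine-jdk17 jdk11 jdk17; do
    printf '%s-%s\n' "$version" "$variant"
  done
}

# Check whether one tag is published on Docker Hub (needs network access).
tag_published() {
  curl -fsSL "https://hub.docker.com/v2/repositories/jenkins/jenkins/tags/$1/" >/dev/null 2>&1
}

# The old job looped over the two last weekly and two last LTS versions and
# re-published anything missing; here we only report missing tags:
for tag in $(expected_tags "2.401.1"); do
  tag_published "$tag" || echo "missing: $tag"
done
```

The real script also drove the republish step when a tag was missing, which is exactly what caused the overwriting problem discussed next.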
A: The issue we saw over the past year was that sometimes, when we introduced a new platform — a new operating system, a major operating system change, a new CPU platform — the two past releases were overwritten, rebuilt and changed, which changed the checksums for end users, and that started to be less and less acceptable. So we changed it recently: now we need to create a tag, and the tag is the Jenkins version.
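A minimal sketch of the new trigger, assuming the build starts from a git tag named after the Jenkins version (the remote name and version below are illustrative):

```shell
# Build the fully qualified ref for a version tag: the tag name IS the
# Jenkins version, so each version is built exactly once and never rebuilt.
tag_ref() { printf 'refs/tags/%s\n' "$1"; }

# The release automation (or a maintainer with tag permission) would then
# run something like:
#   git tag "2.401.1" && git push origin "$(tag_ref 2.401.1)"
```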
A
So
the
advantage
is
that
we
only
build
the
new
version.
We
don't
have
to
fear
of
a
written
or
adding
new
platform,
so
we
can
deliver
way
more
often
that
create
new
questions
that
will
be
for
the
Sig
platform
meeting
later
today.
Do
we
need
now
for
the
docker
image
to
have
LTS
and
weekly
branches
or
master
and
LTS?
A: We should, but that's the new question. Now we have the foundation, and for us, the infrastructure team, that means we need to set up the permissions correctly. The request from Alex was: should we add the members of the release team as maintainers of the official Docker image, so they would be able to create the tags?
A: That could be a solution, because the core release system has a token that has the permission to create the tag. So maybe we could avoid that; until then, we either have to build that automation or decide if it's okay to have a few more members as maintainers. That could be an intermediate step: members such as Alex Brandes.
A: My proposal is that we use that intermediate step: we add a few selected team members. Yacom is already able to; Mark and I are maintainers, so we have the permission, and eventually we could add infrastructure team members, but I think the three of us are there. I propose that we only add Alex Brandes, nominatively, until we settle this — or we accept that every Jenkins release team member is also a co-maintainer of the Docker image. That's a balance to find.
B: So, on that — tell me about the risk that you see there, I'm not sure. We've already got maintainers on the container image, but they're not necessarily release leads, and so the idea was: should we add the release leads to the list of people who are allowed to maintain the controller container image? Exactly.
A: The risk is that, in order to create a tag, you need to be granted a permission that allows you to push code, not only pull requests. So that means you need to be a writer on the repository. We can eventually protect the master branch, but there is still a permission risk in that area, compared to creating the tag automatically as part of the core release process.
B: Going even further, I guess we could consider converging the controller's container image into the Jenkins core image, but then the problem is that this locks out the current container maintainers, as they would also have to be core maintainers. That's probably not healthy. Okay.
A: What I personally want to bring up and push forward at the Platform SIG meeting is to stop using the exact same version for Jenkins core and the Jenkins image. We need a way to say that this version of the Jenkins image ships that version of Jenkins, but we need something like a suffix, as the package builds have, because Jenkins core is a dependency of the image; it's not the image.
B: Right, I see your point, and certainly other package deliveries — other people who are doing these kinds of things — are doing something similar. Many of the operating system container images have a dash suffix that they use to say: this is version such-and-such, but it's this iteration of it. Thanks.
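The "-N" suffix idea can be sketched like this; the scheme itself is still a proposal, and the example tag is illustrative:

```shell
# Sketch of the proposed suffix scheme: an image tag like "2.401.1-2" would
# mean "the second build of the image packaging Jenkins 2.401.1".

image_version()   { printf '%s\n' "${1%-*}"; }   # strip the trailing "-N"
image_iteration() { printf '%s\n' "${1##*-}"; }  # keep only the trailing "N"
```

This mirrors what Debian-style package versions do with their build/revision component: the upstream version stays stable while the packaging iteration can move.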
A: Let's continue now with the tasks that we were able to finish last week, unless anyone has a comment, objection or question on the topics we are about to close — one, two, three — okay. Now, what are the tasks that we were able to close during the past milestone? Thanks Stefan for the DigitalOcean leftovers.
A: We had the leftover persistent volumes: since we don't have a garbage collector on DigitalOcean, we get those, but not often, so that's okay. Confirmed that the Azure budget should be under the 10K forecasted for this month. One of the main efforts was on the Azure virtual machine agents for ci.jenkins.io: the change of instances allowed us to drop creating an SSD for each machine, and we now use a new instance type with the same capacity.
A: If that happens, it means we might have to define two kinds of instances for this job: some that will be the default, with spot pricing, and some non-spot ones for critical jobs, where you explicitly call the label "critical" in the pipeline. But people using that label will require a close review of whether it's really needed.
A: Also, we have some work, which we'll discuss a bit later, started by Hervé and Stefan, to start using virtual machines on DigitalOcean. That could be a solution: we might say we only use low-cost spot instances on Azure, and on DigitalOcean we use low-cost instances that won't fail during the builds. That could also be a way to extend and spread the load. But right now the cost has decreased, so the cost issue is closed. Sorry — Launchable: Hervé, can you give us a quick report?
A: Cool. So the initial phase, discovery, is now finished; the second phase, setting up the tooling, has been done; and now we have finished the third phase, optimization, and it's usable. As far as I can tell there is a lot of hidden work by Hervé on automating the update of Launchable, to keep track of Launchable on all of our assets asynchronously, so that should allow us to keep up with the new changes provided by Launchable. Nice job.
A: Okay, I will add one last condition before closing: since you removed any usage of AWS on infra.ci, we have to check that we don't have any AWS credentials left on the CI, just to be sure we don't need them — and that also means removing the EC2 plugin and its associated plugins from the controller. Okay.
A: But only on infra.ci, there is that — okay for you? Yes? Okay, so I propose we keep the issue on the milestone, and we will close it once we no longer have any reference to the credentials or the plugin on infra.ci itself. Is that okay for you? Perfect, thank you. Thanks. I checked the billing on AWS and we saw a difference; it's minor compared to what the BOM builds are generating, but still, it's visible.
A
Good
impact
on
the
AWS
building,
oh
by
the
way
Airway
the
launchable
work
done
by
by
basil,
allowed
him
to
contribute
on
the
bomb
a
few
weeks
ago.
So
it
also
had
a
indirect
positive
impact
on
the
billing
on
AWS.
A: That update is required on AKS to be able to use Ubuntu 22.04 node pools. Right now I'm checking the deprecated directives that we should update in our own charts; once that is done, there will be changelog reading for each of the providers. I'm proposing the following plan, folks: I'm taking that issue and, as I said last week, I want to start updating DigitalOcean as soon as possible.
A: DigitalOcean first, because it's used by ci.jenkins.io for plugins, and the updates usually go really smoothly there, since we don't have a lot of complex things. Then I want to continue with AWS, which is a bit more sensitive, but still only used by ci.jenkins.io, so the scope will only impact ci.jenkins.io.
A: If it fails, then we will have to work on Azure; but since we have the migration of the production public cluster to publick8s, that one might be blocked before updating to the new Kubernetes version. So I propose that for the upcoming milestone I only target DigitalOcean for sure, and eventually AWS, and we will do a status report next week. Is that okay for all of you, did I miss something, or do you have other proposals or ideas?
C: And then, if I'm not mistaken, there are two on DigitalOcean?

A: Yes.
A: Next, the publick8s migration on AKS. So, about the migration to publick8s: you handed it over to me. Can you report on what you did during the past milestone, before the handover?
A: Okay, preparation work. So you handed it over because you will have a short week; you have some days off for this long weekend, and that's the reason why I'm taking over on this one. One of the main elements you identified earlier today and shared with me, which I wasn't aware of: we have one stateless application, the incrementals publisher, and that one should be easy to migrate — is that okay? Yes: stateless, easy to migrate. To do. And then I plan on working on the main goal, migrating Keycloak, which means its database too.
A: Just a word about the PostgreSQL database: it wasn't that easy. We needed to create a new instance, and we realized that this managed instance does not support IPv6 networks, so we had to create a specific network, and now I'm fighting against network peerings and accesses. It has been created successfully on the new network, and I'm working on accessing it from the new cluster and also from our management system. The databases are automatically created right now.
A: That was already the case earlier, when you created the network a few months ago. I don't know if you remember: you created a peering from private to public with Terraform, and the Azure API reports that everything is okay, but when we go to the Azure UI, it says it's incomplete and we are missing the symmetric peering.
A
It
looks
like
it's
a
recent
change
less
than
one
year
ago
on
the
way
we
created
I
tried
some
peering
manually
and
when
you
create
manually
Brewing
from
the
UI,
it
creates
both
pairing
now,
which
wasn't
the
case
so
I'm
gonna
have
to
work
on
that
part,
creating
both
symmetrical
but
need
the
documentation.
Reading
before
that,
so
you
did
write.
It's
just.
It
looks
like
it
changed
in
the
way,
as
you
maintenance
so
right
now,
I
cannot
access
the
public
cluster
from
private
cluster
and
I
cannot
access
the
new
database
cluster
from
the
one.
A: So I guess public-to-database is not working either, for the same reason, so that should be the next priority task for me. Next issue: the peak of usage cost for the core releases resource group. That issue I plan to close; I will wait until the end of May and see the status of the billing. We don't know why, at the beginning of April, the cost on that system increased a lot, and now it has suddenly decreased since one week. We have multiple theories and did multiple analyses on the issue.
A: But the answer is: we don't know why. Is it because of the decreased DNS workloads, thanks to the work Hervé did on the Datadog agent on the clusters? Is it because we are migrating to publick8s? Is it something else? My proposal is to wait for the end of the month. In fact, it's not the end of the month that counts; the most important is...
A: Don't hesitate, if you have any question on that topic, to ask on the issue or on the channel. Migration of trusted.ci.jenkins.io: the same, we had a handover, because Stefan, you had a long weekend last week. The handover went really fine: the work you did was working, and we had the virtual machines connected to Puppet.
A: There were specifically some elements to sort out, but now we have SSH access. So the next steps are the data migration from AWS.
A
So
we
have
two
data:
the
Jenkins
sum
that
should
be
the
quickest,
but
we
have
all
the
cached
data
on
the
agent
for
the
update
Center
though,
and
the
second
one
is,
there
are
still
some
security
groups
to
apply
to
the
permanent
agents
groups
to
fine
tune
for
permanent
agent.
A: And to adapt, because some of the network flows might be a bit different; but everything is going really fine — nice job, Stefan. Are you okay to take back the security group part, to help me on that topic in the upcoming milestone? Would it be okay for you? Yes, of course. So I propose you as main with me as secondary, and we invert the roles for the second task. Is that okay for you?
C: For my part, I did manage to build an image on DigitalOcean through Packer. It's not fully automatic, but it's built by Packer, and I handed it over to Hervé to try and play with the DigitalOcean plugin from a controller, to spawn VMs with that image. We can only have Intel-based Ubuntu 22.04; there is no Arm on DigitalOcean.
D: I still have — I didn't have time to work on whether the communication works between the Datadog agent, which is running on the host, and the Jenkins controller, which is running inside the Docker container. Okay.
A: The Jenkins container has to reach the Datadog agent, okay. Do you need help, or do you want to work alone on this one? Yes.
D: It's like... yeah. I've left that open. There are two old monitors that aren't currently applicable, because they were watching jobs that no longer exist. We would need to create, or put in place, a way to retrieve them from the trusted jobs: published files, or something like that.
A: Okay, if I'm not mistaken, for the scope of that issue, that means deleting the monitors now: they are manually managed, they are not used and they don't have any data, so they are good candidates for deletion. We might already have an issue from Daniel, but I'm not sure; otherwise, create a new one that says "okay, we need to monitor trusted", and explains the need, so we can build a new solution from that.
A: Oh, that's good work, because everything was done manually before, so that helps a lot, and it will allow us to create monitors when needed. As a demonstration: the work you did yesterday during the inode-full incident we had on infra.ci — in less than one hour we were able to fix the incident and have a monitor to avoid a repetition in the future. So that's a demonstration that this work is really useful. Thanks.
A: The main issue here is the VPN: the private VPN should be able to allow access to the public network where the new instance is created, and I now discovered, with the publick8s creation, that the peerings are not working as expected. So I was stuck on that part; I wasn't able to SSH to the instance or test my security groups.
A
So
if
it's
okay,
I
will
move
this
for
this
Milestone
I
will
destroy
the
virtual
machine
because
we
are
paying
for
it
and
we
don't
need
so.
I
will
comment
out.
Terraform
will
destroy
the
resources
and
then
I
will
differ
in
two
weeks
before
finishing.
The
rest
is
that,
okay,
for
all
of
you,.
A: "Artifact caching proxy is unreliable": Stefan, I propose that we pair together on that one. What I did since last week: I was able to create and test manually, on ci.jenkins.io, a new inbound mode for virtual machines. On the artifact caching proxy reliability, the status is: in Azure it's still unreliable; on DOKS we don't have the issue; on the BOM builds the errors have decreased on AWS — so we still have issues there, but we don't have a workload that justifies spending time on diagnosing.
A: There is still the issue on Azure with the overlapping network; it might or might not be the cause. So I tested on ci.jenkins.io a new kind of inbound agent that works very well, and the goal is to switch from SSH to inbound for the ephemeral Azure virtual machines.
A: Hervé, you built it, and I tested it with the help of the team. The goal now is to share the knowledge with you, so we can see how we split the work — you, me, both, I don't really care. The goal is to migrate these agents, and that needs multiple tiny iterations: first moving to inbound, then moving to the new network, and then seeing if it's still okay. Good for you? Yeah, cool.
A
So
for
the
next
Milestone,
let's
check
the
triage
or
new
incoming
issue.
If
it's
okay
for
you,
can
you
read
my
screen
or
do
you
need
me
to?
A: "Add a pod garbage collector on Jenkins Kubernetes clusters" — for that one I need to switch to a session where I'm authenticated, sorry. Here we are. I don't think we will be able to work on this one. The goal is to create a Kubernetes cron job in our Helm chart, so we have a cron job, run by Kubernetes as another pod, that will take care of deleting agents.
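The decision the cron job pod would apply can be sketched as follows — a minimal sketch; the label selector and age threshold are assumptions, not our chart's actual values:

```shell
# Decide whether an agent pod should be garbage-collected: only finished
# pods (Succeeded or Failed) older than a threshold qualify.
should_gc() {
  phase="$1"; age_minutes="$2"; max_minutes="$3"
  case "$phase" in
    Succeeded|Failed) [ "$age_minutes" -gt "$max_minutes" ] ;;
    *) return 1 ;;
  esac
}

# The CronJob's pod would then loop over agent pods, e.g. (illustrative):
#   kubectl get pods -l jenkins=agent -o custom-columns=... |
#   while read -r pod phase age; do
#     should_gc "$phase" "$age" 60 && kubectl delete pod "$pod"
#   done
```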
A: I propose we defer this one for later; is that okay for everyone? I'm removing the triage label, since we have covered at least the why, and: no milestone for this one. Then, what do we have? "Create an arm64 node pool on publick8s", to start using arm64 pods. Stefan?
A: Thanks. "Back up helpdesk issues as markdown" — Hervé?
D: Yes, this one is not that important at all. It's about backing up the issues as markdown files in this repository, so that when someone executes a search in the repositories, they can get more hits in the results.
D: We are currently discussing a lot about multiple services in helpdesk issues, but when we want to search for — I don't know, "Redis" for example — in every repository, right now I'm grepping all the previous meetings, previous discussions and all the logs that can be in the repositories. I think having the issues as markdown in this repository will allow me, and maybe others, to also grep for the name of something in our own infra across the issues.
A: ...but if it's okay on another repository, you can proceed, unless someone has an objection. So I remove the triage label, because the goal is clear. As you said, it's not important, but if you want to work on it in its current state: do you want me to put it on a milestone, or is it okay if we leave it as it is?
A: What do we have next? "Agent experience lacks the polish of GitHub Actions", opened by Basil. Basil is requesting to have the docker command and the Docker engine inside the Kubernetes agent container that we run on ci.jenkins.io. Technologies such as Firecracker and Kata Containers allow the underlying machine, instead of running containers, to run really specific, lightweight, fast-starting virtual machines, so you can run a Docker engine within — a nested Docker engine — without an issue. Technically, Firecracker comes from AWS.
A
So
that
shouldn't
be
an
issue
on
the
paper,
at
least
to
add
it
to
an
eks
cluster
for
digital
ocean
I.
Don't
know
if
we
can
change
the
container
runtime
and
on
Azure
we
can
use
both
and
or
even
I
looked
at
sysbox
also
in
the
past.
So
that
could
be
a
great
idea
anyway.
That's
not
our
priority,
so
I
will
add
a
message
for
Basil.
A
That's
absolutely
okay,
but
given
we
have
different
kubernetes
container,
that
could
create
a
constraint
on
the
technology
and
cloud
provider
we
use,
but
maybe
we
could
have
a
not
pull
on
a
single
element
just
for
people
who
need
Docker.
That
could
be
an
alternative
to
the
virtual
machine
that
could
help
on
the
cost
control
as
well.
A
single
virtual
machine
for
single
build
cost,
always
a
bit
more
than
a
pod.
A
Don't
know
if
yes
also
for
the
costs
only,
but
here
what
is
pointing
basil
is
for
the
developer
in
terms
of
developer
experience.
That
is
that
the
fact
that
they
need
the
agent
to
start
really
quickly,
and
that
depends
on
much
time.
The
virtual
machine
takes
to
start
connect
to
Jenkins
and
see.
A
So
that
one
I'm
removing
try
H,
because
we
have
discussed
this
one
I-
need
to
add
the
message
to
Basil
here
to
say:
that's
nice
idea,
I
have
a
few
links
to
point,
but
right
now
we
cannot
work
on
that.
That's
optimization
and
we
have
other
priorities.
So
we
cannot.
We
don't
have
the
matter
of
time
to
work
on
this
one
anyway.
That's
a
good
idea
and
if
you
are
interested
looking
at
firecracker
and
Qatar
container,
are
really
interesting
piece
of
technology
for
running
Docker
in
docker.
A: "Peak of usage": we have covered this one. And a new one, "Forgot my username" — okay, that one will automatically be on the milestone, so I will remove the triage label afterwards. One last thing: do you have other issues, among those we have on the infra team sync, that you want to work on in the upcoming milestone?
A
One
I
want
to
bring
because
that
might
be
helpful
for
the
Google
summer
of
code,
so
we
have
the
Ubuntu
20
to
22
for
upgrade
campaign
automatically,
because
we
are
working
with
the
dates
of
trusted
CI
and
support
Linux
container
when
running
on
Windows
Virtual
machines
that
one
I
want
to
edit
to
the
new
milestone.
A
There
is
a
Google
summer
of
code
project
where
the
goal
is
to
automate
the
test
to
provide
technical
elements
for
some
of
the
docker
tutorials
of
the
Jenkins
IO
documentation.
For
instance,
let's
get
started
with
Jenkins
download
that
file
install
Docker,
run
Docker
compose
up,
go
to
localhost.
Whatever
do
this
do
this?
Do
this
and
you
have
a
working
Jenkins
instance,
that
kind
of
tutorial
the
technical
element
provided
as
part
of
the
documentation
should
be
tested.
The
idea
is
to
test
them
on
the
infrastructure
on
cigins.
Are
you
at
least
once
a
week?
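The weekly tutorial test described above amounts to running the documented steps in order and failing the job at the first broken step. A minimal harness sketch — the actual steps and URLs are illustrative:

```shell
# Run each tutorial step in order, stopping at the first failure.
run_steps() {
  for step in "$@"; do
    sh -c "$step" || { echo "FAILED: $step" >&2; return 1; }
  done
  echo "all steps passed"
}

# A weekly job could then exercise a tutorial like this (illustrative):
#   run_steps \
#     'curl -fsSLO https://example.com/docker-compose.yml' \
#     'docker compose up -d' \
#     'curl -fsS http://localhost:8080/login >/dev/null'
```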
A
A
So
the
idea
is
to
start
working
on
that
topic
and
see
if
Docker
desktop
can
be
installed
through
chocolaty
on
our
templates
and
if
it
work,
that's
a
swapping
replacement
for
the
container
we
already
use.
Instead
of
installing
Docker
CE,
we
install
Docker
desktop.
That's
all
so.
The
goal
is
to
try
working
on
this
one.