From YouTube: 2021 08 20 Jenkins Infra Meeting
A
Awesome, hi everybody, welcome to this new Jenkins infrastructure meeting. Today we have quite a few announcements. The first one is that we are changing the meeting time: several contributors asked to move the time because it did not work well for them. We want to have as many people as possible, so we are now going to do it on Friday afternoon.

What is important is that we have one remaining issue that we have to fix and, more importantly, that this is not the right time to modify anything related to repo.jenkins-ci.org or the release environment or whatever. So we just ask you not to change things that may affect the release environments. That's all I ask of you, that's it!
A
Otherwise, in terms of announcements, following last week's news: we officially started using Java 11 as the default Java version for our Docker images. That's part of the latest weekly release; I put a link to the blog post that announced that change. Also thanks, Damien and Tim, for your work on the Docker images: we can now officially support additional architectures such as arm64, ppc64le and s390x for the Docker images.
A
The
final
announcement
that
I
have
to
share
is
we
are
investigating
some
issues
regarding
reporter
jenkins.org
with
g
frog
and
updating
the
service
to
the
latest
version
available
may
fix
the
issues
as
they
brought
some
improvements
regarding
the
time
that
it
take
to
copy
artifact
between
maven
repositories.
So
we
are
planning
to
do
that.
Upgrades
next
thursday.
The
that
means
that
repo
thejenkincia.org
will
be
done
for
around
20
minutes.
We
still
have
to
decide
on
time
between
g
frog
and
us,
but
we
are
playing
to
do
that
after
the
lts
release.
B
Just on the s390x, I'm delighted to report that I've had personal contact from a person at IBM who's interested in contributing more on the s390x stuff. So I've been guiding her: hey, please submit an acceptance test into the infra/acceptance-tests repository if you'd like to verify the s390x installation and do those kinds of things. She's very interested in being involved and is working with her management on what it would mean to be involved.
A
Yeah, that's awesome. Any last comments before we start on the agenda? I put six topics, maybe some more. The first one is about ci.jenkins.io: Tim and Damien have been working to deprecate ACI, the Azure Container Instances. A bit of history here: we were using container instances on ci.jenkins.io to run various workloads, like Maven agents and so on. We faced many issues with them in the past, and they ended up being quite expensive for our use case.
A
A lot of work was needed to switch to Kubernetes agents: obviously changing the Jenkinsfiles, changing agents and running various tests, and it appears that we achieved a major milestone in that domain. So we are now officially using Kubernetes agents on ci.jenkins.io. At the moment we are using the Kubernetes cluster running on our Amazon account, and we should have more coming, so we can spread the load on different Kubernetes clusters.
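For most repositories, the change itself is small: the Jenkinsfile's agent label moves from the ACI-backed one to a Kubernetes-backed one. A minimal sketch of that kind of change; the label names here are hypothetical, the real ones are defined on ci.jenkins.io:

```groovy
pipeline {
    // Before the migration this would have pointed at an ACI-backed label,
    // e.g. agent { label 'aci-maven' } (hypothetical name).
    agent { label 'maven-11' } // now provisioned as a pod on the Kubernetes cluster
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -ntp verify'
            }
        }
    }
}
```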
C
So, thanks to Tim for pointing me at it, we have updated the runbooks and documentation. There is still one last step in the pipeline library, and the idea is that we are going to duplicate...
B
Damien, isn't that just a matter of "gh pr create", you know, that kind of thing? So, admittedly, it's a scripting challenge, but it's just a bulk scripting exercise. The problem then is that we have to persuade them to actually merge the pull requests, and I would assume that's the bigger challenge.
D
I don't think we have to worry about the merging; we just create the PRs, ideally. I did this for the Checkstyle and FindBugs changes: some of them got merged straight away, some of them took a few weeks, some are still coming in every few months, but after six to twelve months or so I just closed the rest, assuming they're abandoned. Most of them get merged, and for the ones that don't: if they've actually used ACI, they're probably more likely to be maintained.
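Bulk-creating those pull requests is indeed a small scripting exercise around the gh CLI. A minimal sketch in plain Groovy, assuming gh is installed and authenticated, and assuming each repository already has the migration branch pushed; the repository list, branch name and messages are hypothetical:

```groovy
// Open the same pull request against a list of plugin repositories.
def repos = ['jenkinsci/some-plugin', 'jenkinsci/another-plugin'] // hypothetical list

repos.each { repo ->
    def cmd = ['gh', 'pr', 'create',
               '--repo', repo,
               '--head', 'use-kubernetes-agents', // hypothetical branch name
               '--title', 'Migrate CI builds off Azure Container Instances',
               '--body', 'ACI agents on ci.jenkins.io are being retired; this switches the Jenkinsfile to the Kubernetes agent labels.']
    def proc = cmd.execute()   // run the external gh command
    proc.waitForOrKill(60_000) // give gh up to a minute per repo
    println "${repo}: exit code ${proc.exitValue()}"
}
```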
C
And your idea, Tim, was also to add the check in the pipeline library, so that it will not only print. I like that idea. So the idea is to not only print a warning, "you should switch, blah blah blah", in the pipeline output, but to also add some kind of GitHub check on the pull request that will warn the end user: hey, there is a deprecation going on. That should be visible inside the pull request checks.
C
And the idea is that, right now, such a check should not fail the pull request. For now we send the batch of pull requests and then, let's say in one month, after communicating properly of course, the idea will be: okay, we start to fail pull requests because of that, at that moment in time. So then, in two months, the deprecation campaign should be finished.
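A minimal sketch of what such a pipeline-library step could look like, assuming the checks-api plugin is available on the controller; the step name, check name and wording are illustrative, not the actual implementation:

```groovy
// vars/warnAciDeprecation.groovy -- hypothetical shared-library step
def call() {
    String message = 'ACI agents are deprecated; please switch your Jenkinsfile to the Kubernetes agent labels.'

    // Deprecation warning in the pipeline output.
    echo "WARNING: ${message}"

    // Non-failing GitHub check on the pull request (checks-api plugin).
    // Later in the campaign, flipping the conclusion to 'FAILURE' is what
    // would start failing the pull requests.
    publishChecks name: 'infra/aci-deprecation',
                  title: 'Deprecated agent label in use',
                  summary: message,
                  conclusion: 'NEUTRAL'
}
```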
And
in
terms
of
costs,
lilly-
and
I
checked
a
bit
earlier
today-
what
we
have
on
datadog,
so
we
saw
that
we
do
autoscaling
for
that.
Cluster
is
50
machine
maximum.
It
was
completely
arbitrary.
The
goal
was
to
be
sure
and
convince
ourselves
that
we
were
able
to
replace
aci
by
a
single
kubernetes
cluster.
C
So that gives us a first, let's say, idea. We know that by default we have only two machines, and the autoscaler stops a machine 10 minutes (that's the default value) after no pod has been scheduled on it. The BOM builds and the Jenkins core PR builds that happened during the past 10 days never reached more than 36, and that's with two pods per machine when you have these builds. So we now have an idea of how many workers we need on one cluster.
A
No, no, we are talking about the nodes. What we were looking at here is: we have nodes, and on one node you can have multiple jobs, multiple pods. What we wanted to test was to see: okay, right now by default we have two nodes, and those two nodes can run multiple jobs. If we enable autoscaling, that means that if we don't have enough capacity on one of the nodes, we just create a new machine on the Kubernetes cluster. And what we checked was...
C
A hundred agents maximum on the Kubernetes cloud on the Jenkins side. The idea is that, worst case, you have a hundred requests for starting pods, and then, based on the allocation on Kubernetes, the autoscaler decides how many machines it needs. Which means maybe we reach the hundred requests, but by the time the autoscaler starts to add machines...
A
Because here, yeah, the idea is to have, let's say, this cluster, one on Scaleway, and one on Amazon, so we can spread the cost across different cloud providers. So now the question is: what should be the size of the different Kubernetes clusters in order to handle the load?
C
There is something related to the orchestration mechanism in Jenkins, because you have a sticky behavior: okay, if I run on that kind of cloud, then there is a kind of stickiness for the same job and branch, and the hash includes the branch. So for a given branch, for the master branch for instance, it will always try to reuse the same...
C
That might be one of the challenges. One of the ideas is that we would artificially decrease the maximum number of requested pods for a given cluster. We have 50 right now; let's say that we don't move that. If we add 10 machines on Scaleway, then we should go down to 40 for AWS, which means, if Jenkins asks for 50, it will fill the queue for Amazon and then, worst case, start filling the queue on the second cluster.
A
Thank you. I'm just going to share some information, I'm not sure how fast it will be; I would just like to show you what we used here to analyze that information. We use Datadog to collect metrics on the Kubernetes agents and a lot of different things, and that's what we were using to identify the state of the cluster. It's loading.
A
So this one, it's ready. This one is the dashboard that monitors the Kubernetes cluster used by ci.jenkins.io, and in this case we were looking at nodes in a ready condition. You see that most of the time we only have two nodes, and from time to time we have peaks of up to 36 nodes. Our objective is to better monitor this behavior, to understand the right number of nodes that we want to have on the cluster. And yeah.
A
Next topic, which is about archives.jenkins.io. Several weeks ago I deployed a new machine on Oracle Cloud; the idea was to move archives.jenkins.io from the Rackspace account to Oracle Cloud. It's working, sorry, it's working very well, and get.jenkins.io too.
A
I
had
to
do
a
minor
improvement
because
it
was
not
listed
on
the
mirror
list
anymore,
which
I
fixed
beginning
of
the
week,
but
the
most
important
thing
here
is
it's
working
very
well
and
it's
a
lot
cheaper
than
what
we
had
on
the
rackspace,
it's
probably
just
because
we
never
update
the
machine
type
on
rackspace.
So
it
was
a
very
old
one,
our
stats,
but
what
we
were
paying
on
rackspace
was
around
800
dollars
a
month
and
at
the
moment
we
are
close
to
60
dollar
per
month.
A
I
think
one
of
the
reason
why
it's
a
lot
cheaper
is
because
our
record
cloud
doesn't
charge
for
network
bandwidth,
I'm
not
sure
how
long
they
will
do
that,
but
at
least
for
now
it
makes
it
really
cheap
to
run
on
the
regular
clouds.
It's
very
slow.
A
Interesting, I think that's interesting, yeah.
A
Sorry, so before doing that, I was just keeping an eye on what it did sometimes. So if you detect such issues, feel free to share them; I don't want to delete the machine, basically.
A
So, it depends on how many people try to download packages from that mirror, because we have a specific configuration in Apache to limit the network bandwidth on the machine, and because it's used in the mirror, maybe that's the reason why it's slow. But that limit could be removed, because, I mean, network bandwidth was quite expensive on Rackspace, which is not the case anymore, and we have a bigger network interface than what we used to have. So yeah.
A
We
probably
have
some
fine
tuning
to
do
there
next
topic,
which
is
about
cost.
I
updated
the
google
sheets,
so
I
saw
some
interesting
behavior.
The
first
one
is
the
azure
account.
Cost
is
decreasing
so
compared
to
one
year
ago,
one
year
ago
we
managed
to
to
stay
below
10k
and
then
over
the
past
next
month
we
increase
up
to
30
or
40k,
and
now
we
are
going
back
to
10k
again.
A
Which is nice, but at the same time, what I noticed is that the Amazon cost is increasing, which makes sense because we are replacing ACI with EKS. So we have some improvement to do there. We were looking with Damien at whether we can save some money on the Amazon account, and it appears that some EC2 instances are oversized, so we could save some money there as well, but we have to give this another look.
C
I have to say, we have to wait for the end of the month before looking at the Cost Explorer reports on AWS, because right now most of the costs come from data transfer, mostly, and others, so we have to fine-tune. Because the cost increase on EC2 instances alone, if I look at the past six months, is not that much; it's less than 1k since January, while the Kubernetes cluster has been quite used in July, though not as much as in August.
C
So
right
now
we
are
not
completely
sure
about
the
cost
increase,
while
the
amount
for
the
bandwidth
is
still
high
and
since,
as
you
said,
we
dumped
rack
space.
We
still
have
the
bandwidth
out
from
package,
so
I'm
not
sure
how
much
a
bandwidth
how
it
can
be
gained,
but
that's
still
the
the
first
cost
center.
C
I don't remember how we ended up on the highmem machines, but, beyond the question of whether we are using all the CPU and memory of these machines, which is something to be checked, before that we can decrease the cost by, let's say, 20 percent by switching to EBS-backed instances, because right now the instances we are using for highmem have two NVMe SSDs, and the I/O on these machines is absolutely not saturated.
C
So
I
understand
that
reusing
the
machines
is
interesting
because
of
the
caching
and
once
it
started,
but
we
are
billed
for
per
second
or
per
minute.
Oh
it's
per
minute.
Sorry
it's
per
minute
for
these
missions,
so
it's
not
that
much
but
yeah
there
are
some
jobs
that
should
use
slower
machines
that
are,
let's
say
10
year.
A
And overall, if you are interested in looking with us at the cloud accounts to investigate some ways to reduce the cost, feel free to volunteer as well.
A
So
that's
all
for
the
costs
at
the
moment.
Next
point
I
wanted
to
talk
about
packages,
packaging
images,
I'm
not
sure
if
I
was
the
one
putting
that
topic
here
so
damian.
Do
you
remember.
A
Something I thought that we wanted to talk about: building the Docker agent images.
C
Okay, so if it's about Packer: the goal is to stop building the docker-inbound-agent images from Dockerfiles, but to use the same scripts as we are using to build the EC2 templates or the Azure shared-library templates. The idea is to say: whether your job runs on a container, say a Maven 11 one, or on an Ubuntu virtual machine, you will have exactly the same way of installing both the JDK and Maven and all the shell tools that are inside the container.
C
That
is
that,
each
time,
if
one,
we
want
to
change
something
a
version
of
maven,
for
instance,
we
have
the
new
maven
382
that
has
been
released.
Last
week
we
have
two
locations
where
to
change
it
and
the
installation
of
maven
differs
on
both.
Is
it
a
virtual
machine
or
a
container
so
for
the
user
experience
of
a
main
plugin
maintainer?
We
don't
want
them
to
have
a
bad
surprise
either
if
they
switch
to
container
virtual
machine,
especially
if
us
we
want
to
add
temporary
capacity,
change
cloud
or
swap
the
implementation
for
agents.
C
Since
packer
is
able
to
build
docker
images,
the
idea
will
be
to
put
all
the
definition
there
that
will
also
provide
a
improved,
improved
docker
pull
because
there
is
not
so
much
cache
on
our
images
each
time
there
is
a
new
image
that
is
built
downloaded.
There
is
no
cache
reusing,
especially
with
all
the
patterns
and
the
differentiation,
so
the
goal
will
be
to
have
more
all-in-one
templates.
A
Any questions? So the next topic is about... yeah, I just took notes: IBM is interested to participate. I don't think we have much to say here; we already mentioned that in the announcements. The final topic that I just want to briefly cover: we had a meeting with the Linux Foundation several days ago to discuss the LFX platform, Security v2.
A
They asked on this call who was interested to participate. It appears that people with the right permissions on the GitHub side of the jenkins-infra organization will automatically have access to the dashboard, so we don't have to provision user accounts or whatever. We just have to wait for the Linux Foundation to tell us that the platform is available.
A
So,
as
of
today,
we
don't
have
anything
to
do.
We,
I
already
configured
jenkinson
for
organization,
to
send
data
to
the
platform,
but
yeah,
nothing,
nothing
else
to
report
so
so.
A
So we are running out of time, I'm sorry about that. So, quick questions before we close this meeting?