From YouTube: 2022 06 07 Jenkins Infra Meeting
Let's get started. First announcement: the weekly 2.351 is not released yet. We faced a set of different issues, and the first set of issues we might still see during the packaging phase. We have almost reached the end of the release phase, which takes two hours.
The first set of issues came from infrastructure changes, because I tried to modernize the configuration between infra.ci and release.ci, but it appears that there are some subtle differences between the two underlying Kubernetes clusters, and we were bitten by these elements. That has been fixed. We might still have some issues related to the way Windows containers are scheduled, so we are looking into it.
So we had infra issues with the Kubernetes Jenkins configuration, literally the way pod agents are scheduled. And there were also some things I won't call issues but rather, let's say, first steps.
Since the Docker image packaging was upgraded to use Ubuntu 22.04, from 18.04, one of the main changes, the core of the change, was the tool that is used to generate RPM repositories for Red Hat and the galaxy of Red Hat-like Linux distributions. When you are on Ubuntu or another distribution, you don't have the RPM SDK installed to build RPM repositories, so that tool is a way to build them on something other than the Red Hat family.
So now we have reached the release, and let's continue to watch it. Along the way there was a question from Alex (not my fault) on IRC about whether we could stage the changes. That's something we already tried in the past, so for the sake of sharing that knowledge: most of the time the effort is to create a switch that says, if it's not a real release but a pull request or a staging build, then build and sign but do not deploy.
The complexity of such code is that maintaining, building and testing it is itself really risky, because testing it would mean releasing and deploying a new release. Compare that to the fact that we have a weekly, and a weekly can be run multiple times per day; each time it creates a new release that is exposed publicly, but that's not really an issue. So it's kind of better, easier for everyone and faster to test in production, in a real environment. The signing part especially is quite sensitive, and it's hard to test.
So that's why we chose that approach. However, we still have a topic from the security team, one that has been around for at least three years I think, about being able to stage the release. That would mean running the release one day before the real release: we would build and prepare the packaging on the Monday, for instance, and during the Tuesday we would only have to promote the release publicly.
So that's the high-level idea, and it might create some interesting challenges to solve. The Docker registry is the easiest one, but it also means creating a temporary registry for Maven, pushing the WAR and the metadata to it, and then promoting that one publicly.
Nope? Okay, let's go. Let's start by checking what was done this week. I'm taking the items in the order they are presented in the closed issues. That's awkward because the order is not kept between open and closed issues; I don't understand why, but once an issue is closed you cannot change the order. So that order is not about priority.
Okay, build our own Docker images: congrats, folks, on that huge work. So now ci.jenkins.io, when running a plugin build on Windows, uses the infra custom Docker images, inherited from the official Jenkins inbound agent but built on our infrastructure.
That was a huge work involving a lot of code, so congrats, Hervé, on being able to deliver that one, because PowerShell can be painful sometimes. Just a note: while working on that port we were able to break infra.ci, because we tried to schedule a container and, during a configuration change on the infrastructure, Kubernetes tried to reschedule that container to a Windows node, because we were missing some scheduling constraints.
That's not the top priority right now, but the knowledge was shared in this meeting, and we were able to update the configuration and the scheduling constraints on infra.ci. So now the Helm chart won't, will never, try to reschedule on a Windows node. So thanks, Hervé, for all the hidden work on the pipeline library; now you're an expert on the Groovy shared library and we can go forward on the images.
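The scheduling fix described above boils down to an explicit OS constraint on the pod spec. A minimal sketch (the `kubernetes.io/os` label is the standard Kubernetes well-known node label; the pod name and image are illustrative, not the team's actual configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: linux-agent-example   # illustrative name
spec:
  # Well-known label set by every kubelet; pinning it to "linux"
  # guarantees the scheduler never places this pod on a Windows node.
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest
```

In a Helm chart the same constraint would typically be exposed as a `nodeSelector` value rather than hard-coded in the template.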
We were able to get help from Olivier, who is one of the owners of the CA key. Only the three persons that are KK, Oleg and Olivier are allowed to use that key to sign the new certificate. He did that and uploaded it to trusted.ci to help us, so many thanks, Olivier. Stefan and I were then able to land a bunch of documentation fixes and tests. So everything is green, working and documented, and we have a calendar alert for next year.
However, there were a set of weird settings: in the case of EC2 there was a timeout to wait before applying the policy, and on Azure we weren't even using that policy. Now both clouds are using the same policy with no timeouts, and it looks like it's working. I cannot be a hundred percent sure, but at least we didn't see a bunch of highmem machines in a weird state, waiting for minutes. That should also allow us to provide faster builds, less retention and decreased cost.
The release last week failed for two causes. One: the release happened on Wednesday, after we merged some pull requests on the configuration management, so that was temporary. Usually we try to avoid doing this, but that time we missed the Tuesday window and started our infra stuff during the Wednesday; that's what happened.
The second one is a missing command, a consequence of my work on the MirrorBrain port. Thanks, Hervé, for helping me fix that, and thanks, Olivier, for putting all these scripts in the GitHub repository. So we were able to fix it. It's working; well, it looks like it's working, let's confirm later today.
ci.jenkins.io is not at risk now because of the API rate limit for the agents, but we still have rate limiting, because the open source plan does not automatically upgrade our accounts to a professional paid account. That means we still have an API rate limit for the official Docker base images, and for all the official Docker images used by the images that we build: we need the base image of the operating system (Alpine, CentOS, Ubuntu, etc.) and we are rate limited for these images.
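For context, Docker Hub reports the anonymous or free-plan pull allowance in `ratelimit-limit` and `ratelimit-remaining` response headers, with values shaped like `100;w=21600` (100 pulls per 21600-second window). A minimal sketch of decoding such a header (the parsing helper is ours, not a Docker-provided API):

```python
def parse_rate_limit(header_value: str) -> dict:
    """Parse a Docker Hub rate-limit header value such as '100;w=21600'.

    Returns the pull count and, when present, the window length in seconds.
    """
    count_part, _, window_part = header_value.partition(";")
    result = {"count": int(count_part)}
    if window_part.startswith("w="):
        result["window_seconds"] = int(window_part[2:])
    return result


# Example values in the shape Docker Hub returns:
limit = parse_rate_limit("100;w=21600")      # 100 pulls per 6-hour window
remaining = parse_rate_limit("73;w=21600")   # 73 pulls left in the window
```

Watching `ratelimit-remaining` trend toward zero on the registry responses is one way to see how close a CI cluster is to being throttled.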
So we are in discussion with Docker; we are waiting for them to apply a team plan that should increase the thresholds. I've put together a set of short-term solutions that would help with the issue, each one with its own pros and cons. That's a summary of what we discussed during the past weeks and months about that topic.
But yes, for now we are quite annoyed. Just a note: I realized that there are a lot of tests that fail because they rebuild images that were already rebuilt for some end-to-end testing, and that could be improved. That should improve the rate of succeeding stages. That's a pipeline detail specific to the project.
So right now we still have that constraint and we are waiting for feedback from Docker. If we don't have any feedback by the end of June, then we will have to act and find another solution.
Next topic: bootstrap a Terraform project for Oracle. Thanks, Stefan, for the work on that one; it was required for migrating updates.jenkins.io to another cloud. The status is work in progress for bootstrapping the Terraform states. Correct me if I'm wrong, but the status is that the states are now created in Azure buckets.
Okay, next one: DigitalOcean sponsorship. First of all, thanks, Hervé, for checking the costs. It was worrying that the May billing was bigger than the amount of credits that we have, but we double-checked on the detailed invoice and in fact it was just the node pool that we deleted on the first day of May. So we are okay.
A
We
are
consuming
10
to
15
bucks
per
month
now
and
the
actual
real
time
billing
view
on
the
digital
sun
console
seems
to
confirm
that
we
have
consumed
less
than
two
dollars
for
june,
and
now
then
I
we
have
to
take
appointment
with
digitalis
and
folks
discuss
with
them
for
the
next
steps.
So we can merge this one, but I've put in the commands; we cannot test it for now because we have an issue with Docker and the kubeconfig file: there has been a change in the EKS kubeconfig handling on the latest stable version of the Docker image, and the combination of these fails the check for EKS. I've put the tips in that area. We have to fix that; it's okay for AKS and DigitalOcean, but not on Amazon.
I'm trying to go back to the issue... yep. So, Stefan and Hervé, you are assigned to this one: do you think you will be able to work on it next milestone? Yes.
A
Remove
img
at
all
so
now
by
default
we
use
docker,
but
we
still
have
img
tool
defined
and
used
so
on
the
pipeline
code
that
I'm
not
sure
if
it
has
been
cleaned
up
or
not,
but
that
will
we
will
have
to
and
on
the
docker
builder.
I
think
image.
A
C
A
B
A
D
A
A
A
On to the next one. So the next important thing, I'm searching for it, okay: infra team, think next, and the new things. I want to start with three elements. As we said, the Docker packaging update has impacts on the weekly.
I think it will be important to have an issue about infrastructure backups. Can I ask one of you to write an issue, or update an existing issue, on the backups topic, to mention the infrastructure backup? Something like Velero to save the PVC content.
Exactly. But there will be the work of putting that together in an issue and checking. There might be a generic backup issue, and that doesn't really make sense, because backing up the whole infra is complicated; but maybe we should have a specific issue saying: okay, we need to back up infra.ci.
And finally, just to let you know, I've been asked by Jesse Glick about JENKINS-49707. That one will generate a lot of pull requests on different workflow plugins. This is fully open source. That one will help us a lot, because it aims to automatically retry, if we put the correct setting on a pipeline, the stages where the agent went down during a pipeline.
So the goal is: if we configure the proper addition in the library, at least all the plugin builds will be retried, and also the BOM or ATH builds, when they have an agent that cannot be spawned, or that is in a weird state, or that stops abruptly. Sometimes we have agents that drop the connection on their own, and we don't always know why. That will allow retrying only the failing stage.
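For reference, the stage-level retry conditions that the JENKINS-49707 line of work exposes in Declarative Pipeline look roughly like this (a sketch, not our actual pipeline; the agent label and build command are illustrative):

```groovy
pipeline {
    agent none
    stages {
        stage('build') {
            agent { label 'maven' }   // illustrative label
            options {
                // Retry this stage up to 2 extra times, but only when the
                // agent itself failed (disconnected or could not be
                // provisioned), not on ordinary compile or test errors.
                retry(count: 2, conditions: [agent(), nonresumable()])
            }
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}
```

The `agent()` and `nonresumable()` conditions restrict the retry to infrastructure failures, which is exactly the case of agents going away mid-build described above.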
I will launch a build; on my pull request for the fix I have a problem with the tests. Okay, I'll try to... do you need me to pair with you, or do you just need someone, sometimes, to walk through it and help?
For the redirection of the Jenkins domain, the way I see it, we just need to take ownership of the domain, or at least of the DNS for it. That's right, so we just need Mark and Alyssa, and both of them are at the Linux Foundation. So let me put that on top, so we won't forget it. That's good to mention, thanks, Stefan. Next week we should be okay. Right now the most urgent part of that issue is making sure the Google search results are at least redirecting people to the root of the new website, on the new URL.
Over to you: do you have any other questions?