From YouTube: 2021-07-27 Jenkins Infra Meeting
A
Hi everybody, welcome to this new Jenkins infrastructure meeting. Today on the agenda: we're going to talk about repo.jenkins.io, the new DigitalOcean account, the Rackspace accounts, some ongoing work on ci.jenkins.io, and Docker multi-arch. So let's start with the JFrog topic. Last Friday we had a meeting with JFrog, so just to go back... I'm not sure if I should.
A
So yeah, I wrote some notes during the meeting, just to come back on what happened during the last security release. We had issues doing the security release: we had an issue promoting artifacts from the staging environment to the production environment. While that has been solved, what we discovered was that we were using an undersized database, so we had issues with the number of connections. The first thing JFrog did was to increase the size of the database, so we could have more connections. But the second thing we noticed was that it took us one hour to copy the artifacts, instead of several seconds initially. So we engaged with JFrog support, sorry.
A
We engaged with JFrog support and we had a session last week where we debugged the issue. What we did was just to increase the logging level on the service, and the person from JFrog who helped us last Friday investigated on their side. So right now we still don't understand what's happening and why it took one hour to copy the artifacts, but we hope to have a better understanding in the coming weeks.
A
Any questions? Sounds good. So yeah, the next step is, once I have more information, we should plan another meeting with them, and I hope we'll find a good solution.
A
The next topic I want to talk about, and that I'm pretty excited about, is that DigitalOcean offered to sponsor some machines for us. So I created a DigitalOcean account last week and invited a few people who were interested to participate, mainly Gavin and Damien, I think. The plan is to provision an additional mirror, and to provision a small cluster to use in our CI infrastructure.
A
We are looking for some help here to write the Terraform code to provision those resources. I know that DigitalOcean has pretty good Terraform support, and we would like to do it the correct way from the start. So yeah, if someone is interested in participating, that's definitely a place where you can help.
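(For illustration, a minimal sketch of what that Terraform code could look like for a single DigitalOcean droplet, wrapped in shell. The directory, resource name, region, size and image slugs below are placeholders, not the real infrastructure layout.)

# Scaffold a minimal Terraform config for one DigitalOcean droplet.
# The provider reads its token from the DIGITALOCEAN_TOKEN variable.
mkdir -p do-mirror && cd do-mirror
cat > main.tf <<'EOF'
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

provider "digitalocean" {}

# Hypothetical mirror machine; name/region/size/image are placeholders.
resource "digitalocean_droplet" "mirror" {
  name   = "mirror-do-01"
  region = "nyc3"
  size   = "s-2vcpu-4gb"
  image  = "ubuntu-20-04-x64"
}
EOF
export DIGITALOCEAN_TOKEN="<personal-access-token>"
terraform init && terraform plan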
A
So yeah, feel free to reach out to Damien; I think you may have some time to work on it. Still related to the mirror:
A
We are almost ready to de-provision the one running on Rackspace. The only thing is, I'm changing a little bit the way we push artifacts to that service. Previously we were doing a push approach: from the pkg.jenkins.io machine, we would upload every artifact to the mirror machine. Now the approach is a little bit different: we just pull all the artifacts available from an rsync server, and right now that server is the OSUOSL one.
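(A minimal sketch of that pull-based sync; the upstream host and module names below are assumptions, not the actual OSUOSL endpoint.)

# Pull everything the upstream rsync server exposes into the local
# mirror tree; --delete keeps the copy in sync with upstream removals.
rsync --archive --compress --delete \
    rsync://rsync.osuosl.org/jenkins/ /srv/mirror/jenkins/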
A
The good thing with the one on Oracle is that it's pretty cheap: I think it was something like $30 per month.
A
If some people also have experience using Terraform on Oracle, that's also a place where you could help. For the first machine we just did manual provisioning: we went to the user interface and configured everything. But for more machines we would like to do a better setup, so people can contribute to that environment using Terraform, again.
A
Is that all right for all of you? Sounds correct. The next item is a brief update on ci.jenkins.io. I think Damien is ready to talk about it, if you want. Okay, so.
B
So my proposal is that, under the hypothesis that the classical review process goes well and that there is no hidden problem caught by you folks when reviewing my pull request, I would like to get rid of the staging branch tomorrow morning, as a first mandatory step before going forward, just to be sure that once we merge a pull request on the jenkins-infra repo, it's merged and deployed to production immediately.
A
On that one, I totally support that move, because the problem we have here is that when we want to make a change, we first have to merge the change to the staging branch, and then we have to create a PR from the staging branch to the production branch, and it's only deployed to production after that. I had a look at it: in the past we were testing every staging branch on Amazon. We were provisioning Amazon infrastructure, testing the deployment there, and then tearing that infrastructure down. This is something we stopped doing a long time ago. So right now we just don't use staging; it's just a way to slow us down, so I totally support that. I'm just not sure if we want to switch the default branch to production or just use a different branch; maybe production is the easier one, yeah.
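(For reference, a sketch of the two-step flow being removed versus the target flow; branch names are as discussed, and the repository wiring is simplified.)

# Old flow (being removed): a change needs two merges before it is live.
git checkout staging
git merge --no-ff my-change       # merge 1: feature branch -> staging
# ...then a second pull request, staging -> production; only that
# second merge triggers a deployment.

# Target flow: one merge to the default (production) branch deploys
# immediately, with no intermediate branch.
git checkout production
git merge --no-ff my-change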
B
So once we have that, I will feel totally okay and safe to go forward. So if everyone is okay on the review and we can remove that, I should be able to deploy the ci.jenkins.io agent configuration. That's the cool part. And something I want to mention here: we have merged today something on staging that will be deployed to production as well.
B
There are issues in the Puppet code that I spotted. They might be there for a reason, but I also think they are there by accident, and I want to be sure everyone is aware that this can put the stability of ci.jenkins.io at risk when I deploy it. That's why I want to group the operations into a single one.
B
The second issue is the default home path: most of the Puppet code is using a Jenkins home path that is the path of the Jenkins home on the host virtual machine's file system. Sometimes that's correct, but sometimes it's a parameter passed inside the container, where the mount point is different: it's not /var/lib/jenkins but /var/jenkins_home.
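(A minimal sketch of that situation, assuming the official controller image; the container name and image tag are illustrative.)

# The host directory /var/lib/jenkins is bind-mounted at
# /var/jenkins_home inside the container, so Puppet code that
# hard-codes the host path is wrong from the container's viewpoint.
docker run --detach --name jenkins \
    --volume /var/lib/jenkins:/var/jenkins_home \
    jenkins/jenkins:lts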
B
Also, the default user running the Jenkins controller on ci.jenkins.io has a home directory that doesn't exist: it's currently set to /var/lib/jenkins because of that bug in the Puppet code. And I understand where it comes from: Jenkins used to run from the package, and when it was shifted to a Docker container, the setup did not follow the Docker container conventions.
B
So I tried to fix this. Tim and Olivier have reviewed that, but I might have forgotten or broken something, in particular around the user IDs. I tried to be careful there: I tested and reproduced with a Vagrant box, where I made sure there were different UIDs between the default 1000 user on the machine and the UID of the user named jenkins inside the container, and I was using a third one to have the same topology. It was working well, but we might still see different behaviors.
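(A quick check along the lines of what the Vagrant test verified; the user and container names are illustrative.)

# Compare the UID of the host-level jenkins user with the UID of the
# jenkins user inside the container; a mismatch breaks ownership of
# files on the bind-mounted home directory.
host_uid=$(id -u jenkins)
container_uid=$(docker exec jenkins id -u jenkins)
if [ "$host_uid" != "$container_uid" ]; then
    echo "UID mismatch: host=$host_uid, container=$container_uid" >&2
fi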
B
Also, that can have an impact on trusted.ci, since it has the same pattern, except it's not 999, it's something else: 9000, or 9001. So both might have issues due to that change. But we have to fix these errors; I mean, it's not acceptable that we have Puppet errors on production forever that are purely ignored. We have to fix that.
B
The other thing is the way we use the command line on the host: a shell script named idempotent-cli, whose role is to run either an install-plugin on Jenkins, a Groovy script, or a safe-restart of Jenkins. At the end, these are the three actions I identified that that command runs. How does it work? It does a docker exec inside the container, locates the jenkins-cli.jar, and connects through the CLI to Jenkins on localhost to trigger the actions. So the safe-restart and the Groovy execution must be working.
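(A sketch of what such a wrapper boils down to; the container name, credentials, plugin ID, and script path are placeholders, not the real script's contents.)

# Exec into the running controller container, locate jenkins-cli.jar,
# then drive Jenkins over its CLI on localhost (the three actions
# named above: install-plugin, groovy, safe-restart).
docker exec jenkins sh -c '
    cli=$(find / -name jenkins-cli.jar 2>/dev/null | head -n 1)
    jcli() { java -jar "$cli" -s http://localhost:8080/ -auth admin:APITOKEN "$@"; }
    jcli install-plugin configuration-as-code
    jcli groovy = < /tmp/maintenance.groovy
    jcli safe-restart
'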
B
That's the case I hit. In theory, JCasC should fail the startup of Jenkins, but the API should still be available on localhost. However, in some edge cases it doesn't work. That means sometimes you can have Jenkins failing: the container is up, Jenkins prints a big error stack from Configuration as Code because of a missing plugin, and what you want is the idempotent CLI to install the missing plugin, because we forgot that we had to declare this one. And then you are stuck unless you manually put the HPI file in place.
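(The manual fallback mentioned here looks roughly like this; the plugin ID and paths are illustrative, and updates.jenkins.io serves .hpi files under this URL pattern.)

# Drop the missing plugin's .hpi straight into the plugins directory
# of the container, then restart so Jenkins picks it up at startup.
docker exec jenkins curl -fsSL \
    -o /var/jenkins_home/plugins/configuration-as-code.hpi \
    https://updates.jenkins.io/latest/configuration-as-code.hpi
docker restart jenkins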
C
Well, if it affected trusted.ci, that would block the Docker containers, and it just makes me a little nervous. If we could just get through the process of building 2.289.3 first — it's really only about four hours. So, Damien, if you were to launch the 2.289.3 build as you arrive at work, then four hours later it's probably done.
B
Thursday, okay. So I think I covered everything. I'm sorry, it took a bit of time, but I wanted to underline these changes so everyone is aware, and if it fails in the upcoming days, I'm there; I don't plan to go on holidays. But it's important to inform everyone on that topic. That's it for the Puppet part.
C
Yeah, so I think the story there is quite good. We've had Docker multi-arch images on our roadmap for over a year, and now we've got nice progress thanks to Tim Jacomb and to Damien Duportal on actually implementing multi-arch support.
C
Thanks to Tim's work implementing docker buildx bake and Damien's work implementing parallel testing, we've got a significantly faster build process for our Linux images. Thank you to both of them. There's more to do there, and we've got work that will continue. Damien, maybe you want to give more color and more info, but it's good.
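(As an illustration of the building blocks named here; the bake target and platform list are assumptions, not the repository's actual bake file.)

# Create a builder able to produce images for several platforms,
# then build all matching targets of the bake file in one go.
docker buildx create --use
docker buildx bake \
    --set '*.platform=linux/amd64,linux/arm64,linux/s390x' \
    linux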
B
Yep. The only hiccup here, which we discovered during today's weekly release process, is that the publication of these new images, which we wanted to try during that weekly, failed for minor reasons that we have to reproduce and fix. So right now we only build and test; we don't publish these images for now.
B
We have to check what is happening, because the behavior we see on trusted.ci is different from what we saw on ci. So we have an item to double-check that we use exactly the same agents with the same QEMU, because we mainly use QEMU; we are not yet using architecture-specific machines.
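(For context, a common way to provide that QEMU emulation on a build agent is the binfmt helper image; whether the infra agents are set up exactly this way is an assumption.)

# Register QEMU user-mode emulators for foreign architectures so an
# amd64 agent can run e.g. arm64 containers under emulation.
docker run --privileged --rm tonistiigi/binfmt --install all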
B
And we can still gain more. I have one number: the master branch build took 20 to 25 minutes before all those changes, and now it's between 12 and 15 minutes. So we gained around 10 minutes; that's a nice improvement. And if anyone is willing to help on the Windows part — I'm going to work on this in the upcoming weeks, but it's mainly PowerShell scripting, everything there is sequential, and that part follows a different pattern. So we have a lot of gains still to be had.
B
Yes, because the Windows machines are taking almost 12 to 15 minutes; that's the slowest part of the pipeline. You can see it clearly on the branches, while among all the other Linux images, the slowest are the Red Hat-based ones, and they take almost two minutes to build and around two minutes for the test half, which means the slowest Linux part is about six minutes.
C
Thank you. That's all I had on Docker multi-arch. I'll announce the JEP for review; it certainly needs lots of review. There were a lot of things I learned by trying to prepare the JEP, and people will now tell me, "oh, you misunderstood this or that."
A
That sounds good. I think we covered all the topics for today's meeting, and that was the last one. So before we close the meeting: thanks for your time, everyone, and see you at the next one.