From YouTube: 2022 02 15 Jenkins Infra Meeting
A
Hello and welcome to the Jenkins infra meeting of the 15th of February. We'll start with the announcements: there was a core security release last week, and there is also a plugin security release in progress today, so we're finishing the publication of the advisory, and we'll delay the weekly release to tomorrow.
B
I just expect, I expect lots and lots of shouting, and I think it's healthy for us. If we have it before the shouting starts, we have a blog post that says: look, we said it, it happened here, it's intentional. This is not active hate; this is us thinking about it and realizing we need to get to a modern way of doing process management in our services.
B
Quicker, oh yes, and I will certainly tweet it as well. For me, the blog post is an after-the-fact thing where we can highlight: look, here are some of the instructions we offered, some of the things that we've implemented to try to make this as smooth as we can.
A
Yeah, they removed a lot of stuff around authentication, around users, around worker groups. They renamed a lot of variables, they changed the way of declaring for loops: a lot of changes. So it didn't fix our problem, but we needed it anyway; that's another one done. And we have some benefits, like faster auto-scaling and fewer resources, because we don't need a public IP now. And during this upgrade, we found the missing permission by comparing what we had and what we needed.
A
We opened a pull request on the Terraform module repository to add the new permission, and we have also put the correct credential in place. And this is the next point.
D
I forgot to write it down: we also have to contact AWS support, or see if we can request that their documentation add the missing permission, or contact someone at AWS saying: hey folks, since last Friday, if you don't have that permission, it's not working anymore. Because the AWS documentation shows fewer permissions than the Terraform module, which shows fewer than what has actually been required since Friday.
D
The direct impact is that we started to see pods in error status, unable to pull images from Docker Hub. Because, one, we use Docker Hub for the agent images, and two, Docker Hub, if you aren't authenticated, uses the public IP as the way to aggregate the requests, per day or over six-hour windows, I don't remember exactly. Since we have private IPs for each worker node, it means they only have one egress IP seen from Docker Hub.
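As a hedged illustration of that aggregation, this is the documented Docker Hub rate-limit check (not something shown in the meeting), which reveals the quota seen from a given egress IP:

    # Fetch an anonymous token for the documented ratelimitpreview/test
    # repository, then read the quota headers; a HEAD request does not
    # consume a pull.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
      | grep -i '^ratelimit'
    # Typical anonymous output: ratelimit-limit: 100;w=21600
    # i.e. 100 pulls per 21600-second (six-hour) window, shared per IP.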
D
So all the pulls coming from us are counted against that public IP; direct consequence: we're rate limited. We applied a short-term fix, because we had to handle the ci.jenkins.io queue. So we did the docker-registry secret, like we did in the past, with a new Docker Hub account, because we don't want to reuse the existing one. That's absolutely out of the question: that's a public instance, so the probability of that credential being stolen accidentally is close to 100 percent, and we don't want someone pushing images that we would then use.
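A minimal sketch of that kind of fix, with placeholder namespace and secret names rather than the real infra values:

    # Create a docker-registry pull secret from the dedicated Docker Hub
    # account, then attach it to the namespace's default service account so
    # agent pods pull with the authenticated quota instead of the shared IP.
    kubectl create secret docker-registry dockerhub-pull \
      --namespace jenkins-agents \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKERHUB_USERNAME" \
      --docker-password="$DOCKERHUB_TOKEN"
    kubectl patch serviceaccount default \
      --namespace jenkins-agents \
      -p '{"imagePullSecrets": [{"name": "dockerhub-pull"}]}'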
C
Can we use that account on the VM images as well?
B
What Tim was describing: I'm accustomed to having to do a rebuild, or to schedule a rebuild two hours from now, so that the rate limit will have quieted down and given us another chance.
D
Just a reminder: last time we had this discussion, when we tried to embed the configuration on the virtual machines, we were again rate limited, because the rate limit went from the public IP to the account used and configured for the Docker engines, which is rate limited daily, compared to the six-hour windows of the public IPs. Which means if we have too many pulls with the same account, then we'll hit that limit.
D
Second thing, a proposal from Hervé: we could totally add a Docker image proxy on the cluster, and as well on AWS, where we spawn the VM agents, and the same on Azure: we could add a Docker proxy on each. I'm not sure it would work technically, but that could be an idea. And finally, switching back to public IPs.
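For reference, a sketch of what such a proxy could look like, using the stock registry image as a pull-through cache; the hostnames here are hypothetical:

    # Run a pull-through cache in front of Docker Hub.
    docker run -d --restart=always --name registry-mirror -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2
    # Point each Docker engine (on the VMs or cluster nodes) at the mirror:
    cat > /etc/docker/daemon.json <<'EOF'
    { "registry-mirrors": ["http://registry-mirror.internal:5000"] }
    EOF
    systemctl restart docker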
D
Another consequence is that not only the ci.jenkins.io workload, but also Datadog, was rate limited, because it appeared that we don't use the official Datadog agent image: we use a custom image built from the official one, with some stuff copied within. I didn't know that part. And since they are built by ourselves and pushed to our Docker Hub account, they were also rate limited.
D
So here we saw a nice learning opportunity to help Stefan get started the hard way on Kubernetes, because that's not an easy topic. So he's working on creating a Helm chart that allows specifying a list of namespaces: in each cluster, if you give the chart that list, it will create the docker-registry secrets in all the namespaces.
D
It allows specifying the accounts once, in one location, and the Helm chart will install them in each namespace, so each namespace can use the docker-registry secret to authenticate its Docker pulls. That should at least be a tool that we can reuse, and it's used for both Jenkins agents and Datadog.
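A rough sketch of the idea, assuming a hypothetical chart name and values, not the actual chart Stefan is writing:

    # The template stamps out one docker-registry secret per namespace
    # listed in the values; the account is defined in a single place.
    mkdir -p dockerhub-secrets/templates
    cat > dockerhub-secrets/Chart.yaml <<'EOF'
    apiVersion: v2
    name: dockerhub-secrets
    version: 0.1.0
    EOF
    cat > dockerhub-secrets/templates/secrets.yaml <<'EOF'
    {{- range .Values.namespaces }}
    ---
    apiVersion: v1
    kind: Secret
    type: kubernetes.io/dockerconfigjson
    metadata:
      name: dockerhub-pull
      namespace: {{ . }}
    data:
      .dockerconfigjson: {{ $.Values.dockerconfigjson }}
    {{- end }}
    EOF
    # Install once per cluster:
    helm install pull-secrets ./dockerhub-secrets \
      --set 'namespaces={jenkins-agents,datadog}' \
      --set dockerconfigjson="$(base64 -w0 ~/.docker/config.json)"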
D
We can stop using it and move to a proxy, but we can reuse it in the future if we need it; in the short term, that will help. And the long term is finding out, with Datadog, why the hell we are using this custom image while Datadog provides everything in the Helm charts. I would prefer relying on Datadog, especially since it's fewer things for us to build, and I bet that we copy some files because we weren't able to mount them from the charts at the time.
D
Something to be careful of; I thought about it, but we have to remind ourselves: we have to put a maximum number of pods that we can allocate per Kubernetes cluster, at the ci.jenkins.io level, just to be sure that Jenkins doesn't start to schedule hundreds of pods on the poor DigitalOcean cluster that only has two nodes today. Because it would end with a bunch of pods in pending state, and it's hard to anticipate whether Jenkins is going to try to schedule these pods on the other Kubernetes clusters.
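A sketch of such a cap through the Jenkins Configuration as Code definition of the kubernetes plugin; the cloud name and value are illustrative:

    # Limit how many concurrent agent pods Jenkins may schedule on this
    # cloud, so small clusters don't drown in pending pods.
    cat > kubernetes-cloud.yaml <<'EOF'
    jenkins:
      clouds:
        - kubernetes:
            name: "digitalocean"
            containerCapStr: "10"
    EOF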
D
If you read between the lines, that means the death of trusted.ci in the visible future, in favor of release.ci. Not to mention the ability to say: hey, if we split CI and CD, a contributor should have the same environment for releases as they see on the public CI. That sounds obvious, but it's hard to ensure when you have different Jenkins instances. So great job, Stefan. That also gives us, in the short term, the ability to run our Packer system to build Docker images, which is already the case.
D
I don't know if there are other questions, or things that I could have forgotten on that topic to clarify. So great job, folks. ci.jenkins.io has not only been security updated in the past hour; it now also features the latest JDK 17 and JDK 8 versions, which have been deployed.
D
Tdk8
is
two
weeks
old,
but
we
had
something
broken
on
the
build
process
that
delayed
the
release
and
thanks
mark
for
catching
yet
another
name
change
pattern
for
the
open,
gdk,
the
17
that
time.
So
we
had
to
change
all
the
update
process
and
the
provisioning
script
and
blah
blah
blah
yeah.
It's
annoying.
D
I don't know, but there is a task for all of that. The reason being, we should be able to build for new architectures and new platforms, such as Windows containers, because right now our Windows Docker images rely on the Jenkins CI images, and we want to own the full stack. So that will allow that part.
B
But, Damien, on that: I've been using Red Hat Enterprise Linux 8 for about four weeks now, and they don't even deliver Docker CE; they do Podman. So this kind of thing, I've been getting first-hand experience with it. Not that it changes anything we do, but it's not just Kubernetes where sometimes you can't do Docker.
D
However, Red Hat advertises that Podman is so perfect and so secure that they should allow Docker alongside Podman, which they don't; so, not my problem, sorry, that one was cynical. But I mean, if you look at the aliases on a RHEL 8, you should literally see alias docker=podman. So I hope they are good at keeping up with the Docker API changes. But again, that's a good point; that's the reason why some people need it.
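A quick way to see this first-hand on a stock RHEL 8 box, assuming the shim comes from the podman-docker package as I understand it:

    # With podman-docker installed, the docker command is a shim over podman.
    command -v docker    # -> /usr/bin/docker (script provided by podman-docker)
    docker --version     # -> reports a podman version rather than Docker CE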
D
So it's on the to-do list; there is no priority on that one, but it should be quite useful. If anyone is interested in contributing to the shared pipeline library, help is welcome on that part. A word about security on the infra CI: we start to have a lot of different jobs, for different use cases, on the infra CI. That instance was first started only for infra stuff, but now we deploy previews of the jenkins.io website, we have Terraform.
D
Yeah, I'm not really at ease with that one, because it's not only the preview site that would bother me; it's the fact that Terraform jobs can access credentials that they should not have, and vice versa. So whether or not we move that won't solve the issue: we need to split the credential scopes. That's a good practice in production, with credentials as sensitive as ours.
D
So
we
could
split
the
jenkins
instances.
There
are
long-term
things
that
moving
all
the
credentials
out
of
jenkins
and
connecting
jenkins
to
a
remote
vault
system.
D
Yes, but for instance, Garrett started work last year using Kubernetes secrets, so Jenkins doesn't store credentials: when you request a credential, it's just a placeholder, and Jenkins will get the credential from the Kubernetes secrets. So Jenkins never persists the encrypted credential on its side; it will always be in memory.
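A minimal sketch of how that looks with the kubernetes-credentials-provider plugin, using illustrative names and values:

    # A labeled Kubernetes secret that Jenkins discovers as a
    # username/password credential and reads on demand, without
    # persisting it in its own credential store.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: dockerhub-account
      labels:
        "jenkins.io/credentials-type": "usernamePassword"
      annotations:
        "jenkins.io/credentials-description": "Docker Hub pull account"
    type: Opaque
    stringData:
      username: example-user
      password: example-token
    EOF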
C
I really don't think it makes much difference. Like, sure, it's not written to disk encrypted, but that's just one very minor area, I think.
D
Yep, and then on Kubernetes, we have to store the credentials encrypted in the cloud vault; we have a cloud vault for each Kubernetes provider, because Kubernetes alone only hashes the passwords in secrets, and the goal is to rely on the underlying cloud, like the Azure KMS we have right now. That's a more complicated topic; that's why I said long term. Right now we have the ability to start scoping credentials.
D
There is also a reason to separate the jobs: we need different properties. Terraform jobs don't have the same properties as local jobs, especially in terms of the ability to do GitHub checks feedback: for some jobs we don't care, even though we need to have feedback for public users, and for some jobs we don't want it.
D
So
since
we
need
these
different
policies
per
jobs,
separating
per
kind
of
policy,
even
of
folders
instead
of
github
organization,
avoids
the
trade
configuration
like
I
go
to
github
configuration,
I
change
one
trait,
and
then
I
have
to
wait
for
the
watch
to
apply
everywhere
and
then
two
days
later,
someone
else
complains
because
they
have
that
new
trace
that
they
don't
want.
So
you
have
to
split
etcetera,
etcetera,
which
is
maintenance
hell.
B
And we had, I had asked them a question about the physical location; I didn't get an answer, but actually it doesn't matter. It would be great to have one more physical location in the US, as an example, because right now the entire US west coast is being served by a single mirror, and that one says it's in the middle of the continent.
D
Okay, for the two other top-level tasks before we close: I'm the bus factor for both. On ci.jenkins.io, I still need to finish writing the epic issues and milestones to give you visibility, so I have to extract things from my head and put them into issues. Some issues already exist, some others don't, so I'm the bus factor here. So no, no need: Daniel fixed the matrix errors on ci.jenkins.io and trusted.ci, thanks Daniel.