From YouTube: Kubernetes SIG K8s Infra - 20220511
A: Hello everybody, welcome to the SIG K8s Infra meeting of 11 May 2022. This meeting is being recorded and will be uploaded to YouTube later, so please adhere to the CNCF code of conduct, which basically means be excellent to each other. Welcome to today's meeting. Let's see, do we want to go through the billing report this morning? Would you like us to do that? Okay, so let me share a screen and let's start there. I trust you can see my screen now.

A: We have a new contributor visiting, Robert; we've met you at the conformance meeting. Welcome to today's meeting. If you want to introduce yourself, please feel free.
B: Yeah, sure, I can just do the quick version. My name is Robert and I work as head of platform engineering at Crayon, which is, well, a big company, crayon.com. The short answer is that I'm trying to get a little bit more involved in the Kubernetes part of things. I'm in the CNCF, in TAG App Delivery and a bunch of other things, working with GitOps and all that kind of cool stuff. That's basically it.
E: I think that BigQuery one was because something was not running, and then we made sure it started running. That's probably why the push had that cost; yeah, this was to generate the failures and the fake JSON files. Oh yes.
E: Yeah, one thing: Hippie ended up tagging me on an email thread. I think the scale 23 triggered some kind of an email chain. Do you want to talk about it, Hippie?
E: Yeah, what I was talking about, Hippie, was an email chain for one of the projects. They were asking us whether it was ours or not, and I confirmed to them; I was able to look at the GCP console, and I confirmed that it is something that we own. They told us, like, hey, do you want to bump up the quota on something, and I said probably not, it's a CI job. If we use up more memory, more CPU, then it will essentially be an even higher loss in terms of the cost.
A: And I think the key things to discuss today are two agenda points. So, if we look at the KEP: thanks for all the updates and resolving all the conversations in there, I see we got an approve. If we go to the end, as of this morning we hadn't approved; we just need an lgtm. If somebody's comfortable giving us an lgtm, that can merge. Anyway, thanks.
E: Actually, there was one more thing here, right: I had a PR, so we were asking the SIG Release folks to take a look at both the PR and the KEP at the same time.
G: Hello, can you hear me? Yes; not yet, I'm on my way to the airport, to Valencia, but I'll have some time to get some work done at the airport, so I'll probably take a look there.
E: No worries, just to give you a heads up: this was a PR in k/k where we essentially did a replace of k8s.gcr.io with registry.k8s.io, and then made sure that all the CI jobs go green.
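A bulk replace like the one described can be sketched roughly as follows; the function name and the assumption that every readable text file under the repo should be rewritten are illustrative, not the actual PR's tooling:

```python
from pathlib import Path

OLD = "k8s.gcr.io"
NEW = "registry.k8s.io"

def rewrite_registry(root: str, old: str = OLD, new: str = NEW) -> int:
    """Replace old registry references with the new one in every text
    file under root; return the number of files changed."""
    changed = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        if old in text:
            path.write_text(text.replace(old, new), encoding="utf-8")
            changed += 1
    return changed
```

The CI-jobs-go-green step is then just rerunning the jobs against the rewritten tree.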
E: But I haven't looked exactly at, you know, the automation that the SIG Release team uses, and I think Ben left some notes in there for you as well, saying hey, we might need an override or something like that in krel, or whatever tooling we have.
I: Yeah, in the past that would have been problematic, because releases just pushed straight to production and you can't push there. But since we're already overriding and pushing to staging in krel now, according to Sascha's comments, we should be fine.
G: Okay, yep, then we should be good; that should be a safe change, right?
A: Do you have a link for that PR? I'm going to add it to the agenda, just for the historical record. All right, so with that, I think you'll get that approved.
A: Thank you very much. Let's move on to the next point, if there's nothing else on that PR and the KEP. Anybody else have anything on that?
E: I did have a question for Ben. Ben, when would you be comfortable pushing what's in staging to production: before this PR merges, or after?
I: I think it should be after; we should focus on making sure that goes smoothly. The thing that we have in staging right now, for testing purposes, always points at the one bucket, with no setup for when we're serving AWS clients. Yeah, that's not a good experience if we were actually serving global clients, as opposed to a little bit of CI testing. We should get the additional buckets in place first before we roll that out, or we should disable the bucket path.
I: There are ongoing pull requests around those buckets and the management of them, and we have an issue open discussing what the region mapping will look like.
E: We're seeing that one? Okay, okay! So let's get this PR in, and then let's work on the buckets; and I think it should also come with a job that pushes things out every day or something.
I: I think we could move it to production as soon as we have the regions we want, with the real buckets, and we have done at least one one-time copy, because it'll be fine if we fall back to GCR; that's pretty much where we're at today. But it's not very useful if we point everyone at this single region, or if we don't have anything in any of the buckets.
I: If we had empty buckets there, that would probably be fine, because we'll just query them and find that there are no hits. It's less ideal, but it should be fine. But while we just have the single bucket configured, because we don't have the buckets yet, that shouldn't roll out to production.
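The serve-with-fallback behaviour being discussed can be sketched as below; the bucket naming, URL shapes, and function signature are illustrative guesses, not the actual registry.k8s.io code:

```python
from typing import Callable, Dict

# Upstream registry we fall back to when a regional bucket misses.
FALLBACK = "https://k8s.gcr.io"

def resolve_blob(region: str,
                 digest: str,
                 buckets: Dict[str, str],
                 blob_exists: Callable[[str, str], bool]) -> str:
    """Return the URL to serve a blob from: the client's regional bucket
    when it holds the blob, otherwise the upstream fallback. An empty or
    missing bucket simply never reports a hit, so it falls through."""
    bucket = buckets.get(region)
    if bucket and blob_exists(bucket, digest):
        return f"https://{bucket}.s3.amazonaws.com/{digest}"
    return f"{FALLBACK}/{digest}"
```

This is why empty buckets are merely "less ideal": every lookup misses and falls back, costing a probe per request but never failing.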
I: Well, right now the code that's running in the sandbox only points at a sort of sandbox bucket, so we need to get that changed first, and then ideally we need to get some layers into those buckets.
K: Right, I mean we still need to do that one-time sync anyway, right? I mean, yeah.
I: For the buckets we'll need to have everything set up. There's off-the-shelf tooling, though, for a lot of the one-time sync stuff, but there's the credentials and everything. But we'll need that in the future anyway, right; we'll need that in the future for backfill purposes anyhow. And if we do a single backfill once, that should already cover plenty of the traffic, the released versions.
A: Yeah, I think the next point in the agenda takes us there, because of the IAM role. There was a lot of discussion there to get the IAM role in place for the sync, and to be able to write to the buckets. Yeah, let's talk about that now.
K: Yeah, so I don't... I mean, now that the access key has been removed from the Terraform. That was certainly Tim and Arnaud's.
K: Caleb removed it, but the next time I went and looked at this PR, unless I'm totally mistaken, there's going to be a separate IAM user and policy created for each AWS region, because of the AWS S3 bucket variable.
D: So there's going to be a single IAM user for the access, which will be logged, and then the policy is per region and bucket. Yeah, not a user per region, because that would be a little bit much.
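The per-region scoping described here could look roughly like the following; the action list and bucket naming are assumptions for illustration, not the actual Terraform:

```python
def s3_writer_policy(bucket: str) -> dict:
    """Build an IAM policy document scoped to one regional bucket:
    put/get on objects, list on the bucket itself. One such policy
    would be attached per region, all to the same IAM user."""
    arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            # Bucket-level ARN for ListBucket, object-level for the rest.
            "Resource": [arn, f"{arn}/*"],
        }],
    }
```

Generating one of these per region is exactly the maintenance surface questioned later in the discussion, versus a single assumed role.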
K: Okay, so you're saying that there's a separate IAM policy created per region, but for the same IAM user.
K: Go ahead. That is going to need an access key, right? Sorry, my Zoom is all choppy. That IAM user is going to need an access key and secret that we will need to store. I'm just...
K: I guess I'm concerned that the multiple policy objects will be harder to maintain than just having the IAM user assume the s3-writer role that has already been created.
K: I mean, I'm certainly open to suggestions on this, though, and it would be great if we could get, like, Ted Zamborski, or maybe a developer advocate from the IAM organization, to verify the thinking here. I just think simplicity is better in this case.
K: If we have a role, which we've already created in the cncf/aws-infra Terraform, right; that role has permission to create buckets and write objects, and is responsible for registry.k8s.io in the CNCF org; then I'll just let this IAM user call sts:AssumeRole into that role.
D: My understanding is that, in order to use the role that you assume into in the registry.k8s.io AWS account, you need to be in the CNCF org, and we here are Kubernetes humans. So we should probably... I think they propagate down.
D: So, regardless of which way we set this up, it will be an IAM user, created in some account, that's then assumed into the role. Okay, correct.
D: Okay, that's cool. So which account would be the best to put it in? Because obviously we don't want to go CNCF top level; we'd go with one of the... there's the Kubernetes root account. Would we want to go with that one?
K: Well, this code that's running Terraform: where is that running? So far I've only seen it run on a local laptop, as a CNCF admin, right. But what we want is for the CI, or, you know, the principal associated with a build job, to run this. And so the IAM principal that's associated with that CI job, or cloud instance, needs to then assume the s3-writer role in the CNCF account.
K: We need to figure out which AWS account that CI user is in, and just add a trust relationship in the cncf-infra/aws-infra project Terraform.
K: And then you don't need this IAM user policy at all. The only thing that you need on the IAM user is the ability to call sts:AssumeRole, which all users and principals have.
H: I think everybody's cutting out a little bit; it doesn't seem specific to any one person. I like the idea, and we've talked about it before, of making a lot of the policies for what's allowed in the roles, and then creating the IAM access and assigning it, with sts:AssumeRole being the way we use this as a structure.
D: One concern I have is that the role we have created does everything, and so I think it'd be good to have a separate role which is specifically for... I know we called the current one s3-writer, but if we could have one that does the lifecycle management, so it can even create and delete buckets, kind of thing. But if we have a separate role that's just for putting and getting and listing, then let's coordinate that so I can create it.
K: All right, awesome. Just to point out that when you call sts:AssumeRole, the session is automatically a short-lived credential, which is what we want; we want to get away as much as possible from long-lived credentials and access keys and things like that. So by calling sts:AssumeRole it creates a temporary session token, with permissions to execute whatever that target role has permission for.
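The trust relationship mentioned here is usually expressed as a policy document on the target role. A minimal sketch, with a made-up account ID and user name standing in for the real CI principal (the actual ARNs live in the cncf/aws-infra Terraform):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111111111111:user/ci-mirror" },
    "Action": "sts:AssumeRole"
  }]
}
```

With this on the s3-writer role, the CI user needs no attached S3 policy of its own; sts:AssumeRole returns temporary credentials carrying the role's permissions.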
A: I think... thanks for clarifying. In short, Caleb will push and Jay will iterate; I will reiterate that we need both.
A: Absolutely, one is not replacing the other, but let's see which moves fastest. Okay, as soon as that is done, I assume we can do the copy, and then you can start doing the things you need to do, once the roles are in place and you can point to those specific buckets.
A: Okay, anybody have any other thoughts? What would be blocking to move us forward, once we've got the IAM roles and the buckets populated?
A: I think we're good. Thanks, Kenneth, looking forward to that. Jay, I see that your chat became my agenda point; do you want to go ahead with that?
K: Sure. The links are on the... I made it a separate promo-tools command that just does the mirroring, because again we didn't want to run it within the same workflow as signing and all that; we want to do it after all that gets done. So I've got most of it done.
K: I'm just writing some tests right now. You can configure, in a YAML file, the mirrors, like the AWS regions and the buckets and stuff like that, and you pass the mirror and the image URI, and it uses the GCR...
K: ...or the go-containerregistry library, to parse the image manifest and get the layers, and then it just issues an upload call in the S3 library to upload to that particular bucket. So it's not particularly complicated; hopefully by the end of the week I'll be able to get it pushed so folks can review it.
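The fan-out that command performs can be sketched as follows; the config shape and object-key layout here are guesses for illustration, not the actual promo-tools implementation:

```python
from typing import Dict, List

def mirror_plan(layer_digests: List[str],
                region_buckets: Dict[str, str]) -> List[tuple]:
    """Given the layer digests parsed from an image manifest and the
    configured region-to-bucket map, return every (bucket, key) pair
    to upload. Registries address blobs by digest, so the object key
    is derived from the digest alone."""
    return [(bucket, f"blobs/{digest}")
            for bucket in region_buckets.values()
            for digest in layer_digests]
```

The real tool would then perform one S3 upload per pair, after the signing workflow has finished.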
E: Jay, how are we passing credentials, or tokens, or whatever else is needed by this tool?
K: So the crane stuff uses the local auth chain for, you know, accessing GCR. For the S3 stuff it'll take whatever the typical SigV4 AWS signing stuff is; that's all encapsulated in the AWS SDK session object and everything. For instance, it'll look at the AWS credentials and, like, access key environment variables. Okay.
E: Okay, typical AWS stuff, and on the other side, yes, nothing special happening. Okay, got it.
A: And the last topic: Eddie, nice seeing you; the customization testing. Do you want to lead with that?
E: Yeah, we have lots of staging buckets for different projects, so just ask for one.
E: So if you gather the email addresses and open up an issue, Arnaud can quickly help with that, or I can do it.
A: Okay, Adolfo says in chat he's on his way; I think he beats us all with the longest travel time, starting now. And I think that's the last item, from an output point of view, on this. I assume just about everybody working on this will be at KubeCon, where you're also going.
A: Okay, so how dependent would you be on the work that Caleb and Jay are doing, because I think they'll be out of play over the KubeCon period?
E: If you've got to land it, it's going to take a week, so... but I don't think... yeah, then go ahead, please.
I: At this point, I think, regardless of that work, we can move forward, hopefully with using the production domain in more places; we've been ready to do that for some time. For being able to move S3 serving to production we're, of course, blocked on this. I don't think we have any major additional work going. I'm currently drafting some docs around testing expectations and looking into enforcing code coverage, but other than that I don't think we have a lot of additional work to do.
E: Okay, the only other thing I could think of was: Caleb and Jay, when you're doing this, just write it up somewhere, so we'll remember the rationale, you know; put it up with some small diagrams or something like that, so we know what to track or how it is set up, basically.
A: It really feels like we are getting very close. I saw Ben's write-up on the testing, that he started working on, and I think there's a lot of work still ahead of him. So yeah, I think we're on a good way: waiting for the IAM role to be finalized, and then we can start implementing the production buckets.
E: Yeah, so the other update that I have for this group is: Arnaud was working with the Cloudflare folks on trying some things. He's stuck on one request for them to switch something on, and they haven't done it, so I have to go chase them this week; then Arnaud will be unblocked when he comes back from KubeCon. That's the only other thing that I had.
E: Yes, we discussed that already. Yes, we started with that when we talked about the KEP, okay. So, in fact, you know, there is a PR too, so I'm trying to... well, Sascha and Adolfo, I think they are okay with those things, and now I have to go hunt for Jordan or, you know, Clayton or somebody.
K: Okay, yeah, because I think Ben was saying something to the effect that some of these search-and-replaces are actually for the push, and we don't want that; we only want pull for registry.k8s.io, right?
E: Yeah, but we talked about that already, so, okay: essentially the release tooling already injects the staging repository when it does the build. So whatever is in the k/k repository gets overridden with the correct staging registry.

I: Yeah, we don't run pushes directly there, but whatever the current tags are just get pushed. So that's already handled; it wasn't in the past, before we had krel.
E: Anything else from you, Ben?

I: I do have one note, just an FYI. Currently, Tim and I have a meeting on the 23rd with some of the GCR / Artifact Registry folks, just to discuss what our options are, the possibilities: whether we can do a redirect on the existing domain, or what they're open to. No one will be implementing anything; we'll just be discussing what they're comfortable with, what their thoughts are on that. I've actually had that for a bit; I don't recall if I mentioned it at a meeting, but yeah, we're tentatively booked for the 23rd.
K: I was going to say: remember the request that deleted the XML from OpenStack? Yeah, that was fun. That was fun.