From YouTube: Kubernetes SIG Testing - 2021-06-15
A: Hi everybody, today is Tuesday, June 15th, and you are at the Kubernetes SIG Testing bi-weekly meeting. I am your host, Aaron Crickenberger, aka spiffxp, aka Aaron of SIG Beard. We're all going to adhere to the Kubernetes Code of Conduct while we're at this meeting and while we participate in the project in general, by being our very best selves to each other.

A: So on today's agenda, we're going to talk about Navid following up on external secrets for Snyk scanning, and then it's kind of open questions, if anybody has any specific questions. I initially had plans to scrub all of the issues related to moving Kubernetes CI-related stuff over to k8s-infra, but I didn't get time to do that ahead of time. So before I hold you all hostage by walking through all of that live, we could answer specific stuff if people have specific interests or questions. So with that, I'm going to hand it over to Navid.
B: Sure. So, to add a bit of context: this PR is about adding a continuous Snyk scan job in CI, which would run the Snyk CLI against k/k master. This requires a secret that will authenticate the Snyk CLI against the server to perform the job, and we need to get that secret into the infrastructure.
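(For context, a continuous scan job like the one described here might look roughly like the following Prow periodic config. This is only a sketch, not the actual PR under discussion; the job name, image, cluster, and secret names are all placeholders.)

```yaml
# Sketch of a Prow periodic job running the Snyk CLI against
# kubernetes/kubernetes master, authenticating with a token mounted
# from a Kubernetes secret. All names are illustrative.
periodics:
- name: ci-kubernetes-snyk-master        # hypothetical job name
  interval: 24h
  cluster: k8s-infra-prow-build-trusted  # assumed target cluster
  decorate: true
  extra_refs:
  - org: kubernetes
    repo: kubernetes
    base_ref: master
  spec:
    containers:
    - image: snyk/snyk:golang            # illustrative image
      command: ["snyk"]
      args: ["test"]
      env:
      - name: SNYK_TOKEN                 # the Snyk CLI reads its auth token from this env var
        valueFrom:
          secretKeyRef:
            name: snyk-token             # the secret synced in via external secrets
            key: token
```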
B: I think we are going to use external secrets here, and there was some plumbing needed on the k8s-infra side, which has already been done by Aaron; thanks a lot, Aaron. So I am following up on that to understand what the next steps would be to get the secret actually into GSM (Google Secret Manager).
A: Yeah, I can speak to this. I appreciate your patience.

A: I keep tripping into other pieces of tech as I try to get this up and running. So I think the next step is to create the ExternalSecret custom resource over in the k8s-infra-prow-build-trusted cluster. I'm trying to dig up a link for where you would put that; it'd be a PR against the k8s.io repo, and we'd watch that merge.
B: I think the first point under that, which is about PR #22298: that PR is already open against test-infra to create a CR for external secrets, and I've kept it on hold. But I think that ExternalSecret CR is referencing the actual secret that will be created in GSM, and that step is pending. So I don't know if I'll get access to...
A: Okay, there's a gcloud command I can paste to you offline to update the value of the secret. But this PR, #22298: I'm going to ask you to close it and open a new PR against the k8s.io repo. I posted a path to the resources file, and if you can just make a new file named something like external-secrets and put your CR in there, that should work.
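(As a rough illustration of what such a file might contain, assuming the kubernetes-external-secrets controller that k8s-infra was using around this time; the namespace, project, secret, and key names below are placeholders, not the actual values from the PR.)

```yaml
# external-secrets.yaml (sketch): an ExternalSecret CR telling the
# kubernetes-external-secrets controller to materialize a Kubernetes
# secret from a Google Secret Manager entry. Names are illustrative.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: snyk-token
  namespace: test-pods
spec:
  backendType: gcpSecretsManager
  projectId: k8s-infra-prow-build-trusted  # GCP project holding the GSM secret
  data:
  - key: snyk-token     # name of the secret in GSM
    name: token         # key in the resulting Kubernetes secret
    version: latest
```

The secret's value itself would then be set out of band, e.g. with something along the lines of `gcloud secrets versions add snyk-token --data-file=-`; the exact gcloud command Aaron offers to share offline may differ.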
A: I mean, we can do the CR PR first; it'll probably merge before you update the secret with the value. But I'll coordinate with you on this immediately after this meeting, if you're available after this meeting.
A: Yeah. So I feel confident that after we coordinate after this meeting, we'll be able to go see successful test runs of this on TestGrid, or at least get to the point where your job is running and it's not breaking because it can't access the token that it needs.
A: That's a great question, Paris. Well, we have a project board that Ben and I have tried to keep up to date, but like I said, I didn't have time to really speed through it for this meeting. I'll post a link in chat, but I will also share my screen.
A: Specifically, we want to make sure we're talking about closing issues if they take multiple PRs to address. And if you're coming here asking about a specific PR, most likely we've got to make sure there's an issue covering that work; which, as I say that out loud, I think Navid has one, but I legitimately can't remember off the top of my head. And in terms of where we want help, we have a "help wanted" column.
A
We
try
to
keep
it
populated
with
issues
that
are
relevant
and
have
the
help
wanted
label
on
them
and
could
use.
Let's
see
so
I
hear
one
I
see
one
here,
that's
stale,
I'm
going
to
remove
life
cycle
stale
on
it
because
it's
it's
still
a
distilled
problem.
A
You
may,
if
you've
lived
the
ci
signal
life
or
have
been
involved
in
troubleshooting
stuff,
with
the
kubernetes
release,
you
might
notice
that
there
are.
There
are
some
tests
that
have
please
hold.
A: I don't know if people can see this; I'll blow it up a bit. Or maybe TestGrid will not play well, oh boy. If I hover over the top column, the tooltip will say "commit", and this is the commit of the kubernetes repo. So if I'm trying to verify which version of my tests are running, or which version of the code is being exercised, that's what this column is about.
A: But then this other column here is called "infra commit", and it's kind of a holdover from the days when literally everything related to testing and jobs and all that stuff lived in the test-infra repo. So often people will take a look at the infra commit and see when it changes to decide when their job config changed, or when the version of kubetest used changed. Because what this issue was tracking was: somebody added a new flag to kubetest.
A: kubetest is not dynamically built on the fly; it's installed in the version of kubekins that runs, right? And this commit here doesn't correspond to the version of kubekins that runs, nor does it correspond to the version of kubetest that's being used. It corresponds to wherever the test-infra repo was whenever this job happened to run.
A: So people are starting to lose useful information that would help them figure out what actually changed. And TestGrid has this wonderful feature where you can add a bunch of custom columns. I'm probably not going to find it on the first try; let's see.
A
It's
well,
I'm
looking
for
testgrid
has
a
feature
that
allows
you
to
specify
custom
columns.
A: No, I can't. I wonder, if I look for the k8s-infra prow build one... all right, let's go see what that shows me. Hey, look: there are a bunch of additional columns here, and I'm hovering over one of them, and I can see node OS image over there, and I can see other stuff.
A: So basically, this issue is about making extra columns like this that show the kubetest version. And I'm rambling about how to do this; it's not super specifically documented or spelled out in the issue. But the idea is, because it's tagged as... actually, it's not tagged the way I'd expected.
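(For reference: TestGrid's custom columns are driven by metadata that jobs write into their started.json/finished.json, combined with `column_header` entries in the TestGrid test group config. A rough sketch of what a kubetest-version column might look like; the test group name is illustrative and the `kubetest-version` metadata key is hypothetical, since the job itself would have to emit it.)

```yaml
# TestGrid test group config (sketch). Each column_header entry surfaces
# a key from the job's uploaded metadata as an extra header row in the
# grid, like the node_os_image header seen in the demo.
test_groups:
- name: ci-kubernetes-e2e-example      # illustrative test group
  gcs_prefix: kubernetes-jenkins/logs/ci-kubernetes-e2e-example
  column_header:
  - configuration_value: node_os_image   # existing convention
  - configuration_value: kubetest-version  # hypothetical new key
```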
A
This
is
what
happens
when
you
you
all
just
let
me
ramble
live.
I've
been
told
that
people
like
it
when
I
make
mistakes
and
when
I
stumble
around,
because
you
learn
how
I
use
our
different
tools
and
stuff
to
find
things,
but
I
gotta
admit
like
I
feel
like.
Sometimes
I'm
just
wasting
people's
time
here
so
because
the
issue
is
a
lot
by.
A: The idea is: if you want to work on something, comment on the issue and reach out to us in the SIG Testing channel and say, "Hey, I want to work on this thing; what do I need to do?" If it's not spelled out in the issue description, we'll work with you to try to update what we think the next steps would be.
A: So, for example, we have a ton of images that are used in a bunch of our test infrastructure and a bunch of our jobs, and we've got a checklist here for which images have READMEs and which ones don't. Because when I navigate to them, what's the difference between alpine and alpine-bash? I don't know, because they don't have READMEs; it's kind of annoying. But if I go look at the README for krte...
A
Well,
there's
a
big
scary
thing
that
says:
warning
use,
use
it
at
your
own
risk
for
your
personal
needs,
but
it
also
says
it's
used
for
kind,
so
all
of
the
ci
jobs
that
are
related
to
kind
use
this
image,
I
think
another.
A
Would
be
the
like,
the
generically
named
bigquery
image
that
says
what
it's
used
to
run
and
then,
like?
Basically,
what
tools
are
installed
in
it?
It's
a
convention.
I
personally
am
trying
to
follow
when
I
make
new
images
like
I
made
an
image
in
the
case,
I
o
recently
called
kate's
infra,
which
has
all
of
the
tools
necessary
to
run
all
of
the
tests
and
manage
all
of
the
infrastructure
that
we
manage
in
the
gates
and
for
working
group
so
to
contribute
to
I've
lost
my
place
already.
A
To
contribute
to
this
issue
would
involve
saying:
hey.
I
want
to
work
on
this
or
even
just
opening
a
poll
request
that
references
this
issue
and
they're
like
hey,
I'm
gonna,
I'm
gonna
work
on
the
g
cloud
thing
and
I'm
gonna
take
a
look
at
the
docker
file
and
see
what's
used
here.
I'm
gonna
take
a
look
at
the
cloud,
build
file
and
see
that
it
builds
this
thing
called
gcloud
and
go
and
I'll
go
use.
A
Cs.Kates.Io,
maybe
to
go
see
like
so
what
uses
gcloud
and
go,
and
then
I've
got
a
bunch
of
jobs
and
stuff
that
I
could
go
look
at
say.
Well,
it's
used
to
run
these
jobs
and
these
things.
A: Okay, so the issue that Arnaud has been working on, I pasted in chat. The idea is that we want to eventually have prow.k8s.io run in the same GCP organization and such that everything else managed by k8s-infra runs in, because that's the place where community members can help run Prow. I want to live in a world where we don't have a secret group of Googlers that you have to beg and plead to make things happen.
A
I
want
people
who
are
interested
in
changing
prow
or
understanding
why
prow
isn't
working
or
who
want
to
help
make
proud
work
better.
I
want
to
give
people
disability
and
we
can't
do
that
while
prowl
continues
to
run
in
a
google.com
owned
project,
because
only
googlers
can
be
assigned
to
use
google.com
projects.
A: So as much as I would love for it to be like, "we stand up the new Prow instance, and then we just gradually start saying run on the new Prow instance instead of the old Prow instance", we could end up in a situation where there are competing bots all talking on the same pull request, or trying to add and remove labels from each other.
A
So
that's
why
developing
a
plan
on
how
to
do
this
is
the
next
I'm
talking
like
I'm
sharing
my
screen
and
I'm
not
I'm
sorry.
A
Developing
a
plan
to
do
this
is
sort
of
an
open
issue.
Now
I've
lost
my
chat
window
because
I
was
gonna
paste
that
there
for
you
to
follow
along-
and
I
I'll
admit
like
I
haven't-
touched
this
issue
in
a
long
time.
It
was
basically
first
it
was
about
setting
up
build
clusters
over
in
the
catenfra
area.
A
There's
a
google
doc
linked
in
this
issue,
and
there
are
a
bunch
of
issues
that
are
linked
from
this
there's,
even
a
checklist
of
some
issues
that
need
to
be
completed
to
make
this
build
cluster
totally
done
and
then
arno
is
correct.
We
also
started
up
a
discussion
about
this
thinking
that
maybe
discussions
would
be
a
better
way
to
coordinate
asynchronously,
and
we
can
continue
to
coordinate
here
if
y'all
want.
A
I
feel,
like
part
of
the
problem
is
just
my
availability
keeps
you
know
coming
and
going,
I'm
not
sure
who
else
feels
knowledgeable
enough
to
help
sort
of
draft
this
plan.
But
I
believe
the
plan
is
gonna.
Look
something
like
you
know,
standing
up,
completing
standing
up
a
prow
instance
and
then
maybe
like
flipping.
A
Repo
to
be
managed
just
by
this
proud
instance,
which
is
probably
going
to
involve,
telling
proudcates.io
to
explicitly
not
manage
this
repo
and
then
telling
the
staging
pro
instance
to
explicitly
manage
this
repo
there's
been
work
done
along
the
way
to
make
prows
config
a
little.
You
can
now
like
exclude
things
on
a
per
repo
basis
for
most
of
browse
configuration
settings,
so
I
think,
like
we
have
all
of
the
required
tools
and
technology
and
configuration
support
to
this.
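(As an illustration of the per-repo scoping being referred to: both Prow's plugin enablement and Tide's merge automation are configured per org or per repo, so a single repo can be omitted from one instance's config and listed in another's. A sketch, where "kubernetes/some-migrated-repo" is a placeholder:)

```yaml
# plugins.yaml on the staging Prow instance (sketch): plugins are keyed
# by org or org/repo, so only explicitly listed repos are managed here.
plugins:
  kubernetes/some-migrated-repo:
  - trigger
  - lgtm
  - approve

# config.yaml (sketch): Tide merge automation likewise scopes by repo,
# so the old instance would drop this repo from its queries while the
# staging instance adds it.
tide:
  queries:
  - repos:
    - kubernetes/some-migrated-repo
    labels:
    - lgtm
    - approved
```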
A
We
just
do
kind
of
need
to
to
draft
a
plan
and
walk
through
it
as
a
group
and
agree
to
execute
on
it
and,
as
I've
said
before
and
commented
on
issues
like
I'm
happy
to
draft
that
plan,
I
just
need
to
get
some
dedicated
time
to
set
aside
for
that
and
at
the
moment,
I'll
just
say
that
personally,
the
higher
priority
for
me
is
going
to
be
one
of
checking
where
we
are
on
migrating
all
of
the
assets
that
are
used
in
ci
for
the
for
the
kubernetes
release
process.
A
Basically,
I
don't
have
the
issues
for
this
super
handy.
Let's.
A: This is part of it: all of the image registries that are used as part of Kubernetes CI should live in k8s-infra. We've got issues for a bunch of these. Some of you are more familiar with this than others; I know Claudiu has been doing a bunch of great work on making sure that all of the images that are used as part of e2e tests now, for the most part, live there instead of living in the google.com-owned kubernetes test images project.
C: For this particular issue, only those pull requests, these...
A: Again, I don't want to review this live during the meeting, but I'll take a look at it as soon as I'm done here. So for me, getting all the images and stuff moved over, getting that plan together, is a higher priority for me than working on the plan to get the staging Prow instance up and moving some jobs and stuff over.
A
The
reason
for
that
is
because
I'm
thinking
about
I'm
not
sure
off
the
top
of
my
head
exactly
what
the
what
the
release
timeline
is.
Let's
discover
together.
Oh
wait.
Can't
I
do
this
via
cait
stab
now,
resources,
release
information
or
kate's
io
releases.
Thank
you
so,
tldr
by
july,
8th
or
july
15th.
A
We
should
basically
not
be
changing
code
in
kubernetes
anymore,
and
we
tend
to
be
a
little
sensitive
about
making
big
infrastructural
changes
for
all
of
the
things
that
run
all
the
tests
for
kubernetes,
depending
on
the
stability
of
ci
signal
and
the
general
flakiness
of
etp
tests
and
whatnot.
A: I would say largely, test freeze is the date, yeah. And you can click on these to see the definition. Test freeze is basically: we can't add any more new tests, but it is acceptable to fix broken tests and whatnot. But for me...
A
See
like
all
of
the
image
stuff
swapped
over
before
test,
freeze
to
give
us
the
time
to
like
fix
or
revert
if
you
notice
anything
is
broken.
Last
minute,
I
recall
that
we
went
through
a
bit
of
a
push
right
up
against
the
test.
Freeze
deadline
last
cycle
and
didn't
quite
make
it,
which
is
why
I
believe
we're
really
close
on
this
particular
issue.
A: To do... I guess there's something else related to this, which Arnaud is also working on, and which I'm going to have to find the issues for.
A
Right
so
we
had
closed
these
out.
We've
migrated
like
all
of
the
jobs
that
are
involved
in
getting
the
kubernetes
release
out
the
door,
both
release
blocking
and
merge
blocking
I'll
run
on
a
build
cluster
in
ktempra
and
community
members
totally
are
free
to
pr
themselves
into
a
group,
to
like
view
the
logs
for
that
cluster
or
check
in
on
the
nodes
for
that
cluster
or
view
the
logs
for
the
projects
where
these
etp
tests
are
running.
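(The self-service group membership mentioned here works by sending a PR against the group definitions in the kubernetes/k8s.io repo. A sketch of what such an entry might look like; the group name follows the k8s-infra convention, but the exact file, fields, and group should be checked against the repo, and the member address is a placeholder.)

```yaml
# groups definition in kubernetes/k8s.io (sketch): k8s-infra Google
# Groups are declared in YAML, and membership changes are ordinary
# pull requests against this file.
groups:
  - email-id: k8s-infra-prow-viewers@kubernetes.io
    name: k8s-infra-prow-viewers
    description: |-
      Grants view access to Prow build clusters and job logs.
    settings:
      ReconcileMembers: "true"
    members:
      - new-contributor@example.com   # add yourself here via PR
```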
A
All
that
great
stuff,
so
now
think
about
widening
the
the
scope
of
the
number
of
jobs
that
we've
got
running
over
there,
because
the
release
blocking
jobs
and
the
merge
blocking
jobs
make
up.
I
think
it's
ballpark
like
200
and
something
jobs
out
of
the
2
000
and
something
jobs
that
we
have
configured
to
run
via
productkates.io.
A
Don't
some
of
the
jobs
that
run
on
product
case.io
are
for
projects
that
aren't
actually
part
of
kubernetes
at
all,
so
there's
a
separate
issue
to
like
hope,
the
the
owners
of
those
projects
to
get
their
stuff
off
of
productkates.io,
because
these
jobs
shouldn't
be
migrated
over
to
the
kids.
In
for
brown
instance,
they
shouldn't
run
on
the
community,
build
cluster
things
like
that,
but
it
seems
like
now
that
we've
taken
care
of
all
the
release
blocking
jobs.
A: A good next step would be the release-informing jobs, many of which appear to be red right now. But two of the big ones there are the scale correctness job and the scale performance job, which some of you know as the 5k-node scalability tests.
A: Here, so: you've got the 5k-node jobs; there are also 100-node jobs; there are also Kubemark-related jobs over here; you know, various performance tests, so on and so forth. So Arnaud's working to get all of those jobs up and running over here. And this is just what happens to me, I have too many tabs open now, so I'm going to go back to the migration plan.
A: I can't find it. Okay, I have reached peak awkwardness of holding you all hostage while I stumble around, and I'm going to pause.
E: Yeah, it'll be a quick one. I just wanted to throw out a quick update on the e2e framework thing. We're making progress on the helper function package (package, I should say, not packages), and I'll drop the initial PR. And we should have another PR coming up soon, which will have actual CRUD-type abstractions for creating objects, etc.
A: If there's nothing else, I'm not going to ramble for another 20 minutes; I feel like that's a poor use of everybody's time here.

A: So, thank you all for your time, thank you for showing up, and I hope you all have a happy Tuesday.