From YouTube: Kubernetes SIG Testing - 2021-03-23
A
Yep, okay, welcome to Tuesday, March 23rd, 2021's SIG Testing meeting. This meeting is under the CNCF code of conduct; the short version of that is: be excellent to each other. This meeting will be recorded and uploaded to YouTube. Or, it's being recorded.
A
Prow secrets automatically synced from Secret Manager. Chao, do you want to go talk about that? Yeah, sure.
B
Same thing, I cannot share. Can you grant me the rights? You can have it now.
B
Yes, cool. So, so this was a proposal based on top of a doc that Aaron wrote but didn't share. So the problem we want to solve here, to be specific, the problem I wanted to solve, is for Prow on-call.
B
There were two problems that we encountered while we were doing Prow on-call. One of them is, we have, there are several Prow components that need to access the secret, which is the kubeconfig.
B
It's a very powerful secret, in that if you have the kubeconfig secret, you can basically access and modify everything in that cluster.
B
So we kind of need to handle that very carefully, but the current process is "super secure", in the sense that the Prow client team transfers the kubeconfig secret, normally on Slack via plain text, and the Prow on-call would take that secret string and apply it onto the Prow cluster, so that Prow can schedule pods on that build cluster. So this is kind of troublesome.
B
First, it's not secure. The other is that it's toil for the on-call, and also there is a lot of turnaround time, because you need the client team and the on-call to both be online to do this kind of transaction.
B
Also, there are times when we need to rotate that secret, which is also pretty tough, because it's more like in-place, in-line editing of a YAML file and manually applying it back to the cluster, which is also error prone. And there were so many times when I was on call that I wished there were a way we could have a better backup of that secret, because when the secret in the cluster is deleted, it's gone forever. There's no way we can get it back.
B
So the solution proposed here is: we have a one-way Prow secret sync from, in this title I'm mentioning Google Secret Manager because that's how we wanted to do it in Google-internal Prow, but this proposal actually works with every secret manager. Not every, with most of the major secret managers, including GCP, Azure and AWS.
B
Once someone adds an ExternalSecret object in the cluster, the controller will automatically translate this ExternalSecret into a Kubernetes Secret, and the value of this Secret will be derived from the config here. So, for example, in this case it's GCP Secret Manager, and this is the GCP project, and this is the key of the GCP secret, and, what's the, oh, the key name is like this one here, so it's the data field name.
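As a sketch of what's being described on screen, an ExternalSecret object in the kubernetes-external-secrets CRD format might look like the following; every name here (project, secret, namespace, data field) is a made-up placeholder, not the actual values shown in the meeting:

```yaml
# Hypothetical example; all names are placeholders.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: build-cluster-kubeconfig
  namespace: default
spec:
  backendType: gcpSecretsManager     # which secret manager backs this secret
  projectId: client-team-project     # GCP project that owns the secret
  data:
    - key: build-cluster-kubeconfig  # name of the secret in GCP Secret Manager
      name: kubeconfig               # data field name in the resulting Kubernetes Secret
```

The controller watches these objects and materializes a plain Kubernetes Secret with a `kubeconfig` data key, so nothing downstream has to know where the value came from.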
B
In Google Secret Manager, in a GCP project they own. We had quite a few discussions with Aaron in this space. So, as I mentioned before, it doesn't actually have to be GCP; it could be any major secret manager provider. So as long as the client team has access to one of the secret managers, they can create secrets in a place they own, and what they need to do next is grant the, the Prow service account permission to access the secret.
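With GCP as the backend, that hand-off could look roughly like this with the gcloud CLI; the project, secret, and service account names are all hypothetical, not the real Prow values:

```shell
# Hypothetical names throughout; shown only to illustrate the flow.
# The client team creates the secret in a project they own...
gcloud secrets create build-cluster-kubeconfig \
    --project=client-team-project \
    --data-file=kubeconfig.yaml

# ...and grants the Prow service account read-only access to it.
gcloud secrets add-iam-policy-binding build-cluster-kubeconfig \
    --project=client-team-project \
    --member="serviceAccount:prow-secret-sync@prow-project.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"
```

The point of the design is that the kubeconfig never transits Slack: it only ever lives in the secret manager and in the cluster.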
B
Currently the polling interval can be set; by default it's every 10 seconds, so it's almost instant once this config, this object, is applied in the Prow cluster.
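For reference, in the kubernetes-external-secrets controller the polling interval is typically driven by an environment variable on the controller's Deployment; this is a sketch under that assumption (the image tag is a placeholder), and its 10000 ms default matches the "every 10 seconds" mentioned here:

```yaml
# Sketch of the relevant fragment of the controller Deployment spec.
containers:
  - name: kubernetes-external-secrets
    image: kubernetes-external-secrets:placeholder-tag  # hypothetical image reference
    env:
      - name: POLLER_INTERVAL_MILLISECONDS  # how often ExternalSecrets are re-synced
        value: "10000"                      # 10 seconds
```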
B
So, while thinking about the approach, one of the things we considered was a controller like an in-house tool that Aaron had had an intern working on.
B
So I think this is pretty much what I want to cover here, and the only other thing I want to mention is that the initial goal of this plan was actually supporting the Prow control plane, primarily for the kubeconfig secret, but the same pattern could actually benefit build clusters, for any on-call that manages a build cluster.
C
Oh, that was just me saying I, I shared the doc he mentioned, and I am happy to do less work and to hand this off to you. Thank you for picking this up.
C
I think it's your call, because part of, part of what was motivating me to work on this was not just secrets for Prow but secrets for Kubernetes clusters in general. I specifically have the aaa cluster in mind that k8s-infra is using to run things like Triage Party, Slack stuff and all that, so it would probably be nice to give Nikita a heads up as ContribEx tech lead, but yeah, I'm not sure if it's necessary for everybody or not.
B
Yeah, I'll reach out to Nikita. Also, another heads up is: we already started piloting this on Google-internal Prow, and it's, it hasn't been, it's not in action yet, because we need to wait until the next team wants to onboard Prow to do this exercise, but things are already set up. We just need to check, when a new team wants to onboard, how efficient this system is and how they feel, whether they really like the new process.
B
Yeah, for sure. Actually, I, I already drafted a pull request in kubernetes/test-infra, but since we needed to discuss this doc, I didn't create the pull request, and the internal version of the CRD and deployment was actually just copy-pasted from that draft PR. So if we are all good with this, I will just publish my PR. Yeah, go for it, that sounds great. Cool, cool. If there are no more questions, I'll stop presenting.
A
Okay, thanks, Chao. Anyone have any remaining comments or questions before we move to the next topic?
A
Okay, thanks again. So, Alvaro has a topic about a bot account for cherry picking.
B
Today, yeah, actually he PM'd me. I thought that he'd mentioned that on the public channel. Anyways, he cannot make it.
B
I think things are going pretty well. I believe I have completed all of the prerequisites asked for in the KEP. Those include capturing the pod crash-looping alerts, which I already added based on what OpenShift did, and the other thing was splitting the image bump between the Prow images and the testing images. That was also done. And the, the next thing was automatically posting on Slack, and I have cleaned up the Prow alerts channel well enough that I believe it is a good place for automatic posting. And, yeah.
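For illustration, a crash-loop alert of that general shape in Prometheus alerting-rule form; this is a sketch modeled on the widely used KubePodCrashLooping pattern, with a hypothetical namespace and thresholds, not the exact rule that was added:

```yaml
groups:
  - name: prow-monitoring-sketch
    rules:
      - alert: ProwPodCrashLooping
        # Fires when a container in the Prow namespace keeps restarting.
        expr: rate(kube_pod_container_status_restarts_total{namespace="prow"}[15m]) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          message: 'Pod {{ $labels.namespace }}/{{ $labels.pod }} is crash looping.'
```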
C
Awesome, thank you. Thank you for that, yeah. It's, it's a much, it's a much less noisy channel over in our alerts now.
B
Yeah, I'm really glad that I can see one alert each day that reminds me it's still alive, but it's not like tons of them each day.
C
I guess I won't keep us here too long, but just sort of an update: I know we talked a bunch last time about updating clonerefs, or the pod utilities, to support cloning the default remote branch, and we had a working theory that maybe, if I put HEAD as the base ref, that that would just magically work. It did not.
C
So when I get time, I will try to noodle on a proposed solution for that. If it gets too big, I'll come at you all with the proposal; otherwise, maybe look for a proof-of-concept PR or something in the next week or so.
C
The problem with that specific sentinel value is, for all I know, somebody might have a branch that's called "default". Actually, one idea I had was to suggest that, if you don't specify the base_ref field at all, then some, whichever parts of Prow are necessary, would interpret that as the default branch. But it's not clear to me whether that plays nicely with some of the assumptions that are made about that particular API right now.
C
If
it
supports
it,
that
would
definitely
be
my
preference,
but
I'll
have
to.
A
Because, as far as I can tell, the main blocker right now is that if you say "I want to check out this ref", we treat that ref as a branch, and so we not only fetch it with that name, but we also try to create a local branch with that name, which makes sense, but that's not what you'd want here. But we also don't want to be in a detached HEAD state, so we could also just treat HEAD as the sentinel value, map it to what the default branch is, and check it out as the actual default branch name.
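To make the distinction concrete, here is a small runnable sketch (not clonerefs itself) of the behavior being described: resolve the remote's default branch instead of hard-coding one, and check it out as a real local branch rather than a detached HEAD. The repository layout is fabricated for the demo.

```shell
set -eu
tmp=$(mktemp -d)

# Simulate a remote whose default branch happens to be "main".
git init -q "$tmp/origin"
git -C "$tmp/origin" symbolic-ref HEAD refs/heads/main
git -C "$tmp/origin" -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m init

# Fresh clone, roughly where a clone step starts.
git clone -q "$tmp/origin" "$tmp/work"

# Ask the remote what its default branch is (its HEAD symref)...
default=$(git -C "$tmp/work" ls-remote --symref origin HEAD |
          awk '/^ref:/ {sub("refs/heads/", "", $2); print $2}')
echo "$default"

# ...and check it out as a real local branch, not a detached HEAD.
git -C "$tmp/work" checkout -qB "$default" "origin/$default"
git -C "$tmp/work" symbolic-ref --short HEAD
```

The same `ls-remote --symref origin HEAD` trick works against real remotes, which is what makes a HEAD sentinel mappable to a concrete branch name at checkout time.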
C
Yeah, so, so whenever I come up with a suggestion, I'll try and sort of articulate what I think the implications are from a token-usage perspective, and sort of, you know, at trigger time versus run time, when refs are being resolved.
C
If anybody else wants to work on this, that's totally cool too. I want to make sure I'm not doing the thing where I lick the cookie. I think it was really awesome that I started noodling on the thing for secrets, and Chao was like, "oh, I was thinking about the same thing", there you go. So it's a thing I, I really want to see happen, and I'm dangerous enough to maybe know how to make it.
C
It's just a question of my time and family. So, if anybody's interested, I'm totally happy to hand it off; otherwise, I will report back when I've gotten around to it.
A
Well, thank you all for coming. I actually think it might be a good idea to get some time back. With test freeze going on, I'm not sure about the rest of you, but I am spending a very large amount of time doing reviews right now, to try to get all of the test things in before test freeze. It's about as bad as code freeze.