From YouTube: Kubernetes SIG Testing - 2020-07-14
A
We have a relatively light agenda today. We have Shane here to talk to us about a secret syncing and rotation design proposal he's been working on, to try and help with some of the service account secrets we have to stuff into Prow. And since I was at the Testing Commons meeting on Friday, I thought I'd just give a brief update on stuff that happened there. If anybody would like to chat about anything else at the end, please feel free to add it to the agenda.
B
Okay, hi everyone, I'm Shane, and thank you all for the time. So this is a design proposal for the synchronization and rotation of Kubernetes secrets. It consists of two mostly independent parts: the first part is the synchronization of secrets from Google Cloud Secret Manager to Kubernetes, and the second part is the rotation of Secret Manager secrets.
B
Okay, so let's jump into the design of the first part, the synchronization part. We propose to implement the synchronization with a synchronization controller. This is a flowchart of how it is launched: with a given configuration file or ConfigMap from the user, we collect a set of synchronization pairs, each consisting of a source and a destination secret, and with all of these we go through configuration validation.
B
Okay, so this is an illustration of the synchronization control loop. The controller has a collection of sync pairs and loops through all of them. For each pair, it looks at the secret value of the source secret and the destination secret, and if there's a difference, that is, if the two values are different, then it just updates the destination secret to whatever the value is in the source secret. What we actually interact with is a Kubernetes service account, which has workload identity bound to a Google Cloud service account.
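The loop just described can be sketched in a few lines. This is a minimal illustration only, in Python for brevity (a real controller would be Go and call the Secret Manager and Kubernetes APIs via workload identity); the reader/writer callables here are hypothetical stand-ins for those API calls.

```python
# Sketch of the synchronization control loop: for every (source, destination)
# pair, copy the source value over whenever the two secret values differ.
def sync_once(pairs, read_source, read_dest, write_dest):
    """Run one pass over all sync pairs; return the destinations updated."""
    updated = []
    for source, dest in pairs:
        src_value = read_source(source)
        if read_dest(dest) != src_value:
            write_dest(dest, src_value)  # overwrite destination with source
            updated.append(dest)
    return updated

# Tiny in-memory stand-ins for the two secret stores:
secret_manager = {"sm/token": b"v2"}
kube_secrets = {"ns/token": b"v1"}

changed = sync_once(
    [("sm/token", "ns/token")],
    read_source=secret_manager.get,
    read_dest=kube_secrets.get,
    write_dest=kube_secrets.__setitem__,
)
# After one pass, the destination matches the source.
```

A real controller would run this pass periodically (or on watch events) rather than once.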
B
Okay, so let's give an example. An example configuration file looks like this: the user specifies two synchronization pairs, and we set the source to always be a Secret Manager secret and the destination to always be a Kubernetes secret, so we're always syncing from Secret Manager secrets to Kubernetes secrets.
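The file itself was shown on a slide and isn't legible in the recording; as a rough stand-in, a config of this shape (the field names are guesses for illustration, not the real schema) would express two such pairs:

```yaml
# Hypothetical schema: two synchronization pairs, each from a
# Secret Manager secret to a Kubernetes secret.
specs:
  - source:
      project: my-gcp-project        # placeholder project
      secret: prow-github-token
    destination:
      namespace: test-pods
      secret: github-token
  - source:
      project: my-gcp-project
      secret: prow-ssh-key
    destination:
      namespace: test-pods
      secret: ssh-key
```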
B
So we will launch a periodic job to monitor this, and whenever the previously created token or key is older than 22 hours, we generate a new one; and every old key or old secret that is older than eight hours, we just delete. Okay, so let's see an example timeline here. Suppose that with the rotator we created one, two, three, four secrets; they can be service account tokens, IAM service account keys, or SSH keys.
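The periodic check can be sketched as follows. This is an illustration only; the thresholds are parameters, and the ones used below (22 hours to rotate, a placeholder 48-hour lifetime window for deletion) are illustrative rather than taken from the actual design doc.

```python
# Sketch of one pass of the rotation job: create a new secret version if the
# newest one is too old, and collect versions past the lifetime window.
from datetime import datetime, timedelta

def rotate_once(created_at, now, rotate_after, delete_after):
    """created_at: creation times of existing versions.
    Returns (create_new, to_delete)."""
    create_new = not created_at or now - max(created_at) > rotate_after
    to_delete = [t for t in created_at if now - t > delete_after]
    return create_new, to_delete

now = datetime(2020, 7, 14, 12, 0)
ages = [now - timedelta(hours=h) for h in (1, 30, 60)]
create_new, to_delete = rotate_once(
    ages, now,
    rotate_after=timedelta(hours=22),
    delete_after=timedelta(hours=48),  # placeholder lifetime window
)
# Newest version is 1h old, so no new one is created; only the 60h-old
# version falls outside the window and is marked for deletion.
```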
B
So when it is running at this time point, because all these three secrets are outside of the lifetime window, the rotator should try to deactivate them. By deactivating them, it means it should disable or destroy them in Secret Manager and also make them invalid; so, for example, if it's a Google Cloud service account key, it should make that key invalid as well. Okay.
B
So this is an example of how we might actually implement the rotation for service account keys. We propose to use the metadata of the Secret Manager secret. So if it is registered as a rotated secret with a type of service account key, then the labels of the related secret will be attached, and also which project it is in and which service account it is for will also be attached.
B
So this is like the starting information for this rotated secret, and with these three fields the rotator can create a new service account key. Whenever it does so, it attaches a key-value pair to the metadata; so suppose version one of the secret is associated with this service account key ID. Whenever, in the future, it tries to deactivate or delete this service account key, it can just use the project ID, the service account name, and the corresponding service account key ID.
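The metadata shape being described might look like the following. The field names and values here are invented for illustration; the point is the version-to-key-ID mapping, which is what lets the rotator later deactivate exactly the keys it created.

```python
# Hypothetical shape of the rotated secret's metadata: static labels identify
# what to rotate, and each secret version maps to the service account key ID
# it holds, so the rotator only ever touches keys it created itself.
metadata = {
    "labels": {
        "type": "service-account-key",
        "project": "my-gcp-project",        # placeholder
        "service-account": "prow-deployer", # placeholder
    },
    "versions": {},  # version -> key ID, filled in as keys are created
}

def record_new_key(meta, version, key_id):
    meta["versions"][version] = key_id

def key_to_deactivate(meta, version):
    # Only keys recorded here were created by the rotator; anything else
    # belongs to someone else and is left alone.
    return meta["versions"].get(version)

record_new_key(metadata, "1", "a1b2c3")
```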
B
So the benefit is that it only tries to delete the service account keys that it actually created before, so it doesn't touch anything that doesn't belong to it. Okay, yeah, so this is briefly the design for secret synchronization and rotation. The rest is more detailed, so I think I'll just skip that. So, yeah, are there any questions? I'd like to take any input.
C
B
If you now have a Secret Manager secret containing a service account key... so now, if you have an activated service account key with this ID, all you have to do to launch this is to create a Secret Manager secret, attach these labels, and make this the first version. So you attach this key ID pair here to the metadata, and...
D
B
A
For, like, the Prow build clusters that I have set up in k8s-infra right now: they all have service account secrets loaded in them, and I would like to make sure that I can rotate them out if they're compromised, so I'm looking forward to you deploying this. But yeah, bootstrapping Google Cloud secrets is kind of annoying.
A
That sounds awesome, yeah. I was in a k8s-infra meeting when Secret Manager went GA, and somebody was like, "Wow, this sounds cool, should we use this?" I was like, "Yes." I would be shocked if Google doesn't already have an integration between Secret Manager and Kubernetes... and I was shocked. I'm pretty excited to see this happen.
B
E
A
So we've talked a little bit, a couple of times at this meeting, about the idea of refactoring the e2e framework: making it more reusable, or better, or moving it to staging, or whatever. One of my chief concerns has been that, while there's been lots of effort over time on the framework, it's kind of unclear to me what has been done, what the rules are, and what the tribal knowledge is. I kind of wanted to see a plan of where we are headed, and, like:
A
What are the conventions we're trying to adhere to? So George helpfully wrote up this draft of a document that sort of lays out what he sees in the core framework, where the core framework here would be everything that's in the test/e2e/framework directory of kubernetes/kubernetes. I don't want to dive straight into this without much context.
A
The framework used to just be kind of a single flat package, and there was a lot of effort to move functions related to specific Kubernetes resources into subpackages of the framework. So, like, all the actions related to waiting for pods to go running or ready, or scheduling a pod, or whatever: all the stuff related to pods would go into the pod subpackage.
A
There were also handy things that didn't used to exist in Go when this project started, such as the ability to output JUnit XML, which is something that Ginkgo provides a reporter to do, and the ability to execute tests in parallel. If people are curious why we can't just run an e2e binary, and have to run ginkgo to run our e2e binary: it's to enable that execution of multiple e2e tests.
A
In parallel, Ginkgo kind of does the sharding of which tests execute on which node. So we're trying to scope down the core of the framework to describe setting things up, interacting with Ginkgo, parsing flags, and making them available to everything, and to see if we can, for every other package, try to move it to a subpackage of the e2e framework or move it out entirely.
A
Out of Kubernetes. Right now people have to import the entire Kubernetes tree to reuse this framework, so one of the goals is to see if this could be moved into the staging repo, so that people can import only the framework repo out of staging instead of the entirety of Kubernetes. In order to do that, we're going to have to identify those parts of the e2e framework that rely on code within Kubernetes.
A
The one that I know of offhand is that there are functions inside of the test/utils package to handle image manifests that need to be untangled. I'll stop sharing this document. So the idea is: we would appreciate anybody's help or feedback on what you think the framework is, or what you want it to be, in terms of whether we're doing work to move it in the right direction.
A
Secondly, I have been talking with the maintainer of Ginkgo to see if there are things that we could add into Ginkgo, or take away from Ginkgo, that would measurably improve our lives. So if there are pain points that we on the project have experienced over our years of usage of Ginkgo, is there any feedback we could provide that would help create a roadmap for sort of an incremental evolution from Ginkgo 1.0 to 2.0?
A
My two biggest pain points off the top of my head: first, I had this dream of the framework wrapping Ginkgo entirely, with nobody actually realizing that Ginkgo is what we used, that it was just an implementation detail. It turns out doing so loses line number information when a Ginkgo test fails. Right now Ginkgo helpfully tells you which test failed, and it tells you the line at which that test was defined, but the only reason it can do
A
that is because you called Ginkgo's version of the function at that line number. If we start wrapping Ginkgo in functions, it starts spitting out the line numbers of the framework instead, which is not helpful for people to debug which test failed and why. So it would be cool to be able to override the line number stuff a little bit, so we could maybe wrap Ginkgo calls and construct things a little better.
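This wrapper problem is generic to any test framework. A minimal Python illustration (not Ginkgo itself): a naive wrapper reports its own source line on failure, but walking one stack frame up recovers the caller's location, which is roughly the kind of offset hook a wrapping framework would need.

```python
# Illustrates the line-number problem with wrapping assertion helpers.
import inspect

def naive_fail_location():
    # A naive wrapper reports the line inside the wrapper itself,
    # which is useless for finding the failing test.
    return inspect.currentframe().f_lineno

def caller_fail_location():
    # Skipping one frame makes the report point at the caller instead.
    return inspect.stack()[1].lineno

def run_demo():
    # The location reported should be the line of the call below.
    here = inspect.currentframe().f_lineno + 1
    reported = caller_fail_location()
    return reported == here
```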
A
The second, and at least for me more frustrating, thing is that Ginkgo only provides regular expressions against the test names as the mechanism to select or exclude tests from a given run. This means that we live in a world where a test run of "200-something-odd tests passed, 5,000-something-odd tests skipped" is totally normal. It would be so cool if we could have 200 tests passed, with that being the expected number: if I could say I selected 200 tests to run, and all of them should pass.
A
That's great. The fact that all the other tests that I didn't select are now marked as skipped is kind of annoying. This is to say nothing of the fact that our test names are illegible, because we put all these stringly-typed tags all over them. I was curious if anybody else had any other pain points or suggestions there.
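The selection mechanism being complained about can be sketched as follows. This is an illustration of the behavior, not Ginkgo's code; the test names are made up, in the bracketed-tag style the meeting describes.

```python
# Sketch of regex-based selection as described: everything the focus
# pattern does not match is still reported, as "skipped", so a small
# focused run drags the whole suite's count along with it.
import re

SUITE = [
    "[sig-storage] CSI volumes should mount [Slow]",
    "[sig-network] DNS should resolve cluster names [Conformance]",
    "[sig-node] Pods should start [Conformance] [NodeConformance]",
]

def run(suite, focus):
    selected = [t for t in suite if re.search(focus, t)]
    skipped = [t for t in suite if t not in selected]
    return selected, skipped

selected, skipped = run(SUITE, r"\[Conformance\]")
# Two tests are selected; the third still shows up, marked skipped.
```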
D
Big ones. I remember when I first met you in Barcelona, the reason I came over to your table was: okay, I don't know Kubernetes; I need to sit down with the test people, because if I just understand the tests, that will be my gateway into understanding Kubernetes. So I flagged, like, a couple of pain points, and the test naming is the main pain point.
D
So as I look at TestGrid and look down the list of tests on the left-hand side, I'm kind of gnashing my teeth at the wall of test names: essentially a flattened hierarchy, with each test name made unique by using all the tags associated with the test. If you're looking, yeah, I measured the test names in terms of number of tweets; they're like two and a half tweets long.
D
The hope for the end-to-end tests is that the community, the Kubernetes operator or user, will be able to look at an end-to-end test name and go: I now understand a nuanced piece of behavior about Kubernetes. And what I want to be able to do is to put the intent into the suite, and structure it in such a way that they can navigate the intent of the suite to go: right, I'm having a problem with this part; there are two behaviors I need to understand.
D
You need to be able to drill down to a specific piece of functionality in order to go: right, here's more of what happens here. Then you'd have a piece of living documentation that is CI-run and, you know, delivers useful information to people. And then for new contributors, that becomes a manual, in terms of saying: okay, let's take a vertical slice of Kubernetes that you want to contribute to; here are the tests that are present, etcetera, etc.
B
D
It's not hard, and I just get the sense that someone implements a feature and they're not taking the half hour, hour, two hours to go and think it through, and because there's some end-to-end framework there, between the core framework and the wrapper framework, I don't think we have enough direction on the organization of the test code, right.
D
You want this to tell a story, and here are the best guidelines to tell that story using what we've written and what the framework delivers. So I'd be happy to get stuck into this document and, you know, add to it some vision for what this could end up doing for the community, for the end-user community and the contributor community. And, yeah, I kind of see that we need to get out of the way of whether it's Ginkgo or something like Ginkgo underneath. Take this from me.
A
D
A
One of my motivating factors here, and I won't walk us through the doc, hence the vagueness, is that for the conformance program we are looking at the ability to specify the concept of profiles, which I view as all boiling down to
A
Selecting a different suite of tests to run. So conformance should be the base-level functionality that is available everywhere, on every cluster, no matter what, by default; and then profiles are the mechanism by which we could say: but also, if you happen to support storage, you can run all the tests in the storage profile; and also, if you happen to have machine-learning-specific stuff, you can run all the tests in the machine learning profile. Or, we've had requests from OpenShift that are like, hey:
A
Could we take what we currently call the base and actually slim it down a little bit, so that the base set of tests doesn't require cluster-admin privileges, or can't run privileged pods, whatever, so that we can then have an additional profile that's like: the tests that require cluster-admin. The motivation for all of that is that vendors want to be able to certify:
A
Oh, you passed this profile's tests: you passed this profile, and this profile, and this profile. Or, if I'm a developer and want to really make sure that my offering passes a given profile, I could focus on just the conformance-dot-star profile, storage, or whatever. But I have concerns that this in fact throws too much noise at folks such as yourself, Rob, who are trying to read the test names as a human. So I was playing around with the idea of what I would ideally want Ginkgo to support to do that.
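The profiles idea sketched above might look something like this on disk. The field names, regexes, and profile names here are invented for illustration; nothing of this shape was shown in the meeting.

```yaml
# Hypothetical sketch of conformance profiles: a base profile that needs
# no elevated privileges, plus optional add-on profiles per capability.
profiles:
  - name: base
    requires: []                 # no cluster-admin, no privileged pods
    focus: "\\[Conformance\\]"
  - name: storage
    requires: ["csi"]
    focus: "\\[Conformance\\].*\\[sig-storage\\]"
```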
A
So that's why we have the world we live in today, where the annotations are the tags that we stuff inside of the test name, and then, using regular expressions, we can kind of filter on those tags. But I'm wondering whether all of those tags necessarily need to be displayed in the test name that end users see. What if we wrote, what if I wrote, a custom JUnit reporter that stripped away
A
All that tag information, so that all you saw was a human-readable test name? So the tag information could still be used to select tests and all that, but it's not what people would ultimately see in their UI. It might make things a lot less noisy, but I have a feeling that a lot of humans on this project have trained themselves to visually identify all of the metadata in the test name, and I'm not sure where else it would be possible to expose that metadata in as easy a fashion.
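The renaming part of that reporter idea is simple to sketch. This is only the string transformation; a real version would implement Ginkgo's reporter interface in Go, which is not shown here.

```python
# Sketch of the custom-reporter idea: strip bracketed [Tag] annotations
# out of a test name before it reaches the JUnit XML, leaving only the
# human-readable part.
import re

def strip_tags(test_name):
    # Remove [Tag]-style annotations, then collapse leftover whitespace.
    bare = re.sub(r"\[[^\]]*\]", "", test_name)
    return re.sub(r"\s+", " ", bare).strip()

name = "[sig-node] Pods should be restarted [Conformance] [NodeConformance]"
# strip_tags(name) keeps only "Pods should be restarted".
```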
D
On screen, and all in one single line? Okay, the other thing I would say about what you just said there is: I think it would be a good idea to express that as a requirement, because, as you talk through it, I start to think of ways that we could do that annotation and tagging, and there could be ways of doing that. But if we get the requirements down, we might find some interesting implementations.
A
D
A
Yeah, it kind of all comes down, for me, to: to do anything other than regular expressions, we have to completely hijack Ginkgo's execution mechanism. So I've toyed around with ideas like: you could bake flags into the e2e test framework that support alternative selection mechanisms based on metadata that we have, but it would still require hijacking Ginkgo, because we're still tied to it. From Ginkgo's perspective, it would have to run all of the tests, and then we would take full responsibility for focusing or skipping all tests.
F
I was going to say, that's a thing that, for example, the storage tests already more or less do: they just have a whole list of all of the things they can do, and then, depending on which driver is under test, they skip a bunch of them. So we actually already have a large number of tests that are always skipped.
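The storage-test pattern just described can be sketched like this. The capability and test names are invented for illustration; the point is that the suite declares everything it could run and each driver opts out of what it cannot support, so "always skipped" tests are expected.

```python
# Sketch of capability-based skipping: declare all possible tests, then
# partition them into run/skip based on what the driver under test supports.
ALL_TESTS = {
    "snapshot": "volume snapshots work",
    "resize": "volumes can be expanded",
    "multinode": "volume moves between nodes",
}

def plan(driver_capabilities):
    """Return (run, skip) lists of test names for one driver."""
    run, skip = [], []
    for capability, name in ALL_TESTS.items():
        (run if capability in driver_capabilities else skip).append(name)
    return sorted(run), sorted(skip)

run, skip = plan({"resize"})
# A resize-only driver runs one test and skips the other two.
```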
A
We have modified what comes out of the expect call, so if an expect call fails, we print out a stack trace that appeals to us and doesn't have as much Ginkgo stuff in it; but it's not being picked up by Ginkgo's JUnit reporter, as far as I can tell. The way I think of it is: these stack traces aren't popping up in the triage dashboard, where I would expect to see something a little more meaningful. That seems like something fixable.
A
Yeah, the other thing I had off the top of my head, Rob, and this is me not knowing TestGrid super well: I'm not sure how many layers of hierarchy TestGrid supports right now. The model of thinking I have is JUnit XML, and, depending on which schemas you look at out there, JUnit has the concept of infinitely many nested suites within a single suite; and right now Ginkgo's stock JUnit reporter assumes that a single run of Ginkgo is the entire suite.
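For reference, the nesting being described looks like this. This fragment is illustrative only (some JUnit schemas permit nested testsuite elements; it is not Ginkgo's actual output):

```xml
<!-- Illustrative JUnit XML: nested <testsuite> elements, versus the
     single flat suite a stock reporter emits per run. -->
<testsuites>
  <testsuite name="e2e">
    <testsuite name="sig-node">
      <testcase name="Pods should start" time="1.2"/>
    </testsuite>
  </testsuite>
</testsuites>
```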
F
A
F
The problem with these, I think, is that even with that mechanism it won't work, because these need to opt in to not running. These have, like, anti-affinity: the tests have an affinity for being the only thing running, as opposed to, like, sub-tests being parallel-safe. These tests are like: even without knowing what other things are going on, I don't think I should be running, because I do things that are potentially unsafe for some other tests.
A
G
F
A
Okay, I feel like if I were to ask Ginkgo's maintainer, they would probably suggest different ways we could architect things, organize things, to support that use case. Like, I feel like our use case, where we actually compile everything down into a single binary and then look at what you do with that,
A
Is actually unusual. I think it's normally used to being able to scan a bunch of arbitrary packages, and then you tell it: just run this package. So it feels like the way the actual e2e.go and e2e_test.go files bootstrap the framework and Ginkgo and all that stuff, maybe we could work there. But Ben's point, like, your request, is very reasonable, and what I often think of when people say that is: can't I just "go test" my changes in this one file?
A
There are frameworks other than Ginkgo that allow you to be more explicit. It could be that we end up writing a lot more boilerplate, but we can write our tests to be more explicit about needing to use that boilerplate, and then we're not locked into all of the magic that happens in a single sort of bootstrapping of the tests.
F
Most IDEs or editors that have Go integration will let you do things like run your tests and have them coverage-instrumented and show lines, or let you click and run a particular test, and I find that really helpful when I'm developing anything that isn't Kubernetes. But when I go to Kubernetes, it's like I completely give up on having an actual iterative loop. Even if this did work, it just takes too long.
D
A
H
Hey, this is Gabrielson; trying to turn my video on here.
H
I work on AWS. I'm super new to Kubernetes, so I don't have a ton of interesting things to say yet. I'm hoping that if I can actually understand things better, maybe I could help with some of the e2e cleanup that you guys have in the doc, like maybe at least cleaning up the AWS provider so it doesn't refer back to the Kubernetes directories. It might be, you know, a minor but useful thing, just to get rid of one dependency. Anyway, this is an idea.