From YouTube: Kubernetes SIG Testing - 2021-05-04
A: Hi everybody, today is Tuesday, May the 4th. This is the Kubernetes SIG Testing bi-weekly meeting. I am today's host, Aaron Crickenberger, also known as Aaron of SIG Beard, also known as spiffxp at all the places. During this meeting, which will be publicly posted to YouTube later, we're going to adhere to the Kubernetes code of conduct, which basically means we're going to be our very best selves to each other.
A: If you have a problem with the conduct of this meeting, please reach out to conduct@kubernetes.io, or you are also welcome to reach out to me privately on Slack or at spiffxp@gmail.com. Okay. So, having said all that, I will go ahead and post the agenda again in chat for those who have joined a little late. And today I will be handing us off to... I don't know, you're cool.
A: That is your name and your handle. I'll be handing this off to Arsh, Arnaud, and Vladimir to talk about a couple of different things. So Arsh, take it away and talk to us about depstat.
B: So the initial idea was that whenever folks make a PR to the k/k repository, they sometimes accidentally end up bringing in dependencies which are not in the code base, thereby complicating things, and one of the maintainers has to, you know, tell them that they are bringing in stuff which is not at all necessary and can be trimmed. So one of the maintainers has to ping them: please get rid of this.
B: So the idea was that, you know, we could automate this process. For that we created depstat, which basically runs on any Go-modules-enabled project and gives you certain statistics based on that, like the total number of dependencies in the project, the number of transitive dependencies, or the length of the longest dependency chain, that sort of thing. So when depstat was complete enough that it could be used, the way we planned to integrate it with the k/k repository was having update and verify scripts, and there would be a JSON file with the initial stats.
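The update/verify flow described here could be sketched roughly as follows. Note this is only an illustration of the pattern: the struct fields and JSON schema are stand-ins, not depstat's actual output format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// DepStats mirrors the kind of summary a dependency-stats tool
// produces. The field names are illustrative, not depstat's schema.
type DepStats struct {
	TotalDependencies      int `json:"totalDependencies"`
	TransitiveDependencies int `json:"transitiveDependencies"`
	MaxDepthOfDependencies int `json:"maxDepthOfDependencies"`
}

// Verify compares a checked-in baseline against freshly computed
// stats and reports which numbers grew, the way a verify script
// would before failing a PR.
func Verify(baseline, current DepStats) []string {
	var problems []string
	if current.TotalDependencies > baseline.TotalDependencies {
		problems = append(problems, fmt.Sprintf(
			"total dependencies grew from %d to %d",
			baseline.TotalDependencies, current.TotalDependencies))
	}
	if current.MaxDepthOfDependencies > baseline.MaxDepthOfDependencies {
		problems = append(problems, fmt.Sprintf(
			"max dependency depth grew from %d to %d",
			baseline.MaxDepthOfDependencies, current.MaxDepthOfDependencies))
	}
	return problems
}

func main() {
	// The baseline would normally be read from the checked-in JSON file.
	baselineJSON := []byte(`{"totalDependencies":200,"transitiveDependencies":150,"maxDepthOfDependencies":20}`)
	var baseline DepStats
	if err := json.Unmarshal(baselineJSON, &baseline); err != nil {
		panic(err)
	}
	// The current stats would be recomputed on the PR's merge commit.
	current := DepStats{TotalDependencies: 205, TransitiveDependencies: 155, MaxDepthOfDependencies: 20}
	for _, p := range Verify(baseline, current) {
		fmt.Println("FAIL:", p)
	}
}
```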
B: So whenever a PR is made, we would run the verify script, which would compare the JSON file, which has the original stats, against the results that come when you take the changes made by the PR into consideration, because that is being run in the CI pipeline, and compare those. So that was the idea, and I'll drop a link for everyone so that you can see the PR.
B: This was the PR, and then Ben dropped some comments regarding it: that it would not be a good idea to check in the JSON file, because the JSON file has just like three lines, and if you check that in, there are going to be a bunch of merge conflicts when folks are simultaneously trying to work on this, because it's a small file. So I was hoping to, you know, come to a consensus here about how we can go about integrating this with the k/k repository. Ben left.
A: Sure, I can riff based on what little knowledge I have, but if other people have better ideas, please jump in. So I was looking over the PR. I think Ben was suggesting one way we can do this is to have the dependencies file stored someplace else, like store it in a GCS bucket or something at a well-known path, and then have a postsubmit job update that file every time
A: something merges to kubernetes. And so PRs would then need to re-run if that file changes, to catch sort of the latest state of what is in... what is it, I guess. So we do this today, I think, for some code coverage jobs that are not widely used, but it's a pretty well-established pattern, I feel like. So that's a pro: it's a pretty easy pattern to copy-paste. A con might be that it's a little...
A: It could be a little racy. It's not like, once the postsubmit job updates that file with the new known state, there's anything that's going to magically re-trigger all of the pull request jobs.
A: So if they passed against the old dependency file but they're going to fail against the new dependency file, for whatever reason, people won't find that out until, worst case, when Tide tries to re-run the dependency-checking job just prior to a pull request merging. The fact that HEAD moves so quickly is why we have Tide re-run test jobs just prior to merge.
B: Can I interrupt you here? I had just two questions about this approach. One was: I'm not familiar with the bucket concept, but I'll read more about that, and what you're saying sounds good. The other question was: can the verify script reach out to this location where this file would be in the bucket? Because we'll need to do that in order to compare it.
A: Yes. So the idea is there's a command-line tool called gsutil that can be used to copy and cat files in GCS, and the idea is the CI job would run with sufficient privileges to be able to write to a GCS bucket, but the bucket would be set up so it's world-readable, so that anybody could take a look at the contents of that file if they're, like, developing locally or whatever. Yes, thank you, Arnaud.
A: So it's a pretty well-known command-line utility, and that's how I would do that. I don't know, you might think about setting up a GCS bucket just for this purpose. In previous times I think we have reused the kubernetes-jenkins bucket that stores all of the logs and job artifacts and things like that.
A: So that's one option. Another option is we could pay the cost of recomputing the dependencies of the known state every time. So when you make a pull request, the prow job has a specification
A: that's like: what's the branch that I'm basing my pull request off of, and then what's the branch that I'm trying to merge into. And so, given that information, you should be able to have a script that could say: well, I know what branch I'm basing off of, so let me go compute the dependencies on that branch, and then let me compute the dependencies based on the merge commit, and then diff
A: the two results. That, to me, sounds like the more ideal approach, because it is less prone to race conflicts, but I could see that it might be a little bit more work to implement, just because I'm not sure off the top of my head exactly how you would get at that information, where you would find the refs. You'd have to kind of, you know... prow jobs with the decorate: true flag sort of just magically give you the commit you would expect for a presubmit, postsubmit, or periodic job; you're just dumped in the working directory of the repo.
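The second option, computing both sides on the fly and diffing them, could be sketched like this. This assumes some prior step has already produced the two module lists (for example from the base branch and the merge commit); the function name and shape are illustrative, not any existing tool's API.

```go
package main

import (
	"fmt"
	"sort"
)

// DiffModules compares the module lists computed on the PR's base
// branch and on its merge commit, and reports what the PR adds or
// removes. Computing both sides fresh avoids racing against a
// stored baseline that a postsubmit updates asynchronously.
func DiffModules(base, merged []string) (added, removed []string) {
	inBase := make(map[string]bool, len(base))
	for _, m := range base {
		inBase[m] = true
	}
	inMerged := make(map[string]bool, len(merged))
	for _, m := range merged {
		inMerged[m] = true
	}
	for m := range inMerged {
		if !inBase[m] {
			added = append(added, m)
		}
	}
	for m := range inBase {
		if !inMerged[m] {
			removed = append(removed, m)
		}
	}
	// Sort for deterministic, reviewable output.
	sort.Strings(added)
	sort.Strings(removed)
	return added, removed
}

func main() {
	base := []string{"golang.org/x/net", "golang.org/x/sys"}
	merged := []string{"golang.org/x/net", "golang.org/x/sys", "github.com/pkg/errors"}
	added, removed := DiffModules(base, merged)
	fmt.Println("added:", added, "removed:", removed)
}
```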
B: So I think, since we are aiming for this to be used in the long term, we should study the second one, because the first one, like you said, would have similar cons to what we are facing right now. So, if that's the way to go, yeah. So do you have some examples where this prow job is already being used, so I could look at it and learn from it?
A: I can send you a link to the pod utilities documentation, and I can send you a link to sort of the coverage job, I think, to show you an example of the first approach I suggested. The first approach... I feel like if we use sort of a well-known path that's based off of a given job name or something, you could maybe, through convention over configuration, kind of make it magically extensible to other repos, not just k/k, because I'm assuming k/k, sorry, kubernetes/kubernetes, is our target.
A: That's the thing that's driving this. Thank you, Arnaud, he's got the links to pod utilities right there. But I would imagine we'd want this available to the projects within kubernetes, not just the main repo. Does that sound right?
A
I
I'm,
I
would
like
to
see
the
second
approach.
Personally,
I'm
just
trying
to
give
you
the
pros
and
cons
of
each
approach.
That's
all
yeah.
A
Okay,
I'm
continuing
to
look
for
the
relevant
job,
but
I
think
I
can
see.
A: Yeah, I think, Ben, maybe you came in late, but I was sort of trying to describe both the postsubmit-uploading-to-GCS approach and then another approach where we compute, on the fly, the thing we're going to diff against. If you have any opinions on either of those.
E: I guess the main upside to pre-computing and storing it somewhere is that you can casually peruse the results without computing them again, right? Otherwise, I guess the thing I'm missing is, from the stats I saw so far:
E: is the purpose of this to track stats over time, or is the purpose of this to enforce something in presubmit? Because most of the stats I saw so far didn't look like things that we would enforce in presubmit, like the maximum dependency depth or something.
B: So the primary focus is not to track the stats over time. I mean, we are able to do that, but that's not what the primary goal is. I think it is, like I also mentioned earlier, to let the folks who are making the PR know that you are bringing in some dependencies, or too many dependencies, with the changes you make. And so this is what we discussed in the SIG Architecture meeting, and the max depth of dependencies is something that Liggitt mentioned would be interesting to observe.
A: Yeah, thank you for your time, thanks for bringing that to the group.
A: Okay. Next up, let's hear from Arnaud about some of the k8s-infra work that he is interested in asking the group about.
C: Okay, hi. So, as you know, I work with Aaron on, basically, migrating the community resources to the new k8s-infra projects, and Gubernator is one of those workloads we need to migrate. People reached out last week about this, because they are interested in working on that, but I feel like there are some questions we need to answer before they start to work on it.
C: So I put up, basically, the umbrella issue related to migrating everything in the Google project for Kubernetes. Aaron made some amazing issues about this, but there are still some questions about how we want to migrate this project. Like, for example, Gubernator runs right now on Google App Engine. Do we want to keep doing that, or do we want to run it on a GKE cluster?
A: So, as far as Gubernator itself goes, that's the one I want to sort of keep punting to the very, very last. It is not... excuse me, it's not clear to me whether Gubernator, the App Engine app, has any purpose anymore. There's one thing that I know I, and possibly some other contributors, use it for, which is the PR dashboard, which these days basically lets me see how horribly unresponsive
A: I am at all of the PRs that come my way. So I'll just post a link, so you can all see: that's what Gubernator thinks my incoming queue looks like. I've got 61 PRs that need my attention, 72 that are incoming, and I've got four outgoing PRs.
A: I personally have been trying to use GitHub's native notification system these days instead of relying on Google, and I know there are other folks who have tried using Triage Party; with certain carefully tuned rules, you can set up kind of a personal triage dashboard for yourself based on the labels and how, excuse me, how recently things were updated. Everything else within the Gubernator project I would like to see migrated, and I think it's mostly a question of the order in which we do
A: the moving of data. So, for example, I think I would probably start with the BigQuery dataset. Currently it's called k8s-gubernator:build. I think this is a dataset that is populated by an app called Kettle. It basically scrapes all of GCS, and also subscribes to a Pub/Sub topic for updates to a GCS bucket, in this case kubernetes-jenkins, and then reads everything, transforms all of the metadata that it reads, yeah, and then stores it into a dataset that is queryable by BigQuery.
A: This is, then, what drives some automated metrics jobs. So the go.k8s.io/triage dashboard is driven entirely off of query results from this dataset.
A: We also have some jobs that compute flakiness, like the top N flakiest tests for the top N flakiest jobs, on a weekly basis, which has been really useful during burndown to identify the worst flaking tests that we should fix. It helps us kind of identify which tests are, like, release-blockers to fix, and which tests have kind of always been this way, sort of deal. And then it's also used to compute...
A: So it's just kind of unclear to me whether it makes more sense to move the dataset first, or if it makes more sense to move all of the things that query the dataset first and then move the dataset over last. But I feel like, for everything except the App Engine app, there's nothing blocking us from moving them; it's mostly time and bandwidth.
A
As
always,
I
could
get
myself
in
trouble
and
say,
like
I
really
want
to
do
all
of
these,
but
my
availability
gets
really
spotty.
So
I
know
I
broke
out
issues
for
each
of
the
components,
with
the
exception
of
the
kubernetes
app
engine
thing.
If
people
are
interested
in
migrating,
the
individual
components,
I'm
super
happy
to
like
help
flush
those
issues
out
if
it's
unclear
like
what
needs
to
be
done
and
the
order
in
which
it
needs
to
be
done
and
yadda
yadda
yadda
did
that
make
sense.
E: I think we could move to writing two copies of the dataset, one in the old place and one in the new project, and then, as soon as you have that working, it would be possible to start migrating the components.
E: I think anything else is going to be a huge headache, because it will wind up relying on a Googler again, and being blocked on: okay, I need this component to start using this other dataset, and we need to make sure all the IAM is in place, and whatnot. But we can cut it down to: we just need to make sure the thing writing the data writes to two locations, and then you can start standing up new instances of each of the components reading from the CNCF project.
A
So,
as
you
said
that
one
thing
just
popped
into
my
mind,
so
the
gubernator
data
set
is
actually
world
readable.
So
literally
anybody
can
can
query
it.
All
they
need
to
provide
is
the
project
against
which
to
build
the
compute,
which
is
pretty
I'm
pretty
sure
it's
just
like
a
command
line
flag
or
it's
a
configuration
setting
for
for
bq
I
feel
like,
and
so,
if
we're
talking
about
modifying
the
thing
that
writes
to
bigquery,
that
is
modifying
kettle,
and
I
don't
want
to
speak
fully
on
behalf
of
grant.
A
He
was
here
and
he's
done
a
lot
of
work
on
like
improving
the
operational
characteristics
of
kettle.
But
my
impression
is,
it's
still
kind
of
restart
times
for
kettle
are
non-trivial
and
making
sure
we've
got
everything
up
and
running.
Has
in
the
past
caused
some
some
bumpy
periods
lasting
a
week
or
two
which
early
in
the
release
cycle
is
a
great
time
to
do
that.
A
If
we
want
to
do
that
approach,
but
I
kind
of
feel
like
if
the
data
set's
publicly
queryable
it
might,
there
may
not
actually
be
a
lot
of
credential
hassle.
It
might
be
easier
to
move
like
triage
super
easy
to
move
totally
easy
to
move
same
with
all
the
metrics
jobs.
Those
are
basically
just
proud
jobs.
A
It's
just
a
matter
of
they
write
to
a
gcs
bucket.
It
would
be
a
matter
of
like
standing
up
a
new
dcs
bucket
or
maybe
even
trying,
the
dance
of
like
moving
the
gcs
bucket,
which
I
would
be
comfortable
doing
for
these
things.
Since
they're
less
critical
than
like
the
gcs
buckets
the
whole
project.
A
So
that's
the
thought,
but
I
feel
like
yeah.
That's
the
gubernator
thing.
I
think
there
are
a
bunch
of
others
related
to
migrating
projects
that
host
images
that
are
used
in
ci.
I
feel
like
we
got,
I'm
just
gonna
guess
like
80
of
the
way
there
during
the
last
release
cycle.
I
fully
intend
for
us
to
commit
to
migrating
all
the
ci
images
during
this
release
cycle.
E: If it is actually publicly readable and we're not going to have problems with that, then it might make sense to do that same thing, but after we move the components instead of before, because then you can go ahead and migrate all the components and have full control over changing where they read and write from.
E: There's... so, for the GCS bucket, there is like a go-link that points to that, go.k8s.io; that'll be pretty easy to switch out once we're ready. And we've kind of done some stuff like that before: last time we had an intern work on the triage dashboard, we had a total rewrite and ran the rewrite in parallel for a bit, so we already have some tooling to do automatic upload of it and that sort of thing.
A: Yeah, for what it's worth, the triage bucket is one where, very specifically, I think it might be easier to just try the "delete the bucket and then quickly recreate it in the new place" dance. The reason I say that is because a lot of people... so, like, maybe we can fix this too: the go.k8s.io/triage link sends you to the bucket, but then the URL you see in your browser is like storage.googleapis.com slash triage.
A
Okay,
let's
see
you
got
another
thing
here:
oh
go
ahead.
C
Tell
me
so,
basically,
I
want
to
know
what's
needed
to
be
done
from
sick
testing
to
make
this
happen.
So
basically
it's
about
migrating.
C
I
think
community
jenkins,
for
I'm
not
sure
what
needs
to
be
done.
A
So,
there's
a
link
to
a
testing
for
issue
in
the
very
in
the
description
of
that
issue
that
talks
about
like.
Maybe
we
could
stop
using
it
entirely
and
although
that
issue
has
fallen
stale,
I
know
ahmet
wat
v
and
myself
sort
of
tried
to
do
a
proof
of
concept
where
we
just
stopped
using
that
bucket
and
actually
I
I
feel
like
a
lot
of
the
jobs
that
were
using.
A
This
were
were
bazel
based,
which
then
kind
of
removed
the
use
of
so
it
could
be
that
we
no
longer
have
any
jobs
that
require
the
use
of
the
kubernetes
release
poll
bucket,
as
is
it
could
be
that
we
have
older
release
branch
jobs
that
are
still
using
bazel.
That
do
still
require
this
bucket,
in
which
case
it
is
a
job
configuration
change
that
ahmed
outlined
in
the
last
comment
before
fadabot
marked
this
issues
tale.
A
I
I
feel
like
cloud
you
we'll
talk
about
the
rest
of
the
kate's
in
for
work
and
the
images
and
stuff
at
the
kate's
info
meeting
after
chatting
with
arno
I'll,
make
sure
we
kind
of
groom
the
board
and
sort
of
prep
all
that
for
discussion
at
the
next
meeting,
because
I
wanna,
if
we
have
time
left
over
after
vladimir's
thing,
maybe
we
can
talk
about
it,
but
I
want
to
be
respectful
of
vladimir
having
signed
up
yeah
sure,
okay,
so
vladimir
is
here
to
talk
to
us
about
the
e2e
framework,
subproject.
F: Yes, it's been a while since I've come to this meeting; I'm probably going to start re-attending because of this effort. Last time I was here, I presented a document. Let me go ahead and share my screen, so folks will know what I'm talking about. Where are...
F
You,
let's
do
the
desktop
yep,
so
this
is
the
this
document
and
if
you
go
to
I'll,
just
pull
the
link
down
here.
Sorry
about
the
scrolling.
F
If
you
go
to
this
repository
right
here,
there's
a
link
to
this
google
doc
document,
if
you're
interested
in
seeing
what
we're
talking
about
and
basically
as
a
recap,
this
is
an
effort
to
create
a
framework
that
allows
you
to
create
e2e
tests
and
go
for
components
deployed
in
a
cluster
and
the
motivation
behind
that
is
right.
Now,
if
you're
interested
in
writing
your
own
ch
test
and
you're,
not
part
of
the
upstream
kubernetes
kubernetes,
it's
actually
hard
or
probably
impossible
to
to
vendor
and
the
stuff.
F
That's
a
lot
of
the
good
stuff.
That's
already
been
done
upstream.
So
what
we're
doing
is
starting,
fresh
and
creating
a
framework
that
a
allows
you
to
express
your
tests
in
a
way
that
you
can
do
filtering
when
you
exit,
when
you
exercise
your
test,
where
you
can
specify
what
you
want
to
actually
run
and
b.
The
other
portion
of
this
effort
is
also
to
slowly
but
surely
add,
helper
functions.
F
Similarly,
to
what
we
find
in
upstream
kubernetes
kubernetes
test
that
allows
you
to
basically
talk
to
the
cluster
or
interact
with
a
cluster
right
now,
upstream,
kubernetes
has
a
large
what
seems
to
be
a
large
set
of
helper
functions
to
allow
you
to
do
a
number
of
things
regarding
any
number
of
any
number
of
objects
residing
in
the
cluster.
F
So
what
we
want
to
do
in
the
second
portion
of
this
effort
is
create
something
that
provides
some
a
collection
of
helper
functions
to
to
to
help.
Folks
who
are
writing
those
tests,
and
the
other
reason
why
I
wanted
to
come
on
is
to
announce
that
there
is
a
first
pocpr,
that's
part
of
the
for
in
in
the
actually
the
pr
still
hang
hanging
out
right
now,
and
you
know
you're
free
to.
If
you
go
to
actually,
let's
go
back
here
and
it's,
I
think,
pr
number
five
or.
F
It's
it's,
this
pr
work
in
progress,
initial
proof
of
concept,
it's
a
large
pr,
because
it
is
the
actual
initial
implementation
of
the
design
document,
and,
as
I
was
implementing
this
pr,
I
was
also
updating
the
design
document
to
make
sure
that,
whatever
I
ran
into
issues
that
I
run
into
and
any
kind
of
lesson
learned
was
immediately
reflected
in
the
design
doc,
because
I'm
still
open
to
receiving
feedback
from
the
design
dog.
F
To
make
this
better-
and
I
think
yesterday
I
posted
something
on
the
on
slack
and
some
folks
that
had
already
have
some
some
feedback
and
then
for
this.
F: So you can express your testing starting at the suite or the package level, where you can do setup at the package level for your tests, and you define a few callbacks: one of them is setup, the other one is finish, for teardown. And basically, what happens is, when your test gets executed by calling this function, it'll do all the right things and take care of the lifecycle of your test, by calling all the befores and afters, or all the setups.
F: And then, when your test is done, it calls finish to do the teardown. Now, an actual test will look like this.
F: There you go, let's look at this one, something very simple I'm showing up here. So this is what a test could look like, where you express your test as a feature. You give it a name.
F: Optionally, you can add a label, and the label is something to allow you to do further filtering at the command line,
F
If
you
didn't
want
to
use
the
go,
dash
run
flag,
but
you
wanted
to
still
do
some
kind
of
customized
filtering
as
to
what
you
want
to
to
run,
and
then
you
can
pass
an
assessment,
and
this
is
basically
a
callback
of
the
test
that
you
want
to
run
and
when
you're
ready,
you
called
test
and
tess
will
exercise
the
assessment,
but
also
call
before
and
after
test
which
you
define
at
the
package
level.
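A minimal sketch of the shape being described might look like the following. To be clear, this is not the actual framework's API, just an illustration of features, labels, assessments, and package-level lifecycle hooks layered on plain Go; all names here are made up.

```go
package main

import "fmt"

// Assessment is one callback that exercises part of a feature.
type Assessment struct {
	Name string
	Run  func() error
}

// Feature is a named, labeled group of assessments.
type Feature struct {
	Name        string
	Labels      map[string]string
	Assessments []Assessment
}

// Env holds per-test lifecycle hooks and a label filter that would
// normally be supplied on the command line.
type Env struct {
	beforeTest, afterTest func() error
	filter                map[string]string
}

// Test runs one feature: it skips features that don't match the
// label filter, then wraps the assessments in the before/after
// hooks. It reports whether the feature actually ran.
func (e *Env) Test(f Feature) (bool, error) {
	for k, v := range e.filter {
		if f.Labels[k] != v {
			return false, nil // filtered out, like a skip
		}
	}
	if e.beforeTest != nil {
		if err := e.beforeTest(); err != nil {
			return false, err
		}
	}
	for _, a := range f.Assessments {
		if err := a.Run(); err != nil {
			return true, fmt.Errorf("assessment %q: %w", a.Name, err)
		}
	}
	if e.afterTest != nil {
		if err := e.afterTest(); err != nil {
			return true, err
		}
	}
	return true, nil
}

func main() {
	env := &Env{filter: map[string]string{"type": "smoke"}}
	feat := Feature{
		Name:   "pod-startup",
		Labels: map[string]string{"type": "smoke"},
		Assessments: []Assessment{{
			Name: "pod becomes ready",
			Run:  func() error { return nil }, // would talk to the API server
		}},
	}
	ran, err := env.Test(feat)
	fmt.Println("ran:", ran, "err:", err)
}
```

In the real framework the assessments receive the standard `*testing.T`, so the whole thing is driven by plain `go test`.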
F: And so, as you can see, it's intentionally... the framework is kept simple, because we want to lean heavily on the Go test framework as much as possible. The extra framework bits that you see are mostly there to allow you to do filtering, and to express your tests in a way where, when you're running them, you can specify a number of things, including the name of the feature, the labels, etc., to do your test filtering. But the structure of the test is basically a simple Go test; we're not introducing anything else, and we're relying on what Go already has to run the test. And there's more detail in the design doc, if you're interested to know what's going on inside.
F: And then the next set of effort we're going to focus on, as I said earlier, is to start looking around and studying the type of helper functions that are already part of the e2e tests upstream, to see if there's a pattern of things. Because the one thing I noticed is there were a lot of repeats: things where you would look in one place and look somewhere else, and it seems like they were basically doing the same thing, and a lot of it has to do with waits and retries.
F: So what we're going to do is spend some time up there and kind of analyze what are the things that would make sense to start bringing over immediately, so that folks can feel comfortable not only using this to write their tests, but also having the right tools to easily interact with the API server. So I'll stop and see if there are any questions right now.
E: Yeah, just one kind of key one: there's a bit of text that you put in the notes that mentions the doc we've been talking about, and it's blue and underlined, but it's not a link. Do you have the link to the doc?

F: Yes, let me... I apologize.
F
That's
okay,
I
might
have
I
don't
know
I
might
have
messed
something
up.
Let's
change
this
all
right.
Let
me
grab
that
I'll
I'll
put
it
in
the.
Where
is
it
chat?.
A: So, thank you for the link to the doc. Based on what you said, my read is: this is basically an attempt to completely remove Ginkgo as the driver for testing, and instead still allow us to use plain old go test, and that's about it, right?
F: Yeah, that's part of it. If you look in the doc, part of the motivation is not to impose anything heavy. So if somebody wanted to use Ginkgo, nothing would really stop you.
F
But
if
you
don't,
you
know
this
is
there
to
give
you
the
ability
to
write
your
test
and
also
be
able
to
filter
out
what
tests
and
features
that
you
want
to
exercise
as
well,
but
you're
right,
the
the
the
overwhelming
drive
is
to
lean
heavily
on
on
the
existing
go,
go
test
framework
and
as
as
the
as
the
code
progresses,
I'm
sure
you
know
we'll
we'll
get
requests
to
add
x,
y
and
z,
and-
and
we
want
to
make
sure
that
we'll
leave
room
for
that
to
happen
and
make
sure
that
as
folks
start
to
reach
out
and
and
try
to
use
it
make
sure
that
we
also
not
breaking
anything
in
the
way
that
and
before
I
state
what
I'm
about
to
say,
the
way
that
the
the
implementation
works
today.
F: So if you want to cancel out of that test, you can do that at any moment, and also you can call t.Skip at any moment as well, if you wanted to. So, like I said, we heavily rely on what Go already provides to do this. And then another question I always get is: are we trying to replace what's already upstream? I don't think the first round of this is going to replace anything.
F: This is mostly for folks where, if you're writing something new, or maybe you have something already in flight, and you were staring at the possibility of having to reinvent the wheel because of the difficulty of vendoring upstream kubernetes tests, this could be an alternative for you to say: hey, I'll opt for something like this, where I don't have to reinvent the wheel. And even talking to folks, both upstream and internally,
F: one of the things I'm finding out is that the thing that's very important to folks is the helper functions: the fact that, if you abstract out client-go and give folks nicely wrapped functions, that makes it easy to talk back and forth with the API server. I think that's going to be a big win too.
A: Yeah, I'm a huge fan of this. I don't think I'm giving anything away that's too proprietary if I shock and surprise you with the fact that Google internally runs e2e tests against GKE, blatantly not with Ginkgo, and very much with something that is basically just go test with some helper functions to sort of set some stuff up. It just happens to be tangled up enough with Google-specific stuff that we've been unable to open-source it. But I think this is exactly what people are clamoring for. I have, like, you know...
A: Looking at all the helper stuff we can extend: I think one of the craziest things that makes Ginkgo difficult to reason about is all of the BeforeEach and AfterEach, and the order of evaluation of all of those things. So I'm just going to assume that these helper functions kind of do that in a more sane way. Like, yeah.
A: A testing.T is available pretty much everywhere. I think that is a much better way of giving us something to hang off of.
F: Exactly, exactly, yeah. I try to stay away from the callbacks and not be heavy about them. There is a before-test at the suite level, or the package level, where you define in your environment: hey, before a test is run, do this. But that's mostly it, and I stay very faithful to how... it's a very straight-line graph; there's no dependency resolution or anything like that.
F
It's
a
straight
slice
how
it
appears,
how
you
define
them,
that's
how
it's
going
to
get
executed.
I
think.
A
I
don't
know
I
I
won't.
I
won't
give
the
wish
list
of
things
that,
like
kinko
was
close
to
providing
us,
but
didn't
quite.
I
think
what
one
thing
I
will
share,
I
guess
is:
I
feel
as
though
our
experience
with
the
regular
expressions
for
ginkgo
has
taught
us
that
those
are
really
cryptic
and
difficult
to
understand
why
this
particular
set
of
regular
expressions
are
being
used
for
a
given
job.
A: What is much clearer is just sort of include and exclude types for labels, which feels more like the Go way, where you're like: I want to include these tags and exclude these tags, and I don't need to worry about making sure that I match the entire name, or the full word, or just part of the word, whatever. But it could be that regular expressions prove useful in other ways.
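The include/exclude style of tag filtering being contrasted with Ginkgo's regexes can be as simple as the following sketch (illustrative, not any project's real API):

```go
package main

import "fmt"

// Selected implements include/exclude tag matching: a test runs if
// it carries every included tag and none of the excluded ones. No
// regular expressions involved, so the selection is easy to read
// back out of a job config.
func Selected(tags, include, exclude []string) bool {
	has := make(map[string]bool, len(tags))
	for _, t := range tags {
		has[t] = true
	}
	for _, t := range include {
		if !has[t] {
			return false // missing a required tag
		}
	}
	for _, t := range exclude {
		if has[t] {
			return false // carries a forbidden tag
		}
	}
	return true
}

func main() {
	tags := []string{"conformance", "slow"}
	fmt.Println(Selected(tags, []string{"conformance"}, []string{"serial"})) // true
	fmt.Println(Selected(tags, []string{"conformance"}, []string{"slow"}))   // false
}
```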
A: The other thing I'll say is on the helper functions. It's been a while since I've taken a look at them, but I think it would be cool if the helper functions are things we could actually extract out and then import back into kubernetes/kubernetes. I believe there was an effort underway to make sure that most of the helper functions are sub-packages of the e2e framework package in kubernetes, and I think many of them were trying to be rewritten so that they didn't actually import anything underneath them.
A
They
were
wholly
self-contained,
such
that,
like
you,
you
gave
them.
You
know
a
client
context
or
whatever
or
a
client
set,
and
then
they
could
do
what
they
needed
to
if
they're.
Not
that
way,
I
would
be
interested
in
you
know,
sort
of
what's
the
shape
of
api.
That
would
make
it
useful
for
you
to
see
if
we
could
like
transform
them
slightly,
so
they
come
over
like
it
might
be.
They
need
to
return
errors
instead
of
using
asserts
and
stuff
like
that.
F: It would be great if we didn't have to rewrite all this, but at the same time, just at a very high level of looking at it, I think what I'm going to find is that there's a pattern of things that folks did over time, organically, and that pattern keeps repeating itself. And one of the patterns that I'm already seeing is a lot of places where folks created helper functions,
F
A
lot
of
that
was
for
weight
and
retries
on
on
getting
an
object
of
some
sort,
and
it's
all
over
the
place,
whether
it's
you
know,
node,
pods,
etc,
etc.
Although
it's
you
know,
it
looks
like
it's
a
lot,
but
once
you
start
putting
the
layers
you're
like
oh
okay,
it's
basically
grabbing
an
object
and
doing
something
with
it
and
you
know
grabbing
it
within
a
retry
of
some
sort
to
make
sure
that
it
it
does
it
over
and
over.
F: So I'm definitely going to keep an eye out on what the pattern is, and whether or not we can bring over what's already there, and what that would look like. But as far as the dependency graph of how things go, I think I've already seen places where some of those helpers are reaching into kubernetes/kubernetes itself.
F: So... but, knowing what you just said, I'll keep more of a vigilant eye out for places where that didn't happen, and maybe those are the first kind of candidates we can start looking at.
E: Yeah, I would actually recommend against trying to import this back into the main tests. I think we're sort of stuck with the e2e test thing we have now, because it's just so massive; there are thousands of tests, and we pretty regularly have to make small tweaks to these dozens of retry functions and things to make the tests run smoother.
E
But
if
we
were
able
to
start
over
for
new
tests,
I
think,
like
you
said,
there's
an
awful
lot
of
things
that
are
duplicated
everywhere,
with
10
different,
half-baked
versions.
I
don't
think
anybody
actually
has
the
energy
to
replace
all
of
that
in
the
main
repo,
but
I
don't
think
we
want
to
sort
of
export
the
the
bad
ideas
from
that
right,
but
I
also
don't
think
we
have
the
energy
to
clean
up
all
of
them.
We
just
need.
F
What I'm thinking is a helper function too, for lack of a better term. For now it can live in the same repo where everything is together. But like I said, there's a lot of good stuff in what's already been done; maybe we just use it as inspiration for what's to come.
E
So I think it will really help drive adoption if there is the option to use these helper functions without necessarily prescribing to the full test framework. I think some projects are actually still going to want to use Ginkgo just because they're used to it, but we also want to encourage projects to start writing tests for components outside of Kubernetes core that are not importing the core e2e framework, as we recently found out CSI is doing.
F
That's a good point, and that's a sentiment I've heard as well: the helper functions are still useful because, like you said, folks already have things baked in a certain way, but those helper functions could still be useful to them.
F
So I think what I want to do for now is keep the helper functions as a sub-package of this repo, and then later on, if we deem they're good enough to stand on their own, it could be e2e-helpers or whatever. But yeah, those are definitely good points.
D
Hi, this is Mimi. I'm with Apple, and we're doing Kubernetes testing as well. We're actually in the process of trying to write more tests at the e2e level, so both component-level e2e and in-our-cluster types of tests. We have tests that depend heavily on Ginkgo, and there are tests not in Ginkgo that use testing.T.
D
But the model we're trying to follow is Ginkgo tests. Also, Ginkgo is coming up with a v2, and I think it might have more features that we could utilize.
D
I really like the idea of the helper functions, because we do have a lot of helper functions outside of it, and we try to keep our test framework, which is based on Ginkgo, as simple as possible. I would be very happy to look into version two of the e2e test framework to see if we can use it somehow, collaborating or incorporating it with our tests, because we're sort of at the starting point right now.
A
You are the guy working on the code; I don't understand why you'd need to bug other people for an approval, so there's that. Yeah, I feel like figuring out the right API and shape of e2e helpers will be a very fruitful discussion. If we could do it all over again, we would avoid Ginkgo really, really hard. It definitely was the thing that got us to where we are today, and I don't begrudge it anything, but it's just too difficult to reason about the order of execution and whatnot. I did talk with the maintainer of Ginkgo about Ginkgo v2, and the feature that sounds most exciting might be.
A
But I feel like this framework kind of has that already. You know, we started adding specific tags to test names to fake having labels and such in our tests, and at the moment I'm not sure I see a way for the kubernetes/kubernetes repo to migrate to Ginkgo 2.0. It's not clear to me that the benefits we'd get from doing that would be worth the cost.
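The "tags in test names to fake having labels" convention refers to bracketed markers like `[Slow]` or `[Feature:Example]` embedded in e2e test names. A small sketch of pulling those pseudo-labels out of a name (illustrative, not the framework's own parser):

```go
package main

import (
	"fmt"
	"regexp"
)

// tagRe matches the bracketed tags that Kubernetes e2e tests embed
// in their names, e.g. "[sig-node] Pods should start [Slow]".
var tagRe = regexp.MustCompile(`\[([^\]\[]+)\]`)

// tagsOf extracts the pseudo-labels from a test name.
func tagsOf(name string) []string {
	var tags []string
	for _, m := range tagRe.FindAllStringSubmatch(name, -1) {
		tags = append(tags, m[1])
	}
	return tags
}

func main() {
	fmt.Println(tagsOf("[sig-node] Pods should start [Slow] [Feature:Example]"))
	// [sig-node Slow Feature:Example]
}
```

In practice these tags are matched with regular expressions over the full test name, which is exactly the kind of thing first-class labels would replace.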
A
So if I were to think about migrating those tests at some point far off in the future, I'm really interested to see how far this project goes as well. But yeah, I'm a big fan of merge-and-iterate.
A
So
if
you
feel
like
your
proof
of
concept,
pr
is
more
or
less
like
workable,
either
just
things
you
want
to
improve,
I
feel
like
it
would
be
a
good
time
to
maybe
merge
it
and
open
up
issues
for
the
things
you
want
to
improve.
Yeah.
G
I have a couple of questions, if I may, regarding the framework itself. At this moment, are you able to exclude tests just by a given name or something like that? For example, let's say some test starts to flake a lot in CI, and a lot of PRs fail because of it. Would you be able to just mark that test to be skipped, or something like that, without having to send a pull request for that to
F
Kkk,
oh,
I
see
what
you're
saying
right
now
today
the
there's
no
specific
or
explicit,
I
should
say
exclusion
based
on
any
of
the
meta
information
around
the
test,
like
like
the
label
or
like
the
name
of
the
feature,
but
that
could
be
something
we
can.
You
know
we
could
put
in
right
away
and
make
sure
that
it's
it's
you
know
it's
added.
F
Actually, you know what would be really helpful: if you open an issue in that repository, we can add it as soon as everything is merged, because that's a good idea. Today, right now, everything is inclusive. So if you say feature equals whatever, it'll look for a feature of that name, or labels equals whatever, and it'll look for a label with that name.
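The inclusive selection just described might be modeled like this. The `testCase` record and the `selectByFeature` name are hypothetical, purely to illustrate the filter shape; an exclusion flag would be the complementary filter being requested:

```go
package main

import "fmt"

// testCase is a hypothetical record of a test's metadata.
type testCase struct {
	Name    string
	Feature string
	Labels  []string
}

// selectByFeature keeps only tests whose Feature matches, which is
// the inclusive "feature=whatever" behavior described above.
func selectByFeature(tests []testCase, feature string) []testCase {
	var out []testCase
	for _, t := range tests {
		if t.Feature == feature {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	all := []testCase{
		{Name: "starts pods", Feature: "scheduling"},
		{Name: "mounts volumes", Feature: "storage"},
	}
	for _, t := range selectByFeature(all, "storage") {
		fmt.Println(t.Name) // mounts volumes
	}
}
```

A skip list would simply invert the predicate, dropping matches instead of keeping them.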
E
I want to point out we're a couple of minutes over time now, so we may have to move on. I also want to say: super thanks, Vladimir. It's really cool to see someone working on this; we wanted this, but we just haven't had the bandwidth recently.
E
I'd also be really excited to see some folks at Apple contributing their feedback and efforts to this. So thanks for speaking up, and thanks for coming, everyone. Thank you.