From YouTube: Kubernetes SIG Testing 2018-04-24
A: Okay, hey everybody, I'm Aaron Crickenberger. Today is Tuesday, April 24th. Welcome to your weekly Kubernetes SIG Testing meeting. We don't really have anything on the agenda today, so we just sort of started having a discussion about measuring API coverage, what the best way to do that is, and where the best touch points are within our test-infra stack to hook that stuff up.
A: So you said you had kubeadm set up, but Tim, you expressed concerns that the majority of our clusters aren't necessarily set up through kubeadm, and we started talking about how we weren't really sure of the best place or format for results, and how we might analyze those. Does that sound about right? Yeah.
B: So the TL;DR, to summarize a little bit, is that he enabled the webhook for audit logging, which allows you to push the audit logs to wherever you want, which is super useful for a lot of use cases, but you need to specify the format of how you want it to go. The question is: if we were to enable this for test-infra, where would that data be stored, and how would it be accessed and retrieved from a test-infra perspective? So, how?
D: It's every API call that's made to the API server. So there are two points: one, we can go and enable the audit log on the API server where we can reach it, and the other approach, which is what I'm working on now, is enabling an audit webhook so that we have some centralized place where all of the API calls across all of our tests are aggregated, and we have some interesting test results and data to work with and ask questions of. The kubeadm work was just another tool I was working on within testing.
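Once audit events from all test runs land in one aggregated place, the kind of questions mentioned here become simple queries. As a rough illustration, here is a sketch that tallies API calls per verb and resource; the event shape is loosely modeled on the Kubernetes audit Event format, and the sample data is invented:

```python
from collections import Counter

def tally_api_calls(events):
    """Count audit events per (verb, resource) pair.

    `events` is assumed to be a list of dicts shaped loosely like
    Kubernetes audit Events: {"verb": ..., "objectRef": {"resource": ...}}.
    """
    counts = Counter()
    for ev in events:
        verb = ev.get("verb", "unknown")
        resource = ev.get("objectRef", {}).get("resource", "unknown")
        counts[(verb, resource)] += 1
    return counts

# Hypothetical events, e.g. aggregated from all test runs via the webhook.
events = [
    {"verb": "create", "objectRef": {"resource": "pods"}},
    {"verb": "get", "objectRef": {"resource": "pods"}},
    {"verb": "create", "objectRef": {"resource": "pods"}},
]
print(tally_api_calls(events)[("create", "pods")])  # 2
```

A tally like this is the simplest form of the coverage question being discussed: which verb/resource combinations are never exercised by any test.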
C: Somewhere in kubetest, when setting up those jobs, we'll want to configure that. I would say the ideal situation would be enabling that for certain jobs that will be scheduled, and then having kubetest dump it into the artifacts directory. We upload a lot of things, like other log files or JUnit results, to that directory.
B
A
D: I originally went the snooping route, because I could inject it on all the pods, but the feedback I got was that it felt kind of clunky. This is cleaner, but the trade-off is that we do have to modify stuff on the API server, and that cuts us out of doing this on a few providers where we can't.
B: Could you? You can, actually. Well, there's a provision that people are wanting to make for a dynamic way to hook up webhook changes. So long as you have admin ACLs, you could basically have the ability to stub in and put that webhook on any provider, as long as the API machinery folks agree to some of those proposals. There already is precedent for some of this stuff, so it doesn't seem too far-fetched that this is coming, and it will happen, because there are already proposals around it. Okay.
A: But I do agree with Ben that the general pattern we follow for producing test artifacts is to just dump them in the artifacts directory for each run first, and then have something that later goes through and iterates over the Google Cloud Storage buckets and aggregates those separately, right? That's how we do it: we have jobs that push into GCS, and then we have kettle listening, over Pub/Sub I think, to understand when things are pushed into GCS, and it scrapes from GCS and pushes into BigQuery.
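The pipeline described above (jobs push artifacts into GCS, kettle notices finished runs and loads them into BigQuery) boils down to mapping per-run artifacts into table rows. A minimal sketch of that mapping step follows; the artifact resembles a finished.json metadata file, but all field names here are illustrative, not kettle's real schema:

```python
import json

def run_to_row(job, run_id, finished_json):
    """Flatten a finished.json-style artifact into a row for a results table.

    The field names here are invented for illustration; the real kettle
    schema and GCS layout differ.
    """
    meta = json.loads(finished_json)
    return {
        "job": job,
        "run": run_id,
        "result": meta.get("result", "UNKNOWN"),
        "timestamp": meta.get("timestamp"),
    }

row = run_to_row("ci-kubernetes-e2e", "12345",
                 '{"result": "SUCCESS", "timestamp": 1524500000}')
print(row["result"])  # SUCCESS
```

The point of the design is that producers only write files to a well-known directory; all aggregation happens later, decoupled from the test run itself.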
C
D
C
B
A
E
A: Okay, so I guess at a high level: where are we going with this coverage data? How is this going to hook into the ongoing conformance test efforts? Because I gather Mithra introduced some of the folks who are working on the conformance tests today, but I don't know how we're using coverage data to help inform what they're working on.
C
D
A: Okay, that makes sense. I mean, I know for now, one of the questions that came up was that there seem to be some workload tests that are still using the v1beta1 workload API instead of the v1 API, and I know in general there's an intent to make sure that any API that's v1, we've got to have coverage on that. So we roughly know what code needs to be worked on, but coverage is going to help us inform that a lot more.
A: But one of the questions that raised for me is: if we make a blanket policy of always saying, well, it just has to be v1, it's stable, it's out the door, everything should be talking to v1, then, technically speaking, v1beta1 still needs to be supported for some amount of time due to our deprecation policy, and I don't know how we would mandate or enforce that you have sufficient coverage across both of those APIs. It kind of feels a little dirty to me.
B: Look, I'm of the mind that everything that's v1beta1 should eventually be promoted to v1, and I think, if they're doing work right now, working from v1 backwards makes the most sense, because there are plenty of things that will still be promoted to v1 which we know do not have sufficient coverage, right?
A
B: All the existing tests keep the coverage that existed for the previous release, right? They do the skew testing across one release. It takes two releases to deprecate the API groups, so it's always been a plus-two policy from my vantage point, because I've been bitten by it so many goddamn times, sorry. Every two releases they can deprecate, so if you did v1beta1 in, say, 1.9, then by 1.11 it would be dead and there would just be v1, right?
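The plus-two example above can be made concrete with a toy calculation. This is a simplification of the real Kubernetes deprecation policy (which distinguishes alpha, beta, and GA timelines), assuming only a flat two-release window:

```python
def removal_release(introduced, window=2):
    """Return the (major, minor) release in which a deprecated API version
    disappears, given the release it shipped in and a +N-release window.

    Toy model only: the real deprecation policy has per-stability rules.
    """
    major, minor = introduced
    return (major, minor + window)

# v1beta1 shipping in 1.9 with a +2 policy is gone by 1.11,
# leaving only v1, matching the example in the discussion.
print(removal_release((1, 9)))  # (1, 11)
```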
B: So they'll have a window of time where they support both, and then one release after that, it's gone. So it doesn't matter to me; I think, from their perspective, in spinning up folks to get coverage, just focusing on v1 is totally sufficient, and by the time we get to the point where any APIs are not fully v1, we can address piecemeal what needs to be done, and that should be done by SIG Architecture, right? So.
A: This two-releases thing sounds really aggressive to me, because, if I understand what you're saying, we moved a bunch of the workload APIs to v1 back in late last year, I guess around December, and we're sitting there all loud and proud about how, hey, look, deployments are finally v1. Does that mean the release that's being worked on right now is ripping out all the old v1beta1 workload stuff?
B: I'm not going to say it's consistent, because it's not, but that has been the standard operating procedure that I've seen across releases, and the reason I know this is because I have tooling that exercises every single release, qualifies versions, and gets every single resource object, right? So that's just what I've seen. Okay.
A
B
A
B: I don't think we should ever, but the test suites should support the version that they release with, and because we have three releases of support, you're always on a stable version of the test suite. So the way it works is that the suite for a given series will test the coverage for that series, right, so long as they're upgrading the coverage to be able to support v1 APIs.
B
F: So I only wanted to share with you where we're consolidating the things that we're working on. They were shared with us, like Mr. Frank wants to work with, and she sort of said which was a priority and which was near-term, to double-click on conformance. So he picked the priority class and I picked namespace, and he created one as well.
F
F: So we are using that, and we're learning a lot in the process. We were able to do a PR, and we now have it out to seek feedback, right? We need to know if it makes sense to test API REST-like behavior: for instance, first I do a GET on the resource and there's nothing there, or it was there before; then I POST something, so I see that that resource is there.
F: When I delete it, it's not there anymore, and things like that, or to deep-dive into some more complex scenarios. So, pretty much, that's our first PR, and we're seeking feedback, right, to recalibrate how to construct better scenarios. It's only to give you context on what the things are that we're working on.
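The lifecycle just described (GET finds nothing, POST creates the resource, GET finds it, DELETE removes it, GET finds nothing again) can be sketched against a stand-in client. The FakeResourceClient below is purely hypothetical, standing in for a real API client in a conformance test:

```python
class FakeResourceClient:
    """In-memory stand-in for an API resource client (hypothetical)."""
    def __init__(self):
        self._store = {}
    def get(self, name):
        return self._store.get(name)
    def post(self, name, obj):
        self._store[name] = obj
    def delete(self, name):
        self._store.pop(name, None)

def check_rest_behavior(client, name, obj):
    """Exercise the GET / POST / GET / DELETE / GET lifecycle."""
    assert client.get(name) is None   # nothing there yet
    client.post(name, obj)
    assert client.get(name) == obj    # resource now exists
    client.delete(name)
    assert client.get(name) is None   # gone again
    return True

print(check_rest_behavior(FakeResourceClient(), "ns-1", {"kind": "Namespace"}))  # True
```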
B: When you create the PR (I do have feedback immediately), when you create the PR, at-mention the proper team. That way, the people have visibility, because sometimes there are literally, what, 3300 to 3400 issues and PRs in total, so the label alone will not always get sufficient eyeballs on it. I at-mention the team when I create an issue in order for it to get proper eyeballs on it, so that at least somebody can route it properly.
B
A: I think one other potential place to get good feedback is, before you actually go through the trouble of writing all the test code: do you have a test plan that you can present to either the SIG that owns that feature, or potentially SIG Architecture, to get a good idea of whether or not, you know, before you spend time writing the code...
A: Are you about to write meaningful test cases, or are there some things where it's not worth the effort to write that code? You know, are there places where you're missing meaningful test cases, that sort of stuff? What's the sort of process you're going through to review that before writing code?
F
D
F
A
F
B: What both Aaron and I are harping on, or trying to suggest, is that we need a process in place to loop you guys in with the right people to give you signal back properly, and the process that Aaron was mentioning, as well as I, was to socialize your test plan first with, say, SIG Arch, and then, when you create the PRs and issues, to CC the appropriate teams according to what domain they're covering, because you might have some that would be API machinery, some might be SIG Apps, some might be...
B
A
A: Shout-out for my talk, which is the Wednesday of the conference. You know, it would really help if I had the schedule for this talk in front of me. I believe my talk is Wednesday, immediately after the keynotes, and the SIG Testing deep dive, which is going to be presented by [unclear] and Sen Lu, is going to be Friday, 2:00 to 2:35. So looking forward to seeing you all there. Happy Tuesday, everybody!