From YouTube: Kubernetes SIG Testing 2017-06-27
(internet dropped a few times during recording, so there are a few gaps)
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A: Okay, hi everybody, see if this works. Hi, everybody. [inaudible] This is the Kubernetes SIG Testing meeting, so I'll just give a brief update on 1.7 release status. We're planning on, I believe, cutting tomorrow. Certainly the next deadline is that if you have any release notes, you need to get them in by noon Pacific time tomorrow. And as I was saying, the main area of concern from a testing perspective is that upgrade tests seem to be causing some amount of flail, because this seems to be the first time we have been trying to automate upgrading from two versions behind up to 1.7. So we've been ignoring those errors and focusing mostly on the errors that were happening from 1.6 to 1.7; a couple of blockers have revealed themselves.

A while back we enabled a plugin that adds a needs-sig label to any issue that doesn't have a sig label attached to it. We're now down to about ten such issues, from about 1100 issues last week, so great job to those of us who are helping on triage. That's all the 1.7 stuff that I have today. Eric, it looks like... Michelle, are you around to demo Testgrid?
C: [inaudible] I'll just give a brief presentation on Testgrid, and then if there are any questions or suggestions we'll solve for them; but feel free to ask if there's anything that's unclear. Yeah, so I assume people here are familiar with Testgrid to some extent. Basically: a short overview of Testgrid and how it works, then some features, where I'll give a basic overview as well as some of the options that are available on them, and then a little bit on some of the future work we'd like to do.
C: So, just to start: Testgrid is a grid of test results. We currently have an instance for a bunch of the Kubernetes tests, which exists at k8s-testgrid.appspot.com. Testgrid's main features are that we update and write out state such that everything is precomputed for easy and fast access, and there's a bunch of things for being able to sort, filter, and otherwise change the view for analyzing how particular jobs are doing.
C: We integrated some stuff with the test results from Gubernator, as well as other things from GitHub, like the commit IDs and searching for regressions. And then there are a lot of options for actually changing both the view and the other ways the data is displayed. There's also a link to the actual stuff in test-infra for the Testgrid configuration and checks as well. So Testgrid is split into mainly a separate frontend, and then the actual updater.
C: There's a process on the backend that basically goes through all the configuration for all of the test groups, updates all of them, and writes the state out to cloud storage; the frontend then just pulls those objects from cloud storage and reads through all of that.
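For reference, that configuration lives in kubernetes/test-infra as a YAML file pairing test groups (where results land in GCS) with dashboard tabs. A minimal sketch, with illustrative job and dashboard names (field names as in the test-infra repo):

```yaml
# Minimal Testgrid config sketch; the names here are illustrative.
test_groups:
- name: ci-kubernetes-e2e-gce                          # one group per job
  gcs_prefix: kubernetes-jenkins/logs/ci-kubernetes-e2e-gce

dashboards:
- name: sig-testing-example                            # hypothetical dashboard
  dashboard_tab:
  - name: gce
    test_group_name: ci-kubernetes-e2e-gce             # ties tab to group
```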
C: Going a bit into the actual features: this is still something we would like to improve on for speed, but currently we do manage to serve tens of thousands or even hundreds of thousands of results. One second.
D: A question, you know, while we're in the mode of it: is the stack that's there usable for vetting? Because right now there's a possibility to change the stack along the way when you're doing some of the tests, but you don't have the exact versions of every single dependency in the graph there.
C: So I believe we have a little bit of custom information right now for what things are running; that is not currently there, but in a next iteration there's an option, which I'll also show later, for adding a little bit more information into the column headers. So if the information is possible to fetch in Testgrid, we can configure that to show, say, that this run used a particular version; it just has to be passed along somewhere.
C: I'll get to the slide a little later on; this is actually something that I will be adding, and I'm not sure it's available for external use yet, but you should be able to configure alerts: if you have a test failing over a specified amount or threshold, you can have alerts automatically emailed to specified email addresses saying, hey, these tests are failing.
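As a sketch, alerting hangs off the same configuration; the thresholds and address below are made up, but the field names match the test-infra Testgrid config:

```yaml
# Alerting sketch; values are illustrative.
test_groups:
- name: ci-kubernetes-e2e-gce
  gcs_prefix: kubernetes-jenkins/logs/ci-kubernetes-e2e-gce
  num_failures_to_alert: 3              # consecutive failures before alerting
  alert_stale_results_hours: 24         # also alert if results stop arriving

dashboards:
- name: sig-testing-example
  dashboard_tab:
  - name: gce
    test_group_name: ci-kubernetes-e2e-gce
    alert_options:
      alert_mail_to_addresses: "dev@example.com"   # hypothetical recipient
```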
C: We also have: if we detect that a failure happened between commits X and Y, we will auto-populate a link on any such test that shows, hey, here's a link to a regression search, to get the differences between those two commits. By default, the column headers show the date, time, and build number that the test actually ran with, with each row showing the test name or target. But, and this gets into the thing I was talking about earlier:
C: The commit ID is actually a custom column header that's added in the configuration for all the Kubernetes tests. So, likewise, if we have the information available for other kinds of versions or whatnot to track down the road, we should be able to add it via similar configurations.
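A sketch of how such a custom column header is wired up: the value comes from metadata the job uploads alongside its results, and the test group surfaces it as an extra header row (names illustrative, fields as in the test-infra config):

```yaml
# Custom column header sketch: show uploaded metadata per column.
test_groups:
- name: ci-kubernetes-e2e-gce
  gcs_prefix: kubernetes-jenkins/logs/ci-kubernetes-e2e-gce
  column_header:
  - configuration_value: Commit         # key in the job's uploaded metadata
```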
C: There are options for the whole view that you can use to change it, in order to better look at some of the results. So, for instance, you can change the width. This is completely customizable via a URL parameter, but we also have presets: a very, very compact one at five pixels, versus the normal one, which, as you saw before, is something like eighty, and then also a sort-of-compact one as well. You can toggle those under Options.
C: You can also surface extra information. Basically, if you're running a bunch of the same tests across a matrix (for instance, this one is running against a bunch of different versions), then we can include that in a custom header to easily differentiate them when you're viewing the results. This also means that under Options you can, for instance, group together all similar tests with the same target and then expand that in the view.
C: So if we have failures: is it failing at one particular version or for everyone? A couple more recent things: there's now the ability to add descriptions for tabs, so you can easily see what a given tab's purpose is. There are also no current examples of this, but you can add notifications for tabs in order to display a bright yellow bar at the top.
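A sketch of both, with made-up text; the description and notifications fields follow the test-infra config format:

```yaml
# Tab description and dashboard notification sketch; text is illustrative.
dashboards:
- name: sig-testing-example
  notifications:
  - summary: "Known outage: results may lag today"   # the yellow banner
  dashboard_tab:
  - name: gce
    test_group_name: ci-kubernetes-e2e-gce
    description: "Serial e2e suite on GCE, owned by SIG Testing."
```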
C: We also have a summary tab that basically just gives an overview of a bunch of the test statistics. So, for instance, we have what percentage of tests were failing within the last week, as well as more on the actual gist of the failures: the target, the number of times it has failed consecutively, and when we first saw it failing. As I noted, I will shortly be adding the ability to get that via email, and then we will also have a Chrome extension for that later this year.
C: There are also currently some graphs you can show; by default the only metric is test duration. You can get to it under the graph menu: basically, how long does this take to run, and how does that change over time? We have a value for each test result. And then, if there are a bunch of these options you'd like to enable by default for a tab, you can store that in the base_options field of the configuration and just have that come up every time somebody views that tab.
C
Somebody
wants
to
view
that
having
particular
so,
for
instance,
this
one
has
include
filtered
by
DirectX
volume
to
show
only
test
targets
that
have
volume
in
their
target
name
make
the
West
slightly
smaller.
So
we
see
more
test
results
this
time
and
then
also
show
by
default
e
test
duration
minutes
for
how
long
successor
ticking
and
then
for
some
of
the
future
things
we
like
to
work
on.
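Those defaults are just the tab's URL options saved in the config. A sketch, using the parameter names as they appear in Testgrid URLs (the dashboard and group names are illustrative):

```yaml
# base_options sketch: default view options applied when the tab loads.
dashboards:
- name: sig-storage-example
  dashboard_tab:
  - name: volumes
    test_group_name: ci-kubernetes-e2e-gce
    base_options: "include-filter-by-regex=Volume&width=5&graph-metrics=test-duration-minutes"
```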
C: There's a lot, honestly, but we're working on tackling it. In the future we'd like to open source more of Testgrid beyond just the configuration, and also take advantage of Kubernetes, actually making Testgrid a Kubernetes-native application, with both the updater and the frontend open source. There are also a lot of things that we recognize are harder to discover in Testgrid that we'd like to make easier to use, and then also some features like notifications for configurations.
C: So, oh yeah, the configuration stuff is listed in test-infra under the Testgrid config, and the actual Kubernetes instance is available there.
A: So one thing I plan on helping out with is trying to add a description to the litany of tabs we have for the Kubernetes Testgrid right now, because right now none of them have descriptions. There's also a good chunk of them that seem to be stale, so I think we as a community need to come up with some policies we can enforce, like: is this a thing worth cluttering up the board with?
A: Only three percent of the things have failed in the past week. Great, right? But that's the number of test cases times the number of job runs, I think. I mean, what might be more actionable, if we start to assign ownership of these jobs to different SIGs or people, is to look at the job as a whole: over the number of job runs, what's the percentage of flakiness, the failure rate, right? Yeah.
C: And I think I actually have that tracked for building into the enhanced summary: being able to see both the overall test-case failures as well as the percentage of job runs that have failed, since, yeah, multiple people do want the ability to view it as a per-run status.
A: A question I've seen come up here every once in a while is that the data itself in Testgrid seems to be stale compared to what's in Google Cloud Storage or whatnot. I'm not sure what the appropriate escalation path is for that: is that something the test-infra oncall or the build cops should be in charge of watching and kicking off?
A: So I've heard this question sometimes come up; maybe it hasn't happened lately, I'm not sure. People have noticed that Testgrid seems out of date compared to, say, Gubernator: they notice that there's more recent data in the buckets than is being reflected in the Testgrid display. At that point it seems like some component behind the scenes hasn't run or is running slowly, and the question is: what's the escalation path? Yeah.
B: I would say to contact us, sort of like pinging one of us on Slack, if it's more than, I mean... We would expect up to a 15-minute delay; Gubernator is instant. So it should be that, you know, if a new result passes, it shows up immediately in Gubernator, which just scans GCS, and then within 15 minutes it'll show up in Testgrid. If it's substantially longer than that, then contact us.
B: For a while there was a bug in the summary updater where it would stop updating the summary, and so that could, you know, just get days behind. That was a bug on our side that we found and fixed, so if you're seeing that, let us know, because that shouldn't be happening. But we are aware that the updates are slower than we would like them to be; we'd like them to be, you know, a minute.
A: Absolutely, like, I'm on board with all that. I'm asking questions from the perspective of: in the meantime, where can we go look to answer these questions, to make sure people aren't needlessly being paged? And I also have the question from the perspective of attempting to staff out a build cop rotation to non-Googlers: understanding what privileges they will or will not be allowed to have, anyway. That's all the questions I have; this was super informative.
D: So I think we're just requesting resolution on that one. The second one was to discuss examples. There are so many issues going on that I don't know if anyone has the state of the state of what's going on with examples. I would love to rip them out of tests; I don't think they belong in tests in any way, shape, or form, and we've talked about this many times.
A: What should we do with the issue for, let's just say, the Kafka example that nobody has volunteered to maintain? Should we leave the issue open for somebody who might want to come in and maintain the Kafka example? My answer would be no, but let's make sure we have the discussion and capture it some place, so that if somebody comes back later and does want to do something about it, they can follow the breadcrumbs to this collection of stale things that need to be picked back up.
D: So what I'll do with that one is follow up with SIG Apps and try to report back to this SIG next meeting, but I'm definitely going to give it a hard stop at the 1.8 cycle, because we've been talking about this now for eons, and we get PRs that people ask me to review and whatnot, and it's not even worth the bandwidth to get the email, right? Because I get enough of it, and so does everyone else.
D: The last one was: we don't publish a versioned Kubernetes end-to-end test container. This should become a priority, and I'm going to request help from someone in this SIG other than me. I'll probably put a PR together, because I already created one, but I don't know who has the rights to push containers.
D: It's not going to be in test-infra; it'll be done the way the hyperkube container is created. I would do it the exact same way and use the same versioning semantics that the rest of the system uses, so it'd be part of the quick-release target: I run make quick-release and it creates the container, but somebody has to push it.
A: This is where I believe what you are asking for is an artifact to run end-to-end tests: that is, a container that has all of the dependencies necessary, where you just run that container and it runs the e2e tests against the cluster of your choice. And I suspect the response might be: what we have is kubetest, and that is the single artifact that runs the end-to-end tests. That's sort of the discussion we're having here, yeah.
D: The problem is it's not versioned; the version semantics are not the same. So either we do pinning, so that when a person tries to get the released, tagged version of an image container that has the e2e tests, they get the exact one, right? That's pretty much the only requirement, I guess, yeah: when I get a release of Kubernetes, I need to have one that's been tagged for that release.
A: My confusion here is: until today, right, the artifact with which you test Kubernetes is a binary called e2e.test that comes in a tarball, and it's versioned just the same and built at the same time as the release that tarball is for. I feel like you are saying that that is not sufficient; it must be an...
D: Image. It must be an image that is not Google-Cloud-specific, because this will be a requirement for folks going forward. Otherwise, there's going to be a proliferation of containers when people are trying to run conformance tests. Without this there is no canonical location for anyone to reference to say, I have a valid Kubernetes, when they're executing their tests; because it's not well versioned according to the release cycle, it didn't come from upstream, and they had to create their own, right?
D: The problem that will exist is that people create their own containers and there's no canonical location anymore, right? Even because of the way we're defining tests as either inside or outside the cluster: ideally we would have an introspective test running on the cluster, the cluster running the test. You couldn't just have the tarball running on the cluster; you'd have to build a container and then you'd have to define the spec. So at some point this is an end run; it will happen, right?
D: And I'm going to force the issue, because it has to happen, and other people agree that it has to happen. The point I'm trying to make is that we as a SIG should be responsible for that artifact, right? And I will push a PR to help create an artifact, or we can version the one that's in test-infra and remove the Google Cloud dependencies that exist inside it, because we're going to need to create a consumable that other people can use to validate clusters.
A: Yeah, so I guess, like, I'm not disagreeing with any of that; for whatever reason I appear to be taking issue with the stance that there is no way of testing clusters with a well-defined set of artifacts today. Or are you saying that people won't stop until they've built a container to do that for them, and if we don't build that container first, we're always going to be chasing down: well, how did you build your container? What container did you use?
A: The fact that we, well, specifically Google, happen to be using a container is because Prow kicks off pods that have containers; so, as a matter of convenience, that image has some things baked in that are Google-CI-specific, in addition to the community e2e testing stuff. The question would become: do we want to start with that and remove the Google-CI-specific stuff, or would we want to attempt to make our own container, in anticipation of, or trying to shortcut, the people who are going to do this on their own?
B: My take is that, you know, each release of Kubernetes doesn't have its own... you know, there's not like a 1.8 version of bash, and so a versioned kubetest doesn't really seem necessary to me. I mean, I definitely think that we do need to provide a way to say: hey, I just put together a Kubernetes cluster that's 1.5, and I want a way to validate that it conforms to a 1.5 cluster. The thing is that, you know, essentially the way you do that
B
Is
you
download
right
now
you
download
the
kubernetes
test
tarball
for
that
1:5
release,
and
then
you
download
head
of
a
cube
test
or
really
any
version
of
cube
test
and
call
cube
test
with
that
on,
and
so,
if
we
want
to
package
all
this
into
a
container
which
definitely
I
yeah
I,
think
I
would
agree
with
you
Aaron
that
it
probably
shouldn't
be
the
q-kidz
container
since
I
think
that's
more
for,
like
our
CI
purposes,
we
I
think
we
just
need
to
think
about
how
we
want
to
do
that.
So.
D: I've already done it myself, right? Like, I already have a container; it's already been wrapped; it has the conformance flags as defaults in it. And I think I would just make a pull request to the mainline and put it in, in a similar fashion to where the hyperkube image is created, because that's the only one that's created analogously; the other ones, the different component containers, are created in a different way, right?
A: I guess I would like to see an issue that sort of lays out the broader scope, with the PR referencing that issue. There is one? Okay. And my ask is to make it a 1.8 item that your SIG owns.
D: I want it to be one way that the community uses, and how the internal testing does their business is their business, right? I'm not going to dictate what Google does in their test-infra and Prow, but how the community runs their tests I do think we should standardize, right? Because otherwise an infinite variety, an infinite level of complexity, will exist.
A: I mean, I don't necessarily want to make everybody sit through and read through the whole doc. I noticed that a number of folks have actually signed up for issues, mostly Google testing folks; I signed up for a couple. I have committed to a couple of things, and my proposal would be that, for pretty much everything we have on this doc, I want to see us have an issue in the test-infra repo and have these names as assignees on those issues.
A: The remaining issues can then be left around and put into a next-milestone sort of thing, right? So you can actually create a v1.8 milestone and then say: this is what we're suggesting we have committed to. And then at the community meeting after July the fourth, or, I don't know if it's going to get canceled on me or whatnot, we can actually present that as, like: hey, here's the plan.
A: The important thing for me isn't just what we have for 1.8, but that we have a list of suggestions for 1.8 and beyond, or a list of: if you would like to help out, here's where and how you can help out. I believe, Eric, correct me if I'm wrong, that there's a fix-it coming up in the next week or two; that's why I really want to make sure we actually have this documented, so some of the long-standing itchy stuff, or stuff that's in this roadmap, is captured.
B: Yeah, I think, you know, going through and creating issues for all of the items sounds good, and having a backlog also sounds good, along with curating it to sort of have an idea of what I'm thinking for 1.8 and what the things are for people to work on. Yeah, I also think it'd be nice if we got, you know, more people to sign up for things as well.