From YouTube: Kubernetes SIG Testing 2018-07-31
Description
A
…out to the cloud. Hi everybody, my name is Aaron Crickenberger. This is the SIG Testing weekly meeting; today is Tuesday, July 31st. Everything we're saying is being publicly recorded and will be posted to YouTube later, so please be on your best behavior. This week we're going to start off with a demo from Paul [surname unclear] to show us the Spyglass feature of Prow that he's been working on for a while (god, I hope I got that name right), and then I want to talk about mungegithub stuff, and then, sort of towards the end…
B
So yeah, I'm Paul. I've been working on something called Spyglass for a while. Spyglass is meant to be a pluggable artifact viewing framework for Prow. If you've ever clicked on your failing presubmits, you will probably be taken to Gubernator, where you see some, you know, info about your job. But in the future we are going to want something a little more flexible than that, ideally something that would allow people to create their own views of different parts of the job. So that's what Spyglass is!
B
So let's see, this is for a job... so yeah. This was a job whose artifacts are stored in GCS, but there's also support for viewing jobs in progress. The ProwJob ID is right here. This is the example getting-started sort of ProwJob that comes with the Prow getting-started guide, so it's just echo-test running /bin/date, but this will support whatever your job is.
B
It's currently running. And this is also a general source parameter, so if in the future we have sources besides GCS, we can specify a more general way of understanding where to fetch artifacts from, as you can see. These are all collapsible. It's got links. Hopefully, maybe in the future, it will be more useful than Gubernator.
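To make the pluggable-viewer idea concrete, here is a minimal sketch in Go of what an artifact-viewer registry along these lines could look like. The names here (Artifact, Lens, RegisterLens) are illustrative assumptions for this sketch, not Prow's actual Spyglass API:

```go
// Hypothetical sketch of a pluggable artifact-viewer framework like the
// one described above. All names are illustrative assumptions, not
// Prow's actual Spyglass API.
package spyglass

import "regexp"

// Artifact is a single file produced by a job, fetched from some
// source (e.g. a GCS object, or an in-progress job's output).
type Artifact interface {
	Name() string // e.g. "build-log.txt"
	ReadAll() ([]byte, error)
}

// Lens renders a view over the artifacts it matches.
type Lens interface {
	Title() string
	// Body returns HTML for the artifacts this lens was matched against.
	Body(artifacts []Artifact) (string, error)
}

// registry maps filename patterns to the lens that should render them.
var registry = map[*regexp.Regexp]Lens{}

// RegisterLens associates a lens with artifacts whose names match re.
func RegisterLens(re *regexp.Regexp, l Lens) { registry[re] = l }

// LensesFor selects the lenses to render for a set of artifact names,
// which is how pluggable views per part of the job could work.
func LensesFor(names []string) map[Lens][]string {
	matched := map[Lens][]string{}
	for re, l := range registry {
		for _, n := range names {
			if re.MatchString(n) {
				matched[l] = append(matched[l], n)
			}
		}
	}
	return matched
}
```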
D
I'll throw this under the issue, but I was noticing today that there currently is a place to view artifacts, and being relatively new to test-infra, I don't necessarily understand what that is with respect to Spyglass, since it's also a way to view build artifacts. Maybe some information on that would be useful.
A
Let me give a little bit of higher-level context. So Spyglass is our intent, or I think our intention, to eventually replace Gubernator. Gubernator is the thing that displays a bunch of results: given the test artifacts, it'll display the test results and the artifacts. It's more or less a really smart viewer on top of a bunch of GCS buckets that also happens to be aware of some GitHub events, so that it can reference PRs that link to the test results.
A
Things like that. It's great, but it's an App Engine app, and our long-term, uh, evil mission is to try and create a fully self-contained stack that runs entirely on Kubernetes. So the pieces that don't run on Kubernetes yet, we eventually want to move, and Spyglass is our attempt to start heading in that direction for the things that Gubernator does. Spyglass does not yet completely re-implement everything that Gubernator does, but that's the direction we want to go. Yeah.
A
Yeah, that's what I mean; I'm just trying to understand. To me the migration path looks something like: we get it out there, we get it up so we can start seeing it, so we don't have to keep bugging you to get links to what it looks like, and then gradually, as we get it to a place where we're much happier with it and it's more usable, we can start to share it with the rest of the community.
A
This is more or less the approach we took with the PR dashboard that we added a couple of months ago. It was the sort of thing we didn't want to add initially, because we were scared of all the potential API calls it might do, or, you know, slowdowns it might cause, but eventually we got something up and running so that we could look at it and use it, and it wasn't necessarily on the critical path. So I'd be interested in seeing what it takes to get us there with Spyglass.
A
Okay, I will share my screen, although I don't know how relevant it is. Okay, I talk about this every once in a while: I have a dream of us one day dancing around a bonfire that's fueled with the remains of mungegithub. Mungegithub is the thing that sweeps around and polls all the issues on GitHub. At this point, based on what I heard from talking to Cole the other day, we only really deploy it like once or twice a quarter now, and that's only to modify the milestone maintainer robot, which is supposed to help implement different phases of the release cycle, for things like code slush to code freeze and things like that. So I have an issue linked, this tracking issue here, that was opened... let's not talk about how long ago. And, you know, I've got a table of every single munger in mungegithub, as well as what our plan was to do with it and where we are in getting that done, and we're pretty much done with everything that we planned to migrate.
A
There are some things that we don't plan to migrate, that have these fun little footnotes, and then there's this one with a question mark. So to sum that up, there are basically three big chunks. We need to finally replace the submit queue; the only place that uses the submit queue now is kubernetes/kubernetes. We intend to replace that with Tide. There are reasons we haven't yet; I don't know them off the top of my head, but I believe there are some technical issues that Cole could rattle off the top of his head.
A
Then there's the same thing that we use to implement fejta-bot, which automatically marks issues as stale or rotten after a certain amount of inactivity if they match a certain GitHub query. It could be that we can meet the needs of the release team with that, but this is going to take some talking, because there are some words and thoughts and opinions on it.
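As a rough illustration of that mechanism, a periodic job could run something like the sketch below: search GitHub for issues matching a query and apply a lifecycle label. This uses github.com/google/go-github and is a hedged sketch, not fejta-bot's actual implementation; the query, label, and token handling are placeholders:

```go
// Hedged sketch: roughly how a bot like fejta-bot could mark inactive
// issues as stale, using a GitHub search query plus a label.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/go-github/github"
	"golang.org/x/oauth2"
)

func main() {
	ctx := context.Background()
	tc := oauth2.NewClient(ctx, oauth2.StaticTokenSource(
		&oauth2.Token{AccessToken: "GITHUB_TOKEN"}, // assumption: token supplied here
	))
	client := github.NewClient(tc)

	// Open issues with no activity since a cutoff and no stale label yet.
	query := `repo:kubernetes/kubernetes is:issue is:open updated:<2018-05-01 -label:lifecycle/stale`
	result, _, err := client.Search.Issues(ctx, query, &github.SearchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, issue := range result.Issues {
		n := issue.GetNumber()
		// Mark it stale; a real bot would also leave an explanatory comment.
		if _, _, err := client.Issues.AddLabelsToIssue(
			ctx, "kubernetes", "kubernetes", n, []string{"lifecycle/stale"}); err != nil {
			log.Printf("issue %d: %v", n, err)
			continue
		}
		fmt.Printf("marked #%d stale\n", n)
	}
}
```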
The other big thing is, I don't know how many people know this, but to go along with the submit queue, there's also a cherry-pick queue that mungegithub implements.
A
Yeah, this thing... I have no idea if any of the branch managers or patch release managers know that this thing exists. There are also a couple of mungers that drop labels on pull requests that haven't gone through whatever our cherry-pick process is these days, these five mungers right here. So if any of these are super useful, I'd like to get them ported to a Prow plugin; if they're not super useful, let's just not bother with them. And then, I think, longer term, there's been a lot of discussion around improving the cherry-pick process as a whole.
A
Based on that, we want to be able to do things like issue a cherry-pick command on a pull request that went into master, have cherry-picks for multiple release branches happen, and be able to track, based on that PR, when all of those things have landed. This, apparently, is what makes release people's lives easier.
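A minimal sketch of what that comment-driven flow could look like, assuming a /cherrypick &lt;branch&gt; command; the command name, types, and fan-out step here are illustrative assumptions, not the actual plugin:

```go
// Hedged sketch of the comment-command flow described above: parse
// "/cherrypick <branch>" commands from a merged PR's comments and fan
// out one cherry-pick per requested release branch.
package main

import (
	"fmt"
	"regexp"
)

// cherryPickRe matches commands like "/cherrypick release-1.11".
var cherryPickRe = regexp.MustCompile(`(?m)^/cherrypick\s+(\S+)\s*$`)

// targets collects every release branch requested across PR comments,
// deduplicated, in the order first requested.
func targets(comments []string) []string {
	seen := map[string]bool{}
	var branches []string
	for _, c := range comments {
		for _, m := range cherryPickRe.FindAllStringSubmatch(c, -1) {
			if !seen[m[1]] {
				seen[m[1]] = true
				branches = append(branches, m[1])
			}
		}
	}
	return branches
}

func main() {
	comments := []string{
		"/cherrypick release-1.11",
		"lgtm\n/cherrypick release-1.10",
	}
	// One cherry-pick PR per requested branch; tracking their merges
	// back on the original PR is what closes the loop for the release team.
	for _, b := range targets(comments) {
		fmt.Printf("would open cherry-pick PR against %s\n", b)
	}
}
```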
D
Sorry, I wasn't ready; let me just get my screen ready. This needs to go to my other desktop so I can share my screen; I just didn't have it ready. I was just so surprised, actually: I saw the agenda and I thought, really? I thought at this point it was on purpose, yeah. So you've heard me bring this topic up a few times. The title of this issue is "testing and reporting out-of-tree unit tests", and I suppose it really should be divided into two issues, but for the sake of this conversation let's quickly get through the first one; we can discuss the second one there. A couple of you have already responded to it, but the primary proposal that I am pushing with this issue is that it no longer makes sense for kubernetes/kubernetes to be blocked by out-of-tree providers. I mean, one of the purposes of moving things out of tree is so that releases aren't held up by the needs of in-tree providers, or when they break, or whatnot.
D
We still need to worry about compatibility, but the actual testing should not hold the release. So what I posit here is that the out-of-tree presubmit-blocking, or presubmit, tests for providers should not block kubernetes/kubernetes; they should not be a member of presubmits-kubernetes-blocking on TestGrid, where they currently are. And Aaron, I think, said that's because there's a hard time identifying those with respect to the SIG that owns them.
A
To be clear, though, they're blocking the tree of Kubernetes while they're in the tree of Kubernetes. Once those tests are removed from Kubernetes, they no longer block it, and yeah, we're in strong agreement. So the state of today is that there is no such thing as out-of-tree providers; there are no out-of-tree providers that block Kubernetes, and once there are out-of-tree providers, we don't want them to block.
D
I think the conversation on this issue has been funny to me, because there have been very strong responses, all in violent agreement with the issue's proposal. So strong that on occasion I'm thinking, wait, did I actually say what I think I did? Because it's so straightforward, it almost just seems like "I agree, okay". But yeah, so I'm basically trying to establish a precedent, a pattern, a guide, whatever, for people when they move out of tree: it will not be blocking Kubernetes.
D
It will not be a member of presubmits-kubernetes-blocking. However, it can, and likely should, still be presubmit-blocking for the out-of-tree provider repositories' pull requests, and it should still be visible on TestGrid. Now, there's been some feedback on this. Both Ben and Aaron have recommended that I drop the "blocking" from here until it becomes necessary to distinguish between the two, since it's an artifact of having both categories in Kubernetes, and for a lot of the out-of-tree providers there may not be that difference, so just drop the "blocking". So one thing that I tried to encompass here... well, let me circle back to this. I think everybody's in agreement here, right? I think the only point of contention was: do people think that the out-of-tree tests, unit or otherwise, still need to be visible on TestGrid in some capacity?
D
So that's actually the other question; it's an interesting one, and it's one that we can discuss on a PR submitted earlier today, but I'll go ahead and bring it up here because it's relevant. So VMware is an interesting one. SIG VMware doesn't own any code. SIG Cloud Provider's vSphere subproject owns the vSphere cloud provider. SIG VMware may end up owning the vSphere cluster API provider, since there's no SIG for that right now.
A
If you're saying SIG VMware doesn't own any code, then maybe it shouldn't be a SIG. Really, the purpose of a SIG is to provide the ultimate authority for things, to own code. Sorry, this is me taking my SIG Testing hat off and putting my steering committee hat on. As far as the cluster API provider goes, other cluster API provider repos have submitted proposals to the steering committee, and they've all set themselves up as subprojects of SIG Cluster Lifecycle.
A
So the example I linked in the pull request, or the issue you're showing here, was how the cluster API providers for OpenStack, GCP, and AWS all consider themselves subprojects of SIG Cluster Lifecycle. But the gentleman who decided to submit the proposal for VMware thinks it should fall under SIG VMware, which, I mean, if that's how you want to run it, that's great, you can totally be different. But yeah, I would really like to see it then fall under SIG VMware's ownership.
D
So it was not my understanding, or I was not aware, that some of the SIGs considered themselves subprojects. I'm not going to get into a discussion with you on this call about what SIG VMware should or should not own; it's not my place to say, and if you want to provide feedback to that gentleman, please, please do, or to VMware itself. I'm working with what I can and what I have at my disposal.
D
I can only work with what's at my disposal. So it seems like it is your opinion that all the code should either relate to the SIG that owns it, or maybe, if the SIG is a subproject or considers itself a subproject of another SIG, be organized there. The PR I submitted proposed a top-level vmware category; maybe others would post something similar for organizing all out-of-tree code, because there is precedent, with Google, Tectonic, and Canonical, to have top-level categories that seem to be based on organization. So...
F
There's two things there. There is some extra organization that's not related to owning things; TestGrid is not just about owning things. For example, there's a conformance dashboard for conformance stuff, and SIG Conformance isn't really a thing; it falls under other SIGs. But what Aaron was also bringing up, just as a separate point, is that SIGs exist to own code. So if a SIG owns some code, we also expect tests to show up under that SIG, okay.
D
That makes sense. I'll revisit with those people.
D
I strongly agree with that idea. Okay, so I'll make a remark about that. And then, as far as the cluster API stuff goes, whether it's SIG Cluster Lifecycle or there's a separate SIG Cluster API, I don't know what's going to end up happening there. Yeah, I can do something similar, I can...
A
Take that up over at the steering level. It's the engineering me, that likes to keep everything DRY and consistent, that's like: why are you being unique? Why don't you just do the same thing all the other cluster API providers are doing? But if SIG VMware wants to be in charge of these two subprojects instead of having SIG Cluster Lifecycle make sure that they're...
A
That's totally fine. I mean, you see we have a sig-gcp dashboard on there for most of the GCE-related or GKE-related jobs that we're keeping track of, so that's fine, I think. The other thing I just wanted to emphasize is that TestGrid configuration is not prescriptive. So the fact that a thing is in presubmits-blocking does not make the thing blocking; it's generally us humans kind of organizing everything that way to try and make sense of it.
A
The other thing I guess I would say is that maybe the violent agreement you were getting on the issue comes from the fact that it was just kind of unclear what the ask was, because there were a lot of words there. So I tried to drill in on the two questions and make sure that's really what the issue was asking, and hopefully we sufficiently answered those two questions. It's unclear to me whether or not you have more questions that need answering, or if it's just those two.
D
I filed an issue to have a discussion, because I was told that Slack was not the place to have that discussion. So whether the issue has pointed questions or is supposed to encourage discussion, I was asked to file an issue there. If that wasn't the right place to do it, then, you know, let's take it back to Slack or have it in Google Groups. But at some point it would be nice to have a discussion where maybe everything isn't clear up front.
A
So that way we can see, from an audit-logging perspective, whether or not different endpoints are getting hit from the storage tests, or the init containers tests, or the pod tests.
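One way to do that kind of check, sketched below, is to scan the apiserver's audit log (one JSON event per line) and tally which endpoints each test's user agent hit. The field names follow the Kubernetes audit Event schema; the log path and the exact grouping are assumptions for this sketch:

```go
// Hedged sketch of the audit-log check described above: read an audit
// log and count (userAgent, verb, requestURI) triples, so we can see
// whether e.g. the storage tests actually exercised an endpoint.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event holds just the fields we need from an audit Event.
type event struct {
	RequestURI string `json:"requestURI"`
	Verb       string `json:"verb"`
	UserAgent  string `json:"userAgent"`
}

func main() {
	f, err := os.Open("kube-apiserver-audit.log") // assumed location
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	hits := map[string]int{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip malformed lines
		}
		hits[fmt.Sprintf("%s %s %s", e.UserAgent, e.Verb, e.RequestURI)]++
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for k, n := range hits {
		fmt.Printf("%6d %s\n", n, k)
	}
}
```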
And since he is looking for somebody who has some experience in the e2e framework, it was unclear whether or not this was the right forum to ask for help. So if anybody knows anything about this, maybe ping him on Slack and we'll see if we can get a discussion going there.