From YouTube: Kubernetes SIG Testing 2017-10-03
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
B
I'm just gonna hand it off, actually, to Chuck, but to give a brief overview: we've been involved with the upstream, or sort of the CNCF, conformance effort, and as part of our own efforts to be able to qualify people who already have a Kubernetes distribution in their environment, we needed some tooling to be able to give a green/red signal about what we were evaluating. So with that preface I'll hand it over to Chuck. Hey.
C
…a kubectl apply command that you can grab, and then you paste it into your cluster, like that, and that'll spin up a bunch of stuff that goes and runs. This is gonna take a while, so I won't make you wait the thirty minutes for these tests to finish, but we've got a run. You can see that this page automatically updates, and when your tests finish it'll forward you to the results.
C
But if you want to see what a result looks like, here is a results page: you get a list of conformance tests that have run, and everything passed except for this failure here. You can click into it, and you can drill down, and you can see, okay, this is the actual test that was run and what happened, and then it pulls out the system logs for a given failure. So it's hopefully a way to see what actually went wrong.
A
Okay, so we were gonna have a sort of a discussion on what a status page might look like, but given the volume of other things here, I was wondering if we were okay with moving on to what we need to do on this pull request from Dhilip — I don't know how to pronounce the name here — this pull request on the integration test plan for apps.
A
We've raised this a couple of times now, and it seems like we'd like to see if there's an overlap of effort here, or if we are comfortable letting this go forward and iterating on it. And so, Maru, I know you're here and have some opinions I've heard. I am leaning in the direction of: if this is work that somebody's willing to do, I don't really want to oppose it; I just want to maybe better understand if we think it's gonna get in the way of any other ongoing work.
D
As per my comment, I think that the setup should not be done that way. I think integration testing, by and large, is better suited to doing things in code, where you can configure it, where you can swap components out — you know, using interfaces, that kind of thing. So I think, to me, that's the major thing. And the secondary thing — it's not as big — is that I think the fixture should just be common. It shouldn't be, oh, this is all apps-only.
D
It
should
be
discoverable
like
in
a
common
path
and
then
we
can
iterate
it
on
there,
but
I
like
those
sort
of
preconditions
to
be
met
before
we
merge
it
and
then
we
can
evolve.
It
I
definitely
think
there's
room
for
sharing
infrastructure
like
the
work
around
doing
darker
and
darker
for
integration.
Testing
could
probably
use
a
lot
of
that
or
extend
from
it.
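A minimal Go sketch of the "do it in code, swap components out via interfaces" shape being argued for here — all names below are hypothetical, invented for illustration, not from the PR under discussion:

```go
// Hypothetical sketch: a test depends only on this interface, so an
// in-process control plane, a Docker-in-Docker cluster, or a real
// cluster can be swapped in without changing the test body.
package fixture

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Cluster is the only surface a test is allowed to see.
type Cluster interface {
	Config() *rest.Config // client config for the cluster's API server
	TearDown() error      // release whatever the fixture started
}

// ClientOrDie builds a typed clientset from any fixture implementation.
func ClientOrDie(c Cluster) kubernetes.Interface {
	return kubernetes.NewForConfigOrDie(c.Config())
}
```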
A
So I linked the PR in the chat, and Dhilip — I'm not sure if I'm pronouncing your name correctly, or if I'm even saying the right name — feel free to jump in here if I'm misstating things. The description of the PR sort of says, you know, we want to test basically the workload controllers in different layers. So let's talk about pods and nodes, making sure those work correctly, then go a level up from there to things like services and volumes, and then a level up from there.
D
I mean, what he's implemented is basically: spin up a control plane, spin up some hollow nodes, and then run the tests against that. So it's basically a test cluster, and it's not fully functional — it doesn't actually run anything. It doesn't actually run containers or use Docker, which is good for their purposes. I'm just concerned that it's just yet another way to spin up a cluster, as if we don't have enough of those already.
D
The problem is that there's been a gap in the integration fixture, and that gap just remains unfilled. There's no canonical way to start a master that's actually clean and representative of what you actually deploy. There's some, you know, run-a-master nonsense, but as I linked to in my comment, Dr. Stefan has an example of how to do an API server properly. You need to extend the work he did to do the other components of the control plane.
D
That's what I'm saying — so there's an example for API server testing of how to do it properly, and I have it on my to-do list to extract that from the API server path and actually move it into an integration framework, and then we can, like you said, use that example to target the other components that we need to run integration testing. Great.
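The approach being referenced here — starting a real kube-apiserver in-process for tests — is the work that later lived in kubernetes/kubernetes under cmd/kube-apiserver/app/testing; the exact upstream signatures have shifted between releases, so the following is only a hedged sketch of the shape, with StartTestAPIServer as a hypothetical helper and current client-go call signatures:

```go
// Hedged sketch of an in-process API server integration test.
// StartTestAPIServer is hypothetical: assume it boots etcd plus the
// real kube-apiserver code path in-process and returns a client config
// and a tear-down function.
package integration

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func TestAgainstRealAPIServer(t *testing.T) {
	cfg, tearDown := StartTestAPIServer(t) // hypothetical helper
	defer tearDown()

	client := kubernetes.NewForConfigOrDie(cfg)

	// Drive the component under test through the real API surface; a
	// debugger can be attached to this single process.
	_, err := client.CoreV1().Namespaces().Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		t.Fatalf("expected default namespace: %v", err)
	}
}
```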
A
So, to kind of reframe in the context of what tripped us up the first time this discussion came up: you know, there's this idea of improving the fidelity of our integration tests, because we — SIG Testing as a group — hate e2e tests: they're slow, they're flaky, they're unreliable, they don't provide great signal, and we'd rather have stuff that's faster. And so integration tests seem like they could be a good middle level between unit and e2e. On the other hand, we could try an approach where we bring up a lower-fidelity cluster.
A
We
Quentin.
We
try
this
in
the
past
with
a
doctor
and
doctor
based
cluster,
and
this
is
basically
an
attempt
to
do
that
with
a
real
control
plane
but
hollow
nose,
and
if
we
could
take
the
existing
e
two
E's
and
point
them
at
a
lower
fidelity
clustered
with
that
give
us.
You
know
faster
signal
and
that
way
I
view
these
as
two
parallel
things
that
eventually
one
of
those
efforts
will
probably
prove
more
fruitful
than
the
other,
but
I
would
be
willing
to
see
that
proceed.
D
…useful, I mean, yeah. My goal isn't to say don't do this at all; it's more that there's some common stuff there, like being able to spin up a control plane from code, that could be used for both a Docker-in-Docker node cluster and a hollow-node cluster. So I just want to sort of set him on the road of collaborating, rather than just doing something that's very app-specific, and I don't think it's a lot of work to take what he's done and do that.
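Building on the hypothetical Cluster interface sketched earlier, the "one control plane from code, two node strategies" idea might look like this — every name is invented for illustration, and the bodies are deliberately left as stubs:

```go
// Continued sketch: one shared in-process control-plane bring-up,
// reused by two fixtures that differ only in how they provide nodes.
package fixture

import "k8s.io/client-go/rest"

// startControlPlane is hypothetical: assume it boots etcd and the real
// control-plane components in-process, returning a client config and a
// tear-down function.
func startControlPlane() (*rest.Config, func(), error) {
	panic("sketch only")
}

// NewHollowCluster pairs the shared control plane with kubemark-style
// hollow nodes: fast signal, but no containers actually run.
func NewHollowCluster(nodes int) (Cluster, error) {
	panic("sketch only")
}

// NewDindCluster pairs the same control plane with Docker-in-Docker
// nodes, where workloads really execute.
func NewDindCluster(nodes int) (Cluster, error) {
	panic("sketch only")
}
```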
E
Thanks for bringing this up in the agenda today. I've been using Docker-in-Docker for quite a long time — I've used the Docker-in-Docker from the kubeadm version — and one of the problems I often run into is that the setup itself is not really stable. Sometimes you end up having the cluster being really flaky when you're trying to bring it up, especially when it's a new build, because it's a heavy environment and we don't have—
E
Usually we don't have enough budget to actually stand up a real cluster; we have to deal with the Docker-in-Docker stuff. But then I thought this one is really clean: you actually start up a real control plane without faking any of that. I started with starting the control plane inside the code, like starting the API server within the code, but then that might be changing — I mean, you're using REST now, and in the future you might be using a different protocol. So why not start the actual API server, so we're not tied to that?
D
Obviously you can do that — you just don't have to do it from the CLI. It's possible to start the actual component but do it from code, and, as I said, the example I linked to in my comment points to how to do it for the API server, and that's written by a guy who's one of the maintainers of the API server. This is how they're intending to do their testing.
E
There was one small little reason why I thought we might be able to start just using the binary: because, you know, maybe there is an error in your code and you want to further investigate it after the test is completed. Then it would actually leave this dummy cluster up and running, and you could actually point kubectl at it if you're interested in continued investigation, but—
D
Option
of
debugging
it
I
mean
the
danger
is
if,
if
you
go
too
far
with
I'm
gonna
run
a
cluster
and
it's
gonna
be
like
the
bug
the
ball
like
after
the
fact,
then
you
have
a
lot
of
complexity
involved,
whereas
you
can
use
a
delve
debugger
on
an
imprecise
component
like
the
API
server
and
actually
hone
in
on
what
the
problem
is
and
to
me
that's
an
advantage
of
doing
something:
integration
versus
I'm
and
and
and
I
have
this
whole.
You
know
cluster
that
I
have
to
worry
about.
Okay,.
D
We were talking about being able to actually write tests that could target real clusters or fake clusters, and I think the tests that you're imagining for apps, that can, you know, target this test cluster — they could just as easily target a real cluster, and my goal would be enabling, you know, a cheap, fast integration fixture.
D
We
would
run
that
most
of
the
time,
but
then
you'd
be
able
to
run
that
test
against
a
real
cluster,
whether
it
be
darker
and
darker,
based
or
actual
like
physical
node
based,
and
so
you
kinda
get
the
best
of
all
worlds
and
the
way
you
do
that
is
separating
the
test
fixture
from
the
test
setup
from
the
test
execution.
As
long
as
that
separation
is
maintained,
you
have
a
lot
more
options
if
that
makes
sense.
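As a sketch of that separation in Go: the test body takes only a client, and the harness decides which fixture produced it (an in-process control plane with hollow nodes, Docker-in-Docker, or a real cluster). The helper name below is hypothetical:

```go
// Sketch: test execution decoupled from test setup. The same test body
// runs against whatever cluster the chosen fixture stands up.
package apps

import (
	"context"
	"testing"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runWorkloadSmoke is fixture-agnostic: it sees only the API surface.
func runWorkloadSmoke(t *testing.T, client kubernetes.Interface) {
	_, err := client.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		t.Fatalf("listing deployments: %v", err)
	}
}

func TestWorkloads(t *testing.T) {
	// NewClusterFromFlags is hypothetical: it picks the fixture (hollow,
	// Docker-in-Docker, or real) based on a flag or environment variable.
	client, tearDown := NewClusterFromFlags(t)
	defer tearDown()
	runWorkloadSmoke(t, client)
}
```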
A
So I guess — I'm trying to just say this in the chat as well — what I'm trying to understand is: it sounds like there is a way we can collaborate here. What I want to understand is, what's the point at which we feel like we're ready to start iterating on what you two are talking about? Because this proposal has been hanging out and blocked for a while, and I would like to make sure we can actually—
D
—start moving forward on that. I don't think there's anything blocking it. Like I said, my two suggestions: I'd like to see us make the fixture more generic, not so app-specific — and that's just providing a little bit of separation between the test and the fixture involved — and focus on launching components from code. And it may be that launching the full control plane and nodes from code is a longer-term effort.
A
…the effort involved. So, yeah, I think what I'm hearing is: there is an issue that needs to be filed for this broader idea of higher-fidelity fixtures and more reusable fixtures, and that can be done as a follow-on effort to this. And you could either say we would prefer that this PR not be merged until we, like, take a good hard look at what fixtures are being done in bash that could instead be done with the integration fixtures, or we could merge it.
E
The reason we have layered the tests in such a way is that, in the future, if there is something that was breaking, we would know which layer is causing that breakage, so it would be easier for us to run a test in that layer — especially for integration, that's the whole idea. Now, I'd love to rethink whether it has to be a generic test that works for a lot of other components, because I've never thought through things like networking, I don't think.
D
My focus is on enhancing Federation testing, which includes basically doing a lot of integration fixture work — making sure that we can do exactly what he's doing, but in a general way, and reuse it for Federation. So even if it languishes for a short period — if, for some reason, he gets pulled away — I'm committed to maintaining it going forward.
D
I'll just commit to, like, iterating on that from the testing perspective. So, am I pronouncing your name right — Dhilip? Yeah, correct. So, yeah, feel free to reach out to me directly if you don't get an immediate response on the PR — and SIG Testing, as you can see, Maru and — yeah, we won't keep blocking you, or I won't, anyway. Okay.
G
I wanted to go through the motions of setting it up, because I'm thinking about running it against a bunch of other repos, like rules_k8s, rules_docker, etc., and once I had Prow in a state where I could stand it up with some of this workflow stuff, I wanted to try it out. So I spent a little while putting together a Buildifier plugin this weekend, basically to test it out.
G
…puts them into Bazel, and then we have these object things that group together some logical pieces, and then ultimately at the bottom there's an "everything" target that effectively reconstructs starter.yaml. And the reason it's broken apart like this is mostly so that I can use "everything" to run everything.apply — that stands up the cluster if it's not there — or, if I want to iterate on individual components, I can say something like hook-deployment.replace, and that will rebuild the images with any changes.
G
I
have
republished
them
to
the
registry
and
can
cut
all
replace
in
this
case
that
particular
deployment.
So
let's
see
okay,
so
we
saw
the
build
a
fire
out,
but
I
will
change
just
to
show
this
in
action
where,
if
I
say,
Goodes
gets
reply
that
basically
just
puts
a
little
high
cig
testing
in
here,
and
so,
if
I
run
the
place
like
I
said
this
is
going
to
rebuild
the
go
image
and
republish
it
to
the
registry
and
read
redeploy
that
thing
to
my
environment.
G
You
can
see
it's
already
here
up
and
running,
and
so,
if
I,
if
I
go
over
here
and
I
build
it
by
this,
is
like
the
longest
part
waiting
waiting
for
this
thing
to
run,
build
a
fire
and
you
can
see
it
has
high
suggesting
in
it.
So
basically
I
put
together
some
of
the
rules,
chaos
stuff
and
to
make
it
so
that
you
know
you
could
basically
have
one
command.
You
could
run
to
do
you
really
fast
development
or
against
a
dev
environment
on
your
cluster
to
rebuild
republish
images.
G
It
can
do
multiple
images
if
you
needed
to
do
it
and
do
it
really
really
fast
and
that's
that's
the
gist
of
it.
So
I.
You
know
using
that.
I
use
this
to
build
the
whole
build
a
fire
thing
this
weekend
and
it
made
me
really
happy
so
so
I
like
it.
If
you
want
to
try
it
out,
I'll,
probably
be
breaking
off
pieces
of
this.
Just
some
send
you
guys
some
key
ours.
The
mill
divider
plugin
needs
unit
tests
and
stuff
like
that.
G
…applies the rules_k8s stuff to it. I think the next branch has my changes to the configuration to run certain plugins and stuff, because the vanilla thing doesn't have any — well, I think it has, like, the size plugin or something enabled, but I wanted more stuff — and then the third is adding all the Buildifier stuff. So, yeah, you can check it out there. The rules_k8s stuff is under bazelbuild/rules_k8s.
G
I haven't done anything to add Dockerfile support in there, and one of the problems is that the way Dockerfiles are built does not play very nicely with Bazel's assumptions about reproducibility, because it's going to produce something slightly different every time. So I don't have a rule that will build a Dockerfile.
G
You
could
put
together
general
that
produces
a
docker
file
and
I
think
in
place
of
one
of
the
targets
that
would
normally
reference
a
docker
build
rule.
You
could
probably
just
put
a
docker
save
style.
Tarball
I
haven't
tried
that,
but
the
docker
builder
will
support
that
is
based
on
it
as
a
base
image
and
so
I
think
that
should
work.
If
not,
if
not,
you
can
put
it
through
sort
of
an
empty
dock
or
build
a
rule
and
feed
that
into
it,
and
that
should
work,
but
I
could.
A
Hey — it's still kind of a large thing, but seriously, Planter is awesome. I'm a total Bazel newb, and I highly recommend you check it out if you haven't; it's been pretty helpful for me getting my toes wet with that. Let's see — we're kind of over time as usual, but real quickly: Maru, you had a question about what process new repos should use for managing merges. If I had to guess, if you give us like a week or two, we would say the answer is Tide, but—
D
So
I
mean
brief.
Backstory
on
this
is
we
have
a
few
repos
coming
out
of
Federation
or
coming
out
of
kaykai
and
becoming
their
own
repos
and
kind
of
like
okay?
So
how
are
we
gonna
actually
make
sure
that
the
tests
run
and
you
don't
merge
until
the
tests
pass
and
is
the
answer
just
variable
submit
queue
or
you
know,
did
we
just
make
sure
the
tests
pass
and
then
we
can
hit
the
green
button?
I'm
I,
don't
really
know
if
there's
they're
just
docs
on
this
I'm
happy
to
just
be
pointed
it
at
that.
A
If you want to go the extra step of using a submit queue, you can. You know, like I said, if you can wait like a week or two, we feel that Tide is very, very close. We've basically turned Tide on for the test-infra repo, and so we're just sort of dogfooding it to make sure that it's working before we open it up to the general public.