From YouTube: Kubernetes SIG Testing 2018-08-28
C
So I was going to demo a lot more, and then I realized that I'm on a laptop and some of these things are slow, like building Kubernetes, so I pre-built an image over here, and I'm just going to show what it would look like. You can install it from test-infra.
The goal of this is testing Kubernetes, somewhat integration-focused initially. Obviously this is not a perfect real cluster, but I'm targeting conformance and smoke tests. Initially it will probably also be fairly useful for really basic workloads.
C
I should add briefly: there are other projects that are similar. There's minikube, but it doesn't really target building Kubernetes or testing it anymore, rather integrating against it using a VM, and also it doesn't do multiple nodes, which is something we're very interested in. Currently there is one other local one that does do multiple nodes. Actually, that one requires systemd, which is a bit of a blocker.
C
The other one is kubeadm-dind-cluster, which is in bash and does some interesting things with only using the binaries, which gives it a much faster iteration time but a much lower fidelity, I guess, emulation of the Kubernetes cluster, because, for example, it doesn't build any of the control plane images; it just tries to hack them to use the new binaries. So with this one, you go from a base image to a node image. The static base image will rarely change; it just includes components that you need for running Kubernetes, various packages that it expects, and also packages that are used as part of the build, and you can build a node image on top of it. So when you build a node image, we have an option to install from apt, which is actually fast enough that I could have done a build; that will install the upstream packages instead of building them. We also have the bazel build.
C
Just here, here we go: it's going to take the image that was built, which has the Kubernetes binaries and images in it. Right now what it's doing is pre-loading all of the images which are baked into the image, which is a lot slower because I'm on battery power and in Zoom; it normally takes about 40 seconds or so to do this. So it does that, and you can see these are all the images that are pre-loaded here.
C
Then it will run kubeadm, doing a few things to make that work properly, and once that's done, and a bit further along here, it's going to pull the other images that are not built. Currently that's etcd and CoreDNS, because those are not part of the Kubernetes build, so we use stable images for those.
C
So it's picked those up; it's booting the rest of the cluster. When it's done, it will place a kubeconfig close to where your normal kubeconfig would be, so ~/.kube, and then it will generate a file name depending on the cluster name. You can have multiple clusters running with this, and when it's done, it will spit out something for you to run to gain access to the cluster. Just on a laptop it's under a minute to boot the cluster; even on battery it's only a bit longer.
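The per-cluster kubeconfig behavior described above can be sketched roughly as follows; the exact file-naming scheme here is an assumption for illustration, not a documented contract of the tool:

```python
from pathlib import Path

def kubeconfig_path(cluster_name: str) -> Path:
    # Hypothetical naming scheme: one kubeconfig file per named cluster,
    # kept next to the usual ~/.kube/config so multiple clusters can run
    # side by side without clobbering each other's credentials.
    return Path.home() / ".kube" / f"kind-config-{cluster_name}"

print(kubeconfig_path("kind-1").name)
print(kubeconfig_path("kind-2").name)
```

Pointing `KUBECONFIG` at the printed path is the kind of "something for you to run" the tool spits out at the end of boot.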
C
A second cluster, with a different name. So we're hoping that this will work pretty well for testing. The main thing is that it actually builds something that looks like a Kubernetes release: it builds all of the images; in the bazel case it actually builds the debs and installs those, and in the make case it builds the binaries and puts those in. Then it puts in the systemd unit files as well, and then it configures and runs a cluster, and so far the fidelity has been pretty good. It generally passes conformance tests, occasionally flaking on some of the serial ones.
C
Not doing multiple nodes yet either, though we know that that's feasible. This is sort of an extension of another directory in test-infra, dind, that had some local cluster stuff in it. Basically, this takes a lot of the same ideas, moves almost all the code into Go out of bash, does some other cleanup, and gives you configuration.
C
And I've been testing against head literally daily. It also has a pretty quick build, because it can grab the packages that are in the standard apt repository and install those, as one of the options, so we can test the actually released bits. That one boots slower, like I said, because it's not pre-pulling the images yet.
C
You should know, like: are there workloads that are not going to work? It's not going to work for things that you would actually expect a cloud controller to be there for. Unless we add stuff like a local provisioner, you can't test things like PV provisioning, right, because there's no PV source for it; same with ingress. There's a whole class of things like that that just don't really make sense for any sort of local cluster, no matter how you build it.
C
If you were to schedule a privileged container on this, you could probably do some harmful stuff, but that would be true of a real cluster as well. Right, it's just a workstation; I haven't seen anything too bad so far. For example, it is truly running Docker in Docker.
C
So the trade-off there is that it's running a privileged container, but it is not touching your host's Docker other than to create the containers at the top level; when we reach inside the node, it's running its own Docker, and it's about as isolated as you could get. A VM would certainly be better for isolation, but VMs are not really an option for us right now.
C
Right now you would have to go edit the source code, but there are phases to the boot, and I can insert something there to allow that to be configurable. I actually have something kind of nice: the nodes have an entrypoint that waits for a SIGUSR1 signal, so they don't move on to systemd until the tool wants them to, and I take advantage of that to fix some stuff related to running systemd in a container.
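The entrypoint handshake described above can be sketched in a few lines; this is a minimal illustration of the signal-wait pattern, not the tool's actual entrypoint, and here the process signals itself where the real cluster tool would signal the node container from outside:

```python
import os
import signal

# Block SIGUSR1 so it stays pending until we explicitly wait for it,
# mirroring an entrypoint that refuses to hand off to systemd until
# the outer tooling says setup is complete.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})

# Pretend the cluster tool has finished its fix-ups and signals the node.
os.kill(os.getpid(), signal.SIGUSR1)

# sigwait blocks until the signal arrives; at this point a real
# entrypoint would exec systemd (or run the next boot phase).
received = signal.sigwait({signal.SIGUSR1})
print(received == signal.SIGUSR1)
```

Blocking the signal first is what makes the handshake race-free: a signal sent before the wait starts is held pending rather than lost.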
C
I'm also planning to add some kind of flag where you could bring up the cluster but pause it, do whatever you want with it, and then signal it to continue. Most of the hacks turned out to not be too bad. There is a little bit of weirdness actually with systemd in the container: it works pretty well if it's not privileged; if it's a privileged container, some of the maintainers are like "why are you doing that?", and I'm like "I know", but the issues are not too hard to get around, and so far it works pretty fine. I've tested this on a number of different systems and I've reached out to people that have expressed interest, and so far it seems to mostly work.
C
It's in the kind directory. I have a to-do list, a file in the docs directory, that's sorted: here are the things that we should be doing, what doesn't work, and then there's also kind of a longer-term wish list of things that are maybe less feasible, if someone wants to take a look. As far as development, I just added a label, and I'm going to be tagging issues with area/kind, which is hilarious.
F
I know there's been talk about moving other things out of the test-infra repo, separating config from code. In terms of new tools being developed, is the test-infra repo still the appropriate place for everything? I guess it's sort of an incubation area; I mean, this seems powerful enough that it might deserve its own repo.
C
I wanted to, and today is the first time that I've wanted to actually show it to anyone and say "hey, look at this, this is the thing". I think requesting a repo is probably in the future; I want to make sure that it all makes sense and that it's actually differentiated enough. We have a number of projects related to clusters; in particular, there is one other project that is in a very similar space, sponsored by sig cluster-lifecycle, nominally for local use.
F
Yeah, please do sync up with the sig, because I know sig cluster-lifecycle, with what they're doing, it just seems like there are so many projects around exactly what you're doing right now, and anything that comes out of this effort is going to have sort of weight attached to it. So I think it would be good.
C
And they've been aware of the existing project. This is pretty much just a spiritual rewrite of the dind directory in test-infra; it follows all of the same expectations and the proposal. The difference is, because I did it as a ground-up rewrite, while still referencing it, I was able to find things like: oh, they're doing this weird thing related to systemd and no one has commented why they're doing it; and when I wrote this one I put in comments like "okay, we need to do this thing, because of this". Also, a number of the components use bash for building in the dind directory, and it's not super maintainable.
The other goal is that this is actually a library, a set of libraries for build and for core cluster creation and management, and the command line just wraps those libraries. So at some point we might even try having kubetest consume those libraries instead of shelling out to yet another tool, certainly if we request a Kubernetes repo.
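The libraries-first layering described above can be sketched as follows; all names here are illustrative, not the project's actual API:

```python
def create_cluster(name: str) -> str:
    # Library entry point; a real implementation would drive Docker to
    # create and boot the node containers.
    return f"created cluster {name}"

def main(argv):
    # The command line is only a thin wrapper: parse arguments, then
    # delegate to the same library any other tool could import directly.
    name = argv[0] if argv else "kind-1"
    return create_cluster(name)

print(main(["demo"]))  # prints: created cluster demo
```

The point of the design is that a test runner can call `create_cluster` in-process rather than shelling out to the CLI.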
C
I'm also planning to, I think particularly for the kubeadm dind cluster one, write a couple of PRs with some things that I've found that I'm doing that I don't think they're doing, where that would make sense. Obviously things like doing a completely different build style are not things we're going to share between those projects, but there are a few things related to, say, the actual docker-in-docker aspect, I think.
D
We can work together on that. It would be great if we had a spec. Sorry, I joined late; I saw the agenda and wanted to jump on it. It would be great if we had a spec to rally the troops behind, like the canonical reasons why we chose A versus B, because right now there are several incarnations: Jetstack has one, and the dind cluster was one that we sponsored, I believe we sponsored it reluctantly, because what we really wanted was a Go rewrite of what Mirantis had done.
C
He hasn't had time to write much code on this one yet, but he's been very helpful with review, and they are intending to switch as well. In the past they'd used minikube, but they have kind of similar problems: with Prow there are no VMs, and things are painful. So they'd like to use this; they've been using a fork of our previous dind one, and they intend to transition to this tool.
D
So I think if you write a spec, you could probably bring a ton of people to the party here, because people have wanted this for a long time. Ideally, my grand unified field theory on this is that it would replace a bunch of test automation as well as the local-up-cluster goo, right.
C
I've been talking a little bit about this, not quite as much, because I also haven't wanted to take up everyone's time too much while I wasn't sure; literally a lot of it was just exploring feasibility, like how well does this actually work. I've just now gotten to the point where I can actually do pretty decent builds and cluster boots, and it seems like something that is going to work pretty well, so I'd like to talk to more people about it, starting this week.
D
There is a separate spec that we wrote for the sub-project, the testing-commons one, where it overlaps a lot of the testing requirements that different folks have had and the client library requirements for initialization, and I think building a spec that adheres, or building something that adheres, to some of the principles that we laid out there would be super useful for anybody, because that's the whole purpose.
C
Well, I think we should be able to make that work; right now I'm really working on the configuration aspect. The idea is that there's a cluster package and there's a build package, and for anyone to consume this tooling, they can either consume those or they can consume the command.
C
Yeah, definitely; a lot of it was just testing things out and seeing how things work. I think it's at the point where it's going to work. It gives us something in Go, which hopefully integrates pretty well. I'm trying to bring more people in now, but I haven't yet reached out to most of the people I could, including some of the other local cluster projects.
C
I've been talking to a few of the folks on this call about that; I'm sorry, I just know some of them by their GitHub names, but Paul here can probably speak more to that.
E
Yeah, we're from Cisco, so we're doing quite a bit of work with IPv6, and we're starting to get ready to do dual stack. We've been using kubeadm dind cluster quite a bit for bringing up v6 clusters; we sort of took what Mirantis had and modified it so that it would work with IPv6, and now we're starting to look at how we can modify it to work with dual stack. So I'm very interested in kind as well, just to sort of see what it can do.
C
I didn't come out and say it before, but I think that tool is really cool; they've built some really interesting functionality into it, like this kind of cluster snapshot mechanism that gives you sort of a dump of the cluster state. But every extension of functionality to it involves adding more global environment variables, and I'd really like to eliminate that sort of thing from our testing. This one is in Go; you specify config with flags and YAML or JSON.
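The flags-plus-file configuration style mentioned above, as opposed to global environment variables, can be sketched like this; the keys and precedence order are illustrative assumptions (JSON is used here to stay in the standard library):

```python
import json

# Defaults are overridden by a config file, and the file is in turn
# overridden by explicit command-line flags.
DEFAULTS = {"name": "kind-1", "image": "kind-node:latest", "nodes": 1}

def load_config(file_text: str, flag_overrides: dict) -> dict:
    config = dict(DEFAULTS)
    config.update(json.loads(file_text))  # file beats defaults
    config.update(flag_overrides)         # flags beat the file
    return config

cfg = load_config('{"nodes": 3}', {"name": "ipv6-test"})
print(cfg)
```

Because all inputs flow through one merge function, there is a single place to see where a value came from, which is exactly what scattered environment variables make hard.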
D
Now that we have the ability to get all of the test images isolated, one of the major things that's required is to plumb through an override for the registry. This is for air-gapped installations or air-gapped testing scenarios, where people can only pull from private registries. We need to plumb through a couple of variables and knobs for the e2e testing framework, one of which is the registry override, and the second of which is something that I think we probably should have added a long time ago.
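The registry override mentioned above amounts to rewriting every image reference to point at a private mirror; a minimal sketch, with a hypothetical registry name, might look like:

```python
def override_registry(image: str, registry: str) -> str:
    # Swap the registry component of an image reference, keeping the
    # repository path and tag. This sketch assumes the reference already
    # names a registry; bare images like "busybox" would need extra handling.
    _, _, rest = image.partition("/")
    return f"{registry}/{rest}"

print(override_registry("k8s.gcr.io/pause:3.1", "registry.example.internal"))
# prints: registry.example.internal/pause:3.1
```

An air-gapped test run would apply this to every image the framework pulls, so nothing ever reaches out to a public registry.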
D
We discussed it a long time ago, but the second is auto-labeling: every workload that the testing spins up should probably be tagged with its own label so we can bulk delete things, because right now it doesn't do that, and cleanup is only fine for the stuff that it spins up in test-infra.
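The auto-labeling idea above can be sketched briefly; the label key and run identifier here are made up for illustration:

```python
LABEL_KEY = "e2e-test/run"  # hypothetical label key

def label_manifest(manifest: dict, run_id: str) -> dict:
    # Stamp every workload the test framework creates with a run label.
    labels = manifest.setdefault("metadata", {}).setdefault("labels", {})
    labels[LABEL_KEY] = run_id
    return manifest

def bulk_delete_selector(run_id: str) -> str:
    # Everything from one run can then be removed in one shot by
    # passing this selector to a label-based delete.
    return f"{LABEL_KEY}={run_id}"

pod = label_manifest({"kind": "Pod", "metadata": {"name": "demo"}}, "run-42")
print(pod["metadata"]["labels"][LABEL_KEY])
print(bulk_delete_selector("run-42"))
```

With every test object carrying the label, cleanup no longer depends on tracking each object individually.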
D
I think as long as the list of tests is the canonical list, and it can be compared from one run to the other with expected results, that's all that really matters. Ideally, I would like to get our version of the trimmed-down tests, as a container, pushed upstream. As we mentioned before, that's a slightly different conversation and a little bit conflated; it's got a bunch of test-infra-specific stuff in it.
D
Ideally, consumers just want a test-only container that contains the artifacts needed to reproduce conformance tests in the wild. We maintain this on our side; I talked with Matt about it a long time ago. We wanted to push it upstream, just no one ever found the time to do it.
C
That would be appropriate. I'd like to think about the testing piece separately; it's not what I'm going to be working on right now, but I would like to increase collaboration, and I'm probably going to try to figure out, once this gets to the point where we actually have some tests running and it looks good, whether we should pull it out into a repo that's not test-infra. I haven't figured that out yet, but that's probably something that should happen in the near future.
F
Tim, you mentioned you would like to get a trimmed-down list of tests committed upstream. I wondered: is this the same list discussed last week that sonobuoy uses, which does not include the flaky tests that we still use in kubetest? Is that the list, or am I just totally off-base here?
D
That's kind of an overlapping topic. In our setup we just have a couple of extra filters on the tests. I would like it to be maintained upstream, because I don't want to maintain it. The only reason that list was pruned was because there were issues that existed in the beginning, and some of those issues have been fixed; it'd be nice if we just pushed all that stuff upstream as the canonical source of truth for everybody.
C
Some of the things that we're having to filter out are tests that should never have been allowed to be tagged conformance, which is what makes this filtering necessary. We probably want to have a discussion at some point around how we can prevent some of those; I think flaky is one of them, and we should never allow a test that's flaky to be called conformance.
D
The kubectl tests are all fixed; it's just that they couldn't originally be run in-cluster, because they didn't generate a kubeconfig from the in-cluster config. That stuff has been fixed since, like, 1.9, and I fixed it, so there's no reason we can't unblock that stuff. But there are broader questions, like: should we allow aggregated-level tests to be called conformance tests if we're going to eventually push things like kubectl out of tree?
F
It would be interesting to see what's being skipped that isn't already annotated in the e2e framework. I guess they're calling out specific ones, but is there anything being skipped not based on a label, like specific tests or specific patterns that aren't related to the predefined labels?
A
If you have to drop, we are over ten minutes over time, so that's cool. I have to drop, which I think means I have to stop the recording, but if you want to keep talking in here, that's totally cool. So in that vein, thanks everyone for coming. We will see y'all next week. Thanks. Absolutely, thanks.