From YouTube: Kubernetes SIG Testing 2018-07-24
A: By doing this, plus, you know, you have the editorial voice for all time. So another thing I wanted to share real briefly — I'll share my screen — is that we have milestones in the test-infra repo. Here I have milestones for this release cycle, for the next release cycle, kind of our overall goals for this year, and someplace we can punt sort of overall goals for next year, as we start thinking longer-term.
A: Historically, we haven't been the greatest at effectively bucketing everything into these to try and create a roadmap. One thing I did to hopefully make it a little bit easier this time is add support for the /milestone command, which I think I tried to use to punt everything in here. Let's see if I did that — yeah, so you can use the /milestone command to actually set the milestone by name.
A: I'm navigating to the help live, on the fly, just to demonstrate this. Here it says you've got to be a member of the kubernetes-milestone-maintainers team. I don't have the document handy that describes how to get there, but basically you just have to send an email asking to be added if you're not a member of this team.
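As a sketch of how that command works: in a repo where Prow's milestone plugin is enabled, a member of the milestone maintainers team comments on an issue or PR with the milestone name (the milestone names below are just placeholders):

```
/milestone v1.12
/milestone clear
```

The first form assigns the named milestone; `/milestone clear` removes whichever milestone is currently set.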
A: So it would be helpful — I'm gonna try and go through and sort of scrub all of our issues going forward, to see if I can kick stuff into 1.12 and 1.13. But I'm also gonna say: if we're not finding this useful as a group, if we're not really using this ourselves, then maybe we should find some other way of trying to call the shots before we take the shots — or accept that that's just not really how we operate. But one of the reasons to do this might be:
A: It makes communicating status, and what's going on, to the rest of the community a lot easier than having to point them at arbitrary Google Docs that I keep forgetting about, where I have to go search my Google Drive to figure out which one I've looked at most recently. So there's that. I lost my place on the agenda — I apologize. I think the next thing was about KubeCon Shanghai.
A: I know I submitted a talk, but I don't know if that means I'm going or not. But there's sort of a call out there for SIGs to do intro and deep-dive sessions, the same way we have done at other KubeCons. It's the sort of thing that I prefer to use for higher-bandwidth discussions amongst those of us who are really busy with stuff, and I'm just wondering if anybody in the group would be interested in doing that, or if anybody is interested in volunteering to do that.
A: Yeah — personally, I think something that is in flux right now, but could maybe be described a little bit later, is all of the layers and tools and stuff involved in running it: what actually happens when you run a job. I think we've done a pretty good job of communicating, like, Prow and the infrastructure around kicking things off, but what are those things? What are the layers? What is this bootstrap? What are these pod utilities? What are these scenarios? How much of that is getting mixed up right now, etc., etc.?
B: When you say hermetic, I mean, it sounds like you mean portable. I don't know that one is necessarily the same as the other, right? Like, a pod right now is hermetic — or an image is hermetic — but the idea that it can run anywhere, or with fewer dependencies, would make it portable. Or am I misunderstanding it? Maybe I don't understand the meaning, because you want to be able to run on someone's desktop versus just GCE or AWS.
C: So, I mean, I'm hearing from both of you — you know, maybe a focus on making it easy to take someone's... oh yeah, I guess I don't know, I sort of view those as two slightly different things, maybe — and maybe that's just my mistake. The difference between trying to make it easy to run the same test on a local cluster and an AWS cluster and a GCP cluster, I think, is slightly different from ensuring that the local cluster has that capability in the first place, yeah.
B: So I think what confuses me is the word hermetic. I did do a quick check on it and, not to be pedantic, it means sealed, right, or airtight — that's apparently the traditional definition — and so I only know the word from the phrase "hermetically sealed". And this is where I think I get confused: when something's sealed, I think of people not being able to reach in; I don't think of it meaning that the thing that's sealed can't necessarily reach out and take advantage of resources.
B: So it's not like you don't want to have external resources, and so the side effect would be portability — and again, maybe I just still don't understand. But when you bring up the other platforms, as in we don't want it to depend on these things, I kind of see what you're saying: you want any cluster to be able to run it; you don't necessarily want it to be run in any cluster, I guess.
E: You don't want to have the requirement of provisioning — the problem is provisioning. You don't want to have to dedicate and set up a whole cluster across N resources. You want to be able to use a localized environment with the build artifacts from your local build, right? If I'm —
A: The world we don't want to live in is where every pull request to Kubernetes has to depend on a cluster spinning up in AWS, a cluster spinning up in GCE, a cluster spinning up in Azure, a cluster spinning up in some OpenStack cloud, a cluster spinning up in some VMware public cloud — right, like, that takes entirely too long. We just want something that passes most of the e2e tests that we care about. That's why —
A: Furthermore, when we're talking about hermetic, we're talking about not wanting to be subject to the relative humidity and barometric pressure of different cloud environments, which can cause test flakes. So we feel like if we can get faster, less flaky signal on pull request jobs going into core Kubernetes, that speeds up development velocity there. It then begs the question of how we accomplish sufficient coverage of cloud-provider-specific functionality, in a way that can provide meaningful signal to different cloud provider implementations that can —
C: So Maria is, you know, leaving a comment about putting it into stages, where the first stage, you know, runs locally and the second stage, you know, runs against all of them — and yeah, I think that's exactly what we're sort of talking about, with the caveat that, you know, maybe the idea is to consider something good enough to merge.
C: It passes all of our local tests, and — I think part of this, and the thing that doesn't exist right now — like, I'm not exactly sure if minikube is a conformant cluster or not, but I do know that, you know, it would be hard to run minikube... I don't know if we could run — we certainly can't run minikube inside of a Kubernetes pod, and I don't know whether we could run minikube on a GCE VM.
E: You wouldn't be using the artifacts that you created — that's the problem. The problem with core Kubernetes and doing PR-blocking jobs is that you need to test the change, the modification that you made, in the localized stood-up environment — unless that environment is, like, a customized DIND. And even with the DIND solutions that exist now, they're not customized, right? You want the build artifacts for the change set that you're making, right? So I —
A: Like, perhaps I'm speaking out of turn here, but I was hoping we could talk about this in terms of a framing of our North Star — like, this is the direction to head, but we don't yet have that provider-less, really fast, conformant cluster implementation that we can use in lieu of all of these cloud providers. Further, like, we're talking about conformance, and the full set of e2e tests —
A: — that is run for every pull request is actually larger than conformance, and that's something that I would like to close the gap on. But, like, we have somebody who's not here right now who is trying to work on that kind of implementation, at which point — it sounds like, Tim St. Clair, maybe we want to talk to you about resourcing, how we could move forward with sort of the specification thing you have in testing-commons, and —
C: Yes, oh yeah, definitely, yeah. Maybe it sounds like we should talk to Cluster Lifecycle. But, I mean, I think potentially — I think maybe potentially some parts of, you know, conformance would be like... there are useful things, you know, that are specific to, you know, GCP — like, you know, the load balancer or something — that we could potentially fake, like a service on a local cluster. That's not going to be the same as how it actually runs on GCP, but I think that's —
B: It's difficult to sell that, though. I really do believe — I mean, the whole purpose of the effort I'm leading is to provide the vSphere environment for that testing, and I could see people saying, well, we have no confidence once something's merged. Even though there are gates, like you said, before a release, resources could have been reallocated and staffing could have changed, because it looks like this, you know, is no longer an issue. So I'd be careful about how that gets —
B: — divided up. I will agree, though, that there does need to be a refactoring of testing overall. I've seen a lot of tests in the existing vSphere end-to-end tests that I don't think are necessarily end-to-end — they could be rewritten in such a way that they could work in a partitioned environment and not expect a dedicated, you know, cluster. And that might be true for some other platforms as well.
A: I want to get to the out-of-tree thing, but I don't think it can necessarily be solved within a ten-minute discussion, which is why I want to talk specifically about the cloud provider angle of it. For one thing, okay, cloud providers have a SIG dedicated to them; and secondarily, like, I think that would be a good place to — we already have in-tree implementations of cloud providers as well as out-of-tree implementations of cloud providers. That's going to be good, like, we should tackle that problem.
B: That's literally what that document I produced is about. The first part is about that, and the second part is about the implementation that we designed to solve that, at least at VMware. But yeah, it's all about how to handle the cloud provider, both in-tree and out-of-tree, and what that process looks like over time.
A: The traditional approach we take with post-submit blocking stuff — well, we abandoned the world where post-submits can block further submissions of code, because it really, really hurt everybody. It caused a lot of pain: somebody's broken test would inflict pain upon the entire community, and we couldn't seem to move with enough expediency to solve the problem.
A: So we instead say that it blocks cutting releases. You could live in a world where we're merging PRs to make sure that sort of the core, or nucleus, of Kubernetes — without a cloud provider — is relatively functional, but then you can say that a release of Kubernetes can't go out the door until we've ensured that it works with such-and-such cloud providers, such-and-such cluster providers, yada yada yada.
A: But we need to see some sort of demonstrated level of competency and responsiveness when it comes to tests that generate reliable signal, and people actually fixing the tests when they break, or paying attention to test failures — which historically hasn't been super great. Right now the release process kind of relies on a single person, holding the CI Signal role, going around and nagging people as a human, and that's really not sustainable —
A: — if we start talking about expanding this matrix of different cloud providers and cluster providers and what-have-you as blocking a release. So we've got to have, like, a reliable channel to communicate with these people; we've got to hold them to some kind of SLO, or whatever you want to call it, on responsiveness, to move forward with that. And that's just sort of how it would bake in to the existing release process. I'm not sure about brainstorming the out-of-tree stuff, but no.
B: I mean — I think one of the things I call out in that doc is the fact that there are e2e tests in-tree for vSphere, but they're not configured to be run in any type of capacity. Second of all, I'd call out OpenStack for making us all look bad, because they're ahead of the curve on all of this. But, I mean, one thing would be to configure those existing tests so that they are run and they do provide some type of signal — and, in some sense, in a conformance capacity perhaps. There —
B: — there are tests that are blocking, but they're all organized under presubmit — like pull-kubernetes presubmit, the Bazel one — but they're not aligned with any specific provider, except they are, by virtue of where they exist; they just don't show up in TestGrid as aligned with that provider. You wouldn't even know they're there unless they fail, most likely. — I'm not sure I follow that. — So, like, there are unit tests in the vSphere cloud provider, in-tree.
B: Those show up under — I think it's, like, kubernetes-presubmit-blocking, under that Bazel test job, which is the hundreds of, like, all the unit tests in Kubernetes, in-tree, right? Those show up there, but they don't show up aligned to a particular cloud provider, even though they are aligned to that cloud provider, because they're not configured to show up in TestGrid that way. So I'm saying that there is some type of signal coming from a cloud provider currently that's presubmit-blocking — it's just not visualized in TestGrid in that capacity.
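For context, which TestGrid dashboards a job shows up on is itself just configuration: a test group points at the job's results in GCS, and dashboard tabs decide where it is visualized. A rough sketch of what aligning a job to a provider-specific dashboard might look like — the job, group, and dashboard names here are made up, and the exact schema has evolved over time:

```yaml
# Hypothetical TestGrid config fragment. The job runs regardless;
# this only controls where its results are displayed.
test_groups:
- name: pull-kubernetes-unit-vsphere        # made-up test group name
  gcs_prefix: kubernetes-jenkins/pr-logs/directory/pull-kubernetes-unit-vsphere

dashboards:
- name: provider-vsphere-presubmits         # made-up dashboard name
  dashboard_tabs:
  - name: unit
    test_group_name: pull-kubernetes-unit-vsphere
```

The point being made in the meeting is that the signal can exist (the job runs and blocks) while this second half, the dashboard alignment, is simply missing.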
A: I think this tails into the question Andrew brought up, on, like, what patterns do we want to follow for in-tree and out-of-tree provider implementations and testing going forward — because this immediately kind of kicks at our North Star, which is to kick all providers out of the critical path of merge-blocking code. So we still have to solve that problem somewhere.
C: Yeah, the intention was, you know, to have people add their results to the release — you know, one, whatever — for, you know, their cloud provider, and have that show up in TestGrid, and allow the release team to, you know, look at those and make sure they're passing, and be in contact with — I think that is still, you know, a viable option, and we've had people experiment with that. But, you know, I think it's just — for whatever reason, it hasn't seemed to have stuck.
C: And the other thing, you know — I think specifically the thing we want to avoid is, you know, making it to where I need to, like... if we have a vSphere unit test, that seems fine — or at least more fine to me than if we need to spin up a vSphere cluster, or we need to spin up a GCP cluster. You know, 'cause, like, we spend a lot of time —
C: — you know, not merging PRs because maybe some aspect of GCP, for whatever reason, flakes around, you know, taking too long to create a load balancer, or we forgot to clean up some resource or something — and it's very hard to reproduce and adds a lot of flakiness. That, I think, would be best dealt with, you know, at the release level, rather than by the person who is making a totally unrelated change in their PR. Well —
B: Let me ask — I mean, ask a question about that. So I don't want to deviate from the North Star, and not to bring up unit tests again, but the ones that are there — and I wasn't involved in any of them — they use the vCenter simulator. So I think the people, the way that they categorized between unit and end-to-end, they thought unit should not have any external dependencies or depend on real hardware.
B: It sounds like, though, that you're kind of leaning towards being okay with some type of faking and simulation. So it may be the case, at least in this particular instance — could we categorize some of those as end-to-end? Or is it — I mean, do you want a standard faker that isn't dependent upon a particular platform, or, I mean —
C
I,
you
know
I
think
maybe
that's
too
detail
I
mean
we're
the
yeah
III
think
fakes
are
gonna
have
to
be.
You
know
if
there
is
some
technology.
If
there
is
some
piece
of
functionality
you
want
to
validate
that.
You
know
on
a
PR
pre-merger
that
you're
either
going
to
have
to
provide
a
fake
or
use
a
real
cluster,
I
think
yeah.
The
idea
would
be
to
have
a
I.