From YouTube: Kubernetes SIG Testing 2017-10-17
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
A: This is the weekly SIG Testing meeting; it's being publicly recorded and will be posted to YouTube shortly. Big thanks to Jeff for hosting and recording the meeting last week, since I dropped off the face of the earth and forcibly removed Slack and all things electronic from my life for a week. Since I'm still primarily in catch-up mode, I kind of wanted to just discuss what we're planning on doing for 1.9 in general. Well, the other thing I missed last week was giving a SIG Testing update to the community. I did a dry run of that a couple of weeks ago in the space of 10 minutes, so I linked that meeting in the chat. I'll leave some room at the end to talk about Bazel, and maybe we can talk about that in the context of what we're doing for 1.9.
A: So basically, the way I did things for the 1.8 wrangle was: Erick put together this Google Doc of everything we had planned, sorted into the 1.8 timeframe and then 2017 goals in general, and then I tried to go through and convert all of those to GitHub issues and assign them to people as they had sort of signed up for them in the Google Doc. I think that worked okay, with the exception that I felt like we didn't go back and reconcile whether or not those issues were actually relevant anymore, whether progress had been made on them, why an issue was closed, and things like that. So I think being able to use milestones to help triage there will help: I created a 1.9 milestone, and I've created a 2018-goals milestone for things that we think are cool and relevant but just aren't going to get to this year; bright, shiny next year, maybe. So I recognize I have a lot of reconciliation to do between that and the 1.9 Google Doc, which is basically a copy-paste of everything in 1.8 with the things we think we got done crossed out and some cool new shiny things added. That's more or less what I'm going for as the source of truth, and based on my glance at that, I came up with the following things for the release.
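(As a rough sketch of the milestone-based triage described above, not tooling from the meeting: the GitHub REST API can list the open issues in a milestone for hand reconciliation. The repo and milestone number below are illustrative assumptions.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// issue holds the few fields we care about from the GitHub issues API.
type issue struct {
	Number int    `json:"number"`
	Title  string `json:"title"`
	State  string `json:"state"`
}

func main() {
	// List open issues in a given milestone so they can be reconciled
	// against the planning doc. "10" is a hypothetical milestone number.
	url := "https://api.github.com/repos/kubernetes/test-infra/issues?milestone=10&state=open"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var issues []issue
	if err := json.NewDecoder(resp.Body).Decode(&issues); err != nil {
		panic(err)
	}
	for _, is := range issues {
		fmt.Printf("#%d [%s] %s\n", is.Number, is.State, is.Title)
	}
}
```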
A: I think that, basically, we're trying to end-of-life as many things as we can: we're really trying to end-of-life Jenkins, and we're trying to end-of-life mungegithub, pointing both things in the direction of Prow. For Jenkins, I believe this means we need a call to action where we herd people to actually migrate their jobs off of Jenkins and over to Prow. This is probably going to be the most painful aspect of this, I would imagine. The scale jobs are the only ones that I am intimately familiar with, but I suspect there are other jobs that, you know... So if there are people we specifically need help from, or people we need to designate to help us out with this, I'd like to see if we can do that; you know, sort of commit, so we'll make sure that we provide folks to help out with the transition away from Jenkins, and show how cool and shiny Prow is. I can do that by sort of bragging about all the test-infra improvements we've done lately, but really, it's time; we've got to do this.
A: As far as mungegithub: Cole has a new series of planned migrations to move most of the mungers out of mungegithub and into Prow plugins. I'll put that in an issue, trying to sort of do a timeline for, like, here are the things we plan on rolling away from mungegithub over the next quarter, in as non-disruptive a way as possible. I think the one thing that Cole punted on was cherry picking, and that seems like kind of a critical thing for cutting patch releases.
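(For context on the munger-to-plugin migration: Prow plugins are small Go handlers that react to GitHub events delivered by the hook component. The sketch below only illustrates that shape; the types and registration are made up for this example, not Prow's actual plugin API.)

```go
package main

import (
	"fmt"
	"strings"
)

// genericCommentEvent is a stand-in for the GitHub comment event a
// Prow-style plugin would receive from the hook component.
type genericCommentEvent struct {
	Body   string
	Author string
	Number int
}

// handleComment is a toy munger-to-plugin port: it looks for a
// "/cherrypick" style command in a comment and reports what it would do.
func handleComment(e genericCommentEvent) error {
	for _, line := range strings.Split(e.Body, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "/cherrypick") {
			fmt.Printf("PR #%d: %s requested a cherry pick: %q\n",
				e.Number, e.Author, strings.TrimSpace(line))
		}
	}
	return nil
}

func main() {
	// A fake event, standing in for what hook would deliver after a
	// GitHub webhook fires.
	_ = handleComment(genericCommentEvent{
		Body:   "/cherrypick release-1.8",
		Author: "someone",
		Number: 12345,
	})
}
```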
C: [inaudible]

A: I was not aware of that; I was, like, wildly underwater. If you can paste a link to that in Slack or something, just to bring it to my attention, that would be great. And then the final end-of-life thing is that we'd like to hopefully end-of-life the submit queue: we're going to be so happy with how Tide works that we want to just shut off the submit queue entirely. You know, we live in the world right now where there are submit queue instances for N repos, and not all the repos have a submit queue going. We've been trying Tide on test-infra, and I'm kind of curious to hear feedback from the group on how we think that's been going, but at some point here we want to turn Tide on for everything. And, you know, I think maybe I had said in the past that one prerequisite for me personally is to show an actual live demo of Tide, with some kind of UI, at the community meeting, to show folks that this is real and this is the way things now work for those of you who were used to using the submit queue. I don't think we're yet at that point, but that's sort of, you know, a coming-soon preview I can announce at the community meeting. Okay.
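(To make the Tide idea concrete: rather than running a per-repo queue, Tide periodically queries GitHub and merges PRs that satisfy a set of criteria. The sketch below captures that spirit with made-up field names; it is not Tide's real configuration or API.)

```go
package main

import "fmt"

// pullRequest is a made-up, minimal view of a PR for this sketch.
type pullRequest struct {
	Number   int
	Labels   map[string]bool
	Contexts map[string]string // status context -> "success"/"failure"/"pending"
}

// mergeable mimics the spirit of a Tide query: required labels present
// and all required status contexts green.
func mergeable(pr pullRequest, requiredLabels, requiredContexts []string) bool {
	for _, l := range requiredLabels {
		if !pr.Labels[l] {
			return false
		}
	}
	for _, c := range requiredContexts {
		if pr.Contexts[c] != "success" {
			return false
		}
	}
	return true
}

func main() {
	pr := pullRequest{
		Number:   321,
		Labels:   map[string]bool{"lgtm": true, "approved": true},
		Contexts: map[string]string{"pull-unit-test": "success"},
	}
	fmt.Println(mergeable(pr, []string{"lgtm", "approved"}, []string{"pull-unit-test"}))
}
```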
A: The next major thing is a better support policy. Instead of supporting all the things, we want to support a very few well-scoped things. This starts with the document that Jase has been working on, where we actually have some kind of SLA, or we use some kind of label for issues that are blocking and need to be worked on. We were spitballing the idea of potentially having a status page, possibly driven by this, or possibly manually updated by whoever the on-call person is. And a lot of work has been going into metrics, and how to alert based on those metrics, or how to alert from tests, in a meaningful way, to make sure that we can respond more quickly to failures before our user and developer community notices that, say, the submit queue seems stuck and not working.
D: If there is a problem, they won't be impacted until they try to update their pin, in which case maybe they might have to deal with something, if they've discovered a problem and need it in their project. But I'm just wondering if scoping this as a huge problem we have to solve indefinitely is maybe a little bit pessimistic.
A: It's possible. I'd rather make sure that this is phrased from the perspective of: hey, look, there's an actual team of humans behind this, and instead of us being walled off in a secret area that only a few people know to look at, there's going to be more transparency into what the status of the project is, and why things are moving slowly or quickly.
D: Like I said, I'm not saying that this isn't the reality today. It's just that, to me, it's helpful to keep in mind where we're going, so we don't invest too much energy in maintaining the status quo. We definitely want to make it better; I just don't want to, like, invest in keeping it this way. Okay.
A: The thing is, I want us to move to a future where it's not just a select few people who can support the infrastructure that runs the entire project, right? Right now it's just, like, the one repo; the problem is going to require more involvement by more people than were, you know, responsible for shepherding the 40-some-odd repos, or however many. And to achieve that, it seems like there's the boring money stuff and logistics, and then we have documentation and stuff, and then there's all sorts of technical tools. There's money that's involved in spinning up all sorts of Kubernetes clusters for all sorts of scalability jobs, and tests for everything, and one-off tests, and node tests, and certifying different images and operating systems, and all of that, right? I view a lot of that as specific to the Google cloud provider, and that is something I would hope that Google continues to fund. But, for example, I don't think it should continue to fund spinning up clusters in AWS, right?
A
So
to
that
end,
since
AWS
joined
the
CN
CF
recently
we
got
them
to
kick
us.
Some
credits
to
an
AWS
account.
So
I'd
like
to
see
us
move
to
spinning
up
all
the
AWS
clusters,
which
we
do
be
accomplished
right
now
in
that
AWS
account.
So
then
AWS
can
sort
of
become
responsible
for
if
they
feel
like
the
fidelity
of
testing.
That's
happening
on
AWS
isn't
sufficient
to
prove
that
communities
is
as
awesome
on
AWS
as
it
is
on
Google.
A: Then, sort of moving in from that, there are all of the nodes that are responsible for running Prow jobs right now (or, in today's world, it would also be all of the Jenkins slaves and stuff) that are responsible for just sitting there, kicking the clusters off, running tests against those clusters, and aggregating the results, right? And then, moving further inwards, there's Prow and mungegithub itself, which is responsible for taking things in from GitHub and actually running the tests.
A: You know, [someone] mentioned something about TensorFlow, right, and we've had Istio, which may one day be a CNCF project. I think it kind of makes sense to start to figure out what it would take to migrate this back to within the walls of the CNCF, yeah. So that could just be a GKE cluster that's funded by the CNCF, yeah. It could be that we decide we want to spin it up in the AWS account.
A: ...and what it would be expected to cost to run all of that, and come up with a number that we can hand to the CNCF, yeah. It's my intent to sort of raise this as an issue through the newfangled steering committee; I'm still not entirely sure what that's going to look like, but I figure this is a good chance to test its powers.
D: Yeah, I guess what I'm hearing is that there's a goal of reducing the requirement for Google money and personnel to maintain the test infrastructure for Kubernetes, and to spread that load a little wider across participating organizations? (A: Yes.) Is that, like, an explicit goal? Because I'm kind of excited; I'm kind of extracting it from what you're telling me, and I've heard it, you know, in various other meetings. But if that's the goal, wouldn't it make sense to actually, like, figure that out?
A: I don't feel a need to turn this into numbers in the name of reducing costs. My aim is really, principally, to move the number of companies that can support this project from N equals one, which is Google, to N greater than one, right? I don't so much care about the number of engineers, nor about reducing spend.
B: Steve already has the same thing on the OpenShift side, right? Like, well, they may have leveraged some of the things on the testing front, but they have their own tooling, right? And so what I do care about is that the uniformity of the build artifacts and the release process is consistent, because if we rally around that specific target, that is a quantifiable piece of the puzzle. That means that anybody can build a release. Every build of the binaries of the main codebase is a release artifact of some kind, and to me that is of primary importance. I know that other people want the Bazel builds too, but I want them even more, because I want to be able to build all the artifacts from mainline and get rid of anago, and then, from a cluster-lifecycle perspective, everything that's there is just boom, done: one build and you're good to go.
E: So basically, it works. The main thing we're still waiting on from the Bazel Go team is the full cross-build support, and this is something that's still sort of in process; I know they are actively working on it right now. There's, like, very limited cross-build support for only native Go code, and I know they're actually working on being able to support all of the various architectures we need to build for, since we know we need to build for all the various Linux architectures: s390x, amd64, arm, and so on.
E: That's improving as well. One of the challenges is, I guess, if you have any cgo code, you have to have a proper C cross-compiler toolchain in order to do all of that, which we already have, because we have to do that today. So possibly we might be able to combine, like, our cross-build image plus Bazel, and it might be able to do that. But right now, like, a bunch of people are already using Bazel, sort of, to build their clusters and run tests, and things like that.
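(As a rough illustration of the cross-build matrix being discussed, not the actual Kubernetes or Bazel build scripts: a minimal Go driver that shells out to the standard Go toolchain once per target platform. The arm64/ppc64le entries are plausible examples standing in for "the various Linux architectures"; as noted above, cgo code would additionally need a matching C cross-compiler.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Target platforms in the spirit of the meeting's list.
	targets := []struct{ goos, goarch string }{
		{"linux", "amd64"},
		{"linux", "arm"},
		{"linux", "arm64"},
		{"linux", "ppc64le"},
		{"linux", "s390x"},
	}
	if err := os.MkdirAll("bin", 0o755); err != nil {
		panic(err)
	}
	for _, t := range targets {
		out := fmt.Sprintf("bin/app-%s-%s", t.goos, t.goarch)
		cmd := exec.Command("go", "build", "-o", out, ".")
		// Pure-Go cross-builds only need GOOS/GOARCH; cgo code would
		// also need CGO_ENABLED=1 and CC pointed at a cross-compiler.
		cmd.Env = append(os.Environ(),
			"GOOS="+t.goos, "GOARCH="+t.goarch, "CGO_ENABLED=0")
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s/%s failed: %v\n", t.goos, t.goarch, err)
		}
	}
}
```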
E: It works great for that. It's just that, sort of, the main thing that's blocking us from making it the de facto way to build Kubernetes is that we need to be able to build for all the various architectures, and that's kind of the last missing piece. But there is good progress on that, so hopefully in the next month or so. I've been basically meeting with the team weekly, just to kind of track progress on that, and they're making progress there. But it's...
B: [inaudible]

E: Agreed. Like, I think, you know, on the one hand, we and the Bazel team would love it if we could get it in for 1.9, but I'm also a little hesitant, just given that it doesn't work yet. You know, it's getting there; we'll see what happens in the next week or two. But I'm a little worried about 1.9; I think 1.10. And I know I keep saying this; like, every release it's like, oh well, not this release, the next one, and it's very frustrating, but yeah.
B: Do we have, like... do you have stats or anything that you could share? With regards to, like, you know, I know there are some people that do some, I don't know, Tinkertoy things on the ARM architecture, right? The vast majority is x86, but, like, do we even care? I hope we don't care about s390x, just sayin'. We don't actually run tests against that architecture, so... right.
E: There are some people. I guess IBM cares, and I think POWER as well; there are some to look at here. I don't know how large of a market it is, or if anyone is actually using that; I don't know. I know that there are people working on all that support, but I'm not sure who's actually using it; I do not have those details.
C: [inaudible]

E: Yeah, it would be good to try to pull in some of those people that are working with those architectures and see if we can actually get some real testing results. I know at some point there has been some kind of testing, but there hasn't been a whole lot. So maybe we should sort of say that we expect some conformance testing results or something on these architectures before we're actually supporting them and maintaining them and building them as the official releases.
E: Thankfully, just because it's all built into Go, it's pretty much all the same: once we, you know, sort of unlock being able to build one of these alternative architectures, the rest should just follow. And I think, like, arm at least has slightly more usage; I don't actually know how large the arm usage is, but at least it's a little bit more plausible than, I guess, s390x. But...
A: Is that the sort of principle that's still worth sticking to or blocking for, right? I don't think we're yet at the point where it's worth being pragmatic today, yeah, okay. The 1.10 estimate was sort of, I guess, just based on where we're at today, right? But it does sound like, you know, to your point, it's just one more release; it is actually pretty much usable for the happy path. Mm-hm.
A: What's going to help give us confidence, right, is Ben's work to actually involve Bazel shared builds in more of the actual jobs that are going to be exercised thousands and thousands of times between now and the end of the year, right? That'll give us a lot of good confidence-building data, or bugs, who knows.
E: [inaudible]

A: For sure. And the other thing, just to, like, close out cross-building: I think it's probably a viable position for us to hold that we want to enable the community to help us out. So while we may not necessarily have the resources to test on all of these architectures, we are making sure that we can build across all these architectures, to allow other people to help us vet that Kubernetes runs on their particular platform of interest, right? And then you can push it to the certified-Kubernetes land.
C: Okay, I also have one remark on, like, general 1.9 goals, which I don't think I've done a really good job of putting into the doc. But I think, from our end, one of the two things that we're probably going to be focusing on the most, moving forward, is trying to break stuff out a little bit, and so, especially with the hook plugins, trying to get stuff more broken out, so we don't have to compile plugins into, say, the test-infra repo to actually get them to run.
C: The other half of that support policy is finding relatively programmatic ways to determine the health of the entire Prow cluster and all of the microservices that run in there, which I think is, like, a hard question in general for Kubernetes applications. But that definitely is, like, a very large part of actually being able to do something reasonable in terms of support, and so we're going to try to put more of our effort into helping to provide more metrics and tests around that.
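(A common pattern for the kind of programmatic health signal described here is a /healthz endpoint on each microservice. This is a generic sketch of that pattern, not the actual Prow components' implementation; the 30-second work loop and 5-minute threshold are arbitrary example values.)

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

// lastWork records when the service last did useful work, so the
// health check can flag a wedged-but-still-running process.
var lastWork atomic.Value

func main() {
	lastWork.Store(time.Now())

	// Simulated work loop; a real component would update this after
	// each successful sync or poll.
	go func() {
		for range time.Tick(30 * time.Second) {
			lastWork.Store(time.Now())
		}
	}()

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if time.Since(lastWork.Load().(time.Time)) > 5*time.Minute {
			http.Error(w, "stuck: no work observed in 5m", http.StatusInternalServerError)
			return
		}
		fmt.Fprintln(w, "ok")
	})
	log.Fatal(http.ListenAndServe(":8081", nil))
}
```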
A
So
then,
the
shiny
thing
I
would
show
in
this
regard,
if
I
think
maybe
y'all
heard
that
side
on
them.
I
looked
at
the
monitoring
thing
for
Belgium
recently
and
I
noticed.
There
is
a
fourth
graph
now,
it's
time
since
last
merch,
which
I
believe
is
based
on
Winston's,
adding
the
metric
for
when
the
last
merge
actually
happens.
I
don't
actually
know
what
that
little
green
line.
There
is
do
this
for
the
last
seven
days,
but
you
can
sort
of
see
we
get
these
red
spikes
into.
A
Probably
what
is
the
threshold
if
that's
way
too
much
times
for
there
to
be
a
last
merge
if
you
combine
that
with
the
yellow
line,
which
is
how
many
PRS
are
queued
up
and
knit
that
sort
of
gives
us
an
idea
that,
if
we're
sitting
here
waiting
forever
for
a
last
merge
but
the
submit
to
use
totally
empty?
It's
probably
because
it's
you
know
the
middle
of
the
night,
but
if
we're
starting
to
have
problems
like
that,
sometimes
I
can't
give
you
a
good
example
here.
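(The "time since last merge" signal can be derived from a single timestamp metric. A minimal sketch using the Prometheus Go client follows; the metric name and the alerting expression in the comment are illustrative, not Velodrome's actual ones.)

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// lastMergeTimestamp holds the Unix time of the most recent merge.
// An alerting rule like `time() - merge_last_timestamp_seconds > 3600`,
// combined with a nonzero queue length, would produce the red spikes
// discussed above.
var lastMergeTimestamp = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "merge_last_timestamp_seconds",
	Help: "Unix timestamp of the last successful merge.",
})

// recordMerge is called whenever a merge lands.
func recordMerge() {
	lastMergeTimestamp.SetToCurrentTime()
}

func main() {
	prometheus.MustRegister(lastMergeTimestamp)
	recordMerge() // pretend a merge just happened

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```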
A: I actually don't know what the failure case looks like in this graph, yeah; I'm just sort of demonstrating that, hey, we're starting to add health checks to all of our infrastructure. This is the health-check stuff that's related to mungegithub health and the submit queue, but I just told you that we plan on getting rid of all of this and replacing it, as much as we can, with Tide and Prow, and so we will need health checks and adequate visibility into what's going on there.
F: There are just a few jobs that have decided that they want to start reporting to people, but they're not actually blocking the PR. So, for example, the GCE GPU job: the GPU job would, like, be blocking at some point, but it's not. So if it's flaking, then people are concerned that it's flaking and they want it to pass on their PR, but it might just be broken at the moment, and it's not actually blocking their PR. So...
C: One thing we could do, as we're moving stuff out of blocking, if it's possible, is maybe changing the behavior of the robot to not delete the comment when there are no failed jobs that are blocking, just so that you still get that Gubernator link with all of the historical stuff. That makes it easy for you to say: oh, I can actually go and look at those jobs. Because without the context at the bottom, you kind of have to finagle your way through the Gubernator interface to find your PR.
F
Yeah
yeah
I
can
see
that
being
hopeful,
I'd
say
right
now,
since
I
know
how
it
works.
I
go
to
one
of
the
statuses
and
click
their
link,
yeah,
but
yeah
I
think
things
we
could
do
except
really
to
clean
up
the
the
comments.
We
might
also
want
to
try
to
make
the
comments
avoid
mentioning
tests
that
aren't
walking
and
I
think
it
might
already
do
a
bit
of
that
I'm,
not
sure
yeah.
A: You see the jobs that have "required" next to them in your pull request, but I agree: people look at the dot next to their commit, and if the dot stays yellow because one of the non-blocking things is still running or hasn't triggered, or if they see an X because one of the non-blocking things has failed, then they think they're blocked, yeah. So it's a messaging thing.
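(For reference, the dots and X's in question come from GitHub's commit-status API, where each CI job reports a status with a context name and state, and branch protection decides which contexts are required. A hedged sketch of posting one such non-blocking status; the token, repo, job name, and URL are placeholders.)

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// status is the payload shape for GitHub's "create a commit status" API:
// POST /repos/{owner}/{repo}/statuses/{sha}
type status struct {
	State       string `json:"state"` // "pending", "success", "failure", or "error"
	Context     string `json:"context"`
	Description string `json:"description"`
	TargetURL   string `json:"target_url"`
}

func main() {
	body, _ := json.Marshal(status{
		State:       "failure",
		Context:     "pull-kubernetes-e2e-gpu", // hypothetical non-blocking job
		Description: "job failed; not required for merge",
		TargetURL:   "https://example.com/build/123", // placeholder results link
	})
	url := "https://api.github.com/repos/OWNER/REPO/statuses/SHA"
	req, _ := http.NewRequest("POST", url, bytes.NewReader(body))
	req.Header.Set("Authorization", "token "+os.Getenv("GITHUB_TOKEN"))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```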
C: We've got all that in Gubernator; I think connecting the dots from your PR to Gubernator, to the overview for your PR, is non-trivial, because you kind of have to know how the GCS layout looks and stuff, sometimes. So, like, having that link would be super cool, because the link goes away as soon as the comment is gone.