From YouTube: Kubernetes SIG Testing 2017-10-31
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
A: Okay, hi everybody. Today is Tuesday, October 31st, happy Halloween. Welcome, everybody, to the community SIG Testing weekly meeting, which is being recorded and will be posted to YouTube publicly shortly. We have a relatively light agenda today. I wanted to start off by talking a little bit about release 1.9-related stuff, just to let you all know what's going on there. I will go ahead and share my screen.
My proposal is: there's a doc that Chase has, that I will link here, and I'm also going to try and link to the jobs that are failing, which seem like obvious candidates not to be release-master-blocking things. Because I have a couple of days' worth of commits here, I can sort of tell which jobs have kind of moved in and out and seen some action and which jobs did not, and there are some pretty obvious jobs that I think don't belong in this, like the correctness job.
The other one is the job I'd now call Conway's Game of Life: this is the GCI GCE serial job. It has been almost impossible to keep green; it literally has been impossible to get a green run on this job because it's so flaky. By the way, I know Eric pointed this out in the side channel, but for those watching who hadn't noticed, the testgrid board is now automatically sorted by failures, so it's really obvious which specific test cases we might look at first.
That criteria is going to look something like: it has to have a SIG owner. There are a couple of jobs today that don't have SIG owners, so, to clarify, I'm going to file an issue to figure out which jobs are missing owners and see if we can at least scale those back, and then other loosey-goosey stuff.
When the release is finally ready to go out the door, any time we accept a new commit the clock sort of resets on that, right? If the job takes like eight hours to run, it's probably not realistic to expect us to wait 24 hours to see three green runs. And we were bouncing around the idea of making sure that each SIG has some sort of on-call rotation, or some level of responsiveness or SLA, for responding to jobs.
What I, as a human being, am doing for these issues: for the test cases that are really obviously failing, I'm creating an issue, prefixing it with "e2e failure", using the exact text of the test case that is failing, and then linking to the triage record for that particular test case. In this case I made a mistake and looked at one cluster, but if I do all clusters, right, we can see that when I pinged people they actually did solve the major test failure. There's still one sort of noisy failure here, and if I look down the board a little bit, it's this problem, and it's happening just in this job specifically, which is not a release-master-blocking job. So it's this sort of linking together of the tools that I really feel should be able to automate me out of this job entirely, but we're currently at the point where I'm a human paying attention and nagging people, and that seems to be what's driving this stuff.
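A rough sketch of the issue convention being described, assuming a hypothetical helper; only the "e2e failure" prefix, the exact test-case text, and the link back to triage come from the discussion above, the rest is illustrative:

```go
package main

import "fmt"

// e2eFailureIssue formats an issue title and body following the convention
// described above: the title is prefixed with "e2e failure" and carries the
// exact test-case name, and the body links to the triage record. The triage
// URL is passed in as-is; this sketch does not try to construct it.
func e2eFailureIssue(testName, triageURL string) (title, body string) {
	title = fmt.Sprintf("e2e failure: %s", testName)
	body = fmt.Sprintf("This test case is failing.\n\nTriage record: %s\n", triageURL)
	return title, body
}

func main() {
	// Both arguments are placeholders, not values from the meeting.
	title, body := e2eFailureIssue(
		"[sig-network] DNS should provide DNS for services",
		"https://example.test/triage-record",
	)
	fmt.Println(title)
	fmt.Println(body)
}
```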
But the other quick, related thing is: I wanted to clarify that these seem to be the label queries that we expect to use for Tide, or something like it. So all PRs suitable for merge normally are going to look something like this. Maybe I've missed something, but this is what I'm seeing used for test-infra right now; it needs to be adjusted for kubernetes/community or kubernetes/kubernetes, I'm not sure.
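To make the label-query idea concrete, here is a hedged sketch (not Tide's actual query format) that turns required and forbidden labels into a GitHub search expression; the specific labels shown are the usual Kubernetes ones and are assumptions here, not quoted from the meeting:

```go
package main

import (
	"fmt"
	"strings"
)

// searchQuery builds a GitHub search expression for open PRs in a repo that
// carry every required label and none of the forbidden ones. It only
// illustrates the kind of label query discussed above; it is not Tide's real
// configuration or query format.
func searchQuery(repo string, required, forbidden []string) string {
	parts := []string{"is:pr", "is:open", "repo:" + repo}
	for _, l := range required {
		parts = append(parts, fmt.Sprintf(`label:"%s"`, l))
	}
	for _, l := range forbidden {
		parts = append(parts, fmt.Sprintf(`-label:"%s"`, l))
	}
	return strings.Join(parts, " ")
}

func main() {
	// Example labels only; pinning down the exact set per repo is the open
	// question from the discussion above.
	fmt.Println(searchQuery(
		"kubernetes/test-infra",
		[]string{"lgtm", "approved"},
		[]string{"do-not-merge/hold", "needs-rebase"},
	))
}
```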
B: The design doc should be continually updated to, like, match what the code ends up actually doing, but I think we'll probably also write some documentation that explains it a little bit more, user-facing, after it's finished. Any other questions or comments? Go ahead, or I guess chime in if anybody has any comments on that; but yeah, I just wanted to give information from the breakout. The other thing I had shoved onto the agenda, that I wanted to hear some info or some feedback on, was this.
This is something that I've been struggling with as a maintainer of a prow cluster for a bit: it's very difficult to trace the execution, or like trace the derivative events, from a specific GitHub issue or a specific event ID or a specific prow job or whatever, through the cluster. So, like, being able to see: oh, you know, it got delivered here, hook saw it, these plugins registered and actually did something with it, and this prow job was created, you know, because of that.
And I was thinking: Eric had recently mentioned, like, moving as many of the plugins as possible out of hook, so they're not compiled in, and so definitely, I guess if we do that and we have like 70 pods running, at least I would think we should, on reviews, try to say: hey, if you're adding new functionality, make sure that the base logging level at least exposes the positive actions that are taken.
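A minimal sketch of what that base-level logging could look like, assuming a logrus-style structured logger; the field names, the plugin name, and the action logged are all illustrative rather than an agreed convention:

```go
package main

import "github.com/sirupsen/logrus"

// handlePullRequest is a sketch of a hook plugin handler that logs the
// positive action it takes, tagged with enough fields (event GUID, repo, PR)
// that the action can later be traced back to the GitHub event that caused
// it. The field names and the action are illustrative, not a convention
// agreed on in the meeting.
func handlePullRequest(log *logrus.Entry, eventGUID, org, repo string, pr int) {
	log = log.WithFields(logrus.Fields{
		"event-GUID": eventGUID,
		"org":        org,
		"repo":       repo,
		"pr":         pr,
		"plugin":     "example-plugin",
	})
	// Logging the positive action at the base (Info) level means grepping the
	// hook logs for an event GUID shows what each plugin actually did with it.
	log.Info("triggering presubmit jobs")
}

func main() {
	handlePullRequest(logrus.NewEntry(logrus.New()), "1234-abcd", "kubernetes", "test-infra", 42)
}
```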
Cool. I know we had had a conversation a little bit earlier about, like, just logging in general, on an issue, and so I don't think, necessarily... I think I made a comment about it, like "okay" or something. But if this seems like a good idea, I'll try to, like, run through and add the logs, and hopefully build some tool that could synthesize it without having to buy into a log driver or something more complicated.
E: I feel, I mean, I don't know, I feel like there might be some value in terms of, you know, having us talk about things before actually doing them a little bit. Not that it has to be some, you know, gigantic novel, but I feel like that's sort of a valuable way to exchange what we're thinking about and sort of align our ideas.
So if, for something similar, you know, you would be willing to write that up, I think that would be useful for the rest of the community, and then there are more examples for other people to follow when we have newcomers coming in. Like, yeah, that's sort of it: if you have some interesting idea you want to do, write up something similar.

B: Okay, yeah, will do.
E: So one other thing is, you know, maybe we could say hi. I see Kenichi is here, who we were chatting with over the last couple of days, so hi, by the way. And then, yeah, he has a pretty interesting PR out. I think what it does is: you run the e2e tests, and then it looks at the API server logs to, like, check and see which APIs, if any, were covered during the tests, and I definitely think that, you know, over the coming releases...
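Without having read the PR itself, the general idea, counting which API endpoints appear in the apiserver's request log during an e2e run, might look roughly like this sketch; the log-line shape matched here is a made-up stand-in, not the real apiserver or audit log format:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// requestLine is a made-up stand-in for whatever the real apiserver request
// or audit log looks like: an HTTP verb followed by an API path.
var requestLine = regexp.MustCompile(`\b(GET|POST|PUT|PATCH|DELETE)\s+(/apis?/\S+)`)

// Reads an apiserver log on stdin and prints the distinct verb+path pairs
// seen during the test run, as a very rough proxy for API coverage. Real
// accounting would need to normalize names and namespaces out of the paths,
// which this sketch does not attempt.
func main() {
	seen := map[string]bool{}
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		if m := requestLine.FindStringSubmatch(scanner.Text()); m != nil {
			seen[m[1]+" "+m[2]] = true
		}
	}
	fmt.Printf("distinct API calls observed: %d\n", len(seen))
	for call := range seen {
		fmt.Println(call)
	}
}
```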
A: Yeah, I agree. My impression is that this aligns closely with the conformance testing effort that's being driven through the CNCF Kubernetes conformance working group and SIG Architecture, who are now the arbiters of what is and is not a conformance test. I don't know if y'all followed, but they did a PR where there's now a list of all the conformance tests in the kubernetes tree, and so if you add or remove the conformance tag from a test and it ends up differing from that list, it won't merge; somebody from SIG Architecture is going to have to make that approval, and they enforce that through OWNERS, which they'll be able to do most effectively once we deploy prow in a way that uses the no-parent-owners option that we enabled. But basically the idea is: since we're using end-to-end tests as the way of certifying whether or not clusters are conformant, and we have no idea how much API coverage those conformance tests give, let's start there and see, like, how much of a Kubernetes cluster you would be exercising to say that it's conformant. The number was somewhere under 30 percent, I think, in terms of API coverage. So I would probably want to see SIG Architecture drive that percentage number first and foremost, and if we decide that that turns out to be an effective and useful metric, then we can drive it more broadly through the rest of the tests. But that's where it seems to matter most.
C: My only comment on that is I want to make sure that we're on the same page, that we don't actually want a whole lot more e2e testing; we want a lot more coverage at the e2e level. And the differentiator for me is that a lot of the API testing can be performed by integration tests that we can run in an environment that's cheaper to run most of the time, but still fulfill the requirement of testing a deployed cluster. I just don't want to go down the road of, like, that's just more e2es, and I...
B: That being said, I do know that with unit tests there are a lot of finicky ways to reuse artifacts and builds, and in the past we haven't always done that correctly. I know we had some changes that made things a lot faster just by changing some directory structures; I don't know if you guys remember that happening about a year ago. So maybe it's not as bad as I think it is, just...
E: Just, yeah, ideally I would like Bazel to help with that, and I don't think right now Bazel does coverage for Go, but that's a good question for the Bazel channel. But I definitely agree that, fundamentally, what I would really like to start doing is measuring our unit test coverage and our integration test coverage that runs on a single machine, and drive that higher. But I think also, if there's some way of getting e2e test coverage, that's also awesome, I mean.
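As a concrete starting point for measuring unit and integration coverage, a plain `go test -coverprofile=...` run already produces a profile that can be aggregated; this sketch totals statement coverage from such a profile (the profile format is standard Go tooling, everything else is illustrative):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// Reads a Go cover profile (as produced by `go test -coverprofile=...`) on
// stdin and prints total statement coverage, the kind of number we could
// track per repo. Each non-header profile line has the form
// "file:startLine.startCol,endLine.endCol numStmts hitCount".
func main() {
	var covered, total int
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		if line == "" || strings.HasPrefix(line, "mode:") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) != 3 {
			continue
		}
		stmts, err1 := strconv.Atoi(fields[1])
		hits, err2 := strconv.Atoi(fields[2])
		if err1 != nil || err2 != nil {
			continue
		}
		total += stmts
		if hits > 0 {
			covered += stmts
		}
	}
	if total > 0 {
		fmt.Printf("statement coverage: %.1f%% (%d/%d)\n",
			100*float64(covered)/float64(total), covered, total)
	}
}
```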
A: I know we had walked away from last week feeling like there were two concrete action items that we needed to accomplish before we felt ready to roll Tide out to other repos. One of the things I asked for was making sure that there's documentation and that we're giving people a heads-up about what Tide is, how you can use it and interact with it and stuff, before we start turning it on for the other repos. So I was totally cool with the idea of, like, using Tide for Federation, because we've got an interested maintainer there who's super motivated, but on repos that have a much larger audience, like community or kubernetes, it seems like we still have a couple of pieces of administrivia that we want to get out of the way before we turn it on there.
I know one of them was for me to make sure that I could construct the right set of labels that match what our release managers want, as far as: can we accurately describe, just with labels, which PRs are ready to merge, because that's how Tide works. So, cool there. How else are we doing on that?
D: I think our current plan, as far as the required jobs problem specifically, that being that Tide wants to look at the overall status for a PR and see that it's green: if we have non-required jobs on the PR, those are going to cause problems if we're not blocking on them. So I think our plan right now is just to stop reporting jobs that aren't required, so that those job statuses are only available in Gubernator. I don't know that we've actually taken steps to do that on any repos yet, but that should work once we're ready to do that.
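A tiny sketch of the mergeability check being described, using a simplified status model rather than Tide's actual types: if optional jobs keep reporting their contexts to GitHub, a naive all-contexts check would block on them, which is the motivation for reporting only required jobs and leaving the rest to Gubernator:

```go
package main

import "fmt"

// Status is a simplified stand-in for a GitHub commit status context; it is
// not Tide's actual data model.
type Status struct {
	Context  string
	State    string // "success", "pending", "failure"
	Required bool
}

// mergeable reports whether every required context is green. If optional jobs
// keep reporting statuses to GitHub, a naive check over all contexts would
// block on them too, which is why the plan above is to stop reporting
// non-required jobs and surface them only in Gubernator.
func mergeable(statuses []Status) bool {
	for _, s := range statuses {
		if s.Required && s.State != "success" {
			return false
		}
	}
	return true
}

func main() {
	// Job names are illustrative.
	statuses := []Status{
		{Context: "pull-kubernetes-unit", State: "success", Required: true},
		{Context: "pull-kubernetes-e2e-gce", State: "success", Required: true},
		{Context: "pull-kubernetes-optional-flaky", State: "failure", Required: false},
	}
	fmt.Println("mergeable:", mergeable(statuses)) // true: the optional failure is ignored
}
```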
The other thing was the Tide front end itself, and I don't know that any more work has been done on that. I think the last thing that happened with that was that Joe set up Tide to serve its status, so that deck can, you know, fetch that status and then serve it somehow, but we don't actually have deck creating a front end yet. Okay, cool.