From YouTube: Kubernetes SIG Testing 2017-11-07
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
A: Okay, hi everybody. Today is Tuesday, November 7th. This is the SIG Testing weekly; your host, Aaron Crickenberger. This meeting is being publicly recorded and will be posted to YouTube shortly. On today's agenda, as usual: I haven't had time to actually go through and sift through the last week's worth of PRs that have been merged to test-infra, though I know we've made some substantive progress, if anyone has anything they want to sort of brag about or bring up. Today I wanted to just ramble briefly about ideas or suggestions folks had for the KubeCon SIG Testing update and deep dive. Like, I'm totally happy just making it up, but I figured if there are strong opinions on what content you would like to see there, I can work on doing that now. I have until Friday to submit abstracts, and I'm trying to clarify whether or not I have until November 20th to submit slides like other speakers.
A: Matt Liggett said something about using Bazel as the fast path for presubmits, but I don't see him here. And then there's something by Steph Jennings, who also isn't here, but maybe Steve can talk about that: the need for release-blocking node e2e tests. Have I left anything out, or anything people would like on the agenda? Oh sweet, new stuff on the agenda.
A
Okay,
so
just
rambling
about
basically
Atkin
con
first
off,
who
here
is
going
to
be
a
kook
on
strawpoll,
but
no
I
will
be
but
cool.
A: ...What all the components are, what they all do, how many of these you can use today, with calls to action on how to use all of them. Because I think, thanks to the hard work that's been put in by OpenShift and Istio, we can definitely talk about how to use Prow on any repo, and that works, and hopefully by the conference we can talk about use of Tide as well. Maybe not even mention mungegithub again, or just mention it as a historical quirk.
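
[Editor's note: as a hedged illustration of what "use Prow on any repo" means in practice, here is a minimal sketch of a presubmit job entry of the sort Prow's config in kubernetes/test-infra accepts; the org/repo, job name, and image are hypothetical placeholders, not anything stated in this meeting.]

    presubmits:
      myorg/myrepo:                  # hypothetical repository
      - name: pull-myrepo-unit       # hypothetical job name
        always_run: true             # trigger on every PR
        spec:                        # a Kubernetes pod spec; Prow runs the job as a pod
          containers:
          - image: golang:1.9        # any image that can run your tests
            command: ["go", "test", "./..."]
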
A: ...However they please; if they drop their results in a well-known location in the Google bucket, that can be plugged into Testgrid and all the rest of our infrastructure. But, I mean, that's sort of the gist of what I was thinking for the update. The other option for me is I focus a lot more heavily on, roughly: here's where we were at last year, and here's where we're at today, and, like, look at all the awesome progress that's been made. But that might be a lot to stuff into 30 minutes. I mean, it's fun rambling, but honestly, I don't know what kind of audience to expect exactly, so there's that. And then I was thinking, basically, a call to action: come on down to the deep dive session if you care about actively collaborating on this. We can hash out our 2018 roadmap, we can have this long discussion, and we can have hacking on Prow, like, whatever. I made sure we have a projector, and, look, if people wanted to give lightning talks, they can, but I haven't really put much thought into it.
C: Could we frame the effort here as a dogfooding effort? Because that might be more relatable and, like, clearly understandable to people that have nothing to do with testing.

A: Can you flesh that out a little bit?

C: Right. I mean, like, there's so much going on right now with Prow and deploying it. Like, I know for us, if we ran everything that we run in Prow as one monolithic application on a VM, we wouldn't necessarily have to be worrying about: okay, but how are we correlating logs between things? Are we exposing metrics?
A: I like that. So, I mean, if I can summarize, I feel like you're describing: what if we told the story of the challenges that this team has faced, rather than talk about the awesome products that we've created? As a result, I mean, you can sort of say: hey, we made this thing to overcome this particular challenge. But I like the idea of telling that story. My hesitance would be: frankly, I'm not the one who's been doing all the development, y'all have been. So I would be looking to solicit input on, like, what do you remember as the pain points, or the real, like, gnarly problems that you had to tackle? I do think that sounds like a compelling presentation, rather than talking about a product or service. I'm a big fan of telling a story, for sure.
B: ...Of the people here are part of that, right? Like, some of the people that attend here, like Jeff, helped write a good chunk of the framework, and other people over time also helped work on that. You know, I've been scattershot in the beginning and scattershot recently, and people on my previous team used to help quite a bit on this work too. Jade did a lot of work in the tests and [inaudible].
A: ...than I have the time for, and I would love to see an owner to sort of drive that forward, right? Like, I'm kind of trying to drive forward making sure that we're doing a better job of communicating what the hell is happening, because there are so many moving pieces. And so I sort of agree with you; we're just trying to make sure that we keep things flowing, we keep the tests running, and so, as a result, we have some opinions of, like, tests that are really kind of annoying to deal with. But I'm not sure we have the strictest opinions or, like, a mandate to go enforce good test hygiene, and I would love to see that added to the scope of this group. But if I look at the folks who are here or regularly attend, that's not something I see the group collectively having the bandwidth for.
D: Yeah, I agree. I mean, it's just, like, to me, the quality of an overarching effort like Kubernetes is kind of a separate concern from enabling the tooling that enables that quality to be implemented. That's all, and they're obviously both critical, and, to be honest, I'm very interested in both parts, so I'm not trying to minimize the importance of what's going on here. I'm just saying "SIG Testing", to me, is a little bit of a misnomer at this point.
A: I completely agree. Like, I have hopes and dreams of encompassing all that. I'm trying to figure out how we can accumulate enough interest and resources to tackle, like, all the things that seem like the right things to tackle, and it's been prioritizing towards: well, let's just make sure we build it, and then they will come, but...
A: ...This is absolutely, like, the kind of discussion I wanted. So I think I kind of want to try to balance, like, how much time I have. I will try and put together a summary of an abstract or something and post it in the channel tomorrow, or the mailing list, or something. And, like, if I had to just make it up on my own, I could definitely tell the story of why Prow, why we have Prow, and why Prow has evolved in the direction that it has.

A: It's the proliferation of the bot commands, how that makes for a more user-friendly community, and how that has enabled the growth and velocity of Kubernetes, that I can speak to; less so the specific design challenges that we tackled, and that's where I could use help. But I think I could get away with, at least for the purposes of the abstract, just leaving it at, like: here are some of the challenges we faced on our journey to better testing, and those challenges can be...
A
You
know,
process
based
cultural
based
or
design
challenges
that
were
technically
encountering
and
I
can
solicit
feedback
from
this
group
as
we
ramp
up
to
coupon.
That's
the
big
question
mark
for
me,
which
I'm
try
to
have
answers
whether
or
not
I
have
slide
I
have
to
have
slides
in
my
November
20th
or
I.
Can
just
do
it
right
up
to
the
day
of
because
that
would
be
my
preference
okay.
E: ...Testing meeting. Hello, everyone. So, I come into this with not a whole lot of background, but I do know that we are looking to get an upstream test release-blocking. You know, not running on every PR, but at least running... you know, I'm not sure what the test cadence really is for the tests that don't run on every PR but run every release, but we're probably looking to get something in the grid for that, on RHEL-based distributions, which run a slightly different version of Docker. And we actually...
E: ...And so this is, you know, in a way, this test is fixing a bug, again, that we didn't notice in the previous release. And I guess what I'm really looking for is either documentation or someone pointing me in the right direction, because test-infra, for me, is like: wow, there's lots of stuff here. I don't even know where to begin to start coming up with a test that can run upstream for this.
B: ...Said, no, it's a partial integration, so they only stand up pieces of the cluster and they fake out the rest. But there's no reason you can't add a node e2e test, and if you're running it on every PR, you can either have it on by default, but you're going to need someone on SIG Node to stamp it for you; otherwise you can feature-flag it, and it won't be running on every PR, but you'll have to figure out how it gets enabled at some other interval.
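
[Editor's note: for context, Kubernetes e2e tests are Ginkgo specs, and the feature flag B describes is conventionally a [Feature:...] tag in the spec name; jobs then opt in or out with --ginkgo.focus / --ginkgo.skip. A minimal sketch, with a hypothetical feature name:]

    // Sketch of a node e2e spec. Specs tagged [Feature:...] are skipped by
    // default in PR jobs and only run by jobs started with something like
    // --ginkgo.focus="\[Feature:RHELDocker\]" (the tag here is hypothetical).
    package e2e_node

    import (
        . "github.com/onsi/ginkgo"
        "k8s.io/kubernetes/test/e2e/framework"
    )

    var _ = framework.KubeDescribe("Docker on RHEL [Feature:RHELDocker]", func() {
        It("should not regress the behavior fixed last release", func() {
            // test body: exercise the Docker behavior in question
        })
    })
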
A: When we kind of come from the top down... I think maybe give me some more, like, keywords or buzzwords to search for. So, actually, you're raising the question of what defines what a release-blocking test really is. I'm really trying to work to answer that question with the document that Jaice DuMars drafted, and I still have to actually review it; if I don't soon, I'm just going to, like, ask SIGs to please sign off. But basically...
A: Roughly speaking, I can say that I want a release-blocking job to be... I want it to be something that can pass pretty consistently. Generally, tribally speaking, the release team really likes to sit on a single commit and watch all of the release-blocking jobs run three times and see them pass three times, so we have good confidence that it's, like, not flaky, and it's generally pretty stable. Things that prevent us from reaching that happy future are jobs that are really flaky and can't pass consistently, or jobs that take a really long time to run.
A: So instead of having to wait, like, a couple of hours, we have to wait a whole day or longer. So the idea is: if you look at the release-master-blocking dashboard on Testgrid right now, there are some jobs that are continually failing, which really shouldn't be on there if they're failing. The idea is we want every job that's on that board to be green, and if it is not green...
A: Speaking with my CI signal lead hat on: for a number of these, I want somebody on these teams that I can go nag and ask, why isn't the job fixed yet, why isn't the job fixed yet. That becomes the response time that I expect out of that individual or SIG as we get closer and closer to release, as I really want to make sure everything that's on that board is treated with the respect that it deserves.
A: So, having said all of that: I noticed there's a job on the sig-release master-blocking board called node-kubelet, which I think might be the node e2e tests, some variant of e2e, I'm not sure. But basically, as far as what it would take to get the node e2e tests running and putting their results someplace: SIG Node would probably be a good group to collaborate with, or ask, like, hey, how are you doing that?
E: Right, so, like, I'm looking at the release-1.8-blocking dashboard; under sig-node I see a COS image job, and I'm not sure exactly what that does, but it seems like it's an image-specific e2e node run, and that's kind of what I'm looking for: an e2e node run that runs on a specific image. Yeah.
A: ...You know, you could use, as far as I'm concerned, you could use your own Jenkins, run your own stuff, just as long as you can collect the logs and the JUnit XML files and all that, and stuff them in a Google Cloud bucket somewhere. We can, like, get our config files set up to pay attention to that bucket, and then get that to display on Testgrid, like, get that linked in Gubernator and all that stuff.
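
[Editor's note: the "well-known location" being referred to is test-infra's GCS result layout. A sketch of what an external runner would upload per run, to the best of my understanding; the bucket and job names are hypothetical:]

    gs://my-bucket/logs/my-node-e2e-job/42/
      started.json       # e.g. {"timestamp": 1510070400}
      finished.json      # e.g. {"timestamp": 1510074000, "result": "SUCCESS"}
      build-log.txt      # console output for the run
      artifacts/
        junit_01.xml     # JUnit XML results that Testgrid/Gubernator parse
    gs://my-bucket/logs/my-node-e2e-job/latest-build.txt   # contains "42"
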
A: So, like, you would be the one responsible for running those tests on your infrastructure, with your operating system of choice, and you put the results in a place that we collectively can look at. I think what we, the community, would want would at least be logs to troubleshoot with, if we're trying to understand, like, why...
C
Well,
I
guess
that
was
kind
of
a
no-smoking
done.
I,
don't
know
Aaron.
If
you
can
speak
to
like
how
that
would
work
like
from
I
know,
AWS
has
given
some
funding
to
see
NCI
for
testing
on
there.
I
don't
know
if
spitting
out
Burrell
VMs
in
that
cloud
would
be
appropriate,
or
if
we
wanted
to
do
rail
testing,
would
we
be
asking
Red
Hat
to
provide
subscriptions
like
it
seems
like
a
higher
level
question
I
feel
like
that?
Would
most.
C: So, the Jenkins operator sharding landed, so we should be able to stand up a Jenkins master on our end and give the Kubernetes Prow, like, access to that to trigger stuff. So let's figure out if we have funding to run those on our end, and then we can plug it through, but I like...
A: Y'all, you know, we all kind of collectively face them, like: how did we miss this? Because this is a pretty big deal. So, like, whatever we as a community can do to be supportive, to make sure we elevate that signal, for sure. And, yeah, I think, like, Steve's suggestion, if Steve and Clayton are cool with it... like, if Prow can shard off which Jenkins it talks to to spin up which job, that means it's tied pretty closely to the rest of our testing infrastructure.
B: There is, but you have to, like... you have to post back to the GCS bucket. That's what he was kind of referring to earlier: you have to post back in a well-defined location, in a well-defined format, for it to be consumable, and then it can be part of the Testgrid output. It's a question of whether or not anyone's even looking at that, though. So, like, now you're part of the Testgrid, but that doesn't mean anything unless people who are in the know are actually tracking and executing against it, man.
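
[Editor's note: concretely, "part of the Testgrid output" means adding a test group pointing at that GCS prefix, plus a dashboard tab, to the Testgrid configuration in kubernetes/test-infra. A hedged sketch; all names and the bucket are hypothetical:]

    test_groups:
    - name: redhat-node-e2e                       # hypothetical group
      gcs_prefix: my-bucket/logs/my-node-e2e-job  # where results are posted

    dashboards:
    - name: sig-node-rhel                         # hypothetical dashboard
      dashboard_tab:
      - name: node-e2e
        test_group_name: redhat-node-e2e
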
D
Okay,
so
there
is
sort
of
a
requirement
for
a
certain
amount
of
integration.
So
there
is
sufficient
information
about
a
failure.
It's
not
just
like
hey,
I,
failed
and
I'm.
Just
thinking
like
I
can
imagine,
there's
going
to
be
more
scenarios
in
the
future
where
people
with
specific
hardware
or
specific
integrations
or
maybe
they're,
bundling
and
doing
some
sort
of
distribution,
and
they
want
to
be
able
to
have
some
feedback
to
the
community
that
something's
going
on
not
necessarily
blocking
feedback.
Just
hey
you're
doing
something
that
happens
to
affect
me
and
if
you
care.
B
Yeah
well
so
you
know
like
we
started.
Four
failed
starts
on
the
team
if
you're
still
on,
if
you're
in
Derek's
team
now
stuff
and
still
are
that
team
before
when
it
was
Andy's
team.
We
started
like
and
even
before,
that
my
team
had
like
misfires
along
the
way
by
trying
to
get
this
infrastructure
in
place,
so
I
believe
re
and
Paul
Mori
had
the
last
incantation
of
where
it
lived
for
trying
to
get
this
started.
So
that
way
it
was
able
to
post
the
information.
C
Of
the
efforts
are
that's
we're
working
on
right
now,
if
you
did
want
to
buy
into
okay,
I
need
to
support
being
able
to
do
like
a
bash
merge
before
my
test
or
after
my
testing
is
done.
I
need
to
figure
out
where
to
push
this
stuff
so
that
it
makes
sense
the
desecrated
Grenadier
right
now
to
do
that.
C: You have to buy into, like, a couple of different pieces, like the bootstrapping scenarios and stuff around it. So we're trying to break that out and make it a little bit easier to layer on top of, like, a container, like, standalone, but it's not quite there. Yes, I think today, if you wanted to do that, you would probably need to look at the older efforts.
A: Yeah, just to speak on Testgrid for a real quick minute: I'm trying to figure out the best way to get better about telling people they can use it, or suggesting good ways to use it. So we have the capability right now for Testgrid to email an email address, or a group of email addresses, if either a test fails n times in a row, or the test doesn't have any results present in it within n hours or days, whatever.
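
[Editor's note: the emailing capability described here corresponds to per-tab alert settings in the Testgrid config. A hedged sketch of what wiring that up might look like; the address and names are hypothetical:]

    dashboard_tab:
    - name: node-e2e
      test_group_name: redhat-node-e2e
      alert_options:
        alert_mail_to_addresses: "sig-foo-test-alerts@example.com"  # hypothetical
      num_failures_to_alert: 3        # email after three consecutive failures
      alert_stale_results_hours: 24   # email if no new results within a day
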
A: So we have the technology; it's kind of like, how would you suggest people use it? You know, you could have each SIG set up a mailing list such that when a test fails on their SIG dashboard, they get notified. Or I start setting it up: now that I've added owners to all the jobs on the release-master-blocking dashboard, I could set it up...
A: ...so it goes and emails those SIGs whenever those tests fail n times in a row. There are some things I'm still trying to figure out the right way to do: I've started to document what I, as a human, am doing as the CI signal lead, like, describing how I go to the master dashboard, I see...
A: So, like, we have the technology to make a lot of noise; I'm trying to figure out where we can most effectively push the notifications so people don't interpret it as noise right away. But I'm, like, really open to suggestions or feedback if you've got ideas on how to do this better, because I really don't like being a human dashboard notification system, and I will not do it for much longer.