From YouTube: Kubernetes SIG Testing 2017-10-24
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
A: Thank you. All right, hi everybody, today is Tuesday the 24th. This is the SIG Testing weekly meeting, I'm Aaron Crickenberger. You are all being publicly recorded and this will be posted to YouTube very shortly, so say we all. I don't have a ton on the agenda today, but a couple of things to ramble on about, the very first of which: I've shared my screen.
A: There you all are, okay. So, 1.9 stuff: how is our milestone looking? I have been going through our roadmap doc, which hopefully I linked in the meeting notes, but I will check up on that. I've tried to add all of the issues from that to the 1.9 milestone here. So I think it was Ben this morning who very helpfully pointed out: when somebody comes and asks, like, what are we working on, what's our plan for the next release, this is it; this is all the stuff we want help with.
A: Issues around migrating mungers to different plugins, something I still have yet to do here, if it's of help at all as well. We have this table and we've kind of described what we're roughly planning to do for most of them. We don't necessarily have issues that correspond to all of them that we could cross off, but I'm getting a better sense of where we're at today, though that's also because I've had my head in this game pretty consistently for the past couple of days. The real goal here is for somebody to be able to come...
A: Somebody, maybe myself, who might get distracted by steering committee stuff and yanked around to other places, to be able to come back here and figure out: are we still doing the stuff we said we'd do? If we had really awesome ideas in the meantime and went ahead and just did them, are we doing that in a way where we're getting the right level of visibility, so we can communicate best practices and also get a chance to brag about all the awesome stuff? Sorry, any miscellaneous comments on that?
A: Okay, I have inserted a more concrete thing there. I was at the SIG Release 1.9 meeting this morning, where we got introduced to everybody, and Anthony, the release manager, showed this, the release timeline. So the things that are relevant and pertinent to this group are: Thursday, November 16th is in theory when we're supposed to start setting up CI around the release-1.9 branch. I think that's pretty uncontroversial.
A: He calls out down here that November 29th is when we begin manual downgrade testing. So I was wondering if that means we ought to go ahead and set up automated upgrade testing and automated downgrade testing, and if there's any manual upgrade testing, find out about that. But I think generally it was a pretty easy copy-paste job.
A: My other question around this is looking at this code slush and code freeze on November 20th and November 22nd. What do those even mean? Well, they mean something like this, where it's like, only pull requests that are approved for the milestone, and then, like, what does that even mean, because there's all this discussion around labels and things of that nature?
A: Basically, when I say "what does that even mean", the question I'm trying to ask, let's find this, the question I'm trying to ask is: can these concepts, code slush and code freeze, be articulated in terms of GitHub label queries? Because this is how Tide works, and I'm trying to understand: as SIG Testing, do we feel comfortable enough with Tide?
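To make that question concrete, here is a minimal sketch in Go of what a code-freeze criterion could look like when phrased as a GitHub search query of the kind Tide evaluates. The specific label and milestone names below are assumptions for illustration, not the actual kubernetes/kubernetes labels or the real Tide configuration.

```go
// Sketch: expressing "only milestone-approved PRs may merge" as a GitHub
// search query. Label and milestone names are illustrative assumptions.
package main

import (
	"fmt"
	"strings"
)

// freezeQuery builds a GitHub search query matching only pull requests
// explicitly approved for the release milestone, which is roughly what a
// code-freeze merge criterion would need to express.
func freezeQuery(repo, milestone string) string {
	parts := []string{
		fmt.Sprintf("repo:%s", repo),
		"is:pr",
		"is:open",
		fmt.Sprintf("milestone:%q", milestone),  // hypothetical milestone name
		`label:"status/approved-for-milestone"`, // assumed label name
		`-label:"do-not-merge"`,                 // assumed exclusion label
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(freezeQuery("kubernetes/kubernetes", "v1.9"))
}
```

If code slush and code freeze can be written down in this form, the same query doubles as documentation for contributors asking why their PR is not merging.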
A: If it's something we want to go with for real, we have the opportunity to, what's a better word than socialize it, I'm gonna demo it to the release team so everybody's on the same page with: where do I go to check the status of things? How does this all work? So they know what all the right labels are. A lot of this is a mess of labels of the release team's own making. But when it comes to, do I know if the submit queue... whoops, it's not the submit queue anymore...
A: ...if we're in the thick of things, it's: how could we make sure that Tide only merges just this one pull request, because we want just this one pull request and we want to exclude everything else. So I'll try and help brainstorm with the release team, as we move forward, what those tricky use cases might be. But these are the sorts of things that often make people clamor for ordering in a queue, which, as I understand it, Tide doesn't have; it doesn't understand the concept of order. I think it's just a query, right?
C: Not sure, I can check that real quick, but I believe it's mainly based on the idea that all the PRs are in the pool, so they're all given an equal chance at that.
D: That doesn't make sense. I mean, maybe in the general case, but what if you have something that's breaking and you need to merge the unbreaking change before all the other things? That just seems like a very common case. But they run the whole pool at once, though, so you'd just merge the single one that passed and the rest would just waste some cycles, but otherwise it wouldn't cause any trouble. Okay.
D: I think, I mean, don't get me wrong, it sounds like the implication would be: if you had some sort of breakage and you had this PR that fixed it, it would be the only one whose tests passed, so it would be the only one that would actually go into the queue. I guess I'm just... I guess I would share the concern around people's expectations, if they expect to be able to do things like prioritize or whatever.
A: Okay, speaking of documentation, one of the things I scraped into the 1.9 roadmap was better documentation for our Prow plugins. Since we talk about how awesome Prow is, that's cool, but what are all these plugins and what do they even do? Sometimes the names are self-explanatory; sometimes you get to derive that by looking at the command handling in the file, and you can sort of see the different slash commands, but I think we're gradually needing to have a little more structure around it.

I had an issue somewhere in the 1.9 milestone that I pulled in that was talking about how we improve this, because I'm happy to do the lowest-bar thing, which is just to put a README.md in the plugins directory that has a one-line entry for every plugin, and boom, that's how we do it going forward.
A: If you add a new plugin, you add it to that. I'm wondering if this begins to change now that the support, or we're planning on supporting, out-of-tree plugins. I've also heard mention of potentially having Prow provide a URI that you can hit to just automatically get help for all the things that it currently supports or implements.
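A minimal sketch of that idea, under the assumption that the registry and endpoint names here are hypothetical rather than Prow's real API: each plugin registers a one-line description, and a single endpoint serves the aggregated help, so the documentation stays in sync with whatever is actually compiled in.

```go
// Sketch: plugins register one-line descriptions (the same content the
// proposed plugins/README.md would carry), and an HTTP endpoint serves them.
package main

import (
	"fmt"
	"log"
	"net/http"
	"sort"
)

// descriptions maps plugin name to a one-line summary.
var descriptions = map[string]string{}

// Register is what each plugin would call from its own package.
func Register(name, oneLiner string) {
	descriptions[name] = oneLiner
}

// helpHandler serves the aggregated plugin help, e.g. at /plugin-help.
func helpHandler(w http.ResponseWriter, r *http.Request) {
	names := make([]string, 0, len(descriptions))
	for n := range descriptions {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Fprintf(w, "%s: %s\n", n, descriptions[n])
	}
}

func main() {
	// Illustrative registrations; the descriptions are paraphrased, not quoted.
	Register("lgtm", "Applies the lgtm label when a reviewer comments /lgtm.")
	Register("cla", "Labels PRs based on CLA signature status.")

	http.HandleFunc("/plugin-help", helpHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```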
A: Okay, well, I'll work with you offline. I still feel like it would be useful to have documentation about what the potential set of plugins could be, not necessarily just the ones that are live, but that definitely sounds like a much more automation-friendly thing. One thing I really love about this UI today is, when anybody asks, like, what do I have to do to get my PR merged, I can literally link them to the thing that says "merge requirements", and it is dynamically updated depending on how Tide is configured.
E: I have one thing. There's... we mentioned, when you were talking about the milestone, the Jenkins-related issues. Yeah. Are we planning to actually make a policy? I thought Eric has an issue, something like ending centralized support for Jenkins. So today we had a couple of PRs asking to add new Jenkins jobs.
A: I was sort of under the impression that this team is the team that gets to say "no more Jenkins". Like, if you come in and you really need a new job, we get to help you migrate your job to Prow. So this discussion comes up sometimes when somebody wants to run Docker as part of their job, and they say that's a feature that Prow doesn't have and that we really, really, really need support for.
A: We ought to be moving those discussions into issues, so we can identify if there are truly concrete use cases that Prow does not yet support and that we must implement before we can turn down Jenkins, or if there's some other way they could accomplish what they need to within the existing constraints of Prow. But we ought to be helpful in whittling that down; really, to make sure we're not accepting new jobs and, hopefully, whittling down the existing jobs. So I...
D: That's what I was trying to get at, because I heard the same thing, and for me, I think it's a necessary evil. Certainly for things like the verify scripts that are launching, you know, containers willy-nilly, that's not a good reason to use Docker-in-Docker. But Quinton's, you know, desire to get e2e jobs running against a Docker-in-Docker cluster, that seems to me a perfectly legitimate reason to use it, so...
E: Not to push further, but if anyone has a clean way to make it part of, like, our VM image, I would be very happy to push for that pull request. I tried using, like, a sidecar container, and that has kind of some issues with how the Prow job spec's cleanup works; it expects your container to actually exit.
A: One last thing, to Eric's point: you reminded me of something I said we're not having time for today, which is to work with Jaice around documenting criteria to consider a job as blocking either the release or blocking merges. And I would say it's a fair thing to say: in order for it to be a presubmit, PR-blocking job, it has to be a Prow job. I need to actually go back and confirm that all of our presubmit jobs are, in fact, Prow jobs.
E: A verify job that is going to need something like Docker-in-Docker would be totally right. Do the jobs they're migrating towards... are they using their own, like, Federation...?
B: So I shared out a doc yesterday, which is in the meeting notes, about... right now, you know, our image's bootstrap, which is responsible for copying up logs and telling Gubernator and Testgrid what actually happened in the testing, and also checking out the repository; all of that happens in bootstrap. And we, one, want to rewrite it in Go, and second, we want to move that outside of the container, so that the container can assume, when it starts, that all of the repositories it needs are checked out in the right location.
B: And then it can just run its scenario and then, you know, exit zero or one depending on whether or not testing passed or failed, optionally copying some debugging artifacts into another location, and then all the artifacts get copied up magically, so that the fact that the container is part of Prow and Testgrid and Gubernator is something that the container itself doesn't have to understand. And I think it might be... yes.
F: So these are all the Prow-injected variables, and then the container itself is running an entrypoint script that's written in Go, and what that script does is it runs whatever command you give it, and it redirects standard out and standard error into a file, and once it's finished, it'll write the exit code of that command into a different file.
F: We have a sidecar container that's running with that, and what that does, also written in Go, is it knows about an artifact directory, and whenever things pop up in there it shoves them into GCS, and when it sees that the return code file has been written, it'll take the file that has standard out and standard error and push that up to GCS as the build log as well. A lot of that potentially could be useful to you guys, but I think, and I made a comment as well on the proposal...
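A rough sketch of the entrypoint wrapper being described might look like the following; the file paths and names here are assumptions for illustration, not the actual tool being linked in the chat. It runs the wrapped command, tees its output into a build log, and writes the exit code to a marker file that a sidecar can watch for before pushing the log and artifacts to GCS.

```go
// Sketch: run a command, capture its output, and signal completion via a
// marker file. Paths and file names are illustrative assumptions.
package main

import (
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: entrypoint <command> [args...]")
	}

	// Everything the command prints goes both to the pod log and to a file
	// the sidecar will later push to GCS as the build log.
	logFile, err := os.Create("/logs/build-log.txt") // assumed path
	if err != nil {
		log.Fatalf("creating build log: %v", err)
	}

	cmd := exec.Command(os.Args[1], os.Args[2:]...)
	cmd.Stdout = io.MultiWriter(os.Stdout, logFile)
	cmd.Stderr = io.MultiWriter(os.Stderr, logFile)

	exitCode := 0
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			exitCode = ee.ExitCode()
		} else {
			exitCode = 1
		}
	}
	logFile.Close()

	// The presence of this file is the signal the sidecar waits for; its
	// contents are the exit code of the wrapped command.
	if err := os.WriteFile("/logs/exit-code.txt", []byte(fmt.Sprintf("%d", exitCode)), 0644); err != nil {
		log.Fatalf("writing exit code: %v", err)
	}
	os.Exit(exitCode)
}
```

The sidecar's only contract is then "watch for the exit-code file, then upload the build log and artifacts", which is the decoupling being described above.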
F: One of the things that we noticed with a really complicated testing system with a bunch of containers is that it's a real hairball, like a nightmare, just to figure out what went wrong when something goes wrong. And so putting the onus on pods and init containers to push stuff up into GCS, especially with the way that Gubernator currently handles, like, a missing started.json or something, is kind of hairy. So we tried to think about...
F: We have a pod that is watching Prow jobs, and as they get created and Jenkins jobs get started for them, we can actually push started and finished JSON up then, from there, because the test doesn't... this is, like, there are some fields that get dynamically computed during the test, but the vast majority of the ones that we're actually interested in are just, like, what configuration went into it, when did it start...
F: ...when did it finish. But yeah, I definitely think that having an interface that somebody provides to you... and so again, in our case, our contract is actually just: your working directory is your repository, and you give us the commands to run. It's not giving it pods back, but something of the sort is definitely really valuable.
F
And
if
we
don't
the
only
reason
we
don't
use
the
pods
back
and
we
use
just
the
command
is
so
we
can
do
that
streaming
of
standard
out
and
standard
air
into
a
file
and
the
reason
we
did
that
was
to
hopefully
in
the
future,
support
streaming
that
log
out
to
something.
But
if
we
aren't
supporting
that
today,
just
having
the
sidecar
around
would
probably
be
enough
and
it's
I
can
link
I'll
shoot
a
link
to
the
tool
in
the
chat
and
we're
very
happy
to
a
stream.
B: Yeah, this, you know, I think this might be worth continuing to discuss, to figure out how we want... my thinking, you know, the reason why I was doing this all with... yeah. I think your idea about having, you know, maybe the started and finished be the responsibility of some other pod, that sounds fine.
B: The thinking, I guess, ideally, is I would like to not have any constraints on the image. Like, you basically don't need to change your image at all to make it work with Prow; we just sort of have Prow run it and then everything just works. And I'm not sure how necessary that is, but it sort of seems potentially, you know, useful to me, but...
F: But yeah, one question I had for you guys, because I don't know what you run with in production: for our production clusters, we ubiquitously use journald as the logging driver, which has rate limits built into it so that it doesn't blow up the system's logging storage. So we noticed that... and there's also the issue that, for Docker, all of the logs for every container go through the daemon. So if you have really high-throughput nodes with a bunch of tests running, sometimes that gets rate-limited and you lose test output.
B: It's whatever the default is for GKE, and, yeah, to be honest, I don't know what that default is, but I don't think we've customized it. But I do think, you know, if you check in the Cloud Console, it outputs standard out to our logging, like, you can view the logs there. But, you know, we haven't had problems with it thus far. So that's an interesting thing to think about.
A: Yeah, back in the early days, Samsung was trying to scale-test a thousand-node cluster, and we found some fun log-level V4 messages that choked up journald pretty hard. It was impressive, and we spent a long time scratching our heads about why nobody else had seen this at Google-scale testing, and it turns out...
B: Would you be willing to... maybe we could throw together another one of those breakout sessions later this week for us to talk in more detail? Yeah, and I could try to demo one of our pipelines. Yeah, sounds great, let's do that. Cool. And so, I mean, I think that's, you know... but I guess the only other thing would be, if anybody, you know, has comments, please add them. Some people have already done that, so thank you for that. And yeah.