From YouTube: Kubernetes SIG Testing 2017-09-05
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit#
A
So I'm not the test person for the release team. I anticipate we'll have more clarity on that at tomorrow's burndown meeting. If I had to guess (I'm still catching up on scrollback), it looks like maybe the release-1.8 branch hasn't been cut yet. I know if you go to testgrid right now, there aren't any 1.8-specific dashboards.
A
So on that basis, I would anticipate that there's probably not upgrade testing to 1.8 specifically. There are the master upgrade tests in the release-blocking tab, so I know that tests are running, and pretty much all of them appear to be failing. Is that sort of what you were referencing there?
C
No, I'm referencing specifically the upgrade test scenario, because that's always the conundrum whenever new features are added. There's, like, this exported-version skew thing when APIs get promoted and everything else, and that's when things break, pretty tremendously, in very subtle, interesting ways, and it's proven itself to be a thorn in the side.
A
Yes, I think we're kind of at (and maybe J. can speak more to this; he's lived through the excitement of the weekend) a crawl, walk, run phase here of making sure we have a set of tests that are consistently passing and provide good signal for the 1.8 release. So we don't yet have the release-1.8 branch.
A
So there's that. If we go by the release-master-blocking list of tests, many of the tests that are not upgrade tests, mostly GKE-related, aren't passing right now. And then, if I look at the master upgrade tests, it seems like a lot of those stopped running around September 2nd on testgrid. So I would expect we probably have some work to do there.
D
Well done, sir, exactly correct. And we're trying not to eat the whale all at once, so there are plenty of things we're trying to fix just in the interim, plus some of the core tests, like the kops stuff, are not working. This weekend was really quite an endurance marathon, actually, so we're just trying to get ourselves collected and get this stuff fixed.
A
Okay, cool. So, Tim, I think maybe the short answer to your question is: if you want to find out more about upgrade test status, we're at the point in the release process where there are burndown meetings on Monday, Wednesday, and Friday, if I remember right; I'm trying to look at my calendar right now.
B
As part of this weekend's fun, one of the things I was trying to do was get federation results in, and it seems that testgrid has limitations on the build ID that are different from what bootstrap.py will generate. So I was trying to figure out what those build IDs were, but also whether there was any way I could figure that out for myself, because it seems that the testgrid code is not publicly available. Is that true?
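For context on the build-ID issue being described: results land in a GCS bucket in the layout that bootstrap.py produces and that testgrid and Gubernator read, where the build directory is expected to be an increasing numeric ID. Below is a minimal sketch of publishing one run in that layout; the bucket and job names are hypothetical, and started.json/finished.json are reduced to commonly present fields.

```python
# Publish one run's results in the GCS layout that bootstrap.py produces
# and that testgrid/Gubernator read. Bucket and job names are hypothetical;
# started.json/finished.json are reduced to commonly present fields.
import json
import time

from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "my-ci-results"           # hypothetical bucket name
JOB = "ci-myproject-e2e"           # hypothetical job name
BUILD_ID = str(int(time.time()))   # testgrid expects increasing numeric IDs

client = storage.Client()
bucket = client.bucket(BUCKET)
prefix = f"logs/{JOB}/{BUILD_ID}"

def put(path: str, data: str, ctype: str = "application/json") -> None:
    """Upload a string to gs://BUCKET/<path>."""
    bucket.blob(path).upload_from_string(data, content_type=ctype)

# started.json marks the run as begun.
put(f"{prefix}/started.json", json.dumps({"timestamp": int(time.time())}))

# ... run the tests, writing JUnit XML under artifacts/ ...
junit = '<testsuite tests="1"><testcase name="example"/></testsuite>'
put(f"{prefix}/artifacts/junit_01.xml", junit, ctype="application/xml")

# finished.json records the outcome that the dashboards display.
put(f"{prefix}/finished.json",
    json.dumps({"timestamp": int(time.time()), "result": "SUCCESS"}))
```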
A
Yes, there are components of testgrid that are not open source, the majority of it, unfortunately. This is essentially a tool that started as Google-internal only, and it didn't actually become publicly accessible for consumption until sometime last year. We're just at the point right now where people can edit that YAML file through pull requests to get their results included.
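As a sketch of what "editing that YAML file" involves: the testgrid config maps test groups to GCS prefixes and dashboard tabs to test groups. The shape below is an approximation with hypothetical names; the authoritative schema and config live in the kubernetes/test-infra repo.

```python
# Approximate shape of a testgrid config entry: a test group that points
# at a GCS prefix, and a dashboard tab that displays it. All names here
# are hypothetical; the authoritative schema is in kubernetes/test-infra.
import yaml  # pip install pyyaml

config = {
    "test_groups": [
        {
            "name": "ci-myproject-e2e",  # hypothetical job name
            "gcs_prefix": "my-ci-results/logs/ci-myproject-e2e",
        },
    ],
    "dashboards": [
        {
            "name": "myproject",
            "dashboard_tab": [
                {"name": "e2e", "test_group_name": "ci-myproject-e2e"},
            ],
        },
    ],
}

print(yaml.safe_dump(config, sort_keys=False))
```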
A
I mean, just to speak to that, I can say that long-term we would love to open source it. I think it's sort of a question of resources and priorities within Google at the moment, but we recognize that it is certainly a component of the whole kubernetes test-infra stack that isn't open source yet. It's just a matter of getting resources inside of Google that can work on that, yeah.
A
I'm still paging stuff back in, but I think I might know who this needs to be talked to, so hopefully we'll get you a response within 24 hours. Oh, thank you. Okay. Okay, Steve! So last week we had sort of an impromptu discussion around Gubernator, and potentially people outside of Google using or extending or forking (or whatnot) Gubernator, and I was wondering if you could give us a summary of that meeting from Friday, yeah.
F
Sure. I pasted a link to the notes that Eric wrote up for us in the doc while we were doing it. I think, generally, what came out of it was: right now Gubernator is hosted on Google App Engine; there's a lot of good stuff that App Engine gives us as a PaaS, and it's low priority to port it into a Kubernetes-native application.
F
Also, it seems like Ryan and Eric are both happy with the scope of Gubernator being a little bit larger and having Gubernator know how to serve results from different organizations. In the future, that might look like a CNCF-owned instance, similar to the federation that you can do with testgrid: you can say, okay, here is my GCS bucket, and maybe here's the branding that I would put on my page, and then it'll serve up your stuff for you. So I guess that's the general overview from that.
F
Our fork will go away. I'm working right now to upstream all the configuration, to make it so that we don't have to have any code there. We may continue to deploy our own instance on our own account, but at least it'll be possible, if we want, to use one larger instance. I think right now there were some issues with the specific App Engine account that was being used; I'm not really sure, but it wasn't possible right now to transfer ownership of the credentials and stuff.
A
Yeah, it's owned by a google.com Google project, I think, which is the same issue we had with letting non-Googlers see the GKE cluster that runs Prow and things of that nature. So the long-term hope is that we can transfer ownership of this to something in the cncf.io domain.
A
That's like: put all your artifacts in Google Cloud buckets, and you could just link to that. But Gubernator is a little bit smarter: it starts to parse the actual logs and detect failures, thanks to some of the stuff Clayton did a little while ago, and it counts the number of tests and things like that. So as the grand unified way of viewing all the test artifacts that our full stack of test infrastructure provides, it seems like it could be cool, I sometimes wonder in the back of my head.
F
And I think there are a couple of changes that could be made to how Gubernator is interacting with the build logs. Some of the tests that we run, for instance, aren't super well behaved and generate 25 megabytes of output, and right now that means you're expanding either five lines or 65,000 at a time, and that's not super great. So there are definitely some growing pains that we might see as more projects use the same Gubernator instance.
A
And I think of maybe looking at Gubernator as a template of the middle ground that we could get to with testgrid. I'm not sure where we are on that; in an ideal world, it would be really great to say the entire stack of infrastructure that tests kubernetes itself runs on kubernetes, and Gubernator, being an App Engine app, is an example of where that's not the case.
E
A quick question about this, and I apologize if it echoes. All right, good. So my question is: Jenkins, for all of its faults, is at least open source and runs on kubernetes, and Gubernator obviously has the App Engine dependency. And, as a preface, I agree with everything you said; I'm not trying to prolong the controversy. But I guess my question is, for those that want to run all of the tests on top of kubernetes:
E
Would it be easier to keep the stuff that's currently on Gubernator running on App Engine, or to actually move it, perhaps losing some of that functionality, onto Jenkins or something else that runs on kubernetes already? Or do you think that there's just too much there to possibly move that stuff over? Yeah.
A
It's difficult for me to give a diplomatic, all-or-nothing answer. With Jenkins, I just feel like once you start using it, it takes a lot of effort to make it usable while locking it down sufficiently to prevent humans from tweaking and tuning it the way they like; the overall management of Jenkins can be a pain. So for a smaller team of individuals it's fine; once you scale up to a larger project, though... the example I have in my head was OpenStack. Their Jenkins is really cool.
A
You can go and see a lot of read-only things, but it took a lot of effort to set that up, and they couldn't really do it effectively until they created Jenkins Job Builder, which is a tool we have tried and kind of run away from as quickly as we can; the templates and whatnot sort of ballooned pretty quickly. And that's just management of the jobs; that's not even management of the various plugins and configuration files and things like that.
B
In a couple of weeks, once we figure out some things around testgrid build IDs and silly things like that, it will be very easy to run an asynchronous job that runs every hour and reports results into the central kubernetes testgrid, probably using the test-infra scripts and/or bootstrap.py. But you can do that basically completely separately; it should be relatively standalone, able to run in a pod, and you just set it up on your kubernetes cluster and off it goes.
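A minimal sketch of that kind of standalone periodic reporter: run the suite on a schedule and publish its results to the shared bucket in the layout testgrid consumes (see the earlier upload sketch). The test script and the upload helper here are hypothetical placeholders.

```python
# Standalone periodic reporter of the sort described above: run the suite
# every hour and publish its results to the shared GCS bucket in the
# layout testgrid consumes. The test script and upload helper are
# hypothetical placeholders.
import subprocess
import time

def upload_results(passed: bool) -> None:
    """Placeholder: write started.json/finished.json and JUnit artifacts
    to GCS as in the earlier upload sketch."""
    print(f"uploading results, passed={passed}")

def main() -> None:
    while True:
        # Run the project's e2e suite; non-zero exit means failure.
        proc = subprocess.run(["./run-e2e-tests.sh"])  # hypothetical script
        upload_results(passed=proc.returncode == 0)
        time.sleep(3600)  # hourly, as discussed

if __name__ == "__main__":
    main()
```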
A
Totally fine and cool. The little bit of connectedness is: when you click on a red cell in testgrid, you're like, oh, that thing failed, can I go see what failed? Right now, if you click that, it links you through to Gubernator. So in the world where other people, well, we want people to be able to use whatever CI system they have to contribute their results, and to treat Google Cloud Storage buckets as the common denominator there.
A
But for the full integration of "I want to see more information about this specific test and why it passed or failed", you need to provide some sort of publicly exposed endpoint. So if you happen to be running your own homegrown Jenkins and are comfortable exposing that to the Internet, we can maybe provide customization to allow that to happen. But I think right now we assume that you're going to end up clicking through to Gubernator, pointed at the same GCS bucket that is driving the testgrid results, yes.
F
...why Jenkins, even with plugins, even with customization, fails to do that in a clean and concise way. One of the things we really like about Gubernator, or at least a simpler approach like it, is that we can do that: we can bring out the things that we find valuable, we can take the list of things to highlight and make it what we want, we can show the JUnit results front and center, right in your face.
A
Cool. Anybody else?
F
It doesn't seem to be currently in use, and I'm not entirely certain if we still have a path right now for "okay, you need a new label created on this repo; what do you do?" Because at this moment, I've added the configuration to both the kubernetes main repo and test-infra, and the labels seem to have been created, but with the wrong color? It doesn't seem very clear. Let's build on my...
A
I can maybe add some color to that, and I'm happy to be corrected. So, for context, there's going to be a meeting where we can talk about that, Wednesdays 9:30 Pacific. The effort to standardize labels across all repos is something that this SIG is driving. A bit of project history: there is a munger that you can turn on in the mungegithub instance, called check-labels, and it will look at that labels YAML file, and if any of those labels don't exist, it will automatically create them.
A
I don't know if it creates them with the correct color or not, but that's one way of doing it. It was turned off a while ago, for reasons that I don't actually know. Then, a little bit later, I noticed that that labels.yaml file existed but was horrifically out of date compared to the labels in the repo itself, and I needed some machine-readable way of getting labels, and that YAML file was a much faster way for me.
A
It's a whole separate program that we want to try running as either a cron job or a scheduled job through Prow (I believe we're looking at a cron job), and that program will be used to sync up labels across repos. Rather than turning it on against every repo, all 40-plus repos in the kubernetes org, right now we want to turn it on for just a few repos, for just a few labels, make sure it actually works, and then turn it on for the rest.
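A minimal sketch of what one pass of such a label-sync program might do, assuming a labels.yaml of name/color entries; the file shape, repo list, and token handling are hypothetical (the real tooling lives in kubernetes/test-infra).

```python
# One pass of a label-sync program: read a labels.yaml of name/color
# entries and create any label missing from each target repo. The file
# shape, repo list, and token handling are hypothetical.
import os

import requests  # pip install requests
import yaml      # pip install pyyaml

API = "https://api.github.com"
HEADERS = {"Authorization": f"token {os.environ['GITHUB_TOKEN']}"}

def existing_labels(org: str, repo: str) -> set:
    """Names of labels already present on the repo (first page only;
    a real tool would paginate)."""
    resp = requests.get(f"{API}/repos/{org}/{repo}/labels",
                        headers=HEADERS, params={"per_page": 100})
    resp.raise_for_status()
    return {label["name"] for label in resp.json()}

def sync(org: str, repo: str, wanted: list) -> None:
    """Create each wanted label (e.g. {"name": ..., "color": ...})
    that the repo lacks."""
    have = existing_labels(org, repo)
    for label in wanted:
        if label["name"] not in have:
            resp = requests.post(f"{API}/repos/{org}/{repo}/labels",
                                 headers=HEADERS, json=label)
            resp.raise_for_status()

if __name__ == "__main__":
    with open("labels.yaml") as f:
        wanted = yaml.safe_load(f)["labels"]
    for repo in ["kubernetes", "test-infra"]:  # just a few repos first
        sync("kubernetes", repo, wanted)
```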
A
Yeah, thank you. Thank you for asking that, because a lot of people don't have a lot of context on it. Yes, that's the tool I'm talking about that was written, and it's the other thing that I think I have seen folks use in the past. It's definitely what I have used to sync the state of the world back to that YAML file when humans went and added labels on their own.