From YouTube: Kubernetes SIG Testing 2017-07-25
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A: The presupposition was that there are some questions about what was going on here, so let me give a quick overview of what I'm trying to achieve and where we are with it, and then people can ask questions. Right now, all of the automated testing that we do, or at least a lot of the testing we do, is run on Google Cloud and is paid for by Google, sort of implicitly, because it's just running in our google.com domain; all the projects were created by Googlers.
A: It should actually be owned by the CNCF or the Linux Foundation, or somebody who is more officially in charge of the project now. So I set out to make it possible for that to be the case, so that we could invite more people to co-admin these testing resources. It's going to be interesting, because it's going to require us to do some splitting of the testing.
A: Right now we use GKE tests as part of the overall test suite to get a good signal, and it doesn't seem appropriate for the CNCF to be paying for GKE tests, so we're probably going to have to split those bits of infrastructure out. So, where we are with it: I've actually gone and created a kubernetes.io Google domain, which means I can create a Google Cloud Platform organization object that represents Kubernetes and that I own. This is good; it means that I can create projects.
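For concreteness, a minimal sketch of what creating a test project under such an organization object could look like with the Cloud Resource Manager v1 Go client; the organization ID and project ID below are placeholders, not real values.

```go
// Sketch only: create a GCP project parented on an organization, so that
// co-admins granted rights on the org get the same authority over it.
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
	cloudresourcemanager "google.golang.org/api/cloudresourcemanager/v1"
)

func main() {
	ctx := context.Background()

	// Application Default Credentials; the caller needs permission to
	// create projects on the organization.
	client, err := google.DefaultClient(ctx, cloudresourcemanager.CloudPlatformScope)
	if err != nil {
		log.Fatalf("auth: %v", err)
	}
	svc, err := cloudresourcemanager.New(client)
	if err != nil {
		log.Fatalf("client: %v", err)
	}

	// Parent on the organization object rather than an individual Googler.
	op, err := svc.Projects.Create(&cloudresourcemanager.Project{
		ProjectId: "k8s-test-e2e-example", // hypothetical project name
		Parent: &cloudresourcemanager.ResourceId{
			Type: "organization",
			Id:   "123456789012", // hypothetical org ID
		},
	}).Do()
	if err != nil {
		log.Fatalf("create: %v", err)
	}
	fmt.Println("started long-running operation:", op.Name)
}
```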
A: I expect that Google will probably just fund a lot of that as we do today. It's not really that big a problem, but we need to figure out how to make that work across the domains. So that's where we are with it. Once we figure this out, then we'll be able to create new GCP projects in this new domain and be able to invite people to have the exact same level of authority over the testing resources that Googlers have today.
A: In fact, I do not hold the keys to that domain; only Linux Foundation system administration holds the keys to that domain. So that is the goal: sort of the disentanglement of Google from the Kubernetes infrastructure. People gave us a hard time about it; we're working on it.
Totally, this is something I'm massively in favor of, and I think the reason I've dropped in here is just that I saw a slide that had SIG Testing's name on it, and it was the first I had ever heard of it.
A: So I just wanted to clear up whether we were in the loop. The other thing I have concerns over is that "testing infra" can mean a lot of things, or "Kubernetes infrastructure" can mean a lot of things, and so I feel like at some point you ought to nail down what that means.
A: We absolutely will have to charter that, but off the top of my head, the line falls on anything that is part of the Kubernetes project, that is important to the successful running of the project, and that isn't tied to any one particular provider. So it doesn't seem appropriate that we would run Google-specific tests.
A: It doesn't seem appropriate that we would run Google's GKE testing, or that we would block on Google's GKE testing, but it does seem appropriate that the submit queue and the mungegithub bots and the website and the docs and everything else fall under this umbrella. So the first thing I want to move is the DNS for kubernetes.io and k8s.io.

Yeah, I think I'm largely on board there. So what's the right forum to track this stuff going forward?
A
That's
a
great
question:
I!
Actually,
don't
have
any
idea
head
and
thought
about
what
sake
should
own
this,
but
it
probably
should
be
under
sync
something
I,
don't
know
the
answer
that
I'm
open
to
suggestions
right
now.
It's
going
nowhere
until
we
sort
out
who's
paying
for
stuff
right.
So
it's
really
just
in
a
holding
pattern.
Until
we
get
that
sorted
out,
I
was
on
vacation
in
the
last
couple
of
weeks,
so
I'm
back
now
and
I
hope
to
push
it
forward.
A: The CNCF has at least an executive director, and whatever Chris Aniszczyk's job is, so yes, it has a couple of employees for sure, but it really is part of the larger Linux Foundation umbrella. They have employees, they have sysadmins, but they're not going to run our stuff, right? That's going to be on us; they're willing to help us get things set up in a way that's compatible, though.
A
So,
yes
he's
going
to
be
work
involved
in
adapting
the
test
projects
and
finding
ways
that
we
can
move
the
non-google
specific
parts
away
from
the
Google
specific
parts
we
have
our
utility
cluster
Ryan
Mike
I
know
you
have
access
to
that.
We'll
have
to
move
that
over
there
left
to
move
reviewable
data
Brandi's
that
I
owe
over
to
the
new
domain.
Will
move
DNS
will
move
the
capes
that
I
owe
you
know.
Ssl
will
forward
all
those
little
bits
and
pieces
and
munch
github
is
another
great
example.
A: We'll have to move those over, and then the larger pieces are going to be more challenging, right? But it does give us an opportunity to rethink how we named projects and how we chartered the work. Like, we have projects that are just sort of dumping grounds, "throw it out in this project," and we can stop to rethink that now.
C: We do have work in progress to lease projects out per test, instead of having a project for every e2e test flavor, which would also help a lot. Rather than having each variant in a different project, we're creating a general-purpose pool of, like, 20 projects, you would get one for your test, and it would be less crazy.
D: Yeah. When you were calculating the cost, what projects did you include? All of our e2e projects, or were you looking at, like, the main ones that run our presubmit tests, like the Jenkins and Prow ones?
A: I looked at the scale tests, I looked at some of the e2e tests, and I looked at the utility cluster tests.
I kind of want to leave time for the other topics here, but yeah, what I'm hearing as a concrete action item is that we should probably follow up in this forum. I know the CNCF board meeting is happening at the end of this week, where we might get some substantive decisions on what exactly they're willing to pay for, and I can ping you offline to sort of bring the discussion back here.
A: I think the high-level question would be whether or not this is going to happen, in what time frame, and when we could do it. Has that been discussed? Is this possible?

Yeah, I mean, there's no rush for this, I think, right? It's been sitting this way for a couple of years, you know, so there's no urgency to it, but if we don't sort of keep pushing on it, it'll never get done. So, from my point of view, I think the action item that I have is to figure out the admin group.
A: We'll get a couple of volunteers who will hold the keys to the kingdom, and then the rest will be delegated to SIG Testing to do the testing work.

That sounds awesome. And if we need to have a face-to-face discussion and our working group isn't spun up in time, this seems like the right SIG. All right, cool.
Well, if anybody has any other questions, right now I'm the point of contact. All right, I'm going to drop off then. Thanks, everyone.
A: So, just in general, I think there's a lot of stuff that's worth hashing out, but I'm not sure I'm going to catch everything. The stuff that I've been seeing has mostly been geared at making mungegithub a lot more operationally friendly, but I sometimes harp, in the community or to other people, on the fact that mungegithub takes a long time to redeploy right now. So, some thoughts on that?
C: I don't know if there was a clear plan for that one. We have some vague plans for having a stateful GitHub caching proxy, where it receives events from GitHub, and when you make a GitHub API request to it, it will synthesize the appropriate response straight out of its shadow of the world. So any sort of GET, anything, really.
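A minimal sketch of that caching-proxy idea, assuming nothing about the eventual design: webhook deliveries keep an in-memory "shadow" of GitHub state, reads are answered from it, and everything else falls through to api.github.com. The event handling here is a simplified placeholder.

```go
// Toy stateful GitHub caching proxy: webhook events update a shadow of
// the world; GET requests are synthesized from the shadow when possible.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync"
)

type shadow struct {
	mu    sync.RWMutex
	state map[string]json.RawMessage // API path -> last known JSON body
}

// handleEvent ingests a webhook delivery. A real implementation would
// switch on X-GitHub-Event and track issues, PRs, labels, and so on.
func (s *shadow) handleEvent(w http.ResponseWriter, r *http.Request) {
	body, _ := io.ReadAll(r.Body)
	var ev struct {
		Issue      json.RawMessage `json:"issue"`
		Repository struct {
			FullName string `json:"full_name"`
		} `json:"repository"`
	}
	var issue struct {
		Number int `json:"number"`
	}
	if json.Unmarshal(body, &ev) == nil && ev.Issue != nil &&
		json.Unmarshal(ev.Issue, &issue) == nil {
		key := fmt.Sprintf("/repos/%s/issues/%d", ev.Repository.FullName, issue.Number)
		s.mu.Lock()
		s.state[key] = ev.Issue
		s.mu.Unlock()
	}
	w.WriteHeader(http.StatusNoContent)
}

// handleAPI serves reads from the shadow and proxies everything else.
func (s *shadow) handleAPI(w http.ResponseWriter, r *http.Request) {
	s.mu.RLock()
	cached, ok := s.state[r.URL.Path]
	s.mu.RUnlock()
	if r.Method == http.MethodGet && ok {
		w.Header().Set("Content-Type", "application/json")
		w.Write(cached)
		return
	}
	upstream, _ := url.Parse("https://api.github.com")
	httputil.NewSingleHostReverseProxy(upstream).ServeHTTP(w, r)
}

func main() {
	s := &shadow{state: map[string]json.RawMessage{}}
	http.HandleFunc("/webhook", s.handleEvent)
	http.HandleFunc("/", s.handleAPI)
	http.ListenAndServe(":8888", nil)
}
```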
C: All sorts of other things. Right now the munger has a lot of hacks where it tries to do ETags and stuff so that it doesn't eat tokens, but it still takes, like, the exact same amount of time to do a request and everything. But that would be a good way to get the state out into a separate thing, so the munger itself could be more stateless. That's not started just yet; it's just an idea.

Okay.
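For reference, the ETag hack mentioned here looks roughly like the following: replay the previous response's ETag as If-None-Match, and a 304 Not Modified reply does not count against the GitHub API rate limit, though the round trip still costs the same wall-clock time. This is a generic illustration, not the munger's actual code.

```go
// Conditional GET against the GitHub API using a saved ETag.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func fetch(url, etag string) (body []byte, newETag string, changed bool, err error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, "", false, err
	}
	if etag != "" {
		req.Header.Set("If-None-Match", etag) // conditional request
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, "", false, err
	}
	defer resp.Body.Close()
	if resp.StatusCode == http.StatusNotModified {
		return nil, etag, false, nil // unchanged; no rate-limit token spent
	}
	body, err = io.ReadAll(resp.Body)
	return body, resp.Header.Get("ETag"), true, err
}

func main() {
	body, etag, _, err := fetch("https://api.github.com/repos/kubernetes/kubernetes", "")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(body), "bytes; etag:", etag)
	// A second call with the saved ETag typically yields 304 Not Modified.
	_, _, changed, _ := fetch("https://api.github.com/repos/kubernetes/kubernetes", etag)
	fmt.Println("changed:", changed)
}
```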
A: I mean, I guess, from a high level, I've even seen it discussed on some pull requests, and I have it in my brain at least, that mungegithub is something we would rather move away from in favor of things like Prow plugins. But at the same time, it is the submit queue that we have today, and so I think a lot of effort has gone into making it more operationally friendly today. My question is: do we know where we stand on long-term plans for this?
C: We're removing the truly stateless bits. Like, a lot of the label management, where someone responds and then the label gets updated, can move to an event-based thing like Prow, and that is being done. The core queue itself is probably the most valuable and useful and complex bit of code there.
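To make the event-based point concrete, here is a toy handler in that spirit. It is not Prow's actual plugin interface, just an illustration of reacting to a webhook delivery (a "/lgtm" comment) instead of polling; authentication is omitted.

```go
// Minimal event-driven label handler: a webhook fires on every comment,
// and a "/lgtm" prefix triggers one targeted API write.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

type issueCommentEvent struct {
	Comment struct {
		Body string `json:"body"`
	} `json:"comment"`
	Issue struct {
		Number int `json:"number"`
	} `json:"issue"`
	Repo struct {
		FullName string `json:"full_name"`
	} `json:"repository"`
}

func handle(w http.ResponseWriter, r *http.Request) {
	var ev issueCommentEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if strings.HasPrefix(ev.Comment.Body, "/lgtm") {
		addLabel(ev.Repo.FullName, ev.Issue.Number, "lgtm")
	}
	w.WriteHeader(http.StatusNoContent)
}

// addLabel posts to the real GitHub issues labels endpoint.
func addLabel(repo string, number int, label string) {
	url := fmt.Sprintf("https://api.github.com/repos/%s/issues/%d/labels", repo, number)
	body, _ := json.Marshal([]string{label})
	http.Post(url, "application/json", bytes.NewReader(body))
}

func main() {
	http.HandleFunc("/hook", handle)
	http.ListenAndServe(":8888", nil)
}
```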
C: Well, we're not currently looking at replacing that. There are some ideas where we could have all the queue state be labels on a repo, but that's for the future. Right now what we're looking at, what we're going to work on, is making a single queue handle multiple repos, and then you can do better rollouts. One of us has just been implementing a change where you can update a ConfigMap with all the options, and it will pick those up on the fly, without restarting the queue, so we can change things out.
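A minimal sketch of that reload-without-restart idea, assuming the options are mounted from a ConfigMap as a JSON file: poll the file and swap the options in atomically. The mount path and option names here are hypothetical.

```go
// Hot-reload options from a mounted ConfigMap without restarting.
package main

import (
	"encoding/json"
	"log"
	"os"
	"sync/atomic"
	"time"
)

type options struct {
	BatchSize     int  `json:"batch-size"`
	RetestBlocked bool `json:"retest-blocked"`
}

var current atomic.Value // holds *options

func reload(path string) {
	raw, err := os.ReadFile(path)
	if err != nil {
		log.Printf("read %s: %v", path, err)
		return // keep the last good options on error
	}
	opts := &options{}
	if err := json.Unmarshal(raw, opts); err != nil {
		log.Printf("parse: %v", err)
		return
	}
	current.Store(opts) // picked up on the queue's next loop iteration
}

func main() {
	const path = "/etc/munge-config/options.json" // hypothetical mount point
	current.Store(&options{BatchSize: 1})         // safe defaults
	reload(path)
	go func() {
		// The kubelet refreshes mounted ConfigMaps periodically, so a
		// simple poll is enough to see updates.
		for range time.Tick(time.Minute) {
			reload(path)
		}
	}()
	for {
		opts := current.Load().(*options)
		log.Printf("processing with batch=%d", opts.BatchSize)
		time.Sleep(10 * time.Second)
	}
}
```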
C: If every Kubernetes repo has something running with the exact same features, there is less confusion; the "oh, this is one of the side repos where we have to manually merge" thing trips people up a lot. We just want the same workflow for everything that we do, and especially as people are adding more repos for, like, second- or third-tier components that will get integrated later, it's just much better to have the same experience for everyone.
A: Also, it's really painful, and there's a lot of overhead and administrivia, to stand up yet another instance of mungegithub, but it seems like that cost has been going down over time. Is this something we want to encourage people to use across more repos?
F: So currently we've been trying to make the cost of spinning up the next munger instance not as much as it is now. I think our plan is to make it so that a single instance can operate on multiple repos, but given that none of the mungers were designed to do that, I think that's a little ways out. So we're also looking at that.
C: There's a lot less configuration and file-editing that you need now to deploy a new version or a new instance, and a lot of that is the driving force behind running multiple repos under one instance: making it not be the case that every repo needs its own pile of resources, and instead making it use resources roughly in proportion to how many events the GitHub repo has. It's also about enabling interesting features across repos, like lockstep integrations between different projects.
A: Super cool. I mean, for me personally at least, the main reason I just wanted to see you guys on camera is because I think you've been doing awesome work, and I wanted to thank you a lot for it. That codebase has been through so many hands, and it's just kind of been tossed around like a hot potato, and it's good to see some, like, actual decent engineering practices being applied to it, like actually reviewing the code.
A: Likewise, I really have been trying. It kind of bums me out that all I can contribute are comments like "your comments aren't quite right," because you guys have done a fantastic job of catching all the other really detailed stuff. On your point about the multi-repo stuff, Mario is on here, and I think he had a couple of questions about that that maybe would be appropriate to raise now.
B: I don't see anybody saying that we really need to make sure there are good interfaces between the stuff across these repos, and that those interfaces are well tested. My sort of alarmist concern, having seen this play out in OpenStack, and not well, is that that sort of path leads towards things depending on implicit behavior rather than documented behavior, and you end up having to test everything with everything else all the time, and so you end up with something that's very unwieldy from the CI/CD perspective, with a huge amount of resources required to keep it going.
B: If we're going to move to this multi-repo world with all those implications, I think it would be helpful to at least review some of the decisions that Zuul has made. Not necessarily to use Zuul directly, but at least to consider the fact that they've actually solved a lot of these problems already. Us reinventing the wheel, I don't think, would serve anyone well.
C: And I definitely agree with that. I've read through a bunch of those Zuul presentations and papers, and we don't currently have any of the same multi-branch, multi-repo dependency problems they do, but I agree, and I think we're going to try to avoid going down that sort of complexity as much as possible. I don't think we currently have anything quite like that, and we do also have some members working on much better integration testing, where components could be tested at a smaller level.
C: Quinton is working on a proposal for essentially running an entire API server, controller manager, and kubelet deployment under one Docker image, with any components you want to change in there swapped out as necessary. So we can change some of the e2e tests to actually be integration tests that run locally, and that sort of down-scaling would really help these component repos get better testing of their modules.
C: So yeah, we're definitely interested in Zuul and what it has to offer down the road, if we go down that path, but I don't think there's really a consensus yet on how we're going to do it. Obviously, if we have multi-repo but everything is locked up in lockstep with the Kubernetes commit queue, we get no benefit, so that's a path we won't take, but other than that, I'm not sure what it looks like yet. The current state of the world is that we're publishing side repos from the kubernetes repo.
B: I mean, we don't really have multi-repo properly yet, so it's kind of, you know, a chicken-or-egg type of thing. But like I said, I think that if the dev side of things doesn't focus on ensuring interface stability, I worry that CI/CD is going to be left holding the bag and have to go down the road of testing everything with everything, and all the complexity that implies. I really hope that we can avoid that, but like I said, I don't really see a lot of evidence.
A: I think I'd like to follow up with you offline, because this may be related to a discussion I tried to bring up during SIG Architecture, where I feel like we need to, or the project as a whole needs to, do a better job of sort of defining what the version boundaries are supposed to be between all the different layers and components.
B: Yeah, I mean, I think the concern that I've raised, both in the document linked in the chat and at the Leadership Summit, is that I think it's kind of dangerous to be moving towards a multi-repo world without actually starting with interface stability within one repo, because that's something you can easily achieve, right? You can write the tests, you can make sure they're running, and then you can move things apart and the tests will continue running.
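A small illustration of that "pin the contract down first" idea, with invented names: write the contract test against the interface while everything is still in one repo, and the same test keeps running unchanged after the implementation moves to its own repo.

```go
// Contract-test helper pattern: any implementation of the interface, in-tree
// today or out-of-tree tomorrow, is run through the same assertions.
package contract

import "testing"

// Store is the boundary we want to keep stable across a repo split.
type Store interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, bool, error)
}

// TestStoreContract is called from each implementation's own test file
// (it takes an extra argument, so `go test` does not run it directly).
func TestStoreContract(t *testing.T, s Store) {
	if err := s.Put("k", []byte("v")); err != nil {
		t.Fatalf("Put: %v", err)
	}
	got, ok, err := s.Get("k")
	if err != nil || !ok || string(got) != "v" {
		t.Fatalf("Get = %q, %v, %v; want \"v\", true, nil", got, ok, err)
	}
	if _, ok, _ := s.Get("missing"); ok {
		t.Fatal("Get(missing) reported a hit")
	}
}
```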
E: There's a slight degree of apathy towards this because, like, we don't have control over who's going to do what, because the teams are doing these things independently, right? And this particular SIG is responsible for doing the integration testing across them. But they have been moving independently, without the sort of checks and balances that existed at a higher level to ensure that we are doing the right thing first and that other people agree with it, right? I'm adamantly opposed to the idea of moving out apimachinery at this point.
E: At first I thought it was a bad example, but there needs to be some lockstep-driven process across SIGs to coordinate this effort, right? There needs to be some working group that makes this happen, because it's not just a suggestion; it has to be a focused group of people who are making this go, right? And we have working groups in other areas that cross SIG boundaries for things like this, but I have not seen a working group related to making sure that our infrastructure stays sane as we migrate.
B
I'd
agree
with
that:
I
mean
when
I
was
bringing
it
up
in
the
context
of
like
I,
just
wasn't
sure
how
many
people
inside
testing
we're
sort
of
aware
of
the
implications
of
multi
the
move
to
multi
repo
and
so
I'm,
not
suggesting
we
should
solve
it.
So
I
think
there's
people
here
that
would
want
to
participate.
I'm.
C: Changes like that are a ways away, because all of our tests are going to assume a single repo and its invocation points, so yeah, there's a lot of refactoring, and we shouldn't be blasé about that. I'm also very worried about the amplification effect where, if you want to make a change, you have to make coordinated PRs and shepherd them all, and you either need some way for those to be shepherded and run together as easily as one, if...
A
Well,
actually,
I
was
going
to
say
I
think,
that's,
that's,
maybe
a
good
place
to
call
it
for
this
week.
I
know
we
didn't
quite
get
to
Tim's
point.
Can
we
actually
maybe
next
week
talk
about
how
do
we
test
the
actual
binaries
that
are
cut
by
on
ago
and
your
release
testing?
If
there's
a
pull
request?
I
want
to
get
some
eyes
on,
but
I'll
taking
people
in
slack
about
that
from
cool?
Well,
thanks
everybody
for
an
awesome.
Tuesday,
we'll
see
you
all
next
week,
yeah.