From YouTube: Kubernetes SIG Testing 2017-09-12
Description
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk
A
Okay, hi everybody. Today is Tuesday, September the 12th. Welcome to the SIG Testing weekly meeting, which is being publicly recorded and will be posted to YouTube later today or tomorrow. So, two things on the agenda, unless anybody has anything more urgent. First up we're going to talk about the repo publishing bot. Daniel Smith from API machinery is here, since most of the repos affected are API-machinery-facing, and we have Chao dialed in, and I have a laundry list of other testing topics.
B
I can give context. So, various people in the project have had this idea that, in order to increase velocity and throughput and all that good stuff that makes developers happy, we need to split our development effort into these multiple repositories. Other people don't think this is a good idea. I'm not here to weigh in on that debate.
B
So, take the Go client library: users of the Go client library need to use the Go client library, and they don't want to use the rest of the code from the system. You shouldn't have to import the kubelet to use the Go client library. It turns out we have multiple things like this just in API machinery: we have the Go client library, we also have the library for making API servers, and it turns out some authors may want to make an API server which is also a client.
B
This
means
you
need
to
be
able
to
use
both
of
those
chunks
of
code
together,
and
that
turns
out
that
turned
up
a
problem
with
our
initial
approach,
which
is
we
were
making
a
copy
of
all
the
API
types
and
putting
them
in
the
client.
Now,
if
we
do
that,
you
import
the
client,
you
get
a
copy
of
the
API
types.
You
import,
you
guys
server,
you
get
a
direction
to
the
main
repos
API
types
and
those
API
types
are
not.
The
same
goes
import
package
technology
makes
things
very
little,
make
your
life
very
miserable.
B
It does happen that I think most of the repos are API machinery's, but that's purely an artifact of the fact that we were blocked first, so we had to do something. We ended up making the repo publisher. Well, let me give even more context: we intended to get to a world where people actually did the development for these repos in those repos themselves.
B
But
obviously,
there's
gonna
be
a
long
time.
Delay
like
it's
it.
We
can't
switch.
We
can't
flip
a
switch
and
get
into
that
world
like
there's.
There's
a
documents
to
be
updated.
There's
PRS
in
flight
there's
all
the
stuff
that
needs
to
happen
before
that
is
even
a
theoretical,
e3
or
theoretical
possibility
that
we,
so
we
knew
we're
gonna.
We
knew
that
we
were
going
to
live
in
this
world
for
a
while.
B
So
we
made
this
staging
directory,
so
everything
is
checked
into
the
main
repository,
but
some
things
are
checked
into
a
directory
that
is
underneath
the
staging
directory.
So
the
staging
directory
is
basically
the
contents
of
a
bunch
of
external
repositories,
so
that
solves
the
like
transitional
stay
problem,
but
we
actually
need
those
external
repositories
to
exist
in
reality.
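A minimal sketch of the staging layout just described, built in a throwaway directory. The repo names are illustrative of kubernetes/kubernetes, not generated from the real repo:

```shell
# Stand-in for the main repository.
REPO=$(mktemp -d)
# Everything is checked into the main repo, but each directory under
# staging/src/k8s.io/ holds the contents of one external repository
# that the publishing bot later mirrors out.
mkdir -p "$REPO/staging/src/k8s.io/client-go" \
         "$REPO/staging/src/k8s.io/apimachinery" \
         "$REPO/staging/src/k8s.io/apiserver"
ls "$REPO/staging/src/k8s.io"
```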
B
That also means we need to copy things out for the other repos too, because the client repo depends on the other things, and if you only copy out client-go then you have the same problem for all of its dependencies. So that explains why we have a bunch of different stuff in the staging directory, and it's not just people randomly staging things.
B
Correct, yeah. So development happens in the staging directory; that's the canonical location. And to solve the problem where we need to get this content out there for other people, Chao, and it's mostly just you, right? Yeah, Chao wrote the repo publishing bot, which does some fancy git voodoo, syncs up history, and pushes over to these external repositories.
A
To unpack that a little bit: my understanding is that sttts's pull request is a set of additional changes to the publishing bot, I think. I forget the details, but he's got that running; he's got his fork running in a separate cluster and pushing to his forks of those repos, and so the script that Chao runs is basically a fast-forward push from those forks to the main repos.
C
On Stefan's PR: he changed the algorithm of the publishing robot so that the robot will also pick the merge commits from the kubernetes repo, and the purpose is that we can synchronize kubernetes tags across the different published repos. So, for example, if we have a release, say a 1.7.0 tag on kubernetes, the robot is going to automatically tag all the derived repos with the same tag, so that people can find the correctly matching versions among all those repos. I took maybe two passes over his PR.
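A minimal sketch of that tagging behaviour: when the main repo gets a release tag such as v1.7.0, the bot applies the same tag name to the commit it last published in each derived repo. The repo here is a throwaway stand-in, not the real bot:

```shell
set -e
DERIVED=$(mktemp -d)              # stands in for one derived repo
git -C "$DERIVED" init -q
git -C "$DERIVED" -c user.email=bot@example.com -c user.name=bot \
    commit -q --allow-empty -m "published from kubernetes main repo"
# Apply the same tag name as the main repo, so consumers can match
# versions across all the published repos:
git -C "$DERIVED" tag v1.7.0 HEAD
git -C "$DERIVED" tag --points-at HEAD
```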
D
I mean, I was reviewing it when we were looking at short-term solutions to the current breakage. The publisher does git rewrites: it does a git filter-branch and a bunch of rewriting to make the commits show up on the sub-repos, essentially, and it got confused by a merge commit going in the wrong direction, essentially. So I don't want to get into the design here.
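A hedged sketch of the kind of history rewrite mentioned above: `git filter-branch --subdirectory-filter` re-roots history so a staging subdirectory becomes the top level, which is roughly the shape of the "git voodoo" being described. Paths and file contents are illustrative only:

```shell
set -e
SRC=$(mktemp -d)
git -C "$SRC" init -q
mkdir -p "$SRC/staging/src/k8s.io/client-go"
echo "package clientgo" > "$SRC/staging/src/k8s.io/client-go/doc.go"
git -C "$SRC" add .
git -C "$SRC" -c user.email=bot@example.com -c user.name=bot \
    commit -q -m "add client-go to staging"
# Rewrite every commit so the subdirectory becomes the repo root:
FILTER_BRANCH_SQUELCH_WARNING=1 git -C "$SRC" filter-branch -f \
    --subdirectory-filter staging/src/k8s.io/client-go -- --all \
    >/dev/null 2>&1
ls "$SRC"
```

After the rewrite, `doc.go` sits at the root of the rewritten history, as it would in the published repo.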
D
So I've just been reviewing the code to help get it fixed, and obviously the manual state is bad, but I think Chao doing it by hand right now makes sense, because API machinery wants to ship that for the current release. And then, of course, test-infra keeps the bot running once it's fixed, and I think Stefan's getting that working, but the larger-scale multi-repo story has not really been solved. This is kind of a band-aid we've been using, yeah.
B
We tried to sort of rally the troops to finish the job, and the troops did not rally, and at this point I would expect to live in this transitional state in perpetuity. I know there are some people who would like to finish the job and start developing in these other repos, but I haven't seen anyone take the helm and start pushing on that, so I don't expect it to happen. Sigh, yeah.
D
It should be a very low-maintenance set of read-only mirrors of these sub-repositories that we can maintain as long as we want, until we have a way to do development in those separate repositories, for some of them. And even then, I bet we won't end up with every single one of them as its own working repository, just because some of them are quite minor and naturally subordinate to other ones.
A
This isn't the place to necessarily hash out that design. For my purposes, the intent of getting you in and hashing this plan out here was to make sure that we all had operational buy-in. To me, when we say transitional state, the big transition I have in my head is that Chao is a bot, and I want to know when Chao is no longer a bot but is Charles.
F
I don't think we are in that state at this point, and what I would like to talk about is what we need to get there. One: is this something that SIG Testing wants to support, like we support Gubernator and the submit queue? And two: what needs to happen before we consider it, you know, supported in the same way? Ryan, do you have thoughts about that?
A
Alright, correct me if I'm wrong, and I'm not trying to divert the discussion on this issue. One downstream example I've seen is that Helm couldn't say that they supported Kubernetes 1.7, because they were consumers of some of these repos, which themselves didn't actually release until a couple weeks after 1.7 went out the door. Yeah, major.
D
So
we
can
keep
it
running
at
or
I
don't
know
about
when
we'll
have
that,
like
the
stephanas
obviously
invest
a
lot
of
time
in
getting
this
working.
He
has
something
like
forty
or
fifty
commits
in
his
PR.
So
he's
been
hacking
away
this
for
a
long
time.
I
don't
know
if
we
have
the
resources
for
that
for
that
sort
of
investment.
We
can
certainly
keep
the
bots
running
and
fix
minor
problems
like
that,
but
more
major,
ryoga,
textures
I,
don't
know
if
we
can
invest
in
for
current
roadmap,
I
mean.
F
So what happens, you know, if we run it nightly for a few weeks or months? Somebody will make some change, and something is going to happen where it's broken. What happens when it breaks? Yeah, what breaks, for example?
C
The last few breakages: one was because of the Go language update. We updated to a newer version of the Go language, and the robot was running code built with the old Go language, and then it fails. And the most recent breakage is because someone checked in a merge commit which is not clean, and the robot doesn't handle that case, and then the robot is broken. Stefan's PR is going to fix that, but I mean, who knows what else can go wrong in the future.
F
I think we should plan for something, I mean, something is going to go wrong. If it's important and we use it, I assume sometime during the 1.9 time frame it is going to break and someone's going to need to fix it. So who is going to fix it, and what happens then? Like this last time, right: a weird merge commit caused the staging bot to be broken for, like, some number of days, which is now motivating all this desire to do some manual staging, you know.
B
Yeah, regardless: if it's going to take more than a few days, then, with the many significant changes to the repository recently, people are blocked and can't do their jobs, because they can't update their vendoring for repos that depend on these things that we should be publishing.
D
We should have a better local testing story. Right now, the testing strategy, for the previous iteration at least, was: make a clone of every repo that the bot publishes to, privately, like on your personal GitHub, then change the target and run the bot. So it was basically really, really tedious.
A
Yeah, I absolutely understand that. For me, the blurry line would be between, say, SIG Testing and, say, SIG Release, since this seems mostly to be about pushing bits around, and there aren't necessarily tests that gate the bits being pushed around. But I haven't seen a significant amount of operational action from SIG Release when it comes to these sorts of things. There's...
E
There are no permanent owners in SIG Release, so consistency across release cycles has never been maintained. The people who are experts on given areas leave SIG Release, and then there's new people that come on board asking what happened. This SIG, at least, has consistency across those cycles. Yeah.
A
Anything in mungegithub we could make a separate plugin, and I think Cole just made an issue-creator bot that pulled out a bunch of things, so we could follow that pattern here. Beyond that, I don't know: operational experience with it, documented common failures, you know, the usual things you get from just seeing it run, like the common failures you would expect and how to work around them, and making sure that there's more than a single point of failure for domain knowledge.
F
What I think would be useful is for, you know, SIG Testing, maybe Ryan and/or I from SIG Testing, and maybe Chao and Stefan from SIG API Machinery, to come up with a transition plan that we've both agreed to, about here's what we're going to do; and then, when we complete all that, test-infra will own the process from then on. Okay.
F
I do know, I'm pretty sure, that Phil and the rest of SIG CLI are probably interested in doing something similar and having their own repos and stuff. So I think it's not just API machinery; my feeling is that API machinery is kind of the root, the sort of initiator of all this, but it's probably going to spin out into other repos and SIGs as time progresses. Yeah.
A
Right, thank you; thanks a bunch, that was informative. That also takes us to time, so I'll just save all the links and stuff I posted in the agenda for next week. The big one, just for this group while we're in high bandwidth: I just pushed out a PR to put the kops AWS presubmit job back to blocking. It was taken out of blocking for a variety of reasons, and I believe we have fixed all those reasons. Context is in the pull request that I've linked in the meeting notes.