From YouTube: Kubernetes SIG Testing 2019-02-19
A: Okay, hi everybody, I am Aaron Crickenberger. Today is Tuesday, February 19th, and this is the weekly Kubernetes SIG Testing meeting, meaning we're all being publicly recorded and we'll have our smiling faces posted to YouTube later. So please remember that we should all adhere to the Kubernetes code of conduct, which basically boils down to: please don't be a jerk. Today on the agenda, I wanted us to lead with a discussion from Duvall, who's one of the Kubernetes 1.14 release team test-infra leads. So, as you've been going through things from the testing side: what upcoming milestones are happening, both from the release team's perspective as well as from SIG Testing's perspective? Because I see some things about, like, Spyglass replacing Gubernator, and ResultStore, and stuff further on in the schedule, so I thought we could start there. Do you want to take it from there?
B: So, okay, so today we are supposed to cut a release, so we have a new 1.14 branch. Once we make those changes, we want to make changes to all the config files. The pull request had all the approvals, but it looks like there is a conflict on the GitHub issue, so Sen and I are working on it right now, trying to resolve it.
B: We have done automation of all these manual tasks, and that is also under review, so most of the manual tasks are now automated. Now we need to write a stitching script, a script that launches all of these in order; that is the only part that is pending. Of course, the review needs to be approved, and once I get the approval it will likely get merged.
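A stitching script in that spirit could be as simple as running each already-automated task in order and stopping at the first failure. A minimal sketch, with hypothetical step names (these are not the actual test-infra scripts):

```python
import subprocess

# Hypothetical ordered list of branch-cut automation steps; the real
# test-infra task names and scripts will differ.
STEPS = [
    ["./create-release-branch.sh", "release-1.14"],
    ["./rotate-job-configs.sh", "release-1.14"],
    ["./update-dashboards.sh", "release-1.14"],
]

def run_steps(steps, runner=subprocess.run):
    """Run each automation step in order, stopping at the first failure."""
    for step in steps:
        result = runner(step)
        if result.returncode != 0:
            raise RuntimeError("step failed: " + " ".join(step))
```

Keeping the steps runnable one by one, as mentioned, also lets a release lead resume from a failed step instead of rerunning everything.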
B: So, yeah, Simon mentioned that if you do /hold cancel it should get merged, but it did not, so Sen and I discussed it today, and probably he will do a review again and approve it.
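For context, /hold and /hold cancel are Prow commands that add and remove the do-not-merge/hold label on a pull request. A minimal sketch of that kind of label gating (the logic below is a simplified illustration, not Prow's actual merge automation):

```python
# Prow-style label gating, simplified for illustration: "/hold" adds the
# do-not-merge/hold label and "/hold cancel" removes it; merging requires
# the approval labels to be present and no blocking label to remain.
REQUIRED = {"lgtm", "approved"}
BLOCKING = {"do-not-merge/hold"}

def mergeable(labels):
    """A PR can merge once required labels are present and no blocker remains."""
    labels = set(labels)
    return REQUIRED <= labels and not (BLOCKING & labels)
```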
B: So we need to write a stitching script that will make completely sure that whatever test-infra tasks we do today are automated, and that they can be run one by one.
B: Some of them are already documented, and some of them have already been automated. Some of them have not, and the documentation does not mention that, so some of the scripts are experimental. So we need to stitch all of them together and make sure they work, and some of the scripts that are already there may have some bugs, okay.
B: All the config files... but for rotation of jobs, which you are talking about: I think for rotation of releases, the release versions for each of the jobs, we have a specific set of scripts. These are presubmit and some postsubmit jobs that we have selected, saying we will do rotations only for those.
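One way to picture that rotation, as a hypothetical sketch (the function and window size are invented for illustration, not the actual test-infra scripts): each selected job tracks a window of release versions, and cutting a new branch shifts the window by one.

```python
# Hypothetical sketch of "rotating" release versions for a job: append
# the newly cut release and drop the oldest, keeping a fixed window of
# supported versions. The window size is an assumption for illustration.
def rotate_versions(versions, new_version, window=4):
    """Append the new release and keep only the most recent `window` entries."""
    return (versions + [new_version])[-window:]
```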
E: One thing we did downstream, you know, we have similar tooling doing similar things: we just cut the 1.14 branch early, kept it fast-forwarding to master, and then used that as a staging space for making sure that we'd made all the switches over. So that could be something we could consider for 1.15, I mean.
A: ...Gubernator and stuff. One of the ones off the top of my head is that Jeff Grafton was working on separating out all of the test tarballs. So the same way that we broke up the one big Kubernetes tarball for all architectures, and for Kubernetes server-side and client-side, we're going to do the same thing for the test tarballs, and it's unclear to me how much of that involves changing our CI setup to make sure that stuff is appropriately sized.
F: kubernetes-anywhere is also kind of dying a slow death, which is somewhere in the intersection of SIG Testing and SIG Cluster Lifecycle, and we are spinning up some kind stuff. It's unclear how much of that will be blocking, and when. And I am looking to try to make a shinier kubetest, but I wouldn't expect to make anything major depend on that, probably until after the release. Okay, we'll limp along and try to kill it with fire as soon as we can.
H: I guess the main thing to be aware of is that right now Prow is kind of in a weird state: for the most part it is, in theory, running fine, but we can't really update it right now because of some mess-ups and weird migration patterns that have been happening. We have a fix in progress, probably pretty close to done.
A: Yeah, I didn't want to spend a ton of time on all that tactical stuff, but I feel like now's a good time to remind everybody that we are over halfway through the release cycle, and so we start to get to that point where we want to think about what changes we should try to land before code freeze, and what changes we should decide to maybe pull back on because they might seem a little too big or risky or disruptive.
I: There are already links to Spyglass, but Eric has suggested that the links should be bigger and more obvious, so I will do that too.
A: Also, correct me if I'm wrong: I'm under the impression that Spyglass is a tool used for viewing job artifacts and, to some limited extent, job history, but that there are other things that Gubernator does, such as provide a PR dashboard that lists incoming and outgoing PRs. Which parts of it are we trying to replace here?
A: I will find out offline, and I'm not necessarily voluntelling a friend to do that demo; I just feel as though that's a good place to start some kind of roadshow, to make sure that people are aware this is coming. And the PR dashboard will still live for a while, until we find something that is more convenient than that. Okay, is it permanently dark mode, or is there a light mode? Is there a switch?
K: I would encourage people to maybe check it out. I feel like it's fairly... the most controversial thing in it is not actually super related to the actual change, so I feel like it's fairly cool. I think it would be nice if you could take a look at his doc, but I mean, with the change he made, I'm pretty confident in the details he's put there, so that looks good. I don't know, Steve?
A: Conceivably, this would be a really good win for the community from the testing-cost perspective, on just the sheer volume of jobs that we run. A lot of them are presubmits, and if I understand this proposal correctly, it's about immediately cancelling a presubmit if we know that there's another run that we need to do for it. So we are not doing that today.
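The proposal as described above can be sketched roughly like this (a hypothetical illustration; the job record shape is invented, and this is not the actual Prow design): when a new presubmit run arrives for the same PR and job, the older still-running one is cancelled instead of being left to finish.

```python
# Hypothetical sketch of superseding presubmits: a new run for the same
# (PR, job) pair cancels the older run rather than letting it complete.
def supersede(running, new_run):
    """Return (still_running, cancelled) after a new presubmit run arrives."""
    key = (new_run["pr"], new_run["job"])
    cancelled = [r for r in running if (r["pr"], r["job"]) == key]
    kept = [r for r in running if (r["pr"], r["job"]) != key]
    return kept + [new_run], cancelled
```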
C: Okay, you just captured what I was wondering about. I've seen this in Travis CI: if something gets cancelled because something new enters the pipeline, then resource cleanup just doesn't happen. So cancellation because some new job comes in can be more dangerous than it is useful.
K: I think that this is probably under-designed, and I think that this would be a good thing for us to start thinking about separately, because there's probably more to it. Yeah, it's fairly basic right now and we haven't really done a whole lot, and maybe we don't want to do anything and that's fine, or whatever, but yeah.
K: It's actually the data store that Google uses internally with TestGrid, and so we will have TestGrid support reading from this externally as well. The prow.k8s.io instance has the ability to upload the metadata to there, and I have a utility that is doing that. And yeah, for various reasons, like the fact that not everybody has access to it right now, and that it's not something that can live inside your own Kubernetes cluster, I don't think this will ever replace Spyglass.
K: No, but yeah, this is sort of just an FYI. I think eventually a controller will be reading the data that we have in GCS and moving it into ResultStore on a sort of continuous basis, and we'll see if we can make that useful or not.
A: I'm shocked to hear you're using anything with a database.
K: Hey, we are not managing the data. I'm fine with other people managing the databases; that is their problem.
K: TestGrid is more like a graphical view, right, and this is more similar to Spyglass, except that, theoretically, it will allow... like, it doesn't have searching right now, but I could imagine that, since it is a database, you could say "hey, I want to find results that failed with this error message" or something in the future. But for right now, I think the use case is not there.
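The kind of query being imagined there might look like this sketch (the record layout is invented for illustration and is not ResultStore's actual schema or API):

```python
# Hypothetical sketch of searching stored results: return failed runs
# whose error text contains a given message fragment.
def find_failures(results, needle):
    """Return failed results whose error text contains `needle`."""
    return [r for r in results
            if r["status"] == "FAILED" and needle in r.get("error", "")]
```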
H: Yeah, I was just going to say that I think the main benefit is that we didn't make it. We should not be reinventing every little utility that Prow needs; there are tons of things that are made by other people that are a lot more full-featured and robust, and we should lean on that. There was a request that came in from...