From YouTube: Kubernetes SIG Testing 2018-01-30
Meeting notes: https://docs.google.com/document/d/1z8MQpr_jTwhmjLMUaqQyBk1EYG_Y_3D4y4YdMJ7V1Kk/edit
A: Oh, this is SIG Testing, as in the meeting on Tuesday, January 30th. The meeting will be recorded and posted on YouTube shortly afterwards, if I have figured out how to do all that correctly. I've put the meeting notes in the chat. I guess one thing that I just wanted to start off with was a quick debrief of the co-working day last Friday. Christine and I want to say thanks to Matt and the KSR team for hosting us in the building. It was really cool. Thank you guys for feeding us as well.
B: I thought it was great, and yeah, I'd definitely second that — thanks, Matt and uks, you guys were great hosts. It was good to meet everybody in person, and I thought it was useful to hash out some things. I'd be interested in continuing these, and eventually getting to the point, maybe, where it doesn't necessarily have to be — I feel like this one was mostly about meetings and strategy and whatnot, but I'd also be interested in, you know, maybe if there's something that you and me, or you and Cole, or whoever are working on, where we could bang away at a PR. Have it be more of a typical workday, as opposed to a strategy session, if we continue these on a regular basis. But I think it's...
C: I don't know if I can convince my manager to let me host twice in a row, but yeah. I think the one thing I would say is maybe a little bit stronger of an agenda — particularly if we want to do maybe a little bit of meeting time, followed up with, like, an hour or two of breakout time where you go into small groups and actually hack away on code, and then come back in for another meeting and then break out again. That might help kind of silo things.
A: Okay, it doesn't look like Tim is here, but maybe Mario can fill us in. The other thing I wanted to bring up today was just a super quick item about the Testing Commons. I saw an invite went out on the final meeting time, but I figured it'd be good just to blast it out, if it had been decided on. Maru, do you know?
B: Yeah, I mean, I think this was something we briefly mentioned and talked about a little bit on Friday, right? About how that might help with some flakiness or something, because sometimes spot instances are sold out for a period of time — or, I guess, non-reserved instances. I don't know, Matt, if you have any thoughts or recommendations or whatever, I think.
C: We just need to sit down and look at, you know, what level we should reserve in each availability zone. I know there's also some work on adding more availability zones to testing. I think prior, maybe even right now, there were only three zones being used, so if we spread it out across all of them, how many do we want to reserve in each zone?
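The sizing question above is just arithmetic. A minimal sketch, with entirely made-up instance counts (the meeting does not settle on numbers), of splitting one reservation across the zones:

```python
import math

def per_zone_reservation(total_instances: int, zones: int) -> int:
    """Split a desired reserved-instance count evenly across availability
    zones, rounding up so no zone ends up short of capacity.
    The specific counts used below are hypothetical examples."""
    return math.ceil(total_instances / zones)

# e.g. 50 reserved instances spread over the three zones currently in use
print(per_zone_reservation(50, 3))  # -> 17 per zone
```

Rounding up trades a little over-provisioning for never being under-reserved in any single zone.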
D: Yeah, so the simplest example is: say, in Federation, I want to run a Kubernetes API server, and we run one over an etcd. Is it possible that I use containers to run those things, just like you would in production? And if so, could I actually run a pod, on the cluster that's running the test, that runs my server that I'm interacting with? Technically possible, maybe not realistic, depending on security concerns.

This is for integration testing, and this is for things like, say, Federation, where we're not going to have the things we're testing in tree. So we have to get them from somewhere — we have to pull an image or pull a binary or something from some other place and then bring it up in an environment that we can run tests against.
D: We would actually have to change the prow configuration whenever we updated the test target, and we would have — I mean, I would say, if we had multiple supported versions... I think it's mostly okay for release versions; I guess I'm more concerned about development versions, because I'd expect a release version to be really stable. It's like, oh, you're testing against...

So maybe it doesn't matter in CI; this is more for uniformity across local versus CI. Like, it's more convenient to source an image and run a container if you're running locally than having to download a binary and maintain that binary — keep it up to date — versus "give me the latest container." So maybe it's less a question about CI. It's like, if we can do containers in CI, then that would be preferable, just to keep things the same as when you're doing these locally, I mean.
E: I would hesitate to use something like docker-in-docker just to keep them similar, because there's no reason we can't just have a top-level container that's similar, and I don't see why we need to nest the containers at runtime.
B: To me, it still — I still don't understand why. I mean, if this is for integration, I guess how I would imagine doing it is: if I have repo A, which produces image A, and repo B, which produces image B, I would maybe have some system which is constantly publishing the latest A and the latest B. Then, for my integration test, I would define a prow job that has two containers — or maybe three containers — and have a sidecar container which pulls the latest of A and starts the process for that, and the latest of B, which starts that one. And then my tests can just point at those. I guess it's not clear to me how well that would or would not work, or how that's different from what you're envisioning, but containers seem like a good idea.

But I guess my instinct would be to try to have the container specification be static, so that we could just check that into the job, and then maybe have the image change as the state of the image changes. Rather than having the job start containers, just have the sidecar containers be part of the specification of the job, right?
D: So, I mean, in a simple example: if I wanted to validate, like, aggregation against my kube API, I would need an etcd container and a kube-apiserver container, and then I guess I would have to run my own thing as a binary — but that's fine. Like, I'm running the stuff that I don't own as containers, and then I'll run my own stuff as a binary, because I'm going to be building it right there.
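The setup described above — dependencies you don't own (etcd, kube-apiserver) declared statically as containers in the job spec, with the freshly built code under test run alongside — might look roughly like this. This is only a sketch; the image names, tags, and pod name are hypothetical, not taken from the meeting:

```python
import json

# Hypothetical pod manifest for an aggregation integration test.
# The first two containers are the statically specified dependencies;
# the test-runner step would build and exec the binary under test.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "aggregation-integration-test"},
    "spec": {
        "containers": [
            {"name": "etcd", "image": "example.invalid/etcd:latest"},
            {
                "name": "kube-apiserver",
                "image": "example.invalid/kube-apiserver:latest-green",
                "args": ["--etcd-servers=http://127.0.0.1:2379"],
            },
            # Runs the tests (and the locally built binary) against
            # the apiserver above, over the pod-local network.
            {"name": "test-runner", "image": "example.invalid/test-runner:latest"},
        ],
    },
}

print(json.dumps(pod, indent=2))
```

Because the container list is part of the checked-in specification, only the image tags need to move as new "latest" builds are published.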
B: But yeah, I feel like that would be — you know, like right now we have these files that we upload to GCS, named something like latest-green.txt, and that's how we determine what to check out for our CI jobs. But it might be useful to sort of publish a kube image which is, like, the latest unit-test-passing version of kube.
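The latest-green.txt pattern mentioned above is simple: a marker file holds the most recent version that passed, and jobs resolve it to a concrete artifact. A minimal sketch — the bucket path and repository name here are invented for illustration:

```python
def resolve_image(marker_contents: str,
                  repo: str = "example.invalid/kube-apiserver") -> str:
    """Turn the contents of a latest-green.txt-style marker file into a
    concrete image reference. The repo default is a placeholder."""
    version = marker_contents.strip()
    return f"{repo}:{version}"

# In CI the marker would come from something like:
#   gsutil cat gs://some-bucket/latest-green.txt
marker = "v1.9.2-beta.0.75+abc123\n"
print(resolve_image(marker))
```

Publishing an image tagged this way would let jobs pull the last-known-good build directly instead of checking out and rebuilding source.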
D: In your job — I don't think the goal is to have, like, a full cluster. At least not initially; it's mainly just at the API level. Like, once we're out of tree, we need an API server that we can aggregate against and validate against, and it just seems hokey to have to download binaries versus being able to just use this container and let the container runtime run it. Like I said, it's kind of less of a concern for CI; it was just like, well, if we're gonna do it one way locally...

It sounds viable. The only thing that's maybe a problem in my mind is how we manage configuration for the API server. Like, if we're running it in a pod, we're setting some degree of configuration. How could we dynamically change the configuration, I guess, is the question — to do something like aggregation, I mean.
E: I mean, I would definitely add that it is also 100% possible to do docker-in-docker if you need it; it's just that it adds its own needs and mess, and Kubernetes already does a pretty good job of managing containers for you. So if we can stick to just, like, you have a job container... I don't know if it's worth it, if you're gonna...
D: Right, yeah. I guess maybe I should put more thought into what the configuration requirements are and how that could be implemented. If it was just an API server you're just using, and you don't have to change the configuration, that's one thing; but if we have to vary the configuration, then it sounds like maybe docker-in-docker, or binaries. So I'll do some more work, yeah.
A: I mean, my immediate response to having a more complicated surface there — with, you know, config types or whatever — is I think we'd have to think very carefully about how prow would expose that API to jobs, and how we would, I don't know, have a namespace per project, or, like... Not just that; it's pretty far outside of the current scope of that, yeah.
D: I mean, this is really kind of exploratory — like, the first of the things out of tree that are going to start wanting to do this. This is maybe close to registry Federation, but I don't imagine... I think this is something that is going to be increasingly common around testing. Like, with the new frameworks repo, the goal is to have something where someone could create a new API and actually just reuse this infrastructure. So, all that to say, I don't really have all the answers.
E: And, somewhat tangentially related — I don't want to go too deep into it, but Quentin's had some more progress, hopefully, on finishing upstreaming the local nested docker-in-docker cluster. At some point we'll want to do that, yeah, and that certainly solves integration by just saying, here's a fake cluster. But like Steve said, if you don't really need it, that might be a little heavyweight.