From YouTube: Kubernetes SIG Testing 20180619
Description
SIG Testing meeting, June 19.
A: Hello everyone, this is the SIG Testing meeting for June 19th. I'm Steve. We will be recording this and putting it on YouTube on our playlist after the fact. I've shared a link to the meeting notes in the chat. The first thing that we had was the 1.11 release notes: we've been asked to help fill out that draft for major topics, from this link.
A: So if you've contributed features or, you know, significant bug fixes to anything under the SIG Testing umbrella, go ahead and jump over there and maybe add some PRs. That's one thing I wanted to talk about. Most of these items are from last meeting, where we didn't have quorum. At the meeting two times ago, we had a conversation about making the scalability presubmits blocking; the agreement there was that we would turn them on, pending any blockers from, like, the release team, just based on where we are in the cycle.
A: Sounds like that's unanimous — awesome, cool. So the next thing was, I was gonna give a demo of what we have been calling the pod utilities. A couple months ago we started thinking about how we could put a little bit more of the infrastructure that runs for all of our tests into the test framework, and kind of take it out of the hands of the people writing the tests. So we basically decided that it would be cool if we could have a test definition that was as minimal as possible.
A: I think I should be sharing now — are you guys seeing the deck here?

C: Yep, looks good.

A: Awesome, cool. So we've got a PR here, and we are going to trigger a test run and watch magic happen. So yeah, as I was saying, we wanted the actual configuration that users had to bring to the table to be pretty minimal. The unit test job that we just triggered — in this view here we're looking at the configuration for it, and hopefully the font here is large enough — but as we see, this is a pretty basic presubmit configuration.
A: We are running on the master branch, we have a rerun command, and the spec is actually super, super minimal. We're using the latest golang image from Docker Hub and running `go test`, and we don't have to worry about which repo we're in or what we're syncing; we don't have to worry about any of the pull ref stuff.
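The kind of job being described might look roughly like the following sketch, assuming Prow's presubmit configuration schema; the org/repo and job name here are hypothetical, and `decorate: true` is the flag that opts a job into the pod utilities shown in this demo:

```yaml
presubmits:
  example-org/example-repo:          # hypothetical repo
  - name: pull-example-unit-test     # hypothetical job name
    branches:
    - master
    always_run: true
    rerun_command: "/test unit"
    trigger: "(?m)^/test unit"
    decorate: true                   # Prow injects clonerefs, the sidecar, etc.
    spec:
      containers:
      - image: golang:latest         # plain Docker Hub image; no special base image
        command:
        - go
        - test
        - ./...
```

With decoration enabled, cloning, artifact upload, and GCS bookkeeping are handled by the injected utilities rather than by the test author.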
A: If we look at the Gubernator output for it, we're gonna notice we see pretty much the same thing — I know in Kubernetes there's a couple more pieces of information up here — but, you know, we see that it failed, and we see that it ran for 17 seconds, and if we look at the logs, we see that `go test` was run and we have verbose output from it here.
A: If we go to artifacts here, we can actually start looking at some of the internals of how this happened. Under this logs directory, we actually have output from all of the little utilities that we added to the pod that was running. So the first thing that happened was we ran a tool called clonerefs, and what that did was clone all of the references that we needed from git.
A: Let's see — it made a new branch, it merged in the PR, and it put that up, merged into the master branch, and that's what we ran everything against. After that, a little utility ran that uploaded our clone records: it uploaded the log from the cloning step, and it uploaded the started.json. It also ensured that the latest build got updated in GCS for Gubernator, and that ran right before the test.
A: place-tools actually has no logs, because that utility just injects some tooling into the image that was provided. And then we see that a sidecar container was running at the same time as the test, and once the tests finished, the sidecar container uploaded things: it made sure the latest build list was still accurate, it uploaded the build log, and it uploaded the finished.json. And the actual test container itself —
A: All it did was run `go test`, and the output came from there. What's cool about this process is: if we look back at the configuration, we notice that the image that we're running in is just the Docker Hub image. So in order to have all of this work, we don't need a specific base image, and if a project is already containerizing their tests, and they're already using an image to run them in, they —
A: — can worry more about their tests. So the status of this feature right now is that it's live in Prow, and I think slowly jobs will be moved over to it. But I know for RedHat at least, we are super excited about this, because it means that for any of the organizations or projects we're interacting with, if they already have their tests written in a containerized fashion, it's super easy for Prow to trigger that test in their own container and sort of just, like, add the Prow-ness on top.
A: So I think right now there's not much momentum behind that. Yes, the overview dashboard that you saw there, Gubernator, expects GCS. A lot of the other tooling that feeds Testgrid and whatnot — all of that is expecting that specific layout in GCS as well, and I think right now it's just kind of been a de facto standard, based on the momentum behind it.

C: Okay, thank you.
D: In general, we've tried to keep Prow decoupled from the platform that it's running on, so it should run on any Kubernetes provider. That being said, we do have a bit of a tighter coupling with GCS: all of our tooling kind of expects GCS right now. That could be changed in the future, but as is, everything is pretty tightly coupled.

C: Thank you.
C: That was actually me that raised it, 'cause I was new to it and I was looking for where it was. I'm just, I guess, conditioned at this point to look for the one that had my name, right — and that was the only reason. I wasn't suggesting that you needed to; it was just, like I said, that I'm conditioned to look for that and not my name, and I didn't see it. So when you're looking for that, is that information —
E: — for the group? Hey Tim, hippie hacker, or Matt — you guys, you know — have you guys seen that pattern in other SIGs, and how have you found it useful, if at all? I mean, if it's kind of just a bunch of, you know, busy work, I would say maybe not use it, but if it is helping people, I think it's —
F: I typically do that. Say you see my note in there — you see it's prefaced with "Tim". I do that because that's the way we run SIG Cluster Lifecycle, and it is useful for people who want to follow up on items, because you do get, like, randos that put comments on the doc, and, you know, they're basically trying to follow up to figure out, if there's an action item, who to poke.
C: And that's why I suggested it may be more relevant for working groups, since the SIGs tend to be larger. But again, I think it goes back to what Tim said: I can go to the attendees if I'm sort of a noob — oh, that's Tim, and there's his email address — but for people that attend every week, you know, y'all go "Tim — yeah, I'm just gonna call Tim right now," 'cause you get meetings with them every Friday or something, you know. I think it's more — it's more —
A: Cool. So as an action item, I'll try to keep these topic lists with names, so that anyone who's completely new has a little discoverability there. Does that sound reasonable? The next item, I think, was added a couple weeks ago and kept getting moved; it's a little bit nebulous, and I'm thinking —
E: Like, this was potentially — I mean, it might be Cole's, I don't know. Cole, I feel like maybe we can move this to next week, and I feel like there are other people at Google who would want to weigh in on this — we're not — yeah. Cole, do you feel comfortable driving this topic? You'd maybe be the most appropriate person. — Which topic? — The Prow architecture going forward, as an integrated product, is an agenda item.
A: I mean, this seems like a pretty large one, so maybe we could do a breakout, and sort of try to frame that breakout with, like, some action items — like, what are we trying to achieve, what kind of discussions do we have? I guess — is this — are we planning for the summer? Like, are we planning things for the next quarter?
C: And, I mean, I'm never the smartest guy in the room, but I hope I'm not the dumbest, and some of the introduction and overview of the architecture — I think it kind of assumes that you're already kind of onboarded into the project by working with somebody else. It's, I think, a little difficult to get started sort of looking at it fresh, especially if you're, you know, used to the typical hub-and-spoke model of something like a build server.
D: I'll probably send you some doc reviews, then. — Happy to do it. — Yeah, I'm not sure how useful getting, like, overall feedback from you is right now, just because we know that there's tons of missing documentation. So I'm thinking that maybe once we've supplied some of that, having you, you know, take a look and see what's missing at that point might be more useful.
C: Absolutely. It's a little more than that, though — and I don't want to turn this meeting into that — it's, like I said, everything in the end where we can be a platform for actually executing those tests as well, in addition to being able to place things into those buckets. Maybe somebody needs access to VMware cloud services, or to certain types of storage, that we could more easily provide than, you know, Google's able to do quickly.
A: Cool. So it sounds like for this topic we should probably have a breakout, and maybe frame it as the planning — or just identifying places where we really need to get some work done in the next quarter. So maybe Eric and Cole, since you guys are pretty good at making those meeting invites for breakouts, could you take that one?

— Yeah, I can schedule something.

A: Awesome, thanks.
F: So what we're looking to do — and will probably do in the 1.12 cycle; I talked with Matt Liggett a long time ago — was to basically PR the creation of this container so it can be used by people outside of Google, mainly for a lot of folks who are just basically vetting their environment ad hoc, on the fly, or as part of a release process.
F: So this is one of the other containers, for Sonobuoy, which is basically just a wrapper for the e2e tests. It does provide some extra benefits that are useful for auto-rolling things up, using signal handling from Sonobuoy — so Sonobuoy can signal the worker to auto-roll itself back up if it sees some type of error along the way. But there's nothing special about it, and Matt Liggett and I chatted a long time ago —
F: In fact, he was one of the main contributors to some of this stuff a while ago, when I was originally creating it — about putting this in the mainline repository, so other folks who are consuming the tests just have a single canonical reference that isn't conflated with other Google-isms pertaining to, like, Testgrid and everything else. Because it's just a consumable binary — though there might be wrapper scripts and other things — so it's probably going to mainline, yeah.
E: I feel like the primary weakness with kubetest is that it makes a lot of assumptions about what your cluster name is, and you have to set all these different environment variables. The really nice thing about Sonobuoy is I can just have a cluster, point, you know, the conformance suite at it, and it'll just run.
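The contrast being described might look something like this hypothetical session (assuming the `sonobuoy` CLI is installed and your kubeconfig points at a reachable cluster; exact subcommands and flags vary by version):

```shell
# Sonobuoy needs no provider-specific environment variables:
# it talks to whatever cluster the current kubeconfig context points at.

sonobuoy run          # launch the conformance/e2e suite inside the cluster
sonobuoy status       # poll until the aggregator reports completion
sonobuoy retrieve .   # download the results tarball for inspection
```

This is the property praised above: no cluster-name assumptions or environment wiring, just a kubeconfig.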
F: With a lot of the other testing containers — at least when I looked at them, which was, like, several releases ago now — there was a bunch of other jiggery all kind of tied into test-infra, and for consumability purposes we wanted to do just one thing, very simple and clean and consumable by anyone. — Right, yeah.
B: I mean, we still need a container that does, like, the cluster bring-up things as part of the test, but that can be, like, an addition. The other jiggery like you're talking about is the bootstrap stuff, and as was discussed in the beginning, we are pulling that out of the tests. I would expect, probably within the next quarter, that tests won't know anything about, like, GCS or anything like that; they'll just be putting files in a certain directory and expecting that the other jiggery will pick that up.
B: The thing was that there were some other projects — like Charts was the one at that time — that want to do some testing, and we were just pointing out that we do have, you know, a credential that we're using that was designed for, like, out-of-the-box testing, and we could probably set them up with that, or they can set up their own thing.
B: What the expectation was, was just to make sure that all the quota is sorted out if they're gonna have anything go. I have no idea what their resource needs are right now; they're just running a GKE cluster every once in a while, like when they have a PR. It's a lot smaller than Kubernetes, so they may not even need much, but I just wanted them to talk to you guys and make sure that's sorted.
G: Okay, all right — I'll check and see if we want to do, like, a second account, or just piggyback onto the one account that has the other charts, yeah.
G: Okay, and then — I'm fairly sure this wasn't related to that, but there was somebody who'd reached out, essentially trying to get more of the tests running on AWS, as opposed to just the AWS-specific ones. That's what I kind of got from it — I got it secondhand. I just wanted to ask: was that the intention? Does anybody know? Do you remember who reached out?

B: I don't — there have been a couple. This is one of the ongoing discussions, and something that we're hopefully fixing soon. It's just that, like, the actual runners and stuff are run by our team, and the way the billing's set up, we can't add external people directly to it. You know, all the code's open source, and the configs are open source, but, like, actually going in and, you know, managing the cluster is not. Okay, though — someone was actually exploring AWS as a way to do that.
F: I would love to talk more in the future. Right now we're gonna do a POC for SIG Cluster Lifecycle — to kill, with a mighty sword, the kubernetes-anywhere integration that exists — and try to go to Cluster API for deploying, with AWS as a primary target along with GCE. That way we'd have test signal on the two major clouds for kubeadm deployments. That's —
H: To share — the first one is, I wanted to see who's actually running Prow out in the field, to kind of get a list. Is this the place where we will continue those ongoing discussions? We talked about the breakout; I know that we're all interested in seeing the Prow architecture used everywhere. And then the second one is: what about running this infrastructure, based on this, for open projects in a more community-focused way, where it's not a particular company, or — that approach. I just wanted to — I know this is ongoing.
B: There are a bunch — I spent some time on this this quarter. There are a bunch of other deployments. Mostly, what we're talking about is that there are some things right now actually in the repo, and things for customers, that are conflated with the specific deployment for the Kubernetes project, which is run by the Google EngProd team for the moment, mostly just because of the scale.
A: — and I think we've had both the conversation that potentially we should split apart the Prow core from the Prow deployment configuration, and maybe have a separate repo for the code, and we've also had a conversation, maybe eight months ago as well, about how we get people that are not on Google EngProd to contribute to on-call for that specific cluster. I'm not sure either of those two conversations ever came to a head. Well —
B: I can say that I've gotten some signals, going further up, that being on call — that should start to move forward. We're expecting about a month until there should be a Kubernetes account — I'm gonna say a small one, with credits — and then it will just be a matter of, like — it's gonna be tricky to actually move a bunch of this stuff, since it's so highly consumed everywhere. The other discussion, of splitting up the config, is going to be necessary as part of that, and someone is doing some wonderful work on that right now. Sounds —
E: I mean, I would say that ideally we would want this to be more of, like, the CNCF, you know, Prow deployment, but right now the Google Prow deployment and the CNCF Prow deployment are kind of conflated, and we'll be able to separate them once the CNCF — once we have somewhere else to run it. Okay.
H: I got a chance to sit down and talk with Erin for a good long time about the story and relationships and how we got here with test-infra, and I definitely want to sit down with them from the CNCF perspective and see how we can start to connect all that together and start to do this in a more community fashion. I'm not sure of the exact next steps on that, but I'll touch base with the leadership in the CNCF and ask for some direction there. Basically —
A: — saying that the largest Prow deployment touching Kubernetes repositories runs under Google EngProd — got it, now I understand. Yes, I will say there are a number of kubernetes-incubator and kubernetes-sigs repos where our deployments are actually also triggering tests. Yeah, I think — yeah — moving these entirely into CNCF stewardship, and having, like, a more community-focused ops and maintenance for it, has been on the list for some time; it's just pending. We're —