From YouTube: Kubernetes SIG Testing 2018-10-09
A: Hi everybody, I'm Aaron Crickenberger. Today is Tuesday, October 9th, and this is the SIG Testing meeting. This is being publicly recorded and will be posted to YouTube, so please remember that we adhere to a code of conduct, which basically boils down to: don't be a jerk. Alright.
A
So
what
I
wanted
to
roll
through
real
quickly
today
was
a
quick
discussion
around
an
initial
stab
I
took
at
our
charter
for
the
sake
and
then
just
to
call
out
that
I
signed
us
up
for
a
couple
follow-up
items
from
the
112
or
least
retrospective.
This
may
not
be
all
of
them,
because
I
believe
the
rest
of
it
is
supposed
to
happen
at
the
cig
release
meeting
today,
but
just
to
highlight
some
of
these
things.
A: So basically, those of you who haven't served on the steering committee probably haven't seen too many of these charters. The idea here is we just walk through, real quickly, what is in scope for the SIG, what we consider explicitly to be out of scope for the SIG, and then, if and how we deviate at all from the governance template, that gets called out. The governance template calls out things like what the roles are; there should at least be a chair role.
A: There can optionally be a tech lead role, there are subproject owners and members, there's a security contact, and then we have a spelled-out procedure for how we go about creating subprojects. I kind of don't care; I'll talk about that. So I tried not to deviate from that as much as possible. I'm gonna go back to the review view for this, because I had a couple things for discussion.
A
So
what
I
call
out
as
in
scope
for
us
is
we're
interested
in
effective
testing
of
kubernetes.
We
do
not
write
or
troubleshoot
the
projects
tests,
but
we
focus
on
tooling.
That
makes
it
easier
for
the
community
to
write
and
run
the
tests
and
contribute
analyze
and
act
upon
with
the
test
results,
and
so
that
covers
things
like
project
CI,
automation,
extracting
test
results
to
a
public
data
set,
displaying
the
results
and
artifacts.
Although
we
don't
write
your
jobs
for
you,
we
do
want
to
make
sure
that
it's
easy
to
manage
hundreds
of
jobs.
A: So we do configuration management of jobs, and we have some tools for that, and then tools that make it easy to do local testing, things like Planter or Greenhouse. Planter is what I was thinking of, not Greenhouse, sorry. Anyway, I put fejta-bot as in scope for the SIG. I don't know how fejta feels about that, but I think at least the jobs that it runs are probably in this SIG.
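(Since declarative job configuration is what makes "hundreds of jobs" manageable, here is a minimal illustrative sketch in Go of decoding such a config. The job names are hypothetical and the struct is heavily trimmed compared to the real Prow schema in kubernetes/test-infra.)

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2" // any YAML decoder works; this one is assumed as a dependency
)

// Periodic is a deliberately trimmed sketch of a Prow periodic job entry;
// the real schema in kubernetes/test-infra carries many more fields.
type Periodic struct {
	Name     string `yaml:"name"`
	Interval string `yaml:"interval"`
}

// Hypothetical config; real files hold hundreds of entries like these.
const sampleConfig = `
- name: ci-example-e2e
  interval: 2h
- name: ci-example-unit
  interval: 30m
`

func main() {
	var jobs []Periodic
	if err := yaml.Unmarshal([]byte(sampleConfig), &jobs); err != nil {
		panic(err)
	}
	// Keeping jobs in declarative files like this is what makes bulk
	// edits and review of hundreds of jobs tractable.
	for _, j := range jobs {
		fmt.Printf("%s runs every %s\n", j.Name, j.Interval)
	}
}
```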
A
I'm
explicitly
calling
out
that
we
at
the
moment
are
in
charge
of
keeping
all
these
things
that
we
own
running
on
a
best
effort
basis
I
like
for
that
basis,
to
be
better
than
best
effort,
but
we
can't
really
do
that
until
we've
transitioned
this
to
the
CNC
F,
at
which
point
I
would
expect
there
to
be
more
fleshed
out
on-call
policy,
but
for
now
just
to
make
sure
that
there
is
an
owner
for
it.
I
think
it's
here
in
terms
of
cross
cutting
processes.
A
I
think
we
overlap
with
stake,
release
specifically
and
their
responsibilities
of
the
test
in
control.
I
think
when
we
roll
out
changes
that
impact
the
project
as
a
whole
that
we
mostly
follow
this
pattern
of
interacting
with
state
contributor
experience,
notifying
kubernetes
def,
with
lazy
consensus
and
providing
at
deadlines.
My
concern
here
is
that
this
sounds
really
squishy,
there's
actually
no
enforcement.
This
is
just
like
we're
going
to
be
responsible
quote-unquote.
A
There
are
no
criteria
for
how
would
apply
we're
gonna
be
responsible,
I,
don't
know
if
anybody
here
has
any
strong
opinions
on
like
if
we
should
have
thresholds
or
gates
or
criteria
by
which
it's
okay
for
us
to
just
go
ahead
and
roll
a
change
out
versus
actually
having
to
go
through
more
of
a
formal
process,
and
then
I
also
think
that
we
reserve
the
right
to
halt
automation
and
infrastructure
that
we
own
and
also
disable
any
tests
or
jobs
that
we
haven't
written
but
are
impacting
the
project
as
a
whole,
and
then
things
I
call
out
as
specifically
knocking
scope.
A
For
this
sake,
are
we
not
responsible
for
troubleshooting
or
writing
tests
or
jobs
that
are
owned
by
other
stakes
and
we're
not
responsible
for
ongoing
maintenance
of
the
e2e
test
framework?
This
is
also
one
I
had
questions
over,
because
you
could
argue
that
maybe
the
test
framework
falls
under
each
week.
S
framework
falls
under
the
testing
common
sub
project,
but
I
don't
really
see
us
doing
any
ongoing,
regular
maintenance
of
the
e3
test
framework.
Maybe
this
is
something
we'd
want
to
revisit
when
the
refactor
of
the
framework
has
finished
parent.
C: We don't have long-term maintainers for this stuff. I think it's a problem, a historical problem, for the project, right? The end-to-end test framework has kind of been a cobbling together of efforts across a long period of time, and we have gone through some major refactorings, and the testing-commons stuff is not necessarily meant to solve the end-to-end testing framework problem. It's meant to solve some of the generic, abstract spin-up pieces for testing frameworks. So I do think we should own it.
C
A
lot
of
the
people
in
this
sig
that
do
attend
this
meeting
are
the
people
that
have
worked
on
it
or
it's
just
a
matter
of
whether
or
not
we
have
resources.
You
know
within
the
individual
organizations
to
be
able
to
actually
execute
on
it.
Okay
drive
things
in
a
good
in
a
good
way
when
PRS
come
in,
but
we
don't
actually
have
time
to
devote
to
making
it
better
an.
B: The optics thing: when you said, you know, we don't own it, Aaron, when you said that, and it's in the subproject list, I figured that might look odd then. But certainly, if somebody is interested in augmenting it or working on it or doing something with it, you know, it's gonna fall on this SIG to interact with them for that. So to then say that we're not responsible for the maintenance seems a weird position. But what you said about resources, that all makes sense. The matter of resources, I don't know, yeah.
A
Hey
I,
agree:
I,
agree
with
that.
It's
more
like
I'm,
trying
to
just
call
out
the
state
of
today,
which
is
that
oh,
no,
no,
it's
fine
like
it
is
effectively
unmaintained.
So
I
don't
know
what
the
best
way
to
do.
That
is
is
to
say
like
we
own
it,
but
we're
actively
not
maintaining
it.
You
can
do
that,
but.
C
Alright
I
think
we
should
use
it
as
a
forcing
function
like
within
people
the
people
within
the
city,
if
you're
attending
this
call,
you
know
like
it,
are
you're
only
including
test
automation
or
actually,
including
the
tests
themselves
and
making
the
framework
better.
So
that
way,
we
deflate
the
actual
core
problems
themselves.
I.
A: And yeah, at the moment that's not something we have the resources to control or staff. And that is something I thought fell under the purview of the testing-commons subproject, so we could call that out, but I'm not sure anything is actively being done there, right? Like, what you're describing sounds like we'd really love it if somebody went through and did a thorough audit of everybody's test cases and documented good practices and bad practices.
B: No, I view it kind of like any other thing: you use it long enough and you start to see all its problems, but forget all the solutions it provided in the first place. I don't know if you're speaking directly to Ginkgo or not, but I think that's getting off... I think I'm taking us off on a tangent, yeah.
A
That
is,
that
is
one
of
them,
but
so
I
mean
to
to
Tim's
point
like
maybe
it
does
make
sense
to
say
like
we're
the
owners
of
this
and
but
also
call
out
that,
like
it's,
we're
not
really
doing
anything
with
it.
It's
not
on
our
roadmap,
it's
not
on
our
radar,
but
if
you
are
interested
in
doing
something
with
it,
just
so
you
know,
there's
a
point
of
contact
can
can
talk
about
it.
Does
the
SIGGRAPH?
Is
he
actively
working
on
something
they
own?
No.
C: They don't, and there's a bunch of things that are that way too. So I think it's just a matter of saying that this is where it's housed; go here to ask questions. It's really inconsistent how we specify which things are actively being maintained versus which things just kind of exist in perpetuity.
B: Isn't this a case where, you know, what's in the e2e tests... and pardon my ignorance, I haven't looked directly, but I imagine, right, people are writing tests, and tests can get promoted and whatnot, without actually affecting the general framework that the e2e tests represent. You still can add and remove tests from it, so in a way it is being maintained; it's just that the framework is not. But the binary itself, the tests that are included, that's still a consideration that's ongoing, and people address that and discuss that, yeah.
A: If we say we own the e2e test framework, and then somebody comes and asks us a question about the quote-unquote architecture of the e2e test framework, I'm not sure anybody here is actually qualified to answer that question, because it has evolved organically and changed so many hands that, even though it has ultimately landed here, our response would be like, yeah, kind of do whatever you want. Which is why I'm super thankful that a community member has stepped up and said they want to refactor the e2e test framework.
C
I,
don't
necessarily
agree
with
that
statement.
I
mean
people
who
a
lot
of
people
in
this
call.
Well,
some
of
them
aren't
here
like
XD's,
not
here
but
I'm
here,
and
we
helped
create
that
monstrosity
right
just
because
we
had
to
out
of
necessity
all
right,
so
we
understand
what's
there
and
where
the
bodies
are
buried,
but
at
the
same
time
like
we,
we
don't
have
time
to
go
fix
it.
So
it's
somebody.
C: I think this is an example of where maybe we should just say, for the time being, that it is homed here and we are actively looking for people to participate. This is a prime example of where we could do the job board thing, or the other contributor stuff that's been ongoing. Say, like, we could use people to help fix these things. If you really love tests, hey, wanna fix test frameworks? Come attend, and we can point you at a bunch of ongoing issues.
B: Real quick, the point I was making very early on when I joined the SIG: I think, Aaron, you mentioned the idea of this pattern for out-of-tree providers, and I think you kind of mentioned that, hey, we're not really in the business of wanting to tell the different things how to do a thing. It's their call, their decision to do that, and they can maybe ask us for help. But we don't really take a super proactive approach to that, because we want different ideas to come up.
B
You
know
organically,
and
you
know,
sort
of
the
best
of
breed
and
I
think
the
e2e
test
by
I
mean
the
framework.
Is
there
for
a
reason
but
they're
there
for
multiple
reasons
and
some
of
those
reasons
ultimately
could
kind
of
get
separated
out
into
you
know
it's
on
you
like
we'll
give
you
a
place
to
display
the
results,
but,
aside
from
you
know
the
broad
set
of
tests
that
define
conformance
and
some
other
aspects
of
it.
B: ...the architecture, even ostensibly the binary that's used to run it: to take that away and have everyone do their own thing would ultimately, today, be taking away the ability to easily run the conformance tests. I was just using that as an example, but that's sort of a Kubernetes-wide set of tests, whereas for a lot of other stuff, I mean, yes, historically this SIG has helped provide tooling, or maintained tooling, to run those for people, but like you said, Aaron, we don't really care how they do it. Yeah.
A: Yeah, so I get your point. If somebody else wants to make a different testing framework for them to write and run their tests, and it works well for them, we're happy; that's great. We don't have to actively get involved. If it works so well that it could be used by other SIGs, maybe we want to help promote that, but it's not something we would necessarily enforce.
A: So I think Patrick raised a pretty good, concrete question, though: if a vendor or a Golang version change breaks e2e tests and needs work, will SIG Testing find someone to fix it, or is it on whoever broke it? There I would say, yeah, probably we would; we're definitely involved in the Golang changes right now, since we just whitespaced the world for Go 1.11. I mean, I was about to say we do that, right? So I don't know. I know that vendoring is within scope, but I feel like we're maybe beating a dead horse at this point, so I'll just say that the e2e test framework is owned by the SIG, and I would really appreciate a thorough review from you, Tim. And I will poke fejta and Steve to make sure the chairs have had a look, but there are many active members of the SIG in this meeting right now.
A
So
we
got
10
minutes
left
I
wanted
to
move
on
real,
quick
and
just
share
where,
where
what
I
have
signed
us
up
for
as
part
of
the
112
retro,
these
are
all
things
I
put
down
under
my
name
but
I'm
happy
to
farm
these
off
to
whomever,
or
they
may
actively
be
worked
on
by
other
people.
So
I'll
give
bottom
up.
A: I also think we should be removing jobs that have been continuously failing for over N days; let's call it 120. If nobody has actively fixed a job that's been failing for 120 days, I don't think anybody's paying attention to it, so I don't think it's really providing us any signal. I'm planning on doing that not on a one-shot basis, but by setting up a query that will give us a continual list of jobs that are eligible to get kicked out.
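(A minimal sketch in Go of the kind of staleness check being described, assuming hypothetical job names and last-passing timestamps; the real version would be a query over the project's public test-result dataset.)

```go
package main

import (
	"fmt"
	"time"
)

// lastGreen maps job name -> time of its last passing run.
// Hypothetical data; in practice this would come from querying
// the public test-result data set mentioned earlier.
var lastGreen = map[string]time.Time{
	"ci-example-flaky":   time.Now().AddDate(0, 0, -7),
	"ci-example-dead":    time.Now().AddDate(0, 0, -380),
	"ci-example-ignored": time.Now().AddDate(0, 0, -121),
}

const maxFailingDays = 120 // the threshold floated in the meeting

func main() {
	for job, t := range lastGreen {
		if days := int(time.Since(t).Hours() / 24); days > maxFailingDays {
			fmt.Printf("%s: no passing run in %d days, eligible for removal\n", job, days)
		}
	}
}
```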
A: There were a lot of concerns over whether or not we were using the same sets of jobs for different releases, so this is about reconciling the jobs used across each release: making sure that the jobs have descriptions, making sure that the jobs are consistently named, and then reconciling these with the jobs that are on release-master-blocking at the moment. This is why we raised the question of what the conformance tests are doing here on the 1.12 blocking dashboard.
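(A sketch of what reconciling names and descriptions could look like, assuming a hypothetical release-suffix naming convention and made-up job entries; a real check would walk the loaded job config instead.)

```go
package main

import (
	"fmt"
	"regexp"
)

// job pairs a name with its description; a trimmed sketch of what a
// job config entry carries.
type job struct {
	name, description string
}

// Hypothetical entries illustrating the two problems named above.
var jobs = []job{
	{"ci-kubernetes-e2e-gce-1-12", "runs e2e tests on GCE against release-1.12"},
	{"ci-kubernetes-e2e-gce-112", ""}, // inconsistent suffix, no description
}

// Assumed convention: release-suffixed jobs end in -1-12, -1-11, etc.
var releaseSuffix = regexp.MustCompile(`-\d+-\d+$`)

func main() {
	for _, j := range jobs {
		if j.description == "" {
			fmt.Printf("%s: missing description\n", j.name)
		}
		if !releaseSuffix.MatchString(j.name) {
			fmt.Printf("%s: name does not follow the release-suffix convention\n", j.name)
		}
	}
}
```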
A
As
of
right
now
there
are
30
jobs
on
the
blocking
bridge,
which
seems
like
an
awful
lot
of
jobs
if
they're
all
green
and
passing
all
the
time.
That's
great
if
they're
flaky,
that
might
be
a
different
story.
So
again,
I
would
like
to
go
back
and
once
we
reconcile
the
jobs
here,
I
plan
on
going
back
and
documenting
water,
the
criteria
that
a
job
should
meet
from
a
metrics
perspective
to
be
eligible
to
be
on
the
blocking
dashboards.
So
it's
got
to
take
no
longer
than
this
time
to
run.
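(A sketch of what such documented eligibility criteria might reduce to, with hypothetical thresholds; the actual numbers were still to be decided at this point.)

```go
package main

import (
	"fmt"
	"time"
)

// metrics is a sketch of per-job stats; real numbers would come from
// the test-result data set.
type metrics struct {
	passRate    float64       // fraction of recent runs that passed
	avgDuration time.Duration // average time for a full run
}

// Hypothetical thresholds standing in for the criteria to be documented.
const (
	minPassRate = 0.9
	maxDuration = 2 * time.Hour
)

func blockingEligible(m metrics) bool {
	return m.passRate >= minPassRate && m.avgDuration <= maxDuration
}

func main() {
	jobs := map[string]metrics{
		"ci-example-fast-green": {0.98, 45 * time.Minute},
		"ci-example-flaky":      {0.72, 90 * time.Minute},
		"ci-example-slow":       {0.95, 3 * time.Hour},
	}
	for name, m := range jobs {
		fmt.Printf("%s eligible for release-blocking: %v\n", name, blockingEligible(m))
	}
}
```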
A: And I wouldn't necessarily disable a job that misses those criteria. Okay, I conflated two things there: I would move it off of a release-blocking dashboard, but it would still be running; it would just be someplace that is considered non-blocking from the release team's perspective. But for those jobs that have been failing... like, there are jobs that have been literally failing for over 380 days now; those I will most likely go through and delete.
A
I
would
like
to
make
sure
I
give
people
a
chance
to
recognize
that
they
actually
do
care
about
that
job.
They
just
had
no
idea
but
I'm,
not
gonna.
Let
these
things
hang
out
forever
and
then
the
final
thing
is
upgrade.
Tests
are
pretty
atrocious
right
now
you
have
no
real
quick
way
of
knowing
what
version
of
kubernetes
you
are
upgrading
from
and
what
version
of
cupidity
is.
You
are
upgrading
to
this
isn't
displayed
and
gibber
nadir.
This
isn't
displayed
in
test
grid.
A: We have the ability to display these things if we only parse them out, so I'm going to try and modify kubetest to parse this out. But this is definitely something that any new contributor could help with, so I threw a Help Wanted label on it.
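(A sketch of the kind of parsing being proposed, assuming, hypothetically, that the from and to versions are encoded in the job name; real upgrade jobs in test-infra vary in how they record this.)

```go
package main

import (
	"fmt"
	"regexp"
)

// Hypothetical upgrade-job names; many real ones encode the source and
// target versions, or channels, somewhere in the string.
var names = []string{
	"ci-kubernetes-e2e-gce-1-11-1-12-upgrade",
	"ci-kubernetes-e2e-gke-stable-master-upgrade",
}

// Assumed pattern: "<from>-<to>-upgrade", where each side is a minor
// version like 1-12 or a channel like stable/master/latest.
var upgrade = regexp.MustCompile(`-(\d+-\d+|stable|master|latest)-(\d+-\d+|stable|master|latest)-upgrade$`)

func main() {
	for _, n := range names {
		if m := upgrade.FindStringSubmatch(n); m != nil {
			fmt.Printf("%s: upgrading from %s to %s\n", n, m[1], m[2])
		} else {
			fmt.Printf("%s: versions not encoded in the name\n", n)
		}
	}
}
```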
A: Those were the things that I know were kind of a bumpy ride for 1.12. Does anybody have anything else that I should bring up during the retrospective?
F: Can you hear me? Oh, sure; my audio went crazy. It's pretty short; I can go back to my notes. Sure: there's a conversation going around about what it looks like when we start transitioning things to the CNCF, because right now, with APISnoop, I do need to start engaging and creating some type of pipeline to fill our buckets with more audit logs.
F: Audit logs that include what the real world is doing. And I just wanted to start a conversation around that. We have a cncf-ci bot on GitHub and Slack, and on GitHub we tried; there was an experiment we did last week, maybe the week before, where we tried out what happens when you create a ticket or mention the cncf-ci bot, and it says that...
D: That said, I'm not sure, like, are you looking for every project to run their own, or just to have their own? What has previously been effective for us is, like, we give a SIG or a repo or something a subdirectory of the config. We talk to them about the appropriate people to own the config, we set up an OWNERS file, and then they just make PRs, currently to test-infra, but we could move it at some point.
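(A minimal sketch of the OWNERS-file arrangement D describes, with hypothetical usernames, shown decoded in Go via a YAML library; real OWNERS files support more fields, like labels and options.)

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v2" // assumed dependency
)

// owners mirrors the basic fields of a Kubernetes OWNERS file.
type owners struct {
	Approvers []string `yaml:"approvers"`
	Reviewers []string `yaml:"reviewers"`
}

// A sample OWNERS file for a delegated config subdirectory;
// the usernames are hypothetical.
const ownersFile = `
approvers:
  - alice
  - bob
reviewers:
  - carol
`

func main() {
	var o owners
	if err := yaml.Unmarshal([]byte(ownersFile), &o); err != nil {
		panic(err)
	}
	// Approvers listed here can approve PRs touching this subdirectory
	// without pulling in the repo's root approvers each time.
	fmt.Printf("approvers: %v, reviewers: %v\n", o.Approvers, o.Reviewers)
}
```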
D: If we mean to. I think the more interesting thing is actually going to be getting all of that running somewhere, and for that there's a whole lot of work that needs to happen first. I think something like having our own Prow is just gonna be a lot of operational burden; like, someone has to maintain that, yeah.
A: Yeah, so what you're talking about, I feel like, is also kind of around a corner in the future, based on the velocity of the k8s-infra working group. We're still in the early stages there of trying to figure out: what is the Google Cloud project structure we want, what are the right teams, what are the right permissions, what are the right naming schemes, and such, before...