From YouTube: CNCF Kubernetes Conformance WG Meeting - 2018-02-08
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
Okay, let's go ahead and dive in. This is Dan Kohn from CNCF. I see we have 15 folks on the call right now, which I think counts as quorum. Insanely, there are like 250 people on this mailing list... no, 117 people on the mailing list, which just says there are a lot of people who care about software conformance. So we haven't had scheduled meetings, and we had some requests to set this one up.
A
But I would really compare it to almost any other conformance program in the history of software, or open source software, or something like that, where it's almost unprecedented to have gotten... I don't have the exact numbers in front of me, but it was 38 companies out of the gate, and we're currently up to 49. I'll just paste that spreadsheet in, and so obviously that's a success.
A
Okay, well, let me stop there. Any other intro comments or questions about where things stand? I kind of presume that most of you are watching the GitHub repo, so between that and the email lists, you're seeing essentially all the interactions that we're having with folks. We continue to have people make those errors, or need help with things, or other stuff, but I feel like we've been able to be pretty responsive.
B
There was one general PSA I wanted to mention, in preparation for what Ken had uncovered. In our conversations during KubeCon, we set up a separate subproject called Testing Commons to be a clearinghouse location for folks if they want to get their PRs reviewed in a timely fashion, or if they want to converse with other folks on the topic of conformance tests, as well as other common testing areas. That's probably the most beneficial subproject to be the clearinghouse for this stuff.
B
There is information on the community site, and I can also post it inside these notes here for other folks to take a look at, but that's the venue for upstream now. Besides SIG Testing, that is: SIG Testing is the main meeting, and then there's a sub-project meeting now specifically devoted to this focus.
D
All right, great. I'll just paste the link in the chat for folks to see.
E
At the top of the doc I put some background, just for future civilizations, or our future selves, looking back at this time and the conformance program. I wanted to give the context that we intentionally focused on the process of evolving the conformance program ahead of filling out the surface-area coverage of the conformance program. That was both to ensure the widest participation, and also because we didn't want a flurry of activity in what was considered conformance leading up to the launch of that program.
E
So that was very intentional. It was a recognized gap even in the very earliest conversations. And just as a reminder, the conformance label was essentially just a text string that anyone could add to any test and call that part of conformance. And so there is a disproportionately high number of conformance tests in some areas where some engineers just happened to think that was useful, but really no rigor around what ought to be part of a conformance suite.
E
I also suggested a couple of toes in the water in API machinery. Garbage collection and watch are important for many of the use cases of users writing custom controllers; for example, they need to be able to depend on the watch API. So the two I suggested for API machinery were garbage collection and watch, and there are some folks looking to add those e2e tests to the conformance program as well. I think SIG Node ought to propose some candidates too.
E
I haven't even taken a swing at what those ought to be, but I think that's an important area as well, and one that will likely drive some interesting conversations about what conformance means as it applies to the kubelet. Then, towards the bottom of the doc, I mentioned briefly the new test debt. I expect we will find areas that either don't have an e2e test, things we wish had existed at some point, or tests that, through this process, we discover ought to exist.
E
There may be two ways to approach this. One is a data-driven test suite, based on earlier work that was done for auth, which seems to be an effective way to create non-flaky tests that don't necessarily focus on the behavior, but focus on the API surface area and the fact that it exists. So I dropped a link in; there's an effort underway for a data-driven test that exercises, I think at this point,
E
Only
namespace
resources
there's
a
possible
follow-up
TR
to
that
for
non
maiden
space
resources,
but
that
is
out
the
future
and
then
we've
also
just
got
through
with
cmcm
contracting,
with
external
vendors
to
potentially
have
a
one-time
amnesty
program
for
things.
Oh
I
wish
this
test
always
existed,
I've,
never
gotten
around
to
writing
it
and
I
expect
there
will
be
a
small
final
percentage,
the
final
ten
or
twenty
percent
that
likely
require
some
effort
to
write
those
death
and,
and
they
would
be
approved,
of
course,
I
the
community.
E
But that is touching on the discussion about getting some funding from the CNCF to outsource that. So I'll stop there. I think that's a fairly good starting point, and again, this is a brainstorming phase. Once we get through the initial pass and it's generally directionally right, I do expect to turn this into a KEP and go through that process, just to make sure there's full visibility. But it's been useful to have initial feedback from this group before it even gets there, and I think that pattern is one we agreed is useful.
F
That's actually something I was going to ask about, because API coverage from just saying you hit the API may not be sufficient. I think at some point it would be really nice if we could get some kind of brainstorming: are there thoughts around possible ways to test the different variants in which the API has to be called? Because it's not just what the parameters are; there are different values for the parameters, and then different scenarios in which they can be invoked.
F
I know that's non-trivial and not necessarily something a machine can generate. But I'd like it if, at some point, we can think about ideas to maybe increase our coverage, or our measuring of our coverage, based upon the actual semantics of what we expect to have happen, instead of just "did you hit this API," because hitting the API alone isn't enough.
B
A lot of the tests that exist today are behavior-driven ones. I mean, they're not complete by any stretch of the imagination, but most of them are exercising that: yes, you hit the API, but you also exhibited the response and behavior that you expected from hitting that API before you completed. So all these tests today are built in that vein. They're not just bare coverage; they're behavior-driven tests.
F
I know, I understand that, and I agree that the tests themselves test behavior; I'm not questioning that. What I'm questioning is the code coverage statement. Let's say our entire API surface consisted of a single API, but there are two hundred and fifty-five different ways in which you could invoke it, with different parameters and so on. Our coverage tool right now will say, hey, we have high percent coverage, even though we may only test three variants.
D
One of the important elements of this doc is looking at new features too. So we have the legacy test-coverage amnesty program that, from what I heard from Dan, we passed on through the board meeting regarding established features. But it's not going to be our intent to rely on that; we need to make sure that, at least going forward, all new features have conformance coverage. So one thing we're hoping to get the community to adopt as a policy is that features, when they go GA, have certain gated conformance tests.
D
In
now
to
achieve
that,
we
probably
need
to
actually
be
getting
those
testicles
a
little
building
in
that
so
Jaden,
jo
and
I
are
taking
around
a
couple
ideas
like
do.
We
need
to
have
like
a
faded,
informants
tag
that
basically
indicates
neither
component
tests
or
animated
PJs,
no
technically
plan
a
murder
and
yet
because
the
features
not
yet
ta,
there's
something
that
potentially
we
can
package
up
in,
like
a
sort
of
way
run
that
people
can
be
kind
of
analyzing
ahead
of
time,
seeing
if
they
have
any
problems,
seeing
if
it's
performing
to
it.
B
It depends upon the API group, really. Some API groups have a track record of growing into GA things, but sometimes there's a shuffling that occurs across API groups, and that is actually very painful. So if you were to do an API-driven test and you switched groups... even in Sonobuoy we have a lot of glue logic right now that detects and checks for this shuffling of API groups, and there's the earlier reference of apps and DaemonSets.
B
That
was
an
example
of
something
that
was
originally
in
extensions
and
now
is
in
apps
and
now
is
going
to
GA,
so
like
I,
think
being
careful
about
when
we
tag
it
as
beta
conformance
and
making
sure
it's
got
a
track
to
success.
Is
it
it's
gonna,
be
a
little
bit
of
tap
dancing
because
the
ability
to
for
us
to
test
that
we're
giving
accurate
signal.
It
is
gonna,
be
rough
across
the
versions
if
they
don't
start
doing
this
API
group
switching
right.
E
I can see that, and one outcome of that, I hope, is that this group, who are also involved in other working groups and SIGs, will start the conversation about what the conformance implications of a feature are earlier in the process. So I hope that we start to build that into the KEP process, and it's just a question, at least: have you considered the test suite for this, and which part of this is required or intended to be conformance? I think...
B
That
should
probably
be
put
on
to
sig
architecture.
To
basically
add
maybe
something
to
the
template
of
the
camp
that
outlines
its.
There
is
something
that
outlines
the
path
of
its
lifecycle
to
GA,
but
maybe
we
can
say
like
the
path
of
the
tests,
to
make
sure
that
there
is
coverage
I'm
totally
in
favor
of
that
I
think
that's
a
great
idea.
E
The other thing I wanted to follow on, on what William was talking about with making sure there's visibility all the way through the lifecycle, is that I'm also involved in the effort to extract the cloud provider code outside of core, and I think there are important implications here as well. One of the things that we've been working on there, to get ahead of the madness, is to have all the cloud providers running CI tooling in their own environments, and then posting their test results back to TestGrid.
E
We have a wacky idea to actually make TestGrid a multi-cloud application as well. It's not settled, but we're hoping to visualize the providers that are participating in a single dashboard. So if someone working on one provider unknowingly breaks another one, we surface that relatively early, and I think that fits in really nicely here as well. I would expect that would be a useful signal for folks participating in the conformance program.
B
Yes, this is really thorny, in the fact that history has taught us people don't look at TestGrid unless it's a blocking test, and this has been ongoing for a long time. And I see Justin is on here: we find out immediately when something breaks for kops, right? Without having that blocking-level signal, it becomes kind of invisible to a lot of people, because there's so much noise in the system.
H
Yeah, I have asked him; this is, you know, part of our way to try to get plugged in. I have asked him to start doing that, because I know this is important, but I think the specific way, like the best practice for a cloud provider to interact with TestGrid, is perhaps a bit outside the remit of that group. I don't know, maybe it isn't. I also do support more providers being in TestGrid, because, you know, it's currently the only one testing a bunch of other stuff, and I would love to be just one vote amongst five, as long as a failure isn't blocking everyone and as long as I'm not the only one letting the side down. Right now it's all me; but if I had a free pass as long as I was the only one of those five failing, that would be helpful.
E
So I will take the action item to get the folks who are working on that in the cloud provider working group to socialize it more, and loop this back in as well. I think there is significant overlap, and it's not a coincidence that that group is working on submitting results pulled out of Sonobuoy, getting the conformance test results back to TestGrid. That came out of me also being involved here. So I think it'll be at least worthy of an exploration, and maybe we find better ways of doing things.
J
Some of the work items I thought we could work on: I created this document a few weeks ago, and the timelines here probably won't make much sense, but the overall goals for this are to increase the certification coverage and raise the bar on the conformance documentation, the documentation that we have right now. I'll walk you through some of that quickly. How much time do I have?
J
Awesome. And then some tooling that is required to approve conformance tests when their PRs get merged, and then we can also gather some future list items here. Some of them, like the one I stated here, are exploratory items which I am not really very familiar with, but we will cover that at the end, among other things. So, in order to achieve the first three goals, which are conformance coverage, documentation, and tooling, I split this into two pieces. One is the test suite enhancements.
J
The proposal here is to approach individual SIGs to identify such gaps in the test cases, or strengthen the existing test cases. I'm mostly referring to the existing e2e test cases here, since that's what it's been so far, and either the SIGs owning them, or individual parties, would put in an effort to add new test cases, or fix the existing test cases so they have better checks. For example: existing test cases are part of the conformance suite, but they don't do all the checks that are required.
J
Maybe
we
need
to
strengthen
them
and
at
the
end
of
it,
this
process,
cig
architecture,
I,
don't
know,
I,
believe
it
is
the
cigar
texture,
nice
job
did
you
identify
it
conformance
test
cases
and,
like
we
talked
about
one.
Other
thing
here
is
about
the
coverage.
Basically,
what
percentage
of
the
ETA
test
cases
are
now
conformance
test
cases
and
out
of
which,
how
much
percentage
of
the
core
code
they
are
testing?
So
that's
a
that's
a
much
more
complex
topic,
but
we
need
to
address
that
at
some
point
in
time.
J
Now
that's
pretty
much
about
Deacon,
adding
coverage,
which
is
an
I
try
to
process
the
second
part
which
I
kind
of
started
working
on.
This
is
45,
of
course,
based
on
the
approval
that
if
we
want
to
pursue
this
and
I,
think
it's
important
fortify
the
test
documentation
that
we
have
right
now,
so
we
are
generating
it
as
document
for
all
the
conformance
tests
and
it
is
checked
in
under
ciencia
darhk's
section.
J
The takeaway here is that the conformance documentation needs to be specified: basically, it needs to conform to RFC 2119 keywords, meaning "doing this MUST enforce this behavior." We need it so a user does not have to go and look at Go code to understand what the test is doing and what the behavior of Kubernetes should be. They should be able to read that "this must happen as part of running this test."
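As a rough illustration of the kind of check being proposed, here is a sketch that parses the metadata comment above a conformance test and verifies the description uses RFC 2119 keywords. The comment layout approximates the convention discussed in the meeting; the field names and the sample test are illustrative, not the exact upstream format.

```python
import re

# Sample Go source with an illustrative metadata comment (not verbatim
# from the Kubernetes repo).
GO_SOURCE = '''
/*
  Release: v1.9
  Testname: Pod readiness probe, with initial delay
  Description: Create a Pod with a readiness probe and an initial delay.
  The Pod MUST NOT be reported ready before the initial delay elapses.
*/
framework.ConformanceIt("with readiness probe should not be ready before initial delay", func() {
'''

RFC2119_KEYWORDS = {"MUST", "MUST NOT", "SHOULD", "SHOULD NOT", "MAY"}

def parse_metadata(src):
    """Pull Release/Testname/Description fields out of the block comment."""
    fields = {}
    for key in ("Release", "Testname", "Description"):
        m = re.search(
            rf"{key}:\s*(.+?)(?=\n\s*(?:Release|Testname|Description):|\*/)",
            src, re.DOTALL)
        if m:
            # Collapse the comment's line wrapping into one line.
            fields[key] = " ".join(m.group(1).split())
    return fields

meta = parse_metadata(GO_SOURCE)
uses_rfc2119 = any(kw in meta.get("Description", "") for kw in RFC2119_KEYWORDS)
print(meta["Testname"])
print(uses_rfc2119)  # True: the description states a MUST requirement
```

A gate like this could run in CI so a description that never states a MUST/SHOULD requirement fails review automatically, instead of relying on readers digging through the Go code.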
B
So, a quick question there. I know there was work by other folks to get some of the documentation in place, but are we going to have a more rigorous breakdown of some of these tests that follows this documentation, and publish it as part of the main docs site underneath, like, a conformance label? Because I get asked questions all the time about test A versus test B, and what this actually means.
J
One second. So currently, this is the document we are generating. We have some metadata that we expect from the e2e test cases in the Go file, and we have a nice name for the test, and then the documentation is part of the comment section above the test. The test has this metadata on top of it, which gives you the test name and the description, and the description should be, like I'm showing here, detailed enough.
J
It
would
be
a
lot
easier
for
people
to
understand
and
then
explain
what
the
test
is
today
like,
for
example,
some
of
the
work
I'm
trying
to
do
here
is
in
this
particular
case
that
I
highlight
the
original
documentation
is
very
skimpy,
like
one-liner,
saying
make
sure
the
body
with
readiness
probe
should
not
be
ready
before
any,
shall
delay
but
person
to
understand
the
test.
What
we
are
trying
to
do
here
is
create
a
pod
configure
it
with
initial
delay.
F
No, the intent is to go through all the existing test cases that we have, because right now a lot of them are very sparse with respect to documentation. So take all the work that's being done here, put it into the existing test cases, and then do exactly what you said: make sure that, as part of the process, we document the expectations, and that all the additional test cases that are going to come in the future meet this bar.
J
Problem
thanks
so
yeah.
The
document
page
currently
is
ciencia.
Yeah.
Definitely
Doc's
needs
to
be
that
process
needs
to
be
employed
sometime.
So
that's
about
the
the
components
test.
Creator
suite
enhancements.
Basically,
we
are
proposing
this
standard
and
the
idea
is
to
generate
the
documentation
based
off
of
this
standard
and
for
the
one
can
for
the
existing
tests,
and
all
new
should
adhere
to
this
newly
specified
read
like
this
specification
bar
standard
right.
So
that's
what
I'm
trying
to
say
in
that
particular
case.
J
The other thing I'm proposing is tooling enhancements around conformance tests. The idea here is that anybody today can submit a PR and, by accident or by intention, change an e2e test into a conformance test. So just by adding the conformance tag, their tests would become part of the conformance suite, and there are no checks and balances right now to identify this.
C
Yeah, if you were to add conformance to a new test, it would make a different test, under the community conformance check, fail, because that thing checks that the list of all conformance invocations matches a golden list of conformance tests. To get the test to pass, you have to edit that golden list, but to check in changes to the golden list, you have to get an approval from someone in SIG Architecture.
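A simplified sketch of that golden-list gate: the set of tests currently tagged as conformance must exactly match a checked-in, approver-controlled list. The test names here are invented; the real gate lives in the Kubernetes repo.

```python
# Approver-controlled golden list (sample names, not real entries).
GOLDEN_LIST = {
    "Pods should be submitted and removed [Conformance]",
    "Services should serve a basic endpoint [Conformance]",
}

def verify_conformance(tagged_tests, golden):
    """Return (added, removed): unapproved additions and missing entries."""
    added = sorted(set(tagged_tests) - golden)
    removed = sorted(golden - set(tagged_tests))
    return added, removed

# Someone tags a new test without updating the golden list:
tagged = list(GOLDEN_LIST) + ["Secrets should be consumable [Conformance]"]
added, removed = verify_conformance(tagged, GOLDEN_LIST)

if added or removed:
    print("FAIL: conformance tags do not match the approved golden list")
    print("unapproved additions:", added)
    print("missing entries:", removed)
```

Because any drift in either direction fails the check, the only way to land a new conformance tag is to also change the golden list, which routes the change through the approvers.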
C
It gets pretty hairy, I'll say, yeah, because the test can just be a one-liner that calls another function. I mean, that's not likely, but the point is that the test code is going to call a bunch of other code that can't all be under this level of review. So checking that the definition of the test doesn't change is very difficult, but at least checking the validity of the tests you have changed is something we can control.
J
But again, I understand this is a difficult task, because, you know, going through each and every PR and semantically identifying such behavior is going to be a hit on the CI process and whatnot, right? So it's going to slow down lots of things. But yeah, I was thinking, like how gofmt warns when you run it: if we can have a tool that we can run as a developer, proving nothing has changed that trickles down to the actual conformance tests, that would be great.
J
That's done. The other little proposal I have here is about the versioning of the conformance tests. Today, the metadata around a conformance test is just the test name and the description, but we do not know when the test was added. This is being added as part of the conformance documentation that we are generating: it also tells when the test was added to conformance, and if there were any other modifications, that information is also part of the documentation. So I'm proposing another piece of metadata around the test.
D
The general question is, and I think many of you remember the full story, that someone actually tried to certify what was more of a tool rather than a distribution or platform, and we'd have to actually release a specification. That's one of the reasons I kind of raised the point. I mean, this is kind of a more long-term need. But do you mean something like...
A
But then they were essentially using the conformance test to test something that it doesn't test and was never designed to test, and I reached out to them and they very kindly withdrew their certification, because otherwise we were just going to get dozens of companies coming in the same way. But essentially, my current belief is that there's no need for a container certification program, because Linux, well, the Linux
A
Abi
is
well
just
enough
to
find
docker
containers
and
OCI
and
such
well
enough
to
fine,
but
that
I
do
think
that
there's
potentially
some
value
to
something
like
kubernetes
third-party
add-ons
and
that's
a
very
vague
term.
But
the
examples
would
be
a
security
add-on
like
an
aqua
or
a
twist
lock
or
also
possibly
a
storage
vendor,
and
the
basic
thing
to
add
to
test
would
be.
Can
we
confirm
that
the
are
only
using
public
api's,
and
so
you
would
have
a
conforming
kubernetes?
M
The reason was, frankly, because it was a little confusing when the program first came out. We really wanted to make sure that, if there were such a thing, we were taking advantage of it. Obviously, after it came out, I'm more clear about what the target was, which is why we were confused. But Dan, the term that we had talked about over email for that was "Kubernetes certified tooling," and I
M
Thought
tooling
was
a
particularly
good
term,
because
it,
you
know
kind
of
implies
that
it's
not
applications
that
you're
going
to
run
but
but
like
kind
of
system
level
utilities,
and
there
certainly
isn't
I-
mean
I-
think
there
would
be
a
lot
of
interest
in
that
from
us.
Certainly
but
I
think.
Just
a
larger
community
would
like
to
have
that
as
well.
I.
G
Right, and just, seriously, how are you going to keep those tools from hitting some third-party or hidden API that doesn't go through your proxy? I'm just curious how you'd enforce it. I mean, surely you capture every time they go through the proxy to a Kubernetes API, but I would assume there are multiple ways they could be doing weird things under the covers. But then maybe that's for another meeting; I was just curious.
A
It's a fair point, but I mean, the basic idea would be the same as with this program, where people can obviously lie about their conformance tests. If somebody were to try and install it on a certified Kubernetes installation in the future and it didn't work, then they would report that, and we'd go and investigate it. So it presumes honesty, but...
E
I put together a doc a couple of months ago, as we were going through the conformance program, about the need for a certified-tooling program for Kubernetes, and the inspiration was the same idea: that it would work on any conformant Kubernetes cluster. I do agree that it should be a separate program, essentially ensuring that there's no requirement for vendor-specific identity or anything of that kind in there, and that the tools would be portable across various providers and distributions.
F
All right, there was one other thing in there, which is: how do we allow for some sort of conformance checking of plugins? I think some people on the chat here might have casually mentioned it, but I don't think it was explicitly stated. I think at some point we need to talk about, you know, how does, for example, a CRI plugin or a CSI plugin get some sort of certification statement that says:
F
Yes,
we
conform
properly
to
with
the
CSI
interface,
and
you
can
use
us
as
sort
of
entertaining
and
approved
kubernetes
CSI
plugin
type
thing.
There's
no
performance
around
that
if
you
scroll
down
just
a
little
trainee
but
I
want
to
make
sure
that
people
start
thinking
about
how
we're
gonna,
you
know,
I'll
be
able
to
certify
those
plugins
I.
G
Be careful, though. I mean, you're going to make the mistakes that past communities have made, where we think we're providing value to the customer by doing things like that, and then you end up hurting the overall brand by doing it. Because then, where you were differentiating based on your scheduler, now you're actually hurting the overall interoperability, the ability of the customer to have no vendor lock-in. So please, you know, really think through what you're saying and think of how it benefits the overall customer. These are very, very dangerous waters. Yeah.
F
Well,
I
think
what's
important
is
that
the
basse
kubernetes
conformance
test
suite
should
probably
always
pass
I
mean
that
may
be
examples
that
break
that.
But
I
think
my
default
position
should
be
the
core
areas.
Conformance
test
suite
still
passes
even
with
a
custom
scheduler
as
an
example,
but
that
doesn't
necessarily
mean
that
the
customs
scheduler
itself
is
necessary,
higher
percent.
H
Example,
I'm
thinking
of
is
slightly
different
than
that
there
was
even
well
the
community
meeting
a
few
minutes
ago.
There
was
something
that
would
serve
them
along
this
line,
but
it
you
know
if
you
read
like
a
lot
of
the
docs
around
kubernetes.
One
of
the
cool
things
is
that
was
it
look.
You
can
write
your
own
scheduler
to
do
like
special
things,
so
there's
a
lot
of
encouragement
around
people
to
do
that.
A
lot
of
the
tests
here
it
looked
like
are
headed
towards
testing
a
standard
as
the
standard
default
community
scheduler.
H
What
you're
kind
of
saying
is
they?
The
implication
here
is
that
the
past
conformance
you're,
going
to
have
to
include
like
a
conformant
cluster
would
probably
need,
like
practically
speaking,
need
to
include
the
default
scheduler.
But
if
you
have
some
additional
things
that
are
based
on
naming
a
scheduler
after
that
and
that's
added
on,
then
you
should
be
fine.
It's
replacing
the
default.
Scheduler
is
a
much
higher
bar
I
agree.
B
I
think
that
I
think
we're
getting
a
little
bit
ahead
of
ourselves.
To
be
honest,
a
I
understand
the
purpose
of
trying
to
talk
about
this
early,
but
I
do
think
some
of
the
aspects
of
being
able
to
support
this
in
the
community
are
not
there
right,
because
right
now,
a
lot
of
SIG's
individually
have
enough
time
keeping
up
just
with
the
community
itself
and
we're
kind
of
asking
it
to
up
the
bar
to
be
able
to
have
these
levels
of
guarantees
around
the
behavior.
So
I
appreciate
the
conversation
and
I
do
it
is?
A
Folks, I think we can probably end there, on time, but it's pretty clear from the conversation that we should keep going with these twice-a-month meetings. And so, on the "made for Kubernetes" and the tooling certification, or whatever we decide to call it, I'll bring that to the list. I'd encourage you to bring other topics to the list as well, obviously, each with their own subject header, and then I'll talk to you in two weeks. Good, yeah. Thanks, everybody. Thank you.