From YouTube: CNCF Kubernetes Conformance WG Meeting - 2018-08-22
A
We're at 63 certified vendors, and new ones keep coming out of the woodwork, including some Kiwis in the last week, and 74 certified products. You can see that we're up to fifteen 1.11 certifications. I think the fact that all these certifications take place via the GitHub repo means it's pretty transparent how it's happening and what issues come up, and other sorts of things, but of course anyone's welcome to speak up if you have questions or concerns about it. I'm pretty pleased with the process right now.
B
Actually, there was — I think Iowa actually did the signing up, but yeah, Huawei, IBM, and Google are all working together on the agenda for the deep dive and the intro. Great, and I think I sent you a note.
B
Okay, there's a link to the planned agenda. Is there anything that we need to settle on it? If people have questions or concerns about the doc, we do have a meeting every two weeks — so I guess we have one, not this week but I think next week, to go over any concerns or questions we have and then start passing around some slide decks.
B
I'll add that to the list, and I think Aaron might be doing the intro right now, so I'll leave it up to Aaron to decide how he wants to balance that time with you.
D
Yeah, I'll work through that with you, Dan. I'm happy giving a broad overview, but I haven't been historically involved with this group since its inception, so I'm sure you can help — you have a whole bunch of context.
E
Progress on adding to the different components to be able to enable — all right, I'm speaking to something in the agenda. I mean, there we go: we did the grain out of the links and we still haven't done the food for the test yet.
G
Where is the backlog — like, where is the canonical backlog for the things that should be addressed by this group? We've unleashed folks to work on stuff, but it's not been transparent as of yet: who is doing what, where, and what the actual execution backlog is. I know we've talked about it a couple of times in previous meetings, but if we want to go into detail on where that is, that would be helpful, I think.
D
So I'm hoping the update I provide will give sufficient transparency on what we've been doing and why, for now. But a question — why do you ask?
G
Just an update on the state of getting documentation in place for the conformance tests, so we can point folks at issues. I know that we had put a lot of effort into getting a lot of documentation in place with regards to, you know, spec'ing out and having well-defined tests, and that was supposed to be auto-published to the docs site eventually, or to some location. But I don't know what the state of that is either, so that would also pertain to the backlog.
D
The brief part that I'm aware of is that there were a number of PRs outstanding to put that text in, which I helped Srinivasa get merged. As of right now, I have no bandwidth to further assist in automating the creation of those docs, and as we'll discuss below, the way those docs were generated walked through a slightly different list of tests than what Sonobuoy has been running for this program, so there is work there to be done.
D
The docs we're talking about are the ones that end up landing in the k8s-conformance repo under — what is it — the docs directory. Right now there's one for kube-conformance 1.9 and one for kube-conformance 1.11. This was this group's desire to have human-readable descriptions of what the tests are and why these are the things that define conformance.
D
My personal opinion is that it is not all the way there in being consumable or useful. I had initially tried an effort where I thought, well, if this is what the end result is supposed to look like for describing what all the different test cases are, we could maybe work backwards from something like this, where we write out all the test cases in human-readable form and then, you know, implement those. But personally, this needs to be pushed forward, and I have no bandwidth to do it.
D
Yeah, so my concern is the state of automating the publishing of documents based on the comments in the conformance tests. That's not there right now, so right now it's like Srinivasa had to go manually run a command, take the output of that command, and then open up a pull request against the k8s-conformance repo. How are we going to automate this process? And then, personally, I further have concerns that the comments are kind of brittle, but I don't have time to take issue with that. Yeah.
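For context on what that manual command does: the conformance docs are generated from structured comments (roughly a Release / Testname / Description block) that sit directly above each conformance test in the e2e sources, and a generator walks the test files to extract them. A minimal sketch of that kind of walker, assuming that comment convention (the real generator in kubernetes/kubernetes is more involved and its exact format may differ):

```go
// Hedged sketch of the doc-generation step being discussed: scan e2e test
// files, pull out the comment block that precedes each ConformanceIt call,
// and print it as Markdown. Illustrative only; not the upstream generator.
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"regexp"
)

// Matches a /* ... */ block immediately followed by a framework.ConformanceIt call.
var conformanceDoc = regexp.MustCompile(`(?s)/\*(.*?)\*/\s*framework\.ConformanceIt\(`)

func main() {
	for _, path := range os.Args[1:] {
		src, err := ioutil.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, m := range conformanceDoc.FindAllStringSubmatch(string(src), -1) {
			// m[1] holds the Release / Testname / Description text.
			fmt.Printf("## %s\n\n%s\n\n", path, m[1])
		}
	}
}
```

Automating the publishing would then largely be a matter of running something like this in CI and opening the pull request against the k8s-conformance repo mechanically rather than by hand.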
G
Do we not have a repo to track issues in? We could use the conformance repo to track some of this stuff and triage it and use it that way, or we can use k/k and just provide enough labeling. That way we can at least — if you just put it inside k/k it'll get lost forever, there's too much, but if you put it in the CNCF conformance repo you can actually track it, and it's small enough to actually keep on top of.
D
I guess I would suggest that perhaps this group's interests and backlog should be tracked in this group's repo. Yeah, there are technical issues or bugs that relate to the sausage-making over in k/k — that's fine — but ultimately we should be tying back to this repo: like, you guys open a project board on this repo, right? Do something like that. I hate project boards, for what it's worth, but totally do whatever works for you folks.
B
I think let's just start out creating issues or whatever inside of our repo and track it in there, and if we need something more complicated than an issue, we'll figure that out as we need it. So, Srinivasa, can you open up an issue to track the progress, or the requirement, of automating the generation of our documents? Right, yeah? Okay, thank you. Okay.
D
So if you folks don't mind, I'll move on to my agenda items. In here I put in a lot of info; I'm not sure we're going to have time to really drill deep on all of these. This is kind of an intent to catch you all up on what I've been doing for the past month, get us discussing a couple of things, and then make sure I drive that discussion to the appropriately actionable places where any decisions can be made.
D
Great, sorry, yeah. So the first item is refining the definition of conformance. The definition of conformance is something that the Kubernetes project owns — specifically, within that, it's SIG Architecture. As I've been drilling into this, I've noticed a couple of inconsistencies. The first thing was that there were tests with the conformance tag that lived inside a directory called e2e_node. e2e_node is just node conformance tests, which is a completely separate and orthogonal concept to conformance as we care about it here.
D
I have opened up issues in GitHub, in k/k, and I've been applying area/conformance labels to those, and I've been making sure this is sort of covered down at the bottom in terms of how this is being done. The process is: I'm opening up issues, I'm calling out to people on GitHub publicly, and then, when something is ready for review or discussion by SIG Architecture in terms of whether or not it is conformance related, I put it on a project board that SIG Architecture takes a look at when we discuss there.
A
Let me be clear — I mean, sorry, I'm actually driving this and I'm paying them, but several folks from Google have been assisting me with that. It is not a Google task in any way, and other folks who would like to get involved in managing that are welcome to. Ultimately I'm certainly looking to SIG Architecture for feedback on where we should be prioritizing, but I do want to emphasize that so far the first few months have just been that.
G
I also want to be clear that I don't want to drive all of this — I just want transparency. That way, if somebody else asks the question, which will happen, and I've already been asked a number of times, there's a canonical location or set of locations where I can point them to, and then, you know, it's clear to everyone what's going on and who's doing what.
D
Yep — you took the words out of my mouth. I don't want to be the only person running this show. My ideal scenario is that this group collectively comes to agreement on a dump truck's worth of work that we can just start to turn the crank on, but we're kind of not there yet, and I've been trying to help us iterate through that.
D
So the next area of iteration is that I noticed Sonobuoy runs conformance tests by running a Docker image that Heptio has built, which has a skip list inside of it, such that the set of tests that Sonobuoy runs is different from the set of tests we list as conformance tests inside of kubernetes/kubernetes. I'm of the opinion that Kubernetes the project should be defining what the conformance tests are, that skip list should go away, and those concepts should be propagated upstream.
D
So, for example, I have the skip list in the meeting notes here. I agree that if a test has a tag like alpha or disruptive or flaky, it should never have been tagged as conformance, and if it was, we should go back and strip it out of conformance, or figure out if maybe it's not actually as flaky as we once thought — things of that nature. But ultimately my ideal scenario is that the skip list goes away and conformance test runs just focus on tests that have the conformance tag inside of them. Yeah.
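To make the tag-based selection concrete, here is a minimal sketch of the selection rule being proposed, assuming the usual e2e naming convention where tags like [Conformance], [Alpha], [Disruptive], and [Flaky] appear in the test name (in practice the runner applies this via --ginkgo.focus and --ginkgo.skip regexes rather than a hard-coded skip list; the test names below are illustrative):

```go
// Hedged sketch: select conformance tests purely from tags in their names,
// instead of a vendor-maintained skip list.
package main

import (
	"fmt"
	"regexp"
)

var (
	focus = regexp.MustCompile(`\[Conformance\]`)
	skip  = regexp.MustCompile(`\[Alpha\]|\[Disruptive\]|\[Flaky\]`)
)

// selected reports whether a test would be part of a conformance run.
func selected(name string) bool {
	return focus.MatchString(name) && !skip.MatchString(name)
}

func main() {
	for _, name := range []string{
		"[k8s.io] Pods should be submitted and removed [Conformance]",
		"[sig-api-machinery] Watch should observe add events [Conformance] [Alpha]",
		"[sig-network] DNS should provide DNS for services [Conformance]",
	} {
		fmt.Printf("selected=%v  %s\n", selected(name), name)
	}
}
```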
G
All right, there were no arguments, and in fact Yago and Matt Liggett were involved in the creation of those things in the beginning. Ideally — Matt Liggett and I had both agreed that we wanted to push that container, that single artifact that everyone can use, which is a good conformance container, into upstream, and that would be a canonical location that everyone could use. It just never happened, right? So.
D
Practically speaking, I don't have the resources to support that within SIG Testing, at least for the next quarter, but that's a direction I wouldn't mind heading, like in the Q4 time frame. In the interim I just want to get rid of that skip list and make sure that these are the sorts of things that get used in a checklist of: should a test be tagged conformance, yes or no, and why.
D
So I think that those issues have been worked through — that's number one. Number two, I think the kubectl tests exercising a Kubernetes cluster are a great example of end-user behavior and are very illustrative of conformance-related behavior. They're already tagged as conformance; I think they should be conformance tests. So we could go through deeper review from subject matter experts to confirm that nothing that isn't stable is being exercised.
G
Does this fall to SIG Architecture for this particular one, because they own it? There was a meta topic that existed originally, which is, one, it didn't work for all cases for other people — Matt was aware of this too. It works now because Dims and I fixed it, so we can enable it should we want to, but there are macro-level concerns about whether or not that behavior is aggregate-level behavior versus API coverage behavior.
D
I have a point about that further down in terms of prioritization, where I talk about where we care about focusing on direct versus indirect coverage, because I agree — kubectl probably indirectly covers a lot of functionality that we're not directly covering. Next up, huge thanks to Dims.
D
As a result of this, I know there's a PowerPC 64 conformance test being run out there now, which is great. And finally, the bigger issue where I kind of want to get this group's input and shepherd discussion is refining what the definition of conformance should be. So I have an open pull request.
D
I do want to make sure that I get this group's input and consensus, but my goal is to walk away with a document that has a bullet list so clear and explicit that it's obvious, to both people writing tests and people who are reviewing tests, whether or not a test is being written in a way suitable for conformance and whether or not it's exercising a behavior suitable for conformance, to be clear here.
D
So, the key thing that prompted me to start writing this doc to tighten up requirements was that years and years ago the version skew requirements were maybe slightly misinterpreted, to imply that a cluster that was, say, 1.9 conformant should also be passing 1.7 conformance tests. But that's not actually true — it turns out that client version skew guarantees are only one version back. So the biggest change that I saw, aside from refining
D
all of the specific criteria, was: if you're conformant for, say, version 1.11, you only have to also be conformant for 1.10. You are allowed to fail v1.9 conformance tests. I'm curious whether people within this group interpreted the versioning policy that way, or if this sounds like a change.
J
Look here — I'm curious, yes, I was lurking for exactly this kind of issue. So, actually, within Kubernetes, the only thing that multiple release versions are supported for is the skew between the cluster-level control plane components — the API server and controller manager and scheduler — and the kubelets.
J
So the kubelets can be up to two releases behind the control plane components, but we don't even officially support any skew between the cluster-level control plane components at all yet, which is obviously a problem for things like HA, and we don't officially support downgrades, which is a problem for lots of other reasons. And kubectl only supports one release forward and backward of skew, so a 1.8 kubectl will work with 1.9, and a 1.10 kubectl should work with a 1.9 control plane — which is also different from the kubelet-to-control-plane skew.
B
So, Brian, can you elaborate a little? Because I understand conceptually what you said there, but what I'm trying to understand is: aside from removal of a feature — in other words, something having been deprecated — do you actually have examples in mind where the functionality changed across two releases but didn't change across one release? I'm trying to wrap my head around how that could possibly happen.
J
Yes, there have been a number of cases. The API machinery has been evolving heavily over the past few years. Entirely new things were added, like our current API discovery mechanism, and Swagger 1.2 was also exported, which is different from our sort of custom API discovery mechanism — but what we effectively used it for was just schema discovery and not endpoint discovery, for various reasons. That was added for kubectl validation, and then it was evolved over time to Swagger 2, or what became OpenAPI.
J
So there were a bunch of dances that needed to be performed to transition various details. There were also a lot of bugs in the schema information and things like that that had to be fixed, so kludges had to be put in and then were later removed. So there was a bunch of complexity around that, as well as changes in garbage collection and reaping and all sorts of things.
J
kubectl had a lot of logic in it — it was basically a fat client — and these things were not officially in the API, really; they were just in kubectl, and we didn't have the mechanisms in place for kubectl to actually reason about what version of the control plane it was talking to. There are all kinds of issues, so basically it was just through testing that we ensured compatibility, and practically speaking, there is only one release of compatibility there. There is still very complex functionality.
J
In kubectl, kubectl apply is probably the biggest and most complex: it basically doesn't behave properly if the fields don't exactly match between the versions, and the behavior is very surprising. We're working on moving that into the API server itself to address that and other problems, but it's still going to be several releases before that transition is completely done. So yeah, it's a very complex set of surface area.
D
I will get to that sometime in the next three days. Tim, since you had expressed an interest in being actively involved, if you want to kick it off, that's fine by me, but it is important to me. I opened it up a while ago and then haven't gotten back to it, because I've been behind on other things, but I will start it up and iterate on that discussion soon. Yeah, cool, actually that would be great. All right, moving on.
D
What, or how, people are coming to that determination when they make that statement — so I'm trying to get us to measuring our expectations, starting with less granular measurements and moving us to more detailed or more granular measurements over time. Things this group has talked about in the past were like client-side API coverage information — this is Michy's tool, which currently lives in test-infra under coverage — and there's server-side API coverage, which Hippie Hacker has done with APISnoop.
D
The version of that that I most prefer at the moment is the audit log review thing, which you all recognize as that wonderful sunburst graph that breaks down API coverage by API group, by endpoint, and by the verbs that we're hitting that endpoint with. The problem with this approach is that it relies on audit logs, and the default audit policy filters out a lot of high-volume and high-traffic events, so it looks like we're not actually covering a bunch of APIs.
D
Which is very odd. In the meantime, thanks to Hippie Hacker's prompting and help, and Katherine Barry from Google, who has joined us recently, we're working on looking at API coverage from audit logs with the user agent included. There was initially some work done to try and correlate which test causes which API endpoints to be exercised, by doing this manual process of running the test client side, then SSH-ing over and running this stuff over here — and it worked, but it's kind of hacky.
D
So what we're doing instead now is: the user agent gets logged as part of audit logs, and then the e2e client will set the user agent to the name of the test that is running things. This way we can filter specifically down to which test is hitting which API endpoints. Even without that, today Katherine put together a pull request into APISnoop for a version of coverage called e2e coverage view, which allows us to filter API coverage by user agent.
D
So we can exclude all the API coverage that happens from things like the controller manager sweeping through, or the scheduler trying to poll every single node and pod, right? If we have time I'd love for us to show demos of this, but in the interest of moving quickly, I hope you don't mind if I just kind of move forward.
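As a rough illustration of the user-agent idea (a minimal sketch assuming client-go; the helper name is made up, and the actual e2e framework wiring differs):

```go
// Hedged sketch: stamp the test name into the client's user agent so each
// audit-log entry can be attributed to the test that produced it.
package coverage

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientsetForTest is a hypothetical helper: it copies the rest.Config and
// appends the test name to the User-Agent header, which the API server
// records in its audit log.
func clientsetForTest(base *rest.Config, testName string) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base)
	cfg.UserAgent = rest.DefaultKubernetesUserAgent() + " -- " + testName
	return kubernetes.NewForConfig(cfg)
}
```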
D
The next thing that Katherine has been working on is: what do we do when API coverage isn't enough? At some point, you know, we could cover all the APIs with simple CRUD tests, but we're clearly not going to be exercising all the behavior. So generally, in the world of unit tests, what you do for this is look at line-by-line coverage.
D
Yes, that — thank you, you know, whatever, it's fine. So in the world of unit tests you look to line coverage for this sort of thing, so we're trying to run every Kubernetes process as a unit test and get line coverage from the Kubernetes system itself. Katherine has a design proposal out there that we are running through the community.
D
But the idea is to do things like, you know, collect coverage from every node in the cluster and then merge that coverage information together, so even if tests end up exercising different code paths, you get the union of what code was actually covered. This is the sort of thing we can probably take to a discussion with subject matter experts to verify whether or not we've hit all of the appropriate corner cases for behavior.
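To make the merging idea concrete, here is a minimal sketch assuming Go cover profiles have been collected from each component and node, using the golang.org/x/tools/cover package (illustrative only, not the tooling in the design proposal):

```go
// Hedged sketch: union several Go cover profiles (e.g. one per component or
// node) so a block counts as covered if any run covered it.
package main

import (
	"fmt"
	"os"

	"golang.org/x/tools/cover"
)

// blockKey identifies a source block independent of which profile it came from.
type blockKey struct {
	file                                        string
	startLine, startCol, endLine, endCol, stmts int
}

func main() {
	counts := map[blockKey]int{}
	mode := "set"
	for _, path := range os.Args[1:] {
		profiles, err := cover.ParseProfiles(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range profiles {
			mode = p.Mode
			for _, b := range p.Blocks {
				k := blockKey{p.FileName, b.StartLine, b.StartCol, b.EndLine, b.EndCol, b.NumStmt}
				counts[k] += b.Count
			}
		}
	}
	// Emit a merged profile in the usual "go tool cover" text format.
	fmt.Printf("mode: %s\n", mode)
	for k, c := range counts {
		fmt.Printf("%s:%d.%d,%d.%d %d %d\n", k.file, k.startLine, k.startCol, k.endLine, k.endCol, k.stmts, c)
	}
}
```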
D
So with that, let's talk about how we can use this information to help us improve coverage and how we can prioritize where we should be improving coverage. I think this kind of ties into an action item that Hippie had. If we look purely at API coverage and we try to go after the biggest, least-covered area, I think that's the wrong approach, because it could result in us going after high-visibility but low-value targets. I think that the higher-value targets of coverage are areas where functionality can be implemented by plugins or substitutes.
J
So I just want to comment on this. This has basically been my direction since the start of the conformance effort: since the purpose of conformance is testing compatibility, we need to focus on the areas where compatibility is at risk. Compatibility is at risk in areas where there are multiple implementations, right? So that's things that are explicitly pluggable, like CRI, and things that people swap out, like the scheduler or kubelet or ingress controllers or whatever.
J
We need to identify which portable and non-portable behaviors are expected. This has come up in a variety of different contexts. Since you mentioned profiles — Windows, for example: there are a number of Linux-isms, sadly even things that are specific to particular Linux distributions, like SELinux versus AppArmor, that are in the API. We're going to have to annotate those, ideally in the API, so that we can exclude them. You know, when we have profiles we may want to include them in profiles, but right now we don't have that.
J
So we would just exclude those things. There are things that are going to be fairly subtle that should be consistent; networking, in particular, is one of those — like testing that pod networking actually works. IP addresses from pods being routable to each other on different nodes is something that I don't believe we test, and I don't know that that will be covered explicitly by line coverage or API coverage or anything like that. We will need to make sure that gets tested.
A
Brian, I'm certainly supportive of focusing on tests that matter, as opposed to just a percentage number. I just remember back in December, at the live meeting in Austin, you had mentioned an effort, I think, that was being looked at inside Google that would provide more of a base level of coverage of just the kind of existence of the API, and maybe whether they followed the REST verbs.
J
Yeah, you're remembering correctly, and no, that effort didn't go anywhere. I basically told people to stop working on it because the stuff they were doing was not going to be useful — it was exercising a very small amount of the API server. So we have multiple efforts underway focusing on getting things that are currently in node conformance into the actual conformance suite so that we get core pod coverage; that's sort of not entirely low-hanging fruit, but it doesn't require rocket science either.
B
And I would add that, if you assume that the community's code base itself doesn't change based on each provider — which I think is for the most part probably true, though not entirely — then I definitely go with everything Brian was saying there about the plug points, CRI and such.
B
That kind of stuff, I think in general, is probably accurate, and my question was actually more of a tangent to that, which is: if someone is going to claim conformance to our test suite, do they have to say which plugins they're using, or are we comfortable with a gentlemen's agreement that you're not going to swap out your CNI implementation right after you've run the conformance tests?
G
There's a part in the conformance process that clearly defines how to make the experiment reproducible for other consumers — that is part of the CNCF effort. So they need to specify versions, as well as, you know, other details of how they created it, which would have that covered. Okay.
A
Brian, can I just — we can do this in our next meeting or over email, but just one more minute on the very shallow, broad coverage. We did have a case where a major provider of Kubernetes — their hosted implementation — was not compliant, because they had turned off a couple of API features that they didn't think were necessary. Thankfully those were caught by the original 1.7 conformance suite, and they decided to turn them back on and are now conformant. So I just want to ask here, because it does seem like something like that.
J
Somewhere I have a doc with high-level guidance on areas I think deserve prioritization. Those need to get translated into lower-level details about what functionality actually needs to get tested, and that can populate the backlog. But I am not going to waste my time reviewing things that I believe have low or no value.
D
Okay. So now, and again, this is definitely trying to tie back to Tim's question of how we collectively get involved, and how I have been doing this right now. That conformance doc tries to lay out both what the criteria are for tests to be promoted to conformance, as well as the process to promote tests to conformance. That process, basically, is:
D
write the tests, prove that the tests work, and then we'll talk about promoting them into conformance. I have made sure that all work related to this is labeled with a GitHub label called area/conformance. I thus far have been driving those tests forward to LGTM, either myself or with subject matter experts; I then put them on a conformance test review board that lives in the SIG Architecture project.
D
That's where what functionality is acceptable gets discussed. And so, right now — I know Tim is saying we've unleashed all these people — to be clear, it is me shepherding and, like, two contractors, so we don't have a whole herd of people who are bum-rushing the project. I'd love for that to happen; in order for that to happen, I need a dump truck of test cases, and I don't have that right now, I think.
D
Unlike the ideal scenario, the current scenario is that we're just kind of going back and forth, suggesting test cases from subject matter experts, and then we have Brian Grant and Clayton Coleman as kind of the SIG Architecture bottleneck to figure out if that makes sense. I want to help us grow this understanding to larger pools of people to get to that dump truck of test cases, and this is 100% where I could use this group's help.
J
Yeah, and then effectively we could rope in more of the API approvers and even some people beyond that. I think we need shadow reviewers, so people can start to understand the types of issues we have concerns about, and we can make sure that those things get documented. Some things are documented, but they're in, like, an API conventions document — about how this is not guaranteed to be a stable part of the API, and things like that.
J
So we need to collate a list, or a list of pointers or something, of those issues as we surface them. There are questions like: yes, this is a pod feature, but it's not guaranteed to be portable, or in the past it hasn't been stable, or it's not part of CRI yet. There's a whole host of details that one could get into when reviewing these things; we just don't have enough of them under our belt to have a comprehensive list of criteria.
D
I mean, to be clear, a lot of what's been happening right now is that I've been kind of correcting and refining the definition of conformance. We really haven't been going out there and writing brand new test cases to cover new things; it's more a process of identifying which test cases already exist that look like they could be promoted to conformance, or which test cases were called conformance but weren't actually being run. But at some point I'm going to run out of those two things, and it's going to come time to write some new tests. Yeah.
J
And along those lines, for people who didn't hear the discussion that we had in SIG Architecture, and maybe elsewhere, about it: at some point there's going to need to be some engineering effort around building test frameworks that will enable us to run tests in multiple modes. There are a lot of advantages to writing smaller-scope tests — unit tests and integration tests — that can be run outside of the e2e test framework that we have: they're faster, they're more efficient, they're more stable, they're more pluggable, and so on.
J
Right — all the reasons why unit tests are good. But if we had a hundred percent coverage in unit tests, we could still have zero percent coverage in the conformance tests, because they weren't running in e2e. So we are going to need frameworks that can actually abstract that away and run the tests in multiple modes: you can directly invoke the functionality in the unit tests, and you can invoke it through an off-the-shelf cluster in the end-to-end tests, and still test correct behavior.
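A minimal sketch of that multiple-modes idea, assuming client-go and its fake clientset (the helper names and the check itself are illustrative, and the Create/Get signatures follow the client-go of that era, before context arguments were added): the test body only talks to a kubernetes.Interface, so the same check can be driven by a fake clientset in a unit-style run or by a real cluster's client in an end-to-end run.

```go
// Hedged sketch: write the check against kubernetes.Interface so the same
// logic can run in-process against a fake clientset or against a real cluster.
package conformance

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// checkNamespaceLifecycle is mode-agnostic: it only uses the public client API.
func checkNamespaceLifecycle(t *testing.T, cs kubernetes.Interface) {
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "conformance-check"}}
	if _, err := cs.CoreV1().Namespaces().Create(ns); err != nil {
		t.Fatalf("create namespace: %v", err)
	}
	if _, err := cs.CoreV1().Namespaces().Get("conformance-check", metav1.GetOptions{}); err != nil {
		t.Fatalf("get namespace: %v", err)
	}
}

// Unit-style mode: the same check backed by a fake clientset.
func TestNamespaceLifecycleFake(t *testing.T) {
	checkNamespaceLifecycle(t, fake.NewSimpleClientset())
}

// An e2e mode would call checkNamespaceLifecycle with a clientset built from
// the target cluster's kubeconfig instead of the fake one.
```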
H
Ben Elder has been taking a stab at it a little bit on the testing side, but that's very early stage still, with the Kubernetes-in-Docker efforts. That doesn't cover multi-node or any of the advanced scenarios yet, but it can at least provide a baseline for some set of conformance tests, we are hoping. But that's still a work in progress, and there are PRs if it's interesting for folks.
J
The Docker-in-Docker thing — if that's what you're referring to — I don't really consider it to be a unit test; maybe it could be considered an integration test. I mean, obviously you wouldn't actually require a cloud provider or actual multiple nodes, so that's a useful thing for just exercising the components themselves, although not in a real-world scenario. I'd say that's like a whole other category of test. We need to be able to run that in both the Docker-in-Docker mode and on real clusters.
D
But I view all of that discussion as relating to improving the testing hygiene of the Kubernetes project as a whole; ultimately, conformance tests still have to be run as black-box tests that simulate real-world scenarios, as end-to-end tests. So, Tim, since I've been trying to tie back to your point: how do you think this group can help me accomplish the goals I just laid out above?
D
So, just in case it's not clear enough: I really don't want Google to be the owner of this. I think this group is the owner of this, and if you want to help out, I more than encourage it — I would greatly welcome it. I apologize that I didn't really get a chance to demo today, since we're kind of at time, but Hippie and Katherine both put in a lot of great work to show some of the things that I linked.
A
I mentioned that I had moved this back to a monthly meeting because things had been a little quieter. If we're having a really active period right now, I'm happy to move it back to twice a month, but let's go monthly for another month or two and see if we keep running out of time like this. And I do want to remind folks that the mailing list is there and all of us are on it, so please feel free to engage there. So.