From YouTube: CNCF Kubernetes Conformance WG - 2018-06-28
C: Extremely widely used. And the focus areas coming up: we have KubeCon + CloudNativeCon Shanghai in November, the 14th and 15th, our first ever event in China, and we're definitely going to want to do probably two sessions there, an intro and a deep dive. So I'm hoping some folks from this call can join.
A: With the update done, I guess, let's go through the agenda. Top of the list is the conformance testing guidelines.
E: Yeah, okay. I just wanted to give a quick update of what we've been up to since the last meeting. One of the PRs I linked there is the conformance guidelines: we've been formalizing some of the guidelines, taking the wording we had and passing it around, since it's hard to get some of the guidelines formalized as such. So the PR is out there; if you want to take a look and comment on it, please do. We are waiting on sig-arch approval to actually check it in. So that's one.
E: The other is conformance coverage itself. We've been looking at two main areas: one is API machinery and the other is node, with the aim of identifying a few tests that we can promote to conformance in 1.12, and also some areas of the e2es where we can actually start writing new tests and then promote them after they fare well. In terms of API machinery, again, there's a link to the tracker, and there are ongoing discussions — feel free to chime in; we're working with Frederico there.
E: Together with the API leads, we've identified about six cases to promote to conformance in 1.12. Out of these, we already have PRs in flight for three of them. In one of the PRs we're fixing things — we need to deflake some tests before we can actually promote them; the other PRs add a couple of new tests, which are again waiting on review from sig-arch before we can promote them.
E: For the next couple, we need some guidelines from this group, because one of the features identified for conformance is in beta — we'll have more discussion on that below. For the other test that was identified, we already have coverage in end-to-end, but going through the coverage we found that it doesn't exercise all the scenarios, and there is some room to improve: update the test or add new tests there. So again, we are working with the SIG itself for guidance on that. As for node, again, there's a link to the tracker bug.
E: This is a much more involved effort than API machinery, considering there are so many endpoints, and trying to understand the interworking of everything. The main area we are focusing on: the folks from SIG Node shortlisted a prioritized set of APIs for us to look into, and we ran APISnoop coverage on both the non-conformance tests and the conformance tests, to see if there is any delta of e2es that we can promote. We found that the pod API endpoint is the one where we have the most.
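The coverage comparison described here — diffing the endpoints hit by the wider e2e suite against those hit by the conformance suite to find promotion candidates — is essentially a set difference. A minimal sketch; the `delta` helper and the endpoint names are illustrative placeholders, not actual APISnoop output or API:

```go
package main

import (
	"fmt"
	"sort"
)

// delta returns the endpoints hit by the full e2e suite but not by the
// current conformance suite -- the promotion candidates discussed above.
func delta(allE2E, conformance map[string]bool) []string {
	var out []string
	for ep := range allE2E {
		if !conformance[ep] {
			out = append(out, ep)
		}
	}
	sort.Strings(out) // deterministic ordering for reporting
	return out
}

func main() {
	allE2E := map[string]bool{
		"createCoreV1NamespacedPod": true,
		"patchCoreV1NamespacedPod":  true,
		"deleteCoreV1NamespacedPod": true,
	}
	conformance := map[string]bool{"createCoreV1NamespacedPod": true}
	// Endpoints covered by e2e but not yet by conformance.
	fmt.Println(delta(allE2E, conformance))
	// → [deleteCoreV1NamespacedPod patchCoreV1NamespacedPod]
}
```

In practice the two input sets would come from audit-log analysis of each suite run, as APISnoop does.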
E: We have some end-to-end test cases there that could be candidates for promotion. But in the discussion we found that those APIs — the feature itself — are again in beta, so we need some guidance on that. And CUNY, who is one of our vendors, went ahead and took a stab at coming up with a test plan and e2es for the patch API, which is one of the prioritized ones.
E: If the scenarios are covered in each of those, we can promote. So she is going to be working more with us and the node team itself to go through the test plan and see which of the scenarios are good to go as test cases. If you want to take a look at the test plan and comment there, please do — hopefully some of these might come in time for promotion in 1.12, if possible.
F: Would it be possible to have — you know, not that I want more GitHub teams, because I don't — but would it be possible to have a conformance GitHub team? That way, when something gets promoted, people within this group, a subsection of this group, as well as a subsection of sig-arch, are notified on conformance PRs, or something like that. So that it's clear, and it loops in the right people as part of the process. Okay.
A: I like the option, because I guess it's now starting to become relevant. Maybe we need to start labeling things as, like, beta conformance. It's not so much a formal profile as it is just a group of end-to-end tests that people could run for the purposes of qualifying beta features — potentially even including tests of GA features that are just there to go through.
A: The beta profile could actually be useful for end users as well: hey, I'm actually using a whole bunch of beta Kubernetes features. And I guess at that point the profile does become a little bit more formal. Yeah — what do people think about essentially creating a beta conformance tag?
F: I don't mind adding a tag of some kind to indicate beta features, but I put in the chat that the term "beta conformance" is an oxymoron — it's like a plastic glass. Because the whole purpose of conformance is that these are features you can rely on, that are absolutely, you know, production grade; but by definition, for beta we are not making that guarantee. We are even saying that we will break or add things in the future, so the guarantees are different.
H: I think we're conflating two big themes here. One is giving providers a heads-up as to what has been proposed as conformance tests that are coming, to give them an early warning that they need to either push back or make changes. The other is a profile that describes which APIs are exposed in a given provider. Very distinct concepts.
A: I mean, they really do flow together. I guess we just need to address both problems — whether we do that with the same tag or different tags, I think we need to address both. When it comes to beta features, I think it's actually extremely important. I would like to suggest, maybe as part of your document, as a candidate, that a feature should not be promoted to GA in Kubernetes anymore without the conformance implications being understood first. You know, we don't want that to just drop in there suddenly.
A: So I think it makes sense that, as the features themselves go through the graduation process, they should be getting tests, and there should be a gating function on whether they graduate. So we need some way to capture that. Now, I don't want to get too caught up on the name, but I think, you know—
A: I also think that that is a way for providers to raise the flag and say: hey, actually, there's a problem for me — and to have those discussions as early as possible. To Joe's point, we don't actually want to drop things on people by surprise, where they might be non-conforming without having had a chance to argue their case. So I think that's important. And then, since we have a low-coverage situation that we're expanding, there's that kind of surprise—
H: My point is: maybe call it the same conformance tag, but if a test depends on a beta API, it doesn't get included in the gold list; it is still run as part of the conformance suite, and it gives you a separate output. I'm totally making this up as I talk, but I think where we're trending is: features that graduate to GA have conformance tests as part of it.
B: On the label — I want to make sure we're on the same page. You're talking about a label for a test that means almost exactly what we just described: this conformance test isn't officially approved yet, and it's for a beta feature which has not gone GA yet, which is why it's obviously beta. Hopefully both those things will happen at the same time — meaning the feature will go GA and the conformance test will go, quote, GA at the same time — and you're looking for a label for that.
C: Let me make a proposal, if you don't mind. Currently, a hundred percent of the conformance tests are quality, production-grade conformance tests at the GA level. I propose that we create three new categories of tests — candidate-alpha, candidate-beta, and candidate-GA — and then, as new features mature through the Kubernetes process...
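The categories proposed here could be layered onto the `[Conformance]` tag that the e2e framework already embeds in test names. A minimal sketch of parsing such a tag — the `:Alpha`/`:Beta` qualifier syntax is only the proposal under discussion in this meeting, not an existing upstream convention:

```go
package main

import (
	"fmt"
	"regexp"
)

// maturityRe pulls an optional maturity qualifier out of a test-name tag,
// e.g. "[Conformance:Beta]" -> "Beta"; a plain "[Conformance]" -> "GA".
var maturityRe = regexp.MustCompile(`\[Conformance(?::(Alpha|Beta))?\]`)

// maturity returns the conformance maturity of a Ginkgo-style test name,
// or "" if the test carries no conformance tag at all.
func maturity(testName string) string {
	m := maturityRe.FindStringSubmatch(testName)
	if m == nil {
		return ""
	}
	if m[1] == "" {
		return "GA"
	}
	return m[1]
}

func main() {
	tests := []string{
		"Pods should be submitted and removed [Conformance]",
		"CronJob should schedule jobs [Conformance:Beta]",
		"Volumes should mount an NFS share",
	}
	for _, t := range tests {
		fmt.Printf("%-55s -> %q\n", t, maturity(t))
	}
}
```

A runner could then select or report tests per category, which is what "run as part of the suite but with a separate output" would need.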
A: That would only leave us with a label for the GA candidates, I guess. But right now we don't lag the candidates by release, do we? And we haven't had any problems with that, I guess. I mean — wouldn't introducing a candidate-GA label mean that we could actually move faster? We could basically label a whole bunch of tests as candidates and then use, like, a cycle to get feedback on that.
F: To me, it would be an artifact that is attached to it. Yeah, I'm not opposed to doing the automation behind the scenes, but the labeling — what we call it — I think needs to be clear over time. As I said in the chat window, why don't we move this to a proposal? That way we can actually have a proposal where we're going to get the other people and stakeholders that we're going to need to solicit.
A: I mean, it's good to have some rough consensus here. So it sounds like we want to be adding tests for alpha and beta features. There are two ways to go — whether it's the same label with a directory structure, or maybe a different label; that can, I guess, be left up for discussion based on the tooling and what the better approach is. The second issue, of GA candidates: do we need it or not?
A: Okay. I don't see an immediate need for that, based on what we've done so far — currently just labeling them. There is a risk that someone could discover that there's a problem with these tests post-release, but I guess it doesn't really matter — we can just drop the test after the fact if that happens.
A: On the conformance tests, there were about — I forget exactly the count — about three tests that got excluded in the end. One had a bug that had to be fixed, so we dropped it from conformance temporarily; but it was still labeled conformance, and people would still technically be failing it — we just said, okay, a failure on this one doesn't exclude you. So there was one with a bug, and there's one that just got stripped from conformance.
A: It's labeled conformance, but we're just ignoring it. I'm wondering what happens if we promote an e2e test today — like, it lands, it drops in 1.12 — and then, let's say, a provider wasn't keeping up with it ahead of time, and they're reading it only after the fact — which I kind of—
A: Yeah, I'm wondering if we need that; it sounds like it hasn't really... I mean, do you think we need it? I think it would be useful, because we can always drop a test after the fact anyway. We can always say: we promoted five tests for 1.12, it turned out one needs a bit more work, so we're waiving it — ignore its results temporarily. That's the other way to do it.
B: I'd rather not do it that way and then potentially piss off lots of people — certainly those that end up non-conformant. I'd rather do it the other way and say: guys, here's the set of conformance tests we're going to add; you'd better darn well run them by this date, because this is the cutoff — if everybody's okay with that.
H: What I was proposing before, I think, as a process to fill this role, was to agree by feature freeze for 1.12 which e2e tests we intended to propose as conformance for 1.12. Those e2e tests already exist; people can argue about them, but this would be a much more visible way to get that feedback. So I agree with the direction, and I think the trick is always making sure the signal is valuable.
A: So, anyway, just tell people a little bit earlier: by the way, we just feature-froze; you may want to run the conformance tests and let us know if there are any problems in the next few weeks. The nice thing about that is we avoid lagging a version, which is good.
H: Yeah, and the distinction here is, I think, that there is a calendar-day gap between when we propose which e2e tests should be part of 1.12 and when those are actually reviewed, approved, added to the gold list, and actually run as part of the conformance suite. What you're proposing would put that into the tooling, and I think that is an improvement — so that the proposed conformance tests are visible as "coming soon".
A: Right. So I guess we have two options here, then. One: we just make a commitment that they're all in by the feature freeze, and then we expect providers to test the pre-release version of Kubernetes and report any problems; otherwise it goes into the release, and we can always deal with escalations after the fact. Or: we introduce a process where something gets a flag, so it soaks for a version.
B: But what if people don't actually run the test cases — to verify they're okay with the conformance test suite — until all the code is actually written, which means code freeze, and then we find something wrong with the code? In general the concept of the test is correct, but the code itself has a bug in it. How do we get that in?
H: We already have a process by which we do bug fixes between code freeze and the release, and I would expect this to behave the same way. If there's a bug in the test — or there's a big discussion and we realize that it actually shouldn't be in the conformance suite after all, we all made a mistake — we just make a change and cherry-pick it into the release branch.
B: Yeah, I'm okay with that as well, as long as the people who get to approve those hot fixes for the release branch understand that even though these are test cases, they are serious — that they shouldn't go around the process, that they should follow that same process to be allowed in, and not look at it and say: oh, it's just a test case, we don't need to pay attention to it.
A: All right. So I guess the action item is KEPs, but — does anyone object to this direction, to where we're going with these? Any feedback that we should know and take on board before the KEP is proposed? Obviously it's better if, as a working group, we have some consensus going into the KEP process, so we're not arguing amongst ourselves.
D: Sure. I wanted to get some New Zealand culture in here, because there are some things about introduction that have been difficult for me to convey, and I figured that with a really short story it might make more sense. A marae is where people come together to do social things in New Zealand, within the Maori context, and when they do so, there's a protocol involved.
D: One is the haka — you've probably seen it in front of sporting events, where the New Zealand All Blacks team will give an invitation to the other side at the rugby games. And then there's the concept of a waka: New Zealand was populated, not that long ago, by people coming in on canoes. When we get together at a marae and do a formal introduction, people will be asked their whakapapa, and what they're asking is: where are you from? What's your history?
D: How do we normally identify when someone's talking to us via HTTP? It's the user agent. So a lot of this is around saying: who are you that's talking to me about this? One of the approaches I took was hacking client-go so that, when the API call is made, it creates that whakapapa all the way back to main if possible — or back to the assembly code where we were in the wait — so we really know the whakapapa for this particular conversation. And then I pull all that data together.
D: Would you mind telling us what you think your whakapapa is, so that we can correlate and provide some really meaningful, in-depth user stories? Because now, I think, with this data we will be able to create some data-driven conformance — possibly some automated tests, derived through machine learning or other approaches, from the actual patterns that we see over and over again throughout our community. I say all this because it's been hard — I know that the user agent is not necessarily designed to be used this way.
D: There are some limitations, because in the past we've said: hey, that's just supposed to represent the application, and maybe its version. But for me, who you are when you're talking to me can mean much more, and may need more space. So there's a user agent we released for ourselves that has a lot of interesting data; it's based on the same structure that we used last time, but I don't have a lot of correlation on it yet.
D: It's been a journey to get all these pieces together, because it affects so many different pieces within the ecosystem. I think it's going to be best to create a KEP to convey the importance of why we need this, and I'd love some help in the authoring and editing of the definition, so that we can find a really good way to do some stuff in client-go and to make sure we get it all the way into audit.
D: Yes — but in order for us to collect this and make it meaningful, I'm also suggesting we provide a way where people can run something similar to Sonobuoy, and say: hey, this isn't actually my production environment, but I'm going to do all the stuff I normally do, because I want this to be part of what is tested. And it just configures the dynamic audit thing — which is another KEP that's being worked on, I think, related to this.
D: I don't have all of that figured out, but I do want everyone to be able to send theirs as it works within their cluster — or, you know, even within our applications. I'd love to do it for all of the Helm charts, and find an easy way for people to enable it. The easy thought was: you have to enable the whakapapa variable, and client-go picks it up and says: hey, I'm going to do that thing that I don't do normally — I'm going to do the formal introduction thing.
D: I don't think so, because I looked into those approaches, and they don't interact well with the API — like getting it through the API server itself, and the view from all the other components. Another thing is wanting not to complicate the contribution of this information in some way — if we could find a way to have it just be a switch they flip on, and then they're pointing their audit there.
D: Call it a callback or whatever — let's find the term that works — but there's a difference: there's the function history, you know, how we got here from main, and then there's also the per-line detail. Being able, for example in some UI, in APISnoop's metadata, to go to a particular function and see all the places in our community that flow through this function — if we don't make it super easy to contribute to that...
D: It's probably more similar to the second than to the first, because I actually thought about reducing it to an ID. Like I said, if we don't go into all of the per-line trace stuff — if we just identified it as a hash and didn't even translate it — at least we would know that this particular API call is coming in for the same reasons. I don't know the reasons, or have a lot of metadata around it, but I know it's the exact same whakapapa.
D: The nice thing is it works for all the applications now, so we get some really interesting data. This one listens for the kube-apiserver — instrumenting the kube-apiserver itself is not really going to be possible with a lot of the different approaches that we have, unless we just say: hey, why don't you tell us who you are when you show up. Sorry for the anthropomorphization of all of this.
D: What would be nice, if we have those things, is just to say: hey, here's a kubectl apply that sets up — what are they called — the initializer, and it just sets the variable for all of your pods. So when they come up, it's all enabled; and when you provision your cluster, go ahead and make sure you set that variable in your provisioning. So that everything community-wide, all of a sudden — if they want to, and they take the special steps to enable it — provides us with this thing, the APISnoop data.
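The "set the variable for all of your pods" idea could look like a well-known environment variable in the pod template. A hedged sketch — the variable name `APISNOOP_INTRODUCE` is hypothetical, and nothing consumes it unless a patched client library inside the container looks for it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: introduce-yourself
spec:
  containers:
    - name: app
      image: example.com/app:0.1
      env:
        # Hypothetical opt-in switch: a patched client library would see
        # this and send the richer "introduction" User-Agent on API calls.
        - name: APISNOOP_INTRODUCE
          value: "true"
```

An initializer or admission webhook could inject this into every pod at provisioning time, which is the cluster-wide enablement being described.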
B: Now, I'm not against doing this — I'm just curious more than anything else. I understand that we wanted to get some sort of, for lack of a better phrase, tracing through the API server, to see what code we're hitting, to make sure we get good coverage and such. But what does this information provide for you? Because this is information about, more or less, the client side, right? So how are we going to use this information going forward?
D: So, one of the things that I'm seeing — I haven't seen the patterns yet, because I've only just gotten to where I have all this data in a single place — is being able to not just identify endpoints that need coverage, which is one thing, and not just identify current tests...
D: The tests that are — sorry, they're called normal tests — that are not promoted yet: there's really good data there for asking whether the current test is doing what the community is doing. And then, when we're starting to write a new component or a new test, we can go look at those similar programs and the line of code where they start, and say: oh look, this is the flow of the logic through here — and here's an auto-generated sketch of what a user story might look like. Generated—
D: —and approved by a human. But I think that's the goal, I guess, for this data: to provide a way for data-driven user stories to come forth that we might not even have seen before. And I think to do so we need some type of trace of who they are and why they are here — in a very specific way — and I think this is the shortest and most concise form that provides us that level of clarity.
A: Sorry, just quickly: we need to move on to Michelle very shortly — any last thoughts, like one or two minutes?
D: Please participate in the KEP if possible; that would really help to continue the conversation outside of what we have here. Did you share a link? Oh — will you? It's a Google Doc for now, pre-PR, but once we get consensus — and I have sponsorship by the right SIGs — if you fill out the top area, between the two little markers, I think that'll give context for the rest of the conversation.
L: It requires a vendor-specific volume plugin in order to really test the full functionality. So I believe there is this concept — or maybe, I don't think it's an official concept, but I think there's an idea — of having profile conformance suites with these sorts of more optional features. At least, for sure, I think we'll want to add quite a few tests into this profile suite that can test both the control plane and the data path for using volumes.
B: And I know that there's been some pushback in the past on testing specific plugins themselves, or extensibility features like this, but that's really what people want, unfortunately — a little bit of functionality that says: if I provision a volume, regardless of what volume plugin I have, I'm going to get a volume, as opposed to something that behaves a billion different ways — like a network. Yeah.
M: Good question. Just a clarification — this is [inaudible] — when you say profile, is that the same thing as the certification, the profile certification? Like the way we were, at one time, thinking about a multi-tenancy profile — does that fall into this category? Because if that's the case, this will be too granular; will there be an explosion of these kinds of profiles?
A: Right — so, yes, I think. The second point here is: yes, and Brian Grant did send some feedback this morning in kind of a similar vein — his view being, you know, let's not get too granular. So then it becomes a question of: do we want to create, like, a dynamic catch-all where some of these things go?
F: I think there's a general roll-up level of — sorry, a general roll-up level of features — that are provider-specific. You can have an entire category if you lump them into larger categories; the number of categories you have is finite, right? I think defining the categories, and the behavioral level of the testing, is key. So, like, storage as a lump sum for all of the storage features makes perfect sense.
L: And the extra challenge with storage is that there are various types of storage. We have single-writer storage and multi-writer storage, and those are going to have different end-user behaviors; and some volume plugins might support snapshots while others might not. So having just storage as one lump-sum profile might limit the number of features that we could end up including in these suites.
H: Two things. One: I think we had general consensus here, at least, that testing controllers that don't actually do anything is not useful to the conformance suite as it's currently defined. I think that's a reasonable position, and I just want to make sure we captured it, because I'm sure other groups will have a similar question.
H: I did want to make the distinction: I think the conformance components for storage can be part of the default profile; it doesn't need its own additional profile. I don't think we need the conformance suite with a separate "plus storage" profile — this is just the base profile for storage, just like we have a base profile for API machinery and a base profile for node. This would just be part of the existing conformance test suite for storage, right? Yeah.
F: I think there are layers. I do think there's a separation — I hear what you're saying — but some providers may not wish to have that profile, maybe for security concerns, right? They don't want to allow workloads to run on there that have storage capabilities, for whatever reason — because we consider it an extra feature. And they might explicitly shut those things off, or configure them differently, but they can still guarantee a certain API-level behavior: totally supported, right.