From YouTube: CNCF Kubernetes Conformance WG - 2018-11-28
D
What type of people showed up for those sessions? Is it something where you want to explain to users why they should care about conformance, or something for contributors about why they should help with the effort? I don't think that's really clear today. It's important to make it clear, both for us and also in the event description, to make sure the right people show up.
E
You know, offering something provider-focused... I don't know how other folks feel about that. The attendance was about six to ten people for each session. I found we got more questions in the deep dive session than we did in the intro. It was kind of tough for me to tell how it was all received, because of the language barrier.
B
I do have a quick question. So, Dan, do we have an intro and a deep dive as well as the brainstorming session, or just the intro and the brainstorming session?
C
At least the way I see it, the intro is kind of us teaching: whoever we identify as the audience, we're kind of broadcasting information to them, whereas the working session is really just a face-to-face version of this, hopefully extremely productive. So to me it would make sense, if it's contributors that we're asking questions of, to hope that they contribute more. That would make sense in the intro session, I think. If they came to the working group session, they'd probably get overwhelmed, I feel.
E
An intro session seems mostly aimed at people who want to certify their cluster as conformant or understand what conformance is, with some additional details that go into some of the requirements that specify what conformance is and why, and maybe an overview of what progress we've made in terms of improving conformance and its coverage. That might facilitate dialogue for those who are more interested in coming to a working group session. Yeah, that's my two cents.
A
Yeah, I'd prefer to just do the first five minutes as the intro, and then I'm happy to hand it off; I don't know if Doug or Brad or anyone else would like to do a portion of it. I guess the piece that we had in Shanghai in the deep dive, that maybe someone would like to show, is just running Sonobuoy.
C
To me, if I was coming and I wanted the introduction to conformance, then seeing that sort of demo would make sense, whereas if someone comes to the deep dive, right, it's a working session. If they come to that, then we're gonna get into the weeds, and it may look irrelevant to those people. Yes.
A
I think we're gonna come back to it later on the agenda. I wanted to just review where I think the consensus was left a year ago, and I don't want to assume that that's where we have to go forward from, but I think we're gonna hit that after PR 70, so.
J
Can you see and hear me? Yes. One of the things I wanted to quickly go through, from where we're standing, is what we accomplished on that link. I don't know if it's a 404; Quinn said one of the links was a 404, but I'll walk through and update that later. What you can see now is the front page for the APIs, and we're bringing in all of the different API groups. You'll notice between the first and second, where we're doing e2e only, I'm kind of going back and forth. You can see we've added darker colors for when something is actually conformance and when it's not conformance. With e2e only, we're able to go down into these tests, for example. These are the endpoints here when you mouse over them; let me kind of reset the link. You can mouse over this section here if you're part of any of these SIGs, or are interested in suggesting these get promoted.
J
At this point we've put up a Binder on the CNCF CI for bringing this in, so you can put in the URL of the APISnoop binders branch on GitHub, the branch we're working on, and click on that. I think I've put in a link for an already set-up Binder. We can actually go through it, and the running Binder includes the data that gets generated, so we can look at our conformance data and the JSON and SQLite information.
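A minimal sketch of the kind of query that generated data supports. The database filename, table, and column names below are hypothetical placeholders for illustration, not APISnoop's actual schema:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3" // SQLite driver
)

func main() {
	// Open the SQLite file generated inside the running Binder.
	// Filename and schema are hypothetical placeholders.
	db, err := sql.Open("sqlite3", "apisnoop.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// List stable endpoints that no conformance test has hit yet.
	rows, err := db.Query(`
		SELECT endpoint, verb
		FROM api_calls
		WHERE level = 'stable' AND conformance_hits = 0
		ORDER BY endpoint`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var endpoint, verb string
		if err := rows.Scan(&endpoint, &verb); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("untested: %s %s\n", verb, endpoint)
	}
}
```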
K
Yeah, I had a quick question; apologies, I'm very late to the party here, so I'm kind of catching up. Do we yet have a sense, you know, for some set of real-world Kubernetes applications, of what percentage of their API usage is covered by conformance? It sounds like that's the kind of thing you just demonstrated. Do we yet have a sense of how big the delta is?
E
I can help with that. When you say for some subset of Kubernetes applications: is it this group's consensus that taking the most commonly downloaded charts, running those charts against Kubernetes, and seeing what Kubernetes API calls result is sufficient for determining what we think the appropriate group of apps to cover is? Or is there some other determination that you had in mind?
E
I sort of tried to cover a good chunk of this through my talk on conformance, where what we were talking about is: is API coverage an acceptable measure? We're not really sure. Is line coverage? We don't really think so. You ultimately need to look at what the set of behaviors is that we as a group feel are appropriate, but in terms of API coverage, the stake we put in the ground was all of the stable stuff.
F
There's still a gap: a reasonable user of Kubernetes needs more than what is currently stable in core Kubernetes, and the focus of the community needs to be bringing those APIs into reasonable shape. There are examples of things that are necessary but not yet stable.
E
Correct, and one part of bringing those to stable would be to make sure they're sufficiently covered by tests that meet the conformance requirements. But I'm sure there are many other requirements that need to be enumerated by SIG Architecture or some appropriate group, to say: here's the checklist of requirements you must meet to promote your feature from beta to stable.
D
You know, we actually just need to do the hard work of actually making sure those things are tested. We have so few end-to-end tests right now that it's pretty easy to figure out which things are not tested. Maybe that will change in the future, but right now that's the case.
E
I would push back on that and say I'm not sure it's actually phenomenally easy to say which things are or are not tested. I'm also not sure it's really easy to enumerate what full coverage means from a functionality perspective. For example, we have liveness probes and readiness probes covered, but there was some email traffic about confusion over the interaction between the two. So it's not just looking at individual API features or individual features, but how everything interacts with everything else.
D
Someone has to be making sure they are covered. Exercising features accidentally I don't count as covered, which is why I don't believe this kind of coverage measurement is useful. I mean, if we actually want to test the behaviors of the features that matter, we need to make sure that we have tests that actually do that.
I
You know, dovetailing the execution of this with other work items: we actually have a backlog for this, but it has no priority, no one's been assigned, and people aren't really executing on all those pieces, right? So this is nice, and it's good to get a status readout, but as Brian points out, there are much higher-priority items that we should be focusing on addressing, such as just general coverage for some of the basic things we already know about.
E
What I found is that we can walk through the different API fields, going field by field or feature by feature, to enumerate all of the test cases, and then cross-reference that against all of the existing end-to-end tests that we have and see if there's sufficient coverage of all the different cases. So I agree with you: I'm sure we have really important things to focus on, and I agree these are tools to help us, but they are not the end-all be-all.
E
Right. And so how much should we be walking through the interaction between liveness and readiness probes, as it pertains to services hooking up to them, as it pertains to them being restarted, TCP versus HTTP versus exec probes and the interactions between those? That's just one specific area, and we can drill in on that. But I'm not quite sure how to federate the drilling-in out and then reduce it through a prioritization function.
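To make that state space concrete, here is a minimal sketch of what enumerating just this one area might look like. The dimensions and behavior names are illustrative assumptions, not an agreed taxonomy:

```go
package main

import "fmt"

// probeCase is one candidate test case in the probe state space:
// a combination of probe kind, handler, and the behavior under test.
type probeCase struct {
	Kind     string // "liveness" or "readiness"
	Handler  string // "tcp", "http", or "exec"
	Behavior string // the interaction being verified
}

func main() {
	kinds := []string{"liveness", "readiness"}
	handlers := []string{"tcp", "http", "exec"}
	behaviors := []string{
		"pod restarted on repeated failure",
		"endpoint removed from service when not ready",
		"initialDelaySeconds honored before first probe",
	}

	// Cross-product enumeration: even three small dimensions yield
	// 2 * 3 * 3 = 18 candidates to triage. Some combinations are
	// invalid (readiness probes never restart pods); triage prunes those.
	var cases []probeCase
	for _, k := range kinds {
		for _, h := range handlers {
			for _, b := range behaviors {
				cases = append(cases, probeCase{k, h, b})
			}
		}
	}
	for i, c := range cases {
		fmt.Printf("%2d: %-9s %-4s probe: %s\n", i+1, c.Kind, c.Handler, c.Behavior)
	}
}
```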
I
If we are going to make traction on this effort, then enumerating the state space, starting to open up the issues, giving an overall priority for execution, and federating that across a wider group, which is this group, I think is highly beneficial, because that will allow us to actually get stuff done. Because currently, who are the people that are executing on changing some of these things, reviewing and making some of these changes happen?
E
I've seen a few Googlers here, and their focus is on storage-based things as they pertain to hooking up to pods. I have the two contractors who are working on sundry pod-related things, and that's about it as far as people actually banging bits together to improve conformance coverage. To your point about enumerating the state space: that's what I view these tools as there to help with, because I'm not sure how to actually go about enumerating our state space; I don't know how to quantify that.
I
Someone in this meeting should take the lead and say: we have this backlog, and this is the most important thing; or we agree to disagree at some point and say we are going to work on this one, it's the highest-priority thing to work on, and we enumerate priority across what we have logged. This is exactly how I operate inside of SIGs.
E
So I'm happy to start doing that based on what I see here. I'm not sure if I'm the right subject matter expert to really go through everything line by line, but I can try, and we can see if there's too much back and forth and discover if somebody else needs to be doing this. But that's why I've been pushing to have these tools and to use these tools.
D
Parts of the system that are very unlikely to have multiple implementations should be lower priority. This is why we're focusing on pods: it is the most used part of the system, it has a very wide surface area, and yes, the kubelet is highly pluggable and there are even multiple implementations.
D
Storage volumes other than local ones are very tricky because they're all non-portable, so we haven't figured that out yet. But, you know, the readiness probes: I think we have some tests with readiness probes and liveness probes. They may not be good enough in terms of exercising all the different corner cases. I would say we should prioritize covering more corner cases probably lower than just making sure that all the really critical functionality gets covered.
D
I think that's our fundamental problem there: we need people actually doing the work, people who can understand the domain and put in the time actually determining whether we have adequate coverage. The fact that some fields are exercised by some test is not the same as testing whether those fields actually do anything. I think there's a cargo-cult thing and copying of configs, for example, that I've seen in the past. Maybe now that some of the node tests have been converted to conformance...
D
Well, many of the features were developed by people who moved on from the project, or to other areas of the project. So, roughly speaking, there are SIGs: in this case SIG Node owns all the pod features, okay, so they should be able to find some people within the SIG to help out with this.
K
Yeah, I just wanted to check whether we have a common agreement on when we can say we're finished. Presumably there's an initial goal, which is that if an application came along and wanted to decide whether it would run on a given cluster, it could look at the conformance profile of that cluster and say: all the stuff I need is there. Is that the goal?
K
Applications can determine whether they can run on our clusters, and some proportion of applications are covered by that. Maybe initially that proportion is like five percent, and then we push it so that ultimately 60% of applications can figure out whether they can run on a cluster. But is that the goal here?
D
So that we don't have a cross-product of 40 different profiles, which results in no actual portability in practice. We have a bunch of optional features, like load-balanced services, dynamically provisioned volumes, Ingress, even RBAC for example, a lot of things, but I suspect that many of them are actually provided by most providers. So we could have a profile that just bundles up all those things, and this is the common suite of cloud-provider features of the system, or something like that.
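As a rough illustration of that idea, a profile could be modeled as a named bundle of optional features that a cluster either fully provides or doesn't. This is only a sketch of the concept under discussion, not a proposed schema; all feature names are illustrative:

```go
package main

import "fmt"

// Profile is a named bundle of optional conformance features.
type Profile struct {
	Name     string
	Features []string
}

// Supports reports whether a cluster advertising the given feature
// set provides everything the profile requires.
func (p Profile) Supports(clusterFeatures map[string]bool) bool {
	for _, f := range p.Features {
		if !clusterFeatures[f] {
			return false
		}
	}
	return true
}

func main() {
	cloud := Profile{
		Name: "common-cloud-provider",
		Features: []string{
			"load-balanced-services",
			"dynamic-volume-provisioning",
			"ingress",
			"rbac",
		},
	}
	cluster := map[string]bool{
		"load-balanced-services":      true,
		"dynamic-volume-provisioning": true,
		"ingress":                     true,
		"rbac":                        true,
	}
	fmt.Println(cloud.Name, "supported:", cloud.Supports(cluster))
}
```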
G
I love the way you're thinking about that, because that's when you get to the more interesting aspects of this, where you start asking the interesting questions: can everybody support this? And if there's some feature that can't be supported and someone wants to do a one-off profile, you have the hard discussion and you say, well, wait a minute.
G
Let's figure out what it takes for your provider to be able to support that function, because anytime you can kind of tug people along, just like we did when we first started this, and you tug people along toward a consensus, the end result is typically better, right? So I think you're hitting all the right points, and we'll get to the more interesting and valuable conversations through what you were going through, Brian.
D
Unless the provider has the ability to create clusters with multiple nodes, it's probably not super useful: you couldn't create a high-availability application. In practice, all the providers looked like they did support multiple nodes, as far as I could tell at a glance, so we could make that part of the base profile, even though it makes me sad that Minikube would not be covered. People say, well, Minikube would be more useful if it were portable, although people have ideas about that.
G
Since I saw this happen in previous infrastructures: what you don't want to have is a race of, well, we're gonna provide these profiles, and we think we've got a better cloud platform because we've provided all these profiles. That becomes analogous to the old days of everyone having their own schedulers or something, claiming theirs is better, but now your stuff only runs there. Whereas if we're all working together, we make sure it's more of a common infrastructure.
A
So on that note, I'd like to just remind folks where I think the consensus was a year ago on profiles, which is that we were hoping to minimize them, and I think folks have been pretty happy. You know, having 77 certified distributions, hosted platforms, and installers is a pretty extraordinary accomplishment one year into this program, and we've been able to get everybody to pass the entire conformance test suite. So I think the topic for our conversation in the deep dive is about making Windows the first optional profile, but I did want to throw out there that the approach we're discussing here, or the default approach to consider, is that it is an optional extra.

That is, an additional thing that conformant clusters can provide; we're not offering an option for Windows-only clusters that can't run Linux workloads, and we're not saying here are the next 10 profiles that we want to approve as well. But obviously the move from zero to one profiles is the most profound one, so we should assume that there'll be ones that follow afterwards.
D
I think there are other candidate features or functionalities that we can think about in the same context, and think about whether profiles are the best or only way to handle them. Windows is certainly one, and there may even be multiple Windows versions that have to be considered. GPUs and other particular hardware devices are another area where not all providers may offer the same capabilities.
D
Something coming up is distinguishing privileged operations from unprivileged ones. We could see more kind of PaaS-like environments where the average application developer or operator doesn't have the ability to perform privileged operations, for various reasons. OpenShift is an example of this today, and we probably need a test suite at least, if not a profile, that can be constrained within the set of behaviors and features that are allowed.
G
The preferred route would be that they're additive and everybody accepts them being added to core, because what I'd love to see is people go through the different features and at least have a check that says: well, is there any way to push this into the core, and could everybody do it? Because if, let's say, there are 20 of them and you could do that for 15 of them, I think you've done everybody a great service.
E
So one of the things to be wary of with those subtractive-type things (I guess my mind gravitates mostly towards security-related things or admission controllers) is that I'm not sure we want to allow a scenario where some cluster operator is allowed to disable some security measures just so their cluster passes conformance, then turns those security measures back on, puts the certified-conformance stamp on the cluster, and then users show up and are surprised that their application doesn't work, because certain features that they use are now disabled or don't work correctly.
I
Those situations exist today; I mean, that's the reason why we have the suite publicly available, so that people can actually report back. We already have a policy for remediation of those situations, so I don't think that's a problem. I think we expected this; we kind of built it into the whole policy.
D
Well, I would say within most large organizations there will be users with different levels of privilege. So, given that some cases require cluster root, some require network privilege, and some require node privilege, there are going to be different sets of credentials in the cluster, and not all of them will be able to pass. So that's an interesting...
E
I just wanted to bring this back real quick to the enumeration-of-the-state-space discussion, if I can, just to review some of the charts that I showed at Shanghai, to talk about our progress and what I thought the next steps were, and to see if that's in line with where we're all headed. So this is a chart of how API coverage has improved. The top three are the conformance jobs; the bottom three are release-blocking jobs that prevent a release from going out.
E
The intent of this chart is to show that if you focus only on stable core, the API coverage doesn't really change all that much between the conformance and the release-blocking jobs, thus indicating that our end-to-end coverage isn't really great. If you look at API coverage solely through conformance, it's relatively decent, and we've been going up and to the right. Then if you look at API coverage purely by whether any single test hits an endpoint or not: this is API coverage, a copy/paste of a bunch of Google spreadsheet things.
E
Green means at least one test hit it; red means it didn't. This is only for stable endpoints, but if you look at it this way, it looks like we really don't cover a lot of our API, and it's kind of unclear how much of this matters. APISnoop shows this a little more prettily. (Wait, were those just endpoints? Those were just endpoints.) Some of these, maybe a lot of them, are proxy endpoints where certain verbs aren't used; the proxy test only uses HTTP GET.
E
It doesn't verify whether PUT, POST, PATCH, DELETE, things like that, are used. Some of them are RBAC, which is an optional thing right now, and some of them are workload-related tests which haven't transitioned over. I think resource quotas also aren't really exercised, because I'm not sure whether that's an optional feature or not.
E
A lot of tests end up creating pods and deleting pods, so I feel like there's a lot of incidental coverage there, though I can't necessarily look at the fields. But there are a number of endpoints that are only exercised by one or two tests, which tells me we could probably use a lot more test cases on these things. So, for example, I dug deeply on one of them and discovered that, yeah, the proxy for pods, proxy for services, and proxy for nodes only test GET and ignore the other verbs. So we could do really well there.
E
We could also proxy sub-paths as well as regular paths.
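As a hedged sketch of what covering those missing verbs might look like, the snippet below issues each HTTP verb through the apiserver's pod proxy subresource. The proxy path shape is real, but the local proxy address, namespace, pod name, and echo path are placeholder assumptions, and a real conformance test would verify responses rather than just print status codes:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Placeholder: assumes `kubectl proxy` is running locally so no
	// auth headers are needed; namespace and pod name are examples.
	base := "http://127.0.0.1:8001/api/v1/namespaces/default/pods/test-pod/proxy/echo"

	// The conformance proxy test only exercises GET today; these are
	// the verbs the discussion notes are never verified.
	for _, verb := range []string{"GET", "PUT", "POST", "PATCH", "DELETE"} {
		req, err := http.NewRequest(verb, base, strings.NewReader("ping"))
		if err != nil {
			fmt.Println(verb, "error:", err)
			continue
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println(verb, "error:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(verb, "->", resp.StatusCode)
	}
}
```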
I looked at line coverage. Line coverage, as we all know, isn't really super realistic. If I compare line coverage (conformance coverage on the right, release-blocking tests on the left), the release-blocking tests do ultimately cover more, but it's all relatively red, and it doesn't particularly change over time.
E
If I look purely at, say, how the kubelet interacts with the Docker shim layer, it looks like it's a little bit better and more in the green. Perhaps when we talk about exercising more pod behavior, we're looking at having more of this level of stuff exercised. And then finally, here's an example of how line coverage caught that TCP probes were not exercised in any way, shape, or form. I used this in concert with walking through all the API fields.
E
Perhaps a less noisy way of representing this could be using something like Codecov's tree map graph, although that doesn't let you drill down. This could also potentially help us identify areas in the code base that we think should be covered but absolutely aren't. So I have next steps for the tools, and the next steps for the coverage are really about filling in the gaps that I just described to you.
E
So what I can do is start chipping away and just filing a bunch of issues like this. I'm not sure whether this is going to create too much noise, or whether there's another owner who should be doing this level of enumeration, but this is how I would start enumerating things. I guess I would be looking for somebody else within this group to say yes, that's important, or no, we don't care about that right now.
L
Even if we already have a test hitting that behavior, we still have to go look at that test to make sure that it's actually testing the behavior. I would see this more as just, it's not the starting point. This seems like something we should be doing as a follow-on, and really it has to be that.
E
I completely agree. As I've always said when pushing us towards this stuff: we should use these to measure our progress, but they do not necessarily chart our end state or our end goal. So that's where I feel like helping Hippie Hacker identify what charts are most downloaded, as a proxy for what charts are most used, as a proxy for what applications are most used, as a proxy for what API endpoints most user applications would hit, could be another step forward in terms of looking at it from a user's perspective.
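A rough sketch of that chart-to-endpoint pipeline: run a commonly downloaded chart against a cluster with audit logging enabled, then tally which API calls its requests made. The JSON field names follow the Kubernetes audit event shape (verb, requestURI), but the log path is a placeholder, and diffing the tallies against conformance-covered endpoints is an assumed follow-up step:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// auditEvent captures the two audit-log fields we need per request.
type auditEvent struct {
	Verb       string `json:"verb"`
	RequestURI string `json:"requestURI"`
}

func main() {
	// Placeholder path: an audit log captured while installing a
	// commonly downloaded chart (e.g. via `helm install`).
	f, err := os.Open("audit.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip malformed lines
		}
		counts[ev.Verb+" "+ev.RequestURI]++
	}

	// These counts could then be diffed against the endpoints that
	// conformance tests hit, to size the delta discussed above.
	for call, n := range counts {
		fmt.Printf("%6d %s\n", n, call)
	}
}
```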
L
I'm just not sure whether filing issues against things that are results of those tests is the most productive way forward. We should be, as I think was discussed earlier, at least as I understand it, looking at, say, pod and the high-level functionality, breaking it into pieces, and then assigning those pieces for somebody to enumerate the specific behaviors of each high-level piece.
L
That's one issue: somebody goes and gets assigned, they analyze what all the different API options are, say for probes, that you want to be a part of conformance, and enumerate those test cases as the behaviors we expect out of this. And then somebody has to build the tests. It's a lot of work.
K
I agree wholeheartedly. As for a mechanical process to get there: I think these automated processes, which draw graphs and give us an illustration of what we have right now, are useful, but I can't help thinking that a fairly brute-force approach is feasible. Enumerate the API, and for each piece of the API apply a subjective ranking to it, to say: if I cannot create a pod, I clearly can't do anything.
K
So that's a 1. And there are other things where I can definitely build an application that's useful without certain other features, so those are a 10 or whatever. Literally just go through that list, apply a number to each one, and say: let's start at the number ones and make sure that we have coverage of those, then go to the number twos, and then the number threes.
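A minimal sketch of that brute-force ranking. The endpoints and rank values are made up purely to illustrate the sort-and-triage idea:

```go
package main

import (
	"fmt"
	"sort"
)

// rankedEndpoint pairs an API endpoint with a subjective importance
// rank: 1 = nothing works without it, 10 = rarely needed.
type rankedEndpoint struct {
	Endpoint string
	Rank     int
}

func main() {
	// Illustrative ranks only; the real list would come from walking
	// the whole API surface and ranking each piece.
	endpoints := []rankedEndpoint{
		{"POST /api/v1/namespaces/{ns}/pods", 1},
		{"GET /api/v1/namespaces/{ns}/pods/{name}/log", 3},
		{"POST /api/v1/namespaces/{ns}/services", 2},
		{"GET /api/v1/namespaces/{ns}/pods/{name}/proxy", 10},
	}

	// Work the number ones first, then the twos, and so on.
	sort.Slice(endpoints, func(i, j int) bool {
		return endpoints[i].Rank < endpoints[j].Rank
	})
	for _, e := range endpoints {
		fmt.Printf("rank %2d: %s\n", e.Rank, e.Endpoint)
	}
}
```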
B
One quick question, Dan. I know you sent out something the other day talking about these two sessions that we have; I think you were asking for the moderators or speakers in the sessions. We have the intro stuff done; we needed a moderator for the brainstorming session. I was gonna volunteer, unless someone else really wanted it. I don't think it's a demanding role; it's just making sure we're staying on track, more than anything else.