From YouTube: CNCF Kubernetes Conformance WG Meeting - 2019-03-27
A: I will just give a brief overview of where the mechanics of the program sit right now, and then I was going to hand it off to Taylor Wagoner for two minutes, who had a question specifically for Erin about a submission that he just made. I'm pleased to report that we are up to 84 certified vendors, which is an all-time high and kind of a spectacular accomplishment, and 103 certified products. I'm just pasting in the spreadsheet.
Most people should be familiar with it. We just had one pass a few hours ago as the first 1.14 certification, so very nice to see that moving. Along the way, we reached out to, I think, over half a dozen organizations whose certifications were going to expire today if they did not get a newer version certified, and only two of them fell out of certification — which, Taylor, I believe was Inspur and... who was the other one?
A: We were pretty thrilled with where things stand, and you can see the numbers there: 47 certifications for 1.13, 54 for 1.12, and 62 for 1.11 — all really fantastic. So, if there aren't any questions about that overview, I would ask Taylor to ask that question for Erin in the group.
C: Sure. Yes, so all of the conformance tests are the test cases that we use, and one of the common ways that people do this is by using a tool called Sonobuoy, which is Heptio's — now VMware's — tool, but it's essentially the set of test cases that are maintained by the upstream Kubernetes project, and directions for how to do this should be in the CNCF k8s-conformance GitHub repo.
E: This is Chris — I was taking notes for the last working-group meeting, and we talked about generating a description field within the conformance docs. I asked a question around where that was, and the next question was: what's the command to generate the conformance docs? Let's kind of combine those together — how can we automate the process, for what fields are available and what changes between releases?
C: Srini's also got a KEP out about trying to auto-generate the conformance docs as part of the Kubernetes release process, so that the conformance docs — the documentation that describes all the conformance tests, what they are and what they do — are distributed as part of the Kubernetes release. That seems to make more sense to me. I don't think it was implementable in time for Kubernetes 1.14.
H: Yes, most of those are in the KEP, and we also included them in the docs right now. We actually have two separate PRs open that are working to get this in, based on the feedback from the working session yesterday. We're trying to get a brief, condensed version of that into the contributor documentation for conformance tests, to give people an idea of what to look for in terms of handling multi-release tests, and then that has a link to the full list as well.
C: It was unclear to me whether the conformance walker parses out that entire block comment; I thought it looked specifically for certain fields and then pulled the values from those fields in the comment. It might be worth considering whether we want another field for, like, when something is Linux-only — something to describe why this particular thing is there — but I feel like yesterday we said "description". The other thing I'll suggest — and I'll leave it to y'all to figure out how you best want to accomplish this — is that you could link to the relevant GitHub issue, the same way we'll sometimes drop TODO comments in code that say "this is super weird because — link to GitHub issue". I feel like just linking directly to the KEP as it stands today will not be granular enough for what we were hoping to accomplish.
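As a rough sketch of the field-extraction behavior discussed here — the walker pulling named fields out of a test's block comment rather than the whole comment — the upstream convention uses `Release`, `Testname`, and `Description` lines. The parsing code and sample comment below are illustrative, not the actual walker:

```python
import re

# Sample conformance-style block comment (contents invented for illustration;
# the field names follow the upstream Release/Testname/Description convention).
COMMENT = """
  Release: v1.9
  Testname: Kubelet, log output, default
  Description: By default the stdout and stderr from the process
    being executed in a pod MUST be sent to the pod's logs.
"""

# Match only the known metadata fields, not arbitrary comment text.
FIELD = re.compile(r"^\s*(Release|Testname|Description):\s*(.*)$")

def extract_fields(comment: str) -> dict:
    """Pull known metadata fields out of a block comment, joining
    indented continuation lines onto the field in progress."""
    fields, current = {}, None
    for line in comment.splitlines():
        m = FIELD.match(line)
        if m:
            current = m.group(1)
            fields[current] = m.group(2).strip()
        elif current and line.strip():
            # Continuation line: append to the most recent field.
            fields[current] += " " + line.strip()
    return fields

fields = extract_fields(COMMENT)
```

A hypothetical extra field such as `Reason:` (for the Linux-only case above) would just be another alternative in the regex.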
H: Okay, so the PRs I mentioned, that I've listed here, are basically for the documentation. I don't think that's the right place for the exhaustive list. I think the takeaway here is that it sounds like we do want to update those descriptions and include the specific links in the test case where it's relevant — is that what you're saying?
H: Okay, yes, so we can do that, I guess. The thing I thought would be most helpful to me is: if we're going to put things in the description field, I need that information on how to generate the doc, just so we can make sure it is something that's going to be visible there — so that someone who's not delving into the source code can find what's Linux-only and what the reason is.
E: If that's all for the last action item — just one last thing: where were we on the automation for the board? I know we've done a lot of manual curation, and opening it up to the community has worked really well; I just wasn't quite sure who was focusing on the automation, and I think there was, in addition to the automation, a query to populate the board.
C: There's a contributor out there somewhere who's working on — maybe in the Kubernetes test infrastructure — a way to automatically populate a project board from a given GitHub issue query, but I don't have status on that. In addition, SIG Contributor Experience has a number of umbrella issues around how to better automate project management, as does SIG PM. I feel like ongoing work on that stuff is out of the scope of this group.
C: I do think it would be super cool if there was somebody responsible for grooming the board, and it is unclear to me whether that person is Timothy St. Clair — since he kind of bootstrapped and organized the more tactical meeting that is held under SIG Architecture as a subproject — or if it's Srinivasa, since he said he was going to sort of take over shepherding and whatnot in this group. It is very much not myself.
G: The people who review whether a given test actually validates that behavior — I think it's a lot easier to review the behaviors than to go through all the test code to identify whether each behavior is tested properly. So, what I was suggesting — and in the document I linked to, people, please go ahead and comment — all I'm asking here in this meeting is for people to take a look and say whether they think the general approach has enough legs to move it to a KEP. Basically, it's: create a machine-readable file that defines all the behaviors, or maybe a collection of files, and then the tests have to essentially link back to an ID that's in that file, and that file can then be independently reviewed by people who don't necessarily want to read all the test code.
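To make the shape of that proposal concrete, here is a minimal sketch of a behavior list plus the test-to-ID links, with a cross-check a reviewer could run. All IDs, descriptions, and test names below are invented for illustration — the actual file format would be whatever the KEP settles on:

```python
# Behaviors get stable IDs and human-readable descriptions that can be
# reviewed on their own, independent of the test code.
BEHAVIORS = {
    "pod/lifecycle/delete-graceful": "Deleting a pod respects its grace period.",
    "pod/logs/default-streams": "Pod stdout/stderr are exposed via the logs endpoint.",
}

# Each test declares which behavior IDs it claims to validate.
TESTS = {
    "should delete pods with a grace period": ["pod/lifecycle/delete-graceful"],
    "should expose container output via logs": ["pod/logs/default-streams"],
}

def check_links(behaviors, tests):
    """Report dangling links in both directions: IDs referenced by tests
    that don't exist in the behavior file, and behaviors no test covers."""
    covered = {bid for ids in tests.values() for bid in ids}
    unknown = covered - behaviors.keys()
    uncovered = behaviors.keys() - covered
    return unknown, uncovered
```

The point of the split is exactly what the speaker describes: the `BEHAVIORS` mapping can be reviewed by people who never read the test code, while the link check keeps the two from silently drifting apart.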
I: You know, how do we populate the thing, and how do we make sure that it is complete? I know that when we get people to write a KEP we can propose some sections there to get them to fill that stuff in. How do we backfill it right now? Since, you know, essentially we will have to... yeah.
G: And so I guess what I would propose right now is that we move it forward in KEP form, and then we can try and line up — I can try and line up — some resources here. And, you know, it all comes down to the approvers of the conformance suite; they would decide whether the behaviors listed there are complete, or that sort of thing. I guess those problems are inherent to this effort whether we do it behavior-first or test-first.

It ties back to the human-understandable description of the behavior, and then having a hook for that to be tied into on the test side. So tests still have to be written by hand, and then somebody has to validate that the test actually does test this behavior, or this set of behaviors, and that's part of the review process. It's just separating that out. Right now, if you go and review a conformance test, you're reviewing two things: one, should this behavior be part of conformance, and two, does this test validate that behavior? I'm just trying to separate those out into two different reviews, because I think it can be two different people than today.
I: Right. So, definitely, if there are people who are willing to sign up to do this work — to produce the initial set that can then be reviewed — I think that'll be really helpful, because the people who are going to do the review will not, you know, be able to do that. And I pasted one more link, to Gabbi. This is something that we use on the OpenStack side; it is a machine-readable form, and basically it doesn't generate code, but it runs the tests.
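For readers unfamiliar with Gabbi: its suites are plain YAML files of HTTP requests with declarative assertions, which a runner executes directly. A minimal sketch of what one might look like against a Kubernetes API path (the test name, endpoint, and assertions here are illustrative, not from the meeting):

```yaml
# A Gabbi suite: an ordered list of HTTP requests plus response assertions.
tests:
  - name: list pods in the default namespace
    GET: /api/v1/namespaces/default/pods
    status: 200
    response_json_paths:
      $.kind: PodList
```

This is the "machine-readable form" idea in miniature: the behavior under test is legible to a reviewer without reading any test code.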
K: Yeah, I just wanted to say — I've gone through John's document and I think it's great. I think this is one of the single most important things that we have to do: actually defining what is and what is not conformant — and we're very far away from that today, I think. What's in the doc is a great start. I shared one concern, and actually a greater concern. Backfilling stuff is actually, you know, a reasonably tractable problem, if we decide it's the right thing to do. I think the bigger problem is actually making sure that this stuff stays up to date over time, because as soon as we have, you know, tests, and code, and descriptions of what the code is supposed to do and what the tests are supposed to do, we have these three things that can very easily get out of sync — and, as far as I can determine, no reliable way to actually ensure that they stay in sync.

So I'm just kind of thinking out loud here. Another approach is to actually have a reference implementation and say: this reference implementation is, by definition, what Kubernetes is, and if your system behaves exactly like this reference implementation, then it is conformant, and if it doesn't, then it is not conformant. And then we potentially — we have to decide whether the tests, or the descriptions, or the implementation are the actual canonical definition of what this stuff is, because right now the tests are not, the implementation is not, and these behavior descriptions are kind of destined never to be, because we can't keep them in sync with the tests and the implementations.
I: The problem there is: how do we give someone who has no idea what needs to be run, or what conformance means, the tools to compare their implementation against the reference implementation, right? That's what we have right now with Sonobuoy hiding the end-to-end tests. So that's going to be the larger problem there.
K: I understand that, but I mean, the reality is that a very small fraction of our end-to-end tests — around 10% — are actually defined to be conformance tests, and, as a result, I would guess that it would be completely impossible to write an application that runs on something which is only conformant, because there's just not enough stuff in the conformance tests to actually be able to do that.
C: We've gone round and round on this in the past, also in the context of, like, the LTS discussion — the idea that maybe it doesn't make sense to try and rally around defining Kubernetes until we've actually got everything to GA that is usable and acceptable. Notably, this comes up in the context of storage. Many hundreds, if not thousands, of the tests you see skipped are different variants of storage tests run for each of the different CSI plugins, and it's not our job to verify that Kubernetes is conformant for literally every possible CSI and CNI and CRI plugin that you can hook into Kubernetes, but to make sure that, whichever one of those you have plugged into your Kubernetes, it works as expected — because the conformance tests have to rely on default behavior from any of those CSI plugins.

You can't guarantee a consistent, common persistent-storage implementation across all versions of Kubernetes. So that's one great example of how applications usually need to persist state in one form or another, and conformance tests can't cover that, because there's no out-of-the-box, consistent way of persisting state. We might have that with 1.14 because of persistent local volumes, but I'm not sure if any of those things are actually GA.
G: Erin — the CSI and CNI, those are pluggable aspects, so we need to have a clear — I mean, CSI forms a clear contract, right, between the Kubernetes infrastructure and the backend. So, in theory, as long as we have tests that exercise all of those things, it's up to the distribution seeking conformance to configure their particular cluster with the CSI plugins they want to use and validate the conformance there, as opposed to us validating conformance of every different CSI driver.
C: Correct. It just gets up into that, like, weird corner case of — oh god, I really don't want to talk about profiles right now — but, like, Kubernetes can run on Raspberry Pis, and it can also run on 5000-node clusters that have very specialized storage plugins. So are we saying that, in order to be a Kubernetes, you have to have some kind of CSI plugin hooked up, or are we saying it's acceptable to be a Kubernetes without a CSI?
J: It's worse than that, because all of the network-attached storage providers — their volume sources have different parameters exposed to the user. So you would need an abstraction over that, which is some sort of storage-class thing, and then to define some kind of common behaviors that you would expect across different volume sources. So I don't actually want to rat-hole on that specific issue right now.

It's a hard problem, and the storage folks have been looking at it, but I actually think that particular thing is much lower in priority than covering the basic things that everybody uses. And yes, that's not sufficient — but if we don't have coverage of even that, then nothing else really matters, in my thinking.
J: I made a comment in the chat, which I guess is sort of related to some of the other comments that were made, but it's more than just the behavior. If you're putting a tag on some test saying it tests this behavior, it's really hard to know what that means without going to review the test, because you don't know if it adequately exercises that behavior and tests the corner cases that need to be tested; you don't know whether it tests those behaviors using acceptable mechanisms from the perspective of conformance.

You don't know whether the test is going to be adequately forward-compatible, which is another requirement. So right now it's pretty hard to review conformance tests. We're not really at the point where you can turn a crank and say: I know how to create a conformance test that's going to be sufficient and acceptable.
G: Yeah, I guess what it sounds like you're saying is that you're not necessarily in agreement that we can segregate the people reviewing the behaviors — this is what should be conformant — from the people reviewing the test. Basically, there's your first sentence and all the rest, and I'm thinking that those could be different — that the person reviewing that the test actually validates the behavior doesn't have to be the same person. Whether there's value in that — it sounds like you're challenging that assumption, I think.
J
We're
not
there
yet
right.
Theoretically,
that
would
be
true
that
you
could
just
have
someone
get
a
test
to
the
last
point
where
it
needs
to
be
approved
and
we've
been
trying
to
move
in
this
direction
and
say:
look
is
this
about
behavior
to
test
in
conformance
or
not
like?
We
have
tests
that
cover
it
totally
adequately
and
properly
and
whatever,
and
we
just
want
to
know,
should
we
officially
add
this
to
the
component
suite
that
would
be
beautiful
and
wonderful.
G
I
J
I
mean
certainly
I'm
in
favor
of
trying
to
come
up
with
a
list
of
behaviors
that
we
should
test
like
dad
seems
like
a
valuable
exercise
and
in
some
cases
I.
Don't
even
think
it's
rocket
science
like
just
go
read
through
the
pods
back
and
cross
out
everything.
That's
not
optional
or
non-portable.
Okay
and
I
said
I
mean
in
the
in.
C: If we use YAML for that, that's fine, because I just want us to get to the point where we enumerate the list of behaviors, then we sort of map out the state space, and then we start to cross those behaviors off as we implement them. I think this is a great way of parallelizing: let's approve the dump truck of work, and then we can have other people work on the dump truck of work — and, yeah, we definitely have to make sure they implement the dump truck of work in the right way.
J: So, on these basic things we're talking about: we have been focused on moving more tests into conformance to get better pod coverage — that's one of the basic primitives of Kubernetes. The other is the API surface of the API server, somewhat generically, and there have been some proposals or attempts to create some sort of automatic tests of API endpoints and whatnot, but I actually think the tests that have been written in that area have not been super useful.

What would be super useful is more rigorous testing of the behaviors that we inherit from etcd, because originally we had in mind a certain model for interaction with the API server, but, you know, out of expedience we kind of just lifted behaviors almost whole cloth directly from etcd — and we have more and more projects that are swapping out etcd: there's the Cosmos DB implementation, there's k3s using SQLite, which is one of the most recent I'm aware of — so there are a bunch of examples of this. SIG API Machinery had been working on adding a few more tests around watch behavior specifically; I don't know what the current status is. The last time I saw it, it wasn't super rigorous.

You know, you need to test things like breaking the watch connection and being able to reconnect and re-establish the watch. There are consistency-model issues where we haven't even really decided what behavior we want to officially support, and clients are building assumptions around accidental behaviors — like, resource versions are technically supposed to be opaque, but we don't obfuscate them, so people are doing comparisons on them in ways that we don't really recommend but don't strongly enough discourage.

So we actually need to decide which behaviors we officially guarantee and which ones we don't, write some kind of spec for that, test the spec, and maybe also think about, you know, ways we could force clients to adhere to the spec and not to things that are not in the spec. We could have tests for that.
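The resourceVersion point is worth pinning down, since it is the canonical example of an accidental behavior: Kubernetes documents resourceVersion as an opaque string that clients may compare only for equality, yet numeric ordering happens to work against etcd-backed clusters. A small sketch of the distinction (the object values below are made up):

```python
# Two fake API objects with opaque resourceVersion strings.
obj_a = {"metadata": {"resourceVersion": "1005"}}
obj_b = {"metadata": {"resourceVersion": "998"}}

def changed(a, b):
    # Supported by the API contract: equality comparison only.
    return a["metadata"]["resourceVersion"] != b["metadata"]["resourceVersion"]

def is_newer(a, b):
    # NOT guaranteed: parsing the opaque string and ordering numerically.
    # This happens to work on etcd-backed clusters, which is exactly the
    # accidental behavior clients end up depending on; an alternative
    # storage backend is free to break it.
    return int(a["metadata"]["resourceVersion"]) > int(b["metadata"]["resourceVersion"])
```

A conformance spec of guaranteed behaviors would bless `changed` and explicitly leave `is_newer` undefined.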
J: Another pretty basic area that's related to pods, but goes a little bit outside of that, is networking — basic pod networking. It's not clear to me that we have adequate coverage. Networking is another one of those things in Kubernetes that is super pluggable; there are lots of CNI implementations.
E: One of the things we've been working on is adding the ability to filter by user agent. Now that we have the user agent available, I think this will help us to identify the pieces of software that are used within a system and which endpoints they're hitting. We have our initial branch up — there's a link there — but it's having some issues on some browsers, so I went ahead and pasted some pictures here. You'll note that CSI, our storage interface, is hitting some beta endpoints — just so we can be aware of which endpoints things are hitting. The search bar will let you do a regex over all of the different endpoints that are there, so we can possibly do things where we look at what the different pieces of software — anything using the API — are calling, and define and research those behaviors. I think this might be useful in helping to address some of John's behavior-definition proposal.
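The user-agent filtering being described can be sketched as a small grouping-plus-regex pass over audit-log-style request records. The records, user-agent strings, and helper below are invented for illustration, not the tool's actual implementation:

```python
import re
from collections import defaultdict

# Invented audit-log-style records: who (userAgent) hit what (requestURI).
REQUESTS = [
    {"userAgent": "csi-provisioner/v1.0", "requestURI": "/apis/storage.k8s.io/v1beta1/csinodes"},
    {"userAgent": "e2e.test/v1.14", "requestURI": "/api/v1/namespaces/default/pods"},
    {"userAgent": "e2e.test/v1.14", "requestURI": "/api/v1/namespaces/default/pods/log-pod/log"},
]

def endpoints_by_agent(requests, pattern=".*"):
    """Map each user agent to the set of request URIs it hit that match
    the given regex — the grouping the search bar is described as doing."""
    rx = re.compile(pattern)
    hits = defaultdict(set)
    for r in requests:
        if rx.search(r["requestURI"]):
            hits[r["userAgent"]].add(r["requestURI"])
    return dict(hits)

# e.g. spot clients touching beta endpoints, as with the CSI example above:
beta = endpoints_by_agent(REQUESTS, r"/v1beta1/")
```

Against real data this is how "CSI is hitting some beta endpoints" falls out of the logs automatically.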
E: These are some of the links to the node, cluster-lifecycle, and Windows areas that we were going to go through, but I think, with that — now that I've kind of picked up from the meeting yesterday — that's probably where we go through the board more, and this is more of a high-level overview. I just wanted to get some initial feedback on how useful it is to filter by user agent and, eventually, by endpoint based on various metadata. I'm also sending out links to the various SIGs and the related issues on the board.
G: Something like this could help us understand what can run on a given cluster when it only implements a subset of the functionality — you know, say it doesn't implement PVs: which components of the system may or may not function, or even which third-party tooling may or may not function — if there's a way to kind of automate that, because we can see which APIs something is calling, and whether it's calling APIs that aren't supported given some particular set of features.
C: Where I was more focused on the conformance effort — back to my particular presentation for Shanghai — a couple of ways that I found this user-agent information useful: being able to take a look at which endpoints are obviously exercised by a lot of tests and which endpoints are not, to give me some context — cool, we're touching this API endpoint, but only once, so we're probably not hitting it with enough variation in parameters, and that would be an area to investigate for coverage.

I also feel like the API coverage information would be more useful if we could find a way to filter out the list-every-API accesses, because today, if you just take a look at API coverage due to conformance tests, you'll see there are a lot of alpha and beta endpoints that are hit, and it's not like the tests themselves are hitting those endpoints — it's, you know, kubectl, or, sorry, the client or something, that hits the discovery endpoint and walks every endpoint available at first. If we could get rid of that, then we could start to really gate on it — like, a big red flashing light if something is hitting anything but a stable endpoint — and I think it would be a really good sanity check for all of this. Filtering on test tags and stuff could be useful too, because all of the different test-case names are each their own user agent; that was really helpful to me for drilling down and exploring this data.
E: One of the things you're recommending there — being able to see what's hit a lot — is actually a ticket for implementation in the next few weeks. It's a flare chart, so as endpoints are hit more, the wedge for them gets longer, and the ones that are really long sit on the outer edge. I could drop a link to that, but that should help with easily identifying which tests are used.
F: Yeah, we have six more minutes — let's move on quickly. I think Chris briefly touched upon the curation of the board, the project board. Essentially, what we are looking for is a pattern to identify which SIGs to engage, and then also, on a periodic basis, we have to figure out — there are lots of rotten issues that are still part of the board, and those need to be manually addressed.
L: Then, yes, I just wanted to go to the options off of KubeCon Barcelona. We've been approved for a combined track, which is like the intro and the deep dive together. So my plan was to present, you know, just the intro deck that we have to anyone who's new, and apparently there's a huge number of attendees at KubeCon, so I imagine there will be some new participants and people who are interested in certifying, so that should hopefully be valuable content for this group.

Right — so, in terms of topics: people put their planned attendance there — and this is, you know, not committing to it at this point, it's just that you expect to be there — what kind of topics should we bring up face to face, do you think? Is there anything that, I mean, you might want to champion there? I'd welcome any kind of input.
C: Yeah, I agree. I felt like the last working-group session was good for getting some consensus on topics we've discussed at length. I personally feel as though a discussion around the concept of validation would be helpful. I think we have some preparatory work to get us there — like, there's a PR that I think somebody started — but I feel like you were talking about that as a way to kind of do maybe node validation, or maybe we're talking about CSI validation and CNI validation and CRI validation, to talk about a consistent set of behaviors across the different plugins that implement those things. I think we had talked about those as maybe a way of trading off the concept of profiles, and there seemed to be some consensus that validation sounded like a good concept — maybe a good way to rename what we now call the node e2e tests, or the node conformance tests — but I feel like we haven't spent time fleshing that out and getting to actionable steps.
C: A new discussion of what defines useful conformance, and how we get there, might be good. It could be a good time to just sort of review where we're at in implementing Jon's proposal: do the set of behaviors there look meaningful when it comes to the types of applications it enables us to run? Okay — my AirPods are literally dying right now, so it's probably a...