From YouTube: 20220406 SIG Arch Conformance
A: Good day everybody, and welcome to the K8s Conformance meeting today, 6 April. We have a code of conduct in Kubernetes, where we follow the CNCF code of conduct toward each other, and the meeting is being recorded; it will be streamed on YouTube for later viewers. All right, let's start looking at the agenda over there.
A: Sharing... you should be seeing my screen now. All right, let me take us through the... please add your name to the attendees list. Let me take you through the agenda. The first point is the release that closed: yesterday was code freeze, so we brought in everything we could for the last release. We landed 16 endpoints: 12 proxy endpoints, which came a very long way because it was hard, then we did one template endpoint, and we did three batch endpoints.
A: There are four more sitting in the queue for the lifecycle test. It was running perfectly on the test grid for two weeks, but at the last moment it flaked, and Clayton graciously pointed out the small fix that we need to do, and then that was picked up. There was a... from Jordan's team...
A: There was a PR that killed one of the endpoints that was conformance-promoted, so, since we continue to monitor that we don't lose any ground, it was brought up, and I think within 24 hours that was fixed, so really helpful there. Then the log file list handler endpoint and the log file handler: both of those we made ineligible, thanks for helping with that, John and Dims, and they're also updated in the conformance test requirements.
A: Thanks for merging that, Bill. Then we have eight new endpoints that came in: the promotion, which took around two weeks, merged yesterday. So those eight endpoints are now GA, and they were added to the APISnoop list. That leaves us with 45 endpoints remaining without conformance tests, so it looks positive that we could still finish up this year. Just to note, if you go to APISnoop this morning, you'll find that it shows 50...
A: Something funny happened yesterday: I updated it and it showed the 45, but the eight endpoints from storage capacity hadn't pulled through yet, because one of the jobs hadn't run. After the job ran, we lost five tests, but those are still in the conformance YAML. I'm still trying to figure out what's going on; there are probably just some underlying issues.
D: Changes went in on some of those tests, and I'm wondering if that had an impact. I think we already have some marked Linux-only as required, which would obviously force Windows out of it. But those...
A: There are two PRs related to pod security, but I doubt whether those could have an impact, so yeah, we're busy digging to see what's going on. Then, first, I'll introduce a topic: ControllerRevision.
A: When we exercise it with an e2e test, the user agent that exercises ControllerRevision actually doesn't show up; you can't link it together. So it's not possible to see it. You do use that endpoint every time you run, for instance, the DaemonSet test, but you can't actually link it back in APISnoop, because it's not the e2e user agent that is exercising the endpoints.
A: So because of that, it's a little hard to test those, and we'll have to work out another way of testing them. We did discuss the eligibility, and you'll see there's a link from Jordan. He said if a third party wants to use ControllerRevision, you would need that test to confirm that it actually conforms.
E: Yeah, that's pretty much it. The DaemonSet and StatefulSet tests are, of course, using ControllerRevision behind the scenes; it's just that APISnoop sees the user agent coming from the appropriate controller, so there's no way to easily associate the two. So I have started a direct test of ControllerRevision.
E: So far I've got one of the list endpoints and get. So I'm just wanting a little bit of confirmation that this sounds like a reasonable approach, and then I'll carry on with that.
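The attribution gap discussed above can be sketched in a few lines; this is an illustrative Python sketch only (the event data, constant, and function name are invented for the example, not APISnoop's actual code): endpoint hits are credited by the audit event's user agent, so hits made by a built-in controller on a test's behalf are never counted for the test.

```python
# Illustrative sketch (invented data and names, not APISnoop's real code).
# Conformance coverage is credited only when the audit event's user agent
# is the e2e test binary, so endpoints exercised indirectly via built-in
# controllers never show up as tested.
E2E_USER_AGENT_PREFIX = "e2e.test"

audit_events = [
    # The DaemonSet e2e test talks to the daemonsets endpoint directly...
    {"resource": "daemonsets", "verb": "create",
     "userAgent": "e2e.test/v1.24.0 (linux/amd64)"},
    # ...which causes the controller manager, not the test, to hit the
    # controllerrevisions endpoint on the test's behalf.
    {"resource": "controllerrevisions", "verb": "list",
     "userAgent": "kube-controller-manager/v1.24.0 daemonset-controller"},
]

def endpoints_credited_to_e2e(events):
    """Resources whose hits carry the e2e user agent, sorted for stable output."""
    return sorted({e["resource"] for e in events
                   if e["userAgent"].startswith(E2E_USER_AGENT_PREFIX)})

print(endpoints_credited_to_e2e(audit_events))
```

Running the DaemonSet test produces both events, but only `daemonsets` is credited, which is why a direct ControllerRevision test is needed.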
F: Yeah, it's exposed, so... based on, what is it, Hyrum's law or something, right? You expose an interface and eventually somebody will...
F: I think we should probably... if it's there and it's GA and it's been there for a long time, and we have built-in controllers that rely on it, it's highly likely there are third-party controllers that also rely on it, and we should make it part of conformance. Cool.
E: I've started around the... I found some traces of ControllerRevision in the history controller at the moment, so I've just got to carry on pulling the right thread to find a little bit more of the puzzle. So it's a work in progress. Okay.
D: We need something to point to when these conversations come up, for clarity, so that we don't have to go... because that's why I was joking about the keys. John actually has the historical knowledge for all of the community, coming from architecture, to be able to give us these authoritative statements, which we translate into policy, which lets us know which endpoints are required for use by the rest of the community, versus some of those things that are ineligible, that we don't have to write tests for.
E: Also, it will probably help with the follow-up discussions with SIG Apps on the repo.
A: And the great thing: when we get these ControllerRevision endpoints in, and also the remaining Job endpoints, we are done with all the apps endpoints, so that would be fantastic. Nice, so we're targeting those for this release. So we're going to have a direct exercise; and Stephen, am I right, it's going to be kind of the same way that you're running the status tests at the moment? A similar approach, yeah?
E: Basically, I'm just going to be directly trying to make sure that the various endpoints get tested as though they are coming from an appropriate controller. That's my current mindset, so yeah, it's pretty close to how the status stuff has been done at the moment.
A: The first video was actually quite good. Nice, okay, now the next three are all discussing ineligibility; I'm going to throw that over to Stephen, he's got more background on these. Basically, we are left with all the endpoints that have issues or are difficult, since obviously all the easy things were taken first; this is the top-of-the-tree fruit. So we're going to run these past you, just to check whether you agree about the eligibility of all of these. Okay, over to you, thank you, Stephen.
E: Yeah, so the first one, deleting a collection of nodes: I'm just wondering, with the current test framework, is that even... I don't even know where to potentially start with that, because the nodes are already created as part of the cluster setup for the test framework. I just felt completely lost on how this could even potentially be tested.
F: Yeah, you could. I mean, if you look at, what was it, the one we use for scale: we create fake nodes, there are pods running as nodes that register as nodes, so you could do something there. But do we actually test the node APIs for conformance at all today? We've often had a debate around... or at least the prioritization has been around application portability, which node operations generally are not part of.
F: You know, they can be, for things like Cluster API, which is effectively a sort of application, but do we conformance test all the rest of the node registry and all that registration and all that?
E: I think there have been some historical node endpoints that were made ineligible because of various statements at that point in time, and I can't remember off the top of my head what they are. But you know, in the ineligible list there are some endpoints that have been made ineligible.
F: I don't imagine those require the kubelet API; the kubelet probably sets those.
F: So if we could... I think it's worth investigating those status ones and seeing if they can become part of conformance, but I don't think they're the priority. The delete, I think, definitely is ineligible. Well...
F: The question is whether... I mean, honestly, it's an application depending on the node's standards, and I would discourage that strongly from an application standpoint, because it's a bad idea, but...
J: So the only caveat here is: how do we treat things like virtual kubelet, right? They would depend on these APIs to be working and not changing their behavior, too. That would be the concern.
J: Both the delete and the read/replace status, all of them, would be exercised in that case, right? Because, you know, it's actually one Go process that does masquerading as many behind the scenes, right, so they would be using all these things instead of the kubelet actually using these things. We don't treat them as applications at this point, right; we treat them as components that people have written.
J: ...a class of applications where they would end up using these APIs, and we are not prioritizing for those, for sure.
F: If it doesn't support this... if you can't create a cluster that can process these endpoints, then you don't get the badge; you're not really Kubernetes because of your applications. I don't think saying that any distribution needs to be able to support...
F: So to me, there's a caveat in our conformance list that says: if you have to talk to the kubelet API, you're not going to be part of conformance, because our conformance program is around our API server APIs and what we serve, and we're not going to, right... we're not going to take it. So to me...
F: Reading the ineligible list, it says create node needs to talk to the kubelet API. Is that really true? I think that's probably not, otherwise we couldn't do our Kubemark or whatever it is that registers itself. Or is this bidirectional? Is it like the kubelet calls in and registers a node, and then the API server calls back? Because how do you do that in a complex networking environment? Like, I'm skeptical, I guess. The API server can talk to the kubelet, that's generally true, but...
D: ...to this piece of software that is part of extending the conformance test suite to do something similar: something that is not virtual kubelet, but provides us a surface area so we don't hit their kubelet API implementation, but have something to test their surface area for their node-style things that would normally require us to hit some type of... that would exercise kubelet APIs.
J: The easiest way to think about it is, you know, Kubemark does this, right? Kubemark pretends that there are many nodes in a single VM.
J: So... I don't even want to go there, and at this point I would rather say these APIs are used by components and are not end-user facing. You know, these are not end-user facing, and they are used by built-in components of Kubernetes for special things. That's the way I would express it.
F: Yeah, before we get there, let me say a few more things. I think I'm okay with that for now, and then we can revisit later whether this set of APIs related to node should be in conformance or not. I'm okay with it not being, for now, but just to sort of clarify the policies...
F: The policy actually is... now that I'm sort of swapping this back into memory: the issue we had, and the reason there's this policy about the kubelet, was that there were e2e tests that, if I recall correctly, would talk directly to the kubelet API, and what we said is that that API is not part of the conformance surface.
F: A cluster could be perfectly conformant from an API server API point of view but not support that kubelet API that the test is using, and the test wouldn't run successfully, yeah. So that's why that rule's in there. And so this gets back to my earlier point: just because the API server internally calls a kubelet API doesn't mean that that API endpoint on the API server is invalid. It was the test that was calling the kubelet API, not the API server.
J: So, just to catch you up on one more twist to this, John... when was this... while you were out, there was one matter Clayton kind of ruled on, and we put it into the policy, which is about the log file list handler and the log file handler. Yes, yeah... I did see that, I actually remembered that. So we added some language in the policy to say something about that. I'm trying to recall exactly the link.
K: It is up here... there we go. It's basically saying debugging tools don't count.
J: The operational tools, things used for operations, that kind of thing, yeah.
F: It's a little fuzzy, because again, we've exposed this API, but yeah. You know, if an application relies on that, they're doing something they shouldn't be doing, probably.
H: Everywhere we can, just... yeah, "don't test node" feels odd, so I have something, so...
J: If there is an API that is used to augment the cluster with an additional component, and that is not typically used by end-user applications, then it should not be tested.
J: Which ones can be done through that configuration resource?
D: All of the controllers for... webhooks, mutating hooks.
F: The problem with that is... maybe it's not a problem; I just think we should discuss it with other interested parties. The problem with that is that if I'm, like, a vendor of management products, now I can't necessarily rely on reading node data out of a cluster and having a reliable source of information, or...
F: This is why the read ones, at least, I'm hoping we can make part of conformance: a read of core v1 nodes. Yes, and we...
J: Right, yeah, you need SSH, or you need some sort of API that is not covered... I guess you definitely can.
F: All we have to do for read status and read node is: I know what node we're on, because we're running in-cluster, and there we are; for that node, which we can get from the downward API, right, I believe. And so we look at that node and we make sure we're alive or whatever, and that's probably good enough for the read status and the read node. Replace status is a little trickier.
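The downward-API approach mentioned here could be sketched as a pod spec; this is a hypothetical example (the pod and container names are invented), showing how a test pod learns which node it landed on so it can then read that node back through the API server, without ever creating or deleting nodes.

```yaml
# Hypothetical sketch: the downward API injects the scheduled node's name,
# so the test can then GET /api/v1/nodes/<name> (and its status) directly.
apiVersion: v1
kind: Pod
metadata:
  name: node-read-probe
spec:
  restartPolicy: Never
  containers:
  - name: probe
    image: busybox
    command: ["sh", "-c", "echo running on $NODE_NAME && sleep 3600"]
    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
```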
J: Okay, so assume that there are two buckets: one is "we are not going to deal with it right now", and another one is "not eligible". And so this goes into the first bucket.
J: Let's not worry about it right now, rather than never do it, right?
A: Which brings me to the point that I discussed yesterday: there's a list of ineligible endpoints that we should revisit. So what I would do is, before our next meeting, try and bring up the listing, put it in groups, and maybe start looking at which ones we want to bring back, maybe beginning of next year, once we've cleared out all we have now. So it would be great to clear out all the technical debt now, the last 45 endpoints, and then start thinking specifically about the node things.
F: So let's continue the debate on the node stuff; put it at a lower priority. There are two aspects: there's whether or not it should be part of conformance, and then there's whether or not we can test it with our current framework, and those are two separate things. The first one, we have to decide whether it should be, and that is something we need the whole group here for; at least Clayton and I, and Dims would be great to have too, and we can have that debate.
J: Let's open an issue for this, so that we can try to do this asynchronously with those other people. Just trying to get everybody on the call is probably not going to work well.
A: All right, I think that's reasonable, so we'll open an issue, and we'll share it in this channel, and I'll include Clayton and John, all of you, on that, and when we have a back and forth, we'll see where we land.
D: I think, if that's the end of the agenda... I wanted to note that we have 45 endpoints remaining, and I hope to... oh sorry, I saw your hand, go ahead, Stephen. We still have...
F: ...is that it doesn't do any validation, right? Because if you think about it, there are two ways to do quotas: one is you enforce what people are allowed to request, and the other is you look at what people are using, and then you do something. I don't think it does the latter; I think it does the former, meaning it enforces it as policy at the API server level, not by bubbling usage up.
D: Can you give some further clarity on what we're hitting on there, Stephen?
J: The other thing is, there is an admission controller for resource quota, I guess. So, you know, that's an optional component people can switch off.
F: Yeah, the actual effect of the implementation, or the data plane of it, is an optional component, you're saying? Correct.
D: Sorry, one of the things that we've tried to do is to make sure that the pool of endpoints we're looking at is generated by an API server spun up with every optional component disabled. So from an API perspective, with everything optional disabled and only GA enabled, this endpoint is exposed, and yeah... it's like, how...
F: No, so, like, network policy: I thought I saw something recently go by showing network policy as part of the conformance surface area, and network policy doesn't actually have a controller, a networking implementation that implements it, right? So there's data plane and control plane, and the control plane...
D: This is something we should go and revisit, then, I think, to help identify those areas that are available in the...
A: Quickly now... I was thinking about this. I knew... it was the storage capacity that I was thinking about.
F: So here's an example: when I first started on the conformance thing, I noticed that our conformance tests tested the Service APIs, but didn't actually test whether the actual data plane of the network complied with those Service APIs. I can stand up a Kubernetes cluster without a controller manager, and the API server can read and write all those APIs; it just doesn't do anything, yeah. So my contention is that we would like our...
F: ...conformance tests to show the behavior: not just that CRUD can be done on the APIs, but that the actual behavior of the cluster, based upon what resources you CRUD, does what you expect, right. So what that would imply is that something like network policy, which is optional functionality...
F
Those
apis
should
not
be
part
of
conformance
because
it's
perfectly
legitimate
to
run
a
conforming
cluster
that
or
that
does
not
implement
the
controllers.
For
those
apis,
I
mean
the
standard
gke.
Doesn't,
I
think,
probably
the
standard,
every
kubernetes?
Doesn't
you
got
to
run
something
special,
usually
to
get
network
policy
and
dimms
is
saying
resource
quota
may
be
the
same
in
that
in
order
for
resource
quota.
To
actually
do
anything,
you
need
to
enable
a
particular
admission
controller.
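The "enforce what people are allowed to request" model under discussion could be illustrated with a quota object; this is a generic hypothetical example (the names and values are invented, not from the meeting), and, as noted above, it only takes effect when the ResourceQuota admission plugin is enabled in the API server.

```yaml
# Hypothetical example: caps what a namespace's users may *request*.
# Enforcement happens at create/update time in the ResourceQuota
# admission plugin, not by observing live usage afterwards; without
# that plugin the object can be CRUDed but has no effect.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
```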
F: Right, but my point is: it's the decision of this group whether it's optional or not. Like, everything is technically optional; from a technical standpoint you can turn all the stuff off. You can turn off all the APIs, you know, and just run one thing, but it wouldn't be conforming. So this is really about: do we think of resource quota as a feature? Because it's built-in code; it's not like people have their own controllers, or there's an open source implementation of it.
J: Probably. When you see it that way, it seems like yes, because, you know, just reading through... let me paste this link here. It seems like people will want to do this, all right.
J: So can we check some of the major cloud products, whether they have it or not, you know, as a data point?
F: Applications won't break; their policy... much like RBAC, as others have suggested, these are policy constraints to control utilization of the cluster by different tenants, and that's not functional behavior of an application; that's policy controls. So I'm okay with it staying out in the initial cut. I think this again starts to tickle the dreaded... there's a set of functionality that a production system needs in most cases that isn't, maybe, part of the base core conformance.
F: So this is why these are all at the end: because they're all harder; they're, like, more subtle decisions we have to make.
A: I like it, yeah. I think we're going to find some... we're going to make issues for these. Just to summarize: the node endpoints, delete the collection, I think we agreed with.

I: That could be ineligible, or we should make that based on the precedent that they create, since the delete node is already ineligible.
A: Okay, so I'll make that one; I'll create the PR to make that ineligible. Then I'll make an issue for the status ones, and then we'll make another issue for resource quota, for discussion, to see where those go.
F: Okay. I think that they may be ineligible simply because they're not easily tested in all environments, and that may be a sufficient reason; as opposed to... like, maybe we wish these could be part of the surface area, but because of the way managed providers in particular work, we don't need them to be. I don't know; that's the discussion, that's the discussion.
A: I baked some cookies the other day, and one of the trays of cookies got slightly burned, and I put all the cookies in the jar. By the end of the batch of cookies, I realized the kids auto-sorted them: they leave all the slightly burned ones in the jar, because they pick out all the ones that are not slightly burnt. So I think we are there now; we've got the slightly...
D: That was a segue into what I wanted to make sure of at the end of the meeting. While we do only have 45 endpoints remaining, and we are cautiously optimistic to try to get them done this year, these are going to be the harder, burnt cookies, and we're going to hit stuff like this, unless we make some nice, beautiful policy decisions that they're ineligible, right, then.
F: The point is... no, I agree that the ones we're coming up to now are going to be the ones where we have to have this debate. It gets back to what's the point of the conformance program, and making sure that it serves its primary purpose, rather than getting pedantic, yeah.
A: That would be fantastic. I think we are five minutes to time; if there's not anything else, I think that was a fantastic, good meeting for today.