Description
Service APIs Bi-Weekly Meeting (APAC Friendly Time) for 20211025
A
All right, all right, we're recording. It's October 25th. We've got lots to get through today, but first off we've got Nick to talk about conformance. So let's hand it off to Nick.
B
Yep, cool, okay. So I wanted to level-set a bit with this first part, because I think it's easy for us to just start using the word without making sure we're all on the same page. I think, at the end of the day, the most important reason to do conformance on the Gateway API is one of the big deliverables we want here:
B
We
ought
to
move
between
gateway,
api
implementations
and
if
we
don't
have
a
standard
way
to
do
conformance,
then
we're
not.
Then
people
are
not
gonna.
Be
able
to
do
that
because
things
are
gonna.
People
are
gonna,
implement
things
in
different
ways,
and
so,
if
we
want
people
to
be
able
to,
we
wanna
meet
this
sort
of
cross-migration
goal.
B
then we've got to have really good conformance, and it's got to be really clear, easy to do, and easy to understand. When I'm talking about conformance here, I think there are three parts to it. There's the definition of what conformance actually is, that is,
B
what it means for something to be a conformant implementation of the Gateway API; a set of tests that validate that you are conforming to the thing we defined in step one; and then a process by which you can submit your data somewhere and have a canonical representation of that data, so that there's a place where you can see which are the canonical,
B
The
the
conformant
implementations
of
the
gateway
api,
there's
a
corollary
across
all
four
of
those
things
that
we
need
to
talk
about
later,
about
how
we
handle
versioning
and
how
we
say
like
because
for
the
upstream
kubernetes
apis
implementations
have
to
re-certify
for
each
kubernetes
version
right.
You
are
conformant
for
a
particular
kubernetes
version.
You've
got
to
re-test
and
resubmit.
B
Most people don't normally put the version on the thing, but you actually have to redo it for every version. So I think we're probably going to need to do something similar: when we cut a version, you're going to need to resubmit your tests. There are corollaries there; probably the testing and the conformance profiles will all be included as part of the release of the Gateway API. I wanted to stop there and ask: is any of what I just said controversial?
C
B
Yeah, okay. So let's scroll down a bit, Rob. I think I covered these few things. Actually, if you want to scroll back up a little bit: I just want to cover the scope items here. We've talked a bit about how there are different implementations of the Gateway API, and I think the absolute must-have is that there's some way to have different types of conformance for different implementations, at a minimum.
B
I
think
there
needs
to
be
layer,
four
and
layer,
seven
sort
of
conformance
profiles,
so
your
layer,
seven,
is
your
typical
ingress
controller
that
that
handles
layer,
seven
http,
https,
tls,
stuff
and
then
layer.
4
is
the
stuff
that
handles
the
someone
at
the
moment:
poorly
defined,
tcp
and
udp,
and
maybe
tls
stuff.
B
I
think
I'll
discuss
this
a
little
bit
further
down
so,
but
the
other
things
that
we
need
to
do
are
we
need
to
make
sure
that
there's
a
way
in
the
performance
testing
and
the
conformance
profiles
to
define
like
to
use
the
support
levels
that
we've
already
defined
in
other.
In
the
api
there
needs
to
be
a
way
that
you
can
validate,
that
your
thing
supports
core
you,
the
core,
the
extended
and
the
implementation
specific
stuff,
and
there
needs
to
be
ways
that
that
we
validate
all
of
those
things
for
extended
features.
B
In
particular,
I
think
this
is
really
interesting
because
extended
features
you
don't
have
to
support
them,
but
if
you
do,
there
are
behaviors
that
you
must
support,
and
so
that
me
that
says
to
me
that
we
need
to
have
conformance
tests
upstream
for
those
extended
features
that
are
conditionally
enabled
if
you
say
that
you
want
to
support
those
things.
So
exactly
how
we
do
that
yeah.
B
I
think
that's
the
part
that
I
want
to
talk
about
in
the
proposal
and
then
the
last
thing
is
obviously
the
testing
suite
there
and
I
think
one
of
the
things
that
would
be
really
nice
to
have
is
that
there's
some
way
to
do
this.
That
means
that
you
don't
have
to
go
out
to
some
external
website
to
find
out.
If
your
thing
is
is
is
supported
or
not,
then
that
would
be
really
nice.
Sorry,
I
didn't
see
what
you
added
there.
How
are
you.
D
B
Yeah, so I think the extended feature stuff is probably going to need a few passes to get exactly right, but that's one of the things I wanted to talk about in the proposal: exactly how we make that work. So let's go down and talk about the actual proposal. Well, before we do that: it feels like nobody's chatting at me saying this is a terrible idea, so...
E
B
Here I think the three phases I talked about are the same as upstream, from what I understand: you define a profile, you provide testing for the profile, and then you provide a way to submit data. That's based on what I believe upstream wants to do. I haven't touched it in a while, so maybe I'm out of date there, but
B
I
think
the
thing
that
we
have
to
do
that's
unique
is
having
very
clearly
defined
profiles
and
we
and
having
the
the
different
support
levels
for
different
features.
Those
things
are
unique
to
us
and
so
we're
going
to
have
to
have
unique
ways
to
handle
doing
them.
I
see
got
it
yeah,
but
I
think
the
the
sort
of
the
broader
strokes
profile
we
should
just
lift
from
upstream
and
that's
what
I've?
That's?
What
that
that
sort
of
I
have
just
assumed
that
we
have
done
that
in
the
rest
of
this.
B
I'm
talking
about
in
the
rest
of
this
I'm
sort
of
assuming
that
that
we're
going
to
stick
with
you
know
that
we
define
the
profiles
we
have
to.
We
have
tests
for
the
profiles.
We
have
a
way
to
submit
the
data
back
and
then
all
of
this,
all
of
the
rest
of
this
is
talking
about
what
it.
What
how
do
we
define
the
profiles?
B
E
B
I think it's actually a JUnit test file or something like that. Basically, you take the Sonobuoy output and you send a PR to a specific repo with that output, and then someone checks that the output does what it says, or something like that. Thanks, James. It's been a little while since I looked at this, but the last time I did, you run the Sonobuoy tests and then you get the output.
B
Yeah, I think something like that is what we'll need, but exactly how we end up doing it depends on what we end up building or using as a test framework, and what the output of that test framework is, and stuff like that. So I kind of went: that's future us's problem.
B
We've got other problems right now. So I think we should have just the two conformance profiles to begin with, and we need to ensure, when we're designing the conformance profile mechanics, that we have a way to add more later, ones that may be a superset or a partial set of what we have already. Basically we just need a way to extensibly
B
add those without having to substantially change the mechanisms we're using. And it seems to me to make sense that core conformance for each layer means, for the objects that fit into that conformance profile,
B
support for the core fields and the core functionality of those things, and that we treat that like a container image tag, in the sense that the core conformance tests are actually like a manifest for a certain set of tests to be run against a cluster. But we kind of don't want to be too specific about that for core conformance, because we want to be able to bump it on a per-version basis and just say "core conformance".
B
So we want to treat it a little bit like a tag in Git or in a container registry. We say: okay, for version 0.5.0 of the API, we've issued core conformance for the Layer 7 conformance profile and the Layer 4 conformance profile, and there's just a boolean that says "Layer 7 support: true". That includes some set of tests that you can go and look at if you really want to, but the end user should not really care.
B
They should just care that something labeled as core conformant does everything you would expect it to do; the exact set of tests does not matter.
B
However,
that's
kind
of
syntactic
sugar
for
the
underlying
mechanisms,
and
we
want
to
expose
those
underlying
mechanisms
a
little
bit
more
in
the
case
of
the
extended
features
and
so
I'll
get
to
that
in
a
second,
but
so
the
that's.
What
I
think
you
that
that's
what
I
think
the
core
confirmation
profile
should
mean.
B
Is
it's
like
a
shorthand,
for
this
is
the
set
of
tests
that
you
run
develop
to
validate
that,
and
so,
when
we
do
that,
yeah
like,
like,
I
said
there,
we
should
we
need
to
version
the
conformance
profiles
and
include
both
them
and
the
tests
in
the
release.
So
when
we
cut
a
version
that
will
that
it
will
include
you,
the
conformance
profile,
like
the
set
of
tests,
that
we
should
run
like
the
list
of
tests
that
we
should
run
and
then
the
tests
themselves,
they
are
the.
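To make the "profile as a versioned manifest of tests" idea concrete, here is a rough sketch of what such a release-shipped manifest could look like. This is purely illustrative: the kind, field names, and test names are all hypothetical placeholders, not anything agreed in this discussion.

```yaml
# Hypothetical conformance-profile manifest cut with each Gateway API release.
# Every name below is a placeholder for discussion, not an agreed format.
kind: ConformanceProfile            # hypothetical kind
name: layer7
gatewayAPIVersion: v0.5.0           # the profile is re-issued per API version
coreTests:                          # the "manifest of tests" behind the core claim
  - HTTPRouteBasicRouting
  - HTTPRouteHostnameMatching
  - TLSRoutePassthrough
extendedTests:                      # run only if the implementation opts in
  - HTTPRouteMethodMatching
```

The point of the indirection is that an implementation claims "Layer 7 core conformance at v0.5.0", and this manifest, shipped with the release rather than with the claim, pins down exactly which tests that implies.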
E
B
I think, regardless of where the code lives, the releases may have to be in lockstep, right? If you have the code live somewhere else, you actually have a coordination problem: when you cut a release in the actual API repo, you also have to cut a release in the testing repo. So it probably makes more sense to just put it all in one place, because it's much easier: there's less coordination, and it's easier to test and such,
B
When
you
know
that
everything
is
at
the
same
level.
How
do
you
test
the
tests?
And
you
know,
google
police,
the
police
right
like
how
far
down
do
you
go
on
that
rabbit
hole?
So
I
think
the
you
so
I
think
we
need
to
version
them
and
include
them.
They
need
to
be
versioned
and
released
at
the
same
time
is
my
proposal
and
also
you.
B
The
other
thing
that's
important
to
note
here
is,
I
think
I
said
it
before,
but
the
conformance
profiles
in
general
should
be
composable
like
you
should
be
able
to
support
multiple
like
you
should
be
able
to
say.
I
support
layer
set
layer,
4
and
layer
7
or
I
support
some
other
set
of
conformance
profiles,
and
they
should
not
necessarily
be
mutually
exclusive.
Like
you
should
be
able
to.
You
know,
there
may
be
shared
tests
between
between
different
performance
profiles,
but
they
should
not
necessarily
be
like.
You
can
only
support
one.
B
That's
what
that's
what
I'm
trying
to
say
with
that?
You
should
be
able
to
support
multiple
performance
profiles:
okay
and
then
so
I
just
went
through
and
sort
of
marked
out
some
quick,
really
really
basic
stuff.
You
know
I'm
100
open
to
changing
the
names
here,
but
I've
kept
it
the
most
descriptive
name,
I
could
think
of
just
layer,
7
and
layer
4.
if
we
can
come
up
with
better
names
than
that.
I'd
be
very
happy
to.
E
Yeah, this is like the bikeshedding thing, but probably another alternative is something like HTTP versus... anyway, we can talk forever on this; I don't want to get stuck on it.
B
Yeah, yeah. Again, this was all handwaved; it's going to be something like this, and it's not worth worrying about for now. The important part is what we're actually talking about here, which is that Layer 7 means you support GatewayClass and Gateway. GatewayClass and Gateway have to be supported in every single conformance profile, right? I don't think that should be very controversial: Gateway is the key object for this whole API.
B
What
we're
really
talking
about
in
terms
of
performance
profiles
is
what
extra
routes
you
support
and
what
features
you
support
inside
those
routes
and
so
layer,
seven
is
http
route
and
tls
route.
You
have
to
support
all
of
the
fields
march
core
in
each
of
in
each
and
all
four
of
those
objects,
and
then
you
may
support
field
maps
extended
or
implementation
specific.
B
Similarly,
for
for
layer,
four,
that's
gateway,
class
gateway,
tcp
udpr.
Now
one
of
the
things
that
I
think
the
more
I
have
used,
the
more
I've
talked
about
the
gateway
api,
the
more.
I
think
that
there
are
definite
use
cases
for
using
that
where
you
could
use
the
gateway
api,
as
we
have
constructed
it
to
describe
things
like
vpn
tunnels,
gre
tunnels,
stuff
like
that,
where
you're
talking
about
actually
layer
3
or
are
other
constructs
lower
than
light
4..
So
that's
one
of
the
reasons
I've
left
this
pretty
open
that
I
don't
want.
B
I
don't
want
to
rule
out
adding
stuff
for
that
later,
with,
with
the
way
that
we
talk
about
this.
So
in
each
case,
though,
the
conformance
profile
says
you've
got
to
support
all
the
fields
in
our
core
and
you
may
support
the
extended
fields.
So
how
do
we
know
what
extended
fields
you
support?
Well,
the
contention
that
I
make
is
that
we
should
just
put
that
in
the
gateway
class
status.
B
It
should
be
the
responsibility
of
a
controller
to
write
a
stanza
to
the
gateway
class
status
that
describes
the
conformance
profile
that
the
controller
meets
in
terms
of
both
its
core
compliance.
His
core
support
and
its
extended
support.
The
exact
mechanics
of
how
we
specify
the
extended
support.
It's
pretty
complicated.
I
think
we're
going
to
need
to
go
back
and
forth
on
a
lot.
What
I
wanted
to
get
to
today
was
agreement
that
that
sort
of
broad
strokes
approach
was
the
way
to
go.
E
Yeah, I'm not as certain that we actually need to put this into status, because it almost seems like something that exists on the back of the box, going back to the time when software shipped in paper boxes, rather than something that would be omnipresent while the user's actually using it.
B
So the reason I wanted to put it on the GatewayClass is that, in my mind, there's a strong tie, not a one-to-one mapping but close to one-to-one, between a GatewayClass and an implementation. Every GatewayClass must have an implementation; one implementation may have more than one GatewayClass. But for every GatewayClass, it's the GatewayClass
B
That
tells
you
what
the
gateway
is
under
that
support,
and
so
that's
why,
in
my
mind,
the
gateway
class
is
the
right
place
to
put
this
information,
because
it
means
that
you
don't
have
to.
I
think
the
the
really
good
thing
you
get
out
of
this
is
you:
don't
have
to
run
off
to
some
website
to
look
at
the
back
of
the
box.
The
back
of
the
box
is
like
included
in
the
box.
F
Yeah, the trick here, though, is the vocabulary, right? Because this goes back to the feature tags, or whatever terminology we used before, discussion. Basically, I don't think you programmatically want to know "does this support this conformance thing"; you probably need to know more granular detail than that.
B
Yeah, on the programmatic side, that's why I kind of wanted to make it this way. If you scroll down a little bit, I had a couple of ideas here. Sorry, actually just up a little bit: I've put a summary of the things I think we should do. Just that bullet, thanks Rob: we have a small number of profiles that are effectively syntactic sugar for a certain set of tests.
B
So
I
think
that's
the
key
part
james,
that
you
know
that's
what
I
meant
by
like
it's
like
a
doc.
It's
like
a
container
image
tag
or
a
git
tag
that
you
know,
there's
two
fields
really
that
are
important.
There's
a
version
field
somewhere
and
a
do.
I
support
this
conformance
profile
field
and
those
two
things.
B
let you know which features are included. As a user, you can make an assumption, based on the version, about what stuff is included, and as a tool consuming this information, you can pull the API spec, the objects, and all that sort of stuff. Presumably we would need to have a manifest somewhere of the exact stuff that's included, somehow. Again, there's a lot
B
of handwave there, but the idea here is that the core conformance profile should be the stuff that you would expect an implementation to do. And then the interesting part is the extended features, where it's: do you support this extended feature, and if so, here are the ways that you have to do it. And so I think...
E
I agree that there should be an easy, programmatically consumable way for someone to understand which extended things a given implementation supports. It's still a little hazy to me whether we necessarily need to have it in a very Kubernetes-native, inline format, because it feels like that has fewer use cases, although it sounds useful in very niche circumstances.
E
B
The idea that it has to be on the status: it doesn't have to be, but I wanted to use that as a way to discuss some of the things that come up about the way we handle this information, because no matter how we do it, it needs to be machine-consumable, that's right, and it needs to...
A
B
If that something is the GatewayClass, then you don't have to reach out to another site: you're already talking to the Kubernetes API server. That lets you go to one place, as for the testing thing, and then putting that stuff on the GatewayClass status
B
Lets
users
use
that
as
a
way
to
say
what
is
this.
You
know
I've
just
installed
this
implementation
in
my
cluster.
Can
I
move
my
udp
routes
to
it?
You
know
and
you
look
at
the
gateway
class
and
be
like?
Oh
no.
I
can't
you
know.
Hopefully,
you've
figured
that
out
already,
but
but
it
means
there's
like
a
single
place.
B
That's
easy
to
understand
that
you
go
to
you,
don't
have
to
go
and
look
up
the
project's
website
or
something
like
that
or
go
to
its
github
repo
to
pull
down
a
manifest
file
that
describes
this
support.
It's
all
right
there
and
it's
described
in
a
standard
way.
That's
why
I
think
the
status
is
a
that's.
Why
I
was
pushing
for
the
status
is
that
you
get
an
easy
win
for
the
testing
framework,
but
then
it's
useful
for
other
people
as
well.
B
So I'd like you all, just for now, to live in a world where we've accepted that, yeah, sure, we're going to write some status, although we haven't, and let's talk about, if we did write status, what sort of thing that would look like. I put in the most obvious, most naive approach I could think of, which is basically some booleans plus annotations, which I think is a terrible idea, but I wanted to put it in here for completeness. If you scroll down a little bit:
B
Basically,
we
just
add
a
stanza,
that's
like
for
each
profile.
You
have
a
thing,
that's
like
supported,
false
or
true,
and
then
a
you
know,
a
map
spring
ball
of
fields.
That
of
extended
features
that
you
support.
Obviously
the
risks
here.
Are
you
know
this
isn't
any
pattern,
it's
like
annotations,
which
is
a
bad
idea.
You
know,
how
do
you
name
the
extended
field
supports?
How
does
that
describe
like
what
the
actual
extended
fields
do?
You
can't
do
domain
prefixing,
so
there's
no
way
for
you
to
like
slide
in
implementation.
B
Specific
features
into
this
at
all
versioning
is
really
coarse
and
it's
difficult
to
add
extra
components.
Profiles,
because
it's
not
it's
not
a
list
or
something
else
that
you
can
add
to
it's
like
a
it's
an
actual
feel
so
yeah.
I
kind
of
wanted
to
add
this
as
the
the
straw
man
to
be
like
this
is
something
we
could
do,
but
I
don't
think
it's
a
good
idea.
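As a concrete illustration of the naive straw man just described, booleans per profile plus a flat string-to-bool map for extended features, a GatewayClass status stanza might look something like the following. Every field name here is invented for discussion only:

```yaml
# Straw-man GatewayClass status (illustrative only; all field names invented).
status:
  conformance:
    layer7Supported: true          # one hard-coded boolean per profile
    layer4Supported: false         # adding a new profile means adding a new field
    extendedFeatures:              # flat map, annotation-style
      httpRouteMethodMatching: true
      httpRouteQueryParamMatching: false
```

Writing it out makes the weaknesses visible: profiles are fixed fields rather than a list, the feature keys carry no structure or versioning, and there is no room for domain-prefixed, implementation-specific entries.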
B
I think we should steal from the conditions API here: we should be looking at adding lists of structured objects to describe each of the things we want to describe.
B
So, in this one, let's skip past the conditions and stuff for the moment, Rob, and just show it, and I'll talk through the actual example. The idea here is that there's a list of profiles: structured data about the conformance profiles that you support, each carrying a small amount of information, like do you support it, and, if you do, what version are you talking about?
B
Probably some of these could be optional and stuff like that, but the key here is that the profiles are a list, and each entry is a structured object, the same as you would add for other Kubernetes structured objects. And then, more importantly, the features are also a list, and they are also structured objects, and that structured object can have information about...
B
That seems like it would be really useful for an implementation, to be able to say: here are the names of the features we support that no one else does, the implementation-specific ones. And here's the thing: I put in something like a field spec there, because that feels like a JSONPath kind of thing, to say "this is the field I'm talking about"; you can go and look at the spec and find the definition of the thing that I'm talking about.
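A rough sketch of the conditions-style alternative being described, with lists of structured objects for both profiles and features. As above, every field name here is a hypothetical placeholder rather than an agreed schema:

```yaml
# Conditions-style GatewayClass status (illustrative; schema not agreed).
status:
  conformance:
    profiles:                      # list of structured objects, like conditions
      - name: Layer7
        supported: true
        version: v0.5.0            # profiles can be versioned individually
    features:                      # extended and implementation-specific features
      - name: HTTPRouteMethodMatching
        supported: true
        version: v0.5.0
        fieldSpec: .spec.rules.matches.method    # JSONPath-style pointer to the field
      - name: example.io/MyTunnelFeature         # domain prefix leaves room for
        supported: true                          # implementation-specific entries
```

Because both `profiles` and `features` are lists, new entries can be added without schema changes, which is exactly the extensibility the flat-boolean straw man lacks.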
B
A lot of this is 100% up for negotiation, but I think the two key things I've put in this idea are, first, that the things are lists of structured data: the profiles and the features each have their own separate entries. And second, the profiles are, again, syntactic sugar for a certain set of core features. Theoretically, you could describe all of this as one huge list of features and not have the profiles, but the profiles are useful because those are the things that everybody has to do. So those are the key things I'm contending here: if you're going to put it on the status, then it makes sense to do something like conditions, where you've got a list of structured data, and that structured data should be easily extensible.
B
Yes, so, I just want to make sure I haven't missed anything. If we did it this way, then you can do versioning for both: you can version profiles, but you can also version your field support, so you can have implementation-specific fields that you care about and no one else does. And having the actual explicit field, being really explicit, I think is a good thing.
B
It makes it very clear exactly what extended fields you're doing, which kind of solves the "how do you know what extended features you support" problem, Harry. But the downside is that you run the risk of the object getting too big; if there are a lot of extended features, that's a big risk. And having a field spec and stuff like that seems a bit risky, in that it might change too much or something like that.
E
Let's say we want to do this. How do we start saying "this is an extended feature", giving it, I guess, a short name, and then describing what is inside it? And then the other question is: if we are going to link it to either the API types' comments or documentation that's in text somewhere on the website, how do we keep all that stuff consistent when we're writing this? Yeah, I...
A
I think this really complements the notes we had earlier around conformance, too, and the questions you were raising, Bowie, about how we assign short names to features.
E
And the question would be: is it then like a classification? Then, unfortunately, it's a bit fuzzy. Is it one short name, or is it three short names, because there are three different, smaller features inside it? I think that's going to take up the bulk of our effort: simply writing it down and organizing it.
B
Yeah,
I
think
that
comes
back
to
what
james
said
about
classification
and
taxonomy
right,
like
you,
the
large
part
of
the
effort
in
actually
doing
this
is
just
going
to
be
going
through
and
classifying
everything
into.
You
know
where
we've
marked
as
extended
where
we
marked
a
field
as
extended
support,
then
that's
when
we
have
to
make
it
and
we
have
to
give
it
a
name.
We've
marked
it
as
call
support.
B
It
also
has
to
have
a
name
because
the
core,
the
core
support,
like
the
import,
the
point
of
what
I
was
showing
there
is
that
you
don't
want
the
end
user
to
have
to
give
a
about
the
you
know.
What's
in
the
course
support
right,
like
you
want
them
that
you
want
them
to
be.
Like
you
know,
hey
I've
got
core
support
for
layer
7
that
lets
me
do
all
the
stuff.
I
expect
to
do
for
later.
B
We
worry
about
that
right,
but
for
us
to
worry
about
that,
we
have
to
have
a
list
of
the
manifest
of
the
tests
that
you
run
to
do
that
and
those
manifest
that
manifest
of
tests
need
to
be
like
we
test.
The
you
know
we
test
your
hostname
support
that
it
that
it
does.
You
know
we
have
three
tests
against
your
hostname
support
or
something
like
that.
Are
those
three
tests
all
covered
by
that
in
the
hostname
field,
or
do
we
have
like
individual
names
for
each
of
the
tests
or
something
like
that
right?
E
Yeah, I'm curious about how we're going to tackle that, because it seems like we need to reference the fact that we went through all the API type documentation and field documentation, and basically have: okay, we have this test, and it points here. And then the same thing with just the prose documentation; I think we actually have a whole bunch of behavior on the website that technically should be conformance-ized somehow.
B
Yep
yeah
and
this
this
was
not
intending
to
sort
of
make
it
that
we
don't
need
to
do
that
work.
We
absolutely
do
need
to
do
that
work.
B
What
I'm
trying
what
I
was
trying
to
do
with
this
proposal
was
sort
of
say:
here's,
let's
build
the
background
to
why
we
do
that
work
so
that
when
we
do
that
work,
we're
all
thinking
about
it
in
the
same
way,
we're
all
aiming
towards
the
same
thing
and
we
end
up
in
the
same
place
that
way
we
can
break
the
work
down
into
chunks
and
have
different
people
work
on
it,
and
it's
not
one
person
slaving
away.
Building
this
thing
from
scratch.
E
A
E
Classification. So I'm curious what people think, how people think we should be tackling it.
A
When I thought about conformance: I think it's important to have a very clear plan for how we do conformance testing for extended features, but first we need to have some conformance testing at all, and the easiest place to get to is core features. So if we can just do core features, with this idea of profiles, start with L4 and L7, and focus on getting some conformance tests out the door that cover those core features,
A
We're
going
to
be
a
lot
further
ahead
than
if
we
get
too
far
down
into
weeds
of
extended
features
like
we
need.
We
need
to
have
a
plan
for
them
and
it's
good
to
have
this
discussion
and
keep
on
going
with
it.
But
I
think
we
we
have
a
lot
of
clear
path
ahead
for
conformance
testing
for
that
kind
of
core,
and
I
think
that
is
probably
the
most
important
starting
point.
For
me
at
least.
A
Yeah, I think our doc probably covers the theoretical structure more closely; this one is covering more what we're testing, whereas the other doc is how the tests would be structured, and I think we can merge those two concepts together pretty nicely. But until it's a GEP, and I think a GEP is the sensible next step for this, we don't technically have consensus on any of these things.
B
D
B
Yeah, agreed. Okay, so maybe my next action here is to take the top bits of this, lift them out, and put them into a GEP. Something like: we're going to do conformance, we're going to do conformance profiles, and the conformance profiles are going to be syntactic sugar for a set of core conformance tests that cover the features marked core in the objects. And basically, Rob, if you scroll a bit further up...
B
If
we
sort
of
go
up
to
the
you
know,
basically
we
sort
of
put
it
up
to
we.
I
cut
it
off
at
conformance
information
in
gateway
status.
We
don't
put
that
in
but
like
the
the
rest
of
it
is
basically
to
get.
You
know
like
that
that
we
have
two
performance
profiles
that
the
performance
profiles
are
defined
in,
like
terms
of
what,
essentially,
what
routes
you
support
is
really
what
the
performance
profile
is.
B
So
I
can
probably
even
cut
this
down
a
little
bit
to
just
be
like
what
routes
do
you
support
and
but
and
then
in
those
routes,
core
conformance
is
supporting
all
of
the
core
fields
and
then
that
that
then
lets
us
build
the
framework
for
how
we
talk
about
building
out
the
set
of
core
tests
for
the
call
fields,
the
stuff
that
I
said
about
the
status
and
stuff.
Yes,
I
agree.
B
I
agree,
john,
that
it's
nice
to
have,
but
like
it's
not
important
to
start
with
the
most
important
thing
is
to
have
some
tests.
Let
you
validate
the
core
performance,
but
I
did
want
to
mention
some
of
that
in
that
I
do
think
that
having
something
visible
it
just
makes
it
so
much
easier
to
use.
That's
my
feeling,
I
is
not
important
for
now,
though,
so
I'll
write
a
gap.
That's
that's
like
basically
just
the
top
part
of
this
document
that
so
that
we
can
say
this
is
what
we're
going
to
do.
B
This
is
what
a
plan
is
for
for
conformance
testing
in
general
and
for
performance
profiles
and
I'll
put
in
there.
I
don't
think
I've
been
explicit
enough
in
there
that
there's
sort
of
three
parts
of
the
conformance
regime.
You
know
the
the
profiles,
the
profile
definitions,
the
testing
code
to
do
them
and
the
the
process
by
which
you
submit
that
data
somewhere
to
to
have
your
implementation
registered
canonically
as
a
as
a
conformant
implementation.
B
And
so
those
are,
I
think,
the
three
things
that
this
should
cover,
that
that's
the
way
that
we
want
things
to
work
and
that
the
conformance
profile
will
be
core
support
based
on
the
objects
that
you
support,
and
then
there
will
be
two
initially.
I
think
all
of
those
things
were
relatively
not
contentious
is
that.
B
D
A
Awesome. Anything else to say on conformance? I think we're at a good point here. Should we move on to other items on the agenda, or are there any other questions, topics, concerns?
A
Okay,
awesome
yeah.
Thank
you.
Thank
you
for
getting
this
together
nick.
This
is
great
excited
to
have
a
better
vision
for
what
conformance
profiles
will
be
yeah
all
right.
Let
let
me
move
on.
I
have
a
proposal
here.
That
is
a
proposal
to
be
clear.
I
call
I
called
it
a
plan
in
the
dock,
but
that
that
is
very
much
an
ambitious
title,
and
this
is
an
ambitious
timeline,
but,
based
on
our
call
last
week
we
discussed
okay.
How?
How
soon
can
we
get
v,
one
beta
one
out
how?
A
How
soon
can
we
actually
graduate
to
beta
and
at
the
same
time
we
were
discussing
hey?
Let's
talk
about
these
features
that
we,
you
know,
missed
the
cut
for
v,
one
alpha
two,
but
we
said
okay
well,
we'll
do
that
in
the
next
release
after
v,
one
alpha
two,
so
we
have
these
two
kind
of
seemingly
conflicting
ideas
here
right.
We
have
this
idea
of.
A
We
wanna
add
these
cool
new
features
that
we've
been
talking
about
for
a
while
now,
but
we
also
want
to
get
to
beta,
because
you
know
it's
it's
easier
to
launch
product
support,
etc.
When
you
at
least
have
a
beta
api
to
work
with
so
trying
to
combine
those
two
concepts,
I
wanted
to
run
through
how
we
could
potentially
make
this
all
work.
Some
of
the
context-
that's
missing
here.
A
I
mentioned
it
in
slack,
but
I
I
brought
up
the
idea
of
experimental
fields
to
api
machinery
at
the
meeting
last
week
and
in
the
mailing
list.
I
was
expecting
it
to
be
a
highly
controversial
topic.
It
was
not.
It
was
around
a
five
minute
discussion
item
that
the
ba
proposal
was
great,
so
yay,
I
guess,
but
the
idea
is
the
idea
of
releasing
a
separate
experimental
track
of
crds
was
actually
well
received
and
seemed
like
the
way
to
do
it.
So
yay.
E
Yeah. Rob, I know we discussed this after that meeting: is this going to be in some form that is easily digestible by the community in general? Because even for us, discussing this with the CRD maintainers, it wasn't very obvious that this was indeed a thing that was needed — and then, after going through it a lot more, it was "oh yeah, this."
A
I think, for better or worse, a lot of projects are looking at what we're doing as a reference point for how they'll be handling CRDs. So I think we need to do a good job of documenting our plans and how we got to them — the discussions with API Machinery and whatnot. I don't know if there's any great upstream documentation for that, which would be great to fix, but we can fix our side by at least making sure our plans are well documented. Since this is a discussion topic that I kind of started, I don't mind being the one to actually document that. But yeah, that's a good point.
B
I feel like that one's also a GEP, a bit like the conformance profiles one: "this is our stance and plan." It's not an immediately implementable GEP, but it's a GEP to explain what we're doing, why we're doing it, and why we think it's even a problem — hey, we've thought about this and realized this is a problem that you haven't run into yet, but trust us, you will, and this is the solution we're proposing — so that we all have a chance to understand. I've spent some time thinking about this as well, and I've talked to you about it, but I don't know if everyone on this call has spent any time thinking about what happens when you have the same object by every way that Kubernetes knows — the same GVK — but with different schemas available.
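A minimal sketch of the problem B describes — two release channels whose schemas share one GVK — using invented field names rather than the real HTTPRoute schema:

```python
# Illustrative sketch of the "two release channels" idea: the standard and
# experimental CRDs share the same group/version/kind, but the experimental
# schema carries extra fields. The "experimentalTimeout" field is hypothetical.

standard_schema = {
    "gvk": "gateway.networking.k8s.io/v1alpha2/HTTPRoute",
    "fields": {"hostnames", "rules"},
}

experimental_schema = {
    "gvk": "gateway.networking.k8s.io/v1alpha2/HTTPRoute",   # identical GVK
    "fields": {"hostnames", "rules", "experimentalTimeout"},  # extra field
}

# Same object as far as the API server's GVK machinery is concerned...
assert standard_schema["gvk"] == experimental_schema["gvk"]

# ...so tooling needs some other discriminator to tell the tracks apart.
extra = sorted(experimental_schema["fields"] - standard_schema["fields"])
print(extra)
```

This is exactly why the discussion keeps returning to discriminators: nothing in the GVK itself distinguishes the two schemas.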
A
Completely
free
yeah,
let
me
let
me
work
on
a
gap
for
that
one
yeah,
so
that
that
actually
set
us
up.
Well,
I
you
know.
At
the
same
time
I
was
thinking
about
beta
got
some
more
clearance.
You
know,
clarity
on
exactly
how
we
should
handle
experimental
fields,
and
it
sounds
like-
and
this
is
my
take
on
it-
that
it
is
possible
to
add
new
features
in
the
same
release.
A
So that's the high-level summary of how I think we can do this in a relatively short time frame. I can dig through the doc a little more to actually go into the timeline and how this could work, but just to frame everything, that's how I'm approaching this. Does that seem like a reasonable approach to take for our next release? Any hesitations? Okay.
C
One hesitation: these features that would be experimental in beta — would they be part of the GA release, but not as experimental, when we go to GA?
A
Yeah,
so
that
that's
a
good,
that's
a
good
question.
I
think
how
we've
defined
experimental
and
this
this
gap
was
we'll
need
to
get.
But
the
way
the
way
the
existing
doc
defines
experimental
is
these
are
features
that
can
be
removed
and
or
graduated
in
a
following
release.
A
B
I think it's pretty important to call out, though, that these will be experimental fields on the beta object. In the same way that, right now, if you want to add something to the Ingress object in Kubernetes — the Ingress object is a GA object, so the object itself is GA — you can still add a new field in an experimental way using a feature gate on the API server. What we're talking about here is that the experimental track of the CRDs — and it's actually going to be a little funky to figure out exactly how we make this work properly with the API server — will have the extra fields available in the schema of the object, which will have the same GVK as the stable one, but there will need to be additional discriminators added.
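One way to picture the annotation-based discriminator idea mentioned next: an annotation marks which schema track an object was written against, so tooling can tell "experimental" objects apart despite the shared GVK. The annotation key below is hypothetical — at this point in the discussion no concrete mechanism had been settled on:

```python
# Sketch of an annotation discriminator for the shared-GVK problem.
# The annotation key is invented for illustration, not a real Gateway API key.

CHANNEL_ANNOTATION = "gateway.example.io/schema-channel"  # hypothetical key


def channel_of(obj: dict) -> str:
    """Return the release channel an object claims, defaulting to standard."""
    return (
        obj.get("metadata", {})
        .get("annotations", {})
        .get(CHANNEL_ANNOTATION, "standard")
    )


route = {
    "apiVersion": "gateway.networking.k8s.io/v1alpha2",
    "kind": "HTTPRoute",
    "metadata": {"annotations": {CHANNEL_ANNOTATION: "experimental"}},
}

print(channel_of(route))   # an annotated object reports its channel
print(channel_of({}))      # an unannotated object defaults to standard
```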
B
We've
talked
about
annotations
for
now,
but
rob-
and
I
talked
about
maybe
promoting
this
to
a
field
later
or
for
a
cid
that
sort
of
gives
you
a
schema
version
that
lets
you
sort
of
add
extra
fields
that
that
would
then
be
stored
in
the
in
the
underlying
representation
on
the
api
server
and
then
it's
kind
of
we're
kind
of
grafting,
an
extra
version
field
on
top
of
the
existing
version
field
right
like
and
you
with
all
of
the
shenanigans
that
that
that
implies,
because
there's
a
lot
of
shenanigans
involved
in
maintaining
the
the
the
version
field
out
of
the
gbk
for
any
given
kubernetes
version.
B
You
know
like
there's
a
storage
version
and
a
presentation
version
and
the
api
server
can
transparently
compare
convert
between
things
and
it's
okay
to
add
new
fields
as
long
as
they're
nillable
and
the
there's
a
certain
set
of
behaviors
that
are
added
and
upstream
the
upstream
api
machinery
folk
all
know,
but
but
like
this
is
this
is
us
sort
of
trying
to
pull
all
the
stuff
that
the
upstream
opioid
machinery
folk
have
learned
for
the
core
objects
and
make
it
available
to
cid's
in
some
way?
And
that's
why,
like
that's?
B
That's why I 100% agree with what Rob said here — this is a good timeline, and the process you've got here, Rob, makes sense — but I think exactly what you put here is going to be a bit unclear until Rob's written the thing that explains what we mean by an experimental field track, because it's pretty nuanced, and the nuance is actually really important. So what I'm saying is: don't be too scared as an implementer yet. The point of this GEP is to get everybody who needs to implement this clear on what we're going to have to do as implementers to support these different tracks of the object, and to make sure that we all agree it's a good idea to do it that way and that we can't see any problems. This sort of plan that Rob is doing is 100% dependent on everybody understanding and agreeing to something like that, so that we can add new features at an experimental level without messing with the actual GVK of the underlying objects.
A
Yeah, I think so — and this is a good reminder that I need to translate a doc into a GEP. I think every individual feature, similar to Kubernetes upstream, would have its own graduation criteria. So there's no way to guarantee that all three features will go directly to GA, but the general idea is that we see usage of an experimental feature, the feedback is good, and we're comfortable taking it from experimental to beta or GA.
B
But what we're talking about here is that we'll have a v1beta1 with a certain set of features GA to start with, and then at some point we'll be able to migrate, say, path redirects into the GA v1beta1 object without making it v1beta2 and without changing the GVK of the object — a new field will just appear. I guess the thing I'm trying to say is that part of what we're trying to do with this whole discussion is make it so that adding features doesn't require a rev of the GVK.
A
Yeah, that's really helpful — I appreciate it, and it's a good reminder that I need to work on a GEP or two now. Thank you. So, with all that said: we had agreed to it in the KEP itself, and as we moved to the Kubernetes API group and became an official Kubernetes API, we agreed to a few points of beta graduation criteria. I tried to sketch out a rough idea of when we could ideally have these complete by. I'm open to feedback on each of these dates — I was just trying to sketch a timeline of when I thought these things were possible. First, the v1alpha2 APIs are implemented by several implementations.
A
So if three of those get in, I think we've met "several," so mid-November seems likely for this part of the graduation criteria to be true. The next one: a validating webhook for advanced validation. We already have a webhook, and it is close to complete. Harry, I noticed you added an issue that has been in the back of my mind: we don't have good release machinery around this right now, so we don't have anything that's going to publish a version tag with a specific image when we do a release. There's some cleanup needed here — cleanup, testing, etc.
A
It's largely in place; we just need some fit and finish and deploy help, and I'm not sure what else, but this is an area that needs someone to look into it. I feel like it's possible by mid-November as well. Three — this is harder to define, but what we have said in the graduation criteria is that we have at least some conformance tests in place. This is not asking for a complete conformance test suite. Any questions, comments, or thoughts on any of these items? Does anything look wildly unreasonable?
D
On two: we have validation for — well, we have a couple of cases of validation. Is it okay to term that as "advanced"?
D
This is one area where we haven't kept track, but there are a lot of review comments like "oh, we should do this in the validation webhook" or "we should check this in validation."
D
We
say
that,
but
then
we
haven't
always
created
like
a
an
issue
to
track
that
right,
and
so
there
are
a
lot
of
area
like
there
is.
There
are
at
least
some
areas
where
we
have
like
okay,
you
could
you
probably
need
a
validation
webhook
for
a
better
experience
where
you
have
a
field
dependent
on
a
third
field,
and
you
know
that's
not
certain
things
like
that.
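A sketch of the cross-field check D is describing — the kind of rule that is awkward to express in plain OpenAPI schema validation but straightforward in a webhook. The filter shape below is illustrative, not the exact HTTPRoute schema:

```python
# Sketch of a cross-field validation rule of the kind a validating webhook
# handles: one field's validity depends on another field's value.
# The filter shape here is invented for illustration.

def validate_filter(f: dict) -> list:
    """Flag mismatches between the declared filter type and the config present."""
    errors = []
    if f.get("type") == "RequestRedirect" and "requestRedirect" not in f:
        errors.append("type is RequestRedirect but requestRedirect config is missing")
    if "requestRedirect" in f and f.get("type") != "RequestRedirect":
        errors.append("requestRedirect config set but type is not RequestRedirect")
    return errors


good = {"type": "RequestRedirect", "requestRedirect": {"scheme": "https"}}
bad = {"type": "RequestRedirect"}

print(validate_filter(good))   # a consistent filter passes
print(validate_filter(bad))    # a mismatched one is rejected with a reason
```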
B
Yeah, we don't call that out anywhere right now, but we need to make sure it is called out: every time you install a Gateway API implementation, it is expected that you also have the validating webhook for the Gateway API objects installed. And so you've got to have container images published somewhere, and a manifest and all that sort of stuff, that you can just use.
B
In terms of the timelines, I think they are pretty optimistic, especially given that y'all are about to move into Thanksgiving and holiday season, so I would expect bandwidth to be at maximum 60% for this entire period, between people being away and whatever. I think having a date is good; I would be pretty surprised if we met all these dates.
A
Yeah, no, I would agree — these are optimistic timelines. I'm just trying to go forward with a vision that is on the optimistic side, because I would like to get a beta out, but I do recognize these are optimistic dates.
B
Right
right,
exactly
yeah
yeah,
I
mean
yeah,
I
think
more
realistically
like
if
we
were.
If
you
know
that
I
would
kind
of
expect
that
the
sequence
will
be
the
same.
Things
might
take
a
little
longer,
but
you
know
sort
of
like
january
or
early
february,
pretty
surprised
if
we
could,
if
we
couldn't
be
at
number
five.
Okay,.
D
So
yeah
well
another
thing
that
I
think
this
la
the
the
point
I'm
going
to
bring
up
lies
both
in
the
this
document
and
the
last
document
that
we
discussed
is
there
are
things
like
reference
policy,
that's
easier
to
test
for
that's
a
core
resource,
but
there
are
things
like
the
policy
mechanism
that
we
have.
We
don't
have
any
resources,
but
we
do
say
implemented.
D
I
don't
know
how
much
what's
the
right
word,
how
much
conformance
or
how
much
policing
should
we
be
doing
that?
I
don't
know
the
right
way
to
put
it,
but
at
least
reference
policy
is
something
that
we
need
to
be
careful
with.
I
think
that,
then,
because
that's
a
new
concept
right,
like
routes
and
stuff,
we
can
probably
be
more
confident
that
I
think,
but
that's
one
area
where
we
probably
categorically
need
more
feedback,
because
that's
new
and
that's
also
like
security
right,
so
be
more
careful,
yeah
good
point.
B
Great
yeah
yeah,
I
think
the
the
reference
policy
like
I
you
I
mean
I
love
the
idea,
but
like
yeah,
we
that
one
seems
like
a
very
early
candidate
for
conformance
testing.
Yeah,
it's
reasonably
clear
to
define
as
well
like
the
tests
are
actually
pretty
easy
to
define
you
that,
especially
because
they're
relatively
you
know
black
box
kind
of
tests
like
you
when
you
create
things
like
this,
it
shouldn't
work
and
if
it
does
yeah
fail,
like
yeah,
that's
pretty
easy
test
to
make.
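The black-box test shape B describes might look like the sketch below, with an in-memory stand-in for a real cluster. It assumes the general ReferencePolicy semantics — a cross-namespace reference only works when the target namespace has explicitly granted it — but the data shapes are invented:

```python
# Sketch of a black-box ReferencePolicy-style conformance check: create a
# cross-namespace reference with and without a matching grant, and assert
# the expected outcome. The resolver is a stand-in, not a real cluster client.

def reference_allowed(route_ns: str, backend_ns: str, policies: list) -> bool:
    """Cross-namespace references need an explicit grant in the target namespace."""
    if route_ns == backend_ns:
        return True   # same-namespace references are always fine
    return any(
        p["namespace"] == backend_ns and route_ns in p["from_namespaces"]
        for p in policies
    )


# No policy: the cross-namespace reference must NOT work — if it does, fail.
assert not reference_allowed("apps", "infra", policies=[])

# With a grant in the backend's namespace, it should work.
grant = {"namespace": "infra", "from_namespaces": ["apps"]}
assert reference_allowed("apps", "infra", policies=[grant])
print("reference checks behave as expected")
```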
B
I think the key part — the thing that's really important for the initial conformance tests to be in place — is that we have to have picked a testing framework, picked a way that you run the tests, and done all of that work to define the harness and framework bits, which I think is actually a little more work than it might seem.
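A toy sketch of the harness/extension split being discussed: a small runner that upstream conformance tests register into, which a vendor could extend with its own cases. All names are invented:

```python
# Toy sketch of a pluggable conformance harness: upstream tests and vendor
# extensions register into one suite, so implementers run a single runner.
# Test names and the registration shape here are hypothetical.

class ConformanceSuite:
    def __init__(self):
        self.tests = []

    def register(self, name, fn):
        """Add a named test; fn returns True on pass."""
        self.tests.append((name, fn))

    def run(self) -> dict:
        """Run every registered test and report pass/fail by name."""
        return {name: fn() for name, fn in self.tests}


suite = ConformanceSuite()
suite.register("httproute-basic", lambda: True)    # upstream Gateway API test
suite.register("vendor-httpproxy", lambda: True)   # vendor-added extension

results = suite.run()
print(results)
```

The design point being made is that one harness serving both upstream and vendor cases spreads the framework cost across everyone who needs ingress-style testing.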
B
Eventually, you probably want to make the tests for the Gateway API able to test other things as well, like Ingress, so you're only running one test harness. And if you can make it extensible to, say, custom CRDs or something like that, then that covers a lot of people as well — it covers me.
B
That means Contour can say: we're going to run the upstream conformance tests, but then we're going to add this one, using that framework, to do the HTTPProxy ones. Having this suite of conformance tests becomes a de facto standard ingress-controller test as well, because the stuff you're exercising in Gateway is the same stuff you would configure in any other method of configuring ingress traffic to your cluster.
A
Yeah, I completely agree. I know we're past time, so I've got to wrap up — but yes, sorry, go ahead.
C
I was just going to say I think the timeline is fine — it is aggressive, but it's good to have one — and I was going to point out the time, so.