From YouTube: Kubernetes SIG API Machinery 20190912
Description
Bi-weekly SIG meeting; covered agenda topics from here: https://docs.google.com/document/d/1x9RNaaysyO0gXHIr1y50QFbiL1x8OWnk2v3XnrdkT5Y/edit
B: So, let's get started. I want to kick us off with a short item. If you've been here for the last few months, like four or five months or so, you may have noticed Fede here doing a lot of stuff, whether I'm here or not, like running this meeting. I'd like to propose that we split the chair and TL role for this SIG. Right now David Eads and I are each co-chair and co-TL, so I'm proposing that we take my half of that, make Fede the chair, and I'll continue as TL.
B: I think it's pretty traditional to give some notice when you do something like this, so I'll send a follow-up email on the list and we'll make it official next week or so if nobody objects. Just to briefly recap what the difference is: the chair role is more about operations and the processes governing the SIG, like running this meeting and... I don't know.
B: The next piece of good news: server-side apply went beta in 1.16. This is a limited beta; it's opt-in for each object. There's a bunch of stuff that we will, in the fullness of time, track for every operation, like updates, for every object, and that is not on globally yet, because we have to fix some performance issues first. But we have a path to fixing those performance issues that we're confident in. Basically, as soon as you use server-side apply on an object, it will start tracking it.
B: Our plan for the future is that the next release will have a beta 2. We're going to fix our serialization issues and hope to turn it on universally, maybe with an exception for some really high-bandwidth types like Pods and Endpoints (the leader leases), just so we are extra sure that we don't have any performance impact. Then, when we go GA, we expect to turn it on everywhere, after we demonstrate that there's not a significant performance impact. Yeah, and the Konnectivity service, that's with a K, went alpha.
B: This is an egress system that replaces the SSH tunnels, which were kind of done in a rush and not great. But as evidence for how hard the problem is, it's taken several quarters to get the replacement in. Walter did a great job pushing those last few yards through. We got sign-off from SIG Network folks and from me and David Eads, and yes, there's Walter off-camera, so that's great, and we hope it will go beta as planned.
B: So basically this solves the problem where you have an asymmetric networking situation: it's easier to make connections that go into the control plane network than it is for a control plane component to reach out to your cluster nodes, which is a pretty common situation depending on how you're operating, and certainly common for a number of cloud providers. So yeah, that one took multi-company coordination. Next: CRD defaulting went beta. I assume, Stefan, you put that on there?
B: I have registered, just in case. I haven't completely decided, but I think I registered for the contributor summit. Also, I think what we're looking for is meet-and-greets; I'm not sure. I felt like a bunch of different people were pinging me about this, so perhaps that's what's motivating it.
C: So there are some, I'm going to say, significant road bumps as you try to make certain pieces, say aggregated APIs, work. We've hit them and found workarounds downstream in OpenShift, and I want to try to start bringing those back up into kube. One of them is about delegated authentication inside of the server; I've got a bug open that describes some of the issues there. It spans both auth and API machinery, so that's all started now.
C: Watching that API just doesn't work; it's really embarrassing. Yeah, we should find the team responsible and tell them to make something that makes it work. I also had a general question about whether there was appetite for moving the kube-apiserver to staging; it's logically a distinct unit. We definitely want to do things like keep our internal APIs to ourselves.
C: ...moving them to staging, and once you go to staging it becomes easier to find violations. We have tools today that are supposed to prevent bad imports and bad boundaries, but if we actually moved to staging we'd have direct enforcement. At least for the last things I moved there, it was as simple as a go list: do you have this dependency? Okay, you fail.
B: No, I think there's no official position just yet, so I think that's fine if there's not an issue. The thing that interests me is: I wonder how feasible it would be to move the API server code there, while leaving the registry stuff for the built-in APIs under the control of the proper place.
C: It may be worthwhile. You know, I'm looking at it and seeing what it looks like. The direction from a code-organization standpoint is: separate our bits, make their dependencies clear; essentially move all the pieces to staging as a first-cut separation, make the dependencies amongst them extremely explicit, and figure out where to snip from there. That's something I agree with, but it would require doing something like making packages suddenly internal, which I could see being abused.
C: Okay, the next thing on my list is staging-related. We had a long talk about the idea of a generic client-go: pieces of client-go that do not depend on the k8s.io API. I still think this is really valuable for us. I like the generic client, I like the generic informers, I like the controller structure that we have that doesn't have any dependency today on k8s.io. Yeah, I think this...
B: Yeah, because the idea is: if you're providing an API type and you want to make a place for somebody to grab generated clients or whatever, it doesn't gel well with the existing library that we publish, because we package all the generic stuff that you have to depend on along with all the generated stuff that you may or may not need. Yeah.
C: So if this idea has some traction, in terms of review time, I will spend the time to go ahead and write a KEP to describe what it is. I do think that if we could find a way to make this not depend on k8s.io/apimachinery, that would be even better; I'd be even happier if I can find a way to have a truly separate library that could actually be versioned and work with semver, like the go mod stuff, yeah.
B: I'm having some trouble hearing that, sorry... yeah, I think that's all right. Yeah, I was saying this would be super useful for kubectl, and this would move us into an area that would make it a lot easier to maintain some of our dependency-related stuff. Stefan and I have talked about this a little bit before. So when you have something, or if you have something for review, definitely please do loop us in.
C: Because being able to snip that matters: until you can snip that, you can't use semver, and being able to use semver would be a big deal. So I'll try to explain what our options are. If it is completely distinct, I don't think it would be developed in staging; I think it would be developed truly separately.
D: I don't want to, but I feel like we should discuss this issue where a network disruption ends up leaving dead TCP connections in the Go client keep-alive cache, where they sit and poison requests, letting higher-level HTTP requests just time out and fail and then get returned to the pool. It has shown up in a lot of different places in a lot of different ways: kube-proxy hangs, kubelets hang and fail their heartbeat, admission...
D: ...webhooks hang; any client that uses client-go can experience this. There was a proposal for a deliberately low-level, TCP-level fix that was really ugly at the time: it used socket file-descriptor bit-twiddling, OS-level stuff that was really scary and horrible. It actually might be possible to do now: as of Go 1.12, Go started exposing the descriptor directly to us without some of the weird copy and allocation problems.
B: Yeah, no, I think I saw that; I think it was something else with his environment. Okay, should we be filing an issue in upstream Go? Like, it got filed; it's known and being worked on.
D: There are... I can dig up the issues, and there are recommendations and tools that they provided that do some of these things. I think Tim already switched some of our build scripts to use them, but not the code-generation ones. Okay, anyway, I linked to the kind-of umbrella issue; if you find other things with Go 1.13, let's gather them there, and then as people want to pick up bits of that, they can sign up from there.
B: All right, I have another, actually, code-generation-type thing. I think it would be very interesting to take our existing... right now we have the typed clients and the dynamic client, and they have different behaviors when they don't understand a field: the typed clients will drop those, and the dynamic client will preserve them. I think it would be very interesting to generate, not the typed client, but an interface that wraps a dynamic client.
G: So the feature of this proposal is OpenAPI v3 support, so having another endpoint; it just needs a KEP. Obviously we are prototyping; that's more or less what you want, it's pretty obvious. We have this go-openapi dependency, and go-openapi has not been well maintained for a long time, but it's really critical for our API semantics, for validation and the other things which we added. We're thinking about replacing the validation part at least, so probably forking it, simplifying it, and correcting it, because it's really wrong in some places, and then we...
G: The second one is the client side. That one is not go-openapi, it's our own, but it's incomplete and has issues that we know about and work around in the server. I would rather not hit those issues in controllers, and we want to replace it with the shared validation code which we use on the server side. So it's a lot of fixing and cleanup; not much of it will be visible, but it's pretty important for the future of validation, yeah.
B: The structural schema is very similar to the schema type that we invented for server-side apply. I think there's more opportunity for deduplication on the input half of the stack than there is on the output, validation, half of the stack, if that makes sense. Like, we have kubebuilder, and for built-ins we have this very strange path where we basically compile an API server to produce the generated spec, which we then check in. I think we can probably simplify that path some.
B: ...do that in there; I wasn't sure that's what you were talking about. Yeah, it's kind of an example. Basically what I'm thinking of is: we know how a generic API resource from an API server is supposed to behave, even if we don't necessarily understand very much about the content of the object. Like, we know how metadata is supposed to behave.
B: We know how the optimistic locking stuff is supposed to behave, deletions, the garbage collector finalizers, all that. We know a bunch of generic stuff about objects, but we don't have any generic tests. So I think it would be good if we had a generic test suite where you tell it: okay, here's my endpoint, here's my object. You give it one or two examples of valid objects, because we don't necessarily know what a valid example of your resource would be.
B: You give it one or two examples of valid objects, and then it completely generically goes out and tests: does your POST endpoint work? Does your PUT endpoint work? Does your PATCH endpoint work? Does your WATCH endpoint work? That's what the linked issue is about, and yeah, I think there's a lot of value we could add there.
B: No, it would be good for a conformance-type test. With a given API object there are kind of two sort-of-orthogonal things you want to test about it. One is: does it uphold its read/write contract if you write a reasonable client against it? The other is: does it perform the action that's labeled on the tin, does it do what it's supposed to do? That's the semantic behavior, and you need to know a lot more detail about the object to effectively test that, right?
B: No, we can read the discovery doc, or the OpenAPI spec or whatever, to figure out what verbs it is supposed to support. And if you support a verb, then you should probably do it in the right way or get an exception. I do think we have some exceptions, like SubjectAccessReview or whatever, where you want to knowingly not...
B: ...support a verb, which is pretty common for aggregated API servers, and it shouldn't be a draconian process to get an exception. Maybe it's just as simple as including a marker or something in your OpenAPI spec that says: I have a non-standard blah-blah thing, don't try to run the conformance suite on me. So yeah, we need to make provisions for things that are not going to conform, but those need to be deliberate and not accidental.
J: One of the things in the conformance definition from SIG Architecture that everybody needs to conform to is that those definitions need to be published in the kubernetes repo, in your swagger.json. There's a difference between what's available on a given API server, depending on release and feature flags, versus what's in the published spec, what gets installed, and what must be on all clusters for conformance.
B: Okay, let's keep going; we've only got 20 minutes left. Feature gates: is it time for feature gates yet? I think the world has changed a lot since we approved a design for basically excluding fields from our specification if they were disabled by a feature gate, especially in that we've now got CRDs.
B: Yeah, if I'm recalling the design wrong, which is entirely possible, correct me, but I think the design is that you would annotate a field in the object and basically label it with the feature gate that enables or disables it, and then all of our tooling would suppress that field in the decoding steps and the spec-generation steps, I think.
B: I think that might be where I'm at; we'll see. The next thing is... I don't know if it's really related, but I think it is an important feature that people are going to want pretty soon, well, that people definitely want today, but it's really hard: making CRDs permit you to reference other objects in the schema. So, like, you're creating some sort of controller and it's got a template, like a pod template or something, and I want...
B: ...I think the answer is that you want the one that is currently installed in the cluster, and when the cluster's idea of that type changes, so does the one in your object. That means you'd need to use the storage migrator to read and write everything to update it, right, and you'd need to use the defaulting semantics of the cluster it's in, yeah.
D: The other issue with supporting schema references: ObjectMeta, at least, is fundamental to all objects. So even if you had an API server that was only serving, say, namespaces and CRDs, you would still have ObjectMeta. But what most people want to reference are things like pods and deployments, and...
C: ...what I can understand is: I'm doing a manipulation of this, and when I do that I'm actually fairly sensitive to the level of the API that I have. Not catastrophically so, but you know, when init containers come along, that changes things; when ephemeral containers arrive, that changes what these things can do.
B: I think to actually make this work, just like we have controllers that control certain aspects of CRDs, you'd have to have a resolution controller, like a reference-resolution controller, that goes out, resolves the reference, and puts in a hash or whatever of the current version of that reference.
B: ...that's stored in this object, so that you can detect if your reference changes, which then implies things about all the data stored. Like you just said about schlepping data from one place to another: you don't care, but you kind of do care if you stored that data in the custom resources and then the thing that they're actually about changes; all that data is now wrong, yeah.
C: So the experience that we've had downstream is that when validation changes, we end up in trouble, right? That's probably the biggest one: validation changes, and now, are these things valid or not? How are we validating? Does our validation have to match exactly what kube's validation is? Is there a way for us to do that? It's possible, but at least in our experience it hasn't been as easy as just saying: yep, it's this one, yeah.
B: Okay, I want to talk about this more, but I think we need to keep moving. The last thing that I added here is: it may be time for some latching validation tightening, and thanks for mentioning validation. Right now it's basically impossible to change validation. If you make it tighter, then people can't upgrade, because we break currently-valid objects and you can't update them anymore.
B: If we make it looser, it seems like that should be safe, but then actually you can no longer roll back, right? If you upgrade, make a new object that takes advantage of the looser rules, and then discover a problem and roll back, now you've got a problem with that object. So really any validation changes need to be...
B: ...thought through carefully. So there's an issue out there, which I didn't bother to link because I added this a few minutes ago, where Jordan and I have kind of worked out a system for doing some validation tightening, and I think it may be time to work on this, especially for server-side (thank you, Jordan) server-side apply.
B: Systematic, yeah... I think, yeah, it's not systematic, and also there's a subtle problem with that: you may have a system where you don't currently have such objects; the objects might be transient. So it's not good enough to just permit existing objects to retain the old validation semantics; you need to monitor the cluster for new objects that rely on the old validation semantics, because you could have, like, a batch system that creates a bunch of workloads, right...
B: Basically, what I think we ended up at is: leave it on the old semantics for, like, a month or a couple of months, and just count how many objects show up that rely on that, that pass the old validation but not the new one. If you don't see any in a month, or a release, or something like that, then latch to the new one, where you start warning, you start giving errors, but the administrator can still turn it back off.
C: It is narrowly focused on gathering information about what storage versions actual servers are using, and then externally reviewing that information to make a decision for the case of storage migration. It is not generically trying to solve exposing what the kube-apiserver is doing, yeah, and it is not trying to inform how a kube-apiserver stores its data; that is hard-coded, and it remains hard-coded after this, yeah.
B: Yeah, I think that's more or less what this will be. On the email thread it's like: everybody has this problem; just make a Kubernetes API object and use our built-in coordination mechanisms to, you know, do it the hard way, basically. Okay, I have one more item, about the external-name KEP, yeah.
J: On this, since we're short on time: SIG Architecture has a subgroup called Conformance, and it's there to ensure that all the cloud providers can run the CNCF conformance suite and get the CNCF conformance badge. What we're trying to do is ensure that we're moving towards 100% coverage, figure out how we measure that, and ensure that as we introduce new features into the API surface area, they're included in our tests.
J: Eventually, the desire is to move towards something where we not only have that policy in place, which we do, but have a way to enforce it. We're exploring using a PR-blocking job or some type of feedback mechanism where we can compare each PR, and the results of those tests, against that coverage. Now, how we define coverage, how we measure coverage, and how we tie those together are evolving, but the primary mechanism right now is endpoints, or API operations, as they come from the swagger JSON. We're having some...
J: ...trouble: it takes some time, like 30 minutes on a 56-core machine, to match together the two sides, the blue and the orange. We're trying to find a way that doesn't impact everybody and that we can run on the CI jobs. There's been some mention in the conformance group of maybe adding an alpha feature gate, which we're not terribly interested in moving forward with, because we just want to have that information available to process. And then another one is: in order for us to know what we don't need to test...
J: We were testing some endpoints that were not used by anyone; that's why they weren't tested, and we found they needed to be deprecated. We also found some fields that were alpha, and we learned that there is a description, but there's not a field for us to declare, to literally say: don't worry about that field, it's alpha, or it's behind a gate, or it's deprecated.
J: So we added that, and I thought it might be useful if it was programmatically available, so that we don't even attempt to worry about those fields for coverage. This is kind of an ongoing discussion. The ask is that it would be useful to have some help in prioritizing and making sure of that.
C: You've got something to say, David? I do. I think that this SIG views its role as providing the machinery for people to write APIs, but we're not responsible for the content, or the test coverage, of those particular APIs, yeah. Like, I'm not sure what you would really want us to be doing here, yeah.
B: So let me speak to the second of those two bullet points, mapping an audit event to its operation. You opened an issue about this, and I described a method there by which you can do this. I don't understand why it is hard; it should not be taking thirty minutes on a 56-core machine.