From YouTube: 20190910 sig arch conformance
B
We all discussed the behavior KEP, which is my next item. I have very little update on it right now, but I hope to dramatically increase that, you know, on the order of five- to ten-fold. We'll see if that's possible. That's it for this item; I'll take any questions people had.
B
Go ahead, Quinton.
D
I didn't say that; the content of my question was just that sometimes it's useful to split those into two problems: the clear-the-backlog problem, and the prevent-new-stuff-getting-in-without-conformance-tests problem. So it sounds like we've done that, and yeah, okay, cool. Thank you. Yeah.
B
There's tooling we can use, if we're willing, to start to enforce some of these things, where, you know, when you're going to actually promote a test you'll get a bunch of things you have to do, like adding entries into the conformance.txt and other things as well. So anyway, we're trying to tighten up the process. I guess that's kind of a different... well.
A
Part of this it kind of does cover. spiffxp and I have talked about it, pretty much that we'd like to eventually get a PR-blocking job in place. The biggest problem we have right now is that we didn't have enforcement: we have quote-unquote policy, but a policy without enforcement is basically, you know, like a banana republic law. It's only a law as long as somebody's there to enforce it.
B
While I'm talking I'll give you an update on this, which is basically, um, no major progress. I guess I consider myself the blocker on that; I need to get time to nail it down and develop it to the next level of being sort of implementable, and I'm hoping that before we meet next I will have something that we've already reviewed. I have Jeffrey here; he's new to the team, and he will be helping work on actually developing this as we move it forward.
B
My plan right now is to kind of sit down and start to break out, you know, what tooling we need and that sort of thing, just trying to make a more concrete, implementable plan out of it rather than just a set of ideas. So I plan to do that; if you have time and want to do that too, that's what I'll do, and otherwise, you know, review and critique whatever I produce.
D
Yeah, I wouldn't want to do it without you; you've given all the thought to it up to now, I think. But you know, I could, for example, come to Mountain View and, like, sit down with you, and we could try and bash something out in the morning or something, if that's helpful; I'm not too far away. But if that's a possibility... I'm in Sunnyvale.
D
Sorry, I was... sorry, finding the button. Yeah, I think what we decided was that a sort of general ability to label e2es, to be able to, you know, label them as having various properties and then filter on those, was useful, and that would solve this validation and a bunch of other... yeah.
B
And we talked about, I guess, what I remember the action being... I mean, we talked about the idea of possibly tagging existing e2es, or should-be conformance tests, with some sort of feature tag. Then we could let the providers go and run those, so that we could collect that data using something like APISnoop and actually hit...
B
...the idea being that we start to get potentially some data around whether various features are implemented by all the cloud providers, and therefore can easily make cloud provider profiles. So it was sort of the conversation arc: from validation e2es, to feature tags, to gathering data on those feature tags. But I haven't written anything up on that yet; got any comments? Anything even here would be useful.
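The tagging-and-filtering idea described above can be sketched mechanically. This is a hypothetical illustration only: the bracketed tag syntax mirrors the labels used in e2e test names, but `parse_tags` and `filter_by_tag` are invented helpers, not APISnoop or Ginkgo functions.

```python
import re

def parse_tags(test_name):
    """Extract bracketed tags such as [Conformance] or [Feature:Watch] from an e2e test name."""
    return re.findall(r"\[([^\]]+)\]", test_name)

def filter_by_tag(test_names, tag):
    """Return only the tests carrying the given tag, the way a provider run could
    select feature-tagged tests."""
    return [t for t in test_names if tag in parse_tags(t)]
```

In the real suite the equivalent selection is done by the test runner's focus/skip regexes; the point here is only that a consistent tag vocabulary makes that filtering trivial.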
D
Yeah, I think I did... I wrote that in a comment to the original PR; I think I wrote a sort of counterproposal which I think covers a lot of that general tagging and stuff. So we could use that, and we could literally probably cut and paste that comment as a starting point for this tagging proposal. I think that's useful. And the counterproposal, be aware, that thing is a bit further down; we'll get to it in a minute.
D
Yes, I just wanted to find out. There was a bit of confusion last time: I think there was a thought that Aaron had done some of this work, and then Aaron came on the call later and said he had not, and it still needed to be done. And I noticed that it's sort of been open for more than six months now and bounced around a bit. It seems to me like a kind of cornerstone for a lot of this stuff.
D
If we can't tell people how to write conformance tests, then a lot of this stuff is difficult. So I was just wondering if people agree with that: is it important? I think it's very important, but there may not be agreement on that. And if it is, should we, like, get it done in the next two weeks or whatever?
A
I think it's generally a good thing, as folks have tried to promote or transmogrify current tests or change tests. I think having that detail there would be good, just in general, for new test authors as well. We have requirements for what it means to be promoted, but we don't necessarily have what it means to be a good test, which some of those requirements feed into.
D
The thinking behind that was, you know, I keep hearing that people are not writing, you know, tests in general and conformance tests, etc., and yet it's not that easy. If you're a, you know, relatively inexperienced would-be contributor, there's no kind of place to go to and say: this is what I do, and these will be considered good. There's only a bunch of things that I shouldn't do, and then ad hoc complaints about these terrible tests.
A
I think part of that is overdue by, like, years at this point. But another part of this, too, is that we created this monolith that we call the testing framework, right? And much like 2001: A Space Odyssey, I sit there angrily throwing sticks at it, without having the resources to actually tackle the monolith. Because, in part, writing good tests means that you have good structure that supports your capability to write the good tests, right? And we have a bunch of details and knowledge and stuff in the framework, but saying it's good... you would... I don't know.
A
I can't stress that enough, really. So I think that there's some good practices we should probably distill down, but there's also some practical things that need to get done too. So there's a little bit of both that we're going to need to try to push the ball on, to make the ability, or the capability, of a novice person to be able to come in and understand how to write good tests; reduce that barrier to entry, right? Do we...
D
I think you're right. I mean, a lot of this stuff is somewhat automatable, and we can certainly write the libraries to do some of the heavy lifting. Do we have, like, an actual, at least the beginnings of, a task list? Can we actually start dishing these things out? If somebody could, like, give me an issue, I could go and write some code, but I just don't know what those missing things are. So...
C
I was just assigned this two months ago, specifically, and this is not necessarily around any framework updates or anything else; it's just: how do we currently write good tests? We're in the middle of capturing all of the various exceptions and requirements and things, and our team is actively writing better and better tests, and so I think we're finally at the point where we can consolidate some of that. And I'd love to see... I'm glad to see everybody's having some of the similar issues, and to highlight what we're working on.
C
I have not linked to that ticket yet; that's my bad. When we get to slide four there'll be a link to all of the other PRs and issues that have been related to helping us understand what it is to write a good test, but that information hasn't been consolidated into any action on the referenced ticket yet, so that's a fair knock.
C
The team here in New Zealand, and also one of us in the western United States, have been working on increasing our numbers and measuring our numbers. We were measuring numbers for a while, trying to find the right way to do it, and we've been actively working on increasing the numbers since June. This is the way we've been measuring it in the past, and this is only stable endpoints.
C
So, the percentage of stable endpoints: the ones that have been hit by tests at all. The all-important number for our group is what percentage of possible endpoints have we tested, and that's over there in the green. It's been pretty slow up until June, because we had a lot to learn, and I think since June the number, and the ramping rate, has gone up and to the right.
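The all-important number described here is simple arithmetic over endpoint counts. A minimal sketch (the example counts below are made up, not the real stable-endpoint figures from the chart):

```python
def coverage_percent(tested, total):
    """Percentage of stable endpoints hit by at least one test, rounded to one decimal."""
    if total == 0:
        return 0.0
    return round(100.0 * tested / total, 1)
```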
C
That's what we've been looking at since October. In June the big plan was: we are going to increase by 16%, it's going to be so easy. And the strategy was: we've got this tool, APISnoop, and obviously there are these ten tests that already hit 22 new endpoints, so if we just go identify them and ask the group, can we just stick the conformance tag on there? That would be great.
C
It would help us all be a happy, more conformant group. That's not exactly what happened, but we did learn a lot in the last nine weeks or so. The numbers are slightly higher now: we've probably written eighteen or twenty... eighteen or nineteen tests; we've rewritten six, and we've written, I think, three new tests from scratch. Now, the result was the numbers did increase a good percentage, and faster than they have in the past, but not to my satisfaction, and probably not to some other people's satisfaction either.
C
However, the biggest output of this, I think, was the updated documentation and conformance test writing guidelines that, as Quinton pointed out, are not precisely located in the right place; we'll get on that. The next slide has the actual links. This isn't all of them; these were the three that popped out higher, and the PR on the conformance docs for whether API access should not be allowed: it's an exception, it's part of writing...
C
...a good test now, at least a conformance test: you can't talk to the kubelet. And then, it takes too long: once we identify a test that needs to be written, writing it and letting it sit for two weeks is enough; needing to wait a whole cycle or more is too slow. We've identified and merged that test names need to be literal strings.
C
You can't start having a variable that changes, because it's hard for us to use our existing tooling to identify the tests, and those have all been updated.
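The literal-string requirement exists because the tooling finds test names statically in the source. A hypothetical sketch of why a variable defeats that (the `ConformanceIt` call is modeled on the e2e framework, but this matcher is illustrative, not the real conformance tooling):

```python
import re

# Matches only string-literal test names, e.g. framework.ConformanceIt("creates a pod", ...).
# A name built from a variable, ConformanceIt(testName, ...), simply does not match,
# which is exactly why the guidelines require literal strings.
LITERAL_NAME = re.compile(r'ConformanceIt\(\s*"([^"]+)"')

def extract_test_names(go_source):
    """Statically list the conformance test names declared in a Go source string."""
    return LITERAL_NAME.findall(go_source)
```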
So these were three of several that updated our conformance docs. We also clarified edge cases, like: what is it that we want to test? We need to separate behaviors from tests. Thank you, John. This is definitely a thing that's going to help us all, when we can have the definition of behaviors, and someone writing tests, and all those differences being separate.
C
We've also uncovered endpoints that have never been used. I'll open this one up just for a minute, because it's kind of funny. How do I click here? Click here... This is an endpoint that we identified, and said: the pod template endpoint doesn't have anything hitting it, and it's in stable core. So we wrote a test for it, and if you look down here you'll see that John said it should be deprecated, but Brian said this was created way back in issue 170 and never implemented.
C
We should document that it should not be added to conformance, and we should document that it should be disabled on most clusters. That was really cool, and part of the output. Another piece was nuances in fields, so I'll open these up real quick. This is how I was able to go through and, we hope, accurately identify alpha, beta and GA fields.
C
It's based on these eight different SQL expressions. It would be really cool if we could concretely say that those were the precise alpha, beta or GA fields we've opened up. The same thing for feature-gated: this is how we identify feature gates; and for required, if it has the word required in it; and for deprecated. Those are all searching the description field. If we look at the output of those discoveries, we've made a PR to say: please add these new properties into the OpenAPI field.
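The eight SQL expressions themselves aren't shown in the recording; as an illustration of the same idea, here is a hedged sketch that classifies a field by searching its OpenAPI description text. The marker words and precedence order are assumptions, not the actual queries.

```python
def classify_field(description):
    """Guess a field's maturity from its OpenAPI description text.

    Mirrors the description-matching approach described above; the real
    classification is done with SQL expressions over the spec database.
    """
    text = description.lower()
    if "deprecated" in text:
        return "deprecated"
    if "alpha" in text:
        return "alpha"
    if "beta" in text:
        return "beta"
    if "required" in text:
        return "required"
    return "ga"  # default bucket when no maturity marker appears
```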
C
At some point in the future we need to make sure they're the right ones. We could automate it, but we're in the process, and Pim is aware of that. Anybody interested, please chime in, because right now it's me guessing based on the research. I think it's pretty accurate, but it needs to be agreed upon by us as a community.
C
This has been really fun, and the significant collaboration between us and our conformance group is really going to help us directly define the behaviors of what our endpoints are, and that's going to lead to faster test writing and increase our conformance numbers, which I'm excited to do. So, for the test writing for the next cycle, we're moving on to two projects: one is APISnoop, measuring our coverage, and one is increasing our coverage.
C
To increase our coverage, we have decided to focus, from APISnoop, on the pod spec umbrella issue, which currently has four fields. I'm having some trouble identifying fields that are of other object types or arrays, and we're nearly done with that, probably early next week. I'm not going to open the umbrella issue... actually, if you'll take a look, maybe I will. This has links to tickets we're currently working on. If you disagree with the priority, or you have an opinion, please chime in. This is where we're at.
C
However, this is field coverage we're intentionally focusing on, and we need to agree as a community that we're no longer going to be focusing on endpoints and operations. So that number that has been increasing, up and to the right: we're intentionally not focusing on it, and we're going to be creating tests based on behaviors and fields. We need to have a defined number of behaviors and a defined number of fields, so we can rise to the occasion. And the other thing is obviously continuing to update that documentation.
B
So, exactly. Part of the issue is that we've been measuring this based on endpoints, which is not really... those are, like, gigantic buckets, right? So, yeah: creating a pod, right? I mean, there are so many fields that go into that, and making all the different values; and some of those fields are alpha, and some are, like, optional, like things that are Linux-only. And not only that, but we don't...
B
It's just that the accuracy is probably not that great. We do know that we're getting better when, like, we see here, from this point of view, you know, of the endpoints; but we don't know whether one of those endpoints might hide, you know, another thousand tests we need, I think.
C
Another thing that, I don't know if I've noted it in here in the slides at all, is that when you are on a running cluster, the source of truth for the OpenAPI swagger JSON comes from the API server itself, depending on the feature gates that you've enabled, and depending on... it may just be the feature gates, I'm not aware, but it actually changes what is available via the API. I may have misspoken when I said that endpoints don't get disabled, but there's not a way for some...
C
We search descriptions of fields to decide if a field is feature-gated; I don't have the information to tell whether an endpoint is deprecated or feature-gated. However, if we were actually to collect the swagger JSON at the time the cluster is running, and compare it to the git commit of the swagger JSON that the server is running at, we could tell pretty much what the difference was. That would help, yeah.
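Comparing the served swagger JSON against the committed one, as described, amounts to a set difference over the `paths` keys. A minimal sketch (the spec fragments in the test are invented; a real comparison would also look at per-path operations):

```python
def diff_paths(committed_spec, live_spec):
    """Compare the committed swagger.json paths with those served by a running
    API server; paths missing from the live spec were disabled, e.g. by feature gates."""
    committed = set(committed_spec.get("paths", {}))
    live = set(live_spec.get("paths", {}))
    return {
        "disabled": sorted(committed - live),  # in git, not served
        "extra": sorted(live - committed),     # served, not in git
    }
```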
C
...touch and fill a certain number of them; that just means whether the container was touched. Now we're going to talk about: well, that container actually only fits these particular types of things to store in it, and that's the meatier part. And that's still not behavior, by the way. To me, behavior is the sequence of how you mix the stuff in the containers into a beautiful recipe that is meaningful.
C
So, any other feedback on the test writing and the documentation and what we're doing next? I guess... can I propose a vote that we, I mean, whatever we need to do for this, but that we stop focusing on endpoint coverage? Because it's really expensive, time-wise and CPU-wise and a bunch of other things, and we don't need to do that now. I'm just going to recommend that we... I...
A
So, just for clarity of that previous statement I made about enforcement: I do not think it would be bad, at least at this point in time, to be able to track any regression on a per-PR basis. So if we have PR-blocking jobs that prevented people from adding new endpoints without having test coverage, I don't think that's a bad thing. Okay.
C
Right now, the time it takes to map an entire, let's say, 400 meg of audit logs, to match each of those audit entries to a particular endpoint, is really CPU-intensive and can take 30 minutes or more on a 56-core machine. So there are some optimizations and things that I'll go through in the slides that would help with that. Currently it's a little bit difficult to do that per PR.
B
Yeah, this has been discussed here. The operations, as they're defined in the schema, are not real; they're sort of heuristically created by the code that generates the schema, and so mapping them backwards is quite difficult. So folks, I think this is something to think about: if there's a way to do it that doesn't require that calculation, we may be able to do that.
C
To move forward to do all this measurement and PR blocking and stuff, we needed to take the tool from being just a static website generated from data, to being able to answer new questions when they come up. Because every time we dig into something, it's: well, what about this? And what about that? And we have to go figure out how to retool the processing. We basically needed to get a database.
C
So we can look at the API surface from different perspectives and find multiple ways to interact with it. Right now we're using Postgres and a GraphQL-compliant database, because when I went down the document-store database route, they didn't seem to do the type of tooling that we wanted out of it. I also wanted to be able to use existing tooling for creating UIs, and for graphs, and for meaningful representations of that data; we needed it to be decoupled from the UI.
C
So far we've been investigating pod spec field coverage. I think that... yeah, it's currently down, because I've been looking at some other stuff and updating it; we can come back to it. But the pod spec field coverage went through and showed us, in that ticket that we prioritized, which fields we were going to be writing on. We also went through and uncovered all fields related to pods... that's also a link; no, that's not a link.
C
All the subfields related to the pod spec, I think, numbered around 300 or so. It's all the descending things, and it gets complex, and being able to choose which of those fields are either required, or GA, or feature-gated, was part of being able to select and narrow that down. I showed you some links earlier where we were able to turn this tribal knowledge of... when we said we're going to focus on this new endpoint, and I...
C
This was the four, I think we now have five or six, fields that we're focusing on for the completely untested pod spec fields. We did write that one test for a pod template and realized, as Brian said, that it was never implemented, so it should be deprecated, which we'll take in; it'll obviously take a year to deprecate, but we need to document that people should disable it. Most of our audit logs are somewhere between 250 and 400 thousand audit lines, audit entries, and within those there are... and I...
C
It was funny that when I first wrote this it was sixty-six thousand six hundred, and last I checked there were 71 thousand-some-odd field combinations. Those entries don't have information linking them to API operations; each of them has to be looked up using a regular expression and the HTTP method, to say: here is the HTTP verb and this random URL that we need to match. And it's really intense. Is this the ticket? That's not a ticket.
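The expensive step described here, matching each audit entry's verb and request URI against every operation's URL pattern, might look like this sketch. The two-entry operation table is a stand-in; the real table is derived from the OpenAPI spec and is far larger, which is what makes the brute-force scan so CPU-heavy.

```python
import re

# Illustrative operation table: (HTTP method, URL regex, operation ID).
OPERATIONS = [
    ("get", re.compile(r"^/api/v1/namespaces/[^/]+/pods/[^/]+$"),
     "readCoreV1NamespacedPod"),
    ("post", re.compile(r"^/api/v1/namespaces/[^/]+/pods$"),
     "createCoreV1NamespacedPod"),
]

def match_operation(method, url):
    """Map one audit entry's verb and request URI to an operation ID by trying
    every pattern in turn, the per-entry cost described above."""
    for op_method, pattern, op_id in OPERATIONS:
        if method == op_method and pattern.match(url):
            return op_id
    return None
```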
C
So one option that we have is to require that the audit logs include another annotation when they're logging, one that includes the operation ID. At that point we can have a PR job that does this and reports back in, like, 60 seconds; it's really fast. You just can't do that if we don't have the ability to do this mapping.
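With the proposed operation-ID annotation in place, the 30-minute regex scan collapses to a dictionary lookup per entry. A sketch, with the annotation key invented for illustration (no such key is defined in the source):

```python
from collections import Counter

def count_hits(audit_events, annotation_key="apiserver.example/operation-id"):
    """Tally endpoint hits directly from an (assumed) operation-ID annotation,
    replacing the brute-force regex matching with one lookup per audit entry."""
    hits = Counter()
    for event in audit_events:
        op_id = event.get("annotations", {}).get(annotation_key)
        if op_id:
            hits[op_id] += 1
    return hits
```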
C
Beyond the operation ID, the body of a PATCH request has a full Kubernetes object, and it needs to have each of those JSON fields compared to the correct OpenAPI parameter. When we do that, we actually could have not only the value and sub-value of all of the possible, no, 77,000 field combinations that were used; for the fields, you could say: well, there's a particular value that we're not testing. It's always set to true, and it's a binary.
C
We're trying to get to the point where we can say: is it set or not? Okay. Then the next step is: what is it set to? And then the next step would be: is that value interesting to us, because it's the default or not? Okay. This is the example of me being the PR-blocking job: there was a PR promoting admission webhook, and what I noted was...
C
I went through and used APISnoop to identify: here are the particular endpoints you are trying to promote; here is where you have existing tests that could be conformance, please promote them; here is one remaining test, that's gray, that's not written at all. These are all of the endpoints; when you do this, you're good to merge according to our policies. This is what we are talking about when we want to have a PR-blocking job.
C
Obviously we're going to have that stuff for pod spec, and we're going to have that stuff for container. We're also going to look at deployments and replica sets as a higher priority. After that there's parameters, because watch is a parameter; we've deprecated all endpoints that start with watch. So, to understand how we could automatically write some tests: iterate over all objects at their current revision, then put a watch on all of them, and then update any mutable values. All of a sudden we've got a cheap way to hit...
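The generic watch test described, list everything, watch everything, mutate, can be sketched with an in-memory stand-in for the API. Nothing here is real client-go; `FakeStore` is invented purely to show the shape of the loop.

```python
class FakeStore:
    """In-memory stand-in for a resource collection that notifies watchers on update."""
    def __init__(self, objects):
        self.objects = dict(objects)  # name -> resource version
        self.watchers = []

    def watch(self, callback):
        self.watchers.append(callback)

    def update(self, name):
        self.objects[name] += 1  # bump the resource version
        for cb in self.watchers:
            cb({"type": "MODIFIED", "name": name, "rv": self.objects[name]})

def watch_all_and_mutate(store):
    """The generic test loop described above: list objects at their current
    revision, put a watch on all of them, mutate each, and collect the events."""
    seen = []
    store.watch(seen.append)
    for name in list(store.objects):
        store.update(name)
    return seen
```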
C
...an endpoint related to coverage, which will make the number on that first chart jump considerably. I don't know what the number is going to be, but it's going to be good. We don't do any of the responses, low priority, but because we're doing the data set, those are both the object types and the... And we are not, I'm thinking, going to be updating our new site to use the back end, because we're talking about deprecating the data, but it sounds like we're going to keep that on.
C
We talked about the processing; I'd like to save those 30 minutes. We'd have the feature gate to record the operation ID: I don't know that we'll get that, because that's on other people to agree, or to find a way where we have that feature gate to record that ID, so we can have these metrics under audit logs. I think it's going to take more than four weeks to do that one, and I think it's also going to take more than four weeks to get APISnoop running in the cluster. That's not true...
C
What we want to do in four-plus weeks is to have it running in-cluster at the hook and using the work... actually, I was just... is it over here? Knative has an API coverage webhook that I'm exploring, okay, and it would be fun to see how we can integrate that work, but that's four-plus weeks to get into it, all of this within this month, yeah.
B
To me, I guess, I'll keep talking: I think it's useful, the things you're not covering, around changes we need to make to conformance criteria, and to test metadata and things like that, to make this all work better. So I don't really want to do it... if you can distill the things you've learned into the best practices for writing conformance tests document that we talked about earlier, I think that would be... yep.
C
Thank you all for your support. I specifically want to reach out to spiffxp and Timothy and John and [inaudible] and all the people who have been part of the conversations and actively triaging PRs and closing them for us. It's super fun and exciting to get a chance to work with you all.
E
A few questions to discuss. So, basically, what I have seen: basically, our first priority task is to improve coverage on the core spec, and we have tried to promote existing e2e in those areas, but still there are a lot of endpoints where we have to improve the coverage. First of all, if we try to see on the conformance side, as you were saying, a few of the issues we had traced... like, how we can start writing new e2e with new scenarios.
E
...which would be ultimately improving the actual field coverage, which you were pointing at. So definitely one part is completed, like, promoting existing e2e, but definitely we do think about writing new e2e as well, so we can actually boost up this second part: first of all having the good scenarios, and then start writing those e2e. So how we can... what...
C
Increasing those fields... so we've identified the umbrella ticket with specific fields that don't have coverage, because our primary goal right now is to increase field coverage for pod spec, for example, and we're focusing on writing a test for that field. That should bring the scope down to: does it test something important? It's not a full behavior yet, because we don't have that to run off of yet, and I don't think we should spend a whole lot of time trying to figure out the whole behavior before we keep improving the coverage.
E
On improving the field coverage for the resource... so, if we get some kind of subject matter experts to point out their views, like whether these kinds of scenarios are effective or not, that would be really good. So if any of the members can start writing those e2e, at least it would ensure, like, the behaviors are good enough, and we can start writing those e2e from a conformance perspective.