From YouTube: 20200714 SIG Arch Conformance
Description
GMT20200714 210359 SIG Arch C 1920x956
A: All right, good day, everybody. This is the Kubernetes SIG Architecture conformance subproject meeting for July 14, 2020, and I'm your host, John Belamaric, and we are ready to get started. Please remember that this is a community in which we value and respect each other, so please follow our code of conduct, and let's get started.

A: So, do we have Michelle on the call? She has added an item here for us, but I don't see her on the call.

C: I added it, yes. Yes, you are number one. All right, awesome. Yeah, so basically we had a bug come in, and it turns out that the behavior this conformance test was testing is actually not behavior that we want. So what we ended up doing was fixing the bug and removing the conformance test for it, and we did this in 1.19.
C: The main question I have now is: is it possible to actually backport this change? Because it is a bug fix, and we do have users that are hitting it, so it would be nice to be able to backport it to older releases. But the main challenge is that it also modifies a conformance test, removing a test case from it, and that might cause problems if we backported it.

C: It would make this change in a patch release, and if people are testing the older releases against a version of the conformance suite that doesn't have this new change, then their conformance results will show them failing that test.

A: Yeah, so basically one thing I was thinking about...
B: If I can jump in, just to help me understand: in one sense this is lowering the bar, because we've removed a test, but in another sense it's raising the bar in a different area, because previous clusters won't be able to not do the thing, and you're now verifying that they should be able to not do the thing. Is that right?

C: I'm not sure about the conformance process, like which version of the conformance suite they run, but if they run a version of the conformance suite that still has the old test and doesn't have the new change where we removed it, against a cluster that has this patch, then they're going to fail the conformance suite.
B: I believe the intent is for the conformance suite to come from the latest patch release, with the contents of the release branch, so they should always be running the latest patch release of the conformance suite. We'll double-check the CNCF documentation, but it is my impression that we release an updated version of the conformance runner that Sonobuoy uses, which is used by the majority of people to certify, so they should be using the latest patch release whenever one comes out.

C: Okay, then that should be okay, at least from a test-passing point of view. If an older cluster that didn't have the fix was testing against it, we removed the test case, so that test case just won't run anymore; and newer clusters that have the fix will also run against the newer suite, which doesn't have the test anymore.
A: Yeah, the only glitch, and I don't think it's significant, is that when we document conformance tests in their metadata, we specify the version either where the test was introduced or, if there's a change in behavior, the version that changed the behavior. And there are two things: one, we typically only put minor versions in there, although I don't see a reason we couldn't put patch versions; and two, if we remove the test, we actually don't have any tombstone metadata in there, so there's nowhere to put that information, to say, oh, this test was only valid through this version.
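For context, that version metadata lives in a structured comment above each conformance test in the Kubernetes e2e code. An excerpt-style sketch of the convention (not a complete test file; the test name and description here are hypothetical):

```go
/*
   Release: v1.19
   Testname: Example, illustrative behavior
   Description: Hypothetical entry showing where the release that introduced
   (or last changed) a conformance test is recorded; there is no field for
   the release in which a test was removed.
*/
framework.ConformanceIt("should illustrate the conformance metadata convention", func() {
	// test body elided
})
```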
A: But realistically, like Aaron said, if somebody is submitting a new conformance result for a particular version, they should be using the latest version. The other area I was trying to think about is that these results are supposed to be reproducible, so if somebody, say, creates a cluster on an older version and then runs, or rather creates a brand new cluster...

C: I think we might want to let the bug fix soak a little bit, just to make sure that it doesn't cause any more problems, and we might have to make another fix or something in the future that might change the behavior again.

C: So I think once we can actually confirm this is indeed the behavior we want, and it works in all of the cases that we are thinking about, then we can think about reintroducing the test case with the modified behavior.
C: Yeah, I think at this point, though, we didn't want to cause too much churn by adding something to conformance that we weren't quite sure about, that we actually wanted to be conformant behavior.

A: Next item: so, Aaron, did this PR actually merge? It was in the retest queue for a while. But Aaron should now be an approver for the conformance.yaml test data, so one more congratulations, and a bit more increased bandwidth there.

A: And I believe this next item is Hippie's, but I know he said his voice is bothering him, so I can cover it if nobody else is ready to talk about it; I'm somewhat familiar with it. I'm not sure what conclusion you want out of bringing it up in the meeting, but I can summarize it and then be corrected.
A: So, as I understand it, basically right now, in order to get the data for APISnoop, this dynamic auditing configuration is added, and that's been pulled. Dynamic auditing was alpha; it was decided it should not move forward, for a variety of reasons, and it was removed, and so now there needs to be an alternative method.
A: The alternative method is a static configuration for auditing, as opposed to dynamic, but that means it has to be done during cluster bring-up, in the flags, and so the question is whether that's an appropriate approach for getting this data out of the conformance run.
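A static audit configuration is normally a policy file handed to the API server at startup, for example via the --audit-policy-file and --audit-log-path flags. A minimal sketch of such a policy, built here with the audit API types rather than a hand-written YAML file; logging everything at RequestResponse level is an assumption about what coverage tooling would need:

```go
package main

import (
	"fmt"

	auditv1 "k8s.io/apiserver/pkg/apis/audit/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Equivalent to the YAML policy normally passed with --audit-policy-file:
	// record request and response bodies for every call, which is roughly the
	// level of detail endpoint-coverage tooling needs.
	policy := auditv1.Policy{
		Rules: []auditv1.PolicyRule{
			{Level: auditv1.LevelRequestResponse},
		},
	}
	policy.APIVersion = "audit.k8s.io/v1"
	policy.Kind = "Policy"

	out, err := yaml.Marshal(policy)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```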
G: So there are two parts to where APISnoop is getting data. One is using existing runs: we just pore through them and get the user agents and tests and such. The other one comes when you have a test writer who is writing a new test and wants to see which endpoints that test is hitting, so they're running functions against a cluster, and APISnoop is seeing the endpoints being hit live and the user agents hitting them live. The way that we do that is with this dynamic audit sink, and so anytime there's an issue or a pull request opened that says "we have seen that if you merged this test, it would hit these five endpoints," like the ones that Caleb or Stephen open, the way they're getting that number is through this program that is using the dynamic audit sink. So it wouldn't affect APISnoop the site, but it does affect APISnoop as a way of generating information for PRs for new tests.
B: Yeah, so this reduces the number of Kubernetes offerings that APISnoop could run against to validate the data you are collecting against that particular offering. You are now instead limited to offerings where you have the ability to specify the command-line arguments for the API server, and so that means hosted offerings such as GKE or EKS or AKS are not offerings that APISnoop can sufficiently audit API coverage of.

B: From my perspective, I don't feel like that is the conformance definition subproject's problem, but inasmuch as I think it's been useful to make sure that when you're running a test it works against multiple different Kubernetes offerings, such as kind and kube-up and kops or whatever, those options all do allow you to specify the API server command at startup. So I still feel like I trust you, if it's a question of "do I trust your development process."

B: Given the limitations of APISnoop, given that APISnoop can't do a dynamic audit config, I think I trust the coverage data that we're getting from you. It could mean that longer term, if you want to make APISnoop strategically useful, if it turns out we decide API coverage is a useful thing for measuring different profiles, maybe one of the profiles is something that can only run on hosted offerings, or offerings that don't allow static audit configuration.
A: I'm not sure that's the issue, Aaron. I don't know who zzz is, but maybe you can clarify: is it that you're worried about not being able to run against these different hosted offerings, or is it that, in order to do a CI check in GitHub, you're using the specific CI runs that happen today and you'd like to configure one of them to contain this static auditing configuration?

G: Yeah, and I can introduce myself with more than initials: I'm Zach, I'm also part of ii, and I do a lot of stuff with APISnoop. But yeah, it is a particular flow that is related to how ii writes tests and how we've advocated writing tests. The flow, as best as I can describe it, is: a test writer will create their own cluster, and APISnoop attaches to that cluster and starts listening to all events being passed to it.
G: It listens to everything that's happening, and so you can then write a test or write a function, give that function a user agent, and use Go or a kube client or whatever to run that function against the cluster. At that point APISnoop, because it's been listening with that dynamic audit sink, will see these are the user agents that are currently hitting these endpoints, including this custom user agent that has just been sent by the test writer. So it's completely self-hosted.
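As a sketch of what tagging calls with a custom user agent looks like from the test writer's side, assuming client-go and a local kubeconfig (the agent string and the listed resource are just examples):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig and set a distinctive user
	// agent, so audit events produced by these calls can be attributed to
	// this particular experiment by whatever is watching the audit stream.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	config.UserAgent = "live-test-writing/example-list-pods"

	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Every request made with this client now carries the custom user agent.
	if _, err := client.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{}); err != nil {
		panic(err)
	}
}
```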
B: I know we got to the land of looking at audit logs because we did maybe try looking at admission webhooks and couldn't quite get all the data we wanted, and we talked about intercepting network traffic and that maybe didn't quite work either. Those may be alternatives you want to explore if you find you need them down the line, but this is what we have.

A: Okay, awesome. Next item: I'm not sure who put this on, watch tooling. So I guess, Aaron, you gave some advice around improving the watch tooling, or the tests and the way they're using the watch tooling, and there's a video. I'm hosting, but I didn't put this item on here, so I don't know exactly what the ask is here from the group, or if it's just a note.

D: I think it's to confirm whether that's where we're going. Okay, any disagreement, please voice it now.

B: That sounds good to me. So, to recap, my concern was that it felt like it was introducing way too much complexity. If this group is trying to demonstrate how to write clear, concise, simple, readable tests, I felt like having a bunch of tooling...
B: Yeah, so I haven't had the time to thoroughly re-evaluate every single use case this is applied against, but there are some really simplistic use cases where we don't need to address Clayton's concerns, and I did try to do a bit of a postmortem on how we got to now. I feel like the lesson, as always for this group, is that high-latency discussions can introduce a lot of whiplash and uncertainty, and so I think each thing maybe seemed reasonable.

B: And there's one test where I feel like Clayton's concerns may be valid. For me, the data is king, so if the data says it's not flaky, then it's not flaky, but I was trying to have an extra eye toward code quality, since I felt like the ii team was trying to set the pattern going forward for how to write clear, concise, simple, and effective tests, and it was that part that I wanted to course-correct.

B: Flaky. Next.
G: Yeah, and I think it actually might be related to Michelle's point earlier. Between two data runs there were some endpoints that were covered previously that were not covered, and it might be from...

A: From the one that's being removed? No, it's unlikely that was it. That would be one endpoint, and there's one test, and it's probably an endpoint that's covered with a particular configuration or a particular set of operations against that endpoint. I mean, I can't say that for sure, but I suspect that's not it, and certainly not from that one being removed. So we do need to investigate that further and get a list of what those endpoints are and where they used to be hit. Because, you're saying they're new, they don't have conformance tests?

A: No, this is waiting. These tests are written; they're waiting for two weeks of non-flakes. They have like 10 days right now.
B
Just
a
point
of
meta
feedback:
it
now
seems
obvious
to
me
now
that
you're
raising
this
the
conformance
progress
site
that
you
have
on
seeing
on
apis
snoop.cncf.io
is
sort
of
naturally
going
to
beg
these
kinds
of
questions.
You've
got
a
chart
that
shows
these
really
nice
colors.
That
says:
here's
what's
new
tested,
here's
what's
new
untested,
so
on
and
so
forth,
and
it
immediately
will
lead
to
the
question.
Oh,
so
what
are
the
new
untested
endpoints?
B
A
H
A
A
Data
for
non-flakiness,
so
that
was
seven
days
ago,
so
actually
it's
probably
probably
can
be
done
now,
although
there
was
a
change
anyway,
I
can
look
at
this
offline.
D
A
Anyway,
I
think
this
is
probably
ready.
I'm
ready
to
merge.
H: Now, did you...?

G: Oh yeah, sorry, I can stick to this: two new updates. The site now has that conformance progress page, which will be improved and added to and such. All of the data that we're getting now is being generated with this snoopdb link.

G: It's similar to what was shared before, of just creating some JSON or YAML of the current coverage and grabbing that for the site. The other intention is to make it really simple to spin up the database, which has a number of preset views and such in it, one being: what are the new endpoints per release that are still untested?

G: So you can get a nice list of the ones to focus on and such. One thing I wanted to mention about the site right now, which is being fixed, is that the conformance progress page is showing higher numbers for the number of endpoints hit than if you were to look at any release's individual sunburst.
G: The reason for that is that for the newest data we have, there was a suggestion to look at a different test run bucket that included disruptive tests and serialized tests, which the bucket we were using previously did not include. That meant there were a number of tests, and a number of endpoints that were conformant, that we weren't seeing in the previous sunburst.

G: In that progress page we used the latest test data, plus the OpenAPI specs, plus the conformance.yaml, where each test has its release date. With those three different files we are able to see historically when an endpoint was introduced, whether it's being hit by a conformance test, and when that conformance test was introduced.

G: So we can then historically go back and say: what did it look like in 1.18, what did it look like in 1.17, etc.?

G: But it means that the sunburst currently is missing out on those additional tests. I'm looking into whether there are different historical buckets we could use to see the data for those, or it might just be doing some manual work to update the JSON that those sunbursts are being pulled from. But I wanted to explain that discrepancy.
B: Makes sense. GCS buckets age out after 90 days; I may have some older stuff that I might be able to help you with. If you poke me offline, we can figure out another way to get you the data that you need.

B: It looks like I'm the next thing on the agenda. I just wanted to recap: I presented the conformance profiles proposal here two weeks ago. I felt like I didn't get a ton of feedback, but I've shopped it around a little bit. I also presented it at the testing commons meeting, to get some feedback from SIG Testing and from people who are on CI signal, who are test authors, and who are interested in testing patterns generally.

B: Yeah, and so I'm fine with doing the boring thing that makes this way more painful for all humans. What I want for final consensus or sign-off is to get buy-in from the SIG Architecture leads at the meeting on Thursday, so I've sent out the design proposal to the groups today and would like to chat about it on Thursday. Does that sound reasonable?
B: Did you go ahead and put it in the agenda? If you haven't already... I will put it in the agenda. All right, too many meetings. Yeah, and I think that I haven't actually followed up on this.

B: It sounded like one of the things that came up in discussion between some folks here last time was that the ii crew is helping the CNCF create a tool or a site or something to automatically verify that the set of conformance tests that are run equals the expected set of conformance tests, and you all are looking at how to do this retroactively or historically, and I feel like it would be good for us to align our efforts there.
A: Okay, I think that sounds good. We'll talk about that on Thursday, then, with the bigger group. I would like to be involved in that follow-up as well; Hippie, if you're taking notes there. So I had planned, but not completed, to present some rough sketches of what profiles might look like and get feedback from this group. I didn't get a chance to do that, but I will plan for that for the next meeting. What I'm trying to think about are things like: should there be a Windows profile to apply Windows-specific tests to Windows nodes?

A: I think there should be a profile that separates out privileged workloads from ordinary workloads in some way, and cluster administrative operations versus workload-oriented operations, but exactly how all those things are going to fall into buckets isn't clear to me yet. One thing I thought I'd like some feedback on, that I was thinking about...
A: ...is that we actually have, broadly speaking, two different types of conformance tests. One is API-level, control-plane-level conformance, from the point of view of an external operator trying to make the cluster do something or run a workload or whatever it may be, and the other one is the runtime environment itself. So if you look at some of our conformance tests, some of them apply to "I can use pods" and whatever, like I can...

A: I guess you need the control plane to tell the pod it can do it, but there's the data plane aspect of it that we test, and we test the control plane aspect of it, and so I'm wondering if it's useful to think about the different tests in those categories. So when I run a test that checks that my workload can look up DNS records for services using specific DNS names, or I run a test that checks that two pods can talk to each other over the network...
A: We've got a workload, definitely, and that means the control plane functions have to be there, because the way we define our workloads is in terms of these manifests on top of the control plane. But the runtime environment, like in Windows, and this is kind of why it came up for me, in Windows the runtime environment is different.

A: Anything to do with the downward API, for example: the downward API mounts a bunch of stuff in a directory, right? So that's the runtime environment of the workload that you'd be checking there, as opposed to being able to launch something or create a service.
B: This is something I guess I've typically thought of, or shunted off into, the domain of node e2e, or adherence or conformance to pluggable interfaces like CRI and CNI and CSI, things like that. But to your point about two pods being able to talk to each other, that is something I would want to verify across nodes, which makes me think it's not necessarily something I can specify...

A: Okay, well, if anybody has any thoughts on that, reach out to me. As I go through these e2e tests, I think I'm going to at least try to categorize them that way in my head, whether we make use of it or not in profiles. Something's bugging me, like there's something there that we need to make a distinction about, but I'm not sure what.
A: I mean, for a workload to successfully run on a given cluster, you need the control plane APIs that it expects to be there, and you need the runtime environment to work in the way that it expects, right? You need both of these things. And in general we haven't had to differentiate, because we're all Linux, right, and the runtime environments are just expected to be identical everywhere. And maybe we don't care, maybe we don't need to distinguish these, but it's just something to put in people's heads here.
A: ...whether a given workload is going to run successfully in a given environment, because you know you've tested the data plane, but, well, anyway, I don't want to go back there. Okay, that's all I want to say about that: put it in people's heads, and if it bugs you like it bugs me, then in two weeks...
D: It would be good to go through the particular promotions that need an approval or a milestone release, to make sure that those move forward. And there was an overall suggestion from Aaron that the proxy options would be a good place to go next, and I just wanted to get some initial direction, even though we've already reached out to SIG Network and API Machinery, but not much...

B: Okay, yeah. I just remember, though, there are a variety of proxy... if you're looking at this purely from an endpoint coverage perspective, there are a variety of endpoints that all relate to proxying, whether that be to a node or a service or a pod, and there are many different operations for attaching, or proxying to a specific path, or proxying with a specific verb.

A: You can see my screen now, by the way? Yes, okay, with the agenda. Okay, so you're saying core has a lot of proxy things, a lot of endpoints missing there, all those proxy endpoints. I see. I mean, we do expect, like...
B: It is probably worthwhile to verify that all the HTTP verbs we expect to go in one end and pop out the other end actually do so, unmodified or modified as expected.

B: Presumably there's an e2e test for all of this proxy code. All right, there's an e2e test for proxy, but it only really verifies that the GET verb works, and it only verifies that it works along certain ports. It doesn't do any verification with regard to subpaths that I am aware of, and it doesn't verify that other verbs, like POST, HEAD, CONNECT, stuff like that, go through as expected. And maybe this is just an artifact of over-focusing on API endpoint coverage.
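As a rough sketch of the kind of check being discussed, assuming a client-go clientset and a pod already serving on a known port (the pod name, port, and path are placeholders, and a real test would also compare what the backing container actually received):

```go
package proxycheck

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// proxyVerbs sends a GET and a POST through the pods/proxy subresource and
// returns the raw responses, so a test could assert that both verbs reach
// the backing container unmodified.
func proxyVerbs(ctx context.Context, client kubernetes.Interface, ns, pod, port, path string) ([]byte, []byte, error) {
	rest := client.CoreV1().RESTClient()

	get, err := rest.Get().
		Namespace(ns).
		Resource("pods").
		Name(pod + ":" + port).
		SubResource("proxy").
		Suffix(path).
		DoRaw(ctx)
	if err != nil {
		return nil, nil, err
	}

	post, err := rest.Post().
		Namespace(ns).
		Resource("pods").
		Name(pod + ":" + port).
		SubResource("proxy").
		Suffix(path).
		Body([]byte("hello")).
		DoRaw(ctx)
	return get, post, err
}
```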
B: Maybe some of these don't necessarily make logical sense, and we should exclude them from the list of possible API coverage. But I think if I were a really paranoid or uncertain user, I'd really want to make sure that all the verbs that I put into Kubernetes, or ingress or whatever, actually ended up at my application.

B: Sure, I didn't realize I hadn't dropped it in before; I can do that.

B: I think my quick glance at it was: oh, this looks like it's just hitting a bunch of endpoints and verifying that the endpoints get hit. Is there any actual resulting behavior that we should expect? Are there any API machinery subtleties that we would want to exercise as we look at hitting these groups? I guess I had all those thoughts in my head and didn't actually write them down in a comment on this PR, and I'm sorry for that.
D: We can ask them, Aaron; we're also trying to figure out how to increase engagement on that. In this particular PR, if I remember right, we're iterating through the preferred available versions of the APIs that are returned and ensuring that those preferred available APIs are actually available, right? So it's more than just...
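A minimal sketch of that kind of check using the client-go discovery client (the exact assertions in the PR may differ):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a discovery client from the local kubeconfig, walk the preferred
	// group/versions the server advertises, and confirm each one can actually
	// be queried for its resources.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	lists, err := dc.ServerPreferredResources()
	if err != nil {
		panic(err)
	}
	for _, l := range lists {
		if _, err := dc.ServerResourcesForGroupVersion(l.GroupVersion); err != nil {
			fmt.Printf("preferred %s not available: %v\n", l.GroupVersion, err)
		}
	}
}
```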
A: Yeah, I know, you've got to pick somebody; if everybody's responsible, nobody's responsible. Daniel, or, I mean, Clayton's the obvious choice, but he's busy. So, any of these.

I: That test covers the one endpoint that was missed, one legacy one, so we reached out to them to tell them we were willing to pick it up for them, and Stephen did write this test based on what Lingard's team had done already.

J: This doesn't look like there are any... it's only for networking, isn't it?

A: All right, cool. We're out of time and out of things on the list. All right, thank you, everybody, and we will see you next time.