From YouTube: Kubernetes SIG Network Bi-Weekly Meeting 20221208
B
Everybody can see this, yes? Looks good. All right. So thanks again to all the folks who went through issues yesterday and earlier this week. We don't have too many, but not zero. So, Dan, you filed this one 34 minutes ago; do you want to talk to it?
D
This just came out of a question from the OVN-Kubernetes people. They're asking: is a deny-all policy expected to deny cluster ingress traffic, since we don't specify a lot about cluster ingress traffic? And I felt like yes: even though we don't specify what you can allow in terms of cluster ingress, we do specify what you can deny, and that's all of it.
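As an aside, the reading given here, that a deny-all policy denies cluster ingress traffic too, follows from the NetworkPolicy semantics: an empty podSelector selects every pod in the namespace, and an empty ingress rule list allows nothing. A minimal sketch in Python, with the manifest as a plain dict; the helper function is illustrative, not part of any real API:

```python
# Sketch: the standard "deny all ingress" NetworkPolicy as a plain dict.
# An empty podSelector matches every pod in the namespace, and an empty
# `ingress` list allows nothing, including traffic arriving from outside
# the cluster, per the reading in the discussion above.
deny_all_ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress"},
    "spec": {
        "podSelector": {},         # selects all pods in the namespace
        "policyTypes": ["Ingress"],
        "ingress": [],             # no allow rules: deny everything
    },
}

def allows_any_ingress(policy: dict) -> bool:
    """True if the policy has at least one ingress allow rule.

    Hypothetical helper for illustration only.
    """
    spec = policy.get("spec", {})
    if "Ingress" not in spec.get("policyTypes", []):
        return True  # policy does not restrict ingress at all
    return len(spec.get("ingress", [])) > 0

print(allows_any_ingress(deny_all_ingress))  # False
```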
B
All right: SIG Network test flakes if minSyncPeriod is too large. Antonio, is Antonio here?
B
Okay, no Antonio. I didn't read this whole issue because I thought we would talk about it today.
D
Basically, there was a test: some of the e2e test runs run with minSyncPeriod equal to 10 seconds, meaning once you create a service it may take up to 10 seconds for the iptables rules to be there, right? Some of the tests were assuming that changes they made would be applied in less than 10 seconds. Okay, there was some debate over the correct timeout to be using, and then Antonio changed his mind about something at the end, but I...
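The flake pattern described here is generic: asserting on dataplane state with a timeout shorter than the proxy's minSyncPeriod. A sketch of the idea in Python; the wait helper and the rules_programmed name are illustrative, not the actual e2e framework code:

```python
import time

def wait_for(condition, timeout_s: float, interval_s: float = 0.5) -> bool:
    """Poll `condition` until it returns True or `timeout_s` elapses.

    With kube-proxy running at minSyncPeriod=10s, a rule created "now"
    may not appear for up to 10 seconds, so any poll timeout below that
    (plus slack) makes the test flaky by construction.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return False

MIN_SYNC_PERIOD_S = 10
# Flaky: timeout shorter than minSyncPeriod (rules_programmed is hypothetical).
# ok = wait_for(rules_programmed, timeout_s=5)
# Safe: timeout comfortably above minSyncPeriod.
# ok = wait_for(rules_programmed, timeout_s=MIN_SYNC_PERIOD_S * 3)
```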
B
All right, moving on: reserve a static port range for NodePorts. So this is a feature request that is basically: do what we did for service IPs, for node ports; carve off the bottom section for static use and only auto-allocate from it when needed.
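For reference, the service ClusterIP feature being cited (ServiceIPStaticSubrange) sizes its statically-reserved band as min(max(16, size/16), 128). A sketch, assuming the same heuristic were applied to the default NodePort range of 30000-32767; at this point this was only a feature request, so the node-port formula is an assumption:

```python
def static_band_size(range_size: int) -> int:
    """Size of the band reserved for static allocation.

    Mirrors the heuristic used for Service ClusterIPs in the
    ServiceIPStaticSubrange work: min(max(16, size/16), 128).
    """
    return min(max(16, range_size // 16), 128)

# Default NodePort range: 30000-32767, i.e. 2768 ports.
lo, hi = 30000, 32767
size = hi - lo + 1
band = static_band_size(size)
print(f"static: {lo}-{lo + band - 1}, dynamic: {lo + band}-{hi}")
```

With the default range this would reserve 30000-30127 for static assignment and auto-allocate only from 30128-32767.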
B
Is it that we have some... oh, Antonio is here. Can you hear?
F
Okay, so this one: this was a problem with a CI test that got added and didn't have the SIG label, because people were filtering by the label. So they got this test, and I saw it in kOps, and one person is going to fix it once we open master again.
B
I have not looked at these at all. Anybody from the Windows kube-proxy side here?
B
Can't... I can't assign it to you, then, but can you jump on these? Well, actually, I'll just mark it triage-accepted. But are you anywhere on here?
B
Excellent. All right, we're now into issues from October, which is good. So if we need to stop, Casey, keep us honest. UDP traffic loss observed during start of service pod when externalTrafficPolicy is Local. There was some chatter on this; I didn't absorb all of it, and... oh, Lars updated it this morning.
H
Yeah, I missed the part where they said that he has the same ephemeral port all the time, but there is obviously an improvement in conntrack cleanup. So I would like him to retest with the latest Kubernetes; that's basically about it. Okay, it was fun to do the graphs.
B
Okay, all right, well, we'll leave this then for revisiting next time. The same one... did I just hit the same one twice? Looks like it. Oh, no: same graph, different issue. Okay.
B
Okay, I pinged this one for updates; we'll see what happens. Leave it till next time. Flaky tests on ppc64, wow.
F
The issue: they have several tests that are flaking in their environment, but there is no access to the environment, so I don't know. I mean, I'm trying to follow up with them, but...
D
I mean, certainly we build it. Yeah, IBM supports it, so I know that we do build OpenShift on that architecture somewhere. Okay.
F
No, I was gonna say that we are planning to deprecate these architectures in kind, at least, because building them doesn't pay off relative to how few people use them. So I would say that maybe we would like to know that IBM uses that, because in our case, at least in kind, we spend a lot of time and a lot of CPU building those images, and almost no one uses them. So I'm not sure if this is the same case for us.
B
We'll leave this open for now; we can revisit it in two weeks, or whenever we have our next meeting, and see if there are any updates on it. This is the last one, so let's squeeze it in before Casey kicks us off. Oh, is this... this is an old one. This is one where I pinged Dan a month ago. Yep.
A
Over to you, Casey, and I'll bring it right over to Antonio for performance tests and...
F
Oh, I almost forgot. It seems that in the old days of Kubernetes, which only a few people remember, external IPs were used by default, and most people now prefer to use internal ones, because external IPs have costs and may not really be needed. So I think what we are going to suggest is to prefer internal IPs first when testing.
J
Yeah, so I'm one of the tech leads in SIG Storage. I just wanted to stop by and say hello; I'm doing sort of a world tour of the SIGs.
J
But basically, there is this end-user Kubernetes community called Data on Kubernetes, and a lot of the members there are basically database vendors, and they trade best practices on how to run stateful workloads on Kubernetes. They also build operators and things like that: they've written a lot of operators and best practices for operators.
J
I am planning on starting a regular round-table session between that end-user community and Kubernetes maintainers, and hopefully we can use this to facilitate feedback directly from the end users on what kinds of things they'd like to see in Kubernetes to help them run their stateful workloads more smoothly. So if anyone here is interested in joining that forum, I started a doc to collect a bunch of names, and we're gonna try to schedule the first meeting sometime around January.
B
I think it's a really interesting topic for us, because we've had some issues even recently about the intersection of stateful databases and load balancers and Ingress. So I do think that's a sort of interesting topic, if folks want to jump in.
K
I wanted to bring up, just kind of talk about, an issue we discussed on the mailing list a couple of weeks ago; there's an open issue for donating a repository. I'm going to do a quick recap of what it is for those of you who may have not seen it. Blixt is what it's called; it originally started as an experimental layer 4 load balancer using eBPF for the data plane at Kong, where I work.
K
We now have several people from the Gateway API community, including us maintainers, and also Andrew Stoycos, all contributing to it and building it up for the purpose of it being the conformance-testing implementation that we use for Gateway API, since Gateway API has historically not had that for things like PRs.
K
So, last we left off we had, I think, three concerns about taking it on, which I think we've covered. The first concern was support surface: we absolutely don't want it to be any kind of production support surface for this, and I think we've covered that with the combination of the writing on the wall plus the fact that we are in the thread.
K
We talked about being committed to having some kind of technical mechanism to actually stop people from using it in production, as opposed to saying "don't use it in production" and then later they do. It literally stops them today: that is, it only works with kind. And in the future we're open to things like having a shutdown timer. So I think that one's resolved. And then the naming: I think nobody's worried about that.
K
But if they are, we can adjust the name. And then the fact that it includes Rust was the third one, and we didn't get a whole lot of feedback on that one, but I think Nick Young made a comment to the effect that it's really low-key, and that if we're going to start somewhere, potentially we should just not worry too much. But that's kind of what I wanted to bring up for discussion: were there any lingering concerns about it?
L
Yeah, I'll just add... I think I've commented on the issue and the thread as well, and I'm supportive of this. We've been really trying to push L4 through for Gateway API, and a project like this is going to be very helpful to, at least, you know, run some proofs of concept and ensure the APIs we're building actually make sense and are usable.
L
We don't have the same set of, you know... with the L7 APIs we had a pretty large set of implementations that were actively helping us evaluate this; with L4, having some kind of test or PoC project seems useful, so I'm supportive as well.
A
The next topic was actually me; I wanted to bring this up. I am planning on stepping down from the chair position for SIG Network. I've been in the spot for a long time and want to give an opportunity to someone else to take over this role. So I wanted to bring that up here and see if anyone is interested in that, or if anyone has any thoughts on that sort of succession. I think this is the first time, to my knowledge, that we've done a succession in SIG Network.
B
I think this is important, and honestly we're probably remiss for not having done it sooner. Most of the other SIGs have either done this once or twice, or even institutionalized it as a regular thing. It's an opportunity for other people to get involved.
B
It's helping make sure things run: keeping the meetings in order, organizing, finding the right people, and making contacts. So I think it's a great opportunity for somebody who wants to get more involved but, you know, maybe doesn't want to be owning all the design and code problems. You can do both if you want, but it's an opportunity. I know Dan Williams is... all right, Dan, are you here still?
B
Not here today, okay. Dan made similar noises, so I don't want to speak for him yet, but here's an opportunity for new people to step up in an interesting way and take on some more community leadership. You don't need to sign up right now, right? But if you're interested in this, let me or Casey or Dan Williams know, and we can talk about it.
A
And I'll say: I'm not falling off the face of the Earth, so I'll be around to answer questions and help if anybody decides to pick this up. Everybody here is very helpful and will be supporting you in that role, so it's not something to be super stressed about.
C
Yeah, I would suggest it would be a great idea to help people understand, if they are interested, or if they're trying to make a case to their employers for why they should spend the time: they probably would want to go into that discussion with an idea of both the time commitment and whether they are signing on for one year, three years, or forever. Defining that might be, you know, SIG-specific, but also useful for people trying to make decisions.
A
Yeah, for sure, and there are some project-wide guidelines or recommendations around that that I can dig up and share. It may also make sense for us to think about codifying a little bit more of a formal process and timeline, like Tim was saying, within SIG Network itself.
B
Yeah, I mean, honestly, we should probably formalize this and say that if you're signing up, you know, every year or 18 months or something we will offer this to new people if they want to take it, right? So nobody feels like they're stuck because there's no succession plan and they can't get out of it. Nobody should feel that way; this should not be an obligation.
M
Casey, I think it will help a lot of people if you write an FAQ somewhere of the things you do today and the time load on you, so people who want to volunteer can say "okay". It will obviously not be exactly the same for them, but it will look almost like what you have in your area. There are key questions that everybody will ask, like: how long does it take? What do I do?
A
Good point. I'll come up with a collection of all the relevant docs and roles and responsibilities and share that on the mailing list.
B
You know, Andrew, to your point, thank you for reminding me: I started a conversation a number of weeks, even months, back about formalizing the recognition of more roles within the SIG and building towards more formal succession plans, and then I totally let it fall off my radar. So I will try to find that discussion and bring it back.
L
Yeah, we've been working on something similar with Gateway API; we're trying to build a clear contributor ladder and process, because, yeah, we also face the same problems.
A
Cool, that's all I had. So if there's not any more discussion on that: Andrew, what's up next?
E
It includes some really nice resources: a video on the birth of network policy with our own Tim and Dan and other folks. This came up in the Network Policy API subgroup meeting on Monday, and the general vibe was that it's kind of confusing, right? So this website is literally networkpolicy.io; it's pitching what looks like a fully upstream NetworkPolicy object, but there's a network policy editor on the front page that is not fully open source, as far as I could find.
E
I could be wrong; there are other Cilium folks on the page. So we were kind of discussing what we could do with this to bring it into the fold. Rewinding even more: the Network Policy API subgroup kind of wants all these random network policy resources to live in one place, so we can, you know, make it easier for users who want to figure out network policy and may have trouble understanding just the default upstream documentation.
B
If we were to repatriate this, would we want to take over the site? First of all, it's owned by Cilium, right? So I'm not signing Cilium up to give it to us. But suppose, hypothetically, that they were like, "yeah, sure, go right ahead, take it over": is that what we want to do? I...
E
I don't think so; we have a website already. So it would be more like taking the resources that are here and putting them on our website, and deprecating this website, or making it less of a generic "this is network policy" thing, because it looks like an official upstream Kubernetes network policy kind of site. And we had someone come and ask if it was, and I was like...
E
"No, we don't own this." And then he was like, "I was confused, because I thought this was generic upstream network policy, but then I realized the network policy editor was actually part of Cilium." It was just creating some confusion. So I think the goal would be maybe to ask them to possibly make this into more of a "this is Cilium's network policy editor, this is great" site, and we put the generic network-policy-related resources in an upstream repo. Does that make sense?
B
Yeah, I mean, it makes sense; we should go ask the question. Ultimately, we don't have a trademark on "network policy", and so we can't prevent anybody from doing this. If the Cilium folks want to maintain this because it's valuable for them and their customers, that's great. We could ask them to make it clearer that it's not officially affiliated, or something, if we think that's what we want to do; but we can have the discussion. I don't think anyone from Cilium is here on a regular basis.
E
My plan was to reach out to Liz and Thomas Graf from Cilium and just see, but I didn't know if there was more backstory that I was missing, so I wanted to bring it up here.
A
Cool, that's the end of the agenda. The next meeting would be December 22nd, and I think we said we're going to keep that one on, although I suspect it'll be a little bit light. I know that I won't be here that week.
B
The cycle starts to go back to the KEP phase of the cycle; although, you know, officially we don't really have a cycle, this is how it works in practice. I sent out a note to the SIG Network list about how I'm going to try to manage my own time around reviewing stuff, because I feel like I've been giving people an unsatisfactory experience lately, cramming everything in right at the last second. So I'm going to try to do better at spreading that out over time.
B
Please let me know earlier rather than later, so we can discuss it; and if there are big ones, I'm happy to make time to sit on a Zoom call on a regular basis and talk about, you know, incremental progress, as opposed to dropping giant PR bombs at the end. I'm setting this up for myself, personally; if other people want to follow suit, that's totally fine. I'm not imposing it on anybody; this is how I'm trying to manage my time.
B
In the past we've said we don't merge half-done things, right, because it's bad. We don't have the right machinery in place to, like, have the API checked in but not have it appear in discovery docs, and so we just choose not to merge the APIs until the implementations are in place, and that is automatically a mega-PR. But, like, I'm happy to just sit and talk through... even just talking through: what are you doing?
B
What's changed in the last couple of weeks as you've been evolving this thing? I can review commits on your own branch, or on a shared branch that we use to figure out how to do incremental reviews. I'm making it up as I go here; what I know is that what we've been doing isn't working, and so I'm gonna just try something, anything, different. In fact, I might try three or four different things with different people and see what does and doesn't work better.
B
I wanted to go for something low-touch: put it in your own branch somewhere, and, rather than force-pushing your branch, send commits or send PRs against your own branch and, like, tag me in so I can look at them, right? And then, when it comes to that final merge, it might still be a mega-PR, but it won't be something that I've never seen before, right?
B
Right, yeah; look, I'm not being shy about leaning on you and others when I need help reviewing, yeah.
M
For the sake of this discussion, I am in, all right; I guess it's part of showing up, right? So I am, and I'm quite sure a lot of people will be as well. So let's just spread it over, so we don't have to funnel everything through one person or a few people; like, the more the merrier. As long as we maintain a certain level of quality and consistency, we should be fine.
F
We have this in OpenShift, and I see this in Cilium, and you see that with topology: people come to us for more features, but they are coupling features, and I really think that's about the experience of having an API that works in one place and doesn't work in others. And I thought that maybe the solution is to add conformance, and I took it to kpng too, and they have some failing things that are not working. So my point here is...
F
I don't think that conformance is always going to fix things, because some people just skip it. I see that: some implementations skip the tests, so they don't implement that feature; they skip it. So my question, moving forward, as I see it, is: how do we want to do this? I mean, do we want it to not grow organically on kube-proxy, or in some custom thing that nobody has implemented, or that is just hard to implement?
M
This is a very deep thought; this topic always hits a nerve, right? And I think I've talked to Tim so many times about this, and I've ranted about it so many times. What we have is a system that has n by n by n capabilities in it, where the end user has no way to tell that the thing they're about to push to the API server will actually succeed with the thing that implements it. I might use this example about policies.
M
Let's take network policy, or the Gateway API: it has many, many sharp edges, where "oh, I deployed something from somebody, but the thing doesn't support gRPC", all right, or whatever. So the API server has no way to say "your request will likely work", right, beyond static API validation and the conversion stuff we're doing and all of that, which is fair. And this goes, by the way, to disks; it goes to GPUs; it goes to, like...
M
"Oh, I want the GPUs that can split", or whatever GPUs do these days. There is no way you can tell that the driver that supports the GPU actually can do the feature you want. What I'm trying to say is that we're lacking a kind of capability enumeration: we cannot show up to a system that talks Kubernetes and say, "hey, can you do policy of type whatever?" or "hey, can you do disk of type whatever?" And this is because we don't have that in the API server...
M
So that's beyond conformance, because conformance is a commitment of "oh hey, I'm a Gateway API implementation and I can do these things", all right. But the deployment, the snowflake deployment that the user has after adding more add-ons to their system, is not what people run the conformance against, and not what we confirm against, all right. So that's... that has been an ongoing line of hope that I've been involved in. Anyway, I yield the floor.
D
I was just going to say: you mentioned kpng. The fact that they skip some of the conformance tests is because the code is not done yet; they explicitly have in their plan that they will eventually pass all of the conformance tests, so...
F
Yeah, my point is... I don't want it to... my point is that what I see is that conformance is not enough, or it's too late.
F
What is it... this new one that you are updating lately?
E
Is the answer, like, maintaining, I don't know, some infrastructure that runs conformance on certain implementations, so that we can say "this implementation is conforming to our spec"? Right now we're just trusting that downstream implementations run our tests, but they can skip them, like Antonio said; they do whatever. Is it for us, as a community, to maintain a list of conformant implementations and keep that up to date?
B
We should also be careful that, you know, the way kube-proxy implements things isn't necessarily what the specification is, right? Like, kube-proxy may say "we publish all endpoints", right? Or, I'll pick on topology, since Rob is on my screen: the way that topology is implemented between kube-proxy and the API server is not the only way that it could be implemented, and would a different implementation be valid or invalid if it's different from what we do, right?
B
Say you had a network layer that had really good information about bytes per second on each flow: could you use that to feed into a topology-aware service proxy so that it made smarter decisions? It seems like that should be valid, right?
L
Yeah, I would say that topology is a good example, but likely not the only one, of features we have designed with the limitations of our current kube-proxy implementation in mind, which may not apply to all other proxy implementations in the Kubernetes ecosystem; so maybe we're not making the best spec decisions. I mean... yeah, agreeing with everything that's been said. But Shane, you've got a hand up.
K
Oh, I feel like I'm really going to regret what I'm about to say. If you're willing to take the stretch with me that Kubernetes is like an operating system for clusters, then we're talking about having applications deployed on the operating system that aren't fully compatible with the operating system.
F
I don't think of it that way; I don't... on that problem, you know. What I'm saying, what I really think, and I said this two or three years ago, is that we should freeze kube-proxy and stop adding features based on kube-proxy, and start thinking more holistically, based on specs and that kind of thing.
K
Go ahead, Antonio... it sounded to me like... maybe I misunderstood what you're saying. It sounded to me like you started off worried about people coming in and building kube-proxy-like things but not meeting conformance, plus your added point that you don't know that conformance testing really is the thing that solves that. But then you're talking about wanting to freeze kube-proxy, and these seem like kind of different issues.
F
Did I miss anything? The thing is... let me... so, the thing that I see is that Services is growing and we keep adding features, okay? And right now we have a big portfolio of features in kube-proxy, and people keep demanding more, asking for more advanced features, and I don't see that as sustainable, because, yes, there are many, many other service implementations.
F
Implementations cannot keep up with kube-proxy right now, and if we keep developing this way we are going to keep making the gap bigger. What I feel is that Services is overwhelming, and cannot keep growing more than it is, and we still keep working on adding more things to Services. So I think that we are in a vicious cycle with kube-proxy, and this is also affecting how people use Services, and... I don't know, I mean, it's...
B
So we should be clear that there are at least two different categories of stuff. There's API-impacting stuff, which is, you know, like adding some new thing to the API; we try really hard not to add a lot to the Service API, but, you know, adding service types, right, is a big deal, because we know that every implementation has to understand that. And there are non-API-impacting things, like topology, which, strictly speaking, should be invisible to most users, except that it makes their system...
B
...you know, less expensive, for example. And we have to figure out, I guess, whether these non-API-impacting things are requirements. Like, is there a topology conformance requirement, right? Or is it simply a "hey, use...
B
...whatever information you've got to make smart decisions" sort of situation. And then, for API-impacting stuff, I think we have done, and need to continue doing, the... I know it's not going to sound great, but the IETF sort of "multiple interoperable implementations must exist" rule, right; we can't finish it until we know that there are interoperable implementations. And I chose "interoperable" instead of "conformant" because I'm not sure we want to put a stamp on it.
B
It's like, maybe we need a specification for the Service API that is divorced from what kube-proxy does. It should already be, but it's probably not.
L
Yeah, I mean, it sounds like we are catching up to the fact that kube-proxy... wow... that kube-proxy has become a situation that is very similar to network policy, to Ingress, to Gateway API.
L
All these other networking APIs have multiple implementations. When we started with kube-proxy... I was not around, but when kube-proxy started, when all of this initially was created, I don't think that was anywhere in anyone's mind, and now we're just catching up to the new reality that, hey, this is implemented everywhere.
B
You know, strictly speaking, we said from the beginning that Services are an optional API in Kubernetes. You don't have to run kube-proxy; you don't have to offer DNS. Those were optional. In fact, DNS was in the add-ons directory for a long time, right? It was this thing that you could put on top of your Kubernetes cluster to make it more useful.
B
It has become a de facto norm that you have Services and that you have DNS, but we are now seeing kube-proxy be replaced in the same way that network policy and others are replaced. I don't know anybody who actually runs a cluster without any form of Services, but they probably could.
B
I started a doc... not exactly on that topic, but sort of similar: thinking about the principles for when we say "yes, we're willing to consider a KEP" and when we're willing to say "no, we're not going to consider that". I didn't write very much, just a couple of paragraphs as a seed, and I haven't circled back to it.
B
So, I have to drop off; I need a couple of minutes to prep for my next meeting, so I'm going to drop off, but you guys can keep going if you want to. This was great. I'll be around in two weeks; hopefully I'll see some of you then. If not: everybody have an amazing holiday, take some time off, and we'll see you in January.