From YouTube: Kubernetes SIG Network Meeting for 20220303
B
C
C
Okay, well, this issue was a user who's doing something interesting: trying to put their own annotations and finalizers on an EndpointSlice. I think everybody on the thread has disabused him of the notion that that's a good idea.
C
The question that is open is: do we actually cover that in tests? What happens if somebody puts a finalizer on an EndpointSlice and the controller fails to delete it, or thinks it's deleting it, does kube-proxy keep those endpoints? It seems like a really good corner case to make sure we have covered in tests, so I'll leave this open.
C
So I'm surprised, if that's true. I mean, somebody had to write code to clear the finalizer explicitly, which I don't think would happen. So my guess is, if you put the finalizer on... I haven't gone and tried it yet, but if you put a finalizer on an EndpointSlice and then you delete the service, the slice will stick around until the finalizer is removed, which is what I would expect to happen, but I'll bet that it might confuse kube-proxy. Maybe the controller too.
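For reference, a minimal sketch of what the user was doing, using the discovery/v1 Go types; the finalizer name here is hypothetical, not taken from the issue:

```go
// Adding a user-owned finalizer to a controller-managed EndpointSlice.
// With a finalizer present, deleting the parent Service only sets
// metadata.deletionTimestamp on the slice; the object stays visible to
// watchers (including kube-proxy) until the finalizer is removed.
package example

import discoveryv1 "k8s.io/api/discovery/v1"

func addUserFinalizer(slice *discoveryv1.EndpointSlice) {
	// "example.com/my-cleanup" is a hypothetical finalizer name.
	slice.Finalizers = append(slice.Finalizers, "example.com/my-cleanup")
}
```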
D
C
Like, it doesn't consider it in the process of being deleted. I don't know, I can't help but feel like we're missing some hooks or something in the watch client that make it really clear: not just that it's been updated, but that the deletion process has been started, and that's different from an update that sets the deletion timestamp.
D
C
But it seems like the sort of thing that everybody will forget, because it's not an explicit hook. It seems like there might be an interesting opportunity to provide a delete hook, or, you know, a delete-started hook, so that the client could capture that as a distinct thing, just so that in documentation, when you look at it, it's a distinct thing from a plain old update. Like, it's an important difference. I don't know, we don't need to discuss it here.
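As a rough illustration of the gap being described: with client-go today, "deletion has started" is not a separate callback, it is just another update whose deletionTimestamp became non-nil, so each controller has to remember the check itself. A minimal sketch, not the actual EndpointSlice controller code:

```go
// Detecting "deletion started" inside a generic informer UpdateFunc,
// since there is no dedicated hook for it.
package example

import (
	discoveryv1 "k8s.io/api/discovery/v1"
	"k8s.io/client-go/tools/cache"
)

func addSliceHandlers(informer cache.SharedIndexInformer, enqueue func(*discoveryv1.EndpointSlice)) {
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			oldSlice := oldObj.(*discoveryv1.EndpointSlice)
			newSlice := newObj.(*discoveryv1.EndpointSlice)

			// Deletion in progress arrives as an ordinary update: the API
			// server sets metadata.deletionTimestamp and the object stays
			// visible until all finalizers are removed. A controller that
			// forgets this check keeps treating the slice as live.
			if oldSlice.DeletionTimestamp == nil && newSlice.DeletionTimestamp != nil {
				// Handle "deletion started" distinctly from a normal update.
				return
			}
			enqueue(newSlice)
		},
	})
}
```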
C
E
There is another common problem with the way the client works. So I don't think the client is the problem, it's in all controllers: everybody, on update, starts doing their dance, right. The problem is that an on-update doesn't mean a field that you are operating on has been updated.
E
So a common example is that updating an annotation or updating labels triggers resynchronization all over the place. Even the default controllers that are part of Kubernetes don't really do that check in most places. So that's also something I've witnessed all over the place. There's no way you can ask the controller, tell me if this field was updated or those fields were updated, and nobody does the sanity check of: oh, I have an old cached version, is it different or not?
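A minimal sketch of the sanity check being described: compare only the fields the controller actually acts on before resyncing, so that label-only or annotation-only updates are dropped. The Service example and function name are hypothetical:

```go
// Filtering out updates that did not change anything this controller cares about.
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
)

// relevantFieldsChanged reports whether the fields this hypothetical
// controller operates on actually differ between the cached object and the
// update; annotation or label churn falls through and triggers no resync.
func relevantFieldsChanged(oldSvc, newSvc *corev1.Service) bool {
	return !equality.Semantic.DeepEqual(oldSvc.Spec, newSvc.Spec)
}
```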
F
C
Maybe these are things that we should build into the example controllers more completely, right? Like, first, here's a filter process: does this update apply to me? Right, yeah. If this update is a delete, call a different hook. If it's, you know, not applicable, just toss it in the cache and leave it alone.
G
I missed the start of this conversation, but is this related to the bug that is, okay, the annotation and finalizer on an EndpointSlice that's managed by the EndpointSlice controller? That's right! I'm very confused by this use case.
C
So you missed the intro, which was, I think, that we've disabused this person of the idea that this is a reasonable way to use it. But I don't know if this is covered in test cases. I don't know if kube-proxy and/or the various EndpointSlice-related controllers will actually kind of go crazy if you put a finalizer on a slice. I wouldn't be surprised if they do, and I tagged you on the bug to say, hey Rob, we should maybe put this into the tests.
G
G
G
C
C
But if you have a service with multiple slices and you delete a slice, then you will keep those old endpoints around, right? And, okay, the EndpointSlice controller has this rebalancing logic: at some point, when you hit a threshold, it'll pull all the endpoints out of one slice and put them into the other slices in the empty spots, right, something like that, right, Rob? And then it'll delete one of those slices.
C
G
Yeah, and if it's pending deletion, I think it would still show up as an existing EndpointSlice, which means our controller would potentially try to add endpoints. You know, I don't know if we have a check on the deletion timestamp in the controller itself, so I can see, you know, it's an unexpected input to the system, but we should... exactly, in fact, if they're adding...
I
J
K
I
E
I don't think we should react until the thing is deleted, because you never know; the finalizer is there to do some pre-cleanup, right. Somebody said: do not remove this object until I do some prior cleanup. We are not in a position to say exactly what that is. So, aside from what was just said, which I somehow see where it's coming from, I've actually been thinking about that: there has to be a way where we say these are mine.
E
You cannot touch them. But commenting on your earlier comment about whether we should do anything when we see the deletion timestamp: we don't know if somebody depends on the fact that this object exists, or, in a way, we don't know if somebody depends on the fact that there are endpoints the system routes to, and everything works until this person decides that it's okay, now clean up.
K
C
C
So I'm going to disagree with Cal, but we can take it to the bug. I think the first step is: Rob, go see what happens, or see if it explodes. Yeah, will do. And this might be a fun one to bring back to, like, I don't know, SIG Architecture or API Machinery or something, like: hey, this is a problem that probably exists for other people, is there a pattern that we could use to solve this, right?
G
C
A
Yeah, wild turn of events: one issue has filled up the full 15 minutes of triage. I'm going to shuffle up the agenda a little bit, because I know that Dan Williams needs to leave halfway through and this is last, so let's pull that to the top and quickly talk about the SCTP topic.
L
Yeah, I know, I was about to say, you know, if we don't get to it, that's fine, but no. The issue came up the other day because we were looking into some bugs that we had, and it looks like kube-proxy treats SCTP conntrack the same way as it treats UDP conntrack, but that seemed a little wrong because SCTP is closer to TCP. And so I guess the question is around... now, there's been a lot of change in this area.
L
D
So the code that is there now, I just cargo-culted it in, because there was a method that said conntrack needed or something, and SCTP was there, and I'm ignorant on SCTP, so I carried over with that. But then there was another bug, and I was checking with one person that perhaps in the netfilter code the conntrack filter code checks the session, so the kernel should be able to track the session and clear the conntrack entry.
J
...was the first one to say, I think it was Per who said no. Yeah, I mean, so the problem with SCTP is really that it has this multiple-endpoint thing going on, right; you can set two addresses, but it's supposed to go to the same state, and many applications require that you have that set. SCTP is a very... I mean, Lars is here as well.
J
He has a lot of opinions on SCTP. Both SCTP and multipath TCP are very tricky to implement through conntrack, because you need to go and read what happens inside the protocol. You cannot get all the information just from what comes in; in TCP, I mean, in the SYN that you get, or any of the packets, what matters is that you have the five-tuple. With both multipath TCP and SCTP you have additional information that is carried in PDUs when the first session is set up. They're very tough to...
M
I don't think multi-homing is an issue to be considered, because it's not supported in Kubernetes, but one important fact is that very many SCTP clients bind to a certain port; TCP clients never do that, they use an ephemeral port. But SCTP clients quite often bind to the same port and reuse it for reconnects over and over again, and I believe it would be the same problem with TCP if people did that, but they don't.
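A minimal sketch of the client behavior being described, pinning the source port instead of taking an ephemeral one. Go's standard library has no SCTP support, so plain TCP is used here purely to illustrate the reused five-tuple; the addresses and port are hypothetical:

```go
// Dialing from a fixed local port: every reconnect reuses the same
// five-tuple, so a stale conntrack entry keyed on it can collide with the
// new connection.
package example

import "net"

func dialWithFixedSourcePort() (net.Conn, error) {
	d := net.Dialer{
		LocalAddr: &net.TCPAddr{IP: net.IPv4zero, Port: 9999}, // hypothetical fixed source port
	}
	return d.Dial("tcp", "10.96.0.10:80") // hypothetical service VIP
}
```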
J
Well, you have the TIME_WAIT, well specified, and CLOSE_WAIT, right. So the local stack will take care of it, to make sure that you don't even try to set it up; it will block it there. But yeah, I mean, you see, we have a couple of UDP test examples, right, where we bind the client port.
K
It sounds like we should add SCTP tests that explicitly bind to the source port, because we know that the bug report involves people binding the source port. So maybe we should just add SCTP tests that do that and see if it works with the existing code, and if it doesn't, then figure out what we need to do to...
J
J
The local client should just stop you from doing it the second time, right; you should not even get to the service.
K
L
L
L
C
L
J
D
C
L
The other question I had was: how do we gracefully drain a service? Is that where you just keep the service around, you kill all the pods for that service in the back end, and then you delete the service? Or, like, how do you stop accepting service traffic but then allow existing connections to gracefully age out?
K
You kill all the pods, but they have their termination grace period, whatever, okay. And because they're terminating, the endpoints go away, and because the endpoints go away, the service stops accepting new connections, but we don't delete the conntrack entries. So the existing connections stay there until the pods finish them and exit within their grace period. Okay, but...
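A minimal sketch of the only pod-side knob in that sequence, the termination grace period that gives in-flight connections time to finish after the endpoint has been withdrawn; the value and image are hypothetical:

```go
// A pod spec that leaves room for connection draining during termination.
package example

import corev1 "k8s.io/api/core/v1"

func drainFriendlyPodSpec() corev1.PodSpec {
	grace := int64(300) // hypothetical: allow up to five minutes to drain
	return corev1.PodSpec{
		TerminationGracePeriodSeconds: &grace,
		Containers: []corev1.Container{{
			Name:  "server",
			Image: "example.com/server:latest", // hypothetical image
		}},
	}
}
```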
L
D
J
The reason is simple, right. With UDP, and SCTP is a little bit different, but with UDP there is no guarantee that you have traffic coming back that way, so you have unidirectional UDP streams, which means that you can have something sending and it will never know that the server or service is gone. So there's nothing that can recreate the stream, because in conntrack you already have an entry in there. So you need to delete that entry.
J
But going back to the first question, I mean, if you say that for the service the existing flows should continue, and then, if you want to have them removed, right, what you would do with that is what you would do with a network policy entry, then, right: you will remove the ability for these two objects to talk to each other, and then you would have to clean up the conntrack.
L
Yeah, I mean, I guess my question is: is the current behavior we have for clearing SCTP conntrack on endpoint deletion correct? That's the basic question. And then, I guess, the other question I had about service draining is more general.
J
J
It can be minutes or hours depending on what's there, right. But if you have a unidirectional scheme like you can get with UDP, the sender can keep on sending; there's no receiver, and there's no way that the infrastructure will tell it that there's no one receiving this anymore, there's no NACK or anything going, right. With SCTP there is state control, so there needs to be something that goes back and sort of says the equivalent of a reset, right.
C
C
E
J
E
And UDP: if you decide to use UDP, you are consciously saying that I have an upper-level app protocol where I expect a NACK or an ACK or whatever. So yeah, even with TCP, by the way, the fact that this thing works is astonishing. Yes, the fact that TCP works most of the time is actually astonishing, if you read the specs and the way connection teardown and RST and FIN and this whole dance and all those things are expected to work, yeah.
E
J
Lars had a pretty good answer: you set up a completely new five-tuple, right. Then you have a fairly large space on that, and the old one will die, sometimes a nice death and sometimes a painful death, right, yeah. I think it's six hours it can stay in conntrack today.
C
J
I
L
O
And SCTP does have states, at least in conntrack; you can see ESTABLISHED, SHUTDOWN, SHUTDOWN-ACK, which is the same as the FIN and FIN-ACK in TCP. The only thing that's confusing for me is what Per was saying about a reset being sent, or when in time.
M
J
A
Well, it sounds like we're reaching an end on this one. I think we should move on to Surya and Winship.
K
Yeah, so Surya was doing internal traffic policy in OVN-Kubernetes. She asked me to deal with this topic because it's 11:00 pm there and she's half asleep. But so the big question came down to whether node port connections are internal or external, and in particular: if a pod connects to a node port on the same node, is that internal or external? And if a pod connects to a node port on a different node?
K
Is that internal or external? And the answers seem to be awful, because with external IPs and load balancers, no matter where you connect from, if a pod connects to the IP, it gets redirected and it's treated as an internal connection, not an external connection. So you would sort of expect node ports to work the same way, and it can work that way when you connect to a node port on the same node.
K
So we just sort of wanted opinions, like: is the right answer that node ports behave inconsistently, or should we just say internal traffic policy actually only applies to pod-to-cluster-IP traffic and not anything else, since that's sort of the canonical use case? Although that still leaves open the question of how external traffic policy would apply to these connections if they weren't internal, so that might not really be an answer.
C
So, my gut feeling: I read your questions before the meeting and I chewed on them for a little bit, and my gut feeling is they are external. Like, if we were refactoring the Service API from scratch, we would likely have a different resource, or at least a different stanza of the resource, for cluster IP and for node port and for load balancers, and each of those would have different policy statements. And so I think internal traffic policy would only apply to cluster IP traffic.
C
Now, I haven't thought through the implications, the implementation implications, but that's where I would go by default, and I see Bowei said the same thing in chat.
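A minimal sketch of the position being argued for here: only pod-to-cluster-IP traffic would be governed by internalTrafficPolicy, and every other access path by externalTrafficPolicy. This encodes the proposal as stated in the discussion, not settled kube-proxy behavior:

```go
// Which traffic policy field would govern each access path under the proposal.
package example

type accessPath int

const (
	viaClusterIP accessPath = iota
	viaNodePortSameNode
	viaNodePortOtherNode
	viaLoadBalancerIP
)

func governingPolicy(p accessPath) string {
	if p == viaClusterIP {
		return "internalTrafficPolicy"
	}
	return "externalTrafficPolicy"
}
```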
C
K
C
D
But when you talk about topology, you talk about locality; we're not.
D
K
So, Tim, I get what you're saying, but that's the opposite of how we've defined internal versus external so far. Like, right now it's about who the client is, not how they accessed the service. So we could possibly change that, especially for internal, although, I mean, external traffic policy has been GA forever, right, so I don't know if we can change that.
C
Don't we have a weird corner case? Is this the same corner case as with external traffic policy, where node port is the thing that falls in the cracks all the time, right? Does that trip this up also? I'm not sure what you mean; like, if I set external traffic policy to Local and I go to another node's node port, is that external traffic at that point?
K
K
C
I agree, I think that that's broken. I think that, as we're staring at how to slice up the endpoint assignments and the policy decisions within that code path, we should be clearer about which ones are internal.
J
J
That there is no address translation, right, and stuff like that. I wouldn't be surprised if at some point you see things for v4 that would use MAP-T to map it to a v6 address and then map it back, and all these things; so using the source to classify is tricky unless you're on the node where that source is.
J
K
J
C
So there's the one classic corner case for same-node node port, which is the localhost thing that doesn't work in IPVS but does work in iptables unless you disable it, and that's like, I want to access a localhost registry or something, right. That's the one use case that I've seen for this in the past.
D
C
P
C
C
E
The problem, according to how things are described, is that it's not that we misnamed a thing and documentation clarity will fix it, or some big note somewhere; it's that the intent is broken. So either we treat all internal as internal or all external as external. But right now the idea that, if I am a pod sitting on this node talking to a node port on another node, I'm considered external, but if I go through the load balancer, I'm back to internal; as a behavior, that's broken behavior. It's like, I don't...
O
E
C
So here's what I'm thinking, though: the fact that it's different whether you talk to a local node port, a same-node node port, or a different node's node port seems like the big red flag. Like, it shouldn't be different.
C
The load balancer thing just feels broken, and I think we can justify that that is in fact broken, especially because it's different on different implementations, right: on a proxy-ish load balancer you don't get that. And so then the question is, if we're going to fix the big red flag here, which way do we fix it? We can't say that node port traffic to a different node is local, because you can't preserve client IP.
C
So therefore it has to be considered external, right, ergo I think we should say same-node node port is also external, and we bring consistency to node port, we fix the glitch on the load balancer IP, and we tell people it's about your intent. There's a long-standing TODO also in there somewhere that says: should we apply the load balancer source ranges to node ports? Right, so there are policy things that apply to load balancers but not node ports that we have historically, you know, not come back and fixed either.
C
K
C
Yeah, because we have a... what's it, the API, the interfaces.
C
F
C
C
C
E
...will break. I just remembered that connection thing, the node port stuff, that we were talking about: it's actually triggered by somebody who had, like, a per-node registry cache, and they connect to it via pods and stuff like that, and I can assure you, to them it's localhost; the way they think about it is localhost. I don't know if they have policy on them or not, but I'm quite sure if the policy...
E
E
Because if somebody's using, like, a hypervisor container, Firecracker, whatever, and they are running two containers for two different tenants on the same node and you can loop back traffic, I wonder if they know.
C
One last comment, then we can move on; I don't know if we have any more agenda. But internal traffic policy is very new anyway, right? So before that, when I talked to a same-node node port, I was going to get global behavior, cluster-scope behavior, anyway, because there wasn't a way to specify otherwise, right.
C
H
O
No, I was about to say, that's because we didn't ask Dan Winship before we implemented that, because Dan Winship's solution, where a pod going to another node's node port is external, would have simplified things.
L
Quick clarification for Dan Winship: I had written that second issue there, or second item there, and then you crossed that out and said it can't be, for compat, but I thought that is exactly what Tim said earlier. Is it just that we can't do that?
C
K
I guess, the load balancer short-circuit thing: the big thing that it wants to preserve is that if a pod or node connects to the load balancer IP when it's on a node that doesn't have an endpoint, it shouldn't be rejected, and it currently does that by treating the connection as local. But I guess it could also...
K
C
As local, I guess, right. I mean, the reason this logic exists in the first place is because on those VIP-ish systems that's actually a local address anyway. So if we don't handle it, it will go out to the host's network interface and then try to connect to something that isn't running on the host's IP, right, and it will just fail completely.
A
H
I don't need the 10 minutes, it's just: I posted a PR implementing, or starting to implement, the amp API and associated CRDs. So if folks want to take a look, feel free. There are two main unresolved items you can still find in the KEP that we want to fix in this API PR, so keep that in mind when you're reviewing. That's all I have, though, thanks.
A
D
D
O
C
I mean, we have what we have historically defined, or left undefined, or implicitly defined via code, but as long as we can convince ourselves that we're not going to break users, we can define it how we think it should be defined, as long as we have reasonable confidence that we're not going to break users and an escape plan if it turns out we're wrong and we broke them anyway.
C
C
Currently we consider that internal, and so we don't apply external traffic policy, right, yeah. Well, are you a pod or are you an outside source? You just said: if I'm a pod and I talk to a node port on my node, then I can preserve client IP, because that will work just fine regardless of whether the external traffic policy, or regardless of whether internal traffic policy, is cluster or local, right, because if it was local, it's fine.
C
H
C
C
I
I
So it actually just does it based on the IP that was used and not the source. But that also has led to a lot of people complaining about, like, if you create a load balancer and you set external traffic policy to Local...
I
K
C
K
If you send to a bad node, a node with no endpoints: if it just treated the service as though it was cluster, that would have no externally visible effect. Oh, no, because there's the problem that the load balancer might be slow to catch up with endpoint changes and might send... okay, no, never mind.
I
Yeah, I don't think we've ever fixed the, like, if you connect to a load balancer IP from a pod and there are no pods on that same node, that traffic is still dropped. And we've discussed workarounds, like defaulting to iptables for that kind of case, but they all seemed really hacky; like, it's really hard to work around the fact that you have the IP bound in the IPVS stack.
I
K
C
Maybe you can link it to the mailing list or something, or, Dan, do you have an issue where you're discussing this? There we go.
O
All right, also just one quick thing: can we have a decision noted where someone is going to start documentation, where we discuss this more? Because the way OVN-K does things is we ask what kube-proxy does, and it would be good to get some of these things explicitly defined in documentation for kube-proxy, so that we can just point users to it saying: upstream does it this way, that's why we do it this way.