From YouTube: Kubernetes SIG network meeting 2019-10-31
Description
Kubernetes SIG network meeting from Halloween, 2019.
C
If they say kubenet, that's the CNI; it uses the bridge plugin underneath. Gotcha.
E
There was the dual-stack stuff with it when I tried to do that. Unlike with endpoint slices, what I had with Endpoints was: on delete, you just call the meta proxy, the proxy that implements it, and you expect that the proxy will work. Now, that substitution doesn't work, because an endpoint slice will have mixed IPs in it, versus what we used to have before with Endpoints, where you know the endpoint family, so you call the correct proxy based on the family.
E
So the way I see it, unless I don't see the entire picture, we have two choices. Either we make endpoint slices per family, so the slice will always have one family; it doesn't matter which, and it doesn't even need to carry a flag for the family.
E
Or we call both blindly, and the base case of each one is: is this my family? If not, then just silently ignore it. That might work, but there is nothing to tell us when that happens. We've had this situation before, if I remember; we had this talk about the service controller inside the API server, where you were talking about ignoring the problem, what was ignoring it and what ignored it. How would you report that?
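A rough sketch of the two choices just described, with illustrative names rather than the actual EndpointSlice controller or kube-proxy code. The second option is the one with the reporting gap being raised: addresses of the other family are dropped with nothing to tell the user.

```go
package main

import (
	"fmt"
	"net"
)

// Option 1: the controller partitions addresses so each slice is single-family
// and never needs a family flag at all.
func splitByFamily(addresses []string) (v4, v6 []string) {
	for _, a := range addresses {
		ip := net.ParseIP(a)
		switch {
		case ip == nil:
			// unparseable; skipped in this sketch
		case ip.To4() != nil:
			v4 = append(v4, a)
		default:
			v6 = append(v6, a)
		}
	}
	return v4, v6
}

// Option 2: every proxy is called blindly and its base case is "is this my
// family?"; anything else is silently ignored, with no report back.
func keepMyFamily(addresses []string, wantIPv6 bool) []string {
	var out []string
	for _, a := range addresses {
		ip := net.ParseIP(a)
		if ip != nil && (ip.To4() == nil) == wantIPv6 {
			out = append(out, a)
		}
	}
	return out
}

func main() {
	mixed := []string{"10.0.0.5", "fd00::5"}
	v4, v6 := splitByFamily(mixed)
	fmt.Println(v4, v6)                     // [10.0.0.5] [fd00::5]
	fmt.Println(keepMyFamily(mixed, false)) // what an IPv4-only proxy would keep
	fmt.Println(keepMyFamily(mixed, true))  // what an IPv6-only proxy would keep
}
```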
I
Well, but still, I mean, we can just assume that the IPv4 proxy will process all of the v4 addresses in there and the v6 one will process all the v6 ones. And so, as long as all the addresses are either v4 or v6, we know that neither proxy is going to ignore anything, or that there won't be any leftover ignored addresses. I was just responding to the question of, you know, what...
G
The way it is today, if there was a bug, and what you're describing would be a bug, right? So if there was a bug in kube-proxy where it decided to ignore even-numbered IP addresses, there's still nothing that's going to report that, right? Some user would have to figure it out and then file a bug saying, what the heck, guys, why aren't you sending traffic to all of these?
G
Anyway, this can be heavy, though; like, there could be a lot of them with a lot of data, and so I'm copying all that data for the sake of, I don't know, not that much value, honestly. So initially, when EndpointSlice came, I think the very first version of it actually had separate IPv4 and IPv6, and I'll take the blame for saying: no, no, just do "IP". We can test and see which one it is, right? We've written that function.
E
By the way, one of the things is that, as we move this to beta, we're not going to be able to change the API, so somehow I'm against putting the family in the slice. As long as the endpoint slice controller assigns the same family to the same slice, we should be fine, all right? So this only impacts the endpoint slice part inside of...
G
Kube-proxy. So it's not true that we can't change the API; we can add to it. The address type is explicitly an enumeration, right, so we can add new values and say that for GA the "IP" type is deprecated and the "IPv4" and "IPv6" types are preferred. If that's the direction we really have to go, we're not locked in there. Good.
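A minimal sketch of what that enumeration change could look like; the constant names follow the EndpointSlice addressType values being discussed, but treat this as illustrative rather than the exact upstream discovery API definitions.

```go
package main

// AddressType mirrors the idea of the EndpointSlice addressType enumeration
// discussed above; illustrative, not the upstream API source.
type AddressType string

const (
	// AddressTypeIP is the original catch-all value; the suggestion above is to
	// deprecate it at GA in favor of the family-specific values.
	AddressTypeIP AddressType = "IP"
	// AddressTypeIPv4 and AddressTypeIPv6 are new enum values that can be added
	// without breaking the API, since consumers must already tolerate unknown types.
	AddressTypeIPv4 AddressType = "IPv4"
	AddressTypeIPv6 AddressType = "IPv6"
)

func main() {}
```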
G
The thing that I keep coming back to in my head is that we already have code in kube-proxy that says, hey, this isn't an address that I understand, so I'm going to ignore it, because we had a bunch of bugs filed when people had v6 addresses in otherwise v4 proxies and we were blowing up iptables, feeding it v6 addresses. So I think it's you who already went there; wasn't it you, or maybe it was Holly.
K
Yeah, I just wanted to say, though, that the two PRs for the work have been ready for review. The first one is the downward API changes and also updating the hosts file with IPv6, and then the second one is basically adding support for multiple CIDR masks, one per family, IPv4 and IPv6. So I went with the suggestion that Ben and Andrew had: we just have separate flags for IPv4 and IPv6, and the user can use those to configure it in the case of dual stack.
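A sketch of what "separate flags, one per family" can look like; the flag names and defaults here are illustrative, not a statement of the final flag names in the PR.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// One mask-size flag per IP family, instead of a single shared value.
	// Names and defaults are illustrative.
	maskV4 := flag.Int("node-cidr-mask-size-ipv4", 24, "mask size for the IPv4 node CIDR in dual-stack clusters")
	maskV6 := flag.Int("node-cidr-mask-size-ipv6", 64, "mask size for the IPv6 node CIDR in dual-stack clusters")
	flag.Parse()
	fmt.Printf("allocating node CIDRs with /%d (IPv4) and /%d (IPv6)\n", *maskV4, *maskV6)
}
```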
D
Yeah, I mean, item one is really brief: I just put a link to the Ingress GA meeting notes if anyone is interested, and the meetings are recorded. Those are going on in alternation with this meeting but in the same time slot, and I think we're going to try to figure out if there's a better time slot. And the second item is that external-dns is moving to kubernetes-sigs; the move has happened, but the PR hasn't been closed. Anyways.
H
It doesn't actually say anything about kube-proxy treating it that way, or Endpoints itself. Unfortunately, Endpoints themselves and kube-proxy themselves have treated it a certain way that the comment doesn't suggest. Basically, going back historically, it seems like there was an annotation before this, something around tolerate-unready-endpoints, and this is doing that same kind of functionality all over again. We had initially ignored that and decided we weren't going to do that with endpoint slices, because it didn't seem like it was specified to do that.
H
But as we thought about it more, as I thought about it more: as endpoint slices are going to beta, no one's intentionally upgrading to them, so any kind of change in functionality is something that somebody would not expect. So my idea, and I have a PR linked in the agenda, is to actually continue to support the publish-not-ready-addresses behavior that Endpoints specified.
H
If we do that, we should likely update the comment to say what it actually does, because right now it actually suggests that it's only relevant to DNS, not to kube-proxy or other implementations; but as it turns out, kube-proxy and now endpoint slices are also paying attention to that attribute. I'm sorry, is that a field on Service? That's a field on Service, I remember.
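The field in question is publishNotReadyAddresses on the Service spec, which carries forward the old tolerate-unready-endpoints annotation mentioned above. A minimal sketch of the decision being described, with illustrative helper names rather than the actual controller code:

```go
package main

import corev1 "k8s.io/api/core/v1"

// includeAddress sketches the behavior being proposed: an address that is not
// ready is still published when the Service opts in via
// spec.publishNotReadyAddresses. Helper names are illustrative.
func includeAddress(podReady bool, svc *corev1.Service) bool {
	return podReady || svc.Spec.PublishNotReadyAddresses
}

func main() {
	svc := &corev1.Service{}
	svc.Spec.PublishNotReadyAddresses = true
	_ = includeAddress(false, svc) // true: the not-ready address is still published
}
```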
M
...for this, but I just wanted to kick off the idea discussion. So, today we've been trying to think through how we would grow clusters, let's say, after you've created them, maybe by adding things like discontiguous CIDRs to them, or trying to fit a cluster into an already stretched address space where we have a bunch of disjoint address spaces, things like that; and, in order to be able to even do those things...
M
So, in the case of the iptables implementation, the three rules are: there is one rule that basically uses the cluster CIDR and says, if the source of the packet is not in the cluster CIDR, and if the traffic is addressed to a service IP, then masquerade it with the node IP, right. So this essentially is handling use cases where you point, let's say, a route at a node for the service IP range and send traffic to the service...
M
...from outside the cluster, and you can expect it to work. Now, if you take a step back and think about how iptables is used, all pod traffic that is leaving the node today is essentially translated at the node boundary before it leaves the node, right. So if pod A within a node tries to send traffic to a service IP, it will go in and get translated to the backend pod IP before it actually leaves the node.
M
So if you now assume, and I think this is a reasonable assumption, that all pod-to-service translation for traffic within the cluster happens at the node boundary before the traffic leaves the node, then we can actually simplify this idea of using the cluster CIDR to determine whether the source is a pod in the cluster, down to just the node's pod CIDR.
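A sketch of the masquerade rule being described, keyed off the cluster CIDR today and, under this proposal, off the node's pod CIDR instead. It is written in the style of the arguments kube-proxy hands to iptables, but the exact chain names and matches are illustrative:

```go
package main

import "fmt"

// masqueradeRule builds the "if the source is not from <localCIDR> and the
// destination is a service IP, mark it for masquerade" rule discussed above.
// localCIDR is the cluster CIDR today; the proposal is to pass the node's pod
// CIDR instead. Chain names are illustrative.
func masqueradeRule(localCIDR, serviceCIDR string) []string {
	return []string{
		"-A", "KUBE-SERVICES",
		"!", "-s", localCIDR, // traffic that did not originate from local pods
		"-d", serviceCIDR, // addressed to a service IP
		"-j", "KUBE-MARK-MASQ", // masquerade with the node IP on the way out
	}
}

func main() {
	fmt.Println(masqueradeRule("10.0.0.0/16", "10.96.0.0/12")) // today: cluster CIDR
	fmt.Println(masqueradeRule("10.0.3.0/24", "10.96.0.0/12")) // proposed: this node's pod CIDR
}
```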
M
So essentially, we can say: if the traffic is not from the node's pod CIDR, then it did not originate from a pod on this node, which essentially means that we don't expect, in normal cases, for pod A's traffic to keep its source IP and reach node B, which is outside the node boundary, right. So that's one reasoning behind it; that's one way to think about this problem. The second use case that we have is to say...
M
I don't want to go through the whole KEP, but that's the idea on this thing. The other rule was, essentially, there is a rule that says if a pod reaches out to the load balancer IP, don't send it outside the cluster but actually short-circuit it to the service chain, and we can apply the same logic: essentially, the translation happens before the traffic leaves the node, so we can rewrite the rule in terms of the node's pod CIDR instead of the cluster CIDR. And the third case would be: accept traffic that's related and established.
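In the same illustrative style as the sketch above, the other two rules could look roughly like this; again, chain names and matches are placeholders, not the exact kube-proxy rules:

```go
package main

import "fmt"

// loadBalancerShortCircuit: traffic from local pods to a load balancer IP is
// redirected straight to the service chain instead of leaving the cluster.
// Under the proposal, "local pods" is matched by the node's pod CIDR.
func loadBalancerShortCircuit(nodePodCIDR, lbIP, serviceChain string) []string {
	return []string{
		"-A", "KUBE-SERVICES",
		"-s", nodePodCIDR,
		"-d", lbIP,
		"-j", serviceChain, // short-circuit to the per-service chain
	}
}

// acceptEstablished: the third rule, accept traffic that is related/established.
func acceptEstablished() []string {
	return []string{
		"-A", "KUBE-FORWARD",
		"-m", "conntrack", "--ctstate", "RELATED,ESTABLISHED",
		"-j", "ACCEPT",
	}
}

func main() {
	fmt.Println(loadBalancerShortCircuit("10.0.3.0/24", "203.0.113.10/32", "KUBE-SVC-EXAMPLE"))
	fmt.Println(acceptEstablished())
}
```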
M
So what this will then allow us to do, with the kube-proxy implementation, and there are similar translations in IPVS, as we've called out in the KEP, is to stop tracking the cluster CIDR as part of the kube-proxy IPVS or iptables implementation, in which case we can have other models of doing pod IP allocation at the cluster level which don't have to coordinate any changes at the routing level for services, right. And that's the essential premise and hypothesis on how we want to think about this problem, so.
G
Yes, so Mike, I think you are correct; I know you are correct. In fact, today, and it's a great point, cluster CIDR is an optional flag to kube-proxy, and everywhere that uses it is wrapped within an "if not nil"; otherwise the log is, I can't tell whether this is internal or external, so I'm going to ignore it.
I see.
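A tiny sketch of that guard; the message and structure are illustrative, not the exact kube-proxy code:

```go
package main

import "log"

// applyMasqueradeRule shows the guard described above: the cluster CIDR flag is
// optional, so every use is wrapped in a check, and without it the proxy simply
// cannot distinguish internal from external traffic.
func applyMasqueradeRule(clusterCIDR string, appendRule func(args ...string)) {
	if clusterCIDR == "" {
		log.Println("cluster CIDR not specified; cannot distinguish internal from external traffic, skipping rule")
		return
	}
	appendRule("!", "-s", clusterCIDR, "-j", "KUBE-MARK-MASQ")
}

func main() {
	applyMasqueradeRule("", func(args ...string) {})            // only logs
	applyMasqueradeRule("10.0.0.0/16", func(args ...string) {}) // appends the rule
}
```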
G
That is correct. So for those environments, we'll need to make sure there's a way, if you want to trigger these sorts of functionality, if you want to allow the node to act as a service gateway, then there has to be a way to say: this is the local traffic CIDR. Okay, so at the bottom of it, the question is: how do I know whether traffic came from this machine or from a different one, if I may say.
E
Well, it's not necessarily the only situation, but what Mike is saying is what I've seen at least so far, which is where we use cloud APIs for allocation from VPCs, from the subnets the cluster lives in, and by definition, when you use that, you don't do anything else, nothing less, nothing more; traffic just usually goes out with the IP you have, because it's routable on the network. In essence, then, kube-proxy...
M
If you want to go down that path, the other way of doing this, by completely dropping the notion of a cluster CIDR, is to be able to say: if we assume all pods have interfaces that start with a prefix, you can rewrite the rule to say, if the source came in from an interface with this prefix name plus the wildcard, then we assume it's local traffic, and otherwise we assume it's not, right. That works for Calico, it would work for... yeah.
G
Right, so I think what we end up with is a flag that says, here's how you figure out local versus non-local, and there will be a suite of options that we can choose. One would be interface prefix, one might be membership of this CIDR, one might be that you're on a bridge, one might be, I don't know what else, but we can start to build that up as sort of a more flexible approach.
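A sketch of what that flag plus suite of options could turn into inside the proxy; the interface and option set here are illustrative, not a settled kube-proxy API:

```go
package main

import (
	"fmt"
	"net"
)

// LocalDetector answers "did this traffic originate locally?" in whatever way
// the cluster was configured; illustrative, not a settled API.
type LocalDetector interface {
	// MatchNotLocal returns iptables match arguments that select non-local traffic.
	MatchNotLocal() []string
}

// byCIDR treats anything sourced from the given CIDR (cluster CIDR today, or a
// node's pod CIDR under the proposal) as local.
type byCIDR struct{ cidr *net.IPNet }

func (d byCIDR) MatchNotLocal() []string {
	return []string{"!", "-s", d.cidr.String()}
}

// byInterfacePrefix treats anything arriving on an interface whose name starts
// with the prefix (for example "cali" for Calico) as local, using the iptables
// "+" wildcard.
type byInterfacePrefix struct{ prefix string }

func (d byInterfacePrefix) MatchNotLocal() []string {
	return []string{"!", "-i", d.prefix + "+"}
}

// byBridge treats anything entering via the named bridge as local.
type byBridge struct{ bridge string }

func (d byBridge) MatchNotLocal() []string {
	return []string{"!", "-i", d.bridge}
}

func main() {
	_, cidr, _ := net.ParseCIDR("10.0.3.0/24")
	for _, d := range []LocalDetector{byCIDR{cidr}, byInterfacePrefix{"cali"}, byBridge{"cbr0"}} {
		fmt.Println(d.MatchNotLocal())
	}
}
```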
G
...really called the cluster CIDR, literally. So I think what we want to do is break the, um, break the idea that it is "the cluster CIDR", right, and say there's going to be a family of ways to configure local-traffic detection for kube-proxy, and we'll have to just update all the proxies to understand all the mechanisms, which is okay; there are only three or four that I think are actually affected, or I guess all of them, but that seems finite. Okay, for the sake of time...