From YouTube: Kubernetes SIG Network Bi-Weekly meeting 20201210
B: Yeah, so I thought this was really interesting, actually, because we are doing an amazing job lately of triaging. I will share this if it lets me.
B: I did. It was amazing. Let's see if there's more than the 15 that we were down to... okay, we're down to these 15. So: one-second latency in host service IP with iptables. This person has issues, well-documented issues, a suggested fix, blah blah blah, uses iptables. I think somebody was already looking at this.
B: And then we're already going to 29 days ago, oh my gosh, all right! This one is already assigned: service topology doesn't reject traffic when there's no matching endpoint. And it looks like... Andrew, do you have any ideas, or do you want this assigned to someone else?
B: Okay, so I'm going to assign Antonio, so you can follow up. Sound good? Thank you. All right: nftables do not scale for services. Let's see... and we have Tim asking what's going on here. What do we think?
E: I pinged this today. I'd like to know what we think the follow-up is. To me, it didn't sound like this was expected behavior, and it doesn't align with what other people have reported. So I guess I want to figure out what the next investigation step is.
A: The version... he should try a newer version of iptables than that particular one. I know there have been performance fixes since the one he's referencing there.
B: Okay, all right: range_allocator.go crashes the entire world, or at least the entire KCM, if the CIDRs are incorrect for a single node.
E: I'll switch the labels on this; let's confirm this as a bug. If somebody wants to work on it... I don't know how hard it's going to be, or what the impact of trying to fix it will be. I'll update this one. Okay.
B: I know we're running low on time, but we also got started a little late, so just tell me when we should stop. But we've got pod-to-pod low bandwidth. Wow, we're back into October already, all right. What do we think, Mr. ...?
H: Why don't you just assign Honlin, and then I can ping him internally and ask him if he wants to follow up, or Andrew can. Yeah.
H: Oh yeah, I think we all agreed this was probably the client; we need to spread these, yeah. We know what to do here. I almost feel like this is a good getting-started issue or something, but we know what to do here and nobody's doing it. It's not hard. We just need to make this test spread the clients around in a logical way.
E: Yeah, it's just a consistency thing: then you always want to run it on the same node, or you always want to go worst case.
J: Yeah, well, yes, we have a way to schedule the pods on different nodes. This is what we are using in several e2e tests.
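To make "spreading the clients in a logical way" concrete, here is a hedged Go sketch of one way an e2e test could force client pods onto distinct nodes with required pod anti-affinity. The `app: netperf-client` label and the helper name are assumptions for illustration, not the actual e2e machinery being discussed.

```go
// Hypothetical sketch: one client pod per node via required pod
// anti-affinity, so a bandwidth test is not accidentally measuring
// same-node traffic. Label and names are assumptions, not the real helper.
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func clientAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					// Assumed label identifying the test's client pods.
					MatchLabels: map[string]string{"app": "netperf-client"},
				},
				// Spread across the per-node topology domain.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}
```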
E: And... I think I pinged this person today, and they said they'll try to take a look at it. Oh.
A: All right, thanks, Bridget, and thanks everybody else too. Next up, Dan Winship: endpoint slices and proxy modes.
C: Yeah, so I was just doing some planning stuff, and it occurred to me that at some point we will probably want to not have duplicated code in kube-proxy, once everybody is supposed to be moved to EndpointSlice, and I wasn't sure if anybody had thought about that already.
L: Yeah, thank you for bringing this up. That's a good point, and I think the questions you added here are great and relevant. I don't know of any plans right now to add EndpointSlice support to the userspace proxy, and I don't know of anyone using the userspace proxy either, other than OpenShift, like you mentioned there. But with that said, I'm open to it, and I'm sure anyone else is too; I just don't personally have plans.
L: That has the same issue, where EndpointSlices have not been ported to Windows userspace. Some folks on the Azure team ported it to Windows kernel mode, but it has not made its way to Windows userspace.
H: I have a cleanup PR for the Windows userspace one, by the way, also kind of in flight. I remember Andrew mentioned this as well, a related one: there's a lot of stuff to do on those userspace proxies.
L: So, for a little bit more context: right now there's a kind of kube-proxy-local data structure that both Endpoints and EndpointSlice map into, and then each proxier uses that data structure. Well, each of the non-userspace proxiers uses that data structure.
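A much-simplified sketch of the shape being described; the type names here are hypothetical stand-ins, not kube-proxy's actual types in pkg/proxy.

```go
// Hypothetical stand-ins for kube-proxy's shared, proxy-local structures.
package proxy

// ServicePortName identifies one port of one Service.
type ServicePortName struct {
	Namespace, Name, Port string
}

// Endpoint is the proxier-facing view of one backend.
type Endpoint struct {
	IP      string
	Port    int
	IsLocal bool
}

// EndpointsMap is the shared structure both watch paths fill in.
// An Endpoints handler and an EndpointSlice handler each translate
// their API objects into this same map; the iptables, IPVS, and
// winkernel proxiers then program rules from the map without caring
// which API produced it. Only the userspace proxiers bypass it.
type EndpointsMap map[ServicePortName][]Endpoint
```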
L: There is more duplicated code for the userspace proxiers, but it's not a huge cost. But, as has already been alluded to, if we imagine there's a time in the not-too-distant future where people just won't be using Endpoints in kube-proxy, then it may not be worth keeping it around forever.
L: Yeah, I think that's reasonable. The feature gate that's guarding the EndpointSlice implementation in kube-proxy is probably one to two releases away from being GA, and if it's anything like the controller functionality... we've said we wouldn't deprecate anything related to Endpoints until after the EndpointSlice functionality went GA, and that's a long path.
L: This is something like... I had been trying to graduate the EndpointSlice API to GA in the 1.20 cycle, and I had a PR that did that, and got some questions and some hesitancy around the topology field, because it was no longer being used, essentially, by any of the active proposals. And we were wondering, well, hey: can we get rid of this? And obviously you can't get rid of a field at the same time as going to GA, or you shouldn't. And so we decided to... well, there's some discussion on GitHub, on the PR, around this.
L: But the general idea, as I remember it, was: let's try to pull topology out and instead just use nodeName, to mirror what Endpoints already has, and use slice-level labels for topology keys like zone and region, and do subsetting based on that. Is that what you're asking about?
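A hedged sketch of that direction, assuming the discovery.k8s.io/v1beta1 Go types as of roughly v1.20; the slice-level topology labels shown are illustrative, not settled API.

```go
// Sketch of an EndpointSlice using nodeName per endpoint (mirroring
// Endpoints) plus slice-level labels for zone/region topology.
package example

import (
	discovery "k8s.io/api/discovery/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

func exampleSlice() discovery.EndpointSlice {
	return discovery.EndpointSlice{
		ObjectMeta: metav1.ObjectMeta{
			Name: "my-svc-abc12",
			Labels: map[string]string{
				discovery.LabelServiceName: "my-svc",
				// Hypothetical slice-level topology label, per the discussion.
				"topology.kubernetes.io/zone": "us-east-1a",
			},
		},
		AddressType: discovery.AddressTypeIPv4,
		Endpoints: []discovery.Endpoint{{
			Addresses: []string{"10.1.2.3"},
			// nodeName replaces the per-endpoint topology map for node identity.
			NodeName: strPtr("node-a"),
		}},
	}
}
```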
L: Oh, I see what you mean. Yeah, there is a way that it can be converted in a backwards-compatible way. I forget the specifics of it, but there should be a way to make that work as part of an upgrade.
L: I'll make a PR to update the KEP. I know there have already been some updates, but I'll go through and make sure it's a little bit clearer what's happening. Cool, thank you.
E: So, I'm sure most of you saw the externalIPs CVE drop this week. It's not new; actually, Clayton filed a bug against it in, like, 2016, so it's known. The problem is that it's just an overly powerful feature that, because of our compatibility guarantees, we can't just unilaterally break. So I wrote a small KEP this week to propose that we add a new built-in admission controller to disable the use of external IPs, and cluster admins can opt in to disabling it.
E: I believe, you know, anecdotally, that like 99% of clusters can probably opt into this and will never use it, and everybody else who really does need it can just not opt in; they can either leave it the way it is, or they can apply policy through OPA or whatever other policy mechanism they're currently using, which is strictly simpler than what OpenShift... OpenShift has a controller that's similar to this, I guess, but it actually allows some CIDR checks and things like that.
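For illustration only, here is the core check such a built-in admission plugin would perform; the function name is hypothetical and this is not the actual KEP's implementation.

```go
// Hypothetical admission-time check: reject Services that set
// spec.externalIPs when the cluster admin has opted in to disabling it.
package example

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func validateNoExternalIPs(svc *corev1.Service) error {
	if len(svc.Spec.ExternalIPs) > 0 {
		return fmt.Errorf("service %s/%s: spec.externalIPs is disabled by cluster policy",
			svc.Namespace, svc.Name)
	}
	return nil
}
```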
E: I'll take silence as a no. The KEP is out there; I'll link the KEP in the chat, let me find it. It's a small one, and I think I just need to convince the API machinery folks that this is worth having another built-in for.
E: So then, the second one... there's the KEP, I just linked it. The second one is this older... Tim?
G: I had a question. (Yeah, go ahead.) Do we think we should think about deprecating that feature, or do we just kind of leave it, and that's it?
E: We... I don't know. Usually when we say deprecation, I want to say, you know, here's the thing you should use instead. We don't have a thing to use instead, so for now I'm not recommending that we deprecate. I'm just saying we should block it for most clusters, and then we can figure out from there whether we want to really deprecate it or not. Disagree?
E: Yeah, and truthfully, if enough people are using it and they're going to complain about it, I don't mind bringing back a different feature that does the same thing but in a more controlled way, right? Or, honestly, we can just leave it and tell people: this is a really powerful feature, you probably shouldn't use it, but if you do, like, meet my friend OPA.
E: Great, okay. So the second CVE that I threw on the agenda to revisit was this old route_localnet CVE. Just to refresh people's memory: kube-proxy in iptables mode enables a sysctl in Linux that allows the 127.0.0.0/8 network to be treated like a routable network, which is in violation of the RFC, and what it allows is NodePorts on localhost. Now, IPVS doesn't allow NodePorts on localhost at all, and IPv6 doesn't allow NodePorts on localhost at all.
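The sysctl in question is route_localnet. A minimal sketch of what enabling it amounts to (an illustration of the mechanism, not kube-proxy's actual code):

```go
// Enabling route_localnet, equivalent to:
//   sysctl -w net.ipv4.conf.all.route_localnet=1
// With it set, the kernel treats 127.0.0.0/8 as routable, which is what
// lets iptables-mode NodePorts answer on localhost (and what the CVE is about).
package example

import "os"

func enableRouteLocalnet() error {
	return os.WriteFile(
		"/proc/sys/net/ipv4/conf/all/route_localnet",
		[]byte("1"), 0o644)
}
```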
E: There are not a lot of use cases for supporting it at all. I do know that there were some people who were using it for things like insecure Docker registries on localhost:5000, and they were using it as their...
E: ...way of having kubelet pull images. That feels like a gross hack, but I don't want to break people. So we've been discussing, in one of the issues, and I'll have to find that one also, how we actually want to tackle this.
E: I just wanted to bring it back to the front of people's brains as we wrap up the year. It would be nice to not let these linger too far into the new year.
E: Right, yeah. So I wanted to just throw this in front of people again to see if there are, you know, more creative answers for how to do this, or if it's really just something that we should start weaning people off of, which is mostly what the thing I just linked to says. For the life of me, I can't find a way in iptables to do what is currently being done with route_localnet, without route_localnet.
E: Oh yes. So I took a couple of weeks sort of off from my normal day-to-day responsibilities at work, and I focused just on getting some upstream issues pinned down. Specifically, Khaled and I spent a lot of time together finishing off the dual-stack stuff in late October, early November.
E: It's some of the oldest REST code in the system, and it does things that no other resource does, and it's written... like, it just hasn't been updated as the best practices have evolved, and so it was just horrible. It's got this two-layer... Khaled says I'm being nice; it's worse.
E: It's got this two-layered REST thing that ends up calling validation and strategy stuff twice, so there's actually some validation code that allows things that shouldn't be allowed, because it gets run once before allocation and once after allocation. It's a real mess. And so Khaled and I brainstormed on how we could do better, and how we could actually bring it into a single layer and make it look more like all the other resources.
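A self-contained, hypothetical illustration of that two-layered shape (not the real kube-apiserver code): an outer Service-specific layer validates and allocates, then delegates to an inner generic store that runs validation again.

```go
// Hypothetical two-layer REST: the strategy/validation hooks run twice
// per create, once on each side of the allocation step.
package example

type service struct{ clusterIP string }

// validate stands in for the strategy and validation hooks.
func validate(svc *service) error { return nil }

type innerStore struct{}

func (innerStore) create(svc *service) error {
	if err := validate(svc); err != nil { // second pass, after allocation
		return err
	}
	return nil // persist the object
}

type serviceREST struct{ inner innerStore }

func (r serviceREST) create(svc *service) error {
	if err := validate(svc); err != nil { // first pass, before allocation
		return err
	}
	svc.clusterIP = "10.0.0.1" // the outer layer mutates between the passes
	return r.inner.create(svc)
}
```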
E: So, to that end, I started on a PR, and I've proven that we can actually move it down into one layer. But the PR is getting unwieldy, and I'm here to say: if anybody has some free cycles and they'd like to learn a new area of the code that they haven't played with before, and help write some tests and port some test cases and clean up some stuff...
E: ...I could sure use help on it. I'll link here the PR that I have in progress, and anybody who opens it will realize that there's a lot of work to be done still.
E: So, for anybody who wants to pitch in and help out on that: a lot of what I need to do is just making sure that the test cases are really sufficient. Like, I did the create path, and I found some test cases that weren't covered before, and in fact I found bugs. So the update path needs help on test coverage, and then there's just a lot of cleanup and stuff to do in there.
E: I had a couple that were, like, attempts at solving it in different ways. This is the one that I think we should actually do, and it builds on a different PR, against the API machinery, that is also not merged yet. So that's all included in there, and I'm discussing getting that API machinery patch merged separately from this.
E: There's a change we need to make in the API machinery that the API machinery folks basically agreed to, but it's not merged yet, and then this builds on top of that.
J: Right. What about the discussion that we had last cycle with Daniel, Daniel Smith, and Dan Winship, about the Service subnet? Daniel proposed to move the allocation to an admission controller, or something like that, kind of like volume provisioning and these things. I was working with that idea, and I think that we can do it with an admission controller too, and remove the...
E: We could, except the admission controllers are not called if an operation is rejected. So if you, the user, send us something that will fail validation, and we go ahead and we allocate IPs and whatever for it, and then it fails validation, we don't get a callback to say: oh, by the way, it failed validation; undo whatever you just...
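A hedged sketch of the hazard just described, with hypothetical names: when allocation happens before validation, a rejected request has to release its allocations explicitly, because nothing calls back after the rejection.

```go
// Hypothetical create path showing why allocation-before-validation
// needs an explicit undo on the failure branch.
package example

import "errors"

type allocator interface {
	AllocateNext() (string, error)
	Release(ip string)
}

func createService(alloc allocator, specValid func() bool) error {
	ip, err := alloc.AllocateNext() // side effect happens first
	if err != nil {
		return err
	}
	if !specValid() {
		// Without this explicit release the IP leaks: no callback fires
		// to say the operation was rejected.
		alloc.Release(ip)
		return errors.New("spec failed validation")
	}
	_ = ip // persist the object with its allocated IP
	return nil
}
```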
E: And it's made worse by the fact that the number of permutations of all the different fields, around type and clusterIPs and ipFamilies and ipFamilyPolicy and ports, all the different ways those things can permute against each other, makes a very large test matrix, and update is that squared. So...
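Roughly, the matrix being described is a cross-product over the interesting field values, with update tests pairing every starting state against every target state; the field values below are illustrative.

```go
// Sketch of the test matrix: creates are the cross-product of field
// values; updates pair every before state with every after state, O(n^2).
package example

type testCase struct{ svcType, ipFamilyPolicy, clusterIPs string }

func buildMatrix() (creates []testCase, updates [][2]testCase) {
	types := []string{"ClusterIP", "NodePort", "LoadBalancer", "ExternalName"}
	policies := []string{"SingleStack", "PreferDualStack", "RequireDualStack"}
	ips := []string{"none", "v4", "v4,v6"}

	for _, t := range types {
		for _, p := range policies {
			for _, ip := range ips {
				creates = append(creates, testCase{t, p, ip})
			}
		}
	}
	for _, before := range creates {
		for _, after := range creates {
			updates = append(updates, [2]testCase{before, after})
		}
	}
	return creates, updates
}
```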
E: The tests around update are, right now... well, first of all, I could use anybody's eyeballs to look at the tests around create. I think I have a good representation of the various interesting cases through create.
E: No, I've got this PR open, and I'm saying that if somebody is interested in it, then let's talk about how we want to collaborate. I don't know; we could open a bunch of issues, we could start a dev branch, we could just PR each other against our own private forks. Like, I don't really care; we can figure that out. If nobody says they're interested, then I'll just keep plodding along at my own pace.
H: At some point, do all these test permutations need to move? I know, Antonio, you had mentioned there's a service-apis test suite out of tree, and is there some point where, like, a new set of tests needs to be out of tree? I was wondering, for some of this stuff, once all these permutations start becoming important... or is it all going to stay in-tree the whole way?
E: Right. So I wrote all these tests as unit tests, which really just call into the REST stack and then verify the results. So I thought unit tests would be the smallest scope and therefore probably best, but I'm open to arguments about why integration would be better, or why testing at different levels gives us sufficient confidence in it.
C: I was reviewing Ricardo's endPort PR and noticed that he was updating the API types in extensions, and I was like: why does this still exist? And Christopher Luciano responded to some of my questions, but did we drop the ball? Did we forget to delete stuff we were supposed to have deleted, or...?
M: Yeah, I put in there... I don't know if you remember, Tim, but soon after we officially put egress into v1 of NetworkPolicy, there was an issue created, or something, about what we actually do as far as deleting old code, for instance like extensions or something. Is there a point where we can actually delete it? Because we can do things as...
E: I think the standard answer for a beta API is: we announce the deprecation, say we announce it in 1.21, and we keep it through 1.21 and 1.22, and then we can get rid of the beta API. We just didn't do that.
E: I feel like we should give... if we did that and then we forgot to repeat ourselves, we should probably repeat ourselves one more time. So in the case that we found, we should say in the release notes for 1.21 that 1.21 will be the last version to support extensions/v1beta1, okay, and then we can get rid of it in...
C: Sounds okay to me. Okay, Andrew is asking in the chat: don't beta APIs get automatically disabled and removed now? It's not literally automatic; they're supposed to be, and people are supposed to go in and find them, and for some reason the KEP that started that process did not look at extensions at all.
M: I'm trying to remember if I did or not. Mainly I was wondering about the issue that I just linked in there: 52185.
E: So does that mean that you can't actually use the extensions/v1beta1 API for NetworkPolicy? If so, we should just remove it.
M: I think, yeah, that was the original thing that I posted in there. You can't; as of 1.18, you can't even turn on a flag to make changes to it.
A: All right, well, it sounds like we have a couple more things to figure out here. So I assume, Dan and Chris, maybe you guys can take a look at what the current situation is (yeah) and then figure out how to proceed, based on the advice Tim has given so far.
A: All right, thanks, Bridget, and thanks, Dan and Chris. We are out of time.
A: So, unless somebody has something very pressing... we should also say that the next meeting is canceled, because that is likely a holiday for a lot of people.