From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20210429
C
This person reached out in Slack and I asked them to open the issue. It's a very specific scenario, and I don't know if I added my comment, but it would be good if more people looked at it and gave an opinion. In my opinion it's a very specific scenario; I don't think it can be generalized.
A
Do we think it's a bug, but one that might not be fixable, or do we think it still needs investigation?
A
It's this.
C
The thing is that the failures are legit, because they are creating a conntrack entry that is being renewed and is, like, catching the traffic. But I don't know to what point kube-proxy should be, you know, in charge of deleting that conntrack entry.
A
I'm Casey Davenport.
C
I'm watching this one. Let's let it ride for two weeks, because I think it's a problem with their setup or some configuration.
B
Okay, this one. I was looking at it earlier, and it sounds like people want this person to test with a more recent version, because theirs is so old.
H
Yeah, I have commented that we have had many of these in the past, and I would like to see if we can reproduce the same problem in 1.21, because otherwise, even if we find it, it will not be fixed, because it's an alpha feature.
I
So there was something where, yeah, assign this to me. I think this is already fixed. That's the Dan Winship one: we were trying to force both the Linux and Windows userspace proxies to use EndpointSlices, even though they don't have EndpointSlice support. So I think this just got fixed.
B
Yeah, I think I saw that mentioned in a different issue too. So, okay, this person needs to give us more information. They didn't tell us the Kubernetes version or anything. So I guess that one sits for another period and then we'll see.
B
That's right. Okay, awesome! Moving right along: Tim opened this one, and there's been some discussion on it. Oh, and it was accepted for triage an hour ago. You can see I loaded this up a little bit ago, so I guess we're okay there.
C
Yeah, and I have my opinion, but my opinion alone isn't the deciding one, because then everyone else should weigh in too.
F
So there's two things here: both kube-dns and ip-masq-agent have to update. I did the kube-dns part; I'm trying to find someone that can update ip-masq-agent.
F
I know Varun was one of the owners; I'm trying to see if there's somebody else familiar with it. But if someone from the group here wants to take it, it's open too. We need to build a new image and update it with the new client-go.
A
Great, thank you very much. I think the first item on the agenda is the all-ports KEP.
F
Okay, yeah. I mostly wanted to start the discussion here to see what path to go down to move this forward. The work-in-progress KEP is quite short right now. The motivation to do all-ports services is that we have many applications in the VoIP and RTP space that require several ports to be exposed by a single service, and listing all of those out can be tedious.
F
So we're trying to see if we can expose the entire valid port range through a service just by setting a simple boolean field on the service spec. That's how we initially started this proposal. What I have here is just a single allPorts boolean field on the service port API, and it needs to be on the service port API because we don't want to break the validation.
F
Currently, any valid service requires at least one port to be specified, unless it's headless, so to keep that same validation in place we wanted to introduce this field in the service port spec. But through the reviews we actually got some interesting alternatives.
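As a rough sketch of the shape being described here, assuming the proposed allPorts boolean lives on the port entry as discussed: the types and names below are illustrative stand-ins, not the actual Kubernetes API or validation code, and the field name comes from the work-in-progress KEP rather than any accepted API.

```go
package main

import "fmt"

// Illustrative stand-ins only, not the real Kubernetes API types.
type ServicePort struct {
	Name     string
	Port     int32
	AllPorts bool // proposed in the WIP KEP: expose the entire valid port range
}

type ServiceSpec struct {
	ClusterIP string // "None" means headless
	Ports     []ServicePort
}

// Mirrors the existing rule mentioned above: any non-headless service must
// declare at least one port. Putting the proposed flag on a port entry keeps
// that rule intact, because an all-ports service still lists one entry.
func validatePorts(s ServiceSpec) error {
	if s.ClusterIP != "None" && len(s.Ports) == 0 {
		return fmt.Errorf("at least one port is required for non-headless services")
	}
	return nil
}

func main() {
	svc := ServiceSpec{
		ClusterIP: "10.96.0.10",
		Ports:     []ServicePort{{Name: "all", AllPorts: true}},
	}
	fmt.Println(validatePorts(svc)) // <nil>: the single all-ports entry passes validation
}
```

The point of putting the flag on the port entry, as the speaker notes, is that the existing at-least-one-port check keeps working unchanged.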
F
One idea was to create a brand-new service type, so we don't have to worry about breaking existing validation there, and this could just be a load balancer at the IP level, so all ports and all protocols would be supported. That's one idea. It's a pretty major change to the API plus validation, but it's cleaner in that we won't break anything existing, and if you want load balancers, we need load balancer controllers to implement this new service type.
F
Extending that, this could instead be an L4 gateway implementation in the Gateway API rather than the Service API. That's one option; it's a bigger change and possibly a longer-term option. The other option that we got out of the reviews, which I think is pretty interesting too, is restricting all-ports support to just load balancers and, as mentioned before, to just headless services. Headless services don't require you to specify a port, so we get that zero-port validation for free, since zero ports are already allowed for a headless service. The only validation we have to open up is letting LoadBalancer services also be headless: right now it checks that you have a valid cluster IP, but if we open this up, we could relatively easily allow this support.
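A minimal sketch of that alternative, assuming the headless-plus-LoadBalancer direction just described: the types below are illustrative stand-ins rather than the real Kubernetes validation code, and the relaxed check reflects only what the discussion suggests, not merged behavior.

```go
package main

import "fmt"

// Local stand-ins for the relevant Service fields; illustrative only.
type Service struct {
	Type      string  // "ClusterIP", "NodePort", "LoadBalancer"
	ClusterIP string  // "None" means headless
	Ports     []int32 // declared ports; empty is already legal for headless services
}

// Sketch of the relaxed rule discussed: today a LoadBalancer service needs a
// valid cluster IP; if LoadBalancer services are also allowed to be headless,
// the existing "headless may have zero ports" rule gives an all-ports
// LoadBalancer for free, with no new port-level validation.
func validate(s Service) error {
	headless := s.ClusterIP == "None"
	if !headless && len(s.Ports) == 0 {
		return fmt.Errorf("non-headless services must declare at least one port")
	}
	// Change under discussion: no longer reject Type=LoadBalancer when headless.
	return nil
}

func main() {
	allPortsLB := Service{Type: "LoadBalancer", ClusterIP: "None"}
	fmt.Println(validate(allPortsLB)) // <nil> under the relaxed rule
}
```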
J
I have a question here: how do you plan to deal with the NodePort, specifically for load balancers? Load balancers today use NodePorts for health checks and all of that stuff.
F
Yeah, that's a great point, and it's true that it wouldn't make sense to do NodePorts when you are trying to do an all-ports service that opens everything up. If we did the headless service plus a load balancer, there wouldn't be any NodePort assigned, except for the health-check NodePort for externalTrafficPolicy: Local, and that would still continue to work as it does. But you would not do any port translation, that is, "this NodePort actually translates to this service and backend port"; you wouldn't have that functionality at all. We would just have the NodePort for the health check if needed, but that's it; the NodePort functionality would just not be there.
F
Correct, yeah. On all of these changes, you're right that we would have to make changes to kube-proxy, to the API validation, and to the load balancer controllers to pick up this option.
J
So yeah, on this one I just want us to understand the implication of what we're trying to do here. If we are not going to do port translation, which I believe we shouldn't, because you cannot take over all the ports of a node, then we're basically saying this type of service is only supported by DSR, sorry, direct server return. That's the only way people can expose a service like that; they cannot expose it using any other way, which is fair, by the way.
J
I just want us to document this, because there are no ports on the node, right? I'm going to deliver the packet to the node using the original public IP that I received the packet on, and that's DSR. It's not directed to the cluster IP; it has nothing to do with the cluster IP, because the cluster IP is not known by load balancers. A load balancer doesn't know about your service; it knows the public IP that it listens on.
J
The way direct server return works today is: a packet is received on, say, a PIP or a public IP on the load balancer, then it's forwarded to the node using the same destination IP, just a different MAC, right? That's one of the ways you can make a service work today. The other way is that you map ports, like: for this IP, this port goes to this NodePort. All right, so we're not going to do NodePorts, so we're only saying that services of this all-ports type work that way.
F
Yeah, I agree, that's a good point, and I think there are some load balancer implementations that do just that today, like DSR: the nodes accept packets destined to the load balancer VIP, and the response goes from the backend pods directly to the
H
Yeah, the headless service, I believe, is a good candidate because it removes many of the questions that have been raised in this forum. For instance, DSR is not needed at all. You can have address translation to the pod address and have basically the same thing as today, but for the entire port range, or the entire IP address actually. And limiting it to headless also removes the need for an extra port that isn't needed but has to be there, which is very hard to document in an understandable or sane way.
H
You do not need a port and it's not used, but you have to write it there, which is weird. But to interpret a headless service with the LoadBalancer type the same as it is today internally, while externally you get all the traffic forwarded from that IP, the load balancer IP, to the pods: I believe it's simple to implement, and for me at least it's intuitive.
J
Okay, I understand where you're coming from, Lars, and it makes sense, but I want to bring your attention to two points with load balancers.
J
If you say, okay, we're going to limit this to only headless services, meaning there's no cluster IP, then you're basically assuming that the load balancer, which is an external entity to the cluster, knows how to route to pod IPs, right? Not only that, you are forcing the cloud provider controller, or whatever we call it now, to react to pods coming in and out behind the load balancer, meaning the selector. Let's say the selector chose three pods today.
J
Tomorrow it's four, and then it needs to know about the fourth and go and configure the load balancer. And god knows there are so many state issues; I can bring your attention to graceful delete and all the problems we had there. The problem with this also is the idea of pod IP, which limits this implementation to networking CNIs that allocate from the same network where the cluster lives. So in Azure, for example, it's the Azure CNI, on GCP it's going to be the GCP one, on AWS it's going to be the ENI-based one, and so on.
J
That means anybody who's using an encap/decap style, or VXLAN, or any of those overlay implementations, won't be able to put a load balancer on top of this all-ports implementation, because the load balancer doesn't know how to route to pod IPs, sorry.
K
It sounds like there's a coupling between cluster IP and external load balancers, but is that strictly so? Because I thought cluster IP was only an internal, within-the-cluster notion.
K
It doesn't route traffic to cluster IPs, right, but what you were saying just now is that actually it is required. I'm just trying to understand.
J
Okay, in that case it makes more sense. Let me just carry on on this point: the assumption here is that I will never access the service from within the cluster, right? Am I getting it correctly? Like, if I want to access this service from inside the cluster, I go through the load balancer?
J
Yeah, sorry. Okay, we are victims of our own API, by the way. It really is.
J
I think I wasn't clear; let me repeat that and try to be more clear. If I have a service today, a cluster IP service, I can access it from within the cluster on the cluster IP, right, and I get load-balanced across all the backing pods. Is that correct? Yeah.
J
Yes. Now, if I move to the future, what we're telling users is: yes, you're going to have that, but if you chose to go with an all-ports service, the only way you can route to the all-ports service, even within the cluster, is by an external load balancer, or by some other means, like running some sort of balancer on top of your cluster.
A
Sounds like there's a lot of discussion still to have here, and I'm aware that we have a few more agenda items to get through, but there is a PR linked in the notes. So I suggest we take the rest of this to the KEP.
B
Mine's very short. We would like to move dual-stack to stable, which means we need to take a look at issues, we have to look at PRs, and we have to see what needs to be done now and what can be done later. You don't have to read them aloud right now, but please add links to any of your favorite enhancements or bug reports or things that are, in your mind, blockers to dual-stack going stable. That's all; we will talk about this again in two weeks.
A
Thanks, Bridget, that was nice and short. Ricardo and James, would you like to talk about yours?
D
Yeah, thank you, Casey, that's just a follow-up. We had our first ingress-nginx subproject meeting this week and it started really well. Bowei helped us a lot with the moderating. I've asked James to come here because James has proposed to be, not a maintainer, but sort of help us with the issue triage, doing all of those administrative and other things as well, and also taking a look into the code.
M
That is not my goal. So, hi everyone. As Ricardo said, I'm James Strong. I'm interested in helping out and just making sure that things are moving along, since there hasn't been a lot of traction. So just going through the issues like Bowei suggested, getting an understanding of the state of the world and what I can do to help, learning how to be a contributor as well, and just getting up to speed with the project itself.
A
Yeah, so I noticed that this time around we need to, at the beginning of the cycle, put into the spreadsheet the KEPs that we intend to complete for the release.
A
So I think the main thing there is that they need to have somebody who's committed to driving them through in this cycle to whatever target state they're aiming for. So, what I was going to propose: it looks like the deadline is the evening of our next meeting.
C
Yeah, the other thing is: people with KEPs that, you know, maybe last time we didn't get through, like the one that was in progress, I don't remember the name, the local policy or something like that. If somebody needs help, I mean, I can help with some of the KEPs, and I know people that want to help. So, I don't know, send an email or something asking for help, and we'll see if we can graduate more features in this release.
A
Yeah, for sure. I think that, right after this, I may send out a thread on the mailing list so that folks can just respond there with their intentions: here's my KEP and here's me saying that I want to do it. Then we can chat through all of those at the next meeting, and then I'll fill in that spreadsheet between 2pm and midnight on the next meeting's Thursday.
L
Yep. So if anyone doesn't know me, I'm Andrew. I've been helping run the network policy API subgroup meetings, and after a lot of discussion we kind of came to the conclusion that we need a new repo, kind of like the Gateway API one, in order to house some of our work, such as cluster-scoped network policy and whatever else we're working on, maybe a v2 and other things. So I just wanted to put out there this issue I opened to do that.
A
Great. Well, it looks like 36 minutes ago Tim gave it a thumbs-up, so yep, you're probably pretty good to go.
A
Thank you. Thanks, Andrew. And then back to Antonio for the last item.
K
Oh sorry, I was walking my kid home. Yes, we are cutting 0.30.
A
Cool. Well, I think everybody gets about 15 minutes back then. Keep an eye out on the mailing list for that KEP thread that I'm about to send. I'll see y'all next time. Awesome, thanks.