From YouTube: Kubernetes SIG Network Bi-Weekly Meeting 20221222
A: All right, this is the SIG Network regular meeting. It is Thursday, December 22nd of 2022, the last meeting of the year. We are recording as usual. The CNCF code of conduct applies here, which boils down to "don't be a jerk." Welcome, everybody. Attendance is low today; we were just starting to discuss whether we want to postpone some of the agenda items until attendance is higher. Mache, if you think it's interesting to give an overview for the folks here, I'm totally supportive.
A: We have plenty of time left, but again, it's up to you whether you want to use today to talk to the ten or so people who are here, or whether you want to save it for the usual attendance, which is more like 40 to 70.
A: Okay, all right. You have edit on the agenda, right, so yeah, go ahead and just move yourself to the next section. We'll start with triage. I will share; give me one second to share my screen. Issues.
A: Am I here, can you hear me? Yep, yep. Yeah, sorry, Zoom decides that when you share a screen it's going to mute your mic because of the screen share. Sorry.
A: So, starting issues with this one from Lars. I don't see Lars here. This is an issue to add a new IPVS scheduler algorithm. Lars makes a proposal here to just make it a string, stop trying to validate it, and just trust that the user who's configuring the node knows enough to configure a load balancer algorithm that works.
A: I'm fine with that. I think it shouldn't be our business to adjudicate which ones you can or can't use, so he's going to make a PR for that.
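To make the IPVS point concrete: the scheduler is a string field in the kube-proxy component config, and the proposal above is just to pass it through instead of validating it against a fixed list. Below is a minimal, hypothetical sketch of such a configuration; "mh" is only an example value, and whatever string is used has to be a scheduler the node's kernel actually supports.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeproxyconfig "k8s.io/kube-proxy/config/v1alpha1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Sketch of a kube-proxy component config selecting an IPVS scheduler.
	// Under the proposal discussed above, kube-proxy would hand this string
	// to the kernel rather than checking it against a hard-coded list.
	cfg := kubeproxyconfig.KubeProxyConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubeproxy.config.k8s.io/v1alpha1",
			Kind:       "KubeProxyConfiguration",
		},
		Mode: kubeproxyconfig.ProxyMode("ipvs"),
		IPVS: kubeproxyconfig.KubeProxyIPVSConfiguration{
			Scheduler: "mh", // any algorithm the node's kernel supports, e.g. rr, wrr, lc, mh
		},
	}

	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // the YAML you would drop into the kube-proxy config file
}
```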
A: Next, winkernel kube-proxy. I don't know who gets assigned winkernel kube-proxy issues. Does anybody know?
A: Next, FQDN endpoint slices. This is a request for "what does it even mean?" We had added it on the thought that it would be useful in a general-purpose sense, but if we never defined it well enough, then maybe we just shouldn't have. So the proposal here was just to add some API warnings when somebody uses it, since we're all in on API warnings these days.
A: It opened a can of worms, but it's a good can of worms, adding these API warnings. So I think this one's resolved; I just wanted to bring it up here, and in fact, I will triage-accept it.
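For context on what "API warnings" means here: the apiserver can attach non-fatal warning messages to write responses, and the idea is to attach one when an EndpointSlice uses the FQDN address type. The snippet below is only a sketch of the shape such a check could take, not the actual kube-apiserver strategy code; the function name and warning text are made up.

```go
package main

import (
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// warningsForEndpointSlice is a hypothetical stand-in for the kind of
// warnings-on-create/update check being discussed: it returns warning
// strings rather than errors, so existing objects keep working.
func warningsForEndpointSlice(eps *discoveryv1.EndpointSlice) []string {
	var warnings []string
	if eps.AddressType == discoveryv1.AddressTypeFQDN {
		warnings = append(warnings,
			"addressType FQDN was never fully specified and may be deprecated; consider IPv4 or IPv6")
	}
	return warnings
}

func main() {
	eps := &discoveryv1.EndpointSlice{
		ObjectMeta:  metav1.ObjectMeta{Name: "example", Namespace: "default"},
		AddressType: discoveryv1.AddressTypeFQDN,
	}
	for _, w := range warningsForEndpointSlice(eps) {
		fmt.Println("Warning:", w)
	}
}
```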
A: I labeled it help-wanted. I got sort of scolded before for labeling things good-first-issue that weren't obviously good first issues, that were more complicated; there are actually, apparently, rules, and I have to go reread the rules before I label it good-first-issue. But this is a relatively easy one for anybody who wants to jump on it, get a PR under their belt, and start working towards org membership.
A: I know not a lot of people are here, so maybe we won't close it if we decide to, but it was worth discussing. The proposal, or the request, is to add some sort of API that lets us configure the default DNS when the policy is ClusterFirst. So basically, a pod that doesn't say anything about DNS gets the ClusterFirst policy; the ClusterFirst policy is basically hard-coded in the container runtimes, and this is a request to add an API to make that configurable.
A: This is the first time I've really heard this request, so there's not a whole lot of demand for it. It seems like a lot of work for relatively low return.
A: Suppose you don't want ndots:5 by default; you want ndots:2 by default for everybody. This would allow you to specify that.
A: Today, if you want to set a policy like that for your whole cluster, you have to go set up a webhook and then modify every pod on the way in, and that's pretty tedious. So this is asking, you know: can we just have an API for it? But we don't have a really good way to add configuration stanzas for the cluster, right?
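For reference, the per-pod knob that exists today: a pod, or a mutating webhook acting on every pod, can override ndots through spec.dnsConfig while keeping the ClusterFirst policy. Below is a minimal sketch of what such a webhook would effectively inject, using the standard core/v1 types; what the issue asks for, and what's missing, is a cluster-wide default so this doesn't have to be stamped onto every pod.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	ndots := "2"
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "example", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "registry.k8s.io/pause:3.9"}},
			// ClusterFirst is what pods get when they say nothing about DNS;
			// dnsConfig lets a single pod (or an injecting webhook) layer
			// resolv.conf options such as ndots on top of it.
			DNSPolicy: corev1.DNSClusterFirst,
			DNSConfig: &corev1.PodDNSConfig{
				Options: []corev1.PodDNSConfigOption{
					{Name: "ndots", Value: &ndots},
				},
			},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}
```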
A: While it's stable, don't touch it. So I wanted to bring this up for the group because, like I said, I'm not inherently against it, and if somebody wanted to go and start thinking about this, either this specifically or the more general problem of how to do configuration of the cluster itself, I wouldn't tell people not to do it. But I'm not going to go champion it; I don't think it's...
A: Yes, well, it is a can of worms. In fact, if we had done this at the beginning, we probably would have ended up with something that looks like a ConfigMap, but with a schema, and we would have had a field that said cluster CIDR and a field that said service CIDR, and now we know those would have been inappropriate. We don't want them to be a single cluster configuration; we actually need them to be more flexible than that. Yeah.
A: So we would have made bad choices if we'd done it the naive way, and I worry that that's going to be true no matter what we look at. So I'll leave this open; we can touch on it again when there's more attendance, but my inclination is to close this issue and just say it's unlikely to happen.
A: We have a test flake. Antonio, you were the last person to comment on it, about node ports. Do you remember this one? Should we close it? I didn't get the full context.
E: It seems that in the Kubernetes e2e tests, everything used external IPs before, right? A lot of tests have this check to use the external IP. Then it seems that the external IP wasn't always available, so there is a fallback, but there are a lot of tests that assume external IPs on all the nodes, and for the tests, at least the ones that were running in CI, I don't know how they were working.
E: And the cluster may be running in a different project, in a different data center or whatever, so that test only works when the test binary has connectivity to the nodes. I was moving all these tests to internal IPs, and I don't know if other people are facing this problem.
A: Yeah, I mean, it seems like we should not rely on connectivity from the test execution engine to the nodes themselves. We should run a pod that runs the test.
A: We're back into older issues. This is the multiple node IPs in dual stack issue; I won't rehash it here, and Dan's not here to talk about it. That's it for triage. I left these two issues open because they were my item on the agenda, and since I'm next, I'll just jump to my item.
A: My own feeling is that cluster IPs are not nearly as limited a resource as node ports are in general, and I'm not sure that we want to add all of these extra parameters; it's more API surface, more test cases, more combinations of stuff. I've never heard this asked for before, and so my inclination was to say we're not going to do this right now, and in fact, if we want something like this, this is what Gateway API is better for.
E: Sure, but we also have the issue with ClusterIP where you have a ClusterIP service, you move it to ExternalName, you drop the cluster IP, and then you move it again from ExternalName back to ClusterIP and you can assign whatever IP you want.
C: Yeah, so if I have an external load balancer, I sort of control what's getting routed to that entry point, and if I just didn't let cluster IP traffic come in, then it would only be accessible inside of the cluster, right? So you really need to turn it on; I don't think I can just block it from leaking out.
C: I've worked a lot with IPv6, and what I would much rather look at is how we avoid Kubernetes clusters typically using masquerade also for v6, not just for v4. But I mean, it's not really Kubernetes; it's actually the person that starts the cluster that creates these problems. If you don't see that, it's hard.
D: I'm just making the assumption that that's what he's trying to do, and he wants to do this at layer 3 and not layer 4, and if Gateway API is layer 4, then that doesn't satisfy his needs. But then, having a very small cluster service CIDR range... I'd like to understand why it's so small. Yeah.
A: But the service CIDR is intended to be cluster-private. Like, yeah, exactly. I know some people do route it, but you know, I don't think that's...
C: If it's internal, right, why wouldn't you use a non-routable address? Yes.
C: You have all of this to yourself. I mean, with IPv6, if you use IPv6, you do a random 48-bit prefix, right, and then use the fd00, or fc00::/8, or whatever it's called. Exactly, if it's...
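The "random 48-bit" prefix being referred to is an RFC 4193 unique local address (ULA): the fd00::/8 half of fc00::/7 plus a randomly generated 40-bit global ID gives a private /48 from which a cluster can carve its service and pod ranges. A small sketch of generating one:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

func main() {
	// RFC 4193 ULA: "fd" (8 bits) + random 40-bit global ID = a /48 prefix
	// that is private to you and very unlikely to collide with anyone else's.
	var id [5]byte
	if _, err := rand.Read(id[:]); err != nil {
		panic(err)
	}
	prefix := fmt.Sprintf("fd%02x:%02x%02x:%02x%02x::/48",
		id[0], id[1], id[2], id[3], id[4])
	fmt.Println(prefix) // e.g. fd4b:7a2c:91d3::/48; cluster CIDRs can be subnets of this
}
```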
A: Okay, cool. Then it seems like everybody's sort of agreeing. I'll add some comments to this and then I'll go ahead and close it.
D: Do you have your hands up? Oh.
G: Yeah, I mean, I was just going to add flavor to the Gateway API thing. So what Rob's referring to is our desire to eventually have you be able to create a Gateway and have load balancer IP provisioning similar to what you get for Service. A lot of implementations just use Service for this today, under the hood. That is going to be a long way out.
G: That is definitely not happening in any kind of short order. So I just wanted to throw that out there, but once...
C: Typically, the problem is not incoming traffic; it's how to make sure that a pod can reach out, right, and that the traffic can come back. And then, I mean, either the pod address is routable all the way back, or you hope that you can masquerade to something that is routable from outside.
C: Typically, many systems masquerade this to another address, one that is routable from the outside, because most systems out there don't know on which node a particular pod address range is running, so you have to handle it that way; you don't set up the routes on the outside in many systems.
A: So I still think that's a case for Gateway API, though. I saw something internally from a customer who wanted something similar, and I said this feels like a good use case for a custom GatewayClass with your own implementation.
A: One that knows how to attract traffic through some other mechanism, at L3 probably, and knows how to program it on the nodes, and it's completely opaque to Kubernetes itself what's happening, except that there's a Gateway of a particular class and it points to a Service or a pod. And so I think that's what this is.
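A sketch of what that looks like in Gateway API terms: a GatewayClass owned by an out-of-tree controller, plus a Gateway of that class. The names here ("example.com/l3-attractor", "l3-attractor") are made up for illustration; how traffic is actually attracted and programmed onto the nodes is entirely up to that implementation and opaque to Kubernetes, exactly as described above.

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	gatewayv1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	// A custom GatewayClass: the controllerName points at a hypothetical
	// out-of-tree implementation that handles L3 traffic attraction itself.
	gc := gatewayv1beta1.GatewayClass{
		TypeMeta:   metav1.TypeMeta{APIVersion: "gateway.networking.k8s.io/v1beta1", Kind: "GatewayClass"},
		ObjectMeta: metav1.ObjectMeta{Name: "l3-attractor"},
		Spec: gatewayv1beta1.GatewayClassSpec{
			ControllerName: "example.com/l3-attractor",
		},
	}

	// A Gateway of that class; routes attached to it decide which Services
	// or pods the traffic ultimately reaches.
	gw := gatewayv1beta1.Gateway{
		TypeMeta:   metav1.TypeMeta{APIVersion: "gateway.networking.k8s.io/v1beta1", Kind: "Gateway"},
		ObjectMeta: metav1.ObjectMeta{Name: "entry-point", Namespace: "default"},
		Spec: gatewayv1beta1.GatewaySpec{
			GatewayClassName: "l3-attractor",
			Listeners: []gatewayv1beta1.Listener{{
				Name:     "tcp",
				Port:     443,
				Protocol: gatewayv1beta1.TCPProtocolType,
			}},
		},
	}

	for _, obj := range []interface{}{gc, gw} {
		out, _ := yaml.Marshal(obj)
		fmt.Println(string(out))
	}
}
```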
A: What I'm hoping personally, as a very selfish person, is that Gateway gives us the ability to stop considering new features for Service and instead say, "this seems like a good use for Gateway." So Shane, even if it takes a while to get to the place where L4 external LB provisioning works in the same way, I'm still okay with it as a direction, although I'm curious why you think it'll be so long. Is it just priorities, or is there some technical complication?
G: First of all, I agree with you; I think that we should be aiming for Gateway for these kinds of things. But when I say so long, I mean, you know, six months to a year out kind of territory, and it just seems like, if I'm understanding their problem correctly, this is an immediate kind of problem and they're looking for an immediate kind of fix for it. So I agree.
G: So yeah, I'm wondering if maybe it'd be useful for us to take the time to make sure that we're understanding where they're coming from. I haven't read this entire thing, clearly, I've just talked about it with you guys on this call, but is there more to it? Is there something else that we might be able to do for them in the short term? Because it feels weird that they're coming in here with "we're running out of IPs in our CIDR and we're not willing to change that."
C: Probably. And I mean, especially the last one was like, "no, I don't want to have one IP address go to this internal service and another IP that just goes to the load balancer address." I mean, first, they don't have to come from the same CIDRs, right? So I have very little understanding of why he would want to do this.
C: I mean, in the very last one you had, or maybe the one before, when I talked about it, basically kube-proxy takes one address and then the load balancer takes another one, and maybe he's running out of addresses, but that's how the system works.
G
So
I
I
agree
with
closing
it,
as
was
originally
suggested.
I've
subscribed
to
the
issue
and
I
will
try
to
like
if
they
come
back
with
all
right
well,
we'll
go
engage
with
Gateway
I'll
try
to
follow
up
with
them.
If
they
come
back
with,
we
really
don't
want
to
do
that.
Maybe
I'll
take
the
time
to
try
to
dig
in
a
little
bit
further
into
why
and
see.
If
there's
more
to
it,
awesome
something.
A: Okay, then I'm done. Where did your window go... there you are. I'm going to stop sharing, and...
F: Let's go for it, just because why not; it's about to be the holidays. So we're just looking for more review on this PR regarding kube-proxy libs. Kind of like where we're going with all the KPNG stuff, we've kind of re-consolidated and decided, let's start small. Let's start by breaking out some of the shared kube-proxy code into staging, and then, you know, down the line...
F: ...we can start incrementally maybe fixing stuff, replacing stuff, adding functionality, and so that's what this KEP is starting to hint at. I'd say the KEP isn't fully done, but it's in a state where we want to get opinions on it from folks, so that we can kind of steer the rest of the KEP.
F: I think that's really the spiel I have for that. It's also relevant to this other KEP, which involves moving bits out of the EndpointSlice controller into staging, so I think they're fairly aligned in that they're both literally just looking to move code from core into staging, and these two will kind of set the stage for how we do that in the future. So yeah, that's really the spiel I have on it.
F: At least for the kube-proxy code, we were thinking mainly of all the shared code in kube-proxy, which is the client-side caching mechanisms and client-side informers and clients and stuff along those lines, moving to staging. For the EndpointSlice controller, I thought it was the same sort of flow, Antonio; it might be different.
F: So it kind of started like that, but based on Dan's review of that KEP, it turned into more of a "what can we do now," and the conclusion there was, if we just put bits from KPNG into staging right now, it's just more code for us to maintain, and it's not really helping kube-proxy at all. So Dan was like, why...
F: ...why don't we move the kube-proxy shared code out of kube-proxy into staging to start. Everything stays stable because it's the same code, it's just moved, and then we can start maybe bringing bits of KPNG in to replace bits of the existing kube-proxy code incrementally, so we don't have one big code change that could lead to breakages.
F: And one thing, right away: folks can vendor it a lot more easily. If folks want to use the service and endpoints change trackers that are already implemented in core kube-proxy, then external people can actually use those from day one if they haven't written a client-side caching mechanism. And then our eventual goal would be to provide more tooling to make it easier to write proxies in the future, right?
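To illustrate the "client-side caching mechanisms and informers" piece that an out-of-tree proxy has to build for itself today: below is a minimal sketch using plain client-go informers to watch Services and EndpointSlices. It is deliberately not the proposed staging library, whose package layout and API aren't settled in the KEP yet, just the kind of plumbing that library would let implementers vendor instead of rewriting.

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Out-of-cluster config from the local kubeconfig, for brevity.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Shared informers for Services and EndpointSlices: the raw inputs any
	// proxy implementation needs before it can compute its dataplane state.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)

	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { fmt.Println("service added:", obj.(*corev1.Service).Name) },
		UpdateFunc: func(_, obj interface{}) { fmt.Println("service updated:", obj.(*corev1.Service).Name) },
		DeleteFunc: func(obj interface{}) { fmt.Println("service deleted") }, // obj may be a tombstone
	})
	factory.Discovery().V1().EndpointSlices().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("endpointslice added:", obj.(*discoveryv1.EndpointSlice).Name)
		},
	})

	stop := make(chan struct{}) // close this to shut the informers down
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching
}
```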
F: We've moved around a lot, and so the KPNG KEP is still up, but that community is, I think, kind of over updating and maintaining it. So they told me to go work on this KEP instead, which is the one Jay opened and I'm working on. So yeah, I'm sorry, the path has changed a lot.
A: Procedurally, it looks like this PR is, is this the PR to implement KEP 2104? But KEP 2104 is still titled "kube-proxy architecture"; is that right?
F: Okay, so what has happened here is Jay didn't make a new issue for this new KEP; he used the existing issue, yeah. And if you read it, I should have mentioned this earlier, Dan mentioned the same thing. I said I'll go make a new issue, and Jay basically told me to wait until folks comment more, but I'm happy to go make a new issue for this KEP.
A
Repurpose
2104
to
mean
this,
then
let's
go
retitle
2104
and
you
can
keep
it
as
is
or
if
you
want
to
keep
2104
as
the
overarching
goal,
and
this
is
a
sub
goal,
then
let's
make
a
different
issue:
enhancement
issue:
okay,
I'm
happy
to
look
at
I,
haven't
looked
at
this
cap,
yet
this
PR
yet
but
I
did
flag
it
for
review.
So
it's
on
my
list.
F: So, at the end of the day, Antonio, I think our goal is still to have a library that folks can use to make writing proxies easier. We've just tried to really simplify it in this KEP. Like we've said, that's our ultimate goal, but the first step toward that ultimate goal is just moving bits out slowly from core, because it just doesn't make sense for us to write a bunch of KPNG code, put it in staging, and have none of the core proxies use it; it just means...
F: This thing, yep, it's in the KEP; literally, in the summary section of the KEP we refer back to the document that Tim made.
F: No, no, it's totally all right, and this is why I wanted to bring it up. It's a good thought. I think I need to expand this KEP a little bit to talk not just about step one but about what our ultimate goal is, a little bit more. You know, we do talk about it a little bit, but we're trying to focus in on what the actual operational steps are.
A: So we have some reading to do.
F: Yeah, and any comments you all make, I will update it. I have push rights to Jay's branch, so I'm kind of helping out with it.
A: Okay, great. Since Mache pushed to the next cycle, we have no more agenda for today, unless anybody has other business you want to talk about.
A: Okay, I'm adding to the agenda for the next meeting to do a KEP review for the next release. We can run through the KEP board, make sure that it's up to date and that new incoming KEPs have been added to it, and talk about what people are planning for 1.27, what moves you want to make, update the target release, target milestones rather, and get progress back on track.
A: Okay then, have excellent holidays. Hopefully everybody takes some time off, works on things that you want to work on instead of things that you have to work on, or doesn't work at all, even better. Good luck with the cold out there; it's going to be brutal for some of you. Happy New Year. We'll see you all in two weeks' time at our regularly scheduled meeting; we are not missing any.