From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220707
A: This is the SIG Network meeting for July 7th, 2022. I'm going to shake it up and start with a KEP review this time.
B: All right. So I thought it would be useful to walk through the open KEPs, which are tracked as issues. We can also look at the KEP dashboard if we want, if we have time. I did take a quick spin through it, as some PRs are coming through now, and I moved some things around in there. But as we look at these, maybe I'll just encourage people to go figure out whether their tiles in the dashboard are correct or not. So, in no particular order... I mean, I...
B: ...guess it's actually whatever order GitHub showed them to me in. We have a KEP for supporting services with the same SRV name on different protocols, which I'm not sure I'm even tracking; it's not even on the dashboard.
B: Well, it is now. So I haven't looked at this one at all, and I don't know if anybody else has. No, it's already marked stale, because nobody's looked at it.
B: This is the issue; I'm sure most of us remember it. They want to put the same SRV name on different ports, with the same port number and a different protocol, which we usually see in DNS, and we can't, because Kubernetes demands that the port name be unique. This will not fly, so I'm going to unstale it, since we didn't even really know it was there.
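The conflict is easiest to see in a Service manifest. A hypothetical sketch (service and port names are illustrative): DNS happily serves the same SRV service name over both protocols (`_dns._tcp`, `_dns._udp`), but the Kubernetes API rejects the equivalent spec because port names within a Service must be unique.

```yaml
# Illustrative only: this spec is rejected by the API server,
# because spec.ports[].name must be unique within the Service.
apiVersion: v1
kind: Service
metadata:
  name: example-dns        # placeholder name
spec:
  selector:
    app: example-dns
  ports:
  - name: dns              # first use of the name is fine
    port: 53
    protocol: UDP
  - name: dns              # duplicate name: validation error
    port: 53
    protocol: TCP
```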
B: Yeah, I'm not actually sure how to solve this, but since somebody took the effort of starting a discussion, it's worth following up on, I guess.
B: Good for him. I did actually review a bunch of PRs from him this week. A bunch of PRs; good grief, that guy's been busy. So this is an active KEP; he is in fact working on cleaning this up. Antonio should also be on vacation and not be here. This one I think moved to beta in 1.25, so I think I moved this tile in the dashboard.
C: Also, similar things happen when people do LAGs. I mean, in some switches you prefer local and don't load-balance over the entries. That's there; the policy is called prefer-local, typically in Juniper switches, and I think Cisco as well, right. So, so I...
B: ...get it. I understand it as an optimization. It feels like the sort of thing, like topology, that we should be able to do automatically and not make the users tell us about, right? Because it's not a semantic thing. It's not like, if you talk to someone non-local, it's wrong; it's just an optimization. So...
C: That's it. So you're saying that the decision is made in another layer of the abstraction stack, and they shouldn't even need to know how the traffic comes in, in reality.
D: Yeah, I mean, the problem that we have is that this was for DNS. We want things to be able to talk to the DNS on their local node, but if for some reason it is not there, because it is being upgraded, restarted, whatever, then don't just fail DNS; go to a different node, and we don't care at that point. It's more of an optimization for latency.
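For context, what exists today is `spec.internalTrafficPolicy`, whose only values are `Cluster` and `Local`; `Local` restricts traffic to same-node endpoints but fails hard when none exist, so the "prefer local, fall back to remote" behavior being asked for here has no built-in expression. A sketch (names are illustrative):

```yaml
# internalTrafficPolicy: Local keeps traffic on the client's node,
# but drops it when no local endpoint is ready; there is no
# built-in "PreferLocal" mode, which is the gap discussed above.
apiVersion: v1
kind: Service
metadata:
  name: node-local-dns     # placeholder name
spec:
  selector:
    k8s-app: node-local-dns
  internalTrafficPolicy: Local   # Cluster (default) or Local
  ports:
  - name: dns
    port: 53
    protocol: UDP
```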
B: Yeah, so like I said, I haven't read this one for this cycle, and it's not in for 1.25, so we should revisit it for 1.26 and figure out what we think the right way of expressing this is. And if it really turns out that policy is the least-bad way of expressing it, I won't block it too hard; it's just not my favorite model.
B: Next: NetworkPolicy status. I don't know if Ricardo's here today.
B: NetworkPolicy status. Ricardo and I had a conversation last week about whether to actually proceed with this or not, the question being: it feels like it's redundant with implementations, which would implement a webhook for reporting, you know, their status, or that a feature isn't implemented. So maybe, maybe we ought not actually do this.
B: So it's not in for 1.25; we should revisit for 1.26. Yeah, I'll update this. Next: CNI traffic shaping support.
B: Future. So in my mind, this comes down to the same discussion as the IPAM KEP for requesting address families per pod. It's a wormhole from the user to the implementation. Should we be adding an API for this at the pod level, or some other form of API that lets the user, or the administrator honestly, express policies about address allocation and network performance that may or may not be implemented by particular network plugins?
B: For sure. And Lars, I apologize; this one was sort of in my queue, but it was difficult, and I kept finding other things to do, and then the deadline hit.
B: And we'll have to revisit it, but I would like to revisit it, because I think it and the other IP problem are real problems.
G: I can say why it's requested by my users: they have file transfers that block the next ones, basically, and they use this to limit the bandwidth from pods. And they only use it from pods; as stated in the Cilium documentation, it's not reasonable to have it for incoming traffic, because you can just drop packets. That is already there, right? So...
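The traffic shaping being discussed is configured today through pod annotations read by the CNI `bandwidth` plugin (an alpha Kubernetes feature); the egress limit covers the file-transfer case described above. A sketch (pod name and image are placeholders):

```yaml
# Traffic shaping via the CNI bandwidth plugin, driven by
# pod annotations. Values are Kubernetes quantities (e.g. 10M).
apiVersion: v1
kind: Pod
metadata:
  name: file-transfer              # placeholder name
  annotations:
    kubernetes.io/egress-bandwidth: 10M    # limit traffic leaving the pod
    kubernetes.io/ingress-bandwidth: 10M   # optional incoming limit
spec:
  containers:
  - name: worker
    image: busybox                 # placeholder image
    command: ["sleep", "infinity"]
```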
B: The problem that I always have with this, and the discussion always comes back to it, is that this is a literal limit. Is that what people want? In some use cases, yes. In other use cases it's just "don't prevent other people from accessing it"; they want prioritized access to the bandwidth, right. And other people literally want to schedule bandwidth and reserve it. So I think we have an unclear set of requirements. I didn't get to this one this week, but I'm pretty sure, unfortunately, nobody has had any time for it.
B: And it's already in beta, so, okay. Status: host IPs for pod. This one was on me to review and I didn't get to it this cycle, pretty sure. Yeah, so we'll have to just revisit it next cycle.
B: Expanded DNS configuration. I even forget what this one is; does anybody remember?
B: Oh, is this the more-than-six-search-paths one? I think it is.
B: All right, multi-IPAM; that's on for 1.25. This one is going to take another shot at getting merged.
I: Yeah, there are a few minor changes I want to make this cycle, but because we had the delayed graduation debate over beta on-by-default, I think this probably has to wait until 1.26 to get to stable. Okay.
B: Okay, kube-proxy: KPNG.
B: Excellent. I pity whoever has to resync all the iptables logic, because Dan Winship has been...
B: Great. Okay, AdminNetworkPolicy has a PR open for alpha. Obviously, I mean, it's not really version-locked anyway, because it's a CRD. Any updates they want to mention on this?
B: I wasn't tracking it; I still need to look at it, but I told them to go ahead without me. Okay, and the NetworkPolicy... internal traffic policy. So here we get into the Andrew Kim batch, and I know Andrew's out on paternity leave.
B: Likewise. Well, he has other ones: Antonio's multiple service CIDRs. The KEP looks pretty solid; I have yet to look at a PR, but he says he's got a working prototype of it. The fun part is intersecting this with his other one, which is the service IP ranges.
B: Optionally disable node ports for type LoadBalancer. Is this GA?
B: Oh, it's just open to remove the gate; it is GA. Cool. We need a better... I'm trying to think about a better way to track these, because there's a desire to close issues once they're implemented, but we still have to come back and remove the gate. So if you look at the dashboard, there are some that are closed but still on the dashboard, because they still need to have their gates removed. This one, I'm arguing, we should leave open.
H: Cool. Closed or open, as long as it's still on the dashboard, with the milestone of the release where it needs to be removed, and the last comment says "hey, come back in 1.26 and remove this" or whatever. Yeah, I know you did that for dual-stack, and that one's sitting there, and it's like, all right, that's cool.
B: Okay: make Kubernetes aware of load balancer behavior. This is the IP-mode thing, I think. I sort of woke it up in late 2021, then let it age out, and then I was doing some housecleaning yesterday and realized I had let it fall through the cracks. So I woke it back up, at least the PRs that are open to prep for this. So maybe, maybe 1.26.
H: So it went to beta in 1.24, and if you scroll down to the bottom, I updated it. I know it's very long; I updated it today based on what you said. I don't... okay, so I didn't file anything for 1.25. I don't think there are any blockers to it, but I didn't make the enhancements cutoff, so I don't know if I need to file for 1.25 or we just wait for 1.26. It's kind of... because it went to beta in 1.24.
B: Okay, so that's it. My main point here today being: we have 22 open KEPs, and we should close out as many of these as we can before we open too many more. Many of these have been around a long time, and they just need the final bows put on, a little bit of polish and docs and last testing. I would love to close out all of the terminating-endpoints stuff, and the topology stuff has been really long in the tooth.
A: Sounds good. The only other item on the agenda was from Alex... Alexander?
J: Yeah, it was just a general poke concerning the problem that I had mentioned right before KPNG, about syncing load balancers on really big clusters, where you have many, many relationships between load balancers and nodes.
J: Tim, I had poked you personally on Slack about that, yeah. So I poked Bowei as well, because he had mentioned that he was interested in the problem. But anybody in this meeting, please have a look. This is happening on live clusters, for my company at least, hence why I'm kind of pushing for it. So...
B: I've got it open here on my screen, and I will look at it ASAP.
A: Great. Do we want to do a couple minutes of triage before 9:30?
B: Yep. So I'm already sort of halfway through. Of the ones I've looked at so far, I've removed the ones that, you know, Dan or somebody has commented on, and this is where I've left off. So I've got about eight left that I haven't looked at. So: the missing port-mutation test. Jay, are you around?
B: All right: "should the available condition controller support optional check for endpoints". I don't know what this is.
B: I was just going to guess "aggregated API server", now that I read it. Maybe that's acronym jargon from the API Machinery SIG, but I don't know.
B: That's fine; you're entitled to, since you have the context here. If you think this is not something we should do, then you can go ahead and close the issue. Or, if you think it is something we should do but it's not super important, then you can say "triage accepted".
G: Again, I would like to see what he comes up with, because...
B: Okay, I'm going to stop here. We're actually at the ones that we covered two weeks ago, so that's good.
B: Stop share... and I'm going to have to drop off. Feel free to continue without me. Thanks, everyone; thanks.
A: So we don't have any other agenda items. Unless there's something anyone wants to talk about that they didn't put on the agenda, I would propose we just give half an hour back.
A: Cool, sounds like it. I'll see you guys later. Thanks, bye.