From YouTube: Kubernetes SIG Network meeting for 20230608
A: Hello, everybody. Welcome to the Thursday, June 8th Kubernetes SIG Network meeting. As usual, we are governed by the Kubernetes code of conduct, which basically boils down to "don't be a jerk," so please don't be a jerk; not that we have a problem with that here. I'm going to start with triage. We have a fairly light agenda today, which is great.
A: If people want to add things to the agenda, it's still open; the doc was shared with the calendar invite. Let me do some screen sharing of a window. I'm going to give you an infinity mirror.
A: Seems like it's probably not going to be us, and we'll close it; I'll keep an eye on it.
A: Next, I opened these two; these are actually being worked on already. These are just adding warnings to API operations. Antoine and Joe and I have been looking at some of the issues around server-side apply and incorrect map keys, and it turns out we're really bad at this across the project; we have a whole bunch of places. And I didn't realize that server-side apply completely barfs if you have something that is allowed through validation but not allowed through the map keys. Unlike client-side apply, which just corrupts your data, server-side apply just gives up; even if you're not manipulating that field, it will give up. So we're looking at how to address it.
B: Yeah, we have one example of this: installing Ingress and doing a patch, and the patch has this problem, so it doesn't work because of the duplicates with server-side apply. But I thought we have a way in the client to indicate whether you want to do server-side or client-side? You know, if they removed that, we...
A
No,
you
do
I
mean
on
Cube
cuddle.
There's
dash
dash
server
side
argument.
A: So the problem is that in some of these cases the strategic merge patch, which is client-side, will corrupt your data: it will find duplicate keys and it will just pick one and throw the rest away. And if you do it server-side, server-side apply looks at it and says, "oh, these are not unique keys," barfs, and just returns a generic 500.
A: So we're looking at adding a better error code; possibly making it so that it only barfs if you're actually, like, manipulating those specific fields; adding better warnings when you actually do these things; and possibly converting some of the lists that aren't really maps into atomic lists, instead of pretending that they're maps.
A: Right, you can't co-own it. Okay, okay, yeah, it's unfortunate, but things like environment variables are one of these, where the map says you can only have one value for each name, but actually it is not a map, it's an ordered list, because each name can reference names that came before it, and so you can't just reorder it.
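The ordering constraint can be illustrated with a small sketch. This is hypothetical Python, not kubelet code; it only assumes the `$(VAR)` reference syntax that container env vars use for dependent variables:

```python
# Toy model of why container env vars are an ordered list, not a map:
# each entry may reference variables defined *earlier* in the list, so
# reordering (as a map-style merge might do) changes the result.
import re

def resolve_env(entries):
    """Expand $(VAR) references using only previously-defined entries."""
    resolved = {}
    for name, value in entries:
        expanded = re.sub(
            r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
            # Unknown references are left as-is, like the kubelet does.
            lambda m: resolved.get(m.group(1), m.group(0)),
            value,
        )
        resolved[name] = expanded
    return resolved

env = [("HOST", "example.com"), ("URL", "https://$(HOST)/api")]
print(resolve_env(env)["URL"])                   # https://example.com/api

# Reversed order: $(HOST) is not yet defined, so it stays unexpanded.
print(resolve_env(list(reversed(env)))["URL"])   # https://$(HOST)/api
```

Same two entries, different order, different result; which is exactly why treating the list as a map keyed on name is a lie.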
A: And if you put your defaulting in the wrong place, then you will make a big mess for Deployments and Jobs and StatefulSets and everything else; and if you put your defaulting in the right place, then it will only take effect when the Pod is actually created.
A: We should just not support patch; we should just... yeah, it's a mess. All right, next issue: support a list of appProtocols, opened last week. I know this has been talked about. Rob, do you have any status?
C: Yeah, we've been talking. This is very relevant to Gateway API because, honestly, the people that are reading and consuming appProtocol are largely the same implementations that are implementing Gateway API. appProtocol has always been more of a hint than a requirement, and the problem is, if you have implementation-specific hints, there's not a way to describe that. You can pick one and say "that is my own unique interpretation of this, and this is what it means," but then you can't have interoperability across implementations; that is only really useful if you also have something like a list, so you can have some interop across multiple implementations on top of the same Service. I don't love it, but I also don't have a good alternative. I chatted with Leo a bit yesterday about this, and it seems like this still feels inevitable, but maybe we'll just let it soak and get some more ideas before trying to push this forward in this specific cycle.
C: Yeah, I would love to avoid this. We have discussed, you know, potentially maybe this just becomes something we describe in Gateway API, but that's even worse, because then you have to define the intersection: like, if somebody sets appProtocol and then they set this other, similar thing on Gateway API, how do you interop between those? I don't know. Discussion and ideas are very welcome on this one, yeah.
D: So the issue is that if you create a Service with a selector and then remove the selector, you are left with orphaned Endpoints and EndpointSlice objects, which is intentional. But if you then modify the Endpoints object, you get a second EndpointSlice for the same Service, because now you have one EndpointSlice that was orphaned by the EndpointSlice controller and one that was created by the EndpointSlice mirroring controller. This seems like an edge case that we just didn't think about and probably didn't intend.
D: We just need to figure out the exact fix for it, I think.
D: My suggestion was that when we orphan the Endpoints and the EndpointSlice, we make the EndpointSlice be owned by the mirroring controller, so that then, if you modify the Endpoints object, it just modifies the existing EndpointSlice rather than creating a new one.
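A rough model of the edge case and the suggested fix. This is a toy Python sketch, not the real controllers; the `managed-by` label and controller names mirror the real ones, but the reconcile logic here is deliberately oversimplified:

```python
# Toy model of the duplicate-slice edge case: two controllers each
# reconcile EndpointSlices, but each only considers slices labeled as
# managed by itself.

SLICE_MANAGER_LABEL = "endpointslice.kubernetes.io/managed-by"

def reconcile(slices, service, manager):
    """Ensure `manager` has at least one slice for `service`."""
    mine = [s for s in slices
            if s["service"] == service and s[SLICE_MANAGER_LABEL] == manager]
    if not mine:
        slices.append({"service": service, SLICE_MANAGER_LABEL: manager})
    return slices

slices = []
# The EndpointSlice controller created a slice; then the selector was
# removed and the slice was orphaned, label still naming the old owner.
reconcile(slices, "svc-a", "endpointslice-controller.k8s.io")

# A user edits the Endpoints object; the mirroring controller sees no
# slice of its own and creates a second one for the same Service.
reconcile(slices, "svc-a", "endpointslice-mirroring-controller.k8s.io")
print(len(slices))   # 2 slices for one Service

# Suggested fix: on orphaning, relabel the slice as owned by the
# mirroring controller, so its reconcile adopts it instead.
slices = [{"service": "svc-a",
           SLICE_MANAGER_LABEL: "endpointslice-mirroring-controller.k8s.io"}]
reconcile(slices, "svc-a", "endpointslice-mirroring-controller.k8s.io")
print(len(slices))   # still 1
```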
C: As long as there's a fairly minimal delay between the EndpointSlice getting deleted and the EndpointSlice mirroring controller creating a new one, that feels fine to me.
C: Well, right now the way the controllers work is that they are informers that only watch the EndpointSlices they own, so they only know their own little world, and changing that could actually be more complicated. At least that's how I remember the controller working; we should verify.
A: Yeah, unfortunately, the only answer right now is not to use server-side apply with pod updates.
A: I mean, fundamentally it's an SSA problem, so I will tag them too. Okay.
B: Let me ask a different thing: there is this problem that they added this logic, and when this logic fails because of this problem, that collector starts to fail. The root cause is the server-side apply, but what I'm asking is: are we going to leave this controller in this state while we come up with a different solution, or do we want to fix this controller quickly?
A: That's a good question. I don't know the answer to that, but I'm on these threads, so I will.
A: Somebody has assigned themselves to it; that was two weeks ago.
C: There's already a fix PR; I'm overdue to review it. I think I did one very brief review, but I need to do a proper review; I think this is on me. And yeah, definitely accepted.
A: That's what we said before. I'll ping Casey; is Casey here?
A
I'll
wait
for
time.
Oh,
let's
wrap
up.
Let's
make
this
the
last
one
then
is
a
month
ago
can
actually
survive
due
to
invalid
contract.
Do
we
know
this
one.
A: Let's take this one offline and do the rest of the agenda, and then, if we have time, we can come back to it, because we do have at least a couple of other things. Antonio, I jumped in and put my name ahead of you, but I don't know how long your topic is; you just wrote a lot. Do you want to go first? Yes.
B: It's quick, okay. So I've been attending the CNI meetings, because one of the recurring problems, mainly with CNI plugins that install as DaemonSets, is that they don't handle the node lifecycle well, and you have to do a lot of trickery with taints, or everything is super convoluted, to get this right.
B: So the way that network readiness works for Kubernetes today is that the container runtime implementation basically just checks that there is a CNI config file and reports that it is ready. One possible solution is to implement a verb in the CNI spec, so the container runtimes can have better information about the CNI status and use this to report the node network readiness to the kubelet.
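A rough sketch, in Python for illustration only, of the two readiness signals being discussed; the paths, the plugin name, and both helper functions are hypothetical, and this is not actual container-runtime code:

```python
# Illustration of "network ready" today vs. with a STATUS verb.
import os
import subprocess

def network_ready_today(conf_dir="/etc/cni/net.d"):
    """Today: the runtime reports network-ready as soon as a CNI config
    file exists on disk, regardless of whether the plugin is healthy."""
    if not os.path.isdir(conf_dir):
        return False
    return any(f.endswith((".conf", ".conflist")) for f in os.listdir(conf_dir))

def network_ready_with_status(plugin="/opt/cni/bin/example-plugin"):
    """With a STATUS verb in the CNI spec: the runtime would exec the
    plugin and ask it directly whether it could handle ADD requests."""
    result = subprocess.run(
        [plugin],
        env={**os.environ, "CNI_COMMAND": "STATUS"},
        capture_output=True,
    )
    return result.returncode == 0
```

The point of the sketch is the gap between the two: the first check passes the moment a file is dropped on disk, long before (or after) the plugin itself can actually wire up pods.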
D: But overall, my feeling, as discussed in the CNI issue that you linked to, is that we need to stop pretending... we need to stop trying to use CNI as the primary interface to Kubernetes network plugins, because they're much bigger than CNI. So we should try, and, I mean, this is a bigger project.
A
I
I
have
thought
down
this
path
myself
for
a
while
and,
like
you
said,
it's
a
big
project
and
I
haven't
had
the
activation
energy
but
I
I
think
I,
agree.
People
I
can't
I'm
getting
tired
of
correcting
people
when
they
talk
about
Network
policy
as
part
of
their
cni
and
I.
Just
you
know
what
the
vernacular
wins.
Sometimes
it's
it's
the
kleenex
of
network
plugins,
but
I
agree
that
these
things
are
not
uncoordinated
components.
They
need
to
be
together
and
there
probably
should
be
one
holistic
API.
A: It should probably include host ports for pods. It should include IPAM, probably node IPAM and pod IPAM, maybe; definitely Services and network policy, and as network policy APIs evolve, it probably should cover all those too. So you're right, it becomes a real, proper network driver.
A: Okay, well, look, I'm broadly supportive of the initiative. I'd love to see somebody start putting together thoughts on what would all be included in such an API, starting very vague and refining it; I'd love to discuss and look at that. I'm not going to have time to drive it myself in the near-term future, but I do think it's important.
A: Okay, Antonio, is that all of your topic?
A: The KEP PRR freeze is nominally today, and we are short on PRR reviewers more than ever, so for anybody who's looking for a way to contribute, shadowing PRR is a great way to have some outsized impact. But I wanted to run through the KEPs that we've got open and make sure that I'm doing the right thing with them. So let me share my screen one more time... share this one.
A
Everybody
see
this.
Yes,
all
right.
The
columns
sort
of
reflect
the
code
base
of
the
features,
not
the
code
base
of
the
Caps,
so
a
few
of
them
have
like
already
moved
towards
GA,
but,
looking
at
anything,
that's
tagged,
128
I
think
we're
still
going
where
we
did
GA
already.
We
we've
merged
that
for
pte
iptables,
restore
minimization
was
I,
think
it
was,
it
must
have
been
checked
in
if
I
already
moved
it
to
across
columns
and
DNS
expansion.
That's
fine
I
talked
with
Rob
and
Antonio
this
morning.
A
We're
going
to
punt
topology
aware
one
more
time:
node
Port,
static
ranges,
I
think
wants
to
go.
Ga
we're
just
waiting
for
a
PR
to
do
that.
Iptables
ownership
Dan!
Was
there
a
PR
for
this
yet
or
not?
Yet
that's.
A
Oh
it's
beta
for
120s.
We
need
a
I'm
working
on
a
better
dashboard.
It's
not
ready
yet
okay,
so
it's
going
to
Beta
in
28,
got
it.
A
Oh
okay,
great
yes,
my!
Hopefully
my
new
dashboard
will
be
much
clearer
and
we'll
say
like
what
the
next
step
is
and
when
that's,
what
I'm,
trying
to
figure
out
how
to
represent
the
service
controller,
stuff
I
think
also
wants
to
go.
Ga
we
just
need
a
PR
oud.
A
Sorry
go
ahead,
I'll
get
on
the
pr
sorry,
I've
kind
of
lost
track
of
these.
It's
fine!
We
we've
got
a
month
plus
to
get
PR's
in,
but
it
is
planning
to
go
ga
right,
yeah.
A
That
that
was
my-
that
was
my
read
too.
So
no
no
rush
we've
got
over
a
month
to
do
it.
We
just
need
the
pr
okay,
multiple
service
citers
is
paused.
While
we
figure
out
the
overlapping
issues,
Antonio's
got
some
docs
there
that
we
need
to
revisit
so
not
going
to
make
the
cut
for
this
week.
Admin
Network
policy
is
async:
dual
stack,
node
IP,
stuff
Dan.
A: Okay, those are all the alphas. Now, things that might be going into alpha: I didn't have time, and I don't think they've done anything there.
A
Field
status,
host,
IPS
I
think
is
going
to
make
alpha
we're
still
reviewing
the
PR,
but
it's
at
PR
stage.
So,
okay
and
this
one
I
tagged
but
I,
don't
even
remember
what
it
was.
The
details
now
or
the
drain
terminating.
A: The health check, yeah, exactly, exactly.
A: Okay, great. As long as they like it, it doesn't need PRR or anything, so we're good for this week's deadline. These ones are all in some form of stasis, I think. I don't know anything about the host-network... host-network support for Windows; does anybody here know what's going on with this?
A: All right, I'll have to follow up on that. The EndpointSlice into-staging stuff: is that moving at all?
A: Well, I mean, honestly, if it's not, like, substantially done with respect to... well, this won't have any PRR impact, will it? No.
A
See
well
so
I'll
ping
it,
but
it's
got
on
the
order
of
what
seven
days
six
days
to
get
through
the
cap
approval.
So
that's
it
Dan
I,
don't
presume.
Nf
tables
is
moving
forward.
D
I'm
making
progress
on
the
code,
but
it's
not
moving
forward.
Actually
yeah
the
cap
hasn't
even
merged
yet
I
guess
people.
Well.
When
we're
past
deadlines,
people
should
read
through
it
again
and
see
what
needs
to
happen
before
we
can
merge
the
the
initial
cap.
D: Well, okay, so actually the incremental-changes stuff in the service change tracker and endpoint change tracker is sort of broken right now, and so we do need to fix that before nftables can be fully working, and the refactoring is sort of leading towards that. But no, in general the refactoring is just to make everything nicer; it's not actually, like, a hard requirement for nftables support.
A
Okay,
this
is
the
kaping
issue
which
we
need
to
resolve
completely
at
some
point:
it's
relatively
soon,
but
it
it's
not
moving
forward.
I
think
not
I
mean
not
in
this
cycle.
A
These
are.
There
was
some
discussion
of
command
line
and
config
config
file.
B
I've
been
just
in
the,
there
is
an
issue
with
the
moving
the
conflict
to
beta
one
and
all
this
discussion
about
putting
the
platforms
I
think
that's.
This
could
be
yeah.
A: So it's not moving anywhere right now; that's fine, we are all very constrained. The good news is we are getting a lot of KEPs to conclusion, which is awesome. It will make room on our agenda for the big ones, like network policy, like rebooting the network plugin system.
A
So
that's
good
frankly,
my
my
quarter.
This
quarter
has
been
ridiculous
and
I
haven't
had
enough
time
to
review
everything
that
I
wanted
to
review
so
much
less
my
own
caps,
any
other
questions
on
caps
did
I
miss
any
maybe
no
of
a
cap.
That's
not
on
this
board.
Yet.
D: I mean, I don't know how we're tracking Gateway stuff in terms of KEPs, but, like, we're not; there's no Gateway on here. Okay, yeah. There's been talk about adding a network policy equivalent of GEPs, except that nobody can agree on what acronym to use, so that's holding it up.
C: We have so many things in flight, it would get out of track pretty quickly. We did have a KEP for Gateway API, and that KEP tracked basically everything until we graduated from the experimental API group to the standard k8s API group; so the KEP was just "this API should graduate to the standard k8s API group," and at that point everything just was GEPs. Okay.
A
Right
now
this
this
dashboard-
this
is
the
old
GitHub
project
format
and
it's
not
very
extensible.
The
new
project
format
is
much
more
extensible,
so
I
mentioned
I'm
I'm
working
on
a
new
dashboard
for
myself
that
that
I'm
going
to
bring
back
into
this
and
hopefully
convert
this
into
that
and
then
it'll
be
easier
to
manage,
and
then
we
can
consider
by
then
it'll
probably
be
too
late
for
Gateway.
But
we
can
consider
adding
more
caps
for
more
fine-grained
stuff,
because
I
think
life
cycle
will
be
easier
to
manage.
C: Yeah, just for anyone curious, we have our own GEP project board in Gateway, so it's similar to this but using the new style, and yeah, we've got a lot of things too. So, yeah, completely agree that the new project style is very helpful.
A
Right,
I'm
gonna,
stop
sharing.
If
we're
done
with
this
and
I'll
follow
up
on
a
couple,
I
think:
that's
it
for
our
agenda.
Yes,
anybody
else
want
to
bring
up
topics.
A
Okay,
I
guess
you
can
all
have
15
minutes
of
your
life
back.
Thank
you
all
for
coming
today.
If
you
have
issues-
and
you
want
to
discuss,
you
can
find
us
all
on
slack
or
one
of
a
multitude
of
other
places.
A
If
you
have
PRS
that
need
to
merge
around
caps
in
the
next
seven
days,
please
ping
me
directly.
Don't
be
shy
and
I'm
happy
to
look
and
try
to
approve
if
we
can
get
those
done
thanks.
Everyone
I'm
gonna,
stop
the
recording.
Now.