From YouTube: Kubernetes SIG Network meeting for 20230511
A: This meeting is being recorded. Hello, everyone, and welcome to the May 8th... no, sorry, the May 11th edition of the SIG Network meeting. Just a reminder that this meeting is governed by the Kubernetes code of conduct, which boils down to: please be nice to one another. So please be nice to one another, and please use the hands-up feature in Zoom when you need to.
A: When you want to ask questions and interject. We don't have a ton of agenda today; we have a couple of items in triage, so we might as well just get started. But it's an open agenda, so if you have something else you're thinking of, please do drop it on there. We probably will have time.
A: All right, so just getting started with triage as usual. These are the triage issues that don't have anybody assigned to them yet, so I figured we'd go through them in chronological order from oldest to newest, starting with "traffic is getting sent to unready pod": in Kubernetes, we have added a readiness probe, and the container is in an unready state until it is fully loaded, but...
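For context, the behavior the reporter describes can be sketched as a readiness endpoint that stays unready until startup work finishes. This is a minimal illustrative sketch, not code from the issue; the `app_loaded` flag and the `readiness_status` helper are hypothetical names:

```python
import threading

# Flag the application sets once its startup work (loading data,
# warming caches, etc.) has finished.
app_loaded = threading.Event()

def readiness_status() -> int:
    """HTTP status a readiness probe handler would return.

    While this returns 503, the kubelet marks the pod NotReady and the
    endpoint should be excluded from Service load balancing; traffic
    reaching the pod anyway is the behavior the issue reports.
    """
    return 200 if app_loaded.is_set() else 503
```

A readiness probe in the pod spec would then poll whatever HTTP handler wraps this function.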
C: Sometimes people report an issue and then forget about it; others engage and start to give the information. So I wait and follow up with these people later. You know, if they engage and give you the information, you can see that it's going to be a useful issue that you can make progress on. If you try to come up with things just from the information that they give, you may end up frustrated, because after three weeks they don't say anything, and yeah.
D: Usually we give it four weeks, two rounds of pinging from this meeting, and then, if they still don't respond after that, there's not much we can do.
A: Good one. Did you have a test that you wanted to highlight here?
C: No, it's not a test. This is something that I added to the agenda because the tests are not doing this. This was after we enabled the GA of terminating endpoints: all these tests started to run, and then all of them started to fail. There's this contributor who's working hard on that; I said you can assign it to him. He reproduces it, he finds things, and I think we can assign this to him.
C: But the problem, I mean, is that something happens with IPVS and IPv6 with terminating endpoints, okay.
A: Sounds good! If you know a lot about IPVS and want to jump in there, that's 117863, and it's on the board there. All right: make net.ipv4.tcp_keepalive_time a safe sysctl.
D: I was just looking at this one. You can assign it to me, I think. If it's really a safe, namespaced sysctl, then we can go ahead and add it to the safe list. Whether that actually solves their problem or not is a different question, but if it's safe, we should probably put it in the list of things that people can tweak, because we know that people do tweak it.
D: I mean, yeah, agreed: if they're all safe, we should do them all. I don't know off the top of my head if they're safe or not.
C: Yeah, but the thing is the defaults: people expect keepalives to fire every minute or so, but once the connection is established they don't run until the timeout. So by default it may, you know, send the first keepalive after 15 minutes or something like that. I don't know.
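The point about defaults is why applications often tune keepalive per socket instead of relying on the sysctl. A minimal sketch (the function name is made up; the Linux kernel default for net.ipv4.tcp_keepalive_time is 7200 seconds, i.e. two hours, so idle connections through a NAT or conntrack hop can be dropped long before the first probe fires):

```python
import socket

def make_keepalive_socket(idle=60, interval=10, count=3):
    """Create a TCP socket with keepalive tuned below kernel defaults.

    idle: seconds of inactivity before the first probe.
    interval: seconds between probes.
    count: failed probes before the connection is declared dead.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The per-socket tuning knobs are Linux-specific; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return s
```

Making the sysctl "safe" in Kubernetes would let a pod adjust the namespace-wide default instead of every application doing this itself.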
D: 117909, which is a Dan Winship bug titled "generic versus platform/backend-specific options in kube-proxy". He's talking about the kube-proxy config, which, to be frank, is a topic that we've really let rot. We haven't really paid attention to it at all, and we should have. I mean, it goes back all the way to Lucas bothering me about it at KubeCon in Denmark, or even before that, to get on this one. So I don't know... Dan, are you here?
D: I don't know that we have a clean answer. You know, looking at it now, if I could go back and do it all from scratch, I would probably duplicate some of the config options and say: here's the ipvs-mode block and here's the iptables-mode block, and we just don't share anything between them. Even if they have duplicated fields, that's okay, because they're logically independent, I think. But I don't really know what that would translate into in code, because flags and stuff are weird and...
D: Internet? That's some advanced stuff right there.
A: Oh no, I didn't miss one. Okay, if...
A: "Connection reset by peer due to invalid conntrack packets", 10 hours ago. On purpose, kind-of out-of-window packets arrive on a k8s node, conntrack marks them as invalid, and kube-proxy will ignore them without rewriting the DNAT. However, there's no corresponding session on the host, and the host sends a reset packet, causing the session to be interrupted.
C: There is a problem with this issue: there is another issue that is just the opposite. This was fixed because a rule was already added to drop these on FORWARD, right? And there is another issue that is asking to remove that rule, and this one is asking to add the rule to INPUT. So we will have a complete conflict of interests.
C: The rule is legit, it's legit; that issue is well documented, and it has a nice blog post and everything. This one I don't understand, whether it's legit or it's just something unrelated to Services. And we have this third one that is saying we should remove the rule, because in a scenario with asymmetric routing it drops traffic.
A: Just a little bit more understanding, because if we're going to make a decision... it sounds like, from what you said, Antonio, we're in a situation where we've kind of made decisions that run afoul of what he kind of wants. So if we're going to make new decisions or anything, we're going to need a lot of context.
A: So it really just needs somebody to move it forward, to shepherd it. It sounds like it is probably a real thing.
A: I just want to make sure: if you're feeling overloaded, I might be able to take this one, you just let me know, or I can work with you on it. Either way, start there, and if you want some help, pull me in. Okay, that was everything that didn't have an owner before.
A: The one that... you know, this one that you sent me, yeah.
A: "The node status nodeInfo kubeProxyVersion is a lie." Every node has this field, which the API docs describe, but the field is set by the kubelet, which does not actually know the kube-proxy version, with a TODO there, yeah.
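For reference, the field under discussion lives at status.nodeInfo.kubeProxyVersion on the Node object. A minimal sketch of where it sits (the Node JSON below is a hypothetical, trimmed example, not real cluster output):

```python
import json

# Hypothetical, trimmed Node object as it might come from the API server.
node_json = """
{
  "kind": "Node",
  "status": {
    "nodeInfo": {
      "kubeletVersion": "v1.27.1",
      "kubeProxyVersion": "v1.27.1"
    }
  }
}
"""

node = json.loads(node_json)
info = node["status"]["nodeInfo"]
# The kubelet fills in both values from its own version, which is why
# they always match even when kube-proxy is not running at all.
print(info["kubeProxyVersion"])
```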
B: I've run into this before too; it's kind of hilarious. My interpretation of this is that there's an implicit assumption throughout Kubernetes that the kube-proxy version has to be a direct match to the kubelet version, and anything outside of that is just untested and unsupported. Whether or not we should have a better answer than that, I don't know, but that's been my interpretation of the current state of the world.
F: Oh, go ahead. Sorry. Okay, I was gonna say: I think we use this in test cases to do certain checks and to run certain tests based on whether or not the behavior is kube-proxy-specific behavior. Yeah, I don't know. Basically, we fake this value out for OpenShift, for example, because we do all the things that kube-proxy does, but we don't use kube-proxy, and so those tests don't run. So, I mean...
D: No good deed goes unpunished. I don't even remember... I mean, I'm pretty sure we added this because we didn't have any other way to just dump state: okay, here's a snapshot of what's going on on your machine. And now we use it for disabling tests.
D: If there's no contract left, should there be any tests that care about the two of these versions being in sync? I think, ideally, I would like to just say this field is meaningless. It has no value to anybody, except to a human who might consume it and say: oh, that's interesting, there are different versions. And...
F: It's not guaranteed to be populated unless, I think, you're running... there are some reasons why it's not populated. At least, whether we guarantee it or not, it is not necessarily populated in practice, so that's effectively no guarantee. So I...
D: We should, we should... I think we should call it deprecated, seek out all the places we can find that depend on it, tell them that they are bad and they should feel bad, and then stop. I don't know, maybe we don't stop people from setting it, but we just, say, document it more clearly: it has no meaning, it's not required to be set; if you don't run kube-proxy, feel free to leave this blank or even write gobbledygook in here.
A: I mean, I can just say that. So I guess Hirozawa would then probably try to take that and just go make the PR, right? But he's...
D: I don't have it open, which...
A: Okay, all right. So should we just... should I send this as a comment and basically just encourage them to get that process started?
A: And please move forward. Please go ahead.
C: ...is not doing anything. And well, we have the conformance tests, and I don't remember the last time that we promoted something to conformance. So the risk with conformance is that, if you don't implement it, it is going to fail and users are going to protest. We have the bad precedent of the affinity thing. So that's why I wanted to bring this up. These are the candidates that I found, so...
C: I don't think we should promote all of them at once; just keep, I don't know, one week between them or something like that. But I think that the people who implement these things should run them first and give feedback before this is promoted, because you know how painful it is to remove something.
D: I do think the divergence here is a real pain point. Another one that isn't covered here, but that we've talked about before, is affinity: what session affinity means, whether it's two-tuple or five-tuple. And for some reason it's come up three or four times in the last couple of weeks with different customers. So I wonder if that's another candidate for specification here.
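The two-tuple versus five-tuple distinction can be illustrated with a toy flow-hashing sketch. This is purely illustrative: real dataplanes use their own flow hashing, and the function names and backend addresses here are made up:

```python
import hashlib

def _pick(key: str, backends: list) -> str:
    # Stable hash to backend index; a stand-in for however a proxy
    # actually maps flows to endpoints.
    digest = hashlib.sha256(key.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]

def affinity_2tuple(src_ip: str, dst_ip: str, backends: list) -> str:
    """ClientIP-style affinity: only the addresses matter, so every
    connection from one client lands on the same backend."""
    return _pick(f"{src_ip}|{dst_ip}", backends)

def affinity_5tuple(src_ip: str, src_port: int, dst_ip: str,
                    dst_port: int, proto: str, backends: list) -> str:
    """Per-connection hashing: a new source port can select a new
    backend, so 'affinity' only holds within a single connection."""
    return _pick(f"{src_ip}|{src_port}|{dst_ip}|{dst_port}|{proto}", backends)
```

Under the two-tuple reading, reconnecting from a new ephemeral port keeps the same backend; under the five-tuple reading it may not, which is exactly the divergence users notice.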
D: The question is: should we try to bring that back? Because it's clearly affecting some set of customers and users. I don't know if anybody else hears about that one, but for some reason it has popped up on my radar at least a few times.
C: But I don't know. I mean, I think that we should promote the tests, and my intent is to make them as agnostic and as little flaky as possible, and to try to avoid anything like what we had with affinity, where implementation details made it contentious. That's why I think that the people who are going to be running them, if they can run them and give feedback, it's going to be easier to modify them.
A: So, yeah, I mean, I agree with the original problem here. The whole issue of the divergence of the implementations was something that, at one point, I felt like the kpng project was starting to poke that bear, and I was really hoping we'd kind of get to a place where we wouldn't be quite here. But I do think the next step would be conformance. Sounds like it's gonna be... I mean, make a list, yeah.
D: Those, and I'm sure that there are other aspects of implementation decisions that are not covered by these four items; like, whether you do an ICMP reset or drop packets is a choice. There were some more that I think Cilium was doing differently.
D: You know, protocol being part of the port suggests to me that it matters, and that any implementation which ignores protocol is wrong. So...
C: There is one thing: when you promote a feature to GA, part of the process is to add conformance tests for the feature, so I don't want to treat this as if we are now going to legislate. The thing is, proxy terminating endpoints graduated in 1.28; at that time you have to run the tests and send a PR to promote them, after they soak for one week, I think, without flakes.
D: We should also look back at the historical things that we have screwed up on, to figure out: okay, is it okay that affinity is defined by whatever the implementation is, or did we actually mean it to be standard and we just weren't smart enough to define it at the time? Maybe we can't recover from that, but at least we can derive principles for future work.
D: Yes, yes, exactly. Like, we should define whether we thought that that was not necessarily covered by conformance, but was intentionally done a certain way; because we didn't put a test on it, we don't get to assert that, but we could say: in the future we should have done that one, so let's not make that mistake again.
C: Yeah, that's why, for the features, I don't want to invent anything new; it's just to follow the normal procedure: you have a feature that's GA, and you promote the tests. And well, for the other things, I know that Dan Winship wrote some good documents for kpng about this topic, and I would like to summarize them and send them to the mailing list, so we start from there.
C: The problem is, this is to keep the ball rolling. We all know these things and we talk about them, but the problem is that after we leave this meeting, everybody has their own thing. So the problem here, as I see it, is: we created APIs, and the APIs were based on kube-proxy, and some things lean one way or the other, and we need to untangle that so that the people who want to create a proxy know what they have to do.
D: I'm also very worried about that. Like, I want to be intentional about where we allow or encourage implementations to be creative, and where we say: actually, it is really important that people have reasonable expectations that it is the same, yeah.
A: Absolutely. Okay, I'll be waiting for your email, for sure. Thank you for bringing that up, Antonio. If anybody else has any thoughts on that, you know, keep the conversation going, maybe in SIG Network. Moving on to the next thing... oh, it's my thing.
A: This is just a little follow-up to a conversation we had in a previous SIG Network sync. We had previously talked specifically about bpfd; it may have been the last thing, I think it was the last thing: Andrew went and demoed it for us and so forth, and thank you for that.
A: We have an eBPF channel. It's not really 100% about networking, but it's definitely coming from the networking perspective right now, based on some of the experience we've had in Gateway API and the experiences of, like, the bpfd people trying to build in the Kubernetes community, or to build and deploy eBPF programs in the Kubernetes ecosystem.
A: So we had a meeting that ended up having like 25 people on it yesterday, including some people who are fairly big in the eBPF community, like Bill Mulligan and Dan Finneran and Dave Tucker, and just a bunch of people kind of showed up. So there's a ton of interest that we've suddenly realized is there in trying to make that ecosystem better. If you are interested, just check out the ebpf channel in Kubernetes Slack. But I just wanted to say that as a follow-up.
B: Yeah, just real quick: we released RC2 yesterday for Gateway API. That's our second release candidate and what we expect to be our final release candidate. If we don't hear anything, we're targeting Monday for the final release of this API. It would be great to get, like, a formal sign-off on that, but otherwise we're just gonna keep on moving, yeah. Some great new features coming in: path redirects and rewrites and response header modifiers are graduating to standard, and then we've got a few new things that are coming into experimental.
B: So yeah, really excited. Definitely check it out and let us know, yeah.
A: All right, and then, Andrew, you put one on here, and yeah, I didn't see it, sorry. So go ahead and talk about network policy, please.
E: You didn't see it because I just put it on; we weren't quite ready there, but this agenda was short and I saw Gateway API presenting, like, a super stellar update, so I was like: let's throw one on for the Network Policy API. Yeah, so we are really close to finally actually cutting, like, our quote-unquote official release for v1alpha1 of BaselineAdminNetworkPolicy and AdminNetworkPolicy.
E: We have pretty good consensus in the upstream. The two implementations so far are, like, pretty much there, so they're kind of in the final stages. So that's really exciting; we're happy about that, and doing so will allow us to kind of move forward into analyzing...
E: ...what's next for AdminNetworkPolicy, and, based on AdminNetworkPolicy, kind of some shifts we have made. We're changing our website a little bit, which is all in here, and you can kind of check it out. We're moving to kind of follow what Gateway API does, more so in how we refer to our APIs. Before, we were referring to...
E: ...you know, the AdminNetworkPolicy API as a standard set of things, right, of objects and resources. Now we're kind of moving everything to fall under the Network Policy API, which is going to be known as, like, the next generation of policy-related APIs for Kubernetes, and under that will be things like AdminNetworkPolicy, BaselineAdminNetworkPolicy, and then maybe, in the future, DeveloperNetworkPolicy, which is essentially going to be known as, like, a NetPol v2.
E: ...you know, the subgroup ecosystem makes a ton of sense. And then the last thing we're doing is we're going to try to standardize our workflow for adding new use cases in, like, v1alpha2 or even v1beta1, kind of following just what Gateway API does with GEPs, right, except we're trying to figure out what to call it; we don't know if it's going to be "pepper" or whatever, Dan.
E: Yeah, so we're trying to standardize that. Basically, we're just trying to keep the lifeblood pumping in here. You know, it's been a really slow burn; we've been blocked on folks downstream kind of getting their implementations done. But Surya, who is a very active member in our community, was able to go to KubeCon EU, and we actually now have issues... we have almost-done implementations for Antrea and OVN-Kubernetes, and we also have folks and users asking Cilium and Calico for support for this API.
E: So that's really exciting, so yeah, we're just pushing on it and really hoping to get more outreach. At KubeCon North America we're going to be working to submit a maintainer-track talk and also a standard CFP talk. So I think at this point we need help from everyone here to get the message out that we're here and we're ready to keep rolling on it. And a huge shout-out to Gateway API, because y'all have paved the road; this is awesome.
A: Yep, thanks for bringing it up. Candidly, I'm already kind of involved in this; I go to all the meetings and stuff, so I'm very glad to see things are really picking up a lot of steam. Glad we're heading that way. I think Chicago will be very exciting for the Network Policy API. All right, we don't have anything else on the agenda, but we have 15 minutes left, so I'll take a drink and give it a couple of seconds in case anybody has any last-minute things. Otherwise we can adjourn and get some time back.