From YouTube: Kubernetes SIG Network meeting 20210722
A: And we're recording. Welcome, everybody. This is the Kubernetes SIG Network meeting for July 22nd, 2021. Dan has volunteered to run triage for us today.
B: Yep. All right, shall we note that and then close it?
B: Okay. All right, in that case.
B: All right. And so this one we still have needs-triage on, and it looks like there's been a lot of discussion, but that was after last week's meeting. It's already assigned to Rob.
B: ...path. And it looks like we have no response yet. Does anybody from the Gateway side want to take a look at this one?
A: Cool, thanks, Dan. Let's take it over to the other Dan, Dan Winship, for the next item.
C: Yeah, so a PR went in a few weeks ago to make us use EndpointSlice processing all the time, because that's GA now. And there was some discussion about, oh, well, should we delete the userspace proxy, since it doesn't support that yet? Eventually the decision was made: no, let's not do that. But they did remove all of the Endpoints-controller support from all of the tests, which means that now we have no e2e CI of the userspace proxy, and no CI, e2e, or unit testing of the endpoints controller.
C: Except that also, then, we have to figure out if we're going to kill off the Windows userspace proxy as well, and we can never know... Antonio points out in chat that there was no userspace e2e test before. Yes, that's correct, but before there was at least testing of the endpoints controller stuff, and now there's not.
C: So anyway, we had discussed this like six months ago, and everyone was like, oh no, let's not delete the userspace proxy. But still, nobody actually uses it.
F: Everyone else uses the kernel proxy. We, on the Antrea side, currently use the Windows userspace proxy. I'm happy to manually test it and maintain test results somehow, or, if you really want, I can add CI for it. But I don't think it's a big deal for us to eventually get rid of it; we just currently use it, right?
E: I think there was a bit of confusion here on what happened. It sounds like... and Swetha, I think, is a bit more familiar with this than I am, but my understanding here is that... so basically there's this package, proxy, that is inside kube-proxy, and it includes an endpoints adapter and an endpoint slice adapter. That's the code that changed, and I think Swetha was the one who actually made the PR to remove any use of the Endpoints API in that shared code.
E: That shared code is very similar to code that exists inside the userspace proxiers, whether that's the Windows userspace or the Linux userspace proxier.
E: Now, the... I guess the change of sorts is: we had some form of test coverage for code that was somewhat similar to what the userspace proxiers are using, inside package proxy, and that's gone. But it wasn't actually the same code. You might have more context here, so I'll hand it off to you, if there's anything else.
G: Yeah, I think you covered everything. Sorry, my mic wasn't working before, so thank you for jumping in. Essentially, the only change that we made, in terms of the tests and objects in the proxy, is: there is the endpoint change tracker, and that was the only thing that was doing endpoints or endpoint slices; like, there was a boolean there, and we removed that. And that is only used by IPVS, iptables, and the winkernel proxier.
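[To illustrate the data structure being described: a hypothetical Go sketch of a change tracker with a boolean switching between Endpoints and EndpointSlice input. This is not the actual kube-proxy EndpointChangeTracker; the type and field names are invented for the example.]

```go
// Hypothetical sketch only, not the real kube-proxy code. It shows the kind
// of boolean toggle described above: one tracker that could be fed either
// Endpoints or EndpointSlice objects, consumed by the iptables, IPVS, and
// winkernel proxiers on their next sync.
package proxysketch

import (
	v1 "k8s.io/api/core/v1"
	discoveryv1 "k8s.io/api/discovery/v1"
)

type endpointChangeTracker struct {
	// useEndpointSlices is the sort of flag that was removed: with
	// EndpointSlice handling GA, the Endpoints path has no callers left.
	useEndpointSlices bool
}

// endpointsUpdate was the Endpoints-based path.
func (t *endpointChangeTracker) endpointsUpdate(old, cur *v1.Endpoints) {
	if t.useEndpointSlices {
		return // slices are authoritative; ignore Endpoints objects
	}
	// ...record the change for the next proxier sync...
}

// endpointSliceUpdate is the path all in-tree proxiers now use.
func (t *endpointChangeTracker) endpointSliceUpdate(slice *discoveryv1.EndpointSlice, removed bool) {
	// ...record the change for the next proxier sync...
}
```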
G: I might be wrong, but none of the userspace stuff is using it. So, as Rob said, testing-wise we're exactly where we were before that change went in, for the userspace stuff, which might just, you know, add to the argument: if it's not being maintained, maybe it should be removed. But I think right before we started talking, Jay mentioned that they are still using it and they're willing to maintain it, so maybe adding back coverage might be the right solution here.
E: I mean, I think we do have tests; there are obviously unit tests for the endpoints controller. But you're... yeah, you're right that all of kube-proxy... we don't have any tests right now that... yeah, you're right, you're right, I see where you're going now. Because all our e2e tests just test blindly that kube-proxy works, and the only test coverage that includes is kube-proxy with iptables, IPVS, or winkernel.
H: Yeah, the question is, do we want to recover it? I mean, that's for sure, we don't have that coverage since... when, I don't know. But the thing is, do we want to recover it or not? Because recovering it is... I mean, it's not a complicated thing; it's tedious, but it's straightforward: add another job just to test endpoints, or move to a different library. I don't know. I mean, there are technical solutions, but do we want to recover that or not? That's the question.
B: I mean, either way, if we keep it, we either have to... we should have a job to test the userspace proxy, and if we port it to EndpointSlice, we should still have a job to test userspace proxy mode, right?
E: I think to do that we would have to either test the userspace proxier, or disable a GA feature gate, and disabling a GA feature gate seems like something you can't actually do; it won't let you. Right, right. So it seems like the only thing we can do is add userspace proxy tests here; like, it seems like they're inextricable.
C: Some e2e tests that, like, poke at individual Endpoints objects to test random things... we don't actually need to test kube-proxy there. Like, if we don't care about the userspace proxy, then we don't need to test that kube-proxy with Endpoints still works. We just need to make sure there are some tests that, yep, we're still generating Endpoints and they still match the EndpointSlices.
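[A minimal sketch of the kind of check being asked for here: given a Service, verify that an Endpoints object is still generated and that its addresses agree with the Service's EndpointSlices. This is illustrative Go using client-go, not an actual test from kubernetes/kubernetes, and it ignores readiness and ports for brevity.]

```go
// Illustrative sketch of an Endpoints/EndpointSlice mirroring check.
package e2esketch

import (
	"context"
	"fmt"
	"sort"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointsMatchSlices returns an error if the addresses in the Service's
// Endpoints object differ from the addresses in its EndpointSlices.
func endpointsMatchSlices(ctx context.Context, cs kubernetes.Interface, ns, svc string) error {
	ep, err := cs.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("endpoints controller output missing: %w", err)
	}
	var fromEndpoints []string
	for _, subset := range ep.Subsets {
		for _, addr := range subset.Addresses {
			fromEndpoints = append(fromEndpoints, addr.IP)
		}
	}

	slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx, metav1.ListOptions{
		LabelSelector: discoveryv1.LabelServiceName + "=" + svc,
	})
	if err != nil {
		return err
	}
	var fromSlices []string
	for _, s := range slices.Items {
		for _, e := range s.Endpoints {
			fromSlices = append(fromSlices, e.Addresses...)
		}
	}

	sort.Strings(fromEndpoints)
	sort.Strings(fromSlices)
	if fmt.Sprint(fromEndpoints) != fmt.Sprint(fromSlices) {
		return fmt.Errorf("endpoints %v do not match endpoint slices %v", fromEndpoints, fromSlices)
	}
	return nil
}
```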
F: Yeah, I mean, like, topology is definitely not working, and then obviously EndpointSlices doesn't work. I have the whole list; I can share it with folks if they want.
F: It's because in Antrea we write network routing and egress rules to Open vSwitch, or to HNS, using an HNS extension, and you can't do that if you're using the Windows... like, you can't use the Windows kernel-space proxy, because of the way that we do the actual Antrea stuff with Open vSwitch.
F: And that's related to the fact that Open vSwitch on Windows operates as an extension of the Host Networking Service (HNS) in Windows.
F: Yeah, the thing is, it's going to go away eventually, so it's like... I mean, if everybody says let's just kill userspace, I'd agree with it. I just want to say that, like, I do use it. But, like, you know, if we kill it, let's just figure out how we're going to kill it. I think we could probably kill it and find another solution.
C: So OpenShift SDN uses parts of the userspace proxier to implement unidling. When you idle a service, it kills all the pods but keeps the service entry there, and when you connect to it, it uses the userspace proxy, which accepts the connection and then sends out a signal that causes a controller to relaunch the pods that back the service, and then it forwards the connection.
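[A rough Go sketch of the accept-then-wake flow just described, to show why only a small slice of the userspace proxier is involved. This is illustrative pseudocode, not OpenShift SDN's actual implementation; the wake and backendAddr hooks stand in for the signal to the controller and the wait for relaunched pods.]

```go
// Illustrative only: accept the connection on the idled service's port, signal
// a controller to scale the backing pods up, wait for a ready backend, then
// forward the original connection to it.
package unidlesketch

import (
	"io"
	"net"
)

func serveIdledService(listenAddr string, wake func(), backendAddr func() (string, error)) error {
	l, err := net.Listen("tcp", listenAddr)
	if err != nil {
		return err
	}
	for {
		conn, err := l.Accept() // accept the client's connection first
		if err != nil {
			return err
		}
		go func(c net.Conn) {
			defer c.Close()
			wake()                        // e.g. emit an event the unidling controller watches
			backend, err := backendAddr() // block until a relaunched pod is ready
			if err != nil {
				return
			}
			dst, err := net.Dial("tcp", backend)
			if err != nil {
				return
			}
			defer dst.Close()
			go io.Copy(dst, c) // forward client -> backend
			io.Copy(c, dst)    // and backend -> client
		}(conn)
	}
}
```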
C: So it's only using a subset of the functionality, and it's using it in a weird way, where we have to have, you know, callbacks and things like that. So when we were talking about this, we realized, you know, we could actually simplify it if we stopped trying to actually use the userspace proxy code and just imported the parts of it that we needed.
F: So let's just kill it. I mean, does anybody really want to keep it? Because it sounds like this is a theoretical debate, but the two people that use it are totally happy to, sort of, kill it off in the next few releases. I mean, I'm pretty sure Antrea is not going to need it within the next couple of releases, and even if it did, I think we could port it to kpng really easily, and I actually want to do that anyway, because it gives me an excuse to get people to use it.
E: I think that makes me next. Is it possible for me to share my screen? I just had, like, a 10-minute-or-less update on Gateway. All right, let me see if I can make this...
E: Try this again. All right. Are you going on Slack this time? Okay, sweet, thank you. All right, so this is... if I can share... it's still working?
E: Yes, it is. Cool. All right, so: Gateway API v1alpha2. This is going to be pretty quick, but I wanted to... we've been working for a while on Gateway API; obviously there's plenty of people who've been working on Gateway API in this meeting, even. But I wanted to take this opportunity, just a few minutes, to run through what we're trying to do for v1alpha2, which I think is going to be a very significant release.
E: We're hoping that v1alpha2 will be our last release with breaking changes, our only release with breaking changes, at which point our API will be stabilizing and preparing for beta.
E: There's a new KEP that's associated with this. So one of the things we're doing as part of the transition to v1alpha2 is moving to the Kubernetes API group, k8s.io. We're currently in the experimental x-k8s.io API group, and we've gotten advice that we should move to the official Kubernetes API group. That means that we need to go through formal API review processes.
E: We need this KEP, we need a variety of things. So that's what we're working on as part of this v1alpha2 proposal, and that also means we need more interaction with the broader SIG Network community, to make sure that what we're doing actually makes sense and that we're getting full API reviews from SIG Network API reviewers as well.
E: So on that note, I'll skip ahead to the last thing. I've reached out to Andrew, Dan, Cal, and Tim to ask if they'd be API reviewers for us and set aside some time in August to help run through the entire Gateway API, well, the v1alpha2 Gateway API, and make sure that it makes sense, and, you know, hopefully they find things that we have done that may not make sense, or changes we can make. But that's really...
E: Our goal is to try and get wide buy-in and also wide feedback from a number of experienced API reviewers. This is not meant to be an exclusive list: if there are other people out there that are interested in this API and want to be involved and want to be reviewing these changes, I would love it.
E: You know, with the Ingress API we got lots of feedback after the fact, you know, after it was too late to make significant changes, about things that were less than ideal about it: that we wish it could have this or that, or we wish it didn't have this, et cetera. This is the opportunity; this is...
E: ...the point where we can still make those breaking changes, or we can make more significant changes, than when we really have to start providing stability, backwards compatibility, and all that. So I think we've come up with a pretty compelling API, I'm obviously biased, but if you have opinions about how an API like this should be structured, this is a really good time to take part, to review. It's all out there already; it's not like you need to wait till August to review things.
E: The vast majority of it is already in place. So, yes, that's what we're thinking about there. There is going to be... because we're migrating API groups, etc., there's not going to be any support for conversion between v1alpha1 and v1alpha2; they're going to exist in isolation. There's also going to be breaking changes between these releases, so it just would not work.
E: So these two APIs are going to be separate, and, as I mentioned, we plan for future API releases to be completely convertible. We're targeting August 9 to be ready for final reviews. So what that means is: as a working group, we've been going through and trying to, you know, reach our own consensus among the working group about API changes that we want to commit to. But then we recognize that when it comes to August 9 and we get the formal Kubernetes API review, we could get feedback that other things need to change.
E: But we are trying to do our own review, our own enhancement proposals, and everything else, so you're welcome to be part of that process as well; again, we're trying to get as many people involved as possible. Now, just from small to large, I want to go through just, like, some of the changes that we're talking about in v1alpha2. There's nothing... well, I shouldn't say there's nothing huge, but there's a bunch of small things.
E: I won't cover these in any detail, because I don't want to take up too much time, but some small changes here. We're also doing some things to comply with API conventions a little bit better. We had some maps for, like, header matching and query params, etc., and that's not API convention, so we moved to lists instead, which is a closer match.
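[To make the maps-to-lists change concrete: a hedged Go sketch of the two shapes for header matching. The field names are illustrative, not the exact Gateway API v1alpha1 or v1alpha2 definitions.]

```go
// Sketch of the change described above; names are illustrative only.
package gatewaysketch

// Old shape: header matches expressed as a bare map of header name to value,
// which is hard to extend and not what Kubernetes API conventions recommend.
type headerMatchAsMap struct {
	Headers map[string]string `json:"headers,omitempty"`
}

// New shape: the same information as a list of structured entries, which can
// grow extra per-entry fields (for example a match type) without breaking
// the schema.
type headerMatch struct {
	Name  string `json:"name"`
	Value string `json:"value"`
}

type headerMatchAsList struct {
	Headers []headerMatch `json:"headers,omitempty"`
}
```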
E: Similarly, for our filters, we had a header modification filter, and that's also moving from a map to a list. So again, small changes. A new feature in v1alpha2 is going to be redirects; there's a lot of capabilities there, and we're also working on adding rewrites. And you see in the top right here that this is a shared slide deck, whatever.
E: But we have this concept of GEPs, and they're linked out here; they're Gateway Enhancement Proposals. We've gone through a lot of effort to try and outline why we think an API should be designed the way it is, and provided alternative designs and why we're choosing against them. So you'll see in the rest of the three or four slides that there's a GEP for each of these bigger concepts here. We're also... this is a huge one.
E: In light of our recent CVE, we are very interested in providing a safe way to do cross-namespace references. People got into trouble by doing cross-namespace references, maybe unsafely; the Ingress API did not specify it clearly enough. So in this case we're trying to find a way where we can do this safely. There's a whole GEP about this, but the fundamental idea is that for a reference across a namespace boundary to be safe, both sides need to agree to it.
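[A hedged sketch of the "both sides agree" idea: a reference from one namespace into another only takes effect if an object in the target namespace explicitly allows it. The resource and field names here are invented for illustration; the real mechanism is whatever the GEP mentioned above defines.]

```go
// Illustrative only; the actual resource is defined by the Gateway API GEP on
// cross-namespace references, not by this sketch.
package refsketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ReferencePermission lives in the target namespace and states which
// cross-namespace references into it are acceptable. A reference is only
// honored when the referencing object names the target AND a permission like
// this exists on the other side, i.e. both sides agree.
type ReferencePermission struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// From describes who may reference into this namespace,
	// for example Gateways in a given namespace.
	From []ReferencePeer `json:"from"`
	// To describes what they may reference, for example Secrets.
	To []ReferencePeer `json:"to"`
}

type ReferencePeer struct {
	Group     string `json:"group"`
	Kind      string `json:"kind"`
	Namespace string `json:"namespace,omitempty"`
}
```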
E: So it describes how the same policy attachment mechanism could work for both. And finally, the last thing: route-gateway attachment. This is a GEP that's currently in flight. We're exploring different ways that routes and gateways can be attached together, and maybe trying to simplify that a little bit. Fundamentally, it means that the route chooses: I want to attach myself to this specific gateway or set of gateways. There's a bit more to it than that, and there's a full GEP about it.
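[A small sketch of the "route chooses the gateway" direction being described, with invented field names rather than the final v1alpha2 spelling: the route lists the gateways it wants to attach to, and each gateway can still accept or reject that attachment on its side.]

```go
// Illustrative sketch of route-to-gateway attachment; field names are
// placeholders, not the final Gateway API v1alpha2 types.
package attachsketch

// httpRouteSpec (sketch): the route itself names the gateway(s) it asks to be
// attached to, instead of gateways selecting routes.
type httpRouteSpec struct {
	ParentGateways []gatewayReference `json:"parentGateways,omitempty"`
}

type gatewayReference struct {
	Namespace string `json:"namespace,omitempty"`
	Name      string `json:"name"`
	// SectionName optionally targets a specific listener on the gateway.
	SectionName string `json:"sectionName,omitempty"`
}
```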
E: But these are the changes we're talking about in Gateway API. Again, August 9 is what we're targeting for being fully ready for v1alpha2 review. But if you're interested in these kinds of topics, if you want to give feedback, if you want to tell us something that we're missing out on or that doesn't make sense, this is a really good time to do it, while we can still make those bigger changes.
I: Okay, cool. Is a GEP something... I mean, why do you guys not use regular KEPs?
E: Yes, that's a good question. We've been inspired by other SIG projects or working groups; like, Cluster API is maybe similar to us, and they have a similar concept, a CAEP. I don't know how they actually pronounce it, but it's a CAEP in their case. There are some things in KEPs that don't really apply to us, like PRR, as an example.
E: What we're designing is entirely an API; we're not designing the implementation of that API. So a lot of the PRR process and other concepts in KEPs are a little too involved for something that is purely an API. So this is maybe a simplified version, also focused on API design more than implementation, I guess.
D: I think, like, we will eventually go to KEPs. It's just, at this point, it was pretty high overhead, and people basically just deleted almost, like, 90% of the KEP template. So that's why we just said, hey, just use this simplified one for now, and eventually, when we go to beta, obviously it probably will just align to the normal Kubernetes KEP process.
D: Yeah, the idea for the GEP came about because we were using just GitHub issues and we were finding it hard to keep track of things. Now, it was discussed in the group, like, should we just go to... just use the KEP process? But it was viewed that this incremental step, at least for the next probably six months, is sufficient for documentation, and then we would probably just use KEPs.
E: Yeah, I think this is similar to the Network Policy working group as well, in the sense that we're trying to collaborate on an API design, and in a way KEPs make that a little difficult. Like, if we had a single KEP for Gateway API, that would be really difficult. What we're talking about with GEPs is, like, these... like, there's a GEP for how we do rewrites in Gateway API, which would be pretty overwhelming to go through the entire KEP process for.
E: So these are kind of smaller things where we're trying to collaborate just on a small concept, and maybe, as the API stabilizes, KEPs will make more sense. But right now we've got lots of different people that are trying to contribute smaller chunks, so I think it would be pretty overwhelming to try and do that all with a KEP, with different KEPs.
H
E: Yeah, that's a really good question, and, to be honest, we're still trying to figure it out. Right now we are completely untied to the Kubernetes release cycle, but we, of course, will follow the backwards-compatibility guarantees and everything else associated with an official Kubernetes API. We're kind of going through this relatively new process of an official Kubernetes API that is a CRD; we're not the first, but we're probably one of the largest in scope.
D: Yeah, I think this will probably become immediately relevant when we go beta. In the alpha phase it's like, yeah, you can kind of play fast and loose. This is some discussion that we will have to have with SIG Arch, like how to integrate it.
E: Months... I would love to do it in under six months, but I recognize that it is impossible to predict buy-in, consensus, etc. in open source, and, because we're a CRD, we don't have the same firm timelines that open-source Kubernetes release cycles have. So there's nothing that says, well, you have to get to beta by 1.23 or whatever it is. So what we're trying to focus on instead is when we have true consensus within the community.
E: Obviously, most of the people working on this API are also trying to build products around it, and we're all very interested in reaching beta, just so that we can build products around something that's relatively stable.
H
E: That's correct. A lot of the work done there can apply to Gateway API. I have a PR in the Gateway API repo that copies a lot of those concepts over for our own conformance tests, but it has not merged yet. The idea is to at least share a lot of the code and concepts between the two conformance test setups, but they will be different.