From YouTube: Kubernetes SIG Network Bi-Weekly Meeting for 20220022
A
Network meeting for Thursday, September 15, 2022. First up, we have triage.
B
Sorry, please bear with me, because when I'm sharing I can't see who's talking. So if I can get a username to assign to, that would be great.
B
We'll wait, but thanks for taking the assignment. Awesome. All right: a network policy blocks access specifically for the NGINX ingress controller. Just kind of scrolling down, it looks like Antonio thought this is SIG Network.
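For context, the class of issue being triaged here usually looks like the sketch below: a default-deny NetworkPolicy in an application namespace also blocks traffic arriving from the ingress controller's namespace unless it is explicitly allowed. The names and namespace label are illustrative, not taken from the issue itself.

```yaml
# Hypothetical policy illustrating the issue class: allow ingress only
# from the ingress-nginx namespace, so a default-deny posture does not
# also cut off the ingress controller's traffic.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-nginx   # hypothetical name
  namespace: my-app                # hypothetical namespace
spec:
  podSelector: {}                  # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
```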
D
B
All right: kubelet not recognizing multiple options. Let's see. It looks like we have some discussion already happening, whoa, a bunch of discussion already happening in here. Do we have someone who wants to own this one?
B
I love it, we're including books.
E
C
A
C
B
All right, that was easy, for some values of easy. Okay: getting called after startup probe failure. Let's see. Oh, we've got a bunch of troubleshooting here.
B
A
Well, the startup probe failure. You know, it looks like it's trying to use an HTTP probe, so the network plugin that you use might be relevant here.
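A minimal sketch of the kind of HTTP startup probe being discussed; the image, port, and thresholds are illustrative, not from the issue. The point made above is that an `httpGet` probe traverses the pod network, so the CNI plugin in use can affect whether it succeeds.

```yaml
# Illustrative pod with an HTTP startup probe (values are assumptions).
apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-demo
spec:
  containers:
    - name: app
      image: registry.k8s.io/e2e-test-images/agnhost:2.39
      args: ["netexec", "--http-port=8080"]
      startupProbe:
        httpGet:            # HTTP probes go over the pod network,
          path: /healthz    # so the network plugin can be relevant
          port: 8080
        failureThreshold: 30
        periodSeconds: 10
```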
F
C
A
B
F
I mean, you could assign this one to me and I'll take a look at least. It might be a node thing, it might be a network thing. I don't know.
B
Okay, there you go. Yeah, I'm always concerned when people say this is happening in 1.18, because it's like, okay, is it still happening? All right: EndpointSlice mirroring, say that three times fast, created without a controller. Oh, we've got a bunch of discussion here from a couple of weeks ago.
G
B
Yeah, I'll just, you know, ping them.
A
All right, next up I'd like to do the Windows things first, before we get to multi-network, just because those seem like they'd probably take a little less time. So, Mark Rossetti.
I
Hi, I'm Mark, I'm the chair of SIG Windows, and I've got two issues to discuss. Hopefully they're pretty quick.
I
The first issue is that I was looking through the feature gates called out in the features package, and I noticed there are two very old feature gates that are only used in kube-proxy and were never tracked with a KEP or an enhancement issue: WinDSR, which was added in 1.14 and has been in alpha since 1.14, and WinOverlay, which was also added in 1.14 but went to beta in 1.20.
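For reference, a sketch of how these two gates are switched on for kube-proxy on Windows today. The exact `kubeproxy.config.k8s.io` field names here are best-effort from memory, so treat them as an assumption rather than authoritative configuration.

```yaml
# Sketch: enabling the WinOverlay and WinDSR gates for kube-proxy on
# a Windows node (field names are assumptions, verify against docs).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: kernelspace
featureGates:
  WinOverlay: true   # overlay (VXLAN) networking support, beta in 1.20
  WinDSR: true       # direct server return, alpha since 1.14
winkernel:
  networkName: Cluster
  enableDSR: true
```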
I
I don't have a ton of history around those, but I think we should probably try and progress them. I did ask in the SIG Architecture Slack channel what the protocol is for this, and the conversation is linked in the meeting notes. Jordan had said that usually what happens is a retroactive KEP is authored that pretty much describes the current state of things.
I
I think it's less heavy on design details and discussion, and that KEP is mainly used just to qualify graduation criteria and such. I was wondering if anybody had any concerns with doing that for these two feature gates. I think they're pretty tightly coupled, so we could probably just do one retroactive KEP.
I
And also, is there any preference on whether SIG Windows or SIG Network is the owning SIG? I was talking with Mike Zapatu, and I think we'd be okay with having SIG Windows on it.
G
Yeah, I'd prefer SIG Windows to own these. In the past we've not had good luck getting good reviews for kube-proxy on Windows, so anything you can do to help improve that: I think SIG Windows is a great owning SIG for that.
I
Okay, sounds good. The next issue: I was also taking a look at things, and I opened up a new KEP for supporting host, or node, network namespacing for Windows pods. I took a look, and in the Windows operating system we have all of the APIs we need to wire that up, but containerd, the only container runtime we use with Windows right now, doesn't support this. It looks like it would not be that difficult to add that support there.
I
So I authored a quick KEP and I'm looking for reviewers or feedback on this too. The only changes I've identified in Kubernetes components are in the kubelet: we probably need to add new CRI API fields to communicate that we want to join these pods to the node's network namespace, and then actually wire up those fields.
G
D
Yeah, so they've been working on updating the KEP, and I thought Jay was going to send mail to SIG Network, but he didn't. But anyway, you can go and look at the KEP.
E
D
Here we go: it's definitely ready to have people look at and review the KEP, to get an idea of what the KPNG group has done and provide feedback to them, because they've gone off and done a ton of work on this, but there was never actually an accepted enhancement about, you know...
D
This is what the SIG expects them to do and is willing to consider, and stuff like that. So we sort of have to figure out how we're going to get all this work they've done into Kubernetes proper at this point.
I
And in this case this isn't touching kube-proxy; this happens during pod sandbox generation in the container runtime.
I
Yeah, the kubelet basically looks for the hostNetwork flag and then does some validation, well, there's supposed to be validation done. The other motivation for this is that you can set hostNetwork to true on a pod, and if it's scheduled to a Windows node it passes validation, but then it's joined to the pod network anyway.
I
So it can be confusing too. But as far as I can tell, the kubelet just populates the fields, sends them off to the container runtime, and tells the container runtime how to configure the networking.
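The gap described in this exchange can be sketched concretely. The spec below passes API validation today, but as stated above, on a Windows node the pod still ends up attached to the pod network because containerd has no host-network support there; names and image are illustrative.

```yaml
# Illustration of the gap: hostNetwork is accepted by validation but
# is currently not honored by containerd on Windows nodes.
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-demo
spec:
  hostNetwork: true          # honored on Linux, silently ineffective on Windows today
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022
      command: ["ping", "-t", "localhost"]
```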
G
I
And kind of on a related note, the reason I uncovered those other feature gates was that I was doing a prototype of getting KPNG running on Windows nodes, and I was able to get that mostly working on a cluster I set up with Cluster API Provider for Azure. I think I've contributed some PRs back, and there are still some open PRs against the repo from that.
C
C
A
Okay, thanks. Everybody who's interested, please take a look at that enhancement and the other Windows stuff, and look out for any requests from SIG Windows on moving those things to final.
C
H
Okay, can you hear me now? Yeah? Okay, can you see my screen? Okay, okay, thank you. So hi everyone, my name is Maciej Skrocki, I'm part of Google's GKE networking team, and together with other folks in the community I'd like to put forward to this SIG Network group the effort we kicked off a few weeks back towards introducing multi-networking to Kubernetes.
H
Additionally, the network itself is today basically just an assumed concept in Kubernetes: it is not exposed by any object or anything like that.
H
We don't have the ability to express the introduction of multiple interfaces to a pod, and that, probably along with other historical reasons, led to a situation where such an ability has been created by out-of-band side projects like Multus, which provide the ability to introduce that capability to Kubernetes clusters.
H
Kubernetes is in a very good state, very mature and stable, where we can start maybe meddling with introducing networking concepts into core Kubernetes. Additionally, with all the other projects I mentioned before, like Multus, and there are various other ones, we have quite a lot of experience and know-how on what it means to introduce this multi-networking concept into the Kubernetes world, because those projects already did it, in some way or the other.
H
Through those we've come up with multiple use cases we want to satisfy: the telecom industry; or, as another example, addressing VMs in KubeVirt; and multi-tenancy. There are probably multiple other ones, but those are the most important. What we are initially thinking of introducing here is a Network object, which would be a reference, an abstraction, of what sort of networks a specific pod can connect to.
H
That leads to two things. First, keep the whole concept very simple, as it is today for today's networking requirements; introducing things like routers or firewalls, which some of you might know from the OpenStack world, is something we don't want to do. Second, for anyone who doesn't care much about multi-networking in their cluster, nothing should change: basically keep backward compatibility, and keep most of the new networking concepts only for the folks that really care about them.
H
The next piece would be to integrate the support for networking with existing components like Node: put networking-centric information into that object and then leverage it for our needs. Additionally, maybe leverage some of the cluster and service CIDR work that was introduced recently in Kubernetes to define the CIDRs for those additional networks.
H
The next thing we are thinking of introducing is node selectivity for networks: basically, it would mean that only a subsection of nodes has access to specific networks.
H
This is one of the requirements that's quite important for us. For the default network we have today, the main requirement is that it interconnects every node and is available on every node, so pods can connect. But we're thinking that the additional, non-default networks you'd want to define could have node selectivity. That will, of course, require the pod scheduler to be aware of those networks and be able to handle such situations.
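The proposal above can be pictured with a purely hypothetical sketch. No such API exists yet, the effort is explicitly in an early phase, so every group, kind, and field name below is invented for illustration only: a Network object as an abstraction of what a pod can connect to, with the node selectivity just described for non-default networks.

```yaml
# Purely hypothetical sketch of the proposed Network object; all names
# here are invented for illustration and are not a real Kubernetes API.
apiVersion: networking.example.k8s.io/v1alpha1
kind: Network
metadata:
  name: datapath-a
spec:
  # Non-default networks could carry node selectivity, so only a
  # subsection of nodes attaches to them and the scheduler can
  # place pods accordingly.
  nodeSelector:
    matchLabels:
      example.com/has-datapath-a: "true"
```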
H
J
Quick comment: for most of the problems you have up there, Network Service Mesh already has solutions that don't require changes to Kubernetes. Scheduling to land where you can actually get the network you need; the ability to insert connectivity both between pods and between pods and other things, including the physical network and VMs; both as an attribute of startup for a normal workload, or with very sophisticated things through the life of the pod. And we've already got SR-IOV VF injection working.
J
Actually, we think about it differently. We don't think about it as injection of an interface, because no one actually wants an interface; they want to be connected to some network service, possibly provided by their physical network. Those solutions have all been running for some time in Network Service Mesh without having to actually change Kubernetes, and in addition they work on all the public clouds.
J
They work on all the CNIs they've been tried with; they don't conflict, and they handle avoiding conflicts with Kubernetes networking; they support sophisticated features if you need them, like source-based routing within the pod; and, as I said, they avoid IPAM collisions and a whole host of other things. We've also got engagement from major NFV producers like Ericsson, who are building things on top of this.
J
Would it make sense for me to do a brief presentation at the next meeting about the sort of stuff in the goodie bag, so we can level-set on what's available?
H
Yeah, definitely, because it all boils down to our requirements, and this is where we are trying to gather those. That would be great, if you could do that, because we are in a very early phase of this. You can present that; we are still trying to gather use cases and requirements around this, and we'll see how that goes.
J
And there are a bunch of other things you run into very quickly in this space once you get past the interface thinking. For example, we handle correct fan-out of DNS, both for the Kubernetes cluster and into any other secondary network services the pod is attached to, so you can do proper DNS resolution.
J
We handle the problem of two pods that are supposed to be CNFs in a service function chain needing to communicate with each other with something faster than kernel interfaces, which everyone who's serious about playing this game does. You can't just do interface connection at startup, because you need them to be able to establish a connection when they're both running, so the entire CNI approach is inapplicable to that sort of scenario.
J
The question is: where and when would you like me to come and talk about some of the stuff in the goodie bag? I could certainly talk about some of it here, but it sounds like you may have another meeting established for this. Yeah.
H
Yeah, we have another meeting around this, so that would be best, because that meeting is all focused on this; let's do it there. That sounds fantastic. In the presentation there is a link to the meeting minutes. Right now I'm using my corporate Meet for the meeting, but in the future hopefully we can switch to Zoom and be part of that, so that it's more vendor-neutral.
J
Google Meet is perfectly delightful for me. I work well with pretty much anything that isn't old-school; I forgot what it was called, the thing that Microsoft bought, Skype for Business. Skype for Business hates me for some reason I don't understand: it won't let me connect. Anything else works for me. But yeah, it may be useful, because it sounds like a lot of the requirements you're trying to meet are NFV requirements.
J
That's the space we've been plumbing very deeply on the NSM front, and that we have a lot of history with. You know, I go back in NFV almost to the beginning, well before they tried to make the cloud-native jump.
J
K
Hi Ed, sorry, I just need to add a comment here. I'm Ana Salus from Deutsche Telekom; I was invited here by the Google EU team to participate.
K
Maybe it's the first time I'm here, and it's exactly for this topic. What Maciej mentioned is one of our biggest pain points in deploying 5G core and O-RAN related deployments, basically any telco workload on Kubernetes, because of these multiple interfaces, all the security that comes with them, the risks with the VFs and so on; a lot of accelerators and such are being used. There is too much going on there, and this is one of our big pain points.
K
So, what we are trying to do now... maybe I can quickly share something, just to show, I'm not hijacking the meeting. If you can see my screen... okay, great. This is just an old document, just to give an example. We are heavily depending now on this Multus thing, which gave us the ability to have multiple interfaces.
K
Usually eth0, as you all know, is the main interface, and then we create extra interfaces; sometimes it reaches four or five extra interfaces on one pod. Of course it can look a bit simple here, maybe, but in reality it's much more complex, because some of these interfaces need to reach directly to the VF on the physical NIC port. And sometimes you have to care for their security, their policies and so on, because these interfaces are completely outside the jurisdiction of Kubernetes networking, and in some aspects this also poses a huge security risk: you actually have a freeway here from the outside to your inside. Say this interface or this NIC got compromised by anything; then you have direct access to your application.
K
We are looking into this risk, of course, and trying to mitigate it in other ways, by introducing some firewalling; there are SmartNICs that can also be used, and so on. But as you can see, it's a topic that is crucial, and we are trying... sorry.
J
Could you ping me on Slack? I've seen many variations of this picture before, and I'd love to jump on a call and talk through what you've got here at some point, if that's okay.
J
Yeah. Kubernetes Slack, CNCF Slack: just look for Ed Warnicke, I'm super easy to find in that regard. If you get lost and confused, just say something in the NSM channel and I'll find you there. It's fairly straightforward. I've seen this diagram a million times, and when you talk about needing to compose things in for security reasons, that's a very familiar problem to me.
J
That's why Network Service Mesh supports composition of network services, but there are other things you can potentially do before that around security, for example. It also supports effectively cryptographically verifiable identities for the workloads, so that you can make policy decisions about who can and can't connect.
J
Absolutely, so much stuff. How do you not conflict with the Kubernetes cluster and the other secondary interfaces that are injected, with IPAM? IPAM turns out to be a hard problem if you make it hard, and a relatively straightforward problem if you don't.
K
That's why the KEP idea was really... sorry.
A
So, I hate to cut off the conversation a little bit, but if we could pause this and finish the agenda, then if there is more time at the end of the meeting we can pick this back up, or we can carry it through to the next meeting.
A
K
Just to make sure: can I be invited to this next meeting between you, Maciej, and you, Ed, if you're going to have a meeting later?
H
The link to the meeting minutes is in the slides; you get the link there, and there is info about when it's happening. Basically it happens on the off week from this meeting, so it's going to happen next Wednesday at 8 a.m. Pacific time. Okay, great.
C
H
That's a good question, yeah. We are still working on it, meaning we're still discussing.
E
It doesn't have to be absolutely finished before you publish it, because that is the normal discussion forum. So please post what you've got in a KEP, and I'm sure you will get very many comments on it. Lord, there is still work to do to fill it up with quite a few use cases.
E
L
L
A
Okay, if we could move on to Rob's agenda item, the proposed change to the Gateway API. Again, we can circle back to multi-network, since there's clearly more discussion, but I'd just like to make sure we get through the rest of the agenda items first. Great discussion.
G
Cool, yeah, thank you for a good discussion there. I have, I think, a relatively short item, but I wanted to raise it here because it maybe has room to be somewhat controversial. In Gateway API we have a few resources that have graduated to beta, as many of you are aware. We are considering adding to, and changing, the conditions that we recommend implementations set in status, so we're considering that on a beta API.
G
It's really unclear to me whether API conventions allow that. I think they do, but I wanted to point to this KEP, or GEP, sorry, as a thing that's coming that we'd like to do: try and make all our conditions, all our status, consistent across the API.
G
But if anyone sees that and says "oh, maybe we shouldn't do that", it would be great to get that kind of feedback early. We're trying to get a release out before KubeCon, so four-ish weeks from now, and this is one of the major changes we're trying to get in. So I just wanted to raise that this is coming, and if this kind of change feels too large in scope, please let us know sooner rather than later.
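A rough sketch of the kind of status conditions under discussion. The exact condition types are what the GEP itself is standardizing, so the names and values below are illustrative assumptions, not the final recommendation.

```yaml
# Illustrative status block on a Gateway API resource; condition names
# and values here are assumptions for the sake of the example.
status:
  conditions:
    - type: Accepted        # a condition implementations would be recommended to set
      status: "True"
      reason: Accepted
      observedGeneration: 2
      lastTransitionTime: "2022-09-15T17:00:00Z"
      message: Configuration accepted by the implementation
```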
F
Cool, thanks Rob. Next on the agenda was Andy.
C
Hi, I'm Andy. I work on a team at Microsoft, and we've been using the service internalTrafficPolicy feature, and I was hoping to assist in getting it to GA. It seems like it's a bit...
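For anyone unfamiliar with the feature being discussed, a minimal sketch: with `internalTrafficPolicy: Local`, in-cluster traffic to the Service is only routed to endpoints on the same node as the client. The Service name and selector are illustrative.

```yaml
# Sketch of the internalTrafficPolicy feature under discussion
# (names are illustrative).
apiVersion: v1
kind: Service
metadata:
  name: node-local-svc
spec:
  selector:
    app: my-daemon
  ports:
    - port: 80
      targetPort: 8080
  internalTrafficPolicy: Local   # default is Cluster
```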
C
G
I can talk with you; just ping me or Andrew on Slack about this. This has some intersections with the topology-aware hints KEP, and a lot of work went into ensuring that they work reasonably well together, but we still need a bit more testing to ensure that what we think we did actually works as expected: basically ensuring that, if someone enables topology-aware hints and internal traffic policy, the interaction is exactly what we'd expect. And external traffic policy: those three all kind of play weirdly together. But yeah, we're very open to help.
G
I think everyone wants to see this one get to GA, so maybe follow up with me on Slack, I'm pretty easy to find. Sounds good, we'll take it from there. Thanks, Rob.
C
F
Yep, thanks for looking to help with that. And then I think the last item on the agenda was Bridget.
B
I don't know if we need to spend a lot of time on this one, given that people want to discuss other things as well. I wanted to point out that there's been a discussion going on in SIG Cloud Provider, and some of that discussion pointed back to: maybe we should double-check with SIG Network to agree on what should happen when a node reboots, or the kubelet restarts, and the IP has changed.
B
So we have a PR there that is welcoming your suggestions and input, and we have a docs update. Obviously it would be great if we could get this into 1.26, because it's basically a bug fix, but it might be important to make sure that everybody understands and agrees with what's happening here.
F
I don't have any immediate comments. It sounds like one that might need a little looking through history. Absolutely.
B
Yep, a bug fix and a docs update. And do we want to make changes, or do we want to just tell the world what is actually happening? You know, there are a number of open questions; input is welcome.
B
Awesome. And I did want to say as well: I don't know if everyone knows how to see the notes, but I did put some links in the chat about joining the Kubernetes Slack, being on SIG Network, and a link to the minutes we're all talking about and showing from time to time. For anyone who's new on the call and needs that info, look in the Zoom chat of this meeting right now.
F
Yeah, so that was the last item on the agenda. I know there was still plenty of discussion to be had on some of the multi-network and VRF stuff, so we can go back to that for ten minutes, unless there were any last-minute items anyone wanted to squeeze in.
K
Okay, thanks Ed. I corrected my name from that robot name to my real name. Just so there's a proper understanding: currently we are in the middle of a heavy engagement with all hyperscalers, and, as I said, this multi-networking thing is important.
K
I just want to understand whether this is going in the direction of replacing Multus completely, because I already read the intro document and I saw what this discussion is about, and I know one of the user stories written in this document is about this multiple-interface telco deployment, like a 5G UPF networking function and so on.
K
So is this to replace Multus completely and present a more Kubernetes-native solution to this issue, or is it just to provide a single interface with additional sub-interfaces within it, with the rest up to you?
L
Hi, this is Doug Smith. At least as far as I understand from Maciej's material, this is not implementation-specific. What we're looking to define here is how you're going to express your intent to have these multiple networks. So that's where it's coming from: it's not necessarily about one implementation or another, but about the use cases we have here, and what Maciej has put together as a data representation of that functionality.
L
K
E
Yeah, it will more replace the document that, well, Dan and others wrote: the Kubernetes Network Custom Resource Definition de-facto standard. All the annotations used to express networking in the pod templates and so on, that will go away. That's what this will replace, yeah.
K
Yeah, sure, of course. I was just talking about Multus as an enabler for these extra interfaces, because wherever you are using it, you also want a CNI to provide the networking, right?
C
C
H
Oh, sorry. So yeah, Multus is just one of the implementations of this, right. But basically the goal, what Doug just mentioned, is to enable APIs, and I see Ed probably laughing at us because he already solved it another way, so let's talk about that. But putting that aside, our current idea is to get rid of, if you're familiar with Multus, it's all annotation-based there.
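The annotation-based convention being referred to, from the Network Attachment De-facto Standard, looks like the following; the attachment name is illustrative, and the referenced NetworkAttachmentDefinition would be defined separately.

```yaml
# Example of the Multus annotation convention: the pod requests extra
# interfaces beyond eth0 by naming NetworkAttachmentDefinitions.
apiVersion: v1
kind: Pod
metadata:
  name: multus-demo
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf   # hypothetical attachment name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
```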
H
We have an annotation to do this and an annotation to do that, so the goal is to get rid of all of that: keep the functionality, but not do it in an out-of-band way, and leverage all the pieces we currently have, so we don't have to modify the core. Or rather, let's make it the core: let's make it so that you have the APIs, you have the fields you care about and want to fill in, and then have the existing controllers look at those and do something based on them. So that's the initial goal, and we're working with folks from Multus as well, so no, we don't want to replace it.
C
H
And basically, yeah, it should work with Multus, maybe with some modifications; I'm not saying it will be completely straightforward, and there will be some migration story in the meantime. But we need to start the discussion, because right now, whether it's the NSM stuff or what we're doing here, we need to have a path towards something that everyone's happy with and can use, and not create some sort of silos.
E
K
There's also the RAN, the RAN thing, which is the most crucial, most needy part of any deployment of a telco workload, with all its demands for latency and timing and PTP protocols and all this nasty stuff. Kubernetes is reaching to the edge, to an antenna, and this is why, in many things, we are depending on things like operators, because we cannot change Kubernetes as fast as we want, and operators are more or less achieving some of the extra functionality we are looking for.
K
But this one, if it comes natively with Kubernetes, will also be a lot of help with the idea of separating traffic, because that is much needed, not only on the basis of control traffic versus user traffic, but also for other things.
K
J
It's not x86 networking, it's kernel networking that gives you the latency. I mean, I have VPP processing a terabit per second of IPsec traffic on a commodity server with no accelerating hardware. It's not the x86 architecture that gives you the problem; it's having to process packets in the kernel. If you move to user space, you can run... yeah.
K
I just wanted to say that there's also this KEP thing: we are really interested in following it and how it's going to develop, and looking forward to knowing if someone can provide a date when an alpha or beta might be there, any time, so at least we can get our hands on it and start testing it.
H
On a closing note, I welcome everyone to the meeting next Wednesday; it's again at 8 a.m. Pacific time. The link to the minutes for that particular meeting is in the slides, and it's in the minutes of today's meeting, so there are plenty of those. And basically you can put in your agenda items; it's all open, like every other meeting.
L
The detail, as far as I see it, is in how this is implemented. I think in Maciej's docs you'll see that it tries to avoid that kind of detail. We're trying to figure out the use case and how it maps to this data, and then the implementations, I think, can take it from there. I do think it has interesting implications for CNI, but I think that's not the entirety of the conversation.
L
I think we're kind of hitting it at a different level than how this is handled, right? Absolutely.
E
J
Well, no, I mean, there are ways to deal with that problem. The problem with CNI is that it's the wrong point in the lifecycle: it can only do things at startup time, and there's a huge set of use cases where you want to be able to connect containers together, or to change them later, that you're kind of...
E
J
So I'll see what I can do to juggle balls around, but it may not be next week as a result; it may have to be the week after.
H
That's fine, we do bi-weekly.
C
H
H
F
We're going to have to end the meeting, but I invite everybody who's interested to join next week, Wednesday 8 a.m. it sounds like, and we'll have this meeting again in two weeks, I believe. Thanks, everybody, for coming.