From YouTube: Kubernetes SIG Network meeting for 20230928
A: All right, did it notify people? It didn't notify me. This is the Kubernetes SIG Network meeting for Thursday, September 28th, 2023. We are governed by the Kubernetes code of conduct, which basically boils down to "don't be a jerk", so everybody please be kind to each other. We are running on a schedule today, so we're going to jump straight in, bypassing triage, and go straight into topics.
A: Oh yeah, it's not your KEP first. Yeah, let's do KEPs first, so I'll move you to the bottom of the agenda, if that's okay. You're first on the list, then. Are you here?
B: Okay, let's set the context. We started with the ServiceCIDR and the ClusterCIDR, and we said: oh, these overlap. We had discussions and we decided: okay, the ServiceCIDR is good. The ClusterCIDR has an inconvenience, because not everybody is going to implement it. I keep discussing this with Dan Winship, and he has a strong argument, which I cannot refute, about making this a core API. That means a CRD, because these arguments are correct: well, you can have the CRD, but they can have OpenShift or whatever that is using another implementation, and if I configure ClusterCIDRs, this is not going to represent my cluster CIDRs. Then correct me, but the problem that we have now is that we merged this as alpha, and we need to decide if we want to keep this in core, and how it is going to be named, or if we move it out of core and we create a separate project with it.
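For context, a rough sketch of the shape of the alpha object under discussion, ClusterCIDR in networking.k8s.io/v1alpha1: a per-node allocation size plus IPv4/IPv6 ranges, optionally scoped by a node selector. The field names below are approximate and simplified (the real API uses a full v1.NodeSelector rather than plain labels), so treat this as an illustration, not the authoritative API:

```go
// Approximate, simplified shape of the alpha ClusterCIDR object
// (networking.k8s.io/v1alpha1) discussed above. Illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

// ClusterCIDRSpec: a pair of CIDRs plus a per-node allocation size,
// optionally scoped to a set of nodes via a node selector.
type ClusterCIDRSpec struct {
	NodeSelector    map[string]string `json:"nodeSelector,omitempty"` // simplified; the real API uses v1.NodeSelector
	PerNodeHostBits int32             `json:"perNodeHostBits"`        // host bits per node, e.g. 8 -> /24 slices of a /16
	IPv4            string            `json:"ipv4,omitempty"`
	IPv6            string            `json:"ipv6,omitempty"`
}

func main() {
	spec := ClusterCIDRSpec{
		NodeSelector:    map[string]string{"node-role.kubernetes.io/worker": ""},
		PerNodeHostBits: 8,
		IPv4:            "10.244.0.0/16",
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```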
A: So, to that note: I think what you're trying to do is to edit what we literally already merged, right? There was already a ClusterCIDR KEP that was merged back in the day, so why are we backing out of this? And then the other ask here is: if some other implementation didn't implement the core capability, why shouldn't we pursue it further? I don't think a valid argument is: oh, we didn't implement the core basic functionality.
C: "We have something of our own, so we shouldn't have this, because that introduces confusion." Isn't that kind of implementation-specific, then? That's why I'm asking, right: if I have an implementation on my platform and I come up with my own stuff, am I going to oppose something that core is trying to push forward because it doesn't fit with mine? That's at least how I'm seeing it. Dan, yeah, go.
F: So the issue is that if you read the KEP, it says that the goal is, you know, to make it possible to extend the cluster CIDR, and blah blah blah. But then it proposes a feature that just lets you reconfigure the node IPAM controller, and in the vast majority of Kubernetes clusters, just being able to reconfigure the node IPAM controller doesn't give you the ability to extend the cluster CIDR. So the KEP isn't doing what it promises to.
C: Maybe we're overloading some words here; let's maybe define them, because maybe this is where I'm missing something. What do you mean by "cluster CIDR" there, then? Because that's maybe where I'm having a gap.
F: In order to do that, you need to adjust more configuration than just the node IPAM controller, and this KEP doesn't provide any way to do that. And people have already tried to use it to implement that feature and other things, in ways that then don't work and have subtle bugs, because the KEP didn't even try to figure out how to make that work.
C: And not only that. I think what you're introducing here, Dan, and that was never the case in the initial, original design, is that it doesn't control what the CNI does. It's a hint, and maybe that's the main issue of this initial implementation: it is a hint which is not imposed on any of the CNIs, because it's a hint for the node, no?
C: Like, no, no, no; the evolution of it right now, yes. But I'm stepping back to the podCIDR field in the node, right, because I think this whole KEP is around that single field and how it worked till today. And this is what I'm referring to: currently the podCIDR is a hint for the CNI, right? If you want, this field is there and you can use it for your IPAM however you want to, right?
C: But if you don't, then just don't use it, right? That's what I'm referring to, and now the new KEP is evolving the same, maybe flawed, design that we did back in the day. Maybe not flawed; I don't want to badmouth anyone that initially introduced it, because we use it, but that's maybe the problem here, right? And I see what you're saying about enforcing that, right? Because that's what you would want.
F: And so, therefore, we need to assume that an API that lets you reconfigure the node IPAM controller is not an API that lets you extend the pod CIDR, because not all cluster network, you know, CNI plugins use the node IPAM controller, and in some cases they need to have other things adjusted if they want to extend the pod CIDR.
C: So I don't think that the KEP mentions that we can resize, because that's what you're referring to; I don't think that's the case, right? It's immutable once set, at least from what I remember when we initially introduced the object: when you define a specific CIDR, it's immutable, you cannot change it. You can add new ones that kind of expand the list, but I don't think you can change the current one, right?
C: But then, maybe what we are mixing up here, and I think I saw a similar case, is that we are treating the /16 or the /24 as a subnet, which should not be, and is not, the case. Those are just ranges, and we should treat them as ranges. How I want to use those ranges, and how I do my routing, that's completely separate and independent of this thing.
A: So there's some confusion about whether this API purports to be authoritative, like the source of truth for the configuration, right? I think we all agree it can't be. So if it's non-authoritative, then it really is only a configuration mechanism for a particular controller, one that we don't even have in our tree yet; it would be an extension or a new version of a controller that we do have in our tree. So we have this precedent of having an IPAM controller built in, with a configuration, and that configuration happens to be a flag today.
A: The question then, if I can get to the heart of what Dan's concern is: should we continue with that convention of having a built-in IPAM controller that has a configuration resource which is modeled through the API surface, or should we say: get out of here, this is a perfect example of what should not be in-tree, go find a different mechanism. You know: node spec, podCIDR.
F: But the other thing is that the KEP is presenting itself as a pod-network-extending feature, and, as Antonio had pointed out, some people made changes to Flannel to read these ClusterCIDR objects and reconfigure Flannel based on them, because they were treating it as a pod-network-extending feature, not as a node-IPAM-reconfiguring feature. But the feature isn't designed in a way to work arbitrarily as a pod-network-extending feature, and so the way that Flannel did it doesn't actually work right, and will create bugs.
F: So if we're going to keep this KEP and have this in-tree reconfigurable node IPAM controller, we need to make it very clear: this is a node-IPAM-controller-reconfiguring KEP. This is not about extending your pod network. We should remove all of the goals and all of the user stories that talk about dynamically extending the pod network, because it doesn't implement that feature for most people.
F: I know what he was saying, because we've talked about this, so I'll clarify that. Also, a lot of the cloud providers are switching to using the external cloud controller stuff, which has its own separate node IPAM controllers, so they won't even be using the one that this KEP is modifying in kube-controller-manager anyway, which is a third problem.
A: So if somebody came to us with a brand new idea for an entirely different model of IPAM that was based on machine learning, and they said: we want to put this controller into controller-manager, we want to add an API resource to configure it. What would we go ahead and say? I can't tell, from all the conversations we've had about this, where your actual position is at. Like, should we just abort the alpha and say this should be an out-of-tree controller with an out-of-tree custom resource?
B: ...so that this can expand and grow on its own, because they can build on all these things we were saying, right, about expanding the pod network and everything, and it will be easier for people to consume, for doing more things than just the podCIDR part. It's not a big project, and it's nice, so new contributors can... And we have all these problems, right: at the end, we don't have enough people to work in core, and we have this other problem, that it's too hard to get into core.
A: Okay, I think I agree. As much as I like the idea of making a more robust version of the existing IPAM controller, it was probably a false start in the first place, and so cutting it off before it grows too many more limbs seems like a reasonable argument. If we kick this out of tree, it means we basically have to start it over; they can take a lot of the controller code, but the custom resource and stuff will have to be...
H: I think 90%, if not more, of the clusters out there can live with just simple cluster CIDRs, whatever we have right now, and that serves the purpose, and it has served the purpose. And I do understand Dan's comment: oh, this was modeled after this particular cloud network and how it worked. That's fine; it happened in the past. And, frankly speaking, nobody cares about this on the cloud side; outside the cloud a lot of people do worry about this, but even us, on our side of the fence...
H: ...everything is done to allow this to happen anyway. Now, for the growth, if you want to do dynamic changing and all of that stuff that we talked about, I favor outside the tree, just by virtue of simplifying the code that we worry about, and all the stuff that we need to build and have to think about for the N-by-N-by-N matrix of use cases: allow people to either use some standard out-of-tree controller, which should work, or, for everybody who has a special case, build their own. All right, now the question remains: should the API in the core be generic enough to allow N CIDRs per node, or should we say: oh, this is how it will work, the core supports only one CIDR, but if you want growth, go use some other controller, or build your own, and your own API. I'm worried that we end up with N number of APIs out there.
H: Everybody will come up with their own API, which will make the ecosystem extremely fragmented. So maybe we should follow what we did in the Gateway API: having a common API, external, that's loadable, and then people will focus on the controller only. So it's somewhere in the middle. Just a thought out there.
A: So my concern here is... Unlike Gateway, where we had a lot of evidence of what that API should look like when we tried to bring it together into a common API, and we knew where the extension points probably were (I don't know, we weren't 100% correct, but we were pretty close), I have no idea if there's one API that describes what multiple implementations would or should do here.
A: So my recommendation is that we kick it out, and we say: if there's alignment, if there are other implementations that want to use a common API, that's a great goal, but it shouldn't be a requirement for this project to continue. Is that...
H
Fair
I
do
not
agree
on
this.
I
would
argue
that
the
API
for
multier
network
is
a
lot
simpler
than
a
Gateway.
Yes,
it
has
degree
of
complexity
to
get
I
do
see
or
Point
around.
Oh,
we
were
not
sure
everything
out
there
and
we
cannot
enumerate
it,
but
I
think
we
can
get
pretty
close.
A
I,
so
I
worry
that
we
end
up
Reinventing
the
problem
that
Dan
started
with,
which
is:
is
this
an
authoritative
API,
or
is
this
a
reflective,
API
I?
Don't
know
enough
about
all
the
multitudes
of
network
plugins
out
there
that
how
they
would
go
about
actually
reconfiguring
their
pod
networks
and,
in
fact,
it's
sort
of
gets
into
the
multi
Network
proposal
too,
like
trying
to
make
this
the
one
true,
authoritative
API,
for
that
one
little
aspect
of
network
definition
seems
like
a
a
slippery
slope.
A
If,
if
we,
if
people
want
to
agree
that
there
is
a
reflective
API
that
is
configured
for
ipam
controllers
and
that
this
is
what
it
should
look
like,
I
am
certainly
not
going
to
get
in
the
way.
That
sounds
like
a
great
outcome,
but
I
don't
want
to
make
that
the
the
True
North
Star
like
I,
think
the
the
goal
here
is:
here's
a
controller
implementation
that
does
what
the
existing
one
does,
but
is
a
little
bit
smarter.
H: So there are two ways to look at this: from the inside toward the outside, which is the current discussion, but there is also the outside-in discussion, which we usually tend to keep till last. I'm worried that we end up in a situation where you have somebody who spent years operating a very large number of clusters a certain way, and then goes somewhere else, and it's the same cluster, same binary, same everything...
A: It's in the notes. Antonio, do you want to follow up with an email to sig-network and the folks who are working on the existing multi-CIDR implementation, to say: this was the discussion, this is the decision that we've made; like, scream now or forever after be quiet.
A: Okay, cool. I'm gonna have to drop in a minute, so if it's okay, before we jump into the design topics, let's run through KEPs real quickly, just to make sure that we're on the right track. Give me one second and I will share a screen. I appreciate everybody who has kept the table in this project dashboard up to date; it's been really helpful.
A: All right, I pinged a couple. I'm just going to run through these super quickly; please scream if the current disposition here is not correct. My assessment of host-network support for Windows pods is: we still have no idea whether it's going anywhere, and given the deadline is in like eight days or something, it sounds like it's not going into 1.29. Dual-stack API server support is looking for someone to own it. Dan, is that...
F
Right
yeah
I
mean,
or
somebody
needs
to
work
with
the
the
API
server
team
to
to
push
that
forward,
because.
B
Okay,
can
we
postpone
this
to
once
we
have
the
allocators
and
everything
because
I
I'm
getting
to
approval
in
that
area,
so
if
I,
we
don't
have
to
R
it
I
may.
A: Postponed, yeah; it is postponed unless somebody were to step up. Anyway, SRV is not getting any attention. Component config is still the zombie proposal that needs to be addressed; Dan, with your nftables stuff, maybe that gives us a path towards a simpler config here, but that would be a long path.
A: This one, load balancer behavior, got kicked out last time because it shouldn't have been committed; it looks like it's set for alpha in 1.29. Multiple service CIDRs: we just talked about multi...
A: I'm running through KEPs in all my free time right now, so you're on my short list. Cool. Okay, multi-network is not going into 1.29, right?
A
Okay,
n,
if
tables
Dan,
I
first
of
all,
I
want
to
say
anybody
who
hasn't
read
this
cap
should
go
read
this
cap.
It
was
an
excellent
cap
at
a
a
great
level
of
abstraction,
with
details,
but
not
so
many
details.
It
was
a
really
good
cap.
Thank
you,
Dan
Alpha
29.
Yes,.
F: Yeah, I feel like the KEP ought to merge in a week or so, because you were basically okay with it, and Wojciech's check was basically okay, so...
A
Well,
we've
got
until
the
sixth
to
get
it
merged
right.
So,
okay,
there's
a
relatively
new
proposal
for
disabling
health
check
ports
for
load,
balancers,
sort
of
the
same
way
we
allow
disabling
node
ports,
I
I
hate
it,
but
I
acknowledge
the
use
case,
and
so
I
haven't
seen
a
kep
yet,
but
the
author
says
they
would
like
to
get
it
in
29
IPM
for
multiple
cluster
cers
is
what
we
just
talked
about.
A: Excellent. AdminNetworkPolicy is not phase-locked, but I do need to come back to that. We did this one; that's the same. NodePort reservations looks like it's set for GA in 1.29; there have been no objections or problems that I've seen. Topology-aware routing: Rob, you...
A: Okay. Purely selfishly, I'm a little happy that the list is getting small. It means that we can start to give some real bandwidth to some of the bigger issues that we've been ignoring, and this has been a hellish cycle for me. If you have a KEP that isn't on this list... I'm gonna stop sharing.
A
If
you
have
a
c
that
isn't
on
this
list-
and
you
think
should
be-
please
tag
it
Sig,
Network
and
let
me
know
and
we'll
look
at
it:
I've
gone
through
and
everything
that
was
looked
like
it
was
going
to
get
touched
in
29,
I
added
the
opted
in
label,
and,
if
that,
if
you
find
the
kep
that
you're
responsible
for
is
not
queued
up
and
making
the
release
team
happy,
please
let
me
know
as
soon
as
possible,
please
with
that
I'm
gonna
have
to
drop
so
I'm
GNA
hand
ownership
over
to
who
wants
ownership,
Dan
you're
first
on
my
screen,
so
you're
going
to
get
it
okay
and
you
can
run
the
rest.
A: I'll make you host... you are now the host, and I have to drop off. Wonderful to see everybody. I asked on the chat, and also on Slack, who's going to KubeCon; let me know, because I'd like to see about planning, you know, events and lunch and stuff. So if you get a second, just look at that thread and say yes or no thanks.
B: The problem is that if those addresses, the ones that are there at startup and the ones that are added later by the cloud provider, are not the same, then host-network pods are going to have different addresses, right, depending on when they are created. I don't think that this is correct behavior, so I have a pull request; I opened an issue, and another person, from the vSphere cloud provider, is complaining too. But the problem that they have...
B
Is
that
when
they
know
the
starts,
the
ports
are
schedul,
they
don't
have
IPS.
So
that's
that's
another
P
that
is
related
to
that.
If
we
change
the
the
cubet
logic
to
set
up
the
address,
only
when
the
crow
provider
said
addresses,
there
is
a
time
that
the
the
node
is
ready
but
doesn't
have
any
address
at
all,
so
the
pots
start
and
they
don't
find
any
IP
and
that
the
to
fix
that
back
then
we
should
one
option
is
we
should
not
set
the
node
ready
until
they
have
address.
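A minimal sketch of that last option, gating node readiness on addresses being present; this is only an illustration of the idea being discussed, not the actual kubelet logic:

```go
// Sketch: don't report the node as Ready until addresses exist, so
// host-network pods never start on a node that has no IP yet.
// Illustration only; not real kubelet code.
package main

import "fmt"

type Node struct {
	Addresses []string // addresses as set by kubelet or the cloud provider
}

// computeReady gates readiness on having at least one address,
// in addition to whatever other readiness checks already pass.
func computeReady(n Node, otherChecksPass bool) bool {
	return otherChecksPass && len(n.Addresses) > 0
}

func main() {
	n := Node{} // registered, but the cloud provider hasn't set addresses yet
	fmt.Println(computeReady(n, true)) // false: hold Ready until an address exists
	n.Addresses = append(n.Addresses, "10.0.0.7")
	fmt.Println(computeReady(n, true)) // true
}
```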
C: Can I do my presentation? I have some slides I want to show; I'll try to go over them quite quickly. I just want to give you an update on what we did over the last few months. We initially divided the whole multi-networking effort into a few phases, and we are considering the design for phase one completed, and I want to give you an update on that.
C: Okay, all right, so let me quickly go over this. As I started saying... can you see my screen? Yeah? Okay. As I mentioned, we have the phases, and we consider the first phase completed, and I want to go over what we came up with. This is just a reminder of what our goal mainly is: to ensure that we have some sort of handle, a representation of a network, inside a Kubernetes cluster, something that we don't have today.
C: We want to have some sort of standardization of the API, and the ultimate goal is to support two main use cases, which I want to talk about later on. But first, on what a network is: in our design we decided that we don't want to put any enforcement on what a network is for a specific implementation. We don't impose any definition of it; it boils down to what the implementation thinks a network is for them. This is very important: we want to make it as flexible as possible here. That's kind of...
C: ...the gist of this. As for the main use cases that we want to cover, we have more in the link...
C
I
will
have
at
the
end
of
presentation,
but
the
two
ones
that
I
want
to
kind
of
put
emphasis
on
is
the
basic
like
multiple
interfaces
into
the
Pod
so
and
in
this
case
and
I
have
some
description
of
a
user
story
where
some
sort
of
an
separation
of
traffics,
where
I
I
could
could
identify
in
my
work,
CL
two
types
of
traffics,
which
I
for
some
sort
of
compliance
I
need
to
isolate
across
my
infrastructure.
C
So
here
I
want
to
have
two
interfaces
inside
the
Pod,
and
my
application
is
smart
enough
to
pick
and
choose
which
interface
to
use
for
what.
So
that's
one
use
case
and
the
other
use
case
is
basically
the
multi-tenancy
via
multi
networking,
so
not
multiple
interfaces
into
the
Pod,
but
multi
networks
in
the
cluster,
a
single
cluster
and
then
I
want
to
have
some
pods
connecting
to
one
network
and
some
pods
to
the
other
networks
without
losing
any
kubernetes
capabilities
right.
C
So
Services
Network
policies
Etc
all
functioning
in
parallel
in
the
such
cluster.
How
the
networks,
how
pods
communicate
across
networks
that's
up
to
the
the
vendor
and
whether
they
want
to
Route
them
or
not
how
they
interact?
This
is,
of
course,
up
to
the
implementation,
but
the
cluster
itself
provide
you.
The
capability
to
I
want
my
namespaced
pod
networks
connected
to
a
specific
interface
or
a
network
representation
of
a
network
right.
So
those
are
the
kind
of
two
highlighted
U
use
cases
I
want
to
talk
about.
C
So
what
we
want
to
propose
and
what
we
come
up
with
is
to
introduce
a
new
core
object
called
pod
Network.
It
will
be
representing
the
the
the
the
it
will
be,
the
handle
that
the
Pod
connects
to
it
has
to
be
vendor
agnostic,
and
it
will
includes
what
we
come
up
with.
We
kind
of
copy
pasted
stuff
from
K2
API,
so
we
want
to
provide
a
capability
for
any
implementation
to
plug
into
this
and
provide
their
own
parameters.
C
What's
not
to
this
object
so
that
they
can
parameter
ize
it
as
the
way
they
want
to,
and
here's
an
example
of
this.
Basically,
what
does
it
show
is
simple
things
like
provider
to
be
multivendor
cap
to
have
multivendor
capabilities
in
single
cluster?
C
If
someone
needs
that
and
then
just
simple
parameters
to
to
your
custom
things,
this
is
a
list,
something
new
that
we
want
to
do
as
well
details
in
the
design,
doc
that
I
going
to
have
in
the
link,
but
yeah
we
want
to
make
it
a
list
and
basically,
pod
network
is
the
thing
that
we
want
to
then
have
a
reference
to
across
all
the
other
core
objects
in
kubernetes.
That's
the
the
kind
of
central
point
of
this.
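To make that concrete, a hypothetical sketch of what such a PodNetwork handle could look like: a provider field plus a Gateway-API-style list of parameter references. Every field name here is invented for illustration; the actual shape lives in the design doc linked from the slides:

```go
// Hypothetical sketch of the proposed PodNetwork object described in the
// presentation. All names are invented for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

// ParametersRef points at a vendor-specific custom resource that
// parameterizes the network (the pattern borrowed from Gateway API).
type ParametersRef struct {
	Group string `json:"group"`
	Kind  string `json:"kind"`
	Name  string `json:"name"`
}

// PodNetwork is the vendor-agnostic handle that pods attach to.
type PodNetwork struct {
	Name           string          `json:"name"`
	Provider       string          `json:"provider,omitempty"`       // enables multi-vendor clusters
	ParametersRefs []ParametersRef `json:"parametersRefs,omitempty"` // a list, per the presentation
}

func main() {
	pn := PodNetwork{
		Name:     "compliance-net",
		Provider: "example.vendor.io",
		ParametersRefs: []ParametersRef{
			{Group: "nets.example.vendor.io", Kind: "NetworkParams", Name: "compliance"},
		},
	}
	out, _ := json.MarshalIndent(pn, "", "  ")
	fmt.Println(string(out))
}
```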
C: We also want to introduce something called the default PodNetwork. Think about this like the default namespace we have today in the cluster: the default network represents whatever we treat today's network as. Basically, whatever we connect to in today's cluster is the default network, right? It comes with the cluster when it is created; there are ways to auto-create it.
C
So
for
some
cluster
that
don't
really
care
about
this,
it
will
be
just
there
in
the
background
and
they
don't
really
have
to
worry
about
it
and
and
and
and
think
about
this.
This
will
be
completely
transparent
to
clusters
that
really
probably
90%
of
the
Clusters
that
don't
need
to
in
care
about
the
Pod
networking.
The
other
object
that
we
want
to
introduce
to
the
story
is
POD
Network
attachment.
So
this
is
a
more
kind
of
for
our
for
some
of
the
implementation
to
provide
more
flexibility
to
configuration
of
the
Pod
network.
C: PodNetworkAttachment points to a PodNetwork, so it is still part of one specific pod network, but it provides pod-level parameterization of the attachment: how specific pods attach to a specific pod network. With this we want to provide an ability where the PodNetwork can be a logical representation of some network that, in my implementation, is a thing, and then how I really attach to it is something the PodNetworkAttachment can specify even further, right?
C
That
can
be
like,
for
example,
what
physically
I'm
connecting
how
physically
I'm
connecting
to
to
that
specific
logical,
pod
Network
would
be
defined
by
this
bot,
Network
attachment
and,
of
course
it
it
comes
with
a
vendor
specific
custom
Resource
as
well.
On
top
of
that,
so
you
can
specify
those
in
two
places
and
those
can
be
completely
separate
CRS
in
your
implementation.
This
is
just
a
simple
example,
very
similar
to
the
previous
one.
C: This one has just the mandatory pod network name, basically which PodNetwork the PodNetworkAttachment belongs to, and then some custom optional parameters, if you need those.
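Again as a hypothetical illustration, with invented names (the real shape is in the design doc), the attachment could look roughly like:

```go
// Hypothetical sketch of the proposed PodNetworkAttachment: a mandatory
// reference to its PodNetwork plus optional pod-level parameters.
// Names invented for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

type PodNetworkAttachment struct {
	Name           string            `json:"name"`
	PodNetworkName string            `json:"podNetworkName"`       // mandatory: which PodNetwork this refines
	Parameters     map[string]string `json:"parameters,omitempty"` // optional pod-level attachment parameters
}

func main() {
	att := PodNetworkAttachment{
		Name:           "compliance-net-sriov",
		PodNetworkName: "compliance-net",
		Parameters:     map[string]string{"interfaceType": "sriov"},
	}
	out, _ := json.MarshalIndent(att, "", "  ")
	fmt.Println(string(out))
}
```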
C: Now, with all this, we want to introduce changes to the pod spec as well. What we mean by that is: this will be the first core object that we want to modify, because in phase one we just want to introduce the API and have a means to use it somehow.
C: So we want to introduce a change to the pod spec, and I'm aware we have to go to the other SIGs to talk about this; I'm using SIG Network as my guinea pig for going through this presentation, and my next step is to go to those other SIGs and talk about it. But I want to introduce a networks section in the pod spec.
C
We
would
identify
what
pod
network
name
I
want
to
attach
to,
and
this
can
be
exchangeable
either
p
network
name
or
I
can
specify
as
the
last
one
item
when
I
can
specify
my
pod
Network
attachment
name.
So
one
of
the
two
and
of
course
this
section
is
optional.
C
If
you
don't
specify
it,
we
will
autop
populate
with
the
default
name
and
it
will
just
function
similar
to
if
I'm
not
mistaken,
that
there
is
a
field
called
Noe
name
which
is
filled
in
by
scheduler.
You
can
set
it
out
yourself,
but
if
you
don't,
then
scheduler
will
put
the
node
name
into
a
podspec.
So
this
is
something
similar
to
what
we
want
to
do
here
with
the
default
pod
Network.
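A sketch of how that optional networks section and its defaulting could behave, with invented field names, purely to illustrate the nodeName-like defaulting described above:

```go
// Hypothetical sketch of the proposed networks section in the pod spec:
// each entry names either a PodNetwork or a PodNetworkAttachment (one of
// the two), and an empty list defaults to the "default" PodNetwork,
// much like the scheduler fills in nodeName. Illustration only.
package main

import "fmt"

type PodNetworkRef struct {
	PodNetworkName           string // exactly one of these two is set
	PodNetworkAttachmentName string
}

type PodSpec struct {
	Networks []PodNetworkRef // optional; empty means "default"
}

// defaultNetworks mimics the proposed defaulting behavior.
func defaultNetworks(spec *PodSpec) {
	if len(spec.Networks) == 0 {
		spec.Networks = []PodNetworkRef{{PodNetworkName: "default"}}
	}
}

func main() {
	var spec PodSpec
	defaultNetworks(&spec)
	// prints {Networks:[{PodNetworkName:default PodNetworkAttachmentName:}]}
	fmt.Printf("%+v\n", spec)
}
```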
C: Additionally, on top of the pod spec change, we want to change the pod status as well; this is a smaller change.
C: What we'd like to do is expand that struct with the pod network name; that will basically link a specific IP to a specific pod network, and that's the only thing we need, basically, so that in that list we can properly identify which IP belongs to which pod network. The podIP field would stay unchanged and would behave the same way it does today. And we have an interface name; I don't want to go into that, but it's in the doc.
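Illustrating the described status change with invented field names (the real change would land in the pod status API):

```go
// Hypothetical sketch of the pod status change: podIPs entries gain a
// field linking each IP to the pod network it came from, while the
// legacy podIP field stays unchanged. Illustration only.
package main

import "fmt"

type PodIP struct {
	IP             string
	PodNetworkName string // new: which pod network this IP belongs to
}

type PodStatus struct {
	PodIP  string  // unchanged, behaves as today
	PodIPs []PodIP // one entry per attached network
}

func main() {
	st := PodStatus{
		PodIP: "10.244.1.5",
		PodIPs: []PodIP{
			{IP: "10.244.1.5", PodNetworkName: "default"},
			{IP: "192.168.7.9", PodNetworkName: "compliance-net"},
		},
	}
	fmt.Printf("%+v\n", st)
}
```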
C: If you have questions, you can read it and leave comments. Pod scheduler changes: the only change we would like is for the pod scheduler to do some basic validation of the pod network names. We don't want to provide some sort of selectiveness here, scheduling based on where a pod network is available; we don't want to do that in phase one. So initially this is a very straightforward and simple change, where we would like the scheduler to just see whether the pod network even exists.
C: That's the first thing. And then the other thing: there are some conditions in the PodNetwork object that we want to leverage to see whether the network is ready or not. If it's not, then we just block the scheduling of such a pod, and provide the proper state for the pod itself when a specific pod network doesn't exist or is not ready. So this is a small, but maybe big, change; as minimal as we can make it. Where are we at with this?
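A minimal sketch of the scheduler check being described, existence plus a readiness condition; an illustration of the phase-one behavior, not a real scheduler plugin:

```go
// Sketch: a pod is schedulable only if every referenced pod network
// exists and its Ready condition is true. Illustration only.
package main

import (
	"errors"
	"fmt"
)

type PodNetworkInfo struct {
	Ready bool // derived from conditions on the PodNetwork object
}

// checkPodNetworks blocks scheduling when a referenced network is
// missing or not ready, so the pod can be given a proper state.
func checkPodNetworks(refs []string, known map[string]PodNetworkInfo) error {
	for _, name := range refs {
		info, ok := known[name]
		if !ok {
			return fmt.Errorf("pod network %q does not exist", name)
		}
		if !info.Ready {
			return errors.New("pod network " + name + " is not ready")
		}
	}
	return nil
}

func main() {
	known := map[string]PodNetworkInfo{"default": {Ready: true}}
	fmt.Println(checkPodNetworks([]string{"default"}, known))        // <nil>
	fmt.Println(checkPodNetworks([]string{"compliance-net"}, known)) // error: does not exist
}
```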
C: In terms of project status, as I mentioned, we have our use cases and requirements defined; the two I presented were just two from the eight or so that we defined, and I have a link at the end that you can look at. Then, we consider the phase one design complete, which I just showed you.
C: Basically, we defined the CRs and what sort of changes we want to make, and some behavioral dependencies are defined. Initially the implementation was targeted for 1.29, but since we have just one week until KEP freeze, that probably won't happen, so I'm changing the target to 1.30 for having this implemented. And the links: there's a PR and there's a phase one design doc.
D: No, it's cool; I appreciate the presentation. I had lost track of everything that was going on, but it all seems reasonable to me. I don't have any questions personally.
C: Okay, thank you. The link to the slides is in the minutes, and then those links are in the slides. I think all the links are open for comments, so please feel free to drop any if you want, or read through the further details the design has. Thank you.
F
You
all
right.
The
last
agenda
item
is
Dave
P,
but
we
don't
seem
to
have
any
Dave
P's
and
we
are
basically
out
of
time
anyway,
unless
somebody
here
is
secretly
Dave
P.
D
Yeah
I
think
he
Dr
he
dropped,
but
that's
fine.
We
can
cover
next
time
all.
B: ...not quite there yet, but the timeline is not right.