From YouTube: Multi-Network Community Sync Meeting for 20230712
A
All right, welcome everyone to the Multi-Network Community Sync; it's July 12th. Today we will cover some AIs (action items) from last week and then continue our discussion about the KEP from last week.
A
I think we finalized on the PodNetworkInterface object, and we can kind of start from there. But before that, the AIs: just to remind folks, we were initially talking about reducing the scope of phase one, and one of the questions and doubts was whether we should reduce the ability to retrieve networking information for the pod from within the pod. One of the ideas was to use the downward API. I looked into that, and basically the downward API is a way to pass pod information into the container. So that's what the downward API is. I think that's useful, and I think that might be the way to provide those APIs. We did mention in the requirements that it can be done that way; it doesn't exactly have to be, that was just one of the proposals, but unless someone else in the room has other ideas on how to do it, I think that will be the way to do this.
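For reference, the downward API the speaker is describing exposes pod fields to containers as environment variables or files. Below is a minimal sketch of a pod spec using it, built here as a Python dict; the manifest shape and the `status.podIP` fieldRef are the standard Kubernetes ones, while the pod and container names are made up for illustration:

```python
import json

# Sketch of a pod spec using the downward API to expose the pod's IP
# as an environment variable. status.podIP is a real supported fieldRef;
# exposing richer multi-network information this way is what the KEP
# discussion above is about, not something that exists today.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downward-demo"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "busybox",
            "env": [{
                "name": "POD_IP",
                "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}},
            }],
        }],
    },
}

print(json.dumps(pod["spec"]["containers"][0]["env"][0], sort_keys=True))
```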
A
I think the question was whether we really need it in phase one, and the related question was how much work it is to do this. In terms of amount of work, I would say it is medium-sized, considering I saw the example of how the hugepages API has been passed down; there is a PR, sorry, that adds it.
A
So there is an example of how to do it, but even then I would try to limit our scope, even having that example, unless we are going to have a... I'm looking at it maybe from the perspective of myself doing this whole thing; that's why I'm a bit reluctant to push for it.
A
I lean towards still pushing it out of scope. In terms of having this item in the KEP, I think we can definitely mention it there.
B
I know that Tomo had comments about this, but he's not able to attend today. So could we keep this on the agenda for next week's meeting?
A
Sure. And I think there was another AI; we were having a discussion about... the person who raised it is not here... we were having a discussion about the...
A
It is slightly what Multus is doing today as well, because it boils down to what the CNI can configure, right, with chaining. With a single NetworkAttachmentDefinition having one configuration, and basically within the CNI configuration one chain, I can have multiple calls to multiple CNIs, and it will, with a single call, create multiple interfaces. The question is: should our KEP here support this? I think Tomo was strongly asking about this.
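The CNI chaining being described works through a network configuration list ("conflist"): one configuration whose `plugins` array is invoked in order by a single CNI invocation for the same attachment. A small sketch of that format, with illustrative plugin choices:

```python
# Sketch of a CNI network configuration list ("conflist"), the chaining
# mechanism discussed above: one configuration, several plugins run in
# order by a single CNI call. The network name and plugin parameters
# here are illustrative, not from the meeting.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "ipam": {"type": "host-local", "subnet": "10.10.0.0/16"},
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Every plugin in the chain runs for the same pod attachment.
print([p["type"] for p in conflist["plugins"]])
```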
A
There are some examples of how this has been done. What was the company... I think Uber, or Lyft; there is an example of a company doing this, and I think this was the one here. I'm not sure anyone else has had a chance to look into what it boils down to.
The use case itself is slightly different. It's not at all about multi-networking; it's just about compatibility for that use case. They add an additional interface just to support, not host networking, but access to the host network, and to have a faster path for that, and this is how they chose to do it. So the question stands: is something like that what we should represent here in the pod networking?
A
I'm kind of opposed to that, because having multiple interfaces with a single CNI call kind of breaks the whole concept of whatever we do today, the basic standard for Kubernetes. I know that's possible, but that's just the CNI's capability as an implementation, and I don't think we should implement this.
B
Yeah, I'm going to say a bit the same thing. Last time we discussed this, last week, we were missing valuable input from Michael; he's not here still, but I guess he will be back for next week's meeting. So perhaps we can discuss this in next week's meeting, but I can recap a bit the general concern. During the time we have been using Multus, we have recognized this valuable use case that we were not able to achieve with Multus, so this would be a nice-to-have. So if we are not expecting to cover this use case, I guess at least we should acknowledge it explicitly somewhere, and log somewhere that we discussed this use case and decided not to support it. So let's log that somewhere. In any case, I expect valuable input from Michael next week, or whenever he comes back. There is another area of concern: a nice-to-have that we cannot do with Multus but that might be supported by a CNI-based implementation.
B
The specific attachment parameters; that's another topic we have discussed, and maybe you will come back to that later on. But that's another thing that we want to have that is not possible with Multus: having specific parameters on the attachment that can be passed down to implementations.
B
So those two are nice-to-haves that we can discuss, whether we want to support them or not, and we can just log that we had this discussion, and the reasoning behind not supporting them if we decide not to support them. And the other concern that we have is: what can we do today with Multus and a CNI-based implementation that is going to be prevented from being achieved because of this specification?
A
You can still do whatever you do today; whether you are going to use the pod network API is a different matter. So saying that we are going to block something completely, that's not true. It's that we are not going to represent it with this API; maybe that's your question. You are still going to be able to do this the same way you are doing it today; it's not like we are going to prevent you from doing it. The question is whether this API is going to represent it.
A
So let's not call it out as us blocking something, because we are not blocking anything. Each implementation can still do whatever it wishes. This is just me being picky about the wording here, but let's make sure that's the case, because nobody is going to block anything here; it's just whether the API is going to support a specific use case.
A
Per?
C
I was going to say: as soon as you start talking about things like this, whether you add multiple, do add and remove, or go the declarative route and say, this is the set of interfaces I want, and present the full structure, we will always end up in those discussions. I think the current situation, where you just add and remove one interface, is the worst. It would be better to do add and remove.
A
That's true, right; that's what I'm saying: treating CNI as an implementation, and then how that implementation fits towards...
A
What would it mean if we wanted to support that? Say that a pod network represents your infrastructure in some way, or the ability to connect to a network; it's hard to say "network", so let's say this is infrastructure inside your cluster, not external infrastructure, just for the sake of using a different word. So let's say you want to have an infrastructure which fulfills all the requirements that we defined.
A
So basically, one of the main requirements is that all the pods can communicate with each other on that network infrastructure; that's the main requirement. How you achieve this is up to you, and this is what even Tomo brought up with the other use case here, the one that... I'm not sure who brought it.
A
Someone did, but basically, even from this use case that was left out there, the use case he was proposing here, that was exactly how they were achieving the connectivity: one interface for connectivity between the nodes, and the other interface for connectivity on the node.
A
So that was the reason they created the two interfaces from a single call: just to achieve that connectivity everywhere between the pods. That's how they chose to do it, and that's why they do this.
A
If an implementation is willing to do that and still play together with all the rest of the APIs that we have in Kubernetes, then yes, the sky is the limit at that point, and everyone can do whatever they want. But should we now represent those two interfaces in some way as one interface? I don't think we should, considering we rely on a single IP per pod network, the kind of infrastructure that then fits with all the NetworkPolicy objects or Services.
A
Otherwise, if we were to represent two of those with a single item, we would have to refactor most of the other stuff, which is basically impossible. Today's APIs, the other APIs that we have in Kubernetes, don't even support that use case, where I can have multiple interfaces from a single CNI call and then represent that towards the other elements of the API.
A
That's the first argument against supporting this, against having our API support such a use case, where my single pod network introduces two interfaces into the pod, and so can introduce two IPs into the pod, and so on and so forth.
A
On the same pod network? Yes... no, no: per pod network, because, more than anything, every IP on the pod is represented by a pod network API object. That's kind of what we want to have, and that's proper, because that's how we identify things on the pod. What if I have a pod network that can connect multiple interfaces at once to a single pod, and those can each get their own IPs?
A
That's my problem with this one. And going back even to the Lyft use case: they don't even assign the IPs. They leverage the kernel capability that the IP defined in the namespace is assumed by all the interfaces, and they don't even assign an IP to the other interface. So basically, it's just a pipe, too.
C
So today... I mean, with v6 it's easy: one interface, and from the outside you use routes and sort of push in, and you get any number of addresses on that interface, on different L3 networks. So that, I think, needs to be supported. Whether you have different physical interfaces or not, why would the pod care?
A
It wouldn't; I agree at this point. But look at it from a different point of view. If you had multiple of those, I agree with you that the use case is covered, because each L3... and you can have that implementation as well. But what you just said still works for us: we can have a single physical interface, let's say a VLAN or whatever you use.
A
Let's say we have one interface in your namespace, and then each pod network represents the specific L3 network that you connect to through that interface. We can model that with this API; that's possible. Now, what if I have it the other way around: I have a single pod network, and that introduces those two L3 networks for you. Is that something that you would want to do? That's the question, because now you have a single interface and a single pod network, but that single pod network adds two of those networks for you.
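The first modeling option discussed here, one pod interface with a separate pod network object per reachable L3 network, can be sketched with hypothetical objects. All field names below are invented for illustration; this is the shape of the idea under discussion, not a published API:

```python
# Hypothetical sketch of the "one object per L3 network" modeling option
# discussed above: the pod keeps a single interface, and each L3 network
# reachable through it is its own pod-network object, so Services and
# NetworkPolicy can refer to exactly one network (and one IP) at a time.
pod_networks = [
    {"name": "l3-a", "subnet": "10.1.0.0/24", "viaInterface": "eth0"},
    {"name": "l3-b", "subnet": "10.2.0.0/24", "viaInterface": "eth0"},
]

# Both networks ride on the same pod interface, but each has its own
# identity in the API.
shared_iface = {n["viaInterface"] for n in pod_networks}
print(sorted(n["name"] for n in pod_networks), shared_iface)
```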
C
Yeah, but the question is: what is a pod network? We discussed different types of pod network. If what we call it is just a layer 3 network, then you represent one of them. How an implementation manages to do that, whether it sets up link-local addressing to drive your L3, or whether it sets up two networks, is really up to the implementations, I think; you get back to that.
C
If someone sets something up and calls it more of an Ethernet-like network, you have a layer 2 network where you can do things, and then on top of that you do layer 3 networking; well, you get what you ask for. But I mean, what is the problem again? We did discuss this: is it a sort of Ethernet type of network, so you have one network that spans everything, and your...
C
Is it one network that you use over all your nodes and attach pods to, or is it something that a designer has set up, by design? So you have a structure where you have maybe one network per node that you attach pods to, and then you route in between. Without having a way to understand that connectivity, at what level you are connected and what the topology is, it's really hard to do an API that can be applicable to all of them.
C
Then you're going to have to assume the worst case, always, I think. And we had those discussions before, where we discussed networks of type IP and networks of type Ethernet; without knowing which one you have, they're so different.
C
I'm saying that I think we need to specialize on the type, because there's so much more freedom when you have a network of type Ethernet: people can run whatever on it, and as a user I should be able to expect to do all these things. If you specify that you have IP connectivity, then that's what you have. We can specify that, and sort of say that when you have it, you get one IP address that you're going to use, which the implementation hands out.
C
From a Kubernetes standpoint, we can specify: this network type gives you these services and behaves like this. But with Ethernet, if I don't decide to do L3 on top of it, the whole world is open: it can be connected to normal machines on the outside, and then all sorts of weird stuff can happen there. So the Ethernet type is...
C
...what we have today with Multus and so on, but it's the thing that we should try to make sure is only used in very rare cases, like if you want to run a router, or you want to run some Ethernet-type protocol. The normal connectivity should be IP connectivity, and we should specify that; we can put many more restrictions on what that is than what we...
A
...can do. So I think I would answer it this way: the APIs today rely on the other APIs that we have to fit towards; they rely on IP for NetworkPolicy and Services and Ingress and whatnot.
C
So, I've been gone; I mean, you haven't seen me. I was just in last week, and I need to drop now. I've been gone for like five, six weeks. I'll promise for next week: I will read through everything that's been said over the last two months, and sort out where the spec is, and I'll contact you on the side channel.
A
To what Per was saying in terms of the Ethernet connection: let's assume this is just L2 at that point, which then definitely will have reduced functionality, reduced Kubernetes functionality, compared to what's available. I think the standard should be to go to the IP layer, where we will have the IP on which we can rely, and I think that is the hardest case. That's what I want to try to get to.
A
Or do you just need a handle to the interface that you connect to, and then the other IPs are something that you manage internally? Because there's that aspect of what functionality from Kubernetes you expect on each of those networks, all those L3 networks, which you described as multiple IPs; let's call them L3-level networks.
So if I expect each of the L3s to have the full Kubernetes capabilities, then I would position this as each subnet being represented by a separate PodNetwork object. Even then, my implementation can implement it in such a way that I apply all those networks on a single interface of the pod. It doesn't mean, as you mentioned, that each network represents a new interface inside the pod; that's what we are familiar with today, but from the KEP's point of view it doesn't have to be like that, and it basically boils down to the implementation.
A
If your implementation can achieve that: I have one interface, always one pipe, and then, whatever other networks you want to have, okay, you want the default, you want A and B and so on, I will just add those networks. The IPAM creates the IPs for that interface, and there you go: create routes, whatever I have to do. My implementation knows all the specifics and does that.
D
What you're suggesting, basically: even if you have two or more networks, and there is a pod connected to the same pod networks, each having a different IP, the implementation could actually model this as two different interfaces, or as a single interface with two different subnets.
A
Exactly, yeah; if your implementation wants to handle it that way, yes, that's what this API can represent. The PodNetwork API doesn't decide that it has to be a separate eth in the pod namespace; it's up to your implementation. But keep in mind that then you are responsible for how to handle it: pipe it all through, and do all the plumbing, whatever is needed for that. But that's your implementation; that's how you handle it.
A
Most
use
cases
that
we
are
familiar
at
least
I'm
familiar
with
and-
and
most
of
us
are
familiar
with,
probably
is
we
always
see
a
pod
Network
as
a
separate
interface
in
the
namespace
of
a
pot
right
where
it's
a
separate
eth
and
then
because
that's
how
we
usually
do
that,
but
pair
is
bringing
up
the
L3
use
case
right
where
it's
not
doesn't
have
to
be
right.
It
can
be
a
separate
L3.
That's
what
what
this?
Why
that's?
A
That's why, in my initial proposal from last year, we had all the L2 parameters: MAC address, IP, whatnot. But then Per said: why? I might not have that; my network is going to be just IPs. And he was right, because an implementation can do completely different things. It can be just a single pipe to the pod, and then my pod networks are isolated or layered in a completely different way. It doesn't always have to be L2.
D
I do know, because one of the drivers for this KEP is also Telco. In Telco you have these weird applications, because they moved the physical world into, let's say, originally virtual, and then into containers. So I know that in the virtual space, representing Nokia right now, we had this use case quite a bit; I'm not sure about the container space.
A
I think we can try to talk about this, I'm not sure, next week; it's probably going to take another half an hour, and hopefully we won't have to start again with that one. We'll see if Tomo can join next week and convince him on this. All right, let's continue on the KEP itself. Last week we ended the discussion on the PodNetworkInterface object, and I just want to share this. Is that the right one? Yeah.
A
So we were discussing... let's maybe go from this with a bit of a reminder: how we could change the pod spec, and basically the ability to provide parameters for how you attach to a pod network. You either just directly say which network you want to attach to, or you want to provide some additional parameters on how you attach, for this specific pod. I was thinking of introducing this PodNetworkInterface object, and this is the guy here: a network interface.
A
That's supposed to be PodNetworkInterface, which would contain, again, the ability to point to a custom resource, which the implementation can then shape however it wants. I think this would work for most use cases. From the feedback we heard last week, there was the desire to be able to specify this; and that's something that I was initially mentioning, that this object could be specified only for a single pod, because I was thinking of having some parameters like, for example, IP address and MAC address, which would then be applicable just per pod.
A
So it could not be reused in a ReplicaSet. But I think we discussed that we should be able to do both: either specify this guy once per pod, or share it across pods, where I additionally provide some tweaks to how my network is attached. One of the examples was where my pod network is represented by, let's say, a simple macvlan plus an additional high-performance SR-IOV, and they are connected to the same infrastructure network, and now, in the infrastructure...
A
In the environment that I'm in, they are all connected to the same L2, and I want the ability to choose between one or the other, the faster high-performance one or not, while they are still on the same subnet, with the same IPAM, and represented by one PodNetwork object. That is a valid use case, because that's how we can fit in: since IPAM is the story behind Services and NetworkPolicies, we don't have to replicate those for the...
A
If you were to split those into two networks... that's what we would want to achieve with the parameters: they could be the additional implementation-specific indicator of which network, high performance or normal, and of how I want to attach to those.
A
So this was good, but then I was thinking about this one: the status for this guy has name, IP address, and MAC address. Those are per-pod items, and those cannot exist in this approach: if I can apply the same interface spec to multiple pods, each pod cannot report its own status for it, and this is where I'm losing capability. This works for the case where I want to use the same spec for multiple pods.
A
But then it doesn't work for the case where I want to have one per pod. How would I report this information per pod? And another question: would that even be possible at that point? Probably not, because then this guy can report only things set by the implementation, and whether it is ready or not.
A
So maybe the question here would be this: do we need an object that will really represent how the attachment to a pod is done, the networking information of it? Because currently we don't have that data. The only thing that we have today is in the pod status; let me show that. So basically, what we have today is this: those two things, and I slightly modified that; I will get to it.
A
Now, when I think about this, considering the network can be represented in various ways, someone can have L2, someone L3, I don't think there is one way we could represent each attachment for each of the implementations. So maybe the only thing that we should represent is the IP, and anything else should be provided by the implementation itself. Maybe that should be the case.
A
Let's assume... let's try to model this with Multus. Let's start from the top on this one. So where is my pod network here? The pod network will point to... where are you now?
A
This pod network will have parameter references, and basically the pod network for that use case would have links to two NetworkAttachmentDefinitions. One will be, let's call it, slow, and that's a macvlan; the other is fast, and that's SR-IOV. Let's say we had those two NetworkAttachmentDefinitions; they are internally defined so that they share the IPAM range.
A
So basically, those two are connected to the same L2 in this case, and they share the subnet. But this is just the definition of the pod network. The thing now is: how do I pick one over the other? The item is the same, but how do I pick one of them? To do that, you now go to the pod.
A
Maybe the next thing that you create will be the PodNetworkInterface.
A
That would define how you attach to a specific pod network, and then you would reference your own object in here. Let's assume Multus comes up with some sort of new CRD that defines that. Right now they have all that structure defined inside the annotation JSON; I'm not sure how familiar you all are with it, but Multus, besides the NetworkAttachmentDefinition, has a JSON format for the annotation that you can specify in the pod's annotations.
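The Multus annotation being referred to is `k8s.v1.cni.cncf.io/networks`, whose value is a JSON list of per-pod attachment requests. A small sketch of building it; the annotation key and the `name`/`namespace`/`interface` fields are the documented Multus format, while the network name is illustrative:

```python
import json

# Sketch of the Multus per-pod annotation discussed above: the pod
# selects its secondary networks via a JSON list carried in an
# annotation. "fast" is an illustrative NetworkAttachmentDefinition name.
networks = [
    {"name": "fast", "namespace": "default", "interface": "net1"},
]
annotations = {"k8s.v1.cni.cncf.io/networks": json.dumps(networks)}

print(annotations["k8s.v1.cni.cncf.io/networks"])
```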
A
So let's assume that whole thing is transferred into a new CRD, and it is referenced here. In that CRD they have also added some enumerator stating: select one or two. And basically, this is how I would define it: okay, for those pods, I create here a reference to the new CRD that says I want to use the fast connection.
A
So that's how I would define it here, through this PodNetworkInterface object. And then, lastly, inside the pod, instead of saying I want to connect to that network... let me just copy-paste it correctly... no, not this one... so then, inside the pod, what I do for some of the pods is: instead of the network directly, I use a pod network interface name, and I point to that network interface that I created. So: my interface, fast.
A
I have to specify either the network interface name or the network, one of the two, and then basically I know that this pod is going to use the fast one. And then there can be another pod, for which I will create a pod interface that uses the slow one, the macvlan approach, and then, instead of fast, I will use the other one.
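The fast/slow selection walked through above could be sketched roughly as follows. Every kind and field name here is hypothetical, reflecting the meeting discussion rather than any published API:

```python
# Hypothetical sketch of the per-pod attachment selection discussed
# above: a PodNetworkInterface object picks the "fast" flavor of one
# pod network, and the pod references that object instead of the
# network directly. All names are invented for illustration.
fast_iface = {
    "kind": "PodNetworkInterface",
    "metadata": {"name": "my-interface-fast"},
    "spec": {
        "podNetwork": "datanet",
        # Points at an implementation-owned CRD (e.g. a Multus-style
        # object) that says which attachment flavor to use.
        "parametersRef": {"kind": "MultusAttachment", "name": "fast"},
    },
}

# In the pod spec, one of the two fields is set, never both:
pod_net_entry = {"podNetworkInterfaceName": "my-interface-fast"}
# ...instead of: {"network": "datanet"}

print(pod_net_entry)
```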
A
This will still have to comply with one of the conditions here... this guy, now, and add this... yeah, I think this is the same thing, because this is the indirect restriction, and that restriction is to always allow only one attachment per pod network per pod.
A
So basically, though it's not directly listed here, I should not be able to do something like this: let's say fast and slow together. That should not be allowed, because those two reference the same pod network, and I should not be able to connect twice to the same thing. So basically, this should not be allowed.
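The restriction being proposed, that no two attachments of one pod may resolve to the same pod network, can be sketched as a small validation routine. The data shapes are invented for illustration:

```python
# Sketch of the validation rule just described: a pod must not reference
# the same pod network twice, whether directly or through interface
# objects. Field names are hypothetical.
def validate_attachments(attachments, ifaces_by_name):
    """Return True if no pod network is referenced more than once."""
    seen = set()
    for a in attachments:
        # Resolve either a direct network reference or an interface name.
        net = a.get("network") or ifaces_by_name[a["podNetworkInterfaceName"]]["podNetwork"]
        if net in seen:
            return False
        seen.add(net)
    return True

ifaces = {
    "fast": {"podNetwork": "datanet"},
    "slow": {"podNetwork": "datanet"},
}

# Both entries resolve to "datanet", so this pod must be rejected.
print(validate_attachments(
    [{"podNetworkInterfaceName": "fast"}, {"podNetworkInterfaceName": "slow"}],
    ifaces,
))
```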
A
No, because each connection should be separate; then it fits, because each of the connections is a separate pod network.
B
Yeah, so if I were to interpret this as I'm seeing it right now, it means that we cannot have two interfaces attached to the same network, if I were to interpret what I'm reading here.
A
And Jaime, I will answer you this: Multus is not the be-all and end-all, and this is not about satisfying Multus. I will answer you; I know you're...
A
You're partial to Multus, but let's make sure Multus is not the be-all and end-all of how multi-networking is done, because it's not, exactly. It is the first one, I bet you, and it's the most popular, but let's make sure it's not the be-all and end-all. And I'm bringing that up, and maybe I'm teasing you here, I apologize for that, but maybe I will tell you why I'm bringing this up.
A
Because... and some folks may have left already, but we can have more discussion on this next time, when we have more audience. The reasoning for limiting to a single interface per pod network is exactly what I have been saying from the very beginning of today: most of the other features in Kubernetes rely on a single IP in the API, okay, and we should not be able...
A
We should not be having multiple IPs, multiple connections, to the same pod network, because then, you see here on the bottom, let's say you would have... this is v6, right, so a v4 plus a v6 is acceptable, because those are one IP of each family. But if I were to have something like this, where I have one IP and a second of the same type connected to the same network, what do I do then? What do I do with the Service; to which endpoint do I route?
A
Considering that the pod network is the identifier of the interfaces of your, let's call it, infrastructure, as I said, inside the cluster, how do I identify which one I refer to? You see, in this case it doesn't work with Services and EndpointSlices and all that. That's where I'm coming from.
A
One of the use cases for Multus is, for example, bridging two SR-IOV interfaces. I always knew that's one of the use cases: I want to have two SR-IOV interfaces connected to my pod because I want to bridge them internally, which means that what I care about is just L2 at that point. In this bridging use case that I mentioned, we care about just L2; we don't care about IP anyway.
A
Why, then, do we care about those two interfaces being in the same pod network? Those two can be in separate pod networks, and then you can model it that way, each of the interfaces separately, because they have to be on separate switches for redundancy; that's the whole point of teaming them and then bridging. So they have to be on different switches.
A
It's definitely not exactly what you can do today, because today you can call out Multus multiple times and be done with it. But you must make sure, even with that, that you reach out from different PFs if it's an SR-IOV case, because otherwise it defeats the purpose: if they come from the same PF, there is no point in bridging them.
B
Yeah, so I am not saying, and I want to be clear about this, I'm not saying that we should support all the Multus use cases. What I'm saying is that a lot of people are using Multus today, which is, I guess, the broadest deployment of the multi-network spec in use today. People are going to be moving to this, and some people are going to find that their use case is not going to be possible.
A
All right, so one of the use cases that I described is one of those, and we can model it in a way where we can work around it. But what are the real other use cases that will require that capability? That's my question to you. Keep in mind there is much more, and the same goes for the CNI: CNI can do this, multiple interfaces with one call, and that doesn't mean that we have to.
B
Right, so for me it's really easy to call out things that I can do with Multus and cannot do with this KEP. I'm not saying that I have a use case for them, or that we should support them. But it's very easy for me to call them out, and it's very easy for me to be aware of them, discuss them, and decide whether we want them or not. That's the...
A
...answer. So this all needs to be driven by use cases, if anything by examples of how it is really being used, because just having the capability for the sake of it, I would say, is not an argument. Okay, and I do have some experience; maybe Wim can even add from his side, from the Telco side, what sort of deployments they do.
A
But from my own experience, the main usage of multiple interfaces from the same network attachment was for bridging a set of interfaces. That was the only use case I can think of, and it can be done here as well. I know the capability is there in Multus, but it won't be fully modeled, because we want to integrate with the rest of Kubernetes. We don't want to do more than just what Multus can do today; it's limited.
D
So, I mean, I agree; in my view it would be a really exceptional case to do something like that. One question I had: for example, one use case that is used, which potentially leads to multiple IPs on the same subnet on the interfaces, is an anycast gateway. So you have, let's say, a physical IP attached to your pods.
D
Let's say you have two pods, each with a dedicated IP, but then you have an anycast kind of address that routes to both. I believe, my understanding is, that the way to do this is through a Service: you would add a Service that points to basically both IPs, and that's how we would implement something like that.
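The pattern described, fronting both pod IPs with one Service instead of giving each pod a second shared anycast IP, can be sketched like this; names and addresses are illustrative:

```python
# Sketch of the anycast-style pattern discussed above: one Service
# selects both pods and fronts both of their IPs, so no pod ever needs
# a second IP on the same network. Names and addresses are made up.
service = {
    "kind": "Service",
    "metadata": {"name": "gateway"},
    "spec": {"selector": {"app": "gw"}, "ports": [{"port": 443}]},
}

# Both pods match the selector, so both IPs become Service endpoints.
pod_ips = ["10.1.0.5", "10.1.0.9"]

print(service["metadata"]["name"], "->", pod_ips)
```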
A
Exactly. So basically, depending on what you want, yeah, exactly: then you would have just a load balancer between the two, unless it has to be multicast, and then that's slightly different, depending on what you want to do exactly. But you still need just one IP, even from multiple.
A
Yeah, exactly; we want to achieve full integration with Kubernetes. That's the main goal here. All right, let's keep thinking about it. Jaime, I would appreciate it if you could provide some other use cases for the need for this. There were comments, I think from Peter as well, for that purpose; and not only from Peter, there were some other comments. Wim, I think that was you as well; look through the comments.
A
There were some other folks bringing up the same issue with the restriction of one pod network per pod. You can probably find them in the comments, and I would appreciate it if you could provide some other use cases from the users that you know of, or maybe ask around, about what would be the real blocker with that restriction. All right, we are at time. Any other last comments?