From YouTube: Kubernetes SIG Network meeting 20210520
A
All right, welcome to the Network Plumbing Working Group meeting, May 20th, 2021, kicking it off with the regular business. It doesn't look like we have any new member candidates, maintainer candidates, or candidate projects, and it looks like we have a lot of familiar faces on the call. So thank you all for joining, I appreciate it. Jan and Perry, do you guys wanna kick it off with VLAN trunking?
B
So what we want to present today, Perry and myself, is the question of how we can improve the support that we have in Multus, with ovs-cni, for trunk interfaces: secondary network attachments that are actually VLAN trunks. And let me start with the status quo that we have with ovs-cni today.
B
Ovs-cni, when it comes to kernel interfaces, basically supports two types of kernel interfaces. One is access interfaces, like net1 here, which are connected through a veth pair to a tagged port in OVS, so that OVS takes care of the VLAN tag removal or insertion, and the application in the pod sees untagged traffic. For this kind of simple access network attachment, IPAM is fully supported, meaning it can interwork with IPAM modules like Whereabouts and then configure the injected interface with the IP address and other information like MTU size and routes and so on.
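As a rough illustration of what such an access attachment looks like (bridge name, VLAN id, and IPAM range here are hypothetical; the JSON config is shown as a Python dict for clarity):

```python
import json

# Hypothetical access-mode ovs-cni config: OVS strips/inserts the VLAN
# tag on the tagged port, so the pod interface (e.g. net1) carries
# untagged traffic, and an IPAM plugin assigns the pod IP.
access_net_conf = {
    "cniVersion": "0.4.0",
    "type": "ovs",
    "bridge": "br1",
    "vlan": 100,                      # access VLAN, handled entirely by OVS
    "ipam": {
        "type": "whereabouts",        # IPAM module configuring the interface
        "range": "192.168.100.0/24",
    },
}

print(json.dumps(access_net_conf, indent=2))
```

Because OVS does the tag handling for VLAN 100, the application in the pod only ever sees untagged traffic on the injected interface.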
B
More relevant are the selective trunk interfaces, where we specify which allowed list of VLANs is exposed to the pod, and then the pod sees VLAN-tagged traffic on the parent interface, net2, that is injected. And for these trunk interfaces there is today no support for any IPAM, obviously, because the parent interface is not really the interface that should have IPAM.
B
There
should
be
vlan
sub
interfaces,
for
example,
and
those
vlan
sub
interfaces
would
then
have
to
be
configured
with
ip
addresses,
so
the
support
exists
for
selective
trunks,
but
only
very
bare-
and
this
has
been
good
enough
for
the
first
application
that
we
were
targeting,
which
was
basically
dealing
with
the
with
the
raw
packet
sockets
here
or
even
with
dpdk
interfaces
and
doing
all
the
vlan
handling
and
ip
address
management
themselves.
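A selective trunk attachment, by contrast, carries an allowed VLAN list and no IPAM section. A sketch with hypothetical values:

```python
import json

# Hypothetical selective-trunk ovs-cni config: only the listed VLANs are
# exposed to the pod, which then sees tagged traffic on the parent
# interface (e.g. net2). Note there is no "ipam" section at all; the
# application handles VLANs and addressing itself.
trunk_net_conf = {
    "cniVersion": "0.4.0",
    "type": "ovs",
    "bridge": "br1",
    "trunk": [{"id": 42}, {"minID": 50, "maxID": 60}],  # allowed VLAN list
}

print(json.dumps(trunk_net_conf, indent=2))
```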
B
And if there is such IPAM configuration present, then the ovs-cni plugin would invoke IPAM several times, once for every trunk VLAN that, optionally, has such an IPAM config. The CNI plugin could then create VLAN sub-interfaces on the parent interface, inject them into the pod namespace, and apply the IPAM configuration that was received from the IPAM module, if available, and then eventually the ovs-cni plugin would return information about all the configured pod interfaces.
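The proposal could look roughly like this (the per-VLAN `ipam` field is the proposed extension, not an existing ovs-cni option; all values are hypothetical):

```python
# Hypothetical sketch of the proposed trunk config: an optional per-VLAN
# "ipam" section would make ovs-cni invoke IPAM once per such VLAN,
# create a VLAN sub-interface on the parent, and configure it.
proposed_trunk_conf = {
    "cniVersion": "0.4.0",
    "type": "ovs",
    "bridge": "br1",
    "trunk": [
        {"id": 42, "ipam": {"type": "whereabouts",
                            "range": "192.168.42.0/24"}},
        {"id": 50, "ipam": {"type": "whereabouts",
                            "range": "192.168.50.0/24"}},
        {"id": 60},  # no ipam: this VLAN stays raw, no sub-interface config
    ],
}

# One IPAM invocation per trunk VLAN that carries an ipam section.
ipam_calls = [v["id"] for v in proposed_trunk_conf["trunk"] if "ipam" in v]
print(ipam_calls)  # → [42, 50]
```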
C
I don't understand why the number of networks... why would you say that you need to do that? Okay, you want to do VLAN trunking; that you need LAGs in order to do VLAN trunking, I don't see it. I can understand it for bandwidth, but it doesn't have anything to do with the number of networks it puts on top.
B
Okay, I think, yeah, I mean, there is a DPDK flavor of this, yes, and for that this discussion is not so relevant, but we also have control-plane applications that are not DPDK applications.
B
That is what they do exactly. That is what they need to do today: run the raw packet socket on the parent interface and do all the VLAN handling themselves, right, and all the IP address management. And that works. But what if you have an application that isn't really prepared to do that, or doesn't want to do it?
B
Okay, yeah, great. Perry, are you able to share?
B
Then this might work, but it might be a little bit... No, I can make it bigger. Okay, that's good. Okay, so.
F
So, who created the sub-interfaces in the pod?
B
Networks... there are different networks; they are trunk networks on the same trunk, so they're part of the same network attachment definition file, yeah, yeah. That.
D
But from Multus's perspective it's the same network, like that is ovs-cni, because we are having a single network attachment definition, right? So, so.
F
So the result of a CMD ADD command yields multiple network statuses, correct? That's the... I mean, in regard to, if we look at the implications.
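For reference, a CMD ADD result in this scheme would carry the parent plus the VLAN sub-interfaces, with each IP tied to its interface by index, as the CNI result format already allows. A simplified sketch with hypothetical names and addresses:

```python
# Hypothetical CNI ADD result for one trunk attachment: the parent
# interface plus per-VLAN sub-interfaces. Each entry in "ips" points
# back into "interfaces" via the "interface" index field.
cni_result = {
    "cniVersion": "0.4.0",
    "interfaces": [
        {"name": "net1",    "sandbox": "/var/run/netns/pod1"},  # parent trunk
        {"name": "net1.42", "sandbox": "/var/run/netns/pod1"},  # VLAN 42
        {"name": "net1.50", "sandbox": "/var/run/netns/pod1"},  # VLAN 50
    ],
    "ips": [
        {"address": "192.168.42.10/24", "interface": 1},
        {"address": "192.168.50.10/24", "interface": 2},
    ],
}

# Map each assigned IP back to the interface it belongs to.
by_if = {cni_result["interfaces"][ip["interface"]]["name"]: ip["address"]
         for ip in cni_result["ips"]}
print(by_if)
```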
H
Hey, can you hear me okay? Yeah, so, as you mentioned, you're not changing the return result object from the structure point of view, but this drastically changes the semantics of the result object.
H
So currently, each network attachment definition is showing one network. I mean that it contains one net, one IP address space, and then other networks means they are, how do I say, split into the broadcast domains and stuff. So your configuration is changing that: one network attachment definition may have multiple networks, where currently it is directly one network.
H
Multi-network policy: the multi-network policy is based on the assumption that one network attachment definition is showing one network, so currently this is not following your stuff. And then also, regarding the result object: we can easily identify the network by its name, but in your case one network attachment definition name...
H
I mean that in this case the default ovs-cni net would have multiple, so we cannot identify which network based on names, so this also breaks our architecture. And then also, yeah, we need to check the multi-network spec, because there we also have some assumptions.
H
We have something written related to this design, and also I have a question about this stuff. Currently some meta plugins, let's imagine the tuning or MAC plugin, or maybe I could say the route-override CNI: this stuff is only chained, is based on the v1.
H
The first interface is to be the container one, and the second and third are not gone through, so in this case the ovs-cni does not support these meta plugins. So it's pretty unique stuff, and then there are some...
D
No, no. Then what is the reason that the result object contains an array of interfaces? Because the result object can have multiple interfaces, multiple IP addresses, so...
H
For example, in some cases like ptp... the only plugin I know of with multiple interfaces is mainly ptp; maybe bridge as well, but I don't remember correctly. But at least ptp returns two interfaces: the first is to the container side and the second is the host side. So they...
H
In this case, when ptp returns the two objects, both interfaces are in the same network, right? One ptp means a pair, isn't it; this is point-to-point, so one interface goes to the container side and the other one goes to the host side. So both IP addresses are in the same network, right?
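The ptp case can be sketched like this (values hypothetical): two reported interfaces, the two veth ends, whose addresses fall in one shared subnet:

```python
import ipaddress

# Hypothetical ptp-style result: a veth pair, container end and host end.
# Both addresses sit in the same /24, i.e. the same network.
ptp_result = {
    "interfaces": [
        {"name": "eth1", "sandbox": "/var/run/netns/pod1"},  # container side
        {"name": "veth0a"},                                  # host side, no sandbox
    ],
    "ips": [
        {"address": "10.1.1.2/24", "interface": 0},
        {"address": "10.1.1.1/24", "interface": 1},
    ],
}

# Both ends resolve to a single shared network.
nets = {ipaddress.ip_interface(ip["address"]).network for ip in ptp_result["ips"]}
print(nets)
```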
H
This is just... okay, okay. So I'm just wondering: yeah, as you mentioned, this doesn't change the structure, but from the semantics point of view, not only may the multi-network spec need to be changed; the CNI specification may also need to be changed.
H
So I'm a little bit concerned about this stuff, because Multus... currently, even though CNI doesn't support multiple interfaces like that, Multus is trying to follow the CNI specification and the CNI result structure, and maybe that's why there are still lots of challenges in this stuff. And then also the Multus...
H
...team is working closely with the CNI community, trying to fix several things, and some of them should be addressed in CNI 2.0, maybe. So that's that stuff. And then also, previously, I don't remember when it was, but at least two or three years ago, we had a similar discussion, I mean about multiple interfaces, a result with multiple interfaces in the one result, at that time.
H
As far as I remember, we didn't, we don't have it, but there were also some considerations and concerns about that stuff. I don't remember it correctly, but maybe at the next meeting we could ask.
B
It's one way that we have seen that could work, but I tend to agree with you, Tomo, that this probably touches the semantics of the CNI spec and the multi-network spec, and we need to be very careful, right? I totally agree with you here, but yeah.
B
We have basically already decided in the past, when this was integrated into ovs-cni in the first place, without IPAM, to support something like trunk network attachments. And if you look at SR-IOV, it is an absolutely normal use case that an SR-IOV virtual function runs in transparent mode, basically, and then...
C
Absolutely, but I mean, what you're saying is: so if you look underneath, we have our own vswitch, it doesn't use OVS, but you have a number of networks, and it doesn't matter how they're presented to this one; in reality you should be able to do the VLAN tagging in the vswitch. Okay, so you want to present to the pod a number of networks, let's say x, and you want to use one sort of base interface and you want the other ones to be child VLAN-based interfaces.
B
That's true: a pod wouldn't care when consuming those interfaces, that doesn't make a big difference, but for the infrastructure it makes a big difference.
C
I think what you're really saying is that for OVS this is an issue, because I can't see the Linux kernel really caring if it's, as we call it, the primary or a VLAN-based interface. To me it seems more like this is then a limitation in OVS, and as an optimization this is a good way to get around the limitation you have in OVS. Am I correct in that? And I have no problem with the behavior; I still want to understand.
B
If you have a single interface that you inject, or you basically split up the VLANs into very many and then inject and write network attachment definitions for those very many: it's a big difference, I would say, from an infrastructure...
C
I think that that is the... that's what, Tomo, yes, that's what you're saying, right: that the definition is the network.
B
The analogy to OpenStack goes only so far, right, because...
B
Looking at trunking providing different kinds of network attachments, like an SR-IOV network attachment or a kernel network attachment, and depending on the CNI that actually provides it there are even different ones, the analogy is anyway not very clear. But here we are really talking about VLAN trunking and subports, basically, and there is no such thing currently modeling this in the multi-net spec.
C
But if you turn around and say you're the infrastructure, and now I mean the switch, right: a switch doesn't care. It's a network, and the encapsulation happens to be using VLANs; it could come as VXLAN, with Geneve or MPLS or any other encapsulation, right? It's just a network, and then you have hardware or software that reacts and can balance these networks. So they're all networks, and what you're looking at here is what you said before.
B
You can put it like this, I mean, yes: instead of having your network annotations in the pod list 400 networks, you just refer to one, and that is the trunk that contains all the 400. And if, during the lifetime of the cluster, 10 networks need to be added, because there are new APNs created in the...
H
So at least currently the Network Plumbing Working Group's multi-network spec does not... maybe we need to make changes, but before that, of course, the CNI spec is also a target to be discussed, because I haven't read the CNI spec completely now, I don't remember it completely yet, of course. So yeah, yeah, we need to... you need to check the CNI spec and then also the...
H
If your design is not following the CNI specification, then you need to change it, so yeah. Currently, of course, the multi-network spec is based on the CNI spec. So they are...
C
...said about using the management, I guess that was said there. Yeah, so we would, yes, it would sort of add a layer of network orchestration into this, and not just talk about single networks.
G
But I think there is also a new concept here, like the relationship between kind of parent and child L2s: those trunks, they belong to a single port. You could see similarity, let's say, in bonding interfaces. Yeah, if you start talking...
B
Yeah, but you can't get dynamicity there without updating all the pod specs all the time, adding and removing network annotations in order to get access to more or fewer networks. How...
C
...is that you have a grouping. Forget that it's a trunk, right: you have a grouping that groups two networks together as something that should be scheduled together, and then you happen to use VLANs right now; you could have used VXLAN or whatever to separate them, right? So I understand, I understand what you're doing, but I sort of wouldn't want to tie it, so, at a spec level.
B
Yeah, the use case, the use case here is VLAN tagging for ovs-cni. There might be other CNIs with other vswitches beneath that would do other encapsulations. But yes, what we're after here would be a generalization of the specs, if needed, the CNI and multi-network specs, to make sure that a single CNI, basically plumbing one parent interface into a pod, can return more than one, let's say, network.
B
That's what we're doing, right? The pod says "I want ovs-cni-net" and...
I
Can you go back down to the status, the network status that's returned? Yes, it's here. I mean, I think part of the issue, I mean the main issue, is that you're calling the CNI CMD ADD and you're getting multiple interfaces back. For, like, net1: just in the structure, is it possible to add something like a sub-interfaces field, which is of the original type? That way, you know, net1 could then have sub-interfaces net1.42, net1.50, net1.62 or whatever was added, and then, who knows, in the future maybe there's some other sub off of that if you needed it. So you're still returning one status, but now the data is being returned to the CNI under one structure, so the CNI sees one interface coming back.
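A rough sketch of that idea (the `subInterfaces` field is hypothetical and not part of any spec): the top-level result keeps a single interface entry and nests the VLAN sub-interfaces under it:

```python
# Hypothetical nested result: one top-level interface, with the VLAN
# sub-interfaces packed under a non-spec "subInterfaces" field so the
# caller still sees a single interface entry coming back.
nested_result = {
    "interfaces": [
        {
            "name": "net1",
            "sandbox": "/var/run/netns/pod1",
            "subInterfaces": [          # hypothetical extension field
                {"name": "net1.42", "vlan": 42},
                {"name": "net1.50", "vlan": 50},
            ],
        }
    ],
}

top_level = [i["name"] for i in nested_result["interfaces"]]
subs = [s["name"] for s in nested_result["interfaces"][0]["subInterfaces"]]
print(top_level, subs)
```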
A
Yeah, I put that down as a similar question: what if we had some sub-interfaces field and embedded these results, to kind of pack it together into something that Multus could parse? That way we don't change the semantics of this object as much, and maybe it doesn't change the face of this CNI result. I also just want to give a thank-you to you guys for putting together the presentation and bringing this.
C
So I think there's one thing I would change, and there's nothing wrong with this, but I would specify the use case. And this is so, for CNFs, to be able to sort of make it bearable to do the manifests and so on, and, I mean, less error-prone, by having one thing you add in the annotation and sort of not forgetting two out of forty or something like that. That's how I would come in with it, and sort of not push to say this is for OVS. So this is a need...
C
...that's there for a group of users that is very network-centric, and that need exists for everyone. It exists for Kaloom and our customers for sure, and others, and I sympathize with sort of what's trying to be done, and I like it. And then sort of how it gets delivered up into the pod, if it's at a dot or if it's a separate interface, depends on the technology underneath, right.
F
Use
case,
so
these
still
need
to
be
defined
somewhere
right,
but
just
a
question
right.
So
let's
say
we
had.
Instead
of
one
network
attachment
definition,
you
know
having
n
networks,
sub
interfaces,
having
like
n
network
attachment
definitions
and
sort
of
like
relying
on
on
an
admission
controller
to
given,
given
a
configuration
or
a
mapping
of
this
group
to
inject
those
automatically
into
the
pod
spec
before
creation.
F
So it's like you still keep the fact that there is one network attachment definition per network, and the management perspective is automated, given a predefined mapping, by which it would be done by, like, an admission controller.
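A minimal sketch of that mapping step (group and network names are hypothetical): the admission controller would expand one group reference into the full per-network annotation before the pod is created:

```python
# Hypothetical group-to-networks mapping that an admission controller
# could use to expand a single group reference into per-network
# attachments in the pod annotation, before pod creation.
GROUPS = {
    "apn-trunk-group": ["apn-net-42", "apn-net-50", "apn-net-60"],
}

def expand_networks(annotations):
    """Replace any group reference with the networks it maps to."""
    requested = annotations.get("k8s.v1.cni.cncf.io/networks", "")
    expanded = []
    for name in filter(None, requested.split(",")):
        expanded.extend(GROUPS.get(name, [name]))   # pass through non-groups
    return {**annotations, "k8s.v1.cni.cncf.io/networks": ",".join(expanded)}

pod_annotations = {"k8s.v1.cni.cncf.io/networks": "apn-trunk-group"}
print(expand_networks(pod_annotations)["k8s.v1.cni.cncf.io/networks"])
# → apn-net-42,apn-net-50,apn-net-60
```

This keeps the one-NAD-per-network model while sparing users from hand-listing every network in every pod spec.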
B
But, as I said before, this is used in production today, without the IPAM part and without the creation of the VLAN sub-interfaces, with DPDK applications and other applications that are using raw packet sockets on the parent interface and handling the VLANs internally, and they rely on the trunking.
B
What do you suggest as the way forward? How shall we form, kind of...
C
Well, you shouldn't listen to me for suggestions. So, speaking personally, for me, I think we should be able to generalize this and specify the behavior as part of Multus, because to me Multus is so dominant that for any secondary interface, that's what's there. I personally don't really care what it says in the CNI specs.
C
I care what the plumbing working group has said about how secondary interfaces work, and I do think we need a grouping. How it's done specifically for OVS, and how it's done for our vswitch, I don't really care about. We need to be able to specify it and give the applications what they need, because, does your application really care if it says net.42 or if it says net1234 in the pod?
C
They just want to see a network with the IP addresses on it, separated, when it enters a container. In this case, the way you set it up.
C
So you want to encode the VNI information in the name, and that's what the .42 is. I think that... well, Abdallah is the one that should talk to this. We ended up hashing our interface names, because we support so many networks and so many attachments that the 15 characters you have to express this wasn't enough. And what did you, what...
C
...the same network twice, right. So the idea to encode VNIs into the interface name, I think... I mean, if someone does it, yes, sure, but it's dangerous.
E
And also, I think, like in your previous question, it matters if it is with the dot notation, because right now, today, even before the proposed implementation: if you want to have a selective trunk and use ovs-cni, you will get, like, one selective trunk interface in OVS that is attached to a pod, and then you create some sub-interfaces on that trunk interface.
B
The way is that the application comes and says "I need access to x number of networks", referred to in their virtual link descriptors, and then somebody goes and creates a network attachment definition trunking all these networks on an OVS trunk, for example. But the names that are specified by the application can be put into the network attachment definition, so the application can basically recognize those network interface names and know what they are.
A
Some of the things I wrote down as possible next steps: number one is to outline the use case in some generic terms. It looks like, you know, we have a few cases here that this would apply to; it's kind of this grouping of sub-interfaces.
A
Another possibility I wrote down is that we could present this problem to the CNI maintainers themselves. I think it's an interesting problem, and I think it would be to our group's benefit to make them aware of it, so that it can be considered in the current spec, especially as it plays into CNI version 2. But last but not least: how can we reconcile this problem with the current state of the CNI spec, and what can we do in our multi-net spec...
A
...in order to accommodate this? And I also wrote down, you know, what Billy was highlighting, which is this idea: can we, you know, pack this into results and make it an additional field in the objects that appear in the network status list?
A
No, it's awesome, and when I hear somebody possibly being aggressive, which I didn't interpret that way, I see it as excitement, and it's quickly...
C
I know a lot of you guys sort of had worked with me before; we have been working so long together now. But yes, I do think so, and I also think, sort of, as a general problem: how can anyone run something that runs on a Kubernetes pod? Because then we don't have this problem with the net1 interface, right. But how do you map the interface in the namespace to something that makes sense for the application?
I
Go ahead. Okay, okay, I was gonna say: we ran into this with the device-info spec, where we tried to pass additional information into the pod to help identify, and the only thing we had at the moment was, like, the network name, which isn't ideal. We were looking for some type of additional metadata to kind of identify which additional interface was associated with which network, to kind of let the application know.
H
So for myself, I'm pretty happy to hear this proposal, because it shows that it's not only us: there are many others using network attachment definitions, and they're also finding something that is not covered yet. So currently, yeah, today, as he mentions, maybe for some use cases they'd like to create multiple networks in one structure, and then they are, from the... how do I say?
H
Well, so maybe... I'm just wondering that maybe we may introduce a new CRD which generates some of the multiple stuff. I haven't thought it through deeply yet, of course; I'm just coming up with something. But this could mean changes to our CRD stuff and then also the ecosystem, I mean, also the multi-network policy, and then, further, in the future...
H
We also addressed the service abstraction as well, so your proposal is enriching this stuff, and it also lets users use our stuff in various ways. So...
C
There's a similar but slightly different problem that we should look at. Let's say that you actually wanted to steer these networks up in the trunk, but on the receiving side you did not want to have the sub-interfaces; you wanted to consume the raw VLANs. Being able to specify that would be nice.
B
Let me thank you all for listening and providing valuable feedback. I hope that together we can find a good way forward with this.
A
Yeah, I'm sorry for hijacking the entire meeting. No, that's great! This is one of my favorite meetings in a while, so I appreciate it.
C
Where are you working? Are you at Ericsson? Yes? Okay, I'll Teams-ping you. Okay, you ask for john cyrus at ericsson.com, right.