From YouTube: Network Service Mesh BoF Meeting - 2019-03-26
A
Yeah, it can — you can add me as well. So, events: we now have three recurring weekly meetings. We have this one every Tuesday, we have the NSM Docs meeting every Wednesday, and we now have the NSM Use Cases meeting, which has been moved from Friday to Monday and is every other Monday.
B
A
A
If anything, consider joining in. We also have the Intel out-of-the-box developer meetup, and that is during the afternoon on April 2nd, so we will have a 90-minute talk and hands-on workshop that people will be able to try. That is right before ONS starts, and I'm going to grab that right now.
A
A
So we also have ONS coming up, where we have three talks. One of them is the intro to Network Service Mesh. We have a panel discussion about using Kubernetes as a network service orchestrator, and we have an NSM and OpenStack integration talk that we are doing in conjunction with Priscilla at Ericsson. She told me that she's probably not going to be able to show up, so maybe I'm just giving the talk on my own in that scenario, but she definitely did a lot to help with it.
C
B
A
A
B
Exactly, and also we can talk about it — it's a free flow. So if you click on the link, you will see the schedule: an ONAP discussion, OPA, and so on. So what we can do is we can probably share it and get it posted, and then we can draw the audience; I mean, we can have a free flow.
C
B
C
A
D
A
A
Yeah, and if you end up missing it, there is a really good transit system in Barcelona; the trains leave every five minutes or so. So if you don't find somewhere near there, get a hotel that's near one of the train lines, but just make sure that the line you're getting on is close to where your hotel is, because otherwise you end up doing a few transfers. We also have a couple of co-located events.
A
C
A
C
This is actually not super surprising. There's been a lot of stuff floating around Twitter where the kinds of talks that got accepted tended to be fairly in the range of things that were well understood by the program committee — par for the course for program committees. So I think what it effectively comes down to is, as we become more well-known and understood beyond simply networking, I think we will do better over there, but we will still have some things.
A
A
I agree — that's the ONS Europe talks. The call for papers is already open; we have a little bit of time before submitting, so if you intend to talk there, feel free to do so, or feel free to engage us and we'll help you put together a compelling talk. So this part is unfortunate: we have MEF 2019 and KubeCon North America on the same days.
A
C
Now, the way I think this is currently going is: normally, the Technical Oversight Committee meets, Pacific time, on Tuesday mornings every other week — or actually the first and third Tuesdays of the month. I believe what they have done, because they have a little bit of a backlog of projects, is they have taken the Tuesdays at 8:00 a.m.
C
that they normally don't meet, and they're now meeting at those times just to process project proposals. And so I think that would mean that if we were to be scheduled in April, it would be either April 9th or April 23rd, and because that is at the same time as the NSM community meeting, one of the things that we can decide is what we want to do when we actually get a firmly scheduled time slot.
F
C
Not necessarily — there's, you know, all kinds of different kinds of value that could come from that. I don't think there's value in the sense that you will influence one way or the other how the review goes, but, you know, some of us will at least have to go and present it in that meeting, so we'll not be able to be here, and it might be the kind of thing that would be nice for the community to be there to witness. For some people, I think there may be value there.
C
C
A
G
A
Alright, so I think our next thing is just to get the timeslot organized, and then we'll put an announcement on here. Worst-case scenario, the announcement is done less than a week before, in which case we will put a big banner on the top of this document saying: please, please go to the other meeting.
A
C
Because we're going to want to stand up the chains of network service endpoints, and then I think there's some potentially interesting stuff around the NUMA issues, CPU pinning, those kinds of things — quite interesting work. Does that match your understanding, for the folks from the CNF testbed? You guys know the environment better than I do.
H
Yeah, that sounds about right. I think we're trying to do this in multiple steps. Helping with the use case of using NSM with OpenStack is one of the items, and then being able to use NSM as an option with Kubernetes for use cases in general on the CNF testbed, so we're going at that in several ways.
H
There are quite a few tickets in flight right now related to this. One of them is getting the vSwitch in the pod, which is actually completed, plus several other things that are related — those are just the NSM-related items. But about the OpenStack and NSM piece, yeah, I think it's what you just linked, the 213.
C
C
That's okay — that's very good news! Okay, so if you've got it in a pod, and you've got the Mellanox NICs or you're using Intel NICs, then it should just be a relatively simple matter of getting everything turned into a network service and putting our network service client in, to get NSM working in those test beds. And then, you know, everything else around CPU pinning and such are things that have to be figured out in their own time anyway. So yeah, okay — this makes total sense.
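To make the "network service client" step above concrete, here is a minimal sketch on the Kubernetes side, assuming the ns.networkservicemesh.io pod-annotation mechanism used in NSM examples of this era; the service name cnf-testbed-service and the image are hypothetical.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clientPod builds a pod that asks NSM for a connection to a named
// network service via a pod annotation.  The annotation key is the one
// used in NSM examples of this era; "cnf-testbed-service" is hypothetical.
func clientPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "nsc-example",
			Annotations: map[string]string{
				"ns.networkservicemesh.io": "cnf-testbed-service",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "workload",
				Image:   "busybox:1.30",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(clientPod().Annotations)
}
```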
H
So we may want to break this part down. The high-level item — the CNF testbed use of NSM — that wouldn't be 213; we probably need an epic, or I'll create a project or something, that contains all the tickets. Ticket 213 there is for using a Kubernetes cluster that has NSM enabled, talking with the OpenStack cluster that's deployed using the CNF testbed code. So we should—
H
We have, I guess, two efforts happening at the same time: adding NSM to the CNF testbed, so that it can be used any time with Kubernetes clusters; and then the other item is that ticket 213, which is a Kubernetes cluster using, I believe, all the Makefiles and such that are currently used in NSM for deploying and setting up a cluster, and then adding NSM to the cluster. Okay.
C
C
A
A
G
No, I have been thinking about it, and yeah — definitely. While building that, I was considering, like, we just deploy a simple packet tool — I mean, if we're talking about the CI, just deploy something small and do some sanity polling just to verify that something is there, and then maybe do some other things, I don't know, involving check scripts.
C
But I want to try and see if we can keep as much as we can get away with in line with the incoming testing, without bloating verify times to insane levels, because that way we actually do know the world is in a good state at all times. So, you know, that said, I don't know that 20-minute verifies end up being helpful; I think they actually start causing people to do crazy things.
G
G
Yep, maybe. I mean, we have some patches that are being prepared for having support for namespaces, which could allow us to run different NSM managers in different namespaces, which could eventually help: if we keep unique namespaces for each of the tests, then they probably can be parallelized.
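As a rough illustration of the unique-namespace-per-test idea, here is a minimal sketch assuming a recent client-go (the context-taking Create signature); the nsm-ci- prefix is made up.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newTestNamespace creates a uniquely named namespace so that one NSM
// manager and one test run can live in it, isolated from parallel runs.
// The "nsm-ci-" prefix is hypothetical.
func newTestNamespace(ctx context.Context, cs kubernetes.Interface) (string, error) {
	ns, err := cs.CoreV1().Namespaces().Create(ctx, &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "nsm-ci-"},
	}, metav1.CreateOptions{})
	if err != nil {
		return "", err
	}
	return ns.Name, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	name, err := newTestNamespace(context.Background(), cs)
	if err != nil {
		panic(err)
	}
	fmt.Println("running tests in namespace", name)
}
```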
C
A
Alright, so one option that we potentially have as well is to throw more hardware at it over time, and I think the namespacing stuff will definitely help in another aspect. One option that we have is, when we spin up a cluster, we could actually spin up a persistent cluster that stays up, so we could add and delete namespaces on the fly.
G
C
So what it basically comes down to is being a good citizen, which is: we should try to make sure that we're actually not consuming insane amounts of resources, and that what we are consuming, we're consuming in an efficient way, right? So right now, I think, if memory serves, when a CI run occurs we start up two of the smallest Packet offerings, and I think that runs us 7 cents per instance.
C
So we've got a total of 14 cents of cost to run a CI run, which is not bad. If we were to double that to 28 cents — you know, presuming that we were actually doing something useful with it, like parallelizing the testing — I wouldn't think that's an egregious use of resources.
C
It doesn't really help us parallelize by running on one, right? If, instead of running one cluster, we were to run two clusters, for example, which meant we could parallelize the test running, that's a fairly marginal cost shift. You know, quite frankly, I'm much more concerned about figuring out why we occasionally have zombie instances, yeah.
I
C
H
So the thing with — I think the last thing was enabling hyperthreading on one of the Intel machines; that was yesterday, so that we could basically increase capacity, because for all the test cases that we're trying to validate, we couldn't deploy as many CNFs. It's working now, and we want to do more of the testing.
C
I
One quick comment here on the CPU pinning of the CNF, or even the vSwitch. As you guys understand, the NUMA affinity will be associated with the physical NIC as well — for example, the Intel NIC that's been discussed here. So from a testbed point of view, I was thinking, you know, you should have a NIC per NUMA node.
C
Yeah, you're absolutely correct about what makes for good results. There's an ongoing set of interesting questions in Kubernetes around how to handle the NUMA affinity of things, and the very short version is this: it is never going to work the way that it worked in something like OpenStack, where you do very fine-grained, granular mapping of stuff. That's never going to be acceptable in the Kubernetes community. That's the bad news!
C
The good news is that there are things in progress in sig-node and the Resource Management Working Group for actually allowing you to get what you need without doing that fine level of granularity of NUMA mapping, and those are hopefully going to land in Kubernetes 1.15 or thereabouts, and I would expect the CNF testbed would want to take advantage of that. Did that answer some of what your comment was?

Yes.
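For reference, the piece of this that already exists, as I understand it, is CPU pinning via the kubelet's static CPU Manager policy, which keys on Guaranteed-QoS pods with integer CPU requests; the NUMA/NIC alignment discussed above is the part that was still in progress. A minimal sketch of such a pod, with a hypothetical image and sizes:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pinnedCNF builds a Guaranteed-QoS pod: integer CPU requests equal to
// limits, which is what the static CPU Manager policy keys on to give the
// container exclusive cores.  Image and sizes are hypothetical.
func pinnedCNF() *corev1.Pod {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("4"),
		corev1.ResourceMemory: resource.MustParse("4Gi"),
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pinned-cnf"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:      "cnf",
				Image:     "example/cnf:latest",
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
}

func main() {
	fmt.Println(pinnedCNF().Name)
}
```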
I
My experience is surely coming from the OpenStack side, so yeah, that granular mapping was required and was done. But here, if it is not going to be so granular or strict in pinning, I understand, you know, but the ability to exit out of the right NIC, as a policy, might also be a good thing to think about.
J
One thing to keep in mind, too, is that in June AMD is dropping the Rome architecture, so the actual silicon underneath all of this is about to get some drastic updates. AMD and Intel are going in drastically different directions: Intel is going to give you the ability, at the hardware level, to kind of customize what your NUMA zones look like, and AMD is coming up with this — I forget what they call it — but it's basically distributed across all the different dies, spanning the different sockets and even within the same socket.
J
I think I've got some slides — I'll try to track them down — but the Rome architecture is very unique in the sense that they are getting away from, like, the NUMA madness, and they're saying: we're going to homogenize all this; you're going to pay a penalty, but it'll be minor, and we think that the ease of use overcomes the small cost.
I
That's key, yeah — the placement of the CNF. Yeah, that makes sense, Ed, and that happens in OpenStack as well. It's just that there are a few detailed scenarios — you know, maybe it would be deviating from the meeting today — but there are details where you want to make sure that the traffic enters at the right physical NIC, so that, you know, the receive path is optimized for the CNF.
I
C
The problem with the current solution, as I understand it, is that it has a couple of sort of presumptions that aren't stated, and one of those presumptions is that a single pod is only going to really be interested in a single NIC, because it doesn't really have a good solution for the "I have, you know, NIC 0 on socket 0 and NIC 1 on socket 1, and I have a CNF that wants to use both" case. It really has no meaningful solution for that problem. Sure.
I
And I think, as Jeff was pointing out, you know, the CPU architectures are evolving and changing, right? I think it might be a good idea to put the boundary at the vSwitch, so the CNF doesn't care, and if we have to add any NUMA awareness or some intelligence, you know, that can be in the vSwitch layer rather than in the CNF as well, but—
F
I
J
Now that they're moving back into the enterprise-class server market, they are not going to do NUMA from the standpoint of, you know, "this memory lane with this PCIe lane with this socket all go like this, and I've now had to cross the QPI two times and that is X, Y and Z latency." Some of that is going to be abstracted, and some of it is going to become more complicated, depending on how you decide to carve these up in the BIOS. So it's going to be some exploration, and it's not going to be the, you know—
J
F
I
Yes — so on the execution of the CNF on a core which is associated with a NUMA node, you're right, we need to have an opinion on the placement part. What I was trying to say was, the CNF exiting out of the server via a particular NIC — that can be hidden behind the vSwitch. You know, we've got to divide the problem into two pieces: the networking — the exit and entry point to reach the container — is one aspect, and placement of the container is another aspect, right.
J
J
C
I
C
Actually — this is interesting, but we might want to move on, because we've got some other things. The one comment I'll make in closing is that what we've mostly been talking about here is essentially a pod placement problem at the end of the day, and the good news is, Network Service Mesh — that's not what it actually does.
C
So we have an interest as a community in how this gets solved in sig-node and in the Resource Management Working Group, and I would highly encourage folks to participate in those spaces. We certainly care, but that's not specifically our problem to solve; it's a problem we very much need to have solved, but it's not going to get solved by us.
F
F
G
B
F
F
So what we did recently was take this and map out certain specific scenarios; we'll walk through them. One case is — essentially, I mean, we wanted to start small, not blow it up, you know, a drive-through example. What we said was: let's take one case — SR-IOV, unique VLAN per VF, so complete hardware slicing — where essentially, what we are really saying is, in this topology the hardware port is already nailed down on a specific node, and the PNF, I mean, basically the port, is nailed there.
F
F
So far, so good — very simple. So now let's look at a little more complex scenario. Here what we're saying is there is no SR-IOV, right, correct — a more interesting one, no SR-IOV, sorry. Of course, the hardware port is already nailed on the node, and the PNF, same thing. Yes — the query to NSM exposes only one endpoint still. So basically, as you can see, you have two functions to deal with — the NSE inside the gateway and the PNF — right, but with NSM, as far as NSM goes—
F
It's still exposing one and only one endpoint. So the first step here is essentially establishing a tunnel between the pod and the gateway. That's right — the NSM defines that VXLAN ID for the tunnel, I mean, the VNI segment. So what you do is, the NSM, at the end of it, creates this tunnel between the pod and the gateway, okay. And actually the pod has got no idea that you are connecting to the gateway, by the way, right — it's all completely abstracted; it's attaching to something. Exactly.
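For orientation, here is a rough sketch of the kind of VXLAN plumbing a forwarding plane would do for that tunnel, using the vishvananda/netlink library; the device name, VNI, and addresses are invented, and this is not NSM's actual forwarder code.

```go
package main

import (
	"net"

	"github.com/vishvananda/netlink"
)

// createVxlanTunnel sets up one end of a VXLAN tunnel like the one wired
// between the client pod's node and the gateway.  The device name, VNI
// and peer address here are made up for illustration.
func createVxlanTunnel(vni int, localIP, remoteIP net.IP) error {
	link := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: "nsm-vxlan0"},
		VxlanId:   vni,      // the VNI handed out by NSM
		SrcAddr:   localIP,  // this node's VTEP address
		Group:     remoteIP, // the gateway's VTEP address
		Port:      4789,     // standard VXLAN UDP port
	}
	if err := netlink.LinkAdd(link); err != nil {
		return err
	}
	return netlink.LinkSetUp(link)
}

func main() {
	_ = createVxlanTunnel(1000, net.ParseIP("10.0.0.1"), net.ParseIP("10.0.0.2"))
}
```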
C
F
F
Next: once you have done the VXLAN creation, now go to the second part — it's not over yet. Now you are using that VXLAN and also signaling the VLAN ID into it, right, and that's more interesting. Basically, what you're doing is you're still talking to the NSM the whole time. So now NSM assigns the VLAN ID. Basically, you have fixed the VNI — remember, you fixed the VNI, you know which tunnel you're going on, right — and now we are generating a VLAN ID, 100, for that VNI, right.
F
That's what the NSM gives out, right. And interestingly enough, remember that from here you're sending VLAN ID 100, VNI 1000 goes here, right, and then the gateway could do any translation — you have no idea what it could translate to on the other side. Okay, that shows the end-to-end, I mean, a real deployment scenario, right. Yep.
C
Yep. One thing to keep in mind as we look at this — and this is something that's super counterintuitive, because we're used to thinking about things like VNIs particularly as point-to-multipoint concepts, and so they end up being quite a bit scarcer. But the truth of the matter is that when you look at that original tunnel — let's bounce back to slide five really quickly, because it's easier to explain there, it's a simpler slide. If you bounce back to slide five, that tunnel is actually not parameterized by the VNI.
C
C
It's the space of VNIs between that source and destination IP, okay, and that ends up making the problem much simpler, because the multiplicity of things you have available is enormously larger, and so the likelihood that you would have to do something like add additional layers, like VLAN tags, along that tunnel is much smaller. Not saying it doesn't happen — I'm sure there will be cases where it happens — but the likelihood is much smaller.
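A minimal sketch of that "VNI space per source/destination pair" point — an illustrative allocator, not NSM's implementation; all names and limits are made up.

```go
package main

import (
	"errors"
	"fmt"
)

// vniAllocator hands out VNIs per (srcIP, dstIP) VTEP pair, so the same
// VNI value can be reused between different pairs.  Purely illustrative.
type vniAllocator struct {
	used map[string]map[uint32]bool // key: "src->dst"
}

func newVniAllocator() *vniAllocator {
	return &vniAllocator{used: map[string]map[uint32]bool{}}
}

// Allocate returns the lowest free VNI for the given VTEP pair.
func (a *vniAllocator) Allocate(src, dst string) (uint32, error) {
	key := src + "->" + dst
	if a.used[key] == nil {
		a.used[key] = map[uint32]bool{}
	}
	// VNIs are 24-bit; 0 is commonly reserved.
	for vni := uint32(1); vni < 1<<24; vni++ {
		if !a.used[key][vni] {
			a.used[key][vni] = true
			return vni, nil
		}
	}
	return 0, errors.New("VNI space exhausted for this VTEP pair")
}

func main() {
	a := newVniAllocator()
	v1, _ := a.Allocate("10.0.0.1", "10.0.0.2")
	v2, _ := a.Allocate("10.0.0.3", "10.0.0.2") // different pair, VNI reusable
	fmt.Println(v1, v2)                         // both print 1
}
```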
F
Thinking about this, another question came to our mind — that was one of the related topics. Essentially, this is very akin to sort of the MPLS label allocation strategy: if you go back to MPLS, basically the downstream router will assign the label, right — a very similar trait. So now, in that scenario, of course, in the MPLS world, what happens is each router is full-blown HA, right? So basically you can—
F
You know, be assured that it's highly available, because — I mean, imagine something very simple: you're talking to the gateway, which is a router, as an example, and that implements the NSM endpoint. It's got HA functionality, right, so, for example, if one control plane unit goes down, then you have the backup control plane to make sure that things are stable, right, from sort of a control plane, label-exchange — or in this case ID-exchange — perspective. But if you come here, what happens is, like—
F
Basically, we are letting the node, like you said, to your point, assign the labels — basically it's all of local significance. But the question is, what if that node goes down? I mean, we have HA, but of course at a global Kubernetes cluster level, not at a node level, right. So how do we handle such scenarios, right?
C
The way we've handled it today — the current resiliency story — is: if I'm a network service client and I have a connection that takes me somewhere, I don't really know what happens inside that connection; I just shove packets in and they come out the other end, and vice versa. If the network service endpoint that I am talking to goes away — the particular instance goes away—
C
Now, when I say more or less — okay, let's be super clear: if the network service is stateful, something like a stateful firewall, then unless the guy who wrote the stateful firewall wrote it so that state could be shared among replicas, you're going to have some state loss there. But Network Service Mesh will auto-heal those connections to wherever the network service endpoint is. So there is resiliency built into the system, very definitely.
F
The point to note here is, like, the ID itself, right. So basically, let's go to this very simple example — this simple example, even simpler. The idea is, you have some running ID generation here: 100, 101, etc., right, and then this function goes down, correct. So basically this was the one doling out IDs, the one managing these IDs, right — correct, this VLAN ID space. So now, this node connects to some other node and moves ahead, right.
F
F
So it's just that we probably have to work out the scenarios for how this is all going to line up, right. I mean, basically: hey, this gave out, say, 100 to 200; now that region is unusable from the pod's point of view, because that function died, the node died, and now it connects to a new node. We have to see what that new range is that the new node hands out, correct — which may not be the same.
C
C
C
Essentially, the pod still has its kernel interface; the cross-connect to the tunnel would just change, right. So as long as it is the case that the other NSE does whatever has to happen to the physical network such that now, let's say, the VLAN ID 200 that is assigned goes to the same or an equivalent physical network function, the pod never sees any of that. The pod literally never sees the VLAN ID, right — it doesn't go there.
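A small sketch of the healing behavior described here: the pod-side interface stays fixed while the tunnel/VLAN side of the cross-connect is re-pointed when the client re-requests the connection from a surviving endpoint. Types and values are invented and do not reflect NSM's actual data model.

```go
package main

import "fmt"

// crossConnect models the idea discussed above: the pod keeps its kernel
// interface, while the tunnel side (peer, VNI, VLAN) can be re-pointed
// when the original endpoint or node dies.  All types/values are invented.
type crossConnect struct {
	PodInterface string // stays constant across healing
	PeerVTEP     string
	VNI          uint32
	VLANID       uint16
}

// heal swaps in whatever IDs a surviving endpoint hands out on re-request;
// the pod never sees these values.
func (c *crossConnect) heal(newPeer string, newVNI uint32, newVLAN uint16) {
	c.PeerVTEP, c.VNI, c.VLANID = newPeer, newVNI, newVLAN
}

func main() {
	cc := crossConnect{PodInterface: "nsm0", PeerVTEP: "10.0.0.2", VNI: 1000, VLANID: 100}
	cc.heal("10.0.0.3", 2000, 200) // new node, new ID range
	fmt.Printf("%+v\n", cc)
}
```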
C
F
So what I'm saying is, it can get a little tricky depending on the different scenarios, right. I mean, basically, in some there may be a vSwitch, or, sorry, there are SR-IOV cases. These are some details you have to really work out — how this is all going to come together — because typically, how this is all handled is through some global ID management, right, but that is basically taking into account all the policies. You recall, we were discussing the use cases, right; in some cases you are assuming, like, hey—
F
So I wanted to avoid a lot of things.
F
Just a simple downstream allocation. Because the point to notice is, no matter what we do — even if we do, like, a point-to-point allocation — we are reserving VNIs sort of from a node-wide perspective, right, and we have to see the global effect. At least to my mind, I don't think we have worked out all the scenarios for when things go down and how the ID management is going to happen.
A
C
Cool, thank you — thank you very much. These are super good things for you to bring up, Ramki. There are things we need to walk through; I think at the end of the day they don't end up being tricky, it's just that the things that make them simple are super unfamiliar, and so it's really important to talk them through to make sure we get them nailed down exactly.
F
That's what it is — because you're getting into the next level of detail on implementations, I'd want to make sure this is all fully understood and nailed down, including what the different strategies are for handling, you know, these complex policies around isolation and all of those. Okay, thanks everyone.