From YouTube: Network Service Mesh WG - 2018-06-15
A: I think, okay, so the next thing is: for those who are attending the Open Source Summit at the end of August in Vancouver, there is a cloud native network function seminar that is going to be held the day before the summit starts. The summit itself is on Wednesday through Friday, August 29th through 31st.
A: They hold a couple of mini-summits, I guess you could call them, or workshops, on the Monday and Tuesday before, and the Tuesday before is the seminar, so feel free to join in. I believe you have to register for it when you register for the Open Source Summit, and one of the topics is going to be network service mesh, or at least there'll be some discussion of it.
E: To be clear, this is an open-source community; we all get that all the action items you take are aspirations, so don't feel like you're signing in blood when you sign up for a thing. Just do it if you can, and if you can't, let folks know it's up for grabs again. Yeah.
A: That probably won't happen for the more complex ones, but for the simple ones: okay, so there's the open GitHub issue to verify container runtimes. The issue was created, and Ed was to document in the wiki how to get the namespace from inside the pod. I don't recall seeing anything on the wiki as of yesterday.
D: So I can quickly give an update on the current status, for the benefit of all. Until now we have 17 responses, out of which close to seven responses said no and the remaining 10 people said yes, which means, from a majority perspective, it is still leaning towards the current time slot.
E: And I know you'd made a comment, Frederic, about people not getting to weigh in on times if they said they were okay with the timeslot. Do we want to try to just run a quick Doodle poll for a new timeslot? We could include the current timeslot on that poll, so that we can get a sense of where everything stands. Yeah, that's what I thought we were gonna do; that would make the most sense. Yeah.
E: But if I were in your position, I would kind of feel like I keep getting pushback on it. Would you be okay if we can find another volunteer to pick this up? Sure, I'd be happy with that, yep. Is there someone else on the call who's highly organized (I am NOT) who might be interested in picking this up?
H: I had some comments online on the document, and I think that's fine for now: have people look at it and comment. I think Ivan from Intel made some related comments and we tried to narrow it down. I mean, the problem is in the data plane: what everyone exposes in the data plane to and from the NSM. I don't have a solution yet, but perhaps, as we do the use cases, something may jump out at us.
A: ...the CRDs weren't what was expected, and so that bug was squashed. There was also work done to refactor some of the code, to reduce some of the code duplication that we had and make it a little bit more robust, and there was some more work done around handling errors. So it's a lot.
A: Those were the pull requests that were merged as well; I think that pretty much covered the majority of things. The only other thing was that we also updated the Kubernetes dependency version and the client-go dependencies to use semantic versioning. Client-go, for some reason, releases multiple versions, some of which are semantic versions and some of which are not, and it matters which ones you pull in.
G: Sorry, I have a question regarding the dependencies here, with the 1.11 release. There was a change in client-go, and basically with 1.11, the latest beta 2, client-go version 7 doesn't work, so you really have to use the release branch, release-8.0. I was kind of curious what the plan is: right now you're using Kubernetes 1.10, so are you guys planning to move to 1.11 at some point, or what's the plan?
A: So that's a good question. At this point my recommendation is that we wait until 1.11 is released, because of the semantic versioning specifically: I believe they cut a release of client-go after 1.11 is released, and so it really depends on what the state of the system looks like at that point.
G: I don't think so. I mean, at least based on the change between release-7 and release-8, it seems to be kind of a breaking change. One of the reasons why I looked into client-go 8 is that with 1.11 they introduced a new dynamic client: when you create a Kubernetes client to talk to the API server, you get back a Kubernetes interface, a REST interface, and then, basically, three types of interfaces in version eight.
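As a hedged illustration of the pinning being discussed (assuming a dep-style Gopkg.toml workflow; the actual constraints in the repo may differ), pinning client-go to a semantic-version tag instead of a floating branch might look like:

```toml
# Hypothetical Gopkg.toml fragment: pin client-go to a semver tag,
# and apimachinery to the matching release branch.
[[constraint]]
  name = "k8s.io/client-go"
  version = "v7.0.0"

[[constraint]]
  name = "k8s.io/apimachinery"
  branch = "release-1.10"
```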
E: This is, of course, to say: if you're using this in production, you're a braver man than I am right now.
C: Frederik, just one quick thing before we move on. I just wanted to bring up that I would love to get some reviews on PR 91, because currently some of the stuff has broken due to that, so getting that one merged would be great, if anyone has a chance to take a look at it. Sure.
D: Sure. So, a quick update on the use case document: there are a bunch of comments, and I will incorporate the comments for the use cases related to cloud networking. I also added the distributed mesh, or incorporated it, just to reflect how the use case looks with respect to the distributed bridge model. I just want to briefly cover that and then probably pass it on to John to share his updates.
D: Oh sorry, I think you'd just passed it; yeah, okay. So the use case basically talks about how we build the same use case here using a distributed bridge. There can be two types of meshes; one is the persistent full mesh. What is meant by persistent is that, as a prerequisite of this particular use case, you need VXLANs between all the compute nodes.
D: The only downside to this particular approach is that, as the number of compute nodes increases, it increases the number of tunnels in the mesh, because of the numbers involved. So the idea mentioned here is: how about an on-demand full mesh? The idea of the on-demand full mesh is, to stay with the use case, let's assume that one of the applications exposes an L2 channel and the others essentially want to connect to it.
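A back-of-the-envelope sketch of why the persistent full mesh scales poorly compared to the on-demand variant (the node counts are illustrative, not from the meeting):

```python
def persistent_full_mesh_tunnels(nodes):
    # A persistent full mesh needs a VXLAN tunnel between every
    # pair of compute nodes: n * (n - 1) / 2 tunnels.
    return nodes * (nodes - 1) // 2


def on_demand_tunnels(active_pairs):
    # An on-demand mesh only builds tunnels for the node pairs that
    # actually carry a requested L2 channel.
    return len(active_pairs)


if __name__ == "__main__":
    for n in (3, 10, 50):
        print(n, "nodes ->", persistent_full_mesh_tunnels(n), "tunnels")
    # 50 nodes, but only 4 pods actually talking to the bridge pod:
    print("on-demand:", on_demand_tunnels({(0, 1), (0, 2), (0, 3), (0, 4)}))
```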
D: So that's what I've captured here, and the diagram again explains the interaction between the various components. Here you have the bridge pod, which essentially exposes the channel, and then the subsequent pod, once it wants a connection, would essentially request the connection, and then it continues on in the lifecycle. So this is the data I have added, in addition to the conventional model; I also incorporated the other comments people have given until now.
D: Thanks for the comments also. I think I just touched upon CAPWAP; you just saw the CAPWAP use case. This seems to be similar to the BGP EVPN use case, so probably when we discuss the CAPWAP use case we can discuss whether we want to have some collaboration between both use cases.
E: As somebody whose mind works that way, I would find it super useful, because I tend to go up and down the abstraction trees, and so I do tend to look at a lot of concrete things and try to squeeze out what the common patterns are. So it would be really helpful for me; I don't know how everybody else's thinking works.
B: Apart from the use case perspective, the thing we need to do here is obviously to transport CAPWAP-encapsulated frames coming from the field, basically outside the Kubernetes cluster, to a CAPWAP controller. CAPWAP is basically a UDP-encapsulated protocol, for both the control plane and the user plane. I think you guys, as a group, probably know it; or do we need to go into it?
B: It is basically a Wi-Fi control protocol for wireless access points, standardized in the IETF. Basically, you have the CAPWAP control protocol, which tells the yellow box, a WTP, a wireless termination point, to bring up a Wi-Fi network that the user equipment, the UE, can attach to; and the control plane elements for authentication, authorization, channel management, etc. go over the CAPWAP control channel. It's a classic control plane and user plane split.
B: We have done it in a bare-metal deployment, with some special manner of configuration. The UDP traffic reaches the pod where the control plane and data plane are running as containers, represented by the CP and DP boxes here, and this CP and DP talk to each other over the normal Kube network, over CNI, to share information; that's basically an internal protocol.
B: So now we're at layer 2, because now the data path decapsulates the CAPWAP UDP traffic, and after decapsulation you basically have the layer 2 frames of the device, which are, from the MAC addressing perspective, foreign to the Kubernetes network. So basically it wouldn't be forwarded at all. That means we want to hand this traffic to another network function, in this case the CGW, IPsec GRE, which in our terms we call the gateway.
B: That's a problem, yeah. You'll notice, when you put the L2 payload into the cloud, you're dead, because the security settings will not allow a foreign payload; any number of crazy things could go wrong, absolutely, and nearly everything does. I'll buy that, I'll buy that.
B: So from there the layer 2 traffic, our payload, traverses: we forward it to the next pod, which then creates an IPsec GRE tunnel that is terminated externally in the service control box. This way we leave the cluster, basically because we need to reach what is a foreign system here. So basically what we have in the use case is: in our Kubernetes cluster there is an L2 payload which needs to be distributed.
B: Across pods; and then we need to leave the cluster and forward the traffic, the GRE tunnel, to a remote system. To achieve that: this has basically been in production for a couple of months, or even a year now, but, as mentioned, it coexists in this way, and this implementation upstream is just to demonstrate the use case. Ten minutes before the call we were with Antonin.
E: That actually looks very good; you're right, this is very much up the alley of the kinds of things we're thinking about here in network service mesh. Effectively, what you've done is come up with a way to hack standing up what we would call a connection between your CAPWAP pods and your CGW IPsec GRE, and then likewise to stand up a connection, of type IPsec, to your external service control.
B: Among the use cases we have in mind, we have a quite similar one, just carrying GTP, so generic tunneling protocol payloads, and bringing up pods implementing mobile network functions; if one fails, it comes up on a different pod and needs a way, with the controller, to come up with the same endpoint IP address and so on. But there are all sorts of different use cases if we try to put in the edge use case, because, as you say, everything will break with the payload.
E: Everything; it's even worse than that. Everything breaks with an L2 payload in the cluster, and I've occasionally had this conversation with people: Kubernetes actually has no concept of multiple L2 segments. So if you try to stick a MAC frame out there, God only knows what would happen; even if nothing broke, it certainly has no guarantee of getting where it's supposed to go.
B: Exactly; I've seen it in nearly all virtualization environments, there's nothing Kubernetes-specific about it. We have deployed the same thing on OpenStack, not in containers, not in pods, and even in VMware: if you are not careful and you put layer 2 payloads in there, it either breaks things or you need special security settings, and it's all a mess.
E: Now, this is very cool; I appreciate your bringing this to us. So what are your interest areas? It sounds like you're interested in, number one, making sure that we can meet your use case with network service mesh overall. Are you interested in the medium to long term in terms of using network service mesh? Yeah.
B: For network service mesh, from this perspective, it's not really one use case; we want to leverage the native Kubernetes environment as it is. We have a strong opinion that the network service mesh component, as you guys call it, has exactly similar functionality, from a pattern perspective, to the classic service mesh, that is, the TCP and HTTP ones, but for L2 and L3 payloads. So the medium-term goal would be to join network service mesh here and one day phase out our homegrown IPsec control in favor of it.
B: We are also currently in a running research project about this stuff, where we would like to start to bring up the lab environment with network service mesh, because a number of CNFs with L2 and L3 payloads will come up in this lab, and that's why we're here.
B: Well, you needn't try to be shy; it's just about timing sometimes. The interesting part for us would be to learn how your activity, your working group, is received in the sig-network working group, and, besides that, whether it has a chance to get, let's call it, upstream, or to move forward here, and where you stand at the moment.
E: One of the benefits of the network service mesh approach is that we don't actually need any changes in Kubernetes proper, which is really helpful to us, because we don't have to go try to convince three or four or five different groups in Kubernetes to change something for us. But it's also seen as a good thing by sig-network; I know Tim made a comment that he really liked that about this particular approach.
E: Since then, we've actually been stirring up conversations and trying to have a conversation with the sig-network working group about whether it makes more sense for network service mesh to be a subproject of sig-network or a Kubernetes working group, or what the right formal structure is there. We were gonna have that conversation yesterday at the sig-network weekly meeting, but the turnout at sig-network was very, very low.
B: I can give you my experience with that from the past: if you go back to meetings of sig-network, there were three attempts, even in the channel on Slack, to say: hey, SFC, service function chaining, and service meshes are basically the same, why not seek a common direction? But this was mostly ignored and nobody followed up, because the HTTP folks are usually the ones there, and networking is not highly represented in that group, in my experience.
E: Well, that's fine. I mean, the thing is, the service mesh networking guys have solved a very important class of problem really well, just like the Istio and Envoy guys have solved a very important class of problem very well. And even though I'm hugely in favor of borrowing, by analogy, the cool things they've done, I don't think trying to ram L2 and L3 payloads into their system, which already functions really well for them, is a happy experience for anyone. Yeah.
A: To further that: the Kubernetes use cases are primarily around enterprise, which primarily calls for a very specific L3/L4 pattern, and so, I know it sounds a little bit negative, but this actually was the right approach for Kubernetes, in order to keep it simple and to grow. It just affects us that it doesn't have anything like L2 and so on.
A: This is an attempt to lift those use cases; but, from a Kubernetes perspective, if you go up to them and ask them for a feature, and that feature has wide enterprise use cases, then there's a good chance it'll get in. But if it doesn't, or it complicates those use cases, then getting it in is still not impossible, but it'll be significantly more difficult, because telco and so on is not the main focus.
E: I would second everything you've said, with one exception, which is the perception of broad enterprise use cases. I think at various points there are things enterprises will discover they need that may not yet be perceived as a need, and I'm hopeful we can help with some of those. Oh, absolutely.
A: Yeah, I think, from my view as well: with this particular approach, we went with the term service mesh because it makes it easy for people to latch on to, but the goal was not to stop there; it was to build something that was a lot more flexible, so that use cases like yours can be built. So this is totally in scope; don't feel like you're diverting us or anything like that by bringing up these types of use cases.
B: A different question for you: how ready for prime time is NSM at the moment? When do you think we should start to get our hands dirty, and what do you expect the learning curve to be, steep or flat, for people? Does it already make sense, from an implementation perspective, to get going with it, or should we wait a little bit more? This basically goes back to Antoine on the call, because he has written the IPsec controller on our side.
B: Is this still at the experimental stage, or what's the status of it?
A: We're still very new. In fact, just from a timeline perspective, the first conversation that Ed and I had about this was in mid-to-late March, and so all the work that you've seen from then to now has literally been within the past 70 days. So, on the implementation side, we're building up the primitives at this particular point.
A: In order to describe this, we've added Kubernetes CRDs, and we've built up protocol buffer APIs, which would actually be really good for you to review as well, just so you can get a sense of what the core functionality is that we're working with.
E: Two things come to mind for me. Number one: obviously, you guys need IPsec over GRE, and people with real, concrete needs at hand who want to try things tend to bump up the priority of things; as I mentioned, IPsec over GRE was not on my list before, and it definitely is now. The other one that I actually want to throw out there is: we would really welcome your participation in the development community.
E: You sort of have an opportunity here to shape things, making sure that we meet the kinds of needs that you see, and that you see from other folks, by participating in the community. And I can tell you, having often arrived at communities after things have hardened, after the stuff is already in deployment, it's really nice to have that opportunity for early influence. So we welcome your participation; and then also, you guys look like, essentially, prime beta customers for us, in the sense that you've got a use case.
A: But before we continue on, let me just wrap up the meeting, and we can continue the discussions afterwards if you're both interested. So first: thank you, everyone, for attending. Is there any last-minute stuff that we didn't get to that we should add to the agenda for next week? I think meeting time planning was really the only one.
A: Okay, yeah; so, as I was saying before, I think for these types of use cases, you know, we agree. We talk about network service mesh and we've given some examples, but the examples that we've given are by no means saying this is set in concrete, like rails that we're laying. We want to make something...
A: Ideally, we want to have the SDN and the services and clients all worked out, and they essentially negotiate the transports, so that you can build whatever it is you want to build, like this particular use case, and get things working. So we definitely appreciate this particular case.
B: For us it's basically a question of when and, let's say, how to join activities here. As you guys have said, where we fit was, I think, easy, because it doesn't affect CNI, the underlay, et cetera. And, as I said, we are not starting in production now; we don't intend to say: hey, please make NSM stable next week, because we need to migrate servers over next week.
B: You know, that's where the question was coming from: where and when we feel we'd be in a situation where a bring-up of a system that breaks every second, and is very, very early, is enough for us to understand it, and maybe that would be the right moment to go in here; or whether the environment is already in a shape where I can say: okay, we can start with it, adapting it, helping with the use cases, and bringing back issues, problems, ideas, et cetera.
E: Quite honestly, NSM itself is agnostic as to the data plane that you choose. There's the data plane inside your CNF, your cloud native network function, and we're obviously agnostic as to that; that's whatever you've got to do. NSM is also agnostic as to what you might call the underlay data plane, in other words, the thing that is connecting the connections.
E: That said, as you might imagine, there are quite a few people in the NSM community who care a lot about VPP, and so I expect that to be one of the early data planes supported; so you're gonna get basically what you're looking for. I have a question, just out of my own curiosity: you're dealing with wireless traffic right now, and one of the interesting things that we've gotten from some other folks is what I would call exotic L2s.
E
So,
for
example,
you
exotic
L
to
protocols
so,
for
example,
in
talking
to
the
cable
guys,
they
have
use
cases
where
they
would
like
to
be
able
to
pass
DOCSIS
frames
as
the
LT
payload.
Okay,
do
you
have
exotic
L
twos,
like
that
in
the
wireless
space
that
it
might
be
interesting
to
pass
over
an
L
to
do
over
a
connection
and
network
service,
mash
I.
B: I'd say, from a frame perspective, it starts with classic Ethernet when the traffic arrives. DOCSIS goes beyond that, and maybe it's the same with wireless framing as well: if you mean 802.11 frames, yes, of course, but in terms of what we need to carry, I don't see it for the current use cases we have in production.
E: Supporting them is super easy, and if you don't, then you make it very hard for people. I've had similar conversations with the Fibre Channel guys, where they've got their own L2 and L3; and if your attitude is that there are many kinds of L2 and L3 payloads, then it's a very easy game to play. Mm-hmm.
B: It's on our radar here: as we go into the mobile core network elements, there is non-IP transport coming for sensor data in narrowband IoT, which is a 128-byte sensor frame that is encapsulated somehow, will arrive somehow, and you need to forward it somehow. Yes.
B: What I always think, just stepping out with ideas here, and what we have discussed in our offices, is that the principle of network service mesh, basically encapsulating any kind of traffic in an encapsulation, call it VXLAN, and bringing it to the next pod, could also be a transport primitive for the classic service mesh, because what you really want to do with a classic service mesh is very expensive.
B
What
you
do
is
you
need
to
pass
protocols
and
you
need
to
put
in
HTTP
extenders,
and
then
you
create
a
new
packet
and
you
set
for
Wired
doing
it's
the
other
way
around
and
capsule.
I
encapsulate
the
traffic
put
in
NSA
around
that
and
no
need
for
for
even
decoding
that
packet
and
encoding
a
packet
again,
because
you
have
this,
you
have
such
trace,
IDs
or
whatever
IDs
runs
out
the
frame.
This
way
you
even
could
transport
Els
frames
or
whatever
you
want.
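A toy contrast of the two approaches just described (the byte layouts and function names are invented for illustration, not any real proxy or NSM format): a classic service-mesh proxy must parse and rebuild the packet to inject a trace header, while the encapsulation approach leaves the payload opaque and carries the trace ID in a tiny outer header.

```python
# Hypothetical sketch: trace ID outside an opaque payload vs. injected
# into a parsed protocol.

def l7_inject(http_request, trace_id):
    # Classic service-mesh style: parse the request, add a header,
    # rebuild the packet. Only works for protocols the proxy understands.
    head, _, body = http_request.partition(b"\r\n\r\n")
    return head + b"\r\nX-Trace-Id: " + trace_id.encode() + b"\r\n\r\n" + body


def encapsulate(frame, trace_id):
    # Encapsulation style: leave the payload opaque (could be any L2
    # frame) and prepend a small outer header carrying the trace ID.
    tid = trace_id.encode()
    return len(tid).to_bytes(2, "big") + tid + frame


def decapsulate(packet):
    n = int.from_bytes(packet[:2], "big")
    return packet[2:2 + n].decode(), packet[2 + n:]


if __name__ == "__main__":
    trace, frame = decapsulate(encapsulate(b"\x01\x02opaque-l2-frame", "t-42"))
    print(trace, frame)
```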
A: You'd have to initiate a new ID, but that might be an interesting use case to show, to add as an example: perhaps you do the encapsulation and decapsulation and add in these particular headers, then transfer and make your decisions as you normally would. It should be trivial to do this in the architecture that we're proposing, while at the same time being able to...
A: ...and, you know, basically, I think it'd be a really great way, because it would also demonstrate some of the flexibility: we're showing here's an L2 frame that no one else in the world has ever seen, but here it is, handled without any issues. Does that make sense? Yeah.
E: It totally makes sense, and the thing is, we've got some really fascinating tools for that as well. Not only can we do the thing that essentially encapsulates you in a way that can get tracing; we've also already got, built into things like VPP, stuff like the iOAM protocol from the IETF, where we could not only trace what's happening above the tunnel, we can actually trace where the tunnel is going, to the degree that you have iOAM support, which is starting to come online.
E: Absolutely, I totally get it. For example, if you were going to do tracing, you'd probably want to negotiate tracing the same way that you negotiate tunneling, so that you're actually doing tracing in a way that both sides can deal with. But this actually brings up a new matter, which is: we've talked about negotiation of tunneling, and it's all fine and dandy to wave your hands at being able to do something similar for tracing, but we're still building out the wire form you see between NSMs.
E: Both ends need to support the tracing mechanism, and then there is an exchange of parameters. Right now, the way the negotiation between two NSMs is mostly shaking out is: the requesting NSM says, I can do these; the NSM on the far side basically comes back and says, okay, of the things you suggested to me, in preference order, this is the one I picked, because of my preferences, and here are the parameters related to it; and the same for the tracing.
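A toy sketch of that request/response shape (the mechanism names, parameters, and function names are illustrative, not the actual NSM protobuf API): the requester offers mechanisms most-preferred first, and the far side picks the first offer it also supports and returns its parameters for it.

```python
# Hypothetical preference-order negotiation between two NSMs.
REMOTE_SUPPORTED = {
    "VXLAN": {"dst_port": 4789},
    "IPSEC_GRE": {"ike_version": 2},
}


def negotiate(offered_in_preference_order):
    # Walk the requester's offers in order; the first one the far side
    # also supports wins, and its parameters come back in the reply.
    for mechanism in offered_in_preference_order:
        if mechanism in REMOTE_SUPPORTED:
            return mechanism, REMOTE_SUPPORTED[mechanism]
    raise ValueError("no mutually supported mechanism")


if __name__ == "__main__":
    chosen, params = negotiate(["SRV6", "VXLAN", "IPSEC_GRE"])
    print(chosen, params)  # the far side skips SRV6 and picks VXLAN
```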
A: One of the other use cases we're gonna have to think a little bit about as well: I can see potential use cases where you might have one NSM that's managing, let's say, VPP, and one that manages ODL, and you want to do tracing across both of them, as an example. What would that use case look like?
E
Know
I
would
say
that
something
that
looks
like
that
is
very
likely.
You
know,
because
you
know
it
again
in
that
scenario,
whatever
the
NSM,
that's
that's
controlling
or
talking
to
OD
all
it
would.
It
basically
have
to
have
some
set
of
things,
that's
capable
of,
and
you
know
so,
if
the
end
of
them
on
the
pod,
it's
using
DDP,
you
can
do
IOM
and
it
comes
across,
and
it
says
okay
I'd
like
to
trace
with
IO
am
as
part
of
this
connection
and
the
before
in
comes
back
and
says.
Well,
that's
nice!
A: The one that comes to mind that would probably be most helpful is what they call circuit breaking, but there are numerous techniques that we can borrow from them. So it's just a matter of picking the ones we think would suit best, seeing if modifying them for our use cases works, and seeing if there's a good way we can take them in easily.
B: I'm pretty aware of circuit breaking, retries, etc.; we've been developing other applications which have supervisors or supervision trees, etcetera, but it's always inside one application. The principles are quite clear: you need to signal something; when something fails, you should signal that it has failed. The question here is whether, once you leave the application environment you usually use, that is still supported, when you have independent pods created in a different language, in a different environment.
B: Let's say, on the right-hand side, you have a pod using the Linux kernel and some standard transports, just to give an example, and on the left-hand side you have the running pod implemented in C or VPP, doing another thing. But these pods have a path, a connection; maybe there are readiness probes, liveness probes. So you need to coordinate somehow, across the orchestration environment, which tells...
E
Me
ask
you
a
questions.
Is
that
you're,
a
good
person
to
see
a
real
problem
right?
So
one
of
the
things
that
I've,
occasionally
mused
about
on
is
the
possibility
like
the
following,
which
is
for
situations
in
which
the
thing
you're
doing
is
effectively
stateless
right,
I'm,
gonna,
say
some
things
that
may
be
unclear
about
your
Muse
case.
E: Now, you may have five replicas of that gateway, and we happen to have routed your connection to one of them, and since it's just a VXLAN connection, my presumption is that you don't have any magic state; you're just shoving frames at somebody who will then be able to shove them into an IPsec GRE tunnel to where they need to go. Say we connect you at first to replica number one, and replica number one dies; we discover via liveness probe that it's gone.
E: Maybe the node caught on fire, for God's sake, right? There are some scenarios in which just seamlessly and quietly connecting you to replica number two is probably doing you a favor, and it seems we should be able to do that in NSM: quietly and seamlessly reconnect a stateless connection to another replica that provides the service you're looking for. Would that be useful to you? Yeah.
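A minimal sketch of that idea, reconnecting a stateless connection to a surviving replica when a liveness check fails (the replica registry, names, and probe callback are made up for illustration; they are not NSM APIs):

```python
# Hypothetical control-loop sketch: if the replica currently backing a
# stateless connection fails its liveness probe, re-point the
# connection at the next live replica providing the same service.

class Connection:
    def __init__(self, service, replicas, alive):
        self.service = service
        self.replicas = replicas  # ordered candidate replicas
        self.alive = alive        # liveness oracle: name -> bool
        self.backend = self._pick()

    def _pick(self):
        for r in self.replicas:
            if self.alive(r):
                return r
        raise RuntimeError("no live replica for " + self.service)

    def heal(self):
        # Called when the dataplane notices the backend is gone.
        if not self.alive(self.backend):
            self.backend = self._pick()
        return self.backend


if __name__ == "__main__":
    live = {"gw-1": True, "gw-2": True}
    conn = Connection("secure-gateway", ["gw-1", "gw-2"], lambda r: live[r])
    print(conn.backend)   # gw-1
    live["gw-1"] = False  # the node caught fire
    print(conn.heal())    # gw-2
```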
B: That's exactly the case, and I think with most backhaul protocols, and even in CAPWAP, it's much simpler, because the client makes a retry, so requests keep coming anyway. Keeping the state and putting it onto another stateless element would be even better, but that's more or less easy, because a failing pod knows it has failed; the problem is, sometimes...
E
Sometimes it knows it's failed. It knows it's failed for certain kinds of failures, right? There's your example of "my IPsec tunnel is down on the gateway": there, you know you've failed. But if something went wonky on the physical server where your node is, and the node went down ungracefully so the pod just disappeared, then it strikes me that it would be doing you a favor, if you were truly stateless, when your web server sends another frame.
E
We just, having realized that that one is done, put you onto a new VXLAN tunnel that takes you to another replica, and you continue to get serviced. So instead of having to sink a lot of logic inside the web server, you essentially get a very brief period where it's just not working, followed by working again, and it just works. It looks like a blip of packet drop, followed by the packet drop no longer dropping.
B
It should tell it, "I don't accept more frames," on the left-hand side, because otherwise they will keep sending traffic and can have the impression that everything is all right, while the state of this connection is actually gone. That's the practical use case even with stock equipment at the moment: this side will still try to send traffic, even to a redundant second copy in another data center, etc., because it doesn't know that the connection on the right-hand side has failed; it still puts traffic in here. So, basically.
E
So think about this from the point of view of your web pod; I find it very useful to think about local points of view. From the point of view of the web pod, it does not matter why the connection it has to a gateway service is not working. Why it's not working is not its problem; either way, it is no longer a good connection.
E
If the gateway, for whatever reason, is no longer showing that it's lively, which can include it declaring itself not lively because it's lost its outgoing connection, or somebody taking a sledgehammer to the node it was on so that the pod just isn't there anymore, it doesn't matter which one: the NSM essentially notes that it has failed its liveness check, and having failed its liveness check means that we need to look at the connections it has and either notify the pods that they are gone or reconnect them.
A
Yeah, I think the way I'm looking at it, there are multiple areas, and this would be something that whoever is designing this particular path would have to decide on. So if the IPsec connection goes down, is it possible for it to resolve the connection issue itself, open up a new IPsec channel, and then silently deal with it? That's one option.
A
Another option would be to signal its upstream connection, saying "I no longer have connectivity, I'm going to go away now, and you can make a new request for anything." Making the new request for a new connection, or passing the error upstream, would be the decision of the next hop up. So, in essence, imagine that the IPsec path you had there was itself another NSM connection: that connection, that context, doesn't know anything about.
A
What's above it, other than a limited amount of state and metadata that's been passed to it, so it doesn't actually have the context to deal with the failure. But the next hop up may have that context, or the next one up from there. So you have to pass that information up until you get to a point where something can make the decision: "I want to retry the reconnect," or "we should fail up the entire chain."
A
We should fail up the chain, and eventually you hit the customer, where you might fail the connection in the worst-case scenario. So I think we still want to capture statistics on all this stuff: if we see IPsec tunnels dying all the time, we definitely want to know that, so the tracing is still important but, like you said, orthogonal to the actual decision itself.
A
So, with the management channel that we're using at this particular point, we've built up through protocol buffers the management path: how you make a new connection, and so on. One of the things that we need to build out as well is passing some of this state information back about a connection. We haven't added any primitives for that yet, so this is exposing a hole in that area.
A
It's not that we didn't think about it; it's that we haven't gotten to it in our development cycle. It's on the agenda. What we're using is gRPC, which has a bidirectional streaming mechanism, and in this particular scenario it sounds like what we need to do is ensure that there's some way we can communicate this information to and from each service pod.
A
If the client says "I want to connect to an IPsec network," and that's handled by another network service mesh and it dies, we could potentially even add some functionality to say: if this thing dies, don't even bother returning back, just try to connect to another one, and only return to me if that fails.
E
That's absolutely true, because the client is the one who knows whether a reconnect on failure is going to be a problem or not. Again, it's the locally available knowledge: the web server understands whether this is okay or not, and so it should indicate whether that kind of thing is okay when it requests the connection.
A
And because it's on a per-connection basis as well, that means you can have multiple connections from a single client. So you could set up, based on your SLA requirements and so on, exactly what you need. Even if it's the same data path, perhaps for one customer you have a different recovery strategy because of contractual obligations. Yes, and that could be added in as a retry or an attempt, so you can serve it as best you can.
B
That's what intents promote anyway. With an intent, you already know this is, let's say, latency-critical, etc., and it would drive some behavior. So that's exactly what I would like: that it's not done on a per-pod basis from a sidecar perspective, but that we have it on a per-connection basis. So it's: okay, this connection fails, with information from the NSM at this level, and not, let's say, "a VPP port fails" or something global.
E
The other thing I would suggest, and it sounds like you're already starting to think this way, is this: one of the things that I found really profound here is that the very dynamicity of these L2 and L3 connections in our service mesh opens a whole world of possibility that we just haven't thought of before, because the world was too static. For example, is it useful to have a connection per client coming out of your web server? Yeah, why not?
B
That's what's happening with soft GRE. With soft GRE you may say, per client, make a soft GRE connection, which is not pre-configured; you bring it up because this L2 client needs to go to another hop. If you can establish that dynamically, that's entirely possible, and it wasn't before: it was previously only possible at bring-up time and not at runtime. So, absolutely.
A
If you have the bandwidth to help build this out, fantastic. If you don't have the bandwidth to help build it out, the use cases alone are invaluable, so don't feel bad if you don't have that at this particular time. On timelines, I don't have a good answer at this particular moment, other than: to me, this is a high priority.
A
I want to get this up and running as fast as possible, but I want to temper that with making sure that we get it right. That's why I can't really commit to saying, hey, this should be usable by October or November, or some other specific date. So I apologize for not having something concrete on that side.
B
That means, for the time being, in its current state, the environment is not even usable? It's just about defining the APIs at the moment? Or is it already usable in a very limited scope: can you push a packet from A to B already? That's the question, as you can imagine. Yeah.
A
So we're targeting VPP as the first forwarding backend. At this point we haven't built the backend for that just yet, so there's no data path; that's where we're at. So from a production perspective, or if you're willing to build out such a component, it's not really ready yet.
E
The one thing I would say is that there are some placeholders right now for the APIs; don't take them too seriously. We just needed something as a placeholder while we build out the infrastructure to be able to handle those APIs, and my suspicion is that once that infrastructure is in place, there will be some rapid iteration getting very serious about those APIs.
A
So we should have the ability to push packets very soon, especially with the expertise, the access to resources that we have in the team, and the help with that. We should have something relatively quickly, barring any major stoppers that we find. It's just not quite ready to be demoed.
E
The other thing I will mention to you is the way NSM looks at the actual CNFs. You can sort of divide them into two classes immediately. There are what you might call the smart CNFs: these are the CNFs that are intelligent enough to participate in the conversation with the NSM. And then there are what you might call the dumb CNFs: these are the CNFs that are not smart enough to participate in the conversation with the NSM.
B
It's like having a proxy beside the service, saying: are you a smart VNF, or do you need to have a proxy next to you. I think the value is there, but as you see with the open VNF organization already, we are on the VNF side. So basically we see ourselves as a VNF or CNF provider, so we can make them as smart as we want.
B
No, there shouldn't be a vendor-specific CNF there, because we are about to build our own VNFs. Obviously, if something comes in which fits from a system-integration perspective, if you say, hey, we have this vendor CNF or VNF still here, then from a system-integration perspective, okay, that can be represented by an interface. But if you think about our own CNFs, the VNFs themselves would always be smart: they're already consuming Kubernetes resources directly, pushing metrics, and so on.