From YouTube: Network Service Mesh WG Meeting - 2019-03-12
A: Cool, okay, so we have some events coming up. First, the recurring meetings: besides this particular group, we have the NSM documentation call, which is every Wednesday at 8 a.m. Pacific time, and on Friday we also have a use-case call, also at 8 a.m. We also encourage people to go to the CNCF testbed call, which is every first and third Monday, if you're able to make it; NSM and Edison definitely have a good role to play there.
A: On April 2nd we also have another event coming up, which is the Intel out-of-the-box event in Santa Clara, held at Intel's site. I'll be talking about what Network Service Mesh is, and I'm also organizing to see if I can get them to give me what's called, basically, a workshop; you're probably very familiar with those. The idea, basically, is that people would get some hands-on experience with Network Service Mesh over a period of, I think, two to three hours.
A: We have MPLS + SDN + NFV, which I don't think we have any talks at, but it's worth going just to hear what people have to say, if you happen to be in Paris. Container World, April 17th through 19th, has a talk accepted from Prem. For KubeCon EU, Nikolay mentioned a couple of the talks were rejected; I'm still waiting to hear about some of the others, so we'll know for sure what's going on with that soon. We do have a co-located event at KubeCon EU.
A: We also have KubeCon China in Shanghai, where I don't believe anyone's got any talks, and we finally have ONS in Europe, which is going to be in Antwerp. The call for papers is currently open and closes on June 16th. So if you would like to give a talk at ONS Europe, definitely let us know, and we can help craft a compelling talk.
B: Yeah, okay. So, can I take over just to share my screen and show some things? Okay.
B: So currently our main way to deploy and to develop is to use Vagrant virtual machines. Depending on the hardware specs of your computer, you are able to spawn two virtual machines; this is what we have as the default. One is the master, the other is a worker, and then you deploy workloads across them. With the current deployment that we have with kind, we are getting three nodes: one master and two worker nodes. What I want to show you is how quick it is. Of course, the very first time you have to deploy, because it's based on Docker, it essentially downloads and prepares an image; this is essentially the kind node image, the version of Kubernetes that it deploys. Once this image is prepared, after that it's just as quick as what you're seeing now.
B: This is a fast way to do development. Of course, we already found some limitations, like mounting host volumes, and we know that there might be some instabilities. Fortunately, it turns out we know some people who are involved in the development of kind, so I guess we thought that it would be a good...
B: And you can see that you have three nodes, still showing as not ready, but essentially the idea is that this makes it more accessible for people to develop and to try it, without all the complex setup that we have had up till now: you have to download VirtualBox, or if you are on Linux you have to use KVM if you want to, and all the different things that we have in our QuickStart guides.
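The three-node topology described above (one control-plane "master" plus two workers) can be sketched as a kind cluster config. This is a hypothetical illustration, not the project's actual file; the file name is invented and the exact apiVersion depends on your kind release (this uses the v1alpha4 schema from current kind releases).

```yaml
# kind-3node.yaml (hypothetical): one control-plane node and two workers.
# Create the cluster with: kind create cluster --config kind-3node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

After creation, `kubectl get nodes` should list the three nodes, typically NotReady until a CNI is up, matching what is shown in the demo.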
B: You can deploy all the images, but the other thing that we were trying to evaluate is whether we can use this as part of our continuous integration. Today we use CircleCI as a way to build images, and then all these images are pushed to Docker Hub; then, from Docker Hub, they are downloaded in Packet, where we spawn physical machines, and then we have two nodes, creating a similar situation to what we have with Vagrant. We're trying to evaluate how this project can help us in speeding up our CI. Today we take, I think, a little bit less than 20 minutes.
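The pipeline described above (CircleCI builds the images, then pushes them to Docker Hub) could be sketched roughly as follows. This is a minimal hypothetical CircleCI 2.x config, not the project's real one; the job name, image name, and the DOCKER_USER/DOCKER_PASS environment variables are all illustrative assumptions.

```yaml
# .circleci/config.yml (hypothetical sketch): build a Docker image and
# push it to Docker Hub, as described in the CI discussion above.
version: 2
jobs:
  build-and-push:
    machine: true
    steps:
      - checkout
      - run: docker build -t example/nsm-image:latest .
      - run: echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
      - run: docker push example/nsm-image:latest
workflows:
  version: 2
  ci:
    jobs:
      - build-and-push
```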
A: Yeah, I find the power usage of running another VM slightly annoying but acceptable. In that scenario this is compatible with OS X, and we do have people who will be using it in the OS X environment from the start, namely Nikolaj and myself. It would be interesting to see how this works in Windows as well. There are some problems with makefiles and so on within Windows, and how it all works together, but I think it would be a good place to start. If someone really wants to get a Windows environment up, this would be a really good place to initiate that.
E: So, for example, we are seeing one more use case coming together, and there is potential interest to carry that forward from Palo Alto. Also, we are essentially clarifying the fact that there are several protocols under consideration, but a protocol is not a use case; for example, segment routing is a kind of protocol to, you know, realize these use cases, right. Some of those clarifications we also added. With that said: essentially, we have spent some time on the use cases themselves over the last couple of calls, and we would quickly like to get to the next level of detail.
E: The only change from last time was what we discussed additionally: there is an internet breakout, as you can see. So basically, we talked about enterprise corporate access last time, right: traffic coming in hits the PE and then goes to the gateway (the S-GW, in 4G terms) for, you know, mobile session processing, and then heads to the enterprise. The other type is this:
E: essentially, you directly break out to the internet right after you, you know, go through your mobile session processing; you don't head to the enterprise, but you go to the internet. But at that point you hit a DPI engine. The DPI is primarily for reverse traffic, but, you know, it keeps it symmetric.
E: So, essentially, to summarize what would happen (we'll talk about some of the details, but from an NSM connectivity role): in short summary, when there is no SR-IOV or SmartNIC, or any sort of hardware acceleration, there is additional tunneling needed. But if they're using hardware acceleration, notably SR-IOV or various SmartNICs, essentially you can't leverage the existing tunneling mechanisms, because, for example, GTP-U is a UDP tunnel on top of customer packets, and the same goes for IPsec or L2TP, right.
C: So this is becoming more common, the SD-WAN use case, particularly with edge picking up. The main scenario here is: you have a core cloud, which in the enterprise case can be the main data center, and then you have edge clouds, which are closer to the customer premises where you're providing services. There are a bunch of use cases where the edge becomes very important. So in that case, the scenario we are talking about here is: how would NSM play a role in building the infrastructure, as well as helping the apps to seamlessly access the resources, both in the edge cloud and in the core cloud? So that's about the SD-WAN scenario. I have also just put in the link; in ONUG there is a specific article about how Kubernetes and SD-WAN play a role. For example, you have scenarios within a metropolitan area network, a MAN network.
C: You would essentially have the Kubernetes master sitting in the core cloud, and you would have the cluster formed across the edge clouds. Another important thing is that the edge cloud has to be really efficient in terms of its footprint, be it hardware or software. Also, in certain scenarios, this edge node would essentially be instantiated based on the requirement, and also brought down once the requirement is met. So this is primarily where and how you would play out the edge cloud using Kubernetes and NSM, yeah.
E: So, before jumping into the connectivity walkthrough, a quick series of points on how NSM plays a role here. I do want to point out that I just sent an email from the Kubernetes scaling group thread, essentially about how a distributed NSM scenario would work in the Kubernetes world. So, essentially, a potential deployment model could go beyond your actual physical data center, where latency is basically on the order of microseconds, to one where you can take it to milliseconds.
E: Basically, the case there is, you know, an enterprise's SD-WAN or the telco distributed cloud, where a hub site could cover several small edge sites. An enterprise example is the branch office: probably you just have one server there to cover it. You don't need to install a Kubernetes master there; as long as the hub site is within a few milliseconds of latency away, they're good, and this was essentially confirmed by the Kubernetes SIG Scalability group.
E: Essentially, within a millisecond you can call it metro distance, right; within a metro should be fine, but definitely not over the WAN. But also, the key point is that you don't need to have a Kubernetes master in each and every location; if you just have a single server, it's not necessary to just dump a Kubernetes cluster there. So, going to the connectivity walkthrough: essentially there are three layers, coming from the inside out. The first is, of course, the tenant or customer payload, right.
E: So, for example, we saw IPsec being used both in the distributed telco cloud use case and in the SD-WAN use case, and segment routing, of course: the tunneling mechanism that is more popular in the telco world, but it's very much possible in enterprise SD-WANs, because it's typically multipath. You have the packets going through the internet using IPsec, but then some critical traffic will go over the MPLS network; at that point we'll be using segment routing, and at this point the tunneling protocols are the MPLS-over-IP tunneling mechanisms.
E: But the key thing to note here is that essentially we're talking underlay here, and in this case all these packets are routed, right. Basically, the fundamental mechanism is all talking at this underlay level. The L2 header is basically providing the MAC header, but the packets are still routed: at each routing hop we look up the MAC address and take action on it.
E: So now, as you peel down the layers: if there is SR-IOV or your SmartNIC, essentially what happens is that at that point you have the MAC address to process. Either it could be a global MAC, and it could be processed directly, or, if you want to deal with private MAC addresses, then you need to bring the outer underlay, or outer VLAN tag, also into the equation to model.
E: The typical one, I mean, could be the VXLAN use case, or it could be something else. So the rationale here: essentially, what you're doing is you're still routing with respect to, you know, the underlay, but you are able to convey the entire L2 payload. I mean, this whole payload, everything, including the GRE or IPsec tunnel, is a payload for it.
E: Essentially, there are two interfaces, and this example is also catering to multi-tenancy. If there's a single tenant, then you would just need one additional interface, I would say, besides the Kubernetes infra traffic; but in the case of multi-tenancy, and if you want superior isolation, this is a typical deployment model and the path forward.
E: So, the current NSM support, I mean, again to be very specific, is one endpoint per network service description, right, from a description perspective. But then, of course, it supports software- and hardware-accelerated CNIs for interconnection; it's very flexible, working with the device plugins, and then it automates the end-to-end.
E: So now, if you enhance the initial scope: from a multi-tenancy use case perspective, we definitely need to support multiple endpoints within the network service description. Basically, if you go externally, there is the NSD, the network service descriptor; from that perspective, for multi-tenancy we definitely need to support multiple endpoints.
E: If you go to the long term, we should actually really be looking at implementing full-blown HA policies. There are different types of policies, but the thought process is that a simple first step could actually be one of the HA policies. Typically there is a fault; in the bad scenario, there is a fault in an endpoint. Typically it's an indicator of bigger problems, because you will essentially be hitting, you know, both endpoints no matter what: even if they're going to different network functions, they will be hitting the same host.
E: Perhaps the same NIC, or the socket to which the NIC is attached, right. So basically, one HA policy could be simple: in this case we just kill the entire network function and recreate it; that's the suggested first step. But essentially this is coming to implementing full-blown HA policies, and Daniel and others who are experts in segment routing know this more than me. Basically, it's pretty elaborate how you want to fail over and take different paths, right.
E: The next one is essentially, again, the multi-tenancy use case, but this is getting into another detail, all right. So basically, we talked a little bit about, you know, how you manage the two cases, right. One is where SR-IOV or SmartNICs are there: how do you manage the MAC address, right? So basically, it could be a global MAC address, or an outer MAC address, which could be private, and then a VLAN.
E: But the more important one, I think, to capture the details here, is the support for endpoint grouping for interconnect policy. Let me explain a little more. So let's say we go to the domains where, essentially, there is no SR-IOV or SmartNIC, which is a very popular case: for example, using VPP or OVS, or, you know, so many other virtual switches, right. So basically, what happens at that point is that you have to critically look at how you manage the ID space.
E: One simple way, say, I mean, if you are looking at VXLAN: one simple way would be to go back and say, hey, all these interconnects; you basically have a network service description which has several network functions, and you can say, hey, all of them belong to the same VNI. You basically do point-to-point connections, but you make that simplifying assumption: same VNI.
E: So this is a scalable approach, versus saying, hey, well, I need fine-grained isolation, like each of the interconnects needs to be completely isolated, and then starting to use VNIs for each; then, basically, you could see scalability issues in your deployment. So that is an important thing to keep in mind; that's sort of the notion of endpoint grouping for interconnect policy, right.
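The scalability trade-off described above, one shared VNI per network service versus one VNI per point-to-point interconnect, can be made concrete with a toy calculation. This is not NSM code; the function name and the example numbers are illustrative assumptions, and the point is just how much faster per-interconnect allocation consumes the 24-bit VXLAN VNI space.

```python
def vnis_needed(services: int, functions_per_service: int, shared: bool) -> int:
    """Count VNIs consumed by `services` network services, each wiring
    `functions_per_service` network functions with point-to-point links.

    shared=True  -> one VNI per service (the simplifying assumption above)
    shared=False -> one VNI per interconnect (full isolation)
    """
    # Fully meshed point-to-point interconnects among the functions:
    interconnects = functions_per_service * (functions_per_service - 1) // 2
    return services if shared else services * interconnects

# Illustrative deployment: 1000 services of 5 network functions each.
print(vnis_needed(1000, 5, shared=True))   # one VNI per service
print(vnis_needed(1000, 5, shared=False))  # one VNI per interconnect
```

With these numbers, full isolation needs ten times as many VNIs, which is why the grouping policy matters once deployments grow.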
E: So, basically, we need to have these policies, but we can start small, saying, you know, it's just two policies: one is full isolation, the other no isolation. Keep it simple, but then we can start going into more fine-grained policies as we make progress. I also do want to stress the point that all these standard tunneling mechanisms which you're using, or the more popular ones, are always routed from an underlay perspective, where basically each hop is a router.
E: That's how these, you know, popular tunneling protocols are. And essentially, the next one is around traffic management. So, basically, based on the interconnect policies and also the multi-tenancy, you basically support interconnect usage monitoring. It could be construed as independent of the previous one: multi-tenancy or not, we need to look at interconnect usage monitoring, right, and figure out what is going on from an interconnect perspective.
E: You know, packets in and out, and then you can take it further; this could be, for example, useful for figuring out whether there is some sort of DDoS attack happening, right, and then taking further action. The next one is more of a repeat, saying this should be done for both software- and hardware-accelerated CNIs, not for one or the other, right: the same usage and utilization monitoring. And the last one is essentially supporting bandwidth management, right.
E: This is again a very interesting and critical one; the use case is QoS. If you look at, you know, the compute side, Kubernetes supports elaborate policies. For example, for quality of service there are three levels from a compute perspective, for CPU and memory: you call it best effort, or burstable (a min-guarantee model), or guaranteed, right. In our case this would also entail Kubernetes scheduler changes.
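The three compute QoS levels mentioned above map onto how a pod declares resources. A minimal sketch, assuming an illustrative pod name and image: Kubernetes derives the QoS class from the requests/limits; this container gets the Guaranteed class because requests equal limits, setting requests below limits would yield Burstable (the min-guarantee model), and omitting both would yield BestEffort.

```yaml
# Hypothetical pod spec illustrating the "guaranteed" compute QoS class.
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-example   # illustrative name
spec:
  containers:
    - name: app
      image: nginx               # illustrative image
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:                  # equal to requests -> Guaranteed
          cpu: "500m"
          memory: "256Mi"
```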
G: So I had one user who passed over segment routing and VXLAN as kinds of overlay. That's fine, it's all good, but there's a problem there, I think, of who's authorized to use which kind of overlay. If you were doing the enterprise use case that you suggested and used SR, then that simply wouldn't fly, because there's somebody in the middle who's got to agree that SR is actually permissible and workable. So I'm guessing that one of your next steps here is to break this down into some sort of call sequence.
G: So, within the tunnel the packets will appear to be routed? Yes, I'm not debating that; I'm more trying to work out... I mean, we're NSM here, so the question is how you set up the mesh, and so the question I'm asking myself is how I would set that mesh up with a sequence of calls to NSM managers, and that's where this sort of loses its thread.
A: To add on to what Ian is saying, and correct me if I'm wrong: it's not that someone wants segment routing; it's that they want some feature or functionality that they can render into segment routing. And so part of what would be useful would be to try to work out.
G: Could be. The thing is, that overlay is not clearly a network service endpoint, but nevertheless that overlay is an important, a significant part of the mesh that you're setting up. So you're asking for something specifically by property, and I'm not quite sure how that request would happen; and again, particularly because this crosses an administrative boundary between an enterprise and a service provider, that's quite an important question, because the call might result in asking the service provider for something, rather than asking for segment routing by name, yeah.
A: It's the same advice that I gave to the network service planning group at their inaugural meeting, when they were talking about multiple interfaces, and I said: okay, the thing that people want is not the multiple interfaces; you actually have to look at why they are asking for that feature.
A: It might be for a faster data plane, or it might be for something where they can get more predictable performance or quality of service, in order to solve a very specific use case; and the actual quality-of-service mechanism, or the fact that it rendered into another interface, is not actually what the user was looking for. It was just a means to an end. So we should try to work out what some of the more popular asks are, and why.
E: There are two aspects to it, right. Of course, a protocol like SRv6 brings some additional value, more, I would say, with respect to the end-to-end service itself; you can ask us, correct. But we also have to look at what it exactly means from an NSM perspective. At least if you go back to the architectural impact, it's basically the use cases: it would sort of then fall into the QoS aspect, right, or the traffic management one.
E: So basically, with segment routing, you can do some fine-grained engineering between these two endpoints, and then the way that will actually percolate to NSM is more as QoS, right, correct. Or even classic MPLS: you can do bandwidth management, but it's sort of more of an implication rather than anything else; that is the thought process, right.
G: Yeah, but again, the problem at the moment is that you've got use cases described quite well, and implementations, you know, components that you can use to implement those use cases, and NSM is the glue between the two, and I'm just not seeing the link here. So really it comes down, effectively, to use cases, tools, and how NSM would assemble those tools to make the use case work, and I think that's where this document is a little bit scattershot at the moment; it makes it larger.
E: Yeah, yes; for that aspect, essentially, I think what you are looking at is, sort of: now, once we have clarity and we have some idea of what you're going to do, we want to double-click into sort of a workflow to bring that clarity, saying, hey, this is how it will look, or anything else. Because I think much of this is probably clear in the minds of people who are extremely familiar with it, but it's still not reflected in the workflow, right.
G: I am familiar with service provider networking and with the use cases that you're explaining, and yes, absolutely, I think that's exactly what you should do. You need to double-click into what the calls would be to NSM to set this up, and what they would be on failure as well, because you sort of hand-waved that one off slightly; that is precisely the detail we need in order to actually work out whether the current description of NSM actually applies to this use case, or whether we're missing something, and that's what we need to understand.
A: So, if we started with that basic premise, then we'd start asking questions like: how do we ensure isolation between the two customers? How do we ensure the user gets the throughput that they're looking for? How do we ensure the user gets the security they're looking for? How do we ensure, when a failure occurs with a link or one of the services, that they get rerouted?
A: And so I think that if we look at each one piecemeal in that scenario to start off with, that would probably be a good way to start, and then what we can do is start to compose them, saying: how do we get auto-healing plus security plus whatever, and start to drive to a full end-to-end use case. So I think that you're starting off really well; you're asking the right questions.
A: The feedback, I think, is to focus primarily on the top-level use case: what is it that we want? You know, if we had thirty seconds to tell an executive, who is going to make the decision on this, what we provide them: saying things like "oh, we provide quality of service" and so on, those things are important, but we have to be able to drive an effective narrative that helps lead people to see how this solves it, at a holistic view, yeah.
G: And that's what I'm trying to understand from your document. You've reeled off a list of... and again, part of the problem with the document is that it's a mix of two things together. One is high-level use cases, which should have under them, you know, enabling features that will make them better, and one perhaps is low-level features that you might use to enable those use cases.
G: But the more important question, which is completely unanswered as yet (which is totally understandable), is how NSM enables you to use the features you're talking about to deliver the use cases that you're interested in, and that's what you've skipped over in there. You've mixed the two up, and then the missing component just isn't obviously missing, I think.
C: Yeah, those are fair points; I agree with you. So what we have done is essentially take more of a top-down approach, where we first define the use cases and then get into the details of what the requirements would be from an NSM perspective, and I agree: as a next step, what we can do is identify the components and then, more like a call flow or a sequence diagram between them, essentially enumerate the details of how things would look in both of the use cases, yeah.
E: The other aspect was precisely this: NSM-wise, we wanted to get some early feedback. And yes, for this particular step, what I felt was that it probably needs the experts, that is, Fred and Nicolai, right, I mean, who know the code inside out, or Ed, for that matter; basically, the people who have been looking at the workflows. And you want to make sure first that we at least have, you know, basically the direction
E: we are setting here with the distributed cloud. And as it looks, I think there is good interaction and, basically, alignment. This is more about setting the stage for that; that's what we had in mind, rather than doing everything and then finding that, oh my god, these use cases themselves are busted; then it's a bigger problem we are facing, right, oh yeah.
G: Yeah, that's precisely it; that's what I'm trying to make sure of. And it's not about the fact that they know the code, it's about the fact that they know what they're trying to do. But the point about your use case is that it evaluates whether their current thinking is the right one, or whether we need to change direction slightly. That's exactly what you're trying to do, yeah.
A: Just to be clear: I think this is a fantastic start. Like, literally, we've had what, one or two meetings at the most, and we're already up to having this information discussed, and in a scenario that is digestible. So I really want to make clear that you're off to a really great start. We just...
A: We want to make sure that this thing is heading in the direction of making it very clear not only what use case we're trying to solve, but also making it very clear: this is the part that Network Service Mesh provides, and then this is the part that you have to plug in.
A: You know, things like firewalls and so on: we're not going to provide a firewall for you, but you can bring in your favorite firewall, provided you can plug it in or create your own endpoint for it. So in that scenario, I think this is a fantastic start, and I'll definitely make sure my time is available to help you work out not only the high-level stuff, but also the details as well.
C: I think, even before we get into the call flows, right, what I was thinking is to at least zoom into further details on how a typical deployment would look, right, and then what we can do is bring in call flows. So that's what I was thinking: probably one level further, getting into the details; for example, be it an edge cloud or be it the other scenario, what would the components be, what would a service be, and what would happen when a service gets staged, things like that.
C: Also, we are probably calling out for more people who can get in, because definitely this team has a lot of experience, and probably whoever is interested can join and help us in defining the details; because once the rubber hits the road, then you definitely have a lot of unknowns, or a lot of questions, and the more the team size, the better it addresses what I have.
D: I have a standing conflict with the Friday call, but I'm hoping to eventually reschedule that, because I have a lot of questions on this, especially since it tries to span both the service provider network and the enterprise network, and the considerations for what it actually means when an enterprise peers with a service provider directly, like wanting to become a BGP peer, this and that. Like, I think a lot of this stuff looks good on paper, and I think the technology is there to support it.
D: But that doesn't necessarily mean that the business processes, or the way that service providers onboard customers, are necessarily there to support it, unless you're purely just going to ask for, you know, a BGP peer and then do tons of tunneling over everything. Or, you know, I saw SD-WAN was in there, but like I said, even then: you could have NSM make all these calls to the SD-WAN.
D: But if it only controls the overlay, then nothing actually happens on your underlay, and so you say "increase this to 500 megs" and you still only have a 100-meg line. I mean, there are lots and lots of considerations if you want to start playing in the service provider space; I think there are just weird caveats there that don't apply in the enterprise space, that we'll have to account for.
D: Right, so anyway, as you said, that's what we've been working on; like, I've been working on the MEF standards for APIs and stuff. I recommend you Google, after this, and just look into the Interlude API and the Sonata API and such. At least, you know, my charter and a few others: we've been looking in the MEF space at doing exactly what Ian just described, where we present an API, and, if resources are available, you could potentially make requests for changes to the underlay.
D: This is still in its early, early infancy, but it's something where, at least in the Ethernet space, standards and, you know, models and whatnot are starting to see their infancy. I mean, I worry that, like anything in the service provider space, there'll be fifteen different standards and then nobody will use them.
D: But these are considerations in NSM at that point, and this is where mine and Ed's definitions of data plane get a little wonky, because in this case that Interlude API for, you know, service modification of the underlay would essentially be your data plane as far as NSM is concerned, and you'd make a request to the service provider to not only, you know, get that overlay change, which you would do via the SD-WAN platform, but then also make a request for the underlay change as well.
G: I think we can't absolve ourselves of this by saying it's all the data plane's problem, because in many regards the data plane here is just another service; again, particularly when it crosses an administrative boundary and it's got specific behavior. So, you know, we have to deal with the consequences of having services that people bring along that implement, you know, end-to-end connectivity.
A: And the question is going to be: what things do we want to lift from the data plane into NSM, so we can do the right kind of negotiation between disparate things? So that's going to be a very interesting question, and it may actually be different between certain types of data planes. So we definitely have a lot of things to look at in that space, but with that, we're actually out of time. So, yeah.
E: Good point, and also a good point and message, Jeffrey. So, similarly, we should probably also be looking at whether there is anything to leverage from ETSI, right; I mean, I'm not saying they've done a fantastic job on the network service descriptor, but that could also be leverageable. As we walk through all this, one request to the team: do you think that this Friday is, I mean, basically, not the most convenient time, the Friday morning, I...