From YouTube: Network Service Mesh WG Meeting - 2018-10-16
A: We are currently working on a network service mesh demo, and we'll have more to talk about that later in the meeting. We would like some help from people who are interested in either building the demo or evangelizing it in some way, through a podcast, a blog, or a medium of your choice. We also have an FD.io mini summit proposal that was submitted for network service mesh by Tom, Herbert, and me. I believe that's still pending acceptance — unless Tom has heard about it, I certainly haven't seen results.
A: Right, so — sorry — all the work, I believe, is currently pending. I need to actually update the content in that, but there are some improvements to VPP coming down the line that should make it a lot easier, so I'll make sure to add that. In terms of tasks: "add sidecar containers in network service mesh," which was added by Prateek — is Prateek on?
A: Exactly — it's looking for those sorts of errors simultaneously. If you're writing new code, please use go-errors as well. What go-errors gives us is this: the way errors in general work in Go is that you create an error; it has a string embedded in it, maybe a little bit of extra metadata that you've added, and then you pass it up. Eventually that string gets printed somewhere, but the thing that's missing is the context: where was this error?
A: When was this error created? Where was this error created? That information is injected automatically by go-errors, and it works with logrus, so that if the error ends up being printed out to the event stream, we actually get a stack trace of it. It'll be extremely helpful — so any time you have an error you need to use, please reach for go-errors instead.
A: So, in terms of becoming a Kubernetes working group member —
D: Folks would very much like us to be a Kubernetes working group. The Kubernetes folks, when we last talked about this, were in the middle of redefining what it means to be a working group, and so there was general confusion as to what the criteria were, how to engage with that process, etc. Since then we've been very busy actually writing code and trying to get to something to demo at KubeCon, so we haven't really picked that up. I do have one question on the board here, which is —
C: Yeah, definitely — but moving back to this, I just wanted to make sure we close this discussion here. We've discussed a few different times, as a broader project, whether this could be a Kubernetes working group versus a CNCF one, and I don't know that we ever decided. So what is the status? Did we decide which way we're going, as a community here?
C: Right, so — Frederick, I know when you and I were in Amsterdam we discussed that a bit, and my preference was to maybe try to move us towards more of a CNCF working group. So I guess I'll just say that — but I fully admit that I haven't done anything to make that happen either. That was kind of my preference, in order to go ahead.
A: To be honest, at this point I think I would prefer CNCF as well. I think we should add this as an action item — maybe an agenda item for next week — so we can talk about it in a little more detail and work out: is this something we want to progress now? Because there is work involved, and one of the risks we have is that we end up spending a lot of time on this and end up not producing demo code.
D: You're right — so we talked about this a little bit last week, and I'll just go through it really quickly in case there are people who missed it. There's an active desire to do demos at KubeCon for network service mesh, and so we're sort of looking at what can be accomplished between now and then, and the kind of generic thing you could do at a high level.
D: We were sort of talking about doing a simple chain demo, where you've got two nodes, and you've got a client pod that runs on one of them. It obviously only consumes Kubernetes interfaces; that gets cross-connected via the VXLAN, and then we chain through to network service endpoints — in this case, memif, and then a direct memif, for reasons of efficiency. Now, one thing to be clear about: I don't expect network service wiring to be working for KubeCon yet, so that's, you know —
D: For this case it would probably be one network service endpoint explicitly consuming the next network service endpoint, instead of some of the really fun things that we're looking to do with network service wiring, but it still sort of shows the basic idea. Getting more specific, this is sort of one of many options. Option zero would be whatever VNFs — or, if they're trying, CNFs — are being produced for the VNF/CNF comparison.
D: Getting a little more realistic, we can look at doing replicas, which means that we would actually be picking out one network service endpoint for a network service out of maybe three to five replicas. This gets a bit more ambitious; we'll have to see where we get to in time for KubeCon. And then we had some conversations about how to graphically visualize topology, and whether we might or might not get the auto self-healing working in time. If we get the auto self-healing working in time, then we could show cool things.
D: Like: you have a client pod, it's connected to a network service endpoint, you kill the network service endpoint, and the client pod gets automatically connected to a new network service endpoint providing the same network service — a nice, smooth auto-healing behavior. That, again, is a stretch for KubeCon, but if we just get to something as simple as showing the simple chain demo, I think everything past that point becomes gravy.
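The auto-healing behavior described — on loss of an endpoint, re-wire the client to another endpoint providing the same network service — can be sketched as a selection step. This is a minimal illustration, not the actual NSM reconnection logic; all names are hypothetical:

```go
package main

import "fmt"

// Endpoint is a hypothetical view of a network service endpoint.
type Endpoint struct {
	Name  string
	Alive bool
}

// reselect models the healing step: keep the current endpoint if it is
// still alive, otherwise pick any live endpoint providing the same
// network service from the candidate pool.
func reselect(current Endpoint, candidates []Endpoint) (Endpoint, bool) {
	if current.Alive {
		return current, true
	}
	for _, e := range candidates {
		if e.Alive {
			return e, true
		}
	}
	return Endpoint{}, false // no endpoint left to heal onto
}

func main() {
	current := Endpoint{Name: "nse-1", Alive: false} // we just killed it
	pool := []Endpoint{
		{Name: "nse-1", Alive: false},
		{Name: "nse-2", Alive: true},
	}
	if next, ok := reselect(current, pool); ok {
		fmt.Println("re-wired client to", next.Name)
	}
}
```

In the demo the interesting part is that this happens without the client pod doing anything — the mesh notices the dead endpoint and performs the reselection on its behalf.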
F: You know, these whole CLI/terminal kinds of demos just don't fly, and I would say, based on some other things that I've done with Kubernetes clusters, that we do have close to the number of APIs that we would need to go in there, pull out, and then render. I guess, number three, we need to start maybe laying this out, and I can do that in terms of some sort of high-level wireframe or mock-up — keeping it simple, and maybe trying to scope our goals so that, hey, this is absolutely attainable; let's get this nailed up, ready to go, rock-solid. Stretches are fine, but I do think we are limited in time, given that there's going to be the Thanksgiving holiday at the end of next month. So I think the way to pursue here would be number one —
F: And then we'll have a place where this is running, and then we can start to build this thing. No specific promises right now on the visualization; however, as I mentioned, we are, on another project, rendering the same type of information. So if we have the APIs, we can go in there, pull it out, render it, and show it, and I think it'll be a pretty good demo.
D: Yeah, so that's great — and anybody else who has stuff they want to contribute to this, I'm happy to do work on it. Then, in terms of the APIs needed to feed the topology graphical stuff: it would probably be useful — and we can take this off into the IRC channel, or issues, or whatever — to sort out what things we need in the API, particularly since one of the things that I'm starting to look at in the inter-NSM API is sort of —
A: Okay, yeah — just a quick note on the UI for the demo: I don't think it has to be pretty or anything like that, just as long as it shows the information to start off with, and if we have time we can always go back and make it look nicer. Even just showing the information alone, outside of the CLI, is, I think, extremely valuable. Yep.
D: — used for the network service managers to talk to each other, and so I started taking a swag at that. I'd originally been calling this the NSM-to-NSM API, which, as it turns out, is a terrible name, and so Sergey had suggested that I call it the inter-NSM API. Let me go ahead and find the right profile to talk through. So here's the one.
D: Okay — interesting, dumped right out. So I sort of laid this out as a profile where you have basically a remote connection request, where you request a remote connection; a delete remote connection; and an update remote connection. Keep in mind, these are the connections that are leaving a particular node — we're not locally cross-connecting. These are cases where we've discovered the network service we have to cross-connect to is on a different node.
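The three calls in the profile could be sketched in Go roughly as below. These type and method names are illustrative only — they are not the actual NSM proto definitions, and a real implementation would be a gRPC service between the two managers:

```go
package main

import "fmt"

// RemoteMechanism describes how two nodes can carry a cross-connect,
// e.g. VXLAN with particular parameters.
type RemoteMechanism struct {
	Type   string            // e.g. "VXLAN"
	Params map[string]string // e.g. source IP, VNI
}

// RemoteConnection is the negotiated result of a request.
type RemoteConnection struct {
	ID        string
	Mechanism RemoteMechanism
}

// InterNSM sketches the profile discussed: request, delete, and update
// a remote connection between two network service managers.
type InterNSM interface {
	RequestRemoteConnection(service string, supported []RemoteMechanism) (RemoteConnection, error)
	DeleteRemoteConnection(id string) error
	UpdateRemoteConnection(conn RemoteConnection) error
}

// fakeNSM is a trivial in-memory peer used to exercise the shape of the
// profile.
type fakeNSM struct{ conns map[string]RemoteConnection }

func (f *fakeNSM) RequestRemoteConnection(service string, supported []RemoteMechanism) (RemoteConnection, error) {
	if len(supported) == 0 {
		return RemoteConnection{}, fmt.Errorf("no mechanisms offered for %s", service)
	}
	// Pick the first offered mechanism; real negotiation can be richer.
	c := RemoteConnection{ID: service + "-1", Mechanism: supported[0]}
	f.conns[c.ID] = c
	return c, nil
}

func (f *fakeNSM) DeleteRemoteConnection(id string) error {
	delete(f.conns, id)
	return nil
}

func (f *fakeNSM) UpdateRemoteConnection(c RemoteConnection) error {
	f.conns[c.ID] = c
	return nil
}

func main() {
	var peer InterNSM = &fakeNSM{conns: map[string]RemoteConnection{}}
	c, err := peer.RequestRemoteConnection("secure-intranet", []RemoteMechanism{{Type: "VXLAN"}})
	if err == nil {
		fmt.Println("established", c.ID, "via", c.Mechanism.Type)
	}
}
```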
D: We need to go ask each network service manager to help us negotiate the appropriate remote mechanism for this cross-connect. For this we go through — and I don't seem to be seeing the whole file; let me try to view here and see if that helps. This is better. So the other ones that I have here are update cross connect, and then monitor remote connections. This is the one I was mentioning, Chris, where I'm starting to look at: how do you stream updates about remote connections?
D: For this we've got a remote connection ID. Effectively, it encapsulates the ID of the source network service manager and the destination network service manager — because they're the ones who are managing the connections — and then the source and destination connection IDs. The idea here being: if we have a certain amount of autonomy here, then if your peer network service manager goes away and comes back, it can discover relevant information about the state.
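The four identifiers described could be modeled as below. Field names here are illustrative, not the actual NSM schema:

```go
package main

import "fmt"

// RemoteConnectionID captures the four identifiers discussed: which two
// network service managers own the connection, plus each side's own
// connection ID, so a restarted manager can recognise which peer to
// re-query for state about a connection.
type RemoteConnectionID struct {
	SrcNSM    string // source network service manager
	DstNSM    string // destination network service manager
	SrcConnID string // source-side connection ID
	DstConnID string // destination-side connection ID
}

// String renders the ID in a readable form for logs and debugging.
func (id RemoteConnectionID) String() string {
	return fmt.Sprintf("%s/%s -> %s/%s", id.SrcNSM, id.SrcConnID, id.DstNSM, id.DstConnID)
}

func main() {
	id := RemoteConnectionID{
		SrcNSM: "nsm-node-a", DstNSM: "nsm-node-b",
		SrcConnID: "c1", DstConnID: "c7",
	}
	fmt.Println(id)
}
```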
D: So I think of the monitor remote connections call here, in the inter-NSM API, as an east-west call. It's one network service manager talking to another, saying: would you please let me know information about these connections, because I want to monitor the connections I have with you — your opinion of what's going on with the connections I have with you. So, for example, you might find out: oh wait, the other guy has sent me an update that says he thinks the connection's been deleted or closed.
D
I,
don't
think
the
connections
been
closed.
Clearly
something
is
wrong
or
you
know,
I
have
you
know
metrics
that
say:
I
sent
a
gigabyte
on
this
connection
to
the
other
end
and
the
receive
metrics
coming
back
from
the
other
end
are
telling
me
that
he
thinks
these
receive
zero
gigabytes
or
zero
megabytes
of
data
at
all.
Therefore,
we
probably
have
a
problem,
but
when
I
talk
about
north-south,
what
I'm?
D: — basically thinking is: you also have an interesting place — and this is where the visualization comes in — where you might want to monitor connections north-south in the system. They probably won't be the same callback — they definitely won't be the same call — but they might be in the same style. Does that answer your north-south versus east-west question? Yeah.
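The metrics disagreement described a moment ago — "I sent a gigabyte, the peer reports receiving nothing" — is a simple check once both sides' views are exchanged over the monitor stream. A minimal sketch, with an arbitrary illustrative threshold:

```go
package main

import "fmt"

// ConnMetrics holds the two views of traffic on one connection: what we
// sent, and what the peer reports receiving via the monitor stream.
type ConnMetrics struct {
	TxBytes     uint64 // bytes we transmitted
	PeerRxBytes uint64 // bytes the peer claims to have received
}

// suspectBroken flags the disagreement: we have sent a substantial
// amount of data, but the peer reports receiving none of it.
func suspectBroken(m ConnMetrics) bool {
	const threshold = 1 << 30 // ~1 GiB; illustrative, not a real default
	return m.TxBytes >= threshold && m.PeerRxBytes == 0
}

func main() {
	m := ConnMetrics{TxBytes: 1 << 30, PeerRxBytes: 0}
	fmt.Println("suspect broken:", suspectBroken(m))
}
```

In practice such a flag would feed the healing path discussed earlier: tear down the suspect connection and re-wire to a new endpoint.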
D: What we have here is actually the VXLAN example, which has a source and dest IP and the VNI, and the connection context. Now, with the remote mechanism we get this notion of fully versus partially specified: when you request a connection, you may have partially specified the remote mechanism — for example, saying one of the things I can do is VXLAN, here is my source IP, and here's the list of the VNIs.
D: So: I am asking for connections, you get all the sort of partial specification of the connection, and then — by the way — here are the supported remote mechanisms that you can pick from on your end. Then the reply comes back with success or error, or the actual, fully specified remote connection; and then remote connection updates, which is what comes back from the monitor remote connections call — it just feeds you back remote connections and possibly some set of metrics about them. Does all that, in the rough sense, make sense to folks?
G: Well, I have a concern about basically exchanging the updates between the NSMs, especially if this information can be used for some logical decisions. I mean, we're looking into implementing a router, or a routing protocol like BGP, because we have to track adjacency; we have to track the state changes. It's a huge amount of work on the NSM, and I don't see a huge benefit in replicating a BGP implementation in the NSM.
D: It's not a BGP implementation or anything like that, because all we're really talking about is having good information about the state of this one connection. So, for a particular connection, you want information about its state, just to make sure that you agree with your peer about its state. It's not talking about syncing sort of a full database — "here's your BGP RIB-out for the world, so people see how you see it." It's much simpler and more straightforward than that. Okay.
D: That's a fair — there's somewhat of a fair analogy — but, yeah, we could talk about sort of what makes sense in terms of monitoring. The thing I've been thinking in terms of was: it would be really, really handy to know if the connection was not actually functioning. If we have disagreements about whether the connection is functioning, then we probably have a situation where we need to tear down the connection and connect to a new network service endpoint for the network service.
G: Exactly — exactly. First of all, even if we lose one of the NSMs, the gRPC connection will be gone from both ends, even from the surviving end. So basically that connection is gone, and ideally all the data related to that connection needs to be invalidated. And then, when the new NSM comes back up, it will need to re-establish everything on the fresh side, instead of pulling the old, possibly stale, information from its neighbour.
D: That is actually state information of the connection. The data plane has its notion of things; the peer NSM has its control-plane notion of things; and you have to be able to get back to a world where you can map what's happening in the data plane to the actual semantic qualities that are living in the NSM, right?
D: You have to be able to get back to a place where you know, okay, that cross connect is actually talking to that remote NSM — which means I know who to go ask to delete it when it comes time, and I know what ID to use to go delete it when it comes time. Even something as simple as "I want to go delete a connection on the far end after I've been restarted" requires a certain amount of state information, and our options are basically: either we stay stale —
D: We either store the information local to the network service manager — but the interval in time between when the network service manager goes away and when it comes back means that we have no notion of whether the state we stored locally actually currently reflects the world — or we can obtain the opinions of our peers as to the state of the world, and then we at least know what the rest of the world believes, and we can reconcile that with things like what we actually have on our damn data plane, right? Yeah.
G: Well, that's the trick. I mean, it's the application's responsibility to detect: hey, I lost the connection, I'm going to kill myself and restart, trying to re-establish it. That's the whole concept of Kubernetes: a pod crashes, tries to re-establish the state, not relying on some other party. It's like a loose consistency.
D: Because then you can have things go wrong in the system without disturbing the application — because applications aren't used to thinking at the network level. If I've got an application pod that's running, that has requested secure internet connectivity, it absolutely is not used to the notion of "I've lost my connection to my secure internet connectivity." It's used to the notion of "I lost my TCP connection," but that's a whole different layer in the system.
A: Okay, so next up: CNF doc updates. I haven't done much modification yet this particular week, because it's only been since Friday — our last meeting — and I was trying to work on it tomorrow, so I don't have any major updates on that. There are a couple of new people on the call, so I'll just give a really quick message as to what it's for. What this is trying to do is provide guidance for CNF vendors.
A: A CNF is a VNF that runs in a container in Kubernetes; it makes use of those constructs to scale. So what this is doing is taking things that we've learned in building applications in the enterprise environment — and how those scale horizontally — and taking information that we know about how networking VNFs work, and saying: if we were to combine both of these worlds together, what should such a system look like?
G: Well, there was no change since last Friday. Basically the local portion is done — it got merged — and I'm waiting for that inter-NSM API to be finalized, to be able to add — I mean, whoever's planning to add the VXLAN feature — to be able to talk to a remote NSM. So that's pretty much it.
I: Okay, I can give a quick status on what we're up to, at least. Since last week, I've run some benchmarks using some of our setups on the Packet machines, and compared to what we're seeing on the CI test bench, the results are about thirty to forty percent lower. So we still have some debugging to do here to verify that we get the proper performance on the Packet machines. Otherwise, I think most of our test cases, at least on Packet, are running.
I: We can do two different types of chains for CNFs, and we can do multiple network functions and chains for VNFs as well. So most of the basic functionality is available through scripts, and right now I think we've just started on the Heat templating, trying to automate things a bit better than just running a ton of bash scripts to set everything up.
E: That's good. And so we also have planned out a lot of the OpenStack part. For what we're doing for KubeCon, we'll officially be doing all of the comparisons on Kubernetes, and then try to get as close as possible on OpenStack. Part of the last few weeks has been working on getting that going at the same time, and the goal is that this is hopefully all reproducible as well, so that in a cloud you can bring up an OpenStack cluster and then run the similar tests.
E: — over memif to VPP, and then connecting to the next CNF — what we're referring to as a snake connection — and we're doing that one specifically because it looks closer to what we have to do in OpenStack, since we can't directly connect them. In what we're doing right now, it's pre-NSM: mounting a volume and then making the memif interface available to the containers.
D: You know, before you guys need it — November 5th — that actually does that. The hope is we can probably start integration in the week before, so that you guys could use network service mesh to wire up those containers, and then we get a nice, clean, sort of cloud-native approach to the problem. Cool, thank you.
D: Yeah, so we're trying to do documentation as we go, so different parts of it are in different states of repair. But one thing I do want to walk through really quickly, because it gets to be interesting and important: I think the section on network service mesh components in the abstract is probably in pretty good shape, and this is mostly just calling out that, even though we're extremely focused on Kubernetes, network service mesh can actually operate beyond simply the Kubernetes environment, and this gets to be crucially important.
D: As we look at things like: how do we consume network services that are coming from physical boxes in the network? How do we consume network services that are external to your cluster — within some other kind of system, or even some other Kubernetes cluster? And so we sort of talk about these components in the abstract. You've got your network service client, which, of course, when we're talking about Kubernetes, is just a pod.
G: Quick question — sorry to interrupt. On this picture, the network service endpoint has a single ingress, but in real life that's never the case. In most cases an NSE would need at least two connections: one, the ingress from the client, and then probably an egress to the service that it's providing. Have you had a chance to think about it? Yeah.
D: So, actually, effectively, once the service mesh has wired up the client to the network service endpoint, what happens behind that network service endpoint is actually not network service mesh's business. I'll give you a really simple example, because it's familiar — it involves Kubernetes. Say we have two Kubernetes clusters, and I'm in cluster one. I may have a network service client in cluster one — that's a pod — and it has an L2/L3 connection to some network service endpoint that, from its point of view, is simply external to cluster one.
D: It could happen that that is actually a pod in cluster two, and there could be all kinds of things happening behind that in cluster two, but none of that is actually the network service mesh in cluster one's problem, or the network service client's problem. You could also imagine the case where you have a physical box that this is terminating on, with all kinds of weird things happening behind that box — again, none of that is actually our problem.
D: Or a physical router — or I've had people talk about it being a Neutron network, for legacy stuff with OpenStack. It can be any number of things, but as long as we can set up an L2/L3 cross-connect between the network service client and a network service endpoint, we're good to go in the abstract, yeah.
D: Yes — and then I sort of talk through some of these in a little more detail with text; text is great for reading, a little less good for presenting. So we sort of talk about those in the abstract, and then we say: okay, well, the system, when you look at it more broadly — you have some kind of network service registry, where you keep a list of network services and the network service endpoints. This is, by the way, in the abstract.
D: In the Kubernetes case, this would just be custom resources in the Kubernetes API server. You've got network service managers, and they have some domain in terms of the network service clients, network service endpoints, and cross-connecting data planes that they manage; and in a single registry you may have multiple network service managers. Now, in the Kubernetes case, this translates to things like the node, with the network service manager being in charge of managing all the connections for the network service clients and endpoints on its node.
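In the Kubernetes case just described, registry entries live as custom resources in the API server. A minimal in-memory sketch of what such a registry tracks — these field names are illustrative, not the actual NSM custom resource definitions:

```go
package main

import "fmt"

// NetworkService is a sketch of a registry entry: a named network
// service together with the endpoints currently providing it.
type NetworkService struct {
	Name      string   // e.g. "secure-intranet"
	Endpoints []string // names of network service endpoints
}

// Registry maps service names to entries, standing in for the custom
// resources stored in the Kubernetes API server.
type Registry map[string]NetworkService

// Register adds an endpoint under a service, creating the entry if needed.
func (r Registry) Register(service, endpoint string) {
	ns := r[service]
	ns.Name = service
	ns.Endpoints = append(ns.Endpoints, endpoint)
	r[service] = ns
}

func main() {
	r := Registry{}
	r.Register("secure-intranet", "nse-1")
	r.Register("secure-intranet", "nse-2")
	fmt.Println(r["secure-intranet"].Endpoints)
}
```

The replica selection mentioned earlier in the meeting — picking one endpoint out of three to five replicas — would read from exactly this kind of list.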
D: Defining this in the abstract is actually kind of nice, because what it gets you to is the realization, number one, of how the abstract maps to the actual situation in Kubernetes — where your network service registry is just the custom resources in the API server, and the domain a network service manager looks after is the stuff on a node: the network service clients and endpoints, or pods, etc. But then it actually becomes really important.
D: When you look at how the APIs work: in the abstract, the APIs that matter are what we're now calling the inter-NSM API — how service managers talk to each other. And also — this mock-up wasn't converted; that's why the names aren't updated. Hang on just a second; let me go and actually show the updated stuff. This is going to be a little easier to read.
D: But, you know — so then we talk about the inter-NSM API, which is the only thing that we sort of have to worry about in the abstract, and that gets laid out. One of the reasons it's important to recognize this is that Kubernetes-isms shouldn't leak into the inter-NSM API, because you may have network service managers that are external to Kubernetes for various reasons — we sort of talked through this already. So that's the critical point — realizing:
D: which APIs it makes sense to have Kubernetes-isms in, and which APIs it does not. So for the inter-NSM API, you shouldn't have Kubernetes-specific things; but how a Kubernetes-based network service manager talks to its data plane, or its network service endpoints, or its network service clients — that can be as Kubernetes-specific as it needs to be.
A: To be clear, when you start looking at what things care about: the pods — the clients and the endpoints — care about the payload, and they care about their local mechanism, or the local mechanism of the thing they're talking to; they don't generally care about the remote mechanism itself. So look at the separation of concerns: simultaneously, when you're looking at the remote connection, the same thing applies — the managers don't really care about what the local mechanisms are.
A: So when we look at the inter-NSM API, its primary job is forwarding the information back and forth — in terms of the service request and service response — and negotiating the remote mechanism and connection parameters. You can think of it like this: while the endpoint and the client care primarily about "give me my service" — they don't care about how it gets there, just that they're going to receive it — the goal is to build a reasonable separation of concerns.
A: So I definitely encourage people to spend some time after the meeting, between now and next week — the data plane API stuff is relatively new, so please spend some time with it: read up on it, make sure it makes sense, try to find holes in it, and come back to us with any deficiencies you find. Just to stress the order of importance —
A: If we get these APIs right, it'll help simplify our overall path, so definitely spend the time looking at the APIs and make sure that they're reasonable, and also that they're understandable: if this is the first time you're looking at the data plane API, can you understand what they do? If not, then that's useful information, and we need to work out a way to simplify the message.