From YouTube: Network Service Mesh WG Meeting - 2018-09-28
B: Yeah, we also have good news on the ONS EU front — not for this project, I guess... okay, sorry, no.
A
All
the
way
for
folks
who,
just
let
me
stick
in
the,
let
me
try
and
stick
in
the
chat
if
I
can
find
the
chat.
I
do
not
seem
to
have
access
to
a
chat
and
zoom
anymore,
the
link
to
the
meeting
minutes
and
let
me
actually
bring
up
and
share
the
meeting.
Then
it's
really
quickly
and
we'll
just
walk
through
them
live
on.
A: Good — putting that in to add to the KubeCon Seattle discussion. So, anything else folks feel we need to add to the agenda? Alrighty — oops.
A: Feel free not only to add items to the agenda live in the doc, but also to help in the process of taking notes — it is really useful if you do that. Cool, so digging down to events: I know there was a bunch of Network Service Mesh stuff going on this week at ONS EU, and I see you, Macha.
B: So Frederick and Karl presented — I've been to one of the talks that they were doing, the dialogue with the audience and some of the slides — and I think it went very well. There was a lot of interaction with the room and a huge amount of interest; I personally enjoyed it and was glued to the presenters and the content, so I liked that. I also know that Frederick and Karl ran some side workshops, but I don't have anything on those because I didn't attend.
A: Cool, awesome — I'm glad that went well, and hopefully we'll hear a little bit more when Karl and Frederick make it back. And then, in terms of events coming up: the next big one is KubeCon Seattle, which is December 10th through the 13th — I think it's technically the 11th through the 13th, but there are some co-located events on the 10th that are probably cool to go to as well. So: KubeCon Seattle.
A: We would also love it if you could promote those talks to other people who might be interested in Network Service Mesh — I think that would be good. And then I think we've got Chris Metz sort of pointing out that we need to come up with an NSM (Network Service Mesh) demo, and suggesting we do things around podcasts and blogs leading up to KubeCon as well, which I think is a good set of suggestions.
B: Shouldn't the question be: what do we expect to be working? I don't think it makes sense to hack some throwaway code just for the demo. Talking to Frederick and Karl — no, they weren't making that up; they were updating us that steady progress is being made. But I think the question should really be what is expected to be working, and based on that, work out the demo scenario per Chris's request. That'll be at least my suggestion.
A
In
terms
of
priorities
Oh
because
my
experience
has
been
when
so
when
you
set
definitive
goals,
saying
we
will
do
a
yo
X
by
Y
date,
you
tend
to
not
do
more
than
X,
but
when
you
set
a
list
a
priority
is
these:
are
the
priority
list
of
things
we're
working
on
and
we
need
to
at
least
get
X
working
by
Y
date.
Then
then
you're
much
more
likely
to
overshoot
your
goals,
so
I
guess.
D: Hey, yeah — just piggybacking on those remarks. I guess this demo would be some sort of portfolio of material that we'd want to expose to the community. So, Macha, to your point: even if it is a hack, there could be at least some things we show existing in the cluster — like the NSM agent, whatever sort of calls might be established, or calls set up to be able to program the cross connects. I think that contributes not only to the KubeCon presentations and the website, but also just gives the audience something else to look at, and at least picture in their minds. Walking away, we want them to think: hey, networking is happening again, it's happening in the cloud, and this is a really cool solution.
E: Before we leave this topic — there's also an FD.io day at KubeCon. I don't know if it will be accepted or not, but I submitted something to the FD.io day about constructing a simple example: a layer-2 connection using only Network Service Mesh — building, of course, on some of the work that's going on right now.
B: All right — so Miguel, we exchanged some emails on where things are. I understand that you guys are using Packet.net or something similar — they're the ones renting out their physical servers; I keep forgetting their domain name. Tyler, Watson and Lucy have briefed me on where you are — we've been actually chatting every day — and I understand you've got the VMs and the containers working, but they didn't know.
C: We have been running some basic data-plane tests so far, using NFVbench, which connects to TRex. Pretty much, so far I've been focusing on 64-byte packets, just to try and make sure we don't bottleneck on the network interfaces, since I guess we only have ten-gig connections available.
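For context on why 64-byte packets are the stress case: at that size a 10 GbE link is limited by packets per second, not bytes. A quick sketch of the arithmetic, assuming standard Ethernet per-frame wire overhead (8-byte preamble plus 12-byte inter-frame gap):

```go
package main

import "fmt"

// maxPPS returns the theoretical packet rate of an Ethernet link for a
// given frame size: each frame also costs 8 bytes of preamble and a
// 12-byte inter-frame gap on the wire.
func maxPPS(linkBitsPerSec float64, frameBytes int) float64 {
	const overheadBytes = 8 + 12
	return linkBitsPerSec / (float64(frameBytes+overheadBytes) * 8)
}

func main() {
	// 64-byte frames on 10 GbE: ~14.88 Mpps, which is why small packets
	// stress packet processing long before the link's byte capacity does.
	fmt.Printf("%.2f Mpps\n", maxPPS(10e9, 64)/1e6) // prints 14.88 Mpps
}
```

The single-chain numbers quoted below (8-11 Mpps) can be read against this ~14.88 Mpps line-rate ceiling.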
C: Yes, you can hear me now — I think my connection dropped for a while. All right, yeah. So we have the numbers, and I guess for just a single chain we were reaching 8.34 million packets per second for the VM, and for the — no, let me get the actual numbers, because I guess we scaled it down a bit, so it might be off.
B: "Single chain" here — you're talking VPP, correct?
C: Yes, yes.
B: And this has a vSwitch — what is the vSwitch, VPP or OVS-DPDK?
C: The vSwitch is VPP, and then the VM is running testpmd.
B: Also on DPDK, okay. And it's vhost-user with virtio for the VM?
C: Yes, yes.
B: Okay, okay — I'm just updating the notes, sorry.
C: The only difference is we're using memif interfaces.
B: Sure. And so what are the numbers you're observing? You said 8.3 million pps?
C: Let me get the actual most recent ones I have available — just a second, I should have them open.
B: And what is the packet-loss ratio that you are measuring in that? Are you using MRR measurements?
C: MRR measurements.
B: MRR, as defined by CSIT?
C: I'm not sure if it's that — I imagine there might be a... what do you mean by MRR? Is it max—
B: Just — yeah, because as you probably know, MRR is very forgiving for the computer: we're running the computer basically flat out, and we don't care about PLR, the packet loss rate. So yeah, it is a good indicative measure; however, for measuring memif interface efficiency we probably would like to have zero packet loss, or some tolerance — so we should measure both. That's what we're doing in FD.io. Oh yeah — people don't really care about MRR; it's more for the developers.
B: Yes, yes. Do you know what the computer you're running it on is — one of the machines listed on Packet.net? Is it the Skylake Gold that they list as available, or something else?
C: Yes, it's a Gold 5120.
B: And do you have hyper-threading on — do you run two sibling threads?
C: Okay, yes.
B: So actually, that is a good number, because we actually did a report — we reported a comparable result in Copenhagen at KubeCon EU, and we provided an update with the slides on Wednesday at ONS. As you can see — hopefully the slides got posted; we sent them a few hours ago.
C: Let me just find them. So, looking at millions of packets per second on the ten-gig connection: I guess it starts out with two CNFs at 11.5, then 11.2, then 9.94 and 9.99, then 9.84. So there is a bit of a drop and a bit of variation — but it's 9.94, then what was the next one — 9.98... no, 9.99, actually ten if you round it; then 9.90 — no, the next one was 9.84, and the next one...
F: Let's see — one of the things that I pitched to the ODL team, who are focusing on the COE project, is that the team ends up building out not only the COE CNI itself, but that they also focus on two other things. Number one is providing a library written in Go — sort of like the VPP agent, how you can control VPP.
F
Was
it
it
right
something
similar
so
that
you
could
do
that
with
the
o
yell
side,
but
the
more
important
one
is
that
they
also
create
some
either
a
member
service
endpoint
that
would
use
this
library
or
decrease
some
form
of
e
NS
m
that
that's.
They
could
then
used
to
lift
various
features
from
rodeo.
F: I didn't take an estimate of it — maybe someone else who was there can, because I was more focused on getting the talk done than I was on counting people. I've also been having conversations with some of the people — actually with the person from Intel, I think it was Ivan Coughlan — and we're going to have discussions on how we can better position things. What he wants is guidance on when one should use Network Service Mesh.
F: You know — when you should use Multus, or that kind of stuff. So I'll help write that guidance up. Because one of the things that's happening, that I want to be really careful with, is that there are a lot of misconceptions as to where Network Service Mesh is, and so some of the people in the Multus community and so on are a little bit apprehensive of our project. So rather than let things continue on and just let it evolve—
F: —I ended up having a talk with one of the Swiss telecoms — the telecom itself — and they're interested in some of the Network Service Mesh stuff as well. So I'm going to see if I can get them to start giving us some of their use cases where they think it might be useful, and to help them further understand it. I connected with one of them through other avenues, so worst-case scenario, if I can't get them onto this meeting or onto the mailing list—
F: —how do they want to proceed with Network Service Mesh as well. And one of the things that they're asking us to do — both CNCF, and specifically around how they should do networking; it's not part of NSM directly, but I think it's something we can help a lot with — is to help provide guidance on what a CNF is in the first place. We've been looking at this several times in the past several weeks and so on, so there's a continuation of that. But effectively they want—
F: They want help in defining what it is, and help in trying to work with the telcos and the VNF providers who want to move over to CNFs. And if we're the ones who provide that guidance — like I said, in an independent venue — then we can make sure that they don't fall into the same pitfalls that we saw application developers fall into when they were starting to containerize their workloads, when Docker first came out.
F: But yeah, the number one feedback that I kept hearing over and over again is: we have to get some form of a proof of concept out — get something running and showing — because right now people cannot pick up our work in order to show it. They want to build proofs of concept for other things as well, and they want to pull us in, but they can't pull us in because we're not ready. So we have to get ready.
A: Can everyone see the screen? Okay? Yes — so within Kubernetes there's a data-plane API between the network service manager and whatever your data plane is, and this is basically how the network service manager asks for cross connects from whatever data planes are present on the system. So we've been trying to define this sort of NSM-to-dataplane API — in other words, what does the NSM say to the NSM data plane?
A: And this has got things like create cross connect, update cross connect, delete cross connect, and list/watch cross connects — which is a pattern that basically says: look, give me the status of the cross connects you've got, and then keep me updated. And then there are list/watch mechanisms — we'll get to mechanisms in just a second, but mechanisms are sort of the things you can support, as in: "I am a data plane; I can do kernel interfaces and VXLAN, and those are the only mechanisms I support."
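The API surface being described can be sketched roughly in Go. All names below are illustrative stand-ins — the actual NSM API is defined as protobuf/gRPC messages in the project repo — and the in-memory fake exists only to show the shape of the calls:

```go
package main

import (
	"errors"
	"fmt"
)

// CrossConnect describes a requested wiring between a source and a
// destination connection. Field names are illustrative only.
type CrossConnect struct {
	ID       string
	Src, Dst string
}

// Dataplane is the NSM-to-dataplane surface discussed in the meeting:
// CRUD on cross connects, plus a way to learn which mechanisms
// (e.g. KERNEL_INTERFACE, VXLAN) this data plane supports.
type Dataplane interface {
	CreateCrossConnect(xc CrossConnect) error
	DeleteCrossConnect(id string) error
	Mechanisms() []string
}

// fakeDataplane is an in-memory stand-in used only to show the flow.
type fakeDataplane struct{ xcs map[string]CrossConnect }

func newFakeDataplane() *fakeDataplane {
	return &fakeDataplane{xcs: map[string]CrossConnect{}}
}

func (d *fakeDataplane) CreateCrossConnect(xc CrossConnect) error {
	d.xcs[xc.ID] = xc
	return nil
}

func (d *fakeDataplane) DeleteCrossConnect(id string) error {
	if _, ok := d.xcs[id]; !ok {
		return errors.New("no such cross connect: " + id)
	}
	delete(d.xcs, id)
	return nil
}

// This fake only supports kernel interfaces and VXLAN, mirroring the
// "these are the only mechanisms I support" example from the discussion.
func (d *fakeDataplane) Mechanisms() []string {
	return []string{"KERNEL_INTERFACE", "VXLAN"}
}

func main() {
	var dp Dataplane = newFakeDataplane()
	dp.CreateCrossConnect(CrossConnect{ID: "xc-1", Src: "nsc-pod", Dst: "nse-pod"})
	fmt.Println(dp.Mechanisms())
}
```

The watch side of the API (list/watch cross connects and mechanisms) would add streaming calls on top of this, which gRPC models as server-side streams.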
A: So if you need somebody to give you cross connects for memif and SRv6, "I can't help you," right? And so list/watch mechanisms allows you to send information from the data plane up to the NSM about the mechanisms. And then the other thing we define is a simple registration for the data plane: this is how, within Network Service Mesh, the data plane talks to the network service manager — it just has a registration that sort of says: hey, I'm a data plane.
A
This
is
how
you
phone
phone
me
back
and
then
we've
been
working
through
sort
of
sorting
out
these
mechanisms
as
well.
Yeah
we're
a
mechanism
is
one
of
either
a
remote
mechanism
or
a
local
mechanism,
and
we
look
at
local
mechanisms.
You
get
things
like
a
type,
and
currently
we've
got
four
types
that
we've
identified
so
far:
curl
interfaces
in
MiFID
host
user
and
then
we've
got
a
map.
That's
a
bunch
of
labels
and
we're
currently
thinking
is
these.
Labels
could
express
you
preferences
or
constraints
or
communicate
the
final
values
of
a
parameter.
A
So
for
a
kernel
interface,
for
example,
you
might
have
a
label
name
equals
e2,
and
so,
if
I
am
a
pod
coming
up,
you
know
wanting
to
be
connected
to
network
service
I.
Might
you
know
say:
I
look,
you
know
among
my
preferred
list
of
local
mechanisms,
you
know,
I
would
prefer
an
interface
and
I
would
prefer
that
it
be
made
these
two
and
then,
when
you
give
I
need
a
plane
would
be.
A
The
mechanism
was
actually,
you
know,
serve
it
out,
and
then
we've
also
got
remote
mechanisms
to
find
they
sort
of
first
get
to
find
what
we're
looking
at,
how
NSM
is
communicate
with
each
other
and
the
remote
mechanisms
are
sort
of
very
similar
they've
got
to
type
and
a
bunch
of
labels.
The
kinds
of
things
you
communicate
in
those
labels
would
be
somewhat
different,
so
we
sort
of
use
an
example
here
of
the
ex
slam
right.
A
So
you
would
imagine
that
you
know
you
would
have
source
IP
source,
port,
dusty
dust,
port,
envy
and
I,
and
so
when
one
in
a
sense,
it's
a
remote
connection
request
to
another
NS
m.
It
would
specify
source
IP
source
port
and
a
list
of
acceptable
VN
eyes,
probably
expressed
as
ranges
and
then,
when
the
NS
m
to
comes
back,
it
would
still
send
back
a
source
port
best
part.
Each
start
support
an
IP,
but
it
also
sends
the
dust
IP
import
and
the
particular
V
and
I
that
it
picked
as
labels
make
sense.
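A minimal sketch of the type-plus-labels idea in Go — the label keys (`name`, `src_ip`, `vni`, ...) are illustrative, not necessarily the exact keys the project uses:

```go
package main

import "fmt"

// Mechanism pairs a type with a free-form label map, as discussed in the
// meeting: labels can carry preferences ("name=eth2") on the way in and
// the finally selected values (e.g. the chosen VNI) on the way back.
type Mechanism struct {
	Type   string
	Labels map[string]string
}

func main() {
	// Local mechanism preference from a client pod: a kernel interface,
	// preferably named eth2.
	local := Mechanism{
		Type:   "KERNEL_INTERFACE",
		Labels: map[string]string{"name": "eth2"},
	}

	// Remote request to a peer NSM over VXLAN: source address plus a
	// range of acceptable VNIs; the response would pin down dst_ip,
	// dst_port and the single VNI the peer picked.
	remote := Mechanism{
		Type: "VXLAN",
		Labels: map[string]string{
			"src_ip":   "10.0.0.1",
			"src_port": "4789",
			"vni":      "100-199", // acceptable range offered by requester
		},
	}
	fmt.Println(local.Type, remote.Labels["vni"])
}
```

Keeping the label map opaque is what lets the NSM stay mechanism-agnostic, which is exactly the point the next speaker raises.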
A: And so the big part of this is that there's a lot of conversation happening on IRC back and forth, because Sergey is trying to produce code — and God bless him, he's chasing a moving architecture, which is incredibly brave, but it's also productive, because he keeps poking things back and saying: hey, why is this so complicated? And so things get simpler.
H
Well,
basically,
just
just
one
I
mean
if
it's
all
possible,
I
would
really
really
prefer
to
keep
Anna
same
cut
as
away
from
being
mechanism
knowledgeable,
so
there
shouldn't
be
any
cool
in
the
NSM
for
any
type
of
mechanism.
So
it's
just
like
a
bridge
I
would
consider
it
as
a
bridge
doesn't
matter
if
it's
a
Ferrari
runs
over
the
bridge
or
somebody
on
the
donkey
crossing.
The
river
I
mean
I,
don't
care.
H: That's one level of the selection; the second is to look at the actual details for that specific remote mechanism selection and do some analysis. So I mean, I think it would make sense to do the selection on the first level — on the type of the mechanism — but leave the more detailed analysis to the data plane, which actually implements it and is in a way better position to parse and analyze those details than the NSM code is. Makes sense?
A: It doesn't actually come from a central idea, but I think what you're really getting at — and what we should probably strive to move forward with — is some sequence diagrams, so people can see these things in context. It's good to have the APIs defined, but I think getting a sequence diagram of how the messages flow, in context, with the complete filling-out of some of these fields, would be massively helpful in making a lot of this clearer.
I
A
Full
and
and
and
all
right
cool
anything
else
before
we
move
on
to
other
items
in
the
agenda,
because
we're
time
keeps
on
ticking
I
would
strongly
encourage
people
to
get
involved
in.
In
some
of
these
things,
like
I,
said,
there's
a
lot
of
activity
on
the
IRC
Channel.
We've
had
a
lot
of
really
useful
feedback
from
a
bunch
of
folks,
though
the
PR
is
out
there
for
comment.
The
cars
are
being
run
pretty
hot,
meaning
that,
as
they
progress,
they're
getting
updated.
A
Precisely
so
there's
a
nice
place
for
people
to
go,
read
through
and
add
comments.
So
you
know
this
is
an
exciting
time
in
the
project.
We
would
love
to
have
more
people
involved
in
it,
cool
awesome,
so
action
item
cracking
so
Frederick
since
you're.
Actually
here
now,
do
you
mind
you're
much
better
at
this
than
I
am
I'm
happy
to
share
the
the
project
board?
Do
you
want
to
talk
Dilli.
A: Cool — so I think we probably need to go through and clean up some of these. For example, there's one that is definitely something folks would like to work on; but I think things like the "migrate to Go errors" item have been resolved. Is that correct, Frederick?
F: We can then serialize all the logger output into whatever format we want, while ensuring that we keep that structure. So we've set it up so that when you write to — I think it was logrus — you'll have all of that available: all the information and context available in your logging system, so you can filter by it or perform whatever analysis you want.
A: Awesome, cool. So we've got the ongoing upcoming-communities working-group item, and that's been on the back burner a little bit lately. Do you remember, Frederick, what the "work out documentation infrastructure" item is?
A: That's the right place, because one of the things that we've started to do in the arch doc is to get really clear about what is Network Service Mesh in the abstract, and what things are particular to Network Service Mesh in Kubernetes. So, for example, the NSM-to-NSM API — how the network service managers communicate with each other — is not at all particularly Kubernetes, right? So Kubernetes-isms shouldn't creep into that. But the network service client to network service manager API within Kubernetes—
A: —that we can sort of look at in a much more sane way, because we know that's always going to be a Kubernetes thing. If someone is using the network service manager in a different context, they'll have their own way for network service endpoints and network service clients to communicate with it in that context. Cool. So, the NSM proposal supports the CNCF CNF project — I think we're actually moving towards that. Michael, does it sound like we're heading towards things that would be helpful and useful to you? Yeah?
A: Yeah, cool. And then we had a really good point made here last week about separating these docs somewhat by audience, in terms of who we are addressing, and he started taking a swag at what he saw as the audiences: developers of the NSM framework and APIs, developers of plugins, the consumers of those, etc. I think that's a very good point — I think right now we're sort of very much heads-down.