From YouTube: CNCF Networking WG Meeting - 2018-08-14
A: So effectively I'm going to be talking to you today about Network Service Mesh, which is a new thing that's emerging to deal with what some people might think of as the more esoteric networking problems in cloud native. Having presented it a large number of times now, it's finally occurred to me that almost everything goes better as a story, and so this is a narrative introduction to Network Service Mesh that talks through it.
A: It's told from the point of view of a developer who has one of the many classes of use cases that Network Service Mesh solves very, very well. So, our protagonist: meet Sarah. Sarah is writing a Kubernetes app to be deployed in the public cloud, and one of the wrinkles is that Sarah's app needs secure access to her corporate intranet. That's a networking requirement, and this is the way it looks from Sarah's point of view. She has her pod.
A: It has its existing Kubernetes interface that interacts with the rest of the cluster and does all the things that she knows and loves, and she needs some other thing that will connect her, that will give her an L2 or L3 connection back to her corporate intranet, and somewhere along there, security happens. She doesn't really know, or want to know, or care what constitutes security. So this is conceptually what Sarah wants from the system.
A: So, you know, her normal interface, sitting next to an interface to the corporate intranet. But unfortunately this is not what Sarah is going to get if we go with something that looks like Neutron. If you go that route, this sort of becomes almost a developer's definition of hell: how do I find out what subnet my second interface connects to? Who defines that subnet?
A: You know, what size does that subnet happen to be? So, okay, great, now I'm going to define interfaces from that subnet to a VPN gateway pod. And, oh crap, I was told there would be no routing. But in fact, if you want something to go to your corporate intranet, you need some routes that will point to it, but not all routes.
A: You know, replicas have this way of growing; if you pick something that ends up being too small, now you've got to start all over again, and that means you have to change your interfaces again. And then God help you if you pick a subnet that's incompatible with something in the corporate intranet, which you typically will find out as you go along, and God help you again.
A: Or if your corporate network guys re-IP something in the intranet, and now that subnet is incompatible again. And then, effectively, somebody actually decides they want to do something on your end about security. Say, for whatever reason, they would like to have a first-pass firewall in the cloud, and so now it becomes Sarah's problem to wire all of that in as well.
A: So this is where we see Ariadne, the NSM spider, coming in to say: maybe I can help. So who are you? This is Ariadne the NSM spider, sort of the mascot for the Network Service Mesh team; you know, a spider for weaving webs, it seemed like a good idea. And so, what is Network Service Mesh? Because the name contains "service mesh", it's a very natural thing to think: okay...
A
Is
that
like
this
deal,
and
the
truth
is
that
it
is
actually
a
lot
like
is
do
all
the
cool
things
that
is
to
dos
for
TCP
and
HTTP
connections,
network
service
mesh?
Does
that
for
IP
Ethernet
and
whatever
other
l2
or
l3
protocols.
It
is
that
you
actually
care
about.
It
turns
out
that
most
people,
most
of
the
time
care
about
IP
and
Ethernet,
but
there's
a
whole
slew
of
others
that
people
actually
care
about
as
well.
A
So
this
is,
you
know,
so
this
is
Sarah.
It's
letting
her
problem,
Arion
Day
Sarah
again,
just
once
to
connect
her
pod,
which
does
exactly
what
he
wants
us
to
do
with
kubernetes.
She
wants
to
connect
it
securely
to
the
corporate
Internet.
That's
all
she
wants
so
to
do
this
with
network
service
mash
effectively.
What
you
do
is
you
say:
okay,
we
have
a
new
resource,
we're
doing
this
with
Sirius
call
the
network
service.
A
You
use
selectors
on
pods
to
find
the
pods
that
provide
the
network
service.
You
expose
channels
that
are
sort
of
like
listening
on
ports.
The
one
big
difference
is
that
we
talk
almost
exclusively
on
a
network
service
mesh
about
the
payload
seeds
in
my
payload
is
IP
or
my
payload
is
Ethernet.
You
wouldn't
talk
about
any
kind
of
an
underlay
technology,
because
that's
not
something
Sarah
actually
gives
a
about,
and
then
you
know
so.
This
is
a
I
need
to
deploy
this.
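To make the shape of that resource concrete, here is a minimal Go sketch of the concepts just described: a network service with a payload and a pod selector that finds the providing pods. The type and field names are illustrative assumptions, not the project's actual CRD schema.

```go
package main

import "fmt"

// NetworkService is a hypothetical, simplified model of the resource
// described in the talk; field names are assumptions, not the real schema.
type NetworkService struct {
	Name     string
	Payload  string            // e.g. "IP" or "Ethernet", never an underlay technology
	Selector map[string]string // label selector picking pods that provide the service
}

type Pod struct {
	Name   string
	Labels map[string]string
}

// matches reports whether a pod carries every label in the selector.
func matches(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	svc := NetworkService{
		Name:     "secure-intranet-connectivity",
		Payload:  "IP",
		Selector: map[string]string{"app": "vpn-gateway"},
	}
	pods := []Pod{
		{Name: "sarah-app", Labels: map[string]string{"app": "sarah"}},
		{Name: "vpn-gateway-0", Labels: map[string]string{"app": "vpn-gateway"}},
	}
	for _, p := range pods {
		if matches(svc.Selector, p.Labels) {
			fmt.Printf("%s provides %s (payload %s)\n", p.Name, svc.Name, svc.Payload)
		}
	}
}
```

Note how the selector picks out only the VPN gateway pod; Sarah's pod consumes the service, it does not provide it.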
A
For
my
VPN
gateway
deployment
for
the
VPN
gateway
pod
and
the
network
service
resource,
and
that's
very
close
to
being
all
that
is
actually
needed
for
Sarah.
The
truth
is,
as
you'll
see
shortly.
There
isn't
me
for
in
a
NIC
container
and
a
config
map
telling
you
what
to
do
now,
then
the
other
container
is
something
that's
being
actually
written
by
the
network
service
mesh
project.
A: It's currently all out of GitHub. One of the reasons we're actually happy to be talking to you guys, independent of the fact that you're fun and I like talking to you, is that we're in the process of trying to figure out what the right formal home for Network Service Mesh is. So we talked to Kubernetes SIG Networking, and SIG Networking feels strongly...
A
It
should
be
a
kubernetes
working
group
as
we're
bubbling
that
conversation
up
because
they're
in
the
process
of
redefining
kubernetes
working
groups,
it's
also
not
clear
or
whether
that
is
or
is
not
the
right
home
for
it,
and
so
you
know
we're
sort
of
talking
to
various
people
to
begin
advice
as
to
what
they
think
the
right.
Formal
disposition
would
be.
But
right
now
it's
just
a
lot
of
code
being
written
very
quickly
by
folks
from
a
bunch
of
different
companies
in
github.
A: I'd draw two distinctions: there's the distinction between "can use" and "requires". So, for example, there are a lot of people involved in the community who like VPP as a data plane, and so it is certain that that will be one of the data planes supported under Network Service Mesh. However, we also feel very strongly that we have to be data plane agnostic, and we're taking great care to be data plane agnostic. So you could say: okay, can it use VPP? Yes. Does it require VPP? No. Does that make sense? Yeah, yes.
A
All
right
Oh,
not
a
ton,
you're
you're,
sort
of
getting
into
some
interesting
topics.
I
talked
about
with
a
different
deck.
Subsequently,
there's
a
tremendous
amount
of
flexibility
as
you'll
see
as
we
go
along
with
never
service
mesh.
What
are
the
interesting
bits
of
flexibility
that
it
allows
you
to
have?
Is
that
you
can
compose
metal
thinking.
A
You
can
pose
things
together
to
create
a
network
service,
but
you
can
compose
the
data
planes
and
control
planes
semi
independently,
which
opens
up
a
whole
world
of
cool
things
you
could
do,
but
that
is
a
much
deeper
conversation,
one
that
I
would
love
to
have,
but
probably
best
to
get
the
basic
concept
going
for
folks,
first,
okay,
that
makes
sense
yeah,
but
I
mean
if
you're
getting
at
things
like.
Could
I
use
network
service
mesh
in
a
way
that
would
advertise
bgp
routes
for
things?
A: This is a logical concept of the thing that you want. Now, it turns out that, generally speaking, app developers never want a subnet. I've literally stood in front of an audience of 600 app developers and asked how many of you ever want to know that a subnet exists, and couldn't get a single person to raise their hand. So that's not something they actually want. They want things much more like what Sarah wants, which is: I want secure connectivity to my corporate intranet.
B: Hey, just to play devil's advocate here, from dealing with the likes of MasterCard: I don't know if I want my developers to even know anything about the network. I think they should just be granted access to the corp intranet, and to the Internet, and to other segments, like database segments. Do you even think there's a need for the developers...
B: ...to have network control? No, it's fair; I can see the need for both. You need something that gives them more specifics, so they can make their own definitions or subscribe to certain definitions, and there could also be a need where we would be able to kind of hide that under the covers from them, right?
A: Right, well, again, this is due to the independently composable data planes and control planes in Network Service Mesh. One of the things that you can do in Network Service Mesh is, as the network guys, still influence things under the covers. So, one thing that I certainly will not be presenting in this deck: how many folks in the room are familiar with segment routing? Maybe six. And SRv6? Yep, definitely, yeah.
A
So
one
of
the
things
you
can
actually
do
in
network
service
mash,
so
sr
v6
can
be
thought
of
as
just
another
tunneling
option
between
Sara's
pod
and
whatever
network
service
area
is
dealing
with.
There
are
mechanisms
and
network
service
mesh
that
would
allow
you
to
insert
a
proxy
in
between
that.
Could
you
essentially
both
sniff
this
exactly
back
and
forth
and
augment
this
exact?
So,
but
let
me
get
a
little
further
on
I'm
getting
ahead
of
myself
but
effectively.
A
We
have
no
problem
in
network
service
mesh
with
the
network
doing
work,
it's
just
a
sliding
scale
for
us.
You
know
you,
you
pick
your
poison.
A
So
again,
this
is
something
like
secure.
Internet
connectivity,
when
you
talk
about
the
network
service,
so
the
second
concept
is
a
network
service
endpoint,
it's
a
pod,
that's
providing
the
network
service
that
you
want
now,
since
you
brought
this
up,
I
will
point
out
that
this
doesn't
necessarily
have
to
be
a
pod.
A
It
could
be
something
in
your
physical
network,
but
for
the
moment,
we're
just
going
to
talk
about
pods,
providing
network
service
endpoints
because
it
simplifies
the
conversation
so-
and
this
is
exactly
like
endpoints
for
services-
I-
think
the
only
the
only
major
differences
we
got-
advice
from
the
Signet
working
folks
that
effectively
came
down
to
for
the
love
of
God,
don't
make
it
plural.
That
was
a
very
bad
idea,
and
so
we
have
in
fact
not
made
it
plural.
When
we
talk
about
resources.
A
So
an
example
of
this
would
be
the
VPN
gateway
pod
right,
that's
a
network
service,
endpoint
and
then.
Finally,
the
third
concept
is
this
l2
l3
connection
between
your
pod
on
the
network
service
endpoint,
and
it's
tempting
to
think
about
this
as
an
interface,
and
it
can
be
an
interface
so
usually
for
most
application
pause.
This
will
get
instituted
as
a
kernel
interface.
There
are
people
who
want
to
do
nfe
kinds
of
use
cases
where
they
want
other
kinds
of
mechanisms
for
their
local
connectivity.
A
Could
be
applicable
to
Windows
I,
do
not
talk
about
Windows
pods,
because
quite
honestly,
I
am
unfortunately
relatively
ignorant
there
I
would,
if
you
are
in
a
position
to
become
a
bit
more
involved
to
make
sure
that
we
stay
on
a
track
so
that
we
can
also
do
this
for
Windows
pods
I
would
be
ecstatic
run
in
anytime.
Anything.
A
Also,
if
you
could
drop
me
a
line
that
would
be
usually
helpful,
so
we
can
sort
of
sync
up,
because
I
much
would
like
to
make
sure
that
we
have
a
nice
happy
landing
on
the
window
side.
My
major
problem
is
I.
Quite
honestly,
don't
know
enough
to
know
whether
or
not
I'm
leaving
the
right
architectural
white
space
and
we
don't
have
anyone
else
in
the
community.
You
who
is
a
big
windows
guy,
so
that
would
be
very
sure.
D
If
you
go
to
the
previous
slide,
the
it's
always
a
challenge-
new
comes
to
the
kernel
interfaces.
It's
possibly
probably
easy
to
do,
but
the
moment
is
go
to
the
exotic
things
like
memory
loss
users
right,
we
have
to
be,
we
understand
what
will
of
interface
they
have.
The
HMS
is
very
will
be.
The
api's
today
is
very
network
centric,
so
we
have
to
see
what
are
the
equivalents
for
such
exotic
things
right
is
not
there
currently
and
so.
A
To
do
with
your
assistance
is
try
and
get
some
of
these
things
to
be
a
little
more
generic.
So,
for
example,
me,
my
F
is
a
fairly
straight
for
shared
memory
mechanism
for
two
containers
on
the
same
box
to
communicate
with
each
other,
and
so
you
know
if
you
have
shared
memory,
and
you
have
something
that
looks
like
a
UNIX
file
socket,
it
should
take
a
lot
to
make
mif
work
on
windows.
D
Yeah
yeah:
that's
right
like
things
like
main
pipe.
So
in
the
equivalence
there
is
a
named
pipe
variable
socket
in
the
next
right.
So
and
that's
why,
if
you
look
at
the
terminologies
Sabir
done
the
mistake
in
the
past,
that
is
the
talker
when
we
call
it
the
neck,
specific
terminologies
and
finding
that
appropriate
terminology
on
the
other
platform
find
which
one
it'd
be
hard
at
later
stage.
So
at
this
time
see
if
you
can
find
a
common
is.
A
That's
right,
I
talked
about
this
I
call
this
architectural
white
space,
which
is
90
percent
of
the
time
you
can
take
an
action
that
is
exactly
the
same
cost
as
the
one
you
might
have
otherwise
taken
okay,
but
you
know
six
months
from
now
when
you
go
to
do
the
next
thing,
your
choice,
a
is
really
expensive
in
choice.
B
is
really
really
cheap.
Let's
make
twist
be
right:
okay,
cool!
A
Is
not
the
thing
that's
providing
the
VPN
gateway?
The
VI
part
is
a
network
service.
We
connect
network
services.
We
don't
traditionally
provide
the
network
services
right,
so
the
choice
of
where
you
put
the
VPN
gateway
pod.
Probably
that
would
be
something
that
would
run
on
the
same
cluster
as
Sarah's
pod
and
then
would
have
a
connection
back
to
some
VPN
concentrator.
A: And with L2/L3 connections that are point-to-point cross-connects between your pod and the network service you want, you don't really have to think about subnets in that context. One of the interesting things here is: if you want a bridge domain, and there are people who do for all kinds of good reasons, that bridge domain itself is a network service, and so all Network Service Mesh would do is connect you to the bridge domain network service, by virtue of being point-to-point cross-connects.
A
So,
of
course,
then
there's
the
question
for
Sarah,
in
that
new
routes
routes
are
a
big
pain
for
her
right,
getting
the
right,
prefixes
and
that
kind
of
stuff
and
looking
at
these
kinds
of
problems,
one
of
the
things
we
realized
is
addresses
and
routes
for
the
l2
and
l3
connection.
You
know
from
the
net.
A
They
naturally
should
be
coming
from
the
network
service
endpoint
like
your
VPN
gateway,
because
your
Gateway
probably
has
a
pretty
good
understanding
of
what
IPS
are
going
to
be
validly
assignable
for
talking
to
your
corporate
internet,
and
it
probably
has
a
pretty
good
understanding
of
what
prefixes
should
be
routed
to
your
corporate
intranet.
But
that's
not
something
you
want
to
go,
have
to
manage
or
the
great
giant,
I've
ham
in
the
sky.
A
It
really
is
a
contract
between
your
VPN
gateway
and
Sarah's
pod,
and
so
you
know
that
is
something
that
should
be
handled
from
Sarah's
point
of
view
automatically
there's
a
little
bit
of
thought
on
the
part
of
the
person
who
is
deploying
the
VPN
gateway
come
on,
and
then
so.
Finally
there's
this
new
firewall
pod,
but
they
that
her
security
people
want
to
stick
before
the
VPN
gateway
pod.
A
Now,
network
service
measures
a
mesh
so
far,
I've
just
shown
you
signaling
to
point
connections
and
in
this
example,
once
you
introduce
that
firewall
pod,
the
firewall
pod
and
the
VPN
gateway
pod
are
working
together
to
provide
the
secure,
Internet
connectivity
service
from
Sarah's
point
of
view.
It's
just
one
service
right:
the
fact
that
there
are
multiple
things
composed
to
provide.
It
is
not
really
her
problem,
and
so
you
know
the
question
becomes.
How
does
that
work?
And
the
answer
of
course
is
we
do
we
always
do
in
this
situations?
A
A
A
So for that, you would need two network service wirings. The first one, which I have creatively named "secure intranet connectivity wiring one", has a target of secure intranet connectivity. In other words, if you were trying to reach the network service "secure intranet connectivity", this network service wiring might apply to you.
A
The
secure
Internet
connectivity
wiring
one
has
a
qualifier
which
says
if
your
source
is
not
something
that
provides
secure.
Internet
connectivity
like,
for
example,
Sarah's
pod,
then
that
then,
then
this
policy
does
apply
to
you
and
then
the
action
is
to
route
to
a
destination.
Where
you
run
your
pod,
selector,
basically
on
the
label,
firewall
equals
trip
right.
So
if
you
are
a
Sarah's
pod
and
you
reach
out
to
connect
to
secure
Internet
connectivity,
you
will
be
connected
to
our
well
equals
true.
So
this
picture
shows
shows
that
graphically.
A
You
owe
me
about
this
network
service.
Wiring
Sarah's
pod
is
trying
to
connect
to
secure
Internet
connectivity
because
it
is
not
providing
secure,
Internet
connectivity,
it
matches
this
network
secure,
Internet
connectivity,
wiring
one
and
therefore
it
gets
connected
to
the
firewall
pod,
make
sense
so
far.
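The wiring selection just walked through can be sketched in a few lines of Go. This is purely illustrative: the struct fields (target service, a source-provides qualifier, a destination label selector) are assumptions modeling the talk's description, not the project's actual API.

```go
package main

import "fmt"

// Wiring is an illustrative model of the "network service wiring" described
// above; names and shapes are assumptions, not NSM's real resource schema.
type Wiring struct {
	Name           string
	TargetService  string            // applies to requests for this network service
	SourceProvides bool              // qualifier: does this wiring apply to sources that provide the service?
	DestinationSel map[string]string // action: route to pods matching these labels
}

// selectWiring returns the first wiring whose target matches the requested
// service and whose qualifier matches whether the source provides it.
func selectWiring(wirings []Wiring, service string, sourceProvidesService bool) *Wiring {
	for i, w := range wirings {
		if w.TargetService == service && w.SourceProvides == sourceProvidesService {
			return &wirings[i]
		}
	}
	return nil
}

func main() {
	wirings := []Wiring{{
		Name:           "secure-intranet-connectivity-wiring-1",
		TargetService:  "secure-intranet-connectivity",
		SourceProvides: false, // only for sources that do NOT provide the service, like Sarah's pod
		DestinationSel: map[string]string{"firewall": "true"},
	}}
	// Sarah's pod requests the service and does not itself provide it.
	if w := selectWiring(wirings, "secure-intranet-connectivity", false); w != nil {
		fmt.Println("route to pods with labels:", w.DestinationSel)
	}
}
```

A second wiring, whose qualifier matches sources that do provide the service (the firewall pod), would then route onward to the VPN gateway, which is exactly the composition described next.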
A: By chaining. So, by chaining, do you mean: can you take multiple actions, so you route to a firewall pod and then you do something else? Yes. We're very, very early in our thinking about this, and we would welcome more participation from folks thinking a little bit deeper about it. I will tell you right now, the reason we call this out as an action is that one of the things we have as a use case, that people would like to be able to do in the future, is to have an action that basically says...
A
Please
go
spawn
a
firewall
pod
in
this
cluster.
If
you
don't
have
one
already
and
then
route
to
it.
Essentially,
the
dynamic
spawning
of
network
service
endpoints
is
something
that's
on
our
radar,
and
in
that
case
this
would
sort
of
work
out
as
a
chain
thing
where
your
first
action
would
be.
You
know
spawn
if
not
in
existence
and
the
second
action
would
be
the
route,
but
that's
not
fully
thought
through.
Yet
in
the
community
it's
been
been
teed
about,
but
it's
not
fully
thought
through,
and
so
we
would
welcome
more
participation
from
folks
and.
A
I
would
like
to
have
a
network
service
endpoint
on
every
node,
where
someone
is
asking
for
it
right,
I,
don't
want
to
run
5,000
Network
Service
endpoints
for
this
thing
all
over
my
cluster,
when
I
might
have
20
nodes
that
actually
the
damn
thing
running
that
would
be
horrible,
and
so
that
kind
of
problem
is
what
we're
thinking
about
around
this,
but
anyway
I
water
off
a
little
bit
of
the
weeds.
So
how
does
the
fire
robot
pod
get
connected
to
the
VPN
gateway
pond
mod?
A: Now, one of the lovely things here is that the firewall pod does not have to have any comprehension that the VPN gateway pod exists, and if I decide I want to insert a new pod between the firewall pod and the VPN gateway pod, I just have to change my network service wirings. I don't have to change my firewall pod; nothing about my firewall pod changes.
A: Cool. So the next question Sarah has, because this has been a pain for her, is: what happens when IT decides to put something else in there for more security, say an IDS? Now they want an IDS pod in the chain. Effectively, you just have to have a deployment for those new pods and a network service wiring that connects them, and that's it.
A
So
there
are
no
interfaces,
no
IPS,
no
subnets
and
no
routes
that
sara
has
to
work
about
worry
about
any
of
this.
So
all
of
those
concepts
continue
to
not
be
part
of
the
northbound
API
for
the
application
developer
and
you
don't
need
a
new
version
of
kubernetes.
This
works
with
stock
unaltered
kubernetes
and
you
don't
have
to
use
of
specific
magic,
C
and
I
plug
in
because
network
service
mesh
does
not
actually
use
C
and
I.
See
and
I
does
a
great
job
for
kubernetes
networking.
A: Hey man, I am not here to speak ill of CNI. It works for the Kubernetes networking that we have today, and one of the reasons that we work with completely unaltered Kubernetes is that we are not trying to change anything about existing Kubernetes networking; it works well, let it continue to do its job. Neither are we looking for alterations to, you know, the device plugin mechanism or any of the rest of that. We can use those things, particularly for handling physical NICs and SR-IOV. We think they're great; we don't need them to change.
A: So, how does the magic work? You guys, I'm sure, have seen this picture before, right, from the classic service mesh presentations, where you have the sidecar in the picture. And so in Network Service Mesh we have something that's kind of like this, that does service discovery and routing. It's called the network service manager, and you run it as a DaemonSet, so you have one on each node. It happens, though, that we have a slightly different situation here than you have with service mesh.
A: With service mesh, everything is running over TCP, and every kernel has a TCP stack, so the connection management piece doesn't have to be handled by the thing that's handling your service discovery and routing. In Network Service Mesh, we do have to handle the connection management, because there is no one true connection manager for L2 and L3 connections in networking.
A: So, when your pod comes up, the NSM init container reads the config map in order to figure out what network services Sarah's pod needs to be connected to, in this case the VPN gateway pod, and it sends a gRPC call to the network service manager to request an L2/L3 connection to the secure intranet connectivity network service. The request connection has any information needed to be clear about how you want the connection to be handled locally in your pod.
A: This is where you would specify what we would call the local mechanism. So you would say something like: I would like a kernel interface, I would like that kernel interface to have a particular name, or I would like a memif, or whatever. You basically give it a preference-ordered list of what you would like in terms of the local mechanism for connecting this network service to you. And then the gateway pod...
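That request, with its preference-ordered list of local mechanisms, can be sketched roughly as follows. The message shapes and mechanism names here are illustrative assumptions, not NSM's actual gRPC API.

```go
package main

import "fmt"

// Mechanism is a sketch of one entry in the preference-ordered list the init
// container sends; types and parameter keys are assumptions for illustration.
type Mechanism struct {
	Type       string            // e.g. "KERNEL_INTERFACE" or "MEMIF"
	Parameters map[string]string // e.g. a requested interface name
}

// ConnectionRequest models the gRPC request described above.
type ConnectionRequest struct {
	NetworkService  string
	LocalMechanisms []Mechanism // ordered most- to least-preferred
}

// choose picks the first requested mechanism this node's data plane supports,
// which is how a preference-ordered list is meant to be consumed.
func choose(req ConnectionRequest, supported map[string]bool) (Mechanism, bool) {
	for _, m := range req.LocalMechanisms {
		if supported[m.Type] {
			return m, true
		}
	}
	return Mechanism{}, false
}

func main() {
	req := ConnectionRequest{
		NetworkService: "secure-intranet-connectivity",
		LocalMechanisms: []Mechanism{
			{Type: "MEMIF"},
			{Type: "KERNEL_INTERFACE", Parameters: map[string]string{"name": "nsm0"}},
		},
	}
	// Suppose this node's data plane only supports kernel interfaces:
	// the requester falls back from memif to its second preference.
	if m, ok := choose(req, map[string]bool{"KERNEL_INTERFACE": true}); ok {
		fmt.Println("granting local mechanism:", m.Type)
	}
}
```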
A: We're going to talk first about the case where the VPN gateway pod is on the same node, just because it's a simpler case; we'll talk later about node to node. So if the VPN gateway pod happens to be on the same node, then you send a request connection to it, it sends back an accept connection, again over gRPC, and the network service manager creates and injects the interface, whatever mechanism it is, into the pod, creates and injects the other interface into the service pod, and cross-connects them.
A: Yeah, just like any containers. Okay, now, I will point out that if Sarah wanted to write an ultra-smart pod that dynamically requested connections to network services throughout its lifetime, she certainly could. But for this use case I wouldn't imagine why she would want to. This is very much a take-something-off-the-shelf, configure it, put a config map in, and go situation.
A: It is very much a distributed control plane for cross-connects, but it's only doing, essentially, the cross-connects piece of it. The other nice thing is that this means the network service manager is the only person in this picture that has to have any kind of privilege in the system. You know, there are people who've done things where you would sort of inject a privileged container into Sarah's pod.
A: I know that right now the Istio guys are trying to dig themselves out of that hole, because it's painful security-wise. But the network service manager is the one that's doing all the manipulation of the data plane, all the injecting of the interfaces, and it's actually pretty agnostic as to what that data plane is; it could be the kernel.
A: So again, this is getting to your question: what if the VPN gateway pod is on a different node? All right, great. In that case it starts out the same. It looks exactly the same to Sarah's pod and its init container; it literally cannot tell whether the VPN gateway pod is on the same node or not. It sends its request connection to NSM 1 on node 1, and in the Kubernetes API server we have these resources for network services, network service endpoints, and network service wirings.
A: NSM 1 looks up in that API server, or more likely from its cache, finds network service endpoints that are providing the service, looks through the network service wirings to identify an appropriate candidate, and figures out that there is an appropriate candidate on node 2 that is managed by NSM 2. Part of the network service endpoint tells it how to talk to NSM 2, so NSM 1 sends a request connection to NSM 2 over gRPC.
A: Now, this is really close to the request connection that gets sent from the init container to NSM 1, but the mechanisms are different. Rather than talking about local mechanisms, like kernel interfaces or memif, in this case you're talking about remote mechanisms, typically tunneling mechanisms, like VXLAN or GRE. And so in that request connection you send a preference-ordered list of: these are the mechanisms I would prefer, these are the constraints I have on the parameters for them, and so on.
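The NSM-to-NSM negotiation described here reduces to intersecting two capability sets in preference order: the requesting manager's ordered list against what the remote manager supports. A minimal sketch, with mechanism names chosen only for illustration:

```go
package main

import "fmt"

// negotiate models the remote-mechanism negotiation described above: the
// requesting NSM sends a preference-ordered list of tunnel mechanisms and the
// first one the remote NSM also supports wins. Purely illustrative.
func negotiate(preferred []string, remoteSupports map[string]bool) (string, bool) {
	for _, mech := range preferred {
		if remoteSupports[mech] {
			return mech, true
		}
	}
	return "", false // no mutually supported tunnel; the request fails
}

func main() {
	// NSM 1 prefers VXLAN and can fall back to GRE; NSM 2 supports both,
	// so the first mutually supported mechanism is selected.
	mech, ok := negotiate([]string{"VXLAN", "GRE"}, map[string]bool{"GRE": true, "VXLAN": true})
	fmt.Println(mech, ok) // VXLAN true
}
```

Because only the managers run this negotiation, introducing a new tunnel type means teaching the managers about it, not the endpoints, which is exactly the point made a little later in the talk.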
A: The tunnel is being created by the network service managers, not by the NSC, because the network service manager is the one that is talking locally to your node's data plane, whether that's the kernel, or a vSwitch, or VPP. The VPN gateway pod just knows that it's getting an interface injected for that network service.
A: No, the VPN gateway pod doesn't know. Now, we do have mechanisms, which I don't talk about in this deck, to allow, for example, Sarah's pod to ask: please be connected to somebody on the same node if at all possible, because there are definitely use cases where you care about locality. But in this very generic use case, yeah...
A: ...you know, Sarah's pod and the VPN gateway pod don't have any notion of what the underlay is in terms of the tunnel selected. And so, if you want to introduce a new tunnel mechanism, all you have to do is teach the network service managers. You don't have to update the hundreds and hundreds of possible network service endpoints or pods in order to use it; they will just get whatever results from the negotiation between the NSMs.
A: All right, so there's a natural question here: how does the network service endpoint resource get into the API server? And that's fairly straightforward. When the VPN gateway pod comes up, it sends a gRPC call to its local network service manager, saying hey, I'm exposing a channel, and the network service manager goes and creates the network service endpoints in the Kubernetes API server, so that they are present there to be discovered by the NSMs when the time comes. And I believe that is my last slide.
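The registration and discovery flow just described can be sketched as a tiny registry: the endpoint advertises through its local manager, and any other manager can later discover it. The types here stand in for the Kubernetes API server and are assumptions for illustration, not the real NSM registry API.

```go
package main

import "fmt"

// Endpoint models a registered network service endpoint: which service it
// provides and which manager to contact to reach it. Illustrative only.
type Endpoint struct {
	Service string // e.g. "secure-intranet-connectivity"
	Manager string // e.g. "nsm-2" on node 2
}

// Registry stands in for the Kubernetes API server holding NSE resources.
type Registry struct{ endpoints []Endpoint }

// Advertise is what the local manager does on the endpoint pod's behalf
// when the pod announces "I'm exposing a channel".
func (r *Registry) Advertise(e Endpoint) { r.endpoints = append(r.endpoints, e) }

// Discover is what another manager does when a client requests the service.
func (r *Registry) Discover(service string) []Endpoint {
	var out []Endpoint
	for _, e := range r.endpoints {
		if e.Service == service {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	reg := &Registry{}
	// VPN gateway pod comes up on node 2; NSM 2 registers it.
	reg.Advertise(Endpoint{Service: "secure-intranet-connectivity", Manager: "nsm-2"})
	// Later, NSM 1 discovers it while serving Sarah's request.
	for _, e := range reg.Discover("secure-intranet-connectivity") {
		fmt.Println("found endpoint managed by", e.Manager)
	}
}
```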
A: So, if folks have questions or things they want to discuss... I want to make sure I actually capture the folks, particularly the gentleman who was interested in helping out with Windows, and, I think, you, who were interested in getting involved to make sure certain other aspects were being covered in a way that you thought was reasonable. Do we have other questions from folks?
A: You can actually cross-connect things via whatever data plane you would like. So, the data planes that we currently are looking at right now: we definitely have people who are interested in doing it directly via kernel cross-connects, you know, with veth pairs and the like; that's one. We definitely have people who are doing it with VPP.
A: We actually have a lot of interest in some of the things that I didn't cover here, about how you use Network Service Mesh with physical NICs and SR-IOV, and we've definitely had people express interest in using OVS as the data plane. And so we're carefully architecting it so that the data plane is pluggable, and you can simply use whatever your desired local data plane is.
A: Right. This hopefully also makes the Windows discussion easier, because I know Windows has quite different data planes than we have in Linux. It also allows you to make different choices depending on your needs. If your needs are relatively lightweight, the convenience of using the kernel as your data plane is probably going to win out. If you have really intensive needs, you may need a stronger data plane that is more efficient and more performant, something more like VPP. People will have different needs, and their needs will evolve over time.
A: The pod would see, in this example, a kernel interface; that's what Sarah's pod sees, and the VPN gateway pod sees a kernel interface. What happens between them, just to pick a really bizarre example, an MPLS-over-GRE tunnel, is nobody's business but the network service mesh's, right?
A: So, things that I did not have space for in this deck because of time constraints: it turns out that you can have something we call an external network service manager, an NSM that manages physical network stuff, which looks exactly like any other network service manager from inside the cluster. So you do have the ability to interact outside of just the Kubernetes cluster, and there's potentially a great deal more breadth here than just Kubernetes.
A: So part of what we're looking for is advice as to what the best formal vehicle forward would be from your point of view, as the CNCF networking working group. Effectively, the advice from SIG Networking was: we think you should become a Kubernetes working group. I'm curious what the advice is from the CNCF networking working group. We've identified four possibilities so far: one is a Kubernetes subproject under SIG Networking, one is a Kubernetes working group...
F: Yeah, it actually touches upon an important point, which is: how do you use this mechanism to leverage existing virtual appliances that might be deployed in the network? Because people come and drop their clusters into an existing network that already has all these appliances in place. So: use this mechanism to direct traffic to, say, an appliance outside the cluster, and then go out to other places from there. That would have a lot of value.
A: Effectively, if you have a physical appliance out there in the world, you just end up deploying a network service manager that speaks the network service manager gRPC API on one side, so that other network service managers can talk to it, and then does whatever it needs to do to manage those physical appliances on the other side. And we're quite agnostic as to what that is.
A: Something that looks like the network service manager is managing, frankly, anything it wants to manage on the other side of its communication, and Sarah's pod doesn't know any different, and NSM 1 doesn't know any different; it all looks the same. And that ends up being massively powerful when you're dealing with existing appliances.
C: I guess maybe, just to summarize the project overall: it seems it's about bringing NFV into Kubernetes, whichever way you state the direction. The security use case that we went through today was less necessarily NFV, but probably somewhat; it's a concern for those that are on-prem or in datacenters now. You know, the common use case of "I'm on-prem and would like to access my cloud-based container deployment" makes a lot of sense. The other use cases that NSM facilitates are, my hunch is...
C
You know, they're things that are layer three and below: MPLS, and things that are service-provider oriented. It's not that it doesn't serve things above layer three, but the distinction, what you get out of this versus other service meshes, is that those other service meshes don't address layer two or layer three. Right now they don't facilitate something like an IPsec-based VPN; they just didn't think about it.
A
Well, it could be. You know, we're happy to negotiate whatever connections are doable on both ends, so you're absolutely right. I mean, IPsec is certainly one of our use cases. It turns out there are lots of other use cases, sort of like the one I showed in the illustration. Enterprise-oriented use cases people have brought up: you've got people who have an existing OpenStack, who would like to connect a pod to a Neutron network.
A
They don't want to have to backhaul the entire concept space into Kubernetes, because that's ugly; that sort of gives you the hell that Sarah was talking about. Well, a Neutron network is a perfectly fine network service as far as we're concerned, and that would be a case where you would have some kind of NSM endpoint to connect you to it.
A
We've got people who do want to bridge domains; people do want to be able to have NSEs, or VNFs, that are providing connectivity to specific networks, like a radio network service or that kind of thing. So we've got a very broad set of use cases. You're absolutely right, though, that the demarc for us is more L3 and below, because the existing service meshes, the stuff that they're doing, they're doing a kick-ass job for L4 through L7.
C
A
You know, I mean that ends up being really, really cool, because again the world continues to look simple to all the stuff in the cluster, even though there's something horrendously complicated outside. You insulate the environment by having very simple APIs and by dealing with very simple cross-connects.
A
So, you know, literally just before I got on this call I was chatting with somebody who has a VLAN trunking use case, where he's got a physical VLAN trunk coming into a node. He wants to deploy a pod that consumes that physical VLAN trunking network service and then also exposes a virtualized VLAN trunking service to other pods.
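[Editor's note] The composition pattern described here, a pod consuming a physical network service and re-offering a virtualized version of it, can be sketched as follows. Everything below is illustrative: the VLAN IDs, class names, and interface-name format are all made up, and real NSM would inject actual kernel sub-interfaces rather than return strings.

```python
class VlanTrunk:
    """Stand-in for the physical VLAN trunk arriving at the node."""

    def __init__(self, tags):
        self.tags = set(tags)  # VLAN IDs carried on the trunk


class VirtualTrunkService:
    """A pod that consumes the physical trunk and exposes a virtualized
    VLAN trunking service: each client pod leases one VLAN sub-interface."""

    def __init__(self, trunk):
        self.trunk = trunk
        self.leases = {}  # VLAN id -> client pod name

    def connect(self, client_pod, vlan):
        if vlan not in self.trunk.tags:
            raise ValueError(f"VLAN {vlan} is not carried on the trunk")
        if vlan in self.leases:
            raise ValueError(f"VLAN {vlan} already leased to {self.leases[vlan]}")
        self.leases[vlan] = client_pod
        return f"{client_pod}:vlan{vlan}"  # illustrative interface name


trunk = VlanTrunk(tags=[100, 200])       # made-up VLAN IDs
service = VirtualTrunkService(trunk)
print(service.connect("radio-app", 100))
# prints radio-app:vlan100
```

The same object is simultaneously a client (of the physical trunk) and an endpoint (to other pods), which is the chaining the speaker is describing.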
C
Something you were saying: the Kubernetes project is interested. I mean, clearly they're interested in things like bare-metal provisioning of nodes, to get clusters up so that people can use Kubernetes, but it sounds like they'd also show an interest in the physical aspect, the aspect of having a separate SIG. So.
A
C
So, in some respects, to facilitate migration from, or integration with, pre-existing technology deployments; OpenStack is a good example. Maybe, I know we're out of time, maybe there are a couple of other questions. I think, for me to give them good feedback about whether we're the best home, whether or not a separate working group here would make sense, or something along those lines.
C
A
So basically, this is your basic jumping-off point. The README lists out a bunch of the different collateral, and there's much more collateral than what's there. It gives you the links to the weekly meetings, the calendar, the mailing list, the Twitter, and the IRC channel. As you all know, communities tend to differ in their personalities; this Network Service Mesh community seems to be a very IRC-oriented community, for some reason, as it turns out.
A
F
A
Yeah, so that is the general pattern. I would imagine it to be easily adaptable to something like what Mesosphere is doing. I'll be honest, I don't know where Mesos's sharp pointy bits are going to be, but I imagine it probably could be, and that's probably somewhere else where getting someone involved in the community early would help, because there is a tendency, sort of a...
F
B
A
Please do, because again I'm really excited by some of the things on the Windows and Mesos front. If by getting early participation we could actually avoid making life hard there, that would be wonderful. Great, thanks. Anyone? All right, cool. I actually do have to run at this point, but any closing questions before I do?