From YouTube: Network Service Mesh BoF - 2018-05-25
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
So let's go ahead and start off with agenda bashing, while people are writing themselves in as participants, which is much appreciated. So, the basic agenda: always starting with agenda bashing, then we've got a review of some action items that folks said they were planning on working on from last week.
A
No, it's just that we want to see where they stand. Then we've got a review of development activity with Frederick and Kyle, plus whoever else has been working on things. I think we got some interesting stuff that John pushed, but he didn't want to talk about it till next week; he wanted to give people a chance to look at it. And we've got a review of the use case mapping with Prem and Fabian and John.
A
Yes, I added that question. Thank you, that's very helpful. I made sure credit went where credit was due. And so, in addition to the various other things that were there in terms of collateral that we can look at, however folks find useful: I had a conversation already with Mike when we talked about sort of a VPN gateway case, so I added some collateral for that, and then I attempted to answer some of the questions that you had posed, Mike.
A
All righty then, let's go ahead and dive in. First, do please add yourself to the attendees; I know we've got people who are currently arriving, and I'll stick the link to the agenda in the chat again for the new folks who are just getting here; it makes it easy to keep track of who's around. So, review: on code activity, I think you had hoped, Kyle, that you would have some CRD stuff out as a PR, and I'll take happy news there, yeah.
B
Definitely, so I did: I pushed out multiple versions of PR 54, which is the CRD patch. So yeah, I'd love to get some feedback on that. The problem is, I love vendoring in Go, but it makes code reviews interesting, because when you pull in new dependencies the patch looks gigantic.
B
Right, exactly, it is what it is. So what I did: I tried to at least put some comments in, maybe to aid in review for people who want to take a look at that patch. I'll definitely be available today; Frederick, I don't know if you want to sync up maybe later today if you have some time for that, yeah.
A
And I presume that'll be on the IRC channel then? Definitely, yeah, #networkservicemesh. Cool; a lot of fun stuff happens on the #networkservicemesh channel, I highly recommend it. So, cool: that's actually excellent news. So we've got that out and in the process of review. Do you want to talk a little bit about what the CRD pull request is doing? I know some people are familiar with CRDs and how we're looking to use them here, and some may be less familiar, right.
B
So we've decided — well, at least that's what this patch proposes, and it would be great to get feedback — that we're going to implement both network services and network service endpoints as custom resource definitions. And the beauty of that is we get to utilize all of kubectl, and we get the etcd database and everything on the backend. And so what the patch does is:
B
It allows us to take our protobuf file, which we already have, and that generates a bunch of Go code. Then we use the Kubernetes code generator: we just create a types.go file with our CRD definitions, which utilizes the generated Go file from the protobuf file as essentially the schema for us. So it's pretty slick; the rest of the code is mostly generated. Then I created a new plugin that essentially is able to run the backend informer for this as well.
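As a rough illustration of the pattern Kyle is describing — defining the network service types as CRDs so that kubectl and etcd handle the storage side — here is a minimal sketch of what such a CRD manifest might look like, expressed as a plain Python dict. The group, version, and plural names are assumptions for illustration only, not the actual names used in PR 54.

```python
# Hypothetical sketch of a CRD manifest for a NetworkService type,
# built as a plain dict (the real PR derives its schema from protobuf).
# All names here (group, kind, plural) are illustrative assumptions.

def network_service_crd(group="networkservicemesh.io", version="v1"):
    plural = "networkservices"
    return {
        "apiVersion": "apiextensions.k8s.io/v1beta1",  # CRD API version circa 2018
        "kind": "CustomResourceDefinition",
        "metadata": {"name": f"{plural}.{group}"},
        "spec": {
            "group": group,
            "version": version,
            "scope": "Namespaced",
            "names": {
                "plural": plural,
                "singular": "networkservice",
                "kind": "NetworkService",
            },
        },
    }

crd = network_service_crd()
print(crd["metadata"]["name"])
```

Once such a definition is registered, network services become ordinary API objects: `kubectl get networkservices` works, and an informer can watch them, which is the backend plugin mentioned above.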
A
That's awesome, thank you; I appreciate it, good work. Cool. So, Frederick, you had some hope of poking at in-cluster auth this week. I don't recall you being here to commit yourself to it; I'm just curious if anything happened there.
C
I'm currently working on that at this point. I probably got sidetracked with a couple of other tasks from my day-to-day work, so I don't think I dived into it as deeply as I wanted. But what I do have: I've already set up a test cluster, and I'm going to create a couple of jobs and a couple of sample applications designed specifically to test the boundaries of the in-cluster authentication. Now, specifically, I'm expecting there to be at least two — I guess you would say — roles.
C
One of them would just be a standard role, which I assume would have limited access to change things in the API, and then I need to work out how it is possible to gain privileged access in order to make changes. And so we'll need to work out how many privileges we need, and the proper way to expose that, if there's such a way. If there isn't, then what we'll need to do is create credentials out of band and inject them into Kubernetes, and there's a number of ways we could do that. So we have a path forward.
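The two-role split being described — a limited standard role versus a privileged one that can mutate resources — maps naturally onto Kubernetes RBAC. A minimal sketch, with the role and resource names being illustrative assumptions rather than the project's actual RBAC:

```python
# Hypothetical sketch: two RBAC Roles, one read-only ("standard") and one
# allowed to mutate ("privileged"). Names and resources are illustrative.

def make_role(name, verbs, resources, api_groups=("networkservicemesh.io",)):
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": name},
        "rules": [{
            "apiGroups": list(api_groups),
            "resources": list(resources),
            "verbs": list(verbs),
        }],
    }

standard = make_role("nsm-standard", ["get", "list", "watch"],
                     ["networkservices"])
privileged = make_role("nsm-privileged",
                       ["get", "list", "watch", "create", "update", "delete"],
                       ["networkservices", "networkserviceendpoints"])

# The standard role cannot create or delete; the privileged one can.
print("create" in standard["rules"][0]["verbs"],
      "create" in privileged["rules"][0]["verbs"])
```

Binding one of these roles to a service account is the in-cluster path; the out-of-band credential injection mentioned above would be the fallback if the needed privileges can't be expressed this way.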
D
Right, so I did update it — I have updated it based on the feedback — but I'm also working on looking at the distributed VPN, and on what we discussed face to face: trying to see what the issues would be if we fit the VPN into the distributed bridge concept. So that's what I'm currently working on, and I've also captured the pros and cons between the two approaches.
A
Yep, so that's cool. We'll probably talk a little bit more about that when we get to the use case review section. And I know you and I had the opportunity, because I was speaking in San Jose, to sit down and walk through the distributed bridge deck. Have you found that helpful? It may be something we decide to review in the conceptual review section today as well. Cool. And then, John, did you get sequence diagrams added to your use cases? — I did, with some help from Prem.
F
Yeah, so as I mentioned, I would like to get just a better idea of the use cases, so I can understand exactly where packets are flowing. Also, as mentioned, to get the definitions down straight — I think I heard last week overlay on underlay, different encaps, and that kind of thing, yeah.
A
I can see how that would be confusing. So hopefully we'll pick that up in the conceptual review, or I'd be glad to make some time next week to sit down with you and talk you through whatever would be helpful. I had a good conversation like that with Mike this week that I found highly productive. Cool, very good, I'll get on your calendar. Thanks, awesome. Okay, so: review of developer activity, other than the stuff we've already mentioned on the CRD pull request and so forth.
B
So this is something you and I were talking about. I think another area is just testing in general — and Frederick, I think you kind of brought this up with what you were discussing earlier — but I think the next area that I've started to look at for the CRDs is doing a bit more testing on some of the existing stuff we have.
A
No, I totally agree, and it's something I've been poking at a little bit too. As I'm looking at doing a proper device plugin, I'm discovering some interesting and useful patterns. It turns out that, if you consider this plugin framework, it becomes easy to write a test plugin that just exercises your plugin and then checks to make sure that the plugin is doing the right thing as part of its activity. So hopefully I'll have a pattern that I can show next week.
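The test-plugin pattern described above — a plugin whose only job is to drive another plugin and verify its behavior — can be sketched in a few lines. The `Plugin` interface here is a made-up stand-in for illustration, not the actual NSM plugin API:

```python
# Hypothetical sketch of a plugin framework where a "test plugin"
# exercises another plugin and records any misbehavior it observes.

class Plugin:
    def handle(self, request):
        raise NotImplementedError

class EchoPlugin(Plugin):
    """Toy plugin under test: records and tags each request it handles."""
    def __init__(self):
        self.seen = []
    def handle(self, request):
        self.seen.append(request)
        return {"request": request, "handled_by": "echo"}

class TestPlugin(Plugin):
    """Wraps another plugin, drives it, and checks the result."""
    def __init__(self, target):
        self.target = target
        self.failures = []
    def handle(self, request):
        result = self.target.handle(request)
        if result.get("handled_by") != "echo" or request not in self.target.seen:
            self.failures.append(request)
        return result

target = EchoPlugin()
tester = TestPlugin(target)
for req in ["conn-1", "conn-2"]:
    tester.handle(req)
print("failures:", tester.failures)
```

The appeal of the pattern is that the test plugin lives inside the same framework as the plugin it checks, so it exercises the real call path rather than a mock of it.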
C
So one thing we're probably going to run into in the future is when we want to test functionality that requires a Kubernetes cluster to be available. I think we're going to run into limitations with Travis and Circle CI and so on, so we need to work out a long-term test strategy as well for the integration tests.
B
That's definitely true. Actually, someone set me up with some of the CI/CD testing folks from the CNCF; I have an action, kind of, to follow up with them, because they are actually looking into this for things beyond NSM as well. So, Frederick, I can definitely pull you into that as well if you'd like, yeah.
C
Yeah, that'd be a good thing to do, because, for a start, the Kubernetes community has to do this testing themselves at this point. And there's also some work that I've been doing — not as part of the CNCF but as part of the Linux Foundation, so I guess a step higher — where I may be able to help a little bit with some of this stuff as well.
D
So on the communication flow, I'm trying to capture the use cases that have been discussed. I've added the distributed bridge use case, also because I intend to use it in the L3 VPN use case. Scroll down — we can scroll down a bit; we discussed it last week. Further down the page, yeah — the next page.
D
So there's a VXLAN channel that would be set up by Network Service Mesh a priori, and once that happens, the pod would essentially be exposing the L2 channel, and then the network service manager would essentially publish these L2 endpoints to the API server. And then, whenever another pod wants to communicate with that pod at node one, it would essentially use the L2 channel to talk to it. So I also see a bit of a parallel with the VPN gateway.
D
This can be thought of as sitting on top of the VXLAN, so this would be essentially within the data center. And then the next piece is essentially the connection between the nodes and the DC gateway. Again, the assumption is that the GRE tunnels would be set up between the nodes and the DC gateway, and then essentially the MPLS channel — MPLS over GRE — would run all the way from the DC gateway to the nodes.
D
So this is more like a point-to-point scenario, but one other scenario can be the distributed bridge concept which had been brought in. I haven't populated it because I see some gaps in it that I'm still working on, so I'll populate it a little more this next week, and we can also probably discuss it more during the scenario discussion. So that's the update with respect to the use case document, yeah.
A
Cool, excellent — thank you, I appreciate that update, and it's good that you went through and updated the sequence diagrams. And if folks are more interested in the distributed CNF case, we can talk about that in the conceptual review section later on. Now then, John, I think you also had updated some sequence diagrams; is there someplace you would like me to drive to?
E
I trust your driving ahead implicitly — you're doing a great job. So I copied Prem's diagrams and got some input from him, and I did two sequence diagrams. One was more sort of the first motivating picture, highlighting that the first case is bringing up a new resource. And so the first thing is: the pod says to Network Service Mesh, I'm requiring a new channel.
E
It calls the API server, which talks to the device plugin daemon set and asks for a resource. That asks for a new network namespace in the pod from the device plugin, which then triggers instantiation of a new security CNF — or it could be any CNF, this is just for this case. And then we inject the CNF into the container of the pod, which is then running. So you imagine this being a security VNF; it could be a VPN gateway or a vTap or anything else we want to put in there.
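The sequence just described can be sketched as a handful of steps in code. This is purely illustrative — every function name is invented, since the real flow goes through the kubelet device plugin API rather than plain function calls:

```python
# Hypothetical sketch of the flow: pod asks for a resource, the device
# plugin allocates it and creates a new network namespace, and a CNF
# is injected into the pod. All names here are invented for illustration.

def request_resource(pod, resource):
    # Pod -> API server -> device plugin daemon set
    return {"pod": pod, "resource": resource}

def allocate(claim):
    # Device plugin: create a new network namespace for the pod
    netns = f"netns-{claim['pod']}"
    return {**claim, "netns": netns}

def inject_cnf(allocation, cnf="security-vnf"):
    # Trigger instantiation of a CNF (could be a VPN gateway, vTap, ...)
    # and inject it into the pod's container
    return {**allocation, "cnf": cnf, "status": "running"}

result = inject_cnf(allocate(request_resource("web-0", "nsm/channel")))
print(result)
```

The point of the sequence is the ordering: the resource request is what triggers both the namespace creation and the CNF instantiation, which is what makes the CNF atomic with the pod.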
A
That makes a certain amount of sense, because effectively what you're saying is that you have the use case — and you were pretty clear about the use case — where you would like to be able to inject a new network namespace into the pod and inject a security CNF container into the pod. And that's sort of an interesting use case; there's a lot of interesting discussion even about who should be doing what, where, and when in this process, and I think you were pretty clear about that.
E
Yeah, I've talked to a bunch of customers and people, and the attraction — the positive feedback — is mainly around this: it makes the CNF atomic with the pod, which is a standard Kubernetes design pattern. They don't have to worry about additional resources; when I bring up a pod, the resources are there with the pod.
A
Absolutely. Depending on how much work you're looking to do — you're well aware, and I think a lot of people on the call are well aware, of the trade-offs when you're doing data plane work: distributing all of that to a bunch of different CNFs versus putting it into some number of central ones. But the truth of the matter is, I absolutely see the use case; we need to support both.
E
Thinking about if I want an L3 management network on top of my existing L3 Kubernetes network — there's a whole bunch of work being done in Multus and other things, and whether Multus is the right way of doing it versus the way Network Service Mesh is doing it is, I think, another interesting discussion, because there are so many use cases for having this. I won't use the word overlay, Chris.
A
It is almost always the case that what you want from, say, a network service — for example, in the case of a management interface — is this: you don't really want it plugged into some plumbing subnet, because then you've got to go manage that and figure out how to get what you really wanted in some out-of-band way. What you really want is for it to connect to, for example, your VPN gateway service, which does all kinds of nice things for you, including backhauling you to various other places. No, no, this is good, this is good; I appreciate it.
A
I think part of what's helpful here is that I've had some conversations with people, and we all sit most comfortably at different levels of abstraction versus concreteness. At a very abstract level, it's exactly the same pattern everywhere; and when concreteness is what's helpful, running through examples of how the same pattern applies in different environments works — so this is very helpful, thank you.
A
There had been a point raised that having a meeting on Friday is problematic for certain parts of the world, because Friday is part of the weekend in certain portions of the world. And I think where we had left it — correct me if I'm wrong, because I'm a little vague here — was that Prem was going to send out another Doodle, and that Mike was going to ask people for whom this was completely a problem to speak up.
A
Finding a meeting time is challenging, because I believe we have participants from Europe already as well as North America, and that means basically mornings in North America — at most five mornings — and most of those have been chewed up already by the many other collaborations. So it's a tricky thing, always, to find a time that suits everyone.
A
There's a factor for humans, called the Hrair factor, which I'm very fond of: it's the number of things you can perceive the count of without actually counting. So if I threw three pennies down on a desk, most people can perceive that it's three; if I threw thirteen pennies down on the desk, most people can't perceive that it's thirteen without counting them. And I think most people can't manage more than their Hrair factor of stuff going on at once. Cool, awesome — so, use case mapping, Prem.
A
In particular — everyone, I think, has been through the intro, and there's the video there; there's stuff talking about how hardware interfaces work, because it's fairly straightforward, but it helps to see it, and the distributed bridge, sort of distributed CNF, material. So I think there's probably a lot of interest in the VPN gateway case; it's one that came up in talking to Mike, and then Mike was kind enough to provide us with a bunch of questions, and I provided — or really, I made an attempt to provide — some answers to them in a way they could be referenced.
D
So one thing I want to pose — a generic question, but probably worth a thought — is about how full the mesh should be. If you look at the typical overlay/underlay concept, you have a full mesh between the nodes; that's a typical way to build it. But is there a way to optimize the whole VXLAN mesh? Just thoughts on how we can optimize — because let's assume you have, say, hundreds of nodes; then imagine building a full mesh between these nodes. It'll increase as the number of nodes increases.
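Prem's scaling concern can be made concrete with a little arithmetic: a full mesh over N nodes needs N·(N−1)/2 bidirectional tunnels, so the tunnel count grows quadratically with the node count.

```python
# Tunnels needed for a full mesh over n nodes (one bidirectional
# tunnel per unordered pair of nodes): n * (n - 1) / 2.
def full_mesh_tunnels(n: int) -> int:
    return n * (n - 1) // 2

for n in (3, 10, 100):
    print(n, "nodes ->", full_mesh_tunnels(n), "tunnels")
# 3 nodes -> 3, 10 nodes -> 45, 100 nodes -> 4950
```

At 100 nodes that is already 4,950 tunnels, which is why the partial-mesh and hairpinning discussion later in the meeting matters.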
A
Some general ideas there — there are trade-offs, of course. It sounds like you'd kind of like to start with the distributed bridge domain stuff, and then we could sort of jump from there; does that sound okay? Cool. And I'd love, in the last five or ten minutes, to go back to Mike's questions, because you put effort into constructing them, and so to try to talk through them.
A
I think it's important — I like to reward effort with actual feedback. So, cool. This is the distributed bridge domain deck; I don't remember how much of this I animated, God help us all. So there's literally a class of things where your cloud native network functions are actually distributed across a bunch of different nodes; they aren't actually living in one central place. It's a generic class of things, and I tend to think abstractly, so I think about it as a generic class of things.
A
But
the
truth
is
there's
one
that
almost
everybody's
familiar
with,
which
is
distributed,
bridge
domains
right
and
so
I
figured
if
I
talk
through
that
it
will
sort
of
show
the
pattern
for
how
you
would
handle
distributed.
Cns
in
general,
I'm
gonna
skip
past
the
getting
the
most
out
of
this
presentation.
This
is
in
case
I'm,
ever
presenting
it
in
video
people
to
fight
other
X.
A
So
let's
look
at
what
the
actual
problem
is
here
and
I'm,
gonna
try
and
animate.
This
gonna
help
us
all
you
notice.
I.
A
Cool. The general problem is: you've got a bunch of pods on a bunch of nodes, and they, of course, have their normal K8s networking in the normal way — they've got an interface for that — but you'd also like to be able to connect them to distributed bridge domains. So some to distributed bridge domain zero,
A
Some
distributed
bridge
domain,
one
not
everybody's
connected
to
every
British
domain.
Some
people
are
connected
to
more
than
one
bridge
domain,
but
this
is
a
sort
of
very
common
kind
of
problem
that
people
have
for
a
variety
of
reasons,
and
often
you
will
implement
this
with
VX
LAN
for
tunneling.
But
quite
frankly
you
know:
that's
not
you
know
whatever
works
for
whoever
is
providing
the
distributed.
British
domain
CNF.
A
So
to
look
at
this
from
another
standpoint
of
you,
you
start
by
defining
a
network
service
for
your
bridge
domain
zero
and
then
you
deploy
some
set
of
pods
or
daemon
sets
that
implement
bridge
bridge
domain
0
and
you
match
across
them
using
labels.
You
know
selectors
on
one
side
and
labels
on
the
other,
and
let's
say
just
for
the
sake
of
argument.
This
is
the
full
mesh
case
and
we'll
get
back
to
your
partial
mesh
term.
A
At
the
moment,
from
in
the
full
mesh
case,
you
just
deploy
a
daemon
set
of
br0
pods
across
every
node
nightly.
That's
the
simple
case,
and
then
the
question
is
okay.
How
do
you
actually
get
hooked
up
if
you're,
a
pod
who
wants
connection
to
distributed
bridge
domain
0
and
it's
fairly
straightforward?
You
know
every
you
know
every
node.
There
is
a
br0
pod.
A
So it simply makes a request to the br0 pod for the connection; it accepts the connection; you inject the interface — memif or vhost-user or whatever — into the br0 pod for the pod that's seeking to connect, and then you inject, on the other end, the memif, vhost-user, etc. for that connection into that pod and tell it that it actually has the connection. And at that point you've got something going through your data plane with that interface: the pod talks to the br0 pod locally, on your node.
D
So, just to keep it straight: each bridge domain is its own network service, and that service provides the bridging for that domain? So if I had two different tenants and they wanted two different bridge domains, those are two different network services?

A
Exactly, yeah. You could even imagine a situation — and I sort of showed that a little bit here originally — where, let's just say, the far-left node is tenant one.
A
That's entirely up to the bridge domains doing the requesting. If they want to go build something of that nature for themselves, step by step, what you're really talking about is those bridge domains having a connection to a network service that does something for them — whether it's something simple or more complicated, like a VPN gateway; that's their business, not ours. Sure, okay. We don't need to reinvent the Neutron model here at a high level; we just have to be able to support it for the people who really, really want it. So, distributed bridge domains: you've got a bunch of nodes, and they have br0 pods, effectively.
A
Those
pods
are
responsible
for
standing
up
the
vehicle
it
tunnels
between
each
other,
because
that's
how
they
provide
the
network
service
they
want.
So
the
NSM
is
actually
not
involved
in
this
at
all,
because
it's
just
connecting
pods
to
CNS.
Now
the
tunnels
could
be
normal
kubernetes
net
over
the
normal
kubernetes
networking.
A
They
could
be
over
some
other
network
service,
that's
requested
by
the
br0
pods
right,
that's
up
to
whoever
is
put
that
together
the
distributed
bridges
together
and
then
the
br0
pods
coordinate
with
each
other
using
whatever
mechanism
the
implementer
of
the
br0
pause
decided
they
were
using.
It
could
be
a
controller,
it
could
be
used
some
other
mechanism.
You
know
it
could
be
an
SDN.
They
could
be
talking
BGP
for
evpn.
A
That's
really
the
problem
with
a
person
who
is
deploying
the
distributed
CNS,
so
the
choice
of
that
is
outside
of
the
NSM
scope.
Its
whole
purpose
in
life
is
to
hook
up
pods
to
the
network
service.
That
is
br0
now.
One
other
thing
I
will
point
out
here.
This
gets
to
your
fully
meshed
versus
non
fully
meshed
case
prom.
In
the
event
that
say
on
node,
one
I
had
an
OB
r
zero
pod
either
because
I
chose
not
to
deploy
one
there
or
because
for
some
reason
it
has
had
an
accident.
A
Pods
elsewhere
make
sense
right
and
so
you'll,
basically
there's
a
cost,
because
that
means
that
any
bridging
has
to
hairpin
through
wherever
the
bureau
zero
plot
is
remotely,
but
it
may
be
worth
it
to
you
to
not
deploy
a
br,
zero
and
all
of
a
hundred
hundreds
of
your
nodes,
simply
because
you
know
that's
expensive
and
to
in
some
cases
be
back
hauling
and
in
fact,
if
you
wanted
to,
because
we
have
pod
affinity
available
where
you
can
essentially
say.
Please
deploy
my
pod
near
another
pod.
A
You
could
sort
of
put
your
thumb
on
the
scale
in
terms
of
scheduling
of
your
workload,
pods,
to
encourage
them
to
be
on
a
node
that
has
a
VR
zero
pod
running,
but
everything
still
works.
If
you
don't,
and
by
the
way,
neither
the
pod
requesting
a
connection
to
the
br0
network
service
nor
the
pod,
providing
it
actually
know
jack-shit
about
whether
or
not
that
that
connection
is
room,
is
local
and
remote
or
being
backhaul
to
some
other
node.
It's
just
not
their
business
make
sense.
A
I know we have some people in the broader community who want to do this: there's no reason that you have to have one pod per bridge domain. Something that is smart enough to handle many, many bridge domains is effectively just exposing many network services, and that can work too. So you can always have something that is a br0-through-n pod that exposes a bunch of bridge domains as separate network services.
A
That's very much this picture; basically it works pretty much like anything else you want to connect to this distributed bridge. So if I've got a pod here on the right: first of all, the br0 pod exposes the channel, saying, okay, I'm an endpoint for this br0 network service.
A
Then
the
pot
on
the
right
is
someone
who
wants
to
connect
to
the
br0
network
service
right,
makes
a
request,
and,
as
part
of
that
request
because
keep
in
mind
for
these
requests,
we
can
define
our
own
G
RPC,
so
we're
in
the
process
of
doing
that
in
developing
the
coding
activity.
One
of
the
things
you
probably
want
to
be
able
to
do
is
express
a
request
for
a
local
affinity.
A
In
other
words,
please
connect
me
to
the
instance
of
this
on
the
same
node,
if
at
all
possible,
so
having
expressed
that
you'll
request
a
connection
of
service
and
request
a
little
acidity
preference,
the
NSM,
knowing
that
it
has
something
to
satisfy
a
local
affinity.
Preference
with
goes
ahead
in
both
the
normal
setting
up
of
a
connection
to
the
br0
pot
requests
a
connection.
The
bureau,
zero
pots
is
sure
you
inject
an
interface,
you
know
mem,
IFE,
host
user,
etc
or
that
connection
into
the
br0
pod.
A
The endpoint gets advertised; with the requested connection — imagine the pod saying "please, if at all possible" — the NSM realizes it can't satisfy it locally. So it figures out where it can find a network service endpoint for the br0 service, requests a connection, and it looks very similar; it's just that you've got the peering between two NSMs.
A
Locally,
the
connection
from
the
pod
is
to
whatever
your
data
plane
is
her
local
interface,
and
then
your
data
plane
will
have
been
provisioned
by
the
NSM.
She
knew
whatever
it
agreed
with
the
remote
NSM,
whether
that's
we
actually
on
GRE,
whatever
the
right.
Not
our
problem
they've
come
to
some
agreement
between
themselves
in
terms
of
what
they
prefer
and
what
they
support.
I
The difference is that packets will have less latency and perhaps better performance, because the network service manager hopefully will be able to get the best quality of service. Maybe sometime in the future we'll have to talk about SLAs and all that, but this is sufficient to work, yeah.
A
The idea is: you can't always get what you want, but if you try, sometimes you get what you need. So clearly the ideal is to be able to get a local br0 pod — in this case, at least from the pod's point of view, that's the ideal; maybe not from the point of view of the operator who's not wanting to burn hundreds of instances of br0 where he may not need them.
A
Cool. So I'd actually like to drop back — we've got about seven minutes left — and Mike was kind enough to provide a bunch of really good questions, and I made an attempt to answer them. I think the thing that really jumped out at me, Mike, was the "who is agreeing with whom about what," and so I sort of phrased the deck that way. But you asked a lot of granular questions in there that were also very helpful. Should we dive into that?
H
So I have a bit of a reaction, based again on just the quick skim that I've done so far, and it is this: it's kind of interesting that you managed to not show either of the agreements that I teased out of you. — Okay, okay, I'm sorry, I didn't click into the question. — It's revealing, right, because it's telling me that you're focusing on different issues than the ones I think need to be explained up front, at the top of the presentation, about this whole idea. Mm-hmm.
H
So the two agreements that we talked about: first, the agreement between the application-level containers. We talked about the example of a web server and a web client; there's an agreement between them, which is that they are communicating via TCP, essentially. And there are several agreements, so let's just tick them off: there's a local agreement between each of those bits of application and the kernel — we focused on that in the web server case, which has agreed with the kernel.
A
To map that into what you did — here's the thing: a lot of those agreements are actually not agreements that either of the applications cares about; they're just there, they're the way things happen to be done. I would actually suggest that the agreement between the web server and the web client is this: if the web client sends a stream of bytes towards the web server, the web server will interpret them in some particular way and send a stream of bytes back to it.
A
The
fact
that
they
happen
to
be
using
TCP,
it's
kind
of
incidental
just
happens
to
be
the
way
that
you
send
streams
of
bytes
between
two
two
entities.
So
it's
actually
important.
The
payload
is
what
matters
not
be
underlay.
In
this
case
the
underlay
would
be
the
TCP
socket.
So
I
would
say
this
is
a
matter
of
layers.
H
So,
or
maybe
as
scopes,
so,
the
agreement
between
the
application
code
in
the
kernel
is
a
little
more
like
what
you
said.
They
use
the
kernels
interface
for
streams
over
the
network
yep,
but
the
there
is
another
agreement,
I
think
between
the
network
peers.
That
is
important
here:
okay
right,
because
these
things
are
not
developed
by
the
same
shop
right
and
I'm
not
deployed
or
operated
by
the
same
shop
and
so
different
organizations
across
the
world.
H
The world is not as simple as it was in 1990 — sorry, 1995. So there has been evolution, and there's the problem of how you introduce new protocols. There have been several runs at introducing new protocols: we have SPDY, we have HTTP/2, we have HTTP, we have TLS; all sorts of groups are actually working on this.
H
Right — so let me try it this way, since we're about out of time. There are multiple agreements in the world. There are a bunch of people who have agreed that we're going to leverage TCP and the evolution facilities built into it. No, I think it's important to understand:
H
It's
not
just
the
kernels
that
have
agreed
it's
the
people
that
chose
to
run
this
application
code
on
this
kernel,
with
the
expectation
that
other
people
out
there
in
the
world
by
looking
them
up
their
IP
address
up
in
DNS,
can
reach
their
application
code
over
TCP
and
yes,
there's
some
other
people
who
are
using
quick.
That's
an
additional
agreement.
Okay,
but
there's
a
bunch
of
people,
who've
agreed
on
TCP
and
HTTP
and.
A
The
various
deliberations
of
that,
but
not
because
they
actually
care
about
it
because
it
happens
to
be
there
and
it
happens
to
be
something
that's
universally
available
and
universally
used.
The
important
point
is
that
it's
an
agreement
because
they're
a
bunch
of
people
have
agreed
on
it.
The
important
distinction
here
is
that,
in
the
case
of
network
services,
there
is
literally
no
universally
accepted
agreement
or
how
we
move
an
IP
packet,
we
tunnel,
an
IP
packet
or
how
we
tunnel
and
I
in
Ethernet
frame
or
an
MPLS
frame
between
two
places.
A
There
are
a
million
and
one
answers
to
that
question.
None
of
them
are
actually
agreed
and
it
turns
out
the
weak.
If
we
focus
on
the
payload,
we
only
have
to
agree
link
wise
between
the
people
who
are
handling
the
underlay
carriage
and
and
by
abstracting
it
away.
We
don't
actually
have
to
get
agreement
at
all
between
the
pods
about
how
we
have
this
conversation.
I
think.
H
I do apologize; I have a hard stop at the top of the hour.

A
Okay — let's make sure that we follow up; I'd be happy to talk to you either offline or in the meeting next week, either way, because I think this is a very interesting conversation. Okay, well, it has been marvelous. I will see you guys next week, same time.