From YouTube: Network Service Mesh Meeting - 2018-07-27
A: All right, cool, so let's go ahead and get rolling. First up, as always, is agenda bashing. If we look at the agenda, we've got a lot of action items; in fact most of this is action item review, and then we've got some time to review new items for the following week. You can actually go and see (there's actually a link from the issue) that there's a project board here, which I think we'll probably try to use more as we go, that lists a bunch of the different items we have as issues that are either to-do or in progress: things like publishing the images to Docker Hub, or getting set up on Packet.net so we can get going on the servers for the use cases. And as we go, I think you...
A: There are certain kinds of NSM use cases for which you need physical servers, particularly the ones that in fact involve getting your network service from a physical NIC, or an SR-IOV VF from a physical NIC, and I believe that's moving along nicely; I believe that's done. So if folks would like to start shipping in that direction, reach out to Frederick, who can add you to the access list, and we can start drilling into some of that setup, because there is a bit more to servers than just dropping code on them.
A: There we go. So, basically, just adding a bunch to the existing documentation to walk through a high-level overview, in prose, of what Network Service Mesh is, and this is actually incredibly helpful. It turns out Frederic is really good at prose and I am really not, so I think this will be really, really good for us. Do you folks have any opinions, thoughts, or comments?
D: Exactly. How would I, you know, do an end-to-end example? Sergey's simple data plane is really good; I've kind of checked it out, but if I wanted to build my own, what would I have to do, and what YAML files would I have to write? Say I had two endpoints I want to connect, like a simple REST API across two nodes: how would I do that, and use the simple data plane to do it? Because I think...
E: Can I inject just a quick comment? There is an excellent scripted integration test, and I would use it as a baseline for basically mimicking those steps on a local cluster, because everything there is very generic; at least I don't recall any big dependencies on Travis or on the CI. So if you follow it step by step, bringing those pieces in at the end, you should be able to ping between those two containers, those two pods.
D: I think it's useful to put all the new extensions in separate directories, so the README and all the related files are in the same place. You guys are working on it day in, day out, and know where all the files are; I look at it and go, which file is a core piece of the platform and which piece is an extension? I can go and look at each individual file and figure it out, but it's not that productive, and if you want other people to use it, you want to be able to say: here's the way to extend it.
A: This is actually why I was asking, to get at Chris's point on it, John: you saying "I need more documentation" is a completely credible statement to me, independent of context, at this stage of the project. So the question becomes: of all the space of documentation we can write, what might be the most helpful? So, cool, awesome.
F: I just got comments from two more reviewers. I was in the process of responding to Fred's comments, and I have a comment or a response I put in at the bottom of the review to comments by yourself, Ed, and Sergey; I think somebody else too, I think it was Sergey or Pratik. Please place a comment in there if you agree with it, and I'll keep this going; hopefully I can get this wrapped up.
A: That could allow people to... it is really important. The thing that was sort of rolling around, at least in my mind, and I suspect also in Frederic's, is that the thing you want front and center is the kubectl apply kind of thing that lets people who do know Kubernetes just go from zero. Yeah.
F: So, like I stated here, I'll put a comment at the top saying: if you already have a cluster up, or you're already familiar with the cluster, then go straight to step X, and that will have the basics; you know, get the repo, I mean get the code, make, and then kubectl apply. Oh yeah.
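The zero-to-running path sketched above amounts to roughly the following. This is a sketch only: the repository URL, make target, and manifest path are illustrative assumptions based on the discussion, not verified project documentation.

```shell
# Hypothetical quick start for someone who already has a cluster up.
# Repo URL, make target, and manifest path are illustrative assumptions.
git clone https://github.com/networkservicemesh/networkservicemesh.git
cd networkservicemesh
make                      # build the binaries and container images
kubectl apply -f conf/    # deploy the manifests to the running cluster
```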
B: Well, I would say that even once... which, by the way, I have a patch that publishes those images; I have not pushed it out because I've been traveling this week, but I plan to do that Monday. But even with that, I think it's so important to document the whole process of how you build it and everything, so I think, yeah.
F: ...how to get a Docker image into the cluster without publishing it first, which isn't tremendously difficult. I also got a comment where people say, well, just start the cluster without a VM, and it really doesn't make any difference, except I think people are a little intimidated by nested virtualization; you need to start a daemon if you're already working in a VM. And I think container and cluster and Kubernetes people just don't like it; maybe VMs are unfamiliar to them, and they say, well, look, we've got Kubernetes and stuff.
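One way to get a locally built image into a minikube cluster without publishing it, as mentioned above, is to build directly against minikube's own Docker daemon. This is an assumption about the workflow, not the project's documented flow, and the image name is illustrative.

```shell
# Point the local docker CLI at the Docker daemon inside the minikube VM.
eval "$(minikube docker-env)"

# Build the image directly into the cluster's image cache; no registry push.
docker build -t networkservicemesh/nsm:dev .

# In the pod spec, set imagePullPolicy: Never (or IfNotPresent) so the
# kubelet uses the locally built image instead of pulling from Docker Hub.
```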
F: "We don't deal with that stuff anymore." But what I was thinking was: if we're going to do a real one, like I think Ed said when I logged in here, if we're going to do a service that includes a base data plane that talks to an underlying software data plane on a host, we're still going to need some of that old-fashioned VM stuff, like vhost-user, at least for the bottom-level layer, to serve the bottom level of presenting the fast networking interface.
F: Exactly, exactly; either with physical NICs, or making sure that, whatever is underneath, in the host or in the VM that contains whatever VM is running our cluster, at some point we're going to need access to fast data, whether it's virtual or physical, underneath us. So that's my thinking, in sort of a generic way.
A: I don't remember the "check for deprecated Kubernetes API calls" comment that I apparently made. I do generally advocate for clear errors that tell you how to fix things, but I literally don't remember this; I apologize.
A: It might have been... I mean, yeah, that very well may have been. It fundamentally comes down to, and I think I've made this comment several times, that I'm generally of the opinion that you should fail as early as possible and as clearly as possible, with good instructions on how to fix whatever it is that can't be resolved automatically. But I don't remember.
B: I did check in with him this week, and I believe he thought we were good to go. It's my understanding that the Packet machines might have those Intel 510 or 710 cards as well, which he was pretty pleased with, and I believe he was able to confirm this early. His only concern was the number of VFs that the Mellanox cards expose versus the Intel cards, I guess.
A: Yeah, I recall him saying that they were exposing something like eight. Now, it's important to realize: I've chatted with the Packet guys a fair bit, and apparently they generally standardize on Mellanox NICs, but right now some of their smaller, older machines are running mlx3, their sort of newer, larger machines are running mlx4, and they would like to get to mlx5, but apparently that's really sort of hot and fresh right now. And, Taylor, keep me honest here.
A: Marvelous, thank you so much; I appreciate it. Because I think, at least for me, the next thing I want to get working is some of the hardware NIC side, sorry, the SR-IOV channel that provides the network service; that's kind of the next one in my head in terms of use cases. Please note: if you have other use cases in your head that you want to work on, please do, so we can work in parallel. Yeah.
E: Right, yeah. I started looking at VPP and at how to interact between the NSM and VPP; I hit a couple of roadblocks, but they were resolved. While waiting on the answers on those VPP-related things, I kind of moved over a little bit and implemented the simple data plane, just to be able to run end to end in the CI.
G: Hi, I'm here. So the code is mostly done; the only challenge I'm facing right now is adding it to the CI with minikube. There is a step in our process where we need to get a certificate approved and issued by the Kubernetes API server, and minikube has these two modes: you can run with localkube, where everything is one binary, which is the only mode that works on Travis, or the other mode, which is powered by kubeadm.
G: Kubeadm doesn't work on Travis, and with the localkube mode the certificate does not get issued; so that's where I'm blocked right now. I was talking to Kyle on IRC: once we move to kubeadm on Packet, maybe that will be the right approach to go with, and then over there we can get the certificate approved and issued by the API server, which will unblock us.
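The two minikube modes being compared can be sketched as follows. The flags assume a minikube version from that era that supports `--bootstrapper`, and the CSR name is a placeholder.

```shell
# localkube: everything in one binary; runs on Travis, but the API server
# never issues the certificate needed here.
minikube start --vm-driver=none --bootstrapper=localkube

# kubeadm: bootstraps a full cluster; does not run on Travis, but the
# certificate-signing flow works.
minikube start --vm-driver=none --bootstrapper=kubeadm

# With a kubeadm-backed cluster, a pending signing request can be approved:
kubectl get csr
kubectl certificate approve <csr-name>
```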
G: So I addressed all the comments in my PR, but it's still failing, and I'll address those issues, because it's failing in the CI. I tried a lot of things. I tried using Ubuntu 16.04 in Travis, but that's not officially supported, so we can't move there yet. If we move to 16.04, then we can run minikube in kubeadm mode, which solves the problem, but for now we will have to just use the Ubuntu that Travis supports and run minikube with localkube. So that's where we are. Okay.
G: I mean, I tested it out on my cluster, on a Kubernetes cluster, and it worked fine, and then I moved everything to Travis. I thought with minikube it would just work, but it didn't. Then I installed minikube on my Mac: it worked, but it was not working on Travis. Then I tried it on Linux: it didn't work. So I narrowed it down to this setting, to localkube versus not running with it. Yeah, so that's it from my side.
G: It fixes the driver, but they also have this mode where it starts all the Kubernetes components: either all the components run as part of one single binary, localkube, or kubeadm bootstraps the whole cluster. That's the difference here. The driver part is a step forward in how all the infrastructure is run, but this is more about how it bootstraps Kubernetes on top.
G: It works for me; everything comes up. The only issue is a bug with localkube: it doesn't issue a certificate, and there is already an issue filed against minikube for that; in localkube mode it doesn't issue you a certificate. That's the only challenge. If I don't run with localkube, the issue is resolved and I get the certificate issued, so there is no problem there. I don't have any preference between localkube or the other mode; I just need the certificate issued. The blocking is only in localkube mode.
A: Okay, cool. So I think next up, amusingly, we've got our perennial agenda item about a mascot. I've kind of been using the Ariadne the spider image that I used in the narrative deck; I don't know if we can bring that up and see how people feel about it. In general, we would need to eventually get our own version of it made, since this one was purchased from a stock graphics company, but do folks in general like the friendly spider as a mascot?
B: Exactly. I worked on that earlier in the week, but I've been traveling since Wednesday, so I'm hoping Monday... I should be able to get it out Monday. I just need to rebase it after everything that went in this week, make sure everything is still good, and then I'll push that up Monday.
B: My plan is to only push on merges, pushes to master; we're not going to push Docker images when people push PRs.
A: And this, hopefully, will also help as we build up more system-level tests on Packet, and hopefully with the cross-cloud CI stuff: having the binary artifacts for downstream consumption of that stuff I think will be really good. So, Taylor, did you want to talk about the support things that you guys need for the CNCF CNF project? Whew, there are way too many C's and too many N's in the names there.
H: Ah, I think we want to hold until we get some of the testing that we're doing right now on the CNFs on Packet. I think when we figure out what we can do with this first network function, then we'll be able to describe those parts. We are working on, I guess, the use case write-up for that.
H: So right now we're doing some comparisons, said a much simpler way: we're using Docker containers, with Docker to host the containers, and KVM, either direct or with libvirt, where we're talking to KVM, compared to the VNFs, and we're doing all this on Packet. We'll try to share any of the information on the network cards and such, especially on that other ticket for the SR-IOV. Yeah.
H: Yeah, so we will definitely have to make some adjustments on what we jump from: what we're calling box-by-box, where we're doing things as minimally as possible down at the container level, compared to when we go into what we're calling orchestrated, so Kubernetes, and we're looking at comparing that to OpenStack.
H: So that's kind of the goal, and that's going to make it more complicated. At the moment we're on a single system, a single Packet node, for each of the tests; we'll be doing multi-node, multiple physical machines, and we're keeping that in mind for sending the traffic between the containers in the test, the container running the network function and the test containers. So, yeah.
A: Cool, all right then. I think the next item we had was working on our documentation infrastructure, and I think this was Frederick sort of saying: okay, look, we're starting to document things in markdown, which is awesome; sort of migrating these all to docs, migrating to Hugo, adding godoc support, that kind of stuff. I don't know how much progress has been made on this just yet; I generally like the direction.
F: Well, some stuff is working. I mean, if you put your doc in the docs folder it'll render; you just have to go back to the README and make sure the link is correct, but everything seems to render the markdown files just fine. So I think he had far more in mind with that, but that part works.
E: Yeah, that's correct. At one point, from the NSC perspective, I needed to pass the hostname to the NSM to be able to register the relation between the channels and the host name, so that when the NSC terminates, I can identify which channels were advertised by that specific host name, the one belonging to the exited or old NSC, and then clean those channels from the references.
E: Well, Fredrik mentioned that it's not a very reliable way, and I think he was going to investigate a more reliable one. Frankly, I don't see why it's not reliable, because name plus namespace guarantees uniqueness in Kubernetes; so from my point of view it's good enough, but I guess he probably came up with some corner cases where it's not sufficient.
G: Yeah, I think so. For the pod's hostname, I think the most reliable way, and the one promoted by the Kubernetes folks, is using the downward API. You can add this information in the pod spec itself, so when Kubernetes starts a pod it adds this information in a file, and you can read from that file, or it sets an environment variable. That's the recommended way to do it.
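A minimal sketch of the downward API approach described above; the pod name, image, and variable names are illustrative, not from the NSM repo.

```shell
# Expose the pod's own name and namespace to the container as environment
# variables via the downward API.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nsc-example
spec:
  containers:
  - name: nsc
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME/$POD_NAMESPACE && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
EOF
```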
A: My ongoing concern is creep in terms of how much modification has to be made to pods to really use Network Service Mesh. I totally agree with you that you can bring this in via the downward API, but we would like Network Service Mesh to be as easy to adopt as we can, and having to add more and more and more things to the pod spec, past a certain point, starts making it difficult.
A
What
I
mentioned
just
to
keep
in
mind
is
we
have
this
really
strong
tendency,
which
I
think
is
completely
healthy
at
this
stage,
to
think
about
network
service
mesh
completely
within
the
context
of
a
single
cluster,
but
as
I
look
forward
out
into
the
world
and
places
I
expect
this
to
be
used.
We
will
have
instances
of
people
wanting
to
connect
to
network
service
endpoints
that
are
outside
the
cluster.
So,
for
example,
I
know
that
in
the
nfe
cases
there
are
audio.
A: I don't think that actually has impact on this particular case, because when you're talking about the NSC going over the local NSM API, that intrinsically means you are running in the same cluster. But it's worth noting, so that people can keep it in mind and we don't accidentally preclude some of the really good use cases with external endpoints.
I: It's named whatever its internal name is; that is no longer what was set on the host, and what would have been set as a hostname in that variable map. So the problem is: typically, if they don't set it, then it's nice and reliable; you can get the host name and it matches the pod name. But if they do set it, then we can no longer rely on that technique.
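The fallback being discussed amounts to: trust an explicitly provided pod name (for example, injected via the downward API) and only fall back to the hostname when nothing was set. A minimal sketch, with illustrative variable names:

```shell
# Prefer POD_NAME if it was injected (e.g. by the downward API); otherwise
# fall back to the container hostname, which only matches the pod name when
# the user has not overridden hostname in the pod spec.
pod_id="${POD_NAME:-$(hostname)}"
echo "identifying pod as: $pod_id"
```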
I: Perhaps we can leverage additional privileged information that we may have access to, like looking at Docker, looking at the name of the pod. If you do docker ps, you can potentially correlate that with the namespace; because we have the namespace ID, we can potentially correlate that to the exact pod that we need to gain access to.
A: For example, because we have the namespace ID, we could run fsnotify or something similar across the /var/run/netns directory and watch for the disappearance of the files for that namespace; that's another thing we can do. The problem with that, I realize, and Sergey just pointed it out to me, is that Sergey is right now striving to minimize the amount of state that we keep in the NSM, because that way we don't have to squirrel it away.
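A sketch of that watch using inotifywait from inotify-tools; this is an assumption, since the meeting only mentions fsnotify generically, and the path and cleanup hook are illustrative.

```shell
# Watch /var/run/netns and react when a namespace's file disappears,
# i.e. when the pod that advertised channels under it has gone away.
inotifywait -m -e delete /var/run/netns |
while read -r _dir _event name; do
    echo "netns ${name} removed; clean up channels advertised under it"
done
```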
E: And I mean, I'm a bit surprised; for Kubernetes, having a name and a namespace is sufficient proof of uniqueness. Why can't we follow the same model?
I: It turns out that, before you put in that patch, there were other instances of it as well, where the name is being taken from the hostname; that's why I created the issue. It's not only about the patch that you pushed, or the one we're looking at; there are also other instances that we need to fix, because when we get a pod whose hostname has been overridden, we're going to see failures.
A: Kubernetes sets the name or namespace of the pod, or, if you fail to configure it, so that something is configured, it will set it to something that reflects the pod name and namespace. But because that is a fallback position from the user actually configuring it, you can't rely on the host name being unique, or having any relationship with the pod name and namespace ID.
A: All right, so let's see what's on the agenda. Okay, so I think under upcoming items for next week we've got using the project board for the agenda, so please do make sure to get issues in, because they automatically show up there. And I think, because John has this knack for catching the action items, we do have a created issue requesting a document on how to stand up a pod and connect to a network service endpoint, so we can get a good idea of the documentation that John most urgently needs.