From YouTube: Network Service Mesh WG - 2018-06-01
A: Okay, so first up, as always, is agenda bashing: a quick review of where we're going. We'll review action items, and we've got some of those that carry forward from last week; review development activity with Frederick and Kyle; and review use case mapping. I think, John, you're leading that this week, because Prem is actively flying right now.
A: Awesome, cool. So let's go ahead and dive right in then. From the action items that we had from last week on code activity: Frederick, have you thought any more on the in-cluster stuff?
C: Yeah. So there were two things I ended up looking at. The first one: I ran a few tests in order to work out the in-cluster auth. Since we're writing in Go, Kubernetes provides a very nice way to grab a configuration. You basically call the InClusterConfig method in the Go client, and the configuration that comes back has everything you need in order to access the API.
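For illustration, a minimal client-go sketch of that flow. It is only runnable inside a pod, and the version query shown is roughly the one call even the default service account is allowed to make:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig reads the service-account token and CA cert that
	// Kubernetes mounts into every pod, so no kubeconfig file is needed.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// With the default service account this is about all you can do;
	// anything more needs an account with extra RBAC privileges.
	version, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("API server version:", version.GitVersion)
}
```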
C: The second thing that we need to do from there: the default service account that it gives us has almost no privileges. You can query the version, and that's about it. So what I added was, I created a service account and gave it access to a limited set of APIs that I enumerated, and that worked out well. So I think what we'll need to do is create at least one.
C: Maybe two service accounts, with different privileges depending on the roles that we want: one which would monitor the new pods being created and the new nodes being created. We have to work out whether we want to modify any of this. For example, do we need the ability to create new pods, or the ability to add containers, and so on? If we do, then we can add those privileges to the network service manager pods. So that was pretty easy.
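The pattern described here can be sketched as a minimal set of RBAC manifests. All of the names are illustrative, not the project's actual manifests: a service account bound to a cluster role that can only watch pods and nodes, with creation rights left as the open question just discussed.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nsm-monitor            # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nsm-monitor
rules:
- apiGroups: [""]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]   # add "create" only if NSM must make pods
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nsm-monitor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nsm-monitor
subjects:
- kind: ServiceAccount
  name: nsm-monitor
  namespace: default
```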
C: The second thing that we can add is capabilities, through the pod spec. For example, one capability that we will almost certainly need for certain endpoints is the NET_ADMIN capability. All of these capabilities are dropped by the container runtime by default, and we can choose not to drop things like CAP_NET_ADMIN and so on, and that gives us the ability to manipulate the network interfaces.
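As a pod-spec fragment, retaining that capability looks roughly like the following (pod, container, and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nsm-endpoint                        # illustrative
spec:
  containers:
  - name: endpoint
    image: example/nsm-endpoint:latest      # placeholder image
    securityContext:
      capabilities:
        add: ["NET_ADMIN"]   # retained so the pod can manage interfaces
```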
C: Of course, we'll still recommend that users test and make sure that things work, because there are other things besides capabilities that may block a user. For example, if SELinux is active, it's possible it can be configured to deny the operation despite the fact that we have NET_ADMIN access. Same thing on the Ubuntu side as well.
C: They have their own SELinux equivalent in terms of functionality (AppArmor) that can block these types of requests. And actually, if a user wants to go all out, they can fine-tune the policy so it only allows the types of requests that match the network service manager use case and blocks others as well. There are ways they can do that, but generally we want to make sure that they have it tested.
C: In my runs SELinux did not block them, but it's something to keep in mind if you see things fail. So anyway, the two things are: add a service account and bind a pod to that service account on creation, and the ability to retain capabilities through the pod spec, which we will likely need for things like adding interfaces.
C
Another
option
is
we
can
document
on
and
in
some
document
directory
as
well,
so
that
way,
it
lives
along
with
the
github
repo,
but
I
mean
for
this
kind
of
information.
It's
it's
true,
regardless
as
to
the
state
of
the
container
system
itself.
So
you
know
so
I
thing
he's
a
good,
a
good
approach
here.
D: I have some experience regarding this; I did it for some other project, and I was playing with the same thing which you just described in the last few minutes. So last night I was playing with the service accounts for the network service mesh. I created one commit and pasted the link in the chat, just to give a little bit of perspective on what you were talking about right now; folks can go ahead and see it. I'll try to create a pull request later today.
E: Essentially, it gives us a way to let standard Kubernetes act as the database for us. Once that's set up, we'll be able to use kubectl and anything like that for all of our resources. So over the last couple of weeks I was able to figure out a way to take our protobuf file, from which we obviously generate Go code that has a bunch of structures inside it, and then I was able to create a types.go file.
E: It references those structures as the spec, and then uses all of the Kubernetes code-generation tools to generate everything we need and stitch it all together. Frederick and I spent a bunch of time last week reviewing that, and he merged it this week as well. Right now there is one problem with what was merged that I'm still looking into, and that is issue 59: deletion doesn't quite work as expected right now. I won't bore everyone with the details on that.
E
But
if
you
want
to
go,
look
at
that
issue
as
well,
I'd
also
open
58
and
57.
Those
aren't
really
issues
but
more
things
that
Frederick
and
I.
During
the
review
we're
like
yeah,
we
should
look
into
that
as
well,
but
those
aren't
really
bugs
so
basically,
what's
in
there
should
work
now,
I
also
opened
and
critique.
A: You know, the other thing I actually did notice with the CRDs, and this is probably worth a discussion because I don't really know what the right answer is here: in all the presentations I've been giving, I've been talking about a network service potentially exposing multiple channels. So we have a service definition; it can have multiple channels, each with their own name and payload. Now, this was done...
A
Basically,
a
hundred
percent
because
I
was
sort
of
you
know
imitating
as
closely
as
possible
services
from
kubernetes
I
noticed
that
in
the
stuff
that
you
currently
have
in
there,
you
only
have
essentially
a
server
lunch
at
a
service
equals
one
channel.
Now,
honestly,
in
the
examples,
I've
tried
to
work
with
this
damned
if
I've
actually
found
a
good
example
for
multiple
channels
right
so
I'm
in
no
way
complaining,
I'm
simply
saying
at
some
point.
A: But the only one that comes immediately to mind is: let's say, for example, that I have a network service, and I expose a channel that handles an ipv4 payload and a channel that handles an ipv6 payload. That's the one example that comes to mind, right? It's the same logical service, and you're getting IP payloads in both cases, but you often handle v6 and v4 quite differently.
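As a purely hypothetical instance (the schema is exactly what is under discussion, so none of these field names are settled), that dual-stack case might look like:

```yaml
apiVersion: networkservicemesh.io/v1   # illustrative group/version
kind: NetworkService
metadata:
  name: secure-intranet
spec:
  channels:
  - name: v4          # same logical service...
    payload: ipv4
  - name: v6          # ...but v4 and v6 traffic handled differently
    payload: ipv6
```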
E
I
agree
with
that,
my
only
my
only
turn
is
you
know,
and
obviously
it
seems
like
we
might
want
to
solve
some
of
these
simple
cases
at
first.
But
but
if
I
guess
my
I
guess
the
thing
would
be
I.
Don't
it
doesn't
seem
so
far
like
it's
a
lot
of
work
to
carry
multiple
channels
at
this
point
from
a
code
perspective.
So
as
long
as
that
maintains,
maybe
we
should
leave
that
flexibility
open
for
down
the
road
rather
than
closing
it
down
now
and
then
every
bit
later
change
the
model
and
everything
you.
A: ...sort of what their experiences are and how they feel about their own decision. I mean, I'm sure we've all been there, where somebody is copying something that you were responsible for, and you say: don't do that, that was a bad idea. I can't get out of it anymore because it's already set in stone, but if I had to do it again I'd never do that, right? We've all been there.
A: That is actually something we should think about as well. And thinking about it, I think there are two parts to it. One is the invisible network piece; the other is that we may have some use cases where the thing you're working via is not the Kubernetes network but some other network. So imagine for a moment that I've got a box that has a physical NIC.
A: That may be something we have to think about as well. I would say the default should definitely be via the Kubernetes network, but there will be cases where that's not going to be quite the thing, so we definitely need to think through some of those. Would you be willing to put together a crisp statement of those problems for either the mailing list or the meeting next week, John? See how that works.
A: Awesome, so we're moving right along to the meeting-time planning stuff. There was an AI last week that Prem was to send out a new poll or Google Form for this; I don't think that actually happened. Do we have Mike on the call? Because I think the other thing that came up was whether he had any concrete people who were actually having a problem with this being on Friday. I don't think we heard back on that, and I don't see him on the call.
E: There's that one, there's like a half of one, and there's another gentleman that last year had an example repository and took it most of the way. But yeah, it would be nice to get one concise "here it is" that goes all the way: including creating the controller side using the informers, and actually implementing some business logic. Yeah.
A: A very useful thing, man. Cool. Anything else, Kyle, before I move on to Tapestry, Frederick? That's...
A: Cool. Frederick, any ambitions for next week?
C: ...and see if we can start working towards that. It could be as simple as something like: chain these multiple things together and have one of them respond, or echo, or so on. But yeah, from a coding perspective I don't know what that means just yet, so I need to work that out.
C: Yeah, well, that's what I was thinking: it needs to be something that we can get through in both directions. So even just working out what we want to demonstrate as a starting point, which eventually may turn into boilerplate for people who want to build out their first chain, so they can learn about it. Yeah.
C: The other thing that I want to do, which is aiming towards that as well, is to start loading these things up as services, as DaemonSets, and so on, and to get some Kubernetes config files checked in to the repo so people can start applying them. That's a whole set of tasks, ranging from creating the YAML files to creating the images.
C
How
do
we
want
to
get
the
images
onto
some
repository
somewhere
and
which
means
we
have
to
build
images
somewhere?
So
I
think
that
we
need
to
start
working
that
particular
path
out,
so
that
so
that
we
can
have
a
deployment,
a
deployment
story
as
well,
and
this
will
help
as
well
with
the
with
the
long
term
goal
of
getting
integration
tests
in
place
for
for
network
service
mesh.
E: So, along these lines, if you all want to take a look at PR 60: that was kind of my intent. It moves us in that direction a bit, because just from the CRD perspective it has enough logic to almost be a super-simple integration test of: hey, we've got it up and running, network service mesh is running as a DaemonSet, and you can go and create the CRDs.
D: The other thing I just wanted to get some thoughts around: what was the rationale for not running the DaemonSet for the network service mesh on all the nodes? We have to label the nodes specifically for it to get scheduled on a node, so I was just trying to understand whether there is any specific reason for that.
E
Really
that
was
that
was
a
simple
way
to
get
it
going
and
and
I
guess
the
other
way
is,
and
actually
this
is
probably
a
discussion
point
you
know
by
doing
it
that
way,
at
least
initially
we're
not
requiring
people
to
run
it
on
every
node.
We
don't
come
in
as
being
ownerĂs
that
we
have
to
run
everywhere
so.
A: I mean, one way of doing that approach, just to make people's lives easier: in the YAML file you're building, you effectively have it apply the DaemonSet everywhere, and then a commented-out section with the piece you would uncomment, plus instructions, if you want to run it on a subset. So it's definitely there: okay, if you don't need it everywhere, this is how you do it. But I suspect that, from the kicking-the-tires point of view, most people are just going to spin up a cluster and try it.
D: When I just plainly applied it, I thought, it's a DaemonSet, it should just go on each node. Then I saw, okay, I have to put a label on, because there is a selector. So then I was a little confused: is there any reason behind putting it on specific nodes, or can it come up on all nodes? It can definitely come up on all nodes, if you want to attach...
A: Excellent. So we now get to the section of our meeting where we've got open space to talk about some of the conceptual issues. I don't know how many of you folks were there, but I gave a presentation to SIG Network yesterday, and in that presentation there was an amazing conceptual breakthrough in how I'm explaining this. That breakthrough was: I remembered to explain what the data plane is doing, which I'd previously not done, and which I feel a little bit silly about.
A: Well, so think about it. Hang on, I actually have, I think, a good slide that helps with visualizing this; let me see if I can dig it out. It's really an issue of layers.
A: Here's the one I'm looking for. It's really a matter of layers, right? What service mesh is working with is primarily L4 through L7: how do I proxy TCP ports around, how do I route HTTP/2 messages across a variety of available TCP connections, that kind of stuff, and it does that really nicely. What we're looking at with network service mesh is primarily things at L2 and L3: I have Ethernet frames that I need to treat as payloads, or IP packets, that kind of thing.
A: They should be utterly compatible, modulo a couple of comments that John had made: if you want Istio to be able to operate over some network service that's been plugged in by network service mesh and that Istio doesn't understand, there could be an understanding mismatch there. But I think that should be rectifiable, because the Istio guys do have ambitions of not being solely tied to Kubernetes, and of being able to do Istio service meshes across multiple clusters.
B: I don't think the Istio control plane is an issue, because it has pluggable modules; it's the data-plane piece. Istio can plug into NGINX, Envoy, and a bunch of others, so it's a question of how you get information from a data plane into Istio. If the data plane is now network service mesh, how does it talk to Istio, and how does Istio apply policy into...