From YouTube: Network Service Mesh BoF Meeting - 2019-04-02
B: Sure, let's get started then. Okay, so first: events. The Network Service Mesh day has now come and is now done. The next event is today; I believe the event starts at around 1:00 and the talk starts around 3:30, so we're going to have around 90 minutes to go over a variety of Network Service Mesh topics at the Intel meetup. So if you are in town and you have time, feel free to stop by. Before we go ahead, let's start with the agenda.
B: So one of the unfortunate things was that they were running out of time, so the amount of time I had available to talk about Network Service Mesh was cut almost in half. I wasn't too happy about that, but I basically gave a rundown as to what we're doing, and afterwards I got swamped by a number of people who recognized that the L4 through L7 use cases don't solve the L2 or L3 ones, and that L2 and L3 are just as important for solving certain use cases that they have.
B: So I'll start funneling them towards this, so that we can find a way forward. Ultimately, the most important part is that we work out how to build alignment with groups like Envoy and so on. If we build that alignment, then I think we'll have an easy way to move the larger community in the direction that we want for these specific use cases. Cool.
B: Well, let's see: we have ONS starting tomorrow for three days, and we have three talks that are going to be given by Network Service Mesh people in the community. We should put the times on this as well; I think that'll probably help.
B: And finally, according to Prem, and I don't know if he's on or not (it doesn't look like he is), there is supposed to be a demo of Network Service Mesh at, oh yeah, it's listed there: the LFN demo booth. So at the LFN booth you should be able to see a demo of Network Service Mesh as well. Cool, awesome. I need to follow up with them today to make sure that all the issues they've been having have been resolved.
B: And the last event: I really enjoyed what came out of that one as well. We had people from the CNCF who presented the CNF testbed, and we had a lot of really great topics, so it's worth going to if you're in the vicinity; come early and see. We also have ONS Europe coming up in Antwerp; the call for papers is currently open and closes June 16th.

B: We'll see what's going to happen with that: we have MEF 2019 in Los Angeles and KubeCon North America at the same time. I believe there are some people from the community who are going to go to MEF 2019 and are trying to give a talk, so we'll make sure that they're well prepared. Also, the call for papers for KubeCon opens on May 6th. And with that, back to the main agenda: we have the CNCF proposal next week.
A: As soon as we finish this meeting, I think that's next up for putting on the internet; I just didn't want to confuse people by putting it there yet. Perfect. The one thing I actually want to call out is the proposed slides to present; I have a link to those. If folks could try and get me some feedback on those in the next day or so, that would be super.
A: In other happy news, we talked briefly about NSM in various Kubernetes environments, particularly public clouds, and we've been looking at GKE, AKS and EKS. It would be great to hear from you folks about other public clouds we should be covering; that would be super helpful. I know somebody has mentioned maybe the Alibaba public cloud; is that something you might be able to help us with, in figuring out how to plug into it?
A: Well, the good news is that once we become a CNCF project, we can get some resources from the CNCF to pay for some cloud time. It's entirely possible; I know that most of the public clouds also donate substantial cloud time to the CNCF for use by CNCF projects, and Alibaba may do something similar. I literally don't even know how to start engaging with getting stuff running there, so if you could help us figure that out, and maybe help get some of the stuff running, that would be great.

C: Sure.
A: One quick thing I just want to mention in closing: I'm told that as of this morning the guys have actually gotten stuff working on GKE, AKS and EKS, so we should have PRs shortly to fix the last few niggling problems there. Having done that, hopefully we're going to get some CI running on those very shortly, so that we can run our CI across those public clouds as well.
A: It turns out the underlying problem was deeply, deeply embarrassing: I had screwed up the handling of the routes, and that was what was causing the problem. These things happen, they do. It turned out to be a bit of a chimera of a problem, because normally, if you have screwed up the routes, when you do a trace in VPP, you get to the ip-lookup node and then you get an error drop.

A: You look at it and go, "Aha, a route problem." But as it turns out, the VXLAN encap node has an optimization where it does the route lookup itself, and so we were hitting the VXLAN encap node and then getting dropped there. It's like, what happened? This is not where you expected a routing issue.
A: We're trying to get to the point where it will always work. Think of the thing in layers, right: we're trying to arrive at a sane, minimal way where it will always work. Now, as you probably well know, if you want to be high-performance you may have to do more, but we always want to make sure it always, always works. So, for example, one of the things we had to fix for the GKE case: the fastest way to get a kernel interface into VPP is with a vhost-net tap, via /dev/vhost-net, and it just so happens that the normal out-of-the-box Linux that GKE is running on doesn't have /dev/vhost-net. Now, there's no reason it should, right; it's not like it's a problem on their end at all. So what we had to do was check for its presence, and if it wasn't there, fall back to the second-fastest way of doing it.
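The check-and-fall-back approach he describes can be sketched roughly as follows. This is an illustrative sketch only: the function and the mechanism names are placeholders, not NSM's actual forwarder code or configuration values.

```python
import os

def choose_dataplane_mechanism(vhost_net_path="/dev/vhost-net"):
    """Pick how to wire a kernel interface into the forwarding plane.

    Illustrative sketch: the mechanism names are hypothetical,
    not NSM's real forwarder settings.
    """
    if os.path.exists(vhost_net_path):
        # Fastest path: the host exposes /dev/vhost-net.
        return "vhost-net"
    # Hosts such as GKE's default image may lack /dev/vhost-net,
    # so fall back to the second-fastest mechanism.
    return "af-packet"

# A host without the device falls back gracefully:
print(choose_dataplane_mechanism("/definitely/not/a/real/path"))  # af-packet
```

Probing at startup and degrading gracefully, rather than assuming the device exists, is what makes the "always works" layer possible.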
A: And so my guess is that there's a really good chance it will just work on Alibaba's managed Kubernetes offering out of the box, but if it doesn't, I expect the kinds of things we'll need to solve will be those kinds of little issues you run into when you go to a new environment. So the goal is to try and get NSM so that it works out of the box on everybody's Kubernetes offering.
B: One that we may want to try to take a look at: if you look at the largest ones, you have AWS, Azure, Google, Alibaba, etc. IBM is pretty huge as well, and they do have a container offering, what they call the IBM Cloud Kubernetes Service. It may make sense to reach out to them and see if we could potentially...
A: We'll see, but yeah, the basic goal is that we want to make sure we just work out of the box on any Kubernetes you run us on, particularly on the public clouds. Cool, awesome. So that's all been super, super good news. All right, upcoming release dates: Nikolay provided these. We had asked about them last week; we had previously agreed to these, but it's good to keep them front and center in people's minds frequently. So April 23rd is when we plan on pulling the NSM v1 branch.
A: By that point we should have the features we want for our 1.0 release in place; the designated release date is April 30th, and then we do our dot-one release on May 14th, so we have it in place for KubeCon. Essentially, think of it as: by the 4/23 date we should have all the features in; by the release date we should have really beaten on it, fixed bugs, increased testing, etc.; and then on the 5/14 date...
A: We essentially will take any pending fixes from the actual release and incorporate those, and continue to add additional testing. So from 4/23 until 5/14 we'll probably have quite a focus on testing as we go. But please note that it all gets pulled onto a branch, so master will continue to be open continuously for new features. Cool. Anything on the dates?
A: It's a branch-versus-tag thing; the tagging is important. And then we get to go through the joy, and I do mean joy, of figuring out what our release process looks like in the course of this. Having written the tooling for a release process for multiple communities, that is always interesting the first time.
E: We're hoping it's probably related to CPU limits. In one of the various commits we added default CPU limits for the forwarding data plane only, so I'm not sure if somebody experiences issues starting NSMgr because of this; I'm not sure. Probably we need to remove these CPU limits and add them only in cases where we really need them.
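The limits being discussed would look something like this in a pod spec; a minimal sketch, assuming a standard Kubernetes manifest, where the container name and the CPU values are illustrative rather than NSM's actual defaults:

```yaml
# Hypothetical fragment: container name and CPU values are
# illustrative, not taken from NSM's real manifests.
spec:
  containers:
    - name: forwarding-plane
      resources:
        requests:
          cpu: "250m"
        limits:
          cpu: "500m"   # a hard cap; too low a value can starve startup
```

Removing or raising the `limits` block while keeping `requests` is the usual way to stop the throttling without losing the scheduler hints.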
A: Could you take a look at that? If you're seeing the same thing that he was, could you chime in there, and if you're seeing a different thing, could you speak up? We definitely want to stamp out any instability and get some testing in to prevent it. It would be good if we could capture that, because if you're seeing instability, even if it's just you, it's probably something in your environment that somebody else is going to have in their environment, right?