From YouTube: Panel Discussion - The Future of Service Mesh: Is eBPF a Silver Lining or a Silver Bullet?
Description
Panel Discussion - The Future of Service Mesh: Is eBPF a Silver Lining or a Silver Bullet? Moderated by Craig Box, Google; with Thomas Graf, Isovalent; Idit Levine, Solo.io; Vik Gamov, Kong; and William Morgan, Buoyant.
Service mesh implementations normally take one of two forms: a proxy per node, or a proxy per workload (the so-called "sidecar"). Linkerd went from A to B. Cilium is suggesting we can go from B to A. Is eBPF a savior, or are we hyper-optimizing a tiny piece of the datapath? And what else might the future of service mesh hold?
A: So Idit has asked Yuval to sub in for her; it was my intention that we would switch her back in at some point. Otherwise we have a very manly panel, and no one wants that. So, let's start with Viktor here. Again, as you will have heard, I think everyone here but Louis has had a chance to have a chat today, some earlier in the morning than the others. So if you don't mind, please give just a quick little introduction to yourself, two sentences, and we'll pass the mic down the panel as we go.
B: All right, it's working. I'm Viktor. I work as a developer advocate at Kong, doing all things around cloud connectivity, APIs, Kubernetes, cloud native, all this type of stuff. I fit this in one sentence. Beautiful? Depends.
C: Hey, I'm William Morgan, one of the creators of Linkerd. You probably remember me from the very boring keynote this morning.
A: All right, so can we bring the mic down to William, please? Because I want to ask you, first of all, about that: for people with a shorter memory of the ecosystem space here, Linkerd started out as a single proxy that ran per node, and then there was a sort of skunkworks project.
C: Sure, yeah. So, you know, Linkerd was the very first service mesh, and the way we started out was with some Scala technology that we had imported from Twitter. So we were on the JVM, and the JVM was awesome at a lot of things, but it was not very awesome at being small and tiny. So the recommendation for Linkerd 1.x was for you to run it on a per-host basis. You could actually run it as sidecars, but it was like 150 megs.
C: So if you had a giant application, that was kind of okay; if you had a tiny application, it kind of sucked. And we had a lot of problems from our early adopters who were adding it to their Kubernetes stack on a per-host model — operational problems, primarily things around upgrades and maintenance, and, when something went wrong, trying to figure out where that was.
C: Eventually, for that and other reasons, like Craig mentioned, we ended up rewriting everything, and we now have a data plane that is designed to be a sidecar, written in Rust, and a control plane that's in Go and increasingly has some Rust in it. I wrote a longer version of this in an InfoQ article, so if you google for "service mesh lessons learned" or something, you'll find it in there. But that's kind of the basic history.
A: And then, Louis, perhaps if you wouldn't mind giving a sort of rundown on the secret internal Google thing that we promised not to talk about — Istio was in some way based on the idea of putting the sidecar next to the workload, and why that was important to us.
E: We had a lot of those use cases, and at Google scale the management of that became extraordinarily important. You could kind of consider it Google's legacy problem, right? Google's software in production tends to update quite regularly and is rebuilt very regularly, but we still considered this to be a kind of legacy-workload and framework-management problem, and that's why we introduced sidecars to solve that problem.
A: All right. So as not to mischaracterize what Thomas was talking about before: there's currently no intention to get rid of the layer 7 proxy. There is a goal that perhaps the functionality could one day be moved into the kernel — everything seems to run quicker there. I want to dig into the Cilium service mesh idea of saying, well, we'll move some of that down, but we might still have to run one proxy per node versus running one proxy per sidecar.
A
There
are
some
tenancy
concerns
that
were
brought
up
when
this
discussion
first
happened.
So
I
don't
know
if
it's
it's
probably
better
for
you
to
summarize
them
than
me,
thomas.
If
you
would
so
the
idea
of
what
do
you
gain
by
doing
this
and
what
things
do
you
think
you
lose
if
we're
moving
to
a
model
where
all
of
the
layer
7
processing
is
still
done
by
a
proxy
envoy
in
this
case?
But
it's
now
done
by
one
per
node,
rather
than
one
per
process
per
sidecar.
F: Yes. I think it's very important that we don't necessarily need to do everything in-kernel, in eBPF, to gain some benefits. A good example is that we can do in-kernel HTTP visibility in pure eBPF, and the gains are massive, but we cannot do layer 7 retries or load balancing yet. That does not mean that we shouldn't do the visibility part: getting OpenTelemetry metrics and traces almost for free is amazing. So, to the question of the proxy: we love Envoy. Cilium has been integrating with Envoy forever. Matt, I love you.
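A minimal sketch of the kind of in-kernel visibility hook Thomas is describing — hypothetical, not Cilium's actual code, and not HTTP-aware — is a kprobe that counts tcp_sendmsg() calls per process into a BPF map that a user-space agent could read and export as metrics. It assumes clang with -target bpf and the libbpf headers are available:

    // Hypothetical sketch: per-PID counter of tcp_sendmsg() calls.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 10240);
        __type(key, __u32);      /* process ID */
        __type(value, __u64);    /* number of sends observed */
    } sendmsg_count SEC(".maps");

    SEC("kprobe/tcp_sendmsg")
    int count_tcp_sendmsg(void *ctx)
    {
        __u32 pid = bpf_get_current_pid_tgid() >> 32;
        __u64 init = 1;
        __u64 *val = bpf_map_lookup_elem(&sendmsg_count, &pid);

        if (val)
            __sync_fetch_and_add(val, 1);   /* already tracked: increment */
        else
            bpf_map_update_elem(&sendmsg_count, &pid, &init, BPF_ANY);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";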
F: You've built a great proxy. We've been using Envoy to leverage and enforce layer 7 policies for years, and it's been in production for years. The multi-tenancy aspect is very interesting, because the discussion we're having right now is very similar to the one we had when containers were replacing virtual machines. One of the immediate concerns was: what about multi-tenancy? What if my apps share the same operating system — who controls memory, who controls CPU, who controls access to all of these resources? What was required was that we built a multi-tenancy model into the operating system, into Linux: that's what we call containers today, and we got a lot out of it. We actually got better control out of it, because now we can do fair queuing, we can actually share memory and do best-effort and so on. So I think that's actually the benefit. Just because the sidecar model was once the right approach doesn't mean that we shouldn't question it, and that we shouldn't look at potentially introducing a multi-tenant proxy. Also, it's not necessarily one proxy per node: it could be one proxy per namespace, one proxy per service account, or some other granularity that makes sense.
F: I think what we propose is actually very close to this. If you try running Envoy in, or as part of, the kernel, you'll probably face some rejection. So when you say we have this concept of multi-tenancy in user space — yes, that's one of the two answers. The other answer is cgroups and namespaces, and that's what containers are. What we propose is essentially this: Envoy actually has a great design, and it's very close to what the kernel does — it's multi-threaded and it's very siloed. The kernel has the capability to run individual threads of an application in separate cgroups, so we can run part of a single Envoy instance in the cgroup of the pod, and we get the CPU accounting automatically accounted for in the cgroup of the pod. That's amazing, right? Because, as mentioned before, with layer 7 the CPU is the bottleneck — HTTP processing is very CPU-intensive — so you definitely don't want all of the CPU of the node to be used up by a single proxy.
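The mechanism Thomas is referring to exists in cgroup v2's "threaded" mode: an individual thread can be moved into a different cgroup by writing its TID to that cgroup's cgroup.threads file. A minimal sketch, with a hypothetical path, assuming the pod's delegated, threaded-mode cgroup already exists:

    /* Hypothetical sketch: account the calling thread's CPU time to another
     * cgroup while the rest of the process stays where it is. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    static int move_current_thread(const char *cgroup_dir)
    {
        char path[512];
        long tid = syscall(SYS_gettid);     /* thread ID, not getpid() */

        snprintf(path, sizeof(path), "%s/cgroup.threads", cgroup_dir);
        FILE *f = fopen(path, "w");
        if (!f)
            return -1;
        fprintf(f, "%ld", tid);             /* kernel migrates only this thread */
        return fclose(f);
    }

    int main(void)
    {
        /* Assumed path: a cgroup v2 child already switched to "threaded" mode. */
        if (move_current_thread("/sys/fs/cgroup/mypod/envoy-worker") != 0)
            perror("move_current_thread");
        return 0;
    }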
E: Yeah, I mean, Thomas and I and Tim had a bit of a back-and-forth on Twitter about this. It took a long time for all of that to happen in the kernel. It's a lot of engineering work, and there are a lot of ancillary concerns that come in, because it's not just this process or this request or this thing that I have to deal with — it's the whole configuration space, the shared memory space. It's a lot of complexity for Envoy to take on, and that doesn't exist today.
E: So, you know, reasonable people can disagree on that front. I'm a little skeptical about when and if that would be viable — as you all mentioned, there's what holds today and what might hold two years from now. Right now I don't think it's practical, and I think what we do in user space, with good acceleration from eBPF to get data into user space as efficiently as possible, gets us very close to that anyway — or should get us very close to that — in terms of efficiency. I'm not going to say sidecars don't have problems; we're all aware of the total cost of ownership issues, like lifecycle and maintenance, but you still have to maintain one running on the node, and there are granularity issues. So right now I'm just a little skeptical of eBPF being the solution. I'm happy to be proven wrong, but right now, that's not where I would be.
A: Now, not everyone on the stage is spending their dollars on Envoy in general. I know that there is work happening to support Rust in the Linux kernel, though, and I know that the Linkerd proxy is written in Rust. Is there any synergy between these? Is there any chance that you'd consider in-kernel options for the Linkerd proxy?
C: Yeah, that's a good question — to a certain extent, right. You know, Linkerd doesn't use Envoy, so I don't really have a direct horse in this race, although it's interesting to listen in. I think, for me — would we consider running stuff in the kernel? I mean, I guess we consider everything. "Would you consider..." —
C: Yeah, sure, I think we should import the kernel into the proxy, of course — it's a lot simpler that way. No, I think, for me, the question is always: what's the actual user benefit that we're getting? And I happen to be someone who loves the sidecar model; I think it's a really elegant model.
C
It
has
some
implementation
issues
in
kubernetes,
especially
when
it
comes
to
like
ordering
and
things
like
that,
there's
stuff
that
has
to
be
fixed,
there's
annoying
aspects
of
it,
but
I
think
as
a
model,
I
actually
love
it.
So
if
someone
were
to
come
to
me
and
say
I
want
a
sidecar
free
service
mesh
like
why,
like
what
you
know,
what
you're
prescribing
an
implementation,
what
problem
are
you
actually
trying
to
solve?
Is
it
like?
You
want
to
reduce
complexity?
Okay?
Well,
then,
why
don't
you
say
that
I
want
a
simpler
service
mesh?
C
Oh,
is
it
taking
too
much
memory?
Okay?
Why
don't
you
say
that
I
want
a
smaller
service
mesh
right,
so
I
think,
from
my
perspective,
we've
tried
very
hard
to
make
the
linker
d
proxy
and
implementation
detail.
It's
not
something
you
have
to
think
about,
we
don't
even
give
it
a
good
name.
It's
got
like
a
terrible
name
and
it's
not
meant
to
be
consumed
by
anyone
outside
of
linker
d,
and
I
tried
very
hard
to
to
have
users.
Have
that
mindset
too.
C
You
know
it's
not
something
that
you're
directly
manipulating
you
know
and
in
tuning
except
in
extreme
cases,
so
I
actually
don't
really
care
like.
I
could
follow
that
same
theme.
Yes,
we
could
put
stuff
in
the
kernel
or
we
could
put
it
in
outer
space,
and
you
know
what
I
care
about.
The
most
is:
what's
the
operational,
you
know
kind
of
impact
of
all
that
and
when
the
user
is
maintaining
their
service
mesh
and
they're
operating
it
and
they
have
to
upgrade
it
or
like
there's
a
problem
and
they
have
to
trace
it
down.
C
You
know,
try
trace,
trace
it
to
its
its
root
cause
like
what
does
that
actually
entail,
and
so
far
the
sidecar
model
has
been
beautiful.
For
that
I
think
you
know.
In
my
opinion,
it's
been,
it's
been
a
really
nice
way
of
doing
that
and
it
ties
that
functionality
to
the
kind
of
the
your
mental
model
of
your
application.
Anyways
right,
like
you,
want
to
change
something
in
one
service.
C
While
you
change
it
on
that
service,
you
know
and
and
the
further
we
get
away
from
that
the
harder
it
is
for
me
personally
to
to
think
that
you
would
maintain
that
same
kind
of
operational
simplicity,
but
I
I'm
like
a
babe
in
the
woods
when
it
comes
to
these
discussions.
So
you
know
I'm
happy
to
learn.
A
If
anyone
has
any
questions
that
I'd
like
to
pose
to
the
panel,
please
do
stand
in
front
of
the
microphone
over
there.
While
you
make
your
way
there,
I'd
like
to
bring
back
into
the
discussion
and
say
that
kumar
and
kong's
mesh
play
based
on
envoy,
but
not
based
on
sdo,
you
have
the
benefit
of
having
seen
some
of
this
play
out
over
time.
You
have
the
benefit,
perhaps
of
the
project
having
come
up
in
a
world
where
ebpf
was
perhaps
a
nascent
possibility.
B: We really want people to have a simpler mesh, with the benefits of the service mesh capabilities that they know, because people already have some experience with Envoy; people have experience understanding it and getting a lot of things out of it. There are plenty of protocols supported, including some of the L4 protocols.
B
Some
people
running
like
more
and
more
like
workloads
like
mongodb
and
kafka
in
in
things,
and
some
of
the
things
that
no
one
actually
mentioned
yet,
and
I
probably
would
be
it's
a
very
unpopular
thing
and
running
like
your
production
workloads
on
windows,
and
I
think
for
yourself
exactly
so,
and
the
the
running
similar
like
same
experience
for
developers
in
in
the
mesh
in
regardless
on
operating
system
that
you're
running
in
production
and
that
the
flexibility
that
gives
us
like
a
side.
Car
capabilities
is
something
that
we're
really
getting
benefits.
B
Plus
we
simplified
a
we're
not
trying
to
abuse
crds
like
that
much
and
we
want
you
to
use
the
crds
only
for
the
things
that
are
really
important
to
configure.
So
that's
that's.
The
only
thing
like
kuma
tries
to
be
like
developer
friendly
and
put
a
lot
of
pointers
into.
You
know
how
you
can
get
simpler
mesh.
F: Yeah, I will publish the slides so you can see the specific differences. In terms of visibility, it makes a massive difference. I don't have the exact numbers in my head right now, but it was in the single-digit percentage range of overhead for in-kernel HTTP visibility, and the latency was 2x, 3x, 4x bigger for a proxy in the visibility case. We also measured the Cilium Envoy filter against the Istio Envoy filter.
F
Maybe
that's
an
unfair
comparison,
because
the
on
the
psyllium
on
white
filter
is
massively
simpler
compared
to
the
istio
on
y
filter,
in
that
in
that
environment,
or
in
that,
in
from
that
perspective,
there
is,
I
think,
a
feature,
feature
imbalance
there.
F: So I think for things like retries, circuit breaking — whenever it is about connection splicing or replaying traffic — I think the combination of eBPF and Envoy will be the answer. I think what Louis said is correct: it's about leveraging eBPF to inject Envoy better and quicker and faster, and not requiring this very expensive network-based injection of the sidecar proxy. And on what you'd lose — that was actually very, very accurate. I think a couple of years ago, the complexity of solving what's needed to make this happen would have been very hard.
H: Okay, I think I should have asked this at eBPF Day yesterday, but then: how would you compare Calico eBPF with Cilium?
F: Maybe not strictly a service-mesh-related question, but —
H
With
combination
of
you
know,
using
envoy
with
celium
versus.
A: All right. So, in the web browser we have the JavaScript runtime, and through a sequence of events we decided that we could basically re-implement a Turing-complete machine: dot, dot, dot — WebAssembly. So we now have a mechanism for running Doom, Quake, whatever you want, in the web browser, or probably doing some actual real work as well.
A
The
google
team
on
working
on
this
dear
envoy
especially
led
a
lot
of
work
to
add
support
for
web
assembly
into
the
envoy
proxy,
allowing
arbitrary
code
to
be
run,
giving
the
safety
taking
the
safety
models
of
the
webassembly
sandbox,
putting
that
inside
the
the
concept
of
the
proxy
so
put
that
aside
for
a
second,
we
have
the
colonel
and
that,
if
you're
going
to
ask
a
question,
I'm
going
to
demand
that
you
come
and
ask
ask
it
from
the
stage.
A
Please
put
us
that
aside
for
a
second
and
say,
we
now
have
these
points
in
the
kernel
where,
as
I
understand
and
thomas-
and
I
spoke
about
this
on
a
podcast
back
in
january,
there
are
certain
extension
points.
You
can
say
send
me
a
message
when
this
thing
happens,
and
there
are
a
certain
set
of
things
that
you
can
say.
Why?
A
Don't
we
get
to
a
point
where
we
can
run
a
webassembly-like
thing,
if
not
actually
webassembly
in
the
kernel,
and
we
can
implement
a
turing
complete
thing
to
what
uval
was
saying,
where
we're
able
to
arbitrarily
hook
anything,
and
we
can
rewrite
envoy
in
javascript
and
run
it
in
the
kernel
and
get
all
these
benefits
and
not
have
to
worry
about
the
arbitrary
split
between
we
can
do
certain
things
on
packets.
But
we
can't
do
them
on
streams.
F
I
think
that
discussion
is
actually
exactly
happening
with
rust
and
not
with
ebpf,
but
there
are
people
that
want
exactly
this,
like
ebpf
has
been
specifically
designed
to
not
be
able
to
crash
your
kernel
and
a
big
part
of
this
is
you
have
to
run
to
completion?
You
can
loop,
but
loop
needs
to
be
bounded.
F
It
means
that
whatever
program
can
run
as
an
ebpf
program
needs
to
be
safe,
needs
to
be
guaranteed
to
complete,
which
is
why
ebpf
on
its
own
is
not
enough
like
why
the
combination
of
envoy
and
ebpf
makes
sense.
It's
essentially
when
we
get
to
the
level
of
complexity
where
it's
not
possible.
Bpf
we
go
to
to
to
envoy
for
the
full
touring
complete
version
of
this
discussion
or
the
the
upstream
consensus
is
now
currently
leaning
towards
just
enabling
rust
in
the
linux
kernel.
But
that's
probably
a
couple
of
years
out.
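The run-to-completion rule Thomas mentions is enforced by the eBPF verifier when a program is loaded: recent kernels accept loops whose bounds the verifier can prove, while older ones required the loop to be unrolled. A hypothetical sketch (assumes clang -target bpf and the libbpf headers):

    // Hypothetical sketch: an XDP program containing a loop the verifier accepts.
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int bounded_loop(struct xdp_md *ctx)
    {
        __u64 sum = 0;

        /* Accepted: the trip count is a compile-time constant, so the verifier
         * can prove the program terminates. An unbounded `while (1)` would be
         * rejected at load time. */
        for (int i = 0; i < 64; i++)
            sum += i;

        bpf_printk("stub result: %llu", sum);  /* real logic would use maps */
        return XDP_PASS;                       /* let the packet continue */
    }

    char LICENSE[] SEC("license") = "GPL";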
A
So
I
might
just
pass
it
down
to
yuval
if
we
can,
but
the
my
understanding
of
the
rust
support
in
the
linux
kernel
is
basically
to
allow
you
to
write
parts
of
the
kernel
in
rust,
not
necessarily
to
arbitrarily
inject
rust
into
the
kernel
at
runtime.
Please,
if
that's
not
correct
like
please
tell
me
what
okay,
so
so
that
that's
fine,
but
again
that
comes
down
to
installing
a
kernel.
Module
re
recompiling,
your
own
kernel,
perhaps
so,
that's
not
necessarily
as
simple
as
upload
a
module
like
we
might
expect
today,
like
uval.
D: There is a way, definitely. You just want to guarantee that — yeah, sorry, oh, closer — yeah, so you want to guarantee that a certain program doesn't bring the whole kernel to a halt, right? So you need to find a way, for example with WebAssembly, to help it terminate while still running it at native speeds, which means that you'll have to instrument the WebAssembly so that the program itself will stop, whether or not somebody has structured it in such a way. It's actually not horribly hard to do.
D: We could potentially provide a budget for a WebAssembly program to run, and once it exceeds this budget, stop it, return an error, and have some semantics around what happens in the case of it running out of gas. And to do that is not that hard, conceptually: you have to instrument the WebAssembly program and inject opcodes in the cases where it can recurse and it can loop, but those cases are pretty limited as far as WebAssembly goes. I've seen some papers around it on the internet.
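A toy illustration of the budget, or "gas," idea Yuval describes — a made-up three-opcode interpreter, not real WebAssembly instrumentation, where in practice the metering code is injected at loop back-edges and calls:

    /* Hypothetical sketch: each step charges fuel, and execution traps instead
     * of hanging once the budget is exhausted. */
    #include <stdio.h>
    #include <stddef.h>

    enum op { OP_NOP, OP_JUMP_BACK, OP_HALT };

    static int run(const enum op *prog, long fuel)
    {
        size_t pc = 0;

        for (;;) {
            if (--fuel < 0)
                return -1;                      /* out of gas: abort the guest */
            switch (prog[pc]) {
            case OP_NOP:       pc++;   break;
            case OP_JUMP_BACK: pc = 0; break;   /* an otherwise infinite loop */
            case OP_HALT:      return 0;
            }
        }
    }

    int main(void)
    {
        enum op looping[] = { OP_NOP, OP_JUMP_BACK };
        printf("looping program: %d\n", run(looping, 1000));  /* prints -1 */
        return 0;
    }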
A: Yeah. So I'm interested, obviously, in the fact that a lot of people are moving stuff out of the kernel — a lot of things, network processing and so on. Packet processing is being offloaded to specific hardware, where there are user-space programs that are able to access it. A lot of the conversation that we're having here is, effectively: we need to move things back into the kernel in order to get things sped up. Is this the right direction? Is there a way — like we talked before about Kubernetes needing to support running sidecars — is there a way that we, as a community, can petition the Linux developers, and Thomas, your friends in that community, to solve this problem in such a way that we don't have to think about it so much as moving things into the kernel, but just making the things we're running in our sidecar model run quicker?
F: I think this is an interesting topic. We see a massive shift from user-space processing back into the kernel, and the reason is containers. Virtual machines were essentially machines, and it didn't really matter whether it was the kernel or user space doing whatever processing was required. With containers, applications directly interface with the kernel, and packet data needs to go through the kernel; there is no additional operating system running inside, as in the virtual machine model. This is why eBPF is so interesting: it's directly integrated into the kernel. That's also the difference between eBPF and other languages like WebAssembly — eBPF is specifically for the Linux, and now Windows, kernel, and its main value is that it can interact with the kernel, with the operating system, so it can take shortcuts in its processing. It does not have to hand data off the way that, for example, VPP or other DPDK-based applications do: yes, they do processing in user space, but then, in order to deliver that data into the application, you either have to change the applications — which most users are not willing to do — or you have to go back through the kernel. And because of the rise of containers, we're seeing more and more processing essentially go back into the kernel.
E: Yes — I mean, I don't know, right? And it's the degree to which some functionality will live in eBPF. I think eBPF has done an excellent job in accelerating some of these integration points and providing lower-level hooks for certain types of things. Maybe you'd see something like DPDK used for things like middleboxes, where you never have to go back into the kernel; but certainly, if you're going back into the application space, you're going to go back through the operating system, because that's what all the applications are targeting. It's really when you look at the functionality that you provide — L3, L4, L7 — that there's going to be a sweet spot in terms of management and updatability and maintainability and platform coverage. That's what's going to determine it, and eBPF is certainly moving the needle somewhat towards the kernel. But there are disagreements about how far that can go before you're going to hit limits, and people are going to have issues with maintenance cycles or other types of tenancy issues.
I: So, a quick question about debuggability. One of the things that is anyway not easy with Envoy is basically figuring out where the problem is. Now think about that if we're taking a lot of that functionality into the kernel — how are we going to do that? How easy is it going to be to debug a problem?
C: Yeah, I mean, I feel like I'm just going to sound like a broken record. It's exciting to have these conversations, but the stuff that I think is really important is: what's the effect on the end user, and what's the operational burden that we're asking them to take on now?
A
I
can
I
can
sort
of
twist
the
question
a
little
bit.
If
you
would,
we
have
in
a
group
like
this,
we
sort
of
represent
a
percentage
of
people
who
are
end
users
and
care
about
it,
but
then
we
also
have
a
percentage
of
people
who
are
building
the
various
technologies
out
so
setting
the
user
part
aside
for
a
second
like
there
are
benefits
that
thomas
talked
about,
especially
just
to
shorten
the
data
path
between
two
different
processes
using
ebpf.
A
Is
that
something
for
for
you
and
then
for
vic
in
terms
of
kumar?
Is
that
something
that
is
a
win
for
you
to
build
into
the
application
such
that
the
users
don't
need
to
see
it?
And
if
so,
do
you
see
linker
d,
two
point
whatever
the
next
one
is
supporting
this
out
of
the
box
or
is
there
something?
That's
that's
stopping
it
being
a
quick
win
in
that
regard?.
B: I think someone mentioned it — the hybrid mode is the win. So that's something that we're looking into implementing in Kuma: for example, replacing the way we're currently handling the network traffic, which right now is through iptables. We're looking to use this eBPF functionality to potentially replace that, and again to make it invisible for people: if they want to use it, the model would be just a configuration switch to allow them to, you know, for compatibility reasons. So I don't believe that everything would just be replaced with one thing, and operability and user experience are the first thing, rather than performance. I might sound like the clueless person here, but — can we have bigger machines, or spend a little bit more money on the cloud, that type of thing? I'm a cloud vendor, so.
B
In
the
past
there
would
be
there
would
be
kind
of
numbers
that
even
developers
need
to
know
like
there's
a
chart
like
how
your
you
know,
the
performance
goes
and
perform
like
throughput
goes
down
and
you
latency
grows
from
the
processor
to
network
to
to
to
distribute
things.
Now,
it's
just
like
one
credit
card
swipe
and
you
have
a
bigger
machine
to
calculate
your
stuff.
After
that
you
can
kill
this
machine
and
just
pay
for
it
for
the
task.
It's
it's
a
practical
choice.
A
Just
yeah
bill
bill
all
your
vms
to
to
vic
hill
sort
of
that,
for
you
well
just
just
quickly
to
follow
up
on
on
its
question,
specifically-
maybe
maybe
thomas,
maybe
louis,
like
the
if
everything
else
can
be
held
the
same
like
in
the
case
of
ebpf,
the
more
we
move
in,
we
want
to
be
able
to
tell
when
things
go
wrong.
A
Is
there
a
concern
that
a
user
who
might
be
debugging
an
application,
isn't
necessarily
able
to
get
access
to
because
they're
no
longer
dealing
with
a
process,
that's
inside
their
own
container,
their
own
namespace?
Perhaps
some
of
this
might
run
in
the
c
group
they
control,
but
do
I
have
the
same
visibility
into
the
kernel
with
the
the
model
we're
talking
about
here,
as
I
had
in
the
past,
and
am
I
able
to
debug
my
own
application?
Absolutely.
F: Absolutely. I would actually turn it around: it's actually an opportunity to provide even better visibility. Monitoring, performance troubleshooting, observability — that has been a main driver of eBPF. eBPF has been primarily used for perf, for Linux performance benchmarking and monitoring. We have a lot of experience in building a networking layer with eBPF, and we have built massive observability and troubleshooting capabilities in. So I would actually turn it around and say it's an opportunity to provide better visibility at the lower levels, which a project like Kuma can then leverage to provide a great end-user experience.
F
I'm
a
kernel
developer,
I'm
not
especially
good
with
ux,
but
I'm
really
good
at
providing
like
the
low
level,
visibility
and
and
the
intro
and
introspection
that
is
required
for
troubleshooting,
because
running
at
scale
such
as
psyllium
clusters
or
running
observability.
Monitoring
metrics
is
absolutely
essential,
and
that
goes
all
the
way
into
the
service
mesh.
Of
course,.
D: Maybe just a follow-up question. I know Cilium has the data plane implemented in eBPF. If you have a problem there — you know, today, when I do iptables, I can add an iptables LOG everywhere until I figure out which rule is my problem. How would you go about that type of debugging with eBPF?
F
Take
a
kernel
system:
no,
no,
there's
tooling,
like
cli,
tooling,
observable
dashboards,
everything.
Similarly,
and
usually
it's
actually
on
a
higher
level
intent.
It
would
take
kubernetes
metadata
into
into
account
very
similar.
It's
usually
not
a
dump
of
100
000
ip
tables
rules,
which
you
sometimes
get
and
nobody
likes.
It's
usually
like,
I
think,
more
more
abstracted,
so
no
user
has
to
read
ebpf
bytecode.
You
don't
even
need
to
understand
ebpf
programs,
it's
an
implementation
detail
that
gives
opportunity.
E: You know, if you're looking in logs, we're probably not helping you and we're not doing a good job, and you should probably fire us. You should be looking at the tooling, and the integrations in the tooling, for the protocol and the application type and all those types of things. If we're talking about syslog dumps and the like — yeah, we should just stop. So maybe we should take the next question.
J
I'm
a
bit
stuffed
there.
We
go
so
I'm
really
interested
in
your
opinion
about
the
adoption
of
service
mesh,
because
we're
here
talking
about
ebpf
and
kernel,
not
kernel.
In
my
opinion,
performance
is
not
the
inhibitor
for
adoption
for
service
mesh.
So
what
is
the
inhibitor
for
adoption
for
wider
adoption
of
service
mesh?
In
your
opinion,
like?
What
do
we
need
to
do
to
make
service
measures
more
widely
adopted.
F: So I think, actually — we've done a variety of surveys. When we launched Cilium service mesh we asked: what do you want us to do, what is your motivation? The main ask was: please, no sidecars. Why? Complexity. It's not performance, right — it's great to show benchmarks, and yes, performance is always better. I think what William said is 100% correct: same complexity, same values, overall performance is better. But our main motivation to get rid of sidecars is actually not necessarily performance but getting to a simpler model.
E
I
would
agree
tco,
although
you
know
he
and
I
have
slightly
different
opinions
about-
maybe
how
to
go
about
it,
but
I
think
we
generally
would
agree
with
that
point.
Market.
Confusion
is
probably
not
helping,
you
know
just
being
honest
about
it
and
you
know
getting
to
a
standard
api.
That
most
people
here
could
agree
is
a
good
api.
E
I
think
there's
an
opportunity
in
this
space
right
now.
You
know,
I
think
the
kubernetes
gateway
apis
and
that
specification
right
are
a
good
set
of
apis
for
traffic
management
and
they
are
applicable
to
the
service
mesh
use
case.
So
you
know
it's
it's
my
intention
to
kind
of
foster
that
I
think
that's
a
good
thing
for
the
community
and
I
think
they,
you
know
the
the
establishment
of
that
under
the
umbrella
of
kubernetes
would
actually
be
helpful
here.
C: Yeah. So, what's blocking service mesh adoption? The CNCF released a micro-survey this very morning, so I would encourage you to check it out — it's a service mesh micro-survey that asks people exactly that question, and I'm not going to tell you what the answer is; you're going to have to go look at it. Just don't look at the graphs. I'll just say, again, I agree that complexity is a big issue. Whether that's a real issue or a perceived issue, I think, is a little blurrier these days. I agree zero percent that sidecars are the fundamental source of the complexity. The sidecar model, again — it's a beautiful, elegant model, and there's tooling that can help. There are some busted parts of it that kind of suck, but those are not fatal flaws. I think the model is a really nice model.
B: It's a very hard spot to be in, because so many good opinions were shared. I think what we can do better is just to alleviate the confusion around this. Developers hate magic — well, they love to use magic, they love to use technology that looks like magic, but they hate when they need to deal with it, and especially when they need to debug something at 4 a.m. in the morning.
B
So
that's
responsibility
of
like
my
personal
responsibility,
to
to
provide
more
more
knowledge
around
the
the
things
and
what
they
should
put
an
application
code
and
what
they
should
use
from
infrastructure.
So,
in
my
that's
probably
my
my
final
thoughts,
yeah
just
elevate
confusion
and
just
like,
let
make
it
less
magic
and
you
know
allow
people
to
to
use
this
technology.
K: Thank you so much to our panelists, and Craig, good job moderating this. So I'd like to take some time to thank everyone from the program committee — please stand up if you're on the program committee; I believe Viktor and Idit and Craig. Thank you so much for making sure we have a wonderful program.
K
Thank
you
for
all
of
our
sponsors
and
thank
you
most
of
you
for
attending
servicemeshcon.
Without
you,
you
know
this
has
been,
I
guess
the
best
conference
I've
ever
had
since
covet
so
really
really
exciting
to
be
on
the
stage
and
also
talking
to
everyone.
You
know
honestly
to
me,
as
as
somebody
sitting
in
the
conference,
the
biggest
takeaway
I
have
is.
I
think
we
are
pretty
confused
at
the
market
right
now,
right.
Listening
to
the
debate
of
evpf
service,
mesh
psychiatrists
and
all
the
projects
we
are
seeing
in
the
ecosystem.
K
With
you
know,
qmi
ico,
linker
d
and
console
connect
it's
and
now
cilia
service
mesh.
You
know
what
I
really
hope
is
next
time
when
we
get
together
at
servicemashcon.
I
really
hope
you
know
we
can
bring
some
clarity
to
our
user.
You
know
so
that
we
can
be
less
confused
about
the
market.
We
can
be
less
confused
about
the
architecture
of
service
match.
Well,
there
would
be
a
little
bit
more
agreement
among
some
of
our
industry
leaders.
K
With
that.
I
want
to
thank
you
again.
I
believe
you
all
have
the
ticket
for
the
drinks.
I'm
actually
not
sure
where
the
drinks
will
be,
but
I
think
it's
somewhere
outside
so
enjoy
our
kubecon
tomorrow
and
enjoy
the
drinks
this
evening
and
if
you
haven't
take
any
of
the
sponsors
events.
I
believe
there
are
some
events.