From YouTube: Application Networking Day Session #6: Sidecarless service mesh. No local TCP-IP stack. What’s left?
Description
Featuring Kevin Dorosh.
I'm not the first one to talk about ambient mesh or eBPF today, but we're going to take this opportunity to dive a little deeper into some of the internals behind how they work, partly for curiosity's sake and also to get an idea of the future direction of the project.
So yep, you've got me today. A little bit about me: I've been at Solo.io for a handful of years now. I worked on our Edge API Gateway, built on top of Envoy proxy, then did some work championing a project internally, building a GraphQL filter into Envoy for our platform. Most recently, as Lynn mentioned, I've done work on ambient mesh, in particular testing support for EKS, so I've been very much involved on both the product and the technical side of things.
So, just to get into it: a lot of the stuff we're going to talk about today builds on these technologies. I imagine, with all of us sitting here today, we've already heard of them, but here's a quick recap just to make sure we're on the same page.
First I'll start with iptables. iptables is a user-space Linux program that lets you configure kernel-level firewall rules. It's the technology that underpins Kubernetes itself: the ClusterIP Kubernetes Service uses it to create those virtual cluster IPs that can be reached under the covers. In Istio it's often used to handle the redirect, so that when pod A tries to talk to pod B you still use the original pod address, but the traffic gets redirected to the sidecar, on both the source and the destination, using iptables. That's iptables.
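As a rough illustration, the redirect boils down to NAT rules of roughly this shape. This is my own simplified sketch, not Istio's exact rule set (the real init program builds fuller chains, and the ports and UID shown are the conventional sidecar defaults); the `ipt` wrapper just echoes each command so this reads as a dry run:

```shell
#!/bin/sh
# Dry-run wrapper: print the iptables commands instead of applying them,
# since writing real rules needs root inside a pod network namespace.
ipt() { echo iptables "$@"; }

# Inbound: traffic arriving at the pod is redirected to the proxy's
# inbound port (15006 in stock sidecar installs).
ipt -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15006

# Outbound: traffic the app sends is redirected to the proxy's outbound
# port (15001), skipping traffic from the proxy's own user ID (1337)
# so the proxy's upstream connections aren't looped back into itself.
ipt -t nat -A OUTPUT -p tcp -m owner ! --uid-owner 1337 \
    -j REDIRECT --to-ports 15001
```

The key point is the one above: the application still dials pod B's real address, and the NAT rules quietly swing the connection through the local proxy on both ends.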
eBPF is a different technology used in a very similar way. eBPF is a set of Berkeley Packet Filter extensions: a bunch of extension hooks in the operating-system kernel that let you inject your own custom logic into the kernel itself. Obviously the kernel is a very critical piece of the system, sitting over all of your hardware, so to make sure this is safe there are a lot of restrictions built into eBPF; for one, it's not a Turing-complete language.
We'll talk very briefly, at a high level, about sidecar acceleration, but eBPF does allow us to map socket-to-socket communications, skip things like iptables, and get more performance out of our mesh. So that's the high-level background on iptables and eBPF.
We literally just heard about ambient in the last talk, so I don't have to go too deep into ambient mesh and the sidecarless architecture, but as I mentioned before, Istio today, the traditional model, is the sidecar model.
Every single microservice in your cluster gets an Envoy sidecar, and those sidecars do the communication and get their own certs. There are optimizations being built in, and those optimizations are the sidecarless architecture; most of the resource-utilization gains come from how the proxies are allocated. The fundamental change is that we're separating L4 and L7 into different proxies. The usual value proposition of the service mesh is zero-trust communication, and mTLS node to node is all just L4 communication.
So we can put all of that in the ztunnel proxies. On the left you'll see the sidecar model, where you have a bunch of Envoys and those Envoys handle both L4 and L7. On the right you see the two different modes. The top one is just ztunnel to ztunnel: all those containers share that per-node ztunnel proxy, which today is Envoy, and it talks to the Envoy on the other ztunnel. But it's only doing L4.
L7, like HTTP parsing, is the most expensive part of communication in the mesh, so the optimization here is simply not doing it, and we get a lot of performance gains. In fact, at the bottom, when we talk about the waypoint proxy, we split L4 and L7.
If you want to apply L7 policy in ambient, that policy is enforced in the waypoint proxy, and the waypoint proxy is not necessarily the one that runs on your node: it runs per namespace or per service account, and it's the one that enforces authentication, header-based routing, anything at the L7 layer. An astute observation would be that the waypoint proxy might not live on the node at all.
With sidecars, L7 processing happened once on the sidecar going out and once on the sidecar coming in; if we instead do it just once in an external waypoint, it might actually be quicker. We haven't iterated that far, and ambient isn't that mature yet, but it's theorized that it could be, because we only do L7 once. So that's kind of the future vision of ambient: where we are today and where it's headed.
But yes, this all works under the covers with those same iptables and eBPF mechanisms we were just discussing. The way those technologies are implemented, the original source and destination IPs are preserved, which is important because it allows ambient Istio, or just upstream Istio, to work with Cilium or other network policy acting on the original IPs.
Okay, moving quickly to eBPF acceleration. I mentioned this briefly already, but it's another recent optimization in the service-mesh space to optimize connections.
The idea is to bypass the TCP/IP stack, which I think is best demonstrated by this diagram, which I hope is big enough. On the left you see the current sidecar (or even non-sidecar) model: if you use iptables, you go through the socket and run the iptables rules on the way in and out when redirecting to yourself. Instead, you could just map socket to socket, because we're not doing anything to manipulate the request; we just want it to go elsewhere.
First, eBPF acceleration has some limitations. Number one, it requires Linux kernel 5.7 or newer, so you have to make sure your infrastructure supports it, and you have to get it approved within your organization's security posture. The good news is that we can fall back from eBPF acceleration to iptables; I'll talk about that more in a second.
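That fallback decision can be sketched with a kernel-version check like the one below. This is my own illustration of the idea, not the mesh's actual probe; the only fact carried over from the talk is the 5.7 threshold:

```shell
#!/bin/sh
# Return success if the given kernel release string (as from `uname -r`)
# is at least 5.7, the minimum for the eBPF socket-redirect path.
supports_ebpf() {
    major="${1%%.*}"                 # "6.1.0-13-amd64" -> "6"
    rest="${1#*.}"
    minor="${rest%%.*}"              # -> "1"
    minor="${minor%%[!0-9]*}"        # strip any non-numeric suffix
    [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 7 ]; }
}

# Pick the redirect mechanism for this node.
if supports_ebpf "$(uname -r)"; then
    echo "ebpf"                      # kernel new enough: accelerate
else
    echo "iptables"                  # fall back to iptables redirect
fi
```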
So now I want to tie this into the cloud: how things are set up today, and again the move towards the future. We mentioned iptables. When you install Istio today, in either the sidecar or the ambient mesh architecture, you usually get an istio-init container that starts up on every pod, and that init container writes these iptables rules into the pod's networking namespace and handles the redirect for you.
This requires elevated permissions; iptables is a program you don't want just anyone to be able to run. To support an alternative that doesn't require elevated pod permissions, even in an init container, Istio added support for istio-cni, which can replace the init container.
Perhaps I'll talk a little bit about CNI for those who haven't heard of it before. It's the Container Network Interface, an abstraction built into Kubernetes itself; it's a hook that lets you set up the network for pods to communicate with each other. So whether you're on EKS or GKE or any of these other platforms, every time a pod starts up, a call goes out to the CNI.
There are three parts to the spec: literally just add, delete, and check. You create the pod's network setup in its namespace (which here basically means creating iptables rules), you can delete it, and you can check that what you asserted is there and exists. But this is all implementation-specific: GKE has its own CNI, EKS has its own CNI, and they're implemented differently.
That means Istio has to build on top of that. CNIs can be chained; they can act like plugins, arbitrary binaries that you can execute as part of this interface. So again, as part of Kubernetes, a pod spins up, the CNI gets called, the initial networking namespace is set up with, say, the EKS CNI or the GKE CNI, and then istio-cni, chained on top of it, can also write the iptables rules there.
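A chained setup like that is expressed in the node's CNI config list. Here's a hedged, trimmed example of what such a `.conflist` might look like; the first plugin stands in for whatever platform CNI wires up the pod's interface, and the field values are illustrative rather than copied from any one platform:

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "plugins": [
    { "type": "platform-cni" },
    { "type": "istio-cni", "kubeconfig": "/etc/cni/net.d/istio-cni-kubeconfig" }
  ]
}
```

The runtime invokes each plugin in order for add, delete, and check, so the platform plugin sets up the interface and IP first, and istio-cni then adds the Istio redirect rules in the same pod network namespace.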
That's the quick version. Additionally, Merbridge is a project out there for Istio that can do the same redirect that iptables does, as a CNI, but using eBPF.
There are some limitations with Merbridge today. I believe most of the Istio feature set is supported, but there are some corner cases; I think some things with IPv6 don't work exactly as you'd expect. Building on that, each CNI needs to be validated so you know it works with the others, because the iptables rules are selecting certain devices in your pod networking namespace, and sometimes you need to rewrite the rules depending on the CNI.
That's exactly why testing ambient on EKS and GKE was important and different: their iptables rules aren't identical, and there is other platform-specific logic built into Istio today that will need to be updated for ambient in the future, which is what we're going to follow up on right now.
Okay, so yes, what's left? What future work is there for ambient? It seems like we've got ambient mesh and we're at the point of optimizing, trying to cut back resources. Things work, maybe not ideally, but we're optimizing. eBPF feels like a really extreme optimization; if we're already at the point of optimizing, are we done?
No, we've got a lot left to do. Before I get into one of the parts of Istio we want to change for ambient, I want to talk a bit about certificate management and how it works today. By default, mTLS with Istio uses istiod as a CA.
You can replace this with an external CA: there's an API in Kubernetes where you can add an extension hook, and istiod will delegate the signing to an external cert-manager, for example. That's the right side of the diagram; the left is the default Istio setup. If we move towards ambient, it should really support all of these modes, including istiod itself as a CA, which is what we're going to get into.
Also, the way the istiod self-signed CA works today is that it uses the TokenRequest and TokenReview APIs in Kubernetes. For those who haven't heard of these before: for a specific service account, you can request a JWT that proves to Kubernetes and to others that you are that service account, and you can then get that token reviewed. In the diagram at the bottom, this is how Istio uses the API: there's Envoy, with the istio-agent alongside it.
When it wants a new cert, the istio-agent goes ahead and makes a TokenRequest to the Kubernetes API server: "I am microservice foo, I am service account foo, I want a JWT that proves I am foo." Then, when it wants a cert signed, it generates a key locally, keeps the private key, and sends the public part to istiod to be signed, along with the JWT.
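That JWT is just three dot-joined encoded sections. To make the shape concrete, here's a toy token assembled and decoded locally; the signature is fake, the audience name is invented, and I'm using plain base64 for simplicity where real tokens use unpadded base64url and are signed by the API server:

```shell
#!/bin/sh
# Build a toy JWT whose subject claim has the shape of a Kubernetes
# service-account identity, then decode the payload back out.
header='{"alg":"RS256","typ":"JWT"}'
payload='{"sub":"system:serviceaccount:default:foo","aud":["istio-ca"]}'

# Encode a string as single-line base64 (base64 wraps long output).
b64() { printf '%s' "$1" | base64 | tr -d '\n'; }

jwt="$(b64 "$header").$(b64 "$payload").fake-signature"

# istiod would submit this token for TokenReview; here we just decode
# the middle section to show the claimed identity.
printf '%s' "$jwt" | cut -d. -f2 | base64 -d
```

The `sub` claim is what ties the token to the service account, which is exactly what istiod checks before signing the cert.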
The JWT says "I'm proving that I am foo." istiod will then get that token reviewed by the API server and sign the cert. The important thing to note here is that the istio-agent is per pod: in the sidecar model there is one istio-agent per pod. So if we think about certificate management in ambient, we're changing the model a little bit. When Istio was initially built, there was an implicit assumption:
there's one service account, for one Envoy, for one mTLS cert that gets signed by istiod in a default installation. On the right you'll see that today the ztunnel proxy (not the waypoint) is per node, so it's fronting multiple tenants; when it makes requests to istiod, it might want mTLS certs for either service account app-foo or app-bar.
It really needs both, and this is kind of a fundamental change from how Istio worked before. The way ambient works today is that it impersonates the kubelet, because the security posture of the ztunnel is basically the same as that of the kubelet: the kubelet has domain over everything on its node, and so does the ztunnel, which can proxy anything on its node. The security posture is relatively the same, and impersonation is supported in that TokenReview/TokenRequest API.
Now the problem, and we're getting back into platform-specific things, is that impersonating the kubelet is possible on GKE because of the way the kubelet is configured there (it's owned by GKE), but on EKS the kubelet uses IAM for authentication, and we can't just impersonate it. So there need to be changes to the istiod certificate model so that we can support this, and that is future, up-and-coming work. Right now it works, and I'll talk about how it works right now.
It works on EKS today because the ztunnel has admin permissions, but that's not the long-term solution; we're going to limit those permissions. The way we can do that is for the ztunnel's istio-agent to make requests that send two SPIFFE IDs. SPIFFE is the technology behind the certificate request and approval from istiod. So instead of just saying
"I am service account foo, here's proof, and I want this cert signed," it's going to say "I am the ztunnel, and I want a cert for service account foo." istiod will verify that it is the ztunnel, and it will also verify that the service account it's asking for exists on that node. So istiod is going to have to be updated a bit to watch the pods on each node, so that when the ztunnel's request comes in and is verified, it can issue the cert rather than relying on impersonation, which isn't really cross-platform.
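For reference, the identities being exchanged here are SPIFFE IDs, which in Istio follow the `spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>` shape. A tiny helper just to show the format; `cluster.local` is Istio's default trust domain, and the workload names are made up:

```shell
#!/bin/sh
# Build the SPIFFE ID Istio derives for a workload's service account.
spiffe_id() {
    trust_domain="$1"; namespace="$2"; service_account="$3"
    printf 'spiffe://%s/ns/%s/sa/%s\n' \
        "$trust_domain" "$namespace" "$service_account"
}

# The ztunnel's own identity, then the workload identity it requests:
spiffe_id cluster.local istio-system ztunnel
spiffe_id cluster.local default foo
```

In the two-ID flow described above, the first ID authenticates the ztunnel itself and the second names the service account it wants a cert for.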
So again, you can test all these things and they work, but ambient is still maturing. We went deep into one technical thing, but what else is left? Ambient is an improvement; we're benchmarking and already seeing significant gains, and a lot of the remaining work is getting things vetted and production ready.
For the future of ambient and eBPF in the cloud, let's start with HTTP/3. The ztunnel-to-ztunnel communication is what's been termed HBONE: an HTTP-based overlay network, implemented using HTTP CONNECT, HTTP/2 in particular. Right now all the ztunnels are Envoy; when Envoy gets support for HTTP/3 CONNECT, which is MASQUE, we'll get additional performance benefits, since HTTP/3 is better in networks that drop more packets.
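To make "HTTP-based overlay" concrete: tunneling over CONNECT means wrapping the raw byte stream in an HTTP request whose target is the destination workload. The sketch below prints the HTTP/1.1 text form purely for readability; HBONE actually carries this as an HTTP/2 CONNECT stream between ztunnels over mTLS, and the destination address shown is invented:

```shell
#!/bin/sh
# Print the logical shape of a CONNECT-style tunnel request: everything
# after the blank line would be the raw application bytes being proxied.
hbone_connect() {
    dest_ip="$1"; dest_port="$2"
    printf 'CONNECT %s:%s HTTP/1.1\r\n' "$dest_ip" "$dest_port"
    printf 'Host: %s:%s\r\n\r\n' "$dest_ip" "$dest_port"
}

hbone_connect 10.0.0.12 8080
```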
If you think about the progression (I think there's a talk later today about HTTP/1, 2, and 3, so we'll get really deep into it there): from 1 to 2, the big change was multiplexing, so you can multiplex one connection. From 2 to 3, moving from TCP to UDP so the connection doesn't get stuck on a single dropped packet is the big improvement, meaning packet loss is less painful. That's what's coming to ztunnel.
Secondly, and I think Will touched on this right before, the ztunnel is moving to Rust; currently it's Envoy. I think the benchmarks will also be covered, and they look a lot better, RAM usage in particular: at 20k pods, RAM usage was roughly 20 times higher with the Envoy-based ztunnel than with the Rust one.
The reason why is that the abstractions Envoy has built in are not great for what the ztunnel is trying to do. The way it's implemented in Envoy today, the ztunnel has an internal listener that routes back to itself so that it can tunnel out, and that means it ends up with something like a cluster per cert; the really short version is that it's an n-squared configuration problem.
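A toy way to see the n-squared problem (the counts are invented purely to show scaling, not measured): if the config needs roughly one object per source/destination identity pair, it grows with the square of the pod count, where a flat per-workload API grows linearly:

```shell
#!/bin/sh
# Compare quadratic vs linear growth of configuration objects.
config_objects() {
    pods="$1"; model="$2"
    case "$model" in
        pairwise) echo $((pods * pods)) ;;  # ~one object per identity pair
        flat)     echo "$pods" ;;           # ~one object per workload
    esac
}

for n in 10 100 1000; do
    echo "$n pods: pairwise=$(config_objects "$n" pairwise) flat=$(config_objects "$n" flat)"
done
```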
You add a ton of pods and the configuration gets really large. But the ztunnel will be different: the Rust proxy will have a different API, which will simply resolve that.
Next, updating the Istio CA for platform-specific needs: I already mentioned a bit about certificate management in Istio today, so that means platform updates for things like EKS, but also that new xDS ztunnel API. And last, optimizing the eBPF redirect: most of the ambient iterations today are making use of iptables.
There's some groundwork being done on ambient eBPF, but those redirect rules are different from the non-ambient ones, so they still need to be fleshed out. And with that, I think I've covered everything.