Description
Featuring Will McKinley and Marino Wijay. Istio's new ambient feature, a sidecar-less operational mode, makes service mesh a first-class citizen of the cloud-native platform. What this means for your services is a frictionless entry into automatically providing zero-trust networking, with minimal operational burden on the developer. In this talk, we take a look at resource usage, detail the differences between allocation and utilization, and discuss how best to optimize for cost when using ambient.
The cost of resources that you might save with ambient mesh. First off, I want to give a little shout-out to my colleague, who couldn't be here today because his son was sick last night. He was actually here at the Rejekts conference and had to fly home, but he'll be here later today, and hopefully you can run into him. If you do, he's a great guy; definitely talk to him.

As for myself: I'm Will McKinley, a director here at Solo on the field engineering team. Also, a thank-you to Christian and Greg, who were recently on a livestream talking about this subject. They did all the research here, so I'm just here to talk about it, but definitely go and take a listen to that YouTube video; it's quite good. You'll get info straight from the people that worked on this.

Ambient mesh really is just a separation of L4 traffic and L7 traffic into a couple of different proxies. We're going to look at the ztunnel, which handles your L4 traffic, and then something called a waypoint proxy.

So if you're looking at just deploying a bunch of microservices, and you want those microservices to be secured identity-wise within Istio, in the traditional way that you would see with SPIFFE IDs, then the ztunnel will work for you, and you'll see through this talk how beneficial that will be. Now, if you are doing more advanced L7 traffic management between your services, we'll talk about that as well; you get all the richness of that, with a lot more flexibility, with ambient.

So, looking at how ambient works with the ztunnel: the ztunnel is deployed as a node-based proxy. So going from a service on node one to node two, when they need to call through to each other, there's service identity that's transferred through mTLS between each ztunnel.

Now the waypoint proxy, obviously, is going to give you the advanced functionality above what you're seeing with the ztunnel. We'll talk about how you deploy those; it gives you a lot of configurability in terms of deployment architectures, and we'll show you how we did this within our testing.

So there are some resource implications of taking a sidecar-based approach. It's not that it isn't useful, because it is a very uniform approach. But think about the traditional Bookinfo application that comes with Istio: it's a polyglot, microservices-based application. If you wanted to scale it up to handle a bunch of traffic, what you would need to do is determine the number of replicas to run for each of these back-end services.

You can quickly see that if you're scaling up 3x in this situation, where I've got four applications and one of them has three different versions, then on the left-hand side I've got six proxies; when you scale that up, you're going to have 18, assuming you're scaling up in a uniform way.

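A sketch of that arithmetic, using the Bookinfo deployment counts from the example above (in sidecar mode, the proxy count simply tracks the total pod count):

```python
# Bookinfo: four applications, one of which (reviews) has three versions,
# so six deployments in total. In sidecar mode every pod carries its own
# Envoy proxy, so proxies scale linearly with replicas.
bookinfo_deployments = {
    "productpage": 1,  # one version
    "details": 1,
    "ratings": 1,
    "reviews": 3,      # three versions: v1, v2, v3
}

def sidecar_proxies(replicas_per_deployment: int) -> int:
    """Total sidecar proxies = total pods across all deployments."""
    return sum(bookinfo_deployments.values()) * replicas_per_deployment

print(sidecar_proxies(1))  # 6 proxies at one replica each
print(sidecar_proxies(3))  # 18 proxies when scaled up 3x
```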
On the other hand, looking at this same application in ambient mode, what you see is the ztunnel as a node-based proxy, and then the waypoint proxy. Let's assume that you're only doing L7 for one of the services: maybe between reviews and ratings you're going to do some kind of traffic management, maybe circuit breakers or retries, so you need L7 there. In that case, you can separate the deployment strategies: scaling up the application, which may not be as performant, independently from that L7 proxy, the waypoint proxy.

You may only need one waypoint, seeing that it's based on service identity; so that gives you, I think, a lot better functionality and configurability.

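To make the contrast concrete, here's a sketch of the proxy-count arithmetic; the three-node count and the single waypoint are assumptions matching the example above, not a general rule:

```python
# Sidecar mode: one Envoy proxy per application pod.
def sidecar_mode_proxies(total_pods: int) -> int:
    return total_pods

# Ambient mode: one ztunnel per node (L4 for everything on that node),
# plus waypoint proxies only for the services that actually need L7.
def ambient_mode_proxies(nodes: int, waypoints: int) -> int:
    return nodes + waypoints

# 6 Bookinfo pods scaled 3x = 18 pods, on an assumed 3-node cluster,
# with a single waypoint for the one service that needs L7.
print(sidecar_mode_proxies(18))    # 18 proxies
print(ambient_mode_proxies(3, 1))  # 4 proxies
```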
So let's just talk about how we ran these tests, and how we got the graphs that you're going to see towards the end of this talk.

First of all, we wanted an application that we weren't fighting against when running load against it. Bookinfo, unfortunately, is not a great example of something that you can run load against, because it's not terribly responsive. So what we did instead was to use the httpbin application: a very lightweight Python-based server that just responds back with whatever request headers and body you sent it.

We deployed three different versions of this, and then we used the Istio load-testing tool to run load against all of them. Each of these versions had 10 replicas within the cluster. The cluster is a three-node cluster; it has eight CPUs and 24 GB of memory per node.

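A quick sketch of the test-bed totals implied by those numbers (all inputs are from the setup described above):

```python
# Workload: three httpbin versions, ten replicas each.
versions, replicas_per_version = 3, 10
# Cluster: three nodes, 8 CPUs and 24 GB of memory per node.
nodes, cpu_per_node, mem_gb_per_node = 3, 8, 24

total_pods = versions * replicas_per_version
total_cpu = nodes * cpu_per_node
total_mem_gb = nodes * mem_gb_per_node

print(total_pods)    # 30 httpbin pods under load
print(total_cpu)     # 24 CPUs of cluster capacity
print(total_mem_gb)  # 72 GB of cluster memory
```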
You could sort of take that as the standard that we ran against and extrapolate for your own conditions. And then we ran it in three modes: the sidecar model, just the ztunnel, and the ztunnel with a waypoint proxy.

So the first thing you'll see, if you look at the charts for total CPU, is that there's already a pretty dramatic difference between running as a sidecar and running as just the ztunnel, or even the ztunnel with the waypoint proxy. The green line there is actually the total CPU of all workloads, versus the yellow line; it's pretty hard to distinguish those colors, but I'll say the top line is all workloads, and then the bottom line.

One thing I'll note, which you probably have a question about: what's this little spike at the beginning? I'll mention that when we get past these slides, to the future of ambient, but you will notice that it's consistent between the sidecar model and the ambient model. Looking at total memory, again you're going to see roughly a 50-60% improvement here.

I think, when we're talking about deployment strategies, looking at the stacked view is also very helpful, because now you can see just how many sidecars you're running. Even though each takes up a small amount of CPU, all of that aggregated turns out to be quite a bit. And that matters when you're thinking, "now I'm going to need to go and do some cost research on what kind of cluster I actually need to purchase."

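As a rough illustration of how small per-sidecar overhead aggregates across a cluster (the per-proxy figures here are illustrative assumptions, not measurements from our tests):

```python
# Assumption: a typical idle-ish Envoy sidecar reserves on the order of
# 0.1 vCPU (100 millicores) and ~50 MB of memory. Illustrative only.
per_sidecar_cpu_millicores = 100
per_sidecar_mem_mb = 50
pods = 30  # the 30 httpbin replicas from our setup

total_cpu_m = pods * per_sidecar_cpu_millicores
total_mem_mb = pods * per_sidecar_mem_mb
print(total_cpu_m)   # 3000 millicores (3 vCPU) of pure proxy overhead
print(total_mem_mb)  # 1500 MB of pure proxy overhead
```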
I also know that, from the standpoint of the data path all the way through L7, that proxy may be a lot more performant than my application. So I may have to scale up my application, whereas my actual waypoint proxy I could still leave as a single replica, or maybe, for HA, have a couple of them available.

Looking at this just in table format, you can see the number of containers required in our setup: 31 with sidecars, whereas even with both L4 and L7 you're at six containers.

In terms of CPU, again, you're looking at three vCPUs for running this entire example, versus just a fraction of a CPU, roughly half a vCPU, in the L4-plus-L7 case. And then total memory: again, we're talking fractions here.

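A sketch of the relative savings those figures imply (the container and CPU numbers are the ones quoted above; the percentages are just derived from them):

```python
# Measured totals from our setup: sidecar mode vs ambient (ztunnel + waypoint).
sidecar = {"containers": 31, "vcpu": 3.0}
ambient = {"containers": 6, "vcpu": 0.5}

for metric in ("containers", "vcpu"):
    saved = 1 - ambient[metric] / sidecar[metric]
    print(f"{metric}: {saved:.0%} reduction")
# containers: 81% reduction
# vcpu: 83% reduction
```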
Obviously, it's going to give you that increased flexibility, because when you're thinking about your deployment strategies, you're going to be able to separate concerns pretty easily, along with the reduced complexity that you get, which you've probably already heard about: you can roll out your new applications, you can upgrade.

Now let's talk a little bit about the future of ambient mesh. As I mentioned before, this is relatively new, so there are still optimizations that we're looking at. For instance, when you look at the ztunnel today, it's implemented with an Envoy proxy. That was done for expedience in getting to market, because we already had all those filters built in.

We could already establish mTLS with Envoy proxy. Most likely that is going to be rewritten to be more performant and specialized, just to handle that L4 traffic, so you can expect to see even more performance gains in the future.

All right, and there's just additional information here. I think these slides are going to be available afterwards, so you'll actually get these links. So don't worry if you can't see the links here; that's okay, we'll make sure you get those. And I think that's the end of my talk, so it's nice and short.
