From YouTube: What Istio Ambient Mesh Means for Your Wallet?
Description: Join us in this hoot livestream to discuss one of the key benefits of Istio ambient mesh - cost! Bring your questions - Greg, Krisztian and Lin would love to hear from you on any questions you may have regarding ambient mesh!
Blog and steps: https://www.solo.io/blog/what-istio-ambient-mesh-means-for-your-wallet/
A: So on the first day of the announcement, which was Wednesday, we did a hoot about Istio ambient mesh and what it means to you. We had guest speakers from Google and also a few speakers from Solo, so check it out if you haven't. It was a celebration of Istio ambient mesh, and we also discussed what it means to you as an Istio user.
A: Yesterday we did another hoot livestream, where we answered questions from you. I also had the pleasure of talking to our field leads at Solo, who had been talking about ambient mesh with many of our customers before the launch, under NDA, so they were able to educate me on the common questions they heard from the customers, and the answers to those. So don't miss out on that one if you're interested in Istio ambient.
A: Now, today we're going to talk a little bit more about ambient mesh, but specifically focus on what ambient means for your wallet. I am so excited to have Greg Hanson and also Krisztian. I believe both of them have spoken on hoot livestreams this year, but for folks who don't know them, I'd like to have them introduce themselves. Greg, why don't you go ahead and introduce yourself to the audience?
B: Yeah, sure. So this is my second hoot - I appeared in one a little earlier, introducing a nice little Envoy UI tool back then. I've been working on ambient since I started at Solo, gosh, back in March. It feels like a long time ago, but not really - not even six months yet - and ambient's been an exciting project. I've been contributing to the Istio community since day one, so going to the ambient model and seeing the sidecar potentially disappearing from the deployment model is really exciting.
A: That's awesome, yeah. I was just pulling up the hoot you did - "Debug Envoy Config and Analyze Envoy Logs" - yes, that's the one. Unfortunately, you have to watch the ads now, but it's getting a lot of good views. Episode 24, if you're interested in that. Thank you, Greg. Krisztian?
A: Okay, cool, awesome. So let's dive into what Istio ambient mesh means for your wallet. The way we're going to run this is: I'll start by asking some questions, but we would love to hear questions from the audience, because that's really what makes this interesting. So I guess the first question I would ask is about resource usage: what are the key measurements?
B: So, typically, when you size your cluster for Kubernetes, you want to make sure you're checking the amount of CPU and memory that you need per node - to make sure it's enough for your applications, not just for running, but also under maximum load. So you want to make sure the right amount of CPU and memory is allocated for each of those nodes, and that you're setting requests and limits for your applications, to make sure those limits are respected.
B: And so with ambient, one of the big savings there is just being able to eliminate a bunch of resources that you no longer need to keep track of.
A: Yeah, definitely. So, can you clarify - I noticed in your blog you talk about allocation and utilization, which I believe are critical concepts in terms of what you're actually paying. Can you clarify which ones you are measuring?
B: Sure. So we have a Grafana dashboard, published on GitHub - it's linked in the blog - and it's the one that we used for collecting the images that are displayed in the blog. Most of those graphs are tracking utilization: memory and CPU usage during our little scale and performance testing with Fortio.
B: That is, what the pods' containers are actually using while we're sending all these queries through Fortio over HTTP. And then there's allocation. Allocation goes back to what I mentioned earlier, when you initially size your cluster, because at the end of the day that's what you pay for on a per-monthly basis for your cluster: its size - how many CPUs and how much RAM you need to keep your cluster running on a day-to-day basis - versus the utilization, which is what is currently in use.
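The allocation-versus-utilization distinction can be made concrete with a little arithmetic. This is a hedged sketch with made-up numbers (node counts, prices, and usage figures are illustrative, not taken from the blog):

```python
# Illustrative only: you pay for allocated node capacity, not for what
# your pods happen to be using at any given moment.

def monthly_cost(nodes: int, cpu_per_node: int, price_per_cpu_month: float) -> float:
    """Cost is driven by allocation: node count x node size x unit price."""
    return nodes * cpu_per_node * price_per_cpu_month

# Hypothetical cluster: 5 nodes with 4 vCPUs each at $25 per vCPU-month.
allocated = monthly_cost(nodes=5, cpu_per_node=4, price_per_cpu_month=25.0)

# Utilization - say the workloads average 6 of the 20 allocated vCPUs -
# does not change the bill; it only tells you how much headroom you bought.
utilization = 6 / (5 * 4)

print(f"monthly bill: ${allocated:.2f}")
print(f"average CPU utilization: {utilization:.0%}")
```

The point of the dashboard discussed here is that shrinking the data plane lowers the allocation you have to buy, not just the momentary utilization.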
A: Yeah, makes sense. Once I purchase my laptop, I'm still paying for it while I'm sleeping, because I purchased the whole thing regardless of how it's used and what percentage is being used. Okay, very cool. Now, as far as measurement, what are the key components that you take into consideration? I assume it's the ztunnel and the waypoint proxy.
B: Correct - and I don't know, Krisztian, if you want to step in and speak about some of the specific metrics, but for the most part, the containers that we're watching are all conveniently named istio-proxy. So in the sidecar example, that's going to be the sidecar istio-proxy Envoy containers that run inside each pod, alongside your application containers. Then, in the ambient scenario, those istio-proxy containers account for the ztunnels - ztunnel is deployed in the istio-system namespace, and you have one of those per node in your Kubernetes cluster. The dashboard also keeps track of the waypoint proxies, which are deployed at the namespace level, per service account.
A: Okay, makes sense. So essentially we're measuring how much resource is being used: in the sidecar case, when you have it running within your application pod; and in ambient, when you are running with just the ztunnel, and then optionally with the ztunnel plus the waypoint proxy. So you were comparing all of that - yeah, okay, that's very cool, that makes sense. So can you share a little bit about what you discovered as part of this exercise? I think we now have a good understanding of what you are measuring, and now we're trying to understand the findings.
B: Sure - and this is an interesting little tidbit that I ran into immediately when I started setting up my scripts for collecting this data. Lin, could you share my screen?
B: Let's see... there we go, all right. So when I first deployed my cluster, I just thought: oh, I'm testing things out, I don't want to cost Solo too much money - so I didn't think too much about sizing, and I created this fairly small cluster. I decided to test against Bookinfo, just because that's the Istio staple. So you can see in this environment I have the pods deployed, and they have a sidecar injected.
B: So this is the classic sidecar scenario, and immediately, when I tried to scale Bookinfo upwards in my cluster, I noticed that a lot of my pods got stuck in Pending pretty quickly.
B: I was not checking to make sure my pods were actually running before I was driving load through them, and I was wondering why my results were unexpected. It's just because I didn't have enough capacity, based on the limits and requests that are in the pod specs for these additional containers.
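The extra per-pod capacity that runs out here comes from the injected istio-proxy container's own requests and limits. As a rough illustration (the exact values depend on your Istio version and mesh config; these mirror commonly cited defaults, so treat them as an assumption), every injected pod carries something like:

```yaml
# Resources requested by each injected sidecar, on top of the app container.
# Values are illustrative defaults; check your own mesh's injection template.
containers:
- name: istio-proxy
  resources:
    requests:
      cpu: 100m        # reserved per pod even when idle
      memory: 128Mi
    limits:
      cpu: "2"
      memory: 1Gi
```

Multiply a request like that by every pod in the mesh and it is easy to see why a small cluster runs out of schedulable capacity and pods stick in Pending.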
B: Yeah, I can run through an ambient scenario, but - I started with running against ambient, and I was able to scale there fine, just for the sake of collecting some initial data. But as soon as I tried scaling with the sidecar example, it started hitting problems pretty quickly.
B: And if you guys want to see it, I can show just a little scale demo of ambient, or I can run through a quick little test of the actual performance script that we ran to collect this data. If people are interested in trying this against different scenarios in their environments, I can do that.
A: Yeah, if you can show the quick test you ran - I assume it's relatively automated and you just kick things off - that would be interesting. And also explain to us what you are running in the test; I think that will also be very interesting.
B: Okay, so right now I just have this - this is my config yaml. This is everything that determines how the test is run. Specifically, here are - let me increase the size here first.
B: So there are three scenarios that I'm running through. One is sidecar, which is the classic Istio example. Then there's ambient with only L4 enabled, which means only ztunnels are deployed. And finally there's ambient with L4 and L7, which includes ztunnels and the L7 waypoint proxies. Then there are some other toggles here: we have a test wait, where you can add a little duration to sleep between each test - that helps make a clean line between each of the scenarios when we start viewing the data in Grafana.
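As a rough sketch of what such a config yaml could look like - the field names here are hypothetical, so see the repo linked from the blog for the real file:

```yaml
# Hypothetical shape of the test configuration; the actual keys live in
# the repo linked from the blog post.
clusterContext: my-cluster     # your kubeconfig context
scenarios:
  - sidecar                    # classic Istio sidecar injection
  - ambient-l4                 # ztunnels only
  - ambient-l4-l7              # ztunnels + waypoint proxies
testWait: 60s                  # sleep between runs -> clean lines in Grafana
```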
B: So I am just going to kick off the test - well, let me just double-check... yep, pointing to everything. With the config yaml, the only thing that you need to do to get running, after you pull it from GitHub, is enter your own cluster context here; it's already set up to use the hub and tag of the publicly released images for ambient. So I'm just going to kick this off with run-test.sh.
A: Yeah - so, Greg, I think you also mentioned that everything is on GitHub, right? Other people can reproduce what you are showing through GitHub, and I think what you are doing is useful not only to help them calculate the potential cost, but also to help them size their sidecars, their ztunnels, and also their waypoint proxies, particularly for the application workloads they have in mind.
B: It makes for a very nice use case for demoing a service mesh, but it's not made to handle that many requests per second over that long a period of time. So instead I just went with something fairly lightweight, because I didn't want the actual usage of the application pod to take up too many resources. I wanted to keep that pretty small and light, just to highlight the actual usage of the Istio data plane resources.
C: Can I share my screen? Okay, now you should be able to see this. So basically, these are the three scenarios that Greg mentioned. This first run is the regular Istio sidecar test, the second one is running with ztunnels - so level 4 only - and this third one is with level 4 and level 7 with ambient.
A: Very cool, yeah, very cool. We have a question from our audience - hey Ahmed, thank you so much for joining us. In the case of layer-7 waypoints: do we need to run multiple waypoint pods per namespace to ensure HA, especially in a multi-zone scenario? That's a great question. Do either of you want to take that, or do you prefer me to answer?
A: Yeah, so last time I checked, the way you stand up a waypoint proxy is using the Kubernetes Gateway resource. And last time I checked, I don't recall seeing a place to specify replicas, or to specify how you want to place the waypoint proxy. I think that would involve working with upstream on how to do that in the current API - based on my last look at it, which was a few weeks ago.
A: I don't recall there being a place that allows me to configure that. But ideally, with the right API, you do want to place your waypoint proxies for high availability: you want to try to place them on different nodes, because if one of your nodes goes down, you still have a waypoint proxy on another node. So you are absolutely right. And the way the waypoint proxy is designed, it's supposed to run outside the application pod and have different scaling characteristics than your application pod.
A: So you would have a lot more control to scale the waypoints differently than your application pods. That's a great question - I hope that answers it; if not, do let us know in the comments on the side.
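For reference, the Gateway resource just described looks roughly like this in recent Istio releases (the exact gatewayClassName, labels, and defaults have changed across versions, so treat this as a sketch rather than a definitive manifest):

```yaml
# A namespace's waypoint proxy, declared via the Kubernetes Gateway API.
# Note there is no replica or placement field here: scheduling of the
# underlying deployment is managed by Istio, not by this resource.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: waypoint
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
```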
Another question from Ahmed: the test was actually based on GETs - how does ztunnel behave in the case of POSTs with a payload in the body? That's a great question.
B: I do know Fortio has the option to pass in a POST body; we unfortunately didn't run the tests with that. It's a good question. The other thing is, right now this is just relying on basic Kubernetes name routing: rather than, say, creating a VirtualService for routing between httpbin version 1, version 2, and version 3, we're just calling those services directly, based on their fully qualified domain names.
B: We basically didn't want to have to change a VirtualService between each of the tests and get a little bit of a spike from istiod and each of the ztunnels, waypoint proxies, or sidecars, as they receive a brand-new config every time a VirtualService changes. So that's something else this test doesn't cover; it's just supposed to test it under load - request load, that is. But yeah, the POST behavior would be interesting to see, for sure.
A: Yeah, just to add to what Greg said: ztunnel can process that traffic, but ztunnel doesn't try to differentiate a GET from a POST. ztunnel doesn't parse headers, so it doesn't really do anything related to layer-7 processing - that's the job of the waypoint proxy. What ztunnel does is upgrade the connection, using the HBONE tunnel, make sure it's mutual TLS, and then send the traffic with the right identity.
C: And this is very important, I think, because before ambient mode you could also adopt Istio gradually, but with this new ambient mode, if you are only interested in layer-4 capabilities, you can start with just those and check it out, and if you need any additional capabilities on top of layer 4, you can add those as well. And yes, as you can see on these graphs, the performance savings are quite significant.
B: A lot of people are only after the automatic mTLS feature between each of their services, and if that's all you need, ztunnel creates a much smaller footprint. Just looking at the Grafana dashboard there at the bottom, you can see the resources cut down to size when you only run with the ztunnel.
A: Yeah, I feel like if there's one thing that is the biggest innovation of ambient, it's separating layer 4 and layer 7. We've talked about it from the resource perspective, but I think also, if you look at upgrades, if you look at CVEs: how often do you need to upgrade the layer-4 tier? It's doing much less work, so it's much less vulnerable, and the need to upgrade is much less frequent than for layer 7.
A: That's really cool too, because, Greg, I think you are my colleague in the Istio Product Security Working Group, right? You can attest that most of the Envoy CVEs we have in Istio are related to layer-7 processing. All right - so, Krisztian, let's go back to your dashboard. Is there anything else you want to show? I know you were trying to show something, but I also want to bring in questions from the audience, so I just want to make sure you've shown what you wanted to show.
C: Maybe it's interesting to mention that these max graphs in the middle are only here to get the actual local maximum during each run. If you are running this at home, on your own laptop or your own cluster, make sure to change the interval to get the actual maximum value. I think my local port-forwarding might be okay, but - basically, these are tailored to the actual test length that we have for each of the runs.
C: So if your runs are shorter or longer, just make sure you update these values to get the proper local maximum. We really just use these to get some idea about the actual spikes that we might have, and they can also help you size the nodes underneath your Kubernetes cluster. But the most interesting graphs, I think, are these four and the last two, where you can actually see the cost savings you can get running the multiple scenarios.
B: Oh, I was just going to say, in regard to the CPU graph: right now that initial spike from the ztunnels is throwing off our max calculations, and it's worthwhile to point out that ambient is still actively under development.
B: Things will smooth out performance-wise going forward, but it is interesting that the average is still lower in both of those scenarios. If only we could dissolve that little spike at the beginning, our max graphs would be looking a lot better too.
A: Yeah - so with that, I actually have a question, because I remember reading your blog: at the end, you said it's about 75% savings. So when you give that conclusion, that's the state of ambient today, and that's even with the spikes at the beginning considered - is that true?
B: Correct. And the savings aren't necessarily just in utilization. This goes back to that utilization-versus-allocation item too, because you are allocating your cluster for the under-load values. So even if you're not necessarily using the maximum of those values, what you initially size your cluster for is what you're charged for. The savings are still there, because you don't need to account for that many more pods requiring that many more resources at maximum load.
A: Yeah, because the numbers really add up when you compare something like 30 sidecars, which is your sidecar case, with just a few ztunnels and a few waypoint proxies. That really, really adds up.
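A back-of-the-envelope version of that comparison, with purely illustrative numbers (pod counts and per-proxy requests here are assumptions for the sketch, not measurements from the blog):

```python
# Illustrative data-plane CPU *allocation* comparison, in millicores.
# All numbers are made up for the sketch; measure your own mesh.

PODS = 30
SIDECAR_REQUEST_M = 100      # hypothetical per-pod sidecar request
ZTUNNEL_REQUEST_M = 200      # hypothetical per-node ztunnel request
WAYPOINT_REQUEST_M = 100     # hypothetical per-waypoint request
NODES = 3
WAYPOINTS = 2                # e.g. one per namespace/service account in use

# Sidecar mode: every workload pod carries its own proxy.
sidecar_total = PODS * SIDECAR_REQUEST_M

# Ambient mode: one ztunnel per node plus a handful of waypoints.
ambient_total = NODES * ZTUNNEL_REQUEST_M + WAYPOINTS * WAYPOINT_REQUEST_M

savings = 1 - ambient_total / sidecar_total
print(f"sidecar data plane: {sidecar_total}m")
print(f"ambient data plane: {ambient_total}m")
print(f"allocation savings: {savings:.0%}")   # ~73% with these numbers
```

The key structural point is that sidecar cost grows with the pod count, while ambient L4 cost grows with the node count, so the gap widens as the mesh scales.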
B: My blog points to a specific branch, which shows how to run it to get the same results that we have in the blog, but we have a wider range of HTTP and TCP tests available there. I don't think gRPC is there yet - I think Google has done gRPC testing, but I don't think we have.
A: Maybe, if you're interested in modifying it for something else, like gRPC, you just need a client that can drive gRPC load - if Fortio supports it, you can use it, or you can use some other client. But the dashboard to help you visualize, and the scripts to help you kick off the load, run the three scenarios, and compare - I think those will help you expand it to gRPC.
C: Different scenarios.
A: Yeah, I'm referring to that question. So thank you, Daniel, for the questions. We would love a pull request if that's something really interesting to you, and we would love to hear your feedback when you happen to run the test, to see whether your results differ from what we got. So, Krisztian, I think you were referring to this concern about POST - is that right? Yes? Yeah, so let's get to it. What are your thoughts? I'll take your thoughts first before I say anything.
A: So, I guess, Krisztian, if you want me to chime in first: I would say you're not going to have all your encryption handled by a single ztunnel pod, first of all. ztunnel only handles application pods co-located on the same node. So if you have a very large load, and if you have multiple client application pods, you will most likely have multiple ztunnels handling those. That's number one. Now, if you're talking about one single application pod that is sending a large body - which you could - that will be handled by the co-located ztunnel, so you're right about that. But this is similar to the sidecar today, because with the sidecar today, the sidecar would also do the encryption and do the mutual TLS upgrade for you. So this is not much different from the sidecar case, other than the ztunnel handling multiple pods co-located on the same node.
A: That's a great question. So the source ztunnel sends the traffic on behalf of the client. If your client, say, is sleep, the source ztunnel sends on behalf of sleep, using sleep's identity, and then the target depends on what the target is. In the case of the second test, where the target is behind a target ztunnel, the target ztunnel will terminate the connection - that's what ztunnel does.
A: First of all, the ztunnel knows where the target ztunnel is, because the Istio control plane tells the ztunnel, through configuration - over xDS today - which other ztunnel it should send this traffic to in order to reach your target.
A: So once the traffic arrives at the target's ztunnel, the first thing it does is terminate the connection; then it determines where the original destination is on that particular node, and then it forwards the traffic to the original destination.
B: Oh, this was just reminding me - I've been with Istio for a while, and I know one of the issues that I handled, back when I was still at IBM, was a large file transfer.
B: Yeah, there were some bugs with getting the entire file to transfer. It makes me wonder how ztunnel performs in that type of scenario - just another interesting case that we could potentially test, and I'd be interested in seeing the results.
A: Yeah, I share that concern, because I remember I was also at IBM, working with you on that. We ended up having to make a configuration change to Envoy to make sure it could send the large files. I don't recall whether it was POST or GET - it might have just been GET - but I guess it's a similar problem when sending a large body: Envoy couldn't even handle it by default. Yeah, I remember.
B: But yeah, I would be very interested to see how those two cases actually compare, sidecar versus ambient.
A: Yeah, I think it's a valid concern, Ahmed. It's not something we have tested, but I don't believe it would behave much differently from the sidecar today.
A: Yeah, that is interesting, because I know that for a long time the Istio sidecar couldn't support Jobs. With ambient, we certainly do think it will support Jobs, because the ztunnel is going to capture all the incoming and outgoing traffic for any pod in ambient, regardless of whether you're running short-lived or longer-lived pods. It definitely needs to be tested, but we do think ambient is designed to tackle that problem as well, along with many other application problems - such as StatefulSets, such as server-first protocols, where the server speaks first. Many of those issues we do expect to resolve with ambient; it just needs more testing too. All right, thank you so much for that question, Ahmed - we really appreciate it. Oh, that's it? Is there anything else you guys want to share?
A: We talked about the test and the timestamps, and how to plug in to see it in the Grafana dashboard. What would be your key takeaway? I guess that would be the next thing I want to ask before we end this livestream.
B: Well, for me, I think the key takeaway is definitely that top and bottom graph on the Grafana dashboard, just because you are eliminating that many more resources in your environments - so yeah, the savings are right there.
B: So, let's see: the green line is the total workload usage for the testing namespace that the scripts are running against - that includes the httpbin usage, their sidecars, and the data plane - whereas the yellow line on the bottom is just the data plane. So in that first hump, that's specifically all the istio-proxy sidecar resources, and once we get into the second and third scenarios, we have the ztunnel from the istio-system namespace added in there too.
C: Yes, yes, I think these savings are really good. And the Jobs - the promise that you can have Jobs in the mesh with all the benefits as before, but now without needing to hack into the container config - that's really an interesting thing.
B: Correct, yeah. Krisztian, I don't know if you want to take that, since I know I got the steps for installing it from you.
C: Yes, yes, that's right. You can find the instructions to install all of these under the dependency section. Basically, we are just using the kube-prometheus-stack, which is very common across Kubernetes clusters. We have a simplified values yaml file, and that's basically it: once you apply these configurations, you will have node-exporter, cAdvisor, and all the required configuration deployed, to be able to reproduce this in your own environment.
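For anyone reproducing this, the monitoring setup could look roughly like the following values fragment for the kube-prometheus-stack Helm chart. This is a hedged sketch, not the repo's actual simplified values file:

```yaml
# Hypothetical minimal values for kube-prometheus-stack: enough to get
# Grafana plus node-exporter and cAdvisor metrics for the cost dashboard.
grafana:
  enabled: true
nodeExporter:
  enabled: true
kubeStateMetrics:
  enabled: true
prometheus:
  prometheusSpec:
    scrapeInterval: 15s   # finer-grained samples suit short test runs
```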
A: Yeah, this is actually the reason I asked you this question: I believe this is another value that ambient provides. If you have used Istio today or before, you will know that having Prometheus - the monitoring stack - as part of Istio, and having mutual TLS between your application pods and your monitoring stack, is not a simple problem to tackle in the sidecar world. With ambient, it becomes much simpler.
A: Yeah, that's a huge benefit of ambient. All right, I think our time is up. It's been really nice talking to both of you. I think I have better insight into what you guys are doing and how you are measuring. As for the net-net key summary: everything is exciting as it is with ambient in terms of resource usage and cost, because we're looking at 75% savings - and that's even with, I guess, the initial release of ambient.
A: All right, thanks everyone so much for joining us today. We really appreciate it - thank you for all the questions, and thank you, Greg and Krisztian, for coming on the hoot, doing a live demo, and talking us through your journey into these numbers. We really appreciate them. Thanks, everybody - bye now!