From YouTube: Istio Ambient Service Mesh & Rust-Based Ztunnel Update
Description
Istio ambient mesh has graduated from the experimental branch and merged to Istio’s main branch! This is a significant milestone for ambient mesh, paving the way for releasing ambient in Istio 1.18 and installing it by default in Istio’s future releases. Join us in this upcoming hoot livestream where John Howard and Lin Sun, members of Istio Technical Oversight Committee, provide the latest update on Istio ambient service mesh and the new Rust-based ztunnel.
A: All right, looks like we are live today. Hello, hello, everybody, welcome to hoot episode 46. Today we're going to talk about how Istio ambient service mesh has graduated from the experimental branch and merged to Istio's main branch. This is a huge milestone for ambient, really paving the way for releasing ambient in Istio 1.18, and we expect ambient will be the default installation for Istio's future releases after 1.18. So in this livestream, I am so excited that John is also joining me to provide the latest update on Istio ambient service mesh and also the new Rust-based ztunnel. So welcome, John. For folks who are not familiar with you (though I think everybody probably is), can you give a quick intro about yourself?
B: Yeah, hey everyone. I'm John Howard, a software engineer at Google. I've been working on Istio for my entire career, really, and for the past four years on Istio's TOC and steering committee and whatnot. I'm really excited about ambient in general and to talk about ambient today.
A: All right, that's awesome. So the first thing we want to talk about is definitely this particular announcement; if you haven't seen it, definitely check it out. So let's jump in to discuss: what are the major changes from the initial launch, from your perspective, John?
B: Yeah, one big change, not so much a product change but from an organizational standpoint: at the initial launch of ambient, back in, I think, last September, we had our own dedicated experimental branch, and so to use it you had to go build Istio yourself. We didn't have the full CI/CD, testing, and release process, all that. So now we've actually gotten that merged into the main branch.
B: So now it's just another part of Istio, and it's just one flag away from enabling on your normal Istio install. I think Lin mentioned that, but that's going to be released as part of the 1.18 release, our next release, coming up in, I think, a couple of months. So that's really exciting.
B: So now it's kind of an official part of Istio. In the feature stages we have alpha, which it is today, and eventually beta and stable, et cetera, so it's moving along the paces of graduation and stability. Beyond that, though, there have also been a ton of technical changes we've made to it since the initial experimental launch.
B: The biggest change is probably the ztunnel component. If you're not familiar with the ztunnel, it's basically a per-node proxy that is responsible for doing encryption and L4 policy enforcement.
B: If you haven't seen the initial blog post (and I think we'll probably get into this a bit more later), we've split the sidecar, which was one single component that did everything, into a per-node proxy that is just responsible for encryption, tunneling, and enforcing some authorization policies, and the waypoint proxies, which do all the full L7 service mesh features...
B: ...that we know and love. So it's really important that the ztunnel is efficient, because it's being deployed on every single node, and we want to support clusters that have 10,000 pods, maybe multiple clusters joined together where you have a hundred thousand or even more pods. So we spent a lot of time on optimizing ztunnel, both from a runtime-performance perspective (how much throughput or latency you're adding) and from a footprint perspective.
B: You know, how much memory does it take to handle a cluster of ten thousand pods, for example, even if we're not sending any traffic? So in our initial experimental release (and feel free to chime in, Lin, if you want), we had built ztunnel using Envoy. This was an obvious choice: we use Envoy for the rest of Istio, and it's a great proxy. But we ran into some scalability issues with Envoy.
B: Largely this boiled down to using Envoy in ways it wasn't really designed to be used. Envoy is a pretty generic proxy that's used in a lot of different use cases. In Istio we use it as an ingress and egress gateway at the edge, and we also use it for sidecars, and those are two pretty common deployment patterns for Envoy across the industry that have been optimized for. With ztunnel it's a bit different: we don't have a lot of routing information.
B: We don't have all these rich filters and integrations and whatnot. Envoy has tons and tons of integrations; if you go through the docs there's one for every single different service out there. For ztunnel we don't need any of that. We really just want to do something very focused, and we want to do that very efficiently.
B: And so this overhead of Envoy being quite generic meant that everything we wanted to do was more verbose and slower than we wanted. So we looked at how we could optimize this configuration, and that led us to deciding to build a purpose-built ztunnel. Instead of trying to configure Envoy to do what we wanted as a ztunnel...
B: ...we built a proxy from the ground up. This was largely based on the hypothesis that if we built a very focused, purpose-built implementation, we could be better at that one task. We have no desire to compete with Envoy, or any belief that we're going to make a better Envoy; Envoy is there to do everything, and we just want to do one thing really, really well. So that's the premise of this new purpose-built ztunnel.
A: I would also add that, from a user perspective, it's a lot easier to debug. Many of you are probably somewhat familiar with this: at the end of the day, Envoy is very complicated. I'm sure none of you enjoy pulling out an Envoy config dump when you need to troubleshoot an issue. There are thousands and thousands of lines of configuration, even though you may just have two services deployed in your cluster, with all the cluster, listener, and endpoint configuration of Envoy.
A: It's definitely overwhelming. So what John was mentioning about the purpose-built ztunnel really enabled us to focus on a super limited set of functionality and also make the configuration really simple. All right, we have a couple of folks I want to say hi to. If you are in the livestream, we would love to have you say hi to us. Hey Peter, thanks so much for joining us. Hey Boji, thank you. Hey, you joined from Levelland, awesome. Hey Daniel, thanks so much for joining us. All right, John, I'll...
B: Yeah, I would suggest scrolling down to this configuration's protocol section; I think that provides a pretty good concrete example. So, diving deeper into that: before, with Envoy, for a single service we'd have to write an Envoy configuration. In Envoy this is called a cluster (kind of a confusing name, but that's how Envoy represents a service), and for each service...
B: ...we'd have about 350 lines of YAML configuration, and that's because every aspect of communicating with services has to be spelled out, even though it's common: Istio communicates with services in roughly the same way, and most of those 350 lines are the same. Envoy is generic; it doesn't know how to communicate with these services until we tell it to. It's not an Istio-aware thing; we need to make it Istio-aware by configuring it. So something as simple as saying "use mTLS"...
B: ...is about 100 lines of configuration for every single service. In the ztunnel we don't need to encode this Istio awareness into the configuration, because the ztunnel is already Istio-specific. All the logic that was previously in Envoy configuration is now embedded in the ztunnel binary, and all we need is a simple configuration to tell it what to do.
B: What before was 100 lines of config, telling it exactly what Istio mTLS means, how to send traffic, how to verify it, and so on, is now just a single enum: this protocol field. You can see in this example it's TCP, but it could easily have been mTLS, which is another option, and that would tell ztunnel to do Istio mTLS, which means a lot of things.
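As a rough sketch of the idea John describes (the field names here are illustrative, not the exact ztunnel schema, which is internal and has changed across releases), a per-workload entry collapses all that Envoy mTLS configuration into a single enum:

```shell
# Hypothetical sketch of a per-workload entry in ztunnel's simplified config.
# The point is that transport security is one enum value, not ~100 lines of
# Envoy YAML per service.
cat <<'EOF'
workloads:
  - name: reviews-v1-6fd8c67f9-abcde   # illustrative pod name
    namespace: default
    node: ambient-worker
    protocol: TCP      # plain passthrough; the mTLS variant swaps this one value
EOF
```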
B
It
means
we're
using
spiffy
identities,
we're
enforcing
the
identities,
we're
doing
TLS,
1.2
or
1.3
The
Cypher
Suites,
all
these
different
things
that
previously
we
had
to
tell
Envoy
each
and
every
time
we've
now
pushed
into
the
Z
tunnel,
and
this
makes
it
quite
a
bit
more
efficient
right.
B: Every time a pod changes, or is added or removed, we may need to send this payload of data, but that's very cheap. I think it's 200 kilobytes or something; or maybe 200 bytes, actually, sorry. So pushing those updates is quite a bit cheaper than the giant Envoy configuration, and it's even cheaper to store in memory for ztunnel.
B: So we found that with ztunnel we could hold configuration for 100,000 pods without using much memory at all. This will allow scaling to much, much larger clusters than we saw with sidecars.
A: The amount of network traffic sent from the control plane to the data plane is also going to be drastically reduced, yeah.
B: Right. If you had a hundred thousand pods and did the math for the bandwidth required to push out that information when you have pod churn, it's astronomical. So we quickly found that we needed alternatives, and that's what led us down this path.
A: Yeah. Oh, there are a couple of other people; let's go ahead and say hi to them. Hello to someone from Reading, thanks so much for popping in. And there's Ian; hey, we've seen you in the community too. And somebody without a name, hello. And another viewer, hey, thanks so much for joining us. And Daniel, thanks so much. All right, feel free, yeah, feel free to ask if you have any questions. Let's go to... do you want to talk about the new Rust-based ztunnel? Why Rust for the implementation?
B: Yeah, definitely. I'm just thinking how to phrase this. It's easy to see "oh, we use Rust" as the start of the decision, but I wanted to introduce some of the background on why we made a purpose-built ztunnel, because I think that's actually the key part of the benefits of this new approach.
B: But then, once we decided we wanted to build a purpose-built thing, we still had to pick how to build it, and there were a few choices we considered. We could take Envoy and use it as a library, or do some sort of C++ thing, but we had found that with Envoy, and with C++ in general, the development experience was quite poor. The security properties, from memory safety and so on, are also quite poor.
B
In
general,
the
industry
is
kind
of
moving
away
from
C
plus
plus
not
entirely,
but
you
know
for
newer
projects.
Rust
is
kind
of
preferred
in
many
cases
another
so
that
kind
of
ruled
out,
C
plus
plus
for
us
mostly
another
thing
we
considered
was
go.
This
was
kind
of
the
obvious
choice,
I
mean
actually
prototyped
zetol
and
go
initially,
and
it
was
quite
easy
to
implement
because
the
rest
of
istio's
original
ghosts.
We
have
all
sorts
of
libraries
interact
with.
B: And it was actually easy, given how simple ztunnel is. The issue we found, though, was that it was really hard to make it as performant as Rust could be. It worked fairly well, but tuning the CPU and memory usage to be as minimal as possible is challenging. We knew that if we spent a lot of time we could probably optimize it further; there are networking products written in Go that are highly optimized and pretty efficient. But we felt that, long term, it may not be as fast or as easy to develop; we could maybe spend a lot of time optimizing it, but long term...
B
So
we
went
with
the
rest
based
approach.
Rest
is
highly
successful
in
these
high
performance,
low
resource
utilization,
where
safety
is
Paramount,
you
know,
there's
extensive
history
in
the
industry
from
other
service
meshes,
networking
proxies
load
balances,
Etc
using
rest.
B
It
out
of
the
box
was
just
immediately
performant.
We
didn't
have
to
do
any
optimizations
at
all
and
it
was
much
much
faster
and
lower
memory
footprint
than
the
go
implementation,
and
obviously
we
can
also
optimize
it
so
that
will
improve
in
the
future
and
obviously
the
memory
safety
is
huge
for
such
a
critical
component
on
the
Node.
So
for
us,
rust
was
was
a
great
choice.
B: The learning curve relative to the rest of Istio was really the only downside that we saw, and it turned out, I think, not to be as bad as we thought.
B
You
know
we
had
a
lot
of
people
ramp
up
on
Rust,
pretty
quickly
I
think
zetunnel
has
10
15
contributors
already
that
have
have
ramped
up
so
overall
I
think
it's
been
a
great
choice.
Yeah
we
built
on
top
of
Tokyo
and
Hyper
libraries,
which
are
kind
of
the
de
facto
standards
in
the
industry.
You
know
most
of
those
products
I
talked
about
are
also
using
those.
B
So
we
have
we're
not
really
trailblazing
here,
we're
using
like
tried
and
true
technologies
that
have
been
abled
plenty
of
other
projects
to
be
highly
performant
and
secure.
A: Yeah, I think the community is definitely excited about how lightweight the Rust-based ztunnel is. The learning curve was a little bit hard initially, but a lot of people have gotten through that hurdle already. There are a couple of other people I want to say hi to: hi Pasha, thanks so much for joining us. Hey Kevin, hey Jay, thanks so much. And oh, Pasha is from the cubado team, very cool. Oh, Daniel has a question: "The workload config is delivered from the Istio control plane to each ztunnel using XDS, correct?"
B
Yeah,
so
if
you're
familiar
with
us
just
sidecars,
then
you
know
that
the
envoys
connect
to
the
Easter
d
control
plane
and
using
a
protocol
called
XDS.
The
configuration
is
pushed
from
the
control
plane
to
the
envoys,
so
XDS
is
actually
two
parts.
One
is
the
transport
protocol
like
the
grpc
service
and
interface
and
then
there's
the
XDS
AP
API
types
like
the
zero
Envoy
clusters
routes,
listeners
Etc.
B
So
in
the
Z
tunnel
we
use
the
xjs
transport
protocol
because
it's
used
in
istio
and
it's
perfectly
fine
protocol
and
fits
all
of
her
needs.
But
we
add
you
to
find
our
own
type,
which
is
that
workload
configuration
that
I
showed
previously,
so
that
allows
us
to
be
much
more
efficient,
but
we
still
have
a
lot
of
the
Integrations
that
we
get
from
using
XDS
yep.
A
Yeah-
and
that
was
the
also
like
the
key
thing
we
were
mentioning
about
earlier-
how
much
the
configuration
is
being
dramatically
reduced
based
on
a
simple
workload
to
Z
tunnel
is
just
very,
very
minimized,
with
exactly
what's
absolutely
necessary.
That
zetano
needs
to
know
all
right.
I
think
we
talked
a
lot
about
zetano
I,
believe
there
are
other
exciting
changes
as
part
of
from
the
initial
launch.
Besides
sitano.
A
Let's
quickly
talk
about
them
too,
so
one
thing
I
think
was
really
cool.
Is
we
have
a
a
simple
way
to
to
deploy
Waypoint
configuration
using
istiocado
waypointment,
so
you
can
easily,
you
know,
deploy
a
waypoint
proxies
for
your
service
account
or
you
can
use
the
debugging
debug
ability
has
also
improved.
You
can
do
config
dump
of
your
Waypoint
and
also
zitana,
not
only
on
the
sdod
perspective.
A: We spent a lot of time on the semantics of authorization policy, and how authorization policy moves from layer 4 authorization policy to layer 7 authorization policy. One of the key things we're introducing in the authorization policy, particularly for layer 7, is to allow you to bind the authorization policy to a particular waypoint, versus selecting the actual destination workload. The reason is that it's the waypoint that's actually enforcing the layer 7 authorization policy for you now in ambient, whereas previously, in the sidecar mode, it was the sidecar.
B: Yeah, I would just expand a bit on the simplification. I won't go into all the nitty-gritty details, because I think I'd lose everyone (although we may do a blog post with a deep dive), but in the initial experimental launch we had focused a lot on getting the behavior working, at the expense of performance, so the waypoints were kind of laughably unscalable: even with a small-sized cluster, their configuration was ginormous, because of what we had.
B: In addition, we've changed how waypoints interact as their primary mechanism. Previously, in the sidecar model, most of the configuration for how I'd like to do routing, retries, all that, is on clients. So a server would typically create a VirtualService and say "send 10% of traffic to version one, because I'm doing a canary", but that's actually handled by all the clients...
B
In
the
mesh
right,
so
that
configuration
is
taken
from
the
server
namespace
and
then
applied
to
every
single
proxy
in
the
mesh,
so
they
can
communicate
that
doesn't
scale
terribly
well
because,
like
I
said,
it's
an
N
squared
scaling
problem
and
it
also
doesn't
work
if
the
clients
aren't
part
of
the
mesh
right
if
you're
having
traffic
from
outside
the
cluster
entirely
or
you
just
don't-
have
your
entire
cluster
with
sidecars.
B
And
so
you
end
up
with
this
kind
of
bizarre
behavior,
where
some
clients
are
respecting
your
rules,
but
then
others
are
just
going
directly
and
your
Canary
is
suddenly
not
working.
You
don't
understand
why,
right
so
with
ambient
on
waypoints,
we've
actually
shifted
that
so
that
policies
are
enforced
on
the
server
side,
and
so,
if
I,
as
the
you
know,
server
namespace
deploy
a
waypoint
instead
of
some
Canary
rules,
I'm
guaranteed
that
all
traffic
that's
coming
into
my
name.
Space
is
going
to
go
through
that
Waypoint
proxy
and
establish
all
those
rules.
B
Even
if
the
rest
of
the
cluster
isn't
involved
in
East
Geo,
they
might
not
even
know
what
used
to
is
right.
You
may
be
the
one
namespace
in
your
entire
kind
of
multi-tenant
cluster
that
even
knows
by
these
two
and
has
deployed
it
and
installed
it
you'll
be
guaranteed
that
all
the
traffic
goes
through
the
Waypoint,
so
those
policies
are
enforced.
A
Yeah
I
think
that's
a
really
great
point.
I
guess
one
thing
I
would
add
for
those
of
you
who
are
familiar
with
scika
today
right
there
is
one
great
resource
called
cycle
resource
in
istio
right.
So
if
you
ever
needs
to
run
it's
your
in
production
with
a
large
cluster
with
a
lot
of
services,
I
wouldn't
say
a
lot
by
maybe
just
more
than
a
dozen
services.
A
So
you
probably
have
looked
into
the
psychologists
in
the
Israel
project
and
probably
you
have
deployed
the
cycle
resource
into
your
cluster
and
the
reason
is
you
want
to
kind
of
config
the
visibility
of
what
your
Envoy
configuration
can
see.
So
it
can
see
the
only
the
minimum
required
configuration
and
nothing
more,
so
you
can
improve
the
performance
of
your
Envoy
cycle.
So
what
John
was
just
mentioned
with
a
lot
of
the
configuration,
a
particular
routing,
related
configuration
moving
from
consumer
side
to
the
project
uses
site
we
actually
find
out.
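For reference, the Sidecar resource Lin mentions is a standard Istio API; a minimal sketch that limits each sidecar's visibility to its own namespace plus the control plane looks roughly like this (the namespace name is illustrative):

```shell
# Limit sidecar config visibility to the workload's own namespace and istio-system.
# A Sidecar named "default" in a namespace applies to all workloads in that namespace.
kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: my-app        # illustrative namespace
spec:
  egress:
  - hosts:
    - "./*"                # services in the same namespace
    - "istio-system/*"     # control-plane services
EOF
```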
B: We still need each ztunnel to know about all the pods, so that's still N-squared, but we optimized that by making it really, really cheap with the workload custom type, and we think that will get us to about a hundred thousand pods with under 50 megabytes of RAM; a pretty small footprint even for huge clusters.
B
We
also
have
some
tricks
upper
sleeves
that
I
won't
get
too
far
into,
but
just
kind
of
scale
that
Beyond
well
well
beyond
100
000
pods
in
the
future,
with
waypoints
we've
actually
been
able
to
cut
out
the
N
squared
entirely
by
having
this
namespace
Waypoint.
That
has
is
responsible
only
for
its
own
namespace
right
and
so
that
allows
us
to
scale
far
beyond
sidecars
and
not
have
the
user
have
to
manually.
You
know
prune
things
with
the
sidecar
resource,
so
I'm
really
excited
about
this.
B
You
know
scalability
has
been
a
huge
concern
in
East,
Geo,
I
swim,
probably
30
of
my
time
for
the
past
four
years.
Working
on
that
problem,
and
so
Ambien
is
like
a
huge,
huge
step
forward
in
that
regard.
Instead
of
you
know,
minor
increments,
that
we've
made
on
sidecars
over
the
years.
So
I'm
super
excited
about
that.
A
Yeah,
very
cool,
okay,
so
there's
the
other
folks
who
want
to
say
hi,
hi
Rohit
thanks
so
much
for
joining
us
and
there's
a
question
popped
in.
Does
it's
your
cuddle
support,
M1,
Mac
I
believe
that
the
answer
is
yes,
but
I
don't
have
a
Mac,
so
not
something
I
personally
validated
yeah.
B
So
but
it's
a
Easter
cuddle,
regardless
of
ambient
I,
believe,
has
shipped
an
M1
Mac
build
for
a
while.
That
is
just
the
Easter
cattle
client
binary,
though
istio
itself,
like
the
server
components,
does
run
on
arm,
but
only
on
Linux
arm,
so
not
on
Mac.
B: It's obviously extremely important. Right now ambient is in alpha, and what alpha means is that it's usable for trying things out in a non-prod environment, et cetera, but upgrades (zero-downtime upgrades, or upgrades in general) are kind of a to-do, or not entirely stable. So this is top of mind, but we don't have a guide yet; that's something that will likely come as ambient promotes to beta later in the year.
A
Yeah,
that
sounds
right
all
right,
so
we
have
a
clap
from
Alexandro.
Thank
you.
So
much
for
that.
I
believe
that
was
when
we
talk
about
scalability
improvements
and
then
then
confirmed
the
M1
Mac
and
istiocado
works
fine
for
him.
Awesome
thanks
so
much
and
we
got
a
hello,
hi
Farm.
It's
me,
I
love
your
name,
actually
pretty
cool
name.
Thank
you
all
right
with
that
John.
Should
we
do
a
demo.
A
Right
so
we
can
talk
a
lot,
but
we
think
it's
more
important
to
show
you
what
we
have.
So
what
we're
going
to
do
is
actually
go
through
our
get
started,
guide
and
kind
of
explain
to
you.
You
know
what
we
are
doing
as
part
of
the
guide.
So
with
that,
let's
pray
to
the
demo
God
everything
is
going
to
work
all
right.
So
the
first
thing
we're
going
to
do
is
I.
Believe
I
have
my
kubernet
cluster
running.
A
So
it's
just
a
clean
cluster
with
two
node,
so
you
can
see
ambient
worker
and
worker
2..
So
the
first
thing
we're
going
to
do
is
install
ambient.
So
if
you
follow
a
get
started
guide,
we
actually
have
a
alpha
zero
out
there,
that
you
can
download
so
I
already
downloaded
on
my
machine,
so
I'm
not
going
to
go
through
the
download
process,
but
you
can
see
I'm
installing
ambient
and
I'm
enable
the
access
log.
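A sketch of the install step being described; the ambient profile is real, while the access-log flag shown is the standard mesh-config option and is an assumption about what was typed on screen:

```shell
# Install Istio with the ambient profile and enable Envoy access logging.
istioctl install \
  --set profile=ambient \
  --set meshConfig.accessLogFile=/dev/stdout
```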
A
So
this
is
going
to
in
install
a
bunch
of
ambient
components
for
me.
So
yes,
I
am
going
to
say
yes
and
what
it's
going
to
do
is
installing
SEO
core,
which
I
believe
is
a
bunch
of
customer
resource
of
istio
and
also
the
web
hooks
configuration
and
it's
just
install
it's
your
D
and
then
install
SEO
CMI
Z
tunnel.
So
if
you
install
istio
today,
this
is
nothing
new
to
you
except
the
Z.
Tonal
component
is
new
and
the
istio
CI.
A
We've
made
a
lot
of
change
to
update
it's
your
CLI,
so
it
can
do
traffic
redirection
for
all
the
incoming
and
outgoing
traffic
for
the
application
in
ambient.
So
we
can
redirect
to
the
z-tunnel
first
all
right,
so
we
just
installed
our
ambient.
You
will
see
the
components
should
be
running
so,
as
John
was
mentioned,
z-tunnel
is
designed
to
deploy
it
onto
each
and
every
single
node.
So
you
see,
I
have
3D
tunnel,
which
I
have
two
worker
VM,
along
with
the
control
plane
VM.
A
So
the
next
thing
we
are
going
to
do
is
we're
actually
going
to
install
kayali
and
polices.
So
this
shouldn't
be
anything
surprising
if
you
installed
kayali
and
premises,
and
this
is
just
to
help
us
visualize
the
application
and
the
next
thing
we
are
going
to
do
is
we
are
going
to.
Let's
see
we
are
going
to
do
I'm
sorry,
I,
just
realized,
I
I
was
I'm,
sorry
did
I
lose
you
John.
A
But
I
really
want
is
I,
think
oh
okay,
so
yeah,
so
it
is
spoken
for
oh
yeah,
I
was
wasn't
sure
if
I
was
reading
around
documentation
all
right,
so
the
next
thing
we
are
going
to
do
is
we're
going
to
install
the
booking
for
application.
So
you
guys
don't
know
about
that.
So
I'm
not
going
to
go
through
detail
on
the
booking
for
application
and
then
we're
going
to
install
the
sleep
and
not
sleep.
So
these
are
just
like
the
current
client
so
that
we
can
make
requests
now.
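These deployments come straight from the Istio samples directory; a sketch of the commands (bookinfo and sleep live at the standard sample paths, and notsleep ships with the ambient getting-started materials, so treat the exact paths as assumptions for your release):

```shell
# Deploy the bookinfo demo app plus two curl clients used as traffic sources.
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f samples/sleep/sleep.yaml
kubectl apply -f samples/sleep/notsleep.yaml
```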
A
We
are
also
going
to
install
the
book
info
Gateway,
which
includes
the
Gateway
resource,
to
expose
the
product
page
to
to
traffic
out
to
clients
outside
of
the
cluster,
along
with
the
simple
virtual
service
configuration
to
tell
the
istio
Ingress
Gateway
how
to
route
to
the
product
page.
So
I
actually
have
a
diagram
on
this.
So
if
I
can
bring
my
diagram
on
this,
so
basically
what
we
are
showing
right
now
is
sorry.
I
should
be
a
little
bit
more
organized.
Can
you
see
my
diagram
gel.
A
Okay
yeah,
so
basically
we
deploy
sleep
and
not
sleep,
and
we
also
deploy
the
rest
of
the
book
info.
We
have
issue
English,
Gateway
config
and
to
Route
traffic
to
product
page.
So
nothing
is
running
with
sidecar
today,
right
with
our
environment,
we
could
we
don't
even
need
the
ecod
if
we're
not
using
istio
Ingress
gateway
to
expose
product
page
to
the
outside
of
the
cluster,
but
we
we
are
using
sdod
to
config
istio,
Ingress
Gateway
right
now
at
the
moment,
all
right.
A
So
if
I
minimize
this
all
right,
so
we
got.
Let's
see,
we've
got
the
Gateway
deployed
and
we
should
be
able
to
visit
booking
for
application.
So
this
shouldn't
be
a
surprise
right.
So
this
is
actually
nothing
in
the
match
other
than
the
SEO
Ingress
Gateway.
So
like
from
sleep
to
product
page
or
from
not
sleep
to
product
page.
This
is
just
basic
kubernetes
right,
even
without
istio.
The
last
two
traffic
is
without
is
still
all
right.
So
let's
talk
about
how
to
add
our
application
to
ambient.
A
So
the
first
thing
we're
going
to
do
is
label
our
namespace
to
data
plane
mode
ambient.
So
with
that
we
do
expect
to-
hopefully
you
know
to
gain
some
benefit
immediately,
so
notice
here
we're
not
restarting
anything
right,
so
booking
for
application
was
running
two
minutes
ago.
We
are
continue
running
without
restarting
so
this
is
one
of
the
best
benefit
of
ambient.
Is
it's
a
transparent
right
to
be
ambient
as
part
of
your
environment
without
you
actually
even
need
to
noticing
it?
So
application
continue
to
work
right.
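The labeling step is a single command; a sketch assuming the app lives in the default namespace (the label key shown is the one used by the ambient alpha; check your release's docs):

```shell
# Enroll every pod in the namespace into ambient, with no restarts required.
kubectl label namespace default istio.io/dataplane-mode=ambient
```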
A
So
let's
talk
about
what
benefits
we're
supposed
to
catch
right.
So
if
I
go
to
here,
let's
see
you
remember,
I
installed
kayali
before
so,
hopefully,
I
can
bring
the
kayali
dashboard
and
I'll
bring
it
here.
A
Actually
it
might
be
easy
to
launch
it
in
my
other
window,
just
because
I'm
running
with
two
screen,
so
what
I'm
going
to
do
is
we
are
going
to
bring
up
the
kayali
dashboard
all
right,
so
you
can
see
I
have
a
bunch
of
application
right.
It's
kayali
is
reporting
the
missing
sidecar
because
they
are
not
running
with
sidecar
and
if
I
go
to
the
graph
here
so
hopefully
let
me
generate
some
traffic
first
of
all,
because
without
traffic,
sometimes
it's
harder
to
visualize
things
in
kayali.
A
So
the
first
thing
we're
going
to
do
is
we
are
going
to
generate
traffic.
So
let
me
so
yeah
so
basically
we're
generating
like
a
bunch
of
requests
and
whether
you
sleep
two
seconds
between
each
of
the
requests-
and
you
know
we're
just
generating
each
traffic
from
clients
to
the
clients
that
sleep
is
your
Ingress
Gateway
and
not
sleep
and
let's
go
ahead
and
enable
traffic
animation
security
there.
A
We
go
so
realize
these
right,
this
from
sleep
to
product
page,
not
sleep
to
product
page,
only
the
mutual
TS
traffic
right.
So
that's
with
you
just
needing
to
label
your
name
space
to
ambient
and
without
you
doing
any
additional
work
you
got
Mutual
cos.
So
that's
really
really
cool
yep
all
right.
So
that's
the
demo
for
kayali
and
the
layer
for
authorization.
A
I'm.
Sorry
just
add,
in
the
benefit
of
adding
your
your
paths
to
ambient
the
next
thing
we're
going
to
do
is
actually
I
want
to
show
John
talk
about
the
workload
XDS
right,
so
we
should
probably
show
that
first.
So
so
let
me
go
ahead,
find
out
the
zetamol
pods
and
we're
going
to
do
a
workload
commands
on
the
zetano
Pod,
one
of
our
zetano
pod
on
the
first
work,
ambient
work
of
yeah.
So
if
you
recall,
we
talked
about
the
workload
XDS
right.
A
We
talk
about
the
protocol
field
right,
so
when
everything
was
outside
of
ambient,
it
has
protocol
with
TCP
right
as
you
as
I
was
adding
book
info
and
sleep
and
not
sleep
into
ambient.
The
protocol
changed
to
Edge
bone
so
that
essentially
instructs
the
zetano
to
upgrade
the
connection
to
be
HBO
so
that
we
can
get
the
the
mutual
TRS.
The
secure
pedal
log
on
that
hiale
diagram
by
istio,
zitano
upgrading
the
connection
to
edgebone.
For
us,
you
can
also
get
a
little
bit
more
information
into
the
XDS
workload
by
running
a
config
Dom.
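A sketch of pulling that config dump from a ztunnel pod; ztunnel serves an admin endpoint on localhost inside the pod, but the port and path here match later builds and may differ in the alpha shown, so treat them as assumptions:

```shell
# Dump ztunnel's view of known workloads from its local admin endpoint.
ZTUNNEL_POD=$(kubectl -n istio-system get pods -l app=ztunnel \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n istio-system exec "$ZTUNNEL_POD" -- \
  curl -s http://localhost:15000/config_dump
```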
A
So,
for
instance,
we
can
exactly
into
one
of
the
zitana
and
do
a
config
dump
in
this
case,
so,
for
instance,
we
can
get
into
like
the
workload.
This
is
workload
for
review
and
it's
mentioned
it's
actually
ball.
It's
mentioned
it's
node,
it
mentioned
it,
doesn't
have
any
authorization
policy
at
the
moment.
Is
there
anything
else
we
should
highlight?
It
also
doesn't
have
any
waypoints.
A: Yeah, that's a great explanation. All right folks, so we've got an application added. The next thing we're going to do is apply a layer 4 authorization policy. Let me go ahead and paste the authorization policy. If you've ever used Istio's layer 4 authorization policy, you'll probably realize this is very familiar: nothing has changed, including the semantics, for this one.
A
You
basically
specify
which
are
the
principle
that's
allowed
to
access
anything
that
matches
label
application,
app
equals
product
page
right,
so
I
will
effectively
apply
this
Oscar.
The
Z
tunnel,
that's
running
co-located
with
the
product,
page
parts
to
say:
hey,
go
ahead,
only
allow
principal
from
sleep
and
it's
your
Ingress
Gateway
and
nothing
else
right.
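The policy being pasted is a standard AuthorizationPolicy; a sketch matching the description (the namespace and service-account names follow the usual bookinfo sample conventions and are assumptions):

```shell
# L4 policy: only sleep and the ingress gateway may reach productpage.
# Enforced by the ztunnel co-located with the productpage pod.
kubectl apply -f - <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/sleep
        - cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
EOF
```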
A
So
with
that
we
should
be
able
to
continue
visit
book
info
from
sleep
and
also
from
istio
Ingress
Gateway.
But
now,
if
we
go
to
from
not
sleep
to
visit
a
product
page,
you
will
see
it's
actually
failed.
So
let's
go
ahead
to
go
to
the
Z
tunnel.
So
if
you
look
at
product
page
right,
it's
running
on
ambient
worker
number
one.
A
So
if
we
go
to
zetano
ambient
worker
number
one-
which
is
this
guy
here,
so
if
we
look
at
the
logs,
we
will
see
the
accepted
connection
and
also
the
ABAC
rejection
connection,
so
that
is
the
r
back
from
2.5
to
2.11
that
got
rejected.
So
if
we
go
back
to
2.5,
it
should
be
the
not
sleep.
A
Okay,
so
premises
is
also
rejected.
I
guess,
because
the
premises
was
not
the
principal
list
also.
A
Why
I
would
see
more
rejection
than
I
was
expecting
because
I
was
expecting
only
one
rejection,
but
I
was
seeing
a
bunch
of
more
yeah?
Okay,
that
makes
sense
all
right,
so
that
is
layer,
four
authorization
policy.
Let
me
see
if
I
have
a
diagram
for
that
all
right.
So
basically
what
we
just
shown
right.
A
We
have
sleep
and
not
sleep,
and
it's
your
English
Gateway
as
of
a
client
and
the
Z
tunnel
on
the
target,
which
is
the
product
page,
see
tunnel
winter
had
allowed
the
car
from
sleep
and
also
from
istio
Ingress
Gateway,
based
on
the
layer
for
authorization
policy,
but
it
was
not
allow
the
access
Farm,
not
sleep.
A
So
that's
that's
what
we
just
demoed
the
next
we
want
to
demo
is
layer,
7
authorization
policy.
So
let
me
go
ahead.
Show
you
the
diagram
first
before
the
demo,
so
to
enable
layer,
7
authorization
policy,
because
z-tunnel
is
only
on
layer,
4
policy
enforcement.
A
You
will
need
a
waypoint
proxy
to
do
layer,
7
policy
enforcement,
so
the
first
thing
we're
going
to
do
is
deploy
a
waypoint
proxy
for
product
page
and
then
we're
going
to
deploy
some
authorization
policy
bind
to
the
Waypoint
proxy
that
we
just
deployed
and
we're
going
to
config
to
be.
A
sleep is allowed to call the GET method but not the DELETE method, and notsleep continues to not be allowed. And one thing I want to highlight here, because we have the istio ingress gateway, the ztunnel, and also the waypoint proxy: the two kinds of XDS config that John talked about earlier. Istiod is essentially sending the Envoy XDS config to the istio ingress gateway and the waypoint proxy,
A
while istiod is also sending the workload XDS config to the source and target ztunnels in this diagram. All right, so let's go ahead. Actually, before we deploy the authorization policy or even the waypoint proxy, we have to install the Gateway API, the Gateway resource. So John, do you want to mention why we need to do that in a minute?
B
Yeah, and there's actually a question about this in the chat, if we can pull up the last question. I don't know if we can.
B
That's from YouTube. Well, I can answer this one; that was the one I was talking about, but I'll get to this one first, then we can go to the other one.
B
Yeah, so ztunnel is deployed as a DaemonSet. Today, because ambient is alpha, a lot of the transitions, like adding ztunnels or waypoints, removing them, pods going up and down, have edge cases that aren't fully handled. Our intention, and our designs allow this, is for those to be completely zero downtime. So what that would look like is: when you do an upgrade, we roll a new ztunnel on the node, we shift the traffic over there,
B
and we close down the old one; just a standard gradual rollout. Now, scaling is a bit different. Scaling a DaemonSet is fairly hard, right? Kubernetes doesn't really have an out-of-the-box DaemonSet scaler.
B
So we have a few options. We could technically run multiple ztunnels on a node; that is an option, although I don't think it's the best one, since we don't have a very good way to do sophisticated load balancing when we're only doing this at the connection level. But we have designed ztunnel to be very lightweight, right? It's not doing any L7 HTTP processing, it's not doing any complex operations. All it does is take the traffic, stick it in the mTLS tunnel, and forward the bytes, right?
B
So that's something that's fairly cheap, and we think it can scale to meet the capacity of the node fairly easily, even with a single instance.
B
So that was an intentional design: move all the complex work to the waypoints, which are just standard deployments, and those components can scale easily, right, with a horizontal pod autoscaler or vertical pod autoscaler or whatever you want. You can have 100 waypoint proxies, you know. So that's the scalable component that does most of the work. It's kind of similar to how, even on a node today, you have your CNI or even the Linux kernel handling network traffic.
B
It's fairly low cost, right? You don't often run into scalability issues with the CNI or the network. Ztunnel is kind of similar; it's a little bit higher level, but we don't expect it to be a bottleneck.
B
Yeah, so the Kubernetes Gateway API, if you're not familiar, is this new API in Kubernetes. In some ways it's an evolution of the Ingress API in Kubernetes, but from Istio's perspective it's very similar to our own APIs, VirtualService and Gateway, except that it has learned from a lot of the mistakes we made. So in some ways I see it as an evolution of our APIs, but it's also in Kubernetes core, so it's a win-win.
B
There are, you know, better APIs that are easier to understand and reason about, and it's in the Kubernetes ecosystem, so it's vendor neutral, shared between everyone: common tooling, documentation, etc. It's currently in beta, both the API itself and Istio's implementation, and we do use the Gateway API in ambient.
B
So that's why Lin just installed the Gateway APIs. They're not installed by default on the cluster, but there's a simple command to get them. And then in the next step we're going to use the Gateway; kind of confusing naming, because Istio also has a Gateway, but the Kubernetes Gateway resource is how we actually deploy the waypoint. So I think that will be Lin's next step.
A
Yeah, so as we mentioned earlier, there is a waypoint command to help you easily deploy the waypoint. I just deployed the waypoint for productpage, and the next thing we want to do is view the configuration of the waypoint. There's actually a little difference from the initial launch: I believe the gateway class name is different, and also the annotation is now for the service account, bookinfo-productpage, and the listener, which I believe we didn't have before. So this is very explicit.
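The generated waypoint Gateway in this timeframe looked roughly like the following. Treat the class name, annotation key, and listener details as recollections of `istioctl experimental waypoint generate` output, not values captured from this demo:

```yaml
# Sketch of a waypoint Gateway for the productpage service account.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-productpage
  namespace: default
  annotations:
    istio.io/service-account: bookinfo-productpage  # which identity this waypoint serves
spec:
  gatewayClassName: istio-waypoint   # the class name that changed since the initial launch
  listeners:
  - name: mesh
    port: 15008      # HBONE port; this explicit listener is the newer part
    protocol: HBONE
```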
A
Along with the applied allowedRoutes, the status of the Gateway with the waypoint proxy should be very helpful, and at the end it should tell you everything is running, which I also have here on my cluster. All right, so that is the bookinfo waypoint proxy. Now that we have a waypoint proxy for productpage, let's go ahead and deploy the authorization policy that we talked about earlier; we want to bind the authorization policy to the waypoint proxy.
A
So if you recall the label we have, actually, we need to look at the bookinfo-productpage Gateway description here. It has a label called istio.io/gateway-name: bookinfo-productpage, right? So in the authorization policy we are exclusively selecting the waypoint proxy by matching that label, and then we specify that sleep and the istio ingress gateway are only allowed to call the GET method and nothing else, right?
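The layer 7 policy described here, bound to the waypoint by matching its istio.io/gateway-name label, might be sketched like this (the principal names again assume the standard sample apps):

```yaml
# Sketch of an L7 AuthorizationPolicy enforced by the waypoint proxy.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      istio.io/gateway-name: bookinfo-productpage  # selects the waypoint, not the app pods
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/sleep
        - cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
    to:
    - operation:
        methods: ["GET"]   # DELETE (or anything else) matches no rule and is denied
```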
A
So with the authorization policy applied, now we can run a command, and you can see that if I call with DELETE, it's denied. If I call from notsleep, it's also denied. And if I call with just the GET method, I do get the bookinfo page returned.
A
So let's go back to the waypoint, and you can see how the 403 RBAC access denied is imposed. This one is actually from Prometheus, I believe; this is the one.
A
Let me see if I can wrap this. Yeah, this is the one. So this is a 200 for productpage, and this is another one, sorry, a 403, for productpage. So that is the RBAC denial.
A
Obviously we also get a bunch of related Prometheus requests in the log too. All right, so that is the waypoint proxy. Oh, the other thing I want to show you: let's see if we can view the waypoint proxy in Kiali. So let's go ahead and generate some load on this; essentially what we were just sending, but in a loop. And if I bring up Kiali, hopefully it will actually show us a good amount of traffic denied. Let's refresh this. Yeah.
A
So as you can see here, we got a couple of red edges. If you go here, you can see we got the 200 code and the 403, the denied response code, 50% of the time. That's because we were calling DELETE 50% of the time and GET 50% of the time. All right, so that is authorization policy. Oh, one last thing I want to share. Let me get out of here.
A
So if you do view the ztunnel configuration, you will be able to see the authorization policy here. Yeah, you will be able to see the productpage-viewer. Oh, actually, John, this is an interesting point: it had the productpage-viewer when we had the layer 4 authorization policy, but not once we deployed the layer 7 authorization policy, because that is now enforced on the waypoint.
B
Right, productpage-viewer is an L7 policy, and ztunnel doesn't know about L7, so the rule just gets collapsed into deny everything, because ztunnel can't check whether there's a GET or not. Now, that rule isn't actually enforced anywhere in ztunnel, so potentially it could have been optimized away, but that's very much getting into the nitty-gritty details.
A
Yeah, one thing I want to mention, which I'm not going to show just because it's already very detailed: if you do a config dump on the waypoint proxy, or if you do a port-forward and view the Envoy configuration of the waypoint, you will see the HTTP RBAC filter that is enforcing the authorization policy in your waypoint configuration.
A
So if you are familiar with Envoy configuration, that shouldn't be a surprise for you. Yeah, so the next thing we want to show is also using the waypoint to control traffic, right? We're going to deploy a waypoint proxy for the reviews service and apply a virtual service to route 90% of the traffic to version one and 10% of the traffic to version two, and then, if we send a bunch of requests, you can see most of the traffic goes to version one, right?
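The 90/10 split described here is the standard Istio weighted-routing pattern. A minimal sketch, assuming the usual bookinfo reviews subsets (a DestinationRule defining the v1 and v2 subsets is also required):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90   # 90% of requests
    - destination:
        host: reviews
        subset: v2
      weight: 10   # 10% of requests
```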
A
So in this case, you can dump the waypoint configuration. Let me go ahead and set up a port-forward to the reviews waypoint; with that, I should be able to see it at localhost:15000.
A
Sorry, I was hoping to show you the waypoint config, but somehow I'm having trouble getting the config, so I guess I'll show it some other time. Let's recap.
A
Basically, what we have done is show you the traffic config: weighted routing, 90% to version one and 10% to version two, and that's enforced on the waypoint proxy, which is on the producer side. So notice that we're not deploying any layer 7 components on the client side, whether that client is sleep or notsleep in this case. Anything else you think we should cover? Oh, we have a couple of questions coming in.
B
Yeah, on the first one: the way ztunnel works is that we automatically use mTLS anytime it's possible. So when a client connects to productpage, for example, ztunnel is going to look at the destination, productpage, and figure out: is productpage going to accept mTLS, right? That could be because it has a sidecar, because it has a ztunnel deployed on the same node, or maybe even because it's using proxyless gRPC.
B
So as long as the destination has some ability to accept that mTLS traffic, ztunnel is going to send mTLS. We don't have an explicit way to say you should send mTLS to this target; it's all automatically discovered. So as a user, you don't ever have to worry about configuring mTLS at all. It will just automatically use mTLS wherever possible, typically.
B
You know, the reason in Istio before was that sidecars have some burden: lifecycle issues, resourcing issues, performance issues. We've addressed a lot of those in ambient, so we kind of expect that the new baseline is mTLS everywhere. That's part of what the name ambient means: a user will just have ztunnel deployed everywhere on their whole cluster.
B
On the next question: we don't currently support that, but there's no reason we won't in the future. We've just been trying to keep the feature set lean so that we can make progress. There are plenty of minor configuration additions that we'll be adding over the coming months.
A
Yeah, that sounds right. So, along with multi-cluster, you know, some of those multiple trust roots. Right now, as you can probably see, all of our documentation is pretty limited to single-cluster, relatively simple scenarios, and we want your feedback. I believe that's all the questions we have. Well, I think we're kind of running out of time, so I want to take a minute to thank you, John, so much for, you know, coming to the show and talking to us about the latest on ambient and the latest on ztunnel. How do folks reach out to you? What's the best way?
B
We have an ambient channel on Slack; it's just called ambient on the Istio Slack. If you have feedback, questions, anything, I think it'd be great to leave them there. Happy to discuss whatever.
A
Awesome, yeah. And folks, if you enjoyed the live stream or the recording, please give us a thumbs up. I am super grateful for everyone who watched our live stream; also subscribe to our channel, and happy learning Istio and application networking. See you at the next episode. Thank you so much, John, and thanks everyone for joining.