From YouTube: IstioCon 2021 Workshop: Using Istio
Description
#IstioCon2021
Presented at IstioCon 2021 by Lee Calcote & Abishek Kumar.
This workshop introduces service mesh concepts and each aspect of Istio. Gain hands-on experience with this popular tool as you learn how to deploy and configure Istio alongside microservices running in Kubernetes.
For upcoming events and workshops visit https://events.istio.io
A
So Abhishek Kumar and Utkarsh Srivastava join us today. Gents, do you want to say hi?
B
Hello, everybody. This is [inaudible].
C
Yeah, hi guys. I'm working as an intern, and I'm an open source contributor to Meshery too, and that's pretty much it. Feel free to jump in with questions and stuff.
A
So, thank you. Tim pointed out my first mistake, which was, hey, I was talking about this URL in yellow and not sharing the screen. So already we're withholding, which is not good, but yeah. It's calcotestudios.com; it's just where I end up posting things.
D
Just to be more organized and to get to know how far the audience is on their journey, I launched a quick poll. I'll leave it open for, let's say, 10 more seconds to get more answers.
A
Thank you for that, Pedro; that actually really helps. I realize it's occasionally unnerving for people to respond in a non-anonymous way, so this is great. And yeah, you're all at the right conference: about half of you have got Istio in production, so this is fantastic. One other thing: I spend my full-time focus in the open source community at Layer5; it's all meshy in there.
A
There's a number of open source projects that we focus on in that community. If you want to get a hold of any of the three of us after this week, please join the Slack there.
A
We will probably end up using them both. I shouldn't say probably: the labs take you through the use of both. If for some reason you couldn't get a hold of Docker, we can probably work around that and just use Kubernetes, but Kubernetes will be a requirement. So it's a BYOK; it's a bring-your-own-Kubernetes workshop.
A
Good. I think, hopefully, everyone can see that we're sharing now. All right: the requirements for Docker. By the way, I will be using Docker Desktop today; I think Abhishek might be using Minikube, and that's fine. Really, just about any Kubernetes environment is going to work for us.
A
There are a couple of considerations depending upon which one you're using. We find that a lot of people are going between Docker Desktop, Minikube, and Kind. Some of you might have access to Kubernetes clusters from managed service providers, you know, cloud providers; that's fine as well. The labs themselves will walk you through; they make the assumption that you're using either Minikube or Docker Desktop for Kubernetes.
A
It doesn't need to be big. It can be a single-node cluster; it can be small; that's fine! This is not the multi-cluster Kubernetes or Istio workshop.
A
So it's not so critical if you don't have four gigs of extra memory to give to Docker; it's nice, and you should give it that if you have it. We used to have a lot more trouble in earlier versions of Istio with making sure that there was enough memory on attendees' systems.
A
In terms of what version of Kubernetes you need to have for the labs: as long as you've got 1.9 or higher (and I think anyone would be hard-pressed to go find a version that old), you should be good on 1.9.
A
1.9 was when admission controllers and mutating webhooks still hadn't quite landed all the way, and we'll be using some of those today. Some of those terms are probably familiar to about half of you; for the other half, we're going to walk through how sidecars, how proxies, get injected today. So we've got some obligatory introduction to what a service mesh generally is; we're not going to spend a whole ton of time here.
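As a preview of the sidecar injection mechanics mentioned above, here is a minimal sketch, assuming Istio's default automatic injection via the mutating admission webhook (the namespace name is illustrative):

```yaml
# Label a namespace for automatic sidecar injection. Istio's mutating
# admission webhook watches for this label and adds the Envoy proxy
# container to pods created in the namespace afterwards.
apiVersion: v1
kind: Namespace
metadata:
  name: workshop        # illustrative name
  labels:
    istio-injection: enabled
```

Note that existing pods are not modified; they pick up the sidecar when they are next recreated.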
A
I'll say this: there's a short book that I'd written, and the second edition was just published about a month, a month and a half, ago: The Enterprise Path to Service Mesh Architectures. If this space is still new to you, it's a free report, and I highly recommend it.
A
It's a short read that helps explain some things about why service meshes are here and how they compare and contrast to technologies that you probably already know. What value do they add in the face of a container orchestrator? What value do they add in the face of an API gateway?
A
So if you articulate the value of a mesh, or the functionality of a service mesh like Istio specifically, you can characterize it, you can bucketize these things, in these ways. (I'm trying to get all my Zoom windows out of the way here.) A lot of times you'll hear people speak to the value and functionality of a mesh in terms of about three pillars: security, observability, and traffic management.
A
I'm going to toss five at you today and see if these don't make sense by the end of lab 8, by the end of the workshop. So, traffic control: we're going to spend a fair bit of time there, going through labs and getting fairly granular with how we're going to steer traffic around service meshes: intercept requests, intercept all of the traffic that goes in between each of the individual microservices (micro or macro, in Istio's case), whether they're literally running in containers on Kubernetes or running in VMs or on bare metal outside of Kubernetes. Istio supports workloads in either environment.
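As one concrete example of that granular traffic steering, a sketch using Istio's VirtualService resource (the service and subset names are illustrative; the subsets would be defined in a matching DestinationRule):

```yaml
# Weighted split: 90% of requests go to subset v1, 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```

Shifting the weights over time is the basis of the canary-style rollouts the labs explore.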
A
You'll find that your mileage with this functionality will vary a little bit between those environments. We're going to look at just Kubernetes workloads today; we're using two different sample applications, and we'll run those both in Kubernetes. But part of the reason that people come to a mesh is to modernize their existing workloads.
A
The mesh intercepts those requests to your services, and does so transparently, to provide you granular control over whether or not to accept them, reject them, redirect them, rewrite them, mutate them, maybe to change them from one protocol to the next. It's fairly powerful.
A
As a matter of fact, even though we're on record here, I'll say that that particular notion is part of why I left Cisco and my role there many years ago, frustrated that we weren't doing more with the network. So I'm super pleased about sort of this next-gen SDN, if you will, that a service mesh offers. Resiliency is something that a service mesh very much provides; it's fairly universal functionality that a mesh will give you things like retries and the ability to drop packets, to inject faults.
A
And to rate limit; some of them let you do so very intelligently. Security: each service mesh brings its own sense of identity. They will automatically assign a unique identity, a certificate, to each individual workload, to each individual service. It's through that that a few different things are facilitated, most notably mutual TLS: mutual authentication and encryption in between your services. For some people, for some organizations, that's the golden ticket.
A
That's the entire reason that they're deploying the mesh: to be able to get that east-west traffic in between their services secured.
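As a sketch of how that east-west encryption is typically switched on, assuming a mesh-wide policy is what's wanted, Istio's PeerAuthentication resource in the root namespace does it:

```yaml
# Require mutual TLS for all workloads in the mesh: STRICT mode
# rejects any plain-text service-to-service traffic.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```

The same resource can be scoped to a single namespace or workload when a mesh-wide rollout is too aggressive.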
A
As a matter of fact, for the majority of individuals that I speak with, it's observability that gets them hooked. There's a fifth area, a bucket of functionality, that I would say is the most intriguing and is somewhat emergent for service meshes, and we'll get to an example of this today.
A
I'll stop there and we'll just give an example of that later. It'll be about infrastructure logic that's inside of your apps, and pulling more of that out than I think some of you are aware of today. So Istio, specifically, in its definition: it's an open platform, as I was saying earlier, to connect, manage, and secure microservices. Each of these features Istio delivers, and delivers in spades. Istio itself as a project, what was it, 2017?
A
I'd have to go back and look; I think March 2017 was when it was first announced. It's gone through its own iterations. We're going to deal with Istio 1.9 today in the labs, the latest and greatest; 1.9 came out, was it just last week? Between versions 1.5 and 1.6, some significant architectural changes occurred in the project.
A
We'll look at that. So I had said before, observability, it's a big deal. It takes the blinders off for a lot of people; some people are still flying blind with what's going on in their cluster, what's happening in their services. That isn't to be entirely oversold, though.
A
When you look at the metrics, the logs, and the traces that we will see today, there's a difference between, well, I hesitate to use these terms, I still haven't figured out better ones just yet, but in our industry they have been referred to as white box monitoring, your ability to see inside of the application and what's going on inside the app, and black box monitoring, where the system itself, the application, is opaque and you're just getting telemetry; you're able to observe it from the outside.
A
The service mesh kind of does both. The default metrics and the default logs that Istio will generate are really focused on traffic that's going to and fro your services, less so on internal logs generated from the application itself, from, you know, the business logic that you've written and that you're running.
A
It will still facilitate, alongside Kubernetes or what have you, the collection of those, but the metrics and the logs you're going to see are much more from an opaque perspective. The distributed tracing that a service mesh facilitates, now, that's much more about what's happening inside of your app, and you can answer some really difficult questions with distributed traces. One of those tough questions is: why is my service slow? It's not...
A
...why is my service down? You can tell the service is down: it's not responding, and you go figure out why that is. But if it's slow, that's often a lot harder to answer. Another big category we mentioned was fine-grained traffic control and resiliency. I did mention a couple of things around retries and fault injection, but there are also timeouts; some service meshes (not Istio) have error budgets; and circuit breakers and health checks, very powerful things.
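To make those resiliency knobs concrete, here is a sketch of retries and timeouts declared on an Istio VirtualService rather than written into application code (the host name and values are illustrative):

```yaml
# Retry failed calls to ratings up to 3 times, 2s per attempt,
# with a 10s overall request timeout.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - timeout: 10s
    retries:
      attempts: 3
      perTryTimeout: 2s
      retryOn: 5xx,connect-failure
    route:
    - destination:
        host: ratings
```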
A
There's some chaos engineering that can be done on top of Istio, too. Then there's ingress and egress routing; we'll look at egress, which ends up being used a lot of times with a security-oriented focus: what's exiting your apps, what's exiting your infrastructure, who's reaching out, and what that data is. So I'm really stoked about what service meshes bring. One thing that I don't know that a lot of people reflect on that much, or speak to, is this.
A
We talked about in somewhat hard terms what a mesh will provide, what Istio will provide, but also, in these softer terms, Istio and other meshes will provide value to these individual personas. Not everyone falls into these particular three buckets, but from my perspective it's probably most obvious how the service mesh benefits an operator, by way of the observability, the things that we just rattled off, and the ability for them to get that type of control in one place, described declaratively, over their infrastructure, over their...
A
...services. Well, the same thing for developers: they can be empowered through that white box view, the distributed tracing that we just mentioned.
A
They can just change the behavior by updating some YAML, by reconfiguring the service mesh. Okay, so we're going through concepts, and this is good, but I do want to get your fingers on the keyboards sooner than later, before we all get bored, so I'll highlight these things real briefly and we'll move on.
A
That is to say, service meshes are a thing in part because container orchestrators don't bring all that you need. They do some things at layer seven, clearly, and this is a little bit tongue-in-cheek, but there are so many layer four and below concerns about the infrastructure, nodes and clusters and admission et cetera, scheduling and resources, that some of those higher-layer needs have been left unmet, and hence service meshes have stepped forward to fill some of that gap. You're all familiar with API gateways.
A
Microservices API gateways: there's quite clearly overlap with a service mesh and its functionality, in part because literally the same proxies that Istio and other meshes use are the same technologies that you use in your API gateway. A lot of the difference comes down to the deployment models and the placement of those proxies. Think about it: if you wanted to make a highly resilient application and have it deployed in a highly resilient and highly scalable way...
A
Well, you'd have many points of indirection. If you had a three-tiered web app (web, application, and database tiers), you would have, you know, a virtual IP address or a load balancer in between each of those three tiers: one out in front and then one in between each of those three tiers.
A
There's a little bit of overhead to manage and balance there, and that's been a study of ours; we'll take a look at that in one of the labs. Cool. A lot of times, people will describe a mesh as getting infrastructure concerns out of your services.
A
I think some of this is hopefully fairly obvious to you as you think about whether or not to write this logic yourself: when to retry a failed connection request, when to rate limit a client from hammering on your service. You can write that in code, or you can just ask the mesh to do it. So, in summary, service meshes, and Istio among them, are used to modernize existing infrastructure.
A
One interesting thing that I alluded to, but which I don't know was entirely obvious on that slide where we looked at developers, product owners, and operators, is this. (And actually, for anyone who would like to share in the chat, for those of you that are running Istio now, I'm particularly curious if this has been your experience, or how you currently operate.)
A
There can be something of a diffusion of responsibility. If we boil the world down to just developers and operators: who's responsible for enforcing, or defining and enforcing, a rate limit? Is that the developer, or is that the operator? For a retry, where should that be implemented, and who should decide how many times you should retry? And by the way, that answer probably changes over time as your infrastructure changes as well.
A
My hunch is (and again, please sound off if you would) that the answers here are different in different environments. Service meshes hopefully bring some uniformity to that, and irrespective of whose responsibility it is, it becomes evident that there's a single place to define the behavior.
A
We're on the cusp of getting our fingers warm; it's important for us to just level-set briefly on a high-level architecture.
A
A lot of the power of the mesh is about networking, and about enabling, hopefully, all of the rest of us non-networking folks with the power of a network, putting it into our hands. The workhorse of the service mesh is these proxies that reside in the data plane. They're intercepting...
A
This is the in-band aspect of a mesh: they're intercepting and seeing each of the requests that go to your services, and hence the power of a mesh, the ability for them to do any number of things with those packets: generate telemetry, block things, require or, you know, authorize things.
A
Istio comes with an ingress and an egress gateway, but not all meshes bring those, or they can be optional; for some Istio deployments, the egress gateway can be an optional thing. The control plane is where, you know, the service mesh acts as something of an element management system.
A
If you think of your collection of proxies as elements to be managed, proxies to be managed: one way of describing the control plane (there's a lot that goes on there) is as configuration management for your proxies, configuration management for Envoy in this case. It's where a lot of you will interface with Istio and configure the mesh. Beyond that, there are a number of things you might want to facilitate on top of a mesh to harness its power. You might want to do, well, chaos engineering.
A
You might want to have some expanded governance, expanded policy; you might want to have approval flows about who can change what; you might want to do multi-mesh things between different types of service meshes; you might want to get some additional observability. There's a long list of things that you might want to do with a management plane.
A
So I'd asked if any of you are of a networking background, because these terms are sort of embroiled in the annals of network engineering; they're fairly familiar there. And for a lot of you who have been doing things in and around Kubernetes, control planes and data planes are probably familiar too, if you think of the Kubernetes kube API, kind of the Kubernetes... what do we call it now?
A
Great, okay. On Istio specifically: this slide, this architecture that you're seeing here, is updated, and I'm intentionally showing it because it shows that there are different functions happening within the control plane. Today, when we go through the labs, we won't see each of these components that are called out by their older names: Citadel, Mixer, Pilot, Galley.
A
We won't see them in our labs explicitly; we'll see them implicitly, inside of a single binary. That binary, that app, is called istiod. The functionalities that Pilot and Galley and Citadel and Mixer have provided in the past are, for the most part, still around; it's just that they're not necessarily deployed individually. There are a lot of different deployment models for Istio, though. So, to be clear, this is one way of visualizing how it is that requests enter into Istio through the ingress gateway.
A
That's essentially the difference between the containers that you might be running in a container orchestrator today, those applications that are not on the mesh, and the same ones on the mesh: they've got a proxy in front of them now. For Istio, that proxy is sidecarred, so it's in the same pod in Kubernetes. For a couple of other service meshes, that's not the case; they use node-level proxies. There are some trade-offs to those different designs.
A
What they're written in leads to some interesting things; we at Layer5 end up dealing a lot with other service meshes, and so there are some interesting comparisons, but we're here at IstioCon. So, the control plane, and actually Pilot and Galley, those prior names for these components: there's an aspect of Istio that's responsible for synchronizing service discovery. Istio itself needs to be very tightly aware of the state of Kubernetes, or whatever underlying platform you're running, and understand as and when Kubernetes reschedules or schedules existing or new services. That way it can be aware of the state of the cluster and, in turn, take that configuration, that understanding, and push it to the data plane, to each of the Envoys that are running.
A
So that's an ongoing process that needs to be there. Mixer is probably the biggest shift; I mentioned it happened around 1.5 to 1.6. It was basically the move to what's referred to as a mixerless model: taking Mixer, which is (or was, in the past) about telemetry gathering, out of the control plane and moving that function.
A
Now these telemetry filters are WebAssembly-based Envoy filters that are responsible for some of the same things, responsible for generating telemetry, logs, and metrics, but they reside in the data plane. There's been a little bit of functionality that's been left behind, but there have also been some performance improvements through this model of moving from Mixer to telemetry filters.
A
Okay, again, we won't look at a component called Citadel today, but this capability is inside of istiod, and that is a managed certificate authority. This certificate authority provides verifiable identity: it generates certificates and manages them, refreshes them, rotates them, and assigns a certificate per service. Each individual workload that gets onboarded onto the mesh gets a unique identity, and that is used for authentication, authorization, and traffic encryption.
A
So, okay, good. There's a landscape out there if you're interested in the other service meshes, with a lot of details about when they came to be and how they work; you might want to go check it out at layer5.io/landscape.
A
If we get to the end of our workshop and we want to talk about multi-mesh things, or some of the specifications and standards that describe them, we will. We'll look at Service Mesh Performance as a standard here, just through our labs, and at how SMI, the Service Mesh Interface, fits in. If these terms are new to you and they're of interest, come and ask after the workshop. Okay. So, lastly, I think this is the last concept: I mentioned service mesh patterns as well.
A
This is the first of a two-part book series on service mesh patterns, with 30 patterns per book. The early release, I think, is going to come out in about a week. We have shared the list of those 60 different patterns, at least the names of them and the categories of them.
A
The link used to be in this slide; I don't see it now, but if that's of interest to you, ping me. Okay, let's keep moving. Good. So now we're going to go through the last prereq: we talked about Docker and Kubernetes, and our last prereq, to get into our first lab, is to install Meshery.
A
Meshery is a management plane, a service mesh management plane. It's within the CNCF landscape, and it's discussed a lot inside the CNCF Service Mesh Working Group. It implements both Service Mesh Performance and Service Mesh Interface to help manage Istio. It'll be very useful for us today, one, because Meshery was built for this: it was born of workshops, to help people learn service meshes and operate them well.
A
Also, to answer questions about what I had said before, that the data plane is the workhorse of a service mesh and that you need to pay for that functionality. By pay for, I mean there's some performance to manage: how much CPU, how much memory, how much latency are you incurring based on what you're turning on in the mesh?
A
For my part, I would consistently and perpetually say that it's absolutely worth the overhead. If you're getting metrics or logs or traffic control somewhere else today, you're paying for it over there. As you deploy Istio, you get that in a central place, you get uniform governance over it and a uniform way of having all of that operate, and you're paying for it in the same way.
A
Here, it's actually just easier to measure what that overhead is. So that's another reason why we'll use Meshery today: it's going to help us get Istio up very fast, get up a bunch of different sample apps, and it has a few different load generators built in, with some statistical analysis, some performance analysis that it does. That'll be helpful to us as we do some traffic splitting and explore some of the resiliency aspects of traffic management.
A
The way to get Meshery going depends upon what type of system you're running. The way that I'll end up running it today, as I mentioned earlier on the call: I'm on a Mac, so I'll use Docker Desktop on the Mac.
A
So now is a good time to take a moment to get Meshery going. This bash script will work for Mac or Linux; it may even work for Windows, I don't remember. I'm going to get my fingers a little warmed up, and we're going to get this prereq installed.
A
That's a convenient way of getting mesheryctl; mesheryctl is the command-line client for Meshery. There is a new version of Meshery that will be announced this week; the latest version is v0.5. Meshery as a project is about a year and a half, coming up close to two years, old now. Like I said before, it was built out of workshops like this, out of questions that we keep getting from different people. So there's a release candidate, RC2, that should be available to everyone. Once you have the command-line client, I'm just going to verify what version I'm running: mesheryctl version... v0.5.0-rc2, great.
A
I apologize. Just as a brief recap, nothing special: what I've done is just run brew upgrade mesheryctl. For others, if you're installing fresh, it's more of a brew install mesheryctl (you'll get the tap for it if you're using Homebrew here), and then you'll run mesheryctl system start. I'm going to copy that. If you'd rather use the bash script, it's here; there's a list of different install methods.
A
You don't have to be running this yet. You'll do a mesheryctl system start; if you're not running Docker, it'll probably say Docker's not running, do you want to start it? If it's your very first time running Meshery, it will probably also say you don't have a context configured, and you do want to configure a context. A context is not very different from a kubectl context.
A
It describes a Meshery deployment. When Meshery installs, the command-line client keeps track of its context; it keeps track of how many Meshery deployments you're managing in a config file in your user's home directory, under a hidden folder, .meshery. That config file is just a simple YAML file that describes your Meshery deployment. For mine, I'm going to use Docker as my platform.
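Purely as an illustrative sketch (the exact keys in Meshery's config file vary by version, so treat every field name below as an assumption rather than authoritative), the context file described here might look something like:

```yaml
# Hypothetical ~/.meshery/config.yaml: one context that runs
# Meshery on Docker rather than inside the cluster.
contexts:
  local:
    platform: docker   # "kubernetes" would deploy Meshery in-cluster
current-context: local
```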
A
And hopefully each of you will as well. Even if you have a Kubernetes system, it's probably most convenient to spin up Meshery on Docker and point it at your cluster, so that Meshery in this case is running external to your cluster. It's fine, too, if you want to go ahead and deploy Meshery as an app inside of Kubernetes.
A
Good. So you do a mesheryctl system start, and that should do a couple of things. It'll download different adapters for managing different service meshes; clearly we're focused on Istio. It should also pull up your system's default browser, and when you get there, you'll see this screen: the screen to sign in and choose a provider.
A
We work with a couple of different universities on research around the performance of a service mesh, and depending upon what provider you choose... I think right now there are two that you'll see. One is to not use a provider at all, which means that you'll use Meshery as just a single tool, an ephemeral tool that you sort of pull out of your tool bag and use.
A
Either way you go is fine; it will work with today's labs.
A
So, okay, good.
A
So with that prereq done, we're ready for our first lab, and, by the way, we're also probably ready for a link to our labs. I think Pedro said that he shared a link to the labs with all of you. I'm going to share two links with you, and actually, Abhishek, if you don't mind sharing them: as I mentioned earlier on the call, there are two workshops that we'll deliver, and we'll start with this one.
A
It's the intro to Istio, and there's an advanced Istio, and we're going to try to combine them a little bit today, to hopefully serve all of you on both ends of the spectrum of where you're at. So first at bat will be to go out to...
A
They're there for self-study, so please study them. Some of you are finding broken links to old images, maybe old pictures; please point them out so that others can get them fixed.
A
You don't need to be a Kubernetes expert to go through this. So in lab 1 we're going to set up Istio and get it deployed. Now, let me orient you to the labs a little bit. Today we're going to walk through these workshops using Meshery, while also hitting Istio directly and exploring Istio internals. Each of the labs is broken into two sections: the top of the lab is about how to use Meshery to go through the lab, to deploy Istio, to play...
A
A somewhat common question people ask is: hey, do I need to have all these adapters for the other service meshes? No, you don't, and you can turn those off so that Meshery just has the one adapter, Istio. The value of Meshery really isn't so much about managing all the other meshes; it's more about, well, some things that we'll walk through.
A
I won't spoil the surprise, but we'll see. So when you're signed in, you'll need to...
A
One of the first things you need to do is verify that you're able to talk to Kubernetes. Meshery really tries to make the install process very easy: you install Meshery, it automatically brings up your browser, and it searches your home directory for your user's kubeconfig. It will leverage your user's kubeconfig, assuming that that's probably the Kubernetes environment that you'd want to use.
A
Load up your kubeconfig; for me it's Docker Desktop. You can click on this chip here just to make sure that Meshery is in communication with Kubernetes. We're working with a release candidate that, I think, has a bug in its settings area.
A
If you run into this and it's an issue for you, let folks know in chat.
A
Great question about the version of Istio that we're going to be working with today and whether or not that really matters: it doesn't. The labs that we're going to use should be just fine going back from 1.9, which is what Meshery is going to want to deploy; going back to about 1.5 should be fine.
A
Wow, real-estate management of this screen. Okay. So to install Istio, you'll go over to Manage: manage the lifecycle of your service mesh. By the way, if you want to verify connectivity to your Istio adapter, the adapter that Meshery uses to talk to Istio, you can click on it and test it out. Again, if you're having problems, message in the chat; I don't see that anybody's having problems, though. Good. First step: install Istio. This installation is meant to be...
A
It is meant to be easy. It defaults to the demo profile of Istio. Istio has this concept of configuration profiles, of which it has a few published. One of them is called demo, which enables a few features that we will use, so we can learn a lot about Istio immediately when we do that.
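For reference, selecting that same profile outside of Meshery looks like this as an IstioOperator resource (equivalently, `istioctl install --set profile=demo`):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: demo   # enables the demo feature set used in these labs
```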
A
As Istio installs, we'll get the istio-system namespace there. Lab 2: let's deploy a sample app. Istio's default sample app — its canonical sample app — is called Bookinfo. Many of you are familiar with it already, and many of you aren't.
A
This is a great way of describing how those sidecar proxies come into play and get used within a service mesh — within Istio.
A
...namespace, and so you've got one service for each of those pods. The ingress gateway is the one that is most intriguing, at least initially. It's our point of entry for traffic that wants to reach services that you've onboarded into Istio.
A
It's the point of — I keep wanting to say ingress — the point of entry where those requests come in. It's the first point of control, the first point to potentially generate some telemetry, the first point to reject some packets or redirect them, and as such, by default it comes with a few open ports. So with that, let's go into lab 2 and deploy Bookinfo as a sample application.
A
The app is a beautiful sample application — not because it has a behemoth Java app, which is frustrating sometimes, but because it is an application made up of four different services, each written in a different language, showing off part of the power of a service mesh: that you get all this functionality
A
irrespective of what language your apps are written in. This application, in terms of functionality, is very simple: it's a web-based catalog of books that people can use to review books, look at their details, and assign stars to them — fairly small in nature.
A
You can take this sample application and deploy it off the mesh — off of Istio — and go poke at it and run it. When you pull up its web pages and look at the reviews of the different books, the traffic flow will look like this: those yellow lines represent requests that start out here, from you, go in, and are directed straight to the application itself.
A
The application responds to the requests. When we go to deploy the sample application on Istio — well, I shouldn't say by default — Istio will automatically inject an Envoy as a sidecar adjacent to each of the application containers, and by adjacent we mean inside the same pod. Containers in a Kubernetes pod share the same networking namespace.
A
That's some of the magic sprinkles, if you will, of how Envoy gets sidecarred as a proxy transparently adjacent to your application. We're going to go deploy this sample app and look a bit more specifically at how traffic is being redirected.
A
Okay, so these sidecars get automatically injected by default, like I said. The way that Istio controls this injection is — well, it's in a couple of different ways, but most generally it's based on labels associated with your namespace.
A
So you label a namespace, and a namespace that carries this injection label is what Istio will act on. Istio works with a Kubernetes admission controller: it registers a mutating webhook to ensure that any time a new workload, a new service, is to be scheduled into a namespace that has this label, a sidecar is provisioned — that Envoy is inserted into that application manifest automatically and becomes a second container.
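The label in question, expressed as a namespace manifest (equivalently, `kubectl label namespace default istio-injection=enabled`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # mutating webhook injects sidecars into workloads here
```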
A
So today — and this is something you could do with Meshery real quickly — you can go deploy
A
Bookinfo off the mesh and see it deployed. You'll see just one pod — oh, I'm sorry, one container per pod — if you do that. Since I'm talking about it, I'll do it real quick: kubectl get pods. Let's sit here and watch for any pods to come up. If we go into Meshery, we can take — well.
A
What's nice here is that you can take Bookinfo, or other service meshes' sample applications, and cross-pollinate them, so to speak, across different meshes to better learn and understand how they work. We'll be working with Bookinfo. Currently we're deploying that into the default namespace of your Kubernetes cluster. As it comes up, you'll see that there's one container running and ready of one container. To explore this a little bit: that means that this app is running off the mesh — it hasn't been onboarded.
A
Right now we're running in an environment with a fresh Kubernetes system, so the default namespace does not carry istio-injection as a label. When you deploy an application, like we just did, into that default namespace, you won't have the sidecar automatically injected. If you go label that namespace, that's going to change how we're going —
A
So again, if we go back and watch our pods, now that we've got our default namespace labeled with istio-injection, when you redeploy the sample app you'll see that it has two of two containers running in each pod. To explore that a little bit more: it's actually not just that there are two containers in each pod — there's actually a third container that had been there, ever so briefly. Let's look at that. What do I mean by that?
A
We can get our list of pods, take any one of the Bookinfo pods — it doesn't matter which — and we can get that pod and output its innards, its running config, in YAML. You can do a -o yaml, or you could describe it; either way. If we can go back to the top of this.
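Abbreviated, the interesting part of that output looks something like this — a sketch only, with container names from the stock Bookinfo sample; the real output is far longer:

```yaml
# Sketch of a Bookinfo pod spec after sidecar injection.
spec:
  initContainers:
  - name: istio-init     # runs briefly, rewrites iptables, then exits
  containers:
  - name: productpage    # the application container
  - name: istio-proxy    # the injected Envoy sidecar
```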
A
A lot of this should be familiar to you, but if you take a look and search for the containers stanza, the containers section, you'll see that we've got our application container; the name of that container is productpage.
A
...and Envoy. What you may not have been entirely aware of before is this notion of an init container — an initialization container, one that runs ever so briefly to do some pre-provisioning before Envoy comes up. That pre-provisioning work is to run a small script to reconfigure iptables within that pod.
A
It's those iptables rules, then, that get configured to redirect all inbound traffic to the pod to first go through the sidecar proxy — through Envoy — and then be passed along, assuming that traffic is good, to your application. The reverse is true when your application goes to address an upstream service or to pass along that request.
A
Those iptables rules will be in effect to ensure that traffic is routed back through Envoy, through that sidecar proxy, to have any policy applied, if any, and then directed to where it needs to go. So yeah, there's an init container that runs very quickly and performs that task of reconfiguring iptables.
A
There are ways to manually control when a proxy is sidecarred and when one is not. Historically, it's been based on labels on the namespace.
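Beyond the namespace label, injection can also be overridden per workload. In Istio 1.9-era releases this was an annotation on the pod template — a hypothetical fragment, not from the labs:

```yaml
# Hypothetical deployment fragment: opt this one workload out of injection
# even though its namespace is labeled istio-injection=enabled.
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
```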
A
You can take sample apps like Bookinfo and deploy them on the mesh or off the mesh — on the mesh being more or less defined by when it has a proxy sidecarred to it. We mentioned that there were two other components that got deployed when Istio got deployed: the ingress gateway and the egress gateway. We can go take a look at the ingress gateway.
A
This is the point of entry for traffic coming in — traffic that will be directed to the first application in the Bookinfo app, called productpage. So, let's go into lab three.
A
To inspect the ingress gateway: the ingress gateway, just like the proxies that are sidecarred to your app, contains an instance of Envoy. Let's go take a look at that instance and just look at the Envoy config real quick.
A
If you feel like you haven't seen enough YAML, let's make sure that you get your fill of YAML. Let's go take a look at an Envoy config. Okay, to slow this down a little bit: if you go and look at the istio-system namespace, you'll have the ingress gateway that we looked at before, previously deployed.
A
Okay, so once you're inside — this is an instance of Envoy — go look around. Since you're inside Envoy, you can take a look at some of the metrics it exposes; Envoy exposes a lot of metrics. It's popular in part because it has well-written APIs that service mesh control planes can be wrapped around.
A
Actually, I think that particular location is covered in a different — we'll save that for another lab.
A
Good. Okay, so obviously ingress gateways are fundamental to the way in which Istio does traffic management. There are a few core concepts to Istio's traffic management: gateways and sidecars, which we've been talking about, plus a couple of logical constructs. There's the physical gateway that we're looking at here, and there's a logical gateway — a configuration that you can apply to your physical gateway. There are destination rules and virtual services as well.
A
So let's exit out of your ingress gateway and take a look. We also see the gateways here, if you get that custom resource.
A
And sorry — you can abbreviate gateways as gw, if you like. There's a gateway that gets installed when you install Bookinfo using Meshery, and that logical gateway is called the sample-app gateway. Behind this logical configuration, Meshery will deploy a few different sample apps.
A
You can take a look at the config; it's not very long — as a matter of fact, it's about yay long. What this logical gateway configuration, this custom resource, is doing: it gets applied to the physical ingress gateway, and what it says is that this sample-app gateway is interested in traffic that comes in over port 80, traffic that is using the protocol HTTP, and traffic that is coming from anywhere — from any host.
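That logical gateway, sketched as YAML — the resource name here is assumed; check the lab's manifest for the exact one:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sample-app-gateway   # name assumed for illustration
spec:
  selector:
    istio: ingressgateway    # bind to the physical ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                    # traffic from any host
```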
A
Okay, let's also take a look at the virtual services that may or may not be present. When we installed Bookinfo, there was a virtual service deployed along with the Bookinfo manifests; that virtual service is called bookinfo. This is an appropriate time to talk about what virtual services are and what destination rules are.
A
The virtual service bookinfo — I'll output that to YAML — is a little bit of a longer configuration, but not by much. A virtual service in Istio is a fundamental building block of Istio's traffic management. It's used to do maybe two different things, depending on how you want to think of it: it's used to route traffic, and it's also used to match traffic.
A
For a couple of you that raised your hands as being either currently or formerly network engineers: sometimes you'll work with systems that have traffic matching as a separate construct —
a separate thing that you can write out to describe how you want to match traffic, and then, separately, what to do with it. In a virtual service this is a little bit munged together. First, this virtual service is configured and applied to our sample-
A
app gateway — recall that our sample-app gateway is interested in traffic on port 80, HTTP traffic. This virtual service is applied to traffic that's directed to bookinfo.meshery.io. In lab 3, there's a section about creating DNS entries so that we can do some intelligent things.
A
So we can use the power of SNI, or Server Name Indication, to run multiple services behind port 80 — to share port 80, or to share port 443, and expose multiple services behind that port. SNI is used to grab the host name of packets that are coming in, understand where those packets are directed by that hostname, and use that to do two things.
A
One is to share the same port for traffic that's aimed at two different applications. Two is to be able to direct traffic even though it might be encrypted, since just that server name indication is exposed — so unterminated TLS traffic can still share the same port as traffic to other applications.
A
Okay, back to the virtual service. This one is applied to the sample-app gateway; it will function against traffic that's directed to bookinfo.meshery.io, our sample app, and it will match against these paths.
A
In this case, it's going to route it to a particular destination: the first service in our sample app, productpage. When it's deployed on Kubernetes, productpage will be running on port 9080; that's where it exposes its HTTP services.
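Putting those pieces together, the bookinfo virtual service looks roughly like this — the match paths are approximated from the upstream Bookinfo sample:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "bookinfo.meshery.io"
  gateways:
  - sample-app-gateway       # the logical gateway above; name assumed
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    route:
    - destination:
        host: productpage
        port:
          number: 9080       # where productpage exposes its HTTP services
```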
A
Okay, one thing for us to do is to establish some destination rules. We talked about virtual services — about how they're for matching traffic and routing traffic. Destination rules are another core traffic management construct in Istio. Destination rules apply much more outbound: they're about defining and applying policy to where traffic is destined, as the traffic exits
A
a pod, exits an Envoy, exits a proxy. One of the more popular things that destination rules are used for is to create subsets, another Istio construct. To use Bookinfo as an example: there's one service, called the reviews service, that has three different versions of the same service.
A
So again, I'm going to go back to Kubernetes and set up this watch again — I'm going to get destination rules in the default namespace — then go over to Meshery. From the lab, you're able to copy these destination rules.
A
If you look at the destination rules, by the way, you'll see that there are two versions of the ratings service and what ends up being three versions of the reviews service. We'll copy this and go into Meshery, where you can apply custom configuration: click on the play button, apply it in the default namespace, paste in your YAML — your destination rules — and have those applied to Istio, to your cluster.
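The reviews destination rule being pasted in defines its three subsets like this (this matches the upstream Bookinfo sample):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:              # each subset selects pods by their version label
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```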
A
We'll see that four new destination rules get configured. We won't go introspect them, because this is exactly what we just pasted in. That gets us set up to go do some interesting traffic management things. Given the time, I'm going to skip over this section about looking at Envoy's config and get into some traffic management and some observability.
A
Istio itself comes with a few things classified as add-ons; three of them that we'll take a look at are Prometheus, Grafana, and Jaeger. These are observability add-ons.
A
Good, that's where we want them: get Prometheus turned on, maybe Jaeger as well, and have those come up. So we get Jaeger and some Prometheus going. Meshery, given that it needs to manage any changes going on in the mesh, should then automatically be aware of where
A
Prometheus has been deployed. I think there's a small bug in this second release candidate that I'm running — if you're running the same release candidate, you'll probably get that page not found. Let me give you all the direct URL where, now that you've deployed Prometheus and Grafana, you can connect Meshery to Prometheus and
A
Grafana. Here, because Grafana was just deployed, Meshery discovers it, and you can connect to the address that's inside of your cluster. You can also connect to Prometheus, if you like.
A
That way, Meshery is immediately able to take advantage of some built-in
A
boards and display the metrics that you would otherwise see in Grafana. You can also bring up Grafana directly and take a look at those same boards and panels — in my case it would be localhost port 3000, given that I'm running on Docker
A
Desktop. You'll see these same dashboards here — it's the mesh dashboard — and really there's not much going on, because we haven't brought up our sample app and played around with it much. Now, because of the virtual services and the destination rules we configured, our sample app is ready to go, ready to be interacted with. We can see that traffic will go
A
from your client, your web browser, through the ingress gateway, touch the virtual service and the destination rules, and ultimately land at
A
the app. You'll want to create these local entries: one for bookinfo.meshery.io to point to your localhost, and another one for imagehub.meshery.io. These will allow us, like I said before, to direct traffic external to the cluster. Right now Meshery is running external to the cluster, and you're going to use Meshery to generate a bunch of load against these apps.
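On most systems those entries go in /etc/hosts. Assuming your ingress is reachable on localhost (and reading the second hostname as ImageHub's, Layer5's other sample app), something like:

```
127.0.0.1  bookinfo.meshery.io
127.0.0.1  imagehub.meshery.io
```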
A
Of the subsets that we were defining, there were three, for three different versions of the reviews service. If you go over and get the destination rule for reviews — I'll output it to YAML —
A
you'll see that, very simply, there are three versions of reviews defined, and these subsets are the construct Istio will use to, by default, round-robin traffic across these versions. That's the default setting. Let's get into some labs where we begin to manipulate that traffic and tell it to do some different things —
A
routing traffic in different directions. Okay — I'm looking at the clock, and a lot of times we give this workshop in a much longer setting, so I want to hurry into some of these labs. Initially, if you go to lab 5, we took and applied these destination rules, so we have these defined; that's what we were just looking at.
A
If we go down, there are some things that we can begin to do very quickly. Very simply, you can say: hey, you want to change that behavior from round-robining to restricting it to just one version of this reviews service. To do that, there's a destination rule in — I'm sorry — lab three, lab
A
— because that's where our sample app is running. This simply adds a new virtual service. It says that it's interested in traffic that's directed to reviews — traffic of type HTTP — and to go ahead and route that traffic over to only the v1 subset. This is going to override.
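That virtual service, as it appears in the upstream Bookinfo samples:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1    # pin all reviews traffic to version 1
```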
A
Okay, so we'll kind of recap. If you take a look at bookinfo — I'll put that in terms of YAML again — it's tethered to the sample-app gateway. That gateway was looking for things on port 80, HTTP.
A
It's interested in traffic that's being directed to the hostname bookinfo.meshery.io, matches traffic with these paths, and routes it over to productpage. Great. Then we look at the virtual service that we just applied
A
for reviews — very simple. These are additive: both virtual services are running, and they're being compiled, if you will, by Istio. If you think about firewall rules and how they're interpreted, kind of top down, Istio does this as well. It will add these virtual services together and look for its first matches.
A
If there happens to be a tie — traffic that gets matched by the same match in two different virtual services — Istio will use the virtual service that was first created, the timestamp of the first one, to break that conflict, break that tie, and let it supersede the decision on where to route the traffic. In this case, these are complementary virtual services, and so this second virtual service
A
applies to and is interested in traffic that is directed at the reviews host, whereas the first, bookinfo, virtual service is interested in traffic that's directed at bookinfo. So this is traffic that's going to hit that first application, productpage — great, it matches here — and then productpage looks up what those reviews are against the individual books.
A
This virtual service matches traffic that's directed to reviews, and what it says is: you'll only get version one of reviews. And that's what's happening. Okay, great. Here's the nice thing about the rest of our labs: if you have these foundational concepts understood — and I know that it took us a while to get to these, but intentionally so, so that hopefully everyone can follow along —
A
one of the first things I think we'll take a look at is content-based routing. I'm going to be aware of the time here, so we'll hit a couple of these briefly.
A
Let me skip this and go to something a little more interesting, 5.4. Say you wanted to do traffic splitting or traffic shifting — you want to canary between one version of your service and another. Well, you might assign a certain percentage of the traffic to go to that new version, test it out, make sure it's good for some time, and incrementally continue to shift more of that traffic over to your new version. That's what 5.4 is looking at:
A
splitting the traffic 50/50 between v1 of reviews and v3 of reviews. Let's go ahead and grab this virtual service, go back to Meshery, and apply this configuration — we're going to overwrite the existing reviews virtual service. We'll apply that, and we can come verify that it's been updated. So it has: here's your reviews virtual service, now doing traffic splitting 50/50.
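The 50/50 split is expressed with weights on two destinations — this mirrors the upstream reviews-50-v3 Bookinfo sample:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 50      # half the requests go to v1 (no stars)
    - destination:
        host: reviews
        subset: v3
      weight: 50      # half go to v3 (red stars)
```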
A
If
we
go
over
and
look
at
that,
just
you
know
by
refreshing
the
browser
great
we're
saying
you
know
we're
only
we're
not
seeing
any
black
stars
we're
just
seeing
either
no
stars
or
red
stars
great
well
to
make
things
a
little
more
interesting,
especially
as
you
go
to
try
out
some
of
the
other
labs
that
are
there.
You
want
to
end
up
using
you
want
to
generate
some
load.
Some
significant
load
really
see
how
well
istio
is
how
well
it
does
in
terms
of
sticking
to
50
between
them.
A
Let's generate some load briefly. Some of you might run into networking issues here; I'm going to move quickly anyway. This really depends on your environment and what you've got set up — the labs walk through this more slowly. As a reminder, I'm using Docker Desktop, so I'm running Meshery in Docker, which is its own network, external to my Kubernetes cluster. I've got Kubernetes running with Bookinfo inside of it, so my setup is representative of most of you.
A
We can use this performance test to generate a bunch of load. In this case we're using Istio. Give this a name — you don't have to give it a name. For the URL that I'm going to test: because Docker Desktop is also running Kubernetes, I'll use my host machine's IP address. My Mac has an IP of 192.168.1.15 on my local network.
A
That's my wireless address. I'll generate a bunch of traffic and send it to /productpage, because Docker Desktop is sharing that same interface with Kubernetes. Set it to run for some amount of time. When I set it to run, I'm also going to pass in a Host header to identify where I would like for this traffic to go — what host am I identifying as I send this traffic?
A
The format here looks like this. I want this traffic generator inside of Meshery, every time it generates this load, to add this HTTP Host header. Where do we want to send it? To bookinfo.meshery.io.
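On the wire, each generated request then looks roughly like this — it's the Host header, not the IP in the URL, that Istio's gateway and virtual service match against:

```
GET /productpage HTTP/1.1
Host: bookinfo.meshery.io
```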
A
If you've connected to Grafana, you'll be able to do a few things: you'll be able to look over certain time ranges, and by default it's set to live-tail metrics that are coming through. When this is done, Meshery will generate a statistical analysis — it'll generate a histogram showing you your p50, your p90, p99, p99.9. It'll tell you a bunch of statistics. It will say that, hey, there weren't any errors.
A
These are the averages; that's what you're looking at. Also, if you've connected to Grafana and are pulling in any of those dashboards, it will show you what's going on. So we have two virtual services — the two that we had — our destination rules, and one gateway. You can connect up your Grafana boards to verify that Istio is in fact splitting the traffic 50/50 — and actually we just missed it right here; let me go run this again.
A
Well, this is not quite what I want to be showing. Across reviews v1 and reviews v3, we should be seeing basically an equal split: reviews v1 at 2.1 ops, reviews v3 at 2.1 ops — fairly equal. The latency between them is a little bit different. Why would that be? Well, because reviews v1 doesn't have any stars;
A
it doesn't actually have to query the ratings service to get the stars, so you might expect that it's a little bit quicker — and it looks like it is: on average it's coming back about twice as fast. Okay, good. I'm cognizant of the time — we've got about 30 minutes left — and as far as we've gotten, you are well on your way and empowered to go through a number of different traffic management labs.
A
Some of the ones that we would start on next are around resiliency and chaos engineering. I'm going to hit a couple of these very briefly, then see if we can't get over and look at Envoy filters — some Rust-based WebAssembly filters — and we'll talk about that product owner role that I was referring to before and how that works.
A
At least for this last traffic split, there are a couple of ways you can undo it. You can grab that definition, or even just this first part, the name, go into Meshery, and delete that custom configuration — so instead of applying it, you'll delete it, and it removes it. Or you could go over and use kubectl to remove it.
A
Just to verify, we'll be refreshing here; we should only be seeing black stars — that's the v2 version of reviews. Okay, great. Now let's mess with the traffic a bit: let's inject some latency, some delay, for HTTP packets. It should be noted that, by default, Istio will configure a global set of timeouts for a request — sort of end to end, at least — and that is set to 10 seconds for Bookinfo.
A
If you make a request of the product page and it takes longer than 10 seconds to get a response, that default Envoy configuration will time out the request, and the user will get back a timeout. Okay. We can get very specific about where we want to inject delay: in this case, we can inject delay not on the entire app itself, but maybe just on the reviews service.
A
If you all can hear our rooster — I think he's a little late waking up this morning.
A
Let's apply a delay on retrieving those, but let's get specific and only apply that delay if the user that signed in has the name jason.
A
Let's go look to see that we've got ratings and reviews, and I'm just going to delete this reviews here — oh no, no, I'm sorry, yeah.
A
The reviews virtual service is there to restrict everything to v2 — v2 of reviews — and now this ratings one has that delay injected. If we look over this and examine what's going on: we're saying any traffic that's directed to the ratings service — the service that gives back no stars, black stars, or red stars — is of interest, and if it's of type HTTP, then what we want to do is inject some fault.
A
What type of fault? Well, a delay fault — latency. Let's inject some latency. For how long? In a fixed way, for every request, 100% of the time, let's inject seven seconds of delay — okay, but only for the user jason.
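As YAML, that match-plus-fault reads like this — it mirrors the upstream ratings fault-injection sample; the end-user header is how Bookinfo propagates the signed-in user:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason       # only requests from the signed-in user jason
    fault:
      delay:
        percentage:
          value: 100.0       # every matched request
        fixedDelay: 7s       # seven seconds of injected latency
    route:
    - destination:
        host: ratings
        subset: v1
  - route:                   # everyone else: no fault
    - destination:
        host: ratings
        subset: v1
```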
A
Only if this user is signed in. This simple application, Bookinfo, uses cookies, so you can make up any name that you want and sign in to the app; that username will be included in the cookie.
A
I got a response back from productpage and reviews, but I didn't get any data for ratings — that ratings service had seven seconds of delay associated with it, and eventually that gets timed out; we get this message. The example here is to really help you understand that you can get extremely specific about how you want to manipulate traffic.
A
You can do some much more interesting things than this; let's start looking at some of those. This other one is: let's update the ratings service to now immediately send back an error code 500. Let's not inject delay — let's do a different type of fault this time: an abort, immediately, 100% of the time.
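In the ratings virtual service, the delay stanza gets swapped for an abort — the same match on the jason user stays in place; this is a fragment only:

```yaml
    fault:
      abort:
        percentage:
          value: 100.0     # every matched request
        httpStatus: 500    # respond immediately with HTTP 500
```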
A
Let's go back to the Bookinfo app. We refresh the page, and it comes back immediately — but under the covers it comes back with an HTTP 500 happening. Okay, interesting. With that — good — there are more labs to be done. There's one on circuit breaking; this is actually one of the areas where you really would want to use Meshery, because of its load generation capabilities, so you can go break your circuit.
A
The reason we're skipping this — I had hoped that we would get to this one — is because we want to talk about some of those advanced things around WebAssembly filters, Envoy filters, and how they're used. We're actually already looking at them: some of the telemetry that we're getting back through Grafana, through Prometheus, is being generated through the mixerless telemetry filters that ship with Istio.
A
I guess we never got to take that break that I alluded to earlier, so hopefully all of your bladders are holding up; I apologize — there's just so much to talk about. WebAssembly is neither of the web, necessarily, nor strictly assembly.
A
That's probably a really bad way of introducing WebAssembly, but WebAssembly as a technology has been around for a little while. It's up and coming in the infrastructure space, in part because of some of the characteristics that it shares with Docker. Docker was popularized because it made things portable, repeatable, smaller, faster — you would lose 10 pounds if you used it.
A
So, some great and interesting characteristics of Docker, and then cloud native came along, and here we are, and some of those same characteristics are seen inside of WebAssembly. Docker was more secure; WebAssembly is as well. WebAssembly, again bastardizing the description of it, is something like the JVM, if you're familiar with the JVM for Java.
A
The
java
virtual
machine
webassembly
is
also
a
virtual
stack
machine,
so
it
through
webassembly
you're
able
to
use
it
as
a
compilation
target
using
about
40
different
languages.
You
can
compile
to
different
targets
when
you
run
a
webassembly
app
that
runs
inside.
Of
that
virtual
stack,
machine
and
envoy
has
this
work
has
been
being
done
to
build
out
support
for
a
new
application,
binary
interface,
an
abi
between
to
provide
an
interface
between
envoys
capabilities
or
a
subset
of
envoys
capabilities
and
apps
that
are
running
in
webassembly.
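As a sketch of what "compilation target" means in practice, here is how a Rust crate (Rust being one of those languages) gets built for a Wasm target with the standard toolchain; the crate itself is assumed, not shown:

```shell
# Add the WebAssembly compilation target to the Rust toolchain
rustup target add wasm32-unknown-unknown

# Build the crate as a .wasm module that a Wasm virtual machine can load
cargo build --release --target wasm32-unknown-unknown
```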
A
There's
a
lot
boy,
there's
a
lot
to
pile
into
this
description,
so
the
istio
team
had
done
an
evaluation
of
what
engine
to
use
there
as
the
webassembly
engine.
It's
probably
of
no
surprise
that
part
of
the
the
same
engine
that
gets
the
javascript
v8
engine
that
gets
used
for
chrome
is
being
used
here
in
this
initiative.
A
There's
yeah
to
draw
that
analogy
again,
I
think,
to
to
docker
you
could
you
could
say
that
webassembly
or
wasm
is
becoming
something
of
a
right
once
run
anywhere?
That
was
part
of
the
same
promise
of
a
language
like
java,
so
an
interesting
thing
is
you're
able
to
so
what
do
you
want?
Why
do
you
want
to
run
this?
Why
are
we
talking
about
it?
A
It's because there's a lot of things that you can do with those packets. We just looked at faults, injecting latency and aborting packets. There's a lot more that you can do with them. You can rewrite them. If you're not getting enough telemetry out of the default filters, you can get a lot more; you can apply new filters. You can maybe convert traffic from HTTP to another protocol, from one protocol to the next.
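As one hedged example of rewriting traffic (the route name and URI prefix here are illustrative, not from the labs), an Istio VirtualService can rewrite the URI of matched requests before they reach the service:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-rewrite    # illustrative name
spec:
  hosts:
    - ratings
  http:
    - match:
        - uri:
            prefix: /legacy/ratings
      rewrite:
        uri: /ratings      # matched requests are rewritten to /ratings
      route:
        - destination:
            host: ratings
```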
A
You
do
want
to
optimize
what
you're
doing
you
do
want
to
pay
attention
to
how
much
that
costs
you
you'll
have
different
costs
in
terms
of
like
you
know,
just
naturally,
as
you
would
expect
the
the
more
that
you
use
something
the
more
that
you'll
incur
some
amount
of
overhead.
So
there's
a
trade-off
to
be
aware
of
tools
like
the
service
mesh
performance
standard,
hopefully
help
with
that
smp.
A
What's the difference, and how do those get characterized? Again, that's for another day, but there are some statistics and things published, so feel free to go watch some of those talks today. Better than that, what we would rather do is get this into your hands so that you can go play with it, explore part of that power, and explore how a product owner can be enabled with one of those.
A
So
so
we
so
we
wrote
a
sample
application
to
do
that.
We
called
the
sample
application
image
hub.
Actually,
this
sample
app
was
demoed
at
dockercon
2020.
in
advance
of
docker
hub
itself
undergoing
you
know:
a
policy
change
in
terms
of
its
subscription-based
pricing,
in
terms
of
like
not
hosting
the
world's
open
file
share
for
images
anymore.
You
know
for
image
storage
inside
of
a
s3,
and
so
so
you
know.
A
So
if
you
try,
if
you're
using
docker
hub
today,
you
know
depending
upon
whether
or
not
your
your
project
is
open
source
depending
upon
whether
or
not
you're
a
paying
subscriber
you're
authenticated
or
you're,
not
you're,
going
to
get
a
different
response
from
dockers,
the
official
docker
registries
apis
from
how
many
pull
requests
you
can
do
per
second
they're
going
to
rate
limit
you
right,
okay!
Well,
it's
we!
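You can observe this behavior from the outside. As a sketch against the real Docker registry API, using the registry's rate-limit preview repository, an anonymous token request followed by a HEAD request surfaces the registry's rate-limit headers:

```shell
# Request an anonymous pull token for the rate-limit preview repository
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" \
  | jq -r .token)

# A HEAD request against the manifest returns ratelimit-limit and
# ratelimit-remaining headers describing your current allowance
curl -sI -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" \
  | grep -i ratelimit
```

Authenticated and paying accounts see different limits in those headers, which is the differentiated behavior described above.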
A
You
know
so
we've
been
in
conversation
with
the
docker
hub
engineering
team,
about
how
they're
approaching
and
how
they're,
using
what
type
of
infrastructure
they're
running
and
how
they
can
take
advantage
of
the
intelligence
of
the
service
mesh
to
kind
of
get
there
for
free
without
having
to
change
a
lot
of
application
logic
in
docker
hub
itself.
A
So
image
hub
is
something
of
a
rip-off
of
a
generic
sample.
App
that
intentionally
is
a
sample
app
that
represents
docker
hub
now.
This
app
has
certain
functionality
written
into
it.
It's
just
a
simple
app,
it's
two
containers
and
it
has
the
ability
to
authenticate
users
and
generate
a
token
as
well
as
identify
what
subscription
plan
that
that
user
is
on,
but
it
doesn't
have
the
ability
to
it's.
Not.
A
It
doesn't
write
in
the
infrastructure
logic
to
rate
limit
users
based
on
the
plan
that
they're
subscribed
to
and
how
much
their
request,
how
frequently
they're
pulling
how
many
pull
requests
they
have
per
second.
Rather,
it
relies
on
the
intelligence
of
the
infrastructure,
underneath
to
do
that.
In
this
example,
we
use
console
and
because
console
is
also
an
envoy
based
data
plane.
A
...a service mesh with an Envoy-based data plane into which to insert these filters. But that doesn't matter; we're using Istio here, and Istio has the ability to do the same: to have Envoy filters dynamically loaded, and to have a bunch of intelligence happen in there beyond just things like telemetry or simple rate limiting. Rather, things like very specific application-level or business-logic-level rate limiting, happening based on who signed in and what subscription plan they're on. This sample app lets you explore that.
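A minimal sketch of how such a filter gets inserted with Istio, assuming a hypothetical workload label, module name, and local path (the actual Image Hub filter differs), is an EnvoyFilter that patches a Wasm HTTP filter into the sidecar's filter chain:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: imagehub-ratelimit-wasm    # hypothetical name
spec:
  workloadSelector:
    labels:
      app: api                     # hypothetical workload label
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          filterChain:
            filter:
              name: envoy.http_connection_manager
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.filters.http.wasm
          typed_config:
            "@type": type.googleapis.com/udpa.type.v1.TypedStruct
            type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
            value:
              config:
                vm_config:
                  runtime: envoy.wasm.runtime.v8   # the V8 engine mentioned earlier
                  code:
                    local:
                      filename: /var/lib/wasm-filters/ratelimit.wasm  # hypothetical path
```

The Wasm module itself carries the plan-aware rate-limiting logic; Envoy only hosts and invokes it through the ABI.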
A
So.
On
that
note
I
want
to
well.
I
wanted
for
us
to
go
through
one
last
lab,
and
this
is
this
is
where
I
get
to
shut
my
yapper
and
let
abhishek
take
us
through
this
last
lab.
I
left
him
all
of
15
minutes.
A
A
B
All right. So, like Lee introduced you guys to, WebAssembly is a very advanced concept, and I understand that it's very difficult to grasp, or to think about, the abilities that Wasm filters have. It's always said that it's easier to watch and learn rather than read, so I want to show you guys exactly what Wasm filters are capable of.
A
You're... no, nothing, all right.
B
In
that
case,
let
me
just
I
hope
it's
not.
It
is
so
I
got
my
hdr
installed
in
my
cluster.
I've
watched
for
parts
h3d
is
already
in
there,
I'm
going
I'm
gonna,
so
I'm
running
mini
cube
right
now,
which
would
require
you
to
run
mini
cube
channel
alongside
because
you'll
need
to
run
the
source
type
load
balancer
for
progress,
so
yeah.
B
Having
said
that,
let's
start
with
the
workshop,
so
the
agenda
of
the
workshop
is
that
we
deploy
the
image
of
the
image
of
application
and
then
we
enable-
or
we
deploy
a
filter
over
the
application
to
test
out
the
rate
limiting
feature
or
the
rate
limiting
capability
of
the
filter
by
running
a
performance
test
over
it
yeah.
So
let's
get
started
so
in
the
measure.
B
As you can see, the containers are being created. There are currently two containers that make up the Image Hub architecture.
B
One
is
the
web
application
itself,
which
is
which
consists
of
the
ui,
which
is
a
dashboard
for
the
application,
and
the
api
is
the
collection
of
apis
that
that's
the
ui
as
simple
as
that,
as
you
can
see,
there
is
only
one
part
currently
that
will
be
one
container
that
consists
of
this
part,
which
means
that
the
site
card
is
not
injected,
so
I'm
going
to
go
ahead,
inject
the
sidecar
first
using
measuring
here.
B
Which
would
which
are
add
a
label
to
your
name,
space,
which
means
whatever
the
applications
in
this
namespace
you've
been
you're
deploying
will
be
cycle
enabled
by
default.
B
So
now
you
can
see
the
side
cards
have
been
enabled
in
here,
so
let's
go
ahead
and
try
to
access
these
applications
in
order
to
do
that,
we'll
need
to
check
what
so.
This
is
a
name.
The
site
cards
are
enabled
in
this
in
these
applications.
So
it's
behind
the
html
gateway-
and
I
know
the
english
vps
ip
from
the
mini
cube
tunnel.
B
So
you
want
to
cross
check
what
is
the
port
that
is
running
the
html,
the
report
that
the
english
vp
is
listening
to?
It's,
the
port
80
is
map230823,
and
this
basically
would
help
many
cube
users
to
get
started
and
yeah.
So
basically,
this
is
the
ipn
port
in
which
I'll
be
accessing
imager,
but
also
we'll
need
to
add
a
host
header.
B
Basically,
what
lee
demonstrated
was
by
using
a
host
entry,
but
I
externally
apply
the
post
header,
which
is
which
implies
that
it,
the
host,
has
been
called
so
when
I
go
ahead,
open
it.
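A sketch of accessing the app this way; the hostname here is illustrative, and the ingress IP is the one reported for the gateway service while minikube tunnel is running:

```shell
# Look up the ingress gateway's external IP
INGRESS_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Send the Host header explicitly instead of editing /etc/hosts
curl -s -H "Host: imagehub.example.com" "http://$INGRESS_IP:80/"
```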
B
Okay, I'm going to take a step back here and say that I'm not able to... Lee, will you be able to demonstrate it? Probably I can explain it from your screen.
A
Yeah
good
yeah,
good
question:
we
given
the
time
that
we
have
left
probably
best
to
field
a
few
of
the
questions
that
are
remaining
as
a
matter
of
fact.
That's
a
call
for
questions
to
the
extent
that
you've
asked
some,
but
they
haven't
been
addressed,
I'm
going
to
put
in
a
couple
of
resources
that
I've
mentioned
before
to
make
sure
that
you
all
are
enabled
with
them.
So
this
particular
sample
app,
really
interesting.
A
It
doesn't
yet
run
what's
behind
docker
hub,
but
we'll
see
where
those
conversations
land
and
if
it
and
if
it
ultimately
doesn't
so
so
it
could
be
really
interesting
for
you
all
to
check
out.
There's
a
excuse
me.
There's
a
collection
of
other
web
assembly-based
filters
that
you'll
find
in
the
ecosystem.
B
Yeah, so basically I've brought it up, and I swear I didn't change anything, all right? Basically, sometimes it might take time inside of the cluster.
B
You
won't
know
so
to
do
a
performance
test,
you'll
need
the
user
token
or
you
need
to
be
enabled
with
the
you
need
to
be
logged
into
the
user
to
sort
of
pull
images.
So
I'm
going
to
go
ahead
and
get
my
user
cookie,
which
will
be
available
in
here
in
the
authorization
tool
and
now
when
I
go
to
a
machine
dashboard
to
do
a
performance
test
over
this.
B
Another thing I missed mentioning is that there is an endpoint called /pull, which is called when we click on this particular section here. So I'm going to directly hit this API to test out my performance. We can see that this particular API returns a 200 currently. Then, what we do is we first enable the filter, which is a rate limit filter that currently does a 50 RPS rate limit on the team subscription.
B
Now,
when
I
do
that,
we'll
need
to
watch
for
the
default
name
space,
what
it
does
is
it
patches
the
existing
containers
to
enable
to
enable
the
online
filter
and
install
it
inside
those
proxy
containers,
so
once
it
has
done
it,
I
will
go
ahead
and
run
the
performance
test
to
see
what
kind
of
how
the
rate
limit
filter
behaves.
B
I'm going to give it concurrent requests, and I'm going to pump up the QPS to 60 RPS for 15 seconds. We'll need to give it a couple of headers, like before, the host header, which would...
A
No,
you
know
thanks
for
this
time,
but
I
think
this
introduces,
I
think,
I
think,
accomplishes
the
mission
of
trying
to
introduce
like
an
advanced
concept
about
how
it
is
that
there's
a
lot
of
power
in
the
data
plane
and
that
through
the
ability
to
dynamically
load,
onward
filters
like
this
you're
able
to
well
do
a
lot.
I
mean
you
can
offload
application
logic,
application,
infrastructure
logic.
A
Any thoughts? I'll say this: feedback, especially critical feedback, is always welcome. So reach out, connect, share the feedback, try out the labs, and walk through the rest of them that we weren't able to get to. That's intentional, by the way; there's just a lot.