From YouTube: Istio 1.4 feature tour
Description
In this video, Megan O'Keefe from Google Cloud Developer Relations takes you on a tour of the new features in Istio 1.4.
Hello, I am Megan O'Keefe from Developer Relations at Google Cloud, and I'm here today to give a quick, 10-minute rundown of some of the new features in Istio 1.4. I have the changelog open for the newest Istio 1.4 release, which was released last week, and as you can see in this list, there is a whole lot that got released with 1.4. What I'm going to try to do is take one or two items from each of these categories, like traffic management and telemetry, and show you them in action on a running Kubernetes cluster.
So, let's dive right in. All I've done before this is set up a new GKE cluster, just a regular four-node cluster on GKE 1.13, and I have installed Istio 1.4 onto the cluster. This is just a standard Helm template install script: I've installed the CRDs, and I have toggled a lot of options. The reason has to do with some of the demos I'm going to show in just a moment.
The key thing to note here about this install is that I've actually completely turned off Mixer. Mixer is Istio's original telemetry and policy engine; it's part of the control plane, and a really exciting development in the community is working towards getting rid of this component, largely for performance reasons. But obviously you still want telemetry from all of your Envoy proxies.
A
So
the
way
to
do
that
is
actually
to
install
these
adapters
they're,
basically,
custom
envoy
filters
that
are
going
to
go
down
into
the
data
plane
and
I'm.
Also
doing
here,
all
these
animals
I'm
applying
is
I,
am
actually
also
enabling
a
filter
that
will
send
metrics
directly
from
the
Envoy
proxies
by
sidecar
proxies
to
stackdriver,
which
is
Google
clouds,
metrics
product,
so
okay,
so
this
this
is
all
I've
done.
We
can
see
here
that
I
have
installed
this
do
now
an
hour
ago
or
so
you'll
notice,
there's
no
telemetry.
There's
no
policy
here.
The other thing I have done is install a really basic sample app; it's basically a retail app, and I've exposed it through the ingress gateway as well. It's a good way to see what's happening in the cluster, because it has a lot of services that talk to each other.
These are all my pods, each injected with the Istio proxy. So, going back to this Envoy-native telemetry piece: this is the first thing that I'll show. New in 1.4 is the ability to get these metrics into Stackdriver. What I have here is the Stackdriver Metrics Explorer for my project in Google Cloud, the one my cluster is part of, and what I am able to do here, and I can actually just walk through it piece by piece, is directly get things like throughput from the Envoy proxies.
These are, you know, the names of my pods, cartservice-6b-whatever, and I'm able to see these directly in Stackdriver. So before, there was an extra hop, right: we had to go from Envoy to Mixer, and from Mixer to Stackdriver using the Stackdriver Mixer adapter. Now we're able to go directly from Envoy to Stackdriver, so for performance reasons this is really exciting, and it just adds simplicity, I think. And all the basic proxy metrics we are now able to see in Stackdriver.
There is one thing, which is that TCP metrics I don't think are working yet: I have some TCP traffic to a database that's not showing up in Kiali. That's okay; that will most likely be fixed soon. Okay, so that's Envoy-native telemetry, the first thing I wanted to show. Let's go back into the change notes here: there are two security features that I am super excited about.
One has to do with authentication, the other with authorization. Let's start with mTLS. So before, this is kind of the old way: to get mTLS, or encryption, to work between two Istio-injected workloads, you needed to have two resources. You needed a Policy (or a MeshPolicy) and a DestinationRule; this is to handle both the server side and the client side of mutual TLS, both ends. New in 1.4 is the ability to not need this DestinationRule piece.
So what I'm going to show here is that we are able to have only a Policy and no DestinationRule. The reason this is exciting for me is that this was often a major debugging snafu with Istio: before we had the istioctl authn tls-check command, there was no real way to know if your destination rule was saying one thing about mTLS and your policy was saying another. So what I'm going to show here is that if I apply this YAML, I get mTLS fully functional for one of my services, the email service, with no destination rule needed.
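The kind of authentication Policy being applied here looks roughly like this. This is a sketch: the service name and namespace are assumptions based on the demo app, so yours may differ.

```yaml
# Sketch of a server-side mTLS Policy for a single service.
# With auto-mTLS in Istio 1.4, no client-side DestinationRule is needed.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: emailservice-mtls   # hypothetical name
  namespace: default
spec:
  targets:
  - name: emailservice      # the demo's email service (assumed name)
  peers:
  - mtls: {}
```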
So, this is just applying that YAML, and if I rerun this mTLS check and scroll up here, I can actually see that that traffic is encrypted for the email service only, and I can continue, you know, to make checkouts in my app as well.
There is one tricky thing, which is that once I do this, I don't know if Kiali knows that mTLS is on, because, as we'll see in maybe a minute, this is actually going to get grayed out, and also, if we enable the security icon, it's not going to say it's mTLS. But again, that's another thing that can be ironed out. Okay, awesome. The next piece has to do with authorization.
So new in 1.4 is this v1beta1 AuthorizationPolicy CRD. This resource replaces the RBAC policies that are currently part of Istio. I'm going to show this example because I think it speaks to the simplicity this new feature adds. Before, if you wanted to make RBAC work for one of your services, you needed three resources: you needed to turn on RBAC for that service, you needed a ServiceRole, and you needed a ServiceRoleBinding.
So if you're familiar with Kubernetes RBAC, it's very similar. This was, I think, problematic for a couple of reasons: it kind of operated at the host level rather than the workload level, and it's complex. What we have now is the ability to have one resource that enables authorization, defines the sort of abstract role, and then binds a policy to a specific workload: the source principal, the source, the destination.
This is, I think, super exciting. The other really exciting thing is that now you can enforce these policies on the ingress and egress gateways. So if you have multiple services that are backing one of your gateways, the ingress gateway for example, you can create a resource for that, and that's actually the example I'm going to show, because my app here is exposed: the front end is exposed with the Gateway.
I can actually show you that. So yeah, I'm sort of routing all requests through the ingress gateway to this frontend service running on port 80. So what I'm actually going to do here: this AuthorizationPolicy says that for any traffic going through the ingress gateway, any request made to my front end, completely lock it down. So I'm not actually opening any traffic; I'm just showing a very basic example of locking down all access.
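A lock-everything-down policy on the ingress gateway looks roughly like this. This is a sketch: the namespace and gateway labels below are Istio's usual defaults, which I'm assuming the demo uses.

```yaml
# Sketch of a deny-all AuthorizationPolicy for the ingress gateway.
# In the v1beta1 API, a policy with no rules matches no requests,
# so every request to the selected workload is denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all          # hypothetical name
  namespace: istio-system # where the default ingress gateway runs
spec:
  selector:
    matchLabels:
      istio: ingressgateway
```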
You can, you know, enable or disable based on properties of the source workload making the request, and there are all sorts of conditions, like requiring a header to contain a certain thing for the request to be allowed. So I'm really excited to dig into this more and maybe make some examples for it. Cool: AuthorizationPolicy, super exciting stuff.
Okay, let's talk about traffic now. There's one item that I'm really excited about for traffic management, new in 1.4, which is being able to mirror a percentage of traffic.
Why is this useful? I think it's useful because often we hear users talking about traffic mirroring in the context of A/B testing in production. But if you want to do an A/B test, chances are, unless you're in some kind of load-testing situation, you don't actually want to mirror all of the traffic going from service A to the mirrored version; especially if you're doing testing, that can, I think, be counterproductive in some cases. So being able to choose an exact percentage of the traffic you want to mirror, or shadow, over to another version has, I think, a lot of applicability.
So let's actually see how this works. I'm going to perform the process of doing a sort of A/B testing situation with one of my services, the product catalog service. What I'm going to do is deploy a new version, v2, and create a destination rule which slices up my service into two subsets, v1 and v2, based on the deployment labels. Then I'm going to mirror: I'm going to send all production traffic, 100% of it, to v1, because v1 is still the sort of source of truth; it's still prod. v2 is the new version, or maybe the test version, and I'm going to mirror 40% of my production v1 traffic to v2.
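The manifests for this look roughly like the following. This is a sketch: the host, subset, and label names are assumptions based on the demo, so the actual files may differ slightly.

```yaml
# Sketch: split the service into v1/v2 subsets by deployment label...
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productcatalogservice
spec:
  host: productcatalogservice
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
# ...then route 100% of production traffic to v1, mirroring 40% to v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 100
    mirror:
      host: productcatalogservice
      subset: v2
    mirrorPercent: 40   # new in 1.4; omitting it mirrors 100%
```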
The way I'm going to show this is working is I'll open Kiali once I apply this rule, and I will also show you the access logs for v2. The key thing to know about mirroring, if I click on this, is that mirrored requests show up in the access logs with the host/authority set to the service name with "-shadow" appended.
So what I should hope to see on the v2 side is that inbound requests say something like "product catalog ... shadow". Cool, let's apply this. What's going on here: the first thing that we're going to do is deploy the new version.
You know, make sure it's running; there it is over here. Cool, we've got our destination rule for product catalog just deployed, and we are now going to mirror 40% of that v1 traffic over to v2. So, what we should see pretty quickly here: we are in our versioned app graph.
If I zoom in here, we now have a virtual service applied to product catalog (oops, let me refresh that page), and we have v1 and v2. Now let's actually see what's happening here.
If I go in here and get the Istio proxy logs, or the Envoy logs, for v2, I start to see these requests come in, and you'll notice that they are labeled with the product catalog service name plus "-shadow".
So we can see that this is actually sort of fire-and-forget mirrored traffic. One thing I would hopefully like to see in the future, which would be super exciting, is around the request percentages shown for mirrored traffic: I think they can be a little bit deceiving, in that sometimes the graph doesn't actually say that 100 percent of traffic is in fact going to v1. Right now it does, which is good, but sometimes it says that a percentage is going to v2 when, while that's technically true, all of the production traffic is still going to v1. That's the thing to note.
Okay, moving right along here: the last couple of new features I would love to show have to do with config management and troubleshooting.
So, one of the most major things to be released in 1.4, I think, is the extensions and improvements to istioctl analyze. istioctl analyze is a command that is packaged up with the istioctl tool, which you get when you install a new release, or when you download the release of Istio. So if I just clear this out here: istioctl. Now, there's a whole lot of commands that come with istioctl.
You can verify your installation, do pre-install checks, apply a new configuration, or look at the status of a specific service. There are also all of these experimental commands; I'm using the dashboard one to open the Kiali dashboard in the background. There's even experimental support for multicluster installation, but the one I'm going to look at here is analyze: istioctl analyze.
Let's see here. Okay, sorry, it's an experimental command: istioctl experimental analyze. There we go. So what this lets you do is not only analyze your live cluster; it also lets you check YAML files before you apply them, to see if there are any issues, which I think is huge, right? It's almost like a dry run of applying new configuration.
So picture this: you could easily add this command to a CD system, or even CI, where if you're pushing Istio configuration, you can actually do a check against what you have in your cluster to see if something could potentially go wrong. So, for example, if I run istioctl analyze -k, it will inspect the live version of my cluster.
So we can see that right now everything is fine and clear: no validation issues found. What I can also do is run this command on a file; let me go into the directory here.
If I run istioctl analyze on a YAML file that I have here: I've made several things wrong with this file. One is that we don't have a helloworld service. The other is that we don't have destination rules that do anything with the helloworld service. And lastly, my traffic weights do not add up to 100.
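A broken manifest along those lines might look something like this. This is a hypothetical sketch, not the actual demo file: the host name, subsets, and weights are made up to match the three problems just described.

```yaml
# Sketch of a misconfigured VirtualService:
# - the helloworld host does not exist in the cluster
# - no DestinationRule defines the v1/v2 subsets
# - the route weights sum to 90, not 100
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1
      weight: 80
    - destination:
        host: helloworld
        subset: v2
      weight: 10
```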
So imagine, you know, you're doing a canary deployment, a progressive canary, and you've messed up the config: this can actually detect that. So let's see how. Let's go.
Oh, I misspelled the word "analyze". Okay, let's see what the analyzer had to say here about our bad virtual service. It didn't recognize helloworld: that's good. It didn't recognize that we had the subsets, because we didn't. And then we also have a schema validation error that knows that all of these destination weights have to add up to 100; it did that check for us, and we get these errors.
Okay, one last thing I want to show is the new Go client library for Istio. So up until 1.4, there was no official client library for Istio; there were others out there. What we now have in 1.4 is a new official client-go repo that's directly tapped into the CRDs and the Istio API itself, and I think this is super exciting, for a lot of reasons.
There are tools that already need to use a client library for Istio that can now re-platform on this. But also, just in terms of the future: I think where Istio excels is in providing a ton of functionality and features, but in reality, when I talk to users, I find they're often only looking to expose developers to a subset of what the APIs provide, and there are platform engineers that actually want to build on top of Istio, to build UIs and all kinds of things.
So this is super exciting, and I think it's the first step towards building out that sort of ecosystem of layers on top of Istio. Anyway, if I go into the client-go repo, which I've just sort of forked (this is running on the master branch), I can run this sample, which is just doing a bunch of gets on various resources.
It's pointing to my kubeconfig, so it's going to talk to my cluster, and we can see it's getting all the virtual services and saying: oh, you have one, you know, for your wildcard all-hosts; you actually have one for product catalog (this is the mirroring stuff we did before); here's the mTLS stuff we just did; and here's the gateway exposing the front end. This will also let you do any sort of CRUD operation on the Istio APIs. So imagine you want to build a tool that generates virtual services.
A
You
can
now
do
that
pretty
easily,
using
using
probably
like
less
than
100
lines
of
go
code.
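As a rough sketch of what that looks like: this assumes the istio.io/client-go module, a reachable cluster, and a kubeconfig in the default location, so it's not runnable standalone, and the exact package paths and signatures may differ between releases.

```go
// Sketch: list VirtualServices the way the client-go sample does.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"

	versioned "istio.io/client-go/pkg/clientset/versioned"
)

func main() {
	// Build client config from the local kubeconfig ($HOME/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// Create an Istio clientset tapped into the Istio CRDs.
	ic, err := versioned.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// List all VirtualServices in the default namespace.
	vsList, err := ic.NetworkingV1alpha3().VirtualServices("default").List(metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, vs := range vsList.Items {
		fmt.Printf("VirtualService %s routes hosts %v\n", vs.Name, vs.Spec.Hosts)
	}
}
```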
So yeah, that's the last thing I wanted to show; super exciting. So with that, I will pause here. There are a lot more features I didn't cover, so definitely check out the 1.4 release notes. I also have a repo here, an Istio 1.4 feature tour repo, where I will push all of these manifests, and I have that link down in the description. Thank you so much for watching, and happy service meshing!