From YouTube: Service Mesh in the Real World video series - Ep 3
Description
This educational video series will walk you through various application networking challenges, how we have traditionally solved them, service mesh networking concepts to solve them for microservices, and show you how to do it through live demonstration.
Slides from the session here
https://speakerdeck.com/crcsmnky/securing-your-microservices-using-istio
Learn more about the technologies used in the video:
Istio https://istio.io/
Google Cloud https://cloud.google.com/
Solo.io https://www.solo.io/
Very good, thank you for coming by and watching the third installment of Service Mesh in the Real World. We'll be talking about using Istio, the service mesh, today to secure your service-to-service communication. My name is Christian Posta. I'm a field CTO at a startup called Solo.io, where we build service mesh tooling to help people be successful with service mesh in production. And with me today we have Sandeep.
So with that, why don't we get started. Today we've got five main areas to cover. We'll talk through what some of the challenges are around securing microservices deployments, why Istio makes sense as an option there, and what some of the actual specific solutions are that Istio offers, and we'll talk a little bit about what's available in the ecosystem as well.
We're going to close with a little bit of what else is new in 1.4, tease a little bit of what's coming in 1.5, and then we've got time for questions as well. So if you've got questions, please drop them in the chat and we'll try to get through them. If we can't answer them during this session, we'll try to follow up as best we can.
So let's start with challenges. When we think about securing microservices deployments, one of the clearest requirements that comes up quite a bit, and that we've heard a lot in the ecosystem of Kubernetes users, is how to properly secure service-to-service communication. What that typically means is: how do you encrypt traffic between your services? When you're running in a typical Kubernetes deployment, services are always communicating using plain text. Everything from ingress into your services, and all the services they communicate with, is going to use plain text by default.
Now, if you want to secure those communication channels, it's certainly possible to create your own mechanism to do so, but it does require a few things that are really important. One, your applications have to be updated: you've got to include encryption support in your application in some meaningful way, either compiling in libraries or including external functionality, building that into your application, and regenerating that container image.
Two, you've got to have some other infrastructure to generate and supply those keys and certificates across your running services, which leads us to the typical challenges people run into. The first one, I think, is pretty straightforward: we have to update applications to include encryption support, as we talked about before. The second one is really the one where we've found folks get really, really lost. This is a hard problem: creating a distributed system to provision and manage those keys, and also doing things like rotating them and making sure they're rooted correctly.
The certificate authority has to form a trusted chain, so there's quite a bit of work to make that all fit together and do it in a scalable fashion. And then the last one, I think, is another area that people forget about: you may have legacy applications that can't be updated. They may be running in containers, maybe running in pods in your environment, but they might not be able to pick up encryption support for a variety of reasons.
Now, what if you wanted to prevent unauthorized access, like in the diagram here on the right: service A can talk to service B, B can talk to C, but A cannot talk to C. How would we typically implement that with Kubernetes? As a starting point, the good news is that with something like service accounts and Kubernetes RBAC, you get some of this capability. But it's relatively coarse-grained: it's really just talking about service-to-service access, which for a lot of deployments isn't enough.
They need something a little bit more fine-grained than that. There are two aspects here. One, they want to better understand service identity, and potentially user-level identity: you want to be able to pass user-level authorization credentials all the way through to the service running in your deployment. The other part of this is that you may want an even finer-grained level of control, so you may want to say things like: service A can access a specific route and verb combination on service B.
So you might say: I only want to allow the GET method to this particular URL on service B. Or, if it's a gRPC application, you may only want to allow access to a particular package path, again with a particular verb associated with it. So it's not simple, and it's not enough to just lock down access based on service-level granularity; we want to get down to host, path, and verb-level granularity.
This does, again, introduce a series of challenges. One, applications need to either compile in support or build in support in some meaningful way to understand and verify service and user identity, potentially on every single call, if you want to get to that fine-grained level. And then on top of that, applications need to apply those identity controls to every potential route or endpoint/verb combination: every URL you may expose, along with every possible HTTP verb for HTTP connections.
That can add a lot of cognitive overhead. Now, in a small-scale scenario this isn't terribly difficult; there are ways to do it. But if you think about this problem extending to hundreds or thousands of microservices, it becomes very, very challenging very quickly. And so that's what we're left with as far as what some of these challenges are, whether securing communication or implementing some kind of authorized access.
So now let's talk a little bit about why you would choose Istio to help solve this problem. Service mesh, and Istio specifically, came along when applications running in Kubernetes environments wanted to take on some advanced functionality, and frankly, what we find is enterprise functionality that folks really want to have as they go bigger. So don't think of this as one, two, three, four, or five microservices.
Think of this as tens, hundreds, or even thousands of microservices. When you want to include things like circuit breaking, identity support, encryption, or tracing, those are things you have to compile into your application or build into your application in some meaningful way, and then you still may want some kind of access control, ingress control, and egress control. Without Istio, these things become hard: these are all components you have to take on the responsibility for managing or including in your applications.
Once you incorporate Istio, the story changes a little bit. You now have the ability to leave your application as-is, living in its pod, and instead we follow this sidecar model and control-plane approach, where the sidecar is there to mediate all inbound and outbound network traffic. By mediating all of that, once it's paired with a control plane, the control plane can actually instruct the sidecar on how to do things like circuit breaking or fault injection.
It can handle some of the aspects around identity or encryption, and because it's mediating all that inbound and outbound connectivity, you've got a window into observability, so tracing, logging, and metrics data, almost immediately. And that control plane also does a lot of heavy lifting: it implements ingress/egress controls, it has a certificate authority built into it, and you can use a simplified API to implement access control or routing rules. So this adds quite a bit, and it's all outside the bounds of the application.
But the point really is that these policies can be implemented at different levels of granularity: down to the individual service, up to the namespace level, or even across the entire mesh. Identity can pass through for all of your services, we can encrypt with mTLS, there are authorization controls, and there's support for things like JWT tokens or OIDC as an identity mechanism. So there's quite a bit.
That's all encompassed here, and we're going to simplify a little bit and focus on two specific areas, but this is just to give you a quick glance at what's possible if you expand out the entirety of Istio's security capabilities. So now, on to some of the solutions, starting with some of the ecosystem aspects. Frankly, what we found is that it's hard: securing communications is a hard task to do, and there's lots of work on securing connectivity between ingress and pods.
You can imagine securing ingress right at the front end with things like cert-manager, or even securing the external load balancer to ingress mechanisms, but once you enter the Kubernetes deployment, there isn't much available that isn't bespoke or custom. And the same thing applies to incorporating identity: folks use Kubernetes service accounts as much as possible, and that's great.
So let's talk about the first part here, which is encrypting service traffic using Istio. When you want to enable mTLS authentication, there are two main API components you have to worry about: one is the Policy object and the second is the DestinationRule object. A Policy object tells services what sorts of connections they can accept, and a DestinationRule object tells clients of that service what sorts of connections they should use.
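As a concrete sketch of that pair using the 1.4-era APIs (the object names, namespace, and host below are illustrative, not from the session):

```yaml
# Policy: the server side only accepts mTLS connections (STRICT).
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: secure                        # illustrative namespace
spec:
  peers:
  - mtls:
      mode: STRICT
---
# DestinationRule: clients of this service must originate Istio mutual TLS.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend-mtls
  namespace: secure
spec:
  host: backend.secure.svc.cluster.local   # illustrative service host
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

Applying only the Policy without the matching DestinationRule leaves plain-text clients unable to connect, which is why the two halves are rolled out together.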
So in this case, if we apply a Policy that says STRICT, we're telling services they can only accept mTLS connections, and now we've got to do the other half of that and make sure we tell clients of that service that they must use mutual authentication. That's the typical process for rolling out mTLS authentication. The upside of this approach is that it allows us to take this idea of securing a subset of services and roll mTLS out slowly across the deployment.
The benefit here is that you may have services that, for whatever reason, may not be able to adopt mTLS. Maybe there are enterprise requirements; maybe services in a legacy namespace can't communicate with other upstream or downstream services using mTLS, again for requirements we may not have visibility into. But those are real enterprise scenarios, so we want to be able to incorporate and leverage mTLS without cutting off access to legacy services.
So here's the way we typically do that. In this example deployment we've got two namespaces: one is called legacy and one is called secure. In the secure namespace I've got Istio injection enabled, so frontend and backend have the sidecar proxy already included in their pods. As a starting point, we can implement a Policy object in the secure namespace that tells backend to accept connections from anyone. We're setting it up to be permissive, which means it can accept connections that are not authenticated.
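A minimal sketch of that permissive Policy, assuming the service is named backend in a namespace called secure (both names illustrative):

```yaml
# Permissive mode: backend accepts both plain-text and mTLS connections.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: backend-mtls-policy
  namespace: secure
spec:
  targets:
  - name: backend            # illustrative service name
  peers:
  - mtls:
      mode: PERMISSIVE
```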
This way the legacy service is still able to communicate with backend, and then we can use a DestinationRule with ISTIO_MUTUAL to tell clients of backend that they can use mutual TLS if they can present it. So now what's happening here is that the service in the legacy namespace is still using plain text, but frontend is communicating to backend using mutual TLS, and we've got this kind of mixed-mode deployment.
And now, if we want to complete the cycle and cut off access to all legacy clients, the final step is to update that existing Policy object and change the mTLS mode to STRICT. Now anyone that tries to communicate over plain text is going to get rejected, while the frontend-to-backend connections are still secured with mTLS. This is typically the process we see with enterprises rolling out support for this: they want to leverage encryption, but they've got to manage its rollout over time.
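That final step is just flipping the mode on the existing Policy object described above (names again illustrative):

```yaml
# Changing the existing Policy's mode to STRICT rejects plain-text clients.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: backend-mtls-policy
  namespace: secure
spec:
  targets:
  - name: backend
  peers:
  - mtls:
      mode: STRICT           # was PERMISSIVE during the rollout
```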
What's really nice is that new in Istio 1.4 there's support for something called auto mTLS, in alpha. Auto mTLS is very powerful because, right out of the gate, Istio sidecars automatically know to use mTLS connections unless you actually specifically deactivate it, and you can enable strict connections across the board for all clients, so any legacy clients would no longer be able to connect, using one single Policy object.
So we've taken away the hard requirement of using two mechanisms, Policy and DestinationRule, to control it. Now, with auto mTLS, one, it's on by default in the background for clients that can do it, and then, when we want to strictly allow access only to mTLS-capable clients, we can do it with a single Policy object, and of course it can be overridden by DestinationRule objects as needed. The key here is that this behavior is an option that you have to turn on at install time.
So it's a mesh-wide install flag, and you can see on the middle line here we're setting values.global.mtls.auto equal to true, and that's going to turn on this automatic support. I would strongly recommend, for anyone who's really interested in this feature: one, go test it out, and two, there is some active discussion within the community about making this automatic mTLS option a default instead of optional. There is a GitHub issue linked on the auto mTLS page in the Istio docs.
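In Helm-values form, the flag being set at install time corresponds to a fragment like this (a sketch for Istio 1.4; the exact shape depends on how you install Istio):

```yaml
# values.global.mtls.auto=true turns on automatic mTLS between sidecars;
# enabled=false leaves the mesh itself in permissive mode.
global:
  mtls:
    auto: true
    enabled: false
```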
So that was a little bit on encryption. Now let's talk about authorizing service access. If you remember, before, we talked about Kubernetes service accounts being the part that establishes a service identity, and about needing more than what RBAC provides. Before Istio 1.4, there were four components you needed to really implement authorization. Obviously one is that service account I just talked about, which is a Kubernetes primitive, and then we needed three Istio mechanisms. The first was ClusterRbacConfig.
This was a mechanism, actually an API object, that turned on Istio RBAC across the entire deployment, and you were able to do things like choose which namespaces to include or exclude. But then, to actually set up the specific service-level authorization controls, you had to create a ServiceRole and a ServiceRoleBinding. The challenge here is that there's a little bit of overlap between ServiceRole and ServiceRoleBinding, so you've got to sort of develop those in parallel.
So there's an option there to migrate an existing authorization deployment into the new AuthorizationPolicy model, and this model is great. I love it, because it simplifies a lot of what you do and puts all the functionality you care about in a single API object, so it keeps things a lot simpler from a design and implementation perspective. So now, if we look at an example of this: again, we want to implement fine-grained authorization controls. In this case we've got service A in the team-1 namespace.
And service B and service C are in the team-2 namespace, and both of these namespaces have Istio injection enabled. So now let's say we want to implement the example we talked about earlier, where A can talk to B, B can talk to C, but A cannot talk to C. How would we do such a thing?
It's actually relatively simple. What we can say is that service C can only allow connections from service B, and what we use to identify service B is a Kubernetes service account. So there is a little bit of updating that has to happen here: if you didn't have service accounts deployed previously, you have to make sure to create those service accounts. In this case, we'd have to create one for service B.
And we'd also have to make sure to update its pod spec, or its deployment spec, to incorporate the service account. But once that's in place, and that's a simple rolling-update approach you can do relatively easily, then with a simple AuthorizationPolicy object, and this is a really, really bare example, we can control that service-level access. Now, what if we wanted to get really granular?
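A minimal sketch of that bare AuthorizationPolicy using the 1.4 security API (the namespace, label, and service-account names are illustrative):

```yaml
# Only workloads running as service B's service account may call service C.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-b-to-c
  namespace: team-2
spec:
  selector:
    matchLabels:
      app: service-c         # illustrative workload label
  rules:
  - from:
    - source:
        # SPIFFE-style principal derived from the Kubernetes service account
        principals: ["cluster.local/ns/team-2/sa/service-b"]
```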
We could add more here and say, you know, let's add an additional principal for service A, and we can again highlight specific route and verb combinations that are allowed for A to access C. So we could say service A can only access service C with a GET method on just the slash, the base route. That sort of level of control is possible here, so again, this ends up becoming a very, very powerful mechanism.
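Extending the same object, a sketch of a rule that lets service A issue only GET requests to the base route (names again illustrative):

```yaml
# Service A may only issue GET requests to "/" on service C.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-a-get-root
  namespace: team-2
spec:
  selector:
    matchLabels:
      app: service-c
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/team-1/sa/service-a"]
    to:
    - operation:
        methods: ["GET"]     # verb-level granularity
        paths: ["/"]         # route-level granularity
```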
Some things to remember here, and the biggest one actually is that these rules are additive; they're not negating rules. Over time there's actually active work going on to allow negation, but right now everything is additive. So as you add rules, you're adding capabilities; you're not taking them away, and you're not overriding what's already there. That's an important thing to remember as you're trying to roll out this kind of fine-grained authorization approach.
Thanks. So we can also require a valid JWT token, or an OIDC identity that identifies a user, in the request before we allow a request to go through, and we can take the principals from that as well and use them in the authorization policies. With that said, let me share my screen and...
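In the 1.4-era authentication API, requiring a valid JWT on requests to a service looks roughly like this sketch (the issuer and jwksUri below are the Istio sample values, purely illustrative):

```yaml
# Requests to service C must carry a valid JWT from the configured issuer;
# the token's identity becomes the request principal for authorization.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: require-jwt
  namespace: team-2
spec:
  targets:
  - name: service-c
  origins:
  - jwt:
      issuer: "testing@secure.istio.io"
      jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.4/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN
```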
Let me walk through a demo of this. So if we come here: we're going to show three demos, provided they actually work, covering the three different concepts that we just introduced. One is auto mTLS for our services, which encrypts all the traffic going between your services in the mesh automatically, and with the auto mTLS functionality we can leave the mesh in permissive mode for those services that aren't equipped to do mTLS immediately.
So we can have a combination of those, and then we'll show the authorization policies that Istio introduces, as well as requiring JWTs for access. So let's come here. The first thing I want to point out is that the source code for this demo is all on GitHub, and we can add the link to the notes and in the comments as part of the docs. In the source code here, we see how to set up a cluster to get started.
We also see how to install Istio: we are basically setting the auto mTLS flag to true and turning off mTLS enforcement in the install, and what this will do is put the mesh into permissive mode. Now let's take a look at our cluster. We can see that we have Istio installed, and if we want to come in and verify that we do indeed have auto mTLS enabled, we can take a look at the Istio config map in the istio-system namespace, and we can see...
...that auto mTLS is indeed on. So let's take a look at our demo here. This demo is going to be using the Hipster Shop from the Google Cloud microservices example. We should be able to see the main UI. Looking at the source code down here, we can see exactly where it's hosted: you can pull down a handful of services that collaborate together to provide some functionality.
If we come back to the diagram, this is just a diagram of what the Hipster Shop looks like. What we're going to do: we saw that auto mTLS is enabled, and we saw that we are in permissive mode. Let's make sure that for the services in the mesh that can do mTLS, the ones with these sidecars here, it's actually in place and enforced.
So the first thing we're going to do is take a look at the istioctl tooling for checking whether a service is enrolled in TLS, and we can see that the settings are AUTO and PERMISSIVE. But we're going to go a little deeper. We're going to pick, and it doesn't really matter which, the traffic that's going from the frontend service to the productcatalog service.
If we come back here, that would be from the frontend here to the productcatalog service, and we're going to verify that, indeed, mutual TLS encryption is enabled here. So this is the IP address of the productcatalog service internally, inside the mesh, and what we're going to do is use tcpdump to capture the traffic between frontend and productcatalog using that IP address. On the bottom pane here, what we have is the deployment for the frontend service.
There's a quick config setting that I need to change, because by default you cannot write to the file system. We're going to change that, because we want to be able to capture the tcpdump of the network packets going through and save that file locally on the pod. Let's make sure that that pod has come up. It is coming up in a second, and so what I'm going to do is remotely start tcpdump.
Now, if we come back up here, let's run the command to start the tcpdump, and we see that it's happening, and it will save the output to a file in the pod called output.pcap. I come over here, refresh, and hit our frontend service, so some traffic should be going through the system. Now I can end the capture.
If we do File, Open, and look in /tmp, you can see our output.pcap there, and if we open it, we can see there is application data flowing between the frontend and the productcatalog service. But we can also see that this traffic is encrypted. Now, we didn't capture the packets as the mTLS session was being created; it looks like we captured it in between, so we don't see the handshake, but you can see that the data is indeed encrypted.
What we're going to do next is create a couple of service accounts that we'll use for authorization, and then we'll see how, using this new API, we can block access to certain services. So come back here, look at the diagram: first, we can enable traffic to go from the checkout service to the currency service, but we can block traffic going from the frontend to the currency service. Let's take a look at how we would do that. This is the authorization policy that we'll start with.
If we click on refresh, we should see that we could not retrieve the currencies, because the frontend service could not communicate with the currency service. So if we add the policy back here, so that frontend and checkout can call the currency service, then we should restore access. Let's try that out.
You see that configured; come back here, refresh, cross fingers... okay, we're back. So now the frontend service, in this case because of the policy access that we've allowed, can talk to the currency service, and likewise the checkout service talks to the currency service, and things proceed as we would expect. Now, the last use case that we'll look at is when you're trying to talk to a service and you want to enforce that it has a JWT token and that that token is valid.
We can kind of close a little bit here with what's new. We talked about AuthorizationPolicy and auto mTLS already. What else is in 1.4, which is a couple months old at this point, honestly, but due to Thanksgiving and Christmas it was hard to schedule another video, so we waited until the new year. One of the big things is mixerless telemetry. If you'll remember, this has been a mechanism that's been teased and slowly rolled out over the next few releases.
But ultimately Mixer is going to go away, and telemetry is going to be reported directly from the sidecar, from the Istio proxy that's built on Envoy. It's going to report directly to any of the downstream metrics mechanisms, whether it's Prometheus or, if you have custom integrations, things like Stackdriver or Datadog, but everything's going to be mixerless. That's a really, really important distinction, because it introduces a couple of things. One: ultimately, when Mixer goes away, telemetry will become the responsibility of the Istio proxy sidecar.
Another one is istioctl analyze. This was a feature that was experimental back in 1.3 and is now getting more and more powerful, and it gives you the ability to do a few things. One is to analyze an individual Istio API object, an individual YAML file in your file system, or to analyze a bunch of files together to see how they're going to interact, or you can actually point it at a live cluster and analyze what's there.
You have a lot of capability to understand whether something is broken. So, for example, in the situation we were just dealing with, where Christian applied a policy object with JWT tokens and it wasn't picking up that change, we could have used istioctl analyze to understand: hey, was that authorization policy written correctly, was it referring to the right source, or is there a conflict somewhere with another file?
There's a lot that can happen from a conflict perspective at runtime, and that's what the analyze capability in istioctl can help you understand. Among the other improvements in 1.4, the other big area was a bunch of improvements to the sidecar: much better graceful exiting in crash scenarios, a lot more metrics being reported, and now the ability to mirror a percentage of traffic, which is really handy. Traffic mirroring is a really, really cool capability that Istio provides.
You can set up a new version of a service and just mirror traffic to it silently. It's out of the critical path; it's async, off the standard request path, so you get to see how your application performs with the new version without impacting user-facing behavior. And now we can actually do that with percentage-based routing, so it gets a little bit more friendly.
But we also want to talk about what's coming in 1.5. There are only three bullets here, but the first one is really the most salient, the most important: istiod is coming. Istiod takes the remaining control-plane components, and instead of running them as a series of microservices, actually turns them back into a monolith. I didn't get a chance to get the link in here, but Christian did a great video on this, I think a couple of weeks ago.
Maybe when we upload the slides to Speaker Deck or something, we can update this with that link. But it was a great video explaining why we're making this transition in Istio: why are we going back from microservices to a monolith? In this case it does make a lot of sense. Istiod is a big change, but it's going to have a lot of downstream benefits.
The operator experience improves, particularly around things like upgrades: there's a lot more that we can do as a monolith, especially as we've taken out something like Mixer, so it's not in the critical path on every request. Now it's really just Pilot and Citadel, and we can put those two things together, and that does have some other downstream impact on things like control-plane security. I put a link in here to the draft release notes, which is in a Google Doc being written by the community. Christian?
So the whole point of microservices is to enable parallelism and autonomy, to be able to move faster, and what we noticed with the Istio control plane is that, organizationally, those assumptions never really took hold. When folks would take and consume Istio, they would run it with either a platform team, which consisted of a handful of people, or, you know, a service mesh team, which consisted of one person. They were in charge of operating the control plane, managing it, updating it, upgrading it, and so on, and they would do that as kind of a whole.
As we saw earlier in the presentation, the sidecar is issued a certificate that identifies what that service is. When the communication happens within the Istio mesh, either auto mTLS is enabled or you're explicitly enabling it with the various Policy and DestinationRule objects. Recall that the Policy object configures the server side to accept or require mutual TLS, whereas the DestinationRule object configures the client to expect mutual TLS.
So in this case, if you want to terminate external connections, that termination, the way Istio is designed, happens at the Istio ingress gateway. The connection gets terminated there, and then mutual TLS takes over to encrypt traffic between the ingress gateway and your application pods. As far as I know, there isn't a way to let the ingress gateway be a pass-through and let your application terminate: it's going to terminate TLS there, and then use Istio mTLS from that point forward, from the ingress gateway on.
A
Is
solo
open
source
project?
It
also
builds
on
ongoing
and
allows
you
to
extend
this
do
to
support
edge
api
gateway
like
functionality.
So
you
might
need
to
a
example.
I
tried
to
show
the
chat
demo,
but
you
might
need
to
assert
a
user's
identity
using.
Oh,
I
DC
or
a
lot
or
api
keys
or
some
some
other
mechanism
and
glue
is
a--
is
a
good
way
to
solve
that
challenge
and
bridge
into
the
rest.