Description
One of the key operational challenges of securing microservices is understanding, securing and monitoring access to external services. A service mesh like Istio can enable organizations to offload this critical functionality from applications to the infrastructure, thereby decoupling the developer and operations teams and gaining efficiency.
This webinar will explore various architectural options available to secure traffic to external services and their tradeoffs when using Istio. Neeraj will also cover how operations teams can incrementally increase their security posture by using telemetry from Istio and configuring explicit policies for external services access control.
A: Very good. It looks like we've got quite a few attendees this morning, so good morning, everyone, and good evening to some of you, although as I hold on and grip my coffee tightly I have a hard time saying anything but good morning. Welcome to today's CNCF webinar on how to secure and monitor external service access with a service mesh. I'm Raquel, co-founder of Layer5 and a Cloud Native Ambassador, and I'll be moderating today's webinar. We would like to welcome Neeraj Poddar, the co-founder and engineering lead at Aspen Mesh.
A: It's fantastic to have Neeraj with us today. Before we hand it over to him, we do have a few housekeeping items to note. This is a CNCF webinar; during the webinar you're not able to talk as an attendee, but questions are highly encouraged. There is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many of them as we can.
A: I would love for someone to stump Neeraj; I've not seen that before, so that would be something. That said, we are recording, and this is an official CNCF webinar, so it is subject to the CNCF code of conduct. Please don't add anything to the questions or to the chat that might violate the code of conduct; essentially, please be respectful of your fellow participants and the presenter. With that, over to you, Neeraj.
B: Right, thanks. Hello everyone, I'm Neeraj, the co-founder and engineering lead at Aspen Mesh. Thank you all for joining me today in exploring how to secure and monitor external service access with a service mesh. Before I get too deep into the how, and the various ways you can achieve it, let's quickly explore why this is important. Why, as an organization currently running microservices in a Kubernetes environment, do you even care about external service access?
B: The main reason you care about it is that you want to be protected from security breaches. If you are an organization holding any sensitive data, whether it's privacy data related to user information, emails or credit cards, or health information, you want to make sure that data is protected. At the same time, if you do have a security breach, you want to be able to react to it quickly.
B: Protecting access to external services, and monitoring that access, is one of the ways to make sure you're covered in those scenarios. Interestingly, as organizations move towards cloud native technologies and adopt microservices, there's a paradigm emerging where it gets more and more difficult to understand your security posture. You have more and more microservices coming and going, all trying to reach out to external services, and you don't actually understand what's happening.
B: Moving on, this point about monitoring and managing your external services is also emphasized by the OWASP Top 10 list. If you're not familiar with OWASP, they list the ten most common vulnerabilities affecting applications, and if you look at them, there are three which fall squarely into this category of external services. The first one is using components with known vulnerabilities.
B: If you have applications using libraries, whether open source or in some other form, and those libraries have vulnerabilities, you are open to breaches. More often than not, the way it works is that these vulnerable components look at your sensitive data and publish it to a public website.
B: The example I have here is a recent Python vulnerability, where a fake version of the dateutil library was published on PyPI. It was periodically listing all your directories and file contents and publishing them to a public website. As you can imagine, if an organization correctly monitored and secured external service access, this could have been prevented. Moving on: if you have insufficient logging and monitoring, it affects you in two ways.
B: The last thing I'm going to talk about before we go into the details of how to achieve this: most organizations are currently trying to achieve zero trust security, and more often than not a lot of focus and attention is paid to how you get traffic inside your clusters, that is, how you make sure it is authenticated, authorized and encrypted. External services are the other half of the equation. You have to pay equal attention to what services you are consuming, because breaches can happen there too.
B: So with that background in place, let's get started. I'm going to start off with a simple example of a microservices environment running in your Kubernetes cluster. Here you have an application A which talks to application B and app C. They can be written in different languages; most of the time they are. The traffic comes from the internet and hits application A.

B: Application A is going to reach out to an external identity management service; it's very common for organizations to use something like Keycloak or Cognito to do user management. And then app C here reaches out to an external database, where you are using a SaaS managed service like DynamoDB or Bigtable to reduce your operational burden. Very common, and a very simple architecture. The reason I am explaining this now is that we are going to use this slide as the presentation progresses and incrementally add complexity to it.
B: OK. From an external service point of view, if this is your current state and you're a security operator or an application developer in your organization, here is where you want to be. The desired state is: first, you want to make sure all the traffic to your external services is encrypted. You should not be talking to an external service over HTTP; it should be HTTPS. Next, you want to make sure any unauthorized access is blocked.
B: What that means is, if app B has a new version deployed and suddenly it's trying to reach out to GitHub, you should probably block it; that's probably a known vulnerability that has crept in and is now trying to publish your sensitive data. And the third thing you want is observability and visibility for both of those scenarios: you want to track when your communication is going as expected, and also when violations are happening and you have blocked them.
B: This visibility can be at different layers: you can have metrics, you can have tracing, you can have logging, or you can have all three. With this goal in mind, let's see how we can achieve it, and I'm going to focus on how you can achieve it with Istio. To summarize, the goals for external service access are basically threefold. First, how do I know what external services I am connecting to? This is the visibility, logging and monitoring that you want.
B: Second, how do you make sure you are securing the access, meaning all the traffic is encrypted? And third, you want to block unauthorized access, so that any external service that should not be reached is blocked. And again, you have visibility in either of those scenarios: when the traffic is actually meant to go out, and when the traffic is blocked.
B: You can also use open source libraries and other third-party tools and embed them in your application, so that you offload this functionality into some shared code. There are two basic problems with this approach. First, because you are embedding the capability in your application, if you have different languages, say Python, Go and JavaScript, you now have to make sure you have consistent libraries across all of them, or you have to upgrade them all at the same time.
B: Secondly, if you are using the TLS stack in your applications and there are vulnerabilities, you have to rebuild all of those applications and deploy them again. That inherently increases the time you need to react and fix the issue at hand. The third way, which I highly recommend organizations use, is offloading this functionality to an infrastructure layer. Basically, you want to take this complexity out of your application and move it into an infrastructure layer.
B: Today we are going to cover a service mesh, which is one of those infrastructure layers you can use. That has two benefits. One, application developers don't have to do this anymore, so they get to focus on what they really want to do. And two, you enable your operations team to set security policies via configuration, not via code. All right, so with that in mind, I'm going to do a quick recap of what a service mesh is, and then we'll move on to how a service mesh can help in this endeavor.
B: A service mesh does two things. It allows developers to offload functionality, like I was saying, and focus more on the business logic; and at the same time it allows operators to build resiliency and security into their environments outside of the dev cycles. That means both of these personas can do their jobs independently and still successfully achieve the goals of the organization or business. Typically, service meshes work by adding a proxy, and that proxy intercepts traffic coming in and out of your mesh.
B: Sorry, in and out of your application. As that proxy intercepts the traffic, it can add a lot of advanced functionality. Depending on where the proxy is placed in your architecture you have a lot of different options, and today I am going to cover one of them, the sidecar proxy architecture. If you haven't heard of this term, I'll quickly explain what it is: in a sidecar proxy architecture, you insert a proxy as close to the application as possible.
B: In the simple example, if you have app A which wants to talk to app B, first app A's traffic will be intercepted by its proxy; then the proxy in service A talks to the proxy in service B, and then the request eventually goes to the application container. Having proxies at both ends, on both sides, which we call bookended proxies, gives you the capability to enforce policies at both the client and the server.
B: All the proxies together are what we call the data plane of a service mesh. Most service meshes also include a control plane. The control plane is responsible for watching your Kubernetes environment, looking at the configuration an operator provides, and lowering that configuration to the data plane so the data plane knows what to do. This gives you a nice abstraction where you can replace the data plane.
B: The second category is security: as the proxies receive traffic, whether inbound or outbound, you can do authentication, you can do encryption, you can do authorization, and, as we are going to cover today, you can actually block external services. The third category is observability: depending on the type of request and the type of proxying being done, whether it's TCP proxying or HTTP proxying, the proxy can surface a lot of metrics, tracing and logging for you.
B: Moving on. Today I'm focusing on service mesh, but particularly on how Istio enables this, so this is my transition to talking about how Istio lets you do external service access control and monitoring. To quickly explain: Istio uses a sidecar proxy architecture, and the proxy it uses is the Envoy proxy. If you're not familiar with Envoy, it is a CNCF project: a high-performance proxy written in C++.
B: This is the architecture of Istio, but it's really the architecture of most service meshes: you have a control plane and a data plane, and in Istio the data plane is Envoy. All right, so with that, let's now dig into the specifics of how service meshes, and particularly Istio, can help you with external services. This is the updated version of the initial diagram I had: the same architecture, with apps A, B and C.
B: Now a proxy has been inserted, so when app A wants to talk to the external identity management service, it has to go through the sidecar proxy. Similarly, when app C talks to the external DB, it has to go through the proxy. With this architecture in place, let's look at all the options available to us for managing external services. In Istio, there are four options available to you, and I'm going to go into detail on each of them.
B: The four are: allow-any; restricted access with TLS passthrough; restricted access with TLS origination; and egress gateway with TLS origination. Some of the other service meshes might also provide some or all of these options. I'm not particularly sure, because there are so many service meshes out there, but Istio currently provides these four. The key thing I'm going to do is evaluate them against three parameters, based on the key concerns I listed from OWASP.
B: We're going to compare and contrast these options on how easy they are to configure. If it's hard, people will probably get it wrong, and then you're more prone to breaches than you intended to be. Second, what level of visibility do you get? And third, how secure is it really? A false sense of security is sometimes more hurtful than having no security at all, so I want to be clear, for whichever option you're going with, about what you actually get out of it.
B: The first option is allow-any. What I mean by that is: if app A is talking over encrypted traffic, HTTPS, to an external entity, it will be allowed. Similarly, if app C is talking over regular HTTP to the external DB, it will also be allowed. This really means you don't have any security. In my view, the proxy is proxying things at a TCP level without actually enforcing anything, and because of that, when app C wants to talk to GitHub, it's actually allowed, not blocked.
B: The reason I bring it up as one of the options is that currently, many people who run Kubernetes environments don't actually monitor their external services. That means when they add Istio on top, if you start restricting external services right away, it breaks the environment. So this option exists to ease their transition into adopting Istio. Let's look at the pros and cons, on those three parameters of configuration, visibility and security, for this first option.
B: The second thing is, you do get some telemetry. Recently we added support for telemetry in Istio for allow-any, and that telemetry is in the form of TCP metrics; you can configure it and get destination IPs if you want. But that information is very limited, because if an attacker is really trying to leak sensitive data, they're probably going to continuously change their IP addresses. TCP-level metrics alone are not sufficient to enforce security here.
B: Next, I'm going to show you how to configure allow-any, and this is the format I'll use for the remainder of the presentation: talk about an option, tell you how to configure Istio to use that option, and then show you some of the resulting Envoy configuration, so you can go to the source of truth and understand what's actually happening in Envoy. So, for allow-any:
B: The first thing you need to do is make sure that the ConfigMap deployed in your istio-system namespace, which is the config map for the mesh, says that you have allow-any configured. Simple enough.
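For reference, here is a minimal sketch of what that setting looks like, assuming the mesh ConfigMap is named istio and lives in istio-system as in a default install (exact field names can vary slightly by Istio version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    # Allow outbound traffic to any external destination
    outboundTrafficPolicy:
      mode: ALLOW_ANY
```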
Moving on: the Envoy configuration for any of the pods that have the sidecar should then look like this. You see what is called a virtual outbound listener. You're looking at a config dump of Envoy here; in Istio we configure Envoy to have virtual outbound listeners.
B: These listeners are the default listeners that all traffic from the application gets routed to, and they are configured with use_original_dst. What that means is that if there is a more explicit listener, that listener will get invoked; but if there are no explicit listeners for the traffic being received, the configuration of this virtual outbound listener applies. In this case, with allow-any, the configuration of the virtual outbound listener is a TCP proxy.
B: That's why you're only getting TCP-level stats. It is configured to use the cluster we call the passthrough cluster, a special virtual cluster in Envoy which tells it to forward the traffic as-is to the original destination. That's why this actually works. Many customers and community users ask me how allow-any works and how they can verify that they have it configured; this is how you should verify it.
B: If you have allow-any configured, your virtual outbound listener will always point at the passthrough cluster.
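As an illustration, an abridged dump of that virtual outbound listener might look roughly like this. This is a sketch based on Istio's conventional names (virtualOutbound on port 15001, PassthroughCluster); exact field names and proto versions differ across Envoy and Istio releases:

```yaml
# Abridged output of `istioctl proxy-config listeners <pod> -o json`,
# rendered as YAML for readability
name: virtualOutbound
address:
  socketAddress:
    address: 0.0.0.0
    portValue: 15001          # iptables redirects all outbound traffic here
useOriginalDst: true           # defer to a more specific listener when one exists
filterChains:
- filters:
  - name: envoy.tcp_proxy     # TCP-level proxying only, hence TCP-level stats
    typedConfig:
      statPrefix: PassthroughCluster
      cluster: PassthroughCluster   # forward as-is to the original destination
```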
All right, moving on to the next option. Clearly allow-any is not a very secure option, so progressively we are going to look at more secure options, and options which give you more visibility.
B: The second option is restricted access with TLS passthrough. In this scenario, operators have to explicitly configure the mesh and tell it which hosts and services you are allowed to talk to. In Istio, by default, all the services within the Kubernetes cluster are allowed to talk to each other.
B: When you turn this option on, you won't be allowed to talk to any external service unless you explicitly whitelist it. And as operators whitelist services, it does not make sense to whitelist plain HTTP services, so you'll always whitelist HTTPS services. Now, the trick here is that it's called TLS passthrough: the proxies are configured to look at the SNI header, the Server Name Indication in the TLS handshake, and route traffic based on that SNI value.
B: What that means is, when application A talks to the external identity management service, app A itself creates an HTTPS request. The TLS handshake happens, the proxy looks at the SNI header and sends the traffic on. If the proxy sees a request whose SNI header is not in its configured list, it will reject that traffic.
B: Similarly, app C can talk to the external database, and now you are more secure, because only encrypted traffic can go through, and since GitHub is not part of your whitelist, it will be blocked here. As you can see, we have added a fair amount of security: the traffic is encrypted, and we are able to block any traffic that is not in our whitelist. So let's evaluate the pros and cons of this option. On the pro side, you are fairly secure.
B: You achieve two of the goals you wanted. On the con side, you obviously now need more configuration; I'm going to show you what configuration you'll need to make this work with Istio. Visibility is still limited: because the proxy is doing TCP proxying with SNI routing, you only get TCP metrics.
B: To react quickly to any security or data breaches, an organization should collect as much information as possible, and if that information is at layer 7, that's the best you can do. So this option is secure, but the visibility is not that great. The third point, the security con I listed, is very interesting: in this option, both app A and app C are using the application's own TLS stack.
B: What that means is, if you have a vulnerability in that TLS stack, you will have to rebuild your applications. For example, if this was deployed in your cluster and a vulnerability like Heartbleed came out, you'd have to rebuild the whole world to fix it. On the other hand, if the proxy's TLS stack were used, the operations team could simply deploy a new version of the proxy. So you get some security, but you also retain some vulnerability because of the way this works.
B: An interesting thing about this option: many of the users I talk to turn this on, especially in AWS environments, and they start to see failures. They say, "My services within the cluster talk to S3, for example, and I have configured S3 service entries, but they still fail." I'm going to quickly show you why that happens and what configuration you will need. In Istio, if you want to configure TLS passthrough:
B: The first thing you need is to make sure you have switched that ConfigMap option to registry-only. Registry-only means you are no longer allowing all traffic to pass through; only the traffic that is whitelisted is allowed to go.
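The switch itself is the same mesh-config field as before, just with a different mode (again a sketch; field names may vary by version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    # Only hosts in the registry (k8s services plus ServiceEntries) are reachable
    outboundTrafficPolicy:
      mode: REGISTRY_ONLY
```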
When you do this, the virtual outbound listener I was talking about gets switched to a TCP proxy pointing at the black hole cluster. The black hole cluster is the opposite of the passthrough cluster in Istio, and as the name says, it's going to black-hole any traffic it sees. So you are no longer able to access anything you want, only the whitelist.
Let's look at what configuration you, as an operator, need in your cluster to make this work. You need to create a service entry. A service entry in Istio is a way to augment the set of services that the proxy is allowed to route to.
B: Like I was saying, normally you can only talk to things within the cluster; a service entry is a way to update that registry. In this case I have told it you can talk to httpbin.org. There are two key things here. One is that the port name here is https and the protocol is HTTPS. This means you're going to be doing SNI routing, and the application itself is going to make a request to https://httpbin.org.
B: It's very important to make sure you don't put plain HTTP here, because that's going to create vulnerabilities, given your goal.
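A ServiceEntry along the lines described might look like this. It is a sketch of the whitelisting resource, with httpbin.org as the example host from the talk:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https        # HTTPS, so the sidecar does SNI-based passthrough
    protocol: HTTPS
  resolution: DNS
```

The application then calls https://httpbin.org directly; the sidecar matches the SNI and passes the already-encrypted bytes through.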
When you configure service entries, this is how your Envoy configuration will look: the Istio control plane will create additional listeners. In this case the listener will be for the wildcard host, the wildcard IP, on port 443, and it will be configured with an SNI match which says: if the server name is httpbin.org,
You
should
you
should
activate
this
filter
and
the
filter
will
be
a
TCP
filter,
which
is
a
TCP
proxy
and
it
will
send
the
traffic
eventually
to
443.
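An abridged sketch of what that generated listener can look like in the Envoy config dump (names follow Istio's generated-config conventions; exact fields vary by version):

```yaml
name: 0.0.0.0_443
address:
  socketAddress:
    address: 0.0.0.0
    portValue: 443
filterChains:
- filterChainMatch:
    serverNames:            # SNI match taken from the TLS ClientHello
    - httpbin.org
  filters:
  - name: envoy.tcp_proxy   # still TCP proxying; the app owns the TLS session
    typedConfig:
      statPrefix: outbound|443||httpbin.org
      cluster: outbound|443||httpbin.org
```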
This is fairly simple and gives you a good amount of security. Coming back to the AWS thing I was talking about: if you're talking to AWS services like S3 and you just configure a service entry for S3,
B: normally it won't work. The reason is that AWS clients usually talk to some other things, like the metadata server or the STS service, to actually get tokens. So depending on how you're getting your credentials, you need to make sure you whitelist not only the AWS services your applications directly consume, but also things like the metadata server and the STS service. Just something to keep in mind.
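For example, a hypothetical additional whitelist entry for the AWS STS endpoint might look like this (the global STS host is shown; regional endpoints and the EC2 metadata IP would need their own entries):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: aws-sts
spec:
  hosts:
  - sts.amazonaws.com   # token service used by AWS SDK credential providers
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
```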
I have debugged a fair number of issues where people start with this: you add these options, and then everything talking to AWS is broken. All right, so moving on.
This is the third option, my favorite, and the one I always recommend users use: restricted access with TLS origination. You are still whitelisting the services that are allowed in your mesh, but you are doing TLS origination, which means you are deferring the TLS negotiation to the proxy.
B: If you look at the pod as your security boundary, the main thing you want to achieve is that your packets are never unencrypted outside the pod boundary. This achieves it: your traffic to external services is always encrypted, but within the pod it is not. And this is actually beneficial to you, because now you can apply lots of advanced layer 7 policies, and the visibility you get from the proxy is now at layer 7.
B: Similarly, just like the last option, if something tries to talk to GitHub and it's not in your whitelist, it gets blocked. If you look at the pros and cons here, the pros are a lot more than the last option. You are getting the same level of security, and you are now able to apply layer 7 policies in Istio.
B: If you're familiar with Istio, you can use virtual services and destination rules and get retries, timeouts, all the things you want, and as an operator you don't have to rely on the application anymore. From the visibility point of view, if you have configured the right options in Istio, you are now going to get layer 7 metrics, that is HTTP metrics; you're going to get access logging; you're going to get tracing.
B: That is a lot of visibility right out of the box, without changing your application. The con is, as is usually the case with security and visibility, that if you're getting both, you're also getting a lot of configuration. So that's one downside. Another downside, if you want to call it a downside, is that you now have unencrypted traffic between the application and the proxy. Again, most of the organizations and businesses I know can live with this, because the pod is the security boundary.
B: This is how you configure restricted access with TLS origination in Istio. You still need a service entry like before, but now the interesting thing is that you also need to add port 80 with the protocol HTTP. Don't worry, the proxy is going to upgrade the connection, and the way it does that is:
B: you have a virtual service configured which says: when the application is trying to talk to httpbin.org at port 80, route it to a specific destination, configured in your destination rule, at port 443. And that destination rule says: for port 443, you're going to do TLS mode SIMPLE. This means the connection from the application comes to the proxy at port 80, but the proxy makes the connection out at port 443 using simple TLS.
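Putting that together, here is a sketch of the three resources described, patterned after Istio's documented egress TLS-origination example, with httpbin.org as the host:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  ports:
  - number: 80
    name: http          # the app speaks plain HTTP to the sidecar
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-ext
spec:
  hosts:
  - httpbin.org
  http:
  - match:
    - port: 80
    route:
    - destination:
        host: httpbin.org
        port:
          number: 443   # redirect port 80 traffic out on port 443
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-ext
spec:
  host: httpbin.org
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE    # the sidecar originates TLS on the way out
```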
B: That was all the Istio configuration you need. The control plane eventually lowers it to Envoy, and this is what the Envoy configuration will look like: we create a new listener at port 80; that listener will have an HTTP connection manager filter, and that HTTP filter will have a route configured for the listener. The way Envoy configuration works is that listeners have either TCP filters or, mostly, HTTP filters, or filters for whatever other protocols you have.
B: The HTTP filter typically points to a route; in this case it points to route 80. That route in turn points to a cluster, so the chain goes listeners, then routes, then clusters. The cluster for route 80 points to a special cluster here, the outbound TLS-origination cluster, and the configuration for that cluster basically says: use the TLS context, which is a common TLS context. This common TLS context means it will use the well-known certs to verify the certificate of the server on the other side.
B: As you can see, the number of YAMLs, the amount of configuration you have to write to achieve this, is fairly high, but you get a lot of benefits. With the right amount of automation, or using vendor products which create a wrapper around it, you can get the benefits without having to worry about all the intricacies of the configuration. So I always recommend people use this option. The fourth option here is a pretty advanced one.
B: That is using an egress gateway. An egress gateway gives you the capability of routing all traffic for external services through a special gateway, which is like the inverse of the ingress gateway. It's a standalone proxy, and there are two use cases I know people want it for; one is controlling the public IPs your egress traffic leaves from.
B: Now, this is going to be a lot of configuration, and I want to make sure I inform you: if you're using this, make sure you actually have those requirements, otherwise you're taking on a lot of pain for not a lot of benefit. Also, with egress gateways you have multiple options. I feel that if you care about security and you want to use an egress gateway, you should only be using it with TLS origination; otherwise you're actually giving up security compared to the last option.
B: Let's quickly look at the architecture here. We are using TLS origination, which means the app talks to the sidecar proxy unencrypted; then the connection between the sidecar proxy and the egress gateway is encrypted, and this encryption is via Istio mutual TLS. This is very important. Some of the customers and users I've seen who are currently using egress gateways don't realize that you need mTLS to make this work.
B: Otherwise you have gaps in your security. Then the egress gateway will again do a TLS upgrade. So there are lots of moving pieces here, two kinds of encryption, but if you really want the benefits you can do this. The pros here are the same as for the last option; additionally, you have the public IP control I mentioned. The visibility is the same. For the cons: there is really a lot of configuration, which I'm going to walk you through quickly next.
B: And from the security point of view, you really have to enable mTLS in your cluster, otherwise this link here will not be encrypted. The configuration for this: first, you need to enable the egress gateway and verify that it's running. You need the service entry, and this service entry should look very familiar; it's the exact same service entry as for option 3. And now comes the main configuration overhead of enabling the egress gateway: we have to write a Gateway resource.
B: Then you have a destination rule, and this destination rule says that whenever you are talking to the istio-egressgateway, you should use TLS mode ISTIO_MUTUAL, and the SNI, the header to look at for the host, should be httpbin.org. All right, bear with me, a little more configuration here: next you have to configure a virtual service. This virtual service is for the host httpbin.org, but it matches different gateways and takes actions accordingly.
B: If you are matching on the mesh gateway, that is, the sidecar proxies, you're going to route it to the istio-egressgateway service. But if you are matching on the gateway proxy, you're going to send it to httpbin.org, and this is where you do the TLS upgrade, the TLS origination. The last piece of the puzzle is that you need to create a destination rule that applies at the Istio egress gateway, which does TLS mode SIMPLE and upgrades the connection.
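A condensed sketch of the resources described, patterned after Istio's documented egress-gateway TLS-origination example (resource names here are illustrative):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: https-port-for-tls-origination
      protocol: HTTPS
    hosts:
    - httpbin.org
    tls:
      mode: ISTIO_MUTUAL        # sidecar-to-gateway hop uses Istio mTLS
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: egressgateway-for-httpbin
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: httpbin
    trafficPolicy:
      portLevelSettings:
      - port:
          number: 80
        tls:
          mode: ISTIO_MUTUAL
          sni: httpbin.org
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-through-egress
spec:
  hosts:
  - httpbin.org
  gateways:
  - mesh                         # traffic originating at the sidecars...
  - istio-egressgateway          # ...and traffic arriving at the gateway
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: httpbin
        port:
          number: 80
  - match:
    - gateways:
      - istio-egressgateway
      port: 80
    route:
    - destination:
        host: httpbin.org
        port:
          number: 443            # the gateway sends it out on 443...
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-for-httpbin
spec:
  host: httpbin.org
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE             # ...doing the TLS origination itself
```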
B: As you can see, it's a lot more configuration, with plenty of opportunities to get it wrong, but if you have some automation you might be able to manage it. And again, only use it if you have those stringent requirements. Let me quickly summarize what this means. If you're trying to get more security or more visibility, sadly, more configuration always comes along with more security; that's just how life works. I would say allow-any is the option you should not even consider.
B: If you want security, restricted access with TLS passthrough is reasonable to start with, and then you should eventually try to be somewhere further along, whether that's restricted access with TLS origination or an egress gateway with TLS origination, depending on your needs. The last thing is that, as a team or an organization, you want to make sure you don't have to do all of this by hand to reach your goals; whichever option you go with, have some automation or some wrappers around it to simplify the life of your developers.

B: And the final thing, which I really like to call out about Istio: it was designed for incremental adoption. You can start with any of these steps and still reach the goal you want. If you are just starting out with Istio and you want allow-any, that's perfectly reasonable. You can start capturing destination IPs from the TCP telemetry, then go ahead and create service entries for those external services.
B: You have to do a reverse DNS lookup there, but then you can add those service entries. Then you can flip to registry-only, since you now have service entries; at that point only encrypted traffic is going out and you have blocked all the unauthorized traffic. Then you can update the application to use HTTP and eventually use TLS origination, and once you use TLS origination, you have security and visibility with a reasonable amount of configuration.
A: This is an awesome presentation, Neeraj. It looked like you were about hip-deep in the YAML there for a while; glad you escaped. To all of our attendees: if you do have questions, please ask them; put them into the Q&A at the bottom of your screen and we'll get to as many of those as we can. And Neeraj, we do have a few questions coming in.
B: Absolutely, with Istio you can pick what you want. Now again, you want to minimize your pain here: the more variety of options you use, the more you need to make sure those fine-grained configurations are correct. What I would recommend, if you really have this use case, is making sure you have your deny stance, the overall posture, configured first. So if you want to block everything, make sure everything is blocked, and then incrementally add things to the allow list, and then configure origination or mutual TLS on top of that.
B: The second thing is to use tools like istioctl, its auth and policy checks and analyze, which give you a way to see whether your configurations are in conflict with each other. And the last thing is: go to the source of truth and look at your Envoy configuration. That's why I had those snippets in the slides; that's what is really going to tell you what's happening.
B: There are a couple of things in Istio which are kind of weird, and which we are trying to fix in the community. One is that some of these APIs have what we call a global effect. If you configure a service entry for any host, currently it applies to all the sidecar proxies across all the namespaces. But you have an option here called exportTo, which means you can limit the scope of visibility of the service entry to a particular namespace.
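For example, a ServiceEntry can be made visible only to its own namespace with the exportTo field (a sketch; the namespace name is hypothetical):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-ext
  namespace: team-a        # hypothetical namespace
spec:
  exportTo:
  - "."                    # "." means only this namespace sees the entry
  hosts:
  - httpbin.org
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
```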
B: Let me see. The way I would do it is to use destination rules. And again, I hear people when they are worried about using so much configuration; we will make it simpler. When you configure destination rules, you can specify which sidecars you want to enforce that rule on. Similar to a service entry, destination rules can be scoped by specifying exportTo. So what I would do is define the destination rule in the same namespace as the egress gateway, scope it down, and configure the destination rule to use the self-signed cert.
B: That way, only the egress gateway will get those certificates. I think we can make it even better, and the APIs are still evolving, but it's definitely doable by scoping it down. And whoever asked this question, you are right: you don't want all the sidecars to get the certificate, and the way to achieve that is to scope it down.
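A sketch of that scoping idea: a DestinationRule defined alongside the egress gateway and exported only to its own namespace (the host and cert details are illustrative, not from the talk):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: originate-tls-private
  namespace: istio-system    # same namespace as the egress gateway
spec:
  exportTo:
  - "."                      # only proxies in this namespace receive the rule
  host: private.example.com  # illustrative external host
  trafficPolicy:
    tls:
      mode: SIMPLE           # originate TLS at the gateway with the configured certs
```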
A
And
I
was
so
hopeful
that
that
was
a
stumper
good,
we're
yeah,
okay,
so
other
questions
that
come
through
one
of
those
is
yeah.
So
one
of
those
questions
is
about
the
ability
to
watch
the
seminar
again
in
the
future,
and
so
yes,
seminars
recorded
the
slides
will
be
available,
and
so,
although
the
recording
will
have
the
link
to
that
in
chat
very
good
another
one
here
is
the
question:
is
you
know,
would
we
get
the
same
security
with
with
gr
pcs.
A
Good-
and
we
do
have
a
little
more
time
for
some
questions,
so
here's
another
one.
How
would
you
compare
and
contrast
the
options
the
you've
outlined
to
something
like
sto
mesh
expansion,
where
you
are
adding
an
envoy
agent
on
an
external
service,
or
these
you
know,
are
the
cases
where
mesh
expansion
would
be
a
better
alternative.
B: Again, it's an interesting question, but the aim of mesh expansion is a bit different. When you're doing mesh expansion, you're trying to bring your VMs into your mesh, so that services within the Kubernetes cluster, which are part of your service mesh, can transparently talk to other services which are outside the cluster.
B: What you're trying to ask, I'm guessing, is: if that VM, or the application on the VM itself, is trying to reach out to an external service, how do you enforce this? I think these APIs and the configuration options I gave should work, but to be honest, I haven't tried them. So if you're really interested, please reach out to me; you have my email and Twitter there, and I'll gladly help you out.
B: Yeah, you're right, and that's why, if you're an Aspen Mesh customer or a user of our distribution, we don't even have that; we disabled it. The reason we have it in the Istio community is that the organizations that aspire to use Istio to reach zero trust are in different phases of that adoption. Some people are just moving to Kubernetes, and when they've just moved to Kubernetes, if you start blocking external services on them, things break. So we need to give them incremental adoption, and yes, it's not secure, but for many organizations it's a starting point.
A: Yeah, it'll be a burning concern at some point. Very good. You have been sufficiently peppered with question after question; this is good. We've got another couple of comments coming through, so maybe one more here. The question is: do you need to have policy enforcement, Istio policy, enabled to switch to registry-only?
B: No, and that's a really good question; I'm glad somebody asked, because it may not have been clear from the presentation. The options that I configured and showed you are orthogonal to Istio policy. In Istio 1.4 we actually disabled Istio policy as a default option. Istio policy used to run as a separate component, as part of Mixer: attributes used to go from the sidecars, and the policy was evaluated at a centralized point of control. Going forward we are deprecating that, and the policies will be enforced in the proxies themselves.
A: Very good, great. Well, Neeraj, this was a great presentation, with lots of good questions, and that's probably all the time we have for questions today. We were that close to potentially stumping Neeraj; I just want to let everyone on the call know that I'm slightly disappointed today, but there will be more chances, I'm sure. You can catch both his Twitter and his email here. Neeraj, will you be at the upcoming KubeCon?