From YouTube: Service Mesh in the Real World video series - Ep 1
Description
This educational video series will walk you through various application networking challenges, how we have traditionally solved them, service mesh networking concepts to solve them for microservices, and show you how to do it through live demonstration.
Learn more about the technologies used in the video:
Istio https://istio.io/
Google Cloud https://cloud.google.com/
Service Mesh Hub http://servicemeshhub.io
Solo.io https://www.solo.io/
Hey everyone, this is Betty Jo from Solo.io. Christian, Sandeep, and I are very excited to be kicking off our first series, Service Mesh in the Real World, and digging into that specifically using Istio. So at this time I'd like to introduce Christian Posta of Solo.io and Sandeep Parikh of Google to take it away with our first topic: managing egress traffic. Take it away, guys.

Awesome, thanks.
Today we're talking about traffic management, and specifically the challenges around getting traffic out of your cluster and out of your service mesh to services that live outside of those boundaries.
So, like I said, my name is Christian, I'm a field CTO at Solo.io. At Solo we build tooling to help adopt and operate service mesh in the enterprise.
And then my co-presenter here — thanks, Christian, and thanks, Betty Jo, for setting this up. My name is Sandeep Parikh, cloud native advocate at Google Cloud, and we focus on helping developers, operators, and SREs learn how to use Istio and how to scale their deployments with Istio.
So why don't we step in? What we're going to cover today is: first, I'll lay out a little bit of a challenge around managing egress traffic, and then we'll walk through some examples.
Our challenge for today is: how do we get a stable outbound origin for egress traffic? What we're trying to do here is make sure that any traffic leaving our cluster — in this case, let's say a Kubernetes cluster — has a stable outbound origin IP or hostname that we control for downstream usage. To get into a little bit more detail there: why do we have to worry about things like this? Well, many deployments need to communicate with external applications or external systems.
Those downstream systems need to know where the traffic is coming from. In this basic example, we've got a pod that's communicating with an external application, and that traffic is going straight through right now. But for more security-minded use cases or solutions, we may need to know where that traffic is originating from, and we want that to be a stable origin point. There are many reasons why folks might want to do this for their deployments.
Obviously, chief among them is a secure approach, but specifically it could be for things like monitoring data exfiltration, monitoring for infiltration into other systems, or just generally for compliance. And most often, what we'll see is a firewall sitting in front of an external location or an external system, and the origin needs to be whitelisted in that firewall for traffic to get through.
So a couple of specific examples, as we mentioned: particularly around compliance, things like PCI-compliant deployments may require this type of whitelisting approach. Or there are just additional systems that live outside of your Kubernetes cluster — things like legacy applications or database infrastructure, especially in hybrid deployments.
So this is the basics of our deployment today, and, as I mentioned, the requirement that we have to satisfy is that all outbound traffic must originate from a stable source. Our secure application sits behind a firewall, and we must be able to whitelist that origin, or that source, in the firewall configuration. That's our challenge here: how do we get traffic from that pod, through a firewall, to the external application, keeping in mind that we want that source to stay stable?
One approach is a NAT gateway. This does solve for the stable outbound IP, or the stable outbound origin part: only the NAT instances in the NAT gateway need to be whitelisted by the firewall, and all pods just send their traffic through the NAT gateway. That satisfies those sorts of requirements, but it does introduce some other challenges. Now we've got to worry about the VM lifecycle and the NAT gateway management aspect, and we're getting very coarse-grained traffic controls.
Another approach would be to use managed NAT infrastructure. This has some of the same problems: it does offer us the stable outbound IP, but it still brings with it coarse-grained traffic control. Again, every bit of traffic leaving those pods — and imagine this is not just a single pod but a deployment with, you know, hundreds or thousands of pods — all that traffic has to go through the gateway.
That means we don't get a lot of flexibility and control over it, even though we do get the managed aspect of that service. There's also the approach of setting up a proxy infrastructure. Proxies do give us, again, that stable outbound origin or source, and they do give us some level of protocol control — so a little bit more fine-grained than a blanket NAT approach — but again you're introducing other challenges.
You've still got the VM and proxy lifecycle management, which is separate from your cluster infrastructure, so that's another thing to manage, and it's not very fine-grained: once we're past the protocol level, we still have a fairly coarse-grained component there. And then the last alternative would be an internal proxy. This is starting to get closer to what service mesh deployments look like, but with just a blanket proxy by itself.
We could solve for the stable outbound problem, or the stable origin problem, and we do get good levels of protocol control. But now we've got to worry about managing the deployment and configuration of this proxy, and in this example scenario we've got no control plane — there is no central management system for that proxy. So if you have hundreds or thousands of pods or services, controlling and configuring each one of these individual proxies introduces more of a management challenge even as it solves the origin problem.
There is also a community project for a static egress IP: a gateway that is pinned to a particular node, which you can configure to have an explicit static egress IP. As traffic in the Kubernetes cluster tries to communicate with external systems like databases, message queues, and so on, the traffic will be routed through the particular gateway that you specify and will take on the static IP that you configured. This is an interesting project, but it's in an alpha state.
Finally, there's the Istio egress gateway. Think of a service that lives in the cluster, talking amongst other services, but that may then need to make a call out to an external database or external cloud service. We want to have a stable egress IP, and using the Istio egress gateway is one way to solve that.
What we need to do first is tell Istio's service registry what that external service is and how to reach it, whether that's an IP address or a DNS entry or hostname.
We do that in Istio using the ServiceEntry resource. You can see here on the right-hand side we have an external service that runs at a particular IP, and we configure the service with a hostname. If we use that hostname inside the cluster, then Istio will understand what it is and figure out how to route the traffic to that service.
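As a sketch of what that looks like — the hostname and IP below are placeholders I'm assuming for illustration, not the exact values from the demo — a ServiceEntry for an external HTTP service could be:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-httpbin
spec:
  hosts:
  - httpbin.external        # virtual hostname used inside the mesh
  location: MESH_EXTERNAL   # the workload lives outside the mesh
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 203.0.113.10   # placeholder for the external VM's IP
```

With this applied, the host becomes part of Istio's service registry, so routing rules can reference it like any in-mesh service.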
Okay, the last bits of config we need in Istio: we have the ServiceEntry, which specifies what the service is and where it lives externally, to give Istio a heads-up about it. Next we look at configuring the egress gateway, where we specify the protocol, the ports, and the hosts to watch out for, and then finally the VirtualService that ties it together.
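A minimal sketch of that egress Gateway config — assuming Istio's built-in istio-egressgateway deployment and the same placeholder hostname as before — might look like:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway    # targets Istio's built-in egress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - httpbin.external      # only watch for this external host
```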
The second match clause here in the VirtualService says: if you're trying to route to that particular external service, and you are the egress gateway, then route normally to the external service, knowing that the ServiceEntry was created for it. That's what helps find the actual IP address and know that it's external to the mesh. So we have two different clauses: one for routing from inside the mesh to get to the egress gateway, and then one, once you're at the egress gateway, for routing properly outside of the mesh.
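Sketched as a single VirtualService — the hostname is a placeholder, and this mirrors the two-clause pattern from Istio's egress-gateway docs rather than the demo's exact YAML:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: route-httpbin-via-egressgateway
spec:
  hosts:
  - httpbin.external
  gateways:
  - mesh                    # clause 1 applies to sidecars inside the mesh
  - istio-egressgateway
  http:
  - match:
    - gateways: [mesh]
      port: 80
    route:                  # sidecar traffic goes to the egress gateway first
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:
    - gateways: [istio-egressgateway]
      port: 80
    route:                  # at the gateway, route out to the external service
    - destination:
        host: httpbin.external
        port:
          number: 80
```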
The demo that we're going to look at has a similar architecture. We have a very simple application — in the demo it's going to be the sleep pod from the Istio samples, which basically just starts up, sits there, and waits, and you can go in there and run commands and so forth. But then we're going to set it up so that the traffic is going through Istio's egress gateway and then to our external application.
Moreover, we're going to set up a firewall rule that says we cannot allow any traffic into this external application unless it's coming from a known IP — in this case, the external IP of the egress gateway. So we'll try to set all this up and run it in the demo next.
Okay, the first thing that I want to point out is: if you're getting started with a service mesh ahead of this video, at Solo we're building a tool called Service Mesh Hub, which is a management plane for your Istio installations — essentially multiple clusters of Istio — and it gives you the ability to quickly add extensions. So here we can see we have an Istio service mesh running; if I click on "install extensions," we can add additional pieces to our service mesh to increase its value.
So, for example, we can install Flagger, which is a canary analysis project from Weaveworks. We can install Gloo, which is an Envoy-based API gateway that nicely complements and extends Istio's capability with API-centric types of policies. And we can add additional things, including your own extensions that you might build for the mesh, or demos that you want to use to showcase capabilities of the mesh. Once you have one installed, like we do here, then we can go to the demo.
So let's cross our fingers that everything works out all right. We see we have our sleep pod running in our Kubernetes cluster, we have Istio running, and specifically we have enabled the egress gateway. Again, in the sample code we'll leave instructions for how to do all this and set it up. The first thing we have, on the bottom, is a GCP instance running with httpbin on it.
If we try to call it locally, we can see that we get the correct response: we call the httpbin headers endpoint and get the response we expect. If we try to call it from within our sleep pod, we'll notice that the traffic won't get through — it cannot get through, because we set up firewall rules that block any incoming traffic unless it's known traffic. So we get this error here.
Traffic cannot go directly from our sleep pods; it needs to go through the egress gateway first. So we'll create that firewall rule here using GCP, and once we have that firewall created, we can now set up the routing config in Istio, which I showed earlier: the ServiceEntry, the DestinationRule, the VirtualService, and the Gateway. We'll set up those four things now. You'll notice that we're setting up the ServiceEntry in this first step.
Throughout the rest of our routing we're going to use the hostname that we have virtualized inside the cluster. The service listens on a particular external address, but within the cluster the rest of the routing rules are going to use this hostname. Next, let's create our DestinationRule, our Gateway, and, lastly, the rules in our VirtualService, which direct the traffic from our sleep pod to the egress gateway and then from the egress gateway to the external service. So now that's all set up.
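For completeness, the DestinationRule in a setup like this can be quite small — this is a plausible shape assuming plain HTTP to the external host, not the demo's exact file:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: external-httpbin
spec:
  host: httpbin.external    # placeholder hostname from the ServiceEntry
  trafficPolicy:
    tls:
      mode: DISABLE         # plain HTTP out to the external service
```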
On the bottom left-hand pane we see a log for the sleep pod, and on the right-hand pane we see a log for the egress gateway. When we make our call to our external service, we should see the traffic in the access logs: the traffic should go through our sleep pod's proxy, then into the egress gateway proxy, and then, ultimately, we should see a successful return. Let's execute that command now, passing the hostname that we use throughout our Istio configs. We see the correct response on the bottom left-hand pane.
We should see an access log — okay, we see it on the right-hand side — saying that it went through the egress gateway. From the sleep pod, traffic was directed to the egress gateway, and you can see here in the access log from the egress gateway that it then went outbound directly to the httpbin ServiceEntry that we set up.
Okay, so to get even tighter control over this mechanism: what we did was rely on Istio's default rules and iptables manipulation to force traffic through the gateways. We can get even tighter rules and egress policy around what traffic is allowed to leave a pod by using Kubernetes NetworkPolicy, sending traffic out only directly to DNS or through Istio's egress gateway. In the example we're looking at, we're allowing the pods to talk to DNS as well as to any of the components in the Istio control plane.
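A sketch of such a NetworkPolicy — assuming the app runs in the default namespace and that istio-system has been labeled name=istio-system so the policy can select it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: default
spec:
  podSelector: {}             # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:                       # allow DNS lookups
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
  - to:                       # allow traffic to istio-system
    - namespaceSelector:
        matchLabels:
          name: istio-system  # assumed label, applied manually
```

Anything not matched by these rules — including a pod trying to reach the external service directly, bypassing the egress gateway — is dropped.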
Now, lastly, we'll point out a nice contribution — it lives in the istio-ecosystem repository, and a couple of different folks contributed to it — which is a way of capturing which requests inside the cluster, inside the service mesh, are trying to leave the mesh, so we at least get an understanding of what that external traffic is. In previous versions of Istio, that external traffic was automatically blocked from leaving the cluster; in more recent versions,
A
That's
now
open,
others
allowed,
but
you
know
that
creates
some
interesting
security
and
observability
challenges.
So
with
this
DNS
discovery
plugin,
what
we
can
do
is
we
can
sit
in
front
of
the
medina
server
and
figure
out
what
are
the
external
hosts
that
we're
trying
to
resolve
and
and
then
that
can
be
used
to
automatically
create
SPO
service
entries.
So
you
don't
have
to
manually,
try
to
figure
out
who's
using
what
and
when
and
manually
go
create
those
service
entry.
So with the DNS discovery plugin we can get almost a full traffic analysis, and be very explicit about what traffic we're allowing out of the mesh. With that, over to Sandeep for the what's-new in Istio. Thanks, Christian. We wanted to close out, before we take any questions, with sort of what's new in Istio. A few brief stats about Istio 1.2, which was released back around June 18: for 1.2 there were some major improvements across roughly 300 commits.
I think the most interesting part of this — and Christian mentioned it earlier — was the record: there were 78 individual contributors to that release. So there's a pretty solid long tail of community-driven participation from the ecosystem, which we're really excited about seeing. It's not just being driven by a few core contributors like Google, Lyft, IBM, and VMware; we're seeing contributions from Pivotal, Red Hat, and a host of other companies, as well as, like I said, a long tail of individuals.
That means, as we get into August here, if features aren't ready — if they haven't been tested or verified — they will not make the release cut, and that's important. We want to make sure that we're releasing the software on a predictable basis, but also that there's a certain bar of quality around each of these releases, and that's part of what went into this one.
A big part of that, in fact, was a fair amount of time that teams spent on the build, test, and release infrastructure and the underlying machinery there. What that means is a handful of new sub-teams were created within the working groups, around GitHub workflow, organizing the repositories, testing methodology, and build and release automation. What that's giving us feeds right back into that quality and that timing.
So at any given point in any part of the release cycle, we know where the quality is. We know how many tests are passing and how many are failing on a much more regular basis, and we have a much more predictable release and build pipeline as well.
So all of this internal investment is showing up as a general improvement in quality across the board, which is great. And then there were a lot of new features, but a couple that I think were hotly requested by the broader community were things like global log levels for the data plane and control plane — instead of having to set individual log levels for the proxies or the control plane components, we have one consistent entry point there; the ability to validate your Kubernetes environment using the istioctl command line tool that's included; and then another interesting fix for annotating services and eliminating the need for containerPort, which we can talk about a little bit later.
A
Some
other
big
ones
were
features
that
went
from
maintenance
table,
specifically
sm
out
of
ingress
that
moved
from
beta
to
stable,
as
well
as
distributed
and
service
tracing,
and
then
things
that
move
from
alpha
to
beta,
where
components
like
certificate
management
on
ingress,
sorry,
feeding,
kind
of
an
SMI
approach
and
then
some
config
oriented
components
like
resource
validation
and
processing.
Cali,
so
those
are
some
of
the
really
big
highlights
specifically
around
traffic
and
that
original
annotation
for
including
a
thumb
for
start,
including
inbound
course,
food
snakes.
A
Releasing
a
you,
know:
hurry
deploying
the
service
mesh
a
little
bit
easier
because
we're
not
having
folks
having
not
foursome
to
change
some
of
the
animal.
If
re
deployed
we're
just
simply
adding
an
optional
annotation
there
and
we'll
try
to
make
that
a
default
as
we
get
further
out
and
then
another
a
few
other
highlights
things
like
improved
locality,
based
routing
and
multi
clusters
of
limits
and
some
other
components
like
we
added
the
ability
to
configure
the
DNS
refresh
rate
or
the
ischial
proxy
to
produce
them
load
on
the
equipment.
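If memory serves, that knob was exposed as an install-time value; something along these lines, with the exact key and default treated as an assumption to check against the 1.2 release notes:

```yaml
# Helm values override for an Istio 1.2 install (assumed key)
global:
  proxy:
    dnsRefreshRate: 300s   # raise the refresh interval to reduce DNS load
```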
So with that, Christian and I want to thank you for joining us. You can always send us questions or comments on Twitter: Christian is @christianposta and I'm @crcsmnky. For any additional information, you can obviously learn more by going to istio.io, and we've got links in there for Google Cloud, for Solo, for Gloo, and specifically for Service Mesh Hub, as well as the link to the demo. That's off my GitHub repo right this second; we may move it later on, but we'll definitely post an update.
Yeah, so the question is: are the VirtualService, Gateway, and DestinationRule also deployed to the istio-system namespace, or to the namespace in which the application is deployed? Great question. It is actually the latter: the VirtualService, the Gateway, and the DestinationRule were deployed in the default namespace, along with the sleep pod.
Going once, going twice... thanks, everyone, for joining, and thank you, Sandeep and Christian. For folks who enjoyed this, we'll be doing this again, roughly following along with the Istio release cadence, so look for it on the Istio YouTube channel. We'll post probably a few weeks before it's ready, but we're looking to do another one in October. So we'll see you all online again soon. Thanks, everyone. Thank you.