From YouTube: Hoot [Episode 3] - HashiCorp Consul Service Mesh
Description
Hoot is a livestream by engineers talking about and trying out new technology.
Get to Know Service Mesh
We kick this off with a series on service mesh - each episode will look into a different service mesh provider.
* Istio
* Linkerd
* Consul
* Community requests -- suggest a service mesh
* Compare and contrast the different service meshes, explain their unique features and how to choose which one(s) to use for your applications.
Questions or suggestions?
Join the community slack https://slack.solo.io
DM us on Twitter https://twitter.com/soloio_inc
All right, nice. So this is our third episode of Hoot, where we review service meshes. I'm from engineering at Solo.io, and let's start today's episode.
So in the previous episodes — they're all recorded, and you can watch the previous episodes on YouTube — we had Christian Posta talking about Istio, and Rick was talking about Linkerd. The goal of this series is to give the community some knowledge and insight about the different service meshes out there and their trade-offs.
You know, what they're good for — and we do a quick demo, so you kind of get a feel for how to run them. On this episode we'll focus on Consul Connect, and I'll be keeping an eye on the chat.
If anybody has any questions at any time, I'll try to answer them as they come up. So first, a bit of an introduction to Consul. What's Consul? Consul is a tool written by HashiCorp, and, you know, you can check out their website — they have excellent documentation, which we'll review shortly.
HashiCorp built Consul as a tool that was originally designed to help organizations transition monoliths into microservices, and the way it does that is by providing facilities that were easy in the monolith world but were missing in the microservices world. So initially, as Consul was developed, it had two main features: service discovery — which also includes the registry, health checking, all that — and
a key-value store. Let me go into those a bit. Originally, when Consul emerged, you had just transitioned from your monolith to microservices, and you probably had an auto-scaling group on Amazon with EC2, and you built your immutable images in CI — essentially a similar concept to the approach today in Kubernetes, but with VMs.
So once you break up your monolith, there are certain things that used to be easy that are now harder. For example, what used to be a function
call to a different module in the monolith is now a network call — and the question is, a network call to where? That's the question the service discovery feature in Consul helps answer. So, for example, service A could query Consul about the location of service B, and Consul would provide service A with the healthy instances of service B, right?
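To make that lookup concrete, here's a sketch of what a query against a local Consul agent looks like, using Consul's HTTP API and DNS interface (the service name `counting` and the default ports 8500/8600 are assumptions for illustration; this needs a running agent):

```shell
# Ask the local agent for the healthy instances of the "counting" service
curl 'http://127.0.0.1:8500/v1/health/service/counting?passing'

# Or use Consul's DNS interface — only healthy instances are returned
dig @127.0.0.1 -p 8600 counting.service.consul
```

Service A would run a query like this (or simply resolve `counting.service.consul`) instead of hard-coding addresses.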
So the question is, how does Consul know what the instances of service B are? That part is service registration, and the way that works is actually a very similar concept to a sidecar container, but in the world of VMs: each VM would have the Consul agent installed on it, and the Consul agent would report to the Consul control plane — essentially the Consul masters — that a new instance has come alive and is registering to be available for service discovery. Good.
B
It
will
go
back
to
the
easier
way
of
doing
things,
and
you
can
hear
more
about
that
in
the
first
episode
by
Kristian,
pasta,
so
that's
service
registry
discovery,
hell
shakes
console,
will
actually
monitor
everything
and
make
sure
each
application
could
get
to
the
instances
of
the
service.
It
depends
on
now.
Now, the second functionality that Consul provides is a key-value store, similar to etcd, for example. In the past, configuration was, you know, a single configuration file consumed by the monolith.
Now, where everything is distributed, the configuration needs to be handled as well, right? So Consul allows you to have a single key-value store that the operator can update the configuration in, and that propagates to the whole Consul cluster.
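As a sketch, assuming a running agent, updating and reading a value with the `consul kv` CLI looks like this (the key names are illustrative):

```shell
# Operator writes a configuration value
consul kv put config/dashboard/theme dark

# Any node in the cluster can read it back through its local agent
consul kv get config/dashboard/theme

# List everything under a prefix
consul kv get -recurse config/
```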
Now, the way Consul is architected, you have a set — you know, usually between one and five — of master nodes, the servers.
So to get configuration values, which are replicated, an application will query its local agent and see how it needs to be configured. Right — so far, if anybody has any questions about that part... you know, this is kind of an intro to Consul; there are a lot of videos that go into detail on this. I just wanted to keep the introduction brief, and if anybody has questions, feel free to ask. All right, so.
Recently Consul added a feature called Consul Connect, which will be the main focus of our talk, and depending on time we'll see how deep we can go into it. The idea with Consul Connect is to solve an additional problem, which is identity, security and segmentation. So the problem is that, you know, service A is allowed to talk to service B, but not
B
To
service
C
right,
if
we'll
take
a
classic,
you
know
three-tier
application
where
you
have
a
web
layer
talking
to
the
business
player
and
talking
to
the
storage
layer,
you
don't
want
the
web
layer
talking
directly
to
the
storage
they're
ready
want
to
segment
your
network
such
that
services
are
only
allowed
to
talk
to
the
entity.
They're
allowed
to
talk
to
right
and
that's
what
console
Connect
sauce
so
console
Connect
is
the
service
mesh
feature
of
console
and
it
builds
on
top
of
the
service
registry.
It works inside Kubernetes or outside Kubernetes, and it has some integration with service discovery such that Consul services outside the cluster can connect to services inside the cluster and vice versa. It's pretty lightweight to configure, and we'll see a demo of how we install and configure it on this little minikube I have set up here.
In a second we'll start the demo. We will install Consul into Kubernetes via the official HashiCorp Helm chart, and you will see that it deploys the Consul agent as a DaemonSet, deploys the Consul masters as a StatefulSet, and creates a sidecar injector to inject the Consul sidecar — so it forms a mesh. Now, the way the Consul mesh works...
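The install itself is roughly the following (a sketch: the chart location and values have changed across versions, so check the current HashiCorp docs for the exact flags):

```shell
# Add the HashiCorp chart repository and install Consul with Connect injection enabled
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install consul hashicorp/consul --set connectInject.enabled=true

# Agents run as a DaemonSet, servers as a StatefulSet
kubectl get daemonset,statefulset

# Reach the built-in UI
kubectl port-forward service/consul-server 8500:8500
```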
...you can say, you know, service A is allowed to talk to service B, and you need a language to express that. The identity part is solved using mTLS and client certificates, very similar to how it's done in Istio and Linkerd.
So, as we'll see in our demo, when the pod is provisioned, a sidecar proxy is also provisioned with the pod, and Consul has a few ways of integrating an application into the Consul mesh.
First, you can use the Consul native proxy. It's a proxy that they wrote in Go, it comes out of the box, and its main advantage is that it's very easy to use, right? It just works. And recently they also added support for the Envoy proxy — I believe starting with the 1.5 release.
So there's also built-in support for running Envoy as a sidecar proxy. When Consul Connect was first released, we had our own Gloo Envoy integration that provided Envoy as a sidecar — in addition to the initial proxy, there was the Gloo Envoy integration from us at Solo — and now that they have the built-in support, you can just use that. The other option is to write your own proxy integration, right? If you like HAProxy, or if you like nginx — they don't have an official
B
You
know
nginx
integration,
but
they
have
the
documentation.
You
can
follow
and
integrate
your
own
proxy,
so
they're
very
flexible.
In
that
sense
right
they
can
tell
you
you
know.
If
you
follow
this
task
or
this
spec,
you
can
integrate
your
own
proxy
into
the
mesh,
and
you
know
through
that
proxy
that
your
organization
is
familiar
with
that
you
have
knowledge.
This
could
be
a
nice
option
for
you
that
you
don't
have
to
learn
something
new
I'm.
There's also a native integration option: you essentially need to use their library to make connections, and this library will, you know, do the Consul Connect heavy lifting and make sure your application is in the mesh. Now, using a library comes with its own trade-offs — mainly, that's kind of why service mesh as a product was started to begin with, right? The service mesh precursors were things like the Finagle library — you know, a JVM library — and
it's, you know, compiled into the application — to change it, you have to redeploy the application. Unlike with the sidecar model, where you just have to redeploy the pod: there are no code changes, and you don't need to redeploy your application binary. So there are pros and cons, but with Consul you can use any option you choose, essentially whatever works for you. You know, you might decide that managing sidecar proxies is too operationally complex, and you have Go infrastructure anyway — you might as well use their Go client library to get those mesh features.
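For reference, a Connect-native Go integration looks roughly like this sketch using the `github.com/hashicorp/consul/connect` package (a sketch, not the definitive usage — API details may differ by version, and the service name is illustrative; it needs a local Consul agent to run):

```go
package main

import (
	"log"
	"net/http"

	"github.com/hashicorp/consul/api"
	"github.com/hashicorp/consul/connect"
)

func main() {
	// Talk to the local Consul agent.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Join the mesh as a Connect-native service; the library fetches
	// and rotates the mTLS certificates for us.
	svc, err := connect.NewService("my-service", client)
	if err != nil {
		log.Fatal(err)
	}
	defer svc.Close()

	// Serve with the mesh TLS config: clients must present a valid
	// Connect certificate, and intentions are enforced.
	server := &http.Server{
		Addr:      ":8443",
		TLSConfig: svc.ServerTLSConfig(),
	}
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```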
So that lets you connect your applications. All right, these are the options that are available to you for the mesh. In this demo we'll deploy Consul into our minikube here, and we'll look at all the components. In this demo Envoy is used as the sidecar, and the way it works is that when the pod is started, an Envoy sidecar is injected into the pod.
Each proxy gets a certificate. The certificate never leaves the node — the local agent manages it, so it never travels over the network — and this certificate will be used to establish identity. Now, every time the application goes upstream, it will go through the proxy, and we'll see how that's done in Consul; it's not exactly the same way it's done in Istio and Linkerd — namely, there's no iptables magic. The application's outgoing traffic goes through its proxy, then through another proxy — the sidecar on the destination side — which will end up in the service, which is pretty much how you'd see it in Linkerd and Istio as well. All right. So unless there's any question, I'm gonna start deploying — and in this part we like to cross our fingers that everything will work as expected.
So I have here a little Jupyter notebook with all the commands, and I'm essentially following the Consul Kubernetes guide, which is here — I just wrote all the commands in a JupyterLab so it's reproducible for me to just run them. But if you want to follow along, you can just go to learn.hashicorp.com and search for the Consul Kubernetes minikube guide, and that's essentially what we'll be doing. We'll go a bit more in depth than this document, but you can use it as a reference.
Let me make this a bit bigger here. All right, so we see that the Consul agent was installed as a DaemonSet; we see there is a webhook to inject the Consul sidecar — we'll get to that in a second — and you don't see it here, but if we look at StatefulSets we'll see the Consul server deployed as a StatefulSet.
Actually, you do see it here — never mind. All right, so we have Consul deployed to the cluster. Everything seems to be running fine; all the pods are in a steady state. Good. So now we can kubectl port-forward to access the Consul UI — Consul comes by default with a nice UI where you can see what's going on. All right, so this should be the UI... all right. So this is the Consul UI, and as we deploy the services, they will register into Consul and will appear in the services screen.
Let's go back to the notebook and deploy service A. It's a service from their demo — the counting service — and there will be two services. This is the back-end service: it holds a counter, and we'll connect it to a dashboard service that displays the counter. Do note that, in addition to deploying the service itself, we're also deploying a service account. In Kubernetes, service accounts are used as a source of identity, so having a unique service account is important, because that way you can have a unique identity for each service.
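The manifest for the counting service, then, looks roughly like this (a sketch based on the HashiCorp demo — the image tag and port are assumptions; the `connect-inject` annotation is what triggers the sidecar webhook):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: counting            # unique identity for this service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: counting
spec:
  replicas: 1
  selector:
    matchLabels:
      app: counting
  template:
    metadata:
      labels:
        app: counting
      annotations:
        # Ask the webhook to inject the Connect sidecars
        consul.hashicorp.com/connect-inject: "true"
    spec:
      serviceAccountName: counting
      containers:
        - name: counting
          image: hashicorp/counting-service:0.0.2
          ports:
            - containerPort: 9001
```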
C
B
B
B
The
second
container
is
the
console,
connect
and
voice
ID
card,
so
this
sidecar
will
it
it's
a
sidecar
that
the
communication
goes
through
alright
and
the
last
container
is
the
life
cycle
sidecar
and
that's
responsible
for
registering
an
average
of
certain
things
from
the
console
local
agent.
That's
to
help
console
maintain
its
registry.
Alright.
So far so good — everything is ready and nothing crashed. Thank you, demo gods. And we can go on and continue deploying the second service. Well, before we do that, let's just look at the Consul UI. As you can see, the counting service and the sidecar proxy appear in the services screen, and the health checks are passing, so everything's good, as expected.
In order for the mesh to work — unlike Istio and Linkerd, they don't do iptables magic. What they do instead is, when you deploy an application, you specify the upstream services it needs to access. So what this annotation here means is that we want to make the counting upstream service available to this pod, and we want to expose it on localhost on port 9001. So it's a bit confusing:
that's not the port of the counting service — that's the local port on which the counting service will be available to the dashboard service. And you can see here that the counting service URL is http://localhost:9001 — and do notice it's http, not https, because the dashboard service connects as if there's no TLS at all.
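The dashboard's pod spec, then, looks roughly like this fragment (a sketch; the env var name follows the HashiCorp demo and is an assumption):

```yaml
metadata:
  annotations:
    consul.hashicorp.com/connect-inject: "true"
    # Expose the counting upstream at localhost:9001 inside this pod
    consul.hashicorp.com/connect-service-upstreams: "counting:9001"
spec:
  containers:
    - name: dashboard
      env:
        - name: COUNTING_SERVICE_URL
          value: "http://localhost:9001"   # plain http — the sidecar adds the TLS
```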
So now we have the dashboard service deployed and connected to the counting service. All right, we'll proxy to the dashboard service and open a tab for it, and as you can see, we are connected to the dashboard service, and this counter is retrieved from the counting service. So as we refresh, you know, the counter increases — and I think there's some JavaScript error that increases it regardless. All right.
So let's do a quick recap of what we have so far, before we go to intentions. We have the Consul servers — they do leader election; you usually run one, three or five servers, so if one of them fails, or if there's a network partition, the cluster is still able to function. If we look at the pods,
this is our single agent — it's this one here — deployed as a DaemonSet, one on each Kubernetes node, and then the two microservices, the counting and the dashboard. Now, let's kind of replay how it works. When the dashboard service comes alive, we create the deployment, the deployment creates a pod, and before the pod is admitted into Kubernetes, it's injected by this webhook. The webhook adds two sidecars: the Envoy Consul sidecar — the proxy sidecar — and the lifecycle sidecar to manage registration.
Now, this pod is configured with the annotation that tells it to make the counting Consul service available locally at port 9001, and that means that when the dashboard service connects to localhost:9001, it will go upstream to the counting service — but it will be part of the mesh, which means the connection will have the identity of the dashboard service, and that's expressed in TLS certificates.
There is a TLS handshake. Usually in TLS — you know, whenever you visit with your browser any website that has TLS, which is almost all of them today — the server presents a certificate that asserts its identity. In this case — and again, the same with Linkerd and Istio — what's used is mutual TLS, which means that the client also presents a certificate. Now,
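That "both sides present certificates" part maps onto plain TLS configuration. Here's a minimal sketch in Python's standard `ssl` module — purely illustrative, since in Consul Connect the CA and leaf certificates are issued and rotated by the agent, not managed by hand like this:

```python
import ssl

def make_mesh_server_context(ca_file=None):
    """Server-side mutual-TLS context: unlike a typical web server,
    the client MUST present a certificate signed by the mesh CA."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.verify_mode = ssl.CERT_REQUIRED   # reject clients without a certificate
    if ca_file:
        # Trust only the mesh CA (in Consul, the Connect CA)
        ctx.load_verify_locations(cafile=ca_file)
    return ctx

ctx = make_mesh_server_context()
```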
The server looks at the certificate of the client and verifies it's indeed signed correctly by Consul, so now it trusts that the client is indeed who it says it is. Then it contacts the local Consul client — the Consul agent — and asks: is the receiving service allowed to receive this request from this service, right?
In this case, when the dashboard makes a connection to the counting service, the counting service will go to Consul and ask: "the dashboard just opened a connection to me — am I allowed to accept it?" If it's allowed, the connection will go through, and if it's not allowed, the counting sidecar proxy will terminate the connection. And specifically with Envoy, the way it works is that they leverage Envoy's external auth — the TCP external auth filter.
Okay, we're good, all right. So let's show an example. Now, an intention in Consul is very simple to read. All it is, is `consul intention create` — create a new intention — and we want to deny when dashboard talks to counting, right? So intentions are the mechanism — the language — in which you express policy in Consul: you express that you intend for a certain service to talk to another service, or not.
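The CLI session for this part is roughly the following (a sketch against a running agent; note that in newer Consul versions intentions are also managed as `service-intentions` config entries):

```shell
# Deny connections from dashboard to counting
consul intention create -deny dashboard counting

# Would a connection be allowed right now?
consul intention check dashboard counting

# Remove the intention again (can also be done from the UI)
consul intention delete dashboard counting
```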
Now, intentions can be, you know, default allow-all with explicit denies, or the other way around — default deny-all with explicit allows. In this simple case it's allow-all except the deny, and we'll demonstrate the deny right now.
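Conceptually, resolution works like this tiny sketch (my simplification — real Consul also has wildcard intentions and precedence rules):

```python
def connection_allowed(src, dst, intentions, default_allow=True):
    """Resolve src -> dst: an explicit intention for the pair wins,
    otherwise the cluster-wide default applies."""
    for source, destination, action in intentions:
        if (source, destination) == (src, dst):
            return action == "allow"
    return default_allow

# Default allow-all, with one explicit deny — the demo's setup.
intentions = [("dashboard", "counting", "deny")]
```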
So let's apply this one. All right, you see, we created the intention — dashboard to counting — and that will be denied. And if we go to the counting service, which... I lost its tab somewhere here... counting, counting, counting — did I close it?
Everything is healthy here, because it's not that the service is not healthy — it's just that it's not allowed access. And you can see in the intentions screen that there is a deny intention between the source, dashboard, and the destination, counting. All right. And once we delete this intention — this time I'll delete it through the UI, but I could just as easily do it through the command line — confirm delete... again, everything looks fine here, and when we go back, we see that it's connected again.
The intentions are replicated from the Consul servers to the Consul clients — the local node agents — so a call to check whether a connection is allowed by the intention policy is a local call, right? It goes from the pod to the node agent, which are on the same node, so that call doesn't go, you know, outside.
So it's a very fast call, and it's also done at the TCP level — not on every request, but rather on identities, you know, service A to service B. So let's say you have an ongoing HTTP connection: the intention check will only be performed once, as the connection starts, right at the TLS handshake.
It's a yes-or-no check, right? Consul additionally has L7 features, for canary rollouts and so on — we're not going to cover those today, it could be a lot for one time, but you can read all about them in the Consul documentation; they have a whole section on Connect and what you can do with it, yeah. But the intentions part is an L4, TCP-level feature, all right.
Other meshes give you a more flexible policy language where you can describe in more detail what things can do and what things cannot do. The Consul approach is that you probably don't need all of that: most of the time, for most uses, you probably just want to say, you know, can this service talk to that service — yes or no. Yeah, all right.
So we can use Consul to do network segmentation. In that regard, we kind of looked at how the injected pods work, and we saw that it's very similar to the existing Consul model today of having a local agent that connects to the Consul server, caches all the data, and then feeds information to the applications running on the same node. So essentially, when a pod starts, it's injected with a sidecar.
In Istio they're starting to support Mixer-less deployments; with the Mixer model, on every call the application makes — on every connection — there's another network call, because the policy check goes over the network. So that's the kind of advantage of the simpler intention model: you only need to do policy once, as the connection starts, and not on every request — and in addition, the policy check is local.
So if you're already using Consul, extending it to the mesh might be just the easier thing to do, and you leverage your existing knowledge as you transition into microservices. Or even if you want a service mesh without microservices — you just want to encrypt stuff going over the network, and you don't want to move your existing deployment structure into Kubernetes concepts — then Consul can also be a great option for you.