Description
*This Hoot Episode was originally scheduled to run on Feb 9 but has been moved to February 16. Apologies for any inconvenience.*
*Join Scott Weiss as he discusses the problem of multi-cluster networking in Kubernetes and reviews a few different existing solutions (ours with Gloo Mesh, Istio's native solution, and Intuit's Admiral).*
About us https://www.solo.io
Questions? https://slack.solo.io
Code Samples: https://github.com/solo-io/hoot
Suggest a topic to cover here: https://github.com/solo-io/hoot/issues/new?title=episode+suggestion:
Okay, hi everybody. Welcome to this episode of the Hoot. I am Scott Weiss, an architect at solo.io, and I will be diving into various approaches to multi-cluster networking using Istio and Kubernetes. Specifically, that means: how do pods in separate clusters communicate with each other over the network, and what options does a service mesh open up for us?
So when you have a service mesh installed, traffic flows through a proxy. The proxy will match the traffic based on the host header presented in an HTTP request, and use that to apply policy at the HTTP level before routing it on to the original destination, where the pod originally intended to send the traffic. This is all transparent to the pod itself.
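As a concrete sketch of what "policy at the HTTP level" can look like, here is a minimal Istio VirtualService that matches traffic by hostname and applies a timeout and retries before routing it to the original destination. The `reviews` service and the values are illustrative, not from this talk:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
  namespace: default
spec:
  # Matched against the Host/authority header the client presented.
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - timeout: 2s
    retries:
      attempts: 3
      perTryTimeout: 500ms
    route:
    - destination:
        host: reviews.default.svc.cluster.local   # the original destination
```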
That is just a brief overview of how communication works within a single-cluster service mesh. Are there any questions on that so far before I continue? It should be pretty straightforward. No? Okay!
So let's move on then, and let's talk about traffic across Kubernetes clusters. Kubernetes does not provide a native way for pods to communicate across clusters, even though each cluster has its own service discovery data.
This talk is not going to go into that, because we are focusing on things that you can run on, let's say, vanilla Kubernetes, or Kubernetes that just has components installed on top of it. I do believe that Cilium has support for an overlay network, but we're still not going to get into that right now.
One of the tools that a service mesh gives us is the ability to extend DNS. This allows us to generate new addresses for services running in remote clusters.
These are the building blocks that can be used to create this multi-cluster, cross-cluster traffic. At Solo we use the term federation for this; it's the name of the feature in our project Gloo Mesh, which provides this functionality through the use of a high-level CRD. But we're going to talk more about implementation here.
There are certain opinionated assumptions that the Admiral model makes, but the architecture is very similar to what we do in Gloo Mesh as well.
Gloo Mesh actually provides several different reference architectures, depending on whether the network is segmented and whether the user wants shared trust, which means sharing root certificates across clusters, or limited trust, which enables clusters to communicate by terminating their local TLS, originating a new TLS connection on the outbound, terminating that TLS on the receiving side, and then originating a new mesh-local connection.
It also requires no additional configuration, just some installation steps, in order to support this architecture. The downside is that there's no way to address services running in a specific cluster. So if I have service A running in clusters one and two, there's no way to specifically address cluster two's service A versus cluster one's service A: all the endpoints are aggregated. Istio does provide a mechanism to prioritize the local cluster.
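That prioritization mechanism is Istio's locality-aware load balancing, enabled per destination. A minimal sketch, assuming a hypothetical `reviews` service; note that locality load balancing only takes effect when outlier detection is also configured:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-locality
  namespace: default
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true      # prefer endpoints in the caller's own locality/cluster
    # Locality-aware load balancing only activates when outlier detection is set,
    # so that traffic can fail over to remote endpoints when local ones are ejected.
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```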
With ten clusters, that's nine kubeconfigs that have to be distributed to each of the ten clusters, so that each one can access the nine remote clusters it's not running on. And the last piece is that it assumes a shared trust model, so that when a pod makes a request through its sidecar proxy to another pod in another cluster, they're using root certificates that are signed by the same CA, and that allows them to trust each other when they declare their identity:
"I am service A in namespace B." And again, that's why the preservation of service names is important in Istio: the identity is assigned to a service, and there's no concept of a cluster-based identity yet. So those are just some of the advantages and constraints of that model. In Gloo Mesh and in Admiral, we have a bit of a different approach, where, to Istio, the multi-cluster configuration is opaque.
So let me go back to my slide here. There are a few advantages to this. One is that it allows us fine-grained control of the services being addressed, so we can select not only by name and namespace but by cluster as well.
I apologize here; these slides are, admittedly, still in progress. I'll have these slides cleaned up and presentable on the Hoot repository later today.
The next thing that we do is, depending on the trust model, we may set up a gateway that does TLS passthrough: it just passes the connection through to the back end. That's when you have shared trust, so that the receiving proxy on the destination service itself can verify the identity of the downstream client. Or we can use a virtual service and a gateway.
This is currently only working for HTTPS, but we can actually terminate the trust domain between the local service and the local egress.
A downside, or a pain point, depending on how you look at it, is that additional configuration is required in order to apply policy. Because we're creating new DNS names, we'll need new virtual services and destination rules that map to these external services in order to have the policies we want applied.
I'm not sure I understand this question; maybe you could provide some more context. There's just a list of different features, and it mentions Thanos here, Fluent Bit, Fluentd. In general, a service mesh produces a tremendous amount of data that can be used to inform a system like Prometheus. You have all of these sidecars, as well as your gateway; all your proxies are generating data, which can then be scraped by a Prometheus/Thanos type of solution, and this can become part of a company's monitoring stack.
This is also something that Gloo Mesh provides. Let me see if I have any other questions here.
Thank you, Lucas. Okay, so let's just clear that away for the moment.
And let's take a look at how this is actually being configured. What does the architecture actually look like? We have Istio in one cluster and Istio in another cluster, and they don't actually know about each other. And I have a service here, or let's call it a pod, or a workload.
That workload lives in a remote cluster, under this remote Istio here. So what do we actually need to do in order to configure them to communicate? The first thing that we need here is a service entry. The service entry is going to allow us to resolve requests to a pod, to something that's external to this instance of Kubernetes.
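A sketch of such a ServiceEntry, assuming a hypothetical `reviews` service in a remote `cluster-2` reachable through that cluster's ingress gateway. The hostname, address, and service port are illustrative; 15443 is Istio's conventional cross-cluster gateway port:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: reviews-cluster-2
  namespace: default
spec:
  hosts:
  - reviews.default.cluster-2      # new DNS name for the remote instance
  location: MESH_INTERNAL
  ports:
  - number: 9080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  # The remote cluster's ingress gateway address (example value);
  # traffic to the new hostname is sent here on the cross-cluster TLS port.
  - address: 203.0.113.10
    ports:
      http: 15443
```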
This is a logical representation of this service living in that cluster. The name would be more like this: the service, then the namespace, then the cluster, something like service.namespace.cluster-2.
Now, depending on which trust model you have, that gateway may be configured as a TLS passthrough. In that case, it just looks at the SNI data that comes in on the request and forwards it. It rewrites the cluster name, so it rewrites it to something that we have locally.
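This is roughly what Istio's standard cross-cluster gateway looks like in the shared-trust model. `AUTO_PASSTHROUGH` tells the gateway to route on the SNI value without terminating TLS, so the destination sidecar can still verify the client's identity; the port and hosts below are the conventional values, so adjust to your install:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 15443
      name: tls
      protocol: TLS
    tls:
      # Forward based on SNI without terminating TLS; mTLS is terminated
      # by the destination sidecar, which verifies the downstream identity.
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
```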
There's a question here: is it okay to use a service mesh for sending white-box traffic, that is, sending white-box traffic through the mesh?
Are you saying that you want to route the traffic for metrics and logs through the mesh itself? So, for example, we want to do Prometheus scraping: we want Prometheus to have a sidecar, and then it scrapes through the sidecars to the application itself. I'm not a hundred percent sure I understand the question, I'm sorry; maybe you can provide some more context there. I see another question: will the Istio implementation require a sidecar filter to rewrite the host?
No. So let me explain how the Istio flat-network, or EDS-based, solution works. Here, let me write up a diagram. So let's say this is our egress-based model.
Let's say you wanted to achieve the same thing in the other model. You would want to basically create a traffic split, or something that we call a failover service (I won't get too much into how it works), but essentially you would want to abstract it, because this DNS name here resolves to a specific cluster.
You would abstract over both of these, and this way we can also specify priorities and weights for these different instances of the service running in different clusters.
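One way to express such a split in plain Istio is a VirtualService that weights the local service against a federated hostname created by a ServiceEntry. All names and weights here are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-split
  namespace: default
spec:
  hosts:
  - reviews.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster.local   # the local cluster's instance
      weight: 80
    - destination:
        host: reviews.default.cluster-2           # federated hostname (via ServiceEntry)
      weight: 20
```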
That helped, but not exactly: not Prometheus scraping metrics from Istio itself; they're asking whether it's okay for the service mesh to carry white-box traffic in general. In general, you should be able to send whatever you want through the mesh, including having Prometheus do its scraping through the mesh.
I haven't actually tested this myself, injecting Prometheus with a sidecar, but I would assume it works as long as the mTLS options are set up properly, so that when you scrape a service, you scrape it with mTLS enabled to match that service. And when you scrape the proxy, the proxy's metrics will not be served via HTTPS; they're served from the admin port that's on there. So they're actually going to be served based on how the proxy is configured to expose them.
Going back to this thing from before: when Prometheus goes and actually scrapes a sidecar, it needs to know it's scraping a sidecar. It may have a different configuration for doing so than for scraping the pod, which goes through the sidecar. It's just something to be aware of, but you should be able to do that. Hopefully that answers the question.
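A sketch of what that distinction can look like in a Prometheus scrape configuration. This assumes the Envoy sidecar's Prometheus endpoint is on port 15090 at `/stats/prometheus`, which is the Istio convention but may vary by version, and that application pods opt in via the usual `prometheus.io/scrape` annotation:

```yaml
scrape_configs:
# Scrape the Envoy sidecars directly: plain HTTP, Envoy's own stats endpoint.
- job_name: istio-proxies
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_name]
    regex: istio-proxy
    action: keep
  - source_labels: [__address__]
    regex: ([^:]+)(?::\d+)?
    replacement: $1:15090        # Envoy Prometheus endpoint (version-dependent)
    target_label: __address__
# Scrape application pods on their normal metrics port (traffic passes the sidecar).
- job_name: app-pods
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    regex: "true"
    action: keep
```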
So I think that should cover this architecture overview. If people are interested in seeing code, I can definitely prepare that for a future Hoot, and we can do a little bit more of a deep dive. I was hoping to demo this; let me just see if my cluster is set up.
So I have some pods; I really just have nothing installed. I'm going to run a script that's going to set up my cluster rather quickly. Just give me a minute here.
And we'll see that Gloo Mesh will automate this setup for us. Again, the real purpose of this is not so much a demo of Gloo Mesh and more an understanding of how it's working under the hood to orchestrate this. But of course, you know, we can't recommend Gloo Mesh enough, and it really deserves its own Hoot.
So I'm having a little issue here, maybe.
"Sending generic logs from applications in cluster one to cluster two will generate a lot of data. Is it okay to use a service mesh?" I see. So it's true that there will be a lot of metrics generated from having a lot of traffic. You can scope down what Prometheus is scraping: I believe in the scrape configs there are some relabel configs that you can use to constrain what's collected, if you're worried about blowing up Prometheus.
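For example, a metric relabel rule like the following (a sketch; the regex is illustrative) drops the verbose raw Envoy series while keeping Istio's aggregated `istio_*` metrics:

```yaml
scrape_configs:
- job_name: istio-proxies
  kubernetes_sd_configs:
  - role: pod
  metric_relabel_configs:
  # Drop high-cardinality raw Envoy series after scraping, before storage.
  - source_labels: [__name__]
    regex: envoy_.*
    action: drop
```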
"Explain more about Admiral; have you worked with it?" I've looked at it briefly.
What Admiral is really taking care of is automating the creation of those service entries. It will create the service entries for you automatically, based on a CRD that you create, and I believe it will also configure the ingresses for you automatically. However, I don't think that it handles the removal of service entries. It's not a robust Kubernetes controller, let's say. I believe a lot of things are hard-coded, and it's designed specifically around Intuit's use case.
So it's not necessarily addressing certain things that will matter to some organizations, like the ability to garbage-collect stale service entries and things of that nature.
Unfortunately, I think I will have to save the demo for another video; I'm having some trouble getting the environment spun up here. I can try to do it manually, but I think it's going to take too long.
"So Gloo would be a more configurable option?" Yes, Gloo is definitely going to have more options. It's really built as a robust multi-cluster, multi-mesh management plane, so it's designed to address all these use cases: various deployment scenarios, using certificate authorities whose certificates are properly signed rather than self-signed, things like that.
Any other questions? And I promise there will be a follow-up demo. Actually, there are demos that you can already get for federation, and I'll make sure to link those on the GitHub as well as provide them on any social media where this is linked.
At least let's view the tutorial on setting it up, and that'll be... here we go. Here's multi-cluster.
Like I said, we refer to this feature as federation, so I'll just scroll down. What we do here is, once you have Gloo Mesh installed, you'll have two clusters; they both have Istio installed on them, and one of them will have Gloo Mesh as well, and this will be our management plane, our management cluster.
We create a CRD for Gloo Mesh called the virtual mesh, and it basically tells Gloo Mesh to logically group meshes together, sharing configuration as well as federating, or exposing, the services they contain for multi-cluster traffic. I'll scroll past this here; you can provide your own root certificate.
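A sketch of what that VirtualMesh resource can look like. The mesh names and exact field layout depend on your clusters and Gloo Mesh version, so treat this as illustrative rather than copy-pasteable:

```yaml
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}            # or supply your own root certificate instead
  federation:
    selectors:
    - {}                         # federate all services to all member meshes
  meshes:
  - name: istiod-istio-system-cluster-1   # example discovered mesh names
    namespace: gloo-mesh
  - name: istiod-istio-system-cluster-2
    namespace: gloo-mesh
```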
What this model is using is shared trust, and in shared trust, what Gloo Mesh will do is provision a root certificate for the virtual mesh; it's the root certificate for the mesh, so each mesh will get a root.
Each agent uses that to create a certificate signing request, which is signed by Gloo Mesh using the shared certificate. Now those agents can give Istio an intermediate CA that has a shared root of trust, and the Istios can trust each other. All right. So what you'll see is, once the trust is established and we've created that virtual mesh, we should have a bunch of service entries created automatically, and you'll see what they're named.
In this video? I'm not sure that it does.
Okay, I really apologize for the lousy demoing here; I'm just having some CPU issues. All right, I'll just wrap it up by asking if we've got any more questions. All right: "You worked with Admiral; what's the major performance difference between the ingress approach and the Istio approach with EDS?" So I haven't tested it to verify this, but it seems, I believe, you would have less of a performance impact with the ingress approach, because the Istio control plane itself is going to require a lot more memory in order to store all of those endpoints.
Okay, so Istio has to load... let's say I have an endpoint that lives in one cluster here. In my Istio setup, let's say I have this pod and I have an endpoint for that pod. That endpoint has to be shared with every Istio control plane in every other cluster, which is going to create additional memory pressure on Istio compared to when you have a management plane running.
But again, I haven't actually tested performance. As far as the performance of the traffic itself, there really shouldn't be a difference, because the request is going on the same path. Ultimately, if we're just talking about a shared trust model, which is the only thing Istio supports today, either way the pod's request will go through a remote gateway.
This is the diagram I was intending to show: in the Istio model, these IPs have to be shared across all the proxies and all the control planes, whereas in the management-plane model, you only need to share the IP of this gateway, and that's the only thing that the sidecar needs to know and that Istio needs to know.
So I think that about wraps it up. Again, I'm really sorry about the demo; I've been having some CPU issues lately. I will definitely have a demo up and linked for anyone who's interested in following up.
Okay, well, if that is it, then, no more questions, thank you guys so much for checking this out. I really appreciate the feedback and the community that we're building here. It's really very cool and a very interesting space; there's a lot of stuff happening right now, here and at Solo.
You know, we want to not just have products to sell, but we really want to share our knowledge and insight, be co-creators in this space, and help to discover the future of what's going to be possible with service mesh and Kubernetes.