Description
As the number of clusters and compute resources driving your application multiplies, so does the complexity of your network configuration, right? Well, Nir and Stephen tell me it doesn’t have to! Join us as we talk to some of the experts behind Submariner about all things multicluster networking. If all goes well, I might get to cut out the mess of Cat-6 cables I’ve woven together in my home lab!
A
Hello, welcome to the Cloud Multiplier; we are coming to you live, and today is episode three. Today we have the pleasure... I'm Gurney, I should say: I'm Gurney Buchanan, your co-host, joined by my other co-host Joydeep, and we're honored to have two guests from the Submariner project today, Nir and Stephen. You already know myself and Joydeep. Stephen, Nir, do you want to introduce yourselves? I'll go top to bottom: Stephen.
A
Awesome, thanks for joining us. So we'll get into all things multi-cluster networking here in a bit. The first thing, which I admitted while we were waiting before the call started: I did not take a networking class in college, so my networking... I'll be a very good, very good user here. I want someone to make things simple, because I don't want to have to learn all of this fun networking; I'd rather it figure it out for me.
A
So y'all are the experts; I think we'll have some fun there. But first, as always, we have top-of-mind topics. I have still stolen that name; I think I came up with a name in the last stream and then forgot it, so we're back to this one. I'll start off, because I actually gave Joydeep some prep time this time on what he wanted to bring up.
A
But my past couple of weeks have been dominated by an incredibly fun time around understanding how every CI system holds an unbelievable amount of power, because you have to put credentials into it. If anyone else uses Travis CI: it has continued to leak some sensitive secrets that you have in jobs, and we used to be a Travis CI shop. We still have some stuff that we ship in z-streams for the Red Hat Advanced Cluster Management project that's on Travis.
A
So every time I see a news article in my feed that says Travis has been leaking your keys once again, it's always a fun week. So I came back from vacation to that notification. Joydeep, I'll message you after, so you can rotate all of your keys as well.
A
I don't think any of yours were thoroughly leaked in Travis, but as always, I'm reminded about the need for ever-increasing security, and the fact that I should probably learn more about HashiCorp Vault and sealed secrets; I think those are two of the more interesting projects I've hacked at lately. Vault, seeing as we operate a bunch of Kubernetes clusters, seems to be a pretty cool piece of tech. We can talk about that some more later, Joydeep.
A
We might actually have someone on to talk about that at some point, but Joydeep, you had a book that you wanted to talk about. You were talking.
D
Yeah, yeah, and let me start off by going after what you were telling, Gurney. We hear more and more customers talk about the HashiCorp world, and indeed, in today's scenario, with all the security concerns, the last thing you want is your secrets to fall into the wrong hands. So that is a very hot topic, and yeah, I guess we can have folks talk about that. I think in the last one Gus kind of referred to it when we were talking about policies; he kind of touched upon that. But on the book, yeah.
D
The Book of Why. I think it's called The Book of Why. It's written for lay people, by a UCLA professor who won the Turing Award, the Nobel Prize for computer scientists, in 2011.
D
It can be read by anybody. It's a fantastic book; it's all about causation, how you build a causal model. It is fantastic! It talks about how philosophers have treated this subject, how statisticians have treated this subject, how computer scientists and machine learning folks have treated this subject. And it's coming from someone who is, you know, top notch, right? So he's explaining it to you, and you just enjoy that. There are things you can understand and there are things I cannot understand.
D
I have to read it 10 times, but the other interesting thing is that it at least makes me think: hey, all of this data we have, and the way we are trying to extract meaning out of the data; is there a different, more robust way, perhaps, which we could use? So you know, it's that kind of stuff that gives you enough thought in your brain, and it's interesting.
A
Oh goodness, I will add that I have it pulled up on Amazon right now, which is just... it suits me to have already pulled it up on Amazon anyway. That sounds wonderful, Joydeep; I'm gonna have to give that a try. As always, open floor: Stephen, Nir, any fun open source projects? Have you been reading any good books, any good documentation?
B
Yeah, so I just got this one. You were talking about not knowing anything about networking; I haven't read it yet, but this is a book about the company I started my career in, which is 3Com. Everybody's forgotten about it now, but they were a big...
A
Okay, it might be... Joydeep, I think I know what the 3Com one may have been. I started my career at IBM, so it may have been one of my colleagues there, because I know some people from the mainframe division. So that might have been a networking-related product, come to think of it.
D
And me, like, whenever you talk about it, I have to think: okay, what is a layer 7 app? You have to think about it, it doesn't come automatically, so a refresher like that helps. But this is the first time we bring networking onto the Cloud Multiplier, and it is important; it is one of the key backbones.
B
So, well, if we talk about just the layers: the idea of networking is that you stack layers, and there are different models. TCP/IP has one model; there's an OSI model, which is supposed to be an official standard, and that's the one everybody reasons about. The bottom layer is the physical layer, which, in the case of Wi-Fi, doesn't actually exist, and then there's layer 2, which corresponds to MAC addresses and what most people know.
B
So that's sort of the link layer; that's how you get data from one device to another. Then there's layer 3, which is interconnecting networks, so you start routing things here, and obviously that's where the internet starts. The internet, that means inter-network: it's all these networks that are connected together. And then layer 4, above that, is when you start to have structure to your communications. So this is where TCP lives. You have long-lived sessions, which are more than just, you know, UDP datagrams, where you don't know whether anybody gets them. With TCP streams you actually do know if your recipient gets the data, because there's two-way communication with handshakes, and there are whole state diagrams with state transitions, so everybody that's involved in the communication knows where the others are at, sort of thing. And then you add layers on top of that, so OSI...
B
I can't remember what five and six are, and we tend to talk about seven, which is the application layer. This is where things like HTTP come in, where there's actual content that has semantic meaning, let's say. In the cloud, this is where most of the interesting things... well, I wouldn't say most of the interesting things happen, but it's what you ultimately care about.
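As an illustrative aside (not from the episode), Stephen's layer-4 contrast can be sketched with plain sockets: UDP just fires standalone datagrams with no delivery state, while TCP performs a handshake and maintains a stateful session before any data flows.

```python
# A minimal sketch of the layer-4 contrast: UDP datagrams vs TCP streams.
import socket

# UDP: no connection, no acknowledgement; we just fire a datagram.
udp_rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_rx.bind(("127.0.0.1", 0))          # OS picks a free port
udp_tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_tx.sendto(b"hello", udp_rx.getsockname())
datagram, _ = udp_rx.recvfrom(1024)    # arrives only because localhost is lossless

# TCP: listen, connect (three-way handshake), then exchange bytes over a
# session both ends track in their state machines.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()
client.sendall(b"hello")
stream = conn.recv(1024)

print(datagram == stream == b"hello")  # True
for s in (udp_rx, udp_tx, client, conn, server):
    s.close()
```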
B
Yeah, so it's important because, well, the reason I mentioned Submariner's layer 3 earlier is that when you're working at a certain layer, anything that's built on top of that will work transparently. So if you're acting at layer 3, then anything at the layers above will work on top of whatever you're doing, without noticing that there's a difference. If you work higher up, then things that rely on specifics below you won't work.
B
So that's why, for example, if you've only got a layer 7 proxy, say, on a network, then you can't route... you can't use that for anything below that. So that's why, for example, you might have an HTTP proxy, but you can't use it for anything that doesn't go over HTTP, so your games might not work over it.
A
Yeah, okay, okay, that's very good framing; that filled in some gaps for sure. Let's see, so Submariner is at layer 3. So, I guess, to frame the problem better: Submariner, from my very brief reading, seems to be a solution for the problem of: now I've created a heterogeneous, highly scalable, varied, everywhere network landscape, and I need to run an application across more than one of those, and I need to be able to communicate in some sense of a reliable way, with a nice low-level interface, or a nice powerful interface. And that's where Submariner steps in, I guess. Here's the point where I'm curious what Submariner's biggest use case is: where do people put Submariner?
C
Sure, maybe I can take that. So yeah, that's exactly what we're trying to solve. Obviously we are seeing users and customers deploying all sorts of Kubernetes clusters all over the place, right? It could be on-prem, or on each of the major public clouds, or any combination of these, and then what we are trying to do is to really interconnect them directly.
C
So, as Stephen said, we basically go to this kind of layer 3, L3 foundation, and at the infrastructure layer we just interconnect the clusters. And then, in terms of use cases, maybe I can share my screen, because I have a bunch of very interesting ones, and there's one that fits.
B
Exactly, yes: we follow, was it the Jon Postel principle? I think: we're liberal in what we accept. Okay.
C
To use both, yeah. So I guess the most basic example, or use case, is just to basically interconnect components of the same application across different clusters, right? And we actually have a demo of this exact use case later on.
C
So here in this slide, what you can see is a cluster A over on the left side, where there's a database component and a front-end component, and then over on cluster B, on the right side, there is just this front-end component. And then, as you can see, in order for the front end on the right side to connect to the database on the left side, you just need to have some kind of secure, direct, VPN-like connectivity between the clusters, right?
C
So that's the very basic use case, and then the more interesting one: this is something that we did with CockroachDB, which is a very popular cloud-native database.
C
Yeah, so here again, as you can see, we have three different clusters in this particular diagram.
C
CockroachDB comes with its own replication; it's up to Submariner to just offer this interconnectivity, right? And one other use case, which is quite popular in the context of Submariner, is disaster recovery, and this is something that we are actually delivering together with the Red Hat OpenShift Data Foundation teams (what used to be Red Hat Storage earlier), and this is where, over on the storage side, they have all the fancy disaster recovery feature set, with volume replication and whatnot.
C
But then, in order for this to actually work, they need the underlying infrastructure, the underlying connectivity between the sites, right? And again, this is where Submariner comes in, offering this kind of L3 infrastructure, and then, you know, the replication just works on top of it, right? So these are the types of use cases we are currently supporting and hearing most about, but there are definitely others.
D
And Nir, obviously you mentioned that you provide low latency, very critical for replication, for, let's say, CockroachDB and stuff like that. So does Submariner give any indication, publish any metrics, on what kind of latencies are being experienced and stuff like that? Is there any way to back up the claim that, hey, I'm a low-latency connection?
C
So I guess from a connectivity perspective, again, we rely on RHEL and, like, Linux and the kernel; and then yes, we do have some health-check information that tracks the connectivity between the clusters, and we even show the latency, like the live latency numbers. Stephen, I think you have it in your demo, if you want to show it.
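For reference, the health-check numbers Nir mentions are surfaced by the subctl CLI; a hedged sketch (it requires a cluster with Submariner deployed, and the exact output columns vary by version):

```shell
# Illustrative only: show the gateway-to-gateway connections and their
# live round-trip times. Typical columns include the remote cluster,
# cable driver, connection status and an average RTT per connection.
subctl show connections
```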
A
Yep, I was literally about to say we do have a good question. The first question, I'll pop it up on screen as well, is where this is supported: can multi-cluster networking work with non-OpenShift? And then, I guess it's a couple of questions: non-OpenShift, non-K8s, so... oh goodness, non-Kubernetes, and pair those with OpenShift.
B
Yeah, so just from an upstream community perspective, Submariner the project supports any Kubernetes clusters. There are some restrictions as to which CNIs can be used, and we don't necessarily test all the possible combinations, because there are far too many. But the only requirement is that there's one shared cluster that everybody can access, and we call that the broker.
B
That's where all the data that's used to synchronize between the clusters lives; it's right there on screen, the top cluster there. So all the information that Submariner needs to share across clusters lives there, and any cluster that can access that Kubernetes API endpoint can join what we call the cluster set, which is all the clusters that are working together. And so, just using Submariner upstream...
B
You could have OpenShift on one side and any other Kubernetes implementation on the other, and connect the two, and mix and match; you could have a variety. You can have more than two clusters, and they can all be different kinds; as long as they can talk to the broker, and there's some way of getting them to talk to each other, then it will work. Now, from a product perspective, a Red Hat product perspective...
A
Okay, that's a good answer; thank you for hitting that, Stephen. Now that we've interrupted Nir, we were hopping over, I think, to the demo cluster, because I'm very curious to see this multi-front-end, one-database setup. You also did bring up Data Foundation, which is another group of people I need to bother to be on the show, because they have some really cool pieces of tech over there, and we also have the VolSync folks that I know you've worked with.
B
We don't have anything as interesting as that in the demo, but just to take a look: this is ACM, Advanced Cluster Management for Kubernetes, and on this screen we can see that I've got two clusters that are connected together; one of them is on AWS, the other is on GCP. And so, from this screen...
B
We can get all sorts of information about the clusters themselves, how to get to the OpenShift console and so on, and we can also see here that we've joined them together in a cluster set, which is imaginatively called "submariner", and we get a quick health check there, the number of clusters, and we can drill down to get more information here. So we get the connection status between the two clusters, and the status of all the Submariner components that are involved. Everything's green here, so it's not saying very much, but if something was wrong, you'd get a pop-up telling you exactly which component was wrong. And the node labels part: this is because Submariner can't know on its own which parts of a cluster it can use to communicate with the outside world.
B
So we rely either on the administrator to label one or more specific gateway nodes that we're going to use as gateways, or we rely on setting up a specific gateway, and this is what's been done here. Submariner is capable of going and talking to AWS, GCP and a few other cloud platform providers to actually go and set up a specific node.
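As a concrete sketch of the two options Stephen describes (the node name is hypothetical, and the `subctl cloud prepare` flags are from the Submariner docs; check `--help` on your version):

```shell
# Option 1: manually label one or more nodes as Submariner gateways.
kubectl label node my-worker-node submariner.io/gateway=true

# Option 2: on supported clouds, let Submariner prepare the
# infrastructure itself (opens ports, can set up a gateway node).
subctl cloud prepare aws --ocp-metadata path/to/metadata.json
```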
B
So if you're not careful, you know, Submariner might run away with it and you might end up with a surprising bill at the end of the month; but no, that's just a joke. So once you've got the nodes labeled and everything's set up correctly, then, like I said, there's nothing special really running here.
B
One of the underlying principles of Submariner, which it inherits from what's called the multi-cluster services specification: that's a Kubernetes SIG which publishes a spec describing, well, not really how to connect clusters together, but how to provide services across multiple clusters. So that's the API that Submariner implements, and it's all service-based.
B
So if we go and have a look at the services on my two clusters: I've got my AWS cluster here and my GCP cluster here, and I've created an nginx-test namespace on both, and it doesn't have any services currently.
B
...image, and next I'm going to create a service using that, using an nginx-svc YAML file which I prepared earlier. You can see it appear straight away here on the AWS OpenShift console, but there's nothing on the GCP console yet. And we can check using kubectl as well that things have happened: so there's a service, it's got a cluster IP, it's on port 8080, and we can ask kubectl to describe it.
B
So we get the same information back, and now this is where Submariner is going to come in, or at least one of the layers of Submariner. So there are two aspects to Submariner, really: one is the network connectivity, and that's available all the time.
B
As soon as two clusters are connected, their networking is shared, so at the IP layer all the clusters become accessible to each other, and one pod in one cluster can talk to another pod in another cluster using its IP address. But the layer on top of that is the service layer, and nothing happens automatically at this layer, so we need to tell Submariner that we want to export this service. So we've got the nginx service in the nginx-test namespace, and we export it using a command called subctl export service.
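That export step, as run in the demo, looks roughly like this (namespace and service names follow the demo; illustrative only, since it needs a cluster joined to a broker):

```shell
# Export the service so it is advertised to the whole cluster set.
subctl export service --namespace nginx-test nginx

# The reverse operation exists too, to withdraw the service:
subctl unexport service --namespace nginx-test nginx
```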
B
subctl is a small tool that the Submariner project provides. It's a utility in the kubectl style which simplifies all the Submariner operations, really: you can use it to set up your broker, you can use it to connect clusters together, you can use it to export and unexport services, and you can also use it to run diagnostics, to gather a whole load of debugging information.
B
You can even use it to run all the tests that we run in CI; we package them all up and make them available in subctl, so people can run them on their own setup if they want. And so the way this works is: it exports a service, and this creates a new object on top of the Service object that Kubernetes users will be familiar with, and this is...
B
...called the ServiceExport, and you can see here it's using the multicluster.x-k8s.io API namespace, so this is not a Submariner-specific object; it's part of the multi-cluster standard. It's a ServiceExport object called nginx, same as the service, and it's in the same namespace as the service it exports. And you can see here what happened to it: at first it doesn't exist; well, it exists, but it doesn't have a corresponding global IP.
B
Then it gets synchronized to the broker. So I created it on the AWS cluster, but it has been sent to the broker, and the last status said that it was successfully synced to the broker. So at this point the broker knows about it and it's been made available to other clusters. And you'll notice here it doesn't use quite the same name in the logs: ServiceImport, not ServiceExport.
B
So I have another kubeconfig file which points to the GCP cluster, and we can see it shows up in a bunch of different ways. But this is how services are exported from one cluster to the other: one cluster exports them using a ServiceExport object, and that creates a ServiceImport object, which gets propagated to all the clusters that are joined together. The ServiceImport object is then used by DNS in the receiving clusters to make the service available, and we can check that by running a test pod in the GCP cluster. So, just to show people that there are still no nginx services on the GCP cluster itself: nothing is happening locally in GCP, but thanks to Submariner we can access the nginx service that lives in the AWS cluster, and we can do that using a new domain. People might be familiar with cluster.local; in a multi-cluster service scenario you use clusterset.local instead, and that gives access to a service wherever it lives in the cluster, or rather in the cluster set. We can also retrieve information about it using dig, from DNS, and that tells us it lives on the other cluster.
C
Yeah, and it's important to highlight that this is really relying on the underlying connectivity that Submariner provides, right? So without the IP, the layer 3 connectivity, DNS wouldn't be able to resolve this IP address, right? So Submariner really provides this base IP connectivity, and then also the implementation of this MCS API and DNS.
D
This is actually awesome. So, to summarize, if I can replay back what both of you stated: if I need to connect services that are running across different clusters, the first thing is to bring those clusters together (okay, establish a cluster set or whatever) and make sure they are all connected; and then, as a developer, I create my stuff in one cluster, and somebody, a consumer or maybe another part of me, creates some other stuff in another cluster: business as usual.
C
Yeah, that's right, and another thing to highlight is the operational model here: the connectivity part is really meant for the admin, right, like the SRE type of person, where they are really responsible for bringing up the cluster set, bringing up the fabric and interconnecting the clusters. And then, once the cluster set is up, and you know the network reachability is there between the clusters, each and every application developer can just export their services. So this is the operational model of Submariner, yeah.
A
I know people who work in financial tech, for example, and they're application developers, and their application front end needs to talk to 12 different services that are hosted in a bunch of geos, on a bunch of different clusters. And this is how you achieve that sort of transparent interaction with a series of other services: for, you know, a consumer of multiple services that in itself provides another service, right?
B
...mesh; let's get that out of the way first. But it can work with service meshes, although this is an evolving space; this is also all moving fairly rapidly. But, for example, Istio, which is perhaps the better known, let's say, if not the most popular, service mesh implementation, can actually piggyback on top of any MCS provider, so any multi-cluster services provider. So Istio on its own can provide multi-cluster connectivity and a service mesh across clusters.
C
Yeah, I guess one thing to highlight is that the main focus of Submariner is really the reachability, the connectivity aspect, and if you look at service meshes, they really are targeting a different feature set, like...
C
...observability and traceability and routing and load balancing and whatnot. So we are trying to keep Submariner really, you know, focused, and then, as Stephen said, if you want to run something like Istio across multiple clusters, we can take care of the connectivity, right? So you can run Istio on top of interconnected clusters, and we actually have a blog post showing exactly that: we set up Submariner and then run Istio on top. So we talked about layers before: we are at layer 3, we provide this infrastructure foundation, and then, from Submariner's perspective, Istio is just an application, you know, trying to connect different endpoints. And yeah, I have the link to that blog, so we can share it.
B
Yeah, Lisa's question in the chat: are there any custom resources related to the nginx deployment in the GCP cluster, or only in the AWS cluster? So there are, but I guess the underlying question is: did the user have to do anything in the GCP cluster to get this to work? And the answer to that one is no, but I thought maybe it's worth looking at the objects that are actually involved here.
B
So this is on the AWS cluster, and I mentioned ServiceExport and ServiceImport, and we can see them here. There are CRDs: there's ServiceExport here, and you can see two ServiceImport CRDs, because there's a legacy one from before the multi-cluster SIG implementation, which we used, but we no longer use that one. So if we look at the ServiceExports: this is what I created manually on the AWS side of things.
B
We have one of them, so that's the object that I created, and that's all I did. But then a component of Submariner called Lighthouse (which is why this is called "lighthouse" here) saw that I created the ServiceExport and automatically created a matching ServiceImport, and it did so in two different namespaces. It did it in the operator namespace, which is where our own objects live; so ignore these, they are old ones.
B
These
two
are
the
ones
that
were
created
as
a
result
of
my
actions
during
the
demo,
so
the
operator
one
and
then
it
got
pushed
to
the
broker
and
from
the
broker
so
I'll
go
over
to
the
gcp
cluster
now
down
into
the
crds
again,
so
just
to
we'll
just
check.
First
of
all
that
there
are
no
service
exports.
B
And
this
is
in
another
namespace
so
and
it's
a
bit
older,
so
this
is
from
other
tests
that
were
done
on
this
clusters.
That's
not
what
I
was
using
and
it's
got
a
different
name
and
on
the
service
import
side
of
things
we
have
this
one
here,
which
is
the
one
I
created.
So
this
one
was
automatically
imported
from
the
broker
from
the
aws
cluster,
and
that's
all
that
happened
and
then
the
dns.
So
we've
got
a
core
dns
plugin
that
runs
on
all
the
clusters
that
are
joined
together.
B
The
core
dns
plugin,
that's
running
on
the
gcp
cluster
sees
this
service
import
and
maps
the
ip
address
that
corresponds
to
the
import
to
the
service
name
and
that's
how
it
all
works.
So
I
didn't
do
anything
on
gcp.
Apart
from
ensure
that
the
namespace
exists,
everything
else
is
taken
care
of
for
us
by
subrainer.
B
So that means there's a great deal of flexibility that happens transparently for the users. You know, you just choose to export the service in the cluster that has it, or clusters that have it, and it becomes available across all the clusters that are joined together, transparently. And this means that you can also move services around from one cluster to another, or make them available in multiple clusters.
B
For
example,
I
could
set
up
the
nginx
service
in
gcp
and
then
it
would
well
when
it
becomes
locally
available,
we'll
prefer
the
local
version
for
latency
reasons.
But
if
this,
if
you,
if
we
had
a
third
cluster
in
the
the
demo,
then
I
could
run
a
test
and
you
would
see
it
round
round-robin
between
the
two,
so
there's
distribution,
and
so
that's
hopefully
going
to
improve
at
some
point
in
the
future
as
well.
B
So
you
can
add,
but
there's
work
going
on
in
the
six
and
the
kubernetes
six
around
all
this,
so
you
can
have
metrics
that
will
allow
you
to
prefer
services
in
one
cluster
over
another.
If,
for
example,
if
you're,
if
you've
got
bandwidth
costs
that
vary
between
different
clusters
or
you,
you
have
latency
requirements
for
your
services,
but
it
also
enables
things
like
failover.
B
So,
for
example,
if
you
have
the,
if
you
have
one
service
that's
available
in
multiple
clusters,
submariner
will
actually
check
regularly,
which
ones
are
available
and
if
it,
if
it
notices
that
one
of
the
clusters
is
no
longer
reachable,
it
will
automatically
stop
offering
it
as
a
as
an
endpoint
for
the
service.
And
so
all
the
other
clusters
will
stop.
Trying
to
talk
to
and
you'll
automatically
fail
over
to
clusters
that
are
still
available.
A
That was... I was about to ask: it sounds like this is applicable, very applicable, in a failover scenario, or allows you to run those services in almost a pseudo-global way across clusters, across regions, across those geos. I assume cost is in the same vein: you know, it costs us more to heavily use the service in this region than another. So...
B
Yeah, and that's one of the use cases that are possible with Submariner. Nir has a nice slide with all the different... well, with, I think, eight of the main use cases that we care about, and one of them is expenditure. So you can reduce your costs by moving compute to cheaper regions, you can reduce your costs by reducing the amount of network transfers that you use, and obviously there are also other scenarios: you might have data where you have to look at legal ramifications.
B
So
that's
important.
You
know
for
for
well
gdpr
in
europe
or
equivalent
laws
in
california
or
brazil
and
so
on,
and
so
you
want
to
ensure
that
your
data
stays
in
one
place,
but
you
might,
you
might
be
able
to
build
services
on
top
of
it
that
can
be
made
accessible
to
other
regions.
C
Again, I'm not a service mesh expert, but I'm not sure you can avoid federation, because that's more for the control plane. But I guess you can avoid using, for example, Istio gateways, or those data-path components that interconnect Istio, yeah. So you can avoid that layer.
D
Exactly, and Nir, back to the thing you were stating earlier: service meshes have a different goal in life. They serve a lot of purposes; in Submariner we are focused only on the network connection, so the, what's it called, the east-west... I keep on forgetting that. Yeah, east-west, that's right: the east-west connectivity. Submariner can provide the underlying layer of that, but federation, required for other reasons, would still be required. Yeah, and Stephen, one of the things you mentioned, I didn't catch it.
B
Yes, so Submariner tries to be CNI-agnostic. There are, for example, other multi-cluster solutions like Cilium, and Cilium has lots of other advantages that can be interesting in some scenarios, but one of the big constraints with it is that it is a CNI. So it replaces your CNI, whereas Submariner tries to piggyback on... well, it doesn't try, it piggybacks on top of whatever CNI you're using. Whether it ends up working or not depends on the CNI, and whether we've tested...
B
...its specifics. And so the way that works is that Submariner really acts in two different ways. Its first task is to create connectivity between the clusters, so to do that, it opens a tunnel between the chosen gateways in each cluster. Remember, I said you had to label gateways inside each cluster; one of those gateways will be chosen in each cluster, and Submariner will open a tunnel between those gateways. And you can use a variety of technologies there: you can use IPsec, using Libreswan, or it can use VXLAN if you don't want to add encryption, you know, if you trust your underlying network.
B
For example, it can also use WireGuard, and it's got a plug-in architecture, which means it would be fairly easy to develop new, what we call, cable drivers for this. And so once the tunnel's up, obviously it has to get traffic through the tunnel, and to do so it adds iptables rules on all the nodes inside your cluster.
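The gateway labeling and cable-driver choice Stephen describes surface directly in Submariner's tooling. A rough sketch of a typical setup follows; the node name, broker file path, and cluster ID are illustrative, and exact `subctl` flags may differ between versions:

```shell
# Mark which node(s) in each cluster are eligible to run the gateway
# (node name is illustrative):
kubectl label node worker-1 submariner.io/gateway=true

# On the broker cluster, publish the connection details:
subctl deploy-broker

# Join each cluster, selecting the tunnel technology via the cable driver:
# libreswan (IPsec) is the default; vxlan skips encryption; wireguard is
# another encrypted option.
subctl join broker-info.subm --clusterid cluster-a --cable-driver wireguard
```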
B
So this is acting beneath the pod networking layer, in Kubernetes terms, so that all the traffic that's generated on one of your Kubernetes nodes and that has to go to a node that's in another cluster ends up going through the tunnel. And because it's using iptables rules, and not modifying or configuring the CNI, it can work with any CNI, as long as it fits in with the iptables setup, really.
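Conceptually, the per-node plumbing looks something like the following. This is a hypothetical sketch, not the exact rules Submariner's route agent programs; the CIDRs, gateway address, and chain placement are made up for illustration:

```shell
# Exempt cross-cluster traffic from the CNI's usual source NAT, so pod IPs
# stay intact end to end (local pod CIDR 10.42.0.0/16, remote 10.43.0.0/16):
iptables -t nat -I POSTROUTING -s 10.42.0.0/16 -d 10.43.0.0/16 -j ACCEPT

# Steer traffic bound for the remote cluster's pod CIDR toward the local
# gateway node (10.0.0.5 here), which forwards it into the tunnel:
ip route add 10.43.0.0/16 via 10.0.0.5
```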
B
So that's where the testing comes in, because obviously, if a CNI has an approach that doesn't work with the way we're doing things, then, if we don't know about it, we can't handle it. But if we do end up knowing about it, then hopefully we can change Submariner relatively easily to make it work.
C
Yeah, to add to what Stephen said: first of all, the goal of the project is to really be compatible with as many Kubernetes providers and CNI providers as are out there. I think for CNIs which are based on kube-proxy and iptables, it pretty much just works out of the box, but with other CNIs we may need to do things differently, and this is where we're listening to our users and trying to, you know, make sure that we support
C
what's more popular, you know, and what users are asking for. For example, we did add support for Calico explicitly, and also for OVN, so yeah, that's why.
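The Calico support Nir mentions is a good example of per-CNI accommodation: Calico NATs traffic to CIDRs it doesn't know about, so the Submariner docs have you declare the remote clusters' CIDRs as IPPools that Calico should neither NAT nor allocate addresses from. A sketch with illustrative names and CIDRs (check the Submariner Calico guide for your versions):

```shell
# On cluster A, declare cluster B's pod CIDR (value illustrative).
# natOutgoing: false keeps pod IPs intact across the tunnel;
# disabled: true stops Calico assigning addresses out of this range.
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: clusterb-pod-cidr
spec:
  cidr: 10.243.0.0/16
  natOutgoing: false
  disabled: true
EOF
```

A matching pool would typically be needed for the remote service CIDR, and the same declarations repeated on the other cluster in reverse.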
A
That kind of lets you tame it, right; the two ways to tame heterogeneity in this case seem to be replace or, you know, adapt. So this is adapting to what you have running already: you don't need to change anything, we're not going to cause any headaches.
A
Hopefully, if we tested it, we're not going to cause any headaches. That's, that's.
C
Incredible, yeah. But that's a big design, or kind of architectural, choice that we made, because we could have built a new CNI, right, but then you'd have to use the same CNI across all of your fleet, all of your clusters,
C
in order for this to work. With Submariner, you can actually mix and match different CNIs, and we heard about this as well from users trying to migrate from one CNI to another, or all sorts of heterogeneous environments, and this is what we really want to support.
C
A
Yeah, once you reach the managed Kubernetes, the managed service domain, you have a little bit less choice and control over the CNI that you're using, so you need to be one layer higher. That is an incredible point, and that also, in many cases, probably helps a lot with on-premises and cloud deployments as well, and some of that translation, right. It's just a theme for distributed cloud things, where we're trying to tame the chaos of the heterogeneity.
A
A
Let's see, I had something on my mind and it has gone away. Yes, before I forget, I have to ask the question: I put a link out earlier to the Submariner project, and the Submariner project's docs.
A
C
Yeah, so first of all, I guess we haven't said that yet: Submariner is fully open source. We are on GitHub; all of the code is on GitHub, including the testing and documentation, so everything is completely open.
C
I guess the most common or popular way to reach out to us is on the Kubernetes Slack: we have the #submariner channel, which is, yeah, it's linked there on the left side. We also have a users mailing list and a developer mailing list, but I think Slack is just the most popular one, and of course we welcome contributions. We are very proud of our user community.
C
A
B
Yeah, so we joined the Submariner project after it was created. It was created by a Rancher engineer, and he chose the name following the sort of general Kubernetes nautical theme. But it's not a Greek name, because
B
it didn't exist back in ancient Greece, and neither did undersea cables, so they didn't get laid back then. But this was the general idea: Submariner provides the undersea cables to connect clusters together.
A
Okay, that's clever! That's clever! I've been curious about that for a while. That's incredible, and also really interesting. Here, go ahead, Joydeep! Oh.
A
B
D
B
Yeah, I was gonna say, that's what's great about open source, really: a project isn't necessarily tied to its creator, and lots of different people, from different communities and different companies, can work on it.
D
D
You know, I mean, left to us, we would create clusters left, right, and center, all over the place, and burn all our budget. Gurney is the one who keeps watch, like, hey, you're consuming too much, you know, you have too many clusters there, it's costing this much money. So he is really the penny-watcher for us.
A
You're priming for us to have the cost manager,
A
have the cost management folks on at some point, because we've had three presentations, Joydeep, and we've taught people how to make a bunch of HyperShift clusters. Spoiler alert: hopefully we'll teach people how to make a bunch of MicroShift clusters soon. And then you network them all together and you make them all compliant, but now you have a lot of hardware.
C
D
Have you run into situations where people have clusters, let's say, in Amazon, and clusters on-prem? Have you gotten into debates where people are talking about, hey, do I use the Amazon VPN solution, or do I use Submariner? And I do realize a VPN solution would connect many more other things than just clusters, right.
C
Yeah, that's a great question. So the hyperscalers offer their own set of, kind of, VPN technologies, like VPN services, but typically they are limited to the same provider, right? So, like, if you want to interconnect different regions of AWS, then you can do it with the AWS VPN service. But then, when you want to connect two different public clouds, this is where it becomes more challenging.
C
D
D
A
That makes sense; that makes perfect sense. But yeah, let's see, oh, the other community question that I had thought of: how did you get to the Submariner project? It sounds like Stephen's been in networking land for just a little bit, I would guess, but I don't know how you happened upon it. I'm always curious how you found a community.
B
I can't actually remember how we found Submariner itself; it might be Miguel who came across it. But the story of how, as a team, we ended up working on Submariner is that before that, we used to work on a project called OpenDaylight, which is still sort of alive, but not quite as big, a framework to build software-defined networks, and we decided, for a variety of reasons, to stop working on that.
B
And so then, as a team, we looked around at what would be an interesting space to start working in, and multi-cluster connectivity came up, and so we had a look at everything that was happening in the space at the time. This was the beginning of Istio, really, solutions like that, and Submariner was a brand-new project. If I remember correctly, what we liked about it was that it was technically relatively straightforward.
B
I mean, there's a fair amount of complexity in what it does, or rather in how it goes about what it does, but the "what" part, what it does and the services it renders, is fairly simple to explain and fairly simple to understand, we hope. And so it was a relatively well-defined project that solved an actual problem, which was connecting clusters together, and so we decided to get involved in it.
A
B
B
So the Linux Foundation was created to provide a home for Linux, the kernel, but it's expanded over the years to encompass a whole variety of things, and one of the big drivers behind the Linux Foundation's activities over the past few years has been to encourage collaboration between companies working on big projects that were, you know, designed to change the world. And so OpenDaylight was one of those; I think it came out of Cisco initially, Cisco and Juniper perhaps, and the Linux Foundation started Linux Foundation Networking, which was the first host organization for something other than Linux, and OpenDaylight became the model for collaborative projects in the Linux Foundation.
B
With all these companies, who were actually competitors, working together at an engineering level on a common project. And so obviously each company aimed to have products derived from all these projects, with their competitive advantages and so on, but to get them all working together was quite a feat, really, and that then led to the model behind the CNCF, the Cloud Native.
B
A
That's incredible. I should say, I did look up OpenDaylight on the side here; I have it open for later. I did find the newcomers' guide, and it starts with the text "What is Gerrit?", which definitely tells me, that's, I don't think that stuck around for the CNCF, but Gerrit is always a fun place to start. That's amazing.
A
I'm gonna have to read up a bit more. Well, we are at the top of the hour, but before we finish up, Stephen, Nir, did we miss anything that you wanted to talk about today in all of our questions?
B
A
Awesome, okay, sounds good. Thanks for coming along. Again, I'll go ahead and splash up the show contact, so we've had a good primer. Stephen, you hit the nail on the head: we do have a show contact. So if anyone has any questions afterward, this email will be live, and I can loop in Stephen and Nir if anyone has any interesting questions or thoughts about Submariner. Also find them on Kubernetes Slack under #submariner; that is a place to get involved and participate.
A
Thanks again for joining the show, Stephen, Nir. We'll let Nir head off to sleep; it is very late for him. I think this is our first fully international guest on the show, so we've kept people on late. Thanks, everyone, for coming to The Cloud Multiplier. I do not have an outro; we're gonna keep using the intro as the outro, and I'll see everyone in two weeks. I think I actually have something that I could tease for next week, right, Joydeep?
A
Should we tease? I think we're talking, is it TALM now? And it stands for, what is it, Topology Aware.