From YouTube: Istio August Meetup / Demo: Multi-cluster Istio Mesh - Complex or Piece of Cake? by Laszlo Bence Nagy

Description
In this meetup Laszlo Bence Nagy shows an almost fully automated solution that can form and then sustain a multi-cluster Istio mesh using an open-source Istio operator and a cluster registry controller component, which provides continuous synchronization of the necessary resources between the clusters. In this session, you will learn:
How to form a multi-cluster mesh with ease
How the necessary resources are synced between the clusters
How the system recovers even when network endpoints change
Hey everyone, welcome to this session. Thank you very much for having me, it's great to be here. My name is Laszlo, Laszlo Bence Nagy. I am a software engineer at Cisco, and I previously worked at a startup, Banzai Cloud, which was acquired by Cisco. At Banzai Cloud we started to work with Istio a while back, around the end of 2018.
The title is Multi-cluster Istio Mesh: Is It Complex or a Piece of Cake? And I'm letting you know right now that I will ask the same question at the end of my presentation as well. You might have some idea about this right now, but I hope that my presentation will alter your thinking about it, at least a little.
So that's the agenda. I actually prepared quite a long session for today, so good luck staying until the end. The agenda is very simple: I'm planning to do a lot of demos today, and I'd like to introduce you to two open source tools that we have developed. So today I'm not going to talk a lot about the actual state of multi-cluster setups in Istio.
Rather, I will talk about what we did to improve it and to automate it for ourselves, but these are open source tools, so for everyone who might want to use them. So I will introduce these two open source tools, then I will briefly talk about the current Istio multi-primary setup and show how to do it in action with our two open source tools, and then I will touch on the primary-remote setup and show it in action with our tools as well.
We actually started to implement this operator a while back, in early 2019. It was open sourced already back then, and at that time work hadn't even started on the official Istio operator, so this one existed before the official one. You might remember it by the name Banzai Cloud Istio operator; since the acquisition we usually refer to it as the Cisco Istio operator.
It is not very straightforward to create and configure multiple gateways with upstream Istio, so at Banzai Cloud we added support for that by introducing a separate custom resource. We call it IstioMeshGateway, and with that custom resource you can very easily create any kind of ingress and egress gateway in a namespace, with the right ownership and the right permissions.
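For illustration, such a CR might look roughly like the sketch below. This is only a minimal sketch: the API group, field names and values are assumptions based on what is said in the talk, so check the operator's documentation for the authoritative schema.

```yaml
# Illustrative sketch of an IstioMeshGateway CR; group, fields and values are assumptions.
apiVersion: servicemesh.cisco.com/v1alpha1
kind: IstioMeshGateway
metadata:
  name: demo-ingress
  namespace: demo
spec:
  istioControlPlane:          # which control plane this gateway belongs to
    name: cp-v115x
    namespace: istio-system
  type: ingress               # or egress
  service:
    type: LoadBalancer
    ports:
      - name: http
        port: 80
        targetPort: 8080
```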
So that's one thing; the other, probably bigger, thing is the multi-cluster support that I'm going to talk about today. Let me also say that Kubernetes operators in general sometimes get the critique that they are just installers. I think this whole multi-cluster support that I'm going to be talking about today is a prime example that the operator pattern can be used for much, much more than just being an installer. And lastly about the operator, in case you are wondering: it is still actively maintained.
Even though it has been around for a long time now, almost four years, it is still actively worked on. For example, this whole multi-cluster support was rewritten from scratch last year, and as you will see, I will already demo with Istio 1.15, which is not even released yet today. So yeah, we are working on it. The next tool is called the Cisco cluster registry controller.
It is a fully generic tool for multi-cluster Kubernetes use cases. Of course, we use it with the Istio operator for the specific multi-cluster use case that I'm talking about today, but other than that it aims to be a fully generic tool, because what it does is synchronize Kubernetes resources between multiple clusters, and we think that can be used for a lot of other use cases as well, not just for Istio.
By the way, if you are interested in this tool, we will have a session dedicated to it at the Open Source Summit EU, so check that out if you are interested. I will just say one thing about it that I think is particularly interesting: this tool is fully distributed. There is no central component, no single point of failure in the system. And, as I mentioned, together with the Istio operator these two tools enable all kinds of nice enhancements for Istio multi-cluster setups.
In this setup we have one single Istio mesh, federated across multiple Kubernetes clusters. Multi-primary means that there is an Istio control plane, practically an istiod, running on both of these clusters, and the different-networks part means that there is no pod-to-pod connectivity configured between the two clusters, which means that pods from cluster one cannot directly reach pods running on cluster two. Rather, they will need to communicate over a dedicated east-west gateway.
To set this topology up according to the upstream Istio documentation, you need to do several steps, which appear here as bullet points. Real quickly: you need to configure the trust relationship between the two clusters; you need to set the networks so that Istio knows whether they communicate over the gateways or not; you need to install the east-west gateways; you need to create the Istio configurations that expose the services between the clusters so that they can be reached from one another; and you also need to enable endpoint discovery, because istiod needs to know about the available endpoints from the other cluster as well.
So: some kubectl commands, applying some Istio operator resources, running some scripts, applying some YAMLs with istioctl, and then you need to do the same for the other cluster as well. Then you need to run some more istioctl commands and apply some resources from one cluster to the other, and then, congratulations, you are ready. What we found is that this is not very convenient, so we wanted to automate it and make it better, and that's what we did.
I have two Kubernetes clusters right here. The first one is called primary one, the second is called primary two; these are running on AWS, by the way. So what do I have installed on these clusters?
It is the Istio operator and the cluster registry controller, the two tools that I've been talking about. And I did one more thing as preparation: I made sure that resources can be synchronized between these two clusters by the cluster registry controller. That was an additional manual step; it's actually pretty easy to do, but I deliberately don't want to go deep into the cluster registry here. I'd like to focus on the Istio-related stuff today.
So these are IstioControlPlane custom resources, the custom resource of our Cisco Istio operator. Based on this, the operator creates the Istio control plane, meaning istiod and any other resources that might be needed, according to the configuration provided. These two CRs are almost identical; there is just one difference between them, which I will highlight in a moment.
If we take a look at the first one: I also just have to say that in the Istio docs, as you could see, the very similar CR is called IstioOperator, which is honestly very weird to me, because based on that resource you are creating an Istio control plane, not an Istio operator. So even when you are not installing Istio with the official Istio operator, you need to create that resource, and it's not even named after what you are creating.
You are creating an Istio control plane. So that naming is just very weird to me, and that's exactly why we didn't use it; we went with the IstioControlPlane name instead. Sorry about that little detour, so let's get back here. Of course this resource has a name and a namespace, and then it has a version. As I mentioned, it's already using the latest Istio 1.15 beta release that's out there, and it's using ACTIVE mode, which is essentially the same as 'primary' in the multi-primary terminology.
Istio just kept changing these terms over the years, over the releases; it was not always multi-primary and primary-remote, they were called differently, and we simply did not follow that. We went with the active/passive terminology instead. As you will see, there will be quite a few of these differences. For example, another difference is this mesh expansion setting, which is basically the east-west gateway in Istio's terminology.
We just went with the mesh expansion name; basically it says to install the east-west gateways, and there are some other settings which are needed for the trust relationship. I will talk about that later today. And what's the difference between these two CRs? The network is set differently, network one versus network two, between the two, and that is because this is the different-networks setup: there is no pod-to-pod connectivity configured between these two clusters.
They will instead communicate over the east-west, or mesh expansion, gateways when talking to each other. All right, so this is what I did: I applied two CRs to these two clusters. That's all I did, and if you remember, you would normally need to do several manual steps to make this work; the thing is, all those manual steps were done automatically in the background by these two open source tools.
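To make the demo concrete, the two applied CRs could look roughly like this minimal sketch; the API group and field names are assumptions mirroring the description above (version 1.15, ACTIVE mode, mesh expansion gateway enabled, different network names), not the operator's exact schema.

```yaml
# Illustrative sketch of the IstioControlPlane CR applied on primary one.
apiVersion: servicemesh.cisco.com/v1alpha1
kind: IstioControlPlane
metadata:
  name: cp-v115x
  namespace: istio-system
spec:
  version: 1.15.0
  mode: ACTIVE               # "active" corresponds to "primary" in upstream terms
  networkName: network1      # the CR on primary two uses network2; everything else is identical
  meshExpansion:
    enabled: true            # deploys the east-west ("mesh expansion") gateway
```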
Let me prove that to you. First of all, the easy part: of course, there is an Istio control plane running on both of these clusters. Then there are the mesh expansion gateways; these gateways are running on both of these clusters as well, and of course they have external IPs so that they can be reached from each other.
That is the easier part. Another thing we need to do to make this multi-primary setup work is to enable endpoint discovery between the two clusters, so the istiod running on this cluster needs access to the other cluster as well.
Let me take a look at the secrets on these clusters. As you can see, there is this secret right here, whose name is basically the cluster name and the Istio control plane resource name put together. If you take a look at this resource, you will see that there is something base64-encoded right here in the secret, so let me just decode that for us, and what we will see is that this is actually a kubeconfig.
This is a kubeconfig for this first cluster right here. What happens is that there is an Istio operator running on this cluster, the Istio control plane is created based on the CR, and from its Kubernetes API server access it generates a kubeconfig like this for the cluster itself. This kubeconfig, by the way, uses an istio-reader user, so it has very limited, read-only capabilities, just enough to be able to do the Istio service discovery.
And if you remember what I said about the cluster registry, I said that it can synchronize Kubernetes resources between clusters. So we are going to use exactly that: there is a custom resource called ResourceSyncRule, and there is a ResourceSyncRule specifically for this multi-cluster secret that I'm going to show you today.
It synchronizes secrets of this type: if I check the secret that I just showed you, it has this istio-reader secret type, which is right here, so the rule synchronizes that kind of secret, and it synchronizes secrets with this label. This is basically just to make sure that we only synchronize secrets belonging to the right Istio control plane version.
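A ResourceSyncRule of the kind described here might look roughly like the following sketch; the group/version and the matcher fields are assumptions, so treat it only as an illustration of "sync Secrets of this type, carrying the right control plane label", and check the cluster-registry-controller docs for the real schema.

```yaml
# Illustrative sketch only; field names are assumptions, not the authoritative schema.
apiVersion: clusterregistry.k8s.cisco.com/v1alpha1
kind: ResourceSyncRule
metadata:
  name: istio-reader-secret-sync
spec:
  groupVersionKind:           # what to sync: plain v1 Secrets...
    kind: Secret
    version: v1
  rules:
    - match:                  # ...restricted to reader secrets of the right control plane
        - labels:
            - matchLabels:
                istio.io/rev: cp-v115x.istio-system
```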
Yeah, please keep going. Okay, should I be real quick and wrap up in 10 minutes, or should we do something else? I just clicked on it; I think it should be fine now, but yeah, can you please try to just, like, reload it? Sorry, I didn't catch you, what did it say for you? It says for me that it is counting down the time left, like nine minutes left right now, and it also said that we should upgrade the free Zoom account, so I don't think that's possible; there is something with the configuration of Zoom today. Let's just keep going with this one, and if not, we can restart the session just to finish it. Okay, all right, let's do that, we'll see how it goes. All right. So, as I was saying, we are adding this mutation, this label, to the secret as well; that is needed for istiod, and I will show that in a second.
As you can see, the same secret is already on this cluster as well, and it was synchronized here by the cluster registry controller because of the ResourceSyncRule that I just explained. If we check this secret, it is the same as the one we saw on the other cluster; as you can see, it only has this one additional label compared to the one on the other cluster.
So this secret is right here, and istiod needs to pick it up. How will istiod do that? If we check the istiod logs, you can see that from this secret it picked up this cluster, and based on that it's able to do service discovery.
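The synced reader secret follows the general shape of an upstream Istio "remote secret"; the sketch below is illustrative (the operator-generated secret may differ in name, type and labels), but it shows the important parts: the istio/multiCluster label that istiod watches for, and a kubeconfig for the peer cluster as the payload.

```yaml
# Illustrative only: roughly the shape of a remote secret enabling endpoint discovery.
apiVersion: v1
kind: Secret
metadata:
  name: istio-remote-secret-primary-1     # cluster name plus control plane name in the demo
  namespace: istio-system
  labels:
    istio/multiCluster: "true"            # istiod loads secrets carrying this label
stringData:
  primary-1: |
    # the decoded payload is a kubeconfig for primary one's API server,
    # authenticated as the limited istio-reader user
    apiVersion: v1
    kind: Config
    # ...server address, CA data, reader credentials...
```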
All right, this was about the service discovery part. Next up is trust.
The idea, and I will show it in a second, is that instead we share the root certificate of this cluster with the other cluster and the other way around, and basically tell them: please allow certificates signed by this CA to talk to this cluster, so that mutual TLS traffic can work.
I will show you how this works in a second. Why did we go with this federated-trust-based approach? Well, for one, with this it can be automated.
It's much easier to set up, because, as I showed in the 'before you begin' section, with upstream Istio, even before you start setting up your Istio control planes, you need to configure the certificates for both of your clusters, or however many you have. With this solution I didn't do anything like that; the clusters can have totally separate root CAs and are still able to communicate with each other. It can also be very helpful, for example, when you need to do certificate rotations.
So that's why we use this federated-trust-based approach. And how does it work? Let me show you the root certificates of both of these clusters. As you can see from the end of these certificates, they are different on the two clusters, and still, if you believe me, mutual TLS communication works between the workloads in these clusters. But how? What we do is that there is a resource.
There is another config map, the istio config map, on the clusters, and in its mesh config there is this field, caCertificates. This is an upstream Istio feature, it is not coming from us. In the caCertificates mesh config field you can specify additional certificates that will be trusted by this cluster for communication, and as you can see, this one right here is the same as that one.
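In the istio configmap, the relevant mesh config fragment looks roughly like this (certificate contents shortened); caCertificates is the upstream MeshConfig field mentioned above, listing extra trusted root certificates.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    caCertificates:
      - pem: |
          -----BEGIN CERTIFICATE-----
          ... root certificate of the peer cluster ...
          -----END CERTIFICATE-----
```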
So basically, what we do is make sure that this certificate gets over here somehow, and with that it becomes possible for the mutual TLS communication to work. But the big question is: how did this certificate from this cluster get here?
As you might imagine, we need some cluster registry magic again to get it from here to there. What we do, basically: let's take a look at the IstioControlPlane resource that we created earlier. We saw the spec of this resource, which I created, but in the status the operator writes this root certificate of the cluster, and we synchronize this resource to the other cluster. That synchronization is pretty unique, I would say.
It synchronizes this IstioControlPlane resource by its namespace and name, but there is a very interesting mutation that we use, which is actually my favorite: we change the kind of the resource. So on this cluster it is an IstioControlPlane resource, but on the other cluster it will actually be a PeerIstioControlPlane resource, right here.
It has the same spec and the same status, but its kind is different, and as you can see, its name is different as well, because we append the cluster name to the end, like this. This is just to avoid collisions: if the same resource with the same name were imported from other clusters, they won't collide this way.
This resource contains the root certificate that we need, and from here the local Istio operator on this cluster sees it and, based on that, sets it into that caCertificates mesh config field that I just showed you. That's how the federated trust solution works for us.
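Put together, the synced-and-mutated resource on the peer cluster could look roughly like the sketch below. The changed kind and the appended cluster name reflect what is described above; the exact status field carrying the root certificate is an assumption.

```yaml
# Hypothetical illustration of the mutated copy on the peer cluster: same spec and
# status as the source IstioControlPlane, different kind, cluster name appended.
apiVersion: servicemesh.cisco.com/v1alpha1
kind: PeerIstioControlPlane           # kind is rewritten during synchronization
metadata:
  name: cp-v115x.primary-1            # source CR name plus source cluster name
  namespace: istio-system
spec:
  # ...copied verbatim from the source IstioControlPlane...
status:
  # the status carries, among other things, the source cluster's root certificate;
  # the exact field name below is an assumption
  caRootCertificate: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
```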
So, as I was saying, the last thing I wanted to show regarding this multi-primary setup: we have set everything up for the multi-primary topology, and I kind of showed most of the interesting stuff about how we did that.
But after you set this multi-primary setup up, it's usually nice to have the Istio configurations, the Istio CRs like destination rules, virtual services and so on, synchronized across the multi-primary clusters, so that you have the same Istio configuration for the workloads on both. To do that, you can either do it manually, or you can use some external tools, GitOps tools for example, or there are some other tools that can be used for something like this, not necessarily for these Istio CRs, maybe for the services themselves.
Here I just created one; it's already there, and if you check, it's already created on the second cluster as well. If we take a look at the resource details, you will see that this is a simple PeerAuthentication, and on the other cluster there is an additional label, the cluster-id label, which shows us that it was actually synchronized there by the cluster registry controller.
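As an example of the kind of Istio configuration being synced here, a plain upstream PeerAuthentication like the one below is enough (names and values are illustrative):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: demo
spec:
  mtls:
    mode: STRICT    # require mutual TLS for workloads in this namespace
```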
There is a ResourceSyncRule for this one as well; we can take a look at it. It synchronizes the resource based on the labels on it: it synchronizes the CRs either if the label matches the Istio control plane version, or if there is no label regarding the revision on that particular custom resource at all. This one is a PeerAuthentication, but of course we can do the same for any other Istio custom resource. So again, this is additional stuff.
After you set up the multi-primary topology in Istio, you can either use those external tools for this, or, with our open source tools, this comes for free as well. So that's all I wanted to show about the multi-primary setup, and I'd like to talk about the primary-remote one as well. That will be much shorter.
Hopefully; now it will be much shorter for sure. So the primary-remote on different networks setup has a lot of similarities to the multi-primary one. The difference is that there is no Istio control plane, which is essentially an istiod, on the remote side; it's not here on this cluster.
Let me show you what's in there exactly. It's a little shorter than the previous ones. As you can see, that's all that I applied, and the setup should be ready automatically in the background as well. So it's yet another IstioControlPlane resource with the same name, the same namespace and the same Istio version.
The difference is that this is not an active but a passive (or 'remote', in Istio's terminology) control plane, and it is on yet another network, network three, because there is no pod-to-pod connectivity configured, so the clusters will communicate over the east-west or mesh expansion gateways, which are enabled here as well, and we are enabling one more mesh expansion related setting too. So that's what I did.
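The remote-side CR could look roughly like this sketch, mirroring the earlier active example; again, the field names and values are assumptions, not the exact schema.

```yaml
# Illustrative sketch of the remote-side CR described above.
apiVersion: servicemesh.cisco.com/v1alpha1
kind: IstioControlPlane
metadata:
  name: cp-v115x
  namespace: istio-system
spec:
  version: 1.15.0
  mode: PASSIVE              # "passive" corresponds to "remote": no istiod on this cluster
  networkName: network3      # yet another network, so traffic goes through the gateways
  meshExpansion:
    enabled: true
```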
And if I take a look at the pods, you will see that there is a sidecar injector running on this cluster, which I need to explain.
So, at least the last time I checked, the current community Istio setup for this primary-remote scenario actually installs an istiod pod on the remote side as well. We just talked about how remote means that there is no Istio control plane running on the remote side, but still, what they install there is an istiod, even if not a full-fledged one.
We only install a sidecar injector pod, and by the way, there is no standalone sidecar injector Docker image available anymore since the istiod merge, so basically we forked it and maintain that fork ourselves for the sidecar injector. So what happens is that we install the sidecar injector, which is of course responsible for injecting the sidecar proxies into the pods, and the trust part we solve differently; I will show that. So we have the sidecar injector, and of course we have the mesh expansion gateway installed here as well.
The next thing I mentioned is that the proxies running on this cluster need to reach the istiods on the other clusters to get their Envoy configurations from them.
Now let's see how that works. Let me take a look at the services: there is this istiod service. This service is here so that Kubernetes can do the DNS resolution for this name, and as you can see, it is a headless service with no cluster IP. That is because there is no istiod running on this cluster, right?
So instead of letting Kubernetes populate an istiod service like this with the pods in this cluster, which it can't, because there is no istiod pod here, the operator actually creates the underlying endpoints, the Kubernetes Endpoints for this resource, itself.
These Endpoints have these addresses set. And what are these addresses? If we check the east-west gateways, or as we call them, the mesh expansion gateways, on the other two clusters, you will see that these four IP addresses are the same, or should be the same, as these four right here. This is because when a sidecar proxy needs to fetch its Envoy config, it calls the istiod.istio-system.svc.cluster.local hostname; Kubernetes resolves this, sees these Endpoints under it, and the proxy will call these addresses, which direct the traffic to this cluster, for example, or to the other one.
And of course, on the other side there is some Istio magic with a gateway and a virtual service, which make sure that if traffic targeting istiod comes in here, it gets directed to the istiod pods running on this cluster.
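So on the remote cluster, what effectively exists is a headless istiod Service whose Endpoints the operator fills with the primaries' gateway addresses; roughly like the following sketch (the port and the IPs are illustrative placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: istiod
  namespace: istio-system
spec:
  clusterIP: None            # headless: there is no istiod pod here to select
  ports:
    - name: https-dns
      port: 15012
---
apiVersion: v1
kind: Endpoints
metadata:
  name: istiod
  namespace: istio-system
subsets:
  - addresses:
      - ip: 203.0.113.10     # external IPs of the primaries' east-west gateways
      - ip: 203.0.113.11
    ports:
      - name: https-dns
        port: 15012
```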
So that's the idea, and that's pretty great, but how did these IP addresses get here? Well, we have that nice peer Istio control plane resource that we talked about already today, which contained these certificates in its status; but the certificates are not the only thing stored in the status. There is other stuff as well, for example the gateway addresses: for this primary one cluster, for example, you can see that these two addresses are the same.
So what happens is that the operator running on the first cluster locally reconciles these gateway addresses into the IstioControlPlane resource's status. Then, with the ResourceSyncRule, the cluster registry controller copies this resource over here, gateway addresses included, and based on those addresses the operator running locally on this cluster fills the Kubernetes Endpoints resource with them, and of course with the other ones as well, for the other cluster right here.
If we check, for example, this istiod, it has this address right here, and as you can see, we have the same address in this resource's status as well. The network is set in these Istio control plane resources, so the Istio operator is aware of which network each cluster is on.
If it sees that the network is the same, then in the Kubernetes Endpoints resource it will not set the gateway addresses; it will instead set the direct istiod address, and that way we can also support the flat network setup as well. All right.
Speaking of these addresses, let me show you the istio configmap one more time, because there is another interesting thing in it, which is this mesh networks setting right here.
This is basically there to let Istio know what the networks are and where to route in which case. Basically, it is a list of networks containing, for each network, the registry it comes from and the IP addresses where that network is accessible, and this list is also populated based on these peer Istio control plane resources. So in these resources we have the networks available.
We have this, which is basically the cluster name, and the addresses where it is accessible, as we just saw. And this is again fully automatic: the operator knows this from this resource and fills it out for Istio.
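The meshNetworks fragment in the istio configmap follows the upstream format; here is a rough, illustrative sketch with invented network and cluster names and addresses:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  meshNetworks: |
    networks:
      network2:
        endpoints:
          - fromRegistry: primary-2        # endpoints registered by this cluster
        gateways:
          - address: 203.0.113.11          # that cluster's east-west gateway address
            port: 15443                    # mTLS auto-passthrough port
```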
Here we are using the common-trust-based approach instead of the federated trust one; we only used the federated trust for the multi-primary setup. And how did this resource get here? You can already tell by the label that it was synchronized here by the cluster registry controller as well.
There is a ResourceSyncRule for this as well, this one right here, and what it does, as you can see, is very simple: it synchronizes the configmap with this name to every namespace, and that's how, on the remote side, we have the trust available as well for mutual TLS communication between these two clusters.
One last thing about this, really the last one: the idea is basically that the remote side should be as lightweight as possible, so ideally it would not even have this sidecar injector pod here, only the Istio proxies running next to the application containers. Right now, official Istio has istiod there for the CA support and the sidecar injection.
We only have the sidecar injection there, because the CA part is solved with the cluster registry controller. And by the way, we also have a proof-of-concept solution where this sidecar injector is not even here; rather, the mutating webhook on this cluster is served from the other cluster, so that the sidecar injection can still happen on this side without any problem. But that's maybe for another time. All right.
So basically, with these tools you can very easily get any number of primary and any number of remote clusters into the same mesh, and again, let me remind you that I said 'almost fully automatically', because there was that additional manual step needed for the cluster registry to be able to synchronize resources between the clusters, which I did not show. But other than that, I honestly have to say that we are very happy with this solution.
These components keep running in the system and keep reconciling the clusters, so if anything changes, either by accident or, you know, it could even be a malicious user, the operators keep reconciling it, and they will keep the multi-cluster state stable. It's working pretty great for us.
Another great thing about the operator pattern is that it supports a very declarative, GitOps-style workflow: you basically install the operators on your clusters, push these IstioControlPlane CRs, and based on that you quite easily get this whole setup created for yourself as well.
And lastly, I mentioned the Istio config synchronization. After you set everything up, and by the way, as I mentioned, this works for the multi-primary, for the primary-remote, and for the flat network and different network setups as well, for every one of these.
After all of this, you would want this Istio config synchronization feature, and with these two open source tools you get that for free as well. So, as I warned you at the start, and it was a very long time ago, you might not even remember it, but I will ask you the same question again: is Istio multi-cluster complex, or a piece of cake? Maybe I could change your mind a little. And if you guys let me, I would do one marketing slide as well.
So if you don't want just one piece of the cake but rather the full cake, then you should try Cisco's product, Calisti, which, by the way, you can actually try for free. In this product we basically have a CLI tool with which all these multi-cluster setups are fully automated, so with one CLI command you can attach and detach Kubernetes clusters to and from an Istio mesh very easily and conveniently. And this product is much more: it comes with a UI.
As you can see, its topology view is fine-tuned and optimized for multi-cluster setups, and it has many more features. So if you are interested in that, go to calisti.app and check it out, you can try it for free. And if you are interested in all these open source technologies, the Istio operator and the cluster registry, you can find them on GitHub. Please create issues or PRs, or just reach out to me; there is my email address as well.