From YouTube: Istio August Meetup / Istio Multicluster for Zero Downtime Migration - by Zufar Dhiyaulhaq
Description
In this meetup, Zufar Dhiyaulhaq talks about migrating workloads, achieving disaster recovery, and implementing active-active in Kubernetes between cloud providers. In this talk, you will learn about the Istio multicluster pattern implemented at GoPay.
First of all, thank you to the Istio community for the opportunity to let me share this presentation. Let me introduce myself: I'm Zufar, a cloud engineer from Gojek, and today we will discuss how we use Istio multicluster for zero-downtime migration, and I can probably also share how we route active-active traffic between Kubernetes clusters. So, in this meetup, I hope we can learn some patterns about this setup, and I also really want to hear feedback from the Istio community about it. So let me give you an introduction.
From our past experience, migration is always painful, for example from virtual machines to Kubernetes, or between Kubernetes clusters. As we know, microservices actually create dependencies on each other: for example, when a single service is migrated or goes down, it can affect multiple services.
Our Consul cluster is a very old version, and no one wants to touch the running Consul, no one wants to upgrade it, and it affects us on a day-to-day basis.
So let me introduce GoPay. GoPay is like the money side of Gojek. In GoPay we have more than 300 microservices, more than 115 million API calls each week, and more than 3,000 deployments every week, spread across more than 10 Kubernetes clusters.
So we are actually in progress of moving all of the workloads to Istio's mesh, away from the Consul solution that we showed before, and this migration progress will be presented by my colleague: he will present this migration story at KubeCon North America 2021. So I probably need to give a short introduction to Istio multicluster.
I believe some people here already know the concept of Istio multicluster, but I will explain some of it. We actually already implemented the old multicluster setup back in Istio 1.6, but luckily no one, neither the dev teams nor the systems teams, used that setup, and then Istio introduced a new concept for multicluster. In this concept we must understand at least four objects: single or multi-mesh, single or multi-network, single or multi-control-plane, and the trust model.
In a single mesh, services in each cluster actually refer to the same service: they do the same thing, since the clusters share the same mesh. Multi-mesh is actually the opposite of single mesh, so every service in this concept is different. For example, if you have a service B in cluster west and a service B in cluster east, with multi-mesh they refer to different services, even though the name is the same or similar.
A
Also,
we
have
a
network
concept.
The
first
thing
is
single
network,
so
in
single
network
setup,
so
every
workload
or
port
or
deployment
or
yeah,
I
see
it
workload.
A
Every
workload
in
this
mesh
can
communicate
each
other
directly
without
the
gateway.
So,
for
example,
you
have
a
bot
or
workload
in
net
kubernetes
a
and
what
in
kubernetes
b,
they
can
directly
communicate
each
other
without
the
gateway.
A
The
multi-network
is
actually
opposite
of
single
network
workload
between
kubernetes
cluster
in
the
network
is
they
cannot
communicate
each
other,
so
they
need
to
communicate
via
gateway,
but
this
concept
have
additional
advantage
like
you
can
use
overlapping
id
between
kubernetes
cluster.
So
you
do
not
need
to
maintain
the
ip
between
kubernetes
cluster.
They
can
use
overlapping
ip
and
we
can
scale
the
network
separately
between
this
kubernetes
cluster.
Actually, multi-network is a concept that I personally like more than single network. Istio actually automatically handles the communication between the clusters via the east-west gateway, so you have a specific path for this communication, and it ensures security, since the traffic is automatically encrypted with mutual TLS: communication from service A to service B actually goes over mutual TLS, so you do not need to worry about it. We also have the control-plane concept.
If you have a single-mesh setup, you can set up a control plane in each cluster, for example in cluster west and cluster east, or you can set up a centralized control plane in only a single cluster, in this example cluster west.
There is also another concept, which is multi-control-plane. In multi-control-plane you will have a control plane in each cluster, and the control planes communicate with each other via the Kubernetes API. I prefer this, because the clusters stay independent when something goes wrong with the other cluster: for example, if something goes wrong with cluster 1, cluster 2 can keep running on its own.
You also need to have the same trust between the multiple clusters. We can achieve that by using the same root CA across the clusters and installing intermediate CAs, signed by that root, in cluster 1 and cluster 2.
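As a sketch of that trust setup: Istio's plugin-CA flow reads a secret named `cacerts` from the `istio-system` namespace at install time. Assuming per-cluster intermediate certificates have already been generated from one shared root (the file names follow the Istio docs; the `cluster1/` directory is a hypothetical path), the secret could be created like this in each cluster:

```shell
# Run once per cluster, before installing Istio, so istiod signs workload
# certificates with an intermediate CA that chains to the shared root.
kubectl create secret generic cacerts -n istio-system \
  --from-file=cluster1/ca-cert.pem \
  --from-file=cluster1/ca-key.pem \
  --from-file=cluster1/root-cert.pem \
  --from-file=cluster1/cert-chain.pem
```

With the same `root-cert.pem` present in both clusters, workloads in cluster 1 and cluster 2 can validate each other's mutual-TLS certificates.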
So, as far as I know, for the prerequisites of this new multicluster setup, it is better to have the DNS proxy enabled, for simplicity; it is available since Istio 1.8. And on the Kubernetes nodes we must tag each node with the topology region and topology zone labels. I believe all of the cloud providers already do this automatically, but if you install Kubernetes yourself, for example with your own setup, you probably need to set the labels yourself.
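A minimal sketch of those prerequisites: the DNS proxy can be enabled mesh-wide through the IstioOperator config, as in the Istio docs:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Sidecar DNS proxying, available since Istio 1.8.
        ISTIO_META_DNS_CAPTURE: "true"
```

For nodes outside a cloud provider, something like `kubectl label node <node> topology.kubernetes.io/region=region1 topology.kubernetes.io/zone=region1-a` (placeholder region and zone values) adds the labels mentioned above by hand.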
So, based on those concepts, I mean the mesh, the control plane, and the network: we use a single mesh with multi-network, and a multi-control-plane model, which is multi-primary, with each cluster on a different network.
We went with this because it is the easiest way to implement multicluster: we do not have to think about workload-to-workload connectivity between the clusters, so we do not need to set up a VPN or anything like that; we just use the east-west gateway. Multi-primary control planes also ensure that there are no dependencies between the clusters themselves.
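A hedged sketch of that choice, following the Istio multi-primary, multi-network install docs (the mesh, cluster, and network names are placeholders): each cluster gets its own primary istiod, the same `meshID`, and a distinct `network`:

```yaml
# Cluster 1's install profile; cluster 2 would use the same meshID but
# clusterName: cluster2 and network: network2.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1            # single mesh spanning both clusters
      multiCluster:
        clusterName: cluster1  # unique per cluster
      network: network1        # different per cluster (multi-network)
```

Per the docs, each cluster additionally runs an east-west gateway, and the clusters exchange API-server credentials with `istioctl create-remote-secret` (an experimental `x` subcommand in older releases) so each control plane can discover the other cluster's endpoints.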
Istio automatically does that, and when we are already confident to migrate, we can just delete the service in cluster 1, and the traffic arriving at the gateway in cluster 1 will automatically be forwarded to cluster 2. At this stage, if something happens, we can just reinstall, or roll back, the hello-world service in cluster 1.
If something happens in this case, we can just roll back. But if we are already confident to migrate this one, we can just install the same gateway in Kubernetes cluster 2 and redirect the users, or the DNS record, to point to the gateway in cluster 2, and the migration is very easy: the user does not need to change the domain, there is no need to change anything, and we can just decommission cluster 1 once all services are already relocated. So let me give you a demo.
Okay, so we will only have the hello-world v1 example in cluster 1, and no hello-world v1 in cluster 2; there it is terminating. Let's curl the service; this is the IP of the public gateway in cluster 1.
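The demo above can be sketched roughly like this (the kube context names, namespace, and gateway IP variable are assumptions; `helloworld` is the sample app shipped with Istio releases):

```shell
# Deploy hello-world v1 only in cluster 1 (namespace labeled for injection).
kubectl --context=cluster1 -n sample apply \
  -f samples/helloworld/helloworld.yaml -l version=v1

# Curl through cluster 1's public ingress gateway.
for i in $(seq 1 5); do
  curl -s "http://${GATEWAY_IP}/hello"
done

# Migration step: remove the workload from cluster 1. With a replica
# running in cluster 2, the same curl keeps succeeding because the
# east-west gateway forwards the traffic across clusters.
kubectl --context=cluster1 -n sample delete deployment helloworld-v1
```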
So, yep, sorry, I think I already demoed this one; actually I created a video, but yeah. We can also implement this for an active-active flow. For example, in Kubernetes cluster 1 we have 100 percent of the traffic going to that cluster, and we want, for example, 10 or 15 percent of the traffic to go to Kubernetes cluster 2.
We could probably also set up a DNS record with resolution based on region: if you are closer to cluster 1, you go to cluster 1; if you are closer to cluster 2, you go to cluster 2. And Istio multicluster can also be used for failover at the cluster level; so, for example, you implement ninety percent of traffic to one cluster and ten percent to the other.
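A sketch of such a 90/10 split using Istio's locality load-balancing distribute rules, assuming the clusters sit in regions `region1` and `region2` (the host and region names are placeholders; outlier detection must be configured for locality load balancing to take effect):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld-distribute
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
        - from: "region1/*"
          to:
            "region1/*": 90   # keep most traffic in the local cluster
            "region2/*": 10   # shift a slice to the other cluster
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```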
For example, when Kubernetes cluster 1 goes completely down, we will actually have downtime if we do not do something at the user level, like implementing a health check on the DNS record or something like that. But the migration, or the failover, is pretty simple: we only need to create the same gateway in cluster 2 and change the DNS record to point to cluster 2, and then the users can access the service again.
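For the cluster-level failover flow, Istio's locality failover setting expresses the same idea inside the mesh (again with placeholder host and region names): traffic prefers the local region and only shifts to the other cluster's region when the local endpoints are ejected as unhealthy:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld-failover
spec:
  host: helloworld.sample.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        failover:
        - from: region1   # when region1 endpoints are unhealthy...
          to: region2     # ...fail over to region2
    outlierDetection:     # required so unhealthy endpoints get ejected
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```

This handles in-mesh traffic; as noted above, traffic entering through cluster 1's public gateway still needs something like DNS health checks at the user level.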