From YouTube: Automated Disaster Recovery for Openshift Workloads
Description
Join Michelle DiPalma and guest Annette Clewett for a demo of the new disaster recovery features in OpenShift Data Foundation 4.11.
Host:
Erik Jacobs
Guests:
* Roderick Kieley
* Jared Sprague
* Jason Dudash
A
What we're going to do is continue, because this is not the first session you've done with me, and we're going to keep talking about the evolution of disaster recovery for OpenShift. We're getting better at it and we're doing more, with automated features to make it easier.
B
So we're lucky that just yesterday the latest version of OpenShift Data Foundation was released, which includes all the new disaster recovery operators.
B
Yeah, to go with OpenShift 4.11, which recently released. So we're going to be looking at the newest capabilities, and hopefully people will find it exciting. Fantastic.
B
Let me go ahead and just start with some slideware, and then we'll go from there. Let me know when you can see it full screen. Full screen, perfect. Okay.
B
How do we trust that this new disaster recovery capability is going to work, and, if we have regulatory requirements, that it's going to meet those requirements? This is a relatively new solution for both OpenShift and the operators that have been developed, and it is a combination solution. We have dependencies on different operators like Advanced Cluster Management and OpenShift Data Foundation, and then, outside of OpenShift, Red Hat Ceph Storage. All of those things have to come together at the right time to produce the solution.
B
By doing this, if we are successful in doing disaster recovery for containers on an OpenShift platform, then we will have a way of actually failing over to a new cluster environment and being able to provide service. As we'll see, there are some measurements of how well we do that, and we can make sure that, in the case of a problem, we have an alternative, and it creates a methodology. If we have regulatory requirements, we can provide the methodology so that we meet those requirements.
B
So the goals for disaster recovery are the way we measure it: are we actually meeting the requirements we set up before we even have a disaster recovery plan? Recovery point objective (RPO) is a measure of how much data you can lose, and recovery time objective (RTO) is how long the application or service can be unavailable.
B
For the solution overview, we have created two flavors. They're similar in how you configure them, but they have some differences. We've called one Regional Disaster Recovery and the other Metro Disaster Recovery. Regional DR is a solution where we pair OpenShift clusters, and the distance between them can be quite large, meaning we don't really have a latency limitation between the clusters.
B
We're going to use a replication strategy based on interval replication: say we replicate every two or three minutes, we'll set that, and then the data will essentially be sent to the opposite cluster, the peer cluster or the failover cluster, on that interval. In that case, for our recovery point objective:
B
We could have as much outstanding data as our replication interval. So if we haven't replicated the data and we have a failure, we would lose the data that had not been replicated. Metro DR is the solution where we use a storage plane, provided by Red Hat Ceph Storage, that is outside of OpenShift. We still have the idea of a failover cluster, so we create sets of two OpenShift clusters, connect them to this external storage plane, and then, when there's a failure, we can fail over. The big difference is that this solution can provide a recovery point objective of zero, because it's a synchronous solution, so there's no outstanding data that has not been replicated. It does require some additional steps to do the failover, so you would still have a recovery time objective you would need to meet, meaning how long the application is out.
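For readers who want to see what those two flavors look like as configuration, the pairing and the replication interval end up expressed in a DRPolicy custom resource on the hub cluster. The following is a minimal sketch based on the ODF/Ramen DR APIs of this era; the API group, field names, and cluster names are assumptions for illustration, not taken from the session.

```yaml
# Hedged sketch of the two DR flavors as DRPolicy resources on the hub cluster.
# API group, field names, and cluster names are assumptions -- check the CRDs
# installed by your ODF version before using.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: regional-drpolicy
spec:
  drClusters:               # the paired managed clusters, no distance limit
    - cluster-east
    - cluster-west
  schedulingInterval: 5m    # async replication interval; RPO can be up to 5 minutes
---
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPolicy
metadata:
  name: metro-drpolicy
spec:
  drClusters:               # the paired clusters, within ~10 ms RTT of each other
    - cluster-a
    - cluster-b
  # no schedulingInterval: replication is synchronous through the external
  # Ceph cluster, so the recovery point objective is zero
```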
B
Yeah, and I'll show a diagram in a bit here. But did you have any other question?
A
No, no.
B
Yeah, if it's not clear by then, certainly let me know. Okay, so what components within Red Hat products do we use to do this? It's required that we have Red Hat Advanced Cluster Management.
B
This is really allowing us to use the automation available in that operator and its custom resources. And then all of the disaster recovery that we're developing, the automation and the custom resources, is coming in through OpenShift Data Foundation. Like I said, we just released OpenShift Data Foundation 4.11 yesterday. And then, used by OpenShift Data Foundation, as well as available as a standalone offering, is Red Hat Ceph Storage. Ceph storage is used in OpenShift Data Foundation whether it's on OpenShift itself or whether it's external to OpenShift.
B
So those are the three. And then, within the disaster recovery capability that you get with OpenShift Data Foundation, we're going to have three additional operators: the DR Hub operator, the DR Cluster operator, and the OpenShift Data Foundation Multicluster Orchestrator. It says "regional DR only" on the slide; I should have fixed that, that was for OpenShift Data Foundation 4.10.
B
The Multicluster Orchestrator is now going to be used for both Metro and Regional Disaster Recovery, and it does a lot of the automation and setup, so that operator is used in DR for all solutions. Let me go back real quick.
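As a rough illustration of how that hub-side orchestrator might be installed from OperatorHub, an OLM Subscription could look like the sketch below. The package name, channel, and namespace are assumptions, not taken from the session; verify them against the OperatorHub entry in your environment.

```yaml
# Hypothetical OLM Subscription for the ODF Multicluster Orchestrator on the
# hub cluster; package name, channel, and namespace are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-multicluster-orchestrator
  namespace: openshift-operators
spec:
  channel: stable-4.11
  name: odf-multicluster-orchestrator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```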
B
This still requires that you use Ceph volumes, so you're going to be using the storage classes that are provided with ODF. One restriction today with Regional DR is that, for support, we only support block volumes, what we call the Ceph RBD volumes. That will be changing, and we already have a tech preview solution for RWX, or CephFS, volumes, but by 4.12 all Ceph modes, RWX and RWO, will be supported. For Metro DR there is no such restriction; it supports all volume modes. Just showing here quickly the expectations for the RTO and RPO that we talked about earlier.
B
Recovery time objective and recovery point objective: Metro Disaster Recovery is a good solution if you cannot afford to lose data, but it does have a distance limitation, because, just like any synchronous communication, it will have a distance limitation if you want your write latency to be usable. Right now we're saying the two OpenShift sites that would be paired together for Metro DR need to be no more than 10 milliseconds round-trip time apart from each other. Okay.
B
Yeah, well, 10 milliseconds, in my experience of how the bits might travel, could certainly be over 500 miles apart in the US. So it's a limitation, but it's not like they have to be across the street from each other. Great. Right, so, looking a little more into Regional Disaster Recovery: again, there are no distance limitations for this solution.
B
The two sites can be as far apart as your environment requires, and it does, again, use ACM in terms of the automation. The GTM at the top of the diagram is global traffic management; it might be global load balancing, it might just be load balancing, but it does require an external load balancer. If you have connections coming inbound to one cluster and that cluster goes away, then you need your load balancing to switch over.
B
So you shouldn't have to create one yourself; if you're trying to test the solution, you could use something as simple as HAProxy, put it on a virtual machine or a server, and just set it up. It's relatively simple if you needed it for testing. One point here: though the diagram shows the left-hand side as the active and the other one as the passive, that's really at a per-application level.
B
So you could certainly have some of the applications on cluster two in an active state and have their failover cluster be cluster one. It's not required with this solution that all applications are active on the same cluster. They can be active on the other cluster and then have their failover cluster just be the opposite.
B
This lets you, if you want, have resources used on both clusters and not just have a cluster sitting there doing nothing: you could have some applications active on one and some on two, and you would be using your resources more efficiently. The only thing you have to make sure of is that, if there's a failover, you have enough headroom on either cluster to run all the applications. Did you have a question, Michelle?
A
No, I think I was just going to say that's great. I don't know if I'll have time to cover this here, but let's say you split up your applications nicely and then something happens, you go to fail over, and you don't quite have enough resources. Can you scale? I was just wondering: do you always have to have those resources sitting around, or are you able to scale up as needed? But we can talk about that later.
B
I would think, you know, I don't know that much about the area of OpenShift and scaling up resources, but if you're using IPI and machine sets, I certainly think it would be very possible to scale on demand. But I don't know if that would be sort of a separate step.
B
You would scale, essentially, your nodes and your resources so you had more places to schedule, or could you do it from the application point of view and have it actually scale and then say, hey, I need more nodes or more resources? But with machine sets it would definitely be possible to scale when needed. Right, cool. And I'm sure there's some amount of automation around that too. So, Metro DR: let's get a view of that.
B
Again, we still have the two peer clusters. We're only showing two; you can have many sets of clusters, really as many as you want. The idea is just that each cluster will have a failover cluster, but I could be doing disaster recovery for, say, 10 OpenShift clusters, and they would just be configured into pairs so that I had, sort of, five failover pairs.
B
Essentially, you do that, but in terms of what we call the disaster recovery policy, you set up the policy ahead of time so that only the opposite cluster is used. When you go to apply the policy to the applications and tell it what failover cluster it has, you already have that configured. So you can't just say this application will fail over to any one of these five; it'll be one. When you create the DR policy you're going to say: this is your failover cluster, and any applications that use this policy are going to use this peer relationship.
B
So again, we have external ODF; we call that Red Hat Ceph Storage, and it uses the external mode capability when you install OpenShift Data Foundation to connect to that external cluster. The Ceph cluster has to be configured in an architecture called stretch mode with an arbiter.
B
That's a particular configuration that essentially ensures that if everything in data center one was unavailable, the data would still have been synchronously replicated, and there would be enough replicas of the data in data center two so that your applications could still run. Now, the arbiter node must stay available if there is a failure, so it has to be somewhere that is not data center one or data center two. It can run as just what we call a monitor service, so it could run in a cloud environment, on, say, an EC2 instance or something, as long as there's reachability back to the Ceph cluster.
B
Good question: there are no latency restrictions for the arbiter, for the most part. We're talking about maybe a restriction of 100 or 150 milliseconds, so for most realistic locations that's not really a restriction. That would cover any place you wanted to put it in the US, or even if, for some reason, you had your data centers in the US and you wanted to put your arbiter somewhere in, I don't know, London or something, or even farther, somewhere in Asia.
B
You would still be good. Okay, so again, the distance limitation is for the two data centers that are hosting the OpenShift clusters, as well as the main components, the data nodes, for the Red Hat Ceph Storage.
B
So we're going to look at one of these assets, but right now, I think, Michelle, you're going to bundle up this PowerPoint or turn it into something that we can make available to people. Sure. Hopefully these links will still work, but these are some assets that go with the new version.
B
We just released the latest disaster recovery guide, and it has now combined the Regional and the Metro DR instructions into one guide. I think if I switch... let me know if you can see this, Michelle. Can you see this?
B
Yeah, I'm just going to try real quick to switch to a different one. Okay, can you see this now? Yes? Okay. So this is our latest guide.
B
It was just released yesterday, and I just want to make it clear that it's got the Regional Disaster Recovery section here, and there's a whole set of instructions that go with how to configure that, including what you need to do on ACM. And then, let's see here, how do I get this to go down a little bit more... and then the same guide includes the Metro DR solution.
B
All right, so I'm going to show you a demo. It is pre-recorded, but you will be able to watch it afterwards. So again, I'm Annette Clewett, I'm a principal architect on an engineering disaster recovery team.
B
So what we have here in Advanced Cluster Management is, in addition to the local cluster that has ACM installed, four other clusters that I imported. And again, we'll see later that these have two different ways of replicating the data, either synchronously or asynchronously.
B
And then what we want to look at is what is installed on what we call the hub cluster. On the hub cluster we have Advanced Cluster Management, the ODF Multicluster Orchestrator, and the OpenShift DR Hub operator.
B
So let's go ahead now. In order to test disaster recovery, we're going to create an application, and the application that I'm going to use is at a GitHub repo that I forked from some application samples. The reason that I forked it, we'll see, is so that I could configure the storage class to use an OpenShift Data Foundation Ceph RBD storage class.
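As a rough idea of what that fork changes, the application's PersistentVolumeClaim would point at an ODF-provided RBD storage class along the lines below. The claim name, size, and storage class name here are assumptions for illustration, not taken from the actual repo.

```yaml
# Hypothetical PVC for the demo app, requesting an ODF Ceph RBD (block) volume;
# names and sizes are illustrative, not copied from the forked repo.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pacman-db
  namespace: pacman
spec:
  accessModes:
    - ReadWriteOnce                                # RBD block volumes are RWO
  resources:
    requests:
      storage: 5Gi
  storageClassName: ocs-storagecluster-ceph-rbd    # ODF-provided RBD storage class
```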
B
So now we're going to place our application, and this is all ACM, on one of the clusters, and this one will be one of the ones called bos; this will be bos1. We're going to go ahead and create it. Again, nothing about disaster recovery has been done yet; we're just basically using ACM to schedule an application on a particular cluster that it manages.
B
Yeah, now this is a good point. Let me clarify; I just stopped the video for a minute. This is absolutely scheduled on one of the managed clusters, not on the hub cluster, because all ACM is doing, when I configured that a little while ago, is I just told it where I want Pac-Man. I could have picked any of the four managed clusters, but I just picked this one.
B
Okay, so the applications that you schedule via ACM don't run on the cluster that has ACM, usually. I mean, it could, if it was just a small test, but for disaster recovery it needs to run on one of the managed clusters where OpenShift Data Foundation is actually installed. Right. Okay, so we've got that now.
B
Pac-Man is running on one of the clusters, and what I want to do now is actually go play the game, so that I can create some persistent data that will be written to the OpenShift Data Foundation Ceph RBD volume and we can actually test the failover. I'm doing very badly here, and the reason is because after I fail three times I get to save a score, and that will show up again in the high scores.
B
Okay, and when we select them, it'll come back and tell us what their relationship is. It is able to actually inspect those two storage clusters, and we can see that it's asynchronous; by default the replication interval is five minutes, but we can change that to be either shorter or longer.
B
The minimum replication interval is one minute. So I create this now, and then I also want to create one for the other two clusters, just to show how they are different. In this case I'm not going to put a replication interval into my name, and I'll show you why in a minute: I'm not going to need that, because when the inspection was done of these, you notice it says OCS external storage cluster for the storage system. So this is actually connecting to Red Hat Ceph Storage.
B
Absolutely, yeah. The other policy that we created was essentially mapping to Regional DR, what we call Regional DR, and this one is mapping to Metro DR. Yeah, 100 percent. But one of the things is that you don't have to know that.
B
You just have to select the right clusters. If I had selected, say, perf1 and bos2, that wouldn't have worked, because one is set up for synchronous replication and the other is set up for asynchronous, and that's not going to work.
B
Gotcha. Okay, I'll go ahead and continue the video. So we've now created, or we're going to, in a minute here, create the second DR policy, or data policy, and now we can use that and assign it to an application. I'm going to choose to use the first one I created, which, as Michelle said, is Regional DR, and I'm going to apply the policy to my Pac-Man application.
B
So at this very point, there will be some new resources created in the pacman namespace, and one of the most important ones is called the DRPlacementControl; we'll see in a minute what that does.
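For reference, a DRPlacementControl in the application's namespace ties together the placement, the DR policy, and the preferred cluster. The sketch below follows the Ramen/ODF DR API of that era, but the names, labels, and referenced resources are assumptions for illustration, not copied from the demo.

```yaml
# Hypothetical DRPlacementControl created when the DR policy is applied to the
# Pac-Man application; field names follow the Ramen/ODF DR API, values are illustrative.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: pacman-drpc
  namespace: pacman
spec:
  drPolicyRef:
    name: regional-drpolicy       # the Regional DR policy created earlier (name illustrative)
  placementRef:
    kind: PlacementRule           # the ACM placement driving the app
    name: pacman-placement
  preferredCluster: bos1          # where the app normally runs
  pvcSelector:
    matchLabels:
      appname: pacman             # which PVCs get replicated
```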
A
Okay, so at this point we should be looking at the Pac-Man application that is set up with Regional DR and is ready for failover, right? Like, if we pull the plug on one side, we should see it fail over to the other.
B
Yeah, so what we're going to do now is go ahead and try the failure, and to do that we're going to use that DRPlacementControl that we just created.
B
That's at the hub cluster. The data has already been replicated to the opposite cluster, and we're recreating the application from a GitHub repo; I mean, the GitHub repo has to be available, but all the resources that are going to recreate the application are not on the cluster that's offline right now. Once we set the action to failover here, and the failover cluster, and save it, all of that is going to be initiated.
A
Okay, so for this demonstration in the video, you're forcing the failover to happen, right? You're not going to bring down bos1.
B
No, in this case this is just, you know, a test, which actually is a good point, because a developer will have namespace control, and a developer or someone who owns the application could actually do this and test failover, and they would not impact anybody else. It would be only for their application.
B
If I want to fail over, the answer today is: yes, you do have to go into the YAML file and put in the values I just put in. But in the next version of the product, 4.12, you will be doing it via the UI, and you won't have to be modifying YAML files or anything like that.
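To give a concrete sense of that manual step, triggering the failover comes down to editing the DRPlacementControl with a failover action and a target cluster, roughly as in the sketch below. This is again a hedged illustration: the field names mirror the Ramen/ODF DR API, but the values are assumptions.

```yaml
# Hypothetical edit to the DRPlacementControl that initiates the failover;
# in ODF 4.11 this was done by editing the YAML, in 4.12 via the UI.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: pacman-drpc
  namespace: pacman
spec:
  action: Failover          # tell the DR hub operator to fail the app over
  failoverCluster: bos2     # the surviving peer cluster
  drPolicyRef:
    name: regional-drpolicy
  preferredCluster: bos1
  pvcSelector:
    matchLabels:
      appname: pacman
```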
B
So we're going to save that, which I think we did. We can look at the failover now in a couple of different ways: we have the event stream, and if you read that you'll see it's doing it. We can also go back here, and if you look closely, you'll see we were on bos1 and we just switched over to bos2. Okay.
B
So you know that it does work.
B
And then let me just pause for a minute here. Okay, so this is the load balancer that I talked about earlier.
B
We can see in the middle of the screen there that one of them is green and one of them is red. That's the job of a basic load balancer, right: to point connections to the instance that is currently active. And when we did that switchover that we saw on the topology, they were both red before, because both of them were down. Remember, when we fail over,
B
we take down the application on the opposite cluster, so the application only runs on one cluster at a time. But as soon as the Pac-Man application came online on bos2, the failover cluster, this went green, and any incoming connections, which I'm going to do in a minute here, would then go to the failover, the second cluster.
A
Question: so when the application failed over to bos2, all of the steps were taken, and it pulls from the Git repo? Yeah? Okay, yeah.
B
And it mounts the Ceph volume image with the latest replicated data. So again, if we had saved the high score and there hadn't been time for it to replicate, let's say there was a failover within one minute and it hadn't done the replication, we would mount the volume but we would not see the score, because it didn't get replicated in time.
B
Absolutely, yeah, that's a good point. It's not just providing a possibility of recovery; it's actually saying the alternate site is no longer used at all. Okay, gotcha. Okay, so we're continuing here. I just wanted to make it clear about the load balancing, so we'll go back to our application. We need to refresh because we're on a different cluster now.
B
So it had time to replicate. This is just a simple example, but the idea is that even if you had applications with high I/O, you should be able to get all of the data available on the alternate cluster, and the only outstanding data for Regional DR would be data that had not been replicated. So choosing your replication interval is really important, because that's essentially going to determine your recovery point objective.
B
And if that had been synchronous, if it had been the other two clusters, then we wouldn't have lost any data, because it would have been synchronously replicated. We would still have application downtime, because it takes time to rehydrate the application and to mount the storage, so we would still have a period when the application would be unavailable.
A
Okay. And so if you wanted to minimize that time, let's say things are really sensitive, so we're definitely doing Metro DR, and you wanted to not be dependent on GitHub, you just make sure you have a repository that the cluster can reach, like, start to remove external dependencies and bring it all in, yeah.
B
Yeah, and the other thing there is ApplicationSets and Argo CD. All the ways that ACM supports creating applications, we can support. So as ACM evolves to have more ways of creating applications, we basically just piggyback on that.
A
Okay, can I ask you a few more questions, or do you need to...?
B
Sure.
A
Okay. So when you do the pairing, let's say everything's Regional DR, let's just take Metro out of the picture, so no external Ceph. Let's say you have ten clusters: when you pair them, they can only pair with each other? Like, I couldn't say bos1 pairs with bos2, and bos2 pairs with bos3? I couldn't do something like that? I'm just putting it out there.
B
Currently that is not supported, or our product is not configured that way. We are looking at that kind of solution; I can definitely see situations where you'd want one-to-many. But right now it requires that you decide on the two, and then you have to have configured it, and there's a configuration part of this; that's what I showed in the documentation.
A
In this situation, so, I have four clusters, let's say, and I have super important information, like a hospital. I've got my Metro DR happening because I've configured external Ceph, and that's beautiful. Is there no way to configure, I mean, I'm getting really wacky, but I had an idea: let's say I need to take a replication off of that Metro DR situation and put it off site, and then have... you know how these things grow, right? You talk about it.
B
No, no, your question is good. So I didn't talk about the idea of backup at all, but the traditional backup and recovery, or restore, when you apply it to this solution, is totally complementary. Because, if you think about it, whether it's synchronous replication or even asynchronous replication, you can replicate something that is corrupted, and when you replicate something that's corrupted, there's nothing that can help you.
B
That's now a Red Hat certified operator: the OpenShift API for Data Protection, OADP. So you will definitely see OADP become complementary to the disaster recovery, on either side, either the preferred cluster or the failover cluster.
B
You will want to back up and move that backup, and OADP just introduced the data mover capability, so you can move your backup and the persistent data that you back up off site. So if you need to, you can recover. It's not using, you know, mirroring or the replication that we talked about.
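For a sense of what that looks like in practice, OADP backups are driven by Velero-style custom resources. The following is a minimal sketch, assuming the OADP operator is installed in its usual namespace and a backup storage location already exists; the names below are illustrative, not taken from the session.

```yaml
# Hypothetical OADP (Velero) Backup of the application namespace; assumes a
# backup storage location named "default" is already configured.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: pacman-backup
  namespace: openshift-adp       # OADP operator namespace
spec:
  includedNamespaces:
    - pacman                     # back up the app and its PVCs
  storageLocation: default
  ttl: 720h0m0s                  # keep the backup for 30 days
```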
B
Yeah, and backup, recovery, and restore, and even disaster recovery solutions, have been around for all the legacy apps as well. So it's not like these are new ideas; we're just sort of containerizing them and, we hope, making it easier.
A
Okay, so one more question: have you been through a GitOps process with this? Is there any talk about what this would look like if you wanted to capture this in GitOps, in Argo CD, or not yet?
B
You mean the configuration part of it? Yes, yeah. I personally haven't, but I know that our customers are thinking that way, and, I guess, from some point of view our quality engineering group is doing it, because they have to set this thing up. They took the time to automate the setup, because the manual setup that's in the disaster recovery guide, there's a lot of it; I would say about three quarters of it.
B
So the goal is to make the configuration as easy as possible, but you still have to, you know, you still have to install Ceph if you're doing Metro DR, outside of OpenShift of course, using the methods by which you install Ceph. We can do more automation inside OpenShift with Regional DR. So there are still parts of it that could lend themselves to what you're saying, but a lot of the things that were difficult before are now completely automated.
A
Yes, that's always good. So this video is available; it's on one of our channels on YouTube. I can put it in the description, and any slides that we want to make available, I'll make sure to modify the description of this on YouTube so everyone can get them. And your video actually has a voiceover to it, so they don't have to re-watch this; they can just go right to the video.
B
The one I gave you is even more slick; it's less than nine minutes.