From YouTube: Backup and DR for Databases on OpenShift, Gaurav Rishi & Michael Courcy, Kasten by Veeam, OpenShift Commons 2022
Description
Backup and DR for databases on OpenShift with Kasten by Veeam
Gaurav Rishi & Michael Courcy Kasten by Veeam
OpenShift Commons Gathering on Databases held on 02/23/2022
Slides: https://bit.ly/3Izrh6k
Join OpenShift Commons: https://commons.openshift.org/index.html#join
Full Agenda here:
https://commons.openshift.org/gatherings/OpenShift_Commons_Gathering_on_Databases.html
A
I am Gaurav Rishi, VP of Product here at Kasten by Veeam, and later you'll hear from Michael Courcy, who is going to show you a quick demo. What we're talking about is an important topic, given that databases are one of the most common workloads on Kubernetes, with OpenShift, of course, being a prime example of that. With that said, we'll get into how we do it as part of Kasten by Veeam: we'll quickly go through the use cases, a quick architecture overview, and why customers use us, and then we'll run a recorded demo with Michael Courcy talking about how to actually do this in real life, so you'll get a good sense of that.
A
Just a quick slide: we've been working with Red Hat OpenShift for a while. In fact, if you go to kasten.io/openshift you'll see a variety of collateral around it. We're very happy to work with it, for a few reasons, and I'll run through the three use cases that we've been laser focused on as a company. First of all, Kasten is an acquisition that Veeam made.
A
We
are
the
number
one
kubernetes
sort
of
backup
leader
in
this
space
and
veeam,
of
course,
is
very
well
known
for
doing
backups
in
the
context
of
hypervisors.
So
custom
k10,
which
is
the
name
of
the
core
product,
has
been
our
core
bread
and
butter
has
been
these
three
use
cases.
A
Teams pick their favorite or the most appropriate database under the covers, whether it's a streaming database, a SQL database or a NoSQL database, and what you need to back up in this new world is really the entire application, which is made up of multiple databases under the covers; hence, polyglot persistence. We've been doing that, so application centricity has been our starting point, and we do it really well. Disaster recovery is the second use case.
A
I don't need to explain why that's needed: people make errors all the time, disasters do strike, and of course you have things like ransomware, which makes it all the more important in today's world. We do that really well across regions, availability zones and hybrid architectures, because not every architecture and deployment type is the same. You might need to restore or rehydrate your application where the storage class under the covers has changed: you might have backed up on spinning media, but when you rehydrate it, it's SSD.
A
We
as
gaston
have
the
abilities
to
be
at
make
these
transformations
so
really
important
and
powerful
feature
and
that's
something-
that's
quite
useful
and
then
kubernetes.
Of
course,
people
expect
it
to
be
portable.
That's
one
of
the
advantages
of
that,
whether
it's
because
you're
going
ahead
and
rehydrating
your
applications,
because
every
quarter
you
have
a
new
kubernetes
version,
come
up
or
you
are
creating
your
clusters
or
maybe
you
have
test
dev
clusters.
A
We have a really rich language for putting these application transforms, as we call them, into action. All of this works really well, and that's one of the reasons Kasten was, for the second time, ranked number one in Kubernetes data protection.
A
There's a QR code here; I would encourage people to scan it and read the report for yourself, and I'm happy to answer questions about that if they come up later. But let me get on to the meat of where we have been integrating and how all of this works. No presentation would be complete without a stack diagram.
A
So here's one from me, but this is obviously a joint presentation that we had done with Red Hat; like I said, if you go to kasten.io you'll see a lot more collateral. To walk you through it: at the bottom layer you have any infrastructure, whether it's hybrid, public cloud, private cloud, or even the edge now, and OpenShift.
A
OpenShift, of course, whether it's ARO or ROSA or OpenShift on-prem, provides the layers and capabilities that you see in the middle part of the diagram, and Kasten K10 is a cloud native application itself, so we get installed on a cluster on top of OpenShift. We support all of the variants you see in the layers below, and on top of that, like I said, we take a polyglot approach, which means we protect the databases, but we do it in an app-centric manner.
A
K10 will decompose the dependency map of the application in terms of the various microservices, the PV/PVC bindings, and the data services, whether SQL or NoSQL; you see some of those icons in the layer above. Then we make sure you can set policies that let you say: I want to back up this application X times a minute, or an hour, or a day, and then safeguard the data in some repository, which could be offsite in an object store, for example.
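To make that concrete, here is a minimal sketch of creating such a backup-and-export policy with the Kubernetes Python client. The namespace, profile name and label are placeholders, and the Policy field names (frequency, retention, actions, selector) are reconstructed from memory of K10's policy CRD, so they may differ between K10 versions.

```python
# Sketch: create a K10 backup/export policy via the Kubernetes API.
# Assumes K10 runs in the "kasten-io" namespace and a location profile named
# "s3-profile" already exists; all names and field layouts are illustrative.
from kubernetes import client, config

config.load_kube_config()

policy = {
    "apiVersion": "config.kio.kasten.io/v1alpha1",
    "kind": "Policy",
    "metadata": {"name": "mongodb-hourly", "namespace": "kasten-io"},
    "spec": {
        "frequency": "@hourly",                    # how often to back up
        "retention": {"hourly": 24, "daily": 7},   # GFS-style retention
        "selector": {
            "matchLabels": {"k10.kasten.io/appNamespace": "mongodb"}
        },
        "actions": [
            {"action": "backup"},
            {"action": "export",                   # copy the backup off-cluster
             "exportParameters": {
                 "profile": {"name": "s3-profile", "namespace": "kasten-io"}
             }},
        ],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="config.kio.kasten.io", version="v1alpha1",
    namespace="kasten-io", plural="policies", body=policy,
)
```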
A
So that's really the backup, DR and mobility story I talked about; it gives you the power of portability as well as the security of that application. We are Red Hat operator certified, so you can find us in the catalog, and the operator handles the lifecycle of the operations in this case. We are also on the Red Hat Marketplace, which is operated by IBM, and very recently we also added transactions to it.
A
We
support
a
variety
of
open
shift
distributions
like
whether
you're
on
azure
with
aro
or
whether
you're
on
aws
with
rosa
or
whether
you're
on
premises
with
openshift
work
in
air
gap
environments.
We
obviously
are
integrated
with
things
like
the
openshift
container
storage
system,
and
so,
as
a
customer,
you
do
have
that
freedom
of
choice.
A
Just to get under the covers even more: this is a little bit of a busy picture, but let me walk you through it so you can understand it easily. Like I said, at the bottom of this picture you can see you might have a few different variants of Kubernetes in your environment, whether it's a mix of AKS that you want to move to OpenShift, or maybe you're using ARO, or maybe plain vanilla Kubernetes.
A
You have a cluster, and on top of that cluster you might have applications; in this particular example I'm just showing you two toy applications. One application ultimately breaks down into microservices running in pods, connected through persistent volume claims to persistent volumes, and the second application also ends up using a database. The diagram shows SQL, but you'll see a demo with MongoDB under the covers; so that's the second application.
A
Kasten K10 gets installed in the same cluster, and as soon as you install it, it works through the Kubernetes API server to discover all of these applications and their dependency maps, like I mentioned. You can use the CLI or the user interface, which you'll see shortly, to set up these policies, as we call them, to decide how often the applications should be backed up, where they should be backed up to, and what those backups need to include.
A
We are also, like I said, operator certified, so we've gone through a rigorous process ourselves as Kasten K10 to be part of that. At the end of the day, the reason customers use us is because it's extremely simple to use, and hopefully some of that shows up as I show you the application, even though I think OpenShift has already done a great job of making it very easy to scale operations.
A
Usually it's that simplicity that turns out to be really important. I should also mention that we've been built from the ground up with our own data mover and serverless-type technologies, to give you the parallelism and the security that one expects from a backup product. That's really important and close to our hearts. With that said, let me switch over and let Michael show you what this actually looks like in practice.
B
Here I have a MongoDB database deployed on OpenShift through a Helm chart, a very simple setup. It's actually two stateful sets. One stateful set is the MongoDB replica set, in the MongoDB sense, not in the Kubernetes sense; basically it's two pods, one is the primary, the other is the secondary, and each of them has its persistent volume and the corresponding persistent volume claim. There is another stateful set, which is the arbiter, and the arbiter is responsible for electing the leader between the two replica set members in case one of them fails.
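As a side note, you can see the same topology from MongoDB's point of view rather than Kubernetes'; this is a small illustrative sketch with pymongo, and the connection string and credentials are placeholders.

```python
# Sketch: inspect the MongoDB replica set formed by the two data pods plus the
# arbiter. Host name and credentials below are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://root:password@mongodb-0.mongodb-headless:27017/?replicaSet=rs0"
)

status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    # Typical output: one PRIMARY, one SECONDARY and one ARBITER member.
    print(member["name"], member["stateStr"])
```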
B
Once that is deployed, you install Kasten on your OpenShift cluster and you become able to discover all the applications. In particular, I'm going to find my MongoDB application, and I can see the two PVCs, the two workloads, the two stateful sets, and from there I'm going to create a policy.
B
I just want to run it on demand, but if it were hourly, for example, I could choose my retention, which means that as soon as I reach the 24th backup, the first one is going to be replaced by the 25th, but it can also be promoted to the weekly tier; that's the grandfather-father-son scheme. And I can export my backup, so I can do a snapshot.
B
I mean real storage snapshots, and I can export them to a storage location such as an S3 bucket. I need to choose my profile; I have the ability to define many location profiles, and I'm choosing this one, which is basically an S3 bucket I have in a US region. And as I said, I'm only going to run this policy on demand.
B
That's
about
all.
I
can
select
my
applications
here,
I'm
choosing
the
simplest
situation,
which
is
choosing
by
the
name
of
the
application,
but
I
can
choose
labels,
for
example,
and
in
this
case
all
the
namespace
having
these
levels
or
all
the
namespace
having
workloads
getting
these
levels
will
be
backup.
So
that's
in
my
case,
I
just
want
to
do
a
backup
by
name,
so
that's
going
to
be
the
mongodb
button
and
that's
all
now.
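To make the label-based selection concrete, here is a small sketch with the Kubernetes Python client that lists the namespaces a given label selector would match; the label key and value are made up, and K10's own selector semantics may differ in detail.

```python
# Sketch: see which namespaces a label selector would pick up for backup.
# The label "backup=kasten" is a made-up example.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

namespaces = v1.list_namespace(label_selector="backup=kasten")
for ns in namespaces.items:
    print("would back up namespace:", ns.metadata.name)
```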
B
I can also filter resources if I want. Typically, when you have some awkward object in your namespace, like a big NFS PVC, and you don't want to back it up because it's a whole story in itself, it doesn't support snapshots, or it's simply difficult to back up, you can just add an exclude filter and say: I want to exclude this persistent volume claim, which here is my big PVC.
B
Well,
this
is
not
realist,
but
you
can
do
things
like
that
and
that's
very,
very
useful.
Okay,
in
our
case,
we
looking
for
simplicity.
We
are
just
going
to
backup
all
resources
and
you
can
include
also
the
non
name
species
name
spacing
resource.
I
mean
all
the
scoped
cluster
resources,
like
old
crd
or
the
old
custom
world
binding
the
cluster
because
binding
all
these
things
need
to
could
be
captured
in
this
point.
B
Not doing a snapshot is not a good thing, and the reason is that if you don't do a snapshot, you are not going to make a crash consistent backup. What is crash consistent? Crash consistent means that you capture all the files at the same point in time, so your file system is consistent with what you had before the crash.
B
Now, if you don't do a crash consistent backup, you may end up with files inside your file system that are inconsistent with each other, because if you have a big PVC, you start capturing your file system at time 1, where you capture f1, and at time n you capture fn; but f1 may have changed by the time you finish capturing fn, and then you get files that are not from exactly the same point in time, and that leads to potential difficulties restarting your workload.
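A toy sketch of that effect, purely illustrative and not how any backup tool actually copies data: a writer keeps all files at the same version, yet a file-by-file copy taken while it runs ends up mixing versions.

```python
# Toy illustration: copying files one by one while they keep changing
# yields a set of files from different points in time (not crash consistent).
import threading
import time

files = {f"f{i}": 0 for i in range(1, 6)}  # consistent state: all values equal

def writer():
    for version in range(1, 1000):
        for name in files:
            files[name] = version          # writer keeps files in lockstep
        time.sleep(0.001)

def file_by_file_copy():
    copied = {}
    for name in sorted(files):
        copied[name] = files[name]         # each file copied at a different time
        time.sleep(0.01)
    return copied

threading.Thread(target=writer, daemon=True).start()
print(file_by_file_copy())                 # versions usually differ across files
```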
B
So crash consistency is at least the minimum, though it may not be enough. Capturing the PVC with a snapshot is very simple: we take a snapshot of the PVC, we capture the spec along with it, we put that in a restore point in our internal database, and when it comes to restore, we use this reference to rebuild the PVC and rehydrate the manifests; that's the minimum.
B
But
this
solution
is
not
application.
Consistent,
because
for
many
reasons
you
may
have
many
data
kept
in
memory.
For
example,
if
you
are
using
these
tools,
like
I
don't
know,
kafka
kafka
is
putting
a
lot
of
data
in
memory.
So
if
you
just
capture
the
snapshot,
just
the
disk
snapshot,
you
probably
have
some
issues
with
the
data
that
are
not
flushed
on
the
disk,
so
you're,
not
application.
B
We
have
another
solution,
which
is
the
logical
method.
So
what
means
logical,
magical,
magical,
logical
backup
means
that
you
capture
that
and
in
this
case
testing
is
not
going
to
capture
the
pvc.
The
snapshot
of
the
pvc
is
just
going
to
capture
the
dumb.
So
what
we
have?
We
have
an
extension
mechanism
that
lets
you
say.
Okay,
I
don't
want
to
capture
the
pvc
on
this
workload.
I
just
want
to
make
a
dump
and
I
push
this
dump
to
a
external
location
like
s3
bucket
and
that's
what
we
call
the
the
blueprint.
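As a rough sketch of what such a logical backup boils down to (not the actual blueprint K10 runs, just the idea): dump the database, then push the dump to an object store. It assumes the mongodump binary is available, and the connection string, dump path and bucket name are placeholders.

```python
# Sketch of a logical backup: dump MongoDB, then push the dump to S3.
# Assumes the mongodump binary is installed and AWS credentials are configured.
import subprocess
import boto3

MONGO_URI = "mongodb://root:password@mongodb-0.mongodb-headless:27017"  # placeholder
DUMP_PATH = "/tmp/mongodb-dump.gz"
BUCKET = "my-backup-bucket"                                             # placeholder

# 1. Logical dump: application consistent, but always a full copy of the data.
subprocess.run(
    ["mongodump", f"--uri={MONGO_URI}", "--gzip", f"--archive={DUMP_PATH}"],
    check=True,
)

# 2. Push the dump to an external location, outside the failure domain.
boto3.client("s3").upload_file(DUMP_PATH, BUCKET, "mongodb/mongodb-dump.gz")
```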
B
So
this
strategy
is
interesting.
It's
guaranteed
to
be
application
consistent,
but
it
has
a
serious
issue:
it's
not
impressive!
So
what
does
it
mean?
It
means
that
if
your
mongodb
database
is
an
urge
with
one,
I
don't
know
not
one
terabyte
of
that
one
terabyte
of
data
on
your
mongodb
on
your
mobile
database
that
can
happen
actually
so
you're
going
to
make
a
dump
of
one
terabyte
and
you're
going
to
push
through
the
s
through
bucket
this
one
terabyte.
B
So what can we do instead? We are able to execute, on the pod itself, some database primitives, database API calls, that guarantee you can do a file system backup, or at least a snapshot, consistently. In the case of MongoDB, these primitives are the functions called fsyncLock and fsyncUnlock: fsyncLock flushes all pending write operations from memory to the disk and then locks any further write operations, keeping them in memory.
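Here is a minimal sketch of that hook pattern with pymongo; the snapshot call in the middle is a placeholder for whatever actually snapshots the volume, since K10 drives this through a blueprint rather than a script like this.

```python
# Sketch: quiesce MongoDB around a volume snapshot using fsyncLock/fsyncUnlock.
from pymongo import MongoClient

def take_volume_snapshot():
    # Placeholder: in reality K10 (or your storage layer) snapshots the PVC here.
    print("volume snapshot would be taken here")

client = MongoClient("mongodb://root:password@mongodb-0.mongodb-headless:27017")

client.admin.command("fsync", lock=True)   # flush writes to disk, block new writes
try:
    take_volume_snapshot()                 # crash- and application-consistent point
finally:
    client.admin.command("fsyncUnlock")    # release the lock, resume normal writes
```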
B
Then, once your snapshot is made and you have recorded its reference, you have to call fsyncUnlock to make sure your database is released back into its previous, normal working state, back to business as usual. Kasten is able to support that without any issue, but you do need such an API in the database to support this strategy. And now you have the best of both worlds: the backup is application consistent and it's incremental, which means we only export the changes to the S3 location, the external location.
B
Okay, so how do we do that? For that we use something we call a blueprint. A blueprint is an extension mechanism that makes sure that the 95 percent solution, which is Kasten, is able to reach the 100 percent solution. You never have a perfect backup solution unless you have an extension mechanism that lets you fill the gap to perfection, and that's exactly what our extension mechanism, provided through the blueprint framework, is going to bring you.
B
They
are
more
and
more
using
operators.
Operators
is
basically
a
controller
which
is
working
with
a
specific
api
and
they
create
the
data
as
they
maintain
the
databases
they
do.
All
the
operations
that
regular
operators
should
do
regular
human
operator
should
do,
and
that
has
database
operator
is
becoming
now
a
de
facto
background
when
we
are
deploying
database
on
openshift.
B
An
example
of
one
of
them
is
the
tcd
operator
so
for
the
tcd
variable.
Now
you
don't
build
your
ecd
yourself
when
you
want
to
to
the
to
deploy
unity,
cd
database
on
your
openshift,
because
you
use
an
etc
and
the
etc
operator
is
exposing
different
api
to
create
mtc
database.
While
you
define
the
number
of
replicas,
but
also
the
etc
operator-
and
that's
very
interesting
is
defining
a
backup
api.
B
The
api
for
backup
is
this
way,
so
you
define
an
tcp
backup
object
and
you
provide
some
parameters,
and
here
you
are.
You
have
your
backup
suddenly,
but
it's
interesting,
but
that's
the
only
way
to
make
the
backup
of
your
dc
database
in
this
situation,
and
what
does
it
mean?
It
means
that
using
a
snapshot
or
using
any
other
things
is
not
possible.
That's
the
only
way
to
do
backup
with
etc
and
more
and
more
database
operators
are
exposing
their
backup
api.
So
the
question
is:
how
do
you
deal
with
that?
B
The
blueprint
I
show
you
is
responding
to
this
question.
So,
let's
see
an
example
of
a
sorry,
a
blueprint
that
I
wore
for
that,
and
this
is
this
one.
So
what
I
do
I
create
a
blueprint
which
is
basically
executed
every
time
we
do
a
backup
on
the
namespace
and
we
create
this
api
and
we
reference
it
in
our
restore
point.
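For illustration, driving such a backup API from the Kubernetes Python client looks roughly like this; the group, version and spec fields follow the CoreOS etcd operator's EtcdBackup resource as I recall it, so treat them as assumptions and check your operator's actual CRD.

```python
# Sketch: ask an etcd operator to take a backup by creating its backup object.
# Group/version/spec fields are assumptions; consult the operator's CRD.
from kubernetes import client, config

config.load_kube_config()

etcd_backup = {
    "apiVersion": "etcd.database.coreos.com/v1beta2",
    "kind": "EtcdBackup",
    "metadata": {"name": "example-backup", "namespace": "etcd"},
    "spec": {
        "etcdEndpoints": ["https://example-etcd-client:2379"],
        "storageType": "S3",
        "s3": {"path": "my-backup-bucket/etcd/example-backup",
               "awsSecret": "aws-credentials"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="etcd.database.coreos.com", version="v1beta2",
    namespace="etcd", plural="etcdbackups", body=etcd_backup,
)
```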
B
If I lose my cluster, for example, and I don't have the snapshot available in my infrastructure because I've lost everything, then I can still work with my exported restore point, because it's on the S3 bucket, outside of my disaster. I can browse the restore point, I can apply some transforms for the restoration, including some very useful ones like changing a storage class, and I can choose what I really want to restore.
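A minimal sketch of what a change-storage-class transform amounts to conceptually: rewriting the storage class in the PVC manifest before it is re-created. K10 expresses this as a transform rule on the restore action; the manifest and class names below are placeholders.

```python
# Sketch: the essence of a "change storage class on restore" transform.
def transform_storage_class(pvc_manifest: dict, new_class: str) -> dict:
    transformed = dict(pvc_manifest)
    transformed["spec"] = {**pvc_manifest["spec"], "storageClassName": new_class}
    return transformed

backed_up_pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "datadir-mongodb-0"},
    "spec": {"storageClassName": "spinning-disk",
             "resources": {"requests": {"storage": "10Gi"}}},
}

restored_pvc = transform_storage_class(backed_up_pvc, "ssd")
print(restored_pvc["spec"]["storageClassName"])  # -> ssd
```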
A
All right, thanks, Michael, for that walkthrough. I'll just leave you all with another QR code. Hopefully you've got a sense of how we do it; this was a look behind the scenes at the variety of options.
A
Those of you who are actually running into these issues will appreciate having that variety of ways to back up databases. And for folks who just care about having a single-click operation to back up, export, and then recover when needed, we support that too, with the easy-to-use interface.
A
With that said, we have a QR code that lets you try Kasten K10 fully featured, free for you to use on a limited number of nodes. So try that, or, as I was pointing out earlier, you can also go to the Red Hat Marketplace, where there are a few different options for you to get going immediately. It takes less than 10 minutes to get the system up and running, so I definitely encourage you to try it.