From YouTube: OpenShift API Data Protection (OADP) with Ceph CSI, Red Hat OpenShift Commons Briefing
Description
OpenShift API Data Protection (OADP) with Ceph CSI
Annette Clewett (Red Hat)
Dylan Murray (Red Hat)
Raghavendra Manjunath (Red Hat)
OpenShift Commons Briefing
9/21/2020
A
So this is going to be OpenShift API Data Protection, using CSI combined with OpenShift Container Storage. I'm Annette Clewett; I'm in the storage business unit and have been working with OpenShift and OpenShift Container Storage, as well as Rook and Ceph CSI, for the last three or four years. I know Dylan is on the call. Dylan, do you want to introduce yourself quickly?
B
Yeah, sure, thanks Annette. My name is Dylan Murray. I come from the OpenShift migration engineering team. We learned a lot doing the migration work using Velero, and we wanted to take that knowledge and apply it to what we're calling OADP, essentially our backup and restore initiative at Red Hat. We also have another person here, Raghu.
B
I wanted to call him out because he also comes from the OpenShift storage side and has been doing a lot of work helping us make sure that the OADP project aligns nicely with the CSI initiative and all the other storage work. So, talking a little bit about the OpenShift backup and restore plan.
B
The key goals here are that we want to provide application backups that are granular and cover not just the metadata of the resources in the cluster, but also the application's persistent data. That means backing up persistent volumes on the cluster as well. We also wanted to enable a wide range of backup solutions through different backup ISV partnerships.
B
The solution really contains the OpenShift, Velero, Rook, and Ceph upstream projects; OpenShift Container Storage, which incorporates Rook-Ceph with Ceph CSI; and a community operator, OADP, which enables the Velero APIs. It allows you to install Velero easily, configured for OpenShift, and then exposes the Velero backup API.
B
So what does a traditional backup and restore workflow look like? Before we get too deep into the solution, let's talk about the problem. When you want to manage backups and orchestrate them, you have three sections here, all using some type of backup target to save the state of previous backups.
B
You can restore from it at a future point. Starting on the left, you always have an application definition. Some people simplify this to just the pods and the persistent volumes, but that's not totally true; you probably have a large set of Kubernetes resources that make up your full application, and you need to define them. Then we want to extract the resources that make up the app. In this case it's generally just some YAML files that represent Kubernetes resources in the cluster.
B
It could be some internal images, if you're using OpenShift and have enabled the internal registry, and it could be persistent volume data if you have a stateful app. We need to take all of that and package it up into the backup target: some place off cluster, preferably, that contains the state of the cluster.
B
We reference the backup, execute a restore, and that's going to go ahead and apply all the Kubernetes resources. At backup time you can almost think of this as an `oc export`, and at restore time, essentially an `oc apply`, obviously taking into account that we can strip some of the data. You're also going to want to restore the internal images; they're just container images.
B
Let's take that a little bit further as we look at the solution. For OADP itself, with that backup and restore workflow in mind, what do we need to provide to enable OpenShift users and partners to effectively take backups of the cluster?
B
OADP needs to provide cluster-consistent backups: the ability to take all the resources in the cluster plus the PV data and produce a stable backup.
B
Even while the application is running. We also need to be portable: have a platform that's able to use plug-ins and extend the core backup platform, so that when we take an application on one cluster and move it somewhere else, even to a different cloud storage provider, the application still works. And finally, obviously, providing Red Hat ecosystem support, making sure that the backup API is actually usable by partner solutions. At its current state, OADP is a community operator.
B
So we're available in OperatorHub, easy to install, and obviously easy to provide future version updates for.
B
So, with that backup and restore context, what is the OADP project solving? There are three points here where OADP is going to help. First, defining the application itself: the backup API needs to make it easy for you, as the user or application developer, to label your app or supply a backup object that can effectively express what makes up the application on the cluster.
B
Then there's the orchestration point: the ability to enable an easy-to-use API for users to take backups of their clusters, with a set of controllers watching for when we need to create a backup or a restore. Maybe you're scheduling backups, and we need to be able to orchestrate that effectively and quickly for you.
B
Finally, we can integrate at the execution point as well. We want to provide plugins that make this backup API OpenShift-native: providing backup of any app that's running on OpenShift and using OpenShift custom components, and then extending the OpenShift Container Storage solution with plugins that make it easy and painless for folks to take backups of their application data. And, obviously, all that data has to go to some backup target.
B
So, getting into how the OADP API satisfies all these requirements: we're using Velero, which is an excellent backup and restore utility that is native to Kubernetes, and we're also supplying a set of OpenShift backup and restore plugins. We're extending the Velero code using its native plugin system to make sure that if you take backups of OpenShift resources, we do all the custom logic required to be able to restore on any cluster, really supplying application portability.
B
We do this from the cluster resource standpoint with the OpenShift plugins and a plugin to manage internal images, as well as a set of vendor plugins to be compatible with any backup vendor. From the storage resource component, we're going to be using controllers to orchestrate these backup and restore actions, and we want to make sure that we're integrating natively with OpenShift for backups.
B
Moving on a little bit to the architecture: since we depend heavily on Velero, I want to talk about the Velero API between this and the next slide, but I also wanted to briefly mention this data mover concept. If you look at this from the top, you have some backup ISV project or product.
B
It's providing policy, scheduling, governance, and compliance reporting, and it wants to integrate with some set of APIs in the cluster to provide the backup interface. The Velero API is what we use to do this. At a high level it exposes four major custom resources, plus another big one that we'll get into in a second. First there's the backup storage location: where is all the backup data going to be put? And then there's the volume snapshot location.
B
Sorry, I'm hearing an echo. A volume snapshot location is the ability to supply Velero with an API to take snapshots of your underlying cloud storage volumes. So EBS, Azure, GCP: you want to use the native snapshot API on that platform, and Velero provides a way to extend that.
B
We also use that API to provide really good support for CSI volumes, as we'll get into, and then there's the plugin interface itself. There are essentially three ways to extend it. The OpenShift plugins we've talked about a little bit; a good example is a route. Say you have a route that's exposing the application on cluster one, and the domain changes between the two clusters. You want to let OpenShift regenerate the route name.
B
This is a very simple transformation, but we supply a plugin that does it automatically for you and allows the cluster to dynamically restore on the second side. The volume snapshot plugins we just talked about: you have the CSI extension point and pretty much any cloud provider.
B
Then you've got the backup storage location plugin, which is the ability to support custom backup targets. We generally use S3-native solutions today, but Velero has support for Azure Blob, Google Cloud, and vSphere, and people are writing their own custom plugins to support other providers, even NFS volumes.
B
You could write a plugin for what would basically be file-system targets, so there's a lot of extensibility with the Velero plugin interface. But then there's this other topic that is still in the design phase for OADP: you'll notice that a lot of other backup partners and vendors have this concept of what they call a data mover. Let's say that I have an application with persistent volumes backed by CSI drivers, and I take a CSI snapshot on, for example, a Ceph cluster.
B
My backup of that volume is still going to exist wherever the storage provider is. If that storage provider is on cluster, there is a desire to actually move that snapshot so it can be consumed elsewhere. We want to provide a controller that allows backup ISVs to plug in, with the ability to move these CSI volume snapshots wherever they like and restore them from anywhere else.
B
I just wanted to briefly bring up the data mover; it's still in the early design phase, so it's not really usable today, but it's definitely a core component of a full backup solution if you're taking on-cluster backups. Now I briefly want to get into the Velero API and everything it exposes. At a high level we've got five big ones here, and I've already covered four of them.
B
You've got the Backup: some way to describe a backup of an application, or a set of applications, in the cluster. As you can see, it allows for pretty granular specifics. Let's say you just wanted to define everything yourself: you could use a label selector, label every resource you want to include in a backup, and when you create this Backup CR it's going to go and grab every resource with that label.
B
If it's a persistent volume or an image stream, we're going to automatically back up the PV data and the internal image itself. You can scope it to just a set of namespaces, and then we'll intelligently grab every namespace-scoped resource in those namespaces and try to grab any cluster-scoped resources that may be referenced by them. Then, obviously, there's the storage location, whether or not you want cluster resources, and all of that.
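As a sketch of what that looks like, here is a minimal Velero Backup custom resource; the namespace, label, and metadata names are illustrative, not from the demo:

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: wordpress-backup-1        # illustrative name
  namespace: oadp-operator        # wherever Velero is installed
spec:
  # Scope the backup to a set of namespaces...
  includedNamespaces:
    - wordpress
  # ...or select resources explicitly by label.
  labelSelector:
    matchLabels:
      app: wordpress
  storageLocation: default        # which BackupStorageLocation to use
  snapshotVolumes: true           # also snapshot the persistent volumes
```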
B
The one thing that I really want to call out here, which we didn't want to get into too deeply today but I wanted to make sure you're aware of, is that the Velero API extends the Backup with the ability to take hooks as well. What that means, since I kind of butchered that sentence, is that at backup time you may have an application, say a stateful one, that needs some custom action in order to get a crash-consistent backup of the app.
B
Take a database, for example, like MySQL. You may want to run a command to stop the MySQL database safely and halt transactions while the backup is running, and then, when the backup is complete, bring the app back into a functional state. Velero does have an extensibility mechanism, what they call hooks, to control that backup behavior.
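A hedged sketch of what such a hook could look like on the Backup spec; the MySQL quiesce commands here are illustrative only (in practice a read lock taken this way releases when the exec session ends, so a real setup needs a database-appropriate quiesce strategy):

```yaml
spec:
  hooks:
    resources:
      - name: mysql-quiesce          # illustrative hook name
        includedNamespaces:
          - wordpress
        pre:                         # runs before the pod's volumes are backed up
          - exec:
              container: mysql
              command: ["/bin/bash", "-c", "mysql -u root -e 'FLUSH TABLES WITH READ LOCK;'"]
        post:                        # runs after the backup of the pod completes
          - exec:
              container: mysql
              command: ["/bin/bash", "-c", "mysql -u root -e 'UNLOCK TABLES;'"]
```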
B
And then Restore is obviously the ability to restore from a backup, perhaps even a granular set of resources from the backup. You've got the backup storage location, which is where we're going to store the Kubernetes resource data, and then optionally, potentially, some PV data, but probably just PV metadata, like CSI snapshot information or cloud provider snapshot IDs.
B
That was from the user perspective. We've already talked about the individual Backup CR itself, but note that you can set the hooks and you can also set the retention policy.
B
For the Restore, the other interesting thing I want to call out is that a restore generally assumes you're going back to the same cluster. We have the OpenShift plugins to make sure that we restore pretty effectively into new clusters, but you have to be aware: if you're restoring into new clusters, you have to make sure that plugins are handling that transformation for you. You can also provide a namespace mapping. Let's say you want to test a backup and restore workflow.
B
You don't have to restore to the same namespace; you could restore to a test namespace, make sure everything's working as expected, and then you know your backups are safe.
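Sketched as a Velero Restore resource with a namespace mapping (names here are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: wordpress-restore-test     # illustrative name
  namespace: oadp-operator
spec:
  backupName: wordpress-backup-1   # which backup to restore from
  includedNamespaces:
    - wordpress
  namespaceMapping:                # restore into a test namespace instead
    wordpress: wordpress-test
```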
B
For the backup storage location, I want to call out what support exists today. We have support from existing supported Velero plugins that we contribute to as well. You've got the S3 plugin, if you want to use Amazon S3, NooBaa, MinIO, etc.; Azure Blob Storage; and Google Cloud Storage. And then, as I said, there are some community plugins that you can use for things like file-system backup targets, but they're not officially supported just yet.
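For reference, a minimal S3-style BackupStorageLocation might look like this; the bucket and region are placeholders:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: oadp-operator
spec:
  provider: aws                 # S3-compatible targets use the aws provider
  objectStorage:
    bucket: my-velero-backups   # placeholder bucket name
  config:
    region: us-east-1           # placeholder region
```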
B
I also wanted to call out that Velero has the ability to use a solution called Restic, and OADP does enable you to install Restic alongside Velero. What Restic handles, essentially, is all of your PVs that don't have snapshot capabilities. If it's not a cloud provider PV and it's not a CSI-enabled persistent volume, you can always fall back to this solution.
B
Restic essentially takes a file-system copy of the PV, and if you do choose to use Restic, since it needs somewhere to put the data, we store the Restic PV data in the backup storage location.
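At the time, Velero's Restic integration was opt-in per pod: you annotate the pod with the names of the volumes to back up via the file-system copy. A sketch, with illustrative pod and claim names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql                                  # illustrative pod
  annotations:
    # Tell Velero to back these volumes up with Restic.
    backup.velero.io/backup-volumes: mysql-data
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-data
      persistentVolumeClaim:
        claimName: mysql-pvc                   # illustrative claim name
```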
B
That would be your S3 bucket, or wherever your object storage is configured; you'll get the Restic PV data stored there as well. But for most other use cases, especially if you're using more upstream work, you're going to want to use CSI snapshots, and the object storage in that case would only contain the metadata referencing the snapshot information. Okay, now quickly talking about the VSLs, the volume snapshot locations.
B
This is the interface that allows Velero to be extended to support CSI snapshots, but note that you can pretty much write a plugin for any type of PV provider here.
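A VolumeSnapshotLocation is a small resource; this AWS-flavored example is illustrative:

```yaml
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: oadp-operator
spec:
  provider: aws          # use the platform's native snapshot API
  config:
    region: us-east-1    # placeholder region
```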
B
Finally, the Schedule. This is actually pretty straightforward; most backup APIs have it. It's the ability to use a cron-job-like syntax to specify when you want backups to run: hey, run this every three hours, or every 24 hours.
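The Schedule resource wraps a backup template in a cron expression; for example, every three hours (the template fields are illustrative):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: wordpress-every-3h     # illustrative name
  namespace: oadp-operator
spec:
  schedule: "0 */3 * * *"      # standard cron syntax: top of every third hour
  template:                    # takes the same fields as a Backup spec
    includedNamespaces:
      - wordpress
    storageLocation: default
```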
B
If you want a deeper dive into this stuff, I've provided a few links, so if you load the presentation you can look them up. If you're looking to extend any of these with plugins, you kind of have to dig in to figure out exactly what you need to support, but here are some links to different portions of the API if you want to click them later.
B
So with that, I will turn it over to Annette, who's going to talk about how CSI works in conjunction with Ceph.
A
Actually, Dylan, we're going to turn it over to Raghu, who's joined, and he's going to have you advance the slides for him.
B
Raghu, would you briefly introduce yourself?
C
Yeah, sure. Hi, my name is Raghavendra.
C
I am part of the OCS engineering team. I have been working on OCS for the last few years, and most recently I have been working with the others on OADP and storage integration. That's a quick introduction of me. Regarding OADP with CSI storage: specifically, what we are doing is using Ceph CSI in the storage area. Generically speaking, if we have to understand what Ceph CSI is: it is a plugin in the generic Container Storage Interface.
C
If you look at this diagram, we have the Kubernetes services like the API server, controller manager, etc.
C
Whereas if you look at the nodes, that is where we have the things specific to CSI running, like the CSI provisioner, which ensures that the CSI plugin pods are running. In this context there will be a plugin from Ceph that is used for integrating Ceph with the CSI layer, and that particular plugin is started by components like the CSI provisioner. As you can see, there is a CSI plugin as well; that would be the Ceph CSI plugin specifically for our integration.
C
In general, there can be different plugins provided by different storage vendors for CSI, whereas in this particular case the storage is provided by Ceph; hence the plugins are the Ceph plugins.
C
Moving on to the next slide: this is what the generic Velero-to-CSI snapshot communication looks like. There is an entity in Velero called the Velero plugin for CSI; that is the one which ensures that the CSI snapshots are executed, and the CSI snapshot lives within the storage provider, or the CSI provider, which in this particular instance is Ceph. What Velero does is this:
C
Whenever a CSI snapshot has to be executed for a particular persistent volume, it looks at the configured volume and at the snapshot plugins that have been configured. For the particular persistent volume whose snapshot has to be taken, it identifies the provisioner there and compares it with the provisioner, or driver, that is in the snapshotter, and based on that it takes the appropriate CSI snapshot. That is how Velero handles a backup of an application that has persistent volumes.
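Under the hood, the Velero CSI plugin creates Kubernetes snapshot API objects; a CSI snapshot of a PVC is expressed roughly like this (`v1beta1` was the current snapshot API version in this timeframe, and the names are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: mysql-pvc-snapshot                    # illustrative name
  namespace: wordpress
spec:
  volumeSnapshotClassName: ocs-rbd-snapclass  # class whose driver matches the PVC's provisioner
  source:
    persistentVolumeClaimName: mysql-pvc      # the PVC to snapshot
```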
C
The moment it has to take the snapshot of the PV, or persistent volume, it identifies the storage provider there, based on which it is able to identify the appropriate snapshot plugin and execute it, so that the snapshot is taken and stored in the storage cluster. Next slide, please.
C
This is what a typical backup request looks like in this particular setup. The application that makes the backup request first goes through Velero's APIs, and the call comes down to the Velero CSI plugin, which identifies the backup request and understands it: okay, this is the particular persistent volume whose snapshot has to be taken. Like I mentioned before, it identifies the appropriate provisioner for that particular volume, which in this case would be Ceph, and based on that it identifies the CSI plugin and issues a CSI snapshot to it. The resulting snapshot is saved in the Ceph cluster, or the storage cluster. That would be the generic flow with respect to creating a backup.
C
However, like Dylan mentioned, there is an evolving component called the data mover. The data mover would actually be the one that ensures these snapshots are copied out to a different target location, so that the data, or the snapshots, are actually backed up in a different location. This is an evolving component that is still being worked on, but this is the overall idea of what a data mover would look like in the overall picture of how an application backup is taken.
A
Thank you. Thanks, Dylan. I'm going to do a few slides here and then we're going to go into a demo. First is the OADP operator. Given the demo, I'm not going to install it; I just want to make it clear that if you deploy the community operator catalog, OADP is already in the community operators, and that's actually what I'm going to use in the demo. It's quite easy to deploy. Next slide, please.
A
You just basically install it, and if you're on OpenShift 4.5 it will actually create the namespace for you; once you hit subscribe, you basically have it installed.
A
Can everybody see this screen? Now that I'm sharing my screen again: I've already installed the OADP operator and OpenShift Container Storage.
A
We're going to look at the application that we're going to use for this. It has two pods, WordPress and MySQL, and in addition, this particular application has persistent storage supplied by OpenShift Container Storage and Ceph. The storage class that we use is for Ceph RBD, so these will be two block volumes that support this application.
A
Basically, the goal here is to see if we can back up and then restore this application. The front end of the application is WordPress.
A
I've already put in an article, and just to test when we actually do the backup and restore, I'll start a comment here, "this is great," and then we'll go back to looking into OADP and how we set this up. As Dylan mentioned, OADP has a lot of APIs, and the very first resource that we need to create is the Velero resource.
A
A couple of things to point out here: enable CSI plugin is true.
A
Also, down under object store: in this case the location for storage is going to be an S3 bucket on AWS, so I need to say what region it's in and give it my secret, which I create, and which has the credentials for my bucket. The other thing is, if you look down there at the very bottom, enable Restic is false, but I do have my default plugins, including CSI.
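The community OADP operator of that era configured Velero through its own CR; the sketch below reflects the settings just described, but the field names varied between operator versions, so treat this as illustrative rather than definitive:

```yaml
apiVersion: konveyor.openshift.io/v1alpha1
kind: Velero
metadata:
  name: example-velero
  namespace: oadp-operator
spec:
  enable_restic: false              # Restic disabled; CSI snapshots are used instead
  default_velero_plugins:
    - aws
    - openshift
    - csi                           # enables the Velero CSI plugin
  backup_storage_locations:
    - name: default
      provider: aws
      object_storage:
        bucket: my-velero-backups   # placeholder S3 bucket
      config:
        region: us-east-1           # placeholder region
      credentials_secret_ref:       # secret holding the bucket credentials
        name: cloud-credentials
        namespace: oadp-operator
```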
A
We can see that Velero is actually already running, and we can also take a look at it.
A
Let's take a look now at our WordPress project. Once we have the Velero resource running, we're ready to do a backup and restore. As Dylan said, that can be at the level of a namespace or of resources within the namespace. In this case we're going to do it at the level of a namespace. Right now we have no volume snapshots; they haven't been initiated yet, and down in the bottom terminal you can see there are some CSI volumes.
A
I have the ability, as Dylan showed, to include namespaces, and in this case my included namespace is just going to be wordpress. You could envision, if you had a lot of namespaces in your backup, you would list the namespaces at that point to include more than just one. I also uniquely named the backup.
A
The backup is a custom resource, and because of that, I can describe it in the OADP operator namespace. I can also see, on the bottom and in Ceph, that I now have a new snapshot, so that's already happened. As Raghu said, the snapshot is initiated from Velero, but the actual snapshot capability is a CSI snapshot. Ceph CSI supports snapshot and clone capability in OpenShift Container Storage 4.6, which is the next release, and the upstream Rook version 1.4 already supports snapshot and clone.
So,
looking
at
my
backup,
it's
currently
in
progress
and
if
I
look
now
in
my
for
for
the
snapshot
class,
so
this
is
really
the
glue
between
you
know
being
able
to
to
initiate
a
snapshot
and
get
a
snapshot
into
a
particular
storage
so
based
on
the
provisioner
of
the
the
pvcs
and
the
pvs.
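That glue is the VolumeSnapshotClass: its `driver` has to match the provisioner of the PVCs, and Velero's CSI plugin looks for a labeled class. A sketch for Ceph RBD, with a driver and class name illustrative of an OCS install:

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: ocs-rbd-snapclass                       # illustrative name
  labels:
    velero.io/csi-volumesnapshot-class: "true"  # lets Velero's CSI plugin select this class
driver: openshift-storage.rbd.csi.ceph.com      # must match the RBD provisioner of the PVCs
deletionPolicy: Retain                          # keep snapshot content if the object is deleted
```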
A
We now see, down in Ceph, that we've completed two snapshots, and if we go back and describe our backup, we should be able to see whether it's completed yet. And it has completed.
A
So at this point, in terms of what we've done, we know that we have the persistent data in Ceph. If we go now into the S3 bucket that I specified in the Velero resource, we can look to see, here under backup one, the metadata. Again, this is not the actual persistent data, but you'll notice there's a volume snapshots entry for backup one; that is the metadata for the PV and the PVC.
A
So it can be restored, and then the actual contents of the volume are restored from the snapshots in Ceph CSI. And just quickly: if we had a data mover available from a particular vendor, like Trilio or some other backup vendor, then they could move those snapshots off platform, and that would further protect your data.
A
So now that we have a backup, the thing we want to do is test restore. We're going to go ahead and delete the wordpress project, and it's deleting now. If we go back to our application, given that we've deleted it, the application is failing, which would be expected if you delete the namespace.
A
Given that we have now deleted it, we want to make sure that it's fully gone, so that when we do the restore we're doing it with the namespace not existing. If we go back here, we can see all the resources are gone, but just to make sure the project has terminated...
A
Let's go ahead and get the project; it's not around. Before we do the restore, let's put a watch on all the resources that will be in the new namespace. In this case we're using the same name for the namespace, wordpress, and what we'll see here is that right now we have none, because we just deleted them. We'll also put a watch on the PVCs.
A
Similar to backup, we're going to do it from the YAML view, and we're just going to add a few lines here. The first is the name of the backup. This is why it's important that the backup has a unique name, so you can go back to a particular point in time and restore from it. The other thing we want to do is say what namespace we want to include.
A
Then we can watch it; it's in progress, and actually it's already completed. We'll go back to our CLI, and almost before we could get back, you can see all the resources are actually in a running state. We've used the snapshots of the persistent data to put it back into the PV and PVC resources.
A
The pods are all running, so basically the application appears to be restored. Now we want to test it: refresh, and on my comment, I actually had not finished my comment, so I'll post my comment, and you can see that it did work. We're up and running on our application.
A
So what we've shown here today is how, with OADP combined with Ceph and Ceph CSI via OpenShift Container Storage or Rook-Ceph upstream, you can use the Velero APIs to initiate the CSI snapshots, using the backup and restore capability.
D
Right, it's always good to say thank you. That's always a nice one.
B
I'm sorry, I was on mute, but real briefly, I just want to bring up super quick: we did have a link here to the OADP operator on GitHub. I just want to pull it up. If you're interested in using this, we have instructions on installing it from OperatorHub, and if you want to install it outside of OLM, we have the instructions for that as well. The README here is a really good place to start if you want to try this out yourself.
Then
I
will,
I
don't
think
we
have
any
more
slides,
so
I
will
just.
D
I'll have you pop over to one other place too. Maybe if you go to velero.io as well, just so people who want more background on Velero can find it; you ran through a whole lot of acronyms and other things here, and I'm just trying to get all the resources collected. You also mentioned Restic earlier on, too.
B
Yeah, we didn't want to talk too much about Restic, because we wanted to focus heavily on the CSI use case, but it is interesting to look at if you do not have a CSI-enabled storage provider. A good use case for this is if you have Gluster, say, or you have NFS volumes and you can't swing the volume definition over; then Restic is a good fallback tool to use. There are instructions under the Velero docs on using Restic itself.
D
Cool, all right. I hadn't seen that one before, so that was a good one just in passing. I don't see any questions in the chat, and there are 24 or 25 of you at any given moment, so either you guys did an awesome job demoing things, or you've just shattered people's brains by facilitating awesome backups.
D
Hopefully we'll go with the shattering of brains. One of the things that I always ask people is: what's next? What's the next set of enhancements to this, and where do you want people to come and give you feedback if they're using it and catching up with you?
B
Yeah, thank you. The OADP operator GitHub is where we're tracking work in terms of enhancing the OADP API itself, which really is Velero with some additions on top of it. The data mover project is still in the design phase; we don't have a timeline on when that would actually be usable by anybody yet.
B
But if you are interested in providing feedback on that, talking in the OADP operator GitHub would probably work. We're also, obviously, in the CoreOS Slack, if that's a good place to chat with us. For anyone using the upstream effort, if you find bugs or issues, or have enhancements to the OADP API and the operator itself, please do go to that GitHub link and open up some issues.
A
Yeah, and I would mention: I showed basically OpenShift Container Storage, but really the components were from the upstream Rook-Ceph version 1.4, in terms of doing snapshot and clone using CSI.
A
So if you want to try it today, you would need to be using Rook-Ceph. We're looking to bring this downstream sometime around early November, in the next version of OpenShift Container Storage. Integration is also definitely going on with IBM Spectrum Protect Plus and other backup vendors that are looking to integrate at either the data mover level or even just the CSI capability for snapshot and clone, so they're building their own controllers and their own CRs to basically take the data and get it into their platform.
A
In addition to that, we're looking at doing some work with Cloud Pak for Data via IBM on their apps. They're going to develop the hooks that Dylan showed: before you do a backup you would have a pre-activity to freeze or to stop, and then a post-activity to start again. So in any of these things, I think the future development is really to get it into...
A
...more integrated backup and restore solutions, as well as possibly, in the future, even having one that's native to OpenShift; I think that is a possibility.
D
There is one question coming in: Juggender Singh is asking if he can use the data mover to copy the data from a snapshot to another SAN or NAS.
B
That is a design goal of the data mover project, yes, absolutely. Here's how existing data mover projects work; for example, Trilio's got an example of this, and there are others that I can't think of. The idea here would be that if you take a snapshot of a volume, you may want to export it into some common format.
B
Say, I don't know, a qcow2 image or something. Then you want to be able to take that more common formatted file, or the state of that application itself, and convert it onto some other storage provider. That is definitely a use case of the data mover project, yes.
B
Yep, that's actually how the data mover is intended to work, so that is absolutely another use case here.
D
I'm not seeing too many other questions here, so you must have done a really awesome, mind-blowing job, and I'm really thrilled with that. We may leave it at that today, unless there's something else that you'd like to showcase or talk about. I know Raghu has sort of dropped off the video, but I think we're nearing the end of this, and it's always wonderful to have you guys here.
D
I always learn something new and get my mind blown a little bit, and really, backups are the essential element of everything, so you guys are doing some great work there.
A
Yeah, thanks a lot, Diane, for inviting us. I think having this actually work today is great; as you're seeing, again, upstream it would be Rook-Ceph right now, but downstream we'll have this working, and it's pretty exciting that it's basically orchestrated now in OpenShift.
D
All right, thanks. I'll add those links, not in the Google doc but in the YouTube video that I'll upload, if you guys send me your slides; and if you could also send me the link to your video, that would be good.
D
That'd be great. I'll try and conjoin all of those things together for everybody here, who was probably taking copious notes, as I'm always trying to take screenshots and things like that to catch things. I'll have it up on the YouTube channel, that's RH OpenShift, probably in the next, I don't know, four or five hours, if all things collide well. Right now I'm co-hosting, in another screen, the Latin American OpenShift Commons gathering.
D
So if any of you are Spanish speakers, go to commons.openshift.org and look down the gathering list, and you can hear a lot of this stuff in Spanish, with subtitles whenever I'm speaking. Andrew Clay Shafer delivered an awesome keynote this morning, but he talks so fast the poor subtitle translators must have been going mad trying to keep up. So it's a busy day, and thank you, everybody, for paying attention and being here virtually with us, whether it was on live stream or Facebook or YouTube or Twitch.
D
Wherever you are watching, we really appreciate it, and we would love to get your feedback on this, as it is an evolving project.