A
Okay, thank you, everyone, for joining us today. Welcome to today's CNCF live webinar, "Kanister: Application-Level Data Operations on Kubernetes." I'm Libby Schultz; I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Michael Cade, Senior Technologist and Member of Technical Staff, and Pawan, Member of Technical Staff, both with Kasten by Veeam. A few housekeeping items before we get started: during the webinar you're not able to talk as an attendee, but there is a Q&A chat on the right-hand side of your screen. Feel free to drop your questions there and we'll get to as many as we can. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters.
B
Thanks, Libby. So this session is going to be focused on an open source project called Kanister. Just before we go into that, let's go through our introductions again. So again, I'm Michael Cade, a Senior Technologist; I basically sit within our product strategy group within Veeam. Obviously, a recent acquisition was Kasten, focused on Kubernetes data management. That's not what we're talking about today, but Kubernetes and cloud native has been my sole focus for the last six to eight months, and before that I'd been tinkering around, learning a little bit more about this space, for the last two or three years. So that's me in a nutshell. Pawan?
C
Hey y'all, I'm Pawan, and I'm a Member of Technical Staff here at Kasten by Veeam. For the last three and a half to almost four years now, I've been solving data management problems, mostly for stateful apps on Kubernetes.
B
Awesome. Pawan's going to be the one focused on giving you the deep dive into the open source project, and also on showing you how it works and what it does. I'm going to go into a little bit of the challenges and issues that we have in the broader data management space, and then how Kanister can help with some of those issues. So if we go to the next slide, Pawan.
B
What I really want to focus on here is the data management challenge around cloud native. First, I think from a day-one point of view, Kubernetes and cloud native deployment is relatively advanced: it's quite easy to go and deploy your Kubernetes clusters and your applications out there. Where we're seeing more of the challenges is around day two, whether that be data management, security, or observability.
B
And then as we look into the different applications that we have, especially around stateful workloads, we're going to have a level of responsibility to protect the data in different data services. Now, I'm not saying that we don't have these data services on other platforms, but there's a different way of looking at data services when it comes to Kubernetes, in how those data services are provisioned and then how they're protected from a data management point of view.
B
So some of the things that we have to consider are these layers of operations, which are broader than what we maybe once had in a virtualization world, for example. We're going to quickly go over some of those areas, and then I'll get into a little bit more detail around some of the options that we have when it comes to data management.
B
First and foremost (and this really applies to any of our platforms), I know a lot of us are on this learning journey around cloud native and Kubernetes, and it doesn't matter whether you're on virtualization, in a cloud-based environment, or on Kubernetes: you're going to start with some physical storage, a layer of storage where you're going to store some of your data. That'll be the spinning disk, the flash, the NVMe that is propping up the infrastructure underneath.
B
When we think about databases, we think about NoSQL and SQL, but we also have to think about the messaging queues and those batch processing workloads that require some stateful data that we have to consider and protect. At that point we start looking at how we protect that data, and how we do it specifically for each particular data service, whether it be MongoDB, MySQL, or any of the others that you see pictured there.
B
It could be an external data service such as Amazon RDS, but we still have to protect that workload, and we have to have a better understanding of the application that is using that data, so that we can protect all of the data and the application together. That leads us on to the fact that, especially in the Kubernetes world, a stateful application is made up of many different parts, whether it be config maps, secrets, services, application pods, deployments, stateful sets, etc., and they all play a huge part in making sure that we have that scalability and in how we leverage that stateful application. So there are a few differences when it comes to not only the data services but also the stateful application.
B
So now we've got different flavors of data management, and we need to understand what level of protection is enough. That's what we're going to talk about over the next couple of slides. First and foremost, we need to be able to take a backup, because we need to be able to restore if and when there is a failure scenario. What that failure scenario looks like, and what we can withstand within the environment, very much determines what level of protection you potentially need, along with the importance of that data. It's not just going to be one size fits all when it comes to Kubernetes data management: you're potentially going to have different applications that require different SLAs and different service requirements for getting that application back up and running as fast as possible if failure scenario A, B, or C were to happen. Then we've got added options around this because of the nature of cloud native workloads: we've got the ability to quite easily lift and shift, and migrate those workloads and applications from one cluster to another. That might be based on a failure again, but it might also be based on migration, on performance, on scalability, or on different options within certain public clouds. And then, in the same vein as that application mobility, we have the disaster recovery use case. Fire, flood, and blood all still happen from a platform perspective, whether it's Kubernetes or any other platform. So we have to consider that as well from a data management point of view: what happens if the worst were to happen in our production site and our production cluster were no longer available?
B
What does our disaster recovery plan need to look like, and how do we get that data from A to B so that we can have business continuity and keep things up and running? Then, as if that wasn't enough, we have the added complexity of compliance requirements and regulations about keeping data.
B
Let's go through some of these options for data management. I'll touch on some of their benefits, but also (maybe the best way to put it) some of the pitfalls of those angles on data management. If we go to the next slide, please.
B
First and foremost, I would say that a large percentage of the underpinning physical storage will carry some capability of leveraging storage-centric snapshots, regardless of what platform we're running on. But let's say we're looking at cloud native, we're looking at Kubernetes again: this is basically leveraging the underpinning storage system with no hook into the application itself.
B
It's very similar to pulling the power out of your desktop PC: when it comes back up, it's hopefully going to look and feel exactly like it did as it went down. But it's a very dirty way of getting a point-in-time, crash-consistent copy of that data. Now, depending on your data service, that might be sufficient.
B
That
might
be
enough
to
have
a
really
fast
recovery
point,
if
you're,
if
your
application
can
withstand
that
that
that
that
process
of
of
being
able
to
take
that
point
in
time
crash
consistent
copy,
but
obviously
it's
very
dependent
on
the
application
and
the
file
system
of
which
your
data
is
running.
Now.
B
So
yes,
it's
a
point-in-time
copy.
It's
crash
consistent,
which
is
great
if
your
application
can
withstand
that,
but
it's
not
going
to
give
you
any
transaction,
transactional
level
or
granularity
of
being
able
to
recover
that
data
plus.
If
your
failure
scenario
is
your
storage
system
is
now
no
longer
serving
data,
then
your
storage
snapshots
are
also
no
longer
storing
data.
So
the
the
word
of
warning
here
is:
if
this
is,
this
could
be
maybe
used
in
conjunction
with
some
other
methods.
B
This
might
be
a
a
valid
way
of
being
able
to
protect
that
that
workload.
So
if
we
go
down
to
the
the
the
next
one
now
this
is
where
it
starts
to
get
interesting,
and
this
is
where,
when
when
pavan
is,
is
talking
about
canister
later.
This
is
really
where
the
first
hook
comes
from
from
a
canister
point
of
view.
So
this
is
the
same
as
what
we
just
said
about
storage
snapshots.
B
However,
now
we're
going
to
actually
speak
to
the
application
and
the
data
service
and
we're
going
to
put
a
hook
in
there
so
that
we're
making
this
now
application
at
least
application
consistent,
so
that
we're
going
to
freeze
and
flush
the
data
services
layer
we're
going
to
initiate.
Then
the
storage
layer
snapshot
that
we
just
spoke
about
we're
going
to
unfreeze
the
data
services
layer
and
then
we're
going
to
record
that
completion
and
the
status
of
that
snapshot.
B
Again,
that's
going
to
give
us
a
really
fast
recovery
point,
but
now
we've
got
the
added
benefit
of
it
being
a
little
bit
more
consistent.
However,
we've
still
got
the
same
problem,
but
it's
on
the
same
storage
as
production.
So
at
this
point
we
probably
want
to
start
thinking
about
how
we
move
that
data
away
from
that
production
storage
system
as
well
into
a
different
media
type,
so
that
we've
got
a
a
copy
away
from
that
that
storage,
that
production
storage
system.
B
So
then,
if
we
start
looking
at
well,
how
do
we
do
that?
This
is
where
we
start
thinking
about
the
data
service
centric,
point
of
view
and
how
we
take
a
copy
of
that
data
and
then
potentially
store
that
into
maybe
a
repository
such
as
object,
storage
or
nfs,
or
a
file
based
location
just
somewhere
different
to
where
our
production
workload
resides.
B
Then there's the application-centric approach, and this is the focus on being able to capture everything under the application banner, as it were: the front end, the back end, but also the data service that we want to be able to leverage and restore from. What this gives us is freedom of choice when it comes to recovery. I want to be able to use those fast, application-consistent snapshots if it makes sense and the failure scenario doesn't involve an outage of my storage system.
B
But
if
not,
I
want
to
be
able
to
work
through
and
have
an
understanding
of
what
that
whole
application
looks
like,
especially
when
we
look
at
kubernetes,
where
a
you
could
have
hundreds
of
different
pods
and
hundreds
of
different
system
volumes
and
claims
around
that
that
that
hold
that
important
data
now.
This
will
give
us
that
that
level
of
consistency
and
that
flexibility
of
picking
and
choosing
what
we
actually
need
to
recover
and
the
granularity
around
that,
and
I
think
I
then
summarize
some
of
these
bits
in
the
next
slide.
B
So
there's
four
options:
there's
actually
one
more
that
we
could.
We
could
have
gone
into
around
a
dirty,
read
and
around
that
that
aspect,
but
ultimately
we
and
it
will-
it
will
depend
on
what
that
data
service
looks
like
as
to
where
and
what
your
data
management
strategy
looks
like
for
your
workloads.
B
If
a
storage,
centric
snapshot
approach
is,
is
enough
for
that,
and-
and
it's
going
to
give
you
a
copy
of
that
data-
a
very
fast
recovery
point,
but
you
don't
have
the
requirements
to
have
that
off
site
or
have
that
at
least
on
a
different
media
type
away
from
that
production.
B
Just
in
case
the
production
storage
was
to
fail,
then,
if
it
is
an
application
that
requires
that
that
post,
freeze
and
post
thor
type
operation,
then
we've
got
the
ability
to
be
able
to
look
into
that,
or
is
that
the
the
capability
of
the
strategy
that
we
need?
Then?
B
If
we
do
need
to
take
it
out
of
band
and
onto
a
different
storage
layer,
then
we
can
do
that
by
being
able
to
take
a
copy
of
that
data
and
storing
that
in
our
object,
storage
in
our
file
system,
external
from
the
production
storage
and
then
a
full-blown
like
overview
of
your
whole
application
is
really
focusing
on.
How
do
I
protect
the
whole
application
at
the
same
time
to
give
us
all
of
the
options
around
flexible
recovery?
B
Now,
that's,
hopefully
bringing
up
the
the
data
management
challenge
that
we
have
both
from
a
wider
platform
point
of
view,
but
also
from
a
kubernetes
point
of
view,
cloud
native
point
of
view.
But
I
think
what
what
we
should
do
now
is
is
maybe
take
another
look
into
the
need
for
application
consistency,
and
with
that,
I'm
going
to
hand
this
over
to
pavan.
Who's
who's
been
heavily
involved
from
a
from
a
canister
project,
point
of
view,
and
he
can
explain
a
lot
in
a
lot
more
detail
than
I
can
around.
C
So we would be able to protect those as well, and at the same time, I think we discussed some of the hooks that we can have to freeze and unfreeze the data services.
C
And
finally,
we
could
also
be
requiring
I,
I
think,
advanced
scenarios
where
we
need
to.
I
mean
an
example:
here
is
mongodb
secondaries.
If
we
have
a
replica
set
with
multiple
nodes
and
multiple
clusters
here,
we
would
want
to
take
backup
of
secondaries
and
stuff
like
that.
C
They
all
have
the
same
requirements
of
protecting
the
application,
but
they
don't
have
the
same
expertise.
So
a
cluster
admin
may
not
know
the
internal
workings
of
a
database.
Always
how
do
you
generally
put
together
all
these
concerns
and
have
like
a
single
way
of
protecting
all
kinds
of
applications?
So
they
are.
I
mean
these
are
some
of
the
complex
workflows.
C
Then,
once
you
have
figured
out
how
to
protect
an
application,
then
you
have
different
moving
parts
like
in
terms
of
infrastructure.
You
could
be
using
an
object,
store
or
you
could
be
using
a
vendor
targets
or
a
file
storage
for
your
backups.
Then
again,
we
spoke
about
types
of
backups.
Some
someone
may
want
to
use
like
logical
dumps
or
logical
backups
of
the
data
service,
or
someone
else
would
want
to
use
volume
snapshots,
while
also
doing
that
we
would
want
to
handle
the
lifecycle
of
an
application.
C
What,
if
the
the
workload
is
up
or
it's
down
during
the
backup?
What
when
do,
we
need
it
to
be
running,
and
when
do
we
need
it
to
be
like
frozen
or
scaled
down
in
terms
of
kubernetes
workloads,
so
bringing
this
all
together
like?
C
If
we
think
about
all
these
requirements
and
the
workflows,
we
came
up
with
canister
to
kind
of
put
all
these
together
in
one
particular
framework
to
allow
all
different
users
and
different
goals
to
be
accomplished
using
like
a
single
mechanism
and
as
it
says
here,
if
we
want
to
capture
different
requirements
from
different
experts
across
the
infra
team
or
the
developers
or
the
database,
that
means
we
want
to
provide
a
way,
a
common
way
to
perform
like
backup
and
recovery
tasks
across
these
teams
and
also
be
able
to
share
each
like
share
their
workflows
with
each
other
and
extend
them
in
case
they
require
that
so
bringing
this
all
together.
C
I
think
canister
is
a
tool
that
allows
these
things
to
work
seamlessly
in
kubernetes
way
or
in
a
standardized.
Api
could
be
used
to
do
these
things,
so,
let's
actually
move
into
canister
and
discuss
more
about
canister.
C
What
is
canister?
It's
it's
a
framework
for
application
level,
data
management.
It's
mostly
made
up
of
four
main
components:
the
canister
controller,
blueprints,
action
sets
and
profiles,
so
the
canister
controller
is
nothing
but
it's
based
on
kubernetes
operator
pattern.
It's
mostly
responsible
for
the
state
management
of
these
custom
resources
that
we
have
here.
The
blueprints
action
sets
and
profiles
and
a
blueprint
is
like.
We
discussed
it
kind
of
defines
the
workflows
for
your
backup,
restore
or
delete
operations.
C
It
could
be
other
operations
as
well,
which
we'll
see
later,
but
mostly
if
we
want
to
define
backup,
workflows
for
a
particular
data
service
or
a
particular
workload
on
kubernetes
blueprints
are
used
for
that
now,
once
we
have
operations
and
workflows
defined
in
blueprints,
we
use
action,
sets
to
run
those
actions
or
mostly
to
inform
the
canister
controller
on
which
action
to
run
and
from
which
blueprint
on,
let's
say
on
which
work,
workload
and
stuff
like
that
now.
Finally,
we
also
have
profiles.
C
Apart from the components that we just discussed, we also provide a couple of command-line tools along with Kanister. kanctl is a small tool that we can use to create these CRs; mostly ActionSet and Profile CRs can be created using kanctl.
C
Now
we
also
have
this
tool
called
can
do.
This
is
mostly
used
to
move
data
to
and
from
an
object
store
location.
So
it's
generally
used
inside
blueprints
and
requires,
like
a
container
a
specific
container
called
canister
tools
to
run
this.
C
So
we
have.
We
have
seen
all
these
different
components.
We
can
actually
go
through
some
examples
and
dive
deep
into
some
of
these
specific
components.
Now.
What
we
see
here
is
an
example
blueprint
it's
a
simple
blueprint.
I
haven't
added
a
very
complex
workflows
here,
so,
as
I
described
earlier,
blueprint
is
used
to
tell
the
canister
controller,
how
to
backup
or
restore
an
application.
C
Now
this
is
done
through
actions
and
these
actions
actually
contain
one
or
more
phases,
and,
as
we
see
here,
each
phase
can
have
like
a
canister
function.
C
It
is
a
primitive
that
we
use
to
execute,
let's
say
backup,
sorry
bash
scripts
or
shell
scripts,
or
it
could
also
be
used
to
take
volume,
snapshots
and
stuff
like
that.
I'll
cover
canister
functions
in
in
some
time,
but
let's
go
through
this
example
here.
What
we
see
is
is
a
mongodb
blueprint
and
the
main
action
shown
here
is
a
backup
action,
and
here
the
output
artifacts
are
mostly
used
to
store
state
from
a
backup
action.
C
So
once
we
execute
any
backup
action,
we
would
want
to
store
some
state
in
in
terms
of
most
of
these
data
service,
backups
that
we
have
here,
it
would
most
likely
be
a
path
in
in
from
our
object
store.
So
the
path
we
see
here
is
the
paths
that
we
can
find
inside.
Our
object
store
bucket
that
I
I'll
cover
how
we
can
provide
that.
But
that
is
what
we
are
storing
here
and
the
phases
we
see
here.
C
There
is
a
single
phase
and
the
function
the
canister
function
called
cope
task
is
used
here
now
the
function
itself
spawns
a
pod
in
the
name
space
that
is
provided
here
and
with
a
container
of
image.
That
is
also
provided
here
and
finally
executes
the
bash
command
that
we
have
provided
so
the
command.
Here
you
can
see
we
are
using
dump
here
to
capture
the
data
from
the
mongodb
replica
set.
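The Blueprint just described can be sketched roughly as follows. This is a minimal illustration modeled on the MongoDB example in the Kanister docs, not the exact manifest shown on the slide; the image tag, bucket path, and service hostname are assumptions:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mongodb-blueprint
  namespace: kanister
actions:
  backup:
    # State saved after the action completes; here, the object-store path of the dump
    outputArtifacts:
      cloudObject:
        keyValue:
          path: "/mongodb-backups/{{ .StatefulSet.Name }}/rs_backup.gz"
    phases:
      - func: KubeTask            # spawns a pod to run the command below
        name: takeConsistentBackup
        args:
          namespace: "{{ .StatefulSet.Namespace }}"
          image: ghcr.io/kanisterio/mongodb:0.68.0   # assumed tools image
          command:
            - bash
            - -o
            - errexit
            - -o
            - pipefail
            - -c
            - |
              host="{{ .StatefulSet.Name }}.{{ .StatefulSet.Namespace }}.svc.cluster.local"
              # Stream a consistent logical dump straight to the object store via kando
              mongodump --host "${host}" --gzip --archive \
                | kando location push \
                    --profile '{{ toJson .Profile }}' \
                    --path "/mongodb-backups/{{ .StatefulSet.Name }}/rs_backup.gz" -
```

Note how the Go templates (`{{ .StatefulSet.Name }}` and so on) keep the Blueprint generic across workloads, and how `kando` consumes the Profile to reach the object store.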
C
So
this
is
a
simple
blueprint.
Now,
once
we
have
the
blueprint
defined,
how
do
you
tell
the
controller
that
we
want
to
execute
a
particular
action
from
a
blueprint?
So
that's
when
we
deploy
an
action
set
now
again,
if
you
see
the
spec
of
the
action
set,
it
contains
mostly
details
about
what
action
to
run
from
which
blueprint
or
the
subject
for
the
action.
C
In this example, we are selecting the backup action from the MongoDB Blueprint we just saw, and then we are selecting a resource to run the action on: the StatefulSet of the MongoDB replica set, assuming it's deployed in the mongodb namespace.
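Such an ActionSet can be sketched as below; the resource and Profile names are illustrative assumptions carried over from the Blueprint sketch:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  name: mongodb-backup-1
  namespace: kanister            # ActionSets live in the controller's namespace
spec:
  actions:
    - name: backup               # action to run...
      blueprint: mongodb-blueprint   # ...from this Blueprint
      object:                    # subject of the action
        kind: StatefulSet
        name: mongodb-rs
        namespace: mongodb
      profile:                   # where the backup data should land
        name: s3-profile
        namespace: kanister
```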
C
Here the example Profile is being used. Once this ActionSet is submitted to the Kanister controller and the action is executed, the controller sets the status section of the ActionSet: it records whatever we provided in the Blueprint's output artifacts (it stores the artifact value there), and at the same time it shows us the progress of each of the phases from the Blueprint.
C
If a particular phase is in progress or completed, for every change in state we see an update in the ActionSet. Moving on to Profiles: we just saw how a Profile is used to define a target location for our operation, but what does it contain? If we look at this example, the Profile itself contains two main components.
C
First, an object store location; in this case it's an S3-compliant (here, an Amazon S3) bucket called kanister-backup. Then, once we have this bucket, we also need credentials to communicate with it, and that's where the credential section comes in. There are a few different ways of providing credentials, but the one I have used here is called a key pair.
C
It's
selecting
the
ids
that
we
id
field
and
the
secret
field
that
we
have
provided
from
the
secret
reference
that
we
can
see
here.
So
in
this
case
it's
actually
taking
the
credentials
from
the
example
key
id
and
example
secret
access,
key
that
you
would
be
able
to
find
in
the
example
secret.
C
So
if
we
go
and
dig
into
the
secret,
we
would
see
those
fields
and
the
value
set,
so
it's
kind
kind
of
a
secure
way
to
provide
credentials
so
that
they
are
not
exposed
anywhere.
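A Profile along these lines can be sketched as follows; the region and the Secret name are hypothetical, and the field names follow the Kanister v1alpha1 Profile CR:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Profile
metadata:
  name: s3-profile
  namespace: kanister
location:
  type: s3Compliant
  bucket: kanister-backup
  region: us-east-1                    # assumed region
credential:
  type: keyPair
  keyPair:
    idField: aws_access_key_id         # key in the Secret holding the access key ID
    secretField: aws_secret_access_key # key in the Secret holding the secret key
    secret:                            # reference to the Secret, not the values themselves
      apiVersion: v1
      kind: Secret
      name: s3-credentials             # hypothetical Secret name
      namespace: kanister
```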
C
Now that we have seen all the different components, how do they interact with each other during the execution of a particular action? Assume a database workload, a Blueprint, and the Kanister controller are already deployed on a cluster. How do we back up this particular database workload?
C
The
first
thing
we
would
do
is
create
an
action
set
and
like
we
saw
in
the
example,
the
action
set
should
be
defining
the
action
from
this
blueprint.
It
should
select
this
blueprint
and,
let's
say,
a
backup
action
from
this
blueprint,
and
it
also
needs
to
provide
the
database
workload
as
its
subject
and
also
select
a
target
destination
if
required.
C
Now,
once
the
action
set
is
created,
the
controller
which
is
constantly
watching
or
polling
for
creation
of
action
sets,
looks
at
it
and
finds
all
the
actions
and
the
blueprint
that
we
have
provided
there
and
it
goes
and
fetches
the
action
from
that
blueprint.
C
So
here
one
more
thing
we
saw
was
that
the
namespace
is
provided
as
a
go
template.
This
is
actually
a
way
to
kind
of
generalize
a
blueprint.
We
can
have
a
single
blueprint
and
we
can
use
that
blueprint
across
different
objects
in
the
cluster.
So
if
we
have
multiple
deployments
of
mongodb
replica
set
in
the
cluster,
the
same
blueprint
can
be
used
for
that.
So,
like
I
said
it
goes
and
fetches
this.
Whatever
action,
the
action
set
provides
and
it
kind
of
renders
all
these
go
templates.
C
Once all of these phases are executed and the data is moved out of the cluster into the object store, the controller comes back and sets the status on the ActionSet, as we saw in the example. The status will then have the location information for the snapshot in the bucket, and the controller also continuously updates each phase's status. That's the whole workflow: once we have a successful ActionSet, we know that the backup has been taken successfully.
C
So
we
have
seen
how
canister
works.
In
theory,
we
can
actually
look
look
at
a
live
example
and
see
how
we
can
use
canister
to
protect
mongodb
replica
set
on
a
live
cluster.
Let
me
actually
share
my
screen.
C
I have a GKE cluster on Kubernetes version 1.21, and I've also deployed MongoDB here.
C
Yeah,
so
things
are
running
fine
here.
What
we
can
do
is
I
have
not
added
any
data
here,
so
we
could
go
ahead
and
add
some
data
there.
C
What
I'll
do
is
I'll,
execute
or
run
coke
cutter
exec
into
the
pod
that
we
see
here,
and
I
think
it
should
have
a
client
there
which
I'll
use
to
execute
or
add
some
data
there.
So
I'm
creating
a
database
with
some
restaurant
entries
here.
C
Yeah,
so
we
see
four
entries
here,
so
the
database
is
set
up
with
some
data
and
it's
actually
running
on
this
cluster.
Now
what
we
can
do
is
actually
see
how
simple
it
would
be
to
deploy,
canister
and
protect
this
data
database.
C
So
the
namespace
got
created
now.
If
we
see
canister
documentation,
we
do
provide
commands
to
deploy
canister
using
helm.
I'm
actually
copying
the
command
from
there
and
let's
just
install
that.
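The install step can be sketched like this; the chart repo and chart name follow the Kanister docs, while the namespace and version tag are the ones used in this session and may differ in your environment:

```bash
# Add the Kanister Helm repo and install the controller into its own namespace
helm repo add kanister https://charts.kanister.io/
helm repo update
kubectl create namespace kanister
helm install kanister kanister/kanister-operator \
  --namespace kanister \
  --set image.tag=0.68.0

# Confirm the controller pod is running
kubectl get pods --namespace kanister
```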
C
Okay,
so
we
just
installed
canister,
you
can
check
if
all
the
pods
are
up
there
yeah.
So
the
controller
is
running
in
this
canister
name.
Space.
C
Next
thing
I
would
do
is
just
install
the
tools
the
can
cuttle
and
can
do
tool
that
I
talked
about
earlier.
There
is
an
a
simple
command
to
actually
just
install
these.
I'm
going
to
run
that
and
let's
see.
C
Cool,
so
I
think
we
have
those
installed
now.
Let's
confirm
that
and
yeah
I
I
provided
this
0.68.0,
which
is
our
most
recent
release,
so
so
we
have
cancel
from
that
version
now.
One
thing
we
talked
about
was
a
profile
or
a
destination
for
these
backups.
C
I
have
already
set
up
a
bucket,
so
it
should
work
yeah,
so
the
profile
got
created
and
the
secret
we
see
here
is
nothing
but
the
secret
where
our
credentials
are
stored
and
the
profile
just
references
that
secret.
C
You can check what phases the Blueprint has. The one I showed before was a simpler version of the same Blueprint. We see here that it has a backup action and, similar to what we saw in the example, an output artifact. Under the phases, the KubeTask phase itself is called "take consistent backup"; it's using mongodump here, and once mongodump creates a snapshot of the database, it's pushed to the object store.
C
There's also a delete action here in the Blueprint. This can be used to delete the snapshots that you have taken in the backup phase. One thing that both restore and delete have in common is the input artifact: the artifact that we created in the backup stores the location of the backup, and we provide that to these two actions as part of their input artifacts.
C
I'll be using the kanctl tool to create an ActionSet. It has a command called create, and we provide actionset as the option there. I can select the action, select which namespace the ActionSet should be deployed in, and I'm also selecting the Blueprint. The StatefulSet acts as the subject for the Blueprint action, so I selected my MongoDB replica set, and finally we also provide a Profile if required.
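The kanctl invocation just described can be sketched as follows; the flag names follow kanctl's documented options, and the Blueprint, StatefulSet, and Profile names are the illustrative ones used above:

```bash
# Create a backup ActionSet against the MongoDB StatefulSet
kanctl create actionset \
  --action backup \
  --namespace kanister \
  --blueprint mongodb-blueprint \
  --statefulset mongodb/mongodb-rs \
  --profile s3-profile
```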
C
Okay, cool; it looks like it's already done with the backup. Just to explore the ActionSet: what we have here is the Blueprint, the action, the object that we selected, and the Profile that we provided. If you see the status section, it's actually complete; the phases from the Blueprint are now complete, and we also add events at regular intervals so you can see how the ActionSet is progressing.
C
If we look here, this file just got created when we ran the ActionSet. So now everything is set up and the backup is done. What I'll do next is simulate a failure, a disaster.
C
And let's verify that everything is gone: it looks like the table is gone now. So how do we recover this?
C
So
there
is
an
easy
way
to
create
an
action
set
actually
to
to
run
the
restore
operation
from
the
previous
backup
that
we
just
took
so,
let's,
let's
again
use
a
can
cuttle
tool
to
create
that,
and
if
you
see
here.
C
What
we
have
here
is
a
from
flag
that
you
can
provide
and
let's
take
the
backup
name
here.
So
what
it
does
is
takes
the
output
artifacts
from
the
previous
action
and
provides
that
as
input
artifacts
into
the
restore
action.
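That restore step looks roughly like this; "mongodb-backup-1" stands in for whatever name the backup ActionSet actually received:

```bash
# Restore from the earlier backup ActionSet; --from wires its output artifacts
# into the restore action's input artifacts
kanctl create actionset \
  --action restore \
  --namespace kanister \
  --from mongodb-backup-1
```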
C
Cool,
so
we
see
here
that
it
completed
the
phase
from
the
restore
action.
It's
called
pull
from
blob
store
again,
it's
use
the
s3
profile
that
we
provided
in
the
backup
and
the
same
stateful
set
subject
and
the
location
which
it
got
from
the
output
artifact
from
the
backup.
C
So
I
mean
that
was
pretty
much
it.
That
was
how
we
could
just
restore
the
mongol
in
case
of
a
in
case
of
a
disaster.
It
was
as
simple
as
that.
Just
once
you
have
the
blueprint,
it's
all
about
creating
action
set
and
executing
these
actions.
C
There's
also
one
more
action
that
we
have
the
delete
action
that
we
saw
in
the
blueprint.
I
can
actually
show
that
as
well,
so
if
we
just
use
the
can
cutter
to
create
action
set
for
action,
restore
it's,
this
is
very
similar.
In
this
case,
we
again
use
action
as
delete.
Instead,
let's
provide
the
profile
and
let's
provide
the
from
that
we
used,
which
is
nothing
but
the
backup
action
that
created
the
output.
Artifacts.
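The delete step can be sketched like this, reusing the same hypothetical backup ActionSet name:

```bash
# Retire the backup's artifacts. Delete needs no workload subject, so
# --namespacetargets tells Kanister where to spin up the helper pod instead.
kanctl create actionset \
  --action delete \
  --namespace kanister \
  --from mongodb-backup-1 \
  --namespacetargets kanister
```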
C
So
one
more
thing
to
notice
here
is
that
we
run
these
delete
operations.
We
don't
need
when
we
run
these.
We
don't
need
a
subject,
so
we
don't
really
have
to
provide
this
mongodb
as
a
subject.
Instead,
you
can
actually
say,
since
it
was
a
coupe
task,
that
spins
up
a
port,
you
can
select
the
name
space
where
that
pod
has
to
come
up.
That
is
the
that
is
used
by
or
that
is
provided
by
this
flag
here,
which
is
a
namespace
target.
C
We see now that the file is gone. The delete operation is useful when we want to maintain a certain number of snapshots: if you want to delete or retire some of the older snapshots, you can use these delete operations. So that was pretty much it; we saw how we used Kanister to recover from our disaster on MongoDB.
C
If
we
see
here,
we
have
a
lot
more
functions
and
options
available
to
be
used
in
the
blueprints
for
executing,
like
shell
scripts
or
like
bash
commands
and
stuff,
like
that,
we
can
use
these.
These
three
functions
that
we
see
here
under
custom
logic.
Coupe
exec,
is
one
of
the
more
important
ones
here.
C
This
actually
works
as
if
you
were
running
coop,
cuddle,
exec
on
a
particular
pod
and
a
container,
but
through
a
blueprint,
so
you
can
automate
that
process
and
provide
a
command
to
execute
on
a
particular
container
or
a
or
a
port
now
for
a
resource
lifecycle
where
we
talked
about
earlier.
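A KubeExec phase can be sketched like this; the container name and the fsyncLock command are illustrative assumptions (a typical way to quiesce MongoDB before a snapshot), not something shown in the demo:

```yaml
phases:
  - func: KubeExec            # runs a command in an existing container, like kubectl exec
    name: lockMongo
    args:
      namespace: "{{ .StatefulSet.Namespace }}"
      pod: "{{ index .StatefulSet.Pods 0 }}"    # first pod of the StatefulSet
      container: mongodb                         # assumed container name
      command:
        - bash
        - -c
        - mongo --eval "db.fsyncLock()"          # e.g. quiesce writes before a snapshot
```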
C
If
we
want
to
scale
scale
up
or
down
a
particular
workload,
we
have
functions
for
that.
Then
we
have
functions
that
handle
pvcs.
Here
we
see
backup
data
restore
data.
These
functions
mostly
go
and
perform
volume
or
file
system
based
operations.
C
So
we
could
mount
a
pvc
on
a
particular
pod
and
go
into
the
pvc
volume
volume
there
and
whatever
file
system
it
has
underlying
there.
We
could
run
some
operations
on
that.
Then
there's
also
functions
to
automate
the
process
of
creating
volume
snapshots.
C
These
could
also
be
used
in
a
blueprint
like
with
other
restore
and
delete
snapshot
functions.
Now
we
talked
a
bit
about
rds.
C
There's one more function we have for the case where you want to move your data out of RDS into, say, some other provider where you have deployed Postgres. This function is actually helpful there: you can take the data out and move it into a non-RDS Postgres as well. So those were some of the functions that we already have.
C
The
point
I
have
here-
s3
compliant-
it
just
means
we
can
have
something
like
mineo
or
something
that
is
compliant
to
s3
apis.
C
Then,
if
we
do
want
to
take
snapshots
of
entry
providers
or
volumes
that
we
have,
I
mean
we
spoke
about
storage
centric
snapshots.
There
are
helpers
out
there
in
canister,
so
canister
can
actually
be
used
as
an
sdk,
and
these
helpers
or
the
functions
that
we
have
for
creation
of
snapshots
or
creation
of
volumes
from
snapshots.
They
can
be
used
in
whatever
software
you're.
Using
I
mean
as
long
as
it's
golang
based
you
can
import
canister
and
use
these
functions,
so
that
was
most
of
what
I
wanted
to
cover.
Michael.
C
You
want
to
talk
about
some
of
the
new
features
that
we
are
coming.
I
mean
that
are
coming
in
the
near
future.
B
Yeah,
I
think
awesome,
demo
and
and
really
deep
dive
there
paving
really
good.
So
one
of
the
things
that
we're
working
on,
if
they're
not
already
in
the
in
the
in
the
project,
is
different
file
storage
destinations
for
different
backups.
B
Obviously,
if
we're
moving
data
from
a
to
b,
we
want
to
be
focusing
on
one
security
around
encryption,
secondly,
dedupe
and
compression
so
that
we
can
get
data
much
more
streamlined
and
efficient
from
a
to
b
and
then
also
other
data
services,
seeing
a
an
increase
of
data
service
operators
out
there,
one
being
case
sandra,
but
there's
others
out
there
as
well,
that
that
canister
has
the
ability
to
to
start
protecting
as
well
so
they're,
either
on
the
road
map
or
recently
added
to
to
canister,
and
I
think,
as
a
takeaway.
B
The
next
slide,
I
think,
is
just
maybe
let's
go
to
the
next
one.
Haven't
it's
just
easier
to
to
shout
this
one,
I
think
from
from
our
point
of
view,
all
the
slides
will
be
available.
B
I
think
my
biggest
ask
is:
take
a
look
at
the
project,
see
what
how
it
can
help
you
feedback
contributions
like
raise
an
issue.
Give
us
some
give
us
some
ideas
about
where
wait
where
it
could
be
used
where
you're
using
it
spreading
the
word,
but
then
also
just
understanding
what
data
management
tasks
are
out
there
and
how
and
when
to
when
to
choose
canister
or
to
potentially
look
at
other
data
management
tools
in
that
in
that
area,
and
I
think
with
that
we
can.
We
can
close
out.
A
I have posted our public Slack channel for online programs in the chat, so if anyone wants to continue the conversation after this, feel free to hop into that and send any other questions you have. Thank you so much, Michael and Pawan, for a great presentation, and unless there's anything else, we will see y'all next time.