A: All right, welcome to today's CNCF live webinar, "Enhancing Data Protection Workflows with Kanister and Argo Workflows." Thank you for joining us. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Ivan Sim, Software Engineer, and Michael Cade, Senior Global Technologist, both with Kasten by Veeam. A few housekeeping items before we get started.
A: During the webinar, you are not able to talk as an attendee, but you can send us messages and questions through the chat box on the right-hand side of the screen. Please feel free to drop your questions there; we'll get to as many as we can at the end, or as the speakers see fit as we go. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct.
A: Basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io, under Online Programs. They're also available via your registration link, and the recording will also be made available on our Online Programs YouTube playlist.
B: Thanks, Libby. Let me share my screen here.
B: So can you all see my slides? Yep? Okay, cool, awesome. Hey everyone, thanks for taking the time to be here; it's good to be able to talk to you today. As mentioned earlier, my name is Ivan Sim. I'm an open source software engineer at Kasten, and joining me today is Michael Cade, Senior Technologist from Kasten.
B
So
today
we
will
be
talking
about
data
protection
workflows,
so
we're
going
to
start
off
by
talking
a
little
bit
about
running
stateful
workloads
on
kubernetes
and
why
we
would
want
to
do
that
then,
from
there
we
will
have
a
brief
introduction
into
kubernetes
container
storage
interface,
also
known
as
csi,
and
how
data
protection
fits
into
csi.
B
So
why
run
state
for
workloads
on
kubernetes
during
the
early
days
of
kubernetes,
many
of
us
were
told
to
use
kubernetes
to
primarily
run
like
stateless
workloads,
even
though
kubernetes
came
with
a
collection
of
apis
and
constructs
to
support
like
stateful
workloads,
so
we're
talking
about
going
back
to
the
days
of
pet
sets
before
it
was
even
called
like
staple
sets,
and
we
back
then
we
were
told
that
when
it
comes
to
stateful
workloads,
we
would
be
better
off
to
use
like
our
management
services
and
fast
forward
to
today.
B: Today, we want control over the compute specification, maybe the compute size in terms of CPU or memory specifications, or IOPS, and everything and anything in between; it really boils down to control. We also like to utilize and depend directly on the Kubernetes-native API, and we're not even talking about stateful-workload-related APIs here; we're talking about scheduling APIs like pod disruption budgets, pod affinity and anti-affinity, resource requirements and limits, load balancing, etc.
B
For
some
of
us,
like,
we
may
have
more
stricter
requirements
on
the
at
rest,
encryption
to
be
used
for
our
data,
and
they
might
also
like
I'm
stricter,
like
data
sovereignty,
regulation
that
we
need
to
comply
with
at
the
end
of
the
day
it
really
boils
down
to
like
who
and
how,
how
and
where
we
handle
and
store
our
valuable
customers
data
and
who
owns
like
the
the
backup
artifacts
of
all
this
data.
B
So
when
we
use
like
a
managed
data
services,
inevitably
like
our
data
protection
strategy,
would
be
like
coupled
to
the
stack,
the
api,
the
libraries
of
the
providers.
Now
this
is
not
necessarily
a
bad
thing.
You
know
it
really
boils
down
to
like
our
use
cases
and
our
requirements
for
some
of
us.
I
can
have
such
direct
dependency,
maybe
okay,
but
some
of
us-
maybe
not
so,
and
even
come
from
talking
to
our
users
again
one
of
the
things
that
they
did
share
with
us
about
running.
B
Staying
for
what
losses
like
and
then
we
gain
that
visibility
and
control
into
the
upgrade
mechanism
and
in
cases
where
things
fail
during
an
upgrade
the
recovery
and
the
rollback
is,
is
something
that
they
may
have
control
of.
Like
you
know,
when
they,
you
know,
run
like
stateful
workloads
themselves
on
kubernetes.
B: The emergence of the operator pattern has helped us out a lot in terms of automating the installation, deployment and day-two operations of data services and databases. And if we think about it, it makes sense, right? One of the main goals of an operator is to be able to encapsulate all the specialized knowledge we have about the applications and the data services, codify it, automate it, and share it with the rest of the team and the rest of the community. There has also been tremendous, continuous growth and improvement of the Kubernetes Container Storage Interface, also known as CSI; we'll talk more about CSI in the next slide. And we as a community have grown so much in terms of our experience and our expertise with running and managing cloud native stateful workloads compared to the early days.
B
We
as
a
community
are
now
more
confident
in
our
ability
to
debug
like
on
containerized
some
safer
workloads
and
manage
them
and,
like
I'm,
spoiler,
alert
like
if
we
are
using
manage
their
services
underneath
it
like
this
data
managed
this
manager
services
are
probably
running
on
kubernetes.
B: So CSI primarily manages the volume lifecycle with out-of-tree CSI drivers. Within the CSI framework, there is a collection of optional sidecar containers managed by the CSI community, and these sidecar containers basically encapsulate common storage operations that you can embed and bundle with your CSI drivers.
B
So,
for
example,
if
you
are
storage
providers-
and
you
have
like
a
collection
of
storage
features-
you
want
to
expose
to
your
users,
you
will
implement
your
csi
drivers
and
then
you
will
go
to
this
some
collections
of
sitecar
containers
and
pick
and
choose
and
bundle
them
with
your
csi
drivers
to
expose
the
kind
of
features
that
you
want.
Your
users
to
have
access
to
and
upstream,
like
within
the
kubernetes
community.
B
There
have
been
a
lot
of
effort
and
push
towards
like
moving
away
from
the
entry
like
on
volume,
plug-ins
that
come
with
kubernetes
to
all
these,
like
out
of
three
csi
drivers
and
some
of
the
benefits
of
this.
This
push
is
pretty
clear
right
like
so,
for
example,
like
we
as
users
and
implementers,
we
are
able
to
test
and
maintain
and
upgrade
and
grow
the
csi
drivers
outside
of
the
gun.
The
release
cycles
of
the
kubernetes
core.
B
Now
some
of
us
might
already
be
familiar
with
csi
for
those
of
us
who
are
fairly
new
to
it.
I
want
to
just
share
some
quickly
some
useful
features
within
the
csi
framework
that
includes
like
a
volume
expansion
like
resizing
after
volumes
via
the
pvc
api,
snapshotting
of
volumes,
cloning
of
volumes
and
initializing
volumes
with
initial
data
using
data
populators
and
more
recently
like
we
have
also
seen
like
csi
driver
that
is
capable
of
mounting
secrets
from
your
secret
stores
into
your
running
workload
via
like
csi
volumes.
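As a concrete illustration of the snapshotting feature mentioned above, a CSI snapshot is requested declaratively through the VolumeSnapshot API. The sketch below is a minimal, hypothetical example; the snapshot class and PVC names are made up:

```yaml
# Minimal sketch: request a CSI snapshot of an existing PVC.
# The class and PVC names are hypothetical examples.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snapshot
  namespace: demo
spec:
  volumeSnapshotClassName: csi-ebs-snapclass   # must reference an installed CSI driver
  source:
    persistentVolumeClaimName: postgres-data   # the PVC to snapshot
```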
B: So you can export your backup artifacts, and there are APIs to snapshot and back up applications.
B
So
if
data
protection
is
like
relevant
to
the
thing
that
you
do
on
a
daily
basis,
I
encourage
you
to
join
like
the
slack
channel
of
the
working
group
in
the
kubernetes
slack,
and
I
have
also
shared,
like
the
link
to
the
white
paper
so
like
feel
free
to
download
it
and
give
it
a
re
at
your
own
leisure.
B
So
why
bother
with
data
protection?
Why
talk
about
data
protection
on
kubernetes
today?
Last
year,
the
cncf
survey
report
shows
that,
like
64.69
percent
of
its
respondents
said
that
they
were
either
already
running
stateful
applications
in
containers
in
kubernetes,
or
they
are
planning
to
shift
and
migrate,
their
stateful
workloads
to
run
on
kubernetes.
B
So
that
again,
that
has
been
the
increasing
trend
for
the
reasons
we
discussed
earlier
and
we
as
a
community
also
noticed
that
in
the
past
couple
years
again,
the
infrastructure
at
the
architectures
and
the
toolings
and
the
support
around
like
cloud
native
infrastructure
and
application
architecture
have
grown
and
changed
quite
a
bit
over
the
past
couple
years,
whereas
things
related
to
data
protection,
the
architecture,
the
toolings,
all
the
support
around
it
has
fallen
behind,
and
we
want
to
change
that.
B
And
finally,
like
one
thing
that
we
really
like
about
kubernetes
is
that
it
and
it
provides
like
a
set
of
apis,
a
set
of
common
apis,
that
different
teams
and
different
verticals
within
an
organization
can
use
to
to
manage
resources
and
enforce
policies,
and
we
feel
like
data
protection.
Solutions
should
be
managed
the
same
way.
B
So,
in
other
words
again,
there
shouldn't
be
like
repositories
of
yammers
for
your
for
our
microservices
and
policies
and
all
those
things,
whereas
the
data
protection
tools
and
scripts
are
in
this
separate,
like
repositories
in
different
format,
you
know
with
who
knows
who
have
credential
access
to
them.
So
we
want
to
be
able
to
find
a
way
to
bring
them
together,
using
a
cohesive
like
cloud
native
tools
and
apis.
B
It
really
boils
down
to
like
the
diff,
the
different
requirements,
the
different
strategies
around
snapshots
and
backups-
that
different
teams
with
different
experience
and
different
scope
may
have
different
requirements
on,
for
example,
if
you
are
someone
who
works
on
the
platform
team
and
you
work
closely
with
the
cloud
infrastructure,
you
might
have
some
apis
and
tools
that
allow
you
to
automate
the
snapshotting
of
the
virtual
disks
that
are
attached
to
your
nodes.
So
keep
in
mind
that
the
snapshot
capture
at
this
level
is
usually
crash
level
consistent
only
so.
B
In
other
words,
it
means
that
data
that
has
been
persisted
to
disk
get
snapshot,
whereas
data
in
memory
or
sometimes
data
in
10
memphis.
They
may
or
may
not
be
included
now
again,
depending
on
your
requirements
like
that,
and
your
use
cases
that
may
or
may
not
be
important
to
you
and
as
we
move
up
the
stack
to
the
data
services
to
the
specific
databases
to
our
micro
services,
we
might
have
like
different
sort
of
scripts
to
freeze
and
unfreeze
some
data
service
layers.
B
We
might
have
scripts
that
utilizes,
you
know
utility
tools
like
mysql,
dum
or
pg
dumb,
and
all
of
this
like
require
like
direct
access
into
your
production
databases
right
like
how
are
they
currently
being
managed?
B
Who
do
you
know
who
have
access
to
what
and
which
versions
which
team
are
using
overall
they're
just
way
too
many
moving
parts,
and
the
analogy
that
we
like
to
use
is
like
imagine
yourself
being
a
barista
is
like
a
with
all
these,
like
lists
of
coffee
types
and
on
on
the
on
the
menus
ish
with
its
own,
like
ingredients
and
recipes
that
you
have
to
memorize
in
order
to
put
together
like
the
the
the
coffee
that
your
cons,
your
customer
and
consumer,
may
ask
of
you
so
just
way
too
many
things
to
remember,
and
just
you
know
way
too
many
ways
that
things
can
go
wrong.
B
So
this
is
where
we
hope,
like
canister,
can
come
in
to
help
you
to
implement
and
streamline
your
entire
data
protection
workflows.
So
I'll.
Let
michael
talk
us
through,
like
the
internals
of
canister.
C: Yeah, cheers, Ivan. I think the story that Ivan tells around data protection, specifically around Kubernetes, isn't a new phenomenon at all. These have been the same requirements around data protection whether we look at other platforms, virtualization or physical systems; no platform is so shiny that it gets rid of the boring talk around backup. We still need to do it, in particular when we think about data services and databases, Postgres, etc.
C
They
still
require
that
that
data
protection
and
it's
how
far
into
those
those
areas
that
I've
been
touched
on
is
really
where,
like
how
how
much
hand
holding
does
it
need
through
the
process
of
protecting
that
data,
is
a
crash
consistent
copy
going
to
be
good
enough
in
the
light
of
some
sort
of
fire
flood
blood
type
disaster
almost
to
get
our
database
back
up
because,
most
of
the
time,
these
databases
are
called
like
they're,
holding
mission,
critical
information
in
our
environment?
C: The application consistency, etc., can all be achieved via scripts, by being able to hone in on a particular data service and script that, to the point that you get a copy of the data, either via a snapshot or via an export, a tar file that gets exported out into object storage. But that just involves you knowing a few more things when it comes to Kubernetes.
C
We
already
know
that
the
ramp
up
from
a
kubernetes
engineer,
communities
administrator,
is
already
big
enough,
like
there's
already
big
topics
around
networking,
around
storage
in
itself
around
all
all
other
areas
of
kubernetes.
So
really
what
canister
does
is
kind
of
hit
the
easy
button
and
takes
away
some
of
that
that
knowledge.
You
don't
really
need
to
know
everything
about
everything
when
it
comes
to
protecting
those
potential
mission,
critical
data
services.
C: Kanister can take copies of that data, export it and move it wherever it needs to go, again abstracting away the tedious detail. So we don't need to worry about maintaining scripts, or specific people's scripts, or all the process around that. This gives us a way of being able to define what we want to do and how we want to do it, and making that happen once Kanister is deployed. We'll go through some of that process.
C
But
it's
it's
implemented,
it's
deployed
within
your
kubernetes
cluster
acts
as
that
kubernetes
controller,
so
it
already
already
integrates
into
that
kubernetes
api.
So
it
allows
us
to
embed
ourselves
into
the
kubernetes
api
and
take
advantage
of
all
the
good
stuff
there,
and
then
this
brings
us
back
the
extensibility
of
canister.
I've
mentioned
some
of
the
like
off
the
shelf
or
the
community
driven
blueprints
that
we
have
the
mysql
the
postgres,
but
really
we
can
they.
C
We
can
create
a
blueprint
for
any
data
service
that
is
available
and
we
do
that
by
these
functions
that
are
built
into
into
the
controller.
So
if
we
go
next
slide
either,
but
if
we
just
take
one
two
or
two
steps
back
and
we
start
thinking
about
okay,
what
does
canister
look
like
in
terms
of?
What
can
we
do?
So?
First
of
all,
we
have
a
blueprint.
I've
mentioned
that
a
few
times
now,
if
we
think
of
a
blueprint,
is
a
set
of
a
set
of
instructions
of
well.
C
This
is
how
I
want
to
perform
actions
on
a
specific
application.
So
again
we'll
go
back
to
like
my
sequel
or
postgres,
and
this
will
say
I
want
you
to
pause
the
the
I
o
to
our
database.
I
want
you
to
leverage
a
pg
dump
and
I
want
you
to
then
export
that
out
into
into
object
storage,
then
what
an
action
set
then
does
is
actually
okay.
How
do
we
make
this
happen?
How
do
we
instruct
the
controller
to
make
that?
Okay?
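The pause/dump/export recipe described here maps onto a Kanister Blueprint. The sketch below is a hypothetical, heavily simplified example (the namespace, image and database name are made up, and the task arguments are illustrative rather than a drop-in blueprint):

```yaml
# Hypothetical, simplified Kanister Blueprint sketch: dump a Postgres
# database so the result can be pushed to object storage. Names and
# the exact task arguments are illustrative only.
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: postgres-blueprint
  namespace: kanister
actions:
  backup:
    phases:
      - name: pgDump
        func: KubeTask            # run a task pod that performs the dump
        args:
          namespace: demo
          image: postgres:14      # illustrative image
          command:
            - bash
            - -c
            - |
              # dump the database; in a real blueprint the output would be
              # streamed to the location configured in the referenced Profile
              pg_dump -U postgres mydb > /tmp/backup.sql
```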
C
I
think
we
just
now
output
as
to
whatever
that
may
be,
so
the
action
set
is
the
let's
go
and
do
it
it's
the
trigger,
and
then
we
think
about
a
profile.
As
in
where
do
we
want
to
store
that
data
that
our
file
or
that
export
of
that
data,
so
three
simple
mechanics
of
what
canister
does
and
how
it?
How
how
we
use
that
now
also
so
an
action
set.
C
Think
of
that,
as
a
backup,
but
also
think
of
that,
as
a
restore,
so
we'd
have
a
restore
action,
set
a
backup
action
set
and
I
think
ivan
will
touch
on
some
of
those
other
areas
as
well
as
we
as
we
as
we
walk
through
the
demonstration
in
a
little
while
if
we
go
next
slide
so
yeah
some
of
those
canister
functions-
and
this
is
just
a
small
snapshot
of
the
amount
of
functions
that
we
have
within
canister
and
in
particular,
that
the
demo
that
ivan's
going
to
show
is
very
much
focused
around
using
the
create
csi
snapshot,
restore
csi
snapshot.
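The "where do we store it" piece is the Profile resource. A minimal sketch, with a hypothetical bucket and Secret name, might look like this:

```yaml
# Hypothetical Kanister Profile: describes where backup artifacts
# should be stored. Bucket, region and Secret names are made up.
apiVersion: cr.kanister.io/v1alpha1
kind: Profile
metadata:
  name: s3-profile
  namespace: kanister
location:
  type: s3Compliant
  bucket: my-backup-bucket        # illustrative bucket name
  region: us-east-1
credential:
  type: keyPair
  keyPair:
    idField: aws_access_key_id
    secretField: aws_secret_access_key
    secret:
      name: s3-creds              # pre-created Secret holding the key pair
      namespace: kanister
```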
C
But
it's
also
worth
noting
as
well
like
we're
seeing
more
and
more
customers.
That
may
be
using
kubernetes,
and
this
goes
back
to
that
stateless
argument
that
ivan
spoke
about,
but
where
they're,
using
or
where
you're
using
a
path-based
services,
such
as
amazon
rds,
where
you're
going
to
store
your
mission,
critical
databases.
So
canister
has
the
ability
to
go
outside
of
the
cluster
and
be
able
to
also
capture
that
data
as
part
of
that
process.
Now
I
don't
believe
we're
going
to
show
that
in
the
demo.
C
But
I
know
that
vivek
and
other
other
maintainers
of
the
project
have
done
various
other
cncf
webinars
that
specifically
go
into
that
rds
piece
as
well.
So
these
are
just
some
of
the
the
arguments
that
we
have
within
each
of
those
cancer
functions.
They
enable
us
to
perform
specifics
when
it
comes
to
things
like
csi
or
rds
that
you
see
on
the
on
the
screen.
So
there's
a
lot
more
cancer
functions.
C
You
can
see
that
in
there
canister
that
I
o
and
basically
these
are
all
go
everything's
written
and
go
from
a
canister
perspective.
C
So
if
we
go
next
slide
ivan,
so
if
we
think
about
the
architecture,
so
canister
is
deployed
by
a
helm
into
our
kubernetes
cluster,
and
then
we
have
this
long
list
of
blueprints
that
we
can
choose.
So
if
you
know
what
what
application
or
what
data
service
you're
using,
you
can
see
here
that
we
have
cassandra
elasticsearch,
probably
one
I've
done
a
session
as
well
from
a
cncf
point
of
view
on
elasticsearch
being
that
that
forgotten,
stateful
workload
that
might
live
within
your
kubernetes
cluster
capturing.
C: Okay, so we've got two components here: we've got our controller, that's deployed into our Kubernetes cluster, and then we have a list of blueprints that are purely focused on our application and the data services within that application.
C
So
if
we
go
to
the
next
slide,
please
ivan
and
then
to
trigger
that.
We
have
then
a
an
action
set.
Now
I
mentioned
about
an
action
set
being
a
backup
or
restore,
but
really
that
could
be
anything
to
verify
or
validate
at
anything
that
we've
done
throughout
the
blueprint
or
to
trigger
that
blueprint.
C
The
set
of
instructions
that
we've
defined
in
the
blueprint.
So
if
we
go
next
slide
again,
so
basically
the
controller
is
sat
there
and
it's
watching
and
waiting
for
an
action
set
to
be
implemented
or
pushed
into
the
the
kubernetes
environment.
And
then
it
says,
okay
found
that
we
want
to.
We
want
to
use
this
blueprint
for
that
specific
data
set.
C
Then
we
trigger
that
canister
function,
which
is
an
exact
function
to
the
database
or
the
data
service
in
general.
To
say
this
is
what
I
want
you
to
do,
and
this
is
how
I
want
you
to
play
that
I
want
you
to
play
through
these
steps
and
I
think
the
next
slide
or
a
few
slides
is
an
example
of
what
that
looks
like
and
then
export
those
artifacts
out,
so
whether
that's
eg
dump
into
a
tar
file
or
my
sequel
dump
into
a
tar
file,
etc.
C
We
can
export
that
out
into
an
object,
storage,
location,
for
example,
next
slide
so
and
then
we
update
what
that
looks
like
from
an
action
set
point
of
view,
which
gives
us
the
ability
to
to
see
the
success
rate
of
that
of
that
action
set
that
we
triggered
and
next
slide
so
and
if
we
think
about
what
a
restore
looks
like
in
terms
of
that,
but
just
before
we
go
to
the
the
demo
which
ivan
will
actually
show.
This
is
obviously
from
a
restore
action
set.
It's
still.
C
C: So with that, I'll hand it back over to Ivan to get into the demo.
B
Thanks,
michael
so
yeah
earlier,
like
I
talked
about,
there
will
be
two
parts
to
this
demo
and
we
will
be
doing
the
demo
on
an
ek,
awsome,
eks
cluster
and
on
the
cluster.
We
have
a
postgres
database
installed,
which
has
like
a
pvc
and
pv
attached
to
it
and
it's
backed
by
an
actual
ebs
volume.
B
So
during
the
first
part
of
the
demo
I'll
show
you
how
you
can
use
canister
to
interact
with
the
csi
endpoint
on
kubernetes
and
manage
that
the
volume
snapshot
and
volume
snapshot
content
resources
from
there
and
to
actually
initiate
the
creation
of
an
ebs,
some
snapshots
in
the
awsm
space.
B: So let's take a look at what the CSI snapshot blueprint looks like. As Michael mentioned earlier, Blueprint is a custom resource definition that comes with Kanister. Within a blueprint we have a collection of actions; line two here shows a createSnapshot action. If I go further down to line 47, there is a describeSnapshot action, and towards the lower half of the screen you see a restoreSnapshot action. So all in all, this one blueprint tells Kanister: this is how you create, restore and list all the EBS snapshots via the EBS CSI driver. Within each action we have some phases. Let me scroll down a little bit. A phase is a step, an atomic step that Kanister will execute, and a step is backed by a Kanister function.
B
So
the
first
phase
here
basically
talks
about,
like
I'm
putting
my
postgres
database
into
read-only
mode
and
it's
backed
by
a
canister
function
called
cube.
B
Exec
underneath,
like
this
canister
function,
is
a
bunch
of
go
code
that
uses
the
same
packages
that
cube
cutter,
exact
use
to
stream
like
remote
command
execution
and
to
stream
like
output
back
from
the
pods,
and
the
second
phase
here
is
a
lot
shorter
in
terms
of
his
yammo
and
it
basically
calls
into
a
canister
function,
called
create
csi
snapshot
again
underneath
it,
as
you
can
imagine,
it's
a
bunch
of
goku
that
uses
client
go
to
interact
with
the
csi
volume
snapshot
and
volume
snapshot
content
apis.
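The two phases described here might look roughly like the following inside the blueprint. This is a hypothetical, trimmed sketch: the pod/container names and the read-only command are illustrative, and the function arguments are simplified:

```yaml
# Hypothetical sketch of a createSnapshot action with two phases:
# KubeExec to quiesce Postgres, then CreateCSISnapshot to snapshot
# the PVC. All names and arguments are illustrative.
actions:
  createSnapshot:
    phases:
      - name: quiescePostgres
        func: KubeExec            # runs a command inside an existing pod
        args:
          namespace: demo
          pod: postgres-0
          container: postgres
          command:
            - bash
            - -c
            - psql -U postgres -c "ALTER SYSTEM SET default_transaction_read_only = on; SELECT pg_reload_conf();"
      - name: snapshotVolume
        func: CreateCSISnapshot   # drives the VolumeSnapshot/VolumeSnapshotContent APIs
        args:
          pvc: postgres-data
          namespace: demo
          snapshotClass: csi-ebs-snapclass
```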
B
So
let's
take
a
look
at
the
action
set
that
we
are
about,
so
we
look
at
the
blueprint.
B
The
next
thing
we
want
to
do
is
actually
use
an
action
set
to
trigger
the
create
snapshot,
action
and
we're
going
to
use
an
action
set
called
snapshot,
create
yemo
is
pretty
relatively
short.
It's
pretty
simple,
basically
tell
canister
to
go
and
find
this
blueprint
called
csi
snapshots
and
within
the
blueprint
there
will
be
a
create
snapshot.
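That trigger might look something like the sketch below; the blueprint, StatefulSet and namespace names are hypothetical:

```yaml
# Hypothetical snapshot-create ActionSet: tells the Kanister controller
# to run the createSnapshot action from the csi-snapshots blueprint
# against the Postgres StatefulSet.
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  name: snapshot-create
  namespace: kanister
spec:
  actions:
    - name: createSnapshot
      blueprint: csi-snapshots
      object:
        kind: StatefulSet
        name: postgres
        namespace: demo
```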
B
And
then
now
we
can
quickly
examine
the
the
status
up
resource
of
the
action
set
that
we
just
created.
B
If
I
scroll
down
to
the
lower
half
of
the
screen,
you
will
see
the
status
sub-resource
of
the
action
set
is
being
populated
with
real-time
state,
face
state
information
by
chemistry.
You
can
see
like
within
the
two
phases.
Again
we
was,
we
successfully
put
the
postgres
database
into
read-only
modes,
and
then
now
we
are,
you
know
talking
to
the
csi
apis
who
say:
hey,
go
and
create
a
volume
snapshot
and
the
volume
snatch
our
content
resources.
B
You
can
see
that,
like
the
volume
snapshot,
resources
was
created
just
less
than
a
minute
ago
and
under
the
ready
to
use
column,
kubernetes
is
telling
us
that
hey
your
snapshot
is
ready.
Your
eds
snapshot
is
ready.
Now,
kubernetes
thinks
that
you
know
the
the
snapshot
is
ready,
but
is
it
really?
I
think
the
best
way
for
us
to
verify
is
to
actually
talk
to
the
aws
api
and
confirm
that
the
snapshot
was
indeed
created.
B
Now,
I'm
going
to
run
the
get
blueprint
command
again
and
then,
if
we
go
back
to
line
47
oops,
you
will
see
that,
like
under
the
describe
snapshots
actions,
we
essentially
uses
the
aws
cli
to
talk
to
call
to
talk
to
the
ec2
ebs
snapshot,
api
and
say:
hey
go
find
like
the
snapshots
that
we
just
created
for
this
particular
ebs
volume
that
I
know
is
attached
to
my
postgres
database
now
for
demonstration
purposes
like
I
just
you
passing
like
an
audio
some
batch
scripts
here.
B
Just
for
visibility,
you
know
in
a
real
serious
environment.
You
probably
would
you
know
just
curate
and
add
your
own,
like
container
image
there
to
do
all
these
things.
The
important
thing
is
also
to
show
that
we
can
pass
in
like
secret
references
to
secret
resources
from
that
already
exists
on
our
cluster,
so
cool.
B
Now
I
want
to
execute
that
describe
snapshot
action
so
to
do
that.
The
first
thing
I
would
need
to
do
is
get
hold
of
the
ebs
second
volume
id.
B
So
what
this
command
did
was
it
looked
into
the
pv
that
is
attached
to
my
postgres
database
and
get
the
actual
volume
id
so
that
we
can
pass
it
into
our
describe
snapshot
action
set?
Let
me
just
copy
and
paste
this
over
here,
so
essentially
what
this
did
was
we
passed
into.
We
piped
into
cubecardo
create
an
action
set
that
is
very
similar
to
the
first
action
set.
B
We
use
to
create
snapshot,
we
say:
go
to
blueprint,
csi
snapshots
and
run
this
describe
snapshots
function
and
here's
the
volume
id
that
I
wanted
you
to
use.
Now.
If
we
take
a
look
at
the
status
sub
resource
of
the
action
set
that
we
just
created,
we
will
see
that
we
actually
got
response
back
from
the
aws
api.
It
just
said
yep.
B
I
recognize
this
ebs
volume
and,
yes,
you
had
a
snapshot
that
you
just
created
not
too
long
ago,
so
there
kubernetes
said
our
snapshot
is
ready
and
aws
api
confirmed
that
you
know
hey.
This
is
it's
natural,
it's
also
already.
So
this
goes
back
to
what
michael
talked
about
earlier.
Like
you
know,
with
basically
with
canister
blueprint,
you
can
use
it
to
do
many,
many
more
things
in
addition
to
backup
and
restore
now.
B
The
last
thing
that
we
need
to
do
is
really
just
to
restore
the
ebs
snapshot
that
we
just
created
to
a
new
instance
of
postgres
database,
and
to
do
that,
I
need
to
get
hold
of
the
actual,
create.
B
Action
set
id
now,
instead
of
pasting
copy
and
pasting
like
a
bunch
of
yammo
that,
like
I
just
did,
I
used
this
tool
called,
can
ctl
to
create
the
restore
snapshot
action
set.
The
interesting
thing
here
is
like
I'm
able
to
tell
can
ctl
that
says:
hey
use
the
previously
deploy
action
set
as
the
parent
or
as
the
base
of
this
new
action
set.
So
I
pass
in
the
dry
run
option
just
so
that
we
can
get
a
sense
of
what
the
yaml
looks
like
now.
B
If
we
look
here
the
again
very
familiar
yamo
properties,
blueprint
cs
eye
snapshot
run
this
restore
snapshot,
action
pass
in
a
bunch
of
input
parameters
and,
in
addition
to
that,
use
this
some
artifacts.
So
the
concept
of
artifacts
is
like.
So
when
the
create
action
set
snapshot
finished
earlier,
it
has
like
a
bunch
of
return,
values
and
outputs
that
got
stored
in
this
status.
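A restore ActionSet generated this way might look roughly like the sketch below; the artifact keys and the snapshot name are hypothetical placeholders for the values kanctl copies over from the parent ActionSet:

```yaml
# Hypothetical restoreSnapshot ActionSet. The artifacts block carries
# outputs from the earlier createSnapshot ActionSet (names are made up).
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  name: restore-snapshot
  namespace: kanister
spec:
  actions:
    - name: restoreSnapshot
      blueprint: csi-snapshots
      object:
        kind: StatefulSet
        name: postgres
        namespace: demo
      artifacts:
        snapshot:
          keyValue:
            name: postgres-data-snapshot   # VolumeSnapshot created earlier
            namespace: demo
```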
B
Some
sub
resource
so
can
ctl
was
able
to
go
into
that
status
of
resource
and
re
like
rewri,
and
this
out
return,
values
and
inject
it
into
my
restore
action,
set
and
use
that
as
inputs,
so
that
you
know
I
don't
have
to
like
figure
out
okay,
which
what
were
the
return
values
that
I
got
back
from
the
csi
endpoint.
B
So
I'm
gonna
go
ahead
and
pipe
this
into
keep
cuddle
and
create
this
okay.
And
let's
take
a
look
at
the
restore
snapshot,
resource.
B: That's the one that we just created; you can see it's about 30 seconds old. The first PVC is attached to our original Postgres database.
B
This
is
backed
by
the
pv
and
the
ebs
with
the
original
data
that
we
snapshot
now.
The
restore
pvc
is
something
that
canister
just
created
via
the
csi
api,
based
on
the
ebs
snapshot
that
we
just
created
a
couple
of
minutes
ago.
Now
this
new
pvc
is
going
to
stay
in
the
pending
status
until
like
a
new
postgres.
B
Part,
or
instance,
is
deployed
and
that's
exactly
what
we're
going
to
do.
Next,
we're
we're
gonna
use
helm
to
install
like
a
separate
instance
of
postgres
database
into
the
same
namespace.
The
only
thing
to
to
pay
attention
to
here
is
that
yeah
we're
telling
that
there's
the
scheduler
or
the
stateful
scheduler
to
hey.
We
use
the
existing
pvc
that
we
just
restored,
don't
create
a
new
one
based
on
your
volume,
template
spec,
so
hit
enter.
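The "use the existing PVC" override might be expressed through chart values like the hypothetical snippet below; the key layout follows the common Bitnami Postgres chart convention and is an assumption, as is the claim name:

```yaml
# Hypothetical Helm values override: point the new Postgres release at
# the PVC restored from the snapshot instead of provisioning a new one.
primary:
  persistence:
    existingClaim: postgres-data-restored   # PVC created by Kanister's restore
```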
B
To
see
like
the
data
that
we
just
restored.
B: So here we exec into the restored pod, and this is the data that we managed to restore from our EBS snapshot. And you probably already noticed: all of this happened without our user needing to know anything about the CSI endpoint, or how to work with client-go and things like that. Just YAML manifests, and Kanister does all the low-level heavy lifting for you.
B
The
second
part
of
this
demo
is
pretty
straightforward.
All
we're
going
to
do
is
repeat
the
same
data
operation,
backup
and
then
I
and
then
I
can
let
argo
workflows
like
scale
it
out
to
run
in
parallel,
so
that
we
can
snapshot
multiple
instances
of
postgres
at
the
same
time.
B: So this is an Argo Workflow YAML. The trick is that I'm able to pass a list of StatefulSet workloads into the workflow and say: go to these three namespaces, find all these StatefulSets, and do the operation that I'm going to describe inside the template. If you scroll down into the workflow template, down to the execution step, it's really just using kanctl to say: now go ahead and create an ActionSet for each of these different Postgres databases. And then I can use the Argo CLI to submit the workflow YAML to Argo and let it do its thing.
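Fanning the backup out over several databases might look like the hypothetical Workflow sketch below; the namespaces, image and the exact kanctl invocation are illustrative assumptions:

```yaml
# Hypothetical Argo Workflow: loop over a list of namespaces and create
# a Kanister ActionSet for each Postgres instance, in parallel.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: kanister-backup-
spec:
  entrypoint: backup-all
  templates:
    - name: backup-all
      steps:
        - - name: backup                 # one parallel step per list item
            template: create-actionset
            arguments:
              parameters:
                - name: namespace
                  value: "{{item}}"
            withItems:                   # hypothetical namespaces to protect
              - demo-1
              - demo-2
              - demo-3
    - name: create-actionset
      inputs:
        parameters:
          - name: namespace
      container:
        image: my-registry/kanctl:latest   # illustrative image containing kanctl
        command: [sh, -c]
        args:
          - kanctl create actionset --action createSnapshot
            --blueprint csi-snapshots
            --statefulset "{{inputs.parameters.namespace}}/postgres"
            --namespace kanister
```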
C
Something
happens
there,
so
obviously
that's
massively
important,
because
not
all
of
our
applications
are
a
single
front,
end
container
with
a
back
end
database
or
data
service.
Nine
times
out
of
ten.
Your
application,
especially
in
the
microservices,
is
built
up
of
many
different
data
services,
so
being
of
them
all
in
one
succinct
kind
of
like
workflow.
C
That's
what
the
benefit
here
of
incorporating
something
like
argo
workflow
into
is
is
allowing
us
to
have
group
consistency
across
multiple
data
services,
or
at
least
talking
to
multiple
data
services
at
once,.
B
Yeah,
absolutely
and
just
spot
on,
michael,
like
you
know,
like
just
like
you
you're,
using
like
argo
to
to
help
us
to
like,
because
again,
the
main
thing
here
is
with
canister.
It
is
a
data
protection
workflows.
It
comes
with
all
these
data
protection
functions.
B
Now
when
it
comes
to
more
sophisticated,
like
workflow
concepts
like
running
in
parallel,
like
scheduling,
retries
error
handling,
we
can
try
to
build
all
of
this
into
canister,
or
we
can
just
utilize
like
a
really
cool
project
like
argo
workflows,
which
is
a
more
generic
like
workflow
engine,
you
know,
and
and
use
them
together
to
to
provide
that
really
cohesive.
B
Like
integration,
good
point
so
yeah,
you
know
if
we
were
to
just
take
a
look
at
the
volume
of
csi
resources,
you
could
see
that,
like
they
have
all
the
snapshots
have
all
been
created
by
other
workflows,
running
in
parallel
across
the
different
name
spaces.
B
So
again,
all
of
these
without
us
having
to
manage
like
different
batch
groups
or
make
files
passing
in
different
parameters
or
even
giving
different
teams
different
users
different
credentials,
access,
so
yeah.
I
think
that's
pretty
much
the
demos
before
summing
up
like
michael,
is
there
anything
you
want
to
add.
C
No,
I
think
we
just
covered
a
hell
of
a
lot
right.
We
just
went
into
a
101
of
what
canister
is
and
what
it
does
and
why.
Hopefully
it
simplifies
that
that
application
focus
around
data
protection
and
then
we
obviously
showed
you
how
that
looks
and
how
it
works
and
then
also
incorporated
that
or
integrated
that
into
another
open
source
project
such
as
argo
workflow.
C
That
allows
us
to
not
only
orchestrate
some
of
that
some
of
that
data
protection,
but
also
allow
you
to
hit
multiple
applications
or
data
services
at
one
at
the
same
time,
whilst
also
being
able
to
schedule
that.
C
So
I
think
that
a
summary
that
I'll
give-
but
I
know
this
is
this-
is
pretty
important
to
us
and
the
community
as
well,
because
this
is
a
growing
community
for
us
and
it's
really
the
community
that
enables
us
to
advance
what
we're
doing
from
a
from
a
canister
point
of
view,
yeah
with
what
we're
trying
to
do.
But
I'll.
Let
you
you
you
do
the
the
talk
track
here.
B
Yeah
so
yeah,
you
know
if
you
are
currently
planning
and
designing,
like
a
data
protection
workflows
to
protect
your
data,
consider
checking
out
canister.
The
source
code
is
on
github
and
we
also
have
a
slack
channel.
B: Come in and share with us your use cases. We want to hear about what your vision of a data protection workflow engine looks like, build something that actually works for you, and help you out. And, of course, we welcome contributions, for sure. And that's it; I think we have time for questions.
C
Anyone
has
any
questions,
please,
please
drop
them
in
the
chat,
we'll
we'll
be
happy
to
answer
them.
B: Yep, and you can also find me and Michael on Twitter; feel free to DM us if you have questions about data protection.
A: Well, if there are no questions: thank you so much, Ivan and Michael, and thank you, everyone, for attending. Remember, this will be on demand shortly, and you can find it either on our YouTube playlist, on the website, or via this link.