From YouTube: Data Protection WG bi-weekly meeting for 20200325
A: It has been a couple of weeks. Karissa has something to discuss, so she will be the first one to present: about ten minutes to go through some ideas around having an interface to allow backup upgrades through the Kubernetes API. Then Tom from Kasten, whom we have invited to give us a talk about how Kasten and its building blocks can be utilized within this data protection group, or maybe we can work together on some of those concepts. And the third thing: we had a couple of discussions with the folks from Velero around how they are utilizing the CSI snapshot to conduct backups for the cluster. Apparently they are experiencing some inconvenient behaviors, so we will also go through those and collect some feedback from our session. If we have more time, we can discuss bugs and issues. All right, let's get started. Karissa, it's all yours. Thanks for sharing, yeah.
C: So the question I was hoping to start with: we are also considering operations on backups, such as upgrades through Kubernetes versions, so it might be too early, or it might be out of context for this group. I'm not completely clear, but I figured I would throw the question out there. So what is on my mind is how backups are going to be represented through the interfaces, and the operations on the backup itself, for instance upgrading a backup so it stays compatible as Kubernetes moves up in versions.
D: Having a separate, you know, "hey, let me go through and figure out all the resources you had in the backup and upgrade them to the latest APIs and things" strikes me as a pretty challenging task. Volume data alone is a little bit different, because presumably that could be done if you're willing to do it. But I don't know; it begs the question, which is, I assume, where you're going with this: that backups then implicitly, given the pace of Kubernetes, have a kind of shelf life.
C: Exactly. Yeah, so I would have a question about whether volumes will be included in the definition, but the resources definitely would be part of it, at least in what I'm thinking. So, for example, if I want to restore a backup into a new cluster, but that cluster has another version of Kubernetes, then I, as a user, would be grateful if I could just not have to upgrade my source cluster to the newest version, run another backup, and then do the restore.
E: I can give an example where this is kind of the thing. If I remember right, with Postgres, if you wanted to upgrade Postgres versions, at one point what you actually did was a pg_dump, which wrote everything out, basically into a text file of SQL commands; then you'd create a new database, which would have the new internal format, and then you'd play the pg_dump back into it. So that's one way.
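The pg_dump workflow described here can be sketched like this (illustrative only; the hostnames and the database name `mydb` are placeholders, and both Postgres servers must be reachable):

```shell
# Logical backup: dump the old database as plain SQL statements.
pg_dump -h old-db-host -U postgres mydb > mydb.sql

# Create an empty database on the new Postgres version; it uses the
# new internal storage format automatically.
createdb -h new-db-host -U postgres mydb

# Replay the dump; the SQL is re-interpreted by the new server.
psql -h new-db-host -U postgres mydb -f mydb.sql
```

The key property is that the dump is at the logical level (SQL), so it is independent of the on-disk format that changes between versions.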
D: If you assume that a cluster "backup" is effectively serialized YAML for the resources, then playing those back into the cluster should get the advantage of all of the upgrades that would happen normally, right, with the conversion webhooks. So I'm kind of wondering; and then there will be a set of those for which there needs to be some kind of manual activity, because somebody didn't set up the auto-mutation stuff, and I don't know how we set policy around that as a general rule.
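The serialized-YAML replay described above corresponds roughly to a kubectl round trip like the following (a sketch; the resource kinds, namespace, and context name are placeholders). On apply, the target API server converts each object to the versions it serves, which is where the "free" upgrades come from:

```shell
# "Backup": serialize a namespace's resources to YAML.
kubectl get deployments,services,configmaps -n my-app -o yaml > my-app.yaml

# "Restore": replay into another (possibly newer) cluster; the API
# server performs the usual version conversion and defaulting.
kubectl --context new-cluster apply -f my-app.yaml
```

This only works as long as the target cluster still serves an API version compatible with what was serialized, which is exactly the shelf-life concern raised here.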
A: Somehow, in some sense, Kubernetes itself already has such a capability in most cases: when you try to restore a lower-version backup into a higher version of a Kubernetes cluster, there might be cases where certain CRDs are not backward compatible. But I'm just not clear how this working group can collect all this information and do automatic version transformation. Let's talk about that.
E: It may not be a matter of us defining that, but first we should at least look at, you know: is this something that is a reasonable use of backup and restore? And then, if so, what constraints can we have on this? Where could we go with this? Because we can certainly keep things like, you know (Andrew was talking about volume data): well, right now a restic backup would let you, for example, dump all your data out.
A: I feel volume data is slightly different from configuration data in this sense, because I'm not quite sure what the upgrading concept is for volume data, right? There is a format transformation, as you mentioned, but what is the upgrading concept over here? It is normally connected with the application, not connected to the backup itself, yeah.
A: It's a good topic to discuss, but I just don't think this ten minutes is going to do it. Do we need an agenda item for that? Karissa, do you want to take an action on this item, and maybe later on, in future meetings, we discuss this thing again? I hope you can come up with some preliminary thoughts around what potential solutions we can provide, or we can as a group.
D: I think the general question was a reaction to backups having a sense of a limited shelf life, because if your cluster evolves after you've taken a backup, and in particular if that evolution includes Kubernetes version upgrades, then what, if anything, do we need to do so that a backup can be replayed into a cluster with, say, a higher Kubernetes version? And I think some of the summary of that is, well:
D: Some of that is just the normal backward-compatibility guarantees that we have with Kubernetes, but of course those don't apply if you had alpha APIs you were using in the original cluster. And even with backward-compatibility guarantees, you have to think that, over a long enough time horizon, you know, we will obsolete old interfaces. So there is always going to be some kind of shelf life. So I think the open question is: what, if anything, do we need to say about this? And that's what I think Sean asked, right?
G: Okay, so, like, for our 1.3 release we had some issues with CRDs going GA, with structural-schema translations. So even though some of the compatibility guarantees weren't necessarily concrete there, we are doing some translations for users, and in other cases we're just saying: okay, we're going to back up the v1beta1 version and restore the v1beta1 version.
G: I'm not remembering the details there now. Also, we have someone exploring doing mappings; he's got a table of various versions of the built-in API groups on different versions of Kubernetes, where they overlap, and things like that. And we're exploring changing Velero so that it backs up all versions of a given API group instead of just the server-preferred one, so that we can possibly restore them.
J: I noticed that the preferred version is usually the highest, and so I think that comment does stand: if you're only backing up server-preferred, you have the latest version possible in that cluster. But then, if you want to restore it to an older cluster, you know, you want the older versions as well. Yes.
A: I think the challenge is that, as Kubernetes grows in popularity and has many, many more API groups, it's kind of difficult to make those assumptions. I'll give the example of the autoscaling group, where v1 is the preferred version, but a lot of people are using v2beta2. So, basically, we are breaking this problem into two steps. The first is backing up all versions, and then the second step will be coming up with the logic for how we're going to make this system restore the groups based on versions. Does it make sense?
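The autoscaling example can be seen directly with kubectl, which accepts fully qualified resource.version.group names, so a backup tool can request the same object at more than one API version (illustrative commands; the HPA name and namespace are placeholders):

```shell
# List every group/version the cluster serves; both autoscaling/v1
# and autoscaling/v2beta2 may appear.
kubectl api-versions

# Fetch the same HorizontalPodAutoscaler at two different versions;
# the API server converts between versions on the fly.
kubectl get horizontalpodautoscalers.v1.autoscaling my-hpa -n my-app -o yaml
kubectl get horizontalpodautoscalers.v2beta2.autoscaling my-hpa -n my-app -o yaml
```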
A: If I break out the two major categories of API groups: one is native to Kubernetes, part of the API server (today it's around 20 groups, depending on which version you are on), and then the other category would be the API groups that, you know, other products outside of native Kubernetes create. I think that's going to be a great conversation to have with the community, and to give some options for how people can decide to restore those groups.
A: I definitely agree. I think what you were describing is pretty interesting, and people very experienced in this space can definitely benefit this group. Do you guys want to take an action item on this and maybe present back to the group later on: how you solve this problem, and what kinds of challenges you're facing over there?
J: So we've been working on a project called Kanister for the past almost three years at this point (around two and a half), and, you know, we have kind of a deep background in data protection, and so I want to bring some of the learnings from Kanister to this group and see if there's a path forward with parts of Kanister being used in this group as well. So first, what I'll do is start with the learnings that we have with Kanister, and then we'll go from there.
J: So what we've noticed about data protection workflows is that they often require application-specific playbooks, and I'll give some examples of that. Dave actually mentioned one this morning with Postgres, which is a great example.
J: I think data protection is not just limited to backing up and restoring data; it is also part of other, larger workflows, like doing schema changes and upgrades. An example of what you need to do there would be taking a logical backup; a lot of different databases have tools that help you do that, a lot of dump-type tools that extract data at the logical level. Other workflows include using, for example, cloud-provider API calls; an example here would be RDS, where I can take snapshots of an RDS instance.
J: I may also want to use something like an operator and create CRs, custom resources, as my API call. There may also be small components within data protection workflows, things like doing application-specific quiesce and unquiesce. I know that we've talked here about doing more generic ones for volumes, like doing fsfreeze/unfreeze, but in many cases it makes more sense to use application-specific quiesce, because you may need fewer permissions, right? It can be part of your app owner's workflow rather than a system admin's workflow.
J: An example there would be FLUSH TABLES WITH READ LOCK for MySQL, which would give you a consistent view of your MySQL instance's data. There are also more advanced scenarios, some of which we've also discussed here; I know Andrew talked about taking backups from secondaries, which is a great use case. You know, a lot of people, if you're taking a backup of your cluster, don't want to put too much load on your primary instances.
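A minimal sketch of the MySQL quiesce just mentioned (the pod name and credentials are placeholders; note that FLUSH TABLES WITH READ LOCK only holds while the client session that issued it stays open, hence the SLEEP to keep the session alive across the snapshot):

```shell
# Quiesce: flush tables and block writes, keeping the session (and
# therefore the global read lock) open for 30 seconds.
kubectl exec -n my-app mysql-0 -- \
  mysql -uroot -p"$MYSQL_ROOT_PASSWORD" \
  -e 'FLUSH TABLES WITH READ LOCK; SELECT SLEEP(30);' &

# ...trigger the volume snapshot here, while the lock is held...

# Unquiesce: the lock is released when the session exits.
wait
```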
J: The other learning is that, as we've just mentioned, data protection workflows can be quite complex. One thing that adds to this complexity is the many different domains you have to deal with, and the different types of personas in those domains. So you often will have Kubernetes cluster admins, who are different from your application developers, who may be different from your service experts (in some cases DBAs), and I think each of these
J: domains often will play a part in data protection, and so you may not have all the expertise in one small group to execute these, so it requires some coordination. In addition, there are many infrastructure moving parts as well: you can push a backup to many different places, you know, object stores and a lot of vendor-specific targets. And, as mentioned earlier, there are many different types of backups that use different parts of your infrastructure.
J: We've also noticed that there are a lot of data protection primitives that are common across different workflows, things like quiesces and exec'ing into a container. I know that, Jing, your work on, for example, execution hooks is a good example here of something that's very useful. We've also seen a lot of pushing to object storage in different ways, either directly or through, for example, Velero's restic.
J: Obviously, taking volume snapshots is very useful (a lot of great work with the snapshot controller, for example), but also using in-tree provider snapshotting; we have a lot of customers who still use those. And, of course, controlling the lifecycle of your application through kubectl-type commands, like scale up and scale down. Next slide, please. And so some of the motivation we have; actually, Jing, I think you're on an older version of these slides.
J: And so, based on kind of all of these things we noticed in data protection workflows, we set some goals for building a project that would help us fulfill these requirements. One thing we wanted to do was codify all these workflows in a common place, both pulling from the community as well as, within a company, having a set of resources that people use.
J: "Application" is actually a pretty overloaded term; there are a lot of existing definitions. But from the Kanister perspective, all these different definitions map to a group of resources, and so, if you look at, like, the Application CRD or Helm charts, really what they are is a collection of different resources. From the Kanister perspective, we want to support all these different types of groupings rather than support one of these broader definitions; right, we don't want to have a strong opinion on which...
J: Did I get cut off again? Oh my gosh; I think this disconnect is from my phone, so maybe I'll just turn it off. Okay, were you able to hear; did I get through the application slide here? Yes.
D: I have a quick question. When you say you support existing application constructs (and I know it's a quick deep dive), do you do it by listing resources in a manifest? Do you do it by label selector? How do you do that grouping? Obviously you say that you allow multiple different ones, but there still has to be...
J: So, from the Kanister perspective, we actually only operate on individual resources, and essentially you pass in these resources; you know, you execute actions on these resources through Kanister. It's up to what we call an application intersection controller to define what that grouping maps to. So, from the Kanister open-source perspective, really the subject of any action is a single resource, and from that resource you can...
J: But in terms of, let's say, if I have a namespace which has many, many resources, Kanister will operate on those one at a time. And this kind of maps to something (I actually have some slides on this later), but this is something that Dave Smith has talked about in the past, which is more of a bottoms-up approach rather than a top-down approach: it lets each individual object kind of define how it should be backed up, and maybe those are higher-level objects, like, for example...
A: I guess most of my question over here is around, when you say individual resources: say there's a StatefulSet, right? It contains not only itself but also all the PVCs that would be created from the PVC template. The question is: how do you figure out those?
J: Okay, great, yeah; this is, I think, a good question, and something that we can keep talking about. But for us, you know, I think there are things that we think belong in, essentially, a data protection controller, and there are things that belong inside Kanister. So defining the application, I think, we pushed up to the data protection controller, and the other thing we pushed there also was handling the actual contents of resources.
J: So the configuration path, we think, belongs inside the data protection controller, and a lot of this is because, you know, I think there are already implementations for this, but also there are a lot of different ways you could deploy this configuration, through Helm charts or through GitOps, for example. You know, most companies already have a mechanism for managing the API objects themselves.
D: Sorry, I'm getting a little bit lost in the abstraction, so I'm going to ask a couple of questions, if you don't mind. One of the things that presumably you have to do is hook definitions, and the thing about hook definitions is there might be multiple containers in multiple pods, and you might require fairly complicated orchestration. And so, to even do that,
D: you kind of have to know what all of the pods are that are part of this general space, and you have to have logic to be able to identify pods and containers; so a container name, a pod selector, or something like that. Yet if you're not dealing with the configuration, I don't know how you do that here, so I'm missing something.
J: When I say dealing with the configuration, I really mean avoiding backing up the configuration and restoring it, right? I think, for example, Velero has an approach, which is similar to Kasten's approach, which will use essentially discovery to go figure out what the objects are, back up all the specs (including, it looks like, multiple versions of those specs, those configs, right now) and restore them. So we actually push that off; we don't think that belongs in Kanister, I guess.
D: The question I have is: how do you ensure that the domain, the boundary, of the application is agreed upon both by Kanister and the DP controller? Like, how do you reason about "here's an application, and this is how I back it up" if you don't have a consistent view of what the scope of that is? That's my concern.
J: The typical pattern we see for applications is that you'll have a deployment, but also, say, a StatefulSet that maps to your database, and so you can get a consistent view of your database; or you, for example, have a single workload that maps to the group of volumes that you have to make consistent, and so you can work on individual workloads themselves. If you need to coordinate multiple things, that is possible as well, but that's not really the common case we see. Are you thinking of an example?
D: Well, you gave an example of a namespace being a possible definition of an application, or you could imagine a multi-moving-part app, like, say, a WordPress install that's got a web server bit and a database bit, and so I have coordination. So I kind of really need to know: oh, I'm doing this one and then I'm doing this one, and these individually need to be consistent, but don't increase the domain of consistency across both instances of this, because, well...
J: It's a good point, yeah, and I will say WordPress, or, you know, the picture-gallery app from the community, is a good example, because they are pretty typical web apps, right? They have components that are kind of common across different systems. But I think it's good; maybe let me run through the next few slides, and we can follow up after, because I think it's a good question and something that we really think about a lot.
J: So, really, from these kinds of requirements we built Kanister. One view of Kanister is: one thing Kanister can do is full backups and restores, or just backup and restore hooks, and this all goes through custom resources. So there's a standard way to create an action; we call these ActionSets, again a custom resource. And there's a custom resource called a Blueprint, which defines the workflows.
J: What we do inside the Blueprints is we actually use Go templating, so that, given an object, you can have different instances of an object be the subject for a Blueprint, in a way that lets you write a Blueprint once and then handle any instance of that type. So, for example, if I have a standard definition in my company of what MongoDB looks like, for example a Helm chart that people deploy,
J: I can create a Blueprint for that Helm chart, and then for any instance of that Helm chart, any release of that Helm chart, I can use the same Blueprint, because I have access to templating. And what we've also done is added a bunch of helpers, what we call Kanister functions, that implement a bunch of the common primitives you need for data protection workflows.
J: So I'll go through a Blueprint here. You know, this is a simpler app, because it's just a database. What we usually see, though, is that if you can get a consistent snapshot of your database, then you actually can achieve a consistent backup of the application. I don't have an example of a more complex app that, for example, maybe needs some file state as well as state in a database, but typically cloud-native applications will handle that in an okay way. But I'll run through this
J: example pretty quickly, because I know we're pressed for time here. So a Blueprint is a CR; there's the kind of standard CR boilerplate, so it exists in the same namespace as the Kanister controller and has a name. Next slide, please. There's a set of actions inside a Blueprint; in this case, the action is named "backup", and this action operates on a StatefulSet. Next slide, please. What we also define in a Blueprint is output artifacts; usually, when you take a backup, you have to put it somewhere.
J: We left this pretty general, but a common case is pushing to object storage, and so here we have a path: once you've created a bucket, we have a path inside that bucket. Next slide, please. And we have a set of actions to perform; we call these phases. In this case we're invoking a Kanister function called KubeTask, which will run a new pod. The namespace field
J: here, if you look, is templated, and so we will handle the object in the same namespace as the StatefulSet, because we're actually working on that StatefulSet, and we're running an image which has whatever tools we need. In this case we're running a shell script; we've found that is pretty common, as a lot of admins like to write shell scripts or port shell scripts from their existing infrastructure. And in this case we're running a dump, and this...
K: I think, Andrew, this might be a good place to just quickly answer your question about discovering the components of a specific workload, for example. So, when Kanister knows that you're invoking an action on a StatefulSet, it makes the components of the StatefulSet available as template parameters.
K: So all the pods that are part of that StatefulSet, all the PVCs that are part of that StatefulSet; and then the Blueprint author has the option of referencing those in the Blueprint. For example, if you want to invoke a specific command only on the first replica of a StatefulSet, you now have the ability to do that as well. Yeah.
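Putting the pieces just walked through together, a Blueprint along these lines might look like the following (a sketch based on the talk, not the actual slide; the image name, bucket path, and dump command are placeholder assumptions):

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: postgres-blueprint      # lives in the Kanister controller's namespace
actions:
  backup:                       # the action name referenced by an ActionSet
    outputArtifacts:
      pgBackup:                 # where the backup ends up, e.g. an object-store path
        keyValue:
          path: "s3://my-bucket/backups/{{ .StatefulSet.Name }}"
    phases:
    - func: KubeTask            # Kanister function that runs a new pod
      name: takeBackup
      args:
        namespace: "{{ .StatefulSet.Namespace }}"  # templated from the subject object
        image: my-org/postgres-tools:latest        # placeholder image with pg_dump etc.
        command:
        - bash
        - -c
        - pg_dump -h {{ .StatefulSet.Name }} -U postgres mydb > /backup/mydb.sql
```

The `{{ .StatefulSet.* }}` references are the template parameters mentioned in the discussion: Kanister resolves them from the StatefulSet the action is invoked on.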
K: Yes; so, the function: if you look at the function there, the Kanister function you're using is a KubeTask function, so this says "spin up this image and run that command". There are other functions, like KubeExec, in which you can specify a specific pod or container you want to run the command in. Okay.
J: This is actually very simplified, and so we omitted a lot of the details here. At the end I have a link to the full Blueprint, which is a lot more complex; it includes things like how you pull from the service to get the hostname, and includes a little bit more information on maybe even the service account and user on the pod.
J: Okay, sounds good. Thank you; great questions. Next slide, please. So, similar to a Blueprint, we have an ActionSet. An ActionSet is how we invoke that Blueprint; for example, if I want to take a backup, I'll create an ActionSet. Similar to any other custom resource, we have all the required fields; one thing is that we like to generate the name, so you can use the same ActionSet spec and create it multiple times. Next slide, please. An ActionSet has a spec and a status.
J: So, in the spec, you define the action you want to run inside a Blueprint; here we have a reference to the Blueprint, as well as the action within the Blueprint we want to run. Next slide, please. It also references a specific object, essentially the resource; so here we're referencing the StatefulSet that we want to perform the backup on.
J: And if, let's say, you have multiple instances deployed, you would just change, for example, the namespace or the name of this reference. Next slide, please. And what we do at the end is, once the action completes, we update the status with any errors that occurred; or, in this case, we actually succeeded, and we created a set of artifacts. So the ActionSet, after it's completed, will contain all the artifacts created during that execution of the Blueprint. Next slide.
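The ActionSet side of what was just described could look roughly like the following (again a sketch with placeholder names):

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: backup-          # the name is generated, so the same spec can be reused
  namespace: kanister
spec:
  actions:
  - name: backup                 # the action inside the Blueprint to run
    blueprint: postgres-blueprint
    object:                      # the specific resource to act on
      kind: StatefulSet
      name: my-postgres
      namespace: my-app
# On completion the controller fills in .status with any errors, or,
# on success, the artifacts produced while executing the Blueprint.
```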
J: What we see is: if I have a database, I'll have one Blueprint for that type of database. You know, most companies will deploy it, say, from a Helm chart or from, like, an OpenShift template, and within that company we'd hope that you have mostly one Blueprint for it, and then multiple users could use the same Blueprint. And let's say my application requires multiple pieces: maybe it has a Mongo database, maybe it has a volume; you know, the cases of more complex applications...
J: What we see is that ActionSets have output artifacts, which are references to the data you've pushed, and so the controller that is watching has to somehow understand that type of artifact; or, in some cases, maybe it doesn't, right? Maybe it's treating it entirely as a Kanister abstraction, where Kanister will know how to handle whatever string it produces.
J: There's something I didn't get into, which is configuration; there's another CR, a custom resource, for handling object storage configuration (we call them Profiles), but I didn't include that in these slides. I'll just say: great questions. Next slide; let's skip this one, please, and skip this one. You know, so, to wrap up, I just want to talk about what we envisioned for the rest of Kanister.
J: We also want to continue to have Kanister support whatever workflows the working group comes up with; you know, I think having a back-and-forth on those would be really superb for the rest of the community. There's also some more community building we want to do around Kanister. You know, I think the Blueprints that we write, and that others have written, would be useful for the community at large, and getting more input and feedback on those, right,
J: you know, database experts to help with that. At the end I have a full list of all the Blueprints we have; you know, we have customers using internal Blueprints that they haven't open-sourced, but we have a growing list of open-source ones as well. And, of course, we want to integrate Kanister with more of the data protection controllers that will come out of this working group. Right now
J: it's integrated with Kasten's K10, but I think it'll be useful for other controllers as well. And, of course, there is some roadmap we want to work on, which, as we discussed here, includes supporting higher-level resources, like applications or Helm charts; I think that's on the roadmap. I'll wrap up there; sorry, we're out of time, and sorry for my poor network here. But, you know, feel free to reach out to me or anyone at Kasten if you have questions on Kanister, and we'd love to start more conversations and keep these talks going.