Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) Volume Snapshot Workgroup - 15 October 2018
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Jing Xu (Google)
A
B
So it's very similar to volume topology, and last time I think we had a bit of an issue with the comments about replication, because in the comments we said the volume should be accessible from two zones, and here we said, okay, for example, a specific volume is accessible from two zones and synchronously replicated. That seems to require the storage provider to guarantee that if a volume is accessible from two zones, the volume is replicated in those two zones. But what I am thinking is, there should not be such a requirement.
B
A
B
Next item: we mentioned we want to add a deletion policy, similar to what we have for volumes with PVs and PVCs, right, so we add a deletion policy for snapshots too. One thing is that the deletion policy is something we are thinking only the system admin should specify, so only the admin can specify the deletion policy, and it lives in the snapshot class right now. So should we require a deletion policy, and what should we call it, a retain policy or a retention policy?
B
C
D
I mean, if someone wants to manage their snapshot deletion solution through Kubernetes, they would implement that time-to-live on top of these APIs, and then they would expect deletion to actually delete it. If someone wants to manage it underneath Kubernetes, then Kubernetes shouldn't be touching this stuff.
D
B
For the volume snapshot class, only the system admin can change it, and since the volume snapshot class is used in creating snapshots, if the deletion policy is defined there, let's say Retain or Delete, then when the snapshot is created the deletion policy will be recorded in the volume snapshot content API object. Depending on how you specify it in the class, it will follow that, either retain or delete, and the user cannot change it afterwards; only the system admin might be able to change it.
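To make that flow concrete, here is a minimal Go sketch, assuming simplified stand-in types; these are not the real VolumeSnapshotClass or VolumeSnapshotContent CRD schemas, only an illustration of the policy being set by the admin on the class and copied onto the content at creation time, where the user can no longer change it.

```go
// Minimal sketch (illustrative types, not the real CRD schema): a deletion
// policy declared on a snapshot class is copied into the bound snapshot
// content at creation time, so the end user cannot change it afterwards.
package main

import "fmt"

type DeletionPolicy string

const (
	PolicyDelete DeletionPolicy = "Delete"
	PolicyRetain DeletionPolicy = "Retain"
)

type SnapshotClass struct {
	Name           string
	DeletionPolicy DeletionPolicy // set by the cluster admin
}

type SnapshotContent struct {
	Name           string
	BoundSnapshot  string
	DeletionPolicy DeletionPolicy // copied from the class, read-only for users
}

// createContent records the class's policy on the content object so the
// controller later knows whether to retain or delete the backing snapshot.
func createContent(class SnapshotClass, snapshotName string) SnapshotContent {
	return SnapshotContent{
		Name:           "content-for-" + snapshotName,
		BoundSnapshot:  snapshotName,
		DeletionPolicy: class.DeletionPolicy,
	}
}

func main() {
	class := SnapshotClass{Name: "fast-snapshots", DeletionPolicy: PolicyRetain}
	content := createContent(class, "db-backup-1")
	fmt.Printf("%s will be handled with policy %q\n", content.Name, content.DeletionPolicy)
}
```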
B
That's fine, yeah, it seems fine. Okay, so next is deletion. The finalizers here are to protect either a volume snapshot or a volume from being deleted while it is still being used, and we have a more detailed document explaining some of the protection we are trying to implement. The first one is for the volume snapshot content.
B
We want to prevent you from deleting the volume snapshot content directly while it is still bound to a volume snapshot, and this is handled by the snapshot controller. So we will add a finalizer right when the volume snapshot content is created, and we also check: if the deletion timestamp is already set for this content, then we don't need to add the finalizer anymore, since that indicates this content will be deleted.
B
That means, if it is currently bound to a snapshot, we cannot remove the finalizer, so the finalizer prevents you from deleting the content. Only when the content is not being used, or in other words it is not bound to any volume snapshot, can the finalizer be removed and the content be deleted.
B
This is the protection for the volume snapshot content, and we also have a finalizer for the volume snapshot, because when you are trying to delete a volume snapshot it is possible that this snapshot is currently being used to create a volume, since we can provision a volume from a snapshot. So that's the goal of protecting the volume snapshot: we'll add the finalizer to the volume snapshot too, and when you want to delete a snapshot, it will check whether this volume snapshot is currently being used by any volume.
B
So the controller has to go through the list of PVCs and check whether those PVCs have a data source that matches this volume snapshot, and only if the PVC is already bound successfully, meaning the volume was created successfully, do we not need the finalizer at all. If the phase is still pending, then we cannot remove the finalizer.
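A sketch of that "is this snapshot still in use" check, under the assumption that "in use" means some PVC whose dataSource points at the snapshot has not reached the Bound phase yet. Types are simplified stand-ins for the real API objects.

```go
// Walk the PVCs; if any PVC provisioned from the named snapshot is still
// pending, the snapshot finalizer must stay in place.
package main

import "fmt"

type PVCPhase string

const (
	PhasePending PVCPhase = "Pending"
	PhaseBound   PVCPhase = "Bound"
)

type DataSource struct {
	Kind string // e.g. "VolumeSnapshot"
	Name string
}

type PVC struct {
	Name       string
	Phase      PVCPhase
	DataSource *DataSource
}

// snapshotInUse reports whether any PVC is still being provisioned from the
// named snapshot; only fully bound PVCs no longer need the source snapshot.
func snapshotInUse(pvcs []PVC, snapshotName string) bool {
	for _, pvc := range pvcs {
		if pvc.DataSource == nil || pvc.DataSource.Kind != "VolumeSnapshot" {
			continue
		}
		if pvc.DataSource.Name == snapshotName && pvc.Phase != PhaseBound {
			return true
		}
	}
	return false
}

func main() {
	pvcs := []PVC{
		{Name: "restored-a", Phase: PhaseBound, DataSource: &DataSource{Kind: "VolumeSnapshot", Name: "snap-1"}},
		{Name: "restored-b", Phase: PhasePending, DataSource: &DataSource{Kind: "VolumeSnapshot", Name: "snap-1"}},
	}
	fmt.Println("snap-1 still in use:", snapshotInUse(pvcs, "snap-1")) // true while restored-b is Pending
}
```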
B
Yes, so our assumption is that the PVC and volume lifecycle is independent of the snapshot lifecycle. I know some storage systems do have a dependency between snapshots and their volume, but that will be determined by the volume plugin. So if you are trying to delete, let's say, a volume while it has some snapshots associated with it and the plugin does not allow it, the plugin will return an error when you try to delete the volume.
B
C
B
Right, the volume plugin should return an error, and then the controller will report this error up through the API so the user knows why it cannot be deleted. But we don't provide protection at our layer for that, because different volume plugins behave very differently: some have a dependency between volumes and snapshots and some don't, so we cannot make assumptions, yeah.
B
Finalized
err
we
put
PVC
so
because
we
create
snapshot
right
through
bottom.
We
want
to
prevent.
You
deletes
the
volume
while
this
snapshot
is
still
being
created,
so
we
still
plan
to
do
this
through
snapshot
controller
and
when
to
add
panelizer.
So
when
there
is
a
snapshot
request
coming
right,
we
can
add
finalizar
to
is
associated
PVC
if
there
is
no
panelizer
before
I
did
it
before,
but
there
is
to
remove
analyzer
on
this
PVC
at
a
bit
trickier.
So
we
are
saying:
ok
after
you
die
or
create
snapshots,
the
central
is
already
cut.
B
...we don't need the source volume to exist anymore, so we could remove the finalizer at that moment. But we don't actually know whether there are other snapshots being created from this volume at that moment, so we are thinking of using a kind of count annotation, so we can check whether there are currently any snapshots being created or not.
B
Once the snapshot is ready, so once the cut is taken, we can decrease this number, and the controller will check whether the count is equal to zero or larger than zero and decide whether we can remove the finalizer or not. We are also thinking it might be useful to have a snapshot-ready count as well, to indicate how many snapshots have been created and are already ready for this particular PVC.
B
D
Maybe a dumb question, because it's more of a general question, but what prevents races on incrementing or decrementing the count in Kubernetes? Is there a way to do a transaction that will prevent multiple increments from overwriting each other?
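The usual Kubernetes answer to this is optimistic concurrency: every object carries a resourceVersion, an update sent with a stale version is rejected as a conflict, and the client re-reads and retries. The sketch below models that pattern with an in-memory store rather than a real API server; it illustrates the mechanism only and is not the workgroup's decided implementation.

```go
// Conflict-and-retry update of a count annotation, modelled locally.
package main

import (
	"errors"
	"fmt"
	"strconv"
	"sync"
)

type pvcObject struct {
	ResourceVersion int
	Annotations     map[string]string
}

type fakeStore struct {
	mu  sync.Mutex
	obj pvcObject
}

var errConflict = errors.New("conflict: stale resourceVersion")

func (s *fakeStore) get() pvcObject {
	s.mu.Lock()
	defer s.mu.Unlock()
	copyAnn := map[string]string{}
	for k, v := range s.obj.Annotations {
		copyAnn[k] = v
	}
	return pvcObject{ResourceVersion: s.obj.ResourceVersion, Annotations: copyAnn}
}

// update succeeds only if the caller read the latest version, mimicking the
// API server's conflict behaviour.
func (s *fakeStore) update(o pvcObject) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if o.ResourceVersion != s.obj.ResourceVersion {
		return errConflict
	}
	o.ResourceVersion++
	s.obj = o
	return nil
}

// incrementCount retries on conflict, so two concurrent increments cannot
// overwrite each other: the loser sees the conflict and re-reads.
func incrementCount(s *fakeStore, key string, delta int) {
	for {
		o := s.get()
		n, _ := strconv.Atoi(o.Annotations[key])
		o.Annotations[key] = strconv.Itoa(n + delta)
		if err := s.update(o); err == nil {
			return
		}
	}
}

func main() {
	store := &fakeStore{obj: pvcObject{Annotations: map[string]string{}}}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); incrementCount(store, "snapshots-being-created", 1) }()
	}
	wg.Wait()
	fmt.Println(store.get().Annotations["snapshots-being-created"]) // always 10
}
```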
B
F
Will there be a dedicated admission plugin for snapshots?
B
No, it's not a separate one; it will be added as part of the snapshot controller. We already have a PR to add the admission controller, but it is integrated into the snapshot controller. So as long as you deploy the snapshot controller and set up the service for the admission webhook, it will be part of the controller. That makes sense, yeah.
B
So with all these finalizers, we cannot have a 100 percent guarantee; there still might be some kind of race condition, because when you check, okay, it is okay to remove the finalizer, but before you remove it the condition changes. It should be rare, though. Basically we cannot have an atomic operation of checking and then removing the finalizer, so after you check, another event might happen and it is no longer true, but we still remove the finalizer.
B
Okay, so far it seems fine, so we'll go ahead. We already have this work in progress, so we'll update it when it's ready. And for this count thing, currently we propose two counts; any other suggestions? One means how many snapshots are currently being created, that is the in-progress count, and the other is the ready count, which means how many snapshots are already ready for this PVC.
B
G
So we deliberately don't call it a consistency group here, because we want to make it more generic. It's a volume group instead, and that can actually support storage systems that can provide a consistent group snapshot as well as those that do not provide the consistency; they can still use this grouping concept.
G
So this grouping concept allows volumes to be grouped together and then managed together. There are mainly two API objects: one is called a volume group and the other one is called a group snapshot. The volume group will have one group spec and one group status, and in the spec we will have a source. This is only needed if you try to create a volume group from a group snapshot.
G
The source will give you the ID of the group snapshot, but if you are just creating a brand new volume group, then you don't need this source. We also have a boolean, consistent group snapshot, so that can be true or false. And then we have this list, the list of persistent volume claims, so here you can define several PVCs that you want to be included in this group. Then the next one is the status.
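An illustrative sketch of the two objects being described, with the spec fields mentioned here (an optional group-snapshot source, a consistency boolean, and a list of PVCs). These are simplified Go structs for the sake of the discussion, not the actual CRD definitions.

```go
// Simplified stand-ins for the proposed VolumeGroup and group snapshot objects.
package main

import "fmt"

type GroupSnapshotSource struct {
	GroupSnapshotName string
}

type VolumeGroupSpec struct {
	// Source is set only when the group is created from an existing group
	// snapshot; it is left nil for a brand-new group.
	Source                  *GroupSnapshotSource
	ConsistentGroupSnapshot bool     // true only if the backend can guarantee consistency
	PersistentVolumeClaims  []string // PVC names included in the group
}

type VolumeGroupStatus struct {
	BoundVolumeGroupID string
	Ready              bool
	Error              string
}

type VolumeGroup struct {
	Name   string
	Spec   VolumeGroupSpec
	Status VolumeGroupStatus
}

type VolumeGroupSnapshot struct {
	Name            string
	VolumeGroupName string
	Ready           bool
}

func main() {
	g := VolumeGroup{
		Name: "db-group",
		Spec: VolumeGroupSpec{
			ConsistentGroupSnapshot: true,
			PersistentVolumeClaims:  []string{"db-data", "db-logs"},
		},
	}
	fmt.Printf("%s groups %d PVCs (consistent=%v)\n",
		g.Name, len(g.Spec.PersistentVolumeClaims), g.Spec.ConsistentGroupSnapshot)
}
```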
G
So if some error happens when you create a group of volumes, say you have five in the list and three are created successfully but the fourth one fails, in that case that is a failure, so it should return a failure for the entire group. This is the same as the other operations, like create volume or create snapshot, where the call also needs to be idempotent, which means if the driver creates a volume group...
G
...if Kubernetes makes the same call again, the driver needs to make sure that it doesn't create duplicates; it should still just be five. It also needs to handle the failure cases, so at the end, if the request was for five PVCs, then at the end there should just be five created, to support idempotency.
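A small sketch of that idempotency requirement: replaying the same create-group request must return the group that was already created for it rather than making a duplicate, and must converge on exactly the requested set of PVCs. The driver type and its bookkeeping here are hypothetical; a real driver would key on its own backend identifiers.

```go
// Idempotent create: a repeated request returns the existing group.
package main

import "fmt"

type createGroupRequest struct {
	Name string
	PVCs []string
}

type group struct {
	ID   string
	PVCs []string
}

type driver struct {
	groupsByName map[string]*group
}

// CreateVolumeGroup is safe to call repeatedly with the same request: the
// second and later calls return the existing group rather than a duplicate.
func (d *driver) CreateVolumeGroup(req createGroupRequest) (*group, error) {
	if g, ok := d.groupsByName[req.Name]; ok {
		return g, nil // already created by an earlier attempt
	}
	g := &group{ID: "backend-id-" + req.Name, PVCs: append([]string(nil), req.PVCs...)}
	d.groupsByName[req.Name] = g
	return g, nil
}

func main() {
	d := &driver{groupsByName: map[string]*group{}}
	req := createGroupRequest{Name: "db-group", PVCs: []string{"a", "b", "c", "d", "e"}}
	g1, _ := d.CreateVolumeGroup(req)
	g2, _ := d.CreateVolumeGroup(req) // retry after a timeout or controller restart
	fmt.Println(g1.ID == g2.ID, len(g2.PVCs)) // true 5
}
```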
G
A
B
D
I mean, yeah, I realize in Kubernetes we can define whatever we want, but I'm just saying, if you define it such that the group is mutable, that puts a really heavy burden on implementations to be flexible enough to take something that is not in the group and then put it in the group.
G
So it's really when you take a group snapshot, at that time, that's when you can really support consistency or not, right. When you're just creating a group of volumes, at that point we're not talking about a consistent group snapshot yet. It's similar to what we did in Cinder: you can actually add a volume to the group and you can remove one from it.
G
Yeah, but there will be one issue in the controller: we need to make sure that if a volume is part of the group, then you can't really just delete that one volume; there will be checks, right. If there is a group ID on that volume, on the PVC, then you can't just delete it. You can remove it from the group, but you cannot just delete it by itself; you would have to delete the whole group. So there will be some changes there in the controller.
G
D
G
Yeah, so that's the reason we want to discuss it, right. So, you know, we...
G
All right, so I'll just talk about Cinder, since that's the way we implemented this one. It's the responsibility of the plugin, because not all systems can support this type of consistency. Right now we're talking about this consistency group at the storage system level, right, so only if the plugin can support this can it do this. Otherwise it can just say consistency is false, so that you...
F
G
F
G
D
G
So basically, I was thinking, in Cinder the only thing that we added in the group type is whether it's consistent or not. That's why we had a discussion about that, and that's why we kind of moved it out, but maybe it makes sense to still have a type, so that the user knows, okay, there are different types that support different functionalities.
F
D
But here, I think what they are getting at is that you could define a group that has the consistent type true, and then you can have two different PVCs that are on totally different pieces of storage, and you could put them both in the group. Now, Kubernetes will not stop you from doing that, but then the plug-in has no way to actually...
G
I think this is the same as the other case. You know, when you create a regular volume you have the provisioner specified in the class, right. So here is something we need to do as well: you need to specify what your provisioner is, so you know, okay, this plugin...
G
D
But that "making sure" could involve moving the data as part of putting something into the group. The act of adding something to the group might mean that the backend, in order to make the guarantee that it can take a consistent snapshot, may have to lift the data from one place and put it in another place, which could be an enormous copy operation, just to add something to a group. You can do it, but it's not just like flipping a switch.
B
D
I
D
I
So maybe I'm incorrect, but I thought the storage class, with the exception of, you know, something like Trident, which does another abstraction of multiple backends behind it, and there are other cases that might do that as well, but that should work, and then it should be up to the plug-in to determine the scheduling issues from there, shouldn't it? I mean.
D
I
G
As far as where the data is supposed to be, okay, because I know we have run into some issues like that, for example where you need to make sure the volumes come from the same pool, things like that. The scheduler actually got involved, right, you add some special checks there, so definitely it can be tricky. So the storage...
C
G
A
I agree with the concerns about whether this should be immutable or mutable, but before we discuss that I wanted to take a step back, put on my API reviewer hat, and ask: do we need another API object? What is the end purpose of doing something like this? I understand the idea that we want to be able to snapshot multiple volumes and end up with a consistent snapshot across all of them, but the way I was imagining...
A
...this would work was through the grouping that we already have for volumes, which is the workload, the pod. When you create a pod, you specify the volumes that you're concerned about for that particular application, and in general, when you want to take snapshots of multiple volumes, the natural grouping is the consumer of those volumes. So if we already have the pod as a natural grouping, maybe we have some way to trigger a snapshot against...
A
...a pod object, to say: please take a consistent snapshot of all the volumes that are available to this particular pod. That seems more natural to me, and it can also take advantage of the lifecycle hooks that we could inject into the pod object, right, for quiescing and things like that. If we have those, we could potentially end up building a system that would work across more storage systems, not just storage systems that enforce consistency.
D
A
The biggest concern I had was the immutability versus mutability aspect; I don't like that. A nice thing is that once a pod is created, it is essentially immutable in terms of the number of volumes that it has, so that's good. But your concern is probably right: you could have a pod that references three different volumes from three different backends. I think that would be okay; we need to think about how to handle that.
A
One way is that we let Kubernetes handle the snapshotting and quiescing of the workload, and then it just triggers, you know, snapshots against three backends at the same time; or, if it recognizes that all three of them, or two out of the three, belong to the same storage system, it issues some group snapshot call to say: please create a snapshot of volume A and volume B. And I think, I think...
D
What's really needed is a hint to the thing that's scheduling PVCs, to say: I want to create three PVCs and I want them to end up in the same place. So I need a way, basically, when I create those three PVCs, to add a little bit of metadata that says, you know, I don't care where they go as long as all three are in the same place, and then, if I use those three PVCs in a pod, I can be guaranteed...
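One possible shape for the hint being described would be a shared annotation on the PVCs that the provisioner or scheduler reads to co-locate them. The annotation key below is purely hypothetical, no such key existed at the time of this discussion, and the types are simplified stand-ins.

```go
// Hypothetical placement-group hint expressed as a PVC annotation.
package main

import "fmt"

const placementGroupAnnotation = "example.storage.k8s.io/placement-group" // hypothetical key

type PVC struct {
	Name        string
	Annotations map[string]string
}

// samePlacementGroup reports whether all PVCs carry the same non-empty hint,
// i.e. whether the user asked for them to land on the same backend.
func samePlacementGroup(pvcs []PVC) (string, bool) {
	if len(pvcs) == 0 {
		return "", false
	}
	group := pvcs[0].Annotations[placementGroupAnnotation]
	if group == "" {
		return "", false
	}
	for _, p := range pvcs[1:] {
		if p.Annotations[placementGroupAnnotation] != group {
			return "", false
		}
	}
	return group, true
}

func main() {
	pvcs := []PVC{
		{Name: "data-0", Annotations: map[string]string{placementGroupAnnotation: "db"}},
		{Name: "data-1", Annotations: map[string]string{placementGroupAnnotation: "db"}},
		{Name: "data-2", Annotations: map[string]string{placementGroupAnnotation: "db"}},
	}
	group, ok := samePlacementGroup(pvcs)
	fmt.Println(group, ok) // db true
}
```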
D
I
I really like the approach that was just mentioned, because it makes the most logical sense for the use case and it also makes for the easiest workflow and use case for the end user. What I'd like to see is that addressed first, and then figure out, if you want to do some special scheduling enhancements for volume-to-backend placement for your special use case, then we should look at that as a separate thing.
B
One thing I wonder is, in a pod you can have many different kinds of volumes, so which volumes do we group, and there's no easy way to specify that, right. There may be ten volumes for that pod, but two of them belong to one group and two others belong to a second group. I'm not sure whether there is a huge use case for that, or...
I
Associated with that pod, wouldn't you just do all PVCs associated with that pod? I'm not sure what that context would be, but what you could do then is just issue the snapshot on both of them. The thing is, unless you get into multi-writer cases, the only thing actually writing data to those PVCs would be the pod, in most cases, right. Yes, it's not always true.
B
A
In that case, what I imagine is basically building an equivalent operation for these different classes of deployment objects that we have. Initially we start with some mechanism that allows us to create a snapshot of a pod, I don't know, call it a pod snapshot. Once we have that primitive ironed out, we have the ability to take a consistent snapshot of all the persistent volumes for a given pod; then we can leverage that to build higher level functionality.
A
Stateful sets build on top of pods, replica sets build on top of pods; you could introduce higher level primitives that say, you know, snapshot a stateful set, and that can do the custom logic required to handle the higher-level snapshotting, but then use the underlying primitive that we exposed, which was the pod snapshot.
G
So, let's say, when we talk about this consistent group thing, does that mean we need to create three consistent groups? Each group would be associated with a pod, or with one replica of the stateful set, because if there are multiple replicas we're not really putting them all together in this, okay.
A
It would become an option at the stateful set snapshot controller level. For the stateful set snapshot, you have one of two options: either you get consistency per shard, or you get consistency across all the shards. If you choose to do consistency per shard, then essentially what we do is go out and create a bunch of pod snapshot objects, and the pod snapshot controller handles that. If they want consistency across all the shards, meaning all the volumes across all the instances, that's a little bit more challenging.
A
D
A
From the bottom up, so really, we introduced the basic, very basic primitive of creating a snapshot against a single volume. I think the natural next extension of that is, let's focus on the pod. Once we have something at the pod level, we can start going up the stack and see what additional things we need to add, in addition to the primitives we already have.
B
So I think that's also a good way to design it, but one thing I want to make sure of is: if we do it at the pod level, then when you create the volume you would already have indicated, okay, this volume should be in a group, like I mentioned, right. So we need to say, okay, this volume we are creating belongs to this group, and the driver might do something to place them into the same place; right now we don't have that. Wait, what?
H
I
A
That would be an option. The way I see it is, the pod is the natural grouping. The easy first implementation of this should be: the pod snapshot controller looks at all the PVCs and, first, it calls the quiesce hook against the application to make sure it pauses the application, then it calls snapshot against each of the volumes, and once that's completed it un-quiesces the workload.
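A sketch of that pod-snapshot flow: quiesce the application, snapshot each PVC the pod uses, then un-quiesce. The hook and snapshot functions here are placeholders; a real controller would invoke lifecycle hooks on the pod and create VolumeSnapshot objects.

```go
// Application-level consistency: writes are paused while the individual
// volume snapshots are cut, and resumed afterwards even on failure.
package main

import "fmt"

type Pod struct {
	Name string
	PVCs []string
}

func quiesce(pod Pod) error   { fmt.Println("quiesce", pod.Name); return nil }
func unquiesce(pod Pod) error { fmt.Println("unquiesce", pod.Name); return nil }

func snapshotPVC(pvc string) error {
	fmt.Println("snapshot requested for", pvc)
	return nil
}

// snapshotPod pauses the application, snapshots every volume it uses, and
// resumes the application whether or not the snapshots succeeded.
func snapshotPod(pod Pod) error {
	if err := quiesce(pod); err != nil {
		return err
	}
	defer unquiesce(pod)
	for _, pvc := range pod.PVCs {
		if err := snapshotPVC(pvc); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = snapshotPod(Pod{Name: "db-0", PVCs: []string{"db-data", "db-logs"}})
}
```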
A
If we want to get fancier and say, okay, two out of these three volumes are of the same type, and the storage system should be able to do fancy things to make sure there's consistency between them, we could leave that logic to the pod snapshot controller to figure out when it's issuing those snapshot calls, as long as it has some way to tell the storage system to snapshot them as a group instead of taking a snapshot of one volume.
A
G
I think if we do this pod snapshot, then we'll achieve application level consistency, but it doesn't really guarantee storage level consistency, because the snapshots will take quite some time, right. So I think we will not get to that at the first step, I think, so yeah.
A
I think application level consistency should be the higher priority, and whatever mechanism you come up with for triggering this, you could have additional options on it. For example, if we model it similar to the way we modeled the volume snapshot, we have something like a pod snapshot, and then you can have options that say: I also want storage level consistency. By default that option is off, so you only get application level consistency, which Kubernetes can handle. If your...