Description
Meeting of Kubernetes Storage Special-Interest-Group (SIG) Volume Snapshot Workgroup - 01 October 2018
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Jing Xu (Google)
A
Nice, that's good. We are ready to announce the snapshot feature in alpha, and we'll have a blog post soon, in these couple of weeks, and I've been working on adding more documentation. So please check it out. If you see any problem, or you think any information is missing, please let us know. Then we'll continue to discuss the features we want to have in the near future. Some are needed before we can move to beta.
A
Okay, good. We also want to have a deletion policy. As we know, for PVC/PV we already have the reclaim policy, and here we probably want to call it a deletion policy, which is more appropriate. The default is Delete. That means when the user deletes the VolumeSnapshot object, the VolumeSnapshotContent object and its associated physical snapshot will both be deleted. The other policy is called Retain: when the user deletes their VolumeSnapshot object, the content object and also the associated physical snapshot will still exist; the controller won't delete them.
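As a rough sketch of the two policies just described — illustrative only, with made-up field names rather than the actual external-snapshotter code — the controller's decision could look like:

```python
# Illustrative sketch: the field names ("deletionPolicy", "snapshotHandle")
# are assumptions for this example, not the real controller implementation.

def on_snapshot_deleted(content, delete_physical_snapshot):
    """React to a VolumeSnapshot deletion based on the content's policy.

    Returns the surviving content object, or None if it was deleted too.
    """
    if content["deletionPolicy"] == "Delete":
        # Delete policy: remove the physical snapshot, then the content object.
        delete_physical_snapshot(content["snapshotHandle"])
        return None
    # Retain policy: the controller leaves both the content object and the
    # physical snapshot untouched.
    return content
```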
A
One extra question I want to confirm here is for the PVC. If the policy is Retain, the PV will go into the Released phase, and in that phase the PV cannot be bound to any other PVC again. So the administrator needs to manually delete the PV object and create a new PV object, and then that new PV object, of course, can be bound to another PVC.
B
You have to require users to keep track of that reference somehow after deletion of the VolumeSnapshot object. So this is something we need to ask them to do. They have to do some bookkeeping on the side, because once they delete the object, they would lose any record of that relationship, so they have to do some bookkeeping out-of-band.
A
Kind of relevant to this: at the bottom, I have been thinking whether we should provide some functionality, maybe just through a kubectl command. I think if the user knows a volume, they can query the list of volume snapshots taken from this volume, but there are many choices for how we can provide that functionality. Basically, a user has a volume and might want to know: okay, what are all the snapshots?
A
Okay, good. We want to check which drivers already support snapshots, currently support them, and plan to support them. Currently we have the following CSI drivers that have already implemented it, including GCE PD, OpenSDS, Ceph RBD, hostpath, and Portworx, and we heard a couple of others are work in progress; it may take some time for them to merge the feature into their drivers. There are also a couple of CSI drivers planning to implement snapshots.
B
So one issue we ran into when we were trying to support volume resize was that the admission controller had hard-coded checks for certain drivers — whether they support resize or not — and then everything else was rejected. So this caused complications for CSI drivers, as well as for the external controllers that were trying to implement resize.
B
So just a cautionary note: let's not repeat that same mistake. Let's consider that some CSI drivers may support snapshots natively and some may not, or we may support snapshots with NFS, for example, through external controllers. So we shouldn't have checks in the admission controller that would reject volume snapshot operations based on just the driver type.
C
I think that's something else, right? We need to check the capabilities reported by the CSI driver. I believe you can only say something after checking the type — like whether it is GlusterFS, or whether it is iSCSI, something like that. Okay, I don't think we have that, because these are CSI drivers; we don't check that. The resize check maybe exists because that's implemented for in-tree plugins. Maybe that's why. I'm thinking, yeah, I don't think we have this problem.
B
For CSI drivers, you need to query the CSI driver for its capabilities to know whether it supports resize or not. The way the admission controller was previously written for resize, it was just doing a simple check of whether, for example, GCE PD was an expandable plugin or not, or OpenStack was.
B
It checked whether a plugin was expandable or not, and it was rejecting everything else, so that prevented controllers from supporting resize. So basically I'm saying that logic should reside in one of the controllers that handle resize or snapshot, as opposed to the admission controller, which does it bluntly and without much context.
A
I know we don't have an admission controller, but we plan to have one before we move to beta, and right now we have a PR to add an admission controller. That admission controller is basically for validation — verifying whether the information in the VolumeSnapshot object is correct — because each VolumeSnapshot object has the PVC information for the volume you want to take a snapshot from, and it will verify it.
A
For example, whether the PVC is bound to a PV, and whether we could take a snapshot of that volume: at least the PV and PVC should exist and be bound. It also checks the snapshot class. If the user does not specify a snapshot class, it can set a default one. The way the default works is that it first fetches the storage class from the PVC; based on the storage class provisioner it knows which driver was used to provision the volume, and it finds out whether you have a default snapshot class with the same driver.
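The default-class lookup described above could be sketched roughly like this. It is a simplified model with plain dicts standing in for API objects, not the actual admission controller code:

```python
def pick_default_snapshot_class(pvc, storage_classes, snapshot_classes):
    """Find a default snapshot class whose driver matches the PVC's provisioner.

    Simplified model: objects are plain dicts rather than API resources.
    """
    # 1. Fetch the storage class named by the PVC.
    sc = storage_classes[pvc["storageClassName"]]
    # 2. The storage class provisioner is the driver that provisioned the volume.
    driver = sc["provisioner"]
    # 3. Look for a snapshot class marked as default for that same driver.
    for sclass in snapshot_classes:
        if sclass["driver"] == driver and sclass.get("isDefault"):
            return sclass["name"]
    return None
```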
A
Currently we haven't thought about anything else. For resource quota — I think we discussed it last week — it's pretty important, and similar to storage quota we can restrict the total number of volume snapshots you can create per namespace, and also per snapshot class. We also said we want to have a size restriction, but that size would be the restore size, since that's the only size information we can get from a snapshot.
A
Currently in Kubernetes we don't have the implementation to support resource quota for CRDs — snapshot is a CRD, right — but I know someone on our team, in a different group, is working to support quota for CRDs. I will talk to them, see how it has progressed, and see how far away we are from having this feature. I hope we can have this soon, since before we move to beta I think we need quota support too.
A
Okay, good. For the creation retry policy, I think we have not yet reached a final conclusion, but this seems not very critical — more of a nice-to-have feature. For now I'm thinking of something simple, like allowing the user to set a period of time in which to retry. The period starts from when the API object is created: when an API object is created there is a timestamp, and the controller can check whether it is within, say, ten seconds, and if it fails to create the snapshot it can do a retry.
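The timestamp-based window being proposed could be sketched like this — a hypothetical illustration only, since no conclusion has been reached; a real controller would read `metadata.creationTimestamp` from the API object:

```python
import time

def should_retry(created_at, retry_window_seconds, now=None):
    """Retry snapshot creation only within a window after the object's
    creation timestamp (both given as seconds since the epoch)."""
    now = time.time() if now is None else now
    return (now - created_at) < retry_window_seconds
```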
A
Nope? Okay, good. So another thing is related to topology. We know that for snapshots, some cloud providers have the capability of creating snapshots and also uploading snapshots to the cloud, so a snapshot is available either globally or from a zone. But for some other providers, after creating snapshots, they might require a separate step to upload the snapshots to a certain location.
A
For provisioning, topology lets you say where to provision the volume, so I'm thinking for snapshots we can follow a similar or the same approach — support the same interface for topology — so it tells which zone, for certain cloud providers, you want to create a snapshot in. We first want to add this into CSI, and then we can support it.
A
Okay, good. The next one we've been thinking about is reverting to a snapshot. Today we only support creating a new volume from a snapshot. That's a new volume — a completely different volume from the original volume you took the snapshot from. Some drivers may support a revert capability. That means you just revert the snapshot onto the original volume.
A
You still have the same volume; you're not going to create a new one. It's still the old one, but after you revert, it goes back to the previous state from when you took the snapshot. For some drivers, their users like this feature — it's very neat — so it's important to them. Some drivers may not have this functionality.
A
So currently we say: okay, you create a new PVC and the source would be a snapshot; then we'll provision a new volume from that snapshot. For revert, that means you probably have an existing PVC and you want to revert a snapshot to bring it back to a previous state. How do we trigger that — by just modifying, say, the data source of the PVC to a snapshot, or not? This is also relevant to something we've been discussing for some time, which we call in-place restore. That means your PVC already exists.
A
It's bound to a PV, but the user somehow wants to change the underlying volume, so a new volume is provisioned from the snapshot; the PV can reference that new volume, and the PVC can bind to that new PV. That is useful, for example, if a user has a running workload and something happens to the volume: you don't want to fail over to new storage, and the new volume can be provisioned — copied — from the snapshot.
A
So these two are kind of related but different things, and what interface we can use and how to trigger those is kind of tricky; we need to think a lot more about that. For in-place restore we do have a proposal, but we haven't reached consensus on it. Next time I think we can work on those two: how to differentiate them and how to design both, if we want to support them.
A
Okay, good. So the next one here is to list and delete snapshots taken from a PVC, as I mentioned. This might be a nice feature for users, to be able to easily query what snapshots have been taken from a particular volume, and they can manage them — if there are too many, they can delete some of those. For this we might use kubectl as the interface, and then you can pass a flag, say a source flag, equal to something.
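A query like that — "give me all snapshots whose source is this PVC" — boils down to a filter over the VolumeSnapshot objects in a namespace. The sketch below is a stand-in for the hypothetical kubectl flag being discussed; the field layout is illustrative, not an existing API:

```python
def snapshots_for_pvc(snapshots, pvc_name):
    """Return the snapshot objects whose source PVC matches pvc_name.

    Stand-in for a hypothetical `kubectl get ... --source=<pvc>` query;
    the dict layout is an assumption for this example.
    """
    return [s for s in snapshots if s["spec"]["source"]["name"] == pvc_name]
```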
E
I've been trying to think of a way that Kubernetes can take advantage of the ListSnapshots CSI RPC, because so far we haven't needed it at all. But there's got to be a way to say: okay, CSI knows that there's some snapshot that Kubernetes doesn't yet know about — how do we get Kubernetes to know about it?
F
So the snapshot list, I think, is valuable, particularly for things like a create-snapshot request and any internal things that the plugin does, so I think that's all good. As far as the rest — Ben, are you thinking of the ability to import external snapshots into the system? Is that what you're thinking?
E
Yeah, yeah — in particular, ones that are taken on a schedule. That's implemented beneath Kubernetes instead of on top of Kubernetes. So, assuming that there is something taking snapshots and you decide you want one of those, how do you get it? How do you get Kubernetes to know about it, so that it can then do the rest of the things Kubernetes can do with that snapshot?
A
Okay, I just want to say that right now, if you want to import existing snapshots, you can do that, but you need to know your snapshot handle, like an ID or key, and you can do what we call static binding. So, similar to PV/PVC: you create a VolumeSnapshotContent API object to represent that snapshot, and then the user can create a VolumeSnapshot API object pointing to that content object, and the system can bind them together.
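The static-binding pairing just described could be modeled like this — a simplified sketch of the two objects an administrator creates, with field names that only approximate the real VolumeSnapshotContent/VolumeSnapshot API:

```python
def static_bind(snapshot_handle, namespace, name):
    """Model the two objects an admin creates to import a pre-existing
    snapshot: a content object carrying the backend handle, and a
    user-facing snapshot object pointing at it. Field names are
    simplified approximations, not the exact API schema."""
    content = {
        "kind": "VolumeSnapshotContent",
        "name": f"{name}-content",
        "snapshotHandle": snapshot_handle,  # backend ID the admin must know
        "boundSnapshotRef": {"namespace": namespace, "name": name},
    }
    snapshot = {
        "kind": "VolumeSnapshot",
        "namespace": namespace,
        "name": name,
        "snapshotContentName": content["name"],
    }
    return content, snapshot
```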
E
It's not the kind of thing that you could expect an administrator to just know how to do. They're going to need something to help them create that thing — something that says, you know, show me a list of what's there, or takes input that will help me seek through the list of what's there and then creates the right VolumeSnapshotContent for that snapshot. I don't think that's something we can expect an administrator to just know how to do.
F
The other thing — your example of somebody who has an external snapshot that doesn't work with Kubernetes — frankly, I think that sort of thinking or process should be out of scope for what we discuss here, because if you're not using Kubernetes, then we can't guess how you're going to do it.
E
So I'll reiterate the concern I brought up a few months ago, in case anyone has forgotten, which is that one of the problems with snapshots is that there are potentially a huge number of them, because if you're taking them on a schedule for all of your volumes, you can quickly end up with ten or a hundred times as many snapshots as you have volumes. That's a lot of stress to put on Kubernetes, and so there's a good reason for people to want to implement snapshots beneath it.
E
That's my big concern. If it was no big deal for there to be, you know, fifty thousand snapshots in the Kubernetes database, where you're creating and deleting thousands of them every day, then I would say: okay, this is not something we need to worry about. But I think it's a scale problem.
G
A quick thing on that. I think that's a very good point, but I think there are two things around that, from the Kubernetes point of view. As far as I know — please let me know if I'm wrong — one is a user-serviceable request, and then the one where the system is doing the scheduled snapshots, that's an administrator of the storage system, or the system itself, doing that, so Kubernetes may not be able to see that.
G
The user may not be able to see those; the user probably sees only what they requested. Maybe they can go to the administrator and say, I'm wondering whether there's a scheduled one that I can access, but maybe Kubernetes only handles what is in the user's point of view. Does that make sense?
G
I think that's a great point. Going back: since the user did not create them, they're not going to be available to them as a list — unless we have Kubernetes continuously watch the CSI driver for events, those events meaning something like "I just created this", and then it would create the corresponding object.
G
The use case for that would be: as a storage provider, if I had that system, then I would provide my own way to show the user the availability, outside the scope of Kubernetes — through my own CRD, my own operator, or something. That's the way I would do it, because if not, then it will get hairy.
G
Since I am a storage provider, I would make it very simple for my user to be able to consume that, and to do that I would have support for Kubernetes snapshots, and then I would have an operator that sits there and provides the other model — or maybe I can combine them both. I just need to provide a model that makes it simple: don't even think about this, just hide that under the covers, yeah.
A
I like the discussion, but I do see the point from Ben's side. So it's possible, but it's probably very much future work. We could have a CRD, something for how to import snapshots from the existing snapshots, where, for example, you can specify your source and the time period; since CSI has this list function, it can query all the snapshots satisfying your requirements — the source is this, the time period is this — and automatically generate all the content objects for you. So it's all possible; it's just not there yet.
A
So, okay, that's good. And this kubectl command is maybe also kind of future work, nice to have; we will probably try to implement it when we have time. The next one on the list is group snapshots. That's also quite interesting, more of a must-have feature. Consider you have a StatefulSet: there are multiple pods running with different volumes, and you want to take consistent snapshots for all the volumes — and also for a single pod.
A
You might have multiple volumes too, and we have a list of those, so that means you can create a group of snapshots together at the same time. Some storage providers even have this consistency group concept; they can help to create such consistent snapshots for group snapshots. Then how do we implement it and provide an interface?
G
I think this is what we usually call consistency groups — in OpenStack they're "groups" now. It's very important, and I think we're going to hit this probably as soon as snapshots start being used by customers, because normally they do a stack of containers, right — that's what they want to do. As a user, I want to snap my app and my database and whatever else all together. That's the use case they're looking for. They don't want two independent things.
A
I haven't played with group snapshots yet, but normally with the underlying driver plugin you can specify a group of volumes, and then the command says: take a snapshot of this group of volumes. Is that how it works? Yeah.
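The group flow just described could be sketched as below — purely illustrative Python with a made-up `take_snapshot` callback. Note that a real consistency group would be cut atomically by the storage backend; looping over volumes, as here, is exactly what the feature is meant to avoid:

```python
def snapshot_group(volume_ids, take_snapshot):
    """Take snapshots of several volumes as one logical group.

    Illustrative only: a real consistency group is cut atomically by the
    backend, not by looping per volume as this sketch does.
    """
    group_id = "group-" + "-".join(volume_ids)
    # Map each volume to the snapshot the backend reports for it.
    return {vid: take_snapshot(vid, group_id) for vid in volume_ids}
```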
D
So my feedback on this is that this is a very long list of all the features that we need to get in, which is good — it means we have a long-term plan — but Q4 is a short quarter. What we should do is prioritize this and figure out what the most important pieces are that we're going to focus on for Q4, instead of trying to do everything and not finishing anything.
A
Okay — this one is more like it relates to cloning and sharing. After this, we can quickly go through the list; we already have a few marked P1, and I just want to confirm those are things we definitely want to work on in Q4. So the last one is related to clone/sharing. For example, a snapshot also has a source field.
A
Currently the source is a PVC, right? What if the source were another snapshot — what would that mean? Maybe we can use it for, let's say, cloning a snapshot: you still have the one snapshot, but you create a separate snapshot API object to represent it, in a different namespace. That could be useful for sharing, and it could also be useful for cloning snapshots to a different zone or a different region, I think.
F
A couple of things on that. One is, I think there's way too much here — I think that would be a good one to drop for right now, just in the interest of what you already have, and there are a couple of reasons for that. One is, hopefully we're going to work on transferring objects across namespaces as a general thing this release as well.
F
So that would be something you'd be able to do, but I'm still pretty strongly of the opinion that doing things like creating a clone of a snapshot and then transferring that clone of the snapshot object to another namespace is a bad idea. I think you're way better off just creating the volume from the snapshot and transferring the volume, so you don't have to manage any snapshot dependencies or anything like that. There are just too many weird dependency cases you can get into.
E
I agree that for the purposes of sharing, cloning is the wrong primitive to base your sharing on. That said, there are reasons to clone snapshots — in particular, like you mentioned, if you want them to be in different places: I want a copy here and I want a copy over there — but not for namespace sharing, yeah.
A
So those are kind of two different things. For sharing, definitely — apparently there's a group also working on some generic sharing proposal, so we want to participate in how it's designed, and then we can see how that goes. Yes, for this quarter we don't need to really work much on this feature.
A
So, going back to the list: the deletion policy — the reclaim policy — and deletion protection, I think those are something we probably should finish in Q4, so we can have those. And for resource quota, again, I need to check with the person who is working on quota support for CRDs, and I'll let you know when we can probably have this, but I think it is also a quite important feature.