Description
Meeting of the Kubernetes Storage Special Interest Group (SIG) Volume Snapshot Workgroup, 10 September 2018
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Jing Xu (Google)
A: First, some updates about the snapshot feature. All the basic snapshot code — including the snapshot API and the snapshot controller, which live in the external-snapshotter, and also restore-volume-from-snapshot, whose source code is in the external-provisioner — has merged as of today. If someone wants to use the feature, they can do it now. Currently we are working on documentation and real examples, and we also plan to post a blog about this feature.
A: Now we have the GCE PD CSI driver — there isn't much there yet, but I think there will be soon — and we have an example of how to deploy it. I think CSI also has an example host-path driver in its repos, and anyone who is interested in using the feature can follow those examples. They cover the main workflows: how to create a PVC, how to create a VolumeSnapshotClass, how to create a VolumeSnapshot, and how to restore a volume from a snapshot. We'll also have some documentation about all of that.
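The create-and-restore flow described above can be sketched in YAML. This is a hedged sketch based on the v1alpha1 snapshot CRDs shipped with the external-snapshotter around this time; all object names, the driver name, and the sizes are made up for illustration, and the exact fields should be checked against the examples in the kubernetes-csi repos.

```yaml
# VolumeSnapshotClass: tells the external-snapshotter which CSI driver
# handles snapshots of this class (driver name is a placeholder).
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
snapshotter: pd.csi.storage.gke.io
---
# VolumeSnapshot: take a snapshot of an existing PVC in the same namespace.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: example-snapshot
spec:
  snapshotClassName: example-snapclass
  source:
    kind: PersistentVolumeClaim
    name: example-pvc
---
# Restore: a new PVC whose dataSource points at the snapshot; the
# external-provisioner creates the volume pre-populated with the snapshot data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-restored-pvc
spec:
  storageClassName: example-sc
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: example-snapshot
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```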
B: Each repo on the Kubernetes side right now has its own documentation dedicated to developers of that repo. The thing is, we have so many repos now, and we get a lot of new developers; we want them to be able to come in and learn. So developer repo documentation is very important and I really value it — the "how do I start helping out" kind of material.
A: To cover that: I didn't touch on this here last time, but I think we discussed the readiness design proposal. I haven't had a chance to update the doc yet, but for snapshots we decided we don't need this readiness concept. After we have a more detailed proposal, it relates to a two-step operation: the volume is created first, and then there is a second step to populate data into the volume. We need the readiness feature for that.
A: So this is not implemented in the snapshot feature that merged, because for snapshots the volume is created and the data is populated at once — there are no separate steps. We basically tell users: you must have a newer version of the provisioner to be able to use the snapshot feature, and as long as your provisioner has restore-snapshot-to-volume functionality, there is no need to check readiness.
B: Just a quick question: I'm trying to understand why this is a Kubernetes project — maybe I'm coming in too late. To me it sounds like a cool feature, and I definitely think it's a great feature, but it seems like something built on top of Kubernetes. I don't know if Kubernetes itself should provide this. Am I wrong? If I'm wrong, that's fine.
E: I actually talked with Jordan Liggitt a little bit about this last week. We do need something, and it may be some sort of readiness concept — especially with snapshots, and eventually with transfers of volumes between namespaces: some way of preventing a volume from being used before it's ready, such as during pre-population. You create the PV/PVC and then you're going to populate it, either from a snapshot, or from a git repo, or from a clone.
E: He's on the architectural review board and has spent a lot of time on this. Cross-namespace movement is pretty tricky, and not really desirable, so he said we definitely need something that Kubernetes can understand. To me: yes — I don't know if we're quite there on how we should do it, but I absolutely think we need to advocate for it.
D: In this case, what we realized is that most of snapshots could live externally, but we needed one hook into the lifecycle of the volume to be able to populate it. That population step is not unique to snapshots; it applies generally to other use cases, including cloning and all the use cases that Aaron mentioned. So it's better for the Kubernetes core to have a more generic hook that we can reuse for future use cases.
E: Jordan gave me a couple of examples of cross-namespace API things that exist in Kubernetes today. One was granting permissions to service accounts across different namespaces, and the other was network policies with label selectors, which can select pods across namespaces. Those aren't exactly what we're trying to do, but they do set a precedent for moving things across namespaces in ways that might be useful even outside of storage.
E: But I think storage is still unique, because we're binding something in a namespace to something that's global. In reality we're more moving claims, or really just trying to change ownership. I guess those two examples — the permissions for service accounts and the network policies — are similar in that we're just changing permissions on things.
E: That's the whole crux of the problem. If we could populate it while it's unbound, we wouldn't have this problem; but since we have to bind it, we also have to say: okay, I'm going to bind this while it gets populated, but I only want this specific namespace, and so on, to be able to use it once that process is complete. So we need a little more.
A: Thank you for the great discussion. This readiness checking gives us an additional condition to see whether the volume can be used or not, beyond just the current bound phase. Apparently the bound phase is not enough to support these new features, and the attach/detach controller also needs some kind of information to know when to attach and when to detach for a data populator — it's different from the normal attach/detach workflow we have right now.
A: Hopefully by the next meeting we'll have more design work, including data populator design, related to this. Also, for snapshot itself: we said the first version doesn't consider finalizers, but I think once this code is in, we need to start working on something related to finalizers. I'm also working on an admission controller. The code is almost ready but needs more testing, and the current thinking is that the admission controller will be packed in the same package as the snapshotter.
A: I'll give some examples of what the admission controller will do. Basically, it does some checking: whether your YAML file is correct, and whether the PV/PVC is valid when you try to take a snapshot of that volume. It can also be used to assign a default snapshot class to snapshots. Currently all this logic — validation and defaulting the snapshot class on the snapshot object — is embedded in the controller code.
A: That means today you create the API object first, and then the controller checks it. With the admission controller, the early checking happens before the API object is created: if anything is wrong, creation of the API object fails. So the user doesn't need to worry about deleting a bad object.
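Mechanically, such early checking would use the standard Kubernetes admission webhook registration available at the time. The sketch below is illustrative only — the webhook name, service, and path are assumptions, not a design decided in this meeting:

```yaml
# Register a validating webhook so VolumeSnapshot objects are checked
# before the API server persists them.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: snapshot-validation          # hypothetical name
webhooks:
  - name: validation.snapshot.storage.k8s.io
    rules:
      - apiGroups: ["snapshot.storage.k8s.io"]
        apiVersions: ["v1alpha1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["volumesnapshots"]
    clientConfig:
      service:
        namespace: kube-system
        name: snapshot-webhook       # hypothetical service backing the webhook
        path: /volumesnapshot
    failurePolicy: Fail              # reject object creation if validation fails
```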
A: We didn't add a permission part right now. It's basically checking the fields, and whether the PVC exists and is bound before taking the snapshot — making sure the volume is ready to be snapshotted. As for permissions, right now it's very simple: if you have access to the namespace, you can take a snapshot of any volume in that namespace.
A: About the question on permission to take snapshots — we didn't really think much about whether we should have additional permission checking for taking snapshots. Basically, if you have access to the namespace right now, you should be able to create a snapshot of the volumes there, just like having permission to create a volume.
G: For volume resize we added a new field in the storage class to allow volume expansion. We could follow a similar model here, but I don't see this as a security-sensitive issue or anything like that. Whatever storage a snapshot uses would count against your quota, so it's up to the user whether they want to take snapshots or not, and admins can monitor what's created if it's a performance-sensitive operation on the storage backend.
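For reference, the resize precedent mentioned here is the `allowVolumeExpansion` field on StorageClass; a snapshot opt-in could follow the same shape. A minimal example (the class name and parameters are illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-expandable     # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
allowVolumeExpansion: true     # the opt-in field added for volume resize
```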
A: Okay, so quota is also something we probably need as a feature for snapshots, because when you take a snapshot you definitely use some storage. If your system periodically takes snapshots, you don't want all of the backend storage to go to just storing a lot of snapshots; we should restrict how much space can be used for taking snapshots.
A: Also, about finalizers: right now we say a snapshot's lifecycle is completely independent of its volume. For some storage providers that is true — for GCE PD, for example, the snapshot lifecycle is completely independent of the volume — but for some others it's not true, so you cannot delete the volume out from under its snapshots.
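Finalizers were not part of the first snapshot release, but the generic Kubernetes mechanism being considered works like this: a controller adds a finalizer to the source object, so a delete request only sets `deletionTimestamp`, and the object is not actually removed until the controller strips the finalizer once it is safe. A sketch — the finalizer name here is hypothetical, not a decided API:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
  finalizers:
    # Hypothetical finalizer a snapshot controller could add while
    # snapshots of this PVC exist; deletion of the PVC is deferred
    # until the controller removes this entry.
    - snapshot.storage.kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```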
A: The source of the snapshot will be the volume, so there is that dependency. A snapshot is not going to depend on another snapshot unless the storage backend is doing something like incremental snapshots — I don't know if that's the case; maybe then it's an issue — but we haven't really taken that into account in the current design. Right now we are talking about just the dependency on the volume.
B: I think maybe the flow should be that the request comes in to try to delete it, and the storage system provides enough correct information for the status to be filled in. In my opinion it's just an extension: the storage system is the one determining yes or no, this can be done.
B: What I'm trying to say — maybe I'm incorrect, but going with the example that the user wants to delete a snapshotted volume — is that Kubernetes should not prevent that call from going all the way to the storage system. Let the storage system decide if that's possible, and let Kubernetes report whether it succeeded or not.
B: Let it go all the way to the source. I'll give you an example: host path — just a trivial example — doesn't care; it just copies everything over, so you can delete snapshots in between. But you could be on another storage system where you cannot, because it depends on the diffs. I'm just saying maybe we shouldn't put too much intelligence in between — let the storage system decide. Now, how do we report back errors and things like that?
H: I would definitely agree with that — the more we can leave up to the storage system and out of Kubernetes, the better. In answer to that question, the thing that's nice about that model is that it lets me, as the CSI plugin developer or backend maintainer, make the decision on what to do. I may decide I want to do something.
H: Like: okay, I can't actually delete the volume, but I'm going to report back that it's deleted so that it's no longer available to Kubernetes, while I keep an internal pointer that knows to auto-clean it once all those snapshots are cleaned up. It gives me the flexibility to do those sorts of things, as opposed to having hard rules set in Kubernetes that limit me.
A: If that's the direction, then we need to think more about how to report errors and what state we end up in after Kubernetes tries to delete something and the delete fails. We don't want to end up in a situation where the object is stuck somewhere and we cannot take any further actions.
D: What we can do is: if you try to delete and we're unable to — meaning we don't get a successful response — we leave the object as is and generate events to indicate what happened. Then the user can look at the events and say, oh okay, it's not being deleted because of such-and-such error from the backend.
H: Then it's a pending delete. If we want to get hung up on this specific case, you could have something like a formal pending-delete state that, for all intents and purposes, means: okay, you probably shouldn't be trying to reuse this or pick it up anywhere else; it's not actually gone yet because something — we don't care what — is keeping us from actually doing the delete on it.
H: Personally, from my perspective, I'm absolutely fine with just returning an error, or letting the driver do something else, but I can see that that does have some issues. So the pending-delete state seems to be an interesting general-purpose additional state — I think that's where we were going.
E: Does it ever get to a state where it can delete, and is it so far out from that state that there's a possibility you would no longer want it to execute at that point? That's how I think about it. I get the whole concept, but do I want it to fail fast and say it's not going to happen — because it's chained, or we don't allow that on the storage system, or something else — or do I want it to retry forever until it actually executes successfully?
A: Kubernetes keeps trying to delete continuously, like with any other object. For snapshots we already have one exception: when a create fails and an error is returned, we are not going to retry the create — that's new compared to the normal behavior. So this probably opens up the question of whether to propose something like fast-fail for delete as well: once a delete fails, we should not keep retrying it.
B: Also — maybe this is a different direction — but in CSI we could add capabilities, and ask the storage system whether it supports deletion of snapshotted volumes or something like that. We could go all the way down to CSI, put it there, and then Kubernetes could interrogate it. That's another way of doing it.
B: Exactly. If we have the driver report what it can and cannot do, we could feed that knowledge into the controller and pass it around. So maybe if the CSI spec had some new capability flags, we could use them. I'm just trying to propose a new avenue; maybe that helps, instead of just failing with an error and not knowing why.
G: I think that's up to the storage driver implementation to take care of: either it clones the snapshot so that this coupling between volume and snapshots is removed, or it just prevents the deletion, which means returning an error when that happens. So it's not really a capability issue — I don't see it that way; it's just an error.
H: The only reason it becomes a capabilities issue — and I think this is another alternate compromise — is that the whole thing goes back to the question: do you try to actually control these things and make these decisions in Kubernetes, or do you do it in the plugin? And if you do it in the plugin, what's the correct response? Some people have the perspective that failing is not a good response; some people think that's okay. A capability at least makes it clear.
C: It may be tricky to report those capabilities, though. For "can you delete a volume that still has snapshots," Kubernetes would need to first check whether there are any snapshots on that volume and then ask — because otherwise, sometimes you're supposed to get an error and sometimes you're not. The driver can't give one answer that works for everybody. So who is going to do that check? Kubernetes would have to do the check for the driver.
A: No, I don't have any other topics. Does anyone have other things related to what we've discussed, or other topics besides snapshots?