From YouTube: Kubernetes SIG Storage 20200521
Description
Kubernetes Storage Special-Interest-Group (SIG) Meeting - 21 May 2020
Meeting Notes/Agenda: https://docs.google.com/document/d/1-8KEG8AjAgKznS9NFm3qWqkGyCHmvU6HVl0sk5hwoAE/edit#heading=h.6b1krtdaknge
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Saad Ali (Google)
A: The agenda doc is shared in the meeting invite. If there's anything you want to talk about, whether it's PRs that need attention, design reviews, or anything else, feel free to add it to the agenda and we'll get to it towards the end of the meeting. First up, we're going to go over the planning spreadsheet, where we're keeping track of the work the SIG is doing for the 1.19 release.
B: Yes, so the recovery-from-expansion-failure KEP got merged; that's, I think, the item below. And for CSI online/offline expansion, we have a consensus in the CSI community. Yesterday we had a call; I'm updating the CSI spec, and the plan is basically to deprecate plugins' ability to declare online and offline expansion support. Plugins can support online or offline expansion without declaring it in the GetVolumeCapabilities call, and they can just return an error when an operation isn't supported. So I'm making the spec change.
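The fallback behavior described above can be sketched as follows. This is a minimal illustration, not the CSI spec's actual API: the type names, the error value, and the detach step are all hypothetical stand-ins for what a sidecar might do when a driver that never declared online expansion rejects an expand call on an in-use volume.

```go
package main

import (
	"errors"
	"fmt"
)

// errOnlineUnsupported stands in for the error a driver might return when it
// cannot expand a volume that is still mounted. The real CSI spec expresses
// this via gRPC status codes; this name is illustrative only.
var errOnlineUnsupported = errors.New("volume in use: online expansion unsupported")

// driver is a stand-in for a CSI plugin that only supports offline expansion.
type driver struct{}

func (d driver) ExpandVolume(volumeID string, inUse bool) error {
	if inUse {
		return errOnlineUnsupported
	}
	return nil
}

// expand tries online expansion first and falls back to retrying offline
// (after a simulated detach), mirroring the behavior discussed for plugins
// that do not declare ONLINE/OFFLINE expansion capabilities.
func expand(d driver, volumeID string, inUse bool) (string, error) {
	err := d.ExpandVolume(volumeID, inUse)
	if err == nil {
		return "expanded online", nil
	}
	if errors.Is(err, errOnlineUnsupported) {
		// Detach (simulated), then retry the expansion offline.
		if err := d.ExpandVolume(volumeID, false); err != nil {
			return "", err
		}
		return "expanded offline after detach", nil
	}
	return "", err
}

func main() {
	out, _ := expand(driver{}, "vol-1", true)
	fmt.Println(out)
}
```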
F: So we continue to do backports; the backport fix for the breaking issue, that one just got merged. There's also a PR for the metrics that's been out for a while; I'd like to get that one in as well. There are a couple of others; we're trying to cut a release with some of the changes that we are working on.
A: Okay, so I think that's tracked for both of those items, and I guess the third item was...
F: Right, so about the other part of it, the spreading KEP: we're going to be looking at that together with row number 15, spreading over failure domains. For that KEP, we reviewed it two weeks back, so I got some comments and need to update it, and I also need to schedule a review meeting for the spreading part.
A: Okay, so: address comments and schedule a follow-up meeting. Thank you, Shane. Next up are the CSI drivers that are common and owned by the SIG. At the moment we have NFS, iSCSI, Fibre Channel, and Flex, and the goal is to have somebody in the SIG take ownership of these drivers, or, if we can't find an owner, to deprecate the driver. So the goal with Fibre Channel and Flex is to deprecate; with NFS and iSCSI we were able to find owners.
A: Next up is the external-storage repository, which was where all the experimental projects for storage were initially hosted. A lot of them ended up being used by a lot of people, and the repository itself was a mess; it doesn't have its own releases or anything like that. One of the projects hosted in there was the core external provisioner, which has since been moved out, and we want to deprecate that repo, along with the kubernetes-incubator organization altogether.
A: So before we do that, we want to make sure anything in that repo that people find useful gets moved out somewhere else before it is deprecated. So far we've identified three provisioners that needed to be moved out: the GlusterFS one, NFS, and the NFS client provisioner. Those are in progress in terms of being moved out; once those are complete, we're going to proceed with the deprecation of external-storage.
A: Thank you very much. So mark that one as started, and then the status update for the overall deprecation is that we want to wait for the previous items to complete first before we proceed. And again, a warning to anybody who depends on anything in this repo: please, please raise a flag if there's something in there that hasn't been migrated that you depend on. So far the things that we identified are these three provisioners, plus the core provisioner library.
F: Yes, the KEP; so I updated the KEP and added those new criteria, and that got merged. The work is in progress: Nick has been reviewing the PR, it's been updated, and we're waiting for him to add unit tests. I also noticed somebody else is helping with implementing the mock driver, and then we're also waiting for the CSI spec 1.3.
E: We'll get to it in 1.20. The thing I will say is I would like input from Andrew; is he here on the call? I don't think so.
M: Saad, yeah, I think we've cemented the many-to-one binding from the user side to the cluster-scoped object, and I've got a slide deck for the meeting after this one to propose some different ideas about credential minting and how we can incorporate that into the design.
A: And if anyone's interested in looking at the Kubernetes object storage API, those design meetings are happening now. Like John said, that meeting is going to immediately follow this meeting on the same Zoom, so feel free to attend if you're interested. There are two parts to this: one is the Kubernetes API, and the second is an API similar to CSI, currently called COSI, the Container Object Storage Interface, or "cosy". So feel free to attend that and provide your input.
A: Let's see, this is different from the new generic inline ephemeral volume that's being proposed up here; this is an existing API using CSI. The big difference between this API and the newly proposed one is that this API is inline: it doesn't create any PVC objects, it just calls out directly to CSI to provision, for CSI drivers that support it. The new inline ephemeral volume design above proposes reusing the PVC framework that already exists, and instead tying the lifecycle of those PVCs to the pod.
A
So
when
the
pod
is
created,
the
PVC
is
created
and
when
the
pot
is
deleted,
the
PVC
is
automatically
deleted
so
effectively
you
end
up
with
ephemeral
behavior,
but
you
get
to
reuse
all
the
infrastructure
that
exists
for
PVCs
in
storage
class
like
topology
and
so
on.
So
B
ends
up
being
much
more
powerful,
but
it's
a
little
bit
more
heavyweight
in
terms
of
provisioning.
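The pod-tied PVC lifecycle described here is conventionally done with owner references, so Kubernetes garbage collection deletes the PVC with the pod. A minimal sketch of that ownership idea, using stand-in structs rather than the real k8s.io/apimachinery types (the naming scheme for the generated PVC is also illustrative):

```go
package main

import "fmt"

// OwnerReference and PVC are minimal stand-ins for the Kubernetes object
// metadata involved; the real types live in k8s.io/apimachinery.
type OwnerReference struct {
	Kind       string
	Name       string
	Controller bool
}

type PVC struct {
	Name            string
	OwnerReferences []OwnerReference
}

// pvcForPod builds the PVC a generic ephemeral volume would create: its
// lifetime is tied to the pod by making the pod the controlling owner, so
// garbage collection removes the PVC when the pod goes away.
func pvcForPod(podName, volumeName string) PVC {
	return PVC{
		Name: podName + "-" + volumeName,
		OwnerReferences: []OwnerReference{{
			Kind:       "Pod",
			Name:       podName,
			Controller: true,
		}},
	}
}

func main() {
	pvc := pvcForPod("web-0", "scratch")
	fmt.Println(pvc.Name, "owned by", pvc.OwnerReferences[0].Kind, pvc.OwnerReferences[0].Name)
}
```

Because the PVC is a real API object, everything built for PVCs (StorageClasses, topology, capacity tracking) keeps working, which is the "more powerful but more heavyweight" trade-off mentioned above.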
A: Therefore, this other existing ephemeral inline volume for CSI continues to coexist. Ideally, we want to merge the APIs a little bit as we move forward so that it becomes easy to discover and use. That's all I'll say about that. Patrick, or anybody, want to give an update on this one?
G: Yeah, I think I can see the benefit of both. The only thing I would like to see is to make sure that the APIs converge. I like that there's a new ephemeral volume source with the other KEP, so if we could tuck this inside of that and it becomes an either/or, I think that would be a good place to end up, so it's less confusing for users.
A: And arguably the same issue exists with CSI drivers as well, because you could have drivers that support dynamic provisioning and ones that don't, but usually the end user doesn't have to worry about that; it's the cluster administrator who does. So I guess that's how it's slightly different. Anyway, we don't have to discuss that here.
A: Next up are the CSI migration items. For those of you who don't know, CSI is the future of the Kubernetes volume API, but before we did CSI there were a number of volume plugins that were in-tree, meaning their code was built into the core of Kubernetes. As you're probably well aware, there is a big push within the Kubernetes community to move any cloud-provider-specific code out of the core of Kubernetes, and this includes the volume code.
A: For the volume code, our plan is to provide a common mechanism that allows this code to be handled by a CSI driver while the API is maintained. The thing with Kubernetes is that we have a very strict deprecation policy and backwards-compatibility policy for the API, and all of these old in-tree volume plugins actually expose bits of API, which makes them very difficult to deprecate. So, ideally, what we want is that users can continue to use those APIs referencing in-tree plugins, but silently, internally, we redirect that and have it handled by a CSI driver.
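The redirect just described is a translation step: the in-tree volume source in the API object is rewritten into an equivalent CSI volume source before the CSI driver sees it. A hedged sketch of that idea, with simplified stand-in structs (the real logic lives in k8s.io/csi-translation-lib, and real translations carry more fields than shown here):

```go
package main

import "fmt"

// GCEPersistentDisk is a simplified stand-in for the in-tree GCE PD volume
// source; CSIVolumeSource stands in for the CSI source it translates to.
type GCEPersistentDisk struct {
	PDName string
	FSType string
}

type CSIVolumeSource struct {
	Driver       string
	VolumeHandle string
	FSType       string
}

// translate rewrites an in-tree GCE PD source into the equivalent CSI source,
// so existing API objects keep working unchanged while a CSI driver actually
// performs the attach/mount operations.
func translate(in GCEPersistentDisk) CSIVolumeSource {
	return CSIVolumeSource{
		Driver:       "pd.csi.storage.gke.io",
		VolumeHandle: in.PDName,
		FSType:       in.FSType,
	}
}

func main() {
	out := translate(GCEPersistentDisk{PDName: "disk-1", FSType: "ext4"})
	fmt.Println(out.Driver, out.VolumeHandle, out.FSType)
}
```

The key property is that the user-facing object is never mutated; translation happens internally on the path to the driver, which is what makes the strict backwards-compatibility policy satisfiable.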
F: No, so we had a meeting with the folks involved, including Tim, and we didn't reach consensus. SIG Node had a lot of concerns about this. I think the main concern is that they want to be able to tell whether a command run in the container really succeeded, really shipped the results, rather than just appearing to. There are a few alternative approaches being discussed. One is to add a probe in addition to the current design, so I put together a draft in yesterday's data protection meeting.
F: Basically, you first send a command to the container and wait for the results, and then you have to probe it again to make sure that it has really done what it was meant to do. So I think we still need to have more discussions on that. I also pinged Tim and asked him to take a look, but this is mainly trying to address SIG Node's concerns. So this is one thing; there were a couple of others, so we need more discussions.
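The "send a command, then probe for completion" pattern can be sketched as below. This is purely illustrative; the type and function names are hypothetical, and the real design would exec into a container and poll an actual status endpoint rather than a struct field.

```go
package main

import "fmt"

// hook simulates a command run inside a container (e.g. a pre-snapshot
// quiesce hook) that only reports done after a couple of status checks.
type hook struct {
	pollsUntilDone int
	polls          int
}

// Exec would kick off the command inside the container (a no-op here).
func (h *hook) Exec() {}

// Probe asks the container whether the command has really completed,
// which is the extra verification step SIG Node asked for.
func (h *hook) Probe() bool {
	h.polls++
	return h.polls >= h.pollsUntilDone
}

// runWithProbe sends the command, then probes until completion is confirmed
// or the poll budget is exhausted; returning false means we cannot claim the
// command actually shipped its results.
func runWithProbe(h *hook, maxPolls int) bool {
	h.Exec()
	for i := 0; i < maxPolls; i++ {
		if h.Probe() {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(runWithProbe(&hook{pollsUntilDone: 2}, 5))
}
```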
A: So this is an area of active discussion and development; keep an eye on that. Thank you, Shane. The last item is the kubernetes/utils mount library. We want to move it to a dedicated repo, preferably a staging repo. I saw the request come in for creating a new staging directory. Michelle, any updates on that, or Jan?
N: To give a little detail from our side regarding this, here's how we introduced this parameter. Initially, when we started, we were provisioning the volume on the shared datastore, but then we thought, OK, let us provide an option to select a datastore, so we added a StorageClass parameter called datastore name. And then we thought we could use the storage policy instead, so we added a storage policy parameter and deprecated the datastore one.
N: So existing volumes will be migrated; there will not be any issue. But new volumes that you are going to create with the in-tree YAML, for those we will be dropping these parameters, with an error saying that they are no longer supported. So basically the inline parameters, the raw policy parameter, and the datastore name parameter will be dropped. What the driver will support is the storage policy name, the datastore URL, and the csi.storage.k8s.io/fstype parameter.
N: Otherwise the default storage policy will be applied. And I think, while implementing new features for this, the plan is to deprecate all these parameters, so we need to collect data on how many customers are still using these older features. As we add the new features, we can deprecate those parameters.
N: We can do that, but the datastore name is not durable: you can change it. So after provisioning the volume, if someone changes the name of the datastore, it can break the entire setup. That is why we no longer want to support providing a name in the driver. And even for the datastore URL, we are not targeting to keep it for the future.
A: You could have an invisible set of parameters that you support for migration that are not officially published, and in order to use them you need to set some parameter that says "I am migration code" or something like that. Then you can continue to pass along these old parameters and they will be respected, to allow for continuity; for new CSI users, you don't publish any of this.
N
Not
evolving,
but
the
thing
is
we
need
to
change
the
driver
and
allow
these
numbers
in
the
storage
class.
So
since
the
code
is
open
source,
anybody
can
refer
the
code
and
try
to
use
those
parameters
within
your
workload
in
the
CSI.
But
the
intention
of
those
parameter
is
just
for
migration,
but
nobody
can
restrict
the
user
to
use
those
parameter
in
there,
CSI
storage
plus.
So
that
is
the
reason
we
don't
want
to
put
any
migration
related
parameters.
O: So I just want to add one detail about the intention of CSI migration. CSI came out before there was CSI migration, and a lot of drivers could offer new and improved features in CSI that were not available in-tree. The purpose of migration was not to bring the new and improved features to the legacy drivers.
N: We will be able to migrate the existing pre-provisioned volumes. Let's say you have a StorageClass and have provisioned a volume using the datastore URL; volumes provisioned in that datastore will be able to migrate to CSI, and the CSI side will handle the attach/detach/delete workflow. But new volumes that you are going to create with your older YAML, those will drop these parameters. That is my point.
O
Agree
and
that
matches
what
I
understand
from
reading
the
the
pr.
But
so
it
looks
like
if
I
provision
a
volume
before
migration,
then
it
will
have
file
path
and
square
brackets
and
the
data
store
name
and
then
the
ndk
path.
And
then,
when
we
turn
on
migration,
that
stringing
get
sent
to
the
CSI
driver
would
have
to
resolve
the
data
store
by
name
which
could
fail.
If
the
data
store
was
renamed
and.
N: So that is another issue. What we are saying is that if we allowed provisioning new volumes with the older YAMLs, we should not drop these parameters; but the newer APIs that we are using, the CNS APIs, don't support them. Take the raw policy: we will never be able to support that with the CNS APIs. And the datastore name is also not in line with our plans for the other features.
N
Gun
companies
preclusion
volumes
will
be
migrated.
New
volumes
cannot
be
created
with
honoring
all
old
entry
parameters,
because
those
parameters
are
no
longer
supported
with
the
newer
api's
and
they
are
not
in
line,
and
they
have
lot
of
issues
like
if,
like
I
said
it's
durable
and
it
can
break
the
entire
setup.
So
those
kind
of
parameters
we
are
no
longer
supporting,
so
we
will
be
able
to
migrate
existing
workflow,
but
the
new
workload
has
to
be
provisioned
using
a
CSI
driver.
A: I want to push back a little bit on that, and here's why. I think it touches on what Jonathan was saying about the intention of migration. The ideal scenario for migration is that it happens silently: the user doesn't know and doesn't have to change anything. They should continue to use the existing API objects untouched, and internally we should automatically handle translating that to CSI in a way that they effectively get the same behavior as before. So when we flip the switch for migration, and eventually when we delete the in-tree code, users should not notice. I think what is being proposed with option number one here is only partially that: it's true for existing volumes, but if you try to provision a new volume using an existing StorageClass, that may not function correctly, and that is alarming to me. Are there any other options that were considered?
N: So we can go back and revisit these parameters. Probably it was not a PM requirement or any customer requirement, so we never removed them. Now we need to go back to the customers and figure out whether they are really using these parameters, because if they are no longer using them, then we can simply drop them. The datastore name is not durable; anybody can change it and it can break the system. So why would a customer use it, generally speaking?
B: From the cases that we have seen, customers do actually use the datastore name in the StorageClass. It was supported, and it's one way for customers to split their workloads: as long as the datastores are shared with all the ESXi hosts, it's a way to use different datastores for different workloads. So customers actually do use the datastore name there in the StorageClass, because generally the cloud config has just one datastore.
N: Then that is good information for us; we thought nobody was using it, so that information will be useful. And is anybody using the VSAN raw policy? I guess nobody must be using it, because richer features are available with the storage policy name parameter, so it doesn't make sense to use the raw policy.
N: Then let us try to migrate and support the datastore parameter. The last parameter left is the disk format: basically thin, thick, and eagerZeroedThick. But that is also now driven by the storage policy: SPBM is now enhanced to take the object space reservation, and with that we can control how you want to provision a volume, thick, thin, or eagerZeroedThick. So is any customer using that parameter to create a thick disk or an eagerZeroedThick disk format?
N: There is no way we can translate it to the SPBM policy name; the new CNS APIs only accept the policy name as a parameter. For the datastore we can still do the conversion, convert it to the URL and supply that, but the disk format will not be possible.
A: Feel free to bring it back to the SIG if you want further discussion. I think we should do better than option one; at the very least it sounds like there are users using these existing parameters, so if we could get a proper migration story, that would be good. So once you have a conclusion, please come back to the SIG and let us know where you ended up.