Description
Kubernetes Storage Special-Interest-Group (SIG) Per Driver Capabilities Discussion - 07 June 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Ben Swartzlander (NetApp)
A: All right, so hello everyone, and welcome to this one-off meeting on per-volume CSI capabilities, as part of the Kubernetes SIG Storage group. This meeting may turn into a series if we have enough to discuss, but I just wanted to have this kickoff meeting to collect the use cases and brainstorm solutions. I have been talking to Hemant offline about some of these problems, and it's come up in the Kubernetes SIG Storage CSI meetings.
A: And if we had an NFS-only driver, we would set the fsGroupPolicy to not use the feature, but the driver that we have actually supports both protocols, and there's no way to signal to kubelet on a per-volume basis what it should do. We've thought about a number of ways to work around this, including CSI spec changes, and we never really arrived at a satisfactory conclusion, partly because this is already in Kubernetes as a sort of non-CSI extension, right?
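For reference, fsGroupPolicy lives on the cluster-scoped CSIDriver object and one value applies to every volume the driver serves; a minimal Go sketch using the Kubernetes API types (the driver name below is made up):

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One policy for the whole driver: File, None, or ReadWriteOnceWithFSType.
	// There is no per-volume override, which is the gap being described above.
	policy := storagev1.FileFSGroupPolicy
	drv := storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "example.csi.vendor.com"}, // hypothetical name
		Spec:       storagev1.CSIDriverSpec{FSGroupPolicy: &policy},
	}
	fmt.Println(drv.Name, *drv.Spec.FSGroupPolicy)
}
```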
A: The CSIDriver object in Kubernetes is actually not part of the CSI spec. It's totally Kubernetes-specific and basically enables drivers to tell Kubernetes to do specific things that are outside the spec, and it's been working reasonably well. But this is one problem where it sort of fails. So I would be open to maybe changing the way it works so that, on a per-volume basis, a CSI driver could tell Kubernetes something that makes it look at different CSIDriver objects, potentially. There could be a default CSIDriver object, but then there could be a per-volume one that had different options. Or we could enhance the CSI spec to specifically communicate the override values on a per-volume basis and solve this inside the CSI spec. But right now there's really no information at all that comes back on a per-volume basis, so we would have to add something to the CSI spec, sort of a hook, that kubelet can then see to know it needs to do something specific. That's the use case that I care mostly about. I see that Humble wrote a response that I haven't read yet, so I'll read it real quick.
A: Okay, at the end he says: if the requirement is really to have it per volume, I expect the current volume attributes in the CSI spec to accommodate that. So yeah, one way to do it is to just extend the CSI spec to specifically address this use case.
A: I was thinking of something more flexible, where maybe you could return some sort of volume type key that would key into something in the CSIDriver object, which Kubernetes could then look up, and you could continue to do the out-of-spec style thing that Kubernetes currently does, where it's basically proprietary.
A: But I could go either way as far as solving my problem; I would like to come up with a more general scheme that addresses everyone's problems. So Sandeep, can you describe your use case?
B: Yeah, sure, I can describe the use case that we see here. I'm from VMware, and we have a driver, the vSphere CSI driver, and that single driver supports both ReadWriteOnce and ReadWriteMany volumes. We are seeing some challenges with respect to the capabilities that we need to export. For example, one use case we have is the max volumes per node value that we need to set in the NodeGetInfo response.
B: That is used by Kubernetes to limit how many volumes of this CSI driver can be attached to that node VM. Because we are implementing both ReadWriteOnce and ReadWriteMany, it's a challenge for us: on our side the limit is that you can attach a maximum of 59 block volumes, that is ReadWriteOnce volumes, to a node VM, whereas the ReadWriteMany volumes are NFS mounts, and there is not really much of a limit there, so you can even go beyond 59 or whatever it is. So now we have a problem where we have to either set it to zero or something; I think if you don't set it, it assumes that there is no limit. If that happens, then there is a problem: if a lot of stateful workloads land on a particular node VM and it goes beyond 59, things start failing. So that's a problem, and if we limit it to 59, then we cannot scale the number of the other stateful workloads, the ReadWriteMany ones.
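A minimal sketch of the field in question, using the CSI spec's Go bindings: max_volumes_per_node is a single scalar in the NodeGetInfo response, so a driver serving both block and NFS volumes has to pick one number (the 59 below is just the block limit mentioned above):

```go
package driver

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeGetInfoResponse shows the shape of the response; the node ID is supplied
// by the caller. Zero means "no limit reported", and there is no way today to
// say "59 for ReadWriteOnce block volumes, unlimited for ReadWriteMany NFS mounts".
func nodeGetInfoResponse(nodeID string) *csi.NodeGetInfoResponse {
	return &csi.NodeGetInfoResponse{
		NodeId:            nodeID,
		MaxVolumesPerNode: 59,
	}
}
```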
B: So I think the requirement that I'm seeing here is a capability in the CSI spec itself that says, per access mode (I'm not even looking for per volume type), what is the maximum number of volumes it can support per node, and then making that change wherever necessary in the Kubernetes stack.
A: Just on that, I totally understand the problem. I don't think that in everyone's case it's going to be tied to the access mode, because one could imagine a scenario where, even within the same access mode, you had different kinds of volumes, some of which had a higher per-node limit and others that had a lower per-node limit. I can't remember whether, when we designed the node limit feature, we considered the possibility that some drivers would have very complicated notions of limits, but this is an obvious real-world example where...
D: A similar problem will exist with the capacity responses too, right? Because the capacity from which ReadWriteOnce block devices are being provisioned and the capacity from which ReadWriteMany volumes are getting provisioned could be different.
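For what it's worth, the GetCapacity request does carry the requested volume capabilities, so a dual-protocol driver could in principle report a different pool per access mode; a rough sketch (the block/NFS pool split is hypothetical, not something the spec mandates):

```go
package driver

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// availableCapacity picks which pool to report based on the requested access mode.
func availableCapacity(req *csi.GetCapacityRequest, blockPoolBytes, nfsPoolBytes int64) *csi.GetCapacityResponse {
	capacity := blockPoolBytes
	for _, vc := range req.GetVolumeCapabilities() {
		if vc.GetAccessMode().GetMode() == csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER {
			capacity = nfsPoolBytes // hypothetical mapping: RWX volumes come from the NFS pool
		}
	}
	return &csi.GetCapacityResponse{AvailableCapacity: capacity}
}
```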
A: So yeah, that kind of argues that it would have been better to have two different CSI drivers for the two different cases, and you could make the same argument about my problem with NFS and iSCSI.
E: Is that a problem we should consider tackling? I know we've talked a lot in the past about consolidating all the sidecars into a single process and things like that, or making a sidecar able to... well, I don't know if that would work, but a sidecar being able to handle two drivers in the same...
A: I think you need at least four sidecars, and more if you do the liveness probes and metrics and other... well, I don't know about metrics, but yeah, it can get complicated. So that's something to consider for the future, but we still have a situation where today we already have a bunch of CSI drivers that don't do that and that are stuck, right, because for existing volumes you can't change the driver name. So even if it was easy to split our driver into two...
E: Yeah, I think my only concern is that I feel like a lot, or almost all, of the features we have are under the assumption that a single driver only supports one type of volume. I think, to support a driver that supports two types of volumes, we would basically have to revisit every single feature that we've implemented.
D: But there are certain cases which are just blocked, because some drivers are in a state where they are not ready for consumption yet. For example, Azure File, which is a ReadWriteMany volume by default, has a block mode where you can specify an fsType and then it gives you a loopback device, which is basically sitting on a ReadWriteMany Samba share, and you can mount it. That's how the driver author has chosen to implement it: if you create a storage class for Azure File and you specify fsType in it, then it gives you a block volume; if you skip it, then it gives you a ReadWriteMany volume. And even though the driver name is the same, the way these two volumes behave is completely different.
B: Let me talk about what we went through when we designed this CSI driver. At that time we knew we had to support both ReadWriteOnce and ReadWriteMany, two different volume types, and we actually did some research: we looked at the CSI spec to see whether it says we need two separate drivers for separate access modes. The conclusion we reached at that time, and I remember I even reached out to the community about it, was that there wasn't a specific guideline saying you should have one single driver per volume type or per access mode. That was one of the reasons why we felt, okay, the CSI spec doesn't say anything; it's quite possible that we can have one driver with multiple volume types. So that's the reason why we went in this direction, and if we want to revisit that, and if it means we'll have to split this driver into multiple drivers depending on the volume type, that itself is going to be a huge effort on our side.
C: Actually, I see there is a field called volumeLifecycleModes that was added to the CSIDriver object to let a driver specify what volume modes are supported, and you can list multiple volume modes there. So I feel that field means the CSI driver is supposed to be able to support multiple volume modes.
E: That was more about persistent or ephemeral, not really access modes.
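For reference, a minimal sketch of that field (driver name made up): volumeLifecycleModes on CSIDriver lists Persistent and/or Ephemeral, i.e. lifecycle, not access modes like RWO vs RWX:

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	drv := storagev1.CSIDriver{
		ObjectMeta: metav1.ObjectMeta{Name: "example.csi.vendor.com"}, // hypothetical name
		Spec: storagev1.CSIDriverSpec{
			// Persistent vs Ephemeral volume lifecycles; nothing here describes
			// ReadWriteOnce vs ReadWriteMany.
			VolumeLifecycleModes: []storagev1.VolumeLifecycleMode{
				storagev1.VolumeLifecyclePersistent,
				storagev1.VolumeLifecycleEphemeral,
			},
		},
	}
	fmt.Println(drv.Spec.VolumeLifecycleModes)
}
```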
C: It's very overloaded, yeah. The question is whether the intent is for a CSI driver to support multiple different types of modes, or whether for each individual mode we should have a different driver. If we agree on which direction we should go, then yeah.
A: I do feel like, if we come down on the side of "you should be writing multiple CSI drivers" instead of trying to cram different features into the same CSI driver, it would be incumbent on us to make that easier than it currently is, especially in terms of the tax on the cluster. When you install multiple node plugins, that's potentially a lot of RAM per node to have those daemonsets running everywhere.
E: I kind of feel like, in terms of timelines and the scope of what needs to be done, supporting the latter will get done quicker than trying to fix all the various features.
A: Well, let's get through the rest of the use cases and see how many features there really are that are hurting people. I've heard that fsGroupPolicy is the one that's hurting us, and the per-node limits are hurting Sandeep. Sandeep, did you have a second case, or is your second bullet point more about the same?
A: Yeah, but I mean, Kubernetes already has this CSIDriver object that we use to signal Kubernetes-specific features that the CSI spec doesn't cover, and something as simple as being able to have multiple instances of that object per driver, and then being able to switch between them on a per-volume basis, wouldn't be the end of the world. Kubelet could easily just look up the appropriate instance of the CSIDriver object on a per-volume basis and read whatever value it was going to read.
A: So if we could fix it just for the things that depend on the CSIDriver object, that seems like a pretty light lift. It might be weird to have multiple instances of the CSIDriver object, or to have to change it, but I don't know. I want to hear about the SELinux use case, because I'm most familiar with what's going on there.
D: Yeah, so on SELinux: Jan is working on a proposal to implement SELinux as a mount option. Basically it amounts to the same thing: the driver declares in the CSIDriver object whether it can support SELinux as a mount option, because the mount option has to be applied on the first mount, at node staging time. So that capability, whether it's supported or not, has to be known to kubelet before it calls NodeStage, and currently it's not known, because we are recursively chcon-ing the files; we can only determine it from how the seclabel option appears after the mount happens.
D: Currently the CRI is doing this recursively; it always does it. If kubelet determines that there's SELinux support for the volume, by looking at how the volume was mounted during stage or publish, then the relabeling is done recursively, and it's the CRI that does it, not kubelet: the CRI does a recursive chcon. It first gets the SELinux context from the pod, and then it does a chcon of the entire volume; that is done by the container runtime. We want to move this responsibility from the container runtime into kubelet; kubelet itself wouldn't do the relabeling, but it would supply the SELinux mount option to the driver, so the driver can mount it correctly.
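A hypothetical sketch of what "SELinux as a mount option" means for a driver (this is not the proposal text itself): instead of the runtime recursively chcon-ing every file, the pod's label would be applied once on the staging mount. The helper name and label value below are made up:

```go
package driver

import (
	"fmt"

	"k8s.io/mount-utils"
)

// stageWithSELinuxContext mounts the staged volume with a context= option so
// every file carries the pod's SELinux label without a recursive chcon.
func stageWithSELinuxContext(device, stagingPath, fsType, seLinuxLabel string) error {
	mounter := mount.New("")
	// e.g. seLinuxLabel = "system_u:object_r:container_file_t:s0:c10,c20";
	// %q quotes it because the label itself contains colons and a comma.
	opts := []string{fmt.Sprintf("context=%q", seLinuxLabel)}
	return mounter.Mount(device, stagingPath, fsType, opts)
}
```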
D: Even if we had the capability, it would still be for the entire driver. Even if it were moved to, say, the CSI driver capabilities on the node, it would still be for the whole driver; it wouldn't be per volume. So it doesn't matter whether it is in the Kubernetes CSIDriver object or a capability on the driver's side, it's still for the whole driver, whereas... right.
H: And yeah.
A: Well, I'm imagining some CSI spec change where, when you create a volume, you get returned an additional piece of information, some key into some table, right? An additional key that says what the volume protocol or the volume type is, some field that we add to identify that. That would be the key that you would use to look up the correct CSIDriver object, or, if we made the CSIDriver object itself contain a map, it would be the key to look it up in that map.
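A speculative sketch of that idea (not an agreed design): the CreateVolume response already carries a volume_context map, so a driver could hand back a key naming what it actually created, and Kubernetes could use that key to pick between per-type variants of its CSIDriver settings. The "volumeType" key is hypothetical; the spec defines no such convention today:

```go
package driver

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// createVolumeResponse returns the created volume plus a made-up context key
// describing which protocol the driver chose (e.g. "nfs" or "iscsi").
func createVolumeResponse(volumeID string, sizeBytes int64, protocol string) *csi.CreateVolumeResponse {
	return &csi.CreateVolumeResponse{
		Volume: &csi.Volume{
			VolumeId:      volumeID,
			CapacityBytes: sizeBytes,
			VolumeContext: map[string]string{"volumeType": protocol},
		},
	}
}
```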
A: Yeah, because it would be frozen at creation time, which I think would address everyone's needs, right? Because at that point you know where the volume is, what kind of volume it is, and what it's going to be able to do, and you would just need to signal back some additional key that lets you know what kind of volume you actually got. Because the problem with the CSI spec as it's written is, you say CreateVolume and you can get back either kind of volume, and that's fine; the driver may prefer not to do that, but it's allowed. Any volume that meets the specifications we're sending to CreateVolume is valid, and the external-provisioner sidecar doesn't get to have any input other than the size, the access mode, the volume mode, and then all the parameters from the storage class, and it has to accept what it gets back. There are too many cases where, even with all that filtering, both NFS and iSCSI are still valid, and it's up to the driver to choose what to create.
A: So having some key that comes back from the create that says, okay, you actually got an iSCSI volume now, or you actually got an NFS volume now, and therefore, when you're doing these node operations, you need to use the key that's iSCSI-specific or NFS-specific, that would satisfy the need. I don't know if that's the direction people want to go in, but I'm able to convince myself that it's at least technically feasible.
A: None of this addresses Sandeep's case of node limits, right? Because that is a CSI feature that's built into the spec, and it's considered to be a global value for the whole driver, and we would have to redesign that CSI feature to address his case, I think.
D: He had another use case, I think, around ListVolumes with published nodes, where reporting the published nodes is not possible for ReadWriteMany volumes but is possible for ReadWriteOnce.
A: It works sometimes and not other times, yeah; that doesn't surprise me at all. Although there are ways to work around that by just making your driver store more state, which is what we do: you can store the state inside your driver so that you know which nodes a particular volume is published to, and thereby implement the feature with the current spec.
A: It would be weird just to say, well, you can get it for some volume types and not other volume types, because the consumer of that feature is the external-attacher sidecar and its reconcile function that runs when you start up the sidecar, right? For the reconciler to only be able to reconcile half your volumes would be strange. But you can easily just not implement that capability and still get all the features except for that reconcile function.
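A minimal sketch of the workaround being described, assuming the driver keeps its own attachment records: the LIST_VOLUMES_PUBLISHED_NODES capability asks the controller to report, per volume, which node IDs it is currently published to, which is what the external-attacher's startup reconciler consumes:

```go
package driver

import (
	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// listVolumesResponse builds a ListVolumes response from driver-internal state
// mapping volume ID -> node IDs the volume is currently published to.
func listVolumesResponse(publishedNodes map[string][]string) *csi.ListVolumesResponse {
	resp := &csi.ListVolumesResponse{}
	for volID, nodes := range publishedNodes {
		resp.Entries = append(resp.Entries, &csi.ListVolumesResponse_Entry{
			Volume: &csi.Volume{VolumeId: volID},
			Status: &csi.ListVolumesResponse_VolumeStatus{PublishedNodeIds: nodes},
		})
	}
	return resp
}
```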
D: I want to briefly discuss that option too, like how hard it is to actually split the driver. For example, we know that certain features of some drivers are broken because of this, because of using the same driver handle, the same name. What if we had to freeze that driver in time: okay, you can keep using this driver as you want, but certain features won't work. But if you want to use the new features, like maybe faster fsGroup application, SELinux support, and those things, then you have to change your handle to the new one. So is it feasible that we start encouraging, from the spec side, driver authors to basically split their drivers?
D: I'm not really proposing it; I'm thinking out loud rather than actually proposing at this point. But there are certain cases: for example, take Azure File again. One driver supports CIFS, NFS, and block, and that's just too many features, and some of them are broken to a certain degree. The vSphere CSI driver supports ReadWriteMany and so on, and some features, for example the node attach limit, don't work for ReadWriteMany volumes; currently it doesn't report anything, or it doesn't work as expected.
D: So what I was suggesting is, can we have a model where these drivers can keep working as they are working today, but if they want to be more robust and get SELinux support, node attach limits, and more of the newer features that are possibly still alpha or beta, they have to update to a new driver name. So I'm asking how hard it really would be to freeze some drivers in...
A: In time, yeah. So we could; our driver, I think, is called something like trident.csi.netapp.io, and we could totally, in the next release, have two different driver names, you know, an NFS Trident and an iSCSI Trident, and every volume you created from that point forward could use the new names. That is totally possible. The problem is all your old volumes.
A: What do you do? I mean, they keep working using existing mechanisms, but then you need a third driver, right? Now you have two new drivers plus the old one that still does all the old stuff, and you have to maintain it forever, and that's not acceptable to any development team, I don't think, to just say, yeah, you've got to build this new thing...
A: ...and you can't ever get rid of the old one. That's the problem I see. If we could provide people a bridge to get off of a unified driver onto two separate drivers, and a way to properly deprecate and get rid of the old legacy driver, I think some people might cross that bridge, and it might solve a lot of problems.
A: But today, unfortunately, the driver name in your PV is immutable, and you can have very old PVCs that are attached to pods that have been running for a year or more, right? You still need to be able to detach that volume someday, and it was attached by the old driver, so you basically have to keep that old driver around so it can detach that volume when that pod eventually dies. Given how long pods can live, that strongly incentivizes people to just keep the same CSI driver name and the same CSI driver implementation forever, so that you never have to deal with the migration.
D: Yeah, no, I see your point. I think we should keep both options on the table: per-volume CSIDriver objects, and maybe having something in CSI that returns some marker of which CSIDriver instance to choose. I'm not sure which way to take it. Yeah, Michelle, you...
C: But do we also have the concern of keeping too many drivers around every time we have something similar and we encounter this issue? I noticed that in a StorageClass, right, we have a map-like parameter that can be an opaque string-to-string map; would we consider having a similar parameter, so each driver can have its own parameters and override certain fields?
C: So the problem here right now is that we don't have a way to specify different values for certain fields. I'm just saying, if we can provide a way to override values, that allows you to specify what value to use for certain fields.
G: Are you talking about adding that parameter in the CSIDriver? Is that what you mean, the CSIDriver object? Yeah.
A: So, every value in the CSIDriver object is non-opaque, because it's all consumed by kubelet and the rest of Kubernetes, right? Anything you add to the CSIDriver object is to be consumed by Kubernetes, and therefore it can't be opaque, because we have to document what each field means and how it's going to be interpreted.
A: If Hemant's problem and my problem were the only ones we had, then I would say, yeah, I can go off and come up with a proposal to either change the CSIDriver object or have multiple of them, plus a corresponding CSI spec change that would give us a hook that kubelet could then use to figure out what to do. That would be a fairly small change and not too painful, I don't think, but it would only address my use case and Hemant's case.
A: I don't think it would help Sandeep, and you would still be left with some of this weird grossness where you send a CreateVolume request and you don't know what you're going to get back. Yeah.
D: Yeah, so the one challenge we have in designing this switching of the CSIDriver object based on the volume context is that it might be tricky, because we tried doing something similar initially. When we had this question of whether to apply fsGroup or not, we had a heuristic: if your volume is ReadWriteOnce and you have an fsType, then we recursively chown and chmod the whole volume; if you have ReadWriteMany, then we don't do it. But that assumption broke down in many cases. Maybe if it comes back from the CSI side, if the driver tells us what to do, then it will be more robust, but the heuristic was very fragile. That's why we went ahead and added this thing to the CSIDriver object. I just want to provide some historical context.
A: You know, you've got to get community agreement, you've got to go through alpha and GA, it takes time to do the releases and all that. So of course there's always this temptation to just do the easy thing: add a field to the CSIDriver object and then implement your feature purely in Kubernetes. And I don't want to take that away, right, because that's a nice hack to be able to leverage.
A: But in this case the hack got us into trouble, because it was a bad hack: the heuristic is not right often enough. So that suggests to me that for this one, at least, we should do the right thing. But yeah, for stuff like SELinux, should we do the proper CSI spec change, or continue trying to do some hack based on the CSIDriver object and hope that we can get it right on a per-driver basis, or do we have to have a way of saying, sort of on a per-volume basis, which CSIDriver object to use?
H: I've got to say, I have similar feelings to Michelle, where I think we might be playing whack-a-mole with this thing. There might be attractive short-term solutions that fix some subset of issues, but we would really need to look at this more holistically and, kind of like Michelle said, feature by feature: what do multiple-driver approaches need from each feature? That seems very daunting to me. So her proposal is, instead of going down the path of making multiple volume types within a single driver work, how about we make it easier to run multiple drivers? I think there were two problems that I heard. One was that deployment of multiple drivers is a pain, so if you could have a shared set of sidecars for multiple drivers, that could ease the pain. The second thing I heard was that migration is a nightmare, or that it's impossible.
E: I think there has been some... I know some people have been able to basically re-import PVs without downtime. I think it's been done, but yeah.
A: Yeah, we at NetApp wrote code when we moved from our dynamic provisioner to CSI to go through and basically rewrite all your PVs: they used to be NFS and iSCSI PVs, and then they became CSI PVs, and of course we have a script that just goes through and does that. The problem is it only works on unattached PVs, right? If the PV is attached, you can't muck with it, because it has to be detached by the same thing that attached it.
A: Smaller scope... I was going to say that sounds huge, but it does sound feasible, if that's the path people want to go down. There's also a third path, aside from the gross hacks we're talking about and Michelle's proposal to more strongly encourage separate CSI drivers, which is: we could go through the CSI spec and fix all the specific areas so that CSI drivers are able to tell the CO, Kubernetes in this case, what to do on a per-volume basis. It would be a bunch of new capabilities and a bunch of new fields, but it would be purely backwards compatible, right? The default behavior would be to do what we do today, but we could have new behavior where, if both Kubernetes and the CSI driver supported this future CSI spec and the capability was asserted, you'd just be able to say, okay, what should I do for this volume, and you could implement it correctly then.
D: Can we look into... this is a crazy idea, but there are two things we could do. One is to relax the requirement that the PV driver name is immutable, that you can never, ever change the driver handle, the driver name, in the PV; that will be hard, I know. Alternatively, rather than keeping a mapping, can we provide a way to override the name via an annotation in the PV or something similar?
A: Yeah, I don't think changing it... I mean, it's perfectly easy to just delete the PV and create a new one with the same name and basically mutate all the fields you want that way. The only problem is that that doesn't work for attached PVs, and so...
D: In many cases a CSI driver update also essentially requires you to drain the node and start back up. For example, when you upgrade a cluster in OpenShift, the nodes could be drained; potentially you still don't lose your entire ReplicaSet, or you don't lose your StatefulSet, but you could.
A: Right, today you can have a single storage class that is your default storage class and that can cover all your iSCSI use cases and your NFS use cases, just based on whatever the access mode is, or the volume mode, or other things that get passed in when you create a volume. If we said, okay, we're going to have two different drivers, one for NFS and one for iSCSI, it forces you to have two storage classes, and only one of those can be your default storage class, and so by default...
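A minimal sketch of that constraint (all names made up): splitting one driver into two provisioners forces two StorageClasses, and only one of them can carry the default-class annotation:

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	classes := []storagev1.StorageClass{
		{
			ObjectMeta: metav1.ObjectMeta{
				Name: "example-nfs",
				// Only one StorageClass in the cluster may be marked default.
				Annotations: map[string]string{"storageclass.kubernetes.io/is-default-class": "true"},
			},
			Provisioner: "nfs.example.csi.vendor.com", // hypothetical split-out driver name
		},
		{
			ObjectMeta:  metav1.ObjectMeta{Name: "example-iscsi"},
			Provisioner: "iscsi.example.csi.vendor.com", // hypothetical split-out driver name
		},
	}
	for _, sc := range classes {
		fmt.Println(sc.Name, sc.Provisioner)
	}
}
```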
A: But it's not only the access mode that matters, right? It's the volume mode, the fsType... well, the fsType is also defined in the storage class, but the volume capacity, conceivably: we can give you a different kind of volume based on how big the thing you asked for is, right? If you ask for a two-terabyte thing, we might say, you know what, that's going to be an NFS share, not an iSCSI LUN.
E: I mean, it is feasible today: if a driver really wants to provide really advanced storage class routing like that, it could be done as an admission webhook.
A: Yeah, I mean, we could redesign everything, I guess, to get around it. That would basically be overriding the whole default storage class functionality and saying, okay, if they don't specify the storage class, there's going to be some controller that will look at the request and fill in the storage class on the Kubernetes side before it ever gets to the provisioner sidecar.
D: Is that how it works in practice for you? At least in OpenShift, and I can only speak for us, we install different storage classes for different types. Even though a driver supports multiple types within the same driver, we install different storage classes, so you don't end up always having one type for everything. So I don't know how big of an issue the storage class thing is in practice; it might be bigger for NetApp customers, but for us it was...
A: And I'll tell you that internally our scheduler is very dumb, right? We just filter out everything that we know isn't going to work, and then we start trying to create the volume on each backend internally. If it succeeds, we declare success, and if it fails, we go to the next one; we go through a loop until they're all done. So it's very robust, because we never give up until you literally can't create the volume anywhere. But yeah, the behavior...
A: Yeah, I feel like we don't have a decision, which is fine, but what I will do, because it doesn't look like anyone was taking notes, is write up my notes.
C: If we want to make more progress, I suggest just one week out. Otherwise... oh.
A: We can just keep going weekly until we have a sense of what we actually want to do, and then we could dial back the frequency of the meetings or cancel the series altogether once we have some kind of agreement. But yeah, for now I'll wrap up the meeting, I'll put my notes in the document, and we'll schedule a follow-up next week, unless we come up with a different idea that's better.
A: All right, so thank you, everyone, for attending. If there was anything that anyone else wanted to say and didn't get a chance, you can still put it in the document and we can bring it up next week.