Description
Kubernetes Storage Special-Interest-Group (SIG) Per Volume CSI Driver Capabilities Design Meeting - 19 July 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Ben Swartzlander (NetApp)
A
Okay, so hello and welcome. This is the Kubernetes SIG Storage community meeting on per-volume CSI capabilities. Where we left off last week: I think we kind of rejected relying on splitting CSI drivers apart as the way to solve this, and the best available alternative that we've come up with so far is this concept of formalizing subtypes. Michelle had said her concern was when you're trying to schedule a WaitForFirstConsumer PVC.

B
Wait, before we conclude: I don't remember discussing that we agreed to reject that, because, at least in my mind, that seems like the only viable way that solves all the problems.

A
That's the big concern: will NetApp actually rewrite Trident to have multiple CSI drivers? Will VMware actually rewrite their vSphere driver to have multiple CSI drivers? It seems unlikely. Even if we say that's the way to do it, I think the outcome would just be that the problem does not get solved, because we'll keep doing what we do today.

A
So, you know, no one has proposed a truly workable scheme for having a single default storage class that basically lets the driver decide what type of volume to give you based on the specifics of the request. We went through the details of, well, if you tried to do some sort of mutating webhook that would pick a storage class of the appropriate type for you just in time, that webhook would have to do all of the work that the CSI drivers are currently doing.

E
I don't think our driver will be split; it's just not likely. Maybe you can do it, but given all the amount of work, and even how long before we could actually release it, a Q1 release? It's just too much work, right, so no.

B
And responding to that: technically, it's not... I mean, if we have to, Red Hat could make the change in the driver. If VMware is willing to accept the patches, it doesn't have to be a new image; it would just be the same image taking on both kinds of work, and it would work.

E
It's not just code changes, come on; that's the thing, and the testing, you cannot replace that. So, unfortunately, yeah. But anyway, I don't think we are going to make progress just by talking about this; we're not going to make this decision here anyway. I don't know.

A
So I don't want to rule out that other approach. I just want to say that I feel like we've reached the end of the road with what we can do there, and we know there's a bunch of useful work to do to combine CSI drivers whether or not we do that. So I don't want to stand in the way of any of that work moving forward, but I wouldn't be contributing to any of it, because I wouldn't benefit from any of it.

A
As long as we believe there's value in having a single storage class where the driver uses some opaque black box to actually decide what kind of volume to give you, which is at least how Trident works today, and one can imagine getting a lot of other useful benefits out of that style of scheduling where it's just a decision the driver makes and then tells Kubernetes, then I don't see any alternative other than some sort of formal subtype. And so that's sort of where we had left the previous meeting.

A
You know, I guess we're doing design work, right? We know what needs doing there, and we can go ahead and do it. So, getting back to the idea of having a formal subtype, I wanted to propose this example driver.

A
I just wrote a little bit of text in the notes before the meeting. So imagine an example driver that can make two types of volumes; I'm calling them pink and green, just to be totally abstract. The decision for whether you get a pink volume or a green volume is based on some complex internal logic, because we have to assume this. And then imagine that, for technical reasons, a node can only have 10 pink volumes and it can only have 20 green volumes, right?

A
So in the subtype proposal that we have made, what we would do is: when you called NodeGetInfo on the node plugin, it would return, as part of its gRPC response, you know: subtype pink, 10; subtype green, 20. Then Kubernetes would have that information and it could make its way over to the scheduler, and then the idea is when you're scheduling a volume, or scheduling a pod that's attaching to a pink volume.

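To make the pink/green example concrete, here is a minimal sketch (in Go) of what such a per-subtype limit could look like as a hypothetical extension of the CSI NodeGetInfo response. The VolumeSubtypeLimit message and the SubtypeLimits field are invented for illustration; nothing like this exists in the CSI spec today.

```go
package subtypesketch

// VolumeSubtypeLimit is a hypothetical per-subtype attachment limit.
type VolumeSubtypeLimit struct {
	Subtype           string // e.g. "pink" or "green" in the example driver
	MaxVolumesPerNode int64
}

// NodeGetInfoResponse mirrors the shape of the real CSI response, plus the
// hypothetical SubtypeLimits field; none of this is in the actual CSI spec.
type NodeGetInfoResponse struct {
	NodeId            string
	MaxVolumesPerNode int64                // existing aggregate limit
	SubtypeLimits     []VolumeSubtypeLimit // hypothetical addition
}

// exampleResponse shows the pink/green driver reporting 10 pink, 20 green.
func exampleResponse() NodeGetInfoResponse {
	return NodeGetInfoResponse{
		NodeId:            "node-1",
		MaxVolumesPerNode: 10, // worst case when the subtype is unknown
		SubtypeLimits: []VolumeSubtypeLimit{
			{Subtype: "pink", MaxVolumesPerNode: 10},
			{Subtype: "green", MaxVolumesPerNode: 20},
		},
	}
}
```
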
A
So if the scheduler is able to look at a node, count up the current number of attached volumes of each subtype, compare that to the maximums for that CSI driver, and observe that none of them are at the cap, then it could proceed and say: well, I'm going to go ahead and put the pod on that node, because then the subtype isn't going to matter.

A
From my perspective, right, because this is going to be a pretty uncommon situation anyway, when a node actually has the maximum number of volumes of any given type. And also, you only have these kinds of situations when you have WaitForFirstConsumer, because if you go ahead and bind the volume ahead of time, before scheduling the pod, then you will know the subtype and you'll be able to get exactly to the limit.

A
I had been considering last week saying that, well, maybe what the scheduler should do is just use the smallest, or just use the limit for unknown-type volumes: the existing max volumes per node value, the integer that's in that NodeGetInfo RPC response. And so for this driver, that number would have to be 10, right, because if you don't know what the type is, you have to assume the worst case.

A
That's pink, and you have to assume that you can only go up to ten. But that's actually too conservative for the unbound case, right, because if you know that you have more than ten volumes but they're all green, then it's still safe to create a volume of unknown type, as long as there aren't 20 green volumes.

A
So I guess my proposal is: this is actually not too big of a deal if we can teach the scheduler to just count up attachments for a given volume type by subtype, and then look at the maximums for all of the subtypes for a given volume type. We would have to assume that the information coming back from the node plugin was comprehensive, and that there wasn't a third, unknown type that it could potentially create.

A
Then you could say: well, all of the attachment counts are below their maximums, and therefore it's safe to schedule the pod, because we'll definitely be able to attach the volume, at least, you know, modulo race conditions. I actually don't know how the scheduler handles race conditions: if multiple pods are all trying to land on the same node, and they all have WaitForFirstConsumer PVCs, and they're all of the same type, is there going to be a race where one of them will win and the other ones lose? I actually don't know how that works, but this certainly wouldn't be any worse than the status quo in that regard.

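A minimal sketch of the per-subtype counting this proposal describes, assuming hypothetical types for a node's per-subtype limits and attachment counts; this is not how kube-scheduler is actually structured.

```go
package subtypesketch

// nodeSubtypeState holds, for one node and one CSI driver, the reported
// limit and the current attachment count (including in-progress attaches)
// for each subtype. Hypothetical; not actual kube-scheduler code.
type nodeSubtypeState struct {
	limits map[string]int // subtype -> max volumes per node
	counts map[string]int // subtype -> currently attached volumes
}

// canScheduleUnknownSubtype reports whether a pending volume whose subtype
// is not yet known can safely land on this node: every subtype must still
// have headroom, since the driver could pick any of them.
func (s nodeSubtypeState) canScheduleUnknownSubtype() bool {
	for subtype, limit := range s.limits {
		if s.counts[subtype] >= limit {
			return false
		}
	}
	return true
}

// canScheduleKnownSubtype is the simpler case where the subtype is already
// known (a bound PV, or the storage-class hint discussed later).
func (s nodeSubtypeState) canScheduleKnownSubtype(subtype string) bool {
	limit, ok := s.limits[subtype]
	if !ok {
		return false // unreported subtype: be conservative
	}
	return s.counts[subtype] < limit
}
```
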
B
Yeah, I think the scheduler counts all the in-progress attachments towards the limit, but...

A
Okay, so in that case, what you would have to do is temporarily reserve one of each subtype, right? So if you had 19 green volumes and nine pink volumes and you were provisioning a new one, you would have to assume that it was both green and pink for the purposes of preventing other pods from scheduling there, until you knew whether it was actually pink or green. And then at that point you could fix the math.

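A rough sketch of that temporary reservation idea, again with hypothetical helper functions: while a pending volume's subtype is unknown, charge it against every subtype, then move the charge once the driver reports the real one.

```go
package subtypesketch

// reservePending charges a pending, unknown-subtype volume against every
// subtype's count on the node, so no other pod can consume the last slot
// while the real subtype is still unknown. counts maps subtype -> charged
// attachments; subtypes lists every subtype the driver reported.
func reservePending(counts map[string]int, subtypes []string) {
	for _, st := range subtypes {
		counts[st]++
	}
}

// resolvePending runs once the driver reports the actual subtype: release
// the placeholder charges and keep only the real one, "fixing the math".
func resolvePending(counts map[string]int, subtypes []string, actual string) {
	for _, st := range subtypes {
		counts[st]--
	}
	counts[actual]++
}
```
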
A
So it seems like a tractable problem. I mean, you will have certain pathological cases where, say, you've maxed out the number of green volumes and it turns out that it would have given you a pink volume, but there's no way to know that in advance.

A
So you end up not picking that node, and that's sub-optimal, but it seems pretty close to optimal from my standpoint. And I feel like you probably don't actually hit volume limit maximums in practice too often, or at least not on a majority of your nodes. So even if a few of your nodes are at the maximum, you'll probably find another one that isn't, and you'll be fine.

B
I forgot what the limit on Azure was. In the case of AWS, for example, where the limits are low, and which I'm very familiar with, the drivers are separate: EFS, the file system driver, for example, is a separate driver, so they don't run into this issue, because the block store isn't the same driver as the shared file store. I think it's similar for Azure, where the drivers are separate again. But there are cases where the same driver ships different types of volume.

B
So the only place where I know of a practical driver where it matters is vSphere, where it's the same driver that supports both block and file system, and it has an upper limit of, I think, 56 volumes for that.

A
So if we went ahead with this scenario and implemented subtypes, could you express, you know, that there is a low limit for the block volumes and a higher limit for the file system volumes, and then expect that you'd stay out of trouble with regard to scheduling volumes? Because you'd never hit the higher limit, and it would always be a question, when the scheduler was deciding whether a pod fits on a node, of whether that lower limit is at the max or not.

B
But in the vSphere case you run out of resources on a node far sooner than you run out of the node limits, if that makes sense. Like 56: if that is the limit, if the maximum number of volumes you can attach to a node is 56, then you are likely to run out of resources on the node sooner than you run out of attachments.

B
Yeah, they're not going to need subtypes, that's true. The only one I know of is vSphere, and the limits there are far higher, so in practice, for the drivers we know about, it's going to be fine; it's not going to be the end of the world. But obviously Michelle is going to look into it from the perspective of whether it scales to support other driver types or not. Just because it works for the driver in front of us doesn't mean it works for all of the drivers.

B
You would run into the problem, because the network path and everything for the file side, the vSAN file service, is completely different from the block storage, and you can actually attach more than 16 of the NFS vSAN shares. You cannot attach more than 16 block disks, though, so you will run into this problem.

A
Okay, but I mean, you're saying even if we had the subtypes and we could properly express those numbers, you think the scheduler would make the wrong decision at a high enough frequency that... yeah.

B
It's not a tiny corner case, I guess. Say, in the case of vSphere, for example, the limits are configurable, and an admin can configure the limit to be, like, 16. And then they've already attached 16 volumes and they're running pods, maybe it's a 32 GB or 16 GB RAM node, and then they want to attach some shared file systems. They could still attach them, but the scheduler will now prevent it, because it's counting them against the limit.

A
Okay, so what could we do in that situation to address it? Like, if I'm an admin, and I know, okay, I'm constrained on these block volumes, and the scheduler might not put certain pods on certain nodes if they run out of block volumes, and I need to now somehow give a hint to the scheduler that says...

B
Yeah, and that we stopped respecting in the scheduler for a while, once we migrated to CSI volumes, but that basically defines a global limit on the node, I mean. Theoretically we could have... you know, it's been supported since forever, but now, with CSI, the support for that environment variable is kind of iffy: KUBE_MAX_PD_VOLS, which defines, like...

A
Well, I was thinking of something less global and more tailored. Like, imagine I had a second storage class that had some opaque parameter that forced the driver to give you a file system volume, right? So any PVC from that particular storage class will have a known subtype, even if Kubernetes doesn't know it. Then, is there a way to basically also annotate the storage class?

A
With information for the scheduler that says: okay, scheduler, we're not going to put subtypes into the storage class as a formal part of the design, but we could provide a hint that says: I happen to know that any volume provisioned by this storage class will have a certain subtype, because I've arranged for that to be the case. And then the scheduler could say: okay.

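A sketch of what such an admin-provided hint might look like on a StorageClass, assuming a hypothetical annotation key and a hypothetical driver parameter; neither exists in Kubernetes or any driver today.

```go
package subtypesketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleStorageClass builds a class whose admin asserts, via a hypothetical
// annotation, that every volume it provisions will have the "nfs" subtype.
func exampleStorageClass() *storagev1.StorageClass {
	bindingMode := storagev1.VolumeBindingWaitForFirstConsumer
	return &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example-nfs",
			Annotations: map[string]string{
				// Hypothetical admin-provided hint the scheduler could trust
				// when counting per-subtype attachment limits.
				"storage.example.io/subtype-hint": "nfs",
			},
		},
		Provisioner: "csi.example.com",
		Parameters: map[string]string{
			// Hypothetical opaque driver parameter that actually forces the
			// driver to provision the file-system (NFS) flavor.
			"protocol": "nfs",
		},
		VolumeBindingMode: &bindingMode,
	}
}
```
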
B
That's something that could work; that's interesting. But we would have to kind of flesh out the design of how it would look, like how you flesh out the subtype design, I think, to see how this hint on the storage class would work.

A
Say if it's full, then... well, I guess the trouble is, if we define that populating that field means you're definitely going to get a volume of that subtype, we have no way to enforce that that's actually true, right? Because it would just be an assertion made by the admin: I know that volumes from this storage class will be.

A
So yeah, maybe just an annotation that's a hint that says: hey, I'm the admin, and I'm hinting that I know, but it might be wrong, because admins make mistakes, right?

A
You could have a formal subtype field and allow them to leave it blank, in which case it really is up to the driver. But then, if it's filled in, we could actually check it, right? We could say: okay, this storage class is for green volumes, and there's going to be something in the storage class that indicates to the driver it must create a green volume, and then, if the driver returns anything other than a green volume, it's actually an error and it fails to bind, right?

B
Yeah, just a random thought I was thinking: could we do the counting with a label, where the scheduler counts labels on, like, the PVs? Like we do two-step counting: one is, okay, how many PVCs are of that driver type? And then, when we provision the volume, we fill in the PV type, like the subtype of the PV, and then the scheduler can count by that type. Can we do something like that?

A
That's the issue: if you go all the way through the process of creating the volume for a particular node, and it turns out that you stepped over the limit, now you have to unwind the thing you just created, delete that pod and delete that PV, and go try again somewhere else. So we were trying to get ahead of it and say either: we know this is safe regardless of what happens, in which case go ahead, or, you know, use some hint from the admin that says...

A
...I actually can promise you, before you create it, that it's going to be this subtype, and so as long as you trust that hint, then you can go ahead.

B
I was saying that before; I'm not sure if it will work, but rather than the creation call resulting in the creation of a complete PV, I wonder if there's a step where we could just fill in the subtype of the volume.

A
Yeah, the problem with that is you would have to use it always, and it would become part of the lifecycle of every volume. You know, sometimes you'll go through pre-create and then never follow it up with the create; now you have a mess to clean up. Sometimes you'll call pre-create and it will succeed, and then you'll call create and it will fail, and what's the error recovery path from that?

B
But in this case you did not create anything; it's literally just, sorry, the provisioner just making a call to the driver to fill in the subtype, like what the subtype of the volume is going to be, and...

A
Then it could fail, right? You could call pre-create, and it will tell you it's going to be this type. But then, if you wait a long time and someone uses up all that space, and now you try to call create, it might not work anymore, and so now you need to undo what you did in the pre-create, after create itself fails, to clean up.

A
You know, just to clean up the state. So, yeah, you'd be forced to develop extra layers of retry and error handling around the pre-create no matter what. Even if you're not supposed to create anything at that layer, drivers will create some amount of state that just represents a reservation. Yeah.

B
I just don't know if it'll be acceptable. It does worry me a little bit, because for some drivers it would matter a lot, and for some others it will not matter. Yeah.

A
Up until the last minute she even said she might join at like 1:30, but it's past 1:30 and she's still not here, so probably something happened. But I mean, we can wait until next week and try again. After talking about the idea of putting this subtype into the storage class, I'm starting to like that a little bit more, because it actually does help, but I'm worried about how the heck you document that. You know, storage classes are confusing enough.

A
You know, if we add this field that says: you should leave this empty to get correct behavior, but in this one really weird situation where you know what's going to happen, then you should fill it in; that's just hard for a user to understand, right? Like, I'm imagining the NetApp case where it's like...

A
I know that there are arguments you can pass into the storage class parameters that will guarantee you get an NFS volume, and in that situation you could safely put the nfs subtype in your storage class, and then the scheduler could take advantage of that knowledge and be smarter. But I don't...

A
I don't know how we would document to our users that that's what you're supposed to do, because someone could just as easily put the nfs subtype in the storage class and then not put the other stuff that guarantees you're going to get an NFS volume, and then they'll just have a bunch of errors and they won't know why, and that doesn't seem like a great user experience, right? The whole idea here is that this is supposed to be kind of hidden and magical.

A
You know, you just have a storage class and you provision some storage and it works, and I'm worried about creating more error conditions.

B
I have a question: actually, how common is it that a user requests, like, a general storage class, and from that general storage class they get NFS or iSCSI or whatever? It seems to me like, depending on the workload, NFS might not be suitable for a certain workload and is only required for certain workloads. So in my use cases, where I have seen it, customers know what type they want.

A
I guess, in my experience, what it usually comes down to is: you have some infrastructure guy who set up the cluster itself, right? They've purchased the hardware, and they've purchased some sort of storage hardware solution or storage software solution, and they build it all up, and then they put storage classes on there, or at least one storage class. And then the people who consume the cluster very often are deploying Helm charts or just, you know, stuff.

A
They got it off the internet, and those don't have storage classes in them, right? They just have PVC definitions with a blank storage class name that say: I need a raw block volume, or I need an RWO file system volume, or an RWX file system volume, whatever the application needs in its manifest. And the idea is that, given a single storage class, we can satisfy any of your requests, right? If you ask for raw block you're going to get iSCSI, if you ask for RWX you're going to get NFS, if you ask for RWO you might get either one depending on how we're feeling that day. I mean, there's a bunch of specific things that would determine what you would get, but the idea is your PVC definition specifies what you actually wanted, and the storage class is just a way to get there, or, if you have multiple storage classes...

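A small illustration of that consumption pattern: PVCs that omit storageClassName (so the default class is used) and express what the workload needs through accessModes and volumeMode. The names are illustrative only, and the storage size request is omitted for brevity.

```go
package subtypesketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// rawBlockClaim asks for a raw block RWO volume from the default class;
// in the example above the driver would satisfy this with iSCSI.
func rawBlockClaim() *corev1.PersistentVolumeClaim {
	block := corev1.PersistentVolumeBlock
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data-raw"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:  &block,
			// StorageClassName left nil: use the cluster default class.
		},
	}
}

// sharedFilesystemClaim asks for an RWX filesystem volume; in the example
// above the driver would satisfy this with NFS.
func sharedFilesystemClaim() *corev1.PersistentVolumeClaim {
	fs := corev1.PersistentVolumeFilesystem
	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data-shared"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
			VolumeMode:  &fs,
		},
	}
}
```
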
B
Yeah, no, it's just this use case of dynamically deciding the PV type, you know, it just seems a little bit strange to me. Like, imagine I'm asking for, I don't know, I need a block volume, so I specified ReadWriteOnce, and then, because there was no block volume available, this thing gives me an NFS volume, and now I'm trying to run Postgres on top of NFS, and it's just...

A
Yeah, no, we get those kinds of bugs. I actually think you can do that with the proper hacks, but yes, we get those kinds of issues. But that's a problem introduced by Kubernetes trying to abstract storage a little too much, right? Kubernetes creates the fantasy that all file system volumes are the same, and all you have to do is ask for a file system volume and you'll get something that works. In reality...

A
That's not always true, and people that have a deep understanding of their cluster will use the various tools available to ensure they get what they want, but I actually think those are less common. In a lot of cases you just have infrastructure, and you have people consuming infrastructure, and they don't really talk to each other.

A
It is a check that what you get back matches the subtype, right; that's what it would end up being, unless we also amended the CSI spec to create a channel for the provisioner sidecar to shove the subtype down alongside everything else. Or maybe that's what we do, right? Maybe we define a special CSI provisioner namespaced prefix for this subtype and we just pass it along as one of the opaque parameters, like we do for some of the other ones, so that it is available to CSI plugins.

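A sketch of how that could look on the driver side, assuming a hypothetical namespaced parameter key passed through the CreateVolume parameters; the key name and helper function are invented for illustration and no such reserved key exists today.

```go
package subtypesketch

import "fmt"

// Hypothetical reserved parameter key, in the style of the existing
// csi.storage.k8s.io/ prefixed parameters the sidecars already inject.
const subtypeParamKey = "csi.storage.k8s.io/requested-subtype"

// checkRequestedSubtype is what a driver's CreateVolume handler might do
// before provisioning: if the caller asked for a specific subtype that this
// driver/backend cannot produce, fail fast instead of provisioning the
// wrong thing and failing to bind later.
func checkRequestedSubtype(params map[string]string, supported map[string]bool) error {
	requested, ok := params[subtypeParamKey]
	if !ok || requested == "" {
		return nil // no hint: the driver is free to choose the subtype
	}
	if !supported[requested] {
		return fmt.Errorf("requested subtype %q is not supported by this driver", requested)
	}
	return nil
}
```
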
A
That would at least shift the error detection left, right? If you explicitly communicated nfs in the storage class, and that made its way all the way down to the plugin at CreateVolume time, then you could notice that you were never going to get an NFS volume and immediately return an error up front.

B
Yeah, it seems to me, if you're going to do the subtype, we should do it like that. I don't know what other people think, but it might be easier to get more buy-in.

A
Yeah, but I guess we would need to figure out exactly what the CSI spec change looks like then, because, well, I guess it would always need to come back in the case that the request was empty, right? If you didn't care what subtype you get, then you really do need to know what it is when it comes back. So in the case of a blank subtype, it would not need to match.

A
We need to have error checking to make sure that they didn't differ, and none of that would be too bad. And then we could plumb all that over into the scheduler, so that it had the per-node volume limits for each subtype, and then the scheduler would be able to look at the storage class as well to get an additional hint. But if the hint was not there, it could still function.

A
It would just have to look at all of the possible subtypes and make sure that none of them are at the max before scheduling the volume. That's probably a little bit of nasty code in the scheduler that would be required to do it, because you'd have to loop and, you know, possibly reserve a bunch of not-yet-existing volumes, volumes that don't exist yet, and then, after the volume does exist, go back and fix all those numbers.

A
Sure, sure, yeah. And I mean, maybe this just remains at the design stage for a while and we don't actually push forward with it. But I feel like, if we really do want to solve these kinds of issues we're having with kubelet and with things like SELinux policy, or the fsGroupChangePolicy and the SELinux relabeling, those kinds of things that work sometimes and not other times...

A
I don't see any way other than this, and the alternative is just to not solve them and tolerate the pain for a while, or do what was suggested and, you know, encourage drivers to start splitting apart into multiple drivers, because that's also a viable solution. It's just one that I'm confident certain vendors will not take.

D
Yeah, it's hard to go back to them and say: hey, you know, guess what. But overall I still think that the subtype solution would solve the original issue which motivated this conversation, like around the fsGroup policy and stuff like that. So yeah, cool.

A
All right, so, regarding future meetings, I don't know if we need to keep having these. I guess I want to get Michelle's reaction to this, and maybe she can come next week, so we can keep that meeting on the calendar. But until we do get some sort of agreement and a plan to do any kind of implementation, we might just say: okay, we have a design, and then put it on the shelf, or, you know...

E
Probably better just to cancel this, and then we can definitely reschedule it if we have a need and more people can join the meeting; that's not a problem.