Description
Kubernetes Storage Special-Interest-Group (SIG) Per-Volume CSI Capabilities Design Meeting - 14 June 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Ben Swartzlander (NetApp)
A
All right, hello, welcome. This is the Kubernetes SIG Storage community meeting about per-volume CSI capabilities. As we agreed last week, this is now a weekly meeting and we're going to keep doing weekly meetings, I guess, until we get things sorted out to the point where we can agree. So I wanted to pick up the discussion where we left off last week. I wrote notes for anyone who wasn't able to attend, and if you were able to attend, you can.
A
I forgot something, but my memory was: there were three overall solutions that were proposed, and I tried to capture each one of them and their pros and cons in my notes down here. So I just wanted to go over a summary of these options and then, I guess, get back into arguing about which one we should actually do, because I think we need to agree on an approach before we can get into the next level of detail. So, any questions before I launch into that?
A
First of all, we could require that, you know, we solve this by splitting CSI drivers into multiple drivers, with one driver per protocol, so that the existing CSI interface would solve all these problems by itself. If we do that, no CSI-level changes are needed, but we just have to figure out how to overcome the momentum that we already have in the wrong direction. And I'm kind of opposed to this one on the grounds that so much upfront work would have to be done before we could...
A
We could even get to the point of fixing the CSI drivers, that we wouldn't see the solution to these problems for a long time, and I'm not even convinced we would ever see it. But it's still on the table, because it has a certain elegance to it in that we don't have to change the CSI spec if we go down that path. The second option was...
A
Wait a minute, let me scroll down here to remind myself... oh yeah. We could treat each individual problem as a new feature in the CSI spec. So we could go one by one and say: okay, for the fsType or fsGroupPolicy problems, let's go add a new CSI spec feature to solve that problem; for the SELinux labeling issues, let's add a feature to the CSI spec for that; for the maximum volumes per node issue, let's add a new feature to the CSI spec.
A
The benefit of doing it this way is we can solve all of our problems that way. I mean, there's nothing that can't be fixed by just updating the CSI spec, I don't think. Some of them will be more heavyweight changes than others, but you know, one could imagine a redesign of the volume limits per node feature to return more information that would enable a CO to actually get to the right answer.
A
One could imagine specifically calling out, you know, fsGroup, the ability to set fsGroup information on a per-driver basis in a way that kubelet could always do the right thing; same thing with SELinux. So that's sort of the second high-level approach, and I don't know how many people were in favor of that one, but I find it more appealing than the first one.
A
The third approach was the one that I was proposing. Oh, someone pasted something in here. Okay, we can look at that in a minute.
A
The third approach was to basically take the hack that Kubernetes invented a while ago, which is this CSIDriver CRD, which allows driver authors to basically tell Kubernetes to do certain things when their driver is in use, in a way that's outside the CSI spec and therefore is really, you know, easy to implement changes to. To basically extend that hack to be a little more flexible, either by having multiple instances of that object and selecting which one to use on a per-volume basis, or by extending the object itself so that, based on some key that comes back from the driver, you would be able to look up certain behaviors.
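A minimal sketch of the first variant just described: multiple instances of the CSIDriver-style object, one per subtype, with the instance chosen per volume. The "example.csi.vendor.com" driver name, the subtype keys, and the selection logic are hypothetical illustrations, not an agreed design; only the existing CSIDriver object and its fsGroupPolicy values come from the discussion above.

```go
package main

import "fmt"

// Simplified stand-in for the spec of today's CSIDriver object.
type csiDriverSpec struct {
	FSGroupPolicy string // "File", "None", or "ReadWriteOnceWithFSType"
}

// Hypothetical: several instances registered for one driver, keyed by an
// opaque subtype string the driver reports for each volume ("" = default).
var driverObjects = map[string]map[string]csiDriverSpec{
	"example.csi.vendor.com": {
		"":      {FSGroupPolicy: "ReadWriteOnceWithFSType"}, // legacy/default instance
		"nfs":   {FSGroupPolicy: "None"},
		"iscsi": {FSGroupPolicy: "File"},
	},
}

// specForVolume shows how kubelet might pick which instance to consult.
func specForVolume(driver, subtype string) csiDriverSpec {
	instances := driverObjects[driver]
	if spec, ok := instances[subtype]; ok {
		return spec
	}
	return instances[""] // fall back to today's single driver-wide behavior
}

func main() {
	fmt.Println(specForVolume("example.csi.vendor.com", "nfs"))     // per-subtype behavior
	fmt.Println(specForVolume("example.csi.vendor.com", "unknown")) // driver-wide default
}
```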
A
Instead of having one constant behavior for the whole driver. The big downside to that one is it only solves some of the problems that were raised last week. In particular, it would help solve my problem of the fsGroupPolicy concern; it might help with the SELinux labeling; but it definitely would not help with things like volume node limits.
A
So we would have to do something else for, you know, problems that this doesn't solve. And those were the three approaches, and they all have pros and cons. So, you know, who wants to sort of argue for one of them, or throw out a fourth option that we didn't capture last week?
C
Just a comment on option two, I guess. Even though it seems kind of heavyweight to change the CSI spec for each of these things, you know, in practice, especially for the bigger vendors, having changes to their CSI driver on their own is actually a fair amount of process as well, like you know.
C
As mentioned, you have to address legacy things, you have to figure out how to make sure upgrades work successfully and so on. And so in some ways, doing this work through the CSI spec, having things more regular across the community, has a lot of benefits, and I think the overhead of the process of changing the spec may not be quite as high as it initially seems, considering that there's a lot of overhead anyway any time someone is changing a driver that, you know, is being shipped with their Kubernetes deployment.
A
Yeah, yeah, so I'll echo that. I mean, it sounds like you're basically agreeing that option number one is really expensive, and in comparison option two doesn't look so bad. So yeah, exactly. Just to sort of get everyone on the same page, my concern with option number one is we would first have to address the problem of why driver vendors lean towards this unified driver for multiple types of volumes.
A
To get to the point where a driver vendor would even want to do the thing that is suggested by option one, we would have to figure out how to help people migrate from their legacy situation to the new, in theory better, design of having separate drivers, and that's a whole separate migration problem. And then only after you solve those two very hard problems can you actually expect driver vendors to start splitting apart their drivers, and some of them might not do it.
A
Anyway, I don't know if I could convince NetApp, for example, to do that hard work, but I can definitely say that I couldn't convince them to do it until we solved those other problems. So it would be a long path just to get to the point where we could start to solve the problem, and even then success isn't guaranteed.
A
So I'm very nervous about saying that's what we actually want to do, because there's a lot of work and no payoff for maybe a year, maybe longer. With the second one, I will say that, while it's not a ton of work to amend the CSI spec, we do have...
A
...work to get there, but the amount of time that would elapse would still be pretty high. So that's one reason that I gravitate towards option number three: it's the kind of thing you can do quickly. But I agree that doing it properly in the CSI spec has its appeal. It's just, we have to accept that it would take a while to get it done.
A
We could figure out exactly how long it would take, you're right. It...
D
Oh, you might... well, it depends on whether it's in-tree code or if it's just external repo code, right.
D
Then you have to have a feature gate, right. Like the volume health that we added to kubelet, we have a feature gate, so okay, we'll need to go through alpha, beta and GA. And so now we are moving to beta; at this time we'd like to move the CSI spec feature to GA, from alpha to GA, but...
D
That's why we actually, I think, kind of rushed the... there's a CSI capacity tracking related feature.
E
If a CSI feature is alpha, then yes, it can be used as an alpha feature in Kubernetes, for sure. We are doing that for volume mount group, which is alpha in the CSI spec, and it is beta in Kubernetes; it was introduced as an alpha, but it's beta anyway. So in Kubernetes...
E
...we are proposing CSI spec GA now, and at the same time we are trying to propose GA in Kubernetes. A similar thing, I think, happened with the... what was it, the capacity...
A
Do a KEP, do a feature gate, get the feature into the CSI spec, release an alpha version, merge that into Kubernetes, release a Kubernetes alpha feature all in one release, and then ship that. And then in one more release, assuming there are no problems, you could in principle go to beta. So the best case is two releases from the time that you get your KEP approved, and we're kind of missing the 1.25 feature freeze, because that's, like, what, next week or something? So realistically the best case is alpha in 1.26...
A
...if we go down the option number two path. But then we could have beta in 1.27, and it could be turned on by default. So...
C
If we see the CSI spec as an API, if we're adding a field or something that has, you know, nice, super clear upgrade semantics and stuff, then the actual feature gate would be on the CSI driver side, as to whether it consumes that new field. But in terms of the field being available in the API, that can be done immediately and doesn't need to go through an alpha. I think we're getting more...
A
The agreement that we reached in the CSI community was that we're going to have an alpha process. There's no beta; it just goes from alpha to GA. But the idea was, for brand new features, we would initially tag them as alpha, and because of the semantics of gRPC, that means the field slot that's consumed at the gRPC layer is dedicated to that feature forever. But we can deprecate alpha stuff and remove it if we decide it doesn't work out.
A
In the gRPC message, and then if we later decided to back that out, that slot would just be lost, but we reserve the right to do so as long as it was alpha. And then, assuming things went well, we would promote it to GA, and then it would just be permanently in that slot. So it was just a way to give ourselves a way to try something out without promising not to remove it.
A
And then, when you promote it to GA, that's the promise not to remove it, and so far that's been a workable scheme for the gRPC.
C
So, what exactly do you mean by Kubernetes consuming it at a GA or alpha level? The thing I'm speaking of, just super specifically, I'm thinking of GKE, where you basically can't, you know, create anything close to a prod cluster that has an alpha feature gate enabled, but anything that is beta, you know, can be. And I guess the thing here is that it isn't clear to me that there'd be a Kubernetes feature gate involved, and if there's not a feature gate involved, then, again thinking selfishly of the GKE case, it would be a lot easier for us to launch and test something, you know, at our own cadence.
A
Let me try to hand-wave my way through an actual version of this. The specific thing I'm interested in is fsGroupPolicy in particular, because some volumes that the NetApp driver creates, you know, they need kubelet to do the recursive chown to get the fsGroup set correctly, and others...
A
You can't. And so what would be ideal from my perspective is to have some boolean flag that's returned at volume creation time, that Kubernetes would remember or be able to access at runtime, so that when kubelet gets to the point in its code where it's going to decide, do I do the recursive chown or not, instead of just relying on the current mechanism, which is to look at the CSIDriver object and some other features of the PVC...
A
...it would know whether to do the recursive chown, and it would just know. And again, by default customers would not get that behavior, right: the feature gate would default to off, so it would just be the old way, but people could flip it on if they wanted and get the better behavior. And then, if everyone was happy with that, you would move the kubelet feature gate to beta, so it would default to on. But then it would only work with drivers that implemented the feature.
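A minimal sketch, in Go, of the kubelet decision being described, assuming a hypothetical per-volume flag (here called PerVolumeFSGroupSupported) remembered from volume creation time and a hypothetical feature gate; neither exists today, and the field and gate names are invented purely for illustration.

```go
package main

import "fmt"

// Hypothetical inputs kubelet would have at mount time. Only the driver-wide
// fsGroupPolicy ("File", "None", "ReadWriteOnceWithFSType") exists today; the
// per-volume flag is the new, illustrative piece.
type volumeInfo struct {
	DriverFSGroupPolicy       string // today's CSIDriver.spec.fsGroupPolicy
	PerVolumeFSGroupSupported *bool  // hypothetical flag returned at CreateVolume time
}

// shouldRecursiveChown sketches the decision point discussed above.
// featureGateEnabled stands in for a kubelet gate defaulting to off in alpha.
func shouldRecursiveChown(v volumeInfo, featureGateEnabled bool) bool {
	// New behavior: if the gate is on and the driver reported a per-volume
	// answer, trust it.
	if featureGateEnabled && v.PerVolumeFSGroupSupported != nil {
		return *v.PerVolumeFSGroupSupported
	}
	// Old behavior: fall back to the driver-wide policy (simplified; the real
	// kubelet also considers access modes and fsType).
	return v.DriverFSGroupPolicy == "File"
}

func main() {
	no := false
	v := volumeInfo{DriverFSGroupPolicy: "File", PerVolumeFSGroupSupported: &no}
	fmt.Println(shouldRecursiveChown(v, true))  // false: per-volume flag wins
	fmt.Println(shouldRecursiveChown(v, false)) // true: old driver-wide behavior
}
```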
A
For older drivers that didn't implement that feature, it would have to know, okay, do the default thing; but for new ones that do, do the new thing. And then eventually it becomes a GA thing at the CSI layer, and then you can just declare the feature GA on the Kubernetes side, and then you stop having a feature gate, and then you're done.
C
Yeah, okay. So in the case you're thinking of, because you're making changes to the kubelet, there's going to be a Kubernetes feature gate that is in alpha for a release anyway. So you basically do have that two-plus release cycle, depending on whether you make the feature freeze or not, basically, right? Yeah, yeah, okay.
A
SELinux labeling: because again, it's just the driver telling kubelet whether this particular volume has the behavior that it wants SELinux labels or not, I think, right? Someone correct me if I'm wrong. And then for the per-node limits you need a different feature, but again, I don't know what consumes it, whether it's kubelet or the scheduler. Who consumes the node limits, and where would that feature go?
E
It's sent by the driver to the kubelet, which sets it on the CSINode object, which is consumed by the scheduler to decide, okay, whether slots are available on the node to mount a new volume. So technically you could also...
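To make the flow just described concrete: a simplified mirror of the storage.k8s.io/v1 CSINode data that kubelet publishes from the driver's NodeGetInfo response, followed by a purely hypothetical per-subtype map that illustrates why a single count per driver is not enough. Only the CSINodeDriver/Allocatable/Count shape reflects the real API; everything marked hypothetical is invented here.

```go
package main

import "fmt"

// Simplified mirror of the storage.k8s.io/v1 types kubelet fills in.
type VolumeNodeResources struct {
	Count *int32 // maximum number of this driver's volumes on the node
}

type CSINodeDriver struct {
	Name        string
	NodeID      string
	Allocatable *VolumeNodeResources
}

func main() {
	limit := int32(128)
	d := CSINodeDriver{
		Name:        "example.csi.vendor.com",
		NodeID:      "node-1",
		Allocatable: &VolumeNodeResources{Count: &limit},
	}
	// Today the scheduler sees exactly one count per driver per node, so it
	// cannot tell, say, NFS volumes apart from iSCSI volumes.
	fmt.Printf("driver %s: one limit for every volume type: %d\n", d.Name, *d.Allocatable.Count)

	// Hypothetical extension (not in the API today): separate limits per subtype.
	perSubtype := map[string]int32{"iscsi": 128, "nfs": 10000}
	fmt.Println("hypothetical per-subtype limits:", perSubtype)
}
```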
A
I don't think so. I mean, if you get your KEP approved and you get the CSI spec change approved all within about a month of each other, then you can get the CSI spec released in time for the code freeze and get everything in alpha all in one release. It does require a little bit more coordination, but there's no reason you couldn't achieve that.
E
By the way, for the volume limit thing, it's going to be a problem, because the CSINode, the same object, carries information for both the block volume and the shared volume type. So even if the driver could tell the kubelet that this volume does not support limits, that information does not flow back to the...
D
...controller level, right, when you report at the controller plug-in level. So it's like we need to add something.
A
Well, I mean, it's keyed by the CSI driver name. When you get back to the scheduler, the scheduler doesn't know if it's an NFS volume or an iSCSI volume; it just knows that it's that particular CSI driver. Well...
A
You get back all of the free space, but if you send in a volume capability that, in theory, maps to iSCSI, the driver should, in theory, return how much iSCSI space is available. But that's not exactly how it works, right? The volume capabilities don't map cleanly onto protocols or volume types.
A
There's a big difference between attaching an NFS volume and an iSCSI volume in terms of what the node needs to do. But in terms of how much free space you have, at least on a NetApp device, we don't care if it's NFS or iSCSI, because it's coming out of exactly the same storage, so you'll get the same answer no matter what you ask. But a different driver might be totally different, and, well...
E
It's hard. Should we also at least consider option one, like how we can do it while we're doing this? I mean, if... yeah, I...
E
The problem is that, like, Michelle, see, option two and option three don't solve all the problems. We cannot solve the volume limits problem, because even if our driver reports the capabilities, what it supports on a per-volume basis, at the CSI spec level it still cannot differentiate, disambiguate between, like, volume limits and capacity tracking, because this is again for the entire driver, and we don't have primitives in Kubernetes to report that stuff.
A
Yeah, yeah, I agree. I agree it's harder. I'm not willing to grant that it's impossible. It's certainly not as simple.
H
For option one, even let's say we can solve the problem with option one, how many drivers do we expect to have to complete it? So how do we define it? If we say two drivers can solve all the problems, that might be okay, but if we need to have three or even more drivers to be able to solve all the problems... That would be awesome, definitely.
E
It's more than that, definitely more than three, I think. Okay, so no, it depends on how many protocols one particular driver is bundling. For example, vSphere bundles two protocols; zero file bundles like three or four protocols under the same driver name. It's... yeah.
A
Right. So the answer is, first of all, it's just too expensive from a runtime perspective to have three copies of your driver running, right? It takes three times the memory, three times the pods. It's a big mess.
C
This is something we're actually starting to see in GKE as well, that the node DaemonSets are getting increasingly non-trivial, even if it's a fairly niche driver that isn't used a lot, because there's not really any nice way to sort of deploy the node driver on demand, right? Yes, yes.
C
It's not a scalable solution to add more, because, especially in a managed setting, the thing that you want to do is have these drivers available without the customer having to explicitly deploy them, and when they do get deployed, they're deployed across the whole cluster. And so if, you know, there's a customer in a multi-tenant situation where only a small fraction of the nodes are using some particular storage feature...
A
You could just have 10 CSI drivers installed, none of them would be running until one was needed, and then kubelet could just do whatever was necessary to spin one up to go attach a volume. It would solve all kinds of problems if kubelet was smart enough to do that, but we don't, so... maybe that's something we should do.
C
And until nodes have swap, basically, I think the personal opinion I've come around to is that, until nodes have swap, that's going to be super hard to do, because if there are customers who have their nodes packed super efficiently, you know, the extra memory consumption from a CSI driver can cause all sorts of cascading problems if there's no swap.
H
So if we redesign the CSI spec with the idea in mind that a driver might support different types of volumes, how might we consider redesigning that hack?
A
Yeah, I mean, it would be as simple as, you know, just returning a, quote, subtype for every volume created, and then Kubernetes would have to remember the subtype. And then every time Kubernetes was treating a volume, it would need to know both the volume's driver type, if it was CSI, and the subtype, and that would have to follow that volume around everywhere it goes. So...
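A rough sketch of the shape being floated here, assuming a hypothetical subtype string returned by the driver at volume creation and then carried with the volume wherever it goes. None of these fields exist in the CSI spec or Kubernetes today; they are purely illustrative.

```go
package main

import "fmt"

// Hypothetical addition to the CSI CreateVolumeResponse: the driver labels
// each volume it creates with an opaque subtype string.
type createVolumeResponse struct {
	VolumeID string
	Subtype  string // hypothetical; e.g. "nfs" or "iscsi", opaque to Kubernetes
}

// Kubernetes would then have to carry the subtype alongside the driver name
// everywhere the volume is handled (PV object, attach and mount paths, scheduler).
type trackedVolume struct {
	Driver  string // existing concept: the CSI driver name
	Subtype string // hypothetical: the per-volume key used for lookups
}

func main() {
	resp := createVolumeResponse{VolumeID: "vol-123", Subtype: "nfs"}
	v := trackedVolume{Driver: "example.csi.vendor.com", Subtype: resp.Subtype}
	fmt.Printf("volume %s handled by %s, subtype %q\n", resp.VolumeID, v.Driver, v.Subtype)
}
```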
H
So right now we have, for a volume, not a mode, right, RWX or RWO or ROX...
H
Yeah, access mode. But if we define a volume type, do we have a way to define some kind of context, right, for the volume? Based on this volume type context, you have different capabilities. Is that possible to have?
A
Well, I mean, it would. If we go down that path, I would say that the subtype would have to be some opaque string, just like the CSI driver name is, right? The CSI driver name is an opaque string and Kubernetes treats it as such, and so if there was a subtype, it would just be another opaque string, and you'd have to have both opaque strings to disambiguate any particular CSI capability.
A
Proposal number three, yes, is to actually just have multiple instances of this object, one per subtype; that's my proposal number three. Or to have individual fields converted into maps, so instead of being a singleton, fsGroupPolicy would be a map of subtype to fsGroupPolicy.
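A sketch of the second variant just mentioned, where the existing driver-wide field becomes a map keyed by subtype. The FSGroupPolicyPerSubtype name is hypothetical; only fsGroupPolicy and its values exist on the CSIDriver object today.

```go
package main

import "fmt"

// Hypothetical evolution of the CSIDriver spec: the singleton field stays for
// compatibility, and a map keyed by subtype is added alongside it.
type csiDriverSpec struct {
	FSGroupPolicy           string            // existing singleton field
	FSGroupPolicyPerSubtype map[string]string // hypothetical: subtype -> policy
}

// resolve falls back to the driver-wide value when a subtype has no entry.
func resolve(spec csiDriverSpec, subtype string) string {
	if p, ok := spec.FSGroupPolicyPerSubtype[subtype]; ok {
		return p
	}
	return spec.FSGroupPolicy
}

func main() {
	spec := csiDriverSpec{
		FSGroupPolicy:           "ReadWriteOnceWithFSType",
		FSGroupPolicyPerSubtype: map[string]string{"nfs": "None", "iscsi": "File"},
	}
	fmt.Println(resolve(spec, "nfs"), resolve(spec, "")) // None ReadWriteOnceWithFSType
}
```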
E
What I was saying is, if we have to design the CSI spec to support multiple volume types within one driver, can we consider that the GetPluginInfo call that the kubelet makes returns basically an array rather than one value, so that the driver says, okay, I support these different volume types, and then Kubernetes can go ahead and pick and choose whatever it wants? It still leaves us with the problem of existing PVs that are deployed, but it probably... yeah.
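A rough sketch of the idea just floated: the driver enumerating the volume types it supports up front so Kubernetes can pick among them. Today's CSI GetPluginInfoResponse only carries the plugin name, vendor version, and a manifest map; the SupportedVolumeTypes field below is a hypothetical extension shown only to make the shape concrete.

```go
package main

import "fmt"

// Hypothetical extension of the CSI GetPluginInfoResponse.
type getPluginInfoResponse struct {
	Name                 string            // existing: driver name
	VendorVersion        string            // existing
	Manifest             map[string]string // existing: opaque key/value metadata
	SupportedVolumeTypes []string          // hypothetical: e.g. ["nfs", "iscsi"]
}

func main() {
	resp := getPluginInfoResponse{
		Name:                 "example.csi.vendor.com",
		VendorVersion:        "1.0.0",
		SupportedVolumeTypes: []string{"nfs", "iscsi"},
	}
	// Kubernetes could enumerate the advertised types and track per-type
	// behaviors, instead of assuming one behavior for the whole driver.
	for _, t := range resp.SupportedVolumeTypes {
		fmt.Printf("driver %s advertises volume type %q\n", resp.Name, t)
	}
}
```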
A
That was my third proposal, yeah. Instead of trying to do something complicated, just say, look, we're going to have multiple of these things, and there'll be one CSI spec change to return some key. That would be enough for Kubernetes to figure out which one to look up.
A
...Kubernetes-proprietary stuff that we decided not to put into the CSI spec. We did put volume limits directly into the CSI spec as a proper feature, and so it would need to be redesigned to address that particular use case. Now, we could say that use case isn't important enough to bother with and we're going to ignore it for now and just focus on the other ones, but that's a decision we have to take. I personally would like to solve that one as well, if we can.
D
So I think Sandeep added two, right? Sandeep added two: one is the limits, right, and then the other one is ListVolumes, the one that returns the published node IDs. Oh...
A
I had forgotten about that one, but my response when he raised that, and I don't think we have Sandeep here today, but last week when he brought that up, I pointed out that you can solve that entirely on the CSI driver side, by just, you know, storing state somewhere to keep track of which nodes a particular volume is published to, and that's actually what we do with ours.
A
We track which volumes are published to which nodes, and it's always accurate, and it's reconciled with the, you know, external-attacher and the whole... I mean, that's the whole purpose of the published-nodes feature of ListVolumes: it's so that the external-attacher can ask you what this should be published to.
A
But we put it in a Kubernetes CR, that's where we store it, so that particular feature just layers on top of Kubernetes CRs. But yeah, we keep track of that stuff. So I guess we can consider it as something that could be improved on the CSI spec side, but given that there is a solution that already works, I find that one less critical.
A
Yeah, so when the external-attacher sidecar starts up, it calls ListVolumes to try to see what is already published, and then it reconciles that against the VolumeAttachments that it knows exist on the Kubernetes side. And if there's any situation where you have a VolumeAttachment but the CSI driver says it's not published, it will re-call ControllerPublishVolume at that time to try to fix it.
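A condensed sketch of the reconciliation just described, using simplified stand-ins for the ListVolumes published-nodes data (the LIST_VOLUMES_PUBLISHED_NODES capability) and for VolumeAttachment objects; the real external-attacher logic is more involved, so treat this purely as illustration.

```go
package main

import "fmt"

// attachment is a simplified stand-in for a Kubernetes VolumeAttachment:
// a desired (volume, node) pairing.
type attachment struct {
	VolumeID string
	NodeID   string
}

// published maps volume ID -> set of node IDs the driver reports as published
// via ListVolumes (published_node_ids).
type published map[string]map[string]bool

// reconcile returns the attachments that exist on the Kubernetes side but
// that the driver does not consider published, i.e. the ones the attacher
// would re-issue ControllerPublishVolume for.
func reconcile(desired []attachment, actual published) []attachment {
	var missing []attachment
	for _, a := range desired {
		if !actual[a.VolumeID][a.NodeID] {
			missing = append(missing, a)
		}
	}
	return missing
}

func main() {
	desired := []attachment{{"vol-1", "node-a"}, {"vol-2", "node-b"}}
	actual := published{"vol-1": {"node-a": true}} // vol-2 not published anywhere
	for _, a := range reconcile(desired, actual) {
		fmt.Printf("would call ControllerPublishVolume(%s, %s)\n", a.VolumeID, a.NodeID)
	}
}
```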
H
But actually, I feel that hides some issues there; we can talk details later. So yeah, this is kind of the least important one, since it has a workaround, I agree. So, Ben, do you have a more concrete design for the third approach, and which problems can be solved and which cannot be solved, so we can think about how to solve them?
A
The third approach is most squarely aimed at the stuff that is in the CSIDriver object, so things like fsGroupPolicy. I guess the SELinux label policy isn't there yet, but short of adding a new CSI feature, the way we would address it on the Kubernetes side is by extending the CSIDriver object with a new field, right, because it's just quicker and easier to do that on the Kubernetes side and not touch the CSI spec.
A
And no CSI spec change, okay. So yes, those two features are the ones that could be addressed by basically making the CSIDriver object have multiple instances, or by changing it to have maps instead of, you know, global fields. But I'm pretty sure that it would not help with the per-node limits.
E
So it's basically the same as node limits: each node reports how much capacity is available. I don't remember if it reports available or consumed, but it reports a value. Now, if the CSI driver is, for example, the vSphere CSI driver, and it is providing both block storage and file storage, those literally come from different datastores, and they could have different... yeah.
A
But even in that mode, you don't run the controller plug-in on every node; you still just run one instance of it, I thought. Oh.
A
...things, but just so I understand: if you're in that mode where you're basically running the controller plug-in on each node, and the particular plug-in you're using has different subtypes of volumes that it can support, you're saying that the existing inputs to that RPC, which are the topology, the parameters, and the volume capabilities, don't give you enough of a filter to basically say how much of this type of storage do you have versus how much of that type of storage do you have? Yeah, yeah?
A
What would be ideal then? Because this is...
A
...expected, right? The driver is going to... let's say you have NFS and iSCSI as your two options: the provisioner, the CreateVolume request handler, is going to decide whether to give you iSCSI or NFS based on exactly the same inputs. It's going to be the topology, the StorageClass parameters, and the capabilities that you ask for, and whatever matches those things.
E
And it's just, how would we even do it in the scheduler? First of all, scheduler folks don't like to have a lot of dynamic information; they like information that they can cache so that they can respond fast. But in this case it sounds like the scheduler has to look into different...
A
This would all be cacheable information. It would just be that, instead of having one value, which is "I have a limit of X volumes per node," it would be "for subtype A I have a limit of Y, and for subtype B I have a limit of Z." But those numbers wouldn't be changing, and the scheduler knows what subtype the volume that it's considering scheduling is, because we would have to store that too.
H
In the initial design, did we have that in mind, that each driver only supports kind of one volume type, and...
A
I think you would say subtype is empty, right? So for the majority of drivers that don't have subtypes, they would just use the empty string and they would be backwards compatible with themselves. And then any of these weird drivers that do have different subtypes would still have to support an empty subtype for backwards compatibility and basically guess the right behavior, but for ones that had non-empty subtypes, you could give them the right answer.
A
Then the PV... not the PVC. We would need to stick it in the PV, alongside the CSI driver name: some CSI subtype field.
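A minimal sketch of where such a field could sit, mirroring the existing CSIPersistentVolumeSource embedded in a PersistentVolume spec (which really does carry the driver name and volume handle); the Subtype field is hypothetical and shown only to make the idea concrete.

```go
package main

import "fmt"

// Simplified mirror of the CSI source inside a PersistentVolume spec. Driver,
// VolumeHandle and VolumeAttributes are real fields; Subtype is the
// hypothetical addition being discussed.
type csiPersistentVolumeSource struct {
	Driver           string            // existing: opaque CSI driver name
	VolumeHandle     string            // existing: driver-specific volume ID
	VolumeAttributes map[string]string // existing: opaque driver attributes
	Subtype          string            // hypothetical: opaque per-volume key, "" for legacy volumes
}

func main() {
	src := csiPersistentVolumeSource{
		Driver:       "example.csi.vendor.com",
		VolumeHandle: "vol-123",
		Subtype:      "nfs",
	}
	fmt.Printf("PV backed by %s (%s), subtype %q\n", src.Driver, src.VolumeHandle, src.Subtype)
}
```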
A
Okay, so you're saying let's flesh out option number three, or maybe a combination of options two and three, right? Correct, so that we can also address the per-node...
A
...issue similarly. Okay, I'm comfortable with that as a path forward. And, oh gosh, we're almost out of time. So yeah, let's see if we can reach agreement on that as a tentative direction, and then maybe next meeting we can get into detail on what exactly that looks like, so we can pick at it. Is there anyone who thinks that's a terrible idea?
A
Okay, all right, yeah. And it's helpful to think about all of the actual touch points on the Kubernetes API that would be needed to introduce the concept of a subtype, because it would have to be a new field that would have to go through alpha, beta, GA, and we'd probably have one big feature gate that would cover all of this on the Kubernetes side.
H
That's where I think we need to start. We cannot not change anything, right, and it needs to evolve to cover these issues. Okay.
A
Well, I'm glad that there's energy behind this, because something this big and daunting can easily just make people run away and go work on other stuff. But I'm glad that people are interested in getting to the bottom of this one, because I am. So okay, thank you all, we're going to have to end the meeting.