Description
Kubernetes Storage Special-Interest-Group (SIG) Per Volume CSI Capabilities Design Meeting - 12 July 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Ben Swartzlander (NetApp)
A: All right, okay, so hello, welcome. This is the Kubernetes SIG Storage community meeting on the topic of per-volume CSI capabilities.

A: We missed last week because I think a bunch of people were out on holiday, and the prior two weeks we only got a little bit done because we didn't have Michelle. Michelle, are you here with us today?

A: You fixed the audio? Excellent. All right, let me put this on here.

A: All right, so, Michelle, where we had come down was... I had a specific proposal that was basically a combination of... I'm trying to go back in history and remember when the last time you met with us was... yeah. It was a combination of the second and third options: updating the CSI spec and then extending the current CSIDriver. The specific proposal was basically to modify the PersistentVolume API object with a new field called subtype, which would be a sibling of the driver field.
A: So right now all of your different CSI drivers have their names. The idea would be to have a subtype under that name, which would be vendor-specific, and it would be filled in by modifying the CSI spec with this new subtype field on the Volume object that is returned in the CreateVolume response. This would of course have to go through alpha to GA on the CSI side. On the Kubernetes side, this would be a new alpha field and would have to go through alpha, beta, GA.

A: It would default to being empty, obviously, because it's a new field, so it would have to default to being empty, in which case you get exactly the same behavior as today. But the idea is that drivers that want to opt into this subtype behavior could return this value at CreateVolume time; Kubernetes would store it in the PV object if it was available, and then when kubelet is doing the various things that it does, where you might want to have different behavior on a per-volume basis...

A: You would be able to go look at the subtype field of the PV and decide what to do. So if it was a question of the group ownership change policy, you could have different policies for different volumes. If it was a question of the SELinux labeling policy, you could have different values for different volumes.
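(For illustration, a minimal sketch of what the proposed field could look like on the PV's CSI source, written as a standalone Go struct rather than the real core/v1 types. The field name "Subtype" is only the working name from this discussion, and the driver name and values are placeholders.)

    package main

    import "fmt"

    // Illustrative only: a PV CSI source with the proposed subtype as a
    // sibling of the driver name. In the real API this would extend
    // CSIPersistentVolumeSource; "Subtype" is just the working name here.
    type CSIVolumeSourceSketch struct {
        Driver       string // e.g. "csi.example.com"
        VolumeHandle string
        // Subtype would be returned by the driver in the CreateVolume
        // response and copied into the PV by the provisioner. Empty means
        // "no subtype", i.e. today's behavior.
        Subtype string
    }

    func main() {
        pv := CSIVolumeSourceSketch{
            Driver:       "csi.example.com",
            VolumeHandle: "vol-123",
            Subtype:      "nfs", // e.g. "nfs" vs "iscsi" for a multi-protocol driver
        }
        // Kubelet could branch per volume on the subtype.
        if pv.Subtype == "" {
            fmt.Println("no subtype: fall back to per-driver behavior")
        } else {
            fmt.Printf("apply %s-specific policy for this volume\n", pv.Subtype)
        }
    }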
A: The one thing that this basic solution doesn't solve directly is the node volume limits, and so to address that I proposed, additionally, that in the NodeGetInfo response, where we currently return a single field, which is max volumes per node, we return a map of subtype to max volumes per node. That would let a driver, in principle, return a map of values so that different subtypes could have different maximum volume counts.
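(A rough sketch of the shape of that CSI-side change, written as Go structs instead of protobuf. The node ID and single max-volumes field mirror what NodeGetInfo already returns; the per-subtype map and its name are only the idea being floated here, and the numbers are placeholders.)

    package main

    import "fmt"

    // Illustrative shapes only. NodeID and MaxVolumesPerNode correspond to
    // fields NodeGetInfo returns today; MaxVolumesPerNodeBySubtype is the
    // hypothetical addition discussed here.
    type NodeGetInfoResponseSketch struct {
        NodeID            string
        MaxVolumesPerNode int64 // existing single limit; used when the subtype is unknown
        // Proposed: per-subtype limits, keyed by the same subtype string the
        // driver would return from CreateVolume.
        MaxVolumesPerNodeBySubtype map[string]int64
    }

    func main() {
        resp := NodeGetInfoResponseSketch{
            NodeID:            "node-1",
            MaxVolumesPerNode: 32,
            MaxVolumesPerNodeBySubtype: map[string]int64{
                "nfs":   32,
                "iscsi": 100,
            },
        }
        fmt.Printf("default limit %d, per-subtype %v\n",
            resp.MaxVolumesPerNode, resp.MaxVolumesPerNodeBySubtype)
    }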
A: Now, we would also need to plumb this all the way through to where the Kubernetes scheduler can get it, and I don't have a specific proposal for that, but I'm sure you can imagine what that might look like. So this was my high-level proposal.

A: The other point was that no matter what we do, you can't get around the fact that a storage class only specifies a single CSI driver, and today that enables you to do the kind of thing we do where you only have one storage class, you only have one CSI driver, and it can decide whether to give you NFS or iSCSI based on the specifics of your request. The proposal where we just have multiple CSI drivers would break that, right?

A: You would no longer be able to just have one storage class and then let the CSI driver decide. You would need to have multiple storage classes, and if somebody wanted to get the default storage class behavior, they would be stuck with exactly one of those storage classes. So, for example, if you had iSCSI and NFS, you'd have to pick one to be the default and then you'd always get that, which would be a step backwards from what we have today.

A: Even admins don't specify it, right? When an admin fills out a storage class, they do put the CSI driver name in the storage class, and that determines which CSI driver will be your provisioner for that storage class. But yes, in this case the driver fills in the subtype, and only if they want to. If the driver has legacy behavior or doesn't want to have subtypes, it's just a blank string and you get the exact current behavior.
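(For context, a small sketch of the admin-facing side as it exists today: a StorageClass names exactly one CSI driver as its provisioner, and everything else about which kind of volume you get is up to that driver. The class name, driver name, and parameter below are placeholders.)

    package main

    import (
        "fmt"

        storagev1 "k8s.io/api/storage/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // One StorageClass binds to exactly one provisioner (the CSI driver name).
        sc := storagev1.StorageClass{
            ObjectMeta:  metav1.ObjectMeta{Name: "example-standard"},
            Provisioner: "csi.example.com",
            Parameters:  map[string]string{"protocol": "any"},
        }
        fmt.Printf("class %q -> provisioner %q\n", sc.Name, sc.Provisioner)
    }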
C: I guess the part that I think might be challenging is the features where we need things to be done, but we basically need the subtype to be determined before provisioning. So that does include things like the volume limits and the storage capacity; all of these things are reported before provisioning.

A: So I can't safely put the pod on a node knowing that after it lands there the PV will be able to bind and that it will be able to attach, because I don't know the subtype. Boy, that's a hard problem.

A: Yeah, I mean, so that's a bit of a serious wound to the volume limits application of this solution, and maybe that does need a fundamentally different approach that we need to think about. But what do you think about this for just the other two problems that we're aware of, like the stuff related to kubelet needing to look up some policy behavior in the CSIDriver object?

A: Well, we could do it that way, right? We could have a CSIDriver per subtype, or we could change the CSIDriver object itself to have new fields which were map fields, where instead of fsGroupPolicy being a scalar value, you'd have an fsGroupPolicy map which would have keys which were the subtypes and values which were the policy values. So, yeah.
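(A sketch of that shape, again as illustrative Go rather than the real CSIDriver types: keep today's scalar fsGroupPolicy as the fallback and add a map keyed by subtype. The map field, its name, and the policy values chosen below are hypothetical.)

    package main

    import "fmt"

    // Illustrative only. A scalar FSGroupPolicy exists on the real CSIDriver
    // object; FSGroupPolicyBySubtype is the hypothetical map field discussed here.
    type CSIDriverSpecSketch struct {
        FSGroupPolicy          string            // existing scalar, e.g. "File" or "None"
        FSGroupPolicyBySubtype map[string]string // proposed: subtype -> policy
    }

    // policyFor shows how kubelet could resolve the policy for one volume:
    // use the subtype-specific entry when the PV carries a subtype, otherwise
    // fall back to the existing scalar field.
    func policyFor(spec CSIDriverSpecSketch, pvSubtype string) string {
        if p, ok := spec.FSGroupPolicyBySubtype[pvSubtype]; ok && pvSubtype != "" {
            return p
        }
        return spec.FSGroupPolicy
    }

    func main() {
        spec := CSIDriverSpecSketch{
            FSGroupPolicy: "File",
            FSGroupPolicyBySubtype: map[string]string{
                "nfs":   "None",
                "iscsi": "File",
            },
        }
        fmt.Println(policyFor(spec, "nfs"))   // subtype-specific value
        fmt.Println(policyFor(spec, ""))      // empty subtype: legacy scalar
        fmt.Println(policyFor(spec, "other")) // unknown subtype: legacy scalar
    }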
C: That approach seems feasible for at least those problems, right? Yeah, I think it's these scheduling ones that are the harder ones to figure out.

F: Oh yeah, just... on the last call we kind of started to talk about how we can make it easier to migrate. There are two things. One is: how can we make multiple drivers run, potentially, within the same... not even pod, but within the same... there's one option where they can run as the same pod but different containers.

F: Second is that they could run within the same pod, as the same process. So, okay, we talked about this briefly in the last call, and I was thinking more about it. The same-container problem is kind of... the same-pod problem is relatively solvable, because you can bind to different, unique sockets and they can return different values from GetPluginInfo, and that's all fine. But the same process is more tedious, because then, yeah, it will require...

F: I think it requires changes to the CSI spec, at least the identity calls, like GetPluginInfo and things like that, because it has to report multiple things on the same socket. So...
A: Yeah, but I mean, the snags are very different for each one. So if we focus on controller plugins, you're saying I might want to run two different CSI drivers in one pod, and in that case I can just have maybe multiple instances of the actual driver and then multiple instances of each sidecar, point them all at the correct two different sockets, map them all correctly, and then the right thing happens inside one pod. Is that so?

F: Yeah, that is one way of solving it. But given that we also wanted, I don't know if it's still a goal, to reduce the memory footprint on the node, then, basically on the node side, we are talking about either letting multiple...

F: If we take this middle approach, where you have multiple drivers running in different containers, but maybe as part of the same pod, then...
A: I just... I guess I'm not grasping the challenges. Like, no matter what, as long as you have two different sockets, they can both be listened to by the same process, right? And then that process can do the work of two drivers with two different Go threads or goroutines, and you can economize on memory, at least for the driver, that way. And then we could, in principle, do the same thing with the CSI sidecars; we could make an omnibus sidecar.
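(A minimal sketch of the one-process, two-sockets idea: a single binary listens on two UNIX sockets and runs an independent gRPC server for each logical driver. The socket paths and driver names are placeholders, and the CSI service registration, where each endpoint would report its own name from GetPluginInfo, is deliberately left out since that wiring is driver-specific.)

    package main

    import (
        "log"
        "net"
        "os"

        "google.golang.org/grpc"
    )

    // serveEndpoint listens on one UNIX socket and runs a dedicated gRPC server.
    // A real driver would register its CSI Identity/Node/Controller services
    // here, each reporting its own plugin name.
    func serveEndpoint(socketPath, pluginName string) {
        _ = os.Remove(socketPath) // clear a stale socket from a previous run
        lis, err := net.Listen("unix", socketPath)
        if err != nil {
            log.Fatalf("%s: listen: %v", pluginName, err)
        }
        srv := grpc.NewServer()
        // CSI service registration for this endpoint would go here (elided).
        log.Printf("%s serving on %s", pluginName, socketPath)
        if err := srv.Serve(lis); err != nil {
            log.Fatalf("%s: serve: %v", pluginName, err)
        }
    }

    func main() {
        // Two logical drivers, one process: each gets its own socket and its
        // own goroutine, so the runtime and memory are shared.
        go serveEndpoint("/csi/driver-a/csi.sock", "a.csi.example.com")
        serveEndpoint("/csi/driver-b/csi.sock", "b.csi.example.com")
    }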
F: I was thinking also about how the node registration will work in the case of kubelet, because the node-driver-registrar, I think, calls GetPluginInfo on the driver. So...

C: Like, the first phase is, you know, do a sidecar per socket, and then the second thing would be, you know, consolidate all the sidecars.

A: What I mean... no, I think what you meant to say, and tell me if I'm wrong, is: yes, the first phase is making each sidecar able to handle an array of sockets instead of just one, right? And then later, once all the sidecars can do that, collapse all the sidecars down into, like, an omnibus sidecar that just does all the things in one process.

C: Or is the first phase to have the driver expose multiple sockets?

F: Yeah, yeah. No, definitely it's doable, except, like, I think you'll have to just run multiple copies of the node-driver-registrar, so running multiple drivers, but...

F: Yeah, or like, for phase one, as you said, we could run two node-driver-registrars, basically, so that we don't have to... then it would be simpler. So, assuming the node-driver-registrar doesn't take too much memory, it's like you have two drivers that could run as the same process but different...

F: They listen on different, unique sockets, but you're still running copies of the node-driver-registrar for each plugin and each socket. That could work as phase one; in the future we could enhance the node-driver-registrar to listen to an array so that it handles this correctly. But, yeah.
A: How about the idea of kubelet knowing how to start plugins itself, and then allowing them to, like, not run unless they're needed?

A: Well, yeah, so that... I think that's the other problem we have that would be solved by this. Like, today we have this issue where, when you drain a node, if kubelet happens to kill the CSI node plugin while there are still attached volumes, you're screwed, and so the way we solve it is by, like, giving these CSI node plugins ridiculously high priority.

F: The scheduler folks are very, very conservative about accepting resources that can change a lot, like...

F: Maybe, yeah. I mean, we could say that as a phase three, but it requires even... yeah, it requires more design than even phase one and phase two. So, of this, like, better ways of running drivers... yeah, drivers.

A: I feel like that's going to have more payoff than trying to squeeze multiple drivers into one plugin. I mean, I agree that that's also something it would be nice to be able to do, but there are always going to be... you know, if you have multiple CSI plugins from different vendors, or different CSI plugins for different purposes, they're never going to be combined; they're just going to be separate, and the ability to not run them on all of your nodes all the time would be a dramatic... yeah.
C: I think, for the most part, we've managed to kind of get by this by just, you know, having the admin sort of just decide and pick and choose what they run.

A: But I guess it just... it continues to bother me that we have this flaw where, when you drain the node, the CSI plugin can actually get killed too early, because kubelet doesn't know that it's a CSI plugin. It would be nice if there was a way to flag it, as in: this pod is a CSI plugin, please keep it alive until the last volume attachment is removed.

F: So, I was... let's... I haven't thought a lot about this problem, and I think it's running drivers on demand.

F: It's an interesting problem, but I don't know, I haven't thought a lot about it; it's kind of challenging. But I also want to spend some time, while Michelle is here, on this thing that we talked about in the last call: the driver selection if a single storage class is specified. So another problem that you mentioned, Ben, was that even if we manage to run multiple drivers, either as sidecars or the same process, doesn't matter, you're still left with the problem like...
F: Currently, those drivers are deployed with single storage classes. Not always; in some cases, the Azure case for example, it actually deploys different drivers for disk and file. But yeah, in other cases it's possible that they deploy a single storage class and, based on the access mode that gets passed to the CSI driver, it provisions different kinds of volumes, like the vSphere CSI driver does.

A: Yeah, that's how the NetApp deployment works, and other things can impact it too. Like, you know, we can decide to schedule larger volumes as NFS and smaller volumes as iSCSI, or, you know, block. Well, obviously, if it's a block volume you're going to get iSCSI, and if it's a file system volume you can get either. So, like, other aspects of the PVC can determine what you get, even with just one storage class, and it can come down to just, like, which one we happen to have free space on.
F: Yeah, so, okay, interesting. So I was talking on the last call about, like, if we decide to... so there are two things. One is, like, for that problem, an admission plugin can do this, but it's not clear how multiple drivers and that mechanism would coexist, like in...

A: But that admission plugin could only do its work on PVCs where there was no storage class specified, yeah, right? And so today, if you have two different CSI drivers from two different vendors, you can still pick which one you get with the storage class, and then, after you pick the storage class, you still get, you know, another scheduling process of what exact kind of volume the CSI driver is going...

A: ...to give you. I mean, the NetApp driver has a whole scheduler inside of it, right, that looks at possibly multiple storage backends and tries to make the best decision, and it has a retry loop, so if anything fails it just keeps retrying on the next one, and the next one, and the next one, until it finds one and succeeds, or it exhausts every possible option.
F: So, yeah, and are all these volume types immediate binding or delayed binding? Because, like, if you are delayed binding, then this sounds like it's interacting with the scheduler that's built into Kubernetes, and then there's...

A: So we support both, and if it is after the pod is scheduled and we have topology information and that matters, then we will use it, because we do have a flavor of the driver that, you know, is built in the cloud. Like, I think we have Azure NetApp Files, which is part of Microsoft's Azure cloud, and, like, if you're in a certain region or zone, it will make sure that the storage is in the same region or zone, if it can; those kinds of things.
F: ...work. But if we are talking about, like, complex decision making, I wonder, like, will it be too late? So when, for example, if these calls get executed by a webhook, then I wonder if it will be too expensive to factor into, like, scheduling, the decision the scheduler is doing, and then...

A: Probably, yeah, because the kinds of things that our scheduler would do is it would actually try to create the volume, and then, if it succeeds, great, and if it doesn't, it'll try another place; it goes through a retry loop inside of the create call. So if we had to do all of that in some sort of an admission hook, then by the time the admission hook set the storage class, we would have already created the volume somewhere, and it could have possibly taken...

F: Yeah, that's done by, I think, the admission hook; the difference...
A: Right, but the change you're proposing is that, like, we do something even before that, where, if the storage class is blank, we run some vendor-specific logic and then fill in a vendor-selected storage class for you, to deal with the fact that, you know, if you have multiple CSI drivers and you want to pick one based on the request, you need admission hooks to do that. But that would happen even before anything could get scheduled, right? Because you'd have to set the storage class in your webhook.
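(A sketch of the kind of mutation such a webhook could make, reduced to the core decision: if the incoming PVC has no storage class set, pick one from the request, here crudely by access mode. The class names and the selection rule are placeholders, and the AdmissionReview/JSON-patch plumbing around this function is omitted.)

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // chooseStorageClass fills in a storage class only when the PVC left it
    // unset, mirroring the defaulting behavior a mutating webhook could add.
    // The rule below (RWX means an NFS-backed class) is just an example.
    func chooseStorageClass(pvc *corev1.PersistentVolumeClaim) {
        if pvc.Spec.StorageClassName != nil && *pvc.Spec.StorageClassName != "" {
            return // the user already picked one; leave it alone
        }
        class := "example-iscsi"
        for _, m := range pvc.Spec.AccessModes {
            if m == corev1.ReadWriteMany {
                class = "example-nfs" // shared access needs a file protocol
            }
        }
        pvc.Spec.StorageClassName = &class
    }

    func main() {
        pvc := &corev1.PersistentVolumeClaim{
            Spec: corev1.PersistentVolumeClaimSpec{
                AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
            },
        }
        chooseStorageClass(pvc)
        fmt.Println(*pvc.Spec.StorageClassName) // example-nfs
    }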
A: I don't know, I guess my feeling on that class of proposals is: I think it could work, but it would be, like, a massive re-architecture for something like Trident to work that way, and I don't see it happening. Like, even if that was the agreed-upon path forward, we probably would just decide not to implement it and keep doing what we're doing and live with the limitations thereof.

A: Yeah, I just... I mean, while we could have gone down that path years ago, the fact that we've done what we've done makes it seem unlikely we'll ever get out of the situation of just having one giant CSI driver name and one storage class for a particular vendor. I feel like we're stuck on that path now, just due to momentum, and that's why I was saying, you know, well, a subtype would help us move forward.

A: And we might be able to find a way to make it work for the node limits. We just have to think through the scenario that Michelle laid out, because it hadn't occurred to me before.
A: Like, what you could do in that case... so my specific proposal for the volume node limits was, you know, there is a fallback, right? You don't remove the existing node limit; that's, you know, for the empty subtype, which is, you know, the legacy behavior. You just add specific volume limits for specific subtypes, and those could be higher or lower than the default.

A: You could just continue to use the default for when you don't know the subtype, and then you could set a rule that says: okay, as long as the default is less than or equal to the subtype-specific volume limit, you will get the correct behavior, because the scheduler will never pick a node that has fewer than the minimum number of attachment points left, and you could get correct behavior. Now, you would...
A: The effect of that would be to artificially limit the number of deferred-binding volumes on a node to whatever the node limit was for all of the subtypes for that particular driver. But that doesn't seem like the worst thing, right? Like, let's say, for whatever reason, we could only have 32 NFS volumes but we could have 100 iSCSI volumes.

A: But for non-delayed-binding volumes, for ones that had a PV, you could go look at the PV and see that it already existed and see that it was iSCSI, and then say: oh, I'm going to use the limit of 100 instead of the limit of 32, because I know. But when you don't know, you could just default to the 32 and say, well, I know that that's the safe number, right? I know I'm...
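(A sketch of that fallback rule from the scheduler's point of view: use the subtype-specific limit when the PV already exists and carries a subtype, otherwise use the driver's default limit, which is only safe if it is the smallest of the per-subtype limits. The 32/100 numbers are the ones from the example above; the function and map names are made up for illustration.)

    package main

    import "fmt"

    // limitFor returns the attachment limit to assume for one volume.
    // subtype is empty for delayed-binding volumes that have not been
    // provisioned yet, so those fall back to the conservative default.
    func limitFor(defaultLimit int64, bySubtype map[string]int64, subtype string) int64 {
        if l, ok := bySubtype[subtype]; ok && subtype != "" {
            return l
        }
        return defaultLimit
    }

    func main() {
        perSubtype := map[string]int64{"nfs": 32, "iscsi": 100}
        defaultLimit := int64(32) // safe only if it is <= every per-subtype limit

        fmt.Println(limitFor(defaultLimit, perSubtype, "iscsi")) // 100: PV exists, subtype known
        fmt.Println(limitFor(defaultLimit, perSubtype, ""))      // 32: delayed binding, stay safe
    }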
A: And that's another very hard problem that I don't want to have to deal with, personally, yeah. And... but that's... I don't think that's even the biggest problem with that other strategy. I mean, it's just...

F: Yeah, so, like, OpenShift, for example, can deploy a webhook or an admission plugin which could... you know, like, if no storage class is specified in your PVC, then it will set one or the other storage class.

F: ...based on that criteria, and pretty much you get backward-compatible behavior, and similarly with the Azure File driver also. So, but if a driver requires complex decision making, it's kind of...
F: But that's what I was talking about, like, briefly. Yeah, admission hooks will be slow, and the API calls themselves will time out, and generally API calls have some lower timeout; I guess we had issues where... But within the grand scheme of things, like, if the provisioning is delayed and your driver is making complex decisions during provisioning, that's kind of factored into the pod startup time; the pod cannot get scheduled until the volume is... until the PVC is...

A: It's an API change, and, yeah, it would be multiple API changes. We'd have to change the CSI spec in at least two places. We'd have to change the Kubernetes APIs in at least two places.
A: We'd have to change the CSIDriver object, or at least the way kubelet interacts with that object, in at least two places. Like, yeah, there's a... but they're all little bite-sized pieces of work, right? There's a lot of little bits of work to do, and a lot of people need to nod their heads, but no one needs to, like, rewrite their driver from scratch, which is where it feels like we would be otherwise.

F: Yeah, I mean, I think, yeah, Sandeep came to one of the SIG Storage calls also, and we recommended that they split the driver long back, like a year back or so, but this is inertia. Okay.
F: Okay, so I think I spoke with Jan a little bit about allowing a way to override or change... I want, like, yeah... we have... let's think through both decisions, whether we can solve this problem or we don't need to solve this problem; that's something we can talk about in the next call. But for this: allowing a way to change the driver name in existing PVs.

F: How would we do it? Like, currently the driver name in the PV source is immutable once set, and...
A: Are you sure? Because, I mean, I can imagine bad things that would flow from that, right? If anyone is currently depending on the field not changing, they could be caching that value forever and never re-reading it, and then, if you say, well, now it can change, anyone who's depending on the value not changing could all of a sudden break.

A: Now, maybe that's acceptable according to the API rules, but it feels like you could have regressions pop up if anyone was depending on the field being immutable.

F: No, yeah, I hear you, but... obviously this isn't going to happen in a vacuum, but purely from, like, an API convention point of view... I was speaking from API conventions, like, okay.
A: Then we would just have to go through the exercise of figuring out if anybody would break as a result of that, to convince ourselves it was a safe change to make, yeah. I don't know, I mean, adding a new field is much more straightforward, right, because you can easily describe the behavior when nobody uses it or opts out of it: you say, well, you just get the current behavior, because the field is blank.

F: Okay, yeah. How...

F: I think one of the problems with, like, changing the driver name was, like, you have to pretty much drain every node, like they've drained the nodes and everything, before...
D: ...you can do that, so, yep, and because... oh.

A: Oh, I will say, for the... you know, if you wanted to have a hack to change the CSI driver name, one thing you can do today is just delete the PV and create a new one with different values and rebind it, right? Like that.

A: That actually works, hey, because we looked at that when we were trying to figure out how to do migrations, you know, from pre-CSI to CSI. We would play games where you can, I think, just delete the PV and create a new PV with exactly the right values and re-bind it back to your PVC, and it just works.
C: I didn't think you needed to restart the pod in that case; I think we actually got some prototype working where you don't.

A: And the old driver won't necessarily be there. I mean, in the case of going from pre-CSI to CSI, it was actually Kubernetes doing the detaching, so maybe that was less of an issue, but, yeah, I mean, you just have this issue where, like, the driver that attached it also has to be able to detach it, unless the new driver, like, knows everything that the old driver knew, and that's a very hard guarantee to make.
F: I was also thinking, like, without an API change it could be... I mean, obviously it's an ugly hack... it could be supported by overwriting the name via an annotation or something similar. Like, if you remember, early stages of CSI driver migration had a bunch of information stored in an annotation, actually; eventually, I think, we got rid of it, but yeah.

F: I think, yeah, if you enabled the migration, then it used to store... I forget just what it stored. There were still some annotations involved in the decision making, actually, but we no longer store the whole, like, translated spec, which we used to.
F: I think it was the PV controller which was doing that. I don't remember which controller was doing it, but someone was doing it.
A: Well, I need to do a time check, because we're at the top of the hour. Yeah, it sounds like there's still more to talk about, so we can keep the feature meetings going. I guess I just really quickly want to ask Michelle: what is your feeling about the proposal that we've been talking about the last two weeks? Are you still strongly against it, or have you warmed to it?

A: Right, okay, okay. Well then, we can maybe, between now and next week, think about additional approaches to address the node limit for volumes in the delayed binding case, because that's hard. All right, well, thank you, Shang. I think we'd better end the recording.