Description
Kubernetes Storage Special-Interest-Group (SIG) Modify Volume Discussion - 02 June 2023
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: Bit of context... aren't exactly.

C: But the problem is that the actual implementation is way more nuanced and complicated than a single number can really express, or even two or three numbers. In the interest of vendor adoption, and of allowing lots of experimentation and innovation, I think the idea was that opaque params let you still get the basic feature, with a slightly less nice, less documentable user interface, but you can get all kinds of capabilities and nuance, and we can work out the details of exactly how QoS should work later.
C: Should you have separate IOPS for reads and writes? Should you have floors? Should you have ceilings? Should you have targets? Should you have bands? All the different kinds of design decisions that we would ideally like to figure out, we can postpone and just let the vendors figure them out through an opaque interface, and then maybe someday in the future we'll actually converge and say "this is the one way to do it," and do something different at that time.
D: Yeah, I would plus-one what Ben just said. At least on the AWS side, I know for a fact that a lot of our customers have asked for this kind of capability in the past, but it's not just limited to the QoS parameters.

D: They have asked for similar support for things that may be related to potentially modifying characteristics of a volume, and at the time, when we were thinking about this internally, we were trying to make sure that we did not limit ourselves in any way by picking options which would exclude certain capabilities in the future. So using the opaque-parameters-based approach seemed like the right step forward in that respect.
D: I think, if I remember right, the original KEP did not advocate for that; we talked about it over the last couple of meetings, and that's how we came to the conclusion that this had benefits. The original KEP thought of it in terms of just adding support for the IOPS parameters, and that approach was going to be limiting, and that's why we feel that going with opaque parameters throughout makes sense.
B: First, about Azure: I saw that you have combined read/write IOPS and read/write throughput, but you also have separate read IOPS and read throughput, and separate write IOPS and write throughput. If we don't go down the route of using opaque parameters, how do you think we can support those?
A: Well, in my mind, I thought it was most important to define what the API and the user experience should be first, and then it would be up to the storage providers to see how they can map their systems to them. I don't think it's a good approach to say "this is what everybody can support, let's find a sort of lowest common denominator."

A: I think there are some key values that, if we could agree on them, would provide a better user experience. I'm not convinced that we can agree; I just thought the next step was to have that conversation about whether we can find an agreement. And I apologize: I'm coming at this now because I hadn't realized that this was the path that was taken, and I probably missed a few meeting invites. So, my apologies, I'm just commenting on the proposal as it stands now.
A: So I think, as storage providers, we've always been discouraged from adding provider-specific configuration to what should be a portable resource. The PVC is the portable resource. We've always been discouraged from adding annotations and things like that to the PVC to, say, configure per-volume replication policies or something like that. Now, I don't see these opaque parameters in classes as being any different from that: they're very provider-specific, and an opaque name like "gold" or "silver" doesn't really mean anything to a user.
C: That last part is true: the meaning of any given performance class wouldn't be portable across clouds. But I believe it's pretty evident from the different QoS implementations that we see in the market anyway that there isn't a single definition that would be portable, and that's part of the problem. If you say, "look, it's going to be IOPS," then...

C: Well, you'd better get everyone to agree on what an IOP is, in a way where 4,000 on AWS means 4,000 on Azure and 4,000 on NetApp, and that's very, very hard. We all agree on what a byte is, and what a byte per second is, but an IOP is just a little bit harder. And when you add in floors and ceilings and bands and targets, and the possibility that some implementations will tie performance to capacity...

C: It seems unlikely that the alternative would actually give the proposed benefit of portability, and so it becomes like storage classes: storage classes are also not portable, but they're a standard part of the API, because they're a sort of necessary evil to paper over these inevitable differences.
E: If I can add something: from what I've seen, companies that do multi-environment deployments, and have to manage infrastructure across multiple different environments, will typically have a platform team that is responsible for providing that layer to the app teams. So I see a parallel there: the platform team is responsible for defining some resources that can be portable across the platforms.
D: Yeah, I would agree with that. At least in my mind, a lot of these parameters are going to be attributes of volumes that are going to be difficult to standardize across different vendors in a consistent way.

D: It will be difficult to take what is implemented by one provider and make it completely consistent across all providers, because of the differences in the implementations. What Michelle is talking about in terms of having a platform team, I think, is pretty standard. I also don't see how trying to standardize on this is going to get rid of that difference in the provider platforms completely.

D: I think for that to happen we would need a much broader scope of standardization: maybe not quite at the level of a byte, but we would need very clear definitions of what IOPS means, what throughput means, and what other parameters mean across all the different platforms, and I don't see that happening anytime soon.
D: The other thing that I also want to emphasize here is that we are not talking only about performance characteristics of volumes; there are other things that are of interest. As an example, on the AWS side there is interest in eventually exposing encryption settings at some point in the future, and that would be something relatively easy to do with the current approach.

D: But if we decide not to go with the opaque-parameter-based approach, then we now have to find a completely independent mechanism for doing essentially exactly the same thing, and that, in my mind, just does not seem like good design.
A: I can definitely see the advantage of having a way of configuring SP-specific configuration per volume rather than in a class-based way; I'm not doubting that. I guess what I was thinking is that IOPS and throughput are something where we could put a stake in the ground and say "this is what it means," because those two are well known enough: the concepts, at least, are industry standards, and they mean a lot to users.

A: So if we could put a stake in the ground and say, well... there's no such thing as a typical workload, but if you were to say it was, for example, a 50/50 random read/write workload, this is the expected IOPS that the system could provide. If a system doesn't have those knobs, it might have to make some adjustments in order to guarantee those IOPS. I'm not saying that it has to be a one-to-one mapping to storage providers; it's just that...
C: I think that, even though yes, we can all agree that a throughput number is sort of indisputable, that assumption of a 50/50 random read/write workload is going to be violated all of the time, with unfortunate results for those who have sequential-read and sequential-write style workloads.

C: I mean, we can agree that a megabyte per second is a megabyte per second; I don't think that's disputable. But whether you want to limit reads and writes as one number, or as two separate numbers, or do something more complicated: I think that is up for dispute.
A: Yeah, so there is a bit of precedent, like the CPU request, which is a number of shares: that's not going to give the same performance depending on the node types or the CPU types that you get. So it's not like these requests are guaranteed to be exactly the same on every platform, and I think that applies here as well.
A: But I guess what I'm saying is: how guaranteed for every workload do we actually need to be? Or can this just be a guideline that says, on whatever platform you're on, you make your requests and the SP should honor them? There are also some precedents: if you look at how huge pages are defined, it's basically a prefix. So there are options for saying, well, we could have a generic IOPS...
D: Well, but I feel like the downsides of treating these two parameters as special far outweigh any upside we might get from standardizing on just these two parameters. I don't want us to go and build a different solution for everything else and just treat these two parameters as special; I don't think that's the right approach.
D: I'm saying that there is a lot of interest on the part of customers in seeing us provide a capability which can be used to modify volume attributes, and I'm saying that, instead of building two or three different mechanisms for doing that, we should build one mechanism which customers can rely on in a consistent way to make that happen.
C: I share that intuition; I think it would be good to spell out a few use cases to make the point stronger. Earlier you mentioned encryption, and I was trying to respond, but I was on mute. I wanted to understand, by way of an example: why would you want mutable opaque parameters to enable encryption? Can you flesh out that use case?
D: He can probably provide more details as well, but my understanding is that one possible scenario where something like that might come in is if you want the contents of a volume to be encrypted, and maybe you want a way to specify the key that you use to encrypt that volume, or to modify that key in some way. I think that would be one use case.
C: Yes, but you couldn't use volume performance classes as proposed for that purpose, because there you just have these static classes that every volume has to share. You're describing a per-volume parameter that would be stored somewhere else in Kubernetes. The CSI layer could be the same, I agree with that, but you couldn't reuse volume performance classes to achieve that if you want a separate value per volume, for example.
D: I mean, it is theoretically possible if you were modifying the encryption settings for a bunch of volumes that you are operating on at the same time. So a scenario might be: the volume is unencrypted, and you want to go ahead and encrypt it, and you want to set some certain attribute which would allow you to make that operation possible on the back end.

D: That may not be the best example that exists for this scenario, but I think there will be such examples coming up in the future, and I think there is value in trying to build a single solution that addresses all of those use cases.
D: I think that's the main concern, right? As an example: we started the work on this sometime about a year back, internally, and then we filed a KEP sometime around August of last year. We are now in June and we are still not at a point where we have agreed on a solution, and in the meanwhile we keep getting requests from customers saying, "hey, this is really important to us; can you do something about this?"
D: The solution that we built as an interim measure, based on custom annotations: we wouldn't have had to build it if the entire process hadn't taken this long. I'm not saying that it's anybody's fault, because it's not; I think these things invariably take longer than expected, because there are a lot of different parties involved and this is open source. But at the same time, I think if we are going to build something, we should try to be a little bit...
G: I think the problem is that building something new, like ModifyVolume, was much more tricky to get right, and we went through several iterations to get where we are. But I'm optimistic that incrementally making changes is not as hard as designing a feature from scratch.
G: What I was referring to in this case is: if we have a ModifyVolume feature that's there in the CSI spec, and it has fields, then in the future, if you need to add new fields, it's relatively less work in the CSI spec and in Kubernetes to add those new fields. A good example would be secrets for the various CSI requests.
C: We have opaque parameters; they're just not mutable. And I think they're heavily leveraged for a lot of these kinds of weird proprietary features that it doesn't make sense to spend a lot of time trying to standardize. So, yeah, the pain point has been: you can set QoS today using opaque parameters at creation time; it's just that you can't change it later, and that's what irritates people.
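[Editor's sketch of the status quo being described: QoS values can already be passed as opaque StorageClass parameters at provisioning time. The example below uses the AWS EBS CSI driver's gp3 parameters purely as one vendor's illustration; any driver with its own parameter vocabulary works the same way.]

```yaml
# Opaque, provider-specific QoS parameters, set once at creation time
# via a StorageClass (AWS EBS CSI driver shown as an illustration).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "4000"       # opaque to Kubernetes; interpreted only by the driver
  throughput: "250"  # MiB/s; same caveat
```

Volumes provisioned from such a class get these values exactly once, at CreateVolume time; there is no built-in way to modify them afterwards, which is the pain point under discussion.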
G: But the various features we talked about, multi-attach and encryption, those actually scare me more, because those things are different from the performance/QoS class that we're talking about. Those features have implications for how the external-attacher and kube-controller-manager treat volumes, and so you cannot just add those parameters in a bag of maps.
C: But I think QoS is the perfect example of a situation where people are already setting these things through opaque parameters in the storage class, and then, when they want to change them, they're stuck, and we can give them this mechanism. Remember, there are two different designs: there's the CSI-layer design, where the current proposal is just this bag of strings, ModifyVolume with opaque parameters; but at the Kubernetes layer it is formalized into a volume performance class.
C: That is a non-namespaced object, defined by the administrator, that you choose from. So it's like a menu of options: you can pick one option, and then you can change your selection, but you can't change what the menu options mean. I think that's a good amount of guardrails around the feature at the Kubernetes layer, because it lets you do things like quotas, control who can do what, and do the right kind of RBAC. If we were to expand this to do something around encryption, and I struggle to think about what that would look like, you might end up reusing the CSI-layer ModifyVolume feature, so no CSI change, but you would need to do something in the Kubernetes layer to have some per-volume state that says what the old value was and what the new value is, and you'd have to have a reconciler that would reconcile them to get the desired result.
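[Editor's sketch of the "menu of options" object being described. The field names below follow the API that this discussion eventually produced, VolumeAttributesClass; at the time of the meeting the shape was still being designed, so treat this purely as an illustration.]

```yaml
# Admin-defined, non-namespaced class carrying opaque,
# driver-interpreted parameters (illustrative, not a final API).
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: silver            # the "menu option" a PVC can point at
driverName: ebs.csi.aws.com
parameters:               # opaque to Kubernetes, meaningful to the driver
  iops: "3000"
  throughput: "125"
```

Users pick a class by name; only the administrator can define what "silver" means, which is the guardrail being argued for.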
D: I'll say it again: let's not use the encryption example as representative of what else this might get used for, because that was just something that we have talked about internally.

D: As an item that's possible in the future; we don't have any concrete plans around that at this point.
C: We're not just throwing it open, like the equivalent of taking all the annotations and passing them down. That has been proposed before, and we said no, we're not going to do that, because then all hell would break loose, and that's probably for the best. This feels like something that's tractable. We talked through the RBAC implications of performance classes.

C: We talked through the quota implications, and I think we felt comfortable that it's possible to have the right amount of control with this feature. If it only ever gets used for QoS, I think the outcome will be good, and if it gets used for QoS-adjacent things, I think it'll still be fine.
D: I would say that at this point, I don't anticipate... I mean, you cannot predict the future; you don't know how customers are going to end up using this kind of thing once it's built. But at this point, I feel like QoS-adjacent items are basically what will drive the adoption of this use case. That's my sense. But I don't think it's just those two parameters.
C: I think you'll end up seeing very rich versions of QoS definitions that are more than just IOPS and throughput. I think you'll see floors, and I think you'll see ceilings; I think you'll see bands and targets; I think you'll see separations of reads and writes, maybe even down to separate numbers for sequential and random. I think you'll see people figure out what the right axis to slice on is, and then come up with their performance classes along that axis.
C: I agree with that, and I don't think we know today. We can make a guess as an engineering team and say, "yeah, we think one IOPS number that's based on 50/50 random read/write is the thing," and we could be wildly wrong, and then we'll be stuck.
A: Can I just clarify: what I was pushing for was really just about the Kubernetes interface; implementing it as a bag of strings in CSI is fine.

A: Because that's the most flexible, I guess. For me, if we went with the class approach with the opaque strings, I'd suggest not naming it "performance class," because it's not going to be just performance; it's going to be whatever the vendor wants it to be. And secondly, those are obviously class-based, so I'd like more on the Kubernetes side... okay.
C: It will be used more generically, but at the Kubernetes layer I actually think there's more value in picking something that suggests how it should be used, because that doesn't prevent us, in the future, from making more things that combine with performance classes in terms of how they get reconciled and how they get pushed down to the CSI layer. You could imagine having performance classes for one purpose, and then encryption classes, who knows, for something else that maybe has a more crisp definition, and then the reconciler would have to take the combination of those and push them down through the CSI driver, using the ModifyVolume interface, potentially.
A: But the problem is that it's going to take a while to add that field, and in the meantime vendors will have added whatever they like in there, so it's no longer a performance class.
D: Yeah, I think that's a good point. I do agree that if we find it's getting used for other things outside of QoS, the name "performance class" is going to be confusing to a lot of people, and people will wonder why it was picked. So I'm okay with that; I like the idea of calling it something a little more generic.
A: Then we would like to put forward using annotations for that, on a per-volume basis; we would prefix them and whatever. But to me they're no less portable than the classes are, and at the moment that would be discouraged, I believe.
E: I guess I would disagree that they're no less portable, because I'm coming from the perspective of, say, someone writing a database operator. Today, a lot of these database operators do accept storage classes as input, and it's basically up to, like I said, a platform team to define the portability semantics of it. But you know...
C: Maybe the answer is: you should be the Kubernetes administrator on that cluster, and just make your own class and then use it, and you'll get what you want. But in a shared cluster, the possibility of someone saying "you know what, I'd really like 10 million IOPS today" and just modifying their PVC to get 10 million IOPS feels like something that has implications, right?
H: Have we considered making this volume performance class namespace-scoped?
C: I think the specific proposal was to do what we do for storage classes, which is make them non-namespaced, but then have the ability to quota them per namespace. So you could say: look, you can use the gold class, but you can only use so many gigabytes of gold in your namespace, and the quota would ensure that.
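[Editor's sketch of the per-namespace, per-StorageClass quota mechanism being used as the model here. The StorageClass-scoped resource names below are the real, existing ResourceQuota keys; the proposal is to give the new class the analogous treatment.]

```yaml
# Existing mechanism: quota scoped to one StorageClass in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gold-quota
  namespace: team-a
spec:
  hard:
    # Only 500Gi of "gold"-class volumes, and at most 10 such PVCs,
    # may exist in this namespace.
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "10"
```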
H: I think if you just go with namespace scope, then you can eliminate certain types of problems, because you can have a namespace for prod and dev and staging or whatever, and you can specify the same name for the performance class, and you can be sure that the pod running in the dev namespace is never going to use the performance class that is for the prod namespace.
C: My intuition there is that it's better to have just separate performance classes, and then have an admission controller that makes sure your PVCs get the right performance class depending on what namespace they're in. That seems like a solvable problem without doing per-namespace classes.
F: Yeah, it seems like this sort of duck typing leads to problems longer term. And, just going back: a big advantage of using classes is that we have an existing quota mechanism.

F: If you use annotations, then it does make adjusting a single volume a lot easier to do, but each cloud provider is going to have to do whatever custom quota management is needed to prevent that from being abused, which is sort of a burden.
F: I think something important to point out here is that AWS already has annotations, if I understand correctly; we're actually planning on doing that here at Google as well, because we have stuff that we would like to launch before this KEP is going to get out, and it sounds like you're also sort of moving in that direction. So even if we arrive at an upstream solution that does not use annotations, I think the reality is that we are going to have existing implementations that are using annotations, and coming up with a nice way to either migrate, or to have something that works together with this reality that there are IOPS annotations, is probably something we're going to want to do.
E: From just a Kubernetes perspective, I don't think the Kubernetes project should be responsible for trying to solve the custom workarounds and solutions that people have put in place. I think we should concentrate on what is the ideal experience we want to support for our end users.
D: Yeah, and I would agree with that on custom annotations. Just to give you an example: when we talked to one of our customers about the fact that we were thinking of building a custom-annotations approach, they were like, "oh cool, it's great that you're doing it, because we already have something in place that we are using for that."

D: It's at a point now where that's already something that customers have built some instrumentation around, in order to address the lack of a standard solution in this space, and so my take is that they are here to stay; we will probably be supporting them for the foreseeable future.
D: Given that we just came out with a solution that provides a custom-annotations-based approach: I still want to see a standardized solution eventually be in place, and hopefully at some point in the future we can deprecate the custom-annotations-based approach and just use the standardized solution. But I do want that solution to be broader in scope than just the QoS-related parameters.
C: I'm trying to figure out if we're coalescing around the one idea. Is it that we're generally okay with the opaque params, and we just want to change the name from "performance class" to something even more vague, and that would make everyone happy? Or does anyone still think that we have to not do that, and instead find some way of expressing this as first-class parameters where everyone just agrees on what they mean?
A: There's going to be a much better community user experience if we can, if that's achievable. Personally, I think users will tolerate a little difference between platforms; they do already for CPU requests. I think if we label the fields accordingly, then we could reach a compromise.

A: I'm not convinced that the classes as they are now fit the way that we want to have users provision IOPS in Azure, but I can definitely see that having a generic mutable map would be beneficial for other features, and we could end up using that for performance as well.
F: So the question I have is: in your annotation-based approach, how are you thinking of dealing with quotas? How do you deal with users requesting more than they actually have quota for in the back end? Is there going to be any Kubernetes interface to that, or will they just have to go to the back-end dashboard to see what the state is?
G: In this case: there's the default-storage-class idea, which makes PVCs portable. But if a user associates a PVC with this opaque thing, and you try to use it on a different platform, what will happen?
C: The CreateVolume call would fail, or you'd just have an unbound PVC sitting there with an event saying "I don't know what this performance class is." Yeah.
C: Well, so the proposal was: you could mutate them, but existing volumes that pointed to them would not get the new values. We would only reconcile an individual volume when the name of the class that it had changed. So if it was "silver" at creation time, and then you go change what "silver" means, you have to update the volume too, to get it to reconcile. So we would say: don't change "silver"; go create a "silver-2.0", and change your PVC to point to silver-2.0.
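[Editor's sketch of that workflow under the proposed model; the `volumeAttributesClassName` field name follows the API this work later produced, and the class names are illustrative.]

```yaml
# The admin publishes a new class instead of mutating "silver" in place...
apiVersion: storage.k8s.io/v1alpha1
kind: VolumeAttributesClass
metadata:
  name: silver-2.0
driverName: ebs.csi.aws.com
parameters:
  iops: "5000"    # the new meaning, under a new name
---
# ...and the user opts in per volume by re-pointing the PVC, which is
# the only change the reconciler reacts to.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  volumeAttributesClassName: silver-2.0   # was: silver
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 100Gi
```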
G: But if it's not retroactively applied, then the problem becomes: if that operator author wrote the operator with certain characteristics of the volume in mind, that would make...
C: You have all these problems with storage classes already, right? The only thing that we're changing is that now some of the values become mutable after the volume is created. So you already have this issue, really: you could be testing on a weird system that had a storage class that caused some very special behavior that doesn't exist anywhere else, and if you didn't know that, you're in trouble. So this is not a new problem; it's just that now it can happen after the volume is created.
E: I would imagine, if, say, I'm writing a database operator, I would probably have something like a minimum config and a recommended config, and then whoever is deploying that operator on a certain platform has to make sure they pass in storage classes or performance classes that meet the recommended configs.
G: Yeah, it's easy to say that when the parameters are not a map of string to string. When they are explicit IOPS and throughput, we can say "okay, this is the recommended and this is the minimum." But if the parameters are a map of string to string, then we cannot say that, because...
C: We can specifically say: look, if you do anything with your opaque parameters that breaks CSI, that's your fault, right? There are no guarantees, because we own the community, we own all the sidecars, and they're not going to change as a result of anything other than the simple reconcile that says: if the name of the volume attributes class changed, we're going to call ModifyVolume until it returns success, and then we're done. That's literally all the reconciler has to do.
C: It has to watch the name of the volume attributes class on each PVC, and have a spec class and a status class, and when they're not the same, it has to reconcile them; everything else stays exactly the way it is. And, yeah, I don't understand what multi-attach means here, because Kubernetes specifically handles multi-attach with its access modes and the attach/detach, or sorry, publish/unpublish, workflows. But I do think there are plenty of subtle behaviors that are currently being smuggled in through storage classes.
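[Editor's sketch of the spec-versus-status shape being described, on a single PVC. Field names are illustrative; in the API that eventually shipped, the observed class is reported as `status.currentVolumeAttributesClassName`.]

```yaml
# Desired vs. observed class on one PVC; the reconciler's only job is
# to call ModifyVolume until the two names converge.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  volumeAttributesClassName: silver-2.0     # desired ("spec class")
status:
  currentVolumeAttributesClassName: silver  # observed ("status class")
  phase: Bound
```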
C: But, I mean, quotas are not required, right? By default there are no quotas, so I think we would just, in the documentation for this feature, specifically suggest that this feature is used for stuff like performance, and that you might want to quota it. But out of the box your quotas are infinite, so this is not an issue. No.
B: For this feature, we were just thinking that you could restrict how many PVCs are on an attributes class.
C: All right, I have a hard stop in like two minutes. (Yes, same here.) Do we need a follow-up meeting? I guess that's the important question. Or do people feel comfortable enough with this path forward that we can update the spec, get everyone to read it, review it, approve it, and move on?
F: I think we at Google are supportive of this. All that said, as I mentioned, I think we are going to have annotations as well, just because that's a pragmatic, shorter-term path.
B: And also, a little bit of background: the first-class-parameters approach actually works very well with our PD product, but because we see some other storage vendors' use cases, we lean toward this mutable-parameters solution.
C: I will say: I think we could come to an agreement on first-class parameters, but I think the end result would be that way fewer vendors would implement it in the long run, and we would inevitably end up changing it at some point. I think that's what would happen if we went down that path.
G: ...rather than having a bucket, because I can see how it could be misused, or, not misused, "misused" is too strong a word, but how it can cause problems. I would also like to know how it would work retroactively: a user mutates the parameters and then, like Ben said, okay, it won't apply to existing volumes. Yeah.
C: Right, all right. We may need another follow-up meeting if people are still on the fence and not thrilled, but I have to drop. I'm sorry. Yeah.