Description
Kubernetes Storage Special Interest Group (SIG) CSI Volume QoS Discussion - 06 December 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Matt Cary (Google)
B
Cool, there's the usual Zoom weirdness about pausing in order to get the microphone to work, so please let me know if the screen sharing has gotten out of date. Okay, so welcome everybody, thanks for joining. I'm excited to make some progress on QoS, as this has been sort of a long-standing feature that we have needed to have. First I figured I would make sure we have kind of consensus on what the problem is.
B
We were trying to solve. So, with that in mind: are these the critical use cases I've written down here? Does this seem to cover what everyone is interested in, or are there other features?
D
Well, I guess there's two ways to look at it. One, we can build something broad enough to support all those features. Or, if we're going to say no, we're solving QoS problems specifically, then I guess if the parameters are not opaque to Kubernetes, that's one way to do it — but I think that makes the problem much harder to solve: defining a QoS language that is general enough.
E
I guess the question is: do we see value in making QoS a first-class resource? Would users, similarly to how they can directly specify CPU and memory limits, want to do something like that for QoS on storage?
F
Well, it isn't really the question of whether they want to — we know that all these requests are coming from people who want to be able to change it. I think the dividing line comes down to whether you want end users to be able to change it through some end-user-visible API, or whether you want it to be proprietary, slash something only an admin can do — you know, requires elevated permissions.
F
If it's the former, then you kind of have to make it a first-class feature; I think that's where we came down. If it's something that we want to be part of the Kubernetes API, so end users can just use it, then we've got to find a way to make it a first-class thing, standardize it, and get everyone to use that.
F
If it's something that we're okay with being proprietary, or an admin-only function, then you can get away with some of these more squishy, flexible solutions, because, you know, admins have to deal with more complicated API objects anyways.
G
You just mentioned one keyword for me — this is Patrick. I've been involved in a discussion where someone tried to guarantee a certain IOPS rate even when other pods misbehave and perhaps try to consume more resources than they should, and it's entirely unclear to me how this could be enforced on a node — whether it's even possible to implement something like that.
G
There's the traffic shaping, or there's the cgroup controller that has rate limits you can apply, but then you need to basically throttle all processes in advance to a certain limit, and even if you have spare I/O bandwidth you can't give it to a process, because it might then consume it when it's not supposed to consume it anymore. Okay, but the underlying assumption is that the storage system has a certain number of IOPS, that it can assign them to certain clients, and that it will do enforcement. I guess that's — yeah.
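For context on the cgroup rate limits mentioned here: cgroup v2's io controller applies static per-device ceilings through its `io.max` file. A minimal sketch of what setting such a limit involves (the cgroup directory and device major:minor numbers below are illustrative):

```python
# Sketch of applying a static cgroup v2 I/O limit; the cgroup path and the
# device major:minor numbers are illustrative.
def format_io_max(major: int, minor: int, riops: int, wiops: int) -> str:
    """Build one io.max entry limiting read/write IOPS for a block device."""
    return f"{major}:{minor} riops={riops} wiops={wiops}"

def apply_limit(cgroup_dir: str, entry: str) -> None:
    # Every process in the cgroup is throttled to this ceiling, even when
    # spare bandwidth exists -- the static-limit drawback raised above.
    with open(f"{cgroup_dir}/io.max", "w") as f:
        f.write(entry + "\n")

entry = format_io_max(259, 0, riops=8000, wiops=8000)
```

This is the static, node-local throttling being contrasted with storage-side enforcement: the kernel caps the cgroup regardless of whether the storage system actually has spare IOPS to hand out.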
F
These do exist — I used to work for a company that had a storage system that had guaranteed IOPS, and if for whatever reason there were extra IOPS available, yeah, other processes could take them. But a lot of effort was put into making sure that, at least from the storage system's perspective, everyone was promised a certain amount of resources, and you knew the size of the whole pie.
B
I mean, I think that's going to be pretty hopeless, because so much of this stuff is going to be implementation-specific. If we're talking about things that have a local disk, we have a whole set of implementation-specific things. If we're thinking about network-attached storage, you can do a lot more, where you have network throttling on the volume side, but you still have problems — a VM may have a maximum network throughput.
B
That's going to kick in no matter how you define things on your volume. Anyway, all this stuff seems so provider-specific that I feel like, if we try to come up with something general that fits everyone, we are never going to make progress. So instead it seems more useful to define a mechanism for some kind of abstract resources that are going to be specific to, you know, particular volume types, and expose an API that way.
B
Do we want to have an API that allows users to actually choose those specific parameters? So, like, to choose, you know, 100K, where 100K is interpreted as something that's specific to the volume type?
F
What I'm envisioning is an administrator-type user would set up the definitions of the QoS classes. So an admin would come in and say gold means 8,000 IOPS and silver means 800 IOPS, or whatever made sense in that particular storage system. They would have to sort of read the manual and figure out what knobs they have at the storage system layer, put those into some QoS policy object, then present those to the end users, and the end users choose from the menu of available options.
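The admin-curated menu described here could look roughly like the following sketch; the class names, numbers, and field names are purely illustrative, not an existing Kubernetes API:

```python
# Hypothetical admin-defined QoS classes; names and values are examples only.
QOS_CLASSES = {
    "gold":   {"iops": 8000, "throughput_mib": 500},
    "silver": {"iops": 800,  "throughput_mib": 50},
}

def resolve_class(name: str) -> dict:
    """End users pick from the admin's menu; unknown class names are rejected."""
    if name not in QOS_CLASSES:
        raise ValueError(f"unknown QoS class: {name!r}")
    return QOS_CLASSES[name]
```

The point of the indirection is that only the admin needs to know the storage system's actual knobs; end users only ever see the class names.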
B
Right, so the particular problem that I'm interested in solving is a volume that has fine-grained tuning of max throughput, and a customer who wants to match the throughput of their volume to their particular workload — you know, in order to optimize cost, in order to get the performance they need but not overpay for provisioned I/O. And in that case I don't know if a discrete set of I/O classes will be enough; like, I can see people...
F
Yeah, I mean, I think it's important to realize there are a lot of use cases where the administrator and the end user are different people, but there's also a huge number of use cases where the admin and the end user are the same person, right? You have your own cluster, you're running stuff on it, and you're trying to optimize your costs — so in that case the administrator and the end user are the same person, yeah.
F
You could have a special storage class, or a special QoS class, just for this purpose, and you could twiddle it — you know, try to get it dialed in — and, because you're in control, you just make sure that nothing uses that particular QoS class except for the one application where you care about it.
E
I think another use case that might be a little difficult with the class concept is if you want to do something like autoscaling — like, say you want to specify a request and a limit, and then you want to be able to burst occasionally. I think it's easier to do that if you actually define things in numbers, and then you can basically go between a min and a max number, but it would be difficult to do that if you had to, like, go...
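The request/limit idea sketched here mirrors CPU and memory semantics. As a toy illustration (the function and its inputs are hypothetical, not a proposed API), an autoscaler would clamp its target between the two declared numbers:

```python
def target_iops(observed_iops: float, request: int, limit: int) -> int:
    """Clamp a desired IOPS value between the declared request (floor) and
    limit (ceiling), allowing occasional bursts up to the limit."""
    return max(request, min(limit, round(observed_iops)))
```

For example, with request=1000 and limit=8000, observed demand above 8000 is capped at 8000, while idle periods never drop the volume below its requested 1000 — numeric ranges like this are what a fixed class menu makes awkward.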
F
So you make the volume 10 times as large and you sort of get 10 times as much performance at the same tier. Now, not all of them work that way, but a lot of them do, and you have to be able to sort of deal in either language.
C
So, speaking for Microsoft, we would prefer not to use classes and to allow tuning to be set per volume. We'd like to see it done in the same way as capacity, so people can request a certain amount of IOPS and it's up to the driver whether or not they can satisfy that request.
B
Yes indeed, great — okay, cool. So then you can actually maybe answer questions here, because the impression I have is that it could be used to define a resource: you can have a class which just describes some, you know, implementation of a resource, and then people can use that as a parameter in their pods, in this case. Is that true? Is that the intention, or have I got it wrong?
G
So if you have pods — for example, let's suppose we were to use that API for I/O bandwidth — what happens with a pod that hasn't used that API and is still trying to use a volume? Does that mean that it has zero I/O bandwidth, because it didn't allocate any? It's that kind of problem that you get into when you try to do these more squishy or loosely defined resources through an API that ultimately wants to have fixed guarantees about what is available.
B
Right. So then, if we had something where a volume declares that it needs to be used with, you know, a resource — then, if there is a pod that does not declare a resource use, like, if it has a PVC attached to the volume, it wouldn't necessarily get any I/O at all. I guess that'd be the kind of draconian way to enforce that. Would that be useful, or does that just...
B
Because, I guess, the second thing I've thought of is that there's actually a couple of different ways to go about IOPS as well. There are some that are going to be per volume — like a cloud provider implementation. On the other hand, there's IOPS provisioning which is going to be done per pod — like, for example, if you're going to do a cgroups-based thing — and probably those two approaches are always going to be somewhat separate.
F
Well, you could have two different APIs with overlapping purposes. I mean, the storage device is the only thing that knows how big its pie is and how much it can hand out, but it can't enforce anything on the node, right? So even if it promised you a thousand IOPS — if your network is slow and you don't get a thousand IOPS, that's too bad, or anything else that can cause problems. But you could still define this, you know, from the storage perspective.
B
Yeah, so I guess, if we're thinking of a node-based, cgroup-based thing: do we think the resource claim API just works for that?
G
What happens if we have a process — how do you enforce that, then? You basically would need dynamic readjustment of I/O quotas for processes when something changes on the node, so that pods running that have done the due diligence and reserved a certain quota actually get it.
C
Maybe just a quick question on the scope: are you suggesting that we would provide a mechanism within Kubernetes — with cgroups, for example — to actually do the rate limiting, the QoS? Or are we just passing the request to the storage driver, the storage system, and the storage system would handle that?
B
Yeah, I mean, I guess I feel like, if there's a cgroup-type approach, there's going to be, like, no specific stuff — you're going to have to provision a special kind of node that has some kind of resource manager, or something that has this special cgroup stuff. And so the semantics could be that, like...
B
If you schedule a pod on this node and you haven't made a request for I/O, then, yeah, there is some node-defined default class or a best-effort class. But the point is that we don't necessarily need to define that at the...
B
...that level. Like, this could be something that's specific to the particular resource manager. That's, you know, just kind of in the same way that, if you don't request a GPU, you don't get one; but in any case, whether or not you have a GPU is very specific to the node configuration that you have in your cluster.
B
So anyway, I guess the thing I'm proposing is: it sounds like, if we need to do QoS at a node or pod level, the resource claim API might be a good place to start — which means that now we can just restrict ourselves to the problem of the per-volume parameters.
B
...mentioned that. So basically you're proposing that you could use a CRD to test this out, instead of adding new fields to a PVC?
A
Yeah — no, I was thinking that we could do something like, as a project, define a QoS resource type. And, as we are assuming here, most of these QoS parameters can be applied via control-plane RPC calls, so we could define that CRD — it would have the claim ref field — and it would apply it. See how it works out, and then we can take the next step, like putting it in the PVC.
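The CRD experiment described here might look something like the following manifest shape; the API group, kind, and every field name are hypothetical placeholders for whatever the project would actually define:

```python
# Purely illustrative shape for a CRD-based QoS object with a claim ref;
# the group, kind, and field names are hypothetical, not an existing API.
volume_qos = {
    "apiVersion": "qos.storage.example.com/v1alpha1",
    "kind": "VolumeQoS",
    "metadata": {"name": "db-volume-qos"},
    "spec": {
        "pvcRef": {"name": "db-data"},   # the claim this policy applies to
        "parameters": {                  # opaque, driver-interpreted values
            "iops": "8000",
            "throughputMiB": "500",
        },
    },
}
```

Keeping the parameters opaque strings, as StorageClass parameters already are, is what lets a prototype like this defer the "common QoS language" question entirely to the driver.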
B
So, just a question for Patrick: if we allowed a PVC to have a resource claim attached to it, does that seem totally crazy, or would that be a way to kind of have a general mechanism?
B
So you would have a resource claim ref in a PVC, which points to a resource that would have some — I mean, I guess this is sort of going towards the hash map thing, because this is kind of getting us to a provider-specific definition of IOPS.
G
I think it would get complicated, because then all the code that currently looks at a pod and needs to determine whether it uses a certain resource claim would need to check both a direct reference in the pod spec's resource claims array, and it would have to look up the PVC and see whether there's any reference there.
G
I suppose it would be more user-friendly that way, but it would also be more complicated for Kubernetes. I think I would prefer to just tell users: this PVC here — if you want to use it in your pod, you have to have a resource claim entry in your pod spec for it.
B
Right — well, I guess the way I'm thinking of, like, specific implementations of this is that, if a particular volume has provisioned IOPS associated with it, that's actually independent of who is attaching to it. So I don't know if that adds — if this is — so, yeah.
B
So the implementation I have is that, when you provision a volume, it has a certain throughput, say, and it doesn't care about how many machines it is attached to — whether it even supports multi-attach, or is just attached to a single machine — and it doesn't care about how many processes use that device.
B
You know, all of the throttling is at the volume level, and all that seems perfectly well defined and would not require any decisions to be made, like, at the time a pod is scheduled, or whatever.
G
Yeah, but what does the allocation then mean? What pie are you splitting up when you create the volume? Is it the IOPS rate supported by the storage system that provides the volume? — Yes.
A
Does it have any scheduling concerns? A resource claim seems, like, embedded within scheduling — like whether it will be available. This IOPS throughput doesn't seem like something it controls.
G
If we attach it to the PVC, it pretty much implies that this IOPS rate is available across the entire cluster, because there's no correlation with the IOPS rate for this volume on this particular node — it's really just per volume, or per storage system — and it then has to be provided on any node that this volume might end up being used on.
G
No, I think it could be done. The current design allows for things to auto-reserve a resource claim for something that is not a pod, and we haven't really thought about what those other users of a resource claim could be — but making a PVC the user... or a PV. Well, that's actually one of the questions that I have in that context: do we attach the resource claim to the PVC or to the PV?
G
And how do we know that the allocation is not needed anymore? It gets a bit harder when we talk about dynamic provisioning of a volume, because we do have these two objects. But as well — I guess, because it's provided by the user (the user creates the PVC and they create a resource claim for the corresponding custom parameters, attributes like IOPS), it pretty much has to be the PVC, I guess. So, I mean...
G
It would be possible to say this resource claim must be allocated, and allocation means that a certain IOPS rate has been set aside for the PVC, and we could record the PVC as the user in the resource claim — that wouldn't be terribly bad. The question is just: who enforces that, who makes this reservation, and who undoes it when the PVC gets deleted? A custom controller would need to take care of that, because right now core Kubernetes only knows about pods trying to use a resource claim — that's the part that's handled.
F
Yeah, I'm not attached to the idea of resource claims in particular as the mechanism — I mean, what API object we use is a separate concern — but whatever it is, I think it needs to be on the PVC, so that the end user can say: this is what I want. The enforcement has to be done by the CSI driver, because it's the thing that knows how to tell the storage, reserve this many IOPS. And then you need to think about any kind of reconciling when you're changing it, right?
F
If you start off with one and you want to change it to another, you have to think about what that reconcile loop looks like in terms of desired state, current state, changing it, and knowing when you're done changing it. You might end up on both the PVC and the PV, just so that you can write a reconciler that gets to the correct result — and I think that's true whether you try to use resource claims, you know, for volumes instead of pods, or whether you use some other object.
F
...that's specific to QoS that we invent. I will just say that PVCs are scheduled — just not scheduled by Kubernetes, right? The CSI driver internally frequently has a scheduling problem to solve — where am I going to put this volume? — and it frequently uses information about the storage class, and any QoS requests in the storage class, to make decisions about where to put the volume. So this would just be an extension to that process that's already happening.
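The desired-state/current-state reconcile loop described above can be sketched as a toy controller pass; the object fields and the driver callback are illustrative, not a real CSI or Kubernetes API:

```python
# Toy reconcile pass for a per-volume QoS setting; fields are illustrative.
def reconcile(volume: dict, set_qos) -> bool:
    """Drive status['appliedIops'] toward spec['iops']; True once converged."""
    desired = volume["spec"]["iops"]
    if volume["status"].get("appliedIops") == desired:
        return True                      # current state matches desired state
    set_qos(volume["name"], desired)     # ask the (hypothetical) driver to apply it
    volume["status"]["appliedIops"] = desired
    return False                         # re-queue and confirm on the next pass

vol = {"name": "db-data", "spec": {"iops": 8000}, "status": {}}
```

Recording the applied value in status is what lets the reconciler know when it is done changing things, which is the point being made about needing both desired and current state.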
B
Yeah, so just to put out a concrete scenario: suppose you have a volume which has a provisioned amount of throughput, and, you know, your nodes have a maximum network bandwidth.
B
So here you actually know that, if you have several volumes with high throughput, you aren't going to be able to fit those on the same node. If we had a resource claim attached to a PVC, could we even solve that problem, or are the decisions made at the wrong time in scheduling?
B
Yeah, and then you have a bin-packing thing: if you have, I don't know, four PVCs, any three of which could fit on one node, and you have a bunch of pods being scheduled, how do you make sure that you satisfy those constraints? I mean, I don't even know if the scheduler could deal with that now, right? Because when it chooses the node, it either assumes the PVC is attached to a certain node, or it's free to wait for the first consumer.
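The scenario above — several provisioned-throughput volumes competing for one node's network bandwidth — reduces to a feasibility check like this toy sketch (the numbers and field meanings are illustrative):

```python
# Toy feasibility check for the bin-packing concern: can these volumes share
# a node without their combined provisioned throughput exceeding its maximum
# network bandwidth? All numbers are illustrative.
def fits_on_node(volume_throughputs_mib: list, node_bandwidth_mib: int) -> bool:
    return sum(volume_throughputs_mib) <= node_bandwidth_mib
```

With a 1000 MiB/s node and four 300 MiB/s volumes, any three fit but all four do not — exactly the constraint the scheduler would need to be aware of to place pods correctly.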
F
Well, I was going to say it's worse than that, because all the volume logic in Kubernetes is about attaching to nodes, and then a kind of arbitrary number of pods on the node sharing the volume. Unless...
B
Just in the customer conversations I've had, that is something that they could understand: if you're defining I/O for a volume and they have multiple pods using the same volume, you know, it's well understood that the throughput could be shared among those pods in, like, a not clearly defined way. I think that's okay.
F
So, I mean, I think there's two separate problems. There's, how do you manage the promises that the storage system makes to Kubernetes — and that's what I think of as IOPS QoS — and then there's the separate question of, how do you make sure that the rest of the system has enough bandwidth to take advantage of the IOPS the storage can provide? You can treat that as a separate problem; we don't have to tackle that, right?
B
Yeah — and so then, if we're going to do it from the storage side, do we really have anything better than an opaque hash map of parameters? Like, it seems like we haven't really decided.
B
Yeah, sure, I could go ahead and add what we have here at Google that is public to the doc, and I definitely invite everyone else to contribute as well.
H
But I'd like to share what we did. There was a project which, you know, had pretty much similar requirements, but, you know, dealt with QoS. Basically, storage providers were reporting their own QoS to vCenter, and people could craft storage policies based on these QoS. And, sorry — you know, this QoS could be anything. Here you're calling out IOPS specifically, but it could be anything, you know, vendor-specific, and people...
H
...could, you know, create a policy and associate a disk with this specific policy, and there was an option to look at compliance. The storage providers themselves would report compliance, saying whether they had satisfied the specific QoS. So that was something we did, and it's been around a long time — I just wanted to share it.
H
Yeah, but conceptually it's very, very similar. There are n number of storage providers, each wanting to, you know, report different capabilities, and the capabilities also are reported in terms of ranges, etc. There were ten ways to report the capabilities as such.
H
...constraints about — I think the initial idea was to have a fixed set, something like IOPS, you know, that everybody had to report. I don't remember the reasoning, but I think the vendors did not want to report a common QoS; they wanted to report something slightly different — their own QoS.
J
Can you also change the policy itself, right? Right — okay, yeah, so maybe there's a difference.
H
It's a little bit different: when you change the policies, you had to apply the policies explicitly, and then the storage provider would get to know that, you know, hey, there's a policy change, and they would figure out whether, with this change in policy, they are able to satisfy it — and they reported that using some compliance statuses.
A
Why not a separate QoS policy type CRD that can be attached to PVCs, like what was proposed? It could be a sort of predefined set of policies, like a drop-down. I haven't yet thought through the whole of how that will work, but, yeah — rather than putting these fields specifically in the PVC, we should think along those lines.
F
Ideally, yes. I think that's sort of the easiest thing that we can do: have some sort of a generic QoS class object that's opaque, and allow you to assign it at creation time and change it later. That's pretty easy to understand, you can make it work, and it would be more valuable than what we have today. But some people want more than just that.
J
We probably should look at how this one is done — I mean, the one that Deepak is talking about — just to see if we can, you know, do something similar, maybe. But that is more than pure QoS. So...
B
Cool — so I actually have to run, so I will stop the recording. I feel like we have a couple of action items: to add some specific examples of how people either, you know, implement this stuff at the storage level, or something like the Kubernetes policy example mentioned.
B
If people could please add that to the doc — and, you know, obviously we need to continue discussing this, so I'll set up another session. Does that sound good? Great.
B
Cool — thank you all, this has been super interesting. Take care, happy Tuesday.