From YouTube: Kubernetes SIG Storage - Bi-Weekly Meeting 20200730
Description
Kubernetes Storage Special Interest Group (SIG) bi-weekly meeting - 30 July 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
Moderator: Xing Yang (VMware)
A
The first one is CSI online/offline resizing. Do we have a comment here? Looks like he's not here — okay, okay.
A
B
That was requested, but that's, I think, not super critical, and I'm planning to schedule a call next week to discuss whether, without the recovery-from-resize feature, we can declare — or move towards moving — resizing to GA, because recovery from resize will be introduced as an alpha feature and it will be a while before that goes GA. So I think this is the same status I gave last time.
A
Snapshots: fixing issues. Yeah, so we are continuing with bug fixing. I think we want to cut an RC once we get the separate API client package out, so there are still some issues that we need to get resolved. And then there's also Andy and Xiangtian — I don't know if they are... are they there? Are they here? They are working on the webhook validation. So recently there is a design KEP that we reviewed in yesterday's data protection working group meeting, so I hope to get that one done. And that one actually would require some — there are some backward compatibility requirements for it, because it's, like, an API change, so we will need to do it in, like, two phases. That's it for the snapshot side. And now, recursive volume ownership — do we have an update on this?
B
Yes, so I think this is the same update that I gave last time. Michelle and I talked on a call, and we agreed upon what the new implementation will look like, and I'll update the KEP for that implementation and try to get as many eyes on it as possible.
C
A
Okay — SELinux recursive permission handling.
A
All right, thank you. File permission handling for Windows — not started. Is it still not started? Okay, I assume this one still has no update; not started. Does anyone see Deep here?
A
The next one is file permission handling in the projected service account token volume. So that was done, so I think we don't need an update here. CSI in-tree read-only handling — this one says not started. Is this still not started? Is Humble here?
A
Storage capacity tracking — is Patrick here? Yeah. Do we still have any PRs pending in external-provisioner for this feature?
E
A
All right, next one: PVC as an inline ephemeral volume. Does this one need any changes in external-provisioner? Do you know?
C
A
Thank you. Spreading over failure domains — that's assigned to me, so that's mine. I don't have any update on this one yet; since I'm working on this one together with the volume group, I need to schedule a review meeting.
A
And the next one is CSI out-of-tree — okay, so this one says it's done once the driver is finished; done. Next one: move the iSCSI driver out-of-tree. Does anyone know the status of this one?
A
Michelle is not on the call, so no update — this one is probably still not started. Next, the Fibre Channel and FlexVolume one.
A
Okay, it says we'll move to deprecation in 1.20. Okay, so I think it's probably the same as last time. And then "move out GlusterFS provisioner" — this one is under Humble. Is Humble here? Does anyone know the progress on this?
A
F
Yeah, I think on this one the initial PR got merged last time, and I started to test it out locally, and I raised one more PR on this one to update the docs and all that, taking help from the community members who signed up for the migration to review it as well. Once that review is done, I'll reach out to you all so you can watch it.
F
The NFS — that's the NFS, yes. For the NFS client provisioner, I have the initial PR that actually migrates it; I also pulled in some of the changes that were done in the external repo, after I went through and commented on all the issues and PRs there, to take the help to move them here. The initial PR is ready for review — you can take a look at it — and I will pull in the remaining PRs here slowly once this is merged.
A
F
A
F
It's already moved out; it's the automation of the build that is left.
A
D
Yeah, I think that's fine; go ahead and put the deprecation notice there. Okay, it'll start serving as a reminder for folks that they should stop using this repo.
G
Was the NFS and the Gluster code moved out of the in-tree repo and moved to CSI in 1.18, or has it happened since 1.18 was released?
D
So you're talking about the code that used to live — the external provisioners that used to live in this repo, correct? Yes. So those external repos are still in the process of being moved; I think Karen just gave an update on the NFS one. They were moved this quarter, which aligns with the 1.19 release. Okay, and then the CSI drivers are kind of standalone — they're not necessarily a direct replacement. If somebody was depending on these old provisioners, they can continue to use them.
D
A
F
A
D
No, not yet, but we need to start thinking about that in the next couple of releases. Okay.
G
Yeah, the reason I ask — and I'm sorry, I don't wish to belabor this — but I had one of my team look at this, and in 1.18 it appears that the in-tree provisioner is still available.
D
Yes, yeah, and they'll still be available in 1.19. Yeah, the in-tree stuff is hard to get rid of because, you know, it's exposed through the Kubernetes API, and the Kubernetes API has a very strict deprecation policy. We can't just pull it out, so even if we end up removing it, we need to keep the API and then actually divert the internal logic to CSI. So that's what we're doing for the cloud providers.
G
A
Okay, volume health. Yeah, so we have been making progress on volume health. There are a few PRs to add tests, so I think those should be merged soon, and then there are a couple of bug fixes that we need to make. I think we are on track.
D
Yeah, so they did a review of the KEP last week. They're going to continue that this week; it'll be on this same Zoom right after this call, and they're likely going to go over the spec — the COSI spec — hoping to get that wrapped up either this meeting or next meeting, and once that's done and the KEP is approved, they'll be unblocked for alpha.
A
C
A
Thank you. The next one is fsGroup policy in CSI — so this one is done. Okay, I think. Okay, that's done; there's no more work. Good. Next here is the vSphere migration.
H
A
Okay, so yeah — the in-tree PRs are all merged, including docs. There are still some issues that we are fixing in the driver repo, and we are also doing more testing. And then I think we're going to have a meeting with the cloud provider side to decide, you know, when we can move to GA and when we can remove the in-tree driver, so I think there is a meeting coming up. Is there anything else on this for the CSI migration?
D
G
No, I don't. It is rather well documented in the OpenShift online documentation, although I'm not aware of anyone using it.
A
Okay, thank you. CephFS — okay, yeah, so this one is under Humble, and it looks like Humble is not here, so I'll just put "no update" here.
A
And then we have this immutable secrets and ConfigMaps feature — that's done. Okay, so this one is done. The next one is "PVC created by StatefulSet will not be auto-removed". I think KK was working on that. Okay.
J
Yeah, I raised a work-in-progress KEP based on our internal discussions.
E
J
We'll go through some review and discussions on the KEP.
J
So the KEP was raised. There were some comments from Ken, but Hemant and I are meeting up today to discuss the decrease-in-volume-size part, and we'll go from there.
A
Okay, so execution hooks for applications — for this one, I need to schedule another meeting to discuss with SIG Node. I did get some comments from Tim, so I need to update the proposal. Is that —
A
D
I think there is an owner — there was work started this quarter, but... oh, okay, I believe it wasn't completed. Can you check the status from last time? I think it should be there.
A
K
And yeah, the first PR is out and it's still ready for review. It's blocked because of the 1.19 release.
A
All right, so, okay, that's all we have on the spreadsheet. We'll go back to here — we don't have any PRs listed here. We have a design review, so this is also from KK: dynamic performance attributes for Kubernetes storage. Okay, do you want to talk about this? Yep. Let me actually open this.
E
J
So if you could move to slide two — yeah. So basically, one of the use cases is where there is a relational database running an OLTP workload, but at month end the database has to do some batch work which requires more throughput from the database, which is mounted on a PVC. The way this would currently work is that we'd have to go back and recreate the PVC, or do some kind of custom stuff, to make this change in terms of the throughput.
J
There is another use case where gaming workloads are concerned — a distributed gaming workload scenario where there are spikes at certain locations and no spikes at other locations. So this is another scenario where, when the spikes happen, based on those spikes the PVC could be tuned to increase the throughput and IOPS, and we could get to a point where these customers — these users — are served well as well.
J
So the idea here is that we could set restrictions around the quotas of these performance attributes, and thereby ensure that it doesn't cause a cost overrun in the subscription bills and things like that. Can we go to the next slide? Yeah, so currently Azure offers something called ultra storage, which has this capability that, once the PVC is created, you can basically go and modify the underlying throughput and IOPS without any, you know, runtime changes required from the rest of the infrastructure. Amazon also seems to be offering the same kind of feature, but we're not experts in Amazon, so we would like some community input — if there are folks listening in to this conversation from any other storage providers, and they are offering similar features, that would be great to know as well.
J
That's the understanding of the underlying storage that we have right now: these attributes can be added at creation time, but once the volume is provisioned, they cannot be modified, and they also cannot be used for resource quotas, which lands us in a very static set of possibilities regarding performance. So that's one of the trigger points of this proposal: can we make something happen where there is this dynamic nature of throughput and IOPS — and, you know, potentially other performance parameters which we can anticipate — added to this Kubernetes infrastructure?
J
Can we go to the next one, please? So, at a high level, there are three main goals. One is that we want the user to be able to dynamically provision based on a performance attribute, so that there is a PVC match made to the corresponding PV which has the required performance and IOPS. Then, the ability to modify these performance attributes at run time, just like currently happens with storage size; and then also the ability to set resource quotas based on these performance attributes as well. So those are the high-level goals of this proposal. Yeah, we can move to the next slide. So one approach is that we make performance attributes a first-class citizen in the persistent volume.
J
So here is an example which I put together to indicate how it would look. This would be very similar to what we have in terms of storage, but we would have these two additional parameters, called throughput and IOPS, which would be set in the request; then, when the PVC is dynamically created, it would go and set this up on the underlying PV. So we will be changing the PV also to have these parameters.
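As a rough illustration of that first approach — this is only a sketch, not the KEP's actual API; the "iops" and "throughput" resource names are assumptions for the example — the claim's spec.resources.requests could carry the performance attributes next to the capacity request:

```go
// Sketch only: the resource list a PVC might carry under
// spec.resources.requests if hypothetical "iops" and "throughput"
// resource types were added alongside "storage".
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	requests := corev1.ResourceList{
		corev1.ResourceStorage: resource.MustParse("100Gi"),
		// Hypothetical new resource names from the proposal:
		corev1.ResourceName("iops"):       resource.MustParse("5000"),
		corev1.ResourceName("throughput"): resource.MustParse("300M"),
	}
	for name, qty := range requests {
		fmt.Printf("%s = %s\n", name, qty.String())
	}
}
```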
J
This would actually just be another resource list, which would be named "performance", because the current resource list is not generic and is very specific to the capacity.
J
State
of
this
current
state
of
this,
these
fields
in
the
pv
as
well,
can
we
move
to
the
next,
so
we
will
have
to
add
a
new
resource
types
to
the
iops
and
like
basically,
iops
and
throughput
would
be
the
new
resource
type.
The
same
resource
type
can
be
used
across
other
places
where
we
are
thinking
about
throughput
and
iops
in
general
as
well.
J
We will also need CSI spec changes, where CreateVolume will have to accept these IOPS parameters, and we'll also have to convert the current update-volume call that exists to update the size — which is very specific to size updates — and make it a little more generic if possible, so that there is a generic "controller update volume" kind of call; or we could have a separate call just for the performance updates. So those are the anticipated CSI spec changes.
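For concreteness, here is a speculative sketch of what such a generic modify call could look like on the driver side, expressed as a Go interface; the ControllerModifyVolume name, the request/response types, and the MutablePerformance fields are all invented for illustration and are not part of the CSI spec being discussed:

```go
// Speculative sketch of a generic "modify volume" controller call,
// alongside the existing size-only ControllerExpandVolume. None of these
// names exist in the CSI spec as of this discussion.
package csisketch

import "context"

// MutablePerformance holds the attributes a driver could change in place.
type MutablePerformance struct {
	IOPS              int64
	ThroughputBytesPS int64
}

// ControllerModifyVolumeRequest asks the driver to retune an existing volume.
type ControllerModifyVolumeRequest struct {
	VolumeID    string
	Performance MutablePerformance
	// Secrets, opaque parameters, etc. would follow the usual CSI conventions.
}

type ControllerModifyVolumeResponse struct{}

// ControllerPerformanceModifier is the extra capability a driver would
// advertise and implement in addition to the existing Controller service.
type ControllerPerformanceModifier interface {
	ControllerModifyVolume(ctx context.Context, req *ControllerModifyVolumeRequest) (*ControllerModifyVolumeResponse, error)
}
```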
J
We will need some changes to the external-provisioner to make sure that it passes these through to CreateVolume, and also changes to the volume matching logic around matching PVCs to PVs. So these are the anticipated changes if we, you know, go ahead with this proposal.
J
Another — yep, please go ahead to the next one. Yeah, so another potential possibility here is that we could have modifiable options, as parameters or in some other way, through which we can do this update as well; but right now the current infrastructure does not give us that ability, so we'll have to add it — you know, either make the options dynamic and then add this as one of those options, which would be passed through from the top level down to the provider code. So that's the other option that is around, and this is what we are looking for input on.
H
A
So for the second option, instead of saying we are only modifying the IOPS, I'm thinking that maybe we could also think about modifying or changing the storage class to a different storage class, or something. Then that would handle not just these performance attributes, but also other attributes defined in the storage class as well. That's just a thought.
D
My recommendation would be: option one seems very heavyweight — I'm not sure if we're going to be able to design an API that's going to work for everyone in terms of representing IOPS and throughput, but it's in kind of the right direction. I think a third option here to consider — and it's something that can be implemented more quickly for an individual storage provider — is, when you provision your PV, automatically create a ConfigMap with the IOPS information in there, in the user's namespace — or have the user create that ConfigMap — and then reference it in the PV object when it's provisioned. You can think of this kind of like how secrets are done in CSI, if you're familiar with that: your CSI storage class can reference the secret; it could be a fixed secret, or it could be a dynamic name generated based on the PVC name, things like that. And so you can imagine a ConfigMap that you reference with information about IOPS, and then there's a pointer to that in the PV object, and then you basically just look into that object. You would have to have your own controller to monitor the ConfigMaps and do the relevant updates, but it would allow you to do effectively this dynamic IOPS behavior without any changes to the core.
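A minimal sketch of the kind of out-of-core controller being described — the annotation key, the "iops"/"throughput" data keys, and the setBackendPerformance helper are all assumptions for illustration, and a real controller would use a workqueue and proper error handling:

```go
// Sketch of a standalone controller that watches ConfigMaps referenced by
// PVs and pushes performance changes to the storage backend out of band.
package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

// setBackendPerformance stands in for a vendor-specific API call.
func setBackendPerformance(pvName, iops, throughput string) {
	log.Printf("would retune %s: iops=%s throughput=%s", pvName, iops, throughput)
}

func handle(obj interface{}) {
	cm, ok := obj.(*corev1.ConfigMap)
	if !ok {
		return
	}
	// Convention assumed for this sketch: the ConfigMap names the PV it
	// tunes in an annotation and carries the desired values as data keys.
	pvName, ok := cm.Annotations["example.com/pv-name"]
	if !ok {
		return
	}
	setBackendPerformance(pvName, cm.Data["iops"], cm.Data["throughput"])
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	cmInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    handle,
		UpdateFunc: func(_, newObj interface{}) { handle(newObj) },
	})

	ctx := context.Background()
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())
	<-ctx.Done()
}
```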
J
Yes, one quick question there: how do we handle the quota-related stuff in that context? And also, how do we handle the PVC matching scenario, if we have to match it using performance aspects — is there a way to leverage this to make that work?
J
So if a PV exists which has certain performance characteristics —
D
J
— and a customer comes up with a PVC saying that, you know, this is what we need to match, how do we work that out with the ConfigMap approach?
D
Right, so you're thinking about, like, the pre-provisioned case, like —
D
A
So I think that depends on how you're going to generate the ConfigMap name. Okay — if it's some deterministic way of generating it, then you could, you know, presumably manually create that ConfigMap object at a later point, and if it exists, then whatever controller is monitoring for it can say: oh yep, I see this now exists, I'm going to start using its value; and if the value doesn't match what I expect, I'll go ahead and issue a call to change it.
L
J
L
D
So you can imagine, you know: I have a pre-provisioned volume and I want to introduce this and take advantage of this dynamic IOPS ability. So normally what I would do is create a PV object, create a PVC object, and have them bind together. For this case, in addition to that, I'll create a ConfigMap, and in my PV object I'll point back to that ConfigMap.
I feel like — sorry — so I agree with Chris: I think this is the pre-provisioned case; he's asking about matching between PVC and PV. So let's say you have a volume available with certain parameters; you create a storage class with something — let's say a "performance-A" storage class — and associate the volume with that storage class.
D
I see — so this is not a... I see.
D
I
L
That makes sense — yeah, that would work. And then, on the dynamic side, I do have a comment on the ConfigMap. Personally, I don't like the idea of using objects that are also user-configurable from a namespace, because there are no control knobs then. If a user has access to his or her or their namespace, then they will likely be able to edit their own ConfigMaps in that namespace, right, and you then can't protect what someone can configure in those parameters; whereas if it was a different object — you know, a CRD or whatever — that was, like, a "volume performance parameters" object (I don't know what you'd call it), you could lock access to said object, versus you can't if it's a standard object like a ConfigMap.
D
L
M
The namespace user would go ahead and create a PVC with a low-performance storage class, and whenever the user needs higher throughput, you just update your PVC with the new storage class, and that would basically have a callback into a controller — an "update volume" kind of CSI API or something — and the cloud provider can go ahead and apply the newer storage class.
D
Yeah, I think the idea of changing the mapping of a PVC to a storage class after it's provisioned is odd. Ideally, what we want is: a storage class is a template for provision time; after provision time, your PVC and PV should effectively operate independently of that storage class. And I can kind of see the logic of, like: oh, if I change the storage class — maybe, you know, like today, if we change the size of the PVC, we force a resize.
A
Yep, but I think those would be implemented by the drivers, and the drivers would know, you know, what can be handled and what cannot be handled, right? So you could have some method to validate what is supported and what is not supported — usually, before you really go and do an update of a storage class, you can first run this validation to see if this is a supported type of change, right? And —
M
And sorry to cut you off — I just want to speak about some of the experience I've had with storage vendors. Most of the storage vendors actually support dynamically changing the performance characteristics, or even the availability characteristics, of a storage object. So having a mechanism like this in CSI would be very useful — that's what I think, at least — but I don't know if it applies for, like, public cloud vendors.
A
So, for example, in Cinder we do have a retype, which changes the volume type — which is very similar to changing the storage class. So —
J
A
D
L
Yeah, I always just get concerned about the object quantity barriers — I mean, in our environments we're approaching over 10,000 PVs. Yup, yeah. If that's a concern at scale...
M
Yeah, I think that makes more sense. This ConfigMap approach would basically mean we are giving control to the namespace user to set whatever he wants, and maybe the cluster admin may not want to give that flexibility.
M
D
So, instead of the storage class object — like, a good quick way of doing this again is a ConfigMap, right? You could have a user-visible ConfigMap object, and inside there you could let them select, you know, high, medium, low, whatever, and then your controller can operate off of that.
D
Right, you're going to need some sort of additional controller that will have to reconcile these ConfigMaps to actually go and, you know, issue the commands to change the underlying IOPS. So it'll have to be a new, additional controller.
L
So I will say that I implement this already today, and the advice I would give is: instead of doing, like, on-the-fly updates, you would tell your end users to change an annotation or whatnot — an annotation is probably easiest, or a label — and then you would just tell the user to delete their pods that are currently using the current performance tier, and then, when they get rescheduled and remounted, you would have your attach logic go and apply it.
I
D
I mean — so, if we implemented this storage class proposal, then it would be very explicit, right? One of the parameters in the storage class says, you know, "use this ConfigMap for performance tuning", and then the existence of that storage class will block provisioning.
L
D
Right — so when you provision, it looks at the storage class or the ConfigMap object, figures out whatever the current value is, provisions with that, and then you have another controller monitoring that ConfigMap for changes.
D
If there are any changes to that ConfigMap, it will dynamically — after the provisioning process — go in and change the IOPS or whatever.
L
D
That's where I'm saying that we want to make this approach very similar to secrets, where it could be one ConfigMap per storage class, or one ConfigMap per volume if you want, or you could even do something weird and have kind of a subset, where you have one per namespace, right — or what KK was kind of asking for, which is: let me have three different options, low, medium, high, so you could expose a ConfigMap to the user to select between those three.
D
You could have the granularity range from absolutely one-to-one between the storage class and all its volumes, to completely independently changeable per volume, or something in the middle where you have, like, a predefined set that you pick from.
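As a tiny illustration of that middle ground — the tier names, the per-tier settings, and the ConfigMap layout here are all invented for the example — the user-facing ConfigMap could expose only a tier choice that the vendor's controller translates into concrete values:

```go
// Sketch: a user-editable ConfigMap exposing only a tier choice, plus the
// controller-side mapping from tier to concrete (hypothetical) settings.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// What the cluster admin's controller knows; not editable by the user.
var tiers = map[string]struct{ IOPS, ThroughputMBps int }{
	"low":    {500, 60},
	"medium": {2000, 150},
	"high":   {5000, 300},
}

func main() {
	// What the namespace user sees and may edit.
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "db-data-perf", Namespace: "team-a"},
		Data:       map[string]string{"tier": "medium"},
	}
	t := tiers[cm.Data["tier"]]
	fmt.Printf("tier %q -> iops=%d throughput=%dMBps\n", cm.Data["tier"], t.IOPS, t.ThroughputMBps)
}
```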
I
So far in the proposal you mentioned throughput and IOPS, which are pretty standard performance metrics — is there any other kind of metric being considered? I'm just thinking: if they are very standard across all different kinds of storage, is it beneficial for the whole community to have a standard API to represent them, instead of each vendor having their own and also having to implement their own controller?
M
Yeah, I was about to say that — in fact, changing the storage SLA on the fly seems like a generic requirement to me. A lot of the storage vendors I have worked with actually support this; they have the ability to dynamically change it. Whether it's through the storage class, or maybe something in a ConfigMap, having a CSI extension to update the volume seems generic to me.
D
I mean, we can start with storage-vendor-specific implementations here, with this idea of the ConfigMap and controller, and then, if it becomes viable, you know, we can consider pulling it back into the core; and if we pull it into the core, we can extend CSI to have a "modify IOPS" call or something like that — but that's going to be a larger project. I think it'll require a lot more changes and take a lot longer to get out, but it would be a lot more useful if we already had a couple of proofs of concept or implementations that were working, and that would allow you to unblock yourself much more quickly, as well as feed into the larger design.
J
So, Saad, I have a POC with this current approach one — the whole set of changes — but I understand your concern that it's larger and it's going to take a long time to get through. One more question about the ConfigMap thing: can we provide something where resource quotas are applicable in that case as well?
J
So is there a way in which we can do this with a ConfigMap and specify the resource quota for performance as well?
D
I don't think you'd be able to tie it into the existing Kubernetes quota system, since it's not aware of these resources, but you could imagine, you know, the controller that's implementing these IOPS changes having its own internal kind of quota management.
J
A
Yep, okay, well, we're actually right on time. I think we do have a couple of other things, but we're running out of time, so we will discuss the rest of them in the next meeting.