Description
Kubernetes Storage Special Interest Group (SIG Storage), storage pool design meeting, 13 May 2020
A
We'll continue the storage pool design discussion today; I think last time we talked about that. We want to explore how to do spreading on top of what Patrick is proposing. I thought about that and I can share it. But before that: Patrick, I know you have updated things. Did you update the KEP? You put some ideas there; you have some suggestions. Can you talk about the idea?
B
It's just a proposal at this time; I've not updated the KEP itself. Let me see whether I can find my comment. The basic idea underlying the proposal was this: we are currently really stuck on the question of what a storage pool is and whether we want to expose it in the cluster, and that controversy is holding back the merging of the KEP and continued work on just the storage capacity issue.
B
My proposal was that I remove the pool concept, the pool definition, from my KEP's API proposal and otherwise just keep the structure as it is. We currently have, as a top-level object in the KEP, the CSIStoragePool object, and then some data inside it which describes capacity in a certain topology for certain storage classes. My proposal, to avoid the controversy, is that I just rename that API object to CSIStorageCapacity.
B
That's good enough as alpha for the purpose of this KEP; it's perfectly sufficient. Then, in parallel with me doing some implementation work, we can continue at a more leisurely pace on what other things may have to be in that top-level object and whether we want to call it CSIStoragePool. If that turns out to be a valid use case, then we'll probably just rename the whole thing again to CSIStoragePool. But that is the rough proposal.
C
The only thing I wanted to add was: let's be patient. I apologize that this takes so long, but the goal is to try and get this right. We want this to work not just for any one of us; we want it to work for all of us and to be the best interface it can be for all of our users. That said, back to you, Patrick.
A
Right, yeah, but the proposal was also making it a 1:1 mapping, so it's still different from your proposal. I know you proposed one variant first, and then there was a second proposal, which is to keep the existing structure, everything unchanged; except that was your last proposal. I don't think so; we still have a little bit of a difference.
B
That is the other one, CSIStorageCapacity, which has a different structure. It's a flat structure with more objects: topology, storage class name, and capacity are basically all in that one object, and whenever something is different, we have a different object. That is the flat structure, but you want to have an API object. Yeah.
A
Okay, so basically every time we create this (yes, you don't like the name, but let's put that aside for now), so every time you create this, this one contains the topology, the storage class name, and the capacity. But you could potentially have another CSIStorageCapacity that has the same storage class name, but maybe with a different capacity, right? That's still possible?
B
Oh,
if,
if
taking
yeah
technically,
if
it's
possible,
so
I
I,
don't
like
that
flat
structure
there,
the
one
that
the
one
that
I'm
highlighting
here
I
think
is
problematic,
so
constructing
this
object
we
will
have
a
lot
more
of
these
objects.
My
expectation
and
that's
just
a
guesstimate
at
this
time-
is
that
the
load
under
the
API
server
will
be
higher
if
we
use
this.
B
Purely from a technical perspective, I think this is the worst solution, the worst approach for storing this data. And that's really all it comes down to at this point: which way of representing this information is more manageable for the API server. We are not concerned about concepts anymore; we are now just trying to find a good representation. Or what other concerns do we have if it's called CSIStorageCapacity and the struct looks different?
B
For the object metadata, we may have... I think what's missing is the driver name, so it would have a CSIStorageCapacity spec with just the driver name, because that's the only invariant. Topology and storage class name probably would also be in the spec, or whenever those change we would remove or modify the objects. Topology, well, I don't know; we can debate whether that all goes into the status or the spec.
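Since the discussion is about concrete fields, here is a minimal Go sketch of what such a flat object could look like. All names, the driver name, and the spec/status placement of capacity are illustrative assumptions, exactly the open questions being debated above, not the final API:

```go
package main

import "fmt"

// Hypothetical shapes for the flat CSIStorageCapacity object discussed
// above; the real KEP may name or place these fields differently.
type CSIStorageCapacitySpec struct {
	DriverName       string            // the only invariant, per the discussion
	StorageClassName string            // object is removed/recreated when this changes
	Topology         map[string]string // one object per distinct topology segment
}

type CSIStorageCapacity struct {
	Name          string
	Spec          CSIStorageCapacitySpec
	CapacityBytes int64 // could live in status or spec; undecided above
}

// exampleObjects shows "one storage class, N node-local topologies means
// N objects": same driver and class, different topology and capacity.
func exampleObjects() []CSIStorageCapacity {
	mk := func(node string, capacity int64) CSIStorageCapacity {
		return CSIStorageCapacity{
			Name: "pmem-" + node,
			Spec: CSIStorageCapacitySpec{
				DriverName:       "pmem-csi.example.org", // hypothetical driver name
				StorageClassName: "pmem",
				Topology:         map[string]string{"kubernetes.io/hostname": node},
			},
			CapacityBytes: capacity,
		}
	}
	return []CSIStorageCapacity{mk("node-1", 64 << 30), mk("node-2", 32 << 30)}
}

func main() {
	for _, c := range exampleObjects() {
		fmt.Println(c.Name, c.Spec.Topology["kubernetes.io/hostname"], c.CapacityBytes)
	}
}
```

The trade-off raised in the discussion is visible here: every (class, topology) combination is its own object, which keeps each object trivially simple but multiplies the object count.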
B
But my concern really is that this is technically not the best representation. I know that there were concerns about having too-complex objects, in particular when they get updated by multiple different entities, because that's problematic. But that concern doesn't apply here, because there is just one producer of these objects.
B
But there is no one-to-one mapping between storage class and topology. That is one of the issues we have: if you have a storage class that says "give me persistent memory" (my favorite example, because that's what I'm working on), then there are different topologies for that, one per node. It's node-local storage. You can't just have one storage class per node; that just wouldn't scale, and it wouldn't work for my applications.
B
The last time we talked about placement, and that's where we kind of assumed that perhaps a storage class representation of the different availability classes may work for local storage. Sorry, but that just doesn't work for the applications. We want to have one storage class that they can select in their PVC, and they don't care where the storage is as long as it has this one attribute, that it is persistent memory. The system is then supposed to find one node that has sufficient capacity left to create a volume for the pod, and to create that volume there. We therefore have one storage class and N different topologies with different capacities, and we need to represent that. Yeah.
G
Is
not
this
part
that
that
were
concerned
about
the
mapping?
It's
it's
the
fact
that
can
you
share
when
we
had
a
single
pool
object
with
multiple
storage
classes,
then
the
question
came
came
up,
is:
can
multiple
storage
classes
share
the
same
pool?
What
does
the
pool
capacity
of
the
pool
actually
mean
then,
and
a
lot
of
other
sort
of
orthogonal.
G
I
agreed
that
you
need
an
object
per
topology,
but
I
think
where
the
complication
is
is,
if
you
have
a
single,
if
that
single
object
has
multiple
storage
classes
in
it,
I'm
not
sure
why
I'm
not
sure
the
value
of
that,
because
the
capacity
we
report
is
going
to
be
not
accurate
already
so
like
I'm,
not
sure
why?
Having
that
sort
of
mapping.
A
If
I
just
look
at
the
capacity
RPC
in
CSI
I,
don't
think
that
is
the
Varma
mapping
I
mean
for
each
get
capacity.
You
do
have
a
bouillon
cube.
Ility,
you
have
a
topology,
then
you
have
a
parameters
which
is
mostly
story
class
right.
But
that's
just
saying,
like
you
know
this,
two
together,
you
you
return.
You
return
this
capability.
Yes,.
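For reference, the CSI GetCapacity request does key capacity on the whole (capabilities, parameters, topology) tuple rather than on a named pool. A simplified Go mirror of those fields (the real types are generated from the CSI protobuf, so these structs are illustrative stand-ins only):

```go
package main

import "fmt"

// Simplified stand-ins for the CSI GetCapacityRequest fields; the real
// types are generated from the CSI spec's protobuf definitions.
type Topology struct {
	Segments map[string]string
}

type GetCapacityRequest struct {
	VolumeCapabilities []string          // simplified; really a list of VolumeCapability messages
	Parameters         map[string]string // typically the StorageClass parameters
	AccessibleTopology *Topology
}

// capacityKey illustrates the point made above: the reported capacity
// is a function of the whole tuple, not of a single pool identifier.
func capacityKey(req GetCapacityRequest) string {
	return fmt.Sprintf("%v/%v/%v", req.VolumeCapabilities, req.Parameters,
		req.AccessibleTopology.Segments)
}

func main() {
	req := GetCapacityRequest{
		VolumeCapabilities: []string{"SINGLE_NODE_WRITER"},
		Parameters:         map[string]string{"type": "pmem"},
		AccessibleTopology: &Topology{Segments: map[string]string{"kubernetes.io/hostname": "node-1"}},
	}
	fmt.Println(capacityKey(req))
}
```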
E
Multiple storage classes share a capacity, and when you consume from one, it actually ends up also consuming from the other one. Depending on how racy the system is, that could result in bad decisions being made. But you're always going to have races in your capacity information; you're always going to have situations where...
E
We need to set our standards such that the capacity is a useful hint that will usually result in the right decision, but accept that there are a lot of situations where it's going to result in a suboptimal decision, and you need to be able to recover from those, rather than just saying that it's not acceptable.
B
The whole thing with this flat structure: if that's what's needed to get it accepted, I will. My concern is just that performance will be worse than it could be otherwise, but if that is something that we can come back to later, perhaps once we have some benchmarks, okay. I'll take what I can get, and we could go ahead with the KEP with that flat structure, if that's the only thing we can agree on.
B
For me it's twofold. One is performance. The other is that if we end up deciding that storage pools as a concept are useful and this flat structure is the wrong one, then we may have future arguments that require reintroducing what is currently called CSIStoragePool in this KEP. But by then we'll already have written a lot of code that assumes these flattened objects and will basically have to redo that code. But okay.
A
I think I'm fine; let's keep moving forward. I just want to say that before we move from alpha to beta, I want to make sure we get this other spreading problem solved, and figure out whether we need to have a storage pool in CSI or not. We need to figure that out before we move to beta, I think.
C
That seems reasonable. So maybe what we can do is unblock Patrick here: he can start implementation for 1.19. In parallel, we can continue the discussion around placement and come up with the design, hopefully by the end of the quarter, and then for the next implementation (whether that's next quarter or, if it takes longer, the quarter after that) we can merge both of these into a single beta implementation.
B
Before I can start with implementation work, I also need the KEP accepted, and I think I need confirmation that we are indeed planning to put something into 1.19. For the enhancement issues, I have also been holding back on confirming that this is for 1.19, but I think both the storage capacity KEP and the generic inline volumes KEP that Jan reviewed today will be scheduled for 1.19, right? Then I can go ahead with that.
B
We can do that now that we have reached a decision about inline volumes. If we hadn't decided to go down that path with generic inline volumes, it would have to be in this KEP, because it would have meant extending inline volumes and having this other field that needs special support in the scheduler. But now that we don't do that, yeah, I can take out that part. Okay.
B
So this is the description of the idea: we deal with the more complex variant of inline volumes, those that are provided mostly by a traditional CSI driver, one that needs to provision volumes, where provisioning isn't a lightweight local operation, and we want all the usual features that we have with persistent volumes and CSI to also be available for inline volumes.
B
That's the motivation. We could get there bottom-up by extending what are called CSI ephemeral volumes, but I think that is not going to get us all the way, and it's going to be horrendously complex, with different code paths for the same thing for normal volumes and CSI inline volumes. Therefore, the proposal here is to do something that achieves the same goals, where a pod spec can specify a volume inline. That's the one key part; the other is that it's still an ephemeral volume.
B
It's mostly this one pod that will use the volume, but in contrast to CSI ephemeral inline volumes, it works with normal, unmodified storage drivers: no special logic in the code, no special deployment steps necessary for it, and just the usual code paths in Kubernetes do all the actual provisioning. And then the key part that triggered all of this: we would straight away get storage capacity tracking.
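A sketch of what the proposed pod-level API could look like in Go. The field names (Ephemeral, VolumeClaimTemplate) and the deterministic "pod name plus volume name" claim name are illustrative assumptions about the proposal, not the final design:

```go
package main

import "fmt"

// Illustrative types for the proposal: an inline volume whose lifecycle
// is tied to the pod, but which is provisioned through a regular PVC by
// an unmodified CSI driver via the normal code paths.
type PersistentVolumeClaimSpec struct {
	StorageClassName string
	RequestedBytes   int64
}

type EphemeralVolumeSource struct {
	VolumeClaimTemplate PersistentVolumeClaimSpec
}

type Volume struct {
	Name      string
	Ephemeral *EphemeralVolumeSource
}

// pvcName derives a deterministic claim name from pod and volume name,
// so the controller and kubelet can both find the PVC without extra state.
func pvcName(podName, volumeName string) string {
	return podName + "-" + volumeName
}

func main() {
	v := Volume{
		Name: "scratch",
		Ephemeral: &EphemeralVolumeSource{
			VolumeClaimTemplate: PersistentVolumeClaimSpec{StorageClassName: "pmem", RequestedBytes: 1 << 30},
		},
	}
	fmt.Println(pvcName("my-pod", v.Name)) // my-pod-scratch
}
```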
B
We'd have a KEP also covering these kinds of inline volumes, because right now CSI ephemeral volumes are assumed to not have capacity and to be provisionable on any node, and we want the storage capacity tracking to also work for this other kind of inline volume. It's not a replacement, simply because the overhead will be there for the more heavyweight provisioning process, so we need both in the future.
C
I'm just brainstorming here, so this might be a stupid idea. Is there any way that we can keep the existing inline CSI API and also implement this? The way that I'm imagining it is: you could have a driver opt into this behavior and say "I support it", and then you only add to the existing API the minimal number of additional fields that this needs, so something like the size of the volume.
B
Okay, well, I guess if that turns out to be an implementation problem, we can make the type simpler by, for example, replacing it with a PersistentVolumeClaim spec or individual fields. But this here is what I would try to implement first, and if that works, I would stick with it, simply for consistency and because it's generic.
B
"InlineVolumeSource", as Jan said, is a bad name. It's confusing, because other volumes are also inline. My counter-proposal was ClaimVolumeSource, because that's what it really does: it creates a claim, and the volume is provisioned through that claim. Or, if we want to be even more correct, but it sounds worse in my opinion, VolumeClaimVolumeSource. So I'm now taking votes, essentially, from whoever has a better way to come up with names.
D
To me this is basically a smarter emptyDir. Instead of using the root disk, I can use whatever storage I have in my cluster instead of emptyDir, and of course I can do things like using a snapshot as the source of the claim, or growing the volume, but it will just live as long as the pod does. So you could call it scratch space, or generic emptyDir, or something like that.
B
It would be a pod spec like this one here, an inline volume source. There would be another field that says ephemeral: false, and then the controller will simply skip setting the ownership relationship. Then, when the pod terminates, the PVC continues to exist with that deterministic name, and someone else will need to clean it up or do something with it. Technically that would be very easy to implement, if there is someone who takes care of an orphaned PVC or does something with it. If that isn't the case, then it doesn't make much sense.
B
You do get some more structure around how those PVCs get created, but as I said, because it assumes that there is some other entity managing the PVC, that entity might as well deal with the PVC creation. Okay, so I guess I'm fine retracting my objection, and we can just call it EphemeralVolumeSource.
B
I don't see why not, because the controller itself will just do the initial creation that is necessary to set the ownership relationship. It can't be done by some higher-level object or higher-level controller, because they don't know in advance what the UID of the pod will be; but the controller, because it reacts to an existing pod object, can set that ownership relationship immediately when it creates the PVC.
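The ownership point can be sketched as follows, with minimal stand-ins for the Kubernetes metadata types and the assumption (as described above) that the controller sees the created pod, and thus its UID, before creating the claim:

```go
package main

import "fmt"

// Minimal stand-ins for the relevant Kubernetes metadata types.
type OwnerReference struct {
	Kind       string
	Name       string
	UID        string
	Controller bool
}

type PVC struct {
	Name            string
	OwnerReferences []OwnerReference
}

// newPVCForPod sketches the controller behavior described above: since
// it reacts to an existing pod object, the pod UID is known and the
// ownership relationship can be set at creation time, so the garbage
// collector removes the PVC when the pod goes away.
func newPVCForPod(podName, podUID, volumeName string) PVC {
	return PVC{
		Name: podName + "-" + volumeName, // illustrative deterministic name
		OwnerReferences: []OwnerReference{
			{Kind: "Pod", Name: podName, UID: podUID, Controller: true},
		},
	}
}

func main() {
	pvc := newPVCForPod("my-pod", "0000-uid", "scratch")
	fmt.Println(pvc.Name, pvc.OwnerReferences[0].UID)
}
```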
B
It's not the embedded volume source itself that gets provisioned; it goes through the PVC created by this special controller, and intentionally so, because it keeps the implementation of this controller very simple. It is also not currently possible to modify anything in a pod spec other than some very select fields. In particular, the entire volumes array is read-only, create-only; it can't be modified, and the API server prevents it. So even if we wanted to modify this inline volume spec, we couldn't do that at the moment.
C
It makes it composable: instead of having to redo everything for ephemeral volumes, we can reuse the existing infrastructure, and that is very attractive. Yeah, I think the only major concern I have is around API usability, whether we can come up with a way to minimize the confusion for end users. I like sticking "ephemeral" in the name, because that makes it very clear what the purpose is, but the existing CSI ephemeral API is different.
C
Instead
of
specifying
ephemeral,
you
specify
CSI
and
the
driver
name,
and
it's
kind
of
implicit
that
this
is
an
ephemeral
driver.
So
maybe
what
we
can
do
is
if
we
like
this
API
in
the
future
or
before
we
go
to
GA,
we
can
evolve
CSI
ephemeral
to
basically
follow
this
API
and
become
another
option
inside
here.
So
instead
of
specifying
volume
claim
template,
you
specify
you
know
the
CSI
in
line
ephemeral
information
as
a
separate
struct,
so
it's
either
or
and
that
way
as
a
end
user.
I
have
one
entry
point
into
ephemeral.
B
That's fine. There wouldn't be anything about CSI in this proposal here if we just call it EphemeralVolumeSource. I get the suggestion that we could have two optional fields, where one of them must be provided but not both: one would be the current proposal here, and the other would be the information that we currently have for CSI ephemeral. I think that's the better setup; it looks doable.
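The "one of two optional fields, but not both" idea maps to a standard union-style validation. A hedged sketch, with illustrative member names that are not the final API:

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative union members; names are placeholders, not the final API.
type VolumeClaimTemplate struct{ StorageClassName string }
type CSIEphemeralSource struct{ Driver string }

// EphemeralVolumeSource carries two optional members; validation below
// enforces that exactly one is set, as discussed above.
type EphemeralVolumeSource struct {
	VolumeClaimTemplate *VolumeClaimTemplate
	CSI                 *CSIEphemeralSource
}

func validate(src EphemeralVolumeSource) error {
	set := 0
	if src.VolumeClaimTemplate != nil {
		set++
	}
	if src.CSI != nil {
		set++
	}
	if set != 1 {
		return errors.New("exactly one of volumeClaimTemplate or csi must be set")
	}
	return nil
}

func main() {
	ok := EphemeralVolumeSource{CSI: &CSIEphemeralSource{Driver: "hostpath.csi.example.org"}}
	fmt.Println(validate(ok))                             // <nil>
	fmt.Println(validate(EphemeralVolumeSource{}) != nil) // true
}
```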
C
Yeah, I just want to make sure: all of this makes technical sense for us, and it makes our lives easier as developers of Kubernetes and as storage vendors who are implementing CSI drivers, but I want to make sure we keep the user experience in focus as well. If we have two different ways, actually three different ways, to do ephemeral, users should be able to quickly understand them and not be confused. That should be our goal.
B
B
Jan actually suggested that this will just work straightforwardly: if we add a new entry in the definition of what can be listed in a pod security policy, then the existing mechanism should either allow or prevent it. I still need to look into how all of that will work in detail, but I think it should be, it sounds like it will be, straightforward. Okay.