Meeting of Kubernetes Storage Special-Interest-Group (SIG) Storage Pool Design - 08 May 2020
B: So the next step, what we need to discuss today, is, as far as I understand, the last remaining open question for storage capacity tracking. Partly that is about the new API in the KEP, but it's also about the more general question in SIG Storage of what it means to have storage classes and storage pools and how they are related.
B: So in my KEP I currently have the concept of a new object called CSIStoragePool, and the idea is that this is some underlying set of storage. It might be a set of local disks in an LVM logical volume group or a physical volume group, something like that, where that is used to create volumes in that space, and therefore that is the starting point for tracking capacity in my KEP. The only common attribute of a storage pool is basically its topology; it answers the question of where the storage is accessible.
B: It needs to look at the storage class of a volume and then make a decision whether a certain node has a chance of being usable by a pod when that volume still needs to be created. With that record, the scheduler then can decide: no, I don't have any available storage on that node for that storage class, I need to find a different node.
B: That's why we have a second, nested structure, basically, where for this particular topology we have different entries, indexed by storage class, and that's the key. Strictly speaking it's a list, but it's defined as a map, which means that the storage class name must be unique among all entries, and that, then, is the capacity for this particular combination of storage class and location, where storage is available for that class.
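A minimal Go sketch of the nested shape B describes, for orientation only; the type and field names here are illustrative assumptions, not the exact API from the KEP:

```go
// Hypothetical shape of the nested API described above.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// CSIStoragePool describes one pool of storage managed by a CSI driver.
type CSIStoragePool struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// NodeTopology answers the one common question about a pool:
	// where the storage is accessible.
	NodeTopology *v1.NodeSelector `json:"nodeTopology,omitempty"`

	// Capacities is strictly speaking a list, but defined with map
	// semantics: each StorageClassName must be unique among entries.
	Capacities []StorageClassCapacity `json:"capacities,omitempty"`
}

// StorageClassCapacity is the capacity for one combination of
// storage class and location.
type StorageClassCapacity struct {
	StorageClassName string             `json:"storageClassName"`
	Capacity         *resource.Quantity `json:"capacity,omitempty"`
}
```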
B: I'll try to answer. My understanding is that there is no one-to-one mapping between storage pools and storage classes; it is basically n-to-m. For each storage pool we have, we may have more than one storage class that is applicable to that pool. My one example is LVM, where we have a bunch of disks in a common pool on one node, and the storage class says: I want replicated data, data that is stored on more than one disk, or I want a striped data volume or striped storage.
B: My understanding of a storage class in Kubernetes is that it is basically a contract between the cluster administrator, who creates storage classes, and the applications. What a storage class describes is a certain set of characteristics, basically: I want a volume with certain attributes. This could be performance characteristics or reliability characteristics; it could be, for the driver that I'm working on, that the storage must be backed by persistent memory, things like that. This is all not standardized, but it's basically part of the documentation for a cluster: these storage classes exist and they have this meaning.
B: That's orthogonal, in my opinion, to storage pools. The cluster administrator and the application developer agree on the storage classes, what the names mean, and then the application, say a StatefulSet, says: I want this disk to be fast, I want it to be on an SSD; this volume can be slow but higher capacity, because I just need it for, I don't know, streaming media or some content that can be on a hard disk.
B: So you basically agree with your cluster admin to create two different storage classes, and the storage system then picks suitable storage for both volumes where it is available. That's where the storage pool comes in; it basically depends on the cluster setup where that storage is available. It might be that on one node you have fast SSDs and on another node you have larger-capacity hard disks, and that means that, indirectly, through the storage class, your pod ends up running on one node and might not be runnable on another node.
C: And also it's possible that the same storage class, I think I'm just reiterating a point here, that the same storage class can provision from multiple storage pools, because multiple storage pools might match what it wants. And in that case, when the Kubernetes scheduler is deciding placement, it needs to say: for this storage class...
B: So the situation could be, for example, that a storage pool is not suitable for a certain storage class; say the pool is made of hard disks and the storage class explicitly selects SSDs. Then there simply wouldn't be an entry for that storage class in that particular storage pool, because the capacity would be zero; it wouldn't make sense to have such an entry and therefore it wouldn't be listed. And then, when the Kubernetes scheduler looks for storage with a topology and a certain class...
C: While we're on the subject of influencing the Kubernetes placement, the pod placement: would it also make sense at that time to look at the total available capacity for the needed storage class on every node and take that into account while deciding pod placement? I'm not sure if storage is one of the things that it takes into account while saying this is the best node for this pod, no?
B: What we currently can do is that the scheduler picks a node for late-binding volumes and then asks the storage provisioner to create a volume. The storage provisioner can fail, and it can report back that it can't do that with the currently selected node, and then the Kubernetes scheduler tries again with a different node.
B: But there's no guarantee that it actually makes a different choice. It doesn't have any state information, or the state might not have changed at all, and there's some randomness; it may just end up on the same node again. That's where this capacity information comes in, because when looking at candidates for the next node that it tries, it can then check the size of the current volume against the capacity, and that will rule out nodes that don't have enough capacity.
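A small Go sketch of that filtering step, under the assumption that reported capacity has been aggregated per node and storage class; the types here are simplified stand-ins, not the real scheduler interfaces:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// reportedCapacity maps node name -> storage class name -> free capacity,
// as it could be aggregated from the capacity objects discussed here.
type reportedCapacity map[string]map[string]resource.Quantity

// nodeHasCapacity keeps a node in the candidate list only if its reported
// capacity for the volume's class covers the requested size. A missing
// entry is treated as zero usable capacity for that class.
func nodeHasCapacity(caps reportedCapacity, node, class string, request resource.Quantity) bool {
	free, ok := caps[node][class]
	if !ok {
		return false
	}
	return free.Cmp(request) >= 0
}

func main() {
	caps := reportedCapacity{
		"node-1": {"fast-ssd": resource.MustParse("100Gi")},
		"node-2": {"fast-ssd": resource.MustParse("5Gi")},
	}
	req := resource.MustParse("10Gi")
	for _, node := range []string{"node-1", "node-2"} {
		fmt.Printf("%s usable: %v\n", node, nodeHasCapacity(caps, node, "fast-ssd", req))
	}
}
```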
B: Occasionally it might get triggered by volume creation, because that clearly changes capacity, but there's still a certain delay. The pod scheduler may end up scheduling multiple pods, and it doesn't try to model what the effect of scheduling and creating volumes is. It just takes whatever information it currently has, and it keeps using that, even though it has already scheduled something that will change the situation. It just doesn't know how the situation will change.
B: So, from a practical perspective, one advantage, and I'm just guessing here, just trying to understand what the access patterns will be, is that this approach is a bit more compact. There will be one writer, which creates CSI storage pools, obviously, as a CSIStoragePool object, and then updates the status, and common information is kept shared, in particular the node topology, which can be fairly large.
B: We only store that once and we keep the number of total objects smaller compared to this API here. I've switched tabs to basically my comment from nine days ago, where the alternative is to not have a storage pool object at the root and just have a storage capacity object. I omitted spec and status here, but this is basically the information that would be repeated in every single object: the topology, storage class name, and capacity. That's flattening the structure.
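A sketch of that flattened alternative in Go, with spec and status omitted as in the comment B refers to; the names are again assumptions for illustration:

```go
// Hypothetical flattened alternative: no pool object at the root, just
// one small object per (topology, storage class) pair, repeating the
// shared information in every object.
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

type CSIStorageCapacity struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	NodeTopology     *v1.NodeSelector   `json:"nodeTopology,omitempty"`
	StorageClassName string             `json:"storageClassName"`
	Capacity         *resource.Quantity `json:"capacity,omitempty"`
}
```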
B: The same argument applies on the consumption side. The consumer still needs to iterate over all that data, and it also needs to read all of the objects or watch them. So if we update 10 objects at once, it will get 10 updates instead of one, whereas with the larger embedded structure only one object gets updated, and iterating is in fact easier with the more complex structure, because the node topology, which it needs to check, is only represented once.
C: So if we don't do m-to-n mapping of storage pools to storage classes, there's actually lots of functionality that we lose, so I will try and explain, from my perspective, what the functional loss is. If we cannot map the same storage class to multiple storage pools, then things like StatefulSets don't work, because a StatefulSet needs to say: I want my PV on this storage class; but then it needs to have the flexibility of consuming storage pools that might be only accessible on certain nodes as the pods get created.
C: And if we don't have the other-way functionality, of having each storage pool map to multiple storage classes, then the functional loss is: if I want to say, I want to create this PV with 100 IOPS on storage pool sp1, and I want this PV with 10 IOPS on storage pool sp1, I don't have a way to express that, because I need two storage classes, one that says IOPS: 100 and one that says IOPS: 10, and both need to map to sp1 so that they both can consume it.
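For illustration, the two classes C describes might look like this in Go, both pointing at the same pool through opaque, driver-specific parameters; the provisioner name and parameter keys here are invented, not a real driver's API:

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Two classes that differ only in a driver-specific IOPS parameter but
// must both be able to draw from the same pool "sp1".
var (
	fastClass = storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "iops-100"},
		Provisioner: "pool.example.com", // hypothetical CSI driver
		Parameters:  map[string]string{"iops": "100", "pool": "sp1"},
	}
	slowClass = storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "iops-10"},
		Provisioner: "pool.example.com",
		Parameters:  map[string]string{"iops": "10", "pool": "sp1"},
	}
)
```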
C: The explosion of storage classes is, like Michelle says, just a performance problem. I have a functional problem: I can't express through a PVC template that I want this kind of placement. I have to use different storage classes for the PVC that will be placed on node 1 and a different one for the PVC that is placed on node 2, which doesn't allow me to do StatefulSets.
F: That's not my understanding. I actually discussed this exact use case with Michelle beforehand, because I was of the same opinion as you before, and after a long debate she convinced me otherwise. My understanding is that you can still express that. Essentially, what you're saying with the storage pool is that you have certain accessibility constraints, right? You're saying that a specific pod has to go to a specific node, and so what you can do is still say: I have a storage class that has a hundred...
B: For this proposal here, it is clearly meant that the pool is wherever the data is, so a cluster with local disks has more than one pool. Basically, the disks inside a node may be one pool; it depends a bit on how LVM is set up. We might even have more than one pool per node, if you really deal with separate disks where each disk by itself is managed separately by a CSI driver.
F: Yeah, so let's take a step back. I think you hit the nail on the head by saying the definition of storage pool is what is confusing. So let's stop using the word storage pool, and let's start being more specific about the specific attributes that we care about. The ones that I've heard about are two or three main things. One is capacity: the pool has some inherent capacity, and we want to make the Kubernetes scheduler aware of this limitation when it's doing scheduling.
F: The second is accessibility constraints. The volume that is provisioned may not be equally accessible from all the nodes. We want to make the Kubernetes scheduler aware of this constraint and be able to influence it from your workload. So those are the two main use cases that I've seen so far for a storage pool. Is there any other? Because those two, I think, are existing functionalities.
C: One thing, from a use case perspective, that we would like to achieve is the ability to represent every disk attached to a node as a pool, and to make, in the CSI driver or wherever, some intelligent placement decisions about where a PV should be placed on the different disks. Let's take out storage pool entirely; let me just say it as the raw hardware topology: I would like to be able to pick a single disk and say I want a PVC to go to this disk.
F: So it sounds like what you're trying to do, and correct me if I'm wrong, is circumvent the Kubernetes scheduler. You as an end user want the raw hardware layout exposed directly to your application, so from your application you can select and manipulate it. Is that correct?
C: And it can say: I want to put four different stripes of my erasure coding on these four different disks attached to this node. And there could be two kinds of disks attached, one is NVMe and one is hard drives, and I want to be able to say: I want to go to the hard drives, and I want my four stripes to be on four different disks.
A: I think looking at a picture is easier than just talking about it. So there are two use cases here, right. This is the first one, one of the ones that he has been talking about: three replicas, and each one has four PVCs, and we want them to be scheduled across four different disks on that node. So we want to be able to say: we want to pick a pool, but we want those PVCs to be scheduled not on the same pool on this node.
A: And this was another one, so this is more like what we have been talking about, and it's also in this one, right. This is similar, but a little different: here I say, I want this pod, and it has these two volumes that we want to be on different storage pools. So yeah, there are two diagrams.
E: Today, topology in Kubernetes is always relative to a node, a Kubernetes node. We don't have a way to support topology outside of a Kubernetes cluster. So, if we want to address both use cases, both sort of had this use case of I want to spread across failure domains, I'm just wondering if maybe we would need to consider some other way of addressing that use case.
E: One thing to add to that: the storage system itself understands how the disks are laid out and how much capacity is available. Is it possible that we sort of try to simplify this for the user and say: hey, the storage system understands exactly how it's laid out; if you tell the storage system that you want some group of volumes to be spread across different failure domains, if that's all we told the storage system, could the storage system figure out itself how to spread and allocate the disks?
F: I agree with Michelle's approach here, which is: what we don't want to do is expose arbitrary knobs to the user, right. If we expose a hundred knobs, effectively the API becomes useless. What we want to do is make automatic, intelligent decisions on behalf of the user, with them just giving the bare minimum of input that we absolutely need from them.
F: Exposing some kind of hole all the way from the bottom out to the user is a hack because we couldn't come up with a better API. The ideal solution I would want here is some automated mechanism for either the Kubernetes scheduler or the storage system, via the CSI driver, to be able to do the intelligent placement and spread across failure domains. And so the question is: should the Kubernetes scheduler do the assignments there, or should the storage system do the assignment?
F: I think Michelle's argument here is that if you let the storage system do it, the storage system already knows about the internal failure domains; the only additional information it needs is workload information, about things like: these pods are related to each other, please spread them. Versus the alternative, which is making the Kubernetes scheduler understand the internal topology requirements of a specific storage system, and doing that in a generic way that would work for all storage systems.
B: My understanding of the storage placement KEP actually was that this decision of where to place storage is supposed to be done by an operator. So I think it wasn't the initial goal to actually enhance Kubernetes itself to make more intelligent decisions; I think it always has been the goal that we just find a way to expose information.
C: Today, the Kubernetes model doesn't expose enough about the hardware topology to these systems to be able to do this in bare-metal environments, so they are going in very blind. It works wonderfully well on AWS or GKE, because there you do have three-way replication of storage underneath and you expect them to stay blind, but on bare metal that same analogy doesn't work. They don't have anything to work with to really understand the storage layer underneath and make those better decisions, and we have an explosion of these intelligent storage systems.
F: I agree that storage failure domains are not currently handled within the Kubernetes ecosystem. I guess my question is more around: why are we focused on bypassing Kubernetes and making this the problem of a Kubernetes user to solve? Why are we not trying to solve it within Kubernetes?
F: The point of, hey, let's have operators and just expose raw storage topology information up to an operator and the operator will figure it out: that seems like what we're doing is basically poking a hole all the way through the Kubernetes API down to the storage system and saying, well, forget the CSI driver, forget Kubernetes and the Kubernetes scheduler, somebody who's using Kubernetes will figure out what to do.
C: I'm going to add to that. If you do that, though, the operator cannot express a wish without understanding what's available underneath. How do I ask whether a node has three disks, or whether it has NVMes and hard drives, or only hard drives? What is available at the storage layer? Unless it knows what's available, what's healthy, which one of those has capacity, unless it has that information, it cannot even express a wish, right.
F: What we want ultimately is that they should be able to express the minimum required information, right. Ideally, it's something like: hey, please spread across all failure domains. And if they say that, we intelligently figure out what these failure domains are on the storage system and take that into account when we are scheduling and provisioning disks, right, yeah.
B: If I may interject here, I see both sides as something that has merit. The problem that I see is that by saying this has to be solved in Kubernetes, we're basically tackling a problem that just may not have a good solution in Kubernetes unless we first do something else, something simpler that is specific to one application, and then later on generalize it so that it becomes a built-in feature of Kubernetes.
B: I know, Saad, you don't like the term, but the operator basically now is the out-of-tree component where such an experiment with automatic placement can be done, and if that leads to some new ideas that then can be added to Kubernetes, I don't see why that shouldn't happen. I just wouldn't make it a requirement.
F: That's just my two cents. My point around all of this is that the purpose of Kubernetes storage is to act as an abstraction layer that enables workload portability. So the things that you deploy on top of Kubernetes, the workloads, should be able to operate independent of the underlying storage system.
F: We want to balance that with the ability to ensure that all the functionality that any given storage system has is exposed and available for use, and that's why we have those opaque parameters on the storage class that allow you to customize your storage system as you wish. My ultimate fear is that we expose raw underlying storage implementations, and workloads start becoming dependent on those, and that effectively removes the workload portability. So whenever the conversation starts heading towards...
F: ...you know, let's poke a hole in the API and start from there, it makes me take a step back and say: okay, what are the actual use cases that we're trying to solve? Let's come up with generic APIs within Kubernetes and understand who the consumers are. Is it the Kubernetes scheduler? Is it the CSI drivers?
F: The use case that I heard today that really stuck with me is storage failure domain spreading, and I agree that's something that we punted on for a while. So let's double down on that and say: okay, we have to figure out who the consumer is of this information.
F: We know that the information is locked away in the storage system today, and only the storage system is aware of it. We kind of have three options here. One, we can say: let's give the storage system more information so that it can effectively make a decision. The second option is: give Kubernetes enough information about the storage system so that the Kubernetes scheduler can make the most optimal decision.
F: The third thing that I'm hearing on this call is: just poke it all the way up through the Kubernetes API, surface that information to a consumer of Kubernetes, and let the consumer figure it out. I just want to push back on that last option and say that's not the path we should go down. Let's focus on either number one or number two.
A: So this is my understanding: what we're trying to propose is that the user will be able to say, okay, in the StatefulSet, I want those PVs to be scheduled on those pools, but I don't want them to be scheduled on the same pool, something like that. So it's like affinity or anti-affinity, right.
F: More than that, that's perfectly fine, as long as we don't hard-code the concept. For example, what I wanted to avoid when we did topology was this: every single cluster that you're going to have is going to have a different cluster topology, depending on how you deploy it. You know, if you're on cloud you'll have regions and zones; if you're on-prem, you might have racks or even some other custom topology, and that was the initial approach that folks wanted to take.
F: We want to come up with some way to be able to express that there are failure domains within the storage system and give that information to whoever needs it to make the decision. So I think what you're saying, Xing, is that you're okay with letting the Kubernetes system make an intelligent decision based off of failure domains, and I think that's a perfectly fine approach to explore. So let's explore that, okay.
B: So we have five minutes left. I think this advanced storage placement clearly needs further discussion, but what I would like to see is that we at least move forward with the storage capacity tracking one way or another. Personally, I would prefer to keep the API as proposed in the document, with this root CSIStoragePool object.
F: I think the pushback here has been that the use cases that we've been focused on so far, capacity and accessibility topology, could both be satisfied without having a standalone storage pool object, and the principle that we start with is: don't add unnecessary API, right. Just because we think something might be hypothetically needed in the future, let's not add it until we need it. So if we think that storage pool is a requirement, let's come up with a concrete use case for it; it sounds like storage failure domains might be that.
F: I think for just the capacity and accessible topology use cases, a one-to-one mapping between a storage class and the storage pool is what I'm looking for. Whether that storage pool is a standalone object, whether you call that storage pool a storage class capacity, all of that is implementation detail to me, but I think the thing that makes the most sense to me, based on those two use cases, is a strict one-to-one mapping.
B: What I'm proposing is basically meant to be many-to-many. A pool that is local to a node may have multiple classes, and there might also be the opposite direction. I'm slicing that n-to-m mapping by saying I'm starting with a pool and adding information per class. So in that sense it's basically one pool and all the storage classes that would apply to it.
B: That's how I'm representing it. That doesn't mean that a storage class is only in the list for one pool; it may appear in multiple pool objects, and in fact it will, because certain simple storage classes, like my PMEM example, that's just one storage class, it just says: I want PMEM. That will then have multiple storage pools where it has different capacity values, but the API basically slices it; it's using the CSIStoragePool as the starting point.
B: Okay, I see my arguments are not convincing enough. In that case, the only way forward I see is to flatten the whole thing. I had a tentative proposal for that revised API in the comment that I showed earlier, and if Michelle says that this is what she'll accept, I'll just rewrite the KEP.
F: You're in a really hard spot, Patrick, and I really appreciate that. I'm sorry, I don't think any of us is trying to be a jerk here. I think we're all trying to make sure that we're making something that'll work best for end users, and I know this is frustrating, so I apologize. I hate to suggest it, but can we do another meeting to see if we can come to a consensus, rather than try to make a decision in the last minute? I need to jump off to another meeting.