Description
Meeting of the Kubernetes Storage Special Interest Group (SIG) Workgroup for Storage Pool Design Review - 23 April 2020
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
B: That's clearly going to be impossible. I also see that Michelle had some more comments on the KEP, so I'm not sure whether we should go forward here or just continue the normal KEP review. I think my preference would be to keep that in the KEP PR discussion and use this meeting here to talk about the other two big issues that we identified.

B: One is: how does that work in combination with ephemeral inline volumes? The problem with the current situation is that ephemeral inline volumes are basically a special case. The volumes aren't really getting provisioned using CreateVolume in CSI. A driver needs to be modified to support such volumes, and, in particular, it all only happens after a pod has already been scheduled to a node. That makes it very difficult to do any kind of capacity tracking and advanced scheduling of a pod, because it just clearly isn't designed for that.

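For context, a minimal sketch of what such a CSI ephemeral inline volume looks like today, written out with the Go client types (the driver name and attributes are placeholders): it sits directly in the pod spec, with no PVC and no CreateVolume call, so the scheduler has nothing to track.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// An ephemeral inline CSI volume in its current form: it lives directly in
// the pod spec, and the (suitably modified) driver only sees it after the
// pod has already landed on a node.
var inlineVolume = corev1.Volume{
	Name: "scratch",
	VolumeSource: corev1.VolumeSource{
		CSI: &corev1.CSIVolumeSource{
			Driver: "hostpath.csi.k8s.io", // placeholder driver name
			VolumeAttributes: map[string]string{
				"size": "1Gi", // driver-specific and opaque to Kubernetes
			},
		},
	},
}
```
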
B: So, the proposal that I turned into a KEP since the last meeting, and it really is work in progress, just outlines the rough idea. The overall idea is that we add another volume source. In addition to the CSI volume source that we have right now, the new inline volume source would be embedded in a volume source inside the pod spec, and it would just say: if a pod comes along that needs this volume, well, here's a persistent volume claim that you can create, and then creating that persistent volume claim will trigger normal volume creation.

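A minimal Go sketch of the shape being proposed, assuming field names along the lines of what the work-in-progress KEP used (the exact API was still under review at the time of this meeting): the new source embeds a PVC template instead of referencing an existing claim.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// EphemeralVolumeSource would sit in corev1.VolumeSource next to the
// existing CSIVolumeSource. Creating the pod triggers creation of a PVC
// from the template; from then on, normal provisioning takes over.
type EphemeralVolumeSource struct {
	VolumeClaimTemplate *PersistentVolumeClaimTemplate `json:"volumeClaimTemplate,omitempty"`
}

// PersistentVolumeClaimTemplate is an ordinary PVC spec plus metadata,
// stamped out once per pod.
type PersistentVolumeClaimTemplate struct {
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec              corev1.PersistentVolumeClaimSpec `json:"spec"`
}
```
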
B: That's the underlying idea behind this KEP, and because it's reusing persistent volume claims and normal provisioning, we are basically done as far as capacity tracking is concerned. This new PVC will be provisioned like any other, and any other mechanism that we can come up with to improve pod scheduling in combination with PVCs will also work for these kinds of inline volumes.

B: It might be the most efficient way from an architecture perspective; running some other controller would also work. The relevant part is that, at some point, we do have a PVC that is bound to the pod. That's what eventually tells the volume scheduler that it can proceed with that pod, because there is this one PVC, and the PVC, of course, must be bound, like any other PVC that the pod depends on. Both approaches will work.

B: It's mostly now a question of deciding where to put that code: whether we want to have a separate control loop or just extend the existing one. I would argue that it should still live in the kube-scheduler, because otherwise we end up adding something to the cluster that doesn't exist today, which has all kinds of issues with telling people that now they need to deploy the cluster differently. I would still keep that controller inside the kube-scheduler.

B: But, oh no, I was thinking of a webhook or a controller: if a pod with an inline volume, if the pod spec gets updated such that the embedded PVC gets modified, what do we do then, in that case? That's what I was wondering. If I only look at it once at creation time, that inline PVC basically is immutable; we don't allow updates to it.

B: Not sure; I think that is because of how it's mutated. If it's a mutating webhook, it gets called first. If it's not, it's an admission one. Well, can you create that PVC before the pod actually has been admitted? That sounds odd. I suspect that the pod doesn't actually exist in the cluster yet. That means you don't have a UID. You might have enough criteria to create the ownership reference without the UID, but this is something that we would have to check, so yeah, it might be doable to do that in a webhook.

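To make the UID problem concrete, here is a short sketch, not actual webhook code: the owner reference that such a generated PVC would carry so it gets garbage-collected with the pod. The helper name is made up for illustration.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownerRefForPod is a hypothetical helper: the PVC created for an inline
// volume would carry this reference so that deleting the pod also deletes
// the PVC.
func ownerRefForPod(pod *corev1.Pod) metav1.OwnerReference {
	controller := true
	return metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "Pod",
		Name:       pod.Name,
		// During admission the pod does not exist yet, so pod.UID is
		// still empty; that is exactly the open question above.
		UID:        pod.UID,
		Controller: &controller,
	}
}
```
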
B: The overall concern, well, the feedback in general was: yeah, this is a new thing. If you think that the use case is strong enough to change the pod spec, it would be doable, it would be acceptable, and my proposal is that we pursue that. Quite simply, I think we need to do something, and this seemed to be the most widely applicable solution for more versatile, generic inline volumes.

B: The alternative is to add more and more features to the existing CSI inline volumes, and I think that is just not going to scale. It's not going to be the last issue that we have with the current CSI ephemeral inline volumes; missing storage class support might be another one. I really don't want to touch that part of the design and just pile more things onto it.

C: Yeah, it's not that I don't like this proposal. We can talk about the implementation, whether it is a webhook or a controller or the scheduler. But the thing I don't like is that it uses something that's called a persistent volume for something that's not persistent. But I don't see a nice way around it, yeah.

B: We should probably... at some point, persistent volume claim should have been renamed to just volume claim, and that entire problem would be gone. Instead of expecting that this volume is persistent, you just say: this is my request for a volume, and we would be done. But this is now history; we basically are stuck with that name.

B: Whatever, it's just a name; I mean, people will get used to it. And my counter-argument is that a persistent volume claim also doesn't imply that the volume never goes away. The lifecycle of a volume has always been tied to the PVC, and this is just the case where the PVC is known to go away at some point, but we still don't know when that is. If a pod just keeps running, so does the PVC.

B: So that's a quantitative difference, but not a qualitative difference compared to how other PVCs get created and deleted. Perhaps it's just a bit more automatic, with some built-in mechanism to delete it, but that's about the only difference. So I guess I now have the action item to figure out where to implement that thing, and we do have some options.

B: On the other hand, how do we figure that out? If I come up with, say, for example, the idea to run a separate webhook, I can investigate whether that's technically feasible. It depends a bit on the ownership model: whether that webhook actually gets called at a time when we can set the ownership relationship reliably.

B: Yeah, that's good feedback; I think I'll look into that. I'll check the webhook, but that probably, I think, has technical and deployment issues. The PV controller, and making it part of that binary, that sounds more promising to me. Okay, so no big objections, I hope. That means we can move forward with this KEP. I clearly need to spell out more details about timing.

B: There we have a mechanism in place where the storage system gets asked to try to create a volume for a certain node. If that works, fine; if it doesn't, the pod gets rescheduled. But for multiple volumes we may end up with one pod with one volume created and another volume getting stuck, and then the pod can't be rescheduled because of the first volume.

B: Lots of details are missing here; I'm just basically outlining the rough idea, and I think Michelle already had some comments that some things might not work as envisioned, but that's no surprise. I really just threw that down to have something that we can talk about, and personally my preference would be to focus on that later. I just want to make sure that you all think that this can be made to work and that we therefore can move forward with the other KEPs.

B: That is indeed a particular class of applications that will always have multiple volumes, so yeah, that's a good point, but those cases will remain problematic. I think this is doable. We will still need to figure out lots of details of how the protocol between external provisioners and the volume scheduling part will work. We also need to solve the question of how to make that race-free, but I think it's doable; it just needs more thought.

B: I'm still considering priorities. My proposal for 1.19 would be to move ahead with storage capacity tracking and the CSI inline volumes... you know, not CSI ephemeral volumes, that's the wrong KEP... the generic CSI volumes. Do both, or try to get both into alpha in 1.19, gain some experience with that, and then perhaps for 1.20 add the other KEP for multiple volumes.

D: This was one of the concerns that Tim had when he last reviewed this KEP, about the recovery process. So I think it's important that we come up with a strawman that we are fairly confident can work. It doesn't have to be fully fleshed out, but we should come up with, you know, the basic flow, and we should be fairly confident that this is feasible.

B: That would be ideal, because you clearly know that area better than I do, but I'm not sure whether I can provide additional ideas. Perhaps, but probably you know better which of those are all suitable. If you can take that part and flesh out another KEP, or, well, do PRs against this one here to modify the text, or just provide feedback on what I should write down, that all would be helpful, because I'll focus on storage capacity and the generic ephemeral inline volumes first.

E: So one of the things that we talked about was: how are we going to do a mapping from storage class to storage pool? And someone on the call suggested we can extend CSI to have a ListStoragePools call and then extend GetCapacity to return the storage pool that a given storage class belongs to. So basically you say: here's my storage class, which pool does it belong to? That makes sense, I think.

B: We actually don't need to extend GetCapacity for this. What we need is the ListStoragePools, and what that returns is, at least for this particular purpose, the topology of that storage pool, and that's the information that the external provisioner then needs for the GetCapacity call. It then can just loop over all storage classes, whether they are applicable to that particular storage pool or not, and call the CSI driver, even for storage classes that don't select the current pool.

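A rough sketch of that loop, not the actual external-provisioner code: for each topology segment reported for a pool, call GetCapacity once per storage class. The CSI request and response types here are from the released CSI spec Go bindings; everything else is illustrative.

```go
package sketch

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	storagev1 "k8s.io/api/storage/v1"
)

// collectCapacities asks the driver for capacity once per (topology,
// storage class) pair. Results are keyed by class name only for brevity;
// real code would also key by topology segment.
func collectCapacities(ctx context.Context, client csi.ControllerClient,
	topologies []*csi.Topology, classes []storagev1.StorageClass) (map[string]int64, error) {
	capacities := make(map[string]int64)
	for _, topology := range topologies {
		for _, class := range classes {
			resp, err := client.GetCapacity(ctx, &csi.GetCapacityRequest{
				Parameters:         class.Parameters,
				AccessibleTopology: topology,
			})
			if err != nil {
				return nil, err
			}
			capacities[class.Name] = resp.AvailableCapacity
		}
	}
	return capacities, nil
}
```
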
D: I had a question yesterday that I commented on in the spec, but I'm currently not really seeing the value of having a separate storage pool, that is, having a one-to-many mapping between storage pool and storage class. We could potentially simplify this if we assume just a one-to-one mapping.

B: Yeah, we can, actually. A prior version of that KEP recognized that this is a valuable special case for some CSI drivers and had a way to model that effectively, but then, I think it was actually you who asked me to simplify the KEP and remove that special case. I can just revert and bring back that text. It would then still be the same API, but it would have just one entry for the CSI storage pool, with a capacity that applies to all storage classes.

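For orientation, a hypothetical Go rendering of the object being debated here; the field names are guesses based on this conversation, not the final API. The flattened special case is simply a Capacities list with a single entry.

```go
package sketch

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// CSIStoragePool as discussed: one object per pool and topology segment,
// with capacity broken down per storage class.
type CSIStoragePool struct {
	metav1.TypeMeta
	metav1.ObjectMeta

	DriverName   string                // CSI driver that serves this pool
	NodeTopology *metav1.LabelSelector // which nodes can reach the pool

	// One entry per storage class; in the flattened special case a single
	// entry applies to all classes.
	Capacities []StorageClassCapacity
}

// StorageClassCapacity records what GetCapacity returned for one class.
type StorageClassCapacity struct {
	StorageClassName string
	Capacity         *resource.Quantity
}
```
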
B: I think one objection was that in that case, if we ever end up with the need to represent more information about the CSI storage pool, we don't have a good place to put that. That was the other KEP from VMware, where they basically wanted to have the CSI storage pool, or a storage pool in general, as an object that actually refers to one pool and where they want to have additional information. It's not in this KEP here, but it would be a natural extension of that concept, and then it would become awkward.

D: Exactly, but today... so, like, this is basically how the CSI spec is today, right? The capacity is reported per storage class parameters, per topology; there's no notion of a separate storage pool object, and at least in this proposal, the way capacity is reported, it's reported per storage class. So anyway, we're not really using this shared storage pool concept in Kubernetes.

B: My worry is a bit that if we indeed have many different storage classes, we end up creating more data. The individual objects will be smaller, but we replicate information a lot more in the API server, because the topology, for example, will be replicated multiple times, in every single object.

D: I think that's fine. It's just that, to me, having these nested structures makes it really hard to understand what's going on and to be able to act on it, and since we're not actually taking advantage of this structure, I would prefer that we try to simplify it as much as we can.

G: This is [inaudible]. Do you mind if I chime in? This is an interesting conversation, and I'd like to weigh in on use cases. So, thinking of the use case, right: a storage pool to me is a pool of disks, and then a storage class is an abstraction layer that sits on top, consuming the storage pool, right? So we can set quotas per storage class, and what I'm seeing customers doing, for simplicity purposes, for exactly the same simplicity purpose, is creating a collection of disks...

G: ...storage pools with exactly the same name in software-defined storage, and assigning those storage pools to storage classes. So the storage pool creation process is done manually by the storage admin, but, as I said, for simplicity purposes, they created one storage pool and they give that storage pool to multiple departments within their organization. So it ends up they have to create multiple storage classes. So today, that's exactly how it works.

B: Yeah, I think that strengthens the argument that we need to track capacity per storage class, and no, I don't think there's any doubt about that. The question then is: do we still need to surface the fact that there is some underlying shared pool that implements both of the different storage classes? Whether that is useful in Kubernetes, at the Kubernetes API level, is currently what we aren't sure about.

B: He introduced the storage pool as a concept for different purposes, and after some debate we basically unified on this API that you see here: so, per storage pool, capacity per storage class, and he agreed that this would also serve his needs. If we now take away the CSIStoragePool from this KEP, we are basically back to the drawing board for him, where he'd need to come up with something else, yeah.

A: I think if it's going back to what we had before, without a storage pool, then that's not going to work; I think it's just going back to the beginning, just having the storage class. I think there are some use cases listed in his KEP: you could potentially end up with so many storage classes, one for each pool. I think we actually need to look at his KEP and find that example, yeah.

B: Perhaps then, I don't know, the way forward for this KEP indeed would be to do what Michelle says. I'm not sure whether that's what we will stick with long term, but I suppose for the initial implementation we can keep it simpler. And I'm more worried now about getting storage capacity tracking working; I'm a bit selfish here. That's a pressing concern that I have, so if it helps to get this KEP accepted...

B: But that just tells you where the node is. That doesn't tell you whether the storage is in that same topology, or is more relaxed and is at a higher level. For example, the storage might be in a data center, but the node is in a data center, in a rack, in a certain slot, whatever. So the topology of the nodes is typically more specific than the topology of a network-attached storage system.

B: But suppose you have a key that is zone and you have another key that is rack or data center. For the nodes, you have the set of zones, the set of data centers, and potentially a rack value for every single node. How, then, do you know that it's enough to iterate over the zone key? Doesn't it also matter for the extra keys?

B: So basically, you take all the values that you get for every single node and you iterate over those, and then, if the CSI driver actually doesn't care about the data center or the rack key, it will basically ignore that in the GetCapacity response, and you just get the same information multiple times, right? Potentially, yeah. I'm fine doing it that way.

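A sketch of that enumeration, with hypothetical names: collect each distinct combination of topology label values across the nodes; a driver that ignores, say, the rack key will simply return the same capacity for segments that differ only in rack.

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// distinctSegments returns every unique combination of the given topology
// label values found on the nodes. Each segment later becomes the
// accessible topology of one GetCapacity call.
func distinctSegments(nodes []corev1.Node, keys []string) []map[string]string {
	seen := make(map[string]bool)
	var segments []map[string]string
	for _, node := range nodes {
		segment := make(map[string]string, len(keys))
		id := ""
		for _, key := range keys {
			value := node.Labels[key]
			segment[key] = value
			id += key + "=" + value + ";"
		}
		if !seen[id] {
			seen[id] = true
			segments = append(segments, segment)
		}
	}
	return segments
}
```
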
E: Can we walk through a concrete example here? So if I have a storage system, a storage pool, that I can have two different storage classes on, one that says I'm going to do no replication, one that does, you know, 3x synchronous replication, so they consume volumes differently, but they're consuming from the same storage pool: how would it work in this case?

B: You do a union, and you end up with the different topologies. For each of these topologies, you then iterate over the different storage classes that are defined for that driver, and you do a GetCapacity call with each topology and every storage class parameter set that is defined in the different storage classes. Okay.

B: So in this KEP here, I'm basically distinguishing two different interpretations of that GetCapacity value. One is where the CSI driver really knows about limitations and what the maximum volume size could be, given these parameters, and returns that information. So GetCapacity returns basically the maximum volume size that can possibly be created, or can potentially be created, within the current situation with these parameters.

B: My concern here is that if we say in the API object itself that we have a maximum volume size... whatever we call that field. If we just call it capacity, what do we buy? What do we call it: do we call it just capacity, do we call it maximum volume size, or do we call it total capacity? Well, what would you choose?

D: To align with the current CSI spec, I believe it's called available capacity, yeah.

B: It's basically ambiguous, which makes it very hard to use it for something specific. You can basically report it out to the user and then let the user figure out what that value means, perhaps by, you know, looking up in the CSI driver's documentation what that particular driver returns. But for something that is used in Kubernetes, for example, to do a comparison with a volume size, yeah, we have to make guesses, so, well...

B: If it helps to get the... I'm fine with having just a capacity field and just copying what we get from CSI, but I think at some point, if we clarify what the exact semantics are, we might have to rename that field in the API, in the Kubernetes API, because then we have more specific knowledge about what the actual semantics are, and then just calling it capacity... yeah, we could do better than that at some point.

B: But then you don't know exactly, when you just look at an object, which semantics it had when it was created, so I think it will remain ambiguous, yeah. Let's just have one capacity field and say that we assume that this field can be compared against a single volume size, and then figure out whether we can enhance that later on, together with...

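What that agreed-upon interpretation boils down to, as a one-line check (a sketch; the function and its placement are hypothetical): the recorded capacity is treated as something a single volume's requested size can be compared against.

```go
package sketch

import "k8s.io/apimachinery/pkg/api/resource"

// capacitySufficient applies the interpretation settled on above: a volume
// is assumed to fit if its requested size does not exceed the reported
// capacity, whatever that number precisely means for the driver.
func capacitySufficient(reported *resource.Quantity, requested resource.Quantity) bool {
	return reported != nil && reported.Cmp(requested) >= 0
}
```
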
B: There's no calculation in Kubernetes. In CSI, the idea was to have one CSI storage pool per node topology. Why do I still have nodes in here? I thought I had removed that. Anyway, the idea is that you have the topology attached to the CSI storage pool. You do GetCapacity calls with that topology for every single storage class, and you just record that capacity. So basically, the capacity that is stored is what you get from GetCapacity, and you just record that; no calculations whatsoever.

B: What happens in the volume scheduling code is that it matches the topology of the node that it's currently investigating against the storage pools, against the topology that was recorded earlier. It looks at those storage pools where the topology matches and where the storage class also matches, and if that combination tells you, yeah, the capacity is large enough for the current volume that we are trying to provision, then that node is suitable, and that's it.

B: That happens each time the volume scheduling library is called. It has a filter function that looks at all volumes of the pod and tells back to the scheduler: these are the nodes where we can provision the volumes that don't exist yet. That code gets extended to not just look at topology but also consider capacity; currently, it only checks the topology.

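A simplified sketch of that extension, reusing the hypothetical CSIStoragePool shape from earlier: the existing check (topology must match the node) gains a second condition (the recorded capacity must cover the claim's request).

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// nodeFits mimics the filter function described above for a single claim.
func nodeFits(node *corev1.Node, claim *corev1.PersistentVolumeClaim, pools []CSIStoragePool) bool {
	requested := claim.Spec.Resources.Requests[corev1.ResourceStorage]
	for _, pool := range pools {
		selector, err := metav1.LabelSelectorAsSelector(pool.NodeTopology)
		if err != nil || !selector.Matches(labels.Set(node.Labels)) {
			continue // existing behavior: topology must match the node
		}
		for _, entry := range pool.Capacities {
			if claim.Spec.StorageClassName != nil &&
				entry.StorageClassName == *claim.Spec.StorageClassName &&
				entry.Capacity != nil && entry.Capacity.Cmp(requested) >= 0 {
				return true // new behavior: capacity is large enough
			}
		}
	}
	return false
}
```
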
B: Understood, yeah. So that's the open question that we had: do we keep the CSI storage pool concept or not? If we do keep it, it looks like what you see here now: a storage pool has a node topology and has different per-storage-class capacity entries. If we flatten it, we just iterate over more objects; the information, conceptually, is the same, I mean.

A: I think if we flatten it, I think we still need to have some struct; otherwise I think we still have this problem, I mean the one in this other KEP, that we could have an exponential number of storage classes. So yeah, I think we need to have a follow-up meeting. I'll also go back and take a look at that one, and then we can discuss more, yeah.

B: I'll hold off making changes to this CSIStoragePool until I hear back from [inaudible] and VMware on whether that is actually something that's acceptable for them. But I'm now stuck here between two different forms, and I don't want to choose, so I would prefer to have that hashed out among yourselves.

B: I can do that. I can also clarify the thing that Michelle brought up about how to identify the topologies; I'll update the KEP to include this assumption that we make about the topology of different nodes. So I can make some progress, but it looks like CSIStoragePool, yes or no, is one of these blocking questions that we need to hash out.
