Description
Kubernetes Storage Special Interest Group (SIG) Per-Volume CSI Driver Capabilities - 21 June 2022
Meeting Notes/Agenda: -
Find out more about the Storage SIG here: https://github.com/kubernetes/community/tree/master/sig-storage
A: All right, hello and welcome. This is the Kubernetes SIG Storage community meeting about per-volume CSI capabilities. It looks like we have pretty light attendance today, so, as promised last week, I'm going to offer a slightly more detailed sketch of what my idea is and solicit feedback, but we may have to wait till next week to get more people involved and see if this is the direction we really want to go, if everyone's on board with this, or if this is a dead end. So, scrolling down here to my agenda: here's an example of a Kubernetes persistent volume, and specifically what I'm suggesting we could add that would address all...
A: Okay, yeah, so here's just zoomed in. Here's my persistent volume with all of its fields, yadda yadda. The crucial thing is that every CSI PV (this is a PV of type CSI) has this driver field, and currently that's the only non-opaque information Kubernetes has about what kind of volume it is. We use it to look up, you know, the CSIDriver object and get all those proprietary Kubernetes options.
A: So, not only what the driver type is, but what the subtype is as well. This could just be another sort of opaque string, similar to the driver name; we'd have to, you know, define what is and isn't allowed, but it would just be a string. That would be the only change we would need on the PV side to do this.
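(For illustration, a minimal Go-style sketch of the PV-side change being described; the Subtype field name and shape are hypothetical, nothing here is an agreed API:)

    package pvsketch

    // Minimal sketch of the CSI persistent volume source, trimmed down.
    // Driver is the field that exists today; Subtype is the hypothetical
    // new opaque string being discussed, not an agreed API field.
    type CSIPersistentVolumeSourceSketch struct {
        // Driver is the name of the CSI driver that owns this volume,
        // and today it is the only non-opaque information Kubernetes
        // has about what kind of volume this is.
        Driver string

        // VolumeHandle is the driver's opaque ID for the volume.
        VolumeHandle string

        // Subtype (hypothetical name) would be a second opaque string
        // chosen by the driver, e.g. "nfs" or "iscsi", so Kubernetes can
        // tell flavors of the same driver apart. "" means "behave
        // exactly as today".
        Subtype string
    }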
A: As for the proposal on the CSI side, what I'm proposing is we add a sixth field, called subtype, as an alpha field on the CSI side. Again, this would default to false, I'm sorry, it would default to the empty string, and the empty string would imply, you know, do the backwards-compatible thing, so existing drivers that don't have subtypes could totally ignore this feature and this field; you just leave the field empty.
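(Roughly, in terms of generated Go types, the CSI-side shape being floated would be something like this; the Subtype field is a placeholder, not an accepted spec change:)

    package csisketch

    // Sketch of the CSI Volume message as it would appear in generated Go
    // bindings. The first three fields mirror what the spec has today;
    // Subtype is the hypothetical sixth field being floated
    // (ContentSource and AccessibleTopology are omitted for brevity).
    type Volume struct {
        CapacityBytes int64
        VolumeId      string
        VolumeContext map[string]string

        // Subtype (hypothetical): an opaque string naming which flavor
        // of volume the driver created, e.g. "nfs" vs "iscsi". It
        // defaults to "", which means "no subtype, keep today's
        // behavior", so existing drivers can ignore the field entirely.
        Subtype string
    }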
A: But if it's not empty, do something smarter, where something smarter could involve, again, calling the CSI driver, or looking at some map in the CSIDriver CRD, or we could have multiple instances. I mean, we still would have to decide exactly how we want to handle the out-of-spec stuff that goes into the CSIDriver CRD, but the idea would be...
A: ...that that would get returned here, and of course there'd be a separate limit for the empty one, which would provide backwards compatibility. So that would be the overall limit, and we would have to modify some of the internal Kubernetes stuff to communicate these multiple values to the scheduler.
A: You know, this is a map of string to int64, as opposed to just one int64, but we could figure out how to communicate to the scheduler, on a per-subtype basis, what the volume limits are. And then of course the scheduler can see, for the PVs, what the subtype is, so the scheduler could also be updated to do something smarter if the subtype was not empty. And of course the subtype field, being a new Kubernetes field, would have to be feature gated, and it would default to, you know, alpha and default to off. So for the first release no one would see this, but we could start adding in all the logic around it. In the second release we could flip it to beta and turn it on, and then any CSI driver that was implementing this feature would start filling in that field and you'd start getting new behaviors. So...
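(A minimal sketch of how that gating could look on the Kubernetes side; the gate variable and helper below are invented for illustration, not real Kubernetes names:)

    package gatesketch

    // subtypeGateEnabled stands in for the hypothetical feature gate:
    // alpha and off by default in the first release, beta and on by
    // default in the second.
    var subtypeGateEnabled = false

    // effectiveSubtype is the subtype Kubernetes would act on for a PV.
    // With the gate off, or with an empty subtype, behavior is exactly
    // what it is today.
    func effectiveSubtype(pvSubtype string) string {
        if !subtypeGateEnabled {
            return "" // feature disabled: ignore whatever the driver wrote
        }
        return pvSubtype // "" still means the backwards-compatible default
    }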
B: Could be, I mean, could be. I mean, the second issue that we were talking about the other day, the two issues: one is the volume limits, the other one is the, what is that, num published nodes or something? So potentially the second one might be addressed by this, because we are changing the scheduler side as well. Well...
A: The issue is that calls like ListVolumes would not be directly changed. Now, ListVolumes returns messages of type Volume, so you would get the subtype for everything you listed, but whether the published nodes field is populated or not is an all-or-nothing capability today.
B: It doesn't have to be completely addressed; I'm just saying that, since this change would actually change the controller side, it's possible that that one can also, based on this one, get some changes.
A: I wanted to draw the contrast between this proposal, which, you know, is one CSI change to add this field, and potentially this other field, that we would do one time (it's one Kubernetes API change), and it allows us to basically continue using our existing out-of-CSI-spec hacks, you know, the CSIDriver CRD, because that's the approach we've used up to now to specify non-CSI behavior that's Kubernetes specific. Oh...
A: ...the CSIDriver. Or we could just change CSIDriver to also have new fields of this style, with a map of string to something, so that you could then, on a per-subtype basis, get different options. So, like, fsGroupChangePolicy could become a map, or we could add a second field that was the map for fsGroupChangePolicy, and then we could look up the policy based on the subtype.
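(One way to picture that CSIDriver-side option; the per-subtype map and lookup helper below are hypothetical, only the single fsGroupPolicy value exists today:)

    package csidriversketch

    // Sketch of a CSIDriver spec where Kubernetes-specific options could
    // vary per subtype.
    type CSIDriverSpecSketch struct {
        // FSGroupPolicy mirrors today's single, driver-wide setting,
        // e.g. "File", "None" or "ReadWriteOnceWithFSType".
        FSGroupPolicy string

        // FSGroupPolicyPerSubtype (hypothetical) would override the
        // value above for specific subtypes, keyed by the PV's subtype.
        FSGroupPolicyPerSubtype map[string]string
    }

    // fsGroupPolicyFor resolves the policy for one volume: use the
    // per-subtype entry if there is one, otherwise fall back to the
    // existing driver-wide field.
    func fsGroupPolicyFor(spec CSIDriverSpecSketch, subtype string) string {
        if subtype != "" {
            if p, ok := spec.FSGroupPolicyPerSubtype[subtype]; ok {
                return p
            }
        }
        return spec.FSGroupPolicy
    }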
A: If we want to go back to more of, like, option two, that would be to, rather than forcing Kubernetes to keep track of this subtype field, just add a bunch of new CSI RPCs that basically allowed kubelet or the sidecars to query on a per-volume basis: for this volume, what is the fsGroup change policy? For this volume mode, what is the fsGroup change policy? And they could just pass in, you know, the ID it has today, which is, I think, just a string.
A: So the idea is, we could, instead of having the concept of a subtype, just make Kubernetes ask the CSI driver every time it wants to know something specific about a volume, right? We could say: for this volume, what is the attach limit? For this volume, what is the fsGroup change policy?
A: What is the SELinux policy? Right, and then those would all be new CSI RPCs that you'd have to add to the CSI spec. So, okay, that's a lot of work and a lot of change in the CSI spec, and what I wanted to point out was: this [the subtype approach] allows us to avoid lots of CSI spec changes, basically pushing all the complexity back into Kubernetes by just having one new key, and then keeping all the other stuff out of the CSI spec, with the exception of the volume limits, or the node limits, as we talked about. So this is the general shape of the combination of one and two that we had talked about last week. It's not beautiful. I mean, it's still a little bit gross.
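(For contrast, a rough sketch of what that option-two direction would mean; every RPC below is invented for illustration, none of them exist in the CSI spec:)

    package rpcsketch

    import "context"

    // PerVolumeQueries sketches the kind of per-volume questions kubelet
    // or the sidecars would have to ask the driver directly, each one a
    // brand-new RPC that would need to be added to the CSI spec.
    type PerVolumeQueries interface {
        // How should fsGroup ownership changes be applied to this volume?
        GetFSGroupChangePolicy(ctx context.Context, volumeID string) (string, error)

        // How (if at all) can SELinux labels be applied to this volume?
        GetSELinuxPolicy(ctx context.Context, volumeID string) (string, error)

        // How does this volume count against the node's attach limit?
        GetAttachLimit(ctx context.Context, volumeID string) (int64, error)
    }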
A: ...to get a different limit that is potentially lower or higher, and potentially get better behavior. So I just wanted to point out that I think this is relatively lightweight and simple, but it's not totally perfect, and I just wanted to sort of poll the room and say: does anybody hate this as the path forward? Do we need to seriously go back and look at, you know, option number one?
C: Yeah, like, I kind of agree. I can see this whole subtype thing, you know, having to kind of duplicate everything with this subtype field. I mean, I agree, it seems a little warty. What about, instead, if we looked at something like... so, the couple of ideas that I have: one, if we had separate CSI drivers per subtype, then from the Kubernetes side that would work fine. The problem, of course, is we don't want a ton of drivers to install, and the second thing is that the subtype really is, you know, expressing the same thing as a different driver. How about, what if we could do something where we allowed the same server to respond to several different driver names?
A: Hold on, hold on. So I agree with you in principle that it is like creating more CSI drivers, but it has the crucial difference that all of the different subtypes of a given CSI driver do go through the same CreateVolume funnel and can use the same storage class. And as we discussed when we tried to, you know, go into detail on option number one, let me just find my notes to be absolutely sure: when we were talking about that, we realized that you would be forced to have a separate storage class per CSI driver, and there would be no way to let the CSI driver choose the type of volume for you. You would have to choose by your choice of storage class. And furthermore, there's all the other...
A: I see what you're saying. You're saying somehow change the concept of the CSI driver in the storage class to be, like, a list.
A: The issue is, the moment you create a PVC... let's say, for instance, you have three different CSI drivers installed, right? They're all running their own external-provisioner sidecar.
A: Okay, okay, okay, I get what you're saying. Okay, so I have one CSI driver that supports both iSCSI and NFS. I have two storage classes, my NFS storage class and my iSCSI storage class, and I've just tricked the external-provisioner sidecar into watching both of those and sending both of them to that particular driver.
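(Concretely, the pattern described here looks something like the sketch below; the driver and class names are made up, and the struct is a local stand-in rather than the real StorageClass type:)

    package scsketch

    // StorageClass stands in for the relevant bits of a real
    // StorageClass: its name, the CSI driver (provisioner) it points at,
    // and its parameters.
    type StorageClass struct {
        Name        string
        Provisioner string
        Parameters  map[string]string
    }

    // Two classes, both handled by one CSI driver and one
    // external-provisioner deployment; the driver decides internally
    // whether a request becomes an NFS or an iSCSI volume.
    var (
        nfsClass = StorageClass{
            Name:        "example-nfs",
            Provisioner: "example.csi.vendor.io",
            Parameters:  map[string]string{"protocol": "nfs"},
        }
        iscsiClass = StorageClass{
            Name:        "example-iscsi",
            Provisioner: "example.csi.vendor.io",
            Parameters:  map[string]string{"protocol": "iscsi"},
        }
    )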
A: So, the way we think... one of the reasons I think that people are doing the subtype thing now is because they want to have a single storage class and a single CSI driver that funnels all the requests in there, and then lets the driver decide to give you, you know, a different protocol depending on what you asked for, because it can have a lot of intelligence down at that layer.
C: Yeah. Actually, a question for Amma, who seems to be on the call: would this help the Windows stuff? I mean, I know we had spoken about explicit support for understanding the different file systems, but maybe this could be a way to simplify that.
E: So the current design, right, it can basically handle it no matter whether it's Windows or Linux. In your storage class you just have one default storage class, and based on the operating system on each node it can figure out what file system it should use: for Windows it will be using NTFS, and for Linux it will default to ext4. So that's the current situation, the...
E: There are already changes that make this kind of available. The only thing for PD...
E: ...GCE PD is, we have somewhere, like, an additional parameter setting, I think in the provisioner controller somewhere, where we set the default to ext4, no matter whether it's a Windows node or a Linux node. That's the reason, and we didn't change that here, because we worry about someone depending on it, because, remember, there is the drawback that the PV will not have the file system field filled in if we do it that way, because at provisioning time...
E: ...we don't know what file system it's going to use. So we haven't changed that behavior for GCE PD yet, but if we didn't add that parameter, I think in the controller, like the provisioner, somewhere, then yeah, everything should work.
C: Yeah, because here the subtype is added to the PV, so this change would actually solve all that on its own, because it could have the same default storage class and all that stuff, and then the PD CSI driver would just stamp a subtype that is like "windows" or "ntfs" or something, and then it would be tracked in the PV.
A: "Sub"... I mean, I don't care about the name of the field, to be honest, if you have a better one.
A: Right, so, for the NetApp case very specifically, this would be iscsi or nfs most likely, and then, you know, over time we do have a few others that we're adding, but yeah, we would just pick appropriate strings for each of those and stamp them, or we would just return them in the Volume object from CreateVolume and then we'd expect them to get stamped here.
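(On the driver side the idea would be roughly the following; the Subtype field and the protocol strings are illustrative only:)

    package driversketch

    // Volume is a minimal stand-in for the CSI Volume message with the
    // hypothetical subtype field from the earlier sketch.
    type Volume struct {
        VolumeId string
        Subtype  string
    }

    // reportVolume sketches how a multi-protocol driver would stamp the
    // flavor it actually provisioned ("nfs", "iscsi", ...) into the
    // Volume it returns from CreateVolume, so the external-provisioner
    // could copy that string onto the PV.
    func reportVolume(volumeID, chosenProtocol string) Volume {
        return Volume{VolumeId: volumeID, Subtype: chosenProtocol}
    }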
A: So the exact name we're a long way away from agreeing on, but the main thing was: if we do it this way, it saves us from having to make lots and lots of CSI spec changes, because it basically gives us one extra key that Kubernetes can key off of for all of its proprietary decision making, like, you know, the fsGroup policy and the SELinux stuff that we don't want to put into the CSI spec proper.
A: Well, the kubelet would have to, you know... there'd be a feature gate around this. So if the feature gate was turned off, it would just do what it does today. If the feature gate was on, it would look at this field, and any time it was non-empty it would do something different than what it does today, right, either by looking up...
E: That information would be retrieved during volume attachment and the mount.
A: Yeah, yeah, if that's true. The policy matters at attachment time on the node; similarly, SELinux labeling, I think, matters at attachment time. The volume limits, of course, matter at scheduling time, so whatever is currently pushing that information from the node to the scheduler would have to, you know, push it, and then the scheduler would have to be aware of these things at scheduling time.
A: And if this field is populated, you shove that into whatever the kubelet is currently sending to the scheduler, so we could need changes at that layer as well, but it would be similar to this: we would preserve the old field and add a new optional field for the more detailed information, so the backwards compatibility would be easy.
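(Applied to the node-to-scheduler path, that would look roughly like this; Count mirrors today's single per-driver limit, while CountPerSubtype and the lookup helper are hypothetical:)

    package limitsketch

    // VolumeNodeLimits sketches the per-driver allocatable information a
    // node reports for the scheduler.
    type VolumeNodeLimits struct {
        // Count is the existing overall volume limit for the driver.
        Count int64

        // CountPerSubtype (hypothetical, optional) would carry a limit
        // per subtype, keyed by the same strings the driver stamps on
        // its volumes.
        CountPerSubtype map[string]int64
    }

    // limitFor shows the backwards-compatible lookup: volumes with no
    // subtype, or subtypes without their own entry, keep using the old
    // overall count.
    func limitFor(l VolumeNodeLimits, subtype string) int64 {
        if subtype != "" {
            if c, ok := l.CountPerSubtype[subtype]; ok {
                return c
            }
        }
        return l.Count
    }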
A: So, Matt, you sort of spoke against this. Are you still kind of hating it, or are you ambivalent? What is your stance?
C: ...like, you know, in a single opaque thing that it gets for each node, right? But, I mean, this thing of having it tied to the storage class is sort of hard, because there are these use cases where... I mean, I guess basically what we're talking about is that we kind of want the driver to stamp extra information on the PV. So maybe it's unavoidable that this is truly a new bit of information that can't be captured in the CSIDriver, even from the Kubernetes side.
A: Well, it's the only way to give Kubernetes the ability to tell the difference without, you know, making it ask on a per-volume basis what to do with any particular volume, while also preserving the ability to just have one storage class that's your default, that maps to all your subtypes and lets the driver decide. So, like...
A: ...any other option gives up one of those two benefits. And the other thing is, I hope we can convince ourselves that in principle this is doable in two releases, right? If we set up the KEP and write it up and get it approved, and do the CSI driver work and do the kubelet work, you could have it be beta, and usable, in the release after that; so that's a two-release turnaround to actually solving the problem. My big concern with option number one is that, while I believe we should do all of the things that have been mentioned about making the sidecars less onerous and making the node plugins, you know, able to start up and shut down, and all those things are good, I just have no faith that we could get those done in a reasonable amount of time. Oh...
A: Okay, so if no one else has any comments or criticisms, I'm okay with sort of wrapping this up and reconvening in a week with, hopefully, a wider audience. I'd like to get, you know, buy-in from everyone who's actually interested in solving this, because I don't see a way to actually get this addressed in a reasonable time frame. And if we decide that option one is where we want to go long term, I think that just means, you know, we're basically just not solving the problem; we're just nibbling around the edges of some of these other problems, like the memory footprint of the node plugin and the complexity of dealing with lots of sidecars. And we should do those things, but I don't see the actual per-volume capabilities getting addressed through that mechanism. And it would be fine if that's where we ended up, right?
A: This is a proposal that I think would make things better, but it's not do or die. From my perspective, we're limping along in the status quo, and we can continue to limp along if we have to. So if we come down and say we don't do any of this, that may be the outcome, but I'm hoping that this is a not-too-bad path forward for us.
C: Yeah, so perhaps... I mean, I think you've basically convinced me. I still feel like this Kubernetes plumbing for the subtype is messy and could be improved, but maybe actually the most expedient way to deal with that is to start the KEP, and then, in going through the detailed Kubernetes-side design, we can either improve things or convince ourselves that this is the best we can do. Yeah.
A: Yeah. Assuming that we're happy with the Kubernetes-side changes, there would still be that one CSI spec change. We would have to convince the CSI community that this isn't just some wormhole that we're going to shove a bunch of functionality through, and that it actually means something, you know, concrete. But I'm pretty sure we could wordsmith that for the CSI...
B: Gene was actually thinking about the volume group; like, one particular thing is, you know, for block volumes you can actually add existing volumes to a group, but for file volumes you cannot, right? So that's one thing that depends on the subtype, but you could do this. Oh...

B: Right, I'm just saying that that would be something that this subtype thing might help with.
E: Yeah, so from a design point of view, a driver should already be able to handle, like, different, I don't know, characteristics of volumes, or features, right? It's not possible to just have one driver for every single difference.
A: Yeah, well, I mean, the CSI driver itself should know what type it's dealing with for any given volume and what it can and can't do; it just can't tell the sidecars about that information, right? The sidecars have to assume every volume is the same, capability-wise, whereas the driver itself, of course, knows.
E: A driver should supposedly handle different, let's say, storage classes, and let's say each storage class has a different, how to say, property of the volume, and because of that difference you probably have different values or capabilities.
A: Yeah, absolutely. I mean, you can always have more than one storage class, even if you only have one CSI driver, and you can specify as many as you want, but I'm pretty sure that the common case in most places is just one storage class and you just use it. I get the impression that that's what most people do.
B: Normally you would have multiple storage classes, right? They would just be for different things, like for service level or something, right? They'd have something like that.
A: Potentially. If you're deploying, like, apps from manifests, you know, app manifests don't say anything about storage classes, right? The storage class field is always empty in an app manifest. So in a lot of cases you're just going to get the default, because you're using something off the shelf that doesn't specify a storage class; it's willing to accept whatever you give it. Of course, if you're doing more custom things on a particular cluster, then you know: okay, I have gold, silver and bronze storage classes and this app needs to be gold.
E: So you mentioned that, like, if using multiple storage classes, there still need to be changes, right, to be able to handle all the use cases mentioned here.
A: Excuse me, not necessarily. I mean, you can have multiple storage classes today, and you can continue to have multiple storage classes after this. It'll just be that when...
A: So the perfect example is, you know, with a NetApp driver: if you ask for an RWX volume, you are definitely going to get NFS if it's an RWX file system volume, right, because we can't do that with iSCSI. And so all we need to know is, you know, the storage class is the NetApp one and the access mode is RWX, and based on that we can say: okay, we're giving you NFS. But if you turn around, use the same storage class and ask for a raw block volume, you're definitely going to get iSCSI, because we can't give you an NFS raw block volume.
A: So again, that's just, you know, we look at the specifics of the PVC and the volume capabilities that are coming into the CSI layer, and we decide what we can give you that would match your needs. Now, if you ask for an RWO file system volume, then we could give you iSCSI or NFS, it doesn't really matter, and at that point it's just going to be up to the driver's preference, or wherever we have space, or whatever makes sense.
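(That selection logic, as described, boils down to something like the sketch below; a simplified illustration, not the actual driver code:)

    package protosketch

    // pickProtocol sketches the decision described above for a driver
    // that supports both NFS and iSCSI behind a single storage class:
    //   - raw block volumes have to be iSCSI (there is no NFS raw block),
    //   - RWX filesystem volumes have to be NFS (iSCSI can't share RWX),
    //   - RWO filesystem volumes could be either, so the driver falls
    //     back to its own preference, capacity, or whatever makes sense.
    func pickProtocol(rawBlock, rwx bool, driverPreference string) string {
        switch {
        case rawBlock:
            return "iscsi"
        case rwx:
            return "nfs"
        default:
            return driverPreference
        }
    }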
E: Yeah, so that's part of it, right. I'm just saying the current capability is not solving the problem, like, for example, the maximum-volumes-per-node issue, with multiple storage classes, or, I'm not sure, fsGroup; it's also not solving that problem either, right? So you still need to have a way to...
E: Yeah, just because you mentioned, like, a single storage class and multiple storage classes, I just want to confirm that even having multiple storage classes will not solve the problem like the proposal here does.
E: So, like, if you could add here... you already gave, like, some new API edits for the CSI spec, and maybe, if possible, add an end-to-end flow, like, starting from the storage class to the PV to...
A: We can flesh this design out into an actual KEP if we get buy-in. I'm most concerned about Michelle, because she was the one who was pushing for option number one in the first call. We don't have her today, so I hope she can attend next week, and I hope we can get a discussion about that, because if she's okay with this, then I think the next step is to turn it into a KEP and get the appropriate reviewers to start reviewing it, and hopefully that will fill in all the details you want to see. But if she continues to be opposed to it, or if we can think of other reasons why this is a bad idea, maybe we just decide not to do anything, or to slowly chip away at option number one but not, you know, really try to solve this problem in the immediate term, because I...
E: Sure, so I can ping her about, hopefully, next week's meeting, to make sure, yeah, she can attend. Yeah, yeah, she said today she has a conflict.
C: Yeah, exactly, we'll speak to her. Perhaps we can, you know, have a discussion offline if she can't make the meeting next week, but yeah, we will definitely follow up.