From YouTube: CNCF Storage WG 6/9/2017
Description
CNCF Storage WG meeting: 6/9/2017
https://github.com/cncf/wg-storage
A: We're going to first look at just who was able to join. I'd love for us to do some quick introductions: who you are, and maybe briefly why you're joining, which can be as simple as "just curious"; that's totally fine. But I'd love for everyone to understand a little bit about who's participating. Then we're going to hear from Saad and Jie about the work that's been going on to introduce a container storage interface. They've got a presentation that they're going to walk through, and I think we'll use the majority of the time both for that presentation and for discussion around it. I'd like to save at least a handful of minutes at the end, just to collect topics for future meetings that folks would like to talk about.
B: Hey all, this is Jie. I'm from Mesosphere, and I'm an Apache Mesos PMC member as well. I've been working a long time on open source projects, focusing on storage, containers, and networking.
A: Okay, let's see. Howard?
A: Sorry about that. Okay, Brian. Brian, why don't you go next?
B: Can everyone see my screen? Yep? Okay, all right. So, my name is Jie, and Saad and I today want to walk you through and present the current status of the Container Storage Interface. I'm going to do the first part of the presentation, and Saad will do the rest. So here's the agenda of the presentation: I'm going to give you some motivation first.
B: So the motivation is this. We have seen the success of CNI in the networking space: there is a single spec that all the container orchestration systems and the networking vendors can integrate with, and we see the success that comes from that. We are thinking, why not make an interface like that for storage as well, so that all the container orchestration systems and storage providers can integrate against one spec to make it work? That's what the goal is trying to say.
B: With this spec, container orchestration systems like Kubernetes, Docker, Mesos, and Cloud Foundry can consume any third-party storage vendor's storage system without having to do any in-tree work. For example, Kubernetes right now uses the in-tree model for their volume plugins, and that's kind of painful. And with this interface, storage providers don't need to write multiple plugins for different container orchestration systems; they can just write one plugin that works for all the COs.
B: So that's kind of the motivation. Now I'll give you an overview of what CSI is and what's in the interface. Basically, a storage provider (SP) implements two plugins, and each plugin in CSI is a gRPC service. The two plugins are the node plugin and the controller plugin. The reason we need to separate these two plugins is that some functionality from the storage provider has to run on the node where the volume will be used, but the rest of the functionality can run anywhere. So we split it into two plugins so that we can allow a lot of different deployments, as you will see later in the presentation, and it's okay to ship a single package that packages these two plugins together.
B: You can provide those two gRPC services with a single container, for example, and distribute that single container through the container orchestration system so that it can run both plugins. The container orchestration system will then interact with the plugins provided by the storage vendor to do things like dynamically provisioning and deprovisioning a volume, attaching and detaching a volume from a node, and mounting and unmounting it. That's the scope for CSI for now, for 1.0; other functionality we can discuss later, after 1.0.
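For illustration, the two-service split Jie describes might be sketched in Go as follows. This is a minimal sketch, not the actual API: the real spec defines these as protobuf/gRPC services, and the request and response shapes below are invented stand-ins; only the RPC names come from the talk.

```go
// Sketch only: the real CSI spec defines these as protobuf/gRPC services.
// The request/response shapes here are stand-ins, not the real messages.
package csisketch

import "context"

// ControllerService can run anywhere in the cluster (e.g. on a master node).
type ControllerService interface {
	CreateVolume(ctx context.Context, name string, capacityBytes int64) (volumeID string, err error)
	DeleteVolume(ctx context.Context, volumeID string) error
	// ControllerPublishVolume makes a volume available on a node ("attach").
	ControllerPublishVolume(ctx context.Context, volumeID, nodeID string) error
	// ControllerUnpublishVolume is the inverse ("detach").
	ControllerUnpublishVolume(ctx context.Context, volumeID, nodeID string) error
}

// NodeService must run on every node where volumes will be used.
type NodeService interface {
	// NodePublishVolume mounts the volume at a path on the node.
	NodePublishVolume(ctx context.Context, volumeID, targetPath string) error
	NodeUnpublishVolume(ctx context.Context, volumeID, targetPath string) error
}
```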
B: So we envision multiple architectures for deploying the storage plugins and for how the CO will interact with the storage plugin. Here's one potential architecture that people might use. Of the two plugins, the controller plugin runs on a master node, for example (in Mesos, that would be the master host), or anywhere in the cluster, in a container, and the CO talks to the controller plugin using gRPC. Then you also deploy the node plugin on every single node where you want to use the volume. You ship these two plugins as separate containers, and the CO will manage those containers on the different hosts and talk to them when appropriate. So that's one architecture. Oh, by the way, feel free to interrupt me if you have any questions.
B: So, again, that's one potential architecture: the two plugins deployed separately, the controller plugin in one place and a node plugin on each node. And this is a different architecture that we could potentially use, basically a headless design. You don't have a centralized controller component in the middle; instead, you deploy both the controller plugin and the node plugin to every single node in the cluster, and the CO talks to them locally on every single node.
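Either way, the packaging can be as simple as one binary serving the plugin services over a local socket. A minimal sketch, assuming the Unix-domain-socket transport Saad mentions later in the call; the socket path is made up:

```go
// Sketch: one binary serving the plugin services over a Unix domain
// socket, as in the headless deployment where every node runs both
// plugins. The socket path is illustrative.
package main

import (
	"log"
	"net"

	"google.golang.org/grpc"
)

func main() {
	// The CO and the plugin agree on the socket path out of band.
	lis, err := net.Listen("unix", "/var/run/csi/plugin.sock")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	s := grpc.NewServer()
	// A real plugin would register its generated Controller, Node, and
	// Identity service implementations on s here.
	log.Fatal(s.Serve(lis))
}
```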
B: Those RPCs in the specification enable us to, for example, dynamically provision and deprovision a volume, attach and detach a volume from nodes, and mount and unmount a volume on a node. We want to support not just mountable file system volumes; we also want to support raw block access to the volumes. Another goal is that we want to support local storage, for things like LVM, though that part of the design isn't very tight yet.
B: There are some non-goals in 1.0. Specifically, we don't want to dictate the lifecycle management of a plugin: it's up to the CO to decide how to deploy, install, upgrade, uninstall, and manage a plugin, and how to restart it when there's a failure. These are totally out of scope for CSI; it's up to the CO to decide how to do that. We also don't want to introduce first-class concepts like storage classes initially, and we don't want to define protocol-level authentication or authorization.
B: You can use systemd to launch those services as daemon services, or use RPM to package and distribute them; that's totally fine, and we don't want to dictate it. We also don't want to dictate whether the storage is POSIX-compliant or not, because many of these storage systems are not actually POSIX-compliant, so we don't want to dictate that either. All right, so that's pretty much the overview, and now Saad is going to go into the detail of all these interfaces.
H: What's next? Somebody's giving me echo... okay, I think I'm okay now, so let's get started. As Jie said, we separated the interfaces into three distinct services: the controller service, the node service, and the identity service. In reality, these three services can be packaged into a single container or a single endpoint, but they need to be separable; in particular, the controller must be accessible from anywhere within the cluster.
H: Can you see my screen? Yes? OK. So first up is the controller service. Actually, before we jump into this, I wanted to give a piece of guidance about how we figured out what should be in the 1.0 interface and what shouldn't be. We had a lot of people suggesting really ambitious features that we could potentially put into a 1.0 interface, and we decided not to tackle those. The idea was that, for 1.0, this interface should be essentially the least common denominator:
H: what would be the most common layer between container orchestrators and storage vendors that gets them off the ground? So we wanted to tackle mount and unmount, attach and detach, provisioning and deletion. Beyond that, we can expand in future CSI versions, but that's just the scope for v1.0. So, jumping into the controller service.
H: The vast majority of the commands here are optional. The first command, ValidateVolumeCapabilities, is used to figure out whether the plugin supports these other commands; so when you call it, it can return whether it supports CreateVolume, DeleteVolume, ControllerPublish and Unpublish, and the rest of these commands. CreateVolume and DeleteVolume are used to provision a new volume or to delete a volume.
H: ControllerPublishVolume and ControllerUnpublishVolume are calls that are executed against the control plane to make a volume available on a specific node. Those of you in the Kubernetes world are probably familiar with these as attach and detach. We went over the naming of this back and forth a lot. The problem with attach and detach was that it was confusing, because for some storage providers the attach and detach operations are triggered from the node machine, and really the distinction that we wanted to make was that making a volume available on a machine
H: can essentially be broken into two components: the component that can be executed from anywhere in the cluster, the controller publish, and then the component that is executed on the node machine, the final steps required to actually mount the volume into a known location on the machine. So we basically called that whole process "publish", and we have two versions of it.
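Continuing the earlier Go sketch, the two-phase publish flow might be driven like this from the CO side. makeVolumeAvailable is hypothetical orchestration glue, not a CSI call:

```go
// Sketch: a CO driving the two-phase publish flow, reusing the
// illustrative interfaces from the earlier sketch (same package).
package csisketch

import "context"

func makeVolumeAvailable(ctx context.Context, c ControllerService, n NodeService,
	volumeID, nodeID, targetPath string) error {
	// Phase 1: can run from anywhere in the cluster ("attach" in
	// Kubernetes terms).
	if err := c.ControllerPublishVolume(ctx, volumeID, nodeID); err != nil {
		return err
	}
	// Phase 2: runs on the node itself and mounts the volume at targetPath.
	return n.NodePublishVolume(ctx, volumeID, targetPath)
}
```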
H: Next up is ListVolumes, and this call returns all the volumes that are available from a particular storage plugin. The idea here is for a CO to be able to handle pre-provisioned volumes. The call is optional, and it's also paginated, so if you have a large list of volumes, a request contains the number of volumes that should be returned in each call, plus a token to continue from.
H: Again, this is optional; we realize that this call may be very heavyweight for a lot of storage providers, and if they don't want to implement it, they don't have to. On the Kubernetes side, we're probably not going to use this very frequently. The next command is GetCapacity. This is to figure out the total capacity of the storage pool from which volumes will be provisioned. Again, it's an optional call, and the idea here is that CreateVolume calls are going to be carving out volumes from some pool of storage.
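Back on ListVolumes, a CO draining the paginated call might look like the following, continuing the same sketch package. The method shape is illustrative; in the real spec the page size and token travel in the request message:

```go
// Sketch: draining a paginated ListVolumes-style call. An empty next
// token signals the last page.
package csisketch

import "context"

type VolumeLister interface {
	ListVolumes(ctx context.Context, maxEntries int32, startingToken string) (ids []string, nextToken string, err error)
}

func allVolumes(ctx context.Context, l VolumeLister) ([]string, error) {
	var out []string
	token := ""
	for {
		ids, next, err := l.ListVolumes(ctx, 100, token)
		if err != nil {
			return nil, err
		}
		out = append(out, ids...)
		if next == "" {
			return out, nil
		}
		token = next
	}
}
```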
J: Hey, so I hope this isn't getting too detailed, just a quick question that touches on both of these previous calls. Is it understood that a storage plugin targets a single endpoint of a given type, or all endpoints? So, in the example of NFS, you could have multiple shares: would these calls go across all the shares that are known to the plugin, or would you run an individual plugin instance per share that you wanted to access? I'm thinking also of storage arrays with multiple NetApp boxes, for example.
H: There will be one instance of the plugin running per storage provider, and then these calls get made per volume. So in the NFS case, you would have one NFS volume plugin, potentially broken into a controller plugin plus a node plugin running on every single node and the master. And then, when the CO wants to make an NFS volume (a particular instance of an NFS volume) available, it's going to call, for example, ControllerPublishVolume referencing that particular NFS share, and say: please make that available.
Z: My understanding is that if you had multiple instances of a particular plugin type for some technology, that would be plugin-specific. I don't think the spec actually calls out that you're only allowed a single instance of a plugin type for a specific technology. That's my understanding, right?
H: We define idempotency in the protocol, and I'll point you at the spec at the end of the presentation. So, for example, CreateVolume and DeleteVolume are expected to be idempotent. We try to achieve this by passing in a parameter which is a unique ID, or really the volume name, which is a little bit overloaded. The idea is that if you call CreateVolume with the same name twice, say the first call ends up in a network timeout and the CO calls it again,
H: the storage provider should be able to not provision a new volume, but recognize that the volume ID is the same and return the same volume under the covers. Again, we leave that behavior optional to the storage provider: if they're unable, for whatever reason, to provide idempotency, then they can choose to provision a new volume on every call, with the caveat that you may potentially end up with unused volumes.
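A storage provider's side of that idempotent CreateVolume could look roughly like this, as another file in the same sketch package. It is keyed on the caller-supplied name; provisionOnBackend is a hypothetical stand-in for the vendor-specific call:

```go
// Sketch: idempotent CreateVolume on the storage-provider side.
package csisketch

import "sync"

type volumeStore struct {
	mu     sync.Mutex
	byName map[string]string // volume name -> volume ID
}

func (s *volumeStore) CreateVolume(name string) (string, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	// A retried call with the same name returns the existing volume
	// instead of provisioning a duplicate.
	if id, ok := s.byName[name]; ok {
		return id, nil
	}
	id, err := provisionOnBackend(name)
	if err != nil {
		return "", err
	}
	s.byName[name] = id
	return id, nil
}

// provisionOnBackend is a stand-in for the vendor-specific provisioning call.
func provisionOnBackend(name string) (string, error) {
	return "vol-" + name, nil
}
```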
H: And in fact, I don't cover error codes in this presentation, but if you look in the spec, we have error codes, which are a way for the storage provider to suggest the expected recovery behavior for any error case, i.e. whether the CO should retry or not. So there is a mechanism in the spec for a storage provider to say: this call failed, don't try again; or: try again with exponential back-off. Great.
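On the CO side, honoring those hints might look like the sketch below. In the real spec the storage provider encodes this in per-call error codes; errRetryable here is an illustrative stand-in for inspecting them:

```go
// Sketch: CO-side retry loop with exponential back-off.
package csisketch

import (
	"context"
	"time"
)

func callWithBackoff(ctx context.Context, attempt func() error) error {
	delay := time.Second
	for {
		err := attempt()
		if err == nil || !errRetryable(err) {
			return err // success, or the SP said "don't try again"
		}
		select {
		case <-time.After(delay):
			delay *= 2 // exponential back-off between retries
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

// errRetryable is a stand-in: a real CO would inspect the error code
// returned by the storage provider on the failed call.
func errRetryable(err error) bool { return true }
```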
L: Thank you. Is there any vision or thought as to what the cardinality is between controller services and container orchestrators? What I'm thinking of is: if you had something like a federation of two instances of Kubernetes, or one of Kubernetes and one of Mesos, would you ever have one controller service talking to both simultaneously, or would you deploy two controller services?
H: This goes back to what Michael was mentioning: we try not to dictate the packaging as much as possible; we try to limit ourselves to the interface. So we left it open. As far as we are concerned, the service is exposed as a UNIX domain socket, and if you want to implement your CSI plugin as some service that is exposed both to a Mesos cluster and a Kubernetes cluster, go for it. The spec is not opinionated on that at all.
I: If you look at a pattern that I see quite often in Kubernetes and other container orchestrators, it's to have centralized storage in the infrastructure layer, not being managed by the container orchestrator at all, and often that storage will get shared between VM services and container services. So to me that seems like a very viable approach.
H: You could implement one but not the other, though sometimes the reverse doesn't make sense. We'll go in and clarify that in the spec.
H: So we divided it into the interface, which is everything that you're seeing, the gRPC interface, and packaging. We specifically did not make a specification around packaging; anything around packaging is going to be a recommendation. So whether it's a Docker container, or how you execute that Docker container, all of these will be recommendations or examples of how you can run it on various systems. The idea with the spec is to make it as generic as possible. Michael had a good way of putting it:
H: it's like the various layers in networking, where the HTTP layer doesn't really care what's going on underneath the covers. We want to make sure that the CSI spec is essentially defining just the interface and nothing below it, and therefore we leave the packaging pretty flexible.
I: One thing is, I think I'll be the first to admit that we are not Windows experts. I don't think Saad is a Windows expert (I think that's fair to say), and I don't know Jie's expertise in this area. So if there is some gaping Windows hole, that is definitely something that we should understand, to make sure that we're not closing any doors there. Is that fair to say, Jie? Yeah.
F: Yes, so the first call, ValidateVolumeCapabilities, basically returns a list of what is and is not supported by the plugin, I think?
H: Yes, I'm sorry, yeah, that's absolutely right. ControllerGetCapabilities is used to get the capabilities, to figure out which commands are supported by the volume plugin. The first call, ValidateVolumeCapabilities, is used for the pre-provisioning case: if you have volumes that did not go through the CreateVolume flow, volumes that were created beforehand, out of band, ValidateVolumeCapabilities is a mechanism to verify whether that particular volume is going to be supported.
H: We're going to call through to the storage provider with ValidateVolumeCapabilities. Say we call the NFS volume plugin and tell it: the user intends to use this as a block device, is that okay? NFS cannot be used as a block device, so it's going to reject the call, and that's an indication to the CO that this is an invalid request by the user.
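The NFS rejection example might look like this in the sketch package. The access-type enum and method shape are illustrative, not the spec's actual message types:

```go
// Sketch: an NFS-style plugin rejecting a block-device capability in
// ValidateVolumeCapabilities, as in the example above.
package csisketch

type AccessType int

const (
	MountAccess AccessType = iota // file-system mount
	BlockAccess                   // raw block device
)

type nfsPlugin struct{}

func (nfsPlugin) ValidateVolumeCapabilities(volumeID string, requested AccessType) (ok bool, reason string) {
	if requested == BlockAccess {
		return false, "NFS volumes cannot be exposed as raw block devices"
	}
	return true, ""
}
```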
H: Those will be called on the node machines themselves. NodePublishVolume and NodeUnpublishVolume are the counterparts to ControllerPublishVolume and ControllerUnpublishVolume. For the most part, these are responsible for making the volume available at a target path: whether that means mounting the device, or mounting a share, or whatever that means for a particular volume plugin, you do that here.
H: But do these nodes have all the requisite bits that are required in order to run this storage plugin? To figure this out, the CO will call ProbeNode against the volume plugin on each node, and in this call the storage provider can do whatever they want to verify that the bits, binaries, kernel modules, etc. are actually available on that machine. If the call fails, then that is an indication to the CO that this volume plugin is unavailable to end users on that node.
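A ProbeNode-style check could be as simple as the sketch below. The specific module name and the /proc/modules check are illustrative; a real plugin verifies whatever bits, binaries, and kernel modules it actually needs:

```go
// Sketch: a node prerequisite check, here that a kernel module is loaded.
package csisketch

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

func ProbeNode() error {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return fmt.Errorf("probe: cannot inspect kernel modules: %w", err)
	}
	if !strings.Contains(string(data), "nfs") {
		return errors.New("probe: required kernel module nfs is not loaded")
	}
	return nil
}
```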
H: It should be called once, on deployment. There is a second version of this that we're considering for future versions of the spec, which would be per volume, to figure out the health of the underlying storage service. The idea there is that it would be called more frequently throughout, to figure out if the underlying volume or the underlying storage becomes unavailable for any reason (the disk is corrupt, anything like that) and let the storage system know. That call is not part of the v1.0 spec; we're hoping to implement it sometime in the future. Okay.
U: I don't know how you're going to be gathering feedback, but I can write out a scenario for this. It's especially interesting in cases where the actual OS, take GKE or others, is very constrained in terms of what kernel modules are available, so it might influence the publishing decision upstream at the controller level, depending on what is available on the node.
H: Let's move through this last command over here, which is NodeGetCapabilities, the counterpart to ControllerGetCapabilities. The idea here is that if there are any optional calls, this is a mechanism by which the CO can discover those. With the spec as defined so far, on the node side of things there are no optional calls: ProbeNode, NodePublish, and NodeUnpublish are all required,
H: so actually this is a no-op for the time being. Next up is the identity service. These are calls that must be implemented by both the node plugin and the controller plugin. The first call here is GetSupportedVersions. This is just a dumb call that says: these are all the versions of the CSI spec that a particular volume plugin understands and supports, so that a CO can use this to verify that the version it intends to use to talk to the plugin is supported.
H: Next up is GetPluginInfo, and this is a mechanism to get the plugin version information, the plugin name, and any optional manifest information about that particular instance of the plugin. We're going to skip over the lifecycle slides and jump to our last slide over here, which is the current status. We have a repository where the spec is currently hosted, and we have a working group, with a link there; you can feel free to join that. We are also planning to hold regular monthly syncs, and invites will be sent out to that Google Group.
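The identity calls just described might be sketched like this, again in the illustrative sketch package. Version is a stand-in for the spec's version message, and supportsVersion is hypothetical CO-side negotiation:

```go
// Sketch: the identity calls, which both plugins must serve.
package csisketch

type Version struct{ Major, Minor, Patch int }

type IdentityService interface {
	GetSupportedVersions() []Version
	GetPluginInfo() (name, vendorVersion string)
}

// supportsVersion checks that the version the CO speaks appears in the
// plugin's supported list.
func supportsVersion(id IdentityService, want Version) bool {
	for _, v := range id.GetSupportedVersions() {
		if v == want {
			return true
		}
	}
	return false
}
```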
H: CO integrations are being tracked in the issues linked at the bottom. So, the next steps for us: for Q2, the plan was to have a draft of the spec published, which is now published; and for Q3, the plan is for us to go back to our respective communities (Kubernetes, Mesos, Cloud Foundry, and Docker), think about what a prototype implementation would look like, and then use that to revise the spec once we all have something along the lines of a working prototype.
AG: I just wonder how that holds for calls like CreateVolume when the result will not come back immediately. There's no explicit call in the spec to poll whether the volume is available yet, and sometimes deleting a volume or creating a volume takes a longer time, especially when snapshots are involved. There are lots of complexities there.
B: I think, yeah, I think you should probably open an issue there, and we can start discussing it in the GitHub group with the right people, and we can revise the spec if we find out that's necessary. I don't think we need to resolve it here; let's revise it during the discussion on the issue. All right, thank you. Awesome.
A
Okay,
so
this
has
been
great
thanks,
so
much
sudden
G
for
presenting
this,
so
just
just
so
folks
to
understand.
You
know
that
there's
a
goal
ultimately
is
to
propose
CSI
into
to
CNCs,
and
you
know,
but
that's
that's
not
something
that
that's
not
what
this
calls
for
that'll
be
a
call
with
the
TOC,
and
that
will
be
something
that
we
do
in
the
future.
So
I
just
want
to
put
that
up
there
in
case
any
questions
about
that
or
curious
about
that.
A
So
I
know
we're
basically
at
a
time
and
I
want
to
respect
that
for
people.
If
folks
would
like
to
really
quick
just
brainstorm,
if
there's
things
they'd
love
to
be
discussing
with
respect
to
storage
and
cloud
native,
if
folks
want
to
throw
things
out
right
now
they
can.
Alternatively,
we
if
you
can
fill
in
the
doc
that
Chris
started
Krista,
quick,
an
adjustable
section
for
brainstorming,
future
topics,
and
if
folks
can
just
go,
add
topics
there,
then
I
can
reach
out
to
you
afterwards
and
we
can
discuss
them
also.