From YouTube: CNCF Storage Working Group 10.25.2017
C: Alex sends his regrets; it's his wedding anniversary and he couldn't make it, but he did send out a draft of the slide deck for some of the landscape stuff that we talked about, which Clint will talk about a little more today on the meeting. We'll also be talking about the accompanying white paper, so just check your inbox for sort of the emerging categories that we're thinking of.
B: Can everyone hear me out there? All right, sorry about the technical difficulties; I'm over here in Europe, so sometimes it's a bit challenging. So welcome, everyone, to the meeting. Ben asked me to chair it today because he's got some responsibilities for a conference which is starting tomorrow over here in Prague. There's a couple of things on the agenda that we want to try to cover today, and, as usual, there's opportunity for others to add items to the end of the agenda.
B: Maybe we'll take a second and take a look at some of those slides and get some feedback from the group on what our thoughts are. Based on the last meeting, he sent out the deck; I was clicking on that link and it looks like he updated it, starting on slide 5 through 6 and 7. Has anyone had a chance to take a look at that yet?
C: So definitely don't get too wound up if you don't like what you see; there's ample means to change it. Slide 2 is just basically the proposal to create this landscape as it was originally teed up, and then we met on October the 11th to basically talk about, within the working group, some of the different patterns that we want to reflect in this landscape.
C: Now, it's kind of tricky because, as we started talking about these things, we noticed that there are multiple dimensions. We think it's important to capture what storage systems are, like file, block and object, but some things are file and have an API that allows dynamic provisioning, some things are file and don't, some things are block that do, and some things are block that don't. So how do we represent these things in simple categories?
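The multiple dimensions described above can be sketched as data rather than as one flat category list. A minimal illustration of tagging landscape entries along independent dimensions; the system names and placements here are hypothetical, not taken from the deck:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StorageSystem:
    """One landscape entry, tagged along independent dimensions."""
    name: str
    kinds: frozenset            # subset of {"file", "block", "object"}
    dynamic_provisioning: bool  # exposes an API the CO can provision from
    containerized: bool         # runs inside the container orchestrator

# Hypothetical example entries -- the tags are illustrative only.
systems = [
    StorageSystem("system-a", frozenset({"file"}), True, False),
    StorageSystem("system-b", frozenset({"file"}), False, False),
    StorageSystem("system-c", frozenset({"block"}), True, True),
]

def matching(systems, kind, dynamic):
    """All systems offering a given kind, with or without dynamic provisioning."""
    return [s.name for s in systems
            if kind in s.kinds and s.dynamic_provisioning == dynamic]
```

For example, `matching(systems, "file", True)` selects only the file systems that also allow dynamic provisioning, which is exactly the "file that does" versus "file that doesn't" split described in the discussion.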
C: It's gonna be kind of tricky because there's a lot of Venn-diagram overlap going on, but here are some of the key patterns. Slide 5 is "interoperable", which basically means there's been some work done with the product or project to integrate it with the CO (container orchestrator), such that the CO can use the backend storage system. Basic interoperability would probably be that you can create a volume in the storage system and then you can use the volume with a container running in the CO.
C: Now, notice I didn't say dynamic provisioning, because that's sort of a separate thing, and so self-service is sort of the next improvement upon that, I would say, where basically your storage system exposes an API, an interface, that allows the CO to dynamically provision volumes from it. So you don't have to go talk to a storage administrator to get a volume created.
C: So an easy way to think of this, to sort of reiterate: something can maybe take a system that's not self-service and sit in between it and the CO, and that API framework knows how to talk to the CO and vice versa, and can sort of expose access to that back-end storage platform in a way that is self-service. So I think there's a category there. So that's all about consumption, how storage is consumed from within the container orchestrator.
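One way to picture the pattern just described, an API framework sitting between a backend that is not self-service and the CO, is as a thin broker that translates the CO's dynamic-provisioning request into the manual steps an administrator would otherwise perform. A hedged sketch with invented class and method names:

```python
class LegacyArray:
    """Stand-in for a backend with no self-service API: volumes are
    created by an administrator, by name."""
    def __init__(self):
        self.volumes = {}
    def admin_create(self, name, size_gb):
        self.volumes[name] = size_gb

class ProvisioningBroker:
    """Sits between the CO and the backend, exposing dynamic provisioning
    on top of a backend that has none."""
    def __init__(self, backend):
        self.backend = backend
        self.counter = 0
    def provision(self, size_gb):
        # The CO asks only for capacity; the broker performs the naming
        # and creation steps the storage administrator would normally do.
        self.counter += 1
        name = f"vol-{self.counter}"
        self.backend.admin_create(name, size_gb)
        return name

broker = ProvisioningBroker(LegacyArray())
vol = broker.provision(10)
```

The design point is that the CO never talks to the legacy backend directly; the broker is the "category there" the speaker refers to.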
C: The next one is sort of the actual storage platform itself. Historically, storage platforms have been separate clusters adjacent to container clusters, so they sit next to them, or somewhere where they can be reached across the network. So that's one pattern, where they're sort of adjacent, and then there's another pattern where they're actually containerized and running in the container orchestrator itself.
C: So things like, you know, Portworx, StorageOS, Red Hat container-native storage, Gluster on Kubernetes, etc. So I wanted to call out that sort of runtime deployment difference, and then, yeah, storage access method. I'm not quite sure; Clint, do you have any comment on that? Because I think we covered the classification stuff already in the other patterns, and I'm not quite sure how this fits.
C: The next slides, five, six and seven, basically provide sort of visual representations of them, but I've sort of gone through and looked at this. The big thing, as I said before, is that we've got to be able to overlay, for a given system, which of those patterns it supports and whether it's file, block or object, and whether the diagram will carry those.
B: You know, it's actually using the kernel to do networking, and it's probably sending that off to a cloud provider that's running, you know, a long-running SQL instance, or it's sending, you know, log-based data. So I see that as a very valid use case, but I don't know that we need to capture that in terms of the goals of the storage working group. I don't know that we need to capture that in the short term, but we could bookmark it to say, hey, this area, you know...
C: That's the second one that I'd like to talk about, but yeah. The other thing is, I agree with everything Clint said, that this is the very primitive layer, and I think this is the layer that database systems would use. So, you know, presumably your database is backed by some sort of, if it's a distributed database like Cassandra or something, each node is backed by block, or file on block.
E: It doesn't have to be assumed that every database is based on a file system or on block storage, but databases do eventually end up having their data on a block device, because everything at the end is either on a block device or in memory. So I wouldn't exclude data services such as NoSQL databases, for example Cosmos DB from Microsoft or DynamoDB from Amazon and others, which are data services which can save your state. So if the idea is that you need some place to save the state of the application, or for a container, then you cannot exclude all the other data services. I do agree the main part of the storage subsystems is usually file, or some kind of block system, but the other ones are just a different type of storing your information. Yeah.
B: And I don't disagree with that. I think that capturing, say, data services and the interfaces and the tools and the methods of doing those, along with the key semantics of...
C: Steve described it as a difficult thing to do in one diagram and one paper and have it all make sense. I feel like we need to bite it off in chunks and try to describe certain parts of the landscape individually, because they are pretty different.
A: I think one thing that might be helpful is that there is obviously a larger class of services consisting of, or needed for, stateful workloads. Typically, when people say storage, some people mean all stateful workloads, so that's kind of time-series databases, and databases in general, log databases and all this other stuff, whereas I think storage is typically just the lower-level infrastructure like block, file and object. Yeah.
B: And I feel like part of this is the background that you come from. You know, if you came from the storage world, I think you're pretty aligned to calling everything, primitive-wise, you know, file- and LBA-wise, just core storage, and if you come from the cloud world, then other things, you know, outside of that are storage as well. So I think we should make sure we define things clearly as we write this stuff and create our landscapes.
B: There's, I think, two things that we can do that were basically asked of us at the last update, which happened a month or two back. One is that we work on a landscape which includes the classification but also includes, you know, where certain players fit, so kind of an expansion of what they currently have as a landscape, and then the second thing is a white paper that helps everybody understand, you know, why that landscape was put together and how things interact.
B: You know, why things were done. So I think those are our two deliverables that we have to work on in terms of this landscape. I think we should probably review the slides after this meeting again, but if everybody's in agreement on how this stuff lays out, then we should start to classify, and kind of bring over examples of platforms, or certain platforms, to actually start putting in that landscape, so that we can present that to the TOC.
B: This is something that the TOC is requesting. There's a similar white paper that the serverless working group put together, and, you know, it's something that I think has helped the TOC understand the space a bit more, or gives them a better understanding of how to, you know, make decisions in the space, which is important to the CNCF. So we've been asked as well to put together a storage white paper, and the idea behind this is to help educate, and also to help describe why the landscape was put together.
B: And how it was. So I took a first stab at this, to start to build out a pretty high-level structure which follows on from what we've been talking about in these working groups, and also what was put together by Alex in some of the slides on the landscape. I know that, you know, everybody on the call hasn't had a chance to look through this whole thing, but I guess I'll open it up to anybody who has had a chance to look through the structure.
B: Yeah, I definitely didn't give the group enough time to comment on it. So I'll briefly walk through pieces of this at a high level, and then we probably need to get some volunteers after this call, so that we can sign up and iterate on it, to hopefully get it done and agreed upon by the TOC. But I'd say, at a high level, you know...
B: The goal is to carve out, you know, what is cloud native storage: let's define the category and define the classifications, like I said, to help understand why the landscape was put together. So it starts out by saying what is cloud native storage. It should answer the question of why it is important, and it should ask the question of, like, what types of storage are relevant. The next section gets into this.
B: So it's set up to say, you know, why is it important, with these use cases. Could you go up just a little bit, Steve? And then the next section is talking about the primary functions, and this is where we break out the two key things that the workgroup had defined. One being, I think, the core reason why we'll call something cloud native storage, which is that it can be interoperable, or it's orchestrated: the idea that it can literally attach and detach, on demand, volumes that are requested. And then it sets up...
B: ...the next piece of, you know, cloud native storage, which is: can you self-consume by way of the APIs, and can you give, like, a Kubernetes or a CO consumer the ability to define their own volumes? So it describes those two things. Then it gets into where we are today, like how the ecosystem is in terms of extensibility, so it talks about in-tree volume plugins, such as the things that Kubernetes provides.
B: Then it'll talk about out-of-tree volumes, such as volume plugins like the Docker volume driver and CSI, and then it'll get into the local capabilities that the COs provide for local file systems, etc. Then it steps into the component section, where we actually break out the individual pieces of the landscape. So we set it up to say what our CO is going to be consuming, the functionality, like what it actually looks like once someone uses it. Then it gets into the interfaces and the platforms and the plugins and frameworks, etc.
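The in-tree versus out-of-tree split mentioned above comes down to where the plugin contract lives: out-of-tree drivers implement a fixed set of calls, and the CO invokes them, so new storage systems can be added without changing the CO's own code. CSI itself is a gRPC specification; the sketch below is an illustrative reduction of that idea in a few lines, not the actual CSI API:

```python
from abc import ABC, abstractmethod

class VolumePlugin(ABC):
    """Illustrative reduction of a volume-plugin contract. A driver
    implements these calls; the CO only ever talks to this interface."""
    @abstractmethod
    def create(self, name, size_gb): ...
    @abstractmethod
    def attach(self, volume_id, node): ...
    @abstractmethod
    def mount(self, volume_id, path): ...
    @abstractmethod
    def unmount(self, volume_id): ...
    @abstractmethod
    def detach(self, volume_id, node): ...
    @abstractmethod
    def delete(self, volume_id): ...

class InMemoryPlugin(VolumePlugin):
    """Toy driver, used only to show the contract is complete."""
    def __init__(self):
        self.state = {}
    def create(self, name, size_gb):
        self.state[name] = {"size": size_gb, "node": None, "path": None}
        return name
    def attach(self, volume_id, node):
        self.state[volume_id]["node"] = node
    def mount(self, volume_id, path):
        self.state[volume_id]["path"] = path
    def unmount(self, volume_id):
        self.state[volume_id]["path"] = None
    def detach(self, volume_id, node):
        self.state[volume_id]["node"] = None
    def delete(self, volume_id):
        del self.state[volume_id]
```

The lifecycle order (create, attach, mount, then unmount, detach, delete) is the same orchestration sequence the white paper's volume-orchestration section describes.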
B: The responsiveness, like, you know, how quick is the API to attach and detach volumes; availability, so, you know, is my storage platform highly available; elasticity of the resources I'm going to consume; the ability to have role-based access control, authorization, etc., etc. So I think the capabilities basically just define the strength of the cloud native platform and, you know, how many different use cases it can be relevant for. Next page, and then we get into the requirements section.
B
Containerized
tools
that
that
can
be
packaged
to
make
plug-ins
easy
to
operate.
Then
we
get
into
the
the
types
of
storage,
so
local
remote,
whether
they
have
volumes
or
they're
raw
with
LBA
access,
whether
they're
shared
layered
encrypted
and
then
the
next
section
is
the
volume
orchestrations.
This
is
talking
about
describing
what
happens
during
some,
these
certain
processes
of
attaching
detaching
and
mounting
unmounting,
etc.
And
then
the
kind
of
a
final
section
of
describing
Klaudia
storage
is
expectations
for
performance.
B: So there's two ways of thinking about this: one is, if I'm using different platforms, what's the attach/detach performance, so how quickly should I expect these things to happen, and then the other is, you know, for my create/remove operations, how quickly should that stuff happen? So it's really just understanding, from a control plane, you know, what my expectations are across these types of platforms, and then the data plane basically says it shouldn't matter, because all this stuff is out-of-band and you should get native performance from the platform.
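The control-plane versus data-plane distinction above can be made concrete: control-plane operations (create, attach, detach) go through the orchestration API and are the thing worth timing, while data-plane I/O bypasses that API entirely. A minimal sketch of such a timing harness; `create_volume` here is a stub standing in for a real platform's API, not any actual product's call:

```python
import time

def timed(op, *args):
    """Return (result, elapsed_seconds) for one control-plane call."""
    start = time.perf_counter()
    result = op(*args)
    return result, time.perf_counter() - start

# Stub standing in for a real platform's control-plane API.
def create_volume(size_gb):
    return {"id": "vol-1", "size": size_gb}

vol, create_latency = timed(create_volume, 10)
# Data-plane reads and writes would go straight to the device
# (out-of-band), so the control-plane API latency above should not
# affect I/O throughput at all.
```

In practice you would time attach and detach the same way and compare expectations across platforms, which is exactly the comparison the white paper section proposes.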
B: So I think that kind of summarizes this section of defining cloud native storage: what it is; what are the two key fundamentals, or features or functions, which are lifecycle operations and orchestration; and then what are the different things that you can kind of grade it on to make it a stronger or weaker cloud native storage platform. And I'll stop for comment there.
B: Thank you. Yeah, I mean, it's a straw man, so I definitely want to have the feedback from everyone. The next section of the document gets into describing the orchestrated storage platform, or the cloud native storage platform, whatever you want to call it. I think, for this white paper, we would basically say that that area is still to be defined. I don't know what you guys think about that.
C: So we have journalists reading the CSI spec and all the comments and stuff like that. For those of you that know me personally, I have this rant going about how few good articles I've ever seen about cloud native storage from the press; most of them aren't even correct. So I look at this and I'm like, good Lord, I hope...
C
Somebody
actually
reads
through
this
because
it'll
set
them
straight
before
they
write
anything
if
I
customer's
read
this
it'll
give
them
a
very
fair,
unbiased
view
of
the
world,
so
I
like
it
and
I,
also
think
like
at
the
end.
You
know
just
to
come
back
to
the
original
question.
I
think
that
might
be
a
place
where
we
could
sort
of
you
know
to
oryx.
Earlier
point,
maybe
point
to
like,
if
you're
looking
for
more
persistence
topics
in
general,
you
know,
there's
I,
don't
know.
A: One comment that I have on this is that it might help, just in terms of scoping this, to start with a focus for the white paper, initially, for the November 14th deadline, on a high-level kind of lay of the land, and just define some terminology, and have a more detailed section happen later. I don't know how we get through all of this by November 14th.
C
The
other
thing
is
that
I
don't
think
we
should
like
I,
think
what
we
should
do
is
like
maybe
take
a
process
like
this
white
paper
fill
in
like
a
lot
of
the
non-contentious
stuff,
I
think,
once
we
as
a
group
start
filling
in
the
landscape,
we're
gonna
have
debates
about
what
category
things
should
be
in,
and
that's
gonna
put
put
the
high-value
content
into
the
white
paper
around
specific
to
you.
You
know
that
the
nuances
around
all
this
different
stuff,
so
I
think
and
I
really
feel
like
this.
B: Exactly, and I think that, you know, in terms of November 14th, I feel like the visual landscape that we give the TOC doesn't need to include what we'd call the strength of the cloud native platform, or however we want to define those capabilities. We should basically say November 14th is just saying who's interoperable, who can be orchestrated and may be used by cloud native storage, or as cloud native storage, and then the white paper, I feel like we should...
B: So I think we could probably do this. I think we do two things: one, we need everybody, after this call, to take a peek at it and maybe comment on certain areas that you want to contribute to, and then, two, to decide on a time that we can, you know, as a subgroup, whoever's interested in contributing, meet a few times to make some progress on it. Yeah, sounds good, yeah.
B: Yep, so maybe that's a follow-up. I'll send out a scheduler to get everybody back on when they can be available for a couple of meetings before next week, and those that are interested in contributing and commenting can feel free to join those and help us fill this thing out, but in general, like, feedback from everybody.
D: Okay, yeah, I just have a quick question. I think we talked a little bit about this last time: is there a place somewhere written down what the charter of this working group is supposed to be? I think it is in the documents, like the one you're showing right now, and in the slides, but what's, though, like, the mission?
B: The TOC is the technical oversight committee of the Cloud Native Computing Foundation. So within the CNCF you've got the TOC, which are just people who are volunteering from the ecosystem, who are thought leaders who make decisions, technically, on what the CNCF foundation is supporting. So that's one group. Then you've got a group which is the financial supporters, essentially the sponsors of the CNCF, and that's the governing board, and then you've got an end user committee within the CNCF.
B: That committee describes their use cases and kind of looks at what the CNCF is doing and makes sure it's going on the right path. So there's three groups; the TOC itself is the technical side. Again, it's the volunteers, thought leaders in the industry. So there's a subgroup of that, and what happened was, late last year, we started talking about storage things to the TOC, and, you know, among the TOC...
B: ...there wasn't a high level of awareness about what was going on in the storage ecosystem, same thing for networking, so they decided to form these subcommittees, called the networking working group, the storage working group, the serverless working group, and these are another group of just volunteers that are, you know, there to look at the ecosystem and talk amongst themselves and try to come to consensus, to help advise the TOC on what's going on in the ecosystem and make recommendations to the TOC. So that's essentially what we're doing.
B: You know, we're here as industry experts to try to classify and understand what's going on in the ecosystem, try to put together information to help, you know, educate everyone and grow the community, and help the TOC make better decisions about, you know, projects that the CNCF can help sponsor to grow the community.
C: So the first thing is, like, working groups in the CNCF can be short-lived or long-lived. So that was my first question, right: what's our purpose, and is it short-lived or long? And Ben answered that he envisions this group as being long-lived, and there's some lengthy stuff to get through around CSI and, you know, the other thing Clint was saying, where I think the TOC was surprised...
C
Where
is
the
the
literally
the
amount
of
innovation
in
storage
for
cloud
native
is
sort
of
counterintuitive,
so
I,
don't
I
think
was
in
a
bit
of
a
blind
spot
so
because
of
all
that
innovation
going
on
they
clear.
This
is
quite
a
lot
for
us
to
mean
to
continue
to
refine
and
talk
about
so
it's
longer.
C: The second thing is governance, right, like how does the governance work? You know, Clint laid that out, but, just more crudely put, Ben has the final say, right; we inform Ben, but the working group sort of serves at Ben's pleasure, for a more medieval explanation. Ben's an incredibly busy dude, so, while technically the governance is laid out like that, he strongly weighs the recommendations of our group. The group is still very small right now, and so we don't really have a formal...
C
You
know
way
to
determine
consensus,
but
I
think
if
we
wanted
to
propose
something
to
Bay
need,
be
pretty
amenable,
but
right
now,
we've
actually
there's
just
been
a
lot
of
thing.
We've
been
picking
sort
of
low-hanging
fruit,
you
know
and
there's
just
been
pretty
widespread
consensus
on
what
what
to
do
and
how
to
go
about
it,
it's
all
being
very
reasonable.
Does
that
help.
B: I think that what we really need to focus on is, I mean, I think Steve laid out very well, like, what's going on and what that governance is, but in the short term, like, we really want to get some deliverables to the TOC, and then I think, after this, you know, I would definitely be amenable to making sure that we put more things down on paper, so everybody can understand, you know, what this working group is all about.
B: All right, so as an action, please do check out the document, please do add comments to certain areas if you want to participate, and then I'll open up the distribution of the doc, or the editing rights, and I'll also be sending out a scheduling invite to see who's available during certain dates, so we can have some follow-ups to make some progress on the doc.