From YouTube: CNCF Storage WG Meeting - 2018-05-09
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A: On the agenda for the day, we've got two things lined up. We've got Xing from Huawei to present on OpenSDS for about 20 to 25 minutes, then about five minutes of questions, and then the last half is going to be the wireframe discussion. Does that sound right for everybody? Sounds good.
A: All right, well, let's kick it off. Welcome, everybody, to another Storage WG call, hot off the heels of KubeCon EU. I want to thank everybody on the call for joining us in those sessions in Europe. I think we had some great attendance at both our intro session and our advanced session.
A: In the advanced session we actually spent about 35 minutes going around the room talking about, you know, what cloud native storage is, and I thought it was a great session where we collected a lot of really good feedback from everybody. So thanks, everybody, for participating and being there, and I definitely see a few new names on the phone today.
A: In terms of the agenda today, we've got two things: one, we're gonna hear from Xing on OpenSDS, and then we're going to talk about the wireframes for the storage white paper. If anybody has any additional items that they want to talk about, or questions, feel free to go to the agenda and throw them on there, and if we have time we'll get to them at the end. So with that, I'll hand it over to Xing to talk about OpenSDS.
F: So, I'm Xing Yang. I'm the lead architect on OpenSDS, which is an open source project under the Linux Foundation. I joined Huawei six months ago because they're doing this interesting project; before that I worked for EMC. I'm also a contributor to Kubernetes and CSI, currently working on the snapshot feature.
F: So today I'll be talking about: what is OpenSDS; how to use OpenSDS to provision and manage persistent volumes; how we map an OpenSDS profile to a Kubernetes storage class; and how we define our profiles for policy-driven storage provisioning and lifecycle management. I will also talk about how to use OpenSDS to provide data protection and disaster recovery for persistent volumes using array-based and host-based replication, and then the roadmap for our Aruba and Bali releases and who is involved in the community.
F: OpenSDS has two core projects. The first one is SUSHI, the northbound plug-in project: it has plug-ins for container orchestration systems, OpenStack, VMware, and other northbound ecosystems. The second project is HOTPOT, the storage controller project: it provides unified control for block, file, and object services, and it supports a variety of storage platforms.
F: This is our project framework. In addition to discovery, provisioning, and orchestration of storage, we also have plans to provide extensions and tools for deployment and monitoring, and also to include AI and machine learning. I will show you a roadmap later on, so you can see what we have already implemented and what we are still working on.
F: This is the OpenSDS architecture. On the top is the OpenSDS northbound plugin, and in the middle is the OpenSDS controller, which has an API server that takes requests from the northbound plugins and sends requests to the selector and the other controllers. The selector is our scheduler: it takes the request from the API server, compares it with the capabilities reported by the backends, and finds a matching backend. The policy engine is a component that can execute some policies asynchronously.
F: The volume controller is the component that handles volume operations such as creating and deleting a volume, and we also have a DR controller that understands the replication logic and will handle replication; it communicates with the volume controller and the other components. At the bottom is the dock, the docking station that hosts volume drivers and replication drivers. For the volume drivers we have drivers for Cinder, Ceph, LVM, and other vendor drivers; for the replication drivers we have a wrapper, and we now have a work in progress, the DRBD driver for host-based replication.
F: This diagram shows the relationship between Kubernetes, CSI, and OpenSDS. Here we have a Kubernetes master with the controllers and the API server running, and we have a Kubernetes node with the kubelet and two CSI plugin pods. One is the CSI controller plugin pod, which has the CSI helper containers for provisioning and attaching, plus the OpenSDS plugin that performs the controller functionalities such as CreateVolume and ControllerPublishVolume/ControllerUnpublishVolume. And we have a CSI node plugin pod that has the CSI helper container for the driver registrar, plus the OpenSDS plugin for the node functionalities such as NodeStageVolume/NodeUnstageVolume and NodePublishVolume/NodeUnpublishVolume. The OpenSDS plugin communicates with the OpenSDS controller, which will find a suitable backend and manage the volume provisioning operations. On the right-hand side we have the OpenSDS volume plugins: this shows that, in addition to the CSI plugin, we also have a FlexVolume plugin, a dynamic provisioner, and also an integration with a service broker.
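To make that pod layout concrete, here is a minimal sketch of what a CSI controller plugin pod of the shape described above typically looked like around Kubernetes 1.10. The image names, tags, socket paths, and driver name are illustrative assumptions, not the exact OpenSDS manifests; the node plugin pod follows the same pattern as a DaemonSet with a driver-registrar sidecar.

```yaml
# Illustrative sketch only, not the official OpenSDS deployment files.
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-controller-opensds
spec:
  serviceName: csi-controller-opensds
  replicas: 1
  selector:
    matchLabels:
      app: csi-controller-opensds
  template:
    metadata:
      labels:
        app: csi-controller-opensds
    spec:
      containers:
        - name: csi-provisioner              # CSI helper container: provisioning
          image: quay.io/k8scsi/csi-provisioner:v0.2.1
          args: ["--provisioner=csi-opensdsplugin", "--csi-address=/csi/csi.sock"]
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher                 # CSI helper container: attaching
          image: quay.io/k8scsi/csi-attacher:v0.2.0
          args: ["--csi-address=/csi/csi.sock"]
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: opensds-plugin               # implements CreateVolume and
          image: opensdsio/csiplugin         # ControllerPublish/UnpublishVolume (assumed image)
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      volumes:
        - name: socket-dir                   # UNIX socket shared by the sidecars and the plugin
          emptyDir: {}
```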
F: This diagram shows how we map an OpenSDS profile to a Kubernetes storage class. The admin creates a profile in OpenSDS; then the admin creates a storage class referencing the profile created in OpenSDS; then the user can create a PVC specifying the storage class name created by the admin. That request gets passed through the OpenSDS plugin to the OpenSDS controller, so the OpenSDS controller has the profile and knows how to find a storage backend that can perform the create-volume request.
F: The OpenSDS profile is based on the Swordfish specification, and Swordfish extends the Redfish specification. Redfish is an industry standard that defines specifications to manage scalable platform hardware: it provides definitions for the chassis, which is the physical view of the system, and also a logical view of the computer systems.
F: Swordfish is an extension of Redfish. It provides specifications to manage storage systems in a cloud environment, so it has definitions for storage systems (such as the model number and serial number), storage services for block, file, and object, and other information about the storage system, like volumes, storage pools, file systems, and so on.
F: So here is an example of how we can define a profile: the data storage line of service. This is a definition from the Swordfish spec. It has three properties: the recovery time objective, which is how soon we can get access to the alternative replica if a failure occurs; the provisioning policy, whether it's thin or thick; and IsSpaceEfficient. So if, let's say, dedupe is enabled, then that means it is space efficient.
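As a rough illustration of what such a profile could contain, the fragment below mirrors the Swordfish DataStorageLineOfService properties just mentioned. The field names and layout are assumptions for readability, not the exact OpenSDS profile schema.

```yaml
# Hypothetical profile fragment modeled on the Swordfish properties above.
name: high-performance
dataStorage:
  recoveryTimeObjective: 10    # seconds until the alternative replica is accessible
  provisioningPolicy: Thin     # Thin or Thick
  isSpaceEfficient: true       # e.g. true when deduplication is enabled
```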
F: The I/O connectivity line of service is another definition from the Swordfish spec. It includes the access protocol (whether it's iSCSI, FC, RBD, or other protocols) and the maximum IOPS and maximum bandwidth. For replication, we use the data protection line of service defined in the Swordfish spec. That includes the recovery geographic objective, which defines a failure domain, whether it is at the rack level, the availability zone, or the region level; RTO and RPO; the replica type, which would be mirror for replication; and also the replica update mode, which specifies whether it is synchronous or asynchronous, whether consistency is enabled, the replication period, and the replication bandwidth. And then for snapshots we can specify policies: how often we want to take a snapshot; at what time, daily or weekly or monthly; and a retention policy, whether we want to keep a specific number of snapshots or whether we want to keep each snapshot for a period of time.
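Continuing the same hypothetical sketch, the replication and snapshot policies just described might be expressed along these lines; again, the names are modeled on the Swordfish DataProtectionLineOfService terms rather than the exact OpenSDS schema.

```yaml
# Hypothetical continuation of the profile fragment above.
dataProtection:
  recoveryGeographicObjective: AvailabilityZone  # failure domain: rack, AZ, or region
  recoveryTimeObjective: 60                      # RTO in seconds
  recoveryPointObjective: 300                    # RPO in seconds
  replicaType: Mirror                            # mirror for replication
  replicaUpdateMode: Asynchronous                # or Synchronous
  consistencyEnabled: true
  replicationPeriod: 300                         # seconds between updates
  replicationBandwidth: 100                      # e.g. an MB/s cap
snapshot:
  schedule:
    occurrence: Daily         # daily, weekly, or monthly
    datetime: "01:00"         # what time to take the snapshot
  retention:
    number: 10                # keep at most N snapshots, or...
    duration: 30              # ...keep each snapshot for N days
```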
F: In this diagram, on the left-hand side, it shows a profile named high-performance that is matched up with storage backend one, and on the right-hand side...
F: So here are some examples of the storage class YAML file and the PVC YAML file. In the storage class YAML file, we specify the provisioner as the CSI OpenSDS plugin, and in the parameters we specify the profile as high-performance; this can be either the profile ID or the name. In the PVC YAML file, we just specify the storage class name as opensds-high-performance-sc, the storage class we created using the YAML file on the left-hand side. So to run the OpenSDS CSI plugin, you can just use the deployment YAML files in our repo.
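A minimal sketch of the two YAML files being described, assuming a driver name of csi-opensdsplugin and a storage class name of opensds-high-performance-sc as reconstructed from the talk; check the deployment files in the repo for the exact values.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: opensds-high-performance-sc
provisioner: csi-opensdsplugin      # the OpenSDS CSI plugin
parameters:
  profile: high-performance         # OpenSDS profile name (a profile ID also works)
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: opensds-pvc
spec:
  storageClassName: opensds-high-performance-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Creating the claim (kubectl create -f pvc.yaml) then drives the CreateVolume request through the CSI plugin to the OpenSDS controller, which selects a backend using the referenced profile.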
F: It can periodically call the driver to take a snapshot based on the schedule specified in a profile, and then it can also periodically call the driver to delete a snapshot that has already been taken, based on the retention policy. And the controller can also ask the driver to upload the snapshot to an object store, on premises or somewhere in the cloud, based on whatever is specified in the profile.
F: Here in the controller, for simplicity, I didn't show the DR controller and the volume controller separately, but the DR controller is the one that knows the replication logic. It can detect whether the storage backend can support array-based replication or not: if it does, it will go through the array-based replication workflow; otherwise it will go through the host-based replication workflow.
F: And for host-based replication, we have a few more types of docks shown here. The first one is a regular dock hosting the volume driver. The second one is an attacher dock that is responsible for attaching and detaching a volume. The third one is the DR dock that is hosting a host-based replication driver; that's the DRBD driver in our case here. So the workflow will be a little different.
F: So after that, this is all set up and ready for host-based replication. For the availability, we support DRBD version 9, which supports automatic failover. That means, if a user unmounts the file system from the primary and mounts the file system on the secondary host, failover happens automatically; there is no manual CLI command needed on the DRBD side after that.
F: So here are the few items that we will be working on in the future. The first one is a thin OpenSDS in a containerized environment. Some applications may not need a full-blown orchestration engine that supports multiple backends, so in this case we provide a lighter version of OpenSDS.
F: So, basically, the CSI plugin will be communicating with the OpenSDS API, which will communicate with the dock directly; the dock will be hosting a specific storage backend driver, and the database is optional in this case. We will also be working on data migration, both host-based and array-based replication and host-based and array-based migration, and on multi-cloud control, that is, migration across different clouds. It can be a private cloud on premises, or it can be a public cloud like AWS or Azure. And we also have a plan to work on support for multiple OpenSDS instances.
F: Here's our roadmap. At the end of last year we had the Zealand release; that's a beta release. We have some basic volume operations, we support standalone Cinder, we have native drivers for Ceph and LVM, and we have our CSI plugin and also a FlexVolume plugin and a dynamic provisioner. Aruba is the first-half release, which we're planning to put out at the end of June. In the Aruba release we will support simple OpenStack integration, which means OpenSDS can work seamlessly within the OpenStack environment, and we also have the replication support, for host-based and array-based, that I talked about earlier, and we have a UI and we have storage profiles. Aruba will also have the NVMe-oF support that is being worked on by Intel right now; it's still in the design stage. There is also enumeration, which refers to querying resources based on filters, and a few more drivers for the backends, including a Huawei driver. For the second half we'll have the Bali release, and we'll be focusing on multi-cloud control.
F
We
will
be
working
on
data
migration
when
you're,
showing
that
it
will
be
some
loading
matrix
and
some
capacity
usage
monitoring
in
multiple
OpenStack.
There
was
support
s3
and
the
and
your
F.
We
should
have
a
driver
at
that
time.
Also
group
snapshots
and
good
replication.
Well
right
now
our
Caesar
plugin.
We
only
tested
that
with
the
coronaries,
so
we
want
you
to
start
with
the
missiles
and
a
doctor
as
well,
and
we
also
hoping
to
you-
have
a
swordfish
southbound
driver
from
DMC
Andra
net
up
and
for
Capri.
F: So here we have the OpenSDS repo on GitHub, which has our projects for the northbound plugins and the OpenSDS controller and a few other APIs. We have a Slack channel, and we have a few mailing lists you can subscribe to if you want to get information, and we also have a weekly meeting. To accommodate different time zones, we run those meetings bi-weekly: one meeting time is on Tuesday at 9:00 a.m. Pacific time, and the other meeting time is on Thursday at 6 p.m. Pacific time.
F: Oh yes, we were talking about that. So we do have a CSI plugin, so you can use this as a plugin and then you can leverage, you know, all the features that we support. And I mentioned that we will be working on a lighter version of OpenSDS as well, so with that you can actually have a more, I'd say, a more basic CSI layer: you just have the CSI features, but with a southbound OpenSDS driver.
F: Yeah, so you can definitely, of course, just use what you were talking about, iSCSI that just goes in a different direction, straight to your backend. I think that's what you're talking about, and that's how it is different from a CSI plugin for OpenSDS, correct. Thank you.
F: Yes, OpenSDS has other focuses as well. As I said, in the second half of the year we'll be focusing on data mobility, the multi-cloud control, so that will allow you to migrate data across clouds, right. So OpenSDS basically provides additional functionality, because the CSI plugin itself right now, it's still evolving: it only has, you know, pretty much volume provisioning and attach/detach, and then the snapshot is coming, but it's still pretty basic, right. So with OpenSDS you can have more functionality in addition to that. Okay.
F: That's actually a very good question. So right now our replication implementation is still at the first revision; we're still wrapping up the host-based replication. So we actually need to provide some steps on how that is done, so that the user has a better experience. But I think by the time we get to the end of June we should have some better description of that, so that I can share it with you.
B: Yeah, I had a related question, maybe a little less direct. To what extent does the container that is attached to this OpenSDS storage, how smart does the container itself need to be to deal with a lot of these features? So, for example, does the container see when snapshots are being taken, and what consistency model does it see? What happens on failure: does the container have to unmount and remount volumes, etc., when failover is happening, these kinds of things? Or... yeah.
F: I think for those things we definitely need to sort out some of those details, because, as I said, the replication feature, which we're just wrapping up, is still waiting for the host-based replication to be done. I think I will be providing more details on how to do that in the future, yeah. So right now, basically, we're just sorting out how the container can figure out whether it has changed or not, right; we want to provide a smooth transition.
F: So basically that means that when we create the volume, we need to provide some information in it: the CSI volume source needs to have information about the replication, the replication keys as well, not just the volume on the primary site, right. So those are the kinds of details that we are still working on; some of them need some fine-tuning, and I should have some more details later.
A: Okay, excellent. Just in the essence of time, I think we've got to move on to the next agenda item. Xing, thank you, thank you so much for preparing and doing the presentation for us today. If anybody has any follow-up questions, feel free to send them to the email group or directly to Xing. Thanks.
B: Can everyone see the document? Yep, okay. So this is not really rocket science. Basically, what I tried to do was, first of all, just make very clear what our goals and non-goals are. I hope those have been reasonably well communicated so far, but perhaps I'll just whip through them again to make sure we're all on the same page. So the main aim is to clarify the terminology across the board.
B: We're gonna provide some information, in very general terms, about how these things are actually currently being used in production, and be factual, with the emphasis there being on actually being used in production. So if we wish some of these things were being used in production, that doesn't count. And then, I guess perhaps most importantly, once we've clarified all the terminology and how these pieces fit together...
B: We're gonna try and characterize the various different types of storage, specifically with respect to, you know, their primary properties. This is not a, you know, product-competition shootout. This is, you know: some of these types of storage are fundamentally different from others in terms of availability, scalability, consistency, durability, performance, and API. So basically what we want is to get everyone on the same page, understanding what those properties look like across the different types of storage.
B: Just as importantly, the non-goals. We're not going to try and define what is and is not cloud native storage; I think until we have the goals sorted out, that is an exercise fraught with peril. And we're also not going to provide any recommendations on the CNCF's preferred storage approaches or solutions. We just want to get down on paper, you know, what the different properties of the different kinds of storage are.
B: So, the outline. In fact you can see most of this from the actual table of contents that I put together, but basically we take each, maybe I'll go back there, we take each of the fundamental types of storage. I've got a few proposals here, and I'm sure there are some missing bits that people might want to fill in, but I provisionally recommended that we look at block stores, file systems, object stores, key-value stores, databases, and that's about it. And then, for each one of them...
B: We look at the data access interfaces (so, how do you get to these things?), the management interfaces (how do you manage them?), and then, for each one, what the primary classes are. So for block stores we clearly have local block stores, remote block stores, distributed block stores, and perhaps there are some other types, and we define, you know, what we mean by each of those words and how they fit together, if at all. And then basically the same pattern for the comparison: I proposed a table that looks roughly like that for each type of storage.
B: So: what are the relative properties with respect to availability, scalability, consistency, durability, and performance? I think we can, you know, make some broad statements there. For example, a local block store is not available when its single node fails, whereas a remote block store may have availability that is independent of the node accessing it.
B: What is the likelihood of your data getting lost? It's clearly higher if you have a local block store than a remote block store; a single-host remote block store is again less durable than a distributed block store with replication, etcetera, etcetera. And then, you know, the same pattern for all of the other areas that I mentioned. I also added a section at the end.
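For illustration only, the per-type comparison table being proposed might start out something like this for block stores; the ratings are placeholders to show the shape, not agreed content.

```
Class                     Availability                        Durability                  Performance
Local block store         lost when its node fails            lowest (no replication)     highest (local media)
Remote block store        independent of the accessing node   single-system               network-bound
Distributed block store   survives node failures              highest (replicated)        varies with protocol
```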
B: I think there are a bunch of types of technology that kind of rear their head across many of these different areas of storage: things like consensus algorithms (Paxos, Raft), two-phase commit, distributed transaction algorithms, and just actually understanding what consistency, coherence, and isolation mean. This word consistency, for example, gets used in many different contexts and can mean completely different things, the canonical example being that the C for consistency in the CAP theorem is not the same as the C for consistency in ACID. Okay.
I: This is Alex here, and I just want to apologize: I knew I was supposed to be putting a white paper together, I just didn't have the time, unfortunately. So, this looks actually pretty good, but I think there's a huge focus on defining the storage systems, and I think we could probably add a couple of things, probably a bit more generic as well, to this. So what I'm thinking is, with all of these different storage systems, whether it's block or file or object, etc. ...
B: Yeah, I absolutely agree, and just to be clear, I knocked this together at like 9 o'clock last night in 20 minutes or something, and I also tried not to make it so big as to be undigestible. But yes, so, implicit in here, and we can decide how we want to structure it, there are two things that are not sort of explicitly called out in the headings. One is: how do these things fit together? So I think it's worth talking about the fact that, you know, file systems are built on block stores.
B: Yeah, I think it's probably the index that is inadequate rather than the title that is too broad. So clearly orchestration belongs in here somewhere, and I don't have a strong opinion about whether it is a section after databases, like "orchestration of storage systems", or whether we weave it in. Does anyone have any opinions on that matter?
I: I think it's probably worth looking at that, because you kind of have three levels when you're trying to decide what's the best solution for your particular use case: which one; how you are going to automate them, to compose this and make it, you know, fit into your CI/CD pipelines and processes; and secondly, which classes of services...
J: This is Luis from Portworx. Thank you for putting this paper together, first of all. One of the things I wanted to ask is: what do we expect from the reader when they read this? Do we expect that they don't know anything about storage, or do we expect that they have some idea about storage? Because if we don't expect them to be very knowledgeable, storage is quite complicated.
J: Maybe this is, it sounds to me like this is a reference, right? This is a reference of terms and technologies, and then maybe another paper that describes more higher-level stuff references this paper. So this is a bit more like a location for definitions of terms and technologies, and then we use this as a foundation for other documents. I just think that we shouldn't make this document so big that it becomes unbearable. Just make it simple.
A: An interesting comment along those lines: you know, part of me thinks that what's in this document, like, you know, in terms of defining the different types of data storage, has been done already somewhere, and if this doesn't have a context to it, like, how much of this are we just repeating and putting in one place specific to CNCF? And I don't know the answer to that; like, maybe someone hasn't done a great job at comparing and contrasting the different data stores that are out there. Yeah.
J: I'm just gonna throw in maybe a tiny curveball here. So I kind of agree that obviously all of this has been done, and there are lots of resources about these different things. But the one thing that I have noticed is that in many, many organizations storage was previously managed by some core team, and what's really different in cloud native environments is that the developers get exposed to storage, often for the first time, in a way that, you know, they're learning about it.
J: So even though some of these things may be documented elsewhere, having a reference for people is really, really good, because you'd be surprised how many times I'm speaking to developers and they actually genuinely don't understand the difference between block stores and object stores. So I do think it is actually useful to have this as a starting point. Yeah.
B: Yes. As for the previous comment, I've sort of got pressure from both sides. One being: let's keep this thing simple and relatively uncontroversial and, you know, get something out the door quickly. And the other one is: well, you know, if we're not defining exactly what we consider to be cloud native and what we don't, and covering all the cloud native orchestration techniques, then we haven't really got anywhere. And I'm sensitive to both sides of that argument.
B: I think if we can knock this thing out pretty quickly and move on, if we decide that we want to separate this document, which just defines the terms and the basic stuff that we're talking about, and then have a separate follow-on document that talks about orchestration of all of these things, how they all tie together, different approaches to managing them, etc., I'm okay with that, as long as we get to that one, you know, soon-ish.
B: This is, you know, ultimately for the use of the CNCF in general. You know, for example, I don't think everyone on the TOC is necessarily a storage expert, and so when they're being asked to vote on the inclusion or exclusion of projects at some point in the future, you know, they do need to understand the stuff in this document to be able to have an opinion and follow the debates. Similarly, I think consumers of CNCF projects and technology, you know, come to the CNCF to learn things.
B: And don't be shy: I will not be crushed if you think my outline is terrible and you want to propose a different one, if you just have enough time to; it's totally fine. What I was going to suggest is that we decide whether we're gonna use this. We can make that decision now, or we can give people another week or two to come up with alternative proposals. It doesn't sound from the group like we need to wait that extra two weeks or whatever; we can start with this.
J: I'm just saying that this looks to me like the storage landscape, like this is its storage definitions, and then we've got a container orchestration landscape which consumes this document. That's the way I'm thinking about it. I mean, it could be a part-two chapter or something, and we can even combine them both. I'm not saying not to; I'm just trying to divide it into that model, I think.
A: Yeah, I feel like it's something as simple as just adding a little bit of cloud context to this, to say: hey, these are the different types of platforms, and here are the different characteristics, but, you know, when you're in a public cloud they can be available to you, and when you're cloud native you could actually do it yourself. And, you know, the definition or the description of what that looks like and how it happens would be in the next white paper.
I: You know, one of the primary goals that I would like to get out of this document would be to clarify, you know, the terminology and taxonomy, which I think was one of the goals that we wanted to get out of this document. And for me it's really important that an end user can understand, from a terminology point of view, the difference between, you know, the interface and the orchestrator and the actual storage system. That is really, really important, because we, you know, again, this is just based on...
B: That's reasonable. One way of doing that would be to start us off with a kind of very basic diagram saying that, you know, all storage systems have a data access interface, a management interface, and, you know, whatever some of those sort of common components of storage systems are, and then weave those themes through the rest of the document and, you know, put CSI where it belongs and explicitly say that it is a management interface for block stores, for example. Yeah, yeah, makes sense.
B: So, okay. I think we put the orchestration in this document, and I think we put it after the description of the different types of storage and their relationships with each other, and I think we tackle the first half of the document first, just so that we've got common terminology. So I'm gonna intentionally leave that last section relatively bare right now, this one over here, "storage orchestration and management", and we can fill in the subsections later. It feels like we might want to similarly break it down into...
B: One cannot actually say that file systems are always built on top of block stores, because they're not, and databases are not always built on top of file systems or block stores. You know, there are many different ways of skinning cats, and specifically when you get to the distributed stores the landscape gets even more confusing. So I would kind of lean towards describing these things under each of the headings I've mentioned. So when we get to file systems, for example, we can say that local file systems are typically built on local block stores.
B: I think that's a reasonable statement; maybe there are some exceptions, and we can call them out. Remote file systems are similarly often built on remote block stores. Distributed file systems, on the other hand, you know, there are many ways of exposing these things: you get file systems on top of S3, for example, which I have seen before, and those are clearly very different from other kinds of distributed file systems.
I: Yeah, I think it's more important to describe, because as you said there are so many, infinite permutations, right? I mean, you could have local file systems sitting on distributed block stores, for example, or whatever, or a lot of object stores underneath stuff. But I think it's more important to describe the type of functionality that the end user is meant to expect out of the different types of technologies. You know, so, for instance, a local file system is available on one node only, and it is typically not shared, and those sorts of things, whereas, you know, shared file systems can be used by more than one application or more than one node at the same time. And those are probably more important as differentiators, to how people consume the technology, as opposed to how the technology just builds the pieces, I mean.
B: Yeah, I guess the only problem with that is that sometimes how the technology is built fundamentally affects what the user experiences. So, you know, if a user is consuming a remote block store that is on a, you know, high-performance enterprise storage system, they have a different experience than with one built on...
B: The performance is, you know, very, very different; the durability is noticeably different. S3, I just had a look just now, you know, it is designed for eleven nines of durability; I don't think that can be said of, you know, even the fanciest enterprise storage system. So there are fundamental differences. In both cases you might be consuming a remote block store, for example, or a remote file system, but the user experience is extremely different along the axes that I mentioned here. So, yeah, let's just go.
B: Going, going, gone. If you would like to join Alex, please reach out to him, but we will nominate Alex as at least one of the primary authors of this document, and I will happily help where I can. But I would like not to be the primary author personally: one, because I'm not a storage expert, and two, I think I would like the document to come from this group as a whole, not from me and the TOC. So let's go ahead on that basis. And we're out of time, so we'd better wrap up. Carry on.