From YouTube: CNCF Storage WG Meeting - 2018-03-13
C
Fine, just leave it as this one, perfect. Thank you.
C
All right, I see we don't quite have our normal load of people, but I think we should still get going anyway. Good morning, everybody, thanks for joining. We've got a schedule within the Google Drive; as usual, it's packed with a couple of things. We're going to have a presentation from YugaByte, so Karthik is on the line here to talk about a scale-out database that they've been creating — I think it's pretty cool stuff — and then we also have a spot to talk about the KubeCon sessions as a follow-up discussion before getting into that.
C
A general note: I think that Camille has been reaching out to some folks on the SWG. Please do spend some time with Camille to give her some feedback on the SWG and what it's doing and what it should do, or what you think it should do. This is all part of, you know, making sure that as the TOC makes decisions on, you know, giving charters to the storage working groups, they're informed, you know, based on the perspectives of the people in the groups.
A
Right, thanks a lot. Hey guys, I'm Karthik, and I'm going to talk to you about YugaByte. It's a transactional, high-performance database for planet-scale applications, and we'll dive right into what that means in detail. A real quick intro about ourselves: the three of us founders started this — Kannan, Karthik, and myself. We started the project; I'm one of the founders and the CTO here, and all three of us, along with nine others, worked at Facebook on a variety of different applications in production.
A
We worked on both Cassandra and HBase in order to put them in production for use cases such as our messaging inbox, messaging search, time series, spam detection, and so on and so forth. And yeah, let's jump right in. So, a real quick thing about the problem we're trying to solve: we saw this pattern repeated quite often at Facebook, and we've been in the open-source community with HBase way back.
A
We have seen that a lot of companies were trying to repeat this in the web 2.0, like, tech-company sector, but now this pattern is becoming even more common in the enterprise, especially with the advent of the public cloud. So how do people build planet-scale apps? It's pretty clear that Docker, with Kubernetes as the orchestration, is the favorite choice for people to put stateless applications in, and that's pretty much going into production and becoming mainstream. But when it comes to data, that's when the challenge begins.
A
So today's way of doing a data architecture is to have a SQL master and slave — whether it's sharded or a single-node scale-up solution, they have a SQL master and slave — and they have one or more NoSQL solutions, because there are certainly advantages provided by NoSQL databases that really help. And the minute you have put your data across multiple data stores, it becomes very expensive to recompose the data, so people put the data that they need to serve to the end user into a cache like Redis. You need to figure out how to replicate it at pretty much every level, right, and handle it if there is a failure, in this sort of a system that's put together. The blueprint is similar, but the exact implementation varies — maybe the choice of technology varies a little bit here and there — but inevitably, if there's a failure, it takes a long time to figure out what went wrong, right. So the question we get asked is: suppose you go to a public cloud like AWS.
A
So let's take the AWS example: how does it change this picture? Well, it makes it a little easier for sure, but not a whole lot, because you replace the Redis set of machines with ElastiCache, which Amazon or a cloud provider will manage for you; the SQL is replaced with something like an Aurora or an RDS; and the NoSQL tier is replaced with DynamoDB. So effectively the architecture is still pretty much predominantly the same. So at YugaByte, we tried to go into this:
A
Why is it not possible to converge all three? This is based on a lot of work we did at Facebook, and a lot of other work that had been done by other teams there, with projects like Tao. So what really is the characteristic of these databases that makes an app require multiple of them, right?
A
So if we split it into three core requirements — like pillars that a database should offer — you can think of it as: SQL databases, including Aurora, offer you high performance and transactionality, but not planet scale, because it's difficult to get your data distributed and scaled out, to add machines as you want; all of that is manual. NoSQL databases — like MongoDB on the open-source side, or a variety of others (that's just an example), and Azure Cosmos DB, which is a multi-model
A
NoSQL database from Microsoft — both offer high performance and planet scale, but don't offer transactions when you need them. I am talking about transactions in both the single-row and multi-row sense; some of that is offered, some of that is not. On the other side, the other tack that Google Spanner took was to go after planet-scale and transactional workloads.
A
But it's not ideal for high performance, because you're subject to the atomic clock — effectively the atomic-clock latency — for streaming types of workloads where you don't really need it. And so at YugaByte we're trying to bring all three pieces together. It's got to be high performance, where you can serve data with low latency and it can just be a serving tier; it's got to be transactional when you need it, for the subset of applications and workloads that need transactions; and planet-scale. Okay.
A
So those are our design goals: transactional, high performance, planet-scale and, of course, cloud native. So, really quickly, on the transactional side we wanted the core data fabric to have distributed ACID transaction support, for both single-row and multi-row ACID, with a document-based storage engine at the core, but one that can be exposed using a variety of different APIs that people are used to. On the performance side, we wanted it to be really low latency, ideally for a majority of the workloads.
A
The obvious ones are, of course, being highly scalable and highly resilient: add nodes when you need to either expand your storage footprint or you need more serving capacity or cache capacity, and be highly resilient, which is to tolerate node failures or most of the common cloud failures without any intervention. But, more importantly, also make it really easy for the user to use this database by expressing an intent and the database kind of respecting the user's intent, and also give a seamless operator experience for day-2 operations when you're trying to keep this running in production.
A
We're going to look at a few of these things in detail, but at the core of the database, what we did was, instead of being too purist about the exact languages, we brought in the best features of the two sides of the house. So on the SQL side, we bring in strong consistency, secondary indexes, ACID transactions (single-row and multi-row), and the expressiveness of the query language — where clauses and joins are something we'll continually work toward and then add.
A
So that's the core philosophy, and on the NoSQL side we bring in tunable read consistency — so read from a follower or one of the async replicas in the nearest data center if you want low read latency but you're okay with timeline consistency — optimize for large streaming writes, support features like automatic expiry of data with a time-to-live kind of feature, and be able to scale out and be fault tolerant with your data, with primitives to support how you partition data.
A
Okay, so if you take Azure Cosmos DB as the bleeding edge of NoSQL and Google Spanner as the bleeding edge of SQL in a cloud-like environment today, what YugaByte does is bring the best of the two worlds into a single database: we're multi-model and high-performance, just like Azure Cosmos DB, and ACID transactional and globally consistent, like Spanner. Okay, so very briefly on the architecture: at the core it's a scale-out database — you'll be able to add machines in order to scale it out.
A
Each node has what is called DocDB — that's what we call it internally — which is a heavily customized version of RocksDB, and in order to replicate data with consistency across nodes we use Raft-based replication. We have a global transaction manager in order to do distributed transactions, to distinguish them from single-row ACID and still keep that highly performant, and we do automatic sharding and load balancing across all the data.
A
All right, so that's just a brief intro. Now let me go into what the current state of YugaByte is, and then we can jump into a demo of, like, a shopping cart. On the current-state side, we're at a 0.9.7 publicly available beta, marching towards a 1.0 generally available version in the March-April timeframe, but we've tested it so far for high scalability.
A
So we've gone up to fifty nodes, and we were able to see that you can linearly scale and get millions of read and write IOPS without really sacrificing your latency. What you see at fifty nodes, for point key-value reads, is 2.6 million reads per second with 200-microsecond latencies, and 1.2 million writes with three milliseconds — and that's a 3-way replicated, consistent write. Okay. And it's a highly performant database, because that's another of our core pillars.
A
So we tested it against some of the more performant NoSQL databases like Cassandra. This is a YCSB report of how YugaByte compares with Cassandra, and it shows the number of operations per second. We've put in a lot of effort and a lot of learnings from running such systems in production at Facebook in order to squeeze a lot of performance out of it. But performance is a continuum — it's never-ending — so we will continue to keep improving it.
A
We added distributed transactions, so you'll be able to create a table — a Cassandra table — and, in this classical bank-account example, you have account name, account type, and balance. You can shard your data by account name, and, having sharded all of the account names and kept them together, you'll be able to perform cross-shard transactions, where you're able to transfer some money from one account to another account which would potentially live on different nodes, and we do the whole clock tracking, clock skew handling, etcetera.
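As a rough illustration of the cross-shard transfer described here, a minimal sketch against YugaByte's Cassandra-compatible (YCQL) API with the Python cassandra-driver might look like the following; the keyspace, table, column names, host, and balance values are assumptions for illustration, not taken from the talk, and the BEGIN/END TRANSACTION block assumes YugaByte's YCQL transaction extension:

```python
# Hedged sketch only: names are illustrative, and the BEGIN/END TRANSACTION block
# assumes YugaByte's YCQL extension for multi-row ACID transactions on tables
# created WITH transactions = {'enabled': true}.
from cassandra.cluster import Cluster

cluster = Cluster(['yb-tserver-0'], port=9042)   # any YugaByte tserver speaks the CQL protocol
session = cluster.connect()

session.execute("CREATE KEYSPACE IF NOT EXISTS banking")  # YCQL accepts this without a replication clause
session.execute("""
    CREATE TABLE IF NOT EXISTS banking.accounts (
        account_name text,
        account_type text,
        balance      double,
        PRIMARY KEY ((account_name), account_type)
    ) WITH transactions = { 'enabled' : true }
""")

# A cross-shard transfer: the two rows are sharded by account_name and may live
# on different nodes, yet commit (or roll back) atomically as one transaction.
session.execute("""
    BEGIN TRANSACTION
      UPDATE banking.accounts SET balance = 900
          WHERE account_name = 'jane' AND account_type = 'checking';
      UPDATE banking.accounts SET balance = 1100
          WHERE account_name = 'john' AND account_type = 'savings';
    END TRANSACTION;
""")
```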
A
This is an actual running system in one of our customers' environments. It's, like, an example of a user login/password-style setup: two copies of the data in us-west, two copies in us-east, and one copy in Tokyo. The replication factor is five, which means you need a quorum of three nodes in order to do the write successfully with consistency, and your reads can happen from any of the data centers that are local to you.
A
YugaByte already works with multiple clouds — Amazon, Google, and on-premise are well tested, and Azure is something that we are trying to add support for. But let's jump quickly into our demo, and this is an all-Kubernetes demo. YugaStore is a sample app that's an online e-commerce bookstore; you can find it on GitHub.
A
So it's an open-source project as well. The first thing that I have done — because this is not too terribly interesting to do live and wait for it to come up — is to bring up YugaByte as a Kubernetes StatefulSet. It's a replication-factor-3 setup, so the YugaByte cluster is 3-way replicated and it's got three nodes in it, and this can be scaled up or down on the fly.
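As a hedged sketch of that "scaled up or down on the fly" step, assuming YugaByte's data nodes run as a StatefulSet named yb-tserver in the default namespace (names taken from YugaByte's published Kubernetes manifests, not stated in the talk), the data tier could be resized with the Kubernetes Python client roughly like this:

```python
# Hedged sketch: StatefulSet name "yb-tserver" and namespace "default" are assumptions;
# adjust them for your own deployment.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

sts = apps.read_namespaced_stateful_set(name="yb-tserver", namespace="default")
print("current tserver replicas:", sts.spec.replicas)

# Grow the data tier from 3 to 4 pods on the fly; YugaByte then rebalances
# tablets onto the new node automatically.
apps.patch_namespaced_stateful_set(
    name="yb-tserver",
    namespace="default",
    body={"spec": {"replicas": 4}},
)
```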
A
The second thing that I did was to bring up the YugaStore app. This is a Node.js Express and React based app which simulates a bookstore, so it's, like, a very simple e-commerce app: it lists some books, you'll be able to categorize books into some sort of groups, and so on. So, having done that, let me quickly jump into showing you the actual application.
A
Hopefully you guys are able to see the screen — it's the Kubernetes dashboard — and please do say something if you're not; otherwise I'm assuming it's all good. So what you see here: the first three tservers are the slaves — these are the guys that actually serve I/O. The three masters are background coordinators; there are as many masters as the replication factor. And the last deployment here is the stateless app deployment. So I'm going to go ahead and switch into the YugaByte dashboard.
A
So this is actually running inside Kubernetes, and you see that the different masters have talked to each other and, using Raft, elected one of themselves as the leader. This setup has a replication factor of three; it has one keyspace with one table in it, called products, and we're going to look at a demo of how that shows up in the UI. It's got three tservers, and obviously that is scalable on the fly. So, if I go to the — oops.
A
So, like, that thing hums and you guys wouldn't be able to hear me, but it's all good now, so we're back in business. Sorry, that thing really makes a noise on my machine. Yeah. So this is the tablet servers view. What you see about this setup is it's all running in a single cloud, single region, single zone — so it's not multi-anything — but it can very easily be deployed in a multi-region, multi-zone, or multi-cloud fashion.
A
Right. So that's the app, and I am still working on adding, like, a checkout and the shopping-cart side of things, which requires, like, distributed transactions. But jumping back to our presentation: so how does YugaByte simplify this? Typically, for the less dynamic content — like the title and the description — a SQL-like API, like, for example, Cassandra's, is a great choice to store the data, because you'll be able to store most of the attributes
A
you want, and you'll be able to add the ever-growing attributes to, like, a JSON data type; whereas for the highly dynamic content that changes all the time — for example, the average rating or the total number of reviews — Redis is a great way of handling the things you want to store. So in YugaByte you'll be able to model your product as a table and run a query such as the one shown, and we will try
A
this live, to be able to select some books from the business category. And, at the bottom, you'll be able to use Redis sorted sets, you know, with the actual review counts as the score to figure out the most-reviewed books, or the number of stars as the score to figure out the most-rated book. Now, let's actually do that.
A
Yeah, okay. So I'm going to connect to this tserver-0 using a Cassandra shell, and we can actually do a select and figure out the top two books in the business category — it's able to fetch that — and you can go ahead and add any number of categories, and you can alter the table online, upgrade the software online, so on and so forth. You can actually reconfigure the database to run on a different set of nodes or regions without taking any application downtime.
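A minimal sketch of what that select could look like through the same Cassandra-compatible API with the Python cassandra-driver; the store.products schema, column names, and host are illustrative guesses at the demo's table, not its exact definition:

```python
# Hedged sketch: schema and names are assumed; "category" is assumed to be part of
# the primary key or backed by a secondary index.
from cassandra.cluster import Cluster

cluster = Cluster(['yb-tserver-0'], port=9042)
session = cluster.connect()

# Top two books in the "business" category.
rows = session.execute("""
    SELECT id, title, description
    FROM store.products
    WHERE category = 'business'
    LIMIT 2
""")
for row in rows:
    print(row.id, row.title)
```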
A
Now I'm going to go ahead and connect to Redis, and if you wanted the top ten books by the number of reviews, you can go ahead and run that — that's a Redis sorted set. All of this data is being stored in a persistent store inside YugaByte, so you don't need to supplement Redis with the data being present in another database.
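Again as a hedged sketch, the same "top ten by reviews" lookup against YugaByte's Redis-compatible API might look like this with redis-py; the key name books_by_reviews, the member names, and the port are assumptions:

```python
# Hedged sketch: the sorted-set key and member names are made up for illustration;
# YugaByte serves the Redis protocol directly, so redis-py can point at a tserver.
import redis

r = redis.Redis(host='yb-tserver-0', port=6379)

# Bump a book's review count (score) in the sorted set ...
r.zincrby('books_by_reviews', 1, 'designing-data-intensive-applications')

# ... and read back the ten most-reviewed books, highest score first.
for title, reviews in r.zrevrange('books_by_reviews', 0, 9, withscores=True):
    print(title.decode(), int(reviews))
```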
So all of this is just a single database dealing with everything. And finally, let's run the equivalent of a robot user — like a click-farm kind of example.
A
So it's just, like, viewing products one after the other, and we'll be able to go into our UI here and refresh, and we should start seeing some load getting pumped into the various machines. And the point here is that you can add nodes on the fly and the load will get evenly distributed; you can change the setup of the system to run on a different cloud or region — and all of this while the system is online.
A
We have a CE edition, which is everything that I showed you today in the demo, and we have an EE edition that has the UI deployment, like, deep integration into the cloud, built-in metrics and alerting, as well as some features that are more production-grade, such as async replication to remote regions or, like, tiering of data to cheaper tiers when you have a lot of data. So all of those are in the EE. You can check us out on GitHub; we have great docs.
A
You can get started in just a few minutes if you want to give it a spin on your laptop. Our next steps in the YugaByte Kubernetes journey — this is on our roadmap and we're working on it internally — are to build a YugaByte operator, so that people who are running this in production can do so with great ease, and to do an OSB, like, Open Service Broker integration, so that end users can consume this easily. So the first one is about making it easier for the operator.
C
A
So we've been on GitHub for about four months. We've been building the database for about two years, but we've been building it without thinking about how to monetize the project or go to market — we didn't want to focus on that; we just wanted to focus on the core problem, because it's, like, a fairly hard problem to solve, and it takes a lot of work to get there. But more recently we've tried to figure out what the company is going to look like and what we want to do.
A
It's been out on GitHub for about four months and we're working on, you know, working with a community like Kubernetes, where, like, the philosophy of what the CNCF does and what we want to do, or what we want to achieve, is fully aligned. So we want to figure out how to make that even more accessible to developers.
A
All of that — but we're waiting for these customers to go into production and become referenceable around our 1.0 time, which is going to be the April timeframe, and we expect a few more to come on board and go into production soon after. We are being deployed on-premise, on Google Cloud, and on AWS — actually, I should reverse the order: AWS, on-premise, and Google Cloud is the order by number of customers using us. In terms of the use cases we see,
A
we are closer to going into production for single-row ACID use cases, and these are, like, the FinTech industry, where you have stock tickers and stock quotes and all of that; things like logistics and tracking, which is closer to a real-time IoT use case, where you want to figure out where vehicles are and how you want to do the reporting on them. There are some e-commerce sites that are looking at us, and security and fraud is another area.
A
So it's a variety of different verticals, because the database itself is pretty horizontal, but most of these applications require, like, two or more of those three pillars: transactionality — whether it's single-row or multi-row, the data consistency is important; distribution across the world — sync, async, hybrid deployments, the microservices-architecture side of things; and good performance, for this being a serving tier.
C
Quiet group today. All right, thank you so much for presenting to us; I think that was really cool. Looking forward to working with you guys, and, you know, please reach out to the storage working group if you have anything that you need. Kind of looking forward to collaborating with you in the future here. Awesome, thank you. Thank you. All right, team.
C
So, on to the next agenda item for the day. I think last time we took the last half hour to talk a bit about our KubeCon presence in the EU, and I think what we decided was that everybody needed some more time to think about it. Just a reminder: we had three sessions that were slated for KubeCon.
C
First of all, the private session is one where, you know, we're trying to figure out who's actually gonna be there and what that's gonna be. I think that the private one was gonna involve possibly getting some members from the TOC to come speak with the SWG about, you know, what their thoughts are on the working groups and what they'd like to see, and to try to get, like, more of a charter from them, so that we could start tackling some of those important things that they feel we should be doing. So that one is still in discussion and we'll report back on where that goes. There were two other ones. I think Saad mentioned that the intro session was overlapping with the Kubernetes intro session, and we're working with the program committee to get that moved right now — so I think that one's still a go, and I'll let you guys know when that time gets updated so it's not conflicting. And then the second one was the deep dive.
C
You know, just in terms of people that are going to be present: so, I think that, you know, we've got these sessions and we can figure out exactly what they're gonna be, but who's interested in actually being involved in more of the planning and possibly the delivery for these sessions — like, who's actually going to be at the conference?
F
This is Steve. I've got — we've got some vacation scheduled on, like, the Friday; my wife and I are headed out of town. So it's stuff I could look into, you know — especially if we're focusing on Thursday, then I could travel back Friday; I could maybe come. So just let me know; I'm not opposed to it. I was kind of wanting to go in the first place, but, you know, I just have to figure out the logistics. Got it, okay.
E
You know, I think presentations like that are great, and the SWG being a place where we can have these presentations is great, but I think it would be great if we also tried to use the face-to-face time to decide what else, if anything, we want the SWG to do. You know, I think we left the last face-to-face with some ambitious goals of defining some stuff around cloud native storage and what it means to operate cloud native storage, and to
E
define — you know, in at least a looser sense — cloud native storage from an operations perspective versus from an application-consumption perspective. Yes, absolutely. And yeah, I mean, I just don't know that we've kind of dug back into that, or that anyone's really had the time to do that. I think if we had done that, there would have been something more clear —
E
you want to call it white papers or definitions or whatever you want to call it, along those lines — but that's not really something we've done, which is okay. And to me, I think that sort of leaves the group with a little bit of a less defined and less clear mission about what its output is, you know, what role it's playing. Yeah. And, just to me, this face-to-face would be good to just settle on that, and even if the role of it is not as ambitious as defining all that other stuff, that's fine.
F
You can't see my air quotes, but is it "storage" like the Kubernetes storage SIG — which is basically the storage that supports the application platforms — or is it all application persistence? We've got to decide on what we are, and I do have an opinion on that, but I don't want to hijack this meeting to jump into that. I do think we need to get to the bottom of it, though.
E
Yeah, so, Clinton, to answer your question: I would be happy with having one of the sessions just dedicated and devoted to figuring that out, and I think we can either try to get feedback from TOC members ahead of time, we can have them be present to also get their perspective on it, or we can brainstorm ourselves and then go back and say, hey, this is what we think we're doing and who we think we are. But it seems like —
C
So we should make good use of the time. Do you think — we've got the three sessions, right? The one at 8 o'clock, it's questionable who we can actually get there. Are you saying that maybe we take one of the general sessions and have that be, like, a roundtable format, or are you saying the 8 o'clock one is where we try to tackle that?
E
I mean, I think it's gonna be whichever one we're gonna get critical mass at, it sounds like.
C
Do we think that, like, a public audience is going to benefit from seeing some of that? I wouldn't call it dirty laundry — I think it's just open-source process at the end of the day, figuring out what we need to do and what we're gonna do — but is that something we want to be a public session, I mean?
E
I think that's perfectly fine if folks from the public want to come in. I don't think there needs to be any shame in us wanting to better define exactly how we want the group to run; in fact, I think all groups should probably be doing this periodically. It's just a continuing reflection on how things are working. Yeah.
E
Yeah, what else do we feel we have queued up to talk about, if not this? And I — I apologize, Steve — my phone seems to have disconnected right when you were speaking and then reconnected, and so I completely missed everything you said, and all I got back to was Clinton saying, "Okay, Saad plus-ones that." So, plus —
F
We should probably have a CNCF presentation rather than a meet-and-greet in the session in the track, because — just from personal experience — despite an organizer wanting to have a meet-and-greet, what tends to happen is people show up expecting to see a session and they don't talk, and then you just stand up there looking weird. And then the third one, in the evening, was the casual meet-and-greet, you know, and if we can get TOC folks there, awesome; if not, like —
F
We'd just have an opportunity — a forum for, like, high-bandwidth conversations — which we can always use. Because I think, like, one thing — my guess is it's been something the TOC has observed — is that it's taken a while to, like, diffuse exactly how the CNCF works, like, you know, different aspects of the governance model and such. Sure, like, I know personally I'm being routinely educated as I ask more questions, so I think that's an opportunity for more education and conversation around that as well.
C
You know, the last time we talked about the sessions, I think there were two things that I wrote down from notes. One was that we could have a short presentation setting some context and then we'd have a panel discussion, so an open forum; and the second was that we'd have, like, a review of what the SWG has been discussing as a landscape — and, you know, this obviously hasn't been ratified and is still, you know, to be determined.
E
We can ask questions and we can educate folks; I think it's a great opportunity to share and discover and talk about a lot of the interesting storage projects out there. That can be a completely acceptable, you know, decision that we make — which is, this is sort of the extent of what we want the SWG to be — but we can also do a lot more, and I think it'd just be great if we had clarity, for this group, for the TOC, and for everybody else, about any other stuff
E
that we're trying to do. It seems like it's a good opportunity to have these discussions face-to-face versus just online, but we could also do it in one of our future calls. So I think, you know, I'll put it back on everybody else, which is: if everyone agrees that we want to have those discussions, when do we want to have them — do people want to have them at the face-to-face, and do we want to leave the... yeah.
C
I mean, I think that at the conference, for public sessions, people are gonna expect — I mean, you're gonna have complete newbies to the area who are just really interested in storage, and I think they'd probably expect more canned and well-presented information so that they can quickly catch up. I think that, you know, we've got our needs as a group, which are somewhat — like, slightly — separate, but I feel like we'd accomplish both at the event.
C
I think that, you know, we could take one session and make sure we have a great intro presentation and landscape, you know, and open panel discussion, so at least we'd have a mix of, you know, intro and more advanced discussion going on there. And then we maybe use that expert session as the face-to-face, which is that roundtable with the TOC members to discuss with, as SMEs.
E
Sounds like, then, we will want to maybe prepare some of that canned content — Clint, yeah, which, you know, you and I can kick off, and then we can pull in other contributors.
C
All right. We have — next week we have, I believe, Dotmesh presenting; not next week, but the next session is Dotmesh doing our first 30 minutes. If anybody has any other storage projects, please do reach out; we definitely want to get that agenda filled. I definitely enjoy hearing from all the different, interesting storage projects, like, what's being talked about out there in the ecosystem — I think it helps educate me on what's going on, and so I enjoy it. So, do you guys have anything else? If not, we can wrap up here.