From YouTube: CNCF Storage Working Group - 2018-01-24
B: So the thing I wanted to open up with was the voting and the status of that right now. I counted it last night, and out of the nine TOC members I think there were four that have binding votes on Rook, and I think it needs to get to six. Is that an accurate assessment? Yeah.
B: That's one of the key things the TOC members have been asking of us, as a group and of us individually: to make sure that we help with the vetting process, and any perspective that you can provide is welcome on those vote requests. So those are still pending. Any comments, anything else on the voting? That's pretty much all I wanted to cover for it.
B: Great, okay, all right, easy enough. The next item on here is the white paper update. At the end of the call on our last session, I think Michael Rubin asked me for an update on where we are with the existing white papers we had discussed working on. This is something that we've had the TOC assess internally, I think, and—
B: The consistent feedback from the TOC has been that what we're describing in the white papers, and some of the consensus we got to in terms of what cloud native storage could be, is actually aligned with what some of them are thinking about in terms of changing or updating some of the general cloud native terminology too.
B: What's being discussed is: hey, that kind of information about what cloud native is probably needs to be updated, and it needs to serve as the foundation that any of these sub-papers are built on, whether it's cloud native storage or serverless or cloud networking. So that's something the TOC really needs to work on and provide, so that we can actually start building on top of that type of perspective.
B: The other thing is that the TOC has been—I think they understand that they haven't been clear with expectations about what they've been asking for, whether it's individual contributors or the working groups themselves, and they're going to work on trying to be more clear. For now, the consistent feedback from them is: we can have our meetings, as a WG or the working groups, and we'll discuss ecosystem things and whatever topics we want to.
B: But in terms of the vetting process, the TOC is definitely asking for individual contributors to be involved, to help vet the projects and provide different perspectives on whether they're relevant or not. Now, generally, for me, I feel good about what we did in terms of those discussions and some of those email threads we had; I thought we actually covered it.
B: So the next piece is a little CSI update for everybody—not representing the CSI project, but I'm doing this as a bit of an intro to the next topic. I put REX-Ray on the agenda to discuss today, to give you all an understanding of what the future of REX-Ray is, and that has a lot to do with the CSI project. So I thought it was important to just do a quick, brief CSI update first.
B: So that's what this is and why we have it here. CSI is obviously in the category of cloud native storage interoperability. The spec was tagged at 0.1.0 back in December. So thank you to the CSI orchestrator team and the community—tons of work last year to make CSI happen and to get it to that stable 0.1 tag.
B: There are two implementations that we have so far that are public: Kubernetes and Mesos, and they have public documentation that describes how you can get those things up and running. So that's kind of excellent news—we've got some early implementations from COs.
B: There are also other implementations—I think the Cloud Foundry Persi team also has some progress. I'm not sure about the dates, but I'm sure we can get that info from Julian at some point, so exciting work from the CO perspective. We've also got early plugins—not tons, but some—on the 0.1.0 side as well.
B: There's a drivers page under Kubernetes which shows all the different drivers that have been created as part of the Kubernetes CSI project, and then you've also got Mesosphere, who have created their own initial CSI driver. So we've got working end-to-end implementations of plugins and COs. That's a great thing for this early phase of the CSI project.
B: The next thing, in terms of an action item for anybody out there who's looking to get involved in storage in the cloud native ecosystem: I think the biggest thing is going to be the face-to-face that's coming up. Actually, before that, there are monthly meetings that happen for CSI, and you can find those on the GitHub CSI page and in their community docs, so please join if you're interested in collaborating—but there's also a face-to-face.
B: Some really critical stuff is going to be discussed there, and I encourage you to join. That kind of leads into where we're getting with this very next phase of CSI, which is making sure that we can actually start developing these plugins in these new ways. All right, next slide. So what is CSI, for anybody who's new to it? On the left of this diagram we have the environment where there were many integration points that you would pursue—
B: —if you wanted to be relevant to some of these cloud native orchestrators. You had the Docker volume driver interface, which I think was the first one that was created. You've got DVDCLI, which was a CLI implementation into the Docker volume driver interface that Mesos used. You've got Kubernetes Flex, and you've got Cloud Foundry, which actually implemented an early libStorage client for their interaction.
B: So you really had four different ways to integrate storage across the COs, and that all turned into one thing, which is this new Container Storage Interface project. It's a great thing for the user experience in cloud native, and it's really important for ensuring that interoperability works between COs and storage. Next slide.
B: So how do you actually become relevant in CSI? From a simple perspective, we're connecting apps to storage, right? Those are the black boxes, but in the middle there is the CSI interface, and there are two implementations of CSI that we focus on. One is the CO side, and the other is going to be the plugin-side implementation, which is for the storage providers—and those are simply gRPC implementations. Next slide, please.
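The two-sided split described above can be sketched in a few lines. The real plugin side is a gRPC service implementing the RPCs defined by the CSI spec; in this sketch, plain Python methods with invented names stand in for those RPCs, purely as an illustration:

```python
# Toy stand-in for the storage-provider side of CSI. The real thing is a
# gRPC server implementing the spec's services; the class and method
# names here are invented for illustration only.

class TinyStoragePlugin:
    def __init__(self):
        self.volumes = {}   # volume id -> metadata
        self.next_id = 0

    def get_plugin_info(self):
        # Identity-style call: lets the CO discover the plugin.
        return {"name": "tiny.example.plugin", "version": "0.1.0"}

    def create_volume(self, name, size_gib):
        # Controller-style call: provision a volume on the platform.
        self.next_id += 1
        vol_id = f"vol-{self.next_id}"
        self.volumes[vol_id] = {"name": name, "size_gib": size_gib}
        return vol_id

    def delete_volume(self, vol_id):
        self.volumes.pop(vol_id, None)

# The CO side is then just a client making those same calls:
plugin = TinyStoragePlugin()
info = plugin.get_plugin_info()
vid = plugin.create_volume("data", 10)
```

The point is that the CO and the plugin agree only on the call surface; everything behind `create_volume` is the storage provider's business.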
B: So the idea is, we need everyone to create these drivers, but there's a lot of work that's actually involved in creating a great driver, and that's where REX-Ray comes into play. REX-Ray is a cloud native storage orchestration engine that's been around for a couple of years. Its inception was around the Docker volume driver interface time, and then it moved forward, kind of following the ecosystem.
B: Most recently, we've made changes to REX-Ray to architecturally align it to be a CSI-native implementation, and what that means is that the focus of REX-Ray is going to be providing value on top of any CSI drivers that are created. It's actually kind of a middleware layer, but it should be transparent to the consumers. So anybody who's using storage with any of these COs would be able to—
B: —fire up that plugin or driver, and they may or may not even know that REX-Ray is running that driver, but it should make the experience great for them. To cluster providers and operators, it's going to mean that when you actually start a plugin or a driver, you're going to have a great user experience; the instructions for the CO and the packaging relevant—
B: —to the COs, that's all going to be handled by REX-Ray, and it's going to be consistent across any of the storage platforms that REX-Ray is packaging as storage drivers. And then, relevance to the storage projects and products: if you're a storage company or a storage project out there and you want to be relevant to CSI, REX-Ray is going to be the least-friction approach to creating a great CSI implementation.
B: So whether it's that type of common tool set, or common packaging or processes, or specifically documentation—that's all redundant information that we want to try to simplify and standardize to help create a better experience. So you get that, and then there are also going to be enterprise features that are built into the REX-Ray middleware framework.
B: So what does this look like from a consistent packaging perspective—what's the target from our perspective? Well, today, with CSI, everybody's going to create their own plugin, and how it gets packaged, where it gets shipped, and how it gets run—there's nothing in the specification that actually determines that. One way to think about this is—
B: —the CSI spec is going to define just the direct interoperability between a storage platform and a CO, but it's not going to define the user-experience spec—what's really expected to really help this project be successful. So, from our perspective, the Docker volume driver history is a pretty good example of where we think things need to go. It started out as: hey, everybody creates their own process or—
B: —their own app or tool or plugin for a volume driver, and people were going to run it in any way they wanted to. Then what we moved to after that with Docker was Docker managed plugins, and this is where you took the process or tool, packaged it up as a container, and all of a sudden there was a standard way that you actually deploy and run the plugin, and the user experience was much, much better. That's essentially, I think, where—
B: —CSI has to go as well. And this is just showing you what that looks like from a Docker perspective: from Docker Hub, on the left side there, you see the rexray repo itself, where the rexray org has twelve or so managed Docker plugins that are all containerized and very, very easy to get at.
B: Okay, next slide. So how do you actually create a REX-Ray driver? I hope it's been clear so far, but the only thing you actually have to do is create a CSI driver, because we are a native CSI implementation and we use CSI drivers in the backend to actually talk to any storage platform.
B: Those drivers are actually kept in a separate repo, and that might be rexray/<driver> if it's going to be part of the REX-Ray project, or it might be something that's held within your project's repo, like csi-my-storage-platform. The packaging of the REX-Ray tool with your driver happens separately from the creation of your driver itself. Another key point here is that the REX-Ray architecture, pre-CSI, was focused on what we'll call libStorage—
B: —and I think a handful of you are probably familiar with what that is. Essentially, libStorage had a similar goal to CSI in terms of creating a universal API, and so all of the REX-Ray drivers in the past were libStorage drivers. Now, as of three or four months ago, all of the drivers are being moved to being native CSI drivers. All right, next slide.
B: So how does it actually do this—how does this all work? I think this is a pretty simple visual depiction of it: you've got, in the middle there on the left, the REX-Ray engine; you've got the ability to advertise this kind of northbound incoming interface for Docker volume drivers, but also, at the same time, for any of the CSI providers; and then the backend communication happens to these storage platforms by way of the CSI driver. So, in summary, REX-Ray is going to provide the common user experience, right?
B: It's going to package up any new CSI drivers. It has a pretty well-tuned CI/CD process for publishing the actual artifacts in different places, and then it's going to provide a layer of middleware—within gRPC—to add value on top of any of these CSI drivers that are created.
B: All right, next slide. So the purpose of it right now, or where we're at with REX-Ray, is that we're really trying to support the CSI ecosystem. I think the CSI team solved a huge technical challenge of getting storage closer to applications and making sure that the interop was better than it was before, and I think—
B: —it also solved a challenge for storage companies and storage platforms, because we can just focus on one interface, versus having to pick and choose and divide our efforts and have, you know, not-as-great implementations at that point. But there's still work to be done on getting CSI to stability.
B: It's really going to require that people use the plugins. For example, if I'm thinking about the Kubernetes world: I've got a lot of in-tree plugins in Kubernetes right now, and why would I go and use an alpha CSI plugin if the in-tree plugin works just fine, right? So getting people to actually start using these CSI plugins is kind of a key point of getting CSI adopted, maturing it, and getting it to stable.
B: It's going to require that people take that jump and use the new plugins instead of the in-tree plugins in Kubernetes, and to do that we're going to have to make sure it's got a great user experience. So REX-Ray is setting us up for that. I think that standardizing the implementation of the plugins is kind of key to helping the ecosystem mature and move forward, and I think the key measure of success is, for example, the Kubernetes ecosystem.
B: I think that, up to this point, the CSI project maintainers have discussed it and want to make sure that the ecosystem grows around the project before they think about bringing things in. There are, I think, many things from a coding perspective that the CSI project would be interested in, and I think the short term of what they described in the roadmap is the validation tooling—not necessarily a kind of middleware layer like this. I think that's one way to differentiate—
B: —it: what CSI as a project is going to bring in is going to be things that are very, very specific to the specification and things that are abstract of COs, and I think that's clear from their intentions. Then something like REX-Ray is going to be that layer on top, which is helping standardize the user-experience side when it comes to how you actually consume these drivers with the COs. Does that make sense? Okay.
F: We want those to emerge naturally. REX-Ray is a great example of something that's coming out naturally. We don't want to pick a winner here, and then, once the project matures and there are go-to libraries that everybody is using, we can consider pulling those into the CSI project itself, and that's—
B: I think, if I think about the next couple of years, you're going to have CSI, right, and you're going to have the more direct implementations of CSI as these libraries, and then you're also going to need some type of tooling that provides a layer of innovation which moves somewhat separately from CSI—because there are going to be things that you want to add, things that can add value to these drivers, where CSI is going to say, hey, I don't know if that belongs or not.
B
Let's,
let's
see,
let's
see
how
it
goes
to
see.
If
the
community
you
know,
cares
or
not
before
we
actually
bring
that
into
the
spec
and
as
an
example,
one
of
the
things
is
encryption
right.
So
one
things
that
Rex
ray
is
gonna.
Add
you
know
what
it's
going
to
provide
to
any
driver
is
it'll,
actually
bring
a
once.
B
Yes,
I
suspect,
maybe
as
additional
methods
or
what
have
you
or
maybe
code,
but
other
times
it's
really
going
to
be
long-living
outside
of
it.
So
I,
I
kind
of
see
a
world
where
there's
there's
definitely
need
to
like
enhance
and
augment
and
contribute
things
to
the
CSX
pack.
But
sometimes
that's
sometimes
things
like
don't
belong
there
and
should
be
a
little
bit
abstract
of
it
and
I.
Think
that's
where
x-rays
can
apply.
I
think.
B: So gRPC has these interceptors, and when you're building a plugin—a CSI driver—if you want to build a good one, you're going to have to do the stuff that we always do, right? You're going to add your logging, your authentication, your authorization—these things that you just typically build in there. It takes effort to actually do that, and do it the right way. So there's an ability within gRPC to add in interceptors, and the interceptors are where we're going to augment and enhance the CSI drivers.
B
So,
as
we
list
out
some
of
the
things
that
we're
thinking
about
I,
have
it
on
the
roadmap
slide
like
a
lot
of
those
things.
They're
gonna
come
to
fruition
through
just
ejecting
interceptors,
which
mean
that
you
know
you,
you
create
a
native
CSI
driver
that
is
focused
on
you
know,
implementing
your
core
features
of
your
source
platform
for
like
your
crud
operations
and
they're
your
orchestration
operations,
and
then
these
interceptors
just
come
in
extensively
to
add
value.
On
top.
B
I
mean
the
simple
way
is
just
to
say:
hey
like
logging:
how
can
we
make
logging
standard
across
the
drivers?
Well,
one
way
to
do
that
is
to
add
an
interceptor
for
logging
and
all
of
a
sudden,
you
can
add
context
IDs
and
things
that
are
valuable
for
tracing
operations.
So
it's
it's
just
a
ng
RPC.
It's
a
great
way
that
you
can
just
easily
bring
in
these
core
things
that
make
your
implementation
better
and.
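The logging-interceptor idea can be sketched with a plain function wrapper, which is essentially what a gRPC unary interceptor is. (The real REX-Ray/GoCSI interceptors are written in Go against the gRPC API; the handler, method name, and context-ID details below are invented for illustration.)

```python
import itertools

# Toy stand-in for a gRPC unary server interceptor: a wrapper that runs
# before and after every handler, stamping each log line with a context ID
# so a request/response pair can be traced.
_ids = itertools.count(1)
log = []

def logging_interceptor(handler):
    def wrapped(method, request):
        ctx_id = next(_ids)                  # context ID for tracing
        log.append(f"[{ctx_id}] -> {method} {request}")
        response = handler(method, request)  # call the real handler
        log.append(f"[{ctx_id}] <- {method} {response}")
        return response
    return wrapped

# A driver author writes only the core handler...
def create_volume(method, request):
    return "vol-1"

# ...and the interceptor layers standardized logging on top.
handler = logging_interceptor(create_volume)
result = handler("CreateVolume", {"name": "data"})
```

The driver stays focused on storage logic while logging (or auth, or metrics) is added uniformly across every driver that is wrapped the same way.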
B: That may be—I think that's how you implement some of this extra value, and I think logging, for example, is something that may be contributed to CSI. We've got this GoCSI package, and that's where some of these interceptors live, and I think that kind of thing would be valuable to everybody; it's core to just creating a great plugin—
B
Aside
from
what
your
perspective
is,
and
that's
an
example
of
something
that
we'd
say,
really
quick,
that
we
would
introduce
to
the
CSI
project
and
see
what
the
responses
and
I
said:
there's
other
things
that
are
that
we
do
in
interceptors
as
well,
which
may
not
be
the
same
thing.
So
maybe
it's
gonna,
be
authorization
authentication
like
there's
kind
of
a
list
of
other
things
that
we
can
do
in
a
simpler
way.
B: I hear you there. I think that, over time, we're going to need somewhere to be very innovative and somewhere to test out whether people care about some of these things before they make it down into the spec—so, CSI Labs; this REX-Ray thing is like CSI Labs. It can be, yeah—I mean, that's one way that you could think about it.
F: The way that I like to think about it is that what CSI ultimately dictates is the specification—the protocol to interact between a cluster orchestrator and a volume plugin—and those are the only things that it dictates. Everything else that goes around it, how you make that interface exist, CSI will never dictate. It may suggest it, it may recommend it, but it'll never dictate it.
F
So,
while
we're
starting
the
project
we're
just
focusing
on
what
exactly
that
interface
should
look
like,
we
haven't,
we've
purposefully
avoided
defining
what
the
packaging
should
look
like.
That
can
differ
from
co2
Co,
defining
what
logging
authorization
and
all
these
things
are
going
to
look
like.
If
you
go
to
the
kubernetes
project,
you'll
see,
we
recommended
one
way
to
deploy
it
on
kubernetes,
but
we
want
these.
F
Ultimately,
if
it
ends
up
that
there
is
just
a
very
common
standard
way
that
folks
are
deployed
creating
CSI
volume
plugins,
then
it
may
make
sense
to
pull
that
into
the
CSI
project
itself,
as
here
is
some
recommended
packaging
that
you
can
use,
but
you
don't
have
to.
Ultimately,
all
you
have
to
do
is
create
something
that
implements
the
interface.
It
doesn't
matter
how,
in
order
to
create
a
compatible,
CSI
driver.
You
can
use
this
optional
tooling,
if
you
want
to,
but
you
don't
have
to.
E: Yeah—the part I tripped up on is: if REX-Ray is a CNCF project, do we start thinking about common sets of packaging, or a common interface for packaging, on CSI? I completely understand why we set that aside and it's not among the critical pieces in core CSI, but I think this is in the context of REX-Ray as a CNCF project, yeah.
G: Hi—[name inaudible]—so yeah, after developing some drivers—I was part of the Kubernetes CSI effort—I realized that tooling like this will really help, because there's a lot of duplication, and if every vendor has to go ahead and do all that stuff, there's a lot of common code which could be avoided. And some storage systems might want to implement only a few of the APIs and maybe can leverage something like this. That will really help them.
B: Absolutely. You know, from that discussion—I consider GoCSI a part of the REX-Ray project, and that's a good example of something that we've tried to keep strictly specific to the CSI interface itself, and it's that kind of thing, or parts of it, that we'd be very interested in contributing.
B: Totally, okay. So I kind of introduced this to say: hey, I didn't want to have a whole CSI discussion here—I know that REX-Ray is largely focused on it, and that's what I was talking about—but I encourage you all to join the face-to-face, whenever that face-to-face is going to be, because I think it's this kind of discussion that will carry on there and get pretty lively. So, good stuff. Next slide—okay, all right—so, the roadmap.
B
What
are
we
thinking
about?
And
this
is
where
one
we
want
to
get
it
contributed
to
a
foundation?
I
think
that
one
of
the
challenges
we've
had
with
the
project
over
the
last
couple
years
is
is
really
the
collaboration
from
other
storage
companies
and
it's
unfortunate
but
like
in
the
storage
ecosystem.
B
So
that's
one
of
our
roles
for
the
year
just
to
increase
collaboration
and
get
more
folks
involved
in
it
as
part
of
those
e20,
12
and
1
dot
x
releases
coming
up,
you
know
number
one,
and
the
biggest
thing
is
as
being
I,
never
sent
CSI
as
we're
not
one
compatible.
The
current
graduate
release
is
the
pre
zero
one
tag.
So
there's
some
small
changes
to
get
that
up
to
date.
Once
that
happens,
like
all
thirteen
or
so
x-rays
drivers
are
all
CSI
compatible
right
away,
suicide
0.1
compatible.
So
that's
that's
number
one.
B
The
you
know
will
continue
to
provide
the
Interop
capability
with
all
those
CSI
drivers
and
the
existing
ones
through
docker
at
cloud
foundry
that
come
rates,
flex,
etcetera
and
maysa.
So
that's
that's
gonna,
be
in
there
still,
but
we'll
get
the
enterprise
user
experience
I'm
actually
really
interested
to
hear.
You
know
feedback,
you
know
separately,
maybe
not
hearing
a
call
but
separately.
B
If
you
guys
are
interested
in
getting
engaged,
you
know
what
is
it
the
enterprises
of
people
who
are
actually
gonna
use
this
stuff
like
what
do
they
care
about,
and
what
do
we
need
to
do
to
make
this
a
great
user
experience?
The
first
things
we've
thought
about
was
the
the
deployment
so
for
any
of
the
SEOs.
We
did
a
simple
inconsistent
deployment
management
of
these,
these
plugins,
the
second
things
about
security
and
credential
integration.
B
So,
if
we're
gonna,
be
you
know
configuring
these
plugins
and
we
want
to
store
our
credentials
or
to
be
asking
for
sensitive
credentials.
We
gotta
use
something
else
to
actually
store
those,
whether
the
CEO
provides
it
through
like
a
future
CSI.
You
know
API
that
we
add
who
knows
but
I,
think
for
right.
Now.
We
just
need
to
make
sure
we
have
external
integration
through
something
like
vault
to
store
these
sensor
of
its
sensitive
credentials.
B
We
need
to
make
sure
that
we're
tracing
and
logging
and
providing
metric
integration
so
that
we
can
actually
record
all
the
events
and
provide
all
the
visibility
to
what's
going
on
with
these
plugins.
So
those
are
those
are
three
key
things
that
we
want
to
make
sure
we
accomplish.
For
these
experience
and
I
think
it's
it's
arguable
that,
like
those
are
three
key
things
that
everybody
should
do
with
their
plugins
right,
but
they're
not
easy
to
do
so.
B
If
we
can
provide
that
all
through
x-ray,
I
think
there's
tons
of
value
in
just
packaging,
your
your
driver
within
Rex
uh-huh.
You
know
another
thing
that
that
some
run
into
with
their
CSI
deployments
or
CSI
plugins
is
scale.
We've
got
centralized,
EPI
throttling
omlette
on
the
docket,
and
so
what
does
that
mean?
So
if
you've
got
a-
and
this
is
actually
really
difficult
to
pull
off
with
CSI
if
you've
got
if
you've
got
AWS,
for
example,
and
you've
got,
you
know,
ten
hosts
or
maybe
like
ten
different
clusters.
B: —you're going to have a bunch of CSI plugins running. If all those plugins are independently trying to use the AWS API, it's going to saturate it very, very quickly. If you've got one Kubernetes cluster today, essentially it manages that—but what if you've got ten different Kubernetes clusters? How can you throttle that stuff? So one of the things we're adding to REX-Ray is etcd integration, so that it can extend the idempotency domain beyond just a single implementation.
The
next
thing
here
is
extended
volume
functionality,
so
the
data
data
at
rest-
encryption
I,
mentioned
that
earlier
that,
if
you've
got
any
platform
as
providing
block
storage,
we
can
add
a
middleware
step
where
we
add
a
an
encryption,
Shadow
device,
3dm
crypt
and
then
any
any
data
at
rest
is
in
good,
pretty
cool.
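Conceptually, the dm-crypt shadow device is a transparent layer: writes are encrypted on their way to the backing block device and decrypted on the way back, and neither the CO nor the storage platform needs to know. A toy sketch of that layering follows (a SHA-256 keystream XOR stands in for real dm-crypt cryptography here; never use this for actual data protection):

```python
import hashlib

class BlockDevice:
    """Raw backing store, as provided by the storage platform."""
    def __init__(self, size):
        self.data = bytearray(size)

class EncryptedShadowDevice:
    """Transparent encrypt-on-write / decrypt-on-read layer.

    Illustration only: real dm-crypt uses proper ciphers (e.g. AES-XTS);
    this keystream XOR just shows where the middleware layer sits.
    """
    def __init__(self, backing, key):
        self.backing = backing
        self.key = key

    def _keystream(self, offset, length):
        # Deterministic per-byte keystream so reads can undo writes.
        ks = bytearray()
        for pos in range(offset, offset + length):
            block = hashlib.sha256(
                self.key + (pos // 32).to_bytes(8, "big")).digest()
            ks.append(block[pos % 32])
        return bytes(ks)

    def write(self, offset, plaintext):
        ks = self._keystream(offset, len(plaintext))
        self.backing.data[offset:offset + len(plaintext)] = bytes(
            p ^ k for p, k in zip(plaintext, ks))

    def read(self, offset, length):
        ks = self._keystream(offset, length)
        raw = self.backing.data[offset:offset + length]
        return bytes(c ^ k for c, k in zip(raw, ks))

disk = BlockDevice(64)                          # what the platform hands back
vol = EncryptedShadowDevice(disk, b"demo-key")  # what the consumer mounts
vol.write(0, b"hello world")
```

Reads through the shadow device return plaintext, while the bytes actually sitting on the backing device are ciphertext, which is the whole point of the at-rest middleware step.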
B: That can happen within something like REX-Ray to handle the situation where volumes are locked and you actually do forceful detaches, and then eventually, if that's successful, we can figure out how to move it into CSI. Availability is another thing in terms of enterprises: we just need to make sure it's fully tested and supported at a global level across the volume plugins. And then, extensibility-wise, we're planning on integrating it with the portfolio of CNCF projects.
B: All right. So, the history of releases with REX-Ray: we've had 78, so pretty consistent over the past couple of years—that's the top chart on the right. In terms of activity, we've had a pretty steady increase in activity on the repo; we're at, I think, about a thousand stars right now. We've had 150,000 or so downloads of REX-Ray—that was actually over the past year—and then the Docker Hub downloads were over 50,000. All right, next slide.
B
The
contributors
like
this
is
where
we're
trying
to
increase
it.
This
is
the
contributors
over
time
on
we've
had
42
individual
contributors
of
code
and
then
264
cloud
route,
collaborators
in
the
project,
github
issues
and
other
things,
and
then
on
the
right
side.
You
can
see
that
steady
growth
again
of
the
stars.
Alright
next
slide
all
right.
So
so
why
why
your
x-ray?
B
Think
with
our
experiences
and
team
and
working
this
area,
we're
pretty
laser
focused
on
you
know
what
we're
hearing
from
customers
and
what
we
think
is
going
to
help
the
CSI
community
move
forward,
a
mature
so
having
the
foundation
having
collaborators
in
the
project.
It's
going
to
be
a
great
thing
for
us,
any
any
comments
or
thoughts
on
that
I
mean
that's,
that's
pretty
much.
The
rest
rate
pitch
that
we're
thinking
about
is
about
search.
B
Actually,
I
had
a
quick
question,
so
in
my
this
is
Matt
from
from
doTERRA
I
was
wondering
in
for
Rex
ready.
Is
there
a
focus
on
you?
You
seem
to
be
providing
this
functionality
on
top
of
what
CSI
is
defined
as
and
I
want
to
heard
from
the
perspective
of
a
vendor,
or
is
there
a
focus
on
making
sure
that
there
are
vendor
pass
throughs
like
say
if
a
vendor
already
has
hardware
level
encryption
built
in
and
they
don't
necessarily
need
to
use
the
D
encrypt
option
that
you
have
in
Rex
ray?
B
Is
there
a
way
to
bypass
that
and
use
the
vendors
option
and
the
vendors
capabilities
yeah
if
it's
built
into
your
your
driver,
I
mean
that's.
That's
kind
of
the
next
thing
about
architectural
work.
Rex
rates
gone
now
is
that
the
Rex
rate
just
packages
up
native
CSI
drivers,
so
whether
your
CSI
driver
just
runs
standalone
and
just
is
this
es
dry
driver?
And
it
does
you
know
the
minimal
functionality.
B
You
expect
it
to
that's
great
yeah,
but
it
is
you
could
you
would
implement
those
core
encryption
features
inside
of
your
CSI
driver
and
then
Rex
ray?
Would
be
able
to
use
those-
and
you
know
it's
an
interesting
parameter
to
say,
like
do
not
allow
extra
middle,
do
not
allow
like
Rex
rays
encryption
capability,
but
do
allow
all
the
other
things
that
Rex
rate
provides
I.
Don't
think
that
we've
thought
from
that
perspective
yet,
but
it
would
definitely
be
something
that
we
would
consider
and
I
think
something
that's
important.
B
So
you
don't
have
people
duplicating
that
level
of
encryption?
Yeah
awesome,
that's
pretty
much.
What
I'm!
Looking
for
a
lot
of
you
know
what
we
do
as
a
vendor
is
trying
to
implement
the
fastest
way
of
doing
these
things.
You
know
because
we
have
access
to
the
hardware
layer
and-
and
so
you
know
duplicating
that
that
sort
of
effort
at
the
software
layers
seems
excessive.
B
Well,
it
could
be
you're.
Also
talking
about
like
I
mean
if
we
get
into
storage
stuff
here,
you've
got
like
you,
maybe
put
I
guess
it
depends
on
your
client
implementation
like
if
you
have
a
client,
that's
doing
the
inflight
encryption
of
that
as
it
gets
stored
and
encrypted,
like
you
know,
at
the
device,
or
what
have
you
I?
Think
you're,
okay,
but
DM
Creek
would
provide
a
layer
of
encryption
for
someone
who
doesn't
do
in-flight
encryption
but
may
do
at
rest
encryption
on
the
platform,
so
it
just
I
think
it
all
kind
of
depends.
B
But
yeah
I
agree
with
you
like
that
capability
to
say
no
like.
We
never
want
to
allow
yeah
two
levels
of
encryption
like
we
can
probably
have
that
as
a
parameter
and
so
be
great
feedback
to
have
from
you
if
you're
interested
in
collaborating
on
that.
Okay.
Thanks
great
any
other
comments
out
there,
I.
B: Absolutely—if you go through the GitHub issues, you'll see lots of questions and people involved in the project; there are 40-plus collaborators and contributors to it, so those are all people who have been actively involved. We've got a splash page that lists customers and organizations that we've worked with on the project in the past. So there's definitely lots of adoption of it. I think that, in terms of this use case of persistent storage with applications—
B
You
know
the
primary
focus
of
your
x-ray
has
been
docker
and
and
mesas,
and-
and
it's
only
been
recently
with
kubernetes,
as
we've
adopted
CSI
that
hey
now
there's
this
new
opportunity
with
the
kubernetes
ecosystem
to
be
relevant,
so
I
think
that
the
know
the
adoption
and
interest
in
this
type
of
project
is
only
going
to
increase
because
of
CSI
is
CSI
and
because
of
the
you
know,
future
move
as
kubernetes.
You
know
lose
towards
CSI
and
less
of
the
less
focus
on
the
entry
drivers.
D: Okay, great—listen, let's do this: let's use the last few minutes just to talk about future topics. So if folks have anything they'd like to see presented in the future, or discussions they'd like to have, I'd love to hear them, and we can capture them here, and then Clinton and I can reach out to folks and try to get some stuff scheduled. So I see we have Minnie—oh yeah.