From YouTube: CNCF Storage Meetup: September
So this is the first Cloud Native Storage SF meetup. It was formerly the ClusterHQ meetup, started by some of the pioneering folks behind container storage, and it's a meetup that got revived around that, so hopefully we'll be doing regular, general cloud-native-storage-oriented meetups. Tonight we've got some really cool presentations from a couple of folks in the community: Bassam Tabbara from Quantum talking about Rook, and Saad Ali from Google talking about the Container Storage Interface. So without further ado, I'll pass it over to Saad.
So I'm going to go over what exactly CSI, the Container Storage Interface, is: where it came from, what's next, and how it actually works. We'll save Q&A until the end, but if you have any burning questions, just raise your hand.

First up, the background: what were the motivations behind the Container Storage Interface? Basically two things. From the cluster orchestration system side — and when I say cluster orchestration system, that's Kubernetes, that's Docker Swarm, that's Mesos, that's Cloud Foundry — we, as cluster orchestration systems, want to be able to offer our users as many storage systems as possible.
On the other hand, storage vendors also want to make their systems available to as many users as possible, while doing as little work as possible. Today what they have to do is navigate through all the different orchestration systems and figure out how to make their storage work with each of them. So what CSI is, is an attempt to define an industry-standard interface.
It defines an industry-standard interface that cluster orchestration systems can use to expose any arbitrary storage system. And you know the old joke — we just need one more standard and it'll solve everything — and while it's a joke, it's kind of true: we're hoping this is the one that will work for all cluster orchestration systems, because it is a collaboration between Kubernetes, Mesos, Docker, and Cloud Foundry. It started in February of 2017.
So where are we today? There is a spec published on GitHub at github.com/container-storage-interface — you can go check it out. I want to thank Jie and James, who did a lot of the heavy lifting in helping actually craft the spec. The initial version of the spec has been published, and what we're doing now, as cluster orchestration systems — Docker Swarm, Kubernetes, Mesos, and so on — is going back to our respective communities and saying, okay:
How do we take this draft of an interface spec and actually implement it in our systems? The idea is that while the CSI spec hasn't gone to 1.0, we're going to go back, try to implement it, learn from that, revise the spec, and then call whatever we all converge on 1.0. So that's the point we're at today, and where we're going tomorrow. Now we can jump a little bit into what CSI actually looks like.
Let's start with what CSI is not. CSI does not define the packaging, the deployment, or the monitoring of the plugins themselves. What CSI does is define the interface, and it very purposefully limits itself to that layer. Imagine it like the networking layers: HTTP doesn't really care what runs underneath it. We want a very strict abstraction layer that says, this is what CSI is; the packaging you can decide for yourself.
Ultimately we do want storage vendors to be able to write a CSI-compatible plugin and have it just work everywhere, so we'll need some sort of packaging, and the packaging needs to align at some point. But we want that to happen organically: we want the cluster orchestration systems to define packaging in a way that makes sense for each of their systems, and have that converge. And while the spec is just the interface, we also didn't define grades of storage within the spec — that's something people have been confused about.
So let's talk about what CSI actually is. It is an interface, and it enables three major scenarios. One is creation and deletion of volumes: when an end user requests storage, the cluster orchestration system can go out and talk to the storage back end to have a new volume created to fulfill that request, and then, when the user is done with it, have it deleted. The second is making the volume available on the machine the workload gets scheduled on — for some storage systems that's an attach and detach operation. And the third is mounting and unmounting the volume for use.
What about all these other cool features that storage systems expose, like snapshots and replication and so on? The purpose of CSI is not to be the cutting edge; it's not going to lead the way in storage. What we want is the least common denominator that all of us need in order to get bare-minimum functionality, and so with 1.0 we're focusing on just those three simple things.
The plan is to add additional things as we agree on them: we'll let features evolve in the cluster orchestration systems, and once they converge, we add them to the spec. So when it comes to non-critical functionality — not today. The last thing I want to add is this: the mount/unmount call is the one call that must be executed on the node machines themselves.
The create and delete calls and the attach and detach calls could theoretically be executed from anywhere within the cluster, because the storage control plane usually exposes some kind of interface you can call from anywhere within the cluster. But the last mile of making your storage system available to the user — that code needs to be executed on the node machine.
And so, for that reason, we segmented the API into three sections: the first is the controller, the second is the node, and the third is identity. Controller calls are calls that can be run from anywhere — as I said, attach/detach and create/delete. Calls that are categorized as node calls are guaranteed by the cluster orchestration system to only be called on the node where the workload referencing that particular volume is actually scheduled. Finally, identity is a set of calls that provide generic plugin information.
Next up, let's talk about naming, because I've given you these really nice names. CreateVolume and DeleteVolume make sense; AttachVolume and DetachVolume make a lot of sense to me — that's kind of what we call them in Kubernetes. But the feedback we got was: wait a minute, I don't have an attach call; I have something that needs to be executed to make a volume available, but it's not really an attach. And mount and unmount can actually encompass more things than just mounting or unmounting.
You could theoretically apply a file system there, or a bunch of other things. So what we did was give them the big, long, funny names: ControllerPublish and NodePublish. ControllerPublish is basically the control-plane call, and NodePublish is the part that's going to be executed on the local machine.
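To make the shape of the interface concrete, here is a minimal Go sketch of the three service groups as described above. This is an illustration only — the real CSI spec defines these as gRPC services in protobuf, with richer request and response messages.

```go
package csisketch

// Illustrative only: the real CSI spec defines these as gRPC services
// in protobuf. Names follow the talk: controller, node, identity.

// Controller calls may run anywhere in the cluster, since they talk
// to the storage control plane.
type Controller interface {
	CreateVolume(name string, capacityBytes int64, params map[string]string) (volumeID string, err error)
	DeleteVolume(volumeID string) error
	// ControllerPublishVolume makes the volume available on a node
	// (e.g., attaching a cloud disk). It returns opaque info that the
	// orchestrator passes through to NodePublishVolume.
	ControllerPublishVolume(volumeID, nodeID string) (publishInfo map[string]string, err error)
	ControllerUnpublishVolume(volumeID, nodeID string) error
}

// Node calls are guaranteed to run on the node where the workload
// referencing the volume is scheduled.
type Node interface {
	// NodePublishVolume performs the "last mile": making the volume
	// available at targetPath so it can be mapped into the container.
	NodePublishVolume(volumeID, targetPath string, publishInfo map[string]string) error
	NodeUnpublishVolume(volumeID, targetPath string) error
}

// Identity calls provide generic plugin information.
type Identity interface {
	GetPluginInfo() (name, version string)
}
```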
So if you deploy a CSI-compatible plugin, one mechanism for doing that would be to have your CSI plugin containerized and running on both master and node. On the master side it would expose only the controller interface and the identity interface, and on the nodes it would expose the node interface and the identity interface.
So now, if an end user goes to the cluster orchestration system and says, I want storage, it has to be 20 gigabytes, make it happen — what the cluster orchestration system will do is call the controller plugin and say: I need a new volume, and here are all the parameters; it needs to have this much capacity, I'm going to use it as a block device, and so on.
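In terms of the sketch above, the orchestrator's side of that exchange might look something like this — an illustration with hypothetical parameter names, not the actual CSI message format:

```go
package csisketch

// provision sketches the orchestrator's side of "I want storage, it
// has to be 20 gigabytes": one CreateVolume call carrying capacity
// and opaque parameters. Values are hypothetical.
func provision(ctrl Controller) (string, error) {
	const gib = int64(1) << 30
	return ctrl.CreateVolume("user-volume", 20*gib, map[string]string{
		"accessType": "block", // "use it as a block device"
	})
}
```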
The plugin can then go ahead and call out to the back-end storage control plane, or do whatever it needs to do. Rook, which you're going to hear about next, does some cool things with a controller-centric model, where they generate an object within the Kubernetes API and have a controller that monitors it — so really, the implementation is up to you.
And once a volume is created, the user will want to actually use it somewhere. They're going to tell the cluster orchestration system: that volume you gave me, I want to use it with this workload; go schedule the workload somewhere where there is availability. The container orchestration system goes out and says, okay, I know there's some memory and CPU available on this node, I'm going to place your workload there, and now I need to make sure that particular volume is available on that machine.
So then the plugin calls out to the control plane, makes that happen, and returns publish volume info, which contains an opaque map — it's up to the storage vendor to determine its contents. Basically, whatever information is returned on the attach, the container orchestration system is responsible for passing back in on the node publish; for the vast majority of storage systems, that's all they need. So now the volume is actually attached and made available on the host machine.
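That handoff is the key contract. Here's a minimal sketch in terms of the earlier interfaces — the opaque map flows from the controller call to the node call untouched:

```go
package csisketch

// makeAvailable sketches the two-step publish flow: the controller
// call attaches the volume to the chosen node and returns an opaque
// map, which the orchestrator hands to the node-side call unchanged.
func makeAvailable(ctrl Controller, node Node, volumeID, nodeID, targetPath string) error {
	info, err := ctrl.ControllerPublishVolume(volumeID, nodeID)
	if err != nil {
		return err
	}
	// The orchestrator never interprets info; only the plugin does.
	return node.NodePublishVolume(volumeID, targetPath, info)
}
```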
You can imagine this as an Amazon EBS volume showing up as a device on the node. You need to take that device and make it actually available at a path that can then be mapped into the container. That's where the NodePublishVolume step comes in. It's going to be executed by the container orchestration system on the actual node: we call the portion of the plugin that's running on the node machine, and we pass in the same volume information that we had when the volume was created.
We say: this is the volume that needs to be mounted. The volume plugin is then responsible for making it appear at the target path, and when that's done, the workload is ready to run and the volume is available inside the container. We take the target path, we pass it into the container runtime — whether that's Docker or something else — and it's available for the user to use. Teardown is just the reverse: you do NodeUnpublish, then ControllerUnpublish.
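As a rough illustration of that last mile, here's what a block-backed plugin's node-side publish might do: format the attached device if it has no filesystem yet, then mount it at the target path. This is a sketch under assumptions — a real plugin would use the device path from the publish info and be much more careful about idempotency and error handling.

```go
package csisketch

import (
	"fmt"
	"os"
	"os/exec"
)

// nodePublish sketches the "last mile" for a block-backed plugin:
// take the device that ControllerPublishVolume attached to this node
// and make it available at targetPath for the container runtime.
func nodePublish(devicePath, targetPath string) error {
	if err := os.MkdirAll(targetPath, 0o750); err != nil {
		return fmt.Errorf("creating target path: %w", err)
	}
	// Format only if no filesystem exists yet; blkid exits non-zero
	// when it finds none. (A real plugin is far more careful here.)
	if exec.Command("blkid", devicePath).Run() != nil {
		if out, err := exec.Command("mkfs.ext4", devicePath).CombinedOutput(); err != nil {
			return fmt.Errorf("mkfs: %v: %s", err, out)
		}
	}
	if out, err := exec.Command("mount", devicePath, targetPath).CombinedOutput(); err != nil {
		return fmt.Errorf("mount: %v: %s", err, out)
	}
	return nil
}

// nodeUnpublish is the reverse step at teardown.
func nodeUnpublish(targetPath string) error {
	if out, err := exec.Command("umount", targetPath).CombinedOutput(); err != nil {
		return fmt.Errorf("umount: %v: %s", err, out)
	}
	return os.Remove(targetPath)
}
```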
So that's pretty much it. There are a few more calls — if you want, I can go into the details of what they are — but you can think of them as mostly supporting calls. They're not exposing any new functionality; they're the glue to make everything work together. And that's pretty much it, short and sweet. Any questions?
So for Kubernetes specifically, the way volume plugins work today is they're all in-tree, meaning they are actually compiled into the Kubernetes codebase, and that's very, very painful for storage vendors: either they don't want to open-source their code, or they don't want to be tied to the Kubernetes release cycle. So this is the mechanism by which we're planning to go out of tree. One of the mechanisms we already have is Flex Volumes, which is an exec-based model; with CSI we're trying to come up with something more robust.
The way storage classes work in Kubernetes today is that you create a storage class with a bunch of opaque parameters that get passed through, on creation, to the in-tree volume plugin. That works very nicely with this, because the only thing that changes is that instead of passing those opaque parameters into the in-tree volume plugin, you're going to be passing them into some external volume plugin. So the public-facing interface doesn't change at all.
CSI specifically does not define who selects the node; that is the responsibility of the cluster orchestrator. If you look at Kubernetes today, we have a scheduler, and when a workload is created within Kubernetes — that's a pod — it is unscheduled. We have controllers that monitor these pods, and the scheduler says: I see there's a pod that is unscheduled, I know what resources are available on my servers, so I'm going to go ahead and place it wherever there are resources to do so.
So, personally, the learnings we've drawn on come largely from what we've already implemented in existing systems. That said, we realize we are not storage experts, and the project is public and open source, so we want folks to come join our community and help shape it. If there's something that's missing, you know, join in — join the meetings.
So we actually just published a governance doc — I'm not sure if it's up yet. The idea is that we don't want CSI to become a war between storage vendors. What often ends up happening is that storage vendors see an opportunity to push their own agendas, and we want to shy away from that. From the cluster orchestration side of things, we want to promote the best user experience, first and foremost.
We've learned what that end-user experience looks like — we've each played around with it independently through all of our cluster orchestration systems, and a lot of the learnings from that are feeding in. That said, we realize we're not storage experts. I think the feedback I'm hearing from you is that you're providing feedback and it's not being heard, and that's fair. But ultimately, we don't want to make the approvers people who have a vested interest.
So there are basically two lists: there's the orchestration list and then there's the public list. The idea is that the public list is where input is generated, and then final decisions are made on the orchestration list. That said, nothing is set in stone yet; I'm hearing the feedback, I'll take it back, and we'll see what we can do. I'd love to talk to you afterwards.
Awesome, thanks Saad. So next up, Bassam Tabbara from Quantum to talk about Rook. Questions — like with Saad's talk — we'll hold till the end, and after this we'll wrap up with some networking. But yeah, CSI is definitely rapidly evolving, and Saad is one of the leaders in the community, so you've got a unique opportunity to get some one-on-one time with him afterwards.
All right, so I'll talk about Rook. If you think about CSI and the targets Saad just gave us as the front side of the house — it's basically about provisioning volumes so that they work with container pods and all that — Rook is essentially the back side of the house: the storage plane and control plane of the actual storage system, providing storage to all the volume plugins and everything else. And Rook essentially runs in-cluster.
The finer point was: how do I get my own EBS, or my own S3 or EFS or whatever — how do I run layer one in my cluster, like I'm running everything else in my cluster? So essentially, bring the storage system to the cluster. We also wanted to do it in a way where you don't have to reinvent the stack: we didn't want to go build a new data path, we wanted to use existing, hardened ones.
Ceph has the hardened data paths, and we focus more on the orchestration and integration into cloud-native environments, as opposed to spending the ten man-years it takes to harden a data path. So Rook is an orchestrator: it takes storage software and deals with deploying it, bootstrapping it, configuring it, provisioning it, healing it — all the things you'd expect scripts, or a cluster admin or storage admin, to do manually, Rook orchestrates and automates, using the orchestrator and all the services available to us to make that happen.
On top of a really solid orchestration service, the second part of Rook is that it integrates deeply into the cloud-native environment. We've started with Kubernetes, and we use well-known extension points to make the storage service itself look like it was built in — like it was in-tree, an extension to Kubernetes — as opposed to having another CLI, another management layer, another management interface.
And just to motivate this a little bit: if you look at a storage cluster, there's a lot of work that goes into the orchestration environments for volume plugins and how to provision volumes, but you're on your own when it comes to running the actual storage cluster. In the cloud you've got things like Google Persistent Disk and EBS and all those, but if you're not in a public cloud, you're literally on your own — it's either specialized storage software,
or it's appliances — NetApp boxes or whatever — or you roll out your own storage cluster.
Underneath Kubernetes you need some kind of persistent storage. Dynamic volume provisioning gives you the machinery to provision storage, but who's providing the storage? You have to go build that, and there are a bunch of questions: how do I provision it, upgrades, backup, DR, scaling — all of that has to happen manually. And even if you're running, say, GitLab on Kubernetes in Amazon, the story only changes a little bit: with EBS you have dynamic provisioning support, but it's still not fully integrated.
Think about all the effort that's gone into orchestration environments so that everything else just works — all the friction that's been removed — and then you hit storage, and you're like, well, we can do better than that. And if you go across availability zones, the story is worse, because to fail a volume over across availability zones in Amazon, you have to take a snapshot, pop it over to S3, and then restore it from S3 in the zone where you actually want the EBS volume to show up.
And so the aha for us was: hey, wait a minute — distributed storage systems are distributed systems. Why not just run them as cloud-native systems: put them in containers, turn them into microservices, have them be dynamically managed by an orchestrator, and build all the logic that makes them run completely managed — and do it in a way that integrates with the cloud-native environment.
All right, so I have a three-node cluster with nothing on it — a bare Kubernetes cluster. What I'm going to do is use a Kubernetes YAML file to deploy the Rook operator. This is a one-time operation. The operator is essentially the extension that knows how to run a storage cluster.
F
How
that
the
operators
running
like
I,
said
you
do
this
once
per
cluster
once
per
your
company's
coaster,
I'm
gonna
go
ahead
and
create
storage.
Cluster
storage
cluster
is
essentially
a
set
of
nodes
in
your
and
devices
storage
devices
that
you
want
to
bring
together
to
create
a
storage
cluster.
F
You
decide
which
ones
you
want
to
be
in
all
declaratively
actually
deployments
back
so
here,
you'll
see,
I,
have
a
cluster,
it's
gonna
be
called
rook,
I'm,
gonna,
use
all
the
nodes
and
all
devices
off
and
then
some
storage
specific
things.
Let
me
go
ahead
and
do
that,
but
this
would
be
the
equivalent
of
say:
hey
I
want
my
own
s3
or
I'm
on
my
own
EBS
store
I'm.
What
equivalent
of
that
I'll
create
the
cluster
again
I'm
using
cube
CTO,
there's
no
API
here,
there's!
No!
It's
early
just
making
changing
the
cluster.
The Rook operator is watching the Kubernetes API. It sees the declarative statement I told it to make happen — hey, go run a new storage cluster — and it goes and acts on it, using the Kubernetes machinery to get pods running on those nodes. You can see the deployments in the cluster now: monitors and three OSDs. Now I have a storage cluster — let's actually go use it.
What I'll do next is create a pool. A pool is a virtual concept, just like in most other storage systems, and in this case I'll pick the replication scheme — I could do erasure coding or others, but I'll just use replication, and for now a size of 1, one replica. So that's the pool, but I'll also create a storage class — Saad was talking about that earlier — which essentially sets a quality of service, or an SLA, around the kind of storage, and I'll link it to the pool in my storage cluster.
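Everything in the demo is driven through declarative objects. As a rough illustration, decoded into Go types with hypothetical field names (the real Rook CRD schema may differ), the two objects being created look something like this:

```go
package rooksketch

// Hypothetical Go mirror of the declarative objects from the demo;
// the real Rook object schema may differ.

// ClusterSpec: which nodes and devices to pull together.
type ClusterSpec struct {
	Name          string // "rook" in the demo
	UseAllNodes   bool   // the demo uses all nodes...
	UseAllDevices bool   // ...and all devices on them
}

// PoolSpec: a virtual grouping with a data-protection scheme. The
// demo uses replication with a size of 1; erasure coding is the
// alternative.
type PoolSpec struct {
	Name     string
	Replicas int
}
```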
There are really two volumes here underneath — one for the web server and one for the database. At this point you're looking at pretty standard Kubernetes dynamic volume provisioning: you'll see the WordPress and MySQL instances each have a volume claim, and the storage class they use is the block storage class associated with the Rook pool — that's how you link them up. This might as well have been external to the system.
So WordPress and MySQL are coming up; the pods are up and running. What happened here is that WordPress came up using volumes via dynamic volume provisioning in Kubernetes, and the volumes are provided by the other pods running in the storage cluster — they're essentially the provider for the storage. At some level, this is a converged setup running on the cluster.
As a controller, it watches the Kubernetes API, listening for changes to the objects we've extended it with — cluster and pool objects, and a file system object — and it acts on those. It has all the back-end-specific logic that ensures the storage cluster can run in a healthy way.
It does that by scheduling the pods that run all the storage code — the pods that actually do the storage work on each of the storage nodes. There's going to be a Rook volume plugin — it's in the works, initially CSI-first. All the state for Rook lives in the API, which is persisted; there is no other state, no other config database to worry about managing. And the operator is written the way other parts of Kubernetes are: it's constantly reconciling what is happening in the cluster versus what should be.
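A minimal sketch of that reconciliation pattern, with hypothetical types and actions — the real operator creates and deletes pods through the Kubernetes API rather than returning strings:

```go
package rooksketch

// ClusterState is a hypothetical snapshot of what is actually
// running: storage monitors and OSDs, as in the demo.
type ClusterState struct {
	Monitors int
	OSDs     int
}

// DesiredState is a hypothetical rendering of the declarative spec.
type DesiredState struct {
	Monitors int
	OSDs     int
}

// reconcile compares current against desired and returns the actions
// to take. It deliberately keeps no history: only what's out there
// now versus what should be there, as described in the talk.
func reconcile(desired DesiredState, current ClusterState) []string {
	var actions []string
	for i := current.Monitors; i < desired.Monitors; i++ {
		actions = append(actions, "start monitor pod")
	}
	for i := current.OSDs; i < desired.OSDs; i++ {
		actions = append(actions, "start OSD pod")
	}
	return actions
}
```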
Actually, it's using Kubernetes selectors and everything else, but plugging storage placement into things like affinity and anti-affinity isn't all there yet — which is why storage controllers are really interesting. I don't expect the scheduler to understand specific software or hardware storage systems; I expect a storage controller to honor, you know, the topology.
That's essentially why we built this as a reconciliation loop. It's not taking past knowledge of the system; it's taking current knowledge of what's out there and the desired knowledge of what should be there, and reconciling them. So you can imagine that getting more and more exact over time.
You could, if you wanted to, layer it on any of the others, but it seems like that defeats the purpose — having storage systems layered on storage systems doesn't make a lot of sense, performance-wise or cost-wise. Yes, in this example it was running directly on local storage. At some level, the value of a storage system is that it takes unreliable local storage and turns it into highly reliable storage that others can use.
That's a good question. So far we've only focused on the open source side — it's open-source licensed, that's right. So you could argue that, but then it becomes an engagement beyond the open source project; it becomes more of a business-development or commercial relationship. I'd have to think about how that would work with Quantum.
What we've done is we've taken Ceph and layered it on top of Kubernetes, a very dynamic environment. Now you have to deal with the phases between pods coming up, quorums forming, retries, scaling. Isn't it nice to add another pod and use it as a storage node? That happens fairly frequently, and you have to decide what should happen: should I grow my cluster? Should I start moving all the data, or just some of it? Should I have a transition period? All those things require logic.
There are some policies that are built in right now, and more need to come. But this is like the early days: you don't want to reinvent how to manage machines and networks and all that stuff — you assume that's a substrate, and it's a good substrate with a long history. So that was the premise.
That's a function of the Kubernetes API, which is a function of etcd — and so I'm running these brief instances. There are ten-minute operations, and actually those are the easy ones; it's the millisecond operations that are harder. The ten-minute operations are easy — you have a long time. The millisecond operations are the ones the storage system itself has to handle and be tolerant of.
A rule for how to select which subset of nodes you want to be part of it — that's arbitrary: it could be the ones with SSD drives, ones with certain memory, very specific things. And then within that subset it's declustered, so every bit of data you write into it is, depending on your application settings, distributed across that subset according to the rules.
Awesome, thanks Bassam. So I think we're going to get kicked out of here around 9:00 or 9:10, so use the time to network, get to know each other, and talk with the speakers. Thanks everyone for coming. We'll probably do this on a monthly basis, maybe once every other month. Just a shout-out to the Cloud Native Computing Foundation for sponsoring, and also to the Rook folks for sponsoring the pizza and the drinks. We'll see everybody next time. Thanks.