From YouTube: CNCF Storage WG - 2018-05-23
C
Cool, this is going to be a quick one. The background is that we discussed this component called Node Disk Manager for a project at the Kubernetes face-to-face storage meeting that happened last week, and Clint suggested that maybe it's better to present it to a wider audience, so I'll just quickly walk through it. The intent is to share what we are doing here, and also to get some feedback and input in terms of collaborations or ideas for this open source project.
C
All right, just a bit of background about me: I work on the OpenEBS project. The OpenEBS project was one of the projects that we presented to the CNCF Storage WG back in February. One of the offshoots of the OpenEBS project was that there were several storage-related problems that are pretty common across storage solutions, so we're trying to handle these as multiple projects; Node Disk Manager is one of those projects.
C
A little bit of background: today there are primarily three types of persistent volumes that we can create in Kubernetes. One is the network-attached mode, which is primarily either an external SAN or NAS, or cloud disks; the actual storage is outside of the Kubernetes cluster. But there are two other modes that are coming up. One is direct-attached storage, which is where a lot of work is going on with local PVs, and the other is hyper-converged solutions like OpenEBS, GlusterFS, or Rook.
C
The common thing about local PVs and CAS is that we need some kind of mechanism to manage the disks that are attached to the Kubernetes cluster, so that some kind of higher-level operators can be written to create local PVs, or, in the case of container attached storage solutions, to provide these different types of disks to a CAS pod so that it can provide storage controller functionality and the volumes can be shared by different applications. With local PVs, you typically go directly from an application to one of the disks.
C
One common implementation pattern that we see is that each of these Kubernetes nodes can be attached to local disks, or these could be external targets like iSCSI or FC, or cloud disks like GPD or EBS. All these disks are taken, a concept of a pool is created, and from this pool multiple PVs can be created for applications. A pool could be as simple as an ext4- or LVM-based host directory, or it could be ZFS.
C
Right, so what are some of the common challenges, or desired things, that we need to handle? These CAS pods are typically long-running and they are on the critical path, so it's not advisable to restart them very frequently. And how do we handle the cases of disk failure? If a disk fails, we should get a notification so that we can replace it with some spare disk, or take some corrective action before the disk actually goes bad.
C
So, the motivation for doing this: though it started off as a sub-project of OpenEBS, it started making sense to do it in a generic way so that multiple projects can use it. One of the places where this can be used is local PVs. Local PVs today are statically provisioned PVs that can be used by apps.
C
There is a need for some high-level operators to be written, and node-prep and other containers have to be launched that will actually discover the disks, do some kind of wiping or low-level formatting work, and then provide them to the local PVs. These kinds of operations can be done generically by the disk manager, along with additional things like monitoring for usage and errors.
C
So while this complements the PV layer within Kubernetes, when you are operating at scale you would need some kind of storage operators that you will eventually stand up to manage the failed disks, or the failing disks. This is for that purpose. And also, how do we handle disks moving from one node to another, when a node has gone into a state from which it cannot recover?
C
If it's an externally attached disk, you can detach it from that node and attach it to some other node, or you can detect that a disk was used to store some data and can now be recovered from a different node and used for a local PV. Those kinds of operators can be built. All of this can be done if the disk information is available with some unique identifiers and some kind of type attribute.
C
For CAS, this becomes even more necessary, because local PVs or persistent volumes by themselves cannot be used, since we cannot dynamically attach disks. Most often CAS will have the requirement of making sure there are different disks attached to a node, and you should be able to replace a failed disk with a new disk, that kind of thing. There is also the ability to predict failures before they happen, so that some kind of migration workflow can be kicked off, or, let's say, for performance reasons.
A
Yeah, I had one or two questions. One is, there seems to be a blurry line here between file systems and block stores, and in your volume types you had both file system types and block store types. So ZFS and ext4, for example, are file systems, but iSCSI and LVM are block stores. So these blocks are really disks that are...
E
Today, there is a way to expose both block and file in the Kubernetes volume subsystem. You have a way to request it in the persistent volume claim object, to say whether you want block or file, and then, under the covers, the plug-in can decide how it wants to implement either of those. You also have the ability to implement file on block implicitly, if you're a block plug-in and you want to support file as well. In the future, we're considering making file-on-block a first-class field.
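As a minimal sketch of the request path described above (the claim name and size are illustrative, not from the call), the block-vs-file choice is the `volumeMode` field on a PersistentVolumeClaim:

```yaml
# PersistentVolumeClaim requesting a raw block device rather than a
# mounted filesystem; volumeMode: Filesystem (the default) requests file.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-claim        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block            # "Block" or "Filesystem"
  resources:
    requests:
      storage: 10Gi
```

Under the covers, the plug-in backing the claim decides how to satisfy either mode, as described above.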
C
And the next one; one more, this is the last one. Okay, so the way this Node Disk Manager works is: it's going to be a DaemonSet running on the storage nodes in the Kubernetes cluster. It's going to use different discovery mechanisms to identify the block disks that are attached to a node, and then it's going to put them into Kubernetes as custom resources.
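As a rough sketch of that deployment shape (the image name and node label here are hypothetical placeholders, not the project's actual manifest), a per-storage-node discovery agent might look like:

```yaml
# Hypothetical DaemonSet: one node-disk-manager pod per storage node,
# with host /dev access so it can discover the node's block devices.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-disk-manager
spec:
  selector:
    matchLabels:
      app: node-disk-manager
  template:
    metadata:
      labels:
        app: node-disk-manager
    spec:
      nodeSelector:
        node-role/storage: "true"    # hypothetical label marking storage nodes
      containers:
        - name: ndm
          image: example/node-disk-manager:latest   # placeholder image
          securityContext:
            privileged: true         # needed to probe host block devices
          volumeMounts:
            - name: dev
              mountPath: /dev
      volumes:
        - name: dev
          hostPath:
            path: /dev
```

Each pod then registers what it finds as custom resources, which is what the control-plane operators consume.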
C
I have an example of how the custom resource looks in the next slide. Using those custom resources, the storage control plane operators, let's say the local PV provisioner or OpenEBS or Gluster, can use them to create the objects that they need. For example, it could be a local persistent volume, or, in the case of OpenEBS, it could be creating a storage pool. Similarly, a Gluster daemon or DaemonSet can use these disk custom resources to identify which nodes should have the pool created.
C
There is also a monitoring piece that is getting built into Node Disk Manager that can monitor these disks for errors, as well as metrics in terms of I/O ops, latency, throughput, etc. Those can be configured to be exported to Prometheus, or there could be some alerts set up, or some events that can be sent to the storage control plane so that it can handle them appropriately.
C
Okay, so this just explains some of the objects that I described earlier; in the interest of time, let's just go to the next one. OK, so this is an example disk resource object that will be created; it's a custom resource. It will have information about the topology of the disk. Here I've just taken it from a Kubernetes GKE cluster, so it shows the host name on which the disk is available.
C
It shows what the path is, the capacity, and then some details that we got from lsblk. There is additional information that we can get: whether it's a block disk, whether it's an SSD, whether it's NVMe-connected, or, if you want to get the topology at the CPU level, those things can be obtained from this as well.
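Pieced together from that description, the disk custom resource from the slide might look something like the following (the API group, field names, and values are a best guess at the slide's content, not a verified schema):

```yaml
# Hypothetical Disk custom resource as registered by node-disk-manager.
apiVersion: openebs.io/v1alpha1          # assumed group/version
kind: Disk
metadata:
  name: disk-0123456789abcdef            # unique identifier for the disk
  labels:
    kubernetes.io/hostname: gke-node-1   # host the disk is attached to
spec:
  path: /dev/sdb                         # device path on the node
  capacity:
    storage: 100Gi
  details:                               # attributes gathered via lsblk etc.
    deviceType: disk
    rotational: "false"                  # e.g. SSD vs. spinning disk
```

Operators like the local PV provisioner or a pool creator would select disks by querying these resources.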
C
So
I'll
stop
here.
I
think
the
main
intention
was
to
kind
of
say
that
we
had
this
project
going
on
and
we
have
open,
EBS
and
then
also
humble
I
think
he's
also
on
the
fall
from
Red
Hat.
Looking
at
how
to
use
this.
For
writing
the
using
this
with
the
cluster,
a
percent
open
EPS.
We
are
really
looking
for
contributions
in
terms
of
design
ideas
and
then,
if
we
take
it
forward
to
see
how
to
make
it
generic
so
that
we
can
pull
some
high-level
storage
or
practices
in
this.
C
I think now is a good time to take questions. Also, one other thing I wanted to still look at: I need to get in touch with the Rook folks, if they're on the call, to see what they are doing in Rook. There is similar functionality there, and I'd like to see how that relates to this, and whether there is any collaboration possible with this one.
A
If not, I'll ask my question. It was around storage networks. Most of the public cloud providers don't really have this distinction, that I'm aware of, whereas in enterprise data centers it's not uncommon to have multiple storage networks attached, certainly within the data center, and possibly also attached to a particular node.
A
Have you given much thought to the concept of storage networks, how you distinguish between them, and how you advertise them? It's not uncommon for a LUN to be advertised on multiple networks, sometimes with different performance characteristics, etc. Have you given any thought to that?
C
I haven't.
A
Okay, that makes sense. Just to get back to my previous question: is there anyone on the call who can comment on whether or not they have any strong requirements for even exposing the concept of a storage network to start with? And then, secondly, multiple storage networks in a data center, or per node: that's a requirement I've seen some number of times, but I'm not sure how general it is. Curious if anyone else has it.
A
Well, the use cases that I've seen in the past are: one, you may have multiple full storage networks, of the kinds you mentioned, and some of them are connected to all the nodes while some of them are only connected to some of the nodes. So, if a given node is connected to a given storage network, a volume can be exposed on that network.
D
One of the problems is that multihoming, multiple networks on containers, is still not supported in Kubernetes. I've seen people do this, and we've looked at this for some scenarios, especially with a plug-in that enables you to switch between networks. There was a proposal brought to the networking working group with some details, but that's one area where I think we need to think through how containers choose which path or network to use when they want to use the network.
A
Yes, I'm well aware of the fact that we don't provide multiple networks, period, and it's quite problematic in some application areas, specifically building network infrastructure, and also for applications which have separate control plane and data plane networks, etcetera.
A
Yeah, we also have a thing called CNI-Genie, I think, which does something similar. I guess a pertinent question in this space is: do we want to treat storage networks as generic networks and sort of leave it up to CNI and the networking SIG and those kinds of people to solve that? Or do we think that a storage network is a special enough thing, distinct from general networks, that we want to solve it in a separate, storage-centric way?
B
I'd suggest that there are two levels to that answer. At one level, you're discovering the network connectivity between, essentially, ports on one side, ports on the other side, and switches in the middle. At the second level, you want to know what's underneath the ports, so you want to be able to go through a port and say, okay, what can I see, and do discovery there.
A
You broke up a bit there, George. I think my question was a different one: do we want to treat storage networks as distinct from data networks, or do we want to treat them as a converged thing? It seems to me like there are arguments in both directions. In many cases, general-purpose TCP networks are used to attach iSCSI targets, in the cloud for example.
A
But
there
are
the
cases
where
you
have
completely
physically
separate
and
and
the
protocols
that
run
over
them
are
completely
different
and
those
are
you
know
and
like
to
be
solved
by
the
networking
sig.
Unless
we
go
and
push
on
them
to
solve
that
I
I
would
imagine
it
would
be
difficult
in
practice
to
to
solve
the
two
problems
separately
because
there's
such
a
big
overlap,
but
we
should
just
be
aware
of
that.
A
Yeah, indeed, and one could argue both ways: whether that belongs within a general-purpose platform like Kubernetes, or all the kinds of things CNCF provides, or whether that is more proprietary, where each storage provider might have different interfaces for those discoveries and things, and maybe we push that outside of the standard definition.
C
I'd like to just add one last comment: one of the things that we wanted to do with the Node Disk Manager is form a separate group that meets bi-weekly to make progress on it. So if anybody from this group is interested in joining that, please ping me or comment on the slides, and we'll open that up and then take it from there.