From YouTube: CNCF SIG Storage 2020-08-26
B: Yeah, we'll just wait a couple more minutes and have a couple more people join before we start.
B: Okay, welcome everybody. Good morning, good afternoon, good evening, depending on where you are. The main agenda item for today is the OpenEBS presentation; Kiran and his team are here to present it. For background, OpenEBS is a distributed block storage, cloud native storage system. It's currently a sandbox project of the CNCF, and it was introduced as a sandbox project, I guess, just before KubeCon Barcelona. Barcelona, right, Kiran?
B: Perfect. All right, so the team have raised a proposal to move this into incubation, and we are looking to prepare the review. Kiran, how would you prefer this to run? Do you want the questions to be interactive, or should we hold them till the end?
C: Yeah, I think let's keep it interactive. What I have done is put together a few slides just to set the context. I may or may not be able to answer all the questions today, but at least we'll try to follow up on them in the next calls with the right people. For now, let's do it interactive.
C: All right, I'll share my screen. I also have some of the team members on; I've shared this link with them just in case my internet goes off or things like that. All right, a quick summary of the project since we became a sandbox project last May, around KubeCon Barcelona: we have around 35-39 new companies contributing in some capacity — these numbers are pulled from DevStats and are also part of the annual review as well as the incubation PR — and we seem to be attracting at least five contributors every month. We have gone to a monthly release cadence, and we are onboarding new reviewers and new contributors.
C: So, just a quick recap for those of you who are just hearing about OpenEBS, and then we'll get into quick updates on what we have done since. OpenEBS 0.9 was the release when we introduced it to sandbox; right now it is at OpenEBS 2.0, and I'll provide a summary of the changes that we have made.
C: It's a hyperconverged storage, and we call this category of storage engines Container Attached Storage, because the storage services themselves are delivered as containers. These containers are orchestrated by Kubernetes and managed via Kubernetes custom resources as well as native resources.
C: One thing about OpenEBS, with all of its data engines: we try to run them in user space and make them portable, with the ability to run on any kind of Kubernetes platform or overlay. That's been one of the unique design constraints that we have set for ourselves, and at the same time, most of the adopters that we will see have mentioned that OpenEBS is easy to use.
C: So this slide just briefly introduces what container attached storage is, and then we tried to—
E: Kiran, if I may for a moment: are you going to cover the concern that was raised around relationships with other existing projects? I think Longhorn was one of them in the past. Are you going to cover how the projects fit in here?
C: Oh yes, I list all the storage engines, and we have also covered that information in the annual review PR as well — how OpenEBS compares with Longhorn, how OpenEBS compares with Rook/Ceph. We can continue that discussion.
C: All right, so most of the orchestration work is offloaded to Kubernetes. I think that's one of the primary differentiators between container attached storage and other implementations of storage services. The core of the CAS engines is about data services: taking care of the storage management on the Kubernetes nodes, making sure that data is highly available for the applications, and enabling data protection on them. Those are the services that are implemented within the CAS engines.
C: Just some examples — yes, we'll talk a little bit more about Longhorn today. All of these, in my opinion, fit into this category, and Rook is a great example of an orchestrator that could orchestrate all of these engines, the way it does Ceph and other such storage systems. Now, for somebody to get started with OpenEBS:
C: These are some basic examples, basic commands: you just use a single helm install command to install OpenEBS — it ships with some default storage classes as well — and then you launch the application. The picture here shows how it works: the application PVC is backed by a target, or rather the PV comprises a target and a set of replicas, and the target is responsible for distributing the data across the different replicas.
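As a rough sketch of that flow — the chart location and the default storage class name here are assumptions to verify against your install, for example with `kubectl get sc`:

```yaml
# Install OpenEBS with a single Helm command (assumed chart repo/name):
#   helm repo add openebs https://openebs.github.io/charts
#   helm install openebs openebs/openebs --namespace openebs --create-namespace
#
# Then a PVC against one of the default storage classes; the provisioner creates
# the PV, which is served by a target pod plus a set of replica pods.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol-claim
spec:
  storageClassName: openebs-jiva-default   # example default class; verify the name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```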
C: This is a slightly higher-level architecture diagram of what the different components are and how we categorize them. We have cluster-level components and node-level components. Most of the operators — including the CSI driver control plane components, as well as the OpenEBS operators themselves that manage the engines across the different nodes — are grouped under the class called cluster components.
C
These
expose
all
these
cluster
operators
work
where
kubernetes
resources,
so
you
can
kind
of
integrate
this
with
the
other
third
party
or
open
source
or
commercial
products.
So
some
of
the
things
that
we
have
integrated
with
are
like
whether
or
not
prometheus
and
kubera
prometheus
and
cortex.
In
fact,
and
in
terms
of
node
components,
there
again,
like
we
kind
of
divide
them
into
three
categories,
one
is
the
management
of
the
storage
and
volumes,
which
is
the
csi
node
components
or
agents
and
the
node
disk
manager.
C: I'll get into the question that Quinton asked in just a bit. This is a slightly more detailed view of the interaction between the various components. We have a collection of data engines, and we can segregate them as replicated versus local PVs. Replicated PVs are used by applications that need a high availability feature from the underlying storage.
C: Here we have all three engines that we support. Jiva is actually taken from a fork of Longhorn, even before Longhorn was part of the CNCF, and there have been slight differences in the way Longhorn and Jiva have progressed in terms of how they handle the data availability scenarios.
C: That's the primary difference: while the core engine parts are the same, the high availability aspects are where they differ, and they continue to be forked projects. But all of cStor, Mayastor, and Jiva have a similar architecture, where a PV is backed by an iSCSI target. Mayastor is starting to support different types of targets, but iSCSI is the most common one used as of today with our users. The target is a Kubernetes service associated with a deployment object, and it takes in the IOs and writes to the various replicas.
B: Yeah, a couple of questions, actually. If we look at the previous slide, the different components of OpenEBS — I believe when we did the sandbox review we talked about the OpenEBS core and the data engines. Just to clarify, when we're looking at the project at this stage in terms of the incubation submission, does that include things like the operator and the disk manager function as well, or are those external components?
C
It
includes
definitely
the
node
disk
manager
components.
These
are
all
written
by
the
open
abs
authors
as
part
of
the
open
abs
project
during
the
sandbox
we,
for
example,
c-store
data
engine
was
split
between
the
changes
that
open
abs,
authors
wrote
versus
what
comes
from
the
zfs.
So
that's
the
modifications
done
to
zfs
they're
kept
outside,
but
the
lib
cfs,
that's
the
one
that
adds
the
duplication,
layer
and
all
that
part
of
the
cncf.
So
we
can
get
in.
C
If
I
understand
the
question
correctly
alex,
so
we
should
get
into
the
details
of
like
what
are
all
the
reports
that
we
get
to
as
part
of
this
incubation
is
that
right.
B: Yeah — and I only mention this because we had this discussion with another project.
C: I was just observing that on the TiKV project as well.
C: Yeah, there are around 70 repos right now, but most of them are independent repos that we maintain just for build compatibility. The core repositories are spread across 14 to 15 repos. I'll share that list.
B: Mostly just out of curiosity: is the iSCSI target a component of the engine, or is that a separate component?
C: Yeah, the iSCSI target is part of the engine. That particular container actually gets spawned when a volume is created; it's not always running, it's per volume. There is an iSCSI target pod that gets created, right.
C: In this diagram, all the green components are engine components. I have not shown the operators or the CSI driver components that are there. It just goes to say that whenever you create a PVC for a given storage class, the operators take care of launching these data engine components for that particular PVC.
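A hedged sketch of how such a storage class selected an engine and its policies in the pre-CSI provisioner, following the openebs.io annotation conventions; the pool claim name and replica count below are placeholders:

```yaml
# The cas-type annotation picks the data engine; the config block sets its policies.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cstor-example-sc
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"
provisioner: openebs.io/provisioner-iscsi
```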
E: This might be getting into too much detail, but that iSCSI target — where is it located? Is it co-located with the volume replicas, or is it co-located with the attached container?
C: Right, that's actually controlled by a storage class policy that we can configure. There is no hard tying of the target to either the replicas or the application pods, but for performance or similar reasons, users can decide to say that it has to be on the replica pods, or that it can move along with the application.
D: Kiran, could you speak to where thin provisioning comes in around that iSCSI target? Is that all done by the back end?
C: Right. The way thin provisioning works is that it is handled by the replicas. Yeah, I think that's a great question. The target, for example, does synchronous replication, so if we have asked for a 5 GB volume, then we need to have 5 GB of available space on all of the replica pods. Now, it is possible for the replica pods to serve multiple iSCSI targets and store the data on the physical hard drives, so you could potentially have a 50 GB overall capacity at each replica but provision 100 volumes. We monitor the capacity usage at the replica, and you can expand with additional disks on the replicas themselves, and the new Day 2 operators that I talk about later on can actually help with that.
C
Let's
say
if
one
of
the
replica
parts
was
scheduled
to
take
in
data
from
multiple
replicas,
and
if
that
is
going
out
of
space,
we
can
shift
that
replica
to
another
node
where
there
is
capacity
available
for
them.
C
One
distinction
also
is:
this
is
not
a
scale-out
storage,
so,
for
example,
you
cannot
consume
the
capacity
on
different
nodes
and
provide
a
capacity.
That's
aggregate
of
all
this
capacity
to
a
nice
cpv.
It's
the
capacity
to
provide
to
the
iscsi
is
kind
of
what
you
get
and
it
has
to
be
available
on
each
of
the
replica
modes.
C
B: Right — so whilst the capacity for a volume can't exceed the capacity of an individual node, the volumes effectively get distributed across all of the nodes that have available storage, right?
C: Also — was I able to answer the thin provisioning question?
C: Thanks for those questions. Now we get into a little bit of the specifics of how these things are different. Just to reiterate Quinton's point: the Jiva repo that we have is a fork of the Longhorn engine. That's one of the components of the Longhorn project, and the Longhorn engine works in combination with the Longhorn controller and the Longhorn UI, where availability is really maintained by some additional operators which add and remove replicas on the controller.
C: But in the case of Jiva, there is a layer added on top of the Longhorn engine which automatically reconciles itself — reconnects to the controller on node restarts and node failures — and it does not depend on a control plane for the volumes to continue to work. That's where the major difference comes in, and it is also one of the reasons why we have put a limitation of around 50 GB of capacity on Jiva; a similar limitation does not exist with Longhorn.
C
We
need
to
have
at
least
two
replicas
in
online
mode
for
we
for
us
to
read
and
write
data
into
that
volume,
whereas
longhorn
works
with
even
a
single
replica
in
a
read,
write
mode
because
the
control
plane
or
like
the
ui
controls
on
the
quorum
logic
or,
like
you
know
it
decides
on
who
gets
to
be
the
master.
The
quorum
logic
is
in
built
into
jiva.
C
That's
why
we
wanted
to
reduce
the
time
taken
for
rebuild
and
make
sure
like
this
is
tuned
to
have
the
two
replicas
available
most
of
the
time,
if
not,
the
nice
busy
usually
puts
the
volumes
into
read-only,
and
there
is
a
manual
operation
required
to
remove
that.
C
Also
like
longhorn,
went
ahead
and
made
some
changes
in
terms
of
like
backup,
support
and
all
that
the
open
ebs
project
for
backup
and
restore
it
uses
valero.
So
the
all
those
enhance
all
those
changes
are
where
jeeva
differs
from
longhorn
and
the
other
difference
or,
like
you
know,
the
change
with
the
code
is
in
terms
of
space
reclamation
for
maintaining
the
high
availability
g
g.
One
log
on
both
actually
create
some
kind
of
an
internal
snapshots.
C
Jiva
ends
up
automatically
purging
them
beyond,
like
some
threshold,
that's
configured
and
long.
One
added
that
capability
later
on,
which
was
to
purge
those
snapshots
via
the
ui.
B
So,
just
from
a
just
from
a
roadmap
point
of
view,
I
believe
if
I,
if
I,
if
I
recall
around
the
sandbox
time
c
store,
was,
was
sort
of
the
the
primary
engine
where
most
of
the
development
was
happening.
Correct
is,
is.
A
B
Still
the
case
or
or
or
are
you
moving
to
the
new
maya
store
engine
then.
C: When we started the project, we thought that there might be one engine that would be suitable for all workloads, but we soon realized that's not going to be the case; depending on the capabilities of the storage node and the application demands, there was a need for different types of engines. For example, there is the local PV that we mentioned here — this, I think, was introduced around the same time as the sandbox time frame.
C: For a lot of workloads, just carving out a block device from the local storage that's available — typically discovered by NDM — was itself sufficient, without adding either cStor or Jiva, and that local PV itself has taken on its own journey based on the feedback from the users.
C: Now we support four variants of local PV in OpenEBS. Based on this feedback, we are continuing to support all the engines as the OpenEBS community at this point, but restricting the use cases for which each of them is suitable. For example, Jiva is mainly suitable for lightweight workloads — we have built-in Arm support for it as well — when there are no additional block devices. I should get into that a little bit here; actually, the last line here covers it.
C: When there are no external hard drives or extra devices available on the node, and you need replication capability, that's when Jiva is preferable. But if you have hard drives or SSDs, or you want to be able to expand on the fly, then cStor is preferred; cStor also has inbuilt capabilities around instantaneous snapshots and clones, which we don't plan to add to Jiva, for example. And Mayastor was mainly intended for performance reasons.
C
We
couldn't
drive
up
the
performance
to
a
large
extent
because
of
the
way
we
were
dependent
on
the
zfs
technology
there.
So
we
started
working
on
maestro
that
actually
solves
some
of
those
bottlenecks.
It's
inspired
from
the
work
that
we
have
done
prior
on
c
store.
Jira
and
my
store
is
the
new
engine
for
hardware
where
nvme
devices
are
available
and
you
have
a
lot
of
cpu
power
and
you
can
drive
up
the
performance.
C
Yeah,
so
the
quick
answer
to
that
also
is
we
continued.
We
planned
to
support
all
the
engines.
I
have
like
a
roadmap
where
I
kind
of
mention
what
we
are
planning
to
do
on
those
aspects
as
well.
C
Another
interesting
use
case
that
we
continue
to
see
is
like
how
many
people
ask
for
read,
write
many
support
and
we
support
that
via
the
nfs,
and
it
can
pretty
much
work
with
all
the
block
storage
options
that
we
have
today
and
we'll
see
in
the
doctor's.
In
fact,
like
cncf
itself
uses
the
local
pv
plus
the
nfs
to
run
most
of
the
dev
stats.
C
Portals
so
any
questions
before
I
move
on
to
the
history
or
like
the
current
state
of
local
previous,
that
we
have.
B
With
the
with
the
local
with
the
local
pvs,
could
you
maybe
spend
two
minutes
to
sort
of
differentiate
how
this
is
different
from
say?
I
don't
know
the
local
parts
type
things
which
are
kind
of
native
to
kubernetes,
for
example,
right.
C: The persistent volume source where we specify the word local — that's the same thing that's used in almost all the local PVs. That source, which forces you to specify a path and stick it with a node affinity, is the PV spec that we use. But how do you actually give the path to that? Do you use a static device, like the Kubernetes static provisioner, which goes through a list of mounted devices and then creates the PVs for them, and then the Kubernetes storage class and the scheduler take care of scheduling the applications onto them? That's the core functionality. But on top of that, how do you provision the storage for these local PVs? That's where I think OpenEBS has helped, and I also notice that this area is growing.
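For reference, the underlying Kubernetes spec being discussed is a PV with the `local` source pinned to a node via `nodeAffinity`; the path and node name below are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1                 # placeholder device/mount path
  nodeAffinity:                           # pins the PV to the node that owns the disk
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-node-1           # placeholder node name
```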
C: I saw a KubeCon talk on TopoLVM, built on top of LVM, which is doing something similar. So these projects — the OpenEBS local PVs, as well as TopoLVM or Rancher's hostpath provisioner — are built around that core concept, but how you make it easier for users to manage that local storage is where the differentiation comes in. There are three things that I have listed here; there's actually one more, so there are four.
C
I
will
add
it,
probably
by
next
time
we
talk
about
this.
The
main
intent
of
using
or
like
you
know
I
was
going
towards
local
pv-
was
the
capability
that
we
had
already
with
ndm.
That
would
discover
the
log
devices
and
the
partitions
that
are
attached
to
the
node,
so
you
can
dynamically
claim
a
device
instead
of
using
the
static
propeller.
C
That's
where
we
started
off,
but
then
we
soon
found
out
that
there
are
nodes
where
devices
are
not
readily
available
and
people
want
to
use
like
some
kind
of
a
host
path
or
a
directory
by
creating
a
new
directory
within
that.
That's
where,
like
global
people,
host
path
came
in
the
provisionals
for
these
two
are
same
and
they
are
based
out
on
the
external
provisioner.
We
are
moving
them
to
csi
based
things.
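A hedged sketch of the dynamic hostpath flavour described here, assuming the openebs.io annotation conventions; the base path is a placeholder:

```yaml
# A per-PV subdirectory is created under BasePath on the node chosen by the scheduler.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-hostpath-example
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```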
C
Zfs
local
pv
was
kind
of
an
interesting
ask
from
the
community
where
they
liked
the
concept
of
local
pvs,
but
they
also
wanted
resiliency.
On
top
of
the,
you
know,
basically
protect
against
the
disk
failures,
so
you
create
g
pool
and
then
each
local
pv
is
actually
backed
by
a
z
wall
or
a
cfs
data
set.
This
is
this.
One
is
mostly
compatible
with
the
topo
lvm
project
that
was
presented
at
coupon.
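A hedged sketch of that flavour, assuming the zfs-localpv CSI driver names; the zpool name is a placeholder that must already exist on each node:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-example
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # pre-created zpool on the node
  fstype: "zfs"            # each PV is backed by a ZFS dataset (or a zvol for ext4/xfs)
```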
C
That
makes
sense,
and
the
one
that
we
are
adding
now
is
called
raw
file,
which
is
the
local
preview
host
path
or
like
the
device
or
kubernetes
static.
Local
pv.
All
of
them
have
one
limitation
in
terms
of
enforcing
the
quota
on
the
device
so
typically
like
the
volumes
can
grow
beyond
the
capacity
that's
actually
given
to
them.
Applications
can
write
beyond
that.
C
Cfs
local
pb
can
restrict
that
because
it
has
a
boundary
management
via
zfs
and
lvm
with
lvm
the
new
one
that
we
are
adding
is
based
on
the
host
path,
where
we
put
up
a
sparse
file.
That
kind
of
helps
you
to
contain
the
capacity
used
by
the
application.
C: NDM itself does not create a PV; it exposes block device resources. When you want a local PV or a cStor engine to be created on those devices, the cStor operators or the local PV operators can request a block device — similar to the PVC concept — via a BlockDeviceClaim, and the NDM operator will pick up whichever block device is available on the node and bind it to the claim, so the higher-level operators can take that block device and consume it.
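A hedged sketch of that request path — an operator claiming a disk from NDM with a BlockDeviceClaim, analogous to a PVC; the field values are placeholders:

```yaml
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  name: example-bdc
  namespace: openebs
spec:
  resources:
    requests:
      storage: 50Gi          # ask for any unclaimed block device of at least this size
  hostName: worker-node-1    # optionally pin the claim to a node (placeholder name)
```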
C
Ndm
itself
is
independent.
It
is
not.
You
know
it
can
be
actually
used
for
other
purposes
as
well
for
other
operators
it
need
not
be
tied
or,
like
you
know,
it's
not
it's
independent
of
the
open
abs
components.
That's
what
I'm
going
to
say.
It
has
its.
B
Own
interests
just
to
add
to
that
I
think.
E: Right. About local PVs — excuse me, I'm not that familiar with some of these other technologies — is there any data management? Is a local PV by definition empty when you create it, and is all the data deleted when it goes away, or do you have some mechanism for restoring data into a local PV if it's not already there?
C
Yeah
so
the
way
it
is
today,
it
is
an
empty
device
or
a
directory
that
you
give
to
an
application,
and
once
the
application
deletes
that
pv,
it
comes
back
to
ndm
operator
and
indium
can
take
care
of
deleting
that
data,
and
it
actually
marks
it
as
released
and
does
the
deletion
in
the
background
and
makes
it
as
unclaimed
once
that
cleanup
process
is
completed.
So
it
can
be
reused
by
some
other
application
in
terms
of
data
services.
C
If
we
want
to
put
some
data
into
the
local
pv
before
it
is
given
to
the
application,
one
approach
could
be
to
use
the
volume
data
source
mechanism.
We
have
not
done
that,
yet
that's
definitely
on
something
that
we've
been
thinking
about
and
the
other
data
service
that
we
provide
with
local
pvs.
That
again,
like
was
a
community
ask
us
to
make
sure
like
we
take
backups
out
of
this.
So
this
is
a
you
know.
We
again
use
the
vanderholistic
based
backups
for
local
pvs
as
well
in
case
of
gfs
local
pb.
C
We
can
be
a
little
bit
more
smart
because
we
can
take
incremental
snapshots
and
send
that
to
the
backups.
These
are
the
two
primary
services
that
we
offer:
data
services
and
zfs
local
pv,
also
as
a
potential
we
actually
experimented
with
it.
We
can
provide
encryption
support
on
top
of
service
local
pv
as
well
encryption
at
rest.
C: Okay, so this is what has been keeping us busy. This is just a quick snapshot of the things that we have done, at a very high level. We can dig into each of these for further details in the upcoming calls, or if there is something quick I can probably answer it now.
C: NDM was at version 0.3, I think, when we went into the sandbox, and a lot of enhancements have gone in around it. There are all sorts of clusters where we were deploying NDM, and we learned a lot in terms of how block devices are actually attached to Kubernetes nodes, and the different variations in cloud environments with respect to how block devices are represented. All those things were factored into NDM, and now we are able to successfully detect almost all types of block devices.
C
The
most
challenging
was
around
the
virtual
devices
and
partitions.
We
could
detect
it,
but
because
there
was
no
unique
identifier
like
serial
number
or
wwe,
and
things
like
that
and
no
reboots
things
would
change
in
terms
of
path
and
it
would
become
a
little
difficult
to
know
what
was
the
previous
name
to
that.
One
so
kind
of
adding
support
to
handle
those
kind
of
situations
is
the
main
thing
that
that
I
want
to
call
out
as
part
of
ndm.
C
Just
like
each
of
these
engines.
India
is
its
own
project
and
it
has
its
own
maintainer
reviewer
and
a
lot
of
interest
in
terms
of
adding
features
to
that
right.
Now
there
are
a
couple
of
alpha
features
that
have
gotten
in
into
ndm
as
well,
which
we
want
to
work
going
forward
or
in
terms
of
metric
support
at
the
block
level.
Things
like
smart
metrics
on
the
block
devices.
C
How
do
we
capture
and
get
them
out
and
put
alerts,
for
example
like
temperature,
or
it
could
be
like
spot
smart
characteristics
like
error
rates
and
all
those
things
if
the
device
supports?
How
do
we
get
that
those
things
are
in
progress,
and
this
also,
it
is
controlled
by
the
kubernetes
crs
right
now,
but
there
have
been
some
asks
about
ability
to
control
via
grpc
api.
C
So
we
are
adding
an
grpc
api
layer
also
to
ndm
to
list
and
discover
and
perform
some
operations
on
top
of
indium
jiva
was,
I
think,
longhorn
since
we
took
from
longhorn
it
was,
and
we
had
tested
it
almost
for
like
two
years
before
we
called
it
stable
and
a
lot
of
users
had
actually
already
started
using
it,
a
few
things
that
were
causing
the
problems
for
the
users
around
jiva,
where
in
terms
of
volumes
going
into
read-only,
especially
when
the
capacity
was
growing
and
also
if
you
kind
of
create
a
5gb
volume
and
the
application
is
such
that
it
keeps
on
reading
and
writing
the
same
blocks.
C
The
snapshotting
technology
used
to
maintain
high
availability
would
end
up
creating
a
lot
of
internal
snapshots
that
would
end
up
consuming
a
lot
of
space
so
as
users
started
using
it
for
like
one
year
and
more
than
one
year,
they
started
complaining
about.
5Gb
volume
on
a
replica
is
taking
like
three
times
the
capacity
and
all
that
so
that's
where,
like
we
had
like
some
initial
work,
done
on
cli
work
to
reclaim
the
space,
but
we
automated
that.
C
So
that's
gone
as
part
of
2.0
csi
driver
is
definitely
where
we
want
to
move
for
this
engine
as
well,
and
that's
available
in
alpha.
At
this
moment,
one
of
the
primary
drivers
for
us
to
go
towards
csi
driver,
though
we
don't
support
all
the
other
capabilities
like
snapshots
and
clones,
is
the
ability
to
remount
the
volumes
when
they
become
available
right
now.
C
The
iscsi
pvs
kind
of
need
some
kind
of
a
manual
intervention
from
the
user
if
they
get
into
read
only
to
remount
them,
so
that
additional
capability
we
are
adding
as
part
of
jiva
css
drivers,
c-store
was
in
early
beta.
When
we
introduced
to
sandbox,
we
continued
to
keep
that
in
beta
for
a
very
long
time.
C
The
initial
users
that
started
using
started
giving
us
a
lot
of
feedback
in
terms
of
data
operations
that
they
want
to
perform
on
top
of
c
store.
The
version
one
or,
like
you
know
we
even
alpha
one
spec
that
we
went
with,
was
becoming
very
difficult
to
kind
of
support,
all
those
data
operations.
C
So
we
went
back
and
changed
the
schema
a
little
bit
based
on
how
users
wanted
to
actually
use
that,
in
fact,
like
we
didn't
have
some
elements
in
the
aml,
we
got
the
ammos
and
found
that
users
were
like
putting
some
comments
in
the
ml
for
creating
the
c
store
pools.
So
we
took
those
comments
and
converted
into
a
new
spec,
so
the
new
c
store
schema
is
up
for
beta.
Now
again
and
this
time
we
went
directly
with
csi
driver
for
this.
C
So
most
of
the
things
that
we
started
off,
we
came
up
with
the
new
engines
that
we
supported.
We
are
going
with
csi
drivers
for
the
older
engines.
We
have
started
the
work
on
supporting
the
csi
drivers
as
well
for
local
pv,
host
path
and
device.
There
were
few
configuration
asks,
for
example,
like
especially
like
the
device
based
one
if
there
are
nvme
ssds
and
local
ssds
available
on
a
node.
C: How do we tag these block devices as NVMe or SSD and use that information to put, let's say, MongoDB on NVMe SSDs and some other application on SAS SSDs? Those kinds of fixes went in. We are also in the process of adding capacity-based scheduling — we did that for ZFS local PV, and want it for hostpath and device as well — but then we also saw that there is a Kubernetes enhancement around that, so we are trying to see how to integrate with that work.
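A hedged sketch of the tagging flow described here: label a discovered block device with a tag, then have a device-based local PV storage class select on that tag. The label key and the BlockDeviceTag config name follow the OpenEBS conventions as I recall them; treat them as assumptions to verify for your version:

```yaml
# First tag a discovered block device (hypothetical blockdevice name):
#   kubectl label blockdevice -n openebs blockdevice-abc123 openebs.io/block-device-tag=nvme
# Then select on that tag from a device-based StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-device-nvme
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: "device"
      - name: BlockDeviceTag
        value: "nvme"
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
```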
C: Mayastor is the new one, which we basically started from scratch again based on our experiences with the older engines. It is a new engine written in Rust, and it also supports a CSI driver. In terms of progress, we just released 0.3 and it's in early alpha, where the replication capability is built in — basically, node-level HA is supported.
C
We
plan
to
add
snapshot
and
clone
capabilities
also
into
my
store
already
spoke
about
raw
file,
local
pv,
the
other
important
thing
that
we
did
was
around
the
upgrades.
There
was
post
sandbox.
We
support
now
seamless
upgrades,
actually
they
can
be
automated
by
via
kubernetes
jobs.
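A heavily hedged sketch of what such an automated upgrade Job can look like; the image, arguments, service account, and PV name below are illustrative assumptions rather than the exact OpenEBS manifest:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: jiva-volume-upgrade
  namespace: openebs
spec:
  backoffLimit: 4
  template:
    spec:
      restartPolicy: OnFailure
      serviceAccountName: openebs-maya-operator      # assumed service account
      containers:
        - name: upgrade
          image: openebs/upgrade:2.0.0               # assumed upgrade image/tag
          args:
            - "jiva-volume"
            - "--from-version=1.12.0"
            - "--to-version=2.0.0"
            - "pvc-0123abcd"                         # hypothetical PV name to upgrade
```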
C: People really like this feature compared to many of the others; I think it addressed one of the top pain points for users. A lot of the community has come together to help us with building OpenEBS for Arm and PowerPC. We still mark those as alpha because we have not enabled e2e pipelines for them. Other work that we have done as part of 2.0 is open sourcing the e2e pipelines.
C
The
e2e
pipelines
on
you
know,
for
example,
like
gke
platform,
aws
platform
and
bare
metal
vmware
run
the
open
abs
every
build
and
every
release
on
different
platforms,
and
one
of
the
spin-off
of
open
abs
was
litmus
that
helps
us
to
stabilize
or
test
the
resiliency.
By
introducing
chaos,
and,
in
fact,
like
open
ebs
was
the
first
chaos
the
chaos
thing
came
from.
How
do
we
test
this
storage?
C
Since
we
are
now
distributed
and
it's
running
in
containers?
How
can
we
kill
all
of
this
and
make
sure
data?
Consistency
is
maintained.
That's
how
we
have
started
that
project.
I
think
it's
taken
its
own
wings
right
now,
but
a
lot
of
improvements
in
terms
of
edu
also
have
common.
C
B: Hey, could you just talk a little bit about the ZFS dependencies? Are those an external repo, or are they part of the IP that's being added to the CNCF — and if so, how are they handled?
C: Yeah, so for ZFS local PV it's definitely outside; just like any other prerequisite, you set up ZFS outside of it.
C: Now, when it comes to cStor, I've actually had some conversations with Chris on this in terms of licensing and all that. The core ZFS itself is separated out; it's in a separate repository, so it can be pulled in as a library. There have been some discussions that maybe it can be maintained as part of a separate organization and not be part of the CNCF.
C
I
think
we'll
work
that
out
alex
and
that's
the
only
I
think
like
we
have
run
the
fossa
scan
on
this
and
that's
the
only
thing
that
has
a
cddl
licensing.
The
rest
of
the
things
are
something
that
are
compatible
or
like
are
actually
apache
itself.
E
So,
just
to
clarify,
it
seems
like
some
of
these
external
code
bases
are
forked
and
some
of
them
we
just
incorporate
directly.
Is
that
true,
yes,
and
I
think
it's
going
to
be
very
important
to
make
very
clear
which
are
forks
and
which
are
just
external
dependencies,
some
of
which
are
optional?
I
assume,
and
so,
if
you
don't
use
the
zfs
stuff,
then
you
don't
incorporate
that
repo
and
then
you
don't
inherit
any
licensing
complications.
But
if
you
do,
you
need
to
be
aware
that
you're
pulling
in
a
third-party
component
exactly
is
that
made?
A
C
B: Hey, just a quick one though. The components that are dependent on ZFS — like cStor and the Local PV ZFS, for example — are they usable without those ZFS components?
C
So
see
c
store
kind
of
pulls
in
the
dependencies.
This
system
part
when
you
run
it,
it
has
to
have
the
it
actually
has
the
code
that
is
user
space,
cfs,
which
is
like
a
modified
version
of
cfs,
that's,
probably
where
we
need
to
have
further
discussions
and
see
how
to
deal
with
that.
One
in
case
of
local
pvc
fs
local
pbcfa
started
with
supporting
the
csi
driver
and
additional
options
in
terms
of
capacity
scheduling
for
local
pvs.
C
So
zfs
is
one
of
the
storage
options
that
you
can
configure.
You
could
easily
change
that
to
like
you
know,
I
don't
want
to
create
a
z
wall,
but
I
can
create
a
directory
there,
just
like
a
host
port
directory,
or
I
want
to
actually
use
a
ndm
block
device
to
use
that
so
the
code
that
is
there
in
the
local
pvc
office
is
revisible
for
other
local
pv
options
that
you
can
support.
C
You
see,
storage,
accord
dependency.
Yes,
it's
a
yep.
B
Okay,
I
I'm
not
an
expert,
but
I
think
we
we
might
need
to
we
need.
We
might
need
to
look
at
ask
the
cncf
for
some
legal
advice
here
given,
given
that
you
know
the
the
ip
is
not
actually
the
ip
that
the
project
is
dependent
on,
isn't
actually
part
of
the
project.
C
Right,
I
will
go
ahead
with
that
conversations
and
probably
like
to
the
needful
there
alex
will
pull
you
in,
I
think,
should
be
part
of
due
diligence
as
well.
C
Yeah
this
one
just
gives
a
snapshot
on
areas
of
improvement
or,
like
you
know,
things
that
we
are
already
planning
to
do
as
part
of
the
upcoming
two
dot
x
releases.
We
have
a
monthly
cadence,
so
some
of
these
things
will
come
in
this
month
next
month.
You
know
it
kind
of
goes
like
that.
C: The features are again picked up based on the monthly product review meetings that we have with all the maintainers; it's open for end users to chip in, and based on the GitHub issues that are raised by the end users, we pick these up. This is just a plan. A few other things that we definitely want to improve on are the end-user documentation and the website in general.
C
We
have
done
very
little
in
terms
of
open
abs
advocacy
in
terms
of
you
know.
Whenever
we
now
talk
at
cube
corner
somewhere,
people
come
and
tell
us
that
they've
already
used
open
abs
or
they've
integrated,
like
gravitational
already
uses
opennvs
that
we
get
to
know
when
a
community
user
comes
and
asks.
So
we
need
to
do
some
active
work
there
and
contributors
on
boarding.
C
Just
to
kind
of
briefly
list
the
different
adopters
we
again
like
started
off
like
this
exercise.
Like
you
know,
once
we
started
thinking
about
incubation,
we
opened
up
this
adopters.md
file
and
created
an
issue
where
people
can
comment.
So
there
are
around
25
users
that
have
commented
how
they
use
open,
eps
they're
all
listed
on
the
github
just
to
highlight
some
of
them.
C
It
is
one
of
the
oldest
and
actually
like
one
of
the
users
that
influenced
us
to
continue
to
improve
on
the
jiva
space
reclamation
problem
arista,
and
this
is
their
story.
I
think
they
were
running
on
premise
and
they
really
liked
open
abs
in
because
of
its
ease
of
use
and
simplicity,
and
it
enabled
them
to
move
towards
kubernetes
faster.
That's
their.
C: This is another one that we recently got to know about via Slack. We are now part of the Kubernetes Slack, where the openebs channel is where we hang out, and we got to know this user was already using OpenEBS even at the beta stage, 0.7, has been upgrading, and is currently at version 1.10.
C: This is a use case where the local PV options get highlighted. This is definitely a pattern that we see: workloads are becoming more distributed in nature, and local PV is on the rise — demand for it is just increasing.
A
So
the
licensing
when
we,
when
we
went
over
this
in
sandbox
there
was
some
concern
over
it
being
compatible
with
the
required
meant
for
cncf
is
that
I
mean
I
still
think
we
need
to
possibly
work
through
that.
You
know.
We've
already
talked
about
the
dependencies
as
well,
but.
A
E
Yeah
just
to
be
clear,
I
think
this
was
very
clearly
highlighted
years
ago
when
the
project
entered
sandbox.
So
I
personally
am
a
little
surprised
that
that
it
hasn't
been
rectified
yet,
and
I
it
sounds
like
if
it
hasn't
been
done
in
two
years.
It's
not
going
to
be
done
anytime
soon.
I
mean
to
be
to
be
perfectly
blunt.
I
can't
see
this
going
into
incubation
anytime
soon,
either,
because
it's
it's
it's
going
to
be
a
hard
requirement
right.
C
So,
just
to
I
think,
my
apprehension
in
answering
that
straight
was-
and
we
need
like
more-
is
mainly
from
the
legal
expertise,
but
we
did
something
quintin
after
the
feedback
where,
based
on
the
suggestion
from
the
cncf
legal
team,
we
split
up
the
code
base
to
c
store
and
lipsy
store
and
dip
c
store
is
what
was
part
of
the
cnc
of
donation
and
not
the
c
stock
component.
C
But
I
do
wanted
to
go
through
a
little
bit
more
of
rigorous
lesson
scanning
thing
and
I've
been
having
that
conversations
with
chris,
where
we
can
come
up
with
some
solution.
It's
what
he
told.
So
we
will
work
that
out.
E
But
it
it
and
we're
probably
out
of
time
now,
but
it
might
be
worth.
I
mean,
there's
always
a
lot
of
detail
in
these
due
diligences
and
especially
around
licensing,
but
the
high
level
requirements
are
actually
fairly
simple
and
pretty
straightforward
to
understand,
and
maybe
I
can
just
summarize
them
in
30
seconds
now.
The
basic
requirement
is
that
the
project
should
be
usable
without,
depending
on
anything
which
has
a
license,
which
is
incompatible
with
the
cncf
open
source
licenses
and
then
any
optional
components
that
do
not
fall
into
that
group.
I.E.
E
Potential
optional
dependencies
that
have
incompatible
licenses
should
be
very
clearly
called
out,
so
that
people
can,
you
know,
explicitly
adopt
them
once
they
decide
that
they're
comfortable
with
the
licensing
restrictions
that
they
that
they're
adopting
and
it
doesn't
sound
like
you
know,
and
then
there's
a
ton
of
detail
underneath
that
to
to
make
sure
that
all
that
that
actually
checks
out.
But
it
sounds
like
we're-
we're
if
I
understand
correctly
we're
very
fairly
far
away
from
even
that
basic
requirement.
C
Yeah,
I
think
notice,
dot
md
has
been
added
with
that
one,
but
I'm
with
you
with
you
all
actually
in
this
regard,
just
as
a
cncf
member
and
contributor,
but
we
need
to
get
this
done
and
we
have
to
do
it
the
right
way.
C
So
I'm
not
biased
towards
answering
that
question.
We
will
I'll
follow
up
on
this
question.
Quentin.
I
think
that's
the
we've
definitely
done,
but
I
want
some
legal
answer
to
be
clear.
You
know
these
are
like
don't
want
to
have
any
grey
answer
around
that.
That's
the
only
reason
I'm
hesitating
to
continue
on
that
yeah.
It
makes
sense.
B: Yeah, it makes sense. We need to get that step out of the way, only because, with the emphasis on — as Quinton said — making the dependencies available, it's kind of hard to move a project or a repo into incubation if any of the dependencies aren't compatible. So we just need to sort that out as step one at this stage.
B: Cool. All right, thank you everyone, and thanks, Kiran, for the presentation. We're looking forward to working together on this. We're just a little bit over time now, so unless anybody has anything urgent, I think we'll call this meeting to an end.