From YouTube: CNCF SIG Storage 2020-06-24
A: Okay, so we have two items on the agenda today. Our first topic is the Pravega project. Oh, hi, Louise. Okay, so Pravega presented to us a few meetings ago; I've put the YouTube recording in the list, because Pravega presented to the SIG before they actually made an incubation proposal. My personal recommendation at this stage is that we should recommend this move to the TOC, asking for the SIG leads, or any of the other SIG members, to weigh in on the Pravega project. I think the proposal that they have put together is particularly strong, so unless we have any specific objections, I'd recommend that we move this to the TOC.
A: What is going to take the majority of the rest of the call is the presentation of the Piraeus Datastore; I'm hoping I'm pronouncing that right. This is a Kubernetes cloud native storage project that builds on the DRBD project, and I believe Philipp is on the call and he's going to be presenting.

B: Hello.
B: So this is why I have them here on a slide. The first building block is LVM, and I'm pretty convinced all of you know what LVM does: it combines physical volumes into a volume group, and out of that we get LVs and snapshots. It has been around for what must be like 30 years now. Then, a few years later, it got the capability to manage thinly allocated logical volumes, and that is the thin pool driver.
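As a rough illustration of the LVM workflow described here, the sketch below shows physical volumes going into a volume group and then a classic LV, a thin pool, a thin LV, and a snapshot. Device names, LV names, and sizes are hypothetical; it assumes the standard LVM CLI tools are installed and you run it as root.

    # Sketch: LVM building block (physical volumes -> volume group -> LVs)
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("pvcreate", "/dev/sdb")
    run("pvcreate", "/dev/sdc")
    run("vgcreate", "vg0", "/dev/sdb", "/dev/sdc")           # physical volumes into a volume group
    run("lvcreate", "-L", "10G", "-n", "lv_classic", "vg0")  # a plain logical volume
    run("lvcreate", "-L", "50G", "--thinpool", "pool0", "vg0")            # thin pool
    run("lvcreate", "-V", "20G", "--thin", "-n", "lv_thin", "vg0/pool0")  # thinly allocated LV
    run("lvcreate", "-s", "-n", "lv_thin_snap", "vg0/lv_thin")            # snapshot of the thin LV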
B: The thin pool driver also helps if you take multiple snapshots from a single origin. Then what else is there? There is Linux RAID: the software RAID (md) provides all the RAID levels, from striping and mirroring to RAID 5, 6 and so on, and these days it even has an LVM front end, but it goes to the same back-end implementation in the kernel.
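For reference, a minimal sketch of creating a Linux software RAID mirror with mdadm; the member devices and the array name are hypothetical.

    # Sketch: md software RAID (RAID-1 mirror)
    import subprocess

    subprocess.run(
        ["mdadm", "--create", "/dev/md0", "--level=1", "--raid-devices=2",
         "/dev/sdd", "/dev/sde"],
        check=True,
    )
    subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)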
B: Then there are a number of implementations to use two block storage tiers where one is a caching layer for the other one. The two major ones in the mainstream kernel are dm-cache and bcache. They both serve about the same purpose, and there is a third one, called dm-writecache. dm-writecache was built with the purpose of putting PMEM in front of, let's say, NVMe drives.
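As a hedged sketch of the caching-tier idea, the following uses LVM's front end for dm-cache to attach a fast device as a cache in front of a slow LV. The exact flags vary between LVM versions, and the device and LV names are hypothetical.

    # Sketch: dm-cache via LVM (fast device caching a slow LV)
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("lvcreate", "--type", "cache-pool", "-L", "10G", "-n", "cpool", "vg0", "/dev/nvme0n1")
    run("lvconvert", "--type", "cache", "--cachepool", "vg0/cpool", "vg0/lv_slow")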
B: There is also deduplication on a few kernels, RHEL 7.5 and later, and also CentOS 7.5 and later kernels. Where is that coming from? It is called VDO, the Virtual Data Optimizer, and it comes from an acquisition by Red Hat. Maybe one day they will manage to bring it to the upstream kernel; right now this is a RHEL or CentOS technology.
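As a rough illustration, creating such a deduplicated volume with the vdo manager tool might look like the sketch below; the backing device and the logical size are hypothetical, and the exact flags depend on the vdo version the distribution ships.

    # Sketch: VDO deduplicated/compressed volume (RHEL/CentOS 7.5+)
    import subprocess

    subprocess.run(
        ["vdo", "create", "--name=vdo0", "--device=/dev/sdf", "--vdoLogicalSize=1T"],
        check=True,
    )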
B: Then there are all sorts of targets and initiators. These days the one you really want to use is LIO, which provides iSCSI and all its relatives, and the new kid on the block is NVMe over Fabrics; we have a target and an initiator implementation in the upstream kernel, and they are also arriving in the recent distributions.
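For a small sense of the initiator side of NVMe over Fabrics, the sketch below uses nvme-cli; the transport, target address, and NQN are hypothetical placeholders.

    # Sketch: NVMe over Fabrics initiator (nvme-cli)
    import subprocess

    target = ["-t", "tcp", "-a", "10.0.0.10", "-s", "4420"]
    subprocess.run(["nvme", "discover", *target], check=True)
    subprocess.run(["nvme", "connect", *target, "-n", "nqn.2020-06.example:subsys0"], check=True)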
B: Then there is ZFS. It has a complete replacement for LVM built in that is capable of doing thin provisioning, it has its own spin on the RAID idea called RAID-Z, and it also brings caching to use SSDs as a cache for slower storage tiers. As far as I am concerned, I will only look at the volume management aspects of it; I don't care about the file system aspect of it here. And then, here at LINBIT, we are very much focused on DRBD.
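A minimal sketch of the ZFS volume-management side referred to here: a RAID-Z pool plus a zvol (a block device rather than a file system). The pool name, member devices, and size are hypothetical.

    # Sketch: ZFS pool with RAID-Z and a zvol block device
    import subprocess

    subprocess.run(["zpool", "create", "tank", "raidz", "/dev/sdg", "/dev/sdh", "/dev/sdi"], check=True)
    subprocess.run(["zfs", "create", "-V", "20G", "tank/vol0"], check=True)  # appears as /dev/zvol/tank/vol0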
B: DRBD is implemented as a device driver, a virtual device driver for Linux. It provides you a block device, a /dev/drbd-something here and a /dev/drbd-something there, and the moment you open it or you mount it, it promotes to primary and starts to replicate everything you write to it. The moment you unmount it, it demotes to secondary, and you are free to mount it on the other side; the direction of the replication will be reversed in that moment.
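A hedged sketch of that promote/replicate/demote cycle with drbdadm against an already configured resource; the resource name "r0" and the mount point are hypothetical, and with DRBD 9's auto-promote the explicit primary/secondary steps happen implicitly on open/close, as described above.

    # Sketch: manual DRBD promote/demote cycle
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("drbdadm", "up", "r0")          # attach the backing disk and connect to peers
    run("drbdadm", "primary", "r0")     # promote: this node may write, writes replicate to peers
    run("mount", "/dev/drbd0", "/mnt")  # use the replicated block device
    run("umount", "/mnt")
    run("drbdadm", "secondary", "r0")   # demote: the other node may now become primary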
B: Let's say you have an Oracle database, and you have your database logs on the fast NVMe drive and your table spaces on the slow drive. If you mirror the two volumes concurrently within a consistency group, then it makes sure that the writes are never reordered and the two volumes are always at, let's call it, the same logical point in time on the replication target.
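A sketch of what that consistency-group idea can look like in a DRBD resource file: two volumes (a log LV on NVMe and a tablespace LV on a slow drive) replicated inside one resource, so their writes stay ordered together. Node names, addresses, and LV paths are hypothetical.

    # Sketch: one DRBD resource with two volumes (consistency group)
    res = """
    resource oracle {
        volume 0 { device /dev/drbd10; disk /dev/vg_nvme/oralog; meta-disk internal; }
        volume 1 { device /dev/drbd11; disk /dev/vg_hdd/oradata; meta-disk internal; }
        on node-a { address 10.0.0.1:7789; node-id 0; }
        on node-b { address 10.0.0.2:7789; node-id 1; }
        connection-mesh { hosts node-a node-b; }
    }
    """
    with open("/etc/drbd.d/oracle.res", "w") as f:
        f.write(res)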
B: It also supports a diskless mode, and that means the primary, that is the node where you are accessing your data, doesn't need to have a local replica of the data; it is also capable of shipping the read requests. In this case, where it has two nodes that actually have copies of the data, it will do a kind of load-balancing scheme between the two nodes for read requests.
B: For write requests, it sends the write request to both nodes concurrently. The application running here on the primary is, of course, shielded from any failures. So let's say this secondary goes away while a read request was being processed on it, and now it crashes; then the primary will reissue the read request to the other node and deliver the data to the application. So the application is shielded from this error.
B: Yeah, all of that comes from a background of building high-availability systems, and we have been working on that for nearly 20 years now. What we did recently is optimize it for the case where our metadata is located on PMEM or NVDIMM media, because then we have the luxury that we can update the metadata in smaller units than full blocks; cache-line granularity is what you get then. And on the roadmap we are planning to look into erasure coding, but that is still very, very much work in progress.
B: Okay, so far I told you about all these storage building blocks on Linux, including DRBD, and they can be combined as you need them. You can use logical volumes from LVM as backing devices for DRBD, you can put VDO deduplication below the LVM, or you can slip a dm-crypt encryption layer between DRBD and LVM, and so on. So you can combine them on the data plane as you need it.
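A hedged illustration of slipping dm-crypt into such a stack: encrypt an LV with cryptsetup and then point the DRBD resource's backing disk at the opened mapping. The LV name, mapping name, and the passphrase handling are simplified, hypothetical placeholders.

    # Sketch: dm-crypt layer between LVM and DRBD
    import subprocess

    def run(*cmd, **kw):
        subprocess.run(cmd, check=True, **kw)

    run("cryptsetup", "-q", "luksFormat", "/dev/vg0/lv_backing", input=b"secret\n")
    run("cryptsetup", "open", "/dev/vg0/lv_backing", "lv_backing_crypt", input=b"secret\n")
    # In the DRBD resource file, the backing "disk" would then be
    # /dev/mapper/lv_backing_crypt instead of the plain LV.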
B: That brings us to LINSTOR. It is a distributed application you run on a bunch of generic nodes. Their only requirement is that these nodes run the Linux kernel, and it can then fulfill your volume requests: you tell it, I need a new volume, it should be replicated, it should be that size, and I also give it a name, and it can do that for you.
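A hedged sketch of asking LINSTOR for exactly that kind of replicated volume with its CLI; the node names, storage pool, volume group, and sizes are hypothetical examples, and the exact subcommand spellings follow the LINSTOR user guide.

    # Sketch: "I need a new replicated volume of that size, with this name"
    import subprocess

    def linstor(*args):
        subprocess.run(["linstor", *args], check=True)

    linstor("node", "create", "node-a", "10.0.0.1")
    linstor("node", "create", "node-b", "10.0.0.2")
    linstor("storage-pool", "create", "lvm", "node-a", "pool0", "vg0")
    linstor("storage-pool", "create", "lvm", "node-b", "pool0", "vg0")
    linstor("resource-definition", "create", "demo")
    linstor("volume-definition", "create", "demo", "20G")
    linstor("resource", "create", "demo", "--auto-place", "2")  # two replicas, placed automatically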
B: To the user it exposes a REST API, and on top of that REST API we have built various connectors. One is for the Kubernetes world, and I will put a focus on that, and we also have connectors for OpenStack, OpenNebula, Proxmox, and XCP-ng in the works. Then maybe let's look at an example in a hyper-converged architecture.
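A small, hedged example of talking to that REST API directly, assuming the controller listens on the default plain-HTTP port 3370 and exposes a /v1/nodes endpoint; the controller address is hypothetical.

    # Sketch: listing nodes via the LINSTOR controller REST API
    import json
    import urllib.request

    with urllib.request.urlopen("http://10.0.0.1:3370/v1/nodes") as resp:
        for node in json.load(resp):
            print(node["name"])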
B: So right now, for the orange one and the black one on the slide, this layout is in the optimal state, and that gives us that all the read requests can be carried out locally, not touching the network at all, giving faster performance and reducing load on the network. Only for writes do we need to send something over the net. Now, in case a VM gets live-migrated or a pod is moved:
B: So now all the reads are shipped over the network. But it takes a single command, or a time-triggered policy, and given we have enough available storage space here, LINSTOR can allocate a new logical volume here and add it to the DRBD config; DRBD will start to copy over all the blocks and resync everything, and when that is finished, LINSTOR will remove the now redundant third copy, redundant in the sense of the policy we are using here in this example.
B: The LINSTOR controller is the central part. It establishes connections to all its satellites to do something useful. In traditional LINSTOR setups the controller would be stateful, with an embedded SQL database; in the context of Kubernetes it can put everything into the etcd key-value store, and then the controller itself is also stateless and can easily be moved around.
B: I should mention here that the structure we are seeing on this slide is only the control structure. It has nothing to do with the data path; the data path is DRBD, and the data path is independent of that. That means we can stop and start the satellites or the controller, we can even upgrade the controller and the satellites, the complete LINSTOR system, and all the existing volumes, all the existing persistent volumes that are in use, continue to do I/O while we do that.
B: The other things we see on the slide are the REST API, a small client library, and the CLI program with which we can inspect the whole LINSTOR system. What kind of challenges can it solve for you? For example, data placement: it supports tagging your nodes with chassis numbers, room number, rack number, and then referring to these tags in your policy, so you could express a policy like: always place replicas in different chassis, but in the same rack.
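A hedged sketch of that tag-based placement policy using auxiliary node properties and a resource group; the property keys, node names, and exact option spellings are written from memory and may differ between LINSTOR versions.

    # Sketch: "replicas in different chassis, but in the same rack"
    import subprocess

    def linstor(*args):
        subprocess.run(["linstor", *args], check=True)

    linstor("node", "set-property", "node-a", "Aux/chassis", "c1")
    linstor("node", "set-property", "node-a", "Aux/rack", "r1")
    linstor("node", "set-property", "node-b", "Aux/chassis", "c2")
    linstor("node", "set-property", "node-b", "Aux/rack", "r1")
    linstor("resource-group", "create", "ha-group",
            "--place-count", "2",
            "--replicas-on-different", "Aux/chassis",
            "--replicas-on-same", "Aux/rack")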
B: Things like that. And this placement policy recently got, let's call it, a multi-dimensional aspect, so it can take into consideration available storage space and all your constraints based on labels, but also other metrics, like available bandwidth on your NIC or available bandwidth to the back-end storage; literally arbitrary things you want to take into account.
B: These are LINSTOR objects, and from Kubernetes, in the Kubernetes world, you have a storage class, and the storage class maps one-to-one to a so-called resource group in LINSTOR; on the resource group you express all of this policy stuff. Okay, I'm not sure if, at this point, every property of such a LINSTOR resource group can be addressed through the storage class yet.
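A hedged sketch of that one-to-one mapping: a Kubernetes StorageClass pointing at a LINSTOR resource group through the LINSTOR CSI driver. The provisioner name matches the LINSTOR CSI driver; the parameter key "resourceGroup" and the group name "ha-group" are illustrative and may differ between driver versions.

    # Sketch: StorageClass mapped to a LINSTOR resource group
    storage_class = """\
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: linstor-ha
    provisioner: linstor.csi.linbit.com
    parameters:
      resourceGroup: ha-group
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    """
    with open("storageclass-linstor-ha.yaml", "w") as f:
        f.write(storage_class)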
B: So we now have module loader containers for DRBD, we have prepackaged containers of the satellite and of the controller, and now even an operator. We started with deployment by YAML files, and that is now starting to be deprecated by the operator. And here comes in what Piraeus Datastore is: Piraeus Datastore is a packaging of LINSTOR and DRBD.
B: The LINSTOR controller, the satellites, the operator, etcd, the CSI driver; of course, I forgot to mention the CSI driver. And recently we finished the work on a Stork plugin. Stork is a Kubernetes scheduler extension that allows you to collocate your workload with replicas of your storage, so we are working with the Portworx people to get that merged into Stork upstream. And yeah, that's pretty much about it.
A: Just a few quick questions. I feel like we covered a lot of the detail of what's happening in the data plane with LINSTOR and the DRBD foundation, but I'm still a bit fuzzy, and maybe we haven't covered enough detail, on how this operates in a cloud native world. In terms of, when you're operating it either hyper-converged or otherwise, how do placement decisions happen? How do failovers happen?
A: It was a pretty quick turnaround between asking for a presentation and actually presenting, so it's fine if you don't have everything on a slide, but I was wondering if you could go into a little bit more detail on some of the Kubernetes integration aspects. You mentioned that there is a controller, but what does the controller do in terms of the satellites? Does it configure LVM and set up the DRBD connections, and how does it manage that?
B: The controller has the database, the overview of what the cluster is: all the nodes, all the volumes, all these objects. And that is still not Kubernetes specific, right? The LINSTOR controller would be the same if it is used in another environment. The Kubernetes specific part, well, that is the CSI driver.
H: Could you share the screen with the two slides I sent you today? Yeah, so I think the question being asked is about the difference between Piraeus and LINSTOR, and this slide is about that. Generally, this is a stack similar to what you see from Rook plus Ceph, sort of. What Piraeus does is the containerization and orchestration of the LINSTOR components.
H: Okay, now it is deployed by an operator, and it also has a CSI driver, and it will also contain things like failover fencing for RWO volumes and the connection to the Stork scheduler. So all the Kubernetes components will be inside the Piraeus project, but LINSTOR is actually the storage system that does the clustering and the volume lifecycle management, meaning create, delete, and resize volume; volume monitoring is also done by LINSTOR, and DRBD here is for the block replication. Okay, this is the stack.
H: The volume control flow is in LINSTOR. The data path is in DRBD, plus the LVM volumes underneath DRBD, and all the control-plane parts that operate with Kubernetes are within Piraeus. So this is the stack. If you can go to the next slide: generally, what we want to do is contribute this whole stack; actually, DRBD is already within the mainline Linux kernel, and so on.
A: I think we need some basic structure to kind of say: look, when you use the Piraeus operator, for example, it implements the LINSTOR controller and the LINSTOR satellites, and then, when a volume, a PVC or a volume request, is issued, what process does Piraeus implement? And also, if you're making the comparison with Rook, I'd like to understand that.
H: I think Piraeus is closest in concept to Rook. It is a collection of the storage operator, the CSI driver, and other pieces like the scheduler integration that make a storage system like LINSTOR cloud native. Okay, that is the scope of Piraeus; LINSTOR is the actual storage system, and again, DRBD is the underlying data-path technology.
A: We probably need to prepare a little bit more background on the Piraeus operator and the Piraeus functionality specifically. I think we got a good understanding of LINSTOR and DRBD, which is great, and thank you for that, but I think we didn't quite understand the functionality of Piraeus and what you're planning on building with Piraeus.
A: I think that's helpful to set the scene, but I suspect it would be helpful to get a presentation on Piraeus specifically, perhaps at a future date; we can do this in the next meeting. Can I just quickly ask: is the intention to apply for sandbox, or are you looking to possibly apply for incubation?
A: Okay, that's fine. In that case, what I would recommend: it's possible, obviously, to make a sandbox application to the TOC directly, but I would strongly recommend that we structure the presentations slightly more thoroughly, so that we can get a better understanding of what Piraeus is going to cover and what some of the plans are for Piraeus, because I think there's a bit of a gap in understanding with the team. So that would be useful for the next steps.