From YouTube: CNCF SIG Storage 2020-10-28
A
Hey, morning Alex; in fact, probably good afternoon.

A
By the way, I noticed this session started recording right when I joined, so Zoom can do automatic recording now.

B
Yeah, all of the SIG meetings are set to automatically record, and then they get posted to YouTube, so they're available to the public.
A
Hey Alex, which time zone are you in?

A
Okay, GMT. Yeah, Erin, which time zone are you in?

C
I'm in the mountain time zone, so an hour ahead of Pacific, yep.
B
But in the US it goes back next Sunday.

A
Yeah, you know, California has voted to cancel daylight saving, but it's only at the state level, and they said you have to get permission from the federal level to really remove it, right? But then there's another problem: say California ends up on a different time zone compared to, say, Washington and Oregon. That's going to be weird, because you're supposedly in the same time zone. So, well, this daylight savings thing.

A
I've heard that hundreds of people probably have a higher risk of a stroke every year because of the change of schedule and the clock change. I'm not really sure why we still need it.

B

A
Oh no, in fact, I don't think Arizona has daylight saving.
B
Alright, I think we don't have a lot of people joined yet. Oh, we've got a couple more now, but I'd suggest we start, and then we can share the recording if need be.

B
I see the full-screen presentation page here; okay, that's good.
A
All right, so thanks everyone for joining this session. As you know, Longhorn is currently a CNCF sandbox project and we are applying for the incubation stage, so this is the Longhorn incubation review. For this review, first I'm going to go through a few recaps about the basics of Longhorn, why we do it and how we do it, and later we can go through how Longhorn has grown since joining the CNCF, the traction, and also the roadmap. Feel free to interrupt me at any moment. So let's get started. All right, so what is Longhorn? Longhorn is open source distributed storage software for Kubernetes.

A
Our goal is pretty clear: we want a very, very simple way to add persistent storage to your cluster. One-click installation to add persistent storage support for any Kubernetes cluster is the goal we want to have. And there are a few other things.
A
So
first
is
reliability
and
because
it's
a
storage
software
right,
so
the
last
thing
you
want
to
do
is
like
lost
your
data,
so
long
form
provides
crash
consistent
and
make
sure
that
in
every
data
you
write
to
the
long
run,
volume
will
be
right
and
the
preserved
on
the
disk.
It's
no
cache
in
between
right
and
the
second
thing
is
room-
provides
multiple
layers
of
protection
against
the
data
loss.
A
So
that's,
including
the
building
snapshot
mechanism
which
is
inside
the
cluster
and
also
the
backup
support
which
going
to
backup
the
snapdragon
to
offsite
outside
the
cluster,
to,
for
example,
s3
or
nfs
server,
and
in
fact,
there's
a
third
layer
compared
to
some
other
solutions
is,
if
you
have
your
longhorn
used
directory
data
directory
available.
In
fact,
you
can
directly
extract
the
data
from
that.
Given
that
you,
for
example,
you
lost
your
whole
kubernetes
system
and
you
lost
your
whole
every
metadata
right.
A
In fact this helped a lot during the early days of Longhorn, when Kubernetes had a choice of drivers between Flexvolume and CSI. We migrated to CSI at CSI 0.3, later 0.4, and upgraded to 1.0; but sometimes you still had to choose Flexvolume when the Kubernetes distribution didn't support CSI, and the different CSI versions were, in fact, not really compatible.

A
So we built something called the driver deployer to automatically detect the version of your Kubernetes and install the compatible CSI driver for you. Now this is less of a problem, because everybody has standardized on CSI 1.0, but we still put a lot of effort into making this configuration and installation process as easy as possible.
A
Another
thing
is:
don't
provide
the
policy
user
experience,
including
a
building
the
main
ui
you
don't
need
to
have
like
third-party
ui
or,
like
add
add-on
for
that,
so
that
is
all
included,
so
you
can
operate
long
form.
Many
like
create
volume
stuff
inside
the
group
control,
of
course,
but
you
can
also
do
that
from
the
ui,
and
you
can
see
the
dashboard
and
to
show
what's
the
system
level
overview
looks
like
and
it
performed
the
backup
restore
snapshot,
scheduling,
backup
those
kind
of
operation
ui
as
well.
A
Longhorn is designed to be easy to understand. I will talk a little bit more about the architecture later, but the real goal is to make sure that even if you don't have a very deep storage background, you can understand most of the concepts and understand how Longhorn works. Longhorn also provides a way to recover easily in the worst-case scenario: as I mentioned, there are three layers of protection.

A
As long as you have any one of them available, you can recover the data of your cluster. Longhorn also provides upgrades without interrupting the workload, which is what we call the live upgrade feature. That means you can feel free to upgrade your Longhorn, including the Longhorn data engine, while you still have running workloads. That really reduces your downtime and the maintenance windows you have to schedule.

A
For example, when you want to do continuous deployment, or when you want to do maintenance work on your cluster.
A
So
this
doesn't
take
extra
space
unless
you
use
the
up
to
all
the
spaces
and
what,
in
snapshots
and
backup
restore
snapshot,
how
we
define
snapshot,
others,
the
history
snapshot
point
inside
cluster,
which,
as
long
as
you
have
this
warning
inside
cluster,
you
can
revert
back
and
stuff
and
the
backup
and
it's
going
to
be
outside
the
cluster
right,
so
that
we
support
incremental
backup
and
incremental
restore.
So
what
an
expansion
and
you
can
resize
the
volume
across
az
replica
scheduling.
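As a rough illustration of what "incremental" means here, the scheme can be sketched as content-addressed fixed-size blocks, where a later backup only uploads blocks the target has not seen before. This is a conceptual sketch, not Longhorn's actual backup format:

    import hashlib

    BLOCK = 2 * 1024 * 1024  # assume fixed 2 MiB backup blocks

    def backup(volume_path, store):
        """store is dict-like (checksum -> bytes), standing in for S3/NFS."""
        index = []
        with open(volume_path, "rb") as vol:
            while True:
                block = vol.read(BLOCK)
                if not block:
                    break
                digest = hashlib.sha512(block).hexdigest()
                if digest not in store:    # only unseen blocks leave the cluster,
                    store[digest] = block  # which makes repeat backups incremental
                index.append(digest)
        return index  # a backup is then just this list of block checksums

Restore walks an index and fetches the blocks it lists, which is also why a restore can be applied incrementally on top of a previously restored image.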
A
This
is
mostly
for
some
clouding
vendor
environments
and
they
want
to
have
the
enhanced
availability
across
the
whole
whole
controlled
different
age
in
the
same
region
right
so
then
you
lost
one,
you
see.
A
Okay, all right. So this is the overview of how Longhorn works underneath. Currently we have two nodes here; both nodes have storage, RAM, and CPU, and Kubernetes asks Longhorn for a new volume. When this request comes in, Longhorn is going to create two replicas, preferably on two different nodes, because if one replica goes down, we still have a replica available on the other node.
A
I
can
demonstrate
the
process
of
philadelphia
later
so
then
longhorn
is
going
to
create
an
engine
to
connect
it
to
those
replicas
and
the
engine
is
going
to
expose
the
block
device
to
the
warning
right.
So
this
is
very
simple
way
of
doing
like
do
to
set
up
this
the
data
path
to
provide
the
storage
for
the
for
the
part
to
use.
A
If
we
are
going
to
have
the
second
part
asking
for
a
second
volume,
we
do
the
same
and
third
part
we
do
the
same.
So
there
are
two
advantages
of
this
approach.
The
first
thing
is,
you
can
see
that
the
data
path
of
each
volume
is
not
it's
not
in
the
wind,
it's
basically
isolated
from
each
other
right.
So
if
one
volume
goes
downward,
even
one
engine
go
down.
It's
not
going
to,
in
fact
like
affect
any
other
volumes
right.
Another
thing
is,
you
can
see.
A
The
engine
we
have
here
is
always
collocated
with
the
part
with
the
workload
so
in
the
in
the
most
common
scenario
that
we
want
to
guard
against,
for
the
sa
cases
is
the
note
down,
but
in
this
case,
if
the
node
one
is
down,
for
example,
and
then
the
engine
the
volume
work,
the
engine
will
be
down,
of
course,
but
the
workload
part
one
will
be
done
as
well
right.
A
So
then
the
kubernetes
are
going
to
like
reschedule
the
part
to
another
node
and
the
engine
can
be
mine
like
just
move
along
with
it,
and
everything
will
be
back
to
normal.
So
that's
greatly
simplifying
our
design
for
the
engine,
because
we
don't
need
to
have
one
engine
to
kill
like
more
than
one
node,
and
then
this
we
don't
need
to
have
really
complex
mechanism
to
do
the
same
engine
right.
So,
but
how
does
this?
How?
Why
there's
nothing
like
this
before
right?
A
So
the
problem
is
because
engine
replicas,
in
fact,
are
micro
services,
they're
currently
running
as
processes
right
and
the
and
the
first
version.
In
fact,
when
we
come
up,
this
are
running
on
h1
as
a
container
as
parts,
but
we
do
hear
some
limitations
later,
so
we
change
them
into
process,
but
in
the
end
those
are
separately
orchestrated
entities.
A
So
it's
pretty
hard
to
do
this
without
the
help
of
kubernetes
right,
if
it's
possible
right.
So
that's
why
this
mechanism
this
way
we
choose
to
do
it,
is
basically
it's
bound
to
kubernetes
it's
with
a
kubernetes
help.
We
can
do
this.
Otherwise,
it's
going
to
be
like
we
have
to
write
some,
our
own
scheduling
mechanism
to
move.
This
part
move
this
engine
process
around
those
stuff.
That
is
why
we,
that
is
why
we
only
see-
is
this
kind
of
magnesium
coming
this
kind
of
character
coming
until
now
right.
B
Hey Shang, just a quick question: effectively, does every volume have its own engine?
A
Every volume has its own engine, and its own process as well. A little bit of history: we originally designed every engine and replica to be a Docker container instead of a process, in the first versions, I think before 0.6. But later we had a user come and complain: "I have a very, very big machine, it's so beefy, and I can run 20 or 30 workloads on it, and then I need 20 or 30 volumes; but all those engines and replicas each take up a pod. So then, if I really run them on a single node, that's 80, or at minimum 40, extra pods, and they just take a lot of the pod quota, because Kubernetes only allows 110 pods per node." So then we decided, okay, it seems to make more sense to aggregate them in a way where they still run as separate instances, separate processes, but share a pod on the node, so we save that resource at the pod level.
A
So
that's
why,
in
the
next
page,
you
will
see
something
called
instance
manager.
That
is
why
and
how
it
works
right
now,
right,
okay,
thank
you.
Any
other
questions.
D
A
Yeah, so the engine itself doesn't really correlate to a pod anymore. It was correlated to a pod, as I said before, but because of the quota limitation we had on pods, only 110 per node, we decided to stop spending that resource. So now the engines run inside a shared pod, and there can be multiple engines running inside what we call the instance manager pods. I can explain more on the next page.
A
All right, okay, so this is a more detailed view of the architecture on the engine side. You can see that now we have three nodes. You can also see that some nodes have a spare disk for Longhorn, like this black-colored SSD; we can use that for Longhorn. But some have, say, the yellow-colored disk, which we assume is the root disk. You don't really want to use that for storage; otherwise you might introduce unwanted disk pressure, so you want to have a separate one.
A
You
have
one
separated
and
also
you
can
see
that
for
the
node
that's
with,
without
or
with
the
storage
for
longhorn,
you
have
a
replica
instance
manager
running
on
top
of
that.
That
means
those
nodes
are
potentially
able
to
run
replicas,
but
for
every
node
because
they
are
able
to
all
of
the
know
that
here
are
vocal.
They
are
able
to
run
using
the
long
volume,
so
we
are
going
to
have
engines
managed
running
on
top
of
that.
So,
let's
take
the
same
example.
A
We
have
port
a
and
we
want
to
create
a
volume
for
port
a
we
have
replicas
scheduled
on
two
different
nodes:
node
one
node
two
and
then
the
replica
process
will
be
started
inside
replica
instance.
Manager
and
the
engine
process
will
start
inside
the
instance
manager
on
the
same
node
after
part,
a
and
then
connect
to
expose
block
device
to
to
part
a
right,
pretty
straightforward,
and
you
have
pro
we
have
for
b
on
the
on
the
note
2
and
we
do
the
same
thing
plus
c
on
the
node
2.
A
We
do
the
same
thing
right.
So
next
question
is:
what's
going
to
happen
if
the
node
a
when
node
1
went
down.
So
if
no
one
went
down,
as
you
can
see
in
the
previous
page,
support
a
in
fact,
the
volume
a
going
to
have.
We
have
the
engine
on
node
one
and
the
replica
and
node
one,
the
two
node
one
went
down
and
port
one
output,
a
everything
went
down
right,
but
because
it's
kubernetes
kubernetes
is
going
to
decide
that
okay.
A
"I still need the workload," and then the workload asks for the volume again, and Longhorn sees that, okay, there's still the data of this workload inside node two, as you can see with the red replica there. So Longhorn starts the engine on node three, connects it to the red replica, and resumes the service to pod A. That is, in overview, how recovery works in the Kubernetes world when a failure happens.
C
A
Yeah, the instance managers are in fact just controlled by Longhorn; we built the controller for them. For example, when you don't have available disks on a node, you don't really need a replica instance manager there. That's why we built them with a separate controller rather than just using a DaemonSet; but every one of them is a dedicated pod.
A
Node three, the failover node, yeah. So currently, if there were a node four with an available disk, we would of course recreate the replica on node four; but node three doesn't have a disk available for Longhorn, and that is why we don't rebuild the replica on node three. And of course, if node one comes back, we can reuse that replica.
A
Yeah,
that's
exactly
yeah
it's
what
we
I
want
to
indicate
that's
kind
of
different,
that
is,
for
the
root
file
system
right,
so
that
is
the
the
available
disk
is
like
much
says,
those
black
or
gray
colors
right.
So
the
ssd
on
note
3
is
not
really
for
the
long-haul
storage.
So
that
is
also
why
we
don't
have
the
replica
instance
manager
running
there.
B
Hey, so a very quick question, and maybe you might come to this in a future slide: if, as you said, node one reboots or recovers and comes back onto the network, the engine on node three can then reconnect to the replica that's on node one. But would it have to, I assume it would have to, re-sync it at that stage, right?
A
Yeah
so
currently
in
the
one.x,
and
we
we
always
review
the
new
replica,
but
for
the
webcoming111.1
release
we
are
going
to
try
to
start
using
the
existing
replica,
but,
of
course,
any
replica
we
use
to
either
review
the
new
replica
or
using
solution
replica.
We
are
going
to
check
and
sync
the
data
before
we
can
use
it.
It's
always
going
to
be
that
case
yeah.
We
cannot
just
blindly
use
it
anyway.
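A minimal sketch of that check-and-sync step, assuming two replica files, one healthy and one stale; conceptually it is an rsync-style block comparison (Longhorn actually works at the level of snapshots and diffs, so treat this as an analogy):

    import hashlib

    BLOCK = 4096

    def resync(healthy_path, stale_path):
        # Compare per-block checksums; rewrite only the blocks that differ.
        with open(healthy_path, "rb") as src, open(stale_path, "r+b") as dst:
            offset = 0
            while True:
                good = src.read(BLOCK)
                dst.seek(offset)
                old = dst.read(BLOCK)
                if not good:
                    dst.truncate(offset)  # healthy copy ended first
                    break
                if hashlib.sha256(good).digest() != hashlib.sha256(old).digest():
                    dst.seek(offset)
                    dst.write(good)
                offset += BLOCK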
D
Also, the recovery workflow that you outlined: does that also happen when you don't add any new nodes? Let's say you already had another node three that was already serving some engines and some replicas; can that take over serving the engines and replicas of node one that failed? You don't necessarily have to add new nodes to replace node one.
A
And then node three becomes responsible for all of them, yeah. In fact, node three was always there; it just wasn't running the related workload at the moment, but node three is inside the cluster. And of course, if you do add a new node, the new node will get the engine instance manager pods too, and if Kubernetes decides to schedule the pod on that node:

A
That's still going to work. Say you don't have node three, you just have node two, and Kubernetes decides to schedule pod A on node two; it's still going to work, it's no different. I'm just using node three to make the concept clearer here; it doesn't need to be that way. Longhorn's engine and replica don't need to be on the same node, unless you enable a certain feature called data locality.
D
So I think these are separate issues, but I think locality here really, as far as Longhorn is concerned: a pod that is consuming a Longhorn volume has to have a local engine, yes, but the actual data, the actual replica, can be on a different node.
A
Yes, yeah. I don't quite understand what you mean by "serving the engine," but yes: as long as there is a replica inside this Kubernetes cluster, you can have an engine connect to that replica and serve the volume from any node inside the cluster, as long as you don't have a scheduling limitation on that pod.
B
So, just one last question, kind of related to that. I'm assuming an engine is spun up within the engine instance manager as part of a Kubernetes controller receiving a request, perhaps for iSCSI or something like that; I'm kind of speculating. But how do you make the decision to schedule a replica on any particular node? Is there some logic or determination there, or is it round-robin, or?
A
Yeah, so this basically comes down to the nodes. The first thing is, of course, that the node's disks should have the space; they have to have the space. The second thing is they have to meet the restrictions, like storage tags; for example, I may have to schedule this volume on a disk with this tag, or on a node with this tag.

A
Those have to be there. And the third thing is, if you enable the replica anti-affinity, which is enabled by default, then the replicas need to be scheduled on different nodes; they're always going to be on different nodes, so if you don't have a different node to satisfy that requirement, you're going to get a scheduling failure for those replicas. There's also a bunch of other scheduling rules to apply; once you pass all those filters, then you have the candidates.
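Paraphrasing those rules as code, the filter chain looks roughly like this; the names and fields are illustrative, not Longhorn's actual types:

    def candidate_nodes(nodes, volume, nodes_with_replica):
        picked = []
        for node in nodes:
            # 1. the node must have a Longhorn disk with enough free space
            if node.free_storage < volume.size:
                continue
            # 2. node/disk tags must satisfy the volume's tag restrictions
            if not volume.required_tags <= node.tags:  # tags modeled as sets
                continue
            # 3. replica anti-affinity (on by default): spread across nodes
            if node.name in nodes_with_replica:
                continue
            picked.append(node)
        return picked  # empty list -> scheduling failure, volume degraded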
A
Okay, thank you. So that was the engine; the next slide is the manager, and in fact this is going to be even simpler. We have a Kubernetes cluster, and the cluster wants a volume, so whoever talks to the cluster is going to talk to Longhorn's CSI plugin through the CSI interface.
A
For
example,
I
ask
him
for
a
new
warning,
so
normal
manager
going
to
create
a
volume
crd
object
and
store
that
in
the
kubernetes,
a
guest
server
right,
of
course,
backed
by
icd
or
others,
and
then
the
controllers,
the
volume
controllers
inside
local
manager,
watch
for
the
object
and
see
okay.
This
is
the
new
volume
object
coming,
so
I
need
to
create
a
replica,
an
engine
for
it,
and
then
they
decide
to
create
those
replicas
and
engine
and
the
formula
from
the
volume
and
provided
to
the
user
right.
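The watching half of that loop can be sketched with the Kubernetes Python client. The group/version below (longhorn.io/v1beta1 volumes in the longhorn-system namespace) matches Longhorn 1.x CRDs, but the handler body is only a stand-in for what the real volume controller does:

    from kubernetes import client, config, watch

    config.load_kube_config()
    crds = client.CustomObjectsApi()

    # Watch Longhorn Volume custom resources, as the volume controller does.
    for event in watch.Watch().stream(
            crds.list_namespaced_custom_object,
            group="longhorn.io", version="v1beta1",
            namespace="longhorn-system", plural="volumes"):
        vol = event["object"]
        if event["type"] == "ADDED":
            # a new Volume CR appeared: here the controller would create the
            # matching Engine and Replica CRs and drive them to running
            print("new volume:", vol["metadata"]["name"])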
A
The manager also handles operations: for example, you want to add more disks to a node, or do backups and snapshots, and you can also set recurring snapshots, which means you want to take a snapshot or take a backup, say, every morning at 1 a.m. You can use the Longhorn manager to configure that, but also, of course, if you prefer, you can use the Kubernetes storage class to configure it as well.
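For instance, a storage class that bakes in a replica count plus a recurring 1 a.m. backup could be created like this; the recurringJobs parameter format follows the Longhorn documentation of this era, so treat the exact field names as an assumption:

    from kubernetes import client, config

    config.load_kube_config()
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="longhorn-nightly"),  # hypothetical
        provisioner="driver.longhorn.io",
        parameters={
            "numberOfReplicas": "2",
            # one backup per day at 1 a.m., keeping the last 7
            "recurringJobs": '[{"name":"nightly","task":"backup",'
                             '"cron":"0 1 * * *","retain":7}]',
        },
    )
    client.StorageV1Api().create_storage_class(body=sc)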
A
So
local
ui
is
currently
a
complement
for
the
the
most
csv
plugin,
and
then
they
have
the
that
combination
of
them.
Both
they
will
have
the
full
functionality
which
we
exposed
to
the
user
and
in
the
future
we
are
going
to
introduce
a
longer
cr
as
well
to
allow
you
to
program
it
program,
those
logic
inside
your,
for
example,
your
maintenance
script,
stuff.
A
Well,
I'm
just
going
to
go
through
live
online
and
the
first
one
is
what's
the
position
and
for
long
term
we
always
position
to
be
a
full
stack,
storage
software
and
compiled
to
rook,
which
is,
I
think
it
currently
is
graduated
and
the
position
as
a
storage,
orchestration
and
open
ebs
is
also
full
stack,
storage,
software
and
the
second
part,
is
about
the
engine.
What's
this
data
engine,
what's
its
own
lane,
so
longhorn
has
a
long
engine
which
we
custom.
We
build
ourselves.
A
The
rook
is
currently,
I
think,
the
most
common
user
case
for
the
rook
is
using
the
saf
right
opps.
They.
They
have
a
few
bunch
of
choices,
including
jiva,
which
in
fact
is
the
fork
of
longhorn
engine
box
two
three
years
ago,
and
the
performance
wise.
The
local
performance
is
on
par
with
the
staff,
and
the
opps
well
as
say
depends
on
which
engine
you
use
and
the
the
gui
on
the
longer
side
has
built-in
gui
and
the
rook
has
depends
on
the
engine.
I
think
saf
has
a
dashboard
and
open
ebs.
A
They,
I
think
they
have
a
ui,
but
they
provided
that,
I
think,
probably
at
the
extra
cost,
if
I
remember
correctly,
and
for
the
backup
restore
and
the
crosstr
volume
tobacco
restore
longhorn,
because
we
we're
aiming
to
provide
those
functionalities
like
in
the
the
most
user-friendly
way.
So
we
currently
have
the
backus
restore
as
a
building
option
right.
A
We
do
incremental
backup
and
then
we
do
incremental
restore,
which
is
the
this
dr
warning
option
layer
there
and
I
think
the
rook
and
the
set
itself
doesn't
have
like
building
backup
restore,
but
rook
can
take
advantage
of
the
using
the
third
party
software
to
do
so,
and
I
think
it's
the
same
for
the
open
ebs
for
cross
cluster,
dr
volume,
disaster
recovery
and
local
business
on
top
of
our
backup
restore
feature,
and
that
is
the
really
provided
way
for
the
user
to
use
it
easily,
like
you
have
a
backup
cluster
which,
after
running
in
no
time
if
the
main
cluster
went
down,
so
I-
and
in
fact
I'm
not
certain
on
the
answer
for
the
root-
can
open
ebs.
C
Shang, so with Rook on here just being a storage orchestrator: do you guys plan to extend the way that you do orchestration to other storage providers? It's maybe a good comparison here, even though it's not an existing CNCF project. I think maybe it would be helpful for the TOC to understand, for the cloud native landscape in terms of storage, how Longhorn fits.

C
I just think, maybe, as we take this into the CNCF, if you guys are meant to present there, that Rook here is maybe not the best comparison. We should maybe have cloud native storage options, and of course there's tons of them within Kubernetes, and understand how Longhorn fits against those in terms of functionality, because Rook can actually deploy OpenEBS and Ceph and MinIO and many other ones. So I'm just providing a recommendation; I think it would make more sense.
A
Yeah
so
yeah,
we
definitely
can
do
that.
The
product,
the.
Why
we
release
the
rule
here
is
when
you
look
at
like
storage,
like
a
storage
project,
focus
on
more
focus
on
the
like
block
storage
level,
there's
probably
obvious
rook
and
longhorn
they've
really
mentioned
together
pretty
often.
So
that's
why
we
put
the
rook
here
yeah,
but
that
makes
sense.
Yes,.
A
All right, sorry, yeah. So this is the status update.
A
Our latest release is 1.0.2, and in fact Longhorn had its GA release just about five months back; that happened on May 30th, 2020. Also, by the way, just a reminder that Longhorn joined the CNCF last October, so it's now been exactly one year. And in the period since Longhorn joined the CNCF, just one year, we have grown to 50 committers from 10 different companies.
A
In
fact
two
of
the
commuters.
They
made
a
very
significant
like
contribution
to
longhorn,
so
they
implement
the
arm
solution
by
themselves
and
submit
in
the
big
pr
to
the
longhorn,
and
we
take
so
long
team
took
them
off
then
and
just
add
some
like
just
like
polishing
it
a
little
bit
and
now
the
arm
support
is
going
to
be
experimental
feature
for
the
longhorn
1.1
release.
A
So
that's
the
few.
That's
a
huge
thing
we
saw
from
happens
in
our
contributor
community.
A
Yeah
so
currently
also,
I
have
a
bunch
of
dev
states
right
now
and
local
is
pretty
much
very
active
commits
per
week.
51
youtube,
open,
24
issue
close
per
week,
18
and
new
pr
per
week
in
29.
Yes,
so
those
are
at
the
state
we
come.
We
get
from
that
state,
dot,
cnc,
dot,
io
yeah.
So
on
the
right
side,
you
can
see
that
we
have
huge
committee
growth
since
we
joined
the
cncf
and
I
think
the
github
stars
is
probably,
if
I
remember
correctly,
600
versus
like
2000
right
now.
A
Slack
user
is
like
two
two
three
hundred
two
hundred
versus
like
close
to
when
something
is
like
900
people
right
now.
I
think-
and
the
no
account
note
account-
and
this
was
about
like
three
thousand
something-
and
now
we
are
closing
to
like
1500.
I
think
it's
1400,
some
14
thousand
something
yeah.
So
the
the
growth
of
the
community
and
the
usage
of
longhorn
is
is
pretty
is
in
fact
it's
pretty
huge.
A
All
right,
so
those
are
the
the
community
building
things
we
do
and
the
first
is
we
actively
maintaining
the
github
slack
channel,
and
in
fact
this
I
have
to
say
is:
if
it's
going
to
be
it's
in
fact,
it's
not
easy,
because
our
goal
is
like
no
unanswered
questions.
We
gain
a
lot,
we
we
receive
a
lot
from
community
and
we
want
to
make
sure
that
we
meet
the
requirement
right.
A
So,
if
you're
looking
at
long-term
github
issues
and
form
like
slack
channel,
you
can
see
that
every
day
we
have
at
least
about
other
lists
like
three
four
coming
up.
Three
four
issues
and
those
three
four
users
start
asking
questions
on
stuff
right.
So
basically
the
responsibility
for
for
my
and
my
team
is
to
answer
those
questions
and
make
sure
and
help
them
make
sure
users
have
their
best
experience
with
long
form.
A
That's
that's
that's
in
fact,
for
us,
it's
a
huge
thing
and
secondly,
we
have
a
monthly
community
meeting
and
plus
office
hour
happens
on
every
second
friday
of
the
of
the
month,
and
we
are
recording,
is
old,
definitely
available
on
youtube,
and
you
can
check
that
out
and
in
the
long
community,
github
page
there's
a
link
to
the
recording
there,
and
also
we
have
moved
our
infrastructure
to
cncf
and
now
long-term
every
night
we
run
a
nightly
task
of
for
currently
the
time
time
is
about
six
to
seven
hours
and
though
those
net
test
results
and
also
drone
build
result
is
like
going
to
run
for
every
pr
and
every
merge
commit.
A
They
are
publicly
available
all
right,
sorry,
and
also
we
have
a
metrics
dashboard
which,
which
is
publicly
as
well.
This
is
how
we
get.
We
know
no
node
account.
So
the
initial
story
is,
we
have
upgrade
server,
which
is
running
publicly
inside
instead
of
safe
infrastructure
and
when
every
hour
there
you
there,
the
node
running
on
the
local
manager
is
going
to
asking
for
if
there
are
new
server
version
available.
A
That's
also
why
you
can
see
that
the
users
they
get
notification
of
a
new
server
and
they
very
frequently
upgrade
very
soon
after
the
new
server
come
up
right,
but
when
they,
when
the
local
manager
send
that
request,
we
know
that
there's
one
node
available,
we
don't
have
any
way
to
identify
who
that
node
is,
but
we
just
see
okay.
This
is
one
request
coming,
so
I
count
this
as
new
active
node.
So
that's
the
old
nesting
is
shown
on
the
magic
dashboard
right.
A
That
is
all
public
available
and
also
we
have
participated
in
the
coop
con
and
the
for
the
kukan
eu.
We
have
host
the
boost
bay,
booth
and
office
hours,
two
office
hours
plus
one
session,
so
that's
in
fact
the
fee
and
also
we
run
a
survey
and
got
about
300
response
and
regarding
the
kubernetes
storage
native
storage,
and
why
people
using
or
why
people
not
using
it
right,
but
unfortunately,
in
the
end,
we
feel
like
the
sample
size
is
probably
still
too
small
to
reach
any
like
a
different
defining
conclusion.
A
So
I
so
we
didn't
really
end
up
publishing
a
official
report
on
that.
A
Okay,
so
those
are
some
of
the
end
users
using
longer
in
production
and
those
end.
Users
are
all
we
gathered
all
this
information
from
the
public
user
channel.
Those
are
not
like
wrenching
users,
those
are
all
open:
source
users
and
they're,
not
a
pain,
wrencher
or
like
for
anything
right.
So
those
are
one.
The
first
one
is
the
tribunal
regional
okay.
A
So
I
cannot
find
spanish,
okay,
it's
the
regional
electoral
court
of
the
state
of
power,
brazil,
and
there
is
using
long
brain
production
story,
back-end
with
prometheus
minion
and
pg
and
ming,
and
the
second
one
is
cinema,
and
it's
a
health
information,
tech,
con
technology
and
the
third
one
is
qik
and
they
are
also
using
longhorn
in
one
of
the
next
in
their
service
management
platform.
A
So
so
how
so
we
basically
how
we
got
those
and
users
is
we
basically
just
shout
out
in
the
slack
channel
and
and
asking
if
we're
asking
them
for
help
for
our
incubation
process
right?
So
that's
why
that's
why
we
got
that's
how
we
got
this
and
also
we
reach
out
to
a
few
users
in
the
github
that
we
saw
that
really
frequently
interaction
with
us
and
asking
questions
and
stuff
to
and
want
to
know
if
they
can
help
and
that's
something
the
case
here.
B
And just to confirm: these end users are not commercial Rancher users; therefore they are using the open source version of the product.
A
Yes,
yeah,
they
are
not
commercial
rental
users
and
also,
in
fact,
the
commercial
there's.
No
commercial
version
of
longhorn,
so
rancher
only
sells
support,
so
even
their
commercial
rental
users
they're
going
to
use
the
same
open
source
product
yeah
we
just
like
provide
them
support,
that's
as
a
rental
apps
right.
A
So
but
those
are
not
even
like
random
commercial
users,
yeah
they're
there
they
are
retro
commercial
users
yeah,
but
we
we,
I
think,
it's
better
to
show
the
opens
on
the
open
source
side,
and
so
that's
why
we
reach
our
user
in
this
way,
rather
than
depends
on
the
range
of
customer
to
do
so.
B
Okay
and
and
sorry,
I
I'm
I'm
just
going
to
ask
a
few
questions
on
this,
because
we
got
we.
We
we
had
similar
questions
that
came
up
with
another
project
recently.
I
just
want
to
confirm
that
the
the
reason
why
I'm
asking
around
the
commercial
rancher
thing
is
is
because
I
want
to
make
sure
that
these
users
are
not
using
some
service
or
or
some
function,
that's
only
available
in
the
commercial
rancher
edition,
but
not
available
in
the
open
source
edition.
If
you
see
what
I
mean.
A
Yeah,
I
see
yeah,
so
no
they
they
were
definitely
open
using
open
source
hundred
percent
because
in
fact
there's
no,
we
don't
make
any
like
commercial
version
or
like
a
proprietary
version
of
longhorn.
So
even
they
want
they
don't
have
a
way
to
use
that
I
mean
even
for
the
rental
customers.
So
it's
the
same
for
the
rent,
wrenches
100,
open
source
right.
So
as
a
rental
customer
you're,
getting
the
version
of
the
renter
is
the
exactly
same.
You
download
from
the
github.
A
So
those
are
the
roadmap
and
for
november,
we're
going
to
release
longhorn
1.1
release
soon,
and
it's
going
to
include
in
the
native
of
redoing
manning
support
and
we're
doing
that
using
fs
on
top
of
longhorn
block
device
and
also
the
permissions
csi
snapshot,
support
and
some
data
locality
feature
as
and
also
the
arm
support,
which
is
experimental
and,
as
mentioned,
the
arm
support
is
coming
from
contributions
from
in
the
community
and
in
the
future
we
are
going
to
do
the
longhorn
cli
and
the
svgk
application,
backcountry,
store
and,
and
also
some
other
items.
E
So thanks, thanks Shang. I have a couple of questions. First of all, if someone already has some existing data somewhere, on a bucket or on Ceph or something like that, is there a way to migrate it into Longhorn, or do they have to do it manually?
A
Yeah, in fact that question came up, I think, a few months back. Currently we don't have a native way to help you migrate from other storage, but you can always do what you can always do in Kubernetes: you create a new PVC, mount both the old and the new PVC into a pod, and run cp in between.
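That manual path is just a throwaway pod that mounts both claims and copies the files across; a minimal sketch with hypothetical claim names old-pvc and new-pvc:

    from kubernetes import client, config

    config.load_kube_config()
    pvc_src = client.V1Volume(
        name="src", persistent_volume_claim=
        client.V1PersistentVolumeClaimVolumeSource(claim_name="old-pvc"))
    pvc_dst = client.V1Volume(
        name="dst", persistent_volume_claim=
        client.V1PersistentVolumeClaimVolumeSource(claim_name="new-pvc"))

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="migrate"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="copy",
                image="busybox",
                # copy everything, then the pod exits and can be deleted
                command=["sh", "-c", "cp -a /old/. /new/"],
                volume_mounts=[
                    client.V1VolumeMount(name="src", mount_path="/old"),
                    client.V1VolumeMount(name="dst", mount_path="/new"),
                ],
            )],
            volumes=[pvc_src, pvc_dst],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)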
A
Yes, and copy things in between. But this is one item we're tracking, and we think we can provide some help there, in fact not just from other storage vendors to Longhorn: because Kubernetes provides a very flexible way of operating between storage vendors, we could probably provide a tool to help you move from any storage vendor to any storage vendor. That's how we see it.
E
That would help a lot with adoption, I think. And a second question: you mentioned a bit about the snapshots and the recovery and all this stuff. Are you utilizing the snapshot features of CSI, you know, the new methods for snapshots and restore and all these things?
A
For the CSI snapshot support, the "snapshot" in that context is not really mapped to the Longhorn snapshot; it's going to be the backup, because it's the backup that you can migrate outside of the volume. Longhorn snapshots always live inside Longhorn. So yes, that CSI snapshot support will be there in 1.1.
E
Okay
and
the
final
suggestion,
I
think
more
and
also
you
know,
if
you
can
answer,
I
I
think
the
engine,
so
I
wasn't
aware
of
the
project
I'm
just
learning
today.
E
The
project
reminds
me
a
bit
a
luxio,
because
it
is
also
a
storage
engine,
not
so
much
as
to
look
because
you
know
you
are
also
storage
engine
itself,
so
maybe
some
comparison
with
a
luxio
might
make
sense.
You
know,
for
I
think
I
think
it's
more
similar
than
than
rook,
because
because
you
have
the
your
own
storage
engine
yeah
so.
A
Yeah,
I
think
I
have
I
haven't
heard
this
name
and
I
I
haven't
like
look
into
what,
how
they
do
it
and
yeah
so
yeah
we
can.
We
can
just
try
to
see
if
we
can.
E
Yeah,
just
for
you
to
have
a
look
on
the
project,
it's
similar
with
the
different
layers
of
of
storage,
so
they
they
have
something
something
similar
they're,
not
so
kubernetes
integrated
as
far
as
I
remember,
but
yeah
just
for
you,
too
yeah
thanks.
Thanks
for
the
presentation.
E
Yeah
yeah
yeah
yeah,
so
basically,
my
point
was
also
it's
it's
kind
of
difficult
to
compare
with
with
rook,
because
rook
doesn't
provide
their
own
storage
engine
right.
Instead,
it
would
make
sense
to
compare
with
something
like,
but
they
have
they
have
their
own.
I
I,
if
I'm
not
mistaken,
you
can
use
you
can
use
aluxeo
without
having
other
other
as
a
standalone
backing
store
as
well.
If
I'm
not
mistaken,
right.
B
Right. Hey Shang, just a few other things in terms of the incubation criteria. It looks like the number of committers has improved quite a lot recently; would you be able to share maybe some ratios of, sort of, Rancher committers versus external?
A
Yeah, so still the supermajority is coming from Rancher Labs, and there are also others from independents. And also the CNCF has helped with the website and related project work, and there's some contribution from SUSE recently, and some others. So this is what we have right now, I think.
B
No, I think that's fine. Would it be possible to share a PDF or a link to it?
B
Yeah, so I was on that team. Excellent; all right then, does anybody else have any questions for Shang?
B
All right, in that case thanks; thank you so much, Shang. This has been a really great presentation, and we look forward to making our recommendation to the TOC.