From YouTube: CNCF SIG-Storage Meeting - 2019-07-24
Description
A: So Sheng has put together the information for the Longhorn project and filled out the questionnaire, as well as put together a presentation, which we have linked in the agenda documents, so they're available for background reading. Unless there are any other immediate questions, I think, Sheng, please go ahead and start.

C: Sure.
C: Hi, thanks for coming to SIG-Storage. My name is Sheng Yang; I'm working for Rancher Labs, and for the last few years I've been working on this open source, distributed block storage software for Rancher Labs called Longhorn. So today I'll present Longhorn to you and tell you more about it. Sorry, I probably caught a bit of a cold last night, so please bear with my voice, and also, if you have any questions, feel free to interrupt me
C: during my presentation. All right, so we started Longhorn in late 2014 at Rancher Labs, I think around September, and the motivation when we started was this: we wanted an open source, distributed block storage software for containers, but what makes this one different is that we wanted it to be simpler. Simpler in the sense that it should be simpler than Ceph, which, as we know, is basically the most popular open source storage solution out there.
C: We are not really Ceph experts, but we have seen many users using Ceph, and they found it difficult to operate; it requires certain knowledge to really operate Ceph correctly, and that is why we started Longhorn. Longhorn itself has been adopted as one of the storage backends of another vendor's product back in March 2017, and I think that is one of
C: the proofs that Longhorn is really on track, targeting enterprise-grade storage software. This technology has been adopted by other companies, and they use it for their own products, and that also demonstrates our embrace of the open source model. All of Longhorn's code is licensed under Apache 2.0, and if you want to know more about the licensing and the external library dependencies, you can check the documents in our PR to the CNCF TOC.
C: So if we can get away without building something ourselves and still provide value to the users, we found that the storage software can be much simpler. We also use proven Linux storage features like sparse files, and we're planning to do QoS, which would use cgroups, in the future. That made it unnecessary to rebuild and redo our full stack from the ground up.
C: We utilize mature, existing technology for a lot of features rather than just writing them by ourselves. In Longhorn's model, each volume is just a set of independent microservices, and they are now orchestrated by Kubernetes. Longhorn's management plane, the Longhorn manager, runs entirely on top of Kubernetes and follows the Kubernetes controller model: we write a bunch of controllers, and the controllers orchestrate the flows of creating, deleting, and operating Longhorn volumes. So currently Longhorn is mostly Go;
C: most of the code is written in Go. The current functional code, excluding the testing part, is about 30,000 lines of Go code, and that includes the data plane, which is the Longhorn engine, and the management plane, which is the Longhorn manager. I will talk more about the architecture of the data plane and the management plane later.
C: So here is an overview of where the project currently stands. We have submitted the sandbox PR to the CNCF TOC, and currently Longhorn has about 600 GitHub stars. We have made about twenty-three releases since we moved everything onto Kubernetes, and currently we have 200-plus members in the Longhorn storage channel on the Rancher Slack.
C: One thing I want to emphasize is that our 600-plus GitHub stars are purely organic. For the last few years, since Longhorn is still a product in the alpha stage, we haven't spent much marketing effort on it. Basically, we make an announcement once in a while, one or two tweets from the official Rancher Twitter announcing the new releases, or a new demo coming, or a new master class coming, something like that, but other than that we haven't really spent much
C: on marketing. That will come when we are trying to launch our full marketing campaign, but for now you may have heard of the project through the Rancher projects like k3s, RKE, and Rio; we don't really have much marketing beyond that. But once we reach beta and GA, we will tell you more, and we expect this number to grow substantially.
C: So the first thing is, we rewrote the whole thing. The first implementation was basically scrapped in early 2016, because we realized the first implementation was way too complex, partly written in C and C++, so we basically just got rid of it, started from scratch, and wrote everything in Go. That was 2016, and in 2017 we officially announced the project. But in fact, when we first built Longhorn in 2016,
C: what we were targeting was really Docker; sorry, not Rancher, but Docker. So in 2017, with the first version we announced, we saw the start of a community, and it really ramped up very fast.
C: Then we basically did a rewrite of the management plane again, because we saw that we could utilize Kubernetes for many more of Longhorn's capabilities, so we basically just rewrote the management plane and focused solely on Kubernetes. That is how you get to the architecture you're going to see: right now it is basically based solely on Kubernetes. Since we fully rewrote it again, targeting Kubernetes, it has been about one year; in fact, those twenty-something releases all happened
C: within that roughly one-and-a-half-year period. So I think our progress has been pretty decent, but the thing is, we really want to make sure that users can trust Longhorn with their data, because storage is really, really important. The worst thing is not that one of your volumes goes offline; that is really bad, but the worst thing is that you somehow lose data. So we really want users to be able to rely on it.
C: We have had much great user feedback, but we also want to make sure that, with many users having tried it, there is no case we are aware of where Longhorn lost their data. In fact, on GitHub there was one case where a user externally deleted one replica which he thought was faulty, but what had really happened was that it contained the last copy of the user's data for his volume. That is the only known data loss to date, and we patched it up:
C: we basically now say that even if a replica is marked as failed, we don't allow the user to delete it if it contains the last piece of the volume's data. So those are some of the efforts we have put into usability, and into making sure things are stable; that is why it is taking a really long time. And of course we also rewrote the frontend along the way, which is another long story if you want to talk about it, yeah.
C: All right, so, continuing. Currently Longhorn aims to be enterprise-grade distributed block storage software, and Longhorn offers built-in snapshot support as well as built-in volume backup and restore support. The difference between a snapshot and a backup here is that a snapshot is made in the cluster and stays in the cluster:
C: whenever you make a snapshot, it stays in the cluster. But when you do a backup, we allow you to back your volume up to a third party like S3, an S3-compatible object store such as Minio, or NFS. This way, even if users lose the whole cluster, they still have access to their data. In Longhorn, this is one
C: part that differentiates us from many other solutions: we provide the backup and restore ourselves, and we do it in an incremental way, because we think offsite backups are important for the safety of the users' data, so we want to provide first-party support for that. So this is one key point of what makes Longhorn different. Another point is that Longhorn can currently do live upgrades without downtime, even on the data plane; I will explain more about
C: how we did that later. We also support cross-cluster disaster recovery with defined RTO and RPO; that is also achieved with the help of our backup store, which is the location you back your volumes up to. Longhorn also provides an intuitive UI; in fact, that is the first thing many users notice about Longhorn.
C: We are using the controller pattern and CRDs for the management plane, and you can install Longhorn using just one line, a kubectl apply or a Helm installation, and Longhorn runs on any Kubernetes cluster. One thing to make note of here: when we say you can do a one-line install with a single kubectl apply of a YAML, we really mean it, because there are many devils in the details. Many storage vendors claim that
C: okay, you can just use a one-line install, that you can just kubectl apply a YAML, but then you have to choose all kinds of options: tell it what the Kubernetes version is, what the driver is, and what options you want, and you have to fill all of that in before they generate the YAML file for you.
C: So we spent much effort making it easier and more accessible for the user through automatic detection. We basically built in automatic detection of your environment, especially on the driver part. For example, for CSI we are going to deploy a different driver version
C: depending on what your Kubernetes version is, and if your cluster is too old to use CSI, we are going to deploy the Flexvolume driver instead. In each case we are going to detect what the correct directory is for Longhorn to install the driver into, so your Kubernetes can connect to the volume correctly.
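The detection logic just described can be sketched roughly as follows. This is a minimal illustration under assumptions, not Longhorn's actual installer code, and the version cutoff used here is made up for the example:

```python
def parse_minor(version: str) -> int:
    # "v1.14.3" -> 14
    return int(version.lstrip("v").split(".")[1])

def choose_driver(k8s_version: str) -> str:
    # Deploy the CSI driver on newer clusters; fall back to the Flexvolume
    # driver on clusters too old for CSI. The cutoff minor version here is
    # illustrative only.
    return "csi" if parse_minor(k8s_version) >= 14 else "flexvolume"
```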
C: So you can see it here: this is what a volume looks like with Longhorn's data plane, and if we have multiple volumes, we are going to just start multiple replicas and engines, and everything will be the same. Another benefit of this architecture is that the data paths are isolated between the different volumes: if anything happens to the data path of one volume, the other ones are not really going to be affected.
C: Conceptually we designed them to be separate instances, and in the current implementation in Longhorn, each of them is a separate pod: basically, we start one pod per engine and one pod per replica. But as you can realize, that will quickly become a problem, because Kubernetes has a limitation of 110 pods per node, and we already have some users hitting the resulting volume limit per node. So in the next release we are doing a re-architecture:
C: we are going to rework how the engine and replica are started, and in the next release we are going to start them as processes instead of containers, instead of pods: one pod will contain multiple processes. On each node, one pod contains the engines and another pod contains the replicas, and inside that pod the replicas are still independent, separate processes, and the engines are also separated.
B: GlusterFS had the same model, where it had a process per volume, a process per brick. In the end, when I was part of the project and we tried to containerize it and so on, we noticed that it was consuming a lot of memory for many thousands of volumes, so instead they did what they call multiplexing; in other words, a single process was able to handle many volumes.
C: Yeah, definitely. So basically, at first we were doing it with pods, because that seemed the obvious choice when everything runs on Kubernetes, and then we hit the 110-pod limit, so we decided to run them as processes. In fact we also thought about multiplexing, but we were not sure how complex that would be, because we would need to multiplex the I/O into the same process.
C: That would take much more effort than just running all our existing code using a single instance for each engine, but yeah, I definitely think that is something we need to consider if we are designing for, say, thousands of volumes per node. I think right now
C: per node we are talking around some hundreds, because we have block storage, but if we are talking about more than that per node, then of course we are going to consider how to do multiplexing and async I/O so that one process can handle more requests. Of course that will save memory and be more efficient, but at this moment we have a single process handling each volume, and we are going to take that into consideration if, in the future, we need to meet higher scale.
D: One additional related comment: independently of how many of these volumes you have, essentially to provision a piece of block storage you are consuming RAM and CPU as well, and those are vastly more expensive than the actual storage. So that is the other motivation, irrespective of storage, because there is so much cost tied up in the RAM and CPU.
C: So I think currently the RAM consumption is okay, but the CPU, sometimes when you really push it (of course we ran some pressure tests and benchmarks), the CPU utilization is something we need to deal with. So in fact we talked a lot about whether we want to keep a single instance, or multiple instances, or one instance handling multiple requests. Again, for now,
C: we just wanted each piece to be simpler, and we want it to be at least reliable at the current stage, because we have spent much effort on it and users have tried it many times, so at least the current state is stable for now. But in the future, of course, if it is needed, we definitely may have to change to that model, if it is really needed for the larger scale.
C: This works very similarly to that: everything is synchronous. On the replica part, we do synchronous replication; each replica is supposed to be the same as any other replica. So when any instruction is sent by the engine to the replicas, the engine will wait for the replicas to confirm the write before it responds back to the block layer to say that this block has been written.
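A minimal sketch of that synchronous write path (hypothetical classes, not Longhorn's real engine code): the engine acknowledges a write to the block layer only after every healthy replica has confirmed it.

```python
class Replica:
    def __init__(self, name):
        self.name = name
        self.blocks = {}  # offset -> data

    def write(self, offset, data):
        self.blocks[offset] = bytes(data)
        return True  # ack back to the engine

class Engine:
    def __init__(self, replicas):
        self.replicas = list(replicas)

    def write(self, offset, data):
        # Send the write to all replicas and wait for every ack before
        # acknowledging to the block layer; replicas that fail to confirm
        # are cut off (and would then be rebuilt by the manager).
        self.replicas = [r for r in self.replicas if r.write(offset, data)]
        if not self.replicas:
            raise IOError("no healthy replicas left")
        return "acked"
```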
C: If a replica doesn't respond, or doesn't confirm the write within a certain time limit, the engine will just cut it off, and the manager will start a new replica and start the rebuilding process. Yeah, we know the replication is definitely the most data-intensive and also the most CPU-intensive part, so that is something, at least for now, we haven't optimized further.
C: If we built something like locality awareness, it would mean the behavior would definitely be different between the local replicas and the remote replicas, and there would be much more that needs to be dealt with in that area. So I think for now we are just going to buy into this write amplification problem and see if we can improve it in the future.
E: A couple more questions. Excuse me: can the pods that consume a volume only be scheduled to one of the nodes that the replicas exist on?

C: Oh, no.
C: So basically, the nodes that provide storage to Longhorn don't have to be the same nodes that consume the storage. Basically, we deploy across the whole cluster, and the replica doesn't need to be on the same node as the consumer; it is the engine that has to be on the same node.

E: Got it.
C: The disk is discovered by the user: the user just needs to specify which path on the node's local file system the new disk is mounted on. We do have some error detection built in, in case a user ends up double counting; of course, we don't want to have a situation where you are, say, using the same file system for two different directories. But basically, that is what the user needs to do to add a disk.
B: Because, again, this is very close to Gluster, and Gluster has client-side replication, and one of the issues with client-side replication, specifically with replica two, is that you may get a lot of split brain. So one of the things they wanted to do in Gluster is server-side replication, like Ceph does server-side replication, and that way the server can decide when to send the replicas and how to log the replicas and so on.
C: In Gluster the client has to be running on every node to provide service, but for volumes, Kubernetes is mostly read-write-once, and as a block device, the block device provided by Longhorn is a read-write-once type of storage service. So we only ever provide the storage on one node, and in that sense the engine is the only one on that node: there is only one engine, and there are no other engines connected to the replicas, so split brain is not really a problem for us.
D: Yeah, I think the split brain actually is independent of whether it is client-side or server-side replication; you have similar problems in both cases. I don't want to go into too much detail now, but you can imagine many different failure cases involving the network connection.
C: So currently, the first step, the current status, is that the single point of truth is the engine. The second thing is whether the failure can be detected: basically, if the engine thinks a replica is bad, then at least we know that, and we also know which replica was the last one to receive the write command from the engine, so we are good.
C: When that happens, we have the failure handling, or failover, sorry, failover. Basically, if the engine thinks a replica is bad, of course it is going to be marked as down, and we have another mechanism: when the engine goes down and the volume is marked as faulted, we can take a look into the replicas and try to figure out which one was really the most recently written and which one contains the most data, the most recent data.
C: That one will be chosen as the source for the data, and rebuilding can be started from it. But I perfectly understand that these are really complex problems, and we are trying our best to get it working, including the case where two replicas have both failed.
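The salvage step just described, picking the replica that most recently received a write as the rebuild source, can be sketched like this; the `last_write` counter is a hypothetical stand-in for whatever per-replica metadata the manager actually inspects:

```python
def choose_salvage_source(replicas):
    # replicas: list of {"name": ..., "last_write": ...} records. The one
    # with the highest write counter holds the most recent data and becomes
    # the source that the other replicas are rebuilt from.
    return max(replicas, key=lambda r: r["last_write"])
```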
C: Not really; we don't have that concept here, so yeah. Okay, all right, wow, that's 40 minutes already, gosh; okay, I think I'm probably going to skip a few slides. Okay, so that's about the Longhorn engine. Now let's talk about how the management plane, the manager part, works. Of course, Longhorn is running on top of a Kubernetes cluster, and when, in the Kubernetes cluster, you have a volume, a PersistentVolume, created and assigned to one pod, the Kubernetes cluster
C: will talk with CSI, and CSI is going to talk with Longhorn's CSI plugin, which in turn calls the Longhorn API on the Longhorn manager. The Longhorn manager, as I said, is the one that orchestrates all the volumes. So whenever you create a new volume, the Longhorn manager's API part will create a new volume object in the Kubernetes API server using a CRD, and the creation of the new object will be picked up by the controllers in the Longhorn manager as well, and the controller will see:
C: okay, there is a new volume; this volume was created, and it needs to be attached to some node. The controller will start the engine and the replica processes, deal with all of that, and present this Longhorn volume to the node for the pod. And if we have more volumes, we are going to just create a few more sets of engines and replicas to provide the service for those volumes.
C: Another way to access the Longhorn manager is, of course, through the Longhorn UI. The Longhorn UI complements the functionality: create, attach, detach, and so on, and currently the Longhorn UI can do basically everything in Longhorn's feature set. It provides a dashboard, snapshot and node management, backup and restore, and some more features like cross-cluster replication, and we are also working on a volume snapshotter and on block device support.
C: So this is one example of how Longhorn uses the Kubernetes controller pattern to operate the volumes. For example, we have four nodes here, and nodes one, two, and three each have a replica running, with an engine connected, and the whole system is fine. If we somehow lose node three, the engine will immediately detect that replica three lost the connection, and the engine will mark replica three as failed, and the manager will see that and remove replica three from the engine.
C: You can see on the right side that the volume is supposed to have three healthy replicas but currently only has two, since it is only running on two. So the manager will also see: okay, there is another node, node four, which we can put the replica on. So the manager will start a new pod with the replica (or a new instance, in the later releases) and attach it to the engine, and the engine is going to see: okay, so now I have two replicas, but
C: I'm supposed to have three, and the new one is replica four. The engine will connect to replica four and start the rebuilding process. Once the rebuilding process is completed, replica four changes to the healthy state and everything is recovered, and at this time the volume status shows the current healthy replica count as three, which matches the desired state for the number of replicas, so everything is back to normal. All right.
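That recovery flow is an instance of the controller pattern: compare the desired replica count with the observed healthy replicas and schedule replacements on nodes that don't already hold one. A minimal sketch under assumed data shapes, not Longhorn's actual controller:

```python
def reconcile(desired, replicas, nodes):
    # replicas: list of {"node": ..., "healthy": ...}; nodes: schedulable
    # node names. Returns the nodes where new replicas should be started
    # so the healthy count converges on the desired count.
    healthy = [r for r in replicas if r["healthy"]]
    used = {r["node"] for r in healthy}
    free = [n for n in nodes if n not in used]
    need = desired - len(healthy)
    return free[:max(need, 0)]
```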
C: The way the CRDs reflect the state: the state is observed from the engine, so the engine is still the single point of truth in this case, yeah. But what we observe in the engine, we are going to store in the CRD, like in the engine status and, like I said, here in the replicas list.
C: So I don't know if I have time to go over the engine under the hood, so let me just go through this part quickly. As I said, Longhorn in the end uses Linux sparse files to store the differencing disks internally; they have a 512-byte block size, and reads are lazily filled. Let me explain how that works. Actually, for this, I think probably many of you already know how this works; it is a very standard way to handle snapshots.
C: For example, here we are handling the data on the basis of snapshots. Live data always has the highest priority, so when we read block one, we read it from the highest priority that has it, which is the live data, and when we read block zero, because the live data has no data in block zero, we check if the data is in the newest snapshot, find the block there, and read from there.
C: Blocks two and seven are from the live data, block three is from the newest snapshot, block five is from the older snapshot, and block six is from the live data. If we write a new block, say the user now writes a new block into the volume and that block is block five, we are going to update our index, remove the mapping from the original position, and redirect it to the live data, so the next time we want to read block five, we read it from the live data.
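The read and write paths over the snapshot chain can be sketched as follows. This is the generic copy-on-write lookup being described, with Python dicts standing in for the per-snapshot block indexes:

```python
class SnapshotChain:
    def __init__(self, snapshots):
        # snapshots: oldest-first list of {block_index: data}
        self.snapshots = snapshots
        self.live = {}  # live data always has the highest priority

    def read(self, block):
        if block in self.live:
            return self.live[block]
        for snap in reversed(self.snapshots):  # newest snapshot first
            if block in snap:
                return snap[block]
        return b"\x00"  # never-written blocks read back as zeros

    def write(self, block, data):
        # New writes land in the live data; the lookup order above then
        # redirects future reads of this block away from any snapshot copy.
        self.live[block] = data
```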
C: The index is built internally from the sparse files: there is a function call called fiemap with which you can get the layout of a sparse file. That is why Longhorn requires the underlying file system to be ext4 or XFS, which support sparse files. If the underlying file system used by Longhorn cannot tell us the layout of the sparse file, we have no way to know where the data is.
C: Okay, so the cache, in fact, is in the replica. The thing is, the cache you see is in memory; every time the engine wants to read something, it just reads from one of the replicas, and the replica has the responsibility of keeping track of where the block should be and which snapshot it should be read from. So that in-memory cache is kept in a map by the replicas, but we don't store it physically on disk.
A: Just a quick question: doesn't that imply quite a large memory overhead? Because if you had, you know, a volume of a couple of hundred gigs in size, for example, doesn't that mean that you end up with millions, if not hundreds of millions, of keys in the index that need to be in memory?

C: Yes.
C: So this 512-byte block size here is chosen to align with the qcow size, because we also support using a qcow file as the base image for your volume, and when you use a qcow file you have to align with it, so we basically just use 512. But yeah, I think it's a good idea:
C: if we could make this two K or, you know, four K, and it probably could go even bigger, but we need to measure what the overhead is and compare what the memory usage is to decide what the optimal block size is. The thing is, the block size has to be fixed for one volume; otherwise, we are not going to have a very good time trying to figure out where the location of each block is.
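To put rough numbers on the trade-off just discussed (my arithmetic, not figures from the talk): the size of the in-memory index grows inversely with the block size.

```python
GiB = 1 << 30

def index_entries(volume_bytes, block_size):
    # One map entry per block: a 512-byte block size needs eight times as
    # many entries as a 4 KiB block size for the same volume.
    return volume_bytes // block_size
```

For a 200 GiB volume, 512-byte blocks mean about 419 million index entries, versus about 52 million at 4 KiB.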
C: The backup basically keeps pointers to the backup blocks. For example, if you see the green boxes coming from snapshot two: it has references to three blocks, while the orange block is from snapshot one and the purple blocks are from snapshot two. When we do a backup for snapshot three, we will see that snapshot three only differs from snapshot two, which has already been backed up, so for this snapshot we only have the changed blocks to handle.
C: So what really happens is, for this snapshot we just copy the metadata of the previous snapshot's backup, plus the two changed blocks, and update the references of the first and second blocks to the blocks we copied from snapshot three. That is how we implement the incremental backup, and with the disaster recovery volume feature, our restore is incremental as well. So that is basically how we do the backup of a snapshot, and how we restore it.
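A sketch of the incremental step described above, with content hashes as hypothetical block IDs (Longhorn's actual block naming may differ): only blocks whose ID changed since the previous backup are uploaded, while the new backup's metadata still references every block.

```python
import hashlib

def incremental_backup(prev_meta, snapshot_blocks):
    # prev_meta: {block_index: block_id} from the last backup in the store.
    # snapshot_blocks: {block_index: bytes} of the snapshot being backed up.
    new_meta, to_upload = {}, {}
    for idx, data in snapshot_blocks.items():
        block_id = hashlib.sha256(data).hexdigest()
        new_meta[idx] = block_id          # every block keeps a reference
        if prev_meta.get(idx) != block_id:
            to_upload[block_id] = data    # only changed blocks leave the cluster
    return new_meta, to_upload
```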
C: If you mean the read cache, the read index, right, we are not using that part. When we do the backup, we are going to look into the layout of each snapshot, and of course this only works if the previous snapshot that was backed up still exists; if the previously backed-up snapshot has been deleted, it will not work.
C: We still look at the new sparse file using the fiemap call and get the layout of the snapshot; the snapshots don't change, so there is no race condition or anything. Once we get the layout, we compute which 2 MB blocks we need to copy, back them up, and update the references in the new backup.
C: Right, so the next one is just how we lay the backup out in the backup store. It's very simple: we have a configuration for one volume, the volume has two snapshot backups, and it has five blocks, and basically the backups just store the references to the blocks. All right, so this is the last slide, I think: this is how we do the live upgrade. We have a UNIX domain socket connecting the frontend with the engine, and what happens
C: if you want to upgrade the data plane is, we are going to start another set of engine and replicas, and we are going to use the same disk, basically the same location for the data, and make the new replicas point to it, and we are going to just switch over: we wait for the previously issued writes to be completed and immediately switch to the new engine, and after the switch is done, the old engine can be gotten rid of.
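The switch-over just described can be sketched as follows; this is a hypothetical structure, and in the real system the handoff happens over the UNIX domain socket between the frontend and the engine:

```python
class Frontend:
    def __init__(self, engine):
        self.engine = engine
        self.inflight = 0  # writes issued but not yet completed

    def upgrade(self, new_engine):
        # The new engine has already been started against the same on-disk
        # replica data. Drain outstanding writes, then switch atomically;
        # the old engine can be torn down afterwards.
        assert self.inflight == 0, "drain in-flight I/O before switching"
        old, self.engine = self.engine, new_engine
        return old
```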
C: I also contributed a few kernel patches to TCMU to make the speed much faster, because previously it was doing the reads and writes in a synchronized way. So we tested it, but it was not really ready for production use, and in the end we decided to go with TGT, because
C: the patches are in the kernel, and if you patch the kernel, it takes years to reach downstream, like the common distributions, so we don't want to create a barrier to entry for users at that point. Also, any bug you find in the kernel takes many months, at least, to reach the downstream distributions. So we figured, okay, we will just go with the user-space solution here and make sure more users have access to Longhorn.