From YouTube: PDXRust July 2016: Andy Grover - Froyo
Description
PDXRust
More on http://www.meetup.com/PDXRust/events/230723873/
Help us caption & translate this video!
http://amara.org/v/2Fi5/
So I'm going to talk today about a Rust project that I've been working on. My day job is as a Red Hatter, and I work on storage stuff: LVM, block devices, that sort of thing. I've been really excited by the possibilities Rust has for low-level systems programming. I've written a lot of C, and I'm tired of writing C; I'd like to write more Rust, especially because one line of Rust does so much more than the equivalent line of C. Just in terms of productivity, I find myself being a lot more productive in Rust.
This project that I've been working on, kind of on the side, is called Froyo. It's storage-related.
This talk is going to cover the project a little bit, but it's also going to be about the different Rust things it's doing and the capabilities it's using, so even if you're not necessarily interested in Froyo specifically, there will be a lot of examples that might be applicable to whatever code you're writing as well.
In addition, Froyo right now is around 5,000 lines of code, and that's including whitespace. One thing I've found with Rust is that there are a lot of small examples, and then there are big examples like Servo, but there aren't many medium-sized projects. So if you're interested in seeing how different things work, you can see them in action, being used in a medium-sized project. That's another reason this might be interesting, even if you're not particularly interested in storage.
It should just do all that for you. The only interactions should be: you give it data, it returns the data, and if it's getting short on space, it lets you know and says, feed me more disks. We're trying to be as simple as possible here, but being that simple under the covers, of course, can be hard.
On Linux there already exists a really awesome set of capabilities called device mapper, which lets you take a block device and divide it up into smaller block devices that you can use for whatever; or you can take multiple block devices and glue them together; or you can take multiple block devices, glue them together, and have them provide redundant storage, like RAID. One of the more recent capabilities is thin provisioning.
So you have a block device that can look like an infinitely big block device to the file system, which kind of abstracts the actual storage available away from what the file system thinks is there and what the user has to deal with. We're going to combine all these different tools and hopefully put them together in a way that makes things easy for the user.
Have you heard of copy-on-write file systems? It's the notion that in a traditional file system, you give it a gigabyte, the file system thinks there's a gigabyte, and if you write at offset zero you're actually writing at offset zero; the backing store really is as big as the file system thinks it is. With thin provisioning, you tell the file system, "I have one terabyte," but you only actually allocate the space when the file system actually writes to a given sector.
So it's kind of like virtual memory — it's virtual storage. You're not actually allocating the storage until it's actually used by the file system, or by the user writing files to the file system. Of course, the downside is that if the user thinks they have a terabyte, and they write a terabyte, but they don't actually have a terabyte of storage underneath, then bad things happen. So that's something we need to worry about.
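The lazy-allocation idea above can be sketched in a few lines of Rust. This is a minimal illustration, not Froyo's or device mapper's actual implementation: a pool hands out physical blocks only on first write to a virtual block, and errors once the real storage is exhausted. All names here are made up for the example.

```rust
use std::collections::HashMap;

// Hypothetical sketch of thin provisioning: virtual blocks map to
// physical blocks only when first written.
struct ThinPool {
    physical_free: Vec<u64>,  // free physical block numbers
    map: HashMap<u64, u64>,   // virtual block -> physical block
}

impl ThinPool {
    fn new(physical_blocks: u64) -> ThinPool {
        ThinPool {
            physical_free: (0..physical_blocks).collect(),
            map: HashMap::new(),
        }
    }

    // Allocate backing storage lazily, on first write to a virtual block.
    fn write(&mut self, vblock: u64) -> Result<u64, &'static str> {
        if let Some(&p) = self.map.get(&vblock) {
            return Ok(p); // already backed; reuse the mapping
        }
        match self.physical_free.pop() {
            Some(p) => {
                self.map.insert(vblock, p);
                Ok(p)
            }
            None => Err("out of space: feed me more disks"),
        }
    }
}

fn main() {
    // The pool advertises a huge virtual size but has only 2 physical blocks.
    let mut pool = ThinPool::new(2);
    assert!(pool.write(1_000_000).is_ok()); // lazy allocation works anywhere
    assert!(pool.write(7).is_ok());
    assert!(pool.write(8).is_err()); // a third distinct write overcommits
    println!("thin pool sketch ok");
}
```

The failure case at the end is exactly the "they write a terabyte they don't have" situation: the virtual address was fine, but the physical backing ran out.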
We'll talk more about that. So we're using the Linux device mapper framework, and we're putting a file system on top. The most bulletproof one that I know of right now is XFS, which is good for big files and lots and lots of files, but it could be a different file system down the road, who knows. And we're also using D-Bus — who here is familiar with D-Bus? A couple.
D-Bus is inter-process communication: you can enumerate services that are available, you can make method calls, and there are properties and different things, so different, unrelated processes, written in different languages, can communicate with each other. It's used very heavily for Linux desktop integration; that sort of thing all uses D-Bus heavily. The great thing about D-Bus from our perspective is that it's language independent.
So if I'm writing my thing in Rust, that doesn't necessarily mean that clients — anything that wants to use the Froyo API — need to be written in Rust; they can be written in whatever. That's really cool, and in fact we go so far that the Froyo command-line tool doesn't do its own thing with some separate API hanging off: it uses the same D-Bus API that we're exposing. That's kind of a way to guarantee that the API is solid enough to actually be used by different things. Last of all, of course, it's written in Rust. It just creates a binary, though, right? So for the interface we don't care — it could be whatever. But of course we do care when it comes to the implementation of things.
What Froyo does to get around that is to create multiple RAID stripes, which would be hard to manage if you're a human being; but since we're a computer program, we just have to build in handling of multiple RAIDs. We can create that redundancy layer, and then we can build on top of that redundancy layer.
And this is how we do it. We take the redundant storage that we got by building the RAID layers, and we create one of those thin provisioning pools that I was talking about earlier; then on top of that we create a thin volume, and then we put the actual file system that the user will be using on top of all that. This is pretty complicated, and you wouldn't necessarily enjoy doing this as a user.
We have linear devs. Like I said before, these all need to be the same size when you're creating a RAID out of them, but you're going to have multiple same-sized chunks from multiple different disks that you're going to combine to create the raid dev. So we have another data structure, the raid dev, which keeps track of all the linear devs and is in charge of talking to device mapper and creating the actual RAID block device that we're going to be building the higher layers upon.
Once you get above the raid dev, you don't have to worry about any of the underlying stuff. You just have a new set of block devices, but you know that they're redundant under the covers. So if one of them fails, you know that the layers underneath are going to make sure that no actual data gets lost.
As you go up, you have another Rust data type that's in charge of each layer. You have these raid devs, but of course you have multiple of them, and what the thin pool layer wants is contiguous block devices to build its stuff on top of. So you actually have to use a linear device to glue those back together, because they could be spread across multiple raid devs.
Then you have those apparently contiguous block devices, and you can make the thin pool on top of that, and then the thin device on top of that. In terms of the Rust objects, they all need to know about each other and be able to talk to each other. You know, the RAID layer needs to find out how much disk space is available on the block devices, to see if there's room for another raid, that sort of thing.
So there's a lot of linking between these different layers, and since in Rust we don't want to use bare pointers, we use a lot of ref counting to do that. The blockdev has a list of the linear devs that are allocated out of it; it's the one that knows which spaces are in use. So if we want to create another raid, it can either say yes, this space is still available, or no, it's not available.
And then, of course, we can build this structure of mapped devices when the user requests them, but we need to be able to rebuild this structure and get all the mappings back the way they were if the user, I don't know, decides to reboot their computer ever. So from each of those block devices, Froyo takes a chunk of space and saves the current configuration as metadata. The format it uses right now is this.
It has a little bit of a binary header, because of the way that disks work: you pretty much always know that if you're writing 512 bytes, you're going to get either the entire 512 bytes written or none of the 512 bytes written if the system crashes. But if you're talking about anything longer than that, you could get partial writes, and that's bad. So the system that Froyo uses is two areas.
You have a small binary header that points to two different metadata areas, which store the information in JSON. One of them gets updated, and then the header gets updated — which you know is going to be atomic — and then the next time it's time to update, the other one gets updated, and you update the header again.
So you're never overwriting your good metadata: you're always putting it in a new place and then writing the header to make sure that it's atomic. We use JSON for the layout — not that it's particularly space-efficient, but disks are so big these days that if you take a megabyte or two megabytes and write it out in text format, it's still a drop in the bucket. So let's use the friendlier text format.
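The alternating-slot scheme just described can be sketched in miniature. This is not Froyo's actual on-disk layout — the two strings stand in for the JSON metadata areas, and the index field stands in for the single-sector binary header whose write is assumed atomic:

```rust
// Sketch of A/B metadata slots: writes go to the non-current slot,
// and flipping the header index is the atomic commit point.
struct MetaDevice {
    slots: [String; 2], // stand-ins for the two JSON metadata areas
    current: usize,     // stand-in for the small binary header
}

impl MetaDevice {
    fn save(&mut self, json: &str) {
        let next = 1 - self.current;
        // This write may tear on a crash, but the *old* slot is untouched.
        self.slots[next] = json.to_string();
        // A single-sector header update: atomic, so commit happens here.
        self.current = next;
    }

    fn load(&self) -> &str {
        &self.slots[self.current]
    }
}

fn main() {
    let mut dev = MetaDevice {
        slots: [String::new(), String::new()],
        current: 0,
    };
    dev.save("{\"gen\":1}");
    dev.save("{\"gen\":2}");
    assert_eq!(dev.load(), "{\"gen\":2}");
    // The previous good copy still exists in the other slot.
    assert_eq!(dev.slots[1], "{\"gen\":1}");
    println!("metadata slots sketch ok");
}
```

If a crash happens mid-`save`, only the not-yet-committed slot can be garbage; `load` still returns the last committed copy.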
It's written to all the block devices, at the head and the tail, and it's got timestamps. So whatever happens, if some subset of the disks comes back, you should always be able to know the complete layout of everything and put things back together — or, if you can't put things back together, you know what's missing, so that you can tell the user.
Serde has macros: when you're defining a struct, you just say derive serialize, derive deserialize, and it does it from there. Unfortunately — remember what I was saying before about everything having Rc and RefCell links to all the other layers — Serde has a problem with that, because you get loops: you point at your parent, and then your parent is pointing back, and it just goes around and around, and Serde doesn't support that. So what Froyo does is this.
It has a second set of mirror structs, for each of the things we want to save to JSON, that you can convert to and that don't have those loops in them. Instead of having a parent pointer, you might keep track of the parent's UUID; then, when it comes time to deserialize, some code has to be written to take those UUIDs, match them up, and re-link things when you're recreating the in-memory objects.
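The mirror-struct idea can be sketched like this. The type names and fields are illustrative, not Froyo's actual definitions: the in-memory type holds an `Rc` to its parent, the save type holds only the parent's UUID, and deserialization re-links through a lookup table.

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

// In-memory: the child holds an Rc to its parent, which a serializer
// can't follow without looping.
struct RaidDev { id: String }
struct LinearDev { id: String, parent: Rc<RefCell<RaidDev>> }

// On-disk mirror struct: replace the Rc with the parent's UUID.
#[derive(Clone, PartialEq, Debug)]
struct LinearDevSave { id: String, parent_id: String }

fn to_save(l: &LinearDev) -> LinearDevSave {
    LinearDevSave { id: l.id.clone(), parent_id: l.parent.borrow().id.clone() }
}

// Deserializing: look the UUID back up and re-link the Rc.
fn from_save(
    s: &LinearDevSave,
    raids: &HashMap<String, Rc<RefCell<RaidDev>>>,
) -> Option<LinearDev> {
    raids.get(&s.parent_id)
        .map(|r| LinearDev { id: s.id.clone(), parent: r.clone() })
}

fn main() {
    let raid = Rc::new(RefCell::new(RaidDev { id: "raid-uuid-0".to_string() }));
    let lin = LinearDev { id: "lin-uuid-0".to_string(), parent: raid.clone() };

    let saved = to_save(&lin);
    assert_eq!(saved.parent_id, "raid-uuid-0"); // loop-free, serializable

    let mut raids = HashMap::new();
    raids.insert(raid.borrow().id.clone(), raid.clone());
    let rebuilt = from_save(&saved, &raids).unwrap();
    assert_eq!(rebuilt.parent.borrow().id, "raid-uuid-0"); // re-linked
    println!("mirror struct sketch ok");
}
```

Only `LinearDevSave` ever needs to derive Serialize/Deserialize; the cyclic in-memory graph never meets the serializer.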
Another thing that's, I guess, kind of a good rule of thumb: you have things that are ordered and things that are unordered. The order of segments for a volume — you'd better get that order right, because volumes don't really like it when you take this part of the disk and pretend that it's over here, that sort of thing. But the disks themselves are not ordered, and RAID volumes are not ordered.
A raid could start out as the first thing on the disk, but through some manipulations it could end up on an entirely different disk, and that segment could be the second RAID volume on the disk. So one thing I found really helpful is to think about that, and to use either a map or a struct — or the JSON equivalents — in a way that makes it impossible to misuse them.
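The ordered-versus-unordered distinction can be made concrete. This is a small illustration of the rule of thumb, with made-up data: segments of a volume go in a `Vec` because position matters; disks go in a map keyed by UUID because position is meaningless.

```rust
use std::collections::HashMap;

// Let the container imply the semantics.
fn main() {
    // Ordered: segment at offset 0 must come before the one at offset 100,
    // so a Vec preserves exactly the ordering we must not lose.
    let segments: Vec<(&str, u64)> = vec![("seg-a", 0), ("seg-b", 100)];
    assert_eq!(segments[0], ("seg-a", 0));
    assert_eq!(segments[1].1, 100);

    // Unordered: disks are looked up by UUID, never by position,
    // so a map makes "which disk is first?" an unaskable question.
    let mut disks: HashMap<&str, &str> = HashMap::new();
    disks.insert("uuid-1", "/dev/sdb");
    disks.insert("uuid-2", "/dev/sdc");
    assert_eq!(disks["uuid-2"], "/dev/sdc");
    println!("container semantics sketch ok");
}
```

Starting with a `Vec` everywhere and migrating later works too; the compiler walks you through every use site when the type changes.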
On D-Bus: we have the traditional Unix IPC mechanisms like named pipes and FIFOs and that sort of thing, and D-Bus is built on top of those. But it is actually pretty complex: you have objects; an object can have multiple interfaces, each of which has methods, and properties that can be read-write or read-only. Luckily, there is a Rust library called dbus-rs, which does a lot of that stuff for us.
Unfortunately — because of everything that Froyo is doing, because of the complexity of D-Bus, and because of the way that dbus-rs uses the builder pattern in kind of an interesting way — this is the hardest code to look at. There are a lot of closures and cloning and that sort of stuff, just because you need to reference things in each method they're attached to.
Other libraries we use: there's a library called clap, which is great for command-line parsing. I like the command/sub-command way of structuring my command-line parsing, and it handles that quite well. It handles other styles too, the option style, and it also uses the builder pattern: you build this massive, very indented thing, but then everything gets taken care of for you from that point. We also use...
...the wonder of libraries: I don't have to worry about that, it works. Some other libraries that we use: we use the crc library to put CRCs over the JSON metadata I was talking about before, to make sure it doesn't get corrupted. We use a lot of UUIDs, so we use the uuid crate. The term crate is used for colorizing output a little bit, which is nice.
The last one, the devicemapper crate, is something that I wrote to take all the ugliness around calling the device mapper ioctl — there are strings that get put into buffers, input buffers and output buffers and flags and lots of stuff — and just become your own library client: you hide all that off, you put it over there.
When you're developing it, you have to wear that hat; then, once you're developing something higher level, you can just say: that's a crate, I'm just going to use it. I can forget about all the ugliness and unsafe that's inside there, and I can just do my thing. So you can be your own library user.
Next I'd like to talk about a couple of different ways Froyo leverages the Rust type system to reduce bugs and improve development, that sort of thing. The first thing: as you probably can tell, a lot of Froyo is ranges — ranges on disks and ranges on top of block devices. Those could all be represented as unsigned 64-bit integers, but we want to be very careful that we don't make mistakes, because you can make a mistake.
If you get a calculation wrong, horrible things happen. You could be referring to things in terms of bytes, or — what's more common with block devices — in terms of sectors, which are 512-byte blocks; or there can be sector offsets; and when you start talking about the thin provisioning layer, it allocates things in terms of data blocks, which are usually something like 4 megabytes.
With all these different scalar values, you want to make sure you don't mix them up, and that's why defining separate types for each of them — which you have to convert between, even though they're all u64s under the covers — seems like a really good idea. There's a library called newtype_derive; you can define a newtype for sectors or sector offsets, but you still want to be able to add them together.
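Here's the newtype idea spelled out by hand — the real code uses the newtype_derive crate to generate these impls, but this hand-written sketch (with illustrative names) shows what you get: sector arithmetic works, while mixing sectors and bytes becomes a compile error instead of data corruption.

```rust
use std::ops::Add;

// Distinct types for distinct units, even though both are u64 underneath.
#[derive(Copy, Clone, PartialEq, Debug)]
struct Sectors(u64);

#[derive(Copy, Clone, PartialEq, Debug)]
struct Bytes(u64);

// Sectors + Sectors still works...
impl Add for Sectors {
    type Output = Sectors;
    fn add(self, rhs: Sectors) -> Sectors {
        Sectors(self.0 + rhs.0)
    }
}

const SECTOR_SIZE: u64 = 512;

impl Sectors {
    // ...and crossing units requires an explicit conversion.
    fn to_bytes(self) -> Bytes {
        Bytes(self.0 * SECTOR_SIZE)
    }
}

fn main() {
    let a = Sectors(8);
    let b = Sectors(8);
    assert_eq!(a + b, Sectors(16));
    assert_eq!(a.to_bytes(), Bytes(4096));
    // let bad = a + a.to_bytes(); // would not compile: the types differ
    println!("newtype sketch ok");
}
```

The commented-out line is the whole point: the mistake that would silently corrupt data with bare u64s now fails to compile.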
If you look at Froyo's types.rs, we still have to implement Serialize and Deserialize for each of these newtypes, which, once you figure out how to do it once, you just cut and paste. The next thing we do: we have our Froyo struct, and it has a list of our block devices, a list of our linear devs, a list of our raid devs.
Well, I guess we'll find out as we go along. Another thing that we did with Rust types: when you look at the raid devs referencing the linear devs, which reference the block devs, it's not necessarily true that the block devs are always going to be there. If you have three disks in your Froyo dev and one of them is missing, you can't just pretend that you're a two-disk RAID device.
You need to know that one and three are there, but number two is missing. So Froyo uses — and this is really a perfect example of using sum types, the enums — a present variant for when something is there, which includes all the information for when it's there. If it's not present, we still have something there, and we can know that it's missing.
That also extends up to the raid member. A member of the raid can be present; or it can be absent, which means we still have the configuration but we just don't have that particular disk right now; or it can be removed, which means we've actually forgotten about the configuration, and any time we initialize a new disk and put it in, the raid is going to try to spread itself across that new disk to re-establish redundancy.
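A minimal sketch of that three-state sum type — the variant names follow the talk (present/absent/removed), but the payloads are illustrative guesses, not Froyo's exact fields:

```rust
// A possibly-missing raid member as a sum type rather than a nullable
// pointer: every state carries exactly the data that state has.
enum RaidMember {
    Present { device: String },     // disk is here; full info available
    Absent { saved_config: String }, // config kept, disk missing right now
    Removed,                         // config forgotten; will re-spread
}

fn status(m: &RaidMember) -> &'static str {
    // The compiler forces every case to be handled.
    match *m {
        RaidMember::Present { .. } => "in use",
        RaidMember::Absent { .. } => "degraded, awaiting disk",
        RaidMember::Removed => "gone",
    }
}

fn main() {
    let members = vec![
        RaidMember::Present { device: "/dev/sdb".to_string() },
        RaidMember::Absent { saved_config: "(saved JSON)".to_string() },
        RaidMember::Present { device: "/dev/sdd".to_string() },
    ];

    // "One and three are there, but number two is missing."
    let degraded = members.iter()
        .any(|m| !matches!(m, RaidMember::Present { .. }));
    assert!(degraded);
    assert_eq!(status(&members[1]), "degraded, awaiting disk");
    println!("raid member sketch ok");
}
```

Compared with an `Option` or a null pointer, the enum distinguishes "temporarily missing, keep waiting" from "gone for good, rebuild elsewhere", which is exactly the distinction the recovery logic needs.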
Like I said before, the container choices kind of imply the semantics of using them, so use a map for your disks, because they're inherently not ordered. I think you kind of start out, when you're doing development, putting everything in a Vec, and that works for a while.
Then Rust is really good: you change that to a different data type, you go and fix it up, and then you're good to go, and you can have a really high level of confidence that everything is going to fit together and work the way you want from there on out. The last thing in terms of types is FroyoResult and FroyoError.
This is also another kind of Rust pattern. You have all your libraries, and each library has some result type that it returns its results as; you can make an enum type for your project that wraps all of these — in this case it's FroyoResult — and you write a couple of methods that enable all those different library error types to convert to your result type, and that enables you to use try!.
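The project-wide error enum pattern looks roughly like this. The variant set is illustrative (the `Serde` variant is a stand-in for whatever the JSON library's error type is), and the sketch uses the modern `?` operator, which does the same `From` conversion the talk's `try!` macro did:

```rust
use std::fmt;
use std::io;

// One error enum for the whole project, wrapping each library's error.
#[derive(Debug)]
enum FroyoError {
    Io(io::Error),
    Serde(String), // stand-in for the JSON library's error type
}

type FroyoResult<T> = Result<T, FroyoError>;

// A From impl per library error makes try!/? convert automatically.
impl From<io::Error> for FroyoError {
    fn from(e: io::Error) -> FroyoError {
        FroyoError::Io(e)
    }
}

impl fmt::Display for FroyoError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            FroyoError::Io(ref e) => write!(f, "I/O error: {}", e),
            FroyoError::Serde(ref s) => write!(f, "metadata error: {}", s),
        }
    }
}

fn read_meta(path: &str) -> FroyoResult<String> {
    // `?` invokes From<io::Error> on failure -- no manual conversion.
    let contents = std::fs::read_to_string(path)?;
    Ok(contents)
}

fn main() {
    let err = read_meta("/nonexistent/froyo-meta").unwrap_err();
    assert!(matches!(err, FroyoError::Io(_)));
    println!("error enum sketch ok");
}
```

Every fallible function in the project can then return `FroyoResult<T>`, and errors from any wrapped library propagate with a single `?`.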
How many people here are familiar with Clippy? OK, the majority. Clippy is extra lint checking for your Rust code, where you go, "oh yeah, I did do that," and then you fix it and you feel better. But when you first turn Clippy on, you feel dirty until it's clean.
Everything works with all my libraries, and then I do a cargo update, which updates all my dependencies, and I update the nightly compiler, and everything breaks. Then you kind of iterate on that, and then there's a day where everything works, and you're like, all right, that's the new snapshot, and then you can kind of go.
We actually have more than 300 calls to try! in the code. It's very easy to just write try! and return your FroyoResult, and then you don't have to worry about it. But that really isn't good enough: there are going to be cases where, instead of failing completely, you want things to fail gracefully, and so we need to go through the code and really think about each of those tries.
The next thing: on Linux there's a system called udev, where whenever you plug in a drive you can get a notification, which is nice, so Froyo needs to integrate with that. Again, if you have a three-disk Froyo dev and you have two of the disks inserted, you could come up in degraded mode; but really, if you wait for that third disk, then you could come up in redundant mode, and so you need to get that device notification.
Or you can come up with just two disks, but then, when the third one arrives, you want to see that it shows up and then go into redundant mode. I mean, it's fair to call udev a dark corner of Linux; it's a corner that not many people have to deal with, but I don't know, maybe it won't be that bad. There are still some missing features. I've been working a lot on the shrinking case.
Say you have three disks, and you don't have that much data on your Froyo dev, but then one of them fails, so now you're not redundant anymore. If another one of the two remaining drives fails, you're going to lose your data; but if you're not using that much space, you can reconfigure the two remaining drives and re-establish redundancy on those two drives.
That's not working right now, but it's next. There's also the case where you have three drives and then you add another drive, so you have more capacity and all your raids should get bigger; still working on that. Then there's throttling, which gets back to what Jim was saying about thin provisioning.
There's this hidden constraint that the file system is now based on the thin provisioning layer underneath, and if that runs out, it's going to hate life. The way that some thin provisioning solutions handle this is, as you approach running out of thin provisioning blocks, to just get progressively slower and slower, so that you never actually fill up your thin provisioning blocks. There aren't a lot of options here.
Really, you can either lose data or just get slower and slower, so probably at least initially we'll try the throttling approach and see how that works. The last one is interesting. I was talking before about working on the shrinking case, and it's been very tricky — you had to overlay a mirror on top of existing stuff — but the kernel has been getting better in the meantime: it's getting support for these DM RAID layers to reshape themselves.
That would mean we could only have that feature on that kernel version or later. So this is a case where we might want to support the reshape on those kernels, but still have the older, clunkier manual method for older kernels, that sort of thing. I think trait objects might be the solution here: we can figure out what kind of kernel we're on early on, set that up, and then call methods on it, or something like that.
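The trait-object idea floated above might be sketched like this. The trait, both impls, and the capability probe are all hypothetical — the talk only suggests this as a possible design:

```rust
// Pick a reshape strategy once, at startup, based on kernel capability;
// the rest of the code just calls through the trait object.
trait Reshaper {
    fn reshape(&self) -> &'static str;
}

struct KernelReshape;       // newer kernels: DM RAID reshapes itself
struct ManualMirrorReshape; // older kernels: overlay a mirror by hand

impl Reshaper for KernelReshape {
    fn reshape(&self) -> &'static str {
        "asking dm-raid to reshape itself"
    }
}

impl Reshaper for ManualMirrorReshape {
    fn reshape(&self) -> &'static str {
        "overlaying a mirror manually"
    }
}

// Stand-in for probing the running kernel's capabilities.
fn pick_reshaper(kernel_supports_reshape: bool) -> Box<dyn Reshaper> {
    if kernel_supports_reshape {
        Box::new(KernelReshape)
    } else {
        Box::new(ManualMirrorReshape)
    }
}

fn main() {
    assert_eq!(pick_reshaper(true).reshape(), "asking dm-raid to reshape itself");
    assert_eq!(pick_reshaper(false).reshape(), "overlaying a mirror manually");
    println!("reshaper sketch ok");
}
```

The callers never branch on kernel version again; the decision is made once and hidden behind the `Box<dyn Reshaper>`.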
I haven't had to worry about that yet, because they've posted the patches, but they're not upstream yet. The last unresolved issue is distro support. I understand that Rust is stable in Debian, which is awesome. It is not yet in the distro I care about, which is Fedora. It's making slow progress, which is great, and I think there's been a lot of work on both the Rust side and the Fedora side.
You can use virtual machines for a lot of that — like surprise removal: you know, the disk just goes away on the VM. And also you can use device mapper: device mapper has some targets that will lay over an existing target and return error codes, and if they're under the RAID layer, then the RAID layer will handle those and freak out when stuff starts throwing errors. I haven't actually gotten to that stage — I'm still more at handling device removal at this point — but virtual machines are awesome.
If you decide to change that metadata format, it becomes a much bigger decision at that point: you can either start over with Froyo, or you can have some sort of conversion mechanism, or you can just change your code so that it handles both. I know a lot of existing file systems have done that. And LVM has been around for a long time; it actually also has a text-based metadata format, but it's managed to evolve without a whole lot of changes this whole time, and I think actually being text-based helps with that.
Froyo — so, are any of you familiar with a product called the Drobo? It's an actual hardware device that has four drive bays in it, and it kind of operates in the same manner: you just plug in disks, and it gives you a file system.
So this is like that, and I think there are advantages that the hardware has, like for the running-out-of-disk-space thing it actually has lights on it that can light up and freak out — you see the red light blinking. But there are also a lot of advantages to a software-based approach. I actually have a Drobo, and you know, it's lasted six years, but it's still using a six-year-old processor.
Audience: I'm just interested — are you using Rc and RefCell for that sharing? — Yeah, it's Rc of RefCell: you make a method call to borrow the shared contents — it calls borrow() — so it's dynamically checked at runtime.
Audience: You also mentioned that you have parent pointers, so there are these things where you have something pointing down to something, but then those things also point back, right? So you have a cycle, and reference counting doesn't collect cycles.
segments.