A
Anyway, I think next we're going to do a short Q&A, sort of an open ZFS Q&A, just before we get the beer. So we'll open the floor up to any questions, and to questions on the live stream, about ZFS. If Matt and George grab the mics... so yeah, any questions about ZFS: deep dives, performance issues, experiences, suggestions, features, or questions about where George got such a great shirt from. I don't know, but yeah.
C
SSDs are great. I mean, they're super fast for any kind of random-write, random-read intensive workloads. We recommend them to our customers, because our customers are mostly using databases, and databases are very random-access intensive. And I know lots of other vendors do as well.
C
Yeah, so, like using virtualization with 4k or 8k block size, or record size, in ZFS: if you're using RAID-Z, then you definitely want to be using disks or SSDs with 512-byte sector size. Otherwise you're just wasting space, and you'd be better off using mirroring. That's basically what I'm saying.
C
But aside from that particular case, it's going to be fine. If you're using 512-byte, with actual 512-byte sectors, with RAID-Z, with 4k or 8k record sizes, it's not going to be perfect; it's going to give you a little bit of space overhead for using RAID-Z. Like, if you have a raidz1 with five disks, with four data and one parity disk, it's not going to be exactly twenty percent parity overhead. It'll be a little bit more, maybe twenty-five percent.
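The back-of-the-envelope numbers above can be checked with a short sketch. This is a simplified model of RAID-Z allocation, patterned on OpenZFS's `vdev_raidz_asize()` behavior (parity sectors added per stripe row, then padding to a multiple of nparity + 1); the function name and scenario values are illustrative, not from the talk.

```python
import math

def raidz_alloc_sectors(psize, ashift, ndisks, nparity):
    """Estimate sectors allocated for one psize-byte block on RAID-Z.

    Simplified model: data sectors, plus nparity parity sectors per
    stripe row, padded up to a multiple of (nparity + 1).
    """
    sector = 1 << ashift
    data = math.ceil(psize / sector)
    parity = nparity * math.ceil(data / (ndisks - nparity))
    total = data + parity
    return total + (-total) % (nparity + 1)  # round up to nparity + 1

# Five-disk raidz1 (4 data + 1 parity), 8k records, 512-byte sectors:
# 16 data sectors + 4 parity = 20 sectors, i.e. 25% overhead on top of
# the data, rather than the nominal 20% of raw capacity.
overhead = raidz_alloc_sectors(8192, 9, 5, 1) / (8192 // 512) - 1

# Same pool on 4k-sector devices (ashift=12): 2 data + 1 parity + 1 pad
# sector = 4 sectors, so 16k is allocated to store 8k of data.
alloc_4k = raidz_alloc_sectors(8192, 12, 5, 1)
```

With 4k sectors the stripe rows are so short that parity and padding dominate, which is the space waste the answer warns about for small record sizes on RAID-Z.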
C
I think that makes a lot of sense, and I think this talk about SSDs really does go to the hybrid pool. The idea behind the hybrid pool, having different types of disks in it, is going to continue. It might just be different types of SSDs for different use cases within the pool, rather than SSDs versus spinning disks, right? You may have your TLC low-endurance stuff for the bulk data, and then other things for...
C
Imagine that we could do additional things, kind of along the lines of the Nexenta idea of having disks or devices that are dedicated to metadata, because metadata tends to be overwritten much more often, and freed more frequently, than user data. If we had something like that, maybe you'd use some high-endurance devices for your metadata and low-endurance devices for the user data, which tends to be replaced less often.
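The idea can be sketched as a toy allocator that routes each block to a device class based on its type, roughly what OpenZFS later shipped as "special" allocation-class vdevs. Everything here (the class names, the sizes) is a hypothetical illustration, not code from the talk.

```python
from dataclasses import dataclass

@dataclass
class DeviceClass:
    name: str           # e.g. "special" = high-endurance metadata SSDs
    allocated: int = 0  # bytes handed out from this class so far

    def alloc(self, size: int) -> str:
        self.allocated += size
        return self.name

@dataclass
class Pool:
    special: DeviceClass  # frequently overwritten/freed metadata
    normal: DeviceClass   # bulk user data, replaced less often

    def alloc(self, size: int, is_metadata: bool) -> str:
        # Route metadata to the high-endurance class, data to bulk.
        cls = self.special if is_metadata else self.normal
        return cls.alloc(size)

pool = Pool(DeviceClass("special"), DeviceClass("normal"))
pool.alloc(4096, is_metadata=True)     # e.g. an indirect block
pool.alloc(131072, is_metadata=False)  # e.g. a file data block
```

The design point is simply that write endurance can be bought where the churn is, instead of uniformly across the pool.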
C
Synchronous operations, that's roughly correct. So synchronous operations would be, you know, a database that's issuing fsyncs or opening files with O_SYNC, or NFS operations, which is kind of the big use case. So pretty much for those kinds of use cases, like NFS servers, in most cases.
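As a concrete illustration of the workload being described, here is a minimal synchronous-write sketch: a database-style commit that calls fsync() so the data is durable before success is reported. On ZFS these fsyncs go through the intent log, which is why such workloads are the ones that care about sync latency. The file path is invented for the example.

```python
import os
import tempfile

# A write-ahead-log style commit: the record must reach stable storage
# before we acknowledge the transaction, so we fsync() after writing.
path = os.path.join(tempfile.mkdtemp(), "wal.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
try:
    os.write(fd, b"commit record\n")
    os.fsync(fd)  # blocks until the data is durable (the "sync" cost)
finally:
    os.close(fd)
```

Opening the file with os.O_SYNC instead would make every write() synchronous, which is the other case mentioned above.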
G
I think we all know about the, not a requirement, but best practice, of using ECC memory, and we do. But I'd just like an answer to this, which is: okay, let's imagine we're not using ECC memory. What do we risk? Do we risk that after the data has been written to ZFS, let's assume without corruption, a bit flip in the memory could corrupt the pool, or do some damage to already-written data? Or is the risk limited to the data
that's in transit somewhere in the stack, before it's checksummed and written? Because what I mean is, if that's the risk, okay: it's certainly less good to have this risk than not having it by using ECC memory, but in most of our workstations and laptops we work without ECC memory and we accept that risk. Which is a very different risk scenario than after my data's...
C
You know, your Mac, whatever. So in all of these file systems there's always the chance, if you're not using ECC, that some bit of a critical piece of metadata could get flipped in memory and then written out in an incorrect way, right? Like, all file systems have superblocks, or some block that controls a lot of different data in the file system. You know, if you're about to write that out and some critical bit flips in there...
That says, "oh, actually, there's no data in this file system," and then that gets written out, like, it's corrupt, it's wrong. And that could happen to ZFS, to ext2, to Windows file systems; there's really nothing special about ZFS there. There are certain things where ZFS is, or can be, more reliable, and we have some ideas for the future of doing more to protect it while it's in memory, so that the window of vulnerability gets smaller with ZFS. But you know, in...
D
It really depends on what piece was corrupted at the time that you wrote it; that's going to determine it. You know, best case scenario, it's just a piece of user data that is now lost, right? It checksums correctly, because the bit flip happened before we checksummed it: we checksummed it, wrote the checksum correctly, and it looks like valid data, but in fact it's logically incorrect. Or, worst case, you panic the box.
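The "best case" just described, a block that checksums correctly but holds wrong data, is easy to demonstrate. This sketch uses SHA-256 as a stand-in for ZFS's block checksums; the helper names are invented for the example.

```python
import hashlib

def checksum(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def flip_bit(data: bytes, bit: int) -> bytes:
    out = bytearray(data)
    out[bit // 8] ^= 1 << (bit % 8)
    return bytes(out)

block = b"user data on its way to disk"

# Bit flip BEFORE checksumming (bad RAM while the block sits in
# memory): the stored checksum matches the corrupted data, so reads
# and scrubs see a "valid" block that is logically wrong.
early = flip_bit(block, 3)
early_sum = checksum(early)
assert checksum(early) == early_sum   # verifies, but data is wrong

# Bit flip AFTER checksumming (e.g. corruption on disk): the mismatch
# is detected on read.
late = flip_bit(block, 3)
good_sum = checksum(block)
assert checksum(late) != good_sum     # checksum mismatch caught
```

This is the window of vulnerability the panel mentions: checksums protect everything after the sum is computed, not the time the block spends in unprotected memory beforehand.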
C
I
mean,
for
example,
like
our
you
know,
our
customers
are
like
what,
like
a
hundred
of
the
fortune,
500
amerock
100
of
the
biggest
500
biggest
companies
in
America,
use
our
product
they're
using
ZFS
they're,
relying
on
it,
for
you
know
the
core
of
their
IT
organizations.
They
may
not.
You
know,
may
not
even
know
that
they're
running
ZFS,
because
it's
just
embedded
in
our
product.
C
All of us have, you know, all these companies have reference customers, right? So if you go to delphix.com you're going to see all of our reference customers, and I'm sure if you go to nexenta.com they're going to be like, "Nexenta: used by, you know, X, Y, and Z," and they're all going to be household names. You know, one of our reference customers is Facebook, right? Facebook uses ZFS in their IT organization, so...