Description
Speaker: Chris Burroughs, Engineer
ZFS is an advanced file system, RAID, and volume management system originally developed by Sun Microsystems. 'The Last Word in File Systems' was unavailable on Linux until recently. AddThis uses ZFS to more effectively scale up dedicated hardware, getting twice the performance at half the cost. ZFS is also fundamental to containerization, allowing nodes from multiple clusters to be co-located with safe persistent storage.
Hi everyone, I'm Chris. I'm an engineer at AddThis. I'm also a co-organizer of the Cassandra DC meetup, and like all meetup organizers we're always looking for speakers, so if you're ever in DC feel free to stop by and say hi — and if you'd like to give a talk to our group, that would be great too. There's a bunch of contact information for me here.
You can ping me at my Gmail, or just grab me around the rest of the day if you have any questions, and feel free to interrupt me if I say something that doesn't make too much sense. So: we're talking first about Cassandra at AddThis — what do we use it for — which provides the context for the rest of the talk. Then we're talking about ZFS, and then
how we use ZFS to sort of scale up — get more efficiency out of a physical compute node — and also to sort of scale down, so to speak: handling our smaller clusters more efficiently with multi-tenancy. This is AddThis. We make a suite of tools that publishers put on their web pages, and that includes everything from relatively static social sharing buttons — share to Facebook or Twitter, or geared toward a particular language or geographic region — to more data-rich widgets, like content recommendation or some sort of recirculation.
If the widgets that people put on their pages are slow or unavailable, our customers are quite reasonably not happy. For brief context about the scale AddThis operates at: tools from AddThis run on roughly fourteen million different domains, and pages with AddThis tools on them are viewed roughly three billion times a day, which peaks at, very roughly, 80,000 requests a second. Our back end is mostly Java on Linux, on physical servers, so Cassandra was a really natural choice for us.
The proverbial PHP monolith was broken up many, many years ago, and we're still evolving our architecture — just because you have a bunch of moving parts instead of one big moving part doesn't mean you're done. These things continue to evolve over time, and our engineering team is broken up into individual squads with a great deal of self-direction and discretion in how they meet their product objectives. So it's really important for us that we're never in a situation where team A can't move forward until team B changes.
Separate schemas and clusters per team make it a lot easier to move forward and not block each other as much as possible. We've been using Cassandra in production at AddThis since 0.6. We have roughly a dozen clusters; we create a new one per use case or SLA, and that's something I really like about how we're operating Cassandra: having multiple smaller clusters instead of one super cluster makes it a lot easier to reason about resource use, capacity planning, and just understanding the needs of individual teams. I think that's a common practice at a number of companies using Cassandra.
It's primarily used for latency-sensitive, read-mostly storage. We have a variety of background analytic systems that might be chugging along doing, like, business intelligence, or some machine learning application, and these then feed data into Cassandra. In most cases the source of the data is these backend applications —
and reads are done by the more traditional front-end applications, which are literally the web servers. Every cluster is multi-DC. That's something that's really important to us about Cassandra — really, for any data storage system, not being able to handle multiple data centers is a big strike against it — so that ability is a big reason we use it. And again, in some cases we'll have some sort of batch or streaming system that might be updating Cassandra every hour, or every few minutes, or more often, and that's the primary use case that we have.
This is a quick screenshot of an internal dashboard that shows our clusters as of a few days ago. It makes a bunch of different graphs about them — you know, their capacity and how they're behaving. This is a chart that shows the size of each cluster, counting a single DC. You can see there's one cluster on the side that's dramatically larger than the rest, and that cluster is something we focus a lot of performance attention on, but
there are also a bunch of clusters that are really small, and they'll be what we talk about in the second half of the talk, moving over to ZFS. So, first, a little aside on abstractions. When we talk about storage there are a lot of moving parts: there are block devices, there are partitions, there are the file systems themselves, there's RAID, there's volume management. So there are a lot of moving parts, and they're relatively brittle: you need a big plan up front, and I would say it's not super straightforward to change things later — you know, add new devices to your RAID array, or change block sizes.
And these layers are not really talking to each other: the file system thinks it's talking to your block device but is really talking to the volume manager; the volume manager is talking about blocks to something that presents as a block device but is really a RAID array striping stuff across multiple devices, which are the actual block devices. Largely because of all this complexity, the data integrity story isn't great: you don't have a great way to verify that the blocks I just read are the blocks I just wrote. So it's really common for applications to do their own checksumming and stuff like that — Cassandra does the same; it has a checksum on each compressed block — which is great, but it would be even better if we could ask the storage stack to solve this problem for us.
Contrast that with memory, where the abstraction programmers deal with day to day is virtual memory, and has been for some decades. They malloc and free; there's really no notion of, like, which DIMM am I allocating from, or —
if something has to change — sending someone out to the data center to swap DIMMs. That would be sort of weird and doesn't make sense. You might have to worry about something like NUMA, and it's never going to be completely hidden, but mostly you can treat it as a runtime parameter: I can decide with a setting, without having to rearrange anything physically, whether I'd like to interleave, or pin things to a physical socket, or something like that.
So, ZFS. As a storage subsystem it subsumes the traditional responsibilities of a filesystem, RAID, and a volume manager. It's always consistent on disk, which you can think of as either there being no fsck, or fsck always running. It's a universal storage system: it provides both POSIX filesystems and block devices, NFS, Samba, etc. I had been using Linux's LVM and RAID tools for a long time before using ZFS, but I find the ZFS administrative tool set much easier to work with, despite having not used it nearly as long.
ZFS uses scalable data structures, so the maximum size of a single pool is 2 to the 75th — which I believe is 256 zettabytes — which is very large, even for a Cassandra cluster. It was started by Jeff Bonwick and Matt Ahrens at Sun around 2001, so if you believe the trope that a file system takes at least a decade to mature and stabilize, ZFS is now a fine wine you can enjoy. It has also been ported to most modern Unix operating systems.
Sun, sadly, is no longer with us and does not make Linux, but there's been Linux support available since 2013, and it was ported to FreeBSD in 2008 or so. So there's a long history of ZFS as a cross-platform filesystem. The typical criticism of ZFS — I think it was Andrew Morton going on about it — was that it's a "rampant layering violation," because it combines the traditional responsibilities of multiple moving parts. I think, instead, if you think about it, it has just rearranged the parts a little bit and clarified the abstractions between them.
So instead of everything thinking it's talking to a block device — even though that's a comfortable lie — there's something on top that's thinking about file names and POSIX-filesystem-type things, and it wants to perform transactions on them as objects. The part in the middle takes and satisfies those transactions on objects and maps them through a virtual address space, and then something below that maps the virtual addresses to physical addresses. So it's not one giant thing; it's still broken out into multiple subsystems that
allow some flexibility. For example, ZFS is usually used as a normal POSIX file system running locally on the box; it can also do NFS, Samba, etc. You can also do the trick where you expose raw block devices, if you want to play that game — such as for a swap partition — or you can run something like Oracle, which expects to be talking to a block device, or Lustre, which is a distributed
filesystem popular in the HPC space. I mention that because the actual original impetus for porting ZFS to Linux came from Lawrence Livermore National Laboratory: they wanted to run Lustre on top of ZFS for their supercomputer. There are a variety of presentations around that if you have a hobby interest in supercomputers.
ZFS is copy-on-write: a series of pointers is updated, and the change is not live until the last atomic pointer at the root of the tree is updated. This is also the hook that lets ZFS provide end-to-end data integrity guarantees, because not only does each block pointer point to the next block, it also carries a checksum of the expected contents of that block. So we always know that what we read is what we originally wrote.
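As a concrete sketch of exercising those checksums — the talk doesn't show these commands, and the pool name "tank" is just an example — a scrub walks every allocated block and verifies it against the checksum stored in its parent pointer:

```sh
# Verify every block in the pool against its stored checksum
# (runs in the background against a live pool).
zpool scrub tank

# Show scrub progress plus any checksum errors found or repaired.
zpool status -v tank
```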
And because ZFS keeps track of the different roots of the tree, snapshots are a very straightforward implementation on top of it — a snapshot being a read-only copy taken at a particular point in time. Compare that with something like rsync. rsync is a fantastic program; it's used in all sorts of places, it has a lot of clever algorithms, and it does a really great job of coordinating so that the minimal amount of data is exchanged between two points on the network, which is all great. But at the end of the day, rsync has to look at all of the files and see if they've changed. ZFS can tell exactly what has changed between two points in time: these are all the changes that were made. So that's a significant advantage. Snapshots can also be promoted to writable clones, and you can look at the difference and the accumulated changes.
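A minimal sketch of that workflow — the dataset and host names here are made up, not from the talk:

```sh
# Point-in-time, read-only snapshots of a dataset.
zfs snapshot tank/data@monday
zfs snapshot tank/data@tuesday

# Ship only the blocks that changed between the two snapshots;
# unlike rsync, nothing has to walk the files to discover the delta.
zfs send -i tank/data@monday tank/data@tuesday | ssh backuphost zfs receive backup/data

# Turn a snapshot into a writable clone to inspect or diverge from it.
zfs clone tank/data@tuesday tank/data-experiment
```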
So, one final feature I want to talk a bit about. You can of course give ZFS all SSDs, and that will make your life better — in general, using SSDs instead of rotational media makes your life better — but that's not necessarily economical.
SSDs are still cheaper per I/O operation, and rotational media is still cheaper per gigabyte, so a hybrid solution can be more economical. ZFS supports both using an SSD as a read-mostly cache and using a non-volatile RAM or SSD device as a write log — much like what a hardware RAID controller might do with its onboard non-volatile RAM.
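For that hybrid layout, a sketch of the commands involved (device names are illustrative, and the pool "tank" is assumed to already exist):

```sh
# SSD as a second-level read cache (L2ARC).
zpool add tank cache sde

# SSD or NVRAM device as a separate intent log (SLOG) to absorb
# synchronous writes, similar to a RAID controller's battery-backed cache.
zpool add tank log sdf
```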
There are a variety of other ZFS features that are really cool if you're administrating it — like if you want to give each user their own dataset for their home directory — but for the purposes of Cassandra we can pass those by for now.
I try to keep the terminology relatively generic — there's no, like, vocabulary quiz at the end — but this is sort of the basics of ZFS vocabulary, so if I say something and it doesn't quite make sense, this will hopefully help. In ZFS we create a storage pool; that's supposed to be an abstract collection of storage resources.
A pool can be exposed as a block device — or, if you're doing something over the network, iSCSI or something could potentially be used for that — but local storage is more common, just sort of more common in general. A dataset is the thing we create out of the storage pool. Datasets are usually file systems — for the purposes of this talk you can just think of them as file systems — and they're configured with key-value pairs of properties, so you can use something like `zfs set` to change them.
One of the nicest, most pleasant surprises about switching to ZFS was its cache, the ARC. The things I care about most with any cache are: how large is it — how many resources are dedicated to it, which in the case of an operating system cache is probably a lot — and how effective is it, what hit rate am I actually getting? For the Linux page cache, this data is generally difficult to get at,
whereas with ZFS the interface and tools let you get a hit rate out, which is great. On Linux, with the latest kernels now having more mature dynamic tracing tools, you can get at it too, but with ZFS there's one real answer: you just look it up in /proc and get your answer. This is an example — without being too precise, it's /proc/something-something/arcstats — or additionally there's the arcstat script, which will print out a bunch of stats, and you can let it fill up your terminal like a traditional top-style utility.
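For reference, on ZFS on Linux that "something-something" path is /proc/spl/kstat/zfs/arcstats; a rough sketch of pulling the hit-rate numbers out of it (field names as of the 0.6.x releases):

```sh
# Raw ARC counters: hits, misses, current size, and target maximum.
grep -E '^(hits|misses|size|c_max) ' /proc/spl/kstat/zfs/arcstats

# Or let the bundled arcstat script print a rolling per-interval summary.
arcstat.py 1
```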
Those are two very, very different workloads to deal with — one is probably much more sequential, while the other is much more random. Also, since these are just stats sitting in a bunch of files in /proc, you can pipe them into Ganglia or collectd or whatever monitoring tools you use quite easily; the integrations tend to be there, at least as a plugin. If you like pretty colored terminal dashboards, it's welcome there too, next to all the other stats you're used to looking at.
The final part about the ARC and its memory management is that ZFS supports prefetch in both row-major and column-major order. So it can handle not only the simple case of "I'm sequentially scanning through a file," but also "I'm sequentially scanning through a bunch of different files in parallel," or "I'm scanning through but not doing a total sequential scan — reading a block and then every nth block after it" — and prefetch the next block without wastefully touching all the blocks in between.
So we ran a test where we had two nodes in the same cluster in production: one node using the Linux page cache, the other using the ARC. Then we catted a bunch of junk — some gigabytes of junk — to /dev/null. For the page cache node, the 90th percentile latency jumped by four to six x, which would be a lot. For the ARC node it jumped by 2x,
which isn't great either — there's definitely still room for improvement there — but that's dramatically better than what we were seeing with the Linux page cache. And even on a modern storage system like Cassandra — you know, with log-structured merge trees, compaction, repair — these are common operations that are happening all the time and a common area of performance concern. I think this is more
relevant now than ever. Again — subjectively, for me — I find the administrative commands really easy to use compared to the sort of Linux LVM and mdadm stuff. zpool creates storage pools; zfs creates ZFS file systems on top of that. There's a third command, zdb, for debugging information; it's really aimed at kernel development, not something you need to use to administer your pool.
As an example, we're going to zpool create. I'm going to call the pool "tank," because that's the name people have been using in examples for the past decade. We're going to use two mirrored pairs — kind of a RAID 10. sdf is an SSD, so we're going to use that as a cache device. Then we'll create a file system on top of that — this one is going to hold SSTables for Cassandra — and I'm going to set the mountpoint somewhere more convenient for Cassandra. We're going to enable compression, because LZ4 is really cheap, so why not, and we're going to turn off atime — a really common performance-tuning setting whether it's ext4, XFS, or ZFS. Then we go install Cassandra, and Cassandra can go ahead and write its SSTables there. You can run zfs list to list all the datasets we just created, and I could do something like zfs get to see what the compression ratio is.
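Spelled out, that walkthrough looks roughly like this (device names and the mountpoint are illustrative; sdf is the SSD mentioned above):

```sh
# Two mirrored pairs (roughly RAID 10) plus the SSD as a cache device.
zpool create tank mirror sdb sdc mirror sdd sde cache sdf

# A dataset for Cassandra's SSTables, mounted somewhere convenient.
zfs create tank/cassandra
zfs set mountpoint=/var/lib/cassandra tank/cassandra

# LZ4 is cheap, so turn it on; atime off is the usual tuning knob.
zfs set compression=lz4 tank/cassandra
zfs set atime=off tank/cassandra

# Inspect the datasets and see how well the data compresses.
zfs list
zfs get compressratio tank/cassandra
```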
I would say 1.x is typical of what we've seen with both the ZFS compression and Cassandra's built-in block compression enabled on production column families; on synthetic data we were close to that, though obviously that exact number won't hold everywhere.
So again, this is specific to Linux, because that's what I think most people in this room run. Once upon a time, if you went googling, you might have ended up at an old FUSE (filesystem in userspace) port, but there's now a native port, and that's the one we're talking about.
As I mentioned, it was sponsored by Lawrence Livermore National Laboratory for their Sequoia supercomputer. Since 2013 it's been generally available, whether you want to run it on your laptop, or in production, or anything else like that, and since about 2014 the cross-operating-system development has really picked up. To a degree I find really impressive, the collaboration between illumos, BSD, and Linux is really good to see. That's relatively unusual — I wish these projects talked to each other more, so you'd get more cross-pollination.
A
The
latest
version
is
065
release
this
month.
This
is
not
an
exhaustive
list
and
distribution.
It's
quite
a
package
for
just
about
every
linux
distribution.
I'm,
you
care
to
run
it's
an
active.
You
use
a
queer.
You
can
get
help
in
our
trio
or
mailing
list.
There
loves
the
vaulted
box
going
back
a
long
time
and
most
of
the
docks
at
explain,
administration
or
sorry
risky
or
similarly
applicable.
A
You
probably
notice
that
the
version
of
really
are
served
with
a
zero
I
wish.
Your
might
really
give
you
some
trepidation
about
fafsa,
so
you're
going
to
run
in
production
even
for
something
like
this
and
which
has
its
own
built-in
replications.
This
blog
post
is
sort
of
the
canonical
and
best
summary
of
the
current
state.
First,
like
a
five-part
series
going
to
a
lot
of
details
about
support
on
cases
and
caveats
and
history,
so
you
can
going
to
make
the
decision
for
yourself
the
summer
is
all
the
sort
of
basic
integrity
features.
C
A
the "don't lose my data" features — are all there; they all work the same as they do on other production platforms, and that code hasn't changed in a long time. The major area of friction is sort of workload performance, and in particular the interaction between ZFS's memory management and Linux's virtual memory management.
At this point it's not seamless. `free`, for example — if you type `free`, it will not necessarily correctly account for all of the memory ZFS is using, the way it would normally show plus-or-minus buffers/cache. The same goes for some other tools that look at /proc to figure out how memory is being used.
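One workaround sketch (not from the talk): read the ARC's size straight out of arcstats when the usual tools disagree:

```sh
# Current ARC size and configured ceiling, in bytes; `free` folds this
# into "used" rather than into buffers/cache.
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats
```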
So — and this is a general statement that probably applies to just about anything — it might not be better for everything, probably isn't better for everything, but it might be a really good fit for some of your use cases.
One of our services had an internal SLA target of around 35 milliseconds at the 98th percentile. Importantly — unfortunately or fortunately, depending on your point of view — this was a really successful internal service: it was going to drive a lot of products, and those products were either going to blow their SLAs or we just weren't going to be able to launch them yet, because the performance for the customer was unsatisfactory. And despite doing several rounds of adding nodes — add a little more hardware and see if it gets better — we really weren't getting anywhere.
The L2ARC — that SSD read cache — was the thing we wanted to try out, to see how it works. If a read request comes in and it's in memory, great — it's satisfied out of the ARC and everything's good. If not, we check whether it's on the SSD; if so, great, you've read it from the SSD and returned it. That isn't as fast as memory, but it's still pretty fast.
But if it's not on the SSD either, then we have to go down to the rotational media. The L2ARC is implemented as a ring buffer, which has nice operational properties: namely, there are threads that watch the end of the most-frequently-used list and say "hey, this thing is about to fall off; I'm going to write it to the SSD now," because since it was in the ARC to begin with, it's probably a pretty useful thing to keep around. Since that's an asynchronous process,
there's no situation where — say a firmware bug in your SSD, or your SSD breaks, or you just need to evict a lot of memory really quickly — being unable to write to the SSD will have any impact on normal operations. The system's performance might get a lot worse, but there's no deadlock or anything like that you have to worry about. So that was a really nice fact that gave us some comfort as we were going into it. And again, creating it is really straightforward.
Here it is: you name the block device, tell ZFS to make it a cache, and it will start working right away.
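That one-liner, sketched out with an illustrative device name:

```sh
# Attach the SSD to the existing pool as an L2ARC device; it starts
# warming immediately, with no remount or downtime.
zpool add tank cache sdf

# The cache device then shows up with its own capacity and I/O counters.
zpool iostat -v tank
```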
I apologize — it's a nice graph but a little hard to read, with the ugly colors and whatnot.
These are a real progression out of Graphite of how these nodes were behaving last week. This cluster is getting roughly 25,000 requests a second in one DC. The Cassandra row cache has about a seventy percent hit rate, and the ARC has about...
The quote I remember is that we got twice the performance at half the cost, because you'll notice we went from not meeting our SLAs to far exceeding our SLAs, and we were able to reduce the number of nodes needed in the cluster. Your mileage will of course vary, depending on your workload, what ratio of RAM and SSD you have available, what the working set size is, and what the distribution of requests is. But for us, with this particular Cassandra use case — you know, web-ish-looking traffic, which is a fairly typical use case for Cassandra — it worked out really well.
A
So
this
is
not
that
big
cluster.
This
is
a
different
cluster.
It
has
three
notes:
it
has
about
150
megabytes
per
data,
so
this
is
a
big,
no
Thea.
This
isn't
like
medium
data.
This
would
fit
on
the
cd-rom,
drive.
I
think
this
would
fit
on
a
zip
drive.
Anyone
still
has
a
zip
drive.
This
is
replicated
three
ways.
This
isn't
even
like
I'm,
the
newfie
data
megabytes
of
data
or
no.
This
is
150
megabytes
of
data.
It isn't much data at all from a content perspective, but it's actually tremendously important data to us: it's user configuration data. It has among the highest latency requirements that we have, so access to it is really important. We have a number of clusters like this — some are really small latency-sensitive clusters; some, like this one, have very stringent latency requirements. So what we wanted to do — we had all these clusters that were pretty small: you know, three nodes per DC, and we have two DCs.
So that's six physical nodes to store 150 megabytes of data, which is economically inefficient; but again, these are among some of our most latency-sensitive services, so we can't just stick them in a corner on some leftover hardware. VMs did not seem like a good path to go down at the time. We had some relatively complicated networking requirements, so it was really important, just for Cassandra's sake, that we could give each node an IP normally and not do something crazy with port mapping or iptables scripting or anything like that.
We wanted all of this to be transparent to the rest of our infrastructure, including DNS, TCP, inventory, configuration management, monitoring, etc. Step zero was: solve all outstanding problems with monitoring, containers, networking, and storage that wouldn't otherwise work out at a reasonable level of effort. Containers are obviously a hot topic as of late. Here's one arbitrary axis: on one end, you might have a single process — it's a sweet microservice talking to your service-discovery framework, and you just spin them up.
On the other end is something that looks like a normal Unix server or workstation, the way that's looked for the past few decades — with all the existing tooling and understanding of how it works. To give these things names, you might call the stuff on the one hand "application containers," and the other "system containers" or "infrastructure containers," because they look more like a traditional UNIX operating system. These names aren't tied to any of the really concrete implementations.
Docker in particular — there's a lot of excitement about that, and people are using it for application containers. The LXC project has been around for a long time and does a good job of just giving you a thing that looks like your normal Unix workstation, so there's no notion of having to rearchitect what you run in order to containerize it.
So: what are these container things, how are we going to use them, how can we use them to improve our architecture, and what are the consequences for service discovery, monitoring, and so on? People are doing all sorts of interesting things out there: you might be taking a single microservice, or you might shove your whole stack into one Docker container.
For us, we wanted persistent local storage, and not some sort of ephemeral thing suitable only for a stateless microservice. ZFS was a really natural fit, because of the problems we wanted to solve: I want to create something from a base image; I want local storage that's persistent; I want to easily back up running applications; I potentially want to migrate containers between hosts without losing data; and I want to manage quotas so that one container cannot consume all of the host's storage resources.
All of these have really natural ZFS analogues: this is ZFS clones, this is ZFS snapshots and send/receive, this is dataset properties, and so on. For our part, this isn't some crazy epiphany we had: this is how FreeBSD and illumos have been doing it for years, for everything from public clouds to trading on Wall Street. Fortunately for us, LXC already had ZFS support.
It's kind of buried — it's somewhere near the bottom of the man page for lxc-create: which backing store do you want, with this long list of LVM, or just a flat directory, or btrfs, or whatever. Anyway, you throw those options in and it basically just worked out of the box, which was really nice.
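A sketch of what that looks like with LXC 1.x-era flags (the parent dataset tank/lxc and the container name are made up, and the exact option names may differ between LXC versions):

```sh
# Create a container whose rootfs is its own ZFS dataset.
lxc-create -n cassandra01 -t ubuntu -B zfs --zfsroot=tank/lxc

# The rootfs is then just another dataset, so quotas and snapshots apply.
zfs set quota=50G tank/lxc/cassandra01
zfs snapshot tank/lxc/cassandra01@pre-upgrade
```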
So what we did, given these constraints and our need for transparency — because we wanted to change one thing at a time and see how that worked, not change everything at once and hope it worked out — was write a bunch of glue code around these commands. It was a few hundred lines of Python; nothing really technically exciting.
An operator can say: build me a container with this many resources, please put it over here, and add these Chef roles to it. Then, once that's all set up and looks good, you allocate it and it sort of enters the production stage: the existing monitoring tools can go find it, monitor it, and record and alert if it breaks. So yes, again, it was transparent to Chef and Ganglia and DNS, and that was really the key thing.
This is a cropped screenshot listing all the containers on a host; they're all given unique IDs. I think at the time this was twelve ZFS datasets — really 24 earlier this year — and we had not yet exhausted our patience for shipping-container puns, so the sort of base image is called "drydock." That's entirely my fault. You can see it's not a minimal thing; it's relatively fat, like some hundreds of megabytes for an OS image, but those are hundreds of megabytes that are not counted against the individual container.
This got us a clean method for having multiple small clusters be multi-tenant on the same host, without breaking apart SLAs or use cases, and without being bound by some arbitrary restriction. This is how we now run most of our small Cassandra clusters: LXC on top of ZFS. Looking forward, we'd like to support the largest cluster too — you know, all of them, effectively — and we'd like performance to be better.
A really hot topic as of late is the alignment of blocks for reads, or the lack thereof. Also, so far we're at a couple hundred lines of glue code, which is great, but as the problems we're solving grow: do we keep this going, or is there some sort of orchestration service or something else we should adopt instead? That's kind of an open question we have to solve. And if working on Cassandra and infrastructure sounds interesting to you, we're hiring — come work on Cassandra and our clusters and help make them the best storage solution for our engineering teams.
[Audience question] The question was about other issues with fragmentation of storage — do we run into that, since there's no Win32-style defrag utility. The answer is no, I don't believe we've run into it. The standard answer is that at roughly eighty percent full you should definitely be on the watch for those issues — obviously that's an area people keep working on, pushing it to 81 percent, 85, 90, etc. — but we're not anywhere near that full at this time.
Great — we can refresh the index file, and that's how we're getting that eight percent or whatever boost. I think it's a little weird, and going forward it complicates a lot of the thinking about how much over-read I'm going to get, because no one really knows exactly how much is being read: Cassandra says "I want to read this block, which I think is this big," but it's actually compressed, and then ZFS is underneath doing its own thing.
A
That's
the
question
is
about
licensing
specifically
Linux.
It
is
famously
under
the
GPL
v2
only
while
ZFS
is
under
the
cuddle,
I
think
the
smoothly
organization
def,
we
didn't
have
any
problem
with
that
I
believe
there's
an
extensive
like
a
question
and
answer
section
on
ZFS
on
my
Nexus
Network
website.
Thank
the
short
summary
is,
if
you're
not
distribute
play
many
things,
if
you're,
not
a
distributor
of
compiled,
binaries
you're.
Fine,
if
you
want
to
sell
like
a
vendor
product,
we're
going
to
combine
the
two
together.
I
should
probably
popular
more.