From YouTube: RH InkTank Ceph Day Sessions Mario Blandini HGST
Description
Ceph Day Boston 2014
http://www.inktank.com/cephdays/boston/
That transformation of infrastructure to meet the demands of new applications that Sage talked about: I think all of us are here because of that, right? You're here to look at ways you can scale storage for your applications, and the topic I'm talking about is a new one in the industry, specifically the Open Ethernet Drive Architecture. Yeah, go ahead and blame me.
I'm a marketing guy and I came up with the darn term, but rather than come up with a code name or something, I'm more of a practical guy. I'm no longer qualified to be a Unix sysadmin, which is why I'm in marketing, but generally speaking I like things to be descriptive, and that's certainly what this is.
So let's go ahead and get into the discussion a little bit. Before I do, however, you may not know who HGST is. If you don't: we are the makers of arguably the best storage products on the planet, hard drives, SSDs, PCIe flash, and software that store the world's most valuable data.
Now, that world's most valuable data is often in the systems that you deploy, and we're working on things that are arguably larger than any single one of those things I described. That's part of what I'm going to be talking about here, things that are a little bit larger than that. So why am I showing this forward-looking statement?
Most of us have mobile phones. If you're out trying to trade Western Digital stock right now because I'm saying really exciting things, you can do that after the meeting. I'm going to give you a lot of forward-looking statements here, things about the future, cool things that we're working on with Inktank to bring to market. Naturally, you can't expect me to come back and update you on every one of them; we've opted to update you as things develop, so that's what the forward-looking statement is.
It's okay, there you go. All of us who went to school and studied math: something's not lining up there, right? So clearly there need to be some new technological innovations, and I like to think of the high jump. I do that because I'm a five-foot-six, 200-pound guy and I like to imagine myself high jumping. What happened in the first 50 years of the high jump in the Olympics?
You could reasonably expect to squeeze out fifteen percent about every 50 years, a very small amount of incremental upside. Do things a little bit differently and you can get a really big impact right away. So that's why I'm passionate about this stuff. I've got some other stickers up here; you can find me at SwiftMario on Twitter, I used to work at SwiftStack, so I really love this stuff. Why?
Because the data here is telling us we need to do something a little bit different. So why is HGST interested in this at all? Well, if you think about what type of data is being created, the people who use our infrastructure are telling us you can't just put it on tape and stick it away forever, because part of the value of that data is being able to get access to it. I've done lots of tape backups in the past; you can see I was an early Veritas user.
Back in the day, you know, NDMP didn't work right, so I lost my hair doing that stuff. Not that that goes away; it's just that a lot of the things you want to store, you want to be able to access in seconds, not hours or days. Now, we at HGST make the fastest PCIe flash cards on the planet too, if you want to access it super, super quick; see my buddies there in the back.
If you want to talk about that, the reality is, though, for the data pyramid there's some amount of it in the middle toward the bottom that was probably stuff you wouldn't think you need to readily access, or, according to this, that you didn't even store. Think about the bits that are created today: sensor data, log data, all the video imagery, all that sort of stuff. How much of that stuff is actually being saved? Not a lot. But if you build a really cool Ceph cluster, you'll have a better chance at saving a lot of that stuff.
So anyway, analytics, compliance: I'm not going to ask for a show of hands, but I imagine most every company is looking at those sorts of initiatives. So how do we store more data in a more effective way? I think you get there by having more affordable disk media solutions that can offer you better price-capacity, price-density, all that sort of stuff. It's not that you want to spend less; you just want to get more out of what you're spending, and software-defined storage helps do that. All right.
So let's go ahead and break down the language here. Do we have any native French, Spanish, or other Romance language speakers? Anyone? Okay, we have a couple back there. You may look at this and say that the noun is either "open" or "Ethernet", but in Germanic languages like the English we speak, the noun is "architecture", right? But as tech people we look at that and we're like, it's got to be about Ethernet. Ethernet's cool; it's got to be about the Ethernet.
A
Well,
this
happens
to
have
Ethernet
as
a
descriptor,
but
the
noun
is
architecture
as
a
building
block
for
the
software-defined
data
center.
So
what
does
a
CSE
do?
We
are
the
best
makers
of
these
storage
building
blocks
and
think
of
this
architecture
as
a
larger,
storing
built,
storage,
building
block
the
same
way,
you
would
look
at
a
a
intel.
Server
jammed
full
of
a
bunch
of
SAS
drives
today.
That
is
a
node
right
and
you
think
of
that.
A
As
a
unit
of
measure,
think
of
this
architecture
now
as
being
a
unit
of
measure
providing
some
new
cool
capabilities
and
to
look
at
some
of
the
other
descriptors,
they
wouldn't've
invited
me
here.
If
it
wasn't
open,
I
think
sage
personally
blocks
people
at
the
door
if
you're
not
into
the
open
thing
right
and
ethernet
happens
to
be.
What
do
people
geek
out
about
I'll
describe
that
here
in
a
second
and
drive?
Yes,
that's
what
we
do
at
hgst
we
make
drives,
but
think
of
it
as
a
foundational
building
block
for
the
software-defined
data
center.
All right. Do you want a vendor-specific API, people? What if I told you, hey, you know what, I've got some really cool stuff, here's my API, you want some of that? Well, unless it was really, really awesome, which is hardly ever, but even then, right, even then you're thinking: yeah, I've got tickets to the Red Sox game, but I have to walk five miles, and you know, in the rain. I don't know. Are you that hardcore, that you'd walk five miles in the rain?
Maybe you would; maybe that's a bad analogy. But there comes some point where something's really cool but you're like, that's not worth it, right? So our view is you can't scale out if you have a vendor-specific API, because it's just never going to be there. I put an API out, you guys want to innovate, I have to up-rev the API; the limitations just don't work. Why not give you something open instead? So that's why I'm so stoked about this. We're going to go a little Mission Impossible.
I had to do this because we're going to go straight down to the CLI in the demo, and I want to create some contrast, right? It's more like partying on the hard drive, CLI style. How will we get to party on the hard drive at the CLI? It's because we have open software and hardware; it's part of our Open Ethernet Drive Architecture. And I think in our demo you'll see the concept that if you take services that would be distributed in an Intel node architecture and move some of those services closer to the drive itself, there's an opportunity to do some cool things. I think everybody knows that; now it's on us to prove that it's actually cool, and I'll tell you a little more about those developments here. But we want to distribute those services across the infrastructure, as close to the drive media as you can. How do we do that?
Anybody heard of Debian before? That's what you get: Debian. We have an Ethernet driver we wrote; of course it's open source, you can do whatever you want to the image. It helps you get access to the Ethernet ports on the drive. It looks like eth0, so you can do, you know, UDP, TCP, AoE, any of these you want, right? It's open source, do whatever you like. Secondly, you get access to the drive's CPU. You already know how to use the drive and the CPU and the memory that's on the drive, because it's an open Linux environment.
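A minimal sketch of what that looks like in practice, assuming a hypothetical drive reachable at 10.20.106.51 running the Debian image described above; the address, login, and interface names here are illustrative, not from the talk:

    # From any client on the same network: the drive answers like a normal Linux host.
    ping -c 3 10.20.106.51

    # Log in over SSH (credentials are whatever you baked into your image).
    ssh root@10.20.106.51

    # On the drive itself, the Ethernet port shows up as a standard interface...
    ip addr show eth0

    # ...and underneath it is an ordinary ARM Linux environment.
    uname -a
    cat /proc/cpuinfo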
You can do what you want, which is why I'm trying to coin the phrase "party on the hard drive"; they're not liking that too much. Anyway, the drive happens to have Ethernet connectivity, and I'll show you how that comes to fruition here in a second. But the idea here is that it's like a micro server. I don't want to call it a micro server, because then everybody's going to have hackathons and put all sorts of wacky stuff on it. At the end of the day, when people say yeah, this is cool, I ask them, hey, what actually matters?
Well, if I push real hard, it's the low cost, which wins every time. I think part of being successful with these software-defined storage solutions is really redefining how much these darn things cost. So if I were to put the biggest, baddest processor on the planet on there, it defeats the purpose. If I put a modest infrastructure there, it allows you to run a modest set of services that keeps one hard drive spindle busy and helps do peer-to-peer communication between multiple little micro server nodes.
There could be some really cool things going on there. So this does not eliminate the need for CPUs at all. In fact, if you add erasure coding, what happens to your need for CPUs in that environment? Work with me, people: it goes up, right? So there are opportunities to distribute things, to better distribute the load. That's kind of what we're thinking about here as far as this architecture goes: it runs Linux, you get to do what you want with the CPU and the memory, and it connects via Ethernet.
Here's the reason why we want to connect it via Ethernet. I had a good discussion with a couple of gentlemen here, I won't out them, we'll withhold the names to protect the innocent. They said, hey, you know, we're kind of the glue that glues everybody together, but when we bring these tested solutions to folks, the storage people would say, oh, I can't manage that, that's not my thing. It's not a storage array. It's not from, you know,
the vendor I buy storage from, yeah, I can't manage that. The server guys: well, I can't do that either, because that's not my area. The reality is, if it connects via Ethernet, they're there. The hope is it looks a lot like a server, connects like a server, and can be managed using the same management tools that you have today. So there's no need for special monitoring software. If you want to monitor a Linux box, what do you do today? You put some stuff on it and you're on it, cool?
That's what you do. You want to check out what your network is doing? You already know how to do that sort of stuff.
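A hedged sketch of that idea, assuming the drive runs the Debian image and you use whatever agent-based monitoring you already have; the package names and addresses below are just examples of standard tools, not anything specific to the talk:

    # On the drive, same as any other Debian host: install the agent you already use.
    apt-get update && apt-get install -y collectd          # generic metrics agent from Debian
    apt-get install -y nagios-nrpe-server                  # or your Nagios/Icinga check agent

    # From your existing monitoring box, nothing special: it's just another IP on the network.
    nc -zv 10.20.106.51 22                                  # basic reachability/port check
    snmpwalk -v2c -c public 10.20.106.51 system             # if you chose to run snmpd on it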
So the message around Ethernet is: it's the easiest connectivity possible, plus a lot of the apps that we're doing are using RESTful APIs, all the way down the Ethernet route. So who's with me so far? Cool. I'm going to do a quick check to see if my friend has joined. The participants are not actors; this is a live-fire exercise.
So let's show you a little bit about what the hardware looks like, because people always ask, whoa, that looks cool, what does it look like? What does it look like to you? Kind of boring, like a hard drive. In fact, it looks like a three-and-a-half-inch hard drive; the SAS connector is in the same darn spot. It's a SAS connector, which happens to have pins that carry multiple signals, and for us, we can use those pins for different things.
Zero, one, or two? The answer is one. All right, a couple of people get a free sticker over in the back. By the way, I'm going to plug: we have a prize back there, something that looks a lot like a hard drive, for your personal use, to back your stuff up. You need to make sure you stop by and see Kem there at the back, throw your card in the fishbowl, and we will randomly select a lucky winner.
I'm glad we chose a 4-terabyte one for this. So people ask me: hey, Mario, can I get this in an SSD? Why? Mostly because, I think, it's just really cool; wouldn't it be really cool to have an SSD with the Linux head on it, and I could do all sorts of stuff? We've chosen to do a 4-terabyte drive, and here's
why. A lot of the feedback that we've gotten is that these scale-out clusters tend to be more capacity oriented, and that at the lowest level of the storage tier in a scale-out cluster, capacity and price-capacity are what people are looking for. That being said, this design, where we have an additional ARM core or two on there, additional DRAM, and the Ethernet built in, is something that can be applied to any drive technology that we have. And people ask me: hey, Mario, can I go buy one of these things?
The answer is, we will productize it when we get that killer combination of a killer application with the architecture. The reason is, while this is cool, you might put it in your lab and play with it for a while, but it could end up like a lot of those cool toys that your kids play with and then simply never use again.
We don't want to be one of those. In fact, when we know that it's something you're going to use every day, because the app is just truly killer and the software is optimized to run on it, that's when you can expect to see it in the marketplace. I'll talk a little bit more about some of the milestones here in the future, but for now it's not for sale, even while we're talking about it. My guess is you're stimulated, because most people who hear this are stimulated, right? Like, this is cool.
They like that sort of stuff: Ethernet connectivity, integrated SoC, minimal cost uplift, because if it wasn't affordable it's something you couldn't deploy, and this is how we do it; we do it in a very efficient way. Now, I thought I heard myself say it connects with Ethernet, and here's how: we take the drive and we put it into an enclosure which holds hard drives the way an industry-standard server would hold lots of hard drives. This particular reference design has 60 slots, and, you know, some power supplies in the back.
They go in dishwasher style. For your reference design: 60 drives in there, 240 terabytes with 4-terabyte drives. It has an embedded switch fabric, so this is more like a blade server than it is a storage enclosure, in the sense that you have a blade server with a bunch of nodes in it and there'd be an integrated top-of-rack switch in the blade server. You could think of it like this, and so we have two of those back there.
The rendering here only has two ports on the back, but actually our reference design has eight ports of 10 GigE on the back side. So, 60 drives thrown in on the inside, eight 10-gig ports on the outside. Now, is this the final configuration? Naturally not. We need to figure out the right recipe in order to make this work, but what it allows us to do is test at scale, really test a lot of these solutions at scale, versus a single little dongle adapter
on the back of one drive. That's kind of cool, but you'll play with that for a little bit and throw it away. Here we can actually test it out in a really large, meaningful fashion. Now, the devices look like Linux servers on your network, so here's some photographic evidence; the chassis was a little too heavy for the overhead bin on the flight over, but yeah, the dishes go in like that, and the drives sit up and down inside that way.
We have some photos of the stuff that was there. Did anybody go to the OpenStack Summit in Atlanta? All right, quite a lot of the Red Hat and Inktank folks did. We had a lot of folks coming by, because whatever has flashing lights and real running hardware draws people, you know; there's the sound of the fans too, I think you know what I mean. Meanwhile, we have some demos in the back.
No? Hey, somebody call Karan, look him up online and give him a call. No worries, though, I've got more stuff, though we should just play the George Carlin video right now, that would be perfect. If you haven't, go to YouTube and find George Carlin's "Stuff" bit; it's absolutely epic. Alright! So what kind of stuff do we have running today? Really, the coming-out party for this technology was the OpenStack Summit.
Previously, nobody even knew who HGST was, by and large, and they certainly didn't know that we're doing some cool things around open source. So what we did, because we didn't have to ask anybody's permission, is we simply went to GitHub, grabbed some stuff, compiled it for our ARM processor, threw it on the box, and what we found was some pretty cool stuff.
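A rough sketch of what "grab it and build it for ARM" could look like, assuming a Debian-based ARM build environment like the one on the drive; these are generic Debian packaging steps for illustration, not HGST's actual build recipe:

    # On an armhf Debian box (deb-src lines enabled): pull the packaged Ceph source and build deps.
    apt-get source ceph
    apt-get build-dep -y ceph

    # Rebuild the binary packages natively for this architecture.
    cd ceph-*/
    dpkg-buildpackage -us -uc -b

    # Install the resulting .debs on the drive: now ceph-osd / ceph-mon are built for ARM.
    dpkg -i ../ceph*_armhf.deb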
We naturally can run Ceph, and we run each drive as its own OSD, and a couple of them as monitors plus OSD, just to kind of show that it's flexible; it's a Linux environment. And go ahead and queue up your favorite commands, because we're going to go full live demo, unplugged, and let you shout out commands that we'll type in, and we'll actually see what happens.
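To picture that layout, here is roughly what a cluster built that way might report, with each drive showing up as its own OSD host and a couple of them also running monitors; the host names, IDs, and weights below are invented for illustration:

    # Overall health, monitor quorum, and OSD count for the little cluster.
    ceph -s

    # One OSD per Ethernet drive; a couple of the drives also run a ceph-mon.
    ceph osd tree
    # ID  WEIGHT  TYPE NAME              UP/DOWN
    # -1  12.0    root default
    # -2   4.0        host oed-drive-01  (also runs mon.a)
    #  0   4.0            osd.0          up
    # -3   4.0        host oed-drive-02  (also runs mon.b)
    #  1   4.0            osd.1          up
    # -4   4.0        host oed-drive-03
    #  2   4.0            osd.2          up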
Let me go back to the demo in a bit; we'll give them like two more minutes to join, we're radically ahead of schedule. I think that's one of the things that's blocking us here. But really, the goal of our technology demonstration was to work with cloud software engineers on these new building blocks for the data center of the future. Now, I assume you guys all paid money to come here because you believe this software-defined stuff really does have a role in the data center of the future, right?
So what's going to stop us? Anybody philosophical up in here? What's going to stop us from being successful in using these scale-out, open-source software storage solutions? I'll be glad to play devil's advocate. I just need my buddy on the other side to get it spun up.
Our hope is that this can really help make it a lot more consumable as well. So, marketing statement at the top: yeah, you can move data-centric services as close to the data as possible. Did anybody, thinking about analytics in the stuff that I presented, think: oh, there's a processor down there, maybe I can ship some little SETI-like code to go down there, check this thing out, and give me some answers back?
I used to have a big NetBackup environment; I worked for a storage service provider, which, you know, got a lot of money and then went out of business like we all did, and we ran SETI@home. We got into, like, the top one tenth of one percent of contributors back then, because we weren't using the servers for anything else.
But I've always been fascinated with that idea: we have all this big data, and what if we could ship more of the processing closer to where the data is, so we can eliminate some of that traffic back and forth? Because, as was discussed earlier, that back-end network, the InfiniBand or Mellanox fabric, gets pretty busy in a lot of these clusters. So maybe there's a way we can send less across it; there's an opportunity in some of this.
Anybody read that Facebook paper that was done? A lot of people have, and I think the point was that in scale-out solutions you design for failure. You expect there to be failures, but when it rains it pours, and there are more failures at the same time than you would have expected. Quite honestly, it's one of those tough things to plan for, and one of the things about,
you know, production systems: if you plan for fifty percent of your capacity being out, you're going to be radically over-provisioned, so you're trying to find that sweet spot there in the middle. One of the things that the HGST research team has been looking at is really understanding the role that drives play in new durability schemes for storage, because I think everybody agrees RAID was awesome. I worked back at Adaptec in the 90s when we had RAID on the motherboard; that stuff was really cool. But
you can't create 100-petabyte RAID sets anymore, so the technologies need to change in terms of how we really survive failures in those scale-out solutions and, you know, have data recovery there. Here's what's going on in terms of the traffic: if you design for failure, you really want to try to minimize the amount of impact that a failure has. So this paper did talk a lot about how lots of things happen while the system is trying to go through and recover itself.
So the method in which you do your erasure coding can benefit that. And some of the feedback we've gotten: because we're a hard drive company, right, you'd think we'd love replicas, all day long, three, four, or five of them. That's awesome, right, because you use a lot more hard drives. But the reality is you're not going to spend any more or less money; you're always going to spend everything you've got, right? You just want to use as much of it as possible. So for us we're thinking:
okay, great, that means you'll just have more net usable capacity. You'll still buy all the same stuff; you'll just get a lot more net usable out of it if you use erasure codes. How many of you are thinking about erasure codes in the future? Not for every application, right; some applications benefit greatly from replicas, others don't.
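To make the replica-versus-erasure-code arithmetic concrete, here is a hedged sketch using Ceph's own erasure-coded pools (available from the Firefly release); the profile parameters and pool names are just an example, not a recommendation from the talk:

    # 3-way replication: three copies of every object, so roughly 1/3 of raw capacity is usable.
    ceph osd pool create replicated-pool 128 128 replicated
    ceph osd pool set replicated-pool size 3

    # Erasure coding with k=4 data chunks and m=2 coding chunks: survives two failures,
    # but usable capacity is k/(k+m) = 4/6, about two thirds of raw, roughly double the
    # net usable you get from 3x replicas on the same hardware.
    ceph osd erasure-code-profile set example-profile k=4 m=2
    ceph osd pool create ec-pool 128 128 erasure example-profile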
So we're not changing that. We're also looking at leveraging XOR functions that would be there in the drive, so as you're doing software-defined stuff you're not incurring any extra work you might otherwise be doing as part of that. Where it really comes in is optimizing the bandwidth and access ratios for recovery, because you can't design for no failures; you want to minimize the cost of failures. I've even talked to a lot of object storage companies, folks that buy product from HGST, and one of their biggest challenges is telling customers how to deploy all their stuff
so that any number of combinations of failures wouldn't result in too much, you know, hardship on the solution. That's really hard to do when your unit of measure is three nodes, and even harder if you have, let's say, 20 nodes, but those 20 nodes are split across two data centers, and where do you put what, where, and how aggressively?
The reason I can't give you any more information is because we want to make sure that we get all that stuff legally signed off before we do it, but we have some smart folks working on this stuff. Because Parkinson's Law probably is true: usage will scale to meet the capacity of the solution. It's just going to happen.
So if we can give you a lot more capacity, you're going to do a lot more with it, and if you can do a lot more for the same spend, that's what our bosses, I think, try to pay us on, that and whether we show up to work on time. But hey, a lot of that stuff you can do from your home, you know, VPN in. All right, want to learn more about this sort of stuff?
Yes, while we try to connect it all up: request permission to show you a very brief commercial. I've got the microphone, so I guess I can do what I want. Here's my last slide before we get into the live demo, and hopefully he jumps on here real quick. We're definitely happy to take a growing role in the Ceph community. We've done a couple of doc contributions to date; those are always easy, but part of that is so our engineering team can get ramped up on it.
We're also exploring the optimization of Ceph services to run as close to the drive media as possible, the Open Ethernet Drive stuff: how can we give you that as an option which truly scales that stuff out? And if you've got an idea on how you could even use that for some of your own code, fantastic, I'd love to hear your ideas there. And, you know, solutions for software-defined, scale-out storage are something we're working on. Has anybody heard of helium?
Remember the days of pagers, when you'd page somebody and be thinking, hey, did they get my page? Did they get my page? Nowadays you can see whether or not your SMS went through, delivered or not. So them looking at the phone there was a different thing. All right, well, while we go and get that set up, in fact, I'll transfer ownership over to him.
Yes, as an option, and we just put it up there because it's all about choice, right; there are different use cases for different types of solutions. So hopefully Karan can pump up his thing. Karan, can you hear me? If you can pump up the size of your display, if you can pump up the size of your display, that'd be great.
Hey, so can you hear me, Karan? You can hear me? Good. Alright, pump up the size of your display if you don't mind; I'm going to grab my notes. Alright, drumroll please, it's party on the hard drive, people; we're going to show some live demo stuff here. I'm going to grab some notes to make sure I don't miss any of my key points.
Alright, so that's the HGST website, that's Cisco WebEx, and what the heck is this? This is a bit of code that we took; it's based on Mirantis Fuel, ripped down, and we threw together a quick visualization. We were going to the OpenStack Summit and figured, heck, we might want to just use the CLI only, but you've got a bunch of press and analyst folks there and they've got to see,
You
know
something
that
looks
like
a
GUI,
so
we
threw
together
a
GUI
here
which
shows
that
we're
in
the
same
environment,
running
an
OpenStack
environment,
a
SEF
cluster
and
a
cluster
cluster
is
cluster
cluster.
A
real
word
say
that
ten
times
fast
cluster
cluster
go.
Let's
go
out
hooked
into
it.
The
the
cool
thing
about
this
is
because
it's
the
same
code
that
runs
on
Intel
nodes
just
compiled
for
that
arm
and
running
there.
You
can
mix
and
match
all
this
cool
hard
hard
drive
technology
with
existing
nodes.
Today.
That makes it pretty cool. Now, as we drill into it, we'll see what we have. These are our Ceph nodes, our Ceph cluster. Here we've got an Intel server running our RADOS Gateway and, you know, a monitor there. If I look at that, you'll see that it has two VLANs; you really need your glasses for that, or the resolution is not looking so good.
You'll trust me: it says 106 and 107 as my two VLANs that are there, one, you know, the back side and one the front side. We do have three of our Open Ethernet Drives that are deployed here. These two have the, you know, storage and monitor on them; the other one just has the OSD. You have IP addresses for each one of those, and we'll drill into each of them and show you kind of how they work.
The next thing we want to do is go into the client machine. It's just, you know, a standard box, a Linux box itself, and we're going to go through the VLAN and just show the networking that's there. Yeah, that is the client machine, thanks. Hey, Karan, if you can hear me, if you can just increase that text at all, that'd be super helpful.
If you can center the box a little bit, Karan; we have a picky crowd here today, high maintenance. You showed the networking there, and I think it looks fine, but if you can move the window over just to the right, the whole window, just a little bit, that'd be helpful.
Boom, yes, it's pretty big, so go ahead and drop that down. But generally speaking, what we want to show you is that we can show the network topology here, and what do we notice? We've got that one VLAN out there that's going to that side of the network, the external part of the network, and if you go back and look at it, yeah, it matches that one VLAN out there. So what do you say we start moving some data?
So we're going to actually use some OpenStack APIs to move some data across. That's what's so cool about Ceph. Do you know what the number two most-used block storage driver in all of OpenStack is? You've got to get motivated about that: it's Ceph, people. No one would have thought that, right, like whoa, number two, totally popular. All right, so rocking it out OpenStack style: you'll see that we've, you know, done a command there; you're using the RESTful API. Let's go ahead and upload a file to that particular node.
It takes a second, all right, and we should see that it moves some stuff pretty well. The big file is in the house now, so you can, you know, check to see if it's there; naturally, you expect it would be. You can remove it, you can do that sort of stuff. So what he's going to do now is go ahead and download it, and you'll see here that, yeah, like you'd have reason to expect, he can compare it now.
Hopefully you're not too stimulated by this, because the reality is it's the most basic, elementary stuff. What it's showing you is that, just for fun, we downloaded some stuff, we compiled it, threw it on there, and it's working really, really well. So that concludes kind of the stuff we're going to do to move things around.
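For anyone who wants to replay that part at home, the demo steps map roughly onto the standard OpenStack Swift client commands, something like the following; the container and file names are made up, and this assumes a Swift-compatible endpoint such as the RADOS Gateway shown in the demo:

    # Upload a big file through the RESTful API.
    swift upload demo-container bigfile.iso

    # Check that it's there, then pull it back down.
    swift list demo-container
    swift download demo-container bigfile.iso -o bigfile.copy.iso

    # Compare the round-tripped copy against the original, and clean up if you like.
    md5sum bigfile.iso bigfile.copy.iso
    swift delete demo-container bigfile.iso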
But let's go ahead and go back to the gateway itself; I'm interested in going back there and showing the config, you know, of the monitors and the OSDs.
All right, so he's there. That's an Intel node; as we've described, the RADOS Gateway is a full-blown Intel server. And if he now goes and shows us the monitors and the OSDs, you see here that, yeah, that's what's there, and we can see you have the different IP addresses and stuff. All right, no smoke and mirrors so far.
What he's going to show us now, though, is that we have, you know, an Ethernet interface on one of the drives themselves. Let's go ahead and go down to one of those and see that. You'll see it's got its VLANs and its IP address there; do we see it? There it is, we'll see it here in a second. SSH into that guy.
We can also show some of the processes that are running on the box; when I say box, think of it as a box, but that box is a little drive with a system-on-chip, an ARMv7, on it. So you see we're running those particular services on it, that monitor and OSD. Now, this may not be the optimal way to deploy it.
We're going to be working with some of the folks over at Inktank to do some hackathoning and some other stuff to see how to do it even better, but this is just us having fun grabbing the stuff and deploying it on the drive. So, just so everybody knows, this is a little bitty three-and-a-half-inch hard drive running Linux, thinking that it's a big bad OSD.
I told you it was Linux; it's cool. Okay, so, you know, people might want to think that we're faking it, so go ahead and do an ip link. I like that.
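Roughly what that on-drive check looks like, again with invented addresses; the point is just that these are ordinary Linux commands running on the drive itself:

    # SSH straight into one of the drives.
    ssh root@10.20.106.53

    # The Ceph daemons are normal processes on the little ARM SoC.
    ps aux | egrep 'ceph-(mon|osd)'

    # And the network stack is the stock Linux one: physical port plus VLAN interfaces.
    ip link show
    ip addr show eth0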
That's a fantastic question. So, the question was, very observant: there are two switch fabric modules in the back of the chassis; how does it provide redundancy inside the system? Today, for our development, we have a single Ethernet port in use on the demo drive. Whether or not it makes sense to take both of those and run them teamed, or take each one and run them to different networks, a lot of that is going to be an implementation decision that we want to make in cooperation with a software partner, to find out the best way to run it.
If you think about what this would replace, it's a SAS adapter connected to a disk, which in many cases is just one connection in a design-for-failure type of environment. So we kind of wanted to mimic that from a behavior and a cost perspective to start, but we're not limited by the design in that way. We put the chassis together just because it was a commercially available, off-the-price-list chassis, which happened to come with two switch modules in it, for now.
So that was a very observant question; put an extra business card in there, you'll increase your chances of winning a hard drive. I've got nothing to give you, though. Actually, I take that back, I've got a sticker, if you want a sticker. Other than that, has anybody got a nice command for Karan?
So it's our belief that, well, I guess some people ask, hey, can I get a different version of Linux on that? Not sure; let's work with some software developers. But generally speaking, what we want to provide flexibility for is any software that runs in a Linux environment. It may be kind of hard for us to support every version under the sun, and you'll see we've got what's probably the latest stable version of Debian up there right now, and we're working on, you know, the next version in our labs as well. Question?
Yeah, the idea is that we would want to give the flexibility to the developer. If there's anything you take away from this presentation, folks, other than "that loud guy from California", it's that this is an open platform. We want you to do what you want to do with it, because that's the way the best creative solution is going to be created. We're not going to give you a key-value
value.
A
Api,
that's
only
limited
to
one
one
thing:
you
can
put
anything
you
want
on
it,
but
of
course
you
would
have
a
different
mac
address
and
we
you
have
to
manage
that,
but
a
lot
of
people
would
say
is
60
drives
behind
one
switch.
Oh,
that
looks
weird
I.
Don't
that's
too
many
devices
to
manage?
Well,
you
guys
know
how
to
manage.
You
know.
Ip
addresses
right,
it's
a
Linux
environment.
Do
what
you
want
you
firewall!
You
can.
You
know
load
balance.
You
can
do
all
sorts
other
wacky
stuff
from
a
network
perspective.
Our view is, if we work with the folks at Inktank and they come up with a killer recipe that uses one of our hybrid drives, great, slap that board on a hybrid drive and that becomes a really killer solution. We're not married to the specific drive itself. And the second most universally said thing after someone sees this, well, the first one is always, hey, put that on flash, an SSD, and it'll be awesome.
The second thing is putting it on a hybrid drive; that way you can create a bunch of cool RAM disks and do all sorts of other crazy stuff. That being said, it is a micro server, so then we've got to figure it out from a creativity perspective. Hopefully I haven't sold past the close, people: it's open, you get to do what you want. What's it good for? That's
why we're out here talking to you guys: because we want to know what kind of use cases you might see, we want to see how it adds value, and the best way to do that is to actually do some testing and such. And the best way to do that is to work with the smartest of the software folks out there, which is why we're very happy to have been invited here to Ceph Day. Can I get a round of applause for Karan, live from Denver, folks?
He said sorry he was late. All right, hey, thanks Karan. All right, so it was live enough; we screwed it up enough. Clearly, if that wasn't live, we really did a bad job, because nothing scripted would have been that good. I have my arigatou gozaimasu slide too: thank you very much for your time. I think we're going to do the panel and you can hit me with more questions then, but it's been a lot of fun, folks, and I...