From YouTube: Dell @ Ceph Day London
A: Thinking of doing this sort of thing, are you interested? I looked at my director of innovation and asked whether we were interested, and he said: do what you like, guys, you're EMEA, you can run your own thing, Europe, Middle East and Africa. Actually that really throws our Texas colleagues; for any Texans in the room, that's "rest of world", but then again so is the rest of us for any Texans in the room. We know that, so, oh gosh, see you here later.

A: At the same event we had the Inktank guys guesting on our booth, and the year after that they were also guesting on our booth, and then they announced the Red Hat acquisition. I think Steve will talk about what we do there and what we think. Is this brilliant? No, it was certainly not there on day one, but one of the things I wanted to just cover on this is this.
A: First thing is, okay: why is Dell interested in Red Hat, Inktank, Ceph? There's an interesting statistic doing the rounds on social media: I think it was IDC who said Dell were number one in storage in terms of the volume of storage we've sold. Some people say, yes, but you sell a lot of disks inside laptops and desktops, and yeah, I get that. I don't know how IDC did this, but it certainly put us through some of our more classic competitors, EMC and NetApp.

A: So you say, hang on, what's Dell doing at number one? I'd argue back to you guys, and to anyone who's prepared to listen, that it's because we embraced products like Ceph. This is great for Dell. You can buy support from Dell; if you feel the need to buy Red Hat support, you can buy that from Dell. You've got Dell credit, and Dell own a bank, so we can do all sorts of smart things with that.

A: You can buy services and training from Dell, and that's really good as well, and of course you can download it for free. I'll be brutally honest with you: we as Dell are not that worried if that's the route you choose. That is not a bad route; in fact, we'd encourage you to start with that route, and Steve will talk about that.
A: You know, I've really worked this slide hard for CEOs, marketing and finance, and we spent a lot of money on this one. This is what we're very good at; that's how we sell it. It's inherently complex, but that's how we help everyone understand, in the Dell sales engine and in our channel, whether it's OpenStack or Ceph: it's stuff, and you, Mr. or Mrs. Executive, get stuff. And nine times out of ten we have a brilliant moment, a brilliant moment, saying you will be able to get more than your competition for a lower price.

A: It's a double whammy. So, just before I hand over to Steve: this is a fantastic event, we sponsored the Frankfurt event, and we love Ceph because it allows us to say to customers, you will be able to do more for less. It's absolutely what we're about, so congratulations for spotting that. I'll be back in a minute, because we're going to throw some ideas at you, but now I'm going to hand you over to the brains of the organisation. Follow him on Twitter, Mr. Steven. Thank you. Thank you, Paul.
B: That's good, cool, come on. Thank you very much, yeah, I appreciate that. So, I'm Steve Smith. I'm not a storage guy; I have got a suit, so you can tell that.

B: I'm not a lot of a Ceph expert either, so if you've got any tricky questions, don't ask me; out there you probably have more knowledge of Ceph itself than I do. But I think there's an approach that I use, which hopefully will come through in the next few slides, that helps you maybe take a slightly different look at how to get the most out of what you're running this stuff on. I'd also like to thank Kyle and the Inktank guys, as I stole a lot of their information off their website.
B: So consider yourselves acknowledged. I think this is a fairly standard, business-as-usual sort of slide here, right? If you're going to put a big thing in, be it a storage system or a new VDI system or whatever, you need to plan it, and you need to think about what your business objectives are, what your sizing is going to be like, and so on. In particular with storage we need to think about traditional things like IOPS and throughput.

B: In the HPC business we're mainly concerned with very fast throughput, and we use Lustre and GPFS and big file systems like that, and we typically deliver gigabytes or tens of gigabytes per second of storage throughput to the cluster clients. OK, imagine you've got a thousand processing cores all working on a model of a Formula One race car.
B: It was quite interesting that the original funding for Ceph was part of a DOE grant, at Livermore, for an HPC system, and I know that Ceph has been run at one of the National Labs on a DDN system, which is a fairly expensive piece of kit; I think it was an SFA12K, which is rated at 12 gigabytes per second, and they got 10 gigabytes per second out of it using the block device.

B: So that's quite good, actually. From that point of view, is it as good as a pure parallel file system? Not quite, but hey, we're not here to talk about that. So clearly we need to think about what we're going to store in it, why we need this thing in the first place, what's wrong with what we've got, and have some envelope or boundaries that show us what we're going to do. We also need to understand some architectural things around how we're going to implement Ceph and which bits we're going to make use of.
B: Are we going to go for absolute reliability, or are we really doing a test based on lowest cost, to try this thing out with a few tame users, and so on? Things like, you know, if we've got a multi-rack implementation going on, do we want rack failure as a domain? Do we want data centre as a domain, and how are we going to plan for those things?

B: Because Ceph can do that. We have customers already doing it, storing multiple petabytes across multiple sites, and we need to be able to plan that and work out the best way of architecting that overall system, to make sure that what we end up with suits our requirements and delivers what we need. There was one webinar I was listening to yesterday, actually, where Mark, I think he's a CTO or something, was talking about data redundancy and durability, and he made the comment that three copies equals eight nines of availability.
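Roughly where a figure like that comes from: with three copies you only lose an object if both remaining replicas die before the first loss has been re-replicated. A minimal back-of-the-envelope sketch, using invented failure-rate and rebuild numbers rather than Mark's actual model:

    # Illustrative only: independent drive failures and a fixed rebuild window.
    afr = 0.04                    # assumed annual failure rate of one drive (4%)
    rebuild_hours = 6.0           # assumed time to re-replicate the lost copies
    p_one = afr * rebuild_hours / (365 * 24)   # chance one drive dies in that window

    # Data is lost only if BOTH surviving replicas fail during the rebuild window.
    p_loss = p_one ** 2
    print(f"chance of loss after a failure: {p_loss:.2e}")
    print(f"resulting durability: {1 - p_loss:.10f}")   # a long string of nines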
B: The other thing we need to do is understand what we're looking for from our workload perspective: what sort of apps are going to run, what sort of users are we going to have on there? These two sides are not mutually exclusive. You can equally have a high IOPS requirement, or high memory bandwidth, or high storage bandwidth, combined with very high capacity requirements, so they're not mutually exclusive, but you do need to understand where you want to get to with this stuff, because that might affect how you start deploying in the first place.
B: I won't dwell too long on that. That is another slide I've stolen from Inktank, you can go and get it on the web, and this one is also another slide that's stolen, just to refresh on the architecture. I'm really going to look at what we can do around building the storage servers, the place where you host the OSDs, and the monitor nodes. OK, you only have three or five of those, typically; they're quite small.

B: They don't do much, but they're very, very important, so look after them. And the gateway, okay: those are the things I'll concentrate on, and the back-end network as well. Clearly we could spend days on performance tuning and looking at this stuff, so in a half-hour, very fast overview you're not going to get a great deal of depth, but hopefully I'll give you some pointers and hopefully look at some things that aren't normally considered in this sort of area.
B: The thing I do is try and understand what the software does. So in this case, let's have a look at what Ceph does from a write point of view. I'm a client, I want to do a write, and I want to do three parallel copies; this is very simplified, right? So I've got this three-megabyte chunk of data, this object, and I say, right, I'll split it into three one-megabyte chunks and I'll write three times in parallel. So on my client, yeah, I run CRUSH.

B: It tells me where to put the stuff; it says put one on the green node, one on the blue, one on the yellow. Off I go, and as a client that's it: I now wait, right, because it's essentially a synchronous activity. Now I'm going to wait for an acknowledgement, okay, until all those three writes have completed.
B: That immediately means that if this is a multi-site system, I could have some serious latency issues there, so we need to consider those sorts of things, and where you're placing your copies, and so on. I've also got to write the journal as well; actually the journal is written first, then the data, then the primary takes the data and writes its copies. Each copy then acknowledges back to the primary, the primary then acknowledges back to the client, at which time the client has its acknowledgement and will continue.
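To make that ordering concrete, here is a tiny sketch of the acknowledgement chain; it is a toy model rather than Ceph code, and every latency figure in it is invented:

    # Toy model: the client's write completes only after the primary has
    # journalled and every replica has acknowledged back to the primary.
    def replicated_write_ms(primary_ms, replica_ms_list, network_ms):
        # Replicas are written in parallel, so the slowest one gates the ack.
        slowest = max(network_ms + r + network_ms for r in replica_ms_list)
        return primary_ms + slowest

    print(replicated_write_ms(5, [5, 5], 0.5))    # all copies in one site: ~11 ms
    print(replicated_write_ms(5, [5, 5], 20.0))   # one copy over a WAN hop: ~50 ms

The round trip to the furthest copy is added to every synchronous write, which is exactly the multi-site latency point above.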
B: Okay, so bear that in mind as we go through. Are we doing the same for erasure coding? I actually don't know; I haven't got that far yet. We can ask that a bit later; I'm sure someone here will know, but that's a question we can ask later, yeah, about erasure coding.
B: I also want to understand what the makeup of the hardware is that I'm running on. OK, this is a very standard, very simplified two-socket Intel or AMD server; we'll come back to this a bit later on. I've got two sockets there, with memory attached to each socket. Bear in mind that with processors these days the memory controller is on the processor socket, so memory is a different distance away depending on where you're running, and that's something that's often ignored.

B: Also, PCIe is generated by the processor package itself. It's not shown on here, but each socket has its own PCIe lanes, right? So not only do we have NUMA, non-uniform memory access, we have non-uniform network access, okay, because if my process is on the top socket there and I'm accessing (whoops, nearly fell off the stage) this NIC, I've got to come through there, so I've got a longer length to go. These are just things to be aware of.
B: You might get, you know, performance degradation over time because things are going to the wrong places, and we need to be aware that we've got to look at these things. I mentioned multi-site before, and that covers that piece really: if you're replicating from your primary to a secondary or tertiary off-site, your latency is going to be quite long and your client is going to wait.

B: Now, technically there are actually two acks: there's an ack that says I've got it in memory and there's an ack that says I've got it on disk, and you can set it so that the client takes the in-memory ack and operates on that; you can even decide to ignore the ack if you want. So it's not strictly synchronous, but do be aware of that, because otherwise your application might time out.
B: If you're running a web interface, so you're running HTTP as the inbound stuff and you're using gateways, you can federate the gateway, so you have separate multi-site clusters which then work through the gateway, rather than treating them as one completely homogeneous cluster. Okay, and if you're doing RBD and you want to back it up, then you can treat your primary cluster and your secondary cluster as synchronised replicas, if you like, through the various replication agents that you can use there. So you don't have to treat them all as a single cluster; you can.
B: So let's have a look at the recommended configurations for the storage server. These are from both the Ceph documentation website and Inktank's website, and they're a little bit out of date, I have to say. If you look through these at first glance, you think, well, anything will do this, and to an extent you're right; actually, you can throw anything at this stuff. The rule of thumb is one gigahertz of core per OSD as the recommended minimum.

B: That means if I've got a 2 gigahertz core, I can support two OSDs off that, cool. OK, so an eight-core processor running at 2 gigahertz would theoretically support 16 OSDs, 16 drives; one OSD equals one disk drive, in general. Slightly less for AMD; an AMD core tends to be about sixty to seventy percent of the performance of an Intel core these days, in that area. And of course, if you've got two sockets, then you can put even more OSD support in there.
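That rule of thumb turns into a one-line sizing sum; a sketch of the arithmetic only, where the derating factor is the rough sixty-to-seventy-percent AMD figure mentioned above:

    # How many one-disk OSDs a box can feed, on the ~1 GHz of core per OSD guideline.
    def max_osds(cores, ghz_per_core, derate=1.0):
        return int(cores * ghz_per_core * derate)

    print(max_osds(8, 2.0))               # 16 OSDs on an 8-core 2 GHz Intel socket
    print(max_osds(8, 2.0, derate=0.65))  # ~10 on a comparable AMD socket
    print(max_osds(2 * 8, 2.0))           # 32 with both sockets populated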
B: Typically you don't, because you don't have enough space, or you don't want to overload the disks in a particular storage server. You want to limit the number of disks hanging off a storage server if you're interested in serving storage in a high-performance manner. If you're in an archive situation or a cold-storage thing, then you can have as much disk as you want behind a single node, okay, and we can do three petabytes in a rack behind three servers, right; this is not difficult these days.

B: But if you're in a more production, high-frequency-of-usage environment, then you don't want to put too many disks behind a particular storage server, because you're just going to funnel all the requests through that single server rather than spread them out. It tends to be around 12 drives per storage server as the optimum. That means you want enough cores for 12 OSDs; you have to go quite a long way down the processor stack to find a six-core device these days, and if you stick hyper-threading on you could support more, but I will come back to that.
B: So what else? SAS or SATA doesn't really matter. Myself, I prefer nearline SAS if you've got a choice and you can afford the slight increase in cost, which is literally about ten quid or something. Nearline SAS has the advantage that it can rebuild much more efficiently than SATA, so the actual disk writes are a lot quicker. And, unlike RAID, the size of the disk does not matter: six-terabyte drives are fine, because you're splitting this thing over 100 or 200 placement groups, and those hundred or two hundred placement-group disks are all helping in the rebuild.
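The rebuild win is just parallelism; here is a back-of-the-envelope sketch, where the per-disk rebuild throughput is an invented round number:

    # Time to re-protect a failed drive when many peer disks each rebuild a slice.
    def rebuild_hours(drive_tb, helpers, per_disk_mb_s=50):
        total_mb = drive_tb * 1_000_000
        return total_mb / (helpers * per_disk_mb_s) / 3600

    print(rebuild_hours(6, helpers=1))     # one-to-one, RAID-style: ~33 hours
    print(rebuild_hours(6, helpers=100))   # spread over 100 peers: ~0.33 hours, ~20 minutes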
B
If
you
lose
a
drive,
okay,
so
you're,
typically
10
to
50
times
quicker
than
a
rate
set
rebuilt,
not
into
phase
of
rebuilding
a4
or
a6
terabyte
drive
you
in
two
minutes:
okay,
so
actually
you
can
use
big
disks
looking
out
the
process.
Are
there?
Let's
have
a
look
at
the
current
Intel,
so
stack.
So
it's
a
bit
fuzzy.
This
is
the
Haswell.
B: So this is the Intel Xeon E5-26xx v3, released in September, and if you just went out and bought a server today you're most likely to get one of these. In the HPC area we tend to go to the top of the advanced stack there, that top-left block, or the top of the segment-optimised stack. For this stuff, we already said we need a gigahertz of core per OSD, and we're only sensibly looking at 12 drives in an operational, quite high-frequency-access system.

B: So we're looking at six-core and eight-core devices down here. The benefit is they're nice and cheap, okay; these used to be expensive and they're now really cheap, about 2,200 dollars a processor or something, so they're at the lower end of the price/performance curve but more than adequate. They're fairly low voltage, 85 watts, and they've got Turbo Boost, so you can get extra speed there if that helps at all. They've got decent QPI speed; the basic SKUs drop down quite a bit in terms of their overall throughput because of QPI speed and the memory.
B: It's unbelievable how many people do not understand how to buy memory for a server. This is my own personal hobby horse, okay; it really annoys me. For about the last five or six generations the memory controller has been on the processor package. So there's one socket with eight cores in it, there's another socket with eight cores in it, a standard two-socket Intel server; for AMD the same thing applies, with slightly different numbers. Also on there is a memory controller, so the memory, these red lines coming out at the sides, is attached directly to the package, and there's a ring in there that you don't see which connects it to the cores, and the cores access memory by going onto the ring, across to that side, and onto one or more of those memory channels.

B: So the memory bandwidth available theoretically for a current-generation Intel processor, running at, what is it, 1866 megatransfers per second, is about 60 gigabytes per second theoretical, okay, and it's delivered by each of the four memory channels.
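The sums behind that figure, as a sketch; it assumes the usual 64-bit (8-byte) DDR channel width and the 1866 MT/s transfer rate mentioned above:

    # Theoretical per-socket memory bandwidth: channels x transfer rate x 8 bytes.
    def mem_bw_gb_s(channels, megatransfers_per_s, bytes_per_transfer=8):
        return channels * megatransfers_per_s * bytes_per_transfer / 1000

    print(mem_bw_gb_s(4, 1866))   # ~59.7 GB/s with all four channels populated
    print(mem_bw_gb_s(1, 1866))   # ~14.9 GB/s on one channel: the 75% loss below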
B: Each one contributes a quarter. So if you buy a machine and you only put two DIMMs in it, running on one channel each, you've lost 75 percent of your memory performance; 75%, okay. The number of people who buy six DIMMs, or 12 DIMMs, or three DIMMs... just always, always, always buy eight, okay, eight or 16, but make sure every channel is populated on each socket. If you don't do that you're losing performance. You might think that doesn't matter in a storage environment; well, you're running TCP as the back-end protocol here, right?
B: You've got a lot of memory copies, you've got a lot of stuff coming in and out, you've got all your inode stuff, you've got your dentries, everything in memory, so make sure you don't compromise your system by buying the wrong memory. End of, got it? Okay. The other thing, not quite as important but still fairly important, is that we're in a NUMA world.
B: Now, if my process, my thread, is running on C0 up here and my data is over here, it's got a longer way to go, and of course the operating system is a bit of a pain in the arse, because the operating system says: every time I get an interrupt, right, you get off, go in the queue, wait till your interrupt is serviced or whatever, I'll put someone else on there, and I might put you back somewhere else. My thread might be running on C0 there.

B: My data might be in the memory attached there; I go off, and the operating system in its wisdom puts me over here. My data is still over there, and all of a sudden I've got a longer path. So it's quite important to pin these things together: just pin your process and your data to the same socket and life will be wonderful.
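On Linux you can do that pinning from userspace; the snippet below is a minimal sketch (the core IDs are made up, so check your own socket topology first, and tools such as numactl or taskset achieve the same thing):

    import os

    # Restrict this process to the cores of one socket so the scheduler cannot
    # migrate it away from the memory it first touched there (Linux only).
    socket0_cores = {0, 1, 2, 3, 4, 5}        # illustrative IDs, not universal
    os.sched_setaffinity(0, socket0_cores)    # 0 means "the calling process"

    print(os.sched_getaffinity(0))            # confirm where we may now run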
B: Interestingly, because we're typically saying we only need sort of 12, maybe 16, drives on a node, especially if you put hyper-threading on, you could say: well, we'll just let the OS float around and take whatever spare cores are out there. So that's another thing to think about.

B: Finally, on memory: the memory market has changed quite considerably over the last couple of years. The current generation of processors now uses DDR4 memory, and the DIMMs are not small; they start at four gig, right? In the memory market the actual memory modules are now built from four-gigabit chips rather than two, so the smallest DIMM you can get is four gigabytes. If you're populating eight of them, hey, you've got 32 gig, so your minimum memory on a system is 32 gigabytes.
B: However, that's single rank, and single rank is somewhat slower than dual rank. My advice is, for the small increase in cost, go for 64 gigabytes, eight 8-gigabyte DIMMs, okay, and then you don't have to worry about it in the future. If you go to erasure coding you're going to need more memory and more performance from the processor anyway, so just do it. So your standard node here is a 2620, 2630 or 2640 v3 processor with 64 gigabytes over eight DIMMs for your storage node, yeah, and if you want to argue about that, good luck. Here's an old one.

B: We still do these products with slightly updated processors, but they've got three petabytes of storage at this point in time, with three copies, so a petabyte usable, running on AMD six-core processors. There's nothing really to say on that, other than bear in mind the few simple principles about NUMA and memory configuration and so on, and we should be fine. OK, now the gateway server.
B: It does a lot of checksumming: CRC32 at the front, okay, and MD5. The CRC32 is now hardware-accelerated in the Haswell product, so if you go for the latest-generation processor you'll get a speed-up for your gateway. Okay, remember the gateway speaks Swift and S3 outwards and it speaks RADOS inwards, so it's doing that translation, it's doing that reformatting and all that good stuff. 64 gig again; that's really the least memory you should buy these days if you don't want to compromise performance unwittingly.
B: Moving on to the cluster monitor: you can mix and match for small systems, but ideally... the cluster monitor is the most important node in there, right; without the cluster monitors you're dead. You only need three of them, so don't necessarily skimp; they're fairly lightweight, but they need to be there, because they are the things that keep the map up to date, and they tell the OSDs and the clients and the CRUSH algorithm where to place stuff. So without these you're dead, right. You need three at least, five, maybe sometimes seven.
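The odd counts are about keeping a majority quorum; a quick sketch of the arithmetic:

    # Monitors vote; the cluster needs a strict majority of them alive.
    for n in (1, 3, 5, 7):
        quorum = n // 2 + 1
        print(f"{n} monitors: quorum {quorum}, tolerates {n - quorum} down")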
B: If you've got lots of failure domains, you need to make sure the monitors are placed within the failure domains sensibly, and that you always have at least two, okay; don't operate with one, because then it has no one to ask whether it's okay to do things. So, yeah, they're fairly cheap, but make sure you've got redundant power and a RAIDed disk set. That's it, all very, very quiet. Networking: I could spend hours on this, but I'm not going to, because I'm not really a networking guy either. On the front end you really want 10 gig to the hosting node.

B: The storage server is going to have, let's say, four networks, okay. The simple ones are two gigabit-class networks: IPMI for remote management, and maybe a 100-base-T or gigabit link to the management network. Then two 10-gig networks at least, okay: one for the front end, which communicates with the clients, and one for the back end. The back end is quite heavyweight.
They
don't
have
to
completely
separate
switches,
so
you
can
use
48-port,
10,
gig,
switches
or
whatever
and
use
the
same
switch
just
make
sure
your
failure
domains
are
okay
and
right.
If
you
use
a
loser
front
end,
then
you
know
it
doesn't
matter
if
you
use
the
back
in
different
space
because
the
nodes
dead
anyway
right,
but
just
remember
that
the
back
end
does
a
lot
more
work
than
the
front
end.
And
if
you
can
go
to
something
like
a
40,
gig
I
mean
we
use
mellanox,
stuff
and
I'm
l-look.
B
So
here
later
we
use
Moloch's
Ethernet
stuff
for
40
gig
and
100
gig
Ethernet,
okay,
really
good,
very
cheap,
as
the
job
works
really
well,
and
you
can
also
go
to
infiniband
if
you
so
wish,
we're
using
finney
metalloid
in
the
HPC
environment
for
the
storage
area
networks,
as
we
call
them
off
parallel
file
systems,
luster
and
so
on
that
again
works
well.
But
for
many
people
it's
a
new
technologies.
I
want
to
go
there
stay
with
ethernet
just
use
the
fastest.
You
can.
B: Okay, just a couple of adverts: we've got some product, of course we have. The only thing that's free with Ceph is the software, right? You've got to run it on something and you've got to pay for support somewhere, either internally or externally or both, right? So it's just that one small bit that's free. Of course we have a bunch of stuff, and there are just three examples here, starting at the bottom with a monitor.

B: A monitor node: something very low cost, a single- or dual-socket AMD box with a couple of one-terabyte drives in there for the monitor logs; they do a lot of logging, these things, yeah, so make sure you don't skimp on drives in there, and 10 gig to get onto the network, of course. Then the top one there we call the PowerEdge R720 or R730; the 720 is Ivy Bridge and the 730 is the latest Intel Haswell processor, v3.
B: If you want it. And we also have this very dense product here called the C8000 range, which is basically a chassis, 4U high, so what's that, seven inches or thereabouts, and you can slot eight devices in there, okay, each of them half a U, the same as a four-in-two. But the benefit of this one is you can have compute, you can have GPUs if you're into that sort of thing, and you can have storage.

B: We also have a 1U box that will also do, I think, ten drives in one U, well, 24 two-and-a-half-inch. So quite a variety of packages, but it's just packaging, okay; the motherboards are always the same, the processors are the same, the memory is the same, the NICs are the same, it's just the way we package them up and shape them and put things on the front. So we have got a wide variety of stuff.
B: A quick word on mixed use. Ideally, yes, dedicate hardware to monitors, to OSDs and to gateways. You can mix and match in a test environment; that's the way to do it to reduce costs and check functionality at least, but don't make performance evaluations from a test rig, unless you're doing it for performance-evaluation purposes, in which case make it a proper system, the way you would deploy it. That's just sense.

B: It's probably too small to see on there, but if you're into big archival or big cold-storage use cases, then, as I mentioned earlier, we can do petabytes in a rack. Well, this is it: you take one of those, or actually you take the smaller brother of that, a 1U box, as the storage server, and you attach the 60-drive JBODs to it; that's a total of thirteen U. Three of those in a rack is 39U, plus a bit for the rack switches, and that is over 1.2 petabytes of storage in a rack.
B: Finally for me, before I hand back, just in general, overall things to consider: we don't need sophisticated componentry here. We use JBODs, or just SAS HBAs rather than RAID cards, except on the monitor node. In fact, some of the cleverer RAID controllers, if you set them up... you know, you've got your 12 drives in your storage server and you've got a clever RAID card in there and you lose a drive...

B: ...it won't know what to do, because it's got 12 RAID sets, or it's striped, and there's one drive gone, and the rest of it just falls over and collapses. So buy the cheapest RAID controller or SAS card, or just use the inbuilt SAS or SATA, okay; don't go clever. Keep the networks as flat as possible; there's a lot of east-west, backwards-and-forwards traffic here, it's not the typical sort of north-south structure.
B: Spend more effort on the back-end network than the front-end network, and think, certainly for large deployments, about where your failure zones are, because that all goes into the CRUSH algorithm, so CRUSH knows where to place stuff, right. You have to tell it what your architecture is, and then it says: okay, you're going there, there and there, and then I'm resilient.
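A hand-rolled sketch of that idea, not the real CRUSH algorithm: describe the racks to the placement logic and it can keep every copy in a different failure zone (the topology below is an invented example):

    import random

    # Toy placement: choose three replica hosts, each from a different rack,
    # mimicking what a CRUSH rule with a rack-level failure domain achieves.
    topology = {
        "rack1": ["host-a", "host-b"],
        "rack2": ["host-c", "host-d"],
        "rack3": ["host-e", "host-f"],
    }

    def place_replicas(obj_name, copies=3):
        rng = random.Random(obj_name)             # deterministic per object, like a hash
        racks = rng.sample(sorted(topology), copies)
        return [rng.choice(topology[rack]) for rack in racks]

    print(place_replicas("my-object"))            # three hosts, three separate racks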
A: Thanks very much. We're going to run through a couple of slides, and Steve is going to jump in any time he wants, because we have a pet theory. We have a pet theory that, whether this is research data, because you're in an organisation that does research, whether that's life sciences, whether that's public sector, whether you're a university, whether you're a big government lab or whether you're oil and gas or whatever, the principle remains.

A: We believe it remains the same. You've kind of got, at the top, the stuff you publish or that gets published, and depending how you choose to publish... and I can see that there are not enough people of my age in the room to understand this. It's a really amazing piece of technology; Steve showed it to me recently, it's paper. It really wasn't easy to search, but there is just one here; I said, can we see it? Yeah, it's caught...
A: Well, there is that, so you can publish that way, or you can publish a video diary or a vlog or a lifelog, anything you want. Meanwhile, the sort of pre-publication stuff will sit in there, and we call this the digital other. That might be the collaboration that happens where, if you're on Facebook or LinkedIn, you only want your friends to know about it; you don't want your intellectual property kind of up there, published electronically or whatever. So we feel, irrespective of what sort of research you're doing...

A: ...this is kind of the digital life of that, and there is a lot of drive to store as much of that as possible, because, particularly now with researchers, you'll see, and I think this is a really cool thing, that researcher type A and researcher type B may be from completely different domains, but that does not mean that the research they're doing is mutually exclusive. They can start to say: oh, you've discovered that? I didn't know that; that has an implication on what I'm doing. One might be a materials guy.
A: The other person may be doing pure maths, and more and more that should be encouraged, because that's how genuinely new discoveries can be brought to bear. But then you get the naysayers kind of doing this: this is great, but there are all of these problems with the whole business of holding this data in a digital library. And, you know, we've thrown up some of the challenges our customers and users have thrown at us, and one of the things we want from you guys is: what are the challenges?

A: If you've got a digital store here, a library (these library things might catch on, I don't know), the challenge would be: how do you search it? How do you secure it? All of these challenges are absolutely real. It is absolutely imperative to be able to secure your pre-published information; if you're in a drug-research environment for a life-science company, that could be the difference between the company failing and succeeding. So the security is very relevant, but at some point you want to publish, and then actually it isn't so secure.
A: I'm getting aged and can barely walk and talk now, but, you know, register for something free? Don't think so; next. The attention... my cat has a longer attention span than me, so to publish you need to be quick. So you've got this dichotomy of how you secure what you need to secure, and how you publish and encourage publication and sharing and collaboration, and everything in there, and that's just security. Are there any security guys in the room, gals in the room?

A: We've got to figure this out. I stole this from Finding Petroleum: you're holding a tin cup under Niagara. And, I won't say the individual's name, who's an astrophysicist, but the thing is, when you press the button and the satellite starts transmitting, that data starts coming and coming and coming. Now, that's fine if you're a physicist and you've got a satellite broadcasting data to you. I work for Dell, and our technologists do brilliant things; they do teardowns of every product.
A: They have to, so that our channel partners and people can fix them, and they shoot that on a little flip cam, they shoot it in their little rooms: don't take that screw out, you can lift the whole thing there. Real basic teardown videos, and that just keeps coming; the volume of data. We know volume, variety, velocity, we get that; it keeps coming. So, our pet theory: we want you to shred it if it's wrong; tell us we're geniuses if it's right; steal it and make it your own.

A: If you feel we're going the right way... what we want is our volume, variety and velocity on this; here's your key. I told you we were fans of this, but the conversation we have with people who have this problem is: while you have that tin cup under Niagara, and I mean this respectfully, you go to your storage salesperson, who will tell you to put it on an array, and they will put, I nearly said a dollar, I do apologise, a euro or a pound sign next to that, and you'll...
A: ...and I'm talking about a bit more than that. And, you know, as the salesperson leaves the room they're going "kerching, I'll buy a new boat", but they've just talked your budget up so ridiculously high that it's not going to happen. Then our friends at Ceph came along, and when you design this from scratch for petabytes, suddenly you've got something a bit more than a tin cup under Niagara. So, for our friends, and for the benefit of the tape, other alternatives are available, but this is our view.

A: Ceph is the way to go, and the longer you have to hold that data for, the more likely it is you'll want to put it on spinning disk. You could put it on tape, your call; if anyone in this room works for 3M, good on you. If anyone in this room wants to put all this stuff on tape, go talk to 3M and watch their stock price rise.
A: When you see how many tapes you'll need, how many times you'll need to rewrite to those tapes, and talk to some of Dell's backup guys about how many of those tapes will rot the data before you need it back, it's no shock that Ceph on spinning things will be able to serve the data up a lot more easily. That's how the Bay Area guys do it; we use Google every day, guys, we know this is in principle sound. I believe there's an OpenStack layer in the front that gives a bit of access and a bit of cloudiness to it.

A: Other options are available, but again we think that's a neat fit. If you think there are things you need to do that are maybe proprietary, is there going to be shock and horror in the room if I say that, for policy control and governance and identity management, you could talk to the Dell organisation about all of those? And, shock horror, it's highly likely that we have a proprietary product that we can ship you for that. It's also highly likely that some of them are award-winning and best-in-class.
A: Of course, that's not going to surprise you, because when the software is free, I've got to make some money somewhere; come on guys, play the game. The key point on this one, and it's kind of the one again we'll give you, and do shred us if we've got it wrong, if we're missing something please share it with us, is the critical message to all our customers: just do it now. Start now, start storing it now, because, and I don't want to go back to the slide with all the balloons on it...

A: ...if, as an organisation, and I don't mean this disrespectfully, you allow your organisation to lock itself down and start to worry about all of the blobs that were on this side, back at Niagara the data will keep coming. Your problem will get bigger, and it will get bigger by the day. I've got really bad news for you: there are plenty of slides, go on SlideShare and see how much data is created a minute.