Description
The disruptive Intel(R) Optane SSDs based on 3D XPoint technology fill the performance gap between DRAM and NAND-based SSDs, while Intel(R) 3D NAND TLC reduces the cost gap between SSDs and traditional spindle hard drives and makes all-flash storage possible. In this session, we will 1) discuss an OpenStack storage Ceph reference design on the first Intel Optane (3D XPoint) and P4500 TLC NAND based all-flash Ceph cluster, and 2) share Ceph BlueStore tunings and optimizations, latency analysis, TC
A: All right, thank you, everybody, for joining us for the next presentation in our Ceph Day track at Open Source Days. Just as a reminder before we get started: if you do have any questions, these are being recorded for posterity, so please use the microphones so that everyone can hear you, as well as the camera. Our next presenter is John from Intel; he's part of the Intel China team.
B: Hi, I'm from Intel China. I'm currently leading a team doing Ceph optimization and Intel architecture optimization, and we also open source a lot of tools, like COSBench, CeTune and VSM. There was another presenter from our SSD department, but he kind of had to leave because of some emergency, so I'll talk about the 3D XPoint stuff on behalf of him; but if there are some kind of deep questions that I cannot answer, we'll check with him afterwards.
This is the brief agenda. First I will talk about Ceph in China; then I will go to the all-flash configurations we proposed (we have three kinds of configurations). Next: we had Optane launched just recently, and we had the TLC 3D NAND SSD launched just last week, so I'm going to show the 2.8 million IOPS we get on the reference architecture, with some demonstration of how Optane is going to improve your Ceph performance. After that we will show some scalability analysis: what the current problems are, and how we can make the performance even higher. The last part will show how this reference architecture is going to evolve with the Optane and 3D NAND TLC technology. So, Ceph is pretty hot in China. The last Ceph Day we held in China attracted something like 350 attendees from a lot of different companies.
B
They
are
kind
of
a
from
different
types
of
company,
so
what
I
observed
is
actually
more
and
more
companies
are
starts
to
do
Greek
development
based
on
staff
in
their
product.
That's
one
look
at
some
kind
of
om
and
the
traditional
enterprise
company.
Would
you
see
a
lot
of
adopting
in
the
cloud
service
providers
and
it's
really
hard
that
some
media?
B
They
just
jump
and
do
the
social
media
coverage,
so
you
probably
notice
that
there
are
a
lot
of
China
contributors
in
the
code
base
now,
so
we
start
from
when
compared
with
wins
to
the
surfing,
like
back
in
2011,
the
the
contributors
the
opens
are
contributing
is
becoming
more
and
more
so
who
is
using
SAP?
Actually,
this
is
based
on
our
customer
engagement,
who
work
and
also
searchable
examples.
B
The
first
part
will
be
the
telecom
because
the
with
a
system,
skill
and
the
data
increase,
the
traditional
Enterprise
story
is
not
going
to
hold
their
data
and
the
performance
it
kind
of
cannot
such
that
the
workload.
So
as
one
of
the
customer
stand,
the
story,
the
cost
is
pretty
high.
It's
like
30
to
50%
of
their
archi
constant,
so
it's
kind
of
limited
the
scalability
of
a
traditional
Aggie,
the
traditional
discovery
and
makes
it
very
difficult
to
operate
to
those
system.
It's
kind
of
a
operating
cost
is
really
high.
One example is China Unicom, one of the biggest telecoms in China. The second part is the cloud service providers. Interestingly, we work not only with Tencent, Alibaba and Baidu, the three biggest ones in China; the rest of them are also moving away from the traditional SAS or SAN solutions to the open source Ceph solution, also from the cost consideration. The third part includes some more examples, like LeTV, which is kind of a YouTube in China.
B
We
also
have
a
sea
trip,
it's
a
travel
agency,
company
and
PR
cloud.
It's
a
small
cloud
service
provender.
Also,
we
have
a
lot
of
om
om
here.
They
are
trying
to
build
their
self
business
storage
solutions,
it's
a
hardware
and
a
software
altogether.
The
search
for
example
here
is
like
it's
through
C
and
Q
City.
We
will
collaborate
a
lot
with
qct
on
this
area.
B
The
last
part
is
the
traditional
enterprise
and
research
institutes,
the
further
research
institute,
their
interest
mainly
focus
on
save
file
system
and
also
some
something
like
the
TV
star
back
in
then.
The
sole
enterprise
they
are
likely
to.
You
know
deploy
the
self
cluster
in
the
open
source
with
by
themselves,
so
the
probably
will
purchase
some
kind
of
a
third
party
support.
So
let's
see
why
we
doing
self
our
own
flat,
sorry
the
source
tree.
There
is
those
service
providers.
B
They
are
trying
to
provide
a
high
performance
storage
back-end
for
the
either
plot
private
cloud
of
public
cloud,
so
they
are
trying
to
provider
the
EBS
like
services.
The
second
reason
is
just
joint
amount
now
to
running
the
enterprise
workload
in
the
in
subclass.
For
now,
so
what
I
observe
is
a
lot
of
customary.
B
We
want
to
run
the
cyclic
workload,
my
Seco
Oracle
stuff,
but
you
know
in
this
scenario
performance
a
really
important,
not
only
performance
but
also
latency
I,
don't
mean
the
original
ad
I
mean
the
till
agency,
because
it's
like
those
sequel
workload,
the
latency
is
really
important.
If
your,
if
your
latency,
the
really
high,
maybe
the
transaction,
will
about
so
your
workload,
you
can
simply
not
run
that's
one
thing,
and
the
second
thing
is
the
you
can
build
a
multi-purpose.
You
know
self
cluster.
B
So,
what's
the
advantage,
if
you
run
my
sequence
s
compared
with
your
traditional
way,
they
say
it,
you
can
skew
out
and
you
want
it's
a
it's
an
operating
cost
and
the
benefit
that
you
want
most.
The
last
part
is
performance,
you're
important.
So
that's
where
we
do
this
on
the
latest
launch
the
opting
cluster
so
the
last
part,
and
probably
the
most
important
one,
because
the
SSD
prices
continue
John
dropping
and
with
the
charities,
Reading
University.
We
believe
there
will
be
some
punches
at
you
in
your
future.
The
SD
presses.
B
We
will
be
much
close
to
the
influences
of
high-end
HDD
drives.
So
let's
take
a
look
on
the
three
configurations
we
propose.
We
have
three
contributors
here.
We
call
you
to
simply
call
it
a
good
better
best,
the
good
one
is
you
have
a
SATA
or
Mme
PCIe
SSD
at
your
journal
or
red
hair,
log
or
rocky
start,
and
you
have
the
hard
drive
at
your
digital.
So
your
process,
it
doesn't
need
to
be
very
powerful.
The
traditional
III
process
here
is
pretty
old.
B
One
catwalks,
you
do
not
have
a
you,
do
not
need
to
have
a
large
memory
and
a
high-speed
network
thinking
about
probably
do
listen,
Airy
the
this
is
in
this
type
of
configuration.
It's
cut
targeting
for
the
the
scenario
where
you
need
a
high
capacity
and
you
don't
really
care
too
much
about
our
performance.
So
in
this
configuration
probably
you
can
simply
calculate
the
throughput
of
the
performance
you
can
catch
by
multiplying
your
your
number
of
drives.
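As a back-of-the-envelope illustration of that multiplication (the drive counts and per-drive numbers below are made-up placeholders, not figures from this talk), a sketch in Python:

```python
# Rough capacity planning for the HDD-based "good" configuration:
# aggregate throughput is roughly per-drive throughput times drive count,
# divided by the replication factor. All numbers are illustrative.

def estimate_cluster_write_mbps(drives_per_node, nodes, per_drive_mbps, replicas=3):
    """Client-visible sequential-write throughput estimate in MB/s."""
    raw = drives_per_node * nodes * per_drive_mbps
    return raw / replicas  # each client write lands on `replicas` drives

# e.g. 12 HDDs per node, 5 nodes, ~150 MB/s per HDD, 3x replication
print(estimate_cluster_write_mbps(12, 5, 150))  # 3000.0
```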
For the second configuration: actually both the second and the third are all-flash configurations.
The second one is: you are probably using a SATA SSD, like the S3500 series here. The use scenario for it is where you have a high performance requirement for throughput, IOPS or latency. The last one is an all-NVMe configuration: we have a top-bin processor here, the E5-2699 v4, we have 128 gigabytes of memory, and we have an Optane DC P4800X as the journal / WAL drive, with the P4500 as the data SSD.
So let's take a look at the latest Optane performance with BlueStore. Before I go into the details, I'd like to show a picture of the performance trend from day one of our performance work. You may notice that at the first release we ran on a Sandy Bridge UP processor (UP means uniprocessor, a single socket), which is a pretty old hardware cluster, and then around the 0.86 release we went to the first all-flash configuration with the S3700.
B
At
that
time
the
throughput
is
pretty
low.
It's
like
on
three
stern
LPS
preneur.
This
is
a
nominal
kernel
and
normally
open
or
throughput,
and
then
we
have
the
the
one
with
jima
log.
This
is
a
kind
of
a
non
automatic
into
the
subclass.
For
now
of
lesser
e,
you
have
a
three
point:
seven
times
performance
increment
and
then
we
move
to
you
know
the
next
release
the
opponent
info
and
after
that
we
move
to
the
enemy,
configuration,
p3,
700
piece
or
1100,
with
the
P
3500.
B
As
the
enemy
drive,
we
have
a
like
one
point:
six
six
times
the
performance
improvement
and
the
the
biggest
one
1.98
is
actually
from
faster
to
Bluestar.
You
know
this
is
purely
for
random
right,
not
for
read
and
will
only
demonstrate
a
radical
for
me.
So
it's
interesting
when
sometimes
you
see
like
from
the
the
broad
of
a
cluster
I
mean
the
e5
2009
before
we
have
a
0
0
11.
The
point
zero
point:
two:
the
performance
is
pretty
high,
but
you
you
may
notice
on
the
2.0.
B
The
perform
is
a
slightly
job,
because
some
kind
of
a
Tunis
does
not
work
anymore,
so
we
have
to
return
it
and
make
it
back
again.
So
at
that
time
we
switch
to
the
option
drive
cluster
with
with
we
change
the
PC
700,
with
the
P
4800
you
can
see
that
here
is
20,
prefer
20
percent
performance
improvement
yeah.
It's
a
bit
it's
better,
a
much
lower
than
what
we
can
bacteria,
because
no,
the
the
piece
or
808
is
like
a
twenty
times
faster
of
the
p3
700.
So
let's
take
a
look
on
these
details.
B
This
is
a
reference
to
the
architects
that
we
proposed.
It
have
a
note
each
know
the
words
configured
with
one
top
being
eg:
five,
two
six
nine
and
processor
and
forty
eight
gigabyte
net
gigabit
network.
Now
we
have
eight
OSD
here.
Each
OT
will
configure
with
one
of
the
option:
draft
and
eight
of
the
P
4500
TRC's
region,
an
SSD
drive,
so
we
actually
run
multiple
configurations,
but
the
best
performance
you
can
get
is
the
two
or
three
instance
arm
for
DRAM.
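The talk does not show how those multiple OSD instances per drive were provisioned; on current Ceph releases, one way to get the same layout is ceph-volume's batch mode. This is only a deployment sketch with placeholder device names, not the commands actually used:

```shell
# Illustrative: carve two OSDs out of each NVMe data device.
ceph-volume lvm batch --osds-per-device 2 /dev/nvme1n1 /dev/nvme2n1
```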
We were running Ceph at the latest stable version at that time, on Ubuntu 14.04.
So let's take a look at the performance first. Probably you have noticed the huge numbers: this is what we can get today for 4K random read. It's 2.8 million IOPS, the average latency is more like 0.9 milliseconds, and what's most worth noting is the tail, the four-nines latency: it's only 2.25 milliseconds. And for 4K random write, it's 600K IOPS, with a millisecond-level average latency and a 25-millisecond four-nines latency.
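Figures like these are typically collected with fio's RBD engine; the exact job files are not shown in the talk, so this is only a representative sketch (pool, user and image names are hypothetical):

```ini
; 4K random-read job against an RBD image (illustrative, not the talk's job file)
[global]
ioengine=rbd
clientname=admin     ; hypothetical cephx user
pool=rbd             ; hypothetical pool
rbdname=fio-test     ; hypothetical image
rw=randread
bs=4k
iodepth=32
direct=1
time_based=1
runtime=300

[rbd-4k-randread]
numjobs=1
```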
But then, to give some more information on the tail latency part here: if you were using a P-series NVMe like the P3700, the tail latency would be like 5 milliseconds, so Optane can significantly reduce the tail latency in this scenario. And for the 64K sequential workload, the performance is also good, and I dare say we actually hit the hardware limitation there.
Maybe there is still something to tune, but this is what we can get today, the highest performance. Behind this highest throughput, though, are all the kinds of tuning and evaluation we have done, starting from NUMA. Usually you do not pay attention to the NUMA placement of the workload; like, you do not pin those OSDs to the NUMA node or to the cores directly. But we do all of that.
B
If
you
are
bonding
the
Numa,
the
OSD
to
us
about
facility
good
new
mono,
it
can
improve
your
performance
like
20%,
especially
for
the
sake
run
performance.
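The talk does not say how the pinning was implemented; one common way on a systemd-based deployment is a per-OSD drop-in, sketched here with assumed core numbers (CPUAffinity is long-standing; NUMAPolicy/NUMAMask need systemd 243 or newer):

```ini
# /etc/systemd/system/ceph-osd@0.service.d/numa.conf  (hypothetical path and values)
# Pin OSD 0 to the cores and memory of NUMA node 0.
[Service]
CPUAffinity=0-21
NUMAPolicy=bind
NUMAMask=0
```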
Then we did the hyper-threading tuning. We did this because we observed that if you have four drives per node, the performance is almost the same as eight drives per node; that's because the CPU is kind of fully utilized. I have detailed data on that part. So, hyper-threading in this scenario:
B
It's
done
in
to
really
helps
for
the
four
key,
random
read
and
random.
Read
workload.
The
fact
the
surgeon
is
a
gmail
look
and
the
TC
malloc
we
start
with
rooster
dog
and
with
the
new
release
of
a
TT
Mellark
it
done
in
the
Hat
is
done
in
matter
whether
you
you
are
using
TT,
malloc
or
J.
My
log
and
the
most
interesting
part
is
on
the
drives
scaling.
You
see
little
the
first
part
we
do
a
ATSs,
deepen
or
configure
it,
but
comparison
with
the
fall
asleep.
Another
configuration
it's
actually
Darrent
helps
a
lot.
B
You
probably
expect
like
100
times
of
sorry
2
times,
performance
improvement,
but
here
you
may
see
only
20%
less
in
quickly
because
for
the
sort
of
renin
workload
you
probably
hit
the
heart
of
a
cpu
limitations,
and
the
second
thing
here
our
pasture
tuning
practice
previously
is
the
follow
the
enemy
configurating.
You
properly
need
to
create,
like
a
full
OSD
instant
support
drive,
but
with
more
SSDs
that
that
configuration
is
going
to
change,
for
example,
will
have
full
Jabalpur
known
for
OS.
So the conclusion is: the node scalability looks good, much better than the drive scalability, and the drive scalability (I mean for the 4K-block, small-block workload) is really, really bad. So we need to optimize the CPU path, and for the Optane drives we should put them on the right NUMA node; it can help your performance a lot. Now let's see where Optane helps your performance.
B
The
first
part
we
are
actually
running
Brewster
using
an
option
jam
as
a
red
hair
log
and
the
co-located
with
the
rocks
DB
database.
So
the
red
picture
is
with
p7
700
as
a
DB
job.
You
probably
see
that
the
performance
flag
fluctuates
a
lot,
but
with
P
Upton,
it's
a
very
stable
and
you
may
see
a
decrease.
We
believe
that
is
called
the
bad
SSD
cabbage
question
because
of
the
the
SSD
is
not
a
stable
data
so
at.
Firstly,
you
rock
you
write
to
the
SD.
it's really fast, because no garbage collection is involved, but then at some point those processes kick in. So we are still working on trying to figure out the root cause of that at this moment. And from the drive perspective, you can see the drive latency: the Optane is really stable.
B
It's
always
smaller
than
the
0.1
millisecond,
but
for
the
p-series
have
100,
and
when
it's
flag,
flag,
flag,
wins
your
flag,
fused
and
the
wind
the
rocks
we
start
to
do
the
compaction
you
miss
it
out
to
the
p700
is
latency
increased.
A
lot
is
up
to
like
30
and
17
milliseconds.
So
in
that
scenario,
or
in
our
case
the
even
the
five
previous
mm
ESD
at
because
up
it
to
become
the
performance
bottleneck.
So
this
is
the
second
thing
arm
under
mode
details
on
the
rocks
DB
staff.
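In BlueStore terms, running Optane as the WAL co-located with the RocksDB database means pointing block.wal and block.db at the Optane device while the data goes to the NAND SSD. A hypothetical provisioning sketch (device paths are placeholders, not the cluster's actual layout):

```shell
# Illustrative: data on the TLC NAND SSD, RocksDB DB and WAL on the Optane.
ceph-volume lvm create --bluestore \
    --data      /dev/nvme1n1 \
    --block.db  /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```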
We actually took a look at RocksDB to see how Optane helps to eliminate the RocksDB bottleneck, the write stall here. Previously, with the P-series, what you observe in the performance with BlueStore is that at some point the write throughput may drop to zero. That's because RocksDB is doing a write stall. RocksDB has a mechanism: when it flushes data, if it cannot keep up with the rate of the incoming data, it will do a write stall.
B
The
second
row
is
about
compacts
and
usually
compacting
having
a
lot
of
time,
but
you
do
not
see
rest
all
times,
but
when,
when
the
window
rattles
door
triggers
the
submental
agency
increasing
large,
and
so
your
performance
sucks,
it
may
be
extremely
low
and
with
opt-in
we
totally.
We
completely
limited
the
register
in
this
scenario,
because
it's
a--,
it's
a
raspberries
like
twenty
terminal
pieces
in
seven
hundred,
the
last
part
it
on
the
latency
path.
Like
a
like
what
our
colleague
mm
jason
said
like
in
the
previous
setting,
the
teal
agency
is
really
important.
So Optane helps a lot in this scenario, and we have three tail latency metrics here: the 95%, the 99% and the four-nines (99.99%). The four-nines latency is reduced like twenty times with Optane, which helps a lot.
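Those tail metrics are just percentiles over the per-I/O latency samples; a minimal stdlib-only sketch of reading the 95th/99th/99.99th ("four nines") values out of a sample list (the data here is synthetic, not the talk's measurements):

```python
import random

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest value >= pct% of the samples."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

random.seed(0)
# Synthetic latency samples in ms: mostly fast I/Os plus a slow tail.
lat = [random.uniform(0.2, 1.0) for _ in range(9990)]
lat += [random.uniform(5.0, 30.0) for _ in range(10)]
for p in (95, 99, 99.99):
    print(f"p{p}: {percentile(lat, p):.2f} ms")
```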
OK, with all the performance data shown, let's move on to the analysis. The first part is what we talked about, the CPU overhead. This is for the random read and the 4K random write; take a look at the utilization here: the CPU is like 95% on average.
This is actually happening on the first 44 cores; the last part, from 45 to 88, is not that full. That's simply because those are the hyper-threading cores, and hyper-threading kind of has a scalability issue here: it is designed more for sequential workloads than for random workloads, so you cannot expect a high performance improvement from the hyper-threading cores if you are running a random, CPU-intensive workload.
B
So
the
this
behind
this
scenario
is
that
probably
you
can
only
wrong
fall
drop
down
top
in
the
arm
ii,
5
to
6,
9
and
cluster.
Adding
more
draft
to
that
note
down
in
helps
a
lot
and
the
same
thing
for
the
random
right,
so
the
high
six
releasing
its
really
limiting
the
draft
scaling
scalability
per
node
on
a
talent,
skill
app.
So
that's
one
thing
we
may
need
to
pay
attention
to
when
you're
building
our
flats
or
a
cluster.
So
we
take
a
look
on
the
pause
record
on
who
takes
multiple
CPU
cycles.
B
This
is
for
the
random
raid.
Sorry,
it
might
be
a
bit
of
smaller.
We
run
poor
for
30
second,
and
the
sets
depart.
You
may
see
like
the
moisture
parties,
actually,
the
async
messenger,
the
network
messenger
later
it's
a
it
overhead
is
really
heavy
and
the
for
this
picture.
You
can
have
a
clue
on
rampant
with
brick
a
the
kind
of
a
break
it
down.
You
have
a
lot
of
connections
connecting
a
lot
of
equal
driver
overhead
analog
lot
of
the
gtps
in
the
message
with
your
math
overhead,
so
just
for
the
OSD
thread
pot.
B
Let's
take
a
look
on
the
GP
or
skis
run
TP
data
in
that
apart,
the
first
one
is
a
system
call
rate
current
evaporate
and
the
second
one
is
the
oopsy
up
here.
Work
you
process
last
one
is
the
primary
p22
okey.
We
were
actually
expecting
more
overhead
or
from
the
dual
p,
but
just
in
this
picture
you
can
see
that
we
do
need
some
kind
of
optimization
networking
messenger
layer
to
to
reduce
the
situation.
So
you
can
put
a
more
SSD
in
your
OneNote
e
EE
or
not.
B
Let's
take
a
look
for
write
similar
here
for
the
Red
Hat
logger
removed
thread.
It's
like
a
1.8%
over
hand
for
the
async
manager.
It
takes
like
twenty
eight
point:
nine,
so
it
just
do
a
lot
of
Oh
hat
here
and
the
rocks
DB
also
takes
a
quite
a
bit
overhead,
but
that's
kind
of
a
relatively
small
in
this
scenario,
so
Bruce
dog
I'm
done
into
really
takes
too
much
of
the
overhead.
It's
like
a
32
or
36
percent
to
total.
So
then
this
is
the
under
CPU
profiling
part.
Then we looked at the WAL tuning part; we did a bunch of tunings in this area. The default baseline is with NVMe as the database and WAL device; the throughput is like three hundred and forty K with separate RocksDB database and WAL devices. The first tuning we did is to keep the database on the NVMe drive while moving the WAL to a RAM disk. We wanted to see, for RocksDB,
B
You
know,
RAM
if
you
put
it
around
WL
on
the
RAM
dick,
how
it
improves
it's
like
only
twenty
TRPs
improvement.
The
second
sensor
we
completely
limit
your
Oxley,
be
read
how
long
it's
a
it's
almost
the
same,
so
which
means
that,
even
if
you
have
a
fast
device
using
memory
as
a
rock
CB,
read
her
log,
you
cannot
improve
the
performance
significantly.
So
in
that
tab,
maybe
the
the
rock
to
be
stuff
isn't
a
problem
any
longer
and
the
sir
tuning
we
do
it.
We
completely,
you
know,
skip
as
a
redhead
log
rat
mode.
B
We
don't
write
the
redhead
log,
so
this
is
the
highest
performing
point
guard,
but
it's
not
safe
right.
So
the
last
thing
we
do
is
we're
trying
to
find
out
configuration
where
you
can
improve
your
performance.
Well,
keep
your
data
safe!
So
we
try
to
the
14.
So
we
try
not
to
remove
the
red
hair
lock.
So
keep
the
red
hair
log
constantly
writing
interesting
scenario.
B
The
performance
is
the
lowest
one,
which
means
that
if
you
do
not
recycling
the
right
hair,
long
staff
and
the
confidently
writes
to
the
retina
metadata
atrocity
at
her
log,
and
eventually
your
performance
will
degrade
a
lot.
So
here
comes
the
basket
in
which
incorrigible
you
have
a
another
job:
either
y'all
read
how
long
so
it's
like
a
300,
ATK
RPS
and
it's
a
safe.
So
this
is
our
recommendation.
Maybe
in
the
future
we
can
have
a
to
draft.
If
you
want
to
a
really
high
performance,
the
configurating,
you
can
use
two
jobs.
B
Each
one
Julio,
one
Julio
rocks
DB
database
ones
will
be
alright.
How
long
and
your
data
go
to
the
TRC
3G
Internet
these
stuff?
Okay,
this
is
about
our
performance
analysis
style.
We
want
to
deliver
two
methods:
the
first
sensor,
the
network
layer.
It
kind
of
the
overhead
is
pretty
high
when,
if
you
trying
to
put
more
SSDs
in
one
single
node,
that's
part
we
should
pay
attention
to,
and
the
second
thing
is
the
for
the
rel
dialog'
stuff
and
Roxy
B.
You
can
expect
that
with
fast
devices
you
can
even
with
a
faster
devices.
B
The
performance
will
probably
want
to
equate
too
much
because
in
a
scenario
rocks
you
become
taxing
whatever.
It
only
be
a
problem
and
longer
so,
let's
see
what
we
we
are
going
to
probably
the
master
it's
for.
They
say
it's.
The
internet
kind
of
I
have
to
kind
of
a
technology.
The
first
one
we
called
3d
crosspoint,
which
is
options
SSD
and
the
second
technology,
is
3d
none.
B
If
we
take
CPU
and
center,
the
first,
the
most
close
wine
DRAM,
you
have
a
small
capacity,
DRAM
low
salinity
and
after
that
is
the
opt-in
SSD,
which
the
capacity
is
much
bigger,
but
the
latency.
It's
also
rarely
to
increase
a
bit
and
on
the
outer
third
third
ring,
we
have
the
3d
9
SD
of
the
cottage
much
lower
like
the
P
4500
SD.
The
cautery
is
like
30
cents
per
dollar
and
butter
for
the
option
drive
probably
much
higher.
So we are hoping that in the near future, in the all-flash configuration with Optane drives, we can put the Optane and the TLC 3D NAND SSD together to build a high-performance, low-latency, high-capacity and cost-effective solution. The simple thing is: we can put the journal, WAL, metadata or even a cache on the Optane drive, and your data on the TLC 3D NAND SSD. And don't worry about the endurance; you know, for the P4500,
B
The
endurance
is
a
pretty
similar
like
the
PC
700
that
are
on
the
same
level
and
okay.
So
in
this
way
we
can
provide
a
best
IOPS
per
dollar
bus
to
our
PS,
to
put
gig
terabyte,
and
maybe
a
turbine
rack
configuration.
So
you
probably
know
that
we
have
another
form
factor
on
the
three
crossbone
technology
and
the
device
will
be
kind
of
a
put
it
in
memory.
So
here
is
some
kind
of
a
person
in
work
we
done
with
the
person
domestic
device.
We
try
to
try
to
see.
B
How
can
we
use
the
percent
of
memory
in
staff?
So
we
do.
This
is
a
POC
walk
this?
This
is
a
bit
actually
based
on
elaborate
color
liberty
memory,
which
is
all
the
different
types
of
a
user
scenario.
Like
you
have
a
leap
in
and
block
you
have
a
little
memory
object,
each
it's
a
whole
bunch
of
library
that
can
let
let
your
user
level
application
exercise
of
the
processing
memory
device
directly,
bypassing
the
file
system
bypass
the
kernel.
B
So
here
comes
the
summary
stuff
is
awesome.
You
can
have
a
different
kind
of
configurations,
good,
better
best.
Configuration
second
thing
is
that
we
do
see
a
strong
team
on
our
of
less
configuration
from
different,
accountable
customers
and
the
optimist
of
wrestle
safe
cluster
is
capable
over
delivering.
Like
2.8
million
RPS.
You
know
extremely
low
latency
not
only
on
the
average
latency,
but
also
on
the
till
agency,
but
we
still
need
to
work
a
bit
to
make
it
more
efficient
with
the
authorizer
reconfiguration.
So
the
next
time
wait.
Maybe
we
nam
the
next
step.
B
We
will
try
the
opt-in
with
the
current
etiquettes
to
improve
the
aridity
further
ok,
this
is
actually
a
teamwork.
I
actually
share
the
credit
with
my
colleague,
huddle
and
jam
here
in
the
back
part.
We
actually
attach
to
all
the
detailed
self-tuning
configurations,
including
the
roxy,
be
tuning
staff
here,
the
blue
start
Union
staff
here
and
all
the
debug
level
tuning.
So
you
can
see
all
the
tuning
in
this
scenario.
Ok,
thank
you.
That's
all
the
content,
any
question.
B: The RocksDB tuning is all here; you can see it's kind of about the buffer numbers and the compaction triggers and threads. And what we added is kind of an event tracer in the RocksDB stuff; you can actually have RocksDB dump it out.
C: I see, you mean it dumps all the counters, right?
B: This one, yeah.
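For reference, the knobs being described (write buffer counts, compaction triggers, background threads) live in BlueStore's bluestore_rocksdb_options string; a hypothetical ceph.conf fragment of that shape, with placeholder values rather than the team's tuned settings:

```ini
# ceph.conf fragment, illustrative values only
[osd]
bluestore_rocksdb_options = compression=kNoCompression,max_write_buffer_number=4,min_write_buffer_number_to_merge=1,write_buffer_size=268435456,level0_file_num_compaction_trigger=4,max_background_compactions=8,max_background_flushes=2
```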
A: Okay, thank you. All right, any other questions? No? All right. So, whoever it was that asked about the slides: they will be up on the Ceph SlideShare account, where most of our Ceph Day content ends up, so go ahead and take a look at that. If you don't find them, ask on the mailing list and someone will point you the way. Other than that, thank you, John. Let's give him a thank-you.