From YouTube: 2016-NOV-23 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
A
A
B
C
It's gotten a run; the completion stuff from the OSD side, I'm not sure who the person reviewing it is. It restructures BlueStore so that it will do as much work as possible synchronously, including writing the transaction into RocksDB in the same thread when appropriate. And that basically made me realize there's a bunch of other stuff that's broken, so there's another patch that I'm trying to merge first that fixes all that stuff, and then I'll go back and clean it up.
C
It's sort of two things, though. I mean, it's only useful if you have, like, an in-DRAM-type journal, something super fast for the database; otherwise it's not actually going to be an improvement. And you can't do it if there's actually any I/O, because you have to go wait for the I/O to finish, and we're not doing that synchronously, not currently.
C
C
So that would be an option, but I wanted, for that to happen, the completions to be able to be called synchronously also, and that sort of means some refactoring of the interface with the OSD, so that in some cases a call site that assumes it's queuing asynchronously might actually complete synchronously. And so we need to make sure all those call sites can handle it, and the completions can handle it, their locking and so on.
C
So there's some more work to do there, but it pushed us down the road, so to speak.
B
C
D
B
Okay, cool. The only other thing, I guess there's a couple of things here, but on the Ceph side, Somnath has been playing around with a couple of different things, and you mentioned this morning at the BlueStore stand-up that sync submit transaction being evil.
B
B
What I saw, basically, is that in general, with the bitmap allocator used for BlueStore, we're not quite as spiky, we're more consistent, and the overall performance level is quite a bit higher, probably on the order of about twenty to twenty-five percent. We also seem to avoid this large stall that was present with the stupid allocator towards the beginning, though the scale makes this a little bit hard to see; I suspect that some of these dips are actually quite a bit lower.
B
That's the kind of behavior we've seen in the past, where we can have these periodic stalls that are fairly short but are associated with RocksDB compaction. But for whatever reason, with the bitmap allocator things generally appear to be quite a bit better. So anyway, I don't have too much time to talk about this anymore, because we've actually got folks from Nokia that want to present some findings that they have.
D
E
E
B
C
B
Yeah, so that's definitely, I would like to see someone else show similar results before, you know, trusting this twenty-odd percent number, but it seems to be the case, given that's the only thing I changed and I'm seeing different behavior. I should also mention, too, before I turn it over to Nokia, that in other tests with, like, large sequential I/O, the bitmap allocator appears to be doing worse, so it may not be a universal win here. So it's probably worth understanding what the behavior is in each case to try to explain this.
C
D
C
B
B
D
B
D
Okay, so this is the way I'm actually going to simulate the big image and the preconditioning, and how to get rid of that. For example, I have seen, you know, if you want to evaluate a big image and make it reach the steady state, I have to do four full sequential write passes, and it's going to take me like 12 hours apiece. So we discussed this with Sage, and we came up with this idea: why don't you write to a large thin image through BlueStore, since that's thinly allocated, right?
D
We don't really need to fill all of it with data, because most of it usually won't be touched; you only fill up part of the cluster. So we can fill it up and then treat it like a small image: I think you can get the four sequential write passes for preconditioning done on that small image and make that the image you test.
D
And you see the behavior. So what RocksDB will be doing, and what knobs we can change, that side we can discuss later, but yeah. So this is the behavior we observe when we run it, and you see the CPU percentage here, eighty percent is used, in one run after it's fully preconditioned. So this is the graph for the different partitions: the DB partition, the WAL partition and the data partition. Data means the actual writes, what is going on in the actual write path, and I collected all the stats on that.
D
So you'll see that this run is still, sorry, it is mostly happening out of there; let me go to the next slide. So this is the write test. Out of that, you will see that at the same time the DB I/O is happening around 66, and the WAL I/O is happening around 70. So the write amplification is, for me, 1.31 with these kinds of writes.
D
That's the 100 GB test, where you see the writes happening, and, to be specific, I collected how many reads were happening. The reads were happening in the cache, mostly, but you still see a lot of misses now, so that will kill your read amplification, and so that's actually the thing to watch. Next, if you see the same thing with 500 GB, you will see the performance is kind of, now, say, eight to nine K; it came down to this.
D
One is now around 4K, so you see the CPU is also twenty percent more consumed, even though it's generating less throughput; so it's going to compaction, I believe. So this is the graph for all the stats for 500 GB. You see how the reads start climbing up against the rest of the data, with similar throughput.
D
You see that the write amplification has now increased, like 1.3 to 2.0 or something, but see that the reads went from 130 to 380. So, so yeah, that's what the summary is: I don't think the way of populating the image is cheating the steady state that much, because with 100 GB done and with 500 GB you would expect, I can tell, similar performance. So that's the problem we have; sorry, that's what we need to chase.
C
Yeah, and I, yeah, I'm very curious about that and why that's happening; I think we should chase it down.
E
C
E
I think, I'm wondering if the configuration of RocksDB is getting tuned the way we think, and if we're not there... I mean, as we've noticed before, depending on how you set up the compaction, you can trade off your write amp for read amp, and that's probably not what we want. And so, you know, my observation here is that if you're seeing super-high read amp here, that's not the model that we would be expecting, so here goes: something's not configured right, yeah.
D
E
And the slowdowns... so yes, yes, so right: what you're doing is you're hiding the behavior you're looking at, but you're making something that's intolerable much worse. I mean, if you were laying a latency graph up against this, you'd be seeing your latency going up, you know, I don't know, two seconds maybe. I mean, you know, what was the read amp you were showing, I forget? Was it five to one or ten to one? Yeah.
D
B
G
Right, I'm on with a colleague as well, called Muthu, who's just going to give a bit of explanation. So yeah, my name is Benjamin Jenkins and I'm the chief video architect in Nokia. Muthu works in our development team and he's done a lot of work on Ceph, and BlueStore in particular. So what I'll present is really a few slides, just to introduce us, what we do, a couple of our key use cases and how we use Ceph, and then Muthu...
G
...will cover sort of more details of the testing he's done and the experience he had. So, can everyone see the slides that are hopefully shared? Yes, perfect, okay. So first off, who are we? Nokia is quite a big company, as you're probably aware, and where I work, where Muthu and I work, is in the video business unit, which is part of the networks division.
G
And basically the video business unit is what you'd expect: we make products that deliver television and video across the internet and across broadband networks. We've got a range of customers, some of which I've included on the slide. Our customer base is primarily telcos, ISPs and cable operators, though we do have some content providers, like HBO and the like, as customers as well. We have four primary products. The first is a content delivery network product that does in-network caching, scaling and distribution delivery of HTTP content.
G
The second product is a cloud DVR that I'll talk a little bit more about, because that's the product that we use Ceph in. We have a personalization platform that sort of integrates to enable ad insertion into video streams, and a broadcast optimizer for older IPTV networks. So what is cloud DVR? It's basically taking the personal video recorder that you have in your home and having that in the, sort of, quote-unquote cloud, where really it's in the network.
G
So the recordings that you want made are made within your ISP's network, and then they're available to be streamed to you wherever you are, be that on your television in the home, or on an iPad in your home, or on an iPad or a mobile device when you're outside the home. And that's enabled by doing the recording in the network and then streaming from within the network. This is a marketing slide, really; I mean, I won't talk too much about it.
G
G
I won't go into a huge amount of detail, but the way that we typically deliver video, and the way video is often delivered online by folks like Netflix and so on, is what's called adaptive bitrate video. The basic essence there is that the TV channel, or the video-on-demand asset, is encoded into multiple bit rates and then is segmented into different files, essentially, of typically between two and ten seconds each. So every, let's call it two seconds of video is a separate file that's available at different bit rates.
G
It's
all
requested
over
HTTP
like
like
normal
web
transactions,
and
then
the
the
client
is
able
to
adapt
arm
to
the
underlying
network
conditions.
It
has
bandwidth
essentially
that
it
has
by
selecting
a
different
bit
rate
depending
on
its
circumstances.
So,
for
example,
clients
will
quite
often
start
at
a
low
bit
rate
and
there's
a
video
buffer
Falls
and
as
they
detect,
they
have
more
bandwidth,
request
higher
bit
rates
and
then,
if
there's
a
problem,
we
get
in
a
particular
bit
rate.
They
can
rate
a
dappled.
G
It's
off
arm
absolute
quality,
so
you
don't
always
get
the
highest
quality
of
delivery,
but
you
get
the
highest
quality
that
the
bandwidth
that
you
have
available
to
you
can
support
and
that
can
adapt
over
the
time
of
the
video.
What
that
really
means
is
when
we're
recording
this
video,
then
we
have
multiple
bit
rates
to
record
split
into
segments
of
a
few
seconds.
So
we
end
up
with
lots
and
lots
of
objects,
or
files
in
the
storage
system
are
very
basic
architecture
of
how
the
product
works
from
right
to
left.
G
G
There's then a CDN, a content distribution network, in front of the cloud DVR, further scaling out the playout of the video. So the CDN will cache video, and if multiple people request it, they get that from the CDN cache rather than going all the way back and hitting Ceph for a read. There is one use case where that doesn't apply and the reads come out of Ceph directly, but I'll talk about that in a minute.
G
So then I just have a couple of slides on a couple of key use cases, and then I'll hand over to Muthu for the more interesting test details. So one use case: some people call it a sliding window, some people call it a circular buffer, but essentially what it is, is that we will permanently record and then make available the last n hours of content. It depends on the broadcaster what they offer, but it could be between 22 hours and a week's worth of content.
G
And then, while you're watching television, you can press rewind and sort of rewind into the past from the live point, and fast-forward up to live again, and have that sort of seamless kind of user experience without having to have actually had the forethought to select to record something. How that interacts with the storage layer underneath, really, is that we end up writing a continuous stream of video chunks, or files, of a few seconds of video, in multiple bit rates.
G
So we're doing constant writes, reads and deletes, which then places some high-level requirements on the underlying storage: consistent kind of performance, even in the case of single failures, and obviously a storage technology that has redundancy and fault tolerance. And the reason for the consistency requirement is really that we're doing a consistent amount of writes all the time; we're supporting a large number of channels, and we can only have so much buffering in RAM and things before we need to actually have it in permanent storage and out of RAM.
G
The next use case is similar; it's what we call shared copy. This is where our users actually explicitly request a copy of a program, and the normal way that we do that is that we make one copy of that TV program that is then shared between all the users that have requested it be recorded. So a thousand users might ask for a program to be recorded, and we store that once in Ceph. The interaction is similar to the kind of sliding-window use case...
G
...I just talked about; it's a continuous sort of stream of video chunks. Well, it's not continuous 24/7 in the same way, because obviously popular television is typically focused in a few hours in the evening, and that's what most people record; in other parts of the day people are recording things, but that's not, like, the peak demand on the system. Also, when people tend to watch video overlaps with when they tend to record it.
G
So the read and write peaks overlap with each other, but because the content is shared between many users, we can leverage the CDN to cache it and help reduce the reads. Deletes typically occur when the last user selects to delete a program, or, in some cases, there's a configured sort of retention period where, like, after six months, content is deleted.
G
These delete requests can come in a lot at once, for example, which drives similar requirements to the previous use case in terms of storage. And then the last one is what we call private copy, which is different to the previous two. Where that is typically used is where a content provider demands it: our system effectively emulates each user having their own video recorder in their home, which means that we have to make a recording of a program for each user that requested it. So if a thousand users request that...
G
...we end up making a thousand recordings in Ceph. Other than that, the way it works is basically the same as the shared copy; there are just more recordings in parallel, because we're making one recording per user. The other difference is that the content distribution network in front doesn't add value, because the content isn't shared between multiple users, and therefore the full kind of read bandwidth is coming out of Ceph, essentially. And that places an additional kind of requirement to try and get the cost of the system as low as possible.
G
H
Then, okay, continuing: depending on our customer requirements, we have two types of production deployment hardware. One is the Apollo 4200, which is a medium-scale storage system. It has 28 CPU cores running at 2.4 GHz, 256 GB RAM, and it has 24 or 48 disks, and for the Ceph side we allocate 14 CPU cores and 128 GB RAM. The other hardware is a high-storage-density one; it has 32 cores running at 2.3 GHz, and total RAM memory is around 384 GB.
H
It has 68 disks, and then for Ceph we allocated 16 cores and 256 GB RAM. Apart from this, we have three SGI nodes as a test platform to validate the weekly development builds of Ceph; each has 20 CPU cores running at 2.5 GHz, 256 GB RAM, and for the disks 16 of 8 TB, and this is purely for comparing the performance. On to the next slide.
H
This slide actually gives the architecture of our product. Each hardware node will have three pieces of software: mainly Ceph, and the recording engine, which is going to be the client for Ceph, and the recording engine uses a Cassandra database to store the video metadata information.
H
So in terms of the resource allocation, we allocate socket zero, or NUMA node 0, all of its CPUs as well as its RAM, to Ceph, and we allocate socket one, or node 1, CPU and RAM, to the recording engine as well as Cassandra. Again, depending on the customer requirements and the storage requirements, we have five-node clusters as well as ten-node clusters.
H
And in terms of the network architecture, we have provided two-by-10-gig, 20 Gbps bonded interfaces for the Ceph public as well as private networks, and then we have an additional two 20-gig interfaces for the recording engine, one to communicate with Ceph and one for delivering the video to the end users. Apart from this, we have one-gig interfaces to bring the video from the content provider into our cloud DVR solution.
H
In terms of the performance test process: after the functional testing is validated, the software comes to the performance team, where we install it, integrate it with the other products, and then do a basic rados bench for the read and write of various object sizes. Once we have the baseline read and write performance, then what we do is benchmark the number of channels which can be recorded on a particular platform, depending on the bitrate. So we just take that number and then run stability tests.
H
H
We have been working with Ceph for almost two years, starting from Firefly up to Kraken. We have seen a lot of integration issues with the Ceph FileStore, and a lot of challenges, which you will see in the coming slides. And during the months of May and June, I think, we saw some presentations related to BlueStore by Sage Weil, and then we thought of validating it, because we had seen a lot of issues with write and delete with respect to the FileStore.
H
So the first step which we did was to integrate the BlueStore code onto our test setup, the SGI platform, which has three nodes. We used erasure coding two-plus-one, and then we did the same with the FileStore. What we did was run rados bench, just writing 10 MB objects continuously, from zero storage utilization, for almost 24 hours, until crossing 50 percent of storage utilization. So what we noticed was, at the initial stage...
H
...with the FileStore, the performance was around 17 Gbps, then it drastically dropped, and then, when the storage was around 50 percent, we saw it drop further as the fill percentage increased. So we did the same test for BlueStore: initially itself it gave a tremendous performance improvement, like 28 Gbps, and then this was running for 24 hours until the storage fill was crossing the 50 percent mark, and, interestingly, the performance stayed constant at the same 28 Gbps.
H
H
So then we tried to integrate BlueStore with our application, and then what we saw was, after six hours, Ceph getting crashed, and every time we had to rebuild the cluster again and again, due to some BlueStore crash issues. And then we asked the Ceph community, and they were pointing out that in the coming releases this crash would be solved.
H
So what we did was clone the Ceph master repository, and once in two weeks we would rebuild our builds and then validate the stability of the system to ensure that the issue was fixed. Luckily, one week before the Kraken development release, we got a development build where it got solved, and then we started moving this BlueStore code to our production hardware. Those are the results you can see on the next slides.
H
So this is the performance on our high-storage architecture, the Apollo 4510, with FileStore itself. Here you can see that the object size ranges from 256 KB to 4 MB, and what we want to highlight here is that the Jewel FileStore and the Kraken BlueStore release 11.0.2 give almost pretty much the same performance; there is not much difference. But what we saw was that, in terms of FileStore, when the storage utilization is high, the performance drops, whereas here the performance remains the same.
H
Another point we want to highlight here: the Apollo 4510 is a high-storage architecture where we have a huge number of disks; in the three-node cluster we have 68 disks each, so we have a high number of IOPS available on the disk storage, but we could not achieve more write throughput or read throughput, and what we see is that the CPU becomes the bottleneck here. Next slide: on our medium-storage architecture...
H
E
H
...the performance of the FileStore: if you see the overall picture, with the Jewel FileStore on top and BlueStore on the bottom, the performance is almost consistent, whereas if you see the Kraken BlueStore, what we saw was that there was a significant decrease in the write performance. And if you compare the Jewel FileStore and the Kraken BlueStore, it will be more or less the same, but the actual BlueStore had been giving me record high performance before.
C
H
H
Yes, it was matching the initial performance level of the FileStore; the only thing is, it is consistent throughout, that's the other thing. So, as you said, we modified the setting, that was simple, and what we see is that, apart from 256 KB and 512 KB, for the rest of the sizes you can see that there is quite a performance improvement.
B
Bruce,
it's
just
sorry,
it's
a
traumatic
where
you
see
people
needed
in
these
tests.
What
I
or
ucp2
limited
nice
toss.
C
B
H
H
C
E
C
Though you're doing large writes, it's still a lot more metadata, which means that there's a lot more stuff in RocksDB, which means that compaction is going to start being painful, if this is a spinning disk. So if you're using librados directly, there's a hint that you can set.
C
It basically says that you're reading and writing the object sequentially, and if you set that, it's going to use a much bigger block chunk size when it does the CRCs, which is going to collapse the metadata back down to what it was with Jewel. Then you'll probably see that number come back up. Oh, so you're using librados directly, right?
C
C
If you're doing, yeah, I mean, if you're doing 4 MB writes, then, yeah, I don't know, right, exactly, okay. But for your actual application... well, yeah, we'll fix the benchmark, definitely. And for the practical application, though, I mean, this streaming application is always reading these objects in their entirety, right? Yeah.
C
So there's a flags argument for this; there are best-effort advice flags. Just look for those flags in the librados header file, and if you pass the sequential one, then it will know you're going to read and write it sequentially.
C
C
H
In terms of our application, we do have a lot of customer-focused use cases; we're not able to share them, but what we can share is the real improvement numbers. So in terms of the shared-copy feature, which comprises channels, new recordings, recording deletions and playback in a mixed load, we have seen a significant three-times performance boost when compared to the FileStore. These tests we ran with the Kraken BlueStore. The sync setting? We have continued all the tests with the default one.
H
H
H
H
We were advised to simply turn it off on all our platforms, but really we don't know what the added advantage of having deep scrubbing is, and whether it really helps the stability of the cluster; maybe you guys can suggest, as it would be helpful in this case. The next issue which we faced is on the deletes. Like I said before, our application is delete-intensive.
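For reference, deep scrubbing can be toggled cluster-wide or driven by hand with the standard `ceph` CLI; a sketch, where the PG ID and interval value are illustrative:

```shell
# Stop new deep scrubs cluster-wide (note: data is left unverified)
ceph osd set nodeep-scrub
# ...and re-enable them later
ceph osd unset nodeep-scrub

# Kick off a deep scrub of one placement group by hand, the way it was
# used here to surface latent disk errors
ceph pg deep-scrub 2.1f

# Or spread scheduled deep scrubs out instead of disabling them,
# e.g. roughly monthly (value in seconds)
ceph tell osd.* injectargs '--osd-deep-scrub-interval 2592000'
```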
H
So when our application starts a large bulk of delete operations, most of the time the FileStore, or the cluster stability, collapses, and it reduces the performance. And another area where we see performance requests is on recovery: we tried to calculate the recovery time, because it is being requested by several customers, but we tried several iterations, and till now we couldn't identify metrics for what the recovery time will be when we have X amount of storage.
H
This is also a concern, because we are not able to tell the customers how long it will take. We are also hit by XFS corruption issues, and when we have this issue, we could not see it in the Ceph logs; when this issue happens, the I/O on the loaded OSDs increases, and finally the cluster collapses.
H
So only with a deep scrub initiated manually were we able to find these kinds of errors, and then we could remove the OSDs out of the cluster, reformat them with XFS, and then reinsert them into the cluster; that's the only fix we could do. The next one: we do have a lot of hardware failures, and the disks used in the lab are less than production grade, so we are also consulting HP.
H
Another major concern we have is on scalability, because when the customer needs more storage, we add nodes to the cluster to extend it. The only thing we see is the storage capacity improvement, but we haven't seen any significant improvement on the write rate or the delete rate; the performance of write, read and delete stays the same, and the only advantage we are getting is on the storage and the IOPS. And of course you see the two platforms, the high-storage-density and the medium-density storage.
H
H
So in terms of erasure coding, what we see is that it has a high storage efficiency compared to replication, but it really costs in additional IOPS; you see a performance hit, and especially on the resilience, which we'll see in the next slides. We had a lot of issues with the initial filling, and sometimes, when an OSD is down, unless it comes out of the cluster, the cluster degrades and it stops the read and write operations, I think.
H
We also want to share the resiliency use cases. What we do is only the basic resiliency testing on the Ceph cluster: mon failures, the public interface down, the private interface down, disk failure, restarts during I/O; those are the observations on the recovery, node down and then coming back, and what the issues are.
H
So, if you see, on the replicated pools we haven't seen much in the way of issues. We have seen some minor issues, like sometimes, temporarily, the cluster is stuck and we are not able to move read and write operations, but other than that the recovery part is happening well: when a disk is inserted, it just backfills automatically; on disk failure, the rebalancing happens; all these things work perfectly. The only concern is on the replication: what we tried to calculate was recovery when the storage is around 23 percent.
H
H
So if you see the erasure coding, four-plus-one, on the Apollo 4200, the case is completely reversed. Except for the mon-failure use case, where we don't see any read or write impact, in all the other use cases we see quite a lot of read and write impact, and most of the time we see the recovery doesn't start at all, and then it just freezes until all the OSDs come back. So if you take any Ceph public interface down, or private interface down, anywhere, we have internal bugs filed, which we are working on.
I
H
I
H
H
So, as we said, the resiliency is another key area where we are facing a lot of blocking issues, so we will be trying to get out of them by adding some more tunings. And another issue seen in the current Kraken BlueStore is with the object size: when we write big objects, it's limited at the megabyte level.
H
B
B
H
B
Okay, good to know. That's interesting, that we can reproduce it with replication.
H
H
So these are the summary of our key findings on BlueStore. Yes, we are seeing very much better performance; in terms of our application we see close to two or three times performance improvement, and we see some stability also on the BlueStore. And then this point also we have covered: with the default sync setting we are getting less performance, comparatively.
H
We would also like to see what the advantages are of having the sync enabled compared to the simple default, and then delete is always there as our concern, because our application is more delete-oriented, and that goes for the BlueStore too. This part also we have seen: at certain times, like when the disks get full, the performance degrades, and our high-storage platform could not get high write throughput even though we have a large number of disks; so here the CPU being the limiting factor is what we see.
G
Thanks, Muthu. So, I mean, that's what we had in terms of, sort of, a presentation. I guess one question we have is: is this sort of information useful to you guys? Would it be useful for us to continue providing this information in some form? Is there some way that we can?
B
Yes. So you probably know this from all the testing you've been doing, but BlueStore is still changing, you know, fairly rapidly, and even sometimes it's not necessarily a major code change that will make a big performance difference. We're still pretty actively working on trying to understand several different aspects of it, a fundamental one of which is RocksDB compaction behavior and how much data, how much metadata, is ending up in RocksDB and how that's dealt with.
B
But one thing that we haven't had a lot of coverage on is BlueStore with erasure coding, and it was very interesting to see test results there, so that was good. Just in general, keep doing what you're doing. If you can get us valgrind runs with the memory leak for the 10-megabyte objects, that would be fantastic, but that does get a little tough to do if you're not familiar with valgrind, so I can definitely help you out there. Yeah, this is fantastic, thank you very much.
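One way to produce the valgrind run being asked for, sketched below; the OSD id and paths are illustrative, and an OSD under valgrind runs many times slower, so this is a lab-only exercise:

```shell
# Stop the OSD's normal daemon, then run it in the foreground under
# valgrind's leak checker while the 10 MB object workload runs
sudo systemctl stop ceph-osd@0
sudo valgrind --tool=memcheck --leak-check=full --num-callers=32 \
     --log-file=/var/log/ceph/osd.0.valgrind.log \
     /usr/bin/ceph-osd -f -i 0 --setuser ceph --setgroup ceph

# massif is the alternative tool if steady heap growth, rather than an
# outright leak, is the symptom:
#   valgrind --tool=massif /usr/bin/ceph-osd -f -i 0 ...
```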
G
Pleasure. Brilliant, okay. So, Mark, maybe that's best just worked on with you, and I can feed back into the engineering team what you need for the memory-leak thing, I guess. If we find other things in testing, then, if you guys don't have a problem with those, we'd just sort of maybe bring it up on the list or something first, to check that...
G
...it's not already a known issue, and then raise bugs against it, and all that; it's open source. I guess I'm conscious that I'm not that familiar with sort of open-source development, and I don't know what the etiquette is in terms of, like, raising bugs versus contributing code versus testing versus whatever.
B
Any and all will be happily accepted. If you want to, we do have a daily BlueStore stand-up meeting; it's about 15 minutes long, so it's real short, where kind of all the active people that work on BlueStore get together and just kind of discuss what they've been doing. Feel free to join it every day if you want, or, if you would prefer, to just join it when you have something interesting to bring up; that would be fine too. And also the mailing list is perfectly acceptable, or bug reports.
B
One thing that can be helpful, if you do it, and I don't know how you're doing your setup, but if you do see a big discrepancy in performance between releases, like in Jewel versus Kraken: any data you can provide on what was going on on the disks, so, you know, iostat, or collectl with the disk module.
B
Anything like that would be useful, and then also an OSD log would potentially help us, because BlueStore, sorry, RocksDB, will periodically give you compaction stats and some other information in there, which can maybe tell us something about differences between the releases. So anything like that that you can provide would probably be helpful.
G
F
Hi Mark, I've just got a couple of quick questions and observations. (Go ahead.) So the first thing is deletions; that's something I sort of struggle with, and what I see is, when you have a large number of objects being deleted, either through maybe removing a snapshot or running fstrim or something, it seems that the queue on the OSD basically gets overloaded with delete operations. And it sounds like they suffer from similar things: the OSDs start dropping them, and generally the whole cluster sort of either completely falls over or really slows down.
F
Yeah, sure. It just sort of came to me as I was watching the presentation, so I'll send them out. And the only other thing I saw was, when you're using erasure coding and you said all I/O was freezing when you lost a disk or a node: I saw you're only using, like, one erasure chunk for the data, and I'm wondering if you're getting hit by some sort of, like, min_size thing, which is stopping I/O until that sort of has recovered itself.
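That min_size interaction can be checked directly; a sketch with made-up pool and profile names. With k=4, m=1, losing a single shard leaves exactly k shards, so any min_size above k blocks I/O until recovery completes:

```shell
# Create a 4+1 profile like the one described and a pool that uses it
ceph osd erasure-code-profile set ec-4-1 k=4 m=1
ceph osd pool create dvr-data 1024 1024 erasure ec-4-1

# size should be k+m = 5; if min_size is 5 here, one failure freezes
# writes until the PGs have recovered
ceph osd pool get dvr-data size
ceph osd pool get dvr-data min_size
```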
H
Yes, the basic requirement we have is to have 80 percent storage efficiency, and that was the reason why we have chosen four-plus-one. And we normally share this information with the customer also: like, probably one disk or one node can go down, the complete node, controller board and all, and then the performance isn't affected, as you might see. Sure.
G
G
G
F
And the only other thing I've been looking at, again, it's the same thing this week, is, with your current FileStore, have you played around with the VFS cache pressure? Because what I found is that, if you've got, sort of, like, you know, an OpenStack or some VM environment, where you tend to have a lot of hot volumes, a certain amount of hot data, it doesn't have that much impact. But for what we're doing on my cluster, we've got data that might not be accessed for 24 hours and then it's very intensely accessed.
F
H
D
So one thing with regard to the cache pressure is that we used to do that: like, if you make the vfs_cache_pressure 20, it will pin the inodes and everything, so they will never be released from the memory over time; that's the drawback of it. So we are actually doing that for all of our block testing and the replicated testing, and it's giving us good performance relative to the others, but with the erasure-coded pools it actually hit back at us.
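The knob under discussion is the kernel's dentry/inode reclaim pressure; a sketch, where 20 is the value quoted above, 100 is the kernel default, and 0 means never reclaim, which is exactly what makes the EC small-file case below painful:

```shell
# Show the current reclaim pressure for cached dentries and inodes
sysctl vm.vfs_cache_pressure

# Favor keeping inodes cached, as described for the FileStore testing
sudo sysctl vm.vfs_cache_pressure=20

# Persist the setting across reboots
echo 'vm.vfs_cache_pressure = 20' | sudo tee /etc/sysctl.d/90-vfs-cache.conf
```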
D
The reason is that it has so many small files for the EC: especially if you are using, say, 6+2, you have eight small files for each object, and because of that the inode count is huge, and the amount of memory pinned on the XFS side is not released unless you force it. So over time there is not enough free memory left, and that actually hit back at us; so be careful with EC on that. Okay, that is it for now. Thank you, Somnath.