From YouTube: 2016-NOV-09 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
A: I guess we'll go through those and I'll add them in later; if anyone's interested, feel free to look at the etherpad. Of the ones that I did look at: in general we're moving a lot of BlueStore's allocations into their own mempools now, so that we can better track where memory is going. There's one PR in here for BlueFS, but there are probably a couple of other PRs floating around for moving other stuff in there too.
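The mempool work is about tagging allocations so that each subsystem's memory usage can be accounted separately and read back at runtime. Purely as an illustration of the idea (this is not Ceph's actual mempool API, which lives in src/include/mempool.h; the pool names and allocator here are made up for the sketch):

```cpp
// Minimal sketch of pool-tagged allocation accounting, assuming a simple
// per-pool byte counter.  Illustrative only; Ceph's real mempools differ.
#include <atomic>
#include <cstddef>
#include <iostream>
#include <vector>

enum pool_id { POOL_BLUESTORE = 0, POOL_BLUEFS = 1, NUM_POOLS = 2 };

static std::atomic<long long> pool_bytes[NUM_POOLS];

template <typename T, pool_id P>
struct pool_allocator {
  using value_type = T;
  template <typename U> struct rebind { using other = pool_allocator<U, P>; };

  pool_allocator() = default;
  template <typename U> pool_allocator(const pool_allocator<U, P>&) {}

  T* allocate(std::size_t n) {
    pool_bytes[P] += static_cast<long long>(n * sizeof(T));  // charge the pool
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }
  void deallocate(T* p, std::size_t n) {
    pool_bytes[P] -= static_cast<long long>(n * sizeof(T));  // credit it back
    ::operator delete(p);
  }
  template <typename U> bool operator==(const pool_allocator<U, P>&) const { return true; }
  template <typename U> bool operator!=(const pool_allocator<U, P>&) const { return false; }
};

int main() {
  // Containers declared with a tagged allocator show up in their pool's total.
  std::vector<int, pool_allocator<int, POOL_BLUEFS>> bluefs_buf(4096);
  std::cout << "bluefs pool bytes: " << pool_bytes[POOL_BLUEFS] << "\n";
}
```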
A: I'm very excited, potentially, about the work he's doing, that last one there, and also the RTC programming-model one for BlueStore. I only very briefly skimmed that, but it's worth looking at. If you're interested in the messenger and BlueStore, there's a whole bunch of stuff there; also, the ibverbs RDMA support for the AsyncMessenger merged.
A
A
We
merged
the
new
rocks,
TBE
config
apps
settings.
Basically,
we
switched
to
large
buffers
fewer
larger
buffers
because
it
reduces
right
right,
Anthony
rocks
TBE.
So
the
hope
there
is
that
it
should
make
things
better.
It
seems
to
in
our
tests,
unfortunately,
in
really
long
running
tests
that
looks
like
we're
still
throwing
so
much
data
at
rocks,
DB
that
we
start
stalling
on
compaction
from
level
zero
level.
A: There are a couple of things we probably still need to go back and verify, but we're doing everything we can to make the level 0 to level 1 compaction fast. It's single-threaded, so basically the faster you can make it the better; compactions between the other levels can be multi-threaded, but that one can't. Some of the things are: making sure level 0 and level 1 are the same size, making sure that the compaction thread has high priority, and reducing the amount of data that goes into level 0.
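For reference, the knobs being discussed map roughly onto standard RocksDB options as shown below. This is a hedged illustration, not the option string Ceph actually merged; the option names are real RocksDB ones, but the values are placeholders.

```cpp
// Sketch of RocksDB tuning along the lines discussed above: fewer, larger
// memtables and a fast L0->L1 compaction path.  Values are placeholders.
#include <rocksdb/db.h>
#include <rocksdb/options.h>

rocksdb::Options make_options() {
  rocksdb::Options opt;

  // Fewer, larger write buffers (memtables) to cut write amplification.
  opt.write_buffer_size = 256 * 1024 * 1024;   // 256 MB memtable
  opt.max_write_buffer_number = 4;
  opt.min_write_buffer_number_to_merge = 1;

  // Keep L0 small and make L1 roughly the same total size as L0, so the
  // single-threaded L0->L1 compaction stays short.
  opt.level0_file_num_compaction_trigger = 4;
  opt.max_bytes_for_level_base = 1ULL * 1024 * 1024 * 1024;  // ~= L0 total
  opt.target_file_size_base = 256 * 1024 * 1024;

  // Compactions between higher levels can run in parallel.
  opt.max_background_compactions = 4;
  opt.max_background_flushes = 2;

  // Give the flush/compaction thread pools elevated priority.
  opt.env->SetBackgroundThreads(2, rocksdb::Env::HIGH);
  opt.env->SetBackgroundThreads(4, rocksdb::Env::LOW);

  return opt;
}

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options opt = make_options();
  opt.create_if_missing = true;
  rocksdb::Status s = rocksdb::DB::Open(opt, "/tmp/tuned_rocksdb", &db);
  delete db;
  return s.ok() ? 0 : 1;
}
```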
A: There's actually (oh, that's covered in the discussion topics section) a proposal from Sage for the RocksDB write-ahead log, to try to make sure that we don't promote short-lived data into level 0. If we can do that, then depending on which writes have a lot of traffic that is leaking into level 0 that we don't want there, it might help this as well. It would be good if we could actually do something to see how many of those writes there are.
A
A
A
Yet
I
will
do
so
in
fri
link
report,
but
we
did
some
testing
for
sams
ec
overwrite
patch,
which
has
a
little
bit
of
a
negative
performance
impact
for
just
normal
replicated
rights
and
probably
for
erasure
coded
rights
to
the
the
gist
of
it
is
that
it
doesn't
really
make
sense
to
us
why
we're
seeing
what
we're
seeing,
but
it's
it
doesn't
really
have
much
of
a
performance
impact
at
all
or
CP
usage
impact
at
all,
except
in,
like
four
megabyte
bright
cases.
A: We were more expecting to see it in the small random-write case, and we did not see it there, so at some point I'll probably try to go back, get together with him, and see if we can tease out what exactly it is. One other thing I don't have written down here that I should mention, though, is that there was a performance regression in BlueStore, a fairly big one, that was due to the RocksDB compile-time optimizations no longer being passed in when we changed it to an external project.
A
It
turns
out
that
had
a
really
huge
performance
impact
for,
like
small
random
writes,
it
was
on
our
tests
up
here.
It
was
like
a
2x
performance
regression,
so
that's
been
fixed,
but
if,
if
in
the
last
couple
of
weeks,
you've
seen
like
major
performance
variations
in
blue
store,
that
probably
is
at
least
one
contributing
factor.
There
may
also
be
another
regression
that
we're
trying
to
track
down.
I
think
Igor,
seen
it
in
my
bisect
that
I
did
for
this
other
problem.
I
did
see
some
evidence
that
there
may
be
another
regression.
A: Basically there's a compile-time optimization that we can't actually use, which may have gotten briefly enabled; it may have actually helped performance and then hurt it when we took it away, but we'll just have to see on that one. So anyway, that's all I've got for this week. Do you guys want to talk a little bit about your ZetaScale testing?

C: Oh yes.
C: Okay, so the purpose of this presentation is to talk about the initial performance numbers that we have with ZetaScale. Just for the whole audience today: ZetaScale is another key-value store, like RocksDB, but it is heavily optimized for flash. One fundamental difference between RocksDB and ZetaScale is that RocksDB uses write-ahead logging: every piece of data you write goes to the WAL first and then to its final location. ZetaScale uses write-behind logging, where the data is written to the final location directly and only a metadata log entry is created, so the data is written only once in ZetaScale. That's one of the fundamental differences between these two storage engines. So, on to the performance tests.
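To make the distinction concrete, here is a rough sketch of the two logging strategies as described above. It is illustrative pseudocode only, not the actual RocksDB or ZetaScale write paths, and all helper names are made up for the example.

```cpp
// Illustrative contrast between write-ahead logging (RocksDB-style) and
// write-behind logging (ZetaScale-style).  Hypothetical helpers only.
#include <cstddef>
#include <iostream>
#include <string>

struct Record { std::string key; std::string value; };

static std::size_t bytes_logged = 0, bytes_to_final = 0;

void append_to_log(const std::string& bytes)  { bytes_logged  += bytes.size(); }
void write_to_final_location(const Record& r) { bytes_to_final += r.value.size(); }

// Write-ahead logging: the value is written twice, first to the sequential
// log for durability and later to its final location.
void put_write_ahead(const Record& r) {
  append_to_log(r.key + r.value);      // 1st write of the data (plus metadata)
  write_to_final_location(r);          // 2nd write of the data
}

// Write-behind logging: the value is written once, directly to its final
// location; only a small metadata record is logged.
void put_write_behind(const Record& r) {
  write_to_final_location(r);          // only write of the data itself
  append_to_log(r.key);                // metadata-only log record
}

int main() {
  Record r{"onode:1", std::string(4096, 'x')};
  put_write_ahead(r);
  put_write_behind(r);
  std::cout << "logged=" << bytes_logged << " final=" << bytes_to_final << "\n";
}
```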
C
So
so,
basically
we
are
replaced.
We
used
the
desk
le
in
place
of
Locksley.
So
then
we
try
to
compare
the
performance
difference
between
these
two.
So
we
did
two
types
of
casting
one
with
the
environment
where
the
Monod
ice
is
not
reach
steady
state.
So
that
means
the
the
size.
The
sum
of
size
of
one
owed
and
the
Sharks
right
for
each
object
is
my
face
to
the
steady
state.
So
that's
one
one,
one
cuz.
We
wanted
to
see
how
things
look,
because
that
affects
the
overall
data
set.
C
That
logs
will
be
needs
to
serve
right.
So
so
that's
one
test
now.
Another
test
we
did
was
okay.
When
it
reaches
steady
state,
the
each
one
owed
how
the
performance
would
look
right
there.
So
the
two
aspects
we
are
focusing
and
on
the
first
side
we
started
with
a
single
OST.
Our
goal
is
to
get
the
single
OST
performance
to
just
see
how
the
performance
later
look
right,
but
OST
and
then
do
the
same
test
with
the
multi
lowest
is
like
typical
numbers
will
do
right.
C
Those
are
the
things
till
some
tests
are
in
progress,
but
we
have
some
data
to
talk
about
in
this
case.
So
first
I'll
start
with
the
single
OST
performance,
and
so
it
just
reused.
One
server
for
this
assistant
Louis,
be
the
total
capacity
of
the
OST
is
a
double
batch.
This
is
these
five
is
eight
8
per
our
bikes,
and
so
we
created
a
40b
image,
a
lovely
barbary
image
in
it
and
then
run
the
test
with
drunk
sleeping
and
unclothed
role
of
atrocity
games
arrested.
C: Here the performance is like a best-case scenario, where the onodes haven't reached steady state; that means the RocksDB data set is way smaller than it would be if the onodes had reached steady state. There are three charts; I don't know whether we have a way to share the screen, but I might be able to bring them up in the presentation we'll do at the next meeting. At a high level, though:
C
This
is
rock
steady
numbers
in
the
low
HD
number,
with
16
k,
mineral
oxide
rights
in
this
configuration
the
data
rights
also
going
to
rocks
to
the
wall,
but
it's
eventually
be
deleted
after
we
commit
right.
So
so
that's
what
it
is
and
the
rocks
will
be
with
the
fourth
K
mineral
lock
this
one
and
the
jetta
scale,
with
a
fork
in
the
lock.
The
observation
here
is
that
box
to
be
with
the
16th
eight
min
a
log.
C
It
provides
sixty
percent,
more
performance
than
proxy
bid
for
caiman,
odd
configuration
and
but
the
data
scale
also
provides
similar
performance
right,
but
thats
observation
and
here
in
this
test,
even
though
we're
in
seven
hours,
what
we
notice
is
that
the
performance
is
keep
going
down,
yet
it
doesn't
patient
steady
state
even
with
the
rocks
vp16
game.
Analog
host
sorry.
C: The next one is the steady-state performance, with the same configuration. The way we emulate the steady state is as follows. We set the min_alloc size (the BlueStore block allocation size) to 4K. Then, after we create the image, the first thing we do is fill the storage by sequentially writing 1 MB blocks, and then we switch to 4K random writes. With each 1 MB write the onode grows, so a typical environment would only reach this state after many, many days of runs; by creating the image, filling it with 1 MB sequential writes and then going to 4K random writes, we shorten the test time needed to reach that state.
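A minimal sketch of that preconditioning sequence, purely as an illustration (plain POSIX I/O against a local test file; the path, sizes and random-number choices are assumptions, not the actual test harness, which ran against an RBD image through the OSD):

```cpp
// Precondition an image as described above: fill it sequentially with 1 MB
// writes so every onode grows to full size, then issue 4K random overwrites.
#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <random>
#include <vector>

int main() {
  const uint64_t image_size = 1ULL << 30;   // 1 GiB stand-in for the image
  const size_t   fill_block = 1 << 20;      // 1 MB sequential fill
  const size_t   rw_block   = 4 * 1024;     // 4K random overwrites

  int fd = open("/tmp/zetascale_precondition.img", O_RDWR | O_CREAT, 0644);
  if (fd < 0) return 1;

  std::vector<char> buf(fill_block, 'a');

  // Phase 1: sequential 1 MB fill.
  for (uint64_t off = 0; off < image_size; off += fill_block)
    pwrite(fd, buf.data(), fill_block, off);

  // Phase 2: 4K random overwrites, where steady-state behaviour
  // (compaction, resharding) shows up.
  std::mt19937_64 rng(42);
  std::uniform_int_distribution<uint64_t> pick(0, image_size / rw_block - 1);
  for (int i = 0; i < 100000; ++i)
    pwrite(fd, buf.data(), rw_block, pick(rng) * rw_block);

  close(fd);
}
```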
C: Here is the write performance at steady state. Steady state really affects the performance of both engines, mainly RocksDB, but ZetaScale too. Let me explain the lines one by one: the blue line is BlueStore with RocksDB and a 4K min_alloc; the purple one is RocksDB with a 16K min_alloc; and the orange bar is ZetaScale with a 4K min_alloc, which I'll explain.
C: Before steady state you get the higher numbers, but once you reach the steady state you can see that performance keeps going down and down; from the peak it is about five times less by the time it settles down here. The explanation is partly very obvious: at steady state, with the larger onode sizes, the data set size of RocksDB is many times higher than in the non-steady-state run, maybe ten times higher, and that causes a lot of compaction, which basically uses up the disk bandwidth and affects the client IOPS.
C: On the ZetaScale side, we looked at it, and again, once both have reached the steady state after about an hour of runs, both RocksDB with the 16K min_alloc and ZetaScale stabilize at almost the same performance, whereas in the non-steady state RocksDB with the 16K min_alloc provided around sixty percent more performance. Under steady state, apparently, both look similar.
C: One observation here is that under steady state we noticed a lot of resharding of the onode shards happening. We see up to five or six reshard events happening per transaction, per onode / 4K block write. I think under steady state that is not expected; it looks like a bug that we need to address. At steady state
C
We
expect
so
to
rights
right
so
on
the
own
own
header
and
the
shard,
because
it's
a
fork,
a
steady
state,
so
the
V
sharding
should
not
be
happening,
but
here
we
observe
that
it's
happening.
Maybe
that
could
be
the
one
reason
that
this
is
dropping
to
this
level
if
they
go
and
fix
that
maybe
the
Rock
City
with
16
came
in
a
lot
may
perform
better,
but
we
need
to
go
and
I
understand
why
this
lot
of
recharge
happening
under
steady
state,
so
that
is
also
affecting
very
badly
digital
scale.
C
Data
scale
Jesus
between
for
storing
data
jealous
skill
default
between
or
size
is
8k.
So
when
you
write
on
our
block,
so
you
will,
when
you
read
a
no
load,
you
write
the
header
and
on
our
more
shops
right.
So
that's
what
happens
and
under
steady
state.
We
expected
that
it
will
cost
to
rights,
the
one
header
and
once
all
the
technically
too
busy
right,
that's
what
we
expected.
C
But
what
we
observe
here
is
our
own
five
rights
happening,
so
one
short
header,
sorry
100,
node
header
and,
like
many
rayshad
things
cause
in
five,
we
have
a
note
right
and
that's
also
causing
the
poor
performance.
Is
it
a
scale?
So
if
we
go
and
fix
it,
if
you
make
it
like
a
steady
state
with
that
expectation
is
to
rights,
so
anticipate
a
bigger
scale,
providing
better
performance-
and
here
is
an
example.
This
is
the
party
expected
if
you
go
and
fix
that
we
shorted
problem.
C
So
the
orange
board
here
is
basically
the
same
run
with
a
smaller,
be
already
object
right,
instead
of
four
megabytes
used
find
cycle
cave.
So
what
this
view
is
that
overall,
the
owner,
flash
short
size
is
around
eight
times
less
than
the
forum
be
on
decoration,
so
that
basically
risk
since
the
overall
whoa,
node,
sighs
and
starts,
like
is
less
be
even
with
the
shoddy
the
rights
to
the
data
scale.
V3
is
basically
tool
so
maximum.
It
too
is
like
a
simulating
kind
of
steady
state
right
here.
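The "around eight times less" figure follows directly from the object size: the per-object extent/shard metadata scales roughly with the number of min_alloc-sized blocks in the object. A back-of-the-envelope check (simple arithmetic under that assumption, not measured data):

```cpp
// Rough check of the metadata-size ratio quoted above: with a 4K min_alloc,
// a 4 MB object carries ~8x more extent/shard metadata than a 512K object.
#include <iostream>

int main() {
  const unsigned min_alloc = 4 * 1024;          // 4K allocation unit
  const unsigned obj_large = 4 * 1024 * 1024;   // 4 MB RBD object
  const unsigned obj_small = 512 * 1024;        // 512K RBD object

  unsigned extents_large = obj_large / min_alloc;  // 1024 extent entries
  unsigned extents_small = obj_small / min_alloc;  // 128 extent entries

  std::cout << "metadata ratio ~ " << extents_large / extents_small << "x\n"; // ~8x
}
```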
C
So
this
is
what
expected
you
could
go
and
make
sure
under
steady
state.
We.
It
consists
two
on
two
rights:
oh
no
plus
one
strand
right
rights,
thats,
the
artifical
and
alternated
thing
is
making
fee.
If
you
can
use
smaller
object,
size
or
video
message
size
or
the
data
scale,
we
will
observe
this
match.
So
steady
state
performance
from
the
beginning
will
excel
this
one,
and
this
is
expected
because
data
skill
doesn't
have
any
compaction.
The
device
returned
to
the
final
location,
our
front
and
there's
no
compaction
effect.
C
So
you
will
see
a
steady
state
performance
from
the
beginning,
but
here
the
observation
is
Fox
to
be
while
the
data
size
is
going
and
the
compaction
kicks
in
and
that
continuously
drive
bandwidth
here
all
these
tests,
the
drive
some
other
person
digitalized.
So
so
that's
reason
we
see
if
this
drags
we'll
talk
in
the
comments.
C: Coming back to ZetaScale: here the number of writes per operation is around five, which means heavy resharding is going on, and that is not expected. That is basically causing about five writes per 4K operation in ZetaScale, and that consumes a lot of IOPS and bandwidth, which is why we see low performance here. If we can go and solve that problem and make it a true steady state, then ZetaScale should still be able to do about 2x more performance than what we observe now, and that's what this orange bar shows.
F: Yes, what is happening is that I tried it with the small image, and we saw the resharding happening and, as expected, it actually stabilizes after some time; then there is no reshard, because the shard count has actually reached steady state. But for the bigger images I tried, it keeps happening for a long time, because we have so many objects and we just keep on touching them, so it goes on for a longer period of time.
B: So it's not actually so much that the sharding is broken, but that as we fill in the objects the resharding happens.
G: I think there are a couple of things we can learn from this. One is that there does seem to be a problem with the sharding logic being triggered incorrectly, and clearly fixing that is going to help everybody. But the simulation here basically says that the smaller stripes with ZetaScale are definitely a win, because you make sure that the onode and the shard are in the same B-tree page; that will be a winner.
G: RocksDB doesn't care about that, but we have to be a little careful here, because if it is correct that there is a sharding bug and you fix it, you're going to see RocksDB's performance improve also, because the frequency of compaction is going to be reduced: right now you're moving extra data into it that's meaningless. I don't know what the delta in frequency is going to be, but you're going to see its performance move by whatever that number is, and that could be significant.

C: Yeah, okay, okay.
C
Now
so
my
one
thing
I'd
like
to
raise
this
point.
So
if
this
is
a
steady
state
number,
we
are
running
a
files
to
a
performance
as
well.
I
think
the
file
store
should
be
better
way
better
than
this
I
think
we
observed
that
on
5k
in
a
file
shortly
earlier,
but
we
will
go
and
do
then
the
same
setup
will
go
into
one
run:
the
file
store.
We
wanted
to
see
where
the
Bartlett's
on
the
file
store
here
in
the
same
chart
wheels
with
that.
B: I'm wondering if adding a write-ahead log would help in our situation, where we're updating keys that aren't necessarily contiguous: you have the log, where the log entries are omap keys, you have the bitmap merge operator, and you also have a lot of keys that are updated multiple times in short succession, if such a log were implemented properly.
G: Yeah, the strategy for ZetaScale was built around a supposition of somewhat larger keys when it was commissioned, and the phenomenon you're describing, the usage pattern here, clearly could be improved. A log is sort of one way to do that; there are actually a couple of other ways to go address it, some of which are actively under research right now. But there's no doubt that the minimum number of writes can be reduced for the Ceph use case if some more code is written.
C: The right-hand portion here is a 70/30 read/write mix. You can see the big difference: the blue bar is again RocksDB with the 4K min_alloc, the purple one is RocksDB with the 16K min_alloc, and the orange one is ZetaScale, which is a little smaller, but you can see that performance is very low across the board. The reason is that a lot of compaction is going on again here, and that's occupying the disk bandwidth.
G: This is the same phenomenon that we dealt with in Jewel and FileStore, where you've got the sort of old-style binging and purging: periodically we go flush the log by loading everything up into the buffer cache and then doing a syncfs on it, and when that happens you starve out the reads during that period of time. What you're dealing with is fundamentally the asymmetry between reads and writes in flash: if you saturate the device with writes, you start to see that.
G: Your reads suffer for the same reason, because there's something like a 50-to-1 performance difference between the two kinds of I/O, and if you batch them together in that binging-and-purging pattern it just really hurts your overall throughput. Now, I think there are two phenomena going on here. That's one; the other is that I don't think RocksDB has been optimized properly for these reads yet. In other words, and I think we've seen this before, if this were a hundred percent reads, then you have to ask yourself the question: why is there any difference between RocksDB and ZetaScale at all? There's no good reason why there should be a difference between the two, and yet that's not what you see. I think that's because the file-format configuration for RocksDB isn't really set up for the use case we're dealing with here, and I suspect when you twiddle that it's going to change this picture, perhaps dramatically.
C: That's what I expected too, and we can get one run, we will get one run, with a read-heavy configuration for this one as well. And to answer Mark's question: yes, there is some compaction going on even before the 70/30 workload started; that is due to the previous writes. That compaction is still in progress, and on top of it this additional load is making things worse.
A
Test
I
guess
that
makes
more
sense
to
me.
Maybe
these
just
seem
outside
some
of
the
tests.
I've
done
our
fifty
percent
read
fifty
percent
write
tests
and
okay
he's
just
these
just
seemed
surprisingly
bad,
but
I
could
I
can
believe
it.
If
there's
a
lot
of
compaction
happening,
especially
from
previous
rights,
correct.
D: Correct. One more point here is that for these runs it's about 1 TB of data per OSD; it's only at that scale that we see such low numbers. If you run with 100 GB or 200 GB, for that data size we see the numbers mentioned earlier, like 7K or 8K. Once you increase the stored data size to 1 TB per OSD, that's when you see this kind of behavior consistently.
A: Have you guys looked at all, when you increase the OSD size from smaller to larger, since that sounds like it really dramatically reduces the performance with RocksDB, have you looked at perf or anything else while you do that, to see what's going on?
G: Think about it this way: with RocksDB, when you're doing random writes, every time you do a compaction you're basically going to touch all of your metadata. So the cost of the compaction is a direct function of the size of the drive. And you see that in the 16K min_alloc versus 4K min_alloc comparison: you're basically cutting the metadata by a factor of four, and you're seeing a significant improvement in performance.
G
What
you
know
what
you're
doing
is
cutting
the
cost
of
compaction
or,
alternatively,
you
cut
the
side
of
the
drive
by
a
factor
of
four
basically
about
the
same
phenomenon.
You
do
have
to
factor
in
the
read/write.
So
that's
where
it's
you
know.
The
four
is
a
bit
of
a
stretch,
but
you
fundamentally
that's.
What's
driving
that
yeah.
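As a rough model of that argument (the numbers are illustrative assumptions, not benchmark results): if each compaction pass rewrites most of the metadata, and the amount of metadata scales with the number of allocation units on the device, then compaction cost grows with device size and shrinks with min_alloc size.

```cpp
// Back-of-the-envelope model of the compaction-cost argument above.
#include <cstdint>
#include <iostream>

int main() {
  const uint64_t device_bytes = 1ULL << 40;   // assume 1 TB of data per OSD
  const uint64_t entry_bytes  = 64;           // assumed metadata per extent

  for (uint64_t min_alloc : {4096ULL, 16384ULL}) {
    uint64_t extents  = device_bytes / min_alloc;   // allocation units
    uint64_t metadata = extents * entry_bytes;      // metadata touched per full pass
    std::cout << "min_alloc " << min_alloc << ": ~"
              << metadata / (1ULL << 30) << " GiB of metadata\n";
  }
  // 16K min_alloc carries ~4x less metadata than 4K, hence cheaper compaction.
}
```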
A
I
guess
I
was
just
kind
of
wondering,
is
it
may
be
the
not
perf,
but
just
maybe
be
interesting
to
look
at
the
compaction
status?
In
that
scenario,
then
I'm
I
guess
I
was
counting,
is
just
what?
What
is
it
that.
C: Okay, so the 16K min_alloc may not be the right configuration for ZetaScale, because when you write, any write that goes to ZetaScale goes and writes to the final location, and that is going to be costly here compared to RocksDB. With RocksDB the write goes to the sequential log and then most probably gets deleted immediately, while it is still in the memtable, so it never goes to the data location at all; we don't get that in ZetaScale.
C: Okay, so we can discuss this one over email if there are questions; I'll send out this chart. The next test is basically that we want to do the same test with multiple OSDs, for 20 OSDs and so on. That is still in progress; the test is in progress.
E: Hey Mark, I was wondering whether you can share with us the write performance data you mentioned, in case you have it.
A: Yep, sure, I can give you that. The only thing to watch out for is that recently there was a regression in master, and there may actually still be some kind of regression going on; I think Igor said he is still seeing a regression versus previous tests, so we might need to track that down. But I can certainly give you the results I've seen, with the parameters and everything.

E: Oh, thank you so much.
E: Right. Another thing I want to share with you guys is about the RDMA work. So far we have had a chance to set up all of the RDMA networking on our side, and we are going to have a try with RDMA: the AsyncMessenger with RDMA, to see whether it works.