From YouTube: 2017-MAY-11 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
A: All right, let's get this show on the road. We've got a couple of new pull requests this week. There's one for enabling BlueFS sync writes. It looks like that might actually be a little bit faster than leaving them disabled, at least for small IO, primarily on hard drives. We thought it might improve things a little bit for large writes too, but it might actually be a little bit slower there, it appears so far. Still, the improvement for small writes is bigger than the hit we take on large writes, so I think we're probably going to do it. We've got a couple of last-minute tests to run through there with RBD cache disabled, since in the tests that we had, we actually forgot to disable that when we were running through them earlier. But I suspect that we'll probably see the same thing, so I imagine that will probably merge this week.
A: There's another PR here for the manager to optimize the daemon state index cull. I haven't looked too closely at that; it looks like we were going through the entire daemon list and probably don't need to do that. There's not much response on it yet, but I think it probably makes sense. There were a number of things here that closed.
A: The cache-miss perf counter one, that was interesting. I think the cache miss rates were a lot higher than Sage suspected, and it was not entirely clear why that was. It looked like it did merge, so that's good. I wouldn't mind hearing more from the folks looking into that one and seeing where it goes from here.
A: It's random reads that are actually kind of sequential-ish. The end result of it is that, by default, RocksDB was not doing any kind of readahead for any of that; it was just doing tons and tons and tons of mostly sequential but overlapping reads. By enabling readahead specifically for compaction, we were able to turn those into much larger reads, and it reduced the time that we spent in the compaction thread doing reads from something like 70% of the time down to like 15 percent or something.

So it was a really huge win for a very small code change, and that's really good. I expect that people doing BlueStore testing will see a pretty big difference in many of the compaction traces that they do, and in CPU utilization, because of that. We had already reduced BlueStore CPU usage pretty dramatically maybe a month or two ago, but this kind of helps out even more, so that's very good.
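For reference: RocksDB exposes this kind of readahead as the compaction_readahead_size option on its Options struct. A minimal sketch of setting it via the RocksDB C++ API follows; the 2 MiB value and the database path are illustrative, not necessarily what the Ceph change uses.

    #include <rocksdb/db.h>
    #include <rocksdb/options.h>

    int main() {
      rocksdb::Options options;
      options.create_if_missing = true;
      // Coalesce the mostly-sequential, overlapping compaction reads into
      // fewer, larger I/Os by reading ahead during compaction.
      options.compaction_readahead_size = 2 * 1024 * 1024;  // 2 MiB, illustrative

      rocksdb::DB* db = nullptr;
      rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/readahead-demo", &db);
      delete db;
      return s.ok() ? 0 : 1;
    }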
A: There's another PR here by Piotr; basically there are a couple of places where apparently we weren't using the new encoding, and it fixes that, so that's good. And then there was also a BlueFS sync write option being added in, which was kind of a prerequisite to the one up above, the 1503-something, that merged.
A: There's an ongoing discussion regarding whether we're going to shut that off or not; there's something that is showing up in some wall-clock traces that we've been doing. Sage had an idea for changing the locking there, but it didn't actually result in any kind of improvement, so we're kind of back at the drawing board figuring out if there's something to do there. And Sage now has a new idea for removing certain locking entirely, I believe, so hopefully he'll be able to prototype that and test it out.
A: There's this "PG fast dispatch messages" one; I have to confess I haven't looked at that at all, so I don't know anything about it. And then there's a lot of ongoing discussion right now regarding this "improve CRC calculation for zero buffers" one. I think Piotr has been commenting a lot on that. It sounds like there's kind of a crossover point where the existing method might be better for small sizes, while the new method is better for large sizes, but I think the thought right now is to figure out where that crossover is and when we should use the new thing that's been worked on. That was just me skimming over the PR.

So let's see... no movement there. There's still this kind of idea out there about putting BlueFS in the middle of the shared device.
A: But if we're not filling up the device, putting data in the middle of the device then means that we're actually kind of towards the end of where data is laid out. So I'm still not convinced that this actually makes sense; in the testing that we did there was almost no difference, to be honest. So probably we're still at the point where other stuff matters more than this does. But who knows what we'll see; maybe it does make sense if we assume that people have pretty full disks.
A: What else... there's still this kind of big one about separating the KV sync thread into two parts. Igor was saying that that's a really big win. I still haven't gotten a chance to test it recently, so I need to do that, but it sounds like it's definitely something we need to figure out. It is, I believe, still deadlocking, so we need to understand why that is.
A: We've just got a huge amount of other stuff here that we need to go through - a lot of BlueStore stuff, and then I think some async messenger things for RDMA are still out there - but really a lot of it is BlueStore. So yeah, I think that's probably enough to go through for this week for pull requests.
A: I've added a couple of things in here, but I believe there were a couple of folks that wanted to talk about their own work this week, so I'm going to open the floor now. Does anyone have anything that they would like to present or talk about this week before I talk about some of these other things?
C: Yeah, Mark. So obviously, today I'm going to introduce the basic idea first; then our engineers will go into the issue, the performance data, and how we designed it, and of course how, in the end, we're going to report it to you guys. So today there are two engineers, two of our engineers from Alibaba; they are in the meeting, and the two of them are going to present the data with me.
C: The first one is Jessie and the second one is another of our engineers; they're going to go through their work on these issues. And I'll cover the background of why we still care so much about performance: in Alibaba, most of the time we run high-performance applications on top of Ceph, and that's the reason why we find a lot of places where, you know, we need to increase the performance.
C: But of course, sometimes we bend some rules, like the strong consistency rules; and, you know, as I said, we are looking for your review - otherwise, how can we improve? In the end, the ultimate goal is to try to bring Ceph to the next level for high-performance applications. That's pretty much the goal; I just want to say it again.
C: In terms of high performance, actually, today we bring in the database team in Alibaba to work with us; we are going to improve Ceph performance for our, you know, gigantic database applications in Alibaba. I'll just give the big ideas, and if everybody is interested, everybody can go to the pull request and look at it. Of course, this pull request right now was reviewed by Greg, and unfortunately Greg closed it, but we wanted to present it today and share what we think in the meantime.
C: We look forward to your insights. So basically, we send the write request to three replicas, and when we get two out of three replies back we consider the write complete. So basically it's going to address the slow-OSD issue. So I'm going to move on and let our engineer Jessie present the next slides here.
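For reference, a minimal conceptual sketch of the commit-to-majority idea described above (illustrative only, not the actual Ceph or Alibaba PR code): the client-visible commit is acknowledged once a majority of the replicas report the write as durable, rather than waiting for all of them.

    #include <cstddef>
    #include <iostream>

    // Tracks acknowledgements for one replicated write (e.g. 3x replication).
    struct ReplicatedWrite {
      std::size_t total_replicas;
      std::size_t acks = 0;

      explicit ReplicatedWrite(std::size_t n) : total_replicas(n) {}

      // Called each time a replica reports the write durable.
      // Returns true exactly once: when a majority (2 of 3) has acknowledged.
      bool on_replica_ack() {
        ++acks;
        return acks == total_replicas / 2 + 1;
      }
    };

    int main() {
      ReplicatedWrite w(3);
      std::cout << w.on_replica_ack() << "\n";  // 0: only one ack so far
      std::cout << w.on_replica_ack() << "\n";  // 1: majority reached, ack the client now
      std::cout << w.on_replica_ack() << "\n";  // 0: the slow third ack arrives later
      return 0;
    }

The tail-latency benefit comes from the last line: the slowest of the three OSDs no longer sits on the client's commit path.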
D: Okay. I'm Jessie, from the Alibaba database group. As Jim said, we have a project to separate out the storage layer from our relational database, so we care a lot about the average latency, and especially the four-nines tail latency. So we did the commit-to-majority feature to improve the latency for our business.

We have two stages. The first stage is that we did the commit-to-majority on the server side. As Jim said, in our business circumstances we may lose some data consistency, and we focus much more on latency, so we did the commit-to-majority and tested it. The second stage would be commit-to-majority on the client side, with a Raft-like protocol. So today, let me go to this slide.
C: So, whether the latency can also be improved or not: with this enabled, we have a stable latency instead of a spiking latency, and on average we brought the latency down by more than ten to fifteen percent. Correspondingly, the IOPS also improved by more than ten to fifteen percent.
C: Yeah, so here there's a very important message. As everybody knows, we actually apply Ceph to a high-performance OLTP database, and it cares a lot about the tail latency. So here we pay special attention to the four-nines tail latency, and you can see that from the data.
C: For the four-nines latency, we knocked it down by, you know, almost three or four times, which is huge. Practically, our applications - you know, the e-commerce database and other key databases - have a very stringent requirement on the tail latency, because the tail, the worst part, can break any of the transactions happening there in the e-commerce flow. So there's no way for us to tolerate any of that spike.
C: That's the reason why we try every means, every method, to knock down the tail latency, and this commit-to-majority is one of them. With it we were able to knock the tail latency down almost three times, and of course the IOPS altogether improved by almost thirty percent. Okay, I just wanted to add that - Jessie, you know, you can keep going.
D: Okay. Right now we did the commit-to-majority at the server side. In the future we may implement the commit-to-majority at the client side, just using the Raft protocol; that may be much more difficult, so for now we implemented it at the server side. Also, if any of you guys have any comments, I would like that; we may not have covered every corner case, so if there is any risk with this feature, your feedback would be welcome and we would appreciate it.
A: Yeah, I was actually thinking the exact same thing because of his comment on the pull request here. Maybe it'd be really good to have him here, but I don't think he's around - he's probably on a plane or something right now. I do want to say that this is really a good showcase of kind of what we lose right now, in terms of tail latency, with our current scheme.
A: The problem - probably just like you guys saw - is I'm pretty concerned about Greg's comment: that if a certain OSD can't be used as an authoritative source for data, then, with the way this PR is implemented, it is not safe to do that. Potentially it makes kind of our entire recovery scheme unsafe. And, you know, I'm not familiar enough with how our recovery works to be able to really give you any guidance on that, but certainly Greg and Josh, when they're back, I think will be.
C: Yeah, we appreciate your comments, because internally we did a lot of review of the design and we tried to make sure we cover all the corner cases. We know the recovery is a big issue, and we know about the strong consistency model as well, and we designed the code with that in mind. We think we need to add a lot of test cases and cover the corner cases, and in the meantime we have already run our testing experiments in our test cluster.
C: Here we only show, I guess, a small test cluster, but we have already run with something like two hundred or three hundred OSDs in our applications. So we just want to have a chance to talk with Greg and share what we have already found, what we think regarding these questions, and how we approach this problem and how we avoid what he mentioned in his comments. Because, for sure, data integrity can never be compromised - I remember Greg's concerns.
C: We cannot tolerate anything like that, for sure. But I guess we want to have a chance to talk with him directly and explain what we're thinking and how we designed it - we have always been aware, you know, not to compromise the data precision, the data consistency. And of course we are strongly looking for those comments as well. So whenever there is a chance, please pass these words along to Greg, and whether it's a phone call or an email, either would be fine for all of us. Absolutely.
A: Absolutely, yeah. I'll see if I can reach out and see if we can get something like that set up. And then my guess is that the process here will be making sure that it passes a review by Greg or Josh or Sage or somebody that is familiar enough with the recovery code to be able to really vet it. And then the other thing will be making sure that it passes the RADOS thrashing suite in our teuthology suite.
D: Yeah, we have improved the SLA and we did a lot of work for it, especially for disaster simulation - like OSD failures, hardware faults, and network faults. So we did a lot of implementation work for this disaster testing, for the hardware faults and so on, because in Alibaba we are using the database with Ceph as its storage, so we care very much about it.
D: The stability and the SLA - I mean, the service-level agreement - are very important for us. That's why we did a lot of work on testing our distributed storage; here we are using Ceph, so it was really important for us to design a test suite to test the commit-to-majority and the corner cases, and it is working very, very well.
C: We have two things, Mark, I just want to be clear, two things here. How about our data integrity? We cannot afford any loss - this is number one. Number two, we have our in-house teuthology, and we added more test cases than the current teuthology to test things regularly. And for performance, we do a lot of, you know, fault injection: network fault injection, disk fault injection, CPU fault injection, and also lots of combined fault injections.
C: So we basically want to collaborate with the Ceph community to improve Ceph going forward. Right now we focus on high performance, and we did a lot of work on it. Actually, we're also going to present another improvement we made towards the PG and the whole ObjectStore; we are going to submit the PR, and we can talk about it next week or whenever. Okay.
A: Yeah, fantastic. I think being able to show that you're passing the RADOS thrashing suite will be important - I think that's key for this to be able to be merged. But then also any other work that you've added, any additional suites for teuthology or any additional tests that you've added.
A
Imagine
that
folks
will
be
really
interested
in
that,
especially
as
it's
picking
up
anything
that
is
not
not
currently
being
taught
by
our
a
oh
sweet.
I
know
we
actually
just
recently
got
hit
with
a
blue
sword,
bug
that
was
being
detected
in
the
RVD
sweet
that
should
have
been
picked
up
in
the
radio
sweet.
So
you
know
having
more
people
adding
more
tests
there
I
think
would
be,
would
be
really
good.
Yeah.
D: The discussion today was very good, so if you have any comments or anything coming out of it, we would appreciate it. Our database group will still do some work for Ceph, and in the future it looks like we'll keep focusing on the tail latency, and also on performance, so we will keep focusing on these features of Ceph. It's very important to us and for Alibaba.
D: Besides that, we are just doing the project we already mentioned previously - separating the storage layer from the relational database - so we will focus not only on stability but also on performance. So I think in the future we'll keep working with the Ceph community, and that's very good for us. Thank you very much.
A: All right, let's see - does anyone else have anything that they would like to talk about this week?
A: You know, unfortunately, I don't remember exactly - I went through so many tests in the last couple of weeks, I don't remember exactly what we were seeing when we did that. I've got to get back to you on it. And, you know, for some reason I didn't think it had gotten merged yet, but it looked like it did, so yeah. Oh, okay.
A: Yeah, it looks like there's even some discussion about whether or not that should be kind of the default, at least until the kernel is, as you said, better at figuring out C-state transitions. So yeah, we'll see what happens, but more people are starting to notice this as well, so we'll find out. Excellent. Cool, yeah. All right, well, I guess I will jump in here then and talk a little bit more about what we've been doing the last week here.
A: During the course of some BlueStore testing we were doing, I started getting concerned about RBD client-side bottlenecks, and it turns out that I had inadvertently forgotten to disable RBD cache on this particular test I was running. So I was right to suspect client-side bottlenecks; unfortunately, it is actually a little worse than I remembered it being from last time. Look at this: the deal is that when you're on NVMe OSDs - these are just really fast OSDs in general.
A
Fio
using
the
lib
RBD
back-end
is
limited
to
about
fifteen
fourteen
or
fifteen
ki
ops
for
for
one
client
and
it's
using
about
300%
CPU
to
do
it
I.
Now
that
we've
got
our
handy
dandy
wall,
clock
profiler
that
we
can
look
at
this.
You
know
and
begin
under
the
covers
now
with,
as
it
turns
out,
we're
seeing
lots
and
lots
of
lot
contention,
so
so
Jason
I
think
suspected
this
already.
A: It was using a little bit more CPU to do that, and there are still some bottlenecks if you look at the trace, but a lot of the lock contention goes away and we start seeing some other things. There's actually still some lock contention, but it's in other places, and we start seeing other things, like placement calculations and the async messenger, starting to consume a lot more of that CPU time.
A: So I guess the good news is that there is sort of a workaround: just totally disabling RBD cache. The bad news is that you have to totally disable RBD cache to work around this. So yeah, I just want to let people know this is out there, and it's probably something that sooner or later we need to look at. But for now, you know, 36K IOPS from a single client with RBD cache disabled is probably good enough for the moment for most users.
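For reference, one way to apply that workaround for a single benchmark client is to override the option when the client connects, rather than editing ceph.conf. A minimal sketch using the librados/librbd C API follows; the pool and image names are placeholders, and error handling is omitted for brevity.

    #include <rados/librados.h>
    #include <rbd/librbd.h>

    int main() {
      rados_t cluster;
      rados_create(&cluster, "admin");               // connect as client.admin
      rados_conf_read_file(cluster, NULL);           // read the default ceph.conf
      rados_conf_set(cluster, "rbd_cache", "false"); // the workaround discussed above
      rados_connect(cluster);

      rados_ioctx_t io;
      rados_ioctx_create(cluster, "rbd", &io);       // "rbd" pool name is a placeholder

      rbd_image_t image;
      rbd_open(io, "bench-image", &image, NULL);     // placeholder image name
      /* ... issue benchmark I/O against the image here ... */
      rbd_close(image);

      rados_ioctx_destroy(io);
      rados_shutdown(cluster);
      return 0;
    }

With fio's rbd ioengine the same effect comes from setting rbd cache = false in the client section of the ceph.conf that the benchmark uses.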
G: I've been trying to sort of make a second RBD device made of, like, 64K objects, and then put an external journal onto it, to sort of try and see if that makes any difference. But I haven't really managed to get many meaningful results yet; there might be other sorts of things going on.
A: Maybe the counterpoint to that is that in a lot of situations folks will have lots of volumes, right - you know, not just one volume talking to lots of OSDs. In that situation maybe it doesn't matter as much, because you have journals spread across lots of OSDs, lots of journals. But if you've got only a couple of file systems being utilized heavily at any given time, it's still there - you're maybe only hitting a couple of PGs where the journal is. Yeah.
G
What
I
think,
because
I'm
sort
of
one
of
the
things
we
use
ISM
NFS,
like
a
big
you
know:
15
20,
terabyte
volume
by
n,
FS
to
ethics
and
so
I
think
you're
just
getting
like
thousands
of
thousands
of
sink
rights.
But
this
seems
to
be
you
hit
a
limit
where
one
cannot,
you
know,
more
thing
doesn't
seem
to
get
any
more
performance,
even
though
the
back
this
type
of
it
and
I
was
started.
A
No,
it's
it's,
it's
quite
possible,
I
would
say
I
mean
well,
you
could,
if
you
happened,
to
have
the
opportunity
to
run
the
test
and
you
could
try
using
the
wall,
clock
profiler
that
we've
got
now
and
some
scrolling
through
it
on
the
screen
here.
I
see
actually
but
yeah.
You
could
just
try
running
that
during
during
the
test
and
see
if
you
see
the
PG
LOC
showing
up
okay,
yeah
yeah.
A
That
would
be
really
interesting
to
see
because
I
we
we
kind
of
already
know
that
the
teaching
classes
is
contented
in
in
various
situations.
So
we
need
to.
We
need
to
figure
out
what
to
do
about
it.
Yeah
yeah
it'd
be
good
to
see
you
to
see.
If
that
actually
is
true,.
A
Cool
cool
yeah,
that's
good!
So,
let's
see
what
else
yeah
there's
this
sink
right,
stuff
in
blue,
so
we
sort
of
already
talked
about
that
might
actually
help
a
little
bit
we'll
see.
A
So
it
looks
really
interesting,
at
least
based
on
the
profile,
but
that
Igor
sent
me
so
I'm
hoping
I'll
get
a
chance
to
look
at
it
again.
I
think
last
time
I
wanted
to.
It
was
need
to
be
rebased
and
make
it
again
by
now.
I
don't
know,
but
but
that
that
looks
hopeful.
I
think
I
think
that
the
work
that's
been
done,
that
by
by
various
folks,
is
going
to
pay
off
so
yeah.
That's
that's
very
good.
A
If
we
can,
if
you
can
justify
that
on
on
loose
or
by
using
a
racial
coding
and
saving
a
lot
of
space,
especially
now
that
we
can,
you
can
target
very
sure,
coding
without
a
BD.
So
we'll
probably
be
looking
at
that
a
little
bit
closer
in
the
coming
weeks
here
and
then
also
on
maybe
run
through
some
more
profiles
and
see,
if
there's
anything
that
can
speed
up
so
devil,
hopefully
will
be
coming.
So
that's
all
I
have
woody.
F
A
A: The tests that we did a couple of months ago - I think we did 2+1, we did 4+2, and we may have done another one, I don't remember off the top of my head, it may have been 6+3 - but basically, across the board versus FileStore, we were seeing pretty dramatic performance improvements with BlueStore over FileStore.
A: In some cases we actually saw that something like an EC 4+2 pool was faster than 3x replication for large writes, and you'd kind of expect that: if it's working well, you would hope that would be the case, because you'd be writing a lot less data. The big question will be: can we get it to be close to as fast, or even sort of close, for small writes? I think that's still the harder problem.
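For reference (not said in the recording), the raw write amplification is what makes the large-write result plausible: with k data chunks and m coding chunks, an erasure-coded pool writes (k + m) / k times the client data, so

    (k + m) / k = (4 + 2) / 4 = 1.5x   for EC 4+2, versus 3x for triple replication

i.e. roughly half the bytes hit the disks for the same client write, which is why large sequential writes can come out ahead even before any other effects.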
A: But again, for smaller writes, at least at that point, I think we were still, you know, quite a bit slower than replication, and we hadn't really dug into it a whole lot. It was still faster than FileStore was, but that's something that might still be the case, and there might not be a whole lot about it that we can do for Luminous, so we'll just have to see - but yeah, for large writes it was fine.
A: That's... yeah, I remember talking about this with somebody, but now, you know, I don't remember exactly what was said, so I can't speak to it. I know it was discussed, and for some reason I thought something had come of it, but maybe not. Anyway, sorry - I'm striking out two for two today, I guess.
A
Alright,
so
yes,
that's
all
I've
got
any.
Anyone
have
any
less
minute
things
that
they
want
to
bring
up
or
talked
about
before
we
wrap
up
for
this
week.