From YouTube: ceph performance 2018-09-13
A: Allocating — see, supporting page alignment is stupid and total overkill, because usually when we align to pages we're doing it for a direct IO and we only need 4K. Well, whatever — so that merged. There's an RBD thing, Patrick's thing to change the logging stuff to avoid the hit; looks like it's mostly ready. I haven't actually followed that in detail, but I'm just trusting that Patrick and, I think, Casey and whoever else is looking at that know. The BlueStore shard-thread one is interesting.

A: I think it's still a pretty significant change, and the initial performance results that he posted were like: it's slightly better in some cases and slightly worse in other cases, or a fair bit better in some cases, so it's somewhat curious. After all the changes, it warrants repeated tests. Do you know, Mark — have you been following that one at all?
B: The thing I noticed is that there was some talk about tcmalloc being broken — like, older versions being broken — and another PR that Keith was working on there was actually just an update to it, recently. Yeah — is there any possibility that we're hitting something with an older version of tcmalloc in the QA suite?
A: Yeah, let's start with that, and then if there are significant wins from just optimizing the current op tracker stuff, we should do that. But yeah, I mean, all of this is in the full awareness that the Seastar OSD is probably not going to reuse any of it — but that's fine. That's always going to be a ways away, so it'll take time before we get there anyway. Yeah, here's the card link. Alright, I think that's it on these pull requests.
B: So Sage, last week we had a long discussion about bufferlist, and if you scroll down the pad you can see just a very high-level overview of some of this. But the gist of it is that I think we all agreed that we don't really have a good idea of how different parts of the code use bufferlist and what's really required from it in those places.
F: I just pasted the links. Well, at least — maybe it's also a matter of the instrumentation I had to introduce to allow inlining of the encode stuff, but it seems that is not particularly big, I guess. The most interesting thing is the third snippet, for the 4K random-writes profiling.
F: Not particularly big — but when it comes to other use cases for bufferlist in the OSD, it's also worth getting some look at that. It means that we fixed the biggest issues. One was the encode path in BlueStore, fixed one year ago; the second one was the messenger — it was the crypto interface, we had a bufferlist in there that was very, very costly, around 50% of the I/Os. But after that, there is no single huge user of bufferlist.
B: I'm also curious, in the write case, what happens if we weren't blocked by the kv-sync thread — like, as an example, if you put lots of OSDs on one NVMe drive, so you're kind of working around it by just sharding OSDs, so you've got essentially multiple kv-sync threads. I wonder if you'd start seeing any difference in the overall cycle counts of the OSD tp threads — if they'd look different.
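To make the workaround concrete, here is a toy model (not Ceph code; all names are hypothetical) of what "sharding OSDs gives you multiple kv-sync threads" means: each shard owns its own commit queue and its own sync thread, so commits no longer serialize behind a single thread.

```python
import threading
import queue

class ShardedKV:
    """Toy model: each shard has its own queue and its own 'kv-sync'
    thread, mimicking several OSDs sharing one NVMe device."""
    def __init__(self, num_shards):
        self.queues = [queue.Queue() for _ in range(num_shards)]
        self.committed = [[] for _ in range(num_shards)]
        self.threads = [threading.Thread(target=self._sync, args=(i,))
                        for i in range(num_shards)]
        for t in self.threads:
            t.start()

    def _sync(self, shard):
        # Stands in for the per-shard fdatasync/commit loop.
        while True:
            item = self.queues[shard].get()
            if item is None:          # shutdown sentinel
                break
            self.committed[shard].append(item)

    def submit(self, key, value):
        # Writes fan out across shards instead of one global queue.
        self.queues[hash(key) % len(self.queues)].put((key, value))

    def close(self):
        for q in self.queues:
            q.put(None)
        for t in self.threads:
            t.join()
```

With one shard this degenerates to the single kv-sync-thread case; with several, commits proceed in parallel per shard.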
C: I mean, I know I interrupt the flow so much with this, but there's that — I have specific action items on the tcmalloc front, and sort of questions to answer about it. We spent a lot of good effort a couple of years ago learning a lot — Mark, you did, I guess — about the differences, at least at the state they were in at the time.
C: This possibility that there are bugs and/or problems with older versions of tcmalloc is out there. I have downstream concerns about that, and a need to solve it on the Luminous-based baseline. For that, we changed one part in the past: we didn't link RGW with tcmalloc; now we do, in Luminous and above.
C: This could be a threat. We saw a problem with the linkage of tcmalloc in some environments — on Xenial specifically — which we didn't see elsewhere, but we think it could be there. We have some upstream information that there are workload-dependent problems that maybe aren't remedied by a newer version, but I actually don't have any concrete proof that that's the case. There are workloads that show extraordinarily high activation —
C: — activation of cheap operations in tcmalloc, specifically returning memory — you know, slabs — to the central free list, and it can show up as a meltdown of the other daemons that run on the node when we get into some cycle like that. And I don't know which allocators do and don't do it, or if they all do it.
F: Much, much simpler: at the moment the buffer library is not header-only. We have very important, very frequently hit parts of the path put in the .cc file, so we lose the possibility to inline the code — even advancing an iterator over a pointer, I guess. We would really want to move that to a header, I recall.
A: And then some judicious inlining, perhaps. And then for the Seastar one, we should sort of carefully find that balance between adding just enough complexity to do what we need to do, but no more — because, I mean, bufferlist is motivated by being able to do anything and everything, and there's just too much there that we don't really need most of the time.
A: I think perhaps the last thing we should also consider — it's actually not specifically related to bufferlist — is the container-allocators thing that Alan proposed forever ago, where we can selectively reserve space inline in the structure to allocate container items into. I think that might work better, because if we have that general tool in our toolbox, then we can apply it not just to this but in other places too.
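A minimal sketch of that inline-reservation idea (a toy model, not the proposed Ceph API; in C++ this is the shape of a small-vector or inline-storage allocator): the first few elements live in storage owned by the object itself, and a separate allocation happens only when the container outgrows it.

```python
class SmallVec:
    """Toy inline-storage container: the first `inline_cap` items live in
    storage 'inside' the object; only when it outgrows that do we spill
    to a separate backing list (standing in for a heap allocation)."""
    def __init__(self, inline_cap=4):
        self.inline_cap = inline_cap
        self._inline = [None] * inline_cap   # in-struct reserved space
        self._heap = None                    # lazily created spill storage
        self._len = 0

    def append(self, x):
        if self._heap is None and self._len < self.inline_cap:
            self._inline[self._len] = x
        else:
            if self._heap is None:           # one-time spill "allocation"
                self._heap = self._inline[:self._len]
            self._heap.append(x)
        self._len += 1

    def __len__(self):
        return self._len

    def __getitem__(self, i):
        return self._heap[i] if self._heap is not None else self._inline[i]

    def spilled(self):
        return self._heap is not None
```

The payoff is that the common small case does no allocation at all, which is exactly why it generalizes beyond bufferlist.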
A: I mean, as far as buffer stuff goes, just ripping out the old code is quick and easy — let's start there. And then getting the op tracker thing wrapped up; I mean, if you look at your open pull requests, you've got like twenty-some things that are all sort of half-finished, and it'd be nice to, like, figure out which ones, you know.
A: Basically, this path all the way down — it'd be worth checking subsets of it — and that's all great. The read path is a little bit more complicated, because things are coming out of the cache: you have BlueStore blobs, and those are getting pieced together into the final result that goes over the wire, so there we actually sort of make use of the dynamic, listy nature of it. But beyond that, in the new OSD it's mostly clean, except message encoding — and the main messages are fine; MOSDOp and so on are relatively careful.
A: The way the transactions encode is somewhat careful, I guess, but those are sort of unknown-size and unbounded, and so they need that sort of dynamic encoding stuff. On the metadata server it's a pretty different picture: there we use the bufferlist much more aggressively, to string together arbitrarily complicated and crazy things, and we're encoding big hunks of complicated tree structures of metadata and sending them over the wire to the other metadata server, journaling stuff, and that makes heavy use of appending and so forth.
A: Refresh my memory — I seem to recall that the denc encoder stuff does glue together with the old style, but I think it's awkward when you make the transition. I can't remember exactly what the impact is.
A: If you're doing traditional bufferlist-style encoding of a bunch of random stuff, and then you have some random leaf item that's denc-style — okay, you know, I think it works, okay. But yeah, that might help. But the denc stuff is a little bit weird, because it's a two-phase traversal: it traverses the structure once to figure out how big it's going to be, and then it traverses it again to actually encode. Yep.
A: That's at the very edge, where the encode wrapper, I guess at the very end, has to find out how big of a buffer to allocate — and so it calls in and does a whole depth-first traversal, then allocates a buffer, and then does a second depth-first traversal to encode. Maybe.
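The two-phase shape described above can be sketched as follows (a toy model of the pattern only, not Ceph's actual denc API; the length-prefix format here is made up for illustration): pass one walks the structure depth-first and only sums sizes, then one buffer is allocated, then pass two walks it again and writes into that buffer.

```python
def bound_encode(node):
    """Phase 1: depth-first traversal that only computes the encoded size."""
    if isinstance(node, bytes):
        return 4 + len(node)                 # 4-byte length prefix + payload
    return 4 + sum(bound_encode(c) for c in node)  # 4-byte count + children

def encode_into(node, buf, off):
    """Phase 2: depth-first traversal writing into a preallocated buffer."""
    buf[off:off + 4] = len(node).to_bytes(4, "little")
    off += 4
    if isinstance(node, bytes):
        buf[off:off + len(node)] = node
        return off + len(node)
    for c in node:
        off = encode_into(c, buf, off)
    return off

def encode(node):
    size = bound_encode(node)                # phase 1: size the result
    buf = bytearray(size)                    # exactly one allocation
    end = encode_into(node, buf, 0)          # phase 2: fill it
    assert end == size
    return bytes(buf)
```

The win is a single exact-size allocation and no reallocation during encode; the cost is walking the structure twice, which is the "little bit weird" part.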
A: If I'm understanding correctly — yeah, anyway. On the buffer side of things, what I would start with is the cleanup, and then looking at judicious inlining, I think. And then I'm still surprised by the small_vector result — that it wasn't faster. So I'd probably want to square that with what Mohammed found.
A: — and the problem was that _buffer can get reallocated, so we can't store a pointer into it; we have to store an offset and then dereference it each time. That's right — though we could have a flag, like a rule that says "I reallocated", and every time we reallocate or resize that buffer vector we set the flag, and then we only have to selectively re-dereference. Did you try that? Well, no.
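That flag idea can be sketched like this (a hypothetical toy, not Ceph code): the cursor stores an offset rather than a pointer, caches the resolved result, and only re-resolves when the backing buffer records that it has been reallocated or resized.

```python
class Backing:
    """Growable backing buffer that records when its storage 'moves'."""
    def __init__(self):
        self.data = bytearray()
        self.generation = 0          # bumped on every reallocation/resize

    def extend(self, more):
        self.data.extend(more)
        self.generation += 1         # pretend the storage may have moved

class Cursor:
    """Stores an offset, never a raw pointer; re-resolves only when the
    backing buffer's generation says the cached result may be stale."""
    def __init__(self, backing, offset):
        self.backing = backing
        self.offset = offset
        self._seen_gen = -1          # forces a resolve on first use
        self._cached = None

    def value(self):
        if self._seen_gen != self.backing.generation:   # selective deref
            self._cached = self.backing.data[self.offset]
            self._seen_gen = self.backing.generation
        return self._cached
```

The generation check is a cheap integer compare, so the common no-realloc case skips the dereference entirely, which is the point of the suggestion.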
C: There's good things. Basically, this has an on-topic front — well, that's my part, which I'd call relevant — and then an off-topic one, which is to point out that since about 2016 there's been a whole cottage industry of discussion about the key-value store problem, mainly in main memory, but of course it ramifies into other places.
C: We care — well, everyone who cares about this, primarily by being consumers of key-value interfaces, or having historically been so for various reasons, may get sucked into it. But I think the key piece I've taken away — you know, this aspect of database construction was not an area of implementation expertise for me — but I think, point one, we have to become experts.
C: We as a group have to, I think, figure out how to become so. And that requires us to think about it. We've got a lot of mileage out of, in particular, RocksDB and the work that Google and others already did there. But, point one, I think this is proven out in a lot of different literature, and I think it's been summarized nicely — maybe pretty brilliantly, actually —
C: — by folks in Wisconsin. There's the WiscKey paper, where they showed how you could do something really tiny to help a lot. But basically LSM is a dead end, I think — that's the argument they make for us, and others have claimed that also, and gave some of the reasons why. There are two reasons why: one is that it moves too much data around, and the other is the size of the in-memory —
C: — acceleration structures it requires: the relative size of the bloom filters, the memtable and all that, relative to the total amount of data being stored. We hit both of these, which lead to the problems we see at run time — issues with compaction, but also imposing a limit —
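The "moves too much data around" point is usually quantified as write amplification. As a rough back-of-the-envelope model (illustrative numbers, not measurements from this meeting): in a leveled LSM, each byte is eventually compacted into every level, and merging a byte into a level rewrites roughly fan-out bytes of existing data.

```python
def lsm_write_amplification(levels, fanout):
    """Rough leveled-LSM model: one initial WAL/L0 write, plus, for each
    of `levels` compaction levels, ~`fanout` bytes of existing data are
    rewritten per byte of new data merged into that level."""
    return 1 + levels * fanout

# e.g. six levels with fan-out 10: each user byte costs ~61 device bytes,
# which is the kind of overhead WiscKey-style designs attack.
wa = lsm_write_amplification(levels=6, fanout=10)
```

Halving the data that flows through compaction (for example by keeping values out of line) attacks exactly this multiplier.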
C: — on the scaling behavior of the index, much earlier than we want, much earlier than a competitor system would, in addition to the higher cost of reorganization and all that. The whole story of how we got here, and how RocksDB and LevelDB became such useful things at this point in the history of indexed data, is interesting. But it appears that —
C: — most people now think that the way forward, at least in terms of organization, is where you would have thought it would be all along, which is what it is in classical databases: B-trees and clustered-index technology. Of more interest, though, is prefix trees, because that appears to be a part of what —
C: — people are building now. And there's a nice paper on that that came from the Korea Institute of Technology, and a bunch of people have worked on it. They almost got to the point where people were working on an open-source project that could be used as a thing — in the same way that, you know, if there's common commercial off-the-shelf stuff, we could just use it, like we do with RocksDB — but that didn't quite develop, I think.
C: Because, I guess, one of the members of the team — the primary candidate for that from that group, if I've got that right, and I apologize if not — built a bunch of prototypes and then went to Couchbase and worked on it there; a team of three people worked on it.
C: But there is quite a lot of useful code, and it seems — under the Couchbase auspices — it's mostly intact, mostly completely described: it does one set of things and it sort of works in user space, but it hasn't been exhaustively optimized and dragged into a complete form —
C: — with a lot of dependency on specific primitives, like various lock-free paradigms, particular maps and all, whatever other stuff. So that's one angle, and it's fine. And then the other angle — from someone who'd been in this community before and then came back recently — the important point is that there are a lot of people working at a low level on flash optimization.
C: — fan out at the key level, and that's most of the trunk, more or less, so we've got to do it. The separate thing you've got to do — which WiscKey shows, and other things just do already — is handle the data out of line, in some efficient way that gets collected later.
C: If it's going to be large — you know, WiscKey shows that even if you just do a little sort of hack where you put pointers into the data, it helps. But then, you know, somebody wrote this Badger — that's the mascot of the University of Wisconsin, of course. So this Badger database, written in this Go-language thing, is basically written from scratch —
C: — and whether moving the data out improves on the situation. But it still suffers from the same problems that I think the acceleration structures have. The key piece that it's giving you — that's what you pay for there, and it's a lot — is to allow range-based searching. There's a whole lot of research that says: let's get rid of that — of course that makes things easy, but we can't get rid of that.
C: So all that part of the literature is dead to us. But prefix trees — we care about them, and there would be a lot of people in the same situation there. And they're also in a similar situation in that they want to move to architectures like, you know, like Seastar. Oh sorry — I think so.
C: I think that we won't be all alone in working on this — whether they join us in the same logic, or whether we work on something that gets shared with other people some other way. But I think we should get involved in doing that, because it's an obstacle in everybody's way.
A: I mean, yeah, clearly we need something better than RocksDB. I think my concern is really just high-level: what should we be targeting? Should we be targeting a Seastar implementation that's going to be part of SeaStore, or separated out from it, or some part of that new world that's, you know, years away — or should we target some sort of short-term hack?
C: Well, we've hit the wall with it. I think we need something that can run in Seastar from early on, but that factors everything out — there's all the index-structure ideas, and then a bunch of other common code, and it's common to run —
C: — you know, user-space runs. I'm more focused on Seastar for that, but on being able to not be locked into Seastar to do all the work — and ideally having something that can be run in the OSD. I looked around for things that are commercial off-the-shelf, a simple RocksDB replacement; I don't think there's anything. Even WiredTiger, which is a successor to Berkeley DB —
C
An
MIT
license
on
them
on
the
main
page
of
the
lights
and
the
license
page
I
say
in
my
own
defense,
but
I
didn't
notice
that
the
cut
that
the
primary
license
is
BS
is
just
sorry
GPL
though
we
can't
even
touch
it.
So
so
so
it's
pretty
much
irrelevant,
but
even
then
Keith
musta
canola
and
his
key
acolytes.
The
key
collaborators
from
Berkeley
from
sleepycat
are
involved
in
it,
but
they're
focusing
on
LSM,
for
goodness
sake,
yeah
so
I
so
useful,
is
that
that
could
be.
C: It has transactions and some other pieces, so yeah, it can be done — but expecting it to be a quick fix isn't realistic, you know. And making the interface — the key-value interface — is not the key piece. But yes, you can, and yeah, I think it might — and you were right to spot it; it's really obvious once you've learned about it.
C: A separate issue — so they separated it, and we covered it on another call, so I didn't focus on it here — but that is absolutely the case, at least I think it is, and I think you corroborated it. Even though there are other reasons to believe other things, there's empirical evidence, I think, to support it. Logically, you want to separate those things out completely.
C: But yes — the different time-domain data, essentially, and having it factored out. Work on that was going on, and what's most important is that it's totally orthogonal to everything else: we could do it immediately if we have time, and it would presumably make the RocksDB approach work better, so we could or should do so. There was work ongoing to do that at Intel, I understood — what happened?
A: I think the high-level strategic decision here is really about resource allocation and risk, because I see sort of three paths in front of us. One is to make incremental changes to RocksDB — for example, to separate out the time-based stuff — and get it going a little bit better. That's sort of path number one, and it's relatively low risk: all the RocksDB integration is already there, and the changes are likely to be accepted by the RocksDB upstream.
A: So it's low risk and, I think, a relatively small investment — and we're not actually doing it. That's sort of path one. Path two is the long term: two to three years, we're going to SeaStore, we're going to be writing something basically from scratch —
A: — in order to bridge the gap to get to SeaStore and Seastar. And that strikes me as also a pretty large investment, because of all the integration work that would need to be done with something like that DB in order to glue it into BlueStore; and it also, at the same time, strikes me as high risk, because we actually have no idea what its performance is going to be once we do all that work. We can sort of guess, but we really don't know what's going to happen.
C: Well, that's right, I guess — you're totally right, actually. Number one, yes, needs much more support work. The high-risk piece adheres to both two and three — I mean, we would all totally like an efficient SeaStore, of course. But the insight underlying the positioning paper — I'm trying to adhere to that logic here — is that in both directions, two and three, we can overemphasize.
C: One, we can overemphasize using the solution as a piece of COTS, and two, we can overemphasize the runtime fit — you know, the Seastar piece of it, or the current OSD piece of it. In either case, I tend to ask whether there's a way to do both, that hybridizes two and three — possibly putting a question mark next to whether we would ever deploy a replacement to RocksDB.
A: I'll have to do some thinking about that, because I've always assumed that the runtime is going to dominate the way that the code is written, and that it's not going to be possible to write something that can be adapted to run in Seastar and also in a threaded context. Maybe that's not true, but —
C: It can be true — I mean, it definitely could be true — but I'm interested in whether it has to be. Like you'd guess, it seems like LMDB and the like are sort of dominated by that, and by certain lock choices, you know — and in some ways by the different threading decisions that are made, I mean, once you're doing things like having specific thread roles.
C: This is the important exercise: what do we actually want to build on? One part is what's known about the current indexing strategies and technologies — everything that existed before these papers, plus everything that went into them; well, fifty years of knowledge on this — all the knowledge that we have to crystallize to get it out there. That's where the project and the runtime come in, and we can build on it.
C: I think I'd agree with you. And also the FTL piece — that's another piece; I've learned another thing about this one. It's not my area of expertise, but I think it's a specialized area of knowledge that we want to factor out, and we want to bring in the experts — the people there with our particular hardware profile.
A: That actually is independent of the fact that you're running it in a Seastar lockless context or not. That's right — and we can do all that design work now, and it may be that when we get to the point where we're ready to start working on the runtime, it's possible to glue it into BlueStore before the rest of the OSD is rewritten in Seastar, and get some benefit there. That's possible, sure.
C: Okay, and we'll find out when we get there, or when we get closer to that. But as other people join the project and help us out, we could invite them to use it in other workload profiles, in environments that we haven't thought of; given those primitives, they'll say: oh, I can work on this.
A: — thinking at all, because this never needs to replace RocksDB entirely: it could be used instead of RocksDB, but you would run BlueStore with RocksDB on hard disks, and you'd run BlueStore with the new thing elsewhere. Yes, essentially.
A: That's what I'm saying: we should remove hard disks from that conversation. And then the only additional thing we need to think about is running in a threaded context versus a Seastar context, and I leave that as a maybe — I'm still skeptical that we really want to go down both paths, but I'm willing to see that it might work in both cases. But okay, regardless, we need to do all the initial work around, like, what are the data structures.