From YouTube: Ceph Performance Meeting 2018-09-06
A
Although I do have to, I guess, do these pull requests, I'll be honest: I was kind of phoning it in. I updated the list, but I can't guarantee the accuracy of the new stuff in here; my eyes are glazing over. The folks from the core meeting were going a little over, so I think a couple of people will be late. Sage is apparently triple-booked; I don't think we're going to get him, but at least maybe we'll get Greg and a couple of other folks.
C
Yeah, there's Beast stuff. I mean, Casey's work on Beast is getting practiced, and the question is whether we treat it as another stable release thing or whether we put it upstream; it's in Luminous. Even so, we're going to try to turn it on as the default HTTP frontend in our next upstream release. First, there's energy devoted to the threading and request work from the [inaudible] team, based on asynchrony using the same primitives, and something with that from the RGW side.
C
You know, so RADOS and [inaudible]. It will be under a single processing loop, without thread handoffs and stuff. There's work on index reorganization, and a kind of OMAP offload to lessen all that pressure, and for that I've been sort of revisiting your session.
C
I've restarted that. I mean, I was working on WiredTiger, but then I discovered, for a couple of reasons... one: the license, in fact, still has a Berkeley DB-licensed piece. That kills it; that will kill us. I apologize for not knowing that earlier. I still think it is, for us, the most useful sort of canned thingamabob we can start with. I don't think we're going to be able to keep a lot of it as such, but I think the basic ideas are in it.
C
The
basic
stuff
is
right
as
a
starting
point
for
experimentation,
and
then
you
know-
and
it's
not
and
it's
small
enough
that
would
that
we
can
other
than
if
we
did
this,
because
we
could
do
their
other
meal
and
we
wouldn't
be.
We
wouldn't
be
just
spending
all
of
our
time,
managing
managing
a
giant
codebase
as
they
try
to
adapt
it
into
the
different
form
factors
we've
got
it
had
has
had.
C
It has had a bit of trouble getting an open source community started, but I think we've reached a point in history where, if somebody did that right, an open source development community would form a circle around it. But anyway, I think you can just sort of prove that LSM is a dead end for us.
C
Yes, you need to do that, but here's the problem. Yes, we have to do that; that's a whole analytical track. But number one, can we have this conversation next week? Because this week is the bufferlist work. If you want to do that, I'd be happy to have my one-pager ready for next week, and you can shoot holes in it, but I think that question needs to be considered.
A
All right, like I said, I kind of phoned in the pull requests this week. I've been working insane hours and I couldn't even concentrate on what I was doing, but we've got a couple of new ones in the last two weeks. There's some fairly complicated bufferlist PR; I don't even know if it's complicated, but I assumed it was. So there's that, if anyone... OK, it's actually kind of relevant for this meeting, possibly.
A
Okay, I have no idea what this does, so we can move on. There's another MDS one: max export sizes, and force... don't ignore this... let's see: "avoid allocations for most log entries". Oh, this is like an MDS thing, I think, but Casey and Kefu, you guys both reviewed this. Is there anything interesting down there?
F
Yeah, it traded the pre-allocated memory for a memcpy. I think it's a win this way.
A
That's it... oh, updated things here. Oh, my cache pinning thing. I was really hoping that would just pass and get merged, but apparently Sage got something to segfault. I think it was somewhere inside tcmalloc, so I need to figure out what's wrong, but most likely it's a bug in my code I just didn't realize. It never segfaulted when I tested it, but I guess... unfortunately, maybe this could be a deeper problem.
A
What else... the thread pool on-commits thing. Kefu's testing it, I guess; I don't know anything about it. That's it. There's maybe some other stuff here that possibly changed in the new list; I didn't quite get all the way through it, but it's all old stuff I didn't get through, so I doubt there's actually anything else new in here.
A
So that's it. Before we start on bufferlist, does anyone have anything that they want to mention or talk about?
B
We touched on some Seastar stuff and what they do for zero-copy networking. They have a packet class that's kind of similar to our bufferlist, but we also talked about experiments to replace std::list with small_vector, and it sounds like we had some mixed results there. I definitely think it's worth exploring, though.
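[Editor's note: the small_vector idea discussed here can be sketched as below. This is a minimal illustration of the small-size optimization (inline storage that spills to the heap), with hypothetical names; it is not Ceph's bufferlist nor boost::container::small_vector.]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal small-vector sketch: up to N elements live in inline storage,
// so short fragment lists need no heap allocation at all; longer lists
// spill into a std::vector.
template <typename T, std::size_t N>
class small_vector {
  T inline_buf_[N] = {};      // inline slots: no allocation, cache friendly
  std::vector<T> heap_;       // used only once we outgrow the inline slots
  std::size_t size_ = 0;
  bool spilled_ = false;
public:
  void push_back(const T& v) {
    if (!spilled_ && size_ < N) {
      inline_buf_[size_++] = v;
      return;
    }
    if (!spilled_) {          // first spill: migrate the inline elements
      heap_.assign(inline_buf_, inline_buf_ + size_);
      spilled_ = true;
    }
    heap_.push_back(v);
    ++size_;
  }
  T& operator[](std::size_t i) { return spilled_ ? heap_[i] : inline_buf_[i]; }
  std::size_t size() const { return size_; }
  bool on_heap() const { return spilled_; }
};

// helper: do n ints stay inline in a small_vector<int, 4>?
inline bool stays_inline(std::size_t n) {
  small_vector<int, 4> v;
  for (std::size_t i = 0; i < n; ++i) v.push_back(static_cast<int>(i));
  return !v.on_heap();
}
```

The hoped-for win is in the common case: a list with a handful of fragments never touches the allocator, while std::list pays one node allocation per fragment.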
A
Yeah, the surprising thing to me is I would have thought that getting rid of the double pointer dereference would have just helped in some aspect. I mean, you still have these char* fragments all over the place, so it doesn't deal with that, but I guess I figured it would do something. It seems like that's the result in both my tests and in Radek's tests, with very similar but slightly different PRs that do the same thing.
A
I was just going to say, my very uninformed guess is that we still have lots of fragmentation. Just switching out std::list for small_vector doesn't really solve it. It maybe gives a minor improvement in some aspects, but it doesn't really solve the bigger problem, which is that you've got these char* fragments all over the place. You still have allocations all over the place; you're not really doing any kind of stack allocation for much of anything.
G
There could also be another thing. Because of the iterator invalidation, we had to change a bit the way iterators over bufferlist work. It might now be slightly costlier, and taking into consideration the huge number of places where we are iterating over bufferlist, it might eat any benefit we take from avoiding unnecessary...
A
Radek, regarding iterator invalidation: the path I started going down, which was different from yours and which I gave up on before just adopting what you were doing... one of the things I noticed is that we hold an iterator open when we create the encoding at the beginning. If you go and look at it, like the nasty encode-start and encode-finish stuff, we hold an iterator open throughout that whole thing, maybe even two, I forget. But I wonder...
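[Editor's note: one way around holding a live iterator across a whole encode, sketched below, is to remember a byte offset instead and patch the length field at finish time. The names are hypothetical stand-ins for the encode-start/encode-finish pattern being discussed, not the actual bufferlist code.]

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Instead of keeping an iterator open across appends (which growth can
// invalidate), remember a stable byte offset and patch it at finish time.
std::size_t encode_start(std::vector<char>& buf) {
  std::size_t pos = buf.size();       // where the length field will live
  buf.insert(buf.end(), 4, '\0');     // 4 placeholder bytes for the length
  return pos;
}

void encode_finish(std::vector<char>& buf, std::size_t pos) {
  auto len = static_cast<std::uint32_t>(buf.size() - pos - 4);
  for (int i = 0; i < 4; ++i)         // patch little-endian length in place
    buf[pos + i] = static_cast<char>((len >> (8 * i)) & 0xff);
}
```

Appends between the two calls are free to reallocate the storage; the byte offset stays valid where an iterator would not.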
G
That's a problem that comes from our usage scenarios for bufferlist: basically, it's used everywhere for everything. This imposes that it must be very, very generic, able to handle many situations. So maybe starting by making some more specialized types, for instance for the low-level encoding/decoding stuff, is a way to go. Maybe we will be able to just squeeze some requirements out of the list.
A
Have you looked at all at the feasibility of where we could make use of something like that? Even just... one of the things that's come up in all of this is that, like Radek said, we use bufferlist for, like, everything. Are there places where we can stop, or where we can use more specialized things?
C
Well, that's a really good question; not a super-sophisticated question, though, I mean, a basic one. I can think of... it's one of four things. I mean, there's more, but bufferlist has interface tensions that are different and might roll out differently.
C
If we used it differently... one: it could become a vector of iovecs, but we're using std::list, and so we're allocating stuff; that's what you've talked about, that's one. Two: you know how a uio, classically, would be an array of iovecs.
C
Three, or three and four: it has interfaces that are based on growth, about squirting more data into buffers and [inaudible], but I think we only pay for those if you use them, so that's okay. Except that it invites code to do crappy stuff, and that queues things up for some uses.
C
So, because we saw this... in the messenger, and there were things going on in the MDS that Casey found, where we were appending stuff, kind of dribbling it in and expanding buffers, but we were trying to take buffers out and treat them as chunks and put them right in.
C
If
you
know,
if
you
don't
do
that,
you're
like
you're-
probably
better
off
less
the
time
but
but
it,
but
they
deceive,
but
you
do
change
work
for
to
achieve
that
or
not
achieve
that.
But
in
this
inner
world
you
never
a
given
bufferless
is
never
going
to
be
shared
or
should
ever
need
to
be
shared
between
contexts
that
need
to
be
meet
that
needed,
ipi
or
fence.
Is
that
ever
even
need
that
in
a
sea
star
world,
can
we
just
compile
that
crud
out.
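[Editor's note: the "compile that crud out" idea can be illustrated with a refcount policy chosen at compile time. This is a minimal sketch under the assumption of a shared-nothing runtime, with hypothetical names; it is not Ceph's actual buffer::raw.]

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Cross-thread policy: real atomics, real fences.
struct atomic_refcount {
  std::atomic<std::uint32_t> n{0};
  void get() { n.fetch_add(1, std::memory_order_relaxed); }
  bool put() { return n.fetch_sub(1, std::memory_order_acq_rel) == 1; }
};

// Shared-nothing policy: plain integer, no lock prefix, no fence emitted.
struct plain_refcount {
  std::uint32_t n = 0;
  void get() { ++n; }
  bool put() { return --n == 0; }
};

// A buffer parameterized on the policy: a Seastar-style build would
// instantiate plain_refcount, a threaded build atomic_refcount.
template <typename RefCount>
struct raw_buffer {
  RefCount ref;
  // data pointer, length, etc. would live here
};
```

In a shared-nothing build the plain policy compiles down to ordinary integer arithmetic, which is exactly the "compile the crud out" outcome being asked about.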
C
It feels to me... my intuition is telling me that two things could be killing us: invalidation, like you say, and fencing. I did work on that: when I was working in a context where I was experimenting with this, it was basically a tailored version of a cache.
C
I don't want to call it that, but whatever. We have structures that are used by multiple threads, where multiple threads would end up, for the duplicate request cache. It's TCP-association specific, but for v3 in NFS we have to keep track of every request that's been seen, to make sure we're not seeing a dupe. We currently use a B-tree for that, and the B-tree is extremely fast when one thread is doing it.
C
Even though I had a colleague who told me that wasn't the case, we could do millions of operations per second. But as soon as we hit it with multiple threads, it collapses. So I did sharding, where I was doing page-aligned stuff and, you know, cache-line-friendly things and so forth.
C
If,
basically,
if
any
few
threads
ever
touch
the
same
thing,
you
get
this
you
can
you
lose.
You
lose
almost
all
performance,
it's
so
bad!
It's
this
terrible!
Well,
but
exactly
right,
but
I
mean
I.
Guess
my
point
is
that
it's
so
that
it's
so
bad,
it's
much!
It's
it's
much
worse
than
I
thought
it's!
It's
fish,
its
vitiating
all
attempts!
C
It's
actually
defeating
all
the
attempts
to
make
you
know
all
the
all
the
standard
techniques,
a
point
you
know
quarter,
marketing
and
charting
and
things
that
you
think
are
gonna
help
if,
if
the
thread
stuff,
if
the
rough
edge,
this
is
the
point
of
C
star.
If
the
threads
are
sharing
the
data
at
all,
we're
screwed.
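[Editor's note: the collapse described here is classic false sharing, two threads bouncing one cache line between cores. Below is a minimal sketch of the usual mitigation, padding each per-thread slot out to its own 64-byte line; only the data layout is shown, the worker threads themselves are omitted, and the 64-byte line size is an assumption.]

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Each per-thread counter gets its own 64-byte cache line, so writers on
// different threads never invalidate each other's lines.
struct alignas(64) padded_counter {
  std::uint64_t n = 0;
};

// Distance in bytes between two adjacent slots: at least a full line.
std::size_t slot_stride() {
  std::vector<padded_counter> slots(2);
  return static_cast<std::size_t>(
      reinterpret_cast<const char*>(&slots[1]) -
      reinterpret_cast<const char*>(&slots[0]));
}
```

Without the alignas, adjacent 8-byte counters share a line and every increment on one thread invalidates that line for the others, which matches the cliff described for the B-tree under multiple threads.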
C
So, hypothetically: would it be the case that, in the Seastar-like model... I don't know if that's all we care about, but number one, I'd like to identify, to look for, two things. One: can we recover the vector reorganization? Because we want to converge on a usage model for which it wins.
C
You know, the stuff in the MDS, that's the way we pay for it if we use it. But if we stopped using it, if we stopped doing things that invalidate iterators, then vector should win, a lot, in a lot of different ways. And if we're on Seastar, what do we need to do?
C
But seriously, the question I have here is: is it really the case that our workflow pipeline must defeat our attempts to size a vector appropriately up front, or that the appropriate size is so large that it's intractable to do so? Hypothetically, doesn't it all come down to those two core questions?
A
Yeah, I wonder, for all this discussion, how much everything we've just talked about actually makes sense if we go back and just look at the code using bufferlist. Is it just that we're trying to solve a problem that we shouldn't solve, or that we don't need to solve?
C
As you say, it may degrade better, as long as we don't pay much for it up front; I haven't looked at that closely. I'd like to see us talk about how we could get the workflow to a point such that a vector would almost always be correct if correctly sized, so that it would behave like a static buffer.
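[Editor's note: the "correctly sized vector behaves like a static buffer" point can be shown with reserve(). If the workflow can predict the total size up front, the append phase allocates nothing and nothing gets invalidated. A minimal sketch, not bufferlist itself.]

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// If the total size is known up front, one reserve() makes every later
// push_back allocation-free and keeps pointers into the buffer stable.
bool appends_stay_stable(std::size_t total) {
  std::vector<char> buf;
  buf.reserve(total);               // single allocation, sized up front
  const char* base = buf.data();
  for (std::size_t i = 0; i < total; ++i)
    buf.push_back('x');             // never reallocates: capacity suffices
  return buf.data() == base;        // same storage the whole time
}
```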
A
I think that's the million-dollar question: what is the workload that we actually invoke? And maybe, before the testing question, the next one is: is that workload correct? Are we doing the right thing? And then after that, maybe, is the question of what we should be benchmarking. What should bufferlist actually do?
B
And then, once you're done mutating, you would turn it into something else that wouldn't have to worry about iterator invalidation and stuff.
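[Editor's note: the two-phase idea, mutate then convert, can be sketched as a builder that freezes into an immutable, shareable buffer. The names are hypothetical; this is an illustration of the pattern, not a proposed bufferlist API.]

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <utility>

// Phase 1: a mutable builder absorbs appends. Phase 2: freeze() hands the
// bytes off to an immutable object; after that there are no more mutations,
// and therefore no iterator invalidation to worry about.
class buffer_builder {
  std::string bytes_;
public:
  void append(const std::string& s) { bytes_ += s; }
  // Consumes the builder; the frozen result is read-only and shareable.
  std::shared_ptr<const std::string> freeze() && {
    return std::make_shared<const std::string>(std::move(bytes_));
  }
};
```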
G
We could make the list a bit more modular, I think. Let's take the fencing as an example: maybe we will be able to have pretty much the same building blocks for both Seastar and non-Seastar workloads, but for non-Seastar, maybe we could just put some fencing layer above the same low-level component used in both worlds.
J
Regarding the reference counter: I think BlueStore has a cache, and we do use the refcounting when we give the data out during reads. Maybe that would just be a different flavor of bufferlist. But then we would have to copy somehow into that thing first.
A
I was going to say, with the BlueStore caching, it's clever; the use of intrusive lists there makes the management of the LRU cache really simple, because you can just have stuff go away. But I kind of wonder a little bit if that model is really very friendly to everything else we've talked about. I imagine there's probably a lot of memory fragmentation; there's just a lot of stuff with pointers out to it scattered all over the place. So it's probably a totally separate discussion.
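[Editor's note: the intrusive-list LRU pattern referred to looks roughly like this. It is a hand-rolled minimal sketch for illustration (BlueStore actually builds on library intrusive containers); the point is that the link node lives inside the cached object, so eviction is O(1) and allocation-free.]

```cpp
#include <cassert>

// The link lives inside the cached object itself, so removing an entry
// (eviction, destruction) is pointer surgery with no allocator involved.
struct lru_link {
  lru_link* prev = nullptr;
  lru_link* next = nullptr;
  void unlink() {
    if (prev) prev->next = next;
    if (next) next->prev = prev;
    prev = next = nullptr;
  }
};

struct lru_list {
  lru_link head;  // circular sentinel: head.next is most recently used
  lru_list() { head.prev = head.next = &head; }
  void touch(lru_link& e) {          // (re)insert at the front
    if (e.next) e.unlink();
    e.next = head.next;
    e.prev = &head;
    head.next->prev = &e;
    head.next = &e;
  }
  lru_link* oldest() {               // eviction candidate, or null if empty
    return head.prev == &head ? nullptr : head.prev;
  }
};
```

The "stuff can just go away" property comes from unlink() needing only the object itself, no lookup in a separate container; the flip side, as noted, is that the cached objects end up scattered across the heap.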
E
I know that there are some complications when working with those iterators in general, because they don't really behave like STL iterators, as I recall; it's been a while since I've looked at them, but that's one of the kinds of issues I've run into manipulating things with bufferlists in general. The comments earlier about this thing having maybe too many hats to wear, I completely agree with. I've done things like change the backing...
E
...into different data structures, like a vector and so on and so forth, and you can indeed see a difference. But, as has been observed, without knowing what our actual expected situation is, it's really hard to make a recommendation on what to use. I've gone so far as to experiment with making that a template parameter, working around it with some specializations for dealing with splicing and things like that, and I have gotten that to work, except for the iterators.
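[Editor's note: the template-parameter experiment described here can be sketched like this: the same fragment logic instantiated over either backing container. The names are hypothetical, not the actual patch.]

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <string>
#include <utility>
#include <vector>

// The fragment container is a template template parameter, so identical
// logic can be instantiated over std::list or std::vector and compared.
template <template <typename...> class Backing>
struct basic_fragments {
  Backing<std::string> frags;
  void append(std::string s) { frags.push_back(std::move(s)); }
  std::size_t length() const {
    std::size_t n = 0;
    for (const auto& f : frags) n += f.size();
    return n;
  }
};

using list_fragments   = basic_fragments<std::list>;    // node per fragment
using vector_fragments = basic_fragments<std::vector>;  // contiguous table
```

As noted in the discussion, traversal like length() is the easy, interchangeable part; splice-style operations and the iterators are where the two backings stop being interchangeable.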
E
So all the unit tests pass, except those dealing with these iterator thingies, and I ended up getting onto other things, so I never had a chance to delve deeper into that. But there is complication directly related to these things. So if we could replace them with things from the standard library, for example, if that's actually feasible, I think it would be a real win, or at least bring it closer to being like an STL container.
A
So Radek got around the iterator invalidation issue, and I ended up adopting his code. At first I was actually going down the path of trying to, like, fix... well, change this and change some other things to just make it not a problem, but Radek's code gets around it in a different way, which actually makes it pass.
C
Is that crazy? I think we would actually want to do that, and then, if we had that, how could we attack it... find the interesting workflows where it would accelerate, and then go and sort of be aggressive in optimizing those.
C
Well, we'd need more time to discuss that, but I can think of two or three or four ways this could go. To me, it feels like it has a lot to do with the context. Seastar allows aggressive optimizations, and we jumped when we took the sort of railroad-track-switching approach to go to Seastar, rather than taking the previous...
C
You
know
approach
is
saying:
let's,
let's
pull
the
lock
free
OSD
and
we're
in,
if
necessary,
we'll
change
the
semantics
of
rate
us
to
make
that
actually
legal
to
do
that,
and
when
you
have
to
let
you
know
she
sounds
better.
It's
more
is
it's
because,
because
it
solves
the
cache
line
problem
and
it
allows
us
to
scale
to
do
it
about
two
more
two:
more
dies
and
then
workhorse
in
the
same
way
handles
pneumo
actively
I
mean
we
can't
handle
it.
We
couldn't
handle
it
with
court.
C
We could get lots of IOPS with a small number of cores at one time; we could definitely get a lot of wins with lockless approaches. So I'm saying, hypothetically: can you really get any big wins? Haven't we already got counter-evidence against getting a lot of big wins by changing how buffer works, or bufferlist works, or any of the primitives work, if we can't change the workflows, whether that's iteration or fencing or whatever?
A
I'm a big fan of letting people just work on whatever they're interested in and passionate about. Radek, if you want to clean up bufferlist and make it easier to read, and easier for us to even have this discussion, I would encourage you to go do it, because no one else is, and no one else has. Sure.
C
Did you essentially propose that? You hinted at it, conceptually, almost. I thought you were saying: could we eliminate fencing, like, if we can eliminate fencing, Seastar can give us that; is there any point even talking about that for non-Seastar? One question I have, though, is: would all workloads benefit from vector if we could eliminate the invalidation problem? Is that practical?
G
I'm
really
afraid
about
the
code
base.
We
have
so
money
these
very
different
users
of
the
buffer
list,
that
moving
everything
in
one
single
shot
looks
to
me
like
possible.
If
we
want
to
take,
if
you
want
to
change
the
work,
love
how
how
we
use
actually
the
buffer
list,
I
guess
we
should
start
from
step
by
step.
Method
may
be
based
on
profiling,
just
to
figure
out
the
most
prominent
things.
Well,
we
when
it
comes
to
our
BD
take
the
case.
The
case
the
case
is
pretty
straightforward.
G
You
are
running
running
our
around
right
or
around
treats
with
a
big
block
size
and
basically,
most
or
less.
It
should
resemble
what
people
do.
What
typical
I
hope
work
out
look
looks
like
I
have
absolutely
no
experience
with
performance
testing
of
I
know
we
have
cost
bench,
but
how
far
or
far
or
how
close
it
really
is
that
the
let's
say,
abstract
typical
average
workload
in
DC
I
have
no
idea.
G
...join that effort, and sure. Something different, but still Seastar-related: Josh asked me to take a look at the LTTng integration in Seastar. At the moment it's probably a background task, but I will have something at the beginning of [inaudible].
G
We will need to figure out with a profiler, at assembly level, why it's costlier in some scenarios, because at the moment we're making a lot of speculations. Maybe it's because of the mutation; maybe it's because of the extra cost related to iterator invalidation. Who knows; we need to be sure.
E
I might point out that Mohamad Gebai has done a fair amount of work benchmarking bufferlist. You might reach out to him.
C
I don't agree; I feel like we should try. We can't know who can do what among the other things, but the work that I saw you and other people do on the vector stuff should not die. We should keep pursuing that, and follow up around Radek's proposal, to better understand where it falls over and try to get to a point where we can just use it.
C
We've done this work over the years; I don't want to jump into efforts at bufferlist replacement, because over the years I'm not convinced that we can solve the general problem. It seems like [inaudible] already was. I do sense that this is... I already feel like I've got enough battles to fight as far as OMAP goes, and that's shaping up to be a big battle indeed. Nevertheless, I feel like [inaudible].
C
The challenge really is, as I see it, that there are two or three unknowns, and in all of them... we noticed some of them. The threading model is one of them, and another is aspects of the non-Seastar OSD workflow that are going to...
C
...that are going to make it a challenge to get big wins out of bufferlist alone. I don't know if we care, but there are such huge wins to be had there if we were able to... I don't know. I mean, we can't waste time on this beyond some extent, but we didn't want to waste time just talking about it either.
A
It's going to be a while! Well, that's it from me, Matt. Next week we will have a very good conversation, I think, about OMAP, because I imagine that we have a lot, and also I really want to get the PG log out of OMAP's way and out of your way. I think that'd be a really good idea, but we'll talk about it more later. Have a good week, guys; I'm sure we'll talk about this more. Thanks, Mike!