From YouTube: Ceph Performance Meeting 2018-09-20
A: Right, well, maybe this is it. Let's just get going, though. Let's see, there were a couple... well, there's actually a number of closed pull requests here. Nothing new, though. Sage removed some dead code for a mutex perf counter. We didn't use it anywhere, apparently, but it was just pulling everything down, so that's good.
B: First pull request: that mutex perf counter, I think, was something that Sam added like five years ago and used to debug some random thing. I doubt anybody's ever used it since, so — well thought out. There's some overhead there for sure with every mutex. The log heap-allocations thing from Patrick went around in circles about twenty times, but it looks like that finally merged, yay. It speeds up the debug logs some.
B: This OSD shard-thread on-commit change also had many iterations to get it right, but it is stable and in some cases shows a big improvement. I think it was as much as a hundred percent for high iops and lots of shards, less impressive for other shard counts. I think there's probably some figuring out to do around what the right number of shards is given this change, but basically it means that the completions, instead of being queued on a finisher thread, call back into the PG.
B: That's cool. All right, the containers with slab allocation — that was closed, not merged. I think that was just the old iteration of that, and you're working on a newer one. Same thing with this other old optimization; close that up. Let's see, the crypto stuff — Peter, what is the status there? Working?
B: All right, so you're working on the newer version of the containers with the embedded-allocator, inline-allocation stuff. And then there's this RBD cache. This is, if I remember correctly, using persistent memory as a persistent cache for RBD — so pretty cool, but high-end hardware. But pretty cool. I think the idea is that, hopefully, some of this infrastructure will be reused to also have an SSD-based or NVMe-based persistent cache on the client side. The initial implementation is just for persistent memory, but pretty cool.
B: Let's see, the cache-tiering stuff — Mark's still chasing down that issue. He's blocked on me, I think; I'm working on that. Then there's Radek's thing with perf counters.
B: Append in this case is at the level of the blob, and so I'm not sure exactly how that maps to the client workload. I think the client workload... well, it doesn't necessarily have to be... okay, I think it's just small writes, basically.
A
It
is
interesting
in
blue
store
that
I
have
I've
noticed
over
the
years
that
the
initial
create
performance
is
definitely
lower
than
the
override
case
and
like
within
the
RBD
workload,
and
it's
been
slower
than
that's
one
of
the
cases
where
boost
doors
historically
been
cast
lower
than
file
store.
I,
don't
know
if
it's
the
initial
creation
of
metadata,
that's
kind
of
hurting
us
there
or
what.
But
that
is
one
case,
but
seemingly
we
don't
do
quite
as
well
as
we
we
did
with
file
store,
yeah,
okay,.
B: We've been talking about this for a while — where it does ordered writeback, so it's sort of a hybrid mode where it does writeback on the client. In general, as long as you still have your cache, everything's great. If you lose your client cache, or you don't wait for it to come back, or you fence it or whatever, then because it was doing writeback to RADOS in an ordered way, you still have a crash-consistent image in RADOS — you just lost some of your recent writes. Okay, and so for lots of workloads...
B
This
is
like
fine.
If
you're
like
the
DM,
you
know
works
back
in
time,
10
seconds,
it
was
like
something
that
you
were
shelled
into
and
doing
whatever
like
it's
totally.
It
doesn't
matter
for
other
things:
it's
not
okay,
right!
If
it's
like
a
database
or
something
that
and
you've
need
that
consistency,
then
it
clearly.
It's
not
the
right
model.
I
mean
the
idea
is
that
the
hope
is
that
this
actually
covers
a
lot
of
use
cases.
C: Another thing is bufferlist. I'm trying to get rid of the append buffer, driven by the observation that, I think, most of the users of bufferlist still want to use the C-string-taking variant of append, and unfortunately they are appending small chunks of data — 4 bytes, 8 bytes, something like that — and for such appends we have really, really significant overhead, largely related to the housekeeping of the append buffer inside bufferlist.
C: The current logic is that when somebody puts a C string, you are appending it to the append buffer, which is private to the bufferlist. It might not be linked onto the buffer list inside, but we are appending there, and after that you're trying to slice out the space you appended into and put that, using the buffer-pointer mechanism, on the list.
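A minimal sketch of the append-buffer logic just described, assuming simplified stand-ins for Ceph's buffer types (the names and the 4 KiB staging size here are illustrative, not the real implementation):

```cpp
#include <cstddef>
#include <cstring>
#include <list>
#include <memory>

struct raw {                          // refcounted backing storage
  std::unique_ptr<char[]> data;
  std::size_t len;
  explicit raw(std::size_t l) : data(new char[l]), len(l) {}
};

struct ptr_t {                        // a view into a raw: offset + length
  std::shared_ptr<raw> r;
  std::size_t off, len;
};

class buffer_list {
  std::shared_ptr<raw> append_buffer; // private staging area
  std::size_t used = 0;
  std::list<ptr_t> ptrs;              // the "buffer pointer" list
public:
  void append(const char* s, std::size_t l) {
    if (!append_buffer || used + l > append_buffer->len) {
      append_buffer = std::make_shared<raw>(4096);  // fresh staging chunk
      used = 0;
    }
    std::memcpy(append_buffer->data.get() + used, s, l);
    // Slice out the space just appended into and put it on the list via
    // the pointer mechanism. For a 4- or 8-byte append, this bookkeeping
    // costs far more than the payload copy itself.
    ptrs.push_back(ptr_t{append_buffer, used, l});
    used += l;
  }
};
```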
C: The problem... I don't think so. Okay, I can see that the C-string-taking variant is used by all the encoding stuff, and I'm not sure whether it's possible to change that — I don't think so, but I can be wrong. Anyway, I decided to give up on that and try another idea: try to move the overhead from the C-string variant, which seems most important to me, to the buffer-pointer-taking variant. So instead of merging buffer pointers, we just add them directly.
C: That makes a problem in the case where you have mixed workloads — when somebody calls the C-string variant many times and it's interleaved with the buffer-pointer-taking variant. In that case, the very next call to the C-string variant after appending a buffer pointer will in turn result in an extra allocation and a waste of space. Of course, I guess we could try to remedy that — get rid of it by splitting the free space.
C: That's what I'm working on in the matter of the append buffer — killing it off. The pull requests for killing the unused zero-copy facilities are merged right now, and I started working on getting rid of the backward iteration over bufferlist.
C: I would love to be able to allocate all of that in one place: the space for the container hooks, the space for the buffer pointer, the space for the buffer raw — and finally place everything in one allocation. It can be complicated because of the necessity to synchronize the lifetimes, but it still might be possible.
C: I would love to extend this in the same direction — it's a very good solution, actually, raw_combined. So I would love to have combined buffer pointers, buffer pointers containing everything, when we do the append. But whether it will become... it changes... well, it's too early to say at the moment; it's just an idea.
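For reference, the single-allocation idea — in the spirit of the raw_combined mentioned here — looks roughly like the following sketch; the layout and names are illustrative, not Ceph's actual ones:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>
#include <new>

// Header and payload share one allocation; the data lives right after
// the header, so creating a buffer costs one malloc instead of two.
struct combined {
  std::atomic<unsigned> nref{1};
  std::size_t len{0};

  char* data() { return reinterpret_cast<char*>(this + 1); }

  static combined* create(std::size_t len) {
    void* mem = std::malloc(sizeof(combined) + len);  // one allocation
    if (!mem) throw std::bad_alloc();
    combined* c = new (mem) combined;
    c->len = len;
    return c;
  }
  static void put(combined* c) {     // drop a reference; free at zero
    if (c->nref.fetch_sub(1) == 1) {
      c->~combined();
      std::free(c);
    }
  }
};
```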
A: So maybe that's a good segue into what I want to do, which is: instead of something like placement new, you use object pools, you just grab whatever's available, and you have these things pre-allocated already — go that route. I think it'd be really interesting to explore. The thing I worry about is how much variability we have in the number of actual objects of the different types in practice.
A: You know, if you've got a certain maximum number of I/Os that you have outstanding, you have a reasonable upper bound on the number of objects, maybe occasionally going over — in which case you need to make new ones and then shrink back down. Or is it really just all over the place? I have no idea.
C: What is the deviation in size between instances of such objects, even though they are pretty similar? If the deviation from the average object size is pretty small, then we could bypass all the allocations and initializations and not waste memory.
A: You know, you have something like a buffer pointer, right? And what is it? It's just a collection of pointers and other junk, right? So my assumption is that the pointers are probably almost all the same size anyway, because they don't really do anything — they're just sitting there, you know, collecting other things.
A: This is my assumption, just having looked at it: it seems like these containers really don't matter, and probably the right answer is that we should refactor all this and make it all go away. But assuming that we don't refactor it and make it all go away, I wonder if an object pool would allow us to at least get rid of the creation and destruction of these things and just set the internals to be different, which is what I would hope.
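A bare-bones sketch of the object-pool idea being floated — pre-allocate a stock sized to the expected number of in-flight objects, recycle on release, and fall back to the heap on overflow. This is a hypothetical illustration, not existing Ceph code, and it ignores thread safety, which is exactly the complication raised later in this discussion:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

template <typename T>
class object_pool {
  std::vector<std::unique_ptr<T>> free_;  // idle, pre-built objects
  std::size_t high_water_;
public:
  explicit object_pool(std::size_t reserve) : high_water_(reserve) {
    for (std::size_t i = 0; i < reserve; ++i)
      free_.push_back(std::make_unique<T>());
  }
  std::unique_ptr<T> get() {
    if (free_.empty())
      return std::make_unique<T>();   // occasional overflow: hit the heap
    auto obj = std::move(free_.back());
    free_.pop_back();
    return obj;                       // caller resets internals; no ctor run
  }
  void put(std::unique_ptr<T> obj) {
    if (free_.size() < high_water_)
      free_.push_back(std::move(obj)); // recycle instead of destroying
    // else: let it destruct, shrinking back toward the upper bound
  }
};
```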
A: Buffer pointers — these things are probably scattered all over memory, unless tcmalloc does a good job of actually laying them out as a group of things, which I don't know how well it can do, or, with all the nasty memory patterns we have, whether it can really do a good job of allocating them in a specific area in memory. That's the thing that's hard about this. I don't know.
C: I saw that tcmalloc can actually be costly, but only in some very limited scenarios. The problem comes in a situation when your memory flows between threads: if you are making an allocation in, let's say, the messenger, then the corresponding deallocation is made by tp_osd_tp. In profiling during 4K random-write runs I can basically see two such places. One is our weighted priority queue — by default there are different threads wanting to operate on it: the messengers...
C: The items are enqueued by the SimpleMessenger or AsyncMessenger with the enqueue operation, then tp_osd_tp pops them out, and this requires some housekeeping work in the weighted priority queue. And because it has a number of layers — three or four — there's an allocation in each one.
C: Well, it's pretty scattered over memory and requires a lot of allocations. I've retrofitted it with the slab allocator and can see that, first of all, there is absolutely no malloc overhead, so I guess there are also no cases of blocking — the very rare case of blocking inside tcmalloc is gone, I hope, because there is simply no tcmalloc call after that.
C: With the embedded pool, of course, you will still go to the allocator when it overflows, but you will go maybe two or three times less frequently than without the slab allocator, and inside a slab the memory is contiguous. Of course, it's an open question whether the physical arrangement of your memory corresponds to the logical arrangement of your container; if that's not the case, you will still have plenty of jumping over memory, but sure, the tcmalloc...
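A compact sketch of the slab idea described here, assuming a simplified free-list design (slot counts and names are illustrative, and Ceph's actual slab/mempool machinery is more involved): carve node-sized slots out of one contiguous block so that most allocations and frees never reach malloc at all:

```cpp
#include <cstddef>
#include <memory>
#include <vector>

template <typename T, std::size_t SlotsPerSlab = 32>
class slab_allocator {
  union slot {
    alignas(T) unsigned char storage[sizeof(T)];
    slot* next;                        // free-list link while unused
  };
  std::vector<std::unique_ptr<slot[]>> slabs_;
  slot* free_ = nullptr;

  void grow() {                        // one malloc covers a whole slab
    slabs_.emplace_back(new slot[SlotsPerSlab]);
    for (std::size_t i = 0; i < SlotsPerSlab; ++i) {
      slot* s = &slabs_.back()[i];
      s->next = free_;
      free_ = s;
    }
  }
public:
  void* allocate() {                   // most calls never reach malloc
    if (!free_) grow();
    slot* s = free_;
    free_ = s->next;
    return s->storage;
  }
  void deallocate(void* p) {           // push the slot back; no free()
    slot* s = static_cast<slot*>(p);
    s->next = free_;
    free_ = s;
  }
};
```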
D: The interaction is the part that I'm trying to get more out of, too, because we have that same issue in RGW that we're not going to be able to avoid — in fact, we may have exacerbated it, or at least changed the behavior, with the ASIO work — where we end up deallocating in a different thread. I'm not sure if this is guaranteed to be...
D
This
seems
like
this
is
a
guaranteed
problematic
case,
442
Malaga
and,
furthermore,
it
it
would
even
see
it.
We
even
see
a
we
see
it
as
having
a
met,
a
large
impact,
even
in
terms
of
the
large
amount
of
memory
that
we're
flow
that's
flowing
through
a
buffer
lists.
So
it's
not
just
up
for
us
it's,
but
what
the
problem
we
see
is
not
just
about
all
allegations.
It's
actually
happening
at
even
with
larger
chunks
that
are
being
they're
better,
being
allocated
agreed.
This
way.
D
Possibly
a
different
tuning
with
CC
malloc
than
than
other
other
Damons.
Alternatively,
well,
your
in
your
description,
maybe
wonder
if
we
would
benefit
from
some
sort
of
an
alligator
flowing
through
these.
These
calls
and-
and
they
can
do
something
like
like
Sambas,
talaq
or
or
or
equivalent
to
to
certo,
to
reduce
all
the
marina
in
a
for
a
for
a
request.
One
shot
later
on
I.
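The talloc-like idea amounts to a per-request arena: every allocation made while servicing one request is tied to the request's context and released in a single shot when it completes. A minimal sketch of the concept (not an existing Ceph API; a real hierarchical allocator like talloc also supports nested child contexts):

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

class request_arena {
  std::vector<void*> chunks_;          // everything this request allocated
public:
  void* alloc(std::size_t n) {
    void* p = std::malloc(n);          // could itself be slab-backed
    chunks_.push_back(p);              // remember it for bulk release
    return p;
  }
  ~request_arena() {                   // request done: one-shot teardown
    for (void* p : chunks_)
      std::free(p);
  }
};
```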
D: Well, it's founded, but I think on the large data stream, the cost of allocating the large data stream dominates over, for example, the object structures, which are a smaller number per request and mostly live in our metadata cache, at least as far as I know.
D: I think so, because that's the only way we would know: the tcmalloc allocator could be forcing us into situations where, essentially, it wouldn't be enough otherwise.
D
Well,
I'm
not
sure
about
I,
don't
I,
don't
think
so,
but
I
mean
I
mean
we
see.
We
see
a
lot
of
siva,
we
don't
do
the
affair.
We've
seen
their
memory
kitchen.
We've
also
seen
dick
I
guess.
The
pretty
most
persistent
amusing
is
high
CPU
or
had
just
just
churning
through
the
central
free
list
returns
and
so
I
guess
that.
A: I think that anything we can do to help tcmalloc — or whatever allocator, right — is a good idea. I mean, even if tcmalloc or anything else is supposed to try to do a good job of creating regions of memory to lay out like-sized objects, anything we can do to help it is probably a good idea.
C
But
I'm
afraid
that
this
could
turn
into
having
a
memory
allocator
over
memory,
allocator
yeah.
We
will
need
to
take
a
lot
of
Pearl
about,
for
instance,
migration
data
between
between
frets,
if
you
want
to
use,
if
you
want
to
use
the
object
poorer
outside
of
your
sister
world,
I
would
expect
that
the
memo
that
we
will
need
to
handle
the
case
when
memory
flows
between
between
frets
well.
C: The problem is with appending small buffers to a bufferlist. If your buffer is very tiny — it has 16 bytes inside or something like that — then going with the memory-copy-saving procedures will, I'm afraid, make things much worse, because you will end up with very fragmented bufferlists, built with extra small metadata allocated separately — I mean buffer pointers. And then you are passing such a fragmented bufferlist to be freed by another thread.
A
So
in
Radek
in
in
that
what
you
were
just
describing,
is
it
predictable
what
we
do
with
data
coming
into
our
GW?
If
you've
got
some
data,
that's
coming
in,
is
it
possible
upfront
to
say
we
know
what
we're
generally
doing
with
this
we're
going
through
this
process
of
creating
bufferless
and
and
doing
something
with
it
and
eventually
you
know
this
is
kind
of
the
the
pipeline
that
we
end
up
with,
and
here's
where
we
go
boils.
A: ...at the moment. Well, but even if that whole process is predictable — this is what ends up happening, this is what we create, this is how the process goes for creating this temporary stuff that then gets passed off to other threads, and we do all these things — eventually we end up at a specific point where now we're committing data to something; we're writing it out to an OSD or whatever, I mean.
A
D
These
dudes,
like
foie
gras
I,
was
working
the
right
direction
here.
I
mean
that
office
culture
above,
but
but
in
our
core
network
we
had
one.
We
simplified
the
one
problem
for
this
particular
case.
You
know
that
the
Opera
Quest
Q
are
equivalent,
though
I
didn't
was,
but
but
it
did,
if
I
mean
what
we
were
actually
doing
there.
One
thing
we
went
through
hell
did
a
bunch
of
things
together,
so
we
were
allocating
1:1
structure,
essentially
for
everything
for
everything
that
was
gonna
get
allocated
down
the
path.
D
Definitely
the
way
we
got
rid
of
it
was.
We
actually
did
have
one
thread
to
pull
on
the
top
and
one
thread
on
the
bottom
and
and
we
had
a
lock
free
key
to
that
we
were
sending
the
whole
blocks
back
and
recycling
them.
Oh
the
original
thread
head.
Could
we
actually
recycle
the
same
thing
or
15
overflow?
It
would
free
it.
Nice.
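A compact sketch of that recycling scheme, assuming a bounded single-producer/single-consumer ring (names and memory-ordering choices here are illustrative, not taken from any particular codebase): the freeing thread pushes finished blocks back to the owner, and on overflow the block is simply freed:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class spsc_ring {                  // exactly one producer, one consumer
  std::array<T*, N> slots_{};
  std::atomic<std::size_t> head_{0}, tail_{0};
public:
  bool push(T* item) {             // called by the freeing thread
    std::size_t t = tail_.load(std::memory_order_relaxed);
    std::size_t next = (t + 1) % N;
    if (next == head_.load(std::memory_order_acquire))
      return false;                // full: caller frees the block instead
    slots_[t] = item;
    tail_.store(next, std::memory_order_release);
    return true;
  }
  T* pop() {                       // called by the owning thread
    std::size_t h = head_.load(std::memory_order_relaxed);
    if (h == tail_.load(std::memory_order_acquire))
      return nullptr;              // empty: allocate a fresh block
    T* item = slots_[h];
    head_.store((h + 1) % N, std::memory_order_release);
    return item;                   // recycled block, ready for reuse
  }
};
```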
C: A nice idea, but with restrictions. The most important one is the lifetime synchronization — a very strong coupling because of that. But it still might be worth doing in some cases; I don't know at the moment. I would love to gather more data, and I'm not able to do that with perf. I'm thinking about putting in some instrumentation, even just for the case of checking how fragmented our bufferlists really are.
C: We will need plenty of macros to be able to deal with this behavior, because — well, as an example, I took the slab allocator. At the beginning it was tightly tied to the mempool infrastructure — I mean the sharded, but still atomic, accounting: the number of bytes inside, the number of containers, etc. I replaced with it the struct that is the most frequently allocated guy in the weighted priority queue, and I got a huge regression.
A: One thing I've been wondering about a little bit: we eventually stop accepting writes if we have too much outstanding incoming data, right? Yeah, we do this already, right? So does any of this — naturally, or maybe not naturally, but can any of this — fit into the idea of using circular buffers for incoming data and just overwriting what's already there, once a previous request has eventually been committed, in order?
C: It might be reasonable, actually, because in the throttler we have a lot of critical sections, atomic counting, all that stuff. I was poking at the throttler — in some cases put and get are the more critical ones — and I was able to squeeze out the critical section from most of such methods entirely. There are still atomics synchronizing the stuff, but I saw around a 1 to 1.25 percent improvement or something like that in terms of iops.
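A rough sketch of that kind of throttler change, assuming a design where the common paths reserve and release budget with atomics and the mutex/condvar pair survives only for waiters (illustrative, not the actual patch):

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdint>
#include <mutex>

class throttle {
  const int64_t max_;
  std::atomic<int64_t> count_{0};
  std::mutex lock_;                    // touched only on the slow path
  std::condition_variable cv_;
public:
  explicit throttle(int64_t max) : max_(max) {}

  void get(int64_t c) {
    int64_t cur = count_.load();
    while (cur + c <= max_) {          // fast path: lock-free CAS reserve
      if (count_.compare_exchange_weak(cur, cur + c))
        return;
    }
    std::unique_lock<std::mutex> l(lock_);  // slow path: wait for space
    cv_.wait(l, [&] {
      int64_t v = count_.load();
      return v + c <= max_ && count_.compare_exchange_weak(v, v + c);
    });
  }

  void put(int64_t c) {
    count_.fetch_sub(c);
    std::lock_guard<std::mutex> g(lock_);  // a real version would skip
    cv_.notify_all();                      // this when nobody is waiting
  }
};
```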
C
It
be
compatible
with
the
idea
of
zero
copy
pipeline
in
userspace,
I
mean
DP
DK
taking
die
data,
let's
roll
with
rights.
Take
me
the
data
from
client
using
DP
decay
writing
using
SP
decay
idea
is
a
course
is
to
have
is
to
a
light
upbeat
coffees,
but
what
would
after
think,
a
circular
buffer
between
them?
Is
it
I.
A: I haven't thought about it very carefully, but it's something that kind of keeps coming up in the back of my mind a little bit: if I'm going to redesign all of this from the ground up, what is the thing that would allow us to bring in data in order, block when we have too much, avoid copying memory, and avoid doing crazy new allocations at random locations in memory? That's kind of what I...
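A loose sketch of the circular-buffer idea being mused about — a hypothetical design, not existing Ceph code: incoming data lands in a fixed ring, the writer gets back-pressure when the ring is full, and space is reclaimed in commit order as requests complete:

```cpp
#include <cstddef>
#include <vector>

class ingest_ring {
  std::vector<char> buf_;
  std::size_t head_ = 0;   // next byte to write (monotonic)
  std::size_t tail_ = 0;   // oldest uncommitted byte (monotonic)
public:
  explicit ingest_ring(std::size_t cap) : buf_(cap) {}

  // Returns false (caller stops accepting writes) when too much data
  // is outstanding — the natural back-pressure point.
  bool put(const char* data, std::size_t len) {
    if (buf_.size() - (head_ - tail_) < len)
      return false;
    for (std::size_t i = 0; i < len; ++i)    // wrap-around copy
      buf_[(head_ + i) % buf_.size()] = data[i];
    head_ += len;
    return true;
  }

  // Called as requests commit *in order*; frees their space for
  // overwriting by later arrivals, with no new allocations.
  void commit(std::size_t len) { tail_ += len; }
};
```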
D: And I think — I don't know, I guess I don't know — but I think we're heading largely into Seastar-like territory now, except without the allocator sort of piece. And I think the goal is — I think you need to have awareness of what all the memory in there is doing.
D: Okay, I think it was — I don't remember whose — slides that were the most powerful on some of these same issues, early on, 2010 or '11 at least, when I saw them. They point out that code paths are long, ops pass through code paths, they're subject to scheduling latency while other things are going on, and these designs eliminate that.
D: Well, I don't know. I mean, the throttler, yes, because the throttler is budgeting — it is operating, indirectly, on the TCP segment budget. But other things that we do are not; they're just driven by the division of the work we're actually performing, not by what is flow-controlled or not.
D: Okay — I just thought about profiling, as of, I don't know, Firefly or something. No, we're just talking about the data path, the OSD data path. Okay, and then it's still, I think, partly applicable now, but it isn't a complete picture, and I'm not fully sure how the two pieces tie up. I mean, we don't have to have... and, you know, there definitely needs to be end-to-end QoS.
D
That's
that
suffocation,
visible
and
then
and
TCP,
but
ETV
fits
into
it.
It
fits
into
it
by
giving
giving
s
DZ
PDB
DK
is
gonna.
Give
us,
but
first
when
on
the
one
up
in
the
connection,
is
not
flow,
controlled
and
and
and
we're
gonna
have
to,
and
the
clients
gonna
be
affected
by
whether
or
not
we
return
results
to
on
the
association
back
to
the
back
back
to
its
clients,
not
quite
sure
how
it
ties
in
to
the
other
memory
allocation
pattern.
Themes
of
this
conversation.
D: We're certainly interested in it, so we will just do something. I think one of our attempts is to look at what kind of price we're ready to pay for things we could do in terms of, you know, the conversion to the ASIO framework. I mean, right now I'm not aware of a terribly large number of overall evil things; RGW makes tons of use of bufferlist.
D
We
move,
we
move
large
segments
through
roughly,
as
we
make
terribly
terribly
abused,
TC
Melek
in
the
sense
that
we've
in
the
sense
that
we
create
we
allocate
things
into
freedom
in
other
threads
or
use
them
the
other
threads,
and
then
freedom
and
et
cetera
I
mean
mastery
I'm,
not
sure
you
won't
be
making
that
worse
in
the
in
the
asao
world.
So
we
may
know
that
that
part
of
the
problem,
maybe
I,
look
at
it
as
a
separate.
This
is
a
separate
one,
but
but
I
think
you
want
to
look
at
whether.
A: I definitely remember seeing — this is old, a year or two ago — very, very high CPU usage with large I/Os, like four-megabyte writes coming in for four-megabyte objects. RGW was consuming a ton of CPU cores handling that kind of load. I don't know what it was entirely; I didn't look too carefully. But it could consume a large amount of CPU.
D: And we're doing it pretty soon — that other work is happening right now; there's work going on these days. So I think I'll bring this up with the folks on this call and talk about how we might fit this sort of thing into the ASIO... okay, so maybe we could talk about this in terms of instrumentation we might do as part of the ASIO conversion — sort of standard profiling paths.
D: We have every reason to believe that there are reasons why we really want to be in an asynchronous model and garbage-collect the vast number of threads we have. Having said that, there will still be potential for misbehavior — and not only the potential for abusing bufferlists. What other things...
D: Yeah, but we've held all that work over until it's stable. But I wouldn't suggest — I'm not sure — I don't think that RGW's workload is a good substitute for the OSDs' or, more generally... no.
D: In read and write operations we send massive amounts of data through the process, to and from RADOS, and under a few uncommon or unclear conditions, on a couple of platforms, as I've said, we've seen high CPU utilization in tcmalloc.
A: Matt, the case where I've seen RGW become a bottleneck is when you're dealing with very, very fast OSDs, like on NVMe devices, and you're pushing a huge amount of large-object data — so yeah, 40GbE-plus kinds of scenarios, where you've got a single RGW instance on a very fast node talking to very fast OSDs. Yeah.
D: I can't lead on this, but I think this is critical. But I think it should be framed in terms of testing after we've converted — after we've actually linked in two things: one, the new ASIO dispatcher, and two, on top of that, the connected QoS framework we're trying to put in place, because we want to implement dmclock-based QoS at the RGW level that's linked back through the dmclock queues that Eric has added at the RADOS level.
D: You know, to make QoS work for GC and for regular client I/O. At that point we've possibly just changed the problem, but we've also introduced new places where CPU-bound processing — especially in the dmclock algorithms, for example — could appear.