From YouTube: Ceph Performance Meeting 2021-12-09
A
All right, well, we've got a big crowd, so that's good. Let's get this kicked off. All right, pull requests this week: we've got one new one that I saw. This is - oh shoot, I didn't paste the title on this one, just one second - this is "start a new MDS log segment in the MDS if a new incoming event possibly exceeds the expected segment size."

A
So apparently, I think this helps in heavy-load cases, and Patrick is looking at it. I think there needs to be - it looks like there's some testing that needs to be done on this, a unit test created for it. So.

A
Yeah, and that is it for that one. We did have two PRs that closed. Sage implemented some optimization for GIL handling in the manager, basically thread contention stuff, lock contention. I believe we looked at that some last week and he's trying to make - oh, was this that problem? No, this is a different one, sorry. Anyway, he's trying to get the manager to behave better in certain situations that he was running into, so we figured out some things last week when we were working on that.

A
Let's see, the second one that closed: set min_alloc_size to the optimal I/O size. That was basically for devices that have a larger allocation size; this PR just tries to recognize that and set the min_alloc_size to be optimal for that device.
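For what it's worth, the idea there is just to ask the block device what allocation granularity it prefers and align min_alloc_size to it. A minimal sketch of that kind of probe, assuming a Linux sysfs layout - the helper names and the 4 KiB default are illustrative assumptions, not the PR's actual code - might look like:

```cpp
// Minimal sketch (not the actual PR): read a block device's reported
// optimal I/O size from sysfs and round min_alloc_size up to it.
// The sysfs path, helper names, and 4 KiB default are assumptions.
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <string>

uint64_t read_optimal_io_size(const std::string& dev) {
  std::ifstream f("/sys/block/" + dev + "/queue/optimal_io_size");
  uint64_t v = 0;
  if (f >> v) return v;   // 0 means the device reports nothing useful
  return 0;
}

uint64_t choose_min_alloc_size(const std::string& dev,
                               uint64_t configured = 4096) {
  uint64_t opt = read_optimal_io_size(dev);
  // Only grow the allocation unit; never shrink below the configured value.
  return std::max(configured, opt);
}
```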
A
I had a number of updated PRs this week, though. This TTL cache implementation - I think this is a new version of an older PR that does the same thing; some tests were run. I think there was some more review that was done on that PR and now it needs further updates. The WIP primary balancer PR from Josh Salomon - Laura reviewed this. I think there were just a couple of tweaks that needed to be done for the docs. Laura, are you here?
C
Yeah, so Josh is working on that right now, and we actually have a balancer meeting to talk about what's going on with that. So mainly it's a refactoring PR - it's supposed to refactor the existing code and not really add anything, but we're fixing the code so it's easier to read and structured better, and then this is kind of the first step in a multi-step process of actually improving the performance of the balancer.
A
Cool, sounds good. All right, let's see what else.

A
We've got more work - oh sorry, this PR from Radek about introducing huge-page-based read buffers. Igor reviewed that. Igor, I think your comment was that you wanted more information about what this would be used for - is that right?

A
Okay, cool. Next we've got BlueFS fine-grained locking from Adam. I believe that was updated and new tests are being run on that - is that right, Adam?

E
Now it seems to be in the green, but still, first we will test incremental updates in BlueFS. So still one more round will be needed for that one. Okay, very good, though - exciting.
A
That's gotten a little bit more discussion and updates. Hey Igor, what's the status of that one? Is it passing tests, or...?

D
The state of the objects in this partly removed PG - so, well, prior to this fast removal the PG wasn't removed completely and some remainders still existed, and hence there were no complaints from the following fsck. And now with fast removal the PG is able to be removed completely before the next one, and the test case produces some additional errors which probably might be ignored, but, well, I actually realized that it looks like PG removal will proceed.

D
Well, hopefully I found the root cause for the failing test case, but additionally I need to try to revise some behavior of this stuff, so it's still work in progress.
A
Okay, well, good luck - sounds good. All right, next PR: get rid of the tcmalloc max total thread cache bytes environment variable and use a ceph.conf configuration value instead. Also, let's talk about this below, because I think we'll have a bigger discussion on it. And then finally another one from you, Igor, reducing the memory footprint of some of our data structures.

A
I wanted to just bring this one back into the limelight a little bit, because there's been a lot of discussion lately about memory usage of the OSD again, and I was wondering, Igor, do you think this would be something that we can reasonably resurrect?
D
Well, honestly, I don't know. I'd prefer to get this fast removal and the shared blob fsck merged first before proceeding with this one. So, okay, and I'm not sure.

A
Yeah, and as much as the data structures - exchanging them, if I remember, it was only like a 10 or 15 percent gain, which is, you know, sizable, but it's not the dramatic, crazy gain that you saw with the fsck fix. All right, so, you know, in fact, since we're talking about that, let's move on to that topic, unless anyone has any PRs that I missed.
A
All right then, yeah, let's talk about BlueStore fsck memory usage. Adam, you gave a review in the PR - do you want to summarize your concerns?

E
Yes, I mean, basically - I might be wrong here, so I'm very glad Igor is on the call today - my worry is that the new, improved, less memory-greedy fsck procedure comes at some cost. I mean, I think it's now possible for some specific, although unlikely, cases of shared blob errors to squeeze through the fsck procedure, and if that's the case I'm not liking it very much.
D
Yeah, well, unfortunately I'm a bit unprepared for this discussion, so I was planning to take a look later, and it was, well, a while ago when I submitted this PR. But as far as I remember it doesn't produce false...

E
Yeah, I noticed that in that PR, when it detects there is an error, it sometimes repairs more shared blobs than necessary, but it doesn't actually change anything - they are just recreated the same. That part is okay, I'm...
E
But the error is that objects that do reference those shared blobs do not match with the shared blob tracking info. So in total, like, we have two reference counters from two objects referencing one shared blob, but both of those counters are split into two shared blobs. So it sums up to zero, which is okay, but still that's not a correct state.

D
So you might get that state, well, theoretically, when there are errors in one object.
A
One thing I want to note in all this discussion, as we think about this at a high level, is that we have users that are running OSDs now in a container that has a five-gigabyte container limit, and it will kill the OSD if it exceeds that.

A
I don't like this, but apparently this is happening, so it is probably worth thinking about.

A
This is probably a fairly high-priority thing that we need, especially - even with Igor's PR, if we're in a situation where potentially you could be using more than that, it could be kind of a - we might start seeing OSDs in containers, like, die on fsck.
D
It reads every object, every shared blob, from the pool, and if the shared blobs are not spread equally between different pools you might still get the case where just one pool gets the majority of shared blobs and it still goes above the memory limits, and you can't - well, it's like you couldn't find a way to shard these shared blobs belonging to a single pool.
E
Well, so my proposal was much less sophisticated than that. I just thought about splitting - like, when I've got 10 million shared blobs, I was thinking, can I handle just, say, the blobs with IDs 0 to 3 million and make one run through all the objects, just ignoring any other shared blob, because they should be disjoint sets. So I'm okay to just process a subset there, and after I finish those three million shared blobs I will go on to another set.
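A rough sketch of the multi-pass idea described above - processing shared blobs in disjoint ID ranges so only one range's counters are resident at a time. This is a toy illustration with made-up names and types, not the real BlueStore fsck code:

```cpp
// Toy sketch: verify shared-blob reference counts one ID range at a time,
// so only that range's tracking state has to live in memory.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct ObjectRef { uint64_t shared_blob_id; uint64_t refs; };

// Hypothetical stub: load on-disk reference counts for blob IDs in [lo, hi).
std::unordered_map<uint64_t, uint64_t>
load_expected_range(uint64_t lo, uint64_t hi) { return {}; }

void check_range(const std::vector<ObjectRef>& all_objects,
                 uint64_t lo, uint64_t hi) {
  auto expected = load_expected_range(lo, hi);
  std::unordered_map<uint64_t, uint64_t> observed;
  for (const auto& o : all_objects) {
    // Blobs outside [lo, hi) are disjoint from this pass; a later pass gets them.
    if (o.shared_blob_id < lo || o.shared_blob_id >= hi) continue;
    observed[o.shared_blob_id] += o.refs;
  }
  for (const auto& [id, refs] : expected) {
    if (observed[id] != refs) {
      // mismatch: report it / queue a repair for this shared blob
    }
  }
}

void check_all(const std::vector<ObjectRef>& all_objects, uint64_t max_id) {
  const uint64_t batch = 3'000'000;  // e.g. 0-3M, 3M-6M, ... as in the example above
  for (uint64_t lo = 0; lo < max_id; lo += batch)
    check_range(all_objects, lo, lo + batch);
}
```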
A
Unfortunately, I think - I mean, Josh and Neha, you can jump in if you think differently - but given what we're seeing people do with containers, I think we need to figure out ways to avoid situations where we cause out-of-memory crashes, essentially.

A
So I guess, do we need to do more than this PR, then? I mean, is this something we need to get in for Quincy?

A
I'm somewhat nervous about the bug reports that we might start getting otherwise.
F
I guess the other side of it is how often fsck ends up being run - and that's like, well, the deep one - so is it likely to cause so much memory growth?

D
Right, so the deep fsck differs from the regular one in a way that it reads the object content and verifies object integrity.

D
Yeah, but the question is how to inspect that, since I believe the OSD is not completely started at this point. So it's, oh...
D
I just want to mention that, actually, it depends on the usage pattern, so it looks like maybe it might be some not-best usage from - so, I mean, having that many shared blobs means something.

A
You mentioned that you wanted to talk about the PR that you made a little while ago to set the tcmalloc thread cache size as a configurable Ceph option instead of using the environment variable. Sage showed some interest in this recently because he wants to make a low-memory configuration template.
E
Well, I can reiterate what I already said some time ago. Basically, I think there's just a clash of philosophies here. My thinking was that I would prefer to provide a toolkit to modify tcmalloc settings, and it should be just easy enough to apply it to any daemon - just provide which configuration variables should be tracked - and hence any daemon could easily inherit that logic. And the other approach is to just make it a generic part, and that's it, that's the end of the topic.
A
So Adam, are you familiar with that? It's the code here that lets us kind of abstract away from the memory allocator stuff.

A
I don't know if the other daemons that use tcmalloc use perfglue or not, I guess, but I guess the mon does and the OSD does. Casey, are you here - does RGW... well, actually, are you guys using tcmalloc now? - Hey, I think so, pretty sure we are. - You are, okay. Do you know if you're using the perfglue code, so you can do like the heap profiles and heap stats and that kind of stuff?
A
So I mean, it seems like this would be a good place for it, maybe in the implementation for tcmalloc - I thought I remembered this being kind of an abstraction layer.

A
Maybe, maybe, Adam, we could add something into the heap profiler thing here to just set the thread cache, and then I know that for libc malloc they have something similar where you can control that, and then we could abstract it if we ever go back to using libc malloc again - which I don't know that we will - but then we'd have the option to.
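For reference, the knob being discussed is TCMalloc's total thread-cache budget, normally driven by the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES environment variable. A minimal sketch of setting it at runtime through the gperftools MallocExtension interface - the kind of call a ceph.conf-driven option or the perfglue layer could wrap; the option plumbing shown here is purely illustrative - could look like:

```cpp
// Minimal sketch, assuming gperftools/tcmalloc is the active allocator:
// apply a configured thread-cache budget at runtime instead of relying on
// the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES environment variable.
// Where "configured_bytes" comes from is an assumption, not a real Ceph option.
#include <gperftools/malloc_extension.h>
#include <cstddef>
#include <iostream>

bool set_tcmalloc_thread_cache_bytes(size_t configured_bytes) {
  // Generic numeric-property interface exposed by gperftools.
  return MallocExtension::instance()->SetNumericProperty(
      "tcmalloc.max_total_thread_cache_bytes", configured_bytes);
}

int main() {
  size_t configured_bytes = 64 * 1024 * 1024;  // e.g. 64 MiB for a low-memory profile
  if (!set_tcmalloc_thread_cache_bytes(configured_bytes))
    std::cerr << "tcmalloc property not available (different allocator?)\n";
}
```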
E
Well, I rated that as a specific decision for each daemon, because you say include something and then apply some action - then I'm hearing there will be some library to manipulate tcmalloc or other allocator settings that you have to use. And I thought maybe option B could be - I'm not, like, proposing it actively - to make it a part of the global configuration and just enforce it from some global configuration command to each daemon.
A
Wouldn't you just have a ceph.conf setting per daemon that would then call this thing? I guess that's what I was thinking - call whatever, you know, we name it in this.

A
So what you have right now in the ceph-osd code right here - I'll post it in the chat window.
I
Is there any business in allowing any part of Ceph to not use tcmalloc? I'm pretty sure we are doing this for clients and unit tests, and at least in the case of a few unit tests it brings some unwanted consequences. You know, I faced a microbenchmark - a microbenchmark provided in a discussion - where the overhead of the memory allocator was really important.
F
Wasn't the concern around that possibly around clients allocating memory from a different allocator and then trying to free it with tcmalloc, or something like that? Some sort of mix of allocator uses across the API boundary.
I
This was, apparently - oh god, okay, sorry - it would apply for third-party applications linking with librados.

I
But what about the clients that are under our control?
A
Yeah, I thought that Radek meant that when we compile, like, our own tests, we do it with tcmalloc - not, like, compiling, I guess, like a user would, librados with it. Maybe I have a misunderstanding.

A
Maybe, Adam, as I look at the code here in your PR, it looks like the counters are being set in the priority cache currently, but maybe we would...
E
I looked at that PR and there is something missing - there is no actual listening for changes to that variable, and that's not something I remember. I remember distinctly that I tested whether it reacts on the fly to changes in the tcmalloc size, so I will find a better PR, because I'm thinking that's not the last one I made regarding that.
A
Does anyone have any opinions? Putting this in global init was mentioned. I wonder, though, if we should be - I still wonder if we should be doing most of it in perfglue. Does anyone have any strong opinions one way or the other?

A
We interpret - well, okay, I see what you mean.

E
I guess perfglue is already initialized through global init, isn't it? I don't remember.
A
All right, next: a related conversation, kind of. So earlier this week, to the people at Red Hat, I gave a presentation on Crimson and the quarter-three status, and as part of that presentation there was a brief discussion about memory fragmentation with different allocators. Earlier this week Gabi had asked me about why that's the case, and so I thought it would be good - well, actually Gabi recommended that maybe we talk about it in the meeting here today.

A
So I linked a couple of things in the etherpad here: there's a really good blog post about the whole topic that kind of gives an overview of why libc malloc tends to fragment memory, and I just posted the same thing in the chat window. Gabi, did you get a chance to look at some of that?
A
Yeah, it sounds to me, just based on kind of the description of how it works, that you get to decide: do you want to have lock contention or do you want to have memory fragmentation? And you can do one or the other, but you don't - you know, you don't get to eliminate both at the same time. That's what my feeling was. So yeah, I also, in the same section on the etherpad, linked a couple of different documents.

A
One is just the Crimson presentation I gave earlier this week, and you can see kind of the higher memory usage with libc malloc there. But then there's also another, older test that we did from 2018, when RHEL decided to remove tcmalloc from the standard packages that they provide. We did some testing at that point as well with libc malloc and also saw extremely high memory usage. So the gist of it is basically that it seems like it...
A
So one thing I did want to mention in this is that even tcmalloc is not really 100% immune to what we're doing when we load a lot of onodes into memory for the onode cache.

A
Typically, what I see is that the OSD memory autotuning will kick into high gear and start shrinking the aggregate cache size to deal with fragmentation. So before, when we have a lot of data in the cache, or we have a lot of RocksDB stuff in the cache, we tend to be able to have fairly close to, like, a three-gigabyte overall cache size. But once we start really loading lots and lots of onodes into memory, then typically that shrinks down to between one and two gigabytes, just to try and deal with it and keep the overall target size - you know, if it is like the default four gigabytes - then keep the overall mapped memory size around that target.
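A purely conceptual sketch of the feedback loop described above - shrink the aggregate cache budget when mapped memory runs over the target, let it grow back when there is headroom. The names and constants are assumptions for illustration, not Ceph's actual priority-cache tuner:

```cpp
// Conceptual sketch only: tune an aggregate cache budget so the process's
// mapped memory tracks a target, as described above for osd_memory_target.
#include <algorithm>
#include <cstdint>

struct AutotuneState {
  uint64_t memory_target = 4ull << 30;   // e.g. the default 4 GiB target
  uint64_t cache_budget  = 3ull << 30;   // aggregate cache size to tune
  uint64_t cache_min     = 128ull << 20; // never shrink below this floor
};

// Called periodically with the currently mapped memory of the process.
void autotune_step(AutotuneState& s, uint64_t mapped_bytes) {
  if (mapped_bytes > s.memory_target) {
    // Over target (e.g. lots of onodes / fragmentation): give back cache.
    uint64_t over = mapped_bytes - s.memory_target;
    s.cache_budget = std::max(s.cache_min,
                              s.cache_budget - std::min(s.cache_budget, over));
  } else {
    // Under target: allow the caches to grow back, but only gradually.
    s.cache_budget += (s.memory_target - mapped_bytes) / 8;
  }
}
```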
A
So this is still - I mean, improving our memory allocation patterns and kind of reducing the amount of transient objects that we have would probably help a lot. But that's kind of my takeaway from this. Gabi, any other thoughts or comments? Or anyone?
G
I was just wondering - it seems that tcmalloc is much more concerned about memory size. Does it impact performance? Is it slower or faster than the glibc malloc?
F
Yeah, it might have been even earlier than that, but yeah, it was a significant reduction in memory usage even at the time it was introduced, and we still see that today, like you saw in your tests. And the performance was, I think, also pretty significantly better, although I think the effect there on performance was less drastic than the memory usage overall.
A
Yeah, and that's, I think, actually the bigger problem, right? It's that you just don't know when the kernel - if the kernel is not facing memory pressure, it may just not ever reclaim stuff.
F
Yeah, otherwise I'm not sure anywhere else, but it's something we maybe don't have great data on today. But I think that was one of the things we were thinking about adding to the perf channel, so we could start gathering some data there.
A
Well, it - but it floats, it's not going to be a constant. You know, it depends on whether or not - yeah, just anecdotally I've seen it look really different depending on the operating system. On Ubuntu it was higher than RHEL. It looked like on Ubuntu they've got kernel tunables such that maybe it's reclaiming less aggressively; I think on RHEL maybe it was more aggressive in the reclaim. I think it was typically between, like, 10 and 20 percent.

A
It caused all kinds of really, really wild swings in behavior, because, you know, you'd be releasing memory and the RSS value wouldn't change for maybe a minute, and you'd be thinking, okay, I need to keep releasing memory - you know, freeing memory - to make my RSS value go down, and it doesn't work that way.
J
I just wanted to chime in. I spent a lot of time with memory allocators in a past life, and I don't know what the libc allocator is based on today, but at least, I want to say, five years ago it really was worse in every way than tcmalloc or jemalloc. Like, I spent a lot of time with jemalloc, less so with tcmalloc, but the biggest difference from a fragmentation perspective is that the libc malloc was actually unbounded from a fragmentation perspective.

J
If you had the right pattern of allocation and deallocation, you could actually take up all of your memory with a very small amount of allocations, because of the way that it did heap management. Tcmalloc and jemalloc essentially solve that problem by having size classes - you don't have unbounded fragmentation anymore - but the bane of all memory allocators is an allocation pattern where you allocate a whole bunch of small stuff and then free some of it, and you will always have fragmentation from that.
E
And we also have prolonged lifetimes of some objects versus others - which is the caching in BlueStore - so we're primed for fragmentation.
J
Yeah, that's generally less of an issue in tcmalloc and jemalloc. One of the differences there is that libc malloc, at least traditionally, was better at being able to grow a string in place without having to do a memcpy, but with size classes, whenever you hop between size classes it forces the memcpy at the allocator level, essentially. But I mean, it was never guaranteed with the libc allocator in the first place that you wouldn't have to do a memcpy.
F
I'm also curious about the Seastar allocator. I'm not sure how much folks have looked at this yet, but, like, Seastar does its own allocation by claiming everything up front and then administering it all itself, instead of relying on the kernel to reclaim anything. I don't know, right - I haven't kept looking into the way it works at all.
I
Yep, I was taking a look, and, as you said, it basically chunks the memory. The limitation coming from it is the fact that you can have only a limited number of shards in Seastar - no way to utilize more than 256 cores, if I recall correctly.
I
The only funny thing is its ability to deallocate from a different shard - that's all.
A
Anecdotally, using the Seastar memory allocator for the stuff that it can be used for in Crimson - with, like, you know, BlueStore and AlienStore - it shows much better behavior than if we use libc for everything.
I
Even so - I think, well, when I was switching from the classical to the Seastar world, I was amazed how good, how low the overhead related to allocations in Seastar is. It's very fast.
A
Yes - Radek, how difficult do you think it would be to switch AlienStore and BlueStore in Crimson to use tcmalloc rather than, like, right now...
I
Doing that - no way, no easy way, because, well, there is the restriction of having the number of threads constant and fixed, and there is no reusing of them, which stands in bright contrast to what RocksDB does internally, at least during its startup. It creates a big bunch of threads - short-living threads, to be honest - but still they deplete the number of shards. We will need to extend the...

I
We will need to extend the Seastar allocator to make the binding more dynamic, precisely to reclaim the ones that are no longer used.

I
At the moment it works in the way that the Seastar allocator is responsible for serving the Seastar world, but it passes through all requests from the alien world, in particular from BlueStore.
I
Sure, I mean - I'm not sure - I think we've got a misunderstanding. I initially understood that you want to replace the libc malloc we use at the moment in the alienized BlueStore with the Seastar allocator. No - oh, no, no, no, I mean...
H
I think - I think so, yeah. Oh okay, okay. Do you think it would be difficult to do? Hopefully not.
A
Okay, let's see, we are at the end of the hour. I had two - well, at least one other thing on here, but it'd probably be a big discussion, so maybe we push it off to next week. That's the OSD sync write performance. So I guess, does anyone have anything else they want to quickly discuss this week, or should we be done?

A
All right, well then, next week maybe we'll talk about the OSD sync write performance topics I wanted to bring up, and I think, Josh, you had also mentioned you have more general Ceph performance topics, so we'll push those off to next week. Does this sound good?