From YouTube: 2019-11-11 :: Crimson SeaStor OSD Weekly Meeting
A: Oh boy, okay, let's get started here; we've got folks now. All right, so, new stuff for this week: Sage implemented a new minimum default for the PG autoscaler of 16 instead of 4. I think that's a fantastic idea; 4 is just so few that you don't get a whole lot of parallelism, and that causes issues.

A: This is definitely, in my opinion, a much, much better default. Whether or not we could also adopt some of the things we were talking about last week, where we shrink the PG log length so that we could push that out even further, that's maybe a discussion for later on. This is a really good starting point, I think, but certainly more PGs, or rather a higher minimum number of PGs for better parallelism, is definitely a good way to go, I think.
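To make the autoscaler point concrete, here is a minimal sketch (this is not the actual Ceph autoscaler code, just an illustration of the behavior being discussed): the autoscaler picks a power-of-two PG count for a pool but never goes below a configured floor, so raising that floor from 4 to 16 guarantees more parallelism for small or mostly-empty pools while leaving larger pools untouched.

```python
# Illustrative sketch only: round a desired PG count to the nearest
# power of two, clamped to a configurable minimum. Raising the minimum
# from 4 to 16 is the change discussed above.

def suggested_pg_count(desired: float, pg_num_min: int = 16) -> int:
    """Nearest power of two to `desired`, but never below pg_num_min."""
    n = 1
    while n * 2 <= max(desired, 1):
        n *= 2
    # round up when desired is closer to the next power of two
    if desired - n > n * 2 - desired:
        n *= 2
    return max(n, pg_num_min)

print(suggested_pg_count(3))    # → 16 (old minimum of 4 would allow 4)
print(suggested_pg_count(100))  # → 128 (larger pools are unaffected)
```

The floor only matters for small pools; anything that already wants more PGs than the minimum behaves exactly as before.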
A: All right, let's see. Next we have a PR from Mia that, after our discussion from last week, disables delete-range on Nautilus completely. I think that's very good, and it's Nautilus-specific, since I don't think we actually even have the ability to turn off delete-range in master right now. Still, that's good; I think we should probably just leave it off for Nautilus indefinitely and focus on that.
B: Yes, that's right. You know, the current default was just crazy high, so it makes sense to set it much lower and go from there.
B: It depends a little bit on how fast the device is, but I think it's usually CPU-bound oftentimes, and you want these operations to be relatively small chunks of work anyway. Most of the omap get operations are generally done in a way that loops over however many entries are returned and then continues, so it's not a big deal. If you had to change the total size for a given request, then that would matter, for sure.
A: Congratulations. Let's see, and then also the object-store work looks like it got some updates: I've finished doing a whole bunch of rebasing and a whole lot of refactoring, integrating the changes that came in while I was working on it, and I'm currently going through testing that. Yay, that's great.
A: Let's see, what else. Well, I do remember wondering, though. I did look at it and was having trouble figuring out anything that was wrong with it. After staring at it for a while, I was starting to become suspicious that maybe I'm breaking something somewhere else, or that something somewhere else is breaking because of it. But it also could just be that, you know, I'm not looking at it right and it's my fault somewhere. Yeah.
A: Yeah, that's what I was wondering too, but I'm just not seeing it yet. So definitely, if you don't mind, or anyone else doesn't mind, staring at it just a little bit; I'm not seeing it myself. Yeah.
A: I've got a PR here for increasing the default number of RGW bucket shards. I actually did start testing that with Eric's PR; I wanted to have fair comparisons before really deciding on this. I think a couple of folks in there were advocating for a smaller number of default shards, like seven, and I was advocating for more, more like 61, which would be the prime number around 64, after doing some more testing and also getting our delete-range stuff fixed.
A: There are two downsides. One is a slowdown in bucket listing, but not nearly as much as there used to be; even with 61 shards the impact is significantly lower now. It's not really even that much compared to just 7 shards or something, so there's that. There is still a little slowdown, but not bad. The bigger downside is bucket creation throughput: it decreases basically linearly.

A: So if you're going to create a bunch of buckets, having the default be seven shards is almost eight times faster than creating a bunch of buckets that have 61 shards. So there is a legitimate concern there if a user wants to create a thousand buckets, or 10,000 buckets, or something. I don't know if that actually happens very often, but it's real. So those are the dimensions, at least from a performance perspective, of kind of what we're seeing right now in terms of what's good and what's bad.
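As a rough sketch of why the shard count matters (this is not RGW's actual index-shard hash, just an illustration): each object name maps to one bucket index shard, so more shards spread index load across more RADOS objects, and a prime count like 61 avoids resonating with patterned key hashes.

```python
# Illustrative only: map object names onto N index shards and look at
# how evenly they spread. The hash choice here (md5 prefix) is an
# assumption for the demo, not what RGW actually uses.
import hashlib

def shard_for(name: str, num_shards: int) -> int:
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "little")
    return h % num_shards

def spread(names, num_shards):
    counts = [0] * num_shards
    for n in names:
        counts[shard_for(n, num_shards)] += 1
    return counts

names = [f"obj-{i}" for i in range(10000)]
for shards in (7, 61):
    counts = spread(names, shards)
    print(shards, "shards: min", min(counts), "max", max(counts))
```

With 61 shards each one holds a much smaller slice of the index, which is where the listing and parallelism trade-off in the numbers above comes from.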
A: And then I've got my refactor of the do_osd_ops PR, which almost certainly is already super out of date by now, because people change that really often; but it was still not passing when I looked at it, there were still a couple of issues. I wonder if I should maybe break that up into smaller chunks and, instead of trying to tackle do_osd_ops entirely, maybe just tackle the things that don't have gotos in them first, because those are fairly easy, and then move on to anything that has a goto and do that a little bit more slowly. We'll see, but it would be really nice to get that refactor in, because it's huge and ugly and awful. So, okay, I think that's probably good enough; this is pretty close to where I petered out getting through these, so I'm guessing most of the rest of the stuff can wait.
A: Okay, so moving on to discussion topics. The only thing for this week that I have is that there's been a huge discussion regarding transparent huge pages right now. The deal is that if you're trying to stick an OSD or the MDS or just about anything into a container, oftentimes that container may have a memory cgroup associated with it, where the processes are required to fit within some kind of memory envelope. As it turns out, when transparent huge pages is set to always rather than madvise, the ceph-osd process, or other processes, can end up using significantly more RSS memory than the mapped memory that tcmalloc believes it's using. Presumably this is due to fragmentation, from us only using a portion of a huge page once we've freed part of it. So we've got some test results there; there's a Google Docs spreadsheet in the Etherpad if folks want to look at it. It was observed that there's a real difference between having transparent huge pages set to madvise, essentially such that we don't use it, versus having it set to always.
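The madvise mode mentioned here means the kernel only backs a region with huge pages when the application explicitly asks for them. A minimal Linux-only sketch of that opt-in/opt-out, using Python's mmap wrapper (Python 3.8+; the constants are only exposed on kernels built with THP support):

```python
# Linux-only sketch: with THP in "madvise" mode, huge pages are used
# only where the application opts in via madvise(MADV_HUGEPAGE), and an
# allocator can opt a region out with MADV_NOHUGEPAGE. Requires a
# kernel with transparent-huge-page support.
import mmap

length = 4 * 1024 * 1024           # a 4 MiB anonymous mapping
buf = mmap.mmap(-1, length)

# Opt this region out of transparent huge pages entirely.
buf.madvise(mmap.MADV_NOHUGEPAGE)

# Or, for a region that genuinely benefits (e.g. large I/O buffers),
# opt in explicitly:
buf.madvise(mmap.MADV_HUGEPAGE)

buf.close()
```

This is the per-region granularity being contrasted with the system-wide "always" setting: the decision moves from the OS to whoever owns the mapping.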
A: So there's a couple of ways this could be solved. One of them, in my opinion the most unfortunate way, but an effective one, would be just to set the flag inside our global init to tell the OS that any Ceph process doesn't want transparent huge pages, and that would work fine. Patrick wrote it, he tested it, and he said it looked like it was working, so I'm fairly convinced that it will do the job just fine. I think, though, from the greater standpoint of customer experience, and kind of what happens with other applications that also suffer from transparent huge pages, that it would be worth looking at whether having the operating system set transparent huge pages to always is really a good idea. I don't really think it is.
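The per-process flag described here is a prctl call; Ceph's actual change does this from C++ during global init, but the same knob can be poked from any language. A Linux-only sketch via ctypes, with the constant values taken from <linux/prctl.h> (available since Linux 3.15):

```python
# Linux-only sketch of the per-process THP opt-out discussed above.
# The real Ceph PR makes this call from global init in C++; this just
# demonstrates the same syscall. Constants are from <linux/prctl.h>.
import ctypes

libc = ctypes.CDLL("libc.so.6", use_errno=True)
PR_SET_THP_DISABLE = 41   # since Linux 3.15
PR_GET_THP_DISABLE = 42

# Disable transparent huge pages for this process (inherited by children).
ret = libc.prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0)
assert ret == 0, ctypes.get_errno()

# The kernel reports the current setting back as the return value.
print(libc.prctl(PR_GET_THP_DISABLE, 0, 0, 0, 0))  # → 1
```

The appeal, as raised later in the discussion, is that this only changes behavior for the process that asks, rather than for every application on a hyper-converged box.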
A: It turns out that Ubuntu actually had a bug report for this a couple of years ago, and they ended up switching from always to madvise, not even counting Ceph or containers or any of that. They just saw that there were a lot of applications suffering because of it, and then ran their own memory throughput tests and found that transparent huge pages wasn't actually helping in the testing that they did. So anyway.
A: The bottom line is, it's still a huge, huge improvement. It got merged, it's great, and it would continue to be great even if nothing else happened with it. I do think we can probably make it just a little bit better, but we'll see; I've got an idea of how we might do it. So not a big deal. And then, okay, we already talked about delete-range a little bit; that's disabled for Nautilus.
B: You just fixed that; Patrick's proposal disables it just for Ceph with the prctl, yeah. I was wondering if anyone knows whether the huge pages used by other processes in the system could end up crowding out space for the regular pages Ceph wants to use, in that case.
B: Sorry I'm late, guys. Hey, it's Ben. I was just curious about the prctl approach, Patrick's PR to disable transparent huge pages just for the Ceph processes. I was wondering if we could have any issues with a lack of non-transparent huge pages if the rest of the processes in the system had used up the available memory with those pages.
C: The question is, so, one of the things that concerned me, and why I thought prctl was a good solution, is that if you're doing a hyper-converged system, where you have applications and, you know, something like OpenStack or OpenShift running on the same system as Ceph, then just disabling transparent huge pages entirely is making that decision for everyone, not just for Ceph, whereas the prctl option basically allows Ceph to make that decision without forcing it on other people. And I understand; I've heard about big partners like SAP and Oracle, which I did not expect, saying, you know, bad things about the default. But I've been talking to other people in my group who say there are cases where it does help, and I'm asking them to forward success stories. I think the two communities aren't talking to each other; the people in the Ceph community for some reason don't seem to hear from the people who are using it successfully. So, though, I get it.
C: You know, I totally get your position, Mark. I mean, I've run into two cases where the results were horrible with it enabled. There was a case where OpenStack was melting down a year or two ago; the whole cluster just imploded, and part of it had to do with THP. And then there's this recent bug where we're hitting the cgroup limits, and they worked around it by doubling the cgroup limit. But the real answer is to manage your memory better, and turning off THP helps with that, as your data shows. I think in the long run, though, there may be an intelligent use of huge pages in Ceph where they could actually be a benefit, but it's not the way that we're doing it now. We have one memory allocator; correct me if I'm wrong on any of this, but my understanding is tcmalloc is the memory allocator, and it's basically treating memory as one-size-fits-all. Maybe there are certain kinds of cases in Ceph where huge pages make sense, something like I/O buffers, where they're, like, four-megabyte buffers, and you could handle those efficiently with huge pages. But you don't want every single user of memory getting huge pages delivered to them when they really don't need them and don't want them. Does that make sense? Yeah.
B: I think a lot of that makes sense, in terms of wondering why you hear more success stories about where it is helpful. And going forward, actually, with the Crimson work that's looking at using Seastar: Seastar has a very simple allocator, a kind of per-core thing that allocates all the memory upfront, and for that purpose huge pages probably make sense, right?
A: The question there, I guess, Ben, and the one thing I have a little bit of an issue with, is that I don't think anyone is recommending disabling transparent huge pages outright, right? No one wants to outright disable them. The argument has been to have them be opt-in via madvise. Right, right.
A: Exactly. So I guess my question is: is it better to assume that transparent huge pages are going to help an application that hasn't said anything one way or the other, hasn't requested or declined them, or is it better to assume that such applications should be left untouched and just get small memory allocations if they haven't explicitly requested huge pages? That's kind of the tension, I think, if there is one. Yeah.
C: I'm kind of with you on that, but, you know, I just think there are a lot of different points of view on this. Just within the perf and scale team I've heard people saying that OpenStack uses them and so forth, but I think generally huge pages require some kind of memory management strategy to make effective use of them. If you have an application that's recycling memory, and by recycling I mean that it hands pages back to the kernel and expects it to reuse them, that's where you get into trouble with just handing huge pages out at random. But I think there are cases where, if you think about it, say you have a four-megabyte buffer: that's potentially a thousand TLB misses just to access that buffer. So I'm thinking maybe you need two memory allocators: basically one for sort of your vanilla messenger-type things, where they don't really need large buffers, and then one for the very specialized things, like large I/O buffers, where you really can benefit from it. I think something like that might work more effectively for Ceph than one-size-fits-all. Yeah.
A: I guess that's kind of why I've been wondering if it's better to just take a do-no-harm stance to start out with, and then let those applications, you know, opportunistically try to grab huge pages when they can benefit from them, rather than doing a blanket approach. But, you know, it's... yeah.
A: And Patrick's got a PR that does that, and it's been approved both by me and, I think, Josh also approved it. So it's in the works. I just think it would be very unfortunate if, you know, the end result of this conversation is that we just fix the kernel documentation to stop claiming that madvise is the default, because that's also confusing things, right? Yes, yes, yes.
C: I think we should continue the discussion with the kernel people, because I've been talking to Joe Mario about this, and he was sort of a pro-THP guy, and I think he's starting to get that, okay, there are cases where this really isn't working and some caution is needed. So I think we should continue that, but at least we've solved it for Ceph, and we can move forward and get on with real work. So that's that.
B: I had kind of a more detailed question about the prctl approach, and perhaps you might know more about the kernel memory management here. The prctl approach kind of forces the application to not use any huge pages. I was curious, if you had a box with overcommitted memory, and other processes were using lots of huge pages, whether it was possible there would not be enough regular pages left for a non-huge-page-using process.

C: Yes, I mean, that's the problem. Yeah, if some other process, some other program or application, is not intelligently managing its memory, that's going to affect Ceph. That's life in the world of hyperconvergence: trying to squeeze ten pounds of stuff into a five-pound bag, you know.
C: Does that answer your question? I mean, I can go into more detail. Obviously with OpenStack the VM has a fixed memory size, right, but with OpenShift and Kubernetes there are cgroup limits on memory. Unfortunately, what happens is that the application that exceeds them gets killed, as you found out, right? But at least it limits the damage to the other applications, or Ceph, running on the system.
A: I think we did kind of see this, because even if you look at the osd_memory_target documentation, when I wrote it, it said something like: well, we sometimes see the RSS usage around 20 percent higher than the mapped memory of the process; your mileage may vary, because sometimes we didn't see it. It was kind of assumed that it was probably something like THP, but we just didn't really do anything about it; it was just, this is what we can do.
C: I mean, did I ever tell you, like three or four years ago I was working on this OpenStack and Ceph configuration, and we were trying to do hyper-converged, and we were running fio, and the thing was being OOM-killed even though we had free memory on the system. We had gigabytes and gigabytes of free memory, and it was still OOM-killing things, like, what the hell is going on here? We finally wound up getting Larry Woodman, the VM developer, to come over, and he's looking at it and scratching his head.
C: I had a question about tcmalloc. I was looking at the heap stats in the admin socket, and I would see this central free pool that would get really big sometimes, and I was wondering, that actually counts against the RSS of the process, right? In other words... no, go ahead.
C: I saw that get as big as 15 to 25 percent of total memory, and that just seems to me to be way bigger than it needs to be. If you're talking about a 4-gigabyte process, and you've got, what is it, a gigabyte of it sitting free? It's like, why is it so big? Does it need to be?
A: So there's some documentation out there as to how tcmalloc works, and I don't even want to try to quote what I vaguely remember from reading through it, but I think the gist of it is that with the number of threads that we have in Ceph, especially the number of active threads, and the way that we're requesting memory and then letting it go, that central free list just ends up getting really, really big. Josh, do you remember more details of that?
C: That's your homework assignment, Mark.

C: It's just the tcmalloc release rate. Okay, I'm going to put something in the chat window; it's from this article here.
C: It says that it's a range from zero to ten and defaults to one, and I'm wondering if maybe we need to be releasing memory a little faster, to keep that central cache from building up so much. I'm not saying that's going to solve all our problems or be some magical thing, but it might just keep the ratio more reasonable.
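For reference, gperftools' tcmalloc reads this knob from the environment at process startup; the value 5 below is purely illustrative, not a recommendation from the discussion.

```shell
# gperftools tcmalloc release rate: range 0-10, default 1. Higher
# values return freed memory to the OS more aggressively. The value
# here is illustrative only.
export TCMALLOC_RELEASE_RATE=5
```

This would have to be set in the environment of the daemon (e.g. its systemd unit or container spec) to take effect.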
A: I think the question behind it is, when that central free list gets big, is tcmalloc still really rapidly grabbing memory from it and freeing memory back to it constantly, and that's why it's big? So even if you used the tcmalloc release rate and set it really high, would that just be churning things even more? Would it be trying to release memory back to the OS and then grabbing more again quickly, or can it reuse some of it internally that way? I don't really know. The thought I was wondering about is: why does that get big, and are we actually grabbing from the central free list, or how is it working? I don't really know.
C: Yeah, I mean, it's hard for me to visualize that many hundreds of megabytes of memory are needed to satisfy all the requests that are coming in. I can see you need a certain amount to avoid churn between the operating system and the process, but it's just not intuitive to me why it would need that much.
A: I imagine at least some of it is because we have lots of temporary stuff happening, right? You've got all these temporary buffers being built up, you might have memory copies happening in different places in the path, and then you've actually got the cache and the actual write requests going into RocksDB. I wonder how much of all this is just doing all kinds of memory work, creating temporary things that go away really quickly.
C: Well, I'm sure there's a lot of that, right; the messenger, probably, for example. But if these buffers are small enough, I'm wondering, is there something that could be done to not have them constantly allocated and released all the time? Are we basically overusing tcmalloc, or abusing it? Yeah, I think.
A: Go ahead. If we did that too, then all of a sudden, instead of creating all these temporaries all the time, we could create some upfront using placement new with a giant memory allocation: you allocate a bunch of memory once, use placement new for a bunch of objects, and then you never reallocate anything ever again.
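A rough analogue of that idea (Python has no placement new, so this just shows the shape of it): make one big allocation upfront and hand out fixed-size views into it, so temporaries stop hitting the allocator at all.

```python
# Toy sketch of the preallocation idea discussed above: one upfront
# slab, recycled fixed-size slots, no per-temporary allocation. Not
# Ceph code; just an illustration of the pattern.

class SlabPool:
    def __init__(self, slot_size: int, slots: int):
        self._slab = bytearray(slot_size * slots)  # the one big allocation
        self._view = memoryview(self._slab)
        self._free = list(range(slots))
        self.slot_size = slot_size

    def acquire(self):
        """Hand out a slot as (index, writable view); no new allocation."""
        i = self._free.pop()
        off = i * self.slot_size
        return i, self._view[off:off + self.slot_size]

    def release(self, i: int) -> None:
        self._free.append(i)  # slot is simply recycled

pool = SlabPool(slot_size=4096, slots=8)
i, buf = pool.acquire()
buf[:5] = b"hello"          # write in place, inside the slab
pool.release(i)
print(bytes(buf[:5]))        # → b'hello'
```

The trade-off is the same one raised in the discussion: you pay the big allocation once and give up flexibility in slot sizes, in exchange for never churning the allocator with short-lived temporaries.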
C: I mean, as long as it's not a huge amount of memory involved, why not do that? Here's an example of what I was talking about with the central cache free list; I just posted it. In this case we were right up against the four-gigabyte boundary, and half a gigabyte was in the central cache free list. It just makes you wonder, like, well.
B: Go ahead. I'm curious if we could figure out how much of that is based on byte usage from the cache or from the I/O path, or what kinds of temporary things we're freeing so often. Yeah, Mark, do you think the memory pools we have right now could help there?
A: I don't know; I'm not sure how you would disassociate them. I wonder if there's a way to get more verbose output from tcmalloc. We don't expose it, but I think we can get some better stats. I wonder if there's some way we could use the more verbose stats to actually find out what's really in there.
C: What about Valgrind? I mean, that's kind of like drinking from a fire hydrant, but is that something that would be reasonable for getting some insight into this, or...
A: Maybe, yeah. The only way I've ever used Valgrind is to see what's using memory, not the rates of allocations and deallocations, but kind of what's holding memory, right, looking for leaks; you can do that. Yeah. Josh, what about LTTng? Could we actually, like, start recording...
So
when
you
have
a
buffer
list
right
you,
the
kind
of
the
idea
behind
it,
is
that
you
have
a
bunch
of
different
contiguous
buffers.
But
overall
you
might
not
have
a
contiguous
amount
of
memory
right
or
a
space
of
memory.
You
might
have
all
these
other
disparate
things,
and
so,
when
you're
doing
things
like
a
pens,
how
does
that
actually
work?
What
are
you
doing
when
you
append
something?
Are
you
ending?
A: If you have a ton of tiny little buffers, do you want a big, fragmented bufferlist that has pointers off into all these different regions, or at some point do you make the decision to take all this stuff and put it into a contiguous range of memory that you newly allocate? Or, upfront, do you allocate a bigger amount of memory so that you can do little appends without having to reallocate? What's the scheme behind all this?
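The two strategies being weighed can be sketched in a few lines (this is a toy, not Ceph's actual bufferlist): appends just link chunks and stay fragmented, while an explicit rebuild coalesces everything into one contiguous buffer at the cost of a fresh allocation and a copy.

```python
# Toy illustration of the append question above: O(1) fragmented
# appends versus an explicit coalescing rebuild. Not Ceph's bufferlist.

class BufferList:
    def __init__(self):
        self.chunks = []          # disparate, individually contiguous pieces

    def append(self, data: bytes) -> None:
        self.chunks.append(data)  # O(1): no copy, fragmentation grows

    def rebuild(self) -> None:
        # coalesce: one new contiguous allocation plus a copy of everything
        self.chunks = [b"".join(self.chunks)]

    def __len__(self) -> int:
        return sum(len(c) for c in self.chunks)

bl = BufferList()
for piece in (b"tiny ", b"little ", b"buffers"):
    bl.append(piece)
print(len(bl.chunks))            # → 3 (fragmented)
bl.rebuild()
print(len(bl.chunks), len(bl))   # → 1 19 (one contiguous chunk)
```

The design question in the transcript is exactly when, if ever, to pay the rebuild cost, versus over-allocating upfront so small appends land in existing space.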
A: You know, Radek looked into this and I looked into this, and I got into things like, okay, can we use small vectors, basically vectors that also have a certain amount of dedicated memory upfront. Yeah, exactly. But that ended up not working out, because in reality we weren't storing data, we were storing pointers anyway, so it didn't matter; it was all fragmented anyway, no matter what. But behind all this there's just...
B: I don't think we're using the scatter-gather API so much there. There's some more recent work to use io_uring; notably, they've reduced the overhead of the syscalls. Yep.
C: There's a system call, I think it's called readv or writev, where you basically provide an array of buffer pointers rather than a single buffer pointer. So in theory, if you have that, then maybe you don't need to coalesce your different pieces of the buffer into one big contiguous piece, because you can just pass them to the socket or the block device that way. Yeah.
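The vectored call being described looks like this; os.writev wraps the POSIX writev syscall, handing the kernel an array of buffers in one call so the fragments never have to be copied into one contiguous buffer first.

```python
# Sketch of the vectored write discussed above: one writev syscall
# takes many disjoint buffers, avoiding a coalescing copy.
import os

r, w = os.pipe()
pieces = [b"header|", b"payload|", b"footer"]   # fragmented buffers
n = os.writev(w, pieces)                        # one syscall, many buffers
os.close(w)
data = os.read(r, 64)
os.close(r)
print(n, data)  # → 21 b'header|payload|footer'
```

readv is the mirror image for reads: the kernel scatters incoming bytes across a provided array of buffers, which is the "scatter-gather" API mentioned just above.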
B: That's certainly been part of the discussion for Crimson, as a future optimization: the idea being that, like you said, you don't necessarily need one humongous buffer; you can keep the same set of buffers and use the vectorized calls in both the networking and storage stacks.
C: I think we do scatter-gather I/O, though. Oh, okay.

A: We get bufferlists everywhere, for anything that's basically encoding something and then later decoding it for some other reason somewhere else. Bufferlist is kind of the default thing that ends up being used for that. Maybe overused, honestly, from my perspective, but it's definitely highly used. Yeah.