From YouTube: Ceph Performance Meeting 2021-03-25
Description
A: Moving on, there's not a whole lot of PR updates in the last two weeks; two new ones that I saw. One is from me and it's really small. I guess there's a lot of text in the description, but not actually much in the PR itself. The only thing of real consequence in it is changing the onode map from an unordered_map to a std::map, and that's a little controversial; I'm not sure if we really want to do it, which is why it's RFC. So here's the reasoning for it.
A: Right now it appears that we see significant collisions with unordered_map, due to the way that the hashing works. std::map does not suffer from that. Or we could do a separate specialization of std::hash for the onode map, which would probably also be a dramatic improvement, but we can talk about that more later. Anyway, that's what that PR changes.
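To make the alternative concrete, here is a rough sketch of what a std::hash specialization for the onode map's key could look like. The key type and fields below are stand-ins, not Ceph's actual ghobject_t, and the mixing is a generic boost::hash_combine-style formula rather than whatever the PR would actually use:

```cpp
#include <cstdint>
#include <functional>
#include <string>

// Hypothetical stand-in for the onode map's key (not the real ghobject_t).
struct onode_key_t {
  std::string name;
  int64_t     pool;
  uint32_t    hash;   // caller-supplied hash value (e.g. derived from the PG seed)
};

// Needed once the key is used in an unordered container.
bool operator==(const onode_key_t& a, const onode_key_t& b) {
  return a.name == b.name && a.pool == b.pool && a.hash == b.hash;
}

namespace std {
template <> struct hash<onode_key_t> {
  size_t operator()(const onode_key_t& k) const noexcept {
    // Mix more of the key's identity instead of trusting only the
    // caller-supplied hash field (boost::hash_combine-style mixing).
    size_t h = std::hash<std::string>{}(k.name);
    h ^= std::hash<int64_t>{}(k.pool)  + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
    h ^= std::hash<uint32_t>{}(k.hash) + 0x9e3779b97f4a7c15ULL + (h << 6) + (h >> 2);
    return h;
  }
};
} // namespace std
```

An unordered_map keyed this way no longer depends solely on the caller-chosen hash value for bucket placement.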
A: Let's see, the next new PR is one for RGW. I didn't actually look at what it does, but the complaint here is that we don't do any kind of OSD compression after RGW compression. I'm not sure; maybe that's what he wants to change. It seems like if you're already compressing, maybe you don't want to recompress, but I don't know. Anyway, that's a new PR and someone's looking at it. Three closed PRs in the last two weeks.
A: This one is for scattering the AlienStore threads onto specified CPU cores. I talked to Kefu about that; I think we thought maybe it should just be the first part of this PR, but I think Kefu merged it, so maybe he was happy with the second part too. In the end it's a little more advanced than just specifying which cores the AlienStore threads should land on, but in any event that's one of the major things this PR does, which is really important since, before, all AlienStore threads landed on the second-to-last core on the system, no matter which crimson process they came from or anything. So that's a good fix.
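For readers unfamiliar with what "landing threads on specified cores" means in practice, here is a minimal generic Linux sketch using pthread_setaffinity_np. It is not the crimson/AlienStore implementation (which drives this from its own configuration), and the core number is just an illustrative choice:

```cpp
#include <pthread.h>
#include <sched.h>
#include <iostream>
#include <thread>

// Pin the calling thread to a single CPU core.
void pin_current_thread(int core) {
  cpu_set_t set;
  CPU_ZERO(&set);
  CPU_SET(core, &set);
  if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0)
    std::cerr << "failed to pin thread to core " << core << "\n";
}

int main() {
  std::thread worker([] {
    pin_current_thread(3);  // hypothetical choice: this worker lives on core 3
    // ... blocking AlienStore-style work would run here ...
  });
  worker.join();
  return 0;
}
```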
A: bluefs_buffered_io = true merged; basically we're switching that back on again. This was pretty much necessary. We did pretty extensive work trying to dig into the behavior here, and both users were seeing that this was causing huge issues when we had buffered I/O disabled, and we were seeing RocksDB doing a lot of reads from the disk in other situations. So in the end we turned it back on, maybe not permanently; we'll just have to see how things go. There's a chance that we may hit a bug that was reported where the kernel starts swapping stuff out really aggressively, presumably to fulfill cache or memory needs, and that was causing huge performance issues under certain RGW workloads. But that seems to be a less bad situation than what we're seeing now. So anyway, that's where it is.
A: Oh, and then in the past two weeks my dynamic append-length PR for prefilling append space merged. I think we took care of all the alignment concerns that we had with it, and it did continue to show performance gains, especially in a couple of new tests that we added that tried to do a better job of isolating the behavior of appends and tcmalloc when you already have a lot of reserved, sorry, allocated memory. So I think that's good; I think it will generally be a good improvement for us in a lot of areas that aren't really visible, but we'll see. Let's see, updates: Gabi's "remove allocations from RocksDB" PR. It looks like maybe there are just some bugs that need to be worked out in that, but otherwise he was mentioning to me yesterday
A: that the performance improvements look really good, and we're seeing not quite as good efficiency gains, but still efficiency gains nonetheless. So I'm still really excited about that PR. Let's see. Oh, my omap bench test got updated: it now lets you specify the number of unique hashes to use in hobject_t, to better simulate what happens when you distribute objects over PGs.
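A minimal sketch of the idea behind that benchmark option (not the benchmark's actual code; fake_hobject_t and make_objects are hypothetical names): many objects, but only a limited set of distinct hash values, roughly like objects spread over a small number of PGs:

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical stand-in for hobject_t: a name plus the caller-supplied hash.
struct fake_hobject_t {
  std::string name;
  uint32_t    hash;
};

// Create num_objects objects but only num_unique_hashes distinct hash values.
std::vector<fake_hobject_t> make_objects(std::size_t num_objects,
                                         uint32_t num_unique_hashes) {
  std::vector<fake_hobject_t> objs;
  objs.reserve(num_objects);
  for (std::size_t i = 0; i < num_objects; ++i) {
    objs.push_back({"object_" + std::to_string(i),
                    static_cast<uint32_t>(i % num_unique_hashes)});
  }
  return objs;
}
```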
A: So if you're interested, take a look at that. And then there's just more testing on this concurrency PR for volumes that lets you do things in parallel. I don't really know if that is merging soon or not, but it's going through testing, so good. And that's it for the updated stuff.

All right, so I don't know that I'm going to spend a ton of time going over it unless people really want to, but I did want to talk about some of the work I've been doing looking into the BlueFS buffered I/O and omap performance issues that have come up, well, that have been going on for a while now, I guess.
A: So this all kind of started when we began getting reports from users that they were seeing a really, really bad performance impact for things like snap trimming when we switched buffered I/O off, and Adam did investigation, I did investigation, other people have done investigation. Some of this seems to be related to RocksDB's internal behavior, with prefetching and the way the cache works and some other things. Some of it may be related to our RocksDB/BlueFS layer, and Adam fixed a bug there where we weren't properly reporting back to RocksDB what we were doing. But then some of it is also inside BlueFS itself, or sorry, BlueStore itself.
A: What I saw is that there were roughly three separate issues when trying to benchmark the omap behavior in BlueStore. The first is, well, here, I'll start with a good note: the first is that BlueStore actually can be faster than FileStore and MemStore in ideal situations. It's not all bad: when everything is in cache, when you're doing buffered I/O specifically, and everything is happy, it's faster than FileStore in pretty much everything. So yes, that's good, but there are a lot of ways for our implementation, BlueStore, to slow down.
A: The first thing that I saw was that we have hash collisions from using an unordered map, given the way that hobject_t is hashed. We allow the caller to set the hash value themselves, and when we do that we typically use some, I forget exactly, somewhere in CRUSH we use the PG seed, the placement group seed, at least partially in that hash. Josh, do you know, was it directly the seed or some variation on it? In any event, it's not unique per object, right? We end up with a lot of collisions, and potentially anyone that creates an hobject_t could set it to anything they want, so it's very difficult to
A: contain it, like to say, you know, we know it's always going to be good, because it may or may not be good; maybe normally it's okay, and that's kind of where we're at now, I think. So we can fix this in a couple of ways. The tests that I've been doing so far have been with just a hundred thousand objects, which is pretty small, and 100 omap entries per object. Potentially that could be higher, but the onode map itself should be smaller per shard, right?
A: So I mean, maybe you could construct a case where you have very few shards and lots of memory, and in that case you could maybe see std::map be slower than an unordered map, but we're not talking about, you know, 30 million entries in one individual map, at least I don't think we are.

C: Is it a specialization? You can always specify the hash; the hash function is part of the definition of the container.
A: Possibly. There's another document I have where I started looking at a variety of different data structures that we could use for this. I didn't work my way through all the tests, although in some of the other tabs you'll be able to see them. You might have already seen this, Ronen, but I'll supply it in the chat window anyway. Here.
A: So, Ronen, I might agree with you. You know, the std::map change is very, very simple; it's really straightforward. From our perspective the behavior isn't simple, but the code change is very simple. One thing that is a little gross about the specialization, though, is that if hobject_t changes behind our backs, now we have to make sure that we update that specialization of std::hash for the onode map specifically, or we could end up back in the same place with collisions again, right?
C: To your last comment: if you are selecting a good hash function, it should be okay whatever we do, even if we are doing some minor modifications to the underlying...

A: Well, when we do that specialization for the onode map, we're going to reach inside ghobject_t, hobject_t, object_t, all the way back to stuff like the name of the object, to be able to make the hash work well, right?
C: Yes, but we can build on, at least, Boost, I think, which has some automatic handling of structures and so on for creating a combined hash.

A: The thing is, if you were to do something like change the way that, we don't even have this right now, really, in any kind of explicit way. But if you were to change object_t or hobject_t, for instance, in a way that made the uniqueness of each object depend on different criteria...
A: Yeah, I mean, I'm not tied to std::map by any means; it's just the simplest thing we can do, but I'm open to making a new specialization for unordered_map. That was actually, frankly, why I labeled this RFC, because I suspected that there are differences of opinion on this.
A: All right, so Adam, yeah, I don't have a way to actually measure collisions that are happening at runtime, but I did add to the benchmark just a quick test looking for the uniqueness of the hobject_t hash, sorry, the ghobject_t key hash, given the hash value that's specified in hobject_t, which is just a one-to-one mapping, right? However unique ghobject_t's hash value is, is how unique we see the actual current hash specialization for ghobject_t being. But we could easily do that for a new specialization, right? We could do our own for the onode map and measure how unique stuff is.
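A sketch of the kind of quick uniqueness check being described, assuming nothing about the benchmark's real code: hash every generated key with a candidate hash functor and count distinct outputs as a rough proxy for collisions:

```cpp
#include <cstddef>
#include <iostream>
#include <unordered_set>
#include <vector>

// Hash every key with a candidate hash functor and report how many
// distinct values come out.
template <typename Key, typename Hash>
void report_uniqueness(const std::vector<Key>& keys, Hash h) {
  std::unordered_set<std::size_t> seen;
  for (const auto& k : keys)
    seen.insert(h(k));
  std::cout << keys.size() << " keys -> " << seen.size()
            << " unique hash values ("
            << (keys.size() - seen.size()) << " collisions)\n";
}
```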
A: Yeah, I mean, like I said, this is a good metric; it's kind of what I already do with looking at the hashes, sort of. I just look at the count of unique things by throwing in another hash. But with hashes, the interesting thing is that sometimes you can have collisions that follow certain patterns, right?

Sure, sure, like, is there any correlation between certain input values, right?
A: Well, so, okay, all of this is good. To be honest, I don't think it's really going to matter that much whether we have an unordered map with a different specialization or a std::map, at least given the number of onodes it sounds like we're talking about per shard. If we had a lot more, I'd say definitely go with the unordered map with the specialization. As it is, I mean, as long as we avoid collisions.
A: I think the thing that I noticed the most is that it's especially when you have very few unique hashes, and I think the correlation would be with very few PGs; if you have a small number of PGs it can get kind of bad. But there are lots of ways you can solve this, so I don't really care, whatever people want to do.

Adam, I mean, Ronen, it sounds like you're pretty much in favor of doing the hash specialization. Adam, do you care much? Do you still think std::map would
B: Well, I prefer having an unordered_map with a good hash, but I'm just questioning my ability to devise such a hash. If we have someone who can do a good hash, I'm perfectly fine with that.

A: The numbers, yeah, sure. If you look, I put the new version of the code, the one that I've been testing, in the PR. Basically it's just cleaned up a little bit, and I'll paste into the chat window the command you can use to run it; give me one second and I can provide it.
A: So this command is actually running it inside a cgroup with cgexec, but you can remove any of that stuff. This is just running it with debug_bluestore=5, which actually I'm not even looking at at the moment since I'm grepping all of that out, but otherwise it's hopefully fairly self-explanatory.

Okay, that'd be great; I'd be very curious to see if you see anything different in the benchmark itself. I tried to put most of the things that you can tweak and change right at the beginning of the functions, so you can modify it fairly easily.
A: Okay, so we got through one of the three things that I saw; so far that was the one related to the data structure. You are much, much better off if you can store all of the omap data in the, sorry, if you can store both the omap data and the onodes in the respective caches: omap data in the RocksDB block cache and onode data in the BlueStore onode cache. It's substantial how much of an improvement that provides.
A: There was also an effect that I saw both for omap get and iteration, and especially, insanely so, for remove, when doing operations in a specific order. Originally, when I first wrote this benchmark, I was just looping over an array that was sorted by object name, so object one, object two, object three, whatever. I actually did not realize when I started this that that's not the ordering we use for objects; we actually order more based on PG than strictly on the name. When looping over objects in this way it looks like, I haven't verified this, but it looks like we are probably scattering I/O across many different SST files in RocksDB, and it's just really nasty; we're going all over the place.
A: When we do things in order, using std::sort, we implement everything in ghobject_t and hobject_t and all these other things to do our own sorting, our own ordering. When we use std::sort to sort that array, then performance for a lot of things improves, and remove is insane. I mean, for 16 gigabytes of memory with buffered I/O it's something like 160 times faster for removing omap data.
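As a sketch of the benchmark-side change being described: sort the working array with std::sort using a key ordering driven by the hash/PG bits rather than the name. The comparator below is an assumption for illustration, not hobject_t's real operator<:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <tuple>
#include <vector>

// Hypothetical stand-in for hobject_t (same as the earlier sketch).
struct fake_hobject_t {
  std::string name;
  uint32_t    hash;
};

// Assumed ordering: hash (PG-ish bits) first, name only as a tie-breaker.
inline bool operator<(const fake_hobject_t& a, const fake_hobject_t& b) {
  return std::tie(a.hash, a.name) < std::tie(b.hash, b.name);
}

// Sort the benchmark's working set so operations walk keys in roughly the
// same order the store lays them out, instead of by object name.
void sort_like_the_store(std::vector<fake_hobject_t>& objs) {
  std::sort(objs.begin(), objs.end());
}
```

With the array in roughly on-disk key order, gets and removes walk RocksDB keys sequentially instead of jumping between SST files.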
A: That's, I mean, just crazy, right? It's a huge improvement. So this might not matter if we typically keep our data structures, when we store hobject_t, sorted; then we can probably do operations against them quickly. But if we're not sorting them, if we're just storing them in some random order, or in an order that's different from what's expected, we could be extremely slow.

One thing I forgot to mention is that Adam and I have been talking a lot about the possibility that, when we do iteration, we don't have enough memory to store all the omap data in the block cache. This test was designed so that in the 4 GB case there's enough omap data that it won't all fit in memory until you increase the osd_memory_target.
A: So in this case it appears that we are very rarely, or potentially never, actually doing reads from cache, because when you start an iteration, by the time you've finished it the beginning has already moved out of the cache. So if you're doing a repeated iteration over and over again of the same data, potentially when you start the new one you've already lost the cached data for the beginning of that next iteration, which means you reload it, and as you reload it you push out things further along, so you never catch up and you never hit the cache again. This is what we suspect is going on, but we haven't proved it.
A: The idea is that if we don't have enough memory in our caches to store all the omap data during iteration, then potentially, if you've got a big page cache and not all OSDs are doing iteration at the same time, some of them might be able to temporarily gain lots of memory and do things while others need less, if we had some ability for the OSDs to share some pool of common memory that they could use when needed, kind of like a page cache.
E: Yes, I have a question here. When you start doing the iteration, don't you know what the total size of the objects is, the number of objects we need? Then we could calculate from the beginning whether it's going to fit into the cache or not, and do different things based on that.

E: The question was whether we could predict, when we start an iteration, whether it's going to fill up the cache or not. First of all, if we have this information, then there are things that we can do; there are various things that we could add into the cache eviction process in order to improve it. But the question is whether we have the prediction at the beginning, whether we know, when we start such an operation, that it is going to misuse the cache.
A: There are things that we could do. It would be way easier if we actually just directly had the onode, right, and we knew what the size of the key and the value were and that's what we cached; then we could really easily say, okay, we know how many entries, how many objects there are. Well, I guess I don't know if we know exactly how many entries there are per object.

Frankly, right now I don't even trust that the block cache in RocksDB is working the way we expect it to in general, because in some situations, like for omap get with buffered versus direct I/O, even when we have a ton of memory for it we still see get performance behaving like there's no cache at all, or very little cache. So, yeah, I don't know. Maybe we can do something clever inside RocksDB, but I'm not super hopeful; we'll see.
A: All right, well, then the other thing this week that I thought maybe would be good to bring up, since there's actually quite a bit of discussion going on about it, is that Igor and Adam have been investigating and fixing issues with the BlueStore caches, specifically around trimming and splitting, though the bugs don't necessarily end there; that's just the most likely way that we hit them. So this is about the way that the locking works and the complications around it when we try to split caches and do other things. Igor or Adam, you guys are both looking a lot more deeply at ways to modify this; would one of you want to talk about some of your ideas?
A: Sure, yeah. I think I'll just repeat the same comment I made earlier, which is that I think we should be willing to give up a little bit of performance if it makes the locking simpler.
B: To note, my idea is to try to avoid any data movement when we split the cache. So if we attach an onode to a cache shard permanently, then we don't need this stuff, I mean the split, and potentially this makes things easier. There's just one case where we need to enumerate all the onodes for a specific collection, but as far as I can see that happens only on collection removal. So maybe that's a way to go, but this actually moves us to your discussion about an effective onode map or unordered map, since we will need an onode map per cache shard, not per collection, and hence we will have more entries in each onode map instance; but, well, things then depend on what the lookup time should be.
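A sketch of the shard-binding idea under discussion, with hypothetical names (ShardedOnodeCache, shard_for) and a plain modulo shard choice that is only an assumption, not the actual BlueStore design:

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct Onode { /* ... */ };
using OnodeRef = std::shared_ptr<Onode>;

// One onode map per cache shard rather than per collection.
struct ShardedOnodeCache {
  struct Shard {
    std::unordered_map<std::string, OnodeRef> onode_map;
    // per-shard lock and LRU would live here as well
  };
  std::vector<Shard> shards;

  explicit ShardedOnodeCache(std::size_t n) : shards(n) {}

  // The shard depends only on the onode's own key, never on which
  // collection currently owns it, so a PG/collection split moves nothing.
  Shard& shard_for(const std::string& key) {
    return shards[std::hash<std::string>{}(key) % shards.size()];
  }
};
```

The trade-off, as noted above, is that enumerating every onode of one collection now means scanning all shards, which should only be needed on collection removal.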
B: Well, maybe we can discuss a bit of a different approach, an absolutely different approach, which is what we have currently in Nautilus, and in fact it has nothing to do with cache split; it's more about pinning. In fact, we still have pinned objects in Nautilus and they stay in the cache shard. They remain there regardless of their state, whether they have been pinned or unpinned, but during cache trimming we just bypass them and preserve the last position where we ended, I mean, past all the pinned entries, and then on the next iterations we just start from that position rather than from the end of the list. In that case we don't need all these complexities around onode get and put; they just increment the reference count and that's all.
B: We are currently discussing this for Nautilus because we faced an issue with high memory usage in some scenarios: tons of pinned entries at the bottom of the onode cache prevented it from trimming, and for some reason these pinned entries were staying there for ages, so no trimming could happen and we needed that sort of workaround. But in fact it doesn't look that ugly, so maybe we should use it as the regular solution.
A: Before doing that, for that thing I did originally, back before having the separate pin list, I had a different set of things I was doing that made it faster. I don't know how much your changes, Igor, look like that, but maybe some combination of what you have and then, if mine are any different from what you did, maybe that would be enough.

B: Yeah, and in this case it looks like we have just a single issue: how to effectively bypass the pinned entries during trimming.
B: Yes, so, well, my solution is just to keep the position where we see the first non-pinned entry and start the trimming from that position on the next pass.

B: You mean towards the end of the list, yeah. If the trim was successful, then it starts over from the beginning, something like that. So yeah, it's not a round-robin scenario; it resets.
B: If I don't find any entries to trim in this specific iteration, I just preserve the position where I stopped searching, and on the next iteration I start from that position, and so forth, and sooner or later I will find entries to trim, unless everything has already been released.

B: We had a cap, to trim, say, 30 entries per period, so on each iteration I either trim that amount of entries, or bypass that amount of pinned entries, or some mixture of the two.
A: Yeah, I agree. I remember before we did this whole separate pin list thing, which ended up being a bad idea, I think, I had some smaller changes that were, I think, kind of like what you're doing, where you store the position, and then there were some other things I had done; I'll see if I can dig it out and figure out what it was. But I remember it helping. It wasn't perfect, right; the problem you have is that you're going to do the iteration at some point, right? Even if you leave off at your last position and don't start over from the beginning again, which helps, if you never have enough entries that you can clear, you're going to walk through tons of stuff either this iteration or the next, and you're just going to keep going over it again and again, right?
B: So if you have a thousand pinned entries at the bottom of your list, and you start iterating from the bottom and handle 100 in one trim event, then you would need 10 trim events to actually get to the elements that you really want to trim. But once you get there you will really start trimming, and you can trim as much as you need. So you will not be bothered by very many iterations in each separate trim action, but the delay until you actually trim something could be bigger, in this case 10 trim events before you actually trim anything. Is that correct, Igor? Yeah, something like that.
A: The way I remember the original code from Nautilus, before we started on all this stuff, worked is that we would try to trim a certain number of entries per trim, right? We do that, but if there are pinned entries we just skip over them, so we would always keep searching until we could trim off whatever the target was, but we'd skip those pinned entries. Igor's thing improves on that, I agree 100%, by not making it so we iterate over all of those pinned entries at the end over and over again for every trim event. So that is a big improvement.
B: Yeah, so if you have a bunch of permanently pinned entries, well, if you start over on each iteration and bypass all of them, it's expensive. If you stop after a certain number of such entries, then you might get the cache growing constantly if those entries are permanently pinned. So we need something between these two cases, and that's what I try to achieve by preserving the previous position and then trying not to start trimming from the bottom each time, but to go ahead on each iteration: until I find something available for trimming I move the pointer, but once I manage to trim something I reset the pointer and then start over from the beginning.
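A simplified model of the trim strategy just described, assuming a plain std::list in place of BlueStore's intrusive LRU and inventing the names Entry, LruCache, and scan_budget for illustration:

```cpp
#include <cstddef>
#include <list>

struct Entry {
  bool pinned = false;
  // ... payload ...
};

// Pinned entries are bypassed; if nothing trimmable is found this round,
// remember where the scan stopped and resume from there next time instead
// of re-walking all the pinned entries; once something is trimmed, the
// cursor resets to the beginning. A real implementation also has to cope
// with the saved position being invalidated when the list changes.
struct LruCache {
  static constexpr std::size_t scan_budget = 128;  // entries examined per call

  std::list<Entry> lru;                  // front = coldest, back = hottest
  std::list<Entry>::iterator resume;     // where the last fruitless scan stopped
  bool have_resume = false;

  // Trim up to max_trim unpinned entries, skipping pinned ones.
  std::size_t trim(std::size_t max_trim) {
    std::size_t trimmed = 0, scanned = 0;
    auto it = have_resume ? resume : lru.begin();
    while (it != lru.end() && trimmed < max_trim && scanned < scan_budget) {
      ++scanned;
      if (it->pinned) { ++it; continue; }  // bypass pinned entries
      it = lru.erase(it);                  // evict this entry
      ++trimmed;
    }
    if (trimmed == 0 && it != lru.end()) {
      resume = it;          // nothing trimmable yet: remember where we stopped
      have_resume = true;
    } else {
      have_resume = false;  // trimmed something (or hit the end): restart next time
    }
    return trimmed;
  }
};
```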
A: A lopsided bisection, sort of, right? Where you know that most of the pinned entries are going to be at the end, but not all of them, and maybe you don't want to actually round-robin back over to the ones at the end; maybe you want to just try to grab things sort of towards the middle, and if...
B: The only issue with this case is the scenario where we want to remove an onode shortly after pinning it. So if we unpin a, say, non-existent onode, then having it at the top of the list will delay its actual removal. But okay, we can probably think about a more effective scheme; again, the general idea is that in the regular scenario pinned entries go to the top of the list once they are unpinned.

B: So, okay, I'll try to clean up this idea and maybe put up a PR as an RFC, and let's try this approach and check if...