From YouTube: Ceph Performance Meeting 2022-06-02
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contribute/
What is Ceph: https://ceph.io/en/discover/
A
All right, well, we're still waiting for folks from Ford, but hopefully they should be wrapping up pretty soon here.
One thing that I did want to bring up to the group is that I started reading a paper that I thought would be an interesting one for our group to discuss, so take a look and see if you think it would be interesting for us. It's from FAST '22, and it's a research group that went and implemented transactions on F2FS. I think they used the same API that has previously been used for implementing file system transactions, but they did some clever things beyond that.
So anyway, if folks are interested, I thought maybe at a future meeting we could take a look at this paper. All right, so moving on to PRs this week: not a whole lot of movement.
A
I see that Adam from core did review Igor's statfs update PR, and he approved it, so that looks good; it should just go through testing now, but I think otherwise it is in good shape. And then the Jaeger tracing PR: thank you, Gabby, for helping Deepika on that. The results look pretty good, even on a faster OSD; the impact was fairly minimal, so I agree that it looks like it's reasonable to compile in by default, maybe not turn on by default, although the impact there looked pretty small too, surprisingly. So in any event, it looks like it's a worthy inclusion.
A
So beyond that, not a whole lot of movement on things. There's this buffer bloat mitigation PR that Sam needs to review; he just got back from PTO, so I think that's on his list of things to do.
A
I still need to go back and review this CBT PR for teuthology. Apparently it broke a while back because there was old output directory data that CBT just won't overwrite; it refuses to do so. So I just had the teuthology test clean it out beforehand, and I think that's all right, but apparently there are still some results missing, so we need to figure out what that is.
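(For reference, a minimal sketch of that kind of pre-run cleanup, assuming a hypothetical archive path and CBT invocation; the actual teuthology task, job file, and paths may differ.)

    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical CBT archive directory; CBT refuses to overwrite results
    # left over from a previous run, so clear it out before starting.
    archive = Path("/tmp/cbt_archive")
    if archive.exists():
        shutil.rmtree(archive)
    archive.mkdir(parents=True)

    # Hypothetical invocation; in the real test this is wrapped in a
    # teuthology task rather than called directly.
    subprocess.run(
        ["cbt.py", "--archive", str(archive), "radosbench_4k.yaml"],
        check=True,
    )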
A
Otherwise, I think there are a couple of reviews that were done, but not a whole lot of new movement on that. I think at least one of them we're waiting to hear back from the kernel folks on.
A
Yeah, I've got a couple in here, actually, that I need to go back and figure out what to do with. The time-based adaptive near-fit algorithm one is less important now that we found the other, more prominent root cause of the issue with the allocator on the Samsung drives; that's more or less fixed. This could be added, but it's not that big of a deal at this point, I think. Yeah, so I think that's about it. Anything I missed, from anyone?
A
Oh, I see, Matt, you commented on the Jaeger changes. As far as I'm concerned, I think they can go in; I approved it. I don't know if anyone else wants to look at it before then, but it's fine from what I've seen so far.
A
All right, so, yeah, was there anything I missed from anyone?
A
All right, well then, moving on. Let's see... wow, I don't know if we have the core folks yet; they must be deep in discussion.
A
Well, I guess the first update that I will give is that there isn't a whole lot of new info on Igor's BlueStore write-ahead log; he's on holiday right now. Kind of the current status of it is that we still need to figure out a couple of issues that it has.
A
Remember, there's a bug where it doesn't think there's enough space on the write-ahead log partition, and so in some cases it was failing to deploy the OSD; the OSD just refused to start. So that's something that needs to be fixed, and there are a couple of other things, but surprisingly, overall it worked really, really well, as long as the OSD would start up. So, kind of related to that...
As an offshoot of that, I started looking into the behavior of the classic OSD when, basically, you get rid of the write-ahead log and get rid of all PG log updates, to just focus in on how the OSD scales as you change messenger threads and you change shards and tp_osd_tp threads. There was some kind of surprising and interesting behavior. I just realized I didn't share this yet, so I'll do so now in the chat window. I can also share my screen; I guess that probably would make sense.
A
I can make this a little smaller so that it's more visible. Is that a reasonable size? Can people see it?
A
Okay, so, Gabby: last week Igor and I sat down to test and examine the behavior of his BlueStore write-ahead log, and so we went through and did that, and it's actually pretty impressive. Typically, I think we can get at least 20 percent out of it, and we might be able to get up to 50 percent out of it, but that will take more work, I think. But so, as an offshoot of that testing...
A
This week I've been looking at: if we get rid of the PG log, essentially, or at least take out all the calculations for it, and we remove the write-ahead log so that's not in the way, then what does the behavior of the classic OSD look like when we start changing the number of shards, the number of threads per shard, and the number of messenger threads?
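(A minimal sketch of that kind of sweep, assuming the standard Ceph options osd_op_num_shards, osd_op_num_threads_per_shard, and ms_async_op_threads, plus an assumed fio job file and OSD restart step; the actual runs were driven through the usual benchmarking tooling rather than a script like this.)

    import itertools
    import json
    import subprocess

    shards = [1, 2, 4, 8]
    threads_per_shard = [1, 2, 4, 8, 12]
    msgr_threads = [1, 2, 3]

    for s, t, m in itertools.product(shards, threads_per_shard, msgr_threads):
        # Set the three knobs under test (these options need an OSD restart).
        for opt, val in (("osd_op_num_shards", s),
                         ("osd_op_num_threads_per_shard", t),
                         ("ms_async_op_threads", m)):
            subprocess.run(["ceph", "config", "set", "osd", opt, str(val)],
                           check=True)
        subprocess.run(["systemctl", "restart", "ceph-osd.target"], check=True)

        # Assumed 4K random-write fio job against the test pool/image.
        out = subprocess.run(["fio", "--output-format=json", "randwrite_4k.fio"],
                             capture_output=True, text=True, check=True)
        iops = json.loads(out.stdout)["jobs"][0]["write"]["iops"]
        print(f"shards={s} threads/shard={t} msgr={m} -> {iops:.0f} IOPS")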
B
A
Exactly, exactly; it's not necessarily a realistic number, but more that I want to examine the scaling behavior of the OSD code as we change the number of shards and the number of threads when the PG log isn't a factor.
A
Yes. So what's very interesting that I started seeing is that when you have contention on the messenger thread, only one messenger thread, we see that as the number of shards increases, both the performance and the efficiency of the OSD degrade significantly.
A
You can see that when you only have two shards and you have a lot of threads per shard, but only one messenger thread, we have nearly twice as many cycles per op compared to kind of the best-case scenarios.
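(For context, cycles per op here is just the total CPU cycles burned by the OSD divided by completed operations; a sketch with made-up numbers:)

    # Hypothetical measurements for one data point.
    cpu_util = 0.85        # average utilization of the cores the OSD is using
    cores = 16             # cores available to the OSD
    clock_hz = 3.0e9       # nominal clock rate
    iops = 125_000         # 4K random-write IOPS reported by the client

    cycles_per_op = (cpu_util * cores * clock_hz) / iops
    print(f"{cycles_per_op / 1e6:.2f} Mcycles/op")   # about 0.33 Mcycles/op here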
B
Again, sorry, which one are we showing? I'm seeing one for 114... oh sorry, this one.
A
Yep. So, okay, when we have few shards and lots of threads per shard: in this case, if we look at the 24 threads, that would be 12 threads per shard, but only one messenger thread, and the efficiency of the OSD degrades significantly, as does the performance.
A
Okay, so now what happens if we look at two messenger threads? Well, we see a similar scenario play out. It's better; the performance doubles, but we see the same kind of trend where, when we have few shards, performance is bad, and when we have more shards it's better, although so far in these tests it's not quite as good as when we had fewer shards and fewer threads overall. Sorry?
B
A
Sorry, this is IOPS, so 125,000 IOPS. This is 4K random writes. Okay, that's a good number; exactly, exactly right. So get rid of the write-ahead log, get rid of the PG log, and we see that that's, you know, significantly higher than it is just with standard, the main branch right now, where we get maybe more like 80,000, but only when...
A
Yeah, like 12 to 16 threads looked like it was kind of the sweet spot, with, you know, between three and six shards, but then when we had more after that, the performance started to degrade fairly significantly.
A
But what I'm interested in is why we see this really, really bad behavior when we have few shards, but it doubles as we increase the messenger threads. It looks to me like when the tp_osd_tp threads are contending on the messenger threads, they start behaving badly; they start blocking.
A
So I need to understand that more, but we can see that, like before, here the cycles-per-op numbers are bad as well. So we spend more cycles when we contend on the messenger thread, I think, is what that says.
A
So my instinct here would be: okay, well, if this is the case, maybe if we study this, maybe if we understand why that's happening, we can even help these other situations that are good, right? Like maybe it's not that there's no contention, but that there's just much less contention in those scenarios.
D
I mean, I'm not surprised that 24 threads behaves badly; that's why we default to two threads per shard, right? Like, we looked at the ratio of work done in the different threads, and that was about the right number. So in this case you're running, like, you know, eight threads per shard, and the I/Os are small, so there's not a lot of work to be done per I/O, and they're feeding into the same...
D
Probably the same number of BlueStore threads, and, like, the messenger thread still needs to write out the same number, or the messenger needs to write out the same number of replies. This is just way too many threads for the amount of work that the rest of the system can drive. Is that surprising?
D
I mean, the other thing is, like, you know, you've ripped a lot of the work out that it's doing, so I'm not surprised that they look a little bit different. Like, you know, the PG log is a lot of work, and when you're not doing 4K I/Os then there are more kinds of work. And so maybe there's a lesson to be learned here, but I think the lesson, what I'm seeing here, is: yeah, cache line contention, or lock contention and cache line bouncing, is bad, so you should try to avoid that.
A
One of the things that I've kind of toyed with over the years is looking at whether or not a lockless queue implementation might be worth trying versus the sharded op queue.
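(To illustrate the structure being discussed, a toy Python model of a sharded op queue: ops hash by PG to one of N shards, each with its own lock and worker threads, so threads only contend within a shard. The real implementation is the OSD's C++ sharded op queue, and a lockless variant would replace the per-shard lock with a lock-free structure.)

    import threading
    from collections import deque

    class ShardedOpQueue:
        # Toy model: one deque + condition variable per shard; worker threads
        # only contend with the other threads assigned to the same shard.
        def __init__(self, num_shards, threads_per_shard, handle_op):
            self.shards = [{"cv": threading.Condition(), "ops": deque()}
                           for _ in range(num_shards)]
            self.handle_op = handle_op
            for sid in range(num_shards):
                for _ in range(threads_per_shard):
                    threading.Thread(target=self._worker, args=(sid,),
                                     daemon=True).start()

        def enqueue(self, pg_id, op):
            # Hash the PG to a shard so ops for one PG stay ordered on one shard.
            shard = self.shards[pg_id % len(self.shards)]
            with shard["cv"]:
                shard["ops"].append(op)
                shard["cv"].notify()

        def _worker(self, sid):
            shard = self.shards[sid]
            while True:
                with shard["cv"]:
                    while not shard["ops"]:
                        shard["cv"].wait()
                    op = shard["ops"].popleft()
                self.handle_op(op)   # do the actual work outside the shard lock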
C
How does it change between shards and threads?
A
I don't have that yet; it's on my list to do. I will have it, but I just don't have the numbers at this point.
A
I agree with you. That's one of the things I also want to look at: when we put the PG log back in, and the write-ahead log back in (although the PG log is more interesting, because that's much more impactful on the tp_osd_tp threads and the messenger thread, as well as primarily limiting the kv_sync thread), what happens, right? Like, how does that affect contention? So yeah, I agree with you; it'd be very interesting to get those numbers.
B
I was thinking about something I was looking at this week, together with Josh Durgin: snap delete, which is the equivalent of the PG log, right? It's something that we control and where we initiate I/O. When we delete PG log entries, we always do 100 of them at a time, because we want to save on write-ahead log accesses, right? If we do 100 PG log deletes in a single transaction, then we create a single write-ahead log entry. Yes, and when we do snap delete, which is essentially the same thing, we have the full control.
B
Because it seems that somebody put a lot of effort into creating the code to build up transactions, but then they set the max number to two, which probably means somebody was looking at it; otherwise I can't imagine why this happened, because it's much harder to write the code if you're batching a transaction. But you have it now, and moving from two to twenty to one hundred, like the PG log does, is just changing one counter.
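(A minimal sketch of the batching idea, grouping many deletes into one transaction so they share a single write-ahead log commit, using a made-up transaction API. The real code paths are the OSD's PG log trim and snap trim logic; the limit of two that Gabby mentions sounds like it could be osd_pg_max_concurrent_snap_trims, but that is a guess rather than something stated here.)

    # Made-up transaction/store API, used only to illustrate the batching.
    class Txn:
        def __init__(self):
            self.deletes = []
        def delete(self, key):
            self.deletes.append(key)

    class FakeStore:
        def __init__(self):
            self.wal_commits = 0
        def submit(self, txn):
            self.wal_commits += 1          # one submitted txn = one WAL commit

    def trim(store, keys, batch_size):
        # Group deletes so each batch shares one transaction and therefore
        # one write-ahead log commit, instead of one commit per delete.
        for i in range(0, len(keys), batch_size):
            txn = Txn()
            for key in keys[i:i + batch_size]:
                txn.delete(key)
            store.submit(txn)

    store = FakeStore()
    trim(store, [f"entry_{i}" for i in range(1000)], batch_size=100)
    print(store.wal_commits)               # 10 commits rather than 1000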
A
Yeah, I don't... yeah, I don't know, Gabby, why it was set to two. Which config was that? Go ahead.
B
The decision-making code is pretty heavy, because you have to work from the snap to the onode, but still, that's only CPU. I would expect that generating write-ahead log transactions would be much more expensive, especially because it's done synchronously, so you're waiting on the SSD.
A
The thing is that when you delete snapshots, that might happen... really, I mean, it can happen with the PG log too, but that isn't the typical behavior; they might be deleted far in the future.
B
Sorry, yeah, a little bit far in the future, but when you delete them you generate a lot of tombstones, and if we suspect that the delete cost stems from the tombstones, then snap deletion might have the same problem. And if we could fix one of them, then we can fix the other, which is the point Igor was trying to make: Igor's write-ahead log allows us to do the delete in a much cheaper way. Sorry, it's... right, the write-ahead log can make things much faster.
A
It does, but it does not yet fix the problem of passing deleted PG log entries through into the database, right? Like, it doesn't fix the problem where, if you have a short-lived entry, like a PG log update, you will still have a lot of write amplification in the database if you make the memtable small. So it doesn't fix that yet. Yes, and we can do that with the PG log, but I don't think we can do that with snapshot deletion, right?
A
I suppose you'd have some wasted space, right? Like if you keep 100 snapshots around.
A
Yeah, you know, I suspect that maybe the rationale behind that was to not hit the database very hard. Maybe the thought was that if you only do two at once, then the impact would be less.
A
You could weave in other stuff, right? Like, if you don't, you know, hammer the database with these big updates. But I don't think it's true. I think it may be that...
A
Yeah, I agree with you, I agree with you. I think it actually would be much better behavior; I'm just trying to, you know, guess at why the thought was that you'd maybe do small ones, a larger number of small ones. Maybe the thought was that it would interleave with I/O better.
A
Yeah, Gabby, I think that's a good idea. I think you should definitely try it and see what happens.
A
Are you... is this related to the RBD mirroring work that Paul has been looking at?
A
I think the thought is that, when you're doing small changes to big objects, it requires fairly significant write amplification to, you know, deal with objects as a whole.
B
How far away are we from Igor's code being ready to be merged into the tree?
A
Well, you know, it works pretty well. There are some bugs, but overall it worked surprisingly well. But, you know, it would need to go through the full test suite, and there'd need to be...
B
A
Oh, well, it's in the etherpad. Let me pull it up; one second.
A
I'll put it in the chat window, and I'll put it in that other shared window as well. So, like, here we've got... you see the stock numbers and then these other headings in the different columns.
A
Stock, that's two 64-megabyte memtables. Then there's shrinking the memtables, so we get the performance advantage, but with the high write amplification. Okay, and BlueWAL...
A
Yes, that's Igor's; that's a two-gigabyte write-ahead log, and we flush when it's half full.
B
A
Yeah, exactly, exactly, but the gain was really big, even bigger than in the past sometimes, right? Yeah, it's like...
A
Yeah, so it's a really big gain, but we can still get some gain with low write amplification: just by using his write-ahead log and the existing memtable sizes, we can still get a nice benefit. It's almost as good as stock with small memtables.
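(For reference, the memtable knobs being compared map to RocksDB's write_buffer_size and max_write_buffer_number, which BlueStore passes through the bluestore_rocksdb_options string; a sketch with the stock values from the discussion and a placeholder "small memtables" setting, not the value actually used in the tests:)

    stock = {"write_buffer_size": 64 * 1024 * 1024,   # two 64 MiB memtables
             "max_write_buffer_number": 2}
    small_memtables = {"write_buffer_size": 8 * 1024 * 1024,   # placeholder size
                       "max_write_buffer_number": 2}

    def to_rocksdb_options(opts):
        # Render as the comma-separated string bluestore_rocksdb_options expects.
        return ",".join(f"{k}={v}" for k, v in opts.items())

    print(to_rocksdb_options(stock))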
A
Yep, yep, and that was what I was trying to convince Igor of: that the next thing we should do is try to see if we could just keep the PG log entries in his write-ahead log, or keep track of them, and see if there's some way we can do a better job of keeping them out of the database for longer.
A
So then don't even send them to RocksDB unless they get too old.
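(A minimal sketch of that idea, with entirely hypothetical names: short-lived PG log entries stay in a WAL-backed buffer and only the ones that age out ever reach the database, so entries trimmed while still buffered never generate database tombstones. This does not reflect the actual BlueStore WAL code.)

    from collections import OrderedDict

    class BufferedPGLog:
        def __init__(self, db, max_buffered=100):
            self.db = db                       # stand-in for RocksDB (a dict)
            self.max_buffered = max_buffered
            self.pending = OrderedDict()       # version -> entry, WAL-backed

        def append(self, version, entry):
            self.pending[version] = entry
            while len(self.pending) > self.max_buffered:
                # Only entries that get "too old" are written to the database.
                old_version, old_entry = self.pending.popitem(last=False)
                self.db[old_version] = old_entry

        def trim(self, upto_version):
            # Entries trimmed while still buffered never touch the database.
            for v in [v for v in self.pending if v <= upto_version]:
                del self.pending[v]
            for v in [v for v in self.db if v <= upto_version]:
                del self.db[v]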
A
So yeah, it looks like there's a lot of potential here. It's kind of interesting, too, now that I look at it: you know, this number that we saw, 120,000, that's quite high even compared to, you know, some of these numbers, right? That was like what we have here with no write-ahead log and...
A
Yeah, yeah, two, sure. Eight shards and two threads per shard, so 16 total threads; four shards and sixteen threads, yeah, yeah. I'm saying that in that other test it was using eight shards and two threads per shard, yeah.
A
So maybe we even get a little bit more out of it if we tweak that a little bit. Oh, and three messenger threads as well, instead of two, so there are some differences there. But yeah, definitely, we're now starting to get to the point where we're really hitting kind of the OSD limitations in the classic OSD, I think, which is why I started wanting to look at this, to see...
A
B
A
A
Yeah
we
and
I
do
want
to
see
to
look
into
this
contention
between
the
messenger
threads
and
the
the
tpo's
dtp
threads.
That
I
think
is
worth
understanding
and
maybe
there's
something
that
we
can
do
there
that
helps
and
alleviates
it
a
little
bit.
So
that's
that's
something.
I'm
gonna
look
at,
I
think,
but
but
beyond
that,
though
I
mean
there's
so
many
things,
you
could
do
that
that
they're
just
not
worth
the
effort
right,
yeah.
B
Yeah, maybe the snap delete path is the one thing also to investigate. I don't know; maybe it was already investigated, because somebody made very elaborate code to allow for multiple object deletions in a transaction but then set the number to two. I'm assuming it was done for a reason, but maybe it was started and then somebody meant to go further with it and they didn't do the testing. So I don't know.
A
So yeah, I would very much encourage you to play with it and see what the behavior is like when you change it. A benchmark for snapshot deletions? No, not that I know of. I don't think we have any benchmarks that really look at the behavior of snapshot deletions. We have the omap stuff that I wrote, which, you know, might give you some idea of iterator behavior or performance, or, you know, RocksDB performance in general, but not anything specific to snapshots.
B
A
But it's definitely been something that people have talked about a lot, right? Like, RBD mirroring was not the first time that snapshot deletion overhead has come up.
F
I believe it's also interesting to understand whether the snapshot delete performance relates to the difference between the deleted snapshot and the volume. So, when you keep doing I/Os and then do the deletion, you get more and more differences between them. It's also interesting, I think, to try to understand the impact without I/O, with very few I/Os, to see whether snapshot deletion, when you don't do I/Os and the snapshot is very close to the volume, is faster or not.
F
From what I know, that's the ultimate performance requirement for snapshot deletion: the delete should be proportional in time to the difference, to the amount that you actually need to delete, because if you eventually have two snapshots and a volume which are identical, the delete should be almost instantaneous, since all you do, or all you should do, is play with metadata.
F
You don't need to do any work on the data, because all the records appear twice, all the data is shared, so you don't need to delete any data, and it should be pretty fast. That's the ultimate performance requirement for snapshot deletion. So if you're already testing this, I think it's worth testing along this dimension as well.
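(A minimal sketch of that kind of test: snapshot, dirty a varying amount of data, then time the snapshot removal, assuming an existing test image and the rbd CLI. The exact flag spellings are from memory, so treat the invocation as approximate.)

    import subprocess
    import time

    IMAGE = "rbdpool/benchimg"   # hypothetical existing image

    def snap_rm_seconds(dirty_mb):
        subprocess.run(["rbd", "snap", "create", f"{IMAGE}@s1"], check=True)
        if dirty_mb:
            # Dirty data after the snapshot so the snapshot and head diverge.
            subprocess.run(["rbd", "bench", "--io-type", "write",
                            "--io-size", "4K", "--io-pattern", "rand",
                            "--io-total", f"{dirty_mb}M", IMAGE], check=True)
        t0 = time.time()
        subprocess.run(["rbd", "snap", "rm", f"{IMAGE}@s1"], check=True)
        return time.time() - t0

    for mb in (0, 100, 1000):
        print(f"{mb} MB dirtied -> snap rm took {snap_rm_seconds(mb):.1f}s")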
A
But like I was saying before, though, I think in the past what we've seen has been that this is really heavily tied to iteration behavior, iteration performance, especially once tombstones end up in the database. Like, I don't remember the snapshot deletion code very well, but I would especially pay attention to anywhere it looks like we're doing iteration during deletions, while we're deleting entries and creating tombstones in the database, because if we're not also doing writes into the database, we don't trigger compaction.
A
If you're deleting entries from RocksDB while you're doing iteration, so, like, if you're iterating and deleting things, that creates tombstones. And the pattern that we saw before is that we would iterate to a point, we'd delete some stuff, and then we would re-iterate and start over in the iteration. But now we're iterating over the tombstones that were just created, and that was really, really bad. We saw that RocksDB performed extremely badly when that happened, until a new compaction happened that got rid of the tombstones.
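(A toy illustration of that pattern, using an in-memory stand-in rather than RocksDB: restarting the scan from the beginning after each batch of deletes re-walks every tombstone created so far, while resuming from the last key does not.)

    import itertools

    # Toy stand-in for an LSM store: deletes leave tombstones that scans still
    # have to walk past until a compaction would remove them.
    class ToyStore:
        def __init__(self, keys):
            self.entries = {k: "live" for k in keys}
            self.keys_walked = 0

        def scan(self, start=""):
            for k in sorted(self.entries):
                if k < start:
                    continue
                self.keys_walked += 1              # tombstones get walked too
                if self.entries[k] != "tombstone":
                    yield k

        def delete(self, k):
            self.entries[k] = "tombstone"

    def trim_all(store, batch, restart_each_batch):
        start = ""
        while True:
            keys = list(itertools.islice(store.scan(start), batch))
            if not keys:
                return
            for k in keys:
                store.delete(k)
            # Anti-pattern: start over and re-walk all the tombstones just made.
            start = "" if restart_each_batch else keys[-1]

    for restart in (True, False):
        s = ToyStore([f"k{i:04d}" for i in range(1000)])
        trim_all(s, batch=10, restart_each_batch=restart)
        print("restart" if restart else "resume", "walked", s.keys_walked, "keys")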
A
And are we iterating from a certain point over and over again, or do we continue from an old point? No...
B
A
It's possible that this might have gotten changed at some point. I vaguely recall that maybe someone fixed it, where maybe we used to iterate from the same point over and over again, right over all the old tombstones, and then someone made it keep track and iterate from the new point.
B
Actually,
you
know
what
maybe
the
problem
is
that,
while
we
iterate
over
oh
map,
looking
for
snaps,
this
thing
is
not
locked
so
between
every
query
that
we
do.
You
could
have
another
insert
yeah
force
the
iterator
to
restart.
B
Okay, I mean, the code itself, of course, is doing a linear iteration. But if every time somebody does a put operation the iteration is being reset transparently, then that's going to be a very bad thing.
B
That might explain why we see this. But actually, if that's the case, I would try to lock the memtable and grab as much as I can in a single iteration, because right now what we do is take the omap, find the snap, then go and jump to the onode and find the associated onode. So I wouldn't do it that way...
B
...under the lock, while blocking everybody else from doing things. So you don't need to... I don't know what the number is, what the sweet spot is, but probably a better thing for RocksDB to do would be to allow things to enter the memtable but not be moved to the right location: just let things go to the write-ahead log and sit somewhere in the memtable, but allow everybody else, just allow the scan, to keep on moving. So this thing is not going to be... is it in the right place?
A
All right, well then, have a great week, everybody, and see you next week. Bye, thanks for coming, bye.