From YouTube: 2017-NOV-01 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
A
So, welcome everybody, welcome to the Ceph Developer Monthly meeting for November. I was just checking the planning document, and we have a couple of topics to discuss tonight, or today; it's the APAC-friendly time slot, so it's night for those of us on this side of the planet and daytime for Brad and the other people in APAC. So Greg, would you like to start, or Mark, or Josh, however you prefer; we can keep the order of the list.
C
So I'm going to post it in the chat to remind everyone of what this was, but for those of you here who didn't see my post to ceph-devel, the gist of it is that when we do small random writes, especially in BlueStore but potentially FileStore too, we end up generating all of these insertion and deletion events for the PG log and dup ops, and maybe PG info to a lesser extent, but definitely the PG log and dup ops. We do all of these key inserts and deletes, and on spinning disks it doesn't really matter, because we're so slow at random writes anyway that the extra work, while not trivial, is a lot less of a problem. It's when we run on flash that this is kind of a big deal, and it's not just the write amplification overhead; that's part of it, but that's not necessarily the big thing.

It's that there are CRC checks on all of these keys coming in, we're recomputing the bloom filters, and there's potentially other maintenance work just walking through the skiplists and memtables and looping in RocksDB. So there's all this extra memory overhead and CPU overhead and just general mayhem going on dealing with all these keys. I talked to Josh last week about this, and my naive first take on it was: well, you can't just do a straight ring buffer, you have to do something more clever, like writing out sequentially allocated chunks of space that you then associate with each PG, and then, once a chunk falls out of scope for every single PG, you can get rid of it. You could do something like that, right? It's kind of like the RocksDB write-ahead log, the way that works, except that you just keep appending until you can finally get rid of stuff and recycle it. So there could potentially be big space amplification, but the benefit would be that you'd only ever write once, you'd never do any kind of compaction on it, and you could bound it based on how many PGs you have and how big a log you want to keep around. So... go ahead.
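To make the idea above concrete, here is a minimal sketch of that bookkeeping: per-PG log entries appended to large, sequentially allocated segments, where a segment is recycled only once every PG that wrote into it has trimmed past its entries. The names (PGLogArena, Segment) and structure are hypothetical illustrations, not BlueStore code.

```cpp
// Hypothetical sketch of the "sequentially allocated chunks per PG" idea.
// Entries are only ever appended; a segment is reclaimed once no PG still
// has live entries in it (space amplification until that happens).
#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <map>

struct Segment {
  uint64_t id;
  size_t used = 0;                      // bytes appended so far
  std::map<int, uint64_t> live_per_pg;  // pg -> number of live entries
};

class PGLogArena {
  static constexpr size_t kSegmentSize = 4 * 1024 * 1024;  // 4 MiB chunks
  std::deque<Segment> segments;
  uint64_t next_id = 0;

  Segment& tail(size_t need) {
    if (segments.empty() || segments.back().used + need > kSegmentSize)
      segments.push_back(Segment{next_id++});
    return segments.back();
  }

 public:
  // Append one log entry for a PG; returns the segment it landed in.
  uint64_t append(int pg, size_t entry_bytes) {
    Segment& s = tail(entry_bytes);
    s.used += entry_bytes;
    s.live_per_pg[pg]++;
    return s.id;
  }

  // PG trims `n` entries from segment `seg`; old segments are recycled once
  // every PG has trimmed past them. Write once, never compact.
  void trim(int pg, uint64_t seg, uint64_t n) {
    for (auto& s : segments) {
      if (s.id != seg) continue;
      uint64_t& live = s.live_per_pg[pg];
      live -= std::min(live, n);
      if (live == 0) s.live_per_pg.erase(pg);
      break;
    }
    while (!segments.empty() && segments.front().used > 0 &&
           segments.front().live_per_pg.empty())
      segments.pop_front();
  }

  size_t segment_count() const { return segments.size(); }
};

int main() {
  PGLogArena arena;
  uint64_t seg = arena.append(/*pg=*/1, 256);
  arena.append(/*pg=*/2, 256);
  arena.trim(1, seg, 1);
  arena.trim(2, seg, 1);  // last reference gone, segment recycled
  std::cout << "segments still held: " << arena.segment_count() << "\n";
}
```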
B
Sage, the big trade-off is that you'd now have to do an additional IO for every update operation, because you have to do an IO to append to the per-PG log or whatever, and then you have to do your transaction commit too, so that's two. Whereas if you put it in RocksDB, you have one IO, and then later, after you've done like a hundred or a thousand of these, you have one big write of the compacted stuff.
D
On the question of what the costs are: Mark, I see that you have some numbers about IO in the email you sent to the list, but I haven't spent a lot of time in RocksDB, so I can only sort of guess what they mean. You're also talking about CPU time here, which I don't see anything about in the written document, and I just don't have a good sense of what this costs us, because I don't have any context for what the numbers mean. So...
B
I don't know that there's a good number. The anecdotal information we have isn't super satisfying, but it's that the kv_sync thread, which is the thread that's actually submitting these, just doing the inserts into RocksDB, is basically saturating a core. So there's one CPU doing nothing but inserts, and when Adam was going through and profiling it, a lot of that time was CPU stalls on memory prefetches, because it's traversing this huge skiplist trying to find the end in memory.
C
Well yeah, that too, but even mid-term, don't we expect to have huge parallelism, like multiple areas on the flash where we can do IOs to multiple places? So if we can parallelize it, we can kind of get free I/O out of it, right? Exactly.
C
The other proposal here is that instead of doing something like this, we implement a ring buffer inside RocksDB itself. This was kind of Josh's idea: instead of having this in some new journal or whatever, you have, inside RocksDB, per PG, a ring buffer, and you just go through it and never insert or delete keys.
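As a rough illustration of what a per-PG ring inside the key-value store could look like: a bounded set of key slots per PG that are only ever overwritten in place, so the store never sees net inserts or deletes once the ring is warm. The key format and class below are invented for illustration; this is not the actual omap schema.

```cpp
// Illustrative only: a bounded ring of per-PG log keys that are overwritten
// in place, so the backing KV store never sees net key inserts or deletes
// once the ring is full. Key format and names are hypothetical.
#include <cstdint>
#include <iomanip>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

class PGLogRing {
  static constexpr uint64_t kSlots = 8;    // ring size per PG (toy value)
  std::map<std::string, std::string>& kv;  // stand-in for RocksDB
  std::string pg;
  uint64_t head = 0;                       // next slot to overwrite

  std::string key(uint64_t slot) const {
    std::ostringstream os;
    os << "pglog." << pg << "." << std::setw(4) << std::setfill('0') << slot;
    return os.str();
  }

 public:
  PGLogRing(std::map<std::string, std::string>& store, std::string pgid)
      : kv(store), pg(std::move(pgid)) {}

  // Appending a log entry is a plain overwrite of an existing key slot;
  // trimming is implicit because the oldest slot is simply reused.
  void append(const std::string& entry) {
    kv[key(head % kSlots)] = entry;
    ++head;
  }
};

int main() {
  std::map<std::string, std::string> store;
  PGLogRing ring(store, "1.2a");
  for (int i = 0; i < 20; ++i)
    ring.append("entry " + std::to_string(i));
  std::cout << "keys in store: " << store.size() << "\n";  // stays at 8
}
```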
B
It's basically like writing out an SST for every PG; that's what will happen, but they'll be smaller, I think. Actually, if RocksDB were tuned such that, as its first comparison, it checks whether the key you're inserting goes at the very end, like you have some hint that says "usually my inserts go at the very, very end of the column family", then it would be a single comparison.
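RocksDB does have a knob roughly along these lines, memtable insert-with-hint, where inserts that share a key prefix remember the previous insert position so successive appends skip most of the skiplist search. A hedged sketch of wiring it up follows; the prefix length and key format are assumptions for illustration, not a tested Ceph configuration.

```cpp
// Hedged sketch: enabling RocksDB's memtable insert-with-hint so that
// successive inserts sharing a key prefix (imagine one prefix per PG) reuse
// a cached position instead of re-walking the skiplist from the top.
// The 10-byte prefix length and key layout are assumptions for illustration.
#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/slice_transform.h>
#include <cassert>
#include <cstdio>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  // Keys sharing their first 10 bytes (e.g. "pglog.<pg>.") get a hint that
  // remembers where the previous insert landed in the memtable.
  options.memtable_insert_with_hint_prefix_extractor.reset(
      rocksdb::NewCappedPrefixTransform(10));

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/hint_demo", &db);
  assert(s.ok());

  // Appends to the tail of one prefix should now cost roughly one comparison
  // each instead of a full skiplist descent.
  for (int i = 0; i < 1000; ++i) {
    char key[32];
    snprintf(key, sizeof(key), "pglog.1.2a.%08d", i);
    db->Put(rocksdb::WriteOptions(), key, "entry");
  }
  delete db;
}
```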
C
Great. But one thing I did want to address that Greg said earlier: one thing that this does buy you is that you're not recomputing the filters for all of these short-lived keys that are coming in and then getting promptly deleted. It looked like, from what we saw recently, that was happening; we were spending a bunch of time computing the filters because of all of these.
B
It almost seems like the main factor that's making it expensive is that the inserts have to keep things sorted in memory, and so they have to do this log(n) search that takes all these memory misses; that's the main thing. If there's a way to have a hint or a cursor or a position, so that it doesn't have to repeat that for every write to the PG, that would be the...
B
Okay, so split it out from the omap; let's split it out from the rest of our omap, yeah, which I think ought to be fine, because it's all stuff that we never read, right? So we can basically tune the compaction so that there are, like, 100 level-zero files or something, I don't know; there should just be a way to figure out how to make that more efficient.
B
At least rarely, right? When it does its level-zero compaction of like 10 SSTs or whatever, almost always the keys have already been deleted; it's only the ones for the stale PGs, or the PGs that have very little traffic or whatever, that end up in level one. So it's a very cheap operation, but...
B
Probably, although maybe you do actually write, if we're talking about time-synchronized trimming. But whatever, set that aside for a minute and talk about the dup ops. That is something where a strict first-in, first-out... well yeah, a ring buffer is all you need, because you just want a window of history going back in time to eliminate dup ops. That's...
B
Because I wonder... the thing that's cool about the FIFO compaction mode or whatever is that, when it flushes the write buffer, it writes a new SST in level zero, and once there are enough of them it just deletes the oldest one; it just throws it away. So it's totally non-deterministic when things go away. You don't know; it's not like RocksDB tells you that it deleted these keys.
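For reference, FIFO is an existing RocksDB compaction style: flushed SSTs accumulate in level 0, and once the total size crosses a threshold the oldest files are simply dropped, with no merging and no per-key notification. Below is a hedged sketch of pointing a dedicated column family at it; the size cap and column family name are illustrative assumptions, not a proposed Ceph setting.

```cpp
// Hedged sketch: a separate "pglog" column family using RocksDB's FIFO
// compaction, where old SSTs are dropped wholesale instead of compacted.
// The 512 MiB cap and column family name are illustrative assumptions.
#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/table.h>
#include <cassert>
#include <vector>

int main() {
  rocksdb::Options db_opts;
  db_opts.create_if_missing = true;
  db_opts.create_missing_column_families = true;

  rocksdb::ColumnFamilyOptions pglog_opts;
  pglog_opts.compaction_style = rocksdb::kCompactionStyleFIFO;
  pglog_opts.compaction_options_fifo.max_table_files_size =
      512ull * 1024 * 1024;  // drop oldest SSTs beyond ~512 MiB total

  // Bloom filters buy little for keys we rarely point-read, so use a plain
  // block-based table with no filter policy.
  rocksdb::BlockBasedTableOptions table_opts;
  table_opts.filter_policy = nullptr;
  pglog_opts.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_opts));

  std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
      {rocksdb::kDefaultColumnFamilyName, rocksdb::ColumnFamilyOptions()},
      {"pglog", pglog_opts}};
  std::vector<rocksdb::ColumnFamilyHandle*> handles;
  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(db_opts, "/tmp/fifo_demo", cfs, &handles, &db);
  assert(s.ok());

  db->Put(rocksdb::WriteOptions(), handles[1], "pglog.1.2a.00000001", "entry");

  for (auto* h : handles) delete h;
  delete db;
}
```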
B
Actually, in order to see whether it helps, we would basically just comment out all the lines that delete the log entries. For the trimming, whenever there was a trim of the log, we would just not bother updating the database with the delete; we would just trim the in-memory part, but not bother deleting. Same with dup ops: we wouldn't bother pruning the old dup ops. So I guess we'd have to change a little bit of code, but that's just a few lines of commenting out.
B
I guess so. I think the first thing to try is commenting out the deletes for the trim, so the PG log and dup op keys, and setting that column family to FIFO with bloom filters off, and just see what happens, see how much of a difference that makes. But I think the real win is probably to figure out whether there's a way so that, if we know the same writer is doing successive inserts, it can have a cursor or something, or a hint, kind of like...
B
...3 a.m. or something, okay.
B
Well, I think that, as a practical matter, that's our only real option for BlueStore. I think that once we do a bunch of the Seastar stuff and we've sort of improved the rest of the stack, and BlueStore isn't the slowest bit anymore, then I think we're actually going to have to write just a different thing, something written for pure NVMe, and it's different, right?
B
It's going to be log-structured, it sort of does its own cleaning, it doesn't encode and decode keys; the in-memory representation is the exact same thing as the on-disk representation, so it could be mmapped or just copied without doing any marshalling or whatever. Yeah, I think that's what we have to do, down the road: do something even more ambitious.
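A toy illustration of the property being described here, where the in-memory representation is the on-disk representation, so records can be mmapped or memcpy'd with no encode/decode step. The field names and sizes are made up; this is not actual Ceph code.

```cpp
// Toy illustration only: a record whose in-memory layout is its on-disk
// layout, so it can be mmapped or memcpy'd with no marshalling step.
// Field names and sizes are invented for illustration.
#include <cstdint>
#include <cstring>
#include <type_traits>

#pragma pack(push, 1)
struct LogEntryOnDisk {
  uint64_t version;  // log entry version
  uint64_t offset;   // object offset touched by the write
  uint32_t length;   // bytes written
  uint32_t crc;      // checksum of the payload
};
#pragma pack(pop)

// Trivially copyable + fixed layout means the struct can be written to and
// read back from a raw block device (or an mmapped region) byte-for-byte.
static_assert(std::is_trivially_copyable<LogEntryOnDisk>::value, "");
static_assert(sizeof(LogEntryOnDisk) == 24, "no padding, stable layout");

int main() {
  unsigned char disk_block[4096] = {};           // stand-in for a device block
  LogEntryOnDisk e{42, 8192, 4096, 0xdeadbeef};

  std::memcpy(disk_block, &e, sizeof(e));        // "write": no marshalling
  LogEntryOnDisk back;
  std::memcpy(&back, disk_block, sizeof(back));  // "read": no unmarshalling
  return back.version == e.version ? 0 : 1;
}
```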
B
Your new raw-block OSD backend? You want to write another one? Yeah, pretty much. Well, no, I want to wait; I want to wait a year and a half and then do it. But I think the thing that's always going to kill us in BlueStore is the fact that it's using a key-value database with sorted keys, and everything has to be encoded and marshalled, and it gets copied like 12 times. I think it's just not going to...
B
Okay, so a brief update on the snap trimming. We don't have to go into this in depth, but I just want to bring everyone up to speed. I realized that the fundamental problem is that for clones that exist, there's the clone with a name, which is one of the snaps it was originally defined for, and it has a set of snap IDs that it exists for, is defined for, and SnapMapper indexes those. So whenever we delete one of those snapshots that is referenced by the clone, we go and we clean it up, and we update that.
B
We update the clone metadata, we update the snapshot, and everything's fine. The problem is the snapsets for objects that haven't been cloned yet, or for updates that are recent enough that they haven't resulted in a clone: those also have a set of snaps, in the snapset, and we don't touch every head to see whether it is defined for those snaps, and we don't index the head for the snaps that haven't been cloned yet, and so those don't get updated.
B
We just use the one on the request when we process it, but we can get rid of that, because if the snap sequence that's already on the head is newer than the one that we submitted, we're not going to clone as a result, because the object on disk is already newer. So it's not old enough to result in a new clone, and so we don't need to know what the snaps are, because we're not going to clone and have to use those snaps.
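The check being described, schematically: compare the snap sequence already recorded with the head object against the sequence in the client's SnapContext; only a strictly newer client context can produce a new clone, so only then do the individual snap IDs matter. This is a simplified sketch, not the actual PrimaryLogPG logic.

```cpp
// Schematic sketch of the reasoning above, not actual Ceph OSD code.
// If the snapset already on the head object has seen a snap sequence at
// least as new as the client's SnapContext, this write cannot create a new
// clone, so the list of snap IDs in the request is never consulted.
#include <cstdint>
#include <vector>

struct SnapContextLite {        // what the client sends with the write
  uint64_t seq;                 // newest snapshot the client knows about
  std::vector<uint64_t> snaps;  // snap IDs, newest first
};

struct SnapSetLite {            // stored with the head object
  uint64_t seq;                 // newest snapshot already accounted for
};

bool write_needs_clone(const SnapSetLite& on_disk,
                       const SnapContextLite& from_client) {
  // Only a strictly newer client context means there is a snapshot the head
  // has not been cloned for yet; otherwise the snaps vector can be ignored.
  return from_client.seq > on_disk.seq;
}

int main() {
  SnapSetLite head{10};
  SnapContextLite ctx{9, {9, 8}};
  return write_needs_clone(head, ctx) ? 1 : 0;  // 0: no clone, snaps unused
}
```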
B
And so all of that raw CPU wastage that was happening before, the stuff we've been optimizing, that's all gone. But the removed_snaps set, with all the snaps deleted for all time, is still in the OSDMap, in the pg_pool_t, for the time being. So what I'm thinking is that we do that first bit, so the pruning is more efficient and we make the problem a little bit better, and then hopefully, if we can buckle down and do it, we can change the way...
B
...the cache tiering does the snaps on flushes and promotes, so that it doesn't rely on that. I think if we just make it so that when we flush a clone, instead of flushing to the head in such a way that the next write will cause it to clone, we flush directly to the clone, or flush it and force a clone so that it happens immediately. I think if we do it that way, then we can get rid of the other thing.
B
The other thing we could do is sort of the annoying thing where every OSD keeps this "all snaps deleted for all time" data structure, sort of globally. I just don't like that; I just don't like it, because it still feels sort of broken. It might be strictly better than what we have right now, I guess, but it's not like it's the best thing.
B
Anyway, yeah, that's that, and that's the first bit. I missed something with the way that I was trying to get rid of some variable or whatever; I'm still debugging that stupid thing, but hopefully it will get in. And I'm not sure if I'm going to have it in me to deal with the caching thing, just because I've had a ton of messing with cache tiering at this stage, but we might have to, because, yes...
B
That's what's happening now, right, and then... so that's fine; let's say that's the efficient way to do it. The problem is, it relies on the base tier remembering the snapset on the head, or actually, I guess it's on the next write, I can't remember exactly, but it basically needs to know those snaps. That's one of the two places where the snaps vector and the snapset are used.
B
Why? Because I think if we change it so that when it flushes to a clone, it sends an operation that actually makes it become the clone that is in the cache tier, with the same exact information, then we don't have any dependence on that; we don't care what the snapset says there, we just make the same clone. Yeah, it's just... if...
B
I mean, yeah, it would be a flag that's basically "this is a clone", and we would have the fields in the SnapContext somehow map directly to what the clone should look like. It's a little bit goofy, but it would work; it would get rid of that dependency. And then the other half is when you have the...
B
Yeah, I can't remember the details, but there's another thing there. I think they could both be fixed by making cache tiering a little bit more low-level than it is already, and the payoff would be that then we can get rid of the snaps vector and the snapset, and all of these sorts of headaches go away.
F
There's a case in there where, when the cache tier is flushing down to the base tier, the base tier doesn't accept the cache tier's snaps and it kind of builds its own snaps, and then on promotion back to the cache tier it has to merge back in what the base tier really says is the current real state...
B
So if we flush the oldest clone and then evict it and then promote it again, when we try to read that clone we're actually reading from the head in the base tier, and so we're again relying on the base tier's set of snaps stored on the head, in the snapset, in order to know, when we promote the clone, what the set of snaps is that it should be defined for. That's why...
B
Anyway, so I think it could be fixed, but it'll be gross. My sort of long-term goal, in my head, is to make the new tiering have enough functional parity with cache tiering that we can retire cache tiering, along with the ugliness that comes from not actually knowing whether the object exists or not when you're doing the I/O.
B
I mean, we could, I mean I would, but since it's in pieces you'd still have to deal with the idea that only one piece worked. But I think, again, if you would just go directly to the clone, instead of going to the head and implicitly making it so that the next write would cause the clone that we want, we just write straight to the clone, or have that happen with the flush; I think that solves that problem.
B
Well, right: say you have a thousand objects and you create a snapshot on the pool; then, when you write to them, you have to examine those thousand objects. And once you create a second snapshot and write to half of them, then half of them will be in the first snapshot.
D
It may be that that is expensive enough that we don't want to do it, but I also like not having to make protocol changes. Yeah.
B
Well, I mean, in this case the write-back would work both ways, right? And I think the long-term plan is to get rid of, sorry, cache tiering, and so this would be a little bit more ugliness and debt on a thing that eventually we're getting rid of anyway, and the upside is that the thing that we're going to...
B
Yeah, that's comparatively cheap. It's just frustrating: when I realized that the snaps field was in the snapset, there's all this work being done to keep it up to date, and it was almost never used, and when it was used it was totally unclear why it was being used. Whatever. I mean, you're saying it's cheap...
D
So we've been talking for many years about how the multiple-thread-pool model of the OSD, where we hand off every operation between a bunch of different threads, which possibly execute on different cores, with mutexes across everything, is just wildly unsuited to NVMe and fast storage. Which is not surprising, since it was written for running on hard drives that do 100 ops a second. And we're at the point where we think we're actually ready to do the work to change that, because we think we have to. So I've been doing some research into sort of the Seastar model, and in the past Adam and Casey have done some actual work with Seastar.
D
They sent an email to ceph-devel about a week ago, titled "fun with seastar", and Haomai, who is still around, responded to that and asked: are we really going to put in the work to rewrite all the code? And the answer is yes, we're going to refactor all of the code, you know, eventually.
D
What we're thinking is that Adam and Casey are interested in doing a refactor where they take sort of the Seastar futures bits, because as the framework exists right now, it wants to own the world, and that means both the program architecture, it has its own option-parsing framework and everything else, and also basically the hardware. So that's not going to work for us.
D
We'll refactor the OSD code so that we can run just the normal IO path inside of Seastar, and then pass it off into BlueStore or FileStore at the bottom, and receive it from the messenger on top, and hopefully that'll hook in and exercise enough of the framework to sort of show that it's useful, and then we can start integrating more stuff. The messenger will be the next one.
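For anyone who hasn't looked at Seastar: the piece being adopted first is its futures and continuations model, where each stage of an operation returns a future and the next stage is chained onto it, all on one core with no locks or cross-thread handoffs. A minimal, hedged sketch of that style follows; the handle_op pipeline is invented for illustration, and header paths or build details may differ between Seastar versions.

```cpp
// Minimal illustration of the Seastar futures style being discussed; the
// pipeline below (decode_op -> do_io -> reply) is invented for illustration
// and is not Ceph code.
#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>
#include <iostream>

// Each stage returns a future; continuations run on the same core (shard),
// so no mutexes or thread handoffs are needed between stages.
seastar::future<int> decode_op() {
  return seastar::make_ready_future<int>(42);      // pretend-parsed op id
}

seastar::future<int> do_io(int op) {
  return seastar::make_ready_future<int>(op * 2);  // pretend backend result
}

seastar::future<> handle_op() {
  return decode_op()
      .then([](int op) { return do_io(op); })
      .then([](int result) {
        std::cout << "op completed, result=" << result << "\n";
        return seastar::make_ready_future<>();
      });
}

int main(int argc, char** argv) {
  seastar::app_template app;
  // app.run starts the reactor and blocks until the returned future resolves.
  return app.run(argc, argv, [] { return handle_op(); });
}
```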
D
Actually, although I'm not super well-informed on how the async messenger is set up these days, I assume, since it has a DPDK backend and that's part of Seastar anyway, and it's got its thread pools and stuff, that it won't be too hard to put in. But we'll see, and then, you know, gradually spread out to the other parts of the code. And that's got two big benefits.
D
So that is the thing we have right now. Unfortunately the timing is a little weird: I was at a conference last week and I'm at another one next week, and Josh is on vacation and has been doing a lot of very customer-specific stuff. But we hope to more or less sequester ourselves in rooms in December and after the holidays and make this happen, or make the core part of it happen; all of the rest of the OSD will take much, much longer.
D
At least we have some nice barriers to that, so, you know, BlueStore can do whatever it wants underneath us, and as long as we're behaving, then we should see some benefits. So yeah, the initial thing will very much just be the sort of read and write IO path, and then handing down to BlueStore, which will do whatever it does, because it's got RocksDB and everything else. And immediately, imagine, like, the...
D
As I said, we're just doing the I/O path, which we would need to change anyway, and once we've done it, then we will probably actually get more collective parallelism, because we aren't just trying to make it up by having a bunch of threads; we'll actually, you know, partition the objects across queues.
B
Because it's just amazing: when does the PG log update happen? When does the OpContext pointer get set to the...? It's just gross, especially when you look at half of it being reused by the cache tiering code. I think understanding (a) what happens, and (b) trying to consolidate that so there's a much clearer beginning, middle, and end, that's going to make it better that way.