From YouTube: Ceph Performance Meeting 2023-01-12
Description
Join us weekly for the Ceph Performance meeting: https://ceph.io/en/community/meetups
Ceph website: https://ceph.io
Ceph blog: https://ceph.io/en/news/blog/
Contribute to Ceph: https://ceph.io/en/developers/contrib...
What is Ceph: https://ceph.io/en/discover/
A
So there was a new pull request for high-priority operations, and then, well, this PR from Corey about setting RocksDB iterator bounds for collection_list. Igor, I think you reviewed that? I have been just incredibly busy this past week, so I haven't looked at it yet. Did it look decent to you?
A
Okay, we also talked last week about, well, either instead of or in addition to this, upgrading to the newest version of RocksDB, which has some improvements for DeleteRange.
A
With regards specifically to memtable flushing, the hope is maybe that with the PR that I merged earlier in December, for letting you do deletion of tombstones during iteration, enabling that RocksDB option, and then potentially this improvement in the new version of RocksDB for delete-range tombstones in memtables, we might be able to start using DeleteRange more aggressively again, and maybe that also helps.
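The trade-off being discussed, where many point tombstones slow an iterator down while a single range-delete tombstone can cover the same span cheaply, can be sketched with a toy model. This is only an illustration of the idea, not RocksDB's actual implementation; the step counts stand in for internal key visits.

```python
import bisect

def iterate_with_point_tombstones(keys, tombstones):
    """Toy model: the iterator steps over every point tombstone
    individually before reaching the next live key."""
    steps, live = 0, []
    dead = set(tombstones)
    for k in keys:
        steps += 1                  # one internal step per entry seen
        if k not in dead:
            live.append(k)
    return live, steps

def iterate_with_range_tombstone(keys, lo, hi):
    """Toy model: a single range tombstone [lo, hi) lets the iterator
    seek past the whole dead span instead of visiting each entry."""
    steps = 1                       # one check against the range tombstone
    start = bisect.bisect_left(keys, lo)
    end = bisect.bisect_left(keys, hi)
    live = keys[:start] + keys[end:]
    steps += len(live)              # only live keys are visited
    return live, steps

keys = [f"k{i:05d}" for i in range(10_000)]
dead = keys[100:9_900]              # most of the keyspace deleted

live_a, steps_a = iterate_with_point_tombstones(keys, dead)
live_b, steps_b = iterate_with_range_tombstone(keys, "k00100", "k09900")
print(steps_a, steps_b)             # point tombstones cost far more steps
```

Both paths return the same 200 live keys, but the point-tombstone walk touches every one of the 10,000 entries, which is the stall behavior described above.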
B
And yeah, I'd like to double-check that, using merge deletes there, that batch, I suppose. But from the information that I found, we still use an unbounded iterator in this list-keys function, which actually shows some stalls in my case, so I'm currently working on adding iterator bounds to it.
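The benefit of the bounded iterator being described can be illustrated with a small sketch. This is a toy model, not the actual Ceph or RocksDB code; RocksDB exposes the real knob as `ReadOptions::iterate_upper_bound`, and the entry counts here stand in for internal key visits.

```python
import bisect

# toy internal index: (user_key, is_tombstone), sorted by key
entries = [(f"p1/{i:05d}", False) for i in range(100)]
entries += [(f"p2/{i:05d}", True) for i in range(50_000)]   # deleted range
entries += [("p3/00000", False)]

def scan(entries, lower, upper=None):
    """Return live keys starting at `lower` plus the number of internal
    entries touched.  Without an upper bound, the final step must skip
    every tombstone before the next live key just to discover that the
    scan is finished."""
    touched, out = 0, []
    i = bisect.bisect_left(entries, (lower, False))
    while i < len(entries):
        key, dead = entries[i]
        touched += 1
        if upper is not None and key >= upper:
            break                   # bounded: stop before the tombstone run
        if not dead:
            if upper is None and not key.startswith(lower.split("/")[0]):
                break               # unbounded: only stops at a live key
            out.append(key)
        i += 1
    return out, touched

live_u, touched_u = scan(entries, "p1/")            # unbounded iterator
live_b, touched_b = scan(entries, "p1/", "p2/")     # bounded iterator
print(touched_u, touched_b)
```

Both scans list the same 100 live keys, but the unbounded one wades through all 50,000 tombstones after the last live key, which is where the stalls come from.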
B
Generally, yeah, I'd like to move to range deletes completely; we'll see how it works.
A
Yeah, I'm still a little nervous about range-delete tombstones in SST files. It sounds like maybe they've improved the memtable flushing behavior, but perhaps, if this other option that we're looking at counts all of the entries in the range delete as part of the criteria for deciding when to do a compaction during iteration, maybe we don't need to worry about it as much.
A
Maybe it'll still be fine. I'm still not clear on just how much more overhead we deal with when iterating over a range-delete tombstone versus a normal deletion tombstone, but it seems like maybe it's more overhead, so we'll just have to kind of see what happens, I guess.
D
I did do some testing with flushing a range-delete tombstone to an SST file with the latest version of RocksDB, and it did seem to also not iterate over the tombstone at all in the SST-file case as well; okay, or at least it didn't have a performance impact. I need to look more to make sure it wasn't just compacted out in that brief time, but I think it looked really good from what I saw with brief testing of it. Okay.
D
For this particular thing, I will say, though, that in this case, the case that my PR is addressing, where we're removing a PG, we still need to have individual deletes as well, because we need to delete individual objects one at a time in order to clean up the actual data allocations and stuff. And then I...
D
...think, at least the working theory is, that we could come back and put a range delete in as well, so that the iteration-speed issues, when we're doing the final check to make sure that key range is empty, are not a problem. Although I guess we don't even really need to iterate over it again if we end up putting a delete-range tombstone there, because at that point we know that there's nothing in that key range.
D
Yes, and she said that maybe my earlier proposal, that we could do both with a delete range afterwards, doesn't make sense, because we would need to know that the key space was empty before we created the delete-range tombstone; otherwise we wouldn't be sure whether new objects had been created anyway. So maybe that is the only approach to actually using delete range for this particular problem.
B
Yeah, but there, well, I tried to implement a sort of logic in the engine which should...
B
Actually, I don't know which is the best approach so far.
D
By the way, for the sake of discussion, there is one other thing that I'll bring up related to that, and I just posted a link to a comment on another pull request. Basically, what I think we're going to do next week to resolve our current issue with the cluster that kind of started the discussion is make use of the RocksDB deadline parameter to essentially detect whether or not we have too many tombstones.
D
The iteration is taking too long, and so I've created a patch version for us, for Pacific, to add a deadline parameter. Actually, I'm not using deadline; I would like to, it's just not in the version of RocksDB in Pacific yet. I'm using max_skippable_internal_keys instead, because it's an older version of RocksDB, but the same principle applies. And I'm adding that as an optional parameter to the ObjectStore collection_list method, so that from the PG do_delete_work method I can basically set a time limit, or a limit on the number of tombstones that are iterated over, and then just add...
D
...a backoff retry of the actual PG do_delete_work there, so that I can basically just wait until the compactions happen, in combination with your tuning settings, Mark. I feel pretty good about that, based upon some initial testing on other clusters, but we'll see how that goes next week, and maybe it's something that could be considered for upstream in the future too.
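The backoff-and-retry shape being described can be sketched roughly as follows. This is a hypothetical simplification, not the actual Ceph code: the function names and limits are illustrative, and the `compact` callback stands in for the timed wait for RocksDB compaction.

```python
import itertools

def collection_list(entries, max_skippable_keys):
    """Toy stand-in for a listing call that gives up once it has
    skipped too many tombstones (RocksDB's max_skippable_internal_keys
    similarly makes the iterator return an incomplete status)."""
    skipped, live = 0, []
    for key, dead in entries:
        if dead:
            skipped += 1
            if skipped > max_skippable_keys:
                return None          # "incomplete": too many tombstones
        else:
            live.append(key)
    return live

def do_delete_work(entries, compact, max_skippable_keys=1000, max_retries=10):
    """Back off and retry: on an incomplete listing, wait for compaction
    and try again instead of stalling inside the iterator."""
    for attempt in itertools.count():
        result = collection_list(entries, max_skippable_keys)
        if result is not None:
            return result, attempt
        if attempt >= max_retries:
            raise RuntimeError("tombstones never compacted away")
        entries = compact(entries)   # in Ceph this would be a timed backoff

tombstoned = [(f"k{i}", True) for i in range(5000)] + [("live", False)]
compact = lambda es: [e for e in es if not e[1]]    # compaction drops tombstones
result, attempts = do_delete_work(tombstoned, compact)
print(result, attempts)
```

The first attempt bails out after the skip limit, the simulated compaction clears the tombstones, and the retry succeeds immediately.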
A
Yeah, one of the things I've thought about doing too is actually just tracking all of the deletes right in our BlueStore RocksDB glue code, and then watching for compaction events and for memtable flushes, like, just tracking everything. But I'm hoping we don't actually have to do that.
A
Okay, anything else on that PR, guys?
A
Then we'll move on here. I'm just trying to quickly add that other PR that you mentioned in chat here.
A
I did not get that in earlier, actually; well, I'll do that as people talk. Okay, so what else do I have here? Lots of closed PRs, but most of them were closed by the bot.
A
A couple of them were mine, actually. Okay, but before I get to those, let's see: there's a fix for a race condition on onode put. The author closed that in favor of your fix, Igor, and I believe that Adam has been reviewing your fix as well and was kind of concerned about it. Has he talked to you about that?
A
Okay, yeah, we talked a little bit about it on Monday. I think he's worried that it may dramatically improve things but not completely fix the issue, so I think he was trying to think through whether or not he could come up with a case where it could still happen even with your fix, but I think he did think that it's a dramatic improvement.
A
Okay, let's see, moving on then: this reduced backfill and recovery default limits for mClock and other optimizations, that was merged by Neha. I think that's been sitting for a little while, so that's good. I think everything else was closed by the bot. So there were a couple of different ones in here, which, well, is unfortunate.
A
Some MDS stuff got closed, related to dirfrags and an optimization for large numbers of clients and open files. Adam's PR for making the pinning logic simpler in BlueStore for onodes, that was closed, but I think that was kind of outdated, right? I think that might have even been something that you fixed, that Adam had made a different version of, or a similar thing for.
A
Yeah, and that's changing more anyway, so it's probably irrelevant. Okay, next: I had made a sharded object cache in RGW a while ago, I think that was like maybe a year ago or something. Casey, do you remember that? Did you guys ever rewrite the cache stuff?
A
Well, it's there if you want it. Maybe it still even applies, can be applied cleanly, I don't know, but if you're interested, it's now archived.
A
We can restore it if anyone actually cares about it. I think I cleaned it up, and I think I maybe added some of the ideas from pet store from like three years ago to it, but I don't think most of it was even necessary anymore, because somewhere along the way I think we actually made them start faster. So anyway, there's that. Let's see what else, oh yeah.
G
Sorry, there's one piece of mine which I think is not on the list; it's about snap trimming. Okay, it's for the existing code, and it goes backwards. When you do snap trimming, you search by prefix: every PG has eight prefixes, and every object gets assigned one of the eight prefixes there.
G
So when you search, you search by the first prefix until it's depleted, then you move to the next, and the one after that, and so on. And the problem with the code is that after we've seen all the objects assigned to the first prefix, every time we search for the next object we first go by the first prefix, which is going to find nothing, and then by the second, the third, and so on. So by the time you reach the last eighth of the objects, every object will be preceded by seven lookups that find nothing.
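The cost being described, restarting from the first prefix for every lookup versus remembering the current prefix, can be illustrated with a quick count. This is a toy model of the eight-prefix layout, not the SnapMapper code itself.

```python
def lookups_restarting(objects_per_prefix):
    """Existing behavior: every 'next object' search starts from the
    first prefix again, probing each depleted prefix before finding
    the one that still has objects."""
    probes = 0
    for depleted in range(8):                 # prefixes already exhausted
        for _ in range(objects_per_prefix):
            probes += depleted + 1            # empty probes plus one hit
    return probes

def lookups_resuming(objects_per_prefix):
    """Fixed behavior: remember the current prefix and only advance
    once it is depleted (one extra probe per advance)."""
    return 8 * objects_per_prefix + 7

n = 1000
print(lookups_restarting(n), lookups_resuming(n))
```

With 1,000 objects per prefix the restarting scheme does 36,000 probes versus roughly 8,000 when resuming, matching the "seven empty lookups per object" effect described for the last prefix.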
G
I think, so in the new snap code this thing doesn't apply, because the new SnapMapper code is going to use a single prefix for all the objects owned by a given PG; we don't need eight prefixes. Oh...
G
Remember, the existing code still needs all the eight prefixes, and we're going to push this PR backwards to all versions.
G
No, I do not, and I'm driving home because I left the office late today, so I'm talking from the car. Okay.
A
Yeah, yeah, I can find it too; I probably just missed it, what with the Etherpad being down and...
A
Oh, okay, yeah, no, I usually try to search for them beforehand, but sometimes I miss stuff, and today there was kind of a lot of stuff to go through. So I probably just missed it, but I'll add it in, no worries. Sounds really good.
G
Yeah, it's a very, very short PR, maybe like 20 lines of code or so, and it should make a difference. Nice.
A
All right, we should get that in front of Paul, or, well, does that affect the RBD snap trimming? All right, I think so.
A
Yeah, we should get that in along with Adam's work, and then the defragmentation stuff I did; we should get that in too and get that testing.
A
Yeah, cool, very cool. Okay, let's see, what else do we have here? Okay, the MemStore stuff, that's not real important. Adam's runtime ability to modify the size of TCMalloc's thread cache; I think we still want that. I'm going to bug him to see
A
if we can reopen this one, because it would be really nice not to have this dependent on systemd or other things to set it. Right now in CBT we actually do, like, a prepend of an environment variable for it, and if you don't have that, then you don't get the benefits. So I think being able to modify this at runtime, as part of the ceph conf, and change it after the fact, would be really, really nice.
A
Okay, what else? BlueStore track something, that was closed by the bot; I don't actually remember what that one was. Okay, I think that's it for closed PRs. Updated ones: okay, Corey, it looks like Igor reviewed your iterator-bounds BlueStore collection_list PR. I think that just needs to be re-reviewed, right?
D
Yeah, I just updated that this morning. There were a couple of failing test cases, and I think he had one comment about style or something too, so yeah, I updated it this morning and it's ready for review again.
A
Oh, I've got this twice too; I had it in new stuff, probably because it had been broken last week. Okay, yes, that's what launched our whole conversation.
A
Okay, let's see, next: adding RocksDB write-ahead log compression. So when we get the newest version of RocksDB, when we update to it, we can enable this feature. I completely agree with your comments on the PR: I think we need to do testing that's not rados bench to make sure, because I don't think that rados bench actually does random data. I
A
think it's very, very compressible, so those benchmarks maybe aren't great for actually determining what the benefit of this is. But the fact that we see higher CPU usage with it, I think, means that we probably want to be very careful using this for NVMe drives; great for hard drives, like you said. So yeah, in complete agreement with you on your comments.
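The point about benchmark data being too compressible is easy to demonstrate in a few lines, using zlib as a stand-in for whatever compressor the WAL would be configured with; the repeating payload mimics rados-bench-style filler, while random bytes approximate already-compressed or encrypted data.

```python
import os
import zlib

def ratio(data):
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data)) / len(data)

compressible = b"0123456789abcdef" * 4096   # highly repetitive filler
random_data = os.urandom(16 * 4096)         # incompressible payload

print(f"compressible: {ratio(compressible):.3f}")
print(f"random:       {ratio(random_data):.3f}")
```

The repetitive buffer shrinks to a tiny fraction of its size, while the random buffer stays at (or slightly above) its original size but still pays the full CPU cost of compressing, which is why compressible-only benchmarks overstate the benefit.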
A
Okay, cool. Next we've got updating RocksDB to this version, and we talked about this a little bit last week. I think we definitely want to get this into testing as soon as we can. It sounds like the improvements are easily worth the pain of doing the upgrade, so yeah, we should get this into testing, bake it for as long as we can, and get it into Reef, I think.
A
I don't know, is Neha here? Yes; do you know when we're, when are we freezing stuff again? Can we get this in?
A
Let's see here; okay, so I'm just going to approve this, then. I assume this is just a submodule change.
C
I still have one custom thing for a version-upgrade issue from like Mimic or something. Yeah, yeah.
A
Okay, okay, so I'll put a note in here and request changes on this, just to apply it to ours and get ours upgraded. Otherwise, I think we proceed and do this.
A
Okay, next: disable busy polling in QAT. Kefu requested a self-review on that, so he's going to review that. Next: enable 4K allocation units in BlueFS. Igor, this is your PR; I saw your comments, and it looks like this is useful for people, but it'll also be tough to backport, maybe.
A
All right, well, good luck on that one. And then finally, for updated PRs, there's an older PR here for librbd to optimize out the async op tracker from the dispatchers. Apparently that's broken according to the author and their testing, so they requested that it not be merged. I added the do-not-merge tag to it and just asked them to get rid of it once they feel like they're ready again. So I think that's it; that was everything in this long list.
A
All right then, moving on: I think we probably covered the RocksDB iteration behavior during collection_list, unless there was more on that that anyone wanted to talk about.
F
Hey Mark, typica here. Hey, yeah, I just wanted to bring up, I had a discussion with Josh as well, and we were just discussing having research papers, and ideas that are related to research, at least. Maybe, yeah, I have some dedicated time for it, to get back to those topics. So we were thinking about using maybe a perf call, or some time around that call, for it, and I'm happy to volunteer and help organize that. I think in previous times...
F
...also, we had some discussions on research papers in the perf call, and they were somewhat, I mean, really interesting, and with the input of folks from around Ceph, I think this time would be suitable. So I just wanted to discuss that a bit and, yep, work it out some way. Yeah.
A
Yeah, absolutely. Josh mentioned that you were interested in running it, and I think it's a great idea. We have used this call for it in the past, and if you want to, we can continue doing that, or if you want to have a separate call, that's fine too. But yeah, absolutely. There are still, I think, a couple of papers in the Etherpad that we never really followed through on; I think there's still maybe two in there, or three, maybe there's more than two. Well...
A
In any event, yes, by all means, go ahead and organize. Yeah.
H
Maybe bring it bi-weekly, actually, maybe once every two weeks; it'll be good to discuss something. Like, I remember last time, back in one of the calls, we brought up a machine-learning-based research paper where we were discussing, I think, an optimizing-configurations approach which they were trying to do in Lustre, using machine learning. Yeah.
H
But those calls were never continued, so I thought this would be a great avenue, and a place where we can actually start off these discussions and move on to the more experimental side; and once things pan out, we can funnel the great ideas, the ones we actually want to continue working on, we can translate them into CDMs and move them to something concrete, as part of the pad, where once the idea is made concrete it can be worked out into an actual task or something it can translate into.
A
For sure. Just, I guess, think about whether you want to have it in this meeting; I'm happy to, we can carve out some time for it. If it's maybe more than once a month, we should maybe, I think, consider doing a separate meeting, just so that it's, well, I don't know; if it was bi-weekly, maybe we could do it, but...
F
It might get tight once a month, so, like, to start we can keep it once a month as well, and then, yeah, if we see we are not able to cover topics or need more time, we can make it bi-weekly.
A
Sounds great. So do you want to maybe collect kind of an updated list of papers and...
F
Yeah, I just created an Etherpad for Ceph research, and I'll try to also add some papers and research ideas people might be interested in discussing. It's open to any kind of ideas that people want to add as well; feel free to use it. Yep, have some fun discussing ideas, I think.
F
Okay, probably I'll crowdsource as well on that, and yeah, we'll see. Maybe, yeah.
E
That's a good idea, because there are a lot of members in the community who are already in research, and, you know, as a part of their job they are shortlisting these kinds of good papers and keeping an eye on new publications, so they might be able to just help us.
F
Yeah, also, I find that people from academia who work on Ceph projects also come to the perf call itself for discussing papers. In the past we have also seen code reviews, and, like, papers turning into PRs and then discussions here.
F
Probably we can also have those discussions in this call, and, yep, probably have more feedback.
F
You know, in terms of Ceph research, and we keep ourselves updated as well on what's going on and what people are doing; and in general, yeah, I'd also learn about, maybe, distributed systems, anything fun people are exploring there as well.
F
Yeah, that's for sure, I think something that I also have to, like, work out; but for sure, yeah, I'll try to pre-read and create summaries. Let's see, yep. Hopefully we can just be here to share ideas, and if somebody misses one, we can have a summary at the beginning and then begin the discussion.
F
And yeah, we are going to take feedback from Greg and everybody who used to run the papers part of the meeting as well, and yeah, hopefully it will be something everybody would be eager to also, like, volunteer for and take a look into. Let's see.
A
It seems like in the past you usually end up with kind of a small group of people that have really read the paper aggressively and explain it to everybody else, which I suspect will probably be the case here, right? Like...
F
Something like that, yeah. So what kind of feedback do you think, how can we mold it to be more discussion-oriented rather than, like, an explanation? And then, yeah, based on the paper-reading folks, like, people who came with homework.
I
I think, yeah, something is just to make sure that the paper is well targeted for what we want, for what we're interested in in our group, which here would be, you know, things applicable to Ceph, and, I guess, probably a performance focus if we're in this call, but not necessarily that. And so you need to, like, filter; you just need to filter them well. That's the part that takes time, being like, okay, so, like, when I was doing...
I
Hopefully, I was right. Well, like, if you end up with one that's too light, like, you know, if you end up with one that's too abstract, then it doesn't really have any applicability, and so people are like, "yep, that's kind of cool, but we don't care." And if it's too focused on a particular system that isn't Ceph, then it's like, "well, here's how you'd solve the problem," and sometimes those are really valuable, because they show how someone solved a problem, but sometimes they're very, very specific to the system involved, like, oh, Hadoop.
A
Yeah, however you want to do it. Happy to do it here if we want to, but yeah, up to you.
F
Yeah, I'll maybe start with, yeah, I'll discuss the timing with everyone, maybe list some available options and ask for votes, and based on that we can set some time for it. And yeah, I can volunteer for the initial papers, maybe take help from you, Greg, and everybody who's seen the meetings. So yeah, we can get started and at least have some idea.
F
Yeah, cool. So probably I'll, yeah, I'll think about the time, then discuss it with you, maybe have some votes, and get these papers added as well. Correct, Mark, you can use the Etherpad, add some papers, and, yep, we'll sync with you guys more on that.
A
All right, I don't think I've got any other topics for today, since Adam Kupczyk isn't here.
A
So the idea is that we're trying to dramatically reduce the number of shared blobs and blobs that exist, and hopefully, as a result, make it so that some of these operations where we have to scan through things are much faster. My PR is approaching it from a slightly different angle: when we have a lot of extents, then when we do a clone operation, with a little fanciness, we will defragment, and we write the object back out.
A
So the hope is that, with the combination of these two things, we'll see that snapshots are significantly faster and RBD mirror is happier; and now we have Gabi's snap-trimming one to look at as well, which may also improve it even more. So I think good news, but Adam's still working on getting his code into good shape. We had some lab issues recently, so he just got started working on that again, I think, a day or two ago. So that's where we're at on that.
A
But that's all I've got. Anything else from anyone this week that anyone would like to talk about?
A
All right then, great discussion, guys. Looking really forward to your plans to pick up the research again; that would be fun. We'll see everybody next week.