From YouTube: Ceph Performance Meeting 2020-08-20
A
Thank you, I appreciate that. I was just going to say: okay, we've got a couple of quick pull request things here and then we can really, you know, get back to the paper. So yeah, there are no newer pull requests I saw related to performance, and the only closed pull request was majianpeng's, which got closed by the stale bot because no one had looked at it. So, you know, that's what it is. Two were updated: another one from majianpeng that Adam and I both reviewed, which looks good; that's just changing the flushing behavior for BlueFS, and that looks like the right move. Then there was another one updated from Eric regarding RGW ordered list/map efficiency, and I believe Casey did actually review that now; I saw there was some discussion on it, but yeah, movement. So that's good. Not a whole lot else going on right now, though, so that's it. All right, do you want to take over here and, I guess, get going on the paper?
B
The paper was introduced about six months ago, almost six months ago. It talks about... well, it introduces an extension to the CRUSH algorithm.
B
So the point of the paper is to overcome this issue by adding a sort of time dimension to the mapping, which means that each object has a timestamp, and when you expand your cluster you don't migrate most of the data; you only write to the new OSDs, for example. They're also talking about how they address the potential load imbalance caused by doing that, because, well, that's the point.
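A minimal sketch of that time-dimension idea, under the assumption that placement works as a stack of timestamped layers, each with its own OSDs; the `LAYERS` table and `place` function here are hypothetical illustrations of the concept, not MAPX or Ceph code:

```python
import hashlib

# Hypothetical model: each expansion adds a "layer" with its own OSDs and a
# start timestamp. An object is placed using the newest layer that existed
# when the object was created, so expansions never move existing data.
LAYERS = [
    {"start_ts": 0,    "osds": ["osd.0", "osd.1", "osd.2"]},  # original cluster
    {"start_ts": 1000, "osds": ["osd.3", "osd.4", "osd.5"]},  # expansion at t=1000
]

def place(obj_name: str, create_ts: int) -> str:
    # Pick the layer by the object's creation timestamp...
    layer = max((l for l in LAYERS if l["start_ts"] <= create_ts),
                key=lambda l: l["start_ts"])
    # ...then hash within that layer only.
    h = int(hashlib.sha256(obj_name.encode()).hexdigest(), 16)
    return layer["osds"][h % len(layer["osds"])]

print(place("img.0001", create_ts=500))   # stays in the original layer
print(place("img.0002", create_ts=1500))  # lands on the new OSDs
```

The cost of this scheme, as discussed below, is that the timestamp and the layer table become extra placement state that has to be stored and consulted on every lookup.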
B
Did anyone get a chance to read into that more? I'm also going to link it again in the chat. This is the shared doc on my drive, which contains the paper and kind of the highlighting and marking of stuff that I found useful or important, just to make the reading experience better. I'm also going to link the paper itself as well, because the shared doc is only available inside Red Hat.
B
Thanks. So the second link I just sent has both the paper and the presentation from the conference. One important note is that in the video from the FAST conference, the presenters are not the writers of the paper, because the authors couldn't make it due to COVID-19, so the person presenting is just basically reading through their notes.
B
But if you want to look at the slides and maybe some of the explanations, you can take a look there. So I guess I'll start off with one thing that really caught my eye in the paper, which they talked about multiple times: they did their experiments only on Ceph RBD and CephFS, because they said that this method cannot be applied to general object storage. Did you see that, Mark?
B
Yeah. Also, Ronen, I tried reaching out to them last week. I sent an email to the people who wrote the paper, to try and maybe get them on one of our meetings if they want to present their work, but I still didn't get a reply.
C
By the way, you're a bit broken up here for me. I have a few questions and comments about that. Okay.
C
One issue is that, like you said, they're using timestamps, and the timestamps, at least for the file systems, are a bit contrived: they are using the creation time of an object, which...
E
It doesn't matter how the timestamp is generated. All that matters is that when they go to look up the crush map, you compare a number to whatever is already in there. There's no need for any...
C
Okay. And I think what's more important is that they're introducing what seems to me a lot of complexity in the management of every algorithm that needs to compare or recreate or repair the data, and all for something that is only meaningful in a very specific and very limited scenario of expanding or reducing the size of the cluster. So using it means paying a large price in complexity for something that is rarely used.
C
And the solution is not general: it's different for a file system than it is for block devices. Yeah, I think it...
F
It struck me as well: running it is kind of a constant cost that you pay over time, for something that is beneficial probably only for a relatively short period of time.
E
I was going to say there's a sort of difference between... okay, there were a couple of different dimensions I thought were interesting here. One is that using an externally stored placement key, which is, I think, what Ronen was reacting to, that you have to store separately, is actually a very cool idea.
E
It allows you to distinguish how items are placed based on external details. For instance, they use creation time, but that's not the only way to do it. So that is an interesting concept that we might wish to steal.
E
So I thought that, overall, this strategy is essentially analogous to allowing pools to have more than one crush mapping rule, with essentially different pools of PGs under it, which is also interesting. The problem is that they go further and say that every time you do a new expansion, you literally don't change placement of the older data, because of the way crush works.
E
So ultimately they address this with layer merging, where you gradually remove the timestamp distinctions between the different PG groups, which essentially means that all the PGs are placed using the same rule. But the PGs don't go away, and the original static timestamps don't go away; I believe you have to keep both of those, or the objects move between layers.
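Building on the hypothetical layered sketch earlier, layer merging might look roughly like this: the timestamp boundary is dropped and a single rule covers the union of OSDs, so any object whose hash now lands elsewhere has to migrate (`merge_layers` is an illustration of the concept, not the paper's algorithm):

```python
def merge_layers(layers):
    # Collapse all layers into one: one rule over the union of OSDs,
    # with no timestamp boundary left.
    merged = [osd for layer in layers for osd in layer["osds"]]
    return [{"start_ts": 0, "osds": merged}]

before = place("img.0001", create_ts=500)  # placement under the layered rules
LAYERS[:] = merge_layers(LAYERS)           # drop the timestamp distinctions
after = place("img.0001", create_ts=500)   # every object now uses one rule
print(before, "->", after)                 # if these differ, that's data movement
```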
B
They're doing three things to address the potential load imbalance: PG remapping, cluster shrinking, and layer merging. So it's not just layer merging.
E
That's what I was getting at. It is, however, possible to do PG merging even under these conditions, but the end result will be that data moves; that would be the whole point. As you converge back towards the original number of PGs with the new crush map, you end up back with our current behavior, just with a longer circuit.
E
The disadvantage of that approach is that it will move a bunch of objects, in line with how it changes the placement of new objects, but I think that's actually a virtue, not a vice, here. Sorry, go ahead.
C
Just to follow up on what you said, this was my initial thought when I read this: did we consider having multiple maps and just a way for the client, either by trying all of the maps or by some other option, to select which one it has to use?
F
Also, going back to what problems you're trying to address with this: one aspect of it is trying to address the impact of recovery on client I/O, which I think isn't necessary to address at the mapping layer. There are other mechanisms we can use for that; the quality-of-service work is heading in that direction as well.
F
The other direction I thought was interesting here was the idea of not having to do those writes twice, that is, redirecting: because they're essentially adding new PGs and using those new PGs in their final location already, they don't need to do those extra writes to all the OSDs and then backfill, which performs the same writes again. Sam, in the kind of scheme you're describing, where did it address that aspect, the extra write amplification?
E
No, but that's what I meant: you do have to do the writes to the previous location. But I'm not really concerned about write amplification under these conditions, to be honest. I think the core trade-off isn't really about write amplification; it's about whether the final data placement is fully specified by the new cluster map, and over time you eventually do want that to be true.
E
That's just because you don't want to store that much metadata in the OSD map; otherwise you have to store an increasing amount of location information in the OSD map to account for all of the different decisions you've made in the past. So you do eventually want to converge to a single rule, and ultimately your old objects do have to move. Whether any particular newly created object happens to be written and then moved is, I think, less important.
E
Actually, their complaint wasn't about that. It was about the fact that crush, in its normal configuration, does do some amount of extra relocation between nodes that wouldn't otherwise have changed. If you look at the original paper, I recall there was some calculation about how large that set is and how you can tune it, and their own graphs here suggest that that delta isn't very big, figure two being the most egregious case. I think that's the one with the blue bar that says sixty percent, the far larger...
E
...versus six percent. But I'm not totally convinced that that's a particular problem either.
F
Yeah, and I'm not sure that measurement is necessarily the most helpful one: the affected number of PGs doesn't tell you how much data is moving, for example.
E
I mean... yes, that's true.
C
Okay, so that's what I thought: in this case it seems we'd add very little complexity to the system, and we might be gaining something. And the option that you dismissed, I think, is that both rules would be there, and the client would have to try, maybe in an efficient way, one of the rules and get a failure.
F
One thing I'd also think about is how this maps onto the practical realities of how people expand clusters today. This paper kind of assumes that you're adding a large set of servers all at once, all instantly weighted into the map. But typically you don't do that; you would rather slowly weight them in over time until they reach full capacity, and control the speed of remapping that way. Sam, is this equivalent to your idea of kind of stepping through the old versus new mapping?
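A rough sketch of the gradual weight-in F describes, assuming an operator script that steps a new OSD's crush weight toward its target; the OSD id `osd.42` and the helper `reweight_steps` are hypothetical, though `ceph osd crush reweight` is the real command such scripts typically wrap:

```python
def reweight_steps(target_weight: float, steps: int) -> list[float]:
    # Evenly spaced intermediate crush weights for a new OSD.
    return [round(target_weight * (i + 1) / steps, 3) for i in range(steps)]

for w in reweight_steps(target_weight=1.0, steps=5):
    # A wrapper script would run each step and wait for rebalancing to
    # settle (e.g. no remapped PGs) before issuing the next increment.
    print(f"ceph osd crush reweight osd.42 {w}")
```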
E
Interestingly, no. As I understand it, that gradual increasing-weight approach multiplies the amount of extra data movement quite a bit. That assumption wasn't explored in the original paper, but my recollection from when we discussed this a few years ago was that, empirically, you do move way more data that way in total.
E
I would say that the gradual reweighting part does totally work; it's a totally valid mitigation. But it is what it is: a mitigation for the fact that Ceph in its current form doesn't do a particularly good job of controlling background movement. And really, the scheme where you gradually move PGs from one OSD to another is just a more compact way of doing upmap mappings; there's no real semantic difference between the two, I don't think.
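A toy model of that equivalence, assuming a simplified view in which an upmap-style exception table overrides crush's per-PG answer; the dictionaries here are illustrative, not Ceph's actual data structures:

```python
# crush's "ideal" answer after an expansion vs. exceptions that
# temporarily pin PGs to their pre-expansion acting sets.
crush_targets = {"pg.1": ["osd.3", "osd.4"], "pg.2": ["osd.3", "osd.5"]}
upmap_exceptions = {"pg.1": ["osd.0", "osd.1"]}  # still pinned to its old home

def acting_set(pg: str) -> list[str]:
    # An exception, if present, overrides crush's placement for that PG.
    return upmap_exceptions.get(pg, crush_targets[pg])

print(acting_set("pg.1"))      # old location: no data has moved yet
del upmap_exceptions["pg.1"]   # removing exceptions gradually = gradual migration
print(acting_set("pg.1"))      # crush's new answer now applies
```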
E
Well, sometimes it destroys a cluster. Oh wow, okay. That's what this looks like when it gets out of control. We saw this with the big CERN cluster; it's one of the things that motivated Sage's changes to cut down peering messages a while back.
B
There was one thing I didn't completely understand; maybe somebody can explain it. They're talking about this parameter we're using instead, called osd_max_backfills, which makes the migration have the lowest priority. Can you expand on that? Because I didn't really understand it.
E
If you look at the code that controls backfill in the OSD, you'll notice that it limits the number of PGs that can be backfilling at once. This doesn't change the fact that they're out of place, but it does change the rate at which you make progress putting them back. So, I mentioned mappings: they glossed over a lot of details about how Ceph actually handles this in its current form, but one of them is that when you get remapped to a totally new location on the cluster, you don't actually get remapped right away.
E
The new primary will say: I'm not a good primary for this PG right now, and it will request an exception mapping in the OSD map that puts the PG back where it was before. Then the old primary will gradually backfill the new OSDs in the background, while still continuing to serve I/O at the old location. But the number of PGs that will concurrently make progress at this is limited on each OSD by max backfills.
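A toy model of the throttling just described, assuming a fixed amount of copy work per PG; this illustrates the effect of an `osd_max_backfills`-style limit, not Ceph's actual scheduler:

```python
from collections import deque

def drain_backfills(remapped_pgs, max_backfills=1, work_per_pg=10):
    # Each PG needs `work_per_pg` ticks of copying; at most `max_backfills`
    # PGs are actively backfilling at any moment. The rest stay queued
    # (and keep serving I/O from their old, pg_temp location).
    queue, active, ticks = deque(remapped_pgs), {}, 0
    while queue or active:
        while queue and len(active) < max_backfills:
            active[queue.popleft()] = work_per_pg
        for pg in list(active):
            active[pg] -= 1
            if active[pg] <= 0:
                del active[pg]  # backfill done; the exception entry goes away
        ticks += 1
    return ticks

# Same total work either way; the knob trades duration for intensity.
print(drain_backfills([f"pg.{i}" for i in range(8)], max_backfills=1))  # 80 ticks
print(drain_backfills([f"pg.{i}" for i in range(8)], max_backfills=4))  # 20 ticks
```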
F
Yeah, and to be clear, the test they set up was designed to showcase a cluster that was slowed down because they let recovery go so fast. Typically, in an all-HDD case like they used, you would never set max backfills to 10 unless you didn't care about client I/O at all, but...
B
They're saying that it does essentially mitigate the migration impact, but then, okay, you just have less severe performance degradation for a longer period of time. So it's not one spike...
E
That was my real problem with this test. As I mentioned before, there's a distinction between permitting new writes to go to the new locations and relocating old data, and I think I made the argument that you do eventually have to relocate the old data, or you have to carry placement information linear in all of the changes you've made over the lifetime of the cluster, which is what their design does. So I don't think that's possible.
E
I think, eventually, with this design, you would have to add a way to actually move the old data, and that does mean that over a long period of time data relocation would have to happen, and over that time there would be an I/O impact. Here, doing a good job means we impose a very small amount of client I/O impact for a very long time; that's what a well-behaved system does.
G
Exactly, and I think that's particularly good when you're doing a large-scale expansion. But if you're doing a small-scale expansion, like adding one OSD or one host at a time, I don't see it. You eventually want your cluster to be balanced, or its space usage to be balanced, so you have to do that migration at some point, and whenever you do it, you will bear the cost. It's just choosing to pay the cost at the time of expansion or at a later point.
E
Yeah, actually, there was one trick with this design I rather liked. There was a bit I was skeptical of: I was wondering what would happen if you added just one host with host-level replication, but the way they integrated it into crush means that you will put the other replicas on the older portions of the cluster.
A
That pairs, Josh, with the QoS work that's been ongoing. Do we have any idea how good that is right now at achieving the slower, delayed behavior for background work? Do we have anything demonstrable yet?
F
I was asking earlier about changing the replication over time, rather than retargeting and adding new OSDs, but that seems like it could be helpful.
A
...difference. So, Sam, you're basically arguing, then, that the behavior we want, combined with what else it does, ultimately doesn't result in any kind of write-amp benefit.
E
Right now, so that's not an interesting metric, yeah. The interesting metric is... the real star power of this paper is that newly created objects go to the new location; that's cool. I don't like the way it does it, because of the increase in the number of PGs, but some other combination, like information carried by the RBD info or something, could be interesting. So I think there are interesting concepts here.
E
I think a lot of these mitigations could be interesting, but I don't think any of them are silver bullets, in that it's still the case that you need to handle the data movement when it actually happens without swamping client I/O, and I think that's the real problem here, even with this approach.
E
Eventually, you have to do this layer-merging thing, which does result in data movement, and then you have to do PG merging, which is far more expensive; both of those will have client impacts that aren't fully measured here, and in either case something like QoS will still be necessary. So this isn't really a replacement for that effort.
F
If you were thinking more about the prime pg-temp piece: that's all calculated currently on the monitor, so you're limited to a single host's CPU power.
E
The administrative component might be a little rough. You need to build something into the manager to make sure that this line advances. I think that was a major argument against it the last time this came up: we didn't have the manager component, and we didn't really want the monitors doing it. But with the managers getting some kind of cluster-wide stat snapshot, they could be in charge of advancing the line over time as recovery progresses.
B
The point of this was not, you know, implementing exactly that, because it's obvious there were problems with it. I mean, it's a scientific paper, so in the evaluation, as well, they were, you know, doing stuff that sort of manipulated the results in their favor. What I want to know is whether there is stuff from that paper that we would be interested in checking or trying ourselves, or performing our own experiments on, or things like that.
E
Well, the big problem with this paper... or with this approach, rather, more than with the paper... is the extra information you need to store to do placement. It's not generalizable to RGW, for instance, so it would mean adding a RADOS pathway that's really specific to RBD. I don't like what they did with CephFS here; I don't think that's actually going to work.
E
I also don't really like tying it to time. I think there are other interesting things you could do with a key like that, things more general than just time, which would be potentially interesting. But because of the limitations on PG merging, and the details of what this does to a cluster over many years...
E
Yeah, I mean, I think it's probably a good idea; I don't know how urgent it is. Like I said, it doesn't actually solve the QoS problem. It is essentially a better version of PG priming, or rather a version of PG priming that's cheaper for the leader monitor. So the question is: how well does PG priming work? Do we find that it frequently fails to happen for PGs? Do we find that the space overhead in the OSD map is, in practice, too large?
Do
we
find
that
the
space
overhead
in
the
osd
map
is
in
practice
too
large?
F
There are more improvements we could make to the pg-temp priming mechanism directly: currently it tries to do a bunch of work and then kind of throws it away if it doesn't get it done. And there are other aspects of how we're doing some of the PG calculation in the monitor right now that, for example, should be split up into finer-grained pieces, or done in a worker thread outside the main monitor lock.
F
I don't think we've seen many problems from the space used by the temp entries, but I'm not sure we have a great measurement of how often the priming itself is helpful versus instances of it timing out.
E
I seem to recall back-of-the-envelope math from Sage, from when he introduced it or the last time I talked to him about it, suggesting that it's really a small-cluster thing: for a large cluster you'll never have enough monitor CPU cycles available to do the job. So under those conditions it might be worth exploring the other thing. The other thing is also in some ways simpler: you're not relying on the monitor to burn a bunch of CPU to solve this problem, it's just...
H
I have one question: is the MAPX code open source? Can someone find it on GitHub?
B
I haven't tried searching for their code. I did try to email them about whether they want to come to one of our meetings, present their work, and maybe answer a few questions, but I didn't get a reply. That's why I asked Mark earlier, since I CC'd you and Josh: did you guys get that email? Then I'll know the email was indeed sent and it's not a problem on my end, but that they just didn't reply.
B
So yeah, they just didn't reply yet. Ideally, you know, if they could come to one of these meetings and answer any questions you have, or, you know, provide us with some code, that would be nice. As soon as I have an answer from them, I'll update.
H
Okay. I think it was mentioned in the email for this performance meeting; that's why I thought the presenter himself would be present. That's why I asked. It says: a research paper presented last week regarding the drawbacks of the CRUSH algorithm and the MAPX CRUSH extension.
E
It's specifically inapplicable to the case where it would be most useful. RGW workloads create a lot of objects, but RBD and CephFS workloads may not. Well, maybe CephFS does, but not RBD: RBD only creates objects at the rate at which you create new RBD images.
F
It sounds like it's given us some interesting ideas about how we could try to address extra data migration, which was a significant problem that we saw, but to me it doesn't seem like it's the highest-priority issue.
G
My two cents from the paper: the problem they are trying to address is genuine, but it is not as impactful to a regular Ceph cluster's life as they suggest. Also, the approach they've taken does not consider some important aspects, like the creation of new PGs, the extra costs of PG merging, and everything like that.
G
So I don't think the approach is something we would want to explore further, but we should probably go back and look at the prime pg-temp stuff we already have, and see whether it's, you know, adequate, or whether we can optimize it further. That's probably my key takeaway from reading this paper and having this discussion.
A
I think the comment that was made earlier about the graphs was kind of spot on. It feels like a lot of the things they have here were kind of designed to make the graphs work. Not that it's not real, but you know, it's just...
B
I'm just going to say that if we are interested in that, then I can definitely take a look into it and make time for that. But yeah.
A
Cool, yeah. Well, does anyone want to try to propose a paper for next week? Either Amnon or Josh, anyone want to take one on?
H
I think this MAPX is one of the latest ones I've found from the research community. I mean, I'm still working on Ceph and completely new to crush; I've basically been reading the original paper to understand crush, and then I came across this MAPX.
A
If we do start doing this regularly, do folks want to do it every week, or every other week, or, you know, every month? What would folks be interested in?
A
Great idea. You can edit it directly too, if you want. Is that mine?
G
No, I mean, I just want to say that people should know that we are discussing papers, or reading papers together, in the performance call. Some people did notice it today, so it's a common theme that people may be interested in joining.
A
Yeah, give me one second; this is the pad right here. I try to send out a link to it in the email that goes out to the dev list, but...
A
Maybe, depending on how many papers people actually put in it, we can figure out what a schedule would look like. If we only have five papers, yeah, we'd only do it once a month.
B
Yeah, I mean, I think it's a really good idea, because there are a lot of new papers coming out and it's important to, you know, stay on top of that and see if there's something new that we didn't notice before. I mean, even from this paper, even the MAPX paper...
B
Even if we don't, you know, end up implementing it, there's just some nice stuff, key takeaways we can take, that make us think or rethink things. So I think it's... yeah.
A
Cool, all right. Well then, everybody add your papers if you find them, and I'll send out an announcement on the mailing list that we're thinking of doing this, trying to figure out a schedule, and see what happens.