From YouTube: Ceph Developer Monthly 2022-06-01
Description
Every month the Ceph Developer Community meets to discuss current work in the Ceph codebase, and coordinate efforts to minimize collisions and issues.
This monthly Ceph Dev Meeting will occur on the first Wed of every month via our BlueJeans teleconferencing system. Each month we alternate meeting times to ensure that all time zones have the opportunity to participate.
https://tracker.ceph.com/projects/ceph/wiki/Planning
A
Welcome to CDM for the month of June. I've added the agenda in the chat. It seems like we've got a couple of topics so far — maybe three now. I think we have enough of a quorum at the moment, so let's begin. The first one is around a librados alloc-hint flag, a long-term solution for BlueStore zero block detection.
B
Well, I suppose I can start. All right. It all kind of started with RBD thick provisioning, and this is, you know, not necessarily centered around a particular fix or a particular flag.

There was a fix suggested, but this is also intended to be just a discussion around what can be expected by the consumers of the ObjectStore interface — or perhaps not even ObjectStore, but just the OSD ops — because after an op is accepted by the OSD, some parts of it go through the actual ObjectStore interface that things like BlueStore implement, and some parts get munged in the do_osd_ops function, which does a lot of transformation on the operations that are submitted; and there are additional details that are sometimes changed in the transaction later, before the transaction gets to the ObjectStore.

So this is really about what a consumer such as RBD or CephFS, which is just issuing RADOS ops, can expect in terms of data layout, in terms of the difference between logical extents and physical extents, and in terms of how much of what is written is actually preserved. The zero block detection is just a poster child for this.
B
I guess because there are other things, such as compression — you can think of various other transformations that the object store might do while preserving the appearance of storing the bit pattern that was written by the user.

But it turns out that these details matter, and in particular there is this use case for thick-provisioning RBD images. This is something that was added a while ago — I think in the Mimic release, or maybe even earlier than that — and it basically works by writing zeros to the image. It tries to be slightly efficient in that it uses the writesame operation instead of just sending an object-sized buffer full of zeros, so at least the zeros are not sent across the network. But up until Quincy, when the zero block detection feature was introduced,
B
This
resulted
in
you
know
the
zeus
actually
be
written
to
disc
in
the
absence
of
compression,
of
course,
the
other
thing
so
so
one
can
argue
that
rbd
fit
provisioning
has
always
been
somewhat.
B
You
know
somewhat
questionable,
because
booster
compression
could
always
get
in
the
way
and
zero
block
detection.
You
know
it
can
be
argued
to
be
a
form
of
compression,
but
the
other
thing
that
that
recently
came
up
in
which
which
both
ibd
and
7s
actually
running
into
is
encryption.
So
the
the
client
sign
encryption
for
sacrifice.
B
This
is
the
alpha
script
framework
and
for
rbd
it's
something
that
we're
doing
based
on
lux
internally
within
libraries
and
in
both
cases
we
need
to
be
able
to
distinguish
between.
B
Zeroes
that
are
written
to
the
to
the
image
explicitly
and-
and
you
know
the
areas
that
that
weren't
written
at
all
they're,
basically
just
holes
in
file
system
terms
and
the
zero
block
detection
feature
the
way
the
way
it
was
implemented.
B
If there are disagreements there, then we need to identify these and document them, because, like I said, some parts of it are in the Transaction class, some parts are in comments on the ObjectStore virtual functions, and some parts are just in the OSD op layer, where the ops get munged — truncates, zeros, things like that; there are some transformations that occur there that may not be obvious to everyone. And the other part of this is the proposed solution for the zero block detection feature.
D
Well, Adam here. I do agree that we should really document somewhere what the split is between ObjectStore requirements for behavior and what is basically free implementation detail. Currently we don't really have that; even some flags and hints are not really described. What is the meaning, for example, of the INCOMPRESSIBLE flag — is it just a hint, or should it be followed more closely?
B
Yep. So I'm guessing one kind of action item is: perhaps we could start by just collating all these random code comments together, and also have someone walk through the enumeration where the OSD opcodes are defined, and for each opcode come up with a short description and an explanation of the semantics. One example that I would give here — again, I think it's documented in the tracker — is the OSD zero opcode.
B
All right, so originally this was, I think, intended to be a form of, again, efficient zeroing, so that the zeros are not transferred over the network. But — and this is way back; we're talking perhaps even pre-FileStore days — this opcode was repurposed.

So the expectation — the semantics are now the expected semantics, which RBD has relied on all these years — is that it's not just that the appearance of zeros having been written to the object is there, but also that the data is discarded.
B
So the expectation is that this operation frees up space, whereas for the write-zeros — I'm sorry, for the writesame operation, which we've been using for thick provisioning — the expectation is that zeros are actually written out and consume space.

These are the kind of subtle details that are not documented there, and I think we should start with the OSD opcodes, since that's what the clients are exposed to, and then work our way down to the PG transaction and the ObjectStore interface, which is, I think, what Adam and other BlueStore folks are most interested in, of course. But it all starts with RADOS ops, and so this needs to start there.
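The contrast between the zero opcode and a writesame of zeros can be sketched with a toy model (all names here are illustrative, not the actual Ceph classes): both paths read back as zeros, but only the zero op releases the backing space.

```python
# Toy model of the two zeroing paths discussed above (hypothetical
# names, not the Ceph implementation): a "zero" op deallocates the
# range, while a writesame of a zero-filled buffer keeps it allocated.

class ToyObject:
    def __init__(self, blocks):
        self.blocks = blocks
        self.allocated = set()       # block numbers backed by "media"

    def writesame(self, block, data):
        # Data reaches the object; the block consumes space even if
        # `data` is all zeros (the thick-provisioning expectation).
        self.allocated.add(block)

    def zero(self, block):
        # Zero-op-style semantics: the range reads as zeros AND the
        # backing space is discarded.
        self.allocated.discard(block)

    def used_blocks(self):
        return len(self.allocated)

obj = ToyObject(blocks=4)
obj.writesame(0, b"\0" * 4096)   # thick provisioning: space consumed
obj.writesame(1, b"\0" * 4096)
obj.zero(1)                      # zero op: space freed again
print(obj.used_blocks())         # -> 1
```

Under either path a read of block 1 would return zeros; only the space accounting differs, which is exactly the detail thick provisioning depends on.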
A
I completely agree; this has caused confusion. Like you just mentioned, the zero op has caused confusion earlier as well, and given that we are at the moment rewriting the OSD, Crimson would also benefit from it. I think it is going to be an incremental effort, as you just mentioned, but if there are, priority-wise, things that we should do sooner rather than later, then instead of just going bottom-up, maybe we can start with those and identify those first. It doesn't hurt to maybe create an Etherpad, have all of those written down, and check them off as we go.
B
Yep, makes sense. I don't think any of this is super high priority or anything, because the immediate issue was resolved by disabling zero block detection and clearly documenting that it doesn't interact well with some high-level features in RBD and CephFS. As far as that is concerned, the issue is resolved; going forward, having that documentation in place would definitely help avoid something like that happening in the future.
B
The other part of this topic, then, is the proposed solution — in particular for zero block detection, but I'm guessing this concerns compression as well.
B
So
what
adam-
and
I
kind
of
briefly
talked
about-
was
adding
a
flag
which
would
not
be
a
hint,
so
I
mean
I,
I
still
call
it
a
hint
just
in
the
name,
because
that's
what
the
enumeration
kind
of
insists
on,
but
again
it
is
as
part
of
documenting
the
usd
hops.
This
compressible
incompressible
flags
could
also
be
documented,
and
then
this
flag,
I
I'm
calling
it
don't
transform,
but
you
know
the
name
could
be
changed,
obviously
would
be.
B
Go
ahead,
and
you
know
rename
the
flags
that
that
that
have
the
word
hint
in
them
that
aren't
actually
hence,
but
anyway,
the
semantics
would
be
that
the
data
buffer
should
be
should
be
written.
B
It
should
reach
media,
as
is
so
whether
it's
zeros,
whether
it's
data
that
is
easily
compressible,
no
matter
what
you
know.
What
what
other
settings
are?
You
know
set
whether
it's
boost
or
configuration
options
that
come
from
the
from
the
from
the
monitor
config
or
whether
it's
you
know,
whatever
hints
that
are
that
are
passed
down
along
with
you.
B
Along
the
way,
this
lag
should
just
take
precedence
and
disable
all
of
that
and
basically
serve
as
a
as
an
instruction
to
just
take
the
data
and
write
it
out.
B
There are some challenges associated with that that Adam brought up — in particular the fact that, you know, this is obviously very easy to implement for the local object store, so for the particular OSD this write happens to be directed to. But if later on rebalancing happens in some form and the object is moved to a different OSD, that OSD might have other BlueStore settings, again either coming from the monitor configuration database or from the local ceph.conf file. So, for example, that other OSD might have compression enabled, and it would take this object that was supposed to be left uncompressed and written as-is, and compress it. That's changing the original intention. So this flag would need to be persistent, attached to the object, and kind of travel with that object wherever that object goes.

That's, I think, the high-level issue that was identified, and yeah, at this point I'll again stop talking and see what Adam and others have to say as far as whether this is feasible.
D
Maybe some problems could arise with erasure coding — if we somehow stripe objects into pieces, then we should track that empty space accordingly, also as non-compressible or non-transformable, so it will actually take space. But my impression is that on the BlueStore level it is pretty trivial to implement the feature; most of the logic in handling such a flag would be on the OSD layer.
A
The
general
idea
would
be
that
this
flag,
no
matter
what
other
settings
are
applied,
will
prevent
any
kind
of
transformation
in
the
object
store
layer.
That's
the
motivation
is
my
understanding,
correct.
D
Exactly
the
the
mean,
the
reason
for
the
flag
is
to
some
flag
connected
to
object
is
only
to
be
persistent
across
replication
to
different
object
stores,
so
each
bluetooth,
basically
imagine,
would
get
the
flag,
of
course,
converted
to
objects
or
interface,
but
still
get
the
info
that
such
object
should
not
be
deflated
or
reduced
in
size.
In
any
case,.
A
I
think
I
think,
a
high
level.
It
makes
sense.
The
motivation
is
kind
of
pretty
clear
in
terms
of
adding
this
extra
metadata
to
the
object
in
terms
of
an
extra
flag.
I
think
it
should
be
possible.
A
You
have
to
look
at
the
place
where
we
need
to
add
it,
but
I'm
just
like
in
general,
this
who's
going
to
be
the
user
of
this
flag
in
terms
of
just
like
you
know,
when
we're
writing
zeros
the
the
the
zero
detection
stuff
and
who
else
is
the
user
of
this
flag?
I'm
just
trying
to
understand
the
amount
of
extra
code
that
we'll
be
adding.
B
So
so,
right
now
it
seems
like
the
the
only
user
would
be.
I
believe,
thick
versioning
feature,
which
is
somewhat
you
know,
which
is
a
bit
odd,
so
it
might,
you
know,
not
be
worth
it.
You
know
just
for
that,
but
I
think,
if
you
know
folks
have
other
use
cases
for
just
preserving
the
bit
pattern
of
exactly
as
it
is
and
just
writing
it
out
to
disc
that
might
make
more
sense
for
those
other
use
cases.
B
It's
definitely
again,
not
you
know
the
high
priority
sort
of
thing,
because
blue
store
compression
blue
store
has
supported
compression,
I
think
for
for
as
long
as
I
believe,
supported
thick
provisioning,
flag
and
they've,
never
interacted,
obviously
so
well,
they
never
interacted
well,
the
blue
store
compression
has
always
kind
of
took
precedence,
so
it's
just
one
of
the
ways
that
you
know
it's
just
like
something
that
came
up
as
far
as
zero
blog
detection
feature
here,
the
it
kind
of
it
it
concerns
more.
B
So
not
just
the
effect
provisioning,
but
also
there
is
this.
This
other
thing
that
I
mentioned
is
that
it
strips
the
way
it's
currently
implemented
it
strips
the
distinction
between
zeros
that
were
explicitly
written
to
the
image,
even
though
you
know
they
may
be
compressed
later
on.
B
So
we
at
this
point,
we
don't
care
about
whether
they're
stored
on
disk
or
not,
but
we
care
about
knowing
whether
the
the
it
was
the
end
user,
who
just
wrote
a
range
of
zeros
to
the
object
or
whether
that
range
was
never
written
to,
and
it's
just
a
whole
right,
or
maybe
it
was
written
too,
but
then
explicitly.
You
know
the
whole
was
punched
explicitly
these.
This
is
this
use.
B
Case
comes
up
with
with
you
know,
when
implementing
compression,
in
particular
in
the
separate
case,
it
hurts
the
most
because
for
rbd,
given
that
this
is
a
bug
device,
we
can
kind
of
kind
of
work
around
it.
B
Although
it's
not
not
easy
the
the
file
system,
given
that
it
has
to
abide
by
the
deposits
interface
at
least
two,
you
know
to
some
extent
it
has
the
like
the
file
system
user
can
can
do
you
can
punch
holes
and
those
holes
need
to
be
correctly
preserved.
B
So this is where the zero block detection feature, the way it's currently implemented, just completely breaks everything. If, going forward, we wanted to potentially enable it by default — because it might be useful for some workloads, and it doesn't necessarily need to be limited to just development and testing in Teuthology — if we wanted the possibility of enabling it in the field, then the fix — and the "don't transform" flag is not a fix here, because CephFS is obviously not going to pass it on all operations — the fix purely for the zero block detection feature would be to introduce the distinction between logical extents and physical extents at the object store layer, and make it so that the zero block detection feature only transforms physical extents.
B
The fallocate system call has this distinction: there is a difference between punching holes and zeroing out a range, and the difference is precisely that when you zero out a range, at the logical level the range remains allocated, whereas when you punch a hole, at the logical level the range becomes deallocated. So I think that's the distinction. It's not clear how much work it would be to bring that to BlueStore and then expose it through the ObjectStore interface.
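The same allocated-zeros-versus-hole distinction shows up with ordinary sparse files. A minimal sketch — block accounting is filesystem-dependent, so only the read-back behavior is checked here:

```python
# A file with a hole (never-written range) and a file range of
# explicitly written zeros read back identically; only the written
# range is necessarily backed by blocks on disk.
import os
import tempfile

blk = 1 << 20  # 1 MiB

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(2 * blk)        # bytes [0, 2 MiB): a hole, deallocated
    f.seek(2 * blk)
    f.write(b"\0" * blk)       # bytes [2 MiB, 3 MiB): written zeros

st = os.stat(path)
with open(path, "rb") as f:
    data = f.read()
os.unlink(path)

# Allocation is filesystem-dependent; on most filesystems st_blocks
# covers roughly the written 1 MiB, not the 2 MiB hole.
print(st.st_blocks)
print(len(data), data == b"\0" * (3 * blk))   # -> 3145728 True
```

`fallocate(2)` makes the same split explicit at the syscall level: `FALLOC_FL_PUNCH_HOLE` deallocates a range, while `FALLOC_FL_ZERO_RANGE` zeroes it but leaves it allocated.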
B
But that's another approach — well, that's the only approach that I currently see that we could take for the zero block detection feature to be enabled in the wild.
D
I think, on exposing a difference between logical extents and physical extents — the physical side is never fixed. I mean, we never do a transaction where we overwrite existing space; we always allocate new space. This is for us to provide atomicity if something happens during our write operation. So even if you have an object that at some point in time is allocated contiguously at some location on the media, after you write to it in place somewhere inside the object, you get it fragmented, because the new parts of the object are allocated elsewhere.
B
I don't see that being an issue for this second part of the conversation — for the difference between logical extents and physical extents — because this is precisely the point of making this separation apparent, right? CephFS would not be concerned with how a particular range is physically stored. What matters is that if a particular range is written to, then when you ask for an extent map for that range — there is an OSD opcode for that, I think it's called mapext or something like that, and this is also embedded in the sparse-read operation — when you issue a sparse-read against the object, it returns a list of extents, essentially: the extents that are allocated (it also returns data for those extents), while the extents that are not allocated are clearly marked as not allocated. And those are logical extents, right? So the zero block detection could just skip writing the zeros to disk, but as long as it maintains the record that this range was explicitly written to, that's fine, and that would fulfill the desired semantics.

So I don't see how the copy-on-write or redirect-on-write — whichever best describes what BlueStore does physically — I don't see that being a problem at all.
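A minimal sketch of the record-keeping being described — hypothetical names, not the actual ObjectStore API — where a written range stays visible in the extent map even if its zeros are never stored physically:

```python
# Toy sparse-read model: the logical record of "this range was
# written" is kept even when zero detection skips the physical write,
# so written zeros and holes remain distinguishable.

class ToyStore:
    def __init__(self):
        self.logical = []            # [(offset, length)] written ranges

    def write(self, off, data):
        # Zero detection may discard the physical zeros, but the
        # logical extent record must survive.
        self.logical.append((off, len(data)))

    def sparse_read(self, off, length):
        # Report extents within [off, off+length) that were written;
        # anything not reported is a hole.
        out = []
        for o, l in self.logical:
            lo, hi = max(o, off), min(o + l, off + length)
            if lo < hi:
                out.append((lo, hi - lo))
        return sorted(out)

s = ToyStore()
s.write(0, b"\0" * 4096)           # explicit zeros: still an extent
s.write(12288, b"x" * 4096)
print(s.sparse_read(0, 16384))     # -> [(0, 4096), (12288, 4096)]
```

The range [4096, 12288) never appears in the map, so a client can tell a punched or never-written hole apart from an explicitly written run of zeros.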
B
Yes, yes — that's precisely what is needed, and that's precisely what POSIX file systems do for supporting fallocate functionality.
B
So, just to be clear: this is completely separate from the "don't transform" flag, and perhaps I shouldn't have conflated the two. These are really two different discussions and two different use cases.

The flag is just a use case that is probably not worth it. The other topic is the introduction of a new flavor of logical extent, and that is targeted at making the zero block detection feature enableable in the wild, in the field. Again, it depends on how useful we think that would be. If we are fine with having this feature just be a development thing, used just for testing — for large-scale testing in Teuthology — then that's fine with me, and we can basically keep everything as-is. But I would still insist on documenting the semantics more clearly for all OSD ops going forward, because I think that's useful regardless. The actual code changes — whether it's the "don't transform" flag or the new logical extent flavor in BlueStore — are completely optional, depending on how much work would be needed to implement them.
D
Ilia,
I
would
like
to
think
about
that
extra
kind
of
extent
in
a
background.
But
could
you
repeat
again
because
I
did
not
really
grasp
what
is
the
difference
for
a
customer
between
an
object
that
has
a
hole
inside
and
object
that
has
a
hole
inside,
but
it's
allocated.
B
The
difference
come
comes
up
in
in
encryption,
the
like,
in
particular,
just
speaking
speaking
about
zapperfest
and
and
fs
script,
which
which
jeff
layton
is
currently
bringing
into
set
of
s,
the
the
different
like
the
issues
that,
at
least
with
the
alpha
script
frame,
work
the
way
the
the
way
the
way
it
is.
It
is
implemented
the
holes
like.
B
Even
if
the
file
is
encrypted,
the
holes
still
remain
like
they
still
remain
false.
So
if
the
user
issues
a
read
from
a
whole,
they
should
get
zeros,
whereas
if
the
user
issues
a
read
from
a
from
an
encrypted
region
that
needs
to
be
that
needs
to
be
decrypted,
and
there
is,
there
is
a
corner
case
here
of
a
block
of
a
plain
text
block
that
encrypts
into
ciphertext.
That
is
all
zeros.
B
That's
that's
where
the
difference
comes
up,
because
if
this
is
a
ciphertext
of
all
zeros,
you
need
to
decrypt
those
zeroes
before
returning
them
to
the
user,
whereas
if
it's
a
hole
then
that
the
whole
need
not
be
decrypted
and
the
user
should
get
should
get
zeros-
okay,
hopefully
hopefully
that
wasn't:
okay,
okay,.
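The corner case can be shown with a deliberately trivial XOR "cipher" (purely illustrative — not fscrypt or LUKS): a plaintext block equal to the key encrypts to all-zero ciphertext, so an all-zero block on disk cannot be assumed to be a hole.

```python
# Why zero detection below an encryption layer is unsafe: stored
# all-zero bytes may be ciphertext that must be decrypted, while a
# hole returns zeros as-is.

KEY = bytes([0x5A]) * 16

def xor(block, key=KEY):
    # Toy "cipher": XOR with a fixed key (encrypt == decrypt).
    return bytes(a ^ b for a, b in zip(block, key))

plaintext = KEY                  # plaintext equal to the key...
ciphertext = xor(plaintext)      # ...encrypts to all zeros
assert ciphertext == bytes(16)

hole_read = bytes(16)            # reading a hole: zeros, no decryption
data_read = xor(ciphertext)      # reading the stored block: decrypt

# Silently turning the all-zero ciphertext into a hole would make the
# user read zeros instead of the real plaintext.
print(data_read == plaintext, hole_read == bytes(16))  # -> True True
```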
D
That
clearly
will
break
it.
If
we
like
had
zero
detection
on
each
block
and
there
will
be
some
infrastructure
that
will
cut
all
extents
blocks
being
all
zeros
just
making
holes
instead
of
writing
it
down,
then
it
will
break
that's
ffs
encryption.
I
understand
that,
but
I
don't
see
connection
of
that
with
having
a
whole
actual
hole
made
by
some
action
in
object,
like
maybe
zero
call
and
still
preserving
allocation
for
that
regular.
That's
the
something
I
don't
really
understand.
B
The
the
the
preserving
again
you
you're,
the
confusion,
comes
from
the
fact
that
I
I
kind
of
collated
the
two
topics
together,
the
the
the
preserving
like
that
is
completely
relevant
for
for
the
encryption
use
case.
So
the
the
preservation
goes
along
with
this
don't
transform
flag
which
I'm
guessing.
B
The
conclusion
is
that
it's
just
not
worth
it
so
just
forget
about
preserving
the
the
the
the
the
the
semantics
here
are
just
that
you
know
for
for,
for
this
new
kind
of
logical
extent,
the
semantics
are
just
that
blue
store
is
always
allowed
to
throw
out
the
the
the
the
the
physical
data
and.
B
Is
you
know
just
completely
skip
skip,
skip
writing
those,
but
it
should
not
punch
a
whole
at
the
logical
in
the
logical
extent,
that's
what
it
country
does
and
that's
what
breaks
the
encryption
use
case.
B
So,
if,
instead
of
if
instead
of
punching
a
hole,
it
would
just
mark
that
logical
extent
as
as
you
know,
with
with
that
special
with
that
special
meaning,
whether
it
discards
data
or
not
discards,
data,
cfs
and
rbd
as
far
as
encryption
is
concerned,
would
not
care
about.
A
Okay, yeah — thanks for clarifying that. So the summary would be: tracking zeros at the logical layer would enable us to use the BlueStore zero block detection feature in a real use case, the RBD thick-provisioned images case. So I think that is more worth pursuing, and checking the feasibility of — Adam, I would say, from the BlueStore aspect of things.
D
It requires me to check how much actual change would be needed to do that. Yeah.
B
Yeah
and
again
it's
completely
optional
like
if
we,
if
we
think
that
the
zero
block
detection
can
stay,
you
know
developer
only
testing
only
thing,
then
none
of
this
is
needed.
A
Sure
I
mean
that's
that's
why
I'm
asking
adam
to
spend
some
time
and
check
the
feasibility,
given
that
this
is
very
recent
in
our
minds
and
writing
this
down
would
be
also
a
great
start
like
this
is
what
we
explored
in
this
scenes
seems
not
worth.
It
also
is
a
good
way
to
go
about
it.
B
Okay,
I
think
that's
it
for
this
topic,
then,
and
sorry
for
kind
of
colliding
the
two
together.
It
should
have
been
two
different
points
with
to
do
two
different:
two
different
discussions
and
management.
A
Yeah. I think, finally, when you explained the use cases, that clarified things for us all. Good.

Okay, let's move on to the next topic: use boost accumulators for stats collection. Who added this?
E
Hey
you
will
oh
hi
yeah.
I
sent
an
email
to
the
list
couple
days
ago.
I
I
was
looking
to
add
something
to
the
proof
counters
that
we
use
and
found
out
that
we
have
our
own
implementation
of
of
different
proof
counters.
E
Boost accumulators are highly optimized for when you're trying to collect multiple stats using the same accumulation. So, for example, you want the count, the average and the standard deviation of some value: it knows how to do that in the most efficient way, and it has some advanced algorithms for doing things like percentiles, not based on histograms but much more efficiently. So it's really a well-established, efficient library for collecting statistics.
E
So
what
I
was
wondering
is
if
this
was
ever
considered
or
if
it
makes
sense
to
try
to
embed
it
inside
our
perf
counters.
I
would
probably
avoid
changing
the
perf
counters
api.
E
So
because,
probably
tons
of
pricing,
the
code
to
touch
that,
but
under
the
hood
I
mean
no,
it
doesn't
matter
what
action
mechanisms
we
use
and
given
that
we
have
a
relatively
small
set
of
statistical
collection
mechanisms
there,
I
thought
that
we
can
change
the
underlying
mechanism
to
use.
A
Anybody
have
any
experience
with
post
accumulators-
I
I
don't
really
recall
or
have
I
don't
remember
when
we
made
these
choices
about
implementing
our
own
performance
counters,
but
I'm
not
even
sure
we
ever
considered
the
boost
once
since,
clearly
it
sounds
promising
anybody
have
any
context
around
this
area
on
the
call.
B
Well
boost
accumulators
are
used
to
some
extent
in
in
libra
bd
and
it's
working
fine,
so
yeah
as
long
as
the
the
proof
calendar
api
is
preserved.
B
You
know
that
the
you
know
so
that
they're
there
the
changes
don't
trickle
or
come
down
across
the
entire
code
base.
Then
it
sounds.
It
sounds
fine
to
me.
G
I think another consideration is how we export this data to Prometheus.
E
So I mean, if you have code somewhere that just calls, like, average or max or mean or whatever on the accumulator, then those are just APIs, so that would work the same. If you actually want to export the internal values, then — I mean, the internal values of the boost accumulators are internal, but they do have serialize and deserialize interfaces. In our case it would just be to serialize and expose the internals in that way.
H
Yeah, I was just saying that in Crimson, the process of basically sending stats to the manager — specifically PG stats — is somewhat significant in terms of overhead on the single reactor. It's less of a big deal when we do multi-reactor, right, since that workload per reactor should be less. But generally speaking, for any kind of performance counters, stats collection, stats dissemination, the more efficient we can make that, the better.

So I don't know how boost accumulators look compared to what we have now, but I think whether they're worth it probably lives or dies on whether they make that process more or less efficient. If it's the same, I guess, who cares — but that's my input, I guess.
E
So
if
you
talk
about
it,
alright,
it
depends
how
complex
the
stats
that
we're
collecting
we're
just
collecting
averages.
Then
it's
pretty
much
the
same
implementation.
You
have
two
counters
one
for
the
sum
one
for
the
count
and
whenever
you
call
average
you
just
divide
them
so
pretty
pretty
much
the
same
thing:
the
the
real
value
from
the
booster
accumulators.
E
If
you
want
to
collect
like
on
the
same
measurement,
if
you
want
to
collect
multiple
stacks,
then
they
know
how
to
optimize
that,
in
a
way
that
you
do
the
minimum
amount
of
calculations
for
this
stack
and
of
course
they
have
a
much
bigger
variety
of
sites
that
we
currently
don't
use,
maybe
have
our
own
implementations
if
we
do
implement
percentiles
or
mediums
or
something
like
that
using
a
histogram.
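The kind of single-pass sharing being described can be sketched with Welford's algorithm, which computes count, mean and standard deviation in one accumulation (a sketch of the idea, not the Boost implementation):

```python
# One accumulation feeding several dependent stats: each add() updates
# count, mean and the running sum of squared deviations (Welford's
# algorithm), so stddev needs no second pass over the data.
import math

class Accumulator:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                # sum of squared deviations

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def stddev(self):
        # Population standard deviation over the values seen so far.
        return math.sqrt(self.m2 / self.n) if self.n else 0.0

acc = Accumulator()
for v in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    acc.add(v)
print(acc.n, acc.mean, acc.stddev())
```

This mirrors the point above: a naive two-counter average stays the same, but once several stats share one measurement stream, a combined accumulator avoids redundant work.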
I
Well, the question — if we are talking about efficiency — is how much of a penalty we are getting in the classical OSD. A long time ago I took a look: I was trying to optimize the current perf counter implementation. However, I had to change the API, so this was painful — this was really painful. In the end we didn't merge it; it's closed.

For Crimson, I hope — I haven't checked it, but I put a lot of hope in — not needing the atomics in the Seastar world. This is in contrast to what we have in the classical OSD: okay, there is sharding of counters, but still, atomics must be used.
I
Well, what I was trying to do — actually, the proposal — was free of atomics: everything was counted locally; every single thread was getting a piece of memory for storing local stats.
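The atomics-free scheme described here can be sketched as sharded counters: each shard is written by only one thread, and aggregation happens only at read time (illustrative only; Python threads stand in for the per-thread shards):

```python
# Sharded counter without atomic read-modify-write: each thread owns
# one slot, so increments never contend; readers sum the slots when
# stats are reported.
import threading

NUM_SHARDS = 4

class ShardedCounter:
    def __init__(self, shards=NUM_SHARDS):
        self.slots = [0] * shards

    def inc(self, shard, n=1):
        # Only the shard's owning thread writes this slot.
        self.slots[shard] += n

    def read(self):
        # Aggregation cost is paid only on the (rare) read path.
        return sum(self.slots)

c = ShardedCounter()
threads = [
    threading.Thread(target=lambda s=s: [c.inc(s) for _ in range(1000)])
    for s in range(NUM_SHARDS)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.read())   # -> 4000
```

A concurrent reader may see a slightly stale total, which is usually acceptable for perf counters; the payoff is that the hot increment path needs no atomic operations.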
F
If you take a look at Seastar's documentation regarding the counters, they have a description of how they plan to export those — I think they have the code to do that. I glanced at it a few weeks ago; I don't remember the details, but we shouldn't duplicate that.
A
Maybe
you
can
sync
up
with
ronan
and
figure
out
what
what
he
has
in
terms
of
sea,
star
of
counters
and
clearly
that's
something
to
keep
in
mind
because
you
know,
as
as
crimson
keeps
maturing.
We
want
classic
and
crimson
to
be
in
at
par
in
terms
of
features
and
in
terms
of
functionality.
A
But
going
back
to
this
the
boost
accumulators
thing.
So
I
think
we
all
agree
that
we
don't.
We
want
the
api
to
not
change,
but
in
terms
of
efficiency.
Like
you,
you
said
that
it
is
more
efficient
and
we,
it
has.
You
know
other
things
like
media
and
etc
what
you
mentioned.
But
how
do
we
measure
the
efficiency
like
you
have
any
thoughts
on
like
how
do
we
even
like
compare.
E
Well,
we
can
do
the
the
regular
benchmark,
assuming
that
the
the
counters
are
touched
right
when
you
just
run
a
regular
benchmark
of
the
code
and
see,
if
there's
a
difference,
I
don't
suspect
there'll,
be
an
improvement
again,
it
depends
what
kind
of
stats
you're
using
if
we're
just
using
averages
or
summaries,
I
mean
there's
no
difference.
It's
the
same
thing.
If
we're
going
to.
A
If
we
extend
that,
that's
when
we
probably
have
the
gains
right,
yes,
instead
of.
E
Yeah, yes. I mean, I can search the code and see where and why we use histograms, because we have our own implementation of histograms inside the perf counters. So it depends how and where we use it, but that could be something that would improve the performance. And if in the future, once these things are available, people use standard deviation, then — I mean, we can have a similar implementation ourselves, but if we just naively implement standard deviation, then our implementation would be less efficient. So what I'm saying is that the basic test would be to see that we don't have worse performance. That could happen, because maybe we have some concurrency mechanism smartly built into our counters that we don't have in boost. But other than that —
A
Okay,
if
not
thanks
thanks
you
all
for
driving
this
discussion,
let's
move
on
to
the
last
topic,
so
this
is
about
logical,
large-scale
testing,
and
this
is
kind
of
related
to
the
blue
store.
Zero
detection
feature
that
got
added
to
enable
large
scale
testing
in
pathology.
C
I can start. So, first off, yeah — this almost should have been the second bullet point, I guess, to expand on all the discussion about maintaining the contract of BlueStore, and I agree with all of that moving forward. But for now, the immediate fix for BlueStore zero block detection — the goal of which is to use it in Teuthology for simulating large-scale testing — is that we disabled it on clusters by default.
C
So the goal of this BlueStore zero block detection feature is to set up a lot of daemons with limited physical hardware and simulate a large cluster, without needing a lot of capacity or performance and without filling up the disks. And now we have a global config option — disabled by default, of course — that we can choose to enable for Teuthology testing, so that we can test this out. The idea would be to extrapolate from existing tests that already exist in Teuthology.
C
We
can
develop
more
but
as
a
starting
point,
use
existing
tests
and
increase
the
number
of
demons
that
are
used
there.
There's
currently
ashforia.
Do
you
remember
what
the
limit
was?
I
remember
you
said
it
was
six.
Maybe.
J
I think per node right now we probably have three or four OSDs, so we probably want to experiment with more after turning on this feature, and also figure out, in the tests, how we can write zeroed objects. Laura and I are going to work on that and figure it out, so that we can make use of the zero detection change.
C
Thanks. And the idea would eventually be to create a new folder in the rados suite that we can use to start creating tests for this. Right now we're working on kind of a proof of concept of this idea.

Was there anything else that you want me to go over for that?
A
No, I think — thanks for describing the plan of action. I'll touch upon the motivation. It has come up multiple times, and we discussed it in CDS for Reef as well, that we lack larger-scale tests in Teuthology. And when I say larger-scale tests — like Aishwarya mentioned, in Teuthology we have the concept of roles, and each node plays a role of three OSDs at max.

We have probably six nodes, so we're probably doing tests with fewer than 20 OSDs at max. We want to extend that, and with this zero detection feature we will be able to do that without filling up the actual Teuthology machines. Also, in light of the recent regressions and such, it's all the more important for us to do larger-scale tests, to be able to catch things that do need a larger number of OSDs than just 20 or so.
C
Yeah, go ahead. I'll just say, for final clarity about the initial conversation — about how it interacts badly with RBD and CephFS features: disabling this feature will be the immediate fix there, and we're separating that out for now, just for clarity for everybody on the call.