From YouTube: Ceph Month 2021: RADOS Update
Description
By Neha Ojha
Slides: https://www.slideshare.net/Inktank_Ceph/ceph-month-2021-rados-update
Ceph Month 2021 Schedule: https://pad.ceph.com/p/ceph-month-june-2021
All right, so let's start with a quick introduction of myself. My name is Neha Ojha and I'm the tech lead for the RADOS team; I've been the tech lead for a few years.
Now, today I'm going to be giving a RADOS update about Pacific and what's coming up in Quincy. We'll start with what has already been released in Pacific a few months back, and then we'll move on to what's new and exciting coming in Quincy. I've tried to structure the presentation around these four basic themes: usability, performance, quality and ecosystem.
Essentially, there will be some features that cross boundaries, but I've tried my best to categorize them in the most meaningful way.
So, let's begin with the usability features that have been added in Pacific. The first thing we have here is that the upmap balancer is on by default. I'm pretty sure everybody is aware that we have this balancer module; it's a manager module which is responsible for balancing PGs across OSDs.
Earlier versions did not have this on by default, which made it a manual step for users to enable, but with Pacific we have it on by default. I think we have confidence that it's pretty stable, so we made this decision, and hopefully users can benefit from it.
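For anyone still on an older release, here is a minimal sketch of checking and enabling the balancer by hand, shelling out to the `ceph` CLI from Python. The `ceph balancer` subcommands are the documented ones; the exact field names in the status output may vary slightly by release.

```python
import json
import subprocess

def ceph(*args):
    """Run a ceph CLI command and return its stdout as text."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

# Inspect the current balancer state; Pacific ships with the upmap
# balancer already active, while older releases may not.
status = json.loads(ceph("balancer", "status", "--format", "json"))
print("active:", status.get("active"), "mode:", status.get("mode"))

# On a pre-Pacific cluster the module had to be switched on by hand.
if not status.get("active"):
    ceph("balancer", "mode", "upmap")
    ceph("balancer", "on")
```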
Then we have this new health warning, which essentially does the job of detecting different daemon versions and alerting the user that they have different daemon versions running in their cluster. There is an associated health alert that you will see in the ceph health section.
Now, we do understand that this is sometimes expected when you're doing an upgrade, or you intentionally had to upgrade some daemon, so you can always use the muting feature that we have to mute this health warning. But in general this is just an additional guardrail we've added so that users, intentionally or unintentionally, don't forget and leave their cluster in a limbo state after partially upgrading it.
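As a rough example, during a planned rolling upgrade you could acknowledge the mixed-version warning for a limited time. A small sketch using the `ceph` CLI follows; the health code DAEMON_OLD_VERSION and the duration syntax are taken from the health-mute documentation, so verify them against your release.

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Show current health detail, including any DAEMON_OLD_VERSION warning
# raised when daemons have been running mixed versions for a while.
ceph("health", "detail")

# Silence that specific warning for the expected upgrade window (1 day);
# the mute expires automatically once the TTL runs out.
ceph("health", "mute", "DAEMON_OLD_VERSION", "1d")

# Once the upgrade is finished, clear the mute explicitly.
ceph("health", "unmute", "DAEMON_OLD_VERSION")
```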
So that's about it for the health warning. Then there's the ability to cancel ongoing scrubs. I would like to highlight that when we say ongoing scrubs, these are scheduled scrubs: Ceph has scheduled scrubs that run at some configured interval, and for reasons like maintenance going on, or some scenario that you hadn't predicted, you may want to cancel scrubs in order to prioritize something else.
Earlier you did not have the ability to do that, but now you can cancel any ongoing scheduled scrubs in the system.
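A sketch of one way to pause and abort scrubbing during an unplanned maintenance window, assuming the Pacific behavior in which setting the cluster-wide noscrub and nodeep-scrub flags also aborts scrubs that are already in flight (double-check this against the release notes for your version):

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Setting these flags stops new scheduled scrubs from starting and, with
# this Pacific change, should also abort scrubs currently in progress.
ceph("osd", "set", "noscrub")
ceph("osd", "set", "nodeep-scrub")

# ... do the maintenance / let the higher-priority work finish ...

# Re-enable scheduled scrubbing afterwards.
ceph("osd", "unset", "noscrub")
ceph("osd", "unset", "nodeep-scrub")
```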
Moving on, we've got an improvement to the progress module, in the way recovery progress is shown in ceph status. Earlier we had separate recovery progress shown for different PGs; now there is a consolidated recovery progress shown as part of ceph status, and you can always use ceph progress to see more details.
The next one is something completely new, and I'm excited to talk about it. This is work in progress, of course, but it is a framework for distributed tracing in the OSD by means of a tool called Jaeger. The idea is to be able to add tracepoints in the OSD.
So this is again work in progress, but hopefully it will make debugging way easier for everybody using Ceph. All right.
So, moving on to quality. I like to call this robustness, because, yes, we always have quality issues, but I think we want to make the system more robust in order to not have quality issues. The first thing we have here, and I also want to note that it is something we've backported, is improved PG deletion performance.
Earlier it was noted by a lot of users that PG deletion performance wasn't up to the mark. We did realize that and fixed the issue, and we also did some recent comparisons of PG deletion performance before and after this optimization was done. We found that we've improved 4x in terms of plain PG deletion performance, so this is pretty exciting news in general.
So this is a stability feature and has been backported to several versions. Next, I want to talk about messenger 2.1, which is essentially the new wire format for messenger v2, for both CRC and secure modes. It comes with a whole bunch of security fixes as well, and it is going to be the new default messenger version.
Moving on, there are also some efficiency improvements in the manager. This is again something we've gotten some reports about, and we've fixed it. The first one is about the progress module: it turned out that the progress module being always-on had some issues at scale, and at that point it was really hard for users to get around it, because there was no ability to turn it off.
So what we've done here is add the ability to turn the progress module off. Not that we would recommend it, or say that you would need to do it, but for whatever reason, if things decide to go wild, you should be able to do that. Again, this is useful and relevant for older releases, so we've backported it.
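If the always-on progress module ever does become a problem at scale, the new toggle is a single command. A minimal sketch, assuming the `ceph progress off` and `ceph progress on` commands described for this feature:

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Disable progress event tracking entirely (not normally recommended,
# but available as an escape hatch on very large clusters).
ceph("progress", "off")

# ...and turn it back on once the cluster has settled down.
ceph("progress", "on")
```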
The next topic here is about efficient use of large C++ structures.
This is not just relevant to the progress module but to a lot of other manager modules. Essentially, all the manager modules end up using a dump of basic OSD stats, PG stats and that kind of stuff, and we found that there was a lot of redundancy; we also found that there was a lot of extra information that we ended up dumping even when we did not need it.
So we ended up doing a thorough search into what is needed versus what is being dumped, to make the handling of these C++ structures easier in the Ceph code base. In future, we are also planning to see how we can actually share some of these data structures, and even cache some of them to some extent, because some of these modules don't really need to be updated all that frequently, so we can afford to cache some information and use it.
You know, separate manager modules can consume it, versus individually trying to dump all this information. So we think these are stability improvements that are going to be really helpful in terms of the manager. Next, there is an improvement in the ceph device ls output: we now also have SSD wear levels displayed in this output, so that you can monitor them on a regular basis.
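A small sketch of how you might keep an eye on this from a script, using `ceph device ls` plus `ceph device get-health-metrics` for each device. The JSON field names here are assumptions based on recent releases, so treat the parsing as illustrative.

```python
import json
import subprocess

def ceph(*args):
    return subprocess.run(
        ["ceph", *args, "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

# List known devices; in Pacific the plain-text form of this command
# also shows an estimated SSD wear level column.
devices = json.loads(ceph("device", "ls"))

for dev in devices:
    devid = dev.get("devid")
    # Dump the raw SMART/health samples collected for one device.
    metrics = json.loads(ceph("device", "get-health-metrics", devid))
    print(devid, "samples:", len(metrics))
```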
Moving on, I think this one I'm going to probably skim through, because a lot of it was already covered by Josh in the earlier talk.
Here, we have a public telemetry dashboard that everybody can view; you just need GitHub authentication to be able to use it, which essentially means that if you have a GitHub account, you should be able to use it. From a RADOS perspective, I'm going to talk about more of the things we are trying to do to make telemetry more useful in the later slides, so I'm going to just move to the next one.
Okay, moving on to performance. I'm sure a lot of you are excited to know what we are doing in terms of performance. The first thing I want to talk about under BlueStore is RocksDB sharding. The idea here is that before Pacific, all the key/value data that went into RocksDB was under a common column family, which is a concept in RocksDB.
I don't want to go into further details on that, but it's worth mentioning that we will probably share these slides later on, and I've tried to add links to relevant documentation that you can visit to understand in more detail what these features are about. But in terms of RocksDB sharding:
The idea is that in Pacific we have separate column families to be able to manage data of common types separately. That also helps with compaction, in terms of doing smaller compactions versus one big compaction of everything in the default column family, and hence it also reduces disk space requirements.
It's worth mentioning that RocksDB sharding is enabled by default in Pacific, but anybody upgrading will have to go through a manual step of enabling it on their upgraded cluster.
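For upgraded OSDs, that manual step is an offline reshard with ceph-bluestore-tool. A rough sketch is below; rather than hard-coding a sharding spec, it reads the release's default from the bluestore_rocksdb_cfs option, and the OSD must be stopped while the tool runs (check the BlueStore docs for the exact invocation on your release).

```python
import subprocess

OSD_ID = 0  # placeholder: the OSD being resharded; stop it before running this

# Read the release's default sharding definition instead of hard-coding it.
sharding = subprocess.run(
    ["ceph", "config", "get", f"osd.{OSD_ID}", "bluestore_rocksdb_cfs"],
    check=True, capture_output=True, text=True,
).stdout.strip()

# Apply it to the existing OSD's RocksDB with the offline tool.
subprocess.run(
    ["ceph-bluestore-tool",
     "--path", f"/var/lib/ceph/osd/ceph-{OSD_ID}",
     "--sharding", sharding,
     "reshard"],
    check=True,
)
```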
Next we've got the hybrid allocator. This is also something which is widely used and has also been backported because of all its goodness: it has lower memory use and disk fragmentation, and it performs much better than the bitmap allocator and our legacy stupid allocator. Other than that, we've got 4K min_alloc_size changes: we made this change for SSDs earlier on, but with the hybrid allocator in place,
we could also take the decision to make the min_alloc_size 4K for hard disks. This essentially provides much better, lower space utilization for smaller objects, and it will clearly be visible to users. The last couple of topics here are about efficient caching: we've done a bunch of work in terms of making caching more efficient and also tracking memory at a finer granularity.
We have this concept of mempools, and we have tried to make sure that all the memory being used is assigned to a proper mempool so that users are able to view it. There's a simple dump_mempools command that users can use to see where memory is being used, so if there is a bug, or some issue where one mempool ends up using much more than required, then we know which areas to attack and debug further.
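For example, to see where an OSD's memory is going you can query its admin socket. A sketch assuming a local osd.0 and the usual by_pool/items/bytes layout of the dump_mempools output (field names may differ slightly between releases):

```python
import json
import subprocess

# Query the admin socket of a local OSD for its mempool accounting.
out = subprocess.run(
    ["ceph", "daemon", "osd.0", "dump_mempools"],
    check=True, capture_output=True, text=True,
).stdout

pools = json.loads(out)["mempool"]["by_pool"]

# Print the pools sorted by how many bytes each currently holds.
for name, usage in sorted(
    pools.items(), key=lambda kv: kv[1]["bytes"], reverse=True
):
    print(f"{name:35s} {usage['bytes'] / 2**20:10.1f} MiB "
          f"({usage['items']} items)")
```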
So that's all for the BlueStore stuff in Pacific. Next, I'm really excited to announce that we have quality of service in the OSD in Pacific.
It is not on by default, so the behavior is not going to be enabled by default, but the idea is that QoS will be provided by means of a scheduler called the mClock scheduler. Essentially, before this we had WPQ, the weighted priority queue, which we were using to prioritize client I/O versus background activities.
But now this will be done under the hood by the mClock scheduler. You can go read about what it does, but it essentially makes everything work under the hood, and the way we have implemented it is that we have these different profiles that users can choose from to prioritize client I/O versus recovery and other background tasks in the OSD.
The reason we have these profiles is that, under the hood, the mClock scheduler has a bunch of configuration options, and they are also dependent on Ceph config options. We don't want users to go through the trouble of trying to understand what those options mean, or even have to tune them, so we have these config sets implemented under the hood to hide the complexity of tuning mClock parameters and other Ceph parameters.
So it should be as easy as just choosing a profile and saying, I want my client I/O to be prioritized at this time, and it will just do everything under the hood. For Pacific, what we have done is that all these Ceph parameters have been optimized for best performance on SSDs, based off of the extensive testing we've done in our lab, but in future we are going to be extending this to hard disks as well.
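A sketch of how you could opt in to the mClock scheduler on Pacific and pick a profile, using the osd_op_queue and osd_mclock_profile options from the mClock documentation (profile names there include balanced, high_client_ops and high_recovery_ops; OSDs typically need a restart for the op-queue change to take effect):

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Switch the OSD op queue from the default WPQ to the mClock scheduler.
ceph("config", "set", "osd", "osd_op_queue", "mclock_scheduler")

# Pick a built-in profile instead of tuning individual mClock knobs;
# this one favors client I/O over recovery and other background work.
ceph("config", "set", "osd", "osd_mclock_profile", "high_client_ops")

# Confirm what a given OSD will actually use.
ceph("config", "show", "osd.0", "osd_op_queue")
```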
Now, a quick update on what's in Crimson in Pacific. For anybody who's not aware of what Crimson is, it's a high-performance rewrite of the OSD. In Crimson we have an implementation of recovery and backfill, and we've also added a scrub state machine. It's common to both the classic OSD and the Crimson OSD, and it has been added to lay the groundwork for the scrub implementation in the Crimson OSD. It's similar to the peering state concept that we introduced earlier.
It's a scrub state machine that has been added in Pacific. In terms of backing stores, SeaStore is going to be the backing store for Crimson eventually, and the initial prototype of SeaStore that is in place targets both ZNS devices and traditional SSDs. We've got an initial implementation of onode trees, omap and LBA mappings for Pacific.
The implementation that we have in Pacific will essentially let you run simple RBD workloads, nothing too fancy, but in general you should be able to start up a cluster and run simple fio RBD workloads on it. And we do have a compatibility layer with BlueStore.
The way it has been implemented is in the form of AlienStore. This is just to make sure that you can start running Crimson in testing, and even in production clusters, earlier than SeaStore can come into full shape, and also to provide a migration path to existing users.
Another thing in Pacific is stretch mode. The idea is to be able to configure a stretch cluster across data centers with an arbiter, a third location which can break ties in case there is a network partition or something of a similar sort. There are policies around this that you need to follow to be able to configure this kind of cluster; I'm not going to go into too much detail, but there is again a link that you can go read later on.
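A sketch of the monitor-side commands involved, based on the stretch mode documentation. The monitor names, datacenter names and the stretch_rule CRUSH rule below are made-up placeholders, so check the exact syntax and prerequisites for your release.

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Tell each monitor which datacenter it lives in
# (mon names a-e and the datacenter names are placeholders).
ceph("mon", "set_location", "a", "datacenter=site1")
ceph("mon", "set_location", "b", "datacenter=site1")
ceph("mon", "set_location", "c", "datacenter=site2")
ceph("mon", "set_location", "d", "datacenter=site2")
ceph("mon", "set_location", "e", "datacenter=site3")  # arbiter / tiebreaker

# Enter stretch mode: mon "e" becomes the tiebreaker, and replicated pools
# switch to a CRUSH rule ("stretch_rule" here) that splits copies across
# the two datacenters.
ceph("mon", "enable_stretch_mode", "e", "stretch_rule", "datacenter")
```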
Now, moving on to Quincy: what's coming in Quincy in terms of usability. I am really excited to announce that the mClock scheduler is going to be the default, and some of you must be wondering why I am talking about the mClock scheduler under usability. It's because this eliminates the need to set the thousands of throttles that we currently have in the OSD to make sure that client I/O is prioritized over background ops, and vice versa, whenever needed.
So you don't have to set any of those; the mClock scheduler will do the prioritization all by itself. There is also an automatic benchmarking step which will be performed on OSD startup, and nothing has to be done by the user. This benchmark is going to be run on OSD startup, and it essentially goes and sets a bunch of parameters that are needed by mClock to make more informed decisions.
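As a rough illustration of what that benchmark feeds into, you can inspect the per-OSD capacity values it records. The osd_mclock_max_capacity_iops_ssd and _hdd option names below are taken from the mClock docs; treat them as an assumption and verify against your release.

```python
import subprocess

def ceph(*args):
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout

# The startup benchmark stores its measured IOPS capacity as per-OSD
# config values, which mClock then uses to apportion client vs. background work.
for opt in ("osd_mclock_max_capacity_iops_ssd",
            "osd_mclock_max_capacity_iops_hdd"):
    print(opt, "=", ceph("config", "show", "osd.0", opt).strip())
```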
In terms of the autoscaler, there is going to be a new profile in Quincy. The default behavior we have is a scale-up behavior: when there is a need for more PGs, we go and expand. But there is this other profile, called the scale-down profile, which is going to be the default in new clusters. This is essentially for better performance out of the box.
This is implemented in the form of a back-pressure mechanism: every pool starts out with a large number of PGs, and when there is unequal usage across pools, we increase the number of PGs on the one that needs more and reduce it on the one that does not.
So this is again something new that is going to be coming up in the PG autoscaler world. In terms of the balancer: inherently, the upmap-based balancer just balances by the number of PGs per OSD.
But sometimes this is not efficient, because you may have a scenario where your OSDs are unequally sized, and there are other cases as well where you might want to actually balance based on the actual OSD utilization versus just the number of PGs. So now we are planning to change this algorithm to also take into account OSD utilization. And finally, displaying the degree of degradedness.
Here the idea is to give more meaningful information about how degraded the Ceph cluster is, versus just what we do currently, where we have a percentage of degraded objects that we display in ceph health. The idea is to be able to let you know how bad some PGs are versus others. You can think of an example where, if you lose one copy, the PG may become inaccessible after that point, so those will be higher priority
in terms of getting your degradedness fixed, versus somewhere you can afford to lose your copies and still keep the PGs active and keep writing to them. Those kinds of details are what we plan to add further in terms of ceph health improvement. Moving on, in terms of quality: more improvements in the monitor. The monitor is going to dynamically adjust the trimming rate; currently we have a hard value that we have set in the monitors.
In cases where the ingest rate in the monitors becomes much higher than the trimming rate, there can be cases where the monitor just gets backlogged and stops keeping up completely. So in those scenarios we have made that config option change dynamically, to adjust the trimming rate when the ingest rate is higher.
This is also something which we think is relevant to older versions, so we will be backporting it. There are further improvements to PG deletion performance, in terms of using range deletes wherever possible to make the PG deletion code more efficient in terms of RocksDB as well.
There's an outstanding PR where we are discussing all these kinds of improvements. Then, more improvements in terms of manager scalability. There is the manager stats period: like I mentioned earlier, all the manager modules use stats from the OSDs and other places, and right now there is a hard value that decides the frequency at which the manager is capturing all these kinds of stats.
But what we plan to do, similar to the monitors doing dynamic trimming adjustment, is to auto-tune the period at which the stats are collected from the OSDs, just to make sure that the managers don't become too clogged with just getting stats from the OSDs. Beyond that, there are other scalability improvements for the progress module and the insights module; both of them have had issues in the past with scale and trying to handle too much data.
So we are trying to do similar things there: for the progress module we are going to add a configurable interval at which PG map updates will be captured, and for the insights module there is a lot of data that we currently persist that we are probably going to stop persisting in the monitor DB completely, or even use the common .mgr pool that got introduced in Quincy for these purposes.
There are further improvements to slow op logging. Currently we log everything in the cluster log, and it also goes through Paxos and gets persisted in the monitor store. We want to eliminate all that complexity and just have a configurable number of slow ops that we capture and add to the manager log.
The OSDs are going to report those slow ops to the manager, because for debugging purposes all we care about is what kind of slow ops are showing up and when they start coming up. So I think that is an efficiency improvement that is going to be really helpful for larger clusters.
The other thing we have is that we also want to make the whole log monitor code more efficient, so that's also something we'll be working on in Quincy. In terms of performance, quality of service is going to be expanded: as I said, we are going to be doing more optimizations for HDDs.
We are also going to include background activities like scrubbing and PG deletion for prioritization purposes. The default profile that we'll be using in Quincy is going to prioritize client I/O, but users can always go and configure their cluster to use a different profile; for example, you can use the high_recovery_ops profile, which will prioritize recovery over client I/O. And finally, the next big piece here is client versus client QoS.
All the earlier stuff that I talked about was essentially client I/O versus background work in the OSD; this is going to be QoS in terms of different clients, not the OSD, and this again is work in progress. Now coming to BlueStore, the one big change that is coming up is removing allocation metadata from RocksDB. We've realized that, with RocksDB sharding, this is possible.
The idea is to be able to remove this separate column family that stores allocation metadata in RocksDB, which we update on every write, and instead rebuild the allocation map when needed. Essentially, this is only needed in cases when there is a failure, so the idea is to be able to rebuild this allocation map in failure scenarios, and what this gives us is a significant small-write performance improvement: you are saving on every write that you're doing to the cluster.
So I think this is again a huge improvement coming up in Quincy. There are other improvements to the split cache. This is more in terms of simplifying the code and making sure it is bug free: we found some recent bugs around locking in this area, and we want to ensure that we don't have such issues in future, and also simplify the code to make it more manageable. So that's the split cache idea here, and we also want to revisit cache age binning.
This is something we had been working on earlier, but we put it on hold because we wanted to get RocksDB sharding in first. So this is going to be built on top of RocksDB sharding; we just age the cache based on age, which is essentially what the name means, and we are trying to see whether it is still relevant or not after RocksDB sharding. And finally, this is probably more of a developer-friendly thing, or maybe user-friendly as well.
We have added a simple benchmark, called omap bench, to benchmark omap-heavy workloads. Earlier this was a challenge: we had to go through RGW workloads, or even COSBench, and all the setup that these benchmarks require, just to do simple benchmarking. So this is going to be a simple tool that users or devs can run to benchmark omap performance and also identify any performance regressions.
Finally, in terms of telemetry, as Josh mentioned earlier, we plan to add a performance channel in telemetry. The idea is to be able to capture useful metrics, like OSD perf counters, etc., from user clusters, so that we have more meaningful data points to profile our workloads on. And we, as devs, would like to use this data to adapt our existing guidelines as well, like some of the guidelines that we make about how big your WAL partition should be.
We might be able to make more informed decisions in terms of recommendations, and that's the reason I say it's definitely worth opting in.
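If you want to opt in, or just see what would be sent, the basic workflow is a couple of commands. A sketch using the telemetry commands from the Pacific docs, where the --license flag is the explicit data-sharing acknowledgement:

```python
import subprocess

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

# Inspect exactly what the telemetry module would report before opting in.
ceph("telemetry", "show")

# Opt in; the license flag records that you agree to share the data.
ceph("telemetry", "on", "--license", "sharing-1-0")

# Check the module status and which channels are enabled
# (a perf channel is planned for Quincy).
ceph("telemetry", "status")
```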
The next thing is about enabled manager modules. In every release we have some always-on modules and some modules that are not always on but that you can enable. We want to be able to gather information about which manager modules are actually being used in user clusters, to be able to improve those, or, you know, prioritize them more than the others.
In terms of Crimson, we are going to be adding the scrub implementation in Quincy, and there's going to be support for snapshots. That's the last piece: when I mentioned we were able to run simple RBD workloads, once we have this snapshot piece we will have full support for RBD workloads in Crimson.
We're also going to be working on multi-core support. Currently it's a one-to-one core-to-OSD mapping, so we are planning to extend this to multi-core support. In terms of SeaStore, the main focus is going to be making it feature complete, so as to start initial teuthology testing. We've already started Crimson teuthology testing, extending the Crimson RADOS suite as we speak, but the idea is to also be able to test SeaStore in teuthology.
There's also going to be some initial performance work in this area, and we are also going to be working on the random block manager and support for persistent memory. So the ability to run on all kinds of devices is the next step here for Crimson. And finally, in terms of ecosystem.
This goes in with documentation, where docs get generated automatically, and finally Redmine integration with the tracker, which I think Josh already spoke about, so I'm going to skip that. But the idea is also to make some of our asserts and things in the OSD more unique, to be able to help telemetry reports be clustered in the right possible way.
B: So we have a couple of questions. The first one is: can QoS eventually be adapted to prioritize client I/O in different pools, as an example having a high-priority pool A and a low-priority pool B?
A: Yeah, I think that's the piece about client versus client. I can see that we can extend that to a pool as well, so essentially a pool could have a profile enabled which would let that pool be prioritized over other pools.
B: And the second question I have is: right now this is specifically for RBD, maybe for RADOS, but I'm guessing fancier RADOS stuff can't be run right now with SeaStore?
A: At the moment, as I said, in SeaStore there's basic ability: you can start up a vstart cluster and start running a simple RBD workload. Snapshots are still not there, but with Quincy I'm hoping that we will be able to also support snapshots, maybe if not with SeaStore; you should be able to have snapshots in Quincy, and later you can have both snapshots and SeaStore.
C: SeaStore for Pacific isn't capable of running an OSD; it's pretty much basic unit testing, but that's it. Quincy is when you should be able to start up an OSD and run an RBD workload.
B: Are there any other questions for Neha, specifically for this RADOS update?
B: All right. Thank you for the update, and we're now going to go ahead and start on Ceph on Windows with Alessandro.