From YouTube: April 2019 :: Ceph Developer Monthly
Description
Monthly developer meeting for the coordination of Ceph project development.
http://tracker.ceph.com/projects/ceph/wiki/Planning
A: I think you're the one who's going to be doing this. There was a thread where this came up that might be worth reviewing, although I don't think we need to go through all the details here. The main goal is to add a mon memory target option, like we have with the OSD: that's just the number of bytes of memory you're going to use, and the monitor does whatever it needs to do in order to stay within that.
A: Mark, you've talked about this. What the OSD currently does, in BlueStore right now, is there's only one knob, basically, that's being turned: the size of the BlueStore cache. There's a function in BlueStore that looks at its RSS, basically pulling it out of the allocator, and then uses that to calculate the new cache size it wants to go to.
A: It seems like the simplest thing would just be to copy that code, the little bit of code that's actually getting the size, and then implement a similar controller in the monitor's tick function. It's not clear to me that pulling out some separate infrastructure to do this necessarily makes sense, at least not initially, just to get this working.
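As a rough illustration of the kind of controller being described (the actual code lives in BlueStore and would be ported to the monitor in C++; this Python sketch and its names are assumptions for illustration only): on each tick, read the process RSS, compare it to the configured memory target, and nudge the cache size toward the target.

```python
import os


def get_rss_bytes() -> int:
    """Current resident set size, read from /proc/self/statm (Linux)."""
    with open("/proc/self/statm") as f:
        resident_pages = int(f.read().split()[1])
    return resident_pages * os.sysconf("SC_PAGE_SIZE")


def tick(memory_target: int, cache_size: int,
         min_cache: int = 64 * 1024 * 1024,
         max_cache: int = 64 * 1024 * 1024 * 1024) -> int:
    """One controller iteration: return the new cache size to use."""
    rss = get_rss_bytes()
    # Positive error means we are over the target and should shrink the
    # cache; negative means there is headroom and the cache can grow.
    error = rss - memory_target
    # Apply only a fraction of the correction per tick so the cache size
    # does not oscillate wildly between iterations.
    new_size = cache_size - error // 4
    return max(min_cache, min(max_cache, new_size))
```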
B: So that might be the way to go. That was kind of why I was trying to convince folks to hang on a little bit, so we could figure out how we're going to get this in. This is part of a branch that also does some other things, like changing BlueStore's cache to do eviction on write rather than in that loop, but we can ignore all of that.
A: I'm less sure about that change. Okay, the one thing that's a little bit different, maybe it's not actually different, is that in the monitor there are basically two things we can adjust. There's the RocksDB cache size, or caches, I guess there are multiple RocksDB caches, and then the monitor has a bunch of caches of OSDMaps that are all held in memory in decoded form.
B: So the idea behind it is that each cache can say, well, I want this much memory at this priority level, and it's in charge of defining what that means. Then, depending on how much memory is available, it gets some amount of fair share to start out with, like, okay, your fair share in the first round is going to be this much, and then, if there's memory left over, in a subsequent round at that priority level it can potentially get more memory.
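A loose sketch of the allocation scheme being described (the real PriorityCache interface in Ceph is C++; the priority names and structure here are illustrative assumptions): each cache states how many bytes it wants at each priority level, and at every level the available memory is handed out in fair-share rounds until the requests or the memory run out.

```python
PRIORITIES = ["pri0", "pri1", "pri2", "last"]  # highest to lowest


def assign_memory(requests: dict, total: int) -> dict:
    """requests[cache_name][priority] -> bytes wanted; returns bytes granted."""
    assigned = {name: 0 for name in requests}
    remaining = total
    for pri in PRIORITIES:
        want = {name: req.get(pri, 0) for name, req in requests.items()}
        # Hand out fair shares in rounds at this priority level; caches
        # that still want more pick up leftovers in subsequent rounds.
        while remaining > 0 and any(want.values()):
            hungry = [n for n, w in want.items() if w > 0]
            fair_share = max(remaining // len(hungry), 1)
            for name in hungry:
                grant = min(want[name], fair_share, remaining)
                assigned[name] += grant
                want[name] -= grant
                remaining -= grant
                if remaining == 0:
                    break
    return assigned


# Example: a block cache asking for more at high priority than an OSDMap
# cache wins when memory is tight, but both share whatever is left over.
# assign_memory({"rocksdb": {"pri0": 256 << 20}, "osdmap": {"pri0": 128 << 20}},
#               total=300 << 20)
```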
B: So the good news is that on the RocksDB side you can just use the existing PriorityCache interface; it's already implemented, so that's already there. It will already take care of the indexes and filters at, like, priority level zero, and as we update it and add other things, the goal with the age binning is that it will cache recent things in the block cache with high priority and older things with lower priority.
B: You'd need either some kind of interface for, or, you know, something to tie into the priority cache interface, I guess, for the OSDMap caches. But that's not hard; I mean, it's just some glue code, basically, or implement it directly there. So I don't think that would be bad at all, and there are lots of examples with the RocksDB caches, or the RocksDB block cache, and also the BlueStore buffer and onode caches. So it's pretty straightforward.
E: Sorry, I was muted in two places. So yeah, I think we can apply it to them as well. I haven't taken a close look at it yet, though. Okay.
A: All right, cool. Sebastian, do you want to go? We're mostly just looking for a quick update on the current status of the different orchestrator implementations, where we're at, and then probably a bunch of questions about what's next.
F
Yeah,
so
not
that
much
has
gone
was
was
done
in
the
last
few
weeks
after
the
Nautilus
release
so
and
especially
on
the
ansible
Orchestrator,
the
Jemma
girl
is
mainly
working
on
the
unstable
run,
a
service
and
not
so
much
on
me
and
so
Orchestrator.
Do
we
have
the
port
for
adding
and
removing
all
these
in
in
a
very
simple
form?
What
that's?
That's
it
from
an
update
perspective.
F: For the SSH orchestrator, after the initial pull request I didn't have much time for adding new features. The idea there is to change it to deploy containers, to ease deployment across distributions and for other benefits, but nothing has really happened on it since the initial pull request.
F: For the Rook orchestrator, there was some work on NFS Ganesha, which was awesome, and Rook itself changed how it deploys the cluster by changing the Kubernetes namespace; I'm working on a fix for that, and also fixing some other things in that ecosystem.
A: I have a question; I'm just looking at this chart again, at host add, ls, and rm. I guess the ls one could in theory be implemented everywhere, because it's just listing the hosts that are participating, or are available to participate, in the cluster. But add and rm make sense for SSH, because you're sort of manually managing the set of hosts there, and possibly not for Rook.
F: Yeah, there's some code in there that already does that. I have to check whether it still works or not, but I've seen some code for it in the Rook orchestrator already.
F
And
it's
brittle
if
Rock
changes
the
CDs
and
they
do
then
suddenly
the
orchestrator
does
not
really
work
anymore
and
no
one
is
really
notified.
There
is
no
automatic
way
to
match
the
cid
schema
to
the
implementation
of
the
orchestra,
because
there
is
no
schema
of
the
cid.
It's
just
how
rook
implements
reading
the
cid
and
that
can
can
change
some,
but
between
minor
versions
of
rock.
So
we
really
need
some
testing.
F: Yes; no, not today, but the idea is to have a bootstrap tool that does something like ceph-deploy for a very minimal cluster on the local machine, and then use the SSH orchestrator to set up the rest of the cluster. That would then be a way to replace ceph-deploy with the SSH orchestrator.
A: Yeah, my general feeling is that one nice possible future would be that there are essentially two orchestrators that we actively maintain. One is the Rook one, if you have a full-fledged Kubernetes environment, and one is the SSH one for basically everything else. And in the SSH case, the bootstrap would just stand up, say, a monitor and a manager and that's it, and then everything else would be a day-two operation.
A: Yeah, although even with just a CLI it'll be a similar experience to ceph-deploy. I think probably the next step there is to change the SSH one to run the daemons in containers; that avoids all the concerns and complexity around installing packages on the various distros, and then making the bootstrap piece match, so that when it does its bootstrap thing it just starts it in a container, probably.
A: The next thing on the list here is the telemetry and status reports. I don't think he's on yet, but he's been looking at the telemetry stuff. So, just a reminder: starting in Mimic we had a telemetry module. It's there in Nautilus with some updates; it's opt-in, it's off by default.
A
Burning,
Nautilus
I
changed
it
a
little
bit,
so
you
can
turn
it
on
and
you
can
do
toiletry
Show,
which
shows
what
the
telemetry
port
would
be,
and
so
you
can
see
what
information
would
be
shared
before
you
decide
to
turn
on,
and
this
is
a
separate
command
that
you
settle
energy
on.
So
the
upgrade
notes
for
Nautilus
basically
ask
people
to
turn
on
if
they're
comfortable
with
the
data,
that's
shared.
A
Those
also
so
that'll
include
what
version
the
code
was
running,
what
the
daemon
was
and
what
version
at
the
demon
and
and
then
the
stack
trace
for
where
it
crashed,
and
so,
if
I
actually
gets
reported,
then
as
developers
we
can
see
what
versions
are
crashing
we're
in
the
field,
and
so
we
can
prioritize
bugs
and
so
on,
independent
of
people
actually
noticing
and
going
and
opening
a
bug
to
get
or
whatever
so
potentially
super
valuable
for
developers.
A
So
the
end
start
looking
at
the
back
end
here
because
of
you
know,
fetus
ended
up.
It
was
reporting
to
a
elasticsearch
database
and
then,
like
I,
had
looked
at
it
since
memik
was
released,
turns
out
that
classic
searches
all
screwed
up
the
indexes
is
trying
to
index
every
field
and
was
airing
out
and
whatever
so
getting
that
fixed
up.
A
But
the
next
step
is
it
just
start
generating
some
like
useful
reports
out
of
that
and
I
wanted
to
just
get
people's
to
be
back
and
unlike
what
actually
we
wanted
to,
and
what
reports
would
be
useful
as
developers
so
I'll
just
put
a
pad
in
here
in
a
list,
but
I'm
thinking
things
like
or
clusters
size
distribution
of
clusters
so
have
how
many
clusters
are
deployed
and
reporting
in
what
how
big
they
are
installed.
Version
distribution
like
what
versions
of
stuff
are
actually
running
and
what
sizes
I.
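As a hedged sketch of the kind of fixed report being asked about here, a few lines of Python that aggregate ingested telemetry reports into cluster, size, and version distributions; the field names (version, num_osds) are assumptions for illustration, not the exact telemetry schema.

```python
from collections import Counter


def summarize(reports: list) -> dict:
    """Aggregate per-cluster telemetry reports into a small summary."""
    versions = Counter()
    size_buckets = Counter()
    for report in reports:
        versions[report.get("version", "unknown")] += 1
        osds = report.get("num_osds", 0)
        # Bucket clusters by OSD count so the distribution stays readable.
        if osds < 10:
            size_buckets["<10 OSDs"] += 1
        elif osds < 100:
            size_buckets["10-99 OSDs"] += 1
        else:
            size_buckets[">=100 OSDs"] += 1
    return {
        "clusters_reporting": len(reports),
        "version_distribution": dict(versions),
        "size_distribution": dict(size_buckets),
    }
```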
H: It may not be quite so bad, though, because a quick glance at a few of the most commonly occurring strings, if you filter out whitespace and friends, probably gives you a couple of function names that we can look for by occurrence. Although I worry that it'll mostly be whatever method in boost, or actually-execute-transaction, or something like that; but still, higher up the stack trace might be the interesting one.
A
Thinking
so
I'm
thinking,
we
can
do
a
couple
things.
They
can
look
at
just
the
number
of
crashes
by
version,
you
know
what
friction
to
crashy.
That
might
be
a
really
simple
report,
some
of
the
crash
reports,
if
it's
an
assert
that
fired
the
assert
metadata
is
in
there,
and
so
you
can
go
by
function
and
condition
ignoring
the
live,
number
number
it's
and
so
on,
because
that
gives
you
something
that
works
across
versions.
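A small sketch of that grouping idea: key each assert crash by function plus condition, ignoring the line number, and count occurrences per version. Field names such as assert_func and assert_condition are assumptions for illustration rather than a guaranteed match to the crash-report schema.

```python
from collections import Counter


def crash_signature(crash: dict) -> tuple:
    """Version-independent key for an assert crash: function + condition."""
    return (crash.get("assert_func", "unknown"),
            crash.get("assert_condition", "unknown"))


def crashes_by_signature_and_version(crashes: list) -> Counter:
    """Count crashes per (signature, ceph_version) pair."""
    counts = Counter()
    for crash in crashes:
        counts[(crash_signature(crash), crash.get("ceph_version", "?"))] += 1
    return counts
```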
A: There's the file, line, and thread name; if we just take the condition, and, oh, there's also a message, and there's the function. The message unfortunately includes the line number, but if you take the function and the condition, that should be a unique-ish key. Or we could literally strip out the uninteresting parts of the filename and line number, I guess.
A
Then
I
guess
that
what
worries
me
is
that
you
could
imagine
making
some
like
crazy
query
GUI
that
lets
you,
like
click
through
and
say,
like
oh
well,
I
see
this
interesting
crash.
What
other
versions
of
the
show
and
gives
a
nice
little
plot
of
whatever,
but
the
reality
is
that
we
probably
want
to
just
generate
a
couple
fix
reports.
Just
so
like
an
email
goes
out
to
somebody,
or
somebody
can
actually
look
at
it
I'm
in
order
to
that's
the
first
pass
and
all
just
also
just
it
just
discover
that
there's
an
issue
and.
I: So yeah, for the initial part, I think we just want to know whether it's a replicated pool or an EC pool, because, you know, asserts by themselves are not very useful if you don't have the background data. So we need to group failures and do some kind of association to figure out where we're seeing these. We need basic information like, as you already have, how many OSDs, what kind of pools, and the function; I mean, I see the function.
A: All right. Right now there's basically a huge blob of JSON; that's the telemetry report. Probably what needs to happen is, when it gets ingested, we record that report in a sort of raw form, but we also extract all of the individual crashes and insert those as independent records in a separate table that's just crashes. That way we have a narrower schema for the crashes that we can do queries and searches over. Maybe that's the next step to actually do.
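An illustrative sketch of that ingestion step, using SQLite purely as a stand-in backend: keep the raw report blob in one table and insert each crash as its own row in a narrower crashes table. The table and field names here are assumptions.

```python
import json
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS reports (report_id TEXT, raw TEXT);
CREATE TABLE IF NOT EXISTS crashes (report_id TEXT, crash_id TEXT,
                                    version TEXT, stack TEXT);
"""


def ingest_report(db: sqlite3.Connection, report_json: str) -> None:
    """Store the raw report and break its crashes out into their own rows."""
    report = json.loads(report_json)
    db.execute("INSERT INTO reports (report_id, raw) VALUES (?, ?)",
               (report.get("report_id"), report_json))
    for crash in report.get("crashes", []):
        db.execute(
            "INSERT INTO crashes (report_id, crash_id, version, stack) "
            "VALUES (?, ?, ?, ?)",
            (report.get("report_id"), crash.get("crash_id"),
             crash.get("ceph_version"),
             json.dumps(crash.get("backtrace", []))))
    db.commit()


# Usage: db = sqlite3.connect("telemetry.db"); db.executescript(SCHEMA)
```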
J: So, given the rise of Kubernetes and container workloads and things like that, one of the nice things is that the current CSI driver for RBD on Kubernetes optionally supports the rbd-nbd daemon, which exposes a kernel-backed block device where all the I/O is passed through to user space via socket communication back and forth.
J
So
going
forward.
I
have
a
ticket
open
with
the
subsea
sod
to
kind
of
improve
the
current
integration
with
RB
DMD
right
now,
the
CSI
driver.
If
it
detects
that
the
RB
d
MBD
application
is
available
within
the
the
container,
it
will
use
it
over
k,
RB
d
and
I
think
number
one
goal
should
be
that
that
should
be
a
user
choice
like
you're,
specifying
that
yeah.
J: Yep, to use rbd-nbd. And number two, they should not be invoking rbd-nbd directly; they should be using the rbd CLI, which already has the tooling for invoking everything. That's going to become more important going forward, because one of the goals is also, right now we run one daemon per block device and RBD image combo.
J
We
we
for
every
flash,
dev,
MBD
X
device,
that
maps
to
a
single
RB
d,
mb
d,
daemon
running
in
the
background,
and
that
RB
d,
mb
d
demon
is
only
handling
the
socket
communications
and
translating
read
and
write
I
a
request
from
the
NDB
block
device
driver
to
a
single
image.
So
the
goal
going
forward
would
also
be
be
nice
to
again
optionally
supports
cooling,
multiple
MBD,
RBD
connections
within
a
single
demon.
J
One
of
the
ways
that
would
work
is
is
that
somewhere
in
the
four
decks
somewhere
in
the
form
of
external
series,
Facebook
added
some
new
api's
or
net
link
api's
to
control
the
NB
d
block
device,
which
adds
some
nice
features
like
you,
can
dynamically
add
and
remove
block
devices
before
you
had
to.
When
you,
when
you
loaded
the
NB
d
block
device
driver,
you
had
to
say
I
only
want
like
five
block
device
drivers,
you
know,
Babylonia
would
create.
J
You
know
MD
zero
through
four
and
then
our
BD
m
BD
had
to
scan
through
all
the
available
block
devices
and
say:
oh
this
one's
not
being
used.
I
can
try
to
use
that
one
and
it
was
kind
of
a
hack
but
going
forward.
We
could
we
want
the
RB
d,
NB
d
to
optionally
body
use
the
network
interface
that's
available.
If
it
is
available,
we
can
now
dynamically
say:
hey
colonel
allocate
me
an
M,
BD
block
device
and
now
attach
it.
J: It was inspired by your testing comparing librbd with rbd-nbd; that's what kind of inspired it. We added, and it's already in the master branch, a simple block bio scheduler for sequential I/Os in librbd, because one of the reasons rbd-nbd was able to do so well on sequential I/O is that the kernel was already batching things up for it.
J
It
sent
it
down
the
the
path
so
now
lib
RBD
can
get
that
same
benefit
too,
just
in
general,
where,
if
it
season
lost
BIOS
and
in
sequence
against
the
same
vacuum,
object
I'll
send
them
all
as
a
single
unit
to
the
OSD.
Instead
of
sent
a
number
of
individual
ops
2
the
OSDs
on
the
same
object,
and
then
we
also-
or
it's
still
work-in-progress,
that
the
right
around
cash,
because
the
object
catcher
is
slow
yeah.
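A very small sketch of that batching idea, not librbd's actual C++ scheduler: buffer writes that continue contiguously within the same backing object and submit them as one operation, flushing the buffer whenever a write breaks the sequence.

```python
class SequentialWriteBatcher:
    def __init__(self, submit):
        self.submit = submit      # callback: (object_id, offset, data)
        self.pending = None       # (object_id, offset, bytearray)

    def write(self, object_id, offset, data: bytes):
        if self.pending:
            obj, off, buf = self.pending
            # Extend the pending write when the new one continues exactly
            # where the buffered one ends, within the same object.
            if obj == object_id and offset == off + len(buf):
                buf.extend(data)
                return
            self.flush()
        self.pending = (object_id, offset, bytearray(data))

    def flush(self):
        if self.pending:
            obj, off, buf = self.pending
            self.submit(obj, off, bytes(buf))
            self.pending = None


# Usage: batcher = SequentialWriteBatcher(submit=print)
# batcher.write("obj.0", 0, b"aa"); batcher.write("obj.0", 2, b"bb"); batcher.flush()
```

As described a bit later in the discussion, a read arriving at a scheduler like this would trigger flush() before the read is issued.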
J
It's
probably
for
our
biddies
point
of
view.
It's
probably
not
ever
gonna,
be
worth
it
to
add
or
really
worry
about,
creating
a
true
like
readable
cash,
because
it's
gonna
be
a
block
device
and
there's
gonna
be
most
likely
yeah.
Another
cash
on
top
of
it,
so
this
is
already
a
second
level
cache.
We're
really
trying
to
do
is
just
yeah,
well,
sis,
if
possible,
or
just
like
a
kite.
J: If a flush request comes in, it'll make sure all those I/Os are completed before it returns the flush, just like a writeback cache, and any errors from I/Os that were in flight get returned on the flush. And in initial testing, and it's not doing the full amount of work I need it to do yet, it was already showing something like three times faster than the writeback cache.
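A sketch of the flush semantics being described, purely for illustration and not librbd's implementation: writes are acknowledged immediately and completed in the background, and a flush waits for everything in flight and surfaces the first error seen.

```python
from concurrent.futures import ThreadPoolExecutor


class WriteAroundCache:
    def __init__(self, backend_write):
        self.pool = ThreadPoolExecutor(max_workers=4)
        self.backend_write = backend_write   # callback: (offset, data)
        self.in_flight = []

    def write(self, offset, data):
        # Acknowledge to the caller right away; the actual write continues
        # in the background.
        self.in_flight.append(self.pool.submit(self.backend_write, offset, data))

    def flush(self):
        # Wait for everything in flight and report the first error, the way
        # a writeback cache would surface I/O errors on flush.
        error = None
        for fut in self.in_flight:
            exc = fut.exception()
            if exc is not None and error is None:
                error = exc
        self.in_flight.clear()
        return error
```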
J
Fully
replacing
out
to
cast
her
eyes
just
ripping
up
check,
casher
out
you
also
object.
Cache
will
still
be
an
option
so
now
there's
a
new
RBD
configuration
object,
option
policy
so
for
octopus,
I'm
gonna
have
a
default
to
the
right
around
yeah.
You
can
turn
it
to
right
back
right
through
or
whatever
in.
J: So now, below the write-around cache, there's a layer, the next layer to get invoked, and that's where the simple I/O scheduler comes in. The cache feeds the I/O scheduler as fast as possible with those I/Os, so it can say, all right, now I have a flush, and the sequential ones it can compact together, even better. But if the sequential I/O scheduler sees a read, it automatically flushes everything.
A
Okay
and
I
think
you
mention
this,
but
again.
The
motivating
use
case
here
is
that
in
a
container
environment
like
kubernetes,
you
have
a
gazillion
containers
on
the
same
host
that
are
mapping
devices
and
so
the
current
use
of
kernel.
Everybody
is
really
nice
because
you
have
one
client,
that's
connecting
the
cluster
servicing.
However,
many
images
are,
and
so
we
want
to
get
the
same
efficiency
in
terms
of
client
overhead,
a
number
of
TCP
connections
to
use
all
that
stuff.
A: Right, somebody's got to run it somewhere, right? It's like, I could imagine that Rook would basically just schedule a pod on every host to make sure that this got run.
A: Cool, okay: that is everything on the list. Are there other topics that people want to talk about?
J: All right, so, just at a high level, the first one is online re-sparsification of images. We occasionally see things on the mailing list like, help, I've thickly provisioned a block device, how can I fix it? So there's a new CLI tool that can go through and punch holes even while the image is being used. It already works in Nautilus for replicated backends, and there's already a PR open to fix it for EC backends as well.
J
Really
want
to
eliminate
those
memory
copies
from
I
from
the
capi
there
was
a
PR
open,
and
then
we
cut
through
conversation
on
that
PR
kind
of
came
back
to
well.
Maybe
we'll
do
some
reference
counting
within
the
objector
so
that
we
know
when
it's
finally,
the
last
bit
of
memories
release,
because
we
only
care
about
that.
One
corner
case
where
granite
resent
to
a
difference.
Os
different
OSD
completes
the
I/o
before
the
other
OSD
your
path
like
release.
The
messenger
realized
that
the
other
OSD
use
down
is.
J: I don't think we necessarily could, because we have no hooks back to the user unless they provide us the memory to put it in, which in the C API they do. In the C++ API, with the bufferlist, I thought you really just, yeah, copies get appended or whatever to the destination. So yeah, anywhere we can avoid a copy, if possible, great.
J: Working along the same lines, it's just optimizing the I/O path: get rid of as many locks as possible, or make them as low-contention as possible, and that's not just in librbd, that's also in librados, down through the Objecter. There's the improved in-memory cache, which is effectively just the I/O scheduler and the write-around cache, and the rbd-nbd netlink interface.
J
We
have
clone
v2
which
allows
you
to
delete
the
snapshots
that
the
clone
is
actually
attached
to.
So
then,
one
step
further
would
also
allow
you
to
transparently
delete
the
parent
image.
It
just
transparently
gets
moved
to
the
the
RB
d
trash
until
the
last
clone
is
flattened
or
removed,
and
then
you
can
remove
the
image
trying
to
hide
all
the
details
about
how
cloning
is
actually
implemented
and
from
things
like
the.
I: Yeah, the items that we moved from Nautilus to Octopus are already tagged, and I think they're okay to be tagged for Octopus: partial recovery and EC recovery below min_size. Partial recovery, I think, is looking pretty good; I did a recent run and it's looking fine. I'm just going through the code and doing the review right now, but I think we should just give it time, merge it, and see.
A
Yeah,
this
came
up
in
a
in
a
customer
conversation
a
couple
months
ago,
and
then
it
also
came
up
again
and
some
of
the
product
planning
that
Red
Hat,
so
I
think
I
think
it
makes
sense
the
idea,
basically
being
that,
if
you
set
it
to
host
for
example,
then
it'll
warn
you.
If
there
is
any
single
host
in
the
system
or
if
it
failed,
there
wouldn't
be
enough
space
for
the
system
to
go
fully
recover
again.
I: Just mute health warnings. I'm not worried about the tracepoint stuff; there is an Outreachy project that's going to start this summer around tracepoints, and Josh and I are going to be mentoring that, but it's going to be more like an intern project kind of thing. So I'm not too hopeful that it'll get in, but we should hope for a proof of concept out of it, nothing that we want to promise, though.
A: I guess it feels like there are sort of two independent and possibly related priorities. One is to have a set of tracepoints that can be sampled and fed into something like, was it Jaeger, or whatever, to do that per-request tracing thingamajig, and the other is more efficient logging. Maybe those are the same implementation on the back end, maybe not.
B
That
that
reminds
me
Mohamed
at
one
point
recently
a
couple
weeks
ago:
I
don't
remember
where
it
was.
He
was
saying
that
he
had
tried
it
for
logging
in
the
OSD
and
didn't
really
see
much
difference.
They
had
never
tried
debugging
this
one
as
one
of
his
targets,
so
I
think
he's
going
to
do
that
now.
I,
don't
know
if
he
has
yet
or
not,
but
he
said
he
was
gonna.
Look
at
it.
Yeah.
A
And
then
the
this,
the
reserver
there's
one
here
at
300.
This
one
is
frustrating
because
it's
the
way
that
the
current
implementation
works.
It
makes
it
so
that
the
smallest
number
of
concurrent
work
items
that
a
single,
oh
so
you
can
do-
is
always
going
to
be
two
visits
ones
that
is
primary
four
and
we'll
things
that
it's
not
primary.
Four,
and
then
we
can't
really
fix
that
until
we
have
a
different
design
for
how
how
we
schedule
that
work.
I: Items to talk about: adaptive recovery and backfill, so basically throttling recovery and backfill based on throughput, and the same thing for scrub; that's one project altogether. And then there is the QoS piece of it. So which one we should focus on, and what our target timeline should be, is something we should discuss before we commit to any of those, I think.
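A loose sketch of the adaptive throttling idea: watch a client-visible signal such as latency and scale the number of concurrent recovery or backfill ops up or down accordingly. The thresholds, limits, and names here are assumptions for illustration only.

```python
def adjust_recovery_limit(current_limit: int,
                          client_latency_ms: float,
                          target_latency_ms: float = 50.0,
                          min_limit: int = 1,
                          max_limit: int = 16) -> int:
    """Return a new cap on concurrent recovery/backfill ops."""
    if client_latency_ms > target_latency_ms * 1.2:
        # Client I/O is suffering: back off recovery work.
        return max(min_limit, current_limit - 1)
    if client_latency_ms < target_latency_ms * 0.8:
        # Plenty of headroom: allow more recovery in parallel.
        return min(max_limit, current_limit + 1)
    return current_limit
```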
B: There's one thing I would like to bring up: Toshiba publicly released their TRocksDB work. Here's the wiki, and then the code itself is here.
A: So I think we need just a little bit of plumbing, just to make sure we can express everything that we want, maybe by getting rid of that hack and basically explicitly marking everything the way it is now, and then having new options explicitly annotated without that baked-in assumption. I don't know, pick something.
B
One
thing
that
I
would
have
been
kind
of
concerned
about
regarding
that
is
that's
fast
busily
that,
but
the
just
injectable
options
is
how
how
many
listeners
can
we
have
for
option
changes
and
where-
and
you
know,
should
this
be
something
that's
like
centralized
per
demon
like
that,
you
have
one
thing
listening
for
option
or
sorry
for
thread
like
listening
for
option
changes
or
what,
how
how
should
that
look,
I've,
never
really
known
where
we
should
be
listening
and
under
what
custom.
You
know
what
context
we
should
be
I
mean.
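A sketch of the pattern in question: a per-daemon registry where components register interest in specific options and are called back when one changes. This is the generic observer pattern for illustration, not Ceph's actual config implementation; all names are assumptions.

```python
from collections import defaultdict
from typing import Callable


class ConfigObserverRegistry:
    def __init__(self):
        self.listeners = defaultdict(list)  # option name -> callbacks

    def register(self, option: str, callback: Callable[[str, str], None]) -> None:
        """A component declares interest in one option."""
        self.listeners[option].append(callback)

    def option_changed(self, option: str, new_value: str) -> None:
        # One centralized dispatch point per daemon: every interested
        # component is notified from here when a value is injected.
        for callback in self.listeners[option]:
            callback(option, new_value)


# Usage: registry.register("osd_recovery_max_active",
#                          lambda opt, val: print(opt, "changed to", val))
```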