From YouTube: Ceph Code Walkthroughs: CoDel for BlueStore
A
Welcome to the code walkthrough for CoDel in BlueStore. Esmail is going to take us through his pull request, and the design and experiments around it. So, Esmail, do you want to take it away?
B
So, let's start with the problem. The problem we are trying to solve is that, in most storage systems, we have a front end and then a back end. The back end is specialized to store the blocks and everything on the disk, and the front end is the higher level of the storage.
B
By submitting all of the requests to the back end, we create a schedulability problem for the front end. On the other hand, if we do the opposite and keep most of the requests in the front end and only a few in the back end, we favor schedulability, but we basically ruin the throughput, because the back end will starve.
B
The project is basically trying to put smart, intelligent admission control in between, to control how many requests and transactions should be submitted to the back end, and to keep a balance between schedulability and throughput via a few parameters.
B
A very related and similar algorithm that already exists is CoDel, which is used to solve the bufferbloat problem in networks. Bufferbloat is really similar to our problem: the network buffers too much data toward the downstream. The CoDel algorithm is very simple. Basically, it has a parameter called the target delay; it measures the queuing delay of requests and compares the minimum queuing delay with that threshold.
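The check just described can be sketched in a few lines. This is our own illustrative mock of the CoDel idea, not Ceph's implementation; all names here are invented.

```cpp
#include <algorithm>
#include <cassert>
#include <limits>

// Track the minimum queuing delay seen over an interval and flag a
// violation when even that minimum exceeds the target delay -- i.e.
// the queue is persistently full rather than just absorbing a burst.
struct CoDelCheck {
  double target_delay_ms;
  double min_delay_ms = std::numeric_limits<double>::max();

  void record(double queuing_delay_ms) {
    min_delay_ms = std::min(min_delay_ms, queuing_delay_ms);
  }
  bool violation() const { return min_delay_ms > target_delay_ms; }
  void reset_interval() {
    min_delay_ms = std::numeric_limits<double>::max();
  }
};
```

Comparing the *minimum* delay, rather than the average, is what lets CoDel distinguish a standing queue from a harmless transient burst.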
B
We tried to implement the same idea in Ceph, and this is the final form of our algorithm. Basically, we have two different loops.
B
There is a high-frequency loop, which tries to minimize latency by controlling the number and total size of requests admitted into the back end, using a target latency parameter; and a lower-frequency loop, which tries to optimize that target parameter based on the throughput and latency of the system, so that we can create and keep a trade-off between throughput and latency in the system.
B
So basically, let me zoom in here.
B
The fast loop is basically the actual CoDel algorithm. It tries to control the BlueStore throttle budget using a parameter called the target latency. It measures the minimum latency of the whole BlueStore and compares it to that target latency; if there is a violation, it shrinks the throttle's maximum size. The slow loop tries to optimize that target parameter based on the throughput, to keep the trade-off between throughput and latency.
B
This applies to different back ends as well. We just keep the minimum latency of the transactions and requests and compare it to the target latency parameter. If it's bigger than the threshold, we decrease the throttle size of the BlueStore; otherwise, we increase the throttle size.
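The shrink-or-grow reaction just described can be sketched as follows. This is a hedged sketch: the halving factor, the additive increment, and all names are our assumptions for illustration, not the PR's exact code.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// One fast-loop step: shrink the BlueStore throttle budget on a
// target-latency violation (multiplicative decrease, factor assumed),
// otherwise grow it by a fixed increment (additive increase), and
// never let it fall below a configured minimum budget.
int64_t adjust_throttle(int64_t budget, bool violation,
                        int64_t min_budget, int64_t increment) {
  if (violation) {
    budget /= 2;                       // assumed shrink factor
  } else {
    budget += increment;               // additive increase
  }
  return std::max(budget, min_budget); // don't starve the backend entirely
}
```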
B
If we look at the throughput-latency curve, we can argue that the slope of this curve shows the trade-off between throughput and latency. Basically, if we send only a few requests to the back end, we get low latency but lower throughput as well; if we send more requests and fill the back end, we increase both the latency and the throughput. But at some point the throughput reaches the maximum capacity of the system.
B
So the throughput is not going to increase much beyond that, but the latency will keep increasing, since all of the requests are spending their time inside the BlueStore, just waiting there.
B
This is the overall view of the slow loop. This loop monitors the throughput of the BlueStore and feeds it into the throughput-latency model. Each sample is just a point: the target latency and the throughput. We keep a history of those points in memory and age them out after some time, so we keep a fixed number of points, and then we use these throughput-latency points to estimate the throughput-latency curve.
B
The throughput-latency curve can be approximated by a logarithmic function, so we estimate this function with logarithmic regression. After that, we can find the best target latency for the workload based on the targeted slope parameter. This targeted slope is a controlling parameter that gives the user and the operations team control over the trade-off between throughput and latency, so they can steer the algorithm in the end.
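The "best target latency from the targeted slope" step can be sketched under one assumption stated later in the talk: the fitted curve has the form throughput(L) = a + b·ln(L). Its slope is b/L, so setting b/L equal to the targeted slope s gives L* = b/s, clamped to the configured bounds. Function and parameter names are ours.

```cpp
#include <algorithm>
#include <cassert>

// If the fitted curve is throughput(L) = a + b*ln(L) (throughput in
// MB/s, latency L in ms), then d/dL = b/L. The optimal target latency
// is where that slope equals the operator's targeted slope s:
//   b/L = s  =>  L* = b / s,
// clamped to the configured [min, max] target-latency range.
double optimal_target_latency(double b, double targeted_slope,
                              double min_lat, double max_lat) {
  double lat = b / targeted_slope;
  return std::clamp(lat, min_lat, max_lat);
}
```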
B
We add some log-normal noise to the target latency to make sure we get multiple samples of the target latency spread over the range, so we can do a more accurate regression. Since the slow loop is low frequency, it executes less frequently than the fast loop, and it controls the target latency parameter for the fast loop.
B
At the end, we use this history to run a regression on it and find the logarithmic function representing the curve, and we use the function's parameters to calculate the optimal target latency, and then we add noise to the target. That's basically all it does: the fast loop tries to optimize and control the throttle size, and the slow loop tries to optimize and control the target latency for the fast loop. If you don't have any questions, we can just jump into the code.
B
I have to change my screen. Most of the CoDel code is inside these two files, the BlueStore slow-fast CoDel .cc and .h files; most of our functionality is here. But before going through the whole detail, I'd like to go over the entry point of the algorithm inside BlueStore.h and BlueStore.cc. Inside the BlueStore, we have an instance of the CoDel class.
B
In the constructor of the BlueStore, we construct our CoDel instance. We also have some configs for this, which I'm going to go over after this; if a config changes, we reset the CoDel algorithm here, and also, if the BlueStore throttle changes, we have to reset the BlueStore budget as well.
B
The entry point of the whole CoDel is at this line. When the transaction is done with the kv thread, after the kv thread it submits information about the transaction, like when it entered the BlueStore and the size of the transaction.
B
So this is the only entry point of the algorithm, and this timestamp is getting set over here, when the transaction is just created, in this queue-transaction function.
B
Most of the configs are experimental, but the two most important configs of this algorithm are: the bluestore codel option, which can activate or deactivate the algorithm (by default it is false, so it is deactivated for now), and the bluestore codel throughput-latency trade-off option, which is our targeted slope parameter.
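For illustration, enabling the algorithm might look like the following ceph.conf-style fragment. The option names here are approximated from the talk's audio, not verified against the PR; check the pull request for the exact spelling and section.

```ini
[osd]
# Activate the CoDel admission control (off by default, per the talk)
bluestore_codel = true
# Targeted slope: MB/s of throughput traded per 1 ms of latency
bluestore_codel_throughput_latency_tradeoff = 1.0
```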
B
This defines the trade-off between the throughput and the latency. By applying admission control, we are going to lose some throughput and decrease the latency as well, so this parameter basically keeps a trade-off between the two. In terms of units, a targeted slope of one means we are willing to lose at most one megabyte per second of throughput for every one millisecond of decrease in the latency.
B
The first one is the initial target latency: the algorithm starts with this initial target latency parameter for the fast loop. We also have the slow and fast loop intervals, which set how frequently they are executed by default.
B
Also, we have the minimum and maximum target latency to bound the target latency, so it doesn't go wild above a certain value or below a certain value. The initial budget for the throttle is set by this parameter, and we also have a minimum budget.
B
The minimum budget doesn't let the BlueStore throttle size decrease below a certain point, so we are not going to lose too much throughput. There is also the increment for the BlueStore throttle: in the fast loop, when we increase the BlueStore throttle, we increase it by this increment.
B
Finally, this is the size of the history we keep of the throughput and target-latency points for the regression; by default it is 100.
B
So I think we can just go over the CoDel code. Rather than the .h file, let's go over the .cc file, actually. The constructor is pretty simple: it just reads the configs and starts the algorithm by calling on-config-changed. In on-config-changed, it reads the configuration, then initializes some variables, and at the end of this function we have a timer.
B
It just cancels all the events that already exist and runs the fast interval process and the slow interval process. In the fast interval process, which is the regular CoDel, we take a lock: we have a lock that controls the entrance of the information and also these loops. Basically, while the loops are running, the BlueStore cannot submit more requests.
B
Sorry, more transaction info to the BlueStore CoDel. It's pretty simple, actually: it checks whether CoDel is activated, then checks for a latency violation, which is just a simple check comparing the minimum latency with the target latency parameter.
B
If a violation is happening, what the algorithm does is basically shrink the throttle. Let's go to the on-minimum-violation handler. In the constructor we have two function pointers.
B
There is a BlueStore budget reset callback and a get-kv-throttle-current callback. These are the functions that are going to be called when the violation happens, and the BlueStore budget reset callback should reset the maximum size of the throttle inside the BlueStore.
B
They are implemented as functions so we don't have to pass the throttle object into the BlueStore CoDel; we just call the function pointer on a minimum violation. So when the BlueStore budget changes, we just call that on minimum violation.
B
We just call that reset-throttle-size callback and the throttle will be reset; inside the BlueStore we are basically creating this function pointer.
B
At the end of this code, it sets the timer for the fast-interval amount of time, which by default is 50 milliseconds, and it calls this same function again. So this function is called every 50 milliseconds, and each time it sets the timer and adds the event to call itself again.
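The self-rescheduling pattern just described can be modeled with a toy loop. This is purely illustrative: the real code registers events on Ceph's timer infrastructure rather than looping, and the names below are ours.

```cpp
#include <cassert>
#include <functional>

// Toy model of a handler that re-arms itself every interval_ms: a fake
// timer fires the handler at each interval up to a simulated horizon
// and returns how many times it fired. In the real code, each run of
// the fast-interval process re-adds its own event to the timer.
int run_periodic(int interval_ms, int horizon_ms,
                 const std::function<void()>& handler) {
  int fired = 0;
  for (int t = interval_ms; t <= horizon_ms; t += interval_ms) {
    handler();  // the real handler would also re-register the event here
    ++fired;
  }
  return fired;
}
```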
B
For the slow loop we have this code. We also take the lock, and if the algorithm is activated, we calculate the duration of the whole interval. To be accurate: we know that the interval is 500 milliseconds, but we measure it again to be more precise. We also track the total bytes that we accepted into the BlueStore over this slow interval.
B
If I go to the update-from-transaction function, which is called in the BlueStore (as I showed before) to pass the transaction information to the CoDel: in here we track two things.
B
The first thing is the minimum latency, and the other thing is the number of bytes the transaction has. We just accumulate these bytes over the slow interval, so at the end of the slow interval we can calculate the average throughput.
B
So we calculate the slow-interval throughput, the average in megabytes per second, and at the end we just push it, along with the target latency, into the history used for the regression. If the history reaches its maximum size, we erase the oldest data.
B
After that, if the regression history reaches a certain size, we use the regression to find the optimal target latency.
B
We also have a function to find the log-normal noise parameters: we calculate the distribution for the appropriate log-normal noise and add the noise to the target latency at the end. After all of this is done, we reset some of the variables.
B
Obviously, for example, the number of bytes should be reset to zero, along with some other variables, and at the end, same as the fast loop, we add an event to the timer to call the same function again after the slow-interval amount of time. That's the code for the slow loop.
B
For the regression: what it does is run the regression on the regression history, basically the throughput and target-latency history that we have, and fit this logarithmic function. All we are looking for is theta one, the b constant here, and we use that to calculate the optimal target latency based on the targeted slope.
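The fit itself reduces to ordinary least squares once you substitute u = ln(x), since y = a + b·ln(x) is linear in u. The following is our own small closed-form implementation for illustration; the PR uses a matrix-based routine instead, and the names here are ours.

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Fit y = a + b*ln(x) by least squares on (ln x, y). The slow loop
// only needs the slope b (the "theta one" / b constant mentioned
// above) to derive the optimal target latency.
std::pair<double, double> fit_log(const std::vector<double>& x,
                                  const std::vector<double>& y) {
  const double n = static_cast<double>(x.size());
  double su = 0, sy = 0, suu = 0, suy = 0;
  for (size_t i = 0; i < x.size(); ++i) {
    double u = std::log(x[i]);
    su += u; sy += y[i]; suu += u * u; suy += u * y[i];
  }
  double b = (n * suy - su * sy) / (n * suu - su * su);
  double a = (sy - b * su) / n;
  return {a, b};  // y ~ a + b*ln(x)
}
```

Fed exact points from y = 1 + 3·ln(x) (the same shape as the demo function shown later in the talk), it recovers a = 1 and b = 3.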
B
The details of how it calculates this are a little complicated; I don't fully understand it myself, but it works. It uses matrix manipulation, like matrix products and transposes and other functions, to calculate the estimation of the logarithmic function.
B
I wrote unit tests for all of these functions, so they are working fine; I'll go over the unit tests after this. So the regression is calculated and we find the optimal target latency for the CoDel algorithm. At the end, we also have the log-normal distribution parameter finder.
B
What we want from the log-normal noise is this: our algorithm tries to converge to a target latency after a while, but the problem is that if we keep a single fixed target latency, we are going to get a very noisy and unstable regression. For a good regression, we need a larger range of target latencies.
B
For example, if we have multiple points concentrated in one area, it's really hard to do a regression and find the exact function from them. So we add noise to the target latency, so that it produces some target latencies higher and some lower than the optimal target latency.
B
By adding this noise, we are just making sure that the regression works correctly and accurately. We chose the log-normal because, by its nature, it favors the mode: it is very probable to draw a value near the mode, and much less probable to draw a value far above or below it. So this function calculates the right parameters for the log-normal distribution given a certain mode and the minimum and maximum target latency.
B
It just converts this mode, minimum, and maximum into a mean and standard deviation. So when we call this function, we pass the optimal target latency as the mode.
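The mode-to-parameters conversion can be sketched from one standard identity: a log-normal with parameters (mu, sigma) has its mode at exp(mu − sigma²). How the PR derives sigma from the min/max bounds is not shown here; this only illustrates the mode relationship, and the names are ours.

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// For a desired mode m (the optimal target latency) and a chosen
// spread sigma, solve mode = exp(mu - sigma^2) for mu:
//   mu = ln(m) + sigma^2.
// Sampling LogNormal(mu, sigma) then concentrates draws near m.
std::pair<double, double> lognormal_params_for_mode(double mode,
                                                    double sigma) {
  double mu = std::log(mode) + sigma * sigma;
  return {mu, sigma};
}
```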
So we get more values around and near the optimal target latency. We experimented with this, and it seems that adding this kind of noise doesn't really hurt the throughput and latency of the system overall; it has an impact, but it's very limited.
B
At the end, we also make sure that if the target latency we drew is smaller than the optimal target latency, we add a little bit of noise to it, to favor throughput: it randomly increases the latency a little bit.
B
And for the unit tests, we have two of them. One is for the whole CoDel algorithm, which is in the test files here, under the os tests for the BlueStore slow-fast CoDel. In this test case we implemented basically a mock of the slow-fast CoDel.
B
We set our own parameters that we are going to experiment with, and we try to control the algorithm ourselves here. We run the BlueStore CoDel with certain intervals, 100 milliseconds and 400 milliseconds, for the test, so we don't use the defaults, just for the tests. In the test case we submit some fake transaction information, to make sure that the violation detection and everything works correctly.
B
In some cases we add transactions so that there is a violation, and sometimes there is no violation, and at the end we check to see whether the CoDel algorithm could catch these violations.
B
And yeah, that's the overall explanation and code walkthrough of the CoDel work.
A
Thank
the
smell,
I
was
just
gonna
ask
about
the
experimental
results
and
after
the
presentation
would
would
you
mind,
adding
the
slides
to
your
pr
as
well
like
be
very
helpful.
B
Yeah, sure. Okay, let me go over the experiment results.
B
For the experiments, we used workloads generated with fio. You can see the specs of the system we ran our experiments on here.
B
This graph shows the target latency parameter over time. As I mentioned before, the slow loop tries to change and control the target latency, and this is how it changes over time. For example, this experiment is 64-kilobyte writes on SSD; the time axis is in seconds, from 0 to 300 seconds, so basically 5 minutes. After about 100 seconds it gets stable and follows a certain trend.
B
On the other graph, we experimented with a workload change to see how the algorithm reacts. The first 300 seconds is a workload with 4-kilobyte writes, and after 300 seconds it changes to 64-kilobyte writes. You can see that the slow loop can easily detect this change and change the target latency.
B
The target latency fluctuates based on the workload. And the next result: this is the 4-kilobyte write workload on SSD. For the baseline, we used the BlueStore without any throttles; basically, we disabled any kind of throttling.
So, for example, a target slope of one means we are willing to lose at most one megabyte per second of throughput for decreasing the latency by one millisecond. And by latency I mean the latency of the transaction in BlueStore.
B
If the latency of transactions decreases, it means the transactions are not spending their time inside the BlueStore; rather, they are in the front end. So the first graph is the throughput impact of the algorithm, and the next graph is the latency impact.
B
You can see that by increasing the targeted slope, we decrease the throughput and we decrease the latency as well, so the targeted slope gives very good control over the trade-off. An operations team can easily decide what trade-off they really need. They might say: we need better throughput, and we don't really need lower latency.
B
So they would choose a targeted slope of one, because it decreases the latency while the throughput doesn't change that much; or they can go with another option based on their profile and their latency preference. In the latency graph, you can see that in the baseline, most of the transaction's time is spent in the back end, in the BlueStore; by increasing the targeted slope, we decrease that.
B
We decrease the latency of the transactions while keeping the throughput at an acceptable rate, maintaining the trade-off between throughput and latency.
B
Here the effect on the latency is even more extreme than before: it decreases the latency by a lot and, on the other hand, keeps the throughput at acceptable rates, especially at a targeted slope of one or 0.5, while decreasing the latency by a lot.
C
May I have a couple of clarifications, please? Did I get it properly that you treat the OSD core as a front end and the BlueStore as a back end?
B
Yeah. So the latency is the BlueStore latency, not client latency. We measure the BlueStore latency, meaning the time the transaction spends in the BlueStore. We want to minimize this, to make sure that transactions end up waiting in the front end instead of the back end.
C
Okay, but from the client's perspective, there is not much difference in which queue a request is waiting, the OSD level or the BlueStore one. What it does is push the requests up.
C
So my question would be: what's the benefit from the client's perspective of having this algorithm in the store? Would it get some greater improvements in latency, or whatever?
B
Basically, it doesn't improve the client latency. As you said, there's not much difference; we are just moving the latency from the back end to the front end.
B
The sole purpose of this project is improving the schedulability of the front end. If we improve the schedulability, it will eventually result in maybe better latency and better, fairer resource sharing between users and clients; but directly, this project doesn't improve the latency for clients.
D
It's about quality of service. Can you hear me?
D
Yeah. So forcing the requests to wait up in the OSD queue means that we can apply quality of service to them. Right now, if all of the requests get slurped up by BlueStore, which is broadly true, then any quality of service we're applying at the OSD level is irrelevant, because all of the latency is down in the BlueStore.
D
Does
once
we've
sent
10
milliseconds
worth
of
latency
down
to
bluestore
any
scheduling
decision
the
osd
makes
is
necessarily
10
milliseconds
in
the
in
the
future.
D
So
if
a
high
priority
I
o
rise,
then
it's
going
to
have
a
minimum
latency
of
10
milliseconds.
Even
if
hypothetically,
we
could
have
preempted
it
to
the
front
of
the
queue
so
doing
it.
This
way
means
that
bluestore
only
consumes
as
much
latency
as
it
needs
to
hit
a
decent
percentage
of
its
maximum
throughput
and
the
rest
of
the
queue
remains
up
in
the
osd.
So
if
one
of
these
high
priority
I
os
arrives,
we
can
stick
it
in
the
front
of
the
cube.
B
Thanks
yeah,
we
are
exploring
the
way
to
measure
the
impact
of
our
work
on
the
scheduling
and
cuban
quality
of
service,
but
right
now
we
don't
have
anything.
So
basically,
we
are
just
trying
to
present
the
results
on
the
blue
store
latency
in
the
future.
We
are
going
to
have
a
measurements
on
the
quality
of
service,
maybe
that
how
how
much
impact
we
are
having
on
the
clients
and
the
scheduling
itself.
D
D
A
B
Yeah, we had those experiments; I didn't add them here, just to keep the results simplified, but the result was basically the same. The slow loop was able to maintain the trade-off between the throughput and the latency across the workloads.
B
I'm not sure if I have time to go over some other slides; are there any other questions?
B
I just wanted to go into more detail on how we calculate things: why we add noise, and why we use the log-normal noise. In the slow loop, calculating the optimal target latency means finding the latency where the curve's slope equals the targeted slope, which is a very simple derivation. As for the log-normal noise:
B
If we have well-distributed points over different values, we get a more accurate estimation from the regression.
B
For example, both of these plots are random points drawn from this function here, which is one plus three times the logarithm. We create some random points based on this function, and then we try to recover it with our regression estimation; you can see it was more accurate on the right side than on the left side.
B
So we don't go over a very high target latency, and we calculate the corresponding log-normal parameters, including the mean and the standard deviation. For example, the left side and the right side show the change of the target latency over time, as controlled by the slow loop.
B
On the left side, without the log-normal noise, it is not as noisy as the right side, but it is not stable; it drifts most of the time. On the right side, by adding the log-normal noise, we get very noisy fluctuation, but the trend and the overall value stay around the same point; I think here it was five, or something close to it.
A
All right. Well, thank you so much, Esmail, for going through all this. It was very interesting to see the results, and how you ended up changing things in response to the data, like the log-normal noise, especially at the end here. I think we'll be able to follow up on that PR more effectively now, and some folks who couldn't make it today will be able to watch the recording.