From YouTube: CDS Jewel -- RADOS QoS
C
In the current design, we are offering three options to the client. One is the minimum IOPS that a given client wants, and also a maximum IOPS limit. If its demand exceeds that maximum limit, the OSD will clamp it at that point; and when the minimum IOPS is preserved and the maximum is not reached, the client then gets its proportional, weight-based share. In general it is similar to a Linux I/O scheduler.
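The reservation / limit / weight behavior described above can be sketched roughly as follows. This is my own illustration of the semantics as stated, not the Ceph or dmClock code, and all names are made up:

```python
def allocate(capacity, clients):
    """Divide `capacity` IOPS among clients.

    clients: dict name -> (reservation, weight, limit).
    Phase 1: every client gets its reservation (minimum IOPS).
    Phase 2: leftover capacity is shared in proportion to weight,
    clamped at each client's limit; a client that hits its limit
    drops out and its share is redistributed.

    If capacity < sum of reservations, the cluster is
    oversubscribed and clients only receive their reservations.
    """
    alloc = {name: float(r) for name, (r, w, l) in clients.items()}
    remaining = capacity - sum(alloc.values())
    active = {n for n, (r, w, l) in clients.items() if alloc[n] < l}
    while remaining > 1e-9 and active:
        total_w = sum(clients[n][1] for n in active)
        next_active = set()
        spent = 0.0
        for n in active:
            r, w, l = clients[n]
            give = remaining * w / total_w   # weight-proportional share
            take = min(give, l - alloc[n])   # clamp at the max limit
            alloc[n] += take
            spent += take
            if alloc[n] < l - 1e-9:
                next_active.add(n)
        remaining -= spent
        active = next_active
    return alloc
```

For example, two equal-weight clients with reservations of 100 IOPS each on a 400-IOPS store would split the spare 200 IOPS evenly, unless one of them is clamped by its limit first, in which case the other absorbs the rest.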
C
Could you give a general overview of that again?
B
OK, so, like I was saying: basically a client, say an RBD or RGW client or some other user of Ceph, would have some kind of high-level policy defining what minimum level of IOPS they would have, a maximum level of IOPS, and the proportion of weight they would get by default, as a mechanism for when there's extra capacity, or for when the cluster is oversubscribed and the limits they have set can't be met.
C
Well, there are the major issues right now. Our current focus in this domain is tail latency; other work is more focused on overall end-to-end latency minimization. Also, the dmClock paper only focuses on the I/O part, but there are some other parts as well, like the memory, the network, or the CPU. So there are other approaches.
B
Yeah, and one thing that was interesting about those later systems is that more of the current research has been done to get those latency bounds, as opposed to the pure IOPS bounds, and most of it ended up having to look at other bottlenecks besides storage, like the network and the CPU, and make guarantees based on reservations there too, instead of just on the one resource. But they also ended up having some complicated things, like workload modeling and more global sharing of state about which workloads exist, trying to schedule different workloads at different times, which might give different benefits based on grouping things together in a more efficient way. It also wasn't obvious how to do that in a distributed system without that kind of global state. The dmClock algorithm is much simpler in that sense, since it doesn't require any kind of large-scale coordination: the clients end up giving the OSDs information about their activity directly.
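That "clients give the OSDs information directly" part is the tagging scheme from the dmClock paper: each request carries counters of what the client has seen completed cluster-wide, and the server turns them into per-request tags. A rough sketch of that arithmetic, simplified from the paper with all names my own (not the Ceph implementation), might look like:

```python
class DmClockTags:
    """Per-client tag state on one server, per the dmClock paper.

    reservation and limit are in IOPS; weight is a unitless share.
    """

    def __init__(self, reservation, weight, limit):
        self.res, self.wgt, self.lim = reservation, weight, limit
        self.r_tag = self.p_tag = self.l_tag = 0.0

    def tag(self, now, rho, delta):
        # rho:   reservation-phase completions this client observed
        #        cluster-wide since its last request to this server
        # delta: all completions it observed cluster-wide since then
        self.r_tag = max(self.r_tag + rho / self.res, now)
        self.l_tag = max(self.l_tag + delta / self.lim, now)
        self.p_tag = max(self.p_tag + delta / self.wgt, now)
        return self.r_tag, self.l_tag, self.p_tag


def pick_next(now, queues):
    """queues: dict client -> DmClockTags with a request pending.

    Constraint phase first: serve any client whose reservation tag
    is due.  Otherwise serve by smallest proportional tag among
    clients whose limit tag is not in the future.
    """
    due = [c for c, t in queues.items() if t.r_tag <= now]
    if due:
        return min(due, key=lambda c: queues[c].r_tag)
    ok = [c for c, t in queues.items() if t.l_tag <= now]
    if ok:
        return min(ok, key=lambda c: queues[c].p_tag)
    return None
```

The `max(..., now)` keeps an idle client from banking credit: its tags snap forward to the current time when it returns.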
B
And let's see, talking maybe a bit about how this might be used: eventually the high-level policy could be exposed as configuration options for RBD images that may be set on a per-image basis, and perhaps configured more generically through higher-level tools, like Cinder in OpenStack, or maybe CloudStack, or other management tools, to provide the different classes of service to different kinds of volumes.
B
That's the idea.
C
In the paper there is a concept of an admission controller, which decides whether a new request from a client with certain reservations will be accepted or not. But right now we do not focus on that part yet; we are just trying to add the basic functionality. If the throughput is not enough to meet the defined reservations, then, well, we will try to implement that part then.
B
And it could be something that ends up being done by a higher-level tool like Cinder. Perhaps it's sort of like quotas based on IOPS that the administrator configures, so it could just easily multiply the reservations by the number of volumes you have and see whether you're over- or undersubscribed, given the extra capacity.
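That multiply-and-compare check is trivial arithmetic; a sketch with hypothetical names, just to illustrate the idea of a higher-level (e.g. Cinder-side) sanity check:

```python
def oversubscribed(volumes_by_class, reservation_by_class, cluster_iops):
    """Return True when the summed IOPS reservations across all
    volumes exceed the measured cluster capacity, i.e. the
    reservations cannot all be met simultaneously."""
    demanded = sum(reservation_by_class[cls] * count
                   for cls, count in volumes_by_class.items())
    return demanded > cluster_iops
```

For instance, 10 "gold" volumes at 500 reserved IOPS plus 50 "bronze" volumes at 50 reserved IOPS demand 7,500 IOPS, so a 7,000-IOPS cluster would be oversubscribed while a 10,000-IOPS cluster would not.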
B
But the other piece is also that this algorithm does need to have some notion of how the underlying storage can perform. We don't really have that yet. The paper described some benchmarks using fairly conservative bounds on that, which tends to work out all right. That's a good question, whether we can auto-detect what the performance of an OSD will be, perhaps using the existing OSD bench code, to get that parameter for the algorithm. Yeah.
D
So the question is where we have to configure it for each image. Would it make sense to put that, for example, in the configuration somehow, or on the fly, or would it be some configuration you have to add to the image itself? Because you may want to change it later in an OpenStack environment, and, maybe, I don't know what the workflow would be. I mean, the easiest would maybe be something where there is already some rate-limiting stuff built into Cinder. Oh yeah.
B
And Cinder already has a way to set specific policies for quality of service, and that happens when you create a volume. So I was thinking that the way it might be exposed at the RBD layer would be as per-image settings, and with the new RBD metadata that was implemented, these QoS-specific settings could actually be set per image and stored with the image header, so they're stored in the cluster themselves, with that particular image.
F
Yeah, I guess we can nearly implement this feature almost as I said, I think, because if we want to implement a scalable version of this QoS, we need to add some tags to the messages for tracking, and we need to reserve something, somewhat like TCP does when we send the window size. Yes, something like this.
B
I'm not sure if that would play very well with, like, the other papers we looked at, which looked at more resources, like the network and the CPU in addition to the I/O, and tended to consider all of those things together as part of one algorithm, instead of treating them separately, like at the switch level and elsewhere at the storage level. By looking at them in one, yeah, unified system that had a view of everything. I think it's related to something we could be investigating.
C
There's a more complicated one, and they basically use different queueing mechanisms at different points, and they also use a global controller that intelligently switches or tunes certain parameters in certain queueing systems so that the overall latency is guaranteed. For example, if the network is too congested for a certain client, then they can probably increase the I/O bandwidth of the other clients to balance the end-to-end latency.
B
And at least one of them, a nice thing about that algorithm in particular, is that it was giving you strict latency bounds, provided the IOPS are actually provisioned; almost as though it was giving you bounds on the latency and CPU of all operations, rather than just the I/O.
G
It's complicated a bit by sort of environmental stuff, like whether recovery is happening or a scrub is happening, but you actually kind of want to capture that anyway. If you have a short enough averaging period, then you'd be able to catch transient effects like recovery when calculating the total IOPS capacity. So that would be a feature, not a bug.
G
We may be able to consider those directly. Those are scheduled as part of the prioritized queue as well, so we might be able to factor them in if we wanted to give them an IOPS-reservation concept as well. Yeah, but whether we'd really want to, I'm not sure.
B
So you think something that looks at the latency from the last, like, time period when doing requests would make sense? Yeah, something like several minutes or so.
G
Now I'm trying to think about actually doing it at the object store level. If we gave the object store some information, like "I think this is a three-IOP transaction with N bytes of operations", we might be able to ask the object store for the estimated throughput that way. I think it's probably similar to the client model, but yeah, either way you're just going to be taking an average of the I/Os over the last ten seconds and making decisions. Do you want feedback from the replicas?
G
That's another question, because operations, once sent to the replicas, need to be handled at basically don't-throttle-me priority, because once you've committed to performing a client operation, there is no sense in letting it throttle on the replica side. So when we schedule an op on the primary side, we're committing the replicas as well. So we may want to add a feedback mechanism by which the replicas can notify the primary of how much capacity they have available.
B
Even just for multiple clients on the same host, or using the same, what is it, the same VM, yeah.
B
You know, we could potentially have a VM setting that's simpler than the three different values. Like, one for VMs is that they don't have any reservations: they just get some proportional share, and they have some throttle limit. And then another setting for VMs that actually have a minimum reserved share of IOPS. You may find it's a bit easier to use.
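Those two simplified settings could just be presets over the full triple. A sketch with hypothetical names, mapping a single per-VM knob onto the (reservation, weight, limit) parameters discussed earlier:

```python
def qos_profile(kind, weight=1.0, limit=float("inf"), reservation=0.0):
    """Map a simplified per-VM setting onto the full QoS triple.

    "best-effort" VMs get no reservation, only a proportional share
    and a throttle limit; "reserved" VMs additionally get a
    guaranteed minimum IOPS.
    """
    if kind == "best-effort":
        return {"reservation": 0.0, "weight": weight, "limit": limit}
    if kind == "reserved":
        return {"reservation": reservation, "weight": weight,
                "limit": limit}
    raise ValueError("unknown profile: %s" % kind)
```

An operator would then pick a profile per VM instead of reasoning about three interacting numbers, which is the ease-of-use trade being suggested.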
E
A portion of this topic, as I understand it: I'm a big fan of the dmClock algorithm; from what I've seen I think it might work, so I like it. But it definitely involves trusting the client that's sending you that data, right? Yes. Is there any discussion of how we might broaden that for untrusted clients, and not just people who are stuck behind a hypervisor? Where do we think that's headed?
B
Yeah, and in that case, kind of a conservative thing to do would be just to give them higher limits, and not, like, have something sitting on the host side. Perhaps we would just reject any QoS settings from, like, such clients; perhaps there's a capability that says whether you're allowed to send your own QoS settings or not, and the OSD ignores them from any client that doesn't have that capability.
A
That's all she wrote. Yes, sir.