From YouTube: Ceph Month 2021: Qemu: librbd vs krbd performance
Description
Presented by: Wout van Heeswijk
Full schedule: https://pad.ceph.com/p/ceph-month-june-2021
A: Yeah, so, I'm going to try and really talk like lightning, because I have a bunch of information. We have customers coming in, and they have questions about the performance of krbd or librbd, or krbd versus librbd, and this is mostly related to QEMU workloads.
What we set out to do is make sure we gathered data that lets us compare the performance characteristics of the two major Ceph clients, the major RBD clients: the librbd user-space client and the krbd kernel client.
What we often find is that somebody will think or feel something about the performance, but not be able to describe what the exact test or the exact scenario was. So we feel it's very important (and yes, here I am still talking about my feelings) to define this precisely. There are four major scenarios that we have defined for this particular test.
First, QEMU using librbd, where we run the test inside the virtual machine using the libaio engine. Second, QEMU using krbd. Third, bare metal on the same host that the virtual machine runs on, using the librbd io engine of fio. And fourth, host performance on bare metal with the krbd mapping, where the io engine is again libaio.
Now, the tests. What was important for us is to see different types of results. Vito did a talk yesterday about the importance of testing with a queue depth of one. Non-sequential: we did one run for each of the scenarios and then a second run for each of the scenarios, so they are not consecutive. We used a random-read and a random-write profile; block sizes of 4k, 64k, and 4MB; and the io engines like I said before. We are only looking at IOPS right now, just because it's a handy number that is easily compared.
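The parameters described above map naturally onto an fio job file. A hypothetical sketch of the librbd scenario (queue depth 1, random read, 4k blocks, via fio's rbd ioengine); the pool, image, and client names are placeholders, not the presenter's actual configuration:

```ini
; Hypothetical fio job approximating the librbd scenario:
; queue depth 1, random read, 4k blocks, user-space rbd ioengine.
; Pool/image/client names are placeholders.
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimage
iodepth=1
rw=randread
bs=4k
runtime=60
time_based=1

[qd1-randread-4k]
; For the krbd scenarios, the image would instead be mapped with
; "rbd map" and benchmarked as an ordinary block device:
;   ioengine=libaio
;   filename=/dev/rbd0
;   direct=1
```

Varying `bs` (4k, 64k, 4m) and `rw` (randread, randwrite) reproduces the grid of scenarios described in the talk.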
The setup itself is not very interesting to me, because what we are trying to achieve here is not the highest speeds or anything. We are just trying to get results that are comparable; that was our goal, not to have the best Ceph cluster or the best setup. This is a practical setup that is running in production for this customer.
Now, it almost looks like the performance is cut in half here, but it's not: if you look at the left, you can see that the scale changes. krbd tops out for this cluster at almost 45,000, while librbd tops out at nearly 20,000 for just about everything. You can see that if you increase the block sizes, performance does dip a little, but that's to be expected.
So here the krbd host is tested with libaio in fio, and the librbd host is tested with librbd in fio.
What you see here is that the librbd VM and the krbd VM are roughly the same again. So the librbd host and the krbd host are quite different, but the VMs are generally the same, so there's another ceiling there that we are encountering.
librbd and krbd under QEMU are about equal in performance. The average of the averages, like I said, is about 100, so krbd performs at 100 percent of librbd in this case. When we compare librbd versus libaio on krbd, we do get very different results: we get an average of 150 percent, so krbd performs at 150 percent of librbd. And at 64k, QEMU with krbd performs a bit worse compared to librbd than at 4k and 4MB.
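The "average of the averages" figure quoted above is just the mean of the per-block-size IOPS ratios. A minimal sketch of that calculation (the IOPS numbers below are made-up placeholders, not the talk's measured data):

```python
# Sketch of the "average of averages" comparison described in the talk.
# The IOPS figures are illustrative placeholders, not measured results.

def relative_performance(krbd_iops, librbd_iops):
    """Return krbd IOPS as a percentage of librbd IOPS, per block size."""
    return {bs: 100 * krbd_iops[bs] / librbd_iops[bs] for bs in krbd_iops}

def average_of_averages(ratios):
    """Collapse the per-block-size percentages into one headline number."""
    return sum(ratios.values()) / len(ratios)

# Hypothetical per-block-size average IOPS for the bare-metal host scenario.
krbd_host = {"4k": 45000, "64k": 30000, "4m": 1500}
librbd_host = {"4k": 20000, "64k": 19000, "4m": 1000}

ratios = relative_performance(krbd_host, librbd_host)
print(average_of_averages(ratios))
```

A ratio above 100 means krbd outperformed librbd at that block size; the headline number hides per-block-size variation, which is why the talk also calls out 64k separately.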
The next step is basically to test with Pacific, especially based on the talk by Ilya, and then compare the results again.
B: Yeah, this is exactly the thing that I talked about, and it's interesting that an independent test sort of confirmed it, right down to the same number, because what we saw was somewhere between twenty thousand and thirty thousand. With some configurations you could push it a bit higher, but the fact that you also arrived at the same 20,000 number is just another confirmation.
A: Yeah, so I think the 20,000 is probably based on the system that I was using and not on a real practical limit of librbd. We did not set out to do everything the fastest; we just set out to test this system, and this is what this system did. But yes, the 20,000 seems to be the upper limit of librbd in any situation, and the same limit seems to be applicable to QEMU.
It could be; maybe, I don't know where exactly, but we will publish our fio configuration so that everybody can also test this on their own systems. I don't know of any throttling that we encountered. I don't know if you have seen Ilya's talk; I highly recommend it, because it's a good update on where RBD stands and where RBD is going, and there are some clear explanations of why the ceiling exists and what they are doing about it.