From YouTube: 2019-01-31 :: Ceph Performance meeting
All right, we don't have too many new pull requests to look at today, although the two that we have are, I think, both kind of neat. The first one here is some work from Ma Jianpeng regarding batch handling in send message, and there were some comments in this pull request about maybe potentially doing this even a little bit better. I haven't looked at it real closely yet, but I think it's actually a really good direction to go in.
Thanks, cool. From the bunch of PRs for bufferlist, at least the last three ones are buffer optimizations: for reference counting, and for deferring creation and destruction of buffer pointers. The smallest one is just for killing the last_p member. At the moment — well, usually I tried to make a one-to-one change, I mean an optimization without any possible impact on the users of last_p. But after rethinking, I don't know, maybe the better way would be just to drop last_p entirely, even without introducing the methods taking last_p externally. I'm not sure — it might be that last_p is just not useful at all. It is, but I don't see any place where it gives, or even potentially gives, a huge advantage. Maybe we could just drop it, and if we got any report about a visible regression, then just introduce those more complex methods taking last_p externally, just for that single place. I'm not sure.
— to, you know, just sort of not have to rewrite the entire world with every commit, which is always nice. So I'm basically taking his interface, rewriting parts of it to use the Object interface, and amending Object a little bit so that we aren't relying on IOContext, because that in practice doesn't actually match the way that RGW uses the API, since RGW makes quite extensive use of the object locator key. It doesn't really make sense in this case to essentially treat the object locator key as part of the pool, as opposed to something you can just throw in to address the object at each point. Anyway, I am basically debugging sort of my first revised attempt. I had to sort of split things out slightly more than they were, just because of a bit of shared infrastructure, but once that's done I hope to push that thing for testing, and once that works I can then just sort of march onward from there.
Adam, I'll chime in and say I was at a little Ceph meetup thing that they had here in Minneapolis, and I was telling the guys from the Supercomputing Institute about Beast and also about the stuff you're doing, and they were super interested and excited about it. So you've got fans.
I guess for v1, where we used the cipher just to encrypt — to process two or three AES blocks, depending on what we are going with — it affects the version 1 signature, the signature computation. For protocol v2 it's entirely different: we are doing on-wire encryption for the entire message.
The one pending rebase is my big one from a while ago for the autotuner — the memory autotuner — and that one I've been breaking down into much smaller PRs and have gotten maybe 70% of it already merged into master. It's the last part that's really the most interesting and kind of game-changing one that hasn't gone in. We decided to wait until after Nautilus before playing around with that more, so I've actually got a couple of branches I'm doing testing in, but we'll wait on that.
I put it in chat — I don't know why, but my intuition is that this is a good time to talk about how operations in the CephFS MDS are handled and whether their work queues can be made parallel. Years ago I made changes there, but with a big lock you couldn't get any benefits, because the MDS wasn't parallel. It seems like, while the hardware underneath is getting superfast, we should be looking ahead.
We should probably see if we can get Patrick into one of these meetings, if he doesn't have a conflict. I'm not sure if it would be worthwhile to have it without Sage and Patrick here today; unfortunately, that's probably a no. So okay, what I'll do is I'll write this in here.
All right, let's see. So I've got a topic here, recently, for a couple of reasons. We've been looking at the performance of creating containers in kind of a container workload with Ceph RBD behind the scenes, and one of the things that we noticed was that when a cluster has a lot of I/O going to it, the creation speed can be so slow that things start timing out.
We were trying to understand why, and we don't know entirely, but one of the things that came up in the process of doing this testing is that the step of creating a file system on a Ceph cluster that has a lot of load, on top of RBD — kernel RBD in this case — is really, really slow, and it's slower the bigger the file system gets. It may make sense, right? The bigger the file system, the more work it potentially has to do, but it's pretty dramatic.
We were seeing on a small cluster, just a tiny little one, that with a background sequential write workload it could take over 800 seconds to make a 4 terabyte XFS filesystem. As far as the container world goes, 4 terabytes is probably pretty big, but it's still kind of a big deal, because Kubernetes has a two-minute timeout for its kind of mount/bind whatever step — I don't actually know Kubernetes well; that appears to be what's hard-coded in their code.
So one of the ways we could maybe kind of get around this would be to try to turn those writes into discards and then basically kind of ignore the discards in some cases. We could maybe — I saw Matt, but I think he's maybe not talking to me — so that might be something that we could do that would make this faster.
The good news, I guess, for people that don't use XFS is that both ext4 and btrfs, especially when nodiscard is used, are much, much, much faster — on the order of seconds or fractions of a second. So this is really pretty XFS-specific, but it does appear to become an issue for folks that are playing around with containers and Docker and Kubernetes and all this stuff.
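For reference, the discard pass the speakers mention can be skipped explicitly at mkfs time with standard flags; a sketch (the device path is illustrative, and skipping discard trades mkfs speed for leaving stale blocks allocated on the thin-provisioned RBD image):

```shell
# XFS: -K tells mkfs.xfs not to discard blocks before building the fs.
mkfs.xfs -K /dev/rbd0

# ext4: the nodiscard extended option does the same thing.
mkfs.ext4 -E nodiscard /dev/rbd0

# Btrfs: -K / --nodiscard likewise skips the trim pass.
mkfs.btrfs -K /dev/rbd0
```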
I regularly have this with my 50 terabyte XFS file systems, and yes, it is extremely slow. Okay, but I have actually started to look at using CephFS instead for this, which I can go into over here, but certainly with erasure-coded pools the write performance is a lot better than using XFS on RBD on EC pools.
Yeah, so I was looking at the counters — the write, or the write-full counters — and sort of playing around with some of the RBD settings and the XFS stripe-alignment settings, so that it does the stripe alignment, and I was getting better performance. I could see the write-full count was going up, but even though the workload was quite heavily sequential, it wasn't doing it in full writes; and then, when I tried it, it was so fast it just steamed ahead.
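For context, the alignment described here can be set at mkfs time. A sketch, assuming the default 4 MiB RBD object size and an illustrative pool/image/device name — the goal is that sequential I/O lands as full-object (write-full) operations on the OSDs:

```shell
# Check the image's object size first (default order 22 = 4 MiB objects).
rbd info rbd/myimage

# Align the XFS data stripe to the RBD object size:
# su = stripe unit, sw = number of stripe units per stripe.
mkfs.xfs -d su=4m,sw=1 /dev/rbd0
```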
Yeah, okay, okay. I may have more questions for you, Nick, on that coming up in the future, because maybe that takes us into the next topic, which is that we're really interested right now in looking at what the future of kind of userland RBD might be, and kind of how it relates to kernel RBD, and features in kernel RBD, and this kind of thing.
So Sage had asked me earlier in the week to go through and just kind of try to get an idea of where we're at with rbd-nbd performance, kernel RBD, there's rbd-fuse, and then there's also, apparently, something for using TCMU runner with RBD, which I didn't even know existed. So there's a lot of different things out there, and it's very interesting to hear that you were having a better experience, at least in your test, Nick, with rbd-nbd.
So anyway, I'm gonna try to go through and start examining these things and seeing kind of where we're at with all of them, and the thing that I kind of remembered — I scoured my to-do list and remembered — is that the RBD benchmark classes in CBT are kind of awful: there's a bunch of dead code and ugly code, and none of them are really unified.
So yesterday I sat down and started rewriting all the FIO RBD benchmarks to use a common base class, to just kind of organize everything and make it reasonable. So those are starting to work now, and hopefully, by the time we're done with all this, we'll have a really nice, convenient way to test all these things repeatedly.
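A minimal sketch of the common-base-class idea being described: the fio plumbing shared by the RBD benchmark variants (librbd ioengine, kernel RBD, rbd-nbd, rbd-fuse) lives in one place, and each variant only overrides how fio addresses its target. Class and method names here are illustrative, not CBT's actual API.

```python
class FioBenchmarkBase:
    """Shared fio options for all RBD benchmark variants (illustrative)."""

    def __init__(self, image, runtime=60, iodepth=16, bs="4M"):
        self.image = image
        self.runtime = runtime
        self.iodepth = iodepth
        self.bs = bs

    def target_args(self):
        # Each subclass says how fio should address the target.
        raise NotImplementedError

    def fio_command(self, rw="write"):
        # Common options live here instead of being copied per benchmark.
        args = ["fio", "--name=cbt",
                f"--rw={rw}", f"--bs={self.bs}",
                f"--iodepth={self.iodepth}",
                f"--runtime={self.runtime}", "--time_based"]
        return args + self.target_args()


class LibrbdFio(FioBenchmarkBase):
    def target_args(self):
        # fio's librbd ioengine talks to the image directly.
        return ["--ioengine=rbd", f"--rbdname={self.image}", "--pool=rbd"]


class KrbdFio(FioBenchmarkBase):
    def target_args(self):
        # The kernel-mapped variant just points fio at the block device.
        return ["--ioengine=libaio", "--direct=1",
                f"--filename=/dev/rbd/rbd/{self.image}"]
```

With that split, adding an rbd-nbd or rbd-fuse benchmark is another small subclass rather than another copy of the fio option handling.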