From YouTube: Ceph Performance Meeting 2020-08-28
A: Hey guys, sorry for the late arrival; I was trying to quickly send an email out and lost track of time. Okay, so a small crowd this week, which is totally okay; there's not a whole lot on the agenda.
A: Let's see, we've got a couple of PRs, one closed this week. That was avoiding flushing too much data at once, from majiangpeng. We mentioned that last week; that looks really good, that's fantastic. And then another one that was updated, also from majiangpeng: Radek approved the changes to reduce bufferlist rebuilds. I think he was also looking at doing that more generally, so whatever work was being done there, it looks like Radek thought this was good to move forward with. Igor, I think he had requested you to take a look and just verify you thought it was okay too. That's 36387.
A: Okay, cool. Other than that, I did not see anything else that had closed or was particularly new this week, so a little slow, but that's the summer.
A: Interesting. All right, I don't actually have any topics to discuss this week. There's lots and lots of ongoing work, but nothing really interesting to report at the moment. Maybe the only thing would be: Josh mentioned a couple of interesting papers, I think both from FAST, and I was wondering if anyone would have any interest in chiming in on what they want to look at.
C: Can you send the link again to the pad? Or, yeah, in general, is it possible to add it to the event in the calendar?
C: You know, that's weird. At least in the BlueJeans calendar, I think I have permission to edit it. Oh, I don't know if it will change it for everybody, but I can add the link if that works, yeah, sure.
A: Personally, the RocksDB one looked interesting; I'd be curious to look through what they've been doing.
C: Yeah, I haven't had time to read even the abstract of it, but I'm down to go through it, and then we can discuss it in like two weeks.
C: Did you see the email I forwarded to you, the response from one of the authors of the ice paper of my packs?
A: Yeah, yeah. It looked like they had just said that they weren't ready yet, right?
A: Yeah, yeah, I mean, definitely follow up with them. I don't know, you know, they were pretty short in their reply, right? They didn't say a whole lot.
C: Yeah, they just said they were pretty much busy and that the first author would contact me once they have some progress.
A: Yeah, I mean, if they have more that they'd like to, you know, discuss or showcase, definitely would be interested in finding out more. I wonder if maybe they're being a little tight-lipped because they're gonna do a follow-up paper, and they don't want to tell us everything they're doing right now, so that they can present that later. Yeah.
A: Well, does anyone have any opinions, besides me and Josh, about the next paper for two weeks?
D: Yeah, yeah, and one of the reasons I don't want to say two weeks from now is that next week I might have more discussion around onodes, once Adam is back and we have someone new hired, Gabi BenHanokh, who's been taking a look at this a little bit and has some interesting ideas.
A: Yeah, I think both what Adam was talking about and what Gabriel was talking about look really, really interesting. I'd love to see what kind of experimental results they might end up with, if they pursue some of these ideas.
D: Yeah, definitely. I think some of it aligns pretty well with what, Igor, you were trying to do with reducing the memory footprint of the onodes in your PR from the past year.
D: Or anything to try to figure out which parts of the onode storage were expensive, or which ones ended up taking up a lot more space.
B: Well, there are, there are, in one of our unit tests. Hence, well, that's the only thing that I used to decide which structure to optimize or not, before.
B: It's about reducing the amount of data needed to update on each write. Instead of updating the whole onode, we shard it into several pieces, mainly the extent map of it, and hence when a write comes in you need to update just a single shard rather than the whole onode structure.
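A minimal sketch of the idea described above (not Ceph code; the names and the 64 KiB shard span are hypothetical): if the onode's extent map is split into fixed-span shards, a small write dirties, and therefore re-serializes, only the one shard covering its offset rather than the whole map.

```python
SHARD_SIZE = 64 * 1024  # hypothetical shard span in bytes

class ShardedExtentMap:
    """Toy model: shard index -> {offset: length} mappings."""

    def __init__(self):
        self.shards = {}

    def write(self, offset, length):
        idx = offset // SHARD_SIZE            # locate the one shard touched
        shard = self.shards.setdefault(idx, {})
        shard[offset] = length
        return idx                            # only this shard is dirty

emap = ShardedExtentMap()
dirty = emap.write(130 * 1024, 4096)
# A 4 KiB write at 130 KiB dirties only shard 2; other shards,
# if present, would not need to be re-encoded.
```

The win is that the metadata rewritten per write is bounded by the shard size, not by the total size of the onode.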
B: In some shards, on another resharding attempt, the sharding is completely different, and this goes back and forth; not to say it's... it might be pretty expensive.
D: Like, simplifying... like, reducing the blob sizes that we're considering, and potentially maybe having specialized onodes for different types of data; like, say, an RGW object that's all written in one giant write, or only appended to, and we already know what its size is going to be.
D: Yeah, it's definitely relevant. I think at this point they're working on a lot of low-level internal details of SeaStore, not even hooking it up to the ObjectStore interface yet, so...
D: It might be a little bit early to think about optimizing things, but thinking about the internal structure for SeaStore also makes sense, yeah. We have similar concerns regarding what kinds of workloads and what kinds of requirements make sense for different kinds of objects.
A: Cool, yeah, I definitely think that sounds good for next week. So, okay! Well, does anyone have anything this week that they want to talk about or bring up? Basically, it looks pretty much like I don't have anything else, so... anyone have anything?
B: Makes much sense to try to fix this, well, I'd say, regression in master, since that's a rather corner-case benchmark; but, well, I'd like to hear from you. What do you think about all this stuff?
B: Yeah, well, actually, this max blob size impacts the size of the search range when we look for...
B: Maybe something related to sharding as well, but we can't say for sure. Actually, that's probably a good question: why do we need to cap the blob size, unless we are talking about compression? And actually I don't have a good enough answer to this question.
A: Igor, in that last... in the email that you sent out where you're giving benchmarking numbers, what were those, reads or writes?
B: No, that's... this relates to the initial benchmark, which performs, rather, punch writes, yeah. It was rados bench in write mode, using 4 MB writes by default, so I haven't tried anything else.
A: With, like, a smaller blob size versus the 64k max blob size, do you see more fragmentation on the write, so there's more seeks that are hitting the disk because of it?
B: So I haven't seen any issues in reads when the DB sits on an SSD drive, that is, with a slow main drive; but again, this is a pretty artificial scenario. We have just a single client on an idle cluster.
B: It's hard to say if we can see this regression in reality, in this scenario.
B: Well, first of all, I'd like just a confirmation from somebody else of the precondition, like a spinner-only scenario.
D: The reporter, or with your testing, did you take a look at the onode hit rates? Because it seems like decreasing the size of the blobs might increase onode size, leading to fewer onodes in cache.
A: How do we... so if you have, like, a 64k blob or a 512k blob, how do we search for space for those? Is it likely that if you've got multiple 64k blobs, in the scenario you tested, you'd get contiguous space, or is it likely that you would get space from different regions?
B: ...result in better performance for master, but that's not the case, so...
B: The better case: when the hybrid allocator is also enabled, as in Octopus, the fragmentation should be pretty much the same.
A
If,
if
it
turns
out
that
we
have
like
a
situation
where
we
would
benefit
by
having
contiguous
space,
even
if
we
have
a
smaller
blob
size,
so
if
we
have
64
cable
up
sizes
to
512
k
pop
and
if
the
problem
is
that's
not
contiguous
and
we're
jumping
over
doing
six
all
over.
But
we
could.
We
do
something
like
reserve,
the
512
k
and
you
know,
assign
blobs
contiguously
in
it.
But
then,
if
we
don't
use
it,
just
like
you
know
give
it
back.
Basically,.
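A hedged sketch of the reserve-and-give-back idea floated above (hypothetical, not Ceph's allocator; the 512 KiB / 64 KiB figures come from the discussion): reserve a large contiguous region up front, carve blobs out of it back to back so sequential writes don't seek, and return whatever tail was never written.

```python
RESERVE = 512 * 1024  # contiguous region reserved up front
BLOB = 64 * 1024      # blob size carved out of it

class Reservation:
    """Toy model of a contiguous reservation for one object."""

    def __init__(self, start):
        self.start = start
        self.used = 0

    def alloc_blob(self):
        # Blobs are placed back to back inside the reservation,
        # so sequential writes never seek between them.
        off = self.start + self.used
        self.used += BLOB
        return off

    def release_tail(self):
        # Bytes reserved but never written, to be given back.
        return RESERVE - self.used

r = Reservation(start=0)
offsets = [r.alloc_blob() for _ in range(3)]
# three contiguous 64 KiB blobs at 0, 64 KiB, 128 KiB
```

The trade-off is holding space optimistically; if the object stops growing, `release_tail()` models returning the unused remainder.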
A: It's the... it's the benchmark case, right, that everyone looks at, but that's bad, right?
A: You were reminding me again: what's the benefit, on a hard drive, of going to a 64k max blob size?
B: Well, as I said some time ago, I don't have a perfect answer to this question.
B: Well, the first thing that comes to my mind is that, well, this maximum impacts the search range when we try to reuse a blob.
B: If we have a 64k maximum, then we need to go back minus 64k and scan from this starting offset till plus 64k, so 128k total, and then in this region we are looking for extents and blobs to reuse.
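The window arithmetic just described can be sketched as follows (a simplified illustration, not Ceph code): with a 64 KiB max blob size, the candidate region for blob reuse around a write offset spans from 64 KiB before it to 64 KiB after it, 128 KiB in total.

```python
MAX_BLOB_SIZE = 64 * 1024  # the cap discussed in the meeting

def reuse_search_window(offset):
    """Return the [lo, hi) byte range scanned for reusable
    extents/blobs around a write at `offset`."""
    lo = max(0, offset - MAX_BLOB_SIZE)  # clamp at object start
    hi = offset + MAX_BLOB_SIZE
    return lo, hi

lo, hi = reuse_search_window(256 * 1024)
# a write at 256 KiB scans 192 KiB .. 320 KiB, i.e. 128 KiB total
```

This is why a larger cap directly widens the region that has to be scanned on every write.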
B: We had, like, a 4k alloc size for SSD, and a 64k max blob size for SSDs as well; and a 64k allocation size for spinners, and 512k, which is 16...
B: Just to avoid any additional effects; but this also results in some impact, yeah.
B: Yeah, yeah, and that's what made me talk about onode structure simplification, you know.
B: Yeah, I agree. On one side, it's hard to introduce any fixes in the current design and to predict how a specific parameter change might impact all the things; but sometimes we need such changes, and then it becomes a pain.
A: Igor, about a year or two, probably closer to two years ago, I actually ported the NewStore code to work in master, because I was just curious how well it would do, and it actually was not that much slower than BlueStore, and it was much, much less complicated.
B: Yeah, but, well, first of all, there's all this additional functionality, like checksums or compression, yeah, which introduces additional complexity and other stuff.
B: The topic is the benchmarks under different scenarios. Some paths might be invisible to simple benchmarks, but stand out in more complicated ones.
B: Actually, currently it's a good time to revise all this stuff, since on one side we learned... we gathered some experience in how it behaves, and in the issues which arise in production, and all this stuff.
D: Definitely a better place to make more empirical decisions now than before; it was in production for years. I guess another thing that you mentioned, like compression: I forget if Adam has already presented his results there, but he's been doing some testing around that and found that the current blob sizes are too small to be useful.
D: Increasing the max blob size for compression is definitely something that we'll want to do as well, but it needs a lot more testing to figure out the effects of that and, yeah, what's kind of optimal for compression versus...
E: Yeah, at a different level, I was going to just say that BlueStore has very quickly evolved with releases, and a lot of things that were relevant for Nautilus are maybe not relevant for Pacific. At some point, Igor, if you would like to do a code walkthrough of the new aspects of BlueStore that people are not aware of, it might be something that people may be interested in, and it also gives us a chance to go back and focus on aspects that we have probably been missing for a while.
D: I think it'd be very helpful as well, especially with, like, Gabriel training and trying to familiarize himself with things.
A: I don't know, I mean, you're right, we've done tons of stuff since then, we've changed a lot of stuff since then; but, you know, kind of the initial model for how all this worked was done pretty fast.
D: I mean, Igor, to your point about the lack of sufficient test suites for performance: it would be worthwhile trying to enumerate the different scenarios that we see, like the most common in the wild.
B: ...on each write, and this made us introduce this sharding stuff, just to reduce the data written on each operation. And I suppose there were plenty of such decisions along the BlueStore development path, but unfortunately we didn't cover this with good enough test coverage. So, well, right now we remember that sharding was introduced to avoid bloat to RocksDB, but for many other features we don't have such information in our memory.
A: A lot of these kinds of things, though: it's not a test that will tell us, right? It's like, you know, you might see some change in performance, you might see some change in CPU usage, but it's actually understanding where the code is actually spending time that kind of tells you, you know, was this a good change or was this, you know, not a good change, right?
B: Yeah, and for now we don't have good enough coverage, so generally we are not able to check for regressions in performance. Well, at least myself.
B: So, in these conditions it's hard to redesign existing code. For features we have more or less good enough coverage, so if we break something, we can find that quite easily; but for performance...
B: It's not that easy, because it takes plenty of time to test. Again, we don't have good enough coverage; we don't have some... not sure, I don't know how to say it in English, but we don't have fixed...
B: Fixed hardware, where we can perform all the benchmarks to detect regressions. It makes not much sense to benchmark using this hardware this year and that hardware next year. So we need test cases, and we need some...
B: Sorry, one moment. Preferably not a virtual one, preferably not under additional load from something else; so you need to rerun these scenarios in exactly the same conditions.
A: Igor, this is perfect, because you should work with Kefu on getting the scenarios you're talking about, the tests that you're talking about, into the Jenkins stuff that he and Radek were working on, because they have that working with classical OSDs now. It's basically there and ready to go; it just needs useful test cases like you're talking about. It's fixed hardware, moderately fast hardware, that has both hard drives and NVMe drives in it.
A: We tried to set it up so that it wouldn't get, like, OS updates or other things changing it. We've got four nodes right now dedicated to this and Jenkins.
B: Network... yeah, that sounds promising, but anyway, that's a lot of work to do, to have good enough performance coverage, you know.
E: That's something that we've been striving for, at least in the beginning of the Pacific release, right? So I think, Igor, from your standpoint, if you could just describe the test cases that you're thinking of; if there are small, shorter tests that can finish fast enough.
E: I think what Mark described is a perfect way to just get that integration done with Jenkins; but if there are longer-running tests, aging tests, other kinds of things that you're thinking about, then we might have to, you know, use some other hardware for it. But I think it's a good time for us to at least write down those test scenarios and see whether what we have is enough to start off something, well, I'd say.
B: So we are at the beginning of this long way. So let's start with some trivial scenarios and then extend them as needed.
A: ...recognize that we may break things and make things worse before we make them better, right? Like, you know, that's what you were going through with the work that you were doing for the min_alloc_size, right? It got worse, but then you were able to kind of, you know, make things, if not better overall, much closer to what they were previously, while still getting the benefits.
D: I mean, that's kind of the idea of where we're going with the performance CI stuff we've been talking about. Like, right now the Jenkins jobs are triggered for any PR that has the performance label, and the idea was that we'd also have some longer-running tests in a teuthology suite that might run on dedicated nodes in the future.
D: But I think the first step, like you said, is fine: it's just getting the kind of basics in place there.
D: But I think it's gonna be, it'll be like a very huge acceleration for us in the longer term if we can get those in place. It might be a lot of work to get there, but once it's there, it'll save us a lot of trouble and time trying to run our own tests on different hardware and trying to recreate the same things over and over again.
E: Exactly, and I think, Igor, to your point: I don't know if you're aware of this or not, but I had worked on this performance suite in teuthology that literally anybody can go run. You don't need to know how to run CBT or how to set up stuff; you can just run a test...
E: ...suite, and it'll run different kinds of workloads for you. But again, the same problem exists: we don't have common hardware that we're running these tests on. That's also one of the aspects of the perf CI work: if we have some dedicated hardware that we can run these tests on, we can definitely incorporate the kinds of workloads that you are thinking of and have a baseline to maintain over weeks and months to look at.
A: Hey guys, I have a hard stop; I've gotta go right now. I don't know if it's gonna close out the meeting or not, but really good discussion. If I leave and it closes out, everyone have a good week; otherwise feel free to keep talking.
D: Yeah, so maybe we have some, like, minimal tests right now in the CI system for master, then.
B: Exactly, with discussions and improvements on these; yeah, that sounds good.
B: It would be easier for you, since you contribute there more often; but, well, right now I just... I have little knowledge about all this stuff.
E: All right, well, I think it was a very good discussion today, and I'm looking forward to getting this work in and it being more useful.