From YouTube: Ceph Performance Meeting 2020-06-18
A
So, new PRs this week: nothing. I guess it's been a little bit of a slow week, though there were a couple that closed. Adam's got one here that is related to his RocksDB sharding work, to prevent really large log sizes from being used — it just puts a maximum cap on them, which is good.
A
A directory, I guess — I think you want it external to the project rather than internal, I guess. So, let's see, what else. I don't know why Adam Emerson's PR for the datalog got closed.
A
Okay, no worries. Okay, let's see. And then there was an older PR from Ma Jinping, also around mocking, that got closed because none of us would get to it, so potentially maybe we should reopen that — I imagine it's pretty old, though. All right, let's see, updated PRs: there's one about blkin traces here. I think Igor self-assigned that to look at; it doesn't look like he's had time yet, though, so it's still there. Mempool splitting — oh, this is a good PR. So this looks great.
A
It's just basically making the mempools much more fine-grained, so we've got a better idea of what we're actually using memory for. So that looks fantastic. I think he's just working on fixing it.
A
Why it's not working right — and then the last updated one here is: oh, my PR. So this is kind of my ongoing work in the MDS, trying to figure out how to make CephFS faster for scenarios where you have lots of readers and writers hitting a single directory, potentially with many, many files and lots of MDSes. I can talk about that more later.
A
Otherwise, not a lot. I don't think there's anything in the no-movement section right now. Maybe the only thing is that we should still try, if we can, to get a couple of Igor's PRs in here — that would be good. The reduce-memory-footprint one would be really good to get in for the next release, and the simplify-onode logic one is probably the most important one, actually, that we get in. So — I don't think we have Igor.
A
No — he said he's gonna try to work on it, though, so hopefully we'll look at that in the next week or two here. Otherwise, the only other one of mine that probably would be good if we can get it in is the double hashing — no, caching — one, which I had not looked at, but I said that I would try to get it done. So that's also on the list. All right — anything I missed for PRs, guys?
A
All right, so the only thing I've got here for discussion topics is that this IO500 testing I've been doing has kind of led me down a rabbit hole. Initially, I was suspecting that the process of trying to fragment and export directories in the MDS was causing a lot of slowdowns and stalls — I still think that that's not false.
A
So last week I ended up taking the pre-fragmenting code that I'd written and extending it to also pre-export those fragments. I got that working on Thursday or Friday of last week, and it more or less worked fine, and it actually does help when we have a smaller number of clients, like with the tests I was doing. In that case it was significantly helping: we immediately saw all of the MDSes working together.
A
We still got the benefit of immediately having all the MDSes doing work, so that was good — I mean, it's still a little bit better in that way — but the throughput was very low, like maybe a couple of hundred ops per second per MDS, and there were periodic stalls and slowdowns and weird behavior.
A
So what I ended up doing is, I noticed that the authoritative MDS for the directory was always really busy, and I profiled it using gdbpmp, and it looks like it is doing a lot of work related to the CephFS journaling. Most of that work, it turns out, is this EMetaBlob data structure, where it's decoding bufferlists and throwing a bunch of stuff into maps.
A
So that process of decoding all that stuff is really slow. My initial thought was that it was actually the map data structure that was bad, because I was seeing tcmalloc going nuts trying to allocate memory from the central free list, and that's kind of a known thing with std::map. Though, just as a fun thing, I rewrote it using a vector, and that did solve the tcmalloc issue, but the naive linear search I was doing ended up being incredibly slow.
A
We are actually using the fast-search functionality quite aggressively, and we have a lot of dirfrags, so that didn't work so well. But the good news is that it actually proved that this is really hot code that slowed down all of the tests that involved one directory with lots of clients, very dramatically. So I next tried an unordered map — there's a flat map as well, and maybe that might be the way to go — but I did try an unordered map, and it didn't change as much as I was expecting.
A
I think that's actually what's making tcmalloc freak out, but there's a bunch of other stuff too. I mean, just the encoding overhead is high in general, and the map might be contributing to some extent — just destroying the objects that are put into the map. There are these temporary things called, I think, dirlumps or something — I don't remember — but just deallocating those, like ten percent of the MDS thread was just doing that, and you know, it's more or less single-threaded.
A
Okay — was it mostly inserting in order? Okay, okay, yeah — I mean, the insertion, I think it's all inserted up front during the decode, so I don't even know that we're actually inserting anything after the fact. I think it's just basically creating, you know...
A
Decoding is decoding, whatever. Anyway, I think it's only doing it once, and I was doing a bunch of operations searching on this thing, and eventually got rid of it.
B
We've had conversations about this over several years: we should really make the MDS multi-threaded — let's have CephFS multi-threaded and take advantage of it. Yeah, obviously, because the native clients are already multi-threaded, but you'd get immediate, massive wins. I mean, it's obvious, right — the MDS wants to be big.
B
Well, no — well, you're looking at good stuff, but I mean, that's orthogonal. We should drive on the parallelism. But having said that, you know, I've talked to people recently about this, like this week or something like that. There are patch sets — people want to do this; I've been trying to get this going for a while. It'd be exciting if someone could make headway there. Yeah, yeah, it's great.
C
Quick note on the matter of the EMetaBlob optimization and bufferlists: append — bufferlist append — is truly supposed to be extremely cheap, apart from one single case: when the appendable buffer is exhausted or unavailable. In that situation append needs to go out and allocate memory. Append, or that bunch of methods, actually shouldn't do that: in the best case, the entire memory allocation should be served only on the construction of the bufferlist in general, by the reserve method.
C
So maybe it's worth taking a look at whether the decoding — or the encoding, whatever — actually reserves space in bufferlists. Maybe it's not, and if it's not done well, it would not only be wasteful in the matter of CPU cycles, but also, maybe mostly, in the matter of wasted memory — especially since the allocation unit of bufferlist appends is actually 4K.
C
Okay — so we could be encoding fifteen bytes and appending a whole 4K of space? Yep, exactly, yep.
C
We have two variants of our encoding/decoding subsystem. The old one — I mean the encode/decode type — if I remember correctly, it's not about trying to calculate the reservation size before doing the job. But, mostly for the sake of BlueStore specifically, we introduced a new subsystem — it's called denc, denc.h — and this...
A
Yep, I agree. I didn't remember that that's how we implemented it, so thank you for that, because that would be useful. Yeah, maybe what we'll try next is just to switch over to the new encoding scheme, because I don't think there's anything in there that should make it not work.
C
It might be also worth verifying how much of the allocation made in an append — how much of that data — we are truly using. I wouldn't be surprised if it's terribly wasteful: if we are, let's say, allocating 4K just to append a thousand bytes, it wouldn't be surprising, yeah.
A
You're kind of breaking up a bit for me, but yeah, I think if I understand you — yeah, the whole issue would be a much easier thing to address with the new encoding.
A
All right — well, anything else, guys?