From YouTube: 2019-06-13 :: Ceph Performance Meeting
A: Let's see, there's a PG mapping cache PR that Sage has been reviewing; he mentioned it earlier. Apparently it's actually providing a relatively substantial improvement, so that's really good. Radek has been going crazy working on this input buffer factory stuff for Crimson, looking at fragmentation and generally the trade-offs between different ways of doing this.
A: In terms of how it affects performance at different I/O sizes, there's lots of info there. I was hoping he might be able to talk a little bit about it, but I don't think he was able to make it to the meeting today.
A: So, let's see, the other big one this week is that Adam's next iteration of his sharded RocksDB work in BlueStore has a PR now. It's a big PR with lots of commits, so it'll need a review from somebody, but it's gonna be a doozy, I think. A couple closed this week: Jason's messenger async one that I mentioned last week. It reduces calls, and I think there was a notable performance improvement with it, so that's good too. The only two other ones that we've got were both closed by the stale bot. I'm a little sad about both, but that's how it is. One was to reduce the number of BlueFS space allocations.
A: It's in teuthology right now for another round of testing, and hopefully teuthology will not find anything, and then we can merge it, assuming people want to review it. And, you know, I guess if nobody wants to review it and nobody requests changes, well... Anyway, if it passes teuthology, then we should be able to merge it in. I have some follow-up work on the same thing, but now that it's been sitting, and it's, I think, 150 commits by now, it'd be kind of nice to get it in, yeah.
A: Cool, well, yeah, looking forward to seeing it get in, and then we'll see; the sky's the limit, maybe, right? All right. Next PR: there's an update for the io_uring I/O engine work that Roman is doing. I confess I haven't looked at that since Cephalocon, so I need to get back to it, review it again, and see how badly the new locking slowed it down. But it's there and ready to be looked at, and other folks are interested too.
A: There's this objectstore op create one; I don't remember what that is, sorry, but apparently it got updated.
I finally reviewed Igor's small PR for making the autotuning more aggressive on startup. I've owed him that forever, so I finally did it. I'm not totally sure if it's a good idea to make the interval for changing the cache size bigger; that PR changes it from one second to five seconds. The advantage of it being bigger is that it's less work.
A: The mempool thread is doing this five times less often, but I haven't seen it using a ton of CPU even at a one-second interval, and the downside, potentially, is that we don't respond to cache changes as quickly. The case where this can show up sometimes is during the initial onode growth when you first start up the OSD and you've got a bunch of BlueStore onodes starting to be populated. If you make that interval too big, it waits too long before updating.
A: It's kind of hard to read, because it totally depends on how fast you're reading onodes into the cache. You know, if your device isn't very fast, then having a bigger interval could make sense. But if you're on, like, a super fast device, you probably want to iterate through really quickly, so that you can grow the cache as fast as you need to in order to not run out of space. So, yeah: trade-offs.
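For context, the interval being discussed here is the OSD's cache-resize tick. A minimal ceph.conf sketch of the trade-off; the option names below are from memory and worth double-checking against the actual PR:

```ini
[osd]
# Memory budget the autotuner balances the BlueStore caches within
osd_memory_target = 4294967296
# How often the mempool thread re-evaluates cache sizes.
# Bigger = less background work, but slower response to onode
# growth on startup; the PR discussed above would move the
# default from 1 second to 5 seconds.
osd_memory_cache_resize_interval = 5
```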
A: All right, well, let's look at stuff that's been updated this week. There's, you know, a bunch of stuff that's old, but maybe the two in here that are kind of interesting at the moment, that I'm hoping continue to be updated (there's more than that, I guess): the userspace io_getevents one that Kefu wrote. It would be good to continue looking at that; that's an idea that I think he got out of both Seastar and out of FIO, and it might be better than what we're doing now. And then there's also this polling of AIO events from userspace, although it's really experimental and not in the kernel yet.
A: Oh, there's also the autotuning of the MDS cache memory limit, and there's the mon equivalent of that as well. That one's in the works and needs another review today, so both of those things we need to make sure continue to make their way through.
A: Okay, well, I've got a couple of different discussion topics here that we can go through. I mentioned that Neha was working again on the client endpoints support in CBT and teuthology. I think Kefu has mentioned that he is interested in using that for Crimson testing, and then also, you know, we can use that for just nightly testing in general for traditional stuff.
A: One of the things I wanted to talk a little bit about is some future direction ideas for CBT that have been kicking around in my head for a while. One of the long-standing things I've wanted to do with it is take the cluster definition in the YAML and make it more abstract and generic, so that we can implement different kinds of clusters.
A: The first, the big one here, would be something like a ceph-ansible backend, so that, instead of only having the ability to create Ceph clusters through CBT itself, it could farm that out to whatever our current installation method is. That's been a long-standing request from, like, the reference architecture guys, who want to make sure that they're testing on a cluster that was deployed using the same tools a user would use, so I think that would be useful. But beyond just ceph-ansible, or whatever other Ceph backend we'd use there, we could implement things like a Gluster cluster implementation. Now that we have the client endpoints, as long as that Gluster cluster can have endpoints that are of the same type as Ceph's, like a block endpoint or a filesystem endpoint or something, then the benchmarks should be able to use it, and we can do direct comparisons of Gluster and Ceph with almost the same YAML file. That would be a really interesting thing.
A: I think, if we could get that working, you know, other things could be Swift, or even, like, a local cluster class that might be used for things like FIO with a local object store, or FIO directly on disks. We have a benchmark that sort of does that already, but it's kind of a one-off FIO thing; it'd be nice to have this go through that client endpoint framework that we now have. So those are things I've been thinking about a little bit as potentially useful.
A: There's also potentially doing some other kind of metric analysis, or setting things like readahead on particular block devices, which CBT can do when you have it deploy the cluster. So maybe it would be nice if, at least on the monitoring side, we could make it so that CBT had some knowledge about how the cluster was deployed and could still make use of some of those features.
B: So, when you're running CBT using teuthology, we are already using the use-existing thing, but there is a bit of spoon-feeding that we need to do just to give CBT the cluster section of the YAML. Maybe making that more like an auto-discovery, so that if there is a use-existing flag set, CBT goes and discovers a few things automatically, would be useful in these scenarios, I think, yeah.
A: Yeah, one of the things that is really irritating right now about that cluster section is that a lot of information from the ceph.conf file gets duplicated. So it'd be interesting if it could use either the conf file, or command-line tools, or something else to try to auto-discover a bunch of information about the cluster, whether it's use-existing or, you know, even if it has set up everything itself; if it can read that from somewhere, yeah.
A: Okay, yeah, that sounds good. So maybe a good aim for some of this is to make it so that the whole cluster section gets rearranged: instead of just putting a bunch of stuff in there, maybe you define what kinds of clusters you want to create, and then we revamp the Ceph portion of that to do a bunch of auto-discovery, and then, potentially, you know, maybe you could also target different Ceph deployment mechanisms.
A: So, kind of tangential to that, but sort of related: Sage had mentioned being interested in having CBT be able to use containers. You know, I've done a little bit with Kubernetes; it kind of terrifies me a little bit, it's really big. But I was looking a little bit at minikube, and that seemed a little bit more reasonable.
A: So, anyway, yeah, that's kind of, I guess, more or less it for now. I'm hoping to rope Orlando (he's not here today) into maybe doing some UI work and result parsing work; that's kind of the other big piece that we've been needing for a long time. With those, hopefully it would get to a state where I think more folks would be able to use it.
A: Oh, there's Orlando now, actually. Hey, Orlando, I just mentioned you; I was hoping to rope you in, maybe, on working on some of the CBT graphing pieces.
A: So that was a branch I had been working on several months ago, maybe even more like six or seven months ago, and then I got distracted with other stuff. But that was kind of a first attempt at exploring some of those ideas. That branch is actually pretty big, I think; it's got a lot of other stuff in it that needs to be split out.
A: I think some of these other things we've been talking about here, regarding cluster definitions and some of the first ideas for client endpoint testing, were in there too. So, yeah, my guess is that it's probably a kind of cluttered branch, but some of it might have some of the ideas for the graphing and result parsing.
A: So, yeah, the idea, I think, at least in my mind, would be that we've got these hash-encoded directory structures, and we can just build indexes for them and then have some kind of query engine for being able to quickly extract, you know, performance results, or other info and metrics that we want.
A: Cool, all right, I think that's basically all I've got on the CBT side.
A: This one, I guess, will be pretty quick. With all the changes that we're making with RocksDB, we are going to need to think really carefully about how we are laying data out across column families. Adam's work is going to end up resulting in a column family per shard; when you layer my avoiding-double-caching work on top, it adds that as another piece of the path, so it's within a given column family and not creating any additional ones, I think.
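A toy sketch of the key-layout idea as described above (the prefix scheme is invented for illustration): sharding selects the column family, while the further distinction from the caching work is encoded as an extra key-path prefix inside that family rather than as more families.

```python
def place_key(obj_hash: int, kind: str, key: bytes, num_shards: int = 8):
    """Route a key: the shard picks the column family; 'kind' becomes a
    key prefix within that family instead of its own column family."""
    cf = f"shard-{obj_hash % num_shards}"   # one column family per shard
    prefixed = kind.encode() + b"/" + key   # extra path piece inside the CF
    return cf, prefixed

print(place_key(42, "onode", b"obj123"))  # ('shard-2', b'onode/obj123')
```

Keeping the second dimension as a prefix means the number of column families stays bounded by the shard count, which matters because each column family carries its own memtable and compaction overhead.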
A: But, yeah, related to that: Adam is testing my change on top of his changes, so hopefully we'll find out soon how that works. Adam's changes seem to do a really good job of reducing write amplification and compaction, and my stuff, it seems, is doing a good job of avoiding the double caching, which results in, you know, pretty significant performance improvements when faced with more data in the working set than fits in cache, especially more onode data. So, yeah, if those both can end up getting in for Octopus (I don't know if they will or not), I think that would be a really big win for us. So, yeah, that's basically all I've got, guys. Anything else for this week? Anyone have anything they want to bring up or talk about?