From YouTube: Ceph Performance Meeting 2020-08-06
A: So I saw two new PRs this week, both from Jianpeng Ma. He has actually submitted a couple of additional ones that weren't performance PRs, so he's been kind of on fire this week. One of these is for avoiding flushing too much data at once in the BlueRocksEnv.

A: I think that's the right change to make. It doesn't look like Kefu is here, but the gist of it is that he saw, specifically with bluefs_buffered_io enabled (which we don't do anymore), that we were building up this giant amount of data to flush by calling append over and over again, and when we did eventually flush, it was one big flush event that caused a big latency spike. It was actually building up quite a bit beyond what our bluefs_min_flush_size is set to. So his fix for this is basically to try to flush every time we call append. We already check in flush whether we're smaller than bluefs_min_flush_size, and we don't do anything if we're smaller than that, so I think his change is right.

A: This will let us stay closer to bluefs_min_flush_size when we're doing these appends, and if it turns out that we want those flushes to be larger, we'll much more consistently be able to keep the flushes at that min flush size by making this change, regardless of what we want that flush size to be. So I think this is the right change to make, and it got approved.
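For illustration, the flush-on-append idea amounts to something like the sketch below. This is a hypothetical reconstruction from the discussion, not the actual patch; bluefs_min_flush_size is a real config option, but the Writer type and its methods here are made up.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Writer {
  std::vector<char> pending;             // appended but not yet flushed
  uint64_t min_flush_size = 512 * 1024;  // stands in for bluefs_min_flush_size

  void write_to_device(const std::vector<char>&) { /* submit the I/O */ }

  void flush() {
    // flush() already returns early when less than min_flush_size is
    // buffered, so calling it on every append is cheap in the common case.
    if (pending.size() < min_flush_size)
      return;
    write_to_device(pending);
    pending.clear();
  }

  void append(const char* data, size_t len) {
    pending.insert(pending.end(), data, data + len);
    // The fix under discussion: opportunistically flush on every append,
    // so buffered data stays near min_flush_size instead of accumulating
    // into one giant flush (and a latency spike) later.
    flush();
  }
};
```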
A: I should put "Mark approves" on here, actually. And then there was another PR, a little bit more complicated, for reducing bufferlist rebuilds in BlueFS. On that one I think I put you and Igor down along with me as potential reviewers, Radek, if you want to take a look.

B: Just taking a look right now. I'm not sure if... well, even if that's the case, I'm not sure whether that's the right approach to attack. Maybe we should fix the page alignment in append instead; I don't know, I've just started looking at that. Okay.

A: Very good. All right, let's see what else.
A: Two updated PRs, both in RGW: the D3N cache changes. Both Matt and Mark Kogan are reviewing that.

A: And then there's also an update, just a request for review from Casey, on this ordered list map efficiency PR from Eric. And then we've got a whole bunch of stuff in the no-movement category: still a couple from Jianpeng that we need to review, but I just don't think anyone has had time to really dig into them.

A: It'd be good if we could figure out what's wrong with Igor's memory reduction PR, and then I still need to work on my double-cache one. But that's about it as far as I can tell. Any PRs I missed here this week, guys?
C: Not a PR, but you mentioned the bluefs_buffered_io change, and there was a report on the mailing list this morning of some folks running into a degradation with that off during snapshot trimming. In particular, they were seeing lots more I/O to disk than they were when it was turned on.

A: Yeah, I mean, we turned it on for a reason back in the day; we were seeing better performance with it on, which is, you know, kind of why we originally turned it on. But with what we were seeing, like that horrible behavior with RGW causing the OSDs to... making the kernel start swapping out OSD memory... I don't know.
C: Yeah, I agree; it's not really feasible to turn it back on and let us potentially swap. But I wonder if there's something we could do for the snapshot trimming case to allow it to use the cache more easily. I'm not exactly sure what's going on there.

A: If they're seeing a lot more writes, I guess one of the questions would be: are they random writes or are they contiguous? I would assume they're probably random, but maybe not fully random, so maybe buffering...

A: So I wonder, then, if they turned on BlueStore buffered writes, but basically changed it so that we cache the data buffers on write instead of just on read, whether that would help them or not.

C: Maybe. I'm not sure if the snapshot trimming would be cached there, like whether that would have an effect.
A: Yeah, I don't have any idea. How does that work? What data path through the object store does it go through?

C: I think it ends up going through updating the metadata about the object, that is, the onode, and after updating that, updating some entries there for the object metadata about clones, in addition to removing the actual snapshot object itself.

A: Yeah. All right, Adam, I might be bugging you soon about how to best integrate my stuff with your stuff, and getting it into your thing that, like, updates the OSDs for your changes.

A: All right, anything else on that? Josh?
A: All right, I have a quick update on the work Radek and I have been doing on the ring buffer for bufferlist appends, and just in general. It's all working fairly well; the ring buffer works.

A: We also started working on making it so that when you perform appends, the append buffer will grow as it gets re-allocated over and over again. Previously we were seeing that the default size of 4K was not enough in a lot of cases, causing a lot of excessive work. By making that dynamic, growing kind of like vector does, it can greatly improve the performance of appends when they're relatively large, and, in our benchmarks at the very least, it seems to improve append performance dramatically. Here, I'll share the link for that; it's in the Etherpad, but it's here on the first slide. Unfortunately, in the MDS, which is where I was hoping to see this really help, it does seem to eliminate the extra work tcmalloc is doing; like, it looks way better.
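As a sketch of the growth strategy being described (a hypothetical class, not the actual bufferlist code): instead of re-allocating a fixed 4 KiB append buffer over and over, let the buffer grow geometrically, the way std::vector does, so allocator traffic is amortized.

```cpp
#include <cstddef>
#include <cstring>
#include <memory>

class AppendBuffer {
  size_t cap = 4096;  // the old fixed default size
  size_t len = 0;
  std::unique_ptr<char[]> buf;

public:
  AppendBuffer() : buf(new char[cap]) {}

  void append(const char* data, size_t n) {
    if (len + n > cap) {
      // Geometric growth keeps the number of reallocations (and the
      // memcpy/allocator traffic they cause) logarithmic in total bytes,
      // instead of one reallocation per ~4 KiB appended.
      size_t new_cap = cap;
      while (len + n > new_cap) new_cap *= 2;
      std::unique_ptr<char[]> bigger(new char[new_cap]);
      std::memcpy(bigger.get(), buf.get(), len);
      buf = std::move(bigger);
      cap = new_cap;
    }
    std::memcpy(buf.get() + len, data, n);
    len += n;
  }

  size_t size() const { return len; }
};
```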
A: The problem is that it doesn't actually seem to be speeding the MDS up, as far as I can tell. Now, all of a sudden, other areas of that thread... the MD submit thread, I believe; no, sorry, a different one. Well, anyway, basically the points of contention in the MDS just changed a little bit. That thread is still working very, very hard, and we're still seeing a lot of contention in the MDS. So this seems to be a good change, but it's not showing up in perf so far.

A: It doesn't seem like it's achieving any measurable benefit in terms of raw performance, which surprises me, but that's where we're at right now with it. So I still have work to do to try to understand this more, but at least for now the only clear performance advantage we're seeing is in the append benchmarks, not in the actual tests that we're running. Any questions on that?
B: Yes, and there is... yes, there is definitely a lot of contention involved; I see it in the traces. So, you know, there's other stuff; this is not the only problem. I was just hoping it would be.
A: Yeah, I know, me too. I still think it's a good change. Just based on what we're seeing, it seems like the dynamic append length change is good, because I don't see any cases where it's really that bad, other than, I guess, that it could use a little bit more memory; so there's that. But combined with the ring buffer, everything looks to me like it's basically better, and tcmalloc does not show up in my traces anymore.

A: Yeah, or maybe it's even making it faster at processing requests. I should see if there's some way I can track in the MDS how many journal entries per second or something it's cycling through; maybe that'd be another way to do it. But yeah, anyway, more work is needed one way or another. That's it for now.
F: It basically all comes down to an intersection of two factors. One is that when compressed objects are written over, their metadata gets very long, so only a few objects can have their metadata cached in the OSD at any time.

F: That's one part, and the second is that for testing I used just random writes with a basically uniform distribution. The intersection of these two factors sometimes produced very strange performance results, which took us some time to understand, but it seems to boil down to the fact that if all the metadata for my data set could fit in the cache, then I got high performance; but when at least some of it couldn't fit, the performance drop was dramatic, like four- or five-fold.

F: This is because recovering compression metadata takes a lot of time. This is, of course, BlueStore metadata, but it's more complicated when we're running compression, to the degree that it's something like 20 times more time-consuming than actually performing the op itself. Fetching also means that we have to drop some other element from the cache, and with everything factored in, it is that 20 times more costly. So that's the background, and based on that I made this document, which basically tells one thing.

F: Uniform distribution is not good for testing performance in any way; I mean, not even close. Having even a triangular distribution, just, you know, a triangular distribution of data, significantly softens the steep changes in performance, for example when the cache grows past some size. So that's it, basically. All I can do is recommend using uniform random writes only for testing whether software works properly; that's the only case. Even for crude performance evaluations I don't think it can be used, because, as with compression for example, it can totally circumvent any caches; it's like they were not even there. So yeah, that's all.
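A rough model of that four- to five-fold drop (assuming the quoted 20x miss penalty applies uniformly): with a base per-op cost $c$ and a metadata cache hit rate $h$, the expected per-op cost is

$$\mathbb{E}[\text{cost}] = h\,c + (1-h)(c + 20c) = c\bigl(1 + 20(1-h)\bigr),$$

so even $h = 0.8$ already gives $5c$, a five-fold slowdown; and under a uniform access distribution, $h$ falls off quickly once the data set outgrows the cache.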
A: Adam, do you think that's the case even if you're restricting the size of the data set that you're testing? Because it seems to me that it's valid to test a random workload up to a certain data size, say if you had a random workload on some database or something.

F: What I don't believe is that there are workloads that have a uniform distribution; I just cannot imagine any system behaving that way. Unless you have just a log that continuously writes some, I don't know, monitoring data that is being trimmed down to the disk and never actually accessed.

A: Sure, Josh, but over time you'd expect that if you have enough...

F: This part is correct: if you have so many different points of access that even your large cache cannot reasonably be used more than once for every element, then you just have thrashing and you can do nothing about that. But still, that would be a huge amount of concurrent and unrelated clients on the same cluster.
C: Yeah, I think even with the random database example you'd still see some points of commonality: the journal of the database would be in similar objects, and the indexes would be read quite a bit.

C: Yeah, I guess maybe what you're getting at is that if you have such a large number of clients with different distributions that maybe one OSD is only seeing, like, a single I/O from each client, at that point it does more resemble a pure random workload, a more even distribution from that OSD's perspective. But you'd still see some bunching on some OSDs.

C: I think it's worth thinking about what kinds of distributions would more closely model real-world workloads. I think in the past you've used those Zipf distributions with fio, Mark, to test cache hit rates.
A: Yeah, exactly, yep; we've used variations of different... oh, what are they called? So, yes, we can do more of those kinds of tests, and that will, you know, make us look better, right, to varying degrees, depending on how much the distribution favors what we keep in cache.
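For reference, the Zipf-style distributions mentioned above concentrate accesses on a few hot blocks: with offsets ranked by popularity, rank $k$ out of $N$ is drawn with probability

$$P(k) \propto k^{-\theta}, \qquad k = 1, \dots, N,$$

so for $\theta$ near 1 a small fraction of the blocks absorbs most of the I/O, which is what lets a bounded cache sustain a high hit rate. fio exposes this as random_distribution=zipf:<theta>.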
C: Yeah, and I think it depends on what the purpose of the test is. If you're trying to isolate the behavior of one particular part of the system, then maybe the even distribution is okay; if you're trying to see what performance a real-world user would see, perhaps we want to use a different distribution.

C: Although for something like a performance regression test, I'm not so much worried about whether the corner case is what a real user would see, right?

C: Oh, it could be, it could be. I mean, if the user isn't seeing any degradation in practice, it's not always as bad a problem to solve as a problem that we would only see if we had an unrealistically low cache size, for example. Sure, sure.

C: All right; I mean, is it worth adding, or do we already have, perf counters to track things like cache hit rates and miss rates for onodes?

F: It was 12 gigabytes for 100 gigabytes of compressed data.
A: And did you try looking at cache miss probability in the... oh, I see. Okay, the A, B, C and D in the probability case are the same A, B and C as in the distributions, right?

F: Yes, but please note that the graphs you see there are just analytical computations. They are derived from actual runs, but only to get the model of the cost of one operation; that is what has actually been measured. The graphs here only represent what level of degradation you will see when your data does not fit the cache, in relation to some example data distributions.
A: So, I'm trying to take all of this and think about it back in terms of compression. When you started looking at compression with a Zipf-type distribution, where you don't have, you know, the same kind of random I/O across the whole disk, did we do okay then?

F: Sorry, what was that, Adam? I'm thinking that double caching of onodes, both in the compressed metadata and in RocksDB, will not give any significant results if our access pattern is uniform; it will just behave strangely.

F: Right; it will be, I guess, a balance between how many more elements to store in compressed form and how much will still be in the uncompressed metadata, already ready to use, and whether we can properly strike that balance to actually get...
A: Which is strange, right? Because you'd think that, with all of our encoding and compression and that kind of thing, having that secondary tier of caching might be beneficial. But it must not be; we must not compress enough, or perhaps, because it's a block cache, we're loading a lot of meaningless other stuff into the cache. One way or another, it doesn't actually seem to be doing much to benefit us.

A: Yep, and this is the entire premise behind the whole priority cache scheme, where we can basically say that in RocksDB, with very high priority, we want to cache things like indexes and filters, and potentially onode data or other things that are really important to us; and in the BlueStore cache, the onode cache, we want to cache recent onodes with very high priority. And we can start making more decisions and trade-offs about what's important to keep in memory and what's not. That's kind of the whole idea behind that whole system.
A: That's the first step. The second step will be to actually finish implementing the age binning for the LRU caches, because then we can start making comparisons, saying we have, like, 2,000 items in the onode cache that are less than 5 seconds old, and we have 50 items in the RocksDB block cache for omap that are this old; how do we balance the memory requirements for those two things? And then we now have a second RocksDB block cache for BlueStore onode data that we are fetching from RocksDB, as opposed to the BlueStore cache.

A: How do we want to balance that against these other caches? We can then have this big multi-cache memory management scheme for deciding which things get memory, based on ages in the cache and how important different things are.
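A minimal sketch of the age-binning comparison being described, with hypothetical types (the real mechanism is Ceph's PriorityCache, which works differently in detail): each cache reports bytes held per age bin, and a balancer grants a shared memory budget youngest-first across all caches.

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

struct AgeBinnedCache {
  std::string name;
  // age bin upper bound in seconds -> bytes cached in that bin
  std::map<uint32_t, uint64_t> bytes_by_age;
};

// Grant a shared memory budget youngest-first across all caches.
std::map<std::string, uint64_t>
balance(const std::vector<AgeBinnedCache>& caches, uint64_t budget) {
  std::map<std::string, uint64_t> grant;
  // Collect every age bin that appears anywhere, in increasing age.
  std::vector<uint32_t> bins;
  for (const auto& c : caches)
    for (const auto& [age, bytes] : c.bytes_by_age)
      bins.push_back(age);
  std::sort(bins.begin(), bins.end());
  bins.erase(std::unique(bins.begin(), bins.end()), bins.end());
  // Youngest bins win: 5-second-old onodes in one cache compete directly
  // with 5-second-old omap blocks in another.
  for (uint32_t age : bins) {
    for (const auto& c : caches) {
      auto it = c.bytes_by_age.find(age);
      if (it == c.bytes_by_age.end())
        continue;
      uint64_t granted = std::min(it->second, budget);
      grant[c.name] += granted;
      budget -= granted;
      if (budget == 0)
        return grant;
    }
  }
  return grant;
}
```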
F: Yeah, I love the idea, but I don't know; it might be tricky in the implementation department. What I'm most concerned about is the fact that getting the onode data for those compressed objects takes so much time, so much CPU, that it could be difficult to actually offset with any other caches.

A: The honest truth is that we probably want to avoid caching onodes at the block cache level at all, is my guess. You know, we have to do it to some extent, probably. Maybe we could just turn the block cache off, but... well, no, we don't want to do that; we probably want the block cache to have the indexes and filters for RocksDB in it. We just don't want to do much with onodes in it, probably.
F: This is good, but I can easily imagine a scenario where we just go a bit over the onode cache size and basically we are continuously missing onodes, just like with a uniform distribution.

F: Yes; then we will almost always have a miss in the onode cache, and at that point making it really small doesn't really change much, because you're already on that very flat plateau of always missing. Instead we could spend that memory actually caching the other stuff, so the block cache for key-value data. I mean, I'm not sure; this might also be a sweet spot of cache balance.
A: So what I could see possibly working in the scenario that you just gave is if we were able to say: we have something in the BlueStore cache, so don't store it in the RocksDB cache; you know, don't bother keeping this block in cache, at least for this particular value, because we're keeping it in the BlueStore cache. But when we invalidate it in the BlueStore cache, then it is something that we do want to keep in the RocksDB cache.
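A minimal sketch of that exclusive-caching idea, with hypothetical types (not Ceph's actual cache code): a value lives in exactly one tier, and eviction from the fast tier demotes it to the slow tier instead of dropping it.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

struct TwoTierCache {
  std::unordered_map<uint64_t, std::string> fast;  // e.g. BlueStore onode cache
  std::unordered_map<uint64_t, std::string> slow;  // e.g. RocksDB block cache

  void insert(uint64_t key, std::string val) {
    slow.erase(key);             // don't double-cache: the fast tier owns it now
    fast[key] = std::move(val);
  }

  void evict_from_fast(uint64_t key) {
    auto it = fast.find(key);
    if (it == fast.end())
      return;
    // Demote instead of drop: the value stays warm in the slower tier, so a
    // later access misses the onode cache but still avoids a disk read.
    slow[key] = std::move(it->second);
    fast.erase(it);
  }
};
```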
A: Sure. So the assertion I'll make is that reading onodes from the RocksDB cache is far, far slower than reading them from the onode cache, which would kind of...

A: If you had a hierarchical cache like we're trying to build, you would hope that being able to hit enough stuff in the RocksDB cache to avoid a disk read would make up for the fact that it's so much slower than reading from the onode cache. But that doesn't seem to be the case. Especially with NVMe drives, reading from the block cache doesn't seem to be all that much faster than just reading directly from NVMe, whereas reading from the BlueStore onode cache directly is significantly faster.
F: Well, in our case this is not even the same data: we have different things in the onode cache, different things we could have in the KV cache, and different things in the block cache in RocksDB. So...

A: Because you're going through and basically loading all this stuff into the cache to do your compaction, and then a lot of it gets invalidated before you re-read it back in. I remember seeing that when I was going through doing a lot of the testing.
A: That reminds me: this is one of the reasons why I was so gung-ho, back like a year ago, on trying to make an omap cache in BlueStore alongside our own caches: do that at the BlueStore level, and then have the RocksDB block cache be very small, only for, you know, indexes and filters and a small amount of data for compaction, to keep, you know, SST files in the cache while doing a compaction event.
C: Well, we covered a lot of different areas, I guess, but maybe going back a little bit to the performance regression testing: it sounds like another metric that we could be tracking there is how many cache misses are happening, or what our cache hit rate is during the test, to see if that is one of the factors causing a difference in performance.

A: Yeah, that would be a really good idea, because we are so sensitive to onode cache misses that that is oftentimes kind of a determining factor in the performance we see.
A: My guess is that, unless we're picking a large data set per OSD, most of our existing tests are probably all fitting inside the cache. That might not be true if you have a really heavy omap workload, but for an RBD test against a data set smaller than, like, 128 gigabytes, I believe it should typically fit into a standard four-gigabyte OSD memory target.
C: And do you remember how we're limiting the runtime there? Is it just running the benchmark for a certain amount of time, or a certain...

D: I think 300 seconds is the time that we use at the moment.
A: So, 300 seconds of 4K writes. I am going to hazard a guess that we are not doing that fast enough to blow past the...

D: So one quick question: we talked about the cache hit and miss metrics; is that something we can expose using CBT? Then these tests could easily capture it whenever they run, and we can compare whether 300 seconds is enough.
A: Oh yeah, absolutely. All someone would need to do is write something into the monitoring to periodically grab the, you know, perf counters that we want to write out.

F: No, sorry, I had a bad case of latency there. Oh, okay.

A: Yeah, Neha, I think all it would take would be someone writing a monitor, you know, module or whatever; I forget how we're doing it now. I think someone was going in there and improving...
D: Radek, do you remember? There was somebody who was working on your stuff, adding more metrics to CBT.

D: Okay, can we check with him whether it's something he would be interested in adding to CBT?

B: Well, I talked with him yesterday. The main idea is about using CBT to monitor Crimson and gather some Seastar metrics from it, but I believe we can do that in a way that could be reused in other scenarios as well. As far as I understood, it all boils down to just collecting some perf counters from Ceph during a CBT run, right? Exactly; that sounds doable.
A: Radek, I think there's a bigger discussion about triggering monitoring down the road, so that some event can trigger, you know, certain monitoring. But for now I'd say: let's just, you know, start it at the start of a test and end it at the end of a test.

A: Well, maybe all of the OSDs, or maybe you specify which OSDs you want, and then it does that.

C: And now that the admin socket commands are unified with the tell interface, you don't have to be on the local node to do that; you can just go to the monitor and collect from whatever OSD you want.
A: We may want to grab it directly from the daemons, and then at the end, you know, grab those results and pull them back into the results directory, simply to not, you know, require the extra overhead of going through the monitor. Sure.

A: Well, in any event, we are past the end of the meeting time here. Good discussion; very interesting. Adam, thank you for presenting on your work.