From YouTube: 2019-06-06 :: Ceph Performance meeting
A
And I am not hearing back from Sage, so for now I guess we'll assume he can't make it. Alright. So, let's see, welcome back everybody from Cephalocon. There was a lot of really good discussion that happened there, both in conversations I was involved in and, from what I've been hearing since, in lots of other things that were going on too, so I'm hoping lots of good work will come out of that.

But let's start out here with some of these pull requests that have been happening in the background.
This PR to give recovery ops initiated by client ops a higher priority: Sage approved that. I don't actually know that it was extensively tested yet, which would be good to find out, to make sure it actually is doing what it claims to do here. I guess the idea is that you don't end up blocking client ops when you're trying to limit the total bandwidth available for recovery. Anyway, okay, so yeah, it probably needs testing from somebody.
Jason had what looked like a nice little PR for the async messenger that avoids syscalls for outbound messages; it helps primarily with small ops. It looked like maybe a ten to fifteen percent performance increase with no decrease for large ops, so that was exciting. I always like to see messenger improvements like that. Nice one.
There's this garbage collection PR from Igor for avoiding excessive blob count growth in BlueStore. I don't know if this actually changes performance yet, but potentially it might.
Given some of the other performance improvements that we've been seeing in librbd, specifically around some of the work that Jason's done with the I/O scheduling and, not exactly the cache, but just some of those other tweaks, I'm very curious what the aggregate differences are there, because this looks huge. So that's good.
"osd: load PGs can be improved from single thread": that did not get merged. The author of that PR said that his company was no longer willing to make the changes open source, or release them as open source, so he has withdrawn that pull request. So anyway, that's that. We've got a BlueStore PR here for supporting prefetch in buffered read mode; I think that's Igor's.
Rita had stuff that he did that must have shown some benefit beyond just the upgrade, upgrading RocksDB to get the adaptive readahead, so Sage just merged that, and yeah, it looked like it improved things pretty dramatically. So that's fantastic. And then there was this old PR from Jianpeng Ma for batching handle send message that was just languishing; I think he closed it. So that's about it for that stuff!
My PR for avoiding double caching of onodes in BlueStore: that's do-not-merge for now. We decided that we're going to redo that on top of Adam's sharding work, just because mine is going to hurt his stuff probably more than his stuff hurts mine, and that will also then make it so that his rollback and upgrade tooling can support my changes as well. So I think that will be good.
There are some updates on the io_uring work. Roman went back and fixed the locking in it and made some other updates. Apparently it works now; it wasn't working before Cephalocon, but apparently it works now, but it is slower. So it may not show the same performance gains that it was showing previously. I need to go back and retest that, I haven't done so yet, to see whether the baseline is now faster or slower than it.
But in general there's still a bigger question of whether or not the way we're implementing it, kind of abusing our existing aio code, is really the right way to do this either. We might be able to improve this if we split KernelDevice out into separate aio and io_uring implementations.
This one from Jianpeng Ma has been updated; it just needs to be rebased, no other movement on that. This one from Roman, about polling events from user space: I hadn't looked that closely at it, but when we did a review, we noted that support for that is not even in the kernel yet, so that's very experimental. That's probably going to be a while before it lands in master. Auto-tuning of the MDS cache: that work continues to go on. Patrick's reviewed it, I reviewed it, some changes are being made.
That's good! I'd like to talk more about auto-tuning of everything in general, possibly later, but we'll see if we have time. And then, okay, there's a PR here from Igor to make the auto-tuning more aggressive on startup. I looked at that a while back and was not totally sure it made sense, but I need to review it again; I owe Igor a review on that and have owed it to him for a while, so I'm going to try to get to that soon. So that's about it! No movement on a bunch of other stuff.
D
Testing it in our CI, that's the plan. Probably we could also apply it to Ceph itself. Basically, the idea is to run performance tests when a PR is posted and it's labeled with certain labels; for example, it could be labeled with "needs performance tests", something like that. With such a label we will pull the PR and run a very, very basic set of smoke tests to profile the change, and compare the result with a stored baseline.

You know, we run the CBT tests periodically as well, so we can compare the result with the baseline from those runs to find out if the PR could incur some significant performance regression. If that's the case, the checker will vote against the PR; otherwise it will show a thumbs-up. That's the plan. Okay, currently I'm pretty...
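(A minimal sketch of the kind of PR smoke-test gate being described here; the result format, the 10% threshold, and the script itself are assumptions for illustration, not an existing Ceph CI job:)

```python
#!/usr/bin/env python3
"""Hypothetical PR smoke-test gate: compare a benchmark result against a stored baseline.

Assumptions (not an existing Ceph/CBT tool): results are JSON files containing an
"iops" figure per workload, and a regression worse than THRESHOLD fails the check.
"""
import json
import sys

THRESHOLD = 0.10  # assumed: flag regressions larger than 10%

def load(path):
    with open(path) as f:
        return json.load(f)  # e.g. {"4k_randwrite": {"iops": 52000}, ...}

def main(baseline_path, pr_path):
    baseline, pr = load(baseline_path), load(pr_path)
    failed = False
    for workload, base in baseline.items():
        new = pr.get(workload)
        if new is None:
            continue  # workload not run against this PR
        delta = (new["iops"] - base["iops"]) / base["iops"]
        print(f"{workload}: {delta:+.1%}")
        if delta < -THRESHOLD:
            failed = True
    # A non-zero exit lets the CI job vote the PR down; zero means thumbs-up.
    sys.exit(1 if failed else 0)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```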
A
Go on... oh okay, yeah. Obviously this is a fantastic topic to bring up. So I had a conversation with Alfredo at Cephalocon about the exact same thing, and what you just described sounds a lot like what we were thinking about: specifically, being able to tag pull requests with specific tags, or to look at existing tags and say, okay, we want to run certain kinds of benchmarks based on what tags these have, and see if there's a difference versus master.
D
Also, I'm thinking of building it as more like an extension of the "make check" test. It won't be a part of make check for sure, but it will be an extension of it. With that automated machinery we will be able to, for example, bisect to find out the offending commit. If we fail to catch it when it was a PR, we can catch it afterwards, once the commit is in the tree, yeah.
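(A rough sketch of the kind of automated bisection mentioned here, assuming a separate build-and-benchmark script, hypothetical name, that exits non-zero on a regression, as in the sketch above:)

```python
#!/usr/bin/env python3
"""Hypothetical wrapper that uses `git bisect run` to find a perf-regressing commit.

Assumes a ./build_and_bench.sh script exists that builds the tree, runs the smoke
benchmark, and exits non-zero when it detects a regression.
"""
import subprocess

def bisect_regression(good_sha, bad_sha, repo="."):
    run = lambda *args: subprocess.run(["git", "-C", repo, *args], check=True)
    run("bisect", "start", bad_sha, good_sha)
    try:
        # git re-runs the command on each candidate commit; its exit code
        # (0 = good, non-zero = bad) drives the bisection.
        run("bisect", "run", "./build_and_bench.sh")
        log = subprocess.run(["git", "-C", repo, "bisect", "log"],
                             capture_output=True, text=True, check=True)
        print(log.stdout)
    finally:
        run("bisect", "reset")

if __name__ == "__main__":
    bisect_regression("v14.2.1", "HEAD")
```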
A
Having something like this would be fantastic, to avoid having to do that quite as often, yeah. So Alfredo right now is on PTO for several weeks, but he did express a lot of interest in potentially working on something like this. Alfredo, or sorry, Kefu: how quickly do you need this kind of capability?
D
Currently it involves a couple of steps. The first, most important thing: the whole idea is to improve the performance, or to keep the performance, of the crimson OSD, to prevent it from degrading. So the first thing is to package the crimson OSD, and the next thing is to pick some very typical test which can represent the performance of the crimson OSD. I think it will be rados bench, because we don't support RBD tests now, and we don't support fio tests for sure. And the next thing is to find some representative test suite, or tests, so we can use the result to compare them with the ones we run in CBT. I think that's pretty much it. And I'm not sure if sepia or teuthology could offer APIs so that we can query a result for a given commit with a certain set of parameters, but even if we cannot do that...
We would need to poll the possible commits and try our luck, because we have no API to query for a certain combination of parameters for a given commit; we just have to try our luck, because not every commit is tested using CBT, right? That's a pain without the API, yeah.
A
Yeah, yep. So anyway, this is what I was talking about a little bit in the email thread that we had going where, you know, if we had some kind of mechanism for taking the result directory that we can produce... So right now in CBT we have kind of this concept where every single result directory is a hash of all of the parameters that were used to create a particular result, and from that, kind of, the idea...
The long-term idea was that we would create maybe an SQLite index of these things and be able to have, like, a query interface in front of it, so you can say: show me all the results over time, or all the results from this particular run that were 4K, or 4K reads, or 4K random reads, or, you know, whatever. And the parameter space is absolutely huge.
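(A minimal sketch of what such an SQLite index and query interface over hashed result directories could look like; the params.json/results.json file names and the schema are illustrative assumptions, not CBT's actual layout:)

```python
#!/usr/bin/env python3
"""Hypothetical indexer for CBT-style result directories.

Assumes each result directory (named by a hash of its parameters) contains a
params.json describing the run and a results.json with the numbers; the file
names and fields are illustrative, not an actual CBT convention.
"""
import json
import sqlite3
from pathlib import Path

def build_index(results_root, db_path="results.db"):
    db = sqlite3.connect(db_path)
    db.execute("""CREATE TABLE IF NOT EXISTS results
                  (run_hash TEXT, benchmark TEXT, op_size INTEGER,
                   mode TEXT, iops REAL, finished TEXT)""")
    for run_dir in Path(results_root).iterdir():
        if not run_dir.is_dir():
            continue
        params = json.loads((run_dir / "params.json").read_text())
        result = json.loads((run_dir / "results.json").read_text())
        db.execute("INSERT INTO results VALUES (?, ?, ?, ?, ?, ?)",
                   (run_dir.name, params["benchmark"], params["op_size"],
                    params["mode"], result["iops"], result["finished"]))
    db.commit()
    return db

def query_4k_reads(db):
    # "Show me all the 4K read results over time."
    return db.execute("""SELECT finished, iops FROM results
                         WHERE op_size = 4096 AND mode = 'read'
                         ORDER BY finished""").fetchall()

if __name__ == "__main__":
    db = build_index("archive/")
    for when, iops in query_4k_reads(db):
        print(when, iops)
```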
Here, talking about what Kefu described, where we have tests that get run on PR submissions and we are adding results periodically into some kind of, you know, small database, or even multiple databases (it wouldn't necessarily have to be centralized), but then make this queryable from somewhere, and be able to look through things and compare things: that could make sense. Yeah.
If we wanted to, in CBT's output, when it, you know, dumps the results, we could have some of that as metadata associated with the result, regardless of whether it's through our system or someone running it locally. You know, we could pull that out potentially, so we could... I don't...
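(A sketch of the kind of run metadata CBT could dump alongside a result so that CI runs and local runs stay comparable; the field names and the metadata.json file are illustrative, not an existing CBT format:)

```python
import json
import platform
import subprocess
import time

def dump_run_metadata(out_dir, params):
    """Write a small metadata file next to the benchmark results (illustrative only)."""
    sha = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True).stdout.strip()
    meta = {
        "ceph_sha": sha,                      # which build was benchmarked
        "host": platform.node(),              # lab node vs. developer box
        "kernel": platform.release(),
        "finished": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "params": params,                     # the same dict that names the result dir
    }
    with open(f"{out_dir}/metadata.json", "w") as f:
        json.dump(meta, f, indent=2)
```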
A
Now, maybe, but if we could get this running on dedicated hardware, I think it would be much better: hardware, specifically, that doesn't have, like, software updates that happen randomly. You know, if we control those on a scheduled timing, independently of the rest of the sepia lab, I think that would be very good. So kind of the thought I had was that, once we get the new nodes in the lab, I personally think those are most useful in the hands of developers.
They're still fast enough that they showcase differences in code pretty well. They may not hit everything, and they may not quite test everything that crimson will eventually be, but for right now I think they would serve pretty well. Any thoughts on that, Kefu, or anyone else?
G
I think this came up earlier as well, and it does come up again: we need to fix the spec of the hardware so that we have valid baselines to compare against, because if you're trying to compare master on machine A and master on machine B, there are definitely going to be different results. So we need to fix what that set of machines looks like, and I think repurposing the incerta nodes is a good idea; just, you know, renaming them and making them accessible using teuthology and stuff will probably help.
I
For someone to actually go and test it, for the ones we were going to tag... I don't know. I remember having a lot of trouble merging anything performance-related if we never had any evidence that it would help. So this would let us get fast evidence, especially for small patches.
A
One thing I also want to do with this, beyond just performance, is that I think we should run benchmarks where we are focused on collecting data rather than just looking at performance numbers. I would absolutely love to be able to go and just look at some change that somebody's made that potentially impacts performance and see a wall-clock profile without having to run it myself. That would be fantastic. So, you know, that potentially doubles the burden right there, where we want to look at a run...
...that's just focused on the performance of the result from the PR, but then also how that PR is actually changing behavior, potentially of the OSD, potentially of, you know, the client, if it's librbd or RGW or something else. The parameter space explodes really quickly, and some of the tests that we're going to want to run are ones where we're going to want to age the system, whatever gets deployed, you know, the cluster.
A number of the tests potentially are going to require actually having multiple incerta nodes set up in a real cluster, rather than just, you know, one-off single-node tests. So in terms of the usage here, we could very easily end up consuming a lot of resources for this. It's not...
A
Absolutely agreed, Sam, agreed. I'm just saying that the sky's the limit here, right? Like, we will not have any trouble utilizing these resources. So the only reason I mentioned teuthology, and whether or not to involve it, is that I don't know if it's useful or not specifically for this task of PR testing. Maybe it is; people that know teuthology better than I do might have strong opinions there. I just don't want to end up dividing these resources in a way that makes them less useful in both places.
A side note with that, with teuthology, and potentially this is... we really need to fix the scheduling system in teuthology so that it properly lets you ask for lots of nodes and eventually get them. It sounds like... maybe I'm wrong, but I thought that was still an issue where things would just time out.
A
It sounds like people generally seem to be supportive of the idea of repurposing incerta for this once we've got the new gear. Does that sound right?
A
I'm going to take the silence as acceptance of that. Sam: absolutely. So yeah, okay, let's plan on that. Once we get the new officinalis hardware in, and that's tested and set up and ready for use, then we'll start thinking about how we can take some of the incerta gear. I think I am going to want to keep one or two of the nodes maybe kind of separate for one-off testing, so that we can do kind of ongoing comparison testing with the new gear and the older gear.
We've seen evidence that different hardware behaves differently with some of the RocksDB tunings that people have tried, so it wouldn't be terrible to just have one or two of those nodes available kind of for one-off stuff. But certainly, I think, you know, six of the eight we could donate to this and make it pretty useful anyway, even seven. So, all right, good. For the new gear, the new officinalis gear...
Thinking about that, right: you know, I've been operating under the assumption that some of this new gear that we're getting, the officinalis gear, is going to have, you know, the Optane drives in it. Eventually it's going to have the DIMM versions of those and, you know, basically top-bin CPUs, NVMe drives, the whole works. My assumption has been that getting that into the hands of developers is more useful than having it...
...you know, being completely dedicated just to doing kind of the random testing that half of the incerta nodes currently are doing. I would like to get one of those into your hands, Sam; I'd like to get one into Roman's hands at SUSE, into Igor's hands, into the crimson developers' hands, people that actually can make use of this stuff. That's always been kind of my assumption, that that's most useful. Is that a good assumption? Like, would you use this, Sam, if you had one of them?
I
Specifically, for me, given that there are a finite number of them, it's not that hard for them to move in and out of the pool being used for testing, right?
A
So we can schedule them like we schedule other stuff through teuthology; we could go that route. The way that we've done it with incerta has been more like kind of long-term leases, where people tend to kind of congregate on the node that they've got and do testing there. Radek and Adam have both done quite a bit like that. I do quite a bit like that.
K
The teuthology model actually works pretty well for this because, yeah, it's a thing: they're separated in the queues by machine type. So if we don't have cron jobs scheduling things for this machine type, or we don't make it widely available for other tests, we can still use the locking framework to track who's currently got it. And, sure, yeah, I mean, the rest of the names are available. Okay.
A
So initially we were going to have ten of them. The thought I had is, on incerta, we've kind of split it halfway: we've got four that people are using in kind of a one-off fashion for doing testing and development and things, and then another four that I use quite a bit for doing just small cluster testing, for some of these more targeted performance tests, and that I also use a lot for some of my development stuff. We could stick all ten of these into teuthology and just totally go that way.
J
Essentially, you know, you need to pair them with a load generator, right? Like, for example, I mean, we could do local testing, but you could have some of the incerta nodes, maybe one or two, as, you know, load generators for our CBT test nodes, or for the new officinalis nodes, is what I was thinking.
That would be interesting to run the regressions on, and, like, you know, I think it was Neha pointing out that we need a baseline, which could be maybe a five-node baseline on the latest-gen hardware, because the previous gen, you know, the incerta nodes, actually have probably, you know, older CPUs, but they have the 3700 devices, which were actually, in some sense, at least from a write performance point of view, actually better. So it will be, you know, good to use.
A
You know, certainly. One thing that could be interesting, maybe, would be, like you were saying, with the new officinalis nodes: if we have tagged PRs in, you know, the system that are really performance-sensitive, right, like things that are coming in from crimson that we aren't expecting to be able to measure well on the older gear, that especially could be really interesting.
J
Yeah, so, I mean, we could split, you know, the two clusters, the eight-node and ten-node clusters, as maybe, you know, three or more nodes out of the incerta cluster, or maybe five nodes each, for performance regressions, and then those could also be scheduled to be used by developers. But then those could also be pulled out to do, you know, the more focused testing that you do, for example, right? We could do five each, for example, or three.
A
We could use incerta for that, and we could stick, like, 25 gig cards in those if we really wanted to. I'm just wondering, since we've already got the 40 gig cards and the 40 gig switch that they're on, if, you know, we can make use of the incerta nodes, keep them in, you know, still a performance test capacity rather than just as load generators; that might, you know, be a better way to use all that existing hardware that's there. But, you know, we can certainly play around with it.
We can; it might depend on what hardware we can get. I did notice Roman here had commented in the chat window that the scheduling, he says, really must be simple and fast, otherwise it's a major pain, and it'd be better to split some off for dedicated crimson use. That is kind of the feeling I had too.
It would be really good to have a couple of these that the crimson developers can just, you know, almost have dedicated to them. But if that's not really the case, if that's not what people want, you know, if they don't think they'd be able to use it enough on a regular basis for that, certainly we could then, you know, have it more on a scheduling basis.
D
For crimson development we can actually share a single machine. We can pin our application to certain cores if we want, and use a certain hard drive or SSD drive, or a partition, so it can be shared, actually, if we do it well.

A
Okay, Kefu, okay. Rather than the whole box, right?
D
Right; in the thread, you know, currently the crimson OSD is single-threaded, a single reactor, so we can pin our application to a single core, or at most a few cores. Assuming that, yeah, like 20 or 50 or 40 cores are available on a single box, the box can be shared by crimson developers even while doing testing.
We just don't want to interfere with other developers, so we can actually allocate a certain machine to be shared by all crimson developers, as long as we don't need to use all the cores and the whole hardware; in other words, a whole node, a certain SSD or NVMe drive, and so on.
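(A minimal sketch of that kind of core partitioning on a shared box, using plain Linux CPU affinity from Python; the crimson-osd invocation, flags, and core ranges are placeholders, and in practice seastar's own CPU options would likely be used instead:)

```python
#!/usr/bin/env python3
"""Hypothetical helper to pin a developer's OSD process to a slice of a shared box.

The binary path, flags, and core ranges are placeholders; the point is only that
each developer gets disjoint cores (and a dedicated drive/partition) on one node.
"""
import os
import subprocess

def launch_pinned(cmd, cores):
    """Start `cmd` and restrict it to the given CPU cores (Linux-only affinity)."""
    proc = subprocess.Popen(cmd)
    os.sched_setaffinity(proc.pid, set(cores))
    return proc

if __name__ == "__main__":
    # Developer A gets cores 0-3, developer B gets cores 4-7, on the same machine.
    launch_pinned(["./bin/crimson-osd", "--id", "0"], cores=range(0, 4))
    launch_pinned(["./bin/crimson-osd", "--id", "1"], cores=range(4, 8))
```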
A
Okay, well. You know, if folks are perfectly cool with just doing the locking through teuthology, let's add them to teuthology then, you know, let's go that route. One of the things with incerta that probably was kind of tough in the past is that, since they were kind of separate, it was a little bit of a double-edged sword.
The good thing was that it, you know, made it so that people could kind of treat them similarly to the way that some of the other development boxes in the lab work. I guess I don't know what they are now, I don't actually use them, but there have been some dedicated boxes for development that I don't think were part of teuthology, and there are some advantages to that, right? You get a little bit more control over it.
A
Okay, well, I guess, I mean, we could maybe just, with all of the incerta and then all of the officinalis, have everything go straight through teuthology: tag some of the machines as being, like, Jenkins slaves or whatever, take some of them as crimson development, and then don't use teuthology to schedule stuff, but just, you know, use it for provisioning, and then, you know, let those things go. Is that the plan? Yeah.
A
Hopefully, though, Kefu, the idea here would be that we still keep everything separate, right? The only reason that we're using teuthology here is so that the node can be easily, like, reprovisioned, and it's listed somewhere as this is what it is and this is what it's being used for. You know, theoretically it would be no different from the Jenkins perspective than if you just, like, installed CentOS on the node and had it dedicated, I think, right?
I mean, maybe someday we could do something more extensive than that, but that sounds complicated and not fun, so, yeah. This would be more like: we've got, you know, six of the incerta nodes, and like four of the officinalis nodes, or two, or whatever, just dedicated to Jenkins, and then, you know, maybe the incerta nodes are what...
...just kind of run-of-the-mill performance stuff gets scheduled to run some tests on, and maybe for really critical stuff we run tests on officinalis and incerta, or just officinalis; lots of options there. We don't have to decide right now, but we'd have some number of them available for, you know, going through and doing this. And we want this not just with crimson; we want this with, you know, the classic OSD too. This would be super useful, yeah.
Cool, okay. So we talked about that. I do think Alfredo would like to be involved in this when he gets back, so just keep that in mind. You know, he's got a lot of experience with the Jenkins system, and I think, you know, I can certainly get him up to speed on CBT as well. We can run rados bench through CBT, and that would pick up some of the monitoring stuff that it does and some of the other, you know, kind of benefits that we get there.
And then the only other thing I had on my list here that I wanted to squeeze in before people leave is that we've been talking to the TRocksDB guys at Toshiba. Adam has been doing a ton of work on both RocksDB sharding and looking at that, and he has found that it does in fact improve write amplification significantly during compaction. I don't know if we have really easy-to-consume results yet, but certainly he's very positive about it.
So we're hoping that, one, we can make sure that upgrade testing works when you switch this on; that has to work. The TRocksDB guys are looking at potentially adding the ability to downgrade back to a standard RocksDB format from the version that they've got. And then, assuming that all this looks good, works, you know, seems to function and improves compaction results...
...we're hoping that maybe we can go to the Facebook guys and say that, you know, this is actually useful, and, you know, we would support the TRocksDB guys in an effort to maybe bring this into upstream RocksDB. So, you know, we're not to that point yet, but certainly there are a lot of hopeful signs that this may be generally useful for RocksDB and also useful for Ceph. So that's kind of the current status there. Any questions on that at all?