From YouTube: Ceph Performance Meeting 2021-08-26
A: All right, well, I'll get started here. The core people will hopefully show up somewhat soon, but we'll see. All right, so: what we've got for new PRs this week, that I saw anyway. I confess this morning I was trying to do multiple things at once and may have missed stuff, so please feel free to correct me if I'm missing anything. All right, first one: Prometheus, offering the ability to disable the cache. Apparently the cache is not needed for small deployments, I think they said. I don't know why you would actually want to disable it; maybe there's a good reason, memory usage or something, I don't know. Kefu reviewed it, and I think he may have approved it, but yeah, that's there. That's for the manager, by the way.
A: Yeah, I'm not exactly sure what the rationale is for it, but, you know, I suppose it doesn't hurt anything to be able to disable it. I don't know if that's what you'd want the default to be, though. So yeah, I don't know; it is what it is, I guess. Let's see.
A: So I think he's going to go back and review this again, but the gist of it is he's really trying to change up the way that some of this works, which is good, because we're seeing, especially with RocksDB, that it does really weird things in the write path with BlueFS that we don't like. So anyway, that's ongoing.
A: For closed PRs: oh, these RGW tracing implementation PRs are closed. So, Casey or Adam, does that mean we can do really cool tracing in RGW now?
B: So, okay: build on that, trace things other than requests, add more details to request processing itself. This was just adding some classes into RGW; I think the next immediate step is going to be to move those into the common folder, to share the implementation with the existing stuff in the OSD.
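As background for that discussion, here is a minimal, self-contained sketch of the nested-span idea behind this kind of request tracing. The class and span names are invented for illustration; this is not the actual tracing API being added to RGW, which is built on a real distributed-tracing library.

```cpp
#include <chrono>
#include <iostream>
#include <string>

// Hypothetical RAII span: starts a timer on construction and reports the
// elapsed time on destruction, so nested scopes produce nested spans.
class Span {
public:
  explicit Span(std::string name)
    : name_(std::move(name)), start_(std::chrono::steady_clock::now()) {}
  ~Span() {
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::steady_clock::now() - start_).count();
    std::cout << name_ << " took " << us << "us\n";
  }
private:
  std::string name_;
  std::chrono::steady_clock::time_point start_;
};

void handle_request() {
  Span req("rgw.request");            // whole-request span
  {
    Span auth("rgw.request.auth");    // sub-span for one processing stage
    // ... authentication work ...
  }
  {
    Span io("rgw.request.io");        // sub-span for the I/O stage
    // ... read/write work ...
  }
}

int main() { handle_request(); }
```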
B: Yeah, I totally agree. So I really think that we do need to focus on tooling to make it usable.
A: All right, updated PRs!
A: Now, there's this pull request to make some modifications to our version of the RocksDB LRU cache, so that we can update RocksDB. I have owed Kefu a review on this for like a week and a half, and it's kind of weighing on me because I haven't been able to focus on it yet. So yeah, that's waiting on me. I don't know if I'm going to get to it by the end of this week or not.
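For anyone following along who hasn't looked at that code, here is a generic sketch of what an LRU cache does; it is only an illustration of the data structure, not the RocksDB shim the PR actually modifies. A linked list keeps entries in recency order and a hash map gives O(1) lookup into the list.

```cpp
#include <iostream>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

template <typename K, typename V>
class LRUCache {
public:
  explicit LRUCache(size_t capacity) : capacity_(capacity) {}

  void put(const K& key, V value) {
    auto it = map_.find(key);
    if (it != map_.end()) {            // replace an existing entry
      items_.erase(it->second);
      map_.erase(it);
    }
    items_.emplace_front(key, std::move(value));
    map_[key] = items_.begin();
    if (map_.size() > capacity_) {     // evict the least recently used
      map_.erase(items_.back().first);
      items_.pop_back();
    }
  }

  V* get(const K& key) {
    auto it = map_.find(key);
    if (it == map_.end()) return nullptr;
    items_.splice(items_.begin(), items_, it->second);  // move to front
    return &it->second->second;
  }

private:
  size_t capacity_;
  std::list<std::pair<K, V>> items_;
  std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> map_;
};

int main() {
  LRUCache<std::string, int> cache(2);
  cache.put("a", 1);
  cache.put("b", 2);
  cache.get("a");       // "a" is now most recent
  cache.put("c", 3);    // evicts "b"
  std::cout << (cache.get("b") ? "b cached\n" : "b evicted\n");
}
```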
A: Let's see, there is the incremental update mode for the BlueFS log. It was reviewed, but then it failed QA, so I think I'm just going to have to go back and figure out what was wrong with it.
A: Okay, looks like there are more updates and discussion on this RGW OSD compression bypass PR. Casey, anything new with that?
A: Let's see. Oh, and then we already talked a little bit about the RGW cache thing that I made. Yeah, I'll look at your updated review, Casey.
A: Oh, Igor's here now, though. Igor, I'm trying to remember, what is your PR, the cap omap naming scheme upgrade transaction one? I don't remember what that does at all.
G: Well, Mark's trying to get his audio sorted out. One thing I wanted to bring up today was kind of brainstorming around different sorts of performance investigations it would be useful to do over the next, say, three-month, six-month, one-year sort of time frame, with either existing things that we haven't tested well enough, or upcoming work that it would be helpful to get more information on in terms of performance and scalability.
G: So, Mark, you can hear us; are you back now?
G: Yeah, I mean, I think they tend to do a lot of that kind of testing anyway, so I'm not sure how different that would be from their existing testing.
H: Josh, do you want to add here also the balancing improvements?
G: As we've spoken about already, Josh: improved balancer behavior, for balancing reads in addition to writes.
H: Actually, also improving the current balancing, and adding primary balancing on top of this. But...
A: I don't know what the current state of testing is, but, oh, there's always interest in recovery versus client throughput and QoS, and where the state of that is. I don't know what their most recent data looks like on that, but...
I: That is something we are trying to see if we can get some help from our downstream folks to evaluate, that feature. They did some testing with async recovery, so they have tooling to do recovery testing, recovery-versus-client-I/O testing, so essentially they could use that, or extend it, to also evaluate partial recovery.
I: That's interesting; that's also on the list for the same group. They will be evaluating it once it's ready to be consumed, once we feel at least the background recovery is ready. Like, we currently have recovery covered by QoS in terms of background activities, but in master we've worked on other stuff too, also including scrubbing. So maybe after a few months, or like at the beginning of next year, they can start consuming master builds and evaluate QoS to some extent.
A: Client QoS would also be really... you know, not just recovery or scrubbing or anything else like that, but actually making sure that some clients aren't starving other clients. We definitely know that's happening in certain situations, so being able to... yeah.
H: Also, I think, if I understand correctly, what QoS currently does is limit some of the clients so they're not going to starve other clients. But what it actually does is: if the system is not loaded, and you only have the clients with low QoS, they're actually blocked; they can't use the full performance of the system. The system is idle, because each one is blocked separately.
H: I have some ideas, and I'm not sure; it's not a fairly simple thing. But the idea that an idle OSD is not going to serve too many requests, because the client is blocked, somehow seems weird to me.
G: Yeah, I think you're talking about a different kind of QoS, the one that's kind of just doing pure throttling on the client side. What they have been talking about is the dmclock-based QoS, which is much more dynamic. It does enable clients to use more than just that pure throttle value, if they're set up that way.
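For context, dmclock-style QoS (the dmclock library is what Ceph's mclock scheduler builds on) gives each client class three knobs: a reservation, a weight, and a limit. The toy allocation below is only a sketch of what those knobs mean under contention, with made-up numbers; the real library tags and schedules individual requests rather than computing a static split.

```cpp
#include <algorithm>
#include <iostream>

// The three knobs a dmclock-style scheduler gives each client class.
struct QosSpec {
  double reservation;  // IOPS guaranteed even under contention
  double weight;       // share of whatever capacity is left over
  double limit;        // hard cap, even when the cluster is idle
};

// Toy split of cluster IOPS between two clients: everyone gets their
// reservation first, spare capacity is divided by weight, and nobody
// exceeds their limit.
int main() {
  double cluster_iops = 10000;
  QosSpec a{1000, 1, 4000}, b{2000, 3, 10000};

  double spare = cluster_iops - (a.reservation + b.reservation);
  double wsum = a.weight + b.weight;
  double a_alloc = std::min(a.limit, a.reservation + spare * a.weight / wsum);
  double b_alloc = std::min(b.limit, b.reservation + spare * b.weight / wsum);

  std::cout << "client a: " << a_alloc << " IOPS\n"   // 2750, under its cap
            << "client b: " << b_alloc << " IOPS\n";  // 7250
}
```

This also illustrates H's complaint about the pure-throttling kind of QoS: a plain per-client limit with no reservation/weight redistribution leaves capacity idle that a dmclock-style scheduler would hand out.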
G: Okay, that's a good one to add to the list. I think it's still under development, but it may be ready for testing perhaps in six months or a year; who knows.
A: It's been quite a while, but if someone were interested in looking at the different erasure coding backends, and, you know, kind of updating the performance data and CPU utilization and that kind of thing, that would maybe be another interesting thing for somebody to do that wouldn't require a whole lot of coding.
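As a refresher on what such a comparison would be normalizing for: the space overhead and write fan-out of an erasure-coded pool follow directly from k and m, while the backend plugin choice mostly changes CPU cost. A quick sketch of the arithmetic (profile names here are just examples):

```cpp
#include <iostream>

// Space overhead and write fan-out for a few profiles, with 3x
// replication modeled as the degenerate k=1, m=2 case.
int main() {
  struct Profile { const char* name; int k, m; };
  for (Profile p : {Profile{"ec 4+2", 4, 2},
                    Profile{"ec 8+3", 8, 3},
                    Profile{"3x replication", 1, 2}}) {
    double overhead = double(p.k + p.m) / p.k;  // raw bytes per user byte
    std::cout << p.name << ": " << overhead << "x space, "
              << p.k + p.m << " shards per write\n";
  }
}
```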
H: And actually, something which we are currently trying to think about, in collaboration with academia (so it's at really very early stages that we're trying to think through), is going back to the QoS having different cache policies. You know, giving more cache priority to privileged clients, or rather privileged workloads or whatever, versus other workloads.
H: So this is something that we're looking at. It's really, really early; we may add this, but no commitment, you know, it's a very early stage. But we may add this as a proof of concept to RGW, to the D4N cache that's currently being worked on, which is being pushed upstream anyway.
H: We may want to, because this is just probably the easiest place to put it for the students, we may want to put such policies on this. But it also relates to adding more dimensions to QoS: not just the requests, but also splitting other resources, such as cache, so as to favor privileged workloads versus less privileged workloads.
A
If,
if
casey
and
and
crew
are
okay
with
it,
I
may
try
to
implement
the
the
priority
cash
scheme
in
rgw,
which
would
maybe
be
one
way
that
you
could
kind
of
approach
that
problem.
A: The idea there is that you kind of dynamically balance resources based on the different request levels coming from each individual cache. So you could kind of imagine a scenario where you could tie QoS into that.
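A rough sketch of that balancing idea, with invented names and numbers rather than the actual PriorityCache interface: each cache reports how much memory it wants at each priority level, and the balancer spends a global budget one level at a time, splitting oversubscribed levels proportionally.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Each cache's memory request, broken down by priority level.
struct CacheWants {
  std::string name;
  std::vector<uint64_t> want;  // bytes wanted at priority 0 (highest), 1, ...
  uint64_t assigned = 0;
};

int main() {
  uint64_t budget = 1 << 30;  // 1 GiB to divide between the caches
  std::vector<CacheWants> caches = {
    {"onode",  {300 << 20, 400 << 20}},
    {"buffer", {200 << 20, 600 << 20}},
  };

  for (size_t pri = 0; pri < 2 && budget > 0; ++pri) {  // 2 levels here
    uint64_t requested = 0;
    for (auto& c : caches) requested += c.want[pri];
    for (auto& c : caches) {
      // Grant in full if the level fits, else proportionally.
      uint64_t grant = requested <= budget
          ? c.want[pri]
          : c.want[pri] * budget / requested;
      c.assigned += grant;
    }
    budget -= std::min(budget, requested);
  }
  for (auto& c : caches)
    std::cout << c.name << ": " << (c.assigned >> 20) << " MiB\n";
}
```

Tying QoS in, as suggested above, would amount to letting the observed per-cache request rates move memory between the priority levels.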
A: Josh, what do you think about fragmentation? I know that Adam's done a bunch of work on that, and it's kind of a complicated topic. Is that something that we want to have more kind of testing around?
A: Oh, good; sorry, Adam, I didn't think that you were here earlier. It seems to me like, on one hand, we want to be careful to make sure that we're not only advertising the most horrible fragmentation scenarios we end up with, but we also need truthful data to improve it, right?
F: So, not really. I mean, we can always redeploy without adding any extra data, redeploy all the OSDs and then make a comparison. That would give us some info. But besides that, I don't have any idea for a tool to measure it numerically and give a numeric value for the quality of our fragmentation.
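One way to put a single number on it, purely as an illustration (this is not BlueStore's actual scoring): weight the free extents so that one big extent scores 0 and heavily splintered free space approaches 1.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// Illustrative fragmentation score over the free-extent list:
// 0.0 when all free space is a single extent, approaching 1.0 as the
// same free space splinters into many small extents.
double fragmentation_score(const std::vector<uint64_t>& free_extents) {
  double total = 0, sum_sq = 0;
  for (uint64_t len : free_extents) {
    total += static_cast<double>(len);
    sum_sq += static_cast<double>(len) * static_cast<double>(len);
  }
  if (total == 0) return 0.0;
  return 1.0 - sum_sq / (total * total);
}

int main() {
  // One big 1 GiB free extent vs. the same space in 4 KiB pieces.
  std::cout << fragmentation_score({1ull << 30}) << "\n";   // 0
  std::vector<uint64_t> shattered(1 << 18, 4096);           // 2^18 * 4 KiB
  std::cout << fragmentation_score(shattered) << "\n";      // ~0.999996
}
```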
H: I'm not sure about it, but I think it is possible that, on the same system, you'd get part of the data less fragmented than the rest because of the workload. That's right: some workloads fragment much more than other workloads. So we have workloads that write less and read a lot, and their data would probably be less fragmented than that of workloads that do a lot of writing. And I mention this because we also had some kind of high-level discussion.
H: Gabi is not on the call, but I had some kind of discussions with Gabi regarding also taking this as a parameter for cache policies.
H: The part of the capacity which is highly fragmented is relatively small compared to the rest, so I'm not sure whether it's a real-world workload scenario, but I can see how it could happen that part of your workload is way more fragmented than the rest. So I'm saying this with caution, because we didn't measure it; we just played with some ideas on cache improvements.
A: One thing regarding the fragmentation conversation: you know, for a while I had the attitude of, well, hard drives are on their way out eventually, so maybe this isn't as big of a deal. But now we're starting to get, you know, these QLC flash drives that have like 64k allocation units, and I'm not sure exactly what allocators should look like going forward. I don't know if this falls under performance investigations or projects over the next year.
G: What about performance investigations beyond RADOS, say things in CephFS or RGW or RBD? Let's see, we've got Casey, and Adam as well. Do you have any ideas for areas of RGW that would be ripe for investigation?
B: Let's see. Well, there are the regressions that Mark pointed out recently in the Beast frontend, and Mark Kogan has started looking at that. I think that's going to be a project.
B: So it's hard to give any estimate there.
B: And then I think multi-site has a lot of unknown performance characteristics, so if somebody's interested in trying to work on that, we could arrange a project around it.
B: But I think I would start by looking at polling overhead, using some tracing to look at how long it takes from a write on one side to get through the replication path to the other side, and see if there are enough algorithmic things that we can change to lower that time.
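A small sketch of what summarizing that measurement could look like, assuming the matched source-write and destination-apply timestamps have already been collected somehow (from tracing spans, say); the sample values are made up.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Given matched timestamps for when an object was written on the source
// zone and when it appeared on the destination zone, summarize the
// replication lag. Units here are milliseconds.
int main() {
  struct Sample { double written_ms, replicated_ms; };
  std::vector<Sample> samples = {
    {0, 850}, {10, 1400}, {25, 900}, {40, 5200}, {55, 1100},
  };

  std::vector<double> lag;
  for (auto& s : samples) lag.push_back(s.replicated_ms - s.written_ms);
  std::sort(lag.begin(), lag.end());

  auto pct = [&](double p) { return lag[size_t(p * (lag.size() - 1))]; };
  // The tail percentiles are where polling overhead shows up.
  std::cout << "p50 lag: " << pct(0.50) << " ms, "
            << "p99 lag: " << pct(0.99) << " ms\n";
}
```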
A: Casey, another thing that Matt and I talked about a while back was potentially using the RGW file interface to run... it wouldn't be complete, we wouldn't be able to do a complete run of the IO500, but we might be able to basically implement a... I don't know if that's particularly interesting to you guys or not, but it maybe would be a way to see if RGW file is doing well.
B: Okay, you've discussed that with Matt. I know that RGW file has a lot of limitations in terms of, like, the write patterns that it supports, right? So I'm not sure that we could pass a generic file system suite like that.
A: Yeah, yeah, exactly right. I talked to him just a little bit about it, like, I don't know, four or five months ago or something. But the gist of it is: I think we could implement some of the calls that happen in their abstracted back-end interface for IOR and mdtest, but we wouldn't be able to do everything. Like, you guys don't, I don't think, allow concurrent clients to write to the same file, right?
B: Good question; I'm not sure how that works. They might end up creating two separate instances of the file. You'd have to...
B: Sure, yeah.
B: What I remember most is that one client can only write sequentially.
B: I don't know that RGW file is seeing that much use. Maybe if it was super fast, it would see more. But personally, I would be more interested in the HTTP path.
G: I guess, do we have controls in place for kind of trying to limit the background operations' effect on the client ops in RGW?
B: Oh, cool. Yeah, I might have to start rejoining the QoS calls, or at least find somebody interested in that.
A: As long as we're talking about RGW and CephFS, we could throw in adding the priority cache manager to both.
G: Yeah, that's a good point, for memory usage in particular.
A: Yeah, and... yeah, both memory usage and then also, potentially, you know, smart balancing of disparate caches, especially if we add autotuning support for those.
A: There was an effort in CephFS for the MDS to do something like what we do with the priority cache manager, but it was a totally different implementation, and it's been sitting there for like two years, I think. So maybe it's worth just sitting down and actually using the priority cache manager, and the glue code for tcmalloc, to do it the same way we do it in the other daemons.
A: Casey, other than that cache that I was messing around with, do you guys have any other caches in RGW that are worth thinking about?
D: I think he had to drop for, oh sure, the standup. As for that, the main cache that we have is the system object cache.
D: It basically hangs on to a bunch of metadata for us, like buckets and users, that sort of thing. There is a refactor of that going on, I believe, as part of the store abstraction layer work. I think Daniel mentioned that, rather than having a unified system object cache that tries to cover everything, it would be broken out, but I don't know the details of what he was planning.
A: ...of the process, but you might not need a whole lot of balancing between different caches, depending on how that looks; it might still be useful, though.
G: Yeah, and speaking of caching, there's also the work in RGW around D3N caching, for storing local copies of objects. Yeah, I think the research group around that had been doing some performance testing themselves.
D: Yeah, I know they're working on it, but I don't think it's gotten merged in yet. There were some disagreements about exactly how to implement it, last I heard.
G: Anything else? Are there other areas in RGW that you can think of?
D: The async request processing is the main one. We've been wanting to do that for a while, but we keep getting steamrolled by multi-site, basically.
G: What about areas in CephFS? Mark, you've done a fair bit of testing there in the past.
A: Oh, there's lots of other stuff. I mean, there's kind of a question about what we should do regarding dynamic subtree partitioning versus kind of the ephemeral pinning. Neither is ideal, it seems. With the ephemeral pinning, it's not a bad idea, but with the way that random distributions work and the number of directories that you're kind of spreading stuff across, you still end up with this really lumpy, clumpy distribution.
A
Unless
you
have
just
a
huge
number
of
directories
and
clients
it,
it's
not
bad,
it's
it's.
It
works
better
than
dynamics
of
tree
partitioning
in
a
lot
of
cases,
but
there
are
cases
it
doesn't
cover
and
they're,
even
in
the
cases
where
it's
good,
it's
not
like
as
good
as
it
could
be.
If
you
did
something
like
perfect
round
robin
distribution,
I
don't
know
exactly
how
we
deal
with
that,
but
that
is
an
area
of
research
that
would
be
really
beneficial
for
ffs.
I
think.
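To make the "lumpy distribution" point concrete, here is a toy simulation, not a model of the actual MDS: uniformly random placement of equally loaded directories onto ranks only evens out once the directory count is much larger than the rank count.

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Assign `dirs` equally loaded directories to 8 ranks uniformly at
// random and report how far the busiest rank sits above the mean.
// Random placement only approaches a round-robin split as dirs grows.
int main() {
  std::mt19937 rng(42);
  const int ranks = 8;
  for (int dirs : {16, 128, 4096}) {
    std::vector<int> load(ranks, 0);
    std::uniform_int_distribution<int> pick(0, ranks - 1);
    for (int d = 0; d < dirs; ++d) load[pick(rng)]++;
    int max = *std::max_element(load.begin(), load.end());
    double mean = double(dirs) / ranks;
    std::cout << dirs << " dirs on " << ranks << " ranks: busiest has "
              << max << " (" << 100.0 * max / mean - 100.0
              << "% above mean)\n";
  }
}
```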
A: It looks like there are a lot of areas where that could be improved dramatically, but I'm not sure I even remotely... I know I do not understand it well enough to even really ask the right questions right now. Someone else might be able to, but there are very few people that understand that code well.
A
The
the
actual
threading
inside
the
mds
itself
is
is
all
kind
of
governed
by
some
one
big
block,
it's
possible
that
that
could
also
be
broken
up.
Maybe,
but
that's
also
really
complicated.
A: A big project, yeah. Zheng, I think, was going to look at it at one point, but, you know, we'd probably have to find somebody else now.
A: I did something this past spring, on Pacific I think, or maybe it was master, I don't remember. That was when we were trying to look at... they changed ephemeral pinning to no longer work on subdirectories but to work on dirfrags instead, and I think the hope was that we'd be able to improve the situation where you have lots of clients all doing work in a single directory. And unfortunately, it didn't really do anything different from the way it was already behaving.
A: So really, I mean, the MDS stuff, to me... You know, we're actually fast in CephFS for, like, big sequential write workloads, or even big random write workloads, or reads or whatever. If you've got big files and you're doing big I/O, it's great.
A: It's when you have, like, a directory full of a ton of small files that we are slow, for a variety of different reasons. And you can even, I mean, you can just kind of blow stuff up depending on what you're doing sometimes, with, you know, really horrible workloads like this. So that's kind of the stuff that...
A: And, like with RGW, there's this RBD project for doing client-side persistent caching.
A: I was helping somebody with that recently, testing it, and it was slower than not using it, and there seemed to be a lot of client-side lock contention in the cache implementation for some reason. So they were working on trying to understand what was going on. It's a project, right? But I think people are already working on it.
G: Yes, yes. I think a lot of different stuff is kind of further off in terms of optimization, but I did put a Seastar tuning idea at the top, yeah.
A: Actually, the work I've been doing this week is more or less to try to, you know, kind of very, very vaguely and roughly simulate what we maybe could possibly see when we do it for real.
A: I've been working on making MemStore, and possibly SeaStore, nicer.
A: And I suspect that... like, I think we create an ordered map for omap in every single object, so it's like, you know, that's probably not super ideal.
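A minimal sketch of one obvious fix, with invented names rather than MemStore's actual structures: allocate the ordered omap lazily, on the first omap write, so the common case of omap-free objects pays nothing for it.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <string>
#include <vector>

using bufferlike = std::vector<uint8_t>;  // stand-in for ceph::bufferlist

// Instead of every object carrying a (mostly empty) ordered map, the map
// is only allocated when an object first gets omap data.
struct Object {
  bufferlike data;
  std::unique_ptr<std::map<std::string, bufferlike>> omap;  // null until used

  void omap_set(const std::string& key, bufferlike value) {
    if (!omap)
      omap = std::make_unique<std::map<std::string, bufferlike>>();
    (*omap)[key] = std::move(value);
  }

  const bufferlike* omap_get(const std::string& key) const {
    if (!omap) return nullptr;          // common case: no omap at all
    auto it = omap->find(key);
    return it == omap->end() ? nullptr : &it->second;
  }
};

int main() {
  Object o;
  o.omap_set("k", {1, 2, 3});
  return o.omap_get("k") ? 0 : 1;
}
```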
A: It would actually be really interesting to make a version of MemStore that, like, didn't use objects at all and tried to just store data in other ways. That would be very curious too. But at any event, yeah, it'd be nicer to at least make MemStore consistently better than BlueStore.
G: Okay, well, this is a pretty good list, and I've got to run today, unfortunately, but I'm hoping we can talk about some more of these things in the future, and maybe get some more feedback from other folks who couldn't make it today.
A: Yeah, I agree, I agree. That was actually the take I had with Crimson and SeaStore early on too: those guys would work on Crimson, but I would try to keep making the classic OSD faster, to give them a challenge.
I: One thing I wanted to say, maybe not today, but maybe in a meeting or something: we've covered topics like, you know, feature performance investigation stuff, but it'd be really nice to have some performance gates on master. I know every now and then we run into issues where we have to go back and do bisects, etc., compare with Pacific, and go through this whole cycle again after something, you know, causes a regression.
I: So how can we make sure that we have some automated performance gates? Or even if it's manual, I don't know, but just to make sure that these kinds of regressions don't only get caught later.
A: My whole kind of... lack of enthusiasm for this kind of stuff is that it feels like any time anyone ever makes something like this, it works for a little while and then it breaks, because it's so hard to maintain and keep these things running consistently. Like, you know, we might be able to make it work, and maybe it works for a while, as long as, you know, nothing changes too much. But it almost seems like it's as much work maintaining it and keeping it going as it is to go and do the bisect every once in a while. I mean, it's...
I: It's almost like, you know, we try to make the rados suite all green, and then suddenly something gets merged and it breaks everything, so you're back to square one. So yeah, I guess it is a difficult problem. But I guess, I mean, yeah, you know, we've had some issues recently, and even earlier, so I'm just trying to get back some focus on it, even if it has to be manual, you know, intervention required, like every three months or every four months.
A: Yeah, well, the trickiest thing with this kind of approach is: if you're doing it like every three or four months, what happens if something else changed, right? Like, what if something on the system changed: new kernel, new driver, maybe it wasn't rebooted recently. I mean, all this stuff makes it really hard. So you almost want to be doing it every night, or every day, and then be able to say, "something changed here."
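A bare-bones sketch of that nightly-gate idea, with a hypothetical tolerance and made-up numbers: keep a rolling baseline from recent nights and flag the build when tonight's mean drops past a threshold chosen to sit above normal run-to-run noise.

```cpp
#include <iostream>
#include <numeric>
#include <vector>

// Compare tonight's benchmark runs against a rolling baseline and flag
// the build when the mean drops more than `tolerance` (e.g. 0.05 = 5%).
bool regressed(const std::vector<double>& baseline,
               const std::vector<double>& tonight,
               double tolerance) {
  auto mean = [](const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0) / v.size();
  };
  return mean(tonight) < mean(baseline) * (1.0 - tolerance);
}

int main() {
  std::vector<double> baseline = {1010, 990, 1000};  // MB/s from prior nights
  std::vector<double> tonight  = {930, 940, 935};
  if (regressed(baseline, tonight, 0.05))
    std::cout << "possible regression: investigate last night's merges\n";
}
```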
I: Even if it's a matter of just looking at it, you know, making it more easily visible is probably something we can start with.
I: I won't keep folks any longer for some other meeting. Sounds good.