From YouTube: 2017-FEB-22 :: Ceph Performance Weekly
Description
Weekly collaboration call of all community members working on Ceph performance.
http://ceph.com/performance
For full notes and video recording archive visit:
http://pad.ceph.com/p/performance_weekly
A: There was a pull request on the list regarding the locking in the bitmap allocator. I'm very, very excited about this. Something that we've kind of seen in the background for the past couple of months is that under high random write load the allocator is doing a lot of walking and locking, and I'm excited somebody is looking into it now. So this is great; there's actually, I think, some code there, so I'm going to try to take a look at that when I get a chance.
A: Haomai has been working on RDMA with the async messenger and has a couple of different pull requests here. One that actually merged was adding perf counters to the RDMA back end, so you can see what's going on when you're testing. The other, maybe more interesting, one is basically adding zero-copy code into the RDMA back end, and then there's also some discussion going on regarding a better RDMA buffer pool.
A
It
sounds
like
there
there's,
maybe
some
issues
with
that,
but
not
allowed
to
tell
yet
so
exciting
saying
using
our
DM
mail
and
what
else
stage
has
a
basic
PR
for
right.
Cash
in
bufa
will
see
more
than
up
at
a
Pinterest
things
on
this
was
basically
to
get
around
having
to
enable
a
herd
right
in,
let's
say,
I
think
it's
an
avid
so
many
funny
things
that
we
have
now,
but
the
idea
is
basically
to
make
it
for
that
Roxy
be
he
doesn't
need
redoing
all
suffer
great
to
think.
A: I might have screwed that up, but in any event, this was basically to work around some performance issues that were cropping up when we weren't doing buffered I/O in some scenarios. What else do we have here: Sage's faster decode work with Greg. It looks like Greg's been doing a lot of review on that, which is fantastic. I should mention that it's probably ready for some performance testing, so I'm going to take a look at that if I get a chance. My guess is that we might see better performance during really, really high concurrency random workloads.
A: It looks like there are a couple of patches that were waiting for Sage to look at, so unfortunately, since he's not here, I can't bug him about them, but I will try to remind him at standup later this week. So nothing else for now, that's about it. I've got a bunch of stuff here that I can go over with the RGW and BlueStore testing, but before that, it looks like you've got some stuff that you want to talk about. Would you like to jump in and maybe do yours first? Yeah.
C: This was the behavior with the first round of changes from Sam, just sorting them out, and then I did a test with the next one, which, as you can see, is a lot better. I set the sleep to 200 milliseconds, just as a sort of random figure, and I can do a different range of sleep values if that's useful, but overall it is much better, although there are occasional drops down to zero.
B: Okay, try that against the next version, and then we'll see what the next step is.
C: Cool, good.
C: Right, so just from last week, if you remember, one of the things I was looking at was that the writeback throttles seem to be favoring write throughput at the expense of read latency, so I was interested in exploring that a bit more. The defaults at the moment favor maximum throughput for small writes at the cost of read latency, so you've got a large amount of writes just queuing up.
C: Obviously, you know, it's pretty high: the default writeback throttle can be up to 5,000 I/Os deep, which for 7,200 RPM disks is 40 or 50 seconds' worth of flushing, and during the writeback period the reads pretty much just stop. I did try tuning the I/O scheduler options, deadline and BFQ, but nothing seemed to help.
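(As a rough sanity check of that flush-time estimate; the IOPS figure below is my assumption for a 7,200 RPM disk, not something stated on the call:)

    # 5,000 queued I/Os against a single spinner managing ~120 random IOPS.
    queued_ios = 5000          # default writeback throttle hard limit quoted above
    disk_iops = 120            # assumed random IOPS for a 7,200 RPM disk
    print(queued_ios / disk_iops)   # ~42 seconds, matching the 40-50 s estimate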
C: It is a lot more stable, but at the cost of less write throughput. Those graphs are in log10, to sort of zoom in on the action so you can see better what's actually happening; you can see that the reads in that left graph are really suffering. And I found that adjusting the writeback throttle or the number of requests on the block device does pretty much the same thing.
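(For anyone reproducing the graphs, a minimal matplotlib sketch of that log-scale trick; the latency arrays are placeholders:)

    import matplotlib.pyplot as plt

    reads = [0.5, 0.7, 40.0, 0.6, 55.0, 0.8]   # placeholder read latencies (ms)
    writes = [1.0, 1.1, 0.9, 1.2, 1.0, 1.1]    # placeholder write latencies (ms)

    plt.plot(reads, label="reads")
    plt.plot(writes, label="writes")
    plt.yscale("log")   # log10 axis zooms in on the action near zero
    plt.legend()
    plt.show()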
C: So if I set the outstanding request limit on the block device to about eight, or the hard limit of the writeback throttle to about 160 I/Os, I got that quite nice smooth behavior, but there is a trade-off against maximum throughput. I tried CFQ, deadline, and even BFQ with all the various options, and nothing really seemed to make any difference. I think it's just the ratio of reads to writes; the scheduler is just not designed to cope with that sort of spiking.
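(For reference, a minimal sketch of the two knobs being compared; the FileStore option name is my best guess from the filestore_wbthrottle_* family and should be verified against your Ceph version:)

    from pathlib import Path

    # Cap the block device's outstanding request queue at 8, as in the test above.
    # "sdb" is a placeholder device name; writing to sysfs needs root.
    Path("/sys/block/sdb/queue/nr_requests").write_text("8\n")

    # Alternatively, cap the FileStore writeback throttle in ceph.conf (assumed
    # option name; the default hard limit is the 5,000 I/Os mentioned earlier):
    #   [osd]
    #   filestore wbthrottle xfs ios hard limit = 160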
C: Sorry, that's someone else in the background. That's okay. I then did similar stuff with large writes, and large writes don't seem to have as much impact; they do have some effect, but nowhere near the amount that small I/O does. It's definitely related to the depth of I/O going to the disk rather than just the amount of data. But if you set the throttle past about 14 MB, that also helped smooth things out, and it didn't suffer too much in terms of lost performance.
C: So what I sort of found is that, generally, going from the defaults down to, you know, 100 or 150, there was a reasonably linear progression. It just smooths things out: as the amount of I/O you can have going to each disk starts to reduce, it very gently smooths it out, and I guess it's just what you prefer for your cluster.
C: For me, I would say that if I was having to run my cluster at one hundred percent to get that write performance, that's probably an indication that I need to make the cluster bigger, and I would probably prefer, in that instance, that if it ever did go to a hundred percent, I could rely on everything being a little bit more stable.
A: All right, I guess I will maybe share my screen here. I don't have slides, but I can at least share this spreadsheet with some graphs on it and talk a little bit about the testing.
A: Fantastic, I was beginning to worry that I had gotten disconnected. Alright, so, okay, let me give a bit of background on this. A couple of months ago our reference architecture team was seeing really poor performance when they were doing RGW scale testing. They have a cluster with, I think, a couple hundred OSDs, all on hard disks.
A
We
are
trying
to
figure
out
a
good
reference
architecture
for
our
GW
with
erasure
coding
and
they
they
were
just
being
really
really
crappy
performance,
especially
in
relation
to
ratos
bench,
and
they
didn't
know
why.
So
we
went
through
a
bunch
of
suggestions,
including
looking
at
the
number
of
buckets
and
the
is
charting
and
how
much
time
was
spent
in
bucket
index
update
the
we
file
for
or
hardly
the
RDW
chunk
size
that
was
being
used,
all
these
different
things
and
it
got
better,
but
it
never
really
got
great
so
sage.
A: It was really painful and slow to re-test things, and it was not automated, so they were running tests on a cluster that had already had a bunch of writes done to it, which was aging it, and it was hard to compare test results from a fairly young version of the cluster to a very aged version of the cluster. So what I did is I went back in and added a new benchmark called getput to CBT.
A
This
was
written
by
Mark
Siegert,
maybe
a
year
or
two
ago
out
at
HP
and
the
benefit
versus
cause
benches
itself
Python,
and
it's
really
easy
to
just
throw
command
line
parameters
ibis
and
have
it
run
benchmarks.
So
so
nice,
and
the
same
way
that
like
FIA,
is
nice
but
for
swift
side
parking,
so
I
wrap
that
in
cbt
and
cleaned
up,
some
of
the
the
multi
had
just
did.
Some
of
the
stuff
like
cbt,
was
really
can't
bring
dead
in
the
way
that
I
was
having
a
party
venue
clusters
and
talking
to
them.
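(Roughly how a CBT wrapper drives it; a minimal sketch, where the getput flags shown are illustrative placeholders rather than the tool's exact interface:)

    import subprocess

    # Launch a getput worker from Python, the way a CBT wrapper might.
    cmd = [
        "getput",
        "-c", "mybucket",     # bucket/container name (placeholder flag)
        "-s", "4m",           # object size (placeholder flag)
        "--runtime", "300",   # run for five minutes (placeholder flag)
    ]
    subprocess.run(cmd, check=True)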
A: Unfortunately, I ran into some problems where the initial results were all bogus, and the reason for that is that each getput process that gets spawned will first do a head request against the RGW server it's talking to, and then it will either create the bucket, assuming that it's making a new bucket for the process, or, if it's a shared bucket, then it won't.
A
The
problem
here
is
that
it's
assuming
that
that
head
request
and
that
other
creation
is
fast
and
in
rgw
it's
not
especially
under
load.
These
individual
processes
were
getting
up.
Words
will
even
five
minutes
out
of
sync
with
each
other,
maybe
even
longer.
These
were
applied
me
to
tell
hope
it.
Basically,
some
of
them
were
doing
running
at
all
and
or
in
some
cases
that
are
running
after
words
and
the
test
one
fish
until
all
the
processes
take
finished.
A
So
you
were
actually
getting
here
in
a
situation
where
some
processes
were
running
and
then
other
processes
work
starting
into
the
initial
ones,
could
finish
so,
as
you
can
imagine
how
those
results
order
that
it's
really
easy
to
do
that
have
synchronization
with
in
OneNote
I
was
able
to
modify
the
get
put
benchmark
so
that
it
didn't
actually
start
running
any
of
the
tests
until
all
of
the
bucket
creation
and
have
your
quest
processes
serves,
have
your
but
had
already
finished.
That
was
really
easy,
just
using
a
counter
in
a
lock.
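(A minimal sketch of that counter-and-lock barrier, using Python's multiprocessing primitives rather than getput's actual internals, which are not reproduced here:)

    import time
    from multiprocessing import Process, Value

    NPROCS = 8  # number of benchmark worker processes (illustrative)

    def worker(ready):
        # ... per-process setup: HEAD request, bucket creation, etc. ...
        with ready.get_lock():      # the shared Value carries its own lock
            ready.value += 1
        while ready.value < NPROCS: # wait until every process finishes setup
            time.sleep(0.1)
        # ... only now begin the timed benchmark phase ...

    if __name__ == "__main__":
        ready = Value("i", 0)       # shared counter protected by a lock
        procs = [Process(target=worker, args=(ready,)) for _ in range(NPROCS)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()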
A: Those requests can all finish before actually running the benchmark, and then it will, assuming that your clock is the same on all your nodes, wait until a certain time after the benchmark started to actually launch the real test. At least here it's working really well: it has gone from really, really bad synchronization to, at least for these purposes, pretty acceptable synchronized start and stop patterns across the processes. So these results are all run with that newest version of the benchmark, and I'm talking to folks about getting the changes upstream.
A: So with that done, let's look at some of this. Probably the first thing that jumps out, I hope, is that in these tests BlueStore is doing quite a bit better than FileStore. With 3x replication it's about twice as fast, and you'd expect this, because we're writing out really big objects.
A
Booster
is
not
doing
the
double.
It
does
not
suffer
the
devil
right
penalty
in
this
particular
case,
where
a
child
story
is
so
for
large
object
rights
like
this
it's
about
twice
as
fascist,
which
is
what
you'd
expect.
The
good
news
here
is
too,
is
that
rgw
is
about
same
speed
as
radius
bench,
so
we're
not
we're.
A
Not
cleaning
have
real
deck
sedation
use
test
from
from
going
through
to
you
to
gateway
with
erasure
coding,
it's
actually
even
even
kind
of
more
extreme
file-
storage,
quite
bad
and
in
these
half-
and
it's
quite
a
bit
worse
than
just
brigus
bench
and
I-
suspect
that
largely
due
to
the
extra
work,
that's
required
for
updating
the
t-bucket
into
I.
Don't
have
any
proof
of
that,
but
that's
my
suspicion
here.
A
Probably
just
because
laxity
tends
to
be
better
to
committing
them
some
local
TV,
remote
machine
and
also
probably
because
this
is
all
done
on
a
fresh
file
system
and
rock
t-
he
probably
hasn't
gotten
to
the
point
yet
or
seeing
a
lot
of
protection.
So
thank
you
with
all
those
agreements
else.
For
that
reason,
but
but
at
least
here
we're
always
looking
at
has
a
lot
better.
B: Yes, one comment about that: even more important than having much more than double the performance is that the performance is more stable. When you fill the disks all the time with FileStore, performance degrades as you add more and more objects, while with BlueStore the degradation is less, or not visible at all; the problems with BlueStore occur only during compactions, while on XFS the degradation was constant as you filled the disk, yeah.
A
So
far,
I
think
from
what
I've
seen
in
the
past
I
think
he
starts
going
to
be
better
than
tall
for
a
min
each
file
system,
but
just
because
I
hadn't
done
that
testing
in
this
case,
I
did
I,
don't
want
to
I.
Don't
want
me
claim
that
I
can't
support
so
down,
but
yeah
I
suspect
that
you
are
correct
on
a
thousand
I.
Imagine
that-
and
this
has
to
first
opportunity
to
be
together
this.
B
Is
important
may-
and
this
may
be
more
critical
than
they've
done-
the
overall
performance
for
system
that
great
the
small
objects?
Okay,
if
you
need
to
write
small
objects,
the
expected
degradation
that
you
have
to
fill
the
disk
is
by
far
less
relevant
using
blister,
so
is
much
better
than
priced,
or
definitely
for
a
small
object.
So
this
this
is
the
message
you
need
to
move
from
this
to
this,
because
if
you
have
a
small
object,
which
is
very
reason
for
you,
yes.
A
It's
also
very
true
I.
The
NIST
has
liver,
just
larger
to
megabyte
objects,
because
that
is
what
our
reference
architecture
team
were
testing
in
there
doing
it,
and
we
wanted
to
show
the
same
kind
of
issue
that
they
saw
was
specifically
with
erasure
coding
where
they
were
seeing
our
DW
performance
with
a
lot
less
than
greater
expansion.
The
scenario
and
the
news
result
sucks
what
we're
seeing
have
the
same
thing:
you're
right.
It's
basically,
no
matter
how
many
buckets
three
years,
no
matter
how
many
rtw
Silver's
were
using
here.
A
Boosters
is
slower
than
compile
stores
to
varying
degrees
in
in
in
the
blue
sir
case,
rather
than
five
fours.
We
see
that
it's
slower,
but
it's
not
nearly
as
dramatic
in
it.
It's
quite
a
bit
hot
here,
Koecher
to
what
the
disk
and
sustained
and
similar
I'll
stories
is
doing
so
because
I
mean
that's
that's
kind
of
reason
why
these
tests
are
set
the
way
they
are.
My
hope,
though,
is
that,
since
we
now
have
much
better
infrastructure
for
running
automated
tests
on
our
GW,
we
can
start
looking
at
things
like
small
objects
as
well.
A: Each of these nodes has six spinning disks available for OSDs and then one NVMe. In this particular setup I used the one NVMe per node either to supply journals for those six spinning disks or, on BlueStore, to hold the RocksDB database and write-ahead log, because that's probably what I expect most people to be using, an NVMe or an SSD, going forward with BlueStore. The difference between no NVMe drive and an NVMe drive per node makes a pretty big difference for FileStore.
A: In the erasure coding case, again, FileStore does not do well; it's just, in general, slower with RGW than RADOS bench, while BlueStore's performance is much closer. I did go back and actually verify, because I was a little suspicious that it was doing that much better, but we're no longer seeing process skew in the benchmark; it looks like everything is pretty well synchronized.
A: So if this is an inaccurate result, it's something new whose cause I'm not aware of, and it doesn't appear to be inaccurate, at least on the surface. So that's good news. Again, here RGW has a larger chunk size, which is having a very dramatic effect; that's now the default in master, so hopefully folks will benefit from that chunk size change in the next release.
A: The last tests here are kind of interesting because they're NVMe-only. Each of these nodes has four P3700 NVMe drives that can each do up to about two gigabytes per second of writes, so the combined back-end throughput across the four nodes is something like 32 gigabytes per second. What we're kind of seeing here is that the network is quickly becoming the bottleneck: each node has a 40-gigabit-per-second network card, but it's only using one of the ports; we don't have enough switch ports for both.
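(The arithmetic behind that bottleneck claim, as a minimal sketch using the figures from the call:)

    # Disk side: 4 nodes x 4 P3700s x ~2 GB/s of writes each.
    disk_gb_s = 4 * 4 * 2.0        # = 32 GB/s aggregate write capability
    # Network side: one 40 Gbit/s port per node, i.e. ~5 GB/s each.
    net_gb_s = 4 * (40 / 8)        # = 20 GB/s aggregate
    print(disk_gb_s, net_gb_s)     # the network (20 GB/s) saturates first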
A: But since we're kind of seeing rough parity here for both FileStore and BlueStore, despite the fact that BlueStore should have less work to do, I think we're probably network-bound, or at least pretty close to network-bound. That kind of makes sense with the erasure coding results too, because we're actually doing better with 4+2 erasure coding, and I'm guessing that's just because erasure coding pushes less data over the network than replication does.
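(One way to see why 4+2 erasure coding would help if the network is the limit; a minimal sketch of the approximate per-byte network cost, ignoring journaling and metadata traffic:)

    # Bytes crossing the cluster network per byte written by a client (approximate).
    replication_3x = 3.0        # three full copies land on OSDs
    ec_4_plus_2 = (4 + 2) / 4   # 1.5x: four data chunks plus two coding chunks
    print(replication_3x, ec_4_plus_2)  # EC moves half the data of 3x replication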
A: So in this case the good news is that, especially with BlueStore and lots of buckets, we're more or less seeing the same performance with RGW as with RADOS bench, and even better in some of the other cases. We're seeing puts through a single RGW server at something like 3 gigabytes per second, so that's good; that's what I want to see. You want to see that an individual RGW server can scale. I don't have the results here, but in previous testing I was looking at the CPU usage for it.
A
While
this
was
happening
and
it
was
really
high.
The
rgw
server
was
using
like
20
cordage
to
to
maintain
this.
So
that's
the
bad
news
is
that,
despite
the
fact
that
we
heroes
pretty
fast
word
we're
burning
to
a
ton
of
CPU
to
do
it.
So
it
is
what
it
is.
I
think
I
think
a
lot
of
our
students
with
reading
model.
You
know,
respond,
tons
and
tons
and
tons
of
threads.
A
So
definitely
we
were
seeing
with
from
perf
a
lot
of
fans
fence
in
the
network
seconds
to
the
web,
and
I
was
sort
of
as
well
just
rather
than
sit
I,
don't
totally
understand
how
r
ZW
30
north,
but
my
understanding
from
talking
to
matt
is
that
we
do
a
lot
of
work
actually
directly.
Most
of
it
looks
good,
so
yeah
I
guess
that's,
probably
probably
narrator.
Looking
to
but
yeah
generally
here
the
octopus
is
pretty
high
five
floors.
A
But
in
this
case
there's
only
there's
one
child
per
bucket
and-
and
you
kind
of
expect
that
that
having
lots
of
buckets
for
your
ears
pursuing
the
class
tomorrow
sees,
would
improve
things,
and
it
does
here
in
some
cases,
especially
when
you
have
large
chunks
with
new
store.
But
but
it
seems
that
in
other
cases
here
we
actually
see
more
of
a
suitable,
reverse
and
especially
kind
of
with
a
professor.
A
This
is
for
cycles,
maybe
and
form
a
large
emphasis,
so
I
don't
understand
that,
but
it
seemed
to
be
repeated
across
multiple
different
tests,
so
dad
there's
something
going
on
sizing
and
watch.
So
that's!
That's!
All
I've
got
right
now
because
and
interesting
I
apologize.
I
haven't
look
at
any
of
the
questions
that
came
over
on
the
on
the
chat
window,
so
I
will
suppose
no.
A: No, not in any particular test, because these tests were run for five minutes each rather than writing a specific amount of data. So, as you can imagine, some of the FileStore tests were doing really, really badly, like the one-bucket case, which in some cases was around 100 megabytes per second; that was far different than, say,
A
Blue
store
was
for
my
glide
chunks,
where
it
was
writing
out,
maybe
like
11
hundred
megabytes
per
second
again
say,
though,
that
in
the
nbme
test
I
was
getting
very,
very
close
to
some
pieces
to
filling
up
the
entire
cluster
with
data
with
how
fast
it
was
writing
data
part
as
I
was
because
raiders
bench
was
running
to
a
youth
cool.
On
the
same,
so
some
of
the
existing
data
was
still
kicking
around
some
from
the
CW
textbook.
B
What
hello
this
this
is:
Jaime
from
Nokia
yeah,
hello,
I'm,
still
investigating,
and
honey
and
I
want
to
know
why
we
are
having
lots
of
sweets
from
time
to
time
in
our
only
write
test.
Ok,
so
I
can
trying
to
advance
what
is
the
root
cause
of
having
this
kind
of
of
a
lot
of
REITs
together
being
only
test?
Writing
I
I.
Don't
really
understand
the
relationship
between
competition
on
there
on
the
rocks,
TV
database
and
the
impact
on
weight.
A: I think the suspicion that we had was that maybe the RocksDB data has spilled over onto the slow device because there wasn't enough space on the NVMe partition for the database. Maybe it was spilling that data over onto the OSD's data disk, and then you're doing reads when compaction happens, because RocksDB is reading data and moving it around. I think the space may have been insufficient.
A: If it spills over, I think there is a message in the logs that tells you that's happening, and if you see that, it would be an indication that, yeah, maybe that would explain why the reads would be coming from the other disk. I don't remember what the message is exactly, but I've seen it before. So that's something to look for in your log: anything in there that looks like it's talking about spilling data over because the database is full. Otherwise, yeah.
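(A minimal sketch of that log check; the exact wording of the spillover message is not known here, so the keyword below is only a guess:)

    import re

    # Scan an OSD log for anything that looks like DB data spilling to the slow device.
    pattern = re.compile(r"spill", re.IGNORECASE)  # guessed keyword, not the exact message
    with open("/var/log/ceph/ceph-osd.0.log") as log:
        for line in log:
            if pattern.search(line):
                print(line.rstrip())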
A: Spillover to the slow device wouldn't be a big deal in itself; the database is where you would expect those reads to be happening. But if we're getting reads that are not caused by compaction or spillover or things like that, I think that's new. We should ask Sage if he has any thoughts on it, but I don't think we have any expectation that it should be happening, so yeah, that's very interesting.
A: Okay, does anyone have any other things that they want to bring up this week?