Description
With the rise of cloud computing and big data, more and more companies are starting to build their own cloud storage systems. Swift and Ceph are two popular open-source software stacks that are widely deployed in today's OpenStack-based cloud environments to implement object and virtual block services. In this presentation, we do a deep study of both Swift and Ceph performance on commodity x86 platforms, including the testing environment, methodology, and thorough analysis. A tuning guide and optimization BKMs (best known methods) will also be shared for reference.
So here's the agenda. I will only do a very brief introduction, to save time. First I will talk about Ceph, and second I will talk about Swift, and in the end I will give you a summary. I will not compare Ceph to Swift head to head; I think both are really rising today. If we look at OpenStack, Ceph is more famous for the block service, and Swift is more famous for the object service, generally speaking.
So, I'm from Shanghai; I work for Intel, and we have a team there working on cloud technology. We started to look at OpenStack performance two years ago. This is my third conference, and I actually delivered another talk, about our benchmark tool COSBench, at the San Diego OpenStack conference. All the content here is actually still work in progress.
So there may be something wrong, and we would be glad if you gave us a lot of comments. I want to emphasize that this is really teamwork, because the team produced a lot of work, a lot of data, and a lot of environments. I also want to thank the people from Inktank and SwiftStack. We actually talked about what you'll see here, and they added their comments and gave us some hints, so we could do some adjustments and make sure things are reasonable.

I'm also trying to blog more details about this talk, because I have 40 minutes and we have so much data that I cannot cover all the details here. I'm trying to blog the material and try to explain it there. So if you like, there's a blog link, and you can go there to look for more details.
Okay, let's look at the Ceph part. Here's our testing environment. In general we are testing the volume mode, I mean block-level performance. We have four storage nodes, and all the network is 10 GbE, so we want to make sure there is no network bottleneck. Each node actually has one processor, 16 GB of memory, and ten 1 TB SATA disks connected via an LSI HBA in pass-through mode, with each disk serving one OSD, so ten OSDs per node. We also have three SSDs per node playing as the journal. If you know a lot about Ceph, you know Ceph has a special journal design, so it can make all the writes run faster. The three SSDs are just connected to the local SATA controller we have on the host.
This is the software configuration we used for all this testing. We stick to Ubuntu. The methodology of the test is: we have an OpenStack environment, we start a lot of virtual machines, and on each virtual machine we run a simple workload with FIO, with different workload patterns.

So here's the version matrix: the kernel version, the QEMU version and so on. The performance really depends on the kernel version and on the QEMU version, all these kinds of differences. On the host side we just enable jumbo frames to make sure we can get better sequential I/O.
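As a concrete illustration of that host-side tuning (the interface name `eth0` and the 9000-byte MTU are assumptions, not details from the talk), enabling jumbo frames usually comes down to raising the MTU on every hop:

```shell
# Raise the MTU to 9000 bytes (jumbo frames) on the storage NIC.
# NOTE: eth0 is a placeholder; every host NIC and switch port on
# the path must be configured for the same jumbo MTU, or large
# frames will be dropped.
ip link set dev eth0 mtu 9000

# Verify the new MTU took effect.
ip link show dev eth0 | grep -o 'mtu [0-9]*'

# To make it persistent on Ubuntu (as used in the talk), add
# "mtu 9000" to the interface stanza in /etc/network/interfaces.
```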
There are several ways you can test Ceph performance. You can test directly from the host, or you can also test from inside a VM. We tried to understand it from the customer's point of view: we think the most common usage model is actually something like Amazon EBS. They mount a volume into their virtual machine, and inside the virtual machine they start all the workloads and do all the I/O. So here we actually use RBD mode, and we go through QEMU RBD. For the workload, we use FIO, and we tried four different workload patterns: sequential read and write with a 64 KB block size, and random read and write with 4 KB. So in general, four different usage modes.
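A minimal FIO job file matching the four patterns just described (the device path and runtime are assumptions; inside the guest, the attached RBD volume typically shows up as a plain block device such as `/dev/vdb`):

```shell
# Write a FIO job file covering the four workload patterns from
# the talk: 64KB sequential read/write and 4KB random read/write.
cat > ceph-volume.fio <<'EOF'
[global]
# placeholder: the attached RBD volume inside the VM
filename=/dev/vdb
# bypass the guest page cache
direct=1
ioengine=libaio
runtime=600
time_based=1

[seq-read]
rw=read
bs=64k

[seq-write]
rw=write
bs=64k

[rand-read]
rw=randread
bs=4k

[rand-write]
rw=randwrite
bs=4k
EOF

# Each pattern would normally be run on its own, e.g.:
#   fio --section=rand-read ceph-volume.fio
grep -c '^\[' ceph-volume.fio   # prints 5 (four jobs plus [global])
```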
While testing, we look not only at the throughput of the whole cluster. The other thing we pay a lot of attention to: we want to make sure each volume provides enough quality of service. So we defined some quality-of-service requirements. The first one is for random I/O, on the single-I/O latency number: we always want each I/O to return quickly, so we want to make sure that all the random read/write latencies are less than 20 milliseconds.

So that's one QoS requirement. The other thing: we actually did some research on Amazon EBS. We tried to understand what kind of performance Amazon Web Services EBS can provide. In general, we started a number of VMs and ran tests for seven days; during the seven days we started a lot of VMs, attached different volumes from time to time, ran each VM for two hours, collected all the performance data, and tried to understand whether the EBS volumes ever failed to perform. In general, the thing we got is this: if you look at what Amazon says, they claim a common EBS volume will provide something like 100 IOPS; they don't mention latency, and they also don't mention the sequential bandwidth. Based on our testing, I think in general their performance is pretty well qualified against the SLA they claim.
So we set some goals here. We want to make sure that each volume can provide 100 IOPS with latency less than 20 milliseconds; that's for random I/O. On the other side, we want to make sure each volume can provide something like more than 60 MB/s of sequential bandwidth. So that's how we set the QoS targets. In contrast, Ceph, as far as I know, doesn't have a very good isolation design: it cannot make sure that if one user puts on a lot of pressure, the others are not affected. For example, if you use OpenStack, usually what you do is use cgroups; with cgroups you can control how much I/O and how much bandwidth each VM can consume. But here we don't use cgroups; we just reuse some functionality from FIO.
FIO actually provides a feature where you can set the maximum bandwidth and the maximum IOPS of the I/O it generates. So here, when we do the random I/O test, we set 100 IOPS as the max throughput target, and 60 MB/s as the target for sequential I/O. Based on that, as we gradually increase the VM number, we can predict what to expect.
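The per-volume caps described here map directly onto FIO's built-in rate limiting (the option values below come from the targets in the talk; the job names and mixed-workload choice are placeholders):

```shell
# Cap each volume at the QoS targets from the talk using FIO's
# rate-limiting options, so every VM generates at most this load.
cat > qos-caps.fio <<'EOF'
[rand-capped]
# at most 100 IOPS of 4KB random I/O per volume
rw=randrw
bs=4k
rate_iops=100

[seq-capped]
# at most 60 MB/s of 64KB sequential I/O per volume
rw=rw
bs=64k
rate=60m
EOF

grep -c 'rate' qos-caps.fio   # prints 2: one cap per job
```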
We can predict that when we have a lot of VMs, the per-VM throughput will eventually go down. So we look at it like this: we want to make sure that the average per-VM performance is larger than ninety percent of the defined target. That means for sequential I/O, 54 MB/s, and for random I/O, 90 IOPS. So those are the two QoS criteria we defined for this testing.
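The two pass/fail thresholds are just 90% of the per-volume caps; a one-liner sketch of the arithmetic:

```shell
# 90% of the per-volume targets: 100 IOPS random, 60 MB/s sequential.
iops_floor=$(awk 'BEGIN { printf "%d", 0.9 * 100 }')
bw_floor=$(awk 'BEGIN { printf "%d", 0.9 * 60 }')
echo "random floor: ${iops_floor} IOPS"     # random floor: 90 IOPS
echo "sequential floor: ${bw_floor} MB/s"   # sequential floor: 54 MB/s
```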
Now it's the fun part. Let's look at the random read performance. The x-axis is actually the VM number, which is also the volume number: in general we create N VMs and attach a different volume to each one. The left side is the per-VM performance, that is, how much throughput each volume, each VM, gets; the right side is the aggregated total performance. The maximum number we get is actually about 4600 IOPS, and that is at 80 VMs. But remember we have the QoS requirement: we want to make sure the per-VM performance is larger than ninety percent of the predefined target. If we take that into account, we can see that when we only scale to 30 volumes, the per-VM IOPS already drops to 95.
A
So
we
want
to
me
I
wish
you
more
data
regarding
to
latency.
You
can
tell
why
we
take
the
30
p.m.
that's
the
other
with
it,
because
in
this
period
each
other
yeah
I
know,
but
because
I
think
that
the
mouth
of
really
is.
I
want
to
make
sure
I
think
if
we
want
to
offer
some
ebf
service
to
custom,
we
want
to
make
sure
that
our
cluster
is
not
over
commitment.
So
we
want
to
make
sure
we
meet
SLA.
You
know,
wait
yeah,
yeah,
that's
that's
what
we
did.
A
Then we did some summarizing at the end of what may be the bottleneck on the Ceph side. This is random write, and the curve is a little different compared to read. Actually, sorry, maybe something is wrong with this figure; this is the latency, actually. You can see that at the beginning the performance is pretty good, because the latency is very small; that's because most of the writes actually hit the SSD journal. With Ceph on XFS, the writes first go to the journal, and Ceph acks to the client as soon as the journal write completes, which makes the writes look fast. But when more and more pressure builds up, one thing happens: Ceph must also flush all the data from the journal to the data disks, I mean, really write it to the disks. So you can see there's a big jump: when you move from 30 VMs to 40 VMs, the latency jumps very quickly, and the per-VM performance also drops very quickly. So based on our QoS we also picked that VM number as the peak performance point for the cluster.
For sequential read we capped every volume at 60 MB/s, so you can see that when you increase the VM number you can still get the aggregate going up, but it flattens out because we have the predefined cap. What we can see is that at a VM number of 40 the performance is fine, but when the VM number goes to 50, the per-VM bandwidth drops to 49 MB/s, which is already below the pre-calculated QoS number. So we picked a VM number of 40 as the peak performance point. And this is sequential write. We didn't see the SSD journal benefit on sequential write; that's because it very quickly uses up all the journal space in the sequential write case.
In this case, if you understand Ceph, you know that for every read, Ceph just does one physical read, which goes through the primary replica. But for a write, if you use a replication factor of two, there are actually two physical writes that happen. So in general, if you compare read and write, a write consumes twice as much disk I/O bandwidth and twice the IOPS compared to a read. So in this case, even so, we still did not meet the target.
This is the interesting thing: looking at all this latency. We have six lines here, and one thing we found very interesting is that the latency really has a strong dependence on the queue depth. Queue depth is a parameter you can use in FIO to adjust how many I/Os are in flight before they commit. And we found, we think, there's still some back-off or something we can do better on the client side. One thing we found is that on the client side, sometimes there is only one outstanding write; if you have a very long queue depth in the implementation, most of the I/Os pile up on the client side. So I'll just show you some pictures to let you know what the latency looks like. You can see there are several lines; let's look at the random case first.
Yeah, this one is random read, and this one is random write. You can see that the latency jumps very quickly, because they all see the journal flush impact. So in general the latency is okay for the random case: at the beginning the latency actually starts from something like 10 milliseconds, and as you add more pressure it gradually increases.

We also did some latency breakdown; we tried to understand where the latency goes. For the random read operations, the latency is pretty good: we observed that most of the latency goes to the disk. We measured the latency on the FIO side, that is, the latency seen inside the VM from FIO, and on the other side we used iostat to measure the latency on the storage node. So really, Ceph did a pretty good job.
It delivers almost local-disk read latency: we can see something like one millisecond, or two milliseconds at most, because there's almost no spindle seeking happening. But in the other cases something different is happening, so we tried to understand why. We did some blktrace analysis to try to understand the I/O pattern. In general, Ceph tries to distribute all the I/O into different objects across the whole cluster. So in theory, if you have a logical sequential read or write happening on a virtual disk, in the end, on a physical node, all this I/O will become random across all these disks.
So there are two figures. In the first one we start 40 VMs all doing sequential I/O. The red part is all the I/O that happens one right after the other; that's real physical sequential I/O. But the blue part shows that even when all you issue is sequential I/O, there is still some part, maybe twenty percent, that goes random. And if you mix random and sequential together, that's the second figure: 40 VMs in total, twenty doing random and twenty doing sequential, and you can see that the blue part becomes even larger. So that's what we observed.
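The red/blue split above is essentially a sequentiality classification over a block trace: an I/O counts as sequential when it starts at the sector right after the previous one ends. A toy sketch of that classification (the six sample `sector length` pairs are invented, not taken from the talk's traces):

```shell
# Classify block I/Os as sequential vs random, the way the
# red/blue breakdown in the figures is computed.
# Input: one "start_sector length_in_sectors" pair per line,
# in submission order (sample data below is made up).
result=$(printf '%s\n' \
    '0 8' '8 8' '16 8' '1000 8' '1008 8' '5000 8' |
  awk '{
         if (NR > 1 && $1 == end) seq++   # starts where the last one ended
         end = $1 + $2; n++
       }
       END { printf "%d/%d sequential", seq, n }')
echo "$result"   # 3/6 sequential
```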
This one is also interesting. That was all with SATA disks; we also tried SSD, and in general Ceph is a pretty good fit for SSD. For this result we actually used four SSDs in a single node and ran all the performance testing there. You can see that if we set the latency QoS at one millisecond, we can get something like fifty-five thousand IOPS from one node, and if you relax the QoS a little, you can get almost 80K at two milliseconds. So that's pretty good, actually.
This is the maximum data we measured without considering the QoS requirement, and this is the result if we do consider the QoS requirement. And this is the theoretical number: if we just think about the disks, how much bandwidth and how many IOPS the disks can provide. We did some testing; the disk model is a Seagate ES enterprise SATA disk. In general it can provide something like 90 MB/s of sequential bandwidth per disk, and it can also provide something like 160 IOPS. Based on this data, we calculated the theoretical disk throughput for the whole cluster. Remember we have 40 disks, and we also have to consider that writes are replicated twice, so we just halve the maximum throughput for writes.
So you can see what limit the disks may impose, and we also considered the network: for each node we have a 10 GbE network, so with four nodes we have four times 10 GbE in total. So for small I/O the network can do a lot, but for big I/O we assume we can get at most something like four thousand MB/s out of it. So we pick the smaller of these two as the theoretical system performance ceiling and calculate the efficiency.
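A back-of-the-envelope sketch of that ceiling, using the numbers quoted above (40 disks, roughly 90 MB/s and 160 IOPS per disk, 2x replication on writes, and about 4000 MB/s of aggregate 10 GbE network; the journal traffic is ignored since it lands on the SSDs):

```shell
# Theoretical ceilings for the 4-node / 40-disk cluster.
disks=40; seq_mbs=90; rand_iops=160; replicas=2; net_mbs=4000

disk_read_mbs=$((disks * seq_mbs))             # 3600 MB/s sequential read
disk_write_mbs=$((disk_read_mbs / replicas))   # 1800 MB/s: every write hits 2 disks
rand_read_ceiling=$((disks * rand_iops))       # 6400 IOPS random read

# The system ceiling is the smaller of the disk and network limits.
read_ceiling=$((disk_read_mbs < net_mbs ? disk_read_mbs : net_mbs))
echo "sequential read ceiling: ${read_ceiling} MB/s"     # 3600 MB/s
echo "sequential write ceiling: ${disk_write_mbs} MB/s"  # 1800 MB/s
```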
So I think Ceph is very good on random I/O, and personally I think it's pretty good overall; there's still something, maybe, we can do more on the sequential side. I just talked with the Inktank people these days, and with some testing they can improve that performance by something like fifty percent. Thanks to Sage, who gave me some other hints as well.
We hope we can work on that to make this better in the future. On the other side, let's compare SATA and SSD. In general, you can see that for a traditional SATA disk you have a very big space, but the IOPS is pretty low; if you use just SSD, the performance is pretty good, but the space becomes the issue. So in general we think maybe we can mix SSD and SATA together.
That's a better solution. So, in summary, for Ceph: random is pretty good; sequential, we are still trying to work on that. And this is what we will continue to do: currently we are just working with FIO, so we will try to do more real workloads, starting from small ones like sysbench and gradually moving to some complex enterprise workloads. The reason we do that is to try to understand what latency really means for application performance.
[Audience question] I think Ceph did a good job there, yeah. His question, and I'm not sure I caught all of it, but I tried, is something like this: if we lay the data out in a different way, will the rebalancing affect or impact the performance? I think Ceph did a pretty good job there: their CRUSH algorithm distributes all this stuff well. But one balancing thing we do these days is a special tuning of the default layout.
A
Actually,
if
you
create
a
pool
and
you
create
a
volume
right,
for
example,
in
this
case,
we
have
one
pool-
and
you
know
in
this
pool.
Actually
we
have
40
disc,
and
if
you
create
your
volume
in
inside
this
pool
you
all
your
data
will
distribute
it
to
all
these
kind
of
40
disc.
So
sometimes
that's
not
a
very
good
design.
There
are
two
liter
force
wiser
if
you
create
a
40
volume
in
this
simple,
so
the
possibility
for
all
these
kind
of
different
volume
traffic
will
be
together.
A
That
will
become
all
these
traffic
become
more
out.
You
know
fragment
and
the
random
the
other
things
if
you
create
put
all
these
kind
of
disk
in
one
pool
and
if
you
have
one
disc
of
you
actually
impact
a
lot
of
volume,
so
we
do
some
tuning
there.
We
can
not
think
that
you
know
you
can
quit
more
pool
it's
a
little
Dimitri
to
work.
For
example,
in
this
case
we
have
44
note
for
story
in
all
the
right,
so
we
just
pick
each
disk
around
each
pool.
Are
you
should
know
that
and
accretive
14?
A
A
A
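A rough sketch of that pool-per-disk layout (the pool names, placement-group count, and loop bounds are placeholders; the CRUSH rule that pins each pool to a single OSD still has to be created separately, which is the tricky part mentioned above):

```shell
# Hypothetical sketch: one pool per disk for a 40-OSD cluster.
# Each pool would then be tied to one OSD via its own CRUSH rule
# (rule creation omitted here), so one volume's traffic stays on
# one disk and a single disk failure touches fewer volumes.
for i in $(seq 0 39); do
  ceph osd pool create "disk-pool-${i}" 128
done
```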
Yes, so for Swift we actually have another cluster. It's a very common setup: we have ten storage nodes and two proxy nodes, and in front of the proxy nodes we actually have HAProxy to do all the load balancing and some more things. For each storage node we have one processor, 16 GB of memory, and one quad-port NIC. Swift actually has a pretty good design here: you can bind several IPs to one server. Originally we used bonding, but the bonding performance is not very good; binding is better. And each storage node has its SATA disks plus one SSD; the SSD is used to hold all the container and account stuff.
This is the configuration we use; we listed the Swift configuration, and almost all of the settings are here. We have uploaded it, so you can check it later. And this is the methodology. The workload tool is COSBench, developed by Intel. We already open-sourced it; we introduced COSBench one year ago, also at an OpenStack conference. COSBench already supports the Swift RESTful API, and we also support S3, and it also supports Amplidata; I'm not sure how many of you know Amplidata. A lot of people are trying to use it, and if you feel interested you can go to the website. And we did two kinds of testing.
The first one we call small-scale testing; the second one we call larger-scale testing. The small-scale testing is pretty small: we only create a few containers, with a few objects in each container, so we just try to understand what's the best performance we can get when there is only very little data. For the larger-scale testing we actually create much more. We have two different object sizes: the small object is 128 KB, and the large one is 10 MB. We have a different layout for each: for the small objects we actually create a lot of them, and for the large ones we don't have so much disk space, so we create ten thousand containers with one hundred objects in each container. We deliberately create more containers: some people told me that if you have more containers, you put more pressure on the container service, so that's the reason we create more containers relative to objects.
Each run ramps up for 300 seconds and then measures for a fixed interval. We also define some QoS here, because we think the latency is very important. In general we want to make sure we get the first byte back in something like less than 200 milliseconds, so the QoS latency bound is equal to 200 milliseconds plus the object size divided by 2 MB/s. In general, if you have a big object, it takes a long time to transfer all the data, right?
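Plugging the two object sizes from the test into that bound (200 ms first-byte budget plus transfer time at the assumed 2 MB/s rate):

```shell
# Swift QoS latency bound: 200 ms first-byte budget plus transfer
# time at an assumed 2 MB/s per-object streaming rate.
bound_ms() {
  # $1 = object size in KB
  awk -v kb="$1" 'BEGIN { printf "%d", 200 + (kb / 2048) * 1000 }'
}
small=$(bound_ms 128)      # 128 KB object
large=$(bound_ms 10240)    # 10 MB object
echo "128KB bound: ${small} ms"   # 262 ms
echo "10MB bound:  ${large} ms"   # 5200 ms
```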
So the small-scale result is pretty good, right? All the throughput and all the latency are almost perfect, and we can saturate the CPU. Especially for small objects, the bottleneck for small-object handling is the CPU, so we almost fully use all the CPUs; for the larger objects the bottleneck is the network, because we only have the two 10 GbE links. That's the bottom line there. So let's look at what happens when we increase the scale. The big objects are actually still pretty good.

Actually, I think a lot of people in the community talk about this these days. In general, when we compare the large scale and the small scale, you can see that the I/O pattern changes a lot. I'm showing something like the latency over time, and on the other side the typical I/O size for reads and writes.
It changed a lot, and we also used blktrace to capture what happened. In general, the thing that's happening is metadata; that's file system overhead. There are so many inodes there, and all this metadata information cannot be cached in memory, because the memory is not large enough. So in that case Swift must hit the disk for all these inode and metadata reads.

One thing you can do is have a bigger memory. This is a test of that: the green one is the small-scale result, which is the perfect target; the blue one is the large-scale test; and the red one is actually where we did some preloading. You can see that if you have enough physical memory, as time goes on you can cache most of the inodes and the metadata information in memory.
With everything cached in memory, the performance gets very close to the small-scale case. The other thing: if you don't want to wait, you can do some preloading with OS commands. You set vm.vfs_cache_pressure to 1, and do something like an `ls -R` to make sure all the inode information gets cached. So if you do the prefetch and preload, the performance is actually good enough.
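The preload trick described above, as a sketch (the mount point `/srv/node` is a placeholder for wherever the Swift object disks are mounted, and the sysctl needs root):

```shell
# Keep inode/dentry caches warm for Swift metadata.
# vfs_cache_pressure=1 tells the kernel to strongly prefer keeping
# inode and dentry caches over reclaiming them.
sysctl -w vm.vfs_cache_pressure=1

# Walk the object tree once so every inode is read into the cache.
# /srv/node is a placeholder for the Swift storage mount point.
ls -R /srv/node > /dev/null
```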
This is the second thing we tried: we used SSD with flashcache. I'm not sure how many of you have heard about flashcache; flashcache is a piece of Facebook open-source software that takes an SSD as a cache in front of a disk. So we used flashcache to make sure the SSD caches the inode and metadata information, and that can actually improve the performance a lot: we can get something like a fifty percent to ninety percent performance improvement compared to having no SSD. But there's still a big gap compared to the perfect case, so we think maybe we can do something more.
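For reference, a flashcache device is built roughly like this (the device names and mount point are placeholders, and `-p back` selects write-back mode; the exact flags may differ between flashcache versions, so treat this as a sketch):

```shell
# Hypothetical sketch: put a flashcache SSD cache in front of a
# Swift object disk, then mount the resulting cached device.
# /dev/sdb = SSD (cache), /dev/sdc = SATA disk (backing store).
flashcache_create -p back cachedev /dev/sdb /dev/sdc
mount /dev/mapper/cachedev /srv/node/sdc
```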
So in general, for Swift, big objects are okay, and small objects need some tuning. The first thing: in the latest XFS you can use a smaller inode size than the default; the suggestion is that we can go to a smaller one, something like 256, but you can only use that with the latest XFS. On the old kernels, we are still trying to do some testing; it's work in progress.
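Setting the XFS inode size happens at mkfs time; a sketch (the device name and label are placeholders, and 256 bytes is the value suggested in the talk):

```shell
# Hypothetical sketch: format a Swift object disk with a smaller
# XFS inode size (256 bytes), so more inodes fit per metadata read.
mkfs.xfs -f -i size=256 -L swift-obj /dev/sdc
```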
The second thing: some people talked with us about using a different file system, but I don't know; my customers don't do it like that. The third thing: there's also some discussion, I think, in the Swift community about how to handle small objects. The discussion goes a lot like: let's use LevelDB, let's use something different, let's just not use the file system. And meanwhile, systems like Haystack or TFS actually just add an extra layer: they combine your small objects into a big one, use an index, and reduce the I/O to the physical disk. I think Swift today actually has a one-page blueprint proposal for that.
So, I really like these two pieces of software. Swift is very simple and easy to use; Ceph has a grander architecture, and I think their performance is okay, is good, but there's still a lot of homework to do. We hope, if you guys want to work on this together, we can work together to make this better. That's all. Thank you.