Cephalocon APAC 2018
March 22-23, 2018 - Beijing, China
Orit Wassermann, Red Hat RGW Core Developer
Matt Benjamin, Red Hat RGW Lead
So what is object storage? It's something between block storage and a file system. It was designed for large scale: a large number of objects and a large amount of data. We have a flat namespace: objects are organized in a bucket, or container, and buckets cannot contain other buckets, so we have one level. But if you need to preserve some hierarchy, you can actually use a prefix in the object name. Every object also carries metadata; it's more than just the data.
You cannot modify an object in place; to change it, you need to rewrite the whole object. But you can read parts of the object; you don't have to read the whole object.
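Both ideas, the prefix trick for pseudo-hierarchy and partial reads, map directly onto the S3 API. A minimal sketch (the bucket, folder, and key names are placeholders, not from the talk; `s3` stands for any S3-compatible client, e.g. boto3 pointed at an RGW endpoint):

```python
def key_with_prefix(folder, name):
    """Fake hierarchy in a flat namespace by prefixing the object name."""
    return f"{folder}/{name}"

def list_folder(s3, bucket, folder):
    """List one pseudo-folder of a flat bucket using Prefix/Delimiter."""
    return s3.list_objects_v2(Bucket=bucket, Prefix=folder + "/", Delimiter="/")

def read_first_kib(s3, bucket, key):
    """Read only the first kilobyte of an object via a ranged GET,
    instead of fetching the whole object."""
    resp = s3.get_object(Bucket=bucket, Key=key, Range="bytes=0-1023")
    return resp["Body"].read()
```

For example, `list_folder(s3, "demo", "photos")` would return only the keys under the `photos/` prefix, even though the bucket itself is flat.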
Now we're going to talk about a very, very nice feature of object storage that makes it very useful, but first we have to talk about cloud object storage. Object storage was invented long before the cloud was there, but it didn't catch on; people went with block and file systems. What changed is that we got the cloud, and Amazon, the first in the cloud, had an object store called S3. What they added is the way we access storage: we use a REST API, and that is the natural way to access data in the cloud. Since then there has been growth, cloud storage is very popular, and especially S3.
B
One
thing
we
need:
we
need
to
write
to
the
storage
and
in
objects,
so
what
we
call
it
a
clock
because
we
use
HTTP
and
we
need
to
a
very,
very
large
object.
We
need
to
handle
cases
where
network
error
and
for
that
we
have
a
different
way
to
upload
the
object.
It's
called
moody
part
upload
and
once
it
means
we
take
a
big
object
for
one
jjigae
and
we
divided
in
two
parts
and
we
upload
each
part
separately
and
only
we
will
be
done
and
we
show
all
the
data
is
there.
B
We
complete
the
upload
and
only
then
the
object
will
be
created
if
something
happened
and
we
didn't
complete
that
love
the
object,
search
would
actually
clean
or
the
temporary
data
we
stored.
It
helps
them
to
recover
from
Network
Arab
certified
athletic,
divided
the
object
for
megabytes
and
in
one
chunk,
I
felt
I
can
just
upload
it.
I
can
stop
resume
later.
I
can
even
upload
an
object,
I,
don't
know
its
final
size.
I
can
use
it,
for
example,
for
streaming
video,
the
transaction.
B
If
in
a
fast
system,
we
would
create
a
temporary
file
that
we
named
it?
That
is
very,
very
expensive
in
ellipsoid,
because
there's
no
in
a
we
access
the
object
bytes
today.
So
it's
actually
copying
the
data
and
the
reading
the
whole
object.
I
can
use
multi
Parata
to
actually
do
a
transaction
like
I
will
do
a
multi-part
to
the
temporary
object
and
when
I
want
to
finalize
the
reduction,
I
will
complete
it.
B
B
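The multipart flow described above can be sketched against the S3 multipart API like this (a sketch, not the speakers' code; `s3` is any S3-compatible client, and the bucket and key names are made up):

```python
def split_parts(total_size, part_size):
    """Return (offset, length) tuples covering total_size bytes in order."""
    parts, offset = [], 0
    while offset < total_size:
        length = min(part_size, total_size - offset)
        parts.append((offset, length))
        offset += length
    return parts

def multipart_upload(s3, bucket, key, data, part_size=8 * 1024 * 1024):
    """Upload data in parts; the object only comes into existence when
    complete_multipart_upload succeeds, which is what gives the
    transaction-like behavior described in the talk."""
    mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
    done = []
    for i, (off, length) in enumerate(split_parts(len(data), part_size), 1):
        # Each part is uploaded separately; a failed part can simply be retried.
        r = s3.upload_part(Bucket=bucket, Key=key, PartNumber=i,
                           UploadId=mpu["UploadId"],
                           Body=data[off:off + length])
        done.append({"ETag": r["ETag"], "PartNumber": i})
    s3.complete_multipart_upload(Bucket=bucket, Key=key,
                                 UploadId=mpu["UploadId"],
                                 MultipartUpload={"Parts": done})
```

If the process dies before the final call, no object appears, and the stored parts can be cleaned up (or the upload aborted).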
A nice example is versioning, for example for my document revisions: I can keep the old revisions and go back in case I want to restore an old version. Versioning is not enabled by default, because it wastes space, and space is expensive.
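Since versioning is off by default, it has to be switched on per bucket; a sketch over the S3 API (`s3` is any S3-compatible client, the bucket name is a placeholder):

```python
# Versioning is opt-in per bucket, matching the point above that it is
# off by default because old versions consume space.
VERSIONING = {"Status": "Enabled"}  # "Suspended" stops creating new versions

def enable_versioning(s3, bucket):
    """Turn on versioning for one bucket."""
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration=VERSIONING)

def list_revisions(s3, bucket, key):
    """List all stored versions of one object, newest first,
    e.g. to pick an old revision to restore."""
    return s3.list_object_versions(Bucket=bucket, Prefix=key)
```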
For that we actually have lifecycle, which is an automatic way to do object transitions. The most common one is expiration: I can say I want this object deleted on some date, or more usually at some age, and the object store will delete objects automatically.
I can say: in this bucket, I don't want to keep object versions that are older than six months, and the object store will automatically delete those old versions. I can even use it for tiering, so I can say: if I have objects older than two years, I want to move them to a cold tier that is less expensive. At the moment RGW supports only the expiration.
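The expiration rules just described are expressed as a lifecycle document on the bucket. A sketch of one hypothetical configuration (the rule ID, prefix, and numbers are made up for illustration): expire current objects after 30 days and delete noncurrent versions older than roughly six months.

```python
# Hypothetical lifecycle rules: expire current objects under "logs/" after
# 30 days, and delete noncurrent (old) versions after 180 days.
LIFECYCLE = {
    "Rules": [
        {
            "ID": "expire-old-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Expiration": {"Days": 30},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 180},
        }
    ]
}

def apply_lifecycle(s3, bucket):
    """Install the lifecycle rules; the object store then deletes
    matching objects automatically, with no client involvement."""
    s3.put_bucket_lifecycle_configuration(Bucket=bucket,
                                          LifecycleConfiguration=LIFECYCLE)
```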
We want to be able to scale, and RGW is almost stateless, so in case one instance dies, oops, you can spin up a new instance of the gateway on the same Ceph cluster, and it will serve the users the same data. We need to support a REST API, so that means we need an HTTP server, and there are two ways you can set up a web server with RGW.
RADOS objects are limited in size; the default is four megabytes, but we're talking about object storage, and we're talking about really, really large objects. The limits follow AWS, and we use the same defaults: the biggest object you can upload is five terabytes, and a regular single upload is up to five gigabytes. So an RGW object can be a big, big object, and that means we need to stripe it, and it will be built from several RADOS objects. So every RGW object has a head object.
If we talk about naming, accessing the head object is very fast if you know the bucket ID and the object name. We need to include the bucket ID in the name because there are corner cases: when you create and delete a bucket, and a second later create a bucket with the same name, we need to tell those apart. The tail objects also have very simple naming conventions, so that, for example, we can find each tail object we need and read it quickly, with one access to RADOS.
An object store is required to index the bucket: when the user needs a listing, he specifies the bucket he wants to see and gets the list of all the objects in the bucket. That is the bucket index. It is kept in RADOS objects that contain a list of all the objects in that bucket. But we talked about large scale, and we can have millions, actually even billions, of objects in a bucket, so one RADOS object cannot handle that load.
So Yahoo helped us and added bucket index sharding. When you create the bucket, you can say how many shards you want for the bucket index, and we keep several RADOS objects holding the bucket index. But it was static: you don't always know how many shards you are going to need for a bucket. You may think you will need only one, and in the end the bucket grows, and even more. So we then added offline resharding, which allows users to change the number of shards of a bucket.
Offline resharding is manual: you reshard the bucket yourself to increase its number of shards. In Luminous we also have dynamic resharding: RGW checks the ratio between the number of shards of a bucket and the number of objects, and when it sees that it is not good and the shard count needs to increase, it will reshard the bucket automatically. The limitation is that dynamic resharding doesn't play well with multisite at the moment, but we're working on fixing that, so for multisite you can use manual resharding.
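The idea behind the sharded bucket index is simply that each object's index entry lives on one of N index shard objects, chosen by hashing the object name. An illustrative sketch only; RGW uses its own internal hash, not this one:

```python
import zlib

def shard_for(object_name: str, num_shards: int) -> int:
    """Map an object name to a bucket-index shard.
    Illustrative hash (CRC32); RGW's real hash function differs."""
    return zlib.crc32(object_name.encode()) % num_shards

# Resharding means recomputing this mapping with a larger num_shards and
# moving each index entry to its new shard object, which is why it is an
# expensive operation on a large bucket.
```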
Every time we access the storage, we use an HTTP request. The first thing we have to do is authenticate the request; for that we need the user's information, his access key and secret. We also need the bucket info to convert a bucket name into a bucket ID, and the bucket info also maps to the names of the shards for the bucket index and other information. But we want performance, and if every one of these lookups went to RADOS every time, it would waste performance, so we cache them.
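The key-and-secret authentication mentioned above is, in the older S3 scheme (AWS Signature Version 2, which RGW supports alongside v4), an HMAC-SHA1 over a canonical string built from the request. A minimal sketch of just the signing step (how the canonical string is built is omitted):

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key: str, string_to_sign: str) -> str:
    """AWS Signature v2: Base64(HMAC-SHA1(secret, StringToSign))."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# The server looks up the secret for the access key in the request,
# recomputes the same signature, and rejects the request on a mismatch.
```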
That is the RGW metadata cache. We usually have several RGWs reading and writing the same data, and we have to synchronize them; for that we use watch/notify. Every time a writer changes one item that is in the cache, the other RGWs get a notification and know to invalidate their cache. In the last few months we have noticed some problems with the coherency of the cache and its performance, and there is lots of work, which Mark talked about, on improving it.
Multisite replication allows us to replicate the data between different Ceph clusters. The clusters can be far apart, and because of that the data replication is completely asynchronous. We have a small number of metadata operations that are supported, for example adding a new user, deleting a user, creating a bucket, deleting a bucket. You can configure the replication to be active-active or active-passive, and you can decide on the synchronization per bucket. Earlier we would replicate everything, but now you can have better granularity.
This is for our metadata search. We are a storage system, not a search tool, so we use ElasticSearch, which is very good at search. We export our metadata to a different cluster that ElasticSearch accesses as its back-end; the user's queries execute in ElasticSearch, and then he gets the list of the objects he wants.
B
We
also
have
an
effect
NFS
with
the
faculty
call,
but
we
have
benefits
on
where
the
schedule
a
we
have
a
library
called
Libre
GW,
which
is
like
an
instance,
for
instance,
about
this
gateway
between
that
it's
phantom,
and
it
also
has
a
layer.
We
what
you
have
thought
that
word
world
between
objects,
operation
and
Paulo
coalition
for
any
facts
we
use
NFS
Ganesha
to
implement
denim
is
Portico.
So
so
you
run
an
NFS
Ganesha
process
with
Lee
Belgium
Avenue,
and
then
you
can
access
for
the
skate
by
using
the
NFS
protocol.
B
B
In Luminous we also added support for bucket policies. ACLs are not enough: we want much greater granularity, and we want to share data between different buckets and between different users. With bucket policies we can allow access from different accounts to the same data.
B
No
me
was
you
that
you
can
restrict
actually
access
to
a
list
of
specific
LP.
It's
very
important
to
keep
your
data
safe,
actually
I.
Think
last
week,
or
maybe
two
weeks
ago,
Walmart
dangled
something
to
me
at
the
public
bucket
available
US
public
protective
AWS
as
filming
everybody
can
access
the
data.
So
it's
very
important
to
make
your
data
safe.
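An IP restriction like the one just mentioned is expressed as a condition in a bucket policy. A sketch (the bucket name and CIDR range are placeholders; `s3` is any S3-compatible client):

```python
import json

def ip_restricted_policy(bucket: str, allowed_cidr: str) -> dict:
    """Deny all S3 actions on the bucket unless the caller's IP
    falls inside allowed_cidr."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "IPAllow",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{bucket}",
                         f"arn:aws:s3:::{bucket}/*"],
            "Condition": {"NotIpAddress": {"aws:SourceIp": allowed_cidr}},
        }],
    }

def apply_policy(s3, bucket, allowed_cidr):
    """Install the policy; the policy document is sent as a JSON string."""
    s3.put_bucket_policy(Bucket=bucket,
                         Policy=json.dumps(ip_restricted_policy(bucket,
                                                                allowed_cidr)))
```

A Deny with `NotIpAddress` is the usual pattern: everything outside the allowed range is blocked regardless of what other grants exist.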
Civetweb, the embedded web server we use: it provides a lot of tight integration with the RGW code base, but it has a number of limitations that have surfaced over time, the big one being that it uses a thread-per-connection request model, which limits the number of connections that can realistically be served by a single RGW, and it's one of the things that's causing sites to need a large number of RGWs in their cluster in order to scale.
Boost.Asio is a very well-known programming framework for asynchronous TCP application development; it's part of the Boost C++ API. There is new work from the Boost community to develop a full-fledged HTTP implementation that cooperates with it, called Beast, and our team is building on this.
The current front end has bottom-half scalability issues: the server runs lots and lots of POSIX threads. With the new model, the intent is to condense into a small number of cooperating threads and to remove the context switching and transitions between the top half and the bottom half; there will be only one half. Secondarily, we're introducing a scheduler.
The intent is that, in addition to handling large numbers of connections, we'll be able to manage the incoming workload much more intelligently: we'll be able to classify it into a variety of buckets, for example reads versus writes, and prioritize some traffic over other traffic.
For example, a workload can fill all available work slots with heavy writes of large objects, and in that environment the server becomes less responsive for other work. The solution is a kind of quality-of-service management framework, and the scheduler's job, as part of this process, is going to be to deal with that.
In the first round of work, a large amount of workload testing and analysis was done, partly by two members of my team, together with other colleagues, and we did various consistency bug fixes.
So, for example, when connecting traffic from different RGWs, the cache will now recognize when its cached data is no longer valid, re-establish it, and in that case return clients the correct data.