From YouTube: 2015-MAR-26 -- Ceph Tech Talks: RGW
Description
A detailed look at the internals of the Ceph RADOS Gateway.
http://ceph.com/ceph-tech-talks
A: All right, welcome back everyone to the third installment of the Ceph Tech Talks. If you missed either of the first two, there's the talk on RADOS by its tech lead Sam Just and the RBD talk by Josh Durgin. Those are both up on the Ceph YouTube channel, so don't miss those!
A: If you're viewing this on YouTube and would like more information on how to join these live, we're at ceph.com/ceph-tech-talks, which has all the information you need to join the next one, which I believe will be on the 23rd of April.
A: I will be talking about Calamari then. These are typically on the fourth Thursday of the month, from 1 to 2 p.m. Eastern Standard or Daylight Time, on BlueJeans, the video conferencing tool that we use for Ceph Developer Summits and other things.
A: So today's talk topic is going to be the RADOS Gateway. Its tech lead, Yehuda, is here to give us a rundown of the inner workings of RGW. Yehuda, do you want to take it away?
B: All right, so I'm Yehuda and I'll be talking about the RADOS Gateway and how it works. First, I want to give a brief recap of the Ceph architecture.
B: RADOS provides the building blocks for the different storage solutions that Ceph offers; the main ones are the Ceph file system, the RADOS block device (RBD), and the RADOS Gateway (RGW). A library called librados provides the glue that enables creating all these different services. It has a rich set of APIs that can be used to access the data in RADOS.
B: The RADOS interface tries to make it simple to reason about accessing distributed storage. Objects are divided into flat-namespace pools. Each pool can have different placement rules, allowing the user, for example, to place some objects exclusively on faster SSD-backed OSDs, or on slow spinning disks, within the same cluster. Applications written against RADOS can rely on the relative simplicity of the CP consistency that it provides.
B: Users can write applications for RADOS using the librados interface, which is available in multiple programming languages, and these APIs are quite rich. RADOS supports partial overwrites of objects, rather than requiring objects to be overwritten in their entirety, which made it very easy to create RBD.
B: An atomic read transaction can be used to atomically fetch an attribute and an extent of the data payload. An atomic write transaction might be used to atomically check an attribute and conditionally add a set of key-value mappings. RADOS object classes can also be loaded into the OSD to add additional RADOS operations, for example.
B: But if you want to learn more about RADOS, you can go to the presentation by Sam Just from a couple of months ago. Now, let's move to the gateway itself. The RADOS Gateway provides S3- and Swift-compatible interfaces to applications that use object storage.
B: A RADOS Gateway deployment includes the RADOS cluster along with a set of radosgw processes, which serve S3 or Swift requests from applications using a librados connection to the RADOS cluster. So RGW is a librados application, as with other Ceph entities. RGW is designed to be able to scale horizontally, and multiple RGW services can be set to run in parallel and provide access to the same data.
B: RGW provides two main front ends. It can run as a FastCGI server, needing Apache or other web servers that support FastCGI to serve the HTTP requests.
B: It can also run as a standalone HTTP server, using the Civetweb embedded server to serve the HTTP requests. Civetweb, for anyone that doesn't know it, is relatively new; we introduced it in Firefly. It's a spinoff, or fork, of the Mongoose embedded HTTP server. Other than these two front ends, we also currently have a third front end that we use for load generation. It basically cuts the actual users out of generating queries and allows the gateway itself to generate them.
B: So internally, and this is a really rough block diagram, RGW is broken into multiple logical modules.
B: There's the front end that I just mentioned, and the REST dialect that handles either S3 or Swift; we also have another dialect, which would be the Swift auth.
B: And potentially we can add other APIs, for example the Google Storage API. Then there is the execution layer, which is common to any dialect. So we have a specific dialect, S3 for example, and then it goes into an execution core that is common to both the S3 one and the Swift one. This allows us to maintain a single unified view of the data.
B: There are also internal threads that handle garbage collection, quota, and so on, and all of that runs in the RADOS Gateway process. It uses librados to communicate with the RADOS back end, and some of the code that RGW uses runs as object classes within the OSD, so part of the RGW code actually runs on the OSDs themselves.
B: An object storage system like S3 has users; each user can create multiple buckets, and in each bucket objects can be stored in a flat namespace. The system supports an authentication system and provides access control mechanisms. RGW provides both the S3 and Swift RESTful APIs while presenting the same data through either of them.
B: For example, a few places where they differ: in RADOS, the general guideline is that object sizes should be limited, usually to a few megabytes per object. With RESTful object storage, the limit is usually in the few-terabytes range, and even that is probably flexible; there's no real reason why it shouldn't simply be limited to the sizes that the system itself can handle.
B: And for an object, you can specify a list of users that can access it, which is, by the way, unlike Swift; in Swift it's a bit different. The RADOS Gateway provides a superset of this functionality: it provides both the Swift per-container (per-bucket) permissions and the S3 per-object permissions.
B: RGW objects are composed of two main logical parts: the object head and the object tail. The object head contains all the object's metadata. This includes the manifest, which describes the object layout; the object's attributes, such as ACLs; and the user-defined object attributes. So users can specify special attributes on their objects, and all of that is contained in the head itself.
B: So if you have an object up to the size of 512 KB, then what users would see as the S3 or Swift object will be composed of only a single RADOS object, which is just the head (although it is possible for a small object to also have a tail, but that's beside the point). When accessing an object, the object's head is read in its entirety using a single RADOS I/O operation, so smaller objects require only a single round trip to the back end in order to be read. Now, this read is a compound operation: in this one I/O we basically send a request saying "get me all the object's attributes and read 512 KB", but that is one atomic operation that takes one round trip to the RADOS back end.
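The head/tail split and the size threshold just described can be sketched as a small model. The 512 KB head limit comes from the talk; the 4 MB tail stripe size is an assumption for illustration, not a statement about RGW's actual defaults:

```python
# Illustrative model of how an RGW object maps onto RADOS objects.
# HEAD_MAX matches the 512 KB figure from the talk; TAIL_STRIPE is assumed.
HEAD_MAX = 512 * 1024
TAIL_STRIPE = 4 * 1024 * 1024

def rados_objects_for(size):
    """Return how many RADOS objects would hold `size` bytes of user data."""
    if size <= HEAD_MAX:
        return 1  # head only: one object, one round trip to read it
    tail = size - HEAD_MAX
    # the head, plus however many stripe-sized tail objects the rest needs
    return 1 + (tail + TAIL_STRIPE - 1) // TAIL_STRIPE

print(rados_objects_for(100 * 1024))        # 1: small object fits in the head
print(rados_objects_for(10 * 1024 * 1024))  # 4: head plus three tail stripes
```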
B: For each bucket, RGW maintains an index that resides in a RADOS object, within an omap. As we talked about earlier, the object map is the key-value storage that can be created for each RADOS object, and it can provide a sorted list of all the objects that belong to that bucket. Starting at Hammer, our upcoming major release, the bucket index can be sharded and stored on multiple RADOS objects, so we no longer require a single object that might be a contention point when writing multiple objects to the same bucket. When dealing with object versioning, the bucket index is responsible for maintaining the version order for each object.
B: Maybe users that use S3 are familiar with the versioning API that S3 provides; we provide it too, starting at Hammer. With that, when you create multiple versions of the same object, the object versions are ordered by the order of creation, and you always have the current version of the object. But if you remove the current version of the object, the current version is going to move back to the previous version, and so forth.
B: We can look at the bucket index and replay these operations on the target zone.
B: A bucket index "prepare" and "complete" request is not something that regular object storage usually provides (and when I talk about object storage here, I mean the back end we're using, not RGW as an object store). RADOS is extensible and allows us to add such functionality by using object classes. An object class is a piece of software that we can register on the OSD side, and it executes in the RADOS I/O path.
B: It can read an object and mutate it. That makes our lives easier by eliminating the need for a lot of racy read-modify-write sequences. RGW makes heavy use of object classes, and they are used for multiple features like bucket index maintenance, usage logging, garbage collection, advisory locking, and so on.
B: When we create an object, instead of needing to keep track on the RGW side of how much data the user has, which is problematic due to the distributed nature of the system (we can have multiple gateways, each consuming their own data and creating their own objects, and one gateway doesn't necessarily know about the data the others wrote), we lazily send an object class call with the new object size, which in turn updates the internal accounting directly on the OSD.
B: That's how we handle quota. Now, I mentioned that in order to access an object, we need to know the bucket instance ID for that object; however, this information is not necessarily readily available to the gateway. For each bucket we hold two different objects: the bucket entry point, which points at the current bucket instance, and the bucket instance object itself. This is needed because a bucket may be removed and re-created, so we need to know which instance we refer to. In order for this to be efficient, RGW keeps a cache of all metadata information.
B: This applies to all metadata, and includes bucket metadata changes, like bucket creation, modification, and removal, and user information changes, like user creation, user suspension, and so forth. There is a radosgw-admin process that can be used to make those metadata changes, for example creating a user. It uses watch-notify too, so a metadata change that goes through radosgw-admin will affect the running cluster.
B: For example, we keep all the information about all the objects that changed in a bucket within the bucket index, but we also keep a list of all the buckets that have changes in them, so that we don't need to go over all the buckets in the system in order to discover what changes happened.
B: We keep the state of the current sync process, so that the next time we go and continue with the sync process, we know where to start from, and so on. It is possible to configure multiple RGW zones on a single Ceph cluster; however, each needs to use a distinct set of pools.
B: It is possible to define multiple placement targets, or storage policies. This makes it possible to specify that data will reside in different RADOS pools with different storage properties. For example, we can define a gold policy where all the data resides on SSDs, a silver policy where the data resides on spinning disks, and a bronze policy where the data is on spinning disks and also uses erasure coding. We can then define the default storage policy that each user can use.
B: A region is a set of zones that represents a logical geographical area. Now, a side note: the term "region" has been kind of confusing, and we decided that "zone group" is probably better suited, so in the future we may switch to using it. There is a master region that serves as the master for all metadata.
B: Different regions maintain different data, but users do have a single global namespace. When a user lists its buckets, it gets a list of all the buckets that it owns, regardless of their location. Accessing the data will require the user to access the correct RGW; otherwise it will get an HTTP redirect response that will send it to the correct location. So it's better for the user to access the actual zone where the data is.
B: New data is copied from the master zone to the target zone. Now, this might change in the future, but that's the current architecture. In the future we plan to make it possible to have data written in all zones of the same region and have it synced between them, not just from the master zone.
B: ...this object from that bucket on the master zone; but the sync agent itself does not read any data, so it's not in the data path.
B: What's next for us? As I just mentioned, an active-active architecture for multi-zone configurations. Multi-tenancy is also on our radar: the ability to have different tenants for RGW.
B: For example, for all the objects that start with a specific prefix, we may want them to be expired after one week. Or, if the bucket is versioned, expiration might mean that the objects will still exist but will not be current, and then we can say that after another month we completely remove these objects. Another possibility is to say that after a month we move those objects to a different storage policy.
B: So that's object expiration. And NFS: the ability to export RGW objects through NFS. That's about it. Thank you.

A: Any questions at this point? If anybody has questions, they can feel free to type them in the chat or ask them out loud. Looks like we got one from Eric: is the Civetweb server ready for production in Hammer? Does it scale?
B: We believe that it is on its way and is ready for production. We have set it as the default front end.
B: I'm not sure if we put it in as the default in Giant or in Hammer, and we've been pretty happy with its performance and its stability, and we have yet to have users complaining about Civetweb. But the most important thing about Civetweb is that it makes installation much, much easier for users. It removes all the extra complexity involving Apache and the FastCGI installation; with FastCGI we had so much grief.
B: There were problems with FastCGI, for example around supporting 100-continue, and there are nowadays three different FastCGI modules for Apache, each with its own issues. We don't have any of that with Civetweb.
B: I don't think we do QA of mixed versions, so I cannot guarantee how it's going to work. You would need to have the OSDs run the latest versions.
B: A Firefly RADOS Gateway in theory (and I'm not putting a QA stamp on it, because it's not tested), a Firefly RADOS Gateway in theory should work with Giant OSDs, because we do maintain backwards compatibility, or we try to. But as far as how much testing goes into it, I wouldn't do it, because it hasn't been tested.

A: It's always better to have the same versions across all your pieces. I see another one here from BH: since each RGW instance sets watch points on buckets and other metadata, how many RGW instances can we bring up?
B: Well, it sets watch points not on the buckets and metadata; it sets them on specific objects which are used for control information. I don't think that has been tested.
B: I don't think we've tested how many RGW instances we can have, and it's going to have a linear effect on the metadata changes. But I know of setups running a few, and tests that I've made ran maybe a dozen; I think people have tried it with more. I wouldn't use a thousand gateways with this feature set; in that case, I would turn off the cache.
B: Well, the problem that we've had with nginx is that internally the architecture it uses is asynchronous, and RGW itself is synchronous. Now, if you're using it through FastCGI, then as long as nginx does it correctly, it should work out of the box. We're not testing it, but I know that users are using it, so it's okay. In that case, if you're willing to accept the amount of testing it gets, then you can run it.
A: Oh, I see, here's a follow-up from BH; this makes a little bit more sense. He's asking if too many RGW instances would cause problems. It looks like they're thinking about running an RGW on every single one of their OSD nodes, and he probably just wants to know if this would be okay or if it is going to cause problems.
B: Well, it depends on how many of these nodes you have. I would assume that you'd have issues with too many, as I said, with the watch-notify, with the cache propagation for the metadata. In that case, I would turn off the metadata cache, which would affect performance.

A: You know, it looks like they'll have about 500 OSD nodes. If they just turn off some of the chattier portions of that, should that still be all right in terms of functionality? You'll lose the watch-notify, but is that going to be better?

B: You'll lose the metadata cache, so every object read, every operation, is going to be slower, right? It's going to be multiple round trips to the OSDs for each operation, but it will scale better, on the other hand. So you know, it's a trade-off, and I'm not sure that having 500 gateways is the way to go.
A: I see another question from Derek here: is anyone else asking for a way in the S3 interface to list the buckets you're not the owner of but have access to? He knows that might break S3 compatibility, but...
B: I'm not aware of any problem with the S3 API... you mean listing the actual buckets, right, not listing the objects within the buckets? Yeah, with S3 it's not possible, and the current way to do that is to use the metadata API that we provide, the admin metadata API, which allows you to list all buckets in the system; but it requires users to have special admin caps for that. The context in which I've heard this request is to make it more Swift-like, because in Swift you have multiple users that can share: if they are on the same tenant, they can share the same bucket, or containers that all refer to the same data. For us, we can achieve that by using subusers, which is a feature that I'm personally not too fond of.
B: If there's a compelling use case, we might want to reevaluate how to do that, and what the exact use case is that users need.
B: Well, the short answer is that there's no such plan. The longer answer is that when you upload an object, we assign an ETag field to it. That is usually the MD5 sum of that object, but it's only correct for simple uploads; for multipart uploads, the ETag is actually the MD5 sum of the MD5 sums of the parts that you uploaded.
B: Yeah, well, the problem with that is that when we do a multipart upload, we don't access all the data serially. So, in order to actually provide that information once the object has completed upload, we would need to go over it again, reread it, and modify it.
B: Or, once the object has completed upload... or maybe setting that information on the object when uploading it: if the client knew that information, it could set it as an extended attribute on the object.
A: Gotcha, okay. One quick question: it looks like someone, Vivek Cherian, was asking if they could replace the OpenStack Swift object store with RADOS Gateway, which is actually something that we've long talked about. So the answer is yes, but did you have anything you wanted to add about the current state of using RADOS Gateway as a drop-in replacement for OpenStack Swift?
B
That
the
its
api
is
very,
very
fluid
right.
Openstack
swift
is
not
an
api
is,
or
I
wouldn't
say,
openstack
switch.
Swift
is
is
not
an
api.
It's
a
product.
It's
a
sim
specific
product
implementation.
B: We strive to make the gateway as compatible as we can with Swift, and I know there has been tremendous work done by community contributors, by people from Mirantis and from other places, in order to make it even more compatible. So I know of people who are using it instead of OpenStack Swift, and I don't see why not. Now, the question is whether there are specific features in OpenStack Swift that are needed that we might not support, and I'm not aware of any major feature.
A
So
the
the
only
major
difference
well
major
major
is
debatable,
but
the
only
major
difference
between
the
two
right
now
is
the
the
kind
of
per
user
name
spacing
stuff
right.
Isn't
that
the
biggest
the
biggest
hurdle
between
s3
and
swift
and
because
we
were
based
on
s3
first,
we
went
one
way
and
left
with
the
other.
B
When
we
started
swift
was
also
really
at
the
beginning,
and
its
user
model
was
wasn't
something
that
was
really
completely
thought
out.
I
think
at
that
point.
So
so
we
we
went
with.
B
We
tried
to
make
it
as
closer
as
what
we
understood
switch
was
doing,
but
you
know,
apparently
it
was
going
different
direction.
A
Okay,
so
it
looks
like
we
had
a
question
back
here,
always
from
abhishek
asking
what
are
md
log
bylog
and
data
logs
context
of
rgw
admin
and
rgw
agent.
Think
yeah.
B: The datalog is the list of all the buckets that changed; it's a log of all the buckets that changed in a certain way within a certain time range. We don't update it for every change that happens to an object within a bucket; we only do it like once every 30 seconds, or 15 seconds, so that not every write requires updating the log. And the bilog is the bucket index log.
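The datalog throttling just described, recording a bucket at most once per time window rather than on every write, can be sketched like this; the 30-second window matches the talk, while the structure itself is illustrative:

```python
class DataLog:
    """Toy datalog: a bucket is recorded at most once per WINDOW seconds,
    so a burst of writes to the same bucket produces one log entry."""
    WINDOW = 30  # seconds, per the talk

    def __init__(self):
        self.entries = []       # (timestamp, bucket) change records
        self.last_logged = {}   # bucket -> time we last recorded it

    def on_write(self, bucket, now):
        last = self.last_logged.get(bucket)
        if last is None or now - last >= self.WINDOW:
            self.entries.append((now, bucket))
            self.last_logged[bucket] = now

log = DataLog()
for t in (0, 1, 2, 40):   # four writes to the same bucket
    log.on_write("photos", t)
print(len(log.entries))    # 2: t=0 logged, t=1 and t=2 throttled, t=40 logged
```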
A
Okay,
abhishek.
Does
that
answer
your
question?
You
had
a
follow-up
sync
agent
and
metadata
agent,
sync,
if
not
in
data
path,
for
example
in
bucket
creation.
What
does
it
sync?
Only
user
data,
but.
B: All right, so metadata sync, yeah: it's not going to sync the data itself, but it's going to sync the user data and the bucket metadata. The bucket metadata means the actual ACLs the bucket has, the fact that the bucket belongs to a specific user, and that kind of thing. You can try looking at the metadata yourself: there's the radosgw-admin metadata set of commands, for example "radosgw-admin metadata list user", "metadata list bucket", or "metadata list bucket.instance". So you can look at all the metadata, and, if you really want to cause damage to your data, you can modify it as well.
A: Okay, let's see, I think we hit all of the questions in the backlog here. Anyone else have any questions before we wrap this up?
A: All right, I think we hit all of the questions here in the backlog. I think that was a great tech talk. Thank you very much, Yehuda, for going through RADOS Gateway with us. We'll see all of you back here on the 23rd of April, hopefully, to hear a chat about Calamari, both the management API as well as some of the GUI options that are floating around out there.