From YouTube: CDS Infernalis (Day 2.2) -- OSD: Transactions
Description
Videos from Ceph Developer Summit: Infernalis (Day 2.2)
04 March 2015
https://wiki.ceph.com/Planning/CDS/Infernalis_(Mar_2015)
B
Sorry, we intend to build a reference solution for Hadoop over Ceph. We are following the Hadoop-over-OpenStack approach, a project built on Hadoop over Swift, and we would like to share this design with the Ceph community and hear some comments from you guys. Can you see my screen now?
B
OK. There is a special requirement in this solution: all the storage servers are on a network isolated from the Hadoop cluster. That prevents us from using the Hadoop-plus-CephFS plugin, which requires the Ceph cluster and the Hadoop cluster to sit on the same network. So we turned to the Ceph RADOS gateway: in this solution the RADOS gateways work as connectors between the two networks, and we are also going to leverage Ceph cache tiering to cache the data on each RADOS gateway server using an SSD.
B
OK. So this is the general proposal of our solution; you can see it in the picture on the left. We actually have four components related to Ceph. The first is RGWFS, which is a plugin for Hadoop that lets Hadoop talk to the RADOS gateway directly. The second is the RGW proxy, which is a standalone daemon that can give out the closest RADOS gateway instances associated with a Hadoop job.
B
So here is the general workflow for Hadoop over the Ceph RADOS gateway. The first step is that the scheduler asks the RGW proxy for the closest active RADOS gateway instances for this job, and in the second step the scheduler allocates the tasks to servers that are near to the data, which means near those RADOS gateway instances.
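The locality-aware scheduling step described here can be sketched as follows. This is a minimal illustration, not the actual scheduler; the names (`pick_task_host`, `block_locations`) are hypothetical, and a real Hadoop scheduler would get these locations from its input splits.

```python
def pick_task_host(block, block_locations, free_hosts):
    """Prefer a free host co-located with one of the block's RADOS
    gateway instances; otherwise fall back to any free host."""
    for host in block_locations.get(block, []):
        if host in free_hosts:
            return host
    return next(iter(free_hosts))

# Example: blk-0 is served by rgw-a and rgw-b; rgw-b is free, so the
# task lands there instead of on an arbitrary host.
locations = {"blk-0": ["rgw-a", "rgw-b"], "blk-1": ["rgw-c"]}
chosen = pick_task_host("blk-0", locations, {"rgw-b", "rgw-c"})
```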
B
Okay. So RGWFS is actually just a fork of the SwiftFS driver from the OpenStack Sahara project, with some minor changes. For example, in SwiftFS the URL of an object actually includes a partition number, like the 154362 here, and we know that in RGW there is no such concept, so we need to change it to a URL without the partition number.
D
This is just one comment: in Swift you get the partition number by authenticating to the auth server; you get the entire URL from it. The same would happen if you actually authenticated to the gateway through the Swift authentication, but I assume you're not really authenticating in your environment.
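The URL change discussed here can be sketched like this. The exact path layouts are illustrative assumptions (the Swift-side path in particular is simplified); the point is only that the SwiftFS-style URL embeds a partition number while the RGW-side URL does not.

```python
def swift_object_url(host, partition, account, container, obj):
    # SwiftFS-style URL carrying a partition number (illustrative path)
    return f"http://{host}/v1/{partition}/{account}/{container}/{obj}"

def rgw_object_url(host, container, obj):
    # RGW Swift-compatible URL: no partition component
    return f"http://{host}/swift/v1/{container}/{obj}"
```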
B
Okay, so for the second part; this is actually the most complicated part of this solution. We have a standalone daemon that acts like the name node in HDFS, so before each read, RGWFS will query the RGW proxy to get the block locations of the data in the RADOS cluster.
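The name-node-like lookup described here can be sketched with an in-memory table standing in for the proxy's real state. All names here are hypothetical; the real proxy would compute locations from the cluster rather than from recorded entries.

```python
class RGWProxy:
    """Toy stand-in for the proposed proxy daemon: answers
    'where are the blocks of this object?' queries from RGWFS."""

    def __init__(self):
        self._locations = {}  # (container, obj) -> [gateway hosts]

    def record(self, container, obj, gateways):
        self._locations[(container, obj)] = list(gateways)

    def get_block_locations(self, container, obj):
        # Unknown objects simply have no known locations.
        return self._locations.get((container, obj), [])
```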
D
Right now that's not implemented, but adding it should not be that complicated, yeah.
D
I know people are setting it to something like five megabytes, so it's not anything like that. Okay, so the max chunk size controls how much data you read in a single read operation; in RGW that would be the stripe size, I think.
B
Okay, that's quite helpful. The second step is that, following the RESTful interface, RGWFS can get or put the content based on the container and object name. And in this step we intend to use some range read requests to get each chunk or block here. Is that already supported in the current RGW interfaces? Yeah.
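The range-read step can be sketched as computing one HTTP `Range` header per chunk of the logical read. The chunk size is a hypothetical parameter here; note that HTTP byte-range ends are inclusive.

```python
def range_headers(offset, length, chunk_size):
    """Yield 'bytes=start-end' Range header values covering one
    logical read of `length` bytes starting at `offset`."""
    end = offset + length
    headers = []
    pos = offset
    while pos < end:
        last = min(pos + chunk_size, end) - 1  # Range ends are inclusive
        headers.append(f"bytes={pos}-{last}")
        pos = last + 1
    return headers
```

Each resulting request could then be sent to the gateway closest to that chunk, which is exactly the split-read behaviour discussed next.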
D
If I understand it correctly, each RGWFS client will send the request to the RGW that, you know, is closer to the data, right?
B
Yes, yes.
D
So you might have one read that is split into a few chunks, and each chunk is going to go to a different RGW.
D
The first read is going to be to the head object, and the next one is going to be to the actual range that you specified. Okay, yeah. But you might be able to do something like this: if you could send the metadata somehow to the gateways from the RGW proxy, and add something to the gateways so that they don't need to read the head object, I think you can do it. Then it's not going to read the head object, right?
E
Well, remember, our RGW proxy actually calls RGW with a get-layout request on the container. Yes, right, but one of the things it would return would be: this is the magic URL to use to read the range directly. And when provided that URL, it would skip the head object and just go directly to RADOS, basically. Okay.
E
Okay, I think that would be sort of a second-pass optimization. So the first step would be to add the get-layout call that just tells you all the layouts; you suffer in that you read the head each time, but that's not a lot. And then the second step would be that the get-layout response would also just carry this information along.
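The proposed get-layout response could carry, for each stripe, the backing RADOS object name and byte range, so a reader can eventually skip the head object. The field names below are assumptions for illustration, not an existing RGW API, and the `name.index` stripe-naming scheme is likewise hypothetical.

```python
def build_layout(obj, size, stripe_size):
    """Describe an object's stripes: which backing RADOS object holds
    which byte range. Purely a sketch of the response shape."""
    stripes, off, idx = [], 0, 0
    while off < size:
        n = min(stripe_size, size - off)
        stripes.append({"rados_obj": f"{obj}.{idx}", "offset": off, "len": n})
        off += n
        idx += 1
    return {"object": obj, "size": size, "stripes": stripes}
```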
B
Yeah, that's right. Let's see, I'm going to talk about the next step. So this is a simple sample get flow for RGWFS. It is a bit tricky here, what the RGW proxy needs to do. For example, RGWFS is going to get one object, called object1, and the RGW proxy will try to look for this file in the specified container.
B
Yep, okay. So once we have these RADOS objects, the RGW proxy can do some lookup using CRUSH. For example, we can first cache the locations of these objects in the cache tier here, and then, using some internal functions (I can't remember the exact name, but it is something like "calculate object location"), we can get the target OSDs for these objects in the cache tier. So we can actually know which RGW instance is the closest to the data.
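The shape of this lookup can be sketched as below. A real deployment would use librados/CRUSH for the object-to-OSD mapping (the `ceph osd map <pool> <object>` CLI exposes the same calculation); the stable hash here is only a stand-in to make the sketch self-contained and is not the CRUSH algorithm.

```python
import hashlib

def object_to_gateway(obj_name, osd_to_gateway):
    """Map an object name to the gateway co-located with its target
    OSD, via a deterministic (non-CRUSH) hash placement."""
    osds = sorted(osd_to_gateway)
    h = int(hashlib.md5(obj_name.encode()).hexdigest(), 16)
    return osd_to_gateway[osds[h % len(osds)]]
```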
F
Only one question there: for example, when we are looking up which RGW an object should be written to, we need to calculate the mapping relationship. Say we calculate that one object is in the cache tier, so it actually belongs to RGW1, because the cache tier SSD is on that server. Would it be possible that, due to some unknown reason, some days later the object is actually evicted from that cache tier SSD, so the mapping is no longer accurate?
B
First, you ask the RGW proxy for the location of some big data file. The RGW proxy actually monitors the gateway instances and then returns a list of active RGW instances, in which the first one is actually the closest store; the second and following ones, which should be configurable by the user, can be some other available gateway instances. So you can have some failover there.
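The failover behaviour described here can be sketched as trying the gateways in the order the proxy returned them (closest first) and falling back on error. `fetch` is a stand-in for the real HTTP read; the names are hypothetical.

```python
def read_with_failover(gateways, fetch):
    """Try each gateway in proxy-returned order; return the first
    successful read, or re-raise the last error if all fail."""
    last_err = None
    for gw in gateways:
        try:
            return fetch(gw)
        except OSError as err:
            last_err = err  # try the next-closest gateway
    raise last_err
```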
B
Yeah, yeah, exactly. Let me turn to the next page. Yes, this is something we intend to discuss with you guys here: the modifications on the RGW side. The first is: is it possible to use some pseudo-random string in shadow object names? I mean, can we rely on some MD5 of the entire URL of the container, object, and account names? Currently RGW is using some base64 strings in these shadow object names.
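The proposed deterministic naming can be sketched as deriving the shadow prefix from an MD5 of the account/container/object path, instead of the random string RGW generates today. This is purely illustrative of the proposal being made; the URL layout and `_part` suffix are assumptions.

```python
import hashlib

def shadow_prefix(account, container, obj):
    """Deterministic prefix: same object path -> same prefix."""
    url = f"/{account}/{container}/{obj}"
    return hashlib.md5(url.encode()).hexdigest()

def shadow_name(account, container, obj, part):
    # Each stripe/part of the object gets the shared prefix plus index.
    return f"{shadow_prefix(account, container, obj)}_{part}"
```

The practical consequence is that any party knowing the object path could recompute the shadow object names, which is what makes the proxy-side lookup possible.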
D
Currently the Swift API, as we support it, covers regular upload and the DLO, the dynamic large object. We don't do the SLO, the static large object, although I did look at that API recently and it's not that different from the dynamic large objects. So if anyone wants to contribute it, that would be welcome. Oh, OK.
B
Okay, all right. So this is pretty much all the work I have here.
C
So in this approach RGW provides the Hadoop interface, and that will add a little overhead. It may be possible to read the data directly from RADOS instead of through RGW, which I think could be more performant. So we might introduce a middle layer between RGW and RADOS, and it may get better performance for Hadoop and HDFS, because we don't have that extra hop.
C
The other reason for a layer between this and RADOS is that Hadoop often reads a lot of data, and my worry is that the gateway will consume very much of the network bandwidth if all of this traffic goes through it. I just thought of this idea; it may make the work a little harder, but I think it may work better for your case than the current solution.
C
Yeah, I read about that; with SwiftFS the cost of the Swift interface is a challenge to real performance for Hadoop. Especially, most people using Hadoop use the native HDFS interface, which allows append, while Swift only has get and put, so there can be some gap there.
C
So I came up with an example: say we have a really large data source, with data of several TB or even several hundreds of TB, and we need to move this data from RGW or Swift to the Hadoop data cluster, such as HDFS. So these hundreds of TB need to move from one place to another place, and I...