From YouTube: CDS Hammer (Day 1) - Towards Ceph Cold Storage
Description
http://goo.gl/U4b70r
28 October 2014
Ceph Developer Summit: Hammer
Day 1
Towards Ceph Cold Storage
Matthias Grawinkel, Marcel Lauhoff
A: The next session we have here is discussing cold storage as it applies to Ceph, and it looks like we have a couple of guys here who proposed the blueprints, Matthias and Marcel. So, one of you wants to give us a little bit of an overview of what you're thinking, and we can take it from there.
B
First,
hello
from
germany,
so
I'm
matthias
gravenka,
I'm
from
the
university
of
mainz
so
and
my
colleague
marcelo
from
my
former
university
of
palabra,
and
we
are
still
looking
at
from
more
or
less
scientific
perspective,
but
right
now
for
also
with
a
natural
use
case,
so
we
would
like
to
see
how
far
we
can
bend
seph
to
into
a
cold
storage
or
as
into
an
archival
store
and
yeah.
For
this
we
actually
defined
a
master
thesis.
So
marcelo
is
right.
Now
writing
a
master
thesis
with
the
topic.
B: Basically: how can we cool down Ceph? We have some ideas for how to do this and what could be done there. So basically we would like to hear what you think of the ideas — whether some of them are doable, whether they are interesting. Maybe you also have some interesting questions that could go into this master's thesis. Basically, that is what this talk will be about. Cool, okay.
B: So basically this is more or less a continuation. At the last developer summit there was a first approach from the guys from HGST, where they discussed how to redefine a tier to make it more silent. Basically, the biggest problem is that a lot of data gets moved around — which is nice when things are getting rebalanced or when we get new hosts in there.
C: Okay, well, we defined a couple of ideas that we thought we need for cold storage. The first one we always came back to: we need some kind of placement that is energy-aware. We thought: can we add, like, a bucket type that knows about which OSDs are powered up or powered down? And maybe, based on a discussion I had with Loïc at the Ceph Days in Paris — why not, maybe...
C
Power
power
down
based
on
the
time,
for
example,
if
you
have
like
three
osd,
some
assign
a
weight
to
one
of
them.
That
is
that
basically
prevents
data
from
getting
there
and
do
that
like
for
an
hour
and
then
switch
to
the
next,
so
that
the
usd
has
like
a
time
time
to
cool
down
or
something.
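A minimal sketch of this weight-rotation idea, driven from Python with the existing `ceph osd crush reweight` command. The OSD names and the interval are hypothetical, and note that reweighting also triggers movement of existing data, so this only illustrates the "keep one OSD free of new writes at a time" notion, not a finished cold-storage policy.

    import subprocess
    import time

    OSDS = ['osd.0', 'osd.1', 'osd.2']   # hypothetical OSD names
    REST_INTERVAL = 3600                 # one hour of "rest" per OSD

    def reweight(osd, weight):
        # `ceph osd crush reweight <name> <weight>` is an existing CLI command
        subprocess.check_call(['ceph', 'osd', 'crush', 'reweight', osd, str(weight)])

    while True:
        for resting in OSDS:
            for osd in OSDS:
                # weight 0 keeps new data away from the resting OSD
                reweight(osd, 0.0 if osd == resting else 1.0)
            time.sleep(REST_INTERVAL)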
C: Well, I think it is quite doable, but I'm not sure how to get this kind of side-effect data into the bucket choose functions.
D: So the way that mapping currently works is that CRUSH spits out the set of OSDs that should store the data, and then we filter out all the OSDs that are currently down and get the so-called acting set — the ones that are actually doing the work. Those are the ones that are up and actually do the writes, so the down-filtering happens after CRUSH.
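A conceptual sketch of that filtering step (this is an illustration, not Ceph source code): CRUSH proposes an "up" set, and OSDs that are currently down are dropped to form the acting set.

    # Conceptual sketch only: CRUSH output filtered by OSD liveness.
    def acting_set(crush_up_set, osd_is_up):
        """crush_up_set: OSD ids proposed by CRUSH; osd_is_up: id -> bool."""
        return [osd for osd in crush_up_set if osd_is_up.get(osd, False)]

    # e.g. CRUSH proposes [4, 9, 17] but osd.9 is down -> acting set [4, 17]
    print(acting_set([4, 9, 17], {4: True, 9: False, 17: True}))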
B: I have one more question here. This would mean that if we power disks up and down a lot, and we have one OSD per disk, then for every power-up and power-down we would change the crush map and push it out to basically the whole cluster?
D: You would change the OSD map — that, you know, roughly 100k structure that describes who's up and who's down — but that's going to have to change no matter what, so that the clients know who to talk to and who's up and who's down.
D: Right, yeah.
B: Yeah. The next question would be: in the pad we have point 1.3, about the crush rules. There are these distribution strategies, like a linear fill. Some time ago I read something about making them more flexible, or adding some plug-in functionality to them. Is there some possibility to get more there?
D: We have four bucket types that we define right now, but you could easily define new ones. There's a little bit of a compatibility thing where you have to make sure that the whole cluster understands the new features you add, but we've done that several times now, so it's pretty well-worn territory.
D
So
I
think
if
there
are,
if
there
are
alternative
mapping
schemes
that
we
want
to
pursue,
then
then
definitely
but
that
that
being
the
case,
I
think
that
the
stuff,
where
you're
you're
not
actually
changing
your
notion
of
where
the
data
should
be
stored
but
sort
of
modulating
that
with
whether
it's
up
or
down
and
whether
you're
writing
to
it
at
this
particular
moment
that
stuff
is
those
types
of
filtering.
Steps
can
be
done,
one
step
above
and
without
actually
modifying
chord
brush.
B: The next question would be: how much intelligence can you get into these rules? In the pad we have point 5, which is called "mark data". Basically, as a client — if I would like to build an archive, sometimes I know exactly how hot or cold the data that I put into the Ceph cluster is, and maybe I can somehow tag it. Maybe I have blue data and red data and yellow data or something. So is there any chance to get those tags...
B
Basically,
maybe,
as
a
system
attribute
or
of
object
attribute
that
is
written
by
the
client,
can
those
somehow
go
into
such
a
decision
where,
like
all
the
the
the
blue
tagged
files
to
object
stores
with,
I
don't
know,
ids
one
to
ten.
D: So you can certainly name them, you can tag them, you can do all those things. That doesn't propagate all the way to CRUSH, though, because CRUSH isn't actually placing individual objects — it's placing placement groups. You have a logical pool, which is the logical collection of objects; that's sharded randomly, basically spraying all the objects into different placement groups using a hash function, and then it's the placement group that CRUSH decides where to store.
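A simplified illustration of that two-step mapping. This is not Ceph's actual hash (Ceph uses rjenkins and a "stable mod"), and CRUSH then maps the resulting PG to OSDs; the point is only that the object name never reaches CRUSH directly.

    import zlib

    def object_to_pg(pool_id, object_name, pg_num):
        # Hash the object name into a placement seed, then fold it into one of
        # the pool's pg_num placement groups. CRUSH only ever sees the PG id.
        ps = zlib.crc32(object_name.encode('utf-8')) % pg_num
        return '{}.{:x}'.format(pool_id, ps)

    print(object_to_pg(3, 'my-archive-object', 128))   # e.g. "3.2a"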
D
So
I
think
that
the
way
that
you
would
approach
this
generally
would
be
that
if
you
have
a
blue
object,
you
put
it
in
the
blue
pool
and
if
you
have
a
red
object,
you
put
it
in
the
red
pool
or
you
feed
that
information.
You
put
it
all
in
one
pool
and
you
give
that
information
to
rados
and
rados
uses
that
to
make
a
tiering
decision-
and
it
says
I'm
gonna
actually
just
make
this
object.
D
Which
would
rely-
and
I
think
that
that
type
of
redirect
would
rely
on
the
the
some
version
of
the
redirect
design
that
we
kicked
around.
I
think
in
the
emperor
time
frame,
but
we
didn't
ever
implement,
we've
only
implemented
the
cache
steering
so
far,
but
I
think,
probably
having
having
sort
of
a
generic
way
to
indicate
that
this
object
is
cold
and
is
unlikely
to
be
modified
ever
again
or
is
warm.
I
know
I
do
expect
further
updates
having
a
generic
way
to
communicate
that
down.
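A minimal python-rados sketch of the first, "blue pool / red pool" approach, assuming pools named blue-pool and red-pool already exist and /etc/ceph/ceph.conf is readable on the client. The xattr is only a hypothetical way to keep the tag visible to a later archiving agent; it plays no role in placement.

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def store(obj_name, data, colour):
        # One pool per "temperature"/tag; the client decides which pool to use.
        ioctx = cluster.open_ioctx('{}-pool'.format(colour))
        try:
            ioctx.write_full(obj_name, data)
            # Also tag the object itself, e.g. for a later agent to inspect.
            ioctx.set_xattr(obj_name, 'temperature', colour.encode('utf-8'))
        finally:
            ioctx.close()

    store('report-2014-10', b'...archive payload...', 'blue')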
B
Okay,
it
would
bring
us
to
like
a
point
to
the
object,
steps
and
links
to
external
storage.
So
I
think
you
talked
about
this
earlier
today.
I
missed
that
one.
What
is
this
current
state?
Is
it
like
implemented?
Will
it
be
implemented?
Could
you
give
me
the
stock.
D: We talked about it during that summit and we haven't done anything with it since, so we haven't implemented it. The original idea was to have two different types of tiering. One where you have your base pool where the data is stored, and you put a cache above it that holds the subset of the data that happens to be hot, and your IO hits the cache — that's the one we implemented. And then that blueprint is about having a second tier that's colder than the base tier, where you just have a pointer where the object would normally be stored.
D
That
points
off
somewhere
else
and
that
one
is
yeah.
It's
it's
not
implemented.
We
haven't
gotten
there
yet.
I
think
there
are
a
host
of
applications
for
it.
So
one
would
be
something
that
you
mentioned
a
couple
using
your
cold
raid
scheme
or
could
point
off
to
a
different
raido's
pool.
It
could
point
to
you
know
the
cloud
it
could
point
anywhere
right
as
long
as
there's
a
way
that
the
radios
can
sort
of
get
input
from
that
from
that
other
object
store.
D
It's
not
currently
a
priority.
I
think
because
I
think
the
the
cash
steering
is
capturing
sort
of
most
of
the
tiering
use
cases
that
we're
looking
at
right
now,
I
think,
but
but
I
think
we
definitely
like
to
see
something
like
that,
eventually,
because
it's
gonna
open
up
a
lot
of
possibilities.
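For context, the cache tiering that was implemented is configured with existing ceph CLI commands along these lines, wrapped in Python here only for consistency with the other sketches; the pool names are hypothetical.

    import subprocess

    def ceph(*args):
        subprocess.check_call(['ceph'] + list(args))

    ceph('osd', 'tier', 'add', 'base-pool', 'cache-pool')
    ceph('osd', 'tier', 'cache-mode', 'cache-pool', 'writeback')
    ceph('osd', 'tier', 'set-overlay', 'base-pool', 'cache-pool')
    # The hit sets (bloom filters) mentioned below are enabled per pool,
    # among other cache-pool settings:
    ceph('osd', 'pool', 'set', 'cache-pool', 'hit_set_type', 'bloom')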
B
Okay,
then
we
have,
I
think
myself,
would
you
maybe
introduce
this
archiving
demon
idea.
C
Just
back
to
the
object
you
redirect
some,
I
watched
the
emperor
video
today.
Do
you
think
it's
there
you
there?
You
assume
that
that
it
is
that
it's
links
from
sev
to
ceph.
I
get.
I
think
this
thing
is
hard
to
generalize
this,
to
something
safe
to
something
else
like
http
server.
I
don't
know.
D
Right
so
I
took
a
quick
look
at
that
blueprint
yesterday
and
I
don't
think
it's
it's
hard
to
generalize
that
so
the
I
think
the
way
that
we
originally
wrote
it
up.
I
was
assuming
except
to
seth,
but
the
only
thing
that's
stored
on
the
on
the
cold
object.
D
Is
it's
just
a
flag
that
says
I
am
linked
from
something
else
and
a
back
pointer
to
like
who
linked
to
me
and
that's
just
for
like
scrubbing
consistency
stuff,
so
it's
not
actually
necessary
for
the
the
correct
operation
of
the
system,
so
so
yeah,
I
don't
think
it'll
be.
It
would
be
hard
to
generalize.
I
think
the
only
the
only
requirement
would
be
that
in
order
for
it
to
work.
Obviously
the
osd
has
to
be
able
to
do
gets
inputs
to
whatever
it
is.
That's
on
the
on
the
back
side.
C
Yeah,
okay,
yeah,
it's
it's
kind
of
the
key
key.
Indeed
in
ingredient,
we
sort
of
for
cold
storage
and
the
next
point
about
the
archiving
daemon
is
actually
we
need
these
links.
For
that.
That
thing
is
actually
pretty
pretty
short.
I
put
some
like
pseudo
python
code
in
in
the
pad.
I
mean
basically
iterate
over
all
stuff
and
decide
on
something
and
whether
it's
like
cold,
I
don't
know
which
metadata
is
actually
in
there.
I
I
didn't
know
if
there's
an
a-time
or
something.
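The pseudo-code from the pad isn't reproduced in the recording; a minimal python-rados sketch of the loop might look like the following, assuming a pool named archive-pool and a hypothetical demote() step. RADOS keeps a per-object mtime (not an atime), which connects to the next answer.

    import time
    import rados

    COLD_AFTER = 90 * 24 * 3600          # hypothetical: untouched for 90 days

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('archive-pool')

    def demote(obj_name):
        # placeholder: copy to a cold tier / create a redirect, then delete here
        pass

    for obj in ioctx.list_objects():
        size, mtime = ioctx.stat(obj.key)        # mtime is a time.struct_time
        if time.time() - time.mktime(mtime) > COLD_AFTER:
            demote(obj.key)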
D
This
is
almost
exactly
what
the
the
current
tiering
agent
does.
So,
with
the
cast
hearing
there's
a
background
thread,
that's
called
the
tiering
agent.
That
does
something
like
this.
It
iterates
over
objects
and
decides
whether
to
demote
them.
It
operates
only
on
the
cast
here.
So
it
goes
through
the
cached
objects
and
decides
whether
to
evict
them.
D
We
don't
explicitly
store
anytime,
because
that
would
mean
doing
a
write
on,
but
we
approximate
it
using
bloom
filters
and
it
just
so
this
whole
infrastructure
to
get
it
to
basically
guess
what
the
a
time
is
with
some
with
some
sort
of
semi-certainty.
D: So simply making a RADOS pool that satisfies your requirements for what a cold tier would do would get you most of this. If we were able to make a RADOS pool behave well with powered-down OSDs, for example on a schedule, then I think it would work — which I think is sort of where the directions are. The first direction would be making the external redirects work, and in that case you can push cold data off to anything else. It could be, you know, Amazon Glacier, it could be...
D
You
know
your
your
power
aware
raid
thing
it
could
be,
it
could
be
anything
and
that
would
be
sort
of
one
enabling
piece
and
by
the
way
we
made
it
when
we
did
the
tiering
stuff,
we
exposed
all
that
all
the
information
about
getting
the
bloom
filters
to
tell
how
hot
or
cold
an
object
is
and
controlling
them
promote
demote
decision
stuff.
We
exposed
all
that
via
liberatos,
so
you
could
actually
write
your
tiering
agent
in
python.
D
It
turns
out,
we
embedded
it
in
the
osd
because
it
was
faster
and
more
efficient
and
simpler
to
do
that,
but
but
those
raido's
calls
exist.
So
you
could
you
could
write
something
else
externally.
D
You
happen
to
know
what
should
happen,
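As a sketch of what such an external agent's demotion step could use, the rados CLI exposes cache-flush and cache-evict for objects in a cache pool. The pool and object names below are hypothetical, and a real agent would choose its victims using the hit-set information mentioned above.

    import subprocess

    def evict_from_cache(pool, obj):
        # Flush the object's dirty data down to the base pool, then drop the
        # cached copy. Both subcommands exist in the rados CLI.
        subprocess.check_call(['rados', '-p', pool, 'cache-flush', obj])
        subprocess.check_call(['rados', '-p', pool, 'cache-evict', obj])

    evict_from_cache('cache-pool', 'report-2014-10')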
D: But then the second direction is, I think, dealing with the power-down situation — how you turn those OSDs up and down — and I think we talked a little bit about this in the session that you watched, earlier, last time.
D: But this is the one that I'm actually most interested in in the shorter term — figuring out what would make sense, because I think there are sort of two ways to do it. The thing I have in mind here is actually the HGST use case, where you have these disks that you literally want to power down individually, so the OSD would be turned off, and there are two ways to do it.
D: ...or, alternatively, you would actually literally power the thing down, and then you'd have to have some integration where you know how to send a special packet over the network or whatever to power it back up, which is a whole extra level of integration. But I think in both cases the key thing would be giving Ceph an additional OSD state where it's aware that the OSD exists and is presumed to be healthy.
D: It would be great to have something like that, but it sort of depends on what the expectations are and what the usage pattern is going to be. You can imagine a case where you have, like, three replicas and you just power down two of them, or one of them, just to incrementally cut down your power; or you could imagine a setup where you turn off the entire cluster for 20 hours of the day, then turn it on for four hours, ingest all the new data, do the reads, and then power it off again.
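A hedged sketch of that "power it down for most of the day" pattern using an existing flag: setting noout keeps Ceph from marking the stopped OSDs out and re-replicating their data while they are off. Actual power control (stopping the daemons, the enclosures, IPMI, and so on) is left out.

    import subprocess

    def ceph(*args):
        subprocess.check_call(['ceph'] + list(args))

    def power_down_window():
        ceph('osd', 'set', 'noout')     # stopped OSDs stay "in": no data migration
        # ... stop the ceph-osd daemons / power down the enclosures here ...

    def power_up_window():
        # ... power the enclosures back up and start the ceph-osd daemons ...
        ceph('osd', 'unset', 'noout')   # back to normal failure handling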
B
Yeah,
I
actually
have
a
very
concrete
use
case.
This
is
still,
as
you
said,
this
I've
developed
this
two-dimensional
rate,
so
basically
a
lot
of
disk
drives
in
one
enclosure
and
on
top
I
have
in
the
end
file
system,
which
is
somewhat
smart,
where
to
put
data,
and
this
would
be
like
point
four.
B
What
would
be
the
the
an
interesting
way
how
to
integrate
an
osd
on
top
of
either
120
individual
disks
that
are
smart
in
the
way
that
they
are
always
on,
but
some
caching
schema
and
so
on
and
so
forth?
Basically,
this
system
is
always
there
it's
always
available,
but
in
the
worst
case
it
will
give
you
10.
Second
latency
until
you
get
the
data,
this
is
the
the
actual
use
case
there,
but
nevertheless,
once
data
is
written
to
the
osd,
it
should
not
be
moved
to
the
next
osd.
B
So
basically,
this
additional
state
if
an
osd
is
on
or
off,
is
maybe
not
that
important.
For
me
right
now,
though,
for
these
htc
disks,
it's
very
important.
I
see
that
for
me,
it
would
be
more
important
to
have
more
influence
on
how
to
put
this
mapping
from
a
file
from
an
object
to
an
osd
that
they
will
stick,
that
I
have
some
control
there,
how
to
put
them
yeah.
All
blue
files
go
to
osd
all
the
yellow
files
go
to
osd2,
some
something
like
that
or
to
some
groups.
B
D: Right, right, okay. So, sorry — in the context of your two-dimensional RAID, are you assuming that on top of that you layer the file system, and on top of that file system you would then have a single OSD that sort of owns this big thing?
D: So, I think in that case, I guess the third direction to go is just figuring out how to avoid migrating data. And there I think you just have this fundamental trade-off between having a metadata-less way to find where your object is stored, versus having to store some metadata about where it goes and then doing a lookup of some sort before you find it.
D: In the simplest case, yeah. I mean, the way the current RADOS stuff is structured, you can break things down by pool, and each pool can have its own scheme for mapping objects onto placement groups and placement groups onto OSDs. Right now that's all done through a hash function and CRUSH, but you could presumably plug in something else. The key thing is that right now it's all defined in the OSD map, which is a compact structure.
D
So
if
you
can
come
up
with
a
way
where
you
know
knowing.
D
Where
that's
enough
information
to
find
out
where
your
object
goes
without
having
to
move
stuff,
then
then
that
that
would
work,
but
I
think
it's
tricky,
because
if
you
want
to
arbitrarily
name
your
objects,
then
you
can
fill
in
like
parts
of
the
namespace
that
have
already
written
to
a
particular
osd,
and
you
want
to
make
them
not
go
there.
You
want
to
make
them
go
somewhere
else
like
you
have
to
sort
of
like
name
them
based
on
when
they
were
written,
perhaps,
and
that
sort
of
makes
things
different
right.
D: Yeah, yeah. Well, if you name things based on when they were written, with some stability, then you could say that objects written in this time window — then you could have a compact representation that says: this slice of time is written to this device, and that slice of time is written to that device. That could be relatively compact, in that its size is on the order of the number of devices, so it's not going to expand the size of the OSD map by an order of magnitude or anything like that, and that would work, right. Or you could be clever, where you have this longer time period that you're hashing over.
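A minimal sketch of that compact representation, assuming an object's creation time can be recovered from (or is embedded in) its name. The table grows with the number of deployment events, not with the number of objects.

    import bisect

    # (slice start timestamp, devices used for objects created in that slice)
    TIME_SLICES = [
        (0,          ['osd.0', 'osd.1', 'osd.2']),
        (1400000000, ['osd.3', 'osd.4', 'osd.5']),
        (1410000000, ['osd.6', 'osd.7', 'osd.8']),
    ]

    def devices_for(created_at):
        starts = [start for start, _ in TIME_SLICES]
        idx = max(bisect.bisect_right(starts, created_at) - 1, 0)
        return TIME_SLICES[idx][1]

    # An object written in mid-2014 lands on the second group of OSDs:
    print(devices_for(1405000000))   # ['osd.3', 'osd.4', 'osd.5']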
D: Okay, there is a...
B
But
one
one
question
to
that:
this
would
mean
that
I
have
to
have
the
name
of
an
object.
This
is
the
one
that
goes
into
it.
So
if
I
have
the
the
power
of
naming
the
object
right,
this
is
fine,
but
if
you
don't
plug
it
into
a
ffs,
maybe
this
would
be
problematic.
I
think.
D
Right,
but
if
this
is
intended
for
cold
storage,
then
you
will
be
probably
not
putting
things
directly
in
this
pool,
but
you'll
be
putting
it
in
a
different,
rados
tier
and
then
using
the
redirects
to
point
off
into
this
other
thing,
and
so
in
that
situation
you
can
sort
of
arbitrarily
name
your
object
at
that
point
right,
because
you.
D
You
have
to
have
a
pointer
that
you
can
control
the
thing
that
the
name
of
the
thing
that
points
to,
but
if
you
have
that
pointer
in
where
the
object
would
normally
have
been
stored,
if
it
were
hot,
then
then
you
can
sort
of.
D
Okay,
yeah.
Well
I
mean
you
could
sort
of
you
can
sort
of
for,
for
the
purposes
of
like
figuring
out
how
this
would
work.
You
can
just
assume
that
you
have
control
over
naming
the
object
or
less
control
over
naming
the
object
right.
So
there's
a
portion
of
the
object
that
you
don't
control
or
or
whatever
yeah.
However,
you
want
to
structure
it
like.
D
Maybe
you
ask
the
system,
give
me
an
object
name,
or
maybe
you
say,
maybe
it's
a
combination
of
the
part
that
you
don't
control,
that
the
system
provides
and
the
part
that
you
do
control,
but
I
think
the
thing
that
you
might
look
at
that
sounds
like
it
might
line
up
with
what
you're,
what
you're
trying
to
do,
the
original
the
crush
list
buckets
are
based
off
of
the
original
rush
algorithm,
which
was
based
on
this
idea
that,
when
you're
deploying
a
big
cluster,
you
sort
of
deploy
it
in
stages.
D
So
I
would
first
deploy
my
first
100
disks
and
I
would
hash
across
them
and
then
those
at
some
point
those
would
fill
up
and
I'd
deploy.
My
second
chunk
and
so
the
way
that
the
hashing
algorithm-
it
was
basically
an
exception
that
says
this
percentage
of
the
objects
are
stored
in
the
new
stuff
and
if
they
weren't
sorted
there,
then
they
go
to
the
old
stuff.
And
then
you
have
the
third
thing,
and
so
it's
basically
a
list.
Is
it
in
the
newest
category,
if
not
as
in
the
second
newest
category?
D
If
not,
is
it
in
the
third
news
category
and
I
think
that
that
type
of
scheme
could
be
used
with
this
sort
of
time,
slicing
idea
where
you
say
during
this
period
of
time
I
had
these
devices
and
they
were
sort
of
randomly
scattered
across
them
and
then
later
they
filled
up
or
I
deployed
whatever
some
event
happened.
And
then
I
had
sort
of
the
new
view
of
things
where
I
store
them
in
the
new
place
and
then
and
and
so
on.
D
But
you
know
eventually
you're
going
to
have
the
situation
where
you
want
to
decommission
the
old
thing
and
so
you'll
have
to
say
like
10
of
what
used
to
be.
There
is
now
you
know,
got
shoveled
back
into
the
new
thing
or
something
I
think
you
could.
D
You
could
have
some
sort
of
layered
exception
type
approach
to
defining
how
there
that
makes
I'm
sort
of
waving
my
hands
here,
but
I
think
that
general
idea
could
probably
be
applied
that,
in
context
with
with
the
idea
of
slicing
things
up
over
time
periods,
I
think
would
get
you
your
compact
representation.
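A hand-wavy sketch of that layered, list-bucket-style traversal: walk the stages from newest to oldest, let each stage keep a hashed fraction of whatever reaches it, and fall through otherwise. The stage an object starts in could equally be chosen by its time slice, as in the previous sketch; the stage names, devices, and fractions here are hypothetical.

    import hashlib

    # (stage name, devices, probability that an object reaching this stage stays here)
    STAGES = [
        ('stage-3', ['osd.200', 'osd.201'], 0.25),   # newest first
        ('stage-2', ['osd.100', 'osd.101'], 0.50),
        ('stage-1', ['osd.0',   'osd.1'],   1.00),   # oldest catches the rest
    ]

    def _unit_hash(*parts):
        # Deterministic pseudo-random value in [0, 1) derived from the inputs.
        digest = hashlib.md5('/'.join(parts).encode('utf-8')).hexdigest()
        return int(digest, 16) / float(1 << 128)

    def place(obj_name):
        for stage, devices, keep_probability in STAGES:
            if _unit_hash(obj_name, stage) < keep_probability:
                # Pick a device within the stage, again by hashing.
                return devices[int(_unit_hash(obj_name, stage, 'dev') * len(devices))]
        return STAGES[-1][1][0]   # unreachable while the last probability is 1.0

    print(place('report-2014-10'))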
B: For the time slicing — when I have control, and assuming that we have these stubs — would it be possible to also integrate object metadata there? So again: as a user I mark my file as blue, with a user attribute "blue", and that "blue" also goes into the hash function.
D
I
think
I
think
that
you
would,
I
mean
I
think
you
could
do
that.
You
could
just
make
it
part
of
the
name
even
right,
but
I
think
it
would
be
probably
more.
D
My
gut
tells
me
it'd
be
more
useful
to
put
that
sort
of
at
the
next
layer
up,
and
so,
if
you
have,
if
you
consist
exterior
cluster,
has
a
bunch
of
different
logical
pools
and
one
pool
is
this,
like
ultra
cold
and
one's
warm
and
one's
hot
or
whatever,
but
the
application
is
sort
of
talking
to
the
overall
data
collection
and
marking
your
object,
and
then
the
system
is
deciding
where
to
put
it,
which
pool
to
put
it
in
based
on
the
temperature.
D: But maybe you buy this fancy new cold storage thing, and then you look at the existing metadata and decide how you map that onto the low-level storage back end — you could define that later, at a different point in time, if that makes sense. I think it would be a little bit more flexible to do it one layer up.
B
Yeah,
it
sounds
good
okay,
so
I
think
we
have
a
lot
of
work
and,
for
the
master
see
this
yeah
okay,
but
you
think
that
might
come
out
of
that.
Yeah,
okay,
okay!
So,
but
do
we
have.
A
D: It'll probably run a little short, yeah. You can go on if you want, if you have more questions or whatever.
C: No, not really. I mean, it all comes down to: we need object redirects to do anything, I guess, yeah.
D
But
I
think
I
think
the
case
for
having
object
redirects
will
be
bolstered
by
having
a
good
thing
to
redirect
to
so
I
think
you
could
for
the
for
the
purposes,
at
least
of
your
master's
thing,
you
could
probably
just
assume
that
a
capability
like
that
exists
and
then
and
figure
out
how
you
would
leverage
that,
like
how
you
would
structure
a
a
logical
pool
that
that
sort
of
does
that.
C
The
thing
that
you
can,
what
do
you
think
how
much
work
is
it
to
actually
implement
that.
D: I'm always the wrong person to ask about that. So we had sort of two pieces of the tiering picture that we originally proposed: one was the cache tier that logically sits in front, and then the cold tier that we redirect to, which sits below. And the cold tier was definitely more complicated in terms of all the interactions that would happen — promoting and handling the races and so forth.
D
So
we
implemented
the
cashier
one
and
it
took
like
at
least
twice
as
long
as
we
thought
it
would
to
be
generous,
and
so
I
think
the
the
demotion
one
would
be
more
more
complicated,
it's
possible
that
we
could
simplify
it
somewhat
from
what
the
we
originally
proposed.
But
I
have
to
look
at
the
blueprint
in
more
detail.
D: Yeah, it depends — it's a non-trivial amount of coding, so it depends on how deep into the stuff you want to get. Again, I think you would get more mileage out of focusing just on the cold tier: assume that the redirect stuff exists and then implement that, because that's probably where most of the interesting stuff happens, and the redirect part is just sort of the mechanics of integrating everything into the current structure.