Description
RBD, the RADOS Block Device in Ceph, gives you virtually unlimited scalability (without downtime), high performance, intelligent balancing and self-healing capabilities that traditional SANs can't provide. Ceph achieves this higher throughput through a unique system of placing objects across multiple nodes, and adaptive load balancing that replicates frequently accessed objects over more nodes. This talk will give a brief overview of the Ceph architecture, current integration with Apache CloudStack, and recent advancements with Xen and blktap2.
Right, thanks. I may be last, and quite possibly least: I have the unenviable task of being the last guy between you all and beer, so we'll see if we can't get there a little faster and stay ahead of schedule. I'm Patrick McGarry, one of the community monkeys over at Inktank. We're the company that's bringing Ceph to the world. How many of you are familiar with Ceph? All right, awesome.

That means there's a whole chunk of my presentation that we can just fly right past, which is even better. A little bit about me: like I said, I'm working for Inktank, mostly doing the community stuff for Ceph. I cut my teeth on community work at Slashdot, which is where I met Ross. For those of you who know him, he was SourceForge and I was Slashdot. I did a stint at AOL to get that whole "I worked for a big company" thing out of my blood (now I'm done with that forever), and then worked for Perforce. Now that I'm here at Inktank, I finally feel like I've come home to the open source world again, which is really nice. I'm also scuttlemonkey on SlideShare.
If you want to see these slides, they're up there. So here's what we're going to talk about. I'll breeze through the 30-second overview of Ceph real quick; I also have a little deeper dive on Ceph, which I can touch on in case there are questions, so go ahead and stop me or whatever, but it sounds like most folks here probably know the basics, the 101. Then we'll touch on Ceph in the wild, which is the important part, right? That's the CloudStack piece, the Xen piece, and then all the rest of the various things we can touch on a little bit too. Something I've been geeking out about a lot lately is the orchestration piece, so I like to touch on that a lot and compare and contrast some of the tools. The nice part is that Ceph plays with basically all of the ones I've seen so far. I learned about a new one this week, one called Deploy, which I know nothing about.
So maybe not that one for Ceph. I can also talk a little bit about the community status in case there are questions about that, then what's next, and if there are any questions we can wrap those up at the end.

So what's Ceph, besides wicked awesome? It's software. That's the biggest distinction that I think some people don't understand, especially when we say we're here to steal EMC and NetApp's lunch money: it's software-defined storage, just a software daemon that runs on Linux. That's basically all it is, and it's really cool because it runs on commodity hardware and lets us do storage really cheaply, with no single point of failure. The other really interesting thing is that it's all in one: object, block, and file, all in a single cluster, with a lot of intelligence built right in. A lot of people who are familiar with storage, the storage admins, will ask me, "Well, how many admins do I need to run this thing?"
They'll tell me that historically they get n terabytes per storage admin; it's kind of their rule of thumb, and everybody has some rule of thumb that they go by. My favorite anecdote to answer with is this: DreamHost has two Ceph clusters, one is 3 petabytes and one is 5 petabytes, and they're both run by a single guy, part time. That usually blows their mind for a minute, which is fun.

And then CRUSH. CRUSH is the secret sauce that makes Ceph so powerful. It's the part that makes Ceph infrastructure-aware, and it's the placement algorithm that handles all of the data placement, so it's really cool stuff. It came out of the original research Sage did at UC Santa Cruz, which led to what Ceph is now. And then my favorite part about Ceph is the scale it's meant for: huge, huge amounts of data. It was originally designed for supercomputing applications, so it was designed for exabyte scale. We haven't hit a limit yet.
I'm looking forward to the day that someone decides to try. So yeah, that was fast. You can find out more at ceph.com; there's all kinds of good stuff there. Our doc writer, John Wilkins, is kind of a superhero; he's amazing. And if you want to use it, you can play with DreamObjects: it's public, it's basically an S3 competitor, and it's all based on Ceph. Of course, if you know anybody who wants to pay for it, Inktank is more than happy to take their money.

So this is kind of the marketecture diagram.
It explains how Ceph all fits together. Underneath it all, as I'm sure most of you know at this point, it's an object store. We get some really cool things for free by putting an object store underneath: we don't have to worry about the hierarchy that comes with basing it on files or anything like that, or all of the incorporated metadata that goes along with it. Interestingly, Ceph doesn't actually have a whole lot of metadata unless you're dealing with CephFS. On top of the object store, we expose it via three different interfaces. We have the RESTful APIs, which are the S3 and OpenStack Swift gateways; we have our virtual disk, the block device, which is Ceph RBD; and then we have CephFS, which is the POSIX-compliant scale-out file system.
For the developer-centric folks, I usually like to show this picture, because there are actually two object interfaces. One of them is the low-level library interface, librados, if you want to roll your own.
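To make the "roll your own" path concrete, here is a minimal sketch using the Python librados bindings that ship with Ceph; the conffile path, pool name, and object name are all placeholders for your own setup.

```python
# Minimal librados sketch (Python bindings shipped with Ceph).
# Assumes a running cluster, a readable /etc/ceph/ceph.conf, and an
# existing pool named "data": all placeholders, not a fixed recipe.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')   # I/O context for one pool
    try:
        # Write a whole object, read it back, and tag it with an xattr.
        ioctx.write_full('greeting', b'hello ceph')
        print(ioctx.read('greeting'))            # b'hello ceph'
        ioctx.set_xattr('greeting', 'lang', b'en')
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```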
Of course, the basics of Ceph: you start with some number of disks. You have a big pile of disks in a data center somewhere, and you're going to throw some arbitrary file system on top of them. We decided we didn't want to reinvent the wheel, so we take advantage of existing file systems. Our favorite is btrfs, but obviously it's not quite there yet (it's been "not quite there" for about a decade), but we think it's the future, we hope it's the future; there's some really cool stuff in it, like the underlying cloning, that really makes it attractive. Most of the customers we have running in production are running on XFS; the larger extents and extended attributes are what get most of the power out of that one. Of course you can also run it on ext4, and now ZFS; this slide is out of date, but ZFS is also in that list.

And then, of course, that's one rack-mounted server, and you have many, many, many of these OSD machines. The OSD is what runs on top: the software daemon, the object storage daemon. So you have many, many of these servers, and then some small number of monitors in there as well, which are kind of the air traffic controllers. They herd the cats and do some authentication stuff, but they are actually not in the data path. That's the cool part about CRUSH. CRUSH is that placement algorithm, and it's what allows the clients to calculate where the data should go, or where that data should be living, and go directly to the OSD. So there's none of that single-name-node lookup slowdown that you have to worry about; it's pseudo-random placement.
CRUSH is also the thing that lets you define, via a CRUSH map, what your infrastructure looks like in the data center. You have n disks in y servers in x racks in some number of rows, and based on that you can create rules about where you want your data to live. Want a fast data pool? You can say "this pool is going to use my SSDs." You can combine some number of spinning-rust drives with one SSD that handles the journal for all of them, or you can create your own failure domains based on power circuits or whatever. So CRUSH is actually relatively simple, but also quite powerful in terms of what you can do with it.
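The real CRUSH algorithm is a hierarchical, weighted pseudo-random hash, but the core idea can be sketched in a few lines: placement is a pure function of the object name plus the cluster map, so every client computes the same answer with no lookup service in the middle. Everything below (the hosts, the rack layout, the PG count) is invented for illustration.

```python
# Toy illustration of CRUSH-style placement (NOT the real algorithm):
# placement is a deterministic function of object name + cluster map,
# so clients compute it locally. Hosts and OSD names are made up.
import hashlib

CLUSTER_MAP = {                      # failure domain -> OSDs inside it
    'rack1': ['osd.0', 'osd.1'],
    'rack2': ['osd.2', 'osd.3'],
    'rack3': ['osd.4', 'osd.5'],
}
PG_NUM = 64

def stable_hash(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def place(obj_name: str, replicas: int = 3) -> list[str]:
    pg = stable_hash(obj_name) % PG_NUM       # object -> placement group
    racks = sorted(CLUSTER_MAP)               # rule: one replica per rack
    chosen = []
    for i in range(replicas):
        rack = racks[(pg + i) % len(racks)]
        osds = CLUSTER_MAP[rack]
        chosen.append(osds[stable_hash(f'{pg}.{rack}') % len(osds)])
    return chosen

print(place('myfile.iso'))   # same answer on every client, every time
```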
This is the part we can probably breeze through; it just covers a bit of what happens when you want to stuff something into the cluster. You take the object and hash it into one of some number of what we're calling placement groups. These are just logical buckets, by default, that we cram onto the various servers. So what happens is your client hashes the object, and based on CRUSH it knows where that needs to live, so it sends it to the OSD and writes. The OSD then, based on your replication level, peers with the other OSDs and, based on where the data should live according to CRUSH, sends those writes out. When those writes have finished and been acknowledged to the primary OSD, it then sends the acknowledgement back that the write is done. Ceph is a strongly consistent system, so that's how it has to work. Of course you do this for many things all at once, and you kind of get this random distribution of data.
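As a toy model of that write path (nothing like Ceph's actual code), the property that matters is that the client's acknowledgement is withheld until every replica has the write; that is what makes the system strongly consistent.

```python
# Toy sketch of primary-copy replication: the ack goes back to the
# client only after every replica OSD has acknowledged the write.
class ToyOSD:
    def __init__(self, name):
        self.name = name
        self.store = {}

    def write(self, obj, data):
        self.store[obj] = data
        return True                      # local write acknowledged

def client_write(obj, data, acting_set):
    primary, *replicas = acting_set      # first OSD in the set is primary
    primary.write(obj, data)
    acks = [r.write(obj, data) for r in replicas]   # peer-to-peer fan-out
    if all(acks):
        return 'ack'                     # only now does the client hear back
    raise RuntimeError('replication incomplete; no ack sent')

osds = [ToyOSD(f'osd.{i}') for i in range(3)]
print(client_write('greeting', b'hi', osds))   # -> 'ack'
```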
That distribution is nice and pretty and even, and when the client comes in, it uses CRUSH to look these things up. It will read the original copy, or, if the original copy is gone because half your data center burns down or something, it knows where the other copies live and can go there. Speaking of which: if you have a node failure, once the decision is finally made that a node is down, the OSDs that have the replicas of that data will know, "Hey, we're no longer kosher with our replication level, so we've got to fix that." They will automatically peer with other OSDs based on the fresh CRUSH map, figure out where the data needs to live, move it, and then the client will know where to go. So, cool, we were able to breeze through that pretty quick. Now we'll talk about Ceph in the wild.
Anybody remember that show, "Wild America" with Marty Stouffer? I loved that show. So, Linux distros: no incendiary devices, please. We work with a pretty fair number of different distros. Obviously our roots are pretty heavily in Ubuntu; that's where we originally did most of our writing and testing, but now we're in EPEL, and I hear rumors that we'll all be really happy very soon. There are packages for all of these, so at this point it's pretty easy to deploy, depending on how you want to get it out there.

Of course there's OpenStack; there's a lot of stuff there that I'll breeze past very quickly. But the nice and fancy one is CloudStack. This one is a lot of fun because, as a community manager, I love this story: this integration came entirely from the community. This wasn't something Inktank decided, "Hey, we're going to do this, it's strategic," or biz dev or whatever the hell.
It was a guy in the community who said, "I'm using CloudStack and I want to use Ceph," and he wrote it. That's Wido from 42on; a few of you probably know him. So right now you can use Ceph as alternate primary and secondary storage. A lot of the snapshot and backup support he's been working on is coming in 4.2; I just talked to him last week and it's all done, the package is ready to go, so we're just waiting for 4.2 to arrive with it. He's also working on some RBD Java bindings for some of the other stuff. Right now qemu and libvirt create images in format 1 by default, and he had to do a little bit of hacky stuff to make format 2 work. That's kind of where that's at now; I guess it could use some polish, but the functionality will be there in 4.2.
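For context, the plumbing underneath looks roughly like this: libvirt attaches an RBD image to a guest as a network disk. The sketch below just renders that disk XML; the pool, image, monitor host, and secret UUID are placeholders, and the element layout follows libvirt's documented RBD network-disk schema rather than anything CloudStack-specific.

```python
# Sketch: render the libvirt <disk> element for an RBD-backed volume.
# Pool, image, monitor address, and secret UUID below are placeholders.
DISK_XML = """\
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='{pool}/{image}'>
    <host name='{mon_host}' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' uuid='{secret_uuid}'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>"""

print(DISK_XML.format(pool='cloudstack', image='vm-disk-1',
                      mon_host='mon1.example.com',
                      secret_uuid='00000000-0000-0000-0000-000000000000'))
```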
All right, and this one I blatantly ripped off from Wido. It's a good diagram of how it works, whether it's KVM or Xen; this is the logical flow of how things fit together. The management server talks to the agent, the agent runs the KVM hypervisor, and so on. I can leave that up for a minute, but the important part here is that the management server never talks to the Ceph cluster, so it keeps that logical separation. That makes it easy: there aren't thousands of hypervisors hammering it, and the management servers can be clustered. You're probably all familiar with CloudStack, but the cool part is that Wido has actually started playing with different implementations that use multiple Ceph clusters for different workloads, multiple pools, region stuff, so he's been a good test bed for some of the region work, if you saw Sage's talk this morning about the geo-replication we've been working on.
There's a lot of thought going into that. The gateway and the block device already have answers to geo-replication, mostly from a disaster-recovery standpoint, but the next thing on the way, which we're all really excited about, is that the underlying RADOS infrastructure is actually going to have the ability to define regions and zones and do multiple geographically-aware pieces. So that's nice.

Of course, I couldn't say "cloud" without talking about some of our other friends. We're in the SUSE cloud, and we work with Ganeti, Proxmox, and OpenNebula. There's actually a talk next week in Berlin, if any of you are going to make it that far that fast: Joe Merrick from the BBC is talking about his adventures in research, his experiments with Xen and KVM, OpenStack and CloudStack, with some of his Ceph stuff in there. There's some really cool stuff there, and it's always nice to see it through the eyes of a user. It's definitely worth it.
Check it out if you're going to be in the area, or I think they might actually be live-streaming the event too, if you can find that.

Beyond the cloud stuff, there's project intersection. We obviously have close ties with the kernel: for a long time we've had native clients for RBD and CephFS and a lot of active development in the Linux kernel. Alex Elder, one of our guys, actually made the top contributors list in the report that came out this week, which we're all very excited about; he got major cool points for that in the office. We have things like a Wireshark plugin, we've done some work with iSCSI via the tgt library and are working on LIO next, and there have been some creative solutions: people who wanted to use Ceph but had Windows infrastructure to deal with, so they were using VMware, and one guy got really creative and did Fibre Channel into Ceph so that he could back his VMware infrastructure with it. So there's definitely some cool, hacky project intersection as well.
This has actually been really exciting for us, because it's another thing that's come largely from the community. We didn't really push hard for it, but we're definitely happy to help it along, alongside the existing support in libvirt. It looks like most of the faces around here are Xen experts: we talk about a block device, XenServer talks about a VDI, and libvirt talks about a storage volume; it's storage repositories versus storage pools. It's just different vocabulary for the same ideas. The guy who's been doing a lot of the work on the Ceph and Xen integration actually gave a really good talk, in London I think it was; it's made the rounds on YouTube if you've seen it. I ripped this slide off from him because it was just perfect.
So I can talk a little bit about block, object, and file, but I'll probably breeze through these, and if there are questions we can touch on them. The cool parts about each of these for Ceph mostly come back to the wins we get from having an object layer underneath. One of the really cool parts, in terms of a block device, is that it lets you do things like squash hotspots: because you're taking that block device and striping it over a number of physical hosts at the object layer, you actually get to parallelize a huge amount of your workload, so your block device can be arbitrarily large or arbitrarily busy and it doesn't matter. We can also do things like instant clones and live migration, all that stuff, because it's all the same storage back end.
object.
You
know,
like
I,
said
we
do
Swift
and
s3
there
well
established
api's.
B
B
This
is
this:
is
the
secondary
storage
part
for
CloudStack
and
there's
also
some
very
easy,
horizontal
scaling
it
plugs
into
existing
things
like
you
can
just
put
them
behind
an
H,
a
proxy
box
and
and
you're
ready
to
go?
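Because the gateway speaks stock S3, existing clients just work. A minimal sketch with classic boto, where the only thing that changes compared to talking to Amazon is the endpoint; the host and keys are placeholders.

```python
# Sketch: talking to the RADOS gateway with plain boto, just as you
# would to S3. Host and credentials below are placeholders.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='radosgw.example.com',          # your gateway, not s3.amazonaws.com
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('demo')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello world')
print([b.name for b in conn.get_all_buckets()])
```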
And this is the file system, which I haven't spent a whole lot of time talking about, much as we haven't spent a whole lot of time QA'ing it, which is why we aren't telling people to go ahead and use it in production yet. The metadata server only handles the directory and timestamp kind of metadata; again, it's not in the data path. For the data, you still go directly to the OSDs; you just also pull that extra little bit from the metadata server. And the metadata layer also has the ability to be horizontally scalable.
You can turn on many, many of them. This is one of those experimental parts that hasn't been QA'd a lot, but the Ceph metadata servers have the ability to spin up many instances, and we have what we're calling dynamic subtree partitioning. As a directory gets busy and you develop hotspots, all the way down to a single file or a part of the tree or whatever, the metadata servers will shuffle the load between themselves as usage changes. You can even end up with a single metadata server serving a single file.
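A toy model of that idea (nothing like the real MDS code): track load per subtree, and migrate authority for the hottest subtree to the least-loaded metadata server. The paths, server names, and load numbers are all made up.

```python
# Toy model of dynamic subtree partitioning: hot subtrees migrate
# to the least-loaded metadata server. All values are invented.
subtree_load = {'/home': 120, '/home/build': 900, '/var': 40}
mds_assignment = {'/home': 'mds.a', '/home/build': 'mds.a', '/var': 'mds.a'}
mds_load = {'mds.a': sum(subtree_load.values()), 'mds.b': 0}

def rebalance():
    hot = max(subtree_load, key=subtree_load.get)       # hottest subtree
    cold_mds = min(mds_load, key=mds_load.get)          # least-loaded MDS
    src = mds_assignment[hot]
    if src != cold_mds:
        mds_assignment[hot] = cold_mds                  # migrate authority
        mds_load[src] -= subtree_load[hot]
        mds_load[cold_mds] += subtree_load[hot]

rebalance()
print(mds_assignment)   # '/home/build' now lives on mds.b
```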
So, this is the deployment stuff I've been geeking out about, and I like to touch on it because one of the questions I get most often is, "OK, Ceph sounds cool. How can I use it? How do I get there from here?" So I like to touch on the orchestration stuff. Obviously there are Chef and Puppet; these guys are maybe the 800-pound gorillas in the room, the mature options that most people have heard of, with Chef aiming more at the dev side of DevOps and Puppet more at the procedural sysadmin kind of crowd. Ansible and Salt, though, are kind of the other end of that spectrum; they're the ones that have gone from zero to hero in a very short amount of time, and each of them has its own strengths. We heard about Salt earlier.
Salt is really cool because it's fast, fast, fast. I've seen people do some crazy, silly things with Salt at scale, deploying thousands of things at the same time, and it's just ridiculously fast. Ansible is kind of neat; I like it because it's agentless, so it's very lightweight, a very light touch, and it has kind of a different mindset. And then there are some more options. Juju is kind of my favorite when I'm just tinkering and playing around with things; it seems to work the same way my brain does, which might be backwards, but I'm not sure. Canonical has done some really fun things making it relatively agnostic: if you already like Chef and you want to do stuff with Juju, you can just take your Chef recipe and wrap it in Juju, and the same goes for Puppet, or Python or bash, or whatever you want to use to deploy. You can wrap it in Juju, which then gives you the advantage of being able to talk to MAAS.
Dell has some skin in the game with Crowbar, which I threw in there basically to say that there are a lot of people home-rolling their own thing; there are so many flavors out there. And then there's ceph-deploy, which is kind of our "do it without a tool" option: the quick path, eight commands or whatever, to get yourself to a Ceph cluster if you really don't want the overhead of somebody else's tool.
Community: I'll touch on the community stuff just a little bit; I just wanted to throw some slides in there for the people who are voracious about downloading stuff off of SlideShare. A little bit of Inktank history on Ceph: there were kind of four main periods of Ceph development, and this slide actually shows the cumulative number of authors, to show the growth at each of these inflection points. There was the research project, which is the first block, at UC Santa Cruz. We've seen some really cool code contributions; I just wanted to show the employee contributions versus the non-employee contributions, and it's up and to the right. I don't want to spend a lot of time on the commits, but this commit graph is actually pretty cool. I don't know if you can see it very well; I guess not. The blue at the bottom is Sage.
He has some really good deep dives into the thinking around that stuff; it turns out there's a lot to think about, all the way from clocks on up. The erasure coding work is actually nearing completion; there's been some good work there. It's especially interesting as it relates to the thing next to it on the slide, which is tiering: some folks want to have dynamic tiering across multiple tiers, where data gets hot and moves up to the SSDs.
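Erasure coding in one toy example: instead of storing full replicas, you store k data chunks plus m parity chunks and rebuild a lost chunk from the survivors. The sketch below is a trivial XOR code with k=2, m=1; Ceph's real plugins (jerasure, for example) generalize this to arbitrary k and m.

```python
# Toy erasure code: 2 data chunks + 1 XOR parity chunk. Losing any
# one chunk is survivable at 1.5x overhead instead of 2x-3x replicas.
def encode(data: bytes):
    half = (len(data) + 1) // 2
    a, b = data[:half], data[half:].ljust(half, b'\0')  # pad to equal size
    parity = bytes(x ^ y for x, y in zip(a, b))
    return a, b, parity

def recover_b(a: bytes, parity: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, parity))      # XOR undoes XOR

a, b, p = encode(b'erasure coding!!')
assert recover_b(a, p) == b   # lose chunk b, rebuild it from a + parity
```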
So we've been talking a lot about this, and we keep making progress every quarter: as we approach each stable release, we hold our Ceph Developer Summit, which is a virtual summit. Thus far it's been on Google Hangouts, but I don't think that's going to continue, because we have too many people for it. We run a blueprint process, so anybody who wants to write something, or wants to see something written, has a submission window where you say, "Hey, here's a blueprint."
"This is what we want," and then we all get together at the developer summit and talk about how we're going to get there. Of course, I wouldn't be a community manager if I didn't plug getting involved: there's the developer summit, which I talked about, and we have a number of Ceph Days. We've done three at this point, one in Amsterdam, one in New York, and one in Santa Clara last week, and we have one coming up soon in London, the second week in October, I think. If you want ideas, the developer summit and Ceph Days are great places for face-to-face, meatspace communication, where we figure out, "Hey, what's happening? How can I help?" There's also IRC and the lists, of course, and if you're looking for project ideas, we have project ideas on our wiki; the Redmine is obviously the easiest place to look, because that's where everything is kind of held in the brain trust. And again, IRC and the lists. So, questions?