From YouTube: CEPH
Description
Ceph by Inktank and Red Hat at the Colorado OpenStack Meetup
It's great to see a full house here. For those of you who are very, very new, this is Ian Colle.
This is Ian from Inktank; Red Hat recently acquired Inktank. I believe it's being run as a wholly owned subsidiary, so I think they're retaining some of their Inktank identity. We are for now, we are for now, okay. He's based, I think, in Golden, Colorado, and he made the trip here to share with us a little bit about Ceph and what's going on with Ceph and OpenStack. In the back we have some leftover HP t-shirts.
Let's see, next month's meetup is not fully planned yet. Is that right? Yeah, but we're talking about a couple of different things. It'll be sometime later in the month; we know Thanksgiving's coming, and the Paris summit is at the beginning of the month, so somewhere in there we'll wedge in another meetup. We've talked about a couple of different things, and we hope to hold it at CableLabs. Isn't that exciting?
The room we wanted for this week was still being upgraded, so it'll be a brand new room when we get there, if and when we get there in November: at least a new projector, or new speakers. And as those of you who have been there know, that's an awesome facility. It's not located right next to a Panera or anything, but it is still an awesome facility.
So look for details on that later this month. I'm going to ask Ian to take another drink of water and then I'll ask him to actually introduce himself, because he can do a much better introduction than I can. I do believe he's a vice president at Red Hat, but that's about as far as I can go. It is showing my charts, but it's not showing my presentation mode, even presenter view.
Or like the kids I had, who were trying to hide their phones so they could text their friends in the other classes while I was asking more questions. My name is Ian Colle. I am the, well, it's some fancy name, I'm Global Director of Software Engineering, but the only reason that sounds so fancy is because it basically means that everybody who works on
Ceph works for me, and so they threw the little "global" thing in there, because I've got people from Crimea. Yeah, a guy that used to live in Ukraine and now lives in Russia; he didn't move, the borders moved. And guys in China, Lisbon, Atlanta, even far-off Minnesota. So we have a team that's spread out, completely globally distributed. Now, we do have an office in L.A.
I can't promise anything, but stay tuned, maybe next year. Of course, I've given that same speech for the last two years that I've been with Inktank: maybe next year. So where things really took off was in 2010, when Sage was able to get Ceph into the Linux kernel, and then in 2012 he actually started a company. He had tried to run this out of his own little skunkworks at DreamHost, which was the web hosting company he built up while he was in college outside L.A., and he finally decided,
you know, we need to stand up a real company that only does this; this can't just be this little backroom thing that I'm doing as an offshoot of my web hosting company. So he started Inktank, and you see we've got our fancy new logo down here that says "Inktank by Red Hat." That's because, as we have updated things, well, it was officially, I think, April 30th, so I guess that's right, but my first day as a Red Hat employee was May 1st.
So we were acquired on April 30th by Red Hat. They basically brought everybody over from Inktank, including Sage and crew, and we have been operating essentially as an independent unit within Red Hat. We're getting slowly integrated into the various organizations; that affects more the business and salesy-type guys. Me, within engineering, they pretty much leave us alone. Other than, for those of you who got stickers, I'll apologize in advance: had they told me they were going to just send you Red Hat propaganda, I would have brought true Ceph stickers.
I thought they were sending Ceph stickers, and so unfortunately the only thing that says Ceph on there also has a Gluster thing on there. We're brothers now, we're all friends. But really, it's, you know...
Brothers, you know how it is with brothers: okay, there's a little bit of a friendly rivalry there, and that's good, that's good! That's one of the questions that we get a lot at Red Hat: wait a minute, you had Gluster, why'd you buy Ceph? Well, they also had XFS and Btrfs in the kernel. Is anybody complaining about that? Or ext4? Why is it a problem to have competing technologies? Personally, as an engineer,
so why are you guys here, and why should you care about Ceph if you're interested in OpenStack? This isn't a storage meetup, after all; this is an OpenStack meetup. Why should you care about Ceph? Well, it'll be interesting to see the next survey results that come out in Paris, but as of the one in Atlanta, if you're using storage for an OpenStack system other than the standard LVM, you're most likely using Ceph in your implementation.
But look at how tiny that is. Why is that? Can you take a guess? If you're involved in this OpenStack environment and you're pushing free, cheap, commodity, you're not going to want to also dial up your EMC guy and say, hey, here's a ton of money for storage that I'm going to run OpenStack on top of. And that's why it fits nicely with Ceph. We'll go more into the architecture principles, but Ceph was built, architected from the ground up, to be run on commodity hardware.
So again, here's the comparison between the two models. The old model is, okay, you buy everything proprietary. I've got my nice little box, it's in a rack, I'm not really sure what's in there; it's some secret sauce that I'm paying a lot for, and I've got a subscription contract that I'm locked in for, and basically I'm not really sure what's going on inside. That's the other part that really bugged Sage: he is adamant about open source.
We're going to build upon this, starting at the bottom here, and I'll read out the acronyms; it's built upon an object store. So when we talk about Ceph, people can mean many different things. Some people might mean the gateway, a lot of the time in OpenStack land they mean the block device, and then there's the file system. But Ceph comprises all of these: it's an object store at its heart, and then each of these components is built upon that.
So if you look at it, we recommend running an object storage daemon, an OSD, because, you know, there weren't enough OSD acronyms out there; most people say "object storage device," but we decided to try to rename that acronym. That sits on your standard Linux file system, and which one you use is up to you. Right now we're leaning towards XFS. Some people have ext4, but Btrfs still has some issues.
It's got some really good features; we'd love to see it a little more stable. And there are even some people using ZFS. So again, that's the whole idea behind this: you pick and choose what works best for you. Out of the box we tend to recommend XFS, but if you find a use case, or if you want to use one of these other ones, more power to you. And then that runs on whatever disk you want; these can be different sizes.
So when you see this little icon here, the green one with a little red under it, that's representing one of these OSDs, an object storage daemon, which is the daemon on top of the Linux file system on top of the drive. So if you want to roughly equate one of these boxes to a drive, you pretty much can. And then we have the M's, which stand for the monitors.
That's because these are the monitors, which are exchanging cluster health information via Paxos. What that does, by having an odd number, is it avoids a split brain. So you can't have two people saying "I vote for this" and two guys saying "I think the cluster status is this" and then going, oh, what do we do? You always have an odd number.
Now, you probably don't want just one; that's not much of a vote. So most clusters have three, and if you get a really large cluster you'll have five, like this example here. But typically you can get away with one if you're doing a proof of concept at your desk. That's funny, yeah; if you're building something in your home, yeah, go for it, but otherwise we do three. When you talk about a cluster, is it typically close by, like within an availability zone, or distributed? It's up to you.
I am the guy that holds a copy of this other guy's data, so I should probably know if he's up or not, because if he goes down and he's gone, I need to start moving some stuff around to make sure that I maintain my replication. The core of Ceph is built upon replication, and that's what ensures that you don't have to be dependent upon your storage device to save your data, because we've got, we recommend, three copies.
Typically in the examples I use here it's just two, for ease of use and simpler charts, but typically you keep three copies of all your data. So you've got the original and two copies that are floating around in your data center, to ensure that if something happens to one of them, you still have access to your data.
But what happens if two of them go away? That's where this starts to fall over. Well, hopefully you're doing something on the back end and you're moving stuff around, but typically this causes a lot of problems if there's any data outage, or especially if you add data, with just typical sharding. So that's where Sage came up with CRUSH.
What we do is chop up that object into a number of PGs and then feed that into CRUSH, and then CRUSH tells us where to put each of those PGs based upon the rules that you set via your CRUSH map. And that can change: if you have a power bus row go out, or you're moving an entire rack, you update your CRUSH map, and then the monitor takes that and pushes it out. So now,
this is one bus row and this is another, and I'm telling it I want one copy on each bus row, because I don't want to have to worry about my power going out, taking out both of my copies, and losing everything. So based upon that rule, it says: okay, I'm going to take this green chunk and put one over here on this drive, and its copy, I'm going to make sure it's over here and nowhere in this first row.
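A hedged sketch of expressing that kind of placement rule from Python. The monitor command mirrors `ceph osd crush rule create-simple`, with rack standing in for the power rows in the example; the rule name is made up and the JSON field names are assumptions based on the CLI.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Replicated rule that takes each copy from a different rack (failure domain),
# analogous to "one copy per power row" above.
cluster.mon_command(json.dumps({
    'prefix': 'osd crush rule create-simple',
    'name': 'one-per-rack',   # hypothetical rule name
    'root': 'default',
    'type': 'rack',
}), b'')

cluster.shutdown()
```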
Another thing we do is that your reads always go to the primary, because at the base of this, as we saw back with our OSDs, this is creating Linux files. So if you're doing a read, and there's a possibility you're going to read it again later, it might still be in cache on that primary device. So instead of having you read it from any of the copies, where it could possibly be a little colder, it speeds up your reads.
How does it know where to read? This is the big part: the client has the CRUSH map and it does the same calculation. It doesn't look up in any table where to find it; it does the same calculation using the CRUSH map and says, oh, I'm going to read this green bit, it's going to be here, or if this guy's not here, it's going to be here.
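You can see that calculation yourself: `ceph osd map <pool> <object>` recomputes the same CRUSH placement a client would. A hedged Python equivalent through a monitor command, with made-up pool and object names:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Recompute where an object lands; there is no lookup table, just CRUSH.
ret, out, err = cluster.mon_command(json.dumps({
    'prefix': 'osd map',
    'pool': 'rbd',           # example pool
    'object': 'my-object',   # example object name
    'format': 'json',
}), b'')
print(json.loads(out))       # includes the PG and the acting set of OSDs

cluster.shutdown()
```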
Well, these guys talk to each other: you're always talking to the OSDs that you're serving copies for, as well as your neighbors, and so they say, hey, I haven't heard from this guy in a while, something's going wrong here. And they'll talk to the monitor; they'll say, monitor, I haven't heard from this guy. And then the monitors will do their Paxos vote and they'll say, yep, we haven't heard from them, let's mark them down. So it's down.
The duration of those timeouts and everything is fully configurable by you. So maybe you know that your drives tend to flap, and you don't want to necessarily mark one down, and you're okay with it maybe being up and down, up and down a little bit; then you can configure it so that it won't be marked out of the cluster immediately. Maybe you'll be a little more graceful and wait for it to come back. Or you could be extremely stringent and say, no, any time it goes quiet, mark it out.
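Those knobs are ordinary Ceph options. As a sketch only, the librados binding can read and override them in a client context; the option names below are real, but the values are just illustrative, and in a real deployment these usually live in ceph.conf on the daemon hosts.

```python
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')

# How long an OSD can go quiet before its peers report it (seconds).
print(cluster.conf_get('osd_heartbeat_grace'))

# Illustrative overrides for a cluster where drives are known to flap:
cluster.conf_set('osd_heartbeat_grace', '30')
cluster.conf_set('mon_osd_down_out_interval', '900')  # wait longer before re-replicating

cluster.connect()
cluster.shutdown()
```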
My primary yellow was over here; I've got an updated map, so now I know if I want to read, it's down here. Does the client have to know about every single disk, basically, whether it's up or down? It doesn't know that; it just gets told by the monitor: here's your updated map, and if you're looking for data, here's what you want to use. It's computational on the CRUSH map. The CRUSH map will just spit out a different address; it doesn't know anything.
It's similar to a hash value that changes with every admin change: you just rehash what you're looking for and you end up getting a different address. It's not memorized anywhere. So it's like a single factor that changes? It's actually a set of maps. What happens when you, say, roll in new storage?
So what happens is basically it replicates using the updated CRUSH map and pushes stuff around. Now, what you can do is throttle this, so you don't get, you know, oh my god, there's this bandwidth storm because you rolled in a new rack of storage and it's doing all this replication and you can't serve I/O to existing clients or anything. So you can set a priority where it'll do this in the background, and it's migrating data around slowly while you continue to serve.
So this is how we start building stuff out of those objects. At its core level is librados, which is a library that has straight socket connections into that cluster, and that library has bindings for your favorite language: Python, C++, even Erlang. I haven't played with that, but you know, if you're into Erlang you can. This allows you, if you're some high-speed, I think the term we're allowed to use, rather than giving names, is a
"web scale" company, and you can guess who that might be. If you were doing that, then you could write your own applications, because you don't want to mess with anything else. You don't want to mess with RBD, you don't want to mess with RGW, you don't want to mess with anything that we've built on top of it. You want to be screaming on the socket: as fast as you can, give me bits of storage. Okay, that's what you want.
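What that looks like in practice: a minimal librados sketch in Python, straight to the object store with nothing from RBD or RGW in the path. The pool and object names are made up; the C, C++, and other bindings follow the same shape.

```python
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on a pool and talk to RADOS directly.
ioctx = cluster.open_ioctx('my-pool')         # example pool name
ioctx.write_full('greeting', b'hello ceph')   # store an object
print(ioctx.read('greeting'))                 # read it back

ioctx.close()
cluster.shutdown()
```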
You can tune it down to 4K; it's all configurable. That's one thing, if there's one, I'll say, frequent criticism of Ceph, it's that there are so many knobs. That's because Sage really wanted it to be: you can do whatever you want to do.
I'm saying, just as a community member of Ceph, the community is very, very strong and self-supportive. People are always in IRC. I mean, we as Red Hatters are in there all day, but you have people from Europe and Asia that are in there, so #ceph on IRC is manned 24/7. And that's on OFTC, typically, not Freenode. So if you're looking for it, look on OFTC IRC, and if you wind up on Freenode, I try to point people back gently.
Now, the specifics of your use case are where you need to be sure. Work with us; make sure that if you've got some special gucci portion of the API that you really need to have, we stand behind that, because we're not 100% compliant with the S3 API or the Swift API.
C
Three
or
four
years
some
people
are
on
b2.
Most
people
are
on
e1.
A
lot
of
the
swift
functionality
isn't
really
in
the
api
it's
buried
in
swift
itself,
so
it's
hard
to
satisfy
what
people
really
want.
Sometimes
people
think
they
want
something
from
swift
and
it's
being
exposed
via
underlying
swift
storage
system,
not
necessarily
via
the
api,
so
we're
pushing
the
swift
community
to
put
more
of
their
functionality
truly
in
the
api
layer
so
that
we
can
ensure
that
we're
compliant
with
it,
that's
the
gateway.
C
C
C
C
C
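Because the gateway speaks most of the S3 API, ordinary S3 tooling can simply point at it. A hedged sketch with boto, along the lines of the examples in the Ceph docs of that era; the endpoint and credentials are placeholders.

```python
import boto
import boto.s3.connection

# Point a standard S3 client at the RADOS Gateway instead of AWS.
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',          # placeholder
    aws_secret_access_key='SECRET_KEY',      # placeholder
    host='rgw.example.com',                  # your gateway endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from rgw')
```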
I know after some of the consolidation announcements this week, people are wondering how long some of these to the right of OpenStack are still going to be around, but you know, we'll try to support it if we can. One of the significant things up here I want to talk about is: does anybody know what that means?
So what this allows you to do is spin them up: you spin up 100 copies, and there's no additional storage. Then, when there's a write, well, let's say I wrote one meg to that one-terabyte web server image. Okay, well, now I've taken up one terabyte for the original and one meg for the copy, because that's the delta.
I want to take a snapshot of this block device image, and I want to send it across the wire to someplace else, and then I can copy it. Well, the incremental snapshots are where the real power is. It's like doing your differential backup every night, if you will: I'm taking just a delta snap. So maybe I had four megs of changes.
Okay, let's remember we're building up on RADOS, and using librados for the first two applications we wrote, the gateway and the block device. Now, everybody's favorite thing to beat me up on: the file system. Let's go back in time ten years. Why did Sage get money from Lawrence Livermore Labs, as a starving grad student, to go off and work on this? It was because they wanted a file system that could compete with Lustre in high-performance computing.
So what is CephFS? If you think about it, it's your standard setup: you're doing your lookup, which I earlier told you you don't like to do at the object level, but when you're talking about files you kind of have to do it. So you've got metadata servers; that's what this little icon is. So you're going to the metadata server, and the actual data itself is then getting striped across the underlying RADOS system, just like RGW did, just like RBD did.
Now, one of the exciting things for me about the Red Hat acquisition was that Red Hat really means it when they preach open source. We had created, as Inktank, this Calamari tool, which is kind of the standard startupy thing: we're going to sell you free, open source software, but you'd like some support and this nice gucci proprietary tool.
And so we open sourced it. That means getting through all the lawyers, going through all the commits and everything, and getting permission from everybody, and now this is open source. What Calamari is, is a front-end management and monitoring tool for your cluster, to allow you to see how your storage cluster is performing.
And that's the architecture of everything Ceph-wise. Calamari has really become its own open source project since we open sourced it, because it's entirely separate from Ceph, so it's got its own user community and everything. We kind of talk about it because we helped birth it, but in reality it is its own separate open source community. It's got its own section of our GitHub, it's got its own contributor list, it's got its own email list, and if anybody is good at PHP or JavaScript and wants to contribute, I'm sure they'd appreciate your help.
We run our in-house testing lab on it, and it's pretty simple. The problem is, until we can do that true, what we call forward scrub and backward scrub, where you can scrub through the metadata, match it up with the actual data, and vice versa, how can I give you any sort of confidence level that your data is okay? Until you can run the equivalent of an fsck on your data?
Yeah, you kind of gave me a segue here; I just want to interrupt for just a second. Besides the t-shirts, there are anniversary frisbees paid for by the OpenStack Foundation.
And thank you so much for coming. If you've got any questions about the meetup or anything, feel free to find me afterwards. I don't have any deadline tonight, so I'm happy to talk to anybody afterwards.
Had I known that there was a Five Guys nearby, oh man, and knowing that this was all on my sales guy's credit card, I would have said, man, hook me up with the Five Guys. I couldn't figure out how to quickly get Five Guys catered up here, so we're just gonna...
We talked a little bit about geographically dispersed instances and what we recommend here. You have a case where you've got a cluster in Europe, you've got a cluster in New York, and you've got these two syncing up. So this isn't where you've got a true setup where you can make calls into each one; this is more: this app server is talking to this guy, this app server is talking to this guy, and then they've got a journal that occasionally says, hey...
So you could call it that, but it's not a true, I don't think you'd call it a true federated gateway, because at least in my parlance, if it was truly a federated gateway, these would stay in sync all the time and it wouldn't matter if I wrote here or there. But right now you have a primary and a backup, and so this is writing primarily to here and then syncing across to here. So the picture might...
That's one thing we're working on for a future gateway, and we'll get there. I didn't give you much of an overview, I apologize for that, but I'm going to go through these use cases and we'll go kind of into future plans and stuff like that.
Here's another one of our common use cases. You've got an erasure-coded backing pool; we'll go into, in my next couple of slides, what that really means: what is erasure coding and how we implemented it. Then you've got a replicated cache tier. So everything I've talked about so far has been replication; to get you your savings, I mean, to make sure that your data is there, to maintain your data integrity.
I've said we recommend three copies: you've got the primary and two other copies. Well, that's great until I go to my NetApp rep and he says, three times the storage? You're telling me to get four terabytes I have to buy 12 terabytes? I can get you that with my new line, and they can be pretty close. So that's why we got a lot of requests for erasure coding pools.
With erasure coding, instead of our standard replica storage, where you've got three physical copies of the data, this is more like your standard RAID. What's happening here is we chunk up the object; we don't give you three different copies of it, we give you four data blocks in this case, and this is all configurable. Call this k plus m, so k is the number of data chunks it's split into, and then m, two in this case, are your parity chunks.
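A hedged sketch of setting up a k=4, m=2 pool from Python through monitor commands. The field names mirror the `ceph osd erasure-code-profile set` and `ceph osd pool create ... erasure` CLI and should be treated as assumptions, as should the profile and pool names.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Profile: 4 data chunks plus 2 coding (parity) chunks per object.
cluster.mon_command(json.dumps({
    'prefix': 'osd erasure-code-profile set',
    'name': 'k4m2',
    'profile': ['k=4', 'm=2'],
}), b'')

# Erasure-coded backing pool that uses the profile.
cluster.mon_command(json.dumps({
    'prefix': 'osd pool create',
    'pool': 'ec-backing',
    'pg_num': 128,
    'pool_type': 'erasure',
    'erasure_code_profile': 'k4m2',
}), b'')

cluster.shutdown()
```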
That's where you get your data integrity. Now, this comes at a cost; I mean, nothing's free in computer storage. In the replicated case, when anything goes on and one of my guys goes away, I just go and replicate in the background, and it's not a big deal. But here, if, say, two of these guys go, if three and four go away, I've got to figure out: oh man, three and four are gone.
I was reading from one, two, three, four; now, well, that takes more cycles. So this is going to burn a little hotter than this, but the erasure coding can be a lot cheaper. So instead of, say, three to one, where in this case I've got my four terabytes of drives but I'm really paying for 12 terabytes of storage, here you can do all this at about, depending on the erasure code that you use, 1.4 or 1.5.
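That 1.4 to 1.5 number falls straight out of the k plus m arithmetic: the raw-to-usable ratio is (k + m) / k, against 3.0 for triple replication. A quick check:

```python
def raw_overhead(k, m):
    """Raw-to-usable storage ratio for a k+m erasure-coded pool."""
    return (k + m) / float(k)

print(raw_overhead(4, 2))    # 1.5 -> 4 TB usable costs 6 TB raw
print(raw_overhead(10, 4))   # 1.4
# Triple replication by comparison: 4 TB usable costs 12 TB raw (3.0x).
```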
Again, you can configure how often it goes through and does its scrub; those can be pretty intensive, so we recommend doing those in off hours. And again, you can configure how often you want to do it and at what level you want to do scrubbing. You asked whether it was released in Firefly?
It was, okay. Giant is, right, the RC is out, okay, and the real Giant should be out in a couple of weeks, hopefully. It's not going to be supported by Inktank Ceph Enterprise until Hammer; we kind of do this every-other-release thing. So far, I mean, we're not tied into that, it's not a dogma, but so far we want to keep the community development going, and so we've stuck to this: every three or four months the community will crank out a new release, but as Red Hat maintainers we want to ensure more stability.
The three-to-four-month cadence ensures we get lots of new features in there all the time, but that also kicks up a lot of dust, with all that velocity and all that new code landing and all that churn. So we want to ensure that we've got plenty of time to truly understand it.
Another mode is cache tiering; we implemented this in Firefly. So you've got your kind of standard writeback cache. This is, let's say I want to scream data in or out, whether I'm just dumping a bunch of data or I want to read it really fast. So I want to scream in here, but I don't care what's down here and how long it takes. So maybe I put some SSDs up here, I put some SATA down here, and then the writes go into the cache and it slowly migrates down to the backing pool.
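A hedged sketch of wiring a fast pool in front of a slower backing pool as a writeback cache tier. The monitor command prefixes and field names mirror the `ceph osd tier ...` CLI and are assumptions, as are the pool names; it presumes both pools already exist.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon(cmd):
    return cluster.mon_command(json.dumps(cmd), b'')

# Put the (hypothetical) SSD pool in front of the slower backing pool.
mon({'prefix': 'osd tier add', 'pool': 'ec-backing', 'tierpool': 'ssd-cache'})
mon({'prefix': 'osd tier cache-mode', 'pool': 'ssd-cache', 'mode': 'writeback'})
mon({'prefix': 'osd tier set-overlay', 'pool': 'ec-backing', 'overlaypool': 'ssd-cache'})

cluster.shutdown()
```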
C
Me
1.4
to
1.5
times
so.
I've
got
I've
passed,
my
storage
cost
from
site
a
to
site
b,
so
right
here
I
want
to
dump
it
to
site
a
to
keep
it
initially
and
then
I'll
gradually
farm
it
off
to
my
backing,
pool
which
is
half
the
storage
and
then
eventually
I'll
probably
delete
it
off
of
here
and
keep
doing
that
cycle.
C
C
And the last, I'll call this the last because we haven't really done a lot of performance testing around this. So while people use this in the community, and I've heard of people running this in operations, you just don't know how well this performs, whether you're on MySQL, MariaDB, whatever your favorite database is. I definitely don't think we're going to be certified by Oracle anytime soon, especially since we compete with them.
So here are the main components of OpenStack that we integrate with. There was a question earlier about Keystone integration; Keystone is right here. Users integrate straight into the RADOS Gateway, so you don't have to set anything up additionally.
We have a big customer for us. Actually, most of our customers use S3, but of the subset that use Swift, there are some that do both, and the big one is DreamHost.
It's like your mom telling you you're cute, you know; I just can't believe that. But they're the big one. There is kind of this weird movement, and this is one of those interesting internal Red Hat conflicts, because we just bought eNovance, a French OpenStack integrator, and one of their favorite things to do, which kind of drives Sage nuts, is they'll take the true Swift implementation and run it on top of RADOS. So they cut out RGW and all that, and it kind of drives the Swift community nuts too.
I think there are reasons that people want Swift, and there are reasons, so there are use cases where people want true Swift on Ceph.
There was a question about the synchronicity; it also kind of influences your rack design as well, because of how much network traffic is coming out.
Replication, all of that, is set at the pool level. So you can say: I want three copies on this pool, I want erasure coding on this pool, and on this pool I want you to ack the write as soon as it goes to the primary. You set up all those rules yourself. Then the big one, the big use case for OpenStack, is obviously Cinder, for block devices.
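Replication really is a per-pool dial. A sketch of turning it from Python via monitor commands, equivalent to `ceph osd pool set <pool> size 3` and `min_size 2`; the pool name is made up and the field names follow the CLI.

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Three copies on this pool; require at least two available to serve I/O.
for var, val in (('size', '3'), ('min_size', '2')):
    cluster.mon_command(json.dumps({
        'prefix': 'osd pool set',
        'pool': 'volumes',   # example Cinder pool name
        'var': var,
        'val': val,
    }), b'')

cluster.shutdown()
```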
There are fairly integrated capabilities between RBD and Cinder there. There are some improvements that we'd like to see, especially with the interactions between these three: Cinder, Glance, and Nova. It still does kind of some stupid stuff, because let's say I have an image, an RBD image, that's in RBD in Glance, and then I'm trying to pull it both ways: well, I pull it back into local storage and then I just put it back out. It's just churning.
If I've got an image that's kept in there, why do I have to pull it back local and then make a copy to send it to Cinder to push it back into RBD, when they're both in here and RBD does all of that snapshotting, like we talked about, internal to it? So why can't I just point to it and say, that's a copy-on-write clone here, who cares who uses it, Cinder, Nova, whatever; you need a copy-on-write clone. Josh, I wish I could; he's the RBD developer.
He's the lead developer of RBD. And so these are two of our other goals. One is volume migration: if you want to move from something else, say you've got an LVM backend, to a Ceph backend, we should make that a lot easier than it is today; it's kind of a pain. And also, if you for some reason decide you don't want to use Ceph RBD as your backend anymore and you want to move
to something else, we should just make that a lot easier. As an OpenStack community, the way we force people to do that really locks them into things, and that's just not right. So we're going to put some effort into that in the next release, and we're hoping to get more people in the community involved with that as well. And then, improve backup to Cinder; I mean, we have that type of capability in RBD.
not
integrated
well
with
over
our
clients,
but
you
know
just
standard
backup,
stuff
differential
backups.
How
long
you
want
to
keep
it.
How
often
do
you
want
full
backup?
Just
we
should
be
able
to
do
all
that
stuff.
C
That's in our future road map. So the question was, what is H? Well, there you go: Hammer. We're actually finishing up Giant right now. The big features in that are the open sourcing of Calamari, which again, even though it's kind of starting to be its own project, I included it here.
We really had a lot of requests; people were just not happy with FastCGI, so we pulled in nginx and we've now enabled that in Giant. So that would be an alternative front end. If you're noticing a trend here, it's: hey, take what you want. That's our mantra: we'll give you this, we'll give you that, you choose what works best for your implementation. Some people may still be happy with FastCGI.
It's just that we heard lots of requests from people who said, I'm not really a fan of FastCGI, I really like nginx, so we implemented nginx. And you'll notice a consistent thing around here at the bottom: for the past two years that I've been with Inktank, we have been just throwing features into this thing, I mean a crazy amount of code growth, and the one thing we haven't done is really stopped and caught our breath.
Let's look at what we've really done to the performance with all these new gucci features, and let's look at the things that we need to be doing better. Let's find areas of the code that are bottlenecks, and as hard as it may be, let's rip them apart, let's optimize them, let's run them through performance testing, and then let's re-release. So for the next, at least, I mean Giant and the next two releases at a minimum,
this is probably, for some aspects, going to continue forever, but it's been my emphasis to the team that at least for these three releases we need to be banging on this, and if there are areas of the code that are known bottlenecks, we need to rip them out and fix them. Because we need to continue to improve the performance of the underlying object store, because, as you saw, it's a pyramid with everything built on top of it.
So if we've got bottlenecks down here at the object storage level, that just propagates all the way up. Now, what makes it interesting is that we may have bottlenecks, say at the RBD level, that hide things from the object storage level. So we may be having problems down here that you can't even see, because people are blocked by something in the RBD code. So we need to make that as performant as we can, and then that may reveal further performance improvements that need to be made.
That would be awesome. And actually, speaking of new open source projects, we are contemplating rolling out Teuthology, really, with its own community and everything, and creating a true open source project.
It's all configurable, all right. Basically, you can point it at a group of machines and say, okay, here's your available pool, here are the tests I want to run, and it'll go out and say, okay, I need these machines. It'll queue up, it'll grab those, it'll go ahead and install stuff for you, and it'll say, okay, I want these to be monitors, I want these to be OSDs, I want three monitors.
And it'll do that, and then it spits out the results and sends an email out to the group. It also has this pretty cool front end we implemented called Pulpito, which is Spanish for octopus, and teuthology is the study of cephalopods, yeah. And so Teuthology spits out the data, and then through that web front end you can gauge over time: how is this test performing?
Well, it was fine for three weeks and then yesterday it crapped out; what happened, who committed something there? And so the test suite is really fabulous. The automated testing is very powerful, and we are encouraging others to use it; we've got two partners that are standing up their own Teuthology instances, and we're looking for more people all the time to run Teuthology for their testing.
So, I heard that question. Without being a spokesman for Red Hat guaranteeing product availability, I would say that you could probably be looking for CephFS sometime.
Stand up my own Teuthology? Thank you, I got it. So, to get started with Ceph, go to the docs. That's the one thing we need lots of help with; an easy way to give back to an open source community is to dig into the docs.
Because we're making changes all the time, we're flying with our hair on fire, and the last thing your developers want to do is write documentation, right? Because it's intuitively obvious, just go look at the code if you don't know how it works. So that's where we can really use help. If you say, hey, I want to stand up my own Ceph cluster, and then, hey, what the hell, this doesn't do what it's supposed to do, all right, I have to do dash-dash-x instead of dash-x: it's open source.
You can even throw one up on AWS, and then get on IRC. Like I said, that's where we are; somebody in the community is in there 24/7. There's like 300, a lot of people. It's great. Get on #ceph, again on OFTC, not Freenode.
We had so many people that we had to split up the IRC channels, because we had people that were trying to ask questions about, hey, should my pull request look like this, and I've got this question about my code, and Sage is trying to talk to them at the same time that we've got somebody in France going, hey, my cluster's falling over and here's the error message I see. So we split up #ceph and #ceph-devel, and they're both still strong channels. So it's been really exciting.
One thing I'll add is you can actually do DevStack with Ceph as well now. So if you Google "DevStack Ceph," you'll find a link.
Here's me; we'll put these charts out, and there are little hyperlinks to these. Connect with me.
So please, honestly, if you have a question about Ceph, if you have a question about the presentation, give me a call, Twitter, email; this links to my LinkedIn profile.
C
So,
if
you're
willing
to
put
up
with,
if
you
want
to
make
swift
calls
sage
would
say
you're
not
going
to
get
the
performance
that
you
get
if
you
use
rgw,
why
don't
you
just
make
swift
calls
to
the
gateway?
If
there's
a
subset
of
the
api,
that
we
don't
support?
That
you
think
is
important,
then
tell
us
and
we'll
implement
it,
because
this
is
built
wired
into
the
libretto.
So
this
is
where
you
get
the
best
performance
and
we
support
the
api.
It's
just
like
I
said
some.
C
C
Other than that, I'd encourage anyone, if you're an active member of the Swift community, to encourage them to push stuff up into the API and really expose it. But there again, SwiftStack, the company behind Swift, is completely disincentivized to do that, because the last thing that they want to do is turn Swift into just this box.
C
Having
to
support
both
of
those
storage
infrastructures
because
they
solve
our
bigger
our
bigger
cloud
problem
that
that's
what
met
our
needs
and
in
the.
C
C
C
C
Any kind of NFS that you have, you have to have in there today, and we're part of Project Manila.
So the goal is for that, yeah; I mean, it just became a full-fledged project, so you know, maybe it'll be... it's still incubated; I think it's in incubation.
Those are really cool, yeah. We haven't played with them much; I mean, we have not played with the shingled drives. Where we've actually played more is with...
Let's see how much of this I can say. There are two vendors that have these drives: there's one in the room here, and there's one that's not. Is there anyone in here with the company that starts with an H? They're probably here too; they haven't been here before. So we are working closely with both those companies, because this was Sage's dream way back in the day when he spawned this.
I won't plug it too much, but it is plastered with Gluster paraphernalia, including Storage Server 3, which is a Gluster product in case you were wondering, and I think you have to do it ahead of time. There's one other thing that we do: on the meetup page, JB posted a link to some discount training for Red Hat. I don't know what it covers; I didn't actually read it.
But it's out there. If you need more information, let me know and I can pass it on to you. More important: is Red Hat hiring? That's one of the things we talk about at every meetup; we always ask the sponsor if they're hiring. I am hiring, and that's the beauty of my team: you can live up here, you can live down wherever, you know, you can live in
Lisbon. A guy who lives in Boulder is the closest one, the closest one. So Red Hat's hiring; Inktank's hiring.