From YouTube: 2017-MAR-23 -- Ceph Tech Talks: Ceph at Scale & Writing Applications with Language Bindings
Description
Chris Holcombe of Canonical talks about tricks for deploying Ceph at scale and writing an application with language bindings.
http://ceph.com/ceph-tech-talks/
Host: Alright, welcome everybody to this month's Ceph Tech Talk. This month we have Chris Holcombe from Canonical joining us to talk about a couple of different topics. It sounds like he'll be touching on some tricks for deploying Ceph at scale, probably looking at their Juju stuff (I'll let him expound on that), as well as writing an application with the language bindings. There are a number of language bindings, as you may or may not be aware.
Chris: We have a number of customers that have deployed our stuff in that low to mid petabyte range, and that helps bring feedback back and helps us get our tools to the point where it's easier for everybody to deploy Ceph, because getting people through the wiki can be a little bit challenging. So let me share my screen and show you guys what I'm talking about.
Let me clear the screen. The way we've been deploying Ceph at Canonical is we use what's called a charm, and a charm is a set of hooks that allows you to make your application plug in or communicate easily with other applications. I'll show you what that means. So let me do it for the ceph-mon cluster; I have the charm pulled down already, luckily. We'll do -n 3, because you want to have a quorum.
B
Is
a
living
animal
file
see
that
yellow
file
over
here
basically
force
em
on
there's
no
options.
I
have
to
give
except
telling
it
where
to
pooled
assessment,
binaries
from
or
the
set
binaries
from,
and
we
have
something
called
cloud
archives
that
allows
you
to
have
quick
updates
on
a
bunt
too.
So
I'm
going
to
say
three
of
those
and
I'm
going
to
the
toilet,
rusty.
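The config file he shows would look roughly like this. A sketch only: source and monitor-count are real ceph-mon charm options, but the values, and the choice of cloud-archive pocket, are my reconstruction from context, not read off the screen.

    ceph-mon:
      source: cloud:trusty-mitaka   # pull newer Ceph packages from the Ubuntu cloud archive
      monitor-count: 3              # wait for three mons before bootstrapping the cluster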
B
Let
it
go
you
can
see
now
on
the
left
here
et
tu
is
giving
me
three
machines
and
they're
going
to
start
coming
up
in
a
minute,
although
sometimes
et
tu
can
take
a
while
and
as
they
come
up
they're
going
to
communicate
with
one
another
and
form
they're
going
to
go
through
all
the
all
the
basic
commands
that
you
would
go
through
when
you
put
together
and
monitor
cluster
expanding.
It
is
actually
really
easy.
Here's the YAML file again, for the OSDs; it's the same kind of thing. There's a source here, I'm unmounting the ephemeral mount, and I'm telling it to use /dev/xvdb as the OSD device. All the EC2 machines come with a little EBS block device that's just mounted at /mnt, so I'll just use that.
You can also tell it to split up the public and private networks, you can use IPv6, and you can customize the failure domain; there are many, many options at this point. Something I recently added, which is neat, is called autotune. After Ceph starts up, it will look at all the OSD devices and try to make its best guess about read-ahead settings, the I/O elevator to use for the hard drives, and so on.
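A sketch of the matching ceph-osd config: osd-devices, ephemeral-unmount, and autotune are real ceph-osd charm options, while the values are assumptions based on his description of the EC2 machines.

    ceph-osd:
      source: cloud:trusty-mitaka
      ephemeral-unmount: /mnt    # free the ephemeral disk that EC2 mounts at /mnt
      osd-devices: /dev/xvdb     # use that freed device for the OSD
      autotune: true             # guess read-ahead and elevator settings at startup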
It will also look at your network: if you have a 10-gig network versus the default on EC2, which is one-gig-ish, it will try to adjust your sysctl settings as well to make things more efficient. The idea is that everybody gets a tuned Ceph cluster out of the box. Normally this takes about five minutes or so to start up, depending on how long it takes for EC2 to give us machines and for apt to install all the dependencies.
B
The
idea
is
to
make
it
easy
to
or
make
it
identical
to
deploy
this
on
metal
or
a
PM
like
we're
doing
now
or
container.
So
you
might
want
to
have
your
cell
phones
in
a
container
versus
a
vm,
or
maybe
even
hardware,
and
using
the
mass
provider
or
the
Google
cloud
provider
or
the
XD
provider
for
containers
beach,
arms
act,
identical
on
all
the
different
substrates.
B
You
can
see
now
they're
giving
you
some
information
about
I'm
ready
to
booster
out
my
cluster
by
I.
Don't
have
enough
other
units
there.
The
default
is
trying
to
keep
everybody
safe,
I
a
default
to
meeting
three
months,
instead
of
just
just
one
that
way,
if
you
have
one
month,
if
you
want,
if
you
have
one
or
two
months
ago
now-
and
you
saw
a
you
know-
a
working
set
clustered
without
the
monitor
clusters,
Stefan
fall
over
so.
B
I
hadn't
said
this
before,
but
if
you
guys
have
any
questions,
you
know
feel
free
to
stop
me
up
I'm
going
through
this.
I
know
when
I
first
started
that
canonical
the
whole
the
whole
idea
about
charms
and
what
they
were
doing
seemed
very
magical
and
weird,
and
there
was
a
whole
lot
of
jargon
around
that.
But
I
had
a
trouble
getting
around.
Host: It might be helpful to just give, you know, the elevator pitch for Juju for those that aren't familiar with it. I don't know that a lot of people realize that it's an alternative to other orchestration and deployment frameworks like Ansible or Chef or Puppet or whatever. What makes it different?
Chris: Okay, yeah. The interface here is ceph, and if any other charm supports the ceph interface, they'll be able to talk with one another and set themselves up. So if I have a client, there's a ceph-client interface; I have a ceph-admin interface; if I want to hook up a RADOS gateway, or connect OpenStack or something like that, I can just relate it. And you'll see this right here: it's saying I'm missing my monitor, because I haven't related the units yet.
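The deploy-and-relate flow on screen amounts to something like the following. These are real Juju CLI commands; the config file name is illustrative.

    juju deploy -n 3 --config ceph.yaml ceph-mon
    juju deploy -n 3 --config ceph.yaml ceph-osd
    juju add-relation ceph-osd ceph-mon   # the ceph interface lets them configure each other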
We can see the cluster came up. I have a few default pools, and policies are getting added, believe it or not. You can see here that the disks are getting scanned, XFS is getting put down onto them, journals are getting set up, and all of that is happening through the ceph-deploy tool, or ceph-disk prepare, rather. There we go: in another minute or so we'll have a working Ceph cluster with six OSDs.
Say you have one-terabyte drives and you want 30 of them, something like that. Depending on the provider, be it MAAS or EC2 or Google, it will find those drives from different pools. For the MAAS provider, which is Metal as a Service, it will look for a tag: you can tag your drives, you know, 'slow' or 'fast' or something like that.
So what I wrote recently to get around that was an automated set of upgrade steps. As you guys know, if you're running a Ceph cluster, the way you upgrade it is: first you upgrade your ceph-mon cluster, logging in to each one, adding your new packages, and then restarting them one by one, so that you maintain your quorum and kind of roll through it; and then you go through your OSDs one by one, same exact procedure. So all I did was automate that: I can do a config source change, and...
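Kicking off the rolling upgrade is just a config change. In the current Juju CLI that would be something like the lines below (older Juju spelled the first command juju set); the archive pocket shown is only an illustrative target.

    juju config ceph-mon source=cloud:trusty-mitaka
    juju debug-log   # watch the mons roll through the upgrade one at a time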
B
Now
what
happens
here
is
the
the
charms
are
actually
leveraging.
The
key
value
store
that
the
septum
on
cluster
has
and
taking
advantage
of
that
so
they're
setting
keys
on
the
monitor
cluster
and
I
know
this
debug
log
extremely
noisy,
just
thinkin
we're
now
in
a
moment
they
sort
themselves
by
IP
address
and
whoever
has
the
lowest
IP
address
goes
first
and
the
other
to
say.
I'm
waiting
on
him
is
set
a
key
or
the
one
who's
starting
set.
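In outline, the ordering rule he's describing works like the sketch below: a minimal, self-contained Rust simulation of the idea. The charm's own code isn't shown in the talk, so the names here are illustrative; sort the mons by IP, let the lowest go first, and let each of the others proceed once its predecessor has published a done key or has timed out.

    use std::collections::HashSet;
    use std::net::Ipv4Addr;

    /// Upgrade order: lowest IP address goes first.
    fn upgrade_order(mut peers: Vec<Ipv4Addr>) -> Vec<Ipv4Addr> {
        peers.sort();
        peers
    }

    /// A unit may proceed if it is first, if its predecessor has set its
    /// "done" key (stood in for here by a HashSet), or if the predecessor
    /// timed out and is presumed dead.
    fn may_proceed(
        me: Ipv4Addr,
        order: &[Ipv4Addr],
        done_keys: &HashSet<Ipv4Addr>,
        predecessor_timed_out: bool,
    ) -> bool {
        match order.iter().position(|ip| *ip == me) {
            Some(0) => true,
            Some(i) => done_keys.contains(&order[i - 1]) || predecessor_timed_out,
            None => false,
        }
    }

    fn main() {
        let order = upgrade_order(vec![
            "10.0.0.12".parse().unwrap(),
            "10.0.0.3".parse().unwrap(),
            "10.0.0.7".parse().unwrap(),
        ]);
        let mut done = HashSet::new();
        assert!(may_proceed(order[0], &order, &done, false));   // 10.0.0.3 goes first
        done.insert(order[0]);                                  // ...and records its key
        assert!(may_proceed(order[1], &order, &done, false));   // 10.0.0.7 may now proceed
        assert!(!may_proceed(order[2], &order, &done, false));  // 10.0.0.12 still waits
        println!("upgrade order: {:?}", order);
    }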
B
So
the
noise,
your
sing,
phaneuf
great
log,
is
all
you,
the
other
two
motors,
saying
I'm,
trying
to
find
this
key
saying
that
my
other
my
the
person
performing
or
the
monitor
perform
he's
not
done
yet,
but
it's
not
there
yet,
so
they
wait.
I
think
the
default
is
like
20
minutes
or
15
minutes,
and
if
that
doesn't
happen,
all
assumes
that
the
one
before
them
is
dead,
keep
going.
The
reason
for
that
is,
we've
noticed
that
at
s
scale,
when
we
run
this,
that
you
know
failures
happen,
and
it's
not
all
that
uncommon.
So they say "I'm still waiting on him," and then they flip back to saying "I'm ready, I'm clustered." We can actually do the same thing with the ceph-osd cluster once the monitors are done. Now, I haven't written any code yet to prevent you from shooting yourself in the foot by upgrading OSDs first and then mons second; that can come a little bit later. Let's get it going first, make sure it works, work out the kinks, and there we can go.
What we're going to try and do, I think, is create a really tiny application that connects to our Ceph cluster and prints out the current usage of the cluster. I wanted to send it over to InfluxDB, but I'm not sure I can get that done in like 30 minutes, so maybe we'll just stick with printing it to standard out. The cool thing about these Ceph bindings that Chris and I maintain is that they're written in Rust.
The way Rust works is it has a Cargo.toml file, and it says who the author is, the version, and any dependencies, and these dependencies can be pulled down from crates.io really easily. I happen to have the Ceph bindings already on my machine, in this path, which is...
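The Cargo.toml he's describing looks roughly like this; the package metadata is invented for the example, and the dependency is assumed to be the ceph-rust crate on crates.io.

    [package]
    name = "cluster-usage"
    version = "0.1.0"
    authors = ["Your Name <you@example.com>"]

    [dependencies]
    ceph-rust = "*"   # any version: the lazy approach he mentions later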
Right now we're going to write a little hello-world application, but with these Ceph bindings to the underlying C library, we could actually write entirely new clients for Ceph. Man, you could write a Ceph archiver that would sit, you know, next to CephFS and the RADOS gateway, and it would be as fast as the C or C++ that Ceph is written in, but without the garbage-collector overhead you'd have in Java.
B
So
that's
kind
of
in
this
interesting
space,
one
of
the
things
that
I've
written
with
it,
it's
called
I
made
some
extensions
to
this
project
called
preserve
and
it
takes
it
takes
a
directory
or
a
set
of
directories
that
you
give
it
and
encrypts
them
and
then
send
them
in
to
assess
is
a
radius
but
except
when
it's
done
and
encrypts,
each
chunk
says
it
in
the
radius.
So
that
was
pretty
cool.
It's
fast
been
very
useful.
Actually,
and
we
have
a
large
client
is
using
it.
The Ceph Rust library consists of two pieces. The first one is the rados.rs module, and that is the low-level binding to Ceph. So if there's anything that I haven't written a safe wrapper around, these are the bindings you'd use. With Rust, anything that calls down into C is considered unsafe; that's because Rust can't track memory usage inside of C. It tracks lifetimes and a bunch of other things.
So the deal is, everybody who's writing Rust that interfaces with a C library will write a safe wrapper around it. For instance, where I connect to Ceph, all these C operations are unsafe, and I'm telling the compiler that I know what I'm doing here. The compiler is saying: I don't know what you're doing, but I'm going to leave it to you to make sure you don't do anything wrong.
So we've got a function here, a safe wrapper around cluster stat. You hand it the rados connection, and it will give you the total space, the space used, the space available, and the number of objects. For logging to something like Prometheus or InfluxDB, this could be really useful, as you can see the usage change over time; and also, if you've logged the number of objects, you can divide the space used by that count to get your average object size.
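His stat function maps down to librados's rados_cluster_stat(). Here's a sketch of the whole round trip against the raw C API (the Rust bindings wrap the same calls); it assumes a default /etc/ceph/ceph.conf and an admin keyring are available on the machine.

    use std::ffi::CString;
    use std::os::raw::{c_char, c_int, c_void};
    use std::ptr;

    /// Mirrors librados's struct rados_cluster_stat_t (four u64 fields).
    #[repr(C)]
    #[derive(Default)]
    struct ClusterStat {
        kb: u64,
        kb_used: u64,
        kb_avail: u64,
        num_objects: u64,
    }

    #[link(name = "rados")]
    extern "C" {
        fn rados_create(cluster: *mut *mut c_void, user_id: *const c_char) -> c_int;
        fn rados_conf_read_file(cluster: *mut c_void, path: *const c_char) -> c_int;
        fn rados_connect(cluster: *mut c_void) -> c_int;
        fn rados_cluster_stat(cluster: *mut c_void, result: *mut ClusterStat) -> c_int;
        fn rados_shutdown(cluster: *mut c_void);
    }

    fn main() {
        let user = CString::new("admin").unwrap();
        let conf = CString::new("/etc/ceph/ceph.conf").unwrap();
        let mut cluster: *mut c_void = ptr::null_mut();
        let mut stat = ClusterStat::default();
        // Negative return values are errnos; asserts keep the sketch short.
        unsafe {
            assert!(rados_create(&mut cluster, user.as_ptr()) >= 0);
            assert!(rados_conf_read_file(cluster, conf.as_ptr()) >= 0);
            assert!(rados_connect(cluster) >= 0);
            assert!(rados_cluster_stat(cluster, &mut stat) >= 0);
            rados_shutdown(cluster);
        }
        println!(
            "total {} KB, used {} KB, avail {} KB, objects {}",
            stat.kb, stat.kb_used, stat.kb_avail, stat.num_objects
        );
        if stat.num_objects > 0 {
            // The average-object-size trick from the talk:
            println!("average object size: {} KB", stat.kb_used / stat.num_objects);
        }
    }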
B
So
that
could
also
be
interesting.
If
you
want
to
know,
are
the
clients
who
are
right
into
my
cluster
running
large
objects,
small
objects,
medium
sized
objects,
and
if
they
are
running
small
objects,
which
is
really
an
efficient
with
any
kind
of
distributed
storage,
why
are
they
doing
that?
Maybe
it
can
help
them
fix
whatever
whatever
client
or
a
program
they're
using
this
writing
into
it?
Oh,
let's,
let's
grab
this
thing.
C
B
B
So you can see, whoops, it lists a couple of things here. Rust actually has some really good error messages: it will actually underline and tell you what's wrong and try to give you a good note, and a lot of the time it has even solved it for you. It'll say something like: I noticed that you can't format this thing you're trying to print; try using this instead. And that actually is the answer. So it's really good about that.
B
Know
being
I'm
thinking
that
as
a
lazy
approach
right
here,
I'm
just
saying:
I,
don't
care
what
version
is
just
get
the
latest,
but
if
you
want
to
pin
it
down,
you
can
say
here
it
up:
20,
like
an
exact
version
where
you
can
say
an
approximate
version.
Anything
that's
here
about
2x,
something
like
that.
That's
what
this
till
they
need.
I
believe.
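In Cargo.toml terms, the three styles he mentions look like this (crate name assumed as before):

    [dependencies]
    # ceph-rust = "*"       # lazy: whatever the newest published version is
    # ceph-rust = "=0.2.0"  # pinned: exactly 0.2.0
    ceph-rust = "~0.2"      # approximate: any 0.2.x, but never 0.3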
All the crates on crates.io use semver, semantic versioning. So anything that's 0.2.2 or 0.2.3 or 0.2.4 should all be compatible with one another, but 0.3 won't be compatible with 0.2, generally, if everybody's following the rules. And you can see Cargo actually handles the semver matching for me when I start here.
Alright, it's created, and the default database name is... we'll find that in a second.
That's actually a good point, okay. By default, all the charms Juju deploys have their firewall ports closed off to the world, and what you can do to get around that is say juju expose, and it will open up that port and allow anybody to connect to it. So if I say juju expose on InfluxDB, now anybody can connect to it; give it a second. If you guys connect to this, you should see, I think on 8086 or 8083, you should see InfluxDB there.
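For reference, the two commands involved are real Juju CLI, assuming the application is named influxdb:

    juju expose influxdb   # open the charm's declared ports to the world
    juju status influxdb   # shows the public address to hit on 8083/8086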
Rust has the concept of lifetimes, and it's saying that if you just create a reference to the string inside this function, the string will be destroyed before the function ends, and that this field here doesn't live long enough. So you need to actually move it up the call stack. And now that compiles, and we're going to cargo build.
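The error he hit is the classic borrow-checker complaint. A tiny illustrative sketch (names invented, not from the talk): the commented-out version creates the String inside the function and returns a reference to it, which cannot outlive the function; the fix is to own the String one frame up the call stack and pass a reference down.

    struct Config<'a> {
        pool_name: &'a str, // Config only borrows the string
    }

    // fn broken() -> Config<'static> {
    //     let name = String::from("rbd");
    //     Config { pool_name: &name } // error: `name` does not live long enough
    // }

    fn make_config(name: &str) -> Config<'_> {
        Config { pool_name: name }
    }

    fn main() {
        let name = String::from("rbd"); // owner lives up the call stack
        let cfg = make_config(&name);
        println!("pool: {}", cfg.pool_name);
    }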
Host: I think that was good; it gave us a good overview of all the pieces. It looks like most people followed along pretty well, so thank you very much, Chris, and thank you everybody for coming. We'll see you again in April; I believe the next one is on the 27th. Yep, same bat-time, same bat-channel, and in the meantime we'll see you on IRC and the lists. Thanks, everybody.