From YouTube: Ceph Days NYC: An Introduction to MicroCeph
Description
Presented by: Chris MacNaughton
Building up a Ceph cluster can be a bit tricky and time consuming, especially if it’s just for testing or a small home lab. To make this much easier, we’ve started working on microceph. It's a snap package that uses a small management daemon that allows for very easy clustering of multiple systems which, combined with an easy bootstrap process, allows for setting up a Ceph cluster in just a few minutes!
I'm Chris, and I'll be talking about MicroCeph. So what is it? It is a snap, and I know some people are a bit leery about snaps from Canonical. It's a clustering daemon that uses dqlite, which is a SQLite-compatible database using Raft consensus. The management daemon allows us to build a cluster of arbitrary nodes. We then use it to manage the placement of all of the various Ceph daemons: you're all aware of monitors, managers, OSDs, the metadata servers, and, almost, the RADOS gateways (those are in progress). Right now it's really opinionated about what you put where. That's likely to change soon, but for now it does what it does and you get Ceph.
So why did we do that? A couple of different use cases. We really wanted to support a single-node intro to Ceph: you can run this on your laptop. It runs as a strictly confined snap, which means it doesn't have full root on your system. It has very, very limited access to things; you have to give it permissions to do things. It's kind of like having your phone running Ceph instead of the server. In addition to that, we really wanted to be able to run small edge clusters, deploying thousands of them.
That is, thousands of very small clusters rather than a few big ones. We all like talking about petabyte-scale clusters, but we kind of like some small terabyte-scale clusters too. We wanted it to be quick to deploy and very predictable, the idea being that the average deployer isn't actually a Ceph expert. Think of a 7-Eleven: someone like the assistant manager can actually go in, take the little tiny rack in the back with a few NUCs, and run stuff on it.
This lets them do whatever they want to do with Ceph, instead of having to hope that they can find storage somewhere. There's almost no infra overhead; that clustering daemon is very minimal. It's a little Go daemon (apparently we're all fans of Go now, so yay), and it really needed to be repeatable.
Sorry, I can't actually zoom it in, but hopefully it's all legible. All I've done to prepare the system is install the snap; it's that microceph one, the fourth one down. We bootstrap a cluster. This is setting up that clustering daemon, getting it ready with dqlite and things. It's pretty quick; it doesn't actually do much. So, cheating a bit, we tell it to add some nodes; I've actually got four machines here to play with. We get a little token.
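For reference, the bootstrap-and-join flow shown on screen can be sketched roughly like this; the hostname and the token placeholder are illustrative, and exact subcommands may vary between MicroCeph releases, so check `microceph --help` on your version:

```shell
# On the first machine: install the snap and bootstrap the cluster.
sudo snap install microceph
sudo microceph cluster bootstrap

# Still on the first machine: register each additional node and
# capture the one-time join token that gets printed.
sudo microceph cluster add node2

# On node2: join the cluster using that token.
sudo microceph cluster join <token-from-cluster-add>
```

The `cluster add`/`cluster join` token exchange is how the management daemon grows the dqlite cluster under the hood.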
We set up the RADOS Gateway too; I mentioned it's in progress. There is actually code in place to do it; we're working on all being happy with it, you know, deciding that we like what it's doing and how it's doing it.
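On recent MicroCeph releases the gateway is switched on with a single command; a sketch, assuming the `enable` subcommand your version ships:

```shell
# Enable the RADOS Gateway service on this node.
sudo microceph enable rgw
```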
Okay, so we can get our status. You might notice that it's microceph.ceph status: it is confined. You're not running this as your own user on the system, and you're not running it as a ceph user or root, but inside that confined space. You have your monitors, managers, etc., but no disks.
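The two status views mentioned here look roughly like this:

```shell
# MicroCeph's own cluster view (nodes, services, disks).
sudo microceph status

# The familiar Ceph status, namespaced under the confined snap.
sudo microceph.ceph status
```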
I did run into a slightly annoying bug that made me not want to do it concurrently in this demo. Sorry, it takes a little bit longer, but we're going to get our one OSD showing up shortly. You might notice that adding the disk doesn't actually add it into Ceph in a blocking fashion, which is exciting.
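Adding an OSD is one command per disk; a sketch, where the device path is illustrative and the `--wipe` flag (which destroys any existing data on the device) may differ by release:

```shell
# Hand a whole block device to MicroCeph as an OSD. The command
# returns quickly; the OSD appears in `ceph status` shortly after.
sudo microceph disk add /dev/sdb --wipe
```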
Some of the clustering work is actually around letting us manage daemons and everything in a central fashion. So you could say: okay, add that OSD from that machine over there, put a monitor over there, put a...
I did accelerate this one a little; I didn't feel like taking five minutes to demo a little bit of S3 playing. So again, we have no data in the pools. We've set up the RADOS Gateway, and there's actually config on here to do the next S3 commands.
Okay, so we've not got a user yet; we're going to add a user, I believe that's what we're doing next, and then create a bucket and write and read an object.
A
All
right,
so
we've
listed
things
added
a
bucket:
oh
hey,
the
bucket,
the
goose,
now
yay
looks
right
to
it
and
then
read
from
it
to
make
sure
it
actually
works.
All right, okay! So what's next, before we're happy to call this something you should play with (you can today, please do)? We want automated encryption, so very easy, seamless-to-the-user encryption of the OSDs. You need to actually be able to configure your network yourself, instead of us telling you what to do. And we're going to be doing Ceph upgrades, so point releases and major upgrades, in a very automated, easy-to-use fashion that just works and doesn't eat your data.
A: So the daemon that it's running is a little management daemon. The snap ships with all of the Ceph bits; it's the Ubuntu packages for Ceph. When it sees, you know, the first three nodes that start, it goes: okay, I don't have enough monitors, start a monitor, and it just runs that locally. It knows what the whole cluster state is; it knows what disks are added to what machines. [inaudible], which I'm a little annoyed about as a design choice, but we'll come back to that.
C: Yeah, that's really cool actually. I wish I had this four days ago; then I wouldn't have had to use this cluster. I was like, if only there was some packaged MicroCeph thing I could use. I thought about using, what's the one in the Ceph source code, it's one of the make things, it's like quick start or something, but that's it. Yeah, yeah, but that's a nightmare.
B: Have you ever tried to create MicroCeph deployments and then have them talk to each other?
A
Was
just
curious
that
said:
we've
done
a
lot
of
things
elsewhere
and
you
know
our
playing
with
Seth
with
mirroring
gateways
mirroring
RBD
volumes,
so
it.
B: Yeah, so in CBT it will, like, deploy stuff into temp, and we can do big cluster deployments, you know, kind of like a real deployment, but not really. But I'm super curious whether, with something like this, you could kind of do a single-machine deployment and then extend it. It'd be cool to see.
A: So one of the things that we're looking at is really well supporting that single-machine use case and then fairly seamlessly scaling up. So, you know, you're playing with it on your laptop, you've got some stuff; fundamentally it's the same experience as if I'm on a server in a data center doing the whole rack at once. It really shouldn't be a different experience, and you should be able to go from one to the other, even though you probably shouldn't go from your laptop to a data center.