From YouTube: 2018-May-24 :: Ceph Tech Talks
Description
Travis Nielsen explains how to deploy Ceph with Rook on Kubernetes.
https://ceph.com/ceph-tech-talks/
How to start a development environment — anything else before we get started, Sage? Alright, we'll go for it.
So Rook is an orchestrator for Kubernetes that brings storage to Kubernetes. Historically, storage has been something that's external to Kubernetes, and Rook's goal really is to make storage a first-class citizen of Kubernetes. The place we started is with Ceph. I've been with the Rook team for the last couple of years, and early on we made a bet: we said, yeah, we can even run Ceph this way.
So what I have here is just the rook.io site to get going. Where do we start? I'm just going to go right into a demo and bring up a sample environment from the GitHub page. The easiest place to get started is by going into the test folder; the test folder has some instructions on how to start your cluster.
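As a rough sketch of that first step, assuming the VirtualBox driver used later in the demo (the exact flags vary by minikube version):

    # Start a single-node Kubernetes cluster in a VirtualBox VM
    minikube start --vm-driver=virtualbox

    # Confirm the node is up before deploying Rook
    kubectl get nodes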
Okay, so now that we have our minikube cluster running: you can use a number of virtual machine plugins, and this is using VirtualBox for me. To make the demo a little more interesting, I added some disks to minikube, so I'll just add one more here. If I go into the minikube storage, I've already added three disks that we can run OSDs on, and I'll add one more just to show how I did this: I create a VHD and call it disk number four. All right, so when we start this VM I've got four devices, and we can create an OSD on each one of them. By default, Rook actually doesn't need those devices for a simple testing environment, so it's not necessary that I go create them; by default Rook will just create a single OSD in a directory using FileStore. But for this demo, I'll be showing four OSDs on BlueStore.
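For reference, a sketch of how a disk like that can be created and attached to the minikube VM; the file name, size, controller name, and port here are assumptions, not the exact values from the demo:

    # Create a new VHD (name and size are illustrative)
    VBoxManage createhd --filename disk4.vhd --size 10240 --format VHD

    # Attach it to the (stopped) minikube VM on the SATA controller
    VBoxManage storageattach minikube --storagectl "SATA" \
        --port 4 --device 0 --type hdd --medium disk4.vhd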
So now that we have our minikube up, let me show you what happens when we want to start up Rook. The entry point for Rook really is what we call the Rook operator. The operator is the thing that manages the cluster: it manages the health, makes sure that the cluster is healthy and running.
So here we have our operator.yaml, which shows that basically we're going to start a Kubernetes deployment. It's going to start up a pod that will run our rook-ceph image (I'm just using my local build), and this will start up all the Ceph daemons. All the Ceph daemons, including the operator itself, are contained in this one image. If I go ahead and start that — I'm going to switch over to this window here — that'll create the Rook operator.
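That step is essentially the following, with the manifest name as in the Rook examples of that era (the image it references would be the local build mentioned above):

    # Create the Rook operator deployment and its namespace
    kubectl create -f operator.yaml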
Okay, so what I just did started the operator, and it starts a couple of other daemons as well. But let's look at what I did: kubectl get namespaces. What we just did is create a rook-ceph-system namespace, and this is the namespace where several pods are going to run. So let's look inside that namespace: kubectl -n rook-ceph-system get pod.
The first thing we see running here is the operator. The operator is going to manage our cluster and make sure our desired state is applied. The agent is a pod that's going to run on all the nodes, as a daemonset, to mount the storage on the pods that want to consume it for block or file. And then we have this other pod, the discovery pod, which is going to discover what devices are available on all the nodes.
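A sketch of checking that, with illustrative pod names and suffixes:

    # Namespaces: the operator manifest creates rook-ceph-system
    kubectl get namespaces

    # Pods running in the system namespace; expect something like:
    kubectl -n rook-ceph-system get pod
    #   rook-ceph-operator-...   Running   manages desired state
    #   rook-ceph-agent-...      Running   DaemonSet, mounts storage on nodes
    #   rook-discover-...        Running   discovers available devices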
So when we create a Ceph cluster now, we're going to start it up in a different namespace: we start it up in the rook-ceph namespace, and these are all the custom settings that we need to start a cluster. This is what's called the cluster CRD, and the operator is going to apply the settings here. So I'm saying I just want one mon, because it's a simple one-node cluster, instead of the default of three mons that you would need in production.
It's going to say: whatever nodes I have running in Kubernetes, go look for storage on those nodes and start OSDs on them. And there are a couple of different modes; I can just tell it: go look for all raw devices that are not being used by someone else and consume them. So since I created those four devices in minikube, we should expect that Rook is just going to find all four of those and start up OSDs. And there are a number of other settings that we won't talk about here.
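A minimal sketch of such a cluster CRD; the apiVersion, kind, and field names shifted between early Rook releases, so treat this as illustrative rather than the exact manifest from the demo:

    kubectl create -f - <<EOF
    apiVersion: ceph.rook.io/v1beta1   # differs by Rook release
    kind: Cluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph            # separate from rook-ceph-system
    spec:
      mon:
        count: 1                      # one mon for a demo; three in production
      storage:
        useAllNodes: true             # look at every Kubernetes node
        useAllDevices: true           # consume unused raw devices found there
    EOF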
Here we start up one manager. Basically, even in production clusters we just start one manager and rely on Kubernetes to restart that daemon if it fails. And then we have one OSD pod; today we have one OSD pod per node, but that's going to be changing soon, so that we have one OSD pod per device. Okay, so now let's go look at what's happening inside Ceph. I can actually connect to any of these pods and run Ceph commands inside of them.
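A sketch of connecting, with an illustrative pod name:

    # Open a shell in one of the Ceph pods (copy the real name from `get pod`)
    kubectl -n rook-ceph exec -it rook-ceph-osd-xxxx -- bash

    # Then run Ceph commands inside it
    ceph status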
Alright, and if I typed that right, now we're inside of the OSD pod and we can run Ceph commands. So if I say ceph status, we should see: yep, I've got a mon that's in quorum, the manager is active, and four OSDs that are all up and in. And then we don't have any pools or objects yet, since nothing is consuming this storage.
Okay, so let's do something a little more interesting here. As part of the manager there's a plugin for the dashboard, and we told Rook that we wanted to enable that dashboard. So let's go see how to bring up the dashboard. Now, if I look at the services — if I typed that right, get services...
So the service is basically an endpoint and a load balancer that directs traffic to the different pods. The Ceph manager created a dashboard service — not a pod, sorry — that exposes port 7000, which is for the dashboard, and it exposes it on this cluster IP. Now, by default this is just an internal service for the cluster, and you have to be on one of the Kubernetes nodes for that to work. Right now I'm on my Mac, and so that won't really help me.
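One way to reach it from outside is to put a NodePort service in front of the dashboard; a sketch, with the service names assumed to match Rook's mgr dashboard service of the time:

    # Expose the internal dashboard service on a port of the minikube VM
    kubectl -n rook-ceph expose service rook-ceph-mgr-dashboard \
        --type=NodePort --name=rook-ceph-mgr-dashboard-external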
If I look at the services again, we should see that we now have an external service, where the internal port is 7000 and the external port is 32058. So now, if I hit the host IP of minikube with this port, we should be able to see the dashboard. Real quick, to get the IP address, I'm going to go inside minikube and run ifconfig, looking for eth1, which has the address we want.
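Equivalently, minikube can print the VM's address directly:

    # The host-reachable IP of the minikube VM
    minikube ip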
So you see these pools starting to show up, the myfs metadata and data pools; the PGs start peering, and before long we have our pools active and clean. And now we have our file system. It looks like it's taking a minute for the PGs to stabilize, but again, in the dashboard here we can go look at the file system.
So remember we started with the manager, the mon, and the OSDs. Now we've created a file system with one MDS and one standby, so those pods are both running. The RGW now just has one running — or I think there's just one, yep. Piece of cake, right? So we've got a file system and an object store, with a number of pools all created, whether I want replication or erasure coding.
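A sketch of the filesystem CRD behind that; again the apiVersion and field names are illustrative of Rook at the time:

    kubectl create -f - <<EOF
    apiVersion: ceph.rook.io/v1beta1   # differs by Rook release
    kind: Filesystem
    metadata:
      name: myfs
      namespace: rook-ceph
    spec:
      metadataPool:
        replicated:
          size: 1                      # demo sizing; replicate more in production
      dataPools:
        - replicated:
            size: 1
      metadataServer:
        activeCount: 1                 # one active MDS...
        activeStandby: true            # ...plus one standby, as shown above
    EOF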
Okay, so let's say we wanted to actually go look at those PGs and troubleshoot. Now, we could just connect to one of the pods again, but instead I'll show what we call the toolbox pod. So I create one more YAML file. The toolbox is where we've got the Ceph tools and a couple of others, like an s3 client that lets us run s3 commands to test out the object store, and a lot of other things. But if I connect — let me get the name of the pod.
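A sketch of that, with the file and pod names following the Rook examples:

    # Create the toolbox pod, then open a shell in it
    kubectl create -f toolbox.yaml
    kubectl -n rook-ceph exec -it rook-ceph-tools -- bash

    # Inside, the full Ceph CLI is available for troubleshooting, e.g. PGs
    ceph pg dump | head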
A couple of Kubernetes things to show: let's say that one of our pods dies and one of our daemons goes away. Kubernetes will bring it up automatically. I'm going to go ahead and just delete one of these and see what happens. Let's say I delete the manager pod; if I delete the pod — okay, the manager pod has been deleted.
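The kill-and-recover experiment is roughly (pod name is illustrative):

    # Delete the mgr pod; its Deployment immediately schedules a replacement
    kubectl -n rook-ceph delete pod rook-ceph-mgr-a-xxxx

    # Watch the new pod come up
    kubectl -n rook-ceph get pod -w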
What happens is, when the OSD pod starts up, it's going to use lsblk and say: hey, what devices are available? Do they already have partitions on them? Do they already have a file system on them? — and make sure we don't overtake them if we're not supposed to use them. And after that we see, in this example, that sda is already in use — better not use it — but the other devices it found, sdb, sdd, sde: as you see, all these devices are available.
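You can run the same check by hand on a node; for example:

    # List block devices with any existing filesystems/partitions,
    # roughly what the OSD pod inspects before claiming a device
    lsblk -f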
So then Rook will go through them one by one and, you know, run mkfs and initialize them for BlueStore — a lot of Ceph commands that, if you've ever set up a cluster on your own, I hope you can appreciate how much has been done for you. So starting up a cluster is really easy with Kubernetes and with Rook.
A quick question about how you consume the storage: if you're using sort of upstream Kubernetes and you're deploying Ceph independently, and you want to use volumes that are provisioned and such, you have to set up a dynamic provisioner, I think it's called. It's not a CRD, but it's another thing you have to declare in the cluster. How does that work with Rook?
Right, yeah. So Rook has a volume provisioner, and it uses a FlexVolume plugin that makes it really easy to mount the storage in your pod. We've got a sample here which would be good to show, actually: we've got a WordPress sample which runs MySQL behind it, and MySQL, of course, needs some block storage.
So this mysql.yaml, which is a bit long, is where we consume the storage. This MySQL pod is going to use this volume, which is backed by a persistentVolumeClaim of this name; so mysql-pv-claim is what we need to go look for. Then up here, higher, we see that this PVC is based on the rook-ceph-block storage class.
So now, if I go back and show that storage class: the storage class rook-ceph-block uses this ceph.rook.io/block provisioner, so that when somebody wants to provision a PVC using this storage class, the Rook provisioner in the operator will be called, and the operator will go and create an image for consumption based on the pool that you tell it. So in this case we're using this thing called replicapool, which, in this other CRD, we've defined as a pool CRD.
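A sketch of those two objects together; the apiVersion and provisioner names are illustrative of Rook releases of that era:

    kubectl create -f - <<EOF
    apiVersion: ceph.rook.io/v1beta1     # differs by Rook release
    kind: Pool
    metadata:
      name: replicapool
      namespace: rook-ceph
    spec:
      replicated:
        size: 1                          # demo; use 3, or erasure coding
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: rook-ceph-block
    provisioner: ceph.rook.io/block      # handled by the Rook operator
    parameters:
      pool: replicapool
    EOF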
It runs, you know, rbd commands: makes sure the pool was created, creates the block image, and says yep, successfully created the block volume. And then it uses the FlexVolume driver, and this FlexVolume driver is running on all of the nodes in the cluster; it was started by the Rook agent pod that we saw in the system namespace.
Yeah — and if we traced it back again, we would see that that's what created this volume and mounted it. I'm going to show the kubelet log, because if something doesn't get hooked up quite right, the volume won't mount; sometimes there are issues with flex drivers in different environments.
Flex drivers are a bit challenging sometimes, so soon we'll be having a CSI driver — we'll talk about that in a few minutes — which will be a little better that way. All right, I'm going to go inside minikube with minikube ssh and look at the logs, with journalctl looking at the kubelet. The kubelet is the daemon running for Kubernetes on the nodes that does attach and detach of the volumes; it calls into the flex drivers.
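A sketch of pulling that log from inside the VM:

    # SSH into the minikube VM and read the kubelet's journal
    minikube ssh
    journalctl -u kubelet | grep -i flex   # narrow to flex driver activity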
All right, well, if we talk about the roadmap — where are we going? Yeah, Rook has a lot going on, and a lot of it to do with Ceph. Really, our next goal with Ceph is to get it production-ready. You know, Ceph itself has been production-ready for years, but inside Kubernetes we're now making sure it's orchestrated and stable in that environment, and we're getting close.
The CSI plugin, as I mentioned, is an area we'll be working on next; the flex driver is working in the meantime. CSI is just the next way to work across multiple orchestrators — Kubernetes, Mesos, and others. Rook is still just focusing on Kubernetes, but CSI could enable others, say if you wanted to use it somewhere else.
Yeah, I'll just add a few things at this stage. I think the main thing to mention is that Rook is the preferred means for deploying Ceph on Kubernetes, from, I think, both the upstream perspective and also from the Red Hat perspective. This is sort of our path forward, and our efforts are going to be focused here.
So if you're looking at running Ceph for your Kubernetes deployment, or running Ceph inside your Kubernetes deployment, this is where you should be looking. There is a ceph-helm project that we were playing with earlier, and there's still a repo for it on GitHub under the Ceph org, and some people are still using that. But what we found when we were working on that is
that it was awkward to manage Ceph using sort of just the low-level primitives that Kubernetes provides, and we really wanted this operator pattern, where you have something that's smart, that can pay attention to what the cluster is doing and do things in a particular order, and so on. So everything from, if you add more monitors, making sure it adds them one at a time and you maintain quorum; or, if you want to move a monitor, it adds a new one,
first waits for quorum, then takes the old one out. All those sorts of things that you need to be careful about to maintain availability are intelligence that is either already built into the Rook operator or can be added later. Another big item in that category would be upgrades: usually, when you're making a major upgrade — between, you know, Luminous and Mimic, or Jewel to Luminous, or whatever — there's usually a sequence that the daemons have to be upgraded in, and a lot of times
there are special steps to be taken to make sure that you are transitioning and enabling features and so on. That intelligence to orchestrate those upgrades will be encoded into the Rook operator, so that when you do make a major upgrade, you'll basically update the Rook operator pod and restart it, and then you'll just tell Rook: I want to be running Nautilus. And it will go and do all the right things in the right order, with all the right waits and safety checks and everything, to make sure that that process happens smoothly.
The first thing there is that, for example, in this test branch, you can look at devices and create OSDs from the Ceph command line, and basically Ceph will go tell Rook: go create an OSD pod over on this host, with this device, and it'll start up the OSD. And eventually that will all be plumbed into the dashboard.
So the dashboard will let you deploy new OSDs, or expand the cluster to new hosts, or change the number of RGWs that you have running — all of those sorts of cluster management things will be possible. And it's being done with a generic layer, so you could also implement it with other orchestration tools, like ceph-ansible or DeepSea, or whatever else — whoever else wants to go to the effort to implement it.
I guess one last thing to mention — I think you said this earlier on — but the other main change that's currently in flight is the work to change the way that the OSD pods are created. So right now there's one pod that has all the OSDs; very shortly that'll be a pod per OSD, with the new device discovery and everything plumbed into new infrastructure to manage that.
We can show what that will look like — kubectl get pod. Yeah, so right now we only have the one OSD pod per node; in the future we'll have one pod per device, so in this case we'd have four. And the OSD IDs would also be in the pod name, to make it easier to troubleshoot where those OSDs are and look at just their logs when there are issues. So that'll be a great change.