From YouTube: FOSDEM 2019 - Ceph Storage with Rook - Alexander Trost
The advantage is, well, how it can help you do that, and one part will probably fall flat: I had technical issues with my laptop, but I'll see if I can get something out so that we kind of get a demo of the creation of a cluster, how easy it is for Kubernetes to consume the storage there, and what adding and removing a node looks like in a Ceph cluster.
The goal here is that, for the orchestration part, we try to have the deployment automated as well as possible: bootstrapping of the software, in this case a Ceph cluster, then configuration, provisioning, scaling, upgrading, migration, disaster recovery, monitoring and resource management. Those are all points which we try to cover. Not all of them are covered yet, but we are happy to see people help us reach the goal of full automation of, for example, a Ceph cluster in Kubernetes.
As I already said a bit earlier, Rook is not only Ceph. Ceph is one of the many storage providers that Rook can run for you. There is, for example, Minio, which is an object store, maybe you've heard about it, and then there's also EdgeFS from Nexenta.
EdgeFS goes kind of in the direction of Ceph: it provides block storage, file storage and also object storage, but, I think, with a huge strength in geo-replication by default. It's definitely worth checking out, similar to the other providers currently in Rook. Rook itself, by the way, is open source and hosted by the Cloud Native Computing Foundation.
Rook is not only there for running Ceph, but also for Minio, EdgeFS and so on. Rook tries to generalize certain types which are common across those multiple storage backends. So, for example, for both Ceph and EdgeFS the selection of the devices on nodes is the same; there are no two different types.
That makes it pretty easy most of the time to, well, not that I would recommend it, switch between the two, as if you were distro hopping, but with your storage software. In the end something like this would be possible, because you could just copy and paste the device list from Ceph, for example, into EdgeFS and back again. With the framework part of Rook we not only try to have this common ground of specifications, we also try to have combined testing efforts.
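As a rough sketch of what that shared specification looks like, here is the kind of `storage` selection block that Rook's backends have in common. Field names follow the Rook v0.9-era CRDs, and the node and device names are made up for illustration:

```yaml
# Common storage selection block shared across Rook backends
# (illustrative sketch; node and device names are hypothetical).
storage:
  useAllNodes: false
  useAllDevices: false
  nodes:
  - name: "node-a"       # pick this node for storage
    devices:
    - name: "sdb"        # use this specific disk on the node
    - name: "sdc"
```

Because the block is the same shape for the different backends, this is the part you could copy from one cluster definition into another.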
That makes it easier for all those platforms which have their own operator in Rook to be tested together, to have shared policies and so on, and to simply share code. That way everyone who wants to run an operator for some storage backend, something like Ceph or another possible backend, doesn't need to write it from scratch every time. That's basically one of the goals of the Rook framework.
The architecture of Rook goes a bit into Kubernetes now; here is kind of a general look. We have the Kubernetes API, and we have the kubectl client utility which we can use to talk with the Kubernetes API, and Kubernetes uses etcd as the datastore for the API objects. So if we basically start from here, we have our client utility, with which we talk to the Kubernetes API.
When we have created those objects, we can access them again through kubectl, with something like `kubectl get cephcluster`, and we get the object as it currently is in the API. In more current releases of Kubernetes, I think 1.12 or 1.13 or so, you even get some more information about the object as it is.
The operator would then be creating deployments for the Ceph monitors, creating the monitor pods for us, the OSDs, and all the other components like the manager, MDS and RGW, and then also, for example, configuring certain aspects in the manager, like enabling the dashboard and whether SSL on it is enabled or disabled, stuff like that, making it simple.
Here we are basically on a node right now. The daemons, let's say for example for Ceph: we have a monitor on the node, well, one of the Ceph daemons which is placed on a node, if a mon is placed there. And if you want to use the disks on a node for storage as well, you would obviously also have OSD daemons running on it, but all containerized. So there's no conflict with other stuff running on the system; it's all in containers.
Besides the container part, there is also the point that upgrading with Rook is easier. Because everything runs in a deployment, which, for example, would run an OSD, we just go to the Kubernetes API and say: hey, change the image from Ceph version 12-something to version 13-something, and for the others it's basically the same. The same update aspect comes into place because Kubernetes does this management for us. Then there's also the Rook agent.
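In the Rook releases current at the time, that upgrade boiled down to editing one field of the cluster object; a sketch of the idea, with the exact field path and version tags treated as assumptions:

```yaml
spec:
  cephVersion:
    # Changing this image tag is what triggers the upgrade;
    # the operator then rolls the daemon deployments for you.
    image: ceph/ceph:v13.2.2   # was e.g. ceph/ceph:v12.2.9
```

Kubernetes handles the rolling replacement of the pods, which is the "deployment management for us" the talk refers to.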
In our current case we have the Rook flexvolume plugin. Flexvolume is kind of like the predecessor of CSI in Kubernetes, where CSI is the Container Storage Interface, which, as the name kind of implies, is a common interface for storage, not only for containers but for storage in general. Flexvolume is kind of like that, but on a more limited scale as far as I know, specifically for Kubernetes, and we have it running right now still with flexvolume.
There are ways, I think especially through the local persistent storage, the local node storage from Kubernetes, which has been introduced in, I think, 1.9 or 1.10, which could potentially be used for exactly this purpose. We are still kind of looking into how we can make use of it like this, but that's kind of where we are right now.
So yeah, if I create the Rook Ceph operator and then go ahead and create a Rook CephCluster object, well, as already shown in a previous example, we have the agent pods, which run on all nodes, and the operator then takes care of creating the Ceph components: the OSDs, the monitors, the manager, MDS, RGW is also there, and, well, everything of a Ceph cluster.
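A minimal CephCluster object along the lines of what the demo creates might look like this. This is a sketch against the v0.9-era `ceph.rook.io/v1` API; the names, image tag and host path are common defaults from that period, not guaranteed for other versions:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.2     # which Ceph release the operator should run
  dataDirHostPath: /var/lib/rook # where config/mon data lands on the host
  mon:
    count: 3                     # three monitors for quorum
  storage:
    useAllNodes: true
    useAllDevices: false
```

Creating this one object is what kicks off the operator's work of deploying mons, OSDs, manager and so on.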
So, to summarize this part: we have this custom object with kind CephCluster, which the Rook operator has defined and which you created, and what you would currently get is a Ceph cluster running Ceph version 13.2.4 or something. And regarding your data, where will the data be hosted? Rook right now puts the config and the monitor data there on the host, but we're working on trying to, well, make it really just the configuration, maybe still the mon data, but not the OSDs, because right now that's the behavior.
But coming back here, we have the dashboard part, where we say: yes, we want the dashboard enabled. We can also control a bit about the monitors, like how many we want, and whether we want to allow multiple monitors on one node. If you have, well, only three nodes, you may or may not want this to happen, but if you have multiple nodes you normally want to disable it.
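Those dashboard and monitor knobs correspond to spec fields roughly like these (field names as in the v0.9-era CRD; a sketch, not an authoritative manifest):

```yaml
spec:
  dashboard:
    enabled: true                # have the operator enable the mgr dashboard
  mon:
    count: 3                     # how many monitors we want
    allowMultiplePerNode: false  # with more than 3 nodes you normally keep this off
```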
Have more nodes than monitors, for example, because then you already have room to move a monitor from a node which failed to another node and, well, create a new monitor there. Basically, this part was the start. Here, for example, useAllNodes and useAllDevices: it's kind of a small control, but those two options allow you to select where storage will be used and what storage will be used. With useAllNodes it's, well, use all nodes.
Here we go: if we skip a few slides ahead, we come back again to this storage configuration part, where we have again those useAllNodes and useAllDevices. Instead of, for example, having useAllNodes set to true, so that all nodes which are applicable will be used, we could also specify a list of nodes, on which we'll have information on the next slide. We can also have a simple device filter, where we go: yeah, use all disks matching `sd` plus a wildcard.
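Instead of `useAllNodes: true`, a node list plus a device filter looks roughly like this. The `^sd.` regex matches the "sd plus wildcard" example from the talk; the node names are made up:

```yaml
storage:
  useAllNodes: false
  useAllDevices: false
  deviceFilter: "^sd."   # use all disks named sd<something>
  nodes:
  - name: "worker-1"     # only these listed nodes provide storage
  - name: "worker-2"
```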
There's also the possibility to specify certain configuration parameters not only on a cluster-wide level, but also on a node and even per-OSD, per-device level. In this case, for example, we have an option which normally is only used for NVMe or, well, even faster devices: how many OSDs should be created per device.
B
You
could
then
specify
like
config,
and
then
all
these
per
device
like
free
or
well
insert
the
mouth
for
nvme
devices.
So
many
of
these
here
basically
and
there
we
even
have
it
obviously
is
per
device
where
you
can
just
have
it
per
device
and
on
node
level
on
classified
level.
So
that's
not
all
just
a
few
find
where
I
left
off
yeah
yeah
to
sharply
go
engineer.
Further
companies
native
integration
parts.
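The per-cluster, per-node and per-device layering might be sketched like this. `osdsPerDevice` is the option being described; the exact nesting follows the v0.9-era docs and should be checked against your Rook version, and the node/device names are hypothetical:

```yaml
storage:
  config:
    osdsPerDevice: "1"       # cluster-wide default
  nodes:
  - name: "nvme-node"
    config:
      osdsPerDevice: "3"     # node-level override for fast NVMe devices
    devices:
    - name: "nvme0n1"
      config:
        osdsPerDevice: "4"   # device-level override, most specific wins
```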
Besides the so-called custom resource definitions, those custom types of objects we can create, we already have storage classes in Kubernetes, which allow us, the admin most of the time, to specify which provisioner, and certain parameters for this provisioner, should be called. The provisioner will be called if you create a volume, well, a persistent volume claim: if it matches this storage class, if it has this storage class in it, Kubernetes will take care of talking with the provisioner and saying: hey, somebody requested 20 gigs of storage, for example.
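Concretely, the admin-side StorageClass and the developer-side claim could look like this. The flexvolume-era `ceph.rook.io/block` provisioner name and the pool name are taken as illustrative assumptions from that Rook generation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block  # Rook's block provisioner (flexvolume era)
parameters:
  blockPool: replicapool         # which Ceph pool backs the volumes
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  storageClassName: rook-ceph-block
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi              # "somebody requested 20 gigs of storage"
```

The admin writes the StorageClass once; developers only ever touch the claim.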
That's kind of where Kubernetes takes over in terms of storage management. Kubernetes has the persistent volume claims, which, I hope for everyone using them, always reflect the claims an application has made, like: hey, I need 500 gigs of storage. So it's always clear that this application is using 500 gigs of storage. And in the background, kind of not on the application level, you have the persistent volumes, which again also hold information like which storage class has been used to create this volume.
Yeah, so what can Rook kind of do to make this better? Well, if you have a Ceph cluster, or especially if you would otherwise even need to run a Ceph cluster yourself, it makes it better for you to, well, run a Ceph cluster in Kubernetes. Especially, as I said, if you don't have one yet and you're like, yeah, let's go, it might be a bit problematic, because you need to know a bit of Kubernetes stuff.
Part of what Rook does is, as we heard, health checking for the monitors. So if you have, let's say, five nodes, and currently, just for a simple example's sake, you have your monitors with one on the first node, a second on the second and a third on a third, and the third node were to fail: Rook would fail this monitor out of the cluster and move it over to, let's say, node four or five, which are available for that. This monitor management is simply handed off to Kubernetes.
Since you send everything to the Kubernetes API endpoint, you have those general manifests, kind of, let's say, to put it like this, infrastructure as code. We are more and more at, well, not only infrastructure as code but kind of application deployment as code, depending on how far you take it. You have those YAML files which define how your Ceph cluster should look and which pools should be created. You just have a YAML file which says: here, a pool object with this name, how many replicas it should use, or erasure coding.
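Such a pool manifest is only a handful of lines; a sketch, with kind and field names as in the v0.9-era Ceph CRDs:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # spread replicas across hosts
  replicated:
    size: 3             # how many replicas the pool should keep
```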
That goes together with the failure domain, which, as the name implies, controls how the replicas are spread. Well, you just kubectl create it against the Kubernetes API and you have your pool created in a few seconds, and it makes it easier on that side too to, well, manage those things. For example, the same here too: it's not just like, I want to use a file system, let me buy five servers and install Ceph and, well, set it up, and put an MDS on it; instead you then just create a file system.
B
Object
which
has
in
has
the
size
numbers
and
everything
like
for
the
pools
again
forest
file
system.
What
else
old
numbers
like
how
many
active
monitor
MDS?
Do
you
want?
Should
there
be
standby
MVS
too,
and
just
gives
you
a
bit
of
well
my
playground,
not
to
say
like
about
a
good
amount
of
room,
to
have
certain
options
and
have
that
automatically
happen
same
for
RGB?
You
have
no
object,
control
it
through
that
have
certain
options
to
control
the
behavior
of
the
pools
or
how
they
should
look
yeah.
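The filesystem object with its pool sizes and MDS counts could be sketched as follows (v0.9-era field names; an illustration, not a verified manifest):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3            # replication for the metadata pool
  dataPools:
  - replicated:
      size: 3            # replication for the data pool
  metadataServer:
    activeCount: 1       # how many active MDS daemons
    activeStandby: true  # keep a standby MDS too
```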
Well, we don't have liveness probes and readiness probes for all Ceph components, but I think we have them at least for the manager right now, and for the monitors we do kind of external health checking from the operator side, to see: is this monitor failing, so that we need to fail it over to a new node? And there's the dynamic provisioning part: I create a claim, and a few seconds later my block device, my block volume in Ceph, has been created and linked with Kubernetes through
a persistent volume object. It is, well, a breeze too if you want to run applications: you don't need to, like, well, fill in a ticket saying, hey, I need 20 gigs of storage, can you please provide it to me, and someone starts running through the data center and tries to find the storage for you. It's simply dynamic, which is pretty, pretty good, especially for developers.
If I remember the name correctly, we're also looking into trying to get more abstraction in there. For example, a developer would say: yeah, I need a bucket, so he just creates a bucket object. And, for example, we also have support for CockroachDB, so for a database we would obviously also want to look into having a database object which a developer would just create, and then, in some way which is kind of still up for discussion: should it be a service broker? Should we...
I think the failover time, when a node failed, was something like five minutes or so to fail over to another node, and, well, I hope you're only running cloud-native applications, so that shouldn't be a problem, as you have it replicated and running elsewhere, but it's still kind of, well, not too good.
Yeah, as I kind of said in the beginning, it's not only Ceph. There's not only a Rook Ceph operator; there's also a Minio operator, which has this object, sort of: should we have a Minio object store object? For CockroachDB, well, right now we only have a CockroachDB object, but again, we are open to people commenting and discussing with us how we can go down the best path, whether it's more of a service broker approach or not. EdgeFS has been implemented by the Nexenta guys.
There again, also a big shout-out to them, as it's huge work. It's not like Minio, where we have, like, two or three replicas of one container image and that's it; there's, again, kind of like with Ceph, more complexity simply behind it, and it's amazing to see they have implemented it in Rook, and also that they are using certain parts of the Rook framework. There's an NFS server as well, as are Minio and CockroachDB, and, well, EdgeFS has been implemented, thanks to the community.
Get to this page first if you want to get involved: we are on GitHub. I think some people are still on GitLab, right, good luck. We have a page which we are currently reworking, so if you're right now like, where do I find this and this: we are working on it, we know of the problem. And, well, we have a Slack channel. Slack, you know, it's a whole Slack of our own, yeah. If you want to join, we also have a conferences channel as well.
Well, let me repeat the question. The question is, for the mon failover, how does the data get to the other node, or does the data even get copied from the node which failed to, well, the node which is getting the new monitor? No, it doesn't get copied. It's a brand-new monitor, which, well, just gets a new directory, which, what was it like...
The question is what will happen, if a monitor fails, to the pods that use the storage. So from the Ceph side, the monitor list is always kind of, well, to be treated as rather fluid. So if one monitor fails, well, it's still in the list in the config most of the time, so if you bring up a new monitor and it's back in the quorum, talking with the other monitors, then this new monmap is basically, well, fairly new.
Well, if you use host networking, you get the node's network setup, which has its advantages and also its disadvantages, let's say like this. But to go a bit further into the CNI part: for Kubernetes, there are multiple, I think two or three, projects right now which allow you to define multiple CNI plugins in your config somewhere and then have multiple interfaces in your pods, which I think, at least, well, I heard it from Intel, like container...