From YouTube: Kubernetes Rook
Description
Jared Watts from Upbound presents Rook to the London OpenShift group.
All righty, let's get going then. All right, yeah. So my name is Jared Watts and I am a maintainer on two different open-source projects that are heavily integrated into the Kubernetes ecosystem. And since this is an OpenShift meetup here, we will not shy away from diving into some of the Kubernetes internals. I think that this is a great format to be as interactive as you guys want to be. Feel free to call out a question, or voice your dissent on any of the slides coming up here. We can be super interactive and have a dialogue; I'm happy to do that. All right.
So let's start talking about Rook now. I don't know how many people have heard of Rook, but it is a project that was donated to the Cloud Native Computing Foundation, the CNCF. That foundation hosts a lot of the other projects in the ecosystem, including Kubernetes itself. Kubernetes was the first project hosted by the CNCF to reach full graduation, so it kind of fostered this big ecosystem for other projects based on Kubernetes, providing solutions in that space. (Hey Alex, it's good to see you. Good to see you, man. I'm good, thanks for showing up.) So that kind of gave a nice space for a lot of other projects to get off the ground and plug into that ecosystem.
So when you want to answer the question of what Rook is, you'd call it a cloud native storage orchestrator. What I mean by that is: it takes Kubernetes and then installs a bunch of custom controllers and custom types to do a whole bunch of automation for various storage solutions. All these tasks here in this list that a normal human operator would do for a storage solution, Rook provides a framework and a set of controllers and types to bring into the Kubernetes ecosystem. It's an open-source project and it's completely community driven: everything is up on GitHub, and the whole community is really open, transparent, and happy to talk to anybody at any time.
So if we take a look at what storage for Kubernetes looked like in the beginning, what you'd normally end up having is your Kubernetes cluster on the left there, and then a whole bunch of external storage services. Those could be a NAS device, or cloud provider block storage like Amazon EBS, whatever it may be, but the key here is that they lived outside of the Kubernetes cluster. There are various utilities and facilities that were built into Kubernetes by the storage SIG that enable these storage systems outside of the cluster to interact and provide storage for your pods that are running inside of the cluster, through plugins and dynamic provisioning and things like that that we'll get into later. Having your storage outside of Kubernetes has some advantages, but we're going to talk about what the challenges are here.
So the first one is that it relies on that external storage being there. If you deploy your application into another Kubernetes cluster somewhere else, you don't have that NAS box sitting there anymore, so those external storage services have to be accessible wherever your Kubernetes cluster happens to be. They have to be rolled out, and that creates a bit of a burden: somebody has to manage them, because you have these external storage services sitting somewhere. Maybe a whole team of engineers, the DevOps or site reliability engineering (SRE) people, are running those for you. And when you start taking dependencies on managed services from cloud providers, you can get into a scenario of vendor lock-in, where it's hard to move and go to another provider; we'll talk about that later on for sure, in depth.
So here's another way to look at this: instead of having storage outside of your Kubernetes cluster, one thing we could do is bring storage right on into the cluster. Kubernetes, as the container orchestrator, is really good at providing a lot of facilities to automatically manage containers: the pods that the containers are running inside, handling errors, failover, scheduling to different nodes on the cluster when nodes have a problem. There's a whole lot of smarts inside of Kubernetes that we can take advantage of, and we can harness that power for our storage systems as well, with these storage systems that we're deploying now inside of Kubernetes.
And now, after Red Hat's acquisition of CoreOS, it's integrated nicely into the Operator SDK and into OpenShift as well. Basically, when you're looking at what an operator is inside of Kubernetes, you can think of it as a piece of software that takes all of these smarts, the operational expertise, that domain knowledge of what it takes to run a piece of software, and puts that into software itself. So you've got software that is watching, managing, configuring, handling errors, etc.: tasks that a human would normally do, we're writing software to do for us now.
The basic premise of how an operator works is that it's just another form of a controller, a Kubernetes custom controller, and it sits in a loop that is constantly monitoring what the user wants the system to be and what the system actually is. We call that observing (what is the current state of this cluster?) and analyzing.
These custom resource definitions replaced third-party resources, I think maybe in Kubernetes 1.8, maybe 1.7, but a few releases back at least, and this is the Kubernetes extensibility story. This is how you teach Kubernetes about new, arbitrary types, and the cool thing about those is that you can make them first-class citizens. So just like a pod or a config map or a deployment or whatever that kubectl knows about, you can make your own arbitrary types that the API server will know about, and you can perform actions on them.
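As a sketch of what registering such a type looks like (the group and kind names here are illustrative, not Rook's exact published schema), a CustomResourceDefinition teaches the API server a new kind:

```yaml
# Illustrative CRD: registers a new "Cluster" kind with the API server.
# Once applied, the API server serves it just like a built-in resource.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cephclusters.ceph.rook.io
spec:
  group: ceph.rook.io
  version: v1
  scope: Namespaced
  names:
    kind: CephCluster
    plural: cephclusters
    singular: cephcluster
```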
The user says: hey, I want this storage cluster, and then the operator will act upon the user's desired state to make the actual cluster match what the user has asked for. The way the operators do that is they're in constant communication with the Kubernetes API server, and they can use anything the API server has: all of its resources.
Config maps and stateful sets, or whatever it may be. And then this automated software, these operators, can do interesting things like upgrading your storage system; they can watch for health defects and monitoring alerts, or rebalance data around the cluster. These operators can basically do anything that we program them to do, with the specific domain expertise of these storage systems. One important thing to note here, though, is that the Rook operators are not on the data path.
A
So
when
it
comes
to
reading
or
writing,
bytes
like
file
block
object,
storage,
whatever
it
may
be,
when
you
were
in
a
pod,
is
it
is
consuming
one
of
these
storage
systems
that
rook
has
orchestrated
for
you.
When
you
go
to
read
or
write
bytes
from
it,
it
doesn't
go
through
rook.
Then
it
is
speaking,
the
that
client
there
or
that
pod
there's
going
to
be
talking
directly
to
the
underlying
storage
system
and
the
operators.
Rook
operators
are
merely
just
about
setting
them
up.
You
know
managing
them
over
time,
etc.
So if you call it "kube cuddle", I will not make fun of you, because I've done it myself. Anywho. So kubectl, the command-line Kubernetes tool that everyone knows and loves, manages Kubernetes clusters, and what this is saying is that on the command line, through the custom resources that Rook installs, we get all these new types to interact with. We get a storage cluster, a storage pool, a file store, an object store, all that sort of stuff.
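For instance, once the Rook CRDs are installed, the new kinds behave like built-ins: you can `kubectl get` and `kubectl describe` them. A storage pool object might look like this (field names approximate Rook's Ceph pool resource of that era; treat it as a sketch):

```yaml
# Illustrative pool custom resource; interact with it like any built-in:
#   kubectl -n rook-ceph get cephblockpools
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3   # keep three copies of each object in the pool
```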
A
So,
of
course,
the
communities
api
server
is
persisting,
everything
to
add
CD,
underneath
in
the
top
right
there
we've
got
the
rook
operators
and
so
we're
seeing
here
that
the
operators
will
sit
there
and
they'll
talk
to
kerbin
ADA's,
API
server
and
they're,
going
through
their
control,
loops
and
they'll
creates.
You
know:
pods
they'll,
create
services,
deployments,
config
Maps,
all
that
sort
of
stuff
in
service
of
fulfilling
the
user's
desired,
desired
storage
requests,
and
so,
let's
see
on
the
bottom
right
here
we
have.
A
You
know
a
set
of
demons
that
run
so
when
you
say
I
want
a
storage
cluster.
You
know
you're
just
kind
of
you're
specifying
what
type
of
storage
cluster
you
want.
It
could
be
SEF
Mineo.
Maybe
you
want
a
cockroach
DB
database,
whatever
storage,
it
is
that
you
want,
when
you've
expressed,
that
need
for
it.
The
rook
operators
will
deploy
out
the
specific
storage.
You
know
provider
that
you've
requested.
You
know
in
containers
or
in
pods,
and
you
know
all
the
kubernetes
primitives.
They
need
to
run
Etsy
and
then
the
real
kit.
And then there's the Rook agent, which we can kind of gloss over a bit, because that's being replaced now by CSI, the Container Storage Interface, which is the new hotness for storage plugins that can be used across a lot of different container orchestrators, such as Mesos, Docker Swarm, Cloud Foundry, and Kubernetes as well. That acts as the storage plugin: when a pod needs a block volume, or a shared file system, or object storage, whatever it may be, the Rook agent gets involved.
So let's put a little bit more of a visual on here. On the left is a Ceph cluster CRD; this is a custom resource, an instance of the custom resource. When the user runs kubectl create on that YAML on the left, the Rook operator is going to get a notification about that, and then it's going to speak to the Kubernetes API server to make sure that all the Ceph components get deployed out to the Kubernetes cluster to form a storage cluster.
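A minimal version of that cluster YAML might look like the following. This is a sketch based on Rook's Ceph examples from roughly this era; exact field names and the image tag vary by Rook version:

```yaml
# Illustrative CephCluster custom resource; apply with:
#   kubectl create -f cluster.yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v13.2.4   # which Ceph release the daemons run (example tag)
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                   # how many monitors to run
  dashboard:
    enabled: true              # serve the Ceph dashboard as well
```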
Ceph is a storage solution that I think originally came out of a research project from the University of California, Santa Cruz, and it's part of Red Hat now. I have heard talk of it becoming the new OpenShift Container Storage, instead of Gluster. I heard somebody say that, and I don't know if they knew what they were talking about or not, so I'm kind of spreading unsubstantiated rumors now, but I did think that was interesting: that Ceph may replace Gluster for OpenShift Container Storage. Anywho.
All right. So we're getting near the end of the talking here, and then we're going to give a demo, which kind of brings a lot of this stuff together, and then we can really start poking at it and asking questions. So, in addition to these operators and these CRDs that we've been talking about for a bunch of different storage systems, you can also think of Rook as a framework, too. There are some libraries, some types, some reusable testing patterns, etc., that different storage providers can take advantage of to make their job easier when they want to run inside of Kubernetes. If you're a storage vendor and you want to run inside of Kubernetes, you can do all that stuff yourself, or you could take advantage of Rook's libraries and common specs and automated testing frameworks to make that effort to get integrated into Kubernetes a little bit easier.
So, as we kind of already talked about before, the storage providers that have already integrated with Rook are Ceph, CockroachDB, and Minio; there's an NFS provider; Apache Cassandra is in there as well; and Nexenta's EdgeFS is there too, with hopefully more to come in the future. All right, so let's do a demo now. I don't know what state I left this machine in, so let's bring up my Minikube, and I'd better restore this snapshot, because God knows what I've done to it since then.
There's a contributor to the Rook project (actually a maintainer now, because he's been a contributor for quite a while) who wrote a set of scripts that uses Vagrant to bring up any number of CoreOS VMs on your laptop. So you can bring up, say, three CoreOS VMs, and then it uses kubeadm to install a Kubernetes cluster and get that running, so you can have a multi-node cluster running inside of your laptop. That is pretty good; it's a good solution! He's worked out a lot of the bugs in it, and he supports a number of other distros too: in addition to CoreOS, I think he does Fedora and Ubuntu. I think that's the three of them now. It's pretty rad, because I don't know of any other way to easily have a multi-node Kubernetes cluster running on your own machine.
A
So
it's
got
a
little
make
file
that
it's
just
like
you
know
make
up,
and
it
brings
up
a
you
know:
lights
up
on
all
completely
automated
a
multi,
node,
groom,
radius
cluster
right
on
your
laptop,
of
course,
that
runs
into
memory
issues
real
fast.
When
you
bring
up
multiple
VMs
inside
your
laptop
yeah,
it's
pretty
cool,
though
alright,
so
a
bigger
plugin.
B
A
A
Let's see, we'll start bringing that up now. Alright, good, excellent. I thought I dismissed this guy here; go away. Okay, so I am not running a multi-node cluster, but I am running a Kubernetes cluster with a single node, the Minikube one, that's running v1.13, and let's kind of hop into this now. These are all part of the Rook repo, so you can see all this stuff on there, though some of these commands, actually, probably not; well, they're in the documentation, but not in an easy form. Here I can just copy them right from here and have everything streamlined, with no help whatsoever. All right.
So the first thing we're going to do here is bring up the Ceph operator. As we talked about before, the operators are those pieces of automated software that understand everything about a particular storage provider, and they're going to be sitting there waiting for you to ask for a storage pool or a storage cluster. When we do that, they will act on it, and they will use all their automated knowledge of what a Ceph cluster is to bring up the pods and things that are relevant for that. So we have the operators up and running right now, and then let's go ahead and create a cluster as well.
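The "bring up the operator" step is a single kubectl create of a manifest from the Rook repo. Sketched here, heavily trimmed, with names matching Rook's Ceph examples of that era (the image tag is an example; the real file also carries RBAC rules and the CRDs):

```yaml
# Trimmed sketch of operator.yaml: a namespace plus a Deployment that runs
# the operator container, which then watches for Rook custom resources.
apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-operator
  namespace: rook-ceph-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-operator
  template:
    metadata:
      labels:
        app: rook-ceph-operator
    spec:
      containers:
      - name: rook-ceph-operator
        image: rook/ceph:v0.9.3   # example tag
```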
Yeah, so basically the last command I ran was kubectl create -f cluster.yaml, and inside of cluster.yaml we have some RBAC-related stuff, service accounts and roles and whatnot, but the meat of it is this custom resource of a Ceph cluster, where we have some configuration information. Now, this is just a sample YAML that has a whole bunch of comments documenting what the fields are, so there's a whole bunch of green stuff on here that's just comments, but basically this gives you the ability to configure the way that you want your Ceph cluster to run. You can say what specific version of Ceph you want to run for the daemons (the monitors, the OSDs, the managers, etc.), you can say how many monitors you want it to run, whether or not you want the Ceph dashboard enabled and running as well, some other stuff. And I want to get to something important: the storage node here.
So this is how you say what parts of your Kubernetes cluster's resources you want to be included as storage resources inside of the Ceph cluster. What I have right here says: use all the nodes in the cluster (so every single node you find in my one-node cluster, please use them), and don't use all the devices. If I had set that to true, it would try to find every free, open device that doesn't have any partitions, doesn't have any file systems on it, and maybe doesn't have labels as well; devices that it thinks it can use, that are raw and ready to go. Or you could specify a filter saying: hey, use every device named sda on every node, whatever it may be. This is the way that you pass that in.
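The storage node being described looks roughly like this (a sketch; the values marked are the ones used in this demo, with the filter shown as the alternative):

```yaml
# Illustrative storage section of the cluster YAML.
storage:
  useAllNodes: true        # every node found in the cluster joins as storage
  useAllDevices: false     # don't automatically grab every raw, clean device
  deviceFilter: "^sda"     # alternative: regex naming which devices to use
</imports>
```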
Cool. So I'll make this a little bit bigger so it doesn't wrap. There we go; that's a little better, for me at least. So what we have now is the operators running at the top there, in the rook-ceph-system namespace, and then in the rook-ceph namespace we have a number of different pods, and those are the pods that make up a Ceph cluster. We've got managers, monitors, OSDs; so we have a fully functional Ceph cluster now, with basically just two commands: one kubectl command to start the operators and another kubectl command to create the cluster. And since our operator is a set of automated software that knows how to create a Ceph cluster using the Kubernetes API, we're done. We have it; it's ready. So let's use this storage for something useful.
So I'm going to go ahead and create a storage class, and I'm also going to create a little toolbox so that I can check on the health of this Ceph cluster easily on the command line. And this is what I wanted to see: all the placement groups inside of that storage cluster are active and clean; none of them are damaged or unknown or anything like that. So that's good. The next thing I want to do is run a little demo here that does something useful.
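The toolbox is itself just a pod with the Ceph CLI installed. A sketch, assuming an image that carries the Ceph tools (the image tag and command are illustrative):

```yaml
# Illustrative toolbox pod; check cluster health from inside it with:
#   kubectl -n rook-ceph exec -it rook-ceph-tools -- ceph status
# (healthy output shows all placement groups as "active+clean")
apiVersion: v1
kind: Pod
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
spec:
  containers:
  - name: rook-ceph-tools
    image: rook/ceph:v0.9.3        # example tag; needs the ceph CLI inside
    command: ["/bin/sh", "-c", "sleep infinity"]   # idle until exec'd into
```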
So what I'm going to do is create an app that's going to consume this storage, and there's a whole lot that happens underneath the covers here that we can talk through once we get it running. Basically, the first thing I did is start MySQL, and then I'm also going to start WordPress, which will use MySQL. WordPress is a stateful app: it stores things inside of MySQL, so all the comments on posts, the blog posts, etc., get stored inside of MySQL. So what we have running here now are two pods, one for WordPress and one for WordPress's MySQL database. Now, what's interesting here is that the volumes for those pods are actually being dynamically provisioned and provided from the Ceph cluster that we created. We have given the pods running inside our Kubernetes cluster storage resources, dynamically, on the fly, from a set of Kubernetes pods that we created with two commands. So let's bring up WordPress, let's configure it, and then we're going to do some other stuff while we're talking through it.
So I'm going to run this command just to get the IP address of the WordPress front end, which is here. That's good, and I'm going to use the super secure password of "test". I like how they have a checkbox where you have to confirm your use of a weak password. Very weak; I like that. So it's going to install some of the WordPress packages and stuff, which takes just a couple of seconds, but let's maybe walk through what happened here to get this storage provided. Okay, so let's start with MySQL.
So for MySQL, inside of its definition, we created a persistent volume claim. Now, I'm not sure how familiar everyone is with persistent volume claims and persistent volumes and storage classes, etc., in Kubernetes, but basically a PVC, a persistent volume claim, is a request for storage. Inside this PVC I just declared it as two gigs, because I obviously don't need a lot to run a single blog with no content right now.
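The two-gig claim described here is a small manifest. This sketch assumes the storage class created earlier was named `rook-ceph-block` (the name is whatever that step actually used):

```yaml
# Illustrative PVC: "give me two gigs from Rook, please."
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: rook-ceph-block   # ties the claim to Rook's provisioner
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```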
So we specified: give me two gigs from Rook, please. And now, what happens under the covers is that the MySQL pod is scheduled to run on a node in the cluster; that happens to be our only node, so it's running on Minikube. When the Kubernetes scheduler picks the node that the pod is going to run on, a little dynamic-provisioning-of-storage dance occurs. On that node, the Rook storage plugin (or the CSI plugin, in the future) gets a notification saying: there's a pod with this ID that needs this many gigs of storage, and it needs this type of file system on it. And so on that node, running locally, the Rook plugin first creates a block image in the cluster that'll serve as the backing store for those two gigs, striped across all the nodes in the cluster, and then it attaches, or maps, that block volume image to a block device on the node.
So it's like /dev/rbd0: it makes it a local device, and then it formats that local device with XFS or ext4 or whatever file system on top of it, and then it hands it off to Kubernetes, saying: here's the path where you can find that 2-gig volume that you requested. And then Kubernetes attaches that as a volume mount to the pod.
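The glue that routes a claim to Rook is the storage class created earlier. A sketch from the flex-volume era the talk describes (the provisioner name and parameters approximate Rook's examples of that time; treat them as assumptions):

```yaml
# Illustrative StorageClass: claims referencing it are provisioned by Rook.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block   # Rook's block provisioner of that era
parameters:
  blockPool: replicapool          # Ceph pool backing the provisioned images
  fstype: xfs                     # file system put on the mapped /dev/rbdX
```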
So what we have is the MySQL pod, with the container running inside it, mounted to a file system that the Rook agent provided, which is backed by a Ceph block device, which is backed by a Ceph virtual image striped across the entire cluster. So there's a whole little stack there, turtles all the way down, of how we get storage that's distributed across all the machines in the cluster as a block device for one particular node. That would be good, yeah; there's my hand, that's the animation there. Okay, so let's look at WordPress, and then let's do something which demonstrates why this is kind of interesting. I hope that's the right credentials; I can't possibly remember a four-character password. All right.
So we have WordPress up and running, and it's persisted in the MySQL database, which is backed by Ceph. So let's add a reply here of "hello, London OpenShift"; and I don't know how many times you guys have typed OpenShift and accidentally left out the f, but that happens to me way more often than I'd like to admit, so we won't use numbers here, right. So on this "hello world" blog post we've got a comment from me, and now let's do something a little bit crazy. This command that I'm about to execute (let's give some space to ourselves): what it's going to do is go shoot the MySQL database in the head and kill that pod. All these operators, these controllers, are always running and monitoring everything, so: oh gosh, MySQL is down, and you said you want one MySQL running; I'm going to bring MySQL back up now.
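The "you said you want one MySQL running" behavior comes from the standard Deployment controller. A trimmed sketch of what the demo's MySQL manifest presumably contains (names and image are illustrative):

```yaml
# Illustrative Deployment: the controller keeps one pod alive, and the pod's
# data lives on the PVC, so a recreated pod reattaches the same volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  replicas: 1                 # desired state: exactly one MySQL pod
  selector:
    matchLabels:
      app: wordpress-mysql
  template:
    metadata:
      labels:
        app: wordpress-mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6      # example image
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # the claim follows the pod on reschedule
```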
To do that, though: obviously in this case, since we're only running on one node, this doesn't apply, but when you have a multi-node cluster, you might have to move that storage volume to another node, right? The interesting thing is that that storage volume is backed by the distributed storage system. That's how, when pods are moving around in the cluster, we can make sure that the volumes they requested are always following them around, and it's a very quick operation.
This really kind of clicked for me when... gosh, who was it? It was someone from HBO. When they were checking out Rook for the first time, they were doing a failover analysis of EBS volumes, and they would kill one of their pods that was backed by an EBS volume. The whole process of detaching the EBS volume and reattaching it on the new node in the cluster that the pod had moved to was on the order of 10 to 15 minutes or something, and that meant that their database was down for multiple minutes. But when you have this dynamically provisioned and dynamically orchestrated system, where everything is running in Kubernetes, it's seconds; it's only a couple of seconds to fail over, and it's barely a blip on their monitoring, instead of this massive gap of downtime. So that kind of clicked for me: wow, Kubernetes really does have a lot of primitives and a lot of functionality and features to run a resilient data center. It's actually pretty awesome, and it's, you know, Google's infrastructure for the rest of us, right? It's pretty awesome.
So let's prove that that worked. I can refresh the site, and the comment's still there; and then we can leave another comment and post it on there, and it's persisted to MySQL as well. So: we killed MySQL; it could have moved to another node in the cluster; its data is still available and persisted; and it was only a second or two. That gives us highly resilient, highly available, highly durable (all sorts of those wonderful adjectives) storage for stateful applications. And I think that's the end of the demo for Ceph. So that's kind of Rook in a nutshell: we have operators running; we have custom resources that declare, or capture, the different storage constructs that a user would want to create instances of and configure; and then those operators are watching. They see what the user wants, and they drive the Kubernetes cluster always towards what the user wants, by looking at the actual Kubernetes cluster state and continually performing operations to make it match the user's desired state. In this way we can provide storage with a number of different solutions (Apache Cassandra and Ceph and Minio, CockroachDB, etc.), but it's the same pattern all the way throughout all of those: you have an operator running, you've got custom resources to teach Kubernetes about new types, and then you expose that storage for the users to use in their pods, in their applications. So I'll take a break here for questions or discussion about Rook before we move on to anything else, if anybody has any thoughts.
So the first time I used Ceph, before any of this code had been written, I personally walked through the configuration steps for deploying a Ceph cluster, and the number of operations you have to perform is fairly intense. You've got multiple different types of daemons that you're bringing up (monitors, managers, object storage daemons), and you have to set up the cephx authentication: create the keyrings, get the monitors into quorum, distribute out the keyring (copy and paste it, like, SSH to other machines), install the package there, give it the right authentication, then copy over your ceph.conf file and make sure the right elements are in there. It was... I don't want to say ungodly, that's not right; it is a high number of manual steps to do that.
There was an effort within the Ceph community to automate some of that in Kubernetes, but we eventually made enough progress with the Rook project that the Ceph core team is now part of the Rook project as well. Sage Weil, the founder of Ceph, is one of the core contributors now, and Sebastien Han and a bunch of those guys now contribute to Rook specifically. I'm really happy about that, because we have Red Hat's backing for the Rook project, but it also means the specific Ceph experts are on it. And that was a really cool thing for me: when I was trying to build a lot of this automation, and some things were failing, or I had things I didn't understand about Ceph, I would go google it, and a lot of the hits would be for a blog by Sebastien Han. Then, when he started coming on as a contributor to the project: that's the blog guy! He knows everything; this is awesome. I ran up to him at one of the KubeCons, and I think I scared him a little bit with my enthusiasm.
Well, yeah, that's definitely a good question. There are some situations, like running on bare metal, or integrations where you're not using Kubernetes, where it's a non-starter, because we have to have Kubernetes; we completely depend on Kubernetes. So if you're not on Kubernetes, then you wouldn't use Rook at all; I guess that's the core of it.
A
Maybe
it
was
two
months
ago
or
so,
though,
three
months
ago,
we
you
know,
got
the
Ceph
support
declared
stable.
Now
so
it's
been,
you
know
almost.
It
was
all
about
two
years
of
an
alpha
and
beta
getting
up
to
stable.
Where
you
know,
people
were
running
it
on
production
environments.
When
we
specifically
said
not
to
so,
we
got
some
good
battle
testing
out
of
that.
But
it
wasn't
it
wasn't.
A
You
know
stable
enough
that
we
were
comfortable,
saying
use
this
in
production,
and
so
you
know
traditional
assessment,
environments
and
deployments
and
tools
would
be
the
way
to
go
for
a
lot
of
the
bigger
bigger
deployments.
I
know
it.
Cern
runs
God
how
many
petabytes
it
is
cluster,
it
might
be
even
2x,
I,
don't
know,
but
yeah.
They
were
in
a
massive
stuff
cluster
with
thousands
of
nodes
and
they
are
not
using
they're,
not
using
ruk.
For
that
they're
you
still
using
their
traditional
hand-rolled.
A
C
A
C
A
Yeah, so everything, all the storage... that's what I love about Ceph: underneath, it's object storage; that's how everything is persisted amongst the cluster itself. But it surfaces block storage, and shared file system storage via CephFS as well, in addition to object storage through RGW, the RADOS Gateway.
That's a really good question. So, the CRUSH map: Rook creates a default CRUSH map, and there is some configuration that you can put into the cluster YAML, like this location tag right here, to influence some of the failure domains and some of the CRUSH topology information, but it's fairly basic. If you wanted to get into more granular CRUSH map management, then all the Ceph tools are available to you. Inside the Rook container image you can use any of the Ceph tools, like crushtool, or rbd, or radosgw-admin; all those tools are available there, so you can drop into one of the containers and do some manual management if you wanted to; there's nothing that's going to prevent you from doing that. But most of our use cases kind of streamline things to the point of simplicity, and kind of hide or abstract away some of those more difficult details for your first Ceph cluster.
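The location configuration mentioned here is a set of CRUSH hierarchy labels attached per node. A sketch of the per-node form (field names approximate the Rook version of that era; treat the whole fragment as an assumption):

```yaml
# Illustrative per-node location tags in the cluster YAML; these feed the
# CRUSH failure-domain hierarchy so replicas land in different racks.
storage:
  useAllNodes: false
  nodes:
  - name: node-a
    location: rack=rack1
  - name: node-b
    location: rack=rack2
```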
A
Yeah, so it's, I think it can only take the hostname from the node that you're on and feed that into the failure domain information and topology, and the only other place you can specify that sort of stuff is in the location fields of the cluster spec. So it's fairly, it's fairly primitive, I would say. Those are good, those are informed questions. Those are good questions.
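The location discussion above can be made concrete with a sketch of the cluster YAML. This is illustrative only: the field names follow the Rook CephCluster CRD of roughly that era, and the node names and rack labels here are hypothetical, so check the schema for your Rook release.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  storage:
    useAllNodes: false
    nodes:
    # By default Rook feeds each node's hostname into the CRUSH topology;
    # a location entry layers coarser failure-domain info on top of that.
    - name: "node-a"            # hypothetical node name
      location: "rack=rack1"    # hypothetical rack label
    - name: "node-b"
      location: "rack=rack2"
```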
A
Yeah, so that's using the kernel module, the KRBD kernel module. So that is, you know, directly mapping, you're going through the kernel module to map the block image as a block device on the system. So it's not using iSCSI, it's not using FUSE, it's not using, you know, tcmu-runner or NBD or whatever, I'm not even sure if those are the right terms that would be appropriate here, but it's going through the kernel module, which knows how to speak directly,
A
You know, the RBD protocol on the wire. Yeah, I think that the Ceph team, some of the Red Hat guys, I don't know if it's at the design stage right now or if it's an implementation already, but they were adding their own NFS support, I think backed by CephFS, using NFS Ganesha, to have that be a presentation of the storage as well on top of it. And I don't know if they were talking about bringing in iSCSI as a presentation as well.
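The kernel-module path described above can be sketched as a short session. These are standard Ceph RBD client commands, but the pool and image names are made up and the device name will vary, so treat this as an outline rather than an exact recipe.

```shell
# Map an RBD image through the KRBD kernel module; the kernel client
# speaks the RBD protocol on the wire itself (no iSCSI, FUSE, or NBD layer).
rbd map replicapool/myimage      # hypothetical pool/image names

# The image shows up as an ordinary block device (for example /dev/rbd0),
# which can be formatted and mounted like any other disk:
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/myimage

# When finished, unmount and unmap:
umount /mnt/myimage
rbd unmap /dev/rbd0
```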
C
A
Yeah, you obviously know a lot about Ceph, so I like that you're asking a lot of questions that clearly show you've used this stuff before. So, in the ceph.conf file, you can specify a particular network to use for the back-end traffic, for, like, the object storage daemons to talk to each other and, you know, do replication and migration of placement groups and move blobs around the cluster that way.
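The front-end/back-end split being described maps to the standard `public network` and `cluster network` settings in ceph.conf. A minimal fragment (the subnets are placeholders) might look like this:

```
[global]
# Front-end network: clients such as pods and CLI tools reach the
# monitors and OSDs over this network.
public network = 10.0.1.0/24
# Back-end network: OSD-to-OSD traffic for replication, recovery,
# and rebalancing of placement groups.
cluster network = 10.0.2.0/24
```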
A
And then you can have a separate front-end network that your clients, like pods and, you know, the command-line tools that are consuming the storage, talk to, so, a separation of that traffic. And you can do that because you can specify any ceph.conf configuration you want; there's a way to inject that kind of stuff in there. And I actually, I may be misremembering this, but I think there's also a way, through some configuration here, to specify which network to use for the back end. Now, the difficult part with that, though, is that we don't currently have any facilities to create those networks for you. You have to already have, you know, two networks available to the pod, using whatever CNI plugin, or, you know, maybe host networking or something; you have to do that part yourself. But if you have those two networks available, you can bring that up.
A
Rook supports that for, like, EdgeFS, and then it would be available for Ceph as well. How that's going to be implemented, what, you know, how it works under the covers, I do not know yet. And networking is not my strong suit; I do not have a strong background in networking, I admit that fully.
A
Yep, that's definitely true. Yeah, in most scenarios I've ever used, it's just, you know, a single public overlay network or pod network that's available, and that's all that I, as a consumer, end up having to worry about at all. Yeah. Any other questions about Rook and Ceph, or any of the other storage providers?
C
A
It is, by far, because that was the first. We didn't, we didn't support any other storage providers besides Ceph until KubeCon of last year in Copenhagen, which was May of 2018, so it hasn't even been a year that we've supported other storage providers, and Ceph we've supported since day one, when we open sourced, like, two and a half years ago.
A
So, Ceph is by far the most popular. It's the only one we've declared stable, the other ones are all alpha or beta declarations, and it's got by far the biggest user base and the most support from developers as well, because there are probably at least 15 or so contributors that are employed by Red Hat directly. So it gets a lot, a lot of support, a lot of love from a dev perspective as well. So it's definitely the most stable and feature-rich as well.
C
A
So it's kind of half-and-half, honestly. As part of the initial effort, kind of a proof of concept to show that, hey, Rook is not just Ceph, it can do other storage providers as well, we, the Rook team, took on some of that effort, so Minio and CockroachDB, we did those initial implementations ourselves. And then, once that functionality for Rook to support multiple storage providers was in place, other people started coming to us, and as the POC, as the project grew in popularity as well,
A
People started approaching us to get involved with it as well. And, you know, that common functionality that helps all storage providers, I'd like to see that more fleshed out than it is right now. There are lots of things that we could do to make storage providers' jobs easier, and that hasn't gotten as much development love as I would have liked recently, so I'd love to see more effort go into that, to possibly bring in even more storage providers.
A
You know something I'm kind of interested in hearing about tonight, honestly, if we have time for it and people want to hear about it? It's about StorageOS. I've been curious about the architecture and, you know, how that fits in the Kubernetes ecosystem and all that. So if we have time tonight, that's something I would be interested in hearing about, since we have the founders of StorageOS here in the room, so that might be something interesting later on. I do have, I will talk about Crossplane as well.
A
Okay, so, if anybody has any other questions, you know, speak up and holler and we'll keep this going interactively, but I have the rest, I have some more slides, talking about another project now, one that's newer, that I'm even more excited about, honestly. I'm one of those people that gets distracted by new technology. I mean, I've been working on Rook for two years, or more than two years, and the next project, Crossplane, has only been open source since December, a couple, couple months ago.
A
So I'm really excited about that one. And I'm noticing, I notice this every time I talk: a lot of my time frames and milestones are based on KubeCons. I keep constantly saying, oh, right about KubeCon Copenhagen, or KubeCon, you know, Austin, Texas, and I don't know what that says about me, that a lot of milestones in my life are based on KubeCons. I don't know how the rest of my family would feel. Okay, all right.
A
Oh yeah, you wouldn't say, yeah. We can totally take a break right now if you want to, in between these. Yeah, I'm down, let's do it. Let's get out of here and mingle and hang out for a bit.