From YouTube: OpenShift Commons Gathering 2019 Santa Clara Ceph on OpenShift with Rook Travis Nielsen
Description
OpenShift Commons Gathering 2019
Rook: The Container Native Storage Operator for Kubernetes
Rook.io
Travis Nielsen and Annette Clewett (Red Hat)
Annette Clewett: My name is Annette Clewett and I'm in the storage business unit. I just wanted to set a little bit of context here, because Travis is going to talk about OpenShift Container Storage, which Red Hat currently has, the solution today for OpenShift. Container Storage is a combination of Heketi and Gluster, and it's highly integrated with OpenShift. Some of you may be using it or know about it.

What Travis is going to talk about, and what the workshop was about, is what we're calling the next generation. It is based on Ceph, which will provide object, file, and block; today Gluster provides file and block. So it's not that we don't have a solution today, but this is the next generation, and from the point of view of being generally available, we're looking at sometime in September. So take it away. Okay.
Travis Nielsen: Well, we're the last talk before lunch, so bear with us; it'll be lots of fun, though. Okay, let's start back a bit on storage for Kubernetes. Three or four years ago I was working with a small team, and we looked at this cool new up-and-coming technology, Kubernetes. We were working on storage, and we thought: okay, here's storage. Storage is separate from Kubernetes; it's something external. Why is it external? Why does it have to be a separate platform to manage? I don't want it to be separate. I want it to be part of the same cluster that I'm already running my apps in. Kubernetes is supposed to solve all of my problems to run my distributed application, so what about storage? This is where Rook started. A bit more on the limitations we might see with traditional storage: first of all, it's not portable.
So let's get past that: let's put storage on Kubernetes, inside my same nodes. Now, on the same nodes where I'm running all my Kubernetes applications, let's just treat storage as another application. In the world of software-defined storage, I can take my storage and run it on Kubernetes, so wherever Kubernetes runs, we want to run our storage with it. If I'm on premises, now I have a solution where I can have storage, and I can take it to the cloud if I want to be in the cloud. This is how we get that portability: we have a platform that will run my storage wherever Kubernetes or OpenShift are running. So where does Rook come in? Rook really is a framework for running storage providers on Kubernetes.
It's been designed from the ground up for Kubernetes, and it takes storage platforms that were designed before the time of Kubernetes, brings them in, and makes them look like they're part of the platform. It extends Kubernetes with custom types and controllers. You may hear a lot about these: custom resource definitions, or CRDs, are how we extend Kubernetes, along with custom controllers, which we call operators. When I attended my first KubeCon a couple of years ago, that's when the operator pattern was announced, and we said: oh, that's exactly what we want to do with storage. We want an operator that automates everything around the storage for you, so that you don't have to think about it. You shouldn't have to go manage every daemon that's running in all of your storage; just let the Rook operator take care of it for you. This is an upstream project.
It's been a fun journey. So let's get more into what Rook is and how it works. Rook, at the end of the day, is going to run as a pod: the Rook operator runs as a container in a pod, and it's going to launch and manage other pods and configure these storage providers for you. Rook supports multiple storage providers today. Ceph is where we started; it's the first one, and we've declared Ceph as stable inside Rook.
There are others that we've since added that are still in an alpha state; you'll see the icons there for CockroachDB, Minio, and EdgeFS, and I'm going to forget a couple. We've got six now. But at the end of the day, if you need storage, Rook intends to bring it to you in a Kubernetes way, so that you've got storage in your cluster. On the far right side of this slide, what we see are your client pods: your applications that are essentially clients to the storage.
They need to consume that storage, and this is where the drivers come in: the flex driver, and now a CSI driver, which will allow you to attach your storage. Instead of attaching and mounting storage that's outside the cluster, it's just inside the same cluster. Here's a picture of what it really looks like, as far as all of the daemons that are running: a picture of the Ceph architecture with Rook.
At the top of this picture, you have your applications that need storage. They're going to create volume claims, PVCs, which are just the standard way that you plug your storage into Kubernetes, and those PVCs are going to connect up to the Rook storage and make it look like one storage layer. There are actually lots of Rook daemons, but you only have to worry about starting one of them, effectively.
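As a sketch of that standard flow, an application can request Rook-backed block storage with an ordinary PersistentVolumeClaim like the following (the StorageClass name `rook-ceph-block` is the conventional example from the Rook docs, not something fixed; the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                      # example name for the claim
spec:
  storageClassName: rook-ceph-block   # StorageClass backed by Rook/Ceph block storage
  accessModes:
    - ReadWriteOnce                   # one node mounts it read-write, typical for block
  resources:
    requests:
      storage: 10Gi
```

The application pod then references `app-data` in a volume, exactly as it would with any other PVC; nothing about the pod spec is Rook-specific.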
You start the operator, that's the one right in the middle there, and it starts all the others: the agents, the mons, the managers, the OSDs. Ceph is a software-defined storage system that has been around for a number of years in production, but again, it was designed before the world of Kubernetes. So Rook takes the containerized version of Ceph and creates the deployments, the pods, and all of the other Kubernetes resources that are needed, and then Kubernetes helps us keep it healthy and keep it running.
So let's talk about the operator pattern for a minute. The operator pattern means that you specify how you want the cluster to look: you specify your desired state, and an operator, as a custom controller, is going to go make that happen. There's a control loop where the operator watches and asks: what state do you want here in your cluster? So it's going to watch for these.
We call it the cluster CRD. The CRD is really just another YAML file; if you work much with Kubernetes and OpenShift, you know that there are lots of YAML files everywhere. Rook is going to watch for those; it's going to see when you create one. As soon as you create one, the operator will wake up and say: oh, you want to start a Ceph cluster, let me go do that for you. It will analyze and then act on that state.
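As a rough illustration, a minimal cluster CRD of the kind described here might look like this (field names follow the Rook v1-era Ceph cluster CRD; the image tag, namespace, and host path are placeholders, and the exact schema varies between Rook versions):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14        # containerized Ceph image the operator deploys
  dataDirHostPath: /var/lib/rook  # where mons and OSDs keep state on each node
  mon:
    count: 3                    # three mons so quorum survives one failure
  storage:
    useAllNodes: true
    useAllDevices: true         # let Rook consume any raw devices it finds
```

Creating this with `oc create -f` (or `kubectl create -f`) is the event that wakes the operator up to build out the cluster.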
Okay, so again, CRDs: these are Kubernetes resources that make your types look like first-class objects in the system. They look like other resources, such as pods, and you can work with them using the standard Kubernetes and OpenShift tools that you already have: oc if you're on OpenShift, or kubectl if you're on Kubernetes. The CRDs are really just a way to describe your desired state.
The Rook operator will handle upgrades; it will manage everything about keeping your system running. But one important note is that it's not on the data path. As your data moves around, if your databases are consuming the data, wherever your data is going, it flows directly to the storage rather than through the operator.
For OpenShift, just a couple of notes to point out: due to the security constraints and network configurations, there are a couple of extra steps in the Rook documentation on how to get it to work with OpenShift, but it's all available today upstream; you can run it on OpenShift. Just as a summary, before I jump into a demo, here's what Rook is starting: there are lots of Ceph daemons that make your block, object, and file storage work.
The mons are really the brains of the Ceph cluster. The mons use the Paxos algorithm and need a majority online; it's how a distributed application can stay consistent, similar to etcd. It keeps your distributed data platform safe. The OSDs are where the data is actually stored on the individual nodes, so Rook will start those daemons to manage the data locally. RGW is for object storage, and on and on; there are a number of daemons that Rook will manage for you. All right.
So, a little demo I wanted to show. For those who went through the workshop this morning, this is basically what we did. We started up a number of nodes in OpenShift: we had the master, of course, then there were three application nodes that we started with, and then we added three new nodes specifically for storage. This was to show that, even in the same cluster, you can partition nodes to work for storage and tell Rook to run only on those nodes, and run your application pods on the other nodes.
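A hedged sketch of how that partitioning can be expressed: label the storage nodes, then point the cluster CRD's placement at that label so Rook's pods only schedule there (the `role=storage-node` label key and value are examples, not anything Rook requires; the `placement.all` field follows the Rook v1-era cluster CRD):

```yaml
spec:
  placement:
    all:                               # applies to all Rook/Ceph pods in the cluster
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role            # example label: oc label node <node> role=storage-node
                  operator: In
                  values:
                    - storage-node
```

Application pods can then be kept off the storage nodes with the usual Kubernetes scheduling tools, such as taints or their own node affinity.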
Now, I've already got this whole thing running, just for time constraints. Right now I'm in the rook-ceph-system namespace; let me show you the pods here. You see, in the middle, the operator pod is running, one instance of it, and it basically orchestrated starting all of the other pods in the system. It started up a Rook agent on each of the nodes so that you can attach the storage.
All right, here's a little application, our Rails application: list articles, add articles, and I can do things like show, edit, and destroy. Let's see, I can't find my mouse anywhere. Anybody else see my mouse? Here it is, okay. So let me just show a couple of things here. I've added something to my database. It says: oh, there's this article that Rook is being hosted by the CNCF. I say this is awesome, and it says this is old news, Rook is an incubation project.
Now, anybody else have a comment? I'll just say... okay. So in this application, when I click create comment, it's just going to add something else to my database, and here it is. Now, what I want to show is that even if a node goes down and one-third of the storage is gone from Ceph, everything will just keep on working, because my data is replicated across nodes. Rook has configured this, or rather I've configured it, so that it makes two copies of the data on different nodes.
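The two-copies setting he describes maps to the pool's replication factor. A sketch of a Rook pool definition with two replicas (the pool name is illustrative; `failureDomain: host` is the Ceph CRUSH setting that keeps the copies on different nodes):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host   # place each replica on a different host
  replicated:
    size: 2             # keep two copies of every object
```

With `size: 2` and a host failure domain, losing one node still leaves a full copy of the data, which is what lets the demo keep writing below.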
Okay, and I lost my SSH connection because the node is down now. If everything is working properly, I'd expect to be able to go back to my app, and everything should continue working correctly. So let's see how lucky we're feeling. All right, I should be able to refresh, first of all. Okay, that's working. Can I add a new comment? I don't know what to say, so: create comment. And I clicked the wrong button, I'm sorry, create comment is what I want. Okay, it says the comment was created successfully.
So even though we have one node that's down, I'm able to continue writing to my cluster, because I've got Ceph managing that storage in a software-defined storage way. And just to show you that the node is down, and hopefully you believe me, let me connect back to the toolbox and see what the Ceph status looks like. Ceph status should show us.
Okay, we've got a warning: two OSDs are down. One host was down with two OSDs on it, and we lost a mon; only two are left in quorum. But the important thing here is that, because we still have that quorum, the storage is still safe; it's still working fine. All right, so that's, in a nutshell, how your storage can still be safe, even in the face of failure, with Kubernetes.
Annette Clewett: And just to make a point on that: if you've worked with Ceph before, you know that you usually don't have three OSDs, or six; you have tons of OSDs. Moving this into OpenShift, I guess you could call it Ceph on a small scale. What Travis did was basically just three OpenShift nodes; he could have even had just three OSDs, and you would have seen the same behavior. So it's something.