From YouTube: KubeCon Office Hours: Cloud Native Storage with Rook
Description
KubeCon NA 2020 Office Hours with Rook maintainers Blaine and Travis, hosted by Josh Berkus of Red Hat
A
Good morning, good evening, good afternoon, and welcome to a very special KubeCon office hours, the Cloud Native Storage with Rook edition, run by the wonderful Josh Berkus from Red Hat. Josh, please take it away.
B
Hey there, thank you, Chris. Welcome, everybody. This is our second office hours concurrent with KubeCon, in order to give you more time to actually ask your questions of the many maintainers who are here at KubeCon, so that you can find out what you need to know about using various cloud native projects. This office hours is going to be about Rook and also Ceph, so cloud native storage, and we have Blaine and Travis with you. So, guys, do you want to introduce yourselves?
C
Yeah, I am one of the maintainers of Rook upstream. I also work for Red Hat as a storage engineer, which I think is a pretty good intro.
B
Awesome. And as we get started, you're going to have a short presentation to kick things off.
C
There are three core architectural layers that we break things up into: Rook, Ceph CSI, and Ceph itself. Ceph itself is just the data layer. I tend to think of these in a bottom-up order: the CSI driver will provision and mount storage into pods for your applications, and then Rook is in charge of administering and managing Ceph, and can also set up the Ceph CSI components for you. And there's a deeper architectural look that Travis is going to go into.
D
Thanks, Blaine. Yeah, so we have a few different views of what these layers look like, and ultimately Rook is the management layer for bringing your storage to Kubernetes. Rook being the management layer, what does that mean? It means that Rook will communicate with Kubernetes and tell it how to deploy all the various pods, services, and different resources that need to be created so that you can have storage in your local Kubernetes cluster.
D
Now, at a higher level, storage has typically been treated as an external entity to Kubernetes, but Rook really brings storage inside your Kubernetes cluster. It makes it just like another application that you're managing, except, of course, it's stateful: it's going to take care of your data and keep it safe. So how does Rook do this? This diagram is showing a conceptual cluster of three nodes, where each of those black boxes is a node with various pods running on it.
D
On this center node right here we see the Rook operator pod is running, and this is a fully configured cluster that Rook has created. It's done that by creating all the different pods that are needed for Ceph and the CSI driver. It's kind of color coded here: you see the blue pods, like the discovery pod, are related to the Rook operator; it helps discover the devices on the nodes.
D
The green pods are for the CSI driver; that's what helps attach your volumes to your application pods. And then the red pods are the ones that are actually the Ceph daemons managing the local storage on your cluster. You see there's a Ceph monitor; the monitors are kind of the brains of the Ceph cluster.
D
You know, a PVC itself comes from a cloud provider if you're running in the cloud. But yeah, those are the pods that are basically running to get your Ceph cluster going. So Rook deploys all the Ceph daemons and all the Ceph configuration that you need, to hide all of the complexity of Ceph in your cluster and give you storage in Kubernetes. All right, so that's layer one, the management we're talking about. Now layer two says: okay, let's say we've set up Rook, and you've got the storage available in the system.
D
So that's, in a nutshell, how you provision block storage. It's a similar thing if you need shared file storage: if you have two applications that need to share the same storage, each of them can mount the same volume.
D
There's also this concept called bucket claims, where you've got a special type of storage class for object storage that provisions a bucket for you, so you can have an S3 endpoint in your cluster to give you that object storage. All right, so that's how you would basically connect your storage to Ceph. Then layer three: now we get down to the data layer, once everything has been set up at layer one and the CSI driver has provisioned the storage.
B
Oops, not seeing any questions in the chat yet; it usually takes a little while to get started. So, did you want to do the getting started, or do you want to go straight into talking about 1.5?
D
Ultimately, we've tried to make Rook as simple as possible so you can deploy Ceph, and there are basically just a few manifest files that you need to get up and running. (I just realized I need to update this slide, because there's one more.) We have some resources that give us the RBAC privileges in common.yaml.
D
How do you want to consume storage on the nodes? Do you want to consume it from PVCs, or do you want to just let Rook look for all of the local devices on the nodes? This shows the simplest way, which is to just let Rook go for it and create storage on all of the nodes, on whatever free devices it finds. That's a little simplified, of course, for this discussion, but that's pretty much it.
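As a rough illustration of the "just let Rook go for it" configuration described here, this is roughly what the storage section of a CephCluster manifest looks like; a minimal sketch based on the upstream examples (the image tag, namespace, and paths are common defaults, not values from the talk):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.5      # illustrative Ceph image tag
  dataDirHostPath: /var/lib/rook  # where Rook keeps its config on each host
  mon:
    count: 3                      # three monitors for quorum
  storage:
    useAllNodes: true             # run OSDs on every node in the cluster...
    useAllDevices: true           # ...consuming any free, empty device Rook discovers
```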
D
It gives you a Rook cluster with just a few simple manifests to create. And, again, that was kind of layer one. Back to the layering: now, to request storage from your application, just a simple example: the admin is going to create a storage class, and they're going to create a persistent volume claim, and then the application pod. So on the right here we've got what the application pod might look like, in this case the simple nginx image, which is requesting storage.
B
Yep. Yeah, if only it worked that way. So my first question here is: 1.5 came out on Friday, I think. What are the big things in 1.5?
D
I'm glancing through the release notes right now, honestly, just to remind myself of all the features. The first one we've got: with the storage configuration, we want to encrypt the data that's at rest, and we want to do it in a way that's as secure as possible, so we'll use a key management system like Vault. So in 1.5 we now support Vault as a KMS, which lets you store your keys in a secure way.
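The Vault integration mentioned here is wired up in the CephCluster spec; a rough sketch (the Vault address, backend path, and secret name are placeholders for your own deployment, not values given in the talk):

```yaml
spec:
  security:
    kms:
      connectionDetails:
        KMS_PROVIDER: vault                      # use HashiCorp Vault as the KMS
        VAULT_ADDR: https://vault.default:8200   # placeholder Vault endpoint
        VAULT_BACKEND_PATH: rook                 # placeholder KV backend path
      tokenSecretName: rook-vault-token          # Kubernetes Secret holding the Vault token
  storage:
    useAllNodes: true
    useAllDevices: true
    config:
      encryptedDevice: "true"                    # encrypt OSD data at rest
```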
D
That's one feature.
D
Another one is what we call mirroring, or RBD mirroring, where you've got two clusters and you want to replicate the data across Kubernetes clusters. They might be in different geographical areas, or they might just be independent clusters where you want the data replicated across for failover or DR scenarios.
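In Rook terms, the mirroring feature is exposed through a mirroring section on a block pool plus a CephRBDMirror resource that runs the mirror daemon; a rough sketch, with illustrative names:

```yaml
# Enable mirroring on a block pool (per-image mode)...
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
  mirroring:
    enabled: true
    mode: image          # mirror individual RBD images to the peer
---
# ...and run the rbd-mirror daemon that replicates to the peer cluster.
apiVersion: ceph.rook.io/v1
kind: CephRBDMirror
metadata:
  name: my-rbd-mirror
  namespace: rook-ceph
spec:
  count: 1               # one rbd-mirror daemon
```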
B
So does that work across clusters? Don't you have to have, like, three clusters for a tiebreaker or something?
D
That's a little different; I'll get into that. That's actually a new feature now too. But this one says you want two clusters running, where you've got your application running in two different places, so that if one data center goes down, you can run your application in the other data center. But they're totally independent Kubernetes clusters as far as Rook and Ceph are concerned, so they need to be connected.
D
Now, yeah, I guess while we're on that related topic: you mentioned stretching a cluster, or having an arbiter. Ceph just added support for this, though actually it's still an experimental mode, because the Ceph release that includes it won't be out for a few more months, in the Ceph Pacific release.
D
You have two data centers that participate in the same Kubernetes cluster, so you've got some form of failure domain in there, but they're all in the same Kubernetes cluster. Only two of the data centers can store data, but you need a third, tiebreaker zone, so that the Ceph monitors have that third one to break the tie if one data center goes down. You always need two Ceph monitors up so that the core of Ceph will keep running.
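In the experimental Rook support, the arbiter described here is declared in the mon settings of the CephCluster; roughly like this sketch, where the zone names are made up and the failure-domain label is the standard Kubernetes topology label:

```yaml
spec:
  mon:
    count: 5                 # mons split across the two data zones plus the arbiter
    stretchCluster:
      failureDomainLabel: topology.kubernetes.io/zone
      zones:
        - name: zone-arbiter
          arbiter: true      # tiebreaker zone: runs a mon, but stores no data
        - name: zone-east    # the two data zones that hold the OSDs
        - name: zone-west
```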
D
That's right. So in, let's see, 1.3, last spring, we initially had experimental support for Multus. We've been working through a few details and fixes, and we just got the last fix merged yesterday, so we really feel good about Multus being able to be used as the network provider. That also helps with communication across clusters, too, if you've set it up.
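Selecting Multus as the network provider is done in the cluster's network section; a sketch, where the NetworkAttachmentDefinition names are placeholders:

```yaml
spec:
  network:
    provider: multus
    selectors:
      public: rook-ceph/public-net     # placeholder attachment for client traffic
      cluster: rook-ceph/cluster-net   # placeholder attachment for OSD replication traffic
```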
D
Yeah, lots of other internal improvements. There's one like the API extensions: we finally moved those to v1 from v1beta1.
D
That's right. And with each release that we have, we have an upgrade guide that says: here's how you deal with whatever might have changed. The API extensions change was a fairly big internal change, but our upgrade guide basically says: just go apply these new CRDs and you'll be good.
D
You have to create the CRDs anyway for new clusters, but for upgraded clusters, with every release, whatever changed in that release we have to update. In this case the CRDs all changed, and so we say: hey, in the upgrade, go apply these new CRDs. Internally, Kubernetes was already converting them to v1, and so we don't expect existing CRDs to be impacted during the upgrade.
B
So what's coming up next, in 1.6?
D
1.6, that's a great question. With each major release I feel like, okay, we got it out the door, and there are always more features from Ceph coming in. I'd say the first thing on my mind next is that Ceph Pacific will be out in the springtime, about the time we release 1.6, so Pacific will then become fully supported, since the Ceph releases are, of course, supported once they're out.
C
This is something we don't usually talk about a whole lot, since it's kind of a boring non-feature detail, but I'm excited about some of the CI changes that we have had in 1.5 and are continuing forward with in 1.6. We're hoping to be able to iterate more quickly and just be able to write more and better tests, so that we don't have to wait, like, an hour between CI runs, but can instead wait, like, 20 minutes.
C
As for what changes we're making: the CI has moved so that most of it runs using GitHub Actions, running individual tests sort of one at a time but in parallel, which has sped up our CI quite a lot. We're also looking to start doing CI for nightly builds of both Rook and Ceph, either on a daily or weekly basis, just to get good integration.
C
Validation between those two pieces, making sure things are very tightly integrated and that there aren't accidental gotchas that we end up figuring out a week or two down the road.
D
Yeah, we're finding they make the developer life much better in Rook. At least a couple of other feature areas come to mind as well going forward. The DR, disaster recovery, scenarios are complex scenarios: how do you get your application to fail over in a reliable way, or a testable way? So there's a lot of work on improving the DR automation around that, and, likewise, for Rook that's one of our big areas of focus.
D
How can we make the mirroring across clusters better? How can we improve stretch clusters to make them even simpler, or even more reliable? So, yeah, anything across clusters. There's RGW multi-site, which means you've got object storage that you want to replicate across clusters, not just the block storage with RBD mirroring. CephFS, the shared file system, also will have a mirroring technology coming soon; I don't recall honestly if that's Pacific or beyond.
B
Okay, I have more questions about future work, but before we get to that, Barnes, who's just getting started with Rook and Ceph, wants to know: if, after deployment, only a single monitor pod starts and then the cluster stops there, what's the next step for troubleshooting?
D
Okay, if one mon pod starts: yeah, you need three to get going. Usually what happens is there's a networking challenge. Well, let me back up. What the operator is trying to do is it starts one mon, and then it tries to connect to it and validate it, and then it adds the second and the third one, validating each one.
C
Yeah, on the networking: there's a Kubernetes app to kind of smoke test the networking in your cluster, called kuberang, which is also noted in the ceph channel of the Rook Slack.
B
Okay, thank you for that; we'll see if they have follow-up questions. The one question from me is going to be about volume topology in a Rook/Ceph stack. That is, if I'm asking Ceph for storage, ideally I would want the application pod and the storage pod to be running as close together as possible, in a network sense, in the best of all possible worlds.
B
You know, particularly for applications that make a lot of storage calls, like databases, to actually have them on the same physical machine or the same VM. Is there any planned work with volume topology, topology manager, etc., to make that a potential reality?
D
There's not, I'd say. Ceph is fundamentally based on distributing the storage across your cluster. It breaks the data up, basically shards it in a lot of ways (well, Ceph doesn't exactly call it sharding), and puts it everywhere. So if applications fundamentally need local data and they're doing their own level of replication, like many databases such as Cassandra, then we'd recommend that you use a local PV; it's just a local volume, and they manage it there.
B
If you follow me: so, if I deploy a high-storage-requirement app, I kind of want that to land on a machine that does have a mon pod on it. Now, I can do that right now using affinity, but it feels like there should be something involving volume topology. This feels like a case that volume topology ought to address in some way, but I don't have any way now to do that.
C
It's really about access to the OSDs, especially for block and file access. I think currently you could use labels and pod affinities to say: put this application on the same node, or with affinity for a Ceph metadata server for the file system. Or, you know, if I'm doing block, there is no real metadata server for it.
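The affinity approach described here can be sketched like this for an application pod; `app: rook-ceph-osd` is the label the upstream Rook manifests put on OSD pods, and the rest of the pod spec is omitted:

```yaml
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: rook-ceph-osd          # prefer nodes that already run a Ceph OSD pod
            topologyKey: kubernetes.io/hostname
```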
B
Okay, I have more thoughts and questions on that, but we have questions from the audience, which I want to take first. So Easy Steezy wants to know: what are some catalysts that you've seen for people migrating to a Rook/Ceph Kubernetes storage solution from using regular storage classes and PVs in Kubernetes?
D
Yeah, I have some thoughts there. The primary use case we created Rook for originally was really bare metal, because when you're not in a cloud provider, there was no option for block, object, or file storage, and so Rook gives you that storage when you're in your own data center. That's the first and primary use case. But we're also finding a lot of people moving to Rook storage in cloud providers because of some benefits, like: oh, we can put large volumes behind Rook and Ceph.
D
So you get good performance by basing on top of Rook in that sense, and there are some other things, like different failover characteristics.
C
One of the things that I think about a lot there, also, just from people that I have asked when I've met them at KubeCon: some of the responses seem to be around more of a multi-cloud sentiment, where they might be using a cloud provider, or multiple cloud providers, but they want to still write their applications to access the storage the same way, for simplicity. And some, even in the same cloud provider, might not actually have all types of storage available in all regions.
B
Okay, so another question, from Claudio, which is a similar comparison question. In this case he wants to look at the comparison between running your storage inside the Kubernetes cluster via Rook and Ceph, versus having dedicated storage appliances. Like, if you're buying a brand new cluster and you're kitting it out right now, when would you decide to choose one of those paths versus the other?
C
I guess my starting thought is that there are so many little and big things that go into answering that, that I don't know if I have any real rules of thumb. But I definitely think that Ceph shines in having really large amounts of storage and in being able to scale out whenever you need it to, and so I think those are the cases when I would be thinking about Rook.
C
Whereas, well, if I'm probably not going to need more than a few terabytes, which I could just have in, like, a NAS somewhere with a RAID array on it, then maybe not. But there are also a lot of other considerations where maybe you want to use Rook for the kind of flexibility that it allows, or just the ease of deployment and installation.
D
Right, that's one thing that you could do. Honestly, if you just had two clusters where you wanted to dedicate one to Rook and the other one to the rest of Kubernetes, I'd say I wouldn't make them separate Kubernetes clusters. I might create one Kubernetes cluster, and then you can use node selectors and other things in Rook so that you say: only run on these nodes, if you really want Rook on certain nodes for high performance or other reasons. So I'd say keep it to one cluster unless you really need to separate them.
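The node-selector approach suggested here maps to the placement section of the CephCluster spec; a sketch assuming nodes have been labeled `role=storage-node` (a made-up label, for illustration only):

```yaml
spec:
  placement:
    all:                       # applies to all Rook/Ceph pods in this cluster
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: role    # hypothetical node label selecting the storage nodes
                  operator: In
                  values: ["storage-node"]
```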
B
Okay, I've kind of seen arguments the other way, just because, I'll say once again, I come from the database realm. You sometimes do wonky things with your Kubernetes configuration in order to support storage. Some of those wonky things will go away, though, right? Because once you guys support pod disruption budgets, then I no longer have to have a workaround for saying: hey, don't evict my storage pods. So in some ways the whole business of, hey:
B
If you have a high-storage cluster, it needs to be configured differently. That's going to go away in the future, through improvements in Kubernetes and improvements in Rook and the operator itself, but it hasn't gone away quite yet. So I could almost see having your high-storage applications in sort of a dedicated cluster of high-storage applications and storage, and then another cluster that's running, say, your ephemeral applications.
B
There
could
be
real
benefits
to
that
from
an
administrative
point
of
view,
rather
than
a
performance
necessarily
point
of
view.
B
Well, yeah. So, until we get another audience question, one thing I'm going to follow up on: as you know, I've been doing some work on testing the performance of database applications on top of Ceph and Rook. The performance has actually been excellent, even despite the fact that we are not hitting what should be the sweet spot, right?
B
We have a single application running in, say, three pods that are making all of the storage requests, but I'm still actually able to get somewhere around 50 percent of the throughput that I would get on a single local PV, which, I'll tell you, having worked with various distributed storage systems for the past 20 years, is excellent.
B
That
is,
if
some
have
storage
and
some
don't,
I
kind
of
want
to
target
those
which
was
my
whole
question
about
eventually
looking
at
integrating
with
volume
topology.
B
Not so much for where we put Rook and Ceph, but more for: where do we put the application that is heavily reliant on the storage, versus the other applications that maybe don't need any storage?
D
Yeah, and Rook does some things around topology already, for sure. Like, if you tell us to look for the local storage automatically, or whatever devices are available, we'll basically skip nodes where we don't see the storage and just create it on the other nodes. Or we'll look for topology labels and set up your failure domains based on the node topology labels.
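The topology-label behavior described here shows up, for example, when defining a pool: Rook can spread replicas across failure domains derived from the standard node labels. A sketch, with an illustrative pool name:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: zone   # place each replica in a different zone, derived from
                        # the topology.kubernetes.io/zone node labels
  replicated:
    size: 3
```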
B
Cool.
B
Let's see, other future work questions. So there are various projects that are aiming to stretch your Kubernetes network across, say, multiple physical data centers. Submariner, for example, because I work with the Submariner folks.
B
So the question is (oh, I have an audience question just a minute after this, but let me finish this one up): if I want to also effectively stretch my storage there, for redundancy reasons, for availability reasons, should I then just tell Rook to treat this as one big Kubernetes cluster, or should I do something else?
D
B
Sorry, hold on, let me unmute myself. Yes, it does. So, while I was trying to figure that out, we got an audience question from Det Conan, who was asking: is it possible to easily do a hyper-converged setup with Rook? I'm trying to find out what they mean by hyper-converged, because people use that in different contexts.
D
It's
definitely
overlooked,
I
would
say:
hyperconverge
just
means.
Would
you
want
to
put
storage
inside
your
same
kubernetes
cluster
on
the
same
nodes
with
your
applications
and
yeah?
You
absolutely
can
and
lots
of
people
are
doing
that
and
if
you
have
specific
performance
requirements,
that's
when
people
tend
to
do
something
more
custom,
where
you
select
certain
nodes
for
for
rook
and
keep
your
applications
on
different
nodes,
but
overall
there's
nothing
that
that
prevents
it.
B
Cool, okay, yeah, that was their question. In fact, that's how I personally run Rook/Ceph in production: on the same application nodes. Because, and this is one of your other sort of benefits for small clusters, I have a bunch of blades, and each of those blades comes factory-supplied with half a terabyte of storage. If I don't store things on there, it's just wasted space.
C
A benefit to a situation like that, also, is that, to some degree, Ceph does work really well when it's a bigger cluster. To my understanding, it's generally better to create a bigger cluster with smaller amounts of storage on each node than it is to have a small number of nodes with a larger amount of storage.
C
Just
because
then,
like
you,
you
know
when
you're
grabbing,
like
the
pieces
of
files
from
all
over,
like
you
can
make
use
of
like
the
like
large
amounts
of
network
bandwidth
on
each
node.
B
Yes,
have
you
seen
the
have
you
seen
the
performance
testing
that
sergey
and
I
have
been
doing
I
even
with
just
five
or
six
nodes
for
a
large,
sequential
read.
There
is
notable
benefit.
B
Okay, well, we'll give people a chance to think of any last questions; we're almost at the end of our time here. So: any last thoughts, anything coming up, other things that are happening during KubeCon that you want to mention? Please share them now.
D
We have a couple of Rook talks at KubeCon. Let me just remind myself what the schedule looks like. Friday afternoon we have our deep dive talk for the maintainer track, so it's at, let's see, where'd it go: Friday at 3:10 Eastern time is the Rook deep dive, and there's another one that involves Rook.
B
Okay, well, I'm just putting something in chat there for our four question askers: I have just sent a link out in chat for you to request your complimentary Rook shirts, for participating in our office hours today.
B
Please
go
ahead
and
follow
that
link
and
you
can
go
ahead
and
request
that
if
you
are
watching
this
later
because
you
cut
out
in
the
middle,
you
can
ask
for
that
link
on
kubecon
slack
and
thanks
everybody
for
watching
and
participating,
and
thank
you
most
of
all
to
our
two
rook
maintainers,
not
only
for
answering
questions
participating
today,
but
for
the
awesome
tool
that
is
rook.