From YouTube: All Things Data: Workload consistency during Ceph updates & adding new storage devices
Description
As part of the “All Things Data” series of briefings, Red Hat’s Sagy Volkov demonstrated (live!) workload resilience and consistency during storage updates and additions using Ceph - a key component of OpenShift Container Storage.
OpenShift Container Storage (OCS) is software-defined storage for containers that provides you with every type of storage you need, from a simple, single source.
Visit openshift.com/storage for more information.
A
Today we have a very special guest speaker for our second episode of All Things Data. We have Sagy Volkov from Red Hat, who is here to give a live demo of workload resilience on OpenShift with Ceph storage: updating and adding storage to your persistent volumes while everything just keeps running. Thank you, Sagy. Please take it away.
B
Hi, my name is Sagy Volkov. I'm a storage performance instigator. Today's demo will be workload consistency during a Ceph upgrade and storage expansion. This demo is based on components of OpenShift Container Storage version 4.2. OpenShift Container Storage 4 is based on the OCS operator and the Rook operator, and the Rook operator is the director, or orchestrator, of all things Ceph-related.
I have a MySQL pod, and I'm going to show many terminal windows. We're going to monitor the transactions per second of the sysbench job, and during this process we're going to update the Ceph version by basically changing the CephCluster CRD, and we're going to monitor to see if there are any changes during the update. Then we're going to add storage, or expand the storage that the Ceph cluster has, and again monitor to see whether we see any kind of I/O performance hiccups or anything like that during this stage.
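The version change he describes amounts to editing the Ceph image in the CephCluster custom resource. A minimal sketch, using the upstream Rook CephCluster field names; the resource name and namespace shown are assumptions, not taken from the demo:

```yaml
# Sketch of the relevant part of a Rook CephCluster CR.
# Changing cephVersion.image is what triggers the rolling update.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph        # assumed name
  namespace: rook-ceph   # assumed namespace
spec:
  cephVersion:
    image: ceph/ceph:v14.2.6   # was v14.2.5 before the update
```

Applying the edited object (for example with `oc apply -f`) is effectively what the bash script in the demo does.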
The version that I'm going to update to is 14.2.6. Before starting, I'll just share a few other windows here. On the top left is a list of the pods that Ceph is currently running. On the bottom left are the components that Ceph needs in order to run and support our RBD, or block device, options.
As you can see, all of them are at version 14.2.5. On the top right is the Rook operator log; we're just constantly tailing it. On the bottom right are the transactions per second running against the MySQL pod. I will now run this bash script in the middle window, and it will start the process. What we will see immediately is that the Rook operator will get the request to change the version and act upon it.
Of course, many of the things that I'm showing here are not going to be needed on OpenShift Container Storage. In OCS you are able to choose whether you want to update the OCS software automatically or manually, and the OCS 4 operator will basically take care of everything I've done manually here, such as patching the Rook operator and things like that. On the top left side, what we're seeing is that the first mon was just chosen.
Mon a was chosen as the first one to be restarted and deployed with the new software, and we can see on the bottom left that it's the only one currently available with the new version, 14.2.6, already. The way this update process works is that it goes through the three mon pods that we have: each time it updates one, it waits for it to come back into the mon quorum, and it will not continue to the next one before the updated mon is actually working.
The wait time is not roughly, it is exactly, 60 seconds. If the Rook operator decides that the mon is not back in the quorum, it will just pause for 60 seconds and check again the next time around. So we see that the Rook operator decided to move on to the next mon, and we can see it being initialized.
On the bottom right, we can also see that there's an enumeration of the pods required, whether they are updated, and whether they are available. These are the two different values here, and they are basically how Rook decides whether or not to move on to the next phase. One note about the pods that we have here: this Rook Ceph cluster only runs the RBD portion of Ceph, the block device option of Ceph.
In a full deployment you're going to see a lot more pods with other components of Ceph, for example the gateway for objects and CephFS, and those are again going to be updated in a fashion where only once a certain object has completed its update will the Rook operator continue to the next one. The Rook operator is now going to update the third monitor pod.
As you can see on the bottom left, it's waiting for the third one to be updated, and then it will wait for it to be available. The next component that is going to be updated is the manager pod. There's only a single one of those, and once the mons are basically back in quorum, it will continue on to updating the manager pod.
Now, one of the questions that I proposed in the presentation is whether we should see any kind of I/O disturbance, or "I/O pause" as I call it, while running these updates. The answer is that typically we should not: most software-defined storage products that are available can update the software live.
However, most places will probably prefer to create some kind of time frame during which no live I/O is going into the storage system. In this demo, though, we're actually updating the software while I/Os are running against the MySQL pod. Now, we have three OSDs; these are basically the pods that are providing the storage, creating the Ceph cluster for our workloads.
Any application on this OpenShift cluster can use it. We can see that the first one, osd-0, was already updated, and the second one is being updated right now. If you look at the bottom right, you can see that the IOPS are dropping a little bit, and that is because, at this point, Ceph has to basically tell the Ceph clients, "hey, do not use this copy of the data, because we are now changing it."
And afterwards: "do use this copy of the data, because it's valid, and it's already on the new Ceph version." So OSD 1 is also updated, and the Rook operator will soon move on to updating OSD 2. By the way, there's a pod here called ceph-tools that you can disregard; it's a kind of mock Ceph-management pod, and OCS does not use this pod. So again, the cycle of 60 seconds is going to be enforced by the Rook operator.
If it does not think that all the OSDs are up and running, it pauses for another cycle. And now we're seeing that after the 60 seconds, OSD 2 is being terminated, and we can see on the bottom right how the required and availability counts track this: one pod is required, one pod has already been updated, but it is not yet available to the Ceph cluster. We can also see that once this OSD is updated and back in the quorum, the log of the Rook operator is going to show that it succeeded in updating the cluster.
So I'm going to continue the demo now with how we add additional storage into the Ceph cluster using Rook, and again, all of this will be done from the OpenShift console. In OpenShift Container Storage, in OCS 4, what we have here is an AWS cluster, as I previously stated.
On the top left, what I'm going to do is edit the CephCluster object, and as you can see here, when I installed everything I actually filtered which devices were to be used, and I also specifically specified not to use all the devices on the AWS VM. So I'm going to change that to true, I'm going to change this to null, and save the object, and we're going to watch the top right immediately.
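The edit he describes maps to the `storage` section of the CephCluster resource. A rough sketch, assuming the upstream Rook field names (`useAllDevices`, `deviceFilter`); the filter value shown before the edit is hypothetical:

```yaml
# Before the edit (illustrative): only devices matching a filter are used.
spec:
  storage:
    useAllDevices: false
    deviceFilter: "^xvdb"   # hypothetical filter from the install

# After the edit: consume every available device on each node.
spec:
  storage:
    useAllDevices: true
    deviceFilter: null
```

Saving the edited object is what kicks off the osd-prepare pods he walks through next.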
And as you can see, the OSD prepare pods on the top left have all been restarted. OSD prepare pods have one task, and that is to prepare a host, or a VM, to be used for Ceph. They have a specific job: they basically go over all the devices that are available to be used on that host and then report the information back to the Rook operator, and once the Rook operator decides that there are new devices to be used, it will create new OSDs backed by them.
That's why you see that some of these OSD prepare pods have already completed while some of them are still running. What you can see on the bottom left is that the OSD tree has already marked three more OSDs that can be used, because we just provided more storage devices to the Ceph cluster. They are still just slots to be used; they're not really usable yet, because there is no OSD pod running for each of them and using the device that the VM is providing.
Now we're seeing on the top left how the OSD pods for OSDs 3, 4, and 5 are being initialized, and once they are, the pods will be up and running. On the bottom left we are seeing how each of these new OSDs is joined into its existing host, marked up, and made available to be used by any application.
That is, any application that is using the Rook Ceph block option. As we can see on the bottom right, there's a little bit of a slowdown in the I/Os. To try to explain it, it's very simple: we had three devices that provided the storage for everything, and we had some data on them. Now we have six devices providing storage for everything. Basically, Ceph is going to take the existing data layout that we have and spread it over all the devices. This is actually one of the biggest strengths of Ceph.
It's massively distributed: the more devices Ceph has, the better the performance. And so Ceph as a cluster has now redistributed the data; of course, all the data is available all the time. We just finished basically adding the storage to our three available VMs in AWS, and this is the end of the demo.
A
Thank you so much, Sagy. That was excellent, and thank you, everybody, for joining us for another All Things Data session. Comment on the OpenShift Commons briefing channel; all of these are posted to YouTube. Please look at the openshift.com/storage page, as well as the All Things Data series on the OpenShift Commons YouTube channel, where this will be posted. So please look out for this on the OpenShift blog. Thank you, everybody. See you next time.