Description
As part of the “All Things Data” series of briefings, Red Hat’s Kyle Bader and Annette Clewett gave a deeper dive, with a live demo, into the new Generally Available and Technology Preview features in OpenShift Container Storage 4.3. These features include dynamic provisioning, flexible OSD sizes, and support for various local drives.
OpenShift Container Storage (OCS) is software-defined storage for containers that provides you with every type of storage you need, from a simple, single source.
Learn more here: openshift.com/storage
A
Thank you, everybody, for joining us for another All Things Data OpenShift Commons briefing. Today we're very excited to have Annette Clewett and Kyle Bader, as well as Chris Blum answering questions in the chat from our storage group. With 4.3 of OpenShift Container Storage coming out, they're here to give us a deeper dive into the different new features. So please, Kyle, take it away. Thanks.
B
So the major theme of this release of OCS is to provide more flexibility: more flexibility in terms of deployment, more flexibility in terms of the types of devices you can use underneath OCS, and the addition of a new platform, the ability to support bare metal. Now, a few of these things are coming into 4.3 as tech preview and will go GA later, but we're going to go into a little bit more detail here, and then Annette will provide a nice demo.
B
So the status quo before 4.3, and what continues to be an option in 4.3, is to use a dynamic provisioner. For 4.2 we targeted two platforms: vSphere and Amazon EC2. In both cases we used an infrastructure dynamic provisioner that gets deployed into OpenShift automatically; we would consume volumes, or PVs, from those provisioners and then build OCS on top of that. Now, there are a lot of advantages that you get from that, which I have detailed on the slide here. We have the ability to move that PV from node to node.
So if you have a node that fails, or if you otherwise need to move one of those OSDs to a different node, you can do so without having to recover the data; it can just detach and attach. That's a nice convenience, and there are some limitations on this: if you're using EBS, the EBS PV can only move within the same availability zone. So it's not perfect, but it is nice to not have to recover data if your OCS nodes go down.
B
The
other
nice
thing
about
the
the
dynamic
earth
are
the
using
a
dynamic
provisioner
is
the
sizing
is
dynamic
right
so
because
the
PVC
is
created
to
satisfy
that
particular
PV
C.
It
can
be
made
to
the
exact
size
of
the
request.
So
if
you,
you
know,
request
a
two
terabyte
volume,
then
you
get
a
two
terabyte
PV
most
of
the
volumes
that
you
would
get
through.
These
provisioners
are
generally
higher
mean
time
between
failure
right.
B
B
B
B
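As a sketch of what this looks like in practice, a claim that a dynamic provisioner satisfies at the exact requested size might look like the following (the claim name and storage class are illustrative, not from the briefing):

```yaml
# Illustrative PVC: a dynamic provisioner (e.g. an EBS-backed class on
# AWS) creates a backing volume sized exactly to this request.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc            # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Ti             # request a 2 TiB volume, get a 2 TiB PV
  storageClassName: gp2        # illustrative AWS EBS storage class
```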
B
So this is some flexibility requested by customers, and we've introduced it in 4.3; this is generally available. The local storage that I was alluding to earlier is the ability to consume storage directly, locally, in the system. In the case of VMware, that would be where you have hypervisor nodes that have some sort of locally attached media, and then you surface it into the VMs that are your OpenShift nodes, either through a local device, a VMDK.
B
Or you can map those devices directly into the guest that corresponds with your OCP node, and then finally there's a direct-path approach for NVMe. That's effectively like a PCI passthrough, which is only for NVMe drives, because that's a PCIe-based protocol and you need something a little bit more direct for NVMe to be efficient.
For EC2, we have instance store: there are a number of instance types, the predominant ones being i3en or i3, that have locally attached SSDs that you can now consume and use for OCS. There are some caveats there: you don't want to stop your instances, and you'll want to create an IAM role to prevent that from happening. And then finally, local SSD or NVMe in bare metal hosts. This is all organized by the Local Storage Operator ahead of time.
C
Okay, so taking off from where Kyle landed, that was a good explanation of the differences with OCS 4.3 that we want to go through here. Dynamic provisioning, as Kyle said, was the feature in OCS 4.2, and for dynamic provisioning the new addition, as he said, is the t-shirt size selection; that is not using local storage, that is dynamic provisioning. So what we're going to take a look at here is using the local storage method, and in particular we're going to do it with VMware.
C
So starting from my VMware client, I want to just show you what I have here: I've got OpenShift installed on six worker nodes and three control plane nodes, or masters. What I did to create the local storage is I added what you call, in the vSphere client, a hard disk. You also see a raw device mapping here, but I added a hard disk, and the size is a hundred gigabytes.
C
If we now go back to OperatorHub, we see that we have a new version here, 4.3.0, actually just released today. Before I deploy that, though, I want to discuss how we are creating the local storage. So if we take a look at the devices we have available here, this output is created using a utility that one of our own SAs, Daniel Moser, created, and Chris Blum is on the call here.
C
What I did when I created this hard disk is essentially create this additional device, and this is the device that we're going to use for the storage. You can find this utility here; I'll just put it into the chat, and I definitely encourage you to check it out. If we go back to the CLI here, let me go ahead and get rid of that now.
C
I'll do an "oc get nodes"; I'm just connected to the same cluster that we just looked at in vSphere, so there are our nodes. Right now I'm in the local-storage project, so I have already deployed the Local Storage Operator, which can be found in OperatorHub; if I type here, you see it's already installed. So if I do an "oc get pods" and PVs here, you'll see that I have some pods that are in the local-storage project.
C
All right, so I used the by-ID values that I got from using that utility. I could have just used the device path, sdb, because sdb is the same on every one of the three nodes I'm going to use, but this is a better way to do it, and in the future this actual ID will be discovered via the Local Storage Operator. But right now I had to go use the utility and put it here.
C
The other thing I'm using to decide that a PV should be created for the local storage device is the key of this label. This label is the label that should be added to any OpenShift nodes that are going to be running OCS. I've already added the label, and I've already created the CR; so that's why.
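A minimal sketch of the LocalVolume CR and node label being described, assuming the standard OCS 4.3 layout (the device path is a placeholder; real values come from the by-ID utility mentioned above):

```yaml
# Sketch of a LocalVolume CR for the Local Storage Operator. The
# nodeSelector key is the OCS storage label; the device path below is a
# placeholder, not a real disk ID.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In
            values:
              - ""
  storageClassDevices:
    - storageClassName: localblock   # the class the OSD PVCs will claim from
      volumeMode: Block
      devicePaths:
        - /dev/disk/by-id/<disk-id>  # placeholder: one by-id value per disk
```

Nodes would be labeled beforehand with something like "oc label node <node> cluster.ocs.openshift.io/openshift-storage=''" so the selector matches.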
C
OpenShift Container Storage — just note, this happens to be VMware, using VMDKs. This could as well be on AWS using i3 or i3en instances; it could also be bare metal using local disks in a server. It's really the same process: to get to this point, the disks need to be available as PVs. So going back to here, let me go ahead and refresh.
C
So as this subscription, or the cluster service version, is coming up, we can take a look at what's happening here. We also currently make use of the CRDs in the lib-bucket-provisioner; you see the provided APIs to the right there. In the next version, OCS 4.4, we will still use the CRDs, but you will not see the lib-bucket-provisioner operator visible. Let's take a look here and just see that our CSVs are succeeding.
C
But we still have the 4.3.0 installing; we do need to wait until that's finished installing, and then we can proceed to create the storage cluster. So let's do an "oc get pods" before we proceed here, and what we should see is our four operators: the OCS operator, which is the meta operator for the OCS service; the Ceph operator; the NooBaa operator; and the lib-bucket operator. So now we're ready to create our storage cluster.
C
So in this case, because we're doing local storage, we're not going to use the UI as you would with dynamic provisioning. With dynamic provisioning you would go to the storage cluster and create the storage cluster there, but for local storage we need to be able to give it different information: the storage for the mons and the storage for the OSDs.
C
So when we created the LocalVolume CR using those three by-ID disk IDs, we also got a localblock storage class, and that localblock storage class is how we're going to claim and create PVCs from those available hundred-gigabyte PVs. This is the actual StorageCluster CR that is used when, over here, you hit "create OCS cluster".
C
It's not exactly this one, but this is how it looks. We need to have mon storage; in this case, because I do have the thin storage class available here, I'm going to use it to create the mon storage. I could also have created a 10-gigabyte VMDK, created another LocalVolume CR, and claimed the storage that way, but in this case I'm just going to use the thin class because I have it available for dynamic provisioning. And then there's the storage for my OSDs.
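A sketch of the StorageCluster CR shape being described, with mon storage drawn from the thin dynamic-provisioning class and OSD storage from the localblock class (field names follow the OCS 4.3 conventions, but treat this as illustrative rather than the exact CR from the demo):

```yaml
# Illustrative StorageCluster CR for a local-storage deployment:
# mons on the vSphere "thin" class, OSDs on the "localblock" class.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      storageClassName: thin         # mon storage via the dynamic provisioner
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 10Gi
  storageDeviceSets:
    - name: ocs-deviceset
      count: 1
      replica: 3
      portable: false                # local disks cannot move between nodes
      dataPVCTemplate:
        spec:
          storageClassName: localblock   # claims the 100 GiB local PVs
          accessModes: [ReadWriteOnce]
          volumeMode: Block
          resources:
            requests:
              storage: 100Gi
```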
C
So I create the storage cluster, and as I do that we'll start to see some pods come up, and this is again in the openshift-storage namespace. First off, because our operators are already created, we see our Container Storage Interface pods coming up. This is the new API via Kubernetes for all storage to be created, and OCS makes use of the CSI API. The first thing we need to do is land some pods on each one of the devices.
C
Excuse me, each one of the OpenShift nodes. Now, this would be on OpenShift nodes that allow scheduling of application pods, so we're not going to see this on the master nodes, because currently the master nodes have a NoSchedule taint. But any worker node, or any infra node that could host applications, will have both the CephFS and RBD plugins so that volumes can be created and deleted. So those will continue to come up.
C
Going back to the storage cluster, we can see now, compared to when I first looked at this, that we have a storage cluster being created; it's version 4.3.0 and its status is Progressing. The other thing that happened when I created the storage cluster is that I got two new dashboards that are completely integrated into OpenShift, and right now they're not populating, because the storage cluster is still creating.
C
The other thing, if we have time: there's a lot of alerting and metrics specific to OCS and specific to Ceph that are also integrated in, so you get very good alerting in these dashboards when there's a problem. Let's go back to looking here; it looks like we're starting to bring up the mons.
C
So if we take a look here, let's just take one more look and see if we've finished. Well, I'll show you how we know that we have finished the deployment just from looking at this view; there are a lot of different ways to validate the deployment, but we are still creating here. You notice the OCS operator here is running but not ready, which is evidenced by the zero in the last column, so this operator will stay in that state until the deployment is complete.
C
Now, even though the OCS operator doesn't show quite done yet, we can see that our dashboard has populated. It tells us that, because I used 300 gigabytes of VMDKs, I've got almost 300 gigabytes of storage. Effective storage, of course, is just one third of that, because of replica 3. And our object service usually takes a little bit longer to come up, because you'll notice NooBaa was at the end. So I think that is the end of my demo: a successful deployment of OCS 4.3, which released today.
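The capacity arithmetic mentioned here can be sketched in a few lines (raw capacity from three 100 GiB VMDKs, usable capacity after Ceph's three-way replication):

```python
# Three OSDs backed by 100 GiB VMDKs give ~300 GiB raw capacity;
# with replica 3, roughly one third of that is effectively usable.
OSD_COUNT = 3
OSD_SIZE_GIB = 100
REPLICA = 3

raw_gib = OSD_COUNT * OSD_SIZE_GIB
effective_gib = raw_gib // REPLICA

print(f"raw: {raw_gib} GiB, effective: {effective_gib} GiB")
# prints "raw: 300 GiB, effective: 100 GiB"
```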
A
That was fantastic. Thank you, Annette, and thank you, Kyle. It's so nice to see 4.3 out, and what a great demo. Next week we'll have another deep dive into 4.3, so please join us next week, and thank you, everybody, for another great All Things Data OpenShift Commons briefing. You can find this on the OpenShift Commons YouTube channel. Thank you, everyone.