Description
Chris Blum, Senior Architect in Red Hat Storage, demonstrates using local disks on VMware with OpenShift Container Storage 4 in OpenShift 4.
For more information, please visit openshift.com
They can be based on pretty much anything. Then we have the choice of using RDM, or raw device mapping, which maps certain raw devices directly into a VM, and we can also use VMDirectPath I/O, which can map local devices directly to VMs, and this is actually what we're going to look at today. So we will try to map local NVMe devices into a VM, and this is probably one of the hardest setups, since it involves quite a few manual steps.
That's provided at this link here, or you can do that yourself, either by SSH against the host or using oc debug node. Then you create two LocalVolumes in the Local Storage Operator's namespace: one with the volume mode Filesystem for the Ceph monitors and one with the volume mode Block for the local disks. And finally, to use these local-disk PVs with OCS, you create a StorageCluster. So this is the full flow that you need to create.
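As a sketch, the two LocalVolume resources described above could look roughly like this; the names, the namespace, and the device paths are placeholders from this demo and need to be adapted to your environment:

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-file
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values: [""]
  storageClassDevices:
  - storageClassName: localfile
    volumeMode: Filesystem        # for the Ceph monitors
    devicePaths:
    - /dev/disk/by-id/wwn-...     # the small VMDK disks (placeholder)
---
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: cluster.ocs.openshift.io/openshift-storage
        operator: In
        values: [""]
  storageClassDevices:
  - storageClassName: localblock
    volumeMode: Block             # for the OSD data
    devicePaths:
    - /dev/disk/by-id/nvme-...    # the passed-through NVMes (placeholder)
```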
So first, I have prepared this OpenShift cluster here. It's freshly installed with OpenShift 4.3.5 on VMware, and this is the VMware cluster; we're on vSphere 7. These are all the nodes that make up the cluster: we have three masters and six compute nodes. We will be using the first three compute nodes today to attach the local disks, specifically the NVMe disks, via VMDirectPath I/O.
We need to shut down these workers. Because I know that nothing is running on this cluster, I can just tell them to shut down. If you were doing this on a running cluster, then obviously first drain those nodes and ensure that the pods are properly migrated off of them before shutting them down.
When they're shut down, we ensure that they are spread across the three hypervisors. You see compute node zero is running on host three, and I want to ensure that they are all running on different hypervisors.
So you see that compute one is running on host two and compute two is running on host three, so I'm going to move compute zero over to the first hypervisor. Each of these hypervisors has one NVMe disk that I'm going to attach, and now they should all be on different hypervisors. So this is fine.
Now, all right, let's attach our local NVMe. For this, I'm going to add an NVMe controller and I'm going to add a PCI device, and you see it over here.
Also note that when you're attaching a PCI device (that is, a passed-through PCI device), there are obviously some limitations on that VM. Not only do we reserve all memory and hard-allocate it to the VM, but we can also not suspend it, migrate it with vMotion, or take a snapshot of that virtual machine. But that shouldn't be too bad for you.
And they should eventually be Ready again in the cluster. You can either monitor their state in the web UI, or you can do that from the terminal when you're logged in. If you don't see your NVMe PCI devices in the VM options, then you probably haven't configured them for passthrough; they first need to be enabled for passthrough on the ESXi host.
So obviously, the openshift-storage namespace is for the OCS operator and the local-storage namespace is for the Local Storage Operator, and then we label the openshift-storage namespace for monitoring, so that Prometheus knows it should be monitoring it.
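As a sketch, the two namespaces and the monitoring label could be declared like this, assuming the namespace names used in this demo:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
  labels:
    # lets the cluster Prometheus scrape this namespace
    openshift.io/cluster-monitoring: "true"
---
apiVersion: v1
kind: Namespace
metadata:
  name: local-storage
```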
Let's hop back over to our web UI. Over here we go to Operators, OperatorHub.
And in here we do have two files, the localfile and the localblock YAML, and we need to feed them the device paths. As a best practice, we're using the by-id device paths, because the plain device names could change between reboots. So that we can easily find out the disk IDs on all the nodes, we can use a DaemonSet. We first need to label our nodes.
This is important so that the DaemonSet knows on which nodes to look for local disks. Now that we have that, we can verify that the label was successfully applied by just asking for the nodes with the label.
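The labeling and the verification can be done from the terminal; a sketch, assuming the node names from this demo and the standard OCS node label:

```shell
# label the three workers that carry the local disks
oc label node compute-0 cluster.ocs.openshift.io/openshift-storage=''
oc label node compute-1 cluster.ocs.openshift.io/openshift-storage=''
oc label node compute-2 cluster.ocs.openshift.io/openshift-storage=''

# verify: only the labeled nodes should be listed
oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
```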
So we see the first three workers have the label. This is fine, so now we deploy the DaemonSet. The DaemonSet looks like this, and you see it's nothing too fancy: it's just running some bash to figure out the disks' by-ids. Going back to the terminal,
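A minimal sketch of such a discovery DaemonSet; the image and all names are placeholders, and it simply lists /dev/disk/by-id on every labeled node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: disk-discovery
  namespace: local-storage
spec:
  selector:
    matchLabels:
      app: disk-discovery
  template:
    metadata:
      labels:
        app: disk-discovery
    spec:
      nodeSelector:
        cluster.ocs.openshift.io/openshift-storage: ""
      containers:
      - name: discover
        image: registry.access.redhat.com/ubi8/ubi-minimal
        # print the stable by-id links, then idle so the pod keeps running
        command: ["/bin/sh", "-c", "ls -l /dev/disk/by-id/ && sleep infinity"]
        volumeMounts:
        - name: dev
          mountPath: /dev
      volumes:
      - name: dev
        hostPath:
          path: /dev
```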
and this looks like this: we have three workers, compute zero, compute one and compute two, and they each have three disks. I know that sda is my root disk, so I'm not going to use that; sdb is the VMDK disk; and nvme0n1 is the NVMe disk. Fortunately, they are named the same on each node, so what I can do now is use these disk IDs in my files.
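As a one-off alternative to the DaemonSet, the same information can be read per node with oc debug (the node name is a placeholder):

```shell
oc debug node/compute-0 -- chroot /host ls -l /dev/disk/by-id/
```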
So now we see the local PVs slowly appearing. The 10-gigabyte PVs are based on the VMDKs, and the 1.5-terabyte PVs are based on our local NVMes. As you can see, the 10-gigabyte PVs are using the localfile storage class and the NVMe devices are using the localblock storage class, just as we want it, and we are going to use these storage classes now to deploy OCS on top of these devices.
So this is going to look like this: we are going to apply an OCS StorageCluster, and we're again going to tell OCS to use the localfile storage class with the volume mode Filesystem for the monitors, and to use the localblock storage class with the volume mode Block for the data pods, the pods where the Ceph OSDs run, and we want three of them.
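A sketch of such a StorageCluster; the sizes and names here are assumptions from this demo environment, and count/replica may need adjusting to end up with three OSDs:

```yaml
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      storageClassName: localfile   # Filesystem-mode PVs for the monitors
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1
    replica: 3                      # three OSDs in total
    portable: false                 # OSDs are pinned to their local disks
    dataPVCTemplate:
      spec:
        storageClassName: localblock
        accessModes: ["ReadWriteOnce"]
        volumeMode: Block           # raw block PVs for the OSD data
        resources:
          requests:
            storage: 1490Gi         # placeholder for the 1.5 TB NVMes
```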
Now that all pods are in state Running, the OCS deployment is done. Let's look at this again outside of watch: we have quite a bunch of pods deployed now, and we have the OSDs, and we see down here that the OSDs are now also using the NVMes. For the capacity, we can match each PV back to our local disks; they're using our localblock storage class, and for the monitors we're using the localfile storage class.
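These checks can also be done from the terminal; a sketch, assuming you are logged in to the cluster:

```shell
# watch the OCS pods come up
oc get pods -n openshift-storage -w

# map the PVs back to the local disks and their storage classes
oc get pv

# list the storage classes that OCS created
oc get storageclass
```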
That gives us the 10-gig VMDKs directly attached to the VMs. And obviously, OCS is deployed now: we do have access to the OCS storage classes here, and also the NooBaa bucket-claim storage class for OBCs. Just like always, we can now go back to the web UI, on to the dashboard, and you see that everything still looks healthy. We can go to the new OCS dashboard here, and it behaves just like an OCS deployment with the thin storage class on VMware; we have the very same look here.