Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions.
A
Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of the OpenShift Administrator Office Hours here on OpenShift.tv. I am Chris Short, executive producer of OpenShift.tv, and I am joined by the one and only Andrew Sullivan, who is going to talk about what today, exactly?
B
Yeah, so first, good morning, good afternoon, good evening. Happy holidays, I hope everybody is back and safe. I hope that if you enjoyed time with your family, if everybody gathered together and all of that stuff, I hope it was safe, and I hope everybody remains safe, because things are crazy out there.

So today I wanted to talk about, well, first let me say that this is an office hours show. Questions are welcome: don't be shy, don't be afraid of interrupting or anything like that. But in the absence of your questions, I want to talk about storage today. So if you recall, a few weeks ago, well, it's been more than a few weeks ago now, we talked about storage, but we talked about it from a different perspective. I want to say it was not the last episode but the episode before that; we'll have to dig that up and I'll figure out which episode it was.
B
So
we
talked
about
storage,
but
it
was
from
the
perspective
of
what
type
of
storage.
How
do
I
provide
storage?
How
do
I
size?
You
know
both
from
gigabytes,
iops,
latency,
etc,
the
storage
using
being
used
by
by
nodes.
B
Not
only
are
you
managing
the
nodes
right,
the
openshift
cluster
itself,
but
you're
managing
all
of
the
resources
that
the
applications,
the
pods
that
are
running
inside
of
the
cluster
are
consuming.
So
this
is
most
often
manifested
or
we
think
about
these
things
in
terms
of
like
cpu
and
memory
right.
Each
pod
has
some
amount
of
cpu
and
memory
that
it's
going
to
consume,
but
also
storage
right.
It
would
be
even
though
right
kubernetes
is
cloud
native
and
you
know
12-factor
applications
and
all
that
other
stuff
right.
B
So, oh, hey, Christian, welcome! Welcome to the party, Christian. I know it's a little early there on the west coast, so thanks for joining. So let's talk about storage, and anything else that's on your mind.

So first, let's clarify a couple of things. There are, broadly speaking, two types of storage there, or let me rephrase that: there are more than two types, there are a lot of different types of storage that we have to account for inside of OpenShift.
B
But
when
we
talk
about
pods,
when
we
talk
about
storage
being
used
by
applications,
there
are
two
types,
ephemeral
and
persistent,
so
ephemeral.
We
touched
on
the
last
time
that
we
talked
about
storage
and
that
was
really
focused
around
when
I
instantiate
a
pod
right
when
I
just
go
and
turn
on
a
pod,
it's
going
to
use
some
ephemeral
storage
because
it's
going
to
have
to
pull
down
that
image
right.
Well,
that
gets
that
uses
capacity
on
the
host
disk.
When
I
start
that
pod
it
creates
that
copy
on
right
layer
right.
B
There
is
ephemeral
storage
that
is
used
by
things
like
an
empty
dir
volume
declaration.
So
for
those
of
us
who,
you
know,
we
spin
up
non-production
clusters
and
do
a
bunch
of
testing
and
sometimes
we'll
want
to
do
the
registry,
and
we
just
do
a
quick,
empty
dire
in
there.
Well,
that
capacity
is
being
used
from
the
local
host.
So
if
you
dump
tens
or
hundreds
of
gigabytes
of
data
into
that
registry,
using
an
empty
dir,
guess
what
that's
coming
from
the
local
host
right
and.
B
And
only
the
localhost,
so
you
can.
You
can
run
that
host
out
of
capacity
gigabytes
very
easily.
Yes,
in
which
case
the
kubernetes
will
start
ejecting
pods
from
that
host.
So
you
just
have
to
be
aware
of
that.
Nothing
wrong
with
that
right,
but
ephemeral
storage
is
well
ephemeral
and
that's
not
what
we.
What
we're
here
to
talk
about
today.
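To make the emptyDir point concrete, here is a minimal, hypothetical pod sketch (the name, image, and mount path are placeholders, not from the episode); whatever the container writes under the mounted path consumes the node's local disk, and the optional sizeLimit is one guard rail against filling it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                # hypothetical name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /var/lib/registry   # data written here lands on the node's local disk
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 10Gi                # optional cap so a runaway pod cannot consume the whole host disk
```

If the node does run low on disk anyway, the kubelet's eviction logic is what removes pods from it, exactly as described above.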
B
So
persistent
storage,
on
the
other
hand,
is
storage
which
exists
beyond
the
lifetime
of
that
pod
and
it
is
frequently
attached
from
something
external.
So,
yes,
there
are
exceptions.
So
I'm
thinking
of
things
like
the
host
path,
provisioner,
where
I
can
take
a
a
piece
of
local
storage,
so
maybe
a
an
ssd,
an
nvme,
a
local
raid
device
and
carve
that
up
for
persistent
volumes,
but
we're
going
to
ignore
that
for
now,
because
usually
that's
used
in
an
edge
case.
A
I
got
it,
but
don't
worry.
I've
been
on
top
of
chat
today,
like
okay,
something
on
something
so.
B
So there are a couple of interesting things about persistent volumes. First, they can be statically created. I can go in and, as the administrator, I can define a persistent volume: hey, there's an export here, there's an export here, there's an export here. Second, they are only visible to the administrator. You have to have administrator permissions, or specific permissions given to you; normal application users, normal users of the system, aren't expected to have, or be able to view, that particular data.

Can do that? That's an easy one. So, where was I? Persistent volumes can only be viewed by an administrator, which can be important if I have, say, an NFS server that has, you know, dozens or hundreds of exports coming off of it. I don't want anybody to be able to go in and say: show me all the PVs, show me all the exports, show me where all of this storage is located.

So there are some important things about that from a security perspective. Way back when, at a previous employer, I did a blog post about PV security, which I can dig up the link for and we can share inside of here, if you'll give me just a second.

Let's see here. So this blog post, which was written now two and a half years ago, is still broadly applicable. Of course, you know, I would say that, because I haven't reviewed it in depth recently.
B
You'd still want to view it with a critical eye; there's a tremendous amount of change that has happened in the last two and a half years. So definitely work with, you know, whoever's providing your storage, make sure you understand all the ramifications, all the things that are happening there, but there's a lot of things to be aware of there.

Okay, so persistent volumes, or the persistent volume object, defines where to find the storage and how to connect to it. It also defines, you'll see here very importantly, the capacity associated with it.
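For context, a statically created NFS persistent volume is just a small YAML object like this minimal sketch (server, path, and size are placeholder values); it records where the storage lives, how to connect to it, and the advertised capacity:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-export-01                # placeholder name
spec:
  capacity:
    storage: 5Gi                     # advertised size; see the NFS caveat below
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com          # placeholder NFS server
    path: /exports/export-01         # placeholder export path
```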
B
Note that with NFS, say I have a single NFS export, or I have some sort of shared storage. I'm thinking of, like, I want to use a RHEL NFS server and create a bunch of folders, or a bunch of exports, inside of there and then introduce those. The value for capacity that I define here does not have to reflect the actual capacity of that export or of that storage device.

So with NFS exports, maybe it's sitting on a one terabyte drive, and I create a bunch of folders in there and I export each one of those folders. Every single one of those folders, when I mount it, is going to show one terabyte of space, and, as you know, if there are 10 folders, 10 exports, 10 persistent volumes, as those fill up, each one of them is going to reflect the space consumed of the total volume.
B
So what do I mean by that? If I have a persistent volume claim and I say I want some RWX storage, or excuse me, I want some RWO storage, I only want ReadWriteOnce, and I want, say, five gigabytes as we have here, and I attach that PVC to a pod, it gets instantiated. Kubernetes schedules the pod, the kubelet mounts the storage to the host, the pod is running, it's connected to it, etc.

There's nothing that stops me from taking that same ReadWriteOnce PVC, attaching it or associating it with another pod, and scheduling that. What will happen is that the second pod will be scheduled to the same host as the first one and will be granted access to that same export. So now I have two pods on the same host accessing the same PV. Some people might be thinking: but isn't that a security risk? So, first, this is controlled, it is secured.
B
Access to that data is controlled via the PVC paradigm. In other words, just like the blog post that I linked, if whoever's instantiating that pod doesn't have access to the PVC, so same namespace, etc., then they won't be able to access that data. So Chris, if you and I share a cluster, and I go through this process, I stand up a pod, it's ReadWriteOnce, and everything's running great, you couldn't then come in and say "I want to access this mount point."

It doesn't work that way. I could instantiate a second one, but you could not attach to that. So that's a very important differentiation. Some people are surprised when they accidentally schedule more than one pod to use an RWO PVC and it works, and that's why: it's the same host, the same node in the cluster.
B
Can
have
multiple
pods,
but
it
won't
be
accessible
from
multiple
hosts,
so
that
kind
of
covers.
Oh,
the
last
thing
that
we
want
to
talk
about
here
is
the
reclaim
policy.
B
So
the
reclaim
policy
is
what
happens
when
the
pv
is
deleted
and
there
are
technically
three
reclaim
policies.
However,
there's
really
only
two
that
we
should
care
about,
so
the
first
one
we
see
here
is
retain.
Basically,
what
happens
is
so
I'm
using
that
pve
I
delete
it.
It
doesn't
actually
get
deleted.
It
goes
into
a
status
where
it
is
no
longer
accessible.
It
will
not
be
reprovisioned.
It
will
not
be
reused
until
an
administrator
comes
in
and
cleans
it
up.
B
This is useful if you have, shall we say, users with buttery fingers: "oh no, I accidentally deleted the wrong volume, can you recover that for me?" So this is one way where you can say: yes, I can go and recover that for you. Now, you can't move a PV from that released state back into active; instead, you'd have to recreate the PV, so you'd create a new one pointed to the same storage object.

That is really, really useful with something that we'll talk about in a moment, which is dynamic provisioning. So, hey, I created this fancy new PV, it's one terabyte, it holds all my mission-critical data, and then I accidentally deleted it. Well, hey, you know, we didn't actually delete it, we can recover it for you, type of thing. The second one that we care about is the Delete reclaim policy.
B
As the name implies, when you delete the PV it just goes away. It's done. There is no ability to reclaim it or anything like that. And then there's the third one, which is deprecated, and it's been deprec... wow, I cannot talk. It's been deprecated for a long time now; it's still technically there, but it's been around for a while, and that is Recycle. Recycle was only ever really applicable, as far as I know, to NFS types of storage. Essentially, what it would do is, when the persistent volume is deleted, instead of actually deleting the volume, it would mount it to a pod that basically goes in and does an rm -rf, and then put it back into the pool of available storage.

So just be aware of that. It's technically still there, but it's been deprecated for a long time now, and I don't know when precisely it'll go away, but if you're relying on that, it might be a good idea to consider alternatives, and I believe, if you look at the Kubernetes documentation, the alternative is dynamic provisioning.
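To make the reclaim policy concrete, here is a minimal sketch of where it is set: on a statically created PV it is the spec.persistentVolumeReclaimPolicy field shown earlier, and for dynamically provisioned volumes it is inherited from the reclaimPolicy on the storage class (the names below are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold-retain                     # placeholder name
provisioner: example.csi.vendor.com     # placeholder, not a real driver name
reclaimPolicy: Retain                   # PVs created from this class are kept when their claim is released
# On a statically created PV the equivalent field is spec.persistentVolumeReclaimPolicy.
```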
B
Okay, so persistent volumes, whether they're statically or dynamically created, are all going to look like this; they're all going to have these same sorts of properties associated with them. So how do I consume a persistent volume? My administrator has defined a bunch of storage that's available to me, how do I get access to that? That comes through a persistent volume claim, so I'm going to scroll down here to, there we go, persistent volume claim.

The persistent volume claim is what the application team, the user, the developer creates to request some storage. They're basically saying: hey, I need, in this instance, some ReadWriteOnce storage that is eight gigabytes in size. Kubernetes, OpenShift, will look at the pool of available persistent volumes and say: hey, these match, or they closely match. You know, hey, I've got ReadWriteOnce, but I don't have an 8 gigabyte volume, I've got a 10 gigabyte volume.
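Here is a minimal sketch of what that request looks like as a PersistentVolumeClaim (the name and storage class are placeholders); the claim is then bound to a PV that satisfies the access mode and is at least as large as the request:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                  # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi                # an 8Gi request can be satisfied by a 10Gi PV
  storageClassName: gold          # optional; omit it to use the cluster default class
```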
B
And then it's going to associate your PVC with that PV. So, storage classes are arbitrary; they don't have to have a provisioner associated with them. I can manually create a storage class with no dynamic provisioner, and I can manually, or statically, create PVs associated with that storage class.

If I so choose, there's nothing wrong with that; it works exactly as you would expect, and it's kind of how it is intended to function, categorizing storage. Maybe I have, as it says here, gold storage: gold, silver, bronze, cardboard, plastic, whatever I happen to be using. I can absolutely do that statically. Dynamic provisioning, on the other hand, relies on, as I just mentioned, the storage class to associate with the provisioner, and there are two types of provisioners that we expect to see.
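As a sketch of the "storage class with no dynamic provisioner" idea mentioned above, the special value kubernetes.io/no-provisioner tells the cluster that PVs for this class are created by hand; the tier name is just an example:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gold                                  # arbitrary tier name, as described above
provisioner: kubernetes.io/no-provisioner     # no dynamic provisioning; PVs are created statically
volumeBindingMode: WaitForFirstConsumer       # optional; common for static and local volumes
```

Statically created PVs that set spec.storageClassName: gold are then matched to claims requesting the gold class.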
B
So first, let me review my notes for today to make sure that I'm not skipping over anything important here. Good point. So, storage classes, and we'll look at a storage class in just a moment, I've got one running in my cluster here, but I want to talk about the static and, excuse me, the dynamic provisioning.

So when we talk about dynamic provisioning, there are two methods, two ways, that storage can be dynamically provisioned, and the difference is the driver. First, there are what we call in-tree drivers. These are storage provisioners, storage drivers, that are a part of Kubernetes itself. So if you have Kubernetes, you have these in-tree provisioners, these in-tree drivers.
B
These
are
they
have
a
limited
shelf
life.
At
this
point,
so
entry
drivers
are
being
pushed
out.
I've
heard
the
1.21
1.22
time
frame,
but
that
was
a
year
plus
ago,
so
it
could
have
shifted.
I
don't
I
don't
know
precisely
at
the
moment.
B
So
I
see
again
I'm
I'm
keeping
up
trying
I'm
not
really
keeping
up
with
the
chat.
So
I'm
just.
B
Yep,
so
yes,
so
yes,
entry,
provisioners
are
moving
out
right.
They
will
be
ejected
from
kubernetes
at
some
point
in
the
future.
That
being
said,
they
are
still
perfectly
viable.
B
Our list of persistent storage provisioners... I can never remember where we actually have the list of these. Here we go: supported access modes. So we have this volume plug-in column here, and this is on that same documentation page, I didn't change the page, I just jumped to a different section. We have this whole list of plug-ins here, along with the different access modes that each of them supports.

These are all in-tree, they're all shipped with Kubernetes, with OpenShift, and they're all supported by Red Hat. So, for example, VMware vSphere down here: if I deploy an IPI or UPI vSphere cluster, it is going to deploy and configure the in-tree vSphere volume provisioner for me, and if something goes wrong with that, if something breaks, if it's not working, Red Hat is the one who supports that, because it's an in-tree provisioner.
B
So the second type of provisioner that we have is CSI. CSI is short for Container Storage Interface. I tend to, and it's probably a gross oversimplification, but I tend to make the analogy that CSI is to Kubernetes what Cinder is to OpenStack.

So if we think about that, what am I doing? I am relying on an abstraction to an underlying, or backend, storage device. So Kubernetes says: hey, create me a new storage volume. It talks to a CSI API endpoint. The storage vendor provides a CSI driver that says: okay, you said "create new", that means I need to do these actions on the back end, I need to talk to my storage device and say, you know, "new volume" instead of "create new", or whatever that happens to be.
B
The in-tree drivers are always there; depending on which platform you deploy to, one or more of them may be configured out of the box. So again, if you deploy to vSphere, we're going to configure the vSphere driver; if you deploy to AWS, we're going to configure the EBS driver, and so on and so forth. CSI is different. So I'm going to jump back up to the top here, and if we look at CSI, we have this "Using Container Storage Interface" section.
A
There is a question I need to ask you when you get to this.

B
Okay, yeah, please do.

A
Does OpenShift come with its own storage classes? I don't know.

B
So, yes and no; that depends on how you define it. OpenShift itself is not a storage platform, so it doesn't have something unique or special to it, except when it does. So what do I mean by that?
B
So,
as
I
said
when
I
deploy
to
a
specific
platform
that
has
a
storage
driver
right
or
has
an
entry
driver,
it's
going
to
configure
that
for
me.
So
if
I
deploy
to
vsphere
right,
I
do
a
ipi
upi
deployment
to
vsphere
when
the
cluster
is
done.
Deploying
if
I
do
an
oc,
get
sc
oc,
git
storage
class
or
a
cube,
cuddle,
get
storage
class,
I'm
going
to
see
a
storage
class
named
thin,
so
it
is
not.
B
It
is
not
an
open
shift,
storage
class.
It's
a
vsphere
storage
class,
that's
created
by
openshift.
If
that
makes
sense.
Yes,
so
these
are
and
just
I
haven't,
I
was
going
to
get
there
in
a
few
minutes,
but
I'll
talk.
B
So those default storage classes, with AWS, with Azure, with Google, with vSphere, and so on and so forth, all of the platforms that have one, those default driver configurations, those default storage classes, are controlled by the cluster storage operator. So let's switch over. I can make this a little bit bigger too. Thank you.

Yeah, so I nearly redeployed it after the break, because I reconfigured my lab. Okay, so inside of this cluster I have a storage class. I have my nfs-csi storage class here, which is provided by democratic-csi; it's an open source project for connecting to TrueNAS or FreeNAS. And now I completely forgot where I was going with this.
B
So if I do an oc get clusteroperator, oc get co, we have this storage operator way down here at the bottom. That OpenShift cluster storage operator is responsible for really one thing: making sure that those default storage classes always exist.

So let's say I had deployed this cluster to vSphere and there's that default thin storage class. I can modify that storage class, I can change the properties associated with it; maybe I change the reclaim policy from Delete to Retain or something like that, perfectly fine. I can also change it to not being the default anymore.
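For reference, whether a storage class is treated as the default is controlled by an annotation; a minimal sketch of flipping the vSphere "thin" class mentioned above to non-default (or back to default) looks like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"   # set to "true" to make it the default again
provisioner: kubernetes.io/vsphere-volume
```

The same change can also be made imperatively with oc annotate or oc patch.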
B
So let's explain this command a bit. oc adm, of course, is an administrator command, so I'm going into the quote-unquote privileged area of OpenShift. I want to get the information for the particular release that I'm on, and then the --commits flag, very importantly, shows me which GitHub repo and which commit, which specific commit, is associated with every one of the operators. Nice. The reason I want to do this is because I want to find which GitHub repo is associated with the cluster storage operator.
B
So it's a very simple way of saying: hey, I need to find out more information about this, I need to figure out how this works, or maybe there's some more troubleshooting information or something like that. The developers almost always have great documentation in their GitHub repos, so this is an easy way to figure out which GitHub repo is the appropriate one for the operator I'm interested in. I'll just paste this in here and then make it much larger.
B
All right, so I'm in the openshift/cluster-storage-operator GitHub repo, and we can see here it's an operator that sets OpenShift cluster-wide storage defaults, so pretty straightforward. This one is very, very simple in the grand scheme of things: it literally just makes sure that those default storage classes are there as defined. I don't remember off the top of my head where they're contained inside of here, but it defines what each one of those should look like and basically ensures that it's always there.
B
If you want to prevent access to them, you need to use quotas and limit ranges. That also jumps ahead just slightly, which is: how do I control storage consumption, how do I control resource consumption for all of my storage resources? It's the same as you do for CPU and memory: create a quota that says what you have access to, and these apply on multiple levels.

So I can say: Chris, you get access to one terabyte of storage total, and then I can say you get access to 500 gigabytes of gold and 500 gigabytes of silver and 500 gigabytes of bronze. If you think about that, three classes at 500 gigabytes each is 1.5 terabytes total, but you can't provision all of it; it can be any mix of those classes up to a maximum of one terabyte.
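A minimal sketch of that kind of quota follows; the per-class keys use the <storage-class-name>.storageclass.storage.k8s.io/ prefix, and the gold, silver, and bronze names are just the examples from above:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: chris-project                  # placeholder namespace
spec:
  hard:
    requests.storage: 1Ti                   # total requested storage across all PVCs in the namespace
    persistentvolumeclaims: "20"            # optional cap on the number of claims
    gold.storageclass.storage.k8s.io/requests.storage: 500Gi
    silver.storageclass.storage.k8s.io/requests.storage: 500Gi
    bronze.storageclass.storage.k8s.io/requests.storage: 500Gi
```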
B
If you wanted to prevent access to a particular storage class, i.e. that vSphere thin class or whatever the other ones are, you would just set a quota of zero for it. And you can define those quotas, you can create those objects, as a part of the new project template as well, so you don't have to think about it: whenever you create a new project, it automatically applies those types of rules.

We also have, and I always like to highlight some of the work that our GPTE folks have done, I think you've used the shared clusters from GPTE before. Oh yeah. They created an operator that ensures that those quotas are always in place. Wow. So even if somebody with privileges goes in and says "I'm going to remove this quota," it'll automatically get put back. That's awesome.
B
So yeah, there are a number of different ways you can solve those problems, which are always fun. Limit ranges, if you're not familiar with those, effectively define minimum and maximum values, and they're particularly helpful in some cases. One example: my storage system has a limit of maybe 500 volumes, I can't provision any more than 500 volumes, and I don't want you, Chris, creating 500 one-megabyte volumes, because that's not helpful to anybody.

So I'm going to say you have to create a minimum of maybe one gigabyte, or five gigabytes, but I don't want any single volume, any one volume, to be bigger than 50 gigabytes, whatever that happens to look like. So limit ranges, or limit ranges and quotas combined, are good for controlling that resource consumption, across both the total resource as well as individual storage classes in the case of storage.
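As a sketch, a LimitRange enforcing the per-claim minimum and maximum just described might look like this (the one gigabyte and 50 gigabyte values are the examples from above):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: pvc-size-limits
  namespace: chris-project          # placeholder namespace
spec:
  limits:
  - type: PersistentVolumeClaim
    min:
      storage: 1Gi                  # smallest claim a user may request
    max:
      storage: 50Gi                 # largest claim a user may request
```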
B
Okay, so let's jump back over here to our CSI. So CSI, as I said, is kind of like Cinder: we're relying on the vendor to provide a driver that will abstract and then provision storage for us. I may have shown this before, I've used this graphic a couple of times, on how these things work.

So what is that functionality? That functionality is things like the logic for passing around the events about a storage object being created. So rather than, you know, hey, I'm storage vendor X, I don't want to have to create the logic that says "watch for this event in the Kubernetes API, and when that event happens, reach out to this thing," there's a sidecar that automatically does that for them. So you'll see sidecars; that's where some of those sidecars come in.
B
The sidecar is going to take notice of that, and it's going to tell the relevant CSI driver: hey, you need to do something. That's the provisioner. The provisioner is then going to go and do its job: hey, storage system, make me some storage. Once that storage system has created the volume, so an NFS export, an iSCSI LUN, whatever it happens to be, it is going to then define a persistent volume, a PV, attached to that.

So this is another area where CSI is different from in-tree. In-tree relies on the kubelet to do that mounting. So, let's say I'm using NFS: when I mount an NFS export on my host, the kubelet is the one who goes and does that mount operation and then hands it off. With CSI, your storage vendor has a driver on the nodes that implements that logic, and that can be, you know, as straightforward as using the upstream sidecar.
B
That
says,
hey
just
mount
this
nfs
volume
or
it
could
be
doing
a
bunch
of
other
things.
Maybe
there's
a
I'm
going
to
pick
on
like
rbd
right,
so
I
think
the
rbd
mounter
has
some
additional
logic
inside
of
it,
because
rbd
isn't
a
standard
right,
it's
not
iscsi,
it's
not
nfs,
so
it
takes
a
little
bit
more
three
to
four.
Are
you
servers?
What
are
you
doing?
Christian,
no.
A
A
B
Okay,
so
that
is
in
a
nutshell,
how
all
of
that
happens.
So
if
we,
if
we
take
a
step
down
on
that
whole
mounting
process,
essentially
so
I
have
a
pod
definition,
it
has
a
persistent
volume
claim
associated
with
it.
Kubernetes
schedules
that
pod
on
the
node
that
it
lands
on
essentially
kubelet,
now
requests
from
the
csi
driver.
It
says:
hey
mount
this
volume
at
this
location
that
csi
driver,
which
is
again
provided
by
your
storage
vendor.
B
That CSI driver, which is again provided by your storage vendor, does that task, whatever it takes; we don't know the details. At the end of it, it says: okay, boss, the volume is mounted where you wanted it mounted. And then the kubelet takes back over and, you know, instantiates the namespaces, attaches the storage where relevant, all of those other fancy things inside of there.

So that's CSI, kind of similar to Cinder, if you're familiar with Cinder and how it works with KVM and virtual machines and mounting storage and all that other stuff. So which CSI drivers are supported by OpenShift? And I know we talked about this before, because it's a little bit of a frustrating answer, and I sympathize with that: much like CNI drivers, we don't provide a list.
B
Now you're probably thinking: you know, Andrew, then why do I see CSI drivers in the documentation over here? That's because these two are Red Hat created drivers: the Manila CSI driver and the RHV, or oVirt, CSI driver are both created by Red Hat, so we define them and we document them inside of here. But, for example, off the top of my head: Hitachi, Fujitsu, there's a whole bunch of them out there, Dell EMC, NetApp. Some of them are certified, so we do have a CSI certification process.

If your storage vendor is not certified with OpenShift, and that is something that is interesting to you, please talk to your storage vendor and request that they do that; we are happy to facilitate that and make sure that it happens. Essentially it's the same as an operator certification, with one extra suite of tests to run to make sure that everything works as you would expect it to. Yeah.
B
So CSI drivers function very much like Cinder: request storage, storage gets provisioned, it's now defined how to connect to it, and when I instantiate my pod, Kubernetes does the right thing; it just gets handled, much like Cinder. We also have some additional features that can be associated with a CSI driver, CSI capabilities beyond what's available in the in-tree drivers.

Very cool, and these are things that have been in development for a long time; I think we first started talking about them in the Kubernetes 1.9, 1.10 days, and it's been slowly evolving over time.
B
So, a couple of things to note. CSI volume snapshots are a tech preview feature in OpenShift today. A volume snapshot is exactly what the name implies, but it's implemented maybe slightly differently than you would think, because of the way Kubernetes operates. Normally, let's say with a virtual machine, you would select your VM or select your disk and, you know, do a right-click in the GUI and say "create snapshot." Kubernetes functions off of custom resource definitions.

So when I want to create a snapshot of my volume, of my persistent volume claim, I create a snapshot object, a custom resource, against that persistent volume claim, and, just like with a CSI persistent volume, the CSI provisioner gets notified: hey, you need to do something. It then takes that action, and then it creates the objects after the fact.
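A minimal sketch of that snapshot request, assuming the snapshot CRDs are installed and a snapshot class exists for your driver (names are placeholders; in the OpenShift 4.6 era the API version was typically snapshot.storage.k8s.io/v1beta1):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: app-data-snap-1                         # placeholder name
spec:
  volumeSnapshotClassName: example-snapclass    # placeholder; must match a class for your CSI driver
  source:
    persistentVolumeClaimName: app-data         # the PVC being snapshotted
```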
B
So, in other words, if I create a snapshot object, it could be half a second, it could be 10 seconds, it could be three minutes before the snapshot is actually created on my storage device, because it all depends on how long it takes for the API notification to happen, how long it takes for the CSI driver to take action, how long it takes for the storage to take action, and so on and so forth.

This is why live snapshots, snapshots of running virtual machines, aren't supported with OpenShift Virtualization. So if that's okay for your use case, then great: you can create snapshots, you can use those, and you can do a couple of different interesting things with them, not the least of which is creating new volumes based off of those snapshots, assuming your CSI provisioner supports it.
B
Similarly, you can create clones. Note that volume cloning is supported by OpenShift; just be aware of the support limitations section here. Generally speaking, it's pretty straightforward: things like the clone must be in the same namespace as the source PVC, and the source and destination storage class must be the same, are usually the two big ones.

Everything is done in software, so: hey, I want to clone this volume, but I want it to be in the bronze QoS class, because it's for test and I don't need, you know, ultra mega platinum performance out of it. That's not possible. You could modify it on the back end, but through an official Kubernetes API, OpenShift supported method, that's not possible. So just be aware of the limitations that are listed here, but otherwise, yeah, you can absolutely clone those volumes.
B
One thing that is interesting is the data volume paradigm. You can use it outside of OpenShift Virtualization, but it is deployed and accessible by default if you deploy OpenShift Virtualization, which takes advantage of it. So you can use a data volume to, say, import some data and then create a clone of that data volume, and it will take advantage of the offloaded cloning functionality, if your CSI provisioner supports it. So, okay: CSI volume provisioning, fully supported.
B
So I'm going to talk about a couple of these. Topology is, effectively, me defining what my often physical, sometimes logical, infrastructure looks like, and where and when things can connect. So what's the use case here? Let's say that I have two data centers in the same building, or two server closets in the same building; one of them has servers and storage, or they both have separate servers and storage.

When I request my storage, so I just create a PVC of, you know, storage class X, I want to make sure that my pods are not scheduled to location A while my storage is in location B, so traffic isn't having to traverse whatever infrastructure sits between them; I always want a pod to be co-located with the storage associated with it. That's topology, so I can define that logic behind there. A lot of times it's used with vSphere.
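A hedged sketch of how that intent is usually expressed on a storage class: volumeBindingMode: WaitForFirstConsumer delays provisioning until the pod has been scheduled, and allowedTopologies restricts where volumes may be placed (the driver name and zone values are placeholders, and the exact topology keys depend on your CSI driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware                      # placeholder name
provisioner: example.csi.vendor.com         # placeholder driver
volumeBindingMode: WaitForFirstConsumer     # provision only after the pod lands on a node
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - datacenter-a                          # placeholder zone names
    - datacenter-b
```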
B
You know, the vSphere folks implement topology so that you can ensure that my nodes in cluster A, which is using the vSAN associated with cluster A, only get scheduled to cluster A, while cluster B, a different vSAN deployment, only gets scheduled to cluster B. From a Kubernetes perspective that works fine, under whatever the defined support policy is; I think it's alpha... no, I think it's beta at this point. Oh, good.

However, from an OpenShift perspective, I don't know whether or not it's supported; we don't clearly define it as supported, which to me says either it's not supported or we're not sure, hence the gray area. So topology can be helpful; just be aware of any potential limitations associated with it, and again, check with us. That means, if you're a customer, use your Red Hat account team to reach up through and validate support for whatever you're trying to do inside of there.
B
So when we define a persistent volume, and a persistent volume claim for that matter, there are essentially two volume modes. The first is Filesystem, which is what we traditionally expect: when I get access to that volume, if it's something like NFS, I can just read and write files, and if it's a block volume, so iSCSI, Fibre Channel, whatever that happens to be, it's got some sort of file system put on it already, XFS, ext4, etc.
B
However, some vendors will implement the raw Block mode. What that does is provide direct raw block access to that storage volume; in other words, the CSI driver isn't going to format that volume for me. When is this useful? I don't know whether or not anybody actually does this, but as an example, I'm going to use Oracle databases. Oracle databases have a storage mode where they directly access the disks underneath; there's no abstraction, there's no file system in there. Maybe there is, I don't know, but the point is the Oracle database can directly issue SCSI commands against that particular block device.

Another one is, again, OpenShift Virtualization: with OpenShift Virtualization I can have that raw block persistent volume be connected to my virtual machine instance. So this would be the same thing as, or similar in concept to, a raw device mapping or a LUN passthrough type of scenario.
B
So it can access that without me having to abstract it through a file system. That is supported, it works with OpenShift, dependent on the storage driver. Some things, like, I don't know how OCS does it; I know OCS supports RWX, ReadWriteMany, block volumes, I just don't know precisely how that mechanism works. I do know that there are some storage vendors that have ReadWriteMany block volumes using iSCSI and raw block, you know, direct access like that, so there are others out there that can do that. This is one of the ways that they work around that, but there are some use cases for block volumes.
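A minimal sketch of requesting and consuming a raw block volume, assuming your CSI driver supports it (names and the device path are placeholders); note the pod uses volumeDevices rather than volumeMounts:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc              # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block                # hand the device to the pod unformatted
  resources:
    requests:
      storage: 100Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-consumer
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeDevices:                 # volumeDevices instead of volumeMounts for Block mode
    - name: data
      devicePath: /dev/xvda        # placeholder device path inside the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-block-pvc
```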
B
Volume expansion is another one that is supported by OpenShift. So if we switch back over here, expanding persistent volumes: just be aware of what it needs. You can see it is technology preview for CSI and for flex volumes, and remember, flex volumes are deprecated, as well as for some other types; basically it relies on the underlying driver in order to do that.

I don't know the details on all of them. I want to say file-based storage, like NFS, was the first to be supported, because basically I can resize that PVC and the underlying PV without having to actually do anything else. Some vendors are adding support for block devices, so I can take that iSCSI LUN, I can grow it from 10 gigabytes to 50 gigabytes, and then the file system gets resized underneath. But check with your CSI driver, your storage vendor, for specifically what's supported and under what conditions.
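A sketch of what expansion looks like in practice, assuming the storage class and driver allow it: the class must set allowVolumeExpansion, and then you simply raise the request on the existing PVC (values here are the 10 to 50 gigabyte example from above):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable                      # placeholder name
provisioner: example.csi.vendor.com     # placeholder driver
allowVolumeExpansion: true              # required for PVCs of this class to be resized
---
# Then edit the existing PVC and increase spec.resources.requests.storage:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: expandable
  resources:
    requests:
      storage: 50Gi                     # was 10Gi; the driver grows the volume, shrinking is not supported
```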
B
So we have, down here, data sources. We see we have our cloning and our volume snapshot. This comes into play when I define my persistent volume claim and I want to create a clone, and I think there's an example inside of here... maybe not, but there is one in our documentation. Sorry for jumping around.
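For reference, here is a minimal sketch of both data source variants on a PVC; the first clones an existing claim, the second restores from a snapshot like the one created earlier (all names are placeholders, and the source must live in the same namespace and storage class):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-clone
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gold            # must match the source PVC's class
  resources:
    requests:
      storage: 8Gi
  dataSource:
    kind: PersistentVolumeClaim     # clone an existing PVC
    name: app-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-restore
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: gold
  resources:
    requests:
      storage: 8Gi
  dataSource:
    kind: VolumeSnapshot            # restore from a CSI snapshot
    name: app-data-snap-1
    apiGroup: snapshot.storage.k8s.io
```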
B
So when I create my clone, I'm telling it: here's my data source, my data source is this existing persistent volume claim with this name, please create me a clone of that volume. That's where that data source comes into play. And then the other things that I wanted to talk about in here are capacity tracking and volume health monitoring. They're both kind of early; you see alpha here, and this one is very early as well.

I know there's a lot of excitement and a lot of interest around these: being able to look into the volumes and check how much utilization there is, in a more thorough and robust manner. The volume health monitoring one I always liked, because it's one thing for the volume to be mounted; it's another thing for it to be, I don't know, writable.
B
So the other thing I wanted to talk about was, oh, snapshots, and by snapshots I mean I want to show an example of this.

If we look in my CSI storage class definition here, and I'm going to collapse the metadata because we don't care about that, there are a couple of interesting things inside of here. First, this is NFS, and, regardless of whether or not it's NFS, I can declare mount options for all volumes that are created from the storage class. You can see here that any time I mount a volume from this storage class, it's going to have the noatime and nfsvers=3 options provided to it.
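The storage class being shown probably looks roughly like this minimal sketch (the provisioner string and parameters are specific to democratic-csi and are assumptions here, not taken from the episode); mountOptions apply to every PV the class creates:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi                         # the class name used in the demo
provisioner: org.democratic-csi.nfs     # assumption; check your driver's actual provisioner string
mountOptions:
- noatime
- nfsvers=3
parameters: {}                          # driver-specific connection settings go here
volumeBindingMode: Immediate
reclaimPolicy: Delete
```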
B
So maybe I want to tune the rsize or the wsize option, and so on and so forth. The parameters here are going to be specific to the driver; this is basically all the stuff that this CSI driver needs to be able to connect to the storage system and do this and do that, and maybe other things that I need. Each one of these is going to be different. Then there's the volume binding mode.

A
WaitForFirstConsumer always throws me for a loop sometimes. It's like, is the pod in the right place? It's running, but...
B
These two sub-folders, if you want to call them that, I think they're called zvols or whatever, just underneath this vols folder, are where my persistent volumes are going to be created.

So let's create a persistent volume claim. We'll use our nfs-csi class, we'll call this test00, super creatively, I'm going to give it a 10 gigabyte size, and hit create. Behind the scenes, all of those CSI machinations are happening, and you can see it only took a second or two, and now I have a bound persistent volume.

So here I have my one PVC. If I were to look at my PVC, again we'll collapse the metadata because we don't care about it, we see we have our capacity definition, we have a bunch of stuff that was created by our CSI driver, this is again going to be specific to the CSI driver, and then I have things like: what are my access modes, what are my mount options?
B
What's my volume mode? All of those things that you would expect to see associated with it. Switching back to our storage system here, if I expand this out, you'll notice that I now have a child volume. So effectively the CSI driver, in this case democratic-csi, reached out and communicated with my storage system to create this volume for me. I didn't have to do anything, I don't have to know how it created that, I don't have to understand it.

All I need is enough permissions from my storage administrator to be able to do what I need to do. Nice. So this is 4.6, the most recent version of 4.6, and you'll notice that we have these volume snapshot items in the menu, these three items, which are relatively new. So I can create a snapshot of that, so long as my CSI provisioner supports it; you'll notice it has this big red tech preview label.
B
So let's do that: select my existing persistent volume claim. First, it's important to understand a couple of things, and I know we've only got about a minute left. There's the persistent volume claim they're going to use, the name that I want to give the snapshot, and then the snapshot class, which represents a different definition.
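That snapshot class object is roughly this minimal sketch; it plays the same role for snapshots that a storage class plays for volumes, tying the request to a specific CSI driver (names are placeholders, API version as in the 4.6 era):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass            # placeholder name
driver: org.democratic-csi.nfs       # assumption; must match the CSI driver of the source PVC
deletionPolicy: Delete               # or Retain, to keep the backing snapshot when the object is deleted
```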
B
So, oc get volumesnapshotclass... no, I think maybe it's dash class; I'll have to find that and dig it up. Anyway, effectively this is telling it which, excuse me, which snapshot class and which driver to use when we create this. So I will click create here, and you can see it very quickly came back with this volume snapshot content and volume snapshot.

The class is already defined, and if I go to snapshots, we see we have this test snapshot, and we have this snapshot contents object, which is associated with what's actually on the storage. I can look at that object and collapse the metadata, and we can see all of the relevant information (metadata is different) about our particular volume and our particular snapshot.
B
Thank you, everyone. So what does snapshot mean? A snapshot is a point-in-time copy. It was drilled into me for, I don't know, the better part of a decade of being a storage administrator: snapshots are not backups.

Effectively, it is going to be dependent on the same set of blocks, the same set of data, so it is not a copy of that original data. Just be aware of that: snapshots are not backups; backups are a separate copy of the data. I'll pick on NetApp, because that's what I'm most familiar with: if the aggregate, if the underlying RAID groups supporting my data, fail, then no matter how many snapshots I have, the data is still gone.
B
So thank you, everyone, really appreciate you joining us today. I hope that this has been informative for early January. We will be back next week, same bat time, same channel. You're also welcome to reach out via email, very simply first name dot last name: andrew.sullivan at redhat.com. So again, please don't hesitate to reach out if there's anything that we can help with, and thank you very much.