From YouTube: OpenShift Administrator’s Office Hour (Ep 10)
Description
Join Andrew Sullivan, Chris Short, and the occasional special guest for an hour designed specifically to help the OpenShift admins out there. Come with your questions, leave with solutions.
A
Good morning, good afternoon, good evening, wherever you're hailing from, welcome to another episode of the OpenShift Administrator's Office Hours here on OpenShift.tv. I am Chris Short, Principal Technical Marketing Manager here at Red Hat, and also the executive producer of this thing we call OpenShift.tv. I'm joined by the one and only Andrew Sullivan, and Andrew is going to take us on a journey through storage today, the basics of storage when it comes to cloud native things.
B
Yeah, so as Chris mentioned, this is the OpenShift Administrator's Office Hour, which really means that we're here in the capacity of "ask us anything." Well, I do have a topic for today; I try to have one for every week and not just completely leave it to random chance.
B
Our goal is to help you with any questions, any problems, any issues you have, and to get those questions answered. Sometimes that means we're able to answer them directly off the top of our heads. Sometimes it means we have to take a note and go back to engineering or product management to track down those answers, and we are more than happy to do that. Yes, we will definitely do that.
B
Lots of other hats and roles are involved inside of that. Storage is something that I like and enjoy, and it's not a whole new world, because Kubernetes is what, five and a half years old now, but it is different, and sometimes it's different in unexpected ways. So it's good to have an understanding, or at least to begin the exploration.
B
I
think
I
told
you
on
one
of
these
shows
before
that
open
shift
and
really
not
just
openshift
kubernetes
as
a
whole
is
it's
kind
of
like
a
fractal
right.
Every
time
you
you.
B
It's fun; I enjoy it, as you can tell by the fact that it is quite literally my day job.
B
So, as always, please feel free to ask us questions through the various chats. I believe they're all rebroadcast to each other, so if you ask on whatever platform you are on, we should be able to see it on whichever platform we are on. Feel free to ask those at any point in time, and we will get to them just as soon as we can.
A
Just want to remind everybody that we have a Discord channel if you want to join us there after the show; you can ask questions there as well.
B
Indeed. All right, so storage, what's the...
B
All right, so I am going to share my screen here, not because I have anything to show immediately, but because I will have things to show. Actually, I take that back, I do have something to show immediately. So first, when we think about storage in Kubernetes and storage in OpenShift, there are two broad categories. There is the storage that the hosts need to do the things that they need to do, and that's not just installing CoreOS, or whatever operating system you're using if it's vanilla Kubernetes.
B
The other set of storage is the application persistent storage, and that can come from any number of different places. There are, shall we say, varying degrees of persistence associated with that, which can be a little confusing, and I'll talk a little bit more about that in just a moment.
B
So let's start with the host storage, the storage that is used by the virtual or physical servers inside of your cluster. And importantly, and Dean, I'm going to finish this thought, and then while Chris reads your question we'll address it.
B
So importantly, Kubernetes itself monitors the /var/lib partition or directory. So what is it monitoring for? First, and importantly, when we pull down an image it gets stored underneath /var/lib, with OpenShift, under /var/lib/containers. When I instantiate a pod, when that container gets instantiated, it creates a copy-on-write layer on top of that image, and that copy-on-write layer also gets stored underneath /var/lib/containers.
B
If
I
create
an
empty
dir,
you
know,
storage,
not
object,
but
declare
an
empty
der
mount
point
for
my
pod
that
gets
created
underneath
var
lab
containers,
so
it
becomes
an
incredibly
critical
and
also
a
focal
point
for
all
of
that
data.
Likewise,
things
like
the
container
logs
all
get
shoved
underneath
our
live.
So
this
means
that
whatever
storage
is
backing
that
mount
backing
that
folder
needs
to
have
both
the
capacity
as
well
as
the
performance
to
meet
all
of
those
needs.
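For reference, here is a minimal sketch of a pod declaring an emptyDir volume; the pod name, image choice, and mount path are hypothetical, and the optional sizeLimit caps how much node-local storage the volume may consume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch       # the container sees a normal directory here
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi            # optional cap; the data actually lives on the node under /var/lib
```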
B
Exactly, and it can creep up on you very quickly and very unawares, if you will. You know, maybe my pods aren't doing a whole lot, everything's running along just great, and then somebody turns on debug logging for some application and suddenly it's dumping, you know, gigabytes...
B
A minute into that directory. Well, suddenly now it's going to cause a ripple, and it's going to cause potentially performance problems for other pods that are on that host, because they're consuming a lot of those IOPS, a lot of that storage throughput. If it's a virtual machine, it could ripple out to other virtual machines in the environment. You know, hey, maybe I'm running off of 4, 8, 16 gigabit Fibre Channel, and this one set of pods on this...
B
One
virtual
machine
is
suddenly
consuming.
You
know
massive
amounts
of
storage
throughputs.
You
know
all
the
way
back
to
the
storage
array.
Well,
that
can
have
a
lot
of
ripples
et
cetera,
so
we
do
need
to
be
aware
of
what's
happening
with
those
we.
We
want
to
leverage
all
of
those
monitoring
monitoring
tools.
I
can't
talk
today.
B
I
don't
my
my
tongue
just
gets
in
the
way
all
of
the
monitoring
tools
to
keep
an
eye
on
those
things.
Okay,
thought
finished.
So
chris,
do
you
have
a
summary.
A
Yeah, so our friend Dean Peterson here has a three-node cluster, sounds like a master/worker combo. One of those nodes has an NVIDIA GPU in it, an RTX 3090; I'm not sure if that matters. He's having some problems getting the Node Feature Discovery Operator to work, but installed the NVIDIA GPU Operator and the Node Feature Discovery Operator.
A
"I created an instance of the node feature discovery; all of the pods for each of the masters came up, but none of the nodes." So maybe it's a six-node cluster, I'm...
A
...a little confused by the question, Dean, it's a lot of words. So it's bare metal, all bare metal, using the assisted installer, which shouldn't matter after the fact. And so I asked if there were any taints on the nodes. None of those get labeled as having a GPU, right, because it can't find it. I'm wondering if it's because it's on the master node; is there something going on there? So if it's schedulable...
B
...pods should be able to deploy there. What comes to mind for me, and what I'm trying to look up on my other screen here, is the HCL, the hardware compatibility list, for the GPU operator, for the NVIDIA operator. I would think that it's supported; I mean, it generically says Pascal plus, so that should include the 3000 series, right? Because I think Pascal was the 900 series, 900 or 1000 series.
B
I
I
have
seen
some
folks,
you
know
not
basically
using
desktop
gpus
have
issues,
but
I
don't
think
that
that's
it
turned
out
to
be
something
else,
not
a
support
or
not
a
compatibility
type
of
issue.
It
was
other
things.
A
So
dean
reiterates
or
not,
reiterates,
also
states.
I
have
also
noticed
that
the
node
feature
discovery
is
seemingly
stuck
detecting
anything.
If
I
click
on
the
network
interfaces
or
disk
tabs
under
each
node,
I
just
get
a
bunch
of
loading
placeholders
indefinitely,
so
it
sounds
like
that
operator
is
broken.
B
Yeah,
it
sounds
like
something
could
be
wrong
there
off
the
top
of
my
head,
I'm
not
terribly
familiar
with
the
nfd.
A
...not running, or not schedulable, or whatever, right? Could this be a capacity issue? Maybe, I don't know. So if everything's good and it's still not working, try removing it, deleting the namespace the node feature discovery thing was in, if it created its own special namespace, and then reinstalling it, and go from there. That's the only thing I could think of.
A
...everything else is running fine? Okay, cool. Yeah, that's weird! Let me plug into my node real quick and see what's going on, if...
B
You
if
you
would
like
or
if,
if
you
can
for
those
pods
for
the
nfd
pods,
if
you
can
pull
the
logs
and
then
just
email
them
to
chris
and
I
yeah
we'll
use
them
to
get
started
and
andrew
sullivan
and
seashore
c-sharp.
B
It should, yeah, it should work with the master/worker combo. So if it's not, it's either a bug or there's potentially something else going on there. That...
B
All right, so where was I? So, ephemeral storage, so /var/lib. So /var/lib is important not just because it can affect performance; as you fill up gigabytes, as you fill up inodes, it can also affect scheduling.
B
So
if
that
partition
or
if
that
which
by
default
there's
only
one
disk
in
openshift
right.
So
if
that
disk
begins
to
get
full,
essentially
what
you
will
see
is
the
scheduler
begin
to
remove
pods
right
it
will
evict
pods
from
the
cluster
and
or
it
will
not
schedule
new
pods
to
the
to
that
node,
if
even
though
there
may
be
cpu
and
ram
resources
available,
and
I
think
by
default-
and
I
used
to
have
a
tab
open
and
I
s
silly
me.
B
I
closed
the
wrong
window,
and
now
I
haven't
brought
it
back
up
yet
so
I
believe
that
by
default
it
will
bring
it
back
underneath
the
threshold
of
80
utilized.
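Those kubelet eviction thresholds are tunable in OpenShift through a KubeletConfig. A hedged sketch follows; the object name, the custom-kubelet label (which you would first have to add to your worker machine config pool), and the threshold values are all illustrative, not defaults:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: worker-eviction-thresholds      # hypothetical name
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: workers           # assumes this label exists on your worker MCP
  kubeletConfig:
    evictionHard:
      memory.available: "500Mi"         # evict when free memory drops below this
      nodefs.available: "10%"           # evict when the node filesystem is ~90% full
      nodefs.inodesFree: "5%"           # inode exhaustion triggers eviction too
      imagefs.available: "15%"          # image/container storage threshold
```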
B
This is why things like deploying the registry using an emptyDir are great to get up and running, but if something ever happens, then all of the container images stored on that registry instance would be lost. So keeping an eye on that is very important. It can have ramifications both narrow and wide if we don't size, if we don't monitor, if we don't manage that local capacity for our nodes correctly.
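To make that registry example concrete: the image registry operator's storage is selected in its config resource. A minimal sketch, using the default cluster config object; the PVC name in the comment is hypothetical:

```yaml
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    emptyDir: {}                        # ephemeral: fine for a lab, images are lost if the pod moves
    # pvc:
    #   claim: image-registry-storage   # persistent alternative backed by a PVC
```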
B
So
what
are
some
of
the
things
available
to
do
that?
So,
first
and
foremost,
we'll
rely
on
the
good
old
monitoring
service,
and
if
we
switch
over
to
our
node,
where
is
it.
B
Maybe we can go here and get a better idea: node_filesystem_size_bytes. Wow, that was way too friggin' hard. So this is one way that we can look at that data, and we can look at it across all of the various nodes as well; this one, you can see, is just summing it up across all nodes.
B
So
if
I
go
back
to
my
prometheus
ui
here-
and
I
take
this
guy-
I
don't
know
why
this
wasn't
showing
up
when
I
was
trying
to
filter
by
got
me.
So
what
I'm
looking
for
here
is
all
of
the
various
characteristics
right.
So
one
of
these
is
which
particular
node
it
is
looking
at,
and
if
we
look
so
like
here,
I've
got
my
different
worker
nodes,
so
I
can
sum
all
of
those
guys
together
and
I
can
see
how
much
space
is
being
consumed
on
a
per
node
basis.
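If you'd rather have the monitoring stack watch this than eyeball queries, a hedged sketch of an alerting rule follows. The rule name, threshold, duration, and mountpoint label are assumptions; adjust the mountpoint to whatever filesystem actually backs /var/lib on your nodes:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: node-var-capacity           # hypothetical name
  namespace: openshift-monitoring
spec:
  groups:
  - name: node-storage
    rules:
    - alert: NodeVarFillingUp
      # fires when less than 15% of the filesystem has remained free for 10 minutes
      expr: |
        node_filesystem_avail_bytes{mountpoint="/"}
          / node_filesystem_size_bytes{mountpoint="/"} < 0.15
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "Filesystem on {{ $labels.instance }} is over 85% full"
```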
B
There we go. I swear I do know what I'm doing, occasionally. ("Do you? I believe you, though. It's dangerous.") So we see down here we have this endpoint: here's sda1, sda2, right? So we can look at all of these different endpoints, and we can see, where is it, /run/user... so here's /var. So we can look at, for example, this particular...
B
...result, and we can see precisely how much capacity is being used inside of there. We can track that over time, and we can determine what's happening. However, that doesn't necessarily show everything; this one is node_filesystem_size_bytes, and we want to look at other things like IOPS, et cetera, whatever's happening.
B
So that means, if I put a 100 IOPS limit on one of those disks, it's effectively going to apply to the entire VM, which means that it can also affect the other pods, the other disks that are attached there as well. It is also a sum of all of those. So, for example, if I were to split off /var onto a separate disk, which, by the way, is possible (here, I'll show, we go to the docs)...
B
So let's say that I have my root disk on datastore A, and I have created a second disk for /var on datastore B. The disk on datastore A I want to limit to maybe 100 IOPS, and the disk on datastore B I'm going to give a 1,000 IOPS limit. Well, the reality is, either one of those disks will be able to hit 1,100 IOPS, because the limit is applied to the VM as a sum of those. So it has some quirky behavior.
B
When
you
look
at
how
storage
io
control
works,
just
be
aware
of
that,
be
conscious
of
and
be
be
careful
ultimately.
B
...that would not apply, however, if you are using other provisioners. So, for example, maybe you're deploying your cluster to vSphere and you're using the Pure dynamic provisioner or the NetApp dynamic provisioner, or whatever; those would not have the same problem applied to them, because that is just network traffic as far as vSphere is concerned.
B
Conversely,
if
you
deploy
ocs
right,
ocs
would
be
affected
by
that
all
right,
so
my
ocs
dis
would
inherit
or
be
affected
by
all
of
those
same
iops
limits,
and
that
could
have
you
know,
of
course,
far-ranging
effects
to
my
cluster,
so
that
is
var.
That
is
the
os
disk.
So,
just
to
reiterate,
we
want
to
be
conscious
of.
B
Okay, so that was type one. So type two is application persistent storage. There you go. So application persistent storage comes in the form of either an... or let me start before that. So application persistent storage is requested by a persistent volume claim, a PVC, and that PVC is then satisfied by, it is met by, a persistent volume, a PV.
B
Okay, so maybe it's NFS, and the PV simply has the NFS server and the NFS mount point inside of there. Maybe it's iSCSI: it has, you know, the portal IP address, and it has the LUN identifier, right? Maybe it's Fibre Channel, same sort of thing. So the PV describes how to connect to that particular storage endpoint. The PVC is what is bound to, or rather the PV is bound to the PVC, and that's how the pod is associated with some storage.
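And the supply side, a hand-defined NFS PV, can look roughly like this; the server address and export path are hypothetical placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-0                  # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany                 # NFS typically supports shared access
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.0.2.10            # hypothetical NFS server
    path: /exports/app-data       # hypothetical export path
```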
B
So
how
do
we
get
that
mechanism
right?
How
do
we
get
persistent
storage,
so
pvs
can
be
created.
Basically,
two
ways
manually
or
automatically
so
manually
means
that
some
poor
sucker
has
to
go
in
and
literally
define
each
one
of
those
objects.
You
know
hey.
My
storage
team
gave
me
50
nfs
exports
and
100
luns
that
are
of
different
sizes,
and
so
you
go
in
and
you
define
each
one
of
those
yaml
objects
right
and
if
we
look
in
the
gui
here,
I
would
be
able
to
come
to
persistent
volumes.
B
All right, so I would be able to create this. I need to define things like the storage, the access mode, et cetera, and rinse and repeat for each one of my persistent volumes. So you can automate the manual process, if you will, automate the process of defining each one of these, but still, the PVs exist before the storage request happens. So I have a pool of predefined PVs, and somebody submits a...
A
Persistent volume claim. A request for storage.
B
Kubernetes
looks
at
the
pool
of
available
storage
and
says
here
I'm
going
to
match
these
two
together
an
important
thing
to
know
you
can
define
a
storage
class
manually.
Yes,
yeah
storage
classes,
don't
have
to
be
associated
with
dynamically
provisioned.
Persistent
volumes
right.
You
can
absolutely
have
a
difference,
have
a
storage
class
that
is
for
statically
provisioned
persistent
volumes,
yes,
okay!
B
So
for
these
types
of
persistent
volumes,
pretty
straightforward
right,
be
conscious
of
you
know
when
we're
talking
about
a
capacity
planning
thing
right,
be
conscious
of
the
size
of
not
just
the
size
of
the
application
storage
right,
hey
do.
My
applications
needs
five.
Gigs
do
they
need
50
gigs?
They
need
500
gigs,
but
also
be
conscious
of
how
much
like
anything
else,
how
much
throughput,
how
much
latency
etc
they're
going
to
be
expecting.
A
If you want to take that real quick... yeah. Arjun asks, where can you, in a real case, use OpenShift? Let's see: telcos, my house, anywhere, right? If you want to orchestrate containers and manage containers across infrastructure, OpenShift is a viable place and a viable tool to do that with, yeah.
B
I'll just take a different approach, which is, I guess I'll ask a clarifying question and then answer one part, and please clarify: are you asking where would you use OpenShift, meaning Kubernetes and containers, or where would you use OpenShift compared to Kubernetes, right?
B
So, you know, both Chris and I do officially, or technically, have marketing in our titles, even though we both do very little actual marketing.
B
You know, routers, load balancers, all this other stuff that's added in there to help make the Kubernetes experience more robust and more complete; at the core, though, it's still Kubernetes. If anybody remembers, we've talked about the installation process on the admin hour a couple of times, right?
B
The
open
shifting
kubernetes,
what's
the
difference
ebook.
So
if
we're
talking,
you
know
more
at
the
the
core
of
why
use
containers.
Why
use
kubernetes?
Why
use
open
shifts?
B
You
know
it
really
comes
down
to
the
application
and
a
lot
of
other
things.
I
cover
a
lot
of
virtualization
stuff.
So,
from
my
perspective,
you
know,
if
you
have,
you
know
robust
processes
if
your
application,
if
you're
developers,
if
everybody,
is
comfortable
and
happy
with
how
things
work
in
virtual
machines,
maybe
you
don't
need
to
go
with
containers.
On
the
other
hand,
containers
make
a
lot
of
things
dramatically
easier
right.
All
the
dependencies
are
contained
inside
of
there
it's
easier
to
do
things
like
deployment,
it's
easier
to
scale
faster
to
scale
matt.
A
We
are
working
on
the
armbit
of
it
matt.
Just
to
let
you
know
we
see
arm
just
like
the
rest
of
the
world
does
so,
but
you
got
to
remember
that
a
lot
of
open
shift
is
upstream
components
that
we
put
together
and
it's
not
always
do
all
those
upstream
components,
work
and
arm
so
yeah.
A
Yeah, well, I haven't run any M1 Macs lately, but I'm waiting for the next iteration at least, maybe in a couple of years potentially, just to make sure the ecosystem builds. But regardless, sorry to interrupt you there ("you're fine"), I referred to OpenShift as Kubernetes plus the things on the cloud native trail map that you need to run Kubernetes in production.
A
Right
like
that
cloud
cloud
native
trail
map
mentions
like
ci,
cd
and
monitoring,
and
you
know
observability
and
storage
and
all
those
you
know
fun
things
that
you
would
need
to
actually
deploy
applications
to
a
kubernetes
cluster
and
then
monitor
them
and
maintain
them
and
operate
them
in
an
effective
manner.
So
that's
how
I
look
at
openshift
and
I
think
it's
the
right
way
to
do
kubernetes
in
my
opinion,
and
that's
why
I
work
at
red
hat.
So
that's
you
know
my
feel.
Yeah.
B
Yeah, so I think "advanced version of Kubernetes" might be the wrong way to think about it. It's Kubernetes; we're not deploying a different Kubernetes. OpenShift has the same relationship to Kubernetes that Red Hat Enterprise Linux has to quote-unquote Linux: we use upstream components that we downstream into products, and we do a lot of testing and validation, et cetera, in order to create something that is a product and supportable by us. So it's not an advanced version.
B
It's
not
a
different
version
of
kubernetes,
it's
a
kubernetes
that
we
have
tested
and
validated
and
supports,
and
then
there's
a
whole
bevy
of
other
things
added.
On
top
of
that,
as
you
pointed
out,
to
make
it
easier
for
our
customers
to
utilize
and
utilize,
there
is
not
just
deploy
applications,
it's
also
administer.
It's
also
managed
right,
keeping
it
up
to
date,
keeping
it
secure,
keeping
it
stable,
etc.
B
So
still
kubernetes
just
a
lot
of
stuff
around
kubernetes
and
if,
if
you
haven't
already,
I
would
definitely
reach
out
to
red
hats
right,
because
the
account
teams
right
your
red
hat
account
team-
has
some
really
great
presentations.
That
can
talk
about
all
of
that
and
really
go
into
detail
there.
And
of
course
they
can
always
reach
back
into
people
like
us
to
help
with
those
conversations.
B
But
yeah
there's
a
lot
of
a
lot
that
goes
on
there
and
it's
a
huge
conversation
and
one
that
we
spend
or
could
spend
many
hours.
You
know
just
going
into
all
of
the
various
details
there,
so
I
was
trying
to
find.
I
don't
know
if
it's
public
like
surely
it's
it's
got
to
be
on
on
one
of
our
public
websites
somewhere,
but
we
have
a
slide
that
I
always
like
to
to
use
to
illustrate
that
and
I'll
just
quickly.
Oh
yeah.
A
Also, Madoon, not storage related, but wants to talk more about SDN plugins and OKD for OpenShift. You know, same kind of deal, if we have time. Just... I feel...
B
Yay. So I like to use this slide to illustrate, you know, all of the aspects, all of the components of OpenShift.
B
Exactly. So this is, you know, if you were to just go to kubernetes.io and click the button, and get kubeadm or kops or Kubespray, or pick any one of your various options, and you said, deploy me a Kubernetes, this is effectively what you get. And, from our perspective, the value prop of OpenShift and all that other stuff is all of the things that you see up on top of here. So on top of that standard Kubernetes we deploy things like...
B
Operators, right? We leverage operators for deploying and managing applications. Over-the-air updates, so OpenShift is able to manage down into the operating system. So, if you're familiar with VMware and vSphere, I use vCenter to manage and apply updates to my ESXi hosts. Well... can you... OpenShift... I don't know, presenter mode? I don't know if I can do this. Yeah, let's do that. A comment, I don't wanna...
B
You know, it's Google, yeah. I can't find the screen scroll, or zoom rather. Yeah, so, you know, anyways: we add on all of these things, and that's effectively the value prop of OpenShift. It is Kubernetes. You can use kube cuddle, koob control, kubectl...
B
I like to call out all three, and use all three pronunciations interchangeably, to interact with and deploy, or you can use the OpenShift oc command-line tool, which I like because it simplifies some of the common things, like switching context.
B
Yeah, and it's funny, because, you remember, I was talking with our team yesterday, the day before... so the week before Thanksgiving, or U.S. Thanksgiving.
B
So two weeks ago I did a demo that was, you know, vanilla Kubernetes, and it took me like two hours to get a Kubernetes cluster up and running, because it's been a while since I did it, and I'm so used to OpenShift: openshift-install create cluster, and then I just sit back and go get a cup of coffee, and about 30 minutes later I have a cluster and I didn't have to do anything, right? So...
A
Yeah,
that's
kind
of
what
I'm
used
to
right
like
push
button
get
cluster
right.
I
think
kelsey
hightower
says
like
that's.
His
biggest
thing
nowadays
is
right:
it's
not
the
kubernetes
setup,
it's
the
what
he's
doing
with
it
after
the
fact
yeah
that
that
is
very
much
where
we're
trying
to
push
things
right
like
we're,
trying
to
make
the
the
kubernetes
part
easier
for
you
to
you,
know,
manage
and
update
yeah
operate.
B
Okay,
so
sdn
plugins,
so
as
of
openshift
4.6,
there
are
two
generally
available
fully
supported,
sdns
from
red
hat,
so
the
first
one
is
openshift
sdn.
I
know
super
creative
name
literally.
B
Which
is
vxlan
based?
It
is
the
original
openshift
sdn,
og
or
sdn
in
openshift
right,
and
it's
been
there
for
for
a
long
time.
So
with
4.6
we
released
or
we
announced
general
availability,
but
it
is
not
yet
the
default
of
ovn
kubernetes
with
openshift,
so
ovn
kubernetes,
of
course,
ovn
geneve-based
tunnels,
etc.
So
it's
more
modern.
B
If
you,
you
know
talking
with
product
management,
you
know
discovering
you
know.
Why
are
we
making
this
change
right?
There's
kind
of
two
major
reasons
for
that,
so
one
openshift
sdn
is
effectively
a
red
hat
project
right,
there's,
not
a
lot
of
community
involvement.
It's
not
used
anywhere
else.
It's
not
used
by
anything
else.
B
So
you
know
by
joining
ovn
and
ovn
kubernetes.
We
now
have
a
much
bigger
community
and
much
more
widespread
use,
which
means
you
know
bugs
are
found
faster
or
fixed,
faster,
et
cetera.
The
other
one
being
ovn
is
works
across
multiple
operating
systems,
and
this
is
particularly
important
as
we
draw
closer
to
window
windows
containers.
B
So, you know, when we have clusters that are a mix of Linux and Windows nodes, we need an SDN that can span across all of those nodes, and OpenShift SDN is not capable of doing that; OVN-Kubernetes is. Now, those are the two from Red Hat. We have a huge partner ecosystem of additional options, so Tigera Calico, let's see, VMware NSX, yeah...
A
Like
let
me
just
pull
up
the
landscape,
real,
quick,
real,
quick,
my
ass,
the
there
we
go
cloud
native
storage,
there's,
so
many
of
them.
B
Yeah,
so
essentially,
we
would
expect
that
most
of
them
are
going
to
work.
Some
features
so
I'll
pick
on
openshift
virtualization.
Some
features
want
to
test
and
validate
specifically
for
their
capability.
So,
for
example,
the
first
version
of
openshift
virtualization
did
not
work
with
calico.
There
was
some
issue
with
something,
and
now
both
sides
have
fixed
it
and
it
should
work
now.
B
So with OpenShift 3, you could go and see, here are all of my SDN options, right? Here's this list, with links out to their websites that say here's how to install alongside OpenShift. With OpenShift 4 they decided not to do that, which means that, again, thank you, Matt, which means that we rely on our partners to document it, and we rely on customers to either ask or to know that, you know, hey, I need to go and look at Tigera's website to discover if it is compatible with OpenShift.
B
So
yeah,
but
for
better
for
worse,
that's.
That
was
the
decision,
and
I
know
that
the
the
fieldfolks
they
usually
have
a
whole
list
of
these,
and
I
think
that
they
often
include
them
in
their
presentations
to
help
our
customers.
A
How
about
that
to
clarify
you
installed
ocs
some
nodes
have
taints
for
ocs
to
be
installed
and
you
can't
use
the
normal
update
process
without
the
leading
pods.
So
that's
like
the
overarching
clarification
question.
There
frank:
are
they
just
any
posits?
You
have
to
delete
or
are
there
specific
pods
that
you're
deleting
and
like?
Are
they?
What
are
they
related
to.
B
I'm
I'm
wondering
if
those
pods
are
unable
to
be
rescheduled
either
they're
failing
to
drain
so
they're
failing
to
terminate
and
be
rescheduled
or
you
know,
which
could
be
a
resource
constraint,
issue
right.
It's
unable
to
that
would
be
my
question
or
maybe
there's.
B
When is the... so tomorrow at 9:00 a.m. Eastern is the OpenShift Container Storage office hour, correct?
A
Yes. So I will take this question and send it to Chris right now, Chris Blum, and we will discuss it for sure tomorrow. But Frank has clarified: the taints were for OCS. "I had to delete the pods from the daemon sets inside of openshift-storage," which I'm assuming is a namespace. That is weird.
B
Yeah, I would definitely... so we know who to get the answer from, and that's Chris, yeah.
A
Chris... not Chris, Jordan, no. There's too many Chrises, we've determined. So yeah, I'll drop your exact questions in my Slack chat with Mr. Blum here.
B
All right. So, speaking of, and circling back to, storage. And Frank, please continue to add details or ask clarifying questions if needed.
B
And I know if Chris gets a response from Chris, he'll... we'll bring it up...
B
...as well. So, circling back to persistent storage: I talked about manually creating PVs. So essentially that's what we have up on the screen here, right? Defining a PV, and having a pool of those available.
B
Now
some
dynamic
provisioners
take
advantage
of
this
so
effectively
what
they
do
is
they
have
a
storage
class
with
a
provisioner
assigned
to
it.
When
I
create
a
pvc
for
that
storage
class,
the
normal
mechanism
takes
place,
and
it
says:
hey
provisioner
provision,
some
storage
and
essentially
what
it
does
is
in
the
background
that
provisioner
will
talk
to
its
storage
device,
create
the
volume
and
then
it
will
create
a
a
pv.
That
looks
exactly
like
this
to
map
to
that
particular
pvc
request
right.
So
it
is
a
standard
pv.
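A hedged sketch of the storage class side of that flow, using the in-tree vSphere provisioner that comes up later in this episode as the example; the class name and parameter values are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: thin                                # hypothetical name
provisioner: kubernetes.io/vsphere-volume   # in-tree provisioner; CSI drivers use their own IDs
parameters:
  diskformat: thin                          # provisioner-specific parameter
reclaimPolicy: Delete                       # remove the backing volume when the PVC goes away
```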
B
So, inside of my lab here, I have a very simple... this is using the now-deprecated method of dynamically provisioning NFS storage. How this dynamic provisioner works is that it has a single NFS export, and it creates folders inside of that one export to satisfy the PVC requests.
B
...persistent volume claim. So there's a number of bad things about this one. Even though this says it's five gigabytes, and it'll create a PVC that says it's five gigabytes, it's actually just an NFS mount, a folder inside of an NFS export, which in my case is a one-terabyte share. That means that if, in the pod, I were to go in and do a df -h inside of that mount, it would show up as the full one terabyte.
B
So if I look at my, excuse me, persistent volume, so if I look at my persistent volume definition, we can see (come on, scroll down)... once we get down here, we can see the NFS server, and we can see the path. So here's the folder that is on my storage host, and then it created this subfolder inside of there as my PVC mount. And then we see this claimRef. Excuse me. So this claimRef is how it maps that this PV was created specifically for this persistent volume claim.
B
So we can see here, claimRef: persistent volume claim, namespace default, name test-zero, all right? This prevents this PV from being claimed, or being taken, by some other PVC accidentally; we know it was created specifically for this one. So, beyond this being lab-only, I would not recommend using this provisioner for production usage. There's a number of others that do, or have had, similar behavior, so, for example, NetApp Trident.
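Roughly what that on-screen PV looks like, trimmed down; the volume name, server address, and paths are hypothetical stand-ins for what the provisioner generated:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-demo-volume           # hypothetical generated name
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.0.2.10            # hypothetical storage host
    path: /exports/default-test-0 # subfolder created for this one claim
  claimRef:                       # pins this PV to one specific PVC
    kind: PersistentVolumeClaim
    namespace: default
    name: test-0
```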
B
This was how it worked up until version 20, or 19.10, 19.07, something like that, when they fully switched over to the CSI model. Effectively, it would create a volume on the ONTAP or SolidFire system and then create a PV to map to that. They started doing that back in Kubernetes 1.8, I think; they started very early on with that process.
B
Right, it's just a standard NFS type; this uses the in-tree volume type. So all of these are shipped by, and they're supported by, Red Hat, of course. So if I come down here... so I'm in the documentation, right, "Understanding persistent storage," and if I scroll down on this page, and if I get down here (come on, stop jumping on me)...
B
I know. So we have all of these volume plugins. These are all of the in-tree provisioners, with the exception of maybe that one. So if I, you know, created an iSCSI persistent volume, if I create here a Cinder volume, if I create an NFS volume, et cetera, right, they're going to use the in-tree drivers. And that includes vSphere, so the default dynamic provisioner that we configure with a vSphere IPI or UPI deployment uses the in-tree provisioner.
B
So effectively, these volume drivers have been deprecated and will be removed at some point in the future. This is not an OpenShift decision, this is not a Red Hat decision; this is a Kubernetes thing. So everything is moving to CSI. So CSI works a little bit differently, and it works more like Cinder, if you're familiar with Cinder, and we can... what?
B
So if we look in the documentation again, I just went to "Using CSI" and "Configuring CSI volumes," we have this handy-dandy little graphic that kind of describes what's happening. So I've taken this graphic and turned it into a little thing here that describes the process of dynamically provisioning storage with CSI.
B
So the CSI provider has pods that are deployed to the host, and those pods have the logic for mounting those volumes. So, just to step back, let's look at an example. If I'm mounting an NFS export with the in-tree driver, essentially it's reaching out to the OS and saying mount -t nfs, blah blah blah. With CSI...
B
It doesn't really know or care what the protocol is. It's just saying, CSI provisioner, CSI driver, mount this volume and tell me when it's ready, and that's as much as Kubernetes is aware of. So, I see a little bit of chat going by; I'm just gonna...
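With CSI, the wiring is the same storage class mechanism; the provisioner field just names the CSI driver instead of an in-tree plugin. A hedged sketch, using the vSphere CSI driver's name as the example and a hypothetical class name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-thin                       # hypothetical name
provisioner: csi.vsphere.vmware.com    # the CSI driver's registered name
reclaimPolicy: Delete
allowVolumeExpansion: true             # many CSI drivers support online volume expansion
```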
A
So, if a PV is full, a PVC is full, and you need, you know, more storage, you ran out of disk or whatever, will it sit there in a pending state, or will it fail immediately, if you're provisioning?
B
The in-tree drivers... what that timeline is, I don't know. I used to hear a Kubernetes 1.21, 1.22 time frame; I don't know if that's changed, because it's been like a solid 12 months since the last time I checked.
A
...is that Docker is being deprecated, we're switching to containerd in Kubernetes, and that's freaking everybody out. That has been the discussion of the day in the ambassador channel for CNCF. So...
A
Yeah, I mean, it is exciting. It also shows where some knowledge and messaging gaps are; there's a lot of confusion, it seems. "Wait, I thought Docker was Kubernetes," or vice versa, "I thought Kubernetes was Docker." No, it's a container orchestrator, and there's multiple kinds of containers.
A
Look at OCI: Kubernetes runs any OCI-compliant container, and there's lots of them.
B
So, I know we've only got like two minutes left, so I'll close out by saying that, much like the SDN plugins, CSI provisioners mostly come from third parties. So you need to check with your storage vendor on support, and support with OpenShift. For example, I've mentioned Pure and NetApp, and there's Dell EMC, IBM, Hitachi, Fujitsu, right?
B
They all have CSI provisioners that work with OpenShift. You'll notice a couple of others in this list; in particular, you'll notice the Manila CSI and the oVirt CSI driver. Both of these come from Red Hat: the RHV team created and manages the oVirt CSI driver, and it's deployed by default when you do an IPI deployment on RHV. So those are supported by Red Hat; the others would be supported by our partners.
A
We need to make that easier for folks, I feel like, but we've got to go; I've got another show to produce. So thank you, everybody, for joining. Check us out next week, same bat channel, same...
B
Actually,
next
not
next
week,
I
have
a
prior
engagement,
one
that
one
that
predates
this
show
even
so
wow
next
week
next
week
we
will
not
have
a
show
the
week
after
we'll
have
a
show
cool.
If
you
have
any
questions,
any
concerns
any
etc.
Please
feel
free
feel
free
to
reach
out
to
andrew.sullivan
at
red
hat
or
see
short
red
hat
more
on
social
media,
practical
andrew
on
twitter
and
c
short
right.
Chris.
B
At
this
point,
if
I
haven't
gotten
in
so
thank
you,
everybody
appreciate
you
tuning
in.
Thank
you
to
all
the
people
who
have
asked
questions
and
please
don't
hesitate
to
reach
out.