Description
Resources are precious. You don't want to waste them, and you don't want Pods to over-indulge, so how do we control access to resources? Kubernetes has built-in functionality that allows applications to declare their requirements and, conversely, enables administrators to define limits. This episode we'll explore requests, limits, quotas, LimitRanges, and QoS classes, including how all of these are affected during resource contention.
A: Good morning, good afternoon, good evening, wherever you're hailing from. Welcome to another episode of the OpenShift Administrator Office Hours. Today we're talking about controlling pod resource management, but the one and only Andrew Sullivan here has some follow-up from previous episodes. So why don't you start off with that there, Andrew. Indeed.
B: Well, thank you, Chris, and happy, happy Wednesday. Happy...
B: Unless you hear sirens, in which case...
A: Getting a lot of exhaust heat. Yeah, yes, negative 11 Celsius, eight or nine Fahrenheit, so yeah, it's a little chilly here.
B: Oof. Let's see, my computer says it's 45 here, so yes, you... you...
B: All right, so this is... oops, sorry about that if that banged in your ear. So this is the OpenShift Administrator Office Hour. The goal of this next hour, or 56 minutes, 57 minutes as it were, is to give you all, our audience, the ability to ask us questions about anything that you want relating to OpenShift.
B: So I have an administrator background across many different disciplines, Chris, and I know you have the same as well, so we'll do our best to answer any questions that you happen to have, be they administrator-related or developer-related. Just know that we'll definitely have to phone a friend if it comes to the developer side of things.
B: That being said, don't hesitate at any point in time to send us a chat. So yeah, across Twitch, across YouTube, across all the various platforms that we're on. And apologies for the dog, who seems to have escaped at the moment. Oh.
B: Yeah, so don't hesitate to ask any of those questions, regardless of what we're talking about today. As Chris said, today we'll be talking about managing resources for applications, for pods, et cetera, but don't let that be the only topic that you want to bring up.
B: Exactly, yeah, that's what we're here for. We just use the topic to fill in for when we don't have those questions, right? So, as Chris mentioned, as I usually do at the start of these shows, I have some follow-ups, some clarifications, a lot of times some corrections, because Andrew is often wrong.
B: Yeah, Christian, I know. So this wall: when I moved offices, this wall is a lot closer, and the picture is basically too big. It takes up too much room, so I need to find smaller pictures now. So the raccoon, which is actually hanging right over here, is a 24x24, and oh, that's kind of big, yeah. This wall is two feet behind me. So yeah, way, way too big.
B: So again, ask questions at any point in time. So, follow-ups: a couple of things to talk about. First, a couple of weeks ago, maybe three or four weeks ago, we talked about updates, I think, and disconnected, and a couple of times now on the stream I've shown the Cincinnati graph data.
B: So Cincinnati is the update service inside of OpenShift itself, and the data that's found in that repo is effectively how OpenShift determines, or OpenShift knows, "hey, I can update to this version from this version, or I can't update to this version," that type of information. And if you look at it in the raw repository... and I'm going to share my screen, because it's easier for me. (Please do, talk that way.)
B: So, for example, if I go to channels here, I can look at, for example, stable 4.6, and I have this big long list of all of the versions that I'm able to update from and to inside of here. So oftentimes we have used a third-party site, published by one of our product managers, Rob Szumski, to kind of get an easier visualization of this. A better solution, still not great, right? It's unofficial, it's third party, and it was still very text-driven, so sometimes hard to understand.
B: Don't, don't ruin that. Apple buying DuckDuckGo? That's not okay! So when we look at this particular tool, it's really good for a couple of reasons. First and foremost, this gives me an upgrade path. So remember, with OpenShift I can't go directly from, say, 4.4 to 4.6; I have to do an intermediary 4.5 first.
B: You need to go from where you're at, change your subscription... or not your subscription, your channel, to 4.5, update to 4.5.27, change your channel to stable-4.6, and then go to 4.6.12.
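For reference, that channel change and target version live on the ClusterVersion resource; a minimal sketch (the version numbers are just the ones from this example):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  # First switch the channel (e.g. from stable-4.5)...
  channel: stable-4.6
  # ...then request the release the update graph recommends.
  desiredUpdate:
    version: 4.6.12
```

The same thing can be done from the CLI with `oc adm upgrade --to=4.6.12`, or through the web console as walked through below.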
B: So it walks me through, step by step, what I need to do, and then down here below I have each one of the steps that I need to take in order to do that. So: click Administration, go to Cluster Settings, set the channel to stable-4.5, select version 4.5.27. It walks me through each one of those and exactly how to do that. Super, super helpful, right? No longer any guesswork of, okay...
B: If I go from where I'm at, 4.4, to 4.5.something, am I going to have to do another 4.5.something update to get to a version that's eligible to update to 4.6? So it takes care of all of that. The other thing it does is it generates this, I'm not going to say not useful, but very difficult to read, update channel graph.
B: So we've seen these a number of times; we generate these internally and share them sometimes. This is the visualization of all of that YAML data that I showed over here in the repo, right? So this is all of the edges connecting to all of the versions, and what that looks like as a visualization.
B: So I find this useful for a couple of reasons, though "useful" might be a stretch. For one, over here in purple, or excuse me, in light blue, is the highest version in the channel. So it's pretty easy to see, okay, the highest version I have available is 4.5.30, and if I kind of click and hold this, you see how it highlights all of the source edges, so I can see all...
B: So one thing to note in this view: if I... let me go back to stable 4.5. It's actually highlighting the path that it has recommended to me. So up here at the top, remember, it was 4.4.19 to 4.5.27. So here's my 4.4.19, and you can see 4.5.27 is my update path. You notice that 4.5.30 is available, but I can't go from 4.4.19 to 4.5.30.
B: So if I had just blindly chosen one of those versions, right, I would have had an error, or it wouldn't have been available. And then likewise with the 4.6 it'll do the same thing, right? It'll show me the path that it has chosen for me, so from 4.5.27 to 4.6.12. So again, useful. I definitely encourage you to check it out if you're going through the update, upgrade process, etc. There is also this other view, the update graph by channel.
B: Yeah, so let's see, other things that I wanted to follow up on. One of our viewers, Killer Goalie, had reached out to me, I think it was two weeks ago, which was the week after we did a show on disconnected.
B: So the goal here is kind of twofold, right? If I want to mirror a catalog... so let me bring up my cluster here, go to OperatorHub. So I have these provider types, which are effectively analogous to a catalog source.
B: So if I just select this provider type of Red Hat, these are all of the things that will get included. There are two separate things that have to happen here. One is, I need to mirror the index, the catalog index, and then the other one is, I need to mirror basically all of those images. The index is pretty straightforward.
B: Just focus, please. Yeah, we'll make it available for you. So one of the fields in here is the image, right? So all I need to do is effectively mirror that image. There is a very helpful command for that, the opm commands, and we covered all of that during the previous session. So I did get this far, of how to pull down that image. The step that I didn't do was: okay, I pulled down that image locally.
B: Here's the mirror from... yeah, mirror to file. So at least with 4.6.12, and I will admit I haven't tried it with 4.6.6 or .16, whatever version I'm on, because I just deployed that yesterday evening, this didn't work. Effectively what happens is it mangles the output into the mirror file and causes it to be invalid. So let's see what that looks like.
B: So if I were to attempt to... here's one of them. So if I were to attempt to mirror this data, and let's make that easier to read by eliminating...
B: ...across all architectures. So there's a bug where it requires you to do this filter-by-OS, because otherwise it pulls for Power and all that other stuff as well as x86, and in this instance, right, I don't want to do that. I want to go to a file. So I'm going to say: dump all this stuff to a file at just this location, and it'll think for a moment. And what we're going to see is... and maybe it'll work, because again, this is 4.6.16, right?
B: So effectively it wouldn't work before, but this is probably going to work now. What I can do at this point, or once this finishes, is tar-gzip all 40-something gigabytes, move that over, and then reverse this process.
B: Right, so simply change this so that the file one comes first and my disconnected registry name is second, and then apply my ImageContentSourcePolicy. So just know, at least for now: even if you're not using 4.6.16, if you haven't deployed a 4.6.16 cluster, it looks like if you use the 4.6.16 oc client, judging by the non-error that I'm getting at the moment, it may actually work for you. So Killer Goalie, if you're out there, if you're listening, please know that it's worth giving a test.
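For anyone following along, the ImageContentSourcePolicy that `oc adm catalog mirror` generates maps source repositories to your disconnected registry, roughly like this (the registry hostname here is a placeholder, not the one from the demo):

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: redhat-operator-index-mirror
spec:
  repositoryDigestMirrors:
  # Pulls by digest from the source repo are redirected to the mirror.
  - source: registry.redhat.io/openshift4
    mirrors:
    - disconnected-registry.example.com/openshift4
```

Applying it causes the nodes' container runtime configuration to be updated so image pulls are served from the mirror.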
B: You know, I pulled it down once; it's sitting in my object store. I don't need it again on the file system. But just know that I did create a BZ (Bugzilla) for this, so they are looking into it. It may just get closed and marked fixed in the new version.
B: So the other two things, quickly, that I had to talk about... and Chris, you know that I like to talk about things that I see come up internally on behalf of customers, or kind of preemptively addressing customer issues and concerns.
B: So the first one is the cluster network definition. Let me switch back over to my CLI instance.
B: So inside of an install-config I have this networking stanza, and inside of there, there are going to be three network definitions. I'm going to start at the bottom and work my way up. The service network is the set of IPs that will be assigned to Service definitions. So when you create a new Service object, it will be given an IP from this range. Nice. Then the machine network CIDR is the public one, right?
B: It's the actual network inside of your environment where at least one interface from your nodes, from your machines, will have an IP address. So this is important, and I've talked about this before; this is important for things like the SDN. That's how it authoritatively determines which interface to use. If it's missing, or if it's incorrect, and it doesn't see an interface on that subnet, it just chooses the first interface and hopes it's good. And for things like the proxy config...
B: If you're doing a cluster-wide proxy, this subnet is automatically added to the noProxy list. So we see that a lot of times with folks who are deploying a proxy: they don't set this correctly and the install just fails repeatedly. "But I've got my proxy config correct!" Well, if you don't set this correctly, you can manually add the subnets to the noProxy list and then it should work as expected. So that's one common reason why proxy setups fail. And then the third one here, the cluster network.
B: So if I have 10 nodes in my cluster, each node will be assigned a /23 out of this /14, and pods that are instantiated on that node will be assigned an IP address from its subnet, right? So if I switch back, just to give us an example here... I want this one. If I go to my nodes, choose this worker node, and look at our pods, what I should see here is... and I'm just going to pick one of these at random.
B: Do I really need a /14 and a /16 for these networks? And the answer is: maybe. So the host prefix of /23... we do this because the maximum number of pods per node is 500, and a /23 is effectively 510 usable addresses. There's 512 technically, but of course, first and last, blah blah blah, yeah.
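Putting those three definitions together, the networking stanza of an install-config.yaml looks roughly like this (the machineNetwork value is environment-specific; the other CIDRs are the usual defaults being discussed):

```yaml
networking:
  networkType: OpenShiftSDN
  clusterNetwork:
  - cidr: 10.128.0.0/14     # pod IPs; each node carves out a /23 slice
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16           # IPs handed out to Service objects
  machineNetwork:
  - cidr: 192.168.1.0/24    # the subnet your nodes' interfaces actually live on
```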
B: Do I need a /14 here? Well, that depends on how many nodes you have in the cluster and the size of the host prefix, yep. So, for example, if I have 50 nodes in my cluster... and I know, I know, I'm looking over here because that's where you are, and I realize that the camera's over here; I should stop doing that.
B: So if I have 50 nodes with a /23: 510 times 50 is 25,500. Now, I did not do that in my head; I did that yesterday when I responded to the question, so I knew that one off the top of my head. So 25,500 IPs falls right in the middle of a /17. I think a /18 is like 16,000 IPs and a /17 is 32,000 IPs.
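That arithmetic is easy to double-check; a quick sketch:

```python
# Back-of-the-envelope math for the clusterNetwork sizing discussed above.
# Assumes the default hostPrefix of /23 per node and a 50-node cluster.

host_prefix = 23
pods_per_node = 2 ** (32 - host_prefix) - 2   # 512 addresses, minus network/broadcast
print(pods_per_node)                          # 510, comfortably above the 500-pod cap

nodes = 50
print(pods_per_node * nodes)                  # 25500 pod IPs needed

# A /18 is too small and a /17 has room to spare:
print(2 ** (32 - 18))                         # 16384
print(2 ** (32 - 17))                         # 32768

# The default /14 can hand out this many /23s, i.e. this many nodes:
print(2 ** (23 - 14))                         # 512
```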
B: So one of our illustrious SAs also pointed out that he prefers to start at the bottom of this range: rather than 10.128.128, favor .0 here, so 10.128.0. Okay, because if you need to expand it, maybe you need to add some more IPs, rolling over into the next subnet is harder. If you exceed the next octet's capability, right, that's a lot harder to do. And painful, yes, yep. But do you remember, effectively, the cluster network and the service network?
B: So the last thing I have is a question courtesy of... I guess it's a question courtesy of Christian. So Christian and I were chatting yesterday about vSphere, and he brought up vSphere datastore clusters. If you're not familiar with the concept of datastore clusters in VMware, they're effectively the same thing as a DRS cluster: I take multiple datastores...
B: I put them into a cluster, and when I'm provisioning my virtual machines, essentially I just select the datastore cluster and whatever my capacity is, and it chooses which datastore to put it into. Similarly, it will automatically, if it's allowed to, Storage vMotion between the different datastores to balance, right, on different characteristics: performance, capacity, etc.
B: So the question was: why don't they work with, you know, OpenShift? Why don't they work with the storage provisioner? And the reason is less to do with OpenShift and more to do with, well, the storage provisioner itself. Unfortunately, the in-tree provisioner doesn't support datastore clusters at all.
B: Browser, whatever. All right, so down here we have this workspace definition, right? We can add multiple of these in order to define multiple datastores, and then we have one datastore per StorageClass definition. Kind of clunky, right? That means that whoever's creating the PVC has to be aware of that.
B: It's not always a good thing. So the storage provisioner does support the concept of policy-based choosing, and I should have thought ahead here and had this link already up and available, but I happen to have it bookmarked, which... I was told that bookmarks are an antiquated concept.
B: Yeah, well, same here. Anyways, so in here we have the ability to specify a policy, right? So I can define those policies as being tag-based.
B: I can create a VM storage policy, and then I can use one of these in my storage definition. So I can say: storage provisioner, choose a datastore that meets this policy, essentially. Maybe that's, you know, flash storage, maybe that's, I don't know, block storage, whatever that happens to be, right? And that can match multiple datastores.
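With the in-tree vSphere provisioner, that policy-based selection looks roughly like this ("gold" is a hypothetical policy name, not one from the demo):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-gold
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  # Any datastore satisfying this VM storage policy is a candidate,
  # so the policy can span multiple datastores.
  storagePolicyName: gold
```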
B: ...being removed in the future; I don't remember what version it is now. So, essentially, sure, we can create an RFE, and maybe something like that is already there, but if I'm being honest, I wouldn't expect action to be taken on it. The CSI provisioner, part of what VMware calls Cloud Native Storage, right, is the active, current, and recommended way of consuming that storage.
B: CSI provisioners are maintained by the storage provider, so you would open an RFE with VMware in that instance. And VMware does support it, right? The VMware CSI provisioner is supported by VMware with OpenShift, and you can absolutely deploy that in there; just follow their deployment guidance. Usually the big change for us on the OpenShift side is making sure that the virtual machine hardware version, which is very much an oxymoron, right, "virtual hardware version," is at least version 15, and then you can deploy the CSI provisioner in there.
B: Good. All right, any questions?
B: All right, so today's subject is controlling resource consumption. You know, we've talked about sizing before; we've talked about a bunch of different aspects of what's happening inside of the cluster. But at the end of the day, right, we can size the clusters as much as we want, but if the pods, if the application teams, aren't responsible stewards of those resources, there's not a lot we can do. So we can help control that through a couple of different mechanisms at the Kubernetes level.
B: Kubernetes will effectively reserve those resources. So I could have, you know, these massive nodes in my cluster that have effectively no real work happening, but from a scheduling perspective they're full. So we want to be very conscious of, very aware of, what those requests, those limits, those configurations are, because they can affect that pretty significantly. This cluster is running in Azure, right? I could effectively autoscale up to dozens or hundreds of nodes that are doing effectively nothing.
B: So I'm doing a webinar with a partner of ours, Turbonomic, tomorrow. They have fantastic tools, and they're one of many of our partners that do this, for looking at and evaluating the real resource utilization associated with an application, so that you can then go to that application team, like: look, I have proof that you're not using all of these resources that you requested.
B: Let me downsize; I promise it's not going to affect your application, but it allows us to be more efficient, better at what we do, and more cost-effective, effectively. So trust the tools and build the trust with the application team as an administrator. You know: hey, we're not taking these resources away such that you're never gonna get them back; when you do need them, they will be there, right? So again, that whole trust thing that keeps coming up again and again. Okay.
B: So the first thing I want to talk about is a couple of core things, and I've already talked about them in previous episodes, so I'm not going to spend a lot of time. I just mentioned the first one, which is a request. Each pod can define a requested amount of CPU and RAM, right? It can also define a couple of other things inside of there. And this is not the page that I wanted, so I'm just going to ignore that page.
B: So that request is effectively a "please guarantee me," or "please schedule me on a node that has at least this much resource available right now." The opposite of that is a limit. A limit is: don't allow this pod to consume any more than this amount of resources. And the two of those don't have to be equal.
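In a pod spec, those two settings sit side by side; a minimal sketch (the image name and the numbers are just illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:            # "schedule me where at least this much is free"
        cpu: 250m
        memory: 256Mi
      limits:              # "never let me consume more than this"
        cpu: "1"
        memory: 512Mi
```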
B: Yeah, very much so. So the combination of those two things determines the QoS class associated with our pod. If my resource request, so if my request, is smaller than my limit, it falls into the Burstable class, right? If I... I think it's Burstable.
B: So, let's see. If a request is less than the limit, then it is the Burstable QoS class, okay, right, yeah. If there is no request or limit specified, it is the BestEffort class. And if the request and the limit are the same, and I had to check my notes on here just to be sure, it is the Guaranteed QoS class. Nice. So what do these mean? The biggest impact that these have is when it comes to resource contention, effectively, so we can think of Guaranteed as being the highest.
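Expressed as container `resources` stanzas, the three classes look like this (values are illustrative):

```yaml
# Guaranteed: requests and limits both set, and equal
resources:
  requests: {cpu: 500m, memory: 512Mi}
  limits:   {cpu: 500m, memory: 512Mi}
---
# Burstable: requests set lower than limits (or only requests set)
resources:
  requests: {cpu: 250m, memory: 256Mi}
  limits:   {cpu: "1",  memory: 1Gi}
---
# BestEffort: no requests or limits at all
resources: {}
```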
B: Right, so, if you want to... and, you know, there are some constraints on that, right? This is why you want to have a pod disruption budget, so that it doesn't go in and evict all 40 pods associated with, you know, this application component, and suddenly everything's down. A disruption budget prevents things like that from happening. So you can see that there's a number of factors at play here, right, that all go into this grand scheme, or this grand plan, of: how do I control...
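A PodDisruptionBudget for that scenario might look like this ("my-app" is a placeholder label; clusters of this vintage use the policy/v1beta1 API, newer ones policy/v1):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2          # voluntary evictions may never drop below two replicas
  selector:
    matchLabels:
      app: my-app
```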
B: You know, I know from when I was an administrator, I may or may not have done this before: "hey, I need to put this node into maintenance mode and reboot it," and accidentally taking something down; or "this one can't vMotion? That's fine, I'll just turn it off for a minute." And, you know, we want to avoid those types of scenarios, so working together...
B: Then you'll note that up here I skipped over these two at the top. These two apply across all storage classes, all storage types. So maybe I've got, you know, gold, silver, bronze, cardboard, plastic storage classes, and individually you are allowed 100 gigabytes from each one of those five. But I can say up here, at the requests.storage level: total, collectively, you are not allowed to consume more than 250 gigabytes across those five.
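As a ResourceQuota, that per-class-plus-total scheme looks roughly like this ("gold" and "silver" are the hypothetical class names from the example):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
spec:
  hard:
    requests.storage: 250Gi   # collective cap across every storage class
    # Per-class caps use the <class>.storageclass.storage.k8s.io prefix:
    gold.storageclass.storage.k8s.io/requests.storage: 100Gi
    silver.storageclass.storage.k8s.io/requests.storage: 100Gi
```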
B: Yes, so storage resources are just as important. Now, note that this is gigabytes; it's capacity-based, effectively. Kubernetes has no concept of things like latency or IOPS, which are of course important for many storage operations. So just be aware of that, right? It is a thing, and there are ways you can control it, depending on your storage vendor, so refer to their provisioners, their CSI provisioners, because many of them have a lot of different things to help with that. I know Pure, I know NetApp...
B: So this one is a little harder to nail down, because you can say: oh, Chris, you get one terabyte of ephemeral storage, you know, don't use any more than that. But maybe your hosts, right... and by default we recommend 120-gigabyte drives on those OpenShift nodes. Well, that would effectively be like 10 nodes' worth of capacity that you could extinguish.
B: So, one: if you are, I'm not going to say lazy, I'm not going to say irresponsible, I'm going to say careless, an application administrator who does not define any of these values yourself, we have defaults that can be defined in here. If you don't create a CPU, memory, or ephemeral storage request or limit, you're going to get what's defined inside of here. And note that these can be defined at the container level as well as at the pod level, as well as at the project level. Nice.
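A LimitRange covering those defaults, with per-container and per-pod entries, might look like this (numbers are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    defaultRequest:          # applied when a container sets no request
      cpu: 250m
      memory: 256Mi
    default:                 # applied when a container sets no limit
      cpu: 500m
      memory: 512Mi
  - type: Pod
    max:                     # hard per-pod ceiling
      cpu: "2"
      memory: 2Gi
```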
B: Additionally, you can specify the min and max available. So your quota is 100 CPUs and a terabyte of RAM, but you cannot create an individual pod that consumes more than two CPUs and two gigabytes of RAM, right? So kind of, maybe, potentially preventing you from shooting yourself in the foot. Similarly, the minimum is great... the minimum and the defaults are great for that whole scheduling thing of: I always want to make sure that at least some resources are accounted for at the scheduler level, so that it can make more intelligent decisions.
B: So limit ranges: super important, right? They help with all of these things; they help prevent people from shooting themselves in the foot. And, unfortunately, I think we're going to run out of time, so I'm going to share a GitHub repo, because, Chris, I don't think you had joined our team yet: I actually created a whole demo on this. I'll share the video, I'll share the examples that I have from back in, like, the OpenShift 4.2 days.
A: Yes, thank you, yeah. And if you're not aware, Andrew does a very good job of getting a blog post up after each episode that he's on, so yeah, subscribe to the blog and you'll see all the notes that you need to see from this show. Yep.
B: So I'll dig up that video, I'll share that; you also have that GitHub repo, which has a lot of examples inside of there. To your point, Chris, these are core Kubernetes concepts. This isn't an OpenShift thing; this is Kubernetes. But it's something that I have found we as administrators really don't understand well, because a lot of times we think of it as "it's an application thing," right?
B: So if you're allowing users to create their own projects... you know, the old-school way would be: hey, submit us a request and somebody will create your project and hand it over to you, right? Nowadays we often hand over an entire cluster to them and say: go forth and do.
B: So how do we do that? We'll switch over here and clear out the noise, because what I want to do is show this command: oc adm create-bootstrap-project-template, with an output format of YAML, and you can see I've simply directed that into a file on the file system. If I look at this file, at this YAML output, what I'm looking at is the template that defines how to create a project, and you can see a number of different things inside of here.
B: So I can see both my code and the chat and this all simultaneously, or pseudo-simultaneously. You can see in here we have a couple of variables, things like description and display name; these are defined at the bottom, so we see down here the parameters. But, importantly, we have multiple objects inside of here. So when I create a new project, it's going to, of course, create a new project using, you know...
B: So when I say oc new-project, it's going to use this information to create the Project object. It's going to create a RoleBinding object that has these security permissions defined; we see the cluster role here and the user role that are bound to the admin user, whoever is creating it. We have our parameters down here at the bottom. So if I want to, for example, create a default quota, I simply add it in here, right?
B: So I'm going to remove this one real quick, and then I also want to add a default limit range, which is going to be the exact same limit range that we just looked at a moment ago. So project-name-limits is going to be the name of the object. All right, so I'm just adding in whatever it is that I want to be created: if I want to create default users beyond just the admin user, quotas, on and on and on, all of these things...
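Trimmed down, the edited template ends up with something like this appended to its objects list (the generated Project and RoleBinding objects are elided; the limit values are just the illustrative ones from earlier):

```yaml
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: project-request
objects:
# ...the generated Project and RoleBinding objects stay as-is...
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: ${PROJECT_NAME}-limits     # expands to e.g. myproject-limits
    namespace: ${PROJECT_NAME}
  spec:
    limits:
    - type: Container
      defaultRequest:
        cpu: 250m
        memory: 256Mi
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_ADMIN_USER
```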
B: Inside of here I can define these with the project template. So we'll save that particular file, and now I want to create that object. If I look at the head here, you can see that it is a Template with the name project-request. So I'm going to submit this: I'll do an oc create -f on the project template, and I want to put this into the OpenShift...
B: Here, so: oc get configmap, named config, in the API server namespace, and then we're just using JSON output to select one portion of that. It spits out a bunch of different things that are inside of here, right, that it's going to use, that it's going to do with that particular API request. So effectively we want to add another option to this, which says: use this one, right, when creating that new project.
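The option being added is the projectRequestTemplate field on the cluster's Project configuration, roughly:

```yaml
apiVersion: config.openshift.io/v1
kind: Project
metadata:
  name: cluster
spec:
  projectRequestTemplate:
    name: project-request   # the Template, stored in the openshift-config namespace
```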
B: So we will create our object, and I'm going to switch back to our YAML editor to show this one, because I'm going to pipe it inline. All I'm doing here is creating a new project, so you see no namespace associated with it; you requested a new project, and then the template to use here is project-request. Cool, pretty straightforward. So I'm going to do oc apply -f and...
B: Yes, so I talked about this during the sizing stream, at the end of it: overcommitment is something that we, especially as virtualization administrators, love, yeah. We overcommit the crap out of everything all the time, and network administrators love it even more, right? It's not unusual to find, especially in...
B: So that means, in the case of OpenShift, in the case of Kubernetes: overcommit at the Kubernetes layer. The reason for that is, let's say that I'm overcommitting at the hypervisor level, so VMware, RHV, OpenStack, whatever, is doing the overcommitment, and the hypervisor is out of a resource: it's out of memory, it's swapping, the hypervisor is hurting and it's impacting the virtual machines. OpenShift, in this case, doesn't know this. All it knows is every alarm that it's got is saying...
B: ...that each one of those layers is going to give us the resources that we need, not hand us a bunch of crap or deny those resource requests and then impact everybody above. So it's definitely a learning experience, right? For anybody who remembers, I was at NetApp before; I used to give a whole talk five years ago on this exact same topic, and it's still a thing. It's still...
B: Yeah, I'll figure out what's going on there and I'll include that in the show notes. Okay, cool, if there was something I missed. So thank you to Eric Jacobs, by the way, who created a lot of these commands a while ago; he helped me greatly with how all this works, although he probably doesn't know it because I plagiarized. So I'll follow up with that.
B: I'll make sure that we get that covered inside of those show notes, and I'll also follow up with it next week, just to make sure that, for anybody who doesn't see the blog post, that's included. But sounds good, so yeah. Thank you, thank you, everybody who has been watching; we really appreciate your time.
B: I know it's sometimes not easy to devote an hour with us, but it does mean a lot. If you have any questions, anything that we didn't address today, I'll go back and review all of the chat to make sure that we catch all of those in one form or another.
B: If you didn't have time, or if you didn't feel like it fit, or just didn't want to ask in public, please feel free to reach out to me. I'm on social media; you can reach me on Twitter, practical andrew, and I'm on LinkedIn and all that other nonsense as well. You can also send me an email, quite simply, andrew.sullivan@redhat.com. Happy to take those questions, love to get those questions. And if you don't want me to talk about anything publicly, right, I don't usually like to mention names or anything unless you're okay with it.
B: So I'm happy to also do that. But yeah, don't hesitate to reach out at any time, and with that, I think we're basically at the top of the hour, Chris.