From YouTube: OpenShift for operations
Description
Red Hat OpenShift is an amazing, award-winning tool for application developers. It is also a great management tool for operations teams due to its built-in power, flexibility, and scalability. But how can ops teams make the jump from managing OpenShift to running their own workloads effectively?
In this session, we'll cover the features of Red Hat OpenShift that are most important to an ops team when moving workloads to the platform. We'll also go through security best practices and have live examples to discuss.
Learn more: https://agenda.summit.redhat.com/
A
...an honorary degree. It's fine. Alright, so, a pretty ops-friendly house. Good, as it should be. Alright, so let's go ahead and get going. We're about three minutes after... we're four minutes after, and there's a mountain of material. We're gonna be hitting you with the firehose like we always do, so we're gonna go ahead and get rolling while the last folks straggle in. I'm Jamie Duncan. That's my email address, cloudguy@redhat.com. It took a director-level approval to get cloudguy@redhat.com.

A
What am I... like, in my time at Red Hat, that's one of my major accomplishments, getting that email address. That's a pretty low bar when you stop and think about it. It's not on your resume: "got cool email address." That's also my Twitter handle. Lots of tweets about containers and really cloudy things, and toddlers and chickens. Don't know how all that works out, but that's pretty much what it is.
B
I'm Dan Walsh. I do coloring books, and my Twitter handle is @rhatdan, which means I can never leave Red Hat, so they own me. But yeah, I lead the container team at Red Hat, so I do everything underneath Kubernetes. So it's kind of weird that I'm up here doing this, since I don't really work above Kubernetes.
A
We got reorganized. A little bit about me: I do this for every presentation. That's my daughter Elizabeth. She's my good-luck charm. She's the cutest thing ever; that's not negotiable. On the other side of that couch there was a crying three-year-old that she had just wailed on. She took a sword, beat up a three-year-old, then took his sword and then rolled around the corner like Conan. I don't know what I'm in for, but I'm a little afraid.
A
She's
20
months
old,
beat
up
a
three-year-old
and
took
his
sword.
I've
been
at
red,
had
about
six
and
a
half
years,
I've
been
working
with
government
customers
that
entire
time
I've
worked
with
the
support.
I
started
out
in
the
sport
in
engineering
rule
world
and
I'm
actually
been
working
with
our
government
sales
teams
for
the
past
two
and
a
half
years,
plus
or
minus
so
I'm,
the
geek,
the
sales
guys
call
in
when
they
need
some
help
little
shameless
plug.
We
just
released
this
book
OpenShift
in
action.
A
I have literally one copy left in my bag over there. So if someone asks an awesome question, I have a free copy of the book. If not, I'll give it to Dan. Dan actually makes a cameo appearance in chapter seven. I have a very unflattering picture of him in a screenshot, and then there's a little call-out that says "Who is Dan Walsh?", because we needed to explain him. So he's got a cameo in here. So yeah, a little bit of a shameless plug.
B
My team, as I said, I work on the container team. We have new products. If you got the coloring book, we talked about them there, and if you saw my talk yesterday: projects called CRI-O, Podman, and Skopeo, which are all tools for sort of taking apart Docker into its core components and then trying to build simpler tools that specialize in doing individual tasks. I also invented a tool called Buildah, and, as Jamie pointed out, I also work on Docker and...
B
...SELinux. And I do coloring books. I've been a Red Hatter, as of this August, for 17 years.
A
Wow, that's pretty cool. Alright, so today's task. We did this talk last year with the same title, OpenShift for operations, and really what I did for last year's version was I took the discussions I'd been having with our customers and I put them in a slide deck. We did the same thing this year, and we put a little extra spin on it. A year ago, the ops world was really still trying to come to grips with "what is a container?"
A
So it was very nuts and bolts. We're gonna do a little bit of what we did last year, kind of walk through how OpenShift creates containers and who's responsible for what. But last year was very nuts and bolts, like we were down in there calling out kernel parameters to figure out which interfaces were attached to what inside containers. This year we're a year later, and the container ecosystem, the container world, is the fastest-moving thing in IT.
A
Why should an ops team use a container? Not just because my developers scream that they want to use containers, but how does it make your life better? How does it make your job easier? What does it give you? So we're still very nuts and bolts, there's no roadmap in here, but the conversations I've been having with my customers for the past year are around that: all right, I know what a container is, and I know that someone way up the totem pole is telling me to go use them. They can give me some of those things, but how do they make my life better? So that's what we're focusing on today. Does that sound okay to everybody? Or do we want to get down into the kernel again? We might do both, I mean, hell, who knows. So, today's task. And again, OpenShift is the most complete, production-ready container platform out there. We're at Red Hat Summit, so it's a pretty friendly audience. You're all voluntarily in an OpenShift session, so I don't think we're gonna...
A
You
know
unless
there's
some
Cloud
Foundry
junkie
in
the
back
that
wants
to
start
screaming,
I.
Think
we're.
Okay
with
that
statement,
and
again
it's
all
about
fundamentals.
If
you
look
at
open
shift
in
action,
it's
all
about
fundamental
knowledge.
It's
not
feature
sets
it's
not
the
greatest
latest
new
feature.
Taking
that
fundamental
knowledge,
what
is
a
container?
How
does
a
container
function
inside
Linux?
How
does
it
take
that
intake
that
knowledge?
Then
you
can
start
thinking
about
these
things
strategically.
A
They're,
not
some
weird
black
box
filled
with
unicorn
blood
they're,
another
tool
in
your
toolbox
to
make
your
life
better
to
solve
problems
inside
the
inside
your
data
center
and
that's
what
we
want
to
focus
on.
It's
all
about
the
fundamentals
I
mean
this
guy
literally
makes
container
runtimes
we're
that
fundamental.
Today
there
are
always
a
few
ground
rules.
This
is
discussion,
not
a
lecture
they're
200,
some
people
in
the
room.
A
Don't
everyone
speak
at
once,
but
if
you
do
have
a
burning
question,
if
you
do
have
something
that
you
vehemently
disagree,
disagree
with,
if
you
really
just
can't
not
ask
that
question,
feel
free
I'd
love
if
we'd
get
to
slide
four-
and
we
don't
get
past
slide
four,
because
we
have
a
great
discussion
thumbs
up.
It
was
a
good
day
for
me,
I
mean.
Hopefully
we
have
just
a
good
discussion
there
again.
A
Everyone should be using containers, but who should matters: when does it go into prod, and when and where do containers fit? See what I did there, with the who, when, where, why? Yeah, I thought that was cute when I thought it up at two o'clock in the morning. And then, with whatever time we have left over: conclusions, Q&A, a call to action, whatever we want to do there. Is that cool with everyone? We good? Nod, nod. All right. So, how containers work. We talked about this at this session last year.
A
This
should
look
familiar
to
pretty
much
everyone
if
you're
using
open
shift.
This
is
deploying
a
new
app
on
the
command
line,
OC
new
app.
When
that
happens,
when
you
hit
the
enter
key,
when
you
type
that
open
shift
kicks
off
first
and
starts
doing
some
things
and
I'm
waiting
for
Dan
to
just
start
saying
no
you're
wrong.
A
So OpenShift, in this particular case I'm using S2I to build my application, I get a custom container image build that takes the source code and a base image and marries them together into a custom container image. All of that happens in the background; you never see it. OpenShift creates an image stream. Image streams in OpenShift tie all my applications together and let me do things like automatically trigger application upgrades, based on upgrade policies, when a new version of my base image is available.
A
So when I do a security update, image streams make my application automatically update itself. A BuildConfig inside OpenShift lets me programmatically define how I want my application built. So if I have to do things like, if I'm using Java and I have to go in and do things with Maven, God forbid, I can define all of that inside my BuildConfig: how do I want my application built? OpenShift also has another object that's unique to OpenShift, the deployment config. Deployment configs dictate how your application gets deployed and updated.
A
So I can say in the deployment config, I want to always have five copies running, and when I do an upgrade, whether manually or automatically, I want it to be a rolling upgrade, or I want it to be a blue-green upgrade, or I want it to be a canary deployment. All those different upgrade methodologies, I define them in the deployment config. I actually define, programmatically...
A
How
I
want
my
app
to
upgrade
so
OpenShift
can
then
go
do
it
for
me,
and
that
happens
in
the
deployment
config
OpenShift
also
has
a
routing
layer
built
in,
and
the
routing
layer
gets
updated
with
the
endpoints
inside.
That
are
your
containers.
So
when
you
go
to
the
URL,
OpenShift
knows
where
to
route
your
traffic
to
and
from
then
OpenShift
hands
off
things
to
kubernetes.
Everyone
knows
what
kubernetes
is
right.
Kubernetes
is
what's
on
my
shirt
here:
Jonah
the
Star
Trek
Joe,
there's
a
Star
Trek
joke
in
the
kubernetes
logo.
A
If
y'all
heard
it
so
in-house
kubernetes,
the
in-house
version
of
kubernetes
at
Google
is
borg
borg
are
the
bad
guys
in
Star
Trek,
and
the
most
famous
Borg
was
seven
of
nine.
That
was
her
name
I.
Don't
know
why
she
was
named
seven
of
nine
I'm,
a
Star
Wars
guy,
but
her
name
was
seven
and
there
are
seven
spokes
on
the
kubernetes
logo
on
the
wheel
for
kubernetes.
A
So
there's
literally
a
Star
Trek
job
cooked
into
the
kubernetes
logo
makes
it
really
hard
because
it's
not
an
octagon
and
when
you
go
to
put
the
sticker
on
your
laptop,
it
doesn't
fill
with
all
the
other
octagons.
It
kind
of
makes
me
mad,
but
it's
cute,
it's
a
Star,
Trek
joke,
so
openshift
creates
all
that
stuff.
Then,
underneath
that
kubernetes
creates
a
replica
set.
A
replica
set
is
how
many
pods
I
want
running.
How
many
containers
I
want
to
keep
running
at
all
times.
A
It
also
creates
a
service.
A
service
inside
OpenShift
lets
me
use
just
DNS
names
to
talk
to
other
services.
So
when
I
deploy
my
web
front-end
and
I
needed
to
talk
to
my
database,
all
I
need
to
know
is
the
name
of
my
database
service.
The
DNS
automatically
works
is
automatically
configured
at
runtime.
All
of
that
just
works.
I
don't
have
to
Crowell.
All
of
that.
Any
of
that
on
my
own
there's
also
a
whole
bunch
of
schedule.
A
E
goodness,
you
know
what
kubernetes
is
is
a
scheduler
at
the
end
of
the
day,
and
also
all
of
that,
where
am
I
going
to
put
the
next
container,
are
my
containers,
happy
and
healthy?
Do
I
have
for
when
all
I
really
want
five?
Let
me
create
another
one.
Oh
that
nodes
not
listening
to
me
anymore.
I'm
gonna
blow
it
away
and
stand
up
another
one.
All
of
that
happens
inside
kubernetes
kubernetes
is
the
beating
heart
of
OpenShift
kubernetes
thenns
hands
off
to
the
container
runtime
on
this
example
were
using
cryo.
B
Who here has not heard of CRI-O? Okay, so here's a quick description of what it is. A couple of years ago, Kubernetes was first built totally on top of Docker, and what happened is CoreOS came to Kubernetes and said, we want to have full support for rkt inside of it, and so they did a huge patch set for Kubernetes to basically allow it to run rkt or Docker. And Kubernetes upstream at that point said, no, no.
B
We
can't
do
this
because
eventually
gotten
or
some
you
know,
system
to
un
spawned
or
somebody
else
is
gonna
come
along
and
want
the
same
type
of
treatment.
So
they
turned
it
on
its
a
year
and
a
coconutty
said
instead
of
you
guys
giving
us
patches
we're
gonna
spatially
call
out
to
a
daemon
and
say
these
are
the
things
I
want
done
and
that's
called
CRI
container
a
runtime
interface.
So
after
that
happened
guys
on
my
team
said,
you
know
we
could
build
one
of
those
and
they
created
it.
It's
called
sia,
it's
called.
B
It ended up being called CRI-O. So CRI stands for Container Runtime Interface, and the O stands for Open Containers. The real goal of this is to make it as simple as possible to run containers underneath Kubernetes. I mean, it's totally dedicated to Kubernetes. It doesn't work with Mesosphere, doesn't work with Docker Swarm or any other tools; it's just for running containers. And then it launches containers the same way Docker does under the covers, so it ends up launching runc or other OCI-compliant, compatible runtimes.
B
It's available as of 3.9, fully supported in OpenShift 3.9. Right now you can choose between CRI-O and Docker at runtime. We're hoping that in 3.10, probably 3.11, we will switch it, so CRI-O will be the default for all containers running in OpenShift. And in OpenShift Online right now, we're moving all of Online to run on top of CRI-O.

A
How big is the CRI-O daemon?
B
Oh, the CRI-O daemon, well, there's two things. The CRI-O daemon is much smaller, because all it's doing is the implementation of the CRI, so it uses a lot less memory. And to monitor our containers we actually use this thing called conmon, which is C code, instead of Go, so it can use a lot less memory for running containers than Docker does at this point.
A
And there's no root-running process, right? There's no daemon listening. Your container runtime doesn't have to run as root, and for a lot of customers, again, I work in the government, that's a huge showstopper. That's a huge pain, you know, having a daemon running as root. That's hard. So we don't have that daemon running, that's listening. You don't have to run a daemon to build a container; that's just silly. We'll get into that here in...
A
...a little bit. So CRI-O works with Linux to create the container. So now OpenShift has built all the OpenShift stuff, Kubernetes has built all the Kubernetes stuff. Kubernetes then tells the container runtime, in this case CRI-O, to go talk to Linux, and at this point we're getting down into the Linux process-isolation stuff. So when we talk about a container: a container is just a process, and Dan's gonna drive this home here in just a couple of minutes. The app you run in your container is your container.
A
What we're doing is taking that app and wrapping it in some isolation, some stuff inside the Linux kernel, to make that application feel like it's the only thing running on your server. It's the same thing we do with a VM, only we're using different parts of the Linux kernel. They all share one kernel. It's not like VMware, where everyone has their own virtualized kernel. That means they're smaller. That means they're faster: launch times for containers are measured in milliseconds.
A
Typically. Unless you have a lot of really bad Java code, and I can't fix that. But to launch a container with CRI-O, we're talking milliseconds. The RHEL 7 base image, the base container image for RHEL 7: 140 megabytes. Like, how big is your default VM disk right now? 40 gigs? 60 gigs? To run an app.
A
The first thing we isolate the application with in the kernel is kernel namespaces. The best way I've figured out, and maybe you have a better one, to describe what a namespace inside Linux is, is that they're essentially paper walls. They're really easy to stand up and tear down, they're very lightweight, but they effectively isolate the things around them. And so the Linux kernel takes your application and puts it in these namespaces. So the mount namespace takes everything...
A
...that's from that base image I'm building my application with, my container image, and puts it in its own mount namespace. So, from the point of view of the application, all it can see is what's in its mount namespace. That's all it can see. The entire world to that app is what's inside its mount namespace, so we're isolating the filesystem away that way. The network namespace gives it its own IP address.
A
So every container gets its own IP address, its own MAC address, its own network stack, its own TCP ring buffers, all of that crazy stuff. It's not shared; it's completely namespaced away. So, network namespaces: from the point of view of my app in my container, all I have is that one IP address that I need to talk to, and the Linux kernel isolates it away.
A
Inter-process communication: the IPC namespace. All of our shared-memory information, so named semaphores, shared memory segments, shmmax, all of those fun things that applications use to communicate directly with one another through RAM. All of that's isolated away with the inter-process communication namespace. Dan gave me grief once because I didn't explain it very well, and kind of eviscerated me. So, Dan, twice. This is my fun rabbit hole. Twice in my life...
A
...Dan Walsh has completely gutted me with one sentence. I'd been at Red Hat maybe four months, and I had to give an SELinux talk to a group, and Dan was sitting in the second row. And like, I'm not flustered really easily. I like crowds, I like to be on stage and talk and communicate. But it's like telling God about the Bible. It's a little intimidating, especially when you've been at Red Hat for four months and you don't know nothing from nothing.
A
I
didn't
sleep
for
four
days
like
I
was
gutted
and
then,
like
years
later,
I
was
in
a
different
event
and
Dan
was
there
and
I
was
doing
a
session.
Actually,
the
same
talk
I
did
last
year
at
Summit,
I
like
an
earlier
version
of
it
and
I
was
like
I
know.
This
thing
is
good,
like
this
talk,
I've
done
this
thing
like
40
times,
I
know
it's
good
and
Dan
sitting
in
the
back
corner.
A
Here
we
go
so
I
get
a
talk,
Adrenaline's
all
run
and
I'm
all
talking
real
loud,
podcast,
speed
and
I
know
I
nailed
it.
Man,
people
were
loving,
it
walk
up
to
Dan
after
Dan's
walk
in
his
session.
I
was
like
hey
Dan.
How
was
that
like?
It's
like
vindication,
like
you
know,
I
just
proved
myself
to
Dan
Walsh,
like
oh,
your
little
shaky
about
some
stuff.
I'll
talk
to
you
later
and
then
he
walks
into
a
session
and
I'm.
A
Just
pacing,
like
I,
probably
walked
three
miles
in
an
hour
waiting
for
Dan
to
to
get
done
with
his
session.
So
I
could
address
this
immediately,
like
I'm,
not
losing
four
more
nights
of
sleep.
I'm
so
I'm
like
all
right.
What
did
I
not
do
like
I
know
that
was
good.
It's
like
and
he
starts
laughing
he's
like.
A
Oh
yeah,
your
explanation
of
the
IBC
namespaces
a
little
off
and
like
I,
like
I
aged
three
years
in
that
hour,
I'm,
not
asking
Dan
how
this
goes
afterwards,
I'm
just
leaving
so
the
IBC
namespace
lets
us
lets.
Applications
have
shared
memory,
resources
that
are
isolated
to
just
what's
inside
that
what's
inside
the
container,
the
Pig
namespace-
and
this
is
where,
when
I
first
started,
messing
with
containers.
A
Well
blew
my
mind:
every
container
gets
its
own
set
of
paid
counters,
so
the
application
that
you
run
to
start
the
container
your
app
is
paid
one
and
I'm,
an
old
Linux,
dude
paid
ones
important.
The
fact
that
I
can
have
50
Pig
ones
on
my
hosts
and
not
break
the
world,
like
my
mind,
just
exploded,
but
it
works
again.
There's
those
paper
walls
inside
the
Linux
kernel
that
let
that
happen
I
can
actually
see
the
processes
from
the
host
on
the
container
and
they
have
a
pit
assigned
by
the
kernel
the
main
pit
tree.
A
But
from
the
point
of
view
of
the
container,
the
only
process
IDs
that
can
see
are
unique
to
the
container
and
it's
only
the
other
processes
running
in
the
container
at
that
time.
So
I'm
isolating
it
that
way
and
then
I
have
the
UTSA
namespace
that
gives
each
container
unique
host
name
and
domain
name,
and
that
gets
helpful
when
I've
got
you
know,
1,500
nodes,
running
thousands
and
thousands
of
containers
coming
and
going
all
the
time.
So
that's
how
applications
feel
isolated
inside
a
container,
and
if
that
wasn't
enough,
we
actually
have
two
more
ways.
A
...we isolate processes inside OpenShift. The Linux kernel itself, and these resources are shared across all containers, creates control groups that help me eliminate noisy-neighbor syndrome. I can't fix your bad code. You know, ops guys can't fix bad code, as much as we try. But I can make it so you can crash your container without crashing the whole server.
A
So that's noisy neighbors; it's nice to not have them. And then SELinux enforces mandatory access control. I used to have this parlor trick that I did. I used to run the hackerspace in Richmond, Virginia, and there was an old laptop running Fedora, and you had a root prompt sitting there, and I'd given the root user a funky SELinux context that didn't let it do anything. So even though you were UID 0, you couldn't write to /tmp. And I taped a hundred-dollar bill to the top of the laptop and said...
B
So, I like to talk about SELinux and containers being like peanut butter and jelly. If you look at containers, containers are using all these namespaces to give you that virtualized feel, that isolation. And the problem with the namespaces is, if you're running a privileged process inside of the container, you can reverse the namespaces, right? You can potentially mount content.
B
It turns out that slash, the root of the filesystem, is an easily guessable inode, and there's a function call into the Linux kernel called openat, so you can actually say, on this filesystem, open this inode, without going down paths or anything else. You can actually do that if you're a privileged process. Docker had that vulnerability. So if you opened up slash, now you can start listing slash and walk your way down. Well, SELinux instantaneously blocks that, because a container can only use container types: container_t can only read container_file_t.
B
The other breakout: lots of people, if you ever play with containers, whenever you have a running container, all of a sudden you exec a process into the container. You do a docker exec or a runc exec or a podman exec; you can stick a process into a container that's already running. And the problem there is, when you stick the process in, it comes with open file descriptors.
B
So if you had a process that had an open file on the host operating system and you exec'd it into a container, the process inside of the container can now attack your process, get access to those open file descriptors, and then walk the file system. And again, SELinux is the only tool that breaks that, so SELinux and containers is critical. So I know everybody out here has always set enforcing, right? Everyone's running in enforcing mode, right? We don't need to ask everyone.
A
In OpenShift, SELinux is enforcing by default, and the install won't happen if it's not. You have to run SELinux in enforcing mode with OpenShift. You have to go out of your way to turn it off, and then I think we turn the screen red with a skull and crossbones and scream at you before we let you do it. Alright, so that there, that is a modern container application platform. That's how the processes are isolated, that's who handles what, that's how OpenShift works at a high level. Does that make sense?
A
Everybody,
no
unicorn
blood,
no
magic,
voodoo,
no,
no
spells
or
voodoo
bag.
Hoodoo
bags,
nothing
from
this
from
season
8
of
supernatural.
That's
how
containers
work,
that's
how
open
ship
works.
So
why
do
you
care-
and
this
is
where
we
start
talking
about
value
I'm,
an
old
ops
guy
y'all-
are
all
ops
folk.
We
have
two
masters
that
we
serve
in
the
operations
world.
A
We
don't
one
of
them
is
not
the
dev
team.
Remember
we're
all
dev
ops
now
right,
so
it's
all
kumbaya
on
12
string,
guitars.
We
love
each
other.
You
can
argue
with
the
dev
guys,
but
at
the
end
of
the
day,
we're
not
always
beholding
to
them.
We
work
with
them,
but
we
don't
serve
them,
but
ops
teams
do
serve
two
masters.
A
Raise
your
hand
if
your
budget
is
bigger
next
year
than
this
year,
yeah
that's
master
number
one.
None
of
us
have
big
budgets.
None
of
us
have
growing
budgets.
We
always
have
to
do
more
with
less
that's
we're.
That's
that's
our
world!
That's
what
we
do
and
number
two.
We
work
with
the
devs
we're
beholden
to
security.
The
security
team
can
stop
everything
we
do
and
that's
just
the
life
we
live
in.
So
why
should
you
care
about
containers
because
containers
and
OpenShift
help
you
solve
both
of
those
problems
more
efficiently?
More
effectively?
A
Also notice the rock-star status of the ops guy. I just want to point that out. Last year that picture was a bald guy with a big beard, and it hit a little too close to home, so this year I made him a rock star. I don't know how to play guitar. OpenShift and containers help accomplish both of these tasks better. I'm not saying you're not doing it now, but it's better. So first, more with less. With containers, I always talk about them being an evolution.
A
We're
all
hung
on
this
timeline
of
evolution
and
that
evolution
is
about
process
isolation.
We've
been
isolating
processes
since
we've
had
processes
to
isolate.
So
we
have
two
axes
here
go
into
the
right.
We
have
abstraction
and
automate
ability,
which
is
good
and
on
the
bottom,
going
right
to
left.
We
have
size
and
power
consumed
and
it
all
started
with
ENIAC
or
whatever.
You
want
to
call
the
first
computer,
where
we
had
to
physically
go
flip
switches
to
right
ones
and
zeros
into
memory
registers.
A
We
could
run
one
process
at
a
time
and
it
was
the
definition
of
nota
mated.
We
had
to
go
flip
switches.
Then
we
got
mainframes
with
mainframes
came
time
sharing
multiple
people
could
log
in.
At
the
same
time,
multiple
rebook
could
run
applications
at
the
same
time,
they're
still
pretty
big.
There's
one
out.
A
On
the
expo
floor,
it
will
not
fit
in
the
trunk
of
my
rental,
but
mainframes
are
good
and
mainframes
are
still
around
with
thousands
of
mainframe
customers
at
Red
Hat,
and
sometimes
it
does
a
really
good
job
on
the
IRS
they're
one
of
the
customers
I
work
with
they
have
a
mainframe
application.
That's
been
in
constant
production
use
for
57
years.
A
We
got
our
guys
guys
in
the
back,
or
did
you
just
get
a
giant
tax
return
either
one
so
from
mainframes
came
the
client-server
model
pizza
boxes
where
I
could
go
hug
my
server,
so
the
size
of
my
computer's
getting
smaller
the
power
consumed
is
getting
smaller,
still,
not
abstract.
Until
not
automating
I
can
go
hug.
My
database
at
this
point.
Still
that
means
new
project
equals
new
server
equals
new
power,
equals
new
cooling
equals
new
networking
equals
new
cables,
still
not
automating.
A
At
this
point,
I
can't
unless
I
really
want
to
go
nuts
and
have
a
robot
in
my
data
center,
that
can
rack
servers
for
me,
then
we
got
VMs,
and
this
is
where
most
of
us
live
most
of
our
lives
right.
We're
not
100%
containerize,
but
we're
not
running
on
mainframes
for
everything.
Virtual
machines,
VMs
I
can't
go
hug.
My
database
server
anymore,
but
I
can
right-click
and
deploy
one
I.
Don't
have
to
go
rack
a
new
server
every
time.
I
want
a
new
server.
A
I
just
need
to
have
enough
resources
that
are
wracked
to
give
me
a
new
server.
So
now
my
I'm,
my
abstraction,
I've
abstracted
away
my
server
into
a
virtual
thing
that
lives
inside
the
Linux
kernel,
at
least
with
KBM
I'm,
able
to
automate
that,
because
I'm
able
to
talk
to
an
API
I
can
use
things
like
ansible
automate,
the
creation
and
the
management
and
the
destruction
of
my
virtual
machines.
I'm.
Still
too
big,
we
said
earlier.
A
You
know
your
your
disk,
your
default
disk
in
a
VM,
it's
about
40
or
60
gigabytes
and
the
source
code
for
the
application
running
on.
It
is
probably
4
megabytes,
very
small
applications,
but
every
time
I
need
a
new
one.
I
have
to
virtualize
an
entire
hardware
stack
have
to
virtualize
an
entire
kernel.
A
The latest and greatest evolutionary step is containers. Now I have one kernel running, and I'm using the ways to isolate the process we just talked about. Now I can have one kernel and multiple isolated processes on a host. So on my physical server I'm running multiple virtual servers, and on my virtual server I'm running multiple containers. I'm consuming less power for each one, and containers are more automatable, more abstractable. Yeah?
B
We
call
them
containers,
but
what
you
have
in
there
is
a
single
application,
usually
inside
of
a
container
subscribed
way
to
do.
This
is
to
have
a
single
you
know,
web
service
application
or
a
database
application
something
so
you
end
up
in
a
container
world
you're
managing
applications
instead
of
min
managing
virtual
machines
or
physical
machines,
or
you
know
all
the
way
down
the
stack
so
I
always
think
that
containers
is
not
anything
special
of
them,
you're
really.
Looking
at
the
application,
the
only
thing
goes
inside
of
a
container
image.
Is
your
application
on.
A
Like, three feet off the screen is what I was gonna do, but I decided not to be that mean, and just talk about it instead of actually doing it. So that's how we do more with less with containers. I'm able to abstract more, and I'm able to make my application's scalable unit significantly smaller, which means, in essence, I just grew my data center. I can run more application per U, more application per square foot, more application per watt. So containers equal money. I can do more with less with containers.
B
A lot of times, you know, I talk to people and they're always worried, obviously, about security. We've been talking all week to different customers about security, and I have a few key features that I think you should think about. First of all, containers are just regular processes. So, you know, we talked about those namespaces and stuff like that.
B
But
if
you
booted
up
a
rail
system
right
now-
and
you
looked
at
the
first
process
of
a
system
Pig
one
system
D
running
on
the
system-
you
could
actually
go
to
proc
1ns
and
you
would
see
that
system
D
is
running
in
a
bunch
of
namespaces.
Secondly,
if
you
looked
at
as
winning
should
do
PS
easy
you'd
see
that
system
D
is
running
with
SC
Linux
label
associated.
If
you
went
in
cat
it
out
proc
one
slash
cgroups,
you
would
notice
that
system.
B
D
is
running
in
a
bunch
of
C
groups,
so
you
could
argue
if
a
container
is
something
that
runs
in
C
groups
has
security
controls
like
SC
Linux,
wrapped
around
it
and
has
namespaces
it's
a
container.
So
there's
a
t-shirt
that
I
think
of
spa
downstairs,
and
probably
someone
in
this
room
has
it.
We
say:
containers
of
Linux
and
Linux
is
containers,
that's
what
it
means.
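The three checks Dan describes can be reproduced against any process. Inspecting PID 1's namespaces needs root, so this sketch inspects the current shell instead, which is wrapped by exactly the same machinery:

```shell
# 1. It lives in namespaces...
ls /proc/$$/ns
# 2. ...inside control groups...
cat /proc/$$/cgroup
# 3. ...and carries an SELinux label (prints "-" or "unconfined"
#    on hosts where SELinux is not enabled).
cat /proc/$$/attr/current 2>/dev/null || echo "-"
```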
B
Every
process
on
the
system
is
wrapped
by
these
three
things:
container
runtimes,
like
cryo
and
just
going
into
the
kernel
and
modifying
those
a
little
bit,
and
the
bottom
line,
though,
is
and
other
questions
often
asked-
is-
can
I
do
this
inside
of
a
container,
because
people
think
containers
to
think
VMs.
They're
not
VMs
are
just
regular
process
in
the
system.
So
if
you
can
do
it
on
Linux,
you
can
do
it
in
a
container
and
it's
all
a
matter
of
how
much
security
and
how
much
you're
going
to
tighten
down
the
security.
B
But
you
can
do
anything
inside
the
containers
so
so
compliance
he
asked
about
compliance
and
compliance
and
host
applies
to
container
processes.
So
if
your
L
system
is
certified,
you
know
any
government
and
people
in
here
if
we
have
a
certification,
ei,
LP
or
LS
P
P
things
like
that.
These
are
regular
processes.
B
There's
nothing
in
containers
to
make
them
non
compliance
with
the
standard,
well,
compliances,
so
I
think
any
type
of
compliance
is
you
have
it
so
the
next
article
I
advanced
to
here
by
mistake,
but
my
biggest
sadness
about
containers
up
to
this
point.
Is
this
thought
about
route?
Okay?
Well,
when
you
guys
anybody
plays
an
open
ship.
What's
the
one
thing
that
aggravates
you.
A
B
You want to run all the stuff from docker.io. The problem is that all the stuff on docker.io was built wrong: it all assumes that you have to be root to start up. So I wrote an article on opensource.com telling you to just say no to root. Name an application you want to run in a container that requires root. Almost nothing: databases don't require root, Apache doesn't require root, Nginx doesn't require root. Just about nothing that you run on your Linux system needs to be root, other than the things that manage the operating system itself. So why are all these containers built requiring root? There are two reasons. One reason is that they want to bind to ports less than 1024. If you go back to 1968, when networked systems were first coming online, there were six computers in the world, and if you could bind to a port less than 1024, you could be trusted: that meant you were an admin. If you were binding to ports greater than 1024, that meant you were a student at a university or on a military base or whatever, and that was a security concern. But UNIX and Linux have never changed, so we still have the rule that you can't bind to ports less than 1024 without privilege. That's why Apache binding to port 80 requires root. If you're running in containers, we can actually set things up so that Apache can run inside of your container and listen on port 80 without requiring root, because the container runtimes can configure that.
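The workarounds can be seen from a shell. This sketch assumes a reasonably recent kernel (the ip_unprivileged_port_start sysctl appeared in 4.11); the setcap and podman lines are illustrative and left as comments:

```shell
#!/bin/sh
# Why a runtime can hand port 80 to an unprivileged process:

# 1. The privileged-port threshold is just a sysctl; a container runtime
#    can lower it inside the container's network namespace.
THRESHOLD=$(cat /proc/sys/net/ipv4/ip_unprivileged_port_start 2>/dev/null \
            || echo 1024)
echo "ports below $THRESHOLD need privilege here"

# 2. Or grant the one capability instead of full root (run once, as
#    root, at install time -- shown, not executed):
#      setcap cap_net_bind_service=+ep /usr/sbin/httpd

# 3. Or, rootless with podman (image name is illustrative): the
#    container listens on 80 internally, published on a high host port.
#      podman run -d -p 8080:80 registry.access.redhat.com/ubi8/httpd-24
```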
B
The second reason we have to run our containers as root is because we're building them from packages, like RPMs or Debian packages. Those packages come with the assumption that they're going to be installed on a physical machine or a virtual machine, so they assume they're going to be installed and started as root and then drop privileges. So whose fault is that? It's Red Hat's fault. It's my fault, Fedora's fault, Debian's fault, Ubuntu's fault. We've all packaged software like Apache so that it comes in assuming it's being installed as root, and when we packaged all that stuff up into containers, we carried that assumption forward. So here's what you need to do. Your developers are going to come to you and say, "I need root." They're all going to say, "I need root," and you're going to say, "No, you don't. What are you running inside the application? What specifically do you need root for?" If we could just make that switch, so that you weren't running your software as root, then all of a sudden your security concerns go down to basically what we had on time-sharing systems, the security that we have between different users, just by running these things as something that's not root. And you forgot laziness, by the way: "it's OpenShift's fault, it won't let me run this as root." Okay, that's what people complain about.
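In practice, saying no to root is usually one line in the image build. The Containerfile below is a hypothetical sketch (base image, package, UID, and port are illustrative, not from the talk); it is written to a temp file here rather than built:

```shell
#!/bin/sh
# A non-root image: pick an arbitrary UID, chown the paths the app must
# write, and listen on an unprivileged port.
cat > /tmp/Containerfile.demo <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf install -y nginx && \
    chown -R 1001:0 /var/lib/nginx /var/log/nginx
USER 1001
EXPOSE 8080
CMD ["nginx", "-g", "daemon off;"]
EOF
grep '^USER' /tmp/Containerfile.demo
```

OpenShift goes a step further and runs pods under a random non-root UID by default, which is exactly why images that insist on root break there.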
B
Yeah, and then we add all the other security goodness around it. But if you just start with the assumption that you're a non-privileged process, an unprivileged process running an Apache web server or anything else, then you've instantaneously skyrocketed the security. FIPS mode, that's another compliance thing. How many people in here care about FIPS mode?
B
Okay, so there's a handful of you. FIPS mode, for those that don't know, is basically a rule in the Linux kernel that says there are certain crypto algorithms you can't use and certain crypto algorithms you can use, and on a FIPS-mode system the libraries, the SSL libraries and things like that, basically drop the forbidden algorithms if the machine is booted in FIPS mode. Containers don't follow FIPS mode.
B
Okay, if you're running docker containers that come from random distributions or random packages, you can't have FIPS mode, because the libraries inside don't understand it; they don't know how the host booted up. So we're actually working to get FIPS mode fully compatible in our container runtimes. CRI-O now supports FIPS mode, at least for the container runtimes themselves. There was work upstream just last week on golang, so hopefully once we ship a golang package with that, we can go full FIPS mode on these machines.
B
One of the things we're going to do with CRI-O is we've added a flag that basically forces all containers run in production to be read-only. That means they can only write to directories that are explicitly given to them as writable; basically, their entire /usr becomes read-only. It's not in yet; that's 3.10.
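What the flag amounts to, sketched in podman syntax (the command is assembled and printed, not executed; the image and volume names are illustrative, and the crio.conf option should be checked against your version with man 5 crio.conf):

```shell
#!/bin/sh
# Read-only containers: the image content is immutable at runtime, and
# only explicitly granted paths (tmpfs mounts, volumes) are writable.
RO_FLAGS="--read-only --tmpfs /run --tmpfs /tmp"
CMD="podman run $RO_FLAGS -v webdata:/var/www/html:Z my-httpd"
echo "$CMD"

# CRI-O can force the same behavior for every container on a node via
# its configuration (the read_only runtime option in crio.conf).
```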
B
Container images should be minimal. This one seems obvious, and when I'm talking about minimal here, I'm not talking about the Alpine-versus-RHEL thing, which is all about size and really is a developer concern, because the developer wants to pull down a small number of bytes. When I say minimal, I mean we have a whole bunch of cruft that goes into container images that can be used by hostile people. Every container image you get has bash in it, has Python in it.
B
So we built a new tool. docker build basically has the assumption that you have everything required to build the container inside of the container. To me, this is like saying I built a C program and I have to ship make and GCC and everything else along with the executable. I think it's wrong. I think what you need inside the container is just the tools that are required to run your application.
B
So if you're running Apache, the only things that should be in there are the Apache process and the Apache code. With the Buildah tool, which we also introduced and which went 1.0 this week, you can actually build a minimal-sized container image, and these images can get pushed to docker.io or any other container registry, and all the tools can run them. What we do is, instead of making the assumption that you have yum and all that stuff inside of the container image, we pull that all out, so the build host has all those tools, and then you just redirect your output into the container image, so it only contains the content needed for the application. Take a look at Buildah. And replace the image; don't yum update it. How many people have gone into a container and done a yum update? All right, a lot of people have done that. As a matter of fact, you admins are basically advised to do that, because you go onto a VM and you update it when the security fixes come out.
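A minimal Buildah flow looks roughly like this. A sketch: the package and image names are illustrative, buildah mount needs root (or buildah unshare), and the whole thing is guarded so it is a no-op on machines without Buildah:

```shell
#!/bin/sh
# Build from scratch using the HOST's dnf, so the finished image carries
# the application but no package manager and no build tools.
if command -v buildah >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    ctr=$(buildah from scratch)             # empty working container
    mnt=$(buildah mount "$ctr")             # mount its root filesystem
    dnf install -y --installroot "$mnt" \
        --releasever 8 httpd                # host tool writes content in
    buildah umount "$ctr"
    buildah config --cmd "/usr/sbin/httpd -DFOREGROUND" "$ctr"
    buildah commit "$ctr" minimal-httpd     # image has httpd, not dnf
else
    echo "buildah not available; skipping build"
fi
DONE=yes
```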
B
So why not go into a container and run yum update? And of course docker build forces you to have yum inside the container, or DNF inside the container, or apt-get inside the container. Well, if we want to do microservices the way they're supposed to be done, these are supposed to be read-only images that you replace. So if I have a version of my application running and I find out about a vulnerability, what I do is create a whole new image, with the same executable in it and with the fixes in, and I replace it. I don't go into the image and update it. I replace the image. Think of it this way: everything is read-only images, and I go through my build chain to build new application images.
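The replace-not-patch flow, written out as a plan (the registry, image tag, and deployment names are examples, not anything from the session):

```shell
#!/bin/sh
# On a CVE: rebuild, push, and roll the deployment to the new tag.
# The running containers are replaced wholesale; nothing is patched live.
cat > /tmp/rollout-plan.txt <<'EOF'
buildah bud -t registry.example.com/shop/frontend:1.0.1 .
podman push registry.example.com/shop/frontend:1.0.1
oc set image deployment/frontend frontend=registry.example.com/shop/frontend:1.0.1
EOF
cat /tmp/rollout-plan.txt
```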
B
Okay, last statement: physical machines. He actually showed in his slides physical machines versus virtual machines versus containers, and when it comes to security, if you have two apps to run, the most secure way to run those two apps is two physical machines. The more isolated they can be, the better: put one on the moon and one on the earth.
B
That way, if the earth explodes, you'll still have your service up and running, right? The further away, the better. Anybody in the military talks about air-gapped systems, and when they talk about an air gap, they mean no network between them. That's the best isolation.
B
Obviously that's too expensive, so I go from physical machines to virtual machines. Virtual machines: great security, right? There have been very few breakouts of virtual machines. In one of my coloring books I talk about this as being sort of like a duplex house, where you have separation and isolation but you share a common wall. Containers are more like an apartment building: there's a single point of failure.
B
At the same time, if one of them gets hacked, it's really easy to get to the other. But if I take those two applications and, instead of running the two of them side by side on the same server, I put them in containers, I've instantaneously skyrocketed the amount of security. So: physical machines, most secure; virtual machines, next most secure; containers, next most secure; and then plain processes running side by side on a physical machine. But the key factor here is that it's not a zero-sum game. You can use all these technologies together. If you have your app with your database, where my credit card data is stored, take that system and put it on a separate physical machine, inside of a separate virtual machine, behind a firewall, and then run it inside of containers: all of that in one stack. The front ends living in the DMZ run on a separate physical machine, in separate virtual machines, in separate containers, and you isolate them based on the security role of each one of those tiers. I think that's done; that's my preaching, yeah.
A
Yeah, let's parking-lot that one; we've only got about eight minutes left. It's an interesting thought, so let's talk about it after, in about eight minutes. So, "who should use containers?" isn't the right question. We should all use containers. Who should take containers into production, when the data matters? Who should be in charge of the data? And the answer is us: the ops guys, the ops team. We're the people that own the security posture; we're the people that own the compliance posture. We're not the devs.
A
The container revolution did start on developer laptops, so it's kind of skewed that way. But that idea of a full-stack developer: you can't know all of that. It's impossible. You have to separate those concerns, and inside OpenShift, with Kubernetes, those concerns are actually defined programmatically. There's a separation of concerns built into OpenShift.
A
So there is a control plane that the ops team handles, and that control plane is the cluster-wide options: how to attach storage, which storage to attach, project-wide resource limits, project-wide quotas, role-based access control. All of that is completely the purview of the ops team, by design, in Kubernetes and OpenShift. And then there's a data plane, and that's the actual running containers, and that's where the developers live. That's where all of their stuff gets deployed to, but the infrastructure it's all controlled by is managed by the ops team.
A
Looking at storage, we actually divide storage into two things. When the developer writes the application and writes the deployment code for OpenShift, they create a persistent volume claim in the application, which is a request for storage. That persistent volume claim says: I need 10 gigs of storage, and I need it to be ReadWriteMany.
A
It has an access mode. Different storage providers, different storage backends, have different ways to provide storage, and then I can define a storage class, so I could have database storage, or production storage, or dev storage, and that's built into the application definition in OpenShift. That's the data plane. The control plane is the persistent volume, which represents, inside OpenShift, the storage itself: a mountable storage volume. (Dan's phone is going off. That was a weird ringtone, dude. Oh, it's an alarm: you have five minutes.) So a persistent volume is what the ops team controls.
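The data-plane half is a PersistentVolumeClaim matching the request Jamie describes: 10Gi, ReadWriteMany. The claim and class names are illustrative, and the YAML is written to a file here rather than applied with oc:

```shell
#!/bin/sh
# Developer side: a request for storage, with no idea what backs it.
cat > /tmp/pvc-demo.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: database-storage
EOF
cat /tmp/pvc-demo.yaml
```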
A
So, what this looks like inside OpenShift: we create persistent volumes of different sizes, in different classes and different access modes, and they're requested by applications when the applications are launched. What happens in the real world is, in this example, I have an NFS storage system. I have an NFS export, and that NFS export is mounted by OpenShift onto the right host, the one where the application is running. So OpenShift knows: I'm going to put it on this host, and I'm going to take this NFS mount, or this Gluster mount, or this EBS elastic block storage from Amazon. OpenShift actually supports 13 different storage backends out of the box, and then there are third-party ones beyond that. So I'm going to take it and mount it, into some random path down under /var/lib (holy crap, I can't remember all those hashes), and then I take it to the container there. So I have my exported NFS volume.
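The control-plane half is the PersistentVolume the ops team creates against the real backend. An NFS example (server and export path are illustrative):

```shell
#!/bin/sh
# Ops side: the object that describes the actual NFS export OpenShift
# will mount onto whichever node runs the pod.
cat > /tmp/pv-demo.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: database-storage
  nfs:
    server: nfs.example.com
    path: /exports/app-data
EOF
cat /tmp/pv-demo.yaml
```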
A
It's mounted by OpenShift, so it's not actually mounted by the devs inside the container. It's a bind mount, and a bind mount is a kernel trick where I can take a volume mounted in one place and simultaneously mount it in another location. So I can bind-mount it into my container's mount namespace, but to the application, it just looks like a directory. The application doesn't have to know what type of storage it is: doesn't have to know that it's NFS, doesn't have to know that it's Gluster, doesn't have to know that it's Azure, doesn't have to know that it's Amazon. I can abstract all of that away. In the data plane, it's just a request for storage: how much do you need?
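The bind-mount trick itself is easy to demonstrate. This sketch needs mount privileges (root or a user namespace), so it falls back to reading the original path when it can't mount:

```shell
#!/bin/sh
# One directory, visible at two paths: this is how the node's NFS
# mountpoint shows up inside a container as a plain directory.
src=$(mktemp -d); dst=$(mktemp -d)
echo hello > "$src/file"
if mount --bind "$src" "$dst" 2>/dev/null; then
    out=$(cat "$dst/file")    # reads through the bind mount
    umount "$dst"
else
    out=$(cat "$src/file")    # no privileges: read the original instead
    echo "bind mount needs privileges; read source directly"
fi
echo "$out"
rm -rf "$src" "$dst"
```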
A
Containers are the next evolutionary step in how we deliver IT. I guarantee you there is some kid sitting at MIT, or at Butte County Community College, thinking of the thing after containers. It's an evolution; it's not an end game. There will be a better way to isolate a process than a container. I can't fathom what it is, but it will happen. We couldn't fathom containers either, until someone decided to put those pieces together that way. They aren't going anywhere. These are not a fad.
A
This is how we're going to deliver IT for the next five to seven years, so get used to them. There is no monopoly on how to make or manage them, but OpenShift is the best way to do all of that. A little bias again. So, apologies for running right up to the end. Dan's new coloring books: do you have any of the new ones left? Because I need one. I'll