From YouTube: OpenShift for operations
Description
Hear from Jamie Duncan, Cloud Architect at Red Hat, in this breakout session at Red Hat Summit 2017.
Red Hat OpenShift is an amazing, award-winning tool for application developers. It is also a great tool for Operations teams to manage, thanks to its built-in power, flexibility, and scalability. But how can Operations teams make the jump from managing OpenShift to running their own workloads effectively on it? In this session, we cover the OpenShift features that matter most to an Operations team moving workloads to the platform, go through security best practices, and discuss live examples.
https://www.redhat.com/en/summit/2017/agenda/session
That's today's date. So, this is OpenShift for operations. I called it OpenShift for operations because I wanted my talk to get accepted, so I had to have OpenShift in the title somewhere. But no, let's just take a look. I'll say it so it's clear: I'm an old admin. The container revolution kind of started on the developer laptop. What we're seeing, though, is the container revolution being sold to operations teams. They're the people taking this to these systems.
They're taking platforms like OpenShift, and they're managing them, tuning them, and making them work at scale. As I go out and talk to these folks, I run into questions about stuff I assume is common knowledge, because my job is to tinker with this stuff all day long. Like: how do you tcpdump a container easily? Raise your hand if you know how to do that. Yeah, right. Okay, so this guy, he's the smart guy.
He's probably a core committer on Docker or something; I just can't see his name badge. And how do you easily look at all the processes running inside a container from the host? Again: not hard, but not obvious. So here's what I want to do today. First, a little bit about me: I've been at Red Hat for a little over five years now, and I was a Red Hat TAM for four years.
I helped start the public sector TAM team. I've been in the sales organization for public sector a little over a year, and in that time I've worked with every major US government agency, including the scary ones. So I have a lot of scar tissue on my back, a lot of miles, a lot of miles of bad road. And that is my daughter Elizabeth. She is the cutest thing ever; that is not negotiable. I mean, seriously, she's cute.
She was at swim lessons this morning and I got video, so now I just want to go back to Richmond. All right, so we have a little bit of an agenda. I was supposed to start by defining a task, and instead I decided to show you guys a picture of my kid. We have a task, and we always have a few ground rules. We'll talk about how containers are taking over the world. I do have a joke in there that I hope lands, because if it does, it'll be awesome.
We'll talk about deploying an application with OpenShift. The rest of this deck runs under that guise: we'll deploy an application in OpenShift, which is the most common thing done with OpenShift, and then we're going to take a look at what actually happens. Then we're going to take that application and do a little bit of inspection with it. We're going to figure out what's going into that container and out of that container.
How do I analyze that from the host? Because when I can do it from the host, that means I can attach it to monitoring systems and tripwires and all sorts of fun stuff. Then we'll talk about what happens inside OpenShift, and then we'll go down into the kernel, because it couldn't be a good geek talk without talking about stuff inside the kernel. Then we'll take a look at networking: you can't really use OpenShift without understanding, at least at a high level, how the networking works. It's a little non-intuitive.
So I want to walk through that for a few minutes, and then we'll take a look at how PIDs, process IDs, are isolated inside a container. It's not exactly the way we think. Most of the time, when we talk about process isolation, we describe some large immutable group inside the kernel that you can't see out of or into unless you have these very special things called container runtimes.
Yet none of that's true, so we'll take a look at reality a little bit, and then we'll hit a couple of conclusions. Before we get started: containers are taking over. Can everyone agree with that statement as basic fact? Containers are taking over the world as we know it. Does anyone not have some flavor of container initiative going on right now? Really? No containers, nothing? Wow.
This is the world as we know it. Come on, man, it's that... look, a Catan joke, all right, so the joke didn't land. It's the world as we know it. We know that, come on, we're super geeks. Let's do that. Okay, I think that actually did have the expansion deck in it. I didn't count, but I think it did.
So, today's task, what we're actually going to be doing: OpenShift. I know this sounds salesy, and I am technically a sales guy, but it is a statement I can back up with facts. OpenShift is the most complete production-ready container platform out there, and there are dozens of platforms out there. When you want to drop something in place, use it, and scale your application out, with thousands of applications and hundreds of nodes, the container platform that is proven to do it in production, time and again, is OpenShift. There's good reason behind that, and we'll get to that reason in a little bit. I'll happily have a cold beverage with anyone who disagrees after this session.
So if anyone is a giant Docker Datacenter fan or a giant Mesos head: yeah, let's go get a beer and figure it out, because I would love to hear good counterpoints to that statement, and to heckle you a little bit. So today we're going to take a deeper dive into what exactly is going on when you deploy an application inside OpenShift. I'm a firm believer in this; it's the idea I try to base all of my talks around.
Containers are cool. When we went from physical machines to virtual machines, it wasn't that giant a shift mentally. Sure, there were hypervisors and all of that stuff, but it still looked like a server when I logged into it. The same stuff was in the same place, my applications went in the same place, and everything was there. It was just virtualized.
A container platform feels weird to an ops guy. You can't go put your hands on it. You can't go look in vSphere and say: this app is these three machines, and here's my database. It's going to float around out in this ether that is a container platform. So you've got to have a firm grasp on the basics of what containers are doing.
A
You
know
the
namespace
code
in
the
kernel
that
could
you
know,
kill
us
all,
but
just
a
firm
concept
of.
What's
going
on
inside
the
container,
we
always
have
a
few
ground
rules,
and
these
ground
rules
are
more
from
my
benefit
than
yours.
But
this
is
a
discussion.
Not
a
lecture
me
standing
up
here
for
the
next
40
minutes
and
talking
will
be
boring
for
me
and
for
you
have
opinions
voice
them,
there's
a
very
good
chance.
You
know
something
that
I
don't
know.
It's
a
discussion,
not
a
lecture.
The information in this deck is accurate to the best of my ability as of OpenShift 3.5. No technology is moving faster than containers. I have literally given a container talk, looked at the deck the night before, and something was substantively different the next day when I actually delivered the presentation, and I got called out on it. So it is accurate as of about a day and a half ago; it's accurate as of the horrible Gogo internet on my flight from Richmond. And this is a nuts-and-bolts talk.
A
I'm,
a
nuts
and
bolts
guy
I
like
to
see
how
things
work
and
I
like
to
show
and
help
people
understand
that
we're
not
talking
about
roadmaps
and
there
are
no
changelogs
in
here.
So
if
you
want
to
roadmap,
talk,
I,
don't
know
who
they
are,
but
I'm
sure
there's
a
different
session.
At
the
end
of
the
day,
it's
all
about
getting
some
new
information
and
having
a
couple
of
laughs,
I,
hope
the
Kattan
joke
didn't
land
but
I'll
work
on
it.
There's another joke cooked in here, and I hope that one fares better. All right, so we start out with this sort of demo-driven idea. The first thing we do is type in oc new-app, and whether you go through the command line or through the web interface for OpenShift, this is how you build out an application. oc is the command-line client for OpenShift. With oc new-app, you give it a name and you tell it some code that you want to use.
A
You
know
in
this
case
it's
some
little
silly
application
that
I
wrote
for
a
project
I'm
working
on
and
then
I
tell
it
that
I
want
that
to
be
a
PHP
app.
It's
all
I
got
tell
OpenShift
OpenShift
takes
my
code,
makes
it
magic
happen
in
the
middle
of
the
gears
turn
and
seconds
or
minutes
later,
depending
on
where
I'm
at
in
the
Wi-Fi
world
my
applications
up
and
running
everyone.
Most
some
people
have
done
this.
You
guys,
okay,
with
this
idea,
I
feed
it
code.
I
tell
it
what
the
code
is.
A
Magic
I
have
a
URL
to
go.
Look
at
my
code.
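As a rough sketch of that first step (the repository URL and app name here are placeholders, not the actual ones from the demo):

    # oc new-app: name the app and point it at the code.
    # The php~ prefix tells OpenShift to build this source with its PHP builder image.
    oc new-app php~https://github.com/example/silly-app.git --name=silly-app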
So that's where we're going to start. We have deployed, let's assume, on the cluster running on my laptop, and we can pull it up after the talk if you want, and it works. So raise your hands if this looks familiar. Now let's connect the dots a little bit: what actually happens in those intervening seconds and minutes? There are no voodoo chants. We don't have voodoo dolls to actually inflict harm or magic on a server.
OpenShift builds out a custom container image. It takes your source code, takes what we call a builder image, marries the two together, creates a custom container image just for your app, and stores it in its internal registry. So now we have a container image for the app that we're going to deploy. It also creates a thing called an image stream that ties all of these other resources together.
My image stream says: all right, this is the name of the app, this is the place, these are the images I'm going to use for it, and here's how many replicas I want. It takes all of the information that I want for an application, and we start populating this image stream. Then I build a thing called a build config, and I tie that into my image stream.
The build config defines how my app is built. I say: I'm going to take this image, I need to mount these persistent volumes into it, I need to grab this from the host, I need to go download this SQL file, whatever I need to do to build my application and create this custom container image. And that's associated with my image stream.
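A trimmed sketch of the kind of build config this generates for the hypothetical app above; the real object carries more fields, but this is the shape (OpenShift 3.x style):

    apiVersion: v1
    kind: BuildConfig
    metadata:
      name: silly-app
    spec:
      source:
        git:
          uri: https://github.com/example/silly-app.git   # placeholder repo
      strategy:
        type: Source
        sourceStrategy:
          from:
            kind: ImageStreamTag
            name: php:latest              # the builder image
      output:
        to:
          kind: ImageStreamTag
          name: silly-app:latest          # the custom image it produces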
Then I create a thing called a deployment config. The deployment config tells OpenShift how my app is going to be deployed: here's the build, here's the image, I want you to deploy ten of these things, I want the pod to be named this and the containers to be named that, and when you upgrade down the road I want you to use a rolling upgrade method, or a blue-green or canary method, to actually update this application. So I'm building out all of these facts about this image, and all of this is happening in those seconds or minutes while my app is deploying inside OpenShift. The deployment config is kept track of by the image stream; the image stream is my source of truth for everything.
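And a similarly trimmed, hypothetical sketch of the deployment config side, with the replica count and upgrade strategy described above:

    apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: silly-app
    spec:
      replicas: 10            # "thou shalt have ten of these running"
      strategy:
        type: Rolling         # the rolling upgrade method
      template:
        metadata:
          labels:
            app: silly-app
        spec:
          containers:
          - name: silly-app
            image: silly-app:latest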
Then I also update a load balancer configuration, because eventually I'm going to have a URL to get to this thing. OpenShift runs HAProxy internally, or it can hook into load balancers like F5 BIG-IP. So I update those configurations with a URL that points to my application: I have HAProxy running inside OpenShift, and it's going to point at my app.
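If you were wiring that piece up by hand, the command-line face of it looks roughly like this (hostname is a placeholder):

    # Create a route: the HAProxy router starts answering for this URL
    # and forwarding to the app's service.
    oc expose service silly-app --hostname=silly-app.apps.example.com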
Kubernetes then builds a replication controller. The replication controller is the Kubernetes thing that says: thou shalt always have ten of these container images running. When I tell it, no, I only want five, the replication controller scales down to those five, or scales up to 500, or does whatever I need it to do. Kubernetes creates that object inside the Kubernetes database. It also creates a thing called a service, a single endpoint that can direct traffic to all of the pods inside that replication controller, all of the containers inside the replication controller.
Then Kubernetes talks to Docker. Jeez, this is long. Kubernetes talks to Docker to actually create... have we even built a container yet? All we have is a container image and a bunch of YAML at this point. So now Kubernetes talks to Docker, the container runtime, to create the containers that go into this new replication controller that Kubernetes just created. And that moves us down into the kernel, where we have to think: what is a container?
Yeah, forgot one. Oh, there's... yeah, there's the process one, and SELinux is the last one. "Stupid SELinux, that thing we disabled."
So then, Docker. What Docker does is manage a bunch of these; he just named most of them, and I'm not going to repeat them here because I'll repeat them over the course of this slide, but he was almost 100% right. Docker interfaces with the Linux kernel, and in the Linux kernel we build out a bunch of things to help isolate this process that we're about to start.
We take this custom container image, and we take any mount points that we defined in our build config and our deployment config and our image stream, and we mount them all in: shared storage, or a host directory from the actual host, or whatever we need. We put them in what's called a mount namespace inside the kernel, and that mount namespace makes that chunk of the filesystem feel like it's isolated.
It can't see anything else on the host. We can see into it; it can't see out of it. So my application's filesystem feels like it's isolated. That's why every container can have its own /tmp directory and its own /etc and its own set of libraries. That's all possible because of the mount namespace. So is that voodoo magic? No, it's just a mount namespace.
We use an IPC namespace, for inter-process communication, so we can cordon off chunks of RAM and have multiple shared memory resources with the same name running on the same server. We don't get segfaults; everything just works happily. So again: just kernel tricks, things that have existed in the kernel for years. The IPC namespace showed up around 2010-2011; it's been around for years.
We just haven't been leveraging them very well. Then each container gets its own hostname and domain name, and we use what's called a UNIX time-sharing (UTS) namespace for that. I really had to dig in to figure out why it's called UNIX time sharing, and this is probably the dorkiest moment of the 45 minutes I'm up here: there's a data structure inside the kernel that keeps track of your hostname and the time zone you're in and a bunch of other stuff, and it's called the UTS data structure.
A
Well,
that's
also
why
you
name
like
if
you
look
at
the
kernel
release
you're
running
at
you
name.
Are
you
named
XA,
though
you
is
because
it's
hitting
that
same
chunk
of
data
inside
the
kernel?
That's
why
this
is
called
so
my
own
host
name
and
domain
name,
because
I
have
if
I've
got
500
containers
running
on
a
box
or
I,
have
an
application
running
in
50
container,
stretched
across
100
servers,
figuring
out
the
host
name
with
ones
acting
funny.
It's
going
to
really
help
me
in
troubleshooting
and
forensic
analysis.
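This isn't the Docker code path, but you can poke at the same kernel feature directly with util-linux's unshare; a minimal sketch, run as root:

    # New UTS namespace: the hostname change is invisible to the host.
    unshare --uts sh -c 'hostname lonely-container; hostname'
    hostname    # back outside the namespace: unchanged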
Then my container gets its own set of PIDs. The application I use to launch the container looks like PID 1 inside the container; the host's PID 1 is not in there. Say I'm starting Apache to run a PHP app, which is what this example is, and we'll look at it in depth in just a few minutes.
In that case, httpd is PID 1. When I first started looking at containers three or four years ago, that blew my mind. Isn't PID 1 important in Linux? PID 1 is the thing that starts all the other things, except in a container, where PID 1 is just the process you're running. So every container gets its own set of PID counters, and Docker is creating all of these resources for us and putting our application in them when it launches it, on the fly. This is what a container actually is.
Then Docker also tells Linux to create kernel control groups. Anybody using kernel control groups a lot? What are you all using them for? Nice, awesome. I used them to keep Java from eating the world; that was my primary use case. Nothing can fix that code, but I could put Java apps in a cgroup, a control group, and the app would consume all of the resources it was given and eat itself, and I'd have to restart the service, but not the server.
Control groups cover disk I/O, network I/O, CPU cycles, and RAM access. I can throttle all of those, so I get rid of noisy-neighbor syndrome with containers by using kernel control groups, and Docker creates those on the fly for me, in milliseconds.
The other thing the container runtime does is create a unique SELinux context for each container. I'm going to ask a question, and everyone raise your hand or I'm going to get really angry: who is running their systems with SELinux enforcing? Who's lying?
Yes, all right. You had a free t-shirt; you don't have it anymore. SELinux is a labeling system cooked into the kernel that lives below the filesystem attributes we usually associate with ownership. It's a labeling system, so I can do things with SELinux like tell the Apache executable that it can only read Apache config files carrying the Apache config label, and that it can only write into directories carrying the Apache read/write directory label. I can enforce incredibly fine-grained controls with SELinux. And with containers:
Every container gets a unique SELinux context. So even if I somehow manipulate code to get from my container to the host, or from my container to another container, I go into that space wearing the SELinux context of the original container, and I can't do anything. I'm in a container, I've got root, I'm root on the host, and I can't even write to /tmp.
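You can see those per-container contexts from the host. On a Docker-era RHEL node the container process type was svirt_lxc_net_t, and the category pair at the end of the label is the part that's unique per container:

    # Every containerized process carries its own MCS category pair,
    # e.g. ...:svirt_lxc_net_t:s0:c12,c34
    ps -eZ | grep svirt_lxc_net_t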
That's what SELinux does for us in the container space, and it actually does the same thing for virtual machines, with a thing called sVirt. So we create a unique SELinux context for every new container that we create. So, all of that text: let's look at it again, said very fast, in an infographic. We have OpenShift.
OpenShift creates a container image for us; creates an image stream for us, to associate all of this information; creates the build config for us, to build our app; and creates the deployment config, to deploy our app and upgrade our app. It creates an HAProxy configuration so traffic gets in and out cleanly and I don't have to mess with IP addresses inside Kubernetes.
OpenShift then talks to Kube itself ("Kube" is shorthand for Kubernetes, like "k8s"). Kubernetes creates a replication controller, which says how many of these types of containers I want to keep running at all times. It also creates a service, which gives me one place to go to talk to all of those containers, and Kubernetes will route the traffic no matter what host it's on; that service knows where all of the endpoints are. Then we get to Docker.
Docker creates a mount namespace for our filesystem isolation; a network namespace for our network isolation; an IPC namespace for our shared memory isolation; a UTS namespace for our unique hostname and domain name; and a PID namespace, so all of the process IDs inside our container are unique to that container. PIDs get pretty important, and this is what lets you just say "kill PID 500" and know which process you're killing.
There's only one SELinux and only one cgroups facility on the host, so we create unique SELinux contexts to keep our container effectively isolated on the host, and we create control groups to effectively isolate our container's resources: to make sure we don't run into noisy-neighbor syndrome, to make sure one container can't suck in all the resources and drag down any other containers on the host. Does all that make sense? I've kind of hit you guys with a fire hose for about 20 minutes.
That's what we're doing. Even if you take off all of the cool stuff with OpenShift and Kubernetes, it's all the Linux kernel. All of those things that isolate one container from another, that make my application in a container feel like it's sitting by itself even though it's sitting with dozens or hundreds of friends on the same host, are dug deep down into the Linux kernel. We're talking about deep, dark Linux code here, folks. It's not DevOps-y; this is super geek. All right!
How do I apply all of that to a container? I guarantee you Nexus has no concept of a namespace. So let's start with what everyone always says is the hardest part. When people start talking about emerging technology and they ask about networking in OpenShift or OpenStack, my answer is: yeah, it's hard. That's all I ever say, and then I try to move on to the next thing. But let's start with networking; let's just jump at the gorilla. And also, please note the Linux beard.
So you can tell this is an ops guy; this is no developer, this is a bald, bearded Linux guy. Here's what you need to know about OpenShift's networking: OpenShift uses software-defined networking to better isolate resources inside the cluster, and software-defined networking can sound a little scary, but it's actually not that weird. Let's say we have two containers up and running. Each of those containers has the interface that was created by its network namespace. So: two containers.
Each of those virtual Ethernet devices lives in an Open vSwitch bridge. This isn't a Linux bridge; it's an Open vSwitch bridge, which makes it fancier, but they do essentially the same thing. They take all of these different interfaces and let them share resources and route traffic more efficiently to one another. There's always a bridge, br0, on an OpenShift host, and that br0 contains all of the virtual Ethernet devices that are our container NICs.
The cluster is tied together over a VXLAN, and I can actually isolate project network traffic at the project level, so I can get really fancy with the way I isolate traffic inside my OpenShift cluster. Then, if I need to get out to the interwebs, which we all know is a series of tubes, there's a tunnel device, an old tun/tap interface, also attached to that bridge, and it's routed out the front door of the server to the wider world through whatever the default gateway is.
This is exactly what networking looks like inside OpenShift. Each container gets a NIC; each NIC is mapped to a NIC in the host's network namespace. To get out to the rest of my OpenShift cluster, I go through a VXLAN, encrypted end to end; and to get out to the world, I go through a tunnel interface, which is just routed out through whatever my default gateway is, or whatever my routing needs to be for my infrastructure setup. Really, this is pretty straightforward.
This is how most software-defined networking works; it's all just VXLANs and tunnel interfaces. To see it in action: ovs-vsctl, here on an OpenShift cluster. ovs-vsctl, not the easiest thing to say or type, is the Open vSwitch control program. I can list the bridges, and I get br0.
A
So
I
have
one
bridge
here
and
then
I
can
also
list
the
interfaces
in
br0,
and
you
can
also
see
where
right
here,
I
have
a
brain,
fart
and
then
I
keep
typing
and
there's
tons
here,
o
v,
XL
and
zero,
and
my
three
ves
well
will
let
me
at
and
I
keep
going.
So
you
see
this
the
number.
The
problem
is
those
numbers,
those
hashes
for
those
ve
devices
are
created
by
open
V
switch,
not
by
kubernetes
there's,
no,
no
easy
mapping,
I
can't
say
old.
I can't say: oh, this hash matches a hash in Docker, or a hash in Kube, or a hash in OpenShift. There's no intuitive correlation there, but there is a correlation. So how do we figure it out? Say I wanted to do a tcpdump, and everyone has to do a tcpdump from time to time: some sort of packet inspection, or grabbing a header, or seeing if I'm getting retransmits. I can watch the veth interface on the host and see everything coming and going for my container.
But how do I know which one? Well, I said they're linked inside the kernel, and that's a completely accurate statement. If you go look at the kernel man pages, which I do not encourage anyone to do, there's actually an iflink value, and the kernel links one interface to another by setting this value. I'm actually showing this here. I can do an oc get pods and grab the app we deployed earlier. This was an honest demo.
I did the exact same thing, on the same system. I grab the pod name (the pod is just my replication controller's unit; it's just my app), then oc exec: I execute a command inside that container, and I cat out a file from /sys, in this case /sys/class/net/eth0/iflink. I got a value back of 23. Then I look back on my host; I actually SSH into the host where this container was running.
I cat one file in the container, and then I have the interface I need to watch all the traffic coming and going from that container, on the host. I've got to be root to do all of that, but it's there. Everyone cool with that? So now everyone in this room can tcpdump a container in about seven seconds: you cat out one file in the container, and you know which host interface you're associated with.
(Audience question.) Maybe; I can't come up with a scenario unless the kernel itself was corrupting data. This is linked at the kernel level, so if it goes into the virtual interface, it is there. They're essentially carbon copies of one another; they're just in different network namespaces, so they can't see each other easily. So unless the kernel itself was doing something bad...
Maybe, yeah. I'd have to test it; I'd want to watch it on both sides. But it holds for the vast majority of traffic troubleshooting. So that's how you tcpdump a container. That's it. Ops guys have to do that stuff every day; that's kind of what we do for a living.
Go do some packet inspection; go see if I'm getting retransmits, or whether my headers are actually making it through, because I can't see them through all of my usual troubleshooting tools. The next thing I wanted to show (we started at 3:30, so we've got about 15 minutes left, which will leave five or ten minutes for wide-open Q&A) is: how do I look at the container from the host?
I want to be able to see what processes are running in a container from the host. This could be good for something like monitoring applications, or for any kind of CI/CD workflow. I could actually kill a process inside the container from the host. If Docker wasn't responding correctly, or I couldn't get into the container for some reason (say I didn't have a bash executable in the container, so I can't create a shell in it), there are all sorts of scenarios.
Excuse me. There are scenarios where I need to do something to a process in the container, but I can't get in there to do it; I can't get into that namespace. I can do it from the host instead. So here I'm actually using Docker (we'll let it restart again). I'm using docker ps to show me all the containers that are running, and I just grep for the one that I deployed. That unique ID is Docker-specific.
Then I can use docker inspect and pull out just the PID that I'm looking for. I'm only asking for a subset of the data coming out of docker inspect; I just didn't want it to scroll for ten years on this graphic. So: docker inspect, and I see that I've got PID number (almost had it timed) 4218. So, PID 1 in that container:
the process that launched my container, the one I was talking about earlier, is actually PID 4218 on the host. Then I can look and see what other processes that PID has spawned. Let's do a simple pstree, or just ps and grep, and look for PID 4218. I see here it's Apache, just like I promised, and it has spawned some child processes. Does that make sense to everybody? I'm just asking Docker for a PID, and I'm feeding that PID into pstree.
I'm logged in as root, and this kind of scenario is about troubleshooting something that's not acting right in OpenShift. If an application is acting improperly, or something just doesn't feel right to an ops guy, if your trick knee has kicked in (and we all have that, when something just doesn't smell right), then we can go in and start inspecting stuff. This is completely out of band from OpenShift management; this is what someone with root access to the OpenShift cluster could do.
This isn't available to just anyone. Then, if we look at the container itself: instead of using oc exec here, I just decided to use docker exec, because under the covers they're the same thing. I execute a bash shell inside my container, I check that I have a different hostname just to prove I'm in a container, and then I get the exact same output that I got from pstree, only now, instead of PID 4218, httpd is PID 1. So, namespaces.
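The same check from the inside, with the same placeholder ID:

    docker exec -it 7f3ab12c /bin/bash
    hostname     # the container's UTS hostname, not the node's
    ps -ef       # the same httpd tree, but in here httpd is PID 1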
Namespaces aren't these crazy immutable brick walls inside the kernel; they're two-way mirrors. If I have the right level of privilege, I can leverage the operating system to see into the container from the host. I can't see from the container out into the host. I can go from the host in; I can't go from the container out. That's what namespaces, that's what containers, let us do. It's just process isolation; there's no magic voodoo. Does that make sense to everyone?
Does that make sense, everybody? Containers are just another tool for operations to use; they're the next layer of density. Everyone here, I'm sure, is very heavily virtualized, and this is kind of a non sequitur, but: who's got an average utilization on their virtual server cluster north of 20 percent? You're the best at your job I've ever seen. How many...
Okay. So we had a ton of physical systems sitting at 3% utilization, and then we P2V'd the world, and now we have a ton of VMs, but we never lost that idea of worst-case-scenario provisioning. We over-provision our VMs consistently, because we have to build them for the worst day, because it takes minutes or sometimes hours to scale them up properly. Containers I can scale up in milliseconds, so I can right-size a container much faster, much more easily.
So what we see out in the world is miles and miles and miles of VMware, and the average VM is sitting at five percent utilization all the time, if you're lucky. We're still just wasting resources. But now I can take something like OpenShift, drop dozens or hundreds of containers on each host, and effectively utilize an OpenShift host: if I'm good, 70, 80, 90 percent consistently. I can sweat my hardware more. Who's got a bigger budget next year than this year? Really? Where do you work?
I want to give you my resume. A bank? All right, maybe, yeah. He's probably telling the truth. One hand went up out of a room of a hundred people; two hands went up, but you both work at a bank. Two guys working at a bank, wow. I wouldn't have picked that one. But for the vast majority of people: who's got more responsibility next year than this year? Almost everyone's hand went up, and the rest of you didn't want to admit it.
So out of a hundred people, everyone has more work to do, and two percent of you, plus or minus, have more budget to do it with. You've got to sweat your hardware more; you've got to automate everything. That's the only way you survive. I'm able to do all of the stuff we just talked about, create all of this magical YAML, deploy all these applications across all these servers, and then actually go inspect them pretty effectively and pretty easily, using traditional tools. I'm cat-ing a file and using pstree.
Nothing fancy; good stuff we use every day. So I'm able to use traditional tooling to inspect containers and really get at that information, and make them feel like a regular part of my operations. And I'm able to sweat my hardware more effectively, because containers allow more density: I can have 50 port 80s on a single host. I'm a firm believer that 80% of virtual machines exist because someone needed another port 80. Pretty sure.
That's fact. I don't have hard data on it, but I'm pretty sure. We don't have to do that anymore. We don't have to isolate processes with virtual machines; we can isolate processes with containers, and we get more effective use of our hardware. With most customers we can shrink 100 VMs down to 10 or 12 hosts using containers, and we can do it effectively, with an immeasurable loss of performance.
Containers add a percent, maybe, to latency. All of those things inside the kernel are incredibly fast, and containers spawn up in milliseconds. So, to roll through a couple of conclusions: OpenShift automates a lot of the work of making applications elastic. Why do I have to have all that YAML, the build configs and deployment configs and image streams and replication controllers and namespaces and all that stuff? You don't; OpenShift just automates it for you.
If you decide to roll your own Kubernetes, you can figure out how to do all of that yourself, and maybe you're as smart as Google. I know these two guys have more budget to throw at it, but most of us aren't; I couldn't invent Kubernetes tomorrow, and I couldn't reverse-engineer it in two or three years. So OpenShift automates all of that stuff using open source software. For an ops team:
You have to think about applications differently inside OpenShift, but you can analyze them with traditional tooling, and you can make that traditional tooling the first step to getting them effectively into your monitoring applications, into your change management controls, into your mainstream ops pipelines. You don't have to treat containers like unicorns anymore. OpenShift ships with a lot of tools to get you the information you need. It's Linux; we're just manipulating Linux a little differently.
Nowhere in a diagram or a screen capture did I reference voodoo or unicorn blood. There is no magic in containers. Containers do not equal magic; it's just Linux. Containers equal more effective process isolation. And just to drive it home, most importantly, in my humble opinion: containers equal Linux. And since you're at Red Hat Summit, I'm going to go ahead and say it: Linux equals RHEL. That's it. I don't care what the marketing people at any other company say. Containers are Linux; we're leveraging Linux in a new way. That's it. So we have five minutes.
Anybody want to call me an idiot on tape? Please feel free; go for it. (Audience question.) So the question was a little more about how the physical resources are isolated. The tool in that toolbox is control groups. When I create a control group in the wider world, I can associate it with a user, a group, or a process ID. With containers, we're associating it with a process ID.
So when Docker is creating a container, when the container runtime is spawning a new container, it's also putting those applications in a cgroup, and with that cgroup I can limit the number of CPU cycles; I can limit the number of cores it has available, or pin it to a specific core; I can limit the total amount of RAM it has; I can limit the I/O per second to memory;
and I can limit the I/Os per second to disk and network. That's how we sandbox the physical resources per container; it's all inside the cgroup module in the kernel. In the world of OpenShift, you declare all of that in your build config. There are options that say: I want to limit this container.
You can set defaults (you'll want to set defaults per project), and then, if you have something that's a one-off inside a project, you can declare its limits independently and constrain your resources that way.
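The shape of such a limit is the standard resources block on a container spec; the values here are arbitrary:

    resources:
      limits:
        cpu: 500m         # half a core's worth of CPU cycles
        memory: 256Mi     # the cgroup memory cap for the container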
(Audience question.) So the question was: if I've scaled my applications dynamically up to the limits of my application platform, can I scale my application platform the same way?
The short answer is no: OpenShift doesn't have a mechanism inside itself to build out a new node. But the install tool, which is just an Ansible playbook under the covers (you can go prove it), can add a new node to the cluster in a pretty trivial fashion. When people are doing this really well, you have these sorts of axioms of scalability. I have an elastically scalable application on the platform.
My application platform lives on an elastically scalable infrastructure like OpenStack, or a traditional virtualization platform like VMware or Red Hat Virtualization controlled by something like CloudForms. OpenShift on OpenStack is the sexiest thing you've ever seen, because then you can set all the scaling of the cluster, automate all of the steps to add a new host to the cluster, and it just goes up and down on its own.
It's some pretty cool stuff. But inside OpenShift, we figured it was going to be hard enough to scale the app; we didn't want to try to glue in code for the platform to scale itself, because you can run OpenShift anywhere you can run RHEL, and we'd just be writing a big fat hack at that point. You can run RHEL pretty much anywhere. I mean, Red Hat Linux is running on Mars, for God's sake. (Audience: not a new enough kernel to run containers.)
Okay, the kernel is a little out of date on the rover, but it is the Red Hat kernel. Any other questions? We have just a couple of minutes before they kick us out. Questions, concerns, anyone? Anyone want to buy me a beer and tell me there's a better platform? (Audience comment.) It is, in fact; I'm told there will soon be an RFE.
Literally two days ago (we did a lab on Tuesday), a guy from Penn State comes up and says this information needs to be easier to get at, and yes, you're exactly right. Right now I can figure out where that endpoint is with a cat. You know, I cheated here and had a one-node (well, a two-node) OpenShift cluster, so that cheated a little. I did the infomercial thing: I put the raw turkey in and pulled the cooked turkey out. But yeah.
If I did a multi-node setup, I could still figure out which node the pod is on: oc get pods will tell me all the different pods running for an app, and each is mapped to a host there. So I did skip a step by running a single node. And that may be exactly it; I sent an email out to some of our engineering and to that customer, maybe last night about one a.m., saying that exact thing: that we need to.
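That pod-to-host mapping is there today in the wide output:

    oc get pods -o wide    # adds a NODE column mapping each pod to its host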
We need to make oc in particular, the command-line client, and OpenShift in general a little more operator-friendly, ops-friendly, and cooking this kind of thing into it would be a great first step and a pretty low lift. I think we're right up on time. Anybody who wants to chat outside afterwards, I'm around. That's what I've got. Thanks a lot.