Can I Contain This? with Ed Seymour @OpenShift Commons Gathering Helsinki 2018
...you can find the right kinds of services teams to help you build platforms, and also how you can build out an approach for containerizing existing and legacy applications as well, so it covers some very similar things.
What I was going to do today is talk about just containerization: taking existing apps and how we can get those into containers. So I thought it would probably be a good idea to walk back to what a container actually is.
If we're running an application in a container, what are we expecting to see? If I run an application traditionally on a Linux operating system, I'll have an application; it may have a number of dependencies, its libraries, and then it talks to the Linux kernel. When I containerize it, I basically do exactly the same thing.
And so the Linux operating system that you have is still very important. We're not virtualizing: your application is still running on that kernel, and that does have a bearing on what you can and can't containerize — on what types of applications you can containerize.
So why is putting stuff in a container a good thing? Well, first of all, we can distribute our applications much more easily. Before we had containers, you'd have lots of different ways of packaging up your applications, or of explaining how to install those applications in an environment. Now we have a container image; we can take that container image, and if your ops team knows how to run containers, then we're good — we're able to distribute it. And because of the way that we build our container images with layers, I only need to distribute the layers that have changed. So if I've only changed the layer that has the app, then the other layers that carry the dependencies — if they've stayed the same — don't need to be sent out with it. So distribution is really straightforward and nice and easy.
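As a sketch of that layer reuse, consider an ordinary Dockerfile (the base image, paths and names here are illustrative, not from the talk):

```dockerfile
# Base and dependency layers: rebuilt and re-distributed only when they change
FROM registry.access.redhat.com/ubi8/openjdk-11
COPY lib/ /deployments/lib/

# Application layer: usually the only layer that changes between releases,
# so it is typically the only layer a registry has to transfer on pull
COPY target/app.jar /deployments/app.jar
```

Because each instruction produces a layer, a release that only changes `app.jar` reuses every earlier cached layer.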
It also means that we're consistent. The idea with a container is that we don't rebuild our application for every different environment: we build one container image and distribute it across all the environments we're going to run it in, and it's the same container image that we're running through the software development life cycle — starting in dev, moving into test and then into production. Ultimately, it means that what you've got is consistent across those different areas.
It also means that you can choose different software technologies and languages, so you're creating a nice abstraction between yourself and the people who need to run the containers. If your developers want to use Go or Rust or Java or whatever it is, they're free to put that into those container images, and from a runtime perspective we're able to run them irrespective of the technology and language inside the container.
It also means that your application is encapsulated and protected against other applications and against that underlying operating system. Because it carries its dependencies with it, if the ops team wants to, say, patch the operating system, or another application that happens to run on that host needs to be patched, all of your dependencies are still within the container. So we don't have the issue of "if I patch this, it's going to have a knock-on effect on my application", because everything that the application needs is carried with it in that container.
So we talked about that consistency. If it's gone wrong in production, how do I know — how do I test that? Well, at least we know that container images are immutable. So when I'm testing in development, I've got exactly the same image — I can link to exactly the same image — that they're running in production, and so it should be a lot easier for me to reproduce the way that we run our applications in production.
The other thing — and Jeremy touched on security as well — is that your security teams may be interested in what you're actually running. Can you show that? Can we audit what's running in production and show that it's gone through all of these governance checks, that it's passed various tests and things like that? Well, what we can do with containers is sign them, so we know exactly what each one is.
We also have a unique ID for that container image, and if you have the process set up within your organization, you'll be able to reference that across all of the steps — all of the gateways it's gone through to get into production — and show that. But if you're just running a single container, well, that's a trivial example. In real-world examples you're not just running one thing, you're running a collection of applications that talk to each other — so how do we handle that?
Okay, so that's great. So how do we take all those existing applications that we've got today, that we've built using our traditional methods, and get them into containers? Well, first of all, how do we tell if we can actually run an application in a container? The rule of thumb is: if it runs on Linux, then it should be possible to put it in a container — well, kind of.
There are a couple of flags that you need to look for. You generally want something that's running in user space within a Linux system, so it's not tied to the kernel — it's not a kernel module, it's not got that sort of thing. A red flag might be that you've got some code with assembler in it.
What's better is applications that are using higher-level libraries, like glibc for example, because they'll do a lot of work to map you onto different kernel versions, and they'll stop you from falling foul of swapping between systems with different kernel versions. You may have specialized hardware, technical requirements or networking requirements, and not all of those are going to work within a containerized environment.
A token-ring network is an example there, or it may be a mainframe application that's difficult to move to an x86 architecture. Those sorts of things can be red flags as well. It's also important to remember where you're getting that software from. Are you getting it from a vendor, and does that vendor support it running in a container? There may be licensing contracts, support contracts or maintenance contracts which prohibit you from running that application in a container. Those red flags don't mean that it won't containerize, though, and often you just need to have a conversation with the development team. Recently we had a customer asking: well, this application is a RHEL 5 application, and we can't run it because we don't do a RHEL 5 base image. So — why is that?
Why do you think you can't containerize your application? Actually, it was that the ops team for that application didn't support anything other than RHEL 5 — they only supported a RHEL 5 operating system. But if you put things into a container, then we're really not worried about the operating system; we're worried about the userspace libraries and the application. So if we can take those out, and we have a platform for it, then we can do that work.
So if it's in a container, will it run in Kubernetes? Well, again, we can hit some other problems around that. If you look at systems like OpenShift, where we're promoting OpenShift as a common platform — multi-tenancy, with lots of teams using it rather than one cluster per development team — then you need to start adding additional capabilities and security policies to make that viable, particularly around security.
We need to make sure that we're mitigating any risks around vulnerabilities and things like this, and so different policies and ways of working can put a constraint on what you're able to do in a container. A very common one that you may have seen, if you've used OpenShift, is that we don't let you run as root — and not only do we not let you run as root, we also randomize the user ID that your process runs as.
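For an image to tolerate a randomized user ID, a common pattern is to make the paths it writes to owned and writable by group 0, since the arbitrary UID is assigned membership of the root group. A hypothetical sketch (base image and paths are illustrative):

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi-minimal

# Make the app directory group-0 owned and group-writable, so any
# arbitrary UID in group 0 (as OpenShift assigns) can use it
RUN mkdir -p /opt/app && \
    chgrp -R 0 /opt/app && \
    chmod -R g=u /opt/app

# Declare a non-root user; the actual UID at runtime may differ
USER 1001
```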
In particular, it gives you opportunities to drive towards things like microservice architectures. We use things like sidecars, which I'll explain in a minute, to take the logical design for your application and transpose it into a running operational environment. If you contrast that with a traditional approach for applications, you typically have a logical design for your application and then a separate physical realization of it.
Kubernetes actually gives you an opportunity to essentially implement your logical design within the platform. So where's OpenShift in all this? I'm sure someone's mentioned this already today: the upstream community, OKD, derives from Kubernetes and a whole bunch of other upstream projects, and creates the upstream OpenShift community. Feeding from that are the various versions of OpenShift that you can consume, with OpenShift Container Platform being the one that you build and manage yourself, on premise or in the cloud.
Dedicated, by contrast, is a managed service that we offer, and Online is an open managed service that we share with lots of different organizations. OpenShift Container Platform is certified Kubernetes, so you can use Kubernetes constructs and objects, and everything within it. OpenShift does add some additional extensions, but that's part of our work with the community.
So maybe you're already building containers — you don't need OpenShift to build your containers for you, right? You've already got your Dockerfiles and things like that. Well, that's actually great, because you can point OpenShift at your Dockerfiles and it then becomes basically a build farm for your containers. Not only will it build those containers, but it then has a registry where it will store them, nice and secure — so you've got your own registry.
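Pointing OpenShift at an existing Dockerfile in git can be sketched with the `oc` CLI (the repository URL and name here are placeholders):

```shell
# Create a build from a git repo that contains a Dockerfile;
# OpenShift detects the Dockerfile and uses the Docker build strategy
oc new-build https://github.com/example/myapp.git --name=myapp

# Follow the build; the resulting image is pushed to the
# cluster's internal registry as an image stream tag
oc logs -f bc/myapp
```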
You can manage the security and the access of those containers. We also wrap a whole bunch of metadata around that: if that Dockerfile is, say, in a git repo, you'll have the commit reference and things like that within the metadata, all bundled into that container and stored for you. So instead of having three or four servers sitting there whose job is to build containers, you can now just fire that into a Kubernetes cluster and it will build those containers for you.
It also gives you an opportunity to increase reuse with those Dockerfiles. The traditional approach with a Dockerfile is to say: from some OS base, like Alpine or something like that, add some application dependencies and everything that I need for my app, and off I go. Whereas we can now start looking at a build pipeline for your containers: going to vendors to get third-party containers, maybe adding your own specializations to them, and then again focusing on your app.
If you take an upstream Tomcat vendor image, we can basically simplify that into three steps: get the binary and drop it into the deployment directory for your Tomcat; maybe add some standard libraries and things like that if you need them; and maybe — say you wanted to connect to a Microsoft SQL Server database — pop in the driver, which is just one extra step. So again, we're focusing on the application using that approach.
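Those three steps might look like this in a Dockerfile (the base image tag, file names and paths are illustrative):

```dockerfile
# 1. Start from the vendor Tomcat image
FROM tomcat:9-jre8

# 2. Drop the application binary into Tomcat's deployment directory
COPY target/shop.war /usr/local/tomcat/webapps/

# 3. Extra step for this app: add the SQL Server JDBC driver
COPY lib/mssql-jdbc.jar /usr/local/tomcat/lib/
```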
Designing for Kubernetes means that we have some other choices around that. Instead of trying to treat your container as a mini VM, and trying to bundle in all of the things our application needs to be accessed — like you may have done traditionally on a VM — we can now start splitting things out and having a logical design, with rules of thumb like one process per container.
For example, you don't need a sysvinit-type process that launches your process within the container and makes sure it's always running, because Kubernetes is essentially going to be doing that job for you. And you can split things out into sidecars where you have tightly coupled things that need to be able to talk to each other locally. Maybe you're building from binaries: you've got a CI process, for example, building Java applications using Maven, and you have a deploy step at the end.
It takes the Maven artifacts and chucks them into, I don't know, a Nexus or Artifactory repository. That's great: we can basically do a binary build from that approach, so we can take the binaries and the libraries resulting from that existing CI and build new images for you. That's really easy as well.
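A binary build of that kind can be sketched as follows (the builder image stream, names and artifact path are placeholders):

```shell
# Create a binary build config against a builder image stream
oc new-build --binary --name=myapp --image-stream=java

# Feed it the artifact the existing Maven CI job already produced;
# OpenShift layers it into a new image, no source rebuild needed
oc start-build myapp --from-file=target/myapp.jar --follow
```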
Maybe you're starting a new project: you've written some source code and you just want to check that it's working nicely in a Kubernetes environment. We can do that as well. Source-to-image enables you to just point us at your source code; we will build your source code for you, then build the container, and then run your container. So it's all about accelerating and getting you onto the platform as quickly as possible.
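A source-to-image run can be sketched as below (the repository URL is a placeholder; OpenShift picks a matching builder image from the source it detects, or you can name one explicitly):

```shell
# Point OpenShift at a source repo; it builds the code, builds the
# image, and creates the deployment and service objects
oc new-app https://github.com/example/myapp.git

# Watch the source-to-image build run
oc logs -f bc/myapp
```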
So what do I do about all of these Kubernetes resources I've got that describe my application? Well, these are essentially just like the code that you've got for your application, so we can treat them as source code: use your source code repository to manage and maintain those Kubernetes resources as well. OpenShift does a load of work to help create those, so you don't have to type these YAML documents from scratch.
It will help you create them, but once you've created them it's very easy to export them and have them as resources that then go through that software development lifecycle as well. There are different examples of the types of resources you've got. There are things that describe how to deploy your application and how to transition from version A to version B — deployment information. We've got services, which are the stable endpoints: when you have your applications talking to each other, you typically talk through those services. How are we going to talk to these applications from the outside — how do we get ingress? And how do we provide specific runtime context? We talked about immutable images going through the software development lifecycle: how do I take that immutable image and make it look like a dev environment, a test environment, a production environment? We use things like config maps and secrets, which we can mount within the container to provide that context for us throughout the lifecycle.
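Supplying that per-environment context to the same immutable image might look like this (names, keys and values are illustrative):

```yaml
# A ConfigMap carrying settings that differ per environment
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  DB_HOST: db.test.example.com
---
# Fragment of the pod template: expose the ConfigMap as environment
# variables, so dev/test/production each mount their own context
# into the same image
spec:
  containers:
  - name: myapp
    image: myapp:1.0
    envFrom:
    - configMapRef:
        name: myapp-config
```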
Okay, so let's look at a particular example: can I containerize this web app? This was taken from an actual example with a customer. Well, it's a Tomcat app — and that was great, because we've got a Tomcat image, that's cool. They wanted it to talk to an external Microsoft SQL Server database — cool, absolutely fine. They were using the Microsoft JDBC driver — okay, so that makes it a little bit fiddly: we need to make sure that we build them a custom image with that driver built in, but that's actually super easy.
So the first thing I asked was: well, how do you do this today? And they said: what we do is install a Tomcat server, and then we run a cron job which does a kinit with a service account and just makes sure that the token is always up to date — I think their tokens expired after about an hour. So, okay — and you don't want to rewrite your code? No, they didn't want to rewrite the code. Right — so that's what they were doing today.
So we said: well, look at what you could do — and this is actually where they were heading — we create a custom container and we write essentially their cron job to sit inside that container. It's going to do the kinit for us, and we can provide the initial credentials through a secret.
So we get the keytab and do it that way. And I said: well, look, you could do it that way, but then that application has got that Kerberos stuff built into it. And they said: okay, well, maybe we could put that into another container which sits outside, and we somehow inject the token into it in some way. And I said: actually, I think the preferred option would be using the sidecar approach. So what you would do in this case is: you have your app.
Your app is expecting to have a Kerberos token that's valid, but we write another container whose job is simply to get that Kerberos token and authenticate. And they said: well, how do the two containers share that information? When you run sidecar containers, we're able to share different bits of capability within a Kubernetes pod, and one of them is shared memory: we can have a shared-memory space, a directory that's basically a temporary file system.
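A pod with a shared in-memory volume between an app container and a sidecar might be declared like this (an `emptyDir` with `medium: Memory` is backed by tmpfs; names, images and the mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-kinit-sidecar
spec:
  volumes:
  - name: krb5-cache
    emptyDir:
      medium: Memory   # tmpfs shared by both containers
  containers:
  - name: app
    image: example/app:latest
    volumeMounts:
    - name: krb5-cache
      mountPath: /tmp/krb5
  - name: kinit-sidecar
    image: example/kinit-sidecar:latest
    volumeMounts:
    - name: krb5-cache
      mountPath: /tmp/krb5
```

The sidecar writes the credential cache into the shared mount, and the app reads it from the same path.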
So, in order to test this out, we did an architectural spike. In an architectural spike we have one thing that we need to test, and here it's that we can authenticate — get a valid Kerberos token — using one container, and then show that we've got a valid token in the other container. And in order for us to do that, we needed some sort of tame Kerberos KDC that we could talk to.
We needed a test application, and we needed to build our sidecar application. So we actually created a test Kerberos server to help do this, and this again used the sidecar approach as well. For the Kerberos test server, we ran the KDC process and the kadmin process in separate containers, and they were using shared memory as well to talk to each other — a bit like how you would install them normally, because both of those run as services if you install them on a Linux server. So we ran them in two separate containers, and what we ended up with was this stack here: two pods. We've got the KDC pod, which is our test server, with a service for it; and then we had the application pod, which had our application in it — and in this case, because it was an architectural spike, our application just simply did a klist.
If you do a klist, it tells you whether you've got an authenticated token. And then we did the kinit sidecar, which did the initial initialization. It works like this — there's actually a test script: if you've got an OpenShift environment running and you run this test script, it will basically create this as an example. It provisions the KDC server, and it provisions the test app with the sidecar.
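The sidecar's renewal job can be sketched as a small shell loop (the principal, keytab path and cache location are hypothetical; the sleep interval assumes the roughly one-hour ticket lifetime mentioned earlier):

```shell
#!/bin/sh
# Renew the Kerberos ticket from a keytab (mounted from a secret)
# into the shared credential cache, then sleep until shortly before
# the ticket would expire.
export KRB5CCNAME=/tmp/krb5/krb5cc
while true; do
  kinit -kt /etc/krb5-keytab/app.keytab app-service@EXAMPLE.COM
  sleep 3000   # re-run comfortably inside the ~1 hour ticket lifetime
done
```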
So this was the example. We've got the KDC server running — it's just starting up — and then here's our test application. If you've not seen this in OpenShift: I clicked through into the pod, and if we look at the logs you'll see that we've got both containers running. When you look at the logs you get a drop-down, so you can look at the console from one container or the console from the other one.
In this one here, it's showing you the console from the kinit sidecar, so we're just waiting to get the credentials — we should get credentials in a minute... there we go. We've actually logged on and we've got a valid token. And then, if we swap to the example application, which was just basically looping and waiting until we got credentials, we can see the credentials coming through in that test application.
Okay, there we go. What I wanted to show here was really that Kubernetes changed the way that we solved this as a problem, and it's given us something with a very clear separation of concerns. We've got something that's reusable: if someone came along and said, right, I've got a Go application which needs to do the same thing, we can basically use the same technology — we don't need to rewrite it for that Go application — and it can have its own release cadence.
If you look at how this is being rolled out across things like Kubernetes, Istio is a good example of this. Istio provides you with a service mesh: you can control and manage your APIs within the platform. It's essentially using this same kind of technique — sidecars running against your applications — and it supports any type of technology in your applications. So where the old approach had all of that logic built into your application, you can now take that logic out of your application and have it defined by a common service and configuration, which gives you a very clear separation. Explain it like I'm five? Yeah — it does look like it's a lot more complicated: if you look at the system, we've got lots more moving parts. But if you look at your monoliths with all of these things in them — and if you had lots of monoliths all doing these things — you've got all of that duplicated.