From YouTube: OCG Berlin 2017 - OpenShift at Volvo
Description
From the March 28th, 2017 OpenShift Commons Gathering in Berlin @KubeCon: https://Commons.openshift.org/Gathering
Hi, my name is Robert Foster, and I work as an application server platform architect at Volvo Cars. Today I'm going to take you through how we transformed our old Java EE service to become new, fancy, and shiny with containers.
So: we deliver Java EE application servers. A couple of months ago we had 785 applications on 560 application servers, all running on 80 physical servers, so a fairly large environment.
Our environment has to be stable all of the time. Our manufacturing plants use applications on the platform, and maintenance all over the world is also using it to service cars. We've been delivering this service for roughly 15, 16 years, something like that. Everybody knew how to operate it, everybody knew how to use it; it was well known within the organization. But we started getting into problems.
We had built our whole infrastructure on physical servers, and all of a sudden there was this end-of-life thing: IBM would set a deadline for one of their earlier versions. For us, that meant getting new servers into place, maybe getting a guy out to drill a hole between the data centers to run a fiber-optic cable between them. It was a moment of panic: everybody has to change, the Java version has to go up to the next Java EE version.
It wasn't that good. We couldn't go to the cloud (we tried), and we had problems getting service windows for our platform as well. It had to be up and running so that we didn't run late on delivering cars to our customers; the plants had to be open. Our platform had to be up and running on our Sunday evening, which is Japan's Monday morning, when our Japanese customers were in their cars, going to the dealership to get their Volvo serviced. It's a huge issue. And our service was also built on physical servers.
That means that after about six to seven years, we couldn't guarantee that our test environment looked exactly like QA, or that QA looked exactly like our production environment. It was hard, almost impossible, to replicate production issues, because the servers would have had people logged into them doing changes, doing maintenance work, fixing stuff. It was a huge issue. We also had a problem with calendar time.
It took us a couple of weeks to get our environments into place. That, combined with us being a couple of Java EE application server versions behind, started becoming a big issue with our development organization, because they couldn't use the latest and fanciest Java frameworks.
So we started designing the new platform. Our first requirement: always offer the latest version of Java and Java EE. Our developers had to get the latest version; we didn't want to fall behind again.
We had to be able to go to multiple locations worldwide, being at every plant in the world and also in the cloud. Our platform had to be modern and fast, and it also had to be able to adapt to changing requirements. There are always going to be new services out there; there are always going to be new frameworks coming along that our development organization wants to use, and we, as a platform provider, need to provide that for them.
So when we started with the platform work, we wanted to isolate everything: a misbehaving application shouldn't be able to take down the whole server. Immutable: it's going to be the same image that you build in your development environment that goes to test, that goes to QA, that goes to production, and even to the cloud. It's going to be the same image; there can't be any difference between the different environments that you're running in. And idempotent: everywhere we can't use image immutability, such as with servers and so on, we use idempotency.
The only thing that matters is whether something is in the wrong state. With normal templating, we would have to go out to IKEA, buy a new chair, build it, and put it there. But if I use an idempotent language to describe that all I want is that chair standing on the floor, the script will actually do it for me.
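To make that concrete, here is a minimal sketch of the difference between the imperative and the idempotent approach, in shell and with an Ansible ad-hoc call; the paths and the "appservers" inventory group are hypothetical, not from the talk.

```bash
# Imperative: fails on the second run, because it assumes the
# directory does not exist yet.
mkdir /opt/app/config

# Idempotent: declares the desired end state ("this directory
# exists") and is safe to run any number of times.
mkdir -p /opt/app/config

# The same idea with Ansible's file module, which only acts when the
# current state differs from the declared one:
ansible appservers -m file -a "path=/opt/app/config state=directory mode=0755"
```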
So we started off by looking at how we work. Our platform is stable and our customers trust us and the platform; that's really important. If our customers, our internal customers, don't trust us, then we have a problem. We approach everything in a scientific way: we test things, we like to experiment, and if it works, then it works; if it fails, then we know it fails.
We are transparent about what we do, what we deliver, and how. Everything is code, even infrastructure, and that is a really important thing, because my end goal somewhere is to have a document describing the whole environment, with everything included as code, so that if there's a major incident I can bring up your environment right here and show it to you in code. And if I want to replicate your environment anywhere in the world, I just take the same bit of code and put it in a different place. That's really important.
We communicate through APIs, mainly RESTful ones. So this was how we started everything.
We started looking at virtual machines. Well, with virtual machines we can automate everything, we get isolated environments, and we can run different versions of Java at the same time. However, our 80 physical servers became 850 virtual servers, and so did the price tag, because we pay per operating system instance. And the configuration is only known right after provisioning; after your application has been running for a while, things might have changed inside your virtual machine. We can't have that.
So, looking ahead as well: we deliver environments for monolithic Java applications. They're huge, really huge; they're not really intended to run in a container. However, we're putting them into a container, because we want an environment where we can put our huge monolithic applications and then have a way of slowly breaking them up into microservices if that's needed. And I think that microservices are one of the cornerstones of DevOps as well.
So our second route was containers. With those we have the possibility to automate everything, and we get isolated environments (well, you guys know everything about this), we use less hardware, and the configuration is known at all times. So we settled on three products. The first one is OpenShift.
It provides the build, distribution, and runtime environment; we actually use it to distribute to the cloud as well. It's been designed with the developer in mind, and this is really important. It's got really nice APIs that we can use, because we want to create a self-service portal. Our platform doesn't only deliver an OpenShift environment to our end customers; it also includes things like load balancers and, maybe in the future, database information and so on. So we just wrote a bit of a wrapper around it to provide a GUI for our development and operations guys. The next product we chose was Ansible Tower. With that one we can automate everything; it's idempotent, and it also has nice RESTful APIs.
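As a rough sketch of what driving both products over REST can look like (hostnames, ports, and the job-template ID are hypothetical, and the OpenShift path is the 3.x-era API of that time):

```bash
# List projects through the OpenShift REST API, authenticating with
# the current CLI session's token.
TOKEN=$(oc whoami -t)
curl -sk -H "Authorization: Bearer $TOKEN" \
  https://openshift.example.com:8443/oapi/v1/projects

# Launch an Ansible Tower job template through Tower's REST API
# (job template 42 is a made-up ID).
curl -sk -u "admin:$TOWER_PASSWORD" -X POST \
  https://tower.example.com/api/v1/job_templates/42/launch/
```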
So this is what our new environment looks like. There's not a lot of difference for a developer. We changed the operating system version; the clustering capability has been taken over by Kubernetes and OpenShift; we run the virtualization on Docker; and we changed the application server from WebSphere Network Deployment to the Liberty profile. The Liberty profile was the first Java EE 7 certified enterprise application server. So we put that in there, and on top of that we put an EAR file. That is the difference for a Java EE shop.
For developers and operations personnel this wasn't, well, for operations it was a bit of a difference, but for the developers it wasn't that much. So we actually had to take a look at our application deployment process. It used to look like this: the developer checks a Hello World in Java into our Subversion or Git; it goes to Jenkins, gets built, and goes into Artifactory (Artifactory is a glorified FTP server), then back down to the developer.
The developer then takes the artifact and creates three different deploy packages. Three, and they're all different: they look at all the different configuration points (databases, maybe a properties file somewhere deep inside the application that has to be changed) and put the packages on three different file storages, mounted drives that is. For test and QA they're automatically deployed, but for production it has to go through ServiceNow, which is our incident and request handling system, to an ops guy, and you need to inform them four days ahead that we're going to do a production deploy. Then it goes to the actual environment.
A
This
isn't
really
continuous.
This
doesn't
really
support
continuous
deployment,
so
we
had
to
redo
it.
This
is
how
it
looks
now
everything
is
the
same.
Until
we
go
down
to
open
shift
in
open
shift,
we
build
an
image
put
into
the
test
environment.
We
don't
deploy
to
QA,
we
don't
deploy
to
production,
we
promote
it's
the
same
image
that
goes
from
test
to
production,
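A minimal sketch of what promotion can look like with the OpenShift CLI, assuming image streams named as below (the names are hypothetical; the talk doesn't show the exact commands):

```bash
# Build once into the :test tag, then promote by retagging; no rebuild
# happens between stages, so the bits never change.
oc tag myapp:test myapp:qa     # QA now runs the image that passed test
oc tag myapp:qa   myapp:prod   # production runs that exact same image
```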
But what about resources? In the test environment I want to go to my test databases, test queue managers, and so on; in production I want to go to my production queue managers and production databases. So we actually introduced a templating language based on mustache syntax that our development teams use when they're configuring their files. Anything that's environment-specific, they mark up in the configuration files, and before we start the applications we scan for those files and automatically change them. So we've got the same image throughout the whole process; it's only the environment configuration that changes.
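A small sketch of what such mustache-style placeholders and the startup substitution could look like; the file name, keys, and values are hypothetical, and sed merely stands in here for the platform's own scan-and-replace step:

```bash
# Environment-specific values are left as placeholders in the image:
cat > app.properties.tpl <<'EOF'
db.url=jdbc:db2://{{DB_HOST}}:{{DB_PORT}}/APPDB
mq.queueManager={{MQ_QMGR}}
EOF

# At container startup the placeholders are filled in per environment;
# production values are substituted here as an illustration.
sed -e 's/{{DB_HOST}}/proddb.example.com/' \
    -e 's/{{DB_PORT}}/50000/' \
    -e 's/{{MQ_QMGR}}/PRODQM01/' \
    app.properties.tpl > app.properties
```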
So our build process, this is basically it. We take the latest version of the Liberty profile and put it into a Docker image. Our development teams take an EAR file and a configuration file (the configuration file basically tells us where the databases and those kinds of things are, with mustache syntax) and put them into Artifactory. Then we just combine the two and put the result into the Docker registry.
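A minimal sketch of that combine step, assuming the layout of the public websphere-liberty base image; the application name, registry, and tag are hypothetical:

```bash
# Combine the Liberty base image with the team's EAR and templated
# configuration, then push the result to the registry.
cat > Dockerfile <<'EOF'
FROM websphere-liberty:latest
COPY server.xml /config/
COPY myapp.ear /config/dropins/
EOF

docker build -t registry.example.com/apps/myapp:1.0 .
docker push registry.example.com/apps/myapp:1.0
```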
What we did was really easy. We basically took the same automation scripts, rewrote them a bit to have them provision Azure infrastructure, and then ran the scripts there. So all of a sudden we had the same environment, the same OpenShift installation, up in the cloud as we had on-premise. And the Docker registry was just connected. Well, we didn't connect it directly.
We use an Ansible playbook to push out the information, to push out the images, to the two different cloud locations.
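In effect, the playbook wraps something like the following for each cloud location (the registry names are hypothetical):

```bash
# Retag the on-premise image for each cloud registry and push it,
# so every location serves the identical image.
for REGISTRY in registry.eastcloud.example.com registry.westcloud.example.com; do
  docker tag registry.example.com/apps/myapp:1.0 "$REGISTRY/apps/myapp:1.0"
  docker push "$REGISTRY/apps/myapp:1.0"
done
```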
When we started this, Red Hat wasn't supported on the Microsoft cloud, but not long before we actually went into production they started supporting it, which was really good.
But this is basically how it worked. It was really easy, because we virtualize on the OpenShift layer, so we define our infrastructure inside OpenShift. That way we can replicate the environment anywhere in the world, at any cloud provider. We can be on-premise in any plant; we can be behind the parking lot at the receiving-goods port in Shanghai if we want to. We can deploy our solution pretty much anywhere.
So, looking ahead: we've automated everything in our platform; everything is there. But we have a problem, because the rest of our organization hasn't been automated, so we're pushing a hard agenda with our other people. For instance, if the business case is there, you can do the calculations.
We're moving 80 application portfolios into our solution before Q1 next year, and I did a small calculation on that: if we were to use the normal processes, just to get everybody's load balancers configured the way we do today, it would take us roughly 30 years (hundreds of applications times weeks of lead time per request, handled one at a time, adds up to decades). So that is a good business case.
Then we just have to tell the right people, and we'll also help them automate their services. The same with authentication. So we started off with the application platform, and now we're starting to expand, to automate everything, so that I can get to the very end result, which is a document describing your whole application environment. I think that's really, really important: when you've done that, when you have automated everything, you have moved into a position where your ops guys don't have to sit and press next, next, next, finish. Now I'm done.
Q: Excellent presentation, and it's always interesting when people talk about something that is in production. You mentioned, I think, 785 applications, 560 servers, 80 hosts, and I know that we discussed lots of interesting things. Are you saying that all these 785 applications are actually running in production on OpenShift now?

A: No.
However, we've been running in production since August last year with a global application on it, so we know it works. I think we're running six applications in production at this time, plus a number of other ones in test and QA that haven't made it up to the production level yet.
Q: [inaudible]

A: For our first iteration it was a set of scripts running inside Tower, so we had a basic UI that was doing it, but now we've actually developed a portal. Our platform is called "Mass" (modernized application server platform), and for that we obviously have an operations portal called "Mocks".
Q: [inaudible]

A: We have actually started doing release-based sides, so we've got an A side and a B side for our next release. This is the first release we're doing this with, because we've got so many customers and so much volume that one mistake in an upgrade script for the environment would have disastrous consequences, so we're not risking that. What we do is have an A side and a B side, and an application can be on both sides at the same time, in different environments.
Q: [inaudible]

A: We still have almost weekly meetings with our operations team, just to get them to start the transition. But it's kind of our problem as well, because we haven't had the volumes until now; it's just been something that they've barely touched. But now they're getting more and more involved with it. Thank you.