From YouTube: OpenShift at Suomen Asiakastieto - Eero Arvonen, Application Architect OCG 2018 Helsinki
Description
OpenShift Case Study: OpenShift at Suomen Asiakastieto
Presenter: Eero Arvonen, Application Architect https://www.asiakastieto.fi/web/fi/
from OpenShift Commons Gathering Sept 10 2018 Helsinki
https://commons.openshift.org/gatherings/Helsinki_2018.html
So, like I said, my name is Eero and I've been doing Java development since about 2011, so about seven years. I've been with Suomen Asiakastieto for about two and a half years, and I got the title of application architect about a year ago. Since I'm here representing our company as well, I'll briefly introduce us too: the company name is Suomen Asiakastieto. There have been a bunch of mergers along the way, but our oldest branch was founded in 1905.
So yeah, talk about legacy, I guess. We went from about 150 to 500 employees; that's because there was this little merger a couple of months ago, when we bought a Swedish company called UC and tripled our headcount. Most of the stuff today will be about the old Asiakastieto, so I'll only briefly touch on the Swedish part. Our IT is about 30 people, split 50/50 between dev and ops, and we've been publicly traded for about three and a half years now; we listed on the Helsinki Stock Exchange in 2015.
Even well before OpenShift, I think our IT was pretty efficient. We mostly do development in-house, using some consultants as well, but mostly in-house, and on average we deploy a new service or a new version almost every day. Our software has a pretty uniform architecture, especially on the back end: if you've worked with one back-end service, you can basically work with them all, because they're quite similar. And that matters, considering that virtually every part of our business has to do with IT.
But I don't think that's enough; we need to stay ahead as well, and here's a couple of things we have trouble with. First of all, scaling is painful. We have a set number of front-end and back-end servers, and setting up new ones is pretty time-consuming and expensive. And because scaling is so painful, our current application servers are pretty crammed with applications, and if one of them goes haywire it can affect a whole bunch of other services.
A
Well,
and
don't
let
my
boss
know
I,
said
this,
but
I
think
this
uniform
architecture
is
sometimes
holding
us
back
as
well,
because
every
new
service
has
to
conform
to
a
specific
standard.
So
it's
pretty
hard
to
test
new
stuff,
sometimes
so
red
had
offered
to
do
an
open,
shipped
proof-of-concept
project
with
us.
This
is
I,
think
April
2017.
So we basically started from scratch and installed OpenShift on premise. We ported an application called Oma Asiakastieto onto it and demonstrated high availability: we could do rolling updates of the stuff it was running, and the user's HTTP session was persisted, so there was no downtime. It worked out pretty great, and all this in a week.
Now fast-forward to late autumn 2017, so about a year ago. We decided to build a service that runs on OpenShift from the get-go, container-native if you will. So what happened was: we got some cloud OpenShift from Ambientia, we hired a couple of consultants from the same company, we picked a technology we'd never worked with before, and we had a hard deadline, because this was a GDPR compliance service. GDPR came into effect in May, so that was our hard deadline.
Well, as it turns out, OpenShift actually made it easier to get everything together. The reasons being: first of all, we got the CI/CD pipeline out of the box. Like I mentioned, we went with Node.js, so there was a ready source-to-image tool for that, and so on and so forth. We had no issues getting it running in a test environment, and setting up multiple environments was pretty easy as well.
Once you get one environment running, you can just replicate it for specific testing needs. And it was also inherently portable, so we could move it anywhere pretty easily if we wanted to; there was no risk of getting stuck in a single environment either. So what happened? Well, here it is, since it's in production; there's a screenshot of it. On time, on budget, customers happy, and so on and so forth.
We would have done a couple more deployments, but yeah, holidays. Rolling deployments, like I mentioned: no downtime needed, we just had to make sure the database changes were backwards compatible. And we're actually going to scale this to Sweden later this year, probably; we're going to make a parallel one, and it's going to be pretty easy because we have the templates ready and so on. By this time we had decided to get our own OpenShift cluster as well, so at the end of this spring we rolled it out in-house.
But at this point we only had the one service actually published and running in production on OpenShift. So the question is: what are we going to do with the rest of the 200 or so services we have? How do you eat an elephant, right? Well, we're going to do it one container at a time. Most of our stuff runs on EAP, the Enterprise Application Platform, and that's something we get out of the box with OpenShift, so that's good.
Porting Oma Asiakastieto was the first time we ported anything, so there were all these little things we had to take into account, because it wasn't really designed to run in containers. This is going to be slightly technical, I hope you don't mind. First of all, Oma Asiakastieto generates some Java classes out of Web Service Description Language files, and those classes end up in HTTP sessions that are stored on the server side.
So, in order to persist the session during a rolling update, those classes need to be serializable. So basically, we had to write some binding files and make all the necessary classes serializable.
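The binding files in question were presumably JAXB-style customizations for the WSDL-generated classes (my assumption; the talk doesn't name the tool). A standard way to make every generated class implement java.io.Serializable is the xjc:serializable extension in a global bindings file, roughly like this:

```xml
<!-- Hypothetical bindings.xjb: makes all generated classes Serializable -->
<jaxb:bindings xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
               xmlns:xjc="http://java.sun.com/xml/ns/jaxb/xjc"
               jaxb:extensionBindingPrefixes="xjc"
               jaxb:version="2.1">
  <jaxb:globalBindings>
    <!-- Adds "implements Serializable" plus a serialVersionUID to every generated class -->
    <xjc:serializable uid="1"/>
  </jaxb:globalBindings>
</jaxb:bindings>
```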
And Oma Asiakastieto had this little assumption that in some environments there's going to be a service that generates PDF reports, running on localhost. Well, that's not the case in OpenShift at all.
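A common fix for that kind of localhost assumption (a sketch with hypothetical names, not our actual code) is to read the report service's address from environment variables, which OpenShift can inject through the deployment config, and fall back to localhost outside the cluster:

```java
// Sketch: resolve the PDF report service address from the environment
// instead of hard-coding localhost. PDF_SERVICE_HOST / PDF_SERVICE_PORT
// are made-up variable names for illustration.
public class PdfServiceConfig {

    static String pdfServiceUrl(java.util.Map<String, String> env) {
        // In OpenShift these would be set in the deployment config;
        // on a developer machine they are absent and we fall back to localhost.
        String host = env.getOrDefault("PDF_SERVICE_HOST", "localhost");
        String port = env.getOrDefault("PDF_SERVICE_PORT", "8080");
        return "http://" + host + ":" + port + "/reports";
    }

    public static void main(String[] args) {
        System.out.println(pdfServiceUrl(System.getenv()));
    }
}
```

Passing the environment map in as a parameter keeps the lookup testable without touching real environment variables.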
So that's something we had to take into account. And one of the big things, I think, is the side-by-side compatibility thing, because you're not going to want to have separate code bases for the old environment and the OpenShift environment. You're going to have to have your code in one place. So that's something we had to take into account, and actually that's why we have to see the bigger picture here.
We can't reinvent the wheel, because there's stuff that's been working for years and years, and if we want to actually roll this out at a large scale, there are some things we're going to have to take into account. So, first of all, I think we have to integrate into the existing CI/CD, not to mess everything up. Our old Jenkins is full of stuff, and we can't just throw that away; we'd never recover from that.
This is the old way we were building and deploying stuff. Starting from the top left here: the developer commits some code to GitLab. This triggers a build in Jenkins. After that, Jenkins deploys the artifact into a testing environment and also copies the EAR or WAR file, or whatever, onto an FTP server. Afterwards, if we want to put something into production, we fill out some forms.
A
Also,
if
you
want
to
roll
back
from
a
production,
it's
a
bit
of
a
pain
because
we're
gonna
have
to
copy
the
old
artifact,
the
yeah,
ear
file
or
war
file
or
whatever,
and
we're
gonna
have
a
special
name
for
it.
So
we
know
when
it's
been
in
production
and
so
on
and
so
forth,
it's
pretty
pretty
complicated.
So
how
does
open
ship
affect
things?
Well,
the
Box
on
the
top
is
the
old
way
of
doing
things
and
we
didn't
really
change
anything
didn't
have
to
because
my
philosophy
was
were
just
gonna.
A
Add
a
couple
of
things.
So in the old jobs we only add two things: we publish the artifact, the EAR file, into Nexus, and we trigger a build in our new Jenkins. The new Jenkins has the source-to-image tool installed, and it's going to build a Docker image that it'll push into the Docker registry of our OpenShift, and then it's going to deploy the relevant deployment config.
There are also separate Jenkins jobs for releasing whatever is in the testing environment to pre-prod, and releasing whatever is in pre-prod to production, whenever we want to roll. And we're also tagging the image streams with, say, 1.0, so we always have a full backlog of everything that's ever been in production, and we can roll back to whichever version we want.
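The promotion and tagging scheme described above can be sketched with plain oc commands run against the cluster (hypothetical image stream names; in reality the Jenkins jobs do this):

```shell
# Promote whatever is in test to pre-prod, then to production,
# by re-pointing image stream tags.
oc tag myapp:test myapp:preprod      # release test -> pre-prod
oc tag myapp:preprod myapp:prod      # release pre-prod -> production
oc tag myapp:prod myapp:1.0          # keep a permanent record of the release

# Rolling back is then just re-pointing the prod tag at an old version:
oc tag myapp:0.9 myapp:prod
```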
This is how our back end basically looks: we have a set number of EAPs with a bunch of deployed EARs inside them, doing EJB calls within the very same application server. And the question is: how are we going to migrate this to OpenShift? I guess one way would be to take all the separate services and make them run in a single EAP each; I guess this would conform pretty well to the whole microservice architecture thing.
We could also scale all of these applications individually and not have to replicate the whole server with everything inside. Although, in our experience, running one deployment per EAP takes a bunch of resources; but I guess that might just be a case of not configuring them to contain only the features we need, so it might very well be worth doing. But the problem is, it's still going to be something of a big bang, since we'd be migrating a bunch of services at once.
So this is option 2: we could just migrate one application and have it connect to the old back end as is. There's a couple of question marks with this as well, though. There's probably a firewall between the left and the right side of the picture, so it would require some configuration. Also, the real question, I guess, is whether we want to go into the old environment and reconfigure all the EAPs manually to allow for this kind of connection; I'm not so sure if we want to do that or not.
Yeah, so we're going to pick the low-hanging fruit of the front end first and decide on the back end later. And that's kind of my point here: you need to identify the opportunities with the biggest impact. We're running a business, after all, so we always have to mind return on investment, and it might be that some services will actually never be migrated to OpenShift. They might live out their lives right where they are; they might just gracefully ride off into the sunset.
You know, like this. Yeah. In all seriousness, though, there's plenty of things to consider with the whole migration into containers, and by this point I can definitely hear the business people asking: where's the beef? Why should we do this, and is it worth it? Well, surprise: I think it's definitely worth it to us, otherwise I wouldn't be here talking about it. The reasons it's worth it for us: we're moving towards 24/7 services, and, like I said, we can approach it incrementally by migrating specific application chains.
Well, this kind of helps with that. For example, deploying the Node.js application, the GDPR compliance service I mentioned earlier: I don't think we could ever have done it, because it would have been like a special snowflake in our current environment, and it would have been really difficult to get a production environment for it. What's more: real-time horizontal scaling.
There's more. Remember the UC merger I mentioned before? There's this thing called synergies that was promised to our shareholders. It turns out UC has plenty of stuff running on mainframe, and getting rid of that expensive stuff is coming eventually. And it just happens we have this thing called OpenShift now, so that might get fun.
Also, we've got some big Nordic projects now that we've tripled in size, so there's been a lot of talk, even on the Swedish side, about OpenShift, and we're going to start some big stuff this fall or this winter. And that's all, folks; this is where we are with everything. If you guys want to get in touch, there's this QR code that will take you to my LinkedIn page. Any questions?
[Audience question, inaudible]
I mean, it's been pretty easy to sell it to the tech guys, because they know what it's about. But the business guys: we had the product owner of the GDPR compliance service, and others have been asking now how it went, and I don't know if this is an expression in English or not, but he said it was like the toilet of a train, meaning it just works. So that's why everyone in-house is pretty excited about migrating.