Description
A large application landscape, handling 96,000 requests per minute, has been successfully migrated to the cloud. That migration was not only about the application: while we applied a lift-and-shift approach to the application itself, managing the target infrastructure became crucial.
We needed to make sure that a team of 40 people was able to reproduce environments consistently across many geographies. Introducing Infrastructure as Code was one of the best decisions we made.
This talk is about our journey from a client's data center to a fully customized cloud platform on Azure. You will see how we used Terraform and Azure DevOps to create a platform for a connected-vehicle backend.
Yes, okay, then: welcome to my talk about Infrastructure as Code NextGen, a story about how we built a custom cloud platform during a cloud migration with the help of Infrastructure as Code, and predominantly with Terraform.
First of all, I'd like to introduce myself. My name is Dawson and I'm working for Nova Tech as an IT architect, or cloud architect, or, well, you can call it whatever you want to. I'm happy to build flexible cloud solutions with my colleagues.
I'll skip this slide, because my colleague already introduced you to our company, so I can start right away with our migration journey. As I just said in the title, we were tasked with a cloud migration, and the application landscape we had to migrate was something we had been developing for, I think, five years.
Just imagine this one as the customer's data center, where all those applications are being hosted. Connected to this data center, you will find some plants or factories, some car dealers, your phone, your car itself, or whatever. So, for example, you can use your phone to tell your car that it should open the trunk, or ask the car how much fuel is left.
Starting with the old stack: the application was running on WebSphere Application Server in the customer's data center, or on our local operating systems for development purposes. What we wanted to see in the future, together with our customer, was the applications running as microservices on a managed cloud platform. Nobody in our team is an operations engineer, so we dreamed of having a managed cloud platform.
But this was not just snapping a finger. We needed to find a migration path, and one step was changing the runtime from WebSphere Application Server to IBM WebSphere Liberty, a server which is more designed for running inside a container. Once in a container, we could shift across platforms, finishing with running in a container on a managed cloud platform, and then migrate to microservices. So that was the high-level plan.
But just having a plan was not enough. We had a lot of questions and doubts, since we did not know exactly what to do. One of the questions was: what will it run on? We had decided to bring our application on WebSphere Liberty into a Docker container, but what will that run on? Will it be a VM on a cloud platform that is just running a Docker daemon? Will it be Kubernetes? Will it be Cloud Foundry? And who will bring it there?
The landscape also consists of one terabyte of data, including personally identifiable data, so it was security-relevant. And how much power will it need to run? For example, will one or two VMs be enough to handle all that traffic I just showed before? There were many questions, and the biggest question of all was how to maintain all those new technologies on any cloud provider.

At some point in time we didn't have much time left for theories and planning, so we decided: well, we need to start. One part was collecting initial requirements from the team, from the customer, from many people, and what we found out is: we need to run on Kubernetes and Docker, and we need to use PostgreSQL or MySQL, which means we need to migrate the database from DB2 to MySQL or PostgreSQL.
So what we decided was: we will build a custom cloud platform which fits exactly the applications we need. We will not design a cloud platform which is ready for every application.
So we started with an early draft. The idea was using three managed Kubernetes services on Azure, one in Europe, one in America, one in China, with a shared container registry, a shared key vault, and some SQL databases and load balancers in between. A very basic setup: the traffic comes through the internet, or through a VNet gateway, into the cluster. So that was the early draft.
And what we needed was something like a reproducible environment. We needed to find a way to instantiate our environment temporarily, for testing purposes, throw it away, change something, reproduce it. So that was something which was urgently needed. That was the point when we decided to introduce Infrastructure as Code. We said: well, we are sharing some infrastructure, like the container registry or some general secrets and certificates, but everything else is independently hosted in one environment.
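In Terraform terms, one way to get that reproducibility is to put the whole environment into a reusable module and instantiate it once per environment and region. This is a sketch of the idea, not necessarily our exact code; module path and variable names are placeholders:

```hcl
# Hypothetical module layout: everything one environment needs
# (AKS cluster, SQL database, networking, ...) lives in one module.
module "platform_westeurope_test" {
  source   = "./modules/platform"
  location = "westeurope"
  env_name = "test"
}

module "platform_westeurope_prod" {
  source   = "./modules/platform"
  location = "westeurope"
  env_name = "prod"
}
```

With such a layout, `terraform destroy` on a test instance throws the whole environment away, and `terraform apply` reproduces it.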
One part we had was bringing a taste of security into our platform. Luckily, the basic setup for Azure managed databases is really good. We have mandatory IP whitelisting, authentication and authorization, and encryption in transit and at rest, just out of the box. We also have something like Advanced Threat Protection, so there is an artificial intelligence which is inspecting the traffic to and from the database and looking for anomalies.
Deployment must be restrictable, auditable and approvable, so we were looking for ideas on how to implement that, and what we found out is that we can use Azure DevOps to create a pipeline, very, very straightforward, for that audit purpose. But what was just not okay is that internet traffic could be routed to the database itself.
A Kubernetes cluster consists of different resources, and the fact is that in Azure those resources are collected in a managed resource group for the Kubernetes cluster. What you see here, for example, is a virtual machine; this is a cluster with one node. There's a route table for the Kubernetes networking stuff, and as the cluster grows, there will be more resources. What we were looking for was access listing.
Besides all those getting-started and advanced Terraform sessions, you need to learn something like external data sources or null resources. So what is this snippet doing here?
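The exact code from the slide is not reproduced here, so this is a guessed reconstruction of what such a snippet can look like: an `external` data source shells out to the Azure CLI to discover the managed node resource group that AKS creates, something plain Terraform resources cannot tell you directly.

```hcl
# Assumed example: look up the AKS-managed resource group via the Azure CLI.
# Resource group and cluster names are placeholders.
data "external" "aks_node_rg" {
  program = [
    "az", "aks", "show",
    "--resource-group", "platform-rg",
    "--name", "platform-aks",
    "--query", "{nodeResourceGroup: nodeResourceGroup}",
    "--output", "json",
  ]
}

output "node_resource_group" {
  value = data.external.aks_node_rg.result.nodeResourceGroup
}
```

The `external` data source requires the program to print a flat JSON object of strings, which the `--query`/`--output json` combination above produces.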
So this is a way to technically restrict traffic to the database via Terraform, reproducible in each stage. Once this is done, you can create a virtual network rule. This is default Terraform usage: for the resource group, for the SQL server, for this subnet. So once you have the subnet ID, you can tell your SQL server: only accept traffic from this VNet.
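A minimal sketch of that pattern in HCL (resource names are invented, and on current azurerm provider versions the rule may be called `azurerm_mssql_virtual_network_rule` instead):

```hcl
# Subnet with a service endpoint for SQL, so the SQL server can identify it.
resource "azurerm_subnet" "aks" {
  name                 = "aks-subnet"
  resource_group_name  = azurerm_resource_group.main.name
  virtual_network_name = azurerm_virtual_network.main.name
  address_prefixes     = ["10.0.1.0/24"]
  service_endpoints    = ["Microsoft.Sql"]
}

# Tell the SQL server to accept traffic from this subnet only.
resource "azurerm_sql_virtual_network_rule" "allow_aks" {
  name                = "allow-aks-subnet"
  resource_group_name = azurerm_resource_group.main.name
  server_name         = azurerm_sql_server.main.name
  subnet_id           = azurerm_subnet.aks.id
}
```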
A
Right
so
this
was
challenge
one.
You
might
remember
that
we
have
a
application
with
a
lot
of
traffic
and
we
were
just
afraid
of
that.
The
question:
how
much
power
will
it
need
to
handle?
The
traffic
is
still
not
answered
yet
and
it
wasn't
answered.
A
We
could
scale
up
the
database
engine,
but
that
was
just
not
the
solution,
because
the
traffic
which
has
been
generated
was,
from
the
from
the
logical
perspective
perspective
just
too
much.
So
we
resulted
in
too
many
outages,
and
what
we
found
out
during
24
7
on
call
is
that
we
need
to
be
informed
if
there's
something
suspicious
going
on
and
we
utilized
infrastructure
as
code
and
azure
platform
in
order
to
get
notifications.
The Azure Terraform provider allows you to define and manage many resources. One of them is the Azure Monitor metric alert. The advantage we had is that we heavily used Azure Application Insights and Azure Monitor metrics, which are the managed services for logging and monitoring purposes, and since we did that, we could also define alerts.
A
This
one
is
notifying
our
team
if
the
storage
is
well
almost
full
and
if
so
the
team
will
decide
what
to
do.
We
could
also
set
up
that
the
search
will
be
increased,
but
it
is
probably
not
just
the
solution
to
increase
the
start.
Maybe
there
are
too
many
things
left
which
has
to
be
handled
manually.
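A hedged sketch of such an alert (the threshold, metric namespace and metric name are illustrative; check which metrics your database flavor actually exposes):

```hcl
# Who gets notified.
resource "azurerm_monitor_action_group" "team" {
  name                = "platform-team"
  resource_group_name = azurerm_resource_group.main.name
  short_name          = "platform"

  email_receiver {
    name          = "oncall"
    email_address = "oncall@example.com"
  }
}

# Fire when database storage usage crosses 90%.
resource "azurerm_monitor_metric_alert" "storage_almost_full" {
  name                = "db-storage-almost-full"
  resource_group_name = azurerm_resource_group.main.name
  scopes              = [azurerm_postgresql_server.main.id]

  criteria {
    metric_namespace = "Microsoft.DBforPostgreSQL/servers"
    metric_name      = "storage_percent"
    aggregation      = "Average"
    operator         = "GreaterThan"
    threshold        = 90
  }

  action {
    action_group_id = azurerm_monitor_action_group.team.id
  }
}
```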
A
Next
step
is
getting
high
available.
Well,
that
was
one
requirement
which
we
collected,
and
the
question
is:
if
our
kubernetes
cluster
will
or
our
deployment
will
freeze
or
stop
working,
what
will
happen
to
the
application
landscape?
A
Let's
see
how
we
did
that,
we
created
a
a
profile
which
is
checking
if
the
application
is
healthy
and
if
so,
the
traffic
will
be
routed
to
one
end
point
and
with
priority
one.
There
will
be
another
endpoint
for,
for
example,
in
north
europe
or
region,
two
where
the
traffic
will
be
rooted.
If
endpoint
1
is
not
available
anymore.
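This priority-based failover with a health probe matches Azure Traffic Manager, which the azurerm provider can manage; here is a sketch under that assumption (names, probe path and target resources are placeholders):

```hcl
resource "azurerm_traffic_manager_profile" "main" {
  name                   = "vehicle-backend"
  resource_group_name    = azurerm_resource_group.main.name
  traffic_routing_method = "Priority"

  dns_config {
    relative_name = "vehicle-backend"
    ttl           = 60
  }

  # Health probe: only healthy endpoints receive traffic.
  monitor_config {
    protocol = "HTTPS"
    port     = 443
    path     = "/health"
  }
}

# Primary endpoint in region one.
resource "azurerm_traffic_manager_endpoint" "primary" {
  name                = "west-europe"
  resource_group_name = azurerm_resource_group.main.name
  profile_name        = azurerm_traffic_manager_profile.main.name
  type                = "azureEndpoints"
  target_resource_id  = azurerm_public_ip.west_europe.id
  priority            = 1
}

# Failover endpoint: only used if the primary becomes unhealthy.
resource "azurerm_traffic_manager_endpoint" "secondary" {
  name                = "north-europe"
  resource_group_name = azurerm_resource_group.main.name
  profile_name        = azurerm_traffic_manager_profile.main.name
  type                = "azureEndpoints"
  target_resource_id  = azurerm_public_ip.north_europe.id
  priority            = 2
}
```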
A
Well,
next
step
was
being
live
in
europe
being
live
in
america
that
worked
besides
that
traffic
problem.
It
worked
very
good.
What
we
didn't
expect
is
that
going
live
in
china
was
that
hard?
I
want
to
share
some
facts
with
you
here
within
the
two
other
geographies.
We
just
need
to
change
variables
from
west
europe
to
u.s
east,
for
example,
but
there
were
many
more
things
to
do
for
china
and
we
knew
that
there
is
something
like
to
create
firewall,
but
we
didn't
know
how
this
will
affect
on
our
application.
That might not be a problem for users in China, but if you're experimenting with how to deploy your application and your infrastructure in China, you will face much more latency and much more time consumption while doing that, because you are not sitting in China. There is no Advanced Threat Protection for the databases or for the blob storage accounts, and there's no encryption of data at rest.
There are two ways to use Terraform with Azure. You can use the Azure Resource Manager and the Terraform plugin. This is like the default: you are defining a resource and apply it with Terraform. What Terraform will do is translate this language into Azure templates, which is the Azure default infrastructure deployment language. And then there is the other way to do it.
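The default way looks roughly like this: you declare resources in HCL, and `terraform apply` turns them into ARM deployment calls behind the scenes (the resource names here are placeholders):

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "main" {
  name     = "platform-rg"
  location = "westeurope"
}
```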
Right, so I need to hurry up a bit. I already mentioned Azure DevOps frequently. What we did with DevOps was creating a pipeline which automatically deploys infrastructure on any Azure instance, with auditable and approvable steps, which means we will execute a plan first, and if that's on prod, somebody can reject it. That somebody can be another team member, but not yourself.
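Such a pipeline can be sketched in Azure DevOps YAML, roughly like this (stage names and the `prod` environment are assumptions; in practice you also have to hand the saved plan file from one stage to the next, which is omitted here):

```yaml
stages:
  - stage: Plan
    jobs:
      - job: terraform_plan
        steps:
          - script: |
              terraform init
              terraform plan -out=tfplan
            displayName: Create the auditable plan

  - stage: Apply
    dependsOn: Plan
    jobs:
      # Deployment jobs target an "environment"; approvals (by another
      # team member, not the author) are configured on that environment.
      - deployment: terraform_apply
        environment: prod
        strategy:
          runOnce:
            deploy:
              steps:
                - script: terraform apply tfplan
                  displayName: Apply the approved plan
```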