Description
Imagine a leading financial organization, serving millions of customers, calls you to solve multiple challenges: addressing aging infrastructure and software, providing new capabilities to meet regulations, and taking advantage of the cloud service model. How do you proceed to re-architect their systems with Kubernetes for containerization, Kafka for event streaming, REST APIs, Java Spring Boot, MySQL and Oracle databases, and more? Join us to find out.
Presenters:
Chris Hollies, CTO, Oracle Practice @Capgemini
Akshai Parthasarathy, Principal Director, Cloud Native and DevOps @Oracle Cloud
B: We would like to welcome our presenters today, Chris and Akshai: Chris Hollies, CTO of the Oracle Practice at Capgemini, and Akshai Parthasarathy, Principal Director, Cloud Native and DevOps at Oracle Cloud. A few housekeeping items before we get started. During the webinar you are not able to talk audibly; there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we'll get to as many of them as we can. This is an official webinar of the CNCF, and as such it is subject to the CNCF code of conduct, so please do not add anything to the chat or Q&A that would violate it. Basically, please be respectful of all of your fellow participants and presenters.
C: Thank you. Chris, could we go back to the first slide? Thanks. Hi, everyone. Imagine you are an organization that's serving millions of customers. You have customers relying on you day in and day out, with hundreds of transactions every hour, every minute, in the credit card and personal finance industry. And imagine also that now you have regulations to deal with. You have to make the information about your customers' loan activity available to a government register. You have to make your APIs available per new regulations as well. And then, in addition to all of that, you also have a monolithic application that, unfortunately, is a single point of failure.

C: So this was the kind of problem that Chris and his team at Capgemini faced when they were dealing with a client, a financial services client, and this presentation is about how Oracle Cloud technologies and Capgemini's expertise came together. Next slide, please. So, a little bit about the speakers. Chris, could you tell us a little bit more about yourself?
D: Those that are not working in Oracle Cloud are generally working in another cloud, so we have projects running across Azure and AWS as well; the vast majority of what we do is in the cloud now. I've done tech for about 20 years: I've been an Oracle DBA, an E-Business Suite DBA and a middleware administrator, and for the last few years I've been a cloud architect.
C: Okay, so the agenda for today is quite straightforward. I will talk a little bit about the customer and the challenges they faced, and then Chris is really going to take a deeper dive into the solution that he and his team at Capgemini implemented on Oracle Cloud. Then we'll get into the results, and we'll open it up for Q&A after that.
C: Okay, so a little bit about the customer. As I mentioned before, the customer is in Scandinavia, and they're serving millions of end users with their financial products.

C: They were facing new regulations from the government, and one of these regulations dictated that they should make information about their end users, the consumers taking out loans on their credit cards and other personal finance loans, available to a national registry. The reason the government asked for this information is that they want any individual's credit history to be available, so it was a very important function that the customer needed to provide for the government.
C: The second thing was that they wanted to set up an open banking strategy. The reason, again, was a European regulation called PSD2, which dictated that they had to expose APIs to third parties, which can then offer other financial products, provided the consumer is open to receiving information about those products and to sharing their information.
C: So the problem at its core was that there was a lot of detail about the enterprise architecture, but nobody at the client really understood it that well. There was a deviation between what was conceptualized and what was actually implemented, which made it really hard to come up with a sound strategy and an implementation that was going to work.
C: In addition to that, no one had a really good understanding of the IT landscape, the new capabilities that were available, and how best those capabilities could be put into use as you try to meet these regulations and modernize your infrastructure and applications. There was also the issue of a single point of failure in the middleware application, at this scale.
D: Thanks, Akshai. Shall we? Yeah. This was a solution that we really had a lot of fun with, to be honest. We built this out in Oracle Cloud, as you'll see as we go through the deck, and what that meant was that we were developing on a cloud platform that was evolving, where the capabilities were being continually improved and where we had a lot of things to learn, not just in terms of the technology but in terms of actually delivering a real project.
D: On Oracle's cloud platform we learned a lot around the delivery model and ways of working, as well as the technology itself. On the screen you'll see some of the key non-functional requirements the customer had. Akshai talked about the fact that they had this monolithic application at the centre of their architecture.
D: It had become a single point of failure, and a degree of technical debt had started to accrue in that solution, as is very often the case. We've worked with this customer for about seven years: we've taken them on a Salesforce journey, where they took some software as a service, so they had some exposure to cloud, but they hadn't used any infrastructure or platform services.
D: They were still consuming services provisioned in traditional data centre solutions, and they had a degree of technical debt that was starting to really impact their time to market. In the consumer financial space, as with any other consumer-facing sector, the drive for digitization, the need to shorten feedback loops, to bring new products to market more quickly and to connect with customers more quickly, is critical.
D: They did not want to tie themselves into a monolithic architecture again, so they wanted to adopt an event-driven architecture with an API-first strategy, and they set a target response time for API calls to the services they were deploying, the PSD2 open banking services and the regulatory services: a target of 500 milliseconds for clients to get responses to API calls.
D: We also had a relatively complex integration required with a product called Curity, a third-party IdAM server, which has some very specific out-of-the-box capabilities that are peculiar to banking across Scandinavia and the Nordics: it integrates out of the box with things like Norwegian BankID, Danish and other Scandinavian banking IDs, and federated identity providers as well. It was already selected by the customer.
D: It was already in use, and obviously, top to bottom, this is a banking solution, so security was really key. The way that we approached it was to use something that we'd done successfully in other engagements: at a very large furniture retailer, also Scandinavian, we'd used a particular architectural approach.
D: We'd used the same approach at a large retailer in the UK, Co-op Retail. For those of you that don't know it, Co-op Retail has a store in every single postcode in the UK, so knocking on for 4,000 stores. The approach that we used is centred on what we call the Agile Innovation Platform.
D: This is built on OMESA, which some of you may be aware of. OMESA grew out of a collaboration of architects from across the industry, some of them specialists in web applications, some in integration, across various different sectors, but this collaboration generated a set of basic architectural principles and integration patterns that boiled the architectural paradigm down into four layers.
D: So OMESA proposes four building blocks: experience, API, service implementation and persistence, the four horizontal layers that you'll see on the screen. Through our experience in some of the projects that I just mentioned, we built a lot of collateral, a lot of accelerators, and we'd selected some tooling that we'd found worked well, which allowed us to bring our Agile Innovation Platform to life, built around this architectural approach.
D: We had a set of blueprints, a set of patterns, a set of accelerators. One of the principles that OMESA embeds is that development isn't done in isolation; it's rarely done on greenfield sites, and it needs to integrate with on-premise, so it has a set of standard patterns for integration. And wherever possible we try to use open source products: if there's a proprietary product in the solution, it needs to justify its place; it needs to earn its place in the solution.
D: So the first thing we did was dive a little bit deeper into the particular implementation, the particular capabilities, that the customer needed. In order to deliver an event-driven architecture we needed an event hub, and we selected Apache Kafka. In the end we didn't go for full-blown Confluent Kafka; we went for Apache Kafka, to deliver on those NFRs of keeping things open source and keeping it portable: potentially the customer may want to port this solution from one cloud to another in the future. And then there's the left-hand side of the picture.
D: We can't just bring all these things out of the box, because the customer has an existing landscape. For example, they had Splunk for monitoring, so we had to integrate into that. But what we've done over the years in the Oracle practice in the UK at Capgemini is build up what we call our technical design authority.
D: It's worth taking a step back at this point and thinking: this is a customer that didn't have any infrastructure or platform services in the cloud, although they did have some software as a service; all of their core business was still running on premise, and there were particular advocates for particular clouds and particular technologies within the customer. It kind of begs the question: why would we do this in Oracle Cloud? Many people will associate Oracle with lock-in, with proprietary technology, and potentially with expensive technology as well.
D: There are some facts that we had to talk to the customer about and get them to understand. Oracle gets open source; Oracle understands the value of open source. If you think Oracle means proprietary in 2020, then think again, because it no longer does. Oracle cloud products generally enable you to develop rapidly in the cloud.
D: Oracle is a member of many open source groups and initiatives, they have lots of marketplace offerings built on open source as well, and they continue to build out and deliver services on open source, such as the Oracle Streaming Service, which is built around Kafka, and the MySQL cloud service, which has now become available in OCI.
D: Before I dive into the solution, it's worth just looking again at some of the key items in the business case. They wanted to improve response times for end clients tenfold; that was the target. The game here was to deliver the immediate API requirements but provide a platform into which they could decompose their existing monolith into microservices, with rapid response times. They did not want vendor lock-in, it had to be highly available, etc., etc.
D: So what did the solution end up looking like? From top to bottom, I'm hoping you can still pretty much see the OMESA layers in this solution. What you've got is a mixture of infrastructure and platform services deployed in Oracle Cloud; you can still see those OMESA layers there, and you can see the security and monitoring capabilities at the side. At the bottom of the diagram you've got the core banking databases that the customer was already operating on premise, and they already had some Oracle GoldenGate for data replication as well.
D: Everything apart from that was new and was part of the solution, part of the platform that we built. From the bottom upwards, we've got OCI compute at the heart of it, with MySQL Community Edition used in a source-replica deployment for resilience, on top of an Apache Kafka broker cluster. We've got data replication into that Kafka deployment using Oracle's Data Integration Platform Cloud running on an OCI compute node, so just running on a VM in the cloud, at the heart of the solution.
D: Above that, for microservice persistence and service implementation, we've got containers running in Oracle's Kubernetes Engine, a bog-standard, vanilla implementation of Kubernetes. You don't actually pay anything for that service in Oracle Cloud; you just pay for the VMs on which the service runs. On top of that is Oracle's API Platform Cloud Service, so the API management aspect of that service, with its API and dev portal management console, and then the gateways themselves at the top, behind an OCI Load Balancer service, behind the OCI Web Application Firewall.
D: So there's a modern Layer 7 WAF at the top. Out to the side, top left, we've got Oracle Identity Cloud Service; that's a standard OAuth2 token server, but it's integrated with Curity because of the out-of-the-box capabilities that Curity had. In the top right-hand corner we've got the API portal that we had to spin up very rapidly: in order to meet regulatory requirements we had to provide an API developer portal for PSD2, and then meet a second deadline.
D: On the right-hand side we've got Management Cloud, which does integrate out to Splunk. But in between Management Cloud and the container engine we've got something which is really exciting, which we were able to do in Oracle's Generation 2 cloud but hadn't been able to really achieve in the Generation 1 platform that we'd used, for example, in Co-op Retail: a really integrated CI/CD stack, again using open source technology. So there's a dedicated Kubernetes cluster, using Terraform plugins, using Jenkins in a master-worker configuration, with Ansible and Dredd built into it as well, to provision not just applications in a DevOps manner, with application DevOps pipelines, but to have proper infrastructure DevOps pipelines.
D: That's a really new capability in Oracle Cloud, and still something that is relatively new in a lot of projects.
D: It's worth deep-diving into a couple of particular sections of the solution. API requests flow into the top of the solution, and it's complicated. On the team working for me I had three consultants, and it's worth pointing out that none of the three had worked on Oracle Cloud before, none of them had worked with API Platform Cloud before, and none of them had worked with Kafka or Kubernetes before. However, between them they had something like 70 years of experience in true, detailed infrastructure and enterprise infrastructure design. These were guys who understand middleware, who understand databases, who understand networking; they enjoy tech, the challenge of design and the challenge of problem solving, and between them these three guys built an incredible solution. I had one person who was focused on the API deployment, and on the Kubernetes deployment as well, and he was able to design this authentication process.
D: So an API request flows from the external consumer through a Load Balancer service configured as round robin, into NGINX reverse proxies, authenticating out to Curity if required; once authenticated, it can access the API gateways in the private subnet at the bottom of that diagram, to then actually invoke services running in the Kubernetes cluster. It's a complicated, multi-layered authentication process, with bank-grade security and with mutual TLS authentication as well: an enterprise banking-grade solution. One of the things we did do here was re-engineer it at one point, when regional subnets became available, as you'll see in that diagram.
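The round-robin distribution Chris describes at the front of that flow behaves like this minimal sketch; the proxy endpoint names are made up for illustration and are not from the webinar:

```python
from itertools import cycle

# Hypothetical NGINX reverse-proxy endpoints; in the real deployment an
# OCI Load Balancer sat in front of them and rotated requests across them.
PROXIES = ["nginx-ad1:443", "nginx-ad2:443", "nginx-ad3:443"]

def round_robin(backends):
    """Yield backends in strict rotation, as a round-robin load balancer would."""
    return cycle(backends)

rr = round_robin(PROXIES)
first_six = [next(rr) for _ in range(6)]
# Each backend is picked exactly once per rotation, in order.
```

With three proxies, six consecutive requests visit each backend twice, which is what keeps the load even across the availability domains.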
D: We've got subnets spanning the three availability domains in Oracle Cloud. That wasn't available when we first solutioned this: we actually had multiple AD-localized subnets, which we then had to route between, so we had multiple routing paths to start with. We were able to re-engineer it to use regional subnets when that capability became available partway through the project, and the reason we could do that so quickly was that CI/CD pipeline for infrastructure.
D: We were able to make all the changes as code, test them, approve them and push them into the deployment.
D: All of it goes through that infrastructure pipeline, with the governance and the speed that that brings to a solution like this. To drill down further into the authentication flow, because it's one of the most complex parts of the solution: it's a place where we really started to leverage some of the capabilities of NGINX. There's an NGINX plugin that allows you to do opaque token introspection, and we were able to leverage that to build a very secure authentication flow.
D: If people have particular questions on this authentication flow, I can direct you to SMEs; I can take them away. There's probably some of this that I can't answer in detail, but what I can show you is the overall flow. A request from the third-party provider application (this is a PSD2 banking flow) terminates its mutual TLS on the NGINX server, then requests and obtains an OAuth token, and then calls the API for the PSD2 service.
D: It breaks out to NGINX to do that token introspection, for that additional layer of security that was required, and on validating the token it's able to forward the API request on to the API Platform Cloud Service gateway.
D: Ultimately, it's a multi-layered approach to authentication, and again we were able to design, prototype and deliver these kinds of processes very, very rapidly in the cloud, and to re-cut and re-draw infrastructure as well as applications very rapidly through that DevOps pipeline. One of the other SMEs that worked for me, with probably 17 years of experience around the tech, hadn't actually worked with data integration and hadn't really worked with GoldenGate before, except in a very light way.
D: We were able to put a SOCKS proxy in front of that GoldenGate deployment, encrypt the trail files, and then expose that trail data via Data Integration Platform Cloud, running on a dedicated host in Oracle's cloud, to replicate the encrypted trail files up to a replicat process in a hub VCN in the cloud.
D: So we had a secure IPSec VPN taking encrypted data from encrypted trail files into the hub cluster, and then the replicat process replaying those changes into the Kafka cluster, directly into Kafka, from which microservices would dequeue from topics and then use MySQL Community Edition for their persistent storage. The Kafka data flow was another challenging part of the solution.
D: I had a third SME who was on Kafka, actually the same guy who looked after API Platform Cloud for me, and he was able to design a really resilient solution, again using all three availability domains in Oracle's cloud, using regional subnets across them, with multiple ZooKeepers across the Kafka implementation. Kafka is a very simple technology to set up.
D: It has quite simple principles, but like many simple, elegant technologies it allows for real complexity of solution: it's very easy to start to do complex and powerful things with a relatively simple tool. What we have here on the screen is the Data Integration Platform Cloud agent in its dedicated subnet at the bottom, replicating data into a Kafka broker, which is then synced with two other brokers.
D: We've got an example service depicted here, the subscriber account details service, which can dequeue from the broker-one topic and then access its MySQL microstorage as it executes the business service and responds to the API call. Similarly, we might have the produce account details service in the second subnet and the debt register API in the third subnet.
D: What we found with Kafka was that we had some particular challenges during the bootstrapping phase. In order to get the throughput that we needed, we had Kafka topics configured across multiple brokers, so two topics with multiple partitions across the three brokers, and multiple consumers and consumer groups as well. And what we actually found when we began testing this...
D: What we found was that we were having data changes written onto topics but then consumed from topics in a different order from the one in which they'd been written, and this caused a lot of headaches initially. What we ended up doing was using a hashing key: Data Integration Platform Cloud allowed us to generate a hashing key based on the primary key column of the data that changed, so we could write every data change for a given key into the same Kafka partition and read them serially again. Once we'd resolved that, we found another problem during the bootstrapping phase, when we initially loaded data into the solution.
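The ordering fix can be sketched like this: keying each change by its primary key pins all changes for that row to one partition, and Kafka preserves order within a partition, so a consumer reads those changes serially again. (Kafka's Java client hashes keys with murmur2; the CRC32 here is just a stable stand-in hash for the sketch.)

```python
import zlib

NUM_PARTITIONS = 3  # matches the three brokers described above

def partition_for(key, num_partitions=NUM_PARTITIONS):
    """Stable key -> partition mapping, mimicking hash(key) % num_partitions."""
    return zlib.crc32(key.encode()) % num_partitions

# Every change to the same primary key lands in the same partition, so a
# consumer of that partition sees the changes in the order they were produced.
changes = [("row-42", "v1"), ("row-7", "v1"), ("row-42", "v2")]
partitions = [partition_for(k) for k, _ in changes]
same_row_same_partition = partitions[0] == partitions[2]  # both are "row-42"
```

Without the key, the producer spreads records across partitions for throughput, which is exactly how v2 of a row could be consumed before v1.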
D: We had difficulty with throughput during that phase, and again this is where the advantage of working on cloud infrastructure really began to show itself. We were able to swap out the compute shapes on which we were running the Kafka cluster for dense I/O shapes that gave us much greater disk throughput, and we did it simply by changing the Terraform code and executing the infrastructure pipeline.
D: We swapped out the underlying hardware shapes, got massive throughput for the bootstrapping phase on the dense I/O shapes, and then executed pipelines again to change the compute shape back underneath afterwards. The kind of flexibility that operating in the cloud gave us is the sort of thing that was unthinkable just a couple of years ago on an on-premise project.
D: It gives us a kind of agility that's just unheard of, particularly working with guys who've been in the infrastructure space for 15 to 20 years. The CI/CD architecture is relatively simple, to be quite honest with you. We use Jenkins: it's tried and tested. There are many other tools, and we do use other tools, certain proprietary tools as well, across Capgemini projects, but Jenkins did everything we needed it to do here. So it sits there behind a reverse proxy.
D: It runs a Jenkins master, and then we run a set of Jenkins workers across a dedicated Kubernetes cluster. It's not the same cluster where the business services run; it has a dedicated Kubernetes pool to execute the Jenkins jobs distributed by the master.
D: We initially did quite a lot of the application configuration using Terraform local-exec and remote-exec, but we have since moved on and done a lot more with Ansible, so we were able to do a lot of the application configuration with Ansible. Terraform state is all stored in OCI Object Storage, another capability that came along during the project, and again we were able to simply drop that into the engineering work stream.
D: We looked at the targets a little bit at the start of the presentation, and we did meet all of them: we hit all the regulatory milestones. We had a first drop of code in month three, delivering the PSD2 developer platform; in month six we delivered the open banking APIs, actually about 10 days ahead of schedule, certainly at least a week ahead of schedule, and we went live with the regulatory requirements on time as well. I'll talk a little bit about the operating model in a moment: we took the customer to a new operating model and we really cut their time to market. It's difficult to measure, but the customer feedback is that they are able to propagate new ideas into real business services.
D: We did a set of performance testing, and even when we took the load up to 10 times the predicted load, we were still at around or under 200-millisecond response times. The infrastructure really surprised us, because we didn't oversize it: this is very lightly sized infrastructure, but it performed above how we expected it to perform. The solution is cloud native.
D: The data replication part is Oracle Data Integration Platform Cloud, so that is proprietary, but it was the sensible thing to use to integrate with the existing GoldenGate deployment. Everything thereafter is open source: MySQL Community Edition, Apache Kafka, Java, Kubernetes, all open source solutions. The only other bits that weren't open source were the API gateway itself and the third-party IdAM that the customer already had in their environment.
D
But
it's
gone
away
as
long
as
you
didn't
hear
me:
that's
okay!
The
we've
not
had
any
downtime
in
this
solution.
I'm
touching
wood
as
I
sit.
D: So the way that we operate now is along the lines of the diagram that you see on the left, where we have small teams developing business capabilities, embracing agile principles: small, empowered teams working in sprints.
C: Chris, we are not able to see the slide; we're not able to see this slide. Let me know if you'd like me to bring it up on my end, yeah.
D: I don't know what's happening; it says it's still displaying. Hold on a minute. In fact, I've lost my Zoom controls now, they've disappeared. Bear with me.
D: So we reached a conclusion with this customer in terms of delivering their regulatory APIs and open banking APIs, but they're now on a journey where their existing monolith application is gradually being shrunk: they're decomposing it, and on a weekly basis there are new services, new microservices, being deployed onto the platform.
D: Their existing solution will shrink right down and eventually be sunsetted, and they've moved to this product-centric view and capability and really digitized at the heart of the business, in a way that their existing legacy architecture just didn't allow them to do. There's a great quote on the screen: this is what cloud computing brings. This happens to be a co-presentation with Oracle, but actually this is about the power of doing something in the cloud rather than doing it on premise. The speed with which this business is now able to deliver to their customers has taken an absolute paradigm shift as a result of getting out of the on-premise way of thinking and adopting a cloud-native approach. So I'll leave it there; I think we've got about 13 minutes left for questions, if anybody has any, so I'll move on to the last slide. There are some links there; go and have a look at them.
B: Thanks, Chris and Akshai, for the great presentation. We now have some time for questions. If you have questions you would like to ask, please drop them in the Q&A tab at the bottom of your screen; we will get to as many as possible in the time we have. Chris, let's try; we have 10 questions now. Are you starting?
D: So I would rather take that one away and come back with a really good answer, because I'm an infrastructure guy: I led the infrastructure implementation, but the API implementation was done by our offshore team in India and Spain, actually a hybrid offshore team. So I'm happy to take that away and give you a good view on it, probably from one of our Ace Directors. What I can do is scan down the questions. I'd happily take the question about whether it is available for on-premise installation as well.
D: We haven't tooled it up for on-premise installation, but we do have an implementation process now that will land this Agile Innovation Platform in about 10 working days, so we can deliver this as an integrated platform very quickly now. We also have other integration patterns that are not data-centric like the one you saw here: we have integration patterns, for example, for ERP applications, for Oracle's Applications Unlimited, and for web applications as well. We can deliver this very quickly.
D: It's not actually aimed at an on-premises install today, but it could certainly be retooled towards it. The whole project, I think I mentioned, delivered its first drop in month three, then drops in, I think, month six and month seven, so that was roughly how long it took. Team size was three SMEs for the infrastructure part plus myself, and then we brought a fourth SME in to do the service integration into management and monitoring and into Splunk, and then offshore...
D: Offshore we had three people, a designer and two developers, in fact. But the offshore team has grown now as they decompose the monolith: I think there are about six developers working offshore, probably a total of about eight people working in a couple of pods offshore at the moment. On tenancy organization and VCNs, again, I'd prefer to come back on that offline, happy to come back on that offline. Oracle does have a software load balancer, and we used NGINX because it has the plugin for opaque token introspection, so NGINX actually made sense for us to use there. We also used it for the mutual TLS termination, which the Oracle load balancer wasn't able to do.
D: Yeah, I was scrolling through the questions and trying to answer a few, so I've had a go at answering the ones I can, and I'm happy to take away a couple of the more detailed ones, such as the single-purpose versus multi-purpose API question, and I'll take away the one about MySQL versus MS SQL as well.
B: Okay, thank you so much. The other question is: is this solution available for a complete on-premise installation as well?
D: So it's not geared for an on-premise installation; we can deliver this into cloud in about 10 working days at the moment. In order to retool this to deliver into an on-premise installation...
D: That's certainly something that we could do, and in fact on the other project that I mentioned, the very large furniture manufacturer and reseller from the Nordics with the blue and yellow logo...
B: Okay, how long did you take to complete this migration?
D: We had a drop of the regulatory requirements in month six, I think it was, and the full banking requirements I think in month seven, so we might be a month out, but roughly this was about a nine-month process, including some of the design time and the procurement time. From first design to first live drop, excluding the developer portal (which was quite simple), the APIs were live in six months, which I think, from a standing start, is pretty incredible.
D: Yes, that was another one that I picked up. I had three SMEs working for me in the infrastructure team, then a fourth one that came on board to integrate into management and monitoring, and we had three offshore working on business services. That's a larger offshore team now, as they're decomposing the monolith into microservices; I think there are probably about eight to ten people working offshore now.
B: Okay. An attendee was wondering how you organized your tenancy and compartments for dev, test and production environments, and whether you used a single VCN or multiple VCNs.
D: Yeah, we have a single VCN, and we have a hub VCN as well, where we deploy some services such as bastions for access, but we do a lot of the logical separation using tenancy tagging and the built-in governance that allows you to logically separate services.
B: The other question is... okay, that one was already answered. All right, the next one is about VCNs: how big was the challenge of connecting the new application to the legacy applications?
D: That was a challenge. The original business services design was dependent on a handful of application fields, a handful of database table fields, but actually, as the design evolved, more and more data was being pulled through, and in fact we ended up with quite a lot of data coming through that GoldenGate data integration route. So the actual engineering of the connection, using Data Integration Platform Cloud, was quite straightforward.
D: Making sure that Kafka was scalable, though, was the thing that took the design thinking, because we ended up with a lot of different data fields being replicated through, partly so that we had more flexibility to develop future business services based on what we were already replicating into the topics.
D: They did also do an evaluation of AWS, and they selected Oracle Cloud based on fit in the organization and on the ability to provision very quickly. We did a 12-week PoC migration of an existing on-premise application into Oracle Cloud; the customer was really happy with the results, and as a result they were happy to select Oracle Cloud rather than AWS, which was the only other cloud they had considered.
B: Okay, thank you, Chris; thank you, Akshai, for a great presentation. That is all the questions we have for today. Thanks for joining us today. The webinar recording and slides will be online later today. We look forward to seeing you at future CNCF webinars. Have a great day, stay safe, see you.