Description
Kenneth Chenis of ACI Worldwide presents their OpenShift case study as part of the OpenShift Commons Gathering at Red Hat Summit 2018, during the "What's Next on OpenShift" panel moderated by Brian Gracely (Red Hat).
Hi, my name is Ken Chenis. I'm a chief architect at ACI Worldwide. ACI Worldwide is a payments company. You may not have heard of them, but there's a good chance each of you used them today when you checked into your hotel. We process payments and banking for over 5,000 financial institutions, merchants, intermediaries, and billers worldwide. We do about 14 trillion dollars a day in payments through our software.
So it's a pretty extensive environment. I'm here to talk to you a little bit about the real-time payments analytics and real-time fraud detection that we were able to achieve on the OpenShift platform. First, I just wanted to talk a little bit about ACI's Universal Payments platform. Because we support so many different modes of moving payments from point A to point B, we have a wide variety of software applications.
Those applications cover a plethora of areas, including retail payments, merchant payments, bill payments, and so forth. So when you pay your charge card, you're probably going through an ACI bill pay service; when you charge something, you could be using a retail payment authorization service that we run; and when you use an ATM, you're probably going through ACI rails to get money out of the machine. So we do all that, and what's really key to us is fraud detection and payments intelligence.
So everything that runs in our cloud environment goes through a centralized Universal Payments platform, which is hosted in the ACI cloud. It's a private cloud that's distributed across multiple data centers around the world. And so we have these challenges that we have to deal with, and one of the biggest is payment latency. When you process a payment, there is very little time to make decisions on what to do with that payment.
Hopefully you'll recognize that 80 milliseconds is not a tremendous amount of time to do everything that you have to do. So we start off with machine learning and data science working on data in a big data repository on a Hadoop cluster, and what ultimately comes out of that is features, rules, and models. Those features, rules, and models are what we apply in real time to each transaction flowing through the system.
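The talk doesn't show code, but a minimal sketch of what applying offline-derived rules and a model score to a single in-flight transaction could look like is below. All names, rules, and weights are illustrative stand-ins, not ACI's actual interfaces.

```python
# Hypothetical sketch: applying features, rules, and a model (produced
# offline by the Hadoop-based data science pipeline) to one transaction.

def apply_rules(txn, rules):
    """Return the names of all rules the transaction trips."""
    return [name for name, predicate in rules.items() if predicate(txn)]

def score(txn, features, weights):
    """Toy linear model over precomputed feature values."""
    return sum(weights[f] * features.get(f, 0.0) for f in weights)

# Rules and model weights are hard-coded here for illustration only.
RULES = {
    "high_amount": lambda t: t["amount"] > 10_000,
    "foreign_card": lambda t: t["card_country"] != t["merchant_country"],
}
WEIGHTS = {"avg_spend_ratio": 0.7, "txns_last_hour": 0.3}

txn = {"amount": 12_500, "card_country": "US", "merchant_country": "GB"}
features = {"avg_spend_ratio": 4.2, "txns_last_hour": 0.0}

tripped = apply_rules(txn, RULES)            # ["high_amount", "foreign_card"]
fraud_score = score(txn, features, WEIGHTS)  # 0.7*4.2 + 0.3*0.0 = 2.94
```

The key property this mirrors is that the decision logic stays fixed while the rules, weights, and features are data that the offline pipeline refreshes.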
Within the platform, we moved from a relational database model to a Cassandra cluster model, so we're using a Cassandra cluster for our persistence layer, and we're still using our Hadoop cluster for all of our machine learning. We used a lot of open source technologies and then extended them to meet ACI's non-functional requirements.
We are very stringent about security, and we're very stringent about ensuring that a payment gets from point A to point B without getting lost. We have about a 40-year history in which we've never lost a financial transaction, and that means a lot to ACI.
So here you can see our transactions come in through our Universal Payments platform; it's kind of like a universal adapter. Then they go into our event receiver. Everything is defined through metadata, so as the data evolves, we can just make configuration changes in the environment and it automatically updates.
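A rough sketch of the metadata-driven idea, under my own assumptions about what "defined through metadata" means here: the record layout lives in configuration, so a schema change is a config update rather than a code change. Field names and types are hypothetical.

```python
# Hypothetical sketch of a metadata-driven event receiver: the record
# layout is data, so evolving the schema never touches the decode logic.

METADATA = {
    "payment_event": [
        ("txn_id", str),
        ("amount", float),
        ("currency", str),
    ]
}

def decode(event_type, raw_fields):
    """Decode a delimited record using only the metadata definition."""
    schema = METADATA[event_type]
    return {name: cast(value) for (name, cast), value in zip(schema, raw_fields)}

event = decode("payment_event", ["T-1001", "99.50", "USD"])
# If the data evolves, appending e.g. ("channel", str) to the metadata is
# enough: decode() itself never changes, so no redeploy of the parser.
```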
We never have to experience any downtime in the environment, because we can roll service updates throughout the environment simply by deploying new containers. Now, a little bit about the real-time analytics and our performance characteristics.
We come into an exec; the exec calls into a microservice, which is running a complex event processor. We can spin up any number of these; during Black Friday weekend, we'd probably have six or seven of these running in parallel. We execute our models the same way: we'll have multiple model executors running in parallel.
We go to the Cassandra data store to retrieve all of the feature information, and then we make a decision. We rode through the Black Friday weekend, which is basically Black Friday through Cyber Monday, and maintained a 30-millisecond latency for all of our decision-making. We were pretty proud to hit that metric on the OpenShift platform.
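In the spirit of the parallel model executors and the 30 ms decision budget described above, here is a minimal sketch of fanning out to several models concurrently under a fixed deadline. The feature store, models, and budget handling are stand-ins I've invented for illustration, not ACI's real services.

```python
# Hypothetical sketch: run model executors in parallel against features
# fetched from a Cassandra-like store, bounded by a decision deadline.
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

DEADLINE_SECONDS = 0.030  # overall decision budget, per the 30 ms figure

def fetch_features(txn_id):
    """Stand-in for a Cassandra read of precomputed feature values."""
    return {"avg_spend_ratio": 1.2, "txns_last_hour": 3.0}

MODELS = {
    "velocity": lambda f: f["txns_last_hour"] / 10.0,
    "spend":    lambda f: f["avg_spend_ratio"] / 2.0,
}

def decide(txn_id):
    features = fetch_features(txn_id)
    scores = {}
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        futures = {pool.submit(model, features): name
                   for name, model in MODELS.items()}
        try:
            for fut in as_completed(futures, timeout=DEADLINE_SECONDS):
                scores[futures[fut]] = fut.result()
        except TimeoutError:
            pass  # late models are dropped; decide with what we have
    return max(scores.values()) if scores else 0.0

risk = decide("T-1001")
```

The design choice mirrored here is that the deadline, not model completion, bounds the decision: anything that misses the budget is simply excluded from the score.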
What we'd like to do as a future direction is really where we originally intended to be, which was running our Cassandra cluster on OpenShift as well, and running our Hadoop infrastructure on OpenShift. We've encountered some challenges, and we're working with the vendors, with Hortonworks and DataStax and Red Hat, to try to tie these platforms together, so that we can maintain the same low latencies and the same scalability and flexibility that we have within the container platform as we have outside the container platform.