From YouTube: Open Banking with Microservices Architectures and Apache Kafka on OpenShift - Case Poste Italiane
Description
OpenShift Commons Gathering
Milan Italy 2019
Title: Open Banking with Microservices Architectures and Apache Kafka on OpenShift - Case Poste Italiane
Speakers: Paolo Gigante (Poste Italiane) | Pierluigi Sforza (Poste Italiane) | Paolo Patierno (Red Hat)
Paolo Patierno: Okay, thank you all. My name is Paolo Patierno; I'm one of the engineers at Red Hat working in the messaging team, mostly on the Kafka side, and I am here with Pierluigi and Paolo from Poste Italiane to show a use case at Poste. I will give a little bit of introduction around Kafka and how to run Kafka workloads on Kubernetes.
So on OpenShift, in this case, using the main project that I work on, which is Strimzi. Then Pierluigi and Paolo will introduce the use case at Poste Italiane using Kafka, and in fact running Kafka on Kubernetes and OpenShift, as I mentioned.
So, a little bit of introduction for those who may not know Kafka. Kafka is a messaging system, mostly based on the publish/subscribe pattern, but it's also a data streaming platform; the definition of Kafka has changed a little bit over time, so it started as a messaging system.
Then it became a data streaming platform, but in the end Kafka is a commit log: you send some messages and the messages are written into files. Now, Kafka is a stateful application, and running a stateful application on OpenShift is not so simple.
So on one side we have the characteristics that Kafka has as a stateful application: every broker in a Kafka cluster has its own identity.
The brokers need to be discoverable by each other; they have to talk to each other. And the same goes for ZooKeeper, because today a Kafka cluster works alongside a ZooKeeper ensemble, for example for saving information about the Kafka topics, the Kafka brokers and so on.
So on one side there are the features that we need for Kafka, and on the other side we have what OpenShift provides us in order to run Kafka on OpenShift itself, because we have Kubernetes, and so OpenShift, native resources like, for example, StatefulSets.
We can use them in order to deploy Kafka on OpenShift. We can use ConfigMaps and Secrets for storing configuration and for storing TLS certificates, for example, or we can use PersistentVolumes and PersistentVolumeClaims for handling the storage of the messages in Kafka, and so on.
So there are the Kafka characteristics that we have on one side, and what OpenShift provides us in order to have Kafka running on OpenShift on the other. But there are some challenges, and it's not so simple.
So, as you already saw this morning talking about operators, the best solution for this is having an operator. Instead of having to create your StatefulSets, create all the ConfigMaps and all the PersistentVolumeClaims that you need in order to get your Kafka cluster running on OpenShift, and then having to update all these YAML files, all these resources, in order to update your cluster and so on, an operator can do all of this for you.
You can use the operator coming from the Strimzi project, which since the beginning of September is under the CNCF. So it's a sandbox project, and I love to say that it's the way, at this time, for deploying Kafka on Kubernetes in a cloud native way.
So what does Strimzi provide? A bunch of images of Kafka and ZooKeeper for running on OpenShift, and in this case on Kubernetes, and a way for deploying and handling a Kafka cluster in a cloud native way.
So it means that you don't have to create native Kubernetes resources like StatefulSets, Pods and Deployments; you have new resources. When you install the Strimzi operator, you get some custom resources: you have a Kafka resource, a KafkaTopic, a KafkaUser, and so on.
They look something like this, so you can describe your Kafka cluster; it is a new kind of resource in Kubernetes.
You can specify, for example, the number of replicas, which means the number of brokers that you want in your cluster, the configuration, and how to expose the Kafka brokers, inside your OpenShift cluster or even outside of it. And at the same time you can describe a Kafka topic.
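A Kafka custom resource along the lines the speaker describes might look like the following. This is a minimal sketch following the Strimzi CRD shape of that period; the API version, resource name and values are illustrative:

```yaml
# Minimal sketch of a Strimzi Kafka custom resource (values illustrative).
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3                # number of Kafka brokers
    listeners:
      plain: {}                # plain listener inside the cluster
      tls: {}                  # TLS listener inside the cluster
      external:
        type: route            # expose brokers outside OpenShift via Routes
    config:
      offsets.topic.replication.factor: 3
    storage:
      type: persistent-claim   # storage backed by PersistentVolumeClaims
      size: 100Gi
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
  entityOperator:              # deploys the Topic and User Operators
    topicOperator: {}
    userOperator: {}
```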
So when you want to create a Kafka topic, you don't have to use the tools that Kafka provides for creating topics; you can interact with Kubernetes using the kubectl command, or the oc command if you are on OpenShift, and you can create a new KafkaTopic resource with all the information about the topic: the number of partitions, the replicas and the configuration. And the same goes for the user.
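A KafkaTopic resource as described could look like this; a sketch following Strimzi's CRD shape, with an illustrative topic name, cluster label and configuration:

```yaml
# Sketch of a Strimzi KafkaTopic custom resource (names and values illustrative).
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to a specific Kafka cluster
spec:
  partitions: 10
  replicas: 3
  config:
    retention.ms: 604800000          # keep messages for 7 days
```

Applying it with `kubectl apply -f topic.yaml` (or `oc apply` on OpenShift) is enough: the Topic Operator watches the resource and creates the topic inside Kafka.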
So if you want to define the rights for the consumer and producer applications, in order to write to and read from specific topics and so on, you can do that as well. In this way you can deploy and handle your Kafka cluster just by handling OpenShift resources, in this case resources which are specific to Kafka.
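Those consumer and producer rights are declared through a KafkaUser resource; a sketch following Strimzi's CRD shape, with hypothetical user and topic names:

```yaml
# Sketch of a Strimzi KafkaUser with ACLs (names illustrative).
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaUser
metadata:
  name: my-app-user
  labels:
    strimzi.io/cluster: my-cluster   # the cluster this user belongs to
spec:
  authentication:
    type: tls                        # client authenticates with a TLS certificate
  authorization:
    type: simple
    acls:
      - resource:
          type: topic
          name: my-topic
        operation: Read              # allow consuming from my-topic
      - resource:
          type: topic
          name: my-topic
        operation: Write             # allow producing to my-topic
```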
So what you have is this Cluster Operator, which is watching for these new kinds of resources, and when, for example, you deploy your YAML file describing your Kafka cluster, the Cluster Operator takes care of that and creates for you the ZooKeeper ensemble. When that is up and running, it deploys the Kafka cluster, and then it deploys two more operators for handling the topics and the users. So instead of having just one cluster operator, each operator can focus on one feature.
The Cluster Operator handles just the Kafka cluster and the ZooKeeper ensemble, and for handling, for example, topics and users, we preferred to have two other, different operators. On the other side, if you want to update your cluster, for example, which is not simple when you use Kafka on bare metal, because you have to update all the brokers running on all the nodes, you can just update your custom resource. The Cluster Operator is watching for that and, for example, will start a rolling update on the ZooKeeper cluster.
That happens if you are changing some configuration parameter for ZooKeeper, or some other information, for example increasing the number of nodes that you want in ZooKeeper; and the same goes for Kafka. So it takes care of applying the new configuration that you changed, or the new number of replicas: you can scale up and scale down, adding or removing nodes from the cluster. At the same time the Cluster Operator will update, if needed, even the other operators, which can be configured with different parameters as well.
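The scale-up and scale-down just described come down to editing one field of the Kafka custom resource. A minimal sketch of the relevant fragment (resource layout per the Strimzi CRD of that period, values illustrative):

```yaml
# Fragment of a Kafka custom resource: scaling the broker count.
# On applying this change, the Cluster Operator notices the new value
# and adds the extra broker itself; no manual pod management is needed.
spec:
  kafka:
    replicas: 4   # previously 3: one broker is added to the cluster
```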
So we have discussed this operator, taking care of everything for you: creating, updating and handling the Kafka cluster in general. These are all the features that we have today in this project. There are, for example, tolerations and affinities, so you can specify that your Kafka broker can run on this node but cannot run on that other node, which has some taints and no tolerations for them; or, for example, that you want to run some Kafka brokers on specific nodes whose networking interfaces are faster than the others.
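Those tolerations and affinities are set inside the Kafka custom resource. A sketch, assuming the Strimzi fields of that period, where brokers tolerate a hypothetical `dedicated=kafka` taint and are pinned to nodes carrying a hypothetical `network-speed=10g` label:

```yaml
# Fragment of a Kafka custom resource: scheduling constraints for brokers.
spec:
  kafka:
    tolerations:
      - key: dedicated            # hypothetical taint on dedicated Kafka nodes
        operator: Equal
        value: kafka
        effect: NoSchedule
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: network-speed   # hypothetical label on fast-NIC nodes
                  operator: In
                  values: ["10g"]
```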
We handle mirroring for you: the operator handles Kafka MirrorMaker for mirroring a Kafka cluster across data centers, or Kafka Connect for syncing different systems, like moving data from one database to another through Kafka, for doing CDC, for example. We also handle Prometheus for you, for getting all the metrics from the brokers, and the storage, of course.
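Cross-data-center mirroring is itself declared as a custom resource. A sketch following the Strimzi KafkaMirrorMaker shape of that period; the bootstrap addresses and group id are hypothetical:

```yaml
# Sketch of a Strimzi KafkaMirrorMaker resource (addresses illustrative).
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaMirrorMaker
metadata:
  name: my-mirror-maker
spec:
  replicas: 1
  consumer:
    bootstrapServers: source-cluster-kafka-bootstrap:9092   # cluster to read from
    groupId: mirror-maker-group
  producer:
    bootstrapServers: target-cluster-kafka-bootstrap:9092   # cluster to write to
  whitelist: ".*"    # mirror every topic
```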
So these are, today, all the features that the Strimzi project provides you in order to easily run Kafka on OpenShift, and at this point I can hand over to Pierluigi and Paolo.
Pierluigi Sforza: Poste Italiane has been on the market since 1862, so it has about 160 years of history in delivering letters and packages. The second pillar of the business is the provision of financial services, and nowadays even loans, which, if you are interested, is a growing market. More than this, Poste offers digital services to all the public administration, like SPID digital identity services, and it is formed by a universe of companies of the group that offer services that are much more complex, like mobile services, delivery and so on.
There are 13,000 post offices, so huge, huge numbers for the services. Basically, any of these services has an interface for digital access, and all the business lines pushed IT to transform their services, for product evolution and for regulatory compliance, as we will see for PSD2. How to face this big change? Basically, byte after byte.
Paolo Gigante: Ticket IVA is the fifth project that we made with the DevOps approach, and we had the opportunity to introduce OpenShift for the first time in the IT department of Poste Italiane. The project is a portal; it is composed of five web applications based on Java 1.4 and JBoss 4.0, with end-of-life middleware, and so we containerized them and brought them into OpenShift in a lift-and-shift mode. We also built, for the first time, a pipeline of continuous integration and continuous delivery.
Pierluigi Sforza: Just let me add: basically, one view aims to present static user data that are collected from legacy systems, pushed to the components that elaborate them and store the data into a MongoDB cluster, to present them as a REST API to consumers. There is an ongoing project that will use change data capture from the mainframe to stream real-time data and provide it in real time to consumers.
This is an ongoing project. If you are asking whether it worked well, here you can find a number: in the first night, with 8 hours of work, we were able to ingest 500 million records, so congratulations to the developers. OK, going on.
We saw that the application was resilient, the infrastructure was resilient and performant, and so our CIO decided to test it for the core business of Poste Italiane.
He decided to experiment with the delivery of the PSD2 regulatory compliance on this architecture, and the main challenge was to be on time for the go-live date and to guarantee the response time offered by the API on the internet. We expected a huge growth of requests, and we had to do this quickly. So basically we used the same architecture to go into the core of the business of Poste Italiane, attaching it to the payments gateway. Paolo?
Paolo Gigante: Yes. This architecture is similar to the last one. We have all our microservices on OpenJDK and Spring Boot, and some interfaces with the legacy systems for payments and with the mainframe. The important thing about this platform is that we built the entire platform with the three pillars of observability: metrics, logging and distributed tracing. We made guidelines for our developers, so they can build applications with the standards of metrics and OpenTracing.
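On the metrics pillar, guidelines like these typically boil down to a small amount of shared configuration. A sketch for a Spring Boot service, assuming the Micrometer Prometheus registry is on the classpath; the service name tag is hypothetical:

```yaml
# application.yaml — expose Prometheus-format metrics from a Spring Boot service.
management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus   # enables the /actuator/prometheus endpoint
  metrics:
    tags:
      application: payments-gateway       # hypothetical service name, attached to every metric
```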
We deployed the entire platform across three data centers owned by Poste Italiane: Roma Europa, Roma Congressi and Turin. We built a campus between Roma Europa and Roma Congressi, thanks to the short distance and low network latency, and we stretched the Kafka cluster and the OpenShift cluster within those two sites. We replicated the entire cluster, topics included, with synchronous replication between Roma Europa and Roma Congressi, and asynchronous replication of the data from Roma to Turin.
Pierluigi Sforza: The main scope is to switch to it in case of disaster for the legacy systems. And here, these are the numbers of what has been done in less than one year, which for me is impressive. You may know that I joined Poste Italiane just two months ago, and it was very surprising, just collecting some data, to understand what was really done in less than one year.
We actually have 15 initiatives in a development state that will land on the infrastructure, using 13 clusters between enterprise OpenShift and open-source OpenShift, with 1,300 cores in production, two data centers, one cloud provider, and 350 developers working every day. So these are very huge numbers that we have to take charge of.
As for our current deployments: it's very efficient, it can absorb massive amounts of communication and it's very resilient, but it's expensive to deliver, to scale up and to scale out. So, how to simplify this?
We are running some tests with Strimzi, trying it for some use cases of inter-application communication, asynchronous communication, and I hope we will also get fast scale-out and scale-up for synchronous communication. Tests are currently running, so we hope to show you the results at the next Commons. And that's it. Thank you for your time.