A: Thanks, everyone, for turning back up after lunch; still a good audience. I'm Phil, a Solutions Architect with Red Hat from the UK, and I'm here with Tom from Elsevier today. I'm literally just going to introduce Tom and that's it, because it's his story. But just to give you an idea of where we started: I've been working with Tom and his team for about two or three years now, and this journey has only been the last 14 or 15 months really, hasn't it? So it's taken off really rapidly. It's a good story around microservices and transformation.
B: So what are we trying to achieve? I'm taking a slightly different angle to some of the other presentations today. Rather than going straight into what we've done on OpenShift and what we're doing there, it's more about what we do as a business: how do we move fast as a business and deliver technology fast to enable business outcomes?
B: That's what we're trying to achieve with this. We've got our microservice and integration platforms that help enable that: to enable our different products, our different internal and external systems, and our data platforms to really make use of data. We have some clear goals. How do we make delivery and development faster? How do we get through the development lifecycle and get services into production as fast as possible, at high quality, in a way that's cost-effective for the business? And how do we use our cloud platforms?
B: So we migrated everything to the cloud. When we started this journey we had a hybrid solution: part in-house, part in the cloud and part SaaS. Since then we've actually closed our internal data centre, so all of our platforms are now cloud-based, and we also use a number of different SaaS providers. Our integration platforms enable this between the different cloud services and the different SaaS services that we utilise.
B: One of the key aims as well was to use open technologies. One thing we wanted to avoid was lock-in to certain proprietary vendors and proprietary technologies. We wanted to be able to adapt, as this market is evolving fast all the time; we want to be able to move with that, and to bring the right kind of skills in from the market to help us get there as well.
B: We took a phased approach. As Phil mentioned, we started working together about three years ago. We started off with our enterprise service bus platforms: everything SOAP-based and quite heavyweight. What we found when we started looking at this is that it was very challenging. We had large monoliths, development took a long, long time, and everything was very proprietary when we were trying to find people to help move us along with this.
B: If you haven't got people with the right skills, they're expensive or difficult to find as well. So the first part of the journey was that we engaged Red Hat to look at how we start to set up our integration platform. We initially decided to go down the Fuse route. With JBoss Fuse we built what I call macroservices: not fully independent, but quite a way along that journey.
B: When we looked at this, OpenShift was around at the time, but in my opinion it was too early; I don't think the technology was mature enough then to consider it as the way forward at that point. Where it is now, it has reached that maturity level, and I think it shows from the group here today that over the last 12 months it's really gained traction, and it's a mature product out in the market.
B: Going to our Fuse platform enabled us to package up small services as OSGi containers. So it was a middle ground: with containers, but packaging multiple services into those containers, and utilising the open-source technology stack as well. We built a lot of our services around a Camel architecture, so Apache Camel, developed in Java and packaged into OSGi. In terms of core development this was really good, but the challenge we found was that we still had fairly long development cycles.
B: What we found was that the test and deployment model was still quite difficult to manage, and that ended up taking quite a bit of time along the journey. So about 14 months ago we did a week's proof of concept with Phil and the team. We decided to look at whether OpenShift was the right way to go now: let's try and prove this out, does this really work for us? So we took a few of our existing capabilities.
B: We looked at what it would take to migrate them to OpenShift. Where we are now: we actually went into production with OpenShift late last year, and we've also migrated a number of our services from our Fuse platform into OpenShift. What we're seeing now is that demand across the business is actually starting to increase as people see what we're doing, and we've felt some quite big gains by doing this as well. Our services are now completely independent; each container is completely independent.
B: So if one breaks, it doesn't break other things as well, which is a real business gain. We're also moving everything away from SOAP when it comes to integration. We've got some legacy that still uses it, but REST is now the standard, and as a lot of the newer software vendors you integrate with are mostly going down the REST route, REST helps.
B: When we started our microservices strategy, what we wanted to avoid was just running in really fast and getting things out there fast. The challenge you've then got is actually reining people back: you do it fast, but you end up with the Wild West, everybody does their own thing, it's very difficult to manage, you build up lots of technical debt, and you'll never catch up again. You never get that opportunity to come back. So we took care before we actually migrated this out to production.
B: What we wanted was to have the right building blocks in place, so we took some time to do that. We wanted clear development standards, so we set clear development standards for how people develop their services: what deployment model, and what quality criteria they need to meet across this. We put that tooling in place, along with our DevOps processes; in DevOps we need to ensure environments are provisioned, and can be re-provisioned, quickly. So we have a couple of clusters.
B: We have a non-prod cluster and a prod cluster, and we have automated pipelines that help us deploy to those very fast as well. So developers can build, deploy, test and then promote to production when they're ready. Once you've got things in production, you need to ensure you've got something robust. The business is not going to be happy if you put something out there and it ends up fairly flaky, falling over, with outages.
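The build, deploy, test and promote flow described above can be sketched as a simple quality gate. This is a minimal illustration, not Elsevier's actual pipeline; the stage functions and cluster names are invented stand-ins for the real CI/CD tooling.

```python
# Hypothetical sketch of a build -> deploy -> test -> promote pipeline.
# Stage functions are toy stand-ins for real CI/CD and cluster tooling.

def build(service):
    # produce a deployable image for the service
    return f"{service}:image"

def deploy(image, cluster):
    # deploy the image to the named cluster
    return {"image": image, "cluster": cluster, "healthy": True}

def smoke_test(deployment):
    # e.g. smoke tests run against the non-prod deployment
    return deployment["healthy"]

def promote_to_prod(service):
    image = build(service)
    staging = deploy(image, cluster="non-prod")
    if not smoke_test(staging):  # quality gate: never promote a failing build
        raise RuntimeError(f"{service} failed tests in non-prod")
    return deploy(image, cluster="prod")

release = promote_to_prod("orders-service")
print(release["cluster"])  # "prod"
```

The point of the gate is that promotion to the prod cluster is automated but conditional: a service only moves on once its non-prod deployment passes its tests.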
B
You've
got
to
have
the
right
monitoring
in
place
to
make
sure
you're
covering
everything
off
and
can
compete
SLA,
so
the
business
needs
their
different
services
the
end
of
the
day,
wouldn't
the
other
challenges
that
we
found
was,
as
this
started
gaining
demands
at
first,
we
set
up
a
center
of
excellence
to
actually
deliver
integration,
services
and
micro
services,
but
what
we
found
over
the
last
two
years-
or
so
this
is
actually
scaled
way
beyond
our
control.
Everybody
is
building
different
services.
Everybody
wants
to
deploy
to
the
kind
of
platform.
B: So what we've done is adapt our approach to delivery. Now that we've got all these standards and building blocks in place, we can actually use a distributed delivery model. We can go out to the business and say: here you go, you can develop and deploy onto this platform, and all you have to do is adhere to these standards. If you follow the standards, things will go through. That way you're not the bottleneck in the middle, actually slowing people down from what we want to achieve.
B: So we're talking about OpenShift here, but what I wanted to do is paint this as part of our wider strategy. My remit within Elsevier is wider than just OpenShift and those aspects: I look after all our data platforms, our business intelligence and analytics, and the integration aspects. OpenShift itself is part of that wider strategy; it enables our core data APIs and microservices.
B
What
we
also
put
in
place
was
an
API
gateway
in
front
of
this,
so
that
allows
us
to
deploy
our
core
api's
from
our
services
as
well
and
simplify
that
way
of
integration.
What
we
also
have
is
a
data
Lake,
so
we
have
a
data
Lake,
where
lots
of
different
people
can
publish
data
into
that
data.
Lake
some
of
those
will
get
deployed
via
API
through
our
OpenShift
platform.
Some
will
be
there
purely
for
analytics
purposes.
B
We
have
our
traditional
data
warehouses
as
well,
so
core
visualization
analytics
happened
to
get
data
in
the
data
warehouses
and
what
we've
also
started
putting
in
place.
That
has
a
lot
of
knowledge
graphs
across
our
data
lake
as
well,
so
have
a
lot
of
different
data
capabilities
across
different
data
sets
in
the
organization
and
different
identifies
that
allow
these
to
come
through.
B
So
what
we
do
is
put
in
a
set
of
knowledge
graphs
in
front
of
this,
so
you
know
where
to
go
to
different
data,
sets
to
join
those
dots,
bring
that
data
together
and
then
you
can
expose
it
out
for
analytics
visualization
or
api's.
So
there
are
different
ways
to
consume
data
across
the
organization.
You
need
to
make
it
as
easy
as
possible
for
different
consumers,
whether
or
not
it's
system-to-system,
communication
or
user
access
to
that
fer
from
performing
analytics
machine
learning,
Big,
Data
and
also
for
presentation
and
visual
nation
of
data
as
well.
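The "join the dots" idea can be illustrated with a toy graph that resolves shared identifiers across data sets. The data sets, identifier scheme and records below are invented for the example; the real knowledge graphs are far richer.

```python
# Toy illustration of joining data sets via shared identifiers.
# Data sets and the "prefix:id" scheme are invented for this sketch.

articles = {"art:1": {"title": "Graphene review", "author_id": "auth:7"}}
authors  = {"auth:7": {"name": "J. Smith", "org_id": "org:3"}}
orgs     = {"org:3": {"name": "Example University"}}

# The "graph" maps each identifier prefix to the data set that owns it.
graph = {"art": articles, "auth": authors, "org": orgs}

def resolve(identifier):
    """Look up a record in whichever data set owns this identifier."""
    prefix = identifier.split(":")[0]
    return graph[prefix][identifier]

def article_with_context(article_id):
    """Follow identifier links to assemble one joined-up view."""
    art = resolve(article_id)
    author = resolve(art["author_id"])
    org = resolve(author["org_id"])
    return {"title": art["title"], "author": author["name"], "org": org["name"]}

print(article_with_context("art:1"))
```

The joined view can then be served out through whichever channel suits the consumer: an API, an analytics job, or a visualisation layer.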
B: Let me run quickly through our roadmap: where we started and where we are now. We started in 2015. We implemented JBoss Fuse, put our initial strategy in place, and we set up a centre of excellence for integration. This worked very well; it gave us a really good way to get going, and we migrated everything off our existing ESBs into JBoss Fuse.
B
That
platforms
been
running
now
since
late,
2008
2015,
but
we're
then
what
we
did
with
decommissioned
our
ESB
and
then
in
2017.
It
was
the
right
time
so
right,
OpenShift.
This
is
coming
to
a
really
good
maturity
level.
It's
starting
to
get
a
lot
traction
out
in
the
market.
We
feel
it's
the
right
time
to
look
at
this.
We
did
our
proof
of
concept.
This
proof
of
concept
proved
to
be
successful.
We
thought
yes,
this
now
seems
the
right
way
to
go
this.
Will
this
helps
us
from
where
we
were?
B
Initially,
we
came
to
this
halfway
house
and
now
it
helps
us
to
move
to
the
future
to
where
we
need
to
be
for
the
long
term.
So
we
did
this.
Then
we
started
to
engage
the
Red
Hat
and
said
right.
Let's,
let's
get
this
platform
into
production,
we
put
all
our
processes,
everything
place.
We
got
it
into
production
and
then
at
the
moment,
which
were
due
to
finish
by
the
middle
of
this
year,
we
started
migrating.
Our
few
services
across
we've
got
new
services
coming
in
we're
migrating
our
JBoss
views.
B: It wasn't a redevelopment, it's more of a repackaging. We've taken what we had in OSGi and repackaged it as Spring Boot, which gives you that fast cycle to build, test and deploy, and you have to test a lot less than in the past: a lot of it works and is proven out there already, so you're not starting again. We have taken some opportunities to simplify. As we've put an API gateway in front of this, one of the things we've really used this as an opportunity to do is simplify some of our security models.
B
What
we
had
when
we
had
our
initial
integration
services,
we
had
all
different
types
of
security.
We
had
basic
auth,
we
had
WS
security.
We
had
certificate
based
security
when
all
these
different
types,
so
we
decided
to
go
through
an
hour
by
wolf
model.
We
deployed
that
in
the
Gateway
and
then
we
can
have
our
simple
security
model
between
the
Gateway
and
our
micro
services.
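A minimal sketch of that simplification, assuming the gateway terminates the external OAuth bearer token and forwards requests to the microservices with a single internal credential. The token values, header names and validation logic here are invented; a real gateway would validate tokens against an authorisation server.

```python
# Sketch: external OAuth is handled once at the gateway; behind it there
# is one simple internal security model instead of basic auth,
# WS-Security and certificates per service. Values are illustrative.

VALID_TOKENS = {"token-abc"}          # stand-in for real OAuth token validation
INTERNAL_KEY = "internal-shared-key"  # the single internal credential

def gateway(request):
    token = request.get("authorization", "")
    if not token.startswith("Bearer ") or token[len("Bearer "):] not in VALID_TOKENS:
        return {"status": 401}  # reject unauthenticated callers at the edge
    # From here on every service sees the same simple model.
    return microservice({"x-internal-key": INTERNAL_KEY, "path": request["path"]})

def microservice(request):
    if request.get("x-internal-key") != INTERNAL_KEY:
        return {"status": 403}  # only the gateway may call services directly
    return {"status": 200, "body": f"handled {request['path']}"}

print(gateway({"authorization": "Bearer token-abc", "path": "/articles"}))  # status 200
print(gateway({"authorization": "Bearer wrong", "path": "/articles"}))      # status 401
```

The design choice is that token complexity lives in exactly one place, so adding a new microservice does not add a new security scheme.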
B: Since we went live with this platform, things have started evolving. The next big challenge we've seen in our business now is: how do we scale this? Everybody's coming to us; everybody's looking at containers. We've got to make sure we can keep up with this demand, and manage demand in the right way. One of the other key things as well: everybody's jumping on containers as the next big thing, but we want to make sure containers are used for the right things too.
B
It's
probably
not
the
right
choice
of
putting
everything
on
it
and
it's
trying
to
rein
people
back
a
bit,
so
we
do
use
containers
the
right
thing
and
we're
not
putting
massive
massive
databases.
Yeah
massive
application
servers
onto
there
I
think
at
this
point
in
time
it
best
to
be
managed
sort
of
you
know
off
container
the
way,
the
way
they
are
at
the
moment,
so
throughout
the
rest
of
2018
and
2019.
B: Since we put this in, one of the things we wanted to do is take a step back and ask: have we achieved the business value we set out for, really? In most circumstances I would say absolutely. If I look at it, our development is so much faster now: we're seeing probably between 25 and 50 percent of the development time we were used to with JBoss Fuse, and most of this is down to the simplified build, test and deploy model.
B
Employee
model
I
mentioned
earlier:
we've
allowed
simpler
testing
and
automation
automated
testing
around
some
of
this
as
well
yeah,
one
of
the
key
things
that
we've
done
around
testing,
which
really
helps
us
is
every
time
we
develop
something
new
across
integration.
A
lot
of
the
business
partners
wanted
end-to-end
testing
across
all
these
different
systems.
This
used
to
take
a
lot
of
time
a
lot
of
external
dependencies
on
people
and
processes
which
didn't
help
us
move
meet
that
so,
if
we've
got
clearly
defined
data
contracts
across
that
now,
what
we
do
is
test
a
contract.
B
We
actually
have
micro
services
that
provide
simulators
and
stubs
for
external
systems,
so
we
can
actually
test
against
that
and
you
can
test
the
data
output.
That's
thought
against
it
as
well,
which
gives
you
that
fast
turnaround
and
that's
been
a
real
big
gain
as
well.
You
know
we've
got
lower
cost
of
infrastructure
by
doing
this
as
well.
One
thing
you
got
to
be
very
careful
of
is
you
know
when
you've
got
more
containers,
you've
got
a
bigger
memory
footprint,
but
overall
you're
not
running
lots
and
lots
of
different
sets
of
infrastructure.
B
You've
got
one
set
of
infrastructure
across
your
cluster
and
it
allows
you
to
scale
that
down
once
you've
got
that
maturity.
The
other
thing
is
better
yeah,
better
business
engagement.
We
have
some
Clearasil
a's.
We've
got
to
meet
for
our
business.
By
doing
this,
what
we
have
done
is,
you
know:
we've
got
auto
scaling
now,
so
if
some
services
are
getting
hammered
with
lots
and
lots
of
demand,
we
can
auto
scale
to
meet
that.
What
we
used
to
find
is
a
container
would
fail.
Somebody
have
to
be
called
out
to
actually
write.
B: "Right, can you restart that container?" Now we've got auto-recovery of these. You say you want three pods; if one dies it gets thrown away and up comes another one. You're never going to be in that situation now where people need to come out and actually restart containers as and when they're needed.
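The recovery behaviour described, declaring a desired pod count and letting the platform replace failures, can be shown with a toy reconcile loop. This is only a conceptual sketch of what OpenShift's replication controllers do for real.

```python
# Toy reconcile loop: declare a desired number of pods, throw away dead
# ones and start replacements, so nobody is called out to restart anything.

import itertools

DESIRED_REPLICAS = 3
_ids = itertools.count(1)

def start_pod():
    return {"id": next(_ids), "alive": True}

def reconcile(pods):
    """Drop dead pods and start replacements up to the desired count."""
    pods = [p for p in pods if p["alive"]]
    while len(pods) < DESIRED_REPLICAS:
        pods.append(start_pod())
    return pods

pods = reconcile([])        # initial rollout: three pods come up
pods[0]["alive"] = False    # one pod dies...
pods = reconcile(pods)      # ...and the loop replaces it automatically
print(len(pods))  # 3
```

The operational win is that the declared state, three healthy pods, is continuously enforced by the platform rather than restored manually after each failure.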
So overall, we've found it's been a very good journey. We've had some challenges along the way, which you always will, but against what we were looking to achieve, I think we've come a long way towards that.