From YouTube: OpenShift Commons Briefing #101: Monolith to Microservices: A Journey with James Falkner
Description
In this briefing, Red Hat's James Falkner gives a comprehensive overview and demonstration of Red Hat OpenShift Application Runtimes (RHOAR), the development suite for building microservices-based applications. James focuses on modernizing legacy services as well as developing responsive cloud-native services to demonstrate the power of microservices, and provides a number of techniques for modernizing your monoliths.
A: Hello everybody, and welcome again to another OpenShift Commons briefing. I'm really pleased to have with me one of the leads from the RHOAR team, James Falkner, who is going to give us a talk on what RHOAR, or Red Hat OpenShift Application Runtimes, is, and the practical side of it. I'll let James introduce the topic and take it away. We can chat in the online chat, where I'll try to answer any questions, and we'll have live Q&A at the end. So with that, James, please take it away.
B: Okay, let's try it again. Yep, much better. Okay, great. Hello everyone, my name is James Falkner. I'm a senior technical marketing manager in the Red Hat middleware group, focusing on the RHOAR product, Red Hat OpenShift Application Runtimes. Today I'm going to talk to you about using RHOAR in the context of transitioning from monolithic application architectures to microservices architectures.
There were a couple of briefings a couple of weeks ago by some of my colleagues. John Clingan, the product manager, went through a number of introductory pieces on RHOAR: what the development goals were and why this product came to light. My colleague Thomas Qvarnström did a deep dive on Spring Boot within the context of RHOAR. Today I'm going to cover RHOAR in general, focusing on that transition: what to do with existing applications.
Basically, the modernization approaches can be split into two. Number one is modernizing existing applications: reusing as much as possible, sometimes achieving 100% reuse, but also moving them to an environment where the app can benefit from more automation and continuous integration, and where you set yourself up for a future modernization effort with microservices.
The other bucket is what some organizations do where they make a concerted effort to build net-new apps, essentially a rewrite from the ground up, employing modern application development frameworks and architectures, and developing a process to get those apps to production very quickly and with less downtime in case something fails.

We'll start with modernization. There are three options here: rehost, replatform, and refactor. Rehost is simply moving the application as-is, with very little change.

So, for example, small apps that are essentially frozen, where maybe all the developers have left, are good candidates for a rehost, whereas apps requiring high scale, which you'd like to be able to revise very quickly, would be candidates for a replatform or a refactor, depending on the resources and time that you have. The other options are not interesting to us; they're essentially do-nothing, and as you've seen countless times in the past, that is really not an option for businesses that want to innovate in this era of digital modernization.
Now, we've talked a lot about microservices and we'll talk more about them today, but the reality is that they bring more complexity to the application. There's no more single database, for example, no single source of truth, and a lot more moving parts that need to be integrated. So for customers who aren't ready for that, the majestic or fast-moving monolith might be a good option, and some actually argue that it's best to always start with a monolith, even for new greenfield applications, because it gives your developers a familiar environment in which to iterate and get the domain boundaries and domain models correct, without the additional complexity of distributed microservices.
A key example here is KeyBank; you may have heard of them. It's an old bank; they've had a number of acquisitions over the years and inherited a lot of applications. One particular application was a 15-year-old Java EE application deployed on WebSphere, which, as you can imagine, grew into this huge, monstrous thing with really big maintenance costs, and they could only get it out the door once a quarter. So as part of their modernization effort, they refactored it into a more modern application: still a monolith, but with a separation of concerns.

The front end became an AngularJS app, and the back end became RESTful services consumed by that front end. Again, still a monolith, but they were also able to containerize it, move it to OpenShift, and wrap it with a deployment pipeline. Instead of the 70 manual steps required in the past, they push a button and, in a few minutes, get bits to production: much quicker and in a more automated fashion.
So they moved their production cycle time from one quarter, three months, to one week. Not only did they achieve that one-week delivery, they also cut their production failure rate in half, which is super critical. One of the premises of a modernization effort is assuming you will fail, and you will fail, but having the tools and processes underneath to recover from that and keep the business moving.
It can also introduce new business value to the application, because simply rewriting an app brings no additional business value; but if you are able to add business value, which again I'll demo in a moment, you can really justify the cost of the overall strangulation effort. The last option is refactor. This is the complete rewrite. It's generally more expensive, with a lot more up-front cost, but it can also provide the most benefit.
There are a number of choices to be made when you go down this path, around language, framework, and development approach. This is where RHOAR comes in, but those choices need to be made before the first line of code is rewritten. As you can imagine, each of these comes with a number of trade-offs. Rehosting is generally the cheapest and takes the least amount of time, but it also bears the least amount of fruit.
It's still the existing application, and all of that application's existing bugs remain. Replatforming is kind of in the middle: it gives you a chance to start down the path toward a complete modernization, but it comes at the additional cost of moving the application, introducing new services, and rewriting incremental parts of the application, versus just lifting and shifting and not touching it. And lastly, refactoring, the rewrite, is of course the most expensive, but when done correctly it gives the best bang for your buck.
Developers and operations start speaking the same language, for example when using Linux containers, and they both agree that they're responsible for their own bits of code all the way from the developer's desktop out to production. Getting developers efficient at pushing bits to production essentially means getting out of their way. Self-service, on-demand infrastructure, where developers can order new development environments in minutes instead of weeks, is critical to meeting those goals of getting bits to production faster.
Once you have developers working quickly, you need a way to automate the build of those applications in a consistent manner, using things like Red Hat Ansible, or Puppet, or Chef, or some other automation framework. Once you are able to build consistently, you need to be able to deploy and deliver consistently and continuously. This is where your automated delivery pipeline comes into place, using CI/CD platforms like OpenShift and Jenkins.
Once you have the bits moving quickly through a pipeline, you need to be able to land them in production safely. Advanced deployment techniques like blue/green and canary deployments allow you to minimize the risk of bad code making it to production. It will happen; with these advanced deployment techniques you'll be able to minimize the impact of those changes, possibly prevent them, but more importantly, be able to undo them if they do occur.
Once you have all that, then you can start talking about microservices, fast-moving monoliths, and modernizing the applications themselves. But again, you still need to consider which languages, frameworks, and APIs you'll need; this is what RHOAR brings us. Today, the new digital architecture is done in the context of all the buzzwords you see here. APIs are front and center: they're super critical for integrating individual small applications together, using contracts, well-defined APIs, API versioning, and things like that.
The number of frameworks, languages, and technologies you can use to do this dwarfs what was available even five years ago. What we've seen in the industry is a move away from the traditional monolithic application server, where both the middleware and the operational platform were enclosed in a handful of industry-standard application servers like the JBoss application server, Oracle WebLogic, or IBM WebSphere, or even servlet containers like Tomcat, toward a separation of the operational platform from that middleware tier.
Red Hat OpenShift Application Runtimes is a curated collection of those time-tested frameworks and runtimes, specifically targeting cloud-native microservice applications. The product contains a number of frameworks and runtimes that you'll undoubtedly be familiar with. We provide two groups of those runtimes. The supported runtimes are fully supported by Red Hat: we provide lifecycle management and support contracts, for example for JBoss EAP, WildFly Swarm, Eclipse Vert.x, and Node.js.
The other group of frameworks are those that Red Hat tests and verifies to make sure they run smoothly on OpenShift, like Spring Boot and Spring Cloud Netflix (Hystrix and Ribbon). As we go forward, more parts of those libraries will fall under the support umbrella and be integrated with Red Hat technologies. RHOAR also provides Launch. Launch is a project generator based on a collection of cloud-native samples, using the supported and the tested-and-verified frameworks, to provide a very efficient and robust initial developer
experience, and you'll see that in the demo as well. So here's Launch itself; I wanted to briefly demonstrate it. Launch is a set of samples in the cloud that not only provide you with project starting points, but will actually deploy them for you onto OpenShift. It's essentially a wizard-based interface that runs on OpenShift itself and deploys both to OpenShift Online as well as to your local OpenShift Container Platform,
or OpenShift Origin, if you're running that. All of the runtimes we have in RHOAR are supported: Spring Boot, Vert.x, WildFly Swarm, and Node.js. Here's the website, developers.redhat.com/launch. You can launch a project and select either OpenShift Online or, if you want to build and run it locally, you can do that as well. I'll just choose that option for brevity here.
It'll essentially step me through a number of options for the different runtimes. If I reload this here, it might have logged me out... yes, it looks like I got logged out, so let me log back in and try this one more time. Okay, so I've selected my deployment type, build and run locally, and now I can select a mission type.
For the microservice missions, you can choose circuit breaker, externalized configuration, health check, and so forth. Let's say circuit breaker, and then I can choose the runtime. All of the types of applications you saw in the previous list are available for the different runtimes, so we'll go with Vert.x and click Next. I can provide the project information, click Next, and in this case, since I'm doing a local install, I can click download and it'll download a zip file.
I can then unzip that, load it in my IDE, and go take a look at the example code. Again, this is not something you're going to use in production; it's really an initial developer experience to get you up and running quickly. It's more than just a project generator: it specifically targets microservice applications, with health checks, fault tolerance, and the types of features you find in typical microservice applications, and it applies those to the individual runtimes within RHOAR. So that's the starter.
You can quickly get started with that. For advanced developers going beyond the getting-started experience, the way you consume RHOAR from Red Hat depends on the technology. Three of the runtimes are Java-based: Vert.x, WildFly Swarm, and Spring Boot are all Java, and the typical way you build Java applications is with Maven or Gradle.
So the artifacts you'll download for the RHOAR product come from the Maven repositories that Red Hat hosts, in addition to the upstream repositories for the unsupported components of the runtimes, which you'll see in a moment. We have maven.repository.redhat.com, which is our official Maven repository. Node.js, obviously, is not a Java application; the way you consume it is through the Linux container image we have hosted on the Red Hat Container Catalog, at registry.access.redhat.com.
So essentially, these are the release channels the bits come through that you can use in your projects. A quick example before we get to the demo: here's an example using WildFly Swarm. To consume WildFly Swarm using Maven, in your pom.xml you declare the repository from which the bits will come. Then you declare a dependency on the BOM, or bill of materials. This brings in all of the dependency information for WildFly Swarm within the context of RHOAR. From there,
you can specify the individual components within WildFly Swarm that you want to use. In this example, we're using a fraction called monitor; I'll tell you what fractions are in a moment. You can bring in the different parts and functionality you need from WildFly Swarm, or from Spring, or from Vert.x using this same technique. With Node.js, you make a change to your package.json file, which again you'll see in the demo.
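To make that concrete, here is a minimal sketch of the pom.xml wiring described above. The repository URL and version property are illustrative; check the RHOAR documentation for the current values:

```xml
<!-- Red Hat's hosted Maven repository for supported RHOAR artifacts -->
<repositories>
  <repository>
    <id>redhat-ga</id>
    <url>https://maven.repository.redhat.com/ga/</url>
  </repository>
</repositories>

<dependencyManagement>
  <dependencies>
    <!-- WildFly Swarm bill of materials: pins the versions of all fractions -->
    <dependency>
      <groupId>org.wildfly.swarm</groupId>
      <artifactId>bom</artifactId>
      <version>${version.wildfly-swarm}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <!-- An individual fraction, e.g. monitor for health-check support;
       no version needed because the BOM manages it -->
  <dependency>
    <groupId>org.wildfly.swarm</groupId>
    <artifactId>monitor</artifactId>
  </dependency>
</dependencies>
```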
Okay, enough talk; let's get to the meat of this presentation, which is actually four or five examples. The code is on GitHub if you want to follow along. There are two branches: a master branch, which contains the starting point from which I will start, and then, if you get stuck or need help, you can check out the solution branch, which has the solutions to these different exercises.
Okay, we'll start with WildFly Swarm, so one slide on what Swarm is. If you think about it from the developer's perspective, shifting to microservices comes with a lot of changes: the infrastructure is changing to the cloud, and the app architecture is moving to more modular, distributed services. So WildFly Swarm tries to provide a familiar path to microservices for Java EE developers.
That's important to remember: it's targeted at Java EE developers who are building Java microservices, in order to maximize their productivity and use their existing Java EE skills, building microservices using a subset of Java EE. One of the complaints about Java EE in the past is that the spec has moved really slowly and the app servers are really big and bloated; with Swarm, it's essentially just enough of the app server that you need for your microservice applications.
It's particularly useful if you have an existing application, say a monolith, and you want to move it to a microservice architecture over time. You can combine Java EE and non-Java EE technologies using Swarm, essentially reusing your Java EE knowledge while bringing in the microservices functionality you need in the application.
Swarm is based on standards; its components come from the Java EE world, and standards are good, but sometimes they don't move very quickly. So you may have heard of MicroProfile. MicroProfile is a collection of specifications that are very useful for developers writing microservice applications using Java. A number of vendors have come together to form a microservices-oriented set of specifications alongside Java EE.
So it's not part of Java EE. These vendors are interested in microservices applications and in moving Java technology forward to meet modern business challenges. Red Hat, of course, is involved, and a number of others that you can see on the slide are involved as well. It's not just Red Hat: it's a community of not only vendors but also user communities, like the London Java community and the Brazil Java community, that have come together to set this in motion.
A new release, version 1.2, was just announced at JavaOne last week, and 1.2 contains a number of technologies which we won't specifically get into today. Just know that MicroProfile is a set of specifications, and WildFly Swarm is our implementation. Within the RHOAR context, here's the support for WildFly Swarm. We're again targeting microservices, so you'll see a number of fractions. Fractions are the components of WildFly Swarm that encapsulate certain functionality.
You can have a fraction for health checks, or a fraction for topology, or a fraction for externalized configuration, and you bring those in using the Maven pom.xml technique I showed you earlier. The supported fractions and the certified fractions, which again are tested and verified to work well with Swarm, are there as well, and then you can see the upstream components which are currently unsupported. As time goes on and we hear back from the community and from our customers, some of those may move into supported status.
Okay, so what does this mean for an existing application? Let me bring up my notes here so I don't miss anything. What we're going to do is use WildFly Swarm to essentially wrap an existing application. For this first demo I have an existing application. It's a monolith.
It's essentially a storefront. Let me just go ahead and run it first so you can see exactly what I'm talking about. I can basically do mvn clean package, and this will build my monolith. This is an existing Java EE monolith. I have a number of services, as you can see in the source code here: I have some stateless EJBs for handling the product catalog, and I have stateful EJBs for handling the individual user's shopping cart. And I've just built this application.
So I can look at it here: here's my monolith war file. I can take this war file and deploy it to any Java EE application server; this is ten-plus-year-old technology for deploying applications. What I want to do is wrap it with Swarm. The first thing I'm going to do is look at the pom.xml and the changes needed to start using Swarm. Here's the Maven project file, very simple; the only dependency it has is on Java EE 7.
Let's take a look at what I need to do. I have a plugin for my IDE; it's available for Eclipse/JBoss Developer Studio as well as IntelliJ and NetBeans. The plugin makes it very simple for me to set up WildFly Swarm, so I choose that option and click finish. What's actually happened is that it's added two things to my pom.xml: it's added the WildFly Swarm plugin itself, which is a Maven plugin,
and it's also added the bill of materials for WildFly Swarm. That's all I need to do. It also adds a version property, the version of WildFly Swarm I'm using, which you can change over time. That's pretty much it. So if I want to build and run this, I can run mvn wildfly-swarm:run, and that's essentially going to build my application using WildFly Swarm and do auto-detection,
looking at the source code and figuring out which components are needed. You'll see this in the output here: as it's building, you can see right here that it's detected a number of fractions that I need. I'm using CDI for injecting some resources, I'm using EJBs, and I'm also using JAX-RS to expose the RESTful API to my front end. It then packages that up into a single runnable, or "fat," jar. You can actually see this if I take a look at the fat jar.
Here's my fat jar right here, monolith-swarm.jar, and if I go back to this other terminal, you can see it's running now. If I load this in my browser at localhost:8080, you can see this is my monolith, my ten-year-old application written in Java EE, running using WildFly Swarm with only the components I need.
So this is a very quick way for an existing Java EE application to warp into the future using WildFly Swarm, but let's not stop there. I've stopped this one, and now what we want to do is deploy it to OpenShift, for example because I want to write a Jenkins pipeline to wrap around this monolith. That's easy as well. I have another plugin that I've already installed, called Fabric8. This is an open source project championed by Red Hat, which provides integration for projects.
It has a Maven plugin to integrate your projects with Kubernetes and OpenShift very easily. I already have a local OpenShift running, so I'm going to go ahead and create a new project; this should be familiar to all of you OpenShift fans watching. I'll create a new project called swarm.
Okay, so I've got my new project. Now I can simply do mvn fabric8:deploy. That's going to deploy this existing WildFly Swarm application, which is wrapped around my monolith, out to OpenShift. As that builds, the same thing occurs: it does the auto-detection looking for fractions, it found the fractions I was using, it brought in a number of other fractions as transitive dependencies of those, and then it starts the build on OpenShift.
While that's building, let's shift over to OpenShift and take a look. Here's my new project, the swarm project. There's a build in progress; you can see the build running here. This is my Fabric8 build of my application, which will ultimately run using the Java S2I image provided by Red Hat. Once that build completes, and it looks like it's done, it will then deploy the application.
You can see the application is being deployed at the moment, and it looks like it's up now. If I click on this, you'll notice I get an error here, and the OpenShift experts among you can probably guess what it is: the lack of a health check. My application has no health checks, and that's one of the first things you're going to want to add when you're moving from monoliths to microservices. I'm going to show you how easy that is to do with Swarm.
If I reload this, eventually it will be available, and here's the application now running. But again, that health check is super important, not just to avoid that error message, but also when you're doing things like rolling upgrades or canary deployments. OpenShift needs to know when the application is healthy and when it's not. So let's go ahead and add a health check with WildFly Swarm, and I'll show you how easy this is. Again, we're going to invoke our little plugin, which doesn't do a whole lot.
I'll show you exactly what it does once it's done, but I'm going to add a fraction. I'll click "add fraction" to get a list of fractions. There are a number of fractions in here, some of them supported, some of them unsupported upstream fractions from the community, but the one I want is supported: it's called monitor. I'll click monitor and click finish. The only thing this did to my pom.xml was bring in this org.wildfly.swarm monitor dependency.
What that does is give me the ability to define health checks. So let me define a quick health check in this application. I'll create a new Java class and call it InfraEndpoints; it's my infrastructure endpoint. It's a RESTful endpoint, so I need to give it a path; I'll give it a path of /infra. Then, in my endpoint, I want a health check endpoint, so I'll create a GET-based JAX-RS endpoint whose path is going to be /health.
It doesn't matter what I call these; I can call them whatever I want, and you'll see in a moment how Swarm detects this. The way it's detected is with a @Health annotation. So: public HealthStatus, returning this custom HealthStatus object. You can just return strings if you want, but I'm going to return HealthStatus because you may want to do something fancier than simply saying "hello, I'm alive." I'll just call the method health, and in this case I'm just going to return up.
You don't want to actually change the state of the application in a health check, but you may want to check one or more things in different modular methods, for example, and then aggregate them using a single health endpoint that looks at all of them; if one of them is down, the application will be considered not healthy. But in this case I'm just going to do something very simple.
The last thing I need to do is back in my pom.xml. When I start declaring Swarm fractions explicitly, the auto-detection is turned off by default, because it assumes you know what you're doing and no longer need auto-detection; you're telling Swarm exactly which fractions to bring in, which is a good way to minimize the application, and some fractions might not have the most robust auto-detection code. But in this case I want auto-detection to continue, so I'm simply going to reconfigure the plugin to tell it to keep doing that
auto-detection, by setting it to "force". I'll save that and deploy it again. Instead of using the command line, I'm simply going to use the Maven integration in my IDE and run fabric8:deploy again. It does the same thing as mvn fabric8:deploy, but I can simply double-click it instead of typing it, because I'm sure you're sick and tired of seeing me type. It's going to rebuild the application, and it's still going to do that auto-detection.
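The plugin change is pure configuration; a sketch of the relevant pom.xml section, using the fractionDetectMode property of the wildfly-swarm-plugin of that era (version property illustrative):

```xml
<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <version>${version.wildfly-swarm}</version>
  <configuration>
    <!-- keep scanning the sources for needed fractions even though
         some fractions are now declared explicitly in <dependencies> -->
    <fractionDetectMode>force</fractionDetectMode>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```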
You can notice that it picked up my configuration of "force": it still does the auto-detection, it also brings in the fractions that are needed as transitive dependencies, and then it rebuilds the application and redeploys it out to OpenShift. If I shift back over to OpenShift, I can see that once build number two completes, the application will be redeployed.
One interesting thing to note during the build: if I go back to the build here, you can see the health check was automatically added, because I was using that monitor fraction and the associated developer annotations for WildFly Swarm. It automatically knows how to add Kubernetes/OpenShift endpoints for health checking, and it will use that in the application itself. So as the new version of the application comes up, we can take a look at the log file and we'll see,
hopefully, that the health check is automatically wired up correctly for us. I can see it's still in the process of coming up, and there's my health endpoint being added. It basically took this health check that I declared with an annotation, added an automatic endpoint to the application, and declared and defined the health check for OpenShift.
If I go back to the overview screen, I can see my application is up now, and when I hit it, it actually is up. The health check is defined in the deployment config, which you can see here, automatically defined for me. So that's a very simple way for a monolithic application to start down the path of microservices using WildFly Swarm. Okay, let's shift back and go on to the next runtime; we'll go back to our slides, if I can find them.
It's on this screen here... nope... nope... okay, here we go, so let's move on. That's WildFly Swarm. What we're going to do next is talk about Spring. Spring Boot is an opinionated framework for building microservices using the Spring Framework, together with Java EE technologies like JAX-RS and JPA and a few others, plus Spring's own projects.
Red Hat has already certified Spring Boot apps on OpenShift using OpenJDK, and JBoss Web Server includes support for it. Going forward, we're going to continue to certify more and more Red Hat technologies to be used with Spring Boot, like Hibernate or Infinispan or more. What Spring in RHOAR is, is basically the same Spring that you know and love, but tested and verified by Red Hat's QE department. That includes Spring Boot,
Spring Cloud Kubernetes, Ribbon, and Hystrix. The Red Hat components that are part of the Spring ecosystem are fully supported, like embedded Tomcat, Hibernate, or Apache CXF. We also have single sign-on technology with Keycloak and Red Hat SSO, and we have messaging capabilities with AMQ. But more importantly, we have native Kubernetes and OpenShift integration; much like you saw with WildFly Swarm, we have a similar set of features for Spring and Spring Boot. We also support the Spring Boot runtime in Launch, the developer experience website you saw a moment ago.
We also have a number of starters that we've contributed to the Spring ecosystem. Starters are basically a simplified way of bringing in dependencies from the Spring ecosystem into your project, using things like pom.xml entries. And again, as I mentioned, as we move forward, additional functionality like support for transactions or JBoss AMQ will be integrated into RHOAR as well. So let's take a look at what that's going to look like in reality; the next demo is a Spring Boot demo.
What we're going to do is take a piece of the monolith you saw a moment ago. Let me go back to the monolith and show you what we're going to do. These components are all part of the monolith, but we want to start splitting them out and making them into individual microservices. For example, the catalog microservice: the thing that gives us the names of the products, the images, and the descriptions, like this one over here.
B
We
want
to
turn
that
into
a
microservice
and,
in
addition,
add
some
additional
business
value
along
the
way
and
we're
gonna
do
that
with
spring
and
spring
boo,
so
I've
taken
the
catalog
functionality
from
my
micro
service
and
split
it
into
a
spring
boot
application.
You
can
see
the
application
here.
Let
me
close
these
other
ones.
They
get
out
of
the
way
here-
and
this
is
my
my
spring
application
here.
So
you
can
see.
I
have
my
spring
boot
application
declared
here
very
simple,
simple
main
file.
I
have
one
controller.
B
This controller is providing the list of products. It's essentially going out to its own database, getting a collection of product descriptions and feeding that back as part of this RESTful interface, and that's basically it. So let me go ahead and deploy this and let's see what happens. Let me go back to my integration with Maven here, the Spring Boot plugins. Let me create a new project first, oc new-project spring, and I'm going to go ahead and deploy this.
B
I'll deploy this microservice out to OpenShift; I'll just do fabric8:deploy. Again, fabric8 is an upstream open source project which knows about OpenShift and Kubernetes and is able to take Java applications, package them up, build them using S2I, deploy them out to OpenShift, and provide additional functionality like creating config maps and service accounts and secrets and things like that. So this build shouldn't take too long. I've also created a very simple UI on top of this microservice just for demo purposes. Okay, it looks like it's nearly done.
B
The build is complete, so let's go back to OpenShift and see what we got. I go back to my overview here, and I've got a new project called spring. So here's my catalog; it's spinning up, and it looks like it just completed. If I hit the external route for that particular application, here's my simple user interface for my catalog. It's basically a grid of the different pieces of information in the catalog database, and I can click this fetch button to refresh the catalog as needed.
B
So the scenario here, and what we want to do, is not only split out the catalog but also add new business value. The scenario is that our supply chain is pretty weak, and the products that we're getting, like the Red Hat fedora or the JBoss Forge community project sticker, oftentimes have a lot of quality problems from the manufacturers who are creating these kinds of tchotchkes that you can give out at a trade show, so we're constantly getting product recall notices.
B
We need to be able to quickly remove products from our catalog. The problem is that our catalog back end is, you know, thirty-year-old technology that takes weeks to get changes into. So what we want to do is provide a new interface, and we're going to do that with Spring, Spring Boot and OpenShift through a config map. That's kind of the typical way you do externalized configuration on OpenShift, so I'm going to show you how easy that is to do in the code itself.
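On OpenShift, a config map is typically surfaced to the application as properties. As a minimal plain-Java sketch of the idea (no Spring here; the `store.recalledProducts` key and the IDs are illustrative):

```java
import java.io.StringReader;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class StoreConfigSketch {
    // Parse a comma-separated "store.recalledProducts" entry,
    // the kind of value a mounted config map could provide.
    static List<String> recalledProducts(Properties props) {
        String raw = props.getProperty("store.recalledProducts", "");
        if (raw.isBlank()) return List.of();
        return Arrays.asList(raw.split("\\s*,\\s*"));
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader("store.recalledProducts=329299,165613"));
        System.out.println(recalledProducts(props)); // [329299, 165613]
    }
}
```

Editing the config map then changes this list without touching the legacy catalog back end, which is the business value being demonstrated.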
B
So here's what we're going to do. This is the code that actually returns the list of products, and we want to be able to filter that in our microservice. This is a simple call back to a simplified database, but imagine if it was, again, a super old, very large system that takes weeks and months to get changes into. So we want to do the filtering here.
B
In order to do that, we need to provide an interface, so we're going to create a new Java class; we'll call it StoreConfig. In order for this to participate in the Spring ecosystem, I'm going to declare it to be a component, and not only a component but a set of configuration properties with a prefix of "store". You'll see what this is used for in a moment: basically, anything in the config file that starts with "store" will be considered part of this configuration class.
B
My StoreConfig is going to contain one thing: a list of recalled products. Let me generate some getters and setters for it, so I'll go ahead and do generate getter and setter. Okay, so here's essentially a bean, a Spring bean, which encapsulates my externalized configuration. Now that I have that, I can inject it into my controller using Spring's @Autowired capability: private StoreConfig config. And now that I have my configuration, I can filter my list, so .filter, and I'll filter.
B
I'll filter the list of products returned to only those where config.getRecalledProducts() does not contain the product in question's ID, getItemId(). So this is essentially a lambda expression: any product that does appear in this list of recalled products will be filtered out of the list. I think that's in place, so the last thing I need to do is bring in the components of the Spring ecosystem that support this, Spring Cloud Kubernetes in particular.
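That filtering step can be sketched in plain Java. This is a simplified stand-in for the demo's Spring controller, with `Product` and the recalled-ID list reduced to the minimum:

```java
import java.util.List;
import java.util.stream.Collectors;

public class CatalogFilterSketch {
    record Product(String itemId, String name) {}

    // Keep only products whose IDs are NOT on the recalled list --
    // the same lambda-based filter described in the demo.
    static List<Product> withoutRecalled(List<Product> all, List<String> recalled) {
        return all.stream()
                  .filter(p -> !recalled.contains(p.itemId()))
                  .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Product> all = List.of(
            new Product("329299", "Red Hat Fedora"),
            new Product("165613", "Forge Laptop Sticker"));
        // Recalling 329299 drops the fedora from the result.
        System.out.println(withoutRecalled(all, List.of("329299")));
    }
}
```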
B
So in my pom.xml I've already typed it in here for demo purposes; I'll just uncomment that, save it, and now I should be able to redeploy this out to OpenShift. I'll double-click the deploy button here again and run this. What will happen is it will deploy it to OpenShift, create the config map which will contain the list of recalled products, and apply the filter that I just implemented to filter the list of products when I want to remove something from the catalog. So that looks like it's in progress.
B
So while that's going, let's switch back to OpenShift and I'll show you the config map. The config map is created; here's my list of recalled products, which is completely empty at the moment, but my application looks like it's in the process of being redeployed here and should come up momentarily. I'll take a look at the log files to make sure nothing crazy happened. Here you see the Spring tag; looks like everything's working. Let's go ahead and hit this endpoint again. So here's my application, and all of my products are there.
B
So let's go ahead and remove this first item, 329299. We go over here and edit our config map, which now gives us externalized configuration and the ability to remove products. If I edit this config map, add that number and save it, it is automatically reloaded.
B
And if I go back to my new application here and I say fetch catalog, you can see the Red Hat fedora has now disappeared from the list of products. So I can edit that; I can add and remove products. This provides huge business value, because it saves the business's reputation by not distributing junk-quality materials, and my business is happy. So now let's think back to the monolith. The monolith is here.
B
So let's tie them together and start that strangulation of my monolith into a microservices architecture. We'll keep our existing code as is; we're not going to change our monolith at all, and we're not going to change our new microservice. We're simply going to tie them together using OpenShift and its ability to do clever software-defined networking. What I'm going to do is work in the list of routes that I have for this application. So again, remember I have the catalog and I have the monolith.
B
So once this new application comes up, I should be able to create that. Let's go ahead and make sure: the build was still in progress, and it looks like it just completed, so it's now being deployed. Here's my catalog in the same project as my monolith, so I'm going to go ahead and create essentially a redirection route. You're all familiar with routes from the OpenShift world; I'm going to take any request that comes in to my monolith.
B
When this path is hit, I'm going to redirect it to the catalog service on the same port and the same RESTful addresses, and just hit create. Now I have this redirection in place. If I then hit my monolith, I can see that once my application comes up... oh, I need to edit the config map again, obviously, to remove that item in the same project, because I've redeployed it. So let me go ahead and edit that and add my Red Hat fedora product ID to my list of recalled products, and I go back to my monolithic application.
B
Here you can see, as I reload it, the fedora is now gone, so I've essentially strangled my monolith. I've started the strangulation process, and you can do that with a number of the other components here, like the pricing and the inventory service and the ratings and reviews if you had those, and then ultimately you'll get to a point where you've completely gone from monolith to microservices using RHOAR.
B
So we've got, I guess, about ten minutes left. I actually have two more demos; I think I'll just do one. I'm going to focus on Vert.x, because that's kind of the interesting runtime here, and we'll quickly go through this and see if I can do it rather quickly. So Vert.x is the third runtime within RHOAR. It's really great for high-performance, low-latency, high-concurrency applications, web applications in particular.
B
The reason why it's good for that is the nature of reactive programming and event-driven, asynchronous execution models. To briefly illustrate that, take a look at the execution model of a single-threaded synchronous application. This application has three tasks to perform: blue, green and red. In a single-threaded, synchronous model, blue runs until it's completely done and exits, then green runs.
B
Then red runs. Not too much to say here, except that it's going to be really, really slow, especially for I/O-heavy applications that are waiting a lot on disk or network or some other resource. The second model is a threaded model. This is the traditional model that you're probably familiar with if you're a Java or Java EE developer. This is where the tasks run in parallel on different cores or different threads of a computing system; the CPU is able to switch between them freely at any time. So that means, as a developer...
B
...if you're writing code in these, it has to be thread safe. You have to deal with synchronization, locks and mutexes, and you have to coordinate between these threads so that the state is what you expect and you don't corrupt the state or get things like race conditions or deadlocks. This is also called pre-emptive multitasking. The third one is the asynchronous model; this is the one we'll concentrate on. This is where the developer controls that interleaving.
B
It's also called cooperative multitasking. It gets rid of those really nasty things like race conditions and deadlocks and blocks of synchronized code. It does this through a mechanism you'll see in a moment, but it's important to know that it's been around for a long time. It's not like Vert.x or Node.js invented this stuff. It was used on the Space Shuttle, you know, 30 years ago, where when you push the button to fire the thruster, it had better fire the moment you push the button and not a few seconds later.
B
It's also been used in Windows, Windows 3.x in particular. Essentially, what happens is that the bits of code from blue, green and red in my example run until they reach a good stopping point, for example, when you go out to a disk or a database. Once that occurs and you're waiting on a callback, other code can run, like the red code or the blue code in this example. So it's important to note that your code runs uninterrupted until you tell it to stop.
B
So if you write user interface code and you block, you get things like this, right? You can paint the screen with a dialog box. This is a terrible bug, but I'm sure you've all seen it in the past. This is because the thread blocked when it shouldn't have in the UI, and it produces artifacts like that. So that's to be avoided, but consider what it buys you.
B
It's also a big benefit when you have a large number of these things, because then the interleaving can happen in such a way that you save a lot of time. If you need to run blue, green and red, each task takes a certain amount of time in the threaded model. If you do it asynchronously, you can save a lot of time by essentially interleaving bits of code that are waiting for callbacks, so that when blue stops and is waiting for a callback from a database call, green can run.
B
And then blue can run some more when it gets that callback, and then green can run some more when blue stops again, and you get the idea: you can interleave all of this, ultimately, as on the right side of the screen, saving that amount of time across running all of blue, green and red. This is what Vert.x does; this is the basis of reactive systems and asynchronous, event-driven programming.
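The overlap being described can be sketched with plain-JDK CompletableFutures. This is only a rough analogue: Vert.x runs callbacks on its own event loop, whereas this sketch uses the common thread pool, but the time saving from overlapping waits is the same idea:

```java
import java.util.concurrent.CompletableFuture;

public class InterleaveSketch {
    // Simulate a task that "goes out to a database": it waits, and while
    // it waits the other tasks can make progress.
    static CompletableFuture<String> task(String name, long waitMillis) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                Thread.sleep(waitMillis);
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            return name + " done";
        });
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // blue, green and red all wait concurrently instead of back to back.
        CompletableFuture.allOf(
            task("blue", 100), task("green", 100), task("red", 100)).join();
        long elapsed = System.currentTimeMillis() - start;
        // Roughly 100 ms total rather than 300 ms, because the waits overlap.
        System.out.println("elapsed ~" + elapsed + " ms");
    }
}
```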
B
Vert.x is a reactive toolkit for the JVM and has a number of supported languages within the JVM, like Java and JavaScript and Ruby, Ceylon, Scala and a number of others. It's again ideal for high-concurrency and low-latency services, where you have a lot of people hitting a website or a lot of messages or machines talking to one another. It does this through event-driven, non-blocking I/O throughout the entire set of libraries and components within Vert.x, so there's no blocking, there's no waiting.
B
I mean, there is waiting, obviously, but there's no blocking; there are things like promises and futures and callbacks to really make applications much more flexible, resilient to failure and much more performant. So within RHOAR, here's the list of supported components within Vert.x. You can see again we target microservice developers, so externalized configuration, circuit breaker, health check, service discovery and a number of other components within the Vert.x ecosystem that are targeted specifically at microservices and reactive microservices.
B
We support both the AMQP protocol as well as MQTT. For cluster management we're supporting Infinispan, and obviously that's an open source project championed by Red Hat and exposed in JBoss Data Grid. And then, of course, there's Vert.x core itself, which not only consists of the core web interfaces but also a shared event bus, so you can do distributed messaging across different Vert.x instances across your cluster.
B
So the current release of Vert.x is 3.4.2, and that is what is currently included and supported within RHOAR. To use it, very similar to Swarm, you simply declare some pom.xml entries. We also have boosters, as you saw, which demonstrate a number of microservice concepts within the realm of Vert.x and reactive programming and reactive systems, and one of the examples I showed you is available, along with a number of other examples, at the website listed at the bottom.
B
Okay, so let's do the last demo that we have time for here. I also have a Node.js demo; that one's relatively simple and you can check it out after viewing this one. So the last thing we're going to do is Vert.x. Let me go ahead and create a new project to hold my Vert.x example. What I have is, again, very similar to the Spring Boot example: I have a catalog, the same product catalog, implemented not with Spring Boot but with Vert.x.
B
This is going to use an external database, so the first thing I'm going to do is deploy that database very briefly here: oc project, to make sure I'm on the right project, then oc process... I did say it's in a template here, so I'll just create it, and while that's being created, it'll deploy MongoDB as the database. The structure of a Vert.x project is very different from your typical Spring or Java EE project.
B
The core component within Vert.x is called a verticle, which basically contains your business logic. You can split it up into a number of different verticles, but effectively you are writing code in a verticle much like you would in a Spring Boot component, for example. So here's my simple verticle. This verticle is a web verticle that exposes a set of RESTful APIs, so I can do a GET on /products and it will give me that list of products from the catalog. Okay, so what are we going to do?
B
Let me import that properly, and then here's my circuit breaker object itself. There are a number of configuration options for a circuit breaker, like how many times it will fail before it opens the circuit. If you don't know, a circuit breaker basically protects calls to some other service. If that call fails a number of times, it will do what's known as opening the circuit and fall back to some other strategy to get the same data. This prevents subsystems and microservices from being overloaded by too many requests.
B
If
it
gets
overloaded,
the
circuit
is
open
and
it
gives
the
the
service
a
chance
to
cover
through
things
like
OpenShift,
scaling
and
and
and
pod.
You
know,
detecting
that
a
pot
is
unhealthy
and
killing
it
and
replacing
it
with
a
new
pod.
So
that's
what
a
circuit
breaker
does
ultimately
then
comes
back
once
the
once
the
service
is,
is
ready
to
go
so
here
so
I
have
my
circuit
breaker
object.
Here's
the
API
call
that
I
want
to
protect.
B
Okay, so this is the call to my circuit breaker to protect the call to the database. You give it some code to call, you give it some code to call when the original call fails, which is called the fallback, and then you give it some code to deal with the results of either the fallback or the original version of the code if it succeeded. So we essentially just need to fill these three things in. Let me get rid of this, and I'll just keep this here.
B
Okay, so let's fill in the code to call the database. We're going to take the existing code and just copy and paste it here. Actually, let me leave that there; I'll just paste it in here. So here's my existing call. I don't care whether it succeeded or failed, so I'm going to remove the error-checking code here.
B
In fact, if it fails, I want it to fail properly and go to my fallback, so I'm just going to remove the code that deals with error checking, and also remove the code that deals with responding back to the client, because I only want to return the value from this call. So I'm going to complete the future, so response dot... where is it?
B
There it is: .complete, and I'm going to complete it with the list of objects, the list of products from my database. That list is encapsulated in this JSON object, so I'm just going to complete it with that. Okay, so that's my existing code to make the call and return the value by completing the future. There are a lot of reactive concepts I'm glossing over here.
B
We don't have time to do a complete treatment, but this is essentially a callback: I call getProducts, and then when that's ready to return, this code is completed. If that code fails, then I want to return something else, so this is my fallback. In this case I want to return something else, and we're just going to hard-code it here: we're going to return a new JSON array which contains a single product, and just for demo purposes we'll name it.
B
Let's see: "fallback product", "fallback description", and we'll give it an item ID of one, and then the last field is the price, so we'll just give it a million. Now, in reality, your fallback is going to do something a little more interesting than this; it's not going to return something hard-coded. In most cases it's going to do something like check a cache or go to an alternate service, or something like that, but in our case, for demo purposes, we're just going to return this hard-coded value.
B
So there's the return from my fallback, and then, lastly, the code to actually deal with sending it back to the client is the same code from down here, so we'll just copy and paste that code in. Instead of this object, we're going to call... sorry, that result, and encode it prettily. So there's the code; let me delete the old code. Here's my new circuit-breaker-enabled responsive code using Vert.x and RHOAR. A lot of this can be simplified, you'll notice.
B
My IDE is telling me that I can replace these with simplified expressions, so I'll go ahead and do that, and a couple of others in here. I think that's it for now. Okay, so it looks like I'm good here. Now I've essentially wrapped my call to my database with the circuit breaker configured in the code that you saw earlier. So let's go ahead and try this out.
B
So let me go ahead and deploy this out to OpenShift. Again, I'm going to use the same integration I have with my IDE here for the Vert.x plugins; I've got fabric8. I'll make sure I'm on the right project, the vertx project, and I'll go ahead and deploy this to the new project I created. So what should happen is, when I hit this /products API, it should wrap the call to the database with a circuit breaker. So you can imagine what this demo is going to be. I'm going to run it.
B
It should look fine. Then I'm going to kill the database, and then we'll see, hopefully, the fallback be employed. Again, the fallback in my example is going to be a very simple hard-coded list of products, the fallback product here, but in a real-world application you would do something a little fancier. Okay, so it looks like that's been deployed, so let's go back to OpenShift and go to my new project here, my vertx project down here, and it looks like my database is up.
B
My new catalog microservice is up here, so let's go ahead and hit that. So here's my catalog; I can click the fetch catalog button again and get a list of products, the same exact set of products that I had before. Now let's kill the database and witness what happens and how that fallback actually kicks in. So I will go ahead and take my database down by scaling it to zero pods.
B
So if I go back to my microservice here and click fetch catalog, you see it took a couple of seconds, and then the fallback was employed to return this fallback product. Again, you would do something fancier in a real-world application. Let me bring the database back up, and then, after the configured timeout, I think it's five seconds in the circuit breaker, as well as the health checks that need to pass in OpenShift...
B
...once this application comes back up and all those timeouts expire, and I hit the service again, it will retry that call to the database. That time it should succeed, because the circuit is closed again, and the application goes about its normal business. So it looks like the database is still coming up, so if I hit this endpoint, it should still fail. If I hit fetch catalog, it looks like it's still failing, so we wait until the database comes back up... looks like it's...
B
It's
back
up
now.
So
if
I
shift
back
here
ultimately
once
that
time
out
again,
there's
a
there's
a
number
of
timeouts
that
are
in
play
here
to
give
that
service
a
chance
to
to
come
back
to
life
so
it'll
it
could
take
up
to
you,
know
20
or
30
seconds
for
this
for
this
catalog
to
come
back.
So
I'll
just
keep
hitting
in
here
and
hopefully
you'll
eventually
come
back
and
once
I
have
a
problem
in
my
code,
which
is
not
unlikely,
so
it
looks
like
the
the
database
came
back.
B
The circuit was closed using the Vert.x circuit breaker, and my business is back up and running using Vert.x. So that's it for the demos I had. Again, I had a Node.js demo, which we don't have time for today, but you can check that out at the code pointer I gave earlier, on GitHub under james falkner, the RHOAR examples. Okay, so last slide here, the summary.
B
So essentially what we've done is, I've shown you how you can take monolithic applications and move to microservices applications, either in a big-bang approach using RHOAR or incrementally, preserving the value that you've already invested in your existing applications. There are multiple technical solutions for this modernization, depending not only on how much time and resources you have, but also on regulation and the amount of risk that you want to take. Not everyone moves at the same speed, so RHOAR and Red Hat in particular are designed to support you.
B
Whether you're doing traditional Java EE with stateful workloads or modern cloud-native workloads, with Red Hat and RHOAR we provide that trusted solution both for today's existing business-critical apps and as a supported path to the modern application architectures with microservices and the popular frameworks that we talked about today. So I think I'm done, Diane. I know I went over time; I appreciate it.
A
Not at all. Back to the only question that anyone had: you just roared, to use a bad phrase, through those demos, and I am so impressed, because they were all live and nothing crashed. I was just waiting for something, but no, it kept going and kept going; it was like the Energizer Bunny on top of OpenShift. So, wonderful. If you could go back one slide, go down one... the one question was about where the files were.
A
Perfect Halloween session, so thank you very much. Well, I'm definitely going to have you back on to do some more, because you just rocked it today. So thank you very much, and I think there's one other thing in the chat, but I'm betting... yeah: "fantastic presentation, really really well done." So thanks. I'm going to end the recording and...