From YouTube: OpenShift Commons Briefing #97: Red Hat OpenShift Application Runtimes Explained with John Clingan
Description
In this era of specialization, no single software framework and architecture is suitable for all types of applications. The choices become overwhelming when combined with infrastructure that can be shaped via code. In this session, we discuss Red Hat's approach to reducing complexity while providing flexibility to application developers with Red Hat OpenShift Application Runtimes (RHOAR), a poly-architecture, poly-framework suite designed for the OpenShift Container Platform to develop services-based, responsive applications.
A: Hello, everybody, and welcome again to another OpenShift Commons briefing. This time we're going to get Red Hat's John Clingan to give us an overview and explain exactly what OpenShift Application Runtimes are. The acronym is RHOAR, which I love, it's "roar", but I think it needs a little bit of a deep dive and explanation, and I'm really happy that we have John with us today to do that. So the format is: ask questions in the chat.
B: This includes just a quick, brief overview: Node.js, WildFly Swarm, and Spring Boot certification, as well as Eclipse Vert.x, and I'll explain more a little bit later as we go throughout this presentation. I am also an active member of the Eclipse MicroProfile project, which I'll also briefly describe during this presentation.
B: Okay, so we're going to talk a little bit about the monolith-to-microservices trend, along with an overview of microservices; talk about the evolution of microservices, as well as how OpenShift itself and Kubernetes can actually offer quite a lot of value to microservice developers; and talk about the actual product, OpenShift Application Runtimes, with, if we have enough time, a quick demo, and then a little bit of a look forward with RHOAR as well. So, Red Hat OpenShift Application Runtimes is shortened to the RHOAR acronym.
B: So if you hear me say "roar", that's what that means. Okay, let's talk a little bit first about the monolith-to-microservices trend. What most organizations have had in place are traditional Java EE application servers, and these run either Java EE applications or Spring applications. The interesting thing about Java EE application servers is that, combined with an operations group, they provide a lot of services on behalf of the developer. So as a developer...
B: I can focus on my business logic and not have to worry much about some of the supporting services that are provided by an app server platform like JBoss EAP. Examples include provisioning, high availability, clustering, session replication, functionality like that. And what's going on now in the industry is that software development is changing. The industry as a whole is moving from, well, I don't think...
B: ...we've been waterfall for a while, but from agile methodologies to DevOps. From an infrastructure perspective, we've gone from bare metal to virtual machines to now cloud environments, whether it's public cloud, private cloud, or, what's probably most prevalent, a hybrid cloud, where you have a combination of both public and private clouds. And from an architecture perspective, developers have gone from developing monoliths, which are very easy to develop (there's a huge benefit to developing monoliths), to today, where we actually have microservices. And the challenge to developers is...
B: ...all of this is happening simultaneously. They potentially have an architecture change, monolith to microservices; a cloud platform change, perhaps from virtual machines to now Docker containers, an underlying platform change; as well as changes around the application runtime. Maybe, as part of moving to microservices, you're evaluating not just how you run Java in the cloud, but whether you can get into reactive development...
B: ...or maybe you can look at Node.js. So all of these things are happening simultaneously, and the developer has to keep up with these changes happening within an organization. The way most organizations are approaching this is through a smaller team, perhaps architects or application leads, all collaborating to go off and evaluate cloud runtimes like OpenShift.
B: Some of these organizations have, you know, more than 10,000 developers, so the question becomes: how do I make those developers as productive as possible, as quickly as possible, as they move through this software change? From an infrastructure perspective, some of you may be familiar with the graphic to the left; that's one of the traditional OpenShift graphics showing the architecture of OpenShift. First of all, you have an OpenShift cluster, and in that cluster...
B: ...you have nodes, which can run containers and the containerized applications that developers in your organization develop. What's interesting about this is that it provides a lot of functionality on behalf of the developer, and a lot of that functionality isn't necessarily being tapped into, which is what RHOAR directly addresses. So, switching gears a little bit (I'll get back to how developers can leverage a lot of the functionality available in OpenShift)...
B: ...there's microservices itself. There are a million different definitions of microservices, but generally speaking, instead of a single monolithic application that includes many pieces of functionality, you have a collection of small services that each individually own part of the business problem domain, and they all collaborate to eventually expose a service out to the end user. These are each independently deployable, which means they can version at different rates.
B: I don't have to wait for another part of the application to finish before I can provision, the way I would with a monolith; now I can provision each service independently. There are other benefits I'll be getting into, but the main piece is that you're deploying independently deployable services. It's no longer a single monolith; basically, the monolith is broken down into individual services, typically using domain-driven design, which fits the microservices model.
B: So I take a business capability and I make that business capability a service. And I think the most interesting piece, relative to microservices and the folks most likely watching this, is the fact that with microservices, where you used to have one big monolith...
B: ...now you have many microservices; you've exploded the number of deployable artifacts. Maybe you've gone from 20 within your organization to potentially a hundred or a thousand, depending on how you decompose your monolith. What really helps in that situation is a fully automated software delivery stack, and that's where OpenShift comes in in a lot of ways, because it makes provisioning services and managing those services a lot easier, and it provides a common infrastructure for developers to deploy to.
B: It used to be, you know, an Oracle database or MySQL, and it's always that database within an organization. What microservices enable is multiple application runtimes: each service that I have in my environment can potentially be running a different application runtime ("runtime" is how we refer to them). I mentioned some of them before: Node.js, Eclipse Vert.x, and so on. It provides developers a lot of autonomy, and I suspect within your organizations you'll narrow it down to perhaps a subset of what's possible in the application realm.
B: It's not going to be a thousand different runtimes running in your environment; you'll most likely narrow it down to a subset of supportable things, and that's what RHOAR is going to be offering. The good thing about microservices is that it basically changes the way you approach developing applications: agile software development, domain-driven design as I mentioned, and there's now a common packaging model with the container format. So now it doesn't matter what the runtime is; the packaging format is always a container.
B: Part of the issue with microservices, though, is that it trades off agility against operational complexity. With agility, I can actually deploy services independently, and no longer does one team have to wait for another team; each team can deliver their services at the pace the business needs, whatever is required by the business for that service. The issue is that now we've introduced a lot of complexity; there are more things to manage.
B
There's
the
fact
that
I
have
many
services
in
general
and
some
of
the
things
that
were
provided
by
the
java
application,
server,
necessarily
they're
in
a
micro-services
environment
and
I'll
touch
on
some
of
those,
and
this
complexity
means
that
it
could
be
tougher
to
bring
or
on
board
the
rest
of
the
organization
onto
a
microservices
in
kind
of
cloud
platform.
Right
and
I
touched
on
that.
So
you
know
they're,
really.
B: The really ugly piece of it is that building large-scale distributed applications, or distributed systems, is really, really hard. With a monolith you may perhaps find yourself having created some spaghetti code, but at least you don't have a network in between service calls within a monolith. If you don't design your microservices, and the boundaries between your microservices, properly, what you've done is you've also introduced a network, which increases latency significantly. So there are many more pieces that have to get involved...
B: ...many microservices, and getting them all to interoperate raises issues like: how does one service locate another service? How do I configure that service in a way that isn't tied specifically to that service? These are things that were inherent in monoliths with Java EE that we now have to go off and solve in the microservices world. So, some microservices recommendations, I think...
B: You could host your monoliths in a container running on OpenShift, and you'd still benefit from some of the operational aspects of a cloud platform. But as soon as you begin decomposing some of your more complex applications and monoliths, I think the first question is: can you decompose it in the context of the monolith? Martin Fowler has a great blog entry on "monolith first".
B: First, make sure that you define your service boundaries properly within the context of a monolith before you go off and separate them out into multiple services running in your environment; that's a recommendation I have if you want to decompose. If you want to start from a greenfield application, a brand-new application, which is a strategy that a lot of organizations are taking: start small and grow from there. Don't pick really complex services or applications first; choose the simpler ones and gain some experience.
B: Microservices have become quite popular. The roots of the idea reach back before 2014, but I think where organizations really started to evaluate them more seriously, beyond just the absolute bleeding-edge companies, was right around 2014. The interesting thing is, when you're developing business logic, what you have underneath you is just pure infrastructure as a service: an AMI with just an operating system, or a container with just RHEL, for example.
B: Some of these services actually ran on top of this infrastructure, on infrastructure as a service. Some of these managed services might be a service registry, so that one service can register with the service registry and all the other services that need to use it can find it. You need to be able to register and discover services within your microservice environment. The idea of a service registry has been around in computing for a long time, but in terms of microservices it's...
B
Definitely
a
very.
It's
definitely
required
component
and
things
like
you
know,
configuration
server
or
how
do
I
externalize
my
configuration
I
no
longer
have
state
management
with
session
replication
like
I
did
with
Java
EE,
which
means
now
I
have
to
have
basically
a
datastore
or
some
kind
of
caching
system
to
actually
store
some
session
data
between
requests
right.
So
these
are
things
to
think
about
and
we've,
even
as
an
industry
kind
of
polluted
some
of
our
business
logic
with
these
infrastructure
type
concerns.
B
So,
if
you
think
about
how
do
I
deal
with
if
I
need,
if
I
call
a
service
and
that's
so
long
that
services
isn't
available,
how
do
I,
how
do
I
deal
with
that
failure
situation?
That's
where
things
like
circuit
breakers,
bulkheads
these
programming
patterns.
You
know
many
people,
think
of
history
from
from
the
Netflix
OSS
stack,
is
an
example
of
that
right.
So
now,
I've
kind
of
baked
into
my
application.
This
notion
of
you
know,
services,
come
services,
go
or
they're
available
and
not
available.
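The circuit-breaker idea described here can be sketched in plain Java. This is a minimal, illustrative sketch of the pattern, not the actual Hystrix or Netflix OSS API; the class name and thresholds are hypothetical:

```java
// Minimal circuit-breaker sketch (hypothetical, JDK-only; not the Hystrix API).
// After N consecutive failures the breaker "opens" and callers get the
// fallback immediately until a cooldown elapses, instead of waiting on a
// service that is known to be down.
import java.util.function.Supplier;

public class CircuitBreaker {
    private final int failureThreshold;
    private final long cooldownMillis;
    private int consecutiveFailures = 0;
    private long openedAt = -1; // -1 means the breaker is closed

    public CircuitBreaker(int failureThreshold, long cooldownMillis) {
        this.failureThreshold = failureThreshold;
        this.cooldownMillis = cooldownMillis;
    }

    public <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        if (openedAt >= 0) {
            if (System.currentTimeMillis() - openedAt < cooldownMillis) {
                return fallback.get();       // open: fail fast, skip the remote call
            }
            openedAt = -1;                   // cooldown over: half-open, try again
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;         // a success closes the breaker
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(2, 60_000);
        System.out.println(cb.call(() -> "live", () -> "fallback"));
    }
}
```

Once the breaker is open, callers stop paying a network timeout for every request; that is the operational win the pattern buys in a microservice environment.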
B
I
should
say-
and
you
know,
there's
many
examples
of
that
within
a
micro
service,
so
these
red
bubbles,
kind
of
or
red
dots
kind
of
represent.
You
know
the
supporting
services
that
I
have
to
have,
and
there
are
yellow
dots,
are
kind
of
things
that
have
been
infrastructure
concerns
that
have
been
kind
of
dealt
with
at
the
application
layer
right.
B
So
if
we
kind
of
think
about
openshift,
not
just
as
an
Operations
platform
but
as
a
platform
that
could
be
leveraged
by
the
developer,
there's
a
lot
of
services
available
in
openshift
and
kubernetes
right,
bye-bye,
a
natural
association
that
can
be
leveraged
by
a
developer.
So
we've
got
service
discovery.
B: Those IP addresses are automatically added to and removed from DNS, but I also get the benefit of load balancing. These are both things that can help as you replace monolithic architectures with microservices running on top of a Kubernetes environment. It really helps with auto-scaling, scale up and scale down; Java EE application servers often offered that functionality, and now that is being replaced by OpenShift and Kubernetes.
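As a concrete illustration of that DNS-based discovery plus load balancing, here is a hypothetical Kubernetes Service manifest (the `catalog` name, label, and port are invented for the example); every pod matching the selector is load-balanced behind one stable DNS name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: catalog          # other pods can reach this at http://catalog:8080
spec:
  selector:
    app: catalog         # every pod carrying this label becomes a backend
  ports:
    - port: 8080
      targetPort: 8080
```

As pods come and go, their IPs join and leave the Service's endpoints automatically; callers never track them.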
B
Rolling
upgrades
become
much
more
straightforward,
operationally
across
runtimes,
not
just
Java
EE,
but
any
runtime
that
you
choose
to
develop
with
can
now
benefit
from
these
features
in
in
openshift
and
kubernetes.
Now
getting
to
some
other
interesting
things
that
can
actually
impact
an
actual
application
and
how
I
write
it
is
externalize
configuration.
B
So
instead
of
having
a
configuration
service
that
kind
of
where
a
store
configuration
state
you
know,
maybe
it
was
a
database.
Maybe
it's
it's
some
other
service,
you
know
Arceus
or
whatever
right
that
can
actually
store
properties,
not
our
case
I
apologize,
but
but
Eureka
you
know,
store
properties
as
well.
Maybe
I
can
externalize
some
of
that
configuration
right
well,
the
interesting
thing
is
kubernetes
has
built
into
it.
B: ...the idea of ConfigMaps, where I can store configuration inside Kubernetes. So why not just use the features that are available in Kubernetes instead of relying on some other service that I have to go out and manage myself? If I have to create an instance of something, I then have to manage that thing. There's also the idea of a credential store: instead of having a vault as a separate process to store secrets...
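For flavor, the ConfigMap and Secret objects being contrasted here with an external config service or vault look roughly like this (names and values are made up for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: greeting-config
data:
  greeting.message: "Hello from a ConfigMap"   # plain, non-sensitive config
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                  # Kubernetes stores these base64-encoded
  username: app
  password: changeme
```

Both can be surfaced to a container as environment variables or mounted files, so the runtime reads them without knowing where they came from.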
B
Maybe
what
I
could
do
is
just
use
the
inherent
secrets
capability
built
into
kubernetes
as
well
and
getting
a
little
bit
further
instead
of
using
you
know,
can
these
configuration
strings,
let's
say
to
connect
to
a
database
and
storing
that
in
a
configuration
service?
Maybe
we
can
use
the
actual.
You
know,
use
the
service
broker
right.
That
is
now
I.
B
So
you
know
one
example
could
be
config
map
right
in
storing
a
service
configuration
inside
of
kubernetes
itself
and
what
used
to
be
a
something
stored
in
in
in
my
business
logic.
Maybe
that's
something
now
that
becomes
a
supporting
service,
something
that's
a
higher
value
service
running
in
in
in
my
environment
right.
So
what
I'm
doing
is
I'm
pushing
some
of
these
concerns
out
of
my
actual
business
logic
and
into
the
stacks
below
below
the
business
logic
right.
B
So
the
question
then
becomes:
how
can
the
application
runtimes
actually
take
advantage
of
all
this?
You
know
stuff,
that's
been
baked,
you
know
push
down
into
the
underlying
container
runtimes,
for
example,
and
that's
where
the
OpenShift
application
runtimes
come
in
right
and
you
can
see
here
what
some
of
these
are.
I
mentioned:
four
decks,
while
Fleiss
warm
spring
boots
in
terms
of
certification
and
then
nodejs,
and
we
also
have
JBoss
EAP,
seven
were
planning
to
include
in
this
product.
It's
not
released,
yet
it's
actually
in
an
early
beta
and
I'll.
B
Discuss
that
here
shortly
and
with
JBoss
EAP
included
in
the
SKU
I
can
basically
first
create
my
applications
using
you
know,
begin
decomposing.
My
applications
in
the
context
of
the
monolith
remember
I
was
mentioning
that
don't
go
straight
from
you
know
a
like
what
could
be
a
spaghetti
code.
Monolith
into
a
micro
service
first
solve
the
problem
in
the
context
of
the
monolith
right
and
having
JBoss
EAP
included
in
this
queue.
Lets
you
do
that
right
and
we
even
have
this
concept
called
the
majestic
monolith.
There's
some
organizations
out
there
that
are
able
to
deliver.
B
You
know
weekly
releases
of
their
applications
running
in
a
monolith
on
top
of,
for
example,
an
application
server
right.
There
are
customers
doing
that
and
if
you
think
that
weekly
releases
of
your
service
are
frequent
enough
for
your
business,
then
it
may
be
actually
simpler
just
to
leave
it
in
the
context
of
an
application
server
right.
But
but
if
you
decide
to
move
to
micro
services,
first
try
and
solve
the
problem
in
the
context
of
an
application
server
and
then
decompose
it
it
out
out
into
micro
services
right
all
right,
so
simplifying
deployment
on
openshift.
B
So
what
open?
What
what
roar
offers
is
based
the
application,
runtimes
and
support
for
the
actual
application
runtimes,
so
support
for
EAP
support
for
vertex
support
for
wild
flies
form
at
the
moment.
We're
just
certifying
spring
boot,
but
we
look
to
look
forward
to
feedback
if
you'd
like
to
us
to
go
further
and
nodejs
is
in
tech
preview
right.
B
So
in
the
first
release
it
won't
be
fully
supported,
but
we're
definitely
working
on
full
support
for
nodejs
as
well,
and
what
we
want
to
do
for
each
of
those
runtimes
is
create
the
bindings
to
the
to
those
kubernetes
features
so
that
it
simplifies
the
the
development
experience
for
developers
right.
So
they
don't
have
to
know
all
the
ins
and
outs
of
kubernetes
to
actually
leverage
the
features
that
are
in
kubernetes
right.
So
that's
partly
what
it
offers
we're
going
to
extend
that
not
just
to
the
features
in
kubernetes
but
also
add-on.
B
You
know:
Gee
boss,
middleware
services
and
I'll.
Explain
that
well.
I'll
just
cover
a
little
bit
here,
think
about
JBoss
datagrid
right.
If
you
need
either
a
data
store
to
store
your
session
information
between
requests,
you
could
use
JBoss
data
grid
for
that
or
you
could
deploy
an
entire
data
grid
right.
B: ...runtimes, all running on top of OpenShift. Documentation and examples: we'll have documentation and examples around the bindings and the simplification that we've done, as well as some tooling, which I'll cover, and a totally awesome getting-started experience, which I hope to demo if I have time. All right, Vert.x: let me explain a little bit about some of the runtimes that we actually support or certify. So, Vert.x.
B
Think
of
vertex,
it's
an
eclipse
project
and
it
started
in
2012
kind
of
as
a
way
to
do
what
no
date
jf
does
for
JavaScript
reactive
development,
asynchronous
development,
doing
it
on
the
GBM
right.
So
vertex
basically
takes
the
a
similar
approach
to
no
js',
but
on
the
java
virtual
machine
and
it's
it's
really
good
at
high
concurrency
low
latency
applications.
It
excels
at
that
right.
B
So
if
you
have
a
high
concurrency
low
latency
application
that
you
know,
you
think
you
know
you
might
need
to
develop
in
a
reactive
style
or
an
asynchronous
development
kind
of
style.
But
you
still
want
to
use
your
java
expertise
go
ahead
and
do
that
right.
It
is
polyglot
and
in
that
you
can
use
many
language,
many
languages,
many
languages
to
develop
for
Dex
applications,
but
all
we're
supporting
today
is
the
Java
language
binding
right.
So
that
was
a
kind
of
a
really
quick
overview
of
vertex.
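Vert.x's own event-loop APIs aren't shown in the talk, but the asynchronous, composition-over-blocking style it encourages can be hinted at with nothing but the JDK. A rough sketch using `CompletableFuture` rather than any Vert.x class:

```java
// Illustrates the asynchronous style: work is composed with callbacks
// instead of blocking a thread per request. (JDK only; not Vert.x APIs.)
import java.util.concurrent.CompletableFuture;

public class AsyncSketch {
    // Simulate a non-blocking call that completes later with a value.
    static CompletableFuture<String> fetchGreeting() {
        return CompletableFuture.supplyAsync(() -> "hello");
    }

    public static void main(String[] args) {
        String result = fetchGreeting()
                .thenApply(s -> s + " from the JVM") // runs when the value arrives
                .join();                             // block only at the program edge
        System.out.println(result);
    }
}
```

In real Vert.x code the same shape appears as handlers registered on an event loop, which is what makes the high-concurrency, low-latency profile possible.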
B
If
this
sounds
interesting,
there's
a
couple
of
books
that
will
kind
of
introduce
you
into
vertex,
very
developing,
asynchronous
or
reactive
style
applications
for
the
JVM
for
Java.
These
books
are
a
really
good
place
to
start
just
go
to
vertex
that
IO
flash
Docs
okay
wild
fly
swarm
is
the
next
runtime,
so
many
of
you
have
probably
heard
of
wild
fly.
B
It's
a
Java,
EE
application
server,
that's
up,
s--
upstream,
led
by
Red
Hat
and
Red
Hat
productize,
as
that
as
the
JBoss
Enterprise
applications
server
and
swarm
basically
leverages
wildfly,
the
upstream
application
server
right
and
some
of
the
Java
EE
technologies,
not
all
of
Java
EE
right,
the
the
technologies
that
are
relevant
to
creating
micro
services
right.
So
we
combine
that
with
micro
profile
technologies
which
I'll
describe
shortly,
but
briefly
here.
B
Michael
micro
profile
is
all
about
bringing
micro
services
and
frameworks
to
the
you
know:
Java
ecosystem
right,
so
it
combines
that
with
micro
profile
and
those
openshift
bindings
that
I
mentioned
right.
So
we
kind
of
combine
all
these
things
and
we
have
wild
fly
swarm.
So
you
can
create
an
edible.
You
know
fat
jars.
If
you
don't
want
to
do
you
know
a
traditional
app
server
kind
of
scenario,
but
you
really
want
Luber
jars
and
develop
using
that
methodology.
You
can
do
that
with
Waialae
swarm.
It's
very
lot
lightweight
its
extense.
B
You
just
build
your
application
by
creating
in
the
maven
world
a
palm
file
right
with
your
proper
dependencies
on
the
right
artifacts
for
the
runtime
and
you
build
it.
You
get
an
uber
jar
right,
really
cool
stuff,
but
our
uber
jar
approach
for
well
well
flies
swarm
is
still
based
on
the
JBoss
modules.
It's
just
packaged
differently,
as
as
an
uber
jar
right.
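A WildFly Swarm POM of that era boiled down to something like the following; the `jaxrs` fraction and the plugin coordinates are illustrative, so check the WildFly Swarm documentation for the exact artifacts and versions:

```xml
<!-- Pull in only the Java EE "fraction" the service needs (here, JAX-RS). -->
<dependency>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>jaxrs</artifactId>
</dependency>

<!-- The Swarm plugin repackages the build as a runnable uber JAR. -->
<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

`mvn package` then produces an uber JAR you can run directly with `java -jar`, which is the fat-JAR workflow described above.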
B
So
micro
profile
is
a
project
that
Red
Hat
co-founded,
along
with
IBM
payara
Tommy
tribe
did
London
Java
community
and
many
others
have
joined.
You
can
go
to
micro
profile
that
IO
to
kind
of
learn
more
about
micro
profile,
but
the
idea
is
that
we
are
bringing
micro
service
patterns
to
Java
EE
developers
right.
So
you
know,
Java
EE
was
a
little
bit
slow
in
in
as
a
mature
platform
and
kind
of
moving
forward
right.
B
It
was
mature,
had
the
functionality
that
was
required
by
a
lot
of
you
know
traditional
applications,
but
now
with
the
uptick
of
micro
services,
what
we
wanted
to
do
was
was
kind
of
innovate
more
rapidly
in
an
open
source
project,
so
micro
profile
is
actually
an
eclipse
project
and
we're
all
collaborating
there
on
micro
profile
specifications.
There's
multiple
implementations
of
these
things,
I
think
circuit-breakers
thinks
externalized
configuration
health
checks
monitoring.
B
These
are
all
things
that
are
available
in
what
is
just
released
micro
profile,
1.2
right,
so
micro
profile
that
IO
take
a
look
and
wildfly
swarm.
Is
our
implementation
vehicle
for
the
micro
profile
specifications
that
I
just
mentioned
right
so
I'm,
one
of
the
co
leads
iBM?
Is
another
co-lead
currently
right
that
somebody
that
may
change
right?
As
you
know,
it's
just
an
open
source
community,
but
but
for
now,
that's
where
things
are
and
there's
committers
from
across
many
companies,
but
also
individuals
not
associated
with
companies.
B
So
if
you
want
to
help
bring
micro
services
for
Java,
go
to
micro
profile,
io,
there's
a
Google
group
as
well,
where
we
we
kind
of
hang
out
and
have
the
discussions
and
join
any
one
of
those
right
and
and
participate
if
you're
interested
in
in
this
project
in
this
concept,
right
all
right,
there's
also
a
book
in
the
making
on
wild
fly,
swarm
I
created
a
bitly
link,
otherwise
it'd
be
kind
of
long.
It's
just
bit
dot
Lee,
slash
enterprise,
Java
micro
services
book
that
you
see
there
in
the
bottom
of
this
slide
right.
B
It's
scheduled
I
think
for
later
this
year,
but
it
kind
of
provides
you
some
some
idea
of
how
we're
creating
and
allowing
you
to
create
micro
services
with
Java
EE
technologies.
You
know,
along
with
with
with
openshift
as
well
right,
so
definitely
take
a
look
if
you're
interested
all
right,
no
js'
a
large.
It's
a
large
and
vibrant
community
I
suspect
most
of
the
people
on
this
call
know
what
Java
Java
sorry,
no
js'
is
right.
Typically,
it's
considered
Java
server,
side,
JavaScript
right,
so
there's
a
tremendous
amount
of
JavaScript
expertise
in
the
industry.
B: To that point, a client-side JavaScript application does really well talking to a Node.js server, and often that Node.js service acts as a gateway to all these back-end services that may be written in whatever language; architecturally, that's where it tends to fit the most in enterprises. RHOAR, as I mentioned, is going to have a tech preview of this at GA, and eventually we're actually going to be supporting Node.js itself. And just to give you an idea here, Red Hat is a Node.js Foundation...
B
Member
and
so
the
node
foundation
is
is
no
js'
foundation
is
where
nodejs
itself
evolves
as
a
platform
right
red
head
is
a
platinum
sponsor
and
we
also
have
no
GS
committers
right.
In
fact,
we
have
no
J's
committers
and
all
the
projects
I've
mentioned
so
far,
and
we
even
kind
of
you
know,
leave
clips
for
decks
and
we
lead
wildfly
swarm
as
well
right
in
my
rush
to
get
the
slides
done
in
time.
I
forgot
to
mention
or
add,
I
think
a
yeah
I
forgot:
let's
bring
boots
slide.
A: I'm just going to interrupt; there are a couple of questions about Spring Boot, now that you've brought it up. One might be: what is the Spring Boot implementation? Is it using something like Tomcat as a servlet container, or is it from fabric8? And a lot of folks are asking about Spring Data support, if that's on the roadmap.
B
Yes,
in
fact,
the
more
comments
and
feedback
I
get
in
the
chat
on
this.
The
better
and
I
apologize
since
I'm,
fullscreen
I
can't
see
the
chat,
so
we
as
it
in
terms
of
just
a
servlet
container
from
a
servlet
container
person,
if
you're
using
you
know
obviously
upstream
spring
boot,
you're,
just
using
an
upstream
tomcat
container
in
terms
of
product.
What
we're
what
we've
done
in
case
you
didn't
know.
We
have
something
called
the
JBoss
webserver
which
app,
which
is
a
product
ization
of
the
apache
HTTP
server
and
apache
tomcat
right.
B
So
what
we've
done
is
we've
worked
with
the
JBoss
webserver
team,
who
has
tomcat
committers
I
should
mention.
In
this
context,
we've
worked
with
the
team
to
create
a
supported,
embedded,
tomcat
container
right.
So,
if
you're
running
a
spring
food
application,
on
top
of
you
know,
OpenShift
as
a
part
of
war,
we
actually
support
the
embedded
tomcat
container
right.
There's
been
some
interest
as
well
for
us
to
kind
of
have
a
standalone
product
eyes,
build
of
undertow,
which
is
the
servlet
engine
used
by
both
JBoss
EAP.
B
And
what
and
it's
well
fly
upstream
equivalent
and
it's
also
used
in
wild
flies
form
right,
so
I'd
be
interested
in
understanding
if
people
would
also
be
interested
in
having
undertow
as
an
actual
product
option
right,
we
just
started
off
with
with
with
Tomcat,
so
that
answers
that
question.
Other
things
we've
done
around
spring
boots
is
we've
tested
and
verified,
something
like
around
10
plus
spring
boots.
Starters
right.
B
The
spring
boot
is
something
called
a
fabricate
maven,
plugin,
sorry
to
fabricate
kubernetes
spring
cloud,
kubernetes
I'll
start
with
that
drink
loud
kubernetes,
basically,
is
that
binding
I've
mentioned
that
glue
that
lets
you
develop
spring
buta
applications
in
a
very
spring
boot.
Natural
way,
right,
think
about
spring
boot
to
a
large
degree,
is
as
annotations
that
allow
you
to
inject
things
into
your
application,
write
it
abstract
away
things
like
you
know.
Where
does
my
configuration
come
from?
How
do
I
register
and
discover
sir
my
service
and
discover
other
services
right?
B
Those
things
are
all
injected
in
and
the
way
we
do
it
with
spring
cloud
kubernetes
as
we
it's
just
something
that
you
add
to
your
palm
file
right
and,
in
you
add
spring
cloud
kubernetes
to
your
palm
file
and
then
it'll
use
the
kubernetes
binding
equivalents.
So
to
do
service
configuration
it'll,
actually
use
config
map
right
to
do
service
registration,
a
discovery,
it'll
use
kubernetes
under
the
hood.
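That POM addition looks roughly like this; the exact coordinates have varied across releases (the project started under fabric8 before moving under the Spring Cloud umbrella), so treat these as illustrative:

```xml
<!-- Adds ConfigMap-backed configuration to a Spring Boot app; service
     discovery and Ribbon support come from sibling starters in the
     same Spring Cloud Kubernetes project. -->
<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-kubernetes-config</artifactId>
</dependency>
```

With the starter on the classpath, the usual `@Value` and `@ConfigurationProperties` injection keeps working, now sourced from a ConfigMap instead of a config server.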
B
What
that
does
is
it'll
use,
coop
ping
to
find
all
the
instances
of
a
service
and
actually
use
that
to
populate
ribbon
if
you're
using
ribbon
and
client-side
load
balancing
right.
But
the
important
thing
is
that
you're
not
changing
any
of
your
application
code
to
do
that
right,
so
so
we're
trying
to
make
it
as
natural
for
spring
boot
developers
to
develop
on
top
of
OpenShift
and
that's
true
of
all
the
language
runtimes
I've
mentioned
right.
B
The
other
piece
is
in
fact,
I
think
it's
the
next
slide
so
I'll
go
to
this
is
tooling,
and
this
is
true
for
all
the
runtimes,
so
well
with
an
exception
here.
Okay,
so
for
Java
based
applications
that
create
uber
jar
so
that
spring
boot
wild
fly,
swarm
and
vertex.
We
have
something
called
a
fabricate
maven
plug-in,
and
this
is
an
upstream,
it's
still
an
upstream
project,
but
what
that
does?
Is
it
basically
lets
you
take
an
application
that
you've
written
an
uber
jar
and
build
and
deploy
it
on
openshift
right?
B
So
you
don't
have
to
worry
about
docker
files
to
a
large
degree.
You
don't
have
to
worry
about
open
shift
templates.
If
I
am
a
Java
developer,
that's
just
been
used
to
creating
a
warm
file
and
deploying
it
on
top
of
an
application
server,
that's
kind
of
what
they
fabricate
maven
plugin
does
right.
It
just
says
you
know:
here's
my
app
I
add
something
to
my
palm
file.
Again
I,
say
maverick
maven,
fabricate
:,
deploy
and
I
could
tell
it.
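The fabric8 Maven plugin setup described here amounts to registering the plugin and invoking its deploy goal; the version and configuration details are omitted and would come from the plugin's docs:

```xml
<!-- Generates the container image and OpenShift resources at build time. -->
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
</plugin>
```

With that in place, `mvn fabric8:deploy` builds the uber JAR, wraps it in an image, and creates the deployment on the cluster you're currently logged in to, with no hand-written Dockerfile or template.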
B
You
know,
deploy
to
OpenShift
it'll,
actually
deploy
to
OpenShift
right
and
that
make
just
feel
like
an
app
server
right.
It
makes
OpenShift
to
a
large
degree,
just
feel
like
a
traditional
application
server,
although
it's
not
right,
as
we
think
all
know
here.
So
we
when
we
started
towards
product
decision
of
node,
we've
also
created
something
called
node
shift.
B
It
does
something
very
similar
to
what
to
fabricate
maven
plug-in
does,
but
for
node
applications
now
note
doesn't
quite
have
the
build
cycle
right
that
Java
applications
do,
but
what
node
shift
does
is
it
basically
takes
care
of
the
deployment
cycle
right
so
again,
creation
of
the
docker
image
you're
actually
deploying
it
on
top
of
openshift.
That's
something
that
we've
done
upstream.
The
upstream
project
called
Bucharest
gold
in
in
in
github
and
in
I
believe,
that's
also
in
the
sorry
docker
hub
as
well
images.
B
So
it's
it's
all
upstream
Bucharest
gold
and
it's
basically
our
efforts
around
nodejs
at
Red
Hat
right.
So
it
includes
node
shift,
for
example,
and
it'll
also
include
the
bindings
that
I
mentioned
any
binding
work
that
we
do
to
the
kubernetes
features
will
happen
through
bucharest,
gold
right.
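Nodeshift usage, for flavor: these commands assume a Node.js project and an OpenShift cluster you're already logged in to (via `oc login`), so they are illustrative rather than something to run in isolation:

```shell
# add nodeshift as a dev dependency of the Node.js project
npm install --save-dev nodeshift

# build the image and deploy the app to the current OpenShift project
npx nodeshift
```

As with the Maven plugin, the point is that the Dockerfile and OpenShift templates are generated for you rather than written by hand.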
So with Nodeshift I can just develop like it's a local application, and then, when I'm ready, deploy it to OpenShift using Nodeshift; again, I don't have to worry about Docker.
B
I don't have to worry about OpenShift templates, as a developer.
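A rough sketch of that nodeshift workflow; the exact commands are assumptions based on typical npm usage, not quoted from the talk:

```shell
# Hedged sketch: develop locally, then hand deployment to nodeshift.
npm install --save-dev nodeshift   # add nodeshift to the project
node app.js                        # develop and test locally as usual
npx nodeshift                      # build the image and deploy to the
                                   # OpenShift cluster you are logged into
```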
For the online environment, the tooling mostly relies on Java S2I, with tooling specifically geared around just RHOAR, right? So I'm going to show you a demo, and what this demo is going to do is use Java S2I to actually provision applications: generate the application and provision it on OpenShift. All right. Red Hat as a company is also working on something called OpenShift.io.
B
OpenShift.io is basically an entire developer experience around, you know, enterprise development. So while it's like an online IDE, where you can edit your code, build your code, and test and debug your code all online, it also has, you know, some team-planning features. It has a really neat feature called analytics that basically analyzes the code you're writing and the stack you're using, and provides recommendations, like: hey, you're using, you know, versions of certain plugins that aren't very common; most people are using this combination, which might be different versions of plugins.
B
You know, the idea is it could also see: hey, there's a CVE, a critical security vulnerability, in this version of this plugin that you're using; maybe you should use this updated one. So the idea is that, over time, as more people use OpenShift.io, it'll get smarter and smarter around these types of analytics, and really help the developer become more productive.
B
So not everybody can get on today, but feel free to register today at openshift.io; there's a URL where you can register, kind of go down the list, and you'll be notified when they've scaled up enough capacity to actually add new developers. Okay, so maybe I'll just go ahead and show the demo now. Okay, I'm going to show... see, can you see this, Diane?
B
The OpenShift console, okay. So what I've done is, I'm actually running this launch experience, I guess this launch tool, in Minishift on my desktop, which anybody can do, right? It will also be available as an online service, so that you don't have to do that if you don't want to. Here, I'll just go through and explain it, and then I'll explain more as I go along.
B
So this basically lets you build and deploy some example applications that we have. If you think about what we do with JBoss EAP today: if you want to get started quickly, we have a getting-started page, and there's a set of steps where you go download JBoss EAP, clone a repository of examples, build the examples, and deploy them to,
B
you know, deploy to EAP, and that's true of a lot of Red Hat products. What we're trying to do is actually simplify that to just a wizard, right, where the runtimes are all available online; as I mentioned, it's mostly Maven repositories and Maven artifacts, or, in the case of Node, a container image. The deployment environment could be OpenShift Online, whether that's OpenShift Online Starter, which is a free account you can sign up for and try this with here shortly, or OpenShift Online Pro, which has
B
much more, you know, many more resources available for developers to actually develop their applications. So these are the supported runtimes that I mentioned, with Launch. If you want to launch your project, you click the launch button, and I can do a couple of things here. I can either (and we have to kind of know this ahead of time) build and run locally. Since there are some people familiar with Spring: if you think about start.spring.io, you can build,
B
The only difference is that it's actually a full working example, with, you know, a database and a health check and all this kind of stuff. We have very specifically defined use cases that we implement, and you can actually download and run them locally, and then provision them to OpenShift Online, or to Minishift if you want to run OpenShift on your desktop. The other thing you can do is actually use it with OpenShift Online.
B
So, pretty shortly, when we go public beta, what you'll see is a list of clusters. Do I want to deploy to OpenShift Online Starter, or do I want to deploy to my OpenShift Online Pro account? You'll be able to choose which cluster you want to provision these examples to.
B
Since I'm running in Minishift (again, that's OpenShift on my desktop), we're going to provision that to Minishift running locally, and that's pretty obvious here from the 192.168 URL. Okay, all right. Now I select the mission. It's kind of a launch-themed experience, right? Launch is the overall experience, and a mission is basically a use case. Do I want to, you know, provision a database and a sample application, and show how the sample application can use that database, all running on OpenShift?
B
We have a circuit-breaker example, externalized configuration with a ConfigMap, a health check, and a simple REST endpoint that's just basically hello world, if you want to start really simple, and we're even working on one that will actually secure an endpoint with Red Hat SSO. Initially, some of these boosters, like the Red Hat SSO one, take more manual steps, but over time we hope to remove those manual steps. As we think about things like the service broker, maybe we can remove some of these manual steps and replace them with automated steps.
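For the externalized-configuration booster mentioned in that list, the general shape is a ConfigMap that the application reads at runtime. A generic hedged sketch (all names and values here are made up, not the booster's actual file):

```yaml
# Hedged sketch of externalized configuration; names/values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  greeting.message: "Hello %s from a ConfigMap"
```

The application would then consume this through an environment variable or a mounted volume, so the value can change without rebuilding the image.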
B
All right, but many of these are actually just fully automated. So what I can do is: I'm going to choose health check, and I'm going to click Next. Now, which runtime do I want to do a health check with? I'm going to choose Vert.x, just because I tried this earlier; I just picked a random one, and you'll be able to use any one of these. Since I tested earlier with Vert.x, I'm just going to use Vert.x.
B
Now, okay, now I've got to name the project, because what's going to happen is it's actually going to take this project and fork it to my personal GitHub account. So there is some setup: the first time you use it, you set up the bindings to your OpenShift Online Starter account or your Pro account, and to GitHub. It's a one-time thing.
B
I just didn't select that, and now it's going to launch, and again, here it's going to launch to OpenShift running on my desktop in Minishift. So now it's actually forking the project to my GitHub; it's going to push the code into the repo and then create the project on OpenShift Online, which, again, in this case is running on my desktop, and set up a build pipeline, which for now is Java S2I.
B
The way this kind of works is: the launch experience I'm showing here is mainly around Java S2I, but if you want to start using Jenkins pipelines and get into more complex deployment scenarios, like blue-green and A/B testing, that's where OpenShift.io comes in. So what you would do is take this project, import it into your OpenShift.io account, and just kind of continue on from there.
B
Maybe what I'll do, in the interest of time, is show you... sorry, I meant to hit home. I'll just show you the Node example, and here is the external route for the Node application. It's the same health-check one that I mentioned, and it's the same flow for all the runtimes, so there's consistency across the user experience for all the actual runtimes. So I want to say hello: "OCB", OpenShift Commons Briefing. Okay, yeah, I got them in the right order.
B
Hello, OCB.
B
And I've got to work with our user experience folks on this button; I think "stop service" might be a better name for it. So I'm going to show you something here. When we deployed this, what's going on is it's actually setting up an OpenShift health check for this service, right? So if I click the red button to stop the service, what you'll see (at some point; I think we have a two- or five-second check) is that it actually restarts the service.
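What makes OpenShift restart the stopped service in the demo is a liveness probe on the deployment. A generic hedged sketch of such a probe; the path, port, and timings are illustrative assumptions, not the booster's actual settings:

```yaml
# Hedged sketch of a liveness probe; all values are illustrative.
livenessProbe:
  httpGet:
    path: /api/health        # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5           # roughly the few-second check mentioned above
```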
B
Now it went down, and so it's restarting it, and if I go here and I try to invoke it, you'll see... oh, it's probably back up and running already. Yeah, I can say hello: test. It's actually up and running again, right? So what I can do is use this getting-started experience to actually run these examples
B
online, check out the code, and see the OpenShift bindings, as well as the bindings that we're adding to other Red Hat middleware; I mentioned Red Hat SSO as one of the ones we're kind of targeting first, to secure an endpoint, but this will grow beyond that. All right, so that's just a really quick demo, and we're going to be announcing this very shortly, in terms of letting you do this on your desktop using the latest iteration. Yeah, well,
A
B
It will be available very, very shortly. The redirect isn't set up yet, but very shortly, if you go to developers.redhat.com/launch, that's where it's going to be set up, all right, and "imminent" is the word I'll use. By the way, JavaOne is coming up next week, coincidentally.
A
B
What I can do is, when I deploy a service in a pod, deploy with every service a sidecar container, and that sidecar container provides a set of services, right? So if I have a hundred microservices running in my environment, every one of those services has a sidecar, and these sidecars can basically all be connected to each other, which is what we call a service mesh. Now, once I have that mesh in place, I can do some really interesting things.
B
All sorts of interesting things via the mesh: I can get distributed tracing, service to service, within the mesh, and not even bake that into my application if I don't want to, right? The mesh can tell me what the time is to get from, you know, service one to service two, and that is actually quite exciting, being able to do this. And what Istio does, in addition to these sidecar containers, is it has...
B
It has kind of this control plane above it, where I can define my policy centrally, and that policy gets pushed down into the mesh, right? So I have some level of control and centralization and consistency within my environment. Really cool stuff. istio.io is, I think, the website, but just Google "Istio" and you'll find it. Red Hat is very interested; we're active in the Istio community.
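As a sense of what "policy defined centrally and pushed into the mesh" looks like in practice, here is a hedged sketch of an Istio traffic rule. Istio's configuration API was still in flux at the time of this 2017 talk, so treat the resource kind, fields, and names as illustrative assumptions:

```yaml
# Hedged sketch: shift a fraction of traffic between two service versions.
# API version, kind, and all names here are illustrative.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route        # hypothetical
spec:
  hosts:
    - reviews                # hypothetical service
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10         # e.g. canary 10% of traffic to v2
```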
B
It's early days, right? It was just publicly announced this past May of 2017, so very early days, but we're definitely interested in trying to bring this to OpenShift customers. The "when" of it is probably for somebody else within Red Hat to better answer. So now, if I think about this in the context of the evolution of microservices, Istio offers a collection of services that let me remove even more of this infrastructure concern out of my logic and into the underlying platform.
B
So the idea is, if you think about circuit breaking, maybe I can remove that from my application and have it in the service mesh. Maybe the same with distributed tracing: if I don't need to actually trace into a container and get tracing within my actual business logic, from one service all the way to the business logic of another service, and all I want is service-to-service tracing, then I can just use Istio for that. So eventually we'll have, you know, kind of this end-to-end tracing through Istio.
B
Now, we're also doing work with Jaeger within Red Hat, which recently joined the CNCF, and Jaeger lets you do distributed tracing, not just point-to-point, but also into the container. So what we're doing there is: OpenShift Application Runtimes is working with the Jaeger team to kind of enable tracing into the container, for those who actually want to do that as well.
B
A
B
Actually, a very good point, yeah; I'll actually watch that one. So, yeah, great stuff. I think this is the last slide. Public beta is imminent; GA for RHOAR, which means support for the runtimes and, you know, all the glue work and the documentation and examples, is targeted for this calendar year. We hope to have more online examples at launch, so you saw those use cases defined on that screen.
B
We hope to get more in by launch; if not, don't worry: over time you're going to see that list grow and grow, with more and more examples, and maybe even more complex examples as well, plus some planned middleware integration. So not just Red Hat SSO, but how do we easily, you know, interoperate with some of these other products, like the JBoss AMQ message queue and router? That's a part of that product.
B
JBoss Data Grid, Jaeger, which I mentioned, Fuse, 3scale API management: there's a whole bunch of products that we have in Red Hat middleware for which we just want to provide, you know, out-of-the-box, it-just-works examples, all running on OpenShift. And that's the end of the presentation; I'd be happy to take any additional questions. Well,
A
There are a couple in the chat, and then we'll go; we went a little long, but John Osborne and Michelle have been asking good questions. One of them is (and I think there was a little confusion, when you launched, whether you're wanting to deploy just to OpenShift Online or just locally): could one of the deployment targets be an enterprise OCP cluster, instead of, you know, the other two? Have you tweaked it out for that?
B
You
might
be
able
to
do
that
with
OCP
as
well,
but
we
haven't
done
a
full
like
suite
of
testing
with
that
right.
There
might
be
permissions
that
you
might
have
to
enable
for
front
launch
to
run
and
inside
of
OCP
within
within
your
environment,
so
we
haven't
fully
done
that,
but
definitely
provide
me
feedback,
je
Klingon
at
Red,
Hat,
comm
right,
and
if
this
is
something
that
you
would
like
to
see
running
inside
of
your
environment
right,
so
that
you
could
leverage
this
and
deploy
it
inside
of
your
environment.
B
A
B
Can you do that? Well, so, again, we haven't tested that, right? So if you test it and you find issues, then send a pull request. If you're going to fork it, send us pull requests with any fixes you've got. At a minimum, whether you're running in Minishift or online as a service, what you have in your project is going to get forked to your GitHub account.
B
So what you could do is take that out of your GitHub account and clone it locally, or download the zip directly, like I mentioned during the demo. Now you have it on your desktop, and all you have to do is `oc login` to your OCP account and just say `mvn fabric8:deploy -Popenshift`; it'll select the openshift profile and deploy to your OCP account. I've actually done that internally here at Red Hat; it's just the actual UI and the wizard steps that aren't kind of tested on OCP.
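The workflow John describes can be sketched as follows; the cluster URL and project name are placeholders, and the steps are an assumption of the usual flow rather than a tested recipe:

```shell
# Hedged sketch: deploy a forked booster to your own OCP cluster.
git clone https://github.com/<your-user>/<your-booster>.git
cd <your-booster>
oc login https://ocp.example.com:8443   # placeholder cluster URL
oc new-project my-booster               # placeholder project name
mvn fabric8:deploy -Popenshift          # selects the openshift Maven profile
```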
B
We've done some kind of nominal testing of actually running the examples themselves inside of OCP, so you'll still be able to run those inside of OCP in your environment. Some of the other ones, like SSO and stuff like that, might be a little bit harder, but, you know, the CRUD example you could probably do, as long as you have the resources in your account to do that. And...
A
The only other comment outside of that (because we ran a little over time, but that's quite okay) was: you were asking for feedback on Undertow, and someone had mentioned that Undertow is basically one of the most popular embedded containers in Spring Boot and the entire Spring community. So support for that sounds like it would be a key thing to try and get there at some point. So...
B
A
So thanks again, John, for taking the time to do this and explain it, and we'll look forward to seeing more runtimes added into this as well. And, let's see, maybe what we can do also is work with you guys to do a survey of the OpenShift Commons mailing list, to find out what else people are interested in seeing and getting added. In any case, completely helpful information. Thanks.