From YouTube: OpenShift Commons Briefing #64: Modern Application Architecture Beyond Microservices - Full Stack
Description
Microservices are more than just building small services, and with them come operational and architectural challenges. Service integration, fault tolerance, independent development, deployment and scaling without disrupting production, and operational monitoring are a few of the challenges that need to be resolved for a successful journey into modern application architecture.
In this session, we will discuss modern application architecture and give a full stack demo of building and running polyglot applications using containers, continuous delivery, JBoss middleware, Netflix OSS, Spring, OpenShift, CloudForms, and other technologies.
A

Well, hello, everybody, and welcome again to another OpenShift Commons briefing. This time I'm really psyched to have one of my colleagues from Red Hat, who's up in Stockholm, with us to talk about modern architectures, going beyond just microservices and doing some full stack demos. And by full stack we really mean full stack: we're going to try to get him to show off a lot of the middleware and other pieces that really do the heavy lifting for some of these applications that folks are building. You can ask questions in the chat, and there will be Q&A after his presentation. He can't see the chat, so I may have to interrupt him once in a while and ask a question if somebody is really stumped, but otherwise we'll save most of the questions for after he's finished with the presentation.
B

Thank you, Dan, and hello, everybody. I'm going to talk about modern architecture beyond microservices, and today I'm going to show you a full-stack demo of essentially what is beyond microservices. Microservices: everybody by now knows what they are, but in order to build services in an enterprise environment with this architecture and take them into production, what other layers of the stack are required? What else do we need to do to be able to take these things into production?

But before that, let me introduce myself with a few sentences about me: I do technical product marketing for OpenShift at Red Hat. OpenShift is Red Hat's container platform, and you can reach out to me by email or Twitter after this session, of course, with any questions you might have in the future. But let's get started right away. I'm going to turn off my camera after a little while so that it doesn't block any part of the things I'm showing you. So, microservices.
A lot of people have been talking about microservices for the last two years. It has taken over the whole blogosphere, and half of the sessions at any developer conference you see are about microservices. There are good reasons for that, and I have listed a few of the reasons people are so interested in microservices, for the use cases where it makes sense, of course; it is not a silver bullet for everything, but it generally fits complex applications.

It helps with faster time to market and with being more efficient and able to scale, because you can break the services down into smaller ones and then develop them independently, deploy them independently, and scale them independently. That simplifies things and also makes management a lot simpler. If an application is built of a thousand classes and a hundred of them have changed, making a deployment of that is a lot more complex than deploying an application or service that is ten or twenty classes where two of them have changed.
The demo application is an online shop for selling products; it sells cool products from Red Hat like the Red Hat polo shirt or the Red Hat fedora. It is a web-based app, and it is polyglot: it uses different types of technologies for building the various services, and it uses a microservices architecture. Some of the services are based on Node.js, some on Java using different types of frameworks, and they're all deployed as containers.

If I switch to the application to show you: this is the web application I'm talking about. You have some really nice products on it, you see the inventory for each of these products, and you can add them to your shopping cart. These fedoras are actually really popular, so I'll add a couple more of them for my friends. I can go to the shopping cart and see what has been added. I've changed my mind about the sneakers; I don't want those anymore.
I just want the fedoras, so I can just make the purchase and move on. It works like a web shop, as you would expect. The architecture of this application is built from each of these pieces being a microservice of its own, deployed independently. We have the web UI, which is the front end we were just browsing, based on Node.js and AngularJS, and then we have three back-end services. One is the inventory service, a microservice that uses Java EE and runs on JBoss EAP, backed by a PostgreSQL database; that one provides the stock status.
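As a rough, hedged illustration of what such a stock-status endpoint could look like, here is a minimal JAX-RS sketch; the path, class, and field names are assumptions made for this write-up, and the real CoolStore inventory service reads the quantity from its PostgreSQL database rather than returning a fixed value.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    // Minimal sketch of an inventory REST endpoint of the kind described above.
    @Path("/availability")
    public class InventoryResource {

        @GET
        @Path("/{itemId}")
        @Produces(MediaType.APPLICATION_JSON)
        public Inventory getAvailability(@PathParam("itemId") String itemId) {
            Inventory inventory = new Inventory();
            inventory.itemId = itemId;
            inventory.quantity = 736; // placeholder; looked up from the database in the real service
            return inventory;
        }

        // Simple entity serialized to the JSON that the gateway and web UI consume.
        public static class Inventory {
            public String itemId;
            public int quantity;
        }
    }

The catalog and cart services described next expose REST APIs in the same spirit, just on different runtimes.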
The blue number that you see, how many are left, is coming from that inventory service. Then we have the catalog service, which gives the list of products; that's the microservice that uses JAX-RS, a REST service, and runs on Tomcat. And we have the cart service, which is also a REST service running on JBoss EAP. The front end of our application does not talk to these services directly. We are using agile integration, and a component based on Apache Camel and Spring Boot is aggregating these APIs: the CoolStore gateway, that Spring Boot service, makes the calls to all these back-end services, gets the data, cleans it, transforms it into the JSON format that we want to consume in our web UI, adds or removes the information it needs, and sends the JSON data to our web front end to be visualized, which builds up that whole view. So the inventory part is one microservice of its own, the list of products comes from another service, and the shopping cart
functionality is also a service of its own, and these services are all deployed as containers independently on OpenShift. What is OpenShift? I keep mentioning it and haven't really explained what it is. OpenShift is Kubernetes for the enterprise: it is a container platform based on Docker containers and the Kubernetes orchestrator, plus all the other pieces that you require to be able to build containers and run and manage them in production at scale. It provides self-service, and it's a polyglot platform, so we can deploy different types of languages on it.

We will look at JBoss middleware, Spring, and Node.js in this example, and you can automate a lot of things around it. Most important of all, it's a secure platform for running containers. Security is one of the big issues with containers: you want to make sure that if your container is not secure, it shouldn't be allowed to run on the platform, so the platform makes sure that unqualified or non-compliant containers cannot simply be deployed.
If you look at our application, each of these services is packaged in a container and deployed on OpenShift, and they expose a REST API that other services call, like the CoolStore gateway service, or the web front end calling that REST API to collect the data and visualize it. These services might be Node.js, Spring Boot, Python, Jetty, Tomcat, or whatever it really is; it could be a combination of those, and the platform works with those containers regardless of what they are.

It actually also supports .NET Core 1.1, since Microsoft has open-sourced it and it runs on Linux, so you could even run .NET applications on the platform. So let's take a look at OpenShift and how these containers, or rather this application, are deployed on the platform. I have a number of projects that we're going to go through, and what I want to show right now is the production environment. In this application we have a series of services.
As you see, I get some monitoring information, metrics on how much memory, CPU, and network is being consumed for this specific container, and also how many containers are backing this service, so we have built-in load balancing for each of these services. Right now I have one container running for my web front end. If I click on that, I get the list of containers for that web front end, which is one right now; click on that, and I get the details of that container.

There is some really valuable information here, for example which IP address this container is assigned and on which node it is actually running. All these containers are running on a bunch of virtual machines, and I can see exactly which virtual machine this specific container is scheduled on. On the right side I see some good information about which image was used for deploying this container, how much memory and CPU are assigned to it, and what the current state is: it is running. And of course we can let this container burst and consume more CPU than it was allocated, if there is enough capacity, for example. Then in the metrics tab I get monitoring information about my container, so I can identify anomalies over time. I can see that right now there is one gigabyte of memory allocated to this container, but only about 68 megabytes are consumed, and the same goes for the CPU and network. These container-specific metrics help me debug the container if I suddenly see a spike in a number, but they also help me size the resource allocation a little better: as you can see, I have too much memory assigned to this container, and if it consumes only about 100 megabytes, I can probably reduce that one gigabyte to some lower number without affecting the performance of the container; the same goes for the CPU. So these metrics help me a lot with better utilization of resources for the container and also with debugging things.
The logs tab helps me look at the logs of the application. You see this is the Node.js application running the node server.js command. It's really helpful for debugging and seeing what's going on in the container, and in OpenShift you also have a central log management solution built in, based on Elasticsearch and Kibana, so even if this container is removed, the log is pushed into Elasticsearch and will be available for later analysis.

The terminal is also a really handy tool: within the web console, through a command-line environment, I can directly get remote shell access inside the container without needing an extra tool. So right now I'm actually inside that Node.js container; if I run a ps aux command to see which processes are running, I see that there is a node server.js process. I can do some kind of debugging and access information inside the container. This is really helpful, especially during the development phase or when debugging in production.
So that was the front end. Let's take a look at some other components that we have in this application. We have the cart service, which is the microservice running on JBoss EAP. If I go to the logs, I see that this container is using the JBoss EAP 7 application server container image, a certified image that comes from Red Hat, patched and secured to make sure that no vulnerabilities are in it, and the service is deployed on that container.

In the catalog service you see that Tomcat is there: this service, the catalog service, is the REST service that is deployed on top of Tomcat. So, as you can see, the middleware, the programming language, or the type of runtime that is used for the application doesn't really matter from a deployment perspective. It's a completely polyglot environment: you deploy whatever type of application you have built, with whatever type of framework you have built it. Now let's drill down into the CoolStore gateway.
So the gateway: remember, that was the API aggregator based on Spring Boot and Apache Camel, and these two are actually part of JBoss Fuse Integration Services, the middleware product from Red Hat that comes with supported Spring Boot and supported Apache Camel for doing lightweight integration. Now let's drill down into that container and look at the logs. We see the familiar Spring Boot logo in the log, so it is running based on that, as I mentioned.

This is the agile integration piece, and the aggregation of the APIs happens in this container, but we don't see much of that right here. Let's take a look at the details of the integration flows running inside that container. I can click on Open Java Console on the Spring Boot Camel container, the CoolStore gateway API, and that takes us to the hawtio console, which is specific to JBoss Fuse Integration Services, where we can take a look at what integrations are available inside that container and what is implemented.
If you look at how we can compose microservices using JBoss Fuse, we can define a series of steps using Apache Camel, where each of those steps could be using one of the enterprise integration patterns. What are those? For example: call these microservices and wait for the responses, then split the message that comes in into smaller pieces, transform them, and merge them together. There is a known list of enterprise integration patterns based on the Enterprise Integration Patterns book.

Apache Camel defines these with its own DSL, and it makes it really easy to define complex integrations with a few lines of XML or Java code. If I go to one of these Camel routes, for example the product route that is aggregating the product information from the inventory service, I can get some information and stats about this integration flow. For example:
what's the endpoint, how many exchanges have happened when some integration flow or client sends a message to this one, and what the processing time is, with max, min, and averages, and so on. Looking at it this way is a little difficult; I think it's easier to visualize it as the standard integration flow diagram. There you see the standard notation of enterprise integration patterns, and it also shows me how many exchanges have happened so far through that flow, so I can see exactly how this integration is done.

A message comes in, then we make a choice based on what type of request it is. We transform the message, we change the body of that message, and we send it to the inventory service in the back end, which is the inventory microservice we're calling, to get the data back, transform it, and provide it to the front end. I can also take a look at the source code for that integration flow.
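To give a feel for what such a route looks like in code, here is a minimal sketch in the Camel Java DSL; the endpoint URIs, service host names, and paths are assumptions made for this illustration and are not the actual CoolStore gateway source.

    import org.apache.camel.builder.RouteBuilder;

    // Minimal sketch of a gateway route: receive a request, route it by type,
    // transform the message, and forward it to the back-end inventory service.
    public class ProductGatewayRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("servlet:/availability")                            // request arriving from the web UI
                .choice()                                            // EIP: content-based router
                    .when(header("CamelHttpMethod").isEqualTo("GET"))
                        .removeHeaders("CamelHttp*")                 // transform: clean the inbound message
                        .setBody(constant(""))                       // clear the request body before the call
                        .to("http4://inventory:8080/api/availability?bridgeEndpoint=true")
                        .convertBodyTo(String.class)                 // JSON string handed back to the front end
                    .otherwise()
                        .setHeader("CamelHttpResponseCode", constant(405))
                .end();
        }
    }

A real aggregation route would typically also split the catalog response, enrich each product with its stock status, and merge the results, which is the kind of enterprise integration pattern chain shown in the hawtio diagram.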
The idea is that integration needs don't go away when you're building a microservices application; you still have them. So Camel running in Spring Boot, running in a container, lets us take advantage of the microservices style of architecture while at the same time using enterprise integration patterns to build these integration flows into our application, and it simplifies integrating services together. Let's go to the OpenShift console again and back to our containers.
All right, so we have all our containers deployed and the application is running as microservices. But as I mentioned before, this is an online shop, and in online shops it is extremely important to provide a really good experience for the users; otherwise they will switch to some other online shop. The easiest thing for a user to do is to Google for the same product, find the next shop or go to Amazon or somewhere else, and make the order there. So conversion is always a very difficult matter in an online shop.

Because of that, we have to be extremely careful about failures in online shops and e-commerce websites, because failures happen anyway. We have to make sure that we build an architecture for our application, and also an infrastructure, that considers different types of failures happening and makes sure to isolate them.
We want to isolate the scope of those failures to the services that are affected and not propagate them to the rest of the application. These failures happen at several levels, and we need to think about all those layers, both from the infrastructure perspective and also on the application side, and have solutions in place to minimize the effect of those kinds of failures. Running as containers on OpenShift already provides some level of that resilience and fault tolerance. For example:

I can click on the arrow and scale the service to two containers, and OpenShift makes sure that after that container is up, it is automatically also added to the load balancer, so that when we refresh the service, requests go in a round-robin fashion to each of those containers. If I click more, it will create more instances of that container, and in the same way I can scale the service down. So I have two containers running, and if one of them crashes,
we have the other one that can still provide the service. We can actually automate this through what we call the autoscaler in OpenShift. You define a certain metric, let's say network traffic or the CPU metric: if the CPU goes beyond a certain threshold, say 80%, you want OpenShift to automatically scale this inventory service up to a maximum of five containers, and if the CPU drops back below the threshold you have defined, it scales the service back down to whatever initial number you had defined.

So we can automate scaling up and down based on the load on the application and simplify even the management of scale. This is the first thing we can do: scale our application up and have several instances in case a fault happens. But we don't stop there; OpenShift also manages the health of these containers. If I click on the service, I see that there are two pods with two containers backing this service, one of them deployed an hour ago and another about a minute ago.
That is the new one. Now I click on the container that was deployed earlier and manually delete it, as if a crash had happened in that container and it was stopped for some reason. I delete the container in our production environment, and if I go back to the overview of the production environment, I see that OpenShift has immediately sensed that something has happened: we had defined a number of containers for the inventory service, and one of them has been removed.

OpenShift has a health check mechanism that by default checks whether the container is up and running; you can also override that based on your application, so that the health check makes sense for the logic of your application. As soon as it sees that a container is not healthy or doesn't exist, it spins up new containers to bring the service back up to the same number of instances that we had defined for it. We had defined two containers, and I removed one of them.
So that's the second level of resilience that we can provide around the containers: whenever we define how many containers this service should have, OpenShift makes sure that that number of containers is running at all times, even if a crash happens or something goes wrong in the application. But that's not really application-level tolerance; this is container-level fault tolerance.
For example, usually in the world of microservices we have services calling each other: a client calls the first service, that service relies on two other services, and those services in turn might call other back-end microservices. If one of these services is slow, that usually directly affects the performance of the calling service. The first service has to wait for the second service until the response comes back, and if it takes 15 seconds every time, it means that my client, let's say the front end, would have to wait.

The user has to wait 15 seconds to get the response, so one service being slow makes the entire application slow. Even worse than that, you might have a service failing, and when that service fails, it causes the next service calling it to fail, and that causes the next service to fail, until all of that is propagated to the user.
So you would experience what we call cascading failures in your application. These are just a few of the scenarios that you would experience and would need to think about in order to build service resilience into the architecture of the application. In JBoss Fuse Integration Services and Apache Camel there is built-in support for integrating your integration flows with some of the components of Netflix OSS, like Hystrix for the circuit breaker pattern and Turbine for collecting and aggregating this data. So what does Hystrix do in this example?
If one of these dependent services is failing and we are using Hystrix in this scenario, Hystrix makes sure that on the next call we don't call this service again for a number of seconds or minutes, whatever is defined; we blacklist the service and do not call it. When the first service calls us, we immediately return a fixed response that was previously defined, or fall back to another service, whatever the integration developer has implemented through Camel.
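Camel exposes Hystrix as a route-level circuit breaker, so a hedged sketch of the kind of fallback described here could look like the following; the endpoint URI, item ID, and fallback payload are assumptions for illustration, not the actual CoolStore gateway code.

    import org.apache.camel.builder.RouteBuilder;

    // Minimal sketch of a Camel route wrapped in the Hystrix circuit breaker.
    public class InventoryCircuitBreakerRoute extends RouteBuilder {

        @Override
        public void configure() {
            from("direct:availability")
                .hystrix()                                   // circuit opens after repeated failures
                    .to("http4://inventory:8080/api/availability/329299?bridgeEndpoint=true")
                .onFallback()                                // returned immediately while the circuit is open
                    .transform().constant("{\"itemId\":\"329299\",\"quantity\":-1}")
                .end();
        }
    }

While the circuit is open, callers get the static fallback body straight away instead of waiting on a timeout, which is what keeps the shop responsive later in the demo when the inventory service is scaled to zero.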
This scenario is called a circuit breaker: when a service fails, we open the circuit at that stage so that calls do not get propagated to the failing service. Hystrix also provides a dashboard that can visualize all of this. I'm going to click on it to take a look at the circuits that we have in our CoolStore gateway; you see we have one circuit per REST API that we are calling on the back-end services.

You can see the number of successful calls going up on these APIs, and all the circuits are closed, which means that everything is healthy: all the calls are going to the back-end services normally, as expected. But let's go and take down some of these services, as if there was a proper crash in our containers and we cannot recover from it. Right now we have two containers backing the inventory service; I'm going to scale that down to zero, so that our entire inventory service is gone.
If we hadn't thought about service resilience in our services, that would usually cause a failure in the application: when the web page is refreshed, we make a call to get the inventory status for each of these products, and since that service failed, there would be an exception for the entire website and the whole CoolStore web shop would come down. That's pretty much the worst thing we can do for our conversion, because as an e-commerce shopper, once you go to a shop and you get a Java exception or the whole site is down, I can guarantee that the user never comes back to that e-shop. This is the best way to lose a customer, and of course we don't want that to happen. So what we could do is isolate, using the circuit breaker, that failure to only the inventory status.
So if I refresh this page: the way we have implemented service resilience, using Hystrix and Camel in the CoolStore gateway, is that if the inventory service is down, we just show fixed static content instead. We can't show the inventory status for the products, but we do not stop our users from making orders, so we allow them to browse the website, make orders, and shop, and we take that tiny risk.

Now let's refresh this page a couple of times while the inventory service is down and see how the circuit breaker kicks in. As you see, we don't get any delays just because the inventory service is down: since Hystrix already knows that that service is failing, it doesn't even make the call to that service anymore, it directly returns a response. So we have a responsive web shop despite one of our services being down, and we have only partially shut down some of the functionality.
If I go to the Hystrix monitor, we see that the circuit for the inventory service is open, which means we're not making any calls to that back end anymore. Everything else is functioning and all the circuits are closed except the inventory service, so when the page is refreshed, we skip calling the inventory service and just show the static content. Let's scale the service back up.

Now we see the inventory information is displayed again, and Hystrix has removed the inventory service from the blacklist and closed the circuit. Since this call has been successful, it starts passing all the calls to the back-end service again. So by using Netflix Hystrix and integrating it into Camel with Spring Boot, as part of JBoss Fuse Integration Services, we can bring more resilience inside our application as well, by not bringing the whole application down when faults happen and by isolating them as much as possible.
Now, there has been an issue with this polo shirt, the polo shirt that you see in our CoolStore: there has been some problem with the color used to produce it, so the manufacturer has asked us to recall this product from our CoolStore online store. The way recalling products works in e-commerce is that you usually get a deadline for taking that product down, and if you don't take it down by that date, you have to pay financial damages for every day that the product stays up, because it damages the manufacturer's, the supplier's, reputation.

So we need to take this down as soon as possible, but the inventory status and the list of products come from an ERP system in the back end that is about twenty years old. We have made a request to remove the product from the catalog, but they have given us two weeks to do it, and that's one week later than the deadline we have got.
So we have decided, as a workaround in the meantime, to modify our inventory service for this product and set its inventory to 0. We basically intercept the calls to this back-end system, return zero inventory, prevent our users from buying this product, and skip all the financial damages. So I'm the developer, and this issue is assigned to me: make that change and push it to production right away without causing any downtime. This is happening during the day, and I don't want to cause any disruption in production.
There is a team repository called coolstore-microservice, which has all the code for all the services running in this application. But the way we work in this project is that developers don't have direct access to commit code into the team repository. We have more senior developers who have commit access; they can review the code that is written by other developers and, if it's all right, merge it into the team repository. This is the process we use to maintain code quality for our application.

In my own personal space, I can clone that repository, make code changes, and test them, and when I'm happy with the changes I commit them back to my repo, my forked repo. After I'm ready, I can send a pull request to the team repository so that a senior developer or code reviewer can take a look at it. We can discuss different issues, maybe modify some parts to make sure it complies with our code conventions and best practices, and after we get approval from a number of the code reviewers,
the code reviewer can merge the change into the team Git repository. We will use that process in this demo as well. What I need to do, since I don't have access to commit anything to this Git repo, is fork it to my own personal account, my developer account. OK, fork.

Now we see that I have a copy of the coolstore-microservice repository under my own account, and it also says that it is forked from the team's coolstore-microservice. Let's copy the URL of this repo, go to my IDE, clone the code, and make some changes. This is JBoss Developer Studio, which is essentially Eclipse with the JBoss Tools plugins and some other plugins installed on it. I'm going to clone the Git repo; by default it reads from the clipboard, so the default URL is correct.
I could also use any other IDE here; if you're used to IntelliJ or other tools, it doesn't really make any difference, you just clone the code and start working with it. So I have cloned the coolstore-microservice forked repo into my own workspace, and I want to modify the inventory service. The first thing I'm going to do is import it as a Maven project, since it is a Java project running on JBoss EAP and built with Maven.

I have my inventory service open. It is a standard Java project with a number of REST services. Since we do test-driven development in our team, I'm going to start by writing a test that verifies that the inventory status for those recalled products is zero, and afterwards I'm going to write the code that makes sure that unit test passes. There is an inventory test that was prepared beforehand that does exactly this check.
It takes a list of the recalled products and then tests whether the inventory status for those products is zero. Right now this test is ignored in our test tree; I'm going to remove the @Ignore annotation so that it is included and runs as part of our unit tests. I can run that unit test immediately inside the IDE as well. I run it and see that it fails, as expected, because we haven't really made any code change.
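A self-contained sketch of that kind of unit test is shown below; the item IDs, names, and the inlined helper are assumptions for illustration, since the real CoolStore test exercises the deployed service rather than a local helper.

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    // Sketch of a JUnit 4 test for the recall rule; the helper that forces recalled
    // products to zero stock is inlined here so the example compiles on its own, but
    // in the real project that logic lives in the inventory service itself.
    public class RecalledProductsInventoryTest {

        // Hypothetical recalled item IDs.
        private static final List<String> RECALLED_ITEM_IDS = Arrays.asList("329299");

        // Behaviour under test: recalled products report zero stock regardless of
        // whatever quantity the twenty-year-old ERP system returns.
        static int effectiveQuantity(String itemId, int quantityFromErp) {
            return RECALLED_ITEM_IDS.contains(itemId) ? 0 : quantityFromErp;
        }

        @Test
        public void recalledProductsReportZeroInventory() {
            assertEquals(0, effectiveQuantity("329299", 736));   // recalled: forced to zero
            assertEquals(42, effectiveQuantity("165613", 42));   // not recalled: passes through
        }
    }

The production change described next applies the same kind of override just before the inventory service returns its result, and is removed again once the ERP system is updated.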
Now I make the change that removes the stock, basically returning zero for the stock status of those specific products. I have some commented code here that actually does that for us: we circumvent the data that we get from the ERP system, and if it is one of the recalled products, we set the quantity to zero in the inventory and return the result to the front end. After the ERP system is updated, we can remove this code and let the data pass through from the back-end system again.

OK, let's take a look at the repository and refresh the page. You see that the last commit is the one we just made, with those code changes. Now that I have made the code changes, I have to create a pull request to send these changes for review to the team repository. I can do that by clicking the green button to create a pull request and giving it a title saying the recalled product is removed from the inventory. At the bottom I can also see the list of commits included in this pull request and what has been changed.
Now, as the code reviewer, I see in the logs here that the developer has sent a pull request to this repo. I can go to the repo and take a look: as you can see, that commit doesn't exist in the team repo yet, but we have one pull request waiting here to be reviewed, and that's the one that was sent a few seconds ago by the developer. I click on it and see its name and description.

One review is enough, so I agree, give it a +1, and add a comment to show my approval for this change, and since one approval is enough, we can merge this pull request into our team Git repo. If I go to the list of commits now, I can see that that commit is available in the team Git repo.
All right, now that we have merged the changes into our team repo, we want to push this into our production environment. We generally do that through a continuous delivery pipeline, and in OpenShift there is support for building pipelines to automate delivering the application and pushing it through the different stages of development, test, and production. Right now, in the list of builds, there is a pipeline defined for the CoolStore application.

This is the layout of the pipeline and how it looks. Every change that happens in the team Git repository, like the merge we did just now for the pull request, triggers it. First, the inventory test environment is used to build a JAR file and a Docker image for the application with the changes; the inventory service is deployed on its server, and a couple of tests are run.
If all of that is successful, then we promote that Docker image to the CoolStore test environment, and there we test the entire CoolStore application with all the services together, so it is a kind of system test environment or user acceptance testing environment. After that is successful, we promote the inventory image into the production environment and make a production deployment.

But at that stage we do not replace the live inventory service; we deploy it into a new container running side by side with the live inventory service, without touching the production traffic, and at that point we wait for approval from a release manager or someone who is authorized to approve switching traffic, or going live, in production. This step is usually integrated into your workflow management, whether you use ServiceNow or a JIRA workflow or something else, or even ChatOps in Slack or Rocket.Chat.
This step is generally integrated into those systems, so that you get a notification or a task in that system saying that the deployment is ready in production and we are waiting for go-live approval; someone clicks approve, and the traffic is switched to set the new service live in production. Now let's go take a look at our projects. We have a number of projects in OpenShift for different environments. We have the inventory test environment, which tests the inventory service in isolation.

There we have only the inventory service deployed, with its database. Then we have the test environment, where the entire CoolStore application is deployed; we don't touch any other services in this pipeline except the inventory service, so we only update the inventory container in the test environment and test all the services together. Then we have the production environment, which is the application running live in production; that's the one we have been looking at so far. And there is a CI/CD project as well, which has all the infrastructure for our CI/CD.
You see the Gogs Git server running with its PostgreSQL back end, with some persistent storage attached to Postgres so that the data doesn't disappear when the container dies or moves around. We have Jenkins running, which runs our pipeline, and Nexus as our Maven repository manager. If I go to Builds and then Pipelines, I see the OpenShift pipeline running in the CI/CD project, and it has already finished building the container for the new inventory service with the change included; it has run the tests and they have been successful.

It promoted the image into the test environment to run all the services together and test them, and since that has also been successful, it has promoted it into production and deployed it into a container that is not live. So we deploy into a new container in production, we do not replace the old one, and we start running smoke tests, maybe even some manual tests, against this new container, and at that point we wait for a manual approval for the go-live.
This process is called a blue-green deployment, and it lets us deploy to production without causing any downtime. So right now, at the deploy-to-production-with-no-traffic step, we deploy into a new container, the green container, while the production traffic still goes to the blue container. After the approval happens, we switch the traffic to the new container and still keep the blue container up and running, so that we can test the new container with the complete production data and production traffic.

If something happens, we can roll back to the previous version by just switching our router to the previous version. This allows us, without disrupting traffic in production, to easily go back and forth between different versions of our application that are running in parallel. So the pipeline has continued, the smoke test has succeeded against the new container in production, and we are waiting for an input, an approval, to go live with these changes. Let's go take a look at our production environment and see how that looks.
If I click on the inventory live service, we see that there are actually two containers providing the inventory. One is called inventory-green, which was deployed about an hour ago, and we have inventory-blue, which was deployed three minutes ago; this is the new container that we just set up with the changes. But, as you can see in the traffic split, a hundred percent of the traffic is going to the older container, which doesn't contain our change, and zero percent is going to the new container.

So we can test this container, the new change, in production, with the production data, but without really affecting any of the users, because we're sending all the traffic to the previous version. We could even do other patterns of deployment; for example, you could do a canary release.
Instead of putting 0% of traffic on the new container, we could put, say, five percent of traffic on the new container, see how the new service reacts to that portion of the production traffic, and progressively increase the traffic to 100% when we are confident that the new service functions as expected. What we are doing in this demo is a blue-green deployment, so we don't send any traffic to the new container; once we are sure that everything is functioning, we just switch the router and move the traffic completely to the new container.

I click on Input Required. This pipeline is using the Jenkins pipeline DSL for describing the continuous delivery pipeline; it's a very powerful syntax and very popular for building pipelines. In this demo I haven't integrated it into ServiceNow or another workflow system, so we're going to go directly into the Jenkins running on OpenShift and approve the launch, giving the go-live approval directly in Jenkins.
OK, the pipeline has executed successfully. Let's go back to the production environment and take a look at how the containers look now. As you see, it has switched: a hundred percent of the traffic is on the blue container, the inventory-blue container that was deployed five minutes ago through the pipeline, and zero is going to the previous version, the green container, which was deployed an hour ago. So now, if I go to the CoolStore web page, any call to the inventory service comes from the new version of the container.

If we see that something is not functioning properly, we can always switch back from the blue container to the green container and have the previous version functioning again in a matter of a second, so it wouldn't cause any disruption of traffic in production, and I can quickly and atomically switch traffic between the previous version and the new version, rolling back and rolling forward quite easily, directly in the production environment. Let's refresh the page and see if the product is taken off the website.
As you can see, the inventory now says unavailable. We still have inventory for all the other products, but now we have intercepted the call to that ERP system and are setting the inventory to zero for this product, and we save all the financial damages that could have been caused by the old ERP system in the back end.

All right, so this is how we could make a change and quickly push it into production, as a developer or as someone who is authorized to do deployments in production, without causing any downtime. We could do this directly in production during the daytime; no maintenance window on the weekend is required. But as a developer, I'm also interested in making sure that my application is secure.
I hear a lot about the high-profile CVEs that break out on the internet, like Heartbleed or POODLE or Shellshock and things like that, and as a developer I just write code, so I don't know too much about how to make a container secure. But I don't want my container to have these issues, and OpenShift provides a way for you to handle that easily as well, because OpenShift provides a management suite called CloudForms as part of its offering.

In CloudForms I can see, for example, how much memory is used; memory-wise it is a little orange, about 80% used and not so much left, so I should add more nodes to my OpenShift environment. I also get a list of all the resources in use: how many projects there are, how many images, and so on. If I click on projects, I see the list of projects and how many resources are in each of these projects.
I can see how many pods and containers there are. Right now I'm logged in as an admin, so I can see everything; we could restrict access so that a person can only see his own projects and his own containers and not look at anything else. Let's go back to the overview. What we wanted to look at is making sure that the containers I have deployed are secure and do not contain any CVE issues, so I go to the list of images that are available in my OpenShift environment.

We see the inventory service image that we created and deployed at the top of the list. I can click on it, and it gives me some really valuable information about this image. First of all, it tells me in which projects, pods, containers, and nodes this particular image is deployed. This is also really helpful, because sometimes you have one image deployed in several containers, and by coming into CloudForms it immediately tells me that this specific image is deployed in those five projects.
So I have an overview of which teams are using the image that I have made available. And if you scroll a little further down, there is something interesting: compliant, as of seven minutes ago. About seven minutes ago a compliance check was done on this image and we are compliant. But what does that mean? What is the compliance that is checked? You can see which compliance policies we have configured in CloudForms.

At the moment we are using OpenSCAP, which is a standard way of tracking CVEs and vulnerabilities against a Linux environment or Linux system, and since these are Linux containers, the same rules apply to the containers as well. You see that 421 rules have been checked, and right now we are compliant and not carrying any high-profile issue. I can also download the report of the vulnerability scan as a really nicely formatted HTML page to send to people and share within the organization.
It gives me the name of the benchmark and the list of the rules that have been used to verify this image, how many rules have passed and how many have failed. We have 420 rules that are green and one medium-severity rule that failed. By default in my CloudForms environment I have defined that containers are only prevented from being deployed if there is a high-severity vulnerability in the container; this is a medium one, so we don't block it from deployment and it is allowed to be deployed.

I can get a list of all the rules that have been checked, with their severity; if I click on one, I see which CVEs are related to each of those rules and what the result of that check was, with more information. So we made sure that this container is secure, that it doesn't contain any vulnerability, and we can always share this report within the organization to make sure everybody is aware of which containers are compliant within our application.
A

My mind is blown. This has got to be the best demo I've seen in ages in terms of covering pretty much every aspect of the full stack. There was one question about the Hystrix container, and maybe this is a good way to bring some closure: someone was looking for the container on GitHub; he found the CoolStore, but he could not find the Hystrix container. Can you pop over to where the code is for all this demoing you've been doing?
B

Absolutely. So the application code is here, but for Hystrix we are using the images that are available on Docker Hub, and on OpenShift there are a bunch of templates that we use to deploy them: there is a template for Netflix OSS that takes those containers from the Docker Hub registry and deploys them on OpenShift, in the same repo.
A

Well, you really covered pretty much every base, and there were a couple of questions, but you hit them too while you were talking, so my mind is blown. I think you really touched on a lot of things. I hadn't seen the Hystrix stuff before, and I love that you closed on the OpenSCAP stuff; that's one of my favorite things to remind people to do. This has been pretty awesome. I also think that there are a number of pieces and parts of this that could be full-on demos themselves.

So I look forward to doing some deeper dives and drill-downs on different parts of this as well. But for those of you who are listening or who are watching this later, please do reach out to us, and if you have questions, send them either to the mailing list at OpenShift Commons or directly to the presenter. Do you want to throw your contact information back up on the screen again, so we end with that? Yeah.
So I'll definitely get some face time in with you, and everybody else can too, at the OpenShift gathering on March 28th, and on the 29th and 30th you'll be captive, probably in the booth at the OpenShift, Red Hat booth. So thanks again for doing this; we look forward to everybody's feedback and to seeing some versions of the CoolStore out there in the universe, tweaked out for your cool stuff. So thanks again, take care, everybody. Thank you.