All right, we'll push forward here. Really, to highlight: delivering certified Red Hat Enterprise Linux containers for IBM middleware, and for the platform itself running on top of Red Hat OpenShift; being able to mix and mingle these certified RHEL images for IBM middleware and open source together; and then being able to actually deploy IBM middleware everywhere that OpenShift is supported today.
So I want to talk a little bit about the things that we have found with clients as we talk to them about their various application patterns and their workloads. In particular, what we see are three primary workflows for applications, and then data governance. The first is creating new applications that are based on microservice architectures; then extending existing architectures with new interactive APIs to create new systems of engagement; and then finally, lifting and shifting existing workloads to optimize how they're deployed, how their cost is managed, et cetera.
So you deploy it in your infrastructure, wherever that's at: your data center or other cloud providers. Cloud Private itself is made up of four key components, and part of this announcement is that, instead of bringing our own Kubernetes, we can actually leverage the OpenShift Kubernetes and run directly on top of that; but we'll still bring the common services that run on top of that layer. This includes how we build and collect logs, how we manage the health of the application, and how we manage alerts.
How do we actually deal with licensing consumption and common security? And then there's the IBM middleware itself. This is the content, and this is kind of the critical aspect here: being able to deploy IBM middleware directly on top of OpenShift in a fully supported way, where each of the pieces that gets deployed automatically ties into the common infrastructure that's made available on the platform. And then we still provide Cloud Foundry as well; we find some clients who still have a need for Cloud Foundry.
So with Cloud Private, we focus on OCI-compatible Docker images running on Kubernetes. We use Helm as our packaging mechanism; we like Helm for various reasons, but chiefly it provides us with an open way to both package our IBM middleware and allow you to build and add to the catalog as needed. And then we use Terraform as our cloud provisioning layer: any time we're provisioning compute, network, and storage in different clouds, we can extend those Terraform templates and manage them directly in the catalog, alongside the Helm charts.
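To make the provisioning-layer idea concrete, here is a minimal sketch of what such a Terraform template can look like. The provider, image ID, and resource names are hypothetical placeholders, not the actual catalog content:

```hcl
# Hypothetical example: provision a single worker VM on a cloud provider.
# Provider choice, AMI, and names are illustrative only.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "icp_worker" {
  ami           = "ami-12345678"   # placeholder image ID
  instance_type = "t3.large"       # compute sizing for a worker node

  tags = {
    Name = "icp-worker-node"
  }
}
```

A template like this declares the compute, network, and storage pieces once, and can then be versioned and offered alongside the Helm charts in the catalog.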
A
This
represents
the
different
runtimes
that
were
able
to
run
on
and,
with
this
announcement,
we're
now
able
to
actually
substitute
a
Red
Hat
open
shift
and
put
that
in
between
basically
replacing
this
box
for
kubernetes.
The
other
boxes
still
remain.
The
same.
Terraform
thought
boundary
common
services
and
that
middleware,
so
this
was
the
architecture
chart
that
we
showed
yesterday
at
the
keynote.
This
highlights
the
ability
to
run
across
different
infrastructure
and
with
openshift
they've,
already
certified
several
different
clouds.
We
saw
the
announcement
yesterday
with
a
juror.
I'll leave this up just for a moment to highlight some of the content that's available today. This represents what was available as of 1Q, and it continues to grow on an ongoing basis. In the open source category there are database services like Postgres, and web terminal access if you want a web-based shell. Then, on the enterprise side, there are components for DevOps like UrbanCode Deploy; obviously Liberty, Node, and MQ; lots of variations of Db2 depending on your scale and your needs; and Cloud Automation Manager, which is the component that helps us build and manage Terraform templates.
So as we actually deploy Cloud Automation Manager, it integrates itself into the catalog and then begins to bring Terraform template content into the catalog directly. We also have a long history, with part of the team that's building Cloud Private in fact, in our HPC space: Spectrum Symphony and LSF, which are high-performance computing products that have been used at very, very massive scale for quite some time. Those are also now available to run on top of the Kubernetes platform and take advantage of the way that it manages compute.
So here we'll highlight a couple of key pieces, and hopefully we should have plenty of time to do some live demos. We'll look at these four value propositions that are key to the way we deliver Cloud Private. The first is being able to quickly deploy and get up to speed with new applications; this helps us support that use case of working on new microservices or refactoring existing services.
A
Then
hybrid
integration
being
able
to
connect
to
external
services,
whether
that's
an
AI
service
like
Watson
or
messaging
or
other
security
services
as
needed,
and
then
deploying
that
actual
IBM
middleware
directly
in
the
platform,
and
then,
of
course,
the
management
console
that
surrounds
it.
So
without,
let
me
actually
switch
over
here.
Bear with the system here for a moment... all right, perfect, okay. So here what we're seeing is the actual catalog of content. In this case we've got pieces from IBM as well as those open source components we were talking about a few moments ago. Now we'll go through and actually do a quick deployment. In this environment I'm interacting with Cloud Private, but this is actually running on top of OpenShift.
So this is a Helm chart. All of the values that you see here are part of the parameter values that I can supply to the Helm chart. I can do this through the UI, where we provide some content assist and help to guide the user as needed, but I can also do it through the command line, and the key reason the command line matters is that it's how we would integrate this with a CI/CD pipeline.
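As a rough sketch of the command-line path, the chart repository URL, chart name, and value keys below are hypothetical placeholders rather than the exact IBM chart coordinates:

```shell
# Add the chart repository and deploy MQ with explicit parameter values.
# Repository URL, chart name, and --set keys are illustrative.
helm repo add ibm-charts https://example.com/ibm-charts
helm install my-mq ibm-charts/ibm-mq \
  --namespace new-world \
  --set queueManager.name=QM1 \
  --set persistence.enabled=false
```

The same invocation dropped into a CI/CD job is what gives you repeatable, scripted deployments instead of one-off clicks through the UI.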
We'll pick the new-world target namespace, and then I could select other options here as well. In this case, for MQ, I haven't enabled my persistence layer in this cluster, but I could bring any persistence that's supported on Cloud Private or OpenShift; there I could have dynamic provisioners, whether that's GlusterFS, IBM Spectrum storage, or other storage backends. And we'll set a queue manager name; a queue manager is used by the application to interact with its messages. And then here we'll set a password.
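Those same choices map to chart parameters. As a hedged sketch (these key names are illustrative, not the chart's exact schema), the values might look like:

```yaml
# Hypothetical values for the MQ chart described above.
queueManager:
  name: QM1              # name the application uses to interact with its messages
persistence:
  enabled: false         # no persistence layer enabled in this cluster
security:
  adminPassword: "********"   # supplied at install time, stored as a Secret
```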
Then click Install. At this point, all of the resources that are required to run MQ are actually being deployed. If I look at the Helm release (the Helm release is the deployed version of the chart), here we see the StatefulSet, the service that it exposes, and the credentials and Secret that are used to configure it: the passwords and things that we saw a few moments ago. And if I go into OpenShift...
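On the OpenShift side, the same objects are visible with the standard tooling. For example (the namespace name is assumed from the demo above):

```shell
# Inspect the Kubernetes resources the chart created.
oc get statefulsets,services,secrets -n new-world

# Watch the MQ pod come up, or recover after a failure.
oc get pods -n new-world -w
```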
The key thing we're showing here is that this is real, right? It's not smoke and mirrors; there's actually a real MQ service running as a container. If I bound storage into it and the container or pod were to fail, it would automatically do failure recovery: bringing up a new pod, mounting the same storage, et cetera. And at this point, as a developer, I have a self-service way to get access to components that are going to be part of my production system.
As an operator, we also provide cues to help you understand updates. In this case, I had deployed version 1.0 a couple of weeks ago and now there's a newer version available. So we can look across all the pieces of middleware that you have running, whether that's supporting databases, messaging, et cetera, and help you understand when there are new updates available for you. That's true for our middleware, but also for your applications as well.
Okay, so the other thing we'll highlight here: it's not just about deploying it. When I actually deploy something, everything deployed directly out of the catalog automatically gets tied into the common operations plane. Kubernetes is wonderful for running applications, but it still requires additional work in order to really integrate it into the data center. So what we're doing is all of that out of the box. Here we come with a common set of dashboards.
We automatically deliver Prometheus collectors for different pieces of middleware, and the idea is to optimize the experience for both our middleware and your applications. So all the pieces that you would otherwise have to stand up yourself (log collection, alert management, health metrics, auditing, et cetera) are stood up for you out of the box.
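One common pattern for wiring a workload into this kind of Prometheus-based collection (a generic sketch, not necessarily how Cloud Private wires it internally) is pod annotations that tell Prometheus where to scrape:

```yaml
# Hypothetical pod template metadata enabling Prometheus scraping.
metadata:
  annotations:
    prometheus.io/scrape: "true"   # opt this pod into metrics collection
    prometheus.io/port: "9157"     # port where the metrics exporter listens
    prometheus.io/path: "/metrics" # conventional metrics endpoint
```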
If we look at the new-world namespace, we'll see sort of the birth that happened here just a few minutes ago, right? The pods that we started deploying automatically began to get tracked. You can see where I cleared the environment out a few minutes before, and then the new ones actually pop right in. So there's nothing extra that has to be done.
It's automatically built into the entire lifecycle. The other thing that we showed yesterday on stage: when I look at those different pieces for Stock Trader, I'm looking not only at the capability to run traditional microservices, but also these other pieces of software like ODM. The example we showed integrates something like ODM for business rules with our AI services from Watson. Here, Tone Analyzer is actually running in IBM Cloud, and the application running on top of Cloud Private is consuming that service. So what we're doing, then, is tying together
the complete architecture: I have a service that collects feedback from the user and then submits it to Tone Analyzer and asks, what's the tone? Is this happy? Is this angry? Is this sad? That input then becomes one of the pieces that allows us to make a decision on the loyalty program, and that's where something like ODM comes in: to actually encode the loyalty program rules. So if they're angry, maybe I give them extra credits, or I give them an additional call, right, something to help
A
Go
back
and
make
sure
that
that
customer
relationship
is
always
helped
very
healthy
and
so
we'll
actually
go
through
and
show
that
here,
so
the
portfolio
application
itself
is
simply
their
listing.
My
stock
and
if
I
look
at
my
user
details,
something
happened
earlier
and
I
was
angry,
maybe
where
I
accidentally
kicked
the
plug
and
the
TB
went
off.
But
in
any
case,
in
this
case,
I
can
go
back
in
and
at
feedback.
And it's going to go back out and make an API call to Watson Tone Analyzer. It's going to take that text and figure out the context: what's the tone that this is bringing in? Then it's going to go back to ODM and say, okay, what do I do now that I know the emotional state of this user? In this case, if they're happy, we have business logic that says one free trade, right? So you get one free trade as part of your portfolio.
If I give it angry feedback, the business logic actually goes back and says, oh, we need to give them three, right? We're going to give them a little bit more, something else to demonstrate that we care about their opinion and that we want to try to reconcile whatever thing we did that damaged this relationship. And all of this is running on top of OpenShift, all right?
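The business rule described here is simple enough to sketch in a few lines. This is a hypothetical illustration of the decision logic only; the tone labels and reward values follow the demo's description, not ODM's actual rule format:

```python
def loyalty_reward(tone: str) -> int:
    """Return the number of free trades to award based on detected tone.

    Hypothetical encoding of the loyalty rules from the demo: a happy
    customer gets 1 free trade, an angry one gets 3 (to help repair the
    relationship), and any other tone gets none.
    """
    rewards = {"happy": 1, "angry": 3}
    return rewards.get(tone.lower(), 0)


# Example: feedback that Tone Analyzer classified as "Angry"
print(loyalty_reward("Angry"))  # → 3
```

In the real system this decision would live in ODM so the business can change the rules without redeploying the microservices that call it.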
So all of the pieces, including all the middleware, are containerized, all right? And if you want to actually try this out yourself, there are two good options. One: you can just go to GitHub, clone a repo, and try the Community Edition. And then there's an option that actually has full environments all stood up and ready to go; you click through, and in two minutes you're actually running your own Cloud Private, able to kick the tires today. These are still using our Kubernetes environment from Cloud Private.
We'll begin the tech preview process for ICP running on OpenShift through the end of this quarter, and we anticipate having it fully GA sometime in 3Q. So if you're interested in participating in that, reach out to me, @mdelder on Twitter, and we'd love to help you get involved with that and collect feedback. And thank you so much for your time.