Description
The escalating volume and complexity of today's IT environments have introduced unique challenges in the management of applications and infrastructure. Cloud native technologies are producing data at a rate that is impossible to manage and analyze with traditional tools. A new approach is needed to help IT shift from reactive to proactive management and incident resolution to increase efficiency, lower cost, and ensure business continuity. Join us to see how IBM's AIOps and multicloud management solution helps you leverage AI to map data to business objectives, govern and automate IT operations with confidence, and build a DevSecOps culture.
A
Welcome, everybody, to another OpenShift Commons. Today we are here with a few product managers from the Watson AIOps team at IBM, really excited to talk about AIOps and multi-cloud management today, and please stick around for the Q&A. Tushar Katarki, he's a senior product manager for the core OpenShift platform.
C
Sure, and thank you. Morgan Tipson, product management for Watson AIOps, really excited to speak with everybody today, and really excited to tackle some of your questions together with Tushar and James Moore from the Watson AIOps product management team as well. I'm going to spend about 15-20 minutes giving you a quick overview and doing a quick demo, and then I'm really looking forward to those questions. Let's dive in. Usual boilerplate on our product and roadmap here.
C
New concepts like error budgets are evolving, and the skills and talents necessary to support this are pointed at new, evolving KPIs. So instead of just thinking about availability and uptime, we're still thinking about availability and reliability, but we're also thinking about quality, about how a digital experience, for example, might be impacted by a lack of support or by challenges to the technology. This kind of secular shift in how we think about technology and software has greatly impacted how the IBM team has thought about bringing AI to IT. I don't want to go through every single one of these bullets here, but I just wanted to call out some of the principles we're using to think about how AI can be really valuable for IT. So here's one that I think is particularly salient in the AIOps universe.
C
This concept called ChatOps. Something that a lot of Red Hat users, and I know a lot of folks at IBM, can appreciate is that here in 2020 we've got far too many dashboards.
C
There's always another tool that we need to pop out of our work to examine and get information from. What we're trying to do with some of our new work around DevSecOps, around bringing AI to IT, is weaving information and insights into the actual workflows that we're working on day to day, rather than forcing us to incorporate a new tool, get trained on a new system, and break ourselves out of our productivity. Up next is a big part of the great work that we do.
C
Launching this product on Red Hat OpenShift embraces an open ecosystem of tools. The IT operations space, where AIOps resides, is incredibly fragmented, and the way to succeed there, for all concerned, is by embracing open, integrating in a very thoughtful way, and being able to collaborate and leverage.
C
So I want to spend a moment here just laying out what AIOps is, in light of the larger shift in how we think about DevSecOps. AIOps is a term you sometimes hear mixed up with other concepts like MLOps or ITOps. It's a nice jargony term; we're basically saying we're applying AI to IT operations, or to the support systems and teams, the social infrastructure if you will, that support the technologies, the applications, the digital experiences that we all, as consumers and employees, touch every day. What makes it a step-function different from prior approaches is that we're bringing together different types of data: structured data, which is more traditional, and unstructured data, things like tickets, logs, chat logs, and social media posts. We're bringing them all together with AI, drawing correlations and insights across those different data sources, and we're using AI to improve both our ability to get visibility into our technology and then our ability to resolve issues within that technology.
C
So not just getting a better understanding of a particular outage or IT incident: we're able to use AI to get more context, collaborate more effectively, and ultimately repair, and automate future repair of, incidents and challenges, continuously improving over time and kicking that flywheel forward. The way we've seen our clients interact with AIOps, and I'll say some of the industry analysts have seen this kind of pattern too, is something we've come to see as an AIOps journey.
C
Our clients typically start off by trying to get a ground-truth sense, trying to really understand their IT infrastructure, what girds their applications, what supports those digital experiences. They're trying to get that ground truth together, and they'll typically do that through event and metric monitoring tools that give them a forward-looking sense of what is about to happen.
C
Once they've established that ground truth, our clients typically move on to the next step of anticipating what could happen next, being a bit more proactive. The way that comes about is through using more sophisticated tools and more sophisticated analytics: applying technologies like topology to get a richer sense of how that ground truth is related to different pieces of itself, and applying AI and machine learning to get a richer sense of what's coming down the pipe. Then, finally, with that sophisticated understanding and that ground truth, applying automation to take away some of the pain and effort and to bring out some of the synergies, using the intelligence and insight that's been teed up in the prior steps. That, I think, tees up how we're thinking about this new product that we just launched earlier this summer, called Watson AIOps. This is a new product that we've built on IBM Cloud Paks, which are built on Red Hat OpenShift, and we've been really thrilled to work with Red Hat on making this a fantastic experience across the IBM and Red Hat teams. So let me briefly lay out what we're doing in AIOps.
C
With this new offering called Watson AIOps, we're really focused on some of the key capabilities and technologies that distinguish AIOps from prior approaches to the IT operations domain. For example, we think about how we can get a more sophisticated understanding of hidden insights, using unstructured data and natural language processing techniques, in which IBM has done some really innovative work, to reveal new anomalies. A lot of the clients that we worked with in POCs or in our beta period, and I'll talk about them a little bit more later, were really impressed, really blown away, by our ability to shed a light on some of the dark technical debt that they had, which traditional approaches, simple keyword search and other tools, were not able to shed a light on.
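The log-side anomaly surfacing described here can be illustrated with a toy sketch. This is an illustration of the general idea, not Watson AIOps' actual pipeline: normalize log lines into templates by masking variable tokens, then flag lines whose template is rare relative to a historical window.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Mask numbers and hex-like ids so variants of one message collapse."""
    line = re.sub(r"0x[0-9a-f]+|\d+", "<*>", line.lower())
    return line.strip()

def rare_templates(history: list[str], window: list[str], max_count: int = 1):
    """Return lines in `window` whose template appeared at most
    `max_count` times in `history` -- a crude anomaly signal."""
    seen = Counter(template(l) for l in history)
    return [l for l in window if seen[template(l)] <= max_count]

history = [
    "GET /api/orders 200 in 12ms",
    "GET /api/orders 200 in 9ms",
    "GET /api/orders 200 in 15ms",
]
window = [
    "GET /api/orders 200 in 11ms",
    "OutOfMemoryError in worker 7",
]
print(rare_templates(history, window))  # only the OutOfMemoryError line
```

Real systems replace the regex masking with learned log templating and statistical scoring, but the shape of the problem, "which lines have we effectively never seen before?", is the same.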
C
Another piece of this is connecting the dots across different data silos. As I mentioned earlier, we've seen this paradigm shift: parts of organizations, or parts of the digital technology infrastructure, were extremely siloed, and that worked for a time. Now, with new approaches and new techniques, we're seeing an end to these silos, and Red Hat OpenShift has been really valuable on that front as well. We're taking that same logic and applying it to the IT operations domain.
C
Maybe it's your events, maybe it's your metrics, maybe it's your logs. That's valuable, but it has an upper limit, or diminishing returns, because you're not able to draw on it against other data sources. What typically happens in an incident, or in an IT-related crisis, is that you've got a particularly savvy SRE or subject matter expert who's drawing insights from across these different dashboards and tools to connect the dots and figure it out. And that works, ad hoc.
C
But that's not a systematic approach, and there's a lot of opportunity for AI and machine learning to help take out the risk and save some of the pain for the engineers involved. What we're doing with Watson AIOps is exactly that: we're drawing insights and signals from across these different data sources, and we're pulling them together into concrete recommendations, insights, alerts, and notifications for those engineering teams.
C
So they not only have a sense that something's wrong, but they have a sense of how that issue or incident is expressing itself across all these different tools. And then, finally, we're surfacing those insights, those next best actions, and other tools and information where teams work today, in what I teed up earlier as that ChatOps collaboration experience. Watson AIOps integrates with tools like Slack, Microsoft Teams, and other solutions, where you're able to triage an incident in real time.
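Mechanically, a ChatOps integration like this formats an incident summary as a chat message and delivers it where the team already works. The sketch below is illustrative: the incident field names and message layout are assumptions for the example, not the product's actual schema, though the payload shape follows Slack's real Block Kit message format.

```python
import json
from urllib import request

def build_incident_message(incident: dict) -> dict:
    """Format an incident summary as a Slack Block Kit message payload."""
    alerts = "\n".join(f"- {a}" for a in incident["alerts"])
    return {
        "text": f"[{incident['severity']}] {incident['title']}",
        "blocks": [
            {"type": "section",
             "text": {"type": "mrkdwn",
                      "text": f"*{incident['title']}*\nStatus: {incident['status']}"}},
            {"type": "section",
             "text": {"type": "mrkdwn", "text": f"Related alerts:\n{alerts}"}},
        ],
    }

def post_to_slack(webhook_url: str, payload: dict) -> None:
    """Deliver the payload to a Slack incoming webhook (network call)."""
    req = request.Request(webhook_url,
                          data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

msg = build_incident_message({
    "title": "ticket info service is down",
    "severity": "SEV-2",
    "status": "open",
    "alerts": ["latency anomaly", "error-rate spike"],
})
print(msg["text"])
```

The point of the pattern is that the summary lands in the channel where responders are already talking, instead of requiring them to open another dashboard.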
C
You can collaborate on a particular service that's gone down, and that's been really valuable for a lot of our clients: not just having the information about a particular incident grouped together and packaged effectively, but having it land where teams are already huddling, scrumming, and occasionally sharing GIFs. To underline the experience for the user: we sat down with some of the SREs at IBM to get a sense of their day-to-day life before and after using Watson AIOps, and it's been, personally, a very interesting journey.
C
You know, third-party log management tools, all these different systems, just to get a basic understanding of the problem. Whereas after Watson AIOps, what we're instead able to do is provide one quick sitrep in that Slack experience, where they're able to quickly triage the information necessary to respond and react to a given incident. I'm going to do a quick little demo later to give us all a flavor of that. So, to give a quick overview of Watson AIOps itself.
C
I think I teed up a little bit of this earlier, but we're taking these different data sources, across events and alerts and metrics, using topology data, and then we're also using some of the unstructured data I mentioned earlier, which is a bit more novel, things like logs or tickets. We're bringing that together in Watson AIOps, and we're able to use it to push insights and content out to that ChatOps experience, where teams are able to scrum and react much more quickly. We're also able to feed it out into dashboards or other tools, so that parties who may be upline in the management stack, or other tools, are able to leverage those insights as well.
C
So as part of that open approach, we're trying to feed information across the different stakeholders that are involved here, but also to play nice on the technology side. Part of the approach with Watson AIOps is that we're including a number of technologies and capabilities that IBM offers on events, metrics, and topology to provide some of that rich understanding.
C
Topology, for example, has been really valuable for a lot of our clients in getting a sense of blast radius: using Watson AIOps to understand not just what service is down or at risk in this particular moment, but what further services, what kind of second-order impact, is at risk because that first service is down.
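One way to picture that blast-radius computation is as a walk over a service-dependency graph. This is a simplified sketch over a hypothetical topology, not the actual topology engine: starting from the failed service, follow "depends on me" edges breadth-first to find everything at second- and third-order risk.

```python
from collections import deque

# dependents[x] = services that call / depend on x (hypothetical example graph)
dependents = {
    "ticket-info": ["web-frontend", "reporting"],
    "web-frontend": ["mobile-gateway"],
    "reporting": [],
    "mobile-gateway": [],
}

def blast_radius(failed: str) -> set[str]:
    """BFS over the dependency graph: every service reachable from the
    failed one via 'depends on' edges is potentially impacted."""
    impacted, queue = set(), deque([failed])
    while queue:
        svc = queue.popleft()
        for dep in dependents.get(svc, []):
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(blast_radius("ticket-info")))
```

Here a "ticket-info" outage flags not only its direct caller ("web-frontend") but also the second-order "mobile-gateway" behind it, which is the kind of downstream view the speaker describes.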
C
Just to underline some of the work we're doing on the open side: on the data integration side, we're streaming in all these different data sources. We've got about 150 different sources of integration that we're using to feed data into Watson AIOps, and these include a number of recognized players in the IT operations space.
C
We've also got a number of great IBM partners, companies like Sysdig, Humio, Hazelcast, Turbonomic, and others, who do really great work and have been fantastic partners to us. And then on that ChatOps layer, companies like Slack, Microsoft Teams, Mattermost, and others, areas where our clients' IT operations teams and our SRE users are able to collaborate around the insights that we can feed them, and there are lots and lots of these integrations.
C
So just a couple of quick client reactions here, stories I'd love to tell. Kaiser Bank has been using Watson AIOps; they were one of our early beta clients, they've been a fantastic partner, and they've given us a lot of great feedback to work off of. One of the really valuable insights that they've had in working with Watson AIOps was around our ability to group alerts and incidents, compared with their pre-Watson AIOps days.
C
They were getting floods of alerts and not always being able to detect the actual problem because of that flooding, and so our ability to group and organize events and provide much better signal-to-noise has been really valuable, both to teams on the ground and to CIOs looking to benefit from that information. And then Dinata, a really interesting story there. They had been using more traditional methods to get underneath a kind of long-standing technical issue; by using Watson AIOps, and in particular some of our more differentiated unstructured-data approaches...
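The alert-flood problem described above is essentially a grouping problem. A minimal sketch of the idea, with a fixed time window standing in for the learned correlation a real product would use: collapse alerts that share a resource and arrive close together into one group, so responders see a handful of incidents instead of hundreds of alerts.

```python
def group_alerts(alerts: list[dict], window: float = 60.0) -> list[list[dict]]:
    """Group alerts per resource, starting a new group whenever the gap
    between consecutive alerts on that resource exceeds `window` seconds."""
    groups = []       # all groups, in arrival order
    open_group = {}   # resource -> its currently open group
    for a in sorted(alerts, key=lambda a: a["ts"]):
        g = open_group.get(a["resource"])
        if g is not None and a["ts"] - g[-1]["ts"] <= window:
            g.append(a)          # same burst: fold into the open group
        else:
            g = [a]              # new burst: start a fresh group
            open_group[a["resource"]] = g
            groups.append(g)
    return groups

alerts = [
    {"resource": "db-1", "ts": 0, "msg": "high latency"},
    {"resource": "db-1", "ts": 10, "msg": "connection errors"},
    {"resource": "web-2", "ts": 15, "msg": "5xx spike"},
    {"resource": "db-1", "ts": 500, "msg": "disk full"},
]
print(len(group_alerts(alerts)))  # 3 groups from 4 alerts
```

Even this crude rule improves signal-to-noise; the learned version additionally groups across resources that the topology says are related.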
C
We were able to identify long-standing issues that they were then able to address by updating their security posture. It was very interesting in that we were able to identify these anomalies using that new unstructured-data approach, whereas they had been using a more traditional approach. So, two different stories where we're using AI and some of the unstructured data to provide a richer experience to the teams, and where we see organizations using Watson AIOps to deliver more reliable experiences to their end users and clients.
C
So, just to briefly speak to impact: we've seen a lot of great feedback in some of our early work.
C
Keeping the trains running on time. And a huge value that we could bring to bear for them was in accelerating some of these workflows and freeing up more organizational bandwidth to tackle the next initiative, so that the IT organization could be a source of innovation and help move the ball forward for the business, rather than being in the sidecar or the back seat.
C
You know, just reacting to changes being pushed from the CXO or the line of business. So I'm going to pause here, switch to the demo, and then I'd love to get some of the questions, thoughts, and feedback that folks have. I'll just quickly swap over, and I've now got my Slack user interface up here. As I mentioned during the pitch, we use Slack as our ChatOps collaboration experience.
C
We're saving a lot of time in the workflow, we're saving a lot of complexity, and we're taking risk out through simplification. To dive in very briefly here: we've got a title and description. These are generated using natural language processing, creating quick name tags for these incidents that can be searched later. This tells me, as the SRE underneath the service, that the ticket info service is down, and that there are a couple of different anomalies and alerts associated with that error.
C
It also gives me a bit more information in the description: it lets me know that the incident is open, gives me the date and time, and, using my organization's definitions for severity, it classifies it with a severity. Further down below, getting past that top-line information, I've got information on localization and blast radius, and more information on related events and alerts.
C
If I want more information on additional services that are downstream, I can get it, and the value to me, again as an engineer trying to support this product and ultimately resolve this issue, is that I can see what's downstream and what's at risk if this outage is allowed to continue, and in turn this helps me triage my response to this incident. Up next, we've got a bit more information on related events, so what I'm able to do here is see some of the different tags associated with this event.
C
What's really valuable here is that this is actually bringing together tons of different anomalies across different tools. So I don't need to go to six different tools, dashboards, and services to get more information and investigate. Rather, I can get a high-level summary of the problem and the related services, and I can use that information to make decisions quickly in triage.
C
If I do want more information, all I have to do is click through these links, and that will give me more information from each of the related tools. This hooks up with other parts of the Watson AIOps stack, as well as with PagerDuty, LogDNA, and other IBM and third-party products that we have integrated with Watson AIOps.
C
So that gives me the ability to investigate localization and blast radius, and gives me context on what could go down and on how this is located within my technology stack. This is all really helpful in giving me, as an engineer or as an incident manager, really valuable context. What I'd now like to do, having gotten this context and information, is take action, react to this, and ultimately address the issue.
C
We're able to pull up tickets from prior scenarios. We use the information about this particular incident to pull out tickets that were specifically related and were addressed successfully, and we're then able to use those tickets to guide us on resolution and help address this problem. So I see this looks a lot like the problem we're experiencing today, and if I want to investigate more, I can just go into ServiceNow and examine the ticket more closely.
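Retrieving "tickets that look like today's problem" can be sketched as nearest-neighbor search over ticket text. The bag-of-words cosine similarity below is a deliberately tiny stand-in for whatever similarity model the product actually uses:

```python
import math
import re
from collections import Counter

def vec(text: str) -> Counter:
    """Bag-of-words vector: lowercase word counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(incident: str, tickets: list[str]) -> str:
    """Return the past ticket whose text is closest to the incident."""
    return max(tickets, key=lambda t: cosine(vec(incident), vec(t)))

tickets = [
    "ticket info service outage after deploy, rolled back release",
    "printer driver missing on floor 3 workstation",
]
print(most_similar("ticket info service is down with timeout errors", tickets))
```

The outage ticket wins because it shares "ticket info service" with the incident text; a production system would use embeddings or TF-IDF weighting, but the retrieval step has this shape.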
C
What I can do now: I've identified an action, and I've got a good understanding of the incident itself. The next step is for me to acknowledge this and triage it out to my teams and my organization. So what I'm going to do now is create a new incident channel for my team to huddle in. If I want to, I can instead add it to an existing channel; maybe this is a service that's been a bit of a problem, and we have a dedicated channel for it.
C
If I want to keep it in my current channel, I can do that; this is more a cultural decision than anything. I can also adjust the status. By virtue of me working on it, I'm going to keep it in progress, but if I'm trying to delegate this to somebody else, I might want to tag it as just open, or if I've decided that this issue is a nothingburger, it's closed. I can name the channel.
C
I can then make it private. I have the option to make it private or public, if I want others to be able to join ad hoc, or if there's an organizational culture of openness around Slack, which is something a lot of our clients have really embraced. You know, the default is to keep it open, and maybe sometimes to keep it private if it's a sensitive service. And then, finally, I'm able to tag in other users.
C
Perhaps I want to tag in my friend James, because I know he's an amazing SME on this service who will help us address this issue, not just this time, but help us deploy a perpetual fix. And finally, I can use this notes section to drop in additional color, or a game plan for how we want to handle this, particularly if we're juggling other incidents.
C
So I can acknowledge this and go from there. That's a quick demo of Watson AIOps. Just to play back what you saw: a quick summarization of an incident, where we've gotten together enough information to make some quick decisions, tagging the right SMEs and understanding what services or tools are downstream. I've got easy access to additional information if I want to investigate and get a richer understanding.
C
This allows me, as an incident commander, as an SRE, as an IT operations manager, to make some quick decisions as we try to get the service back online quickly. And to some of my earlier points: this is pulling time out of the workflow, this is automating a lot of activity in the workflow, this is using AI to bring simplicity and clarity to my team's efforts.
C
So I'm going to swap back to the deck and close here. I really enjoyed the conversation today, and I'm really looking forward to some of the questions. Happy to dig in on any questions around how we're applying AI here, the models, shout-outs, anything else. I'll put the mic down and open it up for questions.
A
Awesome, thank you, Morgan. That was a great demo, and thank you so much for the presentation and for walking us through Watson AIOps. Jumping right in, we do have a lot of questions in the chat. First, I wanted to ask, since we're relating this back to OpenShift and the platform...
B
Yeah, I can chime in, Morgan, on the OpenShift support: it supports OpenShift 4, up to 4.5 if I'm not mistaken; I was checking on that while Morgan was talking. And it does leverage Operators for the majority of what is involved here. Most of what Morgan referred to is cloud native and containerized, so it leverages the Operators. I'm not familiar with Open Data Hub; I was just checking on that, so we may have to get back to you on that one.
D
Let me just quickly comment on that real quick. I don't know if there's a direct connection, but certainly Open Data Hub in general really is the community reference architecture for how to do AI and machine learning on top of the platform. So it's not very specific to AIOps per se, but if you were to build AIOps workflows on top of OpenShift, then you could use something like Open Data Hub, and the tools and reference architecture that it provides, to do that.
D
So that's how I would answer that; that's what Open Data Hub is.
D
And just to add to that, on Open Data Hub: there's an instance of Open Data Hub within Red Hat, called Data Hub, and, as the name suggests, it aggregates some of the data that we collect and provides some services for AIOps within Red Hat. So that's how it ties together.

D
That's correct, yes. And for example, Morgan was talking, while showing the demo, about how, as an SRE, you do some log analysis. One of the things Data Hub collects internally, for example, is all the CI/CD test logs, and then you can apply data and analytics tools on top of that, get some insights, and use that to improve product experiences and products. So that's one way we are using it.
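At its simplest, the CI/CD log analysis described here is aggregation: pull the failure lines out of many test logs and rank the most frequent ones, so recurring breakages surface. A deliberately tiny sketch of the idea, not the actual Data Hub tooling:

```python
from collections import Counter

def top_failures(logs: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Count ERROR/FAIL lines across many test logs and return the most common."""
    counts = Counter()
    for log in logs:
        for line in log.splitlines():
            if "ERROR" in line or "FAIL" in line:
                counts[line.strip()] += 1
    return counts.most_common(n)

# Three hypothetical CI runs; test_checkout times out in all of them.
logs = [
    "ok test_login\nFAIL test_checkout: timeout\nok test_search",
    "FAIL test_checkout: timeout\nERROR fixture db unreachable",
    "ok test_login\nFAIL test_checkout: timeout",
]
print(top_failures(logs))
```

Even this level of aggregation turns thousands of scattered log lines into a short ranked list that tells an SRE where to look first.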
A
Thanks, Tushar. So, because we have this slide up, I wanted to mention the IBM Cloud Pak for Multicloud Management. Do you also tie back into the Cloud Pak for Data? How do those two line up?
C
That's a great question. As a ground truth: the IBM Cloud Paks are different tools, focusing on different parts of the cloud experience, that are all built on Red Hat OpenShift. Watson AIOps is built as basically an extension of, or an add-on to, Cloud Pak for Data today, and we're also making it available as an extension to Cloud Pak for Multicloud Management.
C
These are two different Cloud Paks focused on two different sets of problems, and so there's a synergy with both. Cloud Pak for Data is really focused, as the name might imply, on providing tools and the base infrastructure to allow teams to really effectively work with, manage, and virtualize data. Cloud Pak for Multicloud Management is really focused on managing your cloud experience across hybrid, on-prem, and native, giving you a single control plane for that, and it's been particularly interesting in light of the shift toward DevSecOps we've seen over time.
C
So, great question: today we are soliciting feedback through our customer relationships with our user base, but we're planning on making a user-oriented roadmap available, so we can get external feedback on our roadmap and get a healthy sense of prioritization, or rather validation of our prioritization, from users and from organizations.
A
Great, thanks, Morgan. All right, going back to some of the questions in the chat. Bob asks, do you see any... no comment needed, we already covered Open Data Hub. All right, so William asked whether Watson AIOps is supported on OCP 4; we did cover that. So a good question is: can you deploy it directly into OpenShift? James, you were talking about that. Can you, outside of Cloud Paks, or is that a no?
A
Okay, so then the answer to whether it uses continuous deployment for updates on OCP is that it would follow the Cloud Pak model for continuous updates, correct? Okay, good question, William. All right, William asked another one: can the AIOps compare events to previous events and suggest actions based on previous experience, actions good or bad, etc.?
C
The way we approach training is that we provide some models out of the box that focus on events, on logs, and on correlation across these different data sources, and then over time, with use and with training data from our clients, we continue to train those models. What we have found in some of our POCs and other work is that there's so much heterogeneity between different client environments that it would be very difficult to have one out-of-the-box model to rule them all and have it updated top-down.
C
What instead makes a lot of sense is to start off with a set of base models, kind of an ensemble of base models, and then refine those based on the particular application or implementation, so it's relevant to what's actually needed on the ground. So, to your specific question: yes, Watson AIOps is able to be trained over time, using past events to do that training. That's exactly right.
B
Yeah, and I've heard some of the folks on our team talk about entity linking, where essentially they're matching up some of the elements that could be in the logs, or in alerts or events, and making connections across those, across different source types, to form an entity. And much like Morgan was just saying, that then feeds the model, and it starts learning from there.
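Entity linking, as described here, can be sketched as extracting shared identifiers from heterogeneous records and unioning the records that mention the same identifier. This is a toy version with a hand-rolled regex and union-find; the product's actual matching is learned, and the record texts below are invented for the example.

```python
import re

def link_entities(records: list[str]) -> list[set[int]]:
    """Union records (by index) that share an identifier-looking token,
    e.g. a node name like 'node-7' or a name ending in '-service'."""
    parent = list(range(len(records)))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i: int, j: int) -> None:
        parent[find(i)] = find(j)

    owner: dict[str, int] = {}  # identifier -> first record that mentioned it
    for i, rec in enumerate(records):
        for ident in re.findall(r"\b[\w-]+-\d+\b|\b\w+-service\b", rec.lower()):
            if ident in owner:
                union(i, owner[ident])
            else:
                owner[ident] = i

    groups: dict[int, set[int]] = {}
    for i in range(len(records)):
        groups.setdefault(find(i), set()).add(i)
    return list(groups.values())

records = [
    "log: payments-service threw 502 on node-7",   # 0
    "alert: CPU saturation on node-7",             # 1
    "event: deploy of payments-service completed", # 2
    "log: search-service cache warmup done",       # 3
]
print(sorted(sorted(g) for g in link_entities(records)))
```

Records 0, 1, and 2 chain together through the shared `node-7` and `payments-service` identifiers, even though no single pair of sources shares both, which is exactly the cross-silo linking being described.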
B
So some of it is fluid. Some of it will look to teach the model in a supervised way, but some of it can be arrived at unsupervised.
B
On the question about verticals: it's pretty horizontally applicable, but what we see as typical interest so far, and it's early days, is typical of where IBM has seen a lot of traction: in financial services, in retail, across different government and public-sector areas, as well as in telecommunications.
B
So those are the big ones that get a lot of attention. It's not limited to them, because it's really a matter of having operations teams that are modernizing and trying to do more with less, which is pretty much everybody. It's just a matter of how severe the problems are.
B
If they don't do this? Because of the way things are changing, as Morgan mentioned throughout his presentation, it's really unsustainable to be so manually reliant when things go wrong: in your responses, your investigation, your triage and diagnosis, and ultimately getting to resolution. The market is still heavily reliant on manual activity, and that's of course the biggest cost, and inherent in that is the biggest delay factor in bringing down the mean time to go through each of those steps and ultimately get to resolution.
B
So it's very horizontal. You'll see more budgets, and bigger budgets, the larger the company is, of course, and the more severe the impact is when there are those delays. So being able to reduce that time ties right into ROI, and it does require that: the impact has to be big enough for them to make the investment.
B
So it's not necessarily for super small companies, where they can just yell across the room and fix a problem, be agile and on the fly; there are all kinds of ways they can approach it. But it's definitely a big payoff for big banks and big retail organizations that just have a lot of complexity and have been throwing a lot of tools at this age-old set of problems, problems that are only getting more exacerbated as the technology changes. So that's where the opportunities lie.
B
Yeah, I can start with that one, and then you can chime in if you want. So we have a mix between some of the traditional capabilities that we've had in the market for a very long time, along with some very new AI capabilities that come from IBM Research and from Watson.
B
For the integrations on the more traditional side, we've got thousands of pre-built ways of bringing in that event and alert data. We've got pre-built ways of bringing in performance metrics from different types of sources, and these are, think, the prevalent monitors, the prevalent devices, the resources in the infrastructure, and also the application layer, which typically comes through monitors but could be performance metrics that we pull directly from the source.
B
There's a huge library of those out of the box that are all part of this. And then with the newer pieces, we've got sources like LogDNA, with Humio on the way, to bring in the logs, and there are some generic ways of bringing those in as well. On the natural language front, we've got some pre-built integrations with ServiceNow and some of the more prevalent service desks, and we're looking at ways of making it easier to bring those in on the fly.
B
But that's where, depending on how you're going to leverage those integrations, there could be some more custom-based methods for bespoke types of sources. Anything you want to add, Morgan?
C
No, I think that's a really valuable answer. I would just call out that we've got some great partners as well that we've been working with, and they've been great. Humio, for example, has been fantastic with logs as a partner for us.
B
Yeah, and then there are integrations also; I've been referring mostly to inbound types of integrations, but there are also out-of-the-box integrations for outbound or collaborative types of sharing, exporting the information to varying degrees. So you saw the built-in capabilities of exposing everything through Slack; Microsoft Teams is next up, and there are plans to make it headless, so that it could just be exposed however it makes sense. That's on the roadmap.
B
I would say Splunk. We've had Splunk integrations to pull in events and lots of different information from Splunk; that's looking at more of the structured data that we talked about. Bringing the logs in out of the box is something that's one of the priority items. What else, Morgan? What else would we say comes up a lot?
C
I would say Splunk is definitely a major one. I think what we're also looking at is how we can get more sophisticated integrations with some of our existing tooling. For example, we have a very lightweight integration with Humio, with LogDNA, and with a couple of other vendors in the space, and we're looking at ways we can get more sophisticated outputs and richer two-way connectivity between those.
A
C
I would say that our primary use case, or jumping-off point, is incident management writ large, and then event grouping, or entity linking, specifically within that. That's been a really compelling starting point, particularly because, as James mentioned, a lot of the users we have today are very large firms. Their data are often siloed between different departments, so the proposition of being able to tie together those different investments has been compelling.
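As a toy sketch of what entity linking for event grouping can look like: events that reference the same infrastructure entity are linked into one group. The `entity` field name and the grouping rule here are illustrative, not the actual Watson AIOps logic.

```python
from collections import defaultdict

def group_by_entity(events):
    """Link events that reference the same infrastructure entity."""
    groups = defaultdict(list)
    for event in events:
        groups[event["entity"]].append(event)
    return dict(groups)

# Two events about the same pod end up in one group, hinting that they
# belong to the same incident.
events = [
    {"entity": "pod/checkout-7f9c", "msg": "CrashLoopBackOff"},
    {"entity": "node/worker-2", "msg": "MemoryPressure"},
    {"entity": "pod/checkout-7f9c", "msg": "Readiness probe failed"},
]
groups = group_by_entity(events)
```

Real systems use far richer signals (topology, time windows, learned correlations), but shared-entity linking is the intuition behind tying siloed alerts together.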
B
That's still there pretty predominantly, I should say, but it's being coupled with a proliferation of site reliability engineering teams that may or may not be part of that central operations group. They may be part of an application group or a product development team, as you're probably aware, with different DevSecOps types of methodologies.
B
Those are bringing different people into this picture who didn't traditionally exist there, and that's exacerbating the silos that Morgan mentioned, where you've got different people with different areas of focus, different things they care about, or different expertise. They can't be experts in everything: you don't want them to have to be super knowledgeable about everything going on in the infrastructure if all they care about is their set of apps. And vice versa: you don't want someone so siloed that all they care about is one set of resources in the infrastructure, without line of sight into how everything else gets impacted. So it's a bit of a balancing act that a lot of companies are facing, and that brings up a lot of newer use cases that expand on some of the traditional ones coming from the central operations teams.
A
D
Yeah, I mean, in general with OpenShift and Kubernetes we are on what I would call a journey to create self-healing clusters. To that end, Kubernetes has obviously built a lot of these self-healing characteristics for applications; think about replicas and how you build in high availability. But also for the underlying platform itself, especially with OpenShift 4, the use of operators and Operator Lifecycle Manager added a lot of capabilities that create the tooling towards that. A simple example is how you scale the OpenShift cluster itself: adding new worker nodes, or scaling worker nodes up and down, in response to, for example, some of the things Morgan was describing. One of the common things we had all observed, even in OpenShift 3, was that at some point you run out of capacity, and if for whatever external reasons you have to increase the node count and there is no underlying capacity in the infrastructure, how do you add that? So we are on that journey. Long story short, there are other interesting questions too, for example: what happens after you have put a workload or a set of workloads on a cluster, and how do you reoptimize after some time?
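The self-healing behavior described here boils down to a reconcile loop: compare desired state with observed state and act on the difference. A minimal sketch under that framing, with dict shapes invented for illustration; real operators talk to the Kubernetes API rather than plain dicts:

```python
def reconcile(desired, observed):
    """Return the scaling actions needed to converge observed on desired."""
    actions = []
    diff = desired["replicas"] - observed["replicas"]
    if diff > 0:
        actions.append(("scale_up", diff))      # e.g. add worker nodes
    elif diff < 0:
        actions.append(("scale_down", -diff))   # e.g. drain and remove nodes
    return actions

# A cluster that should run 5 workers but only has 3 gets a scale-up plan.
plan = reconcile({"replicas": 5}, {"replicas": 3})
```

An operator runs this comparison continuously, which is what turns a one-time setup into ongoing self-healing.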
D
But I think the more important part really is the feedback part, which is the second part of your question about AIOps in general. We have built capabilities within OpenShift to report metrics; Morgan was showing on that slide, for example, that AIOps heavily relies on structured and unstructured data, meaning metrics and logs. With the use of what we call cluster monitoring, using Prometheus, we can export Prometheus metrics, both of the system and of applications, as well as alerts, and we can export logs to external systems using various APIs, so that they can then be consumed by Watson AIOps, for example.
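As a sketch of what consuming those exported alerts might look like: Prometheus exposes an `/api/v1/alerts` endpoint, and an external tool can filter for alerts in the firing state. The example payload below is made up (it mirrors the endpoint's response shape), and the fetch helper's URL is a placeholder; on OpenShift the in-cluster monitoring endpoint sits behind authentication.

```python
import json
from urllib.request import urlopen

def firing_alert_names(payload):
    """Pick out the names of alerts currently in the 'firing' state."""
    return [
        a["labels"].get("alertname", "<unnamed>")
        for a in payload["data"]["alerts"]
        if a["state"] == "firing"
    ]

def fetch_alerts(base_url):
    """Fetch the live alert list (requires a reachable Prometheus)."""
    with urlopen(f"{base_url}/api/v1/alerts") as resp:
        return json.load(resp)

# Example payload mirroring the endpoint's response structure.
payload = {"data": {"alerts": [
    {"state": "firing", "labels": {"alertname": "KubeNodeNotReady"}},
    {"state": "pending", "labels": {"alertname": "TargetDown"}},
]}}
```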
D
It can then provide the necessary insights. The other part of it is what we were saying about Slack: that's the downside of too many alerts. The other interesting thing that we added with OpenShift 4, and we talked about data hub previously, is that we also collect a lot of these metrics so that we can provide additional insights to our customers, and one of the biggest ones really is making upgrades and updates much more reliable.
D
We actually monitor both the install and upgrade of OpenShift clusters through this telemetry data, and based on those insights we are able to find edge cases that there was no way for us to know about otherwise. Let's say 80 or 90 percent of upgrades and updates are fine, but there is always a small subset, 10 or 15 percent, and I'm just using rough numbers there, which I would characterize as edges, which there was no way for our testing to completely cover. Those are the things you can uncover through the tooling that we have built.
D
So it's basically telemetry reporting how, for example, upgrades are going, and then a service called the update service monitoring that and only allowing those updates that are considered safe. That's one way in which we're using AIOps. The other one, as I said earlier, is that internally, with the data hub service, we collect logs and try to improve product experiences. That's the second way.
D
But there is also the question of metering. We have a metering operator now, and the same Prometheus metrics that I mentioned earlier can be stored in a database, and then you can query it for reports, like how much CPU and memory an application used, and analyze long-term trends.
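A sketch of the kind of usage report that enables, assuming the metrics are queryable with PromQL. The helper only builds the query expression; running it requires a live Prometheus. The metric name is the standard cAdvisor CPU counter, and the namespace is a made-up example.

```python
def namespace_cpu_query(namespace, window="5m"):
    """Build a PromQL expression for a namespace's CPU usage rate."""
    return (
        "sum(rate(container_cpu_usage_seconds_total"
        f'{{namespace="{namespace}"}}[{window}]))'
    )

# PromQL for the CPU used by everything in the hypothetical "checkout"
# namespace, averaged over 5-minute windows.
query = namespace_cpu_query("checkout")
```

Evaluated as a range query over weeks of stored samples, an expression like this is what turns raw metrics into the long-term trend reports mentioned above.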
C
Sure, and James, if you want to jump in, feel free. I had a slide up earlier highlighting some of the, I would say, more traditional, structured data sources, and then the unstructured data sources. IBM has a number of products, and included in the Watson AIOps portfolio are solutions for that event, metric, and topology monitoring capability, so that's part of the synergy that we bring to our clients.
C
We can either address use cases on those fronts immediately, or we can work with their existing infrastructure to provide this AI overlay. I don't know, James, if you want to add anything to that.
B
Sure. This is where our bread and butter has been for quite some time, right? You're thinking about event and alert information that's typically in a data store somewhere, in a database, coming from a monitor, or coming from the devices and parts of the infrastructure down to the network level. It's kind of old-hat stuff we've been able to do for decades, and that's been refined, of course, and has evolved with the market as the technology changes.
B
That's a really rich set of data, and in recent years we've been helping clients analyze that data in a more automated way, so that it's not overwhelming them and becoming part of the problem. When you start looking at this space, there are a lot of terms like alert fatigue and being inundated with noise; they call it operational noise.
B
You start to have too many tools and too many monitors, where everyone has their favorite monitor, and then you get this proliferation that goes beyond what teams can handle, or to the point where some teams have a good feel for their own area but nobody has a good feel for the big picture, like we talked about before.
B
So that's been our expertise: we've been a manager of managers of sorts, pulling in data from all the relevant places and, in an automated way, handling all that noise and all the duplicates by automatically deduplicating and filtering it in different ways based on certain criteria.
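A minimal sketch of that deduplication step, assuming each raw event carries `resource` and `condition` fields; the fingerprint key and field names are illustrative, not a specific product's schema:

```python
def deduplicate(events):
    """Collapse repeated events into one record with an occurrence count."""
    seen = {}
    for event in events:
        key = (event["resource"], event["condition"])
        if key in seen:
            seen[key]["count"] += 1     # same problem reported again
        else:
            seen[key] = {**event, "count": 1}
    return list(seen.values())

# Three raw events reduce to two deduplicated records.
raw = [
    {"resource": "db-01", "condition": "disk_full"},
    {"resource": "db-01", "condition": "disk_full"},
    {"resource": "web-03", "condition": "high_latency"},
]
deduped = deduplicate(raw)
```

Keeping a count rather than every copy is what turns a flood of identical alerts into a single actionable record.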
B
That's the traditional way, and it's really very achievable and pragmatic when it comes to structured data. What's been elusive and not so pragmatic over the years, but holds a lot of opportunity for much deeper levels of insight, is pulling in unstructured data, and semi-structured data, which are more unruly and less refined. Unstructured data in particular can be created in one of two ways. In one case you have information that can be matched up because it's clean enough: the useful pieces are typically part of that alert, part of the natural language that might be in a service ticket, or part of that log. In the other case nothing can make sense out of it because it's just messy data; but if you change some of your behaviors, or change the way you emit some of this information, then the AI tool can pick up on it and you can start to feed the models and learn from there.
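As an illustration of the "clean enough to match" case: even a simple pattern can lift structure out of a log line so it can be correlated with alerts. The log format and field names below are invented for the sketch; production pipelines rely on learned log templates or richer parsers rather than a single regex.

```python
import re

# Hypothetical log format: "<timestamp> <LEVEL> <service>: <message>"
LOG_PATTERN = re.compile(
    r"(?P<level>ERROR|WARN|INFO)\s+(?P<service>[\w-]+):\s+(?P<message>.*)"
)

def parse_line(line):
    """Return structured fields from a log line, or None if it's too messy."""
    match = LOG_PATTERN.search(line)
    return match.groupdict() if match else None

fields = parse_line("2021-03-01T12:00:05 ERROR checkout-svc: connection refused")
```

Lines the pattern cannot match fall into the "messy data" bucket the speaker describes; tidying how the information is emitted is what moves them into the matchable bucket.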
B
So there's an evolution starting to happen on that front with unstructured data that it just didn't even make sense to tackle before; it was kind of a thing to joke about. We used to joke years ago: wouldn't it be nice to bring in Twitter information when people were complaining about your application? Everyone would laugh, and now you can actually do that kind of stuff. That's oversimplifying it, but there's a lot of potential there.
B
When you're in the thick of a reactive situation, with a high-severity set of incidents happening, and you're huddling together to understand what one thing means that somebody might know about, and another thing that somebody else might know about, that's what really starts the delays. It could even lead you down the wrong path, and that's a traditional set of challenges that can now be rectified much more easily.
A
So we have a question: James, you mentioned "don't silo the talent". Can you expand on that?
B
Did I say that? Okay, yeah. I would say, definitely from an opportunity standpoint, there are some areas where people are more willing to throw budget at things like this. But I think maybe that was more in the context of: if you have domain expertise and application expertise, you want to be able to leverage that without creating disruption.
B
Before, you'd maybe have some people in operations that were the traditional level-one, level-two operations teams, and they would get these incidents, looking at their console in the data center, and depending on what their skill set was, they would either handle it, do some more triage, or roll it up and escalate it. That's typical, and lots of times you'd have people escalating things left and right, to the point where the level-three people are just getting everything, and that's been an ongoing problem.
B
Yeah, involved, not threatened. Every team, especially an SRE team, has a way they want to work, and you don't want to disrupt them. You just want to be able to augment what they do, and that's the feeling you want them leaving with when they see this stuff.
A
Thank you. I know we're at the top of the hour. Thank you so much; that was very informative and a great demo. Thanks again for joining us, Morgan, James, and Tushar. For everybody else, remember to join us next week, when we have a technical overview of IBM Cloud Pak for Integration, so that'll be great as well. If you're not a Commons member already, go to commons.openshift.org and look into that, and as always, go to openshift.tv to see the schedule for all the great things happening.