From YouTube: OCB: What's New in IBM Cloud Pak for Data?
Description
Join us for an update on What’s New in IBM Cloud Pak for Data as the new release comes out, as well as a demo! IBM Cloud Paks are built on and optimized for OpenShift Container Platform.
Speaker: Clarinda Mascarenhas (IBM)
Host: Karena Angell
Karena: Welcome, everybody, to another OpenShift Commons briefing. Today we're really excited to host the IBM Cloud Pak for Data team. This new release has been eagerly anticipated by all of IBM's customers and Cloud Pak for Data users, and we're here with Clarinda Mascarenhas, offering manager of IBM Cloud Pak for Data, as well as Clay Davis from Tech Data, a very important partner.
Clarinda: Thank you so much, Karena, it's really a pleasure. It's definitely been a great release for us this year, and I will give you a quick overview of what we will be covering in our agenda today.

In today's session we will showcase the highlights of the Cloud Pak for Data version 3.5 release, with a quick demo of the deployment using our operators, which is one of our new capabilities, and how it ties into Red Hat Marketplace. We've also onboarded this 3.5 release on our global distributor Tech Data's marketplace, and we'll hear from Clay on why Cloud Pak for Data is important to them, followed by a quick end-to-end demo that Travis will walk us through. We've come a long way.
I just wanted to give some background on what we were doing a couple of years ago across our Data and AI portfolio, with data management, governance, and analytics.

We tried to build the best tools and point solutions for the different use cases, but clients who wanted a more comprehensive, use-case-driven platform had to go through the pain of piecing these services together. So for the last two years our positioning has been more from a platform perspective with Cloud Pak for Data. Many of you must have heard about the Cloud Paks themselves, which are built around predefined use cases; we have six such Cloud Paks to deliver an end-to-end, pre-integrated, unified experience to end users.
I also wanted to quickly give you a feel for what our Data and AI platform is. We start from our foundation, which is based on OpenShift. Cloud Pak for Data is truly a hybrid offering which can run on any public cloud or on premises, avoiding vendor lock-in, and, as you can see in the three boxes here, we have data management services.

It's important to understand that the data actually required for AI needs to be trusted, so that you can then analyze it and build self-service analytics. The last box is Analyze, with our data science and analytics support for best-in-class tools and open source frameworks that allow you to run your models across a variety of different environments.
Version 3.5 supports OpenShift 3.11 and 4.5, and besides the different deployment options I just called out, we are also introducing support for IBM Z in this release. We also run on a range of storage, including OpenShift Container Storage, Portworx, and NFS, and we've seen that reflected in our growing ecosystem, including the onboarding on the Tech Data marketplace.

The next thing I quickly wanted to cover is an overview of the latest packaging and where the capabilities lie in version 3.5. We have some base capabilities, as you can see over here, and then we also have extensions. I'll give a simple analogy: similar to your iPhone, you have default apps, which are part of your base services, and you also have premium services, which are like extensions, and all these services are pick-and-choose and pre-integrated.
It's a land-and-expand model based on your needs. This release we are introducing new services in the base, which you can see highlighted: the Data Management Console (we'll see details of that in a bit) and, in the AI portfolio, WMLA, the Watson Machine Learning Accelerator, for deep learning use cases, as well as data privacy enhancements.

Then, from an extensions perspective, we are introducing knowledge accelerators for different industries, for business vocabulary; OpenPages, which is actually one of our GRC solutions; and also an oil and gas solution that we're introducing this release. Now, quickly, to summarize the high-level themes in Cloud Pak for Data this release: given the times we are in, we are seeing a trend where companies are either in a survival mode
with the new normal, or in an accelerated growth mode. Having said that, our two high-level themes, catering to both these types of needs, are a cost reduction strategy and an innovation strategy. From a cost reduction perspective (we will cover the details of each of these themes and areas in a bit), businesses are looking to optimize their costs, primarily through automation, or by moving to cloud to optimize their infrastructure.

They're also looking for return on investment; that's a very important factor.

Additionally, when it comes to innovation, companies in a growth mode are trying to keep up with the increased demand for their business, investing more in resiliency, risk management and data security, or advanced AI, and we'll see what each of these capabilities actually covers in a bit. So, from a cost reduction perspective:
B
The
first
important
thing
I
want
to
call
out
here
is:
you
can
see
on
the
left
hand
side,
you
have
many
different
pain
points
when
you
use
a
platform,
and
you
have
data
located
on
many
different
servers:
public
clouds,
many
different
user
interfaces
for
different
users
and
it's
painful
for
end
users
to
get
their
job
done.
You
know
very,
very
seamlessly,
and
so
you
can
see
on
the
right
hand.
Side
here
is
our
unified
user
experience
based
on
the
job
role
and
permissions.
The next capability I wanted to cover, in terms of our unified experience, is for data engineers. We wanted to give them a unified way to manage their databases in one place. Without this tool (it's called the Data Management Console) you might need multiple consoles to manage the native databases running on the platform. With this unified data management tool, you can manage data virtualization, connecting to any sources on public clouds, on premises, and so on.

You can manage your Db2 databases on the platform, run your queries, and monitor their performance, and this new console is actually built on a full set of open RESTful APIs; anything you can do in the interface, you can also do through our open APIs. In short: from receiving alerts, monitoring hundreds of databases, and optimizing their performance from one screen, providing a single view across the enterprise, to creating, altering, and managing your database objects, it's all through a single interface. So this is a great value-add for us on our platform.
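Since anything you can do in the console is also available through the open REST APIs, here is a minimal Python sketch of driving a Cloud Pak for Data instance that way. The hostname, credentials, and monitoring path below are placeholders; the authorization route shown is a common Cloud Pak for Data 3.x pattern, but check your instance's API documentation for the exact endpoints.

```python
import requests

CPD_URL = "https://cpd.example.com"  # placeholder route to your instance

# CPD 3.x commonly issues a bearer token from /icp4d-api/v1/authorize;
# verify the exact path against your instance's API docs.
auth = requests.post(
    f"{CPD_URL}/icp4d-api/v1/authorize",
    json={"username": "admin", "password": "secret"},
    verify=False,  # demo only; use proper TLS verification in practice
)
token = auth.json()["token"]

# Illustrative call: ask the Data Management Console layer for monitoring
# data instead of reading it off the dashboard. The path is hypothetical.
resp = requests.get(
    f"{CPD_URL}/dbapi/v4/monitor/databases",
    headers={"Authorization": f"Bearer {token}"},
    verify=False,
)
print(resp.status_code, resp.json())
```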
The next important capability we have is platform connections. Again, there are two main goals here: we wanted to make sure that we use a common mechanism of connectivity across all our services on the platform, and a common set of connectors across those services. If you want to find the list of these connectors, they are available in our Knowledge Center; please feel free to take a look. It includes IBM and third-party connectors of all different types, as well as custom JDBC connections that you can define.
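For the custom JDBC connections mentioned above, here is a rough sketch of what such a connection amounts to under the hood: a driver class, a JDBC URL, and credentials. The driver class, URL, jar path, and query below are all hypothetical placeholders for whatever custom driver you would register.

```python
import jaydebeapi  # pip install jaydebeapi; requires a JVM on the machine

# Everything below is a placeholder for your own custom JDBC source.
conn = jaydebeapi.connect(
    "com.example.jdbc.Driver",                    # hypothetical driver class
    "jdbc:example://db.example.com:5000/SALES",   # hypothetical JDBC URL
    {"user": "dbuser", "password": "secret"},
    "/drivers/example-jdbc.jar",                  # jar uploaded when defining the connection
)
cur = conn.cursor()
cur.execute("SELECT 1")  # trivial smoke-test query
print(cur.fetchall())
cur.close()
conn.close()
```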
Now, having covered some of the highlights from a user experience standpoint to increase productivity, the next theme is around our unified platform management capabilities and enhanced automation.

We've seen in the past that system administrators and end users often have a lot of difficulty operationalizing and managing their data and AI workloads. This has been one of the pain points, and what we've done this release is introduce a couple of capabilities. One is through our platform management.

System administrators on containerized platforms have many services deployed, with different resource consumptions and entitlements, and these are very complex to manage on your own. So, besides providing the capability to drill down from the service to the pod level to debug and correlate issues, administrators also require visibility and control.
B
So
what
we've
introduced
this
releases,
we
are
also
giving
the
capability
to
configure
resource
quotas
on
cpu
and
memory
for
the
entire
platform,
as
well
as
individual
services.
That
way,
you
can
monitor
your
thresholds
and
receive
email
alerts
when
usage
exceeds
the
config
configure,
coders
and
optionally.
You
can
also
configure
a
scheduling
service
to
enable
a
soft
enforcement
of
these
coders.
That
way,
you
know
you
aren't
exceeding
what
you've
actually
allocated.
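Cloud Pak for Data surfaces these quotas through its own management console, but conceptually the idea is close to a Kubernetes ResourceQuota on the underlying OpenShift project. A minimal sketch with the official Kubernetes Python client, assuming a namespace named cpd-demo:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running in-cluster

# Cap total CPU and memory requests for everything in the cpd-demo namespace.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="cpd-platform-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "48", "requests.memory": "192Gi"}
    ),
)
client.CoreV1Api().create_namespaced_resource_quota(
    namespace="cpd-demo", body=quota
)
```

The platform's own quota feature layers the email alerts and the optional soft enforcement via the scheduling service on top of this kind of mechanism.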
So this is one of the great capabilities this release. The other important capability, from a management perspective: oftentimes we've seen that for data science workloads and so on that are running in production, we need to make it easy to monitor and manage them over a period of time.

So we've introduced this capability in deployment spaces, with enhanced dashboarding, where you can see an integrated operations view for the workloads you're running (depicting the runs, the failures, and so on), so that you can quickly find your issues and get a quick view across all the different spaces. When we say spaces, think of them as the place where we actually do our production-level deployments on the platform, so that you can access them through your apps.
You can reach your machine learning models through a REST API, and this also paves the way for us to build on queuing and capacity planning for these production workloads in phase two. Now, the next important capability is our Cloud Pak for Data operator; I won't speak much to it, because Partha is going to walk us through it in the demo.

It's an OLM-based operator for faster deployment and configuration, allowing you to install, uninstall, patch, and scale in an effective as well as automated, scalable way. So let's see it in action. Over to you, Partha.
One moment. Okay, maybe, Travis, why don't you quickly show us a quick demo of the end-to-end platform while Partha comes back, and then we'll dive into the operator demo at the end. So, Travis, do you mind sharing your screen?
Travis: Sure, let me do that quick. Thank you.
That's good, excellent. Right, so I'm going to take a quick 15 minutes and walk you through some basic pieces around the platform: just a quick end-to-end demo with some use cases. Let me first start off with a couple of slides that make up the use case for you. I like telling a story when I do a demonstration, so the demo scenario we're going to walk through today is a fictitious telecommunications company.
They have a goal of trying to retain customers. They want to make your standard customer churn and propensity-to-churn model better, faster, and easier, and adapt to the marketplace. I'm going to do an end-to-end demo, creating a whole model and deploying it, hopefully within the next 15 minutes.

Now, as Clarinda mentioned before, about breaking things down across the AI ladder (collect, organize, analyze, and infuse): in the first phase I'm going to focus on the collect and organize pillars, around how I connect to data through data virtualization, quickly do some discovery and analysis of that data, and publish it out for use by my data science team.
The data science team is going to go shopping for data and quickly find the information they want by utilizing the platform. They're going to build a quick AutoAI predictive model, using the AutoAI features inside of Cloud Pak for Data, and then they're going to promote that machine learning model up to a deployment space, which was mentioned earlier. Then, in the last phase, I'll show how quickly and easily somebody can take that model.
So this is one of the views of Cloud Pak for Data. As you can see, there's a bunch of tiles of activities, which I can go through and customize. I'm logged in as a super admin, so I can see everything, but depending on your role you'll see the particular tiles and pieces that affect you and what you're working on. Let me see if I can get my mouse back and come back to the screen here. All right.
So I'm going to start off in the role of a data engineer. I'm going to go into data virtualization. Now, there are multiple ways to access data and make it available throughout the Cloud Pak for Data platform; one of the most powerful is data virtualization. As I show here, my current data constellation view is: I have a Db2 database, another Db2 database, a Postgres database, MySQL, Oracle, and a MariaDB.
Yeah, it should look normal, though. So let me see if I can zoom in, if it makes a difference. Does that make a difference?
All right, let's share again, and share my entire window on screen one. Wow, all right. How is that looking now? That's awesome, okay. Technology challenges! So, within my virtualized data view, I have multiple tables. As a data engineer, I can decide how much of this data I want to securely expose out to interested parties that are also using Cloud Pak for Data, or to external applications as well. So, for example, I can say: I want to take a look at this customer profile information.
I can see metadata information about this table, and I've decided I want to go ahead now and combine some of this data together and give one simple virtualized view of multiple databases or data sources out to my data science team. So I can take the customer profile data and the customer billing data, and I'm going to go ahead and just join them together as one simple data set with the new graphical interface.
I can edit the names of all the columns associated with this joined data set, hit next, and make a new view (we'll just call it "demo join customer data"), and now I have a choice of where I want to publish it. I can add it to an individual project for a particular user, I can have it fulfill a data request, I can add it just to my virtualized data, and I can also submit it into a global enterprise
knowledge catalog, which can then be shared with other data scientist types. Just for demonstration purposes, I'm going to create my own view. Taking a look now, right here is my new data set that I've created, and once again I can go in and preview it. I can see that there are now 16 columns associated with it; the table structure is much bigger than before.
Metadata-wise, there are two source tables or files and one main schema, and I can see the detail of those pieces. So what did I just do? I was able to take multiple data sources behind the scenes, decide how I want them exposed out to my team in a secure manner, and then create a single view which can now be accessed via tools inside the platform, or via tools outside the platform with standard Db2 driver connectivity.
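Because the virtualized view is exposed over standard Db2 driver connectivity, an external tool can query it like any Db2 database. A minimal sketch with the ibm_db Python driver follows; the host, port, credentials, schema, and view name are placeholders, and the database name is the typical one for the Data Virtualization service, so confirm the details on your own instance's connection panel.

```python
import ibm_db  # pip install ibm_db

# Connection details come from the Data Virtualization connection panel;
# everything below is a placeholder.
dsn = (
    "DATABASE=BIGSQL;"           # typical DV database name; confirm on your instance
    "HOSTNAME=cpd.example.com;"
    "PORT=31962;"
    "PROTOCOL=TCPIP;"
    "UID=dv_user;"
    "PWD=secret;"
    "SECURITY=SSL;"
)
conn = ibm_db.connect(dsn, "", "")

# Query the joined view created in the demo as if it were one table.
stmt = ibm_db.exec_immediate(
    conn,
    'SELECT * FROM DVUSER."DEMO_JOIN_CUSTOMER_DATA" FETCH FIRST 5 ROWS ONLY',
)
row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)
    row = ibm_db.fetch_assoc(stmt)
```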
One more quick thing that I want to show around data virtualization is a very powerful cache management system. I can log in as an administrator and see the various queries that have been occurring through the data virtualization platform. I can look at the details of those, see which queries have taken longer and which ones have been quick, and then I can add a new cache, where I can select which types of queries and pieces of data to cache, within the amount of memory I have set aside for real-time caching. So if I have a user running the same query every day, I can now set that up as a cache and have it refreshed over time. That way it doesn't have to go out to the end data source; it can fetch the results from the cache in a quick and easy manner. So that's data virtualization.
Now, one of the next steps that I would usually do as part of data virtualization: as a data steward or a DataOps representative, now that I have new data sources, I want to be able to use data management to go out and analyze that data and understand its quality before I expose it out to all my users.

So here's a quick example: I went ahead and ran those same data sets through the discovery process within Cloud Pak for Data, and had it profile the data and look for different pieces and parts within the data for analysis.
I can also see a delta: if I rerun this analysis, I can watch my data quality over time. Cloud Pak for Data has hundreds of different classification types, as well as rules, built in that will help you do data quality, and you can also customize those for the specific things you need in your particular environment.

If you look at data quality here, for example, I can see the last data quality run; I can go down and see the dimensional results of those pieces and parts and see what was going on. I can see that complaints-per-month actually had a lower quality score, so I can dive in deeper, take a look at those details, and see the data quality and some possible violations and findings associated with a specific column or an entire data set. All right.
So at this point, as a data engineer, I've taken the data and published it back out to my data catalog, and now I'm going to hand it off to a data scientist, who's going to go out and shop for data, use that data, and then build a dynamic AutoAI-based machine learning model. So that was that step. Now I come back over and change roles: I come in as a data scientist to go do work.
Clarinda: Travis, about your screen again: it seems to have decreased to a smaller size.
Travis: Yep, hold on. It must be when I change tabs; must be a feature. All right, there we go. Is that better? Yeah? All right, I'll just stay on this tab from now on, hopefully to make that simpler. All right. So now I'm logged in as a data scientist, and the first thing I'm going to do is look at some data. I want to go see what data exists to run my customer churn model against, and take a look at the customer data catalog.
I can go take a look at that data and see if it's the kind of data that I want to pull into my project. It'll do a quick overview, if I have access to see it. All right, so now you can see that it has 21 columns of joined data.

I can see if there are any reviews of this data, and I can look at the profile information about it and see its different pieces and parts. Let me actually take a look at the customer profile data that we just looked at before, and look at some of the profiling associated with that data set.
Number of children: decent distribution. Estimated income: pretty good distribution, with the mean around 111,000; looks good. So now, you know what, I like this data, I'm excited about this data. Now I can quickly move that data into my project for use, and that's as simple as, over here on the left: I want the profile data, I want the individual data sets just in case, and, oh, by the way, here's the customer churn data set, which is a CSV file, in my catalog.
Let me take all of those and simply hit "add to project"; I can now select my churn project and just hit add. I'm not going to hit add now (I've already added them, to speed the demo up), but that's how quick and easy it is for a data scientist to shop for data, find the quality data he likes that's part of the platform, and bring it into his own project. And so, what is a project?
Let me go into my churn project. A project is a scoped set of assets: as a project lead, I can create a project and add in people to collaborate with, which is also part of the platform, and then the project has a collection of assets that are scoped to my view and my use within the project itself. So, for example, there are data assets; here are some of those same data assets we were just looking at before, created by Amy: customer satisfaction, customer profile, and customer billing. There are AutoAI experiments, notebooks, and models; the project is the collection point for all of the assets that you then use within your project. All right.
So what am I going to do next? I have all the data that I want and I have access to it; I want to go through and add some new pieces to my project, especially an AutoAI experiment. I can add these new assets, depending on what I have installed on the system, into my project. I can make a scoped connection to outside data; I can make a new AutoAI experiment.

If I'm the data scientist type that loves to jump in and build something with code, I can jump in and create a brand new notebook and develop a new model from scratch via that mechanism. You can also build a prescriptive model using the CPLEX engine with Decision Optimization. So, as a project lead and as a person within this project, I have the option to use any of these different types of interfaces and tools to get my work done.
Another one that I'd already used behind the scenes is something called Data Refinery, which is a data wrangling, data munging type of graphical tool that I can use within my project. So behind the scenes I went ahead and built out a quick little data wrangling flow, where I took the customer profile and billing data, combined them, joined that with my customer churn data, and made a new local data set.
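Data Refinery does this graphically, but as a point of reference, the equivalent of that wrangling flow in plain pandas is roughly the sketch below. The file names, column names, and in particular the customer ID join key are assumptions for illustration.

```python
import pandas as pd

# Hypothetical exports of the three source data sets.
profile = pd.read_csv("customer_profile.csv")
billing = pd.read_csv("customer_billing.csv")
churn = pd.read_csv("customer_churn.csv")

# Combine profile and billing, then join in the churn labels,
# mirroring the Data Refinery flow described above.
merged = (
    profile
    .merge(billing, on="CUSTOMER_ID", how="inner")  # join key is an assumption
    .merge(churn, on="CUSTOMER_ID", how="inner")
)
merged.to_csv("customer_merge.csv", index=False)
print(merged.shape)
```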
We'll just do a quick demo one: I can pick my compute resources and hit create. AutoAI is a function introduced in one of the last releases of Cloud Pak for Data that gives you, as more of a democratized data science type of user, the ability to not have to code anything and to let AI build a set of AI models for you.
So this is a quick entry point, extremely powerful, and accurate enough to be used by full-fledged data scientists, new data scientists, or just anyone that wants to build out a model. So here's what we do. First I'm going to go in and pick my data source; here's my customer merge data source, which has all my customer data and customer churn data together. I'm going to select that as an asset, and it's going to analyze that data quickly and give me a list of all the columns associated with it.
Then it wants me to select which column I want to predict on. I'm going to use churn, and because it's churn, and because it analyzed that data set, it's suggesting a binary classification with a positive class. Then there's the optimized metric, that is, how you want to judge the success of the models it builds for you; it shows accuracy for that model.
For example, I can decide how much of the data is used for training versus how much is used for testing afterwards. I can pick columns that I don't want included inside of the model; I'll just leave them all there for now. I can go into the prediction settings, and you can see right here that, because of the data type of that churn column, it suggested doing a binary classification, which is more of a true/false designation type of predictive model.
I can also pick what my optimization metric is. It's suggesting accuracy, but I could choose precision or recall or other data science metrics; I'll just use the defaults. Then, down below, here are all of the out-of-the-box algorithms it's going to test to see which ones are the best to go ahead and use.
Let me hit save settings, and let's go ahead and run this experiment. All right. What this is going to do is bring up a new graphical interface. I'm not going to sit through it, since it may take five or ten minutes to complete; I'll jump over to an already completed one. But let me swap the view so you can see some of the steps it's going to take behind the scenes.
It's going to first read in the data set and take a hold-out split of that data, the 85/15 split that was shown in the setup. It's going to read that data, then pre-process and clean up the data where needed: it's going to look for blanks, look for things that maybe shouldn't be there inside the data, and clean that up to the best of its ability.

Then it's going to select the various models it would expect to give you the best results based upon the data set itself, and once it selects the various models to test, it's going to run through those models. It's first going to do just a straight test, then it's going to do some hyperparameter optimization based upon those results and test again.
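To make concrete what AutoAI is automating here, a hand-rolled version of one slice of that process (hold-out split, one candidate algorithm, a small hyperparameter search) might look like the sketch below in scikit-learn. AutoAI repeats this across several algorithms and adds automated feature engineering on top; the file name, the "CHURN" label column, and the assumption that features are already numeric are all illustrative.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("customer_merge.csv")   # the merged demo data set
X = df.drop(columns=["CHURN"])           # label column name is an assumption
y = df["CHURN"]
# Assumes features are already numeric/encoded, which AutoAI handles for you.

# 85/15 hold-out split, matching the default shown in the experiment setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=42
)

# One candidate model plus a small hyperparameter-optimization pass.
search = GridSearchCV(
    GradientBoostingClassifier(),
    param_grid={"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]},
    scoring="accuracy",  # the "optimized metric" chosen in the setup
    cv=3,
)
search.fit(X_train, y_train)
print("hold-out accuracy:", search.score(X_test, y_test))
```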
It's going to do some feature engineering, build some new features for your data set, rerun those through again, and then do one more hyperparameter optimization pass, and it's going to do that for each of the models selected inside of the activity. You can see here that it's going ahead and starting its processing, and here's a different view of that same processing that's running right now.
So it's going through selecting the candidate algorithms, and once that's complete it will kick off the runs of those algorithms to complete its work. Now, just for the sake of time, since I know we want to do this quick, I'm going to jump back into my project and show you.
I ran the same experiment a couple of days ago, so you can see what the completed model actually looks like. I'm going to go down into my completed AutoAI experiment for the same churn data. This one I ran across four different algorithms, so there are actually 16 different runs that it went through in this testing, and you can see by looking at it that it ran through all of those. The star right here represents the one that actually tested out to be the best.
You can see the leaderboard down below. What this means is that it thinks pipeline number 15 (this one right here, with an LGBM classifier type of algorithm) came back with 98.2 percent on the optimized metric, accuracy, and it used one hyperparameter optimization run and one feature engineering run to get there. It took around 17 minutes or so to run this individual model at the time it completed.
If I want to take a look at the pipeline comparison, with details across all 16 pipelines that were run, where accuracy was the important piece, I can select to show just the top few from an accuracy perspective. I can see that 15 and 16 were the top two for accuracy, and I can hover my mouse over one to see how it looked on the comparison scale against the other models.
Within the model itself, I can quickly see model accuracy, area under the curve, precision, and so on, and I can dive into a confusion matrix. So if you are a more experienced data scientist and want to look at the details behind it, these pieces will show you all those details and the reasoning behind them. If you're more of a junior data scientist, like me perhaps, you can look at these and somewhat understand what they mean; the information shown even gives you definitions for them.
An important thing for you to see, though, is that it will show you which features are the most important, meaning which characteristics have the biggest impact on your model. For example, you might not have guessed it, but the estimated income of the subscriber is one of the biggest factors in whether they will churn or not, along with other things such as months as a customer and the number of support calls in the last year.
It's interesting to see which features matter most. All right, now, with that model, I have a couple of options. I've gone through the work of creating a brand new model and having it run, but the goal is to actually promote and deploy a model to production. That's your goal: your goal isn't just to do analysis, your goal is to make a business impact. So I can now take any of these models that were created from here.
I come over here and say: take the top-ranked one. I can now save off this algorithm as a new model within my workspace; I'm going to call this "demo demo one". This will take that individual algorithm, package it as a deployable runtime model, and save it off into my project space, and I can then promote it for my team to deploy into a test environment or to production. For those data scientists
who also say, you know what, I want to see what happened behind the scenes, I don't trust that it was that good: I can also export this model out into an active running notebook and look at all the Python code associated with what was done behind the scenes by IBM's AutoAI technology. So here's a quick peek into how that model actually looks from a notebook perspective, and now, as a data scientist, I can come in and actually tweak this, and I can run it individually.
I can check it against different data sets, and I have a full-fledged, detailed notebook, written in Python code, that was created for me without my having to do any work. It's a very powerful solution. All right, so let me finish this piece up here quick.
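Once a model like this is promoted to a deployment space and deployed, applications score it over REST, as mentioned earlier. A hedged sketch of such a call follows; the hostname, deployment ID, and field names are placeholders, and the exact scoring path varies by release, so check the deployment's details page for the real URL.

```python
import requests

CPD_URL = "https://cpd.example.com"                # placeholder route
DEPLOYMENT_ID = "replace-with-your-deployment-id"  # from the deployment space UI

# Bearer token, obtained as in the earlier authorization sketch.
token = requests.post(
    f"{CPD_URL}/icp4d-api/v1/authorize",
    json={"username": "admin", "password": "secret"},
    verify=False,  # demo only
).json()["token"]

# Field names are illustrative stand-ins for the churn model's features.
payload = {
    "input_data": [{
        "fields": ["ESTIMATED_INCOME", "MONTHS_AS_CUSTOMER", "SUPPORT_CALLS"],
        "values": [[110000, 24, 3]],
    }]
}
resp = requests.post(
    f"{CPD_URL}/v4/deployments/{DEPLOYMENT_ID}/predictions",  # path varies by release
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
    verify=False,
)
print(resp.json())  # predicted label and probabilities per input row
```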
And tell me quick, guys, if I need to redo my screen again; hopefully it still looks big.
Clarinda: Thank you. That was really a good overview of the platform itself. We'll move on quickly to one of our other great achievements this release: we've onboarded onto Tech Data's StreamOne marketplace, and I would like to showcase what we're really doing with our global distributors and partners. So, Clay, why don't we start off with you telling the audience about your role at Tech Data, and before that at IBM.
Clay: Yeah, sure. Clarinda, can you hear me?
So I'll start with my time at IBM. I spent eight years at IBM, all within the Data and AI organization, working with great people like you and Travis and others. I held a number of roles during my time at IBM, but my final role was working directly with Cloud Pak for Data as a sales leader in North America.

My team was responsible for driving sales and for helping impact product direction for this new solution, Cloud Pak, within IBM. Then, earlier this year, I began a new chapter in my career when I moved over to Tech Data, but I didn't stray far from IBM.
I still work with IBM almost every day, and a lot of it is around Red Hat and Cloud Pak for Data. Tech Data is a global distributor, and there I'm responsible for leading our data, IoT, and AI practice globally. So I work both with vendors, like IBM and Red Hat, and with our business partners and resellers, to optimize the impact that we can have through the channel ecosystem.
Clarinda: Glad to have you, and it's been an amazing ride. This partnership between Cloud Pak for Data and Tech Data has definitely been building some buzz. Do you want to tell our audience a little bit about how it can change the game for customers?
Clay: Yeah, I'd love to. Look, as you know from my background, Cloud Pak for Data is near and dear to my heart, so I really love what IBM's doing with OpenShift through the Cloud Paks, even beyond just Cloud Pak for Data. So much so that when I arrived at Tech Data earlier this year, one of my highest priorities (if not my number one priority) was to ensure that the channel ecosystem knew the power of the Cloud Paks.
That was just a very brief demo of the robustness of Cloud Pak for Data, but in order to harness and absorb that power, the channel ecosystem, meaning our resellers and our partners, was definitely going to need some assistance. Thanks to the power of OpenShift, Cloud Pak for Data can be deployed on any cloud, which is a huge thing for our channel and for our clients. And so, as a distributor,
we work with so many partners, and we work with all these cloud vendors, so we set out to ensure that we could build the most effective way for Cloud Pak for Data to be consumed. And that's what we did together: our team at Tech Data and your team at IBM built a solution for Cloud Pak for Data that we term a click-to-run solution.
Cool, yeah. And a similar question to the one you asked me: I'd love to ask you to comment on what our announcement means to IBM, and especially to IBM business partners.
Clarinda: It's really exciting. Tech Data has over a thousand global vendor partners, as we know, operating in more than 100 countries, and onboarding Cloud Pak for Data onto this global IT marketplace, StreamOne, which will help streamline the buying, selling, and other services automated and offered to the global partners, is awesome. Additionally, as you're aware, with our hybrid cloud ecosystem strategy, and with what Travis just showed,
customization is very key, and Tech Data, as a value-added distributor, definitely meets our customers where they are, with solutions that are more innovative yet less costly, offering comprehensive services to foster wider adoption: providing that expertise to help both our business partners and our customers not only deploy large-scale solutions from technology providers, but also customize them to their specific priorities. Not to forget
the click-to-run automation that we developed to deliver this on Tech Data's StreamOne marketplace, which is definitely going to be a unique value for our partners. Simplifying some of the most time-consuming and complicated parts of deployments, and automating complex processes such as infrastructure platforms, software-as-a-service deployments, and building connections, configurations, and integrations, is something that I feel is really going to cater to our business partners and to our clients. So, Clay, coming back to you:
B
Why
do
you
think
tech
data
selected
cloud
pack
for
data?
You
know,
amongst
the
other
solutions.
Clay: Wow, great question. Yeah, we kind of have our pick, honestly; we work with so many vendors, and even partners that have their own solutions. I guess I would narrow it down to two reasons.
First, as I mentioned earlier, we work across cloud vendors, and so we wanted to make sure that we had a solution that would work not only with the vendor's own cloud (in this case IBM's) but with Azure and AWS and others, and obviously Cloud Pak for Data allows this via OpenShift. Second, we know that more clients are looking for that all-in-one solution to drive business outcomes, and Cloud Pak for Data accomplishes this
through some of the aspects that Travis went through. It really simplifies things, and this is how IBM has effectively marketed the solution: by allowing users to collect data, organize that data, and then analyze that data,
all before infusing it into their organization to use it in the most effective way possible. So it's kind of a short answer, but for those two reasons it really made Cloud Pak for Data a no-brainer for us to pursue, to go build this market-ready solution, put it on our ecosystem platform, and get off and running.
Clarinda: Very interesting. You mentioned it already, but I know that you're already seeing a lot of value from the integration with Red Hat OpenShift on StreamOne.
Clay: Yeah, you're right, Clarinda. We probably can't say it enough, but it really speaks to that first reason I gave, where we can work across cloud vendors seamlessly. It speaks to the power of OpenShift, and this is such a big deal for our channel ecosystem. We know that we live in a multi-cloud world, but especially when you think about the channel,
there are still a lot of organizations and resellers that are still working that out: figuring out where they land, where their customers want to be, and trying to work through a business-outcome landscape. We know that it's a multi-cloud world, we know that Kubernetes is the future, and being able to effectively expose that to the partner ecosystem is, I think, really, really important.
So the seamless integration with OpenShift, what it enabled when we built the solution, and what we're exposing our partners and end users to, is much needed and, frankly, just really exciting. And what's interesting: obviously I gave a little bit of my background, and I've worked with Cloud Pak for Data extensively in the past, but I've been out of the everyday for the last
nine to twelve months, so I'd be really curious to hear how it's been going recently. You covered the 3.5 release already, but maybe we'll start with: what's your favorite new feature that customers can use, especially when we think about this click-to-run solution that we have?
Clarinda: Yeah, definitely, that's a very good point. So let me quickly showcase what would be my favorite capability in Cloud Pak for Data. I think innovation is definitely one of the areas that has been very attractive, and one of the capabilities we're bringing in this release is our Watson Machine Learning Accelerator in the base; frankly speaking, it allows everybody to use deep learning on GPUs.

It makes things much easier for data scientists: it's a distributed deep learning architecture that simplifies the process of training deep learning models across the cluster for faster time to results, with powerful model development tools such as real-time training visualization, runtime monitoring of accuracy, and some of the hyperparameter optimization we just saw in Travis's demo, all for faster model deployment.
So I think this is one of the great capabilities coming in Cloud Pak for Data. One of the other capabilities (still in its early stages with the IBM Research team, but definitely a new, cutting-edge technology and a new concept that I feel everybody should try out) is our federated machine learning capability, which enables multiple organizations to train ML models collaboratively without having to share data. You can imagine what this really means: the driving factors behind it are data privacy, confidentiality regulations, and even the cost of moving the data.

So it's machine learning without moving your data. You might have data on AWS, on IBM Cloud, and on premises, and without moving the data from these locations you can have a centralized aggregator iterate and build, bringing ML to where your data lives. I would say these couple of capabilities are definitely highlights for this release, Clay, from our end, and folks should try them out.
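As a toy illustration of the idea behind federated learning (not IBM's implementation), each party trains on its own data and only model weights travel to a central aggregator, which averages them, as in the classic FedAvg scheme:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One party refines the global weights on its private data (logistic regression)."""
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on local data only
    return w

def federated_round(w, parties):
    """The aggregator averages each party's weights, weighted by data size (FedAvg)."""
    updates = [local_update(w, X, y) for X, y in parties]
    sizes = [len(y) for _, y in parties]
    return np.average(updates, axis=0, weights=sizes)

# Two hypothetical parties, e.g. one data set on AWS and one on premises.
rng = np.random.default_rng(0)
parties = [
    (rng.normal(size=(200, 3)), rng.integers(0, 2, size=200))
    for _ in range(2)
]

w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, parties)  # raw rows never leave their party
print("global model weights:", w)
```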
Clay: I was going to say, those are really, really neat, the federated learning especially. We're going to have to dive more into that at some point, because it sounds really neat and addresses a lot of the data privacy issues that we definitely see in the market.
Clarinda: Definitely. Thank you so much, Clay. It's been amazing to have you on this webinar, and we'll continue our partnership going forward.
Thank you. So, quickly, before going back to the operator demo, there are a couple more capabilities I wanted to cover that are coming in Cloud Pak for Data. One of them is data privacy: many times we have seen the need for a lot of data protection, meaning you sometimes want to de-identify your data for data science.

You want business analytics and testing to be done on the same quality of data that you put into production and train your models with. This is one of the capabilities that's tightly integrated with our Watson Knowledge Catalog, from data sub-setting to data fabrication for end users, and, most importantly, it aligns with our end-to-end governance strategy. You can even use it to provision test data for your models in production with the same level of security; this capability is very useful.
One of the other capabilities I quickly want to highlight is knowledge accelerators. In our governance portfolio we have data quality, data consumption more from a self-service perspective, and data governance, and oftentimes it's important to understand the business vocabulary of your technical data. Building a business vocabulary is more than creating a word list; it takes time to create a usable business vocabulary with definitions and business context. So, to quickly get you up and running, this release we're bringing in the IBM Knowledge Accelerators.

They scale the business vocabulary quickly, out of the box, for industries like healthcare, insurance, financial services, and even energy and utilities. I'll now quickly hand it over to Partha to walk us through a quick demo of the operators that we've built for Cloud Pak for Data on the Red Hat Marketplace. Partha, over to you. Do you want to try sharing?
Partha: I'm going to show you: this is the first time Cloud Pak for Data has adopted the Operator Framework for installation and upgrades, which makes it easier for customers to adopt the platform and get started quickly, and makes installs and upgrades easier. Historically, we have been using a command-line, tool-based installation, and this is the first release where we have adopted the Operator Framework. So in this demo we have the Red Hat Marketplace way of installing onto the cluster.
Here I have registered the OpenShift cluster in this Marketplace console. Let me just show you what the experience is like: when I click on the cluster console, it takes me to the OpenShift cluster, and when that opens up we can go to the software that I have already installed from my Red Hat Marketplace dashboard.

You see all the listings as usual, one of which is IBM Cloud Pak for Data, so you can install the operator directly from this console.

What this does is give you a mechanism to install the operator, pulling it dynamically from the IBM operator catalog.
I just click on "install operator", and it takes me to a page where I can select the OpenShift project that I want to install it in, using the OLM mechanism. Here I select the OpenShift project for the Cloud Pak demo, and the installation starts immediately; in a couple of minutes the operator is installed and ready for use.
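Behind that "install operator" click, Red Hat Marketplace creates the usual OLM objects on the cluster, essentially a Subscription pointing at the operator catalog. A sketch with the Kubernetes Python client follows; the package, channel, and catalog names are assumptions, since the Marketplace generates the real values for you.

```python
from kubernetes import client, config

config.load_kube_config()

# An OLM Subscription asks the catalog to install and keep the operator updated.
subscription = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "Subscription",
    "metadata": {"name": "cpd-operator", "namespace": "cloud-pak-demo"},
    "spec": {
        "name": "cpd-operator",            # package name (assumption)
        "channel": "stable",               # channel (assumption)
        "source": "ibm-operator-catalog",  # catalog source (assumption)
        "sourceNamespace": "openshift-marketplace",
        "installPlanApproval": "Automatic",
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1alpha1",
    namespace="cloud-pak-demo",
    plural="subscriptions",
    body=subscription,
)
```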
Finally, we can go to the OpenShift cluster itself, where we can see the operator getting installed. This is the project where I'm installing the operator, and here you can see that the Cloud Pak for Data operator is getting installed.

As soon as it is installed, it is ready for use, so I'll show you quickly how we can install the control plane directly from this console.
I click on the Cloud Pak for Data record, and in the details I can see all the important services that we have been talking about in this session; all the main services are highlighted here for the customer.

It also links out to the various storage and resource requirements in the IBM Knowledge Center, where the user can look at which resources are required and which security constraints the platform uses. So I'll quickly go and create the control plane, where I need to specify the service name that I'm interested in (the control plane, in technical terms, is called "lite"), specify the storage class, and then just accept the license terms and conditions.
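The form shown in the console ultimately produces a custom resource that the operator reconciles. A rough sketch of creating such a resource programmatically follows; the group, version, kind, and field names below are illustrative assumptions, and the authoritative schema ships with the operator itself.

```python
from kubernetes import client, config

config.load_kube_config()

# Custom resource describing the service to install; "lite" is the control plane.
cpd_service = {
    "apiVersion": "cpd.ibm.com/v1",        # group/version are assumptions
    "kind": "CPDService",                  # kind is an assumption
    "metadata": {"name": "lite-service", "namespace": "cloud-pak-demo"},
    "spec": {
        "serviceName": "lite",             # the control plane service
        "storageClass": "nfs-client",      # pick a storage class on your cluster
        "license": {"accept": True},       # mirrors accepting the license terms
    },
}
client.CustomObjectsApi().create_namespaced_custom_object(
    group="cpd.ibm.com",                   # assumption
    version="v1",
    namespace="cloud-pak-demo",
    plural="cpdservices",                  # assumption
    body=cpd_service,
)
```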
What this does is install the control plane, which basically sets up the Cloud Pak for Data web client, from where end users can get started easily.

So here you can see that we have installed all the important services we listed, namely Watson OpenScale, the Watson Machine Learning service, Db2 Warehouse, and WKC (Watson Knowledge Catalog), the things which Travis showed us earlier. That's all I have to share; thanks, and for any questions feel free to reach out to me.
Karena: Thank you, everybody, and congratulations again on this great new release. As Clarinda just mentioned, look for it on the Red Hat Marketplace on December 10th; I just wanted to reiterate that, because it's very important, and we're very excited about being able to try it out as well. So thank you so much, Clarinda, Travis, Partha, and Clay, for joining us today, and everybody, look for the recording on the OpenShift YouTube channel. Until next time, thank you, everyone.