Description
Open Data Hub: Machine Learning as a Service Platform Deep Dive with Sherard Griffin.
Recorded October 28th, 2019 in San Francisco.
Yeah, all right, we're good to go. Before I dive into this: Dan did an excellent job talking about what it means, from an ethical standpoint, to do AI, and what it means in terms of what we have to think about to make it into a practical application. I'm actually going to take it one step farther and dive a little bit more into the technical part of this. How can you get started?
How can you — and your customers — get started with your AI initiatives? We're going to round that out with something called Open Data Hub. I'm actually going to start with a little bit of background on how we got the name Data Hub. We started this project internally at Red Hat, and the focus was more specifically around aggregating data. That was the primary reason.
The reason I came to Red Hat is my experience dealing with big data, and we knew we wanted to do that on OpenShift and show the capabilities of OpenShift as a platform for data engineering and ingestion. We had the data lake.
All of a sudden, one of my colleagues, Marcel, who heads up the AIOps team, said, "Hey, Sherard, we have all this data. I want to do some data science work on it. Where can I point my data scientists?" And I said, "Well, we've got all these tools for data ingestion."
"No, no, I want to do more of an AI type of thing." So what we decided to do was figure out how to bake AI into what we were doing on top of OpenShift, and that's how we got Open Data Hub. Really, the problem we tried to solve was: how can a data scientist — instead of bugging everyone on my team every time they want to do something — have more of a self-service type of infrastructure?
How can they just go into an environment, request the resources and the technologies that they want, have all of that working in a collective ecosystem, and then get on with their initiatives and be able to get some results out of it? If you look at that whole self-service model, that's one of the big drivers of Open Data Hub: how do we enable the data scientists to do what they need to do in a flexible manner?
It's what we found internally was needed — so, again, I don't get a bunch of ServiceNow tickets and requests for onboarding. But then we also talked to a lot of customers. We've had a little bit of a road show over the past year, where we've talked to many, many different customers, and it turns out that they're interested in the same thing.
These are some of the challenges that data scientists, both internally at Red Hat and at our customers, were facing. One of the big things is that they were all working in their own isolated, one-off environments, whether that's a laptop or a server housed away somewhere, tucked underneath one of their desks. It's very challenging for them to, number one, have a way to share their work — to be able to take some kind of model that they built and say, "Hey, this is really cool. It does something tangible. Why don't you go check it out?"
One of the biggest things is just limited resources. Imagine the environment where you do have that machine tucked under your desk: what happens if you need more hardware? You have to bubble that up the chain. Even if you have traditional IT infrastructure, what happens if you need more hardware? You have to send in a request to IT, IT then has to order the hardware, and the hardware takes a few weeks to come in.
So when we look at why OpenShift was so key and so relevant for us to build this platform on top of, the number one thing that stood out is the fact that it gives us such an easy mechanism to deploy something into production. It's very similar to what an application developer would do: you have these iterations of testing something, and then you want to push that out.
The thing it allows us to do is load-balance these services — you can scale out horizontally, but you can also scale vertically very easily, especially if you integrate OpenShift with something like OpenStack. So what that allowed us to do is: every time a data scientist deploys a model, that model is a microservice, and then we scale it out depending on the demand. I'll show you an example of that shortly.
We also have the ability to orchestrate these machine learning microservices — schedule them for training, deploy the model as a service, do whatever we need to do there — and not only deploy it in one environment. Here's a real-world problem we have today at Red Hat: we have some teams who have infrastructure in Amazon, and some teams who have infrastructure on-premises.
How do we run that same workload and shift all of that work to whatever resources we want, pretty freely, without being tied to the same infrastructure? That's the whole hybrid cloud solution that OpenShift gives us, and we show the capabilities of doing that from a machine learning perspective.
Where is AI strong right now? If you look at a lot of the customers that we're working with, they're starting their AI initiatives. This is not a proof of concept. This is not something that's way out in the ephemeral cloud, where everyone's talking about it but no one's doing it. We actually have real, tangible results being generated using OpenShift and a lot of Red Hat and open-source products. So there are a couple that are interesting right here.
We have ExxonMobil — they'll talk about some of their use cases today — and it's a growing list. Not only that, we're using it internally at Red Hat as well. We generate about 300 gigabytes of data per day that flows through OpenShift and is available to our data scientists, and that's just from our build systems.
We also have a massive amount of telemetry data being generated, and we're doing AI work on it on a daily basis. So the data volumes are growing, and the data scientists are getting more and more capabilities to work with them. That all leads me to the Open Data Hub project. I've mentioned a lot about the experiences that data scientists are looking for, and the experiences that IT is looking for — how does that all get rounded out?
What we decided to do is take all of those lessons learned from running an internal AI-as-a-service platform at Red Hat and surface that as an open-source project: Open Data Hub. It takes all of those little things that we worried about — not just the machine learning aspect, but what comes before that and what comes after. So you'll see a lot of focus on that in Open Data Hub.
I mentioned this at the beginning: data ingestion, collecting data, and how you build the data lake, whether that's a virtual or an actual data lake across many different clouds. Then we focus on how you prepare and massage that data. You do things like cleaning it. If anyone ever tells you, "Hey, the very first time I ingested data, it was perfectly ready for a machine learning exercise," you should probably not put that person on the project again — there's always some work.
There's always work that has to be done from a preparation perspective. Then you're all familiar with the machine learning part: building a model, training it, pushing it out to production. But once you push it out to production, your data scientist can't say, "Hey, cool, I'm done, I can go home now." No — there's a lot of work that happens after it goes into production. How do you monitor it for drift? How do you make sure that that model continues to be accurate over time?
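As a rough illustration of what drift monitoring can mean in practice — this is a hypothetical, stdlib-only sketch, not anything Open Data Hub ships — you could compare a model's accuracy over successive windows of labeled feedback and flag when it drops:

```python
def windowed_accuracy(predicted, actual, window):
    """Accuracy of the model over consecutive windows of samples."""
    accs = []
    for start in range(0, len(predicted), window):
        pred = predicted[start:start + window]
        act = actual[start:start + window]
        correct = sum(p == a for p, a in zip(pred, act))
        accs.append(correct / len(pred))
    return accs

def drifted(accuracies, tolerance=0.10):
    """Flag drift when accuracy falls more than `tolerance` below the first window."""
    baseline = accuracies[0]
    return any(baseline - acc > tolerance for acc in accuracies[1:])
```

In a real deployment you could feed these numbers into Prometheus and alert on them, rather than returning a boolean.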
If you're interested in the Open Data Hub project, there are t-shirts out front — I think they're pretty cool. The funny story about the t-shirts: I realized that we've been handing these out for about a year and a half now, and no one on my team has ever gotten one. So I get to stuff my suitcase full of t-shirts on the way back, just to make sure everyone has some. Open Data Hub is a blueprint architecture.
When we talk about Open Data Hub, I'll show you a little bit about the components that are in there, but I want to level-set in terms of what the vision of Open Data Hub actually is and where we're pulling from. It's not just a one-off collection of technologies; we're actually trying to tie ourselves to upstream communities, and the relevance of that is: as those communities grow, the AI workloads we enable on OpenShift — those capabilities will grow and grow as well.
So we do a lot of partnership with NVIDIA. We have several components that are part of Kubeflow, and we're building stronger integration between Kubeflow and Open Data Hub. We have things with Seldon, PyTorch, Spark — a lot of open-source technologies. We're pulling from those communities and wrapping it all into a nice package that can be delivered on OpenShift.
Now let's get into the meat and potatoes — I'm a meat-and-potatoes kind of guy. What I'm going to show here is just a little bit of what's deployed when you actually go into Open Data Hub. We have a number of different things that are relevant for both the data engineering side and the data science and machine learning side. For data science work you have Jupyter notebooks, and you have Ceph for your data lake, for ingesting data.
You have Kafka — in this case we use Strimzi, which is an operator. We also have Argo, which is great for your pipelines. From a monitoring perspective we have Prometheus and Grafana. We have Seldon for model serving, and we have Spark. This is just what's available today in Open Data Hub; internally, we run a much broader stack. Don't worry about the details of this diagram.
You can go to the website and get more information, but this gives you a little bit more insight into what's running internally at Red Hat and what's being POC'd to move back up into Open Data Hub. One of the things we do is a lot of processing of data as it flows through Kafka, so we do things with Kafka Connect, with KSQL, and with Kafka consumers and producers. We also have Logstash, Fluentd, and rsyslog for data ingestion.
We build the data lake — we just happen to do our data lake on Ceph, but there's also S3 or any other technology you want to use for your data lake. If you look all the way at the top, from an analytics perspective, we do a lot of analysis with Hue — Cloudera's Hue — and we have Kibana as well. Then for the model lifecycle we have Kubeflow, MLflow, and Seldon. We also have something called the AI Library.
The AI Library is a predefined set of machine learning models that you can get up and running really quickly, and those have been built as a kind of community effort. You have things like sentiment analysis and cluster detection — all kinds of interesting things that you can have right out of the gate. And then there's the business intelligence perspective.
We're rolling out Superset pretty soon. Now, going really quickly right before I get to the demo, I just want to go back to what we said before: we're moving from a world where data scientists are on their own machines, or in some isolated environment where they're doing their work, and what OpenShift allows us to do is move them to more of a centralized place to do all that work.
Not only can they share the resources, their models, and their notebooks, but it also makes a nice place where you can actually push that model out into production as a service, and then that service itself can be managed and monitored just as if it were any other application on OpenShift. So with that said, I'm going to roll this demo.
One of the first things I'm going to show you is how to get started with Open Data Hub — again, very easy to do. What we're going to do is go into this thing called OperatorHub. Just a show of hands: how many people have actually played around with OpenShift 4? Okay, a good amount of folks. I'll take a little bit of a step back and explain OperatorHub. We're moving into the world of operators.
You'll hear a lot about them — Diane mentioned it earlier. OperatorHub lets us build out these operators, which are really intelligent ways of managing infrastructure and managing your applications, and we've released Open Data Hub as an operator. You can basically think of it as a meta-operator, where it's responsible for other operators like the Spark operator, Strimzi for Kafka, and Seldon.
These are just the options that I've selected. You can choose different things to deploy, whether you want all of it or just a couple of things — you can change it at will. As I mentioned before, we have Grafana, we have the Spark operator, we have the Strimzi operator, and we have a couple of other things deployed here, like Jupyter notebooks. The example that I'm going to work through today is actually a spam filter, where you're trying to tell legitimate messages from fake messages.
When your data scientist first goes into JupyterHub, they have the ability to select these notebook images. In this case we have several different images, and the image allows us to prepackage resources that we want available to the data scientist right out of the gate. In this case, what you'll see is Spark along with SciPy, but you can add anything — TensorFlow, whatever other types of technologies you want. They have the ability to just select and choose.
We have some predefined ones out of the box, but of course it's a community: if there are others that you want to add, you can always contribute them, or if you have something private that you want to roll out internally, you can do that as well. They can also select the size of the cluster that they want — small, medium, large — and in this case that's specifically for the container running the Jupyter notebooks.
The other thing that happens here — I'm not going to show an example, but you can play around with it in Open Data Hub — is that when I select that I want a Spark cluster, then behind the scenes, when I start my notebook server, you'll actually see a Spark cluster spin up specifically for that data scientist, and that can be whatever size you want.
Anything from just a couple of workers to 10 or 15 workers, whatever you need for your work. And then the cool thing is, once you terminate that notebook and say, "Hey, I'm done with today's work," the Spark cluster cleans up automatically for you, and the hardware goes back into the pool that OpenShift has available.
You can also decide how many GPUs you want to use, and the workloads will actually run on those GPUs. So if you have GPUs enabled in OpenShift, it's as simple as changing this number to whatever you want. There are some other options here that I won't really go into, but if you're interested in them, you can always take a look online.
I've already started a notebook server for this demo, and what I'm going to do is walk through a couple of things. The first thing I'm going to show you is this feature engineering workbook. It's really about preparing the data and making sure that we have training data ready to go for the model. What you'll see here is all this fancy data science — I'm not a data scientist, so I had a data scientist create this.
It makes me look smarter than I am. But he told me this: when you look at the graph, all we really need to know is that blue is legitimate messages and orange is spam. You want to see those diverge, so that you're correctly separating legitimate messages from spam messages. So we're good to go — we know that we have a good training data set that we can use. Now let's move on to the training aspect, so I'm going to go back and select the notebook to train my model.
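The actual notebook isn't reproduced here (it uses Spark and SciPy), but as a hypothetical, stdlib-only stand-in, the core idea of training on labeled messages — learning which tokens lean spam versus legitimate — looks roughly like this:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label in {"spam", "legit"}.
    Returns per-class token counts for a naive-Bayes-style scorer."""
    counts = {"spam": Counter(), "legit": Counter()}
    for text, label in messages:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the class whose tokens make the message most likely (Laplace-smoothed)."""
    vocab = set(counts["spam"]) | set(counts["legit"])
    best, best_score = None, float("-inf")
    for label in counts:
        total = sum(counts[label].values())
        score = 0.0
        for token in text.lower().split():
            score += math.log((counts[label][token] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

# Toy training set (illustrative only):
training = [
    ("win free money now", "spam"),
    ("free prize click now", "spam"),
    ("lunch meeting at noon", "legit"),
    ("notes from the meeting", "legit"),
]
model = train(training)
predict(model, "free money prize")  # -> "spam"
```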
What I'll show you here is that all we're really doing is deciding what is a legitimate message and what's a spam message, and in this case dark blue is great — this lighter color is great as well. That means, again, the messages diverge, so we can clearly tell the legitimate messages from the spam messages. Awesome, we're good to go. Then what I want to do here is play around with this a little bit — let's actually see if we can get just a little bit more information about what's going on.
In this case, I can see that I predicted these messages were legitimate and the actual result was legitimate, with a pretty good accuracy of 94%. I have some that were predicted legitimate but were actually spam — that's only about five percent. I have some that were actually legitimate but that we thought were spam — that's about two percent. And the ones predicted spam that actually were spam: 97 percent. So I'm good — I'm happy with those results.
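Those four percentages come from a confusion matrix. As a small illustration (not the notebook's actual code), the rates can be computed from predicted/actual label pairs like this:

```python
def confusion_rates(predicted, actual, labels=("legit", "spam")):
    """Row-normalized confusion matrix: rates[a][p] is the fraction of
    messages actually labeled `a` that the model predicted as `p`.
    Assumes every label in `labels` occurs at least once in `actual`."""
    rates = {}
    for a in labels:
        total = sum(1 for x in actual if x == a)
        rates[a] = {
            p: sum(1 for pr, ac in zip(predicted, actual)
                   if ac == a and pr == p) / total
            for p in labels
        }
    return rates
```

A small run by hand: with predictions `["legit", "legit", "spam", "spam", "legit"]` against actuals `["legit", "spam", "spam", "legit", "legit"]`, two of the three actually-legitimate messages are predicted legitimate, so `rates["legit"]["legit"]` is 2/3.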
Again, it's not perfect, like Dan said, but it's good enough to start with. Then I'm going to run this next one here and show the accuracies again — everything looks pretty good. So now that I have that, what I want to do is start to deploy this as a service.
As I deploy it as a service, I'm just going to run through all of these cells really quickly — it doesn't take long — and we'll see some results start to come out here. One of the things I actually want to show you is how I deployed it as a service. Let me take a step back here and go into my build configs. I have something called "pipeline", and in "pipeline" what I've actually done is taken the model itself — this model notebook. You'll see this is the actual model notebook that we have here.
We have something called source-to-image, and we built source-to-image to work on notebooks. If your data scientist has a notebook and they want to deploy it as a model-as-a-service, they can quickly run source-to-image on it, just as if it were an application, and deploy it into OpenShift as a running notebook. And you'll see we actually have that running here — let me show you which pod.
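Source-to-image generates the real wrapper for you, but conceptually the deployed pod is just an HTTP service around the model's predict function. A minimal hand-rolled sketch with the standard library — the handler, route, and the trivial `classify` stand-in are all hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify(text):
    # Stand-in for the trained model; the real pod would load the notebook's model.
    return "spam" if "free" in text.lower() else "legit"

class SpamHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body and answer with the model's prediction.
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        result = json.dumps({"prediction": classify(body["message"])}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

# Inside the pod, the service would be started with something like:
# HTTPServer(("", 8080), SpamHandler).serve_forever()
```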
It's the spam filter pod, and we can scale that guy up and down however we want — we can scale it up to 10 different replicas. So with that said, we'll look back at the service notebook, where we just ran a quick test. Now that it's up and running, all I want to do is send it a message, and I'm sending that message to a REST endpoint. I'm saying, "Hey, I'm going to send you something" — "dog food dog food" — and it's detected as spam.
Then I send the second message, and the second message comes back as legitimate — great, everything looked good. Then I can keep going here, and I'm actually going to predict a few more messages. What you'll see is, again, all I'm doing is sending a REST request to the service and getting some results back. You can see these were pretty good — it's pretty accurate. So now that I have that, there are a couple of things I can do here.
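Calling the deployed service is just a plain REST request. Here's a sketch of what the notebook's test cells do — the service URL, route, and payload shape are assumptions for illustration, not the demo's actual API:

```python
import json
from urllib import request

SERVICE_URL = "http://spam-filter:8080/predict"  # hypothetical route

def build_request(url, message):
    """Package a message as a JSON POST for the spam-filter service."""
    payload = json.dumps({"message": message}).encode("utf-8")
    return request.Request(url, data=payload,
                           headers={"Content-Type": "application/json"})

def classify_remote(url, message):
    """Send the message and return the service's prediction."""
    with request.urlopen(build_request(url, message)) as resp:
        return json.loads(resp.read())["prediction"]

# classify_remote(SERVICE_URL, "dog food dog food")
```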
Open Data Hub also deploys with Prometheus. As a data scientist, I've tested this out — that's awesome — but now I want to actually see how it performs over time. That's always an interesting one, because we always think about the deployment as, "Hey, it worked that first time," but we don't ever think about how to test and validate it over time.
So what I want to do here is go back to Prometheus, and I'd show exactly what's happening over time. Let's see if I can remember the metric — hold on one second. I'm not going to have time; it's on my machine, but I'm not going to have time. What I would actually show you is a really cool graph — I wish I could show it to you. Let's see... no, I won't be able to pull that up.
The other thing I want to show is that we also have Grafana hooked into this, and when we go through Grafana we can check out a number of other metrics. In this case, what we have here are some Kafka metrics, and again you'll see everything flowing through. This is from when I started the spam detector this morning, and you can see that some data is actually flowing through the system.
So again, with Open Data Hub you have Prometheus, Grafana, Seldon — all of these different technologies rolled into a nice deployable package for you — and we're going to continue to release more technologies to round out the whole ecosystem of the end-to-end AI pipeline for you. And that's really it. That's the whole project. We're very excited about it, and we're glad to have it as a nice foundational piece of how you can do AI and ML on top of OpenShift.
"The landing page, so people know how to find it?" Oh, yes, yes — I never remember to do that. Of course: always go to opendatahub.io. That's where there's a link to Community, if you want to know how to reach out to us, and also Docs, if you want to know how to get started. Thank you, Diane. Cool — yeah, there you go. Great.