From YouTube: OpenShift Coffee Break: Red Hat OpenShift Data Science
Description
Get your espresso ready for the OpenShift Coffee Break as we talk about Data Science with OpenShift! Our special guest is Max Murakami, Specialist Solution Architect at Red Hat, who will give us a brief walkthrough of Red Hat OpenShift Data Science and a live demo of deploying AI/ML apps on OpenShift.
C
Yes, that's right. OpenShift is of course a great platform for running all kinds of workloads, but what we're seeing these days more and more is organizations developing their own data science and AI workloads, and with OpenShift Data Science we now have an offering to provide to data scientists, for example, so that they can run their own experiments and make use of the capabilities that OpenShift provides as a container platform.
A
Cool. And before getting into more details on today's session, I'll invite you all to grab and share your coffee. Cool, really cool, awesome.
A
Typical you! Great. If you want to meet Shaq, I will invite you to do it next summer. So please, Max, before getting into more details on OpenShift Data Science, I would like to ask you just one question: why should we speak today about data science, about Red Hat OpenShift Data Science? Why is Red Hat entering this domain?
C
Yeah, that's a great point. So, as you know, OpenShift is out there; many people are using it for developing their applications, and it has a lot of great tools for the whole DevOps process, for example, to really get the productivity out of the developers and make it really easy to develop applications and put them into production in a fast and consistent way. Now, with the AI applications that we're seeing, much of that of course still belongs to the domain of classical application development, I would say. But what we're seeing more and more are data scientists producing what we call machine learning models, which become basically a new kind of business logic driving these kinds of intelligent applications. Think about image classification, which we're actually going to have a look at later, or in general really trying to express the business logic of the applications themselves. And OpenShift Data Science is one new offering that we at Red Hat are developing to really enable the more data-science- and machine-learning-specific parts of the whole development of machine learning and intelligent applications out there.
A
Okay, cool. Hey, there are a lot of topics we can start to speak about, starting from what you already told us. And I don't know, Andre and Natalie, if you have some questions; otherwise we can probably leave the stage to Max. I don't know what you're going to present today, a demo presentation?
C
Yeah, so I've brought two very quick demos with me. We can start off in a few minutes by having a look at what OpenShift Data Science looks like and how we can all use the Developer Sandbox out there together; then maybe I'll give a bit of a broader picture of where Red Hat OpenShift Data Science fits in, a little bit like what we just discussed; and then at the end we'll have a look at deploying an intelligent application based on a machine learning model, using OpenShift Data Science and OpenShift capabilities.
A
Yeah, show us the demo! So yeah, please, Max, I'll leave you the stage. Please share your screen. Okay.
A
And I hope you don't mind if sometimes we interrupt you to ask some questions. But please, over to you.
C
Okay, so I was thinking let's start in the Developer portal. I think folks should really be familiar with it, and maybe you've already noticed, if you have a look at the different products out there, that we're now seeing OpenShift Data Science. By the way, maybe I should make that a little bit larger. Okay.
C
Cool. So what is OpenShift Data Science about? We can have a look at that, and we actually have a Developer Sandbox out there where people can have a quick look and already try out OpenShift Data Science. They don't have to pay anything for that; all they need is a Red Hat account, which they can get for free. So let's just try to do this: try OpenShift Data Science in the Sandbox. And there's already quite some...
A
Content, yes. Sorry for interrupting, but that's a great point: anyone who wants to start to play, to experiment, or just to get some more details about Red Hat OpenShift Data Science can do it there.
C
Exactly. So there's a bunch of learning paths out there, and documentation to really get familiar with using it, maybe also some data science scenarios already. So what we're just trying to do is open the Data Science Sandbox, and that's it, right? So we are in the console of OpenShift Data Science. You can see that there; maybe...
C
Let me make that a little bit bigger here. So there are different kinds of applications in OpenShift Data Science. On the left-hand side you're seeing three different tabs. Under enabled applications, JupyterHub is enabled by default; we'll have a closer look at that in a minute. Then you can check out all the applications that can be enabled by the administrator of OpenShift Data Science, and you will find some of the products out there like OpenShift API Management and Streams for Apache Kafka.
C
Those are the managed versions of our products, which integrate really nicely into the whole intelligent-application use case. And then there's a bunch of other components from third-party ISVs that we collaborate with, like IBM or Anaconda or Intel; they are fully supported by the ISV and by us, and they address different kinds of use cases. And then...
C
Lastly, we have the ability to check out all kinds of resources: various documentation, tutorials, and interactive walkthroughs about all of the components that I've just shown you. So you can really have a closer look, and maybe even run an interactive quick start if you want to explore some of those.
C
Exactly, exactly. Okay, yeah. So this is basically a software-as-a-service offering right now. In the initial stage we're talking about a managed service on top of AWS.
C
Actually, I have to mention that this is still in field-trial mode, so it's not fully GA yet, which also means that customers can just use this for free until it's in full GA.
C
Yeah, so you're right, we're talking about Jupyter notebooks here. I would say the vast majority of data scientists are already really familiar with how to use Jupyter notebooks, and that's where most of the data science work actually happens. So I will give you a short insight into what this looks like, how I'd work in a Jupyter notebook, in a second.
C
You know, how to use JupyterHub, how to use Jupyter notebooks. The main idea here, and what really makes this great in terms of OpenShift Data Science and the JupyterHub that is provided as part of it, is that I, as a data scientist, for example, don't need to worry about where to run my Jupyter notebooks, how to provision the workspaces around them, or maybe even how to get a good base image for running my experiments.
C
So those are actually quite important points, I would say.
A
Jupyter as a service could be a great use case for applying Red Hat OpenShift Data Science: if a company wants to provide Jupyter as a service for its data scientists and data engineers, without all the effort of the ops part of managing everything, that could be a perfect use case.
C
Yes, exactly: really taking advantage of the whole management of the platform, in terms of OpenShift and in terms of the data science platform. So yeah, it's a really great way to get into the data science work straight away. Okay, cool. So let's maybe now continue here. Let's say I'm a data scientist and I just want quick access to a Jupyter notebook where I can run some of my experiments.
C
So
this
will
be
enough
and
I
have
the
option
to
add
any
kind
of
environment
variables
to
my
workspace
environments.
So
this
is
particularly
useful
if
you
want
to
inject
credentials
right
or
or
endpoint
information
like
if
you
want
to
integrate
kafka
or
s3
storage,
which
I'm
actually
going
to
show
you
later
in
the
second
demo.
So
for
now,
let's
just
provision
this
just
starting
the
server
and
wait
a
couple
of
seconds
until
this
has
been
provisioned
for
me.
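(For context, here is a minimal sketch of how such injected environment variables might then be consumed inside a notebook; the variable names and the use of boto3 are illustrative assumptions, not the demo's actual configuration.)

```python
import os

import boto3  # assumed to be available in the notebook image

# Hypothetical variable names; whatever you type into the spawner
# form shows up in the notebook's environment like this.
s3 = boto3.client(
    "s3",
    endpoint_url=os.environ["S3_ENDPOINT_URL"],
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)
kafka_bootstrap = os.environ["KAFKA_BOOTSTRAP_SERVER"]
```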
C
So
again,
I
don't
really
have
to
worry
about.
You
know
about
the
infrastructure,
the
whole
provisioning
part
which
in
the
older
days,
might
have
taken
actually
hours
or
days
until
this
is
fully
provisioned
on
top
of
a
virtual
machine.
But
here
I
am
have
a
containerized
workspace
just
for
myself.
Basically,.
C
Here I am. I now have my own workspace; it's a blank workspace. I can either start writing a Jupyter notebook from scratch, or, of course, as I said, I might already have some project with some of my colleagues in a Git repository.
C
So
it
definitely
makes
sense
for
me
to
check
out
and
clone
a
git
repository
that
already
contains
yeah
some
jupiter
notebooks
that
I
might
have
worked
on
before
so
I've
had
this
one
and
then
it's
cloning
that.
C
And
yeah
I
have
this
checked
out.
I
can
now
go
into
this
and
have
a
jupyter
notebook
that
I've
prepared
before
so
very
simple
one,
and-
and
this
is
what
it
looks
like
so
for
people
who
might
not
be
so
familiar
with.
This
is
basically
a
way
to
interactively,
execute
python
code
and
have
a
really
nice
formatted
documentation,
alongside
that,
have
the
ability
to
run
really
pieces
of
or
blocks
of
code
in
a
very
interactive
fashion
right.
C
So
I
would
then
start
off
importing
my
dependencies
building
a
very
simple
model
out
there
and
then
doing
the
actual
experimentation
work
by
training.
My
my
very
simple
model
out
there
right,
so
I
can
just
see
in
real
time
this
is
being
executed.
I'm
seeing
all
the
results
in
terms
of
of
the
performance
here.
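(As a rough illustration of the kind of cell being executed here: a minimal Keras training loop, assuming TensorFlow. The dataset and architecture are illustrative, not the notebook's actual contents.)

```python
import tensorflow as tf

# Toy data and a deliberately tiny model, stand-ins for the demo's.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Each epoch prints loss and accuracy: the "results in terms of
# performance" you watch scroll by in the notebook.
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```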
C
Is that better? Great, okay. Yeah, and that's it. As you've seen, I could really go, in basically two clicks, as a data scientist, into my own workspace, which already includes the dependencies in terms of TensorFlow, and really quickly continue the data science work that I'd done in those Jupyter notebooks before.
A
I'd say that if we try to draw a sort of DevOps workflow for AI/ML development, this is that part. You mentioned the DevOps part: instead of loading the source code from somewhere every time, there is a source of truth, there's Git, the Git part. And then...
A
One more thing that makes me think about standardization and governance in terms of libraries and releases, which is a very important topic for AI/ML development too: it goes back to when you showed us how to create the environment and which kind of framework to involve there; the release is already pinned at that point.
C
Absolutely, yeah. And that's one of the very good reasons, I would say, why it definitely makes sense to talk about a centralized data science platform in the first place. Data scientists might be very comfortable and familiar with running their own environment on their own local machine, which is great to get started quickly. But at some point, especially if you have a larger team and you collaborate on particular experiments and analyses, you might end up with inconsistent versioning of the dependencies.
C
So
everyone
has
a
little
bit
different
kinds
of
versions
of
you
know,
tensorflow
or
maybe
other
packages,
and
at
some
point
that
that
might
even
lead
to
people
not
being
able
to
reproduce
each
other's
work.
And
you
can
imagine
that,
especially
in
the
data
science
domain.
This
is
super
critical
to
really
ensure
that
the
model
that
you're
producing
is
actually
doing
the
right
thing
in
the
first
place.
C
And maybe, if you want to talk about the broader picture here, I've prepared a few slides just to showcase where all of this fits in.
C
What I like to talk about with customers or partners is this kind of AI journey, as I see it. Of course every organization has its own unique use cases, but on an abstract level, when it comes to technology, what we're seeing is a kind of pattern, an evolution if you will, of typical use cases that have to be dealt with and considered. I would say the very first stage for an organization, before they have even tapped into AI...
C
...is the realization that they own a large amount of data. Everyone nowadays has some data in one form or another, and then the idea comes: well, we should really tap into this data to improve our decision making on the business side. And then you have a discussion about the technical challenges out there.
C
One major theme, I would say, without diving too deep into this, is taking care of the data infrastructure: infrastructure which enables you to tap all your different data sources out there and integrate them, so that your business analysts can get a kind of unified view of the business data.
C
Based on that, I would say, comes the second stage, where the organization is able to tap into the data but wants to get even more insight out of it, in terms of hidden statistical patterns. That's when the first data scientists come into play: data scientists are really great at uncovering statistical patterns. And then we have another set of technical challenges.
C
Then the third stage, based on that, is when the organization says: oh great, we have now found some patterns in our data; let's now try to develop some machine learning models, so that our machines can automatically detect those kinds of patterns in the data that we have, or maybe even in data that is coming in; to operationalize, basically, the whole machine learning thing.
C
Again, one challenge and one major theme is the machine learning training part: making sure you can really optimize it in terms of automation and efficiency, so that the data scientists and other folks are not spending too much time doing manual work here. And finally, just to make the picture complete, I would say the end state is the intelligent application.
C
So we then have the machine learning model as a central piece of business logic, running inside the intelligent application, where it enriches and interprets the data coming in in real time. That could be data from end users (think about classical web applications), or it could even be data coming in from edge locations, from sensors, from IoT devices. We're now seeing growing numbers of those kinds of devices in various locations outside of the data center, producing massive amounts of data.
C
So,
and
it
really
makes
sense
to
process
this
data
as
close
to
the
data
sources
as
possible
and
to
really
extract
and
extract
the
meaningful
information
from
the
business
or
for
the
business
out
of
this
data
stream.
So
just
to
draw
this
picture
of
how
you
can
go
through
as
an
organization
through
different
steps
and
what
kind
of
technical
challenges
are
arising
once
you
try
to
tackle
the
whole
machine
learning
life
cycle.
Basically,
so
I
would
say
there
are
two
takeaways
here
and
from
this
kind
of
discussion.
On
the
one
hand,
side.
C
What I want to show is that when we talk about AI projects, we're actually talking, if you take a closer look, about a vast range of different technical use cases which you should address: starting with the data infrastructure, then doing the data science, the machine learning delivery, and finally the whole monitoring in production, the whole integration part. And the second thing is: where do we as Red Hat come into play? Well, we...
A
Max, this aspect, I think, is one of the most neglected when we speak about ML projects, because the focus is always on creating the model, which does require a lot of effort; but then there's the path from the experimentation phase onwards. I created the model: okay, cool, my convolutional neural network for image recognition, great. Then I need to make it a real-life application.
A
I need to manage not only day zero but also day one and day two: patching, modifications, the same things as in traditional application development. So it's great that this platform comes to help and provides tools not only for the data science part, but also for managing the model in production, during the production phase of the application which is going to leverage that model.
C
Exactly. You've touched on some of the use cases and some of the things that you really have to consider. What we're actually seeing with organizations that are new to this domain of data science and machine learning is that they tend to focus very much on doing the data science part, you know, developing the model; but then it becomes a really big challenge to translate that into a productive system, because you actually have a lot of other concerns to tackle.
C
How do we actually serve that model? How do we make sure that all of these processes are automated? How do you make sure that you can audit this and have proper governance? How do you monitor this in production? Those are concerns which, at a certain level, remind us of the concerns of application development, which are being tackled by what we would now call DevOps and GitOps.
C
That's when we talk about MLOps, which is basically a translation of the established DevOps practices into the domain of machine learning applications. So again, without going too much into detail here, I've tried to sketch a typical ML workflow, which at a certain level reminds us a little bit of the application or DevOps workflow, I would say: first making sure that the problem you want to tackle is defined and you know what the success criteria are; then comes the whole development.
C
Then the validation part, which is a little bit similar to software testing; then basically the delivery into production and the integration into the larger system; and finally making sure that what is going on in production is also monitored as such. And just like the DevOps workflow, it turns out that the MLOps workflow is very iterative in nature: you don't really have a straight line; depending on the validation steps, you might have to reiterate from certain earlier points in your machine learning life cycle. All of this is really abstract, so if we now want to talk a little bit more about the technology, what I just want to show you, on a very high level and again without going into detail, is a kind of MLOps suite...
C
If
you
will
so
because
we've
seen
earlier,
there
are
many
different
technical
challenges
that
you
have
to
address
in
your
end-to-end
machine
learning
life
cycle.
So
there's
not
going
to
be
a
single
software
that
is
going
to
address
all
of
that.
So
we
definitely
have
to
take
a
look
at
different
kinds
of
software
technologies
for
different
kind
of
use,
cases
that
arise
right
so
so
talking
about
the
data
infrastructure,
on
the
one
hand,
side
going
to
the
whole.
C
...development and experimentation part, in terms of the Jupyter that we've seen before; then again the continuous integration and continuous delivery; and finally the production monitoring part. Folks out there who are familiar with OpenShift will immediately spot that some of the components are already part of OpenShift, in terms of OpenShift Pipelines and OpenShift GitOps with Argo CD, for example. So again, that's one piece of evidence...
C
I
would
say
that
that
the
classical
devops
tools
which
we
already
have
in
terms
of
the
devops
process
and
what
we
can
leverage
in
terms
of
making
that
devops
really
work
efficiently
based
on
those
kinds
of
of
tools,
are
really
nicely
fitted
to
also
a
large
part
of
what
we're
seeing
in
terms
of
machine
learning,
development
and
machine
learning
deployment
out
there.
C
And what we are then doing as Red Hat is basically having a vision of this as a suite, if you will, all based on open source tools of course, and then trying to gather, collect, and curate all of those different components that might be really worthwhile to have a look at for data science projects. And that's when we start talking about Open Data Hub.
C
Open
data
hub
is
an
open
source
community
projects
which
yeah
we
as
red
hat,
are
maintaining
and
the
goal
is
to
collect
all
those
open
source
tools
which
we
find.
For
example,
many
customers
or
partners
are
really
enjoying
using
that
and
to
really
consolidate
and
make
sure
that
those
are
really
nicely
integrated
and,
from
a
technical
perspective,
are
easily
deployable
on
top
of
openshift
right.
C
So we're talking about an OpenShift Operator here, which is being developed as part of Open Data Hub and which makes it really easy for people out there to compose their own machine learning suite, if you will, based on the particular requirements that they have, and to very easily deploy it. We're talking about an operator which deploys other operators based on the components you have chosen, and in the end you have a system which you can directly use on top of your own self-managed OpenShift cluster, basically.
A
So there's a big value here that you are showing which I would like to highlight: Open Data Hub is not just a collection of technologies for the AI/ML domain; it's an effort from Red Hat to provide customers or partners with a ready-to-use...
A
I
could
say
tools
where
all
the
all
the
components
you
mentioned
already
tested,
fully
integrated
and
provided
to
the
end
users
as
seo,
so
instead
of
just
mix
and
match,
or
let
people
taking
all
this
very
valuable
frameworks
around
and
trying
to
figure
out,
which
is
the
best
release,
which
is
the
one
that
fits
more
for
me.
This
is
already
fully
integrated
and
tested
for
for
the
end
user.
A
Is
that
something,
I
think
is
of
great
value
behind
of
the
ship
that
I
have
as
as
always
behind
the
the
fully
open
source
solution
provided
by
by
radar.
C
Exactly, exactly. Think about Linux: we have the same approach when we talk about what we're doing in the Linux space, again consolidating many different upstream community projects, integrating, maintaining, and curating them in terms of the Fedora project. So in this sense you can say Open Data Hub is a little bit similar to Fedora, where we as Red Hat, our engineers and quality engineers, then finally build a product out of it: Red Hat Enterprise Linux, as a commercial offering to our enterprise customers. And that's actually a similar approach to the one we're taking here.
A
And we could say that, without mentioning the end customers, we already have some companies using Open Data Hub in a sort of production environment. So yes, to some extent it's similar to Fedora, but it can be used, fully open source, also for enterprise-grade projects. Absolutely.
C
Yeah, so we have basically a suite of open source tools, some of which are already massively used by data scientists out there, in terms of TensorFlow, PyTorch, and Jupyter notebooks, but also other tools which take care more of the integration and delivery part. And yeah, definitely: if you have your own production environment, you can also deploy Open Data Hub and quickly get access to this kind of technology.
C
Maybe there are, for certain use cases, I would say, different support requirements when it comes to production support: a development platform for data scientists might have a little bit different criticality than a real production environment for real-time application delivery.
C
So you can definitely take advantage of Open Data Hub to quickly get started and make sure that you can use the different capabilities delivered as part of Open Data Hub for your own users. And one of the other projects that we are heavily involved in is what we call Operate First.
C
Finally,
a
products
productized
service
offering
such
as
openshift
data
science
right
so
especially
now
that
we're
seeing
more
and
more
complex
software,
the
difficulty
really
becomes
not
not
only
understanding
what
single
entities
in
terms
of
code
mean,
but
really,
how
do
you
operate
and
integrate
and
set
up
all
those
different
components
and
orchestrate
that
in
an
environment
with
a
kind
of
production
like
scenario
I
would
say
so.
This
might
be
really
interesting
for
for
some
of
you
guys
out
there.
A
That topic would be a good one for, let's see, a next session.
C
So we have that as a managed service right now. Just on a very high level, to give you an idea of what this is about: as I said before, it's a managed service on top of AWS, as an add-on to OpenShift Dedicated and ROSA for now. It of course comes with core components in terms of the JupyterHub that we saw earlier; it also comes with integrations with other OpenShift services, like the managed services or, of course, services that are already part of OpenShift, as well as a number of ISV components.
C
So
growing
number
at
that
which
address
certain
parts
of
the
whole
machine
learning
life
cycle
right
again,
talking
about
intel,
ibm,
anaconda
and
more
to
come.
Definitely
so
really
giving
also
customers
the
opportunity
to
pick
particular
supported
components
for
addressing
specific
use
cases
out
there.
B
Okay, I have a question, Max. Do you think we can link... I see there that the OpenShift Data Science core is JupyterHub, PyTorch, TensorFlow, right? So how can we link, for instance, Kafka (OpenShift Streams, managed Kafka) to OpenShift Data Science? And then maybe we can link OpenShift Data Science to, I don't know, some Tekton pipeline somewhere, the MLOps? How do we link the MLOps side to the OpenShift Data Science core?
C
Yeah, so actually the integration with Kafka is something that I'm going to showcase in a minute.
C
But if you talk about the whole machine learning life cycle in terms of the intelligent application, there's definitely overlap with the concerns of typical applications out there when it comes to middleware. That's where we can see Apache Kafka coming into play, in terms of building an event-driven architecture, or where we can see OpenShift API Management coming into play, where it's really more about the governance of the public API that organizations may want to expose. And the Tekton piece is interesting.
C
We know that Tekton is, of course, also productized in terms of OpenShift Pipelines, but there's a very interesting integration when it comes to really training and validating machine learning models, where we can also leverage Tekton. This is actually something on the roadmap which we will be able to deliver as part of OpenShift Data Science later on, maybe even this year; but it's something that you can already do in terms of Open Data Hub. So we have this integration already.
C
We
have
kubeflow
they're
great
use
cases
for
really
automating
the
whole
delivery
thing
right
without
really
yeah
having
to
rely
on
manual
processes
or
or
baking
everything
into
a
git
repository.
So
maybe
really
interesting
thing
to
showcase
at
some
point
later
on,
as
well.
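(To make the automation idea concrete: a minimal sketch of how such a training-and-validation flow could be expressed with the Kubeflow Pipelines v1 Python SDK. The image names and commands are illustrative assumptions, not part of the demo.)

```python
import kfp
from kfp import dsl


@dsl.pipeline(name="train-and-validate",
              description="Toy two-step ML delivery flow")
def train_and_validate():
    # Hypothetical container images; each step runs as its own pod.
    train = dsl.ContainerOp(
        name="train",
        image="registry.example.com/train:latest",
        command=["python", "train.py"],
    )
    validate = dsl.ContainerOp(
        name="validate",
        image="registry.example.com/validate:latest",
        command=["python", "validate.py"],
    )
    validate.after(train)  # only validate once training has finished


if __name__ == "__main__":
    # Compile to an archive that can be uploaded to a Kubeflow instance.
    kfp.compiler.Compiler().compile(train_and_validate, "pipeline.yaml")
```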
C
Sure. Awesome, all right. Then, as I said, I have prepared this demo here, which is basically based on the open object detection workshop which colleagues have prepared before. So, no, let me go back: folks out there can also just follow along and find some more details and documentation on the different steps there.
C
The basic idea of the setting that I'm going to show you now is, I would say, a kind of hello-world example in terms of computer vision. We're talking about an object detection system which is running maybe somewhere on premise, maybe in an edge location (let's say in a shop or a warehouse or a factory somewhere) where there's a camera, and we want to automatically track the objects that are there, just to understand what's going on. So, in terms of the architecture...
C
...in this very simple example we're actually going to use the same Git repository for the Jupyter notebooks, for the models that are being produced, and for the glue code, if you will, for serving the model and doing the Kafka integration. So again, a very simple example, just to see that it's very easy to get going with this, and we'll finally rely on the source-to-image capabilities of OpenShift to basically convert the Git repository into a running container.
A
Thanks, Max. While this is a very simple example, at the same time you're drawing a very complex use case. Let's think about a typical edge use case, where you have your enterprise data center, which could obviously be in the public cloud, and then you have your edge, the company's edge services or edge applications, which use services like image recognition. So you can develop and train your machine learning model in the enterprise data center and then deploy everything in a standard way, using the same platform, which is always OpenShift.
C
Exactly. So again, one of the great advantages of OpenShift is really the possibility to run the same workload in various different modalities: various public cloud scenarios, on-premise, and really edge; and to take advantage of that for producing and developing an application in one particular location, but then having the possibility to deploy it in all the different modalities that make sense for your particular use case. So, of course, the data source is an important aspect...
C
In
terms
of
this,
you
know
hybrid
cloud
scenario,
where
you're
doing
something,
maybe
in
the
public
cloud
and
then
consuming
that
in
an
on-premise
or
even
edge
scenario,
I
would
say
so:
yeah
cool,
that's
a
very
good
point.
C
All right then, let's dive in. I have another OpenShift Data Science instance here which we're going to use. I'm trying to make this real quick because we don't have that much time left. So again, I'm choosing a TensorFlow image here, and you already see that there's a number of environment variables, because I'm going to integrate with Kafka and with S3 storage; and I'm then going to provision this Jupyter notebook environment.
C
So the server is now there, it's been provisioned, and I'll automatically be redirected. Okay, and just as before, maybe making that a little bit larger, I'm going to clone a Git repository which already contains the notebooks that we're going to use.
C
So I'm going to work with the different Git repositories now. This is the main one for understanding the model itself; we'll have a look at that in a minute. Just because this is taking some time, let me already switch to the other cluster. This is the kind of on-premise cluster simulation that I have; it's a different OpenShift cluster, but anyway, somewhere we can already start deploying our application. So again, I'm making sure to use the source-to-image builder, based just on the Git repository.
C
That's the same repository that I just cloned in the JupyterHub environment; it's detecting a Python service over there. Let's check that the names are correct over here. Okay, I want to create a route to it, so let's create that.
C
I'm doing this now because, as you know, it will automatically clone that repository and start the whole source-to-image build of the container image, and then finally the deployment of the service that is already defined in that Git repository. I will actually do the same with the other component, which is the front-end component.
C
Exactly, yeah. So these are basically already two of the components of my intelligent application, which we'll have a look at once they're deployed, and I'm also going to deploy the last one, which is actually a Kafka client: the one that takes messages from the Kafka topic and then does the actual classification on top of them. If you will, it's a different version of that first component, which runs not as a REST HTTP server but instead as a Kafka consumer. So that should be enough.
C
I will just have a look at one of the Jupyter notebooks, which gives you a clue of what the model really is and how it's being used. So again, interactively stepping through those blocks of code: first of all, of course, importing the dependencies; making sure that we download the data that is needed (really dummy data here at this point) from our S3 storage; and then loading the model that we have here.
C
The model is already part of the Git repository, so we can directly load it straight away here and then already do the object detection on this image. This takes a few seconds for TensorFlow to load the saved model, and now the object detection actually runs. We're talking about numerical values here; let's see how we can get a nice visual representation of them.
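(For readers following along, the loading-and-inference cells roughly correspond to the sketch below; the paths, file name, and output-signature details are assumptions in the style of common TensorFlow detection models, not the workshop's exact code.)

```python
import tensorflow as tf

# Load the SavedModel shipped inside the cloned repository.
model = tf.saved_model.load("model/")              # path is an assumption
detect = model.signatures["serving_default"]       # usual default signature

# Read one test image and add a batch dimension.
image = tf.io.decode_jpeg(tf.io.read_file("dogs.jpg"))  # hypothetical file
batch = tf.expand_dims(image, axis=0)

result = detect(batch)
# The raw output is purely numerical: boxes, class labels, scores.
print({name: tensor.shape for name, tensor in result.items()})
```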
C
So, with some helper functions that we have defined here, let's display an image: it's the same image, now with the bounding boxes that the model has detected for certain objects. There are small labels which are too small to read, but it says dog here, dog here, and also, interestingly, footwear for some of the paws.
C
So we can finally do some filtering for where the model says it's very certain that there's an object, and then it says that, okay, it's certain that it's detecting one dog in this area and one dog in the other.
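(The confidence-filtering step might look roughly like this; the field names follow the common TensorFlow detection output convention used in the sketch above and are assumptions about the workshop model.)

```python
threshold = 0.8  # keep only detections the model is quite sure about

boxes = result["detection_boxes"][0].numpy()
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy()

keep = scores >= threshold
for box, score, cls in zip(boxes[keep], scores[keep], classes[keep]):
    print(f"class={cls}  score={score:.2f}  box={box}")
```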
C
On the other hand, this can also work as a real joint collaboration within the very same notebook, where the application developer then sets up the actual code which is invoked in the service running in production: how to actually call prediction on particular images, and how to formulate and define that in a web server.
C
Here we're relying on a Flask server, in terms of a Python server, and everything that source-to-image needs to build a running container out of it is already provided in the very same repository. So this one repository, really very simply, condenses everything that is needed from the data science perspective as well as the concerns of the actual running intelligent application.
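(As a sketch of the kind of Flask "glue code" described here: one endpoint that accepts an image and returns the model's detections. The route, payload format, and helper names are illustrative assumptions, not the workshop's actual service.)

```python
import base64
import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)
model = tf.saved_model.load("model/")          # same SavedModel as above
detect = model.signatures["serving_default"]


@app.route("/predictions", methods=["POST"])
def predictions():
    # Expect a JSON body like {"image": "<base64-encoded JPEG>"}.
    payload = request.get_json()
    img = Image.open(io.BytesIO(base64.b64decode(payload["image"])))
    batch = np.expand_dims(np.array(img), axis=0)
    result = detect(tf.constant(batch))
    return jsonify({
        "scores": result["detection_scores"][0].numpy().tolist(),
        "classes": result["detection_classes"][0].numpy().tolist(),
    })


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```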
A
So we have our data scientists, who, in a previous phase that we didn't show today for the sake of brevity, worked on training the model. Then we had our developers, the engineers, who worked on realizing the final application, and then, thanks to source-to-image, we moved this real-life application into production. And I think it's worth mentioning that everything could be automated: all the manual steps you did could obviously be automated through any kind of workflow.
C
Yes, exactly. So what I'm doing right now is adding the missing piece here, in terms of the Kafka integration for the intelligent application. What I prepared beforehand was a secret which contains the credentials and the endpoint of the Kafka topic that I set up before; I'm now making that accessible as environment variables to my components, the front-end application as well as the back-end Kafka client.
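(A sketch of what that back-end Kafka consumer could look like in Python; the environment variable names, the use of the kafka-python package, and the SASL settings are assumptions, not the workshop's exact code.)

```python
import os

from kafka import KafkaConsumer  # kafka-python, assumed available

# Credentials and endpoint come from the secret, surfaced as env vars.
consumer = KafkaConsumer(
    os.environ["KAFKA_TOPIC"],
    bootstrap_servers=os.environ["KAFKA_BOOTSTRAP_SERVER"],
    security_protocol="SASL_SSL",
    sasl_mechanism="PLAIN",
    sasl_plain_username=os.environ["KAFKA_USERNAME"],
    sasl_plain_password=os.environ["KAFKA_PASSWORD"],
)

for message in consumer:
    frame = message.value  # raw image bytes from the camera stream
    # run_detection() is a placeholder for the same model-invocation
    # code the HTTP variant uses on each frame.
    # detections = run_detection(frame)
```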
C
So I want to be able to detect objects in some kind of visual scene, for example this picture. What I'm going to do now is take this picture and see whether the model is actually able to classify it. And yeah, it makes sense: it just detected a car, maybe even multiple times, a wheel of a car, a person, clothing.
C
It's
not
perfect
to
be
sure,
but
definitely
something
where
we
can
already
start
playing
around
in
terms
of
real
object
detection
over
here
so
and
then
the
missing
piece
is
available
once
once
the
kafka
consumer
is
up
and
running,
but
already
we
we
already
have
some
kind
of
you
know
hello,
world
application,
which
is
able
to
detect
objects
in
a
visual
scene
based
on
some
camera
input
and
and
return
that
information
to
me.
C
So
I
can
then
monitor
what's
going
on,
what's
what's
being
detected
and
maybe
I
can
even
use
the
downstream
for
some
other
application
logic.
So,
finally,
the
the
kafka
client
is
up.
So
let's
try
to
see
whether
the
the
final
use
case
is
working
in
terms
of
the
kafka
integration.
Now
we
should
be
able
to
track
objects
and
in
real
time.
B
Okay, but what should we see here?
C
Yeah, so again, as I told you, we're talking about an event-driven application. A stream of data is coming in from the data source, which is my camera in this particular case; all the frames of the video sequence are being passed to the object detection model, the object detection is applied to those frames, and we should be able to see the detections, just as we did for the single picture before.
A
I think, because we are running up to the hour: there is a question from Hamad Karim Ahmad (thanks for joining!) regarding the replacement of the IBM Kabanero framework. I don't think it's part of Open Data Hub anyway, or of OpenShift Data Science, but do you have any insight about this replacement? Just if you have it; it's fine if you don't.
C
Yeah, I'm sorry, I'm not too deep into IBM Kabanero; it's nothing that we would provide in terms of Open Data Hub. Sorry, I really can't say much about this one in particular. Okay.
B
Yeah, it looks like, if you go to the website, they say that the new developments are based on Argo and OpenShift GitOps. So it's probably something around deploying apps with Tekton and that kind of stuff, so yeah.
B
The
as
max
was
showing
tacton
argo
cd.
Those
are
the
tool,
also
used
for
doing,
implement
the
analogues,
workflow
workflow
so
and
you
can
connect
to
red
dot,
openshift,
dataset
and
very
cool.
You
can
use
also
kafka
max.
I
was
wondering
if
we
can
use
also
the
managed
kafka
here.
If
we
can
connect
ods
to
manage
kafka.
C
Due to time reasons I couldn't really go into this, but you can definitely make use of the managed OpenShift Streams for Apache Kafka. Again, you'll find the details inside the workshop documents, and that's of course one of the great use cases for getting this up and running without really worrying about the infrastructure piece.
C
Yeah, exactly. So, based on this kind of schematic: I showed you the data science cluster as part of Red Hat OpenShift Data Science, and then I had provisioned another, separate OpenShift cluster earlier on. That one is also running in the public cloud, but you can easily imagine it running on premise somewhere, maybe in an edge location, and the same thing would work there, because once you have your workload running on OpenShift, it can easily run on on-premise OpenShift as well.
C
Exactly, yeah. And the very final thing that I wanted to say here: if you folks want to get your hands on the technology itself, here are the URLs for the Red Hat Sandbox, with interactive learning paths that you can take, as well as for the Operate First community. So please check those out, join the community, and try Open Data Hub in a community context, where it's more about really learning and exchanging ideas about data science, open source development, and SRE practices.
D
Well, first of all, thank you for the excellent presentation and demo. I got the concept, and it was actually very interesting to see all the pieces put together.
D
In
your
case,
obviously,
you
had
a
demonstrator,
so
you
you're
using
a
an
app
to
share
the
you
know
to
get
the
the
value
from
the
camera
in
english
solution.
This
would
be
some
sensor
somewhere
else.
Your
app
may
have
actually
been
running
on
openshift
somewhere
else,
getting
getting
all
the
streams
from
kafka
and
the
decisions
from
the
front
front
for
the
algorithms
and
putting
it
all
together
in
the
visualization
with
that
it
was
actually
excellent.
It
may
be
just
a
small
example,
but
it
shows
all
the
components
that
you
need.
D
So my question would be: in a more complex situation we have multiple algorithms, and they may not necessarily all be deployed in one go, in one single container; they may be considered as different microservices in a sort of mesh. Is there a concept like that?
C
Yeah, exactly. What we're seeing in real use cases is, of course, massive amounts of data being processed, and there are certain ways to classify those kinds of objects or enrich that kind of information; but on the same stream of data, maybe even for another business purpose, you may also want to identify other kinds of information. So what we're seeing more and more is that not only single machine learning models get deployed, but really a number of them.
C
So,
of
course
you
can
you
can
try
and
do
the
integration
yourself
in
terms
of
you
know
coding
a
large
microservice
like
this,
but
there
are
already
frameworks
out
there
which
can
make
that
that
much
more
easy,
so
talking
about
model
meshes,
for
example,
so
really
frameworks
that
can
enable
you
to
define
this
kind
of
integration,
of
different
machine
learning
models
in
in
a
declarative
way
based
on
kubernetes
resources,
for
example.
C
Right
so
selden
might
be
one
way
where
you
can
achieve
this
thing,
really
having
a
number
of
different
machine
learning
models
and
then
do
the
whole
orchestration
part
as
part
of
the
declaration
of
the
of
the
whole
micro
service.
Basically,.
B
Who owns this noisy chair?
B
I was impressed by the live demo where you showed the phone and it recognized the car; that is a real example of, you know, real cameras doing AI recognition, and this is all open source. I think, folks, we can share the links to all the demos; if you have them, please share them in the chat, because this is really cool. You can try the same demo that Max did yourself: go into the Red Hat OpenShift Sandbox and try it out.
D
And also, before we close, please don't forget our next episode of the Operators series next week, on the 2nd of March, where we will learn how to write an operator with Java.
B
Today, Fabio, today at 6:00 p.m. our time, there is an OpenShift Commons session talking about how to use Database as a Service with the new RHODA that we showed; we talked about it a little bit in our first Database as a Service session of the series, and this is the other one. Maybe we can also link it in the chat, thanks. And if you can, please also link all the repositories of the demos.
A
The link to the repository, before closing: you can share it in the chat. And guys, thanks for joining, and thanks to all the attendees; it was a pleasure to share our coffee today with you. We had the pleasure of having Max again, who showed us how to create a simple yet complex application on the ML topic, to recognize images: he started from the model and deployed everything into, let's say, production. Thanks again for the great presentation and demo. See you next time for the Java operators session. Bye, guys. Thank you.