From YouTube: Data Services Office Hour (Ep 10): Tackling AI/ML Workloads using Red Hat OpenShift & OpenVINO
Description
As AI/ML workloads become more popular, one of the challenges is getting models to scale in production within intelligent applications. Working together, Red Hat and Intel have integrated Intel OpenVINO into Red Hat OpenShift Data Science, a newly introduced cloud service that gives data scientists and developers a powerful AI/ML platform for building intelligent applications. See it in action – after a brief intro, Audrey will demonstrate how model inferencing performance can be improved for use cases like AI at the edge.
Twitch: https://red.ht/twitch
A: Good morning, good afternoon, good evening, and welcome to a special edition of the Data Services Office Hour. I am Chris Short, host with the most of Red Hat live streaming. I'm joined by one of my favorite Red Hatters and an Intel person, yay. Thank you, Ryan and Audrey, for coming on today. Audrey, do you want to tee up what we're talking about today a little bit?
B: Yeah, for sure. So we're... please. I'll send you... I got it. No, so today we're looking at how we can tackle AI/ML workloads using Red Hat OpenShift, the platform, with OpenVINO, which is a fabulous Intel product. So Ryan and I had actually spoken about this in a session earlier this week and I thought, well, we need to spread the love. We need to tell everybody about OpenVINO and Red Hat OpenShift.
C: How about that? Okay, well, so, yeah: I'm Ryan Loney. I'm a product manager at Intel for the OpenVINO toolkit, which is a set of libraries and tools that are used for optimizing deep learning inference. We've done a lot of work to integrate with Red Hat so that it's easier to deploy, optimize, and quantize these AI models, so they can be deployed in OpenShift and on Red Hat Enterprise Linux, and we're going to talk a little bit about it today.
B: Yeah, so one of the things that we found that is pretty prevalent right now in industry, since we have kind of a need for an open hybrid cloud and AI platform, is that around 69% of enterprises today use a mix of open source and cloud-based software to power their AI initiatives, which is fantastic.
B: So again, that's really the reason that Ryan and myself are here today: because we want to talk about these two products that Red Hat and Intel have created. So cue the intro for Red Hat OpenShift Data Science, and cue the intro for Intel OpenVINO. And I did want to mention, you know, since we have so many enterprises using open source and cloud-based software to power their AI initiatives...
B: One thing that's also very interesting is, despite the large number of vendors that are out there in cloud platforms and architectures to choose from, most of the technology leaders are using open source tools, which is, yay, fabulous.
B: It's, you know, 78 percent of them, or the initiatives that they've actually gone ahead and created, are deployed using hybrid cloud infrastructure. So it makes it a fabulous opportunity for us to get our products out there.
C: He and I were on a call, and, you know, I'd never met anyone at Red Hat before. I'd talked to the team inside Intel that worked with Red Hat, and it was sort of like a game of telephone. Sometimes you'd hear: okay, Red Hat is doing this, they're going to launch a new data science program. And then I actually got to talk to a solutions architect, and he explained it so well and said: this is what we're trying to do.
C: You know, there are a lot of cloud providers that have these sort of vendor lock-in tools that you can use on their service, but I hadn't seen this, like, managed MLOps, Kubeflow-type Open Data Hub offered by anyone else, and I thought it'd be great if Red Hat did it. And he said: great, we need you to do something too. We see that you have a Helm chart and some Kubernetes integration with OpenVINO.
C: Can you turn that into an Operator for OpenShift, get it certified, certify some of your containers? We need them to run on UBI. And at that point we had not done much work with... we had just started to enable and validate our software on Red Hat Enterprise Linux. We had previously supported CentOS and, you know, had customers working with other flavors of Linux, so it wasn't too hard to move over.
A: That's awesome, right? And that's like most partnerships at Red Hat, right? It's like: let's work on a couple of bits here, work on a couple of bits there, and usually it turns out awesome. So it's great to hear. So do we want to talk about OpenShift and OpenVINO together? I like how they both say Open, like capital-letter Open, in the name. So this is pretty cool.
B: So yeah, I can start on that. What we have with OpenVINO is: it is one of our managed services that's available on RHODS, and what's really cool about it is that when you go ahead and choose it in our Red Hat OpenShift Data Science platform and spin up a Jupyter notebook, it actually comes prepared with all these wonderful tutorials, and for one...
C: Yeah, would it help... I don't want to dive right into a demo, but should I share my screen and show the OpenShift console? Yeah? Let's try.
C: Cool. So I just went over to the Installed Operators on this managed cluster that we have, that Red Hat is managing for us, and we have the pre-release version, I guess I would say, of OpenShift Data Science installed. We have these two operators from Intel. One is the AI Analytics Toolkit, and this includes some of the common frameworks created by third parties, like TensorFlow and PyTorch, and we sort of bake in our Intel optimizations.
C: So you can keep using TensorFlow, keep using PyTorch, Modin, pandas, scikit-learn, with some of the low-level Math Kernel Library optimizations from oneAPI baked in. But I'm going to focus on the OpenVINO toolkit, which is our Intel-native tool. It's an open source set of tools, like I said before, and runtimes for deep learning inference, and when we did the integration we were thinking about OpenShift Data Science and, of course, partnering very closely with folks like Audrey, and we added two components, or two APIs, into the operator.
C: One is our existing model server, which is for serving models. It's a service-oriented architecture, so you can deploy it. It deploys a UBI container, manages it with the operator, and it creates an endpoint for prediction requests for your image classification, object detection, and NLP models. The other is a notebook instance. So all you have to do is click this Create Notebook button and, if you're planning to do development, this will install the third-party dependencies that you need in your Jupyter environment. So you can run OpenVINO, run tools like TensorFlow, PyTorch, and ONNX, and start optimizing the models that you have trained. And we even have some training examples that you can run that also don't require a GPU, so you can just quickly get started doing a training run on a Xeon CPU and start doing some optimization.
C: So when you click Create Notebook, what this does is it pre-installs a bunch of Jupyter notebooks, which you can see here, that are available through this Jupyter console. And in order to get to this, you know... if I'm using Open Data Hub, there's a different link; if I'm using Red Hat OpenShift Data Science, it's just part of the managed services. So I'll go to Red Hat OpenShift Data Science, and then you can see that these cards have been enabled here.
C: So I selected the OpenVINO option and launched them, and these are actually two Jupyter notebooks that Audrey and I were sharing the other day at an Intel conference, and these are showing how to take a model and train it in TensorFlow. So this is sort of the classic tutorial that TensorFlow provides in, you know, open source. It's an image classification model. It's what we would call, like, a toy model, because you're not really going to just classify five types of flowers for many production use cases, but I'll dive in and show you what we're doing.
C: This is the dataset that gets downloaded automatically when you launch the Jupyter notebook, and it's going to pull down 3,700 photos of flowers. There are only five subdirectories, which are for the different classes. So these are the labels that we'll have for when we classify the images, and it's just five different types of flowers. And feel free to stop me at any time, Audrey and Chris, if you'd like.
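The way those labels come about is worth spelling out: each class is just the name of a subdirectory in the dataset folder. A minimal sketch of that convention, where the temporary directory below is a stand-in for the downloaded flowers folder (the five class names match the TensorFlow flowers tutorial):

```python
import tempfile
from pathlib import Path

def class_labels(dataset_dir: Path) -> list:
    """Derive class labels from the names of the dataset's subdirectories."""
    return sorted(p.name for p in dataset_dir.iterdir() if p.is_dir())

# Stand-in for the downloaded flower photos directory.
root = Path(tempfile.mkdtemp())
for name in ["daisy", "dandelion", "roses", "sunflowers", "tulips"]:
    (root / name).mkdir()

print(class_labels(root))
# ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
```

The training code never needs an explicit label file; sorting the directory names gives a stable label order.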
C: But then this is just walking through some of the, like, boilerplate steps that you would have for preparing a TensorFlow image classification training using Keras, which is part of TensorFlow. And you will sometimes see these sort of funny, harmless errors, and often it's because either you're not using CUDA or you're not using some other library that it expects you to use.
C: But these are harmless, so don't be alarmed by the red. And then, let's see... again, these are each of the labels, then preparing the dataset, some other model preparation, and then we're going to actually run the training on the CPU. We'll go down to that part. So you can see 15 epochs of training, and in this... I don't know how many... I think maybe 14 cores are available to my pod right now.
C: So with this 14-core pod, with... maybe, I don't know how many gigabytes of memory, not too much... it's able to process each epoch in 152 milliseconds, which, for a CPU, I think is not too bad. So you have to wait, yeah, wait a few minutes for this to run, but again, it's a very simple model, and you can check inside the notebook and see the accuracy is actually pretty good.
C: It's, you know, close to 80 percent, just after running those 15 epochs with those 3,700 flowers. And then this is where we start to hand off between TensorFlow and moving to OpenVINO. So first we're going to download a picture of a sunflower, make sure that it's classified, and see that it works. And so we get this little message down here that says it belongs to sunflowers, and we've got 94.86 percent confidence.
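That "94.86 percent confidence" is the softmax of the model's output scores. A small, self-contained sketch of how raw logits become a label plus a confidence number (the logit values below are made up for illustration):

```python
import math

def classify(logits, labels):
    """Softmax over raw logits, then pick the top class and its probability."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

labels = ["daisy", "dandelion", "roses", "sunflowers", "tulips"]
logits = [0.3, 1.1, -0.2, 5.6, 0.1]  # made-up model output for one image
label, conf = classify(logits, labels)
print(f"This image most likely belongs to {label} with {conf:.2%} confidence")
```

The exact percentage depends entirely on the logits, which is why retraining the toy model can shift the reported confidence slightly.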
C: Some of our competitors have other tools: there's, like, Core ML, TensorRT, ONNX Runtime, and you'll usually have a step where you convert the model to a different format before you deploy it in production. So in this case we're OpenVINO, so we're going to convert it to the OpenVINO representation. There are a few parameters that can be defined, such as the shape of the model, which we know from the previous steps is 1 by 180 by 180 by 3. This is the size of the images.
C: The input images are quite small, and then three-channel RGB. And so for the output type, we're going to go from floating point 32 precision to FP16 when we convert to OpenVINO. So we reduce the size of the model, it's still floating point precision, and we should see no change in accuracy.
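The size saving from that conversion is simple arithmetic: FP32 stores each weight in 4 bytes, FP16 in 2, and INT8 (after the quantization step discussed later) in 1. A quick worked example with a made-up parameter count for a small CNN:

```python
def model_size_mb(num_params, bytes_per_param):
    """Approximate on-disk size of the weights in megabytes."""
    return num_params * bytes_per_param / 1e6

params = 3_000_000              # hypothetical parameter count, for illustration only
fp32 = model_size_mb(params, 4)  # float32: 4 bytes per weight
fp16 = model_size_mb(params, 2)  # float16: 2 bytes per weight
int8 = model_size_mb(params, 1)  # int8 after quantization: 1 byte per weight

print(fp32, fp16, int8)  # 12.0 6.0 3.0 (MB)
```

Halving the bytes per weight halves the artifact, which matters most for edge targets with limited storage and memory bandwidth.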
A: Awesome, we're back live, folks. Sorry about that; Zoom just decided to disconnect us from the call. So, you know, yay, technology.
C: So I'll go through, like, the last few things that we covered. So we did a prediction on the TensorFlow model, and here... I'll make this screen a little better. So we saved the TensorFlow model, converted it to the OpenVINO format, and moved to floating point 16, so reducing the model size a little bit and getting better performance.
C: There's no noticeable change in the accuracy, in the output of the model. And then we're going to run a test which, if you scroll through all of this, shows that the model converts successfully. We're going to load the IR; the intermediate representation is what we call the OpenVINO format, and when you run that conversion step you'll see that there's a .bin and a .xml.
C: So here we load the network, read it with the inference engine, and then we're going to download a picture of a dandelion from Wikipedia, just so we have an image that didn't come from the original training set, because we don't want to plug in an image that was used during the training; we want a totally random one. So we download this dandelion, and we see that it's able to detect, or classify, it as a dandelion, and with great confidence. And then that brings us to the next step.
C: So the next step is to do quantization. And so now that we've trained the model, it's in OpenVINO format, and we see that it's able to classify the image correctly, the next step is to further optimize. So we're going to actually lower the precision to eight bits. Right now, as I said, it's in floating point 16. We're going to do a process called quantization, and we have a tool that OpenVINO provides that comes baked into the integration with Red Hat OpenShift Data Science.
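Conceptually, quantization maps each floating-point value onto an 8-bit integer via a scale factor, and inference then works with the smaller integers. This is only a toy sketch of the idea, not the Post-Training Optimization Tool's actual algorithm, and the weight values are invented:

```python
def quantize(values, num_bits=8):
    """Toy symmetric quantization: map floats onto signed 8-bit integers."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for int8
    scale = max(abs(v) for v in values) / qmax     # one scale for the whole tensor
    q = [round(v / scale) for v in values]         # the integers actually stored
    deq = [x * scale for x in q]                   # what inference effectively "sees"
    return q, deq, scale

weights = [0.813, -1.27, 0.034, 0.5]               # made-up FP32 weights
q, deq, scale = quantize(weights)
print(q)    # small integers in [-127, 127]
print(deq)  # close to the originals; the gap is the quantization error
```

The per-value error is bounded by half the scale, which is why well-calibrated INT8 models usually show little or no accuracy drop.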
C: It's called the Post-Training Optimization Tool, and this tutorial, and a few others, are currently in the notebook image, with even more getting added. You know, I think we have three more coming, so it's a lot of examples for you to start with for different use cases, and of course this one is image classification. So first we're going to just use the model that we trained in the previous notebook, and if you didn't run the previous notebook, this notebook actually checks, and downloads and runs the previous notebook.
C: That's if you happen to forget to run the one that does the training. And then there are some steps that are required to run quantization. So with our tool, you have to create a data loader, and there's an accuracy metric to help measure the accuracy, to compare the original model with the quantized model. We also have an accuracy-aware algorithm that you can use to set the maximum amount of accuracy drop.
C: So that's some of the reason for the accuracy metric. And then this is the actual pipeline to use the Post-Training Optimization Tool, which performs the post-training quantization. And so, running these steps here will give us a compressed model that's in the lower precision, so integer-8 precision, and then it'll save it in the directory. I was looking before in the model directory, so now we have this optimized model, and these are the low-precision models that get saved by running this pipeline.
C: Once you have this .bin and this .xml, or if you're using an ONNX model, you can take these and deploy them with the model server and create an endpoint in OpenShift, so you can serve the models, and I'll talk about that in a minute. But once you've done the experimentation, the work, the quantization and optimization in the notebooks in RHODS, in OpenShift Data Science, you can then download these model artifacts and serve them in OpenShift.
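On the serving side, OpenVINO Model Server speaks a TensorFlow Serving-compatible REST API, so a served model gets a predict endpoint. A sketch that only builds the request, without sending it; the host, model name, and input values here are all made up, and a real request for this demo would carry a 1x180x180x3 array:

```python
import json

def build_predict_request(host, model_name, instances):
    """Build a TF Serving-style predict request for a served model."""
    url = f"http://{host}/v1/models/{model_name}:predict"
    body = json.dumps({"instances": instances})
    return url, body

# A tiny fake input batch, just to show the payload shape.
url, body = build_predict_request("ovms.example:8080", "flowers", [[[0.0, 0.5, 1.0]]])
print(url)  # http://ovms.example:8080/v1/models/flowers:predict
# Sending it is one urllib call:
#   urllib.request.urlopen(urllib.request.Request(
#       url, body.encode(), headers={"Content-Type": "application/json"}))
```

Because the API shape matches TF Serving, existing clients can usually be pointed at the OpenShift route with no code changes beyond the URL.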
C: So one thing we always want to check is that the original model and the quantized model are roughly the same, and in this case they're actually almost exactly the same. There will be cases, with more challenging workloads or challenging datasets, where you're not able to get this type of accuracy, so you always want to check. And part of the process that we were running before was actually taking some of the images from the original training and using them in the quantization pipeline.
C: So we can fine-tune this quantized model so that we don't have accuracy drop, and so that really important step is to check the accuracy. And last, we're going to basically do the same thing we did before: run inference on the quantized model, download that same picture of a dandelion, and see what happens. So again, we get "the image most likely belongs to dandelion", with about the same confidence as before.
C: So one of the first things we... well, first, the benchmark app. This is one of the other tools that we provide in OpenVINO: once you have an OpenVINO model, you can easily just use this command-line tool, or use it through the Jupyter interface, to see what the throughput and the latency are for the model that you're using on the hardware that you have. So in this case, you know, we're running this on AWS, and we have a Platinum 8259, like a new, pretty good Xeon.
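What the benchmark app reports boils down to timing repeated inference requests. A stripped-down sketch of that measurement with a stand-in for the model's forward pass (a real run would call the OpenVINO runtime instead of the dummy function):

```python
import time

def measure(infer, n_iters=200):
    """Time n_iters calls; report average latency (ms) and throughput (FPS)."""
    start = time.perf_counter()
    for _ in range(n_iters):
        infer()
    elapsed = time.perf_counter() - start
    latency_ms = elapsed / n_iters * 1000
    fps = n_iters / elapsed
    return latency_ms, fps

# Stand-in for one synchronous inference on a model.
fake_infer = lambda: sum(x * x for x in range(1000))

latency_ms, fps = measure(fake_infer)
print(f"avg latency {latency_ms:.3f} ms, throughput {fps:.0f} FPS")
```

The real tool also runs requests asynchronously to saturate all cores, which is how the throughput numbers quoted next are obtained.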
C: We don't have access to the full thing because, you know, if I'm using RHODS and I'm a data scientist, and everybody requests a pod that's got all the cores, it's going to be an expensive AWS bill. So this is, I think, with 14 cores.
C: You can see the original model: its throughput is around 2,891 frames per second. And then, if we look at the quantized model, the integer-8 low-precision model, we're getting about a thousand frames more per second. And this really matters when you're dealing with, like, video processing, as one example. If you're processing 30 frames per second or 60 frames per second, you're getting an extra thousand frames without having to buy additional hardware, and without having any drop in accuracy.
C: These numbers are not perfectly accurate, because there's some overhead with running Jupyter, there's some overhead with Python, but it gives you a rough idea of the increase that you can see by running this quantization step. And then there are also some features that are not going to be available to test when you're running on RHODS, in the cloud, on OpenShift Data Science. One is that we have Intel GPUs, which right now are just integrated graphics, so, like my laptop, you know.
C: I could run this same notebook on my laptop with Fedora, and it actually is able to access the integrated GPU. Or if I had Red Hat Enterprise Linux on a desktop or laptop, or on a NUC edge device, you name it, we'd be able to tap into the integrated GPU. And soon, when we launch the discrete graphics cards from Intel, you'll be able to use those as well.
C: And what this does is combine both devices. So I can either run it just on the GPU, just on the CPU, or, if I do this MULTI:CPU,GPU option, it'll maximize the throughput by using both. But of course on AWS we don't have integrated GPUs; it's just a Xeon Scalable processor. So this part you'd have to run locally.
A: In the channel, Carlos from IBM asks: what's the support model for this operator, for enterprises wanting to use it? Can they pay Intel for support, patching, that kind of thing, or how does that relationship work?
C: Yeah, so we have both a community version, which is provided as-is without any SLA or enterprise support, and then we also have a marketplace offering, so that if you want to use OpenVINO on the OpenShift Data Science platform, you do need to install the marketplace version, and there is a trial. And so we provide the 24/7 SLA.
C: We have customer support, with, you know, technical consulting engineers that will provide support to customers who use the marketplace operator. And I believe that, yeah, when you install the operator, you will need to install the marketplace version on OpenShift Data Science. If you want to use Open Data Hub, it's DIY, do-it-yourself, without any support or technical consulting services.
A: So, Carlos, let me know if that doesn't answer your question, but I'm pretty sure that's a comprehensive answer there. So, yeah, on the benchmarking: anything out of the norm here that you're looking at, or is this the expected kind of latency? Any bumps in the night you might have noticed using a cloud provider versus, you know, local hardware, kind of thing? Yeah.
C: And I think that the key... we don't show a performance benchmark against TensorFlow, because it's not really easy to do an apples-to-apples comparison. But sure, I mean, if I did that, in almost any case OpenVINO is either slightly or significantly faster, depending on, you know... we do additional optimizations, like operations fusing and pruning of graphs, things that happen by default when you run OpenVINO, so you should always expect the performance to be better than just using the framework.
C: And then, when it comes to the hardware, I think that, you know, getting over a thousand frames per second for any workload on a CPU is an accomplishment in itself. I think that oftentimes we have partners, customers, who decide to buy expensive discrete graphics cards because they don't think they can get this kind of performance just using their CPU.
C: And so, you know, there are cases where you need to get 50,000 frames per second, or, you know, you really want to spend that extra money because you're processing a thousand camera streams on one server, or something like that. But when it comes to using the CPU, it's sort of just a different equation, right? If you have, you know, 88 cores, then how many frames per second you can process determines how many camera streams that pod can process.
C: So you'll have to do the back-of-the-napkin math and say: okay, I can go buy a discrete card and it'll cost me this much per month on AWS, or I can just buy one with a Xeon and I can process this many frames, and then, you know, decide what's best. But with CPU, the latency here is actually... I think this is great, but it really depends on the workload. So, but yeah.
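That back-of-the-napkin math is easy to make concrete with the throughput numbers from the demo (roughly 2,891 FPS before quantization and about a thousand more after):

```python
def streams_per_pod(model_fps, camera_fps=30):
    """How many live camera streams one pod can keep up with."""
    return model_fps // camera_fps

# Rough numbers from the benchmark run earlier in the demo.
print(streams_per_pod(2891))  # 96 streams at 30 FPS each
print(streams_per_pod(3891))  # 129 streams: quantization bought ~33 more cameras
```

The same division works for any per-stream frame rate; at 60 FPS cameras, each figure simply halves.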
C: Yeah, and we definitely... especially for training, I mean, I know I showed you this really simple model that's very easy to train. You can train this using a GPU: if you have a GPU instance, training with an NVIDIA GPU will go much faster. And there's another example that Audrey and I are looking at using for another event that shows how to do quantization-aware training with PyTorch, and this actually runs really fast if you use an NVIDIA GPU. And then you can, of course, deploy the inference on Intel hardware, but you can also train it on a CPU; it just takes longer.
B: I think I'll just mention here that some people may ask, you know, what industries may be benefiting from some of this technology. And because I just did a conference session yesterday for IAPG, which is the International Association of Petroleum Geologists, one of the things that comes to mind already, and that I know Intel is heavily involved with, is looking at seismic.
B: So you can think of seismic as just basically pictures of the subsurface, but in a way it's almost like an x-ray. You have a lot of images that you need to process for certain details, certain horizons, or seeing if you can pick out faults, or seeing if you can pick out horizons. And this is where having something like OpenVINO for quantization and for inferencing really makes a big difference, because those industries, or I should say those energy companies, are using...
B: ...you know, state-of-the-art GPUs in their high-performance computing centers, and they're trying to squeeze every last bit of performance out to see what they can see in their subsurface models. And again, I know that Intel has, like, an Open Seismic architecture, their alpha release, that's currently a sandbox environment for developers in oil and gas, where they can perform deep learning on 3D and 2D seismic. And for me, that's something that I find fascinating. But don't discount anybody who's doing something in the medical industry, like research for cancers.
B: Looking at images, you know, OpenVINO has a lot to offer for a lot of different enterprises and industries.
A: And I found a link, "Analyzing 3D Seismic Data Using Intel Distribution of OpenVINO Toolkit". I'm just going to drop that in chat, so folks can take a look at that to get a better understanding of that kind of use case.
B: Yeah, as I said, this is really exciting, for... you know, yeah, for lack of a better word. It's good stuff that we're really doing together, and I get very excited about this.
C: Oh, I'd say that I don't know as much about the seismic analysis, but, you know, I did a session yesterday with one of our healthcare partners, and, similar to the seismic analysis, they have very large input images, or input data, that goes into the model. So, like, in the case of x-ray, which we were talking about yesterday, they have, like, a 1024 by 1024 grayscale image that gets input.
C: They also, you know, process, like, 3D slices of 3D images for CT scans, and they're really measuring not in frames per second; they're asking: can we process one frame in less than a second? So it's, like, a totally different paradigm, yeah, but every little bit of optimization counts when you're dealing with big input like that. So with the x-ray, you know, they were saying...
C: Oh, we were able to go from two seconds with TensorFlow to process one x-ray, down to 800 milliseconds with OpenVINO four years ago, and then today we're down to 147 milliseconds. So it's like, wow, we keep inching down and going lower. And that was running on an Intel Core processor, so not even a Xeon, because they have to plug it into an ultrasound machine and have, like, a, you know, embedded machine that's next to... in the operating room or at the hospital.
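Those latency milestones translate into speedup factors, using the numbers as quoted in the conversation:

```python
# Per-x-ray inference latency at each stage, as quoted: 2 s -> 800 ms -> 147 ms.
latencies_ms = {
    "TensorFlow": 2000,
    "OpenVINO, four years ago": 800,
    "OpenVINO, today": 147,
}

baseline = latencies_ms["TensorFlow"]
for name, ms in latencies_ms.items():
    print(f"{name}: {ms} ms per x-ray, {baseline / ms:.1f}x vs. the TensorFlow baseline")
# The drop from 2000 ms to 147 ms is roughly a 13.6x speedup per frame.
```

For a sub-second-per-frame target like the one described, the 800 ms result already qualified; the 147 ms result leaves headroom even on a Core-class embedded machine.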
C: They were saying, around the world... you know, we're lucky in the US: there's, like, one radiologist for every ten thousand people, but in other parts of the world...
B: Yeah, and I'll just put in a plug for the energy industry. This also helps in the areas where we have data gravity, when we're working with national oil companies that don't allow us to take the data out of the country. So what do you do if you can't use, you know, a public cloud provider? You have to create some sort of on-prem or hybrid cloud, and that's where, you know... do you build your own high-performance computing center, which could be cost-prohibitive, or do you use...?
C: Right, and we're really excited, I think, about the next phase of doing some of this hybrid setup, where we're having, you know, edge devices that can be, like, fleet-managed by OpenShift Data Science, where you have a control plane in the cloud and you've got some edge devices that, you know, either call home just to send some telemetry data or, you know, are just not necessarily sending all the data they're processing back to the cloud. Because I think that's really the future: we're going to have these low-power devices that may not have great internet access, but they need to be managed, right? And they need to connect back to a control plane like OpenShift.
B: Yeah, and I'm just going to put in another plug for the energy industry. That is, like, freaking amazing, because you have a lot of oil fields that are, you know, not that close to a lot of good cloud services. So we have to do some of the inferencing on some of the fluids that we're pumping out of the ground, or on the health of the wellhead...
B: We have to do that work away from the mothership, so to speak. So being able to do that, you know, first, before we get the rest of the information, makes things so much quicker for the geoscientists, or even the engineer who's looking after the oil field, so that he can make decisions quicker, especially if there's something that could be failing, that could be bad. So we want...
A: You know, yeah, just parts wear, right? Like, that decreases efficiency, and, you know, just knowing that, right? Like knowing that, okay, we're reaching this point where, like, our coefficient is off and we need to replace a part; that's incredibly powerful to know, sooner rather than later, right? Like...
B: Especially when you have a couple thousand wellheads in a field, you know. Exactly.
C: ...use them, and we also have a page at intel.com, slash openvino, dash, customer, dash... the OpenVINO success stories, and I'll drop that for you, Chris. Thanks.
C: ...of those use cases, and hear about the partners, like BMW, GE Healthcare, and Siemens, who are solving some of these challenges, and hopefully more of your customers will join this exciting board of success in the future. Yeah, this is pretty cool.
A: Let's get... let's get... let's get Ryan some coffee here. Folks, thanks for tuning in. Coming up next, in 17 minutes, we will have the What's New in OpenShift 4.9 briefing, joined by multiple product managers from the OpenShift team, so please stay tuned for that. And there's a full slate of shows on the calendar today, so please go check out that calendar and join us for any of the shows you might find interesting. Thank you, Audrey; thank you, Ryan. Really appreciate it.