From YouTube: AiX Keynote: How Companies Have Achieved Business Benefits with Kubernetes Powered MLOps
Description
This keynote dives deep into real-world success stories, including lessons learned from various industries on how organizations built and deployed AI projects on a Kubernetes-powered hybrid cloud platform. It also shares findings from an industry survey of technology executives on MLOps deployment trends for Kubernetes and hybrid cloud.
A
As organizations apply DevOps principles to the model development life cycle, they're increasingly looking at application platforms powered by Kubernetes. My name is Will McGrath, and I'm a go-to-market specialist for our OpenShift Data Science offering. I'm joined by Abhinav Joshi. Abhinav, do you want to introduce yourself?
B
Yeah, absolutely. Hi everyone, thanks for joining us today. My name is Abhinav Joshi. I'm a senior manager in the OpenShift product group at Red Hat, focused on the AI/ML go-to-market strategy and execution, and I'm looking forward to the talk today. Back to you, Will.
A
Great, thanks. Yeah, we're both excited to be presenting at the Open Data Science Conference West. We're going to talk through some trends that we're seeing in implementing MLOps on hybrid cloud Kubernetes platforms, and then Abhinav is going to dive into several customer case studies. So let's get started. This spring and early summer we conducted a survey through Pulse, an IT executive community portal. We found several interesting trends, the first of which is displayed on this slide: close to 80 percent of enterprise IT and data leaders.
A
And finally, this is the last slide we have on some of the trends that we're seeing from these Pulse surveys. AI/ML has really popped up as a workload on Kubernetes. You can see here that it's the number two workload, behind database or data-caching workloads. Not only that, but data ingest, data cleansing, and data analytics, as you can see on the slide, are not too far behind.
A
But what are some of the MLOps challenges, and why are AI application platforms powered by Kubernetes becoming so popular? I'll turn it over to Abhinav Joshi to dive deeper, but let's first look at some of the MLOps challenges: the talent shortage, as we've seen, not only for data scientists but also for people like cloud architects and ML engineers; the lack of self-service infrastructure that would let your data scientists access that infrastructure, such as GPU environments, very quickly; and the complexity of really operationalizing AI projects.
B
Thanks, yeah. What we've seen with a typical software development project, especially with cloud-native technologies like containers, Kubernetes, and DevOps, is that these technologies and operational practices provide the much-needed agility, flexibility, consistency, portability, and scalability for data scientists to develop, train, test, and share their machine learning and deep learning models in a repeatable way, without having to worry about the underlying infrastructure resources, and the same goes for the developers and the DevOps engineers.
B
Now, what you see on the screen is a conceptual architecture for MLOps, built on containers, Kubernetes, and DevOps best practices. In order to execute on the AI lifecycle, what you need is the data engineering, machine learning, deep learning, and DevOps software toolchain.
B
Some of the common examples are TensorFlow, PyTorch, Jupyter notebooks, Spark, Python, and so on, plus a set of data services to build your pipelines: SQL, NoSQL, and NewSQL databases, data lakes, and so on. All of this has to be supported on a secure hybrid cloud platform.
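To make the toolchain idea concrete, here is a minimal sketch of what one containerized pipeline stage in such an architecture might look like: train a model, then persist a versioned artifact for the next stage to pick up. It sticks to the Python standard library; the least-squares model, file paths, and version label are illustrative stand-ins for the real frameworks named above, not anything from the talk.

```python
# Illustrative MLOps pipeline stage: train a model and write a versioned
# artifact, the way one containerized step in a toolchain might.
# All names and paths here are hypothetical.
import json
import pickle
import statistics
from pathlib import Path

def train(samples):
    """Fit a simple least-squares line y = a*x + b (a stand-in for a real framework)."""
    xs = [x for x, _ in samples]
    mx = statistics.mean(xs)
    my = statistics.mean(y for _, y in samples)
    a = sum((x - mx) * (y - my) for x, y in samples) / sum((x - mx) ** 2 for x in xs)
    return {"a": a, "b": my - a * mx}

def run_stage(samples, out_dir="artifacts", version="v1"):
    """One pipeline stage: train, then persist model + metadata for the next stage."""
    model = train(samples)
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    (out / f"model-{version}.pkl").write_bytes(pickle.dumps(model))
    (out / f"model-{version}.json").write_text(json.dumps({"version": version, **model}))
    return model

model = run_stage([(1, 2), (2, 4), (3, 6)])
print(model["a"])  # → 2.0 (slope of the fitted line)
```

In a real deployment, each such stage would be a container image, with the artifact store and orchestration provided by the platform rather than the local filesystem.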
B
That platform is powered by containers, Kubernetes, and DevOps capabilities, with self-service access that empowers the data scientists, data engineers, and developers to be more agile and to collaborate throughout the whole process, without depending too much on IT operations for individual activities.
B
Now, what you see on the screen are some examples of organizations worldwide that have operationalized MLOps with containers, Kubernetes, and DevOps to achieve much-needed business outcomes. Healthcare organizations such as HCA Healthcare are achieving data-driven diagnostics, and financial services organizations such as the Royal Bank of Canada are improving the consumer banking experience.
B
Now, let's spend a few minutes talking about the success that we've seen at HCA Healthcare. Previously, the detection of sepsis at HCA's hospitals was done manually by the nursing staff.
B
That would help them make better and faster treatment decisions. They also wanted to build a software application that could easily scale across the hundreds of clinical sites that they have, and finally to be able to collect near-real-time data from the hospitals to support decision-making through a machine learning model. What they felt was that they could do this by giving the data scientists and the developers the ability to rapidly build, deploy, and update the models and the associated AI-powered software application to help detect sepsis.
B
Now, the HCA team recognized this wasn't just a technology problem, so they worked in partnership with the hospitals.
B
They engaged the key stakeholders and the senior management very early in the process, and they also evangelized the solution to the leadership of the clinicians and the nurses. The team engaged the sepsis coordinators, who are the end users, very early in the software development process, and once the app was ready, they implemented a training program as well to help the clinicians adapt to the new processes. The application is called SPOT, and that AI-powered application was developed on, and runs on, Red Hat OpenShift.
B
The second key example I want to talk about is the energy company ExxonMobil.
B
They are one of the largest publicly traded international oil and gas companies, and they rely heavily on data for key upstream, midstream, and downstream oil and gas processes: exploration, logistics, refinery monitoring, leak detection, incident response, and much more. AI/ML is key to their business, with over 100 data scientists across their teams. They wanted to help make them more productive and happy, and also make sure they could easily share the models with the stakeholders for feedback, but the existing platforms that they had were not sufficient.
B
Yeah, and then the explosive growth in analytics capabilities and tools opened up opportunities for the ExxonMobil team to do more with machine learning, but that also quickly led to a lot of challenges.
B
The data scientists had to install and configure a set of five to six tools for each of their projects on their laptops, some of which required root or admin access. As the work got more sophisticated and complicated, the data scientists were spending more and more time setting up the infrastructure.
B
So what they did was kick off an iterative process of containerizing the key configurations on Red Hat OpenShift, the Kubernetes platform, and running POCs with small data science teams.
B
What they learned was that agile development processes could be applied to data science and AI projects as well, and with that they were able to bring the speed of DevOps to their AI projects and help the business. So now the data scientists can quickly create image files from a secure repository, via a capability in the Red Hat OpenShift Kubernetes platform, and publish the models to a URL with a single click, letting them get feedback from the key stakeholders.
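A one-click publish like this ultimately exposes the model behind an HTTP endpoint. As a rough, standard-library illustration of what sits behind such a URL, here is a sketch of a tiny prediction service; the model values, request shape, and port are hypothetical, and on OpenShift the image build and route would be handled by the platform rather than written by hand.

```python
# Hypothetical sketch of a model-serving endpoint behind a published URL.
# The model, JSON schema, and port are illustrative, not from the talk.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

MODEL = {"a": 2.0, "b": 0.0}  # stand-in for a model loaded from the image

def predict(model, x):
    """Apply the (linear) model to one input value."""
    return model["a"] * x + model["b"]

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"x": 3.5}; reply with the prediction.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        x = json.loads(body)["x"]
        payload = json.dumps({"prediction": predict(MODEL, x)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To serve (stakeholders would hit this endpoint for feedback once the
# route is published):
#   HTTPServer(("", 8080), PredictHandler).serve_forever()
```

In practice the container's route, TLS, and scaling are what the platform's "single click" takes care of; only the handler logic resembles what a data scientist would write.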
B
So the end results of the deployment have been amazing. The productivity of the data scientists has gone up by 10x, there are now some 30 teams on a single platform, and the data scientists can focus on writing code and building the models. It's easy for them to share the models and get feedback quickly from the key stakeholders, and the security and operations teams can also be assured that the software packages and the code being built and deployed by the data scientists and the developers are compliant.
B
And then finally, I want to spend a few minutes on one more success story: Verizon and Verizon Media. To help developers use 5G data insights and build innovative new services, Verizon and its division Verizon Media created a platform called Leo. Think of it as an AI platform that also has key applications that can help businesses and consumers.
B
Leo provides data insights for multiple use cases such as crew safety, touchless travel, video analytics, drone-based delivery (which is coming very soon), smart manufacturing, and much more. By certifying the Leo platform on top of Red Hat OpenShift, Verizon has gained the skill, scalability, and responsiveness to benefit from real-time data from the network edge and to stay ahead of the competition in a new and fast-growing market.
B
So Red Hat OpenShift, the Kubernetes platform, provides Verizon with a consistent foundation across the different environments at all the edge locations for innovative app dev at scale, including automation capabilities and comprehensive, continuous security across all the thousands of sites. Verizon has now successfully migrated more than a thousand containers to the new platform from its previous solution.
A
Thanks, Abhinav. It was great to hear how companies like HCA Healthcare, ExxonMobil, and Verizon are really gaining success operationalizing ML through Kubernetes. I just wanted to spend a few final moments explaining what Red Hat is doing to make MLOps real. Red Hat is not really known as an AI/ML software company, but we are a platform company, so we're investing a significant amount of resources in making containers, Kubernetes, and DevOps principles real to help accelerate AI/ML projects through to production deployments.
A
Secondly, we've been investing in a project known as the Open Data Hub project for some five years now, through our Office of the CTO. What it does is stitch together some 20 different open source ML technologies into a Kubernetes operator. So we have a meta-operator that pulls all of those technologies together: Spark, Kubeflow, Hive, Jupyter examples, and more. We also work with a number of commercial AI/ML technology partners to provide best-of-breed capabilities on top of our Kubernetes platform.
A
So we hope you enjoyed some of the success stories that Abhinav went through, and if you want to learn about some 20 other case studies where organizations are using Kubernetes to roll their ML models into production, feel free to check out the stories listed here later. It might just inspire you on how you can create best practices for MLOps.