From YouTube: Accelerate AI with the Open Hybrid Cloud
Description
Business leaders desire data-driven insights to help improve customer experience. Data engineers, data scientists, and software developers desire a self-service, cloud-like experience to access tools/frameworks, data, and compute resources anywhere to collaborate, build, and scale. This keynote will highlight AI/ML use cases, execution challenges, and tools to help accelerate AI/ML projects from pilot to production, and accelerate delivery of intelligent applications. Finally, the session will share real-world success stories across a number of open source projects led by Red Hat, including Open Data Hub.
My name is Mike Piech. I'm Vice President and General Manager of the Cloud Storage and Data Services business unit at Red Hat, and I am super excited to be here with you all today. I'm going to talk a little bit about how you can accelerate your artificial intelligence and machine learning projects, efforts, and initiatives with the open hybrid cloud.

So let me jump right in. I always try to situate a talk like this within some current context, and sometimes that can be a little difficult, a little contrived.
In this case, though, we actually have a really fascinating event, a series of events in fact, underway right now that is just absolutely rich with inspiration and starting points to talk about data, data engineering, data science, artificial intelligence, and machine learning. Unless you're completely off the grid, you can't not have heard that a very large ship was stuck in the Suez Canal for a number of days last week. It is called the Ever Given, stuck in the Suez Canal in Egypt.

This ship is more than 1,300 feet long, almost 200 feet wide, and 220,000 tons. It carries 20,000 containers, each ranging from 20 to 40 tons. A very, very big ship. And it happened to have been stuck in a very important, very strategic location.

The Suez Canal connects, basically, the Indian Ocean to the Mediterranean, and as is very well depicted in this graphic, it cuts nearly a third off of the journey between Europe and Asia, say between two significant ports such as Rotterdam and Singapore.

Let's just look at a couple of quick statistics. Eighty percent of the world's import/export volume, 50 percent by value, goes by ship: 1.5 tons per year for every person on the planet. A lot of stuff moves by ships. The Suez Canal is 120 miles long and owned by the Egyptian government. Fifty ships a day go through this strategic location, carrying $9 billion worth of goods per day; 13 percent of the world's trade goes through it.
Now, as I was learning about this over the last couple of days, this particular image just struck me, among other reasons, for the sheer difference in scale between old technology and new technologies, which is often just mind-blowing, overwhelming. This particular incident ended reasonably well: yesterday the ship was in fact refloated, through a lot of amazing engineering as well as a little help from Mother Nature, with some spring tides over the weekend on Sunday. But a very important takeaway here, and really this is the setup, is that data can help.
Data, data engineering, and data science are all critical. If we just think about this whole scenario, what happened with this ship and its impact on world trade and world business, and we start to think about some of the different kinds of data that came into play, it's mind-blowing. Start with the ship itself: it got stuck because of a sandstorm and 70-mile-an-hour winds that blew a 220,000-ton ship slightly off course in a very narrow, very strategic canal, and blocked shipping, blocked the flow of nine billion dollars' worth of goods a day, for on the order of six days. You had tidal and oceanographic information. You have ship scheduling and ship routing, as different transport companies around the world were scrambling to figure out how to deal with this thing.

You've got fuel considerations, both for the ships themselves and for the fuel that the ships were carrying from various places around the world. You have so many different kinds of specialists and their scheduling and availability: specialists to pilot the ship, specialists to dredge the ship out, specialists in re-optimizing and rerouting various trade routes, and so on. You've got raw materials supply and demand at a local, more tactical level. You've got macroeconomic factors, black swan events, and so on.

It quickly becomes overwhelming how many different kinds of data come into play in situations like this, and it doesn't matter whether you're the operator of the ship, the seller of goods being transported by the ship, or the operator of a port. There are just so many ramifications, so many implications, so many repercussions of an event like this, all of which can be significantly improved and aided with the right use of data.
So clearly data is a critical asset, and depending on what type of industry you're in, it can be used in different ways. It can improve a customer experience. It can allow businesses to gain competitive advantage. It can be about the P&L, profit and loss, cost savings. It can be about automation, and so on, across different vertical domains and different types of business.

There are clearly even more subdomains and specific ways, along the lines of some of the things I just mentioned, where data can be brought to bear, and where machine learning, more recently, can be brought to bear in very helpful and very important ways: faster, better diagnosis in healthcare; risk analysis in financial services; optimizing network routing within telecommunications; how insurance premiums are calculated; and so on.
Now, an important consideration to keep in mind is that operationalizing the use of data, the employment of data, whether it's for really basic artificial intelligence, say simple rules-based systems, or for more advanced, modern, sophisticated learning algorithms, is not trivial. There's a lot of limelight, a lot of discussion, around specific algorithms and specific technologies. The true science-y stuff is very exciting, for sure, and certainly has great impact here. But we also need to not forget all of the seemingly more boring stuff that it takes to get all the right kinds of data to the right places at the right time, so that models can be trained, so that trained models can be deployed, so that there are feedback loops, so that there is the right plumbing, call it, so that a fast iterative cycle can be set up and learning can really do what it needs to do, in a timely manner.
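That train, deploy, feed-back-and-retrain cycle can be sketched in a heavily simplified form in plain Python. This is a toy illustration only: the data, the one-parameter model, the learning rate, and the drift threshold are all invented for the sketch, not anything from a real pipeline.

```python
# Toy sketch of the iterative cycle: train a model, "deploy" it as a
# prediction function, measure error on fresh data, retrain when it drifts.

def train(samples, w=0.0, lr=0.01, epochs=200):
    """Fit y ~ w * x by gradient descent on squared error."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad
    return w

def deploy(w):
    """Return a 'served' prediction function for the trained weight."""
    return lambda x: w * x

def feedback(predict, new_samples):
    """Mean absolute error on fresh data: the signal that triggers retraining."""
    return sum(abs(predict(x) - y) for x, y in new_samples) / len(new_samples)

# One turn of the cycle on synthetic data where the true relation is y = 3x.
history = [(x, 3 * x) for x in range(1, 6)]
w = train(history)
predict = deploy(w)
err = feedback(predict, [(10, 30.0), (20, 60.0)])
if err > 0.5:  # drift detected: fold the new data in and retrain
    w = train(history + [(10, 30.0), (20, 60.0)], w=w)
```

The point of the sketch is the shape of the loop, not the model: real systems replace each function with substantial infrastructure, which is exactly the "plumbing" being described.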
If you just look at some of the roles of the stakeholders involved, and some of the phases of a machine learning, or more generally artificial intelligence, employment of data, it is quite complex. At the executive level, you're setting high-level business goals. You've got data engineers who are gathering and preparing data, putting the right platform and the right infrastructure in place to get those various kinds of data to the right place at the right time, so that data scientists can then actually sit down and work with models.

But all of these different types of application-building stakeholders need to be able to collaborate to get an actual full-blown application out and in production. And then ultimately you've got the ops folks. Whether you're running an application in a public cloud or on your own premises in a data center, you've got folks who need to keep the lights on, keep everything up and running, handle backups and restores, and all of that good stuff.
So we've talked about a couple of challenges already, but let's quickly highlight a couple of additional ones. First, the data itself: as we already touched on with the shipping example, the volume, variety, and velocity of different kinds of data is of a scale like never before, and that is overwhelming old ways of handling data. To do modern machine learning, one needs different architectures from what one needed in the past, and as with anything new, you've got a scarcity of expertise to deal with this. We now have more infrastructural support for that, with data pipelines and various data constructs such as data lakes.
Now, at the center of all of this, the next layer down, is the platform layer, which is critical for enabling the kinds of speed, scale, and reliability that we need for application development and for running machine-learning-enhanced applications in production. This is where a cloud platform comes in; this is where containers, and the architectures that containers enable, such as microservices and fine-grained modularity, come in. This is what is fundamentally enabled by a technology like Kubernetes and its instantiation, its embodiment, in the commercial product OpenShift. That is itself enhanced and augmented with technologies at an even lower layer, such as graphics processing units (GPUs), field-programmable gate arrays (FPGAs), and tensor processing units (TPUs): various new types of hardware, basically, to accelerate the right kinds of elements in the kinds of algorithms that we're talking about here. And all of that is available to enterprises in various infrastructure models, whether physical on-premises, virtualized, completely private, public cloud, or hybrid. A theme that you'll see in some of what we talk about here is that everything is increasingly hybrid.
So now, with all of that background, let's take a look at a couple of examples, three in fact, from three different industry verticals. We'll start with financial services, with Royal Bank of Canada: a top-ten bank in the world that has been around for a fairly long time, with 86,000 employees and lots of branches. A big bank.
They made some initial attempts, and experienced some challenges, as they were first setting up a team on the order of 100 folks. They were finding that projects took two months to get off the ground. The platforms were hard to build: just the sheer wiring together of the various technologies at the engineers' disposal was itself very time-consuming and distracting, taking away from the time to actually build the applications. Security and compliance were challenges as well.

They had a goal. They wanted to set up tools and processes for a hundred developers and engineers, and they obviously wanted to take that two-month project cycle down significantly.

So they worked with Red Hat and NVIDIA, in addition to some other technologies: they employed Red Hat OpenShift and NVIDIA GPUs to accelerate machine learning models. Their architecture, taking advantage of Kubernetes and the container architecture of OpenShift, was very fine-grained, deploying machine learning models in containers so that they could get that rapid iterative structure to their process. Because of regulatory constraints and so on, they particularly wanted to set this up on-premises, in their own data center. The NVIDIA technology was significant in speeding up what were initially some performance-challenged applications.

As for the results: they've already run on the order of a thousand models with the setup they've built over the last couple of years, and they were able to do ten times more experiments per unit time than they could with their earlier setup.

They took that two-month lifecycle for projects down to a number of days, and in one particular project they were able to analyze the records of 13 million of their Canadian customers in 20 minutes. Given the complexity of the calculation going on there, that's actually a pretty phenomenal number.
Let's jump to a different vertical, a different domain, with certainly different drivers and different constraints: healthcare. HCA Healthcare is a private healthcare company based in the U.S. that has also been around for 50-plus years. A couple hundred hospitals, 2,000 care sites across the U.S. as well as in the UK, $50 billion in revenue, number 67 on the Fortune 500. So, a big healthcare organization here in the U.S., with 280,000 employees; healthcare, in terms of employees per unit of work or per unit of revenue, tends to be a very people-intensive business.
They set out to address a particular challenge. In learning about machine learning and artificial intelligence in general, diagnosis of medical conditions is a use case that comes up fairly frequently, and it's fairly intuitive, fairly easy to get one's head around: how one can throw machine learning at the basic problem of input, a set of symptoms, and output, a set of possible medical conditions. They were addressing, in particular, the condition called sepsis, in which a person's immune system overwhelmingly reacts, essentially over-rotates, in response to an infection, to the point where that immune response starts to actually do more harm than good; it literally damages organs in the body.

So it's a condition that, among other characteristics, spreads and does its damage very quickly, which makes time to diagnose absolutely critical. In HCA hospitals and facilities, the diagnosis of sepsis was a very manual process, literally nurses with clipboards, and the knowledge about how to diagnose it was also spotty: there was better knowledge in some places than others, and that needed to be addressed. So HCA set out to address this very specific problem: to automate and normalize this diagnosis across all of their vast properties, to give every possible diagnosis instance the benefit of the best possible diagnostic technology and diagnostic knowledge, to smooth out that spikiness and not let some patients be worse off than others because they happen to be in a place with less knowledge. They employed OpenShift and set up an environment where their data scientists could gather their existing data, set up an initial model, and roll it out to an application that nurses and doctors would use instead of those clipboards and the previous, much more manual process. They significantly sped up and improved the results of sepsis diagnosis, and here's a quote from the chief data scientist at HCA:
"They provide a five-hour head start." There's a great video linked with an interview that explains this: every hour of delay in diagnosing sepsis increases the risk of death by four to seven percent, so hours really are the difference between life and death here. This is just a fantastic example of how, with the right kind of infrastructure supporting data science, a solution can be rolled out at mass scale and really, really help humanity.
Okay, third use case: let's look at automotive manufacturing, with BMW Group. I'm sure everybody has heard of BMW and seen BMWs on the road. The eighth-largest automaker in the world, it has been around for over 100 years and rolls out two and a half million cars a year. Now, BMW has always prided itself on its image of innovation; they made their first electric car in 1972. So as an innovative, technology-minded, technology-oriented company, it's clear that this is a company that's going to want to make the best use of emerging data capabilities, data science, and machine learning.
Basically, auto manufacturers are becoming Internet of Things manufacturers. It's not just about commuting or transportation or getting from A to B; it is about an experience, a connected experience.

BMW has a program called ConnectedDrive. If you happen to own a late-model BMW, you're probably familiar with their ConnectedDrive application; it allows you to do everything from navigation to scheduling maintenance to ordering a pizza while you're on the road.

They have put an OpenShift-based infrastructure in place to enable their application builders and their data scientists to develop these new services and be constantly iterating, in that sort of rapid-innovation, rapid trial-and-error approach to rolling out new services. They also realized that they had to adopt a more DevOps type of culture: in transforming from that pre-connected world to the assumption, the expectation, that every car out there is going to be connected, their whole software development organization had to itself be transformed. So again, with a solution based on OpenShift, they developed the D3 (data-driven development) platform to process the massive amounts of data already being generated by their cars. This was all built using the latest cloud-native architecture, microservices and so on, on top of OpenShift, and with a development partner called DXC they have really taken this forward.
When you start to think about what gets short-circuited by throwing machine learning at a problem versus having humans figure it out, it's pretty amazing. Okay, so we've walked through three example use cases of how a hybrid cloud infrastructure has helped organizations in three different industry verticals significantly improve or augment their offerings, their customer experiences, and so on.
All of what I've gone through already has been developed on OpenShift, again, that Kubernetes-based cloud platform. So in my last couple of minutes I want to talk real quickly about a project called Open Data Hub. This is an open source project; if you go to opendatahub.io you'll see what it's all about. In short, as it says there, it is a data and AI platform for the hybrid cloud, and it is built on top of OpenShift.
Basically, it is an OpenShift operator. An operator is a special construct that installs, and sort of monitors the runtime of, workloads on OpenShift. So it is an operator, a meta-operator if you will, that pulls together different open source projects that are part of the data science workflow, enabling a much easier wiring together and setting up of a data science environment. It allows companies to do the kinds of projects that we just went through, but much more easily.
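The operator pattern behind that can be sketched in a few lines of plain Python. This is a toy reconcile loop, not Open Data Hub's actual code: a real operator watches and updates resources through the Kubernetes API, whereas here the "cluster state" is just a dictionary, and the component names are invented for illustration.

```python
# Toy sketch of the Kubernetes operator pattern: repeatedly compare desired
# state (the spec) against observed state and converge the two.

def reconcile(desired, actual):
    """One reconcile pass: install or upgrade missing components, remove extras."""
    for name, version in desired.items():
        if actual.get(name) != version:
            actual[name] = version   # "install" or "upgrade" the component
    for name in list(actual):
        if name not in desired:
            del actual[name]         # component no longer in the spec
    return actual

# Desired state: the sort of components a data science spec might list.
spec = {"jupyterhub": "v1", "kafka": "v2", "grafana": "v1"}
cluster = {"jupyterhub": "v0"}       # what is currently running

cluster = reconcile(spec, cluster)
# After the pass, the cluster state matches the spec.
```

A real operator runs this loop continuously in response to cluster events; the value of a meta-operator like Open Data Hub's is that one spec drives the reconciliation of many separate projects at once.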
It gives the folks setting up environments for data scientists a leg up, and takes a lot of that configuration and installation, both the time and the risk of error, off the table.
So here, in a super-simplified nutshell, is what that workflow looks like, and some of the technologies that have been incorporated. You've got data in, let's say, an object type of store such as Ceph or S3. You've got data scientists working in Jupyter notebooks, perhaps using Spark or TensorFlow, and they'll run experiments. The Kubeflow technology marries that with the underlying Kubernetes in an efficient way, so that jobs can be run in that containerized Kubernetes environment.
The workflow to deploy models as a service on OpenShift is part of this, either in a simple way or in a more advanced way with technologies such as Seldon. Also incorporated are technologies for gathering metrics and storing the results of those metrics, so that's your Grafana and Prometheus, technologies like that. This is what Open Data Hub is doing.
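"Models as a service" just means putting a trained model behind a network endpoint. A minimal standard-library sketch of the idea follows; the scoring function, feature names, and port are all made up for illustration, and in practice a framework like Seldon, plus OpenShift routing, replaces this hand-rolled server.

```python
# Minimal "model as a service" sketch using only the Python standard library.
# The "model" is a hard-coded scoring function standing in for a trained one.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    """Stand-in model: a weighted sum of two invented features."""
    return 0.7 * features["temp"] + 0.3 * features["heart_rate"]

class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read a JSON feature dict from the request body, return a JSON score.
        body = self.rfile.read(int(self.headers["Content-Length"]))
        score = predict(json.loads(body))
        payload = json.dumps({"score": score}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def serve(port=8080):
    """Blocking server loop; this is what would run inside the model container."""
    HTTPServer(("localhost", port), ModelHandler).serve_forever()

# serve()  # uncomment to actually serve requests
```

The design point is the separation: `predict` is what the data scientist iterates on, while the serving wrapper, scaling, and routing are the platform's job.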
It's bringing these open source projects together into a coherent, relatively seamless environment, to empower data scientists, data engineers, machine learning engineers, all of the stakeholders involved in creating these intelligent applications, and to give them the environment that they need, so they can do this rapidly, with high performance, at scale, without spending too much time having to do all of the tedious and error-prone wiring together themselves.
So let me end with a couple of takeaways that I hope you've sensed as we touched on them, in some of these case studies as well as in some of the initial setup. The data opportunity will force practically everyone to be hybrid. The notion of a walled garden is a fleeting fantasy. If anybody is imagining, "Oh, I'm just going to go buy a data science environment off the shelf, set it up, and away I go": things are moving so quickly, and organizations out there already have so many different technologies in their data centers, that any kind of monolithic approach to data science is just doomed to disappointment.
So hybrid is fundamentally here. With that in mind, anybody setting up infrastructure, or any data scientists out there who are specifying requirements for your infrastructure providers for such environments: you want to ask for the power of flexibility and adaptability. You want the ability to pull in new technologies and to connect things in different ways. Whereas in Open Data Hub, as I just discussed, some of that is taken off the table for you, that wiring together shouldn't be walled off, shouldn't be hidden, shouldn't be completely black-boxed. You need flexibility, and, related to that, there should be a balance of opinionated constraints and freedom.
Basically, there's that phrase: make the simple things simple and make the hard things possible. That's what you really want to get to here. There is no perfectly handheld, can't-hurt-yourself type of environment, but with the right kind of opinionation, the right kind of guardrails, you can be made much more efficient and be able to roll out data models and machine-learning-enhanced applications that are scalable and reliable.
I talked about cloud, containers, and microservices; that stuff is here to stay, so you can very confidently bet on a technology like OpenShift. Given the sheer growth and the rate at which it is being deployed out there, there's no real question of whether it is going somewhere.
It is a very dependable foundation, and when you have the right platform, the right foundation, in place, that's going to accelerate your efforts. And then, last but not least, as you saw in a couple of the case studies I went through: in order to be successful with these projects, these organizations underwent not just technology transformations but cultural transformations. They've had to change the practices, the behaviors, literally the organizations of their people, to make the best use of these new technologies and new ways of doing things.