RH OSCG: How to accelerate innovation with MLOps
Good afternoon, everyone. Now that we have had lunch, and lost a few people to it, what we are going to look at, and try to start demystifying, is how to get out of the experiments and take them to production. There is a practice, a common denominator, that is not foreign to what we have already been working on, and it goes precisely beyond everything we have been seeing about AI; I think it also came up in all the talks.
There is a phrase that I like a lot, and it captures what is really happening: artificial intelligence is eating software and, by transitivity, the world. It is also changing the way we develop. So, as we were saying before, we have product development, but now we begin to incorporate models into it.
It seems that where before we talked about mobile first, now we talk about AI first, which means that any application could include some kind of model. But things are happening: there are failures, so there is also a certain reluctance, a certain friction, and we are starting to understand why projects involving artificial intelligence, or some kind of model, fail. There are many statistics, but we can group them into five important causes, two of which are common to projects in general.
So, starting from understanding the need, from the need comes the other factor, which is knowing whether the data we have is enough to solve that need. It may be that I want to pursue a very interesting use case in customer experience, but I do not have the data being generated for me today. So I have to start incorporating more systems of record; I have to think about data that I did not have before, or understand what is missing.
This is going to happen with business use cases many times, in any project: one will have identified certain points, certain sources of information, that are going to make up the business use case and, as we walk the path, we will realize that we were missing some other source, that we were missing natural traffic. That is also part of what it is important to start incorporating. On the other hand, we know that there is a practice that is new.
It is a practice that is new in something being done today, but we know that on the science side it goes back a long time. Luis was mentioning it before: this dates from around 1950, when we started talking about what it means to act like a human, and all the techniques that break down into computer vision, natural language processing and everything that makes it possible to give intelligence to something through the sum of many algorithms that are going to give that sensation. And so let us start talking about that intelligence.
What happens today is that the data people are not always used to working with the software development mechanisms that are second nature to all of us who develop software. That means there are software development practices that are not ingrained on the data side, because the mathematician, the data scientist, the physicist are focused on understanding the data in order to develop the algorithm.
So we need to start having some kind of platform that allows each of these inputs and outputs of the different roles to be self-managed. On the other hand, with this type of technique, as we begin to break it down, I do not use one, two, three, four or five tools; I can use many, many more, as we will also see visually in a moment, and that is difficult to manage.
If we go to the origin of the problem, we can understand that when we take the traditional software development of products and add some algorithm, with machine learning or deep learning, we get what we call intelligent applications. These intelligent applications have, like any project development life cycle, a part where I am going to develop and choose which models I am going to take to production, and then a part where I am going to operate them.
The operational part is very important, because I need that data to feed the model back; but it happens that many projects fail precisely at that point, when I take it to production. It is no coincidence: many algorithms are being trained today, and the state of the art from six or seven years ago is not the same as today, even more so after the pandemic; the volumes of data have also been extrapolated. There are many algorithms that are poorly trained, so you have to understand the risks.
If we handle that, we will be able to get good applications out. Later we can see whether the use case was real or not; maybe, from the product-marketing side, it was not really good because there was no need for that use case, I had imagined it one way. But at least I was able to get it out to production and I did not stay stuck in the anteroom. So, from all this flow, we are going to have a data lake or data warehouse, however we put it together, and we are going to train.
We are going to understand which models to implement, we are going to release the product, the product starts collecting data, and that data then feeds back, once again, into our data warehouse or our data lake. This has the same architecture that we saw before with the Galicia case.
One of the points of failure, which I did not mention before, is the lack of flexible infrastructure; not having a flexible infrastructure holds me back.
And obviously, if we position this on the hybrid cloud, we begin to see that, regardless of the model development life cycle we propose, everything can become more elastic and more scalable within that life cycle. In each of these interactions and little boxes a technology stack of tools appears; it does not matter which ones, we are not going to talk about that in this instance. But problems start when those interactions are not fluid and we have not detected them well.
Then, with all those storms I have, I really do not understand what the dependencies are. If something fails, I cannot do good traceability, so I cannot do good debugging; I cannot do good analysis. I cannot understand whether the model is failing because I lacked data, or because there was some manipulation in the middle, because I do not have the trace. So there are many, many open windows, many open doors and fronts, that make it take us a long time to understand the cause of the problem.
So it is important that we understand why it is so difficult when we talk about the big technology stack. This is a landscape from last year: if for each of these use cases we say analytics, if we say computer vision, if we talk about streaming data, these are all things I am going to need in order to develop some of these use cases. There are many variants, so it is not a matter of managing just five tools; and to have freedom, it is not about marrying just one, but about using the best of each one.
So that is where this practice begins to appear, the practice called machine learning ops (MLOps), which takes the practice we already knew from DevOps to the world of data. That is, we are talking about taking the DevOps practice, applied to machine learning, to the data, and that means starting to automate the pipelines, starting to understand what these dependencies are rather than handing things over by hand, and this is really redefining the way we develop software.
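To make that concrete, here is a minimal sketch, in plain Python with scikit-learn, of what "automating the pipeline" replaces: each stage is an explicit function whose inputs, outputs and artifacts are recorded, so the dependencies stop being hand-to-hand. The dataset, file names and metrics here are illustrative assumptions, not taken from the talk.

```python
# Illustrative pipeline: explicit stages with recorded artifacts, so that
# dependencies and outputs are traceable instead of passed hand to hand.
import json
from pathlib import Path

import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def extract():
    # Stand-in for pulling data from the data lake / warehouse.
    X, y = load_iris(return_X_y=True)
    return train_test_split(X, y, test_size=0.2, random_state=42)


def train(X_train, y_train):
    model = RandomForestClassifier(random_state=42)
    model.fit(X_train, y_train)
    return model


def evaluate(model, X_test, y_test):
    return {"accuracy": accuracy_score(y_test, model.predict(X_test))}


def run_pipeline(out_dir="artifacts"):
    X_train, X_test, y_train, y_test = extract()
    model = train(X_train, y_train)
    metrics = evaluate(model, X_test, y_test)
    Path(out_dir).mkdir(exist_ok=True)
    joblib.dump(model, f"{out_dir}/model.joblib")           # versionable artifact
    Path(f"{out_dir}/metrics.json").write_text(json.dumps(metrics))
    return metrics


if __name__ == "__main__":
    print(run_pipeline())
```

Once the stages are explicit like this, scheduling, retraining and tracing a failure back to a stage become mechanical rather than detective work.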
We get out of the world of the experiment to really put it into production, make it scalable and be able to get more applications out. When these life cycles work well, it seems natural to develop intelligent applications; it seems natural to develop a model and then have it consumed by another team, in different products, in other projects or in other areas. Debugging then becomes more reliable; I have auditability and traceability of the entire project.
At the moment one runs an experiment, it is very important that I understand what data I trained it with, what those results were, and how they evolved. Beyond that, one can detect which is the best result, the best algorithm; there are even tools that recommend it to me. All of that is part of what one has to take into account.
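As an illustration of that kind of traceability, here is a hedged sketch using MLflow, one example of an experiment tracking tool (the talk does not name a specific one); the dataset and parameter values are assumptions for the example.

```python
# Illustrative experiment tracking with MLflow: each run records the data,
# parameters and metrics that produced a model, so results stay auditable.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    mlflow.log_param("dataset", "iris")            # what it was trained with
    mlflow.log_param("n_estimators", 200)
    model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_metric("accuracy", acc)             # how the results came out
    mlflow.sklearn.log_model(model, "model")       # the versioned artifact
```

Runs logged this way can later be compared side by side in the tracking UI to pick the best result, which is exactly the "which algorithm won" question above.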
Well, what does the market say about this?
69 percent of the companies surveyed need to have and deal with multiple tools. They prefer open source, but sometimes they also need to step outside of open source and use something else. The important thing is that I can go in and out while keeping the same strategy and a standardized data governance; it is a bit about that.
The other data point is that all the models and the data are being taken to containers: 94 percent of those surveyed are already doing experiments there.
I have to talk about the keys to a successful implementation, part of what we were also discussing in the Galicia case: interoperability, that is, being polyglot. Being polyglot means knowing what it implies if I take a step with a given tool. It may be that I need to test a model and, to test it, there are large engines in the cloud; but let us be aware that if I train in the cloud, my knowledge stays in the cloud; it does not stay with me.
So, as long as we are aware of what we lose when we take that step, it can sometimes be useful to have a moderately trained model to test the fit with the market: okay, that works; well, now I do it at large scale and I begin to understand which platforms and which tools to use. That is precisely part of everything we promote from Red Hat, and we keep adding more: there are business partners and ecosystems, not only open but also closed ones, so that we can interoperate.
Another point is the speed of adaptation. Doing this type of innovation project is not about not failing; it is about failing and recovering, about knowing how to fail quickly. So that it happens quickly, I prepare a context that allows me to say: okay, I understood where the problem is, and now I just adjust it. If I give up because it was not ready the first time, I am not going to get any of these projects out; this is not for the easily frustrated.
On the other hand, there is the scalability of what I choose, which we were mentioning: we already know these things work for applications, and now for the models as well. There is also the fact of building on the microservice-oriented architectures that we are already working with, but adding the models to that: amplifying the models, making them available as a service, so that the applications can also consume them, as in the sketch below.
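Here is a minimal sketch of that "model as a service" idea, assuming FastAPI as the web framework and a model artifact produced by a pipeline like the earlier one; the endpoint name and file path are illustrative, not from the talk.

```python
# Illustrative model-as-a-service endpoint: any team or application can now
# consume the model over HTTP instead of embedding it in their own code.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("artifacts/model.joblib")  # hypothetical pipeline artifact


class Features(BaseModel):
    values: list[float]  # one feature vector, e.g. the four iris measurements


@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": int(prediction)}
```

Assuming the file is saved as serve.py, running `uvicorn serve:app` exposes the model, and a consuming application just POSTs JSON to /predict.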
And, as I mentioned before, this is related to culture.
If we do not have a cultural change, the change that we make will not be sustainable over time. What does this mean? Innovation requires a different vision, because roles are going to change internally.
People who used to do something manually will stop doing it, will start training algorithms, and will make a different use of their capacities; that is the optimization. So there are detractors in all of these processes; there are always headwinds in every innovation process.
On the technology and tooling side, as we mentioned, this is the profile of a data scientist. Now we are going to look at Open Data Hub, which was being mentioned in previous talks.
Open Data Hub already comes with everything that is OpenShift, and OpenShift Data Science appears in order to offer it as a service as well, with support for each component.
That forms the Open Data Hub architecture, which can be offered as a service so that in two clicks I can start testing and developing a model in a Jupyter notebook, with different frameworks, whether PyTorch or TensorFlow. It does not matter, it comes in all flavors, but it is quick to try.
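For a sense of scale, here is a toy example of the kind of thing one would run in such a notebook: a minimal PyTorch training loop on synthetic data. Everything in it is illustrative, not from the talk; the point is only that trying a framework takes minutes, not a provisioning project.

```python
# Tiny PyTorch loop of the sort one runs in a freshly spawned notebook.
import torch
from torch import nn

X = torch.randn(100, 4)                     # toy data standing in for features
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # toy binary labels

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final loss: {loss.item():.4f}")
```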
So that is why we talk about accelerating this innovation, and we see that there are different options: Anaconda, Starburst, Pachyderm.
There are different types of tools that have a community part and a non-community part, and what we seek is to take them all to containers. That is what really gives the platform that polyglot style and lets us choose accordingly.
The life cycle is defined in the data extraction part, in the ingestion part: I am going to run my experiments, I am going to train them, I am going to deploy those models, regardless of what I am going to use, as long as it is defined and it can be automated, as in the sketch below.
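As an illustration of a defined, automatable life cycle, here is a hedged sketch using the Kubeflow Pipelines SDK (kfp v2), one of the engines associated with this kind of platform; the step bodies and URIs are placeholder assumptions, not part of the talk.

```python
# Illustrative pipeline definition: extraction, training and deployment steps
# are declared explicitly, so the whole flow can be compiled and automated.
from kfp import compiler, dsl


@dsl.component(base_image="python:3.11")
def extract() -> str:
    # Placeholder: would pull data from the lake / warehouse.
    return "s3://example-bucket/dataset.csv"  # hypothetical URI


@dsl.component(base_image="python:3.11")
def train(dataset_uri: str) -> str:
    # Placeholder: would fit a model on the extracted data.
    return "s3://example-bucket/model.joblib"  # hypothetical artifact


@dsl.pipeline(name="train-and-deploy")
def ml_pipeline():
    data = extract()
    train(dataset_uri=data.output)


# Compile to a YAML spec that a pipeline engine can run on a schedule.
compiler.Compiler().compile(ml_pipeline, "pipeline.yaml")
```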
This is the architecture we showed before, called Open Data Hub, and that set of components is not limited. For a given case, maybe I do not want artificial intelligence; I simply want metrics, not to call it simple, but I am going to take metrics and indicators and go more towards BI, and well, for that there is Superset.
Superset is a very interesting, very graphical visualizer that one can also use. And there is this whole life cycle, where you start to transform the models and make them available as images so that they can be tested.
The version lives within the repository, whatever type of repository we are using, in this case Git, and we start offering the models as services to be consumed by different types of applications, doing the corresponding monitoring and feeding back into the cycle. All of that is what we contemplate within the platform.
That, in principle, is what OpenShift Data Science is. And something very important, above all for industry and telco: if they are going to work with computer vision, with everything that is processing at the edge, today everything is decentralized. The point is that this itself is transparent, and that I can deploy it at the edge without any kind of concern, without thinking that I have to do yet another configuration. That is what really makes this proposal.
So there is a sandbox, as we were saying in previous talks, and more partners that we keep adding day by day. It is not about dressing everything in a single tool, but about being interoperable and knowing how to enter and exit the different tools that I have to use. That is what I really need: hybrid machine learning strategies. And it is not about just doing different things in order to innovate, but about doing things differently in order to really have an impact. We always do it the open source way.