Description
The Ignite AI platform built by KPMG enables data management, model build/model management, solution development and deployment, and business enablement. Ignite is made for data scientists and engineers, but also allows for business user empowerment, keeping humans in the loop. The Ignite AI platform provides automated MLOps and data pipelines to achieve this goal.
A
Thank you. Thank you all for joining this morning; I know it's early for some. My name is Kevin Martelly. I'm a principal and Advanced Technologies Cloud Leader at KPMG, overseeing a lot of our cloud innovation, MLOps, and pipeline work, so nice to meet everyone virtually today.
B
Yeah, sure, thanks a lot, Kevin, and thanks, everyone, for joining the session today. My name is Abhinav Joshi. I'm a senior manager on the OpenShift product marketing team at Red Hat, and I have over 20 years of industry experience across a lot of roles, on both the customer side and the vendor side.
B
In my current role at Red Hat, my team and I focus on building out and evangelizing the value of Red Hat OpenShift, the industry-leading Kubernetes platform with integrated DevOps capabilities for cloud-native workloads. That also includes emerging technologies like data analytics, AI/ML, and so on. Back to you, Kevin; I'm looking forward to the talk today.
A
Thank you, Abhinav. Before we get started on today's talk track around MLOps, I wanted to give a brief background on KPMG. As many of you may know, KPMG is one of the big four consultancy and advisory firms, and within our advisory practice we have a group of around 2,500 practitioners, located in multiple countries, who really focus on maximizing the value from data, AI, and emerging technologies.
A
First, we'll give you a little bit of background on what KPMG Ignite is. We'll talk through who is using Ignite and why, and then we'll drill into some specific challenges Ignite is used to solve. We'll also talk a little bit about use cases, and finally we'll get into the meat of the presentation: how Ignite solves the problem of MLOps, both from a process perspective and from a technology perspective. KPMG Ignite was initially built to extract value out of unstructured data.
A
We saw the need with unstructured data sets, predominantly contracts, loan documents, emails, voice, et cetera. There was a big need, early in the days of machine learning, to extract value from unstructured data, and that's really where KPMG Ignite started its life, if you will. Over time it took in structured data as well. It's a platform built in a modular form, based largely on open source, with containerization such as OpenShift running it, and an open architecture.
A
Ignite solves many of the end-to-end challenges that enterprises face in taking POCs or pilots into production at the ML use case level. When we think about the ops perspective and where MLOps fits, the types of challenges it solves span different areas: pre-deployment, deployment, and post-deployment. Starting with the pre-deployment space:
A
It provides a robust set of AI tools that enable data scientists to run their experiments, and enable risk management teams to see the outputs, the results, and the explainability of models. It offers ready-to-scale training infrastructure: being built on top of containerization allows you to scale out on your training needs.
A
Some of the more important things here, I think, are the logging and alerting after the fact, to keep track of the model; the scalability, to scale the platform on demand based on specific needs; the integration into a CI/CD pipeline; and overall performance monitoring and health metrics.
A
So the concept of Ignite is to build use cases and/or solutions. Use cases and solutions are really built by stringing together what we call components. You can think of a component as an individual piece of work or functionality used to achieve some type of solution or use case.
A
A component may be an open source capability available in the marketplace today, like Tesseract for OCR, or it could be something that comes from a third party, like a proprietary algorithm, maybe something for speech-to-text that you can get from a CSP provider.
A
But at the end of the day, components are the smallest pieces of work that, strung together into a workflow, produce a solution, with the ability all along the way for the human in the loop to evaluate the outputs coming out, determine whether the models are in fact predicting accurately, and, over time, make changes so retraining can go into the process.
A
The human in the loop also extends to other groups in an organization. It could extend to the governance team, which needs to see the metrics that are part of the model, and it can extend to the operational team for the deployment and the health of the system; we'll go into some of those details. But again, at a high level, the solutions are made up of many components that are individually pieced together into a workflow that then produces an output.
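To make the component-and-workflow idea concrete, here is a minimal sketch of how such a workflow might be declared and traversed in Python. The schema, component names, and registry URL are illustrative assumptions, not the actual Ignite format.

```python
# Hypothetical declaration of an Ignite-style workflow: an ordered list of
# components (each a containerized microservice) that together form a use case.
workflow = {
    "name": "contract-intelligence",
    "components": [
        {"name": "ocr", "image": "registry.example.com/ocr:1.4"},
        {"name": "section-splitter", "image": "registry.example.com/splitter:2.0"},
        {"name": "clause-classifier", "image": "registry.example.com/classifier:3.1",
         "model_id": "clause-clf-v7"},   # resolved by the model manager at runtime
        {"name": "human-review", "image": "registry.example.com/review-ui:1.2"},
    ],
}

def next_component(workflow: dict, completed: str):
    """Return the component that runs after `completed`, or None at the end."""
    names = [c["name"] for c in workflow["components"]]
    i = names.index(completed)
    return workflow["components"][i + 1] if i + 1 < len(names) else None

print(next_component(workflow, "ocr"))  # -> the section-splitter component
```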
A
Ignite itself solves for many different types of use cases. I'm not going to go into each individual use case that you see here on the slide, but as you can see, a lot of these use cases are around unstructured data. The first three are more around contractual terms and PDF documents. The last one, KPMG Intelligent Interaction, is around chatbot types of interaction. I'll take one use case here as an example: cognitive contract management.
A
One of the challenges we see organizations facing is that they have a ton of contracts out there, and they need to get the right information out of those contracts to make the right business decisions. For instance: are vendors complying with terms and conditions? Are payments being paid on time? For any type of information you think is relevant to your contracting process, Ignite can help break the contract down into usable pieces that will ultimately give you an answer so you can determine next steps. It's similar for LIBOR analytics.
A
The LIBOR rate is going away shortly, and there's a lot of need to change these rates in documents. What are your alternatives? You have to understand the documents and figure out what you can replace the LIBOR rate with. This is another one of the use cases that breaks down into components: maybe OCRing a document, breaking it down into sections using some rule-based inferences, maybe using classification models to determine what type of options are available.
A
I think a lot of us have experienced the challenge of creating a pilot or a POC successfully, but then spending months and months getting that POC into production, because there are a lot of processes and controls and things that may not have been considered, specifically around the model and MLOps management. We're going to dive into the details of how Ignite solves for that. But before that, Abhinav, share a little bit of information about Red Hat.
B
These technologies and operational best practices provide the much-needed agility, flexibility, portability, and scalability for data scientists to develop, iterate, and share models with their peers in a seamless way, as well as with the software developers. And then for the developers:
B
They get the capability to do rapid coding of software apps that are powered by machine learning and deep learning models, and the data scientists and developers no longer have a dependency on IT ops for every infrastructure provisioning task. Next slide, Kevin.
B
To execute on AI workflows, what you need is a bunch of software tools, such as TensorFlow, Spark, PyTorch, Jupyter notebooks, and so on, plus data services: data streaming technologies like Kafka, SQL and NoSQL databases, object storage, and so on. Then what you need is an end-to-end solution architecture. It may span across on-premises, the cloud, and all the way to the edge, for various needs,
B
such as security and compliance, data generation at the edge, data gravity, and so on. All these tools, and access to the data sources, should ideally be supported on a self-service hybrid cloud platform that is able to encapsulate all these infrastructure endpoints and also provide consistency and scalability anywhere.
B
Now, the hybrid cloud platform should have integration with hardware accelerators, such as NVIDIA GPUs, to speed up the data analytics, model training, and inferencing activities. Finally, the hybrid cloud platform should offer a consistent experience across on-premises, public clouds, and the edge, and be efficiently manageable in a unified way by IT operations.
B
It has empowered the data scientists, the data engineers, and the developers to be agile and very collaborative throughout the AI lifecycle, without having too much dependency on IT operations for individual activities. Kevin, can you go to the next slide, please?
B
Yep. So OpenShift provides a lot more than the fundamental value you get with containers and Kubernetes. The first thing is that it simplifies the deployment, scaling, and lifecycle management of containerized AI/ML tools, such as the few examples you see on the screen here and a lot more, by automating the day-1 to day-2 operational tasks associated with these tools, and this helps ensure high availability and faster time to value.
B
The third key thing is that OpenShift comes with self-managed as well as cloud-hosted options, and this gives you a consistent way to perform day-1 to day-2 ops, be it on-prem, at the edge, or in the public cloud, while also providing the much-needed portability and consistency for the modeling and app-dev workflows,
B
say, for the data gathering, preparation, modeling, deployment, and inferencing tasks. And then, as I mentioned earlier, OpenShift also comes with integrated DevOps capabilities; that way, it actually helps
B
extend the value of DevOps to the entire machine learning lifecycle. That helps the collaboration between the teams and helps ensure that the models can be easily deployed into the app processes for the rollout of intelligent ML applications. Then, finally, OpenShift is a fully integrated hybrid cloud platform, and it includes all the key capabilities like monitoring, automation, the DevOps toolchain, pipelines, GitOps, and so on. All this is built on 100% open source technology, and this helps drive innovation without having lock-in. So back to you.
A
Thank you. So, as we've set up a little bit of the lead-in, now we're going to dive into some of the technical details on how we are solving, as I mentioned, both from the KPMG perspective and for our clients, around the entire MLOps lifecycle.
A
If we look at the KPMG Ignite platform itself, it's made up of many of what I would call infrastructure components; these are the things you'll see up here in the top box. These infrastructure components run continuously on the OpenShift platform and take in the use cases, or the components that string together to form a use case, from an execution perspective.
A
I'll call out some of these components that I think are relevant to the overall understanding of the platform, but for the rest of the presentation we're really going to spend most of our time on the model management capabilities of the platform itself.
A
As mentioned, if you look at the bottom layer, this sits right on top of Red Hat OpenShift from a containerization and orchestration perspective. Abhinav already gave some answers around the advantages of using OpenShift and these types of solutions.
A
In addition, the fully embedded CI/CD pipeline is there, and then we also have an abstraction layer called MinIO, which we're using from an object storage perspective. It's a technology layer that achieves the same level of object storage you would see on cloud providers, for quick access and fast read and write times. At the component level there are a few things, again, that I think are important to call out. One is the workflow engine we mentioned a little bit earlier.
A
The workflow engine runs components, and a component is a microservice performing an individual piece of work. It could be data ingestion, document classification, some type of data extraction. These types of components come together to form a workflow, and they can split off or be linear based on your use case as these workflows are executed.
A
Some of the other important parts, I think, are the model management pieces, which we'll drill more into later on: how do we store the model metrics, how do we deploy the models, how do we get explainability and surrogate models, how does all this come into play? It's not only the technology enablement; it's the business process, the risk processes, and the business users integrating with the end-to-end process that make this successful, and we'll go into how that works.
A
There's the message controller, which is Kafka. As components finish in your particular workflow, they notify Kafka that this component is done and the next one should start, and the nice part about using the OpenShift platform is that each one of these containers can be spun up one to n times.
A
Kafka keeps track of the messaging to know which container needs to go next. As I mentioned, there are code development environments through notebooks; logging and storage through Elasticsearch, storing some of your model statistics; time-series databases like Prometheus, if you need them, which may be relevant for some of your model monitoring processes; and ultimately ZooKeeper.
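As a concrete illustration of that completion-message pattern, here is a minimal sketch using the kafka-python client. The topic name, message fields, and broker address are assumptions for illustration, not the actual Ignite schema.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

TOPIC = "ignite.workflow.events"  # hypothetical topic name

def announce_done(component: str, workflow_id: str) -> None:
    """A finished component tells Kafka it is done so the next one can start."""
    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",
        value_serializer=lambda m: json.dumps(m).encode("utf-8"),
    )
    producer.send(TOPIC, {"workflow_id": workflow_id,
                          "component": component, "status": "done"})
    producer.flush()

def wait_for_upstream(my_component: str, upstream: str) -> str:
    """The next component blocks until its upstream dependency reports done."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers="kafka:9092",
        group_id=my_component,
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )
    for event in consumer:
        msg = event.value
        if msg["component"] == upstream and msg["status"] == "done":
            return msg["workflow_id"]  # our turn to run
```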
A
This is the component-level view of what the Ignite platform itself looks like. We'll show you one more view of the overall Ignite platform and then drill into the MLOps pieces. As mentioned, everything in the platform comes through our authentication, or our API gateway.
A
The API gateway has two ways in. One is that you can come in as an end user through an interface, which has a lot of interface controls that we'll talk about, and interface applications. The other way is through a RESTful service. When you're sending a particular workflow through the RESTful service, think of the workflow again as a stringing together of components; those components come in in a particular order for how that workflow needs to execute.
A
The workflow is stored inside a PostgreSQL database as metadata, and then, once it's stored in that particular database, it comes through and starts to actually execute and expand out onto the ecosystem. The Ignite component builder understands the workflow, understands the components associated with that workflow, and will pull them out of Artifactory, or your registry, and deploy those components onto the platform. Again, those components are completely dynamic,
A
based on the need for that particular use case and how many times that component needs to be instantiated. If the component has a model, it will go through the model manager, retrieve the model that sits in MLflow, and pull and load the model into that component, and again it can launch that component one to n times based on how many documents or how much data needs to go through that model.
A
Now, getting down into the meat of the overall topic of the conversation, which is what the talk track was around: MLOps. When we think about MLOps, there's a lot of different functionality that needs to be built, such as model training, model serving, and model management. In addition, there are a lot of business processes that also come into play, and we're going to drill into each one of these content areas.
A
There needs to be a place for your governance team to understand the model attributes that are being generated. Are they aligned to expectations? Is there bias in your model? Are you getting the right output for what the parameters need to be? Then there's the serving part: I need to serve this model into the ecosystem. How do I serve it? How do I consume it? Can I serve one model? Can it be multiple models, based on scale? And then, finally, the management part of it.
A
So if we think about model training, Ignite supports two mechanisms for model training. The first mechanism is that you can bring your own model: maybe you built your model somewhere else and you want to bring it into the Ignite ecosystem. The second: maybe you're a data scientist who wants to build a model using the Ignite ecosystem. Ignite has JupyterHub; it instantiates a notebook for each individual developer, and a developer can then use that notebook to build those models.
A
Those models have a wrapper layer that we call the Ignite model manager, which is a custom-built component. That model manager functions in two ways, whether you're bringing your own model in or using your own notebook and the platform to develop it. The Ignite model manager can then send the relevant attributes into MLflow, and the metadata you may want to capture is very flexible. Some clients want to capture the training data, so there's metadata pointing to where the training data was.
A
If you're running classification models, you may want to capture the accuracy, the precision, the F1 scores; if you're running regression models, maybe the mean squared error. Based on what you're doing, what you want, and what your governance processes are, a lot of different metrics can be captured as part of your model.
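The Ignite model manager wrapper is custom, but the underlying MLflow calls for capturing this kind of metadata are public API and look roughly like the sketch below. The tracking URI, experiment name, and data URI are hypothetical placeholders.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://mlflow.example.com:5000")  # hypothetical server
mlflow.set_experiment("clause-classification")             # hypothetical name

# Stand-in training data so the example is runnable.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    preds = model.predict(X_test)

    # Pointer to where the training data lives, for clients that require it.
    mlflow.log_param("training_data_uri", "s3://ignite-data/contracts/v7")
    # Classification metrics for the governance process.
    mlflow.log_metric("accuracy", accuracy_score(y_test, preds))
    mlflow.log_metric("precision", precision_score(y_test, preds))
    mlflow.log_metric("f1", f1_score(y_test, preds))
    # The serialized model itself, versioned in the model registry.
    mlflow.sklearn.log_model(model, "model", registered_model_name="clause-clf")
```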
A
How do our risk processes gain comfort to move into production? Explainability: we see that becoming a pretty big topic, and we're adding capabilities and features there. Another thing is model alternatives: are we using a very complex deep learning model, and can we replace it with something less complex and still get the same level of accuracy? These are all things we see the business asking for, so they can get these models moved into production quicker, through a more governed process.
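One standard way to evaluate such a model alternative, offered here as a sketch rather than as Ignite's actual implementation, is a global surrogate: train a simple, readable model to mimic the complex model's predictions, then compare fidelity and accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

complex_model = GradientBoostingClassifier().fit(X, y)
complex_preds = complex_model.predict(X)

# The surrogate imitates the complex model's outputs, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, complex_preds)

print("fidelity to complex model:", accuracy_score(complex_preds, surrogate.predict(X)))
print("accuracy against ground truth:", accuracy_score(y, surrogate.predict(X)))
print(export_text(surrogate))  # human-readable rules for governance review
```

If the shallow tree's accuracy is close to the deep model's, the business may prefer the simpler, explainable alternative.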
A
The next part is around model ops and model serving: how do these models get served into the Ignite platform? We saw the high-level flow on the prior page, but just to walk you through the use case: as we mentioned before, a workflow is made up of components, and components strung together in a workflow form a solution or a use case.
A
If we take this particular example, there will be some type of service, whether it's an application or a user, that will initiate a workflow. It will say: please run these components in these ways. The workflow will come through the Ignite platform. This is an example of, maybe, a demonstration workflow where we're doing OCR, template matching, machine translation, et cetera. It could be anything you want to do as part of your workflow: data extraction,
A
an anomaly detection model, et cetera. The idea is that, when the workflow comes in, these are all containers. The part of the deployment process which I think makes the OpenShift platform and containerization very unique and quick is that these containers sit inside Artifactory, your registry, and only at the time of workflow execution do these containers get pulled down onto the platform to run, and they scale based on the needs of that particular,
A
I guess, function, if you will. So let's say, for instance, you're going through your workflow: these are the components; your containers are being pulled out of your registry; you have an OCR component running, maybe one to n; your template matching, machine translation, all these components are running, notifying Kafka back and forth that they're complete, and the next component is picking up on that activity. Now you get to a point where you need to run a classification model. As part of this classification model run:
A
What will happen is, through the Ignite model manager, it will integrate with your MinIO, your model store. It will have the right JSON, it will pull the right value, and then it will pull the model through and load it into your component for this classification model.
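At the storage level, that runtime model-loading step might look like the sketch below, using the MinIO Python client. The endpoint, credentials, bucket layout, and serialization format are assumptions; the real lookup goes through the Ignite model manager and its JSON metadata.

```python
import joblib
from minio import Minio

client = Minio(
    "minio.example.com:9000",   # hypothetical endpoint
    access_key="IGNITE_KEY",    # injected as secrets in practice
    secret_key="IGNITE_SECRET",
    secure=True,
)

def load_model(model_id: str, version: str):
    """Fetch a serialized model from object storage and deserialize it."""
    local_path = f"/tmp/{model_id}-{version}.joblib"
    client.fget_object(
        bucket_name="models",
        object_name=f"{model_id}/{version}/model.joblib",
        file_path=local_path,
    )
    return joblib.load(local_path)

classifier = load_model("clause-clf", "v7")  # picked up during runtime
```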
A
So the main points of this are that you're getting real-time model loading during runtime: it's picking up the most recent model, based on your parameterization of which model version you want to pick, during the actual runtime of the use case, and you can scale the model,
A
you know, based on the needs. And finally, it's outputting a lot of the model metrics as part of your runtime evaluation. Any metrics that you want to capture could be stored as JSON, or could be stored in a time-series database like Prometheus, based on the types of models, but all these metrics are going to be stored, so over time you can review those metrics and determine what type of changes you need to make.
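For the time-series option, a component could expose its runtime metrics in Prometheus format with the prometheus_client library, as in this sketch; the metric names and the stand-in inference call are illustrative only.

```python
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total",
                      "Predictions served", ["model_id", "label"])
LATENCY = Histogram("model_inference_seconds",
                    "Inference latency", ["model_id"])

start_http_server(8000)  # exposes /metrics for Prometheus to scrape

while True:
    with LATENCY.labels(model_id="clause-clf-v7").time():
        time.sleep(0.05)                                     # stand-in inference
        label = random.choice(["compliant", "non_compliant"])
    PREDICTIONS.labels(model_id="clause-clf-v7", label=label).inc()
```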
A
Now, the next part here is what I think is one of the most important parts, which is MLOps management. MLOps management is the dashboarding area, where many different types of business users can come in to evaluate the process end to end. We've talked through some of the ability to train models and serve models,
A
but that's probably a very small piece of the puzzle. Once you're done with that and you want to get something into production, there's a lot more work that needs to happen around it. So what does this management console that's been put together do? Some of what it does is help the business create training data. Training data, and labeling the data, as we all know, is a huge problem for businesses, and there are different techniques for how to create this labeled data.
A
What we traditionally find is that the governance teams will take legacy ways of approving models, maybe past financial models done in different tools, and apply those same techniques to these more advanced models, which may not be trying to solve the same business problem, and sometimes the governance processes become a little bit more challenging. So it's good to understand the types of metrics and data points that they want to capture; that's all part of your end-to-end MLOps lifecycle.
A
In addition, we're seeing a lot of wants and needs around how these models are working for the governance team. Can you provide ways that explainability can give better insights, so they get better comfort with how these predictions are actually occurring, and confidence that there aren't biases in the algorithm or that the data isn't skewed? Explainability is becoming another very critical piece of how the governance team wants to work with this.
A
In addition, there are the pieces around the evaluation of the training results. The business has the opportunity, during training time, to say that this is right or this is wrong, and to update the value to give back to the data scientist, and then also, over time, as you're
A
looking at the model's predictions and the different scores and metrics you're capturing, it's important for those business teams to be able to understand them too, so over time they can feed that back into the loop and get better and smarter models through that process.
A
This next slide talks a little bit about what many of us have probably seen as a traditional CI/CD pipeline, and probably not a whole lot here, I would say, is different from what you've seen before: the ability for developers to put their code into some type of repository like GitLab or Bitbucket; the ability to trigger a build through something like Jenkins, using maybe Red Hat OpenShift to build it; storing that container image; going through the right scanning processes, testing, et cetera.
A
But one of the interesting points I wanted to call out, over here to the right, is the CD part of it. For the Ignite infrastructure, the CD part is pretty straightforward.
A
As we talked about before, you have the ability to deploy Kafka or PostgreSQL or different component trees into the ecosystem, but there's also a deployment that's probably not your traditional CI/CD: none of the containers, and none of the components that make up a use case, are deployed inside Ignite until runtime,
A
until someone calls that particular workflow. Once someone calls that particular workflow, the Ignite component builder reaches out into Artifactory, or JFrog, wherever you're storing your images, and pulls that container. Say it's pulling a classification model; then, based on the workflow inputs, it will know how many of those containers it needs to instantiate onto the cluster. So again, it could be one.
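A rough sketch of that instantiation step, assuming the Kubernetes Python client and a replicas-per-workload heuristic; the names, namespace, and sizing rule are hypothetical, and the real logic lives in the Ignite component builder.

```python
from kubernetes import client, config

config.load_incluster_config()  # running inside the OpenShift cluster
apps = client.AppsV1Api()

def scale_component(component: str, n_docs: int, docs_per_replica: int = 500) -> int:
    """Scale a component's Deployment to match the size of the workload."""
    replicas = max(1, -(-n_docs // docs_per_replica))  # ceiling division
    apps.patch_namespaced_deployment_scale(
        name=component,
        namespace="ignite",                            # hypothetical namespace
        body={"spec": {"replicas": replicas}},
    )
    return replicas

scale_component("clause-classifier", n_docs=2400)  # -> 5 replicas
```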
A
Okay. So with that, I just wanted to touch on a few of the key takeaways that we found important as we went through this. One, and we kind of hit on this a little bit, too,
A
is to make sure your business processes keep up with the technology needs. What I mean there is that we have a lot of processes today that businesses are working against that help them move models into production.
A
A lot of these may have been more traditional financial-services types of models that aren't necessarily the same as the types of models we're building today. So it's important that, as we're building these new capabilities, we still have the right governance and controls in place, but those business processes have to evolve just like the technologies themselves are evolving; sometimes we're pushing legacy processes against newer ways, which can cause a bit of a challenge.
A
The second one here to call out is containers, Kubernetes, and DevOps powered by a hybrid cloud. What this means is that there are so many places we can build these advanced analytical models: we can build them on-prem, we can build them in any number of CSP providers. How do we keep an ecosystem where it is easy and quick to bring something from pilot to production? Through containerization, through OpenShift.
A
How do you keep your architecture flexible enough and open enough that you can plug and play new capabilities and, at the same time, expand your architecture? You're never going to get everything right the first go-around; it's going to need to expand to take on new capabilities. So being flexible and open, able to plug and play new component trees, as well as being layered so you can build upon the foundation, is really important. And then the fourth one here is integrated training and upskilling of technical and business teams.
A
So with that, I don't have any other slides. I don't know if you have anything that you want to add.
B
I think that was great. There were a couple of questions in the chat; let me read those out. There's a question from Praveen: do you have a security layer on the models saved in MinIO?
A
So there are controls on the outside, but the RBAC controls themselves, specifically on the models in MLflow, are handled more through the Ignite pipeline and through the model IDs that are stored inside of MLflow. We're not using native MLflow capabilities from an RBAC perspective, and I will say it's probably not as secured as one would want in some enterprises. That's a good question.
A
Yeah, that's a really good question. So this is not the MLflow from Databricks; that's only available through the Databricks ecosystem on the cloud. One of the challenges we experienced with MLflow was the ability to serve models: it's not very robust at serving a model into the production pipeline. You may want to use something like Kubeflow, or, as you've seen, we created a custom connector, which we call Ignite Connect, that is leveraged to serve the model.
A
So they're not one and the same. Databricks did put some hardening around MLflow and does give you the flexibility to deploy those models in production via MLflow and the Databricks enhancements, but MLflow out of the box doesn't easily support the deployment of models in a scalable way.
A
At first it was really just getting the model out there and embedding it into our code base, which really wasn't advantageous, and then we evolved into what we have today. By doing this here and in other large organizations, we find it meets a lot of the needs.
A
However, I see that the biggest gap in this whole entire process is still around the explainability of AI: how well can we explain it, to give comfort to the regulators and the governance teams to move some of these models forward? I see that as being one of the biggest challenges still yet to solve.