From YouTube: ML on OpenShift SIG Briefing: OpenShift S2I Helping ML Deployment with Seldon-Core by Clive Cox
From the OpenShift Commons Machine Learning on OpenShift SIG meeting, June 1, 2018.
So basically, we want to allow people to train using any ML tools, so TensorFlow, Spark, really any tools you can think of, including the wider ecosystem of tools such as Kubeflow and new things like IBM's framework for deep learning. We don't want to restrict data scientists in how they train their models, but once they're trained, we want to be able to deploy them, scale them and manage them.
On top of Kubernetes, Seldon Core allows people to create runtime microservice graphs of ML components that describe their runtime components, so not just models but more complex things, and I'll give an example of what that would look like. You can then deploy those and expose them automatically with REST and gRPC endpoints that you can tie into your business apps, and manage the whole deployment aspect. So yeah, you've seen this slide.
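To make the endpoint side concrete, here is a rough sketch of calling a deployed graph over REST from Python. The path and payload shape follow Seldon Core's prediction API as it looked around this time, and the host and deployment address are placeholders you would replace with your own ingress; treat this as an illustration, not a reference client.

```python
import json
import urllib.request

def build_payload(features):
    """Wrap a batch of feature rows in Seldon Core's prediction message shape."""
    return {"data": {"ndarray": features}}

def predict(host, features):
    """POST features to a Seldon deployment's REST endpoint (placeholder URL)."""
    req = urllib.request.Request(
        f"http://{host}/api/v0.1/predictions",
        data=json.dumps(build_payload(features)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example payload for one row of three features:
# build_payload([[1.0, 2.0, 3.0]])
```

A business app would call predict("my-cluster.example.com", rows) and read the returned scores out of the response body.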
So, what do you need to do to get up and running using Seldon Core? Three steps, really. We've got Helm and ksonnet package manager charts to deploy Seldon Core onto a Kubernetes cluster, so it's really easy, and that's also part of Kubeflow; you can use the ksonnet registry as well. And then there are two further steps.
The first is to wrap your runtime code so that it exposes the appropriate APIs that we have as part of our microservice mesh, and I'll show you how that's done. The second is to describe how those runtime components fit together. We have our own custom resource definition that allows you to describe that runtime graph, and then you can just apply it using kubectl, like any other resource, to deploy it onto your cluster.
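As a sketch of that custom resource, the snippet below builds a minimal single-model SeldonDeployment as a Python dict. The apiVersion and field names are from memory of the early versions of the CRD and should be checked against whatever release you install; the deployment and graph names are made up for illustration.

```python
import json

# A minimal SeldonDeployment-style resource describing a one-model runtime
# graph. Field names follow the early SeldonDeployment CRD and may differ
# across versions; treat this as a sketch, not a reference.
deployment = {
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {"name": "mnist-classifier"},
    "spec": {
        "predictors": [
            {
                "name": "default",
                "replicas": 1,
                "graph": {
                    "name": "classifier",
                    "type": "MODEL",
                    "children": [],
                },
            }
        ]
    },
}

# Serialised, this is what you would hand to kubectl apply (as YAML or JSON):
manifest = json.dumps(deployment, indent=2)
```

More complex graphs extend the children list, nesting transformers, routers and models under the root node.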
So, just a quick description of the sort of complex graphs we're trying to allow people to build. The simplest thing: you have an API gateway, requests going to a model that's going to give you predictions, so you're sending in features and getting out predictions, and we allow you to deploy that. But then we want to allow people, with no downtime, to do rolling updates, to say turn it into an A/B test.
You might have that, but then you might want to separate out your runtime components, to have the feature transformation in a separate Docker container; you can do that, adding it into the graph to handle the feature transformation as a separate component. And then you might want to add in components that we think will be useful, such as outlier detection or concept drift, so you can get an understanding of what's happening at runtime with your machine learning deployments as requests come through.
Again, on the open source side, we have an outlier detection module you can use, which tries to build up a distribution of what it's seeing as requests come in, and it will then add a piece of metadata to the responses going back, saying whether it thinks that particular request is an outlier. You can add those into the graph as well, and then there are other components, such as trying to get explanations.
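The outlier component can be pictured as keeping a running distribution of incoming values and tagging responses whose inputs fall far outside it. The toy z-score detector below is purely illustrative and is not Seldon's actual module; a real detector would work over full feature vectors with a richer model of the distribution.

```python
import math

class RunningOutlierDetector:
    """Flags a value as an outlier when it is more than `threshold`
    standard deviations from the running mean, maintained online with
    Welford's algorithm. Illustrative stand-in for a real detector."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def observe(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_outlier(self, x):
        if self.n < 2:
            return False  # too little data to judge
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > self.threshold

    def tag_response(self, x, response):
        """Attach outlier metadata to a response, then fold x into the
        distribution, mimicking how the module annotates traffic."""
        tagged = dict(response, outlier=self.is_outlier(x))
        self.observe(x)
        return tagged
```

Dropped into the graph, a component like this sees each request on the way through and enriches the response going back without the model itself changing.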
So maybe your model is a deep learning model, which is hard to explain, and you need to get high-level explanations of what features were being used for particular requests; you can pass this back to stakeholders as to why certain predictions were being made. We'll work on those as well, and those can be added in. All these components are also pluggable, so it's not only components from us, as part of our work, that you can add.
You can add those in as components as part of your work as data scientists, and we expect, as the community grows, that some of these components will be provided by others, doing various pieces of functionality that can be added into the runtime graph. That's the sort of thing we're trying to allow people to do in terms of the runtime part of machine learning deployments. But the first step is obviously to get your code out there into something that can be run inside Seldon.
So, to allow people to do that really easily, we're using technology like Red Hat's OpenShift Source-to-Image, and I'll describe how that makes it easier for data scientists to just take their code and package it up very quickly so it can be run inside Seldon. So, OpenShift Source-to-Image: basically, what this allows you to do is this. You've got your code there on the left; that's the prediction part from the data scientist, and they can just concentrate on doing the core stuff.
Getting the predictions out: here they just define a predict method in Python, for some model they've loaded, and then we provide the packaging components to actually help wrap that into a Docker container. So, Source-to-Image allows developers to create builder images.
These builder images allow you to package up various runtimes with the right dependencies. We provide a set of different builder images for Python, R and Java, and we'll extend that to further languages in the future. You then choose the builder image that fits your use case; in this case the code on the left is Python, so you choose one of our Python builder images, and then you can just use s2i to wrap your code quickly into a Docker image so that we can run it inside Seldon.
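The contract the Python builder image expects is essentially a class that loads its model once and exposes a predict method over batches of rows. The dummy class below sketches that shape; the exact signature should be checked against the Seldon wrapper documentation, and the uniform scores here stand in for a real trained model.

```python
class MnistClassifier:
    """Minimal model class in the shape the Seldon Python wrapper expects:
    load the model once in __init__, answer batches in predict(). The
    'model' here is a dummy stand-in for an actually trained one."""

    def __init__(self):
        # A real implementation would load a trained model from disk here.
        self.n_classes = 10

    def predict(self, X, features_names=None):
        # Return one probability row per input row; a trained model would
        # return real class scores instead of a uniform distribution.
        return [[1.0 / self.n_classes] * self.n_classes for _ in X]
```

With a file like this in the source directory, the builder image wraps it into a container that serves the REST and gRPC APIs on the data scientist's behalf.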
That makes it very easy for the data scientist to just concentrate on the core ML code for doing the predictions; they just choose one of the builder images. So how does that work in practice? Here's a straightforward command line at the bottom: you run s2i build on your code directory, you've chosen to use our Python builder image, and that's the output Docker image you want to create, on the right there.
So, just to summarize the steps you need to follow to use Seldon. One: you can train using anything, and then you just package it, and for this we use s2i. Then you describe your deployment, which I won't go into in so much detail; you describe your graph, as I showed you, inside a custom resource describing what you want to be deployed, and then you fire that off at the Kubernetes API. You can then monitor it and obviously update it as you go along.
I just want to quickly show a couple of recent projects that we've integrated with, recent examples of how you can use this technology. One is a PySpark and PMML demo in a GitHub repo, and in this one I won't do a live demo; I'll just quickly run through the code. You've got code here that runs a PySpark training session, in this case on MNIST, and then you export the final model as PMML into a file.
Then here we just use s2i to wrap a Java program that actually serves the PMML as the evaluator. You can run that, and that's what's going to do the runtime predictions. In this case we've got a little piece of Java code that's using a library, JPMML, which interprets PMML-exported models, and you can use that to actually perform predictions. So in this case the data scientist would just concentrate on writing this little bit of code.
It does the predictions using the JPMML library, and then you just build that into an image and deploy it. So it's really easy, and there's a notebook here to try it out; you can test it on Docker and Minikube. And the second project we've integrated with is IBM's Fabric for Deep Learning, FfDL.
You can find it under FfDL; there's a folder with the integration that allows you to then deploy the final trained models on Seldon. In this case there's an example that again does a TensorFlow model on MNIST, trains it using FfDL, and then you can wrap the runtime scorer, again using s2i, really easily into a component which we can then deploy onto the Kubernetes cluster where FfDL is running, to actually do the runtime scoring. In this case, again, it's Python, and there's a little bit of code here.
That code is loading the trained model from S3: FfDL stores the trained models in S3 buckets, or an S3-compatible object store, and then again the data scientist just focuses on the prediction part at the end that actually does the predictions. That's the code that we wrapped with s2i and deployed. And again, here's the example graph, the custom resource, where you're defining some of the environment variables needed to load your code, and then you define the graph at the bottom.
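That pattern, environment variables from the custom resource telling the wrapped scorer where its trained model lives, can be sketched as below. MODEL_BUCKET, MODEL_KEY and the fetch_model callable are hypothetical names invented for this sketch; a real version would fetch from the S3-compatible store with a client such as boto3.

```python
import os

class FfdlModel:
    """Sketch of a scorer whose model location comes from environment
    variables set in the custom resource. The fetcher is injectable so
    the S3 client (e.g. boto3 in a real version) can be swapped for a
    stub in tests; variable names here are hypothetical."""

    def __init__(self, fetch_model, env=os.environ):
        bucket = env.get("MODEL_BUCKET", "models")
        key = env.get("MODEL_KEY", "mnist/latest")
        # Fetch and deserialise the trained model once, at startup.
        self.model = fetch_model(bucket, key)

    def predict(self, X, features_names=None):
        # Delegate each incoming row to the loaded model's scorer.
        return [self.model.score(row) for row in X]
```

The deployed container then only needs the right environment set by the SeldonDeployment resource to find and serve whatever FfDL trained.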
That's what's deployed onto the cluster, and then you can run that. So we're trying to build up a whole set of different examples for different use cases with different projects, to make it easy for data scientists who want to use different types of training libraries and different projects of their choice to get their runtime deployments pushed out onto Kubernetes and managed. So yeah, back to the next steps.
Some of the things we want to do: obviously have some more OpenShift-specific examples; it would be great to have those out there. We want to use some of the Red Hat CentOS images for the core components, so that's all compatible with OpenShift; we'll be working on that in the coming weeks, and also adding Seldon to the OpenShift Service Catalog.
Really, that's an amazing presentation. I love to see the way that you're integrating in the community here and picking up the pieces that really fill out the solutions for ML, and the way you fit into the Kubeflow picture is also really well articulated. That was great, thank you. Cool.