From YouTube: Lightning Talks: Clive Cox: Seldon
Description
Lightning Talks: Clive Cox: Seldon https://www.youtube.com/playlist?list=PLaR6Rq6Z4IqeU2tGNgcBaodEWvuDFtqu2
Hi, I'm from Seldon. We're based in the Barclays techhub in London; it's an accelerator with 20 to 30 companies in it, and we run the TensorFlow London workshop every month.
So if you're in London, it would be great to have you there, come join us; we do talk about TensorFlow. As a company we work on machine learning deployment on Kubernetes, and we also do consulting in the FinTech area, doing machine learning for equity prediction and various other things. So where do we stand as a company, and what exactly do we do in terms of our product?
We view the machine learning pipeline as a series of steps, from training data ingestion, analysis and validation of your model onwards. Seldon Core, which is our open source project and what I'm going to talk about today, focuses purely on machine learning deployment: after you've done the training and you've got to the model you want to deploy, you can serve your predictor at scale, monitor it, do analysis, and do rolling updates to your machine learning in production. We're also part of the Kubeflow ecosystem.
So you can choose Seldon Core to deploy your models on Kubeflow as one of the options: you can choose TensorFlow Serving, or you can also choose Seldon Core. So how do we fit in, and how does it all work?
Once you've got your Kubernetes cluster, you can install Seldon Core on it with our Helm charts or with ksonnet; we've got our own ksonnet registry, which was one part of Kubeflow.
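For instance, a Helm install might look roughly like this; the chart name and repo URL here are assumptions and vary by Seldon Core version:

```
helm install seldon-core --name seldon-core \
  --repo https://storage.googleapis.com/seldon-charts
```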
And then the next step is to package up your machine learning runtime; for that we use the s2i tool, and that's what I'm going to explain today.
So that takes the source code of your machine learning prediction component and packages it up as an image, and we can then manage that container, which is going to serve predictions in your graph.
The final part is to actually create your runtime graph, which is just saying how your components are going to fit together.
So your models, A/B tests and other things you might do as part of your machine learning pipeline fit together at runtime; you define that as a resource and deploy it, and we have an operator that will understand that resource and deploy and manage that graph.
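The resource the operator consumes is a Kubernetes custom resource. A rough sketch of a minimal one, assuming the SeldonDeployment CRD shape (exact apiVersion and schema vary by release, and the image tag is hypothetical):

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: iris-example
spec:
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: iris-classifier:0.1   # image built with s2i (hypothetical tag)
    graph:              # the runtime graph: here a single model node
      name: classifier
      type: MODEL
      children: []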
Basically, what we're trying to do is allow machine learning data scientists to just use any toolkit: Spark, TensorFlow, scikit-learn.
They can use any toolkit they're using now, and we just manage the runtime prediction for their models. For that they just need to do two things: dockerize their runtime component, and expose it using our REST and gRPC APIs. They can do that themselves, but we want to make it really easy for them.
So for that we're using Red Hat's source-to-image (s2i), an open source tool. Suppose you want to use source-to-image: there are two parts to this.
You have your code that you want to package up; here we've got a prediction component in Python. Then we have a set of builder images that we provide: Python, R and Java builder images that allow you to package up your source code into an image. We provide all the dependencies, and we provide the scripts, in this case an assemble script to say how your source code is going to be packaged up with our dependencies, and a run script for how it's going to be run. These are the scripts required by s2i, and once you've got those there, you can then use the s2i tool, and that will package it up and do all the work.
So this is just a quick example of using s2i to build from the current directory (it could also be from GitHub), using our Python builder image, and it's going to output this Python classifier image.
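The command line might look like this; the builder image and output image names are assumptions based on Seldon's published s2i builder images:

```
s2i build . seldonio/seldon-core-s2i-python3 iris-classifier:0.1
```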
And the first thing you need to do is have your runtime component; here's one for a standard scikit-learn Iris classifier in Python.
They can then optionally supply a set of requirements saying what packages they need, scikit-learn, SciPy and so on, and those will be included in the image. Then they just need to provide a set of settings for how we're going to package that image.
So one is what the class is going to be called, in this case IrisClassifier, so we can find it when we package it. Another is how you want to expose the API: REST or gRPC are the two APIs we handle right now. And another is what this component is: is it a model? We also have other types that allow you to do A/B tests or ensembles in different forms, and things like that.
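With the Python wrapper these settings typically live in a `.s2i/environment` file; a sketch, with variable names taken from Seldon's s2i wrappers and values matching the Iris example above:

```
MODEL_NAME=IrisClassifier
API_TYPE=REST
SERVICE_TYPE=MODEL
PERSISTENCE=0
```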
So once you've done that (and you can actually provide the environment as part of the command line, or as part of the source code), you just run a single s2i line and that will build your runtime image and package it, and then we can deploy it onto your cluster.
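Passing the settings on the command line instead of in the source tree might look like this, using s2i's `-e KEY=VALUE` environment flag (image names are assumptions as before):

```
s2i build . seldonio/seldon-core-s2i-python3 iris-classifier:0.1 \
  -e MODEL_NAME=IrisClassifier -e API_TYPE=REST -e SERVICE_TYPE=MODEL
```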
So really what we're trying to do is make it really easy for people to take their runtime components, package them up, describe the graph of what they want to deploy, and deploy it out there on Kubernetes.
Then we deploy it, it's managed by our operator, and then you can go into the virtuous loop of updating your components: doing A/B tests, canary rollouts and all the sorts of things you need to do in machine learning in production to actually keep that machine learning component updated and running.
So just the final slide, a few call-outs: there are two source-to-image deep dives and intros on Thursday and Friday, and I'm going to go into more depth on Seldon Core, which is the stuff I work on, on Friday.