From YouTube: Incubation Engineering - Exploring a Model API
Description
Welcome back to part two in this series exploring ideas for a new SEG. This video describes a generic model API; in a nutshell, it is basically the missing link between MLflow and actual production. This idea is the conclusion of me having to do the same thing over and over again at different customers, and learning from all of the successes and mistakes. Usually what I've seen happen at companies is: they have models, and those models all get an API in front of them.
Sometimes there's API management in front of it, but there's still a different API behind it, and every model has its own API. So people would use Python or R, and then they would use different libraries such as FastAPI or Flask, and then it's all over the place: every project would have its own specific setup. Which is far from ideal, because it costs a lot of money to develop and maintain those APIs. One of the other things that I've also seen is that they just keep on copying the same structure.

That makes implementation a lot easier, because the teams that you're collaborating with can quite easily get used to the structure of your API, and it's always the same for every model. It's just the data that's being sent to the API that is different.
You only have to work on the model, not the API or deployment. There's no dependency on external release cycles, because you have your own API, and that API probably won't change, because the spec for this is super generic. You can just connect the model. So, as a data science team, you could just deploy your model, make it available in the API, and that's it: clients can start using it.
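As a sketch of what such a generic contract could look like (the endpoint name, request shape, and model names below are my assumptions, not something specified in the talk):

```python
# Minimal sketch of a generic model API: every model registers under a name,
# the request shape is identical for all models, and only the model-specific
# "data" payload differs. All names here are illustrative.
from typing import Any, Callable, Dict

MODELS: Dict[str, Callable[[Any], Any]] = {}

def register(name: str, predict_fn: Callable[[Any], Any]) -> None:
    """Make a model available behind the shared API."""
    MODELS[name] = predict_fn

def predict(request: Dict[str, Any]) -> Dict[str, Any]:
    """Single generic endpoint: {"model": <name>, "data": <model-specific payload>}."""
    model = MODELS[request["model"]]
    return {"model": request["model"], "prediction": model(request["data"])}

# Two very different "models" behind the exact same contract.
register("double", lambda x: 2 * x)
register("sum", lambda xs: sum(xs))

print(predict({"model": "double", "data": 21}))
# {'model': 'double', 'prediction': 42}
```

The point of the sketch is that deploying a new model is just one `register` call; the API contract itself never changes.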
One of the things that's also very time consuming, usually, is that you need to get your solution approved, including governance. By having just one API, you don't have to go through that process again and again; probably the only thing that you still have to do is get governance approval for your model: which data are you using, etc.
Another thing that I would like to incorporate is services instead of models. A lot of teams consider a model to be a model, so there will be one model, but in reality there is probably more than one model, not least for canary deployments. So you would have your variant of a model, and, depending on the data, or maybe because you're training a new model, you would want to deploy it. But deploying it in one go is a bit risky, because you're not sure how your model is going to behave in production, so you would want to perform a canary deployment.
Let's say your first model is getting 60% of the traffic; you're going to introduce the new model with 30% of the traffic, and if the model is good, then you can increase it to 100%. Or maybe you want to just run two different models all the time: one will be better for the evening, and the other one better for the rest of the day.
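A canary split like that boils down to weighted routing between variants. A minimal sketch, assuming a simple probabilistic router (the variant names and 70/30 weights here are illustrative, not from the talk):

```python
# Sketch of weighted (canary) routing between model variants.
import random

def choose_variant(weights: dict, rng: random.Random) -> str:
    """Pick a model variant with probability proportional to its weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# The stable model keeps most of the traffic; the canary gets a small share.
weights = {"model-v1": 0.7, "model-v2": 0.3}
rng = random.Random(0)  # seeded only to make the demo reproducible
counts = {"model-v1": 0, "model-v2": 0}
for _ in range(10_000):
    counts[choose_variant(weights, rng)] += 1
print(counts)  # roughly 7000 vs 3000
```

Promoting the canary to 100% is then just a weight change, with no redeployment of the API itself.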
So then, what would it look like in a full ecosystem? You have your consumers as usual, whether that's a website, another API, a backend system; it doesn't really matter, as long as they are asking for the output of your models. You would have your centralized API that controls all of these requests and routes them to the correct service. You would have a log collector specifically designed for model evaluation.
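The talk only names the log collector; as a sketch of what it would need to capture per request so that variants can be evaluated afterwards (the field names and the in-memory sink are my assumptions):

```python
# Sketch of prediction logging for after-the-fact model evaluation.
import json
import time

PREDICTION_LOG = []  # stand-in for a real log collector sink

def log_prediction(model: str, variant: str, data, prediction) -> None:
    """Record which variant answered which request, and with what."""
    PREDICTION_LOG.append(json.dumps({
        "ts": time.time(),      # when the prediction was served
        "model": model,         # logical model name in the central API
        "variant": variant,     # which canary variant handled it
        "data": data,           # model-specific input payload
        "prediction": prediction,
    }))

log_prediction("churn", "model-v2", {"customer_id": 1}, 0.83)
entry = json.loads(PREDICTION_LOG[0])
print(entry["variant"])  # model-v2
```

With records like these you can compare variants on live traffic, which is exactly what the canary weights above need as feedback.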
You would probably have a feature store for your external features, and you would probably have a workflow orchestrator like Airflow to update models, deploy models, update your features, that kind of stuff. I'm super excited about integrating such a solution into GitLab, because I think it will make it a lot easier for users to deploy models and evaluate them. We already have all the components for other types of applications; we just have to combine them in order to be able to serve models in the way that I've just described.