From YouTube: CDF SIG MLOps Meeting 2020-08-13b
C: I've been in the earlier meeting, with the other time zone. Oh, okay, yeah.
B: So, probably, since I don't have a background on your side, I'm all, like... [inaudible].
C
Yes
sure
so
I'm
I'm
a
tech
entrepreneur,
I'm
a
you
know
like
a
hands-on
technical
guy.
I
was
a
contributor
to
kubernetes.
C
Personally,
you
know
I
just
yeah,
I'm
because
in
my
previous
company
I
was
the
cto.
So
that's
what
I
do
and
now
I'm
looking
nowadays
for
innovation
in
the
envelopes
world.
So
I
thought
hey
it's
gonna.
It
can
be
a
good
idea
to
contribute
to
the
community
and
learn-
and
you
know,
collaboration
so
I'm
here
to
help.
C: In general, you know, the field right now is super early and I'm just learning, because I'm trying to be humble when I'm approaching new stuff. I think that the monitoring area is really, really interesting, because there are no good solutions there, at least from the interviews I'm doing. Again, I'm here to learn and help, and, you know, I think that's the best way for me to get to know it; it's like when you contribute to open source.
B
Typically,
there
is
the
general
monitoring
ecosystem
right,
which
has
emerged
around
kubernetes
right,
which
is
the
more
info
level
monitoring
around
cpu
gpu
utilization,
incoming
request
response
times
tracing
so
that
if
you're,
looking
at
this
machine
learning
platform
being
built
on
top
of
communities,
that's
that's
the
one
being
leveraged.
I
believe
you're
more
looking
for
advanced
monitoring,
which
is
in
the
context
of
the
models,
etc.
Right.
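(For reference: the infra-level monitoring described above is typically built on Prometheus. Below is a minimal sketch of an alerting rule over request latency; the Knative metric name `revision_request_latencies_bucket` and the 500 ms threshold are illustrative assumptions, not something stated in the meeting.)

```yaml
# Hypothetical Prometheus alerting rule for infra-level serving metrics.
# Assumes Knative's revision_request_latencies histogram (milliseconds)
# is being scraped; metric name and threshold are illustrative.
groups:
  - name: inference-infra
    rules:
      - alert: HighInferenceLatency
        expr: |
          histogram_quantile(0.99,
            sum(rate(revision_request_latencies_bucket[5m])) by (le)) > 500
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: p99 inference request latency above 500 ms
```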
C: Yeah, and you know, the connection between these two, that's when it becomes interesting, because you need to know how the model behaves, and how is that related to the performance? And, you know, how do you wrap it all together without writing your own code? Because everyone has those nowadays; you know, you have Seldon, or you have Kubeflow, or TensorFlow.
B: Yeah, no, I hear you. And I mean, if you look at more advanced... depending on, you know, if you're looking at models deployed in production and if you want to monitor them, right, it's Seldon, you mentioned it right there, and it's KFServing, which is, you know, Kubeflow serving; both of them are very, very active in that area. Right, so it's a combination, then, of, you know, the traditional infra-level monitoring around...
C: Are you aware of KFServing? I haven't tried it, but I try to learn, you know, from reading.
A: Yeah, so we're just coming to the end of the work on the technical requirements section of the document, so really, last call for any contributions on the challenges table and technical requirements tables. In the next session we expect to finalize those and then move on to just looking at the potential solutions table. So hopefully we're now relatively close to finalizing the first draft of this year's roadmap but, as I say, happy to accept any contributions over the next couple of weeks.
B: Okay. I mean, if that's something... you know, would you want to provide a pointer to it as well, right, where it is, so that if he wants to go through it and take a look, he can see if he can contribute?
D: Yeah, this is my first meeting. I just joined, like, the Slack and the working group.
D
I
work
at
docusign
and
we've
just
been
working
on
building
out
like
the
the
low
like
the
data
lake
infrastructure
for
the
company
and
getting
past
that
I
want
to
like
get
back
into
the
ml
space,
and
so
we
use
a
lot
of
managed
services,
at
least
when
our
team,
so
my
interest
is
in
say
if
you
have
like,
if
you're
using
some
sort
of
canned
compute
to
serve
a
model.
D
What
is
the
like
ml
specific,
like
monitoring
like
what
does
a
dashboard
look
like
to
monitor
the
health
of
the
the
model
itself
around
that
and
like
what
are
the
best
practices?
And
what
is
does
that
look
like?
So
that's
my
interest,
but
I
also
happy
to
learn
about
just
about
anything
in
the
space
I
like
it.
B
What
what
did
you
end
up?
Choosing
and
using
for
as
technologies.
D: We've been working with Spark for the most part, EMR Spark, I think, depending on what the use case for ML is. If it didn't have to be something really low-latency, I would probably try to throw it into a Spark job like that. But then, sort of, I'm curious: how would we build a system, and, like, what kind of regular checks would you have to run?
B: You know, I want to add to that, Nathan, as I was mentioning, right. I mean, this project, it's probably, you know... I mean, we don't support Spark-based models, right. There is custom model support, which means, you know, you can bring anything inside a custom container, but out of the box it's, you know, TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX and TensorRT, right, which is supported. But I think, you know, it would be great to have someone... so far...
B
You
know
we
haven't
heard
much
in
terms
of
serving
spark
ml
based
models
right,
but
that's
something
which
you
would
like
to
contribute.
The
the
core
area
is
like.
You
know
in
the
sense
that
let's
skip
this,
there
are
multiple.
As
I
mentioned,
you
know,
companies
contributing
here
right,
so
it's
being
driven
by
the
requirements
right
which
is
being
raised
by
the
participating
vendors.
So
so
far,
spark
ml
hasn't
surfaced
up
to
the
top
right.
So
but
I
would
say
things
like
cat
boost
right.
D: I think a lot of the Spark MLlib models can save an ONNX file, maybe a PMML file; I'm not sure.
B
Yeah
and-
and
this
is
a
pretty
plugable
right
in
a
sense
that
you
can
build
your
own,
like,
for
example,
in
for
tensorflow,
we
use
tf
serving
behind
the
scenes.
B
Nvidia's
tensor,
rt
server
is
used
for
python
sheet
boost
and
circuit
learn.
We
use
our
own
custom
servers
right,
which
are
python
based
servers
plugged
into
kf
serving
right.
So
it's,
I
would
say
it's
probably
two
to
three
weeks
of
effort
to
spawn
a
model
serving
back
end
for
a
particular
kind
of
model
right.
B
So
if,
as
I
mentioned,
it's
built
in
that
way,
so
that
you
can
bring
your
own
time
or
the
other
part
is
you
know,
if
spark
ml,
has
a
production
model
serving
run
time
right
and
which
can
be
plugged
into
that?
That
can
be
one
angle
as
well.
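(A minimal sketch of the bring-your-own-container route described above, against the KFServing v1alpha2 `InferenceService` schema that was current around this meeting. The image name is a placeholder, and the container is assumed to expose the KFServing HTTP prediction protocol.)

```yaml
# Hypothetical custom predictor: a user-supplied container (for example a
# Spark ML runtime) plugged into KFServing. The image name is a placeholder.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: spark-ml-demo
spec:
  default:
    predictor:
      custom:
        container:
          # Container must implement the KFServing data-plane HTTP API.
          image: example.com/spark-ml-server:latest
```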
B: I think the key advantage you start getting here is that, you know, there is this concept of a transformer, right. So, a lot of the pre-processing and post-processing code, which you bundle together with your model... and most of the models in production, right, you're probably running on GPUs. You don't want the preprocessor and the post-processor to consume your GPU horsepower, right.
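(A sketch of that transformer/predictor split, again against the v1alpha2 schema; the image and bucket names are placeholders. Only the predictor requests a GPU, so the pre/post-processing container lands on CPU.)

```yaml
# Hypothetical InferenceService: CPU-only transformer for pre/post-processing,
# GPU-backed predictor for the model itself. All names are placeholders.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: gpu-model
spec:
  default:
    transformer:
      custom:
        container:
          image: example.com/preprocessor:latest  # runs on CPU
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/model
        resources:
          limits:
            nvidia.com/gpu: 1                     # predict code only on GPU
```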
B
So
the
transformer
gives
you
a
pluggable
mechanism
where
you
preprocessor,
post
processor,
they're
all
running
on
cpus,
but
the
main
model
predict
code
is
on
gpus
right
so
and
then
there
is,
you
know
the
standardization
effort
around
the
data
plane
protocol
right.
So
the
goal
is
that
you
know
you
are
not
going
to
change
or
you
should
not
be
changing
your
model
client
code.
If
you
are
moving
from
care
serving
to
selden
to
tensor
rt
to
tf
serving
right.
B
So
we
are
standardizing
around
the
data
plane
as
well
yeah,
so
I
mean
for
someone
in
the
communities
world.
It's
pretty
straightforward
right.
I
mean
it's
defining
ammo's,
where
your
model
is
what
kind
of
model
it
is
and
that's
pretty
much
what
you
need
to
do
and
very
easy
to
define
your
default
and
calories
if
you're
rolling
out
canary
versions-
and
you
want
to
route
a
particular
percentage
of
the
traffic
to
your
cannery.
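(A sketch of that default/canary split, using the v1alpha2 `canaryTrafficPercent` field; the storage URIs are placeholders.)

```yaml
# Hypothetical default/canary rollout: 10% of traffic goes to the canary model.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: my-model
spec:
  canaryTrafficPercent: 10
  default:
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/model-v1
  canary:
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/model-v2
```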
B
That
is
supported
right
and
then
you
know
behind
the
scenes.
As
I
mentioned,
pretty
consistent
where
you
can
use
gf
serving
or
you
can
use,
nvidia
stripe
on
inference,
server
high
torch,
all
with
the
same
syntax
and
semantics
all
right
and
we've
seen
a
lot
of
these
right.
There
is
the
main
predictor
which
we
are
saying
run
on
gpu
and
then
the
transformer
which
is
you
know
the
pre
and
post
processing.
B
We
are
not
saying-
or
we
are
not
asking
this
system
to
run
it
on
gpus
right
and
then
there
is
auto
scaling
built
in.
We
leverage
the
generative,
auto
scaling,
which
is
request
based
and
queue
based,
auto
scaling,
as
opposed
to
a
scaling
based
on
the
resource
consumption
right,
because
the
scaling
based
on
resource
consumption
of
gpus
and
cpus
in
general
is
not
going
to
be
consistent.
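(The request-based autoscaling described here comes from Knative's autoscaler. A sketch, assuming the `autoscaling.knative.dev/target` annotation is passed through to the underlying Knative revision; the replica bounds use the v1alpha2 predictor fields, and all values are illustrative.)

```yaml
# Hypothetical request-based autoscaling: target ~5 in-flight requests per
# replica (Knative autoscaler), bounded between 1 and 10 replicas.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: autoscaled-model
  annotations:
    autoscaling.knative.dev/target: "5"  # assumed to propagate to the revision
spec:
  default:
    predictor:
      minReplicas: 1
      maxReplicas: 10
      tensorflow:
        storageUri: gs://example-bucket/model
```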
B
So-
and
this
gives
you
a
much
better
mechanism
and
then
you
know
you
get
a
very
strong,
solid
mechanism
to
send
traffic
to
both
default
and
canary
split
them
according
to
the
percentage
you
want
test
it
and
then
you
can
use
it
yeah.
So
I
think
if,
if
there
is
an
interest
into
this
right
and
then
there
is
a
lot
of
the
monitoring
capabilities
right
now
to
to
the
point
which
was
draw
brought
up
right
in
a
sense
that
we
have
a
dashboard
which
brings
everything
together
now.
B
So
there
is
not
a
single
ui.
We
are
working
on
a
ui
within
ibm
currently,
which
I
do
intend
to
open
source
at
some
point
right.
But
all
of
this
is
command
line
driven
currently
right,
but
you
can
get
more
advanced
capabilities
around
outlier
detection,
adversarial
detection
concept
drifts
all
right,
so
we
collect
payload
logs
and
by
virtue
of
you
know,
collecting
these
payload
logs.
B
We
can
enable
your
payload
locks
to
go
to
your
persistent
backend,
whether
it's
you
know
kafka
or
kd,
broker
or
relational
db,
and
then
run
analysis
on
top
of
it
right
to
provide
these
advanced
monitoring
characteristics
right.
Additionally,
you
can
get
explanations
right,
you
can.
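(Payload logging in KFServing v1alpha2 is configured per component with a `logger` block; a sketch, assuming any HTTP sink, for example a Knative broker feeding Kafka, at the given URL, which is a placeholder.)

```yaml
# Hypothetical payload logging: mirror request/response payloads to an HTTP
# sink for later drift/outlier/adversarial analysis. URL is a placeholder.
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: logged-model
spec:
  default:
    predictor:
      logger:
        url: http://broker-ingress.example/default/payload-logs
        mode: all  # log both requests and responses
      sklearn:
        storageUri: gs://example-bucket/model
```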
B
This
is
a
project,
for
example,
which
ibm
open
source.
Now
it's
part
of
linux
foundation,
ai
right,
so
that's
also
integrated
in
terms
of
getting
and
things
like,
outlier
detection,
adversarial,
detection
concept,
drift
right,
they
are
integrated
as
well.
So
I
would
say
this
probably
at
this
point
is
the
most
advanced
one
in
the
space
right.
If,
if
you.
B
Are
looking
at
you
know,
production
model
serving
and
monitoring
platform
right
in
in
terms
of
the
things
like
you
know,
we
don't
have
a
dashboard,
bringing
everything
together,
yeah
but
yeah
so
somewhere
you
know
it's
it's
also
like.
B
Thanks:
okay,
anything
else
else
we
can
give
some
time
back
yeah
and
if
you
need
to
know
more
about
care
serving
where
to
contribute
where
the
meetings
happen
etc.
I
mean
just
you
know,
paying
me
on
the
mlobs
sick
channel
and
then
you
know
I
can
redirect
you
to
the
right
places.
C: Can I invite you to a Zoom call with my partner and ask you some questions about your challenges in your organization, etc.? It looks like you've met a lot of, you know, issues, and you handled them in a specific way, and maybe you can help us find some things that you think we should solve.
C: Okay, thank you, so I'll PM you over there. Okay, okay, thanks, man! Thank you very much, I appreciate your time. Thank you. Bye.