From YouTube: CDF SIG MLOps Meeting 2020-09-10a
A
All right, hi there, how are you doing? Good? Peak-hour morning time for meetings in Europe? That's good. So I haven't looked at much since last time. Actually, I don't think I've looked at anything since last time.
However, I did manage to sell and buy a new car, and it's a British car: I'm buying a Mini. Oh, rather, my wife's buying a Mini; she's chosen one, which is good. So that was taking a bit of time, and a lot of other things, but yeah. So what have we got next? It felt like we were on the home stretch.
B
Yeah, so what we need to do next is just work through the rest of these timeline charts, and then we need to think about what we want the conclusions to be for this.
A
Yeah, and then how to promote it: what posts and content to do and share. And I guess in that case we could probably lean on some professional outside help for marketing, and for how to get things at the right time in the right places. We could even get things placed in, you know, the press, shall we say; things like that might be worthwhile considering if we have those ways to get the word out there, and that'll bring other people along as well, yeah.
A
To follow up with the fellow from Netflix who was doing the Metaflow thing, because he was very interested in it and he was going to have a look at it. I think it'd be great, if we are going to push this out, to have some sort of names out there. So I think that's something we can look at, so yeah.
A
Sorry, which... how many rows down is this? In the solutions? "Governance processes", and I can just search for it.
B
So this is quite a complex requirement, and you know, I expect to see parts of this being evolved over time.
A
Yeah, I think where people are at right now, certainly sort of where I am, is: you train something, and then you use it for a while, and you retrain it and use it for a while. I feel like I'm a long way from being mature enough to have this kind of thing, so yeah. It feels like all the tools sort of exist, or there's tools that probably could do this that exist, but it's partly...
B
So the next one is management of shared dependencies between training and operational phases. So this is about keeping things in sync between what you're doing on the training side, in your scripts, and what you're doing in deployment.
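The sync problem being described here is often handled by factoring the feature preparation into one shared module that both the training script and the serving code import, so the logic cannot drift between phases. A minimal sketch; the module name, field names, and function below are hypothetical illustrations, not anything from the meeting:

```python
# shared_features.py (hypothetical shared module): both the training
# pipeline and the deployed serving code import this single definition,
# so the training and operational phases stay in sync by construction.

def prepare_features(record: dict) -> list:
    """Turn a raw input record into the model's feature vector."""
    return [
        float(record["age"]),
        1.0 if record.get("country") == "GB" else 0.0,
        float(record.get("purchases", 0)),
    ]

# Training side (sketch):  X = [prepare_features(r) for r in raw_dataset]
# Serving side (sketch):   y = model.predict([prepare_features(request_body)])
```

Pinning both phases to the same version of such a module, for example via the same package version in both environments, is the dependency-management piece the requirement points at.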
B
Now, it's definitely a technical challenge.
B
So we should probably just mark this one as all gray, and then just keep pushing to make sure that those features are available in...
A
Is this analogous to something like an application binary interface, an API, or a VM image? Is that the analogy here: a machine-readable binary format that's standard?
B
This is kind of like JSON, as opposed to serialization, right. So, you know, an example of this that exists already is ONNX.
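The JSON-versus-serialization distinction can be shown with a toy example, where the "model" is just a dictionary of learned parameters (purely illustrative): a declared, portable format, JSON here, ONNX for real models, can be read by any runtime, while native serialization such as pickle ties the artifact to one language and library stack.

```python
import json
import pickle

# Toy "model": just learned coefficients (illustrative, not a real model).
model = {"weights": [0.4, -1.2, 3.0], "bias": 0.5}

portable = json.dumps(model)   # language-neutral, inspectable text
opaque = pickle.dumps(model)   # Python-specific byte stream

# The portable form round-trips, and any runtime could parse it;
# the pickled form can only be loaded back by compatible Python code.
restored = json.loads(portable)
assert restored == model
assert isinstance(opaque, bytes)
```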
A
Yes, I mean, it feels like that's a while away, because even if I think of, you know, the format for virtual machines, that took a while, and even now there's multiple ones. So there might not be one of these, but it feels like people wouldn't really want to settle down on this yet, would they? It needs a few more years of the use cases emerging before people will accept: okay, this is the...
B
So as an example, with MLOps in Jenkins X we rely on ONNX for most of the examples, and so it's working today for a limited subset of machine learning types.
B
Yeah, well, I think we'll go straight into continuous improvement on this, because really we're...
B
In continuous improvement to a certain stage; but to have the full capability that this implies, I think there's a couple of years of development, and then into continuous integration.
A
Longevity of ML assets: that seems very... is that actually a separate row? Yeah. That seems very related to that, doesn't it? So that's at least as long as the abstraction layer.
A
You know, the Java analogy is interesting. It's like the JAR standard: "put things in a zip file in a certain way" came along, but then there's the content of the class files, which is specific to some aspects of architecture or the version of the JVM, and then maybe you've got things like, you know...
A
You've got Nexus and Artifactory for artifacts like that; that was part of the maturity there. And things like Maven Central existed, so that you can depend on an artifact instead of depending on its source. To me, that feels like ML.
B
Yeah, and to an extent it's a bit more like long-running processes in a workflow environment, where what you're doing is kicking off a process and then it goes on hold for six months while somebody does something, and then you come back to it and you need to reinstantiate that, yeah.
B
And then, managing and tracking trade-offs.
A
And then, yeah, I can sort of relate to this one. It's not easy. I was training models to estimate the size of things, you know, with some success, and then I thought: those models can be per account, if you like, per organization, but they're flexible enough to be multi-organization.
A
So
instead
of
having
n
models
for
n
organizations,
I
could
have
one
model,
but
that
would
mean
I'm
taking
data
from
unrelated
organizations,
mixing
it
together
and
training
the
model
and
that
sort
of
felt
like
a
weird
thing
to
do.
Probably,
for
this
reason
it's
like,
if
it
had
to
be
explainable
from
the
source
data,
then
it's
like.
Oh
you've
got
data
from
xyz
and
data
from
abc
they
come
together.
A
And the remnants of that remain in the model. But yeah, in some ways, I don't know if tooling could solve that for me, really, yeah...
B
So yeah, I think there's at least one black bar in terms of research needed, and then probably three years of development effort to...
A
Yeah, and it's maturity as well, of people getting used to the idea of that scenario I described, where, for efficiency, you might train a model on multiple customers' data, and that trained model is used for a third customer, or for one of the other two. If you explained that to people now, it would probably upset them, but maybe, as the tools mature, and as time passes and people get more exposed to it, development goes underway.
C
Can I ask you a question? For the managing and tracking of trade-offs, you've mentioned in the past that for the work that you're doing you are using sort of parallel training, and that there's a bit of... you used the phrase "evolutionary process" for this, in that you must have some benchmarks that you're expecting these different variations to be meeting, and you will only take forward the ones that are meeting certain expectations. Am I understanding correctly how you've described it?

B
Yeah, so this is... potentially. We'd have to look at it. Okay.
B
So yeah, the escalation of data categories. I think, at this stage, it's kind of a prediction.
B
So my gut feel for this is: it's probably black for three years, while we wait to see what happens in the compliance space, and then after that you can start to look at developing solutions. So, yeah, this one doesn't arrive on the time horizon of this roadmap. It's probably three years of black, two years of blue, and then we don't actually know whether it will be available at that point or if we'll still be working on it.
C
But I have read, and I do not have any real substantial intelligence here, but I was reading about distributed ledger technology and how that could be used to both provide some data protection as well as traceability, and I was wondering if you have any more knowledge or context you want to add to that, like if you've looked into it at all.
B
Yeah, so you've still got the problem that, in order to train the models, you're going to have to aggregate that data, and then, in order to be able to provide the explainability, you're then somehow going to have to be able to link back to factors that can potentially reveal privacy-related information, for example, and a lot of the risk here is...
B
So, you know, if you need to check for, say, ethnic bias, then to be able to validate that you've actually got to have the information about everybody's ethnicity. That means you're forcing people to hand over that information in order to be compliant with a law that was designed to protect them, and therefore you're just making the information more likely to be leaked or misused, rather than less likely. So, yeah.
A
Security and IP. Oh, like, given that models could contain IP, then yeah, it's the same as a binary, which is interesting when you've got things like the GPL and ASL and different licenses that are well tested and battle-hardened.
A
How does that apply to models that are doing the job of handwritten code but are trained on sets of data? Those licenses aren't designed for that. I just thought that's an interesting open-source side angle: who owns the, you know... because the interesting things in the future may well be the data and the models, which a lot of IP stuff isn't really configured for.
B
Yeah, I think we're a long way away on this one. You've got fundamental challenges like Python, so solving this is going to need not just the technology but also a big shift in behavior.
B
So, emergency cut-outs: I think we've got the technology to do this.
A
It's probably continuous improvement, although I don't know if anyone really thinks about this enough to do it. Maybe they do. Maybe there are systems that are chatbots, or that interact with humans in some way that could be abusive; I'm sure they have... I'd hope they'd have safeguards in place.
A
Yeah, I was playing with GPT-2. GPT-3 is the one that's open only as a service, which I've tried to get access to; GPT-2 I've been trying out just to see how it works, and they do have big warnings that you shouldn't...
A
You shouldn't put this directly in front of people outside a research context, because it could do anything. I haven't had it say anything that bad yet, but, you know, it wouldn't surprise me. Interestingly, my son has been playing around with it to generate historical fan fiction.
A
He gives it a whole scenario from some of his history books, types it all in, and then we let it crunch for a while, and it comes out with a story involving something, and he's fascinated how it gets geographical details right and stuff, but yeah. It certainly wouldn't have an emergency cut-out, because otherwise, if it was about to say something bad, then it would just crash. But I think the technology exists for this; it's just maybe not used. Maybe that's just an area of continuous improvement, I'd say.
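The cut-out being described, a screen between the generator and the user rather than a crash, can be sketched in a few lines. Entirely hypothetical: the blocklist contents and fallback text are placeholders, and a real safeguard would use a trained classifier rather than substring matching.

```python
# Hypothetical emergency cut-out: screen generated text before it is
# shown, returning a fallback instead of crashing when text is flagged.
BLOCKLIST = {"badword", "slur"}  # placeholder terms, not a real list

def safe_emit(generated: str, fallback: str = "[response withheld]") -> str:
    """Return the generated text only if it passes the screen."""
    text = generated.lower()
    if any(term in text for term in BLOCKLIST):
        return fallback
    return generated
```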
B
Yeah, so that's all of those. I'll update that table and circulate it for review. So the last key piece, really, in the roadmap is that we have a section which is our conclusions and recommendations.
B
So we need to work out what we want to say as a summary of where the roadmap is and what the state of play is, and...
A
Is it really just a pointer to the sections above? Like, the conclusions would be: for the challenges, look at the requirements and then the potential solutions, and if it's in a continuous-improvement state, then we should continue doing that, or focus research on the black areas. Like, who's going to be reading that sort of section? Would that be people that skip ahead to it because they want to read a summary of it?
B
I
I
think
in
in
general
it
would
be
the
place
where
you
drew
attention
to
particular
things
that
needed
more
work
or
areas
that
we
anticipate.
That
would
be
more
challenging,
I
think,
perhaps
in
the
first
year.
Maybe
it's
just
a
a
place
to
discuss
the
overall
maturity
of
the
practice
and
to
encourage.
B
So
it's
perhaps
an
enjoying
it
to
people
to
to
pick
something
and
see
how
they
can
add
that
into
their
product.
A
Well, maybe I'll put a note in to have a starter pull request.
A
Just start writing it. For the references section: do we really need that? There are various links in our ongoing Google Doc notes, things we've referenced. Is it worthwhile putting them in there, or do you have any specific prior work that you want to link to there, or is that a section we can just leave out?
B
So I don't think we've really got any examples of that that we need to draw attention to this year, so we'll probably drop that section for this year, yeah.
A
There
might
be
the
odd
paper
mentioned
here
and
there,
but
I
don't
recall
anything
I
think
there
is.
There
was
a
few
academic
papers.
I
think
I
mentioned
at
some
point,
but
they
were
just
more
part
of
our
general
discussion.
I
don't
think
they
are
relevant
to
the
anything
they
are
relevant
to
the
roadmap,
so
yeah
we
can
drop
that.
A
Well, I guess that's the finalizing of it, so we should have it done by next time.
A
You know, I think, promoting it with a few different blog posts talking through specific scenarios. I know we've had really good discussions over the months about certain things that were relevant to certain challenges or problems, so I think blogs and articles that talk to that could be quite interesting.
A
It
would
be
interesting
so,
but
yeah
we've
certainly
got
you
know
at
cloud
business
we've
got
access
to
sort
of
marketing
some
marketing
firepower
that
they
can
get
it
in
front
of
certain
people,
and
you
know
have
a
better
chance
of
getting
enough
eyeballs
on
it
and
I'm
sure
a
lot
of
people
would
be
interested
if
they
hear
about
it.
I
think
that's
the
trick
is
getting
people
to
hear
about
it
because
it's
you
know
it's
a
it's
a
big
area
of
discussion.
A
All
of
these
are
challenges
like
even
with
non-developers
non-technical
people
were
curious
about
it
and
yeah.
Oh,
I
did
see
a
a
sad
and
funny
video
today.
I
think
I
saw
it
on
tick
tock,
but
it
was
this
cleaner
with
it
watching
a
robot
clean.
A
You
know
it
was
a
shopping,
mall
supermarket
floor
or
whatever,
and
there
was
one
of
those
clingy
robots
going
around
and
he's
there
with
his
mop,
and
he
just
has
the
saddest
expression
on
his
face,
and
it's
like
the
you
know,
like
he's
seeing
his
just
career
being,
you
know
automated
away.
I
just
thought
that
was
interesting.
A
All right, well, you could probably have an early mark this week. Anything else you want to bring up, Cara?
A
No... I do have a... we had Father's Day on the weekend, and for Father's Day I got a robot vacuum cleaner, which is awesome. I love it. It doesn't do the whole floor in one go, so I just move it around the house at different times of day, and it's great; it's just saving me work.
C
Yeah, hopefully that... but what I was going to say is that, for the conclusion and any outward messaging we do, we should really always (I'm sure we will, but always) put to the fore and emphasize the overall goals in a really positive way. We have this at the top of the roadmap, but I think reiterating it is always positive. So there's the specific goals or initiatives, but also the overarching thing: what we're trying to achieve, and the good that we're trying to do in a collaborative way within the foundation.
A
Yeah, all right, well, it's good to chat again as always, and we'll try to get in again in two weeks and, yeah, hopefully then start planning some content to promote things. There may even be some interesting projects out there that solve some of these problems, that we can work with to promote, and companies like Netflix and stuff. So yeah, I'm hoping to see that; Netflix had sort of their internal version of this, so it's really good to compare notes.
A
So
if
I
get
that
I'll
bring
that
up
next
week,
all
right
well
until
next
time
have
a
good
thursday.