From YouTube: CDF - SIG MLOps Meeting 2021-07-29
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
C
Yeah, good. I don't know how many we're expecting today; it may be a quiet one.
B
Okay, sounds good. I'm new to the SIG, but I've been working a lot in MLOps and ML infra.
B
Yeah, let me pull it up. I took a pretty decent read through it and thought it was interesting. It's definitely an interesting perspective, defining MLOps as applying DevOps principles to the ML field.
B
The reason I was compelled to join and check you guys out is that it resonated with how I've implemented things. Over the past six months I've been working my way through the field, bumping into different tools, asking how different companies do it, how my buddies at other companies do it, and I've landed on something pretty similar to what the roadmap was saying.
C
Yeah, I think there are a lot of people going through the same experience at the moment: they're trying to add machine learning to various products and they're running up against all of the same challenges.
C
Yeah, I've had a play with that in the past, and I like some of the stuff around DAGs and things like that, trying to put some structure around things. But at the end of the day, most of the platforms out there are written from a data-centric perspective, which is not surprising given where they've come from, but it also means they tend to think of things as being like databases.
C
So you have a place in the world where you put your data and a place in the world where you put your models, and those are operational stores of data that change on an ad hoc basis over the course of the day, which is where we were 20 years ago with all of our software systems. The difference is that, as the rest of the industry moved on to solving larger problems at scale and tackling the issues that come with that scale, there was more of a push towards managing your software assets in a much more controlled fashion. That side of the industry has moved towards DevOps, with managed, automated release processes, versioning, and very tight control over the explicit definition of what assets you have in production and in what state at any moment in time.
C
So it puts machine learning right at the thick end of being high risk and at the sharp end of having to solve all of the hardest problems to mitigate that risk.
C
It's kind of unfair, really, for the industry to leave the ML guys to try and solve all of those problems from first principles. We really need to pull in everything we've learned over the past 30 or 40 years and use it to make things easier, more robust, more reliable, and generally better.
B
Hearing your thoughts, and carrying the analogy through from DevOps: the skill sets are separate, especially the deeper you go into DevOps versus software. They can be separate little chunks of knowledge that you have to learn, which is why they can be specialized skill sets. But in my experience, because I was trained in software development and engineering, it was quite easy for me to pick up DevOps, get up to speed, get onboarded, make changes, be an influence on how things should be structured, and be a voice in the room. Now, when I think about the data scientists and modelers I work with, they have completely different training and a completely different background.
B
It's not really engineering; that's why we call it modeling. There are some engineering principles, which is why I liked Metaflow, and we're experimenting with it, because it seemed like it would blend in what I would want as an engineer. You guys have a line in the roadmap saying you shouldn't be deploying Jupyter notebooks, which I agree with, but these are PhD statisticians.
B
They barely know what a deploy is, and they don't really care. They kind of care about versioning, because there's a practical point to caring about versioning, but as you get into the nitty-gritty of things, what seems like nitty-gritty to them looks like engineering best practices to me.
B
In software development there's been less of a friction point for me to understand and appreciate DevOps and its importance, and then to carry whatever responsibilities I have as an engineer to keep the standard high on the DevOps side and help out. It's been a fairly frictionless process for me, but I can't really see that happening with my modelers.
B
I work with these people who spend all their time in stats and modeling, keeping up to date; they're whizzes at R and MATLAB and these Python libraries. So what do you think about that? For me, I feel like I've had to do a lot of extra legwork to keep things on track: "No, we should version; here's how we're going to do things, and this is why it's important." I guess that's what I'm specifically curious about right now: how can we integrate? These people have completely different training, and it seems like it would be difficult for them to learn engineering. Or maybe the expectation will trend towards having these specialists pick up some engineering principles to help integrate this DevOps mentality into the common work.
C
Yeah, I think that's a key piece of the puzzle. From my experience, I've worked with groups of data scientists and taught them DevOps best practice. Initially there was some resistance, because it means doing extra work, but then people start to understand the implications of not doing it, and once people get their heads around the basic concepts, it suddenly makes things easier for them. A big part of this is that it helps people feel confident in switching things on: getting the balance right between having a degree of confidence in the provability of the work you've done and the fact that it's safe, versus the overconfidence of just not knowing or caring what happens if you press the go button.
C
That's a big piece, and unfortunately we're about to go into a period where we see a whole string of very public mess-ups from machine learning that wasn't properly managed and went into production, and as a result lots of people lose lots of money, or get injured or killed, or any of a whole raft of fairly horrific things that are likely to happen when we put these decision-making systems live without properly assessing and testing them.
C
Eventually it will come back down to "it's my job to know this stuff, and if I don't know it, there's a chance I'll go to jail." We're seeing some pretty tough moves in terms of legislation on machine learning, certainly in the European Union.
B
So you think that over time, because regulation and government will catch up and people's current unsafe practices will produce very negative outcomes, over the next five to ten years, or maybe earlier, the job description of a data scientist or a modeler will change to include a bit more of this kind of DevOps rigor and discipline?
C
It's a mixed-discipline team that contains data scientists and software engineers and SREs and security people, and you're operating collectively to maximize the quality of a product. The focus here is this:
C
If you always think about this from a product perspective rather than an engineering perspective, it's actually a lot easier to see what needs to be done, because the thing that makes the money is the product. You have to have enough quality in your product, and you have to maintain it as an asset over the period of time you're going to realize value out of it, so you have to invest what's necessary to maximize your return whilst guaranteeing that you're not exposing yourself to unnecessary commercial risk through a poorly maintained product. That becomes a question of technical debt.
B
So I'm helping a company out. They have two modelers and five, or maybe three, data engineers, and right now they're at the point where they're thinking they might need someone to build some sort of ML infra support, so that people actually have some common infrastructure to standardize on some build tools: basically MLOps. Borrowing from the DevOps analogy, at what point does it make sense to implement DevOps? It's much more like a gradient.
B
What's the first thing you think is worth implementing with a newer, quickly growing team? If I'm borrowing from the DevOps analogy, then it's probably something like CI/CD, some sort of continuous integration and deployment. But what do you think about that?
C
So the way I usually explain it to a commercial audience is to say that if you're a technology company, you're building a machine that represents your product. You have to build all of the bits of that machine that you can't buy, and when the machine runs, it will do whatever your product intends it to do.
C
But you also have to own another machine, which is the machine that builds your product. So you have two assets that you have to manage all of the time: one is the thing that is your product, and the other is the thing that builds your product. You have to carefully manage and invest in both of them and incrementally improve them over time. Otherwise, you don't get to have a product, and a lot of what goes into the machine that builds your product is actually about saving you time and money by reducing risk.
C
But of course it's never like that. It's always a circular, iterative process where you're going round and round, incrementally improving things until you get to a point where you've got product-market fit, and then you can scale. But you're still scaling in lots of circular patterns, adding small features to your overarching product.
C
It adds more bugs than you take out, because you were sloppy about it, because you weren't thinking about it as part of your asset. You were thinking about it as an annoyance that you needed to push through to get out of the way.
C
Whereas if you do invest in it, you don't have to do that work ever again. You just push the button, it does it for you, and it tells you whether it's good or bad. You shave off all of that time multiple times a day, and you're getting an immediate return on investment, so you can focus on what's important rather than on getting the thing to actually build and getting it to actually deploy.
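As a minimal, purely illustrative sketch of that "push the button and it tells you it's good or bad" idea, assuming a Python project with scikit-learn available: a CI job could run a script like the one below on every change, failing the build (and blocking the deploy step) if the model does not clear an agreed quality bar. The dataset, model, and threshold here are hypothetical placeholders, not anything discussed in the meeting.

```python
# Hypothetical automated model quality gate for a CI pipeline:
# train, evaluate, and fail the build if the model is not good enough to deploy.
import json
import sys

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_THRESHOLD = 0.90  # illustrative acceptance bar agreed with the team


def main() -> int:
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Emit a machine-readable report so the pipeline can archive it with the build.
    print(json.dumps({"accuracy": accuracy, "threshold": ACCURACY_THRESHOLD}))

    # A non-zero exit code fails the CI job, which blocks the deploy step.
    return 0 if accuracy >= ACCURACY_THRESHOLD else 1


if __name__ == "__main__":
    sys.exit(main())
```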
C
I don't know if that answers your question, but those are the sort of basic principles. In fact, in one of the other SIG groups we're actually writing out the best practice guide for DevOps and MLOps, which discusses a lot of this and how to think about the problem in a way that naturally takes you to good solutions rather than risky ones.
B
Yeah, that makes sense, and I think it does answer my question. So it's more of an iterative cycle of looking at the friction points: how can I empower the people I work with to do their jobs better, faster, and easier, and to focus on shipping high-quality code instead of focusing on just getting things to build?
C
If you think about it from a commercial perspective, it's done when it's live in production and the customers are using it, preferably giving you money for the privilege. And at that point, if it's broken, it's actively costing you money and reputational damage.
C
If the answer is not obvious, then you haven't yet got an appropriate MLOps process in place, because really you should be operating in a situation where, if you've deployed a product and the product has failed in production, it should be as simple as pressing a button to reverse that deployment and take you back to your previous known-good state.
C
So if you haven't got that, that's actually your first goal: to be able to deploy on that basis. Because then you have no fear of deploying, because there's an undo button.
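A minimal sketch of that "undo button", under the assumption that each deployment records which model version went live: rollback becomes a one-call operation that re-points serving at the last known-good version. The `ModelDeployer` class and its in-memory history are hypothetical stand-ins; a real setup would back this with a model registry or deployment tool.

```python
# Hypothetical sketch of rollback-friendly model deployment: keep a history of
# deployed versions so reverting to the previous known-good state is one call.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class ModelDeployer:
    history: List[str] = field(default_factory=list)  # ordered record of deployed versions

    def deploy(self, model_version: str) -> None:
        """Promote a model version to production and record it in the history."""
        self.history.append(model_version)
        print(f"serving traffic from {model_version}")

    def rollback(self) -> Optional[str]:
        """Revert to the previously deployed version, if there is one."""
        if len(self.history) < 2:
            return None  # nothing to roll back to
        self.history.pop()          # discard the bad deployment
        previous = self.history[-1]
        print(f"rolled back, serving traffic from {previous}")
        return previous


# Usage: deploy v2, discover it is broken in production, press the undo button.
deployer = ModelDeployer()
deployer.deploy("model-v1")
deployer.deploy("model-v2")
deployer.rollback()  # back to model-v1
```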
B
Yeah, and I promise I don't work for Metaflow, but I was very pleased to see, when we implemented Metaflow, that they have a built-in revert to the last successful deploy. We haven't gone through the full iteration cycle yet with my ML team: deploying the process, letting them feel what it's like to personally push to production, see the resulting data that comes out of it, and then, when things inevitably go wrong, feel the pain of "oh, how do I fix this?"
B
So they haven't seen that yet, and they didn't understand my joy. But from my experience of shipping normal software products, it was a huge burden off my shoulders, a huge worry relieved, just knowing that there is something built in.
B
I think they took an interesting approach to it: there's a built-in way to revert, because they keep a record of each version of the model you deploy, or rather they help facilitate that. So just being able to revert back and keep the lineage on it is, I think, a step in the right direction for sure.
C
And if you look at the entirety of that product, the ML bit is usually about five percent of the effort.
C
You set up the model server at the beginning of the project, and then that's where the rest of the product goes to get access to a model. But the models themselves are getting arbitrarily pushed into that model server on some different cadence to the release cycle for the rest of the product, and there are a lot of quite nasty problems that can creep in if you've got a disconnect between the release cycles of your whole product and one component of it. So there are a lot of ways that your asset can suffer from bit rot.
B
Yeah, so if I'm hearing you correctly, it's almost like you should treat your models as deployables, where they should be on a release cycle as well, synced up to the movement of the rest of the product. Ideally it would be awesome if they even used the same tool set and tool chain as typical software engineering.
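One hedged illustration of treating a model as a deployable that rides the product's release cycle: stamp the serialized artifact with the code revision and product release it was built from, so the model and the rest of the product version together. The paths, tag format, and `package_model` helper below are assumptions for the sketch, not part of any particular tool mentioned in the discussion.

```python
# Hypothetical sketch: package a trained model with metadata tying it to a
# specific git commit and product release, as part of the normal release build.
import json
import pickle
import subprocess
from pathlib import Path


def package_model(model, release_version: str, out_dir: str = "dist") -> Path:
    """Write the model plus metadata linking it to a code revision and release."""
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()

    artifact_dir = Path(out_dir) / f"model-{release_version}"
    artifact_dir.mkdir(parents=True, exist_ok=True)

    with open(artifact_dir / "model.pkl", "wb") as f:
        pickle.dump(model, f)

    metadata = {"release": release_version, "git_commit": commit}
    (artifact_dir / "metadata.json").write_text(json.dumps(metadata, indent=2))
    return artifact_dir


# Usage (illustrative): package the trained model during the product's release build.
# package_model(trained_model, release_version="1.4.0")
```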
C
So here's the way I encourage people to think about this problem, and again it's a problem that you only encounter late in the process, when it's too late to do anything about it easily.
C
As soon as you go live with that for a customer, that's a stake in the ground that is now holding you back, because you've got a bunch of people who are paying you to use a particular version of your thing, and the more customers you've got, the more pins you've got making it harder for you to change things.
C
And these are the sorts of things that actually kill businesses, because these are the things that will stop you from scaling at the pace you need to actually become profitable. If you haven't planned for them and architected for them up front, it can take you a year to recover from discovering that you needed that, and you don't have that year in your runway.
C
Again, these are all things that you learn by getting your fingers burnt, but most data scientists have yet to successfully release their first product, so they've never had any of those types of experiences. The need for that sort of forward thinking and planning doesn't yet exist in their model of the universe.
B
Yeah, that's interesting. I've had the unique experience of working at an early-stage company, a sort of mid-stage company (though the second was a little earlier than mid-stage), and then a later-stage company, and you've got me curious: I don't actually know the specifics of how the later-stage company handled that.
C
The same applies to your models, with the added caveat that typically models are going to be used in scenarios where they'll be longer-running than a lot of software, because a model will be trained to do a role within a business: it will tend to replace or augment a person as part of a business activity, and if the business still exists and is still trading in the same way, it'll still be doing the same business activities.
C
So you end up with a very long-running version management problem, which is typically worse than the sort of thing we see in classical software.
B
Yeah, that's interesting. I'll do some digging. Is there mention of this topic in the roadmap and best practices?
C
Yeah, some of this is in the best practices document, and some of it I touch on in some of my talks.
B
Interesting. I'll do some more reading; it's definitely interesting. So for the SIG, and just overall, how might I come in and get more involved? Subscribe to the mailing list, or are there any other communities? Because I think this new and developing field of MLOps is, you know...
C
So have you joined the CDF Slack?
B
The group? I'm glad you brought that up. I tried, but it looked like it was locked to certain domains. Let me see if I can pull that up again.
C
There should be a link, I think, in the GitHub repo for the MLOps SIG. That should get you an invite to join. If it's not working, we might need to get the link refreshed.
C
Well, if I get a resolution to that, I'll send a mail out to notify everyone.
B
Yep, okay, sounds good. Does the SIG host any talks, like tech talks or lunch-and-learns, or any activities outside of this meeting on the once-every-two-weeks recurring calendar?
C
We occasionally do bits and pieces. I've been doing quite a lot of community outreach this year at various conferences, so you'll be seeing a bunch of conference talks getting published as we go through those conferences.
C
There's a cdCon talk on MLOps using Jenkins X, and I've just this morning finished a talk for a conference later in the year, and I'll do another one tomorrow. So there will be a bunch of these coming out over the course of the year, discussing different aspects of MLOps and some of the issues involved.
B
Yeah, will do. The rest of this year should be interesting, I think, and going into next year as well.
C
Okay, well, it's just us, I guess, so we can probably wind up at this point.
B
Yeah, that's a nice natural closing point. Nice to meet you, Terry. Looking forward to hopefully chatting again; this was interesting.
C
Yeah, feel free to come along to these sessions. We're in the middle of the summer, so it's usually quiet around this period, but when we get into the thick of finalizing changes for the document, there's generally more discussion going on and we'll get a larger group meeting up face to face.
B
Okay, I saw on the calendar invite in the repo that it recurs until August 26th. Is that just the ending point for that segment, or...?
B
Sounds good then. That's a natural ending point; we'll talk soon. Thanks, Terry.