From YouTube: CDF - SIG MLOps Meeting 2021-03-25
Description
For more Continuous Delivery Foundation content, check out our blog: https://cd.foundation/blog/
B: I can't complain, all good. I got access just this week, or today actually, to OpenAI's GPT-3; they've been handing out accounts. So I've been playing with it all afternoon and, wow, it certainly is a thing.
B: I'm still trying to get my head around what it means and what it can and can't do, but the few-shot idea is quite interesting, because it's trained on the Common Crawl set, which I don't know exactly what that is, but I gather it's a good chunk of the open web. So you give it enough hints about what the topic is, and a few examples, and then it sort of digs into it.
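Mechanically, the few-shot setup described here is just string assembly: a task description, a handful of worked input/output pairs, and the new input left for the model to complete. A minimal sketch of that prompt layout (the GraphQL example pairs and the formatting are invented for illustration; this is not taken from OpenAI's documentation):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples,
    then the new input left for the model to complete."""
    parts = [task, ""]
    for inp, out in examples:
        parts.append(f"Input: {inp}")
        parts.append(f"Output: {out}")
        parts.append("")
    parts.append(f"Input: {query}")
    parts.append("Output:")
    return "\n".join(parts)

# Hypothetical English-to-GraphQL pairs, in the spirit of the
# SQL-prompt swap described in the conversation.
examples = [
    ("all users and their emails", "{ users { name email } }"),
    ("titles of published posts", "{ posts(published: true) { title } }"),
]
prompt = build_few_shot_prompt(
    "Translate English descriptions into GraphQL queries.",
    examples,
    "names of users who commented",
)
```

The model then continues the text after the final `Output:`, which is what makes the swap from SQL to GraphQL a one-line change to the prompt rather than any retraining.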
B: There are some examples of writing SQL, and I thought I would do one, and I swapped out the SQL and told it I was doing GraphQL, and it did, you know, good semantic GraphQL. No one that I know has done that before; that it knew about GraphQL and things like that, and the way it can classify things and generate things. Yeah, it's a bit much, in some ways.

B: It feels like a weapon; like it could be weaponized for generating content. And how easy it would be to use a few shots to train it to classify something, for example a resume, and then have it pick things out of that based on some context. So yeah, it's interesting. Do you have access to it, or have you applied for access to it?
B: If you tell it that it's tabular data and put little table markers in, then yeah, it can translate between, say, Python and JavaScript. It seems pretty good at the top programming languages, because the information's often crawled off GitHub. I started looking at log output and didn't have much in the way of results with that, but I've heard that when people hit bugs and exceptions... there was a company that was playing around with it.

B: If it's hitting some bug that's in some public bug report, it can often summarize that for you. That's another thing it does very well: summarizing large chunks of text, depending on how much summary you want. I was trying that with my son; it was fascinating. We were giving it information about the French Revolution, or Franz Ferdinand, he's into history, and then having it summarize in one line, and it's like, oh, that's pretty good; it plucked out the points from Wikipedia.

B: So it's a force to...
A: I wonder what the quality remains like on things like that, because if you think about it, the amount of bad code that's posted on the internet is vast.

A: Will it learn to code really badly? Yes, in terms of what it's actually picking out.
B: Yeah, certainly for code generation I haven't seen anything like it. I think for more structured data it wouldn't be too hard to beat it, if you know the domain. But yeah, I was giving it globs of Python that I was working on, and it would suggest reasonable function names and that, but then it wouldn't really go much deeper. And, hilariously...

B: At some point I put a bit more code in, and all it would suggest was more indentation, which I thought was pretty funny; it's just like, oh, I see you like spaces and tabs, have some more spaces and tabs. I could tell it wasn't going super deep, but it was invoking functions that it had written before, that had sensible names, using appropriate parameter conventions, like using df for a data frame.
B: It was some ML code, so it could kind of match and recognize the sort of area you're writing in. I think for smaller chunks it's interesting; apparently it does a good job with CSS, and maybe some Markdown, HTML, writing SQL statements. Sort of those little expression things, where you don't need to understand the higher-level stuff; things that are a bit more context-free, I think. But yeah, it will be interesting.

B: Yes, that's right. I mean, being able to both write and generate, and comprehend; it seems to be pretty good at both. It's a funny example, but it's not unreasonable: being able to describe, in sort of your own terminology, the context, like "I want to change these things to this", and then it says...
B: ...oh, this is actually how you do it, because it knows. And then it could do the reverse and take a gob of YAML and describe it as a one-liner: this is what it does, for, you know, a Kubernetes config or something. That's not an unreasonable thing, and it seems to be able to do things like that; to digest it down and summarize things. It was pretty good. I even gave it...

B: I gave it some output from some code being built, that was ultimately Maven, and it gave this summary; I was telling it to do a summary, which is one of the things you can do. And it was describing, in plain English: oh, it's using Maven, and it's checking if there's a JDK version eight and installing it. It was doing all these things, and I was like: well, that's what, when I look at the logs...
B: ...that's what I would have said, partly from what I see in the logs and also from what I know of how Maven works. So it's got that Common Crawl context in its model, and then it's had the few-shot examples that I've given it, just as if someone who's read the material before is then prompted: oh, we're talking about Maven and Java. So I thought that was fascinating.
B: I've got no idea of the machinery underneath, like what sort of models, but it's obviously a deep model, and they just kind of scaled it out with an incredible number of parameters. It almost was like a research project: what happens if we let it come up with billions and billions of parameters, but don't fundamentally change it from a deep network? And that's what they ended up with.
B: So it's fascinating that people are doing it, but I can see why they're being super careful with how it's used, and why it's an API. It doesn't take much to hit a point where it flags the content and says, we think this might be problematic, and then you can whitelist it; you can tell it it's a false alarm. I think they're taking a lot of these things we've talked about to heart.

B: You know, they're on the bleeding edge of it, so it doesn't surprise me, but whoever's behind this has put a lot of thought into preventing misuse.
A: Yeah, I think it's going to be an interesting situation, because at some point it will reach a density of information that means it can be quite congruent about a broad enough range of topics that it starts to look a lot like a human, in terms of its ability to say something plausible in a very wide range of areas. And then there's that question of, you know...

A: We keep getting stuck on this idea that we're building superintelligences that will be intrinsically better than humans at everything, but in reality, practical intelligence is always full of mistakes.
B: Yeah, it could just be doing that at a faster, more impressive scale, but making the same mistakes.
A: But there comes a point where we stop being able to instantly recognize the mistakes in humans, once they get past a certain level of experience; they just become believable at that point, even if they're wrong. And there will be a threshold with these models where the model will appear to have...

A: ...human-like capabilities, and will start to become intrinsically trustworthy as a result, because it seems to be congruent in what it's saying on everything you throw at it.
B: Yeah, so you start trusting it.

B: I mean, a human isn't having to do that. You have CEOs and leaders, and everyone knows they're flawed, they know they're flawed, but at some point you go: look, this is the best option, let's go ahead with it, because we don't have perfect information. So yeah, I could totally see people giving themselves over to a machine. It's closer to that than I thought. Obviously nonsense comes out, but it's a fascinating way...
B: ...they've done it, and they've made it easy to integrate into apps and things, so it'll be an interesting thing. The whole process of training that model is interesting too. From the MLOps point of view, what do you even have to say about something like GPT-3? What was the rumored cost, like four million dollars for their training run? I don't know if that was the total accumulated cost of the AWS bill for the project.

B: Maybe there were some false starts, and maybe they tuned hyperparameters and started again, or mistakes were made; or whether it was just the final training run that succeeded, the model that they published, or will allow you to use. I wonder if that's the cost: four million dollars. That's an astounding amount of energy consumed.
B: Whereas when I think about the models I'm training, it might be a ten-minute training run and maybe under a million parameters, so pretty modest. It's very structured data; it's not images, it's not raw text, it's relatively simple. And in that case, to me...
B: ...the analogy is very much like compiling code, so I can use normal-ish test tools and apply normal practices and things. The challenge is more that it's probabilistic in how you test things, rather than deterministic. But for this, all of those other concerns that we talked about come to the fore, if nothing else.
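That probabilistic-versus-deterministic testing point can be made concrete: instead of asserting an exact output, you assert that a statistic over many runs clears a threshold. A minimal sketch with a toy stand-in for a model (the noisy parity classifier, the 95% correctness rate, and the 90% acceptance threshold are all invented for illustration):

```python
import random

def noisy_classifier(x, rng):
    """Toy stand-in for a trained model: answers an even/odd parity
    question correctly 95% of the time."""
    truth = (x % 2 == 0)
    return truth if rng.random() < 0.95 else not truth

def accuracy_test(n=1000, threshold=0.90, seed=42):
    """Probabilistic test: assert aggregate accuracy over many samples,
    not exact per-example outputs. A fixed seed keeps the run repeatable."""
    rng = random.Random(seed)
    correct = sum(noisy_classifier(x, rng) == (x % 2 == 0) for x in range(n))
    return correct / n >= threshold
```

The design choice is the same one a normal unit test makes about flakiness: pin the randomness with a seed for repeatability, and leave headroom between the expected accuracy and the threshold so the test checks behavior rather than one lucky sample.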
A: This brings us on to something that's very relevant at the moment: what the impact of this type of technology is going to be on DevOps practice in general. There's been some discussion this week between some of the CDF projects, where there's quite a strong difference of opinion...

A: ...in terms of whether MLOps is actually part of the DevOps process, or whether it can be safely ignored as being somebody else's problem. And I think it's very clear from technologies like this that there's an immediate role for this sort of technology in the DevOps pipeline itself.
A: If you're taking your pipeline logs and feeding them into a model, and then having the model monitor all of your pipelines, tell you in plain English what's going wrong, and suggest fixes, that is going to be a massive productivity boost.
B: That application of models, some people use the term AIOps for: when AI technologies, be they trained models or other things classified as AI, are applied to operational data, so logs, metrics, monitoring. My understanding is people would tend to file that under AIOps, even though it's not really ops in this case; it's AI applied to DevOps practices.
A: The point I was trying to make, really, is that this sort of technology brings the opportunity for big productivity advances in many of the things that it touches, and DevOps itself will not be immune from that. And so the work that we're doing to try and align the DevOps and MLOps worlds is actually quite important, and we need to make sure that we're doing the appropriate level of communication and evangelization to get everybody on board.
B: Yeah, I think of some of the models I've been building, and they're at a much more modest scale, more in your neck of the woods, because the data's more tabular. It definitely is something that any developer could kind of do.

B: So it's now about what else you can do with the data. And, I don't know, to me, training and deploying models feels like web development at the turn of the millennium. It was a thing people did, not all developers did it, but then it took off. It sort of feels like that.
B: Think back: if you'd done client-server stuff in the 90s, when you came across the web, that was pretty weird. We're both old enough to remember that transition, and probably earlier ones. This is not really that different: understanding what a data frame is, and what a feature is, and how that's subtly different from other things; that's not out of the reach of a developer, I don't think.

B: My only real complaint is that I'm forever coming across notebooks and then going, oh gosh, here we go. Half the time they don't work, and I'm forever taking code out of notebooks and just putting it in regular Python with unit tests and stuff.
B: Yeah, and I think Netflix, or someone, did publish a tool for that, but they wanted to let their data scientists stay in notebook land, and that is kind of the reality for a lot of companies. I mean, I'm sure you know about Databricks; they're a huge open-source success story, and I'm sure a lot of that is powered by them having their own way of running notebooks and things like that.

B: Yeah, I think notebooks are a reality for a while yet. I can certainly see the benefit. I have started using this other library; I'll post it here in the notes.
B: It might have been someone here that suggested it, I don't know, but it's Streamlit. Can I paste it in that document?

B: It's a web framework written in Python that lets you build very quickly.

B: Yeah, it's this tool that natively works with the constructs of machine learning libraries, so data frames, data sets, the various plotting tools, taking simple inputs, and it does it as a responsive web app. I think it actually uses React, but you don't ever see that. You just say: I want a number input, or I want a slider, I want a dropdown, here's the data frame to populate the values; show me the data frame, or pass it to this.
B: So you can very quickly put data-heavy stuff on the web. It reminds me of what Rails was to databases back in the 2000s: you could go, look, here's the table, just show it on the web, and customize this and that. It's like that, but for machine learning concepts.
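The widgets-plus-data-frame style described here looks roughly like this in Streamlit, a sketch from memory of the basic widget API rather than anything shown in the meeting (the data is invented; run it with `streamlit run app.py`, assuming streamlit and pandas are installed):

```python
# app.py -- a minimal Streamlit sketch: a slider filters a data frame.
import pandas as pd
import streamlit as st

# Toy training-loss data standing in for real model output.
df = pd.DataFrame({"epoch": range(1, 11),
                   "loss": [1.0 / e for e in range((1), 11)]})

st.title("Training loss")
max_loss = st.slider("Show epochs with loss below", 0.0, 1.0, 0.5)
st.dataframe(df[df["loss"] <= max_loss])      # rendered as an interactive table
st.line_chart(df.set_index("epoch")["loss"])  # one-line plot of the series
```

Each interaction reruns the script top to bottom, which is what makes the programming model feel like plain scripting rather than callback wiring.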
B: So I think things like that could excite developers and get people interested in doing this who aren't notebook-centric data scientists. Data scientists like to have the notebook, where they're explaining and justifying the data, whereas developers mostly want to see the results. Being able to graph something in a notebook is all very good, and explaining why you normalized this thing this way is all very great, but a developer wants to put that in a unit test and have it pass or fail.
B: So that was interesting; that's something I came across and started using. It's very easy to host; you can host it on some sort of serverless platform. I've been using Google Cloud Run, so you have just a single Docker container that fires up when it needs to, and yeah, it's a neat tool.
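The single-container setup described here is essentially one container definition. A sketch of what that might look like (the file names, base image, and port wiring are assumptions for illustration, not taken from the meeting; Cloud Run injects the port to listen on via the `PORT` environment variable):

```dockerfile
# Dockerfile -- serve a Streamlit app on the port Cloud Run provides.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
# Cloud Run sets $PORT; bind Streamlit to it on all interfaces.
CMD streamlit run app.py --server.port $PORT --server.address 0.0.0.0
```

Because Cloud Run scales to zero, the container only runs, and only bills, while requests are coming in, which fits the "fires up when it needs to" usage described above.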
B: The other thing is, I found a few conferences; I'll see if I can paste them in the notes here. I can't.

B: There are three conferences coming up that I thought would be interesting to the non-data-science audience: SpringOne, obviously cdCon, and All Day DevOps. There are probably some others that have come up.
A: Yeah, so I'm still waiting to hear whether our slot for cdCon has been accepted.

A: Hopefully that will go ahead. I've started a conversation with a group in Canada that are doing a DevOps-focused conference, so it's possible we might be able to get a slot there, and I've just made contact with AI Camp.
B: And do you have some content that you talk about already, or a slide deck, or some starting material that others could use? Because I'm struggling to think where to start. I was thinking the first thing would be either a personal or a corporate blog post talking about this, but I'm struggling to find the entry point.
A: There's the cdCon talk that I did last year, which was an overview of what we're doing and what our goals are.

A: But I haven't got a specific deck, because what I tend to do is just talk through the...
B: ...points, yeah. I'm just trying to think of a good, catchy entry point for the audiences I'm thinking about. For example, SpringOne: they're developers that are curious to try the next thing, so the angle for them would be, well, there's bound to be something in there that is interesting, or controversial. It's always interesting to have something controversial.
A: Uh-huh, yeah, and I think what we should probably start to do is watch for examples of the things that we're predicting, and then make sure that we're producing blog content that links the example back to the roadmap, because that's a good way of getting people to pick up on real-world examples of things that they haven't considered.
A: So, other activities going on at the moment: you've probably seen I've created a draft 2021 document in the repo, so that's now available for...

A: That's all good to go now. I'm also working with another CDF SIG that is putting together a best-practice document for DevOps, and I'm producing some content for that, which is also going to include MLOps.
B: There was something I came across today that I was wondering whether we covered last year: the IP that's contained in a model. The example was...

B: I'm looking at training models on our own data, eating your own dog food, applying that, and then some of that model is relevant to another customer. Another customer signs up and creates their own account with their own set of data, but the model trained on customer A is partially relevant for customer B, so you can transfer it over and do another fit.
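The transfer-it-over-and-do-another-fit idea can be sketched without any ML library: fit a tiny model on customer A's data, then use its learned parameter as the starting point when fitting on customer B's much smaller dataset. Everything here (the one-parameter linear model, the data, the learning rate) is invented for illustration:

```python
def fit(points, start=0.0, steps=200, lr=0.003):
    """Fit y = w * x by gradient descent on mean squared error,
    optionally warm-starting from a previously learned weight."""
    w = start
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        w -= lr * grad
    return w

# Customer A has plenty of data; customer B only a few points with
# roughly the same underlying slope (all numbers made up).
customer_a = [(x, 3.0 * x) for x in range(1, 20)]
customer_b = [(1, 2.9), (2, 6.1), (3, 8.9)]

w_a = fit(customer_a)                        # trained on A's data alone
w_b = fit(customer_b, start=w_a, steps=20)   # warm-started: few extra steps
```

The warm-started fit converges in a fraction of the steps precisely because customer A's parameter already encodes the shared structure, which is exactly why the IP question below is hard: customer B's model quality is partly derived from customer A's data.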
B: Not that the model can understand that; because it's a big space of numbers, it's not going to think that this person is that person, but other things are in common; there may be certain dimensions of things that are the same. So who owns that IP? You've trained a model on your customer's data, another customer's data, a third, a fourth, a fifth; it gets smarter; you as the vendor benefit, and your customers benefit, but you're effectively using their data to train up your model.

B: And yeah, the customers have got to know. What are the issues around that? Maybe there's no issue, because this is how things like search engines and recommendation engines work; the more people use them. When you go to Amazon or Netflix, or you search on Google or whatever, you're always participating, in an unknown way, in a model that's retrained. So maybe it's not a big deal, but it seemed like, in an enterprise setting...
A: Yeah, so this is actually one of the next big challenges in information technology in general, and I've been diving into this quite deeply with the IEEE on the IRDS roadmap.

A: I can't remember if we've also included some of this in our roadmap, but it's certainly worth having a check to make sure. The scenario becomes very obvious when you look at industrial applications. For example, in the semiconductor industry, there's a lot of work going on in...
A: What you have in that situation is quite a complex mix of IP, because the fab is owned by one company, and what happens within the fab is effectively their IP.

A: Each piece of equipment is covered in sensors and capable of generating a large amount of data, and that data is intrinsically used to control the piece of equipment, but can also provide valuable information about the state of a product that's running through the line. So there's work going on to build models to help run the equipment better at the vendor level, but then there's also a desire to have end-to-end models that are optimizing for the product that's...
A: ...and they're doing it on behalf of the fab owner, who has ownership of the arrangement and the settings for the equipment.
A: So that means that, if you're running a meta-model on top of this, there's a risk that you're effectively modeling the IP of the vendor of the individual pieces of equipment, because you're capturing inputs and outputs and learning about what that thing actually does, as a result of having access to a lot of the sensors that it uses.

A: So you've got this multiple challenge: you have some intrinsic IP over the line that you've constructed, but there's also inherent IP in each of the pieces of equipment on that line. And then there's a third problem, which is that you want to be able to pass this data up and down your supply chain, because you're only a piece of a bigger puzzle, where your...
B: Yeah, so that IP then leaks outside. I mean, if it was all contained inside the fab owner, you could understand; it might technically be a problem, but no one's really going to fuss about it, because it's never leaving. But what if that fab owner wants to sell that knowledge as a model, or, like you said, it's part of a wider supply chain? Then yeah; I mean, imagine the models were hand-coded algorithms.

B: Even that would be complex, but you could probably solve it: we have copyright for that, and probably for libraries, and there's fair use, and there's API surfaces. But in this world, there's no developer tuning these models; the system is kind of building itself. So...
A: So what needs to happen, and this is what the recommendation has been through the IEEE, is that we start to work on a standard model for cross-licensing in these scenarios, so that what you're effectively doing, by building the superset of these things, is creating a new product which is jointly owned by...
B: It's a derivative work, the same as in software licenses: you consumed another one, and depending on its license, you are now doing a derivative work. If it's some kind of Apache or MIT license, then that's no big deal; sure, it's a derivative work, but it's still your own. If it's GPL, it's viral, and so on. It's well defined in software, and somewhat tested, but yeah.
A: What I'm interested to see is where we see the first example in the social space where a company decides to treat their aggregate data as a cooperative, where it's effectively paying its users for the contribution of their data to the overall product quality.

A: So you're using a product, you're paying to use it, but you're also getting some kickback or dividend from your contribution to it. And I think that's going to be very interesting, because it will actually encourage more lock-in to a solution.

A: Because if you get involved in a product early on and become a member of that cooperative, then your data is actually earning you some value, and as that scales, you're potentially earning significant enough amounts that it makes you want to stay with the product, because you feel a sense of co-ownership.
A: The challenge is in building manageable solutions to actually track and maintain those, I mean...
B: Yes, I've heard of companies, or new companies, that are looking at publishing models where the model is the IP that they sell; say, natural-language understanding in a specific vertical domain. They've spent a lot of work accumulating the data and training it, in medical or accounting domains, and presumably they ship you a model or you use it as a service. But let's say they ship you a model as some binary; you can then add to that.

B: You can correct me if I'm wrong, but you could then load up that model, assuming it allows it, add more data to it, adjust the parameters again, and so you've created a derivative work; not just by using it by rote, but by retraining it. The analogy in software is that that's like taking their source code and modifying it; you're not linking it, in that world, you're actually modifying the source code.
A: Yeah, back to your original scenario, where you have a service provider that's training models, and then a series of commercial users who want to benefit from that model, but are also providing data through the system which can be used to improve the model.

A: Under this approach, what you could do is commercially license your relationship with each of those users, such that you have a written data-usage agreement that allows you to consume...

A: ...pseudonymized content to improve the model, but at the same time provides a dividend to that company; it pays them for the use of that data. And then that company has the option to pass on that dividend to individual users.

A: If what they're doing is aggregating data from end customers, you get this chain of data-sharing agreements with a commercial element to them, and then everybody's getting paid for the value that they're creating, which makes them more likely to want to share.
B: All right, I'm going to sign off. Next time it'll be earlier in my time zone, so we can talk longer, because I'd love to go back into this; it's fascinating. I think we should dive a bit deeper into the IP question and maybe start fleshing it out.
B: I think it is, yeah. It comes up a few times when I talk to people, and it'll certainly come up once people understand it; once it's explained to them, they're like: holy crap, what are we doing here? So it's good to get ahead of it. All right, well, I'm going to sign off and talk to you next time, at a better time zone, so that'll actually be really good. Otherwise...