From YouTube: CDF SIG MLOps Meeting 2020-08-27a
A
Hello, how's it going? Can't complain. So Kara's joining.
B
I hear you've been having some interesting weather in your direction.
A
Yeah, we had snow last weekend, which was nice: snow nearby, and even where I am. It didn't settle on the ground here, but it did elsewhere, and given that our borders are closed both internationally and interstate, it seemed like there were millions of people all driving up this way to see the snow. We had nothing else to do on a weekend, so it was chaos. But no, it was nice, and it was a strange thing, snow on the bush we have here.
A
Oh, it might just be us today. Let me give it a few minutes. I pasted in a few interesting links that I thought people might be interested in.
A
There was a cloud storage app that was doing facial recognition on people's faces. For what purpose, I didn't really look, but they basically got shut down. I thought that was interesting. It was probably in the fine print or something of people providing their photos, who knows, but I thought that was an interesting piece of news. And another one I found was the website spinningup.openai.com. Obviously those are the people who do GPT-3 and all that fun stuff, who seem to be leading the way, and they talked about this.
A
There was sort of a good introduction to reinforcement learning that I enjoyed, and I learned a few things. I said learned, so: in the state of the art, they separate reinforcement learning into model-free reinforcement learning and approaches that have models, and I thought that was interesting and relevant to us, because we've talked a bit about reinforcement learning, but with the assumption that there's probably a model in there that's being retrained somehow. They do talk about that. I think they said the more common modes were model-free.
A
So I don't know if that means there'll be a trend towards model-based reinforcement learning. I thought that was kind of interesting, because if the dominant thing is model-free, then that's a challenge to see how MLOps practices apply to it. If you've got this system that's almost alive and evolving, it's hard to, unless you capture the entire data stream that's going through it. You know, if it's a real-time system or whatever, then that's going to be a challenge.
A
So that'll be something interesting to watch, and maybe it evolves more to a model-based one, where the models are effectively checkpoints and they don't need to evolve in real time. I thought they have some good material out there, that OpenAI group; they're obviously at the bleeding edge of research. So I thought it was a good introduction to it.
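The model-free idea being discussed can be made concrete with a small sketch. Below is a minimal tabular Q-learning loop on an invented five-state corridor: it is model-free because it never learns the environment's dynamics, only action values, so the evolving value table itself is the artifact you would have to capture. The environment, names, and hyperparameters are all made up for illustration.

```python
import random

# Toy corridor: states 0..4, start at 0, reward only on reaching state 4.
# Model-free Q-learning: no model of the environment is ever built; the
# only learned artifact is the table of action values Q[state][action].
N_STATES = 5
ACTIONS = [-1, +1]  # step left / step right

def step(state, action):
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def train(episodes=500, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {s: {a: 0.0 for a in ACTIONS} for s in range(N_STATES)}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            action = rng.choice(ACTIONS)  # random behaviour; Q-learning is off-policy
            next_state, reward, done = step(state, action)
            best_next = max(q[next_state].values())
            q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
            state = next_state
    return q

q = train()
# The greedy policy should prefer stepping right in every non-terminal state.
policy = {s: max(q[s], key=q[s].get) for s in range(N_STATES - 1)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1}
```

Nothing in the loop retains the transition function it experienced, which is exactly why capturing such a system for audit means capturing the whole data stream rather than a fixed checkpoint.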
B
A
Yes, that's right! That was in the news too; I should have put that in. Yeah, that was late. I was talking to people about that last Friday, and even my son brought it up. I was driving him to school late last week and he's like, "Dad, what's this thing that pops up in YouTube and Google?" It started at school, and so I had to explain it poorly, and then I went back and read some articles, and Stratechery had some great analysis on it. And yeah, people were confused, because, you know, you've got the government organization, which is normally pretty good. They normally look after consumer rights, so they do some good stuff with warranties and things, but they seem well out of line here, and they seem to say, "Oh, we didn't say anything about giving away data or anything." It's like, they just said:
A
"Oh, we just need a 30-day heads up for a change in algorithms." And then it hit me, it's like, well, you know, all search today uses machine learning.
A
If you want to actually tell them the algorithm, it's like, well, here's all the data we used to train it in the last 10 days, or whatever. So yeah, that was kind of shocking. That's an example of the escalation-of-data challenge we mentioned, and so that was interesting, and then I was able to explain it to my son when it hit me. It's like, all right, but they use... yeah.
A
What they actually mean is that they're using trained models, you know, based on all sorts of people's information, to get the right information to the right people. And so it's not just an algorithm of weights: "we look for tags in this way, and we weight things from these domains over this one this week, and this one over that one," and so on. No, it's not like that. It's not hand-coded, and it probably hasn't been for a decade or more. So yeah.
A
I thought there was an interesting collision between the real, well, the legislative world and ML, and fundamentally MLOps. So you've had all this data that was presumably not leaking out, and suddenly, theoretically, they might have to hand it over if you take it literally. So that was, yeah, that was exciting and embarrassing as an Australian. So yeah.
A
Yeah, I mean, it was interesting because this one didn't even explicitly mention machine learning. It's like, I can totally imagine cases where they ask about systems that are smart and making decisions in banks and insurance companies or whatever, but this was something where even to me it didn't occur to me what they were talking about, until I thought for a second: hang on, of course this is machine learning. But it was never mentioned anywhere. No one even thought about it. Even people who, you know, work in this domain might not.
A
It might not be obvious, but it's like this just came right out. So yeah, I think you'll see it in a lot of places. So yeah, it's going to be interesting. Yeah.
C
C
A
They were basically claiming that Google and Facebook (they named them specifically) were making a significant amount of income due to the work of local news organizations, and that those local news organizations should be compensated for that. And so they basically made a ruling that these two parties had to come together and form a binding agreement, or an arbitrator would, and part of that agreement was to agree how much they would pay them to put little news tiles on the Google page.
A
I don't really see it by Google, or maybe if you use an Android phone or something, or Facebook showing it. It didn't really make sense, because the only time I ever see that is if someone pastes a link to a local newspaper or whatever in Facebook, and then it unfurls it and I read the content. And so they were saying they had to come to an agreement to pay the news providers for it. And that was interesting enough on its own, kind of like a weird sort of tax, but instead of the tax going to the people, it just went straight to... and of course, the driving force here was News Limited and Rupert Murdoch, no surprise there. But the side effect of that is they had all these extra riders: they had to do certain things, and notify them of changes to the algorithm with 30 days' notice before they make a change.
A
It actually said that: 30 days' notice for, you know... and this is presumably a machine learning model. It's trained by search data that's regional, and there are probably millions of features that go into that, which involve, you know, people's browsing data and stuff, that all get combined together to provide better search. And they're saying, well, we need 30 days' notice of that. And yeah, that's not actually possible.
A
Unless you take it literally and go, we'll give you all the data that was used to make this model, here you go. And that's got all sorts of sensitive stuff in it, no doubt. So yeah, that's the sad story of it. Yeah, it's a good example of that. I think this could almost have been, from Google's point of view... is it "escalation of data categories"?
A
Yeah, which is what will happen if they persist with it. It happened in Germany, and the news organizations basically said, no, this is really bad, our traffic's dropped off a cliff, undo it. And in Spain, I think they came to some agreement to give them a small amount of money, but it doesn't really make any difference. But yeah, the problematic thing is that wording around the algorithms, and this misunderstanding that algorithms are separate from the data that builds them. It's like, yeah...
A
If you write code by hand, you could, you know, as a developer... a whole lot of input went in. You know, you talk to people who know the business domain that you're working in, maybe you talk to actuaries if you're an insurance company, maybe you talk to end users. All this information goes into you, you produce source code, and that gets compiled and you can distribute that. And someone legally might say, we need to see the algorithm.
A
You show the source code, and you're probably not leaking anything from those conversations, no personal information and stories about medical things, because it's all written in code. You probably don't hard-code people's personal information, even in test cases; every developer knows not to do that. But with a trained model, it's like, no: the data goes straight into that thing and gets combined into a model.
A
People don't understand that. I really think of models as being like executables, in a way. They're programs; they're just not written by a human. So to show the algorithm, you have to show all of the inputs to it, which is all of the data.
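That "the model is built from the data" point can be pushed to a deliberately extreme illustration: a 1-nearest-neighbour classifier, whose trained artifact literally is a copy of the training records, so handing over "the algorithm" hands over every record. The records below are invented.

```python
# A 1-nearest-neighbour "model": training produces no human-written logic,
# only a stored copy of the training records, so the artifact IS the data.
def train(records):
    # "Training" here is just retaining the data verbatim.
    return list(records)

def predict(model, x):
    # Classify with the label of the closest stored training point.
    nearest = min(model, key=lambda rec: abs(rec[0] - x))
    return nearest[1]

# Hypothetical sensitive records: (income, decision) pairs.
training_data = [(20_000, "declined"), (45_000, "approved"), (90_000, "approved")]
model = train(training_data)

print(predict(model, 30_000))  # declined
print(model == training_data)  # True: shipping the model ships the records
```

Most real models compress the data into weights rather than storing it verbatim, but the dependency is the same in kind: the artifact is a function of every training record, not of hand-written rules.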
A
It probably never went that far, but there's this hard line between the sensitive information that went into building the system and the artifact, whereas in machine learning models it's not there. So yes, it was an interesting case.
C
What's so interesting is, it seems like... because they did something similar in Europe, as you were saying, but it seems like this is almost attacking more, or like a countermeasure against, the near-monopoly that something like Google has. It's against its size and its dominance, rather than even how it's using data; it's just that, as a side effect, it shows how important data is to Google.
A
I mean, I guess the government parties involved thought they would have the backing of the people, because there's this view that these big tech companies are not paying their fair share, and so they thought, well, this is something we can do. But yeah, there are all these side effects of that. And I think where we went further was that algorithm thing that Terry mentioned, which then leaks into machine learning and data and things like that. So yeah.
A
That would be an interesting case: if there was some legal demand for a model you had trained, while doing good MLOps practices and things, and a government organization like our ACCC, as they're called, said, we need to see that, or we need to see what changed. Like, would it satisfy them to give them the binary? You know, which would be just a...
A
...I don't know, one of those formats, Terry, you know, a zipped-up TensorFlow protobuf thing with all of the little parameters for the neural network, and go, look, here's the model, here's the algorithm we use. Then it's just noise to a human. And here's the one we used last time.
A
Different noise. Like, this would have to go to court to satisfy it; we won't know, because if they say no to that, then you have to go back to the source data. And of course, if you've done, you know, the stuff we're talking about, then you would be able to do that. You'd be able to go back into some repository and go, here's the training data and the scripts, and you can reproduce it this way. But then you'd be giving away all the data.
A
B
Like that, there's the problem that, if that stuff is self-learning, then it's effectively updating every few seconds. So, you know, you would just be flooding them with petabytes of continuous data.
B
Yeah, and whether anyone could actually do anything useful with it in any case. And you would somehow have to delay the actioning of that in an entire territory.
A
Which obviously would be catastrophic to the user experience; that's what... yeah. But I guess, maybe this will serve as a test case for that. I mean, this is a side effect of the bigger thing, so that'll probably not happen, but if this is the first of many, then I can't imagine this stuff being resolved technically alone. There'll, you know, have to be something that goes to court before people understand what this is like.
A
It was very interesting. Here's something very mainstream that my son's asking me about on the way to school, and I effectively have to go, oh, this involves MLOps. It's a learning opportunity of, like, you can't do it, and here's why. It's not mainstream knowledge. So I guess... you know, maybe, I mean, the MLOps roadmap is clearly not meant for mainstream consumption, but are there extractions of it that could be authored with other people and published, to kind of explain this, to educate a bit more of the mainstream? To say, here's why you can't just get an algorithm without the data, or if it's learning online, here's why you can't at all.
A
Actually, it's just that the physical nature of this is different; like, you need to understand this world we're in now. It's hard enough to take in: even in the mainstream, the idea of source code and, you know, IP, there's like a tiny, tiny bit of knowledge. I think there have been court cases here and there, and the word "algorithm" is thrown around in the mainstream, so there's some understanding of that.
A
But does there need to be a little bit more in the mainstream? And by mainstream, maybe I just mean journalism, journalists, mainstream developers, business analyst types. Maybe they need to have a little bit of education, like an executive...
A
...summary of: this is the minimum you need to know about the things that are running your life day to day. You're already using this every day: when you use a Google search, when you use email, when, you know, you use your credit card and there's fraud detection and you get declined for something. Every day you're encountering models that have been trained, that are an algorithm, but exposing that algorithm means exposing all the data. Is that... I don't know.
A
I thought that'd be an interesting thing for the right person to explain at the right level, to come out of this, because I think that would help people understand. Like, I'd been trying to explain it to family members and things like that, but been struggling.
B
Yeah, I mean, we're about to go into a period of time in which our fundamental legal system is going to be challenged, because, you know, for the first time in a long time, we're encountering something that clearly does not fit within the current legal framework.
B
...legal entities of the same class as a person, in order to be able to apply the same sets of laws to a corporate body as you do to a physical body. And so, you know, we may end up seeing some jurisdictions actually giving...
B
A
Right, because in that case you could ask the model, you could interrogate the model, but you wouldn't necessarily get access to the data that it had access to; it's got a personhood, so to speak. Yeah, I mean, that's where people have often talked about that in sci-fi-like scenarios of personhood being granted to things that are sufficiently intelligent. But yeah, that's a different angle: it's like, we can't handle this legally, so we need to hack it, like corporations.
A
Yeah, that's interesting, because it makes no sense that corporations have personhood. I'm not sure if that's the same outside of the US, maybe it is, but yeah, that's certainly a thing, and it's real, and it's absurd, but it works enough. Or maybe it doesn't work that well, but it's in use. So yeah.
B
So, in terms of where we are with the roadmap, we've now completed all of the challenges and technical requirements that we currently have in...
A
...the document. I think there was just one for me to complete: the last sections in the sync solutions table. I hadn't had a chance to look at that.
B
A
B
That's right, yeah. So what we need to do here is go through this table and then validate the map that's in there at the moment, and update anything that's missing. So the colour code is, let me just cut across... anything that's in yellow we definitely need to edit. The remainder of the colours are the initial estimates that I put in for some of these things, and we need to review those. The classifications, basically, are: if it's...
B
B
So yellow is unclassified. So from our perspective, it means that we've added something to the table, but we haven't yet even thought about it or validated the numbers.
B
B
White is where we know or expect that a solution will be available and is being qualified. You know, so it's pre-production level, but there are betas out there, people are testing things. And then gray is, you know, there's a solution available and it's being continuously improved.
A
B
So when I first created the table, I put in some candidate values, but as we've added additional items to the other tables, I've just added those to the bottom of this table and marked them as "to be determined".
B
A
I think that's well underway. Everyone knows that that's a problem.
B
Oh, so "treating machine learning assets as first-class citizens in DevOps processes". You know, clearly we've got solutions out there which are starting to do that. More improvement is needed, but it's something.
B
No, so white would be, you know, somebody's got a beta out there, but it's not broadly adopted yet, right? Whereas gray is, you know, this is...
A
...the state of the art. Yeah, or if they're not using it, they should be, but they haven't got it yet. Like CI/CD: you know, it's out there, people use it, other people don't use it, but they should. You know, like SCM: surely everyone uses it, but you just know there's someone that doesn't. But it's fair to say that it's mature and improving.
B
So, "providing mechanisms by which training sets, training scripts and service wrappers may all be versioned appropriately". Again, some of the solutions can already do that to some extent, so we're in that continuous improvement phase.
B
A
Yes, this year, yeah. Should that be white to the right of that, or instead of gray? Like, it might take a... is that the progression, that it would go blue, then white, then gray?
B
Typically,
the
what
you've
got
to
think
about
is
that
each
each
of
these
blocks
represents
a
year.
No
things
that
are
going
to
be
difficult
might
take
a
year
to
go
through
qualification.
B
Things
that
are
going
to
be
more
straightforward.
Will
right.
A
"Including training sets as managed assets": if anything, for that one, it seems like there's a bunch of things out there sort of vying for people to use them. There are different projects or products that look like Git or something like that for managing data.
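The core idea behind those git-like data-management tools can be sketched in a few lines: address each training file by a content hash and pin the set in a manifest, so a model version can reference the exact bytes it was trained on. The file names and manifest shape below are invented for illustration, not any particular tool's format.

```python
import hashlib
import json

def content_address(blob: bytes) -> str:
    # Identify a blob by its SHA-256 digest, the same idea git uses for objects.
    return hashlib.sha256(blob).hexdigest()

def build_manifest(files: dict) -> str:
    # files: {logical name -> raw bytes}. The manifest pins each file to a
    # digest, so "the training set for model v7" is reproducible bit for bit.
    entries = {name: content_address(blob) for name, blob in files.items()}
    return json.dumps(entries, indent=2, sort_keys=True)

training_files = {
    "train.csv": b"age,label\n34,1\n51,0\n",
    "labels.txt": b"1\n0\n",
}
manifest = build_manifest(training_files)
print(manifest)

# Any change to the data changes the digest, so a stale manifest is detectable.
changed = dict(training_files, **{"train.csv": b"age,label\n34,1\n51,1\n"})
assert build_manifest(changed) != manifest
```

A model version that records its manifest digest can then be traced back to its exact training bytes, which is the property the later discussion about reproducing a model from a repository relies on.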
A
B
I think there's probably an argument that we need to shift that to the right, because, you know, we're not going to see maturity in that space this year now.
B
Now, the security one: again, we're seeing very limited activity in that space. You know, we've seen some drivers this year, and I've seen some actual problems occurring, but...
A
There's no... yeah, it feels like there need to be more accidents for improvements to happen, just like with airliners: they were probably very dangerous things to fly around on in the '50s and '60s, but each accident taught people a lot, and now they're very safe. So it feels like, often with things like this, bad things have to happen for good things to happen.
B
Yeah, so I think again we need to shift that right a year, and I think this is one where the roll-out will be a lot slower, and there'll be a need for a lot more pre-qualification activities.
A
B
So we haven't qualified that; I don't think it's very well considered.
A
Things like this Australian law around Google, like providing advance notice of algorithms: maybe that's too generic, or not too generic, maybe that's too narrow, to be included in here. But it seems like, even in this past week, a novel challenge has popped up; that's to be determined.
A
Is there some value in having a catch-all there for other legislative actions that may affect MLOps, that, you know, are already happening? So if this is happening, there's going to be more, right?
B
Yeah, well, I think we've got that in terms of the risk that we talked about in the last session, where we're pointing out the fact that there...
C
A
Yeah, but I guess my point is like: is there room for a row where the solution may be something like a hack like personhood? It sounds crazy, but if that solution worked, it would be a viable solution to that legal problem. Yeah, I don't really know how to word it, so yeah, probably not.
B
Yeah, but this particular one is about how you actually simplify the process of allowing people to respond to GDPR requirements by, you know, making it easy for them to invalidate models and retrain them automatically. Yeah.
C
B
...as a result of subject access requests and...
A
The right... and, of course, that's a great angle for a denial of service, which you could kind of already do with GDPR today. You could spam a company with all sorts of requests for data and things, but this takes it to the next level, because retraining a model is quite expensive; like, the GPT-3 model from OpenAI cost five million dollars to train.
A
So if it falls foul of the right to be forgotten, because there's some web page that it ingested that it wasn't meant to, then that's five million dollars for a GDPR request, if it was to apply. So yeah, it's a very real thing.
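One minimal way to make that scenario tractable is to record, per model version, which source records went into training, so an erasure request immediately identifies which model versions are tainted and must be queued for retraining. The registry shape and every name below are a hypothetical sketch, not a real system.

```python
# Hypothetical lineage registry: which source records fed which model version.
model_lineage = {
    "ranker-v1": {"rec-001", "rec-002", "rec-003"},
    "ranker-v2": {"rec-002", "rec-004"},
    "spam-v1": {"rec-005"},
}

def models_tainted_by(record_id: str) -> set:
    return {name for name, records in model_lineage.items() if record_id in records}

def handle_erasure_request(record_id: str) -> set:
    # Right-to-be-forgotten: every model trained on the record is now stale
    # and must be retrained (at full training cost) before further use.
    stale = models_tainted_by(record_id)
    for name in stale:
        model_lineage[name].discard(record_id)  # exclude from future training sets
    return stale

affected = handle_erasure_request("rec-002")
print(sorted(affected))  # ['ranker-v1', 'ranker-v2']
```

The lineage record turns a vague legal obligation into a concrete work queue, though, as the discussion notes, the retraining bill for each affected version remains the hard part.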
B
Probably not really going to be widely available in the next three years, so there'll probably be ongoing development, and then a period of qualification. So we're probably looking at a sort of blue, blue, white, gray, gray on...
B
B
A
I'd say, based on my experience here, that's definitely underway. I look at things like Metaflow from Netflix: that's something they've built that they're actively using and starting to promote. There are probably various startups out there that are doing that; I know I've seen some that claim this is their aim. So I think it's reasonable to say that development is underway for that, at least.
A
Like, it's probably... maybe not next year, but maybe the year after, you'd start to see it apply. But I think it could go... or is it white for a bit, because it might take a while for it to sort of be fully absorbed?
B
You know, broad alignment with other security assets and systems. So there'll be a long period of trying to, you know, align systems so they can all talk to each other and share information in an appropriate way, whereas this is just a matter of solution providers implementing a version of this solution for customers to use, right? And, you know, some of that, for example, is already in the Jenkins X solution.
C
A
B
B
A
I think so too. I mean, yeah, there are all sorts of things happening out there in different languages, and then there's the rise of unsupervised learning. That's something I've been working with lately, which has different ways of training, but I imagine it'll settle down and be similar. So yeah, I think that'll be an ongoing thing for a while.
A
Yeah, it's just that when things are so new, it's easy for them to have their own way of working for a while. Like, someone comes up with something new, and they're not necessarily going to look at things like MLOps; it's still too new to consider that. Yeah, I think that sounds about right where it is.
B
B
B
So certainly a chunk of this is still at the research stage right now.
A
B
Yeah, well, again, you're going to have a lot of people who are doing this already, but they're doing it by completely bespoke methods, yeah. And then what we're talking about in this particular challenge is about providing standardized ways of doing that with conventional tooling, so that it's open to everybody.
B
Yeah, so that might be that it's two blacks and two blues, yeah, or possibly even push the gray off the end of the...
B
C
When we're looking at this question, does it cover things like, what do they call it, edge training? I forget the word for edge-something. Basically, a lot of your data is being collected and used in forming your model, or influencing your model, at its point of being gathered; so basically, like, on phones or on various IoT things.
B
But specifically, this is about being able to do things like cross-training, so that you train on GPU and then produce a model that is deployable in silicon.
B
A
Although Apple does have some neural chip, I think they call it. It's probably a GPU that you can target; I guess that counts as that.
B
B
Yeah, in the way that edge is being used, say, in the IRDS roadmap...
B
At the moment, it's focused much more in terms of things like factory automation or smart cities, where you're actually looking at massive amounts of smart sensors being deployed, and, you know, then having sorts of things like the automated traffic management systems, where you've got local networks of sensors that are reporting back to intersections, where there's a certain amount of processing going on to work out what the state of a set of traffic lights should be, and then passing that information out through the network and back to the autonomous vehicles, to warn them about the status of other vehicles approaching blind junctions and things like that.
B
B
It's now being pushed out to, you know... the number of edge devices will be orders of magnitude greater than the number of smartphones in the field, right, because there'll be just an even smaller granularity, and there'll be many, many more of them.
A
I was reading in the past week that Google are using, or planning on using, their Android phones to detect earthquakes: the accelerometers and the slight movements, in some time-coupled fashion, mean they can detect earthquakes globally, which I thought was fascinating.
A
I mean, they're not doing the computation on the edge, because I doubt an individual phone would know that there's an earthquake, but correlating it with nearby ones might. So I thought that was an interesting case of sensing at the edge, but yeah, who knows, they might do more in the future.
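That correlation step (no single phone can call an earthquake, but several nearby spikes inside a short window can) is essentially coincidence detection, and a toy version fits in a few lines. The readings, names, and thresholds below are invented for illustration; a real system would also need location clustering and far more careful statistics.

```python
# Toy coincidence detector: declare an event only when at least `quorum`
# distinct sensors report a spike within the same short time window.
def detect_events(spikes, window=2.0, quorum=3):
    # spikes: list of (timestamp_seconds, sensor_id), e.g. accelerometer triggers.
    spikes = sorted(spikes)
    events = []
    start = 0
    for end in range(len(spikes)):
        # Slide the window so it spans at most `window` seconds.
        while spikes[end][0] - spikes[start][0] > window:
            start += 1
        sensors = {sid for _, sid in spikes[start:end + 1]}
        if len(sensors) >= quorum:
            events.append(spikes[end][0])
    return events

# One phone jiggling (t=5) is noise; three phones within ~1.2 s is an event.
readings = [(5.0, "phone-a"), (100.0, "phone-a"), (100.5, "phone-b"), (101.2, "phone-c")]
print(detect_events(readings))  # [101.2]
```

The quorum over distinct sensors is what lets noisy individual devices be combined into a confident aggregate signal without any single device ever "knowing" there was an earthquake.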
B
And then next session we can push through, and Christopher...
A
...and I can get that done, and yeah. I like the idea of some sort of content somewhere explaining MLOps to anyone, or a subset of it. I don't know if we have our, you know, wheels turning in our minds; maybe we'll think of either someone that could write it, or some way to express what we're thinking. Because this stuff that was in the news about the Australian media and Facebook and Google would just be some great content to have out there, to explain to people simply: an algorithm is not an algorithm anymore without data. So yeah.
A
Kara, if you have any bright ways to express that, or if I do, then we should collaborate on some content under the CDF banner or something, because I just think it would be good to get that knowledge out there.
B
Well, I think this is one of those areas where actually there's already quite a lot of stuff out there, but, you know, you need to look at things like Bostrom's Superintelligence book.
B
A
Yeah, he goes into almost fractal detail, like, into every little angle of "what if we did this", and then takes that to the extreme, then backs back out. It's a very interesting book, so I'm reading through it. I read it at night time, but then I fall asleep and have weird dreams. But it's an excellent book. But yeah, I mean, what I mean is like something that would go on...
A
...you know, more mainstream news sites, just to explain to people what an algorithm is in this world, because they've only just started to understand what they thought it was, but that's no longer true with trained models. So it's just something I think it would be good for people to know: the more people that knew, the fewer knee-jerk things would happen that would just be ineffective, like this thing of saying you'll have to give them 30 days' notice.
A
If there was some mainstream understanding of what an algorithm is in this world, that would not happen. So anyway, just a thought. Yeah, maybe there is content out there to research and, you know, restate again and again; that's often what it is. Or finding a journalist that's happy to restate things, yeah. It could be something good to do. All right, I'm gonna sign off.