From YouTube: CDF SIG MLOps Meeting 2020-07-30
A: Yeah, I was remembering just this week that it was a year ago I was last in London, during the record-breaking heatwave. I think it was a Thursday, and it was super hot, 38 degrees or something, and we just stayed in the hotel; we didn't want to go out. Which is funny, because if it was 38 degrees here we wouldn't even talk about it.
A: I don't know why that is. Maybe it's air conditioning or something, but here it would just be a warm day. I didn't want to be running around London in the heat, though, and it wasn't very nice. All right.
A: So, last time I wasn't around. Was there anything interesting discussed, or was it mostly going through the roadmap and continuing on with that? Not that that isn't interesting.
A: That's good. I guess we can continue on, unless anyone had anything interesting this week. Trying to think... I did take a note of something; I'll paste it in the chat here. A friend of mine has a project, and I guess a company as well, with a bunch of interesting ML engineering toolkits for tracking experiments and providing a bit more rigor around running those experiments, which I thought was interesting.
A: It's very Python-centric, but it seemed to resonate with stuff I've seen, where you often want to kick off a bunch of experiments and let them run. That was an interesting link.
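[Note: a minimal sketch of the experiment-tracking pattern described above, kicking off a grid of runs and recording each run's parameters and metrics. This is illustrative Python only, not any particular tool's API; the train() function and its metric are hypothetical.]

```python
import json
from itertools import product
from pathlib import Path

def train(learning_rate: float, batch_size: int) -> dict:
    """Hypothetical training routine; returns metrics for the run."""
    # ... real training would happen here ...
    return {"loss": 1.0 / (learning_rate * batch_size)}

# Kick off a grid of experiments and record each one for later comparison.
for i, (lr, bs) in enumerate(product([0.01, 0.1], [32, 64])):
    run_dir = Path("runs") / f"run-{i:03d}"
    run_dir.mkdir(parents=True, exist_ok=True)
    metrics = train(lr, bs)
    # Persisting params and metrics per run is the "rigor" part:
    # every experiment stays reproducible and comparable afterwards.
    (run_dir / "run.json").write_text(json.dumps(
        {"params": {"learning_rate": lr, "batch_size": bs},
         "metrics": metrics}, indent=2))
```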
A: Another one, I'm just going to find it: I saw a Netflix project this week called Metaflow. I'll paste a link to that. I'm always a bit in two minds about Netflix open source, because they have a habit of retiring things: they build things for their own purposes, and once they no longer need something they can't justify continuing it, unless someone takes it and runs with it. Spinnaker, for example, lives on; obviously Netflix use it, but it's under the CDF.
A
So
but
this
metaflo
one
was
interesting
because
it's
sort
of
building
tool-
maybe
it's
similar
to
some
of
the
aims
that
you've
had
terry,
it's
sort
of
to
enable
data
scientists
to
build
and
operate
and
sort
of
take
some
of
the
engineering
away,
not
not
so
much
building
tools
for
developers,
which
is
that
other
one
guild
ai
is
more
for
that,
but
more
for
the
to
bring
the
data
scientists
along.
So
I
thought
that
was
an
interesting
one.
I'll
put
it
in
the
notes.
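[Note: for reference, a minimal Metaflow flow looks roughly like the sketch below. The step bodies are placeholder assumptions, but the FlowSpec / @step / self.next structure is Metaflow's actual pattern; a flow runs locally with `python training_flow.py run`. Around this time Metaflow also gained a `step-functions create` deployment command, which is the AWS Step Functions support A mentions below.]

```python
from metaflow import FlowSpec, step

class TrainingFlow(FlowSpec):
    """A toy flow: Metaflow handles orchestration, versioning and data
    passing between steps, so the data scientist mostly writes plain
    Python."""

    @step
    def start(self):
        self.data = [1, 2, 3]  # placeholder for real data loading
        self.next(self.train)

    @step
    def train(self):
        # Anything assigned to self is snapshotted as a flow artifact.
        self.model = sum(self.data)  # stand-in for real training
        self.next(self.end)

    @step
    def end(self):
        print("trained model:", self.model)

if __name__ == "__main__":
    TrainingFlow()
```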
A
It's
I
thought
it
was
an
interesting
project.
That
metaphor,
I
don't
really.
I
haven't
looked
any
further
into
it,
but
if
it
was
sort
of
a
work,
flowy
thing
is
that
something
that
we
should
think
about?
You
know
from
the
cdf
point
of
view,
as
it
might
be
an
interesting
project,
because
last
I
heard
and
kara
might
know
more.
There
was
there's
always
the
lookout
for
new
interesting
projects
to
sort
of
invite
to
be
part
of
the
city.
I
can
give
it
that
netflix
is
already
part
of
it
in
a
big
way.
B
We
do
seem
to
be
steadily
building
attention
now
we're
getting
a
number
of
people
joining
the
mailing
list
each
week,
I'm
regularly
getting
messages
from
people
expressing
support
for
the
idea
of
having
a
roadmap.
B
So
you
know,
but
it's
it's
it's
slowly
growing
organically,
with
without
a
lot
of
promotion
at
the
moment
yeah
once
we
once
we
get
close
to
the
first
published
version,
I
think
we
can.
We
can
have
a
bigger
push
to
to
make
sure
that
people
are
aware
of
the
existence
and
have
read
it
and
are
starting
to
consider
the
implications
of
that.
A
Well
I'll
see
if
I
can
find
someone
there
and
yeah,
maybe
mention
it
to
them,
and
then
who
knows
whether
it
leads
to
something
else
with
the
cpx?
That
could
be
interesting,
but
the
the
netflix
angle
is
interesting
because
it
would
be
like
a
lot
of
things:
netflix
reasonably
opinionated
onto
the
netflix
way
of
working,
which
is
proven
and
probably
fairly
aws,
heavy,
at
least
to
start
with,
which
is
fine.
A
That's
where
people
are
like
I've
noticed
they've
added
support
for
step
functions,
which
I
always
thought
would
kind
of
would
be
kind
of
useful
for
long-running
things
I
use
google's
and
they
google
stuff
and
they
have
their
own.
You
know
equivalents
to
manage
functions
and
so
on,
but
yeah
netflix,
you're,
obviously
hauling
on
amazon.
So
I
thought,
that'll
be
an
interesting.
A: Netflix are already in on the CDF, so there's that. That was my first thought when I saw something from Netflix: I saw the GitHub project and wondered, is this another Asgard? There are all these projects that they built and then, I wouldn't say abandoned, they just kind of retired them, which is fair enough: if they don't have a self-sustaining community, and Netflix aren't using them, then they move on from it.
A
But
if
they
do,
then
they
find
a
way
to
work
with
it.
Like
spidernik
is
a
good
example
of
that
success,
so
I
wasn't
sure
whether
where
this
one
felt,
but
it
seemed
to
line
up
with
a
lot
of
the
stuff
we've
been
talking
about,
but
from
the
angle
of
of
let
let's
build
the
framework
for
data
scientists
to
make
it
as
as
pleasant
as
possible
to
do
the
right
thing
with
whatever
tools
you
need
to
use
without
having
to
learn
too
much
and
and
it's
battle
hardened
and
and
that
yeah.
B: I was going to do the conference circuit, but obviously that got completely disrupted this year, and for some reason the talks that we originally had set up for our conference didn't get renewed. So I'm no longer doing the MLOps overview that I was planning to do.
C: Which conference is this that your talk has been withdrawn from?
B: We had to reschedule the CDF one, and apparently the talks weren't carried over, so I'm not going to be doing the introduction to the roadmap that I was expecting to do.
A
Are
there
other
like
what
what's
the
what's
the
audience
to
evangelize
this
to
a
publicize?
Is
it
developer
heavy
or
data
science
heavy,
because
I
imagine
there'd
be
different
places
where
people
hang
out,
especially
now
that
everyone's
kind
of
finding
these
online
ways
of
doing
things
like
you've
got
google
doing
a
multiple
week
thing?
A
There's
re
invents
like
a
smeared
over
three
weeks
in
december,
which
sounds
crazy,
but
maybe
that
works
people
sort
of
you
know
spend
a
day
each
week
on
something
like
are
there
other
places
that
it
makes
sense
to
introduce
this
like
where
data
scientists
or
or
developers
with
the
data
science
event
like?
Where
do
they
virtually
hang
out.
C: It's quite a good conference. I think it's the London version, although they may be combining them all together, and it is run underneath the Python Software Foundation.
B
Yeah
I
mean
there's
a
there's,
a
lot
of
different
ai
and
machine
learning
groups
and
meetups
out
there.
So
I
guess
we
just
need
to
start
thinking
about
a
strategy
for
getting
in
touch
with
with
those
and
and
offering
to
speak.
A
There
was
some
people
on
the
biz
dev
side
on
amazon
that
were
interested
in
it.
I
they're
probably
working
on
the
mailing
list,
but
I
don't
think
they've
made
it
to
a
meeting,
yet
I'm
not
sure
what
they
would
have
on,
but
yeah
and
and
getting
implementers
of
it
as
well
like
people
who
are
sort
of
boots
on
the
ground
would
be
good
as
well,
because
they're,
the
ones
that
often
feel
the
questions
of
some
of
these
issues
that
we're
addressing.
A
Whereas
the
pure
data
scientists
I
mean
it's
still,
we
still
need
to
talk
to
them
because
the
benefit
there
is
convincing
them
that
it's
important
like
it's
like.
Like
your
you
know,
facts
and
figures
are
around
how
many
things
never
make
it
to
production
or
the
ones
that
do
don't
get
updated.
That's
kind
of
relevant
and
and
yeah.
A
Yeah,
that's
a
good
point
because
it's
worth
thinking
about
that
soon
because
things
are
flushing
out
pretty
nicely
now
so
the
there
was
one.
I
had
an
open
pull
request
that
I
was
going
to
close
terry
pull
request
number
27,
which
I'll
paste
in
here,
which
was
around
reinforcement,
learning
which
had
a
typo
and
everything.
B
I
think
I
I
remember
putting
in
the
comments
that
we'd
we
had
got
some
gaps
that
we'd
previously
talked
about
that
we
we
should
probably
include
in
in
in
that
area,
so
maybe
just
check
that
thread
before
you
close
it
and
see.
If
there
was.
A: Yeah, they needed to be cleaned up. The one that I did capture from that was what's colloquially known as the swearing problem. Actually, that reminds me: since we last talked, that whole GPT thing really took off on the web. Everyone's talking about that OpenAI thing with great amusement. I've applied to get API access, and I assume you've had a go too, but I haven't got in; no one I know has got access to it yet. I thought that was interesting.
A: I did read the paper on it, and it mentioned things in this area: because it's trained on a Common Crawl data set, they can't guarantee its behavior at all, which I thought was interesting. Not quite the same as the MLOps angle, but the GPT-3 stuff just reminded me of it. It's an interesting paper to look at too.
A
We
did
merge
in
the
stuff
to
do
with
that
swearing
problem
and
the
emergency
sort
of
kill
switch
idea,
and
this
was
kind
of
related
to
that.
But
I
was
trying
to
capture
it
that
reinforcement
learning
is
extra
risky
in
this
regard,
because
there's
less,
at
least
in
my
mind,
there's
less
of
an
explicit
train
test.
A
You
know
blue
green
canary
deploy
with
reinforcement,
learning
it's
more
on
the
fly,
but
that
might
I
don't
know
how
that
fits
in
this
road
map,
or
maybe
it
doesn't
like,
maybe
like
you're
saying,
reinforcement
learning
is
really
just
it's
a
pattern.
That's
applied
where
you
know.
There's
this
sort
of
faster,
tighter
loop
of
there's
new
data
things
get
trained,
you
know
it
gets
promoted
or
not
depending
on
yeah.
Sorry,
terry.
B
Yeah,
I
think
I
think,
there's
a
challenge
entry
that
we're
missing
kind
of
addressing
that
and
pointing
out
that
those,
though
those
techniques
sit
outside
of
the
the
traditional
release
and
deploy
approach,
and
so
we
probably
need
to
just
give
a
little
bit
of
thought
to
that
and
express
that
as
a
challenge.
Even
if
we've,
we
don't
have
a
good
set
requirements
to
go
with
it.
A
Yeah,
that's,
I
think,
that's
kind
of
was.
My
intention
was
that
this
is
a
whole
evolving
area
that
we're
not
covering
but
yeah.
We
don't
have
a
necessarily
anything
to
say
on
it
in
terms
of
the
solution.
It's
just.
If
people
are
going
to
do
it
they're
going
to
do
it.
B
Of
train,
but
it
may
be
that
there
are,
there
are
things
that
you
can
do
within
your
ci
cd
environment
that
actually
helped
to
improve
the
quality
of
that
so
yeah,
there's,
there's
one
area
of
of
ci
cd.
That
is
really
sort
of
maintenance
related
in
that
a
lot
of
the
platforms
at
the
moment
are
focused
on
doing,
build
and
release
and
then
doing
all
the
verification
and
checks
on
the
artifacts
that
you're
releasing.
B
So
so
I
ideally,
these
platforms
should
also
be
doing
automated
rebuilds,
a
regular
cadence
just
to
keep
the
tests
running
and
and
to
make
sure
that
you're
re-evaluating
software,
that
proactively
that's
already
in
production
and
if
you're,
using
things
like
self-learning
algorithms
that
are
effectively
self-modifying
code.
A: There's a classical software analogy to that; I guess sometimes you'd call it a synthetic transaction. People sometimes use, or do use, CI/CD systems to run through an end-to-end scenario, to make sure people can sign up to the website or that they can process a credit card. It's effectively blurring the lines between monitoring and CI/CD. I know when people talk about models and AI they talk about monitoring a lot, so maybe this is the same thing.
A
It's
blurring
the
line,
I'll
see
just
gonna
try
and
capture.
A: I think some people are using the term continuous verification for something like this. It's the same CI/CD thing, but it's after the D, the delivery; you still keep doing it, and that's the verification. It's effectively the same thing, because when people say monitoring or observability, they're more thinking about probes or signals or metrics or tracing and things like that, whereas this isn't so much that. This is more: are things in an okay state, based on some set of criteria, by automating things and driving a synthetic transaction.
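[Note: a minimal sketch of that continuous-verification idea: a job run on a schedule, after delivery, that drives a synthetic transaction end to end and fails loudly if the criteria aren't met. The URL and endpoints are hypothetical; in practice this would run from the CI/CD system on a cron trigger.]

```python
import sys
import requests  # third-party: pip install requests

BASE_URL = "https://staging.example.com"  # hypothetical environment

def synthetic_signup_check() -> bool:
    """Drive an end-to-end scenario: sign up, then fetch the profile."""
    session = requests.Session()
    resp = session.post(f"{BASE_URL}/api/signup",
                        json={"email": "probe@example.com"}, timeout=10)
    if resp.status_code != 201:
        return False
    profile = session.get(f"{BASE_URL}/api/profile", timeout=10)
    return profile.status_code == 200

if __name__ == "__main__":
    # Exit non-zero so the pipeline marks the verification run as failed.
    sys.exit(0 if synthetic_signup_check() else 1)
```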
B: Especially in a heavily containerized environment, you have to expect that every container image that you rely on is evolving over time. Each one can potentially develop vulnerabilities that will be discovered over time, and all of your dependencies are similar.
B
The
the
the
risk
with
ci
cd
is
that
people
want
to
treat
it
as
fire
and
forget
so
you
ship
a
product
and
then
ignore
it
until
you've
got
some
new
features
to
add
right.
But
realistically
you
actually
want
to
be
monitoring
the
health
of
that
that
set
of
ip
on
on
a
regular
cadence
and
evaluating
how
much
effort
is
going
to
be
involved
in
updating
the
the
product
if
dependencies
are
changing
and
also
you
need
to
evaluating
the
severity
of
any
cves
to
impact
your
existing
dependencies.
B: Here's our estimation of how much effort is involved in actually updating this product to align it to all the latest changes, and here's our estimation of the level of risk you're carrying at the moment with this product if you leave it as is. And then, if we extend that to MLOps, we're also factoring in any drift in our models, any catastrophic forgetting, any dynamic learning that's going on operationally, and it just gives you another set of metrics against which you can evaluate the health of your IP.
A
Yeah,
I
guess
people
well,
some
people,
think
of
sort
of
ci
cd
infrastructure
is
production.
Typically,
it's
separated
from
it
at
least
that's
where
people
are
today
but
yeah.
I
could
see
it
in
terms
of
availability
and
uptime.
People
expect
it
to
be,
you
know,
always
available,
so
it's
reasonable
to
think
that
it's
should
be
there
to
be
used
continuously.
A
So
I
guess
the
same
this.
It's
the
same
analogy
as
software.
It's
just
that
in
like
a
typical
microservice
or
a
typical
web
app
or
something.
A
If
there's
no
developers
making
changes
on
a
given
month,
then
there's
not
going
to
be
any
change
based
activity,
but
it's,
but
in
a
ml
ops
world
it
the
model
could
be
evolving,
even
if
you're
not
doing,
reinforcement,
learning
or
online
learning
or
unsuperv,
or
anything
like
that.
Just
the
fact
that
the
data
is
is
is
changing
and
drifting
over
time
would
trigger
a
new
deploy
of
a
model.
Even
without
a
person
present.
Potentially
you
know
they
might.
That
might
be
quite
reasonable.
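[Note: a sketch of the kind of non-human trigger A describes: compare a feature's training distribution against recent production data and kick off a retrain/redeploy when drift is detected. The two-sample KS test is one common choice; the trigger function is a hypothetical stand-in for whatever starts your pipeline.]

```python
import numpy as np
from scipy.stats import ks_2samp  # third-party: pip install scipy

DRIFT_P_VALUE = 0.01  # assumption: threshold tuned per feature

def trigger_retrain_pipeline() -> None:
    """Hypothetical hook into the CI/CD system (webhook, API call, etc.)."""
    print("drift detected: triggering retrain pipeline")

def check_drift(training_sample: np.ndarray, recent_sample: np.ndarray) -> None:
    # KS test: a small p-value suggests the two samples come from
    # different distributions, i.e. the live data has drifted.
    statistic, p_value = ks_2samp(training_sample, recent_sample)
    if p_value < DRIFT_P_VALUE:
        trigger_retrain_pipeline()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    check_drift(rng.normal(0.0, 1.0, 5000),   # what the model was trained on
                rng.normal(0.5, 1.0, 5000))   # what production looks like now
```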
A: There is an analogy in software. If you're using something like Snyk or WhiteSource, or some other automated system that looks for CVEs in the open source libraries you use, it can tell you that there's a problem, or even open a pull request, which triggers a build; and if you're very keen, if it passed and it was okay, you could have it deployed to a staging or production environment for you. Ditto with things like Docker Hub, when some SSH zero-day or something gets patched in a base-layer image that you depend on in a container.
A: Typically a person would be notified, and then they would change the Dockerfile to say FROM this version, or change the library file, and check that it works by hand. Maybe that's the model analogy as well, but in theory there's no reason why you couldn't have humans out of that loop for those sorts of changes. You're not really changing the hand-coded functionality, but you are improving your software in some way, through a security patch or a more relevant model.
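[Note: a sketch of taking the human out of that loop, using pip-audit as a stand-in for Snyk/WhiteSource-style scanning: if a known CVE shows up in the pinned dependencies, kick off a rebuild. pip-audit exits non-zero when it finds vulnerabilities; the rebuild trigger is hypothetical.]

```python
import subprocess

def dependencies_vulnerable(requirements: str = "requirements.txt") -> bool:
    """Scan pinned dependencies for known CVEs.

    pip-audit (pip install pip-audit) exits non-zero when it finds
    known vulnerabilities, which we treat as 'rebuild needed'.
    """
    result = subprocess.run(["pip-audit", "-r", requirements],
                            capture_output=True, text=True)
    return result.returncode != 0

def trigger_rebuild() -> None:
    """Hypothetical hook: open a PR / start the pipeline instead of paging a human."""
    print("CVE found in dependencies: triggering rebuild")

if __name__ == "__main__":
    # Run this on a schedule, not just on code change: the code may be
    # static while the vulnerability landscape keeps moving.
    if dependencies_vulnerable():
        trigger_rebuild()
```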
B: I think the challenge right now is that most solutions to that are either bespoke to a given organization, or they're features of third-party security scanning packages which hook back into your build process. I think in general that's actually a gap in the CI/CD tooling space. It's more obvious to us from an MLOps perspective, but it is a generic requirement for CI/CD.
A
Yeah,
a
lot
of
people
assume
that
the
trigger
is
to
something
is
some
source
code
changing,
but
there's
a
whole
lot
of
other
things
that
could
that
could
trigger
a
pipeline.
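[Note: to make that concrete, a sketch of the broader trigger set being discussed; everything beyond "source changed" is what generic CI/CD tooling tends to miss. The event names are illustrative.]

```python
# Illustrative catalogue of events that could all start the same pipeline.
TRIGGERS = {
    "source_changed":     "the classic case: a commit or merge",
    "schedule_elapsed":   "regular-cadence rebuild, even with no commits",
    "dependency_cve":     "scanner found a vulnerability in a dependency",
    "base_image_patched": "upstream container image was rebuilt",
    "data_drift":         "production data no longer matches training data",
    "model_degraded":     "live evaluation metrics dropped below threshold",
}

def on_event(event: str) -> None:
    reason = TRIGGERS.get(event)
    if reason is not None:
        print(f"starting pipeline ({event}): {reason}")

if __name__ == "__main__":
    on_event("data_drift")
```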
B: Certainly, if you look at something like a large government IT solution, that will be funded as a linear project: there'll be a budget to deliver version one, and then there'll typically be no funding for anything else, unless somebody decides they want a change in functionality, and then they'll spin up another project to change it.
A: So, a few cells down in the TBD section there's one that says "managing the security of data in the MLOps process, with particular focus upon the increased risk associated with aggregated data sets".
A
Is
that
related
to
use
for
training
a
battery
processing?
Is
that
related
to
people
giving
permission
and
then
revoking
permission
of
the
training
data,
or
maybe
you've
learned
from
you've
trained
a
model
from
some
aggregate
set
of
customer
data,
and
someone
ceases
becoming
a
customer,
so
you
legally
can't
use
their
data
to
train
their
historical
data
to
train
your
model
is.
Is
that
what
that
means,
or
is
the
security
mean
something
else?
Or
is
this
the
security
of
the
source
data,
like
the
training
sets
of
data
that
you
have
to
keep
copies
of
that?
B
I
put
this
one
in
as
a
a
as
a
bookmark
to
get
us
to
do
a
general
consideration
of
security
aspects
right
because
you
know
clearly
we're
dealing
with
typically
highly
sensitive
data
in
large
aggregations.
B: Yeah, so one of the things we need to think about is: should we be building tools that allow you to execute arbitrary pieces of code, or should we be going to a level where we're actually signing and accrediting each component, and ensuring that only valid code is being executed as part of a pipeline?
A
I
did
one
of
the
things
google
announced
was
there
with
amd
they're,
secure,
secure
computing
stuff
where
things
stay
encrypted
until
it's
well
inside
the
processor,
which
I
thought
was
interesting.
A: Down the track that could be mentioned as a solution. Maybe there are other things: if it's about the security of the data, maybe from the very start you could hash a bunch of the data. Maybe for your particular model you don't actually need the raw address of people, or coordinates; it could be hashed in some way but still yield useful features.
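[Note: a sketch of that hash-at-ingest idea: keyed hashing (HMAC) of fields the model doesn't need raw, so the same address always maps to the same opaque token and still works as a categorical feature, while the raw value never enters the training set. The key handling here is deliberately simplified.]

```python
import hashlib
import hmac

# In practice this key lives in a secrets manager, not in code.
PSEUDONYMIZATION_KEY = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: equal inputs map to equal tokens, so the
    field still works as a categorical feature, but the raw value is
    never stored in the training data."""
    return hmac.new(PSEUDONYMIZATION_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "address": "1 Example St, Springfield"}
safe_record = {
    "name": pseudonymize(record["name"]),
    "address": pseudonymize(record["address"]),
}
print(safe_record)
```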
B: So when you're working with machine learning data, typically you have some raw data, and then you need to pre-process that data to encode it...
A: Yeah, it's interesting. I sort of encountered that myself, and my solution was that it's the one bit of code, which may not be the ideal thing, but in a simple case it worked. It's just: am I training or not? There are just a few little bits that change, but all of the code to massage the data, or extract it from the GraphQL system, is the same.
A
It's
just
it's
just
you
know
it's
either
pulling
out
a
great
big
chunk
of
it
and
then
telling
it
to
to
train
on
something
or
it's
or
it's
pulling
out
a
subset
and
yeah
that
that
was
yeah.
That's
a
good
point!
That's
what
you
mean.
That
makes
sense.
It's
just
yeah.
The
solution
in
my
case
was
it's
the
one.
It
was
the
one
bit
of
code,
it
just
made
sense,
but
I
I
can
imagine
that
could
be
problematic
for
more
complex
cases
or
or
you
know,
they're
using
notebooks
in
training.
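[Note: a sketch of the "one bit of code" approach A describes: a single preprocessing function shared by the training and prediction paths, with a flag controlling only the parts that differ, so the data massaging can never skew between the two. The feature fields are hypothetical.]

```python
def preprocess(raw: dict) -> list:
    """The shared data-massaging step: identical for train and predict,
    which is the point; any divergence here silently corrupts the model."""
    return [float(raw["age"]), float(raw["balance"]) / 1000.0]

def run(records: list, training: bool):
    features = [preprocess(r) for r in records]
    if training:
        # Training path: the full extract, plus labels.
        labels = [r["label"] for r in records]
        return features, labels
    # Prediction path: a subset, no labels.
    return features

if __name__ == "__main__":
    data = [{"age": 42, "balance": 1200.0, "label": 1}]
    print(run(data, training=True))
    print(run(data, training=False))
```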
A: Yeah, and below that, the abstraction layer for models: is that related to portability, or is this more of an API?
A: Right, I didn't see that one. Oh, okay, yep, I haven't had a look at that, so I'll have a look at that. Great.
A: This rings a bell, but I cannot remember: what's the "define data categories" one?
B: So that's where you're evaluating the sensitivity of the data that you're working with.
A
Information-
or
this
is
european
rules,
there's
probably
different
national
rules,
and
then
you
know
there
might
be
information
about
minors
in
there
and
and
things
like
that:
child
production
laws
and
right
so
the.
B
There
are
also
caveats
around
how
the
sensitivity
of
data
goes
up,
as
you
aggregate
more
of
it,
so
so
there's
a
difference
between
holding
a
set
of
records
for
an
individual
versus
holding
a
set
of
records
for
every
individual
in
a
geographic
area,
for
example,
because
you
know
having
a
known
set
of
information
about
one
individual
might
be
relatively
easy
to
to
redact
and
pseudonymize
and
and
be
fairly
protected.
B
But
if
you've
got
a
big
block
of
data
for
everyone
in
a
particular
area,
it
becomes
much
easier
to
detect
the
ray
of
things
in
that
block.
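[Note: one common way to make that aggregation risk measurable is a k-anonymity check: group the dataset by its quasi-identifiers and look at the smallest group. A minimal sketch with pandas; the column names and threshold are hypothetical.]

```python
import pandas as pd  # third-party: pip install pandas

df = pd.DataFrame({
    "postcode": ["2000", "2000", "2001", "2001", "2001"],
    "age_band": ["30-40", "30-40", "30-40", "50-60", "50-60"],
    "income":   [60, 65, 80, 55, 58],
})

# Quasi-identifiers: harmless alone, identifying in combination.
QUASI_IDENTIFIERS = ["postcode", "age_band"]
K = 2  # assumption: minimum acceptable group size

group_sizes = df.groupby(QUASI_IDENTIFIERS).size()
smallest = int(group_sizes.min())
if smallest < K:
    # Any group smaller than K can be walked back to individuals.
    print(f"k-anonymity violated: smallest group has {smallest} record(s)")
```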
A
I
think
it
was
insurance
companies
or
or
mortgage
companies,
basically
detecting
socioeconomic
status.
From
this
aggregate
data
stuff
that
wasn't
explicitly
disclosed,
they
were
able
to
extract
a
feature
and
where
it
was
and
bias
things
accordingly
or
it
would
reveal
effectively
someone's
race
and
then
the
the
model
was
then
making
decisions
on
race,
race
wasn't
explicitly
provided,
but
with
that
data
in
aggregate
it
was
it
you
know
you
could
extract
that
feature.
If
you
like,
it's
similar
yeah.
A: And then it became a data security concern: how are they anonymizing things? And people were showing that they weren't really anonymizing; with enough data you can walk backwards pretty easily to an individual. So there are all sorts of concerns, because it escalated to that. It's asking about people's incomes and their background, all this personal information that even your closest friends don't really know. So it sort of escalated.
A
I
guess
that's
how
I
see
escalate
and
it
escalated
up
to
a
category
of
like
you.
Don't
really
want
anyone
to
know
this.
So
as
like
what
happened,
I
happened
to
be
out
of
the
country
that
time
in
new
zealand,
so
I
didn't
have
to
do
it
so
and
actually
a
few
people
did
leave
the
country
deliberately
to
avoid
it
can't
do
that
this
year,
so
yeah,
all
right.
That
makes
sense.
I
got
a
good
sense,
so
there's
only
a
few
left
to
do
so.
B: Then we should probably close that off. And then the final section is: we need to go through these and evaluate the status of things to date, so we update the chart in the final chapter, and then potentially add some narrative about existing potential solutions and how we expect them to evolve.
A: I'll add the link to the doc there, because that would be good; there's a references section at the bottom. There are lots of links in these notes throughout this Google doc, so we can always, where it's suitable, add links to some prior art there.
A
Yeah,
no,
it's
good
it!
It's
sort
of
yeah!
It's
like
there's,
not
that
much
left
to
flush
this
out,
which
will
be
good
all
right.
Well,
is
there
anything
else
we
probably
have
a
slight
early
mark
or
and
then
meet
again
in
two
weeks.
Hopefully
I'll
have
done
a
few
more
sessions
of
this
and
I'll
see
if
I
can
touch
with
anyone
from
the
netflix
side
to
get
them
interested
that'll
be
good.
A
Well,
stay
well
and
talk
to
you
next
time
and
hope
the
other
session
goes
well.
The
other
session's
been
going
on.
I
sort
of
flick
through
the
notes
every
now
and
then
but
they're
going.
Okay,
like
I
just
didn't,
see
them
the
last.
B: ...as one of their technical design meetings, so the roadmap is not featuring as highly in... sorry, but I try and... well, it's...