A: This is always a challenge when seeking more than small incremental steps forward in AI. Jeff is a brilliant mind, and many of the ideas he has developed and aggregated from neuroscience are worth understanding and thinking about. There are limits to deep learning as it is currently defined. Forward progress in AI is shrouded in mystery. My hope is that conversations like this can help provide an inspiring spark for new ideas.
B: There's a clear answer to that question. My primary interest is understanding the human brain, no question about it. But I also firmly believe that we will not be able to create fully intelligent machines until we understand how the human brain works, so I don't see those as separate problems. I think there are limits to what can be done with machine intelligence if you don't understand the principles by which the brain works, and so I actually believe that studying the brain is the fastest way to get to machine intelligence.
B: It's like: okay, we don't even know where to begin on this stuff. And now that we've made a lot of progress on how the neocortex works — and we can talk about that — I now have a very good idea of what's going to be required to make intelligent machines. I can tell you today some of the things that are going to be necessary, I believe, to create intelligent machines.
A: So we'll get there — we'll get to the neocortex and some of the theories of how the whole thing works. And you're saying that as we understand more and more about the neocortex, about our own human mind, we'll be able to start to more specifically define what it means to be intelligent. It's not useful to really talk about that until —
B: I don't know if it's not useful. Look, there's a long history of AI, as you know, and there have been different approaches taken to it, and who knows, maybe they're all useful. So, you know, the good old-fashioned AI, the expert systems, current convolutional neural networks — they all have their utility, they all have a value in the world. But I would think almost everyone would agree that none of them are really intelligent in the sort of deep way that humans are, and so it's just —
B: The question is: how do you get from where those systems were, or are today, to where a lot of people think we're going? There's just a big, big gap there — a huge gap — and I think the quickest way of bridging that gap is to figure out how the brain does it. Then we can sit back and look and say: oh, which of these principles that the brain works on are necessary, and which ones are not? Maybe we don't have to build those ones into intelligent machines.
A: Let me ask — before we get into the fun details, let me ask maybe a depressing or difficult question. Do you think it's possible that we will never be able to understand how our brain works? That maybe there are aspects of the human mind that we ourselves cannot introspectively get to the core of — that there's a wall you eventually hit?
B: I have never believed that's the case. There has never been a single thing humans have ever put their minds to where we've said: oh, we've reached the wall, we can't go any further. People just keep saying that. People used to believe that about life — you know, élan vital, right? What's the difference between living matter and non-living matter? Something special you could never understand. We no longer think that. So there's no historical evidence to suggest this is the case, and I just never even consider that as a possibility.
B: I would also say that today we understand so much about the neocortex — we've made tremendous progress in the last few years — that I no longer think of it as an open question. The answers are very clear to me, and the pieces we don't know are clear to me too, but the framework is all there, and it's like: okay, we're going to be able to do this. This is not a problem anymore. It just takes time and effort, but there's no mystery — no big mystery — anymore.
B: An overview? Yeah, sure. The human brain we can divide roughly into two parts. There are the old parts — lots of pieces — and then there's the new part. The new part is the neocortex. It's new because it didn't exist before mammals: only mammals have a neocortex, and in primates it's very large. In the human brain, the neocortex occupies about seventy to seventy-five percent of the volume of the brain.
B: It's huge. And in the old parts of the brain there are lots of pieces: there's the spinal cord, and there's the brain stem, and the cerebellum, and the different parts of the basal ganglia, and so on. In the old parts of the brain you have the autonomic regulation, like breathing and heart rate. You have basic behaviors — walking and running are controlled by the old parts of the brain. And all the emotional centers of the brain are in the old parts of the brain: when you feel anger or hunger or lust, things like that —
B: — those are all in the old parts of the brain. And we associate with the neocortex all the things we think of as high-level perception and cognitive functions: anything from seeing and hearing and touching things, to language, to mathematics and engineering and science and so on. Those are all associated with the neocortex, and they're certainly correlated — our abilities in those regards are correlated with the relative size of our neocortex compared to other mammals. So that's the rough division. You obviously can't understand the neocortex completely isolated, but you can understand a lot of it with just a few interfaces to the old parts of the brain, and so it gives you a system to study. The other remarkable thing about the neocortex, compared to the old parts of the brain, is that the neocortex is extremely uniform — visually and anatomically it looks very much the same throughout. I always like to say it's like the size of a dinner napkin, about two and a half millimeters thick, and it looks remarkably the same everywhere you look.
B: Within that two and a half millimeters is this detailed architecture, and it looks remarkably the same everywhere — and that's across species: a mouse versus a cat and a dog and a human. Whereas if you look at the old parts of the brain, there are lots of little pieces that do specific things. So it's like the old parts of the brain evolved such that this is the part that controls heart rate, and this is the part that controls this —
B: This is incredible. So all the evidence we have — and this is an idea that was first articulated in a very cogent and beautiful argument by a guy named Vernon Mountcastle in 1978 — is that the neocortex all works on the same principle. So language, hearing, touch, vision, engineering — all these things are basically built on the same computational substrate. They're really all the same problem.
B: Yes, you do see variations of it here and there — more of one cell type here, less of another there, and so on. But what Mountcastle argued was: if you take a section of neocortex, why is one a visual area and one an auditory area? And his answer was: it's because one is connected to eyes and one is connected to ears. Literally.
B: Literally: if you took the optic nerve and attached it to a different part of the neocortex, that part would become a visual region. This experiment was actually done by Mriganka Sur in developing — I think it was ferrets; I can't remember, there was some animal — and there's a lot of evidence for this. You know, if you take a blind person — a person who is born blind at birth — they're born with a visual neocortex.
B: It may not get any input from the eyes because of some congenital defect or something, and that region does something else — it picks up another task. So it's this very complex thing. It's not like, oh, they're all built of neurons — no, they're all built of this very complex circuit, and somehow that circuit underlies everything. And so this is what's called the common cortical algorithm —
B: — if you will. Some scientists just find it hard to believe — they decide it can't really be true — but the evidence is overwhelming in this case. And so a large part of what it means to figure out how the brain creates intelligence, and what intelligence is in the brain, is to understand what that circuit does. If you can figure out what that circuit does, as amazing as it is, then you understand what all these other cognitive functions are.
B: I have to speak from my own particular experience here. So I run a small research lab here — it's like any other research lab; I'm sort of the principal investigator. There are actually two of us, and there's a bunch of other people, and this is what we do: we study the neocortex and we publish our results, and so on. About three years ago, we had a real breakthrough in this field.
B: Just a tremendous one. We've now published, I think, three papers on it, and so I have a pretty good understanding of all the pieces and what we're missing. I would say that almost all the empirical data we've collected about the brain — which is enormous; if you don't know the neuroscience literature, it's just incredibly big — is, for the most part, correct. It's facts and experimental results and measurements and all kinds of stuff, but none of that has been assimilated into a theoretical framework. It's data —
B: — without a framework. In the language of Thomas Kuhn, the historian, it would be sort of a pre-paradigm science: lots of data, but no way to fit it together. I think almost all of it is correct — there are going to be some mistakes in there — but for the most part there aren't really good, cogent theories about how to put it together. It's not like we have two or three competing good theories and are asking which are right and which are wrong. People are just scratching their heads.
B: You know, some people have given up on trying to figure out what the whole thing does. In fact, there are very, very few labs that do what we do — that focus really on theory and all this unassimilated data and trying to explain it. So it's not that we've got it wrong; it's just that we haven't got it at all.
B: I don't think so. You know, I'm an optimist, and from where I sit today — most people would disagree with this, but from where I sit, from what I know — it's not super early days anymore. The way these things go, it's not a linear path, right? You don't just start accumulating and get better and better and better. No — you collect all this stuff and none of it makes sense, all these different things, and you churn around, and then you hit some breaking point where all of a sudden —
B: — oh my god, now we've got it. That's how it goes in science, and I feel like we passed that big point a couple of years ago, so we can talk about that. Time will tell if I'm right, but I feel very confident about it. That's my moment to say it on tape like this.
A: At least you're very optimistic. So, before those few years ago — let's take a step back to HTM, the hierarchical temporal memory theory, which you first proposed in On Intelligence and which went through a few different generations. Can you describe what it is, and how it evolved through the three generations?
B: Yes. So one of the things that neuroscientists just sort of missed for many, many years — and especially people who were thinking about theory — was the nature of time in the brain. Brains process information through time. The information coming into the brain is constantly changing. The patterns from my speech right now, if you're listening at normal speed, would be changing in your ears about every ten milliseconds or so. You'd have this constant flow of change.
B: When you look at the world, your eyes are moving constantly — three to five times a second — and the input changes completely. If I were to touch something, like a coffee cup, as I move my fingers the input changes. So this idea that the brain works on time-changing patterns is almost completely — or was almost completely — missing from a lot of the basic theories, like theories of vision.
B: It's like: oh no, we're going to put this image in front of you and flash it and say, what is it? Convolutional neural networks work that way today, right — classify this picture. But that's not what vision is like. Vision is this sort of crazy time-based pattern that's going all over the place, and so is touch, and so is hearing. So the first part of hierarchical temporal memory was the temporal part. It's saying: you won't understand the brain, nor will you understand intelligent machines, unless you're dealing with time-based patterns.
B: The second thing was the memory component. It was to say that we aren't just processing input — we learn a model of the world. That's what the memory stands for: that model. The point of the brain, the part of the neocortex, is that it learns a model of the world. We have to store our experiences in a form that leads to a model of the world, so we can move around the world, pick things up, do things, navigate, and know what's going on.
B: So that's what the memory referred to — many people were thinking about it like certain processors, without memory at all, just processing things. And finally, the hierarchical component was a reflection of the fact that the neocortex, although it's just a uniform sheet of cells, has different parts that project to other parts, which project to other parts, and there is this sort of rough hierarchy in terms of them. So hierarchical temporal memory is just saying: look, we should be thinking about the brain as time-based —
B: — memory-based, model-based, and with hierarchical processing. And that was a placeholder for a bunch of components that we would then plug into it. We still believe all those things I just said, but we now know so much more that I'm stopping using the phrase hierarchical temporal memory, because it's insufficient to capture the stuff we know. So again, it's not incorrect, but I now know more, and I would rather describe it more accurately.
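The three HTM components described here can be sketched very loosely in code. This is a toy illustration only — the class and names below are my own construction, and Numenta's actual HTM uses sparse distributed representations and dendritic prediction, not a lookup table — but it shows the core "temporal memory" idea: learn sequences of patterns over time, then use recent context to predict what comes next.

```python
# Toy sketch of the "temporal memory" idea: learn sequences of patterns,
# then predict the next pattern from recent context.
# Illustrative only -- not Numenta's actual HTM implementation.
from collections import defaultdict

class SequenceMemory:
    def __init__(self):
        # maps (previous, current) context -> set of patterns seen next
        self.transitions = defaultdict(set)

    def learn(self, sequence):
        """Store transitions with one step of temporal context."""
        for prev, cur, nxt in zip(sequence, sequence[1:], sequence[2:]):
            self.transitions[(prev, cur)].add(nxt)

    def predict(self, prev, cur):
        """Return the set of possible next patterns given recent context."""
        return self.transitions[(prev, cur)]

mem = SequenceMemory()
mem.learn(["A", "B", "C", "D"])   # one "melody"
mem.learn(["X", "B", "C", "Y"])   # same middle notes, different context
print(mem.predict("A", "B"))  # {'C'}
print(mem.predict("B", "C"))  # {'D', 'Y'} -- ambiguous with this much context
```

Note how the second prediction is ambiguous: with only one step of context, "B, C" could continue either way, which is why real temporal memory carries much richer context than this sketch.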
B: Well, if you think about a neuroscience problem — the brain problem — neurons themselves can stay active for certain periods of time. Parts of the brain can stay active for minutes, so you could hold a certain perception or activity for a certain period of time. But most of them don't last that long. And so if you think of your thoughts as the activity of neurons, and you want to recall something that happened a long time ago —
B: Now, you can still remember that, and that memory is in the synapses — it's basically in the formation of synapses. And so you're sliding into, you know, two different timescales. There are timescales at which we are understanding my language, and moving about, and seeing things rapidly — that's the timescale of the activity of neurons.
B: I've learned a model of the cell phone. So if I have a smartphone like this, it has time aspects to it. I have expectations about what's going to happen when I turn it on, how long it's going to take to do certain things, what sequences occur if I bring up an app. It's all like melodies in the world — a melody has a sense of time. So many things in the world move and act, and there's a sense of time related to them.
A: — to make sure that your models of intelligence have time incorporated. So, like you mentioned, the state of neuroscience is deeply empirical — a lot of data collection. That's where it is; you mentioned Thomas Kuhn, right? And then you're proposing a theory of intelligence, which is really the next step, the really important step to take. But why is HTM — or what we'll talk about soon — the right theory? Is it backed by intuition? Is it backed by evidence? Is it backed by a mixture of both? Is it kind of closer to string theory in physics, where the mathematical components show that it seems to fit together too well not to be true — which is where string theory is?
B: A mix of all those things, although where we are right now is definitely much more on the empirical side than, let's say, string theory. The way this goes: we're theorists, right? So we look at all this data and we try to come up with some sort of model that explains it, basically. And unlike string theory, there are vastly greater amounts of empirical data here — more, I think, than most physicists deal with — and so our challenge is to sort through it and figure out —
B: — what kind of constructs would explain this. And when we have an idea — you come up with a theory of some sort — you have lots of ways of testing it. First of all, there are a hundred years of unassimilated empirical data from neuroscience. So we go back and read papers, and we ask: oh, did someone find this already? We can predict X, Y, and Z, and maybe no one's even talked about it since 1972 or something, but we go back and find out.
B: And we say: oh, either it supports the theory or it invalidates the theory. And we say either, okay, we have to start over again, or, no, it supports it — let's keep going with that one. So the way I kind of view it: when we do our work, we look at all this empirical data, and it's what I call a set of constraints. We're not interested in something that's merely biologically inspired; we're trying to figure out how the actual brain works.
B: So every piece of empirical data is a constraint on the theory. If you have the correct theory, it needs to explain every piece, right? So we have this huge number of constraints on the problem, which initially makes it very, very difficult. If you don't have any constraints, you can make up stuff all day — you know, here's an answer: you can do this, you can do that —
B: — you can do this. But if you consider all of biology as a set of constraints, all of neuroscience as a set of constraints — even if you're working on one little part of the neocortex, for example, there are hundreds and hundreds of constraints. These are empirical constraints, and it's very, very difficult initially to come up with a theoretical framework for them. But when you do, and it solves all those constraints at once, you have high confidence that you've got something close to correct.
B: It's just mathematically almost impossible for it not to be. So that's the curse and the advantage of what we have. The curse is that we have to meet all these constraints, which is really hard; but when you do meet them, then you have great confidence that you've discovered something. In addition, we work with scientific labs. So we'll say: oh, there's something we can't find — we can predict something, but we can't find it anywhere in the literature. So we will —
B: — then we have people we collaborate with, and sometimes they'll say: you know, I have some collected data which I didn't publish, but we can go back and look at it and see if we can find that — which is much easier than designing a new experiment. New neuroscience experiments take a long time, years — though some people are doing that now too. So between all of these things, I think it's reasonable. It's actually a very, very good approach.
B: You could say the same thing about the double helix — people had been working on that problem for so long, and had all this data, and they couldn't make sense of it. But when the answer comes to you and everything falls into place, it's like: oh my gosh, that's it — that's got to be right. I asked both Jim Watson and Francis Crick about this.
B: I asked them: when you were working on trying to discover the structure of the double helix, and you came up with the structure that ended up being correct — but it was sort of a guess, it wasn't really verified yet — did you know that it was right? And they both said: absolutely. We absolutely knew it was right, and it didn't matter whether other people believed it or not — we knew it was right, and they'd come around.
B: In fact, Francis Crick was very interested in this in the latter part of his life. In fact, I got interested in brains by reading an essay he wrote in 1979 called "Thinking About the Brain," and that is when I decided I was going to leave my profession of computers and engineering and become a neuroscientist — just from reading that one essay by Francis Crick. I got to meet him later in life: I spoke at the Salk Institute, and he was in the audience, and then I had tea with him afterwards.
B: He was really sort of behind moving the field in the direction of neuroscience there, and so he had a personal interest in this field. I have met with him numerous times — in fact, the last time was a little bit over a year ago. I gave a talk at Cold Spring Harbor Labs about the progress we were making in our work, and it was a lot of fun, because he said: well, you wouldn't be coming here unless you had something important to say, so —
B: — I'm going to go to your talk. So he sat in the very front row, and next to him was the director of the lab, Bruce Stillman. So these guys are in the front row of this auditorium, right, and nobody else in the auditorium wanted to sit in the front row, because Jim Watson is there — and I gave a talk, and I had dinner —
B: — with Jim afterwards. There's a great picture my colleague Subutai Ahmad took, where I'm up there sort of screaming the basics of this new framework we have, and Jim Watson is on the edge of his chair — literally on the edge of his chair — intently staring up at the screen. And when he discovered the structure of DNA, the first public talk he gave was at Cold Spring Harbor Labs. So there's a picture —
B: We don't really have enough data to make a judgment about that. I would say it definitely was a big leap, and I can tell you why I don't think it was just another incremental step. As for how likely it is — I don't really have any idea. If we look at evolution, we have one data point, which is Earth. Life formed on Earth billions of years ago — whether it arose here or someone introduced it, we don't really know — but it was here early.
B: It took a long, long time to get to multicellular life, and then from multicellular life it took a long, long time to get to the neocortex — and we've only had the neocortex for a few hundred thousand years, so that's like nothing. Okay, so is it likely? Well, it certainly isn't something that happened right away on Earth, and there were multiple steps to get there. So I would say it's probably not something that would happen instantaneously on other planets that might have life — it might take several billion years on average.
B: Yeah, I can tell you how — but let me start a little differently. Many of the things that humans are able to do do not have obvious survival advantages. You know, we create music — is there really a survival advantage to that? Maybe, maybe not. What about mathematics — is there a real survival advantage to mathematics?
B: It's a stretch. You can try to figure these things out, right, but in most of evolutionary history everything had an immediate survival advantage. So I'll tell you a story, which I like but which may not be true. The story goes as follows: organisms have been evolving since the beginning of life here on Earth —
B: — adding this sort of complexity onto that sort of complexity, and the brain itself evolved this way. In fact, there are old parts, and older parts, and older-older parts of the brain — it kind of just keeps piling on new things, and we keep adding capabilities. And when we got to the neocortex, initially it had a very clear survival advantage, in that it produced better vision and better hearing and better touch and so on. But what I think happened is that evolution just took —
B: — it took a mechanism — and this is in our recent theories — that evolved a long time ago for navigating in the world, for knowing where you are. These are the so-called grid cells and place cells of an old part of the brain. It took that mechanism for building maps of the world, knowing where we are in those maps, and knowing how to navigate those maps, and turned it into a sort of slimmed-down, idealized version of it. And that idealized version could now apply to building maps of other things.
B: Yes — and not just maps, but almost exactly the same mechanism. And it just started replicating this stuff, right? You just make more and more and more copies of it. So we went from being sort of dedicated-purpose neural hardware, solving certain problems that are important to survival, to general-purpose neural hardware that can be applied to all problems. And now it's escaped the orbit of survival — we are now able to apply it to things in which we find enjoyment —
B: — you know, but which aren't really clearly survival characteristics. And that seems to have happened only in humans, to a large extent. So that's what's going on: we've sort of escaped the gravity of evolutionary pressure, in some sense, in the neocortex. It now does things which are really interesting — discovering models of the universe which may not really help us survive. It doesn't matter: how does it help us survive to know that there might be multiverses, or to know the age of the universe?
B: You know, evolution works on one time frame: survival — if you think of survival of the phenotype, survival of the individual. What you're talking about there spans well beyond that. So there's no genetic component — I'm not transferring any genetic traits to my children that are going to help them survive better on Mars, right?
B: We were deep into problems about understanding how we build models of stuff in the world and how we make predictions about things, and I was holding a coffee cup, just like this, in my hand. My index finger was touching the side, and I moved it to the top — I was going to feel the rim at the top of the cup — and I asked myself a very simple question.
B: What is my finger going to sense? So this told me that the neocortex, which is making this prediction, needs to know that it's touching a cup, and it needs to know the location of my finger relative to that cup — in a reference frame of the cup. It doesn't matter where the cup is relative to my body; it doesn't matter its orientation. None of that matters. It's where my finger is relative to the cup, which tells me that the neocortex has a reference frame —
B: — that's anchored to the cup, because otherwise I wouldn't be able to say the location, and I wouldn't be able to predict my new location. And then we very quickly realized — instantly, you can say: well, every part of my skin could touch this cup, and therefore every part of my skin is making predictions, and every part of my skin must have a reference frame that it's using to make predictions. So the big idea is that throughout the neocortex, everything is being stored and referenced in reference frames.
B: You can think of them like XYZ reference frames, but they're not like that. We know a lot about the neural mechanisms for this, but the brain thinks in reference frames. And if you're an engineer, this is not surprising. You'd say: if I wanted to build a CAD model of the coffee cup, well, I would bring it up in some CAD software, and I would assign some reference frame and say this feature is at this location, and so on.
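The CAD analogy can be made concrete with a toy sketch. This is my own illustrative simplification, not Numenta's implementation: an object model stores features at locations in the object's own reference frame, so predicting what a finger will sense after a movement is just a lookup at the new location — independent of where the object sits relative to the body.

```python
# Toy sketch of object modeling in an object-centric reference frame.
# All names and the dictionary representation are illustrative only.

# A "model" of a coffee cup: features stored at locations expressed
# in the cup's own reference frame (not relative to the body or room).
cup_model = {
    (0, 0): "smooth side",
    (0, 5): "rim",
    (3, 0): "handle",
}

def predict_feature(model, finger_location, movement):
    """Predict the sensed feature after moving the finger.

    Because finger_location and movement are relative to the object,
    the prediction is just a lookup at the new location."""
    new_location = (finger_location[0] + movement[0],
                    finger_location[1] + movement[1])
    return new_location, model.get(new_location, "unknown")

# Touching the side, then sliding up to the top: predict the rim.
loc, feature = predict_feature(cup_model, (0, 0), (0, 5))
print(loc, feature)  # (0, 5) rim
```

The design point is that the same model works wherever the cup is in the room, because nothing in it references body-centered coordinates.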
B: But the idea that this is occurring throughout the neocortex, everywhere — that was a novel idea, and then a zillion things fell into place after that. So now we think about the neocortex as processing information quite differently than we used to.
B: It's a surprising thing to think about, but once you sort of internalize it, you understand that it explains almost all the mysteries we've had about this structure. One of the consequences is that every small part of the neocortex — take a square millimeter; there are about a hundred and fifty thousand of those, so it's about 150,000 square millimeters —
B: — if you take every little square millimeter of the cortex, it's got some input coming into it, and it's going to have reference frames which it assigns that input to, and each square millimeter can learn complete models of objects. What do I mean by that? If I'm touching the coffee cup — well, if I just touch it in one place, I can't learn what this coffee cup is, because I'm just feeling one part. But if I move my finger around the cup, touching different areas, I can build up a complete model —
B: — of the cup, because I'm now filling in that three-dimensional map which is the coffee cup: oh, what am I feeling at all these different locations? That's the basic idea — it's more complicated than that — but through time (and we talked about time earlier), even a single column, a single part of the cortex —
B: — which is only looking at a small part of the world, can build up a complete model of an object. And so if you think about the part of the brain which is getting input from all my fingers — they're spread across the top here; this is the somatosensory cortex — there are columns associated with all these areas of my skin. And what we believe is happening is that all of them are building models of this cup, every one of them. Well — not all of them are building models of everything —
B: Not every column, not every part of the cortex, builds models of everything, but they're all building models of something. And so when I touch this cup with my hand, there are multiple models of the cup being invoked. If I look at it with my eyes, there are again many models of the cup being invoked, because each part of the visual system — the brain doesn't process an image. That's a misleading idea.
B: It's just like your fingers touching it: different parts of my retina are looking at different parts of the cup, and thousands and thousands of models of the cup are being invoked at once, and they're all voting with each other, trying to figure out what's going on. So that's why we call it the Thousand Brains Theory of Intelligence: because there isn't one model of the cup — there are thousands of models of this cup. There are thousands of models of your cell phone, and of cameras and microphones, and so on.
B: Great question — let me try to explain. There's a problem known in neuroscience called the sensor fusion problem. The idea is something like: oh, the image comes from the eye — there's a picture on the retina — and it gets projected to the neocortex. By now it's all spread out all over the place, and it's kind of squirrelly and distorted, and pieces are all over — you know, it doesn't look like a picture anymore.
B
When
does
it
all
come
back
together
again
right
or
you
might
say:
well,
yes,
but
I
also
I
also
have
sounds
or
touches
associated
with
a
couple
so
I'm
seeing
the
cup
and
touching
the
cup.
How
do
they
get
combined
together
again?
So
this
it's
called
the
sensor.
Fusion
problem
is
if
all
these
disparate
parts
have
to
be
brought
together
into
one
model,
someplace,
that's
the
wrong
idea.
The
right
idea
is
that
you
get
all
these
guys
voting,
there's
auditory
models
of
the
cup.
B
There's
visual
models,
the
cup,
those
tactile
models
of
the
cup,
there's
one,
the
individual
system
there
might
be
ones
that
are
more
focused
on
black
and
white
ones.
Fortunate
on
color,
it
doesn't
really
matter,
there's
just
thousands
and
thousands
of
models
of
this
Cup
and
they
vote.
They
don't
actually
come
together
in
one
spot.
It
just
literally
think
of
it.
This
way.
I.
Imagine
you
have
these
columns
or
like
about
the
size
of
a
little
piece
of
spaghetti.
B
B
I
have
to
move
the
straw
around,
but
if
I
open
my
eyes
to
see
the
whole
thing
at
once,
so
what
we
think
is
going
on.
It's
all
these
little
pieces
of
spaghetti.
If
you
know
all
these
little
columns
in
the
cortex
or
all
trying
to
guess
what
it
is
that
they're
sensing
they'll
do
a
better
guess
if
they
have
time
and
can
move
over
time.
So
if
I
move
my
eyes
and
with
my
fingers,
but
if
they
don't,
they
have
a
they
have
a
poor
guest.
B
It's
a
it's
a
probabilistic
s
of
what
they
might
be
touching.
Now
imagine
they
can
post
their
probability
at
the
top
of
a
little
piece
of
spaghetti.
Each
one
of
them
says
I
think,
and
it's
not
really
a
probability
decision.
It's
more
like
a
set
of
possibilities
in
the
brain.
It
doesn't
work
as
a
probability
distribution.
B
We've
described
this
in
a
recent
paper
and
we've
modeled
this
that
says
they
can
all
quickly
settle
on
the
only
or
the
one
best
answer
for
all
of
them.
If
there
is
a
single
best
answer,
they
all
vote
and
say
yeah.
It's
got
to
be
the
coffee
cup,
and
at
that
point
they
all
know
it's
a
coffee
go
and
at
that
point
everyone
acts
as
if
it's
the
coffee
cup
they
yeah.
B
We
know
it's
a
coffee,
even
though
I've
only
seen
one
little
piece
of
this
world
I
know
it's
coffee
cup,
I'm
touching
or
I'm,
seeing
or
whatever,
and
so
you
can
think
of
all
these
columns
are
looking
at
different
parts
in
different
places,
different
sensory
and
put
different
locations,
they're
all
different,
but
this
layer
that's
doing
the
voting.
That's
it's
solidifies!
It's
just
like
it
crystallizes
and
says:
oh,
we
all
know
what
we're
doing,
and
so
you
don't
bring
these
models
together.
In
one
model
you
just
vote
and
there's
a
crystallization
of
the
vote.
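The voting scheme described above can be caricatured in a few lines. This is a minimal illustrative sketch, not Numenta's implementation: the `vote` function and the example objects are hypothetical, and each "column" is reduced to the set of objects consistent with what it is currently sensing.

```python
# Minimal sketch of column voting: each cortical "column" holds a set of
# candidate objects consistent with its local sensation, and the vote
# "crystallizes" on whatever object survives in every column's set.
# All names and objects here are illustrative assumptions.

def vote(column_candidates):
    """Intersect every column's candidate set; the result is the set of
    objects consistent with all columns at once."""
    return set.intersection(*column_candidates)

# Three columns touching different parts of an unknown object:
c1 = {"coffee cup", "soda can", "vase"}   # senses a curved surface
c2 = {"coffee cup", "teapot"}             # senses a handle
c3 = {"coffee cup", "soda can"}           # senses a rim
print(vote([c1, c2, c3]))                 # {'coffee cup'}
```

With more columns, or more movements over time, each candidate set shrinks, which is why many weak partial guesses can settle quickly on a single answer.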
A
Great.

B
Yeah, it does. Well, first, the primary evidence for that is that the regions of the neocortex that are associated with language, or high-level thought, or mathematics, or things like that, look like the regions of the neocortex that process vision, hearing, and touch. They look only marginally different. So one would say: well, if Vernon Mountcastle, who proposed that all the parts of the neocortex are doing the same thing, is right, then the parts that do language or mathematics or physics are working on the same principle; they must be working on the principle of reference frames. That's a little odd at first, but of course we had no prior idea how these things happened.
B
So let's go with that. In our recent paper we talked a little bit about it, and I've been working on it more since; I have better ideas about it now. I'm sitting here very confident that that's what's happening, and I can give you some examples to help you think about it. It's not that we understand it completely, but I understand it better than I've described it in any paper so far. We did put the idea out there, though, and it's a good place to start.
B
Yeah. So basically, what we think is going on is that all things you know, all concepts, all ideas, words, everything you know, are stored in reference frames. So if you want to remember something, you have to basically navigate through a reference frame, the same way a rat navigates through a maze, the same way my finger navigates over this coffee cup: you are moving through some space. So if you have a random list of things you were asked to remember, you can do it by assigning them to a reference frame you already know very well, say your house. That's the idea behind the method of loci: you say, okay, in my lobby I'm going to put this thing, in the bedroom I put this one, I go down the hall and put this thing there. Then, when you want to recall those facts, you just mentally walk through your house: you're mentally moving through a reference frame that you already had. And that tells us two things that are really important. It tells us the brain prefers to store things in reference frames, and that the method of recalling things, or of thinking, if you will, is to move mentally through those reference frames. You can move physically through some reference frames, like physically moving through the reference frame of this coffee cup; I can also mentally move through the reference frame of the coffee cup, imagining myself touching it. But I can also mentally move through my house. And so now we can ask ourselves: are all concepts stored this way?
B
There's some recent research using human subjects in fMRI, and I'll apologize for not knowing the names of the scientists who did this. What they did is put humans in an fMRI machine, one of these imaging machines, and give them tasks that required thinking about birds. They had different types of birds: birds that looked big and small, birds with long necks and long legs, things like that. And what they could tell from the fMRI basically says that even when you're thinking about something abstract, and you're not really thinking about it as a reference frame, the brain is actually using a reference frame, and it's using the same neural mechanisms, these grid cells. Grid cells live in an old part of the brain, the entorhinal cortex, and we propose that that mechanism, or a similar mechanism, is used throughout the neocortex; it's the same way nature preserved this interesting means of creating reference frames. So now there's empirical evidence that when you think about concepts like birds, you're using reference frames that are built on grid cells. That's similar to the method of loci, but in this case the birds are related, so the brain creates its own reference frame that is consistent with bird space, and when you think about something, you move through it. You can make the same argument for mathematics, all right?
B
Let's say you want to prove a conjecture. Okay, what is a conjecture? A conjecture is a statement you believe to be true but haven't proven. It might be an equation: I want to show that this is equal to that. And you have some places you start from: you say, well, I know this is true, and I know this is true, and I think that maybe to get to the final proof I need to go through some intermediate results. What I believe is happening is that literally these equations, these points, are assigned to a reference frame, a mathematical reference frame. When you do mathematical operations, a simple one might be multiply or divide, or it might be a Laplace transform or something else, that is like a movement in the reference frame of the math. So you're literally trying to discover a path from one location to another location in a space of mathematics. And if you can get to those intermediate results, then you know your map is pretty good and you know you're using the right operations.
A
So if you dig into this idea of a reference frame, whether it's the math one, where you start with a set of axioms and try to get to proving the conjecture, can you describe, maybe taking a step back, how you think of the reference frame in that context? Is it the reference frame that the axioms are mapped in? Is it a reference frame that might contain everything? Is it a changing thing?
B
You have many reference frames. In fact, the way the theory, the Thousand Brains Theory of Intelligence, puts it, every single thing in the world has its own reference frame. Every word has its own reference frame, and we can talk about this; the mathematics works out, it's no problem for neurons to do.
B
Let's say you ask how many reference frames the column in my finger that's touching the coffee cup has. Well, there are many, many models of the coffee cup. There is no one model of the coffee cup; there are many models of the coffee cup. And you could ask, well, how many different things can my finger learn? That's the question you want to ask. Every concept, every idea, everything you've ever known, anything you can say "I know that thing" about...
B
It has a reference frame associated with it. And when we build composite objects, we assign reference frames to points in another reference frame. My coffee cup has multiple components: it's got a rim, it's got a cylinder, it's got a handle, and those things have their own reference frames, and they're assigned to a master reference frame, which we just call this cup. And now it has the Numenta logo on it; well, that's something that exists elsewhere in the world.
B
It's its own thing, so it has its own reference frame. So we now have to ask: how can I assign the Numenta logo's reference frame onto the cylinder, or onto the coffee cup? We talked about this in the paper that came out in December of last year: the idea of how you can assign reference frames to reference frames, and how neurons could do this.
B
So let's just say we're going to have some neurons in the brain, not many actually, 10,000 or 20,000, and they're going to create a whole bunch of reference frames. What does that mean? What is the reference frame in this case? First of all, these reference frames are different from the ones you might be used to; you already know lots of reference frames. For example, we know Cartesian coordinates, XYZ; that's a type of reference frame. We know longitude and latitude; that's a different type of reference frame.
B
If I look at a printed map, it might have columns A through M and rows one through twenty; that's a different type of reference frame, kind of a Cartesian coordinate frame. What's interesting about the reference frames in the brain, and we know this because it has been established through neuroscience, studying the entorhinal cortex, so I'm not speculating here, okay, this is known neuroscience in an old part of the brain, is that the way these cells create reference frames, they have no origin. It's more like this:
B
You're at a point in some space, and if you give it a particular movement, you can then tell what the next point should be, and what the next point after that would be, and so on. You can use this to calculate how to get from one point to another: how do I get from one place in my house to another, how do I get my finger from the side of the cup to the top of the cup, how do I get from the axioms to the conjecture?
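The point-plus-movement picture above amounts to path integration, and can be sketched with plain displacement vectors. This is only an illustrative sketch under a large simplifying assumption: real grid cells use periodic population codes, not Cartesian tuples, and the function names here are made up.

```python
# Toy path integration in a reference frame with no origin: all you can
# do is apply movements to your current point and accumulate them.
# Cartesian tuples stand in (unrealistically) for grid-cell codes.

def move(point, movement):
    """Predict the next point from the current point and a movement."""
    return tuple(p + m for p, m in zip(point, movement))

def path_offset(movements):
    """Chain a sequence of movements into one net displacement."""
    total = (0.0, 0.0)
    for m in movements:
        total = move(total, m)
    return total

# Moving a finger from the side of a cup through two small movements:
side = (3.0, 0.0)
net = path_offset([(0.0, 2.0), (-1.0, 1.0)])
print(move(side, net))  # (2.0, 3.0)
```

Getting from one location to another then reduces to finding a sequence of movements whose net displacement bridges the two points, which is the "discover a path" framing used for the mathematics example.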
A

B
Well, that's an interesting question. In the old part of the brain, the entorhinal cortex, they studied rats, and initially it looks like, oh, this is just two-dimensional. The rat is in some box or maze or whatever, and it knows where it is in the maze using these two-dimensional reference frames. Okay, but what about bats? That's a mammal too, and they fly in three-dimensional space. How do they do that? They seem to know where they are.
B
So this is a current area of active research, and it seems like somehow the neurons in the entorhinal cortex can learn three-dimensional space. Two members of our team, along with Ila Fiete of MIT, just released a paper on this, literally last week; it's on bioRxiv. They show that, and I won't get into the detail unless you want me to, grid cells can represent any n-dimensional space. It's not inherently limited.
B
The way these things work, there's a whole bunch of two-dimensional models, and you can slice up any n-dimensional space with two-dimensional projections; you could also have one-dimensional models. So there's nothing inherent about the mathematics, about the way the neurons do this, that constrains the dimensionality of the space, which I think is important. And so obviously I have a three-dimensional map of this cup; maybe it's even more than that.
A
In terms of each individual column building up more and more information over time, do you think that mechanism is well understood? In your mind, you've proposed a lot of architectures there. Is that a key piece, or is the big piece the Thousand Brains Theory of Intelligence as a whole?
B
Well, it's a totally new way of thinking about how the neocortex works, so that is appealing. It has all these ramifications, and with it as a framework for how the brain works, you can make all kinds of predictions and solve all kinds of problems. We're trying to work through many of these details right now. Okay, how do the neurons actually do this?
B
Well, it turns out, if you think about grid cells and place cells in the old parts of the brain, there's a lot known about them, but there are still some mysteries; there's a lot of debate about exactly how these work in detail. We're at that same level of detail, that same level of concern, here. What we spend most of our time doing is trying to make a very good list of the things we don't understand yet. That's the key part here: what are the constraints? It's not like,
B
oh, this thing seems to work, we're done. No, it's like, okay, it kind of works, but these are the other things we know it has to do, and it's not doing those yet. I would say we're well on the way here, but we're not done yet. There's a lot of trickiness to this system. The basic principles of how the different layers in the neocortex do much of this, we understand, but there are some fundamental parts that we don't understand.
B
The thing we talked about: to predict what you're going to sense on this coffee cup, I need to know where my finger is going to be on the coffee cup. That is true, but it's insufficient. Think about when my finger touches the edge of the coffee cup. My finger can touch it at different orientations; I can rotate my finger around here, and that doesn't change what I sense, so I can still make that prediction. So it's not just the location; there's an orientation component to this as well. This is known in the old parts of the brain too.
B
There are things called head direction cells, which represent which way the rat is facing. It's basically the same idea. So for my finger, in three dimensions, I have a three-dimensional orientation and a three-dimensional location. If I were a rat, it might be a two-dimensional location and a one-dimensional orientation, like just which way it is facing. So how the two components work together, how it is that I combine orientation and location, is part of it.
A
There's a more general version of that. Do you think context matters? The fact that we are in a building in North America, that we live in a day and age where we have mugs; I mean, there's all this extra information that you bring to the table about everything else in the room, outside of just the coffee cup.
B
It matters for certain things, of course it does. Maybe what we think of as a coffee cup is, in another part of the world, something totally different. Or maybe our logo, which is very benign in this part of the world, means something very different in another part of the world. So those things do matter. I think the way to think about it is the following: we have all these models of the world.
B
Okay, and we model everything, and, as I said earlier, I kind of snuck it in there: our models are actually built as composite structures. Every object is composed of other objects, which are composed of other objects, and they become members of other objects. So this room is chairs and a table and walls and so on.
B
Now you can arrange these things a certain way and you go: that's the Numenta conference room. And what we do as we go around the world and experience it is this: I walk into a room, for example, and the first thing I do is say, oh, I'm in this room, do I recognize the room? Then I can say, oh look, there's a table here, and by attending to the table I'm assigning the table a context within the room. Then there's what's on the table.
B
So the point is, your attention is kind of drilling deep into and out of these nested structures. I can pop back up and I can pop back down, pop back up and pop back down. When I attend to the coffee cup, I haven't lost the context of everything else; it's a sort of nested structure.
B
You can move up and down in it, and we do that all the time. You're not even aware of it; now that I am aware of it, I'm very conscious of it, but most people don't think about this. You just walk into the room, and you don't say: oh, I looked at the chair, and I looked at the board, and I looked at that word on the board, and I looked over here. But that's what's going on.
B
It's totally useful to me. I think about this stuff almost all the time. One of my primary ways of thinking is when I'm asleep at night: I always wake up in the middle of the night, and then I stay awake for at least an hour with my eyes shut, in sort of a half-sleep state, thinking about these things. I come up with answers to problems very often in that half-sleeping state. I think about it on my bike rides, I think about it on walks. I'm just constantly thinking about this.
B
I do that all the time, but that's not all I do. I'm constantly observing myself. So as soon as I started thinking about grid cells, for example, and getting into that, I started saying: oh well, grid cells give you a sense of place in the world; that's how you know where you are, and essentially we always have a sense of where we are unless we're lost. And so at night, when I got up to go to the bathroom, I would start trying to do it completely with my eyes closed.
A

B
Then I would count my errors and see how the errors accumulate. So even in something as simple as getting up in the middle of the night to go to the bathroom, I'm testing these theories out. It's kind of fun; the coffee cup is an example of that too. So I find that these sorts of everyday introspections are actually quite helpful.
B
It doesn't mean you can ignore the science. I spend hours every day reading ridiculously complex papers; that's not nearly as much fun, but you have to build up those constraints and the knowledge about the field, who's doing what and what exactly they think is happening. And then you can sit back and say: okay, let's try to piece this all together, let's come up with something.
A
Let's talk a little bit about deep learning and the successes in the applied space of neural networks: the ideas of training a model on data, with these simple computational units, artificial neurons, and backpropagation, as statistical ways of being able to generalize from the training set onto data that is similar to that training set. Where do you think are the limitations of those approaches? What do you think are their strengths, relative to your major efforts of constructing a theory of human intelligence?
B
I have a little bit more than intuition on this. You know, one of the things you asked me is whether I spend all my time thinking about neurons; I do, and that's to the exclusion of thinking about things like convolutional neural networks, but I try to stay current. So look, I think it's great, the progress they've made. It's fantastic, and as I mentioned earlier, it's highly useful for many things.
B
The models that we have today are actually derived from a lot of neuroscience principles: they are distributed processing systems and distributed memory systems, and that's how the brain works. They use things that we might call neurons, but they're really not neurons at all. And the notion of hierarchy also came from neuroscience. So there's a lot there, but the learning rules the brain uses are basically not backprop; they're other things.
A

B
The biggest difference? Yeah. We had a paper in 2016 called Why Neurons Have Thousands of Synapses, and if you read that paper, you'll know what I'm talking about here. A real neuron in the brain is a complex thing. Let's just start with the synapses on it, which are the connections between neurons. Real neurons can have anywhere from five to thirty thousand synapses on them. The ones near the cell body, the ones close to the soma, those are like the ones people model in artificial neurons.
B
There are a few hundred of those, and they can affect the cell; they can make the cell become active. But ninety-five percent of the synapses can't do that. They're too far away; activity at one of those synapses just doesn't affect the cell body enough to make any difference, any one of them individually, or even a mass of them. What real neurons do is the following.
B
If you get 10 to 20 of them active at the same time, meaning they're all receiving input at the same time, and those 10 to 20 synapses are within a very short distance on the dendrite, like 40 microns, a very small area, so if you activate a bunch of these right next to each other at some distant place on the dendrite, what happens is it creates what's called a dendritic spike. The dendritic spike travels along the dendrites and can reach the soma, the cell body.
B
Now, when it gets there, it changes the voltage, which sort of starts to make the cell fire, but never enough to actually make it fire. We say it depolarizes the cell: you raise the voltage a little bit, but not enough to do anything, and then it goes back down again. You might ask, what good is that? So we proposed a theory, which I'm very confident in the basics of, that what's happening there is that those ninety-five percent of the synapses are recognizing dozens to hundreds of unique patterns.
B
The cell can fire when it gets enough of what's called proximal input, from the synapses near the cell body, but it can get ready to fire from the dozens to hundreds of patterns it recognizes on the other synapses. The advantage of this to the neuron is that when it actually does produce a spike, an action potential, it does so slightly sooner than it would have otherwise. And what good is slightly sooner? Well, the slightly sooner part is that
B
you end up with a representation that matches your prediction. It's a sparser representation, meaning fewer neurons are active, but it's much more specific. And we showed how networks of these neurons can do very sophisticated temporal prediction. So, to summarize: real neurons in the brain are time-based prediction engines, and there's no concept of this at all in artificial, what we call point, neurons. I don't think you can model the brain without them; I don't think you can build intelligent machines without them.
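The dendritic mechanism just described, coincident distal input producing a predictive (depolarized) state rather than a spike, can be caricatured as follows. This is a sketch with made-up segment sizes and a made-up class name, not the model from the 2016 paper; the threshold of roughly 10 to 20 coincident synapses is the one quoted above.

```python
# Sketch of a neuron with distal dendritic segments: a segment that sees
# enough coincident active inputs fires a "dendritic spike", putting the
# cell into a predictive (depolarized) state without making it fire.
# Threshold and segment contents are illustrative assumptions.

SEGMENT_THRESHOLD = 10  # ~10-20 coincident synapses trigger a segment

class PredictiveNeuron:
    def __init__(self, distal_segments):
        # each segment: the set of presynaptic cells it synapses onto
        self.distal_segments = distal_segments

    def is_predicted(self, active_cells):
        """True if any distal segment recognizes the current pattern."""
        return any(len(seg & active_cells) >= SEGMENT_THRESHOLD
                   for seg in self.distal_segments)

segment = set(range(100, 115))                  # 15 distal synapses
n = PredictiveNeuron([segment])
print(n.is_predicted(set(range(100, 112))))     # True: 12 coincide
print(n.is_predicted({1, 2, 3}))                # False: unrelated input
```

A predicted cell firing slightly sooner than its neighbors is what produces the sparser, more specific representation described next.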
B
It's where a large part of the time component comes from. These are predictive models, and the time is in there: there's a prior, a prediction, and an action, and it's inherent to every neuron in the neocortex. So I would say that point neurons sort of model a piece of that, and not very well, at that, either.
B
For example, synapses are very unreliable and you cannot assign any precision to them; even one digit of precision is not possible. So the way real neurons work is they don't adjust weights accurately like artificial neural networks do; they basically form new synapses. What you're always trying to do is detect the presence of some 10 to 20 active synapses at the same time, and they're almost binary, because you can't really represent anything much finer than that. So these are the kinds of differences, and I think that's actually another essential component, because the brain works on sparse patterns, and all of that mechanism is based on sparse patterns. I don't actually think you could build real brains, or machine intelligence, without incorporating some of those ideas.
A

B
For neurons that are doing all this: can we describe mathematically what they're doing, that type of thing? Even with the complexity of convolutional neural networks today, it's sort of a mystery; no one can really describe the whole system. It's no different here. My colleague Subutai Ahmad did a nice paper on this.
B
You can get all this material on our website if you're interested; it talks about the mathematical properties of sparse representations. What we can do is show mathematically, for example, why 10 to 20 synapses to recognize a pattern is the correct number, the right number you'd want to use, and by the way, that matches the biology. We can show mathematically some of these concepts, show why the brain is so robust to noise and error and failure and so on.
B
We can show that mathematically, as well as empirically in simulations. But the system can't be analyzed completely; no complex system can, so that's out of the realm. But there are mathematical benefits and intuitions that can be derived from the mathematics, and we try to do that as well. Most of our papers have a section about that.
A
Let me dig into it and see what your thoughts are. I'm not sure if you read this recent little blog post called The Bitter Lesson, by Richard Sutton; he's a reinforcement learning pioneer, I'm not sure if you're familiar with him, one of the old-school guys. His basic idea is about all the stuff we've done in AI in the past 70 years.
A
It's very difficult; he's saying this is what has worked. And yes, it's a prescription, but it's a difficult prescription, because it says that all the fun things you guys are trying to do, we are trying to do, he's part of the community, are only going to be short-term gains. So this all leads up to a question, I guess, on artificial neural networks, and maybe our own biological neural networks: do you think, if we just scale things up significantly, take these dumb artificial neurons...
B
Make them bigger, train them more, have more labeled data and so on? I don't think you can get to the kind of things I know the brain can do, and that we think about as intelligence, by just scaling it up. So maybe it's a good description of what's happened in the past, and of what's happened recently with the re-emergence of artificial neural networks, and it may be a good prescription for what's going to happen in the short term. But I don't think that's the path. I've said that earlier: there's an alternate path.
B
Subutai and I have talked about this; he's more of a machine learning guy, I'm more of a neuroscience guy. So this is now, I wouldn't say our focus, but it is now an equal focus here, because we need to proselytize what we've learned, and we need to show how it's beneficial to the machine learning world. So we're putting a plan in place right now; in fact, we just did our first paper on this.
B
I can tell you about that. But you know, one of the reasons I wanted to talk to you is that I'm trying to get more people in the machine learning community to say: I need to learn about this stuff; maybe we should think about this a bit more, about what we've learned about the brain and what those Numenta folks have done. Is that useful for us?
A

B
This is what I would call the entrepreneur's dilemma. You have this long-term vision: oh, we're all going to be driving electric cars, or all computers are going to work this way, or whatever, and you're at some point in time, and you say, I can see that long-term vision, I'm sure it's going to happen. How do I get there without killing myself, without going out of business? That's the challenge, that's the dilemma. It's a really difficult thing to do, and we're facing that right now.
B
So ideally, what you'd want to do is find some steps along the way, so you can get there incrementally; you don't have to throw it all out and start over again. The first thing we've done is focus on sparse representations. Just in case you, or some of the listeners, don't know what that means: in the brain, if I have, say, 10,000 neurons, what you would see is maybe 2% of them active at a time.
B
If I take 10,000 neurons that are representing something, they're sitting there in a little block together, a teeny little block of around 10,000 neurons, and they're representing a location, or they're representing a cup, or they're representing the input from my sensors; it doesn't matter, they're representing something. The way the representations occur, it's always a sparse representation, meaning it's a population code: which 200 cells are active tells me what's going on. It's not individual cells; the individual cells are not important at all.
B
It's the population code that matters. And when you have sparse population codes, all kinds of beautiful properties come out of them. The brain uses sparse population codes, and we've written about and described these benefits in some of our papers. They give tremendous robustness to the system; brains are incredibly robust. Neurons are dying all the time, synapses are falling apart all the time, and it keeps working. So what Subutai and Luiz, one of our other engineers here, have done...
I've
shown
they're
introducing
sparseness
into
accomplished
neural
networks
and
other
people
thinking
along
these
lines,
but
we're
going
about
it
in
a
more
principled
way,
I
think
and
we're
showing
that
with
you
enforced
sparseness
throughout
these
convolutional
neural
networks,
in
both
the
active,
the
which
sort
of
which
neurons
are
active
and
the
connections
between
them
that
you
get
some
very
desirable
properties.
So
one
of
the
current
hot
topics
in
deep
learning
right
now
are
C's
adversarial
examples.
So
you
know
I
can
give
me
any
deep.
B
Learning,
Network
and
I
can
give
you
a
picture
that
looks
perfect
and
you're
gonna
call
it.
You
know
you're
gonna
say
the
monkey
is
you
know
an
airplane?
That's
the
problem
and
DARPA
just
announced
some
big
thing.
They're
trying
to
you
know
have
some
contest
for
this,
but
if
you,
if
you
enforce
sparse
representations
here,
many
of
these
problems
go
away.
They're
much
more
robust
and
they're
not
easy
to
fool.
So
we've
already
shown
some
of
those
results.
It
was
just
literally
in
January
or
February
just
last
month.
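One common way to enforce that kind of activation sparsity in a layer is a k-winner-take-all nonlinearity. The sketch below is a minimal hypothetical version of that idea, not the implementation from the paper being discussed; the layer size and the value of k are arbitrary choices of mine.

```python
import numpy as np

def k_winners(x, k):
    """k-winner-take-all: keep the k largest activations, zero the rest.
    This enforces a fixed sparsity level regardless of the input."""
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]          # indices of the k largest values
    out[top] = x[top]
    return out

rng = np.random.default_rng(0)
activations = rng.normal(size=1000)   # dense pre-activations of a layer

sparse = k_winners(activations, k=50) # only 5% of units stay active
print(np.count_nonzero(sparse))       # 50
```

Applied after each layer, a rule like this guarantees that every representation flowing through the network is sparse, which is the property the robustness argument relies on.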
B
We did that, and I think it's on bioRxiv or arXiv right now, so you can read about it. But that's like a baby step, okay? That's taking something from the brain. We know about sparseness, we know why it's important, we know what it gives the brain. So let's try to enforce that onto this.
A
What's...
B
A lot, yes. So here's an intuition for it. This is a bit technical, so for engineers and machine learning people this will be easy, but for other listeners, maybe not. If you're trying to classify something, you're trying to divide some very high-dimensional space into different pieces, A and B, and you're trying to create some boundary where you say all these points in this high-dimensional space are A, and all these points in the high-dimensional space are B. If you have points that are close to that line, it's not very robust. It works for all the points you know about, but it's not very robust, because you just move a little bit and you've crossed over the line. When you have sparse representations...
B
Imagine I pick 200 cells to be active out of 10,000, okay? So I have 200 cells active. Now let's say I randomly pick another, different representation of 200. The overlap between those is going to be very small, just a few. I can pick millions of samples randomly of 200 neurons, and not one of them will overlap more than just a few. So one way to think about it is, if I want to fool...
B
Maybe the next thing, yeah. Okay, so we picked one; we don't know if it's going to work well yet. So again, we're trying to come up with incremental ways of moving from brain theory, to add pieces to the current machine learning world, one step at a time. So the next thing we're going to try to do is incorporate some of the ideas of the Thousand Brains Theory: that you have many, many models, and that they are voting. Now, that idea is not new.
B
You've identified a very serious problem. First of all, the tests that they have are the tests that they want, not tests of the other things that we're trying to do, right? The second thing is, to be competitive on these tests you have to have huge data sets and huge computing power, and we don't have that here; we don't have it the way the big teams and big companies do. So there are numerous issues there.
B
You know, our approach to this is all based, in some sense you might argue, on elegance. We're coming at it from a theoretical base where we think, oh my god, this is clearly elegant, this is how brains work. But the machine learning world has gotten into this phase where they think it doesn't matter. It doesn't matter what you think, as long as you do 0.1 percent better on this benchmark.
B
That's all that matters, and that's a problem. We have to figure out how to get around that; it's one of the challenges we have to deal with. So I agree, you've identified a big issue, and it's difficult for those reasons. But, you know, part of the reason I'm talking here today is that I hope I'm going to get some machine learning people to go read about this.
A
That's why I'm here as well, because I think machine learning now, as a community, is at a place where the next step needs to be orthogonal to what has achieved success in the past.
B
You see other leaders in machine learning saying this. You know, Geoff Hinton with his capsules idea; many people have gotten up and said, you know, we're going to hit a roadblock, maybe we should look at the brain, things like that. So hopefully that thinking will occur organically, and then we're in a nice position for people to come and look at our work and say, well, what can we learn from these guys?
A
On this idea of... well.
B
A little disappointed in these initiatives, because, you know, there is sort of a human side of it, and it could very easily slip into how humans interact with intelligent machines, which, there's nothing wrong with that, but that is orthogonal to what we're trying to do. We're trying to say, what is the essence of intelligence? I don't care... I want to build intelligent machines that aren't emotional, that don't smile at you, that aren't trying to tuck you in at night.
A
There is that pattern: when you talk about understanding humans as being important for understanding intelligence, you start slipping into topics of ethics, or, yeah, like you said, the interactive elements, as opposed to, no, no, no, let's zoom in on the brain and study what the human brain does.
A
Maybe you can explain this to me. In artificial neural networks, there's a difference between the learning stage and the inference stage. Do you see the brain as something different? One of the big distinctions that people often make, I don't know how correct it is, is that artificial neural networks need a lot of data; they're very...
B
There are two issues there. So remember I talked earlier about the constraints we always feel. Well, one of those constraints is the fact that brains are continually learning. That's not something we said, oh, we can add that later; that's something that was up front, that had to be there from the start. It made our problems harder, but we showed, going back to the 2016 paper on sequence memory, how that happens, how the brain infers and learns at the same time. Our models do that; they're not two separate phases or two separate sets of time.
B
I think that's a big, big problem in AI, at least for many applications, not for all, so I can talk about that; it gets detailed. There are some parts of the neocortex, in the brain, where what's going on is there are these cycles of activity, and there's very strong evidence that you're doing more inference on one part of the phase and more learning on the other part of the phase.
B
So the brain can actually sort of separate different populations of cells, or go back and forth like this. But in general, I would say that's an important problem. All of the networks that we've come up with do both; they're continuous learning networks. And you mentioned benchmarks earlier; well, there are no benchmarks for that. Exactly. So we have to, you know, get on our little soapbox and say, hey, by the way, this is important.
B
The first thing is, imagine I reach my hand into a black box and I'm trying to touch something. I don't know up front if it's something I already know or if it's a new thing, and I'm doing both at the same time. I don't say, oh, let's see if it's a new thing...
B
Oh, let's see if it's an old thing. I don't do that. As I go, my brain says, oh, it's new, or it's not new, and if it's new, I start learning what it is. And by the way, it starts learning from the get-go, even if it can't recognize it yet. So they're not separate problems; so that's the first one. The other thing you mentioned was fast learning.
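The "recognize or learn, in one step" behavior can be caricatured with a tiny prototype memory. This is an illustrative toy of my own, not the HTM sequence-memory algorithm: it just shows inference and learning happening in the same pass, with no separate training phase, using made-up sizes and a made-up match threshold.

```python
import random

random.seed(2)
N, K = 1000, 40

def sdr():
    """A random sparse code standing in for a sensed object."""
    return frozenset(random.sample(range(N), K))

memory = []   # learned patterns; grows as novel inputs arrive

def sense(pattern, match_threshold=0.5):
    """Inference and learning in a single step: recognize the input if
    it sufficiently overlaps something known, otherwise store it."""
    for known in memory:
        if len(pattern & known) >= match_threshold * K:
            return "known"
    memory.append(pattern)  # novel: learn it immediately, on the fly
    return "new"

cup = sdr()
print(sense(cup))   # "new"  : first contact, learned on the spot
print(sense(cup))   # "known": recognized, no training phase needed
print(sense(sdr())) # "new"  : a different object is again novel
```

Because random sparse codes barely overlap, the novelty decision falls out of the same overlap test used for recognition; nothing in the loop distinguishes a "training" input from a "test" input.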
B
So I was just describing continuous learning, but there's also fast learning. I mean, literally, I can show you this coffee cup and say, here's a new coffee cup, it's got the logo on it, take a look at it. Done. You're done. You can predict what it's going to look like in different positions. So I can talk about that too. Yes, in the brain, the way learning occurs... I mentioned this earlier, but I'll mention it again.
B
The way learning occurs: imagine a section of a dendrite of a neuron, and I want to learn something new. It doesn't matter what it is; I just need to recognize a new pattern. So what I'm going to do is form new synapses. New synapses, we're going to rewire the brain, onto that section of the dendrite. Once I've done that, everything else that neuron has learned is not affected by it.
B
That's because it's isolated to that small section of the dendrite; they're not all being added together like in a point neuron. So if I learn something new on this segment here, it doesn't change anything that occurred anywhere else in that neuron. I can add something without affecting previous learning, and I can do it quickly. Now, let's talk about the quickness, how it's done in real neurons. You might say, well, doesn't it take time to form synapses? Yes, it can take maybe an hour to form a new synapse.
B
Is learning. That is learning. Okay, going back many, many years: the psychologist who proposed this, Hebb, Donald Hebb, proposed that learning was the modification of the strength of a connection between two neurons. People interpreted that as the modification of the strength of a synapse. He didn't say that; he just said there's a modification in the effect of one neuron on another. So synaptogenesis is totally consistent with what Donald Hebb said. But anyway, there are these mechanisms that grow a new synapse. You can go online...
B
You can watch a video of a synapse growing in real time. You can literally see this little finger growing; it's pretty impressive. So those mechanisms are known. Now, there's another thing that we've speculated about and written about, which is consistent with neuroscience but less proven, and this is the idea: how do I form a memory really, really quickly, like instantaneously? If it takes an hour to grow a synapse, that's not instantaneous. So there are types of synapses called silent synapses.
B
They look like a synapse, but they don't do anything; they're just sitting there. An action potential comes in, and it doesn't release any neurotransmitter. Some parts of the brain have more of these than others. For example, the hippocampus has a lot of them, which is where we associate most short-term memory. So what we speculated, again in that 2016 paper, is that the way we form very quick memories, very short-term memories, is that we convert silent synapses into active synapses.
B
It's like saying a synapse has a zero weight or a one weight, but the long-term memory has to be formed by synaptogenesis. So you can remember something really quickly by just flipping a bunch of these guys from silent to active. It's not going from 0.1 to 0.15; it goes from doing nothing to releasing transmitter, and if I do that over a bunch of these, I've got a very quick short-term memory. So I guess the lesson behind this is that most neural networks today are fully connected.
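The silent-synapse idea, binary weights flipped on for whatever is firing right now, can be sketched in a few lines. This is a cartoon of my own construction under the assumptions just described (a segment with a fixed pool of potential synapses, 0/1 weights, a made-up firing threshold), not a model from the 2016 paper.

```python
import random

random.seed(3)

# A dendritic segment has a pool of potential (silent) synapses onto
# specific presynaptic cells; each synapse is either silent (0) or
# active (1), rather than a graded weight.
potential = random.sample(range(10_000), 50)   # cells this segment can see
weight = {cell: 0 for cell in potential}       # all silent initially

def memorize(active_cells):
    """Fast memory: flip silent synapses to active for the cells that
    are firing now. No gradual weight adjustment, one-shot."""
    for cell in active_cells:
        if cell in weight:
            weight[cell] = 1

def responds(active_cells, threshold=10):
    """The segment fires if enough active synapses see active cells."""
    return sum(weight[c] for c in active_cells if c in weight) >= threshold

pattern = set(random.sample(potential, 20))    # 20 of the pool are firing
print(responds(pattern))   # False: every synapse is still silent
memorize(pattern)          # one-shot: flip those 20 synapses on
print(responds(pattern))   # True: the segment now recognizes the pattern
```

Note how far this is from gradient training: a single pass turns the segment into a detector for the pattern, and nothing else about the neuron changes.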
B
Every neuron connects to every other neuron from layer to layer. That's not correct in the brain, and we actually don't want that; it's bad. You want very sparse connectivity, so that any neuron connects to just some subset of the neurons in the other layer, and it does so on a dendrite-by-dendrite-segment basis. So it's a very sparse, laid-out type of thing. And then learning is not adjusting all these weights; learning is just saying, okay, connect to these 10 cells here right now.
B
It's even easier, it's even easier. Backpropagation requires something that really can't happen in brains, this backward propagation of an error signal; it really can't happen. People are trying to make it happen, but it doesn't fit the brain. This is pure Hebbian learning. Well, synaptogenesis, pure Hebbian learning. It's basically saying, there's a population of cells over here that are active right now, and there's a population of cells over here active right now; how do I form connections between those active cells? And it's literally saying...
B
These 100 neurons here became active before this neuron became active, so form connections to those ones. That's it. There's no propagation of error, nothing. All the networks we do, all the models we have, work almost completely on Hebbian learning, but on dendritic segments, and multiple synapses at the same time.
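The rule just described, "connect this neuron to a sample of the cells that were active right before it fired", can be written directly, with no error signal anywhere. This is a minimal sketch under my own assumptions (population size, how many synapses to form per step), not Numenta's implementation.

```python
import random

random.seed(4)
N, K = 1000, 50

pre = set(random.sample(range(N), K))    # cells active "over here"
post = 7                                  # a cell that just became active

# connections[post] holds the presynaptic cells this neuron listens to
connections = {post: set()}

def hebbian_step(presynaptic_active, postsynaptic_cell, sample=20):
    """Pure Hebbian rule: a sample of the cells that were active just
    before this neuron fired get connected to it. No error signal is
    propagated anywhere; only local co-activity is used."""
    chosen = random.sample(sorted(presynaptic_active), sample)
    connections[postsynaptic_cell].update(chosen)

hebbian_step(pre, post)

# The neuron's learned inputs all lie inside the co-active population,
# so presenting that population again drives every learned synapse.
active_inputs = len(connections[post] & pre)
print(active_inputs)   # 20: all learned synapses match this pattern
```

Contrast with backpropagation: here each connection is formed from information available at the two cells it joins, which is the locality constraint the conversation is pointing at.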
B
You know, it's always the crazy question to ask, because no one can predict the future. Absolutely. So I'll tell you a story. I used to run a different neuroscience institute, called the Redwood Neuroscience Institute, and we would hold these symposiums; we'd get like 35 scientists from around the world to come together, and I used to ask them all the same question.
B
I would say, well, how long do you think it'll be before we understand how the neocortex works? And we'd go around the room, and everyone would introduce themselves, and they'd have to answer that question. The typical answer was 50 to 100 years. Some people would say 500 years; some people said never. I said, well...
B
So, you know, it doesn't work like that. As I mentioned earlier, these are step functions. Things happen, and then bingo, they happen. You can't predict it. I feel we've already passed a step function. So if I can do my job correctly over the next five years, meaning I can proselytize these ideas, I can convince other people they're right, we can show that machine learning people should pay attention to these ideas, then we're definitely in an under-20-year time frame.
B
If I can do those things. If I'm not successful in that, and this is the last time anyone talks to me, and no one reads our papers, and I'm wrong or something like that, then I don't know. But it's not 50 years. You know, it's the same thing as electric cars: how quickly are they going to populate the world? It probably takes about a 20-year span. It'll be something like that. But I think if I can do what I said, we're starting it.
B
He was ahead of his time. You know, as I said, I recognize this is part of any entrepreneur's challenge. I use "entrepreneur" broadly here; in this case I'm not building a business trying to sell something, I'm trying to sell ideas. And this is the challenge of how you get people to pay attention to you, how you get them to give you positive or negative feedback.
A
You know, there's a lot of hype behind artificial intelligence currently. As you look to spread the ideas of neocortical theory, of the things you're working on, do you think there's some possibility we'll hit an AI winter once again?
A
True, yeah, very true. Because it's almost like you need the winter to refresh the palate.
B
If everything crashed completely, and every student left the field, and there was no money for anybody to do anything, and it became an embarrassment to talk about machine intelligence and AI, that wouldn't be good for us either. You want sort of the soft-landing approach, right? You want enough of the senior people in AI and machine learning to say, you know, we need other approaches, we really need other approaches; maybe we should look at the brain. Okay, let's look.
B
First of all, I don't think the goal is to create a machine that has human-level intelligence. I think it's a false goal, back to Turing; I think it was a false statement. We want to understand what intelligence is, and then we can build intelligent machines of all different scales, all different capabilities. You know, a dog is intelligent; it'd be pretty good to have a dog. But what about something that doesn't look like an animal at all, in different spaces?
B
So my thinking about this is that we want to define what intelligence is, agree upon what makes an intelligent system. We can then say, okay, we're now going to build systems that work on those principles, or some subset of them, and we can apply them to all different types of problems. It's kind of like computing. If I take a little one-chip computer, I don't say, well, that's not a computer because it's not as powerful as this big server over here.
B
No, because we know what the principles of computing are, and I can apply those principles to a small problem or to a big problem. Intelligence needs to get there too. We have to say, these are the principles. I can make a small one, a big one; I can make them distributed; I can put them on different sensors. They don't have to be human-like at all. Now, you did bring up a very interesting question about embodiment. Does it have to have a body? It has to have some concept of movement.
B
It has to be able to move through these reference frames I talked about earlier, whether it's physically moving or not. If I'm going to have an AI that understands coffee cups, it's going to have to pick up the coffee cup and touch it and look at it, with its eyes and hands or something equivalent to that. If I have a mathematical AI, maybe it needs to move through mathematical spaces. I could have a virtual AI that lives in the internet...
B
Its movements are traversing links and digging into files, but it's got a location, and it is traveling through some space. You can't have an AI that just takes a flash of input, here's a pattern, done. No, it's movement: moving pattern, moving pattern, moving pattern, attention, digging in, building structure, figuring out the model of the world. So some sort of embodiment, whether it's physical or not, has to be part of it.
B
It's interesting to think about, but I don't think it's useful as a means to figure out how to build intelligent machines. It's something that systems do, and we can talk about what it is, like, well, if I built the system like this, then it would be self-aware, and if I built it like this, it wouldn't be self-aware. So that's a choice I can make. It's not like, oh my god, it's self-aware, I can't turn it off. I heard an interview recently with this philosopher from Yale...
B
I can't remember his name, I apologize for that, but he was talking about, well, if these computers are self-aware, then it would be a crime to unplug them. And I'm like, oh, come on. I unplug myself every night; I go to sleep. Is that a crime? You know, I plug myself in again in the morning. So people get kind of bent out of shape about this. I have very detailed understandings, or opinions, about what it means to be conscious and what it means to be self-aware. I don't think it's that interesting.
B
Then we have to break it down into two parts, okay? Because consciousness isn't one thing; that's part of the problem with that term. It means different things to different people, and there are different components of it. There is a concept of self-awareness that can be very easily explained.
B
You have a model of your own body. The neocortex models things in the world, and it also models your own body. And then it has a memory; it can remember what you've done. It can remember what you did this morning, remember what you had for breakfast, and so on. So I can say to you, okay, Lex, were you conscious this morning when you had your bagel? And you'd say, yes, I was conscious. Now...
B
What if I could take your brain and revert all the synapses back to the state they were in this morning, and then I said to you, Lex, were you conscious when you ate the bagel? You'd say no, I wasn't. And I'd say, here's a video of you eating the bagel. You'd say, I wasn't there; that's not possible, because I must have been unconscious at that time.
B
So we can make this one-to-one correlation: the memory of your body's trajectories through the world over some period of time, and the ability to recall that memory, is what you would call consciousness. I was conscious of that. It's self-awareness. Any system that can memorize what it's done recently, and bring that back and invoke it again, would say, yeah, I'm aware, I remember what I did. All right, I got it. That's an easy one.
B
Although some people think that's the hard one. The more challenging part of consciousness is the one that sometimes goes by the word "qualia", which is, you know, why does an object seem red? Or what is pain, and why does pain feel like something? Why do I feel redness, or why do I feel pain the way I do? And then I could say, well, why does sight seem different than hearing? It's the same problem. These are all just neurons, and so how is it that...
B
Why does looking at you feel different than hearing you? It feels different, but it's all neurons in my head; they're all doing the same thing. So that's the interesting question. The best treatise I've read about this is by a guy named O'Regan. He wrote a book called "Why Red Doesn't Sound Like a Bell". It's not a trade book, not an easy read, but it's an interesting question. Take something like color. Color really doesn't exist in the world; it's not a property of the world.
B
The property of the world that exists is light frequency, and that gets turned into... we have certain cells in the retina that respond to different frequencies differently than others. So when the signals enter the brain, you have a bunch of axons that are firing at different rates, and from that we perceive color. But there is no color in the brain; there's no color coming in on those synapses. It's just a correlation between some axons and some property of frequency, and that isn't even color itself.
B
That's a good way of putting it: it's useful as a predictive mechanism. It's a generalization, a way of grouping things together, to say it's useful to have a model like this. Think about the well-known syndrome that people who've lost a limb experience, called phantom limbs. What they claim is they can have their arm removed, but they feel the arm; they not only feel it, they know it's there.
B
They'll swear to you that it's there, and then they can feel pain in the arm, and feel pain in their fingers. If they move their non-existent arm behind their back, then they feel the pain behind their back. So this whole idea that your arm exists is a model in your brain. It may or may not really exist, but it's useful to have a model of something that correlates to things in the world, so you can make predictions about what would happen when those things occur.
B
It's a little bit fuzzy, but I think you're getting quite towards the answer there. It's useful for the model to express things in certain ways that we can then map into these reference frames and make predictions about. I need to spend more time on this topic. It doesn't bother me.
A
There is the silly notion that you described briefly, which doesn't seem so silly for humans: you know, if you're successful at building intelligent machines, it feels wrong to then turn them off. Because if you're able to build a lot of them, it feels wrong to then be able to, you know, turn...
B
Them off. But let's break it down a bit. As humans, why do we fear death? Well, first of all, the state of being dead doesn't matter; when you're dead, it doesn't matter. So why do we fear death? We fear death for two reasons. One is because we are programmed genetically to fear death; that's a survival and gene-propagation thing. And we're also programmed to feel sad when people we know die. We don't feel sad for someone we don't know who dies.
B
There are people dying right now, and you're not weeping, saying, I feel so bad about them, because I don't know them. But if I knew them, I'd feel really bad. So again, these are old-brain, genetically embedded things, that we fear death. Outside of those uncomfortable feelings, there's nothing else to worry about.
B
And if I didn't wake up, it wouldn't matter to me. Only if I knew that was going to happen would it bother me; if I didn't know it was going to happen, how would it bother me? Then I would worry about my wife. So imagine I was a loner and I lived in Alaska, out there with no animals, and nobody knew I existed.
B
Maybe there isn't a reason; maybe there is. So I'm interested in those big problems too, right? You know, you interviewed Max Tegmark, and there are people like that. I'm interested in those big problems as well. In fact, when I was young, I made a list of the biggest problems I could think of. First, why does anything exist? Second, why do we have the laws of physics that we have? Third, is life inevitable, and why is it here? Fourth, is intelligence inevitable, and why is it here?
B
So I said my mission... I mean, you asked me earlier: my first mission is to understand the brain, but I felt that was the shortest way to get to true machine intelligence, and I want to get to true machine intelligence because, even if it doesn't occur in my lifetime, other people will benefit from it. I think it'll occur in my lifetime, but, you know, 20 years, you never know. That would be the quickest way for us to, you know... we can make super-mathematicians, we can make super space explorers.
B
We can make super-physicist brains that do these things, and that can run experiments that we can't run. We don't have the abilities to manipulate things and so on, but we can build intelligent machines that do all those things, with the ultimate goal of finding out the answers to the other questions.
A
I'd love to. So, let's... I think.
B
The world is a better place knowing things; we're always better knowing things than not knowing them. Do you think it's a better place to live in, knowing that our planet is one of many in the solar system, and the solar system is one of many in the galaxy? I think it's a more... I sometimes think, like, God, what would it be like to live three hundred years ago? I'd be looking at the sky going, God, I can't understand anything. Oh my god, I'd be going, what's going on?
B
We have to worry about privacy, and about how AI impacts false beliefs in the world. We have real problems and things to worry about with today's AI, and that will continue as we create more intelligent systems. There's no question: you know, the whole issue about making intelligent armaments and weapons is something we have to think about carefully. I don't think of those as existential threats.
B
I think those are the kinds of threats we always face, and we'll have to face them here and hopefully deal with them. We could talk about what people think are the existential threats, but when I hear people talking about them, they all sound hollow to me. They're based on ideas from people who really have no idea what intelligence is, and if they knew what intelligence was, they wouldn't say those things. Those are not experts in the field.
B
So we already face this threat, in some sense. They're called bacteria. These are organisms in the world that would like to turn everything into bacteria, and they're constantly morphing, constantly changing to evade our protections, and in the past they have killed huge swaths of the human population on this planet. So if you want to worry about something that's going to multiply endlessly, we have it. And I'm far more worried in that regard; I'm far more worried that some scientist in a laboratory will create a super-virus or a super-bacteria that we cannot control.
B
That is a more existential threat. Putting intelligence on top of it actually seems to make it less existential to me. It limits its power, limits where it can go, limits the number of things it can do, in many ways. A bacteria is something you can't even see.
A
That's only one of those problems. Yes, exactly. So the other one: just on your intuition about intelligence, when you think about intelligence as humans have it, if you look at intelligence on a spectrum from zero to us humans, do you think you can scale it to something far superior?
B
I'll get to the mechanisms, but I want to make another point here, Lex, before I get there. Intelligence is the neocortex; it is not the entire brain. The goal is not to make a human. The goal is not to make an emotional system. The goal is not to make a system that wants to have sex and reproduce. Why would I build that? If you want a system that wants to reproduce and have sex, make bacteria, make computer viruses. Those are bad things; don't do that. Those are really bad.
B
Don't do those things; regulate those. But if I just say I want an intelligent system, why does it have to have any human-like emotions? Why does it even care if it lives? Why does it even care if it has food? It doesn't care about those things. It's just, you know, in a trance, thinking about mathematics, or it's out there just trying to build something on Mars. You see, that's a choice.
B
This is why I mentioned the bacteria one. Because you might say, well, some person's going to do that. Well, some person today could create a bacteria that's resistant to all known antibacterial agents. So we already have that threat; we already know this is going on. It's not a new threat. So just accept that, and then we have to deal with it, right? So my point has nothing to do with intelligence. Intelligence is a separate component that you might apply to a system that wants to reproduce and do stupid things.
A
In fact, it's a mystery why people haven't done that yet. My dad is a physicist, and he has beliefs about the reason, for example, that nuclear weapons haven't proliferated amongst evil people. One belief, that I share, is that there aren't that many evil people in the world who would use, whether it's bacteria or nuclear weapons or maybe future AI systems, to do bad; the fraction is small. And the second is that it's actually really hard, technically.
B
In terms of... to really annihilate humanity, you'd have to have, you know, sort of the nuclear winter phenomenon, which is not one person shooting, or even ten bombs. You'd have to have some automated system that detonates a million bombs, or however many thousands we have.
B
And it's just like, only some stupid system would do that automatically, you know, a Dr. Strangelove type of thing. You know, I mean, look, we could have some nuclear bomb go off in some major city in the world. I think that's actually quite likely, even in my lifetime. I don't like to think about it, and it'd be a tragedy, but it won't be an existential threat. And it's the same as, you know, the virus of 1918, whatever it was, you know, the influenza. These bad things can happen, and the plague and so on.
B
In your cortex, I think it's the wrong thing to say "double the intelligence." You break it down into different components: can I make something that's a million times faster than a human brain? Yes, I can do that. Could I make something that has a lot more storage than the human brain? Yes, I could: more columns, more copies of columns. Can I make something that attaches to different sensors than a human brain? Yes, I can do that. Could I make something that's distributed? Yes; we talked earlier about how important voting is in your cortex.
B
Right, it's a really hard thing to get. I mean, you can paint a fuzzy picture: stretchy space, you know, yeah. But the field equations to do that, and the deep intuitions, are really, really hard, and I've tried; I've been unable to do it. You know, it's easy to get special relativity, but general relativity? That's it, man, that's too much. And so we already live with this.
B
To some extent, the vast majority of people can't understand what the vast majority of other people actually know. Either we don't have the effort to, or we can't, or we don't have the time, or we're just not smart enough, whatever. But we have ways of communicating. Einstein has spoken in a way that I can understand. He's given me analogies that are useful.
B
I can use those analogies in my own work and think about, you know, concepts that are similar. It's not like he existed on some other plane with no connection to my plane in the world here. So that will occur. It already has occurred. That's my point with this story: it already has occurred. We live with it every day.
B
One could argue that when we create machine intelligences that think a million times faster than us, they'll be so far beyond us that we can't make the connections. But, you know, at the moment, everything that seems really, really hard to figure out in the world, when you actually figure it out, it's not that hard. You know, most everyone can understand the multiverse, and most everyone can understand quantum physics. We can understand these basic things, even though hardly anybody could have figured them out.
B
Yeah, they'd say that already. I mean, Einstein wasn't a very normal person. He had a lot of, you know, quirks, and so did the other people who worked with him. So, you know, maybe they already were sort of on this astral plane of intelligence, and we live with it already. It's not a problem; it's still useful, you know.
B
Would
say
that
intelligent
life
has
and
will
exist
elsewhere
in
the
universe,
I'll
say
that
there
is
a
question
about
contemporaneous
intelligence
life,
which
is
hard
to
even
answer
when
we
think
about
relativity
in
the
the
nature
of
space-time.
You
can't
say
what
exactly
is
this
time
someplace
else
in
the
world,
but
I
think
it's
it's.
You
know.
I
do
worry
a
lot
about
the
the
filter
idea,
which
is
that
perhaps
intelligent
species
don't
last
very
long,
and
so
we
haven't
been
around
very
long.
B
You
know,
as
a
technological
species
we've
been
around
for
almost
nothing
man.
You
know
what
200
years
I'm
like
that,
and
we
don't
have
any
data,
a
good
data
point
on
whether
it's
likely
they
will
survive
or
not
so
do
I
think
that
there
have
been
intelligent
life
elsewhere
in
the
universe,
almost
certain
that,
of
course,
in
the
past
in
the
future.
Yes,
does
it
survive
for
a
long
time?
I
don't
know
this
is
another
reason.
B
I'm
excited
about
our
work
is
our
work,
meaning
that
general
Worlds
of
AI
and
I
think
we
can
build
intelligent
machines
that
outlast
us,
and
you
know
they
don't
have
to
be
tied
to
earth.
They
don't
have
to.
You
know,
I'm,
not
saying
that
recreating.
You
know.
You
know
aliens
I'm,
just
saying
well,
if
I
asked
myself-
and
this
might
be
a
good
point
to
end
on
here,
if
I
asked
myself,
you
know
what's
special
about
our
species,
we're
not
particularly
interesting
physically
we're.
B
Not
we
don't
fly
we're
not
good
swimmers,
we're
not
very
fast
from
that
very
strong.
You
know
it's
our
brain,
that's
the
only
thing
and
we
are
the
only
species
on
this
planet.
It's
built
the
model
of
the
world.
It
extends
beyond
what
we
can
actually
sense.
We're
the
only
people
who
know
about
the
far
side
of
the
Moon
and
the
other
universes
and
I
mean
other
other
galaxies
and
other
stars,
and
and
but
what
happens
in
the
atom,
there's.
No,
what
that
knowledge
doesn't
exist
anywhere
else.
B
It's
only
in
our
heads
cats,
don't
do
it
dogs
into
a
monkey's,
don't
do
it
it's
just
on,
and
that
is
what
we've
created.
That's
unique,
not
our
genes,
it's
knowledge
and
if
I
asked
me,
what
is
the
legacy
of
humanity?
What
what?
What
should
our
legacy
be?
It
should
be
knowledge.
We
should
preserve
our
knowledge
in
a
way
that
it
can
exist
beyond
us
and
I
think
the
best
way
of
doing
that,
in
fact
you
have
to
do
it-
is
to
has
to
go
along
with
intelligent
machines.
To
understand
that
knowledge.
B
Does it give us a better chance of prolonging life? Yes. It gives us a chance to live on other planets. But even beyond that, I mean, our solar system will disappear one day, just given enough time. So I don't know if we'll ever be able to travel to other solar systems, to the stars, but we could send intelligent machines to do that.
A
B
What is human? Our species is changing all the time. A human today is not the same as a human just fifty years ago. What does "human" mean? Do we care about our genetics? Why is that important? As I point out, our genetics are no more interesting than a bacterium's genetics; they're no more interesting than, you know, a monkey's genetics. What we have, what's unique and what's valuable, I would say, is our knowledge, what we've learned about the world. And that is the rare thing. That's the thing we want to preserve. It's not our genes, it's knowledge.