Description
Jeff Hawkins, Numenta
https://simons.berkeley.edu/talks/jeff-hawkins-4-17-18
Computational Theories of the Brain
I think we should take a second and see how remarkable it is that we're having a four-day workshop on large-scale computational theories of the brain. This doesn't happen very often, and it's a little bit of an indicator that our instructions as speakers were to encourage us to be somewhat speculative. If you're a theorist, that's what you're supposed to do, but in neuroscience it's actually often looked down upon. So I think it's great that we're here, and I really appreciate the Simons Institute for doing that.
I'm going to talk about a large-scale theory of the brain, specifically the neocortex, and it's great to be following the previous talks. I'm going to talk about a theory of how the neocortex itself may be organized around locations. I've proposed that specific neural networks in the neocortex use grid-cell-like mechanisms, not grid cells per se, which are in the entorhinal cortex, but grid-cell-like mechanisms representing locations; that these exist in every cortical column; and that the cortex can be understood as a location processor.
That's a lot to cover, and I'm going to get into it. I'm going to start with, is this working? There we go, yes. Just what you may not know: Numenta is a privately held research lab, a company, in Redwood City, California. We have about 12 employees, and we have a scientific mission, and our scientific mission is to reverse-engineer the neocortex.
That's arguably a very ambitious goal, I understand that, but it's also a realizable goal, and we shouldn't forget that. We're interested in biologically accurate theories; we're not interested in biologically inspired theories, and we're not interested in sort of advancing machine learning. We want to understand how the neuropil that exists in our heads actually works, and we test our theories empirically, with partnerships and via simulation. We do have a secondary goal, but it's definitely secondary; we don't spend as much time on it.
We believe that cortical theory will be the basis for future machine intelligence, and we have an active open-source community that takes our theories and applies them to machine learning tasks.
Okay, so let's just get into it. I'm not sure why this isn't very visible. I'm going to start with a quote from Francis Crick, of DNA fame. For a large part of his life he was interested in neuroscience, and he was totally consumed with the question of how it is that the neurons in our heads could perceive.
How is it that we have the scene in front of us? How could neurons be doing this? He wrote an essay in 1979 that was very influential for me. It was in Scientific American, in a special issue about the brain, with all these articles from famous neuroscientists about what we've learned about the brain, and Francis Crick came along at the end and said: you know what, that's great.
We have all this data, he said, but we have no idea what the hell's going on, and let's not pretend that we do. And he said it wasn't that we were lacking a theory. His exact words were that we were lacking a broad framework in which to apply these ideas. You needed a framework for understanding what the brain does, on which to hang your theories. Now, that was almost 40 years ago, and a lot has happened since then.
First of all, we've had tremendous advances in experimental techniques; we're all aware of those, and of the tremendous advances in the collection of data and the things we can do. The question is how we've done at developing a broad framework for understanding what the brain does. I think there's been some success and some lack of success. One of the things that's been most successful is the work you just heard about: the whole field of place cells and grid cells in the medial entorhinal cortex and hippocampus.
That is sort of a framework for how we map out environments and know where we are, and you can see the excitement and interest that goes into that field right now. But when it comes to the neocortex itself, I argue that there hasn't been much progress in developing this framework. We still need it, and that's what I'm going to talk about today. Now, here's an artist's rendition of a brain.
We all know that's a human brain. Just a reminder that the neocortex is this thing that wraps around the old brain; it's about 75% of the human brain in terms of volume. If you could take the neocortex out of your head... I always carry this napkin with me; this is my model of the neocortex. It's about this big in area.
If you could flatten it out, which you can't, it's about two-and-a-half to three millimeters thick. It is basically everything we think of about the world: our perception, in vision and hearing and touch, plus mathematics, language, physics and so on. It's all occurring in this sheet of cells. It's quite remarkable.
Now, there are two things about the neocortex that stand in contrast with each other. One is the diversity of functions, the number of things it's capable of doing, and the other is the commonality of the circuitry, and this has puzzled people for a long time. In terms of the diversity of function, the neocortex is divided into about a hundred different regions.
A
No
one
really
knows
how
many
there
are
multiple
breeds
and
for
vision
and
touch
and
audition
and
languages,
mathematics,
pok
in
languages
and
written
languages,
but
mathematics
and
physics,
music
and
art
all
cognition
occurs
there.
Those
regions
are
connected
in
this
very
complex
hierarchy
that
doesn't
really
look
like
a
hierarchy
sort
of
like
a
hierarchy,
and
this
is
remarkably
diverse
functionality
that
has
led
to
our
success
as
a
species,
but
then
people
notice
for
over
a
hundred
years
when
you
look
at
the
details
in
your
cortex
there's
a
lot
of
commonality.
First of all, there are these layers of cells, and the connections that cross the layers are predominant: information flows up and down across the layers, with some horizontal connections in some layers. This suggested, a long time ago, that there might be a columnar organization to the cortex, that information flows up and down in a column and then spreads horizontally. Columns don't literally look like this, but that's the theoretical idea, and this remarkably similar circuitry appears everywhere throughout the neocortex and in different species.
This led Vernon Mountcastle to propose the following, and if you haven't read it, you should go read his essay. He said: look, I propose that all regions are doing the same thing, and what makes a visual region visual is that it's hooked to an eye, and what makes an auditory region auditory is that it's hooked to an ear. It's an incredible idea. I put it up there right alongside Darwin's idea of evolution.
Darwin said that the diversity of species is all based on one algorithm; Mountcastle was saying that the diversity of our cognitive functions is based on one basic algorithm. He then went on further to say that the functional unit of the neocortex is a cortical column. Exactly what a column is, is debatable, but for the purpose of my talk today I'm just going to say: take a square millimeter of cortex. In a human there are about 150,000 of those. And then he said the obvious.
He said: look, understanding what the column does is the key problem in neuroscience. If you understand that, then you understand the neocortex, and then you understand 70% of the brain. Sounds simple, but that was his proposal, and it's so radical that today many people still don't believe it; it's hard to imagine what that common function could be. But I take it as gospel, acknowledging the fact that there are differences between regions; there's still something common going on here. Now, one of the problems in solving this is that cortical columns are incredibly complex.
All the circuitry that the cortex has is contained in there. There are some exceptions: I'm not talking about the thalamus, and there are obviously connections to subcortical areas. But there is nothing else. One of the reasons Mountcastle picked the cortical column is that every cell type exists in that block, every type of connection exists, and basically every type of physiological response that repeats over the cortex is contained in it. So there isn't another place for the cortex to do things.
Anyway, back to the cortical column: it's extremely complex. There are at least ten or a dozen or more cellular layers. There are dozens and dozens of different cellular pathways here, documented over the decades, within the column and between columns. There's an entire inhibitory cell network, which is actually even more interesting than the excitatory cell network. And we know there is significant region-to-region variability in these things.
We heard about that in talks yesterday: more cells in certain layers, fewer in others, more connectivity and so on. But still, there's something common out here, and it's been tugging at people for a long time. How do we understand it? You may not like my answer, but I'm going to walk through what I think is going on here.
Okay, now, how do we attack this problem? Our approach is the following; this is what we've done in our research. We start with the observation that the cortex is constantly predicting its inputs.
Now, what do I mean by that? Take my sense of touch: if I reach my hand over here, I have an expectation of what I'm going to feel, and if I move it up here, a slightly different expectation, and when I touch this thing, a different expectation again. In fact, every time I touch anything I have an expectation of what I'm going to feel, and I know that because if it were different, if this felt like jello or milk or something else, or if it had a real rough edge on it, I would notice it.
When we started with this, I wrote an entire book about this problem, called On Intelligence, about how brains build a predictive model of the world. The key question is: how does the cortex, and therefore how does the column, learn predictive models of its input? I argue that if the cortex is making predictions anywhere, it's going to be making predictions in every cortical column. And how would it do that?
So let me just walk through some basic circuitry here, because I want to pick out some parts of the circuitry of a cortical column which I'm then going to try to put a theoretical framework around. Let's talk about the feed-forward pathways into a cortical column; it doesn't matter which column you're talking about.
There are two sources of input. One is directly from other cortical regions, and the other is through the thalamus. The thalamic input is present not just in the primary sensory regions; as Murray Sherman has been arguing for many years,
A
It
is
a
part
of
every
cortical
column
that
input
comes
in
to
every
color
column,
from
the
thalamus
and
directly
from
cortex.
If
that's
available,
now
remind
ourselves
that
when
it
arise
primarily
in
layer,
four,
not
exclusively
Lea
4
but
arrives
in
layer
for
your
input
layer
in
the
cortex,
it
forms
less
than
10%
of
the
synapses
on
layer,
4
cells
in
the
v1.
It's
estimated
about
7%,
so
where
all
those
other
synapses
on
layer
4
is
coming
from.
A
There's
many
coming
from
other
layer,
4
cells,
but
about
50%
of
them
are
coming
from
layer
6a.
This
is
a
common,
repeated
observation
in
cortex,
the
1/2.
The
input
coming
to
layer
4
is
coming
from
6a
and
I've
noted
it
here
in
blue,
because
it
doesn't
appear
that
those
inputs
are
strong
enough
to
make
the
layer
4
cells
fire,
but
they
must
be
doing
something
important.
That's
a
lot
of
neural
pill,
but
just
not
doing
something.
So
something
is
happening
here
in
this
thing,
then
layer.
A
The
classic
view
is
that
layer,
4
projects,
2
layer,
3
layer,
3
projects,
2
layer,
5.
This
is
you
find
this
in
every
textbook,
but
here
you
see
a
similar
thing
between
lower
layer,
6
layer,
6b,
there's
a
similar
sort
of
bi-directional
connection
to
layer
5
those
cells.
Look
very
similar
is
a
very
unique
pattern,
though
how
they
look,
and
so
there
seems
to
be
this
sort
of
parallel
construction
going
on
here
between
6a
and
4
and
6,
B
and
5.
This
is
a
very
common
circuitry.
You
see
this
everywhere.
A
There
are
two
types
of
outputs
to
a
cortical
column.
There
is
fun
the
upper
layers,
predominantly
layer
3,
but
also
later
to
it
projects
to
other
areas
of
the
cortex
and
then
there's
another
output
from
layer
5,
the
thick
tufted
cells,
which
projects
to
the
thalamus
becomes
then
input
to
another
region
and
also,
as
Murray
has
argued
for
many
years.
It
has
a
motor
component
and
we've
seen
this
everywhere,
so
every
column
seems
to
have
a
motor
output.
A
As
far
as
we
know,
I
want
to
remember
this
I'm
gonna
try
to
develop
a
framework
for
understanding
what's
going
on
here
in
in
the
rest
of
this
talk
now
I'm
going
to
do
it
using
this.
As
my
my
outline
for
my
talk,
this
represents
the
alpha
half
of
my
talk,
and
it
also
represents
the
outline
for
how
we've
gone
about
our
work
over
the
last
13
years,
and
we've
been
trying
to
tackle
this
a
piece
at
a
time
and
building
up
a
more
complex
models
as
we
go.
A
We
started
by
asking
a
simple
question:
how
does
a
layer
of
cells
learn?
Predictive
models
of
sequences
like
like
a
melody
I
can
learn
a
melody.
I
can
predict
it
I'm
always
constantly,
particularly
next
note
in
the
melody
sequences
are
used
everywhere
in
the
brain.
We
use
them
in
our
motor
behaviors.
We
use
them
in
visual
inference
and
auditory
inference.
It's
a
common
thing
that
brains
have
to
do.
They
have
to
learn
sequences
and
play
them
back
that
took
us
many
years
to
figure
this
out.
It's
a
very
complex
story.
We deduced that there must be an efference copy, a motor-derived location signal, and I'll talk about that in a second, and we added a second layer to this model. We published that last year, in October of 2017. When we published that paper, we talked about the nature of this location signal, but we said: look, we deduced that it must exist, but we didn't know where it came from; we didn't understand its nature.
A
We
speculated
in
that
paper
that
we
could
study
the
answer.
Ronald
cortex,
which
has
another
mechanism
for
representing
locations
the
grid
cells,
and
perhaps
will
we
be
finding
motivation
there
for
how
it
is
be
happening
in
the
cortex
by
the
time
that
paper
came
out,
we
had
already
figured
this
out.
We
think
we
figured
it
out.
At
least
we
have
a
good
model
for
it.
A
We
believe
that
layer,
6a
is
actually
doing
something
very
similar
to
what
grid
cells
are
doing
in
the
answer
on
aquatics
I
call
it
grid
cell,
like
it's
not
exactly
the
same,
but
it's
using
some
of
the
same
tricks
that
we
believe
are
going
on
in
the
in
Toronto.
Cortex
were
preserved
and
actually
happening
in
every
cortical
column,
and
now
we
have
a
complete
sensory
motor
inference
model.
Here
we
have
not
extended
it
further.
A
We
realize
that
objects
in
the
world
are
composed
of
other
objects
and
I'm
going
to
get
to
this
into
more
detail,
and
so
we
believe
that
each
cortical
column
is
actually
learning
composite
objects
and
that
you
can
now
explain.
6A
and
6b
is
actually
two
different
location
layers
representing
two
different
structures
and
layer.
Five
is
a
composite
upper
layer.
I
will
get
to
that.
A
This
has
not
been
published,
so
we
did
put
that
a
poster
at
this
a
cosine
this
year
and
manuscripts
are
underway,
but
I
can
talk
about
it
because
we're
supposed
to
be
speculative
where
I
want
to
get
to
is
I
want
to
talk
about
what's
going
on
here,
but
you
can't
understand.
What's
going
on
here
and
Charlie,
tell
you
something
about
what's
going
on
here,
so
we
kind
of
build
this
up
and
I
have
to
bring
you
with
me.
A
Otherwise,
I
can't
just
explain
this
so
I'm
gonna
gonna
start
over
here
and
we
talk
about
how
is
to
some
of
the
key
points
and
how
we
think
that
a
layer
of
cells
we
think
even
in
input
layer
way
for
is
learning
sequences.
We
started
with
the
neuron
where
our
predictions
were
approaching
the
brain
occurred.
Have
you
ever
thought
about
that?
If
the
brain
is
making
predictions
all
over
the
place?
How
does
that
happen?
A
You
could
have
a
separate
population
of
cells
that
makes
predictions,
which
is
true,
but
we
actually
think
that
every
pyramidal
neuron
is
a
prediction
machine.
Now,
if
you
think
about
a
pyramidal,
neuron
has
anywhere
between
five,
it's
30,000
synapses,
ten
percenter,
more
close
to
the
cell
body
proximal
they
can
make
the
cell
fire
90%
of
them
are
too
far
away,
and
for
many
many
years
people
had
no
idea.
What
would
the
90%
of
the
synapses
in
that
cell
we're
doing?
A
Several
clever
labs
have
come
up,
and
some
of
people
here
in
the
room
have
worked
on
this
they've
discovered
that
dendrites
are
active
processing
elements
and
if
you
have
somewhere
between
8
or
20
synapses
on
a
dendritic
branch
and
they're
near
each
other
within
40,
microns
of
each
other
and
they
fire
and
they
activate
around
the
same
time.
Then
it
generated
dendritic
spike
and
the
generate
expect
will
travel
to
the
summer
and
have
a
large
effect.
A
If
it
was
an
NBA
spike,
it
will
depolarize
the
soma,
but
not
enough
to
make
the
cell
fire
and
it
can
lead
to
a
sustained
depolarization.
So
now
people
argued
is
that
useful
because
it
doesn't
make
the
cell
fire.
What
good
is
it?
We
have
proposed
the
following
that
the
proximal
synapse
is
defined,
the
classic
receptive
field
of
the
neuron,
but
the
distal
synapse
is
when
they
they
close.
These
dendritic
spikes.
They
put
the
cell
in
Ettore
depolarize.
What
we
call
a
predictive
state.
The cell is saying: I've recognized a pattern that typically precedes my becoming active, and I'm depolarized. The advantage of that, and this is our hypothesis, is that depolarized neurons fire sooner than non-depolarized neurons, and they inhibit other neurons nearby, which leads to a sparser activation. I'll walk through how that works in a moment. And a neuron can predict its activity in hundreds of different contexts, because it has thousands of synapses.
The next thing we need in order to understand this model is... oh, I'm sorry, here's a picture of how we model neurons, and I'll show this for everything we do. This is a sort of picture of how we do it in software: our neurons have arrays of dendritic branches, if you will, that act like coincidence detectors. As far as I know, we're the only people in the world who build sophisticated networks using model neurons with active dendrites; I don't see how people can feel like they're modeling the brain by using point neurons.
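The active-dendrite neuron described above can be sketched in a few lines. This is a minimal illustration of the idea, not Numenta's actual code; the class name, the segment layout, and the exact threshold of 10 are my own choices within the 8-to-20 range mentioned in the talk.

```python
SEGMENT_THRESHOLD = 10  # roughly 8-20 coincident synapses trigger a dendritic spike

class ActiveDendriteNeuron:
    """Toy neuron whose distal dendritic segments act as coincidence detectors."""

    def __init__(self, segments):
        # segments: each one is a collection of presynaptic cell ids
        # (the distal synapses grown on that dendritic branch)
        self.segments = [set(s) for s in segments]

    def predictive(self, active_cells):
        """True if any segment sees enough of its synapses active at once."""
        active = set(active_cells)
        # a dendritic (NMDA) spike depolarizes the soma -> predictive state
        return any(len(seg & active) >= SEGMENT_THRESHOLD for seg in self.segments)

# one segment that has learned a pattern of 15 presynaptic cells
neuron = ActiveDendriteNeuron([range(15)])
print(neuron.predictive(range(12)))  # True: 12 of 15 synapses coincide
print(neuron.predictive(range(5)))   # False: below threshold
```

A real neuron would carry many such segments, which is how it can predict its own activity in hundreds of different contexts.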
I'm going to talk now about sparse activations.
This is key to the whole theory as well. If you look at a population of cells, let's say 5,000 cells in one layer in one column, at any point in time there'll be some activation of those cells, and it's sparse. I'm going to use an example here: say we had 5,000 neurons and 2% were active at any point in time; at one moment 2% are active, at another moment a different 2% are active. So, just for grins, we have a hundred active cells, and there are a couple of observations we can make.
A
First
of
all,
how
many
different
things
can
we
represent
with
that
it's
virtually
unlimited.
It's
a
basically
5000
choose
100,
which
is
3
times
10
to
the
211,
so
almost
infinite.
From
our
point
of
view.
The
next
point
is:
if
you
randomly
chose
sparse
activations,
and
you
could
do
that
all
day,
how
much
would
they
overlap?
How
many
cells
which
they
have
in
common
and
it's
about
two,
but
what's
very
important-
it's
never
going
to
be
a
lot.
A
It's
very,
very
unlikely
that
if
you
pick
some
random
group
of
a
hundred
cells
that
there'd
be
a
lot
of
cells
in
comments,
you
can
do
this
all
day.
Long
and
then
go
do
forever,
and
you
can
pick
these
representations
and
they're
a
unique
now,
no
one's
going
to
take
advantage
of
that
in
the
following
ways.
First
of
all,
if
I
wanted
to
recognize
a
pattern,
Asst
and
the
set
of
active
cells
I
only
need
to
send
form
synapses
to
a
subset
of
the
active
cells.
A
If
I
form
subsets
to
maybe
eight
to
twenty
of
them,
this
neuron
will
fire.
When
that
pattern
is
there,
you
can
argue
like
yes,
but
there's
a
large
chance
that
this
thing
could
have
a
false,
positive
recognition.
It
turns
out
it's
not
likely
at
all,
it's
very,
very
unlikely.
You
can
do
the
math
on
it,
and
we've
done
that.
You
can
also
test
it
empirically.
This
cell
will
reliably
recognize
that
pattern
on
almost
all
conditions.
The
second
thing
now
this
is
the
piece
that
I
want
to
get
you
on
this
slide.
So if I invoke ten patterns, I have a little bit less than a thousand cells active, a little bit less than twenty percent of the cells, and what we can show, mathematically and empirically, is that this cell with its 15 synapses, or whatever, will still reliably recognize the one pattern it was designed to recognize. There's obviously a much greater risk of false positives, but mathematically and empirically we show that it is still very, very small. So what we decided is the following: all of our models work this way.
A
This
way
that,
with
the
way
the
cortex
represents
uncertainty
is
through
unions
of
patterns,
and
we
know
when
it's
uncertain,
do
you
see
higher
activity
levels
what's
going
on
there?
It's
a
union
of
patterns
and
when
activity
gets
sparser
when
the
when
the
cortex
is
figure
out.
The
answer
to
something
and
what's
amazing,
is
that
all
the
networks
I'm
about
to
tell
you
about
they
work
with
these
unions.
That
is,
you
can
process
unions.
At
the
same
time,
you
can
propagate
multiple
hypotheses
and
act
on
them
cut
all
to
every
connection.
A
That's
going
on
in
the
brain.
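The numbers in this section are easy to check. Below is a small sketch of my own, using only Python's standard library, that computes the 5,000-choose-100 capacity, the expected overlap of two random 2% patterns, and the false-positive probability for a cell sampling 15 synapses with a threshold of 10 while a union of ten patterns is active; the specific synapse count and threshold are illustrative assumptions consistent with the ranges given in the talk.

```python
from math import comb

N, A = 5000, 100          # cells in the layer, active cells (2% sparsity)

# Capacity: number of distinct sparse patterns, ~3 x 10^211
capacity = comb(N, A)

# Expected overlap of two independent random patterns: A*A/N = 2 cells
expected_overlap = A * A / N

# False-positive risk for a cell with S sampled synapses and threshold T,
# when a union of ten patterns (~1000 cells, 20%) is active: the chance a
# hypergeometric draw of S synapses hits at least T active cells.
S, T, union_active = 15, 10, 1000
p_fp = sum(comb(union_active, k) * comb(N - union_active, S - k)
           for k in range(T, S + 1)) / comb(N, S)

print(f"capacity ~ 10^{len(str(capacity)) - 1}")   # ~10^211
print(f"expected overlap = {expected_overlap}")    # 2.0
print(f"P(false positive) ~ {p_fp:.1e}")           # very small
```

Even under a union that lights up a fifth of the layer, the false-positive probability stays tiny, which is why unions can carry multiple hypotheses without confusing downstream detectors.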
Okay, we put this together in a single-layer model to learn sequences and make predictions. You can imagine this as layer 4; here I show that layer at time 0 and at time 1. I've arranged the cells in little mini-columns, where the cells in a mini-column share the same feed-forward receptive field property.
A
It's
not
important
to
have
many
columns,
it's
important
to
have
a
set
of
cells
that
have
the
same
FIFO
receptive
field
property
and
if
an
input
comes
along
later,
it's
going
to
activate
a
set
of
a
set
of
mini
columns
and
you'll
see
all
the
cells
in
those
mini
columns
become
active.
That's
what
whoo,
boolean
bezel
saw
4050
years
ago,
and
we
still
see
today
what's
going
to
happen
now,
if
you
have
a
predicted
input
and
I'm
gonna
represent
that,
but
there's
a
little
red
circles.
These
cells
are
in
the
new
polar
ice
state.
A
The
same
input
comes
along
those
cells
fire
sooner
and
they
inhibit
the
cloud
around
them
and
you
end
up
with
a
very
sparse
pattern.
That
pattern
represents
the
input
in
a
specific
context,
and
you
could
have
millions
of
these
and
then
that
activation
pattern
can
then
predict
the
next
activation
pattern,
and
you
can
basically
build
up
a
sequence
memory.
We have characterized this thoroughly, so at this point I'll just note: it's very high capacity; even a small network like this can learn hundreds of thousands of transitions, and it can learn any kind of high-order sequence. It's extremely robust to everything. The learning, importantly, is unsupervised, continuous, and local, so there's no global signal going on here, and it really is a biological network: we actually require two different types of inhibitory cells to make this work; it's very detailed in that regard. Just so you're comfortable: many people have implemented this, there are multiple open-source implementations, and it's actually being applied in commercial products today.
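The activation rule just described, where predicted cells fire first and suppress their neighbors while an unanticipated input makes whole mini-columns burst, can be sketched as follows. This is a toy illustration, not the published implementation; the function name and column sizes are hypothetical.

```python
# Toy sketch of the mini-column activation rule. Each mini-column shares
# one feed-forward receptive field. If any of its cells were depolarized
# (predictive), only those cells fire and (via inhibition) silence the
# rest; otherwise every cell in the mini-column bursts.

def activate(active_columns, predictive_cells, cells_per_column=32):
    """Return the set of (column, cell) pairs that fire this time step."""
    active = set()
    for col in active_columns:
        predicted = {(col, c) for c in range(cells_per_column)
                     if (col, c) in predictive_cells}
        if predicted:
            active |= predicted                                   # sparse: context known
        else:
            active |= {(col, c) for c in range(cells_per_column)}  # burst: unanticipated
    return active

# Column 0 was predicted (cell 2 depolarized); column 1 was not.
out = activate([0, 1], {(0, 2)}, cells_per_column=4)
print(sorted(out))  # [(0, 2), (1, 0), (1, 1), (1, 2), (1, 3)]
```

The bursting column is the "all cells active" response Hubel and Wiesel would have seen for a novel input; the single-cell column is the sparse, context-specific code.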
Okay, that was what went on here. So then we said: okay, how can we extend that model to learn predictions of sensorimotor sequences?
A
What
can
I
move?
That's
a
much
harder
problem,
so
let
me
just
so
I
think
we
all
understand
the
problem
here.
It's
like
this
is
a
static
object
and
in
when
I
touch
it.
The
changes
to
my
brain
occurred
because
I,
move
and
I
want
to
be
able
to
predict
that.
So
we
started
with
our
same
input
layer,
our
sequence
memory.
We
said
how
do
we
modify
this
to
incorporate
sensory
motor
predictions
and
we
said
okay,
this
simply
interests.
That
is,
we
need
to
add
a
second
context.
A
We
have
one
context,
which
is
the
cells
themselves.
We
need
a
second
context
which
would
be
some
kind
of
motor
related
context,
and
if
we
could
do
that,
then
maybe
we
can
get
the
system
to
do
what
we
wanted
to.
But
what
is
that
correct
motor
related
context?
We
for
quite
a
while-
and
the
answer
in
that
was
very
obvious
to
me,
but
it
wasn't
obvious
in
time
and
I'm
just
going
to
illustrate
it
with
just
my
coffee
cup
here.
A
Literally
I
was
doing
this
in
my
office
one
day,
I
say:
okay,
so
imagine
I'm
just
touching
this
cup
with
my
finger
and
I'm
about
to
move
my
finger
and
touch
in
a
new
location
and
I
can
predict
what
I'm
going
to
feel
I
can
predict
it
right
now,
I
know
I'm
going
to
feel
and
I
know
if
I've
touched
the
bottom
a
couple
feel
something
different:
there's
a
little
rough
edge
down
here
and
I
know.
If
I
move
my
hand
over
here,
I'm
going
to
feel
this
little
handle
here.
The brain needs to know this in the reference frame of the cup. It doesn't matter where the cup is relative to my body, and it doesn't matter where my finger is relative to my body; it's where my finger is relative to the cup. There's no way around this: if I'm going to make a prediction, I have to know where my finger is going to be on the cup after I move. And you can just do this yourself: you can imagine what you're going to feel when you do this.
A
Imagine
now
I'm
touching
the
cup
of
multiple
fingers
at
the
same
time,
my
hand,
all
parts
of
my
hand,
all
my
fingers
are
making
the
same
prediction
different
different
predictions,
but
they're
doing
it,
I'm
not
conscious
of
it
but
they're
doing
because
if
anything
changed,
I
would
notice
it.
There
was
a
little
rough
edge
here
or
something
like
that.
That's
not
normally
there.
So
this
idea
that
every
part
of
your
sensory,
your
somatosensory
area,
is
actually
predicting
what
it's
going
to
feel
requires.
Every
part
knows
where
it
is
relative
to
the
cup.
A
I'm
not
gonna,
work
through
it
today,
but
the
same
argument
can
be
made
for
vision
and
the
retina.
So
we
came
back
and
said:
okay,
we
need
an
object,
centric
location
signal
to
do
this,
and
now
I
could
say:
okay,
the
input
layer.
What
does
it
represent
now?
It
represents
a
feature
at
a
location
on
the
cup
and
every
time,
I
move
that
changes
and
if
I
pulled
those
together
in
a
separate
layer
cells
called
a
knob
beause.
It's
called
the
object
layer.
A
This
and
I
said
this
is
going
to
be
a
stable
representation
representing
the
cup
and
I'm
gonna,
associate
that
stable
representation
with
the
set
of
features
that
location
I
now
have
a
definition
for
a
cup
and
I
can
predict
what
I'm
going
to
feel.
If
I
have
that
object,
centric,
location
being
updated,
we
built
these
models.
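The two-layer idea just described can be sketched as a lookup from (location, feature) pairs to the objects they are consistent with. This is my own toy illustration; the object names, locations, and features are hypothetical stand-ins for learned sensory patterns.

```python
# Toy sketch: the input layer represents a feature at an object-centric
# location; the object layer associates a stable object with the set of
# (location, feature) pairs observed on it.

# Learned objects: name -> {(location, feature), ...}
objects = {
    "cup":  {((0, 0), "curved"), ((0, 5), "rim"), ((3, 2), "handle")},
    "bowl": {((0, 0), "curved"), ((0, 2), "rim")},
}

def candidates(location, feature):
    """Objects consistent with sensing `feature` at `location`."""
    return {name for name, pairs in objects.items()
            if (location, feature) in pairs}

print(sorted(candidates((0, 0), "curved")))   # ['bowl', 'cup']: still ambiguous
print(sorted(candidates((3, 2), "handle")))   # ['cup']: the handle disambiguates
```

Each new movement supplies a fresh (location, feature) pair, and intersecting the successive candidate sets is what narrows the union down to a single object.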
Here's the hand, the finger, and I'm about to touch this cup at some location. This is my input layer, and this is my output layer, and I have a location signal coming in telling me where I'm going to touch. I get some sensory feature when I touch, and what happens up here is that you form a union of all the possible objects this column knows about that are consistent with that location and feature, in some sense.
A
It
says,
like
there's
three
things
that
are
consistent
without
a
cup
of
key
and
a
ball
and
the
way
you're
going
to
do
inference
you're
going
to
keep
moving
your
finger
just
like
if
I
reach
my
hand
to
a
black
box,
I
touch
something
I,
don't
know
what
it
is
until
I
move
it
several
times.
What's
going
on
when
you're
doing
that,
as
you
move
to
a
new
location,
you
get
a
new
sensory
input,
and
now
that
is
inconsistent
with
one
of
these
things.
Do
you
start
narrowing
down
the
representation
up
here?
A
You
do
another
position.
Another
thing
you
get
another
location,
another
feature
bingo.
Now
you
know
what
it
is.
It's
the
coffee
cup.
We
realized
that,
of
course,
multiple
columns
are
being
input
at
the
same
time.
So
imagine
touching
this
with
three
fingers.
At
the
same
time,
each
one's
going
to
be
on
a
different
location
on
the
object
and
each
one's
gonna
have
a
different
century
feature,
but
they're
all
trying
to
model.
A
The
same
thing
is
that
we
realize,
if
you
had
Longreach
connections
in
this
output
layer,
you
can
form
an
association
memory
between
these
and
they
could
resolve
ambiguity
very
very
quickly.
So
an
example
of
that
in
this
guy.
Imagine that the hand touches the cup with three different fingers at once. These are the three different fingers; each one is at a different location with a different feature. Each one on its own is uncertain about what's going on; each one can't tell what the object is.
A
This
guy
says
well
could
be
a
cup
or
Bowl.
This
is
a
cup
or
a
can,
but
when
they
vote
across
columns,
they
quickly
settle
on
the
right
answer.
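The voting step can be illustrated as a simple set intersection: each column contributes its union of candidate objects, and the long-range connections keep only what all columns agree on. A minimal sketch of my own, with hypothetical object names:

```python
# Toy sketch of cross-column voting. Each column holds a union of
# candidate objects consistent with its own (location, feature) input;
# voting across the output layer keeps only the candidates that every
# column supports.

def vote(column_candidates):
    """Intersect the candidate sets of all columns."""
    return set.intersection(*map(set, column_candidates))

# Three fingers, each ambiguous on its own:
finger1 = {"cup", "bowl"}
finger2 = {"cup", "can"}
finger3 = {"cup", "bowl", "can"}
print(sorted(vote([finger1, finger2, finger3])))  # ['cup']
```

A single column would need several movements to reach the same answer; with three columns sensing simultaneously, one touch can be enough.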
There's an equivalent in vision. Imagine you're looking at the world through a straw and you're trying to recognize something: you'd have to move the straw around to see the different parts; you couldn't recognize it in just one glance. But if I have multiple patches of my retina, the entire retina in some sense, then all of them can vote at once and say: oh, I know what this is.
We think this is actually what's going on in layers 4 and 3 in the cortex. Okay, so that again was published last fall. Now I really want to move over to this part here and talk about layer 6a, and what I'm going to argue is that layer 6a is doing something like grid cells.
A
It
was
great
that
Syria
gave
that
very
detailed
call
talk
earlier,
because
I'm
going
to
talk
about
yourselves
in
a
very
broad
theoretical
framework
and
not
going
to
the
details,
I'm
going
to
point
out
a
few
things
about
him
and
I
just
want
you
to
trust
me
that
I
know
the
details,
I'm
just
not
going
to
talk
about
it
today.
So
how
does
that?
Let's
talk
about
how
location
is
represented
in
grid
cells?
Say there's a rat in a room; maybe it's a virtual environment, but let's say it's a rat in a room. What grid cells will do is encode different locations in this room, and what's important here is that these location representations, if I just looked at them as a bunch of neurons, are just some sparse code, and they're unique for each room. This is an important part of grid cell theory, at least in learned environments.
A
That
would
be
the
case,
and
so
these
two
rooms
here
are
similar
but
they're
different
in
the
rat
and
see
them
different
cuz.
They
have
some
sort
of
sensory
cues
in
them
which
tell
you
they're
different,
so
the
same
locations
in
this
room
will
be
represented
by
very
different
activations
in
this
room
and
same
with
this
room
here.
So
this
is
an
important
thing.
These
are
just
the
the
representation
vocation
in
grid
cells.
A
If
I
just
looked
at
the
population
of
cells
and
what's
active,
represents
these
locations,
they
unique
to
the
room
and
to
the
location,
room
I'm.
Also,
my
individual
good
cells
now
again
I'm
talking
about
the
ensemble
of
good
cells.
Key
thing
here
is
that
what
ties
these
points
together
is
movement.
If
you
look
at
the
actual
representation
of
the
grid
cell
population,
it
tells
you
nothing
about
dimensionality,
doesn't
say
x
and
y.
Nothing
like
that.
It's
just
some
random
vector.
It
looks
like,
but
what's
important
about
it.
A
If the rat moves from A to C, it'll say, okay, I have a new representation for C, and if it moves from A to B to C, it also gets to C. The clever trick that grid cells do — and I've sort of talked about this a bunch — is that once you've learned this, it applies to novel environments. You don't have to learn it again. I can be in a new environment and I can still navigate between these points, and the path integration is going to work.
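The path-integration property — different movement paths from A to the same endpoint yield the same location — can be sketched in a few lines. This is a deliberately trivial model of my own, with locations as plain 2-D coordinates rather than neural codes:

```python
# Toy sketch of path integration: the represented location is updated by
# each movement vector, so only the endpoint matters, not the route taken.
def integrate(start, movements):
    x, y = start
    for dx, dy in movements:
        x, y = x + dx, y + dy
    return (x, y)

A = (0, 0)
direct = integrate(A, [(2, 3)])           # A -> C in one movement
detour = integrate(A, [(2, 0), (0, 3)])   # A -> B -> C
print(direct == detour)  # True: both paths arrive at the same location
```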
A
One way to look at this is to ask: what's the definition of a room? The definition of a room is a set of locations — all of them, not just the ones I'm showing here — that are connected together by movement, by path integration. You can move between those points, and the system knows how to do that. Some of these locations have associated features, and that defines how you know what room you're in, but not all of them. You don't have to have features at every location; you just have to have some.
A
Some locations have associated features. Notice I pointed to a location w here that is in the space of the pen — it's not on the pen, but it's in the space of the pen — because if I arrive down here at y and I move my finger up here, I will sweep through w to get there. It doesn't mean that you have to be touching the thing; it's just a point that you can move between, and that's the way this works.
A
Each object has its own space — its own set of locations in space — and what you're trying to do is narrow down which location you're at. Once you can narrow down exactly which location you're at, then you know what cup, what object, you're on. This analogy can take you further. This explains primary and secondary sensory cortices, but what about higher-level concepts in the brain? Could this be happening there too?
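The "narrow down which object you're on" step can be sketched as set intersection over sensed (location, feature) pairs. The object models below are made-up examples of mine, not data from the talk:

```python
# Hedged sketch: each object model maps locations to features; every
# observation prunes the set of objects consistent with what was sensed.
objects = {
    "cup":  {(0, 0): "handle", (1, 0): "rim", (2, 0): "logo"},
    "bowl": {(0, 0): "rim",    (1, 0): "rim", (2, 0): "base"},
    "can":  {(0, 0): "handle", (1, 0): "tab", (2, 0): "logo"},
}

def recognize(observations):
    """Return the objects consistent with all (location, feature) pairs."""
    candidates = set(objects)
    for loc, feat in observations:
        candidates = {o for o in candidates if objects[o].get(loc) == feat}
    return candidates

one_touch = recognize([((0, 0), "handle")])                   # still ambiguous
two_touch = recognize([((0, 0), "handle"), ((1, 0), "rim")])  # resolved
print(sorted(one_touch), sorted(two_touch))  # ['can', 'cup'] ['cup']
```

Each movement-plus-sensation shrinks the candidate set, which is why a few touches of a coffee cup suffice to identify it.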
A
Well, if you study grid cells, you start realizing that the location representations are dimensionless — back to your question earlier. If I just look at the representation, it's dimensionless. What defines the dimensionality of the space is how the path integration works; that's what defines the dimensionality. So you could have a one-dimensional space or a two-dimensional space — it's all about path integration and movement. The second thing is that the movements themselves do not have to be physical. If I'm a cortical column and I'm getting some sort of efference copy, it doesn't have to be physical.
A
It could be a movement coming from some other part of the cortex, which would be sort of a movement in a different space, and the features themselves do not have to be sensory features — they can be the outputs of other columns. And so what we are proposing is that all knowledge in the brain is represented this way. Even abstract concepts are represented this way; they're represented in spaces.
A
So when you think about mathematics, your equivalent of a movement might be a transform you do on an equation. It's a procedure you operate with, and it moves the components to a new point in the space of the problem you're working on. It's a very vague idea, but I think it follows from Mountcastle's proposal, from the cortical anatomy, and from this basic idea: if this is happening in sensory areas, it's going to be happening in other areas of the brain as well. Question?
D
A
I didn't present it here — I'd rather talk about it later, because it's a bit complicated — but you remember I talked about the sequence memory earlier. That has to have time in it, and so does sensorimotor inference. Everything has to have time; everything has some sort of temporal pattern by which it flows. Think of it that way — like the tempo, or the rhythm, or the durations of individual notes in a melody. We have a theory about how that's done, and I haven't talked about it — it's very theoretical, but I believe it. Our current best guess is that it's the matrix cells in the thalamus, which project to layer 1; they have all the right attributes for this, and they look like they might be solving that problem. It's an important problem, but I didn't talk about it here. Okay.
B
A
You know, when I talk about hierarchy — let me come back to hierarchy. Okay: if I look through a bigger straw, I could see the man-in-the-moon face, right? But if I look through a really tiny straw, I might not be able to see the moon face. Let me come back to that in a second when I talk about hierarchy. All right, I think you're asking about compositionality, or hierarchical structure.
A
Let me talk about that. We came to realize that objects are always composed of other objects. This is going to get to the hierarchy. Here's a classic example: here is my coffee cup. On the coffee cup there is a Numenta logo. I've learned the Numenta logo someplace else, and now I know that it's part of my coffee cup. If it wasn't there, I'd say something's wrong.
A
If the logo looked different, I'd say something was wrong. I don't learn the logo again just because it's on the coffee cup; I have some concept of the logo, and I have to somehow assign it to the concept of the cup. It turns out that everything in the world is structured this way: every object is composed of sub-objects. I lied to you when I said that what's coming in are features on objects — we don't believe that's true.
A
We think everything in the world is structured as structure of structure — objects on objects — and if that's going to happen in the cortex, and it does, it has to happen in cortical columns. So what we think is going on here is exactly that. Remember there's a 6a and a 6b, these two parallel structures. We think 6b — it's another organizing principle, like the layers — is representing the space of the compositional object, and 6a is representing the space of an attended component of that object.
A
Lewis, who is here today, came up with another very clever way of representing the position of the logo on the coffee cup, using another grid-cell-like trick, and we think that's what's going on in layer 5. We think layer 5 is actually building compositional structure, and layers 3 and 4 are attending to substructure. Again, we've presented that mechanism in the poster. I just wanted to leave you with that, because this is a key thing that has to occur in the brain, and now we can start to see how.
A
Let me talk about hypotheses — a framework for understanding what's going on here. This has to happen, and once you've got it once, that's good enough, because you can keep drilling down. You can look at a sub-object: you can do the same thing on the logo. My logo is composed of lines and curves and things, so you have to do it once, but then you get the whole compositional structure.
B
A
It seems like you might get too many — is that your question, that you might get too many unions? Yeah, that's a very astute question, which I'd rather take offline, but: why doesn't this break down, why don't you form more unions than the system can handle? The answer is a bit complicated, but first of all, the neural tissue doesn't let you get too dense. It won't let you represent too many things at once.
A
There's a certain level of density of activation that you're going to get, so you're never going to have all the cells active at once, and the system will still work. It may take more steps, more movements, but it slowly builds out. It's incredible how even a very complex system settles. We don't understand all of it, but if you want, I'll talk to you about it later.
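The density point in the answer above can be illustrated with a toy calculation (all numbers here are arbitrary assumptions of mine): a union of sparse codes gets denser with each extra hypothesis, and a fixed population bounds how many hypotheses can coexist before the union saturates.

```python
# Toy illustration: union several disjoint sparse codes and watch density
# grow. A real network would cap this density, forcing ambiguity to be
# resolved by further movements rather than ever-larger unions.
N, K = 100, 5                                   # population size, code size
codes = [set(range(i * K, i * K + K)) for i in range(6)]  # 6 hypotheses
union = set().union(*codes)
density = len(union) / N
print(density)  # 0.3 -- six 5% codes already fill 30% of the population
```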
Okay, I want to show one more slide about hierarchy, and then I'll wrap up with this idea of how to think about the brain.
A
This is the way we think about hierarchy. Here is the classic view, if you will: you have some sensory organ, it projects to the primary sensory region, you extract some features, you project to the next region, you extract more complex features, and eventually you recognize objects. In a deep learning network this might be a hundred levels high.
A
We're challenging that. It's not to say that this isn't true, but you have to think about it a little bit differently. Now we're saying that a region actually contains a whole set of object models, and the next region up is another whole set of object models, and the next region up is yet another set of object models. So how should you think about that? Well, first of all, one of the things you should know is that input to any region always skips a couple of levels too.
A
So there's input from the LGN — from vision — to V1, to V2, and so on. It gets less as you go up, but what that allows is for the columns at different levels to work at different scales. So if I'm attending to something very, very small in my visual space, I'm going to argue that V1 is going to be required to do that.
A
But if it's larger, like a letter, and it gets larger, then V1 and V2 could do it. And if it gets larger still, V1 is no longer able to see it, because it takes up too much space — V1 is looking at this tiny little area and the thing is too big, like the moon face. So that's one thing to think about: they're working at different scales.
A
The other thing to think about is what happens when you have multiple sensory arrays. So now I have a vision array and a touch array, and I'm looking at this coffee cup and touching it at the same time. What's going to happen? You can have models being invoked in the somatosensory hierarchy and in the visual hierarchy at the same time, and they can collaborate across modalities because they're all modeling the same object.
A
The same argument I made before — about long-range connections between certain layers allowing the columns to vote — applies here too, and that's what you see in the cortex. You see connections that don't make any sense in the classic hierarchy: connections from a secondary region projecting to a secondary region in another modality, or to a primary region in another modality, going all over the place. That doesn't make any sense in the classic model, but it makes a lot of sense in this model, because these guys can all vote.
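A minimal sketch of the voting idea (my own toy version, not the talk's neural mechanism): each column carries its own candidate set for the object it is sensing, and long-range connections let the columns settle on the candidates they all share.

```python
# Hedged sketch of cross-modal voting: each column's hypotheses are a set,
# and "voting" is just the intersection, since every column is modeling
# the same object from its own sensor patch. Column names are made up.
column_hypotheses = {
    "v1_patch":  {"cup", "can", "vase"},   # a visual column's candidates
    "s1_finger": {"cup", "bowl"},          # a touch column's candidates
    "s1_thumb":  {"cup", "can"},           # another touch column
}

consensus = set.intersection(*column_hypotheses.values())
print(consensus)  # {'cup'} -- only the object all columns agree on survives
```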
C
A
What I'm arguing is that even columns in V1 — I'll talk about that in the next slide, I'll get to this — even columns in V1 are recognizing complex objects. You don't think of it that way, but there's this thing called border ownership cells, which came out of von der Heydt's lab. They found cells in V1 and V2 that have classic receptive fields, but they only respond under particular conditions.
A
They respond when that part of the visual field is on a specific point of an object — as if the cell knows, "oh, I'm on the tail of the tiger, that's when I'm supposed to respond" — and it doesn't respond in other places where you would expect it to. It seems to be tied to particular locations. That's a direct consequence of, and evidence for, this idea: even in V1 and V2 there's some sort of knowledge about what the entire object is, and the cells are unique to location.
A
That's exactly what we would predict. There's another really interesting set of things going on right now: very clever experiments with fMRI, where they've shown signatures of grid cells throughout the cortex — not all of the cortex, but lots of it. There are some clever experiments I won't go into. It's not proof, but it looks very, very convincing that grid-cell-like properties exist in many different regions of the cortex.
A
This stuff has been known for a long time: there is sensorimotor prediction even in low-level sensory areas like V1 and V2. There are cells that become active during a saccade, before the saccade is finished — cells that would normally become active once the saccade finished. After the saccade there's a fixation, and we'd expect the cell to become active then; instead, the cell becomes active during the saccade. It's hard to explain that.
A
But if you think about it this way, you say: well, that cell is being given its new location. It's told, given this motor behavior, here's what your new location will be, and it has a model of what it's looking at, so it says, okay, I can predict what I'm going to see. It's totally explainable. There's another interesting thing to think about.
A
I remember when I first learned about grid cells, I was blown away by them. I was struck by the fact that I had never realized I had cells in my head — in my brain — representing my location. And it's very easy to demonstrate: you can just walk around in the dark. Get up in the middle of the night and try to walk to your bathroom; you have a sense for where you are, with no cues, no visual input, nothing.
A
So I was just blown away that I hadn't realized this; it's all obvious now. The same thing occurred to me when I think about the world. You guys look like you're out there; you don't look like you're on my retina. How come I perceive you as out there? How come I perceive my hand at some distance and position relative to my body? I have a continuous concept of where everything is located relative to everything else, and I know how to navigate between everything and everything else.
A
This is what Crick was looking for when he wrote that essay: how do we perceive the world as being out there? It's now clear to me that the cortex is actually a location processor. Every single thing the cortex does is located in space, and that's how you can think about the brain. So now I think of the cortex as this location processor: it uses sensory input to figure out what locations it's in. It doesn't process sensory input per se.
A
It uses sensory input to be precise about where you are, and it models objects in space based on how they move within their own spaces. So that's the end of my argument. I was speculative — I did my best; that was my goal — and I want to thank the entire team at Numenta. This is a group of myself and these other five people, some of them here today — sort of the core research team. This is a collaborative effort; we do this all day long — well, most days — and write these things up, build models, and test them.
E
The difference is — it sounds like you are building your model partly out of the mathematical question of what it takes to represent space, what kinds of information flows are necessary — as I see it, your instincts: we have to have a signal from here, we have to have this, we have to combine them this way. But on the other hand, you're also working from what the cortex is, what people measured a few years ago.
A
No, I never think about it that way. I never think about what is mathematically expedient — physicists think that way; I don't. I look at the anatomy and physiology, say "that's what it is," and I don't ask whether it's the best way. I say that's the way it is, and we want to understand it. There may be much better ways of doing this — I don't know. Our goal, as I said up front, is not to look for biologically inspired theories.
A
Our goal is to explain the neuropil, and I'm going to end at that. Other people — physicists, perhaps yourself and others — can ask, oh, what's the better way of doing this? We don't start with the math; we actually apply the mathematics later. I didn't show any of the equations here — we can talk about that stuff — but to me it's all about starting with the anatomy and physiology, making a sort of deduction about what must be happening, and then trying to figure those things out.
D
A
No, I can't answer that question. For example — I didn't mention this in the talk, but it's somewhat related — it's been noted that there are, of course, the "what" and "where" pathways in vision, the dorsal and ventral streams, but they've also been seen in audition and somatosensation as well. So the argument has been made that this is a constant theme, and this algorithm — this cortical column — has to solve both. The way I think about that is what I was describing just now.
A
If you damage the ventral pathway, you can't recognize objects, but you can still reach for them — and be surprised that you can reach for them. So I think the same algorithm will work in both cases; it's just different sorts of spaces they're working in. We haven't worked that out in any detail, so I can't answer the question you asked me — I'm sorry about that. Yeah.
C
A
We don't think along those lines of deep learning. I didn't talk about the learning rules in our neurons — we rely on synaptogenesis, forming synapses, and other things going on. All the learning rules are very deeply biologically inspired, but we don't optimize some sort of energy function; that's not what's going on. There are no backpropagating error signals; these are all local learning rules.
A
Everything we do is local learning rules, and so essentially, if you want to ask how I learn something new, it's pretty simple — in a very simple, plastic way. If I just say, I don't know where I am, I can start assigning myself a brand-new location: pick a location — there's some new thing, I don't know what it is — pick a location, and then start building a map around that location. I can simultaneously be trying to recognize it.
A
Two eyes — two things, two eyes, and then color, right. The cortical column doesn't know what its inputs represent. It has no idea. Axons look like axons; sodium spikes look like sodium spikes — that's all there is to it. And somehow you're asking, well, where does color come from? There's been some very good work on this, and I'm trying to remember the guy's name — he wrote the book: why does the bell not sound like red, or something like that? There we go.
F
A
I'm not sure what your question is. We understand that — when I talked about sequences, I was talking about, you know, ordered sequences. Sensorimotor sequences are not ordered; they can go in any order. But the temporal structure is what's essential there, and we view it as just trying to learn that temporal structure: what follows what follows what, and how you can do that in a very robust way. I didn't go into this — I have a whole talk I can do about the sequence memory.