Description
Jeff Hawkins reviews the new paper "Neuronal vector coding in spatial cognition" by Andrej Bicanski and Neil Burgess. The paper reviews the many types of cells involved in spatial navigation and memory. Jeff then ties the paper to The Thousand Brains Theory of Intelligence, using it as a launch point for discussion on how the neocortex makes transformations of reference frames.
Neuronal vector coding in spatial cognition, Andrej Bicanski and Neil Burgess
https://www.nature.com/articles/s41583-020-0336-9
B
Okay, so I'm going to do a brief review of this paper that Subutai sent out, a very recent paper (2020) by Andrej Bicanski and Neil Burgess, and then I'm going to talk about how it got me thinking about something which I wrote about, you know, last week. So it's sort of two topics that are intersecting briefly, and I'm going to try to keep this short, given...
B
I know that some of you have a short time today, including me. So, this is a review paper. It's very well written, although I had to read it a couple of times to really absorb it. That's not the way it's written; it's just got a lot of dense stuff in it, a lot of terminology, both on the anatomy and on the different cell types. So I just had to get my mind around it a bit.
B
So it's essentially saying: a lot has been written about grid cells and place cells, and these are cells that fire when an animal is at some location in an environment (we're talking rats here, almost completely). But vector coding cells are cells that respond to an object or something at some vector from the animal, at some distance and orientation from the animal.

So a border vector cell, or say a boundary vector cell, is a cell that fires when there's a boundary at, you know, some position and distance from the animal. So it's a distance and a direction, and that's why it's a vector. This is a review of all the different vector coding cells, saying: we're not going to talk about grid cells and place cells; we're going to look just at vector coding cells and do a review of those. So that's what this paper is about. Any questions about that basic intent?
B
Okay, so they go through and they basically list a whole bunch of all the different types of vector cells that people have found. So there are boundary vector cells, there are border cells and landmark vector cells, there are object vector cells; the trace vector cell is a little different. And they go through and describe what these different cells are and how they work. So this figure here, figure 1, is essentially talking about the nature of vector cells.
B
So if a person was at some orientation, there would be cells that... they make this little rosette pattern, saying: if this is you, the triangle, and you're in some room (the rat's in the room), there'll be cells that reflect that. There should be some cells that fire when there's an object or a boundary at some distance and orientation. And most of these cells are allocentric, so that means this is a boundary in the room; it's not relative to you.
B
And this figure is just sort of illustrating that property. Then this table is a list of all these things. That's just a...
C
B
That's right, it's independent of direction. But they've... but one of the... oh my gosh, also there's this huge wind that's picked up outside here; it's just like a tornado. Sorry about that. Yeah, so these are allocentric, but one of the next things they point out, and I didn't realize this:
B
They argue that for all the allocentric cells there are equivalent egocentric cells. So if there's a boundary vector cell, normally that's an allocentric cell, but now there are egocentric boundary vector cells; and if there's a border vector cell, there are egocentric border vector cells; and if there are object vector cells, there are egocentric object vector cells. So they argue that there are equivalents, both ego and allo, for every one of these things.
B
Now let me just, at this point, interject something. These are all found in different parts of the hippocampus, the entorhinal cortex, the subiculum, the retrosplenial cortex; these different cell types are kind of scattered all over the rat's brain. I don't think all of these make sense in the neocortex; I'm not thinking that the neocortex has equivalents of all of these. This could very well be a rat thing: animals have been around for a long, long time and they've evolved all kinds of special things for navigation.
B
So when we look at these things, we don't say: oh, the neocortex has to have an equivalent of a boundary cell or this kind of stuff. I don't think so; they might or might not. But I don't view this as a model for how the cortex works. I just use this for general principles about how rats and mammals navigate, and some of these principles will apply to the neocortex, but not all of them. So there's a point about that.
B
So in this table here (I don't know how I lost all my highlights, but I did), you see they say there's a boundary vector cell, and then they say there's an egocentric boundary vector cell right down here. And where are they? Well, the boundary vector cells, the allo ones, are in the subiculum, but down here the egocentric ones are in the retrosplenial cortex and, I don't know, the lateral entorhinal cortex and so on. So this is the allocentric.
B
This is the egocentric. And so they sort of characterize these things, and then here are sort of the visual explanations of what those different cell types do. I don't think it's really important right here; I'll just give you a flavor for it, because these words are very confusing. There are border cells and boundary cells. Well, a border cell seems to be... it's like if the rat can detect it with its whiskers, so it has to be very close.
B
In essence, it's not really a vector. So the rat detects there's a wall on its right: that's a border cell. A boundary vector cell is maybe a boundary that's further away, which the rat can't feel with its whiskers. So the boundary vector cell is really a vector; it has a distance component to it, where the border cell is a separate cell and doesn't have a distance component. Then there are object vector cells, which are essentially... it's not a border, but more of a specific feature or thing out there, and so on.
A
I'll throw in that often people do think of boundary vector cells as being just a more generalized border cell, and a border cell as a specific kind of boundary vector cell.
B
But I think they argue the opposite; they argue differently here. Oh, do they? Okay, yeah. Well, maybe it's a type, but they're physically located differently. So if you look at this chart here, they say a boundary vector cell is in the subiculum, but the border cells are in the entorhinal cortex. It's not like they're mixed together. They make the argument that they're actually separate cells; it's not just that one's really close. That's what they argue. Whether it's true, I don't think it really matters that much.
B
But so then they go through the different types of allocentric border cells, or egocentric cells, and they keep going down here; there's a lot of text here. Then they go through this, which is, I believe, the egocentric equivalent. So this figure is equivalent to the previous figure; it's just talking about what the egocentric cells do. And again, there are a lot of details here, and I'm not going to try to drive through them.
B
I did read the paper very carefully, twice. But the basic idea is: you've got a bunch of these cells; some are egocentric, and then there are equivalents that are allocentric. And there are a lot of little details in here that were very interesting to read about. But I'm going to skip this one; this had to do with hemispheric effects, you know, when they damage one side of the brain or disable it from the other, which occurs, and it wasn't that important.
B
Then this one here: this is the figure I sent out last week, suggesting you might want to look at it. This one is very interesting, and it generated a lot of thinking on my part. So basically, what they're saying is... oh, by the way, previously the two authors who are writing this review paper, Bicanski and Burgess, have proposed in earlier papers a network that would convert between an allocentric and an egocentric version of a cell, and back again, and so on.
B
On the left here you see... these are egocentric border cells or boundary cells; excuse me, egocentric boundary cells. So these are like: hey, if there's a boundary right in front of me, this cell fires; if there's a boundary to my right, this cell fires. It doesn't matter which way I'm facing; that's always the case. It's egocentric. And down here are the equivalent allocentric boundary vector cells, and so there's a cell that fires when there's a boundary at some distance away from me.
B
But this cell says: hey, there's a boundary at some distance at the north end of this room; it doesn't matter which way I'm facing. And so they say: here's how you can get from this representation, the ego, to the allo. Here's a little sort of schematic diagram of how you could do this, and it's kind of like a lookup table in some sense. And they say, well:
B
The thing you need to know is the head direction. And if you know the head direction, then you can convert back and forth between these two, which is pretty basic. You could say: okay, imagine I'm representing some space, and now, knowing the head direction, which is an allocentric signal (meaning I am facing in a particular direction in the room, relative to the world, to this particular room),
B
then I can use that to say how much everything has been rotated. So you can imagine a two-dimensional world, and the head direction cells tell you how much the world is rotated, and so now I can rotate all the different vector elements. It's pretty straightforward.
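[Note: to make the conversion concrete, here is a minimal sketch of that rotation, assuming a 2-D world with heading measured counter-clockwise from an arbitrary "north". The function names and conventions are illustrative, not the paper's.]

```python
import numpy as np

def ego_to_allo(ego_vec, head_dir_rad):
    # Rotate an egocentric 2-D vector (y-axis = straight ahead) into
    # allocentric room coordinates, given the allocentric head direction.
    c, s = np.cos(head_dir_rad), np.sin(head_dir_rad)
    rot = np.array([[c, -s],
                    [s,  c]])
    return rot @ np.asarray(ego_vec)

def allo_to_ego(allo_vec, head_dir_rad):
    # The inverse transform is just the rotation by the negative angle.
    return ego_to_allo(allo_vec, -head_dir_rad)

# A boundary 1 m straight ahead, with the head turned 90 degrees
# counter-clockwise from north, lies 90 degrees counter-clockwise of
# north in room coordinates: approximately [-1, 0].
print(ego_to_allo([0.0, 1.0], np.pi / 2))
```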
B
So, as I pointed out, this really got me thinking about reference frame conversions. I had a little private conversation with Subutai last week, on Blackboard most likely, whatever; we talked about this a bit, and it made me realize, you know: we know that in the brain, in the neocortex, we have to do these conversions between reference frames. We have to go from, like, a retinal frame to an object frame. Like our Thousand Brains
B
Theory says: you know, you've got input coming from the eye, and yet we build allocentric models even in V1. And so it tells you that you have to do a reference frame conversion, a reference frame transform. I never really put my finger on it before, I think, but now it's clear to me that this reference frame transform has to occur in every single cortical column. It's a basic function of a cortical column. It's not like: oh, we're going to use the 'where' pathway and the 'what' pathway or something like that.
B
No, every column, I believe now, is getting an input and it's going to convert it into another reference frame. If we think about, like, V1 or V2, it'll be in an egocentric reference frame relative to the retina (maybe retina-centric), but in other parts of the cortex it can be whatever reference frame is coming out of some other part of the cortex.
B
So it's general purpose: I've got an incoming reference frame and I'm going to convert it to another reference frame that I'm going to build the model from. Even in the 'where' pathway you might have, like: I need to go from a retina-centric reference frame to maybe a head-centered or hand-centered reference frame.
B
So I now believe that every single column is doing a reference frame transformation, and it's really hard to imagine how to do that. This figure was a big clue to me: like, oh, what's going on here? This seems to work. What about this? How can we make this work? So I'm just going to share with you some of my observations. I haven't reached a conclusion about this yet, but I feel like I can come up with a general purpose solution here.
B
The first thing... and interrupt me if anyone has a question; let me know, I'll pause a second.
C
B
You know, that was an interesting question; I wondered about that too. First of all, there are head direction cells throughout the rat's brain, all over the place, and they're also in the thalamus. I didn't know that: in some of the intralaminar nuclei, you know, the ones that are not the relay cells, they found cells in there that are head directional. And I'm not sure why they made that distinction right here.
B
I think it might be because, in this particular example (I'm guessing here), notice they're saying the egocentric boundary cells are in the parietal cortex, and for the boundary vector cells here they don't actually say where they are, but they may. And then they're saying these transition cells in between are in the retrosplenial cortex up here. And so maybe they're saying, in this particular circuit:
B
They know that the thalamic cells project to the retrosplenial cortex. So to answer your question: yes, there are head direction cells in the thalamus, at least in the rat, and there are lots of other places too. But in this particular circuit they might be saying: well, we think these intermediate cells are in the retrosplenial cortex, and we know that there's a thalamic projection to them, so we're going to say those are the thalamic head direction cells.
B
Those are the kinds of details that I'm trying not to get too hung up on, because obviously there are head direction cells all over the place, in all kinds of circuits all over the place, but in this particular one... does that answer it? Okay.
B
Okay, so what really struck me about this: I wanted to understand deeply why this works, and I can follow their little example here. They're saying: oh, these cells project to these, and these are what they call the transformation circuits. These are cells that sort of respond to both; they're like... this would be a cell.
B
All these squares are cells, and this is a cell that responds to both. The idea is, the star here, which is, I mean: I have an egocentric cell here, and it also responds to this one down here, which means I have an allocentric border cell. In this case they're shown in the same area; that doesn't matter. And so this would be a cell that's half allocentric and half egocentric.
B
It occurred to me that these cells might be... if this proposal is right, these cells are like the conjunctive cells that we see a lot. So anyway, they have this: basically, you have one vector here, you have another vector here, and then you have this matrix that they show, and it has to be fully filled in. So if it's four and four, then you have to have 16 of these intermediate cells.
B
If this were 10 and 10, you'd have to have 100 of these intermediate cells. But then the trick to making this work is, they say: oh yeah, we're going to bring in other feedback, which is the head direction itself, and that tells me which of these cells to select, something like that. So it's an intersection of these three things that are going on. And I can follow this; I could walk through the details. But I really was struggling (I still am) to get a very deep understanding of this.
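[Note: here is a minimal sketch of that lookup-table idea as described above, assuming bearings discretized into N bins. The N-by-N sheet of conjunctive cells and the gating rule are a paraphrase of the figure, not the authors' actual equations.]

```python
import numpy as np

N = 4  # discretized bearings; an N x N sheet gives the 16 cells of the 4-and-4 example

def ego_to_allo_bearings(ego_cells, head_dir_idx):
    # Conjunctive cell (i, j) responds to egocentric bearing i under
    # allocentric heading j; the head direction input gates one column.
    conjunctive = np.zeros((N, N))
    conjunctive[:, head_dir_idx] = ego_cells
    allo = np.zeros(N)
    for i in range(N):
        for j in range(N):
            # Egocentric bearing i seen under heading j is allocentric
            # bearing (i + j) mod N: a discrete rotation.
            allo[(i + j) % N] += conjunctive[i, j]
    return allo

ego = np.array([1.0, 0.0, 0.0, 0.0])              # boundary straight ahead
print(ego_to_allo_bearings(ego, head_dir_idx=1))  # -> [0, 1, 0, 0]
```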
B
It's like: yeah, I can see why it works, but I don't have an intuitive sense of it, though I'm getting it slowly. One thing I noticed right away is that the thalamic head direction cells, or these head direction cells in general, are already allocentric. So it's like: okay, yeah, I can solve this transform from egocentric to allocentric if I already know something about what the transform is, if I already have another cell that tells me what the transform is; then I can use that other cell. And so it's like, okay...
B
Well, how do the head direction cells know the right... you know, how do they get their answer? Because they already presume they know the answer. Once you have a head direction cell, it says: yeah, I know how everything has been rotated here, because I'm not using an egocentric direction.
B
So they don't address that, and in much of what I read about head direction cells, people sort of say: yeah, it's got to be some sort of sensory input or something else; we don't really know. There are some theories about it. But I thought, in general, that's also a problem; that's part of the problem. We can't just assume that we know the head direction itself, because that's the thing we're trying to solve. It's like saying: oh yeah, I know the answer to it.
B
How did you know what's the right transform to do? In general you wouldn't know that; the system has to figure out the head direction cells. So it occurred to me that this is just another vector, and it's another vector that has to go from an egocentric to an allocentric representation: from, you know, an egocentric head direction, if you will, to an allocentric direction. That's just part of the problem. And so then I realized: well, you could probably substitute any vector here that's allocentric.
B
It doesn't really matter; it doesn't have to be the head direction. I could pick another one. I could take border cells, or I could take, you know, object vector cells or something. And if I knew the right answer for one vector, I could use it to answer the question for another vector. It's just like: oh yeah, if somebody knows the answer, then they can tell everyone else what to do.
D
B
A
B
Okay, so then... and I had a whole bunch of questions about this: could this circuit... First, the next thing I realized is that you're going to be doing a whole bunch of transformations at once. This is just for one cell type, right, but there are different vector cell types and so on. And then I realized I could be doing the same transformation for movement vectors. Recently I've been talking about movement vectors in the cortex, like how we might represent movement vectors in the columns.
B
So I said: well, that's good, because I know I have to do that. I don't know if the cortex has to do boundary vector cells, but I know it's going to do motion vectors; I'm pretty sure of that. So I said: well, that's good, now I have at least the beginning of a mechanism here. But I still have this mystery of how you solve this. And then it occurred to me...
B
Okay, I'm going to throw out a couple of random thoughts, and then I'm going to leave it, because I haven't really worked it all out yet. One random thought is: you're going to be doing this circuitry a lot; you're going to do it in parallel. And it seems that any vector could help any other vector: anybody who's learning something about the transition can help anybody else learning about the transition. So you could imagine, here in these head direction cells:
B
I could make this a bi-directional arrow, and these are bi-directional arrows, and I can make these bi-directional arrows too. So I can have a whole bunch of allocentric vectors down here and a whole bunch of egocentric vectors up here, and they're all sort of coming together, in some sense, in this way, and the system will resolve the answer. It doesn't tell me how I get started, but if I knew the answer on any one,
B
then I would be able to help the other one. It also occurred to me that, in a situation like that, I don't have to have every single variation here. It's probably perfectly okay to have a bunch of conjunctive cells, or what they call these transformation circuits, without having every one; I could have some subset of them and it would still work.
B
That's just an intuition that says: yeah, I don't need to specify everything. It's just like our temporal memory doesn't have to make a connection to everything; you make a connection to a subset of the previously active cells, and that's good enough. Something along those lines here. So then this leads to this general idea that there might be a very simple way of going between... you have a set,
B
you know, a set of vectors in one space and a set of equivalent vectors in another space, and they project to a set of cells in between; you don't have to have full connectivity, and it doesn't have to be precisely defined. I haven't proven this yet, but if I start making progress at all, then the system will settle. Now, how do I make progress at all?
B
The question is: if I know nothing about the world, I would have no way of transforming from an egocentric to an allocentric representation. The authors address that situation here, as do other authors in other papers. They say: basically, you have to have some sort of model of the world to give you some beliefs about what's possible on the allocentric side. So, this next figure here: I don't think this figure is going to pertain exactly to the neocortex.
B
But the basic idea is here: they're showing sensory input going to an egocentric vector, then through that transformation matrix, which they were saying was in the retrosplenial cortex (that was the transformation circuit; that's the N-squared neurons), and here they're making head direction cells a separate input.
B
I'm going to eliminate that and just say you have a bunch of these guys. And then over here you have the allocentric vector cells, but what's different over here is, I think, this 'identity' thing here: they basically say, for this to work, you have to have some prior learned knowledge about what to expect, what objects or what environment you're expecting to see. And so there's a learned representation over here, which can feed back into this, saying:
B
Well, you know, it could be this environment or that environment, this object or that object. So you give me some partial information, and I will feed back what I think is possible, and this guy says: yeah, okay. So they meet in the middle. This is purely sensory, and this is the purely learned model of the world, and then you can, you know, go back and forth between them.
B
So there's nothing new in this basic idea, but just seeing this figure really got me going about this, saying: all right, we can figure this out. I think I can come up with something simple. You know, in the cortex I'll be able to map out: okay, layer 4 may be an egocentric feature, and maybe, you know, lower layer 3 is an egocentric motion vector; I'm just making this up,
B
for example. And maybe layer 5a is the allocentric motion vector, and maybe parts of layer 3 are going to be these transformation circuits. I really think this can be done; I think this is something we can map onto a cortical column. So I'm very anxious to work on this further and try to figure out a very deep understanding of how you make these things, and what number of these sort of in-between cells, these transformation cells, you might find
B
that you need to get this to work. So I'm pretty excited about this. I think I'm going to get an answer to this transformation, and we'll be able to map it onto the column circuitry, using this basic figure as a launching point. So that's all I wanted to say about it at this point, and I won't work on it again until at least the beginning or middle of next week, because I have to be working on the book.
A
One just high-level question that I'm curious to hear your thoughts on: we keep kind of going back and forth on how much an actual physical space is being represented, how much these grid cells and these boundary vector cells are representing actual vectors that align with physical space, and how much it's something vaguely different, where it might not be a strict spatial thing. I feel like we keep going back and...
A
...and that it might be kind of warped, it might be strange. I've used the example before... oh yeah, the playful example I used is: you can imagine, in a soccer player's head, when he or she is running on the field, his or her continuous attractor bump going in a straight line might align with them running, like, a curve toward the goal.
B
Yeah, yeah, so, you know, we're not certain about that; you're right. I proposed, when I was talking about how you could have a spatial-pooler-like mechanism looking at flow bits, that you might end up with trajectories that are spiraling or curving or something like that.
B
I've also gone back and forth between: each mini-column is its own vector, and therefore you have a lot of them, or maybe there are only like 10, and then the slabs are all variations of that. So that's another thing we've been wobbling about; I don't know the answer to that yet. But I think what's really exciting to me about this...
B
You know, linear in space, or maybe they don't have to be. Where I'm coming from, if you're following me, where I'm currently thinking on this particular issue you asked about, is that even if a motion vector is a curve, it's going to be a smooth curve. It won't be a spiral; well, it's not going to be a wonky curve, like, you know, go straight and turn and go. It has to be something you can path integrate easily.
B
Therefore, if it's a curve, it's going to be just an arc, or if it's a straight line, it's a straight line. It's not going to be a straight line and then a curve, you know what I'm saying. So I'm currently working under that assumption. And then there's the other issue, too, about the dimensionality of space.
B
I don't know, and it's not clear to me. Like here, there's a head direction cell in this paper, and that's clearly what you need to convert in 2-D space, right? All you can do in a 2-D space is rotate it, so that's simple. What happens in an n-dimensional space?
B
I don't know; I'm confused by that, right? So I'm just pointing out that I'm still confused, actually, because I'm still uncertain about many of these issues. But I think just thinking about this now gives me a new way, a new major piece of data, that says: okay, every cortical column is going to be doing this. If I understand it deeply, then I can map that onto the cortical anatomy, and maybe that will answer the other question.
D
B
More constraints: as I wrote in the book, you know, the more constraints you have, at first it makes it more confusing, but then eventually, when the answer presents itself, all the constraints help solve it. Subutai once made an interesting point; I've been working at this, and I'm recycling it here.
B
I've been working up to this point in time assuming that in a cortical column there were two representations of space, like there'd be two sets of grid cells, one for one space and one for the transformed space. Subutai made the point that that's not really necessary, and I think he's right. Meaning, in a cortical column...
B
I need an egocentric representation of motion and I need an egocentric representation of, like, sensed features, but I don't necessarily have to have a full space developed for that. I can just convert those vectors. If I can just convert the motion vectors into an allocentric or a new reference frame, then all the grid cells and all the place cell equivalents would be in the new reference frame.
B
So really, if I can just do this conversion on the input to the column, like a column starts off with some cells representing motion in one space and other cells representing motion in another space, then once I've done that, I can do all the modeling.
B
In the second phase I don't have to have place cells and grid cells in both spaces. That really freed up a lot of thinking on my part, because now I don't have to ask where the two sets of grid cell equivalents are. I can say I only need one: I need the transformed grid cells and the transformed place cell equivalents. So that makes life simpler too. Anyway, okay, any other questions?
B
I didn't know if I was frozen; you're all very quiet. Okay, I'm going to stop sharing my screen. I think we're done, unless there's another topic.
C
I can briefly mention just this stuff about dendrites that I'm... yeah, yeah. I talked about this a little bit while you were gone at the research meeting, but I have one slide that I showed then that I could show now, just to get your kind of reaction to it. It's...
C
...powerful, yeah. So the reason I brought it up is that, in thinking about how we can translate dendrites into kind of machine learning networks, I was playing around with a different way of explaining it. The way we typically explain it (this is just an example here) is that, you know, we have these mini-columns, and when you get an input, you get some set of activity here; this causes some prediction in a sparse set of cells in the mini-columns.
C
Then, when you get input, you know, these cells win out, right, because they're depolarized; they win out and become the active cells within the mini-column. And the way this kind of prediction mechanism happens is that each of these cells gets depolarized because it is connected to some set of cells that were active just previously. Yes, right: this is the kind of language we've used to describe it.
C
What I think is somewhat missing (and I mean, I think we all know this, but we often don't describe it this way) is that, once the network is trained, this is actually a very densely, recurrently connected network, right? So if you think about a temporal memory that's learned, let's say, a thousand sequences, and each of them has, you know, 10 transitions, and you have 40 cells on at a time, you actually have something like 40,000 connections in this network.
B
D
C
B
C
B
C
B
If you look at all the cells together, it's dense. But if you look at it in terms of the dendritic segments, it's not dense.
C
So that's my second point: you have this, you know, ton of connections here, but you have very, very sparse subnetworks specific to each task or sequence. So if you have a sequence with 10 transitions, it's going to be a really sparse subnetwork within this much bigger, more connected network, and it's going to be specific to each task. And what the dendrites are basically doing is choosing which subnetwork to instantiate at any given point, which part of which subnetwork, given the particular context, right.
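[Note: a minimal sketch of that selection step, with made-up cell counts, segment sizes, and threshold; this illustrates segments gating cells, and is not Numenta's actual temporal memory code.]

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_segments_per_cell, threshold = 1024, 4, 8

# Each dendritic segment stores a small set of presynaptic cells (one
# edge set of a potential subnetwork), normally learned from cells that
# were active just previously; here they are random for illustration.
segments = [[rng.choice(n_cells, size=16, replace=False)
             for _ in range(n_segments_per_cell)]
            for _ in range(n_cells)]

def predicted_cells(prev_active):
    # A cell is depolarized (predicted) if any one of its segments sees
    # enough previously active cells: the segment, not the whole cell,
    # decides which sparse subnetwork is 'on' in this context.
    prev = set(prev_active)
    return [cell for cell, segs in enumerate(segments)
            if any(sum(s in prev for s in seg) >= threshold for seg in segs)]

prev_active = rng.choice(n_cells, size=40, replace=False)  # ~4% sparse input
# With random (untrained) segments almost nothing is predicted; after
# learning, only the subnetwork matching this context would light up.
print(len(predicted_cells(prev_active)))
```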
C
So, even though you may have, you know, thousands and thousands of connections, with the dendrites, for any particular input it's going to choose 20 of them to become active. It's kind of a different way of saying the same thing, right? Yeah, okay: the dendrites, by depolarizing the cells, are choosing which cells are going to win, and so they're helping to choose which subnetworks are winning.
C
Yeah, and so there's like a whole bunch of these really sparse subnetworks, and they're being, you know, instantaneously switched on and off depending on the context. And then, you know, SDRs avoid significant overlap, and the weights are binary and the permanences choose which weights to make active via learning. So the permanences are switching the synapses on and off via learning: you know, which synapses are part of the subnetwork.
B
D
B
It's not inherent to the way the network works; it's inherent to the way we want the network to learn. There's a big difference there, but...
C
Not in the final network, yeah, yeah. Okay, so this is a different way of explaining it, and the reason I chose this way is that I then described a bunch of papers which are showing some properties in deep learning networks that we might be able to leverage. So one really interesting one was this paper called supermasks, where they take a large network and then they instantiate these subnetworks; they basically have binary weights in here, and they're...
C
This is like a typical three-layer feed-forward network, okay, and these thick lines show which subset of the network they're instantiating at this point in time.
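[Note: a minimal sketch of the supermask idea as described here, assuming a fixed dense weight matrix and a per-task binary mask that switches individual weights on or off; shapes, sparsity, and names are illustrative.]

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64))        # fixed (e.g. random) dense weights

# One binary mask per task: it instantiates a sparse subnetwork by
# switching weights on or off without changing their values.
mask_task_a = rng.random(W.shape) < 0.05  # keep ~5% of the weights

def forward(x, mask):
    # Masked linear layer followed by a ReLU nonlinearity.
    return np.maximum((W * mask).T @ x, 0.0)

x = rng.standard_normal(128)
y = forward(x, mask_task_a)               # activity of the task-A subnetwork
```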
C
So imagine, like, a huge list of zeros and ones, and here there'd be one, two, three, four, five, six ones.
C
They do it as a function of training, so there's a whole training process here where they figure it out; basically, they add and drop these things until they figure it out.
C
They go through them serially. Well, they have a couple of different methods. You can do a linear sum of all of these different networks, and then they try to find the subnetwork, the subset, that gives the most confident response at the output. So they have a few different ways of doing it. I...
D
C
B
It seems to me this is a bit of a stretch from the way our networks work; I'm not sure where you're going with this. But in some sense, you know, the way the temporal memory works, this stuff just gets automatically resolved. There's no thought about masks, or, you know, which subset, or putting a name on it, nothing like that. It's like: the data comes in and then it settles on the right answer.
D
B
C
I use that phrasing mostly as a way to introduce the discussion of these papers, because there were a couple of these papers I went through where, basically, the way they did continuous learning is they would have a bunch of these subnetworks that they would switch on and off in different ways; there are different schemes for doing it.
B
C
B
I think that's great; I mean, I don't see how anyone could object to that. It seems like a good observation. But, you know, again, you want to be careful that the reader doesn't just say: oh yeah, I know.
C
It's not the same, and I'm not suggesting it's the same, but it's analogous. What's interesting is that, in a deep learning context, by doing this they were able to have binary weights and show really good continuous learning results. Oh, that's interesting. So...
B
But anyway, okay, that's helpful for me to know what you're talking about. I always think there's a risk, when we make these connections, that people on the machine learning side of the world will automatically assume they know what you're talking about, and then, you know, it might be hard to get the message across about how this thing actually works.
B
So you just have to be careful; you have to address that all the time. We run into the same thing when we're dealing with sparsity, right? The way sparsity is being implemented in traditional neural networks and the way it's done in the brain are very similar, but not exactly the same, and so, you know, we have to dance around all the different parameters and talk about what we mean and what they mean. But I think it's interesting; I think it's good.
B
So I think what you're saying is, you can say: well, here's the math, we're doing continuous learning, it's similar to supermasks, but in this case we do the following things differently. Something along those lines; is that right?
C
B
C
D
B
I think it's great. So this is more of an FYI for me, because I missed it in your earlier presentation, and I think you just wanted to hear my reaction to it. I think, maybe...
C
B
Yeah, I think the trick, if we were describing this to other people, is... somehow, when you showed those subnetworks in the classic neural network diagram, what you're missing there is, I think, the idea that you have these dendrites that are part of the point neuron; it's totally not visible there. It's not visible in our diagrams either, by the way; we don't...
B
You know, we don't show it in our diagrams, but that part seems to be the hard part, perhaps, for people to get somehow. So that drawing of the subnetworks somehow doesn't convey that component, which is essential to making the whole thing work, right? I don't know if that's an issue or not, but I'm just bringing up that it would be something easy to overlook, you know.
C
B
Yeah, or maybe, yeah, or somehow, for each unit, yeah, we show what a unit really looks like, or something like that. I don't know. I just think, you know, we've run into this many times with people: when you talk about dendrites and why they're important, for many people (I know from neuroscientists, and some machine learning people too), they just say: oh yeah, that doesn't really matter; we've already proven that you don't need those things, because you have a bunch...
B
So that's a common reaction. But I like the idea in general, so thanks.
B
D
As you described it, the recurrent connections are there to make it suitable for sequence learning, right? Because we are in a scenario where each new sample is dependent on the previous one, whereas in continuous learning it's a different scenario; the tasks have no dependence at all. So...
C
B
Yes. So we've published two different contexts. One is the previous state, which is the sequence, and, as Subutai said, the other one is the location, the allocentric location, and that works as well. But you could provide any context. I guess it could be labels for images; I don't know what they would be.
C
Yeah, it could be labels; it could be, you know... we've talked about feedback context, you know, coming into apical dendrites or something; that's another type of context vector. We've had the idea of having a timing signal as a context, yeah. So it's a very kind of generic property. It's a way to, again (I'm trying this language out), select the sparse subnetwork that's appropriate for the given context and shut off everyone else.
C
Yeah, it's just... it's like you've learned a whole bunch of stuff, but you want to focus on the one particular thing that's appropriate right now, and that's what... And the nice thing is, because we have so much sparsity, if there's ambiguity we can actually have multiple things.
C
You know, if the context is not sufficient to tell you uniquely what it is, we can have multiple subnetworks active, a union of them, and then over time we can figure it out, you know, make it less ambiguous.
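[Note: a minimal sketch of that union property, with made-up sizes: because the representations are so sparse, superimposing a few of them keeps each member recognizable by its overlap.]

```python
import numpy as np

n = 2048                                  # SDR width; each pattern ~2% active
rng = np.random.default_rng(1)

def random_sdr(active=40):
    v = np.zeros(n, dtype=bool)
    v[rng.choice(n, size=active, replace=False)] = True
    return v

a, b = random_sdr(), random_sdr()         # two candidate subnetworks
union = a | b                             # ambiguity: hold both at once

# Each member is still recognizable inside the union by its overlap,
# while an unrelated pattern barely overlaps by chance.
print(np.sum(a & union))                  # 40: 'a' is fully contained
print(np.sum(random_sdr() & union))       # near 0: chance overlap only
```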
D
C
Actually, they're using it in a different way. In our case, the superposition is used to represent ambiguity: if you have multiple things that are possible, you can superimpose them without losing each one, because it's extremely sparse. In their case, it's actually not very sparse, so they can't really do this unioning the way we do it; they're using it more as a way of doing, like, an ensemble of experts, which is slightly different.
C
D
B
Yeah, just hearing you say that reminds me, going back to what I was talking about earlier: remember the transformation neurons I was talking about in the Bicanski and Burgess model? Those cells would also have dendrites, right? You know, they show them as point neurons with three inputs, but they would also have dendrites, and they'd also have the same properties. So those transformation neurons would probably be exhibiting the same sort of context dependence. I'm not sure what that means
B
yet, but that's another thing I have to fold into my thinking about that tomorrow.
D
So, okay, I have something to ask you that I can ask after... yes, whatever.
D
I can ask after we stop the meeting.