Description
This paper describes a model of how an animal might use grid cells, place cells, and border cells to navigate in complex environments. It was an excellent summary of existing ideas and it introduced several things we were not aware of that could be important for understanding how a cortical column works.
Read paper at https://onlinelibrary.wiley.com/doi/10.1002/hipo.23147
Discuss at https://discourse.numenta.org/t/navigating-with-grid-and-place-cells-in-cluttered-environments-paper-review/7296
A
Can you guys hear me okay? So, we are just about ready. This is going to be "Navigating with grid cells and place cells"; it should be an interesting talk. We're just waiting for someone to join the meeting. We're all working from home right now because of the COVID-19 thing. All right, here we go, we're gonna get started, I think, real soon.
B
Right, there's no one else on here, fine, but I assume we're on the air right now. All right! So, Subutai pointed out this special issue of Hippocampus, and I looked through it a bit. I read the introduction and some of the articles, but you pointed out this one in particular, which maybe I wouldn't have read otherwise, and I thought it was really interesting, so I just want to talk about it. In some sense, it's a beautifully written paper.
B
There were some real insights in it, things I hadn't really put together before. A few things struck me as: oh yeah, oh yeah, you know, I sort of knew that, but I didn't think of it quite that way. So I'll try to highlight those, although when we go through it, a lot of it is about rat navigation. But again, I'll try to point out the key insights that I got from it. Anything else before we get started?
B
Essentially,
what
their
timeline
here
is
how,
of
course,
that
navigation
is
how
you
know:
rat
gets
from
some
place
to
some
place
else,
it's
just
typically
where
they
are
and
a
goal
direction
they
want
to
get
to,
and
they
make
a
very
clear
distinction
between
two
different
strategies
for
doing
that.
We've
always
talked
a
lot
about
how
grid
cells
to
provide
a
sort
of
vector-based
that
or
metric
system
to
say.
B
Okay,
if
I
have
a
metric
space
behind
my
grid
cells,
I
can
say,
pick
my
current
location
and
calculate
the
the
direction
and
distance
to
the
goal,
and
even
and
that's
just
like
a
straight
cross,
shot
to
that
call.
We've
talked
in
the
past
about
mechanisms
that
propose
to
that,
but
they're
kind
of
wishy-washy,
but
that's,
but
that's
one
idea
and
the
other
way
of
navigating
which
I
didn't
really
understood
before,
is
using
play
cells,
and
this
is
they
call
it
a
topological
strategy.
B
So,
in
this
highlighted
text
using
abstract,
we
have
play
cells
have
associated
with
topological
strategy
for
navigation
grid
cells
have
been
so
suggested,
support,
vector
navigation
in
that
day,
we're
using
place.
Also
navigation,
is
you
just
retrace
your
steps?
So
if
I've
gotten
from
A
to
B
and
I
want
to
go
back
to
a
again,
I
can
retrace
my
steps
using
the
places
I
visited.
That
would
be
the
topological
strategy
and
the
vector
strategy
say
no
I
just
go
straight
from
be
back
to
a
the
figures.
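The two strategies can be contrasted in a toy sketch (purely illustrative, not the authors' model; the coordinates and function names here are invented for the example):

```python
# Toy contrast of the two navigation strategies described above.
# Positions are plain 2-D coordinates standing in for the metric
# space that grid cells are thought to provide.

def vector_strategy(current, goal):
    """Vector navigation: one straight displacement computed from the
    metric (grid-cell-like) code, ignoring the path actually taken."""
    return (goal[0] - current[0], goal[1] - current[1])

def topological_strategy(visited_places):
    """Topological navigation: retrace the remembered sequence of
    visited places (place-cell-like memory) in reverse order."""
    return list(reversed(visited_places))

path = [(0, 0), (1, 0), (2, 1), (3, 1)]      # outbound trip, A to B
print(vector_strategy(path[-1], path[0]))     # straight shot back: (-3, -1)
print(topological_strategy(path))             # step back through each place
```

The vector shot is a single displacement; the topological route revisits every intermediate place, which is why it still works when the straight line is blocked.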
B
I highlighted some text in here, but I'm mostly just going to walk through the figures; I think the figures are very nice and they support it. This is the big one. It basically shows the two basic strategies. What they're showing here is a little picture of, you know, a house, and a nest. I guess the rat is visiting the house, gets some food, and goes back to its nest.
B
It follows the chain of places it's been to. The idea here, which I hadn't really understood, is that a lot of people believe place cells are linked together sequentially, in series, like a sequence memory, and you just follow the sequence of the places you've been, but in backwards order. And of course, on the right here we see the vector navigation, where the rat says: oh, I can calculate the straight path to get there.
B
So
what
they
are
proposing
is
the
following
is
that
an
animal
starts
off
with
the
vector
strategy
which
is
grid
cells
in
a
metric
space,
and
so,
and
it
starts
heading
straight
towards
where
it
wants
to
go
until
it
gets
to
an
obstacle,
and
then
it
decides
what
to
do.
When
you
get
to
an
obstacle,
they
have
proposed
a
simple
way
for
the
use
of
border
cells.
B
Let's
see
you
started
out
here
on
the
right
on
this,
a
figure
he's
now
over
here
and
he
wants
to
get
back
and
so
using
the
vector
strategy.
You
go
straight
towards
the
original
spot
and
he
runs
into
a
wall
and
it's
a
slanted
wall
in
this
case
and
and
and
they're
proposing
in
their
model.
That's
all
they
build
a
model
of
all
this.
What
they're
doing
is
a
stand
will
starts
with
a
back
to
strategy
and
when
it
gets
close
to
a
border,
the
border
deflects
the
vector.
It's
nonsense.
B
Instead
of
here
they're
showing
there's
some
motor
cells
that
are
indicating
the
direction
the
animal
is
supposed
to
go,
but
when
it
reaches
when
it
bumps
into
the
border,
there's
an
inefficient
of
certain
of
those
certain
of
the
motor
cells
that
are
active,
so
the
ones
that
are
you
can
see.
There's
a
motor
cells,
a
search
so
mostly
to
the
right
some
up
and
some
down.
That's
it
distribution
there.
B
But
when
it
gets
here,
this
read
inhibition
the
flex,
devote
ourselves
to
protect
more
to
the
left
or
up
in
this
picture,
and
so,
as
you
know,
it
gets
closer
to
the
border
it
bends
around
and
it
will
just
go
around
the
border
and
then
continue
on
to
the
ball
and
that's
part
of
their
model.
It
does
that
very
nicely
and
then,
but
there's
a
problem
with
the
animal
runs
into
a
obstacle
that
that
strategy
doesn't
work.
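A minimal sketch of that deflection idea, assuming a ring of 36 motor-direction cells, excitation from alignment with the goal direction, and inhibition that grows as the border gets closer. The cell count, the 1/distance scaling, and the names are my assumptions, not the paper's:

```python
import math

def deflected_heading(goal_dir, wall_normal, dist_to_wall, influence=2.0):
    """Bias a population of motor-direction cells away from a border.

    goal_dir and wall_normal are unit 2-D vectors. Each of 36 direction
    cells is excited by alignment with the goal direction and inhibited,
    increasingly as the wall gets close, by how much it points into the
    wall. Returns the winning (most active) direction.
    """
    best, best_act = None, -math.inf
    for k in range(36):
        a = 2 * math.pi * k / 36
        d = (math.cos(a), math.sin(a))
        excitation = d[0] * goal_dir[0] + d[1] * goal_dir[1]
        into_wall = max(0.0, -(d[0] * wall_normal[0] + d[1] * wall_normal[1]))
        inhibition = influence / max(dist_to_wall, 0.1) * into_wall
        act = excitation - inhibition
        if act > best_act:
            best, best_act = d, act
    return best

# Goal is straight down; the wall's normal points up (+y).
far = deflected_heading((0.0, -1.0), (0.0, 1.0), 10.0)   # far: heading is
near = deflected_heading((0.0, -1.0), (0.0, 1.0), 0.2)   # the goal direction;
                                                         # near: bent sideways
```

Far from the wall the winning cell is essentially the goal direction; close to it, the cells pointing into the wall are suppressed and the heading bends along the border, which is the curving trajectory seen in the figure.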
B
So
in
this
case
the
animals
come
around
here
and
once
they
go
back
home
and
it
bumps
into
this
vertical
law
and
there's
no
clear
strategy
that
this
deflection
method
wouldn't
work
here,
and
so
what
they're
proposing
happens
is
that
when
the
animal
gets
to
this
point
and
stops
moving,
they
said:
okay,
I'm
stuck.
What
do
I
do?
There
is
the
replay
of
play
cells.
Now
this
is
something
we've
we've
heard
about
in
the
past,
which
I
never
had
I,
never
had
a
model
in
my
head.
B
What
the
replays
for
us
I
just
review,
what's
going
on
here
with
sometimes
an
animal
stops
moving.
They
see
that
the
play
cells
very
rapidly.
We
call
a
sequence
of
recent
movements,
an
exact
path,
so
what
they're
suggesting
here
is
the
animal
when
he
gets
stuck
starts
recalling
how
it
got
to
its
to
its
place
over
here
on
the
blue,
dot.
I
hope
everyone
can
see
my
cursor,
it
works
yeah.
B
Happens
very
rapidly,
you
know,
there's
a
very
sequence
occurs
on
this:
120
millisecond
burst
activity
that
it
starts
playing
back
playstyles
in
water
and
as
it's
doing
that
it's
calculating
the
place.
Let's
play
back
and
the
argument
the
corresponding
grid
cells
play
back
too,
and
so
it's
basically
can
we
calculate
a
new
from
where
it
is
right
now
stuck
at
this
wall.
It
calculates
a
new
vector
direction
based
on
the
grid
cells.
So
as
soon
as
it
sees
oh
I
can
get
to
that.
I
can
get
to
this
new
target.
B
That
new
target
is
over
here
now
in
this
third
diagram.
He
says
the
new
targets
down
here
says:
I
can
get
to
that,
so
it
dissipate.
This
is
okay.
Now
this
becomes
my
new
temporary
goal
and
once
it
gets
to
its
temporary
goal,
it
goes
back
to
doing
vector
navigation
against.
It's
not
I
can
go
straight
to
the
target.
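A toy version of that stuck-replay-subgoal step (illustrative only: `line_blocked` stands in for whatever reachability test the model actually uses, and the wall geometry is made up):

```python
def pick_subgoal(current, visited_places, line_blocked):
    """Replay the place sequence backwards from the most recent place
    and return the first one reachable by a straight (vector) shot.
    line_blocked(a, b) stands in for the model's obstacle test."""
    for place in reversed(visited_places):
        if not line_blocked(current, place):
            return place            # becomes the temporary goal
    return None                     # totally stuck: fall back to retracing

# Toy world: a vertical wall at x == 2 spanning y in [0, 3] blocks any
# straight line that crosses it. The agent at (3, 1) is stuck against it.
def line_blocked(a, b):
    if (a[0] - 2) * (b[0] - 2) >= 0:
        return False                # both endpoints on the same side
    t = (2 - a[0]) / (b[0] - a[0])
    y = a[1] + t * (b[1] - a[1])
    return 0 <= y <= 3

visited = [(0, 0), (1, 4), (2, 4), (3, 4)]
print(pick_subgoal((3, 1), visited, line_blocked))
# -> (3, 4): the most recent replayed place with a clear straight shot
```

The direct shot home to (0, 0) is blocked, so the replay yields a temporary goal above the wall, from which vector navigation can resume.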
B
So
there's
this
interplay
between
first
starting
of
vector
navigation,
deflecting
around
obstacles
that
can
be
deflected
around
and
still
keep
going,
but
if
it
gets
stuck,
then
you
replay
very
rapidly
with
the
animal
stops
moving
it
replays
sequences
of
how
it
got
where
it
was
until
it
finds
a
new
goal
and
then
continues
on
from
there.
This
was
the
first
time
I
had
ever
heard
of
a
functional
role
for
the
replay
sequences
that
have
been
observed
in
play.
B
Cells
and
I
wasn't
aware
that
those
replay
sequences
also
invoked
grid
cells
in
the
same
path
in
the
same
way,
and
so
I
thought
that
was
very
exciting
and
you
know,
because
one
of
the
reasons
I
was
thinking
about
this
is
we
think
about
a
quarter
of
a
column
in
some
general
way.
Very
simple
way.
You
can
think
about
the
upper
layer
cells
as
being
like
lay
cells.
There
are
this:
they
always
sensed
in
the
environment,
world
of
the
animal,
and
we
don't.
B
We've
talked
about
how
to
learn
sequences
of
those
inputs,
but
we've
never
talked
about
learning
sequences
of
sensory
motor
inputs
and
essentially
what's
happening
here
is
they're,
arguing
that
even
in
a
sensory
motor
situation
when
the
animal
originally
moved
from
from
this
red
circle
already
here,
it
was
remembering
in
one
shot.
Learning
is
remembering
that
sequence
of
events
and
our
sequence
memory
could
do
that.
We
haven't
thought
about
doing
that
because
in
general
it's
not
a
very
useful
thing
to
do.
B
If
you're
trying
to
learn
a
model
of
an
object,
it's
not
useful,
but
if
you're
trying
to
navigate
it
is
so
I
thought
it
was
pretty
exciting
idea-
and
you
know
maybe
we're
doing
this
one-shot
learning
and
not
related
or
and
could
serve
a
similar
purpose
here.
Okay,
so
that's
it
for
that
figure.
Unless
there's
questions
about
that,
I'm.
B
So it seemed to me that even when the red circle was here, the deflection strategy would work; at that point in time the animal would say: oh, I can start moving in that direction. That's not the way they described it. They described it as waiting: replaying from the green triangle back toward the red circle, all the way back until there was a clear shot, and once they got a clear shot, then the animal says: okay, I can get there, clearly, I don't have any obstacles in front of me.
B
I'm
gonna
go
I,
wasn't
clear
to
me
where
I
had
to
be
that
way.
I
thought
that
why
could
the
end
we'll
start
heading
towards
this
goal
right
now
and
because
it's
in
the
deflection
scenario
that
would
work
but
anyway,
there's
a
there's,
a
mechanism
which
determines
when
to
stop
moving
when
the
when
they
say
that's
enough
replay,
let's
pick
a
new
target,
which
is
the
green
triangle,
and
now
I'll
switch
back
to
vector,
notation
they'll
answer
that
question
yeah.
B
No, I think they had a name for it, I forget what it was called, but they had a name for this algorithm: deciding, like, how long do I keep doing this playback until I decide to go back to vector navigation. And they also talked about the case where, if through this process of playback there was never a place you could go, like you're totally stuck, the animal just has to walk its way backwards, basically, and it updates its model.
E
I guess the key thing really is that they take the grid cells along during the replay, which allows for the continued vector navigation. So you don't have to choose between two strategies; you can actually alternate between them. Obviously much of that is based on that very brief paper, which I still have a lot of questions about in terms of how well they actually show which cells replay. But we can talk about that some other time.
E
I mean, some of those I've already seen, because I was talking with him about some of them at COSYNE, when I was asking him for references for the Nengo implementations I want to do in the summer. I mean, who knows whether that summer school is still going to happen, but I think I have a bit of a mental map of what's happening here. But I was thinking we should briefly show the videos, because it's going to bring some of these points home a little bit faster, what's happening dynamically here.
E
All right, cool. So these are the videos. The first one is just the basic slanted obstacle. You see this thing: the agent is moving around from the start to some point and then wants to go back, so vector navigation says go right, and now we're hitting that border. What you will see is that the motor command, these motor cells that they postulate, is now modulated by the growing inhibition from the border cells, which you can see.
E
So this is the border navigation, in case you can see my mouse here, all right, and that biases the motor cells into a different direction. So there's no remapping here; this is just a biasing of the vector-based navigation by the border cells, which sort of changes the vector a little bit to allow for that. And maybe I should briefly say that that inhibition grows with proximity to the border.
B
You
go
on
for
it,
I
want
to
point
out
yeah.
Those
of
you
know
me
here:
I
do
a
lot
of
introspection
on
these
things,
and
you
can.
You
can
feel
yourself
doing
this.
As
you
know,
she's
personally,
navigating
the
world
it
strikes
me
is
certainly
very
very
close
to
how
we
actually
behave
ourselves
and
as
fascinating.
Just
imagine
I
think
that
that
curve
is
you
get
protein
when
we
as
we
get
closer
to
an
obstacle.
E
I've been thinking even about more complicated tasks where you might want to compare, like, actually untrained rats. We do a lot with trained rats, but if you use untrained rats, you see them, yeah, use these, you know, different strategies, and switch between different kinds of goal navigation, right? To what extent can you quantify that and verify some of these assumptions of the model, and so tune them? But we'll get to that.
B
Before we go on again, I'd like to note how this relates to our research. As you know, I've been working on this idea, and we've talked about it numerous times, of minicolumns representing a set of 1D basis vectors for defining a metric space. As part of that, I've been developing further ideas about how the motor representation is represented in each minicolumn, because every minicolumn has, in layer 5, these motor cells, and so I can literally, visually imagine this.
B
Imagine
this
process
occurring
between
many
columns,
because
suddenly
many
columns
are
gonna
represent
in
a
quarter
of
a
column.
We're
gonna
represent
the
motor
vector
for
that
column,
and
you
can
literally
imagine
these
mini
columns
in
inhibiting
one
another
shelves
in
layer.
Five
I
look
at
this
I'm,
translating
this
back
to
cortex
and
that's
pretty
exciting.
E
There are some, I think, architectural problems with much of this. I mean, they really cut down on the detail in an effort to make a more functional model, which is great, so we now get nice videos to look at. But I think there are some problems with, for example, this need to arrange cells into ring structures that don't have any biological evidence supporting them, and these things. But we can talk about those things, yeah.
E
That deflection worked because it was a slanted object, so there was still a gradient of progress. When the object is perpendicular, that doesn't work. So what you saw there, I'm pausing the video here, was a brief, essentially sharp-wave-ripple sequence that activated a marker here, this triangle, that event, right? And so now we are navigating to, what is it, like ten place cells back, which is roughly the range of a sharp-wave-ripple event; I mean, they don't, you know, replay back infinitely.
E
There's this pinging left and right, and those are quite different from the longer and more powerful sharp-wave ripples, which you get either during slow-wave sleep or during consummatory behavior or rest, right, like what you might get at a T-maze intersection or when you have a perpendicular blocking obstacle. So there are actually different kinds of replay. The model only presumes one, namely playing backwards from the goal location with a long-range replay, like a sharp-wave ripple, to get a new intermittent goal. Theoretically, one could combine that with the theta forward replay.
E
Looking
like
a
pinging
right,
you
know
what
are
my
options
now
to
build
a
merge
of
these,
so
that
would
go
into
the
direction
of
this
stuff
that
I
presented
from
David
Terrell,
like
a
hierarchical
planning
scheme,
where
you
have
long
term
goal
intermittent
goals
and
then
planning
strategies
towards
intermittent
goals.
B
One thing I don't think was addressed in this paper, which I think is fascinating, is the replay itself. There were several things they pointed out that were fascinating. One is that it goes both forward and backward; if an animal stops, I believe, it goes into this. And it only happens when the animal is not moving, which also makes sense, by the way.
B
Brief
periods
where
these
playbacks
occur,
but
sometimes
a
playback
go
forward
and
some
goes
backwards.
Well,
so
that's
been
well
documented.
So
that
tells
me
existing.
That
tells
you
that
there's
this
one-shot
learning
sequence
memory,
but
it's
bi-directional
and
we
don't
you
know
our
sequence.
Memory
has
always
been
forward-looking.
After
we
talked
about
learning
maladies,
it's
it's!
You
know
you
don't
learn
about,
but
this
system
does
and
so
but
I
don't
think
there
was
any
suggesting
to
why
and
the
reverse
sequence
would
be
useful
in
this
navigation
task.
Perhaps
it
is.
B
Maybe
a
offer
could
comment
on
that,
but
but
but
this
just
talks
about
the
forward
one.
The
other
thing
was
you
just
mentioned
there
with
the
playback
of
the
place
else
has
to
for
this
to
work.
There
has
to
invoke
the
appropriate
grid
cells
at
the
same
time
and
they
offered
to
make
an
argument
there's
enough
time
to
do
that,
and
they
also
that
happens,
but
the
other
thing
I
didn't
realize
is.
It
also
makes
everything
set
the
playback
where
the
grid
cells
can
occur
without
the
places
that
the
grid.
E
The mechanisms behind that grid-place interaction are actually quite complicated. It's not like straightforward bidirectional connectivity, because all the place cells feed back into medial entorhinal cortex layer 2 indirectly: through layer 5, into layer 3, into layer 2. So there's a trisynaptic circuit, and it's a powerful one, and it's everywhere. So there's no doubt about the existence of, you know, feedback loops, but it's not really cell-to-cell connections.
B
In terms of what's going on in cortex: if we think these grid-cell equivalents are in the lower layers, and they're connected by this bidirectional circuitry between layer 6 and layer 5, then there's no mechanical problem there. The mechanism would be the same kind of approach you're seeing in the hippocampal complex, just the fact that that is a more complex circuit, yeah.
E
Yeah
exactly
they
don't
have
the
theta
forward
replay
like
this
head
Direction
cells
pinging,
what
what
I
refer
to
as
the
sweep
yesterday
and
what
David
Terrell
then
supposed
to
be
like
these
forward
kind
of
replace
theta
sequences,
which
are
much
smaller,
they're
like
more
like,
like
3/4
play
cells
and
not
like
10
they're,
also
less
powerful
events.
No,
they
are
theta
rhythmic
yeah.
B
So we talked about this figure; let me just go on to figure three here. They've introduced some more work. These are all simulations, remember; there's no real rat here, it's a simulation of the model, and they've made up this funky little environment. These gray things are obstacles, and the red line is showing how the agent is moving.
B
So
they
talk
about
here
where
there's
a
nest
right
where
this
red
line
begins
and
a
one
here
and
showing
you
the
trajectory
of
the
rat,
the
animal
and
then
what
happens
at
this
point,
then
the
animal
traverses,
some
distance
along
the
perimeter
until
to
this
is
called
the
start
point
and
then
I
said
to
get
back
home.
So
now
the
animals
out
here
with
its
best
remark,
is
it
says,
but
what's
the
strategy,
how
do
I
return
to
the
nest
which
is
right
at
the
beginning?
B
Open,
ok,
appear
they
go
through
this
in
two
scenarios,
I
talked
about
worthy.
The
return
point
is
right
at
the
opening
to
the
cave
and
one
where
it's
deeper
in
the
cave,
so
you
can
see
be
in
the
middle
of
screen.
It
was
at
the
opening.
The
key
and
see
it's
in
the
in
the
middle
of
screen.
This
reminds
me
and
and
then
they
talk
about
this
under
different
scenarios,
so
here
they're,
saying
the
vector
navigation
with
obstacle
deflection
starting
from
these
two
blue
points
seems
to
work
pretty
well.
B
But if you move the goal, the nest, inside the cave, it no longer works, because the animal gets stuck on the outside here; it's just a fancier version of what we saw before. But then they show, in this scenario where you combine the two strategies, the animal doesn't get stuck here: it goes back to following its previous path, using the topology of the map of the environment. I think in this model they assumed the animal had a complete model.
B
I
forget
what
it
was.
Was
this
this
was
the
environment
was
was
understood,
I
think
I
can't
remember
anyway,
so
they're
showing
how
then
they
then
they
show
what
happens
from
all
different
points
around
the
circles,
and
you
know
so
the
animal
they're
trying
to
try
to
start
from
any
of
you
around
here
and
again
as
long
as
the
extras
in
the
mountain
middle
here,
then,
the
vector
navigation
work
here,
the
vector
of
navigation,
only
work
from
a
few
spots
and
the
animal
got
stuck
most
possible
here.
E
Trials starting from here, and yeah, you kind of get stuck there, right, because it's a local minimum in the energy landscape and you don't have any way to get out of it. So you combine vector and place navigation to overcome that: you form that place map, you see all these connected places, right, so you have a map, and now, when you get stuck, you can do replay to an intermittent goal, remap, and get to the goal.
E
All these nodes are connected now, which means that we already know the connections here, right, these steps that are important for getting back to the nest. So you start on vector navigation; now there's a local minimum, so you go into replay because you can't solve it, and then you remap, with intermittent goals, through these topological steps. So that was that. Now we can continue with the paper; I'm gonna stop sharing.
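The alternation just described, vector navigation until stuck, replay to pick a temporary goal, then vector navigation again, can be written as a small control loop. Everything below is a scripted stand-in, not the paper's model: the 1-D positions, the single wall, and the way `pick_subgoal` "finds" the clear route are hand-wired purely to show the mode switching.

```python
def combined_agent(start, nest, step_toward, is_stuck, pick_subgoal, max_steps=100):
    """Alternate vector navigation with replay-chosen subgoals (sketch)."""
    pos, goal, visited = start, nest, [start]
    for _ in range(max_steps):                    # safety bound
        if pos == nest:
            return visited
        if is_stuck(pos, goal):
            goal = pick_subgoal(pos, visited)     # "hippocampal replay"
        pos = step_toward(pos, goal)
        visited.append(pos)
        if pos == goal and goal != nest:
            goal = nest                           # subgoal reached: resume
    return visited

# Scripted environment: heading straight from 6 toward the nest at 0 is
# blocked until replay has "found" the detour via the visited place 9.
blocked = {"active": True}

def step_toward(pos, goal):
    if pos == 6 and goal == 0 and blocked["active"]:
        return pos                                # bump into the wall
    return pos + (1 if goal > pos else -1)

def is_stuck(pos, goal):
    return pos == 6 and goal == 0 and blocked["active"]

def pick_subgoal(pos, visited):
    blocked["active"] = False                     # replay located a clear route
    return max(visited)                           # temporary goal: back to 9

path = combined_agent(9, 0, step_toward, is_stuck, pick_subgoal)
print(path)   # walks 9..6, gets stuck, detours back to 9, then reaches 0
```

The printed trace shows both mode switches: vector navigation down to the wall, a replay-chosen subgoal, and vector navigation resuming once the subgoal is reached.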
E
In the worst possible case, right, where for some reason the entire vector navigation, any possible path integration, is a failure, because you make no progress on the gradient towards the goal, you would just entirely retrace your place map. That is not realistic, because your sharp-wave ripples don't go, you know, infinitely many steps back, yeah, no, no.
E
But that is not at all unrealistic, I mean: when you have explored an environment intensively first, you're much better at taking shortcuts, right, and getting out of deadlock situations. So in that sense, you know, you would expect performance to be worse when there's not an extensive place map in existence, yeah, yeah.
B
If you were in some town or something and you hadn't explored an entire section of town, you're unlikely to go over there just to see if it's a good way to get home; you're going to look for the sure path to get back, even if it's a little bit longer. The next video will talk about some of these optimizations, but I think it would be kind of unrealistic to ask the animal to go take a route through an area of town it's never been, yeah.
B
One could just say, and it looks like what they did is, they said: as you play back, keep testing whether vector navigation can get there without any deflection. The moment you get to a point where there's no deflection needed, then you say: that's good enough, go ahead in that direction.
B
Always
soon
the
animal
knows
about
the
environment,
I
think
it's
Andres,
just
saying
the
only
thing
we
assume
the
animal
knows
about
the
environment
is
the
path
that
took
to
get
there
where
they
are
now
so
I.
Think
it's
a
good
strategy.
Given
me,
everyone
knows
nothing
else
about
the
environment
is
to
go
straight,
that
straight
to
the
destination,
and
you
get
stuck
and
until
you,
if
you
can't
get
anywhere,
then
you
then
you
pick
an
intermediate
thing.
G
A question on seeking those goals. I can understand how you can replay back until you have line of sight to this other, you know, shortcut, but your perception of that shortcut: when you were traversing it the first time, you know, object A is over to my right, object B is to my left, and stuff like that. When you're looking for that line of sight, what's the interaction with actually being able to visualize that that previous goal point is actually what you're now looking at from a different direction?
B
I,
don't
think
you
were
I,
think
I
got
this
one
there's
what
I
said
earlier.
I,
don't
think,
there's
an
assumption
of
sight.
It's
just
what
it
looks
to
me.
The
way
the
album
would
be
working
is
is,
if
they're,
there
waiting
till
they
get
to
the
point
where
there's
no
deflection
and
and
that's
good
enough
and
I
think
I'm,
not
sure
Kevin.
B
If
your
question
is
like
remember,
the
underlying
this
is
the
ability
very
quickly
to
say
it
from
any
point
to
other
any
other
point:
what
is
the
vector,
the
guess,
meter
and
so
your
constant
calculating
that
from
the
animals
current
location,
bird
stock
through
a
series
of
place,
locations
and
associated
for
itself
equations
as
that
play
that
is
occurring,
it's
calculating
a
factor
continuously
and
says:
okay,
that's
good
enough!
I
might
go
for
it,
it
didn't
have
to
be
any
prior.
B
Here's figure four, and I think this is what Andre was just talking about a moment ago. This is looking at the algorithm from a different perspective, in terms of two new variables. One is how much of the environment has previously been explored: there's a dense exploration versus one where the animal has only seen a little bit of the environment, which is the sparse exploration.
B
That's
one
new
variable
and
the
other
new
variable
is
they're,
saying
they're,
introducing
novel
or
new
shortcuts
to
the
system.
This
didn't
exist
when
the
animal
left,
though
you
know
in
I,
don't
remember
why
I
decided
to
do
this,
but
this
what
they
decide.
So
the
thing
I
didn't
understand
about
this
and
that's
something
I
really
want
to
get
into
in
the
literature.
Is
this
idea
that
you
could
have
a
the
playstyles
can
create
a
dense
sort
of
now
of
the,
and
this
could
be
used
for
navigation
I.
B
Don't
understand
that
how
that
happens,
you
know.
So
it's
almost
explored
everything
and
those
where
all
the
places
are
it's
it's
just.
Anyone
knows
you
can,
let
me
know,
but
it's
something
I
have
to
dig
in
to
understand.
How
is
that
useful
for
navigation,
but
that's
what
this
missus
figures
about
and
then
they
show
what
happens
if
you
use
just
the
topological
agent.
This
meaning
just
use
your
knowledge
of
places
and
I'm,
not
sure
this
means
that
the
animals
exploit
we've
done
every
one
of
these
paths
or
assists
somehow
there's
a
ban
on
spinner.
B
In the fully explored environment, the animal is able to move pretty quickly, right towards the goal, and in the sparse model here, the animal can only retrace its steps completely; so if it's here, it has to go all the way around again, all the way back along the route it came. And it also doesn't take advantage of novel shortcuts, because there's no way of discovering them; it's just going to take its path back. Then you can use the combined agent, and now they're showing, from all the different points around here, that it works.
B
It
works
well
in
the
dense,
it's
not
taking
advantage
of
the
dense
model
and
or
it's
just
it's
just
using
vector
navigation
here.
It's
saying
if
it's
sparse
its
using
that
it
also
works
very
well,
and-
and
it's
also
able
to
discover-
because
it's
using
the
veteran
navigation,
it'll
it'll
discover
new
shortcuts,
because
it's
going
straight
towards
a
goal.
B
And
then
they
can
build
it.
They
compare
the
performance
of
these
different
networks,
the
showing
how
efficient
they
are,
and,
of
course
the
combined
agent
does
very
well
and
it's
able
to
take
advantage
of
the
shortcut.
That's
the
only
one
that
really
is
able
to
take
advantage
of
shortcuts,
but
it
does
well
overall
compared
to
the
purely
topologically
now
I
need
to
learn
and
I
need
to
learn
how
how
this
model
works.
B
I didn't spend much time on this figure; it was somewhat interesting, but it was more one of these things where they said: how does this work on two types of mazes that have been used historically, going back many years, just sort of comparing it to the classics? This is the starburst, or sunburst, maze, and this other one, I forget what they call it.
B
The
detour
maze
a
feather
just
these
are
sort
of
classic
in
the
literature
mazes
that
rats
have
really
gone
through
and
they're,
basically
trying
to
show
that
the
model
works
well
on
them
in
it
and
I,
don't
think,
there's
really
much
insight
further
into
it.
At
least
I
didn't
get
much
insight
into
it
other
than
they're
just
showing
it's
compatible
with
what
mats
have
done
previous
well-known
basis,
so
I
thought
I
promised
this
would
be
a
fairly
short
tape.
I
have
a
lot
of
notes
on
it.
B
I'll
just
go
right
to
the
very
summary
here
and
then
I
can
end
it
here.
Here's
this
is
basic
in
discussion.
Section
Vista,
summary
page
initially
performs
better
navigation,
primarily
given
by
grid
cells
and
aided
by
border
cells
for
obstacle
deflection.
That's
what
we
saw
them
first
figure
if
progress
is
blocked
or
obstacle
the
agent
initiation
hippocampal
replay
that
introduces
aspects
of
topological
navigation
to
the
agents
overall
behavior,
allowing
the
agents
to
switch
between
different
sub
goals
in
order
to
successfully
complete
a
complex,
uneri,
complex
environment.
B
We were talking about animals where they intentionally disabled, or in other ways disrupted, this pathway, and I think there are lots of examples; some were cited in this paper, like what happens to rats in the water maze when you disable the communication between the grid cells and the place cells: there is impairment. Basically, the animals cannot find their way someplace very well; they were definitely impaired when any part of this system was interfered with. That was the question, yeah.
E
All right, I hope you guys can see this: the combined vector-place agent, sparse exploration, without shortcuts. So you've got this agent, which traces out, you see the map of the place cells that got formed, right, all these little place dots. You get deflection, you trace back. It's also interesting to stop at this point to see that, of course, the intermittent goal that gets set here, right, gets set essentially on the basis of the gradient.
E
You know, it resumes the navigation based on replay and based on topological steps, and you see this nice intermixing here, right, where you get the gradient-based, vector-based navigation, and then a topological move, which it switches back from because it can, right, when there's a straight line of sight to the target: back to vector navigation, a bit of deflection, until it gets stuck again and needs to switch again to topological navigation. So there's this nice back-and-forth. And then here's one with novel shortcuts.
E
And so, lastly, the sunburst maze here, which is obviously quite complicated. You get this long path into the maze, and the map you built is useless, or at least pretty useless, because now your entrance point is actually blocked, so you can't go back. You actually need to take the goal arm, arm 6, and it's very hard to find at first, because of course the gradients are kind of pointing in that direction in arms 5 and 4.
E
So what you see here, what is noted with these diamonds and these purple marks, is when the agent essentially gives up, right, and enters random exploration, just to build a bigger place map, which may or may not be useful later down the road, in order to hopefully, eventually, get better at this; and there it finds the right arm, right.
B
E
F
E
Yeah, I guess there's a lot one could build on top of this model, right, to make it cognitively better. But there are interesting things, and also: what is the biological justification for it, what is the circuit mechanism, and how would that get learned? And so you'd need to answer a lot of questions on plausibility if you do that, yeah.
E
F
F
D
C
A
So Andre says no penalty, but a tweak to the obstacle deflection because of the narrow corridors; it's mentioned in the paper.
A
B
B
No noise in it, and they don't update the grid cells from the place cells, so it's, right, a perfect grid cell system, or vector navigation system. So there's no noise; it's unrealistic, I think. But okay, again, yeah, I'm less interested in how rats actually navigate these things. I'm more interested in these basic principles that came out of it, and I really did like this, the motor thing; I'll talk about that again in a future meeting, why I thought that was really interesting.
B
E
Yeah, I mean, the way that they're building these place cells, right, it's like one-shot learning. They essentially add a place cell as soon as you're far enough away from the place cells that you have. Yeah, and the way that that helps the navigation is that they're building this graph structure, and so potentially, since they do optimal planning in the topological graph, you will get, you know, much better shortcuts when you have explored an area densely, because you use the place-cell graph once you're out of vector.
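The one-shot recruitment just described (add a new place cell whenever the agent moves outside every existing field, and link it to the previously active cell) can be sketched like this. The class name, the `field_radius` parameter, and the edge rule are illustrative assumptions, not the paper's values.

```python
import math

class PlaceMap:
    """Toy sketch of one-shot place-cell recruitment (assumed mechanics).

    A new place cell is added whenever the agent is farther than
    `field_radius` from every existing field center, and an undirected
    edge links each newly active cell to the previously active one,
    building the topological graph discussed above.
    """

    def __init__(self, field_radius=1.0):
        self.radius = field_radius
        self.centers = []          # one field center per place cell
        self.edges = set()        # undirected links between cell indices
        self.active = None        # index of the currently active cell

    def observe(self, pos):
        # Reuse an existing cell whose field covers this position.
        for i, c in enumerate(self.centers):
            if math.dist(pos, c) <= self.radius:
                self._link(i)
                return i
        # No field covers it: recruit a new cell in one shot.
        self.centers.append(pos)
        self._link(len(self.centers) - 1)
        return len(self.centers) - 1

    def _link(self, i):
        if self.active is not None and self.active != i:
            self.edges.add(frozenset((self.active, i)))
        self.active = i
```

Feeding a trajectory through `observe` grows the map only when the agent leaves all known fields, which is why dense exploration yields a denser graph and better shortcuts.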
B
Navigation. So the graph is not a metric graph, right? How do I know? I don't even know. If you just cut out a link between two things, I don't know if I'm going in the right direction or the wrong direction. I mean, unless you put it on top of a metric space, that connected graph, I have no idea what it means. I don't.
B
B
E
B
E
I have the paragraph here. It says: the resulting place cell graph reflects the topology of the explored environment and contains sufficient information to calculate the shortest path between arbitrary pairs of start and goal place cells across those explored parts. The graph can, for example, determine which of the currently immediately adjacent place fields lies on the shortest path to the destination. So they have an actual implementation of that search, essentially.
G
E
B
Is something that people understood was happening, and I couldn't see how it actually worked. So this, to me, sounds like, actually, I don't think it's necessary. Maybe you don't try to do that. In the dense versus sparse one, you're basically using their initial algorithm, which is: I'm stuck, retrace my path until I find a place I can rejoin it, and get back. Then I don't need that. So I'm gonna go, for the moment, with the assumption that it's computationally very difficult to do this complete graph.
B
F
Not sure, yes, but you know, I have to say, it's like you said: it's fairly similar to kind of the functional models we do, and it's really nice to see them take all of this experimental data and then attempt to create something that actually works. Yeah, they went to a lot of trouble to create all of these complex environments, and yes, to do it they had to take a few liberties here and there. But you know, we did that too in our columns paper.
B
E
B
The exciting part was, you could remember the actual passage. Oh, that's great. Now, rats actually do that. You can calculate how to get there directly, although we don't really know the navigation. And those two things, and also the fact that there seems to be memory, not only of the place-cell sequence but of the grid-cell sequence, too.
E
B
B
B
B
B
E
B
B
E
D
Sure, yeah, there's a couple of different possibilities on that one. I'll go with the two different populations. You could say the superficial layers of entorhinal cortex are doing more of, like, you know, the attentional type, the ones that aren't your actual current location; it's more like an attention one. And you could say the ones in the deep layers of entorhinal cortex are more of the thing being, you know, anchored by sensory input, more of the actual one. That's one perspective on this.
E
B
Talked about this a lot in terms of cortical models, right, and the anatomical evidence looks surprisingly suggestive of two different populations in layer 6. So that's the first thing Marcus referred to. And then the second one was the idea, and there is some evidence, I can't remember what it is, of alternating back and forth, and that was also suggestive.
B
We brought this up in the Frameworks paper, that this might happen, and it had to do with this really weird thing about these layer 5 cells, which are motor output cells, which essentially do your vector navigation, also throughout the cortex. And so one of the ways we speculated was that these things can oscillate back and forth. But there is some evidence, and I don't remember what it was, maybe you remember, but there was some evidence for that, that there are two phases it would be switching between.
D
Another wild card that, you know, may come into it: I've already brought up the layer 2 and the layer 5 grid cells; now think of the layer 3 ones. The one place where it does seem that there are reciprocal connections between grid cells and place cells, in both directions, at least that I could point to, is the paper from David Rowland, where they found them between layer 3 of entorhinal cortex and, I'm gonna forget, either CA1 or CA3. But there are reciprocal connections, both directions, grid cells and place cells.
D
B
Thomson, layer 6, Thomson, yeah. She made this point that there are very unusual morphologies of these layer 6 cells, and their morphology is bidirectional, very narrow, having connectivity within layer 6 and ascending, like to layer 4, and they're very unique. They look like they would have this unique function, and the bidirectionality, and there's two sets going back and forth. And so that's the underlying assumption of our models of modeling objects: that that's what's going on.
H
Maybe I'm misunderstanding something; this is a bit of a detour, and maybe I'm misunderstanding something about the model. But one thing I would really like to see is some sort of inclusion of some kind of, like, simulation, especially in this regard: for example, when you walk up to a slanted obstacle, right? Like, imagine you're doing this: you don't just, like, keep going.
H
Okay, I know vaguely a sort of vector coordinate for my target, and then I run into a slanted obstacle, and you're like, oh no, no, no, no, and then, like, sort of, you know, take a gradient to avoid it. You sort of have a model in your head about the limits and the orientation of obstacles and stuff, and how you would work around them, even in, like, an only partially, you know, sparsely explored environment. I'm.
B
Missing that? It's easy. So I think in this model the animal does not know about this obstacle, right; there's just no prior knowledge about it. It's like you're running through the woods and all of a sudden there's a tree down across in front of you. You have to go around it, and you tend to go around the way that's sloping towards the place you're going. I don't think there's an assumption or a model, in fact, that you just know.
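The "go around the way that's sloping towards the place you're going" rule just described amounts to picking, of the two directions tangent to the obstacle, the one whose projection onto the goal direction is larger. A minimal 2-D sketch, with all names and the vector convention being my assumptions:

```python
# Illustrative deflection rule, assuming 2-D vectors as (x, y) tuples
# and a known wall normal at the point of contact; not the paper's code.

def deflect(wall_normal, goal_dir):
    """Pick the wall-tangent direction that slopes toward the goal.

    The two candidate directions are the two perpendiculars to the
    wall's normal; we keep whichever has the larger dot product with
    the goal direction, i.e. makes the most progress toward the goal.
    """
    t1 = (-wall_normal[1], wall_normal[0])   # one tangent direction
    t2 = (wall_normal[1], -wall_normal[0])   # the opposite tangent

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    return t1 if dot(t1, goal_dir) >= dot(t2, goal_dir) else t2
```

So for a wall running along the x-axis (normal pointing up), a goal up and to the right makes the agent slide rightward along the wall rather than leftward.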
H
H
This works better if you have a line of sight that is significant, and I guess if you're the size of a human being, then that would apply here. But for, like, a rat, where you can't see, like, a log in front of you until you're right next to it, then that might be different. But you know, I would like to include something like, sort of, a larger line-of-sight kind of planning. Well,.
B
Would it be different? I mean, I'm saying that at some point you would detect this barrier before you, you know, before you run into it and it cuts you in half. I mean, you detect it, whether it's with whiskers, or you get your hand out feeling it, or you've got eyes, or I don't know what it is, but some sensor lets you detect the barrier, and at that point, that's the first point you can start deflecting.
H
I was thinking more along that range, and then thinking of, like, okay, I just have, like, the sequence of places, you know, place cell activations, that I had recently. I can replay that; I'm just gonna go back there and plan another route. That seems like it's a cute model, but it seems a little bit naive to me. Of course, it's great that they made this work and everything, I agree.
B
H
B
What do you mean? I mean, if I'm in the city and I need to get someplace, say I'm in New York and I need to get someplace, like, yeah, I know generally the direction I have to go in, and whenever I get to intersections I have to make a decision, so I go down the avenue or street. But I think it works the same as it does in the forest. What am I missing? I mean, I'm not disagreeing; I just don't understand your objection.
H
F
C
E
Thing that I've been thinking a little bit about in terms of, you know, biological evidence for these things. Obviously, it's very nice that they are using, sort of, you know, the evidence for grid replay, and that coordination between place and grid replay, which makes it possible to do this, you know, fluid switching, right, between vector and topological navigation. But the way that they built the place cells, also, like, I mean, in biological circuits, right, these are contingent on, I mean.
E
People would make arguments about the capacity of the place-cell space: you can't just put a place cell anywhere. And people who have been looking at replay and preplay, right, also say that, well, look, I mean, these cells are not really, like, new and then added to the sequence; actually, they were kind of in the sequence before; they just get recruited as representing that space. So, like, you know, there's all this evidence from preplay that sequences of certain cells exist before they're ever being used for mapping a particular spatial environment. These.
E
B
I suppose, forget the relevance part, yeah. My point here is, I actually think, I mean, the relevance obviously can be important, but what I walk away from this with is that, yeah, we remember this stuff completely, even if it's not relevant, and we'll forget it again quickly if it's not relevant. But you don't need relevancy to remember this; you're gonna remember the path.
B
B
The way we're talking about it is that place cells are inherently sensory-driven; they are the sensory input. And so they're not picked from a pool; they're not just randomly chosen to be here. It's whatever is invoked here by your sensory input at any point in time. I look around, you see something, you know where you are; you're gonna invoke some set of place cells based on what your experience is. So.
B
E
E
Cells, yeah, but theoretically, but not really. So when you induce place cells, for example, the way that Aaron Milstein did, right, with the generated plateau potentials, then you don't get a new place field at the exact moment that you put that pulse down. But you might get a place field three seconds before that, or two seconds after, somewhere in the neighborhood. Here, there's going to be.
B
B
But if it's single cells, the idea is that that's not the only thing. So what are place cells and place fields? To me, it's almost like our sequence memory: there's a sparse activation of a larger population of cells, there's a virtually unlimited set of those, and you can learn quite long sequences of those.
E
B
E
I think it's actually, like, an artifact of the way that we record, or that we used to, with tetrodes. I mean, maybe it's gonna change now, maybe, you know, sometime in the future, now that we do these dense recordings with the Neuropixels probes, where you stick one in and you're recording 600 cells. Maybe that's gonna change, where we're no longer talking about a place cell, but we're talking about, you know, a place assembly, right, or something. Yes.
B
B
B
E
B
This paper that we published just shows how the same neural mechanism for our sequence memory works for sensorimotor learning as well. And so, in one case, the sequence memory, we take these inputs and we just learn the sequence as we go through it. In the other case, we're trying to pair it with a motor signal, a location signal; it would be from grid cells. And what I walk away from this with is that I think the upper layers of the cortex are learning every single transition, no matter what.
B
It's a sensorimotor sequence, which is a very unique thing; I may never do it again, but it could be, in a little bit. Or it's a high-order temporal sequence, like a melody; I remember that. So over time we've really unified those two ideas. We say: okay, sensorimotor inference and learning, and it just doesn't matter how you're doing it; it's gonna learn all that stuff.
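The unification just described, one transition-learning mechanism where the "context" is either the previous sensory input (a melody) or a grid-cell-like location signal (sensorimotor learning), can be caricatured in a few lines. This is a deliberately tiny toy, not HTM sequence memory itself; all names are illustrative.

```python
class TransitionMemory:
    """Toy sketch of the unified idea: one mechanism that learns
    (context, input) -> next-input transitions. For a temporal sequence
    the context is the previous input; for sensorimotor learning it is
    a location signal (a stand-in for grid-cell state). Not HTM itself.
    """

    def __init__(self):
        self.transitions = {}                # (context, input) -> next input

    def learn(self, context, current, nxt):
        self.transitions[(context, current)] = nxt

    def predict(self, context, current):
        return self.transitions.get((context, current))

# Temporal sequence: the context is the previous note of the melody.
melody = ['C', 'E', 'G', 'E']
tm = TransitionMemory()
for prev, cur, nxt in zip(melody, melody[1:], melody[2:]):
    tm.learn(prev, cur, nxt)

# Sensorimotor: the context is a location, pairing sensation with movement.
sm = TransitionMemory()
sm.learn((0, 0), 'edge', 'corner')
```

The point of the caricature is that nothing in the mechanism changes between the two uses; only what feeds the context slot differs, which is the unification being described.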
B
B
C
A
Okay, hey, it's just you and me now. So I wanted to say something real quick, if any of you guys are watching and you're like, why do you guys care about grid cells, place cells, environments and stuff like that? So, Numenta's theory of intelligence involves the mapping of space as we learn objects with our sensors, and that's what this is all about.
A
I don't know if "loops" is the right word, but we are trying to understand how a model, we're trying to create a model of, how the cortical column works, one that includes the ideas that we've understood grid cells and place cells to work upon, and how they map space, because we believe that concepts and objects are essentially the same thing for your brain.
A
So if we can understand how an organism can map a space with its sensors, whether it's running around the space or whether you're holding something in your hand and touching it with your fingers, then that will help us understand how we can map conceptual space and understand how thought works, how concepts can be organized and linked to other concepts and other things. So this is a core sort of focus of our work at Numenta. So I just wanted to go over that.
A
So you understand why we care so much about place cells and grid cells: because these seem to be mechanisms that are mapping space, and I think, we all think at Numenta, that these are really important to understand, because we think they're happening not just in entorhinal cortex and hippocampus, but all throughout your cortex, all throughout your neocortex, and that they're involved in the representation of everything that you have an understanding of. And that's it. Thanks for watching. We will be back Wednesday; I will do another livestream, and I.