Description
Numenta Research Meeting, April 22, 2020. Aris reviews several plasticity mechanisms, including developmental plasticity, various forms of Hebbian plasticity, eligibility traces, homeostatic plasticity, and the impact of neuromodulators. In the second part, Jeff briefly reviews a few findings and facts related to optical recordings of grid cells.
A
So, right. Subutai and I have been talking, and we've all been talking extensively about continual lifelong learning, in general and specifically in how we might implement it, and there's this review, a very long review as pointed out, on lifelong learning in biological systems and in artificial neural networks. So today I want to go over the biological aspects of lifelong learning, which is basically a short rehash of the different mechanisms of plasticity in the brain and how the brain avoids catastrophic forgetting.
A
So what I wanted to do today is take an overview of learning and memory in the brain, but from the perspective of how it can do continual learning. I'll start all the way from development, get into different mechanisms of synaptic plasticity, then different interesting new ways that neurons can learn in supervised or unsupervised ways, and then also briefly talk about systems consolidation and the fact that we overwrite our memories to an extent when we recall them.
B
That's great. This is something we haven't really done in the past, and just as a reminder to everyone of one of our underlying motivations: the HTM sequence memory does do continuous learning, and it does it really well, and so one of our motivations is to see if we can take some of the principles that are in the sequence memory.
B
There are a number of aspects to it that lead to continuous learning, and then apply them to practical machine learning systems as well. So this is just sort of a background task of understanding what some of the other biological work on lifelong learning and continuous learning has found.
A
Right, thanks. So, starting from the very beginning: one thing that people are exploring in deep networks and so on, but that I think hasn't been emphasized enough or isn't very well explored, is the fact that the brain is not a blank slate with a number of random connections in it. It has a very rich, detailed architecture, which facilitates the compartmentalization of different tasks, and learning and memory in those parts. So this figure here shows schematically... can you guys see my mouse?

C
Yes.
A
So this figure shows schematically, for example, how retinal ganglion cell axons from the retina are guided to different parts of the superior colliculus, here in the frog. This is based on very early work on chemical gradients that exist both in the retina and in the target area. These chemical gradients can repel or attract axons that are growing, axons and growth cones, to go in different directions and reach different targets. And here you see in the retina there is ephrin-A and ephrin-B, and these create...
A
Dorsal
ventral
and
temporal
sore
temporal
medial,
depending
on
the
anatomy
of
the
animal
gradients
and
the
combination
of
these
gradients,
direct
axon
growth.
So
the
growth
of
axons
is
already
very
targeted
and
as
a
dynamically
regulated
process,
for
example,
in
most
animals,
axons
overshoot
to
an
extent
their
target
area
and
we're
here
for
examples
of
terminations
onward.
A
Where
you
want
the
accidents
to
me,
the
overshoot
them
because
these
chemical
gradients
serves
for
right
and
then
there
are
additional
refining
mechanisms
using
both
like
entrance
within
the
within
the
neuron
and
and
also
additional
factors,
additional
molecular
factors
that
are
extrinsic
to
refine
that
and
prune
over
extended.
Synapses
put
them.
You
know,
put
them
roughly
in
the
right
area
and
dynamically
between
them
further
with
neurotrophic
factors,
for
example,
so
synapse
selection
isn't
just
a
learning-based
thing,
its
developmental
saying
to
large
extent
and
I'm,
not
gonna,
go
over
a
lot
of
what's
in
here.
A
But
what
I
wanted
to
say
about
this
is
that
you
have
a
lot
of
molecular
mechanisms,
a
lot
of
molecular
mechanisms
involved
in
this
sort
of
selection,
of
which
synapses
get
to
stay
and
which
ones
have
to
leave.
So
these
are
talking
about
brain
derived,
neurotrophic
factor,
which
is
one
of
these
factors
that
is
excreted
by
postsynaptic
neurons,
and
this
is
an
axon,
that's
sort
of
growing
in
that
direction
and
it's
pushing
out
little
butonce.
A
D
Before you go on... I don't know if you were done with this slide, but I want to make a point. This overshoot is interesting. We know in the cortex that the ends of dendrites are continually overshooting throughout your life, meaning they're always growing and reaching out, and if they don't find anything to connect to, they retract again. So this is an important part of continuous learning; it's not just a development issue. I...
D
Don't
know
that
in
this
particular
example,
what
whether
it
has
been
in
the
cortex,
that's
a
strategy
that
the
brain
uses
all
the
time
is
this.
These
things
are
always
growing
out
and
retracting
and
growing
out
and
retracting,
and
and
just
to
remind
in
our
models.
We
we
don't
need
to
model
that,
because
that's
just
really
changing
the
size
of
your
your
potential
synapse
pool,
and
so
we
have
other
ways.
D
We don't need to model the growing part, but you can model it by just expanding or contracting your potential synapse pool. I just wanted to make the point that I didn't know this happened during development like that, that was an interesting picture you showed, but it is happening all the time in the cortex.
D
Well, I think the evidence is they do the following. If they find some synapses that are useful, and that's kind of like the complex diagram on the right that Aris is showing, but basically you can just think: if they find something, then Hebbian learning says, I can reinforce this because it's useful for me. Then they grow from there, and if they find more, they establish synapses and keep growing. I don't think they just sort of go in the same direction.
D
I,
don't
think
it's
at
that
point.
There's
no
I,
don't
believe
in
the
cortex
there's
any
of
these.
The
gradients
that
are
being
followed,
it's
more
like
hey
if
I
found
something
interesting
over
here.
I'll
keep
looking
over
here
if
I,
don't
I'll
retract.
So
the
the
extent
of
the
dendrites
in
in
live
brains
can
grow
and
we
track
very
dynamically
day
to
day,
but.
A
Maybe during adulthood. I don't know if this is specific to newly forming ones in the hippocampus or anything like that, but yeah. So these developmental processes are initially much more coarse-grained than what happens in the adult when you're growing new synapses or dendritic segments, right? This overshoot can be fractions of a millimeter, whereas you're talking about something much smaller, much more local. Yeah.
D
On the overshooting part, yeah, I don't think we can make an equivalence between cortex and other parts like the frog's superior colliculus, I want to make that clear. But the general idea that you grow until you find something useful and then establish that holds throughout the brain, and as soon as some cell loses useful synapses on some dendritic branch, that dendritic branch usually disappears, it just retracts again. So maybe it's different mechanisms at different times, but this is a general strategy of dynamic... oh.
A
So this developmental component is also important, for example, for creating distinct cortical areas. Here, these sort of contoured colors in the telencephalon represent gradients of different factors in the ventricular zone, that is, different gene expression. You see these gradients in gene expression that can form in all sorts of different ways, and molecularly, through interesting feedback mechanisms, this can lead to very sharp distinctions arising from these vague gradients. And this also happens in... sorry, maybe this is information overload.
A
Looking at a little more detailed view: we all know that synapses can potentiate or be depressed, and like we discussed before, concurrent pre- and postsynaptic firing can lead to long-term potentiation. This is where the early work of Eric Kandel in Aplysia sort of all started.
A
Showing
that,
for
example,
activation
of
the
postsynaptic
cell,
we
can
go
and
activate
any
postsynaptic
cell
trafficking
of
and
the
post
synapse,
making
it
more
active
at
least
to
this.
Isn't
here?
Oh,
it
is
here
actually
and
in
the
a
receptor
activation,
which
means
that
when
the
postsynaptic
component
is
depolarized,
this
kicks
out
magnesium
ion
in
the
NMDA
receptor,
which
allows
which
allows
it
to
lead
to
an
influx
of
calcium,
which
trick
there's
a
lot
of
a
lot
of
plasticity
mechanisms
both
locally
and
globally
through
a
gene
transcription
here.
A
So,
for
example,
is
one
of
the
earliest
known
factors
that
are
activated
when,
when
you
get
this,
this
type
of
plasticity,
Cole
creb-
and
this
leads
to
a
lot
of
protein
synthesis
in
the
synapse
when
many
synapses
actually-
and
it's
still
unclear
how
this
leads
to
selective,
selective,
synapse
potentiation.
Even
though
there
are
a
lot
of
interesting
theories
on
that
yeah.
D
Of course you can... for those who may not know this, I think an observation is worth making here. In Aplysia and other very simple animals, the nervous systems are so small that pretty much everything is genetically determined. I'm not sure this is true for every one of them, but every single neuron in Aplysia is accounted for, they can be numbered, and I believe most, if not all, of the synapses are that way too. And so in these very simple nervous systems...
D
This
connectivity
is
is
primarily
genetically
determined
and
then
all
memory
has
to
be
coming
through
synaptic,
potentiation
and
depression
because
it
doesn't,
it
doesn't
exhibit
those
things
we
were
just
talking
about,
like
continuous
growth
them,
you
know
dendrites
and
so
on.
So
when
you
get
to
more
complex
animals
that
there
isn't
enough
information
in
the
genetic
code
that
specified
the
connectivity
in
the
brain
they're,
just
not
it's
not
possible,
so
the
connectivity
has
to
come
about.
The
detailed
connectivity
has
to
come
about
later
through
through
learning
processes
and
and
this
this
for
many
many
years.
D
This
was
led
to
this
idea
that
only
learning
by
studying
animal
like
a
I
felt
like
while
learning
has
to
occur
through
join
a
synaptic
potentiation,
because
that's
how
it
occurs
in
simple
animals,
but
in
complex
animals.
My
cortex,
we
now
know
that's
not
true.
It's
that
the
most
learning
occurs
from
the
formation
of
new
synapses.
At
least
that's
so
I
just
want
to
make
that
very
clear
that
you
have
to
be
thinking
about
these
things.
D
When
we
talk
about
learning
in
these
simple
animals,
it's
always
going
to
be
like
this,
but
in
complex
animals
on
top
of
synaptic
potentiation
depression,
you're
gonna
have
all
this
synaptic
Genesis,
which
is
the
it's
huge
things,
I
just
I'm,
not
sure
everyone
listening
to
this
will
understand
that
distinction.
So
I
just
thought
I'd
make
it
clear.
Sorry.
F
D
I don't know of a clear dividing line. My guess is that even in our brain, some of the early stuff, like your spinal cord and brain stem and so on, may be genetically determined. I don't know that; I'm guessing, so I don't know of any clear distinctions. I do know that in very simple animals, and this is the reason Kandel and others studied Aplysia, you can literally count: every animal has the exact same nervous system, and therefore, you know, somewhere along the way...
A
Yeah, thanks for pointing that out. First of all, interestingly, another reason for Aplysia, and the squid at the beginning, was that their neurons and axons are huge. You can almost see them with your naked eye, so they're much easier to record from when you have limited technology, like in the fifties and sixties. I'd also like to point out that there's an intermediate component to synapse generation and connectivity in complex, cortical animals, in the sense that it's not just genetically determined.
E
I just have a question on LTP. The basic mechanism of LTP, as per my understanding, is not through gene expression, right? You're showing there is an influx of calcium, and more calcium is going to add an AMPA receptor there. So there is no gene expression involved at that stage, or is there?
A
There
is
its
there's,
both
short-term
and
long-term
components
to
it
so
yeah
the
long-term
components
actually
come
from
a
reef
or
Singh
structural
or
creating
structural
elements
in
there,
the
pre
and
post
synapse-
and
these
require
gene
expression.
So
there
are
some
interesting
things.
I
think
that
there
are
like
some
of
these
elements
are
just
sitting
around
there
that
have
been
you
know
recently,
recently
synthesized
so
like
proteins
and
all
mostly
proteins
that
can
be
phosphorylated.
A
This
just
means
that
they're
altered
slightly
by
activity
like
calcium
influx
and
they
lead
to
local,
local
creation
of
of
these
scaffolding,
proteins
and
other
reinforcement,
proteins
and
then
there's
the
other
component,
which
leads
to
gene
expression,
and
this
we
helps
reinforce
the
synapse
as
well,
but
it
also
sends
because
once
you
get
gene
expression
here,
the
the
nucleus
doesn't
know
where
the
signal
came
from,
doesn't
know
which
segment
it
came
from.
So
it
sends
these
differentiating
factors
everywhere.
A
To
all
spines
I
think,
and
then
this
leads
to
like
further
potential
potentiation.
That
happens
later.
So,
if
you
activate
once,
if
you
potentiate
one
synapse,
you
can
potentiate
other
synapses
in
the
same
neuron.
So
this
leads
to
like,
let's
say,
second-order,
associative
learning,
for
example,
but
the
the
gene
special
coupon
is
very
crucial
to
a
long-term
potentiation.
D
Let me make sure I understood that answer, because we were talking a moment ago about gene expression in the sense of genes defining the architecture of the brain, and we're not talking about that now. Maybe that's obvious. Now we're talking about the fact that when the synapse potentiates, anytime you've got proteins involved, genes are expressed, and there are genetic mechanisms behind all this protein machinery. But that's not saying it has anything to do with the architecture; at that moment the cell is just building things. Alright.
A
Okay,
alright
and,
of
course,
in
addition
to
long
term
potentiation,
you
have
a
long
term
depression
which
happens
when
there's
a
let's
say,
inadequate,
pre
and
post
firing
like
an
adequate
coincidence.
So
you
know
this
they're
out
of
time.
The
timing
is
off
or
the
there's
just
not
enough
input,
that's
driving
the
postsynaptic
cell
and
then
there's
another
number
of
genetic
and
molecular
mechanisms
that
that
lead
to
depression
of
that
synapse.
So
the
problem
is
that
I'll
just
stick
here,
because
some
people
don't
get
confused
by
the
next
slide.
A
Basically,
and
also
this
only
allows
you
to
form
sort
of
short-term
allows
you
to
form
correlation
correlations
of
things
that
have
been
very
close
in
time
to
each
other.
So
for
distal
learning,
where
you
have
to
learn
associations
between
the
events
that
occurred
within
hundreds
of
milliseconds
or
seconds
you,
you
can't
really
do
that
with
this
vanilla,
heavy
and
learning
style.
So
you
need
more
components
and
one
component
is
homeostatic.
A
Which
prevents
this
runaway
age
and
make
sure
that
not
all
synapses
are
recruited
when
there's
a
coincident
activity
and
also
that
the
synapse
don't
lead
to
the
reinforcer
Nexus,
don't
lead
to
runaway
excessive
activity
in
neurons.
So
this
there's
a
little
stuff
that
I'm
sort
of
presenting
where
there's
a
vague
controller
signal.
That's
a
the
in
that
controls.
The
synaptics
strength.
A
Intracellular and extracellular stability signals coming from other sources create negative feedback loops that stop that. Because, as I mentioned before, you can't learn associations longer than what the pre- and postsynaptic STDP timing window allows, which is on the order of a few milliseconds at best. So you can use what's called three-factor plasticity, which is when neurons are co-active... let's see. So here you have a pre- and a postsynaptic neuron.
A
So
there's
a
the
green
green
is
pretty
synaptic
orange
is
postsynaptic
and
you
can
have
your
standard
and
hebbian
plasticity
occurring
here,
but
you
can
have
instead
of
this.
Instead
of
this
activation,
leading
to
immediate
reinforcement
of
the
synapse,
basically
create,
what's
called
an
eligibility
trace
and
the
eligibility
trace,
where
you
have
something
basically,
the
polar,
a
local
depolarization,
that's
sustained
over
long
periods
of
time
by
long
I
mean
like
hundreds
of
milliseconds
and
well.
A
Yes, right. Thank you. So the idea here is that you can create more interesting associations than just between things that occur at the same time. You can have things that are predictive, or post hoc, for example: learn things that happen distally in time by adding this gating mechanism. That's why... yeah.
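The eligibility-trace idea described above can be sketched in a few lines. This is a minimal toy model of my own, not anything from the talk's slides, and all constants (trace time constant, learning rate, step counts) are illustrative:

```python
import math

def run_three_factor(coincident_steps, gate_steps, n_steps=100,
                     dt=0.01, tau=0.2, lr=0.5):
    """Three-factor plasticity on one synapse: pre/post coincidence sets a
    decaying eligibility trace, and the weight changes only when a third
    (gating / neuromodulatory) signal arrives while the trace is alive."""
    w, trace = 0.0, 0.0
    decay = math.exp(-dt / tau)        # trace decays over ~hundreds of ms
    for step in range(n_steps):
        trace *= decay
        if step in coincident_steps:   # Hebbian coincidence tags the synapse
            trace = 1.0
        if step in gate_steps:         # the gate decides whether to commit it
            w += lr * trace
    return w

# Coincidence at t = 0.1 s, gating signal at t = 0.4 s: far outside a plain
# STDP window, but still within the lifetime of the eligibility trace.
w = run_three_factor(coincident_steps={10}, gate_steps={40})
```

With no gating signal the trace simply decays away and the weight never changes, which is the point of the mechanism: coincidence alone marks the synapse, but something else decides whether the association is kept.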
D
Jumping in: this is an advantage if we're trying to build systems that do lifelong learning. Learning is not going to be a continuous function. We learn all the time, every time you do anything you have some memory trace, but if you were building a true intelligent robot or something, you'd want to tell the machine: there are some things that occurred that are more important to remember, and you don't want to forget them, and so you have to have this gating factor. This is really important.
D
We're going to form a permanent memory here; something else is temporary, we're going to forget it in a day. We don't have anything like that in any of the work we've done, but it would be easy to add. I just wanted to put that out there; I'm trying to raise this up out of the detailed neurobiology a bit, I hope you don't mind.
D
You're using the same mechanisms of synaptogenesis and synaptic plasticity, so you're still doing this sort of thing at the level of the synapse, but synapses are very complex. Some go away, right? If you don't use them, they disappear tomorrow; that happens all the time. But some get, like, permanently glued, you know, super-glued: okay, this one's never going to go away, and you're going to stick with it for the rest of your life. But it's still a synapse.
D
So it's a variation of the same thing; it's just the permanence of that synapse. It's almost like... I read a paper once where people talked about synaptic plasticity and the weight of a synapse, and someone said, you know, it doesn't really work like that; it's really how permanent the synapse is that's the most important thing. So, same mechanisms, but don't think about it as just a weight going up and down.
D
Ultimately, it's a structural change to the synapse. You know, these channels, the genes that are expressed: physically, a synapse that is very permanent gets bigger and thicker and physically looks more robust. There would obviously be genetic factors and proteins behind that; it has a different protein structure and a different molecular structure that keeps it permanent.
B
If you go back to that slide: you mentioned sort of homeostasis, and I think there are at least two different types of homeostasis that you might have mentioned. One is kind of homeostatic activity, so that the overall activity of the network is kept within a certain range, and of course, in our systems we do that by imposing sparsity, which essentially puts some limits on how many of these cells can be active at a time.
B
Then there's kind of homeostatic plasticity, which might be some kind of bound on how much learning can actually occur in a given dendritic segment and so on. We've talked about that before, but our temporal memory models don't explicitly have something like that: something that says, you know, if you grow some synapses here, you have to decrease other synapses there, keeping kind of a bound on the total number of synapses or total synaptic strength.
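That kind of bound is easy to state as a toy rule. The sketch below is one assumed form such a constraint could take, not something from any Numenta codebase; the budget value is arbitrary:

```python
def homeostatic_rescale(weights, total_budget):
    """Homeostatic plasticity sketch: if Hebbian updates push the summed
    strength on a segment past a fixed budget, scale every weight down
    multiplicatively, so strengthening one synapse effectively weakens
    the others and the total stays bounded."""
    total = sum(weights)
    if total <= total_budget:
        return list(weights)
    scale = total_budget / total
    return [w * scale for w in weights]

segment = [0.2, 0.2, 0.2]
segment[0] += 0.4                      # Hebbian reinforcement of one synapse
segment = homeostatic_rescale(segment, total_budget=0.6)
```

After the rescale the segment's total strength is back at the budget, and the reinforced synapse keeps its relative advantage while the untouched ones are slightly weakened, which is exactly the "grow here, shrink there" behavior being discussed.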
A
Right, of course. So, as was just said, yes, the activity part of homeostasis is regulated through inhibitory interneurons; it's saying, okay, this cell gets to fire, the other ones shut up. And the synaptic part is just as complex. I'm not very familiar with local mechanisms, for example within a dendritic segment: if one synapse is reinforced, whether there's a negative feedback mechanism that says, okay, the other guys should not be reinforced with this event. Yeah.
D
You know, it's almost inevitable if you think about it. As we believe now, this Hebbian plasticity is working within a local range of a dendritic branch. The typical number is something like 40 microns, and within 40 microns you can get about 40 synapses, and they can all coexist happily.
D
But if you try to add more synapses, there's no room for them, so they have to extend beyond that area, and then they kind of leave the synaptic integration zone and start acting independently. So there are physical constraints like that, which could enforce a limit: you can't have 200 synapses all contributing together on a dendritic branch to generating a dendritic spike. It just doesn't happen. It's like 40, something like that, and you can use that 40 any way you want, but you're not going to get 200.
A
Yeah, so I was going to just bring this up: so, energy consumption as well, implicitly and explicitly, I guess, but also the fact that spines, synapses, can only reach a certain size, right? And this is something that is determined genetically, for example; they cannot grow to, say, the size of a neuron. And there's also a scaling mechanism.
D
We've modeled this in our models with a single variable, permanence, which goes from zero, meaning there's no synapse, to, you know, 0.1, you're starting to grow, up to the point where you've got a minimal functional synapse, and up to one, which is the biggest you can get. We've always felt that there have to be other factors, ultimately, in the brain to represent more of its true permanence, that some synapses would never go away. But anyway, we've modeled this with a single variable, and it's worked pretty well so far. Yeah.
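As a rough sketch of that single-variable scheme (the threshold and increments here are illustrative placeholders, not Numenta's actual parameters):

```python
class Synapse:
    """A synapse modeled by one scalar 'permanence' in [0, 1]: below the
    threshold it exists only as a potential synapse; once permanence
    crosses the threshold it becomes a functional (connected) synapse."""
    CONNECTED_THRESHOLD = 0.5          # illustrative value

    def __init__(self, permanence=0.0):
        self.permanence = permanence

    def connected(self):
        return self.permanence >= self.CONNECTED_THRESHOLD

    def reinforce(self, delta=0.1):    # synapse was useful: grow it
        self.permanence = min(1.0, self.permanence + delta)

    def punish(self, delta=0.1):       # unused or wrong: shrink, maybe to zero
        self.permanence = max(0.0, self.permanence - delta)

s = Synapse(permanence=0.45)           # growing, but not yet functional
s.reinforce()                          # crosses the threshold: now connected
```

The design point Jeff makes is that the scalar tracks how established the synapse is rather than an analog weight: connectedness is binary, and the scalar only governs how easily the synapse appears or disappears.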
A
So this is spike-timing-dependent plasticity, which is Hebbian learning where the pre- and postsynaptic activation happens within a certain window, right? And if it happens outside this window, basically you're not going to reinforce the synapse. But you need to make this more sophisticated, for the reasons I mentioned before.
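A common pair-based formalization of that window looks like the function below; the exponential form and the constants are the textbook idealization, not anything taken from the slide:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-based STDP sketch: dt = t_post - t_pre in seconds.
    Pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    and the effect decays to nothing outside a window of a few tens
    of milliseconds (tau = 20 ms here)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Evaluating it at half a second shows why the vanilla rule cannot capture behavioral-timescale associations: the weight change is already vanishingly small well before the timescales discussed later in the talk.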
A
The
input
which
basically
says
that
this
so
that,
for
example,
the
instructive
pathway,
could
be
an
error
signal
that
says
this
is
a
desired:
the
desired
and
output
versus
the
actual
output
of
the
neuron.
So
let's
say
that
you
want
to
make
this
neuron
that's
giving
this
input
you
want
to
make
it
shut
up.
A
So
basically
you
get
an
error
signal
that
comes
that
generates
of
dendritic
Spike,
and
this
can
alter
the
the
firing
of
the
neuron
in
response
to
that's
feed-forward
it,
but
such
that
it
goes
up
or
down
to
minimize
that
error,
and
then
they
have
behavioral
timescale,
plasticity
which
I'll
get
into
in
a
second,
which
is
very
interesting.
So
in
terms
of
the
supervised
learning
you
can
have
this
in
various
ways.
A
And,
of
course,
you
can
involve
inhibitory
circuits
here
to
make
this
more
sophisticated
and
then
there's
the
concept
of
behavioral
timescale,
synaptic
plasticity,
which
is
something
that
was
very
recently
discovered
in
the
hippocampus,
which
does
not
require
this
hebbian
Association
component.
So
what
happens
here?
I,
don't
know
probably
Jeff
and
some
other
people
probably
familiar
with
this.
What
happens
is
that
let's
say
you're
a
neuron
in
ca1
and
the
hippocampus
and
you
get
a
bunch
of
excitatory
inputs
from
ca3
or
the
internal
cortex,
but
because
of
excitatory
inhibitory
balance,
and
you
basically
don't
fire.
A
This is a very quick mechanism, so it can happen within, like, one-shot learning, basically as soon as the event is experienced. And also, while they're not very sharp, the resulting fields are predictive in a sense. So you can create predictive representations using this plasticity mechanism, due to the fact that the calcium plateau is long and that you have these temporally overlapping inputs coming in. Is everyone on board?
D
You know, the Purkinje cell... the Purkinje cell is a very unusual neuron, but if we were to look, you would see similar diagrams like this for cortex. And of course, we know that excitatory synapses never form on the cell body; that's an error, it doesn't look like that. They're always on the dendrites, and as you know from our work, the dendritic compartments are really important.
D
So
a
lot
of
this
literature,
you'll
read
about
synaptic,
plasticity,
completely,
I
would
say
95%
of
it
completely
ignores
the
dendritic
properties
and
dendritic
spikes
and
the
whole
thing
that
we
put
into
the
temporal
memory
algorithm.
So
you
have
to
be
very
careful
when
reading.
If
you're
reading
this
literature,
you
have
to
be
very
careful,
the
keys
about
tease
out
like
sometimes
what
they
describe
doesn't
really
doesn't
apply
at
all,
or
it
plays
differently
in
those
situations.
So
just
this
would
be
very
cautiously.
Read
these
things.
Do
you
think
the
people?
D
This
is
how
the
neurons
learn.
In
reality,
they
it's
much
more
complex
in
base
because
you
have
to
accommodate
the
dendritic
activation
zones
which
they
don't
talk
about.
So
memory,
if
you
say
like
how
two
neurons
store
information,
if
you
look
at
the
temporal
memory
algorithm,
we
have,
it
is
far
more
sophisticated
in
an
information
point
of
view
than
almost
all
these
other
models
you
read
about,
but
it's
much
simpler
at
the
detail
level
like
we
don't
model
the
complexity
of
these
synapses,
but
we
model
the
larger
scale
effects
of
dendritic
segments.
D
And
so
when
you
read
about
the
literature
of
memory
formation
and
synapses,
almost
nothing
will
have
that
information
in
it
and
you
just
have
to
you-
have
to
put
on
really
strong
filters
when
you
read
to
it.
I
just
wanted
to
make
that
clear.
It's
frustrating
at
times!
If
you
don't
know
that
people
just
ignore
the
dendrites.
D
Yeah, and I'm not being critical; I'm just pointing out that we now know it's important, and so we have to apply our own sort of careful filters when we read these things. Other people don't have theories about what the dendrites do. As far as I know, the temporal memory is really the first information-theoretic neural model that incorporates them, so now we know we need to keep thinking about that all the time.
A
And
one
thing
is
to
add:
for
example,
you
see
this
as
point
as
Jeff
pointed
out
this
inaccurate
sort
of
schematic,
where
the
excitatory
synapses
end
on
the
soma
these
in
these
papers.
It's
implicit
that
the
reader
has
read
enough.
You
know
neuroscience
literature
to
understand
that
this
is
just
a
cartoon
right
and
that
these
these
these
synapses
occur
on
Android
excitement's,
but.
D
...there's the input, and this is the instructive pathway, and so we have synapses on the soma at the bottom, in red, and one up top; it's all very cartoonish. And again, even on those apical dendrites there are hundreds or thousands of synapses up there, and multiple integration zones. So I guess I'm not trying to be critical here; I'm just pointing out that every scientist has their own focus.
D
If you're studying very specific synaptic properties, then you can ignore all this morphology of the cell and all this other stuff. But for someone trying to come at it with an overall theory, it can be very, very difficult to understand what the hell's going on, because they leave out all this information. Maybe you're supposed to know it, but if you just read these papers you don't know it, and no one points it out.
A
It's something that's kind of assumed, and of course, every paper makes a very incremental contribution, so it's like, here's a rough sketch of the idea we're proposing, and you need to keep that in mind as well. You shouldn't read any of these things as a literal interpretation. Yeah.
A
Yeah, I think what they mean is maybe things like what I mentioned earlier: the fact that there are several mechanisms in dendrites that compartmentalize information, and there are feedback loops and that sort of thing. I'm not really sure; they don't mention it specifically, or maybe I just missed it when reading the review, and this is not something I'm an expert on, so yeah.
A
Yeah, absolutely. You have to take everything with a grain of salt and ask yourself, what do I remember about this, am I going to read more about this? And yeah, there are the levels of abstraction neuroscientists work at: systems neuroscientists often don't look at dendrites and the complex mechanisms within them, because that's not the level of abstraction they're looking at.
F
The second question I had on that diagram: on the right-hand side, in image D, what is it trying to show? I assume that the blue bar corresponds to the blue bar that's over on diagram C, but I'm trying to figure out what it's trying to illustrate.
A
Alright, so I think this is the start of the trial, right, when the mouse starts approaching this area, I guess. And I think this wants to illustrate the crazy timescale of this: these calcium plateaus can last for seconds, which is unusual when we're talking about intracellular timescales in neuroscience. So basically, what the mechanism is learning is: the mouse starts here, and it gets all these excitatory inputs, right? So picture these excitatory inputs spread out over here in time.
A
While the cell is inactive because of inhibition, this plateau occurs, and the plateau will reinforce synapses whose activations happened around that time. So this blue thing here means that, because this input occurred around three seconds, it gets a little bit potentiated; the one around four seconds gets more potentiated; and the one around five gets really potentiated. This means you can create this sort of ramp over here, which means that after this little blue bar, the cell slowly starts firing.
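That ramp can be reproduced with a one-line toy rule. This is an illustrative reading of the mechanism, not the published model; the seconds-scale time constant is the point, and the exact numbers are made up:

```python
import math

def btsp_weights(input_times, plateau_time, tau=1.0):
    """Behavioral-timescale plasticity sketch: a seconds-long plateau
    potentiates each input in proportion to how close in time its
    activity was to the plateau; no spike coincidence is required.
    Inputs nearer the plateau end up stronger, giving the ramp."""
    return [math.exp(-abs(t - plateau_time) / tau) for t in input_times]

# Inputs active at 3, 4 and 5 seconds; plateau at 5 seconds.
weights = btsp_weights([3.0, 4.0, 5.0], plateau_time=5.0)
```

The resulting weights increase monotonically toward the plateau time, which is the ramp in panel D: after learning, inputs that arrive earlier on the track already drive the cell a little, so the firing is predictive of the plateau location.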
F
Okay, that's what I was unclear on, because I was trying to think there was a progression there, that you get to see where the things are being integrated. Okay, so if D is just an expanded version of B, then I understand what's going on. Okay.
A
So, wrapping up very quickly: this is what the paper calls complementary learning systems theory, otherwise known, more famously, as systems consolidation. It's the fact that there's an interplay between the hippocampus and the cortex: the hippocampus can learn new associations very quickly, but the storage of them happens in the cortex, and that is something that takes hours to days. I'd also like to point out that the representations in the hippocampus and the cortex are different. The hippocampus is extremely sparse...
A
There's very low activity there, as far as I know, and when an event occurs, the activation is not only sparse in terms of how many neurons are active, but the representations are also very compact, let's say. Whereas in the cortex the representations are sort of overlapping and distributed. So they're still sparse, but a certain memory, a certain concept, will activate several neurons, and then an adjacent concept, another concept, will activate many of those neurons as well.
D
You know, if you read about this, people sort of say, oh, the hippocampus remembers things really quickly — there's your episodic memory — and then it's stored in the neocortex. People used to believe this, that it's transferred to the neocortex. They still talk about it that way, and I never believed that, because there isn't anything that has the ability to transfer memories, and it's been shown now that it's not true — memories do not get transferred to the cortex.
D
It's not transferred from the hippocampus; they're learned simultaneously, but at a much slower rate in the neocortex. And what's surprising, though, is that if you remove the hippocampus, the learning in the neocortex doesn't occur. But I just want to make the distinction: it's not like you learn it in one place and then, over the course of a month, it's moved someplace else. That's not true. It's learned quickly in the hippocampus; it's learned slowly in the neocortex — separately formed memories — and yet, if you remove the hippocampus, the neocortical learning doesn't happen, for some reason no one knows yet.
D
This has been shown, and so it's more like — in my mind, this is the hypothesis — the hippocampus is designed for super-fast learning and the neocortex is designed for more long-term memories. It's physically much larger; there's a lot more storage area there. But they're both doing the same thing — they're just doing it at different timescales and different speeds. It's not like they're fundamentally different processes. I don't believe that.
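Jeff's "same process, different speeds" point can be sketched with two online learners that see the same data but use different learning rates — a standard toy model of complementary learning systems, not anything from the meeting. The data, learning rates, and names are all illustrative assumptions.

```python
def running_estimate(samples, lr):
    """Exponentially weighted running estimate, updated online:
    est <- est + lr * (sample - est)."""
    est = 0.0
    for x in samples:
        est += lr * (x - est)
    return est

data = [1.0] * 5  # one brief, repeated experience

# Same update rule, different speeds:
fast_hippocampus = running_estimate(data, lr=0.9)   # locks in after a few exposures
slow_neocortex   = running_estimate(data, lr=0.01)  # barely moves without many repetitions
```

The slow learner needs the experience replayed many times before it changes much — which is one way to read why consolidation takes hours to days and why replay matters.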
D
So a very momentary event can lead to a consolidated, more permanent type of memory, but it may take a couple of hours for the synapses to change. So once you kick it off, in some sense it may take hours to finish the process — that's one thing — but it doesn't have to be constantly reinforced.
D
And for other, longer-term things — I don't know the literature on this, and I don't know if anyone knows it, but maybe someone else does — there is this issue, this idea, that you need to replay these memories over and over again somehow in order to continue that learning process. Some people believe that happens during sleep; it's one hypothesis that during sleep you're sort of replaying these memories so that they continue to live on.
A
Right, so there is a lot of literature on replay during sleep — and not only during sleep; replay also happens kind of all the time, in a different flavor. So this is where you can get a source of continual input. And furthermore, when replay happens during sleep, it's not like the same memory, the same activation trace, is just being replayed over and over. They change, right? You're adding maybe a generalization of the memory, for example, or its relationship to other things that were experienced.
A
This is amazing, and I only remembered it like 10 minutes before, so I don't have any figures or anything. But basically what happens is: when you recall a memory, there's synaptic plasticity that happens that basically changes the synapse, and it changes your memory of it as well. This is one of the reasons why memories deteriorate, become more vague and abstract over time: as you recall them, they change, and the effect can go to the extent that they can become completely fictitious.
A
It's very easy to convince someone that they had an experience that they never had. So this is something I just wanted to discuss — I mean, maybe we're running out of time — but in the context of continuous learning: if this happens in the brain, it has to be very beneficial, right? So there's maybe a component there.
D
That's
funny
maybe
you're
right
there
is
I,
don't
know,
I
assume
the
opposite.
I
assumed
it's
detrimental,
and
it's
it's
just
a
it's
a
physics
problem
in
some
sense
and
I
guess
we
don't
know
which
of
those
approaches
is
correct
to
me.
This
reminds
me
of
semiconductor
memories,
where
you
might
say,
like.
Oh
here's,
a
summary
conduct.
It's
permanent
here,
there's
a
permit
semiconductor
memory,
but
when
you
read
from
it
you
destroy
this
the
memory.
D
Therefore you have to write it out again, and when you write it out again, you might make errors. So it could be — I've always assumed this, and I could be wrong — that this is not beneficial, but is actually just a limit to how the biological physics works. But I might be wrong.
A
It could be beneficial in the sense that, basically, your experiences are very correlated with each other. Every time you see a fire truck, it's going to be a very similar-looking fire truck, for example, and there's a continuity in time and space in your experiences. So what you'd basically be doing is reactivating a memory, and the memory still preserves the gist of what happened; it's just the details of it.
A
Those have changed, and that's probably because they're not important. And as you alluded to earlier, there are things that you never want to forget, and that does happen — these memories basically stay in us, which is kind of how PTSD works: you have these memories that basically do not want to go anywhere. Yeah.
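The "details drift, gist survives" idea can be sketched as a stored vector that gets re-written with a little noise on every recall — an assumed toy model of reconsolidation, not anything from the discussion. The noise level, vector, and recall count are all made up; the "gist" here is just the sign pattern of the trace.

```python
import random

def recall(memory, noise, rng):
    """Recalling re-stores the trace with small random perturbations:
    each recall is also a (slightly lossy) re-write."""
    return [x + rng.gauss(0.0, noise) for x in memory]

rng = random.Random(0)               # seeded so the run is repeatable
original = [1.0, -1.0, 1.0, 1.0, -1.0]

trace = original[:]
for _ in range(20):                  # twenty recalls over a lifetime
    trace = recall(trace, noise=0.1, rng=rng)

# Details drift away from the original...
drift = sum(abs(a - b) for a, b in zip(trace, original))
# ...but the coarse sign pattern (the 'gist') mostly survives.
gist_kept = sum((a > 0) == (b > 0) for a, b in zip(trace, original))
```

With small per-recall noise the fine values random-walk away, but flipping the sign of a strong component takes many recalls — which loosely matches "the gist is preserved, the details change."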
D
I
guess:
well,
it's
a
good
hypothesis.
We
I
guess
we
I
would
argue.
We
just
don't
know
why
this
rewriting
occurs.
It
could
be
beneficial,
it
could
be
detrimental,
but
if
you
remember
things
that
didn't
really
happen
which
which
we
do,
as
you
point
out,
you
just
have
someone
remember
something
over
and
over
again
and
every
time
you
remember
it,
you
sort
of
say
it
a
little
differently
than
they
actually
form
the
memory
of
the
incorrect
thing
and
they
swear
that
something
happened.
It
didn't
happen.
D
Reconsolidation — by the way, Kevin, this is not a new memory overwriting an old memory. This is you recalling something overwriting the memory. So just the fact that — say I asked, oh, what did you have for breakfast this morning, and you think of it — the thinking of it actually damages the memory of it. You're not adding any new information, you're not integrating a new experience with an old experience, you're just degrading it.
D
But you can see where it goes wrong. You take a witness who observed a crime — there's a man and a woman, and the woman committed the crime. By just talking to that person who observed the crime, over and over again, you can convince them that the man did the crime. Literally, you can do this, and that is not adding anything — it's just fake, it's wrong.
D
Along the lines of weird things like rewriting the memory — I brought this up before; some of you remember this — there are people who are called savants. A savant is someone who — some of them are pretty normal people — but they have some very unusual properties. One property is that often a savant can have this tremendous photographic memory. I mean, unbelievable things.
D
They
can
read
a
hundred
books
and
remember
every
word
on
every
page
I'm,
not
joking
people
who
can
do
this
and
I
know,
there's
another
guy's,
a
famous
artist.
They
flew
him
around
Rome
in
a
helicopter
for
20
minutes
and
then
over
the
course
of
the
next
month
he
drew
a
picture
of
Rome
and
he
had
every
building
at
every
window
in
the
right
number
of
windows,
incredible
details,
they
don't
generalize.
D
Well,
they
they
don't
see
the
bigger
picture
on
things
as
a
general
rule,
but
they
so
the
thing
is
story
about
this:
is
these
brains
are
pretty
normal
they're
not
like
some
alien
brain
like
a
human
brain
with
some
tweak
to
them,
and
it
shows
that
the
neocortex
with
that
tweak
can
form
this
incredibly
detailed,
precise,
huge
volume
of
data?
It's
it's
not
a,
and
so
it's
it's
like
the
same
basic
structure
of
the
brain
can
do
this
with
a
different
set
of
learning
rules
and
that's
another
interesting
tweak
it.
D
It
shows
that
this
there's
the
ability
to
remember
incredible,
complex
details,
that
is
the
structure
of
the
brain,
can
support
that.
But
we
generally
don't
do
that.
We
generally
do
this
more
of
them.
We
would
don't
want
to
remember
those
details.
We
want
to
do
more
at
the
generalization,
so
I,
just
it's
sort
of
long,
the
lines
of
them.
We
call
rewrites
the
memories
thing.
It's
like
wow,
that's
fascinating,
yeah,.
E
Yeah, one interesting fact about savants that shows that they can't generalize: they can't recognize people, because they have to see that same person with different expressions in order to recognize someone. So if they only see Jeff smiling and then Jeff serious, they won't be able to tell it's Jeff.
D
Okay, did you see my — can you see this PowerPoint here? You got that? Yeah? All right, so, quickly: this is this image from this Gu/Tank paper that we've used a lot. I'm just fascinated with this image, and we were talking about it on Monday. This is a grid cell module, divided into the six phase quadrants — I'm not going to explain this right now — but the question is: what are we actually seeing here?
D
Down in the entorhinal cortex, what are they actually picturing when they make this image? So I dug into some papers about this, and in these slides I'm showing the title of the paper and the authors up on the left-hand side. Here again, this is a Gu/Tank paper — another Gu/Tank paper, some other people too. Anyway, this particular image is rat cortex, the medial entorhinal — well, this one is a different part of the brain, but they're using this technique. The technique is: they have a —
D
Let
me
see
if
I
can
show
you
picture
over
here.
Like
you
see
my
cursor
either
there
was
a.
There
is
a
prism
that
they
have,
that
they
so
stick
and
oops
go
back
here,
but
in
this
case
they're
trying
to
image
this
piece
of
cortical
tissue,
which
is
hidden
in
this
fissure.
So
they
need
to
look
at
it
from
the
side.
So,
like
that,
the
vert
you
see
this
red
oval,
that's,
that
is
the
that
that
is
the
the
part
of
the
cortex
are
trying
to
look
at
and
it's
facing
to
the
right.
D
If
well,
meaning
that
layer,
one
would
be
on
the
right
of
this
figure
and
not
like
anything,
so
they
insert
this.
They
insert
this
prism
into
the
brain
of
the
rat
in
this
case
over
here.
This
is
not
good.
Here's
if
they're
doing
the
media
line
of
cortex
here
you
go.
They
want
to
insert
that
prism
in
there,
and
so
now
they
have
this
imaging
system
which
can
can
look
down,
reflect
sideways
and
look
at
that
that
part
of
the
cortex.
So
that's
that's
how
these
were
done
here.
D
You
see
it
in
a
bigger
picture
here:
they're
they
insert
this
prism
whoops,
damn
it
I'm!
Sorry
excuse
me
this
prism,
which
some
slides
of
tissue
apart
and
now
they
have
their
objective
of
their
their
microscope
up
here
now
these
are
two
photon
imaging
microscopes
and
I.
Didn't
good
I
was
less
interested
in
these
exact
physics,
how
this
works
and
more
interesting
about
what
they're
actually
measuring?
What
are
they?
D
What are we looking at? So I'll talk about that briefly. The general idea of how these microscopes work is: they're emitting two light pulses at the same time, relying on the photons coincidentally hitting the same spot, and they can do this really, really fast. These pulsed lasers pulse extremely fast — I think femtosecond-type time frames — and what they can do is measure the activity of a cell at a very precise point, one point at a time, and then they scan to the next point, and the next point, and the next point. So they basically measure a whole bunch of points really rapidly and build up a two-dimensional image. That's the basic way — and Aris, if I'm getting this wrong, let me know.
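Jeff's description of the scanning scheme — measure one point at a time, then assemble a 2-D image — can be sketched as a simple raster scan. This is my own illustration of the point-scanning idea only; the sampling function standing in for the tissue fluorescence is made up, and none of the real optics is modeled.

```python
def scan_image(sample_point, width, height):
    """Build a 2-D image by measuring a single point at a time,
    row by row -- the way a point-scanning two-photon microscope
    rasters its focal spot across the imaging plane."""
    return [[sample_point(x, y) for x in range(width)] for y in range(height)]

# A fake 'fluorescence' field standing in for the tissue:
img = scan_image(lambda x, y: (x + y) % 2, width=4, height=3)
```

Because every pixel is a separate measurement, frame rate trades off directly against image size — which is why these movies are of one thin plane rather than a volume.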
D
That's right. I just — the physics is interesting, but it wasn't my point. My point, my side investigation, was to figure out what they're actually looking at, and this image gives you a better sense of that. In this case they're showing, okay, they're going to image this section of some part of the rat's brain here, and then — this is the C panel — you see the very bright yellow line; that is essentially showing —
D
What
they're
looking
at
they're
looking
at
a
very
thin
as
you
point
out
on
Monday,
it's
almost
like
it's,
it's
extremely
thin,
much
smaller
than
a
neuron
there's
this
sort
of
plane
here.
So
if
the
cells
intersect
this
plane,
they
can
look
at
the
cell.
If
it's
a
little
bit
off
this
plane,
they
won't
see
it
at
all,
so
you're,
getting
this
very,
very
precise,
thin
slice
and-
and
in
this
case,
we're
looking
at
the
immediate
lentil
cortex.
D
D
So this, as you said on Monday, is a very, very thin slice that they're looking at, and they can capture it very rapidly, even though they're only looking at one point at a time. So they can make movies of this two-dimensional image by looking at one pixel at a time. That's what they're doing as we're going through it. That's all I'm going to say about the technique, and now I'll cover some interesting observations that came out of reading these papers. So here's another one.
D
If you saw these things and you knew this — and if I'm getting this wrong, let me know, because I think you've read some of these papers — up on the right it says grid and non-grid cells are intermixed. So if you look in the medial entorhinal cortex, there are cells that look like grid cells and cells that don't look like grid cells — we'll come back and talk about that in a moment — and they're saying, oh, the grid cells are clustered together. And going back to this image here —
D
This is neighboring tissue, from Tank, and we'd say, oh, these are grid cells and these are not grid cells. Well, I don't understand this, because in this paper the red dots are grid cells and the blue dots are non-grid cells, and they're saying, oh yes, the red dots are clustered — you can see how they're clustered — and they go through this statistical analysis showing the red cells are clustered. But they're clearly completely intermixed with the blue cells.
C
Part of this is coming from the experimenters with grid cells. One of the things they face that makes their job difficult is that they'll put an electrode in and detect zero grid cells; put another one in — zero; put another one in and detect ten, or something like that. So the point is that from an experimentalist's point of view, they're clustered, if you're trying to —
D
They were breaking the cells in the medial entorhinal cortex into grid cells, which they counted at eighteen percent of the cells; nine percent of the cells were border cells; one percent of the cells were pure head-direction cells; and 68 percent of the cells were non-grid spatial cells — meaning cells that respond specifically at a point in the environment, but don't seem to do so in any grid-like way. You know, say the animal is running along the track —
D
The
cell
will
fire
at
the
same
point
every
time
when
the
animals
along
that
track,
but
they
can't
seem
to
see
any
sort
of
greediness
to
it,
and
I
was
little
bit
blown
away
by
that
number,
like
sixty
eight
percent,
maybe
I
just
didn't
realize
that
this
you
know
so
many
of
these
cells
are
doing
something
else,
and
so
they
said
in
layer,
2
and
layer
3.
These
non-spatial
cells
are
dominant
in
use,
68%
of
them,
and
they
were
talking
in
this
particular
paper.
D
They
were
talking
about
how
they
we
met,
meaning
they
fire
in
different
locations.
When
you
change
the
room,
like
you,
change
the
shape
of
the
room
or
the
color
of
the
room
where
grid
cells-
don't
do
that,
so
it
seems
to
me
these
are
more
like
play
sells.
They
never
said
that
in
this
paper
they
just
said
they're
non-grade
spatial
cells,
but
they
seem
like
there's
not
a
lot
of
play.
So
I
was
like
you
know
it's
it's
it's
environment
dependent,
but
it's
in
the
particular
environments,
always
firing
same
place.
C
The other perspective is — some later work took this observation and said: all right, we need to stop classifying cells as being just a grid cell or just a border cell. Let's treat them more as a whole spectrum: some things are kind of grid-y, kind of border-y. And this pie chart looks a little bit different if you try to incorporate that fact, but still the overall spirit of what you said remains: a large percentage of them are these funky ones.
D
When
I
think
about
I
think
about
a
cortical
column,
I'm
thinking
well,
maybe
there's
something
like
this.
In
the
coracle
common
member
a
quarter
column,
it's
not
the
same
as
it
rental,
cortex
and
hippocampus,
but
we're
guessing
that
there's
gonna
be
a
lot
of
overlap,
and
so,
when
I'm
thinking
about
a
cortical
column,
I
usually
think
okay
I
had
this
beautiful
array
of
grid
cells
down
there,
like
I,
saw
in
the
tank
image.
That's
like
maybe.
D
70%
of
the
cells
were
doing
something
else,
and
you
know,
and
then
in
only
80%
of
grid
cells,
the
nine
percent
of
border
cells
and
remember
on
Monday
I
was
taking.
Maybe
grid
cells
are
sort
of
a
projection
of
something
else.
I,
don't
know
what
that
else
is,
but
here
I
got.
Seventy
percent
of
the
cells
are
doing
something
different
and
they're
sort
of
partially
commingled
with
the
grid
cells.
So,
like
it,
I'll
shoot,
that's
interesting
and
then
this
is
a
paper.
This
was
a
really
super,
well-written
paper.
I
love
the
way
they
did
this.
D
The
to
Moses,
but
I
they
had
these
little
summaries
in
them.
I
just
thought
it's
worth
going
through
them,
one
at
a
time
just
a
little
bit.
These
are
all
iCloud
things
that
I
didn't
really
understand.
So
this
is
the
only
place
I'm
starting
off.
This
is
the
only
place
that
I
I
heard
anything
about
the
depth
in
the
in
there
meant
like
what
happens,
is
you
go
deeper
and
up
and
down
in
the
depth?
Because
out
of
the
image
we
seen
so
far,
is
this
little
thin
sheet?
D
So
here
they
say
the
mean
discreteness,
because
distributions
recording
that
was
significant
alone.
The
mean
discreet
in
this
grid
spacing
what
they're
saying
is
that
the
grid
sub-module
seem
to
transcend
layers.
That
is,
as
you
go
deeper
into
the
code
up
and
down
to
the
depth
of
the
media.
A
toilet,
cortex,
the
grid
cell
property
of
that
particular
section
seems
to
be
maintained,
meaning
the
cells.
D
There
are
more
grid
cells
in
the
depths
as
you
go
deeper,
so
it
is
a
three-dimensional
structure
and
it
doesn't
appear
like
it
appears
out
there
grid
cells
as
you
go
deeper,
so
this
gets
back
to
the
issue
you
know.
I
was
talking
about
many
columns
and
potential
for
many
columns,
and
now
you
see
that
multiple
cells
representing
the
same
grid
depth.
Well,
there
are
multiple
cells
represented
the
same
greediness
as
you
as
you
go
in
and
out
of
the
depth
of
the
cortex.
D
You know, that's what the suggestion here was. The word "discreteness" in this case was referring to their phase and their orientation — you know, this cell responds at this phase at this point, that cell at this phase at this point — and as they moved down through the depth, the cells still shared all these properties, but when you went laterally, they changed.
D
This next point — no one ever talks about this, but you know there are two entorhinal cortices; it exists in two parts, and so does the hippocampus, on the two sides of the brain. And they pointed out that when you look at the two sides of the brain together, the mean values of grid scale and grid orientation were identical on the left and right. So the equivalent bits of module on the left side of the brain and the right side of the brain both have the same orientation — they were not independent.
D
They
talked
about
here
in
a
grid
cell
grid
fields
are
often
and
long
gated
in
one
direction
they
think
of
them
as
circular
dots.
But
they're
saying
this
is
often
not
the
case.
They
have
a
symmetric
grid,
firing
fields.
I
think
we've
seen
that
in
some
other
papers
too,
but
they
made
a
point
of
saying
this
and
they
say
that
the
the
a
symmetries
that
are
consistent
with
the
Milagros,
all
the
cells
in
the
module
seem
to
have
the
same
elongation
and
it
changes
per
environment
and
it
cuts
across
cortical
layers.
D
This is the issue we've talked about in the past: there's this progression between the modules of about 1.42 scaling, which turns out to be a doubling. So as you go from medial to dorsal — meaning from the inner part of the brain toward the surface of the brain — the modules sort of scale up, not incrementally, but in discrete steps, discrete translations. So the modules — grid cells of similar geometric properties — are discrete. Oh, and this is an interesting one.
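The "1.42 scaling turns out to be a doubling" remark is worth making explicit: 1.42 is close to √2, and since grid fields tile two-dimensional space, the area per field scales as the square of the spacing. A quick check (my own arithmetic, not from the paper):

```python
ratio = 1.42                 # reported spacing ratio between adjacent grid modules
area_ratio = ratio ** 2      # fields tile 2-D space, so area scales as spacing squared
# 1.42 ~ sqrt(2), so each module's fields cover roughly twice
# the area of the previous module's fields.
```

So a geometric progression of ~1.42 in spacing is exactly a factor-of-two progression in area coverage per step.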
D
Whoops
sorry,
so
they
looked
at
the
theta
frequency
that's
exhibited
within
a
grid
cell
module.
So
you
can
look
at
what's
a
sort
of
background,
theta
frequency
going
on
and
it
was
per
grid
cell
module,
meaning
the
next
grid,
so
hot
over
would
have
a
difference
rate
of
frequency,
but
within
that
little
module,
all
everybody
absorbs.
It
was
the
same
frequency
you
go
to
the
next
vigil
module.
They
all
have
a
you
know
their,
so
they
each
so.
There's
only
you
know
a
size
of
phase
size
if
you
will
for
each
grid
cell
module
there's.
D
Some
others
responded
independently
to
the
occasion.
Well,
these
are
oh
they're.
Just
this
is
more
evidence.
I
think
that
the
grid,
so
modules
are
actually
so
the
operating
independently
of
each
other.
If
I
recall
evidence
if
it's
imagine,
opera
infinity
obtained
do
metric
inputs
referring
to
here
when
they
change
like
the
size
of
the
room.
So
you
know
when
you
change
the
size
of
the
room,
so
incrementally
change
the
size
of
the
room.
D
The
grid
cell
modules,
the
grid
cells,
sort
of
them
will
sort
of
stretch
and
at
some
point
they
stop
stretching,
and
then
they
remap.
It
was
something
like
that.
Well
different
modules
stretch,
some
others
will
stretch,
some
modules
will
remap
earlier
and
and
when
they
we
mapped
and
when
they
stretch
is
independent,
the
modules
do
it
on
the
ground.
There's
no
coordination
between.
D
This
was
the
point
we,
this
is
I
think
was
where
we
got
the
idea
in
our
papers
and
multiple
grid
cell
modules
could
lead
to
a
unique
location,
so
they
say
in
there.
This
is
a
computational
simulations
have
shown
that
converges
or
signals
from
only
two
to
four
independent,
allowing
virtual
modules
may
be
sufficient
to
obtain
near
completely
mapping
and
downstream
place
cells,
meaning
that
did
you
reckon
unique
location.
So
they
were
arguing
too
only
in
two
to
four
independent
line:
grid
cell
modules,
I'm,
not
sure
I,
think
that's
not
enough.
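Why a few modules can jointly encode a long range of unique locations can be sketched with a residue-number-system analogy: if each module's phase repeats with a different (co-prime) period, the combined readout only repeats at the least common multiple of the periods. This is an illustrative analogy with made-up integer periods, not the continuous-phase model from the simulations the paper cites.

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def unique_range(periods):
    """Treat each grid module as a counter that wraps at its own period.
    The joint code (tuple of phases) only repeats after the LCM of the
    periods, so a few modules can disambiguate a long range of positions."""
    return reduce(lcm, periods)

two_modules  = unique_range([5, 7])          # joint code repeats every 35 bins
four_modules = unique_range([5, 7, 9, 11])   # repeats far less often
```

The capacity grows multiplicatively with each added module, which is why two to four modules can already cover one environment — and also why Jeff's worry applies: the same capacity has to be shared across all the environments the animal ever learns.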
D
I
think
that
it
worked
for
one
environment,
it
doesn't
work
for
multiple
environment
and
then
there
was
not
in
this
paper,
but
in
one
paper
and
I
forgot
where
it
was
my
last
point
here
and
I
just
put
in
an
extra
because
I
forgot,
where
it
came
from
that
one
of
the
papers
talked
about
that
there
was
this
axonal
clustering
that
exists
in
the
vegan
Angela
cortex,
which
is
exactly
what
we
see
me
in
the
neocortex,
the
neocortex,
the
many
columns.
One
of
the
definitions
is
one
of
those
defining
characteristics
and
mini-cons.
D
Is
that
the
axons
of
a
bunch
of
pyramidal
cells
group,
together
in
a
cord
or
a
bundle
and
of
all
of
those
pyramidal
cells
in
a
particular
section
of
a
mini
column,
and
probably
a
similar
sort
of
axonal
cluster
in
going
on
in
MVC,
which
which
is
dry
etching,
which
I'm
glad
to
see
they
didn't
use?
The
word
mini
column.
They
just
said:
there's
axonal
clusters
and
what
it
made
me
think
is
is
something
I've
been
coming
up
to
recently.
D
Is
that
if
we
think
about
the
cortex,
we
think
about
the
upper
layers
and
the
lower
layers,
the
upper
layers-
and
we
have
this
mini-com
to
span
to
Trina,
but
it
might
be
that
the
upper
layers
are
have
their
own
sort
of
mini
column,
operational
property
and
the
Laurel
Ayres
have
their
own
separate
many
common
operational
properties
and
they
just
there's
Co,
align
and
and
but
but
it
really
may
be,
like
they're
doing
separate
things.
Many
columns,
the
bottom
layers
and
the
many
comments
anyway
suggest
about
that.
C
So I'll jump in — we were probably supposed to start lunch right now, but there was one thing I was really curious to hear; maybe you said this quickly in the beginning. In the David Tank picture, where it's showing the grid cells laid out on the cortical sheet: do you see that as a small sampling of the grid cells, or do you see it as a large sampling of the grid cells that are there?
D
There might be a lot of them, yes, but I don't think they're different; I think they're copies of what we're seeing here. That would be — that's a bit beyond what the data show, but what the data were suggesting, since this is a slice, made me slightly more encouraged about the hypothesis —
D
I
had
that
that
in
a
cortex
there
may
be
five
to
ten
grid
cells
and
the
mini
column
that
are
all
basically
representing
the
same
saves
and
so
on,
they're
sort
of
and
this
hypothesis
that
might
be
like
the
temporal
memory
mechanism.
You
might
find
the
ten
cells
they're
all
basically
identical,
but
at
any
point
in
time
one
may
be
actually
will
not
be
active
like
our
temporal
memory,
and
it
gave
me
confidence
to
say
that
that's
not
a
stupid
idea.
D
It
may
actually
be
happening
at
Joanna
cortex
to
that
there
may
be
ten
cells
or
five
cells
underneath
this
red
one
here
and
there
might
be
there
might
be
cells
underneath
this
Blake
blank
are
you
here?
Do
we
just
didn't
intersect
them
so,
but
the
point
that
there
could
be
multiple
cells
of
pretty
much
the
same
receptive
field
properties
or
the
grid
cell
properties
stacked
on
top
of
one
another,
and
that
there
is
evidence
that
there's
even
mini
column
like
structure
here?
That
was
to
me.