From YouTube: Cortical Circuitry (Episode 13)
Description
In this episode, we'll walk through concepts introduced in "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World" (https://numenta.com/papers/a-theory-of-how-columns-in-the-neocortex-enable-learning-the-structure-of-the-world/). We talk about larger structures in the cortex that contain neurons, like layers and columns.
So far, there have been two major discoveries in HTM theory. One is about sequence memory; you could say that the last 12 episodes have all been about sequence memory. Now we're going to talk about the second major discovery in HTM theory: the cortical circuitry of layers and columns. We've known about layers in the cortex for over a hundred years. Here's Santiago Ramón y Cajal at his microscope. He created hundreds of beautiful illustrations of the brain, including this one displaying neocortical layers.

In this episode, I'm going to define the structures of the cortex that enable intelligence, but first, a quick primer. If you remember from previous episodes, the HTM neuron has different integration zones that affect how it behaves. There are three zones we currently define in HTM: proximal, distal, and apical. Based on input received in these three integration zones, the neuron can become active or predictive.
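As a rough intuition for how the three zones drive cell state, here is a minimal sketch. The function name, thresholds, and the simple "active beats predictive" rule are illustrative assumptions, not NuPIC code, and apical input is left out just as it is in the episode:

```python
# Minimal sketch (illustrative, not Numenta's implementation) of an
# HTM-style neuron whose state depends on its integration zones.

def neuron_state(proximal_overlap, distal_overlap,
                 proximal_threshold=10, distal_threshold=8):
    """Return 'active', 'predictive', or 'inactive'.

    - Enough proximal (feed-forward) input makes the cell fire.
    - Enough distal (contextual) input depolarizes the cell,
      putting it into a predictive state.
    """
    if proximal_overlap >= proximal_threshold:
        return "active"
    if distal_overlap >= distal_threshold:
        return "predictive"
    return "inactive"

print(neuron_state(12, 0))  # strong feed-forward input -> active
print(neuron_state(3, 9))   # contextual input only     -> predictive
print(neuron_state(3, 2))   # neither                   -> inactive
```

The point is only that the same cell can be in different states depending on *where* its input arrives, which is what the rest of the circuitry builds on.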
To make this all come together into a coherent theory, we're going to need to bring back the spinning brain. Remember when I told you that the neocortex is a homogeneous sheet? Well, there's a repeating structure in the sheet called a cortical column. You have hundreds of thousands of them in your brain, and they're all doing the same thing. Understanding this processing unit is crucial to understanding intelligence in the brain.

Cortical columns have a laminar structure. Each layer is another modular processing unit. It contains thousands of pyramidal neurons, all oriented in the same way. The integration zones of the individual neurons also translate to the layer itself. In computing terms, each layer is a configurable compute module with different parameters, processing different input that might have come from other layers in the column or other parts of the brain.
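The "configurable compute module" idea can be sketched like this. The `Layer` class, its parameter, and its toy activation rule are assumptions made up for illustration; a real HTM layer runs much richer algorithms over thousands of cells:

```python
# Illustrative sketch of a layer as a configurable compute module:
# its output is simply the state of its cells, which can in turn
# become the input of another layer. Not Numenta's implementation.

class Layer:
    def __init__(self, num_cells):
        self.num_cells = num_cells      # a configurable parameter
        self.active_cells = set()       # the layer's output: its cell state

    def compute(self, proximal_sdr):
        # Toy rule: a cell activates if its index appears in the input SDR.
        self.active_cells = {c for c in proximal_sdr if c < self.num_cells}
        return self.active_cells

# Two layers with different parameters, wired output -> input:
lower = Layer(num_cells=2048)
upper = Layer(num_cells=1024)
out = lower.compute({3, 100, 1500})
print(upper.compute(out))  # the lower layer's output is the upper layer's input
```

Two instances of the same module, configured differently and wired together, already hint at the circuits described later in the episode.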
Looking back at the cortical sheet, different groups of cortical columns are dedicated to processing different sensory modalities, like touch, sight, and hearing. Every cortical column in your brain contributes to object representation. As you observe a coffee cup with your fingers, cortical columns are processing your sensations over time. Each touch and movement helps to define the object, feature by feature, in space. As we move the object through our sensory field, many cortical columns processing sensory input represent the entire object from different perspectives.

Now, when you think of a coffee cup, neurons representing that idea activate in each cortical column, specific to that column's sensory experience with past coffee cups. Each of these cortical columns has a different representation of the coffee cup, yet they all work together to construct a generalized idea.
This is the brain's allocentric object definition, meaning objects are defined with respect to themselves. Your brain builds up a library of allocentric objects by interacting with the world through behavior. Humans learn object representations based on movement and collaborative sensory input over time. You don't remember every coffee cup you've ever used, but every coffee cup you've ever used has contributed to your idea of a coffee cup: the way it felt, the way it looked, even the way it smelled as it was poured full.
In future episodes, we're going to discuss details about how we think this works, but for now, let's talk about some properties of layers and columns that we can use to help define this interesting new computational space. A cortical column gets feed-forward input, which comes from the direction of the senses. A layer's input can be one of at least two types: proximal or distal (let's forget about apical for now). The layer's output is the state of its cells.
The output of a layer can project to the inputs of other layers, including itself. All layer input and output can be defined in terms of SDRs. Because of the inherent properties of semantic representations with SDRs, merging and splitting of axon bundles is okay, so the data flows in this diagram can break apart or join together between layers. The layer is unaware of the context of its input.
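Why is splitting and merging okay? Because SDRs are sparse, a subsample of an SDR still overlaps strongly with the original, and a union of two SDRs still matches each of its members. A quick sketch (sizes and sparsity here are illustrative, not taken from the episode):

```python
import random

# Sketch of why SDR "axon bundles" can be split or merged without
# destroying meaning. Sizes are illustrative assumptions.
random.seed(42)
n = 2048                                    # cells in the representation
sdr_a = set(random.sample(range(n), 40))    # ~2% sparsity
sdr_b = set(random.sample(range(n), 40))

split = set(random.sample(sorted(sdr_a), 20))   # half the bundle
merged = sdr_a | sdr_b                          # two bundles joined

print(len(split & sdr_a))   # 20: the split still matches sdr_a perfectly
print(len(split & sdr_b))   # near 0: it does not match an unrelated SDR
print(sdr_a <= merged)      # True: the union still contains all of sdr_a
```

Because downstream layers compare representations by overlap, a half-bundle still unambiguously identifies its source, and a merged bundle still matches both of its sources.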
So what does all this have to do with spatial pooling and temporal memory? Well, let's say that a layer in the cortical column is running those algorithms. I've just introduced a framework for cortical circuitry based on what we know about columns and layers in the neocortex today. In your brain, neuroscientists even have insight into how the layers within a cortical column are wired. Here is a commonly repeated circuit: a layer processes proximal sensory input, while its modulatory distal input comes from elsewhere. This lower layer performs spatial pooling and temporal memory, as defined in previous episodes, to create a representation that merges sensation with some modulatory signal, creating unique and comparable representations.
The layer's output is proximal input for another layer higher in the cortical column, which we call a pooling layer. The pooling layer creates a stable representation of something being projected from the input sensory layer over time. This circuit could be used to process sensations occurring at locations in space on an object being sensed. A location on an object can be represented as a distal input signal, in an SDR. This signal could be created in another layer of the cortical column, and we'll talk about ways it could be generated in future episodes.
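The two-layer circuit above can be sketched in miniature. Everything here, the pairing of features with locations and the union-based pooling, is a simplified assumption standing in for the real spatial pooling and temporal memory algorithms:

```python
# Toy sketch (assumed structure, not Numenta's implementation) of the
# sensory-layer -> pooling-layer circuit.

def sensory_layer(sensation, location):
    # Merge a feature SDR with a distal location signal by tagging each
    # feature bit with the location, giving a unique, comparable
    # feature-at-location representation.
    return {(bit, location) for bit in sensation}

def pooling_layer(outputs_over_time):
    # Stable representation: the accumulated union of everything the
    # sensory layer projected upward across successive touches.
    stable = set()
    for out in outputs_over_time:
        stable |= out
    return stable

# Three touches of an object: (feature SDR, location signal)
touches = [({1, 5, 9}, "loc_a"), ({2, 5}, "loc_b"), ({1, 7}, "loc_c")]
steps = [sensory_layer(f, loc) for f, loc in touches]
obj = pooling_layer(steps)
print(len(obj))  # 7 feature-at-location pairs represent the object
```

Note how the same feature bit (5) at two different locations yields two different elements in the object representation, which is why sensation alone is not enough and the location signal matters.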
The pooling layer's representation can be split and projected into adjacent layers in neighboring columns, as well as itself. This layer uses neighboring cortical columns' object representations as context for its own representation, sharing perception across many cortical columns.

You might be asking yourself how that location signal is being generated in that lower layer. I'll have some answers for this in future episodes, but first we're going to need to talk about grid cells in the next episode.