From YouTube: Spatial Pooling: Input Space & Connections (Episode 7)
Description
Finally! We're talking about the first major component of HTM Theory: Spatial Pooling! In this episode, Matt introduces SP with respect to the input space of a spatial pooler, and how it randomly creates connections to the input space.
HTM Forum: https://discourse.numenta.org/
SP pseudocode: https://numenta.com/assets/pdf/spatial-pooling-algorithm/Spatial-Pooling-Algorithm-Details.pdf
Intro music: "Books" by Minden: https://minden.bandcamp.com/track/books-2
Hello again, and welcome back to HTM School. I'm Matt Taylor from Numenta, and today will be the first episode on spatial pooling. What is spatial pooling, and why does the brain do it?

First of all, let's remember that everything I'm going to talk about today is based upon the biology of the neocortex, which is composed of a hierarchy of regions. Each one of these regions gets a bunch of input: millions of nerve axons being fed into it, which come from sensory organs or from other places within the brain.

I like to think about this input space sort of like a fiber-optic cable, where each fiber in the cable represents one of these nerve axons and whether that neuron is on or off. The cortex has no way of knowing what any of these nerve axons mean or where they're coming from at all. That's one of the big problems it has to solve: it has to find a way to normalize that input over time so that it can start learning sequences of patterns in that space.
Also, the size of the input space, the number of nerve axons, is variable, so a region could be looking at a small portion of nerve fibers coming into it, or a very large portion. The spatial pooling algorithm has to solve these problems, and it does so by accepting an input vector, like the fiber-optic cable we were talking about, and translating it into an output vector of a different size with a sparse number of activated bits. Now, the output vector of a spatial pooler represents minicolumns.
Minicolumns are columns of pyramidal neurons in the cortex, and they're really important when we start talking about sequence memory and how your brain recognizes temporal patterns of sparse distributed representations over time. But for the purpose of explaining how the spatial pooler works, we don't have to understand minicolumns at all, so we're going to put off that discussion for another day. I'm just going to refer to columns in the spatial pooler, and that's the representation we're going to deal with today.
So, in this input space, if you get two different representations over time and they have a high overlap score, meaning they're semantically similar, then the output that the spatial pooler creates to represent those pieces of data must also have a high overlap score. So if you get two similar inputs, you should get two similar outputs. The other side of that coin is, if you have two very dissimilar inputs, the spatial pooler should create two very dissimilar outputs. So we have to maintain those overlap properties of the input space in the output space that we are creating.
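To make that property concrete, here is a minimal sketch (not NuPIC code) of an overlap score, treating a representation as a set of active bit indices:

```python
def overlap_score(a, b):
    """Overlap score: the number of active bits two representations share."""
    return len(set(a) & set(b))

# Two semantically similar inputs share most of their active bits...
x = {1, 2, 3, 4, 5}
y = {2, 3, 4, 5, 6}
print(overlap_score(x, y))  # 4

# ...while two dissimilar inputs share few or none.
z = {40, 41, 42, 43, 44}
print(overlap_score(x, z))  # 0
```

A good spatial pooler preserves this relationship: high overlap in, high overlap out; low overlap in, low overlap out.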
Now, the spatial pooler is a learning algorithm. We'll see this in action in future episodes about spatial pooling, but this is sort of an introductory episode, and I'm going to talk mostly about the input space itself and how the spatial pooler maps its columns onto that input space. It's not going to involve learning at all; we'll talk about learning in the next episode. So let's first explore the spatial pooler's input space, and here is an example of one. In fact, this is an example that I am likely going to use for the next couple of episodes. First of all, this SDR that we see over here is not really an SDR; it could be a dense representation.
This is power consumption data from a gym, the kind you go work out in, over time. I'm encoding power consumption and time of day, and let me just be really explicit about this. Here's the power consumption, and that's the value we get from this little dot right here; it's represented in this bucket of bits by a scalar encoder. The time of day is taken from the timestamp, and the weekend flag is also taken from the timestamp. You might notice that these days are different from the other days; that's because those are weekends, and these are also weekends.

Anyway, the time of day is in this bucket of bits, and the weekend is the rest of these bits down here, so pay attention to those bits as we move along. As the power value goes down, you can see that power bucket jump up and change quite a lot; that's what we want to see. And as I'm progressing through each day, the other set of bits down here that represent the time of day just rotate periodically through their encoding space and then reset when the day resets.

Now, the weekend bits down here are going to change as soon as I get to the weekend. So this is Friday night at midnight, and then boom, Saturday morning. Friday night, Saturday morning. So all of this data, power consumption, time of day, and whether it's a weekend or not, is being represented in this input medium.
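The bucketing described above can be sketched with a toy scalar encoder. This is an illustration only; the function and its parameters (`n_bits`, `w`) are simplified stand-ins, not NuPIC's actual encoder:

```python
def encode_scalar(value, min_val, max_val, n_bits=40, w=5):
    """Encode a scalar as a contiguous run of w active bits inside n_bits.

    Nearby values produce overlapping runs, which is what gives the
    encoding its semantic similarity.
    """
    value = max(min_val, min(max_val, value))  # clamp into range
    n_buckets = n_bits - w + 1                 # positions the run can start at
    bucket = int((value - min_val) / (max_val - min_val) * (n_buckets - 1))
    bits = [0] * n_bits
    for i in range(bucket, bucket + w):
        bits[i] = 1
    return bits

low = encode_scalar(10.0, 0.0, 100.0)
near = encode_scalar(12.0, 0.0, 100.0)
far = encode_scalar(90.0, 0.0, 100.0)
overlap_near = sum(a & b for a, b in zip(low, near))
overlap_far = sum(a & b for a, b in zip(low, far))
print(overlap_near, overlap_far)  # nearby values overlap; distant ones don't
```

The time-of-day and weekend buckets in the visualization work on the same principle, just over different ranges.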
So, like I said when I talked about the fiber-optic analogy, I really like to think of this input space as a communications medium. There are so many different messages that could be sent across this space, more than there are atoms in the universe. What the spatial pooler is going to try to do in this input space is extract the spatial correlations in this data as it sees them over time.
So that's what we're going to look out for. One interesting thing is that this particular data set has a certain data signature when it's encoded in this specific way in this input space. The spatial pooler can handle different data signatures. For example, if I wanted to use the random distributed scalar encoder instead of just a scalar encoder, I could do that just fine. It completely changes the representation of the data in the input space, but the semantic meaning is still there, so the spatial pooler will still arrive at the same outcome.
It can get the semantic meaning from the bits whether they're in one continuous bucket or whether they're randomly distributed throughout the space like they are now. It also goes to show you how many different possible ways there could be to represent this data in this medium. This is not the only way; there could be hundreds of ways, thousands even, to represent even just one particular data set in a communications medium in a way that the meaning is encoded semantically, and the spatial pooler can pick that up.
A
So
now,
let's
investigate
how
the
spatial
polar
initializes
itself
to
the
input
space.
So
what
we
have
here
is
on
the
Left.
We
have
the
input
space
on
the
right.
We
have
the
spatial
pooler's
columns.
They
don't
have
to
be
the
same
number
and
what
the
spatial
puller
is
going
to
do
is
take
that
input
space
and
translate
it
translate
the
incoming
data
an
input
space
into
active
columns
in
its
representation
in
the
output
space.
The first concept I want to explore here is the idea of a potential pool. Every one of these columns has a different potential pool of input cells that it might be connected to. As I mouse over here, you see that each one is randomly, potentially connected to a different set of input space bits. This is a parameter that can be tweaked by changing the potential percent in the NuPIC spatial pooler initialization settings. So that's the number of potential connections that each column could have.
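Potential-pool initialization can be sketched like this. The function and parameter names here are made up for illustration; only `potential_pct` mirrors the potential percent setting mentioned above:

```python
import random

def make_potential_pools(n_columns, input_size, potential_pct, seed=42):
    """Give each column a random subset of the input space it may connect to."""
    rng = random.Random(seed)
    pool_size = int(input_size * potential_pct)
    # Each column samples its own random subset of input bits.
    return [set(rng.sample(range(input_size), pool_size))
            for _ in range(n_columns)]

pools = make_potential_pools(n_columns=8, input_size=100, potential_pct=0.85)
print(len(pools[0]))  # 85: each column can reach 85 of the 100 input bits
```

Because each column draws its own random sample, every column sees the input space through a slightly different window.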
Now, each one of these potential connections... I'm going to click this button to show permanences over here. Each one of those potential connections also has a permanence attached to it, and now I'm showing a heat map of what that permanence is. So bear with me as I click on this one cell and get a representation of the connections as well as the permanences. Let me do that here. Okay, so this is just that very first column in the spatial pooler.
It has a relationship with every cell in the input space, and that connection, or potential connection, has a permanence value associated with it for every potential connection in the pool. The white cells are the ones it will never connect to, so I misspoke: not every single cell, but every cell within its potential pool, in this case 85% of the input space. All the other cells that are colored have a permanence value associated with them.
For example, this red one right here that I'm looking at has a permanence value of 0.1. It is not connected, as you can see here; there is no dot there. A blue circle means there is a connection from this column to that cell in the input space, but in this case there is not, and it's because the connection threshold of 0.1 was not exceeded. The permanence itself was 0.1, but it was not greater than 0.1, so there's no connection.
If we move down here to the input space cell just beneath it, there is a connection to that cell, because the permanence value, 0.54, is above the connection threshold. That's what this little bar graph is showing you; as I hover over these cells, you can see that graph changing. None of these have a permanence value that's large enough to be connected, but all of these do have a permanence value that's large enough to be connected, because they are all above the connection threshold.
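The connection rule itself is tiny. Here is a sketch, assuming we hold one column's permanences in a plain dict (the values are illustrative, not taken from the visualization):

```python
def connected_inputs(permanences, connection_threshold):
    """Return the input bits whose permanence clears the connection threshold.

    A potential connection only counts as a real (connected) synapse when
    its permanence is strictly greater than the threshold.
    """
    return {bit for bit, perm in permanences.items()
            if perm > connection_threshold}

# Permanence per potential input bit for one column (illustrative values).
perms = {0: 0.10, 1: 0.54, 2: 0.09, 3: 0.71}

# 0.10 is not *greater than* the 0.1 threshold, so bit 0 stays unconnected,
# just like the red cell in the example above.
print(connected_inputs(perms, 0.1))  # {1, 3}
```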
So there you can see why every cell has connections when it is above that permanence threshold. I can change this up if I want to, just by changing spatial pooler parameters. For example, I'll change the potential percent to 0.4, so instead of each column being mapped to 85% of the input space, it will be mapped to 40%.
So now it's going to be 40%, and our initial number of connections is going to be much, much lower, as you'll see as soon as the spatial pooler re-initializes and I hover over these cells. There it is. It'll probably be clearer if I turn off the permanences. Yeah, so this is the potential pool now for each column, which is much smaller; it's only 40 percent, about half the size it was before, when I click on it.
Now let's change the connection threshold to, say, 0.7. Here we go. I'm going to turn lines back on, and we will click on one of these cells, and let's also show permanences. Here we go; let's turn that off now. As you can see on the right over here, our connection threshold is 0.7; that's the value that I just changed. Any permanence below 0.7 will not have a connection.
Anything above will have a connection. But one thing you might have noticed over here is that the number of connections hasn't really changed much from the last iteration of the spatial pooler, created before I raised that connection threshold. It was 0.1; now it's 0.7, but it didn't change the number of connections. You would sort of assume that if I make that threshold higher, I'm going to have fewer connections, but the spatial pooler will try to give a normal distribution of permanence values around that connection threshold.
So you have a lot of connections that are primed to either become connected or become disconnected, but they're sort of grouped in a normal distribution around that connection threshold. That's why you see that the number of connections doesn't really change much, even though I've changed the connection threshold.
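One way to sketch that behavior: draw each initial permanence from a normal distribution centered on the connection threshold, so roughly half the potential pool starts out connected no matter where the threshold sits. This mirrors the idea described above, not NuPIC's exact initializer:

```python
import random

def init_permanences(pool, connection_threshold, spread=0.05, seed=1):
    """Initialize permanences in a normal distribution around the threshold,
    clamped into the valid [0, 1] range."""
    rng = random.Random(seed)
    return {bit: min(1.0, max(0.0, rng.gauss(connection_threshold, spread)))
            for bit in pool}

pool = range(1000)
for threshold in (0.1, 0.7):
    perms = init_permanences(pool, threshold)
    n_connected = sum(1 for p in perms.values() if p > threshold)
    # Roughly half the pool is connected either way, which is why raising
    # the threshold barely changes the connection count.
    print(threshold, n_connected)
```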
So you can sort of see, if I'm interested in this bit for some reason, I can get an idea of which columns are currently mapped to it, or currently have connections to it. More interesting, I think, is if we go the other direction and look over here at the columns. In this case, this column has all of these connections: 123 connections out of its potential pool of 252, and some of those connections fall within the current input.
As we move forward and talk about learning, I'm going to talk about how columns learn based on the input they see: whether their connected synapses get reinforced, because they continue to see input on that connection, or get decremented in some way, because they never see any input along that connection. That's going to be important when we talk about learning, but for right now, the important thing I want to highlight here is the overlap.
There are 43 connections that this column has to the input space that currently overlap with this particular encoding, so that is the current column's overlap score. As we scroll through these other columns, you can see this overlap change, this value over here, as we go from one column to the next. Different columns are going to have different overlaps with this particular input at any point in time.
So these different columns will eventually learn to recognize certain spatial features in the input as they learn. But for right now, if I wanted to activate some of these columns based upon this specific input, I might decide where that overlap threshold is, whether it's 40, 50, whatever, and say that any columns with an overlap above that threshold with this input space, I'm going to call active columns. You're going to see that in this next visualization that I'm going to show you, with a random spatial pooler.
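Selecting active columns by an overlap threshold can be sketched like this (toy data; a real spatial pooler layers learning, boosting, and inhibition on top of this):

```python
def column_overlap(connected, active_input):
    """Overlap score: how many of a column's connected bits are active now."""
    return len(connected & active_input)

def active_columns(columns, active_input, overlap_threshold):
    """Columns whose overlap with the current input clears the threshold."""
    return [i for i, connected in enumerate(columns)
            if column_overlap(connected, active_input) >= overlap_threshold]

# Three columns with different connected input bits (illustrative).
columns = [
    {0, 1, 2, 3},  # overlaps the input below on 3 bits
    {3, 4, 8, 9},  # overlaps on 2 bits
    {7, 8, 9},     # overlaps on 0 bits
]
active_input = {0, 1, 3, 4}

print(active_columns(columns, active_input, overlap_threshold=3))  # [0]
```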
So this red line is right now in time. At each point, the input space, which I've already talked about (the power, time of day, and weekend), is being encoded, and the active columns coming out of the spatial pooler are being shown on the active spatial pooler columns grid over here. Something you can already note is that first property I talked about for spatial pooling, which is a fixed sparsity.
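That fixed sparsity comes from competition: rather than a fixed overlap threshold, a fixed number of columns with the highest overlap scores win. A toy sketch of that top-k selection (illustrative only):

```python
def top_k_active(overlaps, k):
    """Fixed sparsity: activate exactly the k columns with the highest overlap."""
    ranked = sorted(range(len(overlaps)), key=lambda i: overlaps[i], reverse=True)
    return sorted(ranked[:k])

# Whatever the input's own sparsity looks like, exactly k columns win.
sparse_input_overlaps = [1, 0, 2, 0, 0, 1, 0, 3]
dense_input_overlaps  = [5, 9, 7, 8, 6, 9, 7, 8]
print(top_k_active(sparse_input_overlaps, k=2))  # [2, 7]
print(top_k_active(dense_input_overlaps, k=2))   # [1, 5]
```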
So we can see that, even though the input space may have a different sparsity, the spatial pooler columns over time will have a fixed sparsity. We'll get a normalized sparsity from the spatial pooler, and the semantic details of the data will still be encoded in the representation that the spatial pooler is creating. Let me show you some evidence of that actually occurring. One of the things I'm doing here, you see these sort of bouncing balls over time: there are the yellow balls and the green balls.
What I've plotted here on this line is the top 10% of the previous encodings that we've seen all along this data space, the ones most similar in overlap score to this particular encoding that we're seeing right here. So this encoding is most similar to everywhere you see a green dot over here, which makes sense: it's about the same time of day and about the same power level as at previous times, on previous days, that it has seen. So that's just the encoding of the input space, and we know that it contains semantic value.
You know from the previous lessons about encoders that it contains semantic value, because we encoded it to have semantic value: the power, the time of day, and whether it's a weekend or not. To compare how well the spatial pooler is doing at representing the semantic value in that data, I've also plotted the exact same thing for the active columns in the spatial pooler. So at this point in time, this is the state of the spatial pooler; these are the active columns.
This SDR is being compared to every other SDR that it has created over time, and the top 10% most similar in overlap, the ones with the highest overlap score, are being displayed on this chart above. So as we move along here, you should see that, as this data moves forward, those balls bounce sort of up and down with the pattern of the data. And there's one of the interesting things that you'll see.
We get a lot more dots occurring on the weekends. Now that the data we're seeing represents a weekend, we get many, many fewer similar representations from weekdays in the past, and more similar representations from weekends in the past. You can see that very explicitly here: at this data point right here, the top 10% most similar encodings and spatial pooler columns from the past are all on weekends. That shows us that, even though this looks like just some random scattering of bits in the spatial pooler columns, they're actually representing semantic meaning, and this is even without learning turned on. So if we were to turn on learning, the spatial pooler would be even better at recognizing the correlations in the spatial input.
We talked a lot about sparse distributed representations and encoders in this episode, and if you missed those previous episodes, no wonder you're lost, because there's a lot of good information in those that builds up to the concepts of spatial pooling that I'm going over in these episodes. So as always, if you're enjoying this and it's informative, please hit that like button and hit the subscribe button so you won't miss the next episode of HTM School. I really appreciate you watching.