From YouTube: Spatial Pooling: Learning (Episode 8)
Description
In this episode of HTM School, we talk about how each column in the Spatial Pooler learns to represent different spatial characteristics in the input space.
SP pseudocode: http://numenta.com/assets/pdf/biological-and-machine-intelligence/0.4/BaMI-Spatial-Pooler.pdf
Ask questions about this episode here: https://discourse.numenta.org/t/htm-school-episode-8-spatial-pooling-learning/1257
Intro music: "Books" by Minden: https://minden.bandcamp.com/track/books-2
Hello everybody, I'm Matt Taylor from Numenta, and welcome once again to HTM School. This is part two of spatial pooling, and today we're going to talk more about the spatial pooling algorithm in HTM. Last episode we talked about the spatial pooler's input space and how, when a new spatial pooler is instantiated, it randomly sets up its columns to be connected to that input space.
Today we're going to talk about two things: first, how each of these columns is activated by calculating the column's overlap with the input representation in the input space, and second, how each of those columns learns, over time, to represent specific spatial characteristics of the data being encoded in the input space. I'm going to explicitly skip two concepts we could talk about: inhibition and boosting. I'm not going to cover those here because each of them will get its own episode, so stay tuned for that.
So let's look at our first visualization, picking up where we left off last episode. I have an input space over here on the left and a spatial pooler's columns over here on the right, and as you can see, I have some interactions set up: each one of these columns in the spatial pooler has a very specific relationship to that input space. This is just a randomly initialized spatial pooler.
It hasn't learned anything yet. What we're trying to see here is this: take this example input, which is an encoding similar to the encodings we've been dealing with in earlier episodes. Say this spatial pooler saw this input in the input space. How would it learn? How would the columns learn to represent that input? That's what we're going to talk about: the learning rules. First off, look at each one of these columns as I mouse over them.
You can see that each column's relationship to the input space is entirely different from the next. They all have random connections to cells in the input space, and they each have an overlap score. For example, this column has an overlap score of exactly 38, and that is the number of its connected bits that fall within the input.
The green circles are connections that fall within the input encoding, and all of the gray circles you see are connections that this column has to the input space but that don't land on any on bits in the encoding itself; they fall outside of the encoding of the data. Most of the connections fall outside of this particular encoding. So that's this column's relationship to the input, and its overlap is 38.
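To make that concrete, here is a minimal sketch (Python with NumPy, not the actual NuPIC code) of how a column's overlap score can be computed: count the column's connected synapses that land on the on bits of the current encoding. The names `permanences`, `input_bits`, and `perm_connected` are illustrative assumptions.

```python
import numpy as np

def overlap_score(permanences, input_bits, perm_connected=0.1):
    """permanences: this column's synapse permanences, aligned with the
    input space (cells outside the potential pool simply stay at 0);
    input_bits: the binary encoding currently in the input space."""
    connected = permanences >= perm_connected         # which synapses count as connected
    return int(np.sum(connected & (input_bits > 0)))  # connected synapses over on bits
```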
What I have over here is a ranking of every single column in the spatial pooler, ranked by overlap score with this particular input encoding. So for this input encoding, here is how all the different columns rank. We've got the column with the most overlap right at the top; actually there are several columns with 47 bits of overlap with this input encoding, and those are all ranked at the top.
At some point we're going to draw a line right here and say these are the active columns, and where we draw that line is set by the spatial pooler parameters. One of the parameters is called number of active columns per inhibition area, and it is 40. In the examples I'm going to show you today there's only one inhibition area, a global inhibition area, so every column is a neighbor of every other column.
That's called global inhibition; we'll talk more about that in a future episode, but for now we can just assume global inhibition and a single neighborhood. So 40 of these columns are going to be active. We draw the line right here and say that any columns below that line are not winning in this compute cycle; the rest are. You can probably tell there are several other columns that also have the same amount of overlap.
There is some tiebreaker logic in here that randomly selects columns if a bunch of them have the same overlap score, but that is just an implementation detail.
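As a rough sketch of that selection step, assuming global inhibition and a numActiveColumnsPerInhArea of 40: rank every column by overlap, add a tiny random value to break ties, and keep the top 40. This is illustrative, not the exact NuPIC implementation.

```python
import numpy as np

def select_active_columns(overlaps, num_active=40, rng=np.random):
    """overlaps: 1-D array of overlap scores, one per column."""
    tie_break = rng.random(len(overlaps)) * 0.1   # tiny jitter so equal scores rank randomly
    ranked = np.argsort(-(overlaps + tie_break))  # highest overlap first
    return np.sort(ranked[:num_active])           # indices of the winning columns
```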
So let's choose one of these columns, like this one, and I want to point out a couple of things. First of all, there are these dim squares here, and you can also see some dimmer squares in the input space. Those cells will never get connections to this particular column because they are not part of its potential pool.
In the last episode I talked about what the potential pool is. Connections will never grow to those spaces simply because there is no dendritic segment reaching that part of the input space. So we're only ever going to see connections in the spaces that aren't dimmed out. That's what the dimming means.
So let's talk a little bit about learning. Given a time step, say this one, let's grab a column, this one right here, and look at what happens as the spatial pooler learns. First of all, none of the columns that are inactive will learn anything; no state changes happen to columns that have not been activated. The only learning that goes on happens in these 40 active columns.
In this case, each of those columns is going to increment and decrement its permanence values based on which connections it has to the input space at this time step. For example, for this column, all of the connections that fall within the input encoding will have their permanence values incremented. That means they become stronger: the input falling over top of those connections increases the permanence of those connections.
It's learning those connections; they are something that that column is going to recognize. Any connections that fall outside of the input, for an activated column, will have their permanences decremented. If you remember from the previous episode, connections are determined simply by whether the permanence value for that cell is above a certain threshold. So as these permanence values go up and down, connections can be created and destroyed, and as the pooler learns from one compute cycle to the next, some of those connections will go away.
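Here is a small sketch of that learning rule for a single winning column (the names mirror the spatial pooler parameters discussed below but are assumptions, not NuPIC's code): permanences over active input bits are incremented, the column's other permanences are decremented, and whether a synapse is "connected" is just a threshold test on the resulting value.

```python
import numpy as np

def adapt_column(permanences, input_bits,
                 syn_perm_active_inc=0.05, syn_perm_inactive_dec=0.008):
    """Adjust one *active* column's permanences for the current input.
    Inactive columns are left untouched, as described above."""
    on_bits = input_bits > 0
    permanences[on_bits] += syn_perm_active_inc     # reinforce synapses under the input
    permanences[~on_bits] -= syn_perm_inactive_dec  # weaken synapses outside the input
    np.clip(permanences, 0.0, 1.0, out=permanences)
    return permanences

def connected_synapses(permanences, syn_perm_connected=0.1):
    # connections appear and disappear as permanences cross this threshold
    return permanences >= syn_perm_connected
```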
A
Some
of
them
will
appear
some
of
them
go
away
and
then
reappear.
It
depends
all
on
the
input
space
on
the
random
initialized
state
of
the
system
and
on
what
it's
learned
so
far
and
how
strong
its
connections
are.
If
it
sees
a
lot
of
input
over
connections
that
it's
had
well
established,
though
those
are
just
going
to
get
reinforced
and
reinforced.
So that's an overview of how learning works: very simple learning rules. The values for how much permanences get incremented and decremented are also parameters of the spatial pooler: synaptic permanence active increment, synaptic permanence inactive decrement, and synaptic permanence connected, the last one being the connection threshold. The active increment is how much a permanence value is incremented when it is being reinforced, and the inactive decrement is how much it is decremented when it is being diminished because there is no input over top of that connection at that time step.
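For reference, this is roughly how those parameters appear when constructing a spatial pooler in NuPIC. The argument names (synPermActiveInc, synPermInactiveDec, synPermConnected, numActiveColumnsPerInhArea, globalInhibition) are the ones I believe the NuPIC SpatialPooler uses, but treat the exact import path, signature, and values as an approximation; the BaMI pseudocode linked above is the authoritative description.

```python
# The import path differs between NuPIC versions (e.g. nupic.research.spatial_pooler
# in older releases); the values here are illustrative, not canonical.
from nupic.algorithms.spatial_pooler import SpatialPooler

sp = SpatialPooler(
    inputDimensions=(1024,),          # size of the input space
    columnDimensions=(2048,),         # number of spatial pooler columns
    potentialPct=0.85,                # fraction of the input each column may connect to
    globalInhibition=True,            # one neighborhood: every column competes with every other
    numActiveColumnsPerInhArea=40,    # the "draw the line at 40 winners" parameter
    synPermActiveInc=0.05,            # increment for synapses over active input bits
    synPermInactiveDec=0.008,         # decrement for synapses over inactive input bits
    synPermConnected=0.1,             # permanence threshold for a synapse to count as connected
)
```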
Okay, so now I'm going to show you another visualization. This is an example of learning versus non-learning in a spatial pooler. What I've got going on here is the exact same input, here on the left, fed into two different spatial poolers. The random spatial pooler in the center is not learning; I've essentially just turned learning off. It's easy to do that: with each compute cycle you can tell the pooler whether or not you want it to learn.
If you say yes, it will do that incrementing and decrementing of permanence values; otherwise everything stays the same. So this random spatial pooler has learned nothing about the input; it's still in its initial random state. The learning spatial pooler that we have over here, on the other hand, is learning at each compute step, so all of its columns are starting to conform to and recognize spatial aspects of that input space. I wanted to do a little comparison here.
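Here is a sketch of that comparison, assuming two already-constructed poolers `sp_learning` and `sp_random` and a sequence of `encodings` (all hypothetical names): the only difference between them is the learn flag passed to compute, which in NuPIC takes the input vector, the learn flag, and an output array for the active columns.

```python
import numpy as np

active_learning = np.zeros(2048, dtype=np.uint8)  # active-column output buffers
active_random = np.zeros(2048, dtype=np.uint8)

for encoding in encodings:                                  # one encoding per time step
    sp_learning.compute(encoding, True, active_learning)    # permanences adapt each cycle
    sp_random.compute(encoding, False, active_random)       # stays at its random initial state
```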
After the learning spatial pooler has seen this cycle several times, it should be producing active columns that represent the input better than the random spatial pooler does. You may remember from the previous episode that I used a similar visualization, where I compare the current active columns coming out of the spatial pooler to every other set of active columns it has produced at each previous time step. Using the overlap score, you can decide how similar each of those encodings is.
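That similarity measure is simple to sketch: the overlap between two sets of active columns is just the number of column indices they share, and the visualization effectively computes it against every stored time step.

```python
def sdr_overlap(active_a, active_b):
    """active_a, active_b: collections of active column indices."""
    return len(set(active_a) & set(active_b))

# hypothetical bookkeeping: history maps time step -> active column indices seen then
# similarities = {t: sdr_overlap(current_active, cols) for t, cols in history.items()}
```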
If we're doing well, then, since this data has a daily cycle and I'm in the middle of one of these days, say around 11:00 a.m., the current representation should be very similar to the middle of previous days we've seen, as long as the power level is similar. It does look like that's the case for the learning spatial pooler; for the random spatial pooler, not so much.
There is some similarity between some of the bits, like these yellowish bits here and a green one right here, but the random pooler hasn't really generalized much about this data set, whereas the learning spatial pooler is obviously doing a better job at understanding the data and producing SDRs that match previously produced SDRs when appropriate.
I just let this run ahead a little bit because I wanted to get into one of these peaks, and we're going to inspect one, or several, of these columns in the learning spatial pooler. I'm going to click on one of these columns, and that brings up a view of this entire column's history over all of the data it has seen so far: its cellular state, the permanence values, the connections. I'm most interested in the connections it has to the input space.
This is column 864, the one I clicked, and we're at time step 0. I've got a little slider up here that I can grab and move back and forth, so I can move forward and backward in time, and I'll show you some things that pop up when I jump forward. First of all, I'm just going to step forward a little bit in time, to time steps 2, 3, 4, and this column is not active at all; you can see our overlap score is only 19.
There's not much overlap between this input and the connections the column has to the input space at this point, so we'll keep moving forward. Nothing is happening; no state is changing. That's correct: as long as this column is inactive, its state will not change. There is no learning happening until this column becomes active. Now, I've got this button here that jumps to the next active time step, so I'm jumping forward from time step 19, and I just jumped to 249; this column did not become active before then.
Nothing changed until it had seen almost 250 pieces of data, and now suddenly it has become active. What happened when it became active? Well, quite a bit: it connected to 7 of the cells within that input space. Just that one input it saw was enough to increment the permanence values of these 7 connections, the cyan circles, so those are now connected.
This column is now connected to those 7 new input bits in the space, and it has disconnected 13 different synapses, so we now have fewer connections outside of that input encoding. It's typical for the spatial pooler to disconnect more than it connects, because it's learning to represent a smaller area of the input space. Let's keep jumping ahead: it's active again, and we can see it's still connecting more and disconnecting more, all the way to time step 298.
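The "7 newly connected, 13 disconnected" counts can be sketched by comparing which synapses sit above the connected threshold before and after a learning step (names and threshold are illustrative):

```python
import numpy as np

def connection_changes(perm_before, perm_after, syn_perm_connected=0.1):
    before = perm_before >= syn_perm_connected
    after = perm_after >= syn_perm_connected
    newly_connected = int(np.sum(after & ~before))   # drawn as cyan circles in the view
    disconnected = int(np.sum(before & ~after))      # connections that dropped away
    return newly_connected, disconnected
```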
So this column hasn't really specialized much, maybe because it just doesn't happen to be randomly connected to the right cells that the encoding is covering. Now let's look at another one; you'll see that some of these are drastically different. Here's column 105. I'm going to jump to the first time step where this column is active.
That's only time step 6, so I can already tell this one is going to become active pretty quickly. Let's just step through and watch how the connections this column has to the input space change. As I step through, it becomes fairly evident after a while which portions of the input space this column is becoming sensitive to, based on its connections. As you can see, it's diminishing connections that fall outside of what it's looking for and strengthening connections inside it.
Obviously it's very connected to this range here, and it's very connected to this range here. Let's just jump all the way to the end and see what the end state of this is. It's sort of interesting to see, from the beginning, which is the initial state, to the end: all of the connections that were destroyed are shown in red, and all of the connections that were created are shown in cyan.
So you can tell it's focusing on this area of the space and this area of the space, and there's still some random stuff that it has associations with. Let's see what else we can discover. I'm going to move this forward a little bit until we get into that trough, because, interestingly enough, there are going to be different columns representing the data in that trough than the ones representing the data in the peak. So let's grab one of these.
I might end up grabbing a column that represents data in both; that happens. If I go all the way to the end, this one did not grow a lot; it disconnected a lot, 126 connections were disconnected here, and it ended up in a state that looks like this. We can tell it's sensitive to this area early on in the input space, and to this area, and it's sensitive down here.
The pooler starts developing columns that are very highly specialized to certain spatial aspects of the input space, and as they learn that space, those connections get strengthened; each column ends up looking specifically for certain attributes of the space. Now, if the spatial pattern changes again, this is such a plastic process that those permanence values can keep going up and down for the whole lifecycle of the spatial pooler. As the input pattern changes over time, maybe the data gets encoded a little bit differently.
Then the spatial pooler will learn the new patterns and forget the old ones, as those old connections are decremented and new connections get created, because much more of the input is now landing in different parts of the space. So that was a quick overview of how spatial pooler learning works, with emphasis on how connections are created and how permanence values change. Next episode we're going to talk about inhibition and probably touch on topology a little bit; this episode only dealt with global inhibition.
That's the example we used here, and it's a very typical case that we use in NuPIC; generally we always do global inhibition. But next episode we're going to talk about inhibition, and then eventually we will talk about boosting. Thank you for watching this second spatial pooler episode of HTM School. Please hit the like button, please hit the subscribe button; it does me a lot of emotional good. Have a wonderful day, and see you on some Friday.