From YouTube: Topology (Episode 10)
Description
This episode, we're traveling into another dimension... the 2nd dimension. Let's talk about why topology in HTM is important and how it is implemented today.
Intro music: "Books" by Minden: https://minden.bandcamp.com/track/books-2
Hello and welcome to episode 10 of HTM School, where we talk about topology. Now, depending on who you're talking to, topology could mean different things. We're going to look at it from a neuron's standpoint. Some neurons in the cortex are closer than others. A neuron is physically weighted to connect more often, and more strongly, to those close neurons. Lateral inhibitory connections create local groupings of cells that affect each other's activity. One aspect of neuronal topology is this grouping of neurons together locally.
How this locality is implemented today in NuPIC is a Euclidean distance calculation between bit locations in different representations. NuPIC is the Numenta Platform for Intelligent Computing, which is the HTM system that I've been running all these visualizations on, the ones we've seen throughout these episodes.
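To make that concrete, here's a minimal sketch (my own illustration, not NuPIC's actual code) of what a Euclidean distance calculation between bit locations can look like, treating each representation as a 2D grid:

```python
import math

def bit_coords(index, shape):
    """Convert a flat bit index into (row, col) within a 2D representation."""
    rows, cols = shape
    return divmod(index, cols)

def bit_distance(index_a, shape_a, index_b, shape_b):
    """Euclidean distance between two bit locations, treating each
    representation as a 2D grid laid over the same space."""
    ra, ca = bit_coords(index_a, shape_a)
    rb, cb = bit_coords(index_b, shape_b)
    return math.hypot(ra - rb, ca - cb)

# Example: in a 32x32 grid, bit 33 sits at (1, 1), so its distance
# from bit 0 at (0, 0) is sqrt(2).
print(bit_distance(0, (32, 32), 33, (32, 32)))  # → 1.4142135623730951
```

NuPIC's real implementation also has to handle representations of different sizes, but the core idea is the same: bits get coordinates, and distance between coordinates defines locality.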
So let's take a look at some 2-dimensional input: how it's represented in the input space, and also how the Spatial Pooler maps itself onto that 2-dimensional input space when it has a topology applied as well. First, I'm going to introduce a new visualization, so let me start with the input space. What I've got here is the input space to the Spatial Pooler. I think this is a 32-bit-by-32-bit animated gif that I have split up into its frames, encoding every pixel as an on or off bit.
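That encoding step could be sketched like this (a hedged illustration; the function name and threshold are mine, and the real frames were prepared offline): each grayscale pixel of a frame becomes an on or off bit.

```python
def encode_frame(frame, threshold=128):
    """Flatten a 2D grayscale frame into a binary bit array: each pixel
    becomes 1 (on) if it is at or above the threshold, else 0 (off)."""
    return [1 if pixel >= threshold else 0 for row in frame for pixel in row]

# A tiny 2x3 "frame" stands in for one 32x32 gif frame.
frame = [[255, 0, 200],
         [10, 130, 0]]
print(encode_frame(frame))  # → [1, 0, 1, 0, 1, 0]
```

The important property is that neighboring pixels stay neighbors in the bit array's 2D layout, which is what gives the input its topological structure.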
So you can see the shape of a man dancing back and forth in this input space. This input is topological; there's a very obvious spatial relationship. We basically have a moving image in this input space, which is different from the type of data that I've shown you in all the other episodes of HTM School. This is why we might want to try to enable topology: when we have an input space with a rich topological structure to it. In this case it's not necessarily rich, but there's definitely a topological structure to this data.
Let's move them a little bit closer to each other so that we can do comparisons. There we go. Now that I can get them both in the viewport, what we've got here is the input space on the left (the green bits are the on bits, and the rest are all off), and on the right we have the Spatial Pooler. The Spatial Pooler over here on the right is a three-dimensional structure, because it's got columns and cells per column: the Spatial Pooler is constructed two-dimensionally, and it also has four cells per column.
So that's what we're looking at here: each one of these yellow bits is an active bit for this time step. Let me turn this off, and you can see very clearly, as I go next, next, next, that those active bits are changing. One thing you'll notice right away is they're sort of swaying back and forth with the input. So this is different; this doesn't usually happen. Let me show you what this looks like without topology for a moment. I just basically turned all the topological elements of the Spatial Pooler off and reinstantiated it.
This is what those active columns look like with no topology: they're scattered all over the input space, which makes sense if you go back and view all of our other episodes on the Spatial Pooler; that's what the Spatial Pooler does by default. So I'm going to show you why topology gives us this spatial characteristic, this spatial grouping of activity based on what's in the input space and its topological structure. Let me explain the two different reasons: there are two ways that topology is implemented in an HTM, in NuPIC specifically. First of all, as I just mentioned, this is topological input, but we're not processing it topologically.
So if I click a column, for example (I just clicked this red column here), in the input space every column I click has a different potential pool.
That's what these orange bits are, and they're sort of overlaid on top of the input, so you can see the green and the orange overlapping. Every time I go to a different column, we have a different potential pool. I explained this in previous episodes about the Spatial Pooler. So here's a difference: let me turn topology on, and let me just move to an input. There we go.
So, with topology on, there's a very, very obvious change here: as I click different columns in the Spatial Pooler, they all have a completely different viewport, or receptive field, of the input space. They don't all see everything, whereas in global inhibition, with no topology, every column sees all of the input space and can work with all of the input space.
That's why we get all of those active bits spread throughout all the columns in the Spatial Pooler: every column has an opportunity to react to every cell within its potential pool, which is spread across the entire input space. With topology on, each column only has a window into the input. So that's one aspect of topology.
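As a rough illustration of that window (not NuPIC's implementation; the center and radius parameters here are my own), a column's potential pool with topology on can be limited to the input bits within some radius of the column's center in the input space:

```python
import math

def potential_pool(column_center, input_shape, radius):
    """With topology on, a column's potential pool is limited to the
    input bits within `radius` (Euclidean) of the column's center."""
    rows, cols = input_shape
    cr, cc = column_center
    pool = []
    for r in range(rows):
        for c in range(cols):
            if math.hypot(r - cr, c - cc) <= radius:
                pool.append((r, c))
    return pool

# A column centered at (2, 2) of a 5x5 input with radius 1 sees only
# its own bit and the four orthogonal neighbors.
print(potential_pool((2, 2), (5, 5), 1))
# → [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2)]
```

Turning topology off is equivalent to making the radius large enough to cover the whole input space: every column can then connect anywhere.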
The other aspect is column neighborhoods. So let me go back to no topology again. I'm going to select a column, and I'm going to show the column's neighborhood. What this really means is that there's a column competition, at each time step, to find out which columns are the winning columns. This competition, or inhibition, to determine the winning columns is global throughout the entire Spatial Pooler when there's no topology enabled.
However, if I turn topology on, you'll see that now each column's neighborhood is no longer global. That competition, or inhibition, to determine which columns are winning is now restricted to local areas of the input space, which applies a topology to the Spatial Pooler itself and its columns' relationships with other columns. So, in addition to each column's projection into the input space being limited to a local section of it, the actual column competition within the Spatial Pooler is affected by the locality of each column. And as you can see, we can go through here and see each column's topological nature specifically.
This column projects onto this input, and these are the columns in its neighborhood that affect its competition with all the rest of the columns in the space.
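Here's a simplified sketch of that local competition (again my own illustration, not NuPIC's code): a column only wins if its overlap score tops its own neighborhood, so several local winners emerge instead of a single global set.

```python
import math

def local_winners(overlaps, shape, inhibition_radius, winners_per_area=1):
    """Local inhibition sketch: a column becomes a winner only if its
    overlap score is within the top-k of its own neighborhood."""
    rows, cols = shape
    winners = []
    for r in range(rows):
        for c in range(cols):
            # Gather overlap scores of every column in this column's neighborhood.
            hood = [overlaps[rr][cc]
                    for rr in range(rows) for cc in range(cols)
                    if math.hypot(rr - r, cc - c) <= inhibition_radius]
            # The k-th highest score in the neighborhood sets the bar.
            kth = sorted(hood, reverse=True)[min(winners_per_area, len(hood)) - 1]
            if overlaps[r][c] >= kth and overlaps[r][c] > 0:
                winners.append((r, c))
    return winners

overlaps = [[3, 1, 0],
            [0, 5, 0],
            [2, 0, 4]]
# With a small radius, each neighborhood crowns its own winner;
# global inhibition would keep only the single best column, (1, 1).
print(local_winners(overlaps, (3, 3), 1.0))
# → [(0, 0), (1, 1), (2, 0), (2, 2)]
```

Growing `inhibition_radius` to cover the whole grid collapses this back to global inhibition, which is exactly the no-topology behavior shown earlier.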
Some other interesting things that I'd like to point out here are the active duty cycles. So let's turn topology off again and start running, and you'll see, again, these are scattered throughout the space.
If we turn topology on and we start again, you're going to see very quickly that the activity is localized in the Spatial Pooler to match the activity within the input space. And even if we turn this off, you can see this moving of the bits from one side to the other as the input space changes.
That's what we're seeing here: most of the activity is in this part of the space. That's very obvious when you have topology on, because you can see in the Spatial Pooler some spatial representation of the input data, even if you don't see the input data itself, just by looking at the active duty cycles of the Spatial Pooler. So you might be able to see now how an applied topology might help localized spatial patterns better express themselves in this n-dimensional space.
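For reference, an active duty cycle is essentially a moving average of how often each column has been active. A minimal sketch (the parameter names are mine, not NuPIC's exact API):

```python
def update_duty_cycles(duty_cycles, active, period=1000.0):
    """Update each column's active duty cycle as a moving average:
    blend the old value with 1.0 (active this step) or 0.0 (inactive)."""
    return [(d * (period - 1) + (1.0 if a else 0.0)) / period
            for d, a in zip(duty_cycles, active)]

# Column 0 was active this step, column 1 was not; both started at 0.5.
print(update_duty_cycles([0.5, 0.5], [True, False], period=10.0))
# → [0.55, 0.45]
```

Plotting these values over many steps is what produces the persistent spatial "shadow" of the input that the visualization shows.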
But the truth is, we very rarely use topology in today's HTM systems, so I thought I should talk a bit about why we don't. From my perspective, the main reason is that the input spaces we're dealing with in our HTM systems are usually too small for topology to have a generally positive effect on the results we're trying to get. We're usually trying to do anomaly detection on scalar input, so the input space is really small.
It includes an encoding of a timestamp and maybe one or two scalar values. At this size, topology doesn't give you much. Also, the encoders that we use to encode data do not attempt to encode it topologically at all, and there doesn't seem to be any benefit to turning on topology if the data is not topological. Topology also adds computational cost and decreases performance: although it might increase learning efficiency, it decreases general performance, taking longer to perform each step. All that being said, there's a reason I made this episode.
Topology is really important to HTM theory. As HTM scales and the input spaces we're dealing with get larger and larger, we're going to need to use topology to understand those larger input spaces. And if you're talking about HTM as a model of a small section of cortex, then as that section of cortex gets larger, and we need to integrate it with other sections of cortex, topology and how it's implemented again becomes important, on a different type of scale. So we can't ignore it.
We have to try to understand how it works. So, hey, thanks for watching this episode on topology. We're going to be moving into temporal memory, the sequence memory algorithms, soon. It's either bursting or temporal memory first; I haven't quite figured out exactly the entry point we're going to make into temporal memory, but the next episode is going to be about sequence memory, and that builds on top of everything we've talked about so far. I hope you're looking forward to that. It's going to be after the holidays, but I'll see you then.