Description
Felix shows off some really interesting visualizations of HTMs using Comportex and ComportexViz.
http://nupic2015spring.challengepost.com/submissions/37837-seeing-inside-htm-algorithms
So yeah, as mentioned, I'm going to talk about my attempts to see inside HTMs. It's using Comportex, which Marcus has already helpfully introduced, and I have to say he was very misleading in his comments there: he actually did a lot of substantial work on ComportexViz and on the integration. And it's fantastic to have Comportex running on a server on the JVM, which has a lot more capacity than running in the browser, so I'm looking forward to using that as well.
So, just to get you oriented in this space: on the left we have a representation of what the input is. In this case it's just a sequence of letters, A, B, C, D, E, F, G, H, I, J, K, and so on, and it's currently focusing on one letter, as represented by the black line. So basically this letter, C, is just the input fed through a category encoder, because this is a sensorimotor case study.
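To make the encoder step concrete, here is a minimal sketch of a category encoder: each category (letter) gets its own fixed set of active bits in a binary input vector. The function names and the disjoint-block scheme are illustrative assumptions, not Comportex's actual API.

```python
# Sketch of a category encoder: each category maps to a unique,
# fixed block of active bits in a large binary input vector.
# Names and layout are assumptions for illustration.

def make_category_encoder(categories, bits_per_category=8):
    """Assign each category its own disjoint block of active bits."""
    total_bits = bits_per_category * len(categories)
    mapping = {}
    for i, cat in enumerate(categories):
        start = i * bits_per_category
        mapping[cat] = set(range(start, start + bits_per_category))

    def encode(cat):
        active = mapping[cat]
        return [1 if b in active else 0 for b in range(total_bits)]

    return encode

encode = make_category_encoder(list("abcdefgh"))
print(sum(encode("c")))             # 8 active bits out of 64
print(encode("c") == encode("d"))   # False: distinct categories, distinct bits
```

Because every category activates a disjoint block, distinct letters never share bits, which keeps the encoding unambiguous for this kind of toy sequence.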
So that's just to get you oriented about what all these dots are, but it's not really what I want to talk about. What I worked on for my hack was the temporal pooling algorithm; I've been trying to implement it. I'm not going to explain it here: if you don't know what temporal pooling is, you can look it up on the Numenta wiki, there's some good stuff there.
Basically, the idea is to form stable but distinct representations in higher-level regions, or higher-level layers. It does that by keeping cells active over time while the underlying sequence is predictable. The problem I ran into: I've been playing around with implementing this, but as you go through the process, you realize that a lot of it isn't clear how to do exactly.
What should the relative weights be of all these parameters that you're faced with creating? What I ended up realizing I needed was the ability to visualize what the different influences are on the cells that are becoming active. And because it's a sparse representation, that's feasible: there's only a relatively small number of active cells on each time step.
What I'm showing here on the right: the individual bars are single cells that are becoming active, so there's a total of, I'd say, 12 or 15 cells becoming active, and for each one the column represents the amount of influence that's causing it to become active. Okay, so red is just the proximal excitation, as represented by these lines.
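The stacked bars described above can be thought of as a per-cell breakdown of excitation components. A minimal sketch, with field names assumed for illustration rather than taken from Comportex:

```python
# Sketch of the stacked-bar data: each cell's total excitation split
# into the components the bars show. Field names are illustrative.

def excitation_breakdown(proximal, boost, pooling):
    """Per-cell excitation components plus the total that decides activation."""
    return {
        "proximal": proximal,   # red: feed-forward input overlap
        "boost": boost,         # yellow: boosting-factor contribution
        "pooling": pooling,     # green: persistent temporal pooling level
        "total": proximal + boost + pooling,
    }

cells = {
    "cell-17": excitation_breakdown(proximal=5.0, boost=1.5, pooling=0.0),
    "cell-42": excitation_breakdown(proximal=1.0, boost=0.0, pooling=6.0),
}
# cell-42 is kept active mainly by pooling excitation, not fresh input
print(cells["cell-42"]["total"])  # 7.0
```

Splitting the total out this way is what makes the visualization useful: two cells with the same total can be active for completely different reasons.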
Here, this is the proximal excitation that's driving it. The yellow, if you can see the yellow, that's the boosting factor, which is part of the algorithm; what it does is encourage columns to become active if they haven't become active recently. It surprised me, actually, when I saw this, how much effect the boosting was having. Obviously I'd set it up myself, but I'd forgotten about it, and it was invisible until now.
So in this plot, although it's the same kind of structure, it's showing a completely different thing. In this case it's showing the distribution of the states of the active columns: red being active but unpredicted, that is, active but not predicted to become active, and purple being correctly predicted to become active. And you can see that over time a higher and higher proportion of the columns were predicted, as you'd expect, since we're just moving constantly around this one sequence of letters.
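The red/purple split above reduces to a simple per-step statistic: the fraction of active columns that were predicted on the previous step. A small sketch (names are illustrative):

```python
# Sketch of the red/purple split: each active column is either
# correctly predicted (purple) or unpredicted/bursting (red).
# On a learned repeating sequence this fraction should rise.

def predicted_fraction(active_columns, predicted_columns):
    """Fraction of this step's active columns that were predicted."""
    active = set(active_columns)
    if not active:
        return 0.0
    return len(active & set(predicted_columns)) / len(active)

# early in learning: little was predicted
print(predicted_fraction({1, 2, 3, 4}, {3}))        # 0.25
# later, on the repeating letter sequence: most columns predicted
print(predicted_fraction({1, 2, 3, 4}, {1, 2, 3}))  # 0.75
```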
But the red: these represent the active cells, and the red indicates that the feed-forward input activating them was not predicted, and can't be predicted, because it's coming straight from the input data, so there's nothing really relevant there. But if you look at layer three, which is the next layer in the stack, the purple here is indicating that what's forcing these cells to become active is predicted input.
Just look: the purple lines to these synapses represent that the proximal input turning those cells on is coming from cells which were predicted below. So it means the underlying sequence is predicted, and in that case it adds to a persistent level of excitation, called the temporal pooling level, which on this plot is the green line.
So it's that which is responsible for forcing some of these cells to become active in the higher-level region. If we go, I'm just stepping forward in time here, you have a situation where this set of cells has become active, and because it came from predicted input, in the following time step you've got a large amount of persistent excitation, which is the green lines there, and that just decays over time until those cells turn off.
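The persist-then-decay behaviour described above can be sketched as a tiny state update per cell: predicted feed-forward input tops the pooling level up, and otherwise it decays each step until the cell drops out. The constants and names here are assumptions for illustration, not Comportex's actual parameters.

```python
# Sketch of a cell's persistent temporal pooling excitation (the
# green lines): sustained by predicted input, otherwise decaying.
# max_level and decay are illustrative constants.

def step_pooling(level, input_predicted, max_level=10.0, decay=2.0):
    """One time step of a cell's temporal pooling excitation level."""
    if input_predicted:
        return max_level            # predicted input sustains pooling
    return max(0.0, level - decay)  # otherwise decay toward zero

level = step_pooling(0.0, input_predicted=True)     # 10.0
history = []
for _ in range(6):                                  # input no longer predicted
    level = step_pooling(level, input_predicted=False)
    history.append(level)
print(history)  # [8.0, 6.0, 4.0, 2.0, 0.0, 0.0]
```

The decay rate is exactly the kind of parameter the speaker says is hard to weigh without visualizing it: it directly sets how long a pooled cell outlives its predictable input.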
It was like five synapses, so it's just good to be able to check that, as a sanity check, and to start to get a feel for the relative influence of the temporal pooling activation: how fast it's dropping off, and what's causing it to drop off. I haven't really explored this much, but that's just where I got to, and hopefully it's given you some idea of the concepts involved in that algorithm.
So I'd like to do this better. What we should have is multiple words, and then you can have micro-circuits within each word, and then higher-level circuits going to the next word. What I was originally hoping to look at was forming a higher-level sequence memory: actually predicting the sequence of words on top, in the next layer. But I realized that I've got fundamental issues to do with how the algorithm is implemented before I can start to look at that. So.