From YouTube: SP+JS+MNIST: 2014 Spring NuPIC Hackathon Demo
Description
Ian Danforth
A: All right. What I'm showing off — some people may remember that I did a demo last time of a visualization of the spatial pooler that was in Python. I have since ported the spatial pooler to JavaScript, and I'm in the process of porting the temporal pooler.
It doesn't have a lot of the other things that NuPIC has, like encoders or large region hierarchies, but you can build them yourself. So this is where you can get the code for all the things I'm about to show you. Probably the simplest thing you can do is — let me restart this — to load up a single image. It's just gonna work — yeah, all right.
You load up a single image and scan over it, and each one of these boxes is fed into the spatial pooler, and you can see it over time. Each one of these represents a different column. It's not a very large network — not a lot of columns — but what they begin to recognize is visualized here in terms of their permanences. These are showing you that up here we've got a portion of this guy; here we've got a portion of this guy; and eventually you can combine those in a secondary layer to build up larger features.
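The scanning step described above — sliding a small window over the image and feeding each patch to the spatial pooler — could be sketched roughly like this. The names (`scanImage`, `patchSize`, `stride`) and the binarization threshold are illustrative assumptions, not the demo's actual code:

```javascript
// Slide a window over a 2D grayscale image and emit each patch as a
// flat binary vector, the kind of input a spatial pooler expects.
function scanImage(image, patchSize, stride) {
  const rows = image.length;
  const cols = image[0].length;
  const patches = [];
  for (let y = 0; y + patchSize <= rows; y += stride) {
    for (let x = 0; x + patchSize <= cols; x += stride) {
      const patch = [];
      for (let dy = 0; dy < patchSize; dy++) {
        for (let dx = 0; dx < patchSize; dx++) {
          // Threshold each pixel to a bit (0.5 is an assumed cutoff).
          patch.push(image[y + dy][x + dx] > 0.5 ? 1 : 0);
        }
      }
      patches.push(patch);
    }
  }
  return patches;
}

// Each patch would then be fed into the spatial pooler, e.g.:
// patches.forEach(p => sp.compute(p, /* learn = */ true));
```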
So the second demo is looking at very, very clean inputs, but multiple numbers, and this is again all in JavaScript. It's loading off of my local machine but could be put on a server. I'm scanning over this: a feature appears here and is then plotted over here. These map one to one, so you can see, when that feature appears — there, there, there, there. And then, if you combine all of the feature maps again, you expect to see something similar to the input that you put in, and in fact you do. Over here, what I'm doing is simply counting the number of times each — this is a second-level spatial pooler.
A: So this is a two-layer network that we're looking at here, and each time one of the columns in the second layer is active, I'm simply counting what the low-level ground-truth value was at the time. So, for example, every time this column in the layer-two spatial pooler becomes active, I increment a number between one and nine here, and you can see that every time this guy becomes active, the ground truth is in fact eight.
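The ground-truth histogram described above can be sketched as follows — each layer-two column accumulates counts of the digit labels it co-occurred with, and those counts can later be used as votes to classify a new input. All names and shapes here are illustrative assumptions, not the demo's code:

```javascript
// One counter array per column, one slot per digit class.
function makeHistogram(numColumns, numClasses) {
  return Array.from({ length: numColumns }, () =>
    new Array(numClasses).fill(0));
}

// Every time a set of columns is active, bump the current label's count.
function recordActivation(hist, activeColumns, label) {
  for (const col of activeColumns) hist[col][label] += 1;
}

// Classify by letting each active column vote with its learned counts.
function classify(hist, activeColumns) {
  const numClasses = hist[0].length;
  const votes = new Array(numClasses).fill(0);
  for (const col of activeColumns) {
    hist[col].forEach((count, label) => { votes[label] += count; });
  }
  return votes.indexOf(Math.max(...votes));
}
```

With a perfectly separated layer like the one in the demo, each column's histogram ends up concentrated on a single digit, so classification is just a lookup.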
A: So what you're seeing here is that there is a perfect separation between each of the numbers, and each one of the numbers has a single representation. So this ten-column layer-two spatial pooler has perfectly learned, and can identify — or classify — the inputs that it's seen. This is a very simple case; it's very clean, and that's what you would expect. A lot of this project was me learning about the spatial pooler, debugging it, and really trying to understand it.
A: I'm a very visual thinker, and this helped me to do that. This is the same thing, but you can also turn off all of the permanence weights that are not connected. So this is the same thing, but showing only those weights that are connected — and that's just a single boolean flag in this demo that you can look at.
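The "connected only" toggle just mentioned could be implemented as a simple filter over a column's permanence values before drawing them. The function name and the 0.2 connection threshold are assumptions for illustration (NuPIC-style spatial poolers use a configurable threshold of this sort):

```javascript
// When connectedOnly is set, blank out any permanence below the
// connection threshold so only connected synapses are visualized.
function visiblePermanences(permanences, connectedOnly, threshold = 0.2) {
  if (!connectedOnly) return permanences.slice();
  return permanences.map(p => (p >= threshold ? p : 0));
}
```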
And then what I ultimately wanted to do was actually use this on MNIST data. If you don't know, MNIST is a very standard benchmark in machine learning vision.
A: This architecture, I should say, is not a traditional NuPIC architecture. It's using ideas from convolutional neural networks: the sliding window over the input, the feature maps, the feature maps getting sent into the second layer — that's the convolutional-neural-network style of doing things.
A: The final bit is using this histogram — of which column was active and what the ground truth was — to show exactly, well, you know, "we thought it was a nine, but it was actually a two," and then, over time, after the initial training period, what our overall accuracy is.
A: I just got this working, so — a completely random guess would be about 10 percent, and a good accuracy value for MNIST these days is 99.9 percent. I'm going to be working with a student as part of Season of NuPIC, and his desire is to take the JS and add a swarming layer to it, such that we can explore different architectures and different parameters to try and get this accuracy score up to a more reasonable lower bound. And, yeah, that's where we are today.
B: Thanks. So, did you use any topology in how the columns inhibit each other? Since we're looking here at local features, maybe some topology in the way the columns inhibit each other could be useful.
A: It could be. However, this is entirely a global-inhibition architecture, because a convolutional neural net does one feature at a time: it convolves one feature over the input, and then it convolves another feature over the input, to update the weights. So, to replicate that, the parameter is set to only allow one column on at a time and to inhibit everything else globally.
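The global-inhibition setting just described amounts to a top-k selection over column overlap scores with k = 1. A minimal sketch, assuming a simple array of per-column overlaps (this is an illustrative reimplementation, not the demo's code):

```javascript
// Rank columns by overlap with the input and keep only the top k
// active; everything else is inhibited globally. k = 1 mimics the
// one-feature-at-a-time convolutional behaviour described above.
function globalInhibition(overlaps, k = 1) {
  return overlaps
    .map((overlap, column) => ({ column, overlap }))
    .sort((a, b) => b.overlap - a.overlap)
    .slice(0, k)
    .map(entry => entry.column);
}
```

Local inhibition, by contrast, would run this competition within a neighborhood around each column rather than across the whole layer.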
C: Well, is it your goal to make this more HTM-like, or not? I mean, it's just pretty far from, you know, NuPIC today — so, just trying to understand.
C: I'd be amazed if you could possibly get this to work really well at this level, I think.
A: There's one step that should dramatically improve the performance. In between this layer here, there's traditionally a pooling level — using convolutional-neural-network terminology — meaning that these feature maps get downsampled, and that gives you a little bit of translation invariance.
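The pooling level mentioned above is typically a max-pool: downsampling a feature map by taking the maximum over small non-overlapping blocks, so a feature that shifts by a pixel or two still lands in the same pooled cell. A minimal sketch with an assumed 2×2 block size and even-sized maps:

```javascript
// Downsample a 2D feature map by taking the max over non-overlapping
// 2x2 blocks. This is what gives the small translation invariance
// described above.
function maxPool2x2(map) {
  const pooled = [];
  for (let y = 0; y < map.length; y += 2) {
    const row = [];
    for (let x = 0; x < map[0].length; x += 2) {
      row.push(Math.max(
        map[y][x], map[y][x + 1],
        map[y + 1][x], map[y + 1][x + 1]));
    }
    pooled.push(row);
  }
  return pooled;
}
```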
D: Okay, this is important, because if you happen to go to, like, the Reddit machine learning forum and you read the things that I've posted there about Numenta — every time we have a blog post, I throw it on Reddit.
D: Historically, there have been a lot of negative comments from the machine learning community about the techniques that we've used, because there's no mathematical proof behind them. So this type of stuff is important, because they're always citing the MNIST database, and TIMIT, and all of these things that the rest of the machine learning community uses to benchmark their techniques.
D: So I think the more types of things we do like this, the more we can kind of tease those guys to come join us and work on brain-inspired machine intelligence as well. So I think this is really important work, even if it's not done directly the same way that it works in NuPIC. It's, like I said, a gateway drug.
A: For those machine learning guys — for reference, this page by — I don't know how to pronounce this guy's name — Andrej Karpathy is sort of state-of-the-art, literally cutting-edge JavaScript machine learning. He's the best in the world at it, and this is his demo page. So if we have something that is recognizably similar to this, it becomes a very easy transition for those who are curious but face a high barrier to entry.
F: So, with the previous generation of our algorithms — which is actually not too dissimilar to what you've done here — with MNIST we got about 98.6 percent accuracy. So if you want, afterwards I can kind of walk you through some of the differences, and it may help guide you in this process, and then we can also look at doing it in, like you said, a pure CLA manner.