From YouTube: A Whole New World [DEMO #4] (2014 Fall NuPIC Hackathon)
Description
By Chetan Surpur & Yuwei Cui.
A demonstration of sensorimotor inference in simple robotics.
A
Hello, I'm Chetan, and this is Yuwei. Our hack is an application of the recent research we've been doing at Numenta regarding sensorimotor inference and temporal pooling.
B
Yeah, so this is actually the first experiment where we used real-world data and tested the algorithms. The setup is very simple: we have a small robot. It has an IR sensor that's measuring the distance to objects.
B
It has a range from about 5 centimeters up to 30 or 40 centimeters, and it also has a motor on the back. If you only look at the sensory part, it looks random, because I programmed it to move randomly to sample a fixed world. But if you consider both the sensory input and the motor command, it contains some information about the layout of the world.
B
And if you change the spatial configuration of the world, you might get a sense that this is a new environment versus the old one, which is generally what this kind of algorithm should be able to do.
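
One way to make "a new environment versus the old one" concrete (our own toy sketch, not the demo's code; the 0.2 threshold is made up) is to compare the overlap between the stable representations the pooling layer settles into:

    def overlap(rep_a, rep_b):
        """Fraction of shared active cells between two pooled
        representations, each given as a set of cell indices."""
        if not rep_a or not rep_b:
            return 0.0
        return len(rep_a & rep_b) / float(max(len(rep_a), len(rep_b)))

    # Hypothetical usage: if the representation that forms in the new
    # environment shares few cells with the old one, call it a new world.
    is_new_world = overlap(old_representation, new_representation) < 0.2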
A
So, a little bit of the theory before we show you the demo, so you kind of understand what's going on. What we're doing is taking the data from the sensor and the motor, encoding both with a scalar encoder to get SDRs, and feeding them, concatenated, into layer 4. So if you think about it from the perspective of this robot, what it's seeing is: it thinks, okay, I'm going to move left now; then it moves left, and it senses how far away the object is using its IR sensor; and then it can turn right.
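
As a rough sketch of that encoding step (not the hackathon's actual code; the encoder parameters here are assumptions), the distance reading and the motor command can each go through NuPIC's ScalarEncoder, and the two SDRs get concatenated before being fed to layer 4:

    import numpy
    from nupic.encoders import ScalarEncoder

    # Assumed parameters: the IR sensor reads roughly 5-40 cm, and motor
    # commands are treated as a small range of discrete values.
    distance_encoder = ScalarEncoder(w=21, minval=5, maxval=40, n=128, forced=True)
    motor_encoder = ScalarEncoder(w=21, minval=-1, maxval=1, radius=1, forced=True)

    def encode_step(distance_cm, motor_command):
        """Encode one (sensor, motor) pair as a single concatenated SDR."""
        sensor_sdr = distance_encoder.encode(distance_cm)
        motor_sdr = motor_encoder.encode(motor_command)
        # The concatenation is what the layer-4 model actually sees.
        return numpy.concatenate([sensor_sdr, motor_sdr])

    sdr = encode_step(distance_cm=12.5, motor_command=-1)  # e.g. "turn left"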
A
It says: I'm going to turn right now; what do I expect to see, how far away is the object supposed to be? And it can make a prediction. So in this case, layer 4 gets the information about the current sensor reading, how far away the object it's looking at is, and the motor command it's about to execute. Layer 4 basically learns those sensorimotor transitions and learns to predict what it's going to see next, what the next sensor reading is going to look like.
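
A minimal sketch of that layer-4 loop, with NuPIC's TemporalMemory standing in for layer 4 (the demo used Numenta's sensorimotor research code, and module paths vary across NuPIC versions, so treat this as an illustration; it assumes encode_step and the encoders from the previous sketch):

    from nupic.algorithms.temporal_memory import TemporalMemory

    # Column count must match the concatenated SDR width from encode_step.
    tm = TemporalMemory(
        columnDimensions=(distance_encoder.getWidth() + motor_encoder.getWidth(),),
        cellsPerColumn=8)

    def step(distance_cm, motor_command):
        """Feed one sensorimotor SDR to 'layer 4'; return the burst count."""
        sdr = encode_step(distance_cm, motor_command)
        active_columns = sorted(numpy.nonzero(sdr)[0])
        # Predictions made at the end of the previous step.
        predicted = set(tm.columnForCell(c) for c in tm.getPredictiveCells())
        tm.compute(active_columns, learn=True)
        # Active columns that were not predicted "burst".
        return len([c for c in active_columns if c not in predicted])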
A
So if a transition was predicted by layer 4, if layer 4 successfully learned that transition, then layer 2/3 can pool over those predicted transitions. Because now the world is more predictable, it can build a stable representation for that world. So layer 2/3 pools over it; it does temporal pooling. And if a transition was unpredicted, then you'll see bursting in layer 4, and those changes will pass through.
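
The pooling rule can be caricatured in a few lines. This is only a toy version of the idea (the research temporal pooler is considerably more involved, and the threshold here is made up): hold a stable set of active cells while layer 4 predicts well, and re-draw it when layer 4 bursts.

    import random

    class ToyTemporalPooler(object):
        """Toy 'layer 2/3': a stable representation while input is predicted."""

        def __init__(self, num_cells=512, num_active=10, burst_threshold=5):
            self.num_cells = num_cells
            self.num_active = num_active
            self.burst_threshold = burst_threshold
            self.active = self._new_representation()

        def _new_representation(self):
            return set(random.sample(range(self.num_cells), self.num_active))

        def compute(self, num_unpredicted_columns):
            # Well-predicted input: keep the current, stable representation.
            # Heavy bursting: the world changed, so form a new one.
            if num_unpredicted_columns > self.burst_threshold:
                self.active = self._new_representation()
            return self.active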
A
So what we hope to see is some stable representation once the world becomes predictable. Layer 2/3 is also supposed to learn high-order transitions, but we didn't test that part. Okay, so let's take a look at the demo. Disclaimer: this is a live robotics demo, so very likely it won't work.
A
It did work yesterday in the room, and we took videos of it. But go ahead and stand up so you can see it all.
A
Okay, it's initializing right now. So what I'll do is run 30 random movements.
A
By the way, nothing has been trained yet; the model is empty, it hasn't learned anything at this point. So here what you're seeing is the representation in layer 4 in the middle, the representation in layer 2/3 at the top, and the number of unpredicted cells in layer 4 at the bottom, in that graph there.
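
That bottom trace is essentially the per-step burst count from the earlier step() sketch plotted over time; something along these lines (read_ir_sensor is a hypothetical placeholder for the robot's IR reading, and the command set is assumed):

    import random
    import matplotlib.pyplot as plt

    # Drive the model with 30 random movements, as in the demo, and record
    # how many active columns went unpredicted at each step.
    unpredicted_counts = []
    for _ in range(30):
        motor_command = random.choice([-1, 0, 1])  # hypothetical command set
        distance_cm = read_ir_sensor()             # placeholder sensor read
        unpredicted_counts.append(step(distance_cm, motor_command))

    plt.plot(unpredicted_counts)
    plt.xlabel("time step")
    plt.ylabel("unpredicted cells in layer 4")
    plt.show()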
A
But it looks different; we'll see what it sees and what it represents.
A
See that everything is unpredicted again, because it hasn't seen this before. In layer 2/3 there's no stability; it keeps changing between representations. And soon you see that it starts making predictions in layer 4.
A
There's no stable representation in layer 2/3, but maybe I should have done more time steps. Do you think it would learn it as a new world?
A
But yeah, essentially you see that with the new world it wasn't able to consistently predict, so it didn't build a stable representation.