From YouTube: NuPIC 2013 Fall Hackathon Demos
Description
Hey everybody. First off — I think we have about seven full hack demos at this point, so let's start off with Matt Keith.
So we're going to do a little trial and error here with the Hangout, because we're streaming this. I don't know — do you have anything, Matt?
Here we go. Okay, hello, everybody! So for my hack I decided to use the Raspberry Pi, and I'm trying to use it to control a remote-control car. What you see here is just a cell-phone battery charger, which is used to power the Pi. I have it sandwiched in the middle with the GPIO breadboard breakout; I've got an analog-to-digital converter that I can use for the sensors, and then I use the outputs as well.
So I control some transistors to drive the car's controls here.
So basically, my idea originally was to use some photocells whose measurements I could read in as the data stream, and have the model predict on those.
A
Unfortunately,
the
raspberry
pi
is
not
really
powerful
enough
to
handle
multiple
cli
models
at
once,
so
it
could
only
have
one
one
value,
so
I
kind
of
rethought
twitch
gears
and
then
I
went
until
I
could
progression
and
so
now,
basically,
the
data
streams
in
from
the
photocells
and
I
use
that
data
basically
to
begin
hardwired
controls
for
steering.
So it's basically a light-seeking car — it always turns towards light — and the data is also streamed into the CLA model, which I use to do anomaly detection. Whenever it has a high anomaly score, it gets scared and stops. While it's stopped, it keeps reading data from the photocells to continue to build its model, and once it's used to its surroundings and the anomaly score comes down, it'll drive again. So I'll go ahead and get it started, because it takes a little time to get going.
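The loop Matt describes — steer toward the brighter photocell, but stop whenever the anomaly score spikes — can be sketched roughly as below. This is a stand-in, not the hack's code: a simple running-statistics scorer replaces the NuPIC CLA model, and the photocell and motor I/O are stubbed out as plain function arguments.

```python
# Minimal sketch of the demo's control loop: steer toward light, but stop
# whenever the anomaly score is high ("it gets scared"). The real hack feeds
# photocell readings into a NuPIC CLA model; here a running-statistics
# scorer stands in for the CLA, and sensor/motor I/O is stubbed out.
from statistics import mean, pstdev

class SimpleAnomalyScorer:
    """Stand-in for the CLA: surprise = distance from the running mean."""
    def __init__(self, window=20):
        self.history = []
        self.window = window

    def score(self, value):
        if len(self.history) < 2:
            s = 1.0          # everything is novel at first
        else:
            mu, sd = mean(self.history), pstdev(self.history) or 1.0
            s = min(1.0, abs(value - mu) / (3 * sd))
        self.history = (self.history + [value])[-self.window:]
        return s

def steer(left_light, right_light):
    """Hardwired light-seeking: turn toward the brighter photocell."""
    if abs(left_light - right_light) < 5:
        return "straight"
    return "left" if left_light > right_light else "right"

def step(scorer, left_light, right_light, threshold=0.8):
    """One control tick: stop if scared, otherwise seek the light."""
    s = scorer.score(left_light + right_light)
    if s > threshold:
        return "stop"        # high anomaly: freeze and keep learning
    return steer(left_light, right_light)
```

As the scorer's history fills with familiar readings, the anomaly score falls and the car starts driving again, which is the behavior seen in the demo.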
Well, I'm really impressed with the hackiness of it. The idea that you had to make it a little wireless substation there — that itself is incredibly cool.
Impressive — the whole thing — although its behavior is kind of fun to watch all the same.
Matt tried to compile NuPIC on it — it took about six hours — and this is running an older version of the codebase, from probably about two months ago. I tried going to the recent one, but I had trouble with the dependencies, particularly.
So I can't screencast it, because there's no Google plugin for ARM, but if you want to see it, you can come around. So I got that all running, and then I actually ran one for the second project, which I can screencast for you. The second side of it was trying to do Wi-Fi localization.
If you're familiar with the recent conference with New Relic: they basically gave everyone a badge, and they wanted to track people with multiple access points around. So I was wondering about that — they're doing it all based off base-station RSSI and quality.
So, ideally, you want to make multiple models based off of those.
It's supposed to change a lot over time, right? So one of the ones that I can screencast is a basement — there are a bunch of access points up here. So instead of actually feeding that in directly, I was wondering what the delta prediction would be if I just started feeding in random ones, from, you know, low-end to high-end.
So I actually see the same signal if I sample every two or three seconds, but I have to go way back out there and forward. So I'm just saying it's pretty sensitive — if I move around a little bit here, you can detect it.
Sampling enough, to the point that what the raw value was predicted to be — it doesn't really line up to where the delta is right on with each other. So there's no real difference between where you were and where you should have been way down there, which would have been five steps into the future, technically. And therefore you can predict where a person is based on the localization and the strength of the access points. So that was the idea of it. It's probably better for you to come up here and look at it afterwards.
That was the idea — so, as far as that went, I basically started this project by trying to get it running.
On the version 7 architecture. A lot of the phones and everything — the higher-end ones — I did it with the ARM Cortex, which is the version 7 architecture. The stuff that, you know, Matt was trying to work on is low-scale in terms of hardware; it's just not very powerful. So I was trying to get something a little bit more powerful running on it.
That allowed me to do it more natively on my ARM board, and now I can start to move into other stuff and then cross-compile it to work natively on there, with more processing power. So part of it was the ARM architecture, and part of it was trying to learn how to do the inference and prediction. All right.
Think about doing some predictions. Those predictions come from the inputs that this model has already received. Imagine that I wanted to make a decision about something — should I take option A or option B of some action that I'm going to take — and I want to evaluate the cost of those actions by asking the CLA to predict the outcome of each.
The idea is to fork the model at a point in time when I want to evaluate a set of hypothetical decisions: creating a hypothetical model, driving a number of inputs into it, and then making predictions. I could then apply a cost function to those predictions, decide which hypothetical model was the best, actually discard those forks, and go forward with my main model, consuming the input reflecting the action that I've decided to take.
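The fork-evaluate-discard cycle described above can be sketched with any predictor object standing in for the CLA. The `ToyPredictor`, the candidate actions, and the cost function below are illustrative assumptions, not the demo's code:

```python
# Sketch of the "fork the model" idea: deep-copy the learned model at a
# decision point, drive each hypothetical action into its own copy, score the
# resulting predictions with a cost function, then discard the forks and feed
# only the chosen action into the main model. A toy moving-average predictor
# stands in for the CLA here.
import copy

class ToyPredictor:
    """Stand-in for the CLA: predicts an exponential moving average."""
    def __init__(self, alpha=0.5):
        self.alpha, self.estimate = alpha, 0.0

    def run(self, value):
        self.estimate += self.alpha * (value - self.estimate)
        return self.estimate       # the "prediction"

def choose_action(model, actions, cost):
    """Fork the model per action, evaluate cost, keep the best action."""
    best_action, best_cost = None, float("inf")
    for action in actions:
        fork = copy.deepcopy(model)          # hypothetical model
        prediction = fork.run(action)        # drive the input into the fork
        c = cost(prediction)
        if c < best_cost:
            best_action, best_cost = action, c
    model.run(best_action)                   # main model consumes the winner
    return best_action
```

The key point is that the forks never feed back into the main model; only the winning action does, exactly as described in the talk.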
This function says: look three steps ahead, and just look for the maximum amount.
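That cost function — "look three steps ahead, just look for the maximum amount" — is simple enough to state directly. The list-of-predictions shape here is an assumption about how the multi-step output is held:

```python
# "Look three steps ahead; just look for the maximum amount": score a
# candidate by the largest value among its first three predicted steps.
def three_step_max(predictions):
    """predictions: predicted values per future step, nearest step first."""
    return max(predictions[:3])
```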
So it starts at this position here, which is position zero.
What's really neat is that we've talked about projecting multiple steps ahead, and you've done that. First of all, you learn the world first, and you learn —
— the opportunity to artificially figure out what the future might look like on the surface of this area.
And it's a really good way to make it very intuitive, in a human way — the human neocortex is programmed to do this.
Yeah, you know, I think this is part of what business people do every day. I think every decision —
It's very fundamental to what we need to do, so I think it's really cool. I mean, right now I'm looking at: we need to move to a new building, okay? Somebody's presenting the lease — it's so much money. Well, what's the market going to do in the meantime? If I wait, what's the outcome of that? And I've got to imagine all those.
So I gave you an NLP thing yesterday — such a long time ago — and since then I have —
— managed to take these predictions that are coming out of the temporal —
— pooler. So, just as a reminder: I had two lists, one list of animals — just a whole bunch of animals — and one list of vegetables, a bunch of vegetables. I would randomly —
So some level of success, generally: when I got something comprehensible out of the TP, it would map to some type of vegetable like that.
So this is what it sees as a "swede", and these are a bunch of different SDRs that all map to "swede". Some of these are vastly, awkwardly different in some ways. Like this one there: a term coming out of the temporal pooler that is pretty different from "swede", but it maps to "swede", because there's really nothing else it's close to.
The temporal pooler — it depends, as you can see, and there are some terms that... let me find one. Squash — this is a good one.
Okay, okay, I'm gonna do squash.
What matters is that they all did map to some type of vegetable, which is what I expected, and these overly dense SDRs mostly have vegetable qualities, it seems — and that's the important thing. One thing I'm really wanting to do — I didn't have time —
Animal, vegetable; animal, vegetable — and then I wanted to give it furniture, and that's what's next, because I don't know if it cares what's in the first position. You know, it might just be saying the second thing is always a vegetable; it doesn't care what the first thing is.

You probably should have trained it on, you know, furniture, sports, and vegetables — discrimination. Right. So, I mean, at the core it's just a very basic word association: give it a random list of anything and another random list of anything, and it'll do this trick.
Predictions — that would also be interesting. And just one thing: last time Francisco and I were basically talking about this — the next thing to do is to add a TP layer on top of the SDR generator and then feed it the training corpus, so it gets the sequence information. So it will base the prediction on there being another SDR available — basically the one that it has seen in the text that it has read. Yeah.
That's what you were talking to me about earlier. So the thing is, you know, furniture seems to be the best crazy other category to use for testing this thing. So you give it furniture instead of an animal and see whether it comes out with a vegetable, right? There's really no context in this experiment at all, so I know that somehow vegetables are associated with animals — or maybe it's just that a vegetable is the second thing; that may be what it's learned.
You know, you're looking at activations of cells in the brain — something very analogous to what's going on — and you know that in a real brain the activations for any particular concept are never going to be exactly the same. You're never going to expect the representation for the word "tomato" to always be the same; it's never going to happen. The cells are changing constantly, so you're not going to expect ever to get perfect anything.
But I think the power of the system — you're showing that even though you're predicting this multi-faceted thing, based on other things going on, you can still say it's a tomato, or still say it's a cabbage or a swede or whatever. And this is what it actually looks like, guaranteed, if you're looking at cells in the brain — this is the kind of relationship — and yet you should still be able to classify it properly. So.
The other thing is, I'm not too concerned about keeping the level of sparsity here at some constant. The TP itself doesn't.
By definition, it varies in sparsity. That's why there's a spatial pooler, which brings you down to a constant sparsity again. So that variation is perfectly fine too — as long as you don't get too much activity and end up outside some range, you know, where it just barfs at you, it's just fine. So, I mean, I feel like I'm looking into a brain, the way a neuroscientist would look. If they saw this kind of data coming out of a brain, they —
And it's just so cool to watch it and imagine that that's actually what's going on. You know, it's like the first time you've ever seen this kind of stuff.
So for some of them, I get a lot of SDRs from the TP that are just empty. I don't know why.
So we have three things to show you today. One is a helicopter that is to be flown by the CLA. The second thing is an open source project that provides a foundation for creating motor-control-based AIs using the CLA — in fact, it's even more general than that: it provides an architecture in which you can use any kind of predictor, of which the CLA is one. And the third is a sort of future goal that we have for sensory-motor integration: for the CLA to solve generic control problems.
So the first thing we'll show you is the copter. The idea we had was to basically use the CLA to learn to control the copter. Our approach started very ambitious: we wanted to do unsupervised learning on any control problem. All you have to provide is the goal — the cost function — and have a generic controller and the generic CLA learn to play around with the environment and learn to execute the motor controls in a sensory-motor environment to achieve the cost function.
We've simplified the problem as much as possible to get results and have a demo to show you guys today. So what you're going to see today is a supervised learning approach. How we've set this up is: we train the copter with a PID controller. So we have, basically, a controller, a predictor, and the world.
The controller is a PID controller, which is basically an almost-ideal controller: given a certain target altitude, it'll send speeds to the copter — it controls the speed to achieve the target altitude, right? So the PID controller does this, and the predictor, which is the CLA, is watching the copter do it. The input to the CLA during training is the altitude of the copter, which we read off the sensor, and the speed coming from the controller. So it's sensory-motor.
So the sensor part is the altitude; the motor part is that it's watching the controller's speed output, and it's making correlations with how the speed affects the altitude. The final thing we give it is the distance to the target — how far away it is from the target right now — so it can make a correlation between what the controller is intending to do, which is achieve the target, its motor output, and the sensory input of how the controller affects the world.
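The training setup — a PID controller flying the copter to a target altitude while the predictor watches (altitude, speed, distance-to-target) triples — can be sketched with a toy one-dimensional "copter". The gains and the physics below are invented for illustration; the real demo used an actual quadcopter with NuPIC's CLA as the predictor:

```python
# Sketch of the training setup: a PID controller flies a simulated copter to a
# target altitude, producing the (altitude, speed, distance-to-target) stream
# that the predictor would watch during training. The one-dimensional
# "physics" and gains here are made up for illustration.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_error = 0.0, 0.0

    def update(self, error, dt=1.0):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def fly_to(target_mm, steps=300):
    """Run the PID loop; return the (altitude, speed, distance) stream."""
    pid, altitude, stream = PID(kp=0.4, ki=0.01, kd=0.1), 0.0, []
    for _ in range(steps):
        distance = target_mm - altitude
        speed = pid.update(distance)
        altitude += 0.1 * speed          # toy physics: speed moves altitude
        stream.append((altitude, speed, distance))
    return stream
```

In the demo the same three quantities were fed to the CLA at each step, so it could correlate the controller's intent (distance to target), its motor output (speed), and the sensed result (altitude).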
During training, we basically had the PID controller start from hovering. The way this works is: you tell it to take off, and then the on-board controller basically makes it take off and hover at a certain altitude. That's the base you see over there, right onto 800, and then we allow —
Then it just hovers, and then you can send it speeds — you can tell it to speed up or down — and the PID controller will fly it.
It will increase the speed until it reaches close to two thousand millimeters — two meters. So it flies up to two thousand millimeters, it hovers, and then it resets. So you see the reset and it starts over again: it lands and then it tries again. So it does a bunch of runs. This one we've trained on 15 runs — 15 of these take-off-and-fly-to-2000-millimeters runs.

Are you resetting the sequences for the CLA?
It starts off predicting pretty terribly, and over many runs it converges onto that sequence, right? So it learns the output of the controller along with the inputs from the world. Now, the interesting thing about this problem is that it's very noisy, because it's a physical system — an aerodynamic system — so you end up having a lot of noise. That's where the CLA is very useful. It ends up —
Now, this is really interesting, because this makes it a sensory-motor system. During the learning it's basically learning a sequence, right? And the cool thing is, it takes as input the sensor information, then it outputs the prediction, and that prediction is directly applied to the world — which then affects the world and affects the future. So its actions are actually part of the world, and so it has to learn a sensory-motor model of a world which it directly affects.
It's sending that to the drone, and it has to make a decision at each of those points — so it's a dense sequence, and it can get actually messy. So that was the first thing that we resolved, which was really cool, and it does it really well.
But here's the thing: we feed in the distance to the target, right? So when it's training, the distance to the target starts off at 2000 and moves towards zero — it sees that happening. But at some point, if the target is one thousand, then it moves from one thousand to zero, and that part of the sequence happens.
So can we cut out the first segment and have it learn — have it execute — just part of it?
Then it should work for it, right? Right — that's right, that's right. That would be great. I mean, we're basically taking the generalization idea and trying to encode it; by encoding the input of what we want as generically as possible, we can achieve more results. And another trick is to make it fly down instead, and see what that tells us.
So that's what we saw with that, and you can imagine there's a lot we can do with it — there's more to do with it. But let me show you some of these graphs that we took. During the control, this is the sequence to get to 2000 that the CLA executed. This is actually the motion in the y direction, so yeah, this is what you saw, and it's pretty smooth motion.
It's still a little bit noisier, as you can see — this is the 2000, the sequence it's been trained on. The 1000 is a little noisier, right, but it still achieves the one thousand, even though it's not trained on the exact sequences to get to one thousand. Then this one was one thousand five hundred — smoother than the one before, closer to the original — and this is the five hundred. So the farther away you are from the trained target, the noisier its sequence.
Okay, so I think one of these — let's look at it. You always provide it... yeah, yeah. I think — okay, I can't say exactly, but it looks like a very noisy version of this one. It has to behave similarly, in order to achieve this trajectory. But what's —
It's remarkably robust in the face of all this noise, and this generalization — honestly, I personally did not expect to see this working as well. I thought we'd show you guys a simulation, which is what we had; I didn't think we'd actually get it working on the physical thing. So we were surprised at how quickly it learned, especially with that sensory-motor loop and all the noise — pretty surprising, in just pretty much 15 runs. Because I recall, you know, the —
— and they were basically trying to learn one sequence, and it took 25 tries to learn. So I'm pretty surprised that in about 15 iterations it was able to do it. I guess one of the things is that it doesn't have to make any perfect decisions, right? It doesn't bring up that exact sequence, right — so the noise isn't as much — but then it has —
And did it just immediately learn how to get either the last bit of the sequence, or did it scale the whole sequence over? We don't know. So what I want them to do next is, right: fly down now. Bring it up to a certain point, start learning, and do supervised learning to get it down to a certain point, and see — can it learn that?
The open source project is on GitHub, and it was structured in such a way that you can plug things in — this is a copter world, but you can plug in others. We actually started with the pendulum: we wanted to solve an inverted-pendulum problem, which is keeping a pendulum upright just by moving the thing underneath it. You can add a maze — you can put a maze in there. Any control problem, like driving a car, you can fit into that framework, and it's all very —
So definitely, I encourage you to check out that project and, you know, try different control problems if you're interested in it. I think motor control is very interesting — Jeff talked about it yesterday, and we were very inspired by that. We've been excited for some time regarding motor control, because, you know, HTM — so that actually brings me to the third thing I want to say, which is that one of the benefits of the HTM approach is that it's generic, right? So you can —
You can learn any sequence of patterns, so if we can apply that to a motor domain, we can have a generic controller. Our approach — the idea we started with — was having a controller just come up with a bunch of options and using the CLA to decide which of those options is the best option at each time —
— step, using that, basically, to make a decision. Initially it can just play with its environment, right — learn its environment, learn the sensory-motor representation of its environment — and then, over time, use that understanding to achieve a goal which is externally provided. Then all you have to do is provide a goal specific to the domain that you're trying to solve — self-driving cars, for instance — and a generic controller with a generic CLA should be able to just figure out how to do it, in an unsupervised approach.
That's the dream with this project; this is our first step in that direction. But we think unsupervised learning is such a cool goal because, for instance, with a self-driving car: the current Google self-driving cars, last time I checked, hadn't been driven in snow or rain or the more severe weather conditions, and I'm sure one approach is just to put in a lot of specific algorithms to deal with those conditions.
You know, I think Jeff's talked a lot about sensory-motor integration, so I'm sure he'll have some comments on it. I found myself just wondering about generalization — all the different things you might do with this to see how it generalized. For example, put some weights on it so that it's a different weight, or put a fan next to it so it's got, you know, a little bit of resistance. I could imagine exploring lots and lots of different things.
I'd be very curious to see to what extent it has generalized that information. So that's where I'd love to see it go.
I mean, you're not the first person to try and use the CLA on a robot, but I think you're the first person to use the CLA on a robot where you had a weapon to kill it in case it got out of control.
That has been a lot, but mostly I applaud you for the ambition — for trying to create a platform and, of course, starting to figure out a way of building a general-purpose solution. You know, I know other people talk about that, but I think that's a great thing. It's going to be really hard, but I think we have to get started.
— of how the SP, in a very restricted setting, learns. Because I think that one of the difficulties in learning about the CLA and HTM is having a visual intuition of what's going on in the system. I'm a very visual person, and so I wanted to just be able to see exactly what's going on as I have inputs going into the system. So I'm going to just play this once, and I'll talk through what's happening here.
There are only 16 columns — so, essentially, 16 neurons — and you're seeing activity as it's going across and seeing something. So you can see that as —
This is a visualization of the permanences that each of those columns has for the underlying image. Right now it's set to 100% overlap, so each one of those columns has a potential connection to every single pixel in that 32-by-32 image patch that it's using as a sliding window. And, as you can see, some of the columns have never won.
They have the sort of baseline randomized permanences that you start with. This is what happens when you initialize the SP, and this is not obvious if you're just looking at the code.
Every column that wins becomes a very, very good representation — very, very quickly — of one of those four features. It becomes sort of a perfect feature detector.
— of that perfect feature, because note that the window overlaps exactly in this 32-by-32 way — it never —
You have two for every single one, because, you know, they're not really competing with each other; they're just perfectly memorizing things. And, as I said —
— before, every one of these columns has one-hundred-percent possible connectivity to the input. Now, this is very different from the standard input.
So let's go to an example where boosting is really useful. I'm going to go back to one, and I'm just going to go to one of these guys: I'm going to turn boosting back on — max boost is going to be three — and I'm going to use a slightly different image.
So what happens in this case is that you only have one column that wins at every single iteration — if it has the highest overlap, it always wins.
Here we have a variety of columns that are winning, and because of that, almost all of the columns are being used. Because if they're not used, their duty cycles are low; the SP sees that the duty cycle is low, so boost kicks in and they start winning. You've got neurons — there are columns — that become excellent representations of the input, but all of them, you know, get to learn something. And, as you can see, we've got at least four columns that are now excellent.
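The duty-cycle/boost interplay described here can be sketched in a few lines. Only "max boost is three" comes from the talk; the decay constant, the target duty cycle, and the update rule itself are simplified stand-ins for the real spatial pooler's implementation:

```python
# Sketch of boosting: columns that rarely win keep a low "duty cycle", and a
# boost factor multiplies their overlap until they start winning, so every
# column ends up learning something. max_boost=3 mirrors the talk's setting;
# the decay and target values are invented for illustration.
def update_duty_cycles(duty, winner, decay=0.9):
    """Exponentially decay all duty cycles; bump the winner's."""
    return [d * decay + (0.1 if i == winner else 0.0)
            for i, d in enumerate(duty)]

def boost_factors(duty, max_boost=3.0, target=0.1):
    """Columns below the target duty cycle get boosted, up to max_boost."""
    return [min(max_boost,
                1.0 + (max_boost - 1.0) * max(0.0, (target - d) / target))
            for d in duty]

def winner_column(overlaps, duty):
    """The column with the highest boosted overlap wins."""
    boosts = boost_factors(duty)
    scores = [o * b for o, b in zip(overlaps, boosts)]
    return scores.index(max(scores))
```

With boosting off, the same high-overlap column wins every iteration; with it on, a starved column's boosted score eventually overtakes it, which is the behavior shown in the visualization.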
There's a little bit of a shape here that I start to recognize, and — okay, that one is —
But still, you get this discriminatory effect because of boosting, and you don't have just one neuron doing all the winning. You know, just as an example of where you might take this and where it might become a more realistic problem —
Or, you know, we've got one that's the perfect centered ring there, but some of them are a combination of sliding things, and it gets much more complicated very, very quickly. The next step on this is to close the loop a little bit, because this is the reason why I set things up like this.
And if you're familiar with any image databases where they have actual, like —
How would you get this kind of result? If you saw this result, what do you think is going wrong? And, you know, you can work your way up — I can give you a set of very simple things.
So I couldn't really figure out what to do with my hack. I was talking to lots and lots of people yesterday and today; a lot of —
So, more seriously: Matt showed how to use the CEPT API and the CLA to associate word pairs, basically. So —
I wanted to see if I could take that just a little bit further, and what could be done by using the temporal pooler in this context. So I extended Matt's NLP hack to do three-word sequences — so, instead of two-word sequences, I trained it with sequences of three words.
— with various animals eating various things, and then at the end of the thing I asked the system what they'd eat.
This is my data set — it's fairly short. It's like "frog eats flies", so there are different types of animals eating different types of things, and I also put in another verb, "likes" — so "cat likes milk", "elephant likes water", and so on. And at the end of it I put in "fox eats", and then I just queried something, just to see what would come out.
So what happens with Matt's system is that, for each word, he queries the CEPT API and makes that the SDR representation — that's what you saw in the visualizations — and so I feed that to the temporal pooler and —
— a little bit about the symmetric properties of animals and the verb. The second thing about it is that, in order to do this properly, you actually need a high-order sequence. You can't do a first-order sequence with this, because you have to go at least two steps back to see what that animal was.
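Why first-order memory isn't enough here can be shown with a tiny lookup-table model: keying on only the previous word loses the animal, while keying on the previous two words keeps it. The three-word corpus is made up in the spirit of the talk's examples:

```python
# Why this hack needs high-order sequence memory: a first-order model
# predicts from only the previous word, so after "eats" it cannot tell which
# animal is eating; a model keyed on the previous two words can.
from collections import defaultdict

corpus = [("frog", "eats", "flies"), ("cow", "eats", "grass")]

first_order = defaultdict(set)    # prev word      -> possible next words
high_order = defaultdict(set)     # (prev2, prev1) -> possible next words
for animal, verb, food in corpus:
    first_order[verb].add(food)
    high_order[(animal, verb)].add(food)
```

After "eats", the first-order table offers every food it has ever seen, while the high-order table narrows the prediction to the food that matches the animal two steps back.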
So, what's happening is — it's like "coyote eats" — and then what the CLA predicted was... I think I put in multiple things: rabbits and squirrels and things like that. A pretty good one.
And we didn't have a spatial pooler in this code base, and the temporal pooler assumes there's a relatively fixed sparsity level coming into it. So the right —
So, if it was five percent, I think I subsampled three-quarters.
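The subsampling step — knocking an over-dense SDR down to a fixed number of active bits so the TP sees roughly constant sparsity — might look like this; the sizes and the fixed seed are illustrative:

```python
# Sketch of the subsampling mentioned here: the CEPT SDRs vary in how many
# bits are active, but the temporal pooler wants roughly fixed sparsity, so
# over-dense SDRs are randomly subsampled down to a target active-bit count.
import random

def subsample(active_bits, target_count, seed=0):
    """Keep at most `target_count` active bits, chosen at random."""
    if len(active_bits) <= target_count:
        return set(active_bits)
    rng = random.Random(seed)
    return set(rng.sample(sorted(active_bits), target_count))
```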
So, you know — but as Francisco brought up earlier in this morning's talk, if you had a spatial pooler, it would pick up on the commonly occurring sub-patterns and use that to create the SDR. But here, you know —
So that's the summary: I just extended the NLP hack into three-step sequences, and I was trying to get symmetric generalization as well as high order.
I think you've seen it once again — and we want to thank him for his work.
Yeah, thanks a lot for showing this. I mean, I principally was extremely motivated to come here to see — finally, after all the work we did with the SDRs and how to generate them and so on — to actually see that the CLA is able to learn based on this. So that was, say, my big hope, and it was made possible, and for me it is proof, so to say. So I'm very confident that we are going to discover a lot of things if we try to bring those two systems together.
— that, and they've also got this stuff coming in that's predictive, and the two are added together. This guy wins because he has better input dendrites, and he also has the most stuff coming in from other places in there. So he wins because he's both predictive and he also has the best dendrite down here.
So anyway, it took me four or five hours to put that together, and this guy said he'd like to help once more, and then I wrote this email. While I was writing this email, I discovered that it's one, two, three, four, five, six — six lines of code, in the entire several thousand lines of code, that would actually do it for you. Right? So this is what you do.
— cells and the different activation levels. So one of you guys who works there will be able to do it in 20 minutes, right? And it's a thing that can be decided by swarming. So basically you put your data through it, and one of the parameters is: turn this on or leave it off. Because it will degrade some people's data; it will hallucinate for other people's data; it will give much better predictions for some — like, for language data it could be much better prediction — and for some people it won't matter.
We know this isn't biologically correct. We have a reason for not doing it — there are things we do to make it a lot easier, and this is one of them. The SP doesn't really correspond properly to the way the real region looks, but we think it actually made it clearer for us to understand how to build it and make —
It's a good idea to look at where I do — so I'll just leave it at that. But we don't really know yet how much it'll help us until we do some real testing.
Okay, I know we're getting ready to wrap up here. If I haven't met you all: I'm Donna.
I just wanted to make a comment and then say thank you — a couple of thank-yous. The comment I want to make is that, you know, this is really what it feels like to be at the beginning of something very important, very big. It's got lots of little pieces and lots of fits and starts, and when you step back and you look —
— and something was changing. And I'm really reminded — my husband, who is here with me, was part of the Homebrew Computer Club, and they're having a reunion right next week.
Yeah — and, you know, those people were there kind of at the very beginning of the PC revolution, getting these tools on everybody's desks, and I think you all will feel one day that this was a little bit like being part of the Homebrew Computer Club of machine learning — that, you know, you were here for intelligent computing, and this is kind of where things started, when they were happening. So it's a very exciting time. I wanted to say a couple of things, first of all to the Numenta team.
She's been trading off and helping out there as well, so she probably can't hear me, but it takes quite a bit of work to put on something like this, so the team's done a lot.
And to all of you, for coming — it's a lot of time to put into it — and for participating in this community, for being active, for sharing your work, for helping each other, and for just helping everybody learn, to grow and develop this. It's an amazing experience, and it'd be —
— of you. So I know some of you traveled a long way; others are close. But either way, you've chosen to spend your time on this rather than the many other things you could be spending your time on. So my main purpose was just to say thank you, and welcome on the ride — I think we're all going to be happy we were part of it.