From YouTube: Sensory-motor Integration in HTM Theory, by Jeff Hawkins
Description
The neocortex generates most of our high-level behavior, and every region in the neocortex has some form of motor output. In this talk I will describe what we know about how the cortex generates behavior and how behavior fits within the framework of Hierarchical Temporal Memory theory. Although we don't yet have a comprehensive theory of how the neocortex generates behavior, we do understand several of the major components, giving us hope that a comprehensive theory may be reachable in the near future.
That's the name of our company, Numenta, and we have this mission, which is to be a catalyst for machine intelligence. You know that a catalyst accelerates a reaction that's going to happen anyway, but doesn't get consumed in the reaction. Machine intelligence is going to happen; we're just trying to accelerate it as much as we can.
We do three things. We have a research team, me and a couple of other guys, doing theory and algorithms. We have an open source community, which Matt Taylor runs; we started it last summer, it's going great, and we've been putting the algorithms into it as we develop them. And then we have a business, a product called Grok, which we're going to launch officially in a couple of weeks. It's really cool, and it's built on these algorithms, but I'm not going to talk about it tonight.
If you want to see it, I'll show it to you a little later; it's really cool. But tonight I'm just going to talk about some of the research topics. That's what this is all about, and hopefully that's why you're all here. Was that clear for everybody? Okay. When we started Numenta, we had this big debate. Well, it wasn't much of a debate, but we thought about it a lot, and it was this:
Can you really try to understand how the cortex works without focusing on the entire motor experience as well? Can you do purely sensory inference? Can you make progress with sensory inference without understanding how the motor interaction works? We were not certain of that, but we went down that path, and now we're coming back to the motor side, and we've been thinking about it a lot. So that's the introduction to the talk; unless there are any questions, I'll just delve right into it.
So this is the problem we're trying to solve. It goes by different names: sensory-motor integration, sensorimotor contingencies (people spell it different ways), embodied AI, symbol grounding.
Basically, it's the fact that most of the changes on your sensory organs, your ears, your eyes, your skin, and others, are due to your own actions, and it's hard to separate that out. So the model of the world we build from our data stream has to take cognizance of our behaviors, so that we build a sensory-motor model of the world:
How our sensory patterns change as we interact with the world. The questions we'd want to answer are: how does the cortex learn this sensory-motor model of the world, and how does the cortex generate behavior? Those are the two topics I'll be talking about here today. I have a couple of overview slides, just two of them, so if you know our work this will not be new to you, but I find it's worth stating again.
We have an overall theory about how the neocortex works, a very high-level conceptual idea we call hierarchical temporal memory. I talked about this in my book On Intelligence, although I didn't use those words for it; all the features were there. You've got a bunch of sensory data. The retina, for example, is really like a million sensors; it's not one sense, it's a million sensors, and there are a million fibers coming from it.
That feeds a hierarchy of cortical regions, and these regions all do basically the same thing. There are variations, but they're basically all doing the same thing, and much of it is common across species and across modalities: vision, hearing, and touch look very, very similar, and so do mice, rats, and so on.
If you want to understand my speech, or any auditory sound, it's patterns in time, and the order matters. You have to have memories of what those words sound like and how the patterns go. That's true for touch as well, and it's even true for vision: even though we can do some visual inference without time, most of it involves time. And of course anything the cortex generates as motor behavior is going to be sequences too.
That's the temporal memory part: I'm playing back memorized sequences right now as I'm speaking to you. And if all regions in the neocortex are doing the same thing, then all regions are doing inference and motor behavior, which we know is pretty much true. So a large part of the memory in these regions is sequence memory.
The other part of hierarchical temporal memory theory concerns what happens as you go up the hierarchy. It's very well known that the representations become more stable over time and that the cells represent larger parts of the input space. One way to look at it is this:
If each level in the hierarchy is learning sequences, like melodies, then each region has to form a stable representation for any pattern it knows, like any melody, and then it learns sequences of sequences, and so on. So as you go up the hierarchy, you end up with cells that stay active for very long periods of time over learned experiences.
Going back down, it unfolds: you can take a fairly high-level concept, and it unfolds into a series of sequences, and those unfold into series of elements, and so on. This is exactly what my speech is right now: I start by thinking, what's the first slide I'm going to do, and that eventually produces, you know, the ten thousand muscle movements needed to say it. So you've got this unfolding of sequences going up and down. That's basic hierarchical temporal memory.
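The unfolding just described can be sketched in a few lines. This is my own toy illustration, not Numenta code; the grammar, the token names, and the `unfold` function are all hypothetical stand-ins for how a stable high-level token expands recursively into sequences of lower-level elements.

```python
# Hypothetical sketch: a high-level token unfolds recursively into
# sequences of sequences, mirroring the top-down expansion in HTM.

# Each level maps a stable token to the sequence it stands for.
GRAMMAR = {
    "greet": ["hel", "lo"],      # concept -> syllable sequence
    "hel":   ["h", "e", "l"],    # syllable -> phoneme sequence
    "lo":    ["l", "o"],
}

def unfold(token):
    """Recursively expand a token down to its lowest-level sequence."""
    if token not in GRAMMAR:     # primitive element: emit as-is
        return [token]
    out = []
    for part in GRAMMAR[token]:
        out.extend(unfold(part))
    return out

print(unfold("greet"))           # ['h', 'e', 'l', 'l', 'o']
```

Going up the hierarchy is the inverse: many fast-changing low-level elements map to one slowly changing token.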
We have a theory about what a layer of cells is doing in the neocortex. That's the cortical learning algorithm, or CLA. I'm not going to explain how it works here; it's what's in our open source repository, and it's what we've been testing for four years. We understand it extremely well now, and I think it actually has a lot to do with how the brain really does this stuff. I'll just list some of its attributes. Basically, what it does is this:
It represents a layer of cells in the cortex (we'll talk about layers in a moment). It takes a distributed input, turns it into a sparse distributed representation in column space, and then learns sequences of those, or transitions in time, and it makes predictions and detects anomalies.
A
It's
got
some
nice
desirable
attributes,
it's
an
online
learning
system,
so
it
learns
continuously.
It
has
a
very
large
capacity,
even
for
very
small.
We
do
typical
models
of
2048
mini
columns
and
64
000
cells,
and
it
can
learn
millions
of
transitions.
It's
very
high
capacity.
It
builds
high
order
sequences.
It
has
it's
built
on
simple
local
learning
rules
and
it's
fault
tolerant
everywhere.
So
those
are
some
nice
things
about
it.
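The input-to-sparse-pattern-to-prediction pipeline just described can be caricatured in a few lines. This is my own toy illustration, not Numenta's code: the hand-built sparse codes below are hypothetical, and a real CLA uses mini-columns with high-order cell states rather than this first-order lookup.

```python
# Toy sketch of the described pipeline: inputs become sparse bit
# patterns, transitions between them are memorized, and an anomaly
# score measures how unexpected the next input is.

SDR = {  # hypothetical codes: each input activates a few bits
    "A": frozenset({0, 5, 9}),
    "B": frozenset({1, 6, 10}),
    "C": frozenset({2, 7, 11}),
    "D": frozenset({3, 8, 12}),
    "X": frozenset({4, 6, 13}),   # shares one bit with "B"
}

class TransitionMemory:
    def __init__(self):
        self.transitions = {}              # previous SDR -> predicted SDR
    def learn(self, sequence):
        codes = [SDR[x] for x in sequence]
        for prev, nxt in zip(codes, codes[1:]):
            self.transitions[prev] = nxt   # first-order learning
    def anomaly(self, prev, actual):
        """0.0 = fully predicted, 1.0 = fully unexpected."""
        predicted = self.transitions.get(SDR[prev], frozenset())
        active = SDR[actual]
        return 1.0 - len(predicted & active) / len(active)

tm = TransitionMemory()
tm.learn(["A", "B", "C", "D"])
print(tm.anomaly("A", "B"))   # 0.0: B was predicted after A
print(tm.anomaly("A", "X"))   # high: X is mostly unexpected after A
```

The overlap-based score is why sparse representations degrade gracefully: a partially matching pattern yields a partial anomaly rather than all-or-nothing failure.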
Okay, so now we're going to jump into cortex and behavior.
Here's a sort of big-picture thing you need to know. Starting off, it's worth mentioning that you have a lot more senses than the three big ones you usually think about. I've thrown in here the vestibular sense, which is your sense of balance, and your proprioceptive senses: you have sensors that tell you the position of your joints and things like that in your body.
There's a lot of data coming in from your sensory stream, and the first place it goes is not the cortex; almost all of it goes someplace else. In fact, we all have what you might call an old brain. It consists of the spinal cord, the brain stem, the basal ganglia, the superior colliculus, and other structures like that.
A
It's
built
of
neurons,
of
course,
and
so
this
sensory
data
goes
to
various
different
sections
of
the
old
brain
and
those
genes.
Those
things
generate
behavior.
We
all
have
these
a
lot
of
built-in
innate
behaviors
that
don't
require
a
neocortex.
Don't
require
the
big
thing
on
top,
you
can
think
of
it,
sort
of
like
a
reptile
brain
so
that
exists
and
those
behaviors
exist,
and
then
what
happened
was
the
neocortex
came
along
sort
of
in
evolutionary
time
fairly
late
fairly
recently
and
and
what
happens
is
the
following?
A
It
gets
a
copy,
in
fact,
literally
the
axons
which
are
projecting
to
these
old
brains.
They
split
it's
the
same
accent
and
they
and
they
go
to
the
near
cortex.
This
was
observed
over
100
years
ago
by
kahal,
and
this
is
pretty
much
a
universal
rule,
so
you
get,
the
cortex
gets
a
copy
of
the
exact
same
data
which
is
going
to
the
old
brain.
The cortex doesn't know which of those are motor commands; they're just axons and firings. But it's not just getting sensory data: it's getting, basically, a representation of all the behaviors being generated elsewhere in the brain. And then the final thing that happens is that the cortex itself has projections that generate behavior. They don't go right to the muscles; they go to these old brain sections, and so I drew that with a dotted line, because it's a kind of influence. The cortex is trying to influence the machinery that already exists.
To give you a few flavors of this: my breathing and your breathing is pretty much brain-stem stuff, but my cortex can control it; it's learned how to do that. It's not like it's replacing the brain stem, though. In fact, most of the time the cortex is not involved at all. But if I told you, hey, I want you to breathe right now, in and out, hold your breath...
A
You'll
do
that
and
if
I
tell
you
to
hold
your
breath
for
too
long,
the
old
brain
stuff
takes
over
and
says
screw
you
cortex,
I'm
going
to
breathe
anyway.
So
this
is
true
for
everything
like
walking
and
and
grasping,
and
all
these
reflex
things
you
have
a
lot
of
behaviors,
which
are
subcortical,
which
the
cortex
basically
learns
to
control
them,
and
this
is
the
way
the
the
the
the
whole
nervous
system
is
structured.
Layers on layers on layers, where each new thing interacts with the old thing, and the next new thing interacts with that, and so on. This is the system we have to understand; it's not like the brain is just doing stuff in isolation, it's in this context. So, to repeat what I just said: the whole brain has complex behaviors, the cortex evolved on top of it, and the cortex receives both sensory data and copies of the motor commands.
Here's a little drawing I made of a region of the cortex. Remember, the cortex is really thin, only about two and a half millimeters thick, and the whole neocortex in humans is about a thousand square centimeters. If you zoomed in on one little section anywhere (it doesn't really matter what region, since I'm going to talk about universal principles), the first thing you'd see is layers.
There are the main cellular layers: layers 2 and 3, layer 4, layer 5, and layer 6. That's the basic starting point everyone pretty much agrees on, and it's the first thing you'd see if you looked under a microscope. If you dig in closer (and this was much harder to determine), here's the same thing blown up: there's another structure going on here called mini-columns, and this was controversial for a while.
But I don't think it's controversial anymore; there's pretty much universal evidence for it. Essentially, the neurons themselves are aligned in these very, very skinny little vertical columns, 30 to 50 microns wide. This arrangement happens during development, but it also persists through your life; there's a lot of evidence supporting this.
The cells in a mini-column tend to have very similar response properties (vertically oriented here), so they're really tied together functionally. There's been a lot of debate about this: it was speculated many years ago that the mini-column was the repetitive unit of the cortex, and clearly the cortex got bigger by making these sheets bigger, with more regions and more columns. I don't think that's right, though.
I think the right level of organization is to treat the cellular layer as the unit of cortical computation, with the mini-column in a supporting role I'll talk about here. Each layer implements a variant of a sequence memory, a variant of the CLA. I've labeled them all "sequence memory" here; you'll see in a moment that they're different, but they all basically work on the same principle. This is my belief. And the mini-columns:
They play a critical role in two ways. First, they play a critical role in how these layers learn sequences: if you know the CLA, if you've read the white paper or know the code, you understand that the mini-column basically provides the mechanism for doing high-order representations. But they also play a role in tying the layers together: they sort of force the layers to represent similar types of things, which we'll get into further.
So that's the first level. The question then is, what can we say about these layers? Well, we can say a bunch of things. Let's look at layer 4 and layers 2/3. Most neuroscientists would agree that these are generally the feed-forward inference layers. Information comes into the cortex, and in a feed-forward pathway its typical destination is layer 4. There are exceptions, and we'll talk about some of them in a second, but remember this:
This information, which sometimes comes through the thalamus (it doesn't really matter), is basically sensory information, but it also includes copies of motor commands. I'll use the word "afferent" here; if you don't know that word, it's the neuroscience term for feed-forward information, because it could be coming from another cortical region rather than from the retina or something like that. So this is the feed-forward pathway.
Layer 4 gets both sensory information and a copy of the motor commands arriving there, and what I'm going to argue, what I believe, is that layer 4 is learning sensory-motor transitions. Let me explain what I mean by that. There's a little picture of a face here on the right. When you look at a face, or any still image, your eyes are constantly moving, three to five times a second: they saccade and they fixate.
They fixate on one point, then another, and every time there's a complete change of pattern on your retina. It's incredible: it's not like the pattern is shifting, that's not right; it's a completely new pattern. Yet you don't perceive it; you have absolutely no idea that your eyes are moving. And it turns out (even Helmholtz in the 1890s recognized this) that if the cortex didn't know that the eyes were moving, and how they're moving, it would look as if the world were spinning all over the place; you'd get sick very quickly.
So the cortex is basically saying, okay, I'm going to learn to make a model of how patterns change given motor commands, and I'll walk through that in a bit of detail in a moment. So I'm going to argue that layer 4 is learning sensory-motor transitions.
Now, this is a predictive memory. If you know the CLA, it's a predictive memory system; it's constantly predicting what's going to happen next, and it does that through cells that are depolarized, but that doesn't matter from an implementation point of view.
Two things can happen. It can make a correct prediction, which means its model of the thing it's looking at is working. By the way, I'm talking about vision, but it doesn't have to be vision; it can be any sensory-motor stream: moving my head, moving my body, making my own voice. We're talking about patterns that are changing due to my own behavior.
This goes back to the idea in HTM that you've got this hierarchy with stability increasing as you go up. I now believe it's actually happening between layers; I'm pretty certain of this. If something is unpredicted, if layer 4 can't predict what's going on, it's saying, I can't model this data, and those changes are passed through to the next layer. So we'll have stability if we have a good model, and it has to be unique; I don't want stability alone.
I could just have a dead thing, and it would be stable. It has to be a unique representation. But if I can't predict something, I pass it to the next layer. Now, the next layer, I argue, is a high-order sequence memory. This is what we typically think of as our current implementation of the CLA. Our current implementation does not know anything about motor; it's just a pure high-order sequence memory, and an example of that would be the following, on the right.
Suppose I learned two sequences, A-B-C-D and X-B-C-Y. Think of them as melodies or words or just abstract patterns. Notice that B-C is part of both of those patterns. What I want is this: if I show you A-B-C, I want to predict D, and if I show you X-B-C, I want to predict Y. That makes it a high-order sequence, just like speech and music and everything else in the world.
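The A-B-C-D versus X-B-C-Y problem can be made concrete with a toy comparison. This is my own illustration, not the CLA mechanism: a first-order memory fails on the shared subsequence B-C, while keeping context (here, crudely, the full history) disambiguates it, which is what high-order representations achieve with far less state.

```python
# First-order memory keys on the last element only; high-order memory
# keys on the context, so shared subsequences stay disambiguated.

first_order = {}   # last element -> next element
high_order = {}    # context tuple -> next element

for seq in ("ABCD", "XBCY"):
    for i in range(1, len(seq)):
        first_order[seq[i - 1]] = seq[i]      # overwrites on conflict
        high_order[tuple(seq[:i])] = seq[i]   # context-dependent

# After C, first-order memory can only hold one answer:
print(first_order["C"])              # Y (the last-learned sequence won)
# High-order memory predicts correctly in both contexts:
print(high_order[("A", "B", "C")])   # D
print(high_order[("X", "B", "C")])   # Y
```

The CLA gets the same effect without storing full histories, by using different cells within a mini-column to represent the same input in different contexts.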
The world is full of high-order sequences, so you have to have a way of handling them, and of course we know how to do that in the CLA: we basically build unique representations for inputs in context. So we have these two things: sensory-motor transitions, which are not necessarily repeatable sequences, and high-order sequences. To make the distinction clear: when you saccade over a face, you don't always do it in the same pattern. It's not a repeatable sequence.
You don't always go eye, nose, mouth, eye, nose, mouth. So that's not a high-order sequence; you can't learn it that way. But if you can learn to make the transitions given the motor signal, then you have a model of it, while layer 3 is purely high-order. And layer 3 is essentially the output: if you think about the projections up the hierarchy, the layer 3 cells project to the next higher regions.
So this repeats as you go up the hierarchy: layer 4, layer 3, layer 4, layer 3, and so on. I believe the following: layer 4 models changes due to behavior, and layer 3 models sequences of whatever layer 4 leaves unexplained. These are very universal inference steps.
These are not specific to any particular modality, and I've come to believe that there actually aren't any others; this is all the brain has to work with. It applies to vision, touch, hearing, proprioception; it works in all of them. The brain does this in a hierarchy, and we'll talk more about the hierarchy in a moment. Okay, so that's what I think is going on in layer 4 and layers 2/3.
What are layer 5 and layer 6 doing? They're a bit more complicated, and I'm going to give a very, very simplistic view for the moment. We do know the following: anything the cortex generates as behavior starts in layer 5 cells. That's a fact, and I don't think there are any exceptions to it. So when we talked about the cortex projecting back to the old brain to influence it, that's layer 5 cells.
These are sequences: we're activating many cells at the same time as a sparse representation, and we're playing back these sequences, activating lots of muscles and so on. So that's a sequence of motor commands. Layer 6 actually does three different types of things. I'm not going to go into it in detail here; it handles feedback and attention and one other thing, but for the moment I'll just say it's attention.
It does some other feedback things too, but the point is these are both downward-projecting layers, and attention is kind of like a motor sequence. When I talk about attention, I'm talking about covert attention: you're looking at something and attending to part of it without moving your eyes, or you're tuning into hearing, listening while sort of tuning out your vision, things like that. So you've got these two basics, and I think this would be fairly uncontroversial: layers 5 and 6 are basically feedback layers.
Layers 4 and 2/3 are basically feed-forward. There are a lot of other things I'm not telling you about here; I keep saying that for the neuroscientists in the room. Now I'm going to say something that might surprise you: over on the right here, I'm showing you my confidence level that I know how to build this. Not that I understand everything going on in the neuroscience, but I know how to build a working version of it, and I know how to build a working version of layers 2/3.
We've tested that extensively, and I'm very confident. I know how to build the sensory-motor transition memory; I put a 90 there. We've just started implementing it, and we hope to test it extensively this year. I think I understand maybe half of what's going on in layer 5, and maybe about ten percent of what's going on in layer 6, but I think I've got some of the key components, and we're making progress; things are falling into place. So I'm just giving you sort of a lay of the land here.
This is not a typical academic talk; you don't usually go and say, hey, I have no idea what's going on here. But I'm also being very bold in saying I understand those other things pretty well. That's a pretty broad claim.
Okay. I think I've already touched on this: our existing CLA is what I would think of as layer 3. When we were developing it, I was thinking of it as layer 3 cells, with almost all the connections between the cells within that layer. That's how it works in the CLA, and that's how we tested it. And then here's what I think is going on:
As I said, layer 4 is essentially the same CLA, but doing a slightly different thing: instead of the layer just feeding connections to itself, instead of being purely auto-associative, we're providing context from outside the layer. I've worked this through on paper a couple of different ways to make sure it works before we started implementing it. I'm not going to walk you through it here today; I actually have some slides on it, and if someone wants, I'll go back to them later, but I'm going to skip through them right now.
This next one is a little thought exercise I did; if we have time later I'll come back to it. I'm sorry about that, I meant to take it out. So now let's talk about layer 5. I said it's generating motor commands and projecting to the old brain. Now I'm going to switch a little bit: everything up to now was recent slides, and I'm going to switch to some slightly different slides.
These are from the last hackathon, and I'm going to talk about how to think about how layer 5 learns to generate behavior. What does that mean? I'm going to start back a little bit again, and the picture here is going to be a little repetitive; I apologize for that. We have a sensory organ, and we have a very simple memory.
It's like our one-layer CLA, a high-order sequence memory. If I expose it to some dynamic pattern, something in the world that's moving and changing, it can learn those patterns; it can learn what's going on in there. That's basically what we did with Grok, our product: we feed it metric data from servers, and that stuff is changing all the time. So this thing doesn't need any behavior.
It's just looking at the changes, like an ear; there's no behavior, and we can still learn these patterns and do an amazing job of deciphering them. That's one extreme: basically no behavior, relying on the world to be dynamic. Now let's take the opposite extreme.
Here I have a structured thing, a house or a room or a building, and I have an organism with a sensory organ and a little CLA attached to it, but it can't move. So there are no changes going on. There's a lot of structure here, but nothing is changing, because this thing isn't moving, and so the memory flat-lines; it's dead. So how would we learn the structure in this place?
Well, obviously, I need some sort of behavior to learn this structure; I have to move. I either have to move my sensors or move my body, which is really the same thing. I have to move my eyes, or turn my head and walk, or lift things up and open them. I just have to have some sort of behavior, and now I can discover the structure of a static world: the world can have structure even though it's not moving.
This comes back to my layer 4 and layer 3 story, so this part is a bit of a review of what I just told you, because I had to go back to these old slides. Here's what I've drawn: you've got these sensors, and you've got the old brain with pre-wired behaviors; these are some of the pre-wired behaviors I talked about a moment ago. And, just like I said earlier:
We now send a copy of the output of the sensory organ to the cortex. Let's think about this for a moment. I've got this old brain here, like the reptile brain, and it has behavior, so it's able to navigate through this environment. Maybe it's doing a wall-finding thing; it's got some behaviors. And the cortex will now see the changes those behaviors produce.
Imagine it was a little robot walking through these rooms. When it gets to the refrigerator, the robot either turns left and sees the sink, or turns right and sees the dining room. The cortex, this CLA, will say: hey, when I see a refrigerator, I might go left and see a sink, or I might see the dining room, and it would predict either one.
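The robot example can be reduced to a toy lookup. This is a hypothetical illustration of the point (the room and action names are my own): predictions keyed on the current sensation alone stay ambiguous, while predictions keyed on (sensation, motor command) pairs become unique.

```python
# Predicting the next sensory input with and without a copy of the
# motor command, using the refrigerator example from the talk.

transitions = {}   # (current sight, action) -> next sight
by_sight = {}      # current sight -> every next sight ever observed

experience = [
    ("refrigerator", "turn_left", "sink"),
    ("refrigerator", "turn_right", "dining_room"),
]
for sight, action, next_sight in experience:
    transitions[(sight, action)] = next_sight
    by_sight.setdefault(sight, set()).add(next_sight)

# Without the motor copy, both outcomes must be predicted:
print(by_sight["refrigerator"])                    # {'sink', 'dining_room'}
# With the motor copy, the prediction is exact:
print(transitions[("refrigerator", "turn_left")])  # sink
```

This is the payoff of the axon copy described earlier: the cortex never issued the command, but knowing it turns an ambiguous prediction into an exact one.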
But if I'm looking at the refrigerator and I know the organism is about to turn left, I'm going to see the sink, so I can predict the sink; and if I know the organism is turning right, I will predict the dining room. So it can make much, much better predictions if it knows what behaviors are being generated by the organism. But it still can't do anything; it's still just a predictive memory.
It's not doing anything at this point in time. So how do we get it to generate behavior? How does the cortex control behavior? Here's, I think, the basic idea, and there's a lot of information in this little transition, so let me walk you through it. We know that layer 5 cells are the ones that project back down here, and what I believe is going on involves layer 3 and layer 5 together.
A
I
say
layer,
four
three,
but
it's
really
like
three
and
layer.
Five
are
sort
of
connected
at
the
hip
in
a
couple
ways:
first
of
all
layer,
three
projects
to
layer,
five
very
heavily,
but
they
also
share
column
activations.
So just for the moment, think of layer 5 as a copy of layer 3: two things learning the same basic sequences, two things basically representing a sensory-motor model of the world, and yet the layer 5 cells are the ones projecting down here. Now, this is the important thing: layer 5 associatively links to subcortical behavior. What do I mean by that? These layer 5 cells have no idea where they're projecting.
They don't know what their targets mean; they're just sending these axons off into some other neural tissue someplace. But the mechanism we use in the CLA, if you're familiar with it (and hopefully some of you are very familiar with it), is an associative memory: you're linking one sparse representation to another. In fact, you're linking a series of sparse representations to a series of sparse representations. The same thing can happen here.
These cells up here are firing in some pattern that represents the sensory-motor world, and the cells down here are generating the behavior, and we can basically associate these patterns with those patterns. In fact, a stable pattern here could actually unfold into a lengthy pattern there. The point is, this is a learned relationship: the cortex doesn't know where it's projecting. It just says, I'm sending signals down here, and hopefully they will associate, and I will then be able to associate the behavior with what I'm modeling in my own cells.
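The associative linking just described can be sketched as a tiny hetero-associative memory. This is my own toy, not the biological mechanism: the class, the bit patterns, and the "walking" code are all hypothetical stand-ins for a stable layer-5 pattern being linked to the sequence of subcortical patterns observed alongside it.

```python
# Sketch: while a stable pattern is active, record the sequence of
# patterns seen below it; reactivating the stable pattern later
# replays (unfolds) that sequence.

class HeteroAssociativeMemory:
    def __init__(self):
        self.links = {}   # stable pattern -> ordered associated patterns
    def associate(self, stable, observed):
        """Record each pattern observed while `stable` is active."""
        self.links.setdefault(stable, []).append(observed)
    def recall(self, stable):
        """Reactivating the stable pattern unfolds the learned sequence."""
        return self.links.get(stable, [])

mem = HeteroAssociativeMemory()
walking = frozenset({3, 17, 42})   # hypothetical stable "walking" code
for motor_pattern in [frozenset({1, 2}), frozenset({5, 6}), frozenset({9})]:
    mem.associate(walking, motor_pattern)

print(mem.recall(walking))   # the three motor patterns, in learned order
```

Note the asymmetry the talk emphasizes: one stable pattern maps to a whole sequence, which is how a short high-level representation can drive a lengthy behavior.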
It's not a recursive thing: I'm building a model of behavior, and now I'm using that model to generate the behavior. I haven't generated any new behavior yet; I've just learned. But now, if these cells became active, they would invoke the behaviors that made this pattern to begin with. They have the ability to say: I've learned what walking is, and I can now invoke walking, because I have these learned sequences for what walking looks like. That's the important thing.
Audience member: I'll go first. I guess my understanding is that through experimentation, through movement or whatever, the brain eventually learns the meaning of movement and how that relates to what is coming in.
So remember what I said: we're starting off with a system that already has behavior. The cortex is not generating any behavior yet, so I'm not trying to explain how the cortex generates behavior. I'm just saying: how does the cortex learn about behavior, and how does the cortex learn a representation that could invoke that behavior again?
So imagine a child. In the beginning, a child just starts manipulating the world subcortically: it starts moving its head, it moves its eyes, it looks at mom. These are all built-in behaviors. It chews, it swallows, it burps, it poops, whatever, and the cortex learns to model that. It doesn't know anything in advance; when it's born, the cortex really knows nothing.
A
It just says: I don't really know anything. And so it's building a model of these behaviors, and that model is tied to these behaviors. So if the child is crawling, and it builds a model of what crawling is like, now those cells can associatively link to the cells that are actually generating the crawling. And if I could invoke these cells independently, that would cause you to crawl.
C
A
At this point it's not controlling anything; it's learning to control, right? Imagine what happens: some sensory thing comes in, it generates a behavior. The cortex doesn't know about this, but the cortex gets a copy of those things. It's just a sparse pattern, just a sequence of sparse representations.
A
So there's a sequence being generated here, and the sequence is coming from the sensory organs, and the cortex says: I'm just going to build a model of them. That's all I'm going to do. And that's what the CLA does: it builds a model of these patterns. And now I'm just going to say the cells that are active during these (these are very sparse representations)...
A
The cells that are active during these patterns are projecting down here, and they're forming synapses with the distal dendrites of these cells down here. I could talk about the biological mechanism for those who want to know it, but it would basically be saying: I'm going to try to learn to get you to have the exact same patterns here that you had when I learned it.
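The pairing being described, cells active during a pattern forming synapses so the same pattern can be re-evoked downstream, can be sketched as a one-shot Hebbian association between two sparse patterns. This is an editor's illustration, not Numenta code; the population sizes, the 20-cell sparsity, and the outer-product learning rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1024, 20   # cells per population, active cells per sparse pattern

def sparse_pattern(rng):
    """A random sparse binary pattern: K active cells out of N."""
    p = np.zeros(N, dtype=bool)
    p[rng.choice(N, size=K, replace=False)] = True
    return p

model = sparse_pattern(rng)   # cortical cells modeling the behavior
motor = sparse_pattern(rng)   # cells actually generating the behavior

# Hebbian pairing: every cell active in the model pattern strengthens a
# synapse onto every cell active in the motor pattern (an outer product).
W = np.outer(model, motor).astype(float)

# Re-activating the model pattern alone now drives exactly the motor
# cells it was paired with during learning.
drive = model.astype(float) @ W
recalled = drive > 0
assert np.array_equal(recalled, motor)
```

Because both patterns are sparse, many such pairings can be superimposed in the same weight matrix before recall starts to interfere.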
A
It's sort of circular reasoning: I'm going to model what you're doing, and I'm going to learn how to send my model down there, so I have a connection between the model and the actual thing that's occurring. So information comes in and generates behavior. This guy gets it; it's learning a model of it. And then, after the fact, it says: okay, how would I have generated that? Could I have generated that behavior on my own?
A
No,
it
doesn't
know
nothing,
nothing.
The
brain
knows
anything.
What
what
will
happen
you'll
see
in
a
moment
no
at
the
moment
it
doesn't
know
at
the
moment
it
doesn't
know
if
it's
successful,
for
example,
I've,
given
you
several
examples
of
things
that
are
really
subcortical
really
obvious
like
breathing
and
swallowing,
and
things
like
that,
which
are
totally
non-cortical
stuff,
that
you
can
learn
to
do
with
your
cortex
there's
other
things
like
hiccuping,
for
example.
A
You
can't
force
yourself
to
hiccup,
and
the
reason
is,
I
believe,
is
you
have
a
model
for
hiccuping
in
your
brain
in
your
cortex,
but
the
cells
that
that
are
or
know
something
about.
Hiccuping.
Cannot
you
just
do
not
project
to
the
part
of
the
brain
that
hiccups
and
so
there's
no
way
of
controlling
it.
You
know,
like
I,
can't
control
my
heartbeat.
I
can't
control
my
head
coming.
A
There's
things
you
just
can't
control,
because
if
these
connections
don't
potentially
exist
it
just
won't
be
able
to
do
it,
but
if
they
do
potentially
exist,
I
will
be
able
to
basically
be
able
to
sort
of
take
advantage
of
that.
Let
me
go
on
and
let's
just
dying
question:
no
okay,
it's
gonna
get
it's
gonna
get
worse,
I'm
afraid.
A
So
the
question
is:
why
have
a
layer
5
in
addition
to
layer
3.,
I
told
you
layer.
5
is
the
same
as
almost
the
same
as
layer
3..
Why
would
you
have
a
separate
copy?
Why
would
you
need
two?
Why
don't
I
just
take
the
layer,
3
cells
and
projecting
down
here?
Well,
there's
a
good
reason
for
that
everything.
Okay
over
here,
okay,
you
need
to
be
able
to
separate
inference
from
behavior.
A
You
need
to
be
able
to
generate
behavior,
sometimes
before
you
actually
sense.
It
gets
back
to
the
question
we
just
had
a
moment
ago
so
think
about
it.
This
way
in
the
inference
world
like
okay,
this
thing
the
inference
is
you
know,
patterns
are
coming
in
pretty
wide.
Behavior
is
happening,
so
you
go
from
layer
three
to
layer,
five
to
old
brain.
The
old
brain
learns
associated
like
around
here
right,
that's
the
infrasight,
but
when
I
want
to
generate
behavior,
I
don't
want
to
have
that.
A
I
want
to
generate
behavior
before
I've
heard
anything
I
want
to
make.
My
speech
occur
before
I
hear
my
speech
right.
I
don't
want
to
make
it
occur
after
I
heard
my
speech
and
nothing
would
ever
happen.
So
I
have
to
be
able.
I
have
to
be
able
to
generate
behaviors
before
the
inference
layer
actually
infers
that
they
occurred,
so
I
have
to
separate
them.
A
I
have
to
have
two
ways
of
doing
this,
and
so
the
layer
five
generates
behavior
before
the
feed
flow
and
answers,
but
you
know
it
gets
confirmed
after
it
happens,
but
I
need
to
be
able
to
separate
these
two
things.
Others,
you
know
you
just
wouldn't
be
able
to.
You
wouldn't
be
able
to
generate
behavior
now,
if
you're,
okay
with
that,
then
I'm
gonna
go
into
some
more
detail
about
this
yes
chain.
D
A
Do they look different in terms of the connections that are formed? Well, we said "before training"; we're talking about after training. Forget the training for the moment. Right now, this stuff is independent: these connections, four to three and three to five, would occur the same regardless of whether this stuff happened here. So I don't care whether this trained or not. But a lot of things would happen up here during training. First of all, I have to learn the sensory-motor transitions.
A
I've
learned
the
remainders
of
that
in
layer
three,
I
have
to
train
layer
five
to
be
similar
to
layer
three.
So
all
that
has
to
be
learned,
but
this
thing
right
now,
since
this
guy's,
not
until
I
until
this
generates
behavior
on
its
own,
it's
really
not
changing
the
feedback
loop.
It's
just
setting
it
up.
It's
setting
up
the
situation,
so
I
can
generate
behavior.
D
A
A little bit; we're getting into the harder stuff, right? But I think this general idea is correct. The general idea is that you always have to remember that the cortex, or any neural tissue, doesn't know anything. The cortex is actually very, very generic. It doesn't come pre-wired to do exactly these things the way these old-brain structures are.
A
E
A quick comment on what we see in the neuroscience field: the cortex actually seems... so, we do a lot of experiments with mice, for example. For easy behaviors, where the mouse has to do something really simple, even if the cortex is lesioned or blocked or inhibited, it doesn't really change. But the more complex the behavior becomes, say the mouse has to integrate different features, some behavior like it's running, it's not running, the more...
E
A
Well, I would say the following; maybe you're right. I'd say the mouse has a really good old brain, right? Well, it's as good as a mouse brain gets, whatever they do. And I've heard this too from many neuroscientists: you can ablate the whole visual cortex and you can barely tell the difference. The mouse still runs around, does the same things and the other things it does, with some very subtle changes.
A
So
the
mouse
is
a
very
well
developed
old
brain
and
has
a
very
small
neurocortex
and-
and
so
that
do
I
interpret
that
is
like
okay.
Well,
that
there's
nothing
wrong
with
that.
That's
just
the
way
it
is
where
once
you
get
to
primates
and
so
on,
you
know,
the
relative
ability
of
this
has
declined
and
the
size
of
this
has
increased
dramatically.
So
we're
really
relying
upon
this,
I'm
not
sure
if
you
were.
If
that
was
a
question
or
just
a
comment
or
just
a
more
of
an
observation.
A
I
think
it's
consistent
with
what
I'm
talking
about
here
I
mean
yeah,
you
know
as
humans.
You
know
a
lot
of
a
lot
of
mammals
have
a
really
you
know
robust
old
brain
at
birth.
You
know
they
get
up,
they
start
running
around
doing
all
kinds
of
stuff.
I
think
the
evidence
is
that
humans,
we
don't
learn
to
walk.
We
we
actually
know
how
to
walk
genetically,
but
our
brain
hasn't
finished
developing
while
when
we're
born,
so
it
has
to
finish
developing
it's
not
like.
We
learn
how
to
walk.
A
A
Okay, let me just dive into this another layer of detail, or a couple of layers of detail. I'm just talking about hierarchy a bit. Hierarchy is very difficult to grapple with all the time, and of course I'm just showing one small region of the cortex, and in mammals we have a hierarchy.
A
A mouse's hierarchy is pretty small; a monkey's got many levels; and for humans we don't really know, but it's quite big. So I'm just at one little region here, and I really haven't walked through how that region works in detail, because that's like CLA 3. But how would the hierarchy be consistent? So let's say everything I just said existed. Now, we know one of the feed-forward pathways to the next level in the hierarchy.
A
This would be the next region up in the hierarchy: some layer three cells project up to layer four and then to layer three. That's one of your feed-forward pathways. So that's pretty cool: sensory information comes into layer four, then layer three, then four, then three, and so on. There's a second pathway coming forward in the cortex, too; there are two major feed-forward pathways.
A
The
second
pathway
are
these
layer,
five
cells
which
are
generating
the
behavior
they
split
and
the
accent
literally
splits
in
two
action
goes
up
and
projects
also
to
layer
four
of
the
next
region.
This
has
been
very
well
documented
by
lots
of
people,
but
sherman
and
giller
have
been.
The
two
guys
have
been
pushing
this.
The
importance
of
this
this,
as
you
can
think
about
it,
is
also
a
copy
of
the
motor
command.
A
It's
a
copy
of
the
motor
command
being
generated
by
this
region
right
this
region's
sending
this
this
motor
command
down
to
here,
it's
a
higher
level
mode
of
company,
but
a
copy
of
that
gets
sent
up
here.
So
this
is
a
high
level
representation
of
this,
like
a
stably
temporal,
stable
variant
representation
here
of
what
it
can
represent,
it
goes
past
up.
This
is
sort
of
the
high
level
motor
command
being
passed
up.
So
it's
mimicking,
what's
going
on
here.
Right
here
is
sensory
data
plus
motor
commands.
This
is
sensory
data
or
inference
data.
A
Afferent
data
plus
motor
command
same
thing
going
on
now
we
have,
of
course
we
have
a
layer,
five
up
here,
doing
the
same
thing
with
the
other
five
here
and
this
guy
has
cells
in
it
and
where
do
these
cells
project,
these
cells
project
also
back
to
the
old
brain?
So
if
you
go
up
like
from
v1
to
b2
or
a1
and
a2
anywhere,
you
go
in
the
hierarchy,
especially
the
first
several
levels
in
the
hierarchy
you'll
find
that
these
layer,
5
cells,
are
projecting
back
down
someplace
subcortically.
A
Now
that
might
seem
a
little
odd.
You
might
think.
Well,
why
doesn't
layer,
5
cells,
project
down
to
this
guy?
Why
is
it
protecting
the
bottom
down
here?
This
is
the
way
it
is.
This
is
the
neuroscience.
So
we
can't
argue
with
that.
But
one
thing
you
realize
that
this
I've
shown
this
as
one
block
here,
it's
not
really
one
block
at
all.
It's
a
it's
a
hierarchy
of
stuff!
It's
you
know
it's
spinal
cord.
It's
brain
stem
it's
benzogangia
even
like.
A
If
I
go
in
the
superior
colliculus,
which
is
moving
the
eyes,
it's
the
hierarchy
itself,
so
there's
a
hierarchy
of
behaviors
down
here
in
in,
and
you
look
at
the
connections
they
kind
of
map
to
the
hierarchy
up
here,
the
higher
up.
You
go
the
higher
things
they
projected
down
here.
So
this
is
representing
higher
level
concepts.
Things
will
be
higher
level
motor
behaviors
and
that's
these
are
historical
as
well,
and
so
it's
it's
playing
off
the
hierarchy
of
the
old
brain.
A
Doing
that
now
there's
another
thing
which
is
going
to
go
on
here
and
just
in
general
idea
as
we
go
in
this
direction
as
we
go
up
the
hierarchy
starting
down
here
and
then
the
first
level
higher
the
second
line:
hurry
we're
getting
this
increasing
in
variance
we're
getting
this
faster,
changing
here
and
slower
changing
up
here,
because
we're
coalescing
these
sequences
as
we
go
up,
and
so
so
this
guy
here
this
guy
here
will
be
a
slower
changing
pattern
than
this
guy
here,
and
this
will
be
slower
changing
patterns
in
this
guy
here,
there's
the
primary
feedback
pathway.
A
I've
just
shown
you
here,
and
it
starts
in
these
layers:
six,
it's
one
of
the
other
layer,
six
cells,
basically
and
it
projects
back
down
to
the
region
below
it
or
several
regions
below
it,
and
it
spreads
very
broadly
across
layer,
one
actually
of
the
region
below
so
this
you're,
taking
this
a
representation
of
something
happening
here
and
spreading
it
down
here
over
a
large
area,
and
this,
of
course,
is
going
to
be
more
my
my
I
just
put
new
batteries
in
this
or
something
else
wrong
with
this.
A
This
is
going
to
be
more
stable
than
this,
because
this
is
coming
from
here.
This
is
going
to
be
this
guy's
again
representing
the
names
of
these
sequences,
or
something
like
that.
So
this
can
be
a
more
stable
representation
and
it's
putting
it
back
down
here
and
you
say
well
what's
that
all
about
these
are
massive
connections
by
the
way
there's
huge
number
of
these
coming
back.
A
So
we'll
talk
about
that
in
a
moment
what
that's
going
on
and
then
this
whole
thing
is
repeated
over
and
over
again
up
the
hierarchy,
so
we're
not
going
to
go
any
further
about
it.
But
the
point
I
wanted
to
make
about
here
is:
there's
a
parallelism.
A
That's
going
on
and
what's
happening
at
the
first
level
is
happening
in
the
next
level
having
the
next
level,
and
so
we
really,
if
we
really
truly
understand,
what's
going
on
in
one
region,
we'll
we'll
have
a
really
good
idea,
how
the
whole
thing
works
and
how
any
region
is
going-
and
this
is
this
is
all
you
know
neuroscience.
This
is
not
really
speculative
this
these
sort
of
connections
here,
I
didn't,
show
all
the
details,
but
I
left
out
the
thumbs,
but
it
was
too
hard
to
explain
all
right.
A
I
now
want
to
go
in
one
more
level
of
detail
and
we're
in
very
speculative
range
here,
I'm
just
being
realistic,
very
speculative,
but
this
is
where
our
current
thinking
is-
and
this
is
where
my
you
know
where
I'm
currently
coming
out
of
this.
I
spent
a
lot
of
time
thinking
about
this
stuff.
So,
let's
go
back
to
look
at
our
slice
here
again,
we've
got
our
mini
columns.
We
got
our
layers.
We've
already
talked
about
some
of
these
things,
so
we
said.
Okay,
you
remember.
A
We
have
we
have
this
sensory
data
and
it's
afferent
data.
The
copy
of
the
motor
plan
is
going
to
layer
four
that's
we
saw
before
and
then
we
know
that
layer,
four
projects
to
layer.
Three
again,
I've
argued
that's
gonna,
be
stable
representations
in
layer
three
and
so
that's
pretty
cool.
But
what
else
something
something?
That's
in
the
literature.
Everyone
knows
that
people
just
don't
talk
about
too
much.
There
are
nearly
as
many
connections
from
these
very
same
fibers
going
to
layer
six.
A
A
That's interesting; there's a little parallelism going on there. And there are ways people characterize the receptive field properties; they typically do this in V1, but they do it other places too. I'll just say that the cells here are what you might call simple cells, that's the term they use, and these are more complex cells, and those are very well characterized. But again, if you look in the literature, they'll say that layer 5 cells are complex cells and layer...
A
Six,
those
six
cells
are
simple
cells.
They
have
simple
cell
response
properties,
just
like
layer,
four
does
and
layer.
Five
cells
have
complex
cell
response
properties,
just
like
layer
three
does
now.
We
know
that
layer,
three
cells
project
up
to
the
higher
region.
That's
our
feed
forward
pathway.
We
talked
about
a
moment.
We
know
that
these
these
layer,
five
cells
project
subcortically
to
the
motor
areas,
but
we
also
know,
as
I
mentioned
a
moment
ago,
that
these
split
and
they
send
the
copy
of
the
motor
command
up
to
the
next
higher
region.
A
So
there's
a
little
parallelism
going
on
there
as
well,
and
then
we
jump
at
the
next
thing.
Now
we
look
at
the
actual
cells
in
layer.
Three.
These
are
little
parameter
cells.
There's
lots
of
them
there's.
You
know
tens
of
thousands
and
you
know
cubic
millimeters
like
a
hundred
thousand
or
something
like
that,
but
these
cells,
the
predominant
ones
you
see
in
layers
two
and
three:
they
have
what
are
called
apical
dendrites
and
those
dendrites.
A
So
they
have
a
lot
of
connections
locally
many
many
collections
locally
on
their
basal
dendrites,
that's
what
we
model
in
the
cla,
but
they
also
have
these
apical
dendrites,
which
are
kind
of
like
an
extension
of
you
know,
it's
like
more
more
dendritic
structure
up
here
and
they
send
those
up
into
layer,
one!
That's
where
those
go
layer,
five
cells,
the
ones
that
are
generating
this
motor
behavior,
do
the
same
thing:
they
have
their
bigger
cells.
A
They
just
took
the
same
picture
and
stretched
it,
but
but
they're
bigger
cells,
but
they
also
send
their
apical
dendrites
up
into
layer,
one
that's
again,
a
very
a
parallel
structure
going
on
there
and
then
now
I'm
going
to
show
you
that
you
know
again
layer,
one
which
I
did
not
label
this.
Is
that
pattern?
That's
coming
back
from
the
region
above
there's
a
couple
other
sources,
but
let's
talk
about
that
one,
and
this
is
that
stable
representation
that's
going
to
be
stable.
A
A
So you see there are a lot of parallels going on here. It's almost like layers six and five are doing the same thing as four and three. Both of these are for feedback and motor control, and these are purely feed-forward, sensory, going forward, and they both get these massive connections in layer one. What is going on here? I'm just going to state it.
A
I
didn't
have
time
to
make
pictures
of
this
I'm
trying
to
come
up
ways
of
visualizing
this,
but
I
I
know
I
know
exactly.
I
think
I
know
exactly
what's
going
on
here.
A
Imagine
I
put
a
stable
pattern
up
here
and
then
I
ran
through
a
whole
bunch
of
sequences
in
layer,
three
and
layer
five.
So
I'm
learning
these
temporal
sequences.
Now
again,
if
you
know
the
cla,
what
does
that
mean?
I
go?
I
got
a
sparse
representation
of
cells
that,
as
a
unit
represents
something
individual
cells.
Don't
tell
you
much
and
then
I
have
another
sparse
representation,
another
sprite,
so
a
sequence
is
going
through
these.
These
sparse
representations
boom
boom
boom
boom
boom
boom.
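The "boom, boom, boom" picture, a sequence as successive sparse codes over one population of cells, can be sketched in a few lines. This is an editor's sketch, using the 2048-cell size mentioned later in the talk; the 40-active (~2% sparsity) figure is an assumed typical HTM value.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2048, 40   # cells in the layer, active cells per step (~2%)

def sdr(rng):
    """One sparse distributed representation: K of N cells active."""
    p = np.zeros(N, dtype=bool)
    p[rng.choice(N, size=K, replace=False)] = True
    return p

# A sequence is just successive sparse patterns over the same cells:
# each step a different 40-of-2048 code.
melody = [sdr(rng) for _ in range(6)]

# At any instant only ~2% of the cells are active...
assert all(int(step.sum()) == K for step in melody)

# ...and by chance, consecutive steps share almost no cells
# (expected overlap is K*K/N, i.e. less than one cell).
overlaps = [int((a & b).sum()) for a, b in zip(melody, melody[1:])]
assert max(overlaps) < 10
```

The near-zero chance overlap between random codes is what makes each step of the sequence unambiguous.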
A
That's
a
sequence
using
the
same
cells
over
and
over
again,
but
every
every
representation
is
sparse.
So
I'm
going
through
that
so
at
any
point
in
time,
there's
a
very
sparse
activity
of
cells
in
this
layer
and
anytime,
there's
a
sparse
activity
cells
in
this
layer
and
they're
going
to
get
associated
with
these
connections
coming
back
here,
they're
going
to
basically
we're
going
to
form
connections
here.
A
I
will
make
all
those
cells
at
once
and
when
I
say
invoke
them,
we're
just
going
to
depolarize
them,
we're
not
going
to
make
them
active,
we're
going
to
make
them
in
a
predictive
state
we're
going
to
we're
going
to
send
a,
but
the
neuroscience
would
be
we'd
send
a
dendritic
spike
and
we'd
depolarize,
the
cell
body.
It
wouldn't
be
sending
a
spike
out
of
the
cell
just
to
put
the
cell
in
a
predictive
state.
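The distinction drawn here, depolarizing cells into a predictive state rather than making them fire, can be mocked up as two separate bit arrays. An editor's sketch with assumed sizes; real HTM implementations model individual dendritic segments, which this omits.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 2048, 40   # cells in the layer, active cells per sparse pattern

def sdr(rng):
    p = np.zeros(N, dtype=bool)
    p[rng.choice(N, size=K, replace=False)] = True
    return p

stored = [sdr(rng) for _ in range(3)]   # patterns linked to the feedback

# Feedback depolarizes every cell in the stored patterns: they enter a
# predictive state, but no cell fires.
predictive = np.zeros(N, dtype=bool)
for p in stored:
    predictive |= p
active = np.zeros(N, dtype=bool)
assert not active.any()                 # nothing is firing yet

# Feed-forward input then turns only the depolarized cells that match
# it into truly active (firing) cells.
feedforward = stored[1]
active = feedforward & predictive
assert np.array_equal(active, stored[1])
```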
A
So
essentially
now
I
will
be
able
to
basically
a
single
pattern
here
would
be
invoked.
A
union
of
many
sparse
states
now,
if
you've
heard
my
other
talks
about
sparse,
distributed
representations,
you
can
form
a
union
of
sparse
representations
and
not
get
confused
and
the
sparser.
They
are
the
more
things
you
can
put
in
that
union.
You
can
put
hundreds
of
rep,
you
can
form
a
union,
meaning
you
order
them
together.
You
can
take
a
sparse
pattern.
A
Another
sparse
pattern,
another
sparse
pattern
on
all
the
same
cells
and
you
can
invoke
all
them
at
the
same
time
and
they
won't
get
confused
because
they're
sparse,
and
so
you
can
invoke
a
union
of
sparse
states
in
multiple
sequences.
All
right.
So
try
to
imagine
this.
Imagine
these
imagine
layer,
three
cells
and
layer.
Five
cells
have
learned
lots
and
lots
of
sequences.
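The union property ("OR them together and they won't get confused") is easy to demonstrate. An editor's sketch: with the toy sizes used here (40 active of 2048), a couple of dozen patterns can be ORed together before a random pattern starts to match spuriously; the 35-of-40 match threshold is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 2048, 40

def sdr(rng):
    p = np.zeros(N, dtype=bool)
    p[rng.choice(N, size=K, replace=False)] = True
    return p

members = [sdr(rng) for _ in range(20)]

# Form the union by ORing the sparse patterns onto the same cells.
union = np.zeros(N, dtype=bool)
for m in members:
    union |= m

def matches(pattern, union, theta=35):
    """A pattern matches the union if most of its bits are present."""
    return int((pattern & union).sum()) >= theta

# Every stored member is still recognizable inside the union...
assert all(matches(m, union) for m in members)

# ...while a brand-new random pattern almost certainly is not.
novel = sdr(rng)
assert not matches(novel, union)
```

With larger, sparser codes than this toy example, the number of patterns a union can hold without false matches grows dramatically, which is the "the sparser they are, the more things you can put in" point.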
A
They've
earned
hundreds
of
thousands
of
transitions
so
like
learning
lots
of
melodies
and
and
now
I'm
and
and
I've
I've
run
through
those
melodies.
While
this
thing
is
stable,
because
you
know
that's,
what's
going
on
it's
the
building
hierarchy
now,
when
I
basically
put
this
pattern
down
here,
I'm
invoking
every
note
in
every
melody-
that's
that
is
under
this
umbrella.
Under
this
higher
umbrella.
It's
not
everything
that
this
unit,
this
thing
has
learned.
It's
everything
that's
been
learned.
A
While
this
has
been
stable
and
so
I've
been
able
to
invoke
a
union
of
all
these
things,
it's
sort
of
like
saying
a
union
of
motor
behaviors
or
a
union
of
sequences
of
inference-
and
this
is
like
you
can
think
of
this
as
a
goal.
If
I
was
thinking
about
a
motor
behavior,
it's
like
imagine,
if
the
not
in
the
following
thought
experiment
I
want
to
learn.
I
want
to
tell
this
this
robot
or
this
person
to
go
to
the
refrigerator
and
I've
learned
five
six
different
ways
to
get
to
the
refrigerator.
A
I
go
this
way.
I
go
that
way.
If
I'm,
you
know,
if
I'm
in
this
room,
I
go
here
and
I'm
right
and
now
I've
I've.
I've
basically
learned
all
these
different
sequences,
and
now
I
put
things
something
down
here,
I
say:
go
to
the
refrigerator
and
I
invoke
all
those
sequences
at
once
and
what
happens
next
is
the
following.
I
actually
wrote
about
this
in
unintelligent
you're,
going
to
get
a
feed
forward
input.
So
imagine
I'm
in
the
house
and
I've
been
told,
go
to
the
refrigerator.
The
feed
forward.
A
Input
is
tells
me
where
I
am
right
now.
It
says
oh
you're
in
the
dining
room,
and
it's
going
to
what
it's
going
to
do
it's
going
to
uniquely
invoke
one
element
in
one
of
those
sequences
and
says:
oh
given
them
in
the
I'm,
giving
them
in
the
dining
room.
I
know
what
pattern
to
follow
to
get
to
the
refrigerator.
If
I
was
in
a
different
spot,
it
would
immediately
be
selected
in
different
sequences.
I
know
what
I'm
going
to
do
to
get
to
that.
A
So
I
don't
have
pictures
to
describe
all
this,
so
I'm
just
going
to
leave
you
the
fuzzy
concept
about
it,
but
I've
I've
worked
it
through
on
paper.
It's
an
ability
to
basically
to
put
tell
the
unit
below
you.
Here's
a
whole
bunch
of
things
that
you
might
do
to
achieve
the
thing
I
want
you
to
achieve
you
choose
which
one
based
on
where
you
are
right
now
and
given
where
you
are
right
now:
it'll
automatically
select
one
of
those
things
and
follow
it
out
through
time.
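The refrigerator story can be caricatured in plain Python: the goal invokes the union of every step of every learned route, and the feed-forward input (where you are) selects the one sequence to unroll. The rooms and routes below are hypothetical, and a real HTM system would represent each step as a sparse pattern rather than a string.

```python
# Hypothetical learned routes to the refrigerator: each is a sequence.
routes = [
    ["hallway", "kitchen", "refrigerator"],
    ["dining room", "kitchen", "refrigerator"],
    ["living room", "hallway", "kitchen", "refrigerator"],
]

# The goal "go to the refrigerator" invokes the union of every step in
# every learned route (all of them in a predictive state at once).
invoked = {room for route in routes for room in route}

def act(current_room):
    """Feed-forward input (where you are now) uniquely selects one
    sequence and unrolls the rest of it through time."""
    if current_room not in invoked:
        return None                 # goal has no route through here
    for route in routes:
        if current_room in route:
            i = route.index(current_room)
            return route[i:]        # follow this route out through time
    return None

print(act("dining room"))   # ['dining room', 'kitchen', 'refrigerator']
```

Starting somewhere else, say the living room, selects a different route automatically, which is the goal-oriented behavior being described.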
A
So
it's
like
a
goal:
oriented
behavior
and
the
same
thing
is
happening
in
layer
three,
which
is
essentially
saying
here's
what
I
I'm
expecting
you
to
infer
here's,
what
I
think
you
should
be
seeing
now
so
you're
gonna
have
some
ambiguous
inputs,
but
you'll
see
them
immediately.
Given
when
you're
looking
for
an
elephant,
I'm
going
to
tell
you
immediately
that
input's
going
to
tell
you
it's
an
elephant
as
opposed
to
it
could
be.
You
know,
a
gray,
peugeot
or
something
like
that.
A
So
that's
the
most
difficult
slide.
I
have
here
and
I
apologize
for
not
having
more
details,
but
I
warned
you.
This
is
a
very
speculative
talk
and
it's
very
sort
of
in
progress.
So
what's
our
goal,
I
have
a
short
term
goal,
meaning
a
year
two
years
this
year.
Hopefully
I
want
to
build
a
one
region,
sensory
motor
system.
That's
the
thing
I
want
to
do.
I
want
to
build
a
very
simple.
Like
our
2048
column
cla,
I
want
to
add
the
layer
four
and
I
want
to
add
layer.
A
Five,
that's
basically
tripling
the
amount
of
sort
of
what
we
do
today
in
software.
I
want
to
hook
it
up
to
some
simple
thing
that
has
behavior,
and
I
want
to
be
able
to
prove
these
concepts
out
and
and
get
them
to
work,
as
I
expect
them
to
work.
I
have
a
hypothesis
which
is
that
the
only
way
to
build
machines
with
intelligent
goal
or
behavior
is
to
use
the
same
principles
in
the
near
cortex.
A
I
can't
prove
that
that's
been
my
hypothesis
about
machine
intelligence
in
the
beginning,
which
is
like
you
know,
you
have
to
understand
how
the
brain
does
this?
A
Not
everyone
believes
us
there's
a
lot
of
people,
don't
believe
this,
but
this
I
believe
this
very
strongly,
and
so
I
think
if
we
want
to
get
to
building
intelligent
goal
and
behavior
and
intelligent
machines
which
you're
really
going
to
have
to
have
behavior
to
do
anything
like
what's
going
to
car
classifies
and
tell
the
machine
you're
going
to
have
to
you're
going
to
have
to
understand
these
principles
well
enough
to
build
them.
A
I
should
point
out-
and
I
usually
mention
this
someplace
else,
but
I
didn't
mention
it
today
when
I
think
about
building
a
goal-oriented
behavior
system,
I'm
typically
not
thinking
about
a
robot,
I'm
thinking,
thinking
about
something
more
virtual,
because
behavior
can
could
be
any
way.
You
move
your
sensory
organ
through
something
it's
any
way
of
it's
any
sort
of
change
of
your
sensory
organ
to
some
other
different
data
stream.
So
I
could
have
a
smart
bot
on
the
internet,
for
example,
that
his
behavior
and
it
has
no
physical
appendages.
It's
moving.
A
Instead
of
being
a
wall
climbing
web
bot,
it
could
be
a
smarter
web
bot,
something
like
that.
So
we
don't
you
don't
have
to
restrict
yourself
to
thinking
this
is
a
physical
robot
or
an
arm
or
a
leg,
or
something
like
that.
It
could
be
anything
that
has
behavior
anyway.
This
is
what
we're
trying
to
do.
This
is
why
I
believe
we're
going
to
do
it.
I
believe
many
of
the
principles
I
just
talked
about
are
are
have
decent
amount
of
validity
to
them.
Some
will
certainly
be
wrong.
A
We don't know yet, but I think we know enough to start building this stuff. That's that. Thanks.
A
F
F
Test this? I mean, so what would be the behavior, right? Mathematical functions, such as adding numbers, subtracting numbers: that's a very easy behavior for a computer.
A
Well,
it's
funny,
you
mentioned
mathematics,
that's
the
last
thing.
I
would
think
of
really
what
well
I
start
with
very
simple
systems,
and
I
think
of
mathematics
is
a
fairly
high
level
system.
I
mean
if
I
were,
to
build
a
robotic
system.
I
would
start
with
the
simplest
thing
I
could
like.
A
You
know
something
with
three
behaviors
and
and
two
sensors,
and
you
know,
in
a
limited
world
I
mean
you
got
to
start
with
very
simple
things
that
you'll
never
figure
out
how
to
debug
this
stuff,
and
then,
if
I
want
to
do
something
beyond
that,
I
mean
I
mean
the
first
things
I
implement.
I
would,
I
would
do
things
that
well,
you
mentioned
mathematics.
I
don't
know
how
to
answer
that
question.
F
Exactly; mathematics can be made very discrete, which is conducive to simplicity.
A
But
I
think
you
don't
want
to
do
discrete
things
you
want
to
do
things
which
are
not
discrete.
You
want
things
that
are
really
messy
and
difficult
I'd.
Rather
do
you
know
a
little?
You
know
maze
crawling
robot
that
had
to
deal
with
all
kinds
of
noise
and
ambiguity
and
and
weirdness
in
its
world,
and
you
know
with
concept
where
the
sensory
data
was
never
consistent.
A
That, to me, would be the right thing to do. It's easy to come up with solutions to very, very precisely defined problems, and it's tempting to fall into engineering answers to them. But you really want to come up with a solution that's memory-based, that deals with the fact that the sensory data is never exactly the same, and that even though you're trying to achieve something, you never achieve it exactly the same way.
G
A
Yeah, I don't know; I don't think so. Look, our current CLA is 2048 columns, 64,000 neurons, and we have been testing the heck out of it. We put it in our product Grok, and it's a very capable thing; it can do some amazing stuff and learn a hell of a lot. So that's what I'd consider a minimal system. I wouldn't want to get smaller than that, because you start losing the benefits of sparse representations.
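The "benefits of sparse representations" at this scale can be put into numbers. An editor's back-of-the-envelope using Python's `math.comb`; the 40-active figure (about 2% of 2048 columns) is an assumed typical HTM sparsity, not a number from the talk.

```python
import math

n, w = 2048, 40   # columns, active columns (~2% sparsity)

# Number of distinct sparse codes available at this size:
codes = math.comb(n, w)
assert codes > 10**80   # effectively inexhaustible

# Probability that a random code shares at least 30 of its 40 bits
# with another random code (a spurious near-match), via the
# hypergeometric distribution:
p = sum(math.comb(w, k) * math.comb(n - w, w - k)
        for k in range(30, w + 1)) / math.comb(n, w)
assert 0 < p < 1e-20    # astronomically unlikely
```

Shrink the column count much below 2048 at the same sparsity and these margins collapse, which is the point about not wanting a smaller system.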
A
Yet
at
the
same
time
I
found
that
was
very,
very
rich
environment.
There
was,
there
was
no
question.
We
could
text
test
tax,
the
system
extremely
well
for
all
kinds
of
problems,
all
kinds
of
sequence,
problems
and
so
on.
So
I
think
you
could
build
a
system
that
is,
when
I
say
simple,
let's
say
a
layer,
4
layer,
3
layer,
5
each
of
2048
columns.
That
would
do
some
amazingly
difficult
things,
although
in
the
end
result
it's
not
going
to
be
human
right.
A
You
know
it's
going
to
be
something
simple,
but
it's
going
to
be
solvent.
Solving
hard
problems,
I'm
more
interested
in
proving
out
the
theory
and
solving
hard
problems
and
getting
it
all
right.
Then
I
am
in
building
something.
That's
super.
You
know
impressive
at
the
moment
and
from
a
you
know,
like
a
commercial
point
of
view,
it's
more
important
to
get
the
stuff
right.
So
that's
our
attitude.
At
the
same
token,
once
we
think
we've
got
enough
of
we'll
try
to
build
something
useful
out
of
it.
A
So
if
I
thought
you
know,
I
thought
what
would
I
do
practically
with
it.
You
know
again,
our
product
rock
learns
about
it,
detects
very
subtle,
anomalies
and
data
streams,
and
really
really
clever.
I'm
just
happy
about
it
turned
out,
and
you
know
so
it's
doing
some
really
hard
stuff,
and
I
I
could
imagine
adding
behavior
to
that.
I
can
imagine
adding
behavior
like
right
now.
It
just
looks
at
certain
metrics
on
servers.
A
B
His question is: does the future NuPIC that has the sensory-motor integration built into it compete with computer science concepts like agent-oriented software and BDI, that is, the belief-desire-intention model of software agents?
A
Well,
that's
a
good
question.
I
mean
I
don't
think
you
have
to
you,
could
broaden
up
and
saying
hey.
Does
this
approach
to
machine
intelligence
compete
with
ai
neural
networks?
Deep
learning,
you
pick
your
favorite
thing
and
that's.
This
is
a
subset
of
that
question.
I
think
what
rick
is
asking.
I
I
don't
view
it
as
competing
the
way
I
view
it
is
the
following.
You
know
I
started
with
saying
we're
trying
to
be
a
catalyst
for
machine
intelligence.
It's
not
a
catalyst
for
my
version
of
machine
intelligence.
A
I
want
to
be
accountable
for
machine
intelligence.
There's
going
to
be
one
way,
there's
going
to
be
a
way
of
doing
this,
I
don't
think
there's
going
to
be
20
or
2.,
you
know
when
they
started
building
computers.
There
was
a
lot
of
diff.
There
were
digital
computers,
there
were
decimal
computers,
there
were
octal
computers,
there
were
computers
in
the
different
architectures
and
finally,
people
figured
out.
You
know
what
binary
is
the
way
to
go
and,
and
they
all
settle
in
that
and
the
common
architecture,
and
so
on.
A
The
same
is
going
to
happen
in
machine
intelligence.
There's
not
going
to
be
ten
different
ways
of
doing
this,
not
even
two
we're
going
to
settle
on
a
con
iconic
couple
of
common
elements.
Now,
if
we
all
agree
to
that,
not
everyone
would
agree
with
that,
but
if
we
all
agree
to
that,
then
we'll
all
have
the
same
goal
to
get
there
right.
My
goal
isn't
to
get
there
my
way.
My
goal
is
to
get
to
that
end
point.
So
what
you
think
like,
for
example,
deep
learning,
I
think
that's
great
deep
learning
is
great.
A
It's
got
hierarchy.
They
don't
have
time,
they
don't
have
the
the
issue
of
they
don't
have
a
sense
of
motor
integration.
So,
let's
you
know,
let's
add
those
things
to
the
hierarchy
of
deep
learning.
These
should
converge.
It's
not
like
you
know
it's
not
a
battleground.
We
should,
if
we're,
unless
we're
you
know
we're
doing
this
right.
We
should
all
be
trying
to
achieve
the
same
result
in
the
end.
So
I
don't
view
these
things
as
competitors
or
alternates.
A
I
think
look
there's
going
to
be
there's
going
to
be
a
couple
concepts
that
went
out
in
the
end
and
we
should
be
all
trying
to
get
there.
I
argue
that
if
you
don't
understand
the
details
of
how
the
neuroscience
works-
you're
just
not
going
to
get
there,
so
I
have
a
that's
my
bet,
I'm
very
confident
in
that.
But
if
I
was
proven
wrong,
I
guess
I'd
be
proven
wrong,
and
so
so
I
guess
I
don't
know
if
I
answered
rick's
question,
but
I'm
trying
to
I'm
saying
it's:
it's
not
really
competitive.
B
Your response just made me think of something, so at the risk of going off on a tangent here, stop me if you want.
B
Considering computing models, and the fact that the current paradigm is basically built on the von Neumann model of computing: do you foresee, in the future, a need or a desire to break away from the von Neumann model?
A
Yes, from a hardware point of view, yeah. Okay, let me tell you a couple of things that are going on right now. You guys may not know this, but there's a lot going on in this space. First of all, there are some people who are very excited about our algorithms, the stuff I just talked about here, the CLA. There's a team at IBM Almaden under Winfried Wilcke, somewhere between eight and ten people, working on figuring out how to implement this stuff in hardware.
A
In hardware, that means new device physics and new device architectures. There's a program being put together at DARPA called the Cortical Processor, which is heavily influenced by our work. There's another program being put together at IARPA (IARPA is like DARPA, but for the intelligence community) that's also about discovering new algorithms, and so on.
A
A lot of people are interested in neuromorphic computing in general. These are people looking for architectures for new hardware implementations of cortical-like circuits, and they're still figuring out which are the right ways to go and which technologies to apply. All I'm saying is: there's a lot going on in this field right now that you may not be aware of, and a lot of interest in figuring out how we're going to build the hardware that's going to be
A
driving all this stuff ten years from now. Today we can only do so much in software. That's why I said I want to start with a very simple system: I think I could do 3x what we're doing today, but I can't do 100x what we're doing today, and what we're doing is very, very small. You know that, right? Okay, there are questions in the back; we have to pass the mic back.
H
Good evening, and thanks for the talk. Could you go back to the slide where you showed the hierarchies between different layers of the CLA?
H
Exactly. So in the white paper, you showed that the thalamus acts as a gate for feed-forward input.
A
I had that on this slide this morning, and I took it out. Why did I take it out? Or maybe I still have it here. I took it out because I thought it was too much for today, with all this other stuff I have. Let me go back one slide. No, I have a lot of stuff.
A
No, I don't have it, unfortunately. What I was going to say is (I had it on the slide, but it's in another presentation): this pathway here, the command copy, the command going forward, actually goes through the thalamus, and it's gated by the thalamus. I had a little gate drawn here, and that's true here too, and by the way, it's true here as well. It's like when information comes from your retina, before it gets into the cortex.
A
The thalamus gates it right here. So this is the LGN, and these are in the pulvinar. There's a gate going forward, and that's very interesting. Your question is a very interesting one, and it had two parts. One is the thalamic gate: sometimes this information goes forward, and sometimes it doesn't. I don't really understand it, but it's clearly involved in the attention mechanism.
A
It's clearly a way of saying: I can turn off this information, so I do not get a copy of the motor commands. It's like saying: if this region is taking care of everything, if this region is understanding everything, then don't send me the details, just send me the summary. This is the summary, and this is the details, where, you know,
A
the commands are being generated. So we can turn that off, and we know a lot about how those thalamic cells turn on and off, which I won't go into. So that's part of it: there is an attentional gate, and a lot of neuroscientists believe that's what it is; there's a lot of evidence to support that. I just didn't show it here. The other thing you mentioned is the basal ganglia, which plays a big role here. There's a lot of stuff I didn't talk about.
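To make the gating idea concrete (when the higher region's model already accounts for the input, withhold the details and pass only a summary; otherwise pass everything), here is a toy sketch. This is purely illustrative, not Numenta's code and not a claim about real thalamic circuitry; the function name, its signature, and the prediction flag are all invented for the example.

```python
# Toy sketch of thalamic-style gating (illustrative only, not Numenta code).
# The higher region receives either the full detail stream or only a summary,
# depending on whether its predictions already account for the input.

def thalamic_relay(details, summary, higher_region_predicts_well):
    """Return what the higher region receives from the lower one."""
    if higher_region_predicts_well:
        return summary   # gate closed: "just send me the summary"
    return details       # gate open: send the full detail stream

# A lower region offers a fine-grained pattern and a coarse summary of it.
details = [1, 0, 1, 1, 0, 1, 0, 0]
summary = [1, 1]

print(thalamic_relay(details, summary, higher_region_predicts_well=True))   # [1, 1]
print(thalamic_relay(details, summary, higher_region_predicts_well=False))  # [1, 0, 1, 1, 0, 1, 0, 0]
```

The real circuit is of course analog and learned, not a boolean switch; the point is only the routing logic being described: don't send the details when the summary suffices.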
A
The basal ganglia has another role besides the stuff it does down here. It projects to and receives information from all the different cortical regions, and the current best estimate is that somehow the basal ganglia helps these cortical regions decide which of several motor commands and motor behaviors to exhibit. Maybe some of the neuroscientists here know this; there's a lot of literature on it, and I have only read a bit of it, maybe ten papers.
A
Circuits, yeah, it's very complex. You know, the term basal ganglia basically means "lower things." It's not one thing; it's a whole bunch of things, and they just group them all together because they're so damn complicated. But a lot is known about them. If you take a look, you can find a tremendous amount of detail. What you'll find difficult is finding a theoretical understanding of them: you'll find the chemistry and the connectivity and the cell types and the histology and all that kind of stuff. But anyway, they do know a lot, and I used to have a picture here.
A
The basal ganglia is receiving information and sending information broadly back, and somehow it looks as if it's helping these regions decide which motor command to act on. So I didn't really explain how we do goal-oriented behavior; I only showed some of the mechanism. I don't really know how we do goal-oriented behavior. The basal ganglia is obviously involved in it; it's involved in deciding what the emotional saliency of these events is. So we don't know that yet.
H
And on top of that: if you look at the neocortex physiology, isn't there an area that gets activated in particular when motor actions are being performed?
A
No. Actually, you can look there and you'll say: oh, here's M1, the motor area. It turns out that's really wrong, and the people who originally proposed it will acknowledge that now. We used to think the cortex had these motor areas, and there still are these motor areas, right across the top here.
A
But we now know that every region, every region they've ever looked at, has these layer 5 cells that project out. It just turns out that there's a whole motor area which projects directly to the spinal cord, so there's a very detailed motor map: your hand, you've seen the picture of that, the head and the lips and the fingers. There's a motor map, but even V1, the primary visual area, projects to the superior colliculus and has a motor output.
A
So it's sort of a misnomer to think that there's a single motor area. There's a somatosensory motor area, there's a visual motor area, there's an auditory motor area; everything has a motor output. We just tend to think of "motor" as moving your arms and limbs. That's a big part of motor behavior, but we also move our eyes and turn our heads, and there are other parts to it. So it's true, but it's not true that there's one motor area. Okay.
I
Yeah, kind of. So you gave a great example of a goal-directed task. That could be, in a sense, one way of approaching the sensorimotor integration tasks in NuPIC, or your follow-on incarnations of NuPIC. I think that's a perfect example of a method within machine learning called reinforcement learning.
A
Well, as I understand reinforcement learning, it's very similar to what I talked about, but it doesn't specify the mechanisms for doing it. It's not mechanistic; it's more a concept of how you would train things, and there are lots of different ways you can implement it. If I'm wrong, let me know, but that's how I view it.
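As one concrete instance of reinforcement learning being "a concept with lots of different ways to implement it," here is a minimal tabular Q-learning sketch on a toy five-state chain. This is generic textbook RL, not HTM and not anything from Numenta; the environment, parameter values, and names are invented purely for illustration.

```python
import random

# Minimal tabular Q-learning on a 5-state chain (textbook RL, illustration only).
# States 0..4; action 0 moves left, action 1 moves right; reaching state 4
# ends the episode with reward 1.

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
    reward = 1.0 if next_state == 4 else 0.0
    return next_state, reward

random.seed(0)
q = {(s, a): 0.0 for s in range(5) for a in (0, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.5   # high epsilon: this tiny toy needs heavy exploration

for _ in range(500):                     # episodes
    s = 0
    while s != 4:
        # epsilon-greedy behavior policy
        if random.random() < epsilon:
            a = random.choice((0, 1))
        else:
            a = max((0, 1), key=lambda act: q[(s, act)])
        s2, r = step(s, a)
        # off-policy Q-learning update toward the greedy value of the next state
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
        s = s2

# The learned greedy policy should move right from every non-terminal state.
policy = [max((0, 1), key=lambda act: q[(s, act)]) for s in range(4)]
print(policy)   # expected: [1, 1, 1, 1]
```

Note that the update rule says nothing about how a brain would realize it; that mechanistic gap is exactly the point being made here.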
A
So everything I've talked about here is consistent with reinforcement learning. It's just that, if you read the literature on reinforcement learning, you pick up things, but there's not enough detail. The problem is, you don't want to do these really simple things; you want to do this complex thing, with constantly changing data and constantly changing sensory streams, in a distributed hierarchy. I want to solve the whole damn thing. So you can take the concepts of reinforcement learning, which we did.
A
I do, but I'm trying to implement them biologically. So it's completely consistent, but it still leaves some questions. There are things I didn't talk about because I just don't understand them, and I haven't been able to find them in any of the literature on reinforcement learning or anything else.
A
I could go into them if you want, but I think it's a bit more than we should try to do right now while recording. But again, that's the thing: I'll take whatever I can from reinforcement learning. It's the right idea.
A
We're actually doing it here: we're basically saying you're learning patterns, and you're deciding post facto which ones you want to keep and remember. We're going to take advantage of that here, and I'm just proposing a mechanism by which all this can play out. How about we do one more question, is that right? What's the time, 8:20? I don't know if we had a limit on this or not. We could take a question and then just have conversation.
A
I read the cybernetics stuff, I don't know, 25 or 30 years ago: Norbert Wiener and all that. I just couldn't get anything out of it, and I haven't looked at it again in a long time.
A
It was interesting. You know, I read this book by Ashby, Design for a Brain; you guys may have seen that, it comes from that era. I thought it was really cool, but there just wasn't enough detail in it. I need to build something, right? We've got to build something that's really right, and I felt those things just didn't give me enough. It's not the way I think.
A
Some of it does; maybe I should read it again. Very few things deal with time in a sophisticated way. I could be wrong, maybe I missed that, I don't know. Anyway, it just goes back to this premise that I have, and I'll end on this: I just believe you're not going to get there unless you understand how the brain works, how the cortex works.
A
I've believed that since about 1980, yeah, that long, and I haven't been convinced otherwise. I've been through multiple phases of neural networks and multiple phases of AI, this theory and that theory, and where has it gotten us? So I still believe it, and it's just a hypothesis, a stated belief.
A
If I read something and people don't talk about any of that stuff, then I usually say: well, I don't think it's going to get me there, but what can I learn from it? That's just my bias, and I'm going to stick with it until proven otherwise. I'm probably getting too old to change now. So, all right, we're going to end it there; Matt's going to finish up. Do you want this microphone? You're good. You want to stand in front? Thanks.
K
And I just want to emphasize one of Jeff's subtle points: this is just the beginning of real machine intelligence based on neocortical principles, and we're building it right out in the open, in an open source project. Numenta has been building it for years, and we've taken all of that and just thrown it out for anyone to help with. We're working on it continuously, in the open, with community members. This is my invitation to you.
K
It's part of my job to try to get more people to come help us do this. If you're interested in working on this, you don't have to be a neuroscientist, and you don't have to be a machine learning expert; I'm not, and I'm helping out. There are a lot of things you could help with, so go to numenta.com.
K
There's a section about our community where you can find out how to contact me and get involved. We'd be more than happy for you guys to help out in any way you can. So thanks for coming, thanks for your interest, and, oh gosh, yeah, there's a lot going on in the community right now; I'm a bit overwhelmed.
K
It will be here, so this was somewhat of a trial for me to test out this space. It should be a lot of fun. It'll be sort of the same format as we had at the last hackathon; a lot of you were there, I see some familiar faces. The last one was a great event. You can see a blog post about it on numenta.org and see what other people built at the last hackathon.
K
We also have a summer project called Season of NuPIC, where we're trying to pair people who want to work on NuPIC and these concepts with mentors from within our community who want to help. It's a way to gently lead people in and guide them to completing something that could get contributed back into the code base and help the project. We're trying to push the boundaries, and we're trying to get people involved. If you like this stuff, this is it.