From YouTube: Jeff Hawkins
Thank you, and thanks for inviting me once again. I don't know what I can do about Maratha so he doesn't bite me again, but I keep coming back. Many of you in the audience probably know a little bit about Numenta, but some of you may not. So, just briefly: I represent Numenta. We are a small team of about 15 scientists and engineers here in Northern California, and we are focused mostly on theory, on the information-theoretic principles of how the neocortex works.
We also believe that those theories will inform machine intelligence. My talk today is a bit aspirational. I'm going to do less on the details of what we do in our modeling and talk more about the big picture, and make some suggestions about how we ought to be thinking about our work as a collective team. As we all know, and as I found ten years ago when I started Numenta, the term AI was a really negative term.
What is the legacy we should be working towards? That doesn't mean it'll take that long; it's just that we ought to be thinking about what we ultimately want to get to. It may only take five years, but we ought to be thinking about the long term. There are some reasons to be worried about this, because the field of machine learning and deep learning and so on is a bit hyped these days. I became aware of this recently.
It really struck me when there was a recent headline that said something along the lines of "Microsoft investing heavily in machine intelligence." The sub-headline was that Microsoft had agreed to acquire SwiftKey, the predictive keyboard company, and that acquisition was basically what led to the claim of a big investment in machine intelligence.
I don't think that's what machine intelligence is. So what I wanted to do today is talk about what intelligence is. I want to take the perspective of biology, see how we can constrain our view of intelligence, and then ask ourselves: what should we be doing when we try to build intelligent machines? That's the title of my talk:
"What is intelligence, that a machine might have some?" The talk is written in three sections. The bulk of it is this first section, which basically goes through the biological components of intelligence. In fact, the only thing we all agree on is that intelligence is in the human nervous system; outside of that, there's disagreement. So we're going to focus on that and ask what it tells us about what intelligence is. Then we can talk about the functional components of intelligence that we can pick out from that.
So, taking ourselves outside of the biology, what are the functional components? And then, finally, I'll talk a bit about the diversity of intelligent machines: what might they look like in the future?
Okay, let's just talk about the first one here. I hope you can see this; it's a cartoon drawing of a nervous system. I'm going to use a lot of cartoon drawings here. Trust me, I know the complexity of the nervous system, but it's not worth putting it all up in pictures here.
This is showing what a reptile brain might look like. The nervous system evolved hierarchically. We started with a spinal cord, which, by the way, has sensory inputs and has behaviors, reflex behaviors. Then, on top of that, eventually evolved the brainstem, which handles mostly autonomic behaviors.
Those would be things like blood pressure and cardiorespiratory functions. Eventually the system added what we might call midbrain structures; in humans that would be the cerebellum, the basal ganglia, and things like that, and these have basic emotions and basic behaviors, which can include learned behaviors. And at the top of the reptile's brain there's something called the pallium, which is very similar to the hippocampus in mammals. It's a very fast memory of where the animal has been, so it can recognize where it's been before.
Now, you can think of a reptile as a pretty capable thing. An alligator, for example, has a lot of behaviors: it can rear its children, fight over territory, eat, mate, and things like that. But what came along in mammals is that we added one more component.
Basically, all of this was preserved, and in mammals we added the neocortex. The neocortex is actually sandwiched, logically, between the hippocampus and the rest of the brain, and it represents about 75% of the volume of a human brain. Not 75% of the cells, but 75% of the volume. And it's pretty expensive. In humans it's so big that we are the only species that regularly dies in childbirth.
Now, we've mostly prevented that these days, but the point is that we have such a big head that it doesn't fit through the birth canal. We also take a long time to raise our offspring: they can't even walk for about a year, they can't really do anything on their own for about four or five years, and it takes about 18 years for them to become fully mature. This is pretty expensive stuff. So there ought to be a good reason for having a neocortex, and we should ask: what is it? What are we getting over the reptile?
It had better be pretty good. Now, if I wanted to try to summarize this in one word or one small phrase, I would say the following: the neocortex learns a sensory-motor model of the world. What do I mean by a sensory-motor model of the world? It basically learns how the world is, the structure of the world, and it's a sensory-motor model because mostly what it learns is how the world behaves when you act upon it.
Remember, the world to the brain is just a bunch of patterns arriving at the sensory organs. What you perceive is the model in your brain; it's not the world. The brain doesn't actually deal with the world directly; it has to construct it. So the brain is basically asking: when I act, what are the patterns that come in? And when I act again, what are the patterns that come in? Through that interaction we build this model of the world, which tells us how everything works, from computers to doors to food and cars and so on, and all the things we do every day. We have a pretty sophisticated model of the world, and that's what really makes us tick. Let's jump into that and look at it in more detail. Now, if you look at the neocortex, most people know this diagram: it is, of course, the classic Felleman and Van Essen diagram from 1991.
The cortex itself is a sheet of cells, and it's divided into regions, and those regions are connected together in a hierarchy. This is the macaque monkey hierarchy: you see the somatosensory hierarchy on one side and the visual hierarchy on the other. The little rectangles, which you can barely see if you're not familiar with the drawing, are the cortical regions, and all the lines are the interconnections, massive interconnections between those regions. Each one of those lines represents millions of nerve fibers going both ways, forming a hierarchical representation, a hierarchical chart if you will, of the cortical regions. Information comes in at the bottom and flows up the hierarchy and back down the hierarchy. You'll see that there are different hierarchies for the different modalities, and they are connected towards the top. Now, the first thing we can say is that this looks awfully complicated. Gosh, how are we ever going to figure this out?
Well, it's a hierarchy of regions, but as you've heard, and as everyone here should know, the regions are remarkably preserved. They're almost identical everywhere. They're not identical; there are differences, some notable, some very subtle, but they're remarkably conserved. The regions have a lot of detail, and everywhere you look that detail exists. Therefore the basic assumption is that all the regions are doing something very, very similar. There's so much evidence for this that I don't want to debate it.
Although there are people who would like to debate it, we're not going to debate that here today. The second thing we can observe is that the hierarchy itself varies significantly across species. If I look at the visual hierarchy of a monkey, the visual hierarchy of a human, and the visual hierarchy of a cat or a dog, they're all quite different, so there's nothing special about this particular connectivity graph.
It turns out that you can take these cortical regions and hook them up in different ways, and they generally work pretty well. We know from sensory substitution that you can put the wrong type of information into one end of the hierarchy and it still works. So we don't have to worry about the complexity of that Felleman and Van Essen diagram. What we need to worry about is: what is each cortical region doing, and how do they work in a hierarchy? Later on, you can figure out what hierarchy you want to build.
If we look at what each cortical region looks like, the first thing you see is its structure. There are layers everywhere you look: layers of cells, typically six, depending on who's counting, but that's the typical number people use, and they're very clearly there; there's no doubt about this. The second part of the organization is that there are minicolumns: the individual excitatory neurons are arranged in minicolumns, and those are physically real.
You can't always see them, but they're part of the development of the brain, and there's a lot of debate about whether they're functional. A minicolumn is really skinny: it's only about 50 microns wide, it has about 100 to 120 cells, and it's a very, very skinny column of cells, an organizing principle throughout the cortex. Now, there are a few things that some people forget about these cortical regions, so let me tell you about them. The first is the input.
Everyone knows that some sensory input comes into the primary visual cortex or the primary auditory cortex or whatever, and then that gets passed to the next region, and so on. But you need to remember that that's not all; that's only half of what a region gets. The region also gets a copy of the motor commands that are being executed by the rest of the brain.
It's inferring both sensory data and motor behavior; that's a common principle throughout the cortex. The second thing is that every region we know of in the cortex has layer 5 cells that project out of the cortex and generate motor behavior. Every region, even V1, has cells that project down to the superior colliculus and impact eye movement. So this is a universal property. That same output gets split in two and also goes up to the next region in the hierarchy.
The next region is getting a copy of the motor behavior that this region is generating, and this is a general principle throughout the neocortex. Every region of the cortex also has cells in layer 6 which project back to the thalamus, and these are believed to be involved in attention. Now, there's a lot more to this, so I'm going to stop right here, but I want you to make some observations about it.
First of all, every region is recognizing sensory sequences. A region is getting a stream of data coming in, like my speech right now, or music you're listening to, or a bird flying across your view: time-based sequences. One of the types of sequences is a pure sensory sequence, like my speech right now; you have to infer it, or recognize it, by hearing it through time. It's an inference over time; it is not a spatial inference.
The second thing is that every region has to recognize sensory-motor sequences. When you feel something, if I put my hand down and I touch something, I know what it is because I know how I moved my hand. The inference is a combination of my motor behavior and what I'm sensing, and this is the second type of inference that a region does. Finally, every region generates motor sequences.
So a region builds a model from sensory data and sensory-motor data, and it generates output. You might argue that every region is doing what the entire cortex is doing: the cortex is trying to build a model of the world through sensory-motor interaction, and that's exactly what every region is doing, which makes sense. It's a very desirable thing: it's not some property that only comes out of the hierarchy.
This is what every part of the neural tissue is doing, and when you hook those parts up in hierarchies, you get some other nice properties. So the core of understanding how the neocortex works is understanding how one of these regions works, and it's pretty much the same everywhere you go; these are the principles that exist everywhere. Now we hypothesize, and I think the case is pretty strong, you can even deduce it, that to do these functions you need to have memories of sequences.
You need to have memory of how things change over time. My speech right now is layer 5 cells in one part of my neocortex firing in a complex pattern; every word is a complex pattern, and my phrases are complex patterns. I can repeat them; I can say these sentences over and over again; I can give this talk twice. I have these sequences memorized, and I can put them together in complex ways, but it's all playing back sequences, and that has to be stored in cortical tissue.
Similarly, when I recognize speech, when I recognize music, or when I recognize anything, like the visual parts of my world as I move about, I'm recognizing patterns I've learned before. These are all sequences of patterns, streaming data. So it's all about sequence memory here. The primary memory of the cortex is of how things change over time, and we have a theory about this, which is that every layer of cells is actually implementing a sequence memory, and the reason the layers differ is that they're doing different things with it.
So here is our current best guess. Well, it's pretty clear that layer 5 cells are playing back motor sequences. Our guess about layer 4 is that it is learning sensory-motor sequences, inferring them rather than generating them, and layer 3 is recognizing pure sensory sequences. I don't want to get into the details of that, but in some sense you can deduce that these properties must exist in the cortex for you to understand the world.
Now, how does that work? We have a theory about exactly how this works, and I'll just give you the basics of it; I won't go into the details. First of all, we have to talk about neurons. Obviously the brain is made of neurons. This is a pyramidal neuron; these make up about 80% of the excitatory cells in the brain. I just learned recently, by the way, that spiny stellate cells, for those of you who know them, are actually also pyramidal neurons that have lost their apical dendrite. I didn't know that. So we can say that this is pretty much the excitatory cell of the neocortex. Now, we know that these cells have thousands of synapses. It varies: we know of up to 30,000 synapses on a pyramidal cell in the hippocampus. These are huge numbers of synapses, and synapses are expensive; they're there for a purpose. Ten percent of the synapses are near the cell body, or proximal, and they are able to make the cell fire in the classic way.
A
We
think
about
in
neural
networks.
90%
of
these
synapses
are
so
far
away
from
the
soma
that,
if
you
activate
one
of
them,
it
either
has
almost
undetectable
or
it
is
in
detectable
at
the
soma.
So
it's
very
very
hard,
and
for
many
years
people
said
what
the
hell
they
good
for
what
are
90%
of
these
synapses
doing
if
they
don't
really
have
any
effect
at
the
cell
body.
We
had
some
of
this
debate
yesterday
when
Christopher
Koch
was
here,
so
we
now
know
that,
of
course,
these
synapses
are
all
along
the
dendrites.
A
There
are
no
excitatory
synapses
on
the
soma
they're.
Only
on
the
dendrites
there
really
talk
about
the
spines
are
about
a
micron
part,
and
we
now
know
that
there
are
active
properties
to
dendrites.
This
work
is
from
Larco,
major
and
Schiller,
but
the
basic
idea
here
is:
if
you
have
a
number
of
synapses
that
are
co-located
on
a
dendritic
segment.
We're
then
about
40
microns
of
each
other,
and
you
activate
multiple
ones
at
the
same
time
that
they
can
act.
A
They
they
some
nonlinearly,
typically,
that
we
talk
about
an
NMDA
spike
that
can
generate
an
NMDA
spike,
which
is
a
much
larger
depolarization
and
a
longer
depolarization,
and
it
has
a
significant
effect
on
the
soma.
An
NMDA
spike
is
very
measurable
at
the
soma.
It's
not
sufficient
to
make
the
cell
spike,
but
it
is
sufficient
to
make
a
significant
depolarization.
So
this
little
diagram
shows
that
the
difference
between
going
from
seven
spikes
in
blue
and
eight
spikes,
seven
synapses
and
eight
synapse
is
activated
in
red
and
it
shows
that
nonlinear
property.
A
But,
as
I
said
earlier,
it's
usually
15
to
20.
Synapses
are
required
to
create
an
NMDA
spike,
and
the
basic
theory
here
is
that
that
is
a
coincidence.
Detector
you're
detecting
a
pattern
out
in
some
larger
neural
tissue
by
detecting
15
to
20
synapses
active
at
the
same
time
and
you're
going
to
recognize
that
and
you're
gonna
put
the
cell
into
it.
A
The
polarized
state,
if
you
follow
that-
and
you
do
the
math
in
this-
and
this
is
all
built
on
sparse
representations-
that
Rogers
is
talking
about
and
then
we've
written
about
extensively
and
others
as
well,
that
the
neuron
each
parental
neuron
can
recognize
hundreds
of
unique
and
independent
patterns
on
its
dendrites.
It's
not
recognizing.
Like
one
thing.
It's
like
hundreds
of
them,
and
the
basic
theory
have
is
that
most
of
those
detection
patterns
are
patterns
that
preceded
the
felt
cell
becoming
active
and
therefore
their
predictions.
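That capacity claim can be sanity-checked numerically. The sketch below is my own illustration, not something from the talk: the population size of 10,000 cells and the 2% activity level are assumed values. It computes the hypergeometric probability that a random sparse pattern accidentally matches a stored dendritic pattern at a threshold of 15 out of 20 synapses:

```python
# How likely is a stored dendritic pattern to match a random sparse
# activation by chance? We count the ways a random set of active cells
# can overlap a segment's synapses at or above the NMDA-spike threshold.
from math import comb

def false_match_prob(n_cells, n_active, segment_size, threshold):
    """P(random n_active-of-n_cells pattern overlaps a stored
    segment_size-synapse pattern in at least `threshold` places)."""
    total = comb(n_cells, n_active)
    hits = 0
    for k in range(threshold, segment_size + 1):
        hits += comb(segment_size, k) * comb(n_cells - segment_size,
                                             n_active - k)
    return hits / total

# Assumed numbers: 2% activity in a population of 10,000 cells.
p = false_match_prob(n_cells=10_000, n_active=200,
                     segment_size=20, threshold=15)
print(p)  # vanishingly small
```

Because this chance-match probability is astronomically small, hundreds of such patterns can coexist on one neuron's dendrites with essentially no interference between them.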
The cell is saying: I have seen this pattern in the past, and I typically become active afterwards, so I'm going to depolarize and be in a primed state, ready to fire if I do get input. So these are the patterns on the basal dendrites that depolarize the cell, and this is a form of prediction, and there is an advantage to it.
A
It's
going
to
spike
a
little
bit
sooner
than
other
cells
that
have
similar
input
and
in
some
sense,
the
cells
that
are
predicted
will
become
active
and
inhibit.
Other
people
and
you'll
get
a
sparser
activation
when
you
have
a
correct
prediction,
and
these
are
all
observations
that
are
sung
knowing
the
brain
I'm
not
going
to
go
through
exactly
how
this
works,
but
I'm
just
going
to
give
you
the
flavor
of
this.
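To give that flavor in code as well, here is a deliberately minimal sketch of my own, not Numenta's implementation; the class names are mine, and the threshold of 15 follows the 15-to-20 figure quoted above. A cell enters a depolarized, predictive state when any of its distal segments detects enough co-active presynaptic cells:

```python
# Simplified sketch of distal dendritic segments as coincidence
# detectors that put a cell into a predictive (depolarized) state.

NMDA_THRESHOLD = 15  # roughly 15-20 co-active synapses, per the talk

class DendriticSegment:
    def __init__(self, synapses):
        # synapses: ids of the presynaptic cells this segment samples
        self.synapses = set(synapses)

    def overlap(self, active_cells):
        return len(self.synapses & active_cells)

class Neuron:
    def __init__(self, segments):
        self.segments = [DendriticSegment(s) for s in segments]

    def is_predicted(self, active_cells):
        # Depolarized if any segment detects its stored pattern among
        # the currently active cells.
        return any(seg.overlap(active_cells) >= NMDA_THRESHOLD
                   for seg in self.segments)

# A neuron that has stored one pattern of 20 presynaptic cells:
pattern = set(range(100, 120))
n = Neuron([pattern])
print(n.is_predicted(set(range(100, 118))))  # 18 of 20 active: True
print(n.is_predicted(set(range(100, 110))))  # only 10 active: False
```

A real neuron would carry many such segments, one per learned context, which is how a single cell recognizes hundreds of independent patterns.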
A
If
you
put
a
bunch
of
these
neurons
in
a
layer
of
cells
in
a
pretty
in
the
column
of
representation,
you'll
end
up
developing
a
very,
very
powerful
sequence
memory
and
that's
what
we've
been
testing
for
many
years
now,
and
we,
if
you
want
to
go
the
details
up,
you
can
come
to
our
poster.
I
just
want
to
point
out
that
we
model
these
these
neurons.
This
is
a
picture
of
our
software
model
of
these
neurons.
We
we
have
to
model
the
individual
dendrites.
We have to model the different integration zones, the apical, the basal, and the proximal; these are important. We have to model individual synapses. But there's a lot we don't model. Our neuron model is not a spiking model, because we haven't found a need for that from an information-theoretic point of view. Okay, just to give you a flavor of this: you have to have a representational scheme for how you would represent information in sequences. Let me walk you through it.
A
We
usually
just
sort
of
this
picture
here
is
using
like
ABCD
and
xB
see
why?
Because
they
have
sub
sequences
are
the
same,
and
so,
if
I
show
you
XB
c
ABC
and
I've
learned
that
sequence
I
predict
D.
If
I
see
you
X,
BC
I
have
to
predict
Y.
The
point
is
that
sequences
are
very
complex
in
real
data
I'm
not
going
to
walk
you
through
this,
but
we
have
a
whoops
I.
Just
let
me
hear
this.
These little panels represent how, in a layer of cells with columns, you would represent inputs that are predicted, inputs that are not predicted, and so on. It's a really cool theory, and I encourage you to learn about it; come by the poster later. It explains how a layer of cells can learn very complex sequences of any duration, merge them and separate them, make predictions constantly, all the time, and even make multiple predictions at the same time. I'm not going to walk you through that.
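The A-B-C-D versus X-B-C-Y example can be made concrete with a toy predictor. This is my own stand-in for illustration only; HTM achieves this context-keeping with multiple cells per minicolumn rather than with an explicit history buffer, so this is not the actual mechanism, just the behavior it produces:

```python
# Toy high-order sequence predictor: it keeps enough context to
# disambiguate sequences that share a subsequence (B-C appears in both
# A-B-C-D and X-B-C-Y, yet the correct predictions differ).

class HighOrderPredictor:
    def __init__(self):
        self.transitions = {}  # prefix tuple -> next element

    def learn(self, sequence):
        for i in range(1, len(sequence)):
            self.transitions[tuple(sequence[:i])] = sequence[i]

    def predict(self, history):
        # Use the longest stored prefix matching the recent history.
        h = tuple(history)
        for start in range(len(h)):
            if h[start:] in self.transitions:
                return self.transitions[h[start:]]
        return None

p = HighOrderPredictor()
p.learn(["A", "B", "C", "D"])
p.learn(["X", "B", "C", "Y"])
print(p.predict(["A", "B", "C"]))  # -> D
print(p.predict(["X", "B", "C"]))  # -> Y
```

The same "B-C" input yields different predictions depending on how the sequence began, which is exactly the property the cellular mechanism provides.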
It's beyond the scope of today's talk. We also model the apical dendrites and synapses; that's the zone at the bottom of this picture. What I do want to talk about, though, is that in this model, learning is not by modification of synaptic weights; it's by synaptogenesis, which is a much more powerful form of learning, and we know this is going on all the time in the cortex. We heard yesterday that, in fact, individual synapses are largely stochastic.
A
You
cannot
rely
on
from
any
amount
of
fidelity,
even
one
digit
is
more
than
you
can
get
out
of
a
synapse.
So
forget
it
if
your,
if
your
little
model
requires
one
digit
of
precision,
it's
not
going
to
work
in
a
grilled
neuron,
but
if
you
learn
sets
of
neurons
at
once,
then
you
can
get
something
like.
If you can learn 15 to 20 new synapses, then you've got something that's reliable, and that's what we think is going on. The way we learn in this system is that we model growth: instead of Hebbian-type weight learning, we model the growth of synapses, not the weight change of synapses. And we know this is going on all the time.
So in this picture, on the left, you can see an axon and a dendrite, and we have something we call the synapse permanence, which represents the growth of the synapse. At zero, there's no connection between this axon and this dendrite. Then, as I train, I increase the permanence.
At some point you reach a threshold, in this case 0.3, where you've actually formed the synapse and it's now active. Then you can continue training the system: the strength of the synapse does not change, it's a binary synapse, but the permanence increases.
Why would we do this? First of all, it models what's actually going on in the biology. But the importance of it is that you can start training on patterns before you know they're real; they could be noise. You have to see a pattern several times before it actually becomes an actionable thing, and then you can continue to train on it, making it much harder to forget. So something you've been exposed to over and over and over again will last much longer in memory than something you've only been exposed to a few times.
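A minimal sketch of this permanence scheme follows. The 0.3 threshold is the one mentioned in the talk; the 0.1 increment per exposure is my own assumed value for illustration:

```python
# Permanence-based learning with binary synapses: a scalar "permanence"
# grows with repeated exposure, and the synapse only becomes connected
# (effective weight 1) once the permanence crosses a threshold.

CONNECTED_THRESHOLD = 0.3  # threshold from the talk

class PotentialSynapse:
    def __init__(self):
        self.permanence = 0.0

    def reinforce(self, increment=0.1):
        self.permanence = min(1.0, self.permanence + increment)

    @property
    def connected(self):
        # Binary: the synapse either exists or it doesn't. Its strength
        # never varies; only its permanence does.
        return self.permanence >= CONNECTED_THRESHOLD

syn = PotentialSynapse()
print(syn.connected)       # False: the pattern has never been seen
syn.reinforce()            # seen once: could easily be noise
syn.reinforce()            # seen twice: still below threshold
print(syn.connected)       # False
syn.reinforce()            # third exposure crosses the threshold
print(syn.connected)       # True: the synapse now participates
syn.reinforce(); syn.reinforce()
print(round(syn.permanence, 2))  # 0.5: harder to forget with training
```

Note how the synapse's effect on the cell is all-or-nothing, while its durability keeps growing; that separation is what makes the memory both noise-tolerant and hard to erase.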
A
It
allows
us
to
do
continuous
learning
in
the
in
the
face
of
noise.
You
have
constant
noise
coming
in
their
systems
constantly
trying
to
learn
it,
but
it
only
acts
when
it's
seen
it
multiple
times.
It's
a
very
powerful
form
of
a
form
of
learning.
Now
we've
tested
this
and
built
these
things
out.
The
wazoo
we've
planned
commercial
products.
I
only
want
to
just
point
out
a
couple
things.
The
top
point
here
is
showing
the
blue
line,
and
one
of
these
htm'
sequence
memories.
We
call
them
HTM
sequence.
A
Memories
is
learning
a
predictive
model,
some
complex
data
stream,
that's
partly
noise
and
partly
structured
data,
and
you
can
see
we
starting
time
is
on
the
right
on
the
horizontal
axis.
We
start
feeding
in
a
stream
of
data.
There's
no
batch
here.
You
just
start
training
the
system
and
it
starts
getting
up
and
gets
the
perfect
accuracy
the
best
it
can
do
in
this
case
at
the
top.
A
Do
then
at
some
point
we
change
the
data,
its
falls
back
in
its
accuracy
and
it
learns
again
there's
no,
it's
constantly
adjusting
constantly
learning,
even
as
the
data
changes
the
bottom
one
just
shows
that
the
systems
are
very,
very
robust,
I'm
only
showing
one
type
of
robustness
here
this
is
cell
death.
So
if
I
train
a
system
and
then
somewhere
along
the
line,
I
kill
a
bunch
of
the
neurons.
In
this
case,
anything
up
to
about
40
percent
of
the
neurons
is
hardly
noticeable.
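The behavior in the top plot, continuous adaptation with no batches, can be illustrated with a deliberately tiny online predictor. This is my own illustrative stand-in, far simpler than an HTM: it learns from every element of the stream as it arrives, and when the stream's rules change, its predictions re-adapt without any retraining step:

```python
# A trivial online predictor: next-symbol prediction from decaying
# transition counts. Decay lets new structure displace old structure,
# so the model tracks a stream whose rules change over time.
from collections import defaultdict

class OnlinePredictor:
    def __init__(self, decay=0.9):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.decay = decay

    def predict(self, current):
        successors = self.counts[current]
        return max(successors, key=successors.get) if successors else None

    def learn(self, current, nxt):
        # Decay old evidence, then credit the observed transition.
        for s in self.counts[current]:
            self.counts[current][s] *= self.decay
        self.counts[current][nxt] += 1.0

# The stream's structure changes half-way: A->B becomes A->C.
stream = ["A", "B"] * 50 + ["A", "C"] * 50
p = OnlinePredictor()
for cur, nxt in zip(stream, stream[1:]):
    p.learn(cur, nxt)

print(p.predict("A"))  # -> C: the model has re-adapted to the new rule
```

There is no separate training phase and no reset when the data changes; the accuracy dip and recovery in the plot correspond to the interval during which old, decaying evidence still outweighs the new.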
After that you can see a sharp drop-off in the performance of the system, but even without all those neurons, it relearns how to do the same problem. It's an extremely robust system in every single way: neurons, synapses, dendrites. And this is an important property if you're ever going to build these things in hardware. Okay, I now want to switch topics to the functional components of intelligence. I've just given you the biological substrate: the cortex, a hierarchy of regions, the regions doing sequence memory, and so on.
So what are the functional components? This is Hawkins' list. It's my list, because this is subjective; I made it up, so that's what I call it, Hawkins' list of functional components, but I think it's a pretty good one.
First of all, I argue that if you're going to build an intelligent system, it's going to have to have networks of neurons that learn and recall sequences. That is its fundamental premise: the tissue has to do this, because almost all inference, in audition, vision, and somatosensation, is inference of sequences.
All motor behavior is playing back sequences. This is not something to be added to a system; it is the core fundamental principle. We've talked about some of the key properties: continuous learning, for one. It's not a batch system; it has to learn continuously to be an intelligent system. I didn't go into this, but I would argue it also has to make multiple simultaneous predictions, and it has to be very robust. Those, I believe, are requirements for an intelligent system, and I've proposed one model for meeting them.
The second thing is that we have to have regions that use sequence memory for sensory inference, sensory-motor inference, and motor generation. I think this is a fundamental requirement of intelligent machines, as it is of intelligent people, and we don't build most of this today. Then you have to have a hierarchy of regions. Hierarchy is required, but there are a lot of parameters here. The number of regions is a parameter. The size of the regions is a parameter. The connectivity graph is a parameter. You could have a system with one region or a system with 200 regions.
A
You
can
have
a
system
with
the
teeny
little
region
system
with
big
regions,
that
is
a
design
parameter.
It's
not
essential
to
the
overall
function
of
the
system.
It's
just
those
are
things
you
can
dial
it
in
for
different
types
of
applications
and,
finally,
this
system,
if
we're
modeling
the
new
your
cortex,
has
to
have
an
embodiment.
It
has
to
exist
in
something
and
I
read.
It
argument
has
to
exist
with
one
or
more
sensors.
It's
got
to
have
one
or
more
built-in
behaviors.
These are behaviors that exist outside of the cortex, in some sense in the body, old behaviors, because the cortex controls the rest of the body. It has to have emotions and motivations; they can be very simple or they can be complex. And it might have something equivalent to the hippocampus. Actually, I shouldn't say it has to have all of these. It has to have an embodiment, but these individual pieces are parameters: you can have more or less of them, and different types. It's not one-size-fits-all.
So those are my functional requirements, and I think when we build intelligent machines in the future, they're going to have to do these things. So now, let's talk about the diversity of intelligent machines. If you accept this sort of list, let's look at the parameters we can play with, and what that could lead to. Well, first of all, you realize you could build systems that work on these cortical principles, these principles of intelligence, that are very small, and ones that are very big.
Yesterday Bruno talked about tiny brains, insect brains. Those are not intelligent brains, but you can build small, tiny intelligent brains. We have rats, we have mice, but we can go even smaller. At Numenta we build very small pieces of cortex, with 65 thousand neurons and several hundred million synapses, and those learn the spatiotemporal patterns in sensory data coming off of sensors on machines and in buildings and things like that. They work on cortical principles, but they're very, very small. Would I say they're super intelligent?
A
No,
but
is
it
working
on
intelligent
principles?
Yes,
it
is,
and
so
you
can
go
from
that
variety.
On
the
other
hand,
we
could
build
things
with
very
complex
sensors
that
are,
unlike
anything,
a
human
has
and
and
very
big
hierarchies,
and
so
on.
I'm
gonna
leave
you
with
two
of
my
personal
aspirational
goals.
Here
things
I
would
like
to
see.
I
Dowell
see
him
in
my
lifetime,
but
things
I
would
like
to
see
and
I
think
those
things
we
can
build
using
intelligent
these
principles.
One is that I would love to see a machine that is like a super mathematician or a super physicist. You could literally build a hierarchy that is designed so that part of the hierarchy's built-in behaviors are mathematical behaviors, mathematical functions and mathematical operations, so that the system operates in the space of mathematics. You might have one section of the hierarchy dealing with topology and another section dealing with another aspect of mathematics, and this system could be a super brain for mathematics.
A
It
could
be
huge
and
it
could
be
fast
and
it
can
work
non-stop,
24
hours
a
day
and
come
the
right
motivations.
This
is
possible
to
do
using
these
kind
of
principles.
Another
thing
I'd
love
to
see
is
that
one
day
we're
not
gonna
be
able
to
live
on
this
planet
anymore.
I
hope
it's
a
long
time
from
now,
but
it
might
not
be-
and
you
know
Elon
Musk
is
talking
about
sending
people
to
Mars
and
so
on.
A
So
NASA
well,
you
know
they're
serious
about
that,
but
they
think
they're
gonna
send
people
there
and
the
first
thing
they're
gonna
do
is
build.
You
know
factories
and
mines,
and
things
like
that
I,
don't
think
I
want
that
job.
So
we
we
need.
We
need
to
make
super
engineers,
scientists,
robots,
that
go
and
do
this
stuff.
This
sounds
crazy.
I
know
that
but
I'm
not
I'm,
not
joking.
We
need
to
be
able
to
send
milled
machines
that
can
actually
solve
engineering
problems
solve
construction
problems.
If
we're
ever
gonna
get
off
this
planet.
A
If
we're
ever
gonna
explore
the
rest
of
the
solar
system,
we
have
to
have
machines
that
can
do
that,
because
we
are
not
going
to
be
able
to
survive
in
those
environments,
but
why
can't
we
make
a
machine
that
is
really
good
at
using
tools?
Really
is
a
good
engineer.
You
can
sit
there
and
solve
problems
and
build
things
and
we
can
send
out
in
advance
of
us.
It sounds crazy, but I'm obsessed with it. We ought to be aspiring to something great, and we shouldn't be settling for what we can do today. So I've argued that these are the principles we need to pursue: embodiment, sensorimotor inference, and building these complex models. There's a huge amount of work to do, but it's not impossible, and we're making great progress on it. So that's the end of my talk. Thank you very much.
Yeah, well, look, I love all animals. Is that a good political statement? You could ask me: well, what do you think about birds? What do you think about octopuses? They have great brains too. There are a lot of structures here that are going to be shared with other animals. I remember, with bird brains, people study songbirds, and they have a very, very rigid type of memory. But then there's, what's the name of the woman who studies the parrots?
A
What's
her
name,
yeah,
pepper,
pepper,
pepper,
something
yeah,
Pepperberg
and,
and
she
C
argues
that
great
power.
So,
like
super
smart?
Well,
maybe
they
are
I'm,
not
arguing.
These
principles
don't
exist
in
other
brain
structures.
They
probably
do
but-
and
so
that's
great
I
just
say
we
should
be.
We
should
be
shooting
for
first
understanding
the
mammalian
neocortex,
because
it
is
the
one
thing
we
can
all
agree
is
intelligent.
Yeah.
Yes,.
To answer the question: we don't use spiking, so do we have a form of spike-timing-dependent plasticity? In some sense, we do. Remember I said earlier, or I should have said this, that our models are very biological.
In this particular case, learning in the system occurs from a back-propagating action potential traveling up the dendritic tree, and we actually only want learning to occur in the segments that have been depolarized because they had sufficient input. We don't want to learn all over the system; we want learning localized to certain parts of the dendrites, and we only really want to form synapses to things that were active just moments before the cell generated an action potential.
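That prediction-gated, localized learning rule can be sketched as follows. This is my simplification; the names and constants are my own assumptions, not Numenta's API:

```python
# Sketch of the localized learning rule described above: when a cell
# fires, reinforce only synapses on segments that were depolarized by
# cells active on the *previous* time step. Context that arrives after
# the spike is never learned, because it could not have been predictive.

ACTIVATION_THRESHOLD = 15   # active synapses needed to depolarize
PERMANENCE_INC = 0.1        # assumed reinforcement step

class Synapse:
    def __init__(self, presynaptic_cell, permanence=0.3):
        self.presynaptic_cell = presynaptic_cell
        self.permanence = permanence

class Segment:
    def __init__(self, synapses):
        self.synapses = synapses

class Cell:
    def __init__(self, segments):
        self.segments = segments

def learn_on_spike(cell, prev_active_cells):
    """Train only predictive segments, only on previously active inputs."""
    for segment in cell.segments:
        matching = [s for s in segment.synapses
                    if s.presynaptic_cell in prev_active_cells]
        # Only a segment that actually predicted the spike is trained.
        if len(matching) >= ACTIVATION_THRESHOLD:
            for s in matching:
                s.permanence = min(1.0, s.permanence + PERMANENCE_INC)

# One segment saw 18 of its 20 inputs just before the spike:
seg = Segment([Synapse(i) for i in range(20)])
cell = Cell([seg])
learn_on_spike(cell, prev_active_cells=set(range(18)))
print(seg.synapses[0].permanence)   # 0.4: reinforced
print(seg.synapses[19].permanence)  # 0.3: its input was not active
```

The effect is one-sided by construction: inputs active before the spike are strengthened, inputs active only afterwards are untouched, which is the sense in which the timing dependence of STDP is present.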
A
So
I
have
a
pattern
as
exists
in
the
world
in
this
neural
tissue
and
then
I
have
the
cell
spike
I.
Do
the
abacus
action
potential
in
that
I
do
training
if
that
signal,
if
that
context
in
the
world
came
afterwards
in
the
wrong
side
of
spike
timing-dependent
plasticity,
I
wouldn't
want
to
learn
it.
It
wouldn't
have
been
predictive.
So
the
whole
idea
is
you
want
to
have
a
predictive
signal
occurring
right
before
the
cell
fires.
That's
what
you
want
to
learn.
You
don't
want
to
say
that
that
timing
is
there.
On the question of whether they would need us: first of all, I never used the word superintelligence, so let's be careful about that. I wrote an opinion piece about this, about the dangers of these systems; you can find it on Recode. Remember, I talked about those parts of the system, I've taken the slide off the screen now, but there were parts like the emotions and the motivations. So there are two answers to your question.
One is that we get to decide what the motivations and emotions of these systems are, if any, so they will not have the same emotions you and I have. We also do not have to make them self-replicating; there's no reason these things have to be self-replicating. They're not going to be having sex out on Mars. So what is the worry? Well, there are two basic worries. One is that these things get out of control and just do what they want to do.
A
Do
that
it's
kind
of
just
very
simple
things
that
just
learn:
it's
like
it's
like
building
a
human
without
the
rest
of
the
the
emotional
structure,
then
like
the
world
would
be
a
lot
more
logical
and
simple
that
way,
but
these
systems
aren't
going
to
have
those
emotional
frameworks,
they're
not
going
to
be
angry
of
Cersei
or
lust
or
whatever.
This
is
and
and
they're
also
not
going
to
self-replicate.
So
I
don't
worry
about
that
as
much,
but
you
can
read
the
piece
I
wrote
and
we
talked
about
it.
Thank
you.