Description
Jeff Hawkins gives a keynote talk at Cornell Silicon Valley's premier event, March 2017. This is a trimmed video of the talk, shared with permission from Cornell University.
Moderator: Our first session this morning is a keynote conversation. It is entitled "Reverse Engineering the Brain for Intelligent Machines." We have a treat ahead of us: Jeff Hawkins and Subutai Ahmad are on a mission to reverse-engineer the brain. In this session they're going to discuss how they are tackling the scientific challenge, the progress they are making, and why brain theory is necessary to create truly intelligent machines.
Jeff Hawkins: So our director of engineering is also a Cornellian, and we've had a number of interns over the years, a whole bunch of interns. I don't know why that is; we don't go out of our way to do that. We're not a Cornell hiring shop, but somehow they find us, and that works pretty well. So, should we delve into it? Yeah? Okay, so how do we want to start here?
Yeah, so I'll start off with the big goal, the big picture here, and we are going to have time for questions at the end. So you'd better pay attention to those complicated instructions on how to ask us questions, since we're not allowed to take them any other way. So start thinking about that.
Unlike Subutai, who has had a long, lifelong interest in learning and how the brain works, I didn't really fall in love with brains until right after I graduated. It was the fall of '79, but I decided at that time that I wanted to really dedicate my life to figuring out how the brain works, and it took a long time to get into a position to do that. But we think we're working on one of the most interesting and important problems ever, about anything. I mean, think about our species.
The human species is unique in really one way: we have a unique brain. Otherwise we're kind of not very interesting animals, but we have an intelligent brain, more so than other animals, and the whole nature of our success, our lives, and our social structures is knowledge. Even the very notion of knowledge can't exist outside the framework of the brain. So if we want to understand what life is about, what the universe is about, and what we as a species are about...
There is no more essential question than what the brain does and how it does it. We have a once-in-forever opportunity to figure that out. It's going to happen once, and only once, and an intelligent species can do it. So, what the hell, let's go for it; that's what we do. And by the way, this is not pie-in-the-sky stuff.
It's going to have a lot of impact. If you think about our world: we spend so much time educating our children, but we actually don't know how we learn. Well, we're starting to learn how it is that we form memories and how it is that the structures of the brain actually do this, and so it can inform pedagogy. We also have a whole series of issues related to psychology and neurology which can be impacted by knowledge of how the brain works.
B
It
is
purely
of
course,
it's
clearly
interesting
on
a
pure
scientific
and
philosophical
point
of
you
just
knowing
how
wants
so
how
it
works,
but
also.
We
also
believe
that
there
is
going
to
be
very,
very
important
and
a
large
industry
and
impactful
on
our
societies
about
building
intelligent
machines,
and
we
don't
believe
you
can
build
intelligent
machines
unless
you
understand
how
the
brain
works.
So
much
of
what
you
hear
about
an
AI
today
is
not
really
about
intelligence.
B
It's
nothing
really
at
all
like
what
brains
do
and
so
we're
trying
to
get
to
the
core
of
that
the
core
principles
about
what
how
brains
work
and
that's
going
to
inform
the
future
of
AI,
or
we
prefer
to
say
machine
intelligence
and,
if
you're
worrying
about
this,
it's
nothing
to
worry
about.
We
can
talk
about
that
to
the
risk
of
this
or
not
what
you've
heard
about
if
you've
heard
about
the
existential
threat
of
this
anyway.
So
that's
what
we're
that's!
B
What
motivates
us
I
think
everyone
who
works
at
Numenta
is
deeply
passionately
interested
in
this
topic.
We
think
it's
important
to
work
on
it's
exciting
to
work
on
it's
fun
to
understand.
I,
wish
you
all
knew
what
I
use,
because
it's
fun
and
interesting
so
we'll
be
able
to
share
a
little
bit
of
that
today,
just
a
little
bit
of
it.
I
should
mention
before
I
turn
over
to
time.
The
next
thing
that
everything
we
do
is
open
as
a
business.
So
all
of
our
research,
all
of
our
code,
everything
we
do.
It's not necessarily easy to follow, but if you're really into what we're doing, you can go to our website and learn about all of it in detail. There's nothing secretive going on here. But I think what might be useful now is for Subutai, who is in charge of all of our research, to chat a bit about the state of where we are: what do we know, what have we figured out, and what are some of the remaining challenges?
Subutai Ahmad: Thanks, Jeff. So I'll try to give you a little bit of a feel for the state of neuroscience and how we go about thinking about it. What does it even mean to have a theory of intelligence that is based on neuroscience? I don't know how much you all know about neuroscience, but the field is absolutely exploding. It's amazing, the innovation that's going on in the research. There are somewhere around, I think, 50,000 neuroscientists in the world doing this research.
Jeff Hawkins: We did say we'd interrupt each other here. Actually, there isn't just one neuroscience conference, there are many, but at the big one, the annual Society for Neuroscience conference, over 30,000 neuroscientists attend every year. I mean, it's huge.
Subutai Ahmad: Forget about actually reading all the posters. But yeah, the field is exploding. It's almost like there's a Moore's law in neuroscience. It's not unusual today for researchers in the field to be recording from, let's say, a thousand neurons in awake, live, behaving animals, so they can actually give these animals tasks, look at the neural responses, and try to find patterns.
There are several labs that are close to being able to record from an entire animal, basically, in some species of fish: being able to record every single neuron within that fish while the fish is swimming in 3D, and to visualize it live. I mean, it's just unprecedented what's going on, and this kind of advance would have been unthinkable even ten years ago. But where we come in is that although there's a lot of data, there are actually, surprisingly, very few theories of how the brain actually works.
C
What
is
the
actual
functionality
behind
these
things
and
I?
Think
we've
learned
a
bunch
of
things
from
speaking
with
neuroscientists
and
working
on
a
modeling
that
I'd
like
to
kind
of
share
a
little
bit
with
you.
So
one
of
the
things
we've
learned
is
that
the
model
of
how
you
think
about
it
neuron
is
really
important.
So
if
you
look
at
most
machine
learning
and
deep
learning
technologies,
they
have
a
very,
very
simplistic
view
of
what
a
neuron
is.
Real neurons have specific areas of input and output, and there are temporal dynamics in the neuron that are really important. It turns out that if you really study and understand that, it tells you a lot about how the brain learns, how we deal with temporal structure and streaming information, how we behave, how we send out motor commands, and so on. There's a lot of detail in the structure of neurons that's actually important to model, and most of the machine learning stuff today completely ignores it.
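The "very simplistic view" Subutai mentions is the standard point-neuron abstraction used throughout deep learning: all inputs pooled into one weighted sum, pushed through a nonlinearity, with no dendrites and no temporal dynamics. A minimal sketch (the input values and weights here are purely illustrative):

```python
import math

def point_neuron(inputs, weights, bias):
    """The deep-learning neuron: one weighted sum, one nonlinearity.
    Every synapse is pooled into a single total; there is no notion of
    dendritic segments, timing, or predictive states."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three inputs with illustrative weights: the neuron sees only the sum.
print(point_neuron([1.0, 0.0, 1.0], [0.5, -0.3, 0.2], -0.1))  # ~0.646
```

Everything the talk says real neurons do beyond this (segments, timing, prediction) is simply absent from the abstraction.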
C
It's
the
service,
we
think
so
that's
one
thing
I
think
we've
learned.
The
second
thing
that
we've
learned
is
sort
of
the
opposite.
Is
that
much
of
the
complexity
around
brains
and
if
you
look
at
North
science,
textbooks
you'll
see
these.
You
know
extremely
complicated
diagrams
of
all
of
the
regions
of
the
brain
and
how
they're
hooked
up
and
how
every
region
is
doing
some
specific
function
or
the
other,
and
it
turns
out
that
you
can
actually
ignore
a
lot
of
that
complexity.
Jeff Hawkins: I'll just illustrate that. So if you think about your brain (this is a model of a human brain), about 75% of the volume of the brain is this thing on top, the neocortex. That's the thing that's unique to mammals, and that's the thing that makes us smart, so we're interested mostly in the neocortex.
We have to understand how it relates to other things, but if you could take this neocortex, which is all wrinkly like this, lay it flat, and iron it out, it'd be just like this (my favorite prop). It's about this thick, only a couple of millimeters, and it's about this big in area, so it just gets all scrunched up to fit inside. And this is you, right? There are something like 30 billion neurons in here, and I'm not joking. This is it.
B
Is
you
and
it's
mind
speaking
right
now,
and
yours
is
listening,
but
they're
amazing
game,
but
Simplot
I
was
just
talking
about
these
different
regions.
They're
all
this
region
to
his
language.
This
region
division
this
region,
there's
like
a
hundred
different
regions
here,
but
sympathy
I
was
referring
to.
Is
you
can
slice
through
that
tool
in
later
thickness?
B
You
see
this
very
complex
structure
of
cell
types
and
layers
and
connectivity,
but
it's
the
same
everywhere.
It's
almost
identical
everywhere
and
it's
also
true
across
species.
So
if
you
took
a
mouse's
near
cortex
and
sliced
through
it,
you
can't
tell
it's
not
as
a
mouse
versus
a
human's,
almost
identical,
and
but
it's
detailed,
so
our
goal
and
what
Rupert
is
referring
to
is
what
we're
truly
trying
to
figure
out
is.
What
is
that
common
structure
doing
that
exists
everywhere
in
the
cortex?
You could reroute the different senses: if the data coming from your skin goes to what is normally a visual part of the brain (we don't do this with people, but they've done it with some other animals), that part of the brain takes on a different function. So we're looking for this sort of uber-common algorithm that applies across all aspects of intelligence, and that's what we're making progress in understanding.
Subutai Ahmad: So as modelers and computer scientists, that actually makes our job much, much simpler. It says we can ignore much of the complexity, blur our eyes, and focus on the commonalities and the common principles, and there's a lot that we've learned from that. I think we're making really good progress on it.
Jeff Hawkins: Yeah, I'm so tempted to go into detail here that some people would want to hear, while other people would be like, what the hell is he talking about? But just to give you a sense of what we do: we actually model large structures of real, complex neurons in our business, in our lab.
We model networks of hundreds of thousands of very complex neurons in structures as they exist in the neocortex, and we do that until we gain a deep understanding of exactly how some of this tissue works, not a fuzzy understanding but a very, very detailed understanding of it. We had a real breakthrough this last year, which I mentioned.
We were asking ourselves about something called sensorimotor inference. Mostly you're not aware of this, but most of the way you learn about the world is through movement. You're constantly moving. To learn what something feels like, you have to move your hands over it; to learn what a building or a structure is, you have to walk through it. Even when you're just sitting here looking at us, your eyes are constantly moving, three to five times a second. You're not aware of it, but it's changing the input to your brain, right?
Subutai Ahmad: Hopefully that gives you a feel for what we're doing, how we're going about it, and the depth we're going into it at. The key things are, one, that neurons are more complex than it might seem at first, and it's important to model that; and two, that the brain is perhaps not as complex as everyone thinks. There's a set of common principles, and by focusing on those common principles we can really make progress.
Jeff Hawkins: Let me add a little color to that. Try to imagine: these neurons are so complicated, they have somewhere around 10,000 connections on each cell. And what we've learned, as I just shared with you, is that they're not like a big sum; it's not like some sort of simple math happens across all the inputs at once. Individual neurons can learn hundreds of unique patterns and respond very precisely in hundreds of unique ways. Most people who think about neurons think they get some input and produce an output.
What we've learned is that a neuron is very, very picky. Individual neurons will say: this pattern I've seen before; this one I'm not sure about; this pattern I've seen before. It's a very precise way of working, even though the neurons themselves are kind of noisy. Anyway, just imagine this thing: it's like a tree with 10,000 connections on it. It's pretty incredible, and you've got about 30 billion of those in your neocortex.
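One way to picture what Jeff is describing: in Numenta's published neuron model, a cell has many independent dendritic segments, and a segment recognizes its pattern when enough of its synapses match the currently active inputs. A subsampled threshold match is what makes recognition precise yet noise-tolerant. A simplified sketch (the segment contents, sizes, and threshold below are illustrative, not Numenta's actual parameters):

```python
def segment_active(segment_synapses, active_inputs, threshold=8):
    """A dendritic segment fires when at least `threshold` of its
    synapses connect to currently active inputs. Because the match is
    a threshold over a subsample, a few missing or extra input bits
    (noise) rarely change the outcome."""
    return len(segment_synapses & active_inputs) >= threshold

def neuron_recognizes(segments, active_inputs):
    """A neuron with many segments responds if ANY segment matches,
    which is how one cell can learn hundreds of unique patterns."""
    return any(segment_active(s, active_inputs) for s in segments)

# Two illustrative learned patterns, each stored on its own segment.
segments = [set(range(0, 10)),      # pattern A: inputs 0..9
            set(range(100, 110))]   # pattern B: inputs 100..109

noisy_a = set(range(0, 9)) | {57}   # pattern A with one bit corrupted
print(neuron_recognizes(segments, noisy_a))    # True: still recognized
print(neuron_recognizes(segments, {1, 2, 3}))  # False: too few matches
```

A real cell with 10,000 synapses spread over many such segments gets exactly the "picky but noise-tolerant" behavior described above.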
Jeff Hawkins: Yeah, there's a lot. I already mentioned some of those impacts earlier. I think the one that's getting a lot of attention now is the whole idea of machine intelligence, and maybe it's worth exploring that a little bit. As you know, there's a lot of noise these days about AI, and most of what's happening in AI is not really intelligence. Machine learning techniques are very good.
B
The
powerful
they're
not
very
useful,
we're
not
trying
to
say
that
that's
not
the
case,
but
not
even
getting
close
to
what
most
people
say
is
AI
is
not
even
getting
close
to
what
what
it
is
that
we
do
as
humans
and
how
our
flexibility
and
ability
to
learn
about
almost
anything.
So
we
think
what
we're
doing
is
the
core
of
what
really
is
going
to
be
the
future
of
machine
intelligence,
and
it's
very
hard
to
know
exactly
how
that's
going
to
play
out.
B
We
think
that
first
of
all,
the
Machine
tells
us
is
going
to
be
built
on
the
principles
of
the
brain.
That's
what
one
of
our
beliefs?
It's
not
a
commonly
police
today,
but
that's
we
were
a
competent
net
and
and
that
allows
us
to
build
machines
that
are
not
at
all
like
humans.
There.
The
goal
is
not
to
build
like
human-like
things.
That's
that's
a
mistake.
The
goal
is
to
build
machines.
They
work
on
the
same
principles
of
the
brain,
but
can
be
big
plied
in
other
areas.
We learn about the world through interaction with it, and we discover its structure; that's what science is, and that's what we do as a society. But you could build a machine that works on the same principles at the nano level, discovering the structure of proteins and understanding how proteins fold. It could be learning on the same principles but at very different time scales and very different sizes. It's going to have an impact as broad as computers have had on our society in the last seventy years.
It's going to be as big as that. But don't think about it as robots running around and talking to you and taking over your job; that's not what this is about. Just like computers didn't do that, this is actually not going to do that. But it's going to have an amazing impact on our societies going forward, and it will transform the world in many exciting ways, which I'd be glad to talk about later.
Subutai Ahmad: People would interview experts and write down rules; these were expert systems. You'd create this big database of knowledge and if-then kinds of rules, and that had a lot of applications. That sort of early AI is actually still around today: if you've heard of IBM Watson, that's basically that style of AI. We kind of call that AI 1.0. Then there's the machine learning world that we're immersed in today.
C
You
might
have
heard
of
buzzwords
like
deep
learning
and
so
on,
while
the
classic
AI,
the
AI
1.0
guys
were
working.
People
started
to
figure
out
how
to
get
machines,
to
learn
how
you
can
give
it
data
and
tune
its
parameters
and
get
it
to
sort
of
recognize
basic
patterns.
But
you
know
this
was
all
going
on
during
the
80s,
but
they
didn't
have
enough
data
and
they
didn't
have
enough
compute
power,
but
today
there's
you
know,
computers
are
way
more
powerful
and
you
know
with
the
internet.
...there are tons of data available, so machine learning techniques are becoming a lot more popular and are actually delivering huge amounts of value today. But the problem with those systems, as Jeff was mentioning, is that they're extremely brittle. You tend to train them on data, and even though you don't have hand-engineered rules, they're still just recognizing the patterns in the data that you give them, and it's very passive and extremely brittle.
If you're aware of the DeepMind Go system, for example: they beat the world champion Go player, a feat that was thought to be extremely difficult. But that system, even though it can beat the world's best Go player, can't actually play tic-tac-toe. You'd have to completely start from scratch in order to get it to play tic-tac-toe. So we think this is going to lead to AI 3.0, which is biologically based, following our work.
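The "give it data and tune its parameters" loop Subutai describes is the heart of that machine-learning style of AI. A minimal sketch, using the classic perceptron learning rule to learn the AND function from labeled examples (this is textbook machine learning, not Numenta code):

```python
# Training data: inputs and target outputs for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # weights, tuned from data rather than hand-written
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# The learning loop: nudge parameters toward each labeled example.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # learned AND: [0, 0, 0, 1]
```

This also illustrates the brittleness being discussed: the tuned parameters capture only the patterns in the training data, and a different task means starting the loop over from scratch.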
Jeff Hawkins: Today we have a pretty good idea of what those principles are. Some of them we understand deeply; some of them we don't. But rest assured, most of today's AI systems aren't even remotely like an intelligent system. Let's make sure we leave some room for questions here. Is there anything else we want to talk about before the questions? Oh, maybe a bit about the business, yeah.
You know, we had a VC introducing us here and so on, but we have a kind of unusual business model. We're not a classic startup. Subutai and I both worked at previous startups; I started Palm and Handspring, and Subutai worked in the vision space. Numenta is a business, but we don't have a product, and no VC would fund us. It's funny, because they all come to us all the time...
B
Know
is
there
no
you're,
not
you're,
not
going
to
want
to
give
us
money
and
but,
but
it's
usually
extremely
paid.
We
have
to
be
really
patient.
I
mean
we've
been
at
this
for
12
years
so
far,
and
we're
going
to
be
out
for
quite
a
few
more.
Our
business
model
is,
though,
isn't
it
reads,
one
of
intellectual
property.
We
have
for
a
small
company.
We
have
over
30
Rios
for
like
35
issued
patents.
Some
of
them
are
very
fundamental
and
I
know
for
my
work
in
mobile
computing
that
those
things
become
really
valuable.
Many of the patents I filed when I was at Palm ended up being extremely valuable, with a lot of litigation over them many years later. So we kind of have a patient business model here. We're really focused on the science, and I think it'll pay off in the long run, but we're not focused on that now. We do have some licensees already. All right.