Description
In this third video in the series, author Jeff Hawkins talks to VP of Marketing Christy Maver about Part Two of his new book, A Thousand Brains. Part Two, Machine Intelligence, covers how a new understanding of the brain can point the way to truly intelligent AI. Jeff talks about why today's AI is not intelligent, what we can do to change that, and why we need not fear it. #athousandbrains
Order A Thousand Brains here: https://www.basicbooks.com/titles/jeff-hawkins/a-thousand-brains/9781541675797/
Visit http://AThousandBrains.com for more information.
A
Hi, I'm Christy Maver, VP of Marketing at Numenta, here with Jeff Hawkins, co-founder of Numenta and author of A Thousand Brains: A New Theory of Intelligence, which we're talking about today in our video series covering the book. This is our third video in the series, and today we're going to talk about Part Two of the book. Part Two is called Machine Intelligence.
A
So in our last conversation we talked about Part One, which covers the Thousand Brains Theory of Intelligence and the discoveries along the way that led to the creation of that theory. Part Two is then about the impact of the theory on the future of AI and machine intelligence.
A
One of my first questions: in this section you talk about how there's no "I" in AI. Why is that? Why aren't today's machines intelligent?
B
Well, it's interesting. As I mentioned in the previous video, we started out with our research focused primarily on neuroscience, on the brain and how it works, although we're a bunch of engineers as well as scientists, so we're interested in AI too. But as we learned more and more about how the brain actually creates intelligence and what intelligence is, it became much clearer what today's AI is and how it differs from biological intelligence, and the difference is striking. So I decided we should write about this, and now we have a chapter, Chapter Eight, that asks why there is no "I" in AI.

B
I'm basically arguing that today's AI is not really intelligent, and it gets back to something I mentioned in a previous video: what is intelligence? Intelligence, at its core, is the ability to learn a model of the world, and that model is very detailed and complicated.
B
When we know something, whether it's what a door is, what a car is, how to use something like a stapler, some language, or some mathematics, that information isn't just some pattern we were exposed to. We store it in our head in this model, and that's part of the theory.

B
The model is all based on these reference frames, which determine how information is stored in the model, and that allows us to act upon it. The theory explains that intelligence is based on this model, and the model has structure to it: our knowledge of the world has structure that we can act upon. Today's AI doesn't really have that structure.
B
Today's AI is a very sophisticated pattern-matching system. You can feed an input in and it can classify it, but, to use an example from the book, if I show it pictures, today's AI says, "That's a picture of a cat," or maybe even, "That's a cat playing with a ball." But that system doesn't really know what a ball is or what a cat is.

B
It doesn't know that cats are animals, that they probably have livers and spleens, that they have fur and nails, that they like to be petted, that there are cat people and dog people, or how you clean up after a cat. There's nothing of that in the system, yet we all know it; we learned it as children. It's our basic knowledge of the world. What we've discovered is how that basic common knowledge is stored in your head, and we know now that there is nothing equivalent in today's AI.
B
So today's AI has a very shallow understanding of the world, and I don't think you can get to what AI scientists, and people in general, want to get to, which is truly intelligent machines, just by doing what we're doing today. You have to incorporate these principles from the brain, so in the book I lay out a series of those principles.

B
I say these are the core, basic things you have to have in a system for it to be intelligent, and I make that argument. It's a novel argument, so I'm sure some people will push back on it, but I think a lot of people might embrace it as well. It's not just that I'm being critical of AI; I'm saying, look, it's great.
B
Well, you know, it's interesting. In my previous book, On Intelligence, which I wrote about fifteen years ago, I had a little bit about consciousness, and I was surprised how many people told me they wished I had written more about it. So I took that to heart with this book and put a lot more effort into it, and I think it's something people are concerned about.
B
People are concerned about the nature of consciousness in general, about human consciousness: what are we, what does it mean to be conscious, how can I feel this way? But it also comes up in a different way if you ask, okay, suppose a machine is doing the same thing as a brain. Would it be conscious, and why, and under what conditions? I explore that in this chapter.

B
I start off the chapter with a story about attending a lecture where a philosopher said that if machines were conscious, then we would be obligated not to turn them off, not to unplug them. I thought, wow, that's an interesting proposition. Is that true? Would that be murder? It got me thinking.
B
So I built the case around that, and I'll be up front and give away part of the chapter: I think machines can be conscious. It's not something that just magically appears. There's a certain set of things a machine has to have in order to be conscious, and if it doesn't have those things, it won't be conscious. The same is true of humans; humans have to have them too.

B
If we didn't, we wouldn't be conscious. I won't go through all of them right now, but just to give you a flavor of it: again, the book is largely about how we build this model of the world in our head, and we remember things all the time, constantly updating the model. Every time I see anything, I update it.
B
If I see that something has moved on the table, I update my model of where that thing is, and if I see a new piece of food in the refrigerator, I update my model of what the world is like. One of the things we actually store in our model of the world is our thoughts. We actually store our thoughts about what we did a moment ago.

B
What we were thinking a moment ago was part of our experience, and we remember it just the same way as seeing a coffee cup on the table; I can say I was thinking about getting ice cream from the kitchen. One of the key components of consciousness is the ability not just to experience the moment but to remember what you experienced some moments ago, or even a long time ago. You literally record your thoughts, and they become part of the model of the world, so when I get to the kitchen I can ask, why did I come to the kitchen?
B
So there's this idea that you can slide your current perception from the present back into the past, and also into the future, and that gives you the sense that you exist in the present, that I exist in the world. That's one of the components; there are others too. But anyway, I do make the case in the book that a machine can be conscious if we give it the set of ingredients that I outline. I then take up the question of what our moral obligations to a conscious machine would be.
B
Going back to the initial question: can we unplug it? This is where something comes in that you and I haven't talked about yet. In the book I talk a lot about the older parts of the brain, the parts that are not the neocortex. These older sections of the brain are where all of our emotions come from and where our fears come from.
B
So when we fear death, for example, it's really the older parts of the brain, the parts that evolved a long, long time ago, that are fearing death. It's not the neocortex, not the intelligent part; it's really the older parts of the brain that fear death, or fear pain, and things like that. When we build intelligent machines, we do not have to build those older parts of the brain. In fact, I make the argument in the book that we shouldn't build those parts.
B
We
don't
want
to
build,
human-like
machines,
you
can
be
intelligent
and
not
have
the
same.
Emotions
as
a
human
or
the
same
fears
as
a
human
or
the
same
desires
of
human
at
all.
In
fact,
intelligence
doesn't
create.
Those
desires
that
that
has
to
be
intelligence
is
really
just
to
build
them
out
of
the
world,
and
so
intelligent
machines
do
not
have
to
fear
death
and
and
if
and
there
would
be,
no
moral
obligation
to
them
to
turn
them
off.
B
It's not a big deal. Even if I turned a machine off and never plugged it in again, it wouldn't matter, unless we went out of our way, in a big way, to make machines fear things and have the same sort of emotional relationship to the world that we do, which I don't think we can do right now, and I don't think we should. So I conclude that chapter by explaining what I think intelligence is and then saying, yes, machines can be intelligent, but they're not going to be like humans, and we don't have to worry about them.
A
Right. Well, consciousness always generates some interesting conversations, so I'm sure this chapter will as well.
B
I expect it will. It runs contrary to what some people believe, and I'm sure they'll have a few things to say about it, but I feel pretty comfortable with it.
A
Yes. There's also a chapter on the existential risks of machine intelligence, and people may be surprised by some of what you have to say in that chapter, which is really along the lines of why we don't need to fear machine intelligence. Can you talk about that?
B
Yeah. Just to make sure everyone listening is on the same page, and I'm sure most people know this, there's been a lot of concern recently about AI. There are really two concerns. One is that AI is a powerful technology and people can abuse it. That's true; I'm not denying that at all. But there's a different level of concern that has come up, which is the existential risk of AI. You've seen famous technologists, philosophers, and other authors of various types claiming that we're creating these intelligent machines, that they're going to be smarter than humans, and that they're going to take over the world, or we won't be able to control them, or they won't adhere to our desires. There's a whole range of scenarios that come out of this where somehow we lose control and humans are doomed, taken over by our creations.
B
This is a serious concern. A lot of people are worried about it, and I respect these people; I don't want to dismiss it. However, I think almost all of those fears are based on a lack of a deep understanding of what it means to be intelligent. The assumption is that to be intelligent is to be like a human, so people imagine that a superintelligent machine would be like a human, just a superintelligent one, and that it would do bad things the way humans do bad things.

B
In this chapter I really make the distinction that machines can be smart without having any of our emotions and drives, and they won't have any emotions and drives unless we give those to them. Intelligence is building a model of the world; it's not a desire to procreate or a desire to dominate.
B
Think of how people have created a model of the earth in the form of maps. For hundreds of years, people trying to understand what the world is really like have made maps, and a map is a form of knowledge. It's an entity in its own right, and maps allowed people to do things. They allowed people to be successful: to be traders, to be warriors, to make discoveries, and so on. But the maps themselves had no desires. A map is a model of the world, but on its own it doesn't say, "I want to dominate and kill these people," or "I want to trade peacefully with those people over there." The map itself is analogous to our brain: the brain has a model of the world, but how that model is used is independent of the model itself. So intelligent machines are like a map of the world.
B
They will have a very detailed model and understanding of the world, but how that is applied determines whether it's good or bad, and intelligence itself does not dictate that. It will not come about on its own; maps didn't automatically start dominating the world. They didn't do that.
B
That's an analogy, but I think it's a good one, and I developed it to make the point that AI does not represent an existential threat. I don't want to dismiss the idea that truly intelligent machines are powerful tools that could be abused by humans.
B
I think that is true, and we have to be very, very careful about it, the same way we have to be careful about nuclear weapons or about people's ability to modify genes and so on. These are powerful technologies that we have to be careful about, because some people will abuse them. But on its own, AI is not an existential threat. There is not going to be a runaway explosion of intelligence; I talk about that concept in the book too. So yes, I know that will be a somewhat controversial chapter.
B
Yeah. I should point out, and we'll talk about this when we come back to discuss Part Three of the book, that I think we should build intelligent machines, and I think we will build intelligent machines.

B
It will be one of the dominant technologies of the 21st century, but it will impact the world in the same way that computers impacted the world in the latter part of the 20th century. There are some really wonderful things that will come out of it, and we'll just have to learn to control the downsides, in the same way that wonderful things came out of computing.

B
Computing brought a tremendous number of benefits to medicine, to science, to everyday life, and yet we still have problems with it and people abuse it. So anyway, I think this is going to be a theme over the rest of this century. It's going to be a big thing.
A
Well, we'll talk more when we discuss Part Three of the book, Human Intelligence. A Thousand Brains: A New Theory of Intelligence is available for pre-order now on Amazon, and we'll put a link in the YouTube description. Thanks for your time today, Jeff, and I look forward to talking about the last section.