Presented by Jeff Hawkins
A: Eli, yeah, when he called me up and asked me to come speak at this conference, well, it's kind of hard to refuse him; he's just a well-known guy. But I said, you know, are you sure you want me here? I'm not, you know, a guy who studies energy-efficient electronics.

A: So I brought a little brain along here, just to remind us what we're talking about; this is my little plastic brain, okay. But I do think there's an intersection between my world and your world that will grow over time, and so hopefully I'm going to be able to keep you entertained, tell you a few things about brain theory, and encourage some conversations about how we're going to build silicon-based (or whatever the technology is) brains.
A: Okay, so that's the title of my talk: what brains tell us about the future of silicon. Some of this is just to repeat what Eli said. Oh, this one doesn't advance, but that one advances, so I'm not supposed to look at this, is that it? Just to reiterate what he said here: Numenta is an unusual business.

A: It's a for-profit business, but we kind of run it as a combination of a pure science component and a business component, and the pure science component is actually the more important part in my mind. That component is to reverse engineer the neocortex, and we want to do this in a purely biological form. We want to understand how that structure works exactly; this is not about being "inspired" by it. If I had to do nothing else, I would stop right there and say, you know:
A: If you really do figure out how the neocortex works, then you can use those principles to build machines that work on the same principles, and this would be the catalyst for the beginning of machine intelligence. So what we want, and the second part of Numenta's goal, is to play a catalytic role. Remember what a catalyst does: it's something that accelerates a reaction dramatically, but without getting consumed. And as a startup company, you don't want to be consumed by an industry you create.

A: The idea here is to build machines or software that work on neocortical principles. It is not to model or simulate a human brain. We have no desire to build things that are like humans, or to pass Turing tests and things like that; it's just that there's a difference. Brains obviously work on a set of principles, and if we understand those principles, we can do some really amazing things.
A: Eli mentioned this morning that the goal of his institute is to find the replacement for the transistor, which I thought was a great statement. I don't see it anywhere on the website or anything, but that was right.

A: So our goal here is, in some sense, to replace the programmed computer with ones that learn. The idea is that you could build machines that are truly intelligent, but they might have different sensors, they might have different embodiments, they might be faster and larger and distributed in a lot of different ways. So that's what we're trying to do.
A: I'm now going to give you a very, very brief history of artificial intelligence. I made this slide because I thought I needed to provide context for this audience, so it's just one slide here. If you go back a number of years, to the period from roughly 1950 to 1990, when people talked about AI they were always talking about one thing: they were talking about programming computers, and they had no interest in biology.

A: In fact, I remember visiting the MIT AI lab in 1980 (I was trying to be a graduate student there before I came to Berkeley), and they told me point blank: there's absolutely no reason to study brains, they're just noisy computers. If you want to study brains, go someplace else. I went to Berkeley, but it didn't work out here either.
A: I'm serious, that was their position at that time. Now, not everybody agreed with that. During this period some people thought, hey, maybe brains could tell us something, and there was a smaller field of artificial neural networks, ANNs, and the term they used was that these are "biologically inspired." The big success in this field was something called the backprop neural network, backpropagation, which became popular in the mid 80s.

A: That classic AI is sort of almost dead now; it was just computer programming. But today the term AI refers to the descendants of the artificial neural networks that were created in the 1980s. So AI is in the news a lot these days; almost every day you can see stories about AI, or Google, or the threat to humanity, and things like this.
A: These are basically, and I don't want to overdo it, but basically these are the same types of networks they had back in the mid 80s. They're still based on the same concept of backprop neural networks. They go by a different name now, deep learning, which is a really great name, but it's pretty much the same thing they had in the 80s. What's really different now is that we have fast computers and we have big data, and so they figured out how to train monster-sized things, which you couldn't have done back in the 80s, and there have been many successes in this field. This is a very successful field at the moment, and I don't want to take anything away from that. But they're still "biologically inspired"; that's the term they use. Honestly, we know now that they're biologically impossible: the way these networks work is nothing at all like the way brains work.
A: Now, maybe that doesn't matter. Maybe we don't need to know that; maybe we can just build intelligent machines without really caring how brains work. But just as there was in the past, there's a bunch of people, myself included, who don't believe that's true, who believe you're not going to get there this way, that this is going to be like a local minimum. There's a lot of excitement about it now, and it's very valuable, but it's not really on the path to machine intelligence. I don't have a name for those people, so I'll just call them neocortical theorists, of which I'm one. The goal there is to really reverse engineer the neocortex. We think of ourselves as biologically constrained; I mean, if parts of our theories don't match the biology, then they've got to be wrong, and everything we know about the biology has to be consistent with our theories.
A: You know, through brain anatomy and physiology, things like that. What we've learned, and what I can tell you as part of my story today, is that the artificial neural networks, the deep learning networks we have today, are really insufficient for building intelligent machines. This is still an important thing to do, and I think in the future (I'm very, very confident saying this) when people talk about the term AI, they're going to be thinking about things that are really brain-like. The transition has gone from "we can ignore brains completely," to "we can ignore brains mostly," to "no, we can't really ignore them." It didn't have to be this way; maybe humans could have just figured out what intelligence was and built machines that work on those principles. But we have a long history showing that, surprisingly, the brain still has things to teach us, and we need to learn what those are. So that's what we do.
A: That's what I want to talk about. Okay, why should you care about this? Why does it matter to you guys at all? Well, there are a lot of people interested in building hardware to model brains, and sometimes they use the term neuromorphic, which used to have one meaning and nowadays has a broader meaning: hardware that models brains. Pretty much every major semiconductor company in the world has come to Numenta or talked to me at one point in time, because they're all interested in this. And why is that?

A: Well, there's a bunch of things people tell me. I'm not telling you this; this is what they tell me. Some people talk about the end of Moore's law; you guys were discussing that this morning, and I'm not going to get into that debate. Some people say, you know, we're tired of the von Neumann architecture, it's getting boring, we need something new; how many more cores can we put into a chip? We need something beyond that.
A: Some people are motivated by the fact that there's big data and there are machine learning applications; they want to accelerate machine learning applications. And some people come at it from a power-efficiency point of view, which is relevant to this group: they say, man, the brain is amazingly power efficient, it just runs on a few watts and look at all it does, so maybe there's something to be learned from that. This is not my area of focus; I'm going to come back to neuromorphic hardware at the end of my talk. My business is figuring out how the brain works, how the neocortex works. Now, just to remind you, when we talk about the neocortex, it's not the entire brain; it's about 75 percent of the volume of the human brain. It's the big wrinkly thing on top; the old parts of the brain are sort of tucked up in the middle, underneath it. And the neocortex is pretty much everything we think about as intelligence.
A: You know: speech, language, understanding the world, all high-level vision, planning. All of this happens in the neocortex. So I'm going to present a simplistic picture of what the neocortex does and how it works. I don't want to get too lost, but I want to give you the big picture of what's going on here, so I'm not going to talk about the rest of the brain at the moment. You can think about the neocortex as a memory system; that's what it is. When you were born, you had no knowledge about things in the world, very, very little. You can't even see; actually, it's not that you see things and don't know what they are, you actually haven't learned to see yet. And you don't know about buildings and computers and electronics and presentations and PowerPoint and all this kind of stuff. You have to learn everything, and you learn it through your senses. Now, the senses are arrays of sensors: your retina is about a million sensors, your body's somatic sensors are about a million.
A: Your ears are about 30,000. So you have these several million bits coming in on these neuron axons. They all look identical and they're flowing into the brain, and the patterns on those sensory neurons are changing multiple times a second, rapidly changing. Even your eyes are moving three to five times a second, even if you're not aware of it. So the input patterns are completely changing all the time.

A: It's also generating my speech right now, and deciding when I click this little clicker, and things like that. Now, one thing to notice when you think about this: you're constantly interacting with the world through motion, through behavior, and most of the changes coming in on the sensory stream are because of your own behavior. As you're sitting in this room, your eyes are moving about; the room's not moving, the room is static, but the input on your optic nerve is constantly changing, because your eyes are constantly moving.
A: Yes, I move a little bit too, but mostly you're moving your head and your eyes. Similarly, if you want to touch something, you have to move your hand over it; it doesn't happen unless you do that. And even now, I'm hearing my own speech as I talk here. So it turns out that the majority (not all, but the majority) of the changes coming in on that sensory stream are because of your own behavior. So what is this brain doing?

A: Well, we say that the neocortex is learning a model of the world, and it learns this primarily through behaving and acting upon the world. It wants to build a model of the world, of how the world is supposed to work, as it's observed.
A: This model is a time-based model and it's a predictive model. What I mean by time-based is that the primary memory operation in the neocortex is a memory of time, of sequences, of what follows what, under what conditions. It's not like taking pictures; it's more like a movie. It's all about sequence memory, and it's constantly making predictions about what's going to happen next. So it's a model of the world that says: okay, given what's happened in the past, given my own behaviors, what am I expecting next, what am I expecting next, and it is constantly testing those predictions. That's the simplest little picture I can give you of what the neocortex does. There are three principles we've already picked out of this that I'm going to harp on; there are more, but these are the three I'm going to focus on, maybe the top three. First of all, the memory in the brain is a memory of time-based patterns. It is inherent; it's not something that's layered on top, it's built into the core memory structure of how neurons work.
A: The second thing is that you really can't build intelligence or understand intelligence unless you include the sensory-motor integration component. It's not something you can just say, oh, that's optional, I'll add it some time. No, it's part of how you learn and part of how you sense. And the third thing is that it is continuously learning. There's never a batch process; it's not like, oh, here's the training set and here's the test set. We force that on kids in school, perhaps, but that's not how the brain works.

A: You're constantly learning, all the time. Every time new input comes in, it's continually adapting in real time. So these are some of the high-level principles.
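To make the "no batch process" point concrete, here is a minimal sketch (my own, not Numenta's code) of what a continuous-learning loop looks like: there is no separate training phase, and every new input is predicted, compared, and learned from in one streaming pass. The `model` object and its methods are hypothetical placeholders.

```python
# A minimal sketch of continuous (online) learning: predict, compare, learn,
# for every input, with no separate train/test phases.

def run_stream(model, sensory_stream):
    """`model` is a hypothetical object exposing predict() and learn()."""
    surprises = []
    for observed in sensory_stream:
        predicted = model.predict()              # what do we expect to see next?
        surprises.append(observed != predicted)  # a failed prediction is an anomaly
        model.learn(observed)                    # adapt immediately, in real time
    return surprises
```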
Now I'm going to dive into a bit more neuroscience theory here before I come back up and talk about the implications for you; hopefully you'll find this interesting. Let's just talk about the architecture of this thing, the neocortex. I've shown a human neocortex and there's a rat neocortex next to it, because the principles I'm about to tell you are common throughout all mammals. All mammals have a neocortex, and it doesn't really matter which mammal we're talking about; these principles are the same. Now, the cortex is really a sheet of cells. If you could take it out of your head right now (a human neocortex, and this is a good model of a human cortex), it's about a thousand square centimeters and about two and a half millimeters thick. It gets wrinkled up to fit in your head, but it's really a sheet, very thin, and it's remarkably uniform.
A: That is, it's got a lot of detail in it, but no matter where you look, this detail is almost the same everywhere. In fact, it's very hard to tell what species or what part of the neocortex you're looking at, because it almost looks identical. And we now know it's functionally the same; it's hard to believe, but basically the neocortex is doing the same thing everywhere. Even though parts of it do vision and parts do hearing, it's the same basic process.

A: This is how it got very large so quickly, and this is why we are so adaptable: because we have a universal learning algorithm going on here. The neocortex is divided into regions. Those regions are defined by connectivity; they're not visual, you can't see them, but the neurons connect between areas in the neocortex, and if you separate out those areas, you find that they form a hierarchy.
A: If you then slice through the two and a half millimeters, what do you see? This little picture shows the two-and-a-half-millimeter slice. The first level of organization you're going to see is cellular layers. When I say a cellular layer, it's not one cell in a layer; it's a very thick, dense mat of cells. But we can basically divide it into four cellular layers (you can count them differently). They're labeled two, three, four, five, and six; layer one doesn't really have many cells in it.

A: The next thing, if you look a little closer in the microscope, you'd see these neurons, and the next level of organization is called the mini-column. The neurons are aligned in a very vertical stripe across the different layers. Sometimes you can actually see the mini-columns in the microscope and sometimes you can't, but they're real. They come about through the way the brain develops.
A: All the neurons in a little column across the surface have very similar response properties; they tend to represent the same thing in the world, under some conditions. If you dive down even further, you can actually look at the neuron. This is your classic pyramidal neuron, which makes up the vast majority of the neurons in the neocortex.

A: It's a fairly complex thing: they have anywhere between 5,000 and 10,000 synapses (connections) on them. There are some neurons in the brain, actually pyramidal cells, that have over 30,000 synapses on them. So thousands and thousands of synapses, and those synapses are arranged on the dendrites; they're actually not connected to the cell body itself.
A: The cell body has no excitatory synapses; they're all on those dendritic trees that we call the dendrites. If you look at the dendrite (there's a picture of one on the bottom here, I hope you can see it), that's a little section of the dendrite, and you can see the synapses along there on these little spines, about a micron apart, stretched out along it.

A: If the synapses become active at different points in time, or at different points on the dendritic tree, they have almost no effect on the cell; but when nearby synapses become active together, the dendrite responds, so you can think of these as coincidence detectors. That's an active dendrite. And then finally, learning: for many, many years people have thought of learning as the modification of the weight, or the efficacy, of a synapse. This is what you learn in any neural network theory. We now know that neurons form memory primarily by forming new synapses.
A: They grow new ones rapidly, and they disappear. If you looked at a particular neuron in your brain today and you looked at it tomorrow, you'd see that a good percentage of the synapses have changed. They literally grow the spines and retract them, and so on. It's a much more powerful form of learning than modification of synapses. And by the way, synapses are highly stochastic; they don't work half the time. Literally 50% of the time they just crap out on you.
A: So do we have any such theory? Well, yeah, we do. We understand a lot of it (we don't understand a lot of it too), but I'm going to tell you what we do have. We have an overall theory which we call hierarchical temporal memory, or just HTM. It's basically a name for neocortical theory. The term essentially says it's a hierarchy of identical regions (that's a fact), and each region is primarily a memory of time, of sequences. People will admit that, but most people don't think of it that way; that's what I'm arguing is the case. We know that stability increases as you go up the hierarchy: as you go up the hierarchy, cells tend to represent things over longer periods of time and larger portions of space, and when you go down the hierarchy, things unfold in time and space. So that's the basic idea. Now what we want to know is: what's actually going on in these layers and in a region, exactly? What are these neurons doing? How do they implement this, and so on?
A: So, digging down one more step deeper here (hopefully you're finding this interesting), let's look at a slice of this cortex. I'm showing you these four layers of cells; those little dots are neurons, and they're aligned in the mini-columns.

A: We can basically say two of these layers are a feed-forward pathway and two of them are basically a feedback pathway; that comes from the anatomy. And what we believe, what we're predicting as part of the theory, is that all of these layers are implementing a form of sequence memory. There's a common thing going on in all these neurons, which is learning sequences, and the different layers are doing sequences for different things.
A: So let me give you an example, and this is as deep as we're going to go. The inputs to the cortex, or to any region of the cortex, are not just the sensor data. We tend to think, oh, I'm getting input from the eyes, I'm getting it from the ears, and things like that, but you also get a copy of the motor commands that the rest of the brain is acting on.

A: So when the visual cortex gets a new sensory input, it also gets a copy of how the body just moved the eyes. This is important because, if you want to recognize something like a face, your eyes are moving over the face all the time. It appears stable to you, but it's not stable; the input is changing. If the input just changed and you didn't know that you had moved your eyes, it would look like the world was jumping around. But it doesn't look that way. How is that possible?
A: Well, if you can match up the motor behavior with the current sensory input, then you can predict what the next sensory input will be. Then you can say, this is what I expected: if I move from the left eye to the right, I expect to see the eye here, and so on.
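Here is a toy illustration of that idea (my own example, not from the slides): the predicted next sensory input is a function of both the current input and a copy of the motor command. The one-dimensional "scene" and feature names are made up for illustration.

```python
# Sensory-motor prediction in miniature: a simulated eye scanning a 1-D "face".
scene = ["nose", "left_eye", "right_eye", "mouth"]   # hypothetical features

def predict_next_input(current_position, motor_command):
    """Predict the next sensory input given where we are and how we are about to move."""
    new_position = (current_position + motor_command) % len(scene)
    return scene[new_position], new_position

feature, pos = predict_next_input(current_position=1, motor_command=+1)
print(feature)  # "right_eye": moving right from the left eye, we expect to see the right eye
```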
A: So there's a type of what we call inference, which is pattern recognition: sensory-motor inference, which incorporates motor behavior. But some patterns in the world, like my speech or like music, do not have a motor component to them; they're just what we call high-order sequences, meaning they're very long sequences. They can be long, they can have repeated elements, they can merge and come apart.

A: It's a very complex stream coming in here. So, let me push a button here for the laser, is this right? Yeah. So you actually have what we call high-order inference. We believe that's what's going on in these two layers: you have these two basic types of inference, pure temporal and sensory-motor temporal, one per layer, and that feeds up to the next higher region of the cortex. The layer five cells are the cells that are generating motor behavior.
A: So right now there are neurons in my head in layer five, in one part of my neocortex, and those layer five cells are coming on and off, making my speech right now. It's just hard to imagine, but it's true. These project down to motor areas that generate motor behavior. And layer six is a feedback layer; it's kind of attention.

A: So this is the basic idea here, and why you might have different layers of cells. A couple of things to notice: these basic operations exist in all cortical regions. There are no pure sensory regions; there is no "oh, this is a sensory region, that's a motor region." We now know that every region is sensory-motor; it's an integrated behavior. This is true everywhere in the neocortex, and it works across modalities.
A: This is true in low-level vision and hearing and language and so on. Now, this is definitely more complicated than your typical artificial neural network; if you know anything about ANNs, there's nothing like this in them. So this is some suggestion that this stuff is important to intelligence.

A: But it's not so complicated that we can't understand it; with diligence and application we can figure out how this stuff works. What we focused on for a number of years, and I'm very confident we figured this out, is exactly how the basic sequence memory algorithm works. You can apply it in different ways, but it's sort of the core memory and learning system, and I'd love to tell you all about it. We've documented it; there are white papers on it, and I have talks online about it.
A: I don't have time to go into all the details; it's really cool, and I suggest you look at it. I'm just going to tell you a few things about it, and then I'm going to leave you hanging a little on the actual details because we won't have time. But I do want to talk about the neuron, because that's important for this group: you might be thinking about building hardware, and you might be thinking about building neurons in hardware, and things like that.
A: So let me just tell you a little bit more about neurons, so you get a sense of what some of the issues are. This is your typical artificial neural network neuron, the kind used in deep learning or most artificial neural networks. They have relatively few synapses, maybe dozens or hundreds; they basically form an output by some sort of input-times-weight sum, and you learn by modifying the synapses. As I mentioned before, this is a real neuron.

A: They have thousands of synapses, they have active dendrites; cells recognize hundreds of unique patterns on these dendrites, and they learn by growing new synapses. Also, the inputs to the neuron are segregated into different regions, and they have different effects on the neuron. The feed-forward input to the neuron, shown in green here, is near the cell body (not on the cell body, but near it), and that is really what makes the cell fire. Then there are these other inputs: the contextual input and the feedback input.
A: So we model this in our software simulations, with something called, not surprisingly, an HTM neuron. You can see what we do: we model the dendrites as an array of coincidence detectors, and we put them in different groups: feedback, contextual, and feed-forward.
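A simplified sketch of that neuron model follows (illustrative only, not Numenta's implementation). Each dendritic segment is treated as a coincidence detector over binary synapses, and the segments are grouped by the zone they sit in; the threshold value is an assumption in the spirit of the 15 to 20 coincident synapses he mentions later in the Q&A.

```python
# An HTM-style neuron sketch: segments are coincidence detectors over binary synapses.
SEGMENT_THRESHOLD = 15  # assumed: enough coincident active synapses to trigger a segment

class Segment:
    def __init__(self, presynaptic_cells):
        self.synapses = set(presynaptic_cells)   # binary "weights": connected or not

    def is_active(self, active_inputs):
        return len(self.synapses & active_inputs) >= SEGMENT_THRESHOLD

class HTMNeuron:
    def __init__(self, feedforward, contextual, feedback):
        # three groups of dendritic segments with different effects on the cell
        self.feedforward = feedforward   # near the soma: can make the cell fire
        self.contextual = contextual     # distal: put the cell into a predictive state
        self.feedback = feedback         # apical: top-down expectation

    def fires(self, active_inputs):
        return any(s.is_active(active_inputs) for s in self.feedforward)

    def is_predicted(self, active_cells):
        return any(s.is_active(active_cells) for s in self.contextual)
```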
A: We model learning in these neurons through the growth of synapses; so we model the growth, but the weights of the synapses we make binary. What we can then do is take a layer of cells. Now I'm just talking about one layer here, and this is a picture of one of our simulations of just one layer of cells. You can see this is only four cells deep in the columns; this is very small.

A: Our actual simulations are much, much bigger than this; this is just something you could see. Each of these little cubes is one of these HTM neurons, and you're just going to have to take it from me that if you do this right (and it's not hard), this creates a very cool distributed memory of sequences. By distributed I mean there's no center of control, there's no single point of failure, and you can make a sheet of these as big as you want.
A: It's all based on local interactions, and it can learn sequences in different parts of the sheet. It has a lot of desirable properties: it learns, recognizes, and recalls high-order sequences, and again, high-order sequences can be things like melodies and long, complex patterns.

A: It's constantly predicting the next inputs. In this picture, the yellow neurons are the ones that are depolarized, the ones making predictions, and the red ones are the ones that are actually active. Interestingly, the system can predict multiple patterns at the same time, lots of them; it can predict maybe 20 or 30 things at once in the same set of neurons and not get confused. This is a property of the fact that these are all sparse activations, which I don't have time to get into today.
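For readers who want to see the shape of one time step, here is a compressed sketch of the sequence-memory update just described (illustrative; the actual algorithm is documented in Numenta's white papers and NuPIC). Cells that were predicted ("yellow") and sit in a column receiving input become active ("red"); a column with no predicted cell "bursts," activating all its cells.

```python
# One step of a temporal-memory-style sequence memory, heavily simplified.
def temporal_memory_step(active_columns, predicted_cells, cells_per_column=32):
    active_cells = set()
    for col in active_columns:
        column_cells = {(col, i) for i in range(cells_per_column)}
        winners = column_cells & predicted_cells
        active_cells |= winners if winners else column_cells   # burst on surprise
    return active_cells
```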
A: I claim that this kind of sequence memory is an essential component of intelligence, whether it's biological intelligence or machine intelligence; it's sort of the foundational memory principle. And if we're going to build hardware to support machine intelligence, it has to be consistent with these kinds of neurons and these kinds of structures. You can't ignore that stuff.

A: Okay, so where are we on our theoretical research map? We really understand the basic sequence memory for high-order sequences, what we think is going on in layers 2/3. This has been extensively tested, and we've applied it to many commercial applications; I'll tell you about them in a second. We've also done a lot of work on understanding how the same mechanism can do sensory-motor sequences.
A: We haven't done anything commercial with that yet, but we have documented it; we have a paper out on that. We're in the middle of trying to understand exactly how the cortex generates motor behavior, and we don't understand all of it; we have a lot of the mechanisms, but we're still working on that. What we decided to do was say: okay, let's test this to make sure it really works.

A: So we took this basic model and said, let's build some proof that we can build things on this. You can apply it to streaming data applications; essentially, if I've got some data coming from some place, I can run it through this and build a model of it. So that's what we did. We've taken this basic model and said: you can take a data stream, and you run it through an encoder, which is like a sensory organ.
A: It's all about streaming data, not spatial patterns. There are lots of things that generate data these days, the whole internet of things, tons and tons of things; anything that has a sensor can generate data. We've come up with a number of encoders that take numbers and categories, dates and times, even GPS coordinates and words. As long as you get them into an SDR, a sparse distributed representation, the thing works. So, this is not a commercial sales pitch here.
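To show what an encoder does, here is a minimal scalar encoder sketch in the spirit of the encoders he describes, but not NuPIC's actual API: a number in a known range becomes a sparse binary vector with a small block of active bits whose position tracks the value, so that nearby values share active bits.

```python
# A toy scalar encoder: value -> sparse binary vector (SDR-like encoding).
def encode_scalar(value, min_val=0.0, max_val=100.0, n_bits=400, n_active=21):
    span = n_bits - n_active
    fraction = (min(max(value, min_val), max_val) - min_val) / (max_val - min_val)
    start = int(round(fraction * span))
    sdr = [0] * n_bits
    for i in range(start, start + n_active):
        sdr[i] = 1
    return sdr

# Nearby values overlap in many bits, so similar inputs look similar to the memory.
a, b = encode_scalar(50.0), encode_scalar(52.0)
overlap = sum(x & y for x, y in zip(a, b))
```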
A: I just want to give you a flavor for what we've done so far. We've done it for IT monitoring, looking at metrics coming off of servers (Amazon's infrastructure, for example), and it works very, very well for detecting anomalies. We've been able to detect anomalies in human behavior, looking for people who are acting differently than they normally act, how they're using keystrokes, and things like that. We've been able to use it for prediction and anomaly detection in financial data streams, like stock volumes and things like that.

A: You can actually download this app and try it on your cell phone right now, if you have an Android; we applied it to geospatial tracking. And we've even done a lot of work, with a company I'll tell you about in a moment, doing interesting work in natural language search. So these are some of the things we've applied it to. Our business model is essentially a licensing model, and we have two licensees today. One is this company called Cortical.io, who is using HTM principles in natural language processing; that is a really cool company.
A: You should check it out. (Can you tell us what HTM is again?) Hierarchical temporal memory; that's the basic name for our theory. It's a hierarchical memory of sequences, which is everything we do; sorry about that. So when I talk about HTM here, it's using that sequence memory, it's using all of our code, the stuff I was talking about. Do check these guys out; it's very cool.

A: We just announced this week that there's a company that's going to operate under the name Grokstream. They took our demo app for IT monitoring and they're making a commercial product out of it, so they're just getting up and running. Our business model is basically: if people want it, we'll license it to them and they can do with it what they want. We also have a number of research partnerships, and this has been going on longer.
A: For about 18 months now we've been working with IBM in Almaden, under a researcher named Dr. Winfried Wilcke, and they have been working on a complete technology stack for our HTM neural models. There's a guy at DARPA, Dan Hammerstrom, who's got a program called the Cortical Processor, which is based on HTM.

A: We were able to do that, but not easily. And we have a collaboration going on with Dr. Matthew Larkum, a neuroscientist who is one of the leaders in the world on active dendrites; he's taken an interest in our theories and is putting together a biological testing protocol for them. So these are really great for us. We've also taken all of our software and everything and put it into an open source project that Eli mentioned, called NuPIC.
A: Other people have created things with it and are using it. We've put all of our software up there, as well as the application software for those demo apps, and there are lots of discussion groups and other things; we're running a contest this fall that's up there as well. This is a chart from last year, basically showing that the open source community is healthy and growing: over the past year there has been growth in a whole bunch of metrics, almost tripling in size.
A: The term neuromorphic hardware, I believe, originally came from Carver Mead, and it originally meant using the analog properties of semiconductors to model neural processes. It has come to mean a broader thing now; nowadays people use this term to mean anything that's modeling brain-like stuff, so apologies to Carver. I break this into two different groups, and I couldn't think of a good name for the first one, so I'll just call it the neuron approach.

A: Don't pay too much attention to that word. The idea here is that these people said, okay, let's try to figure out how we're going to model spiking neurons and how we're going to build modifiable synapses. There's been a lot of work on trying to use memristors to model modifiable synapses, seeing how many memory states they could get in a memristor to model the different states of a particular synapse.
A: The goal has been primarily low power and high speed (oops), and there are examples of these: there's a chip at IBM, from a different group, not the group we work with, called the TrueNorth chip, created by Dharmendra Modha; the HICANN chip I mentioned a minute ago, from the Human Brain Project in Europe; Qualcomm has announced a chip in this area; and HRL did a chip like this as part of the DARPA program. And I have a lot of problems with these; I don't find them very useful.

A: First of all, analog synapses and spiking neurons, from a theoretical point of view, are not needed; the theory tells us that. I didn't come in knowing that, but that's what I believe now. The brain has spikes because that's how nature could form a communication layer, but it's not an essential part of the theory about what's actually going on. And similarly for synapses: they're so stochastic that who cares if you can get 32 levels of precision out of them? It doesn't really matter.
A: The theory doesn't require that. And the problem is that these chips people created are incompatible with the things we now know are essential: they're incompatible with neurons with active dendrites, and they're incompatible with neurons with thousands of synapses. Most of these chips limit the number of synapses you can have on a neuron to maybe 256.

A: I've got to have ten thousand; that's what the theory tells us, so I can't use that stuff. When we ported our algorithms to the HICANN chip, we had to do an unbelievable bunch of crazy stuff to make that happen. Some of these chips can't learn at all, and almost all of them can't learn continuously, so they kind of fail in that regard too. The basic problem is that they were designed (oops) without a system-level theory.
A: They were just people saying, okay, we'll put in a bunch of neurons and synapses and we'll let someone else figure out what to do with it. Well, I'm figuring out what to do with it, and I can't use that stuff. There's a different approach, which is the systems approach. These people have said: okay, it's all about connectivity; that's the problem brains solve, and we have huge amounts of wiring in the brain. That's true. So their goals are scalability and configurability of connectivity.

A: Two examples of this: there's a project in England at the University of Manchester called SpiNNaker, which has been around for a number of years. They take thousands of ARM processors and they basically created a large connectivity scheme, a toroidal connectivity scheme, between them; very clever. Oops, sorry, I keep hitting the wrong button; everyone does. And there's the other project at IBM, the people we're working with under Wilcke. They are working on a different type of system, a wafer-scale system, and I'm going to show you one picture of that.
A: This is from a presentation that Winfried Wilcke gave earlier this year at a neuromorphic hardware conference called NICE, and he's showing the systems that IBM is working on. These are all inspired by HTM theory, but built by them, not us. What they're doing is wafer-scale integration, but stacked wafer-scale integration: they've got vias between the wafers, and they have a clever way of paring the wafers down to make it all really thin.

A: I'm not going to go into it, but he presented this slide, so I figured I could present this slide. The point here is that you've now got a hierarchical, deep memory system. The wafers are primarily memory, and interspersed amongst the memory are simple processors, simple ARM processors. Actually, modeling the neuron is easy; it's the connectivity and the routing that are the hard part. So it's all about routing; the memory is almost all about routing where the information has to go, because a neuron has to send signals to 10,000 other neurons simultaneously.
A: Okay, so let me just end with my summary. My first point is: I claim that progress is being made in reverse engineering the neocortex, and I'm very confident in the things I told you. I believe that in the future we're all going to look back and understand that truly intelligent machines are based on those principles, primarily sequence memory, hierarchy, sensory-motor integration, and online learning. Maybe we'll find ways to get there without looking at brains.

A: But I think the brain still has a lot to teach us, and hardware for machine intelligence must support the components I've been talking about. It doesn't need to support those other components, which I don't think are important, but that's where a lot of people spend their time. And that's it. Thank you very much.
C: Yeah, we have time for questions. Well, I'm kind of amazed by all this, so let me see if I can summarize what I learned. You had a model for what was happening in layers two/three, so you used what you learned from essentially analyzing the biology of layers 2/3 and used that to inspire what kind of hardware to make.

C: So the only thing I could think of is trying to make the hardware look a little bit like layers 2/3; but on the other hand, it also inspired some kind of software, and I suppose the way it was organized in layers 2/3 could be an inspiration for a software approach. That's what I got, but I have a feeling that maybe I didn't get it quite right.

C: How do you go from what you learned in layers 2/3 to software products, and why do we have to imitate it in hardware?
A: All right, a couple of different questions here, so let me dissect them. First, let me make it clear: all those layers of cells in the neocortex are very similar. They're all made of neurons with thousands of synapses, with the same properties I was talking about. What really distinguishes the different layers, layers two, three, four, five, and six, is what they're connected to.

A: The idea we're working on, which I feel fairly confident about, is that they're all doing the same basic memory operation, but it's what they're connected to that makes them different. So it's not just a model of layers 2/3; it's a model of all the layers, and by connecting them to different things you get different behaviors. Of course we wanted to test this (we had to test this), and we tested it in software.
A: We've run millions and millions of these models now, and we do it in software because it's just a way of empirically testing them: to see if they work, what the behaviors are, what the problems are. We've applied it in all different ways; it's our main tool now. We've spent a huge amount of time optimizing the software, and today a typical one of our networks is fairly small: it is 2048 columns, each of 32 neurons, so it's about 65,000 neurons.

A: Each one has maybe several thousand synapses, say 5,000 synapses. That is about one millionth the size of a human neocortex, and about one thousandth the size of a mouse neocortex. So that's what we're modeling in software today. We have optimized that tremendously, as much as we can with the engineering team we have, and we can now run a complete learning, inference, and prediction step in about 30 milliseconds. That's our maximum performance.
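A quick back-of-the-envelope calculation puts those quoted figures in perspective; the numbers below come only from the statements in this answer, and the "human scale" and "mouse scale" lines simply multiply out the one-millionth and one-thousandth comparisons he gives.

```python
# Arithmetic on the figures quoted above.
columns, cells_per_column = 2048, 32
neurons = columns * cells_per_column        # 65,536 neurons in the model
synapses = neurons * 5_000                  # roughly 3.3e8 synapses
steps_per_second = 1 / 0.030                # ~33 learn/infer/predict steps per second

human_scale = neurons * 1_000_000           # "one millionth the size of a human neocortex"
mouse_scale = neurons * 1_000               # "one thousandth the size of a mouse neocortex"
print(neurons, synapses, round(steps_per_second), human_scale, mouse_scale)
```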
A: I can do a lot with that, a lot of interesting commercial applications. But I can't build a brain with that; I can't even come close. I'm talking about one millionth the size of a human brain, and it's running at 30 milliseconds a cycle. So we cannot really get beyond the most minuscule stuff here without some sort of hardware acceleration. It's going to be necessary, but I can't wait around for it.
C: Yes, I think so. So what I understand is that by modeling the brain, you can model these things in software. Of course, that could lead to problems, runaway conditions, things that can't work, and therefore you test against those (yes), and then, having reached that point, you can apply it to specific problems like recognizing patterns.
A: That's right. In fact, the basic memory algorithm I presented here, we figured it out about five years ago, and we spent an entire year, our entire company, just testing it in software: parameterizing it, characterizing it, understanding it deeply. Then we said, okay, it really does work; it does all the things we thought it was going to do, and we understand its capabilities.

A: Then we said, let's start applying it to other things. But it's hard, it's complex; these are complex systems. There's no single equation that describes how this whole thing works; it's not like physics. There's a question over there. Yes?
D: Great presentation, thank you for that. According to my understanding, the main problem with artificial neural networks was the fact that they are not error-free, so there was always a chance of having an error, unlike CPUs, where maybe you want to have a 100% success rate. So did you solve that problem?
A: I hope so; everyone heard the question. I wouldn't agree with that assessment, that the main problem with neural networks is that they're not error-free. Let me phrase it a different way: brains generalize, right? We want brains to generalize. In fact, the input coming in on your eyes and your ears and your skin, the input coming into your brain, has never repeated, not once in real life.

A: It is always different, so we have to somehow extract the structure in that data stream and yet apply it to new inputs. So we inherently want the system to "mistake," if you want to call it that, new inputs as being the same as old inputs: we want it to say, you know, this is actually different, but I think it's the same as something I saw before.

A: Now, you might call that an error. If you didn't do that, then you'd be like those people who are savants: they see everything, they never generalize, everything is new and fresh. That's a problem. So it's not a matter of error; I don't think of these things as errors. It's more a matter of how I get them to generalize, how I get them to work in a noisy environment. It's not like there's a proper answer that should always be exactly x.
A: CPUs, yeah. Well, I would think of it as making a memory chip that works like this, right? Let me just try to say it again; a couple of things. First of all, the algorithms are highly, highly fault tolerant. For example, I could take that memory model (I didn't explain how it works, but take that one) and I can literally kill off 40% of the neurons, and it still works amazingly well. It's just incredible. So there is inherent fault tolerance built into the system, just like in brains.
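The robustness he describes follows from the sparse distributed representations mentioned earlier, and a small experiment (my own illustration, not from the talk) makes the point: after deleting 40% of the cells, a stored sparse pattern still overlaps its surviving copy far more than it overlaps an unrelated random pattern.

```python
# Why sparse distributed codes tolerate losing cells: overlap survives heavy damage.
import random

n_cells, n_active = 2048, 40
stored = set(random.sample(range(n_cells), n_active))
survivors = {c for c in range(n_cells) if random.random() > 0.40}   # kill 40% of cells

remaining = stored & survivors                        # what is left of the stored pattern
unrelated = set(random.sample(range(n_cells), n_active))
print(len(remaining), len(stored & unrelated))        # e.g. ~24 matching bits vs ~1 by chance
```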
A: Your neurons are dying all the time, they crap out all the time, they just don't work very well, so the system is inherently fault tolerant. And that's the nice thing: if you want to get scalability, whether you're trying to get the power down or the voltage down, whatever you're trying to do, you can work in a different range here and not worry about what happens. You can have 20% error and the system works fine.

A: So we have to rethink what it means to have an error-free environment. We want those properties.
B: Jeff, how important is it for you to have new hardware versus software? In other words, if you want to make progress in your company, in this field, is the limit the software, or is the limit the availability of hardware that would be able to run the software at larger scale?
A: I kind of tried to say this earlier; I'll rephrase it. Hardware takes a long time, and it's really hard.

A: I don't want to do that; you guys do that. It was really interesting this morning to hear the talk about agile development, right, because in software we do agile development at Numenta, and we can do a release every day. You guys can't do that (I don't think you can), so you work on a different time frame. So usually when people come to me and say, we're going to work on hardware for this, I say great.

A: I really need your hardware, but I can't wait around for you; you might take five years to get there. So our research is all focused on software emulation, and there's a lot we can still do in software, pure software. I can add that layer five, I can add a layer four, I can model a little section of cortex; I can model all those layers in software.
A: I can't build big systems, but there are some things I can do in software, so I'm going to stick to those things, and I still have a lot to do. However, if I had hardware support, I could do those things better, I could do more, and I could do them faster. Some of our simulations take a long time.

A: When we first implement some new feature of the algorithm, we might do it in Python, and it runs much slower, and then you're waiting hours and hours and hours for results. Oh, we hate that. But I can't get big, so it's kind of like: I really need the hardware, we are going to need the hardware. The whole industry, the whole world, is going to need hardware for building intelligent machines.
A: If you haven't figured it out already, I think this is the most important thing humans could be working on, and so we need to get there. But as a little company with limited resources, I can't wait around for it, so we have a lot to do on our own. We're happy to collaborate with people, though; we've had similar conversations with Intel, and Intel wants to test these algorithms.
E: Hi, you may have mentioned this before; I couldn't make it until now. What do you have in place of graded synaptic weights? How do you solve that?
A: Okay, we're going to focus on this part down here. What we want to do on a neuron (just bear with me and I'll come back to the slide) is this: we want a section of a dendrite to form 15 to 20 synapses to a subset of a large number of cells that might be active.

A: There might be thousands of active cells, but we only have to connect to about 15 of them to recognize that pattern; I only need to form something like 15 synapses, and mathematically that's all you need. So we want to form new connections to active cells. The way we do that is we model a synapse with something called permanence, which models the growth. Imagine I have an axon and a dendrite and they don't have a synapse yet.
A: What happens is I literally grow a spine and make a connection between them, and the permanence goes between zero and one; it's a scalar, where zero means there's nothing growing between these two and one means there's a big synapse. Somewhere along the way, maybe at 0.2 or 0.3, we actually form the synapse, meaning that up to that point there's no functional connection, and after that point there is a functional connection, but the weight of the synapse is one.

A: So the weight of the synapse is binary, either it's connected or it's not, but I have a scalar which represents the growth. Why do I do this? Well, this is what the biology looks like. But the point is: when I train, I use Hebbian learning, and when I learn I move the permanence of a synapse; I say, okay, every time I have a learning event, I want to increase that permanence.
A: At some point the synapse becomes functional; prior to that it isn't. Maybe the event was noise, so it has to repeat a certain number of times before the permanence crosses the threshold. And beyond that threshold, if the permanence goes all the way up to one, I have something that's been trained so many times that I don't want to forget it very easily; it's like a more permanent memory. So there can be a difference, and this is like in the brain:

A: you can have two synapses that are both functionally equivalent, but one will last longer than the other, because that one has been reinforced more. So this is how we model it, and we don't need a graded weight, because what matters is having a set of those synapses. Any one of them is not important; as long as I have 15 to 20 of them, that's good enough, so I could drop any one or two of them and it really doesn't matter.
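A minimal sketch of the permanence scheme he just described follows (illustrative, not NuPIC's API). Each potential synapse carries a scalar permanence in [0, 1]; the synapse only becomes functional above a threshold, and its effective weight is then binary. The threshold mirrors the "maybe 0.2 or 0.3" figure from the answer; the increment and decrement values are assumptions.

```python
# Permanence-based, Hebbian-style synapse learning with binary effective weights.
CONNECTED_THRESHOLD = 0.25          # "maybe 0.2 or 0.3" in the talk
INCREMENT, DECREMENT = 0.05, 0.02   # assumed learning-rate values

def learn(permanences, presynaptic_active):
    """Reinforce synapses whose presynaptic cell was active; decay the others."""
    for cell, p in permanences.items():
        delta = INCREMENT if cell in presynaptic_active else -DECREMENT
        permanences[cell] = min(1.0, max(0.0, p + delta))

def connected(permanences):
    """Binary weights: a synapse either counts fully or not at all."""
    return {cell for cell, p in permanences.items() if p >= CONNECTED_THRESHOLD}
```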
E: Yeah, could you provide any insight or specific examples on how you go from the biology, or the experiments, to this theory? How are you able to make that connection between what we're seeing and the theory? How do you develop your theory from what you see biologically?
A: Okay, here's the quick answer: top down and bottom up. Is that quick enough? Look, we have theoretical constraints that are sort of deduced: I need to be able to have stability through movement of my senses; I need to be able to play back sequences and make predictions. These are theoretical constraints; I say the brain must do these things. Then we study a great deal of anatomy and physiology.

A: I have a pretty deep knowledge of anatomy and physiology of cortical stuff, and the anatomy and physiology tell you: oh, by the way, synapses are stochastic; by the way, they form and disappear; by the way, there's a whole field of active dendrites now, and that's what dendrites do. That's the bottom-up part. So we say: here are the theoretical things we have to do, here's the substrate we have to make it work on, and you look for how they connect, and initially it's very, very hard.
A: At first you feel like it's impossible, but when things actually match up, so many constraints are satisfied simultaneously that you just have tremendous confidence that the answer is correct. It is not easy; it's not like you can just pick something out of the biology and make this stuff work. It's really hard to get this to work.
C: Right, but that's also something that I think we can do in electronics, and particularly in optical electronics; we can provide an unbelievable density of connections.
A: Ah, someone who's interested; I'm happy to hear that. There's a guy at Sandia Labs, Murat Okandan, who is a fan of optical connectivity, and he talks about it a great deal, and that's fantastic. But other people disagree. I don't know; I don't know what's going to win. Okay.
C: With that, let's thank Jeff Hawkins.