From YouTube: Jeff Hawkins - Numenta - Progress in Cortical Theory, Implications for Machine Intelligence
Description
New Frontiers in Cognitive, Evolutionary, and Computational Models of the Mind: Part 2.
I'm a theoretical neuroscientist, and we try to decipher the theory of how the brain works, specifically the neocortex. So for the next 40 minutes I'm going to give you an overview of what we've learned over the last so many years. I've been told it's a mixed audience, so I'm going to try to keep it at a level that works for anyone who is interested but not a neuroscientist. If any of it is confusing, or you get lost, I'll be around all day.
I'll answer any other questions then, and I'm happy to be here to help as you try to sort out how this connects to the neuroscience. OK.
So these are the two things I do, basically the guiding lights I've had for over 30 years in my career. I want to understand the operating principles of the neocortex. That's the number one thing. I fell in love with brains back then and decided I just had to work on this. For a long time I felt alone in that; that's not true anymore.
I also decided that if you can understand the principles of the neocortex, you could then build machines that work on those principles, and that would be very, very good. Kristen asked me if I would speak a bit about that as well, and I will. Just in case you don't know: the neocortex makes up the bulk of your brain. It's the big wrinkled sheet that covers all the rest of it, wrapped around the older structures.
The neocortex is the locus of all high-level intelligence. Language: my neocortex is speaking, driving the musculature of my lips and my voice box, and your neocortex is listening. Everything you could tell me about the world is stored in your neocortex. It's the center of high-level thought and planning.
All mammals have a neocortex; non-mammals don't. Although we understand a lot from studying the other large parts of the brain, my specific focus is on the neocortex and some structures related to it.
There is very, very detailed neuroscience: anatomy, physiology, everything we know about the brain and the neocortex, which is a tremendous amount. In my mind, that is a set of constraints on any theory of how this works.
You can't ignore those constraints. You can say, "I'm going to set this one aside, and here's why," but you can't just pick out a few little facts about the brain and claim that something works like that.
The brain is a set of constraints, and our goal is to understand them, so I spend much more of my time in the neuroscience world than I do in the engineering world. We need to understand those constraints; from them we can build a set of derived principles, and then we can test them. We can test them both against the biology, with experimentation, and also just by searching the literature, because there's so much under-assimilated knowledge about the brain. So we have these theories, and then we test them.
We've made a lot of progress on this, a lot, and I'm going to tell you about that now. The way I've divided the talk, it's mostly about the theory, the principles we've learned about the neocortex; a little bit about what you would gain once you build this stuff; and then a little bit about the commercialization, because I believe it's important to build a financial base for this work to keep people interested in it.
So, I'll show that you can turn this into valuable products, and then I'll speculate about the future of intelligent machines. Let's just go into the theory. What does the neocortex do?
If anyone tells you the brain is a computer, they're wrong. It's not a computer; it's a memory system. When you were born, your neocortex didn't know anything about the world, and you have to learn about the world through your experiences in life. Now, what are the inputs to this memory system? You might think:
"Oh yeah, I've got eyes, ears, and skin; I've got five senses," or something like that. That's not the right way to think about it. The eye itself is an array of a million sensors; there are a million fibers in the optic nerve. Your cochlea is an array of 30,000 sensors, and your somatic senses,
your body sensors, add millions more. You've got millions of sensory streams coming into the brain, and they're coming in at high velocity, changing rapidly, on the order of a few milliseconds, 10 milliseconds, 20 milliseconds, constantly changing. It's a firehose into the brain, and the brain has to build a model of the world from it. It has to learn everything that's out in the world: about computers and rooms and chairs and coffee cups and cars, and whatever. Everything you know about the world had to be learned through this process. Now, the brain does a few things with this model.
Why does it build this model? One of the things the model does, and this was a real insight to me, is that it's constantly making predictions. You are making predictions about everything you see and feel and touch and hear, all the time. You're not even aware of most of these predictions, but it's true, and that was an opening into understanding what the mechanisms of the neocortex are and how it makes predictions. Of course, when you make predictions, they can be checked: when things come in wrong, you detect anomalies. And, of course, the brain takes actions.
Not always; you can sit there doing almost nothing and still be very smart. But occasionally it takes actions; I'm speaking right now. Those are the basic things the brain does. Now there's another part of this, which is that the neocortex also generates your behavior, and we'll talk about that in a moment: everywhere, the neocortex is involved in generating behavior.
There is no purely sensory region and no purely motor region, as we used to believe. And most of the changes that occur in your sensory stream are caused by your own behavior, your own movement. You're moving your eyes and your body constantly, so most of our experience is the consequence of our own actions. So there's this huge loop between the brain exploring the world, interacting with the world, and the input coming back, and that's part of the problem as well.
So here's a phrase for what the neocortex does: the neocortex learns a sensorimotor model of the world. It learns all the sensorimotor contingencies: what is out there, what happens when I turn left, what happens when I go through the doorway and turn right, what I expect to see.
These are all expectations that come from the model I have of this building: how did I learn that? OK, now I'm going to go through my top six principles of neocortical function, and I'll stick by them. There are others; these are just the top discoveries. Number one: it's an online learning system. Everyone agrees the brain is a learning system, but "online" is the term that means it's learning continuously.
The neocortex looks like a sheet of cells, this big sheet, and it is, but it's actually connected in a hierarchical fashion. This is a cartoon drawing; in the human brain there are maybe about a hundred different regions structured in a hierarchical arrangement, where the connections are very well established. Information comes into this hierarchy, flows up the hierarchy, and flows back down it; really it's a two-dimensional hierarchy. We also know that there is a common algorithm, as Chris mentioned in the introduction.
There's a common algorithm: everywhere you look in the neocortex, it looks the same, and surprisingly it doesn't matter whether it's vision, hearing, or touch. All the regions are doing primarily the same thing. When you process vision, when you process sound, when you process your somatosensory input, it's all the same. This was discovered not by me but by a man named Vernon Mountcastle, back in 1978.
So we have this hierarchy of similar memory regions, and the first thing we need in order to understand how the hierarchy works is to understand what each of these memory regions is doing: what's coming in and going out. Well, we do know something about this. We know that the primary memory function in all these regions is a memory of sequences, a memory of transitions of patterns over time. Now, this may not be obvious to everyone; it wasn't to me when I first started on this.
But let me just talk about inference, basic pattern recognition. You're inferring from my speech; you're doing recognition and inference right now, and when you look around the room, you're doing inference. Think about it: if you're trying to understand my speech, the words are very complex patterns in sequence. The order matters, the timing matters; you have to have a memory in your brain of what the words sound like, which words are familiar, and what things follow what. Inference is all about temporal patterns. When you touch something, it's a pattern in time.
When you hear something, and when you move your eyes around, these are patterns in time. So inference is basically almost all temporal; it's temporal sequence memory. And when I generate motor output, such as my speech, what am I doing? I'm playing back complex temporal patterns. I've memorized these, and I can repeat them.
The fourth item here: we've learned how the brain represents information, something called sparse distributed representations. Everywhere you look in the brain you find many, many cells, but few of them are active; most are inactive at any one time. It's sparse, and it's distributed, meaning that there's always a bunch of cells active, but percentage-wise it's very low.
Fifth, I mentioned a minute ago that all regions in the neocortex are both sensory and motor. Everywhere you look, even in the primary visual cortex, there are cells that project to areas of the brain that generate behavior, in that case eye movements. There are no purely sensory areas and no purely motor areas in the cortex; we used to think there were. So everywhere you go, there's sensorimotor integration going on.
We have to understand that if we want to understand how the cortex works. And finally, there are attentional mechanisms: how you can attend to part of your auditory stream or part of your visual stream, zooming in and zooming out. That goes on throughout the neocortex, and it's really a form of motor behavior. So these are my six principles. We understand them all to some extent; the ones we understand best, the ones I have the best knowledge about, are sparse distributed representations, sequence memory, and online learning.
Those are the ones I'll talk about in depth today. There's still a lot to be learned, but we are beginning to get the big picture and understand all the components. So without further ado, I'm going to jump right in. We're going to talk about sparse distributed representations, and we're going to start by contrasting them with computers. Computers use something called dense representations. Take a byte or a word, eight bits, 32 bits, something like that; we use all combinations of ones and zeros.
Here, the letter 'm' is represented by 0 1 1 0 1 1 0 1. Well, I can ask you: what is the meaning of those bits? There is none. If I ask what the third bit means, the answer is nothing; it's an arbitrary assignment someone made a long time ago. We just agree that this pattern represents the letter 'm', and the computer knows nothing about that. All of the meaning is in the software, in the programmer's head; those meanings are assigned by the programmer.
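As an aside, the bit pattern quoted above is just the ASCII convention for the lowercase letter 'm', which you can verify in one line of Python:

```python
# The dense, arbitrary representation of the letter 'm': its ASCII code as bits.
# No single bit carries any meaning on its own; the assignment is pure convention.
bits = format(ord('m'), '08b')
print(bits)  # '01101101'
```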
When we look into the brain, we find representations that are very, very different: they're sparse, and they're distributed. First of all, when I talk about bits here, I'm really talking about cells; a cell is a bit, the same thing. A one is an active cell and a zero is an inactive cell. Just give me that language for a moment.
Typically, in a sparse distributed representation, you have thousands of bits, thousands of cells, and at any point in time mostly zeros and a few ones; that's why it's sparse. In the example I'm going to use in this talk there are 2,000 bits, with 2% active, so there are going to be 40 ones and 1,960 zeros. Now, in this case, and in all cases in the brain, the bits have meaning. They're not arbitrary, and they don't change; the neurons actually learn to have some sort of meaning about the world.
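To make the talk's example numbers concrete, here is a minimal sketch (an illustration, not Numenta's code) of such a representation in Python, storing an SDR as the set of its active-bit positions:

```python
import random

N = 2000   # total bits (cells)
W = 40     # active bits, i.e. 2% sparsity

def random_sdr():
    """One sparse distributed representation: the set of positions that are 1."""
    return set(random.sample(range(N), W))

sdr = random_sdr()
print(len(sdr))      # 40 active bits
print(len(sdr) / N)  # 0.02 sparsity
```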
The meanings can slowly change as you learn, but they're relatively stable, so each bit has a semantic meaning. What does that mean? Let me give you an example. The brain wouldn't do it quite this way, but this is an illustrative example of how you might think about it.
Suppose I wanted to represent a letter with a sparse distributed representation. I would come up with two thousand attributes of letters. I could ask: is it a vowel or a consonant?
What does it sound like: an 'ee', an 'ah', an 'eye'? Does it have a fricative sound, a hard sound, a soft sound? I might ask whether the glyph has an ascender or a descender, whether it's a closed shape or an open shape, where it sits in the alphabet. Then, when I want to represent a letter, I pick the top 40 attributes that describe that letter, and that's the representation I would use. Now, in the brain these attributes have to be learned; we don't hand-pick them that way. But that gives you an illustration of what it means to form a sparse distributed representation.
Sparse distributed representations have some remarkable properties, and there's a small research field right now getting into this. I'm going to tell you about a few of the properties, but not most of them. Here's the first property: if I take two of these sparse distributed representations and just compare them bit for bit, and they have a one bit in the same location, that means they share a semantic attribute; they are similar semantically.
Now, that's not going to happen randomly. Even if only three or four bits are common between the two, that is highly statistically significant and very unlikely to have happened by chance. So we can immediately compare two patterns and say how different or similar they are semantically: if they overlap a lot, if the Hamming distance between them is small, we say they're semantically similar. We don't need
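As a toy illustration in Python (again, not Numenta's implementation), comparing two SDRs semantically is just counting shared one-bits:

```python
import random

N, W = 2000, 40

def overlap(a, b):
    """Number of shared one-bits; each shared bit is a shared semantic attribute."""
    return len(a & b)

a = set(random.sample(range(N), W))
b = set(random.sample(range(N), W))

print(overlap(a, a))  # 40: identical patterns overlap completely
print(overlap(a, b))  # usually 0-3: unrelated random SDRs share almost nothing
# The expected overlap of two random 40-of-2000 SDRs is 40 * 40 / 2000 = 0.8 bits,
# so even 3 or 4 shared bits is strong evidence of shared meaning.
```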
We don't need a separate data structure to do this; the comparison is built into the representation itself. The second property: suppose I ask you to store one of these patterns and then recall it later, to see whether the same pattern occurs again in the future. We don't need to save all 2,000 bits; we only need to save the locations of the ones.
So say the pattern has 40 one bits, and I store the locations of those 40 ones. Now I see a new pattern, and I see ones in those same locations: I know I've got the same pattern, because there are only 40 ones, and that's it. That's good enough; that works. But now suppose I tell you that you can't store the locations of all of them; you can only store a few, and I'm going to be mean about it:
OK, you can only save the locations of 10 of those 40 ones. That's what we have to work with. Later a pattern comes in, and you check: are those 10 bits all ones? Can I say it's the same pattern or not? Strictly, I can't, because I might match the 10 and the other 30 could be wrong. But it's very easy to show
that that is extremely, astronomically, unlikely to happen, and if you do make an error, it will be on a pattern that is semantically very, very similar to the one you stored. So if I say, "yes, this is the same pattern I saw before," and I'm wrong, it's likely to be something in the real world similar to the thing I saw before, and most of the time that's good enough; that's a basic form of generalization. The last property, the main one I'll get to today, is union membership.
Suppose I take a bunch of these patterns and form a union of them: I logically OR them together. Say I take 10 patterns and OR them together; now, instead of 2% of the bits active, I have roughly 20% active. I can't go backwards from that; I can't tell you what the original 10 patterns were.
But I can do the following, which is almost as good. I can take a new sparse distributed representation and ask: is it one of the members of this union? How would I do that? I just look to see whether the new pattern has ones everywhere the union has ones, and if it does, I say it's one of the original members. That answer is actually very, very likely to be correct. It's a bit surprising; it all comes down to the sparsity. But I can tell you:
yes, this is one of the original ten. And if by some chance it makes a mistake, which is very unlikely, the false match would still be very semantically similar to the things that were in the original union. Now, what are we going to do with this? The brain uses this property when it makes predictions. When you make predictions, you don't actually have only a single prediction.
You don't always predict exactly one thing; you can make multiple predictions at the same time, but you want to know whether what did happen was expected or not. So when the brain makes predictions, it actually forms a union of predictions; it predicts multiple things at the same time, and it can tell whether what actually occurred was one of those or not. This is going to be the basis of our sequence memory. OK, so that's enough;
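The union property can be sketched in toy Python (illustrative only; real SDR unions in the brain would be over cell activity, not Python sets):

```python
import random

N, W = 2000, 40
patterns = [set(random.sample(range(N), W)) for _ in range(10)]

union = set().union(*patterns)   # logical OR of the ten SDRs
print(len(union) / N)            # roughly 0.2: about 20% of bits are on

def is_member(sdr):
    """Union membership test: every one-bit of the candidate is on in the union."""
    return sdr <= union

print(all(is_member(p) for p in patterns))  # True: stored patterns always pass
novel = set(random.sample(range(N), W))
print(is_member(novel))                     # almost certainly False for a random SDR
```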
that's sparse distributed representations. Now I want to talk about how we learn sequences, how the brain, your neocortex, does this. A little bit more neuroscience here. The neocortex is that sheet on the left there. If you zoom in on a section of it, about two and a half millimeters thick, you see these layers of cells in the second image here. This layering
of cells is everywhere, no matter what part of the neocortex you're looking at. There are a few exceptions to that, but pretty much it looks the same everywhere: a mouse's cortex looks like a human's at this level; humans just have much more area. So we find these layers of cells, and what I'm going to argue is
that each layer is actually doing a type of sequence memory, and I'm going to tell you how it does it. Now zoom in on just one of those layers, in the middle picture. There are two kinds of organizational principles there. Along the green arrow, those circles are cells, cells arranged vertically, and these little minicolumns have the same feedforward response properties: they respond to the same input patterns from the world. But the
cells themselves number about 10,000 per cubic millimeter. Here is a typical cortical neuron: you have the cell body, and then there's a big dendritic tree, a branching tree structure, and all the synaptic inputs to the cell arrive on the dendrites. A typical cortical neuron has three to ten thousand connections, synapses, arranged all along that dendritic tree. Now, many people thought that
the cell worked just like a sum, and the vast majority of neural network models used today are still like that: you have a cell body, all the inputs get added up, and you apply some sort of nonlinear output function. That's not true. We now know quite clearly that the dendrites themselves have active properties; they are active processing elements. In the last picture we show a real dendritic segment, about 40 microns
long. In that picture you can see, along the segment, a whole bunch of little synaptic spines coming off of it. We now know that on a section of dendrite, if a bunch of synapses receive active inputs at the same time, within a few milliseconds of each other, and in close proximity, basically within about a 40-micron distance, that generates a nonlinear event, a dendritic spike, which makes those inputs far more effective.
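A minimal way to picture a dendritic segment as a coincidence detector (a cartoon of the idea; the threshold value here is an assumption, not a number from the talk):

```python
# A dendritic segment fires a "dendritic spike" only when enough of its synapses
# see active inputs at (nearly) the same moment.
SEGMENT_THRESHOLD = 15                 # assumed: ~10-20 co-active clustered synapses

def segment_active(segment_synapses, active_inputs):
    """True when the count of co-active synapses on one segment crosses threshold."""
    return len(segment_synapses & active_inputs) >= SEGMENT_THRESHOLD

segment = set(range(100, 120))                        # 20 synapses on one segment
print(segment_active(segment, set(range(100, 116))))  # 16 co-active -> True
print(segment_active(segment, set(range(100, 105))))  # 5 co-active -> False
```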
So I can have thousands of inputs active on the cell all over the dendritic tree, and if they're dispersed all over the place, they have almost no effect. However, if I have ten or fifteen or twenty of them in a line, in a very close section, and they all become active at the same time, it has a very large effect. We think of this as a coincidence detector, a pattern detector: if a number of these inputs are active at the same time, that's something the cell wants to know about. So our question
really is: how does an array of cells built from those kinds of neurons learn sequences? This picture shows, from our simulations, the artificial neuron on the right. You can see the colored dots, which are like the synapses, arranged along these dendritic segments; the blue ones are the distal segments, our coincidence detectors. I'm omitting almost all the details here. We take thousands of these neurons and arrange them in layers and columns, and the question is: how does that structure learn sequences, learn the temporal characteristics of the world?
We think we know the answer to that question to a large degree. I'm going to start by showing this picture. This is again a picture of a sparse distributed representation, but drawn as little cells, little cubes: the red ones are the active ones, the gray ones are the inactive ones. This is that 2%-active representation I described. And this is what it would look like if you could look at the brain literally and see which cells are active.
You would see very sparse activation like this everywhere you look. At one moment you have a pattern like this, and over time these things are changing very rapidly; that's what's going on in your brain right now as I'm speaking. These patterns are changing on the order of every 5 milliseconds, distributed patterns throughout your brain, not just in one place but all over. And the question is: how do we learn sequences like this, how do we learn these transitions? The answer is: we don't do anything centralized. The brain
does it locally, neuron by neuron. So what happens when a neuron becomes active, the one in the middle there? It looks around for some other cells nearby and says: look, if you were active just a little while ago, then you're predictive of my activity, and if I see you again, I'm going to predict that I'll become active. So it connects to some subset of the cells that were active nearby: if I see this pattern, then I will be next.
It gets ready to fire; it enters a predictive state. If you do this, and you show lots of patterns, all the cells get activated over some period of time, and they all learn the predictable transitions, and you end up with something like this. In this picture, the red cells are the current input and the yellow cells are the ones in the predictive state, saying "I think I'm next." Why are there more yellow cells than red cells?
Imagine I trained it on the pattern A followed by B, A followed by C, and A followed by D. If I show A, it predicts B and C and D, so you've got a multiple prediction going on here. This is a first-order transition memory, meaning it can learn the step from one state to the next, and this is the beginning of sequence memory. It's actually very powerful, but we need something more. We need a high-order memory. What's the difference with high-order memory? What do you need it for?
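A first-order transition memory can be caricatured in a few lines of Python (a toy sketch, nothing like the neuron-level mechanism just described): each element simply learns the set of elements that have followed it, and prediction is the union of those successors.

```python
from collections import defaultdict

transitions = defaultdict(set)   # element -> set of elements seen to follow it

def learn(sequence):
    """Record every one-step transition in the sequence."""
    for prev, nxt in zip(sequence, sequence[1:]):
        transitions[prev].add(nxt)

def predict(element):
    """Union of everything that has ever followed this element."""
    return transitions[element]

learn("AB"); learn("AC"); learn("AD")
print(predict("A"))   # {'B', 'C', 'D'}: multiple simultaneous predictions
```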
It means I want to be able to use patterns over a longer period of time; I have to be able to handle repeated elements. So imagine I train on one sequence, A-B-C-D, and another one, X-B-C-Y. If I show it A-B-C, I need it to predict D, and if I show it X-B-C, I need it to predict Y. For that I need a higher-order memory, in this case something like a fourth-order memory.
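One toy way to get that behavior (illustrative only; the cortical mechanism described next uses many context-specific cell states per input rather than this explicit bookkeeping) is to make the internal state encode the context an element arrived in:

```python
from collections import defaultdict

transitions = defaultdict(set)   # internal state -> possible next internal states

def learn(sequence):
    """Each element is stored as (context, element), so 'B' after 'A' differs
    from 'B' after 'X'; transitions link these context-specific states."""
    state = None
    for elem in sequence:
        new_state = (state, elem)
        if state is not None:
            transitions[state].add(new_state)
        state = new_state

def predict(prefix):
    """Replay the prefix to find the current state, then read its successors."""
    state = None
    for elem in prefix:
        state = (state, elem)
    return {s[1] for s in transitions.get(state, set())}

learn("ABCD"); learn("XBCY")
print(predict("ABC"))   # {'D'}
print(predict("XBC"))   # {'Y'}
```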
The trick is to have different representations of the same input: the same feedforward input coming in, but represented differently depending on the context. In our example there are actually 10 to the 40th ways to represent the same input in different contexts. So if I say the sentence, "there are too many twos to choose to count," I just said the same sound in four different contexts and you didn't get confused about which was which.
One was the number, one was "too many," and so on. In your brain, at some point you have the same representation for all of them, because the same sound is entering your ears, but someplace else you have different representations, so you don't get them confused. This mechanism is basically what's doing that. OK, you'll just have to trust me on the details here.
But if you combine all of this, if you take a layer of cells in columns with these neuron properties and these learning properties, you end up with a very high-order sequence memory.
It does multiple simultaneous predictions. It's a high-capacity memory: even just the two thousand columns I've been talking about can remember millions and millions of transitions. We build these all the time, so this is proven. It's distributed; think about it that way. There's no single point of failure: I can drop out columns, I can drop out cells, the synapses can fail, and it continues working up to a fairly high level of fault. And it generalizes. I didn't tell you how the patterns are learned,
but we do understand that. Because of the sparse distributed representations, if I've learned how one object behaves and I now see another object that shares some semantic similarity with the first, I can apply what I learned about the first object to the second. I say: yes, this is a little bit different, but I recognize the pattern, and now I'm going to make a prediction. It generalizes semantically to the new pattern. This is the kind of thing that kills classic AI; it can't do that. Here you get it automatically. OK.
I should identify the differences between this and traditional neural networks. The neuron model is the first property, and it's critical. The second thing is that almost none of them deal with time; they don't think about time, they're all spatial classifiers. And the vast majority have no concept of sparse distributed representations; they don't deal with sparsity or with understanding how to build representations this way. So, is the brain a neural network? Sure. Is this a neural network? Sure. Is it different
from most neural networks? Yes, it really is. And you might say: well, is that a big difference? I think it is. Try to find, in the neural network literature, networks that learn sequences. There is some work, but it's very, very impoverished, because they're just not very good at it, and they don't have any of these properties.
So I'm not an anti-neural-network person; I admire a lot of that work. But the vast majority of neural networks don't incorporate these principles: sparse distributed representations, time, the column structure, high-order memory. OK, let me say what it takes to implement one of these things today. We do this in software today; this is a typical example.
In what we do at my company, we have 20 to 30 cells per column, about 60,000 neurons total, about five thousand synapses per neuron, and up to 128 dendritic segments per neuron. These are all very realistic numbers for the biology. Again, if you go back to traditional neural networks, almost none of them will ever give you thousands of synapses on a neuron; they usually have far fewer. In total that's about 300 million synapses, and we run it
today, and you can run it too. We've optimized the code a fair bit; it runs on a single-core CPU in about a hundred megabytes of memory, and in about 10 milliseconds we can do one inference and learning step, which is pretty good. This is practical, and we are running it now. Unfortunately, this is still only about, very roughly, one one-thousandth of the size of a mouse cortex, and we'd like to be millions of times bigger.
So although this is a very useful thing we've built, and it turns out to be commercially valuable, we've got a long way to go, and there's no way we're going to scale this purely in software; it's not going to happen. There's going to be a future hardware business of building brains. But the good thing is, we can build things that are practical today.
OK, I'm going to speak a little bit about commercialization. I'm not trying to sell you a product; the idea is to show what it looks like when someone tries to do something with this today, that it's not just hypothetical or theoretical. And I think it is important that we start an industry around this, because that's going to drive a lot of research. So what we've done is apply this to the space of big data. Big data is a big, big topic today.
People are collecting data from all over the world, from all kinds of sensors; it gets stored, and there are tools for visualizing it and for making predictive models. The problem is that the models grow old very quickly: some engineer or computer scientist builds a model of, say, server behavior, and two weeks later it's obsolete, and it takes lots of people to keep it current. So this is really a broken system today; it's not working very well. In the future,
what's going to happen is that it's going to work like brains. You take data from the Internet of Things, millions and millions, trillions of things, you stream it into models, and they learn in an online fashion, make predictions, and take actions. The key criteria here: you need continuous or online learning, because the world being modeled is constantly changing, and you need temporal as well as spatial models. Very, very few people do anything in the temporal domain. We've built a product that does this; it's called Grok.
It's an engine for acting on data streams. On the left side, we take in structured and semi-structured data records streaming off of sensors and so on. One or more fields are run through encoders, which you can think of as being just like sensory organs: they take some physical quantity in the world and turn it into a sparse distributed representation,
just like your retina does, just like your cochlea does. Then we feed that into this cortical model, with other parts I'm not showing here, built exactly the way I just showed you, with the columns and cells per column I described. It learns the temporal and spatial characteristics of the data, and then it makes predictions, detects anomalies, and we can take actions on that. So you basically say: here's some data, here's my problem.
A
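The encoder idea can be made concrete with a simple scalar encoder sketch: map a number onto a sparse binary array by turning on a small contiguous block of bits whose position tracks the value, so that nearby values share active bits, much as nearby stimuli excite overlapping receptors in the retina. This is a toy in the spirit of Numenta's scalar encoder, with assumed parameters, not its exact implementation.

```python
def scalar_encode(value, min_val=0.0, max_val=100.0, n_bits=100, w=11):
    """Encode a scalar as a sparse binary vector: w active bits out of n_bits,
    positioned so that similar values produce overlapping encodings."""
    value = max(min_val, min(max_val, value))        # clip to the known range
    span = n_bits - w                                # available start positions
    start = int(round((value - min_val) / (max_val - min_val) * span))
    bits = [0] * n_bits
    for i in range(start, start + w):
        bits[i] = 1
    return bits

def overlap(a, b):
    return sum(x & y for x, y in zip(a, b))

a, b, c = scalar_encode(20), scalar_encode(22), scalar_encode(80)
print(overlap(a, b), overlap(a, c))  # nearby values overlap; distant ones don't
```

The overlap property is what makes the downstream model's job possible: semantic similarity in the input shows up as shared bits in the representation.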
Say you're in the field of energy demand response. You may not be aware that large consumers of electricity, sometimes university campuses (maybe MSU, I don't know), don't pay a fixed rate. They have a constant negotiation going on with the electrical provider, the utility, which says: an hour from now, or 15 minutes from now, if you consume X we'll give you this price; if you drop it to Y, we'll give you that price. This is called demand response. So here is a typical energy profile, the electrical energy usage of a factory.
It's over seven days. You can see five days of what looks like declining production, and then a weekend where not much is going on. It's not a particularly difficult pattern, but you don't know in advance that this is what it looks like. And what the customer wants is this:
It's midnight, and we want to predict the energy consumption of our factory for every hour of the next 24 hours. So we throw these kinds of data streams into Grok, and here it is, actually predicting. You can't see it well, but in this picture it's predicting 24 hours in advance, tracking the evolving signal very well, and it keeps adapting very quickly as conditions change.
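To make this forecasting setup concrete without pretending to reproduce Grok's algorithm, here is a deliberately simple stand-in: one continuously updated estimate per hour of day, read out as a 24-hour-ahead forecast. The interface is the point; a real temporal model learns far richer structure, but it is driven the same way: stream hourly records in, ask for the next 24 hours at any time.

```python
class HourOfDayForecaster:
    """Toy online forecaster: one exponentially weighted mean per hour of day.
    A stand-in for a real temporal model; it illustrates continuous learning
    plus multi-step-ahead prediction, nothing more."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.level = [None] * 24

    def learn(self, hour, kw):
        prev = self.level[hour]
        self.level[hour] = kw if prev is None else (1 - self.alpha) * prev + self.alpha * kw

    def forecast_next_24h(self, current_hour):
        # predicted consumption for each of the next 24 hours
        return [self.level[(current_hour + h) % 24] for h in range(1, 25)]

# Stream two weeks of synthetic factory data: high load 08:00-18:00, low otherwise.
model = HourOfDayForecaster()
for day in range(14):
    for hour in range(24):
        load = 900.0 if 8 <= hour < 18 else 150.0
        model.learn(hour, load)

print(model.forecast_next_24h(0)[:12])  # midnight: forecast for 01:00 through 12:00
```

The synthetic load values and the per-hour averaging scheme are assumptions for illustration only.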
Here's another example; it's a little less obvious. There's a company that provides an online service for transcoding videos, and they have to give a very fast response time: someone submits a video, it has to respond immediately. So they have to keep servers online all the time, waiting around doing nothing, and that's expensive. If they could predict the load better, they could keep fewer servers online at any moment and save money. This chart shows their actual and predicted peak demand curves; it's not an obvious pattern, but Grok tracks it.
Here are all the active columns in the model at this point in time, and the green dots are bits that were predicted and actually occurred. At this moment Grok predicted one thing and it occurred; those are the 40 green dots there. And here's another example, a moment where Grok was predicting that multiple things might happen; those are the blue circles.
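The green-dots picture, which predicted bits actually became active, leads directly to an anomaly measure. In NuPIC, Numenta's open-source codebase, the raw anomaly score is roughly the fraction of currently active bits that were not predicted, and because the score decomposes bit by bit, you can ask which parts of the input were anomalous. A minimal sketch:

```python
def anomaly_score(active, predicted):
    """Fraction of active bits that were NOT predicted.
    0.0 means fully predicted (all green dots); 1.0 means complete surprise."""
    active, predicted = set(active), set(predicted)
    if not active:
        return 0.0
    return len(active - predicted) / len(active)

predicted = {3, 17, 42, 77, 90}      # bits the model expected
active    = {3, 17, 42, 77, 64}      # bits that actually turned on
print(anomaly_score(active, predicted))   # 1 of 5 active bits was unexpected
print(sorted(active - predicted))         # the unpredicted bits themselves
```

The second print is the point of the slide: the unpredicted bits identify *which* aspect of the input was anomalous, not just how anomalous it was overall.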
It's not like the anomaly is just a single number you throw up on a screen. You can dive into this and ask which aspects of the input were anomalous and which weren't. The brain takes advantage of this kind of information, and there are insights in it. Okay, I'm just going to end with a few thoughts on the future of intelligent machines.
There's the Matrix, and this is, I think, Arnold Schwarzenegger in Terminator. These are the scenarios where robots and computers run amok and maybe end human life, and since we're going to build this technology, people talk about it. That's one view; I just don't believe it.
Then there's the view that we're going to have friendly robot butlers taking care of us all the time, or that we're going to be playing games with them, or maybe we'll find new ways of entertaining ourselves by putting something on our heads. And then there's a sort of in-between view, which says, well, it's not so clear.
A few things are definitely going to happen. One is that we can make artificial brains that are faster and bigger than anything nature has built. Faster: neurons are really slow and semiconductors are really fast; there's on the order of a million times difference in performance. We can build brains that are much faster than a human's. Why would you want that? Not for something like C-3PO, dealing with the physical world the way you and I do.
But in a world where artificial intelligences are dealing with online data, or machine-generated data, that speed could be very helpful, so we're going to do that. We can also make them bigger: there's no reason we can't make bigger memory systems. The human brain got big essentially by replicating the same circuitry; a human cortex is just like a mouse cortex, only a lot bigger. So we can do that too. That still doesn't mean these things are going to be usefully super-intelligent.
They still have to be trained. They have to work in a world and learn about that world. Intelligence is not just a meter you can push higher by making brains ever bigger; you still have to have a world in which they can discover structure. We can also create different types of senses. We tend to think in terms of human senses, like vision, so we imagine a computer that sees the way we do, but you don't have to be limited to that. I already showed you a moment ago that we can treat structured data as a sense.
These machines may not look at all like a human. Okay, last one on the different things that could happen. I mentioned that the cortex is a hierarchy of memory regions connected together; in your brain that's the white matter connecting them. However, there's no reason they all have to be co-located, so you could actually build an intelligent machine where those different regions are distributed in different places and communicate over long distances. I don't know what that's going to lead to, but I think it's very cool; we'll find out.
Will there be something like a human you can have a conversation with? That could be very tricky; there's a lot more to it than just language. To have a body like a human's, and to know the other things that humans know: I don't even know if we'd want that, but it might happen. I don't think we're all going to be plugging into some computer every night.
You know, some Matrix kind of thing. Now, I know someone speaking today is talking about brain-machine interfaces, which will be extremely useful for people recovering lost functionality. But this idea that we're all going to be walking around half human, half machine? I can't be certain, but I don't believe it. Here are the things I think are actually not going to happen. You're not going to be uploading your brain to a computer.
Sorry if you were hoping for that; if you read a lot of popular futurism you might want it, but it's not going to happen, and there are reasons why. Basically, the memory in your brain, the memory itself, is so intimately tied to the specific wiring, and we're talking about trillions and trillions of bits of memory in your brain.
It's really tied to the physicalness of the biology of your brain, and no one is ever going to figure out how to read that out and recreate it, so you're just not going to be doing that. You're also not going to see robots run amok; this is a thing people talk about a lot, but it doesn't follow by design. What we have to worry about in this world is replication: if you want to worry about something, worry about things that self-replicate. But intelligence does not imply self-replication.
And then I don't believe in this idea of the singularity; have you followed this? It's the idea that once machines can build machines smarter than humans, and those build even smarter machines, there's runaway complexity, an intelligence we'll never understand. Again, not going to happen.
Intelligence is not something you design and then just turn the crank on. It is a memory-based learning process, and learning can only go so fast; even very big brains will take a long time to train. Okay, so the last point is the question of why we should do this at all. Why would we want it? This is why I love this topic. I talked about your brain, and brains are cool, but what about machine intelligence? Why should we do this? First of all, because we can.
I think knowledge acquisition, to me, is what it's all about. If you're wondering what the endgame is, I can tell you: knowledge acquisition is a good thing. We want to figure out what the whole universe is about, and intelligent machines are going to accelerate that tremendously. We'll be able to build machines that are a million times faster than us, machines that don't get bored and can think about math 24 hours a day. And if we want to explore, if we want to explore the universe outside of our solar system, we'll want them. [Audience question.]
We don't model scalar activity now; I didn't go through it here, but the same is true of the synaptic model. The answer to your question is the following. It's true that in brains we have scalar activity levels in cells, in some cells, not all. But there's a huge amount of evidence suggesting that brains can't rest significant work on a single cell; we know for a fact that you can't engineer the system's fitness onto a single neuron. And so we have distributed representations.
It works out that you might do a slightly better job with scalar activity added in, but because the representation is distributed you don't require it. Remember, you can't trust every cell: a cell could be unreliable, pulses could be missed, and it still works. So the brain can get a little better by scaling things up with scalar values, but it doesn't require them. So we took the simplification of saying, let's go with binary activations and see how it does, and it does extremely well.
We knew we could do slightly better, somewhat better, with scalar operations, but it doesn't seem to be a requirement when you have distributed representations. So it's a choice you can make. And real neurons actually do a lot with a single spike, but then they can refine their behavior with multiple spikes, and do a bit better.
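The robustness argument here, that individual cells can be unreliable yet a distributed binary representation still works, can be checked numerically: with large sparse binary patterns, recognition by overlap survives randomly dropped bits, because matching never depends on any single cell. A sketch with assumed parameters, in the spirit of Numenta's sparse distributed representation work:

```python
import random

random.seed(0)
N, W = 2048, 40                      # 2048-bit patterns, 40 active bits (~2% sparse)

def random_sdr():
    return set(random.sample(range(N), W))

def drop_bits(sdr, failure_rate):
    # simulate unreliable cells: each active bit independently fails to fire
    return {b for b in sdr if random.random() > failure_rate}

stored = [random_sdr() for _ in range(1000)]   # patterns the "memory" knows
target = stored[123]
noisy = drop_bits(target, failure_rate=0.3)    # 30% of the cells stay silent

# recognize by maximum overlap with the noisy input
best = max(range(len(stored)), key=lambda i: len(stored[i] & noisy))
print(best, len(stored[123] & noisy))  # still recognized despite the dropped bits
```

Two random 40-of-2048 patterns share less than one bit on average, so even a badly degraded pattern overlaps its stored original far more than any impostor; that gap is why binary activations suffice once representations are distributed.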