Description
Matt Taylor's April 2017 talk at the AI With The Best conference.
Hello everybody, my name is Matt Taylor and I work for Numenta; I've worked there for about five years. One thing I want to say before I start: it's awesome that there are so many people talking about biologically inspired intelligence at this conference this year, because last year I think I was the only person, so I'm very excited to talk to you about this.
We're going to dive into some minor brain science, after which I'll introduce hierarchical temporal memory, which is really the technology I'm pitching in this talk. I'll talk about the fundamental aspects of that theory of intelligence, including the medium of information, which is sparse distributed representations, or SDRs, and then I'll talk about some specific algorithms in HTM: data encoding, spatial pooling, and temporal memory.
Way back in the beginning of 2015, I saw this Wired article, and the title said AI has arrived. I remember being upset about this, because I like AI, but I don't agree that AI has arrived, and it made me realize that I have a different definition of intelligence than a lot of people in the community do. So maybe if I modify this to say weak AI has arrived, I think I can agree with that statement. Weak AI, the type of AI we have now, can do some really amazing things.
It's huge, right? Everybody needs AI, everybody wants it. The artificial-neural-network-inspired technologies of today, in all the different directions they've gone, can do amazing things. So this is in no way putting down what we can do right now with weak AI, because I think it's really important. But my point is that by my definition of intelligence, what we have today is not intelligent. So what would I say is intelligent? Well, that falls under the term strong AI, or true AI.
Some people call it artificial general intelligence, and Wikipedia says the requirements for strong AI are the ability to reason, use strategy, solve puzzles, represent knowledge, plan, learn, communicate in natural language, and, the kicker, the last one: integrate all these skills towards common goals. Now that's a huge deal, because with weak AI we might be able to put together specific systems that do some of these things, but integrating them together into a system that can do all of them is really hard.
I'm going to take this one step further and say that not only is what we have today not strong AI, but also that weak AI won't produce intelligent systems. There are others at this conference who are echoing that sentiment too, in some of the presentations I've watched. So if it won't produce intelligence, then why not? What's missing? I'm going to tell you what's missing. The very first thing is realistic neurons.
The artificial neural network neuron that was created decades ago was the best we could do with what we knew about neuroscience at the time, but it was a very simplistic model of a neuron, and based upon that really simple model we've done some really amazing things. So imagine if we went back, now that we know so much more about neuroscience and about how neurons actually interact with each other in the neocortex and other parts of the brain, and tried to implement new intelligent systems with a more biologically realistic neuron model. I think this is a big part of why we won't get strong AI from today's artificial neural networks. Another thing that's missing, if you're talking about intelligence, is movement. This is a key thing.
Everything that's alive and intelligent today has to move and explore its environment in order to learn how its actions affect that environment. All of the weak AI systems that we have don't have this integration of movement. They are generally being fed data and taking action upon that data, but they don't have the ability to explore that data.
Our mission at Numenta (we've been around for over a decade now, and this has not changed) is to understand how intelligence works in the mammalian neocortex and create software based on those principles. The reason is that we think that to create intelligence, we need to study what we know is intelligent today, and that is brains. So let's talk about brains. This is a picture of your brain; the wrinkly thing is the neocortex.
If you were to spread this neocortex out and flatten it, it's about the size and shape of a dinner napkin, and it's completely homogeneous, for the most part, throughout. If you took a slice of the neocortex from your visual processing area versus your audio processing area versus your motor generation area, it's going to look the same under a microscope; the cellular structure of that layer of cortex is homogeneous.
What that means is that the same algorithm is operating everywhere throughout the cortex. No matter what it's processing, no matter what motor commands it's generating, it's the same algorithm. This is very encouraging, because it tells us that if we can crack the set of algorithms running in a small section of cortex, we'll go a long way toward understanding how intelligence works. That's what we work on at Numenta. So let's say we have a model of a bunch of neurons in a section of cortex.
Our point is that we're going to model each one of those neurons, and its interactions with all of its neighboring neurons, as realistically as possible, and there are some key differences between our neuron model and the older ANN models. One big one is that a neuron can get input in different ways. It can get proximal input, which is feed-forward input from some input space that it really doesn't know much about, and it can also get distal input, lateral connections that spread throughout the structure and connect to other neurons in the same structure. I'm going to talk a little bit more about this later.
So this is what I'm talking about: hierarchical temporal memory. It's an evolving theory of intelligence based upon this neuron model, which is as biologically plausible as I can make it. I'm going to talk first about sparse distributed representations.
The way I like to think about this is like a fiber optic cable: every one of the fibers in that cable is like a bit, and whether it's on or not means something. So think about a nerve bundle as a fiber optic cable, with each nerve, whether it's active or not, representing some semantic meaning.
This is what a section of cortex might be fed forward; this might be the proximal input for a section of cortex. It's looking at this space, which has a sparse activation (generally 2% of the bits are active), and it's trying to derive meaning from it as it changes over time. So I'm going to show you some visualizations of sparse distributed representations.
Here's an example. This is just a 2048-bit binary array; I've got 40 bits on. I can change this if I want to, but I'm going to keep it at 40, because that's 2%, a typical sparsity of the brain. The point I want to make here is that the number of representations you can show in this space is astronomically large: there are more ways to put 40 on bits in this space than there are atoms in the known universe. So there's not going to be a problem of running out of room to represent things in this space, and this is just 2048 bits. That's one key point about sparse distributed representations.
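The capacity claim above is easy to check yourself. This is a minimal sketch using only the numbers from the talk (2048 bits, 40 on bits); the atoms-in-the-universe figure is the common rough estimate of 10^80.

```python
# How many unique SDRs fit in a 2048-bit space with 40 on bits?
import math

n_bits = 2048      # total bits in the SDR
n_on = 40          # number of active bits, ~2% sparsity

capacity = math.comb(n_bits, n_on)   # "n choose k" unique representations
atoms_in_universe = 10 ** 80         # common rough estimate

print(f"sparsity: {n_on / n_bits:.2%}")
print(capacity > atoms_in_universe)   # True: roughly 10^84 representations
```

Even at 2% sparsity, the combinatorics make accidental collisions between unrelated representations vanishingly unlikely.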
The other point I want to talk about is comparison. Imagine here's one sparse distributed representation; in this one I've got it just as an array, not wrapping around, and the other one I've wrapped around. So I'm going to make these big SDRs.
We've got 100 SDRs in the stack, and I'm going to show you some comparison operations that are important when we talk about HTM theory. I'm going to grab one of these SDRs randomly and click it; this applies 25% noise to that SDR, shows it up here with the noise, and then matches it, based on how much its on bits overlap, against every other SDR down here. With 25% noise, there are 30 bits that still overlap out of the 40; obviously we applied 25% noise, so 10 of those are off, but we can still easily identify it. I've stack-ranked all of these SDRs in this comparison, and I can easily identify this one even with 25% noise. Let's bump it up to 50% noise; again, I can manage the threshold so that I can very easily identify it.
This is the SDR even with 50% noise. And let me show you: I can also give you exactly what the false positive probability is going to be. With this SDR at 68 percent noise, I can match it within this group of 100 with a false positive probability of 5 or 6 times 10 to the negative 12. So it's still pretty good.
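The noise demo above can be sketched in a few lines. This is an illustrative toy, not the demo's actual code; the parameters (2048 bits, 40 on bits, 25% noise) match the talk.

```python
# Make a random SDR, corrupt 25% of its on bits, and measure the overlap.
import random

random.seed(42)
N, W = 2048, 40                       # total bits, on bits

original = set(random.sample(range(N), W))

def add_noise(sdr, pct):
    """Turn off pct of the on bits and turn on an equal number of random off bits."""
    flip = random.sample(sorted(sdr), int(len(sdr) * pct))
    off_bits = [b for b in range(N) if b not in sdr]
    return (sdr - set(flip)) | set(random.sample(off_bits, len(flip)))

noisy = add_noise(original, 0.25)
overlap = len(original & noisy)
print(overlap)   # 30 of the 40 on bits still match
```

With 10 of 40 bits flipped, an overlap score of 30 still towers over the expected overlap of two unrelated random SDRs (well under 1 bit on average), which is why matching stays reliable even under heavy noise.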
The point here is that SDRs are very noise tolerant. They're also very fault tolerant, but I'm going to skip that part of the demonstration. Let me know if there are any questions, if I lost you with the SDR stuff. At the end of this presentation I'm going to show you some resources so that you can watch some videos about all of this at your own speed; they go into much more detail about SDRs. But the point here is that the communications medium of the brain is SDRs.
Let me try to explain this a little better. If I'm a neuron, I have all these connections to other neurons. They might be proximal connections, which are feed-forward inputs from some space, and they could be distal connections, which are looking at other neurons, like neighbors nearby. Each one of these is an SDR: a neuron's view of the other neurons it sees is an SDR, and at any point in time generally 2 percent of them are on, and those all mean something. The neurons it's connected to are telling it something about the current state of whatever is being observed at the time.
All right, I need to get back to the slides; we have a lot to go through. After SDRs we have to talk about encoders. Here's one of the problems we face with HTM: how do we get our data into this SDR format so that an HTM system can understand it? You have to convert it into this bit array. Encoders are basically equivalent to your senses: your retina and your cochlea are extremely complex encoders. Those things have had millions and millions of years to evolve.
We don't have software encoders with that complexity, because we're still struggling to understand how those systems even work, but we do have some very simple encoders, which I'm going to show you. (I should have made that fullscreen.) Here's an example of a date encoder: I'm taking some data, which is a date (this is today), and encoding it in a way that captures the semantics of the date, of right now. So I can encode the day of week.
I can encode whether it's a weekend or not. I can encode a general sense of the time of day (this isn't exactly the time of day, just a general sense of it), and the same thing with a sense of the season. So as I walk through this, going to tomorrow, Monday, Tuesday, Wednesday, you can see the day-of-week encoding changes.
It has a different encoding for every day of the week. When we go from Friday to Saturday, the weekend encoding changes, representing that we're on a weekend versus a weekday. And as we continue to pump through, going through June and July, you can see the season encoding changing to represent the season change. I can also change the time of day, and that affects the time-of-day encoding.
You might also notice that all of these encodings are simply concatenated together to give you an entire encoding that represents all of these semantic details of the timestamp. One of these buckets is day of week, one is weekend, one is time of day, one is season, and as we go through it and you look at that representation, you can see those things changing individually.
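A date encoder like the one in the demo can be sketched as below. This is my own minimal illustration, not NuPIC's actual DateEncoder: the bucket widths and block sizes are arbitrary choices, but the structure (one block of bits per semantic attribute, concatenated) is the idea being described.

```python
# Each attribute (day of week, weekend, season) gets its own block of bits;
# the blocks are simply concatenated into one encoding.
import datetime

def encode_date(dt, width=3):
    def one_hot(index, n_buckets):
        # 'width' consecutive on bits marking the active bucket
        bits = [0] * (n_buckets * width)
        for i in range(width):
            bits[index * width + i] = 1
        return bits

    day_of_week = one_hot(dt.weekday(), 7)                # Mon=0 .. Sun=6
    weekend = one_hot(1 if dt.weekday() >= 5 else 0, 2)   # weekday vs weekend
    season = one_hot((dt.month % 12) // 3, 4)             # Dec-Feb=0, Mar-May=1, ...
    return day_of_week + weekend + season                 # concatenated encoding

enc = encode_date(datetime.date(2017, 4, 29))  # a Saturday in spring
print(len(enc), sum(enc))
```

Stepping the date forward a day changes only the day-of-week block (and the weekend block at the Friday/Saturday boundary), which is exactly the behavior the visualization shows.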
Okay, so that's one way of encoding data. We also have other encoders; we have scalar encoders that can encode numeric data, but I'm not going to get into that right now. I'll give you a short example. Let's talk about the input space. Here's an example of a real piece of data being encoded into an input space. You can think of this SDR at the bottom left as sort of the nerve axon that's feeding into a cortical section; the cortical material has to try to understand the patterns in this data over time. What I'm encoding in it is a scalar value, which is power (that's what's charted here), the time of day that reading was taken, and whether it's a weekend or not. As I walk through here slowly, you can see how the encoding changes over time, so we're dealing with a temporal sequence here, not something purely spatial. I mean, it's a spatial-temporal representation, but it's changing over time, and that's key. That's the way your brain processes information: it needs to have data over time to understand it. So this is one way that I might encode this specific graph into an SDR; there are hundreds of ways I might decide to encode it using different mechanisms. For example, there's another type of scalar encoder called the random distributed scalar encoder. I could use that, and it displays the semantics just the same as the other one, which was just a block of buckets; this one distributes them. So my point is, there are lots of different ways you might encode data. This is a space in HTM theory that I think is ripe for different people creating different encoders for different things. I think it's a very cool aspect of the technology.
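The "block of buckets" scalar encoder mentioned above can be sketched like this. This is an illustrative toy, not NuPIC's ScalarEncoder; the range, bit count, and width are made-up parameters. The key property is that nearby values share on bits, so semantic similarity survives the encoding.

```python
# Map a value into a bucket over [min_val, max_val] and turn on a
# contiguous run of w bits starting at that bucket.
def scalar_encode(value, min_val=0.0, max_val=100.0, n_bits=100, w=11):
    n_buckets = n_bits - w + 1
    clipped = min(max(value, min_val), max_val)
    bucket = int((clipped - min_val) / (max_val - min_val) * (n_buckets - 1))
    bits = [0] * n_bits
    for i in range(bucket, bucket + w):
        bits[i] = 1
    return bits

def overlap(x, y):
    return sum(i & j for i, j in zip(x, y))

a = scalar_encode(50.0)
b = scalar_encode(52.0)
c = scalar_encode(90.0)
print(overlap(a, b), overlap(a, c))   # close values overlap a lot, far values not at all
```

A random distributed scalar encoder keeps the same bucket-overlap semantics but scatters each bucket's bits pseudo-randomly across the array instead of keeping them contiguous.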
Let's see here. I'm going to talk about spatial pooling. This gets a little tricky, but I'm going to do it anyway. Spatial pooling is a core aspect of HTM technology. Basically, there are problems with this input space we've been talking about, this simple space that has an encoding in it.
It's coming from some encoder, and every bit has some semantic meaning. The problem is that we don't have control over the sparsity of that input, so we need to normalize it. This is what the brain does; this is a discovery, not an invention. The brain needs to normalize that representation somehow, and it does this through a process called spatial pooling. In a cortical layer of the cortex there are these structures called mini-columns.
That's what you're seeing on the left here, in a drawing by a famous neuroscientist. Each one of these mini-columns can have many cells, many neurons, in it, and they all share the same proximal connection to that input space. So I'm going to talk a bit about that. These are feed-forward inputs coming from some input space, which could represent sensory information or perhaps information from other parts of the brain as well.
As these mini-columns observe that input space, they become active over time, and this allows us to normalize the sparsity of encoded representations while maintaining the semantic meaning, and that's the tricky bit. (I hope you can still see my screen, and I want to make sure you can hear me.) I'm going to talk about connected synapses.
Here's the input space on the left; it doesn't have anything in it right now. I could show the input (this is just an example of some input that might be in it), but I don't care about that. I just want to show you the relationship that the spatial pooler's mini-columns have to the input space. On the left are just bits coming from some encoding; on the right is the neocortical structure we're building, seen from the top, so we're looking at the top of these mini-columns.
Each one of these boxes contains one or many neurons; we're going to talk about that when we talk about sequence memory. For right now, my point is that each one of these mini-columns maps onto the input space. Let me click on one, and you can see the column that I just clicked has these connections to the input space. When the spatial pooler initializes itself, it sort of pseudo-randomly connects to the input space, so that it has a place to start. So these are all connections.
Every one of these columns has a different set of initial connections to the input space. Okay, something that makes this a little more complicated: it's not just whether it's connected or not. Each one of these columns has a very specific relationship with each cell: there is a permanence value between every column and every bit in the input space. Over here on the right you'll see the permanence value displayed, and this is the connection threshold; these are all configurable values.
So, as I look at this bit right here, it's connected because its permanence value is high enough; it's over the connection threshold. This one is also over the connection threshold, so it has a blue dot. This one is not; its permanence is too low. This is just the initial state of the spatial pooler; the spatial pooler learns as it sees information. Let me turn this mouse highlight thing off. So here's an example of one input, and I can see every column's relationship to that input.
Okay, so here's the first column. It is not an active column, because the number of bits it has overlapping with the current input is not high enough for it to become activated. However, these green ones have enough bits, or connections, overlapping with the input space that they have activated. We just stack-rank the columns by their overlap score with the input, based on their connections to the input space and the on bits in the input space, and the ones that overlap the most are going to activate. All of these parameters are tunable, so you can really focus it based on what your data looks like.
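The spatial pooling step just described can be sketched as below. This is a toy illustration of the mechanism, not NuPIC's implementation: permanences above a connection threshold count as connected synapses, a column's overlap score counts connected synapses on active input bits, and the top-k columns by overlap become active. All sizes and thresholds here are made up.

```python
# Toy spatial pooler: permanence -> connected synapses -> overlap -> top-k.
import random

random.seed(1)
N_INPUT, N_COLUMNS, K_ACTIVE = 64, 16, 4
CONNECT_THRESHOLD = 0.5

# Pseudo-random initial permanences, one per (column, input bit) pair.
permanences = [[random.random() for _ in range(N_INPUT)]
               for _ in range(N_COLUMNS)]

def overlap_score(col, active_inputs):
    # Count connected synapses that land on active input bits.
    return sum(1 for i in active_inputs
               if permanences[col][i] >= CONNECT_THRESHOLD)

def spatial_pool(active_inputs):
    scores = [(overlap_score(c, active_inputs), c) for c in range(N_COLUMNS)]
    scores.sort(reverse=True)                 # stack-rank by overlap
    return sorted(c for _, c in scores[:K_ACTIVE])

active_inputs = set(random.sample(range(N_INPUT), 8))  # a sparse input pattern
print(spatial_pool(active_inputs))            # indices of the winning columns
```

Learning (not shown) nudges the winning columns' permanences up on active bits and down on inactive ones, so columns gradually specialize in recurring input patterns while the output sparsity stays fixed at k columns.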
Okay, let's see: temporal memory. Spatial pooling, just to re-emphasize, gives us a space for a sequence memory algorithm to operate within. This is an HTM neuron from our theory. The feed-forward input is the proximal input; that's what I was talking about with the spatial pooler, coming from the encoding space, the input space. All of these are distal segments that this neuron has with other neurons in the same structure.
As it's getting these feed-forward inputs and columns activate, the cells inside of those columns decide whether they're going to be active next based on their connections to other cells in the structure. So let's go to my big visualization.
Okay, this is the sequencer. I'm going to need to spend a little time setting this up so you understand what's going on. Can you hear those notes? It's coming from back here. This animation may not come across beautifully, but this is what we've got; make sure you can see it. Over here I've got a note sequencer, basically, and we're just going from C sharp, E, F sharp, C sharp, one after the other. This sequence starts fresh every time, and at the end of the sequence I'm cutting it off.
So every time we see the sequence, we know it's a new sequence. Here is the encoding space; this is the input space. You can see the different encoded representations for each of those notes. There are also representations we're not seeing here, for A and for a rest. This represents the active columns right now, and you can see these are columns; if I spread this out a little bit, you can really see those columns. Okay, so this is the spatial pooler in action right now, activating columns based on its input.
Let me show you the proximal connections. I'm going to select a column, like this one, so that you can see exactly what input bits that column is connected to, which is different from this column, and different from this column. Some of these columns are never active, because they never have enough overlap with the input space to ever become active; also, this is a very simple pattern, so we're not going to be utilizing the capacity of the structure either. But there you can see the proximal inputs.
Okay, now I'm going to try to explain how sequence memory works within this structure that spatial pooling provides. Let me just pause on E. Okay, let's go to the very beginning of this sequence. This is C sharp; this spatial representation, this set of active columns, represents C sharp. What we want to do is activate neurons within these active columns to give this spatial representation some temporal context. And there are two ways the temporal memory algorithm does this.
The first one: if we look through here (I'm going to show predictive cells, which will show nothing here, though you'll understand this in just a minute), if we look through every active column and there are no predictive cells, cells that have been put into a predictive state by their distal context, then we're going to activate all of them. That's what happens here. Now I'm showing you the active cells: the yellow means the column is active, the orange means the cell itself is active, and as I step forward I'll show you why.
First of all, if there are no predictive cells, it activates them all. This indicates the start of a sequence; it indicates something new just happened. This may be a branch off a current sequence, or a new sequence, or something like that. If there are predictive cells, cells in a predictive state like this, then it will activate only them. In this case we do have predictive cells here, and it activates just those predictive cells. Okay, now the next big question is: how do those cells become predictive, right?
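The two-case activation rule just described can be sketched in a few lines. This is a toy illustration of the rule, not NuPIC's temporal memory code, and the structure sizes are made up: for each active column, if some of its cells were predicted by their distal connections, activate only those; if none were, "burst" the column and activate every cell in it.

```python
# Toy temporal memory activation step: predicted cells win, else the column bursts.
CELLS_PER_COLUMN = 4

def activate(active_columns, predictive_cells):
    """predictive_cells: set of (column, cell) pairs predicted on the last step."""
    active_cells = set()
    bursting = set()
    for col in active_columns:
        predicted = {(c, i) for (c, i) in predictive_cells if c == col}
        if predicted:
            active_cells |= predicted          # temporal context matched
        else:
            bursting.add(col)                  # unexpected input: burst the column
            active_cells |= {(col, i) for i in range(CELLS_PER_COLUMN)}
    return active_cells, bursting

# Column 2 was predicted (its cell 1); column 5 was not, so it bursts.
cells, burst = activate({2, 5}, {(2, 1), (7, 0)})
print(sorted(cells), burst)
```

Because only the predicted cell activates when a prediction was correct, the same spatial input gets a different cell-level representation depending on what preceded it, and that is what encodes temporal context.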
Okay, so let me turn on the cells that are currently in a predictive state based upon these cell activations. The way we identify this is by looking at each of these neurons. Let me see if I can click that one and show its distal connections. Hopefully you can see this, but this neuron right here has this one distal connection that is connected to all of these other neurons that are active right now. The reason it's in a predictive state is that all these other neurons it's connected to are active.
So we're learning this pattern over and over: C sharp, E, F sharp, C sharp, over and over and over, and it's got it down well enough that if I show you the correct predictions, these green ones mean it's correctly predicting. It's got this pretty much down pat; every time, it's correctly predicting the next note. By the way, this is an extremely trivial example; I just did it for the sake of education. So let's stop at this E, take away the F sharp, and play an A.
At this point, if we look at the cells that are currently predictive, what are these cells predicting? Well, they're all predicting F sharp, because that's all it's seen follow E in the past. So when we move forward and we don't get an F sharp, what's going to happen? I'll show you what happens: columns burst.
This is called bursting. The reason is, if I go and show the predictive cells, all these predictive cells were predicting F sharp, so they were all wrong; instead we get this spatial representation in the spatial pooler which we've never seen before. In this case, the temporal memory algorithm activates all of the cells within each of those columns, because it's saying: this is a new pattern.
I've seen C sharp, E before, but I've never seen C sharp, E, A. So this is what's called bursting: each of those columns will burst, and that allows the algorithm to kick off a new sequence from that point. And if I continue to run this with this new pattern, C sharp, E, A, over and over and over, we'll see something interesting happen: if we show the predictive cells after it learns this for a little bit, we're going to get a branch in the sequence.
So now it looks like it's still predicting A, but sometimes it takes a while to unlearn a sequence that it's already learned. Like I said, all that stuff is tunable. Okay, that was my last visualization, so I'm going to try to finish up my presentation, and then I will go through and take questions.
Just a note here: these are just the foundational components of HTM, or neocortically inspired intelligence.
We're working on a lot more stuff. I mentioned movement is one of the requirements for strong AI; that's currently a big research area for us. I posted a talk, or a series of talks, in chat earlier where I talked to our founder Jeff about his thoughts on all that type of theory for sensorimotor inference. That's really important, and I think the next evolution of algorithms that comes from this research is going to produce a lot more capabilities for HTM. I also mentioned that I have a video series; all the visualizations I'm showing you came from it. It's called HTM School; if you search for it, it should be the first thing that pops up. I've gone through all of this in pretty extensive detail in that video series, so if you want to learn more about this, you can do it at your own pace; just walk through the episodes. I go through lots of stuff about SDRs and spatial pooling, and I just did a temporal memory episode a couple of months ago; I'm working on a second one.
It's just that that one's hard. So, to kind of sum up: all this stuff is open source. It's all on GitHub, and you can see all the details about our community at numenta.org. We've got a really great community, about a thousand people or so on our forums, and everybody loves to help each other out. Let me just put my contact info up: matt@numenta.org.
The community is really great because sometimes I have difficulties understanding parts of HTM theory, and I can post a question on the forums, and there are enough smart people there who really do understand this stuff that they can help me out, which is nice. So we get community contributions to the theory, as well as community contributions to the core code bases. We're constantly trying to improve this; I'm working on a 1.0 version of our core implementation of HTM.
All of these visualizations were run on a system called NuPIC, which is the Numenta Platform for Intelligent Computing. So what you saw was a real HTM system running the theory that I talked about, live on my laptop. Okay, so I'm going to try to get to some questions.
Okay, so which visualization was it, what was I talking about when you missed something? I'd be happy to go back; I think I have four minutes. Encoders, okay, oh, the date encoding thing.
Okay, let me do that real quick. Here is the date encoder. I'll run through this: all the semantics, meaning day of week, weekend, time of day, and season, are all being encoded in an encoding down here on the bottom left; they're just concatenated together. This is how we get the semantic meaning of a timestamp into an encoding, to which we can also add other information, like scalar information.
So we not only get the encoding of the scalar data, but also the time at which it was collected, or the time at which it occurred. That was really the main thing for encoders. And then, actually, I think I also talked about an input space. Did you guys see this visualization? This is a real data stream, encoded the way we might encode it in a real system. So this is one way of encoding it.
We've got the scalar value sort of on the top, moving up and down, and the time information semantics at the bottom, and I wanted to point out that there are other ways we can encode this. This is the same semantic information being encoded in a different way; there could be hundreds of ways to encode this data. It's a big area of exploration in HTM, I think.
So this is a graph of scalar data and the subsequent encoding that is created, and if I turn this on, I can encode it in a different way, like this, with the exact same semantics being encoded but with the buckets sort of distributed through the space. So there are tons of different ways you might do these encodings; that was my point.
A
These
aren't
Python
notebooks.
These
are.
This
is
all
very
customized
code
that
I
wrote,
but
if
you
go
to
the
HTML
videos,
there's
links
to
it.
Yeah
the
sharing
option
is
kind
of
a
bummer
yeah.
There
is
a.
There
are
applications
of
this
in
production.
Somebody
posted
rock
stream
that
runs
off
of
our
technology
directly
I
mean
it
runs.
Our
the
nupoc
codebase
cortical
I/o
is
another.
Cortical.io is a partner of ours, and they're working on natural language processing using HTM and SDRs. They'll create fingerprints of words as SDRs, so you can get an SDR for "cat" or "dog" and compare them to see how close they are. They're doing some really cool stuff. So I'm just going to try to address some questions before my time runs out; sorry about the slide problem.
Why simulate the human brain? Okay, so the bird-versus-plane problem. Yeah, there's a ton of stuff going on in the brain that we can learn from, and just in the same way that we didn't build flapping planes, we might not model all the things that we see in the cortex. But if we can understand how it works, we should be able to implement it somehow, because we don't have anything close to intelligence today.
So why not study something that we know is intelligent and try to figure out how it works? We're not trying to exactly replicate the brain; we're not trying to build a brain. We're trying to understand how it works and build software like the brain. The data does have to be temporal. You can use images, but they have to be a temporal stream of images.
Here's a thought experiment: you can do feature extraction using deep learning technologies, weak AI technology. So you can have a video stream, and if you process each image in the video, extract features from it, and then use those features to create SDRs, then you can start to predict what features will be in the next frames of the video. All right, so I'm still here; I can hang out for questions.
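The thought experiment above needs some way to turn a dense feature vector into an SDR. One minimal sketch, entirely my own illustration (the function name, feature values, and top-k approach are assumptions, not anything from the talk), is to keep only the indices of the k strongest features:

```python
# Turn a dense feature vector (e.g. CNN activations for one video frame)
# into a sparse representation by keeping the top-k strongest features.
def features_to_sdr(features, k=4):
    ranked = sorted(range(len(features)), key=lambda i: features[i], reverse=True)
    return set(ranked[:k])      # on-bit indices of the SDR

frame_features = [0.1, 0.9, 0.3, 0.8, 0.05, 0.7, 0.2, 0.6]
print(sorted(features_to_sdr(frame_features)))   # indices of the 4 strongest features
```

Successive frames then become a stream of SDRs, which is exactly the kind of temporal sequence the temporal memory algorithm can learn to predict.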
It exists in the entirety of your brain; there are neurons representing this glass all over your cortex, yeah. I have to go because I do have a one-on-one booked, so I will have to wrap up. But the point being that the very lowest levels react immediately; a lot of this is talking about reflexes, and we're not talking about reflexes. The neocortex is not involved in some of those lower-level things like reflexes. But yeah, I've got to go and do a couple of interviews. So thank you all for watching.