From YouTube: HTM Overview (Episode 0)
Description
In this first introductory episode of HTM School, Matt Taylor, Numenta's Open Source Flag-Bearer, walks you through the high-level theory of Hierarchical Temporal Memory in less than 15 minutes.
Join our online community at https://discourse.numenta.org/.
Intro music: "Books" by Minden: https://minden.bandcamp.com/track/books-2
Hi, I'm Matt Taylor. I am the open source community flag-bearer for Numenta and the NuPIC open source project. HTM is a unique approach to artificial intelligence that starts from the neuroscience of the neocortex. We're not trying to recreate a brain; we're just trying to learn how intelligence works in the neocortex and build systems on the same principles. In this first episode of HTM School, I'm going to give you a broad overview of HTM, and in subsequent episodes we'll dive into some of the details.
So if you want to follow along, just subscribe so you can be notified and won't miss any episodes.
Let's start with the neocortex. The neocortex is that wrinkly blob on the top of your brain. It takes up about 75% of your brain, and it evolved in mammals over the past one to two hundred million years. Reptiles, for example, do not have a neocortex; this is a mammalian development, and it gives us a decided advantage over our reptile and bird cousins. In humans, the neocortex is about the size of a dinner napkin. It's 2.5 millimeters thick, and it's all scrunched up and shoved inside your skull so it can have more surface area.
The older parts of the brain, below the neocortex, are involved in the basic functions of life, like sleeping and eating and sex and love and emotions, but those aren't intelligent. Those old parts of the brain don't represent any real intelligence. What makes you you is all stored in your neocortex.
That's where all your memories are stored; that's where all the lessons you've ever learned reside. That's your identity. That's your intelligence. The neocortex is considered the seat of intelligence in the brain. If you were to take slices of your neocortex from a bunch of different places in your brain and look at them all under a microscope, you'd see something remarkable: the cellular structure of each slice is going to be almost identical, no matter where you cut it from.
Whether you took it from the visual processing part of your brain, the auditory processing part, or the language generation part, you'll see that the structure is the same. This tells us that the types of problems the brain is solving, as it takes in all the sensory information from the world, are very similar, and that it's solving them in the same way: the same cortical structure and the same algorithmic reasoning deal with information at every part of the neocortex.
Your neocortex is divided into several dozen regions, and these regions are connected together through bundles of nerve fibers that run through white matter. The regions are logically linked together in a hierarchical structure, or hierarchy. Raw sensory data, the lower-level data coming from your ears, your eyes, and your other senses, comes in at the lower levels of that hierarchy and is processed by nodes at those lower levels.
The output of those nodes is then passed up to higher levels in the hierarchy, which process it in the same fashion because of this common cortical structure throughout the cortex, and the output of those regions is then sent upward in turn. This continues as we climb the hierarchy: the ideas being understood over time become more abstract and more permanent.
We're not concerned about how the senses themselves process data, how the retina processes data or how the cochlea processes data, because from the neocortex's standpoint, the input from every one of those senses is all essentially the same. The neocortex performs the same types of functions and operations over that input, no matter where it's coming from, because each region is performing the same set of processes on the input data. That means the capabilities of the entire neocortical structure must be present within each region itself.
Therefore, we can focus on how one region operates and how it interacts with its neighbors. In this way, when we build out hierarchical models, we can create indefinite complexity that can work on any sensorimotor problem. The human neocortex has between 20 and 30 billion neurons in it, and you could consider the state of your cortex at any point in time: which of those 20 to 30 billion neurons are active at that moment.
One remarkable thing about the cortex is that if you look at that state, you'll see that a remarkably small percentage of neurons are on at any point in time, generally about 2%. You'll hear that number a lot in HTM theory; that's the general sparsity we use in our algorithms and applications. You might represent the state of a layer or region of cortex with an array of ones and zeros, each bit in that array representing a neuron and whether it is active or not. This is what we call a sparse distributed representation, taken straight from the neuroscience: every SDR represents neurons in a region of the cortex and whether they are currently on or off. Sparse distributed representations are a key element in HTM; you can't build these intelligent systems without this data structure. We're going to talk a lot more about SDRs in the future, so stay tuned for those episodes.
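The idea of an array of mostly-off bits at about 2% sparsity can be sketched in a few lines of Python. This is an illustrative toy, not NuPIC code; the function names `make_sdr` and `overlap` are made up for this example.

```python
import random

def make_sdr(size=2048, sparsity=0.02, seed=None):
    """Return a set of active bit indices standing in for an SDR:
    `size` neurons, with about 2% of them on."""
    rng = random.Random(seed)
    n_active = max(1, int(size * sparsity))
    return set(rng.sample(range(size), n_active))

def overlap(sdr_a, sdr_b):
    """Count bits active in both SDRs; shared bits are how two
    representations are compared for similarity."""
    return len(sdr_a & sdr_b)

sdr = make_sdr(seed=42)
print(len(sdr))  # 40 active bits out of 2048 (~2%)
```

Two random SDRs of this size will share almost no bits, which is why a sparse code can represent a huge number of distinct patterns with little chance of confusion.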
There are two basic inputs to the neocortex. One is a copy of the motor commands that come out of the old brain on their way to command your muscles; that copy gets sent back into the neocortex so it can understand how it is interacting with the world around it. The second is a copy of the sensory input coming from all the senses: all of your touch, your taste, your sight, your hearing. All of that data is also copied and sent to the neocortex, because it gives it a representation of what is happening in the outside world. So we've got sensory input and motor commands.
These two things give the neocortex a sensorimotor model of the world: not just what's happening in the world, but how it is interacting with the world over time. You might think about it like this: as you're looking around, your eyes are moving all the time, and every time your eyes move, your optic nerve sends a picture to your neocortex that is entirely different from the one it just sent. If your neocortex did not receive information about where and how your eyes moved, it would not be able to understand why the picture is changing.
It would just see the world jumping from place to place with no real reason for it, and that's not the way we perceive the world. We understand the world because we are constantly interpreting not only the data we're receiving from our senses, but also the data we're receiving from our own brains about how we're interacting with that world. Now, HTM systems do need the equivalent of sensory organs, and we call these things encoders.
An encoder is something that takes a data type and converts it into a sparse distributed representation so that the HTM can digest it. HTM's language is sparse distributed representations, so we have to figure out how to turn these different kinds of data into SDRs. We have a collection of common encoders that we already use inside of NuPIC, HTM.Java, and other HTM implementations, but they all do essentially the same thing: they take data, which could be numbers, dates, temperatures, GPS coordinates, or what have you, and convert it into an SDR so that the HTM system can understand the changes in that data over time.
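To make the encoder idea concrete, here is a minimal sketch of a scalar encoder. It is not NuPIC's actual ScalarEncoder; the function name and parameters are illustrative. The one property it preserves is the important one: nearby values produce outputs with overlapping active bits, while distant values do not.

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n=64, w=5):
    """Encode a number as an n-bit array with w contiguous active bits.
    Similar values land in nearby buckets, so their SDRs overlap."""
    value = max(min_val, min(max_val, value))  # clamp into range
    n_buckets = n - w + 1
    bucket = int(round((value - min_val) / (max_val - min_val) * (n_buckets - 1)))
    bits = [0] * n
    for i in range(bucket, bucket + w):
        bits[i] = 1
    return bits

a = encode_scalar(20.0)
b = encode_scalar(25.0)   # close to 20 -> shares bits with a
c = encode_scalar(80.0)   # far from 20 -> shares none
print(sum(x & y for x, y in zip(a, b)), sum(x & y for x, y in zip(a, c)))
```

Real encoders add details like periodic ranges and resolution control, but they all follow this shape: semantically similar inputs yield overlapping representations.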
One of the most interesting things about this is that we can create encoders that have no biological counterpart. For example, we've created a GPS encoder which can take latitude and longitude and understand how an object moves through time and space. Once it has learned some of the object's patterns, it can make predictions about where that object is going to be.
It can give us anomaly indications about how anomalous the object's behavior is at any point in time, and we can even potentially classify its movement. So this gives us a hint about where intelligent machines are going in the future.
At the heart of HTM theory is an algorithm called temporal memory. This is something that learns the patterns that change over time: which neurons are on in these SDRs from moment to moment. It operates on motor commands as well as sensory input, and HTM postulates that every excitatory neuron in the neocortex is learning transitions of patterns.
Temporal memory is probably the biggest difference between HTM and other commonly used machine learning techniques today. HTM starts with the core assumption that everything the neocortex does is based on the memory and recall of sequences of patterns. HTM systems learn continuously: as the input data changes, the HTM model updates itself. There's no need for batch processing or learning on a training set; you can train a model, and that model can always keep changing over time.
As the data changes, HTM builds a predictive model of the world: every time it receives input, it attempts to predict what is going to happen next. Because it has this idea of what's going to happen, when it gets the information about what really did happen, it can keep track of how well it did. If it predicted well, it reinforces that learning; if not, it does not. A system like this will constantly adapt to the world as it changes.
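The predict-then-compare loop can be sketched abstractly. HTM systems commonly score an input's surprise as the fraction of active bits that were not predicted. The "predictor" below is a deliberately trivial stand-in (it just predicts a repeat of the previous input) in place of a real temporal memory; `anomaly_score` and `stream` are names invented for this example.

```python
def anomaly_score(active, predicted):
    """Fraction of active bits not predicted:
    0.0 = fully expected input, 1.0 = complete surprise."""
    if not active:
        return 0.0
    return len(active - predicted) / len(active)

# Toy stand-in for temporal memory: predict a repeat of the last SDR.
# A real HTM predicts from learned sequence transitions instead.
stream = [{1, 2, 3}, {1, 2, 3}, {7, 8, 9}]
predicted = set()
for active in stream:
    print(anomaly_score(active, predicted))  # 1.0, then 0.0, then 1.0
    predicted = active  # "learn": next prediction is the current pattern
```

The first input is fully surprising (nothing was predicted), the repeated input scores 0.0, and the sudden change back to 1.0 is exactly the kind of signal used for anomaly detection on streaming data.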
Scientists started working on artificial neural networks over 50 years ago, before we knew much about how the brain worked at all. Since then, neuroscientists have gained a lot of insight into the neurobiology, anatomy, and physiology of neurons in the neocortex, and we've learned a lot, but the basic state of ANNs has not changed much at all. Despite the name "neural network," ANNs really don't have much to do with real neurons.
Now, ANNs have recently been evolving into some interesting new ideas: convolutional neural networks, recurrent neural networks, deep learning. But these are still not really biologically plausible models, and they're a far cry from what's going on right now in your neocortex. HTM is an evolving theory. We have a long way to go before we come up with a full theory of the neocortex, but the good news is we've made significant progress on nailing down some of its fundamental aspects.
We can do some interesting things just by simulating one layer of one region of cortex. What we've created in software today implements the data encoding principles that get data into SDR format, and the temporal memory algorithm I talked about, which gets us predictions and anomaly indications out of live streaming data. What we can do with just one layer of one region of cortex, roughly one cubic millimeter of cortical tissue, is really amazing, and it tells us there's a bright future for HTM: the more we build onto this theory and build it out into software, the more capabilities we're going to get out of it. If you want to find out more right now, you can do a couple of things. First off, I highly recommend the book by our founder Jeff Hawkins called On Intelligence.
This is the book that got me interested in this field of study, and in Numenta, over ten years ago. I highly recommend it if you like this type of thing; you don't even have to be interested in building HTM systems, because it's just a great layman's introduction to cortical theory. You can also take a look at our website, numenta.com/learn. There are a bunch of video presentations, white papers, example applications, and other resources you can look at to dive in, learn more about the theory, and take it in any direction you want to go. Otherwise, stay tuned to this YouTube channel, because I'll be continuing this HTM School series. Our next episode will be about bit arrays, the very basic of basics when it comes to HTM, and that will lead us into discussions about sparse distributed representations. So stay tuned, and I hope to see you soon.