From YouTube: Introduction to HTM Tech - Matt Taylor
Description
HTM Community Meetup - March 14, 2016
So, I am not the smartest guy in the room. I just want to put that out there first off, so me giving you a big overview on HTM might fall short of your expectations, but I can tell you where to go to find more information. I'm stealing some slides from Jeff Hawkins to talk to you guys, and I'm going to try and give an overview assuming you know nothing about HTM or SDRs or anything. Let's see if I can do this in just five minutes, okay? Okay: SDRs, sparse distributed representations.
You can think of an SDR as a giant binary array of bits that are almost all zeros. Very few of them, about two percent, are actually active, and those active bits represent something: they have semantic meaning in that array. Those bits are on for a reason. This data structure works in very much the same fashion as the neurons in your brain.
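To make that concrete, here's a minimal sketch in plain Python (my own illustration, not Numenta's actual code): an SDR with 2048 bits, of which about two percent are on, stored as the set of on-bit indices.

```python
import random

N = 2048             # total number of bits in the SDR
SPARSITY = 0.02      # roughly two percent of the bits are on
n_active = int(N * SPARSITY)  # 40 active bits

# Store the SDR as the set of indices of its on bits.
active_bits = set(random.sample(range(N), n_active))

# The equivalent dense binary array is almost all zeros.
dense = [1 if i in active_bits else 0 for i in range(N)]
print(sum(dense), "of", N, "bits are on")  # -> 40 of 2048 bits are on
```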
So if you take a cross section, a small piece of neocortex, at any point in time there's going to be a very small percentage of neurons that are active, and that's the same data structure we're talking about. Any time you talk about SDRs, those bits really represent neurons that are in predictive or active states, alongside a bunch of neurons that aren't. So this goes back to biology every step of the way, just talking about SDRs.
So the attributes of this structure are high capacity and robustness to noise. That's another general attribute of SDRs: they're robust to noise and they're fault tolerant. So let's talk about some operations. This is where it's going to get nasty. Assume that you have two SDRs; this is the top one.
This example is very small. Any of the bits that are on in both SDRs, any bits they share, represent a semantic similarity between the two representations. That means they're the same in that way, so you can actually store these SDRs and compare them. The next nice thing about SDRs is that they compress very well, because you just need to store the positions of the on bits. And we do SDR comparisons a lot; you actually don't have to store every single on bit, either. You can store a sample of the on bits and you generally don't lose much information.
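Here's a small sketch of those three ideas together: overlap as a similarity measure, storing only the on-bit positions, and subsampling. This is my own toy illustration under the same 2048-bit assumption as above, not library code.

```python
import random

N = 2048

def random_sdr(n_active=40):
    # Hypothetical helper: an SDR as a random set of on-bit indices.
    return set(random.sample(range(N), n_active))

a = random_sdr()
b = random_sdr()

# Overlap: count the bits that are on in both SDRs.
# Shared on bits indicate shared semantics.
print("overlap:", len(a & b))

# Compression: store only the ~40 on-bit positions, not all 2048 bits.
stored = sorted(a)

# Subsampling: keep, say, 10 of the 40 on bits. Matching against the
# subsample still identifies the original SDR reliably, because two
# random SDRs almost never share 10 specific bits by chance.
subsample = set(random.sample(sorted(a), 10))
print("subsample is in a:", subsample <= a)             # True
print("subsample overlap with b:", len(subsample & b))  # almost always 0
```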
These are things we found out about SDRs over years of research, by the way, and we have papers at numenta.com; the papers go into vast detail on the mathematics and a lot of other stuff.
But the key thing is, you can do cool things like OR them together and take a union of SDRs. If you have a collection of a bunch of SDRs and you want to see if there's a theme running through all of them, you can take their union, and that might represent a common theme between the SDRs.
It at least tells you what that collection kind of unions out to, and that can be very useful, because once you have that union, you can take any other SDR that you get, compare it against the union, and ask: was this SDR part of the original union? And you're almost always going to be completely right.
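A sketch of that union-membership test (again my own toy version): OR a collection of SDRs together, then check whether a candidate SDR's on bits all fall inside the union. Because the union covers only a fraction of the 2048 bits, a random non-member almost never fits by chance.

```python
import random

N = 2048

def random_sdr(n_active=40):
    # Hypothetical helper: an SDR as a random set of on-bit indices.
    return set(random.sample(range(N), n_active))

collection = [random_sdr() for _ in range(20)]

# The union is just the OR of all the on bits in the collection.
union = set().union(*collection)

def was_in_union(sdr, union):
    # An SDR matches the union if (nearly) all of its on bits are
    # contained in the union's on bits.
    return len(sdr & union) >= 0.9 * len(sdr)

print(was_in_union(collection[3], union))  # True: it went into the union
print(was_in_union(random_sdr(), union))   # almost always False
```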
So those are just very high-level properties of SDRs, and now I'm going to talk about even deeper things, about the neuron. Okay, so that was SDRs, which are collections of neurons.
This part is about the biological feasibility of our neuron model. Most artificial neural networks that exist today have very few synapses; they're not really biologically realistic. They sum up their weighted inputs and they learn by modifying those weights, and that's really not how biological neurons operate at all. Biological neurons have thousands of synapses, they have active dendrites, and they learn by growing new synapses. So it's much more complex, and we try to capture that complexity in our model of a neuron in HTM theory. This is what we have implemented in software, and that's what you'll be seeing examples of today.
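To illustrate the difference, here is a very rough sketch of the HTM-style idea, with made-up sizes and thresholds (not Numenta's implementation): instead of one weighted sum, the cell has many independent dendritic segments, any one of which can recognize a pattern on its own, and learning grows new synapses rather than adjusting weights.

```python
import random

N = 2048
SEGMENT_THRESHOLD = 10  # active synapses needed to trigger a dendritic segment

class HTMStyleNeuron:
    def __init__(self):
        self.segments = []  # each segment = set of presynaptic cell indices

    def is_predictive(self, active_cells):
        # Active dendrites: each segment is an independent pattern detector,
        # and any single matching segment depolarizes the cell.
        return any(len(seg & active_cells) >= SEGMENT_THRESHOLD
                   for seg in self.segments)

    def learn(self, active_cells, sample_size=15):
        # Learn by growing new synapses to a sample of the active cells,
        # rather than by nudging a weight vector.
        self.segments.append(set(random.sample(sorted(active_cells), sample_size)))

context = set(random.sample(range(N), 40))
cell = HTMStyleNeuron()
print(cell.is_predictive(context))  # False: no synapses grown yet
cell.learn(context)
print(cell.is_predictive(context))  # True: the new segment recognizes it
```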
So it's just much more biologically plausible. This graphic always explained it best to me. Imagine this is a slice of your brain, a very, very small slice, and there's a certain number of neurons that are active at any point because you're getting some sensory stimulation. So at time one, and I'm going to step back and forward in time here, these neurons are active. At time two, these other neurons are. You know there's a difference between these two states, but this is a temporal sequence.
That sequence has been encoded over time from something in your senses, and this is how your brain is representing it. So what HTM does is learn transitions: it forms connections between previously active cells and currently active cells, and each of those cells tries to predict, based on the active cells it just saw, whether it is going to be active next. That's really one of the core concepts of this theory, and it happens at as small a level as the single neuron asking: based on what I see around me, am I going to be active next? So based on all of those neurons and whether or not they think they're going to be active next, you can see that a bunch of them might expect to be active next, given that sequence of on bits in the SDR, and so you have a prediction. This is what your brain is doing: it's constantly predicting. It thinks, "I'm pretty sure he's going to say the word wildebeest."
So this is a first-order sequence memory. It can learn A-B-C-D, but, excuse me, it can't learn higher-order sequences, like telling A-B-C-D apart from X-B-C-Y, and I'm going to stop there, because this is where it gets kind of hand-wavy for me. But at least you can see that in our model we've got a ton of these cells arranged in columns: not just one layer, but we represent columns of them. You may have heard them called minicolumns or something, but we have a cellular layer that contains a bunch of columns of these neurons.
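For a rough picture of why the columns help with higher-order sequences (this is my own toy illustration of the standard HTM explanation, not something demonstrated in the talk): with several cells per column, the same input can be represented by different cells in the same columns depending on what came before.

```python
# Columns that input "B" activates, with several cells per column. "B after A"
# and "B after X" share the same columns but use different cells, so a
# downstream cell can tell the two contexts apart.
b_columns = [17, 42, 99]

b_after_a = {(col, 0) for col in b_columns}  # (column, cell) pairs
b_after_x = {(col, 2) for col in b_columns}

same_columns = {c for c, _ in b_after_a} == {c for c, _ in b_after_x}
same_cells = b_after_a == b_after_x
print(same_columns, same_cells)  # True False: same input, different context
```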
It can also detect anomalous input, because if you've got all these cells, each deciding whether it thinks it's going to be active next or not, then you can save that predictive state and, based on what actually happens, you can tell how generally confused the system is. So we get values like that out of it, and again, this is high capacity.
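A sketch of that confusion measure (this mirrors the raw anomaly score NuPIC reports, the fraction of active bits that no cell predicted, though the helper here is my own):

```python
def anomaly_score(predicted, active):
    # Fraction of the currently active bits that no cell predicted:
    # 0.0 means fully expected input, 1.0 means total surprise.
    if not active:
        return 0.0
    return len(active - predicted) / len(active)

predicted = {3, 7, 19, 42, 77}
print(anomaly_score(predicted, {3, 7, 19, 42, 77}))     # 0.0: perfect prediction
print(anomaly_score(predicted, {3, 7, 100, 101, 102}))  # 0.6: mostly confused
```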
So, you know, our claim is that this neuron, the HTM neuron, and this cellular layer that we've got in our algorithms and our HTM open source projects are essential elements of machine intelligence. That's where we're different from all the other machine learning and machine intelligence companies: we think you've got to do this. If you're going to make AI, you've got to do it like this.
We also think, you know, that the hardware that will eventually come about for machine intelligence needs to support these structures, and there are people working on that. Okay, one more little analogy: we have a partner called Cortical.io, and they are a company that really latched on to this idea of sparse distributed representations years ago, before we took anything open source at all.
They analyze text: you give their system a word, like, you know, "wildebeest" for example, and it will give you back an SDR for it. That SDR will have bits in common with the one for "apple," in that they both represent living, carbon-based life forms, but it will be uncommon in that one has four legs and fur, and the other is good in a pie.
So you can do interesting things like this. You know, they used to call them word SDRs; now they call them fingerprints, or semantic fingerprints or something. You can ask for "apple" and then ask for the SDR for "fruit," and since, as you've noticed when I was talking about SDRs, they have these additive properties, you can subtract the "fruit" away from "apple" and get another representation.
A
You
don't
know
what
it
means,
but
you
can
send
it
back
to
their
api
and
say:
what's
the
most
similar
term,
that
area
thinks
this
blob
of
bondage
being,
and
what
do
you
think
it
would
be
makes
sense
right,
because
an
apple
could
be.
You
know
the
beatles
record
label.
It
could
be
the
thing
that
you
eat
when
you're
hungry
or
it
could
be
a
computer.
So
if
you
take
the
fruit
out
about
what
you're
left
with
the
computers,
so
this
is
just
properties
of
sdrs
without
the
sequence
memory
at
all,
without
html
at
all.
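The fingerprints and the vocabulary below are made up for illustration (Cortical.io's real fingerprints are much larger, and its API is a web service), but the arithmetic is exactly the set operations described above:

```python
# Hypothetical tiny fingerprints; real ones have thousands of bits.
apple = {1, 2, 3, 10, 11, 20, 21, 30}
fruit = {1, 2, 3, 10, 11, 40, 41, 42}

# Subtract the "fruit" meaning out of "apple"...
residue = apple - fruit  # {20, 21, 30}

# ...then ask which known term is most similar to the leftover bits.
vocabulary = {
    "computer":   {20, 21, 30, 50, 51},
    "banana":     {1, 2, 3, 40, 60},
    "wildebeest": {1, 10, 70, 71, 72},
}
best = max(vocabulary, key=lambda term: len(vocabulary[term] & residue))
print(best)  # -> computer
```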