From YouTube: Temporal Memory Part 2 (Episode 12)
Description
This episode contains the answer to last episode's puzzler regarding single order vs high order memory. We also go over how bursting kicks off the learning of new sequences by choosing winner cells to represent those transitions.
Okay, so the puzzler last episode was to think about what would happen if the temporal memory algorithm was restrained to using only one cell per column. I asked how this would affect sequence learning inside temporal memory. To answer this question, we need to learn a bit about single order and high order sequence memory, with help from Numenta research engineer Marcus Lewis.
Let's bring out single order memory and have it look at this sequence. "Hi, I'm single order memory, and I'm going to learn a thing and tell you about it. Oh look! There's the thing. Okay, I learned the thing." Okay, single order memory: if you hear the note E, what note is most likely to come next? "That's easy. I'll just go through the whole sequence, look for all the E's, and keep count of what follows and how many times. That way, I can tell you that chances are good an F is going to follow an E in the sequence."
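What single order memory just described is a first-order transition table: scan the sequence, count what follows each note, and predict the most common successor. A minimal sketch in Python (the melody here is a made-up example):

```python
from collections import Counter, defaultdict

def first_order_counts(sequence):
    """For each note, count which note follows it and how many times."""
    counts = defaultdict(Counter)
    for current, following in zip(sequence, sequence[1:]):
        counts[current][following] += 1
    return counts

# A made-up melody for illustration.
melody = ["E", "F", "G", "E", "F", "E", "F", "G"]
counts = first_order_counts(melody)

# The most common note after "E" is the single order prediction.
prediction = counts["E"].most_common(1)[0][0]
print(prediction)  # "F": it follows "E" every time in this melody
```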
But what if you knew more than one note in the sequence? Let's say you didn't hear just an E, but you heard a G right before the E. Can you refine your prediction? "Wait, what do you mean, before E? There's nothing before E. F comes after E." This looks like a job for high order memory.
"You told me I could have a fidget spinner if I did this for you." Later! So, having one cell per column restricts each column to representing a spatial feature in only one context, meaning it only has information from the current state to make predictions about the future. This severely limits its functionality as a learning algorithm. A good sequence memory needs to recognize spatial patterns within many different contexts, and that's why we need many cells per minicolumn.
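To see why one note of context is so limiting, here's a toy comparison, not HTM itself, just simple n-gram counters: a first-order model and a higher-order one trained on two made-up melodies that share the middle fragment "B C". The first-order model can never tell the two sequences apart at the shared note; a model with more context can.

```python
from collections import Counter, defaultdict

def train(model, sequence, order):
    """Count which note follows each length-`order` context."""
    for i in range(order, len(sequence)):
        context = tuple(sequence[i - order:i])
        model[context][sequence[i]] += 1

first_order = defaultdict(Counter)
high_order = defaultdict(Counter)

# Two made-up melodies that share the middle fragment "B C".
for melody in [list("ABCD"), list("XBCY")] * 3:
    train(first_order, melody, order=1)
    train(high_order, melody, order=3)

# With one note of context, what follows "C" is ambiguous: D and Y tie.
print(first_order[("C",)])
# With three notes of context, the first note resolves the ambiguity.
print(high_order[("A", "B", "C")])   # only "D"
print(high_order[("X", "B", "C")])   # only "Y"
```

This is the same trick the extra cells per minicolumn buy us: each cell can stand for the same spatial feature heard in a different context.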
We talked a little about bursting in the last episode, but one key idea is still missing: when a column bursts, how is a cell within the bursting column chosen to represent that new sequence in the future? Let's say that proximal input causes some columns to activate; here's one of those columns. You can tell that this spatial input was not predicted, because there are no predicted cells in this column, so the minicolumn bursts and every cell is activated.
Now let me introduce the idea of a winner cell. A predicted cell that becomes active is a winner, because it correctly predicted it would be active.
However, if the column is bursting, we still need to choose a winner cell to represent this new pattern. I'm going to explain how the selection is made in a moment, but for now let's just choose one. Moving on to the next time step, we get a new spatial input.
Once again, this minicolumn, among others, was activated because it recognized a certain spatial feature in the input space through its feed-forward proximal dendritic segments. Again, this spatial feature was not predicted, so the column bursts. Now let's talk about how we choose a winner cell to represent this new transition in our sequence.
First, we will look through the potential winners in the column to see if any of them almost predicted the last input. This means that they have distal segments that match the previously active cells, but their permanence values are not high enough to form a connection. If they had been connected, the cell would have been in a predictive state and the column would not have burst. Remember that we look at all the segments of all the bursting cells, which can lead to any other cells in the structure.
So this graphic is a bit misleading, because it's focusing on just a few columns. There are probably thousands of columns involved, dozens of which are activated by this input simultaneously. So, given that this was the winner cell last time, if this top cell happened to have a segment to it, it might become the winner.
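This "almost predicted" search could be sketched as follows. This is a simplified illustration, not Numenta's implementation: the cell names, permanence values, and data layout are invented for the example. Each distal segment is modeled as a map from a presynaptic cell id to a permanence value, and we look for the cell whose segment has the most potential synapses onto the previously active cells, even if those synapses were too weak to put the cell into a predictive state.

```python
def best_matching_cell(segments_per_cell, prev_active):
    """Among a bursting column's cells, pick the one whose segment has the
    most potential synapses onto the previously active cells."""
    best_cell, best_overlap = None, 0
    for cell, segments in segments_per_cell.items():
        for segment in segments:
            # Count synapses onto prior active cells, regardless of strength.
            overlap = sum(1 for pre in segment if pre in prev_active)
            if overlap > best_overlap:
                best_cell, best_overlap = cell, overlap
    return best_cell  # None means no segment matched at all

# Made-up column: three cells, with weak synapses onto prior winners 101/102.
segments = {
    "cell0": [{101: 0.3, 102: 0.4}],  # two sub-threshold synapses
    "cell1": [{101: 0.2}],
    "cell2": [],
}
print(best_matching_cell(segments, prev_active={101, 102}))  # cell0
```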
This column is active because it recognized some spatial feature of the input, and all the cells activate because there were no cells predicting this input. Here's the cell in the last example column that was the winner cell, but perhaps none of the active cells in the current column have any segments that match any previous winner cells.
So we basically have no clue what's going on, and we need to represent this as a brand new sequence. In this case, we're going to inspect all the bursting cells to find a cell with the fewest number of segments and make that the winner cell. This ensures that we are utilizing as many cells as possible and not overloading cells with meaning unnecessarily. It's also important that we randomly break any ties that occur when finding this winner cell.
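A minimal sketch of this least-used selection with random tie-breaking (the cell names and segment counts are made up for illustration):

```python
import random

def least_used_cell(segment_counts, rng=random):
    """Pick the cell with the fewest distal segments, breaking ties randomly."""
    fewest = min(segment_counts.values())
    candidates = [cell for cell, n in segment_counts.items() if n == fewest]
    return rng.choice(candidates)

# Made-up bursting column: cell -> number of existing distal segments.
column = {"cell0": 4, "cell1": 1, "cell2": 1, "cell3": 7}
winner = least_used_cell(column)
print(winner)  # "cell1" or "cell2", chosen at random
```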
Now that the winner cell selection is done, we need to either create segments or grow new synapses connecting the winner cell in this bursting column to the cells representing the previous state. We do this by increasing the distal permanence values between these cells and the winner cells from the last time step. Remember that we're applying this learning to every winner cell in every active column in this time step. A couple of notes before we close up this episode: every active column will have a winner cell, even if it's not bursting, in which case the winner is the active cell in the column.
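The learning step described above, nudging distal permanences toward the previous winner cells, might look like this. It's a simplified sketch: the increment value and the synapse layout are assumptions, and a full temporal memory implementation also handles segment creation and permanence decay.

```python
def reinforce(segment, prev_winners, increment=0.1, max_perm=1.0):
    """Strengthen the winner cell's distal synapses toward the previous
    winners, so this transition is more likely to be predicted next time."""
    for pre in prev_winners:
        # Grow a new weak synapse if none exists, else bump the permanence.
        segment[pre] = min(max_perm, segment.get(pre, 0.0) + increment)
    return segment

# Made-up segment: presynaptic cell id -> permanence value.
segment = {101: 0.45, 102: 0.95}
reinforce(segment, prev_winners={101, 102, 103})
# Existing synapses are strengthened (capped at max_perm), and cell 103
# gains a new, weak synapse.
print(segment)
```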
A
Also,
not
all
columns
must
burst
at
the
same
time,
it's
quite
typical
for
input
noise
to
introduce
bursting,
because
this
noise
creates
many
different
sequences
that
are
still
very
similar.
So,
while
completely
new
spatial
input
will
burst
all
the
columns
many
times
subtle,
deviations
in
spatial
features
only
burst
some
of
the
columns.
This brings our discussion about HTM sequence memory to a close. However, there's a lot more to HTM than just sequence memory. HTM sensorimotor theory uses essentially the same learning algorithms I've gone over in HTM School so far to do 3D object recognition.
in
the
next
few
episodes
I'll
be
talking
about
bigger
structures
like
cortical
columns,
multiple
layers
of
cortex
and
how
interactions
between
cortical
columns
and
layers
allow
your
brain
to
do
sensory
motor
integration
to
recognize
objects
in
3-dimensional
space
based
upon
motor
commands
and
sensory
input.
If this sounds exciting to you, please like this video and subscribe to the HTM School channel, where we will continue to provide educational materials about Hierarchical Temporal Memory. Thanks again for watching HTM School.