From YouTube: HTM Hackers' Hangout (Mar 3, 2017)
Matt: Hello and welcome to HTM Hackers' Hangout. It's March third, I'm Matt Taylor, and thank you for watching. We're going to talk today about a few things. Sorry — this week we've just been having a deep conversation about temporal memory, and I'm trying to get it out of my head. So at the end of this episode, Marcus and I are going to continue that conversation about temporal memory predictions. Anyway, let's talk about what's going on in the community and stuff — it's been a month.
So, a few things I wanted to mention, and then, as always, I'll open it up to anybody else who wants to talk about anything or ask questions, and we'll have conversations. First of all, we've got a lot of events happening where we or Numenta are going to be represented. Let me share my screen — let me pick which screen to share. This is so well planned out.
That's a good snag, so we'll add that to the website as soon as we get details — just send it to Christy, and probably she or I will do it. Sure thing. Okay, next up on our list — oh, I was going to say there's one more: AI With The Best (ai.withthebest.com). I think it's an online conference; it's cheap, and people liked it last time, so I'm doing it again.
So if you register for AI With The Best, don't forget to watch those talks live and ask questions at the end. Okay, so, the continuing education work: I'm still working on HTM School videos, and this is probably going to continue evolving. Jeff has taken an interest now, and he's talking to me about maybe doing some other things in that style, regarding some of the newer theory. So at some point you might see that — I don't know, I'm not making promises, we'll see.

I'm also going to be putting out some new NuPIC compilation guides. This is honestly the worst job — not that I'm complaining, but it's the most annoying thing to do, and it needs to be done. So I start VMs: I start a fresh Ubuntu, whatever is the latest — 16 — install NuPIC from source code from scratch, and walk through the whole thing. This gives me a chance to exercise the whole process.
A
So
it's
beneficial
it
just
sucks
because
I
have
to
it.
Just
takes
a
lot
of
waiting
and
typing
the
wrong
commands,
and
anyway,
I'm
going
to
do
that
for
I'm,
going
to
try
to
do
for
windows
too,
as
well
as
Mac
OS
other
than
that
new
pics
stuff.
I'm
working
on
api,
docs
and
they're
they're
a
lot
better,
austin,
don't
say
anything
I'm
working
on
a
GI
docs.
A
This
is
the
way
it
should
have
been
done
before,
but
these
are
so
much
better
than
the
last
ones,
and
we've
got
some
really
nice
linking
and
and
formatting
the
parameters
of
all
look
nice
code
styles.
But
the
thing
is
that
there's
a
lot
of
changes
that
need
to
get
made
the
doc
strings
to
get
this
working,
I'm
also
working
on
a
new
pic
core
in
the
same
style.
A
It's
going
to
be
the
same
Sphinx
tile,
because
we
found
a
tool
Scott,
something
about
a
tool
called
breathe
that
you
can
convert
doxygen,
xml
output
and
perhaps
Fink's
process
it,
which
is
wonderful.
So
we
could
potentially
get
these
in
a
very
similar
format
and
look
and
feel
which
will
be
really
great,
so
I'm
looking
forward
to
that.
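To sketch how that pipeline hangs together: Doxygen is run first with XML generation enabled, and Breathe then exposes that XML to Sphinx. The project name and XML path below are illustrative, not the actual nupic.core layout:

```python
# conf.py -- minimal Sphinx configuration sketch for the Breathe bridge.
# Assumes Doxygen has already run with GENERATE_XML = YES; the path to
# its xml/ output directory here is hypothetical.
extensions = ["breathe"]

breathe_projects = {"nupic.core": "../build/doxygen/xml"}
breathe_default_project = "nupic.core"
```

An .rst page can then pull in a documented C++ entity with a directive such as `.. doxygenclass:: SomeClass`, and Sphinx renders it with the same theme and cross-linking as the Python API docs.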
That's all I'm working on right now. Yeah, I wanted to mention this: we haven't gotten a lot of community contributions. I don't think it's a big deal, and I think it's because we've been focusing all our efforts on research, and people are kind of waiting to see what happens from that — and that makes sense, right? Yeah. Let me just — I don't need to present any more — there are a lot of other HTM implementations out there that are doing things in slightly different ways, experimenting, and trying cool stuff.
I still think there's lots of potential there, but the people who I think are most interested in contributing back, especially to the theory, are doing that work, and I think there are more people interested in contributing back to the theory than to the NuPIC or HTM.Java code specifically. But that's okay, and I think when we come out with new stuff and it goes in, there's probably going to be an uptick in interest in making contributions.
That's it — that's all I've got. I don't think anybody's on chat, but there are some people watching, so I think if you type into chat you can ask a question. But otherwise — do you like... I bought this Rembrandt at an estate sale this morning, and for ten dollars — pretty good. But yeah, any other things you want to talk about? Let's see, we've got somebody online here — Daniel, nice to see you. I can't hear you, though; if you want to, you might want to turn your mic on.
Nice. On the note of research — I forgot to mention this, but Jeff and I are planning on having a chat when I'm in the office in about a week and a half, and I want to talk to him about some things about new theory, especially now that I feel like I have a better understanding of current theory, and explore some of his thoughts about what's happening in research right now, the direction it's going, and how it relates to current implementations. I know there's some confusion about that.
A
So
I
started
a
thread
on
each
team
forum
about
this,
and
people
are
asking
some
questions
that
I'm
going
to
try
and
curate
and
turn
into
a
conversation
with
jeff
about
these
things
and
kind
of
mix
it
into
some
things
that
I
want
to
talk
to
him
about
as
well.
So
that
should
be
fun,
and
I
know
it's
not.
You
know
an
office,
our
style.
You
can't
join
in
and
live,
ask
questions,
but
we're
not
going
to
give
up
doing
that.
Matt: So that's all we've got. Do you want to continue talking about temporal memory?

Marcus: Sure.

Matt: All right. So I was trying to explain — okay, when cells are predictive... what I really want to do is identify how predictions occur — that is, to identify, from one spatial input, all the sequences that are associated with that spatial input. That was my initial point, the thing I was trying to do, and I realized it was impossible. Would you agree with that, Marcus?
Marcus: So — representing all sequences associated with a single spatial input. Yes, I mean, because when you burst, you're essentially doing that: when you burst a spatial input, all the cells that become active are each like one context of that input. Now, if you're talking about, though —
Matt: I understand that that's during learning; I'm talking about retrieval, which I think is different. Given one spatial pattern, and a system that's already learned — give it one spatial pattern — I want to know, from that state, with the operations I can do with that spatial pattern, how can I identify individual sequences? And I don't think it's possible.
Marcus: Now, if it's really not going to burst, it will activate a smaller set of cells — though that could still be a union. To bring up the example of A-B-C-D and X-B-C-Y: if your first input is B, then B bursts, and your predictions are both types of C. So when the C comes in at that point —

Matt: Wait — but not all of it, yeah.
Matt: Let me step back here. I know where you're going, but my ultimate goal is to try to identify somehow, and extract, a definition of each sequence. Given, like, four inputs — or given n inputs in a row — I want a way to extract and identify every sequence they could be part of. So I'm thinking there would be lists — not of time steps, you know; they're just things that are — it's a linked list, right, and each node is a spatial state, or a non-spatial state.
Marcus: So let's say you've learned five different sequences that share A-B-C — A-B-C-D, X-Y-A-B-C, and a few others like that. At the point where you see C, you don't have a union; by the point where you've had C, you have it narrowed down to basically one sequence, and then it diverges after that. So that's the point, yeah.
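To make the A-B-C-D / X-B-C-Y point concrete, here is a tiny sketch — not NuPIC's temporal memory, just hand-wired cells and segments standing in for what the TM would have learned from those two sequences. It shows both effects: an unanchored B bursts and predicts the union of both Cs, while after A-B the representation narrows to a single sequence.

```python
# Each cell is a (symbol, context) pair; SEGMENTS maps a cell to the sets
# of previously-active cells that put it into the predictive state.
# These connections stand in for what temporal memory would learn from
# the sequences A-B-C-D and X-B-C-Y (hand-wired, for illustration only).
SEGMENTS = {
    ("B", 0): [frozenset({("A", 0)})],   # B in the "after A" context
    ("B", 1): [frozenset({("X", 0)})],   # B in the "after X" context
    ("C", 0): [frozenset({("B", 0)})],
    ("C", 1): [frozenset({("B", 1)})],
    ("D", 0): [frozenset({("C", 0)})],
    ("Y", 0): [frozenset({("C", 1)})],
}
ALL_CELLS = set(SEGMENTS) | {("A", 0), ("X", 0)}

def column(symbol):
    """All cells that represent this input in some temporal context."""
    return {c for c in ALL_CELLS if c[0] == symbol}

def activate(symbol, prev_active):
    """Predicted cells win; with no matching prediction, the column bursts."""
    predicted = {c for c in column(symbol)
                 if any(ctx <= prev_active for ctx in SEGMENTS.get(c, []))}
    return predicted or column(symbol)

def predictions(active):
    """Cells put into the predictive state by the current activity."""
    return {c for c, ctxs in SEGMENTS.items()
            if any(ctx <= active for ctx in ctxs)}

# B with no prior context: the column bursts, and the next-step
# prediction is the union of both learned continuations of B.
union_prediction = predictions(activate("B", frozenset()))

# A, then B, then C: each step is predicted in one specific context,
# so the union collapses and only the A-B-C-D continuation remains.
a = activate("A", frozenset())
b = activate("B", a)
narrowed_prediction = predictions(activate("C", b))
```

Here `union_prediction` contains both C cells, while `narrowed_prediction` contains only the D cell — the "narrowed down to basically one sequence" behavior described above.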
Matt: So that brings me to another question. I understand that, and thank you for that explanation — it makes sense. My next point is: bursting is obviously the beginning of a sequence, or the beginning of a node — or, as a branch of another sequence, a point where a sequence splits — but there is no ending cap. There's never an ending cap, unless you manually cut it, right? Right. How does that work?
Because there's got to be a way that you can close sequences, or else it just doesn't make sense to have all of these open sequences. I mean, you want to identify — to classify — temporal patterns, and I think that's the classification part we're talking about, right? Ending sequences.
Marcus: Right. So generally, a lot of our experiments end up using a reset signal, and that's what that's for: if somehow you can mark that "right now I'm at the beginning of something new," or "that was the end of something," then you do a reset. And the assumption is just that at some point we'll find a biological equivalent for resets — I mean, when you shift your attention somewhere else, just call that a reset. There are just so many possibilities for where the equivalent of a reset could happen.

Matt: Okay.
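A rough way to see what the reset buys you — reduced here to first-order transition counts, which is a big simplification of temporal memory: without a reset, the last element of one sequence becomes spurious temporal context for the first element of the next.

```python
from collections import defaultdict

def learn_transitions(sequences, use_reset):
    """Record which symbol follows which across a stream of sequences."""
    transitions = defaultdict(set)
    prev = None
    for seq in sequences:
        for sym in seq:
            if prev is not None:
                transitions[prev].add(sym)
            prev = sym
        if use_reset:
            prev = None   # the reset signal: drop all temporal context
    return dict(transitions)

seqs = [list("ABCD"), list("XBCY")]
with_reset = learn_transitions(seqs, use_reset=True)
without_reset = learn_transitions(seqs, use_reset=False)
# Without a reset, a spurious D -> X transition is learned across the
# boundary between the two sequences; with a reset it never appears.
```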
Marcus: And really — I say it grows forever, but part of that is because of how it's currently implemented, to not really have very good cleanup. Yeah.
That kind of thing — so that could be a solution to the everlasting growth. It's not fundamentally flawed; but yes, it will keep learning higher and higher-order context. Yeah.
Matt: ...something like a memory that could really remember songs, which have so many hierarchical structures, you know — patterns within patterns within patterns. It just feels like that sort of thing doesn't necessarily take your attention. It's just part of the structure of the data, right? It's part of the structure of the world. You don't necessarily have to attend to noticing it; it just is.
Marcus: ...a huge tree. Because if you have a subsequence that's shared between sequences, what eventually happens is that every element in that subsequence gets a totally different SDR, so this shared subsequence — in many ways, not only is it not useful; shared subsequences actually make learning harder. It takes —
Matt: It's sort of funny, though, because we're talking about hierarchical structures in data, like songs or whatever, and in order for our brains, at least, to understand this, it does seem like a hierarchical structure within the brain has to exist, because that's what's going to identify those patterns within patterns, right? That's pretty interesting.
Marcus: Yeah — hierarchy in some sense has to exist. The question is whether — I mean, I'm just talking here; this isn't Numenta internals or anything — but the hierarchy might be embedded within a layer. Maybe layer 2/3 has, like, things pooling things which are pooling things — maybe it has a mini-hierarchy inside of it, you know.
Marcus: That's what good data and experiments are for, right? Yeah, basically. So it might not be the case that the hierarchical nature of the cortex is solving that problem; it might be solving a different problem.

Matt: Mm-hmm.