From YouTube: Catching up on a week's worth of HTM Forum conversations
Description
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
I am going to be using some of this time today, on my weekend, to catch up with a week's worth of HTM Forum conversations, so first I will go to the forum. How long have I been doing this? I have only recently started doing Twitch stuff, and then I took a week off and was totally offline for a week, at my kid's science camp. Okay.
Jump in on the forum conversations if you feel like it. I think maybe I should put the link up again, just in case it forgets. I don't want to paste the giant link every time, so I'll just put it up now, in case anybody happens to be online, because look, there's usually people online.
This is the broadcaster setup, so I should see the chat. You see the chat pop up, but it's all white, so you can't really actually see it. I should probably fix that, but I don't really know how; I'm still figuring it out. It doesn't show up on the white background very well, but that's okay. Back to the forum stuff. Twitch is telling me I'm live-streaming; obviously. Okay, so I'm going to go through the forum and start answering questions, because I missed all these topics.
So there was a lot of nice, interesting conversation; we'll find out. "How much to trust swarm results?" This is Sam; we've been going back and forth on this. It started with the question about the adaptive scalar encoder, because the swarm was returning an adaptive scalar encoder, and it really shouldn't be, because we don't even use the adaptive scalar encoder anymore. He was asking if he should trust it, and I told him no. But now we're dealing with some other issues, and I don't even know.
What's going on here? Everything seems to be plotted in different resolutions, and I'm totally confused about what's going on. I asked Subutai to answer this, but I know he's been super busy. He was gone for a week or more, and he just got back and is swamped with stuff; he's got a paper he's working on, and I know this is not a high priority for him. I don't even know if he saw this message, honestly, but I don't know the answer. He asked: why is the bucket index...?
So there's some bug in Sam's setup, and I can't figure out what it is. I don't understand why, when he sends in a new row of data, he gets back a prediction that references the encoder's bucket index as the value, the most likely value, and not the actual decoded value. So that's the problem, but I don't know why. I will have to dig into this. I'm not even going to answer it, because I don't know the answer.
I don't know the answer to this question, so I'm just going to leave it. I hate leaving things like this, but I just don't know the answer. Next question: installing NuPIC on a Raspberry Pi. Oh, these topics. People have been installing NuPIC on a Raspberry Pi since, like, 2015, I think, was the first time I saw somebody do it, which was pretty impressive, because it's really hard to get all of the C++ binaries compiled properly on an ARM; you have to bring in so many external libraries yourself, and this is totally not my forte.
It looks like Paul's definitely got a lot of experience on this aspect of NuPIC. By the way, NuPIC means Numenta Platform for Intelligent Computing. It's the Python/C++ codebase; there's a C++ part of it for a fast implementation of hierarchical temporal memory. So there's a lot of traffic on this forum about NuPIC-like technical things; it's software. "I was able to start the HTM service with basic functionality, without errors, on a Raspberry Pi 3 B+." Cool. "In total I spent not much more than two hours."
So I think this is him pointing out some things in Paul's tutorial. And Paul, do you like this? I like it too; good job. I'm going to call him a community helper. If you find some bad documentation or something and point it out, that's good juju! That's a community helper!
Okay, thank you, George. I don't think I need to respond to that. Bouncing back here, because I can't tell how many users are watching. So there's three people watching; that's nice! Anybody, if you have a question about anything, feel free to jump on chat and I will take questions. Next topic: path anomaly detection using grid cells and temporal memory.
Okay, is this running? Because that looks cool. He's doing some type of grid module style encoding. It looks like you can see the actual encoding on the top here, where it says grid cells, and then this is the prediction of where it's going to be next. So he's running a simple temporal memory, and he's got an anomaly score.
Let me put that right here, so you can see it full screen. Okay... oh well, good. Hold on. Alright, so you can see that anomaly indication there, like the history. He just moved it, and the anomaly went boom. So the algorithm was learning this path in 2D space, and then, when he moved it over, the anomaly bumped up. Then he moved it back, and...
...the anomaly goes back down, which is cool. And did you see those little spikes in it, those little things? That's probably because of every time it crosses the path that it's been on before. That's not really anomalous; what is anomalous is when he moves it completely into a different place.
So, for those of you who are watching and not in the community: the community members are going to know what I'm looking at, sort of. But three Twitchers (is that a good term?), three people on Twitch: one of the things we know about grid cells, and you can look up grid cells, there's an HTM School episode about it, just search for grid cells on YouTube.
Grid cells can represent the location of an object through space, and temporal memory, the TM part of HTM (hierarchical temporal memory), can remember that location in space over time, based on how the grid cell modules update as the thing moves through space. So what he's doing is sort of training a system, or not really training: the system is just observing an object moving through time, and it gets used to it after a while, because it's always predicting. Your brain is always predicting everything it's observing.
It's always predicting what it's going to see next. So again, the HTM algorithm is always creating predictions about what it thinks is going to happen next, or what sensory input it's going to see next. And in this case, when he moves the track, you know, the object moving along it, to a new place, then its predictions all become wrong. It hasn't seen an object move over here yet, so it takes a while for it to learn it.
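The spike-and-recover behavior being described can be sketched with a toy anomaly score: the fraction of currently active bits that were not predicted on the previous step (a simplified stand-in for the real anomaly computation, not NuPIC's actual code):

```python
def anomaly_score(active, predicted):
    """Fraction of currently active bits that were not predicted."""
    if not active:
        return 0.0
    return len(active - predicted) / len(active)

# A well-learned path: the predicted set matches what becomes active.
assert anomaly_score({1, 2, 3}, {1, 2, 3}) == 0.0
# The path is moved somewhere new: none of the active bits were predicted.
assert anomaly_score({7, 8, 9}, {1, 2, 3}) == 1.0
# Partial surprise, e.g. crossing a previously-seen stretch of path.
assert anomaly_score({1, 9}, {1, 2, 3}) == 0.5
```

As the system keeps observing the relocated path, more of the active bits become predicted again and the score falls back toward zero, which is the recovery seen in the demo.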
A
Okay,
okay
I
see
it
over
here
now,
because
that
builds
up
over
time.
New
synapses
this
that
links
locations
in
space
like
as
it
sees
it
over
and
over
and
over.
It's
like
it's
like
dialing.
It
in
you
know,
there's
a
new
synaptic
row
every
time
it
observed
something
and
after
a
while
it'll
have
learned
like
many
different
patterns.
However
many
it
might
have
observed
and
you
can
sort
of
bounce
between
them
and
that's
one
of
the
things
that
Marty's
showing
here.
HTM is a continuous online learning system. All he's sending in is the location of this circle at this point in time. Then at the next step it moves a little bit, and he sends in that new location, and that location is encoded into a sparse representation using this grid cell mechanism: grid cell sparse activations.
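That location-to-sparse-code step can be sketched with toy grid cell modules, each tiling space at its own period and contributing one active cell (an illustrative simplification, not Numenta's actual grid cell code; the module periods here are made up):

```python
def encode_location(x, y, modules=((4, 4), (5, 5), (7, 7))):
    """Encode an (x, y) position as a set of active cell ids.

    Each module tiles space with its own (period_x, period_y). A
    position activates exactly one cell per module, so nearby
    positions share many active cells and distant ones share few.
    """
    active = set()
    offset = 0
    for px, py in modules:
        cell = (x % px) * py + (y % py)  # which cell fires in this module
        active.add(offset + cell)
        offset += px * py  # each module owns its own range of cell ids
    return active

code = encode_location(2, 3)
assert code == encode_location(2, 3)  # deterministic: same place, same code
assert len(code) == 3                 # one active cell per module
```

Feeding a stream of these sparse codes to temporal memory is what lets it learn the path: each location's code becomes the context for predicting the next one.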
It will continue to update the synapse weights (permanence values, we call them) and continue to learn as something moves, as long as you have it learning. You can turn learning on and off, but HTM is definitely a continuous learning system. It's very different from, like, Bayesian networks or deep learning networks, where you have a huge space of spatial data and you sort of process it all at once and extract properties of it.
You know, like facial recognition software, image classification: all the cool image stuff that's come out lately has all been these giant deep learning networks, convolutional neural networks, that can find features in images, recall all the different places they've seen those features, and put things together based on that. But it's all using probabilities and spatial classification techniques. What we're seeing here is temporal prediction, through time, of an object moving in space, which is a very different problem.
It's similar to memory in your brain. It's not like you have to dump memory in order to learn something new. But all the time, when you're learning something new, the old memories fade into the background; you're not exercising those synapses anymore, so they become less strong.
It's more anomalous than it was when he had it there before, because it's been observing a bunch of other stuff, so some of those old synapses will become overridden, in a way, because it's had to learn this other stuff. There are actually settings in all these algorithms to decide how quickly synapses fade away, how quickly we forget, and there are lots of ways you can go in and trim or prune different permanences or synapses to try and optimize. But that's a very big topic.
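Those fade-away settings boil down to an update rule like this sketch: reinforce synapses that were just used, decay the rest, and prune anything that reaches zero (parameter names and values here are illustrative, not NuPIC's actual ones):

```python
def update_permanences(perms, used, increment=0.1, decrement=0.02):
    """Reinforce used synapses, decay unused ones, prune dead ones.

    perms: dict mapping synapse id -> permanence in [0, 1]
    used:  set of synapse ids that participated on this time step
    """
    updated = {}
    for syn, p in perms.items():
        p = min(1.0, p + increment) if syn in used else p - decrement
        if p > 0.0:  # prune synapses whose permanence hits zero
            updated[syn] = p
    return updated

perms = {"a": 0.5, "b": 0.02}
perms = update_permanences(perms, used={"a"})
assert abs(perms["a"] - 0.6) < 1e-9  # reinforced
assert "b" not in perms              # decayed to zero and pruned
```

The `decrement` here plays the "how quickly do we forget" role: a larger value makes unexercised memories fade faster.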
No problem, Eric; come back any time. I'm going to stream on Monday, so hopefully I'll see you there; this is just a test for me. Marty, that was really cool. I really need to get this working. Maybe, since it's in C++ and I follow C++, maybe I can do this on Twitch: I can try to get your project running on another one of my work sessions on Twitch, and some of the C++ people that I've found there might be able to help me out.
This is the bad thing about Twitch: my half-baked thoughts. If I do this while I'm actually updating the forums, I'm going to have half-baked thoughts, like, yeah, maybe I shouldn't say that; but then it's already live. I don't know. I was going to say next week, but I really don't know, so I'll just say I'll try. I'll make it a higher priority and leave it at that. Okay.
Okay, so: "can previous inputs..." I think we've already answered this question; Paul stepped in and helped with this. Yeah, I think you can do this with sort of a backwards-looking classifier. We've got the classifier, a back-looking classifier, that we'd like to be able to remove, so you can create and run HTM in a programmatic manner. I think this is an old topic, isn't it?
Obviously I shouldn't be going around correcting other people's misspellings, should I? Thanks, Marty, for answering him. Oh, the swarming one; so this was while I was gone. "Do you have any example of doing swarming with the algorithms API?" Not really. So, for anyone watching: swarming isn't very compatible with the algorithms API. You're going to have to build your own network API or algorithms setup. Swarming sort of gives you this canned file with model parameters; it's like a configuration file that works with the OPF.
So if you do swarm and you find model parameters, you can certainly use those. Just look at them, and then go through the API docs, like here on the NuPIC API Quick Start... here we go... no, not that one... okay, Algorithms API. So then you could go through, and if you look at this, I think... maybe I'll just link you to this quick start, because this quick start is great.
It shows you, using the exact same model parameters file in YAML, how to use those parameters to instantiate an OPF model, then how to use those parameters to instantiate a model with the raw algorithms API, and then how to use the same configuration file to do it with the Network API. So I'm going to refer you to this.
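The gist of that quick start, one swarm-produced configuration feeding more than one construction path, can be sketched like this (a plain dict stands in for the real model-params file, and the keys here are only illustrative of what a swarm emits):

```python
# A swarm emits one nested config; different APIs consume slices of it.
model_params = {
    "spParams": {"columnCount": 2048, "potentialPct": 0.85},
    "tmParams": {"cellsPerColumn": 32, "activationThreshold": 16},
}

def build_sp(sp_params):
    """Stand-in for constructing a spatial pooler from its sub-config."""
    return ("SpatialPooler", sp_params["columnCount"])

def build_tm(tm_params):
    """Stand-in for constructing temporal memory from its sub-config."""
    return ("TemporalMemory", tm_params["cellsPerColumn"])

# The same parameter file drives both construction paths.
sp = build_sp(model_params["spParams"])
tm = build_tm(model_params["tmParams"])
assert sp == ("SpatialPooler", 2048)
assert tm == ("TemporalMemory", 32)
```

With the real library, the OPF path hands the whole dict to a model factory, while the algorithms path unpacks the sub-dicts into the individual algorithm constructors; the quick start shows both against the same file.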
"HTM Pony", yeah, so he's got a Pony implementation, and apparently it's an experiment on distributed neurons and voting between models. So this is a big topic, because with something like Erlang, HTM should be parallelizable at some point. The bottleneck there is really hardware, but in software there are probably more efficient ways we could build HTMs so that they could be distributed, and I think moving towards something like Pony is definitely a step in the right direction.
I can't tell if you're a spammer... I can't tell if you're a spammer. Okay, I'm going to err on the side of caution: I'm going to assume you're a real person, although this does look like a spammer's message, because usually it's a very short message with one link to a suspect page, and this page is vaguely suspect. So I don't know.
Breznak, whose real name is Marek Otahal, is a long-term committer on NuPIC, and he's heavily involved in the community fork of NuPIC. "The idea is to ditch separate code for SP and just use the TM." Oh yeah, yes; basically he's saying this is done in the community fork already, which is great. I love what you guys are doing in the community fork. Next: oscillations, travelling waves. Hey, Gary's back! I haven't seen Gary in a long time.
Gary did some really cool animations in this virtual world, this little icon here. You should go check out some of the stuff this guy did in his post activity. Look at this stuff; it was amazing. I love it... see why this is so cool? Anyway, if you look, there are some videos of the stuff running, and it's just really neat. He's really interested in oscillations; a lot of this stuff deals with oscillations.
Okay, they're talking about a paper. Oh yeah, because Mark Brown, our Bitking, just posted this Nature paper, and I don't think I've read it; I don't think I've seen it. Why did it scroll all the way down? There was recently something that came across my desk, but it wasn't this one. So Gary says he's still confident. I don't want to go through this whole video; it's only 26 seconds, that's cool!
Yeah, so both Gary and Mark Brown, I think, are on a similar page here. They have some other higher-level theories, and HTM will hopefully play a role in those theories, perhaps at a local level, but they're sort of putting together more of the whole brain: how does this region connect to that region? I'm interested in that too, but I can't focus too much on it.
So he's saying there are suspicions that these oscillations arise in the thalamus and are injected into the cortex at layer 4. That could be... Casey, no, this is Casey; he seems to be pretty well informed about neuroscience stuff. "From what I've read, some oscillations are sent from higher-order thalamus to primary cortex in L2/3, possibly L5a, and another element..." There are so many connections.
I like how he does this: if he goes off on a detailed tangent, he puts it in this little collapsible thing. Casey knows a lot about barrel cortex and rats, rat whiskers. Rat whiskers are a great way to study cortical columns, because each rat whisker seems to have a direct connection to one cortical column in the rat cortex.
Well, Mark's got theories about hex-grid formation: L2/3 maybe working at a higher frequency, like gamma, to do local voting on hex-grid formation, or macro-column voting. Yeah, so these different oscillations have got to be doing different things. They could have to do with scaling, perhaps; they could have to do with... I don't know, there are a lot of things they could be.
So, the Thousand Brains... the one thing, I think: if you look at the 2D object recognition problem that we're working on, I keep waving away the idea of displacement cells, because that adds an extra complication and I don't quite know how it's going to work. Displacement cells allow you to construct objects out of other objects, and...
Mark wonders if there's a path from lower brain structures through the thalamus to the cortex. I assume there are paths from lower brain structures through the thalamus to the cortex. Oh, maybe there's not... because, I mean, all sensory input goes through the thalamus, I think, except for one sense. Which sense is it that doesn't? I can't remember... it's olfactory, it's something like that.
Okay, a couple of new things. This one is from one day ago, 13 hours ago: "Spatial pooler serialization problem: too big. I have stored a spatial pooler as follows: open temp file, write to file. I cannot reload it using a third file: 'expected total words to be less than or equal to traversal limit; message is too large; increase the limit on the receiving end; capnp reader options.'" Okay.
A
The
above
error
message
suggests
setting
the
reader
option.
I
have
figured
out
what
needs
to
be
done,
set
the
option
call
transversal
limit
in
words,
which
is
a
default
value
of
8
x,
1024
x,
10
24,
but
I
cannot
figure
out
how
to
change
it.
I
figured
out
that
captain
fee
is
a
C++
program,
but
I
cannot
work
out
how
to
set
this
parameter
from
Python.
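As a sanity check on the numbers quoted in the post: a Cap'n Proto word is 8 bytes, so the default limit of 8 x 1024 x 1024 words caps messages at 64 MiB, which is indeed smaller than the poster's serialized spatial pooler:

```python
WORD_BYTES = 8  # a Cap'n Proto word is 8 bytes
default_limit_words = 8 * 1024 * 1024
default_limit_bytes = default_limit_words * WORD_BYTES

# 64 MiB default cap versus the poster's ~83 MB file
assert default_limit_bytes == 64 * 1024 * 1024
assert 83_000_000 > default_limit_bytes  # so the read is rejected
```

That is why the error only appears once the spatial pooler grows past roughly 64 MiB on disk: the reader refuses the message unless the traversal limit is raised on the receiving side.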
Yeah: "What is the normal size of a spatial pooler when stored by this method? Mine is 83 megabytes." Yeah, I mean, it depends. It depends on what you did with it, and it depends on how big you made it, because the spatial pooler is entirely configurable. If you make it a million cells, it's going to be huge. So how big did you make it? "Do I need to make it smaller by reducing the size of my input?" No, you need to make it smaller by reducing the number of columns. The number of mini-columns, excuse me.
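To see why column count is the lever: each mini-column carries its own pool of potential synapses, so serialized size grows roughly as columns times synapses per column. A back-of-the-envelope estimate (all numbers hypothetical, just to show the scaling):

```python
def estimated_sp_bytes(num_columns, synapses_per_column, bytes_per_synapse=5):
    """Rough serialized-size estimate: per-synapse data dominates."""
    return num_columns * synapses_per_column * bytes_per_synapse

big = estimated_sp_bytes(num_columns=65536, synapses_per_column=256)
small = estimated_sp_bytes(num_columns=2048, synapses_per_column=256)

assert big // small == 32  # 32x fewer columns -> roughly 32x smaller file
```

Shrinking the input only helps indirectly (fewer potential synapses per column); cutting the mini-column count shrinks the file proportionally.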
In it, so whatever is emitted here can now have a traversalLimitInWords. So what's it called, StreamFdMessageReader? That looks like it's private or protected. So where is the public API, FlatArrayMessageReader? A message reader schema... packed? Oh, packed, I don't know. I don't know what that file is.
Yeah, the application would be the dog show; the dog show would be like the evolutionary environment. The applications are evolving within it, and their internal states wouldn't matter. Okay, good; that clears things up. This is also me just catching up with the conversation, too.
"Creatures being judged need to place value on their own internal models based on how well they're meeting their needs." Yes; this of course requires them to know their own internal state. "This internal evaluation should self-assemble, without requiring a third party to view the creatures after the rounds and interpret them." I agree. The judge is a third party which only needs to generate composite intelligence scores based on the results of various tasks: a purely objective function, with no acquired knowledge of internal states. Yeah, okay, yep, good, good.
Yes, like a baby, starting random and gradually becoming purposeful. This is how AGI will be created: like a baby, starting with random movements and gradually becoming purposeful. I believe that wholeheartedly. We are going to create a structure for intelligence to emerge. It will explore space through initially random movements as it's building up a model of space, a model of itself, and a model of its interactions with space, and at some point it will have learned the model well enough that it can navigate. Now, I don't know about goals, rewards, punishments, and I don't even...
Sorry about the background noise; my kids are upstairs bouncing off the walls, apparently. Okay, distributed, good. So I have to compare this to swarming and apply lessons learned from swarming. I don't think it should matter what language it's in; it should be language-agnostic, implementation-agnostic. It should just be a way to judge a system, an agent or something, performing some general intelligence tasks.
"New challenges could be added." Yeah, I like this idea. Like I said, I still think, and I think Jeff thinks this too, that robotics is going to be probably one of the first major applications of HTM, and that's all going to start in simulated environments like we're talking about right now. This could be a toy simulated environment of however many dimensions you decide it will be; I'm working on two-dimensional environments.
A
The
thing
about
working
in
two
dimensions
is
I'm,
pretty
sure
that
scaling
to
higher
dimensions
is
not
going
to
be
a
huge
deal
now.
Orientation
is
gonna,
be
a
big
deal
but
I'm,
not
sure,
but
I,
don't
think
adding
a
third
I
think.
Once
you
go
to
dimension,
that
I
mean
that's
many
you're
in
many
dimensions.
Now
you
just
add
another
add
another
dimension
to
the
arrays.
You
know
what
I
mean.
It seems so. Let's read about cell death. Okay, cell death; I didn't read this before I left. "I don't understand much about what cell death is in the brain and how the spatial pooler can implement cell death, which it currently doesn't. As I understand, if all connections in the receptive field of a column are connected, it means that this column is dead." No, no, no! That's not what that means.
"If all connections in the receptive field of a column" (and we're talking mini-columns here, the receptive field of a mini-column) "are all connected, this means that the column is dead." I don't think so. Jimmy already answered this. "Death of neurons due to Parkinson's, Alzheimer's... death by over-connection. I see some simulation about cell death; I don't understand this concept." Well, we don't have a simulation of cell death... oh yeah, I know what you're talking about: one of the spatial pooler papers.
"Tolerance to simulated cell death": I think this is what he is referring to. We did not implement... how do I say this... it is not a part of the SP. Let's call this a fake culling process; it is just something that the SP handles well. Yeah, cells die all the time; sometimes it's random. "A spatial pooler that can simulate cell death... all good models of the brain should be able to survive it." Yes. This is actually dmac.
Okay, with that... hey, what's going on here? I'm still alive; that's good. Three viewers, hello viewers! Does anybody have any questions, or want to guide me in some direction? I'm just going through more forum messages that I haven't read. This is dmac again; I think I already answered one of his questions. "Gosh, I wish NuPIC worked better on Windows." So, NuPIC with capnp: capnp does not work on Windows, so NuPIC with capnp does not work on Windows.
Capnp is Cap'n Proto; it's short for Cap'n Proto, and it comes out of the protocol buffers lineage. It's a way to serialize any object in any language at any time, so you can send it over the wire and something else on the other side can deserialize it and use it, and it's very flexible. But it has been such a huge pain to deal with that I don't want to use it anymore, and we're going to pull it out.
"Isn't this good enough?" No... yeah, you can use the serialization to handle the things that capnp creates any way you want. I mean, you can just write them to disk, you can put them in a stream, you can use web services, you know, whatever you want.
I'm not going to fix serialization... I don't know what that was about, but I'm going to move that. Okay, what other topics... "Regarding multi-step prediction and anomaly detection"; that's in the category I just created. "Hi everyone, when you guys say the model will make multi-step predictions, what does it actually mean? Will the model make predictions for the next second, or the next minutes? If I'm doing a time-series analysis, does the next-step prediction depend on the way..."
"...I input my data?" Okay, so Paul's explaining that the multi-step prediction is a trick of the classifier. NuPIC, or excuse me, HTM, makes predictions; your brain is constantly making predictions, but it's based on sort of the time frame, or the context of time, in which you observe the world. So you're making predictions about what's coming next, immediately next. You can also think ahead and use your imagination to imagine what you're going to be doing tomorrow and predict what's going to happen tomorrow.
A
That's
not
the
type
of
prediction
that
we're
talking
about
right
now.
At
this
low
level,
the
the
low
level
type
of
temporal
predictions
that
are
be
made
that
are
being
made
in
response
to
sensory
input
is,
is
basically
what
am
I
going
to
feel
as
I
move
through
space.
Those
are
the
types
of
predictions
I'm
talking
about.
What
are
we
going
to
observe
as
we're
moving
through
space?
A
That's
what
HTM
is
doing,
so
your
brain
is
not
necessarily
predicting
what
am
I
going
to
predict
one
time
step
next
vert
or
a
hundred
times
steps,
and
that's
when
I'm
over
here
you
know:
that's
that's
not
what
we're
really
those
aren't
the
types
of
predictions
that
we're
doing
when
we
do
a
multi-step
prediction,
it's
really
something
that
the
classifier
is
performing.
It's
saying,
okay,
based
on
this,
knew
this
cell
state
and
everything
you
tell
the
classifier
you
want
to.
You
want
to
learn
one
step
ahead.
A
It'll
keep
track
of
all
the
all
the
last
predictions
and
how
well
it
did,
and
it
will.
It
will
go
back
and
decode
that
into
the
language
of
the
encoding
space
and
tell
you
the
probabilities
of
what's
coming
next
and
when
you
do
a
multi-step.
It
basically
does
that
exact
same
thing,
except
it
doesn't
look
one
step
back.
It
looks
X
steps
back.
So
it's
not
a
trick
of
necessarily
of
HTM.
That's
giving
you
the
multi
step
prediction
ability
it's
how
the
classifier
set
up.
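That look-X-steps-back bookkeeping can be sketched with a toy classifier that pairs the state from `steps` ticks ago with the value arriving now (a bare-bones illustration of the idea, not NuPIC's SDRClassifier; all names here are made up):

```python
from collections import deque

class ToyMultiStepClassifier:
    """Learn 'state at t minus steps -> value at t', then predict ahead."""

    def __init__(self, steps):
        self.steps = steps
        self.history = deque(maxlen=steps)  # states from the last N ticks
        self.table = {}                     # past state -> value seen later

    def learn_and_predict(self, state, actual_value):
        # Associate the state from `steps` ticks ago with today's value.
        if len(self.history) == self.steps:
            self.table[self.history[0]] = actual_value
        self.history.append(state)
        # Predict what should arrive `steps` ticks from now.
        return self.table.get(state)

clf = ToyMultiStepClassifier(steps=2)
seq = [("a", 1), ("b", 2), ("a", 1), ("b", 2), ("a", 1)]
preds = [clf.learn_and_predict(s, v) for s, v in seq]
assert preds == [None, None, 1, 2, 1]  # learns the 2-steps-ahead mapping
```

The temporal memory itself only ever predicts the immediate next state; everything "multi-step" lives in this kind of reverse lookup on the classifier side.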
So yes, Falco says: "If you get an input every ten ticks, you'll get a prediction on a ten-tick interval." Exactly. Well, that's as long as you're not inputting empty information: tick, tick, tick, tick, and every tenth one you get something.
If you decide to predict one tick ahead, for example, that's different from ten ticks ahead. The hard thing to realize is that your brain is not time-stamping every moment that comes into it. It experiences time relatively, based on a lot of internal conditions, chemical balances in your brain or whatever. You know, when you have a lot of adrenaline running, time seems to slow down.
So time is a bit warped in your brain; time doesn't mean as much. It's not as solid and stable in your brain as it is, I think, in reality, or in how we perceive reality. Oh yeah: "So if you have a regular spike every X ticks, and otherwise nothing in the stream?" Yeah, yeah. So if you get a spike every time: if you're one step away from that spike, it'll predict it; if you're two steps away from that spike, it won't predict it, because that's not coming next.
A
It's
not
coming
next.
However,
if
you
have
a
two
step
prediction,
a
multi
step
prediction,
you
say,
don'ts
predict
what's
coming
next
predict
what
you
think
is
coming
two
steps:
it's
just
you're
telling
the
classifier
to
keep
track
of
the
last
two
steps
and
to
do
a
reverse
classification
of
it.
It's
it's
not
as
I
think
Paul's
probably
explains
just
fine.
Yeah, sample data; give us the sample data. "How do you set that prediction request, the multi-step?" It's a parameter in the model params for temporal memory; I think it's steps ahead, something like "prediction steps ahead", something like that. Oh, I don't know what that is; something's notifying me. Somebody else is live on Twitch. I've been perusing Twitch lately, so I signed up to watch a whole bunch of people's streams.
Okay, super specific questions about coding, I think. Oh great, alright, someone's trying to write an HTM in C++. Cool. Are you new...?
Let me spin around a little bit... joined March 1st... not too much, okay; I just like to see how much people have been reading. "How does the brain do it? How do you set that prediction request?" I don't know how the brain does it; it's a good question. I don't know. But when you start talking about predictions five minutes out, it's not the same type of prediction. It's not a sensorimotor prediction; you're imagining, you're bringing in the whole... it's not just your immediate state of things.
It's the state of things outside your immediate sensory input. When you think about what's going to happen in five minutes, you have to think about things that you're not immediately observing. You know what I mean? It's not the same type of prediction. I don't know, and I really don't know how it works. I mean, AGI is not right around the corner. Okay, so, about the pseudocode.
"Segments for column", right... his answer is: given a set of segments, return the segment... He's working on his own implementation. He used to work for Numenta; he did all of our web stuff for a long time, and it's great engineering. It looks like he is letting them know, right, when it says "on a cell", like the synapse went from the cell; it's forming synapses, distal synapses.
All of these distal segments we're just putting into a big array. We're not saying some are close and some are further away, and some are over here and some over there; it's all sort of random, and they're just in a big array. So "on the cell" just means it's somehow attached to the cell; it's not incoming. And we're simulating that the mini-columns are basically just one neuron; we're saying they're all going to share the same input, but that's for proximal input, not distal. So.
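That flat-array arrangement can be sketched as a minimal data structure: all segments in one list, each tagged only with its owner cell, with no geometry at all (field names are made up for illustration):

```python
class Segment:
    """A distal segment: a bag of synapses tagged with its owner cell."""
    def __init__(self, cell, synapses):
        self.cell = cell          # "on the cell" is just this back-pointer
        self.synapses = synapses  # presynaptic cell ids, no spatial layout

segments = [  # one flat array, no notion of near or far
    Segment(cell=0, synapses=[10, 11, 12]),
    Segment(cell=0, synapses=[20, 21]),
    Segment(cell=1, synapses=[10, 30]),
]

def segments_for_cell(cell):
    """Every segment attached to a given cell."""
    return [s for s in segments if s.cell == cell]

assert len(segments_for_cell(0)) == 2
assert segments_for_cell(1)[0].synapses == [10, 30]
```

Proximal input would be shared per mini-column; these distal segments belong to individual cells, which is the distinction the passage is making.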
Okay, I don't know if this is a newbie conversation. I think I might move this; it's getting quite theoretical here, but I'm going to let it go until the OP comes back with some more detailed questions, because I'm not sure where exactly he's going with this. This isn't a newbie question; this is going to go in HTM Hackers.
Alright, what's going on here? So, a couple of viewers; thanks for watching. I know Falco is probably still there. Whoever else is online, I'd love to hear you pipe up and let me know what you're working on, and whether you're coming from Twitch or from the forum. Oh man, this is a deep, detailed one.
I'm just not good enough at the neuroscience. This is so packed with info. Like I said, Casey works with, or has a lot of knowledge about, barrel cortex and rats, and so a lot of this, I think, is informed by that; it's really a great place to study cortical columns. So I imagine all of this stuff is really useful, but wow, that's a lot of notes. He's been working on this for a while, and it's got links to sources.
I have to... I know I've given Casey so many of these awesome badges, but I'm going to keep doing it. This is for when you can put something like this together. I haven't even read it yet, but I can tell that he has put sources together in a really interesting way, in a way that is trying to communicate. It's this type of stuff...
...that can help you with further theory. If someone summarizes a bunch of papers and says this is connected to that, and this leads to that, and this links to that, it's super helpful. I'll let the research guys read this as they want to, because I think most of them have seen the neuroscience stuff; whenever I bring stuff up, they're usually like, yeah, I know. They have a pretty good idea of the connections. It is, yeah.
A
Getting really good at monitoring, and with some thinkers on it, we're gonna discover a lot about the hierarchy soon enough. Oh, I missed the depth issues. We thought it was feedback: the L6 corticothalamic projections, from cortex to thalamus coming out of L6, are usually thought of as feedback down the cortical hierarchy.
B
A
Well, if you look at the thalamus and how it projects to cortex, if you look at the map of the areas of the thalamus and what areas of neocortex each projects to, it's like a mini neocortex. The visual areas are here, and in the cortex the middle areas are here, and they match up. I've got a picture up here.
A
There it is, the thalamus, and it just projects up, right here. Here's the right picture. The thalamus is all broken up into pieces here, and each piece projects to a different part of the neocortex here, and they're matched: this one goes to the same side and the same location. So the little football is sort of a mini version of half of your neocortex, and the projections go right into the same area of the neocortex, up and down.
A
So, you know, we think that... all the thalamus researchers like to call the thalamus a blackboard. They use the metaphor of a blackboard, but you have to think of it as a really, really fast blackboard. I think it's like projecting what you think is gonna happen, then overlaying what actually happens, then erasing it, and doing that extremely quickly, in a way that you can compare immediate sensory input with predicted sensory input and respond.
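That overlay-and-erase cycle, as I'm describing it, might look something like this toy sketch. To be clear, this is my own loose illustration of the metaphor, not a model of the real thalamus; the function name and the bit-set representation are made up:

```python
def blackboard_step(predicted_bits, actual_bits):
    """Compare one moment of predicted vs. actual sensory input."""
    predicted = set(predicted_bits)
    actual = set(actual_bits)
    confirmed = predicted & actual   # predicted and it happened
    surprise = actual - predicted    # happened but was not predicted
    # "Erasing it": nothing is stored; the board is blank for the next
    # moment, so there is no long-term learning capacity in this sketch.
    return confirmed, surprise

confirmed, surprise = blackboard_step([1, 2, 3], [2, 3, 4])
print(sorted(confirmed), sorted(surprise))  # [2, 3] [4]
```

The point of the sketch is only the speed-and-erasure idea: each cycle the comparison is made and then thrown away, nothing accumulates.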
A
Somehow. I don't know how it works, but they call it a blackboard. I'm not sure that it has a huge learning capacity; it's other parts of your brain that are doing the learning. But it is like the place where everything, where reality, sort of plays out. It's the first place sensory input hits, and I...
A
What's the biggest concern with... maybe... I honestly don't even know how registers really work, but if it's like the difference between RAM and disk, potentially that might be some way to think about it. Actually, I don't even think RAM makes sense, because you can keep a ton of memory in RAM and leave it there for a long time, and I don't think that's happening in the thalamus. I think it's like an instant, sort of like the now, like what you perceive as now.
A
A
B
A
Put my face back... yeah, I'm stuck. Let's see. Okay, thank you, Paul, for splitting that topic.
A
What do you mean? What do I... When an HTM gets an input, it creates a prediction of what it thinks is gonna happen. But he's trying to, I'm trying to, talk about neural networks. With neural networks you get an input and some output: you get some input and it classifies it to some output. But with HTM, you get input and, yeah, there is no classification. "I don't get it": it's not "nothing happened" or "something changed," I guess. I just won't stay on this one.
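To be fair, there is one standard number an HTM does put out instead of a class label: the raw anomaly score, which compares the previous step's prediction against what actually arrived. Here's a minimal sketch; the function signature is my own, but the formula is the standard raw anomaly score as I understand it from NuPIC:

```python
def anomaly_score(predicted_columns, active_columns):
    """Raw HTM anomaly score: the fraction of currently active columns
    that were NOT predicted at the previous timestep.
    0.0 means fully predicted ("nothing happened"),
    1.0 means fully surprising ("something changed")."""
    active = set(active_columns)
    if not active:
        return 0.0
    predicted = set(predicted_columns)
    return 1.0 - len(active & predicted) / len(active)

# One of the three active columns {2, 3, 4} was unpredicted:
print(anomaly_score({1, 2, 3}, {2, 3, 4}))  # roughly 0.33
```

So the output isn't a class; it's a running measure of how well the model anticipated its own input.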
A
I don't know enough about generative... I don't know enough about GANs to talk about them at all, so I probably shouldn't say anything. That's something I need to look into. But all the examples I see, all the really powerful things, especially pattern recognition, spatial pattern recognition, seem to be some type of classification. And sometimes it's just a classification of what feature exists in the set, you know.
A
B
A
A
B
A
B
A
B
A
Why there should be, like... I thought I had... you know, I'm just talking to myself. A question about sparsity and local inhibition. So, a lot of messages on this forum while I was gone, which is a good thing. I also have problems with sparsity and local inhibition, because I've read the code. And, oh, I think I saw this; someone flagged this for moderation.
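For anyone following along, "local inhibition" in the spatial pooler means a column only becomes active if its input overlap beats most of its neighbors, rather than competing globally. Here's a toy one-dimensional sketch of that idea; this is my own simplification from reading the code, and the function and parameter names are made up, not NuPIC's actual API:

```python
def local_inhibition(overlaps, radius, k):
    """overlaps: per-column overlap scores for columns laid out in a line.
    A column stays active if fewer than k neighbors within `radius`
    have a strictly higher score (ties all survive)."""
    active = []
    for i, score in enumerate(overlaps):
        lo, hi = max(0, i - radius), min(len(overlaps), i + radius + 1)
        stronger = sum(1 for s in overlaps[lo:hi] if s > score)
        if score > 0 and stronger < k:
            active.append(i)
    return active

# With radius=2 and k=1, only local maxima stay active, so sparsity
# adapts to each neighborhood instead of one global winner set.
print(local_inhibition([3, 5, 2, 0, 4, 4, 1], radius=2, k=1))
```

The practical upshot is that activity stays spread across the input space: a strong region can't silence a distant weaker region, only its own neighborhood.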
A
A
B
A
One hour and 35 minutes. All right, so it took me one hour and 35 minutes to get through all of the HTM Forum topics, all of the discussions that happened while I was gone at 6th grade science camp. There we are. Okay, I am going to log out. Thanks for watching, if you're watching. Tune in on Monday: I'm gonna have a live stream Monday at 1 p.m., check my channel. It'll be a chat, so hopefully interactive, about artificial intelligence.
A
Just in general, I'm gonna talk about weak AI, strong AI, deep learning, classic AI, biologically inspired AI, a bunch of stuff, and I'd be more than happy to take questions. I hope people join in and see me then. So I will be back in my main office Monday, 1 p.m. Pacific Time, that's UTC minus 8. Hope to see you there. Thanks for joining. Thank you, Falco, for always being there. See you on the forums.