From YouTube: HTM Hackers' Hangout (July 7, 2017)
Description
Updates on the state of HTM at Numenta and within the HTM community. Anyone is free to join in.
A: The first thing I wanted to talk about: Numenta, the company that I work for and Marcus works for, is continuing to shift its focus from applications of HTM to ongoing research of HTM. You'll see that on our forums and in our codebases as we move ahead. We are planning on releasing NuPIC 1.0 today, I hope. We just released nupic.core 1.0 yesterday, which was a prerequisite, and we're going to do the 1.0 release today.
A: At the very latest I'll do it tomorrow. Once we make that 1.0 release, we are going to put the NuPIC project into maintenance mode, which is a software term meaning we're not going to continue developing features on this codebase. We're only going to work on critical bug fixes that people have identified or we've identified, and also on any requirements from our research code, because our research code is still using NuPIC and nupic.core currently, although I'm not certain that will continue.
A: Our research still pretty heavily depends on nupic.core, not so much on NuPIC itself, but we currently still depend on it, so we're still using it. It's still critical software in our opinion, so going 1.0 is just going to cement that, but we're not going to continue adding features there. We will continue adding features in the research repositories, but that's very fluid in the end.
A: Years ago we wrote down a bunch of milestones, and the community helped identify the priorities for getting to a 1.0 marker in NuPIC. The biggest thing that came out of that was that we needed to split the Python codebase and the C++ codebase, which were tangled together back in 2013. When we released NuPIC as open source, it was really messy, and it took 18 days for anyone to even compile the software. That was Marco de Hall, who is still a committer on the project.
A: We've also added Windows support, which was a community initiative as well, and we have manylinux support on the pip side. So you can now run `pip install nupic` on most platforms aside from 32-bit systems, and it will generally install properly. We also just recently finished new serialization functionality based on Cap'n Proto, which is a much faster serialization library, and I worked a ton on documentation as a part of the 1.0 release. Let me show you a couple of things here.
A: If I share my screen... which one do I want? Let's share this one. So here, for example, is our documentation. I don't know if I've shown this on the Hackers' Hangout before, but if you're a user of NuPIC and you don't know about these docs, you're missing out. You can go to nupic.docs.numenta.org and see all the different versions. I'm just going to go to the latest stable version (actually, I want to go to the dev version), and you can see here under Guides there's a guide for serialization, Scott.
A: It's just, I'm afraid, a safety net. You know, if anybody is using the old methods, we don't want to just rip them out from under them, but I would encourage people to use the new serialization technique if you want to write your HTM algorithm objects to a file. So let me stop sharing my screen. Milestones: yes, I think we've reached all the milestones that we set for ourselves for 1.0. My biggest thing was to have API documentation that covered a clean API.
A: I think we've gone a long way towards that. Just over the past few months we have cleaned up and refactored a lot of stuff, and made a bunch of breaking changes that clarified the codebase as a part of this move. I think the API docs are in a really good place for people who want to use NuPIC for streaming analytics.
A: So that is what I wanted to talk about first. I would like to give the conch to Marcus to give a research update, if you are willing, Marcus, and then I'll have one more thing to talk about when you're done.

Sure, kind of informal, but yeah, I can talk about where we're at a little bit right now. I'd say the first major topic to talk about in current research is that we're wrapping up a paper about the idea that the cortex, well...
A: They learn as the sensor, as your eye or your finger, moves over an object: it is learning the object even at the lowest parts of the cortex, and so there's a lot to cover there in a paper. One big gap in that paper, not a flaw, but just something it leaves undefined, is where that location is coming from. How does your cortex know? Because for this to work it has to, if you're going to keep a set of feature-location pairs.
A: You need to know the location, and that location needs to be relative to the object. It's not that my hand is right here relative to my body; it's that my hand is here on this water bottle. That's a difficult problem that this paper intentionally leaves open for some future research.
A: The paper is taking up a great amount of our time. I'm more focused on computing that location, and I have been living and breathing that for the past four months or so, and I've made some pretty cool progress that has a good chance of really being part of the model. The first stage was showing how a single sensor moving around can basically do this narrowing down.
A: So one of the first big leaps in our thinking on how locations are computed is: a sensor moving causes a union of locations to move. You keep doing that; you keep moving that union of locations around and taking in sensory input, and each time you get sensory input it narrows the union down some, until you have figured out the location. This is still allocentric location; we're talking about it with respect to the reference object, with the attention on one object.
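The union-narrowing process described above can be sketched in a few lines. This is a toy illustration, not Numenta's implementation: the 1-D object map, the feature names, and the `narrow` helper are all invented for the example.

```python
# Toy sketch of narrowing down a location on an object by moving a
# sensor: a union of candidate locations moves with the sensor, and
# each new sensation intersects (narrows) the union.
object_map = {0: "edge", 1: "flat", 2: "edge", 3: "ridge", 4: "flat"}

def sense(location):
    """Feature the sensor feels at its true location on the object."""
    return object_map[location]

def narrow(true_start, movements):
    # Start with a union of every location consistent with the first feature.
    candidates = {loc for loc, feat in object_map.items()
                  if feat == sense(true_start)}
    true_loc = true_start
    for move in movements:
        # Moving the sensor shifts the whole union of candidate locations...
        true_loc += move
        candidates = {loc + move for loc in candidates}
        # ...and the next sensory input narrows the union down some.
        feat = sense(true_loc)
        candidates = {loc for loc in candidates
                      if object_map.get(loc) == feat}
    return candidates

# Starting at location 0 ("edge") is ambiguous with location 2 ("edge"),
# but one movement disambiguates: 0+1 feels "flat", 2+1 feels "ridge".
print(narrow(true_start=0, movements=[1]))  # {1}
```

The same loop keeps running for as many movements as it takes for the union to collapse to a single location.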
A: Yes, this has nothing to do with where I am relative to my body. It is figuring out where on the object I am, starting with nothing: starting with "I reached my finger out and I hit something, what do I do now?" That, I think, is kind of a solved problem; we know how to do that. Then, after that, we really dived into grid cells.
A: Grid cells won the Nobel Prize a couple of years ago; that has nothing to do with us, but a grid cell SDR is, in some ways, a location SDR. It's an allocentric location SDR: it's the animal's location within a room, or within whatever it's in. We tried to derive insights from how those work for our theories and our mechanisms. The big benefit that they provide is that grid cells come in a bunch of modules, and each of the individual modules learns separately.
A: Each module learns how to use the animal's motion cues, its motor actions, its vestibular input, anything that is signaling motion. Each of these individual modules is learning how to do that separately, and they're reused again and again and again in different environments. But the population of all of these modules... I'll answer your question in a second, David.
A: The population of these modules creates a unique location SDR, in a sense. So we pulled that idea in, and now we have this whole idea of using motion to update your location. You don't have to learn that for every single location; you learn it once, in this reusable way, in a bunch of little units, and then it's capable of doing this path integration on novel locations. So there's not an enormous scaling issue of having to learn every single motion for every single location.
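The "learn the motion once, reuse it everywhere" property can be sketched with a drastic simplification, where each hypothetical module is just a phase modulo its own scale. The scales and sizes here are invented for illustration; real grid cell modules are 2-D and far richer.

```python
# Toy sketch of grid-cell-style path integration: each module applies
# the same motion cue independently, as a phase modulo its own scale,
# and the population of phases uniquely codes far more locations than
# any single module could.
SCALES = [3, 5, 7]  # coprime scales -> unique codes over 3*5*7 = 105 positions

def integrate(phases, movement):
    """Path integration: every module updates with the same motion cue."""
    return tuple((p + movement) % s for p, s in zip(phases, SCALES))

def code_for(position):
    """Population code reached by integrating from the origin."""
    return integrate((0, 0, 0), position)

# Each module alone is ambiguous (a 3-cell module repeats every 3 steps),
# but the population code is unique, even for never-visited positions:
codes = {code_for(pos) for pos in range(105)}
print(len(codes))  # 105 distinct codes for 105 positions
```

Nothing here was learned per-location: one shared update rule handles novel positions, which is the scaling benefit described above.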
A: Allocentric versus egocentric: egocentric is relative to your body, and you could probably guess from that what allocentric is. It's relative to something else: an object, a room, whatever; just something outside of you. Fundamentally, if you want to learn objects, you need to factor the egocentric out of the equation. In some sense you need to learn objects in a way where you learn it here, and then you know it here; hopefully you learn it once here, and then you also know it over here.
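For pure translation, "factoring the egocentric out" reduces to a subtraction. A minimal sketch, assuming 2-D coordinates and ignoring rotation, which the discussion notes is a separate open question:

```python
# Toy sketch: a sensor's allocentric (object-relative) location is its
# egocentric (body-relative) location minus the object's egocentric
# location. Rotation is deliberately ignored here.
def allocentric(sensor_ego, object_ego):
    return tuple(s - o for s, o in zip(sensor_ego, object_ego))

# Touch the same spot on the object with the object held in two places:
touch_1 = allocentric(sensor_ego=(12, 7), object_ego=(10, 5))
touch_2 = allocentric(sensor_ego=(52, 37), object_ego=(50, 35))
print(touch_1 == touch_2)  # True: learned once "here", recognized "there"
```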
A
So
rotation
and
translation
might
be
two
different
things:
we're
still
figuring
out
all
the
details
there
so
and
then
I
would
say
the
third
big
leap
that
we're
in
the
middle
of
isn't
within
the
past
couple
weeks,
we're
still
figuring
it
out,
but
it's
the
we
had
a
fun
meeting.
The
other
day,
I
gave
a
big
presentation
how
this
whole
inferring
of
location
is
based
on
meant.
A: Now, that's cool, but it might be secondary. In some ways the primary way that you infer locations is that your sensors know where they are relative to each other. So you close your fingers on the object, and from that point your body knows, in some sense, the egocentric location of each of your fingers, and somehow they need to work together to say: look, I'm feeling this, you're feeling this, you're feeling this.
A
The
only
thing
this
could
possibly
be
is
a
water
bottle
and
right
now,
this
computing
of
locations
requires
you
to
move
with
a
couple
times
before
you
figured
that
out
so
and
it's
a
challenge
to
do
that
in
a
way
that
doesn't
require
you
to
learn
the
object
here
and
here
and
here
and
here
and
here
and
here
and
moving
it
at
all,
all
over
space.
You
want.
You
still
want
to
learn
the
object,
only
once
well
still
well
having
that
capability
and
I
think
we
have
a
way
to
do
that.
A
It
does
that
it
uses
that
in
a
new
way,
I
don't
want
to
get
into
any
more
details,
partly
because
I
haven't
proven
that
it
works,
but
I
think
it
does
so
in
general,
like
the
all
this
location
stuff
there,
we
haven't
really
talked
about
it
because
we're
still
figuring
out
whether
we
like
it's
more
just
like
explore.
This
idea
explore
the
society
explore
this
idea,
then
in
the
future,
we'll
pull
it
all
together.
A
But
the
there
are
some
write
ups
on
the
HTM
research,
repo
and
there's
a
project,
location,
layer
folder,
where
I've
written
up
a
couple
of
these
and
I
think
in
the
next
couple
weeks
a
third
write-up
will
appear
and
after
I've
been
Arif
I'd
that
I'm
saying
with
this
latest
idea.
A: This is a good example of what theoretical neuroscience research really is, because I don't think we have any evidence that there are cells acting like grid cells in the places that we're expecting them to be, but we do have evidence of them within the entorhinal cortex. That's where they've been identified, but we're theorizing that this type of activity could be throughout the cortex, and that's what could be generating this location signal. Right, Marcus? Yes, correct, yeah, that's the theory.
A: Maybe we're thinking about the problem wrong, and it's just this feedback loop of doing both of those. David asked about entorhinal cortex: that's E-N-T-O-R-H-I-N-A-L. It's a part of the cortex that's between the neocortex and the hippocampus. It still has, like, six layers, but it's kind of an evolutionarily older part of the cortex. I'd say the jury's still out on whether it's the same circuit as neocortex; if anything it's probably a subset of it.
A
Doesn't
really
have
a
layer,
for
it
has
a
kind
of
a
nun
cell,
your
cellular
layer
for
anyway,
it's
it's
the
part
of
the
cortex
that
has
grid
cells,
and
it
was
mostly
unexplored
until
about
like
12
years
ago.
2005
was
when
the
big
grid
cell
study
happened,
funny
story,
I,
I,
just
babble
on
a
little
more.
A
The
entorhinal
cortex
turns
out
to
be
like
immensely
important
and
it
won
people
a
Nobel
Prize,
but
it
was
unexplored
for
a
bunch
of
years
by
the
people
who
are
studying
all
this,
like
navigation
and
such
and
like
O'keefe
or
the
guy,
who
discovered,
play
cells
and
eventually
shared
the
Nobel
Prize
for
grid
cells.
For
years
he
just
didn't
explore
the
cortex
around
the
hippocampus.
A
Like
that's
what
needed
to
be
studied
the
whole
time
to
win
a
Nobel
Prize
nice
grid
cells
are
really
fascinating.
You
could
probably
google
it
and
find
some
of
some
of
those
papers
and
research.
I
I
am
probably
going
to
talk
about
grid
cells
on
on
our
YouTube
channel
in
the
future,
but
yeah
they're
fascinating.
The
scaling
is
the
fascinating
part
to
me.
A
Okay,
anything
else
Marcos
you
wanna
mention
now,
that's
it
for
me:
okay,
okay,
so
the
only
type
of
guy
want
to
talk
about
was
as
a
part
of
this
old.
You
know
momentum
moving
towards
research,
as,
as
you
know,
if
you've
been
a
part
of
our
community,
a
big
part
of
my
job
has
been
to
create
example,
applications
and
samples
and
tutorials
and
stuff
on
how
to
build
things
with
with
new
pic
and
HTM
systems,
as
we
shift
away
from
applications
and
towards
research,
I'm
gonna
focus
on
supporting
research
and
education
for
them.
A: I am going to be focusing, at least over the next year, on creating content, focusing on YouTube. So I'm going to continue with the whole HTM School vibe, and I'll probably be working on some more general neuroscience-type educational content that links concepts of neuroscience to HTM.
A: That's the neuroscience you need to know to understand how HTM works, and even some of the more theoretical stuff like grid cells. If the work we're doing pans out and that becomes an important part of sensorimotor theory, I think it deserves a thorough explanation. So I'm going to be working on that type of stuff, and less on applications.
A: So I just wanted to note that you're going to see a lot more YouTube content over the next year. We'll probably change the face of the YouTube channel a little bit, and I think Christy is probably going to get involved in some of that educational content as well. She's our director of marketing; she's made some really nice stuff recently.
A: Glenn, did you have something to say?

B: Oh, mainly I just want to thank Marcus and thank Matt, and I'm excited for all the new YouTube stuff coming up. That's great. And Marcus, that's a great research recap; even though you don't have all the answers yet, the things you're saying are really helpful from the outside, because I miss out on all the conversations you guys have. So your little insight into what's going on is super helpful.
A: Glad you found it useful. Yeah, sometimes you'll be able to find these little write-ups I kind of sneak into the end of the repo for any onlookers. But yeah, I feel for people on the outside, because I was originally in the community studying this stuff from the outside, and there's definitely a lot of talking that happens here that is more, like, week over week.
C: They were working on... they had this experiment that shifted a camera and tried to guess what objects were in a certain field of objects, and then you'd move the object, and then when the camera shifted it would detect that that object had changed. That was...
A: It was based on learning an object by moving a sensor over it, and basically learning all of these, you could call them, first-order transitions: learning this sequence from here to here, and then this sequence from here to here, and so on; learning an object as a series of first-order transitions. I would say the big difference with our current theory is that it pulled a piece of that out.
A: It pulled a piece of that out, so that those transitions, the motor transitions between each point, now happen in a more abstract sense. The word I'm holding back is "location": the idea of the location being pulled out and being a first-class citizen of the cortex.
A: So rather than learning an object as a series of transitions from feature to feature, from place to place, whatever you want to call it, you learn literally a set of feature-location pairs, and the computing of the location is happening separately. In general, I would say that is the latest round of sensorimotor inference's big fundamental benefit.
A
It
only
had
to
learn
it
once
and
then
it
can
handle
all
of
those.
So
the
big,
the
big
change
from
the
current
theory
and
back,
is
that,
like
a
factored
out,
the
location
that
that,
like
you,
can
pull
the
location
aspect
of
this
out,
have
your
motor
actions
update
that
and
then
use
that,
as
as
their
context
for
use
that
as
your
kind
of
distal
input
for
learning
the
future
location,
repairs.
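The difference between the two learning schemes can be made concrete with a sketch. This is an illustration only, not Numenta's code; the objects, features, and `recognize` helper are invented. An object stored as a set of feature-location pairs is recognized regardless of the order in which the pairs are touched, which is the "learn it once" benefit described above.

```python
# Toy sketch: objects as sets of (location, feature) pairs. Recognition
# is order-independent, unlike a first-order transition sequence, which
# would have to be learned separately for every ordering of movements.
import itertools

cup    = {((0, 0), "rim"), ((0, 1), "handle"), ((1, 0), "base")}
bottle = {((0, 0), "rim"), ((0, 1), "flat"),   ((1, 0), "base")}
library = {"cup": cup, "bottle": bottle}

def recognize(touches):
    """Names of all objects consistent with the touched pairs so far."""
    touches = set(touches)
    return {name for name, obj in library.items() if touches <= obj}

# Any order of touching the same pairs gives the same answer; the object
# was learned once, not once per sequence of movements:
for order in itertools.permutations(cup):
    assert recognize(order) == {"cup"}
print(recognize([((0, 1), "handle")]))  # {'cup'}
```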
C: Transitions and features in the TM exist as minicolumns, an assembly of minicolumns, as a vector of binary bits, right? So the binary bits represent an arrangement of minicolumns. So now, what is the storage mechanism for a location? Is that, like, minicolumns arranged in, you know... is there no store? So we're...
A: We're going to have some content about this coming. A paper is about to be released; hopefully I'll have a video that accompanies the paper, and I'm eventually going to put together a YouTube video with an animation to show you... let me show you something first. Okay, so I'm working on visualizations. I hate to cut in here, Marcus, but we're talking about cortical columns now, right? Here's my first, very basic prototype of a visualization of a cortical column.
A
So
you
can
imagine
the
input
space
on
the
bottom
layer,
two
three
in
the
middle
and
a
layer,
four
up
top
okay,
so
the
location
information
is
going
to
be
coming
in
distally
to
a
think.
The
the
center
layer
here
layer,
two
three
and
it's
not
necessarily
storage.
It
just
selects
the
columns
that
are
being
activated.
It's
its
selects,
provides
context
in
the
same
way
that
the
the
distal
connections
in
the
TM
provided
temporal
context.
This
is
now
providing
a
different
type
of
context.
C: The location information, in terms of how it's represented inside the actual tissue of the neocortex: is it, first off, a vector? A binary vector? Are the locations SDRs, and if they are, are they stored just like... so you have transitions of, or maybe not even transitions, but you have activations of... okay, so I don't even know. I mean, you try.
A
Column
is
a
collection
of
layers
and
the
layers
really
have
yeah,
so
the
cortical
column
has
that
the
one
I
just
showed
you
had
three
different
layers:
well,
two
layers
in
an
input:
space:
okay,
that's
which
represents
proximal
input,
but
the
cortical
column,
as
we're
laying
them
out
is
going
to
have
two
different
layers
that
interact
with
each
other,
the
layer,
two
three
that
has
the
many
columns
that
we're
you
can
think
of.
As
you
know,
what's
exists
in
new
picker
HTM
Java
today.
A
That's
going
to
be
the
object,
representation,
so
sort
of
the
static,
more
permanent
idea
of
what
you're
touching
it
and
as
something
touches
an
object.
It's
that
top
layer,
the
cell
activations
that
we're
doing
unions
and
we're
narrowing
down
what
that
object
is
so
bits
are
turning
off
as
you
have
a
tension
on
an
object
and
you
touch
a
touch
touch
touch
all
that
narrows
down
to
a
smaller
set
of
objects.
Every
time
you
touch
a
feature
at
a
location
because
you're
like
oh,
that's,
not
a
ball.
A
Oh
that
can't
be
of
all
that
can't
be
a
you
know,
a
penny!
That's
your
narrow!
Every
time
you
touch
something
it
narrows
it
down.
That's
happening
in
that
top
layer
for
the
the
layer,
2/3
is
still
doing
the
the
spatial
features
which
are
activating
the
active
columns.
That's
like
the
the
sensory
input
that
you're
getting
here
and
it's
being
processed
through
the
spatial
Pooler,
but
the
difference
is
we're
not
temporally.
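The narrowing in that top layer can be sketched as set intersection over toy bit vectors. Everything here (the objects, their bit sets, and the feature table) is made up to illustrate the mechanism being described, not taken from Numenta's code.

```python
# Toy sketch of the output layer narrowing: start with a union of every
# object consistent with the first touch, and each further touch turns
# bits off until one object's representation remains.
objects = {                        # invented object -> SDR-ish bit sets
    "ball":   frozenset({1, 4, 9}),
    "penny":  frozenset({2, 4, 7}),
    "bottle": frozenset({3, 4, 8}),
}
features = {                       # which objects contain each feature
    "round":  {"ball", "penny"},
    "ridged": {"penny", "bottle"},
    "tall":   {"bottle"},
}

def touch_sequence(touches):
    active = frozenset().union(*objects.values())   # union: anything possible
    for feat in touches:
        consistent = frozenset().union(
            *(objects[name] for name in features[feat]))
        active &= consistent       # bits turn off: "oh, that's not a ball"
    return active

print(sorted(touch_sequence(["round", "ridged"])))  # [2, 4, 7] -> penny
```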
A: I mean, the minicolumns are in one layer; other layers may not have minicolumns. The cortical column is the interaction between, you know, the different layers within it to do something. Maybe Marcus can explain this better, but we will have more content about this coming out soon that hopefully explains it. Watch that HTM Chat with Jeff if you want a lot of those details; he explains it in that. If you haven't seen it, it's right on the main YouTube page.
A: The location SDR is an SDR; it's binary, at least in all of our current experiments. At some level it might need to have some continuous value rather than just being binary: internally, something needs to be integrating, like, small movements need to eventually activate a new cell, and knowing when to activate a new cell might be something continuous. It's not binary there; it's like a membrane potential or a firing rate.
A
We
don't
know
exactly,
but
there
there
might
be
some
continuous
aspect,
but
from
the
perspective
of
our
experiments,
we
still
keep
it
all
the
SD
ours
binary,
and
that
has
work
okay.
As
for
whether
they're
tied
into
many
columns
at
all
the
answer,
is
we
don't
know
yet
they
might
be
for
now.
We
don't
have
to
really
just
make
that
decision
to
to
model
the
process.
It
might
be
the
case
that,
like
these
is
I
you've
heard
me,
use
the
word
modules
a
little
bit
like
good
cell
modules,
and
it
might
be
the
case.
A
One
crazy
idea
is
that,
like
maybe
one
of
the
bottom
layers
of
the
cortex,
maybe
its
many
columns
are
modules
and
there's
always
only
like
one
of
them
active
at
a
time
and
as
you
move
around
to
different
one
activates,
we
propose
these
crazy
things.
But
the
answer
is
we
don't
know
if
it's
if
it's
gonna
be
based
on
many
columns
or
not
like,
and
there
might
be
some
layers
where,
like
yes,
many
columns
are
there
but
they're
not
really
part
of
the
computation
and.
A: So there's a term for that... postcards from the edge? I don't know; no, I can't remember where it was. Anyhow, stay tuned, because we are going to have some more information about all this stuff. Kristy put together a really nice video that explains this; it's going to accompany our paper. I can't really say... yeah, I don't think so, but maybe I can sneak it out; I'll ask her to get it to the forums. But I think a lot of this stuff will become more clear as it becomes more clear to us.
A
So
we
want
everybody
to
understand
how
this
works.
I
want
to
point
out
one
thing:
before
we
win
open
source
in
2013.
We
had
this
big
breakthrough,
which
is
one
of
the
reasons
we
went
open
source
and
that
was
basically
the
temporal
memory,
the
sequence
memory
algorithm
and
using
STRs
and
how
to
learn
pattern.
Spatial
patterns
over
time,
so
that
was
a
big
breakthrough.
I
think
that
we
all
agree
that
this
is
another
one
of
those
milestone:
breakthroughs
with
the
sensory
motor
integration,
stuff
that
we're
working
on
and
so
I.
A: When we're doing research and theory, it's best, I think, not to think too much about potential applications of new functionality that arises from it, because it's sort of distracting, especially to us. You know what our mission is: understand how intelligence works and write software based on those principles. So we don't want to get too distracted about...
A: ...how useful this is going to be. Honestly, we want to figure out how it works, and we totally believe that the uses will follow. It's really hard to predict how any new technology is going to be used over time, so we're just having faith that there is utility in these intelligent processes we're trying to uncover, and that they will show themselves as being useful in the future.
A: So again, this is a part of our shift from HTM applications to pure research, and we think this is a huge milestone. When we get to a point where we have code and it's ready for applications, I will be there helping you guys write it. I'm writing code that will be used against this research codebase to do visualizations, so I'm going to be a consumer of this code just like anybody else in the community, and I'm going to make sure to keep the research team...
A: ...honest, you know what I mean. So that's everything I wanted to talk about, and we've gone for 40 minutes now. We had a good discussion, and I think we had better wrap it up. So all you viewers out there, thanks for watching; I appreciate you hanging out with us, and we'll do this again in a month. I'm going to keep doing HTM Hackers' Hangouts no matter what we do with the YouTube channel, because I like being able to interact with the community since we're not holding hackathons anymore.
D: I can hear myself echo, that's annoying. I'm trying to get the network quickstart working, and this gets asked a lot on the forum: for the params file, it's imported as a YAML file. Do we have to do it that way? Because I just have a regular .py params file; could I just do that?

A: Yeah.
D: Okay, cool. One more tiny thing and we're done. Okay, so I have basically X-Y data of someone's motion through space, like this. I'm currently just using two scalar encoders that are, you know, clipped. Do you think it would make any qualitative difference to try to switch to a coordinate encoder?
A: It's all about using the same parameters; the only reason those are in YAML files is that it's just easy access to the same parameters. You don't even need to have a YAML file. If you're setting up your network, you don't have to read that stuff from anywhere; you can just put it inline in the code and create your network that way, if you want.

D: Okay, so instead of importing params, just sort of chuck it in?
A: Yeah. I mean, you'll have to make sure that you, you know, get all the parameters right when you're creating the network nodes and the algorithms, but I think you might have better success with the coordinate encoder, especially if you want speed to be a factor, with this object moving two-dimensionally. Yes.
D: Okay, yeah, I get that. And do you know if there are any examples anywhere of the coordinate encoder? Like, somebody actually setting up parameters to use the coordinate encoder the way you would in the params.py? Yeah.
A: I've done it. There's a project of mine, a hack that takes X, Y, Z 3D coordinates from Minecraft and encodes them with a coordinate encoder. So you could just use X and Y; actually, I don't even know if I use Z, I might just be using X and Y in that example. I got it working in that, and there's a YouTube video of it.