From YouTube: HTM Hackers' Hangout - Nov 2, 2018
Description
Discussion of this hangout's topics here: https://discourse.numenta.org/t/htm-hackers-hangout-nov-2-2018/4796
HTM Hackers’ Hangout is a live monthly Google Hangout held for our online community. Anyone is free to join in the discussion either by connecting directly to the hangout or commenting on the YouTube video during the live stream.
If you have something specific you’d like to discuss, or if you just want to learn more about the HTM Community, please join HTM Forum at https://discourse.numenta.org. We have active discussions about HTM theory, research, implementations, and applications.
More info on all these topics at http://numenta.org.
A: So, look in the notes of this video: if you're watching on YouTube, there's a link to our forum, and we've got a topic that's got links to the stuff I'm going to talk about, and you can get involved in conversations there if you don't want to join in live here in YouTube land. So, my quick rundown: I'm going to talk about a lot of news. Since the last time we did a Hackers' Hangout, we released two papers and we got an article in The New York Times, the print version of The New York Times, which came out two Mondays ago, I think, so it's been almost two weeks. It seems like longer than that, but we got some good publicity out of that. This article, like I said, is linked on the forum. It definitely presented the company in an immensely good light.
A: But it really missed out on one big thing that I'm very focused on, which is our transparency as a company, our openness as a company, because I run the open source community, and it gave the impression that Numenta was closed. I really don't know why, because we're so open. We're just so open. That's my job: trying to make us open. I remember when I first started working as open-source community manager, Danese Cooper, who really mentored me in that role, and who was really active with O'Reilly and the open source community, told me: your job is to make as much as possible public. That's what they hired me to do. That's what they pay me to do: to pull that lever as much as possible and make sure that we can release as much of the stuff that we do as possible, and I work to do that.
A: So, if you got that impression, it's not true. Go look at our GitHub: all our code, all the papers and stuff. And like this, what I'm doing right now: you know, I'm going to talk to you about stuff I talked to Jeff about yesterday, so it's not like we're hiding anything at all. We think this is a really important technology. Hey Marc Brown, nice to see you, just saw your post. Easily distracted, ADHD, sorry, what was I even talking about? Anyway.
A: The article was great. I just wish it didn't paint us as a closed company, because we're not. That's it. Anyway, so now we've got a bunch of papers, and I also made a list of the papers and sort of what we nicknamed them. So, the latest two papers: we call one the frameworks paper, and that's the one the New York Times article referenced, and the other we call columns plus, because we had just done another paper we called the columns paper, and that introduced the idea of object representation in each cortical column, but not how the location was represented, just that a location is necessary. Columns plus does computations and simulations using a location based on grid codes, the sparse codes that grid cell modules represent. Anyhow, those are the two latest papers. I'm working on two more HTM School episodes. One is going to finish up sort of the frameworks paper, because the first one, the one I did just recently, stopped short of really explaining the whole Thousand Brains Theory.
A: So, whatever I'm talking about, they've already thought about, like, eons ago, and I'm just catching up. Okay, so I'm thinking a lot about hierarchy right now and trying to understand how I can present it to people, and anyway, I was talking to Jeff yesterday about it, because I got this impression, and I've read some forum posts about this too, that hierarchy is emergent. And I felt like that makes sense: hierarchy, the structures, the regions, how the architecture of the hierarchy constructs itself.
A: Maybe that's emergent over time, maybe that's dependent on the sensory input, and maybe it's learned. So I pitched that to Jeff yesterday. He said no, he doesn't think it's emergent over time. He thinks, yes, it's true that it's different in the visual cortex than it is in somatic cortex or the auditory cortex; those are different hierarchies, and they're structured differently, and the inputs are vastly different. But he says that how those regions are connected together is something that evolved along with the sensors.
A: Sorry, go ahead. Was that Jacob? I just got some audio from you. Did you have a comment? Sorry, go ahead. Was that Jacob? Yes, I think it's just... okay, I'm gonna mute you! Alright, we muted Jacob. That's live, that's life, that's proof it's live. Anyway, I am just rambling now. So, a really good read, if you're interested in this stuff, is Mountcastle's "The Columnar Organization of the Neocortex." It's a great paper that I think summarizes a lot of different research like that.
A: It puts together the argument for columnar organization, and it goes into the work of Hubel and Wiesel, which is really old stuff, but it's applicable, and they're talking about visual cortex. Jeff also told me to take it with a grain of salt; there's a lot of stuff in it that hasn't aged well. I won't go into the details of it, but the interesting idea is that there are sort of these slabs, and that they are responding to different orientations.
A: You know, have something clean and streamlined, so that's happening. Paul, you might notice that I know you have a PR or two on nupic.core right now. I still want to come back and address those. I think we have some Windows build problems, but once that... we may move... I actually have no idea what the status is, but on those two PRs there is some Windows problem, right.
A: We can talk about that offline, but that's where we're at. Okay, I'm done, that's my update. I do want to point out, if you're not a member of the forum, go to the forum; there's an interesting discussion. I learned a long time ago not to talk about free will on the internet, but there's a discussion about free will on the forums. I'm not interested in getting involved in that, but you may, if you wish. There are some other interesting ones, like: why do we need sleep?
A: I think that's a really interesting idea, the brain sleeping and why it's necessary. Yeah, the forums are very active. There's a lot of new people coming in with some interesting ideas. Come on in and have a look; if you're watching on YouTube right now, it's in the show notes, so just click the link and you're in the forum. Anyway, that's all for me. So let's take comments from the community. If anybody has one, type it up and I'll address you.
D: Okay, sorry, just real quick. First of all, it's awesome to hear everything you're doing, and you definitely weren't rambling. So, I quickly wanted to... I had a quick practical question, and a quick, I guess, report of a finding that I thought you might find interesting, just very briefly. So I'll start with the question.
D: Maybe because it's the most direct thing. Essentially, I wondered... so, I'm working with a small amount of data, but the application is still working well, because I'm doing pre-processing that sort of smooths it out. But it reduces the amount of data such that, obviously, I'm still getting anomaly scores, but the anomaly likelihood just doesn't kick in, because the anomaly likelihood stays at 0.5 for a long time before it actually kicks into something else, depending on certain... there are certain basic hyperparameters to that. So I essentially wondered.
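The flat 0.5 described here is the estimator's warm-up: it stays neutral until it has seen enough history to trust its distribution. A toy sketch of that mechanic (not NuPIC's actual implementation; the class and parameter names are made up for illustration):

```python
from collections import deque
from statistics import mean, stdev
from math import erf, sqrt

class SimpleAnomalyLikelihood:
    """Toy stand-in for an anomaly-likelihood estimator (NOT NuPIC's code).
    It returns a neutral 0.5 until `learning_period` raw anomaly scores
    have been seen, then scores each new value against the distribution
    of the recent window."""

    def __init__(self, learning_period=100, window_size=500):
        # learning_period is the knob that controls how long the 0.5
        # flatline lasts; shrink it for small datasets.
        self.learning_period = learning_period
        self.history = deque(maxlen=window_size)

    def update(self, raw_score):
        self.history.append(raw_score)
        if len(self.history) < max(self.learning_period, 2):
            return 0.5  # not enough data yet: stay neutral
        mu = mean(self.history)
        sigma = stdev(self.history) or 1e-6
        z = (raw_score - mu) / sigma
        # Gaussian CDF of the z-score: high when the score is unusually large
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

In NuPIC itself, the analogous knobs live on the `AnomalyLikelihood` constructor in `anomaly_likelihood.py` (a learning period and an estimation-sample count, among others), so shrinking those is the natural first thing to try with very little data.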
D: Is there any rule of thumb, or just a way that you would think about it? Like, okay, well, instead of having... I'm doing a thing where I'm building models, saving them, and running testing data through them, so I'm saving them and then calling them back in, and the testing data is only... I'm trying to identify people by breaking similarity. I have like 40 people, and there's only a couple hundred data points: often like a couple hundred to several hundred testing points, and maybe a thousand training points, so the anomaly likelihood doesn't kick in. So I figured there would be some parameters that I could change to try to make it kick in. I'm just using raw anomaly scores now, which is actually working pretty well, but I was...
D: To the extent that I understand what I'm doing, which I think I do, it's about not having enough data yet, because I have noticed that, when I've done other things where I did have enough data, it would flatline at 0.5 for a long time in the beginning, and then it would drop, you know what I mean, and then only spike up for a real anomaly, which I know is what I want it to do. And also because I want it...
D: I just want to be able to control that, because I'm going to do another experiment where I think I really want to have the anomaly likelihood especially, and I may have more data, but it may still not be enough. So I just wanted to be able to essentially know how to control that. Maybe I can make a post for this in the forum; it's going to be easier to address it that way, but yeah.
A: I know there must be... Jacob's thinking is right, too. If you're serializing any of these models and unserializing them, you lose your anomaly likelihood window, so you should keep that around, or figure out a way to serialize it. It's just an object. I don't...
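The point about keeping the likelihood window across save/load cycles can be sketched like this: persist the likelihood object right next to the model, so deserializing doesn't reset it back to 0.5. A minimal pickle sketch; `AnomalyLikelihoodState`, `save_checkpoint`, and `load_checkpoint` are hypothetical names:

```python
import pickle

class AnomalyLikelihoodState:
    """Stand-in for the likelihood helper object; whatever class you use,
    it holds the rolling window that is lost if you only save the model."""
    def __init__(self):
        self.window = []

def save_checkpoint(path, model_bytes, likelihood):
    # Persist the model AND the likelihood state together, so reloading
    # does not throw away the accumulated anomaly-score history.
    with open(path, "wb") as f:
        pickle.dump({"model": model_bytes, "likelihood": likelihood}, f)

def load_checkpoint(path):
    with open(path, "rb") as f:
        state = pickle.load(f)
    return state["model"], state["likelihood"]
```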
D: I think, actually, I think I do want to lose it, because I have a traditional sort of training and testing setup. So, in other words, just really quickly, finally: so Matt, you play a game with the joystick, right? It's this basic manual control task, and it's a simple task you do over and over again; it generates sequences that identify you. It works pretty well, right? And so basically what I do is train a model from you, but I also train a model from 40 other people, and then I...
D: I take a couple of your runs as a Matt sample, like a test set... you know, like a Matt validation set, basically, and run it through all the models and get everybody's... What I'm using now, just very crudely, is the mean anomaly score, and then I'm ranking all the models, and what happens is the Matt model will rank in first place, you know, maybe 55 to 60 percent of the time, and be pretty high in the rankings most of the time.
D: Just doing that, you know, just doing that basic, simple anomaly score, it works a lot better when I... because my sampling rate was too high, so I downsampled the data a lot using a pre-processing aggregation. But basically, it made it so that a small amount of data made the performance a lot better. So.
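The ranking scheme described here, averaging each model's raw anomaly scores over the test runs and putting the least-surprised model first, can be sketched as follows; `models` is a hypothetical mapping from person to a scoring function:

```python
from statistics import mean

def rank_models(test_sequence, models):
    """Rank candidate models by how unsurprised they are by the test data.

    `models` maps a person's name to a function returning a raw anomaly
    score (0.0 = fully predicted, 1.0 = fully surprising) for each input.
    The model with the lowest mean anomaly score ranks first: it
    predicted this person's behavior best.
    """
    scores = {
        name: mean(score_fn(x) for x in test_sequence)
        for name, score_fn in models.items()
    }
    return sorted(scores, key=scores.get)
```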
D: Yes, I want that anomaly likelihood to stop being 0.5 soon, basically. So, alright, so... I know, I know, you know, I could dig through the documentation, but you're thinking that in that anomaly_likelihood.py file there would be some... yeah, yeah. That's it.
D: Yeah, I think, because isn't it essentially doing a running Z-score on the anomaly scores, basically? It's like they maintain a distribution, which they're, you know, taking to be normal, and they say if this is at one of the edges of the distribution... It's cool, because obviously it means the anomaly score is changing. So if it goes from unpredictable to predictable, that's also an anomaly, right? Whereas, with the raw anomaly score, everything unpredictable is an anomaly. Yeah. That's it.
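That running Z-score intuition, including the point that becoming suddenly predictable is itself an anomaly, can be illustrated with a toy two-sided score (not the actual anomaly_likelihood.py math; names are invented):

```python
from statistics import mean, stdev

def two_sided_surprise(history, recent, eps=1e-6):
    """Score how unusual the recent average anomaly score is against the
    long-run distribution. Both tails count: a stream that suddenly
    becomes very predictable (low scores) is flagged just like one that
    becomes unpredictable (high scores)."""
    mu = mean(history)
    sigma = stdev(history) or eps
    z = (mean(recent) - mu) / sigma
    return abs(z)  # distance from normal, in either direction
```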
A: How we should be doing this in the future, how we would be doing it, is more the way Paul describes: creating representations of things that are not just temporal, you know, what we're doing with sequence memory, and then doing a classification, basically by anomaly detection, as we're trying to turn spatial sort of classification over time into just a temporal classifying system, or a temporal memory system, you know.
D: It feels like... like what has been called temporal pooling, or at least to me, like having a more stabilized layer that, you know, doesn't say, "Oh, A is gonna be B, B is gonna be C," but rather, "Oh, this is the alphabet." It doesn't wait for the alphabet to be done. You...
B: That would be pretty close to having a self-organizing temporal pooler, eventually one where you don't have to label it ahead of time, right. But in your case, you already are labeling the 40, so you can get away with just creating a random SDR and then connecting it up to your sequences as they're learned, right. The hard part, which is what I'm working on, is getting it to come up with those 40 by itself.
D: So basically, you know, that other 30 percent, or whatever it is, that creates another SDR, and then I could just do a basic SDR overlap score, essentially, which I know is super fast. So it's basically a way of taking... yeah, it's a temporal pooling approach. Well, that's something. I'll go back and read some of your other old stuff; you've written so much on temporal pooling, and yeah, I'd love to get into this more and not drag everybody along.
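The overlap score mentioned here is just the count of shared active bits, which is why matching one pooled SDR against 40 stored ones is cheap. A small sketch, with SDRs as sets of active bit indices (the helper names are made up):

```python
def sdr_overlap(sdr_a, sdr_b):
    """Overlap score between two SDRs represented as sets of active bit
    indices: the number of bits active in both. A set intersection, so
    it is very fast even for thousands of bits."""
    return len(set(sdr_a) & set(sdr_b))

def best_match(query_sdr, labeled_sdrs):
    """Return the label whose stored SDR overlaps the query the most."""
    return max(labeled_sdrs,
               key=lambda name: sdr_overlap(query_sdr, labeled_sdrs[name]))
```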
A: That's what Paul's talking about, you know, identifying the objects... It gets really deep really fast when you start thinking about this, because we always have to introduce motion. At least, I always have to think about motion when I think about even identifying objects, because when you move from one object to another, there's a transition, there's an obvious transition.
A: There's a state transition in your brain when you're moving attention from one object to another, right, and we don't know exactly what that is, but we're trying to model, or understand, how we model the things that attention is placed on, or that attention directs. You know, we want to model those things, right? So a lot of these questions, yeah, we brush them off because we don't know how it works, and "temporal pooling" is such a catch-all term.
B: You know, when the theory was still young, even Jeff distinguished between the temporal memory, or sequence memory, and the pooling part, so they had an original meaning. They combined them into one later, whereas originally they broke it out. So, by definition, it's kind of similar to that original meaning, but obviously I've, you know, deviated quite a bit from the way it was originally theorized. So at some point, whenever I get it actually working, then it'll be a...
A: We can also talk about what we call it once it's done. Even in the machine learning community, they are talking about temporal pooling now, and when they say it, they mean it in a very generic way, you know, just pooling over whatever else you might put in; it's just a temporal pool of things. It could be a bag of objects over time. It could just be a moving window, you know; it could be very simple.
D: Just think of it as something that is watching... doing temporal pooling over, like, a normal... sorry, I'm not super... excuse me. A normal TM region, right, that's, you know, staying sparse if it's predictive, and bursting if there's a lot of unpredicted stuff. It's like watching that and then recognizing just larger temporal structures, essentially, you know, because...
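That watcher idea, a layer that holds one stable representation while the TM below keeps predicting and resets when it bursts, might be caricatured like this (a toy illustration of the temporal pooling intuition, not Numenta's algorithm; all names and thresholds are invented):

```python
import random

class ToyTemporalPooler:
    """Holds one stable random SDR for as long as the layer below keeps
    predicting correctly, and rolls a fresh SDR when heavy bursting
    signals that a new, unrecognized sequence has started."""

    def __init__(self, sdr_size=2048, active_bits=40,
                 burst_threshold=0.5, seed=0):
        self.sdr_size = sdr_size
        self.active_bits = active_bits
        self.burst_threshold = burst_threshold
        self.rng = random.Random(seed)
        self.current = self._new_sdr()

    def _new_sdr(self):
        return frozenset(self.rng.sample(range(self.sdr_size),
                                         self.active_bits))

    def step(self, burst_fraction):
        # burst_fraction: share of TM columns bursting this time step.
        if burst_fraction > self.burst_threshold:
            self.current = self._new_sdr()  # sequence changed: new pooled SDR
        return self.current  # stable while predictions hold
```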
A: I feel like, whatever motor command is being emitted by your brain, you know, the columns, all the columns that are processing just this patch of skin... it's important to know a lot more than just where this patch of skin is moving, potentially. I mean...
D: Yeah, I know, that's... yeah. Everything that I'm trying to do, applied-wise, is totally from a passive perspective in that way. It's like these ideas, like general ideas, like human control monitoring, and then, you know, doing identification, or doing things like: are they getting distracted?
A: A hackathon, or... we haven't been meeting like this in a long time, so it's nice. And there's a lot of people out there watching who can't join in, but at least they get to see sort of the discussions we're having and know what we're talking about on the forums. Marc, get a microphone; you can join in next time. He's in chat, like always.
D: Yeah, just want to say thanks, and we're actually getting ready to submit a paper on this for publication, where it's, like, using HTM, so it's introducing HTM to this area. So I kind of feel, almost in a sense, like I would want to make sure I don't say anything wrong, so I almost want to come get you guys' approval, but I didn't want to bother you with that.
D: You're totally open, so we're not... yeah, I've seen other people do it; there's, like, a Chinese university where they do a really good job introducing HTM. You know what I mean, it's similar to papers that you've done, and they use some of the same figures, and I'm like, alright, this is a cool way to explain it too, but it's basically the same thing, and they applied it to finance; that's great!
A: Not quite HTM yet, I mean, but the tides are turning. I mean, the last time we had big news, when we had a paper come out, the machine learning community was hostile; I felt attacked. This time there's been little to none of that, so that's a really good sign, I think, and there are a lot more people on our forums who are trying to apply deep learning techniques, RNNs, and they're coming from that background to us, which is a good sign, yeah.
C: I think one of the challenges is that it's really hard to understand what's going on, but I think HTM has an advantage in that you can actually understand what's going on in the black box, as opposed to, you know, artificial neural networks, where it's really difficult to figure out what's going on, yeah.
D: I made that point to a recruiter yesterday, literally. She didn't know that much, but I was trying to be like, "Oh, I know about this stuff," and then I did a side note distinguishing it from HTM. She was like, "Yeah, they're asking about, like, LSTMs," and I'm like, "Yeah, that's good, and I understand why they use that, and there's some stuff you can use that for that you can't use HTM for yet, but it is a black box; it's totally... I don't know," yeah.
A: And we had a conversation a bit about this. So we definitely won't understand it... we won't understand how it works. I mean, I don't think you can truly build something great until you understand what you're building, but the intelligent systems that we build may not be something we can just tap into and see, "Oh, these neurons are on; that means it's thinking about this," you know, right?
A: They're gonna have their own sort of internal... what is this... I'm going back to the Max Tegmark definition of reality: an internal representation of reality that's modeling the external reality, the same way that we have an internal representation of reality that's modeling external reality, and what we have to do is create a consensus between intelligent systems. You know, that's my position on this.
A: Exactly, that's the thing, but I think especially in robotics... Marc was saying something in chat about this. Robotics: there are huge applications for HTM in robotics, but we have to start thinking about robots as a complete system, you know, like a sensory array, and we've got to construct a way for it to move through space and feel at the same time, and, like, understand itself, because part of what you do as you're building an intelligence is model...
D: That's where people, I think, like common people, will start calling it artificial intelligence, you know, because people ask me, like, "Oh, so you're working on artificial intelligence?" and I'm like, "Yeah, I am." To me, this stuff is the closest... definitely the only thing that I would actually consider to be, like, really, fully...
D: Is that the type of thing where you see HTM being really important in robotics? Because I agree that HTM should be important to robotics, but I can't say with specificity exactly why; I just kind of think so, you know. So if you have more specific intuitions, or anybody does, that would be super interesting.
A: I think the same way when I think about it, especially since we talked about object representation and grid cells. I really love how grid cells come together with the idea that we have of how objects are represented, you know, because... oh jeez, what's the... oh, I've lost my train of thought here. You get too deep in philosophy and you get lost. Grid...
A: Think about this: think about throwing a ball. Okay, there's a big motion involved in that, but that's the same thing as an object, in some ways. It's the definition of something, just like there's a definition of a cup. I don't really keep a representation of this cup in my head; it's just a cup that happens to be in my office. I don't know how many squares are on it; I just know it's a cup.
A
You
know
I,
don't
have
a
special
representation
for
this
converse,
four
of
them
in
here
somewhere.
I
know
that
but
I
know
it's
a
cup
and
I
know
exactly
how
cups
work,
because
I
have
this
fuzzy
representation
of
cop,
like
I,
have
an
idea
of
cup
that
I
can
easily
mutate
and
Mattin
and
apply
in
different
in
different
ways.
Yes
and
I
think
that
you
can
match
that.
So
I
like
to
think
about
that
idea
of
a
cup
as
an
attractor
I,
don't
know
this
is
again.
This
is
not
meant
to
talk.
A: An attractor... you're trying to apply the sensory features that you're sensing to things; they attract to the things you've defined, sort of snap to, in a way, something that makes sense. And I think that you can think about concepts that way, in the same way as when you fill... I mean, the metaphor of filling something up: when you're full of something, it feels like you're full, right? Like a cup. Okay, now it's getting great.
D: It's a concept; it's like you have a tangible grasp, because we can say, like, "Oh, I was full of doubt," or something like that, something that has no physical, real embodiment to it, but we understand the idea of being full of something. It's different when a person is full of something than when, like, "Oh, the wheelbarrow was full of dirt." If you say a wheelbarrow is full of doubt, that makes no sense, but...
D: It makes sense... it's like... because that's what the cortex does, I mean. It's just, like, intelligence as a system: make a really generalizable mechanism that can be applied, like, across the board. So it's really simple and really complex; obviously it must be really complex in a lot of ways, but it's as simple as it possibly can be while having that crazy capacity that it does have.
D: Convolution works a little bit like spatial pooling, and does a bunch of other stuff, you know, but it can process the pixel images; it can do that data type. Whereas, like, I don't know if there's... I know there's some work, but with the HTM structure specifically, it hasn't been done that much with images. So people think, "Oh, convolution is the best for images." It's like, well, it's good because it's doing something that HTM is doing... way more of... well, you...
A: This week I'm going to interview Blake Richards, who's a neuroscientist, but he's really into deep learning, and the deep learning community really respects him. He's talked a lot about credit assignment through apical dendritic computation, and he's written a paper about this, and I'm going to talk to him. I'm going to ask him those types of questions, because that's...
D: And it seems like... the basic... real quick, sorry. I think the basic outer limit, in terms of trying to be, like, a practitioner... because I see this as, like, wow, there are so many things that HTM could bring so much to that are just not done in that sophisticated a way, like human control modeling. I was shocked at how simple it is; I went to my professor, and he was telling me about how simple it is.
D: I promise I'll be... because this comes full circle, and I remember Paul was talking about something like this, maybe a month or two months ago, or something, where it's like: you could use deep learning to essentially do dimensionality reduction of some kind and feed it into HTM, because I feel like the limit of HTM right now is literally the data types that it knows how to process.
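The front-end idea Paul floated could be sketched as follows; here a fixed random projection stands in for whatever deep-learning encoder would do the dimensionality reduction, and the result is binarized into a sparse code an HTM layer could consume (all names are hypothetical):

```python
import random

def make_projection(in_dim, out_dim, seed=0):
    """A fixed random projection standing in for a trained encoder
    (e.g. an autoencoder bottleneck) that reduces dimensionality."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(in_dim)]
            for _ in range(out_dim)]

def encode_for_htm(pixels, projection, sparsity=0.1):
    """Project raw pixels down, then keep only the top activations as
    active bits, yielding a sparse binary code for an HTM layer."""
    activations = [sum(w * p for w, p in zip(row, pixels))
                   for row in projection]
    n_active = max(1, int(len(activations) * sparsity))
    # Indices of the strongest activations become the active bits.
    top = sorted(range(len(activations)),
                 key=lambda i: activations[i], reverse=True)
    return set(top[:n_active])
```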
C: You've got all this structure for representation in deep learning; if we can figure out a way to change the neuron model in a way that allows different types of connectivity... you know, people...
D: People are tired of backprop, too; they're, like, frustrated with it. They realize it's too expensive; it's unrealistic, you know. I mean, I think that's where the awesome sparse, Hebbian-like learning of HTM comes in. When I knew about that problem and then saw HTM learning, I was like, that makes way more sense, and it's way more powerful, and way more efficient, and way more everything good, and now I see anything with backprop...
A
Which
is
interesting
because
you
don't
read
a
lot
of
theoretical
neuroscience
papers,
but
but
he
takes
evidence
found
in
other,
and
this
is
a
big
thing.
You
know
there's
so
much
neuroscience.
Data
and
people
are
sharing
it
a
lot
now
with
the
open
science
movement.
So
you
can,
you
can
make
a
career,
have
a
neuroscience
lab
and
not
actually
do
experiments
you
you
can
get.
You
can
get
all
the
data
from
other
people's
and
then
say
what
can
I
understand
about
the
brain.
A: ...with this data, you know? What questions can I answer? But anyway, we're...