From YouTube: HTM Hackers' Hangout
Description
HTM Hackers’ Hangout is a live monthly Google Hangout held for our online community. Anyone is free to join in the discussion either by connecting directly to the hangout or commenting on the YouTube video during the live stream.
If you have something specific you’d like to discuss, or if you just want to learn more about the HTM Community, please join HTM Forum at https://discourse.numenta.org. We have active discussions about HTM theory, research, implementations, and applications.
More info on all these topics at http://numenta.org
See discussion of this hangout at https://discourse.numenta.org/t/htm-hackers-hangout-mar-2-2018/3536
A: Hey, welcome to HTM Hackers' Hangout. It's March 2nd, 2018. I have a list of topics that I want to talk about on our forum; that's linked in the description if you want to follow along. I'm going to talk about this MIT AGI course that's going on, then about some documentation and some community stuff, and we'll talk a little about Unity. But somebody brought up something they wanted to talk about first, and I hope you guys don't mind.
A: This paper was released last year, and it has big names on it: Josh Tenenbaum, Tomer Ullman, Brenden Lake, and Samuel Gershman. I think they do a good job of defining what innateness means. I don't want to speak for Gary Marcus, but he talks about innateness a lot too. The way they talk about innateness is basically: what do you need for AGI, or for truly intelligent things? What do you need to be pre-wired, or baked in?
A: What a priori knowledge needs to exist in the system in order for it to be intelligent? The examples they give are that we need to bake in things like knowledge of number systems, space, physics, and psychology; they use terms like "intuitive physics" and "intuitive psychology." Then they discuss a lot of ways in which you can see that innate knowledge exists in humans. However, in all of their arguments they're pointing towards young humans, not babies.
A: So here's the thing, the issue I always take with this kind of innateness talk: yes, you do have to build up knowledge of number systems and space and physics and psychology, of course, but I don't think any of that has to be inherent in the system before it starts learning. I think those things are some of the most basic constructs of reality that an intelligent system has to learn.
A: Just like a newborn human: it doesn't know right away what numbers are or what space is. It doesn't even have a very good definition of self, of the boundary between reality and self. That's why babies make lots of random movements; they're constantly getting feedback. I would say the first few months of life are really just trying to figure out "what am I, versus everything else?" Where is that boundary? When I move, what do I affect? How does this work?
A: It has to be able to take action in that reality and affect it. Generally, when an AI is embodied in some way, it is a part of the reality that it experiences; our bodies are a part of reality, so when an AI agent takes action, it can perceive that too. That's the second thing: you have to be able to sense changes to reality over time, so that you can tell which actions had which consequences. Aside from that, I'm not sure you need to pre-wire anything else.
A: What you're doing when you have these abilities is that you have the ability to learn. You basically pre-wire in learning if you give a positive feedback signal to the agent, so that when it properly predicts what is going to happen in reality when it takes actions, it gets positive feedback for that; we're teaching it to learn. It gets positive feedback for properly predicting the future when it moves or takes some action, and I think, at a very basic level, that's the main requirement.
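
For readers who want something concrete, here is a minimal sketch of what such a "reward for correct prediction" signal could look like for SDR-style predictions. The Jaccard-overlap scoring is my own illustration, not anything Numenta prescribes:

    def prediction_feedback(predicted_bits, observed_bits):
        """Score how well a predicted set of active bits (an SDR) matched
        the bits that actually became active at the next time step.
        Returns 0.0 (no overlap) .. 1.0 (perfect prediction)."""
        if not predicted_bits and not observed_bits:
            return 1.0  # predicted nothing, and nothing happened
        union = predicted_bits | observed_bits
        return len(predicted_bits & observed_bits) / len(union)

    # Example: the agent predicted cells {3, 7, 9}; cells {3, 7, 12} fired.
    print(prediction_feedback({3, 7, 9}, {3, 7, 12}))  # -> 0.5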
A
That's
the
only
innate
'no
stood
say
it
would
be
required
for
an
HTM
system
is
now
yeah.
That
being
said,
I've
run
this
by
the
whole
research
team
or
anything,
but
but
I
think
they
would
probably
agree
with
that.
We're
not
trying
to
research
any
of
this
like
how
to
bake
in
this
type
of
stuff.
We
think
the
brain
learns
this
stuff.
We
think
all
this
stuff
is
learned
and.
A: And it's true: if you look at a lot of the psychological research on children, they do have a lot of innate-looking knowledge, but that's after being alive for several months or years, after they have had a chance to soak up all this reality through their interaction with it. So yes, they know that when you move something behind something else, it's still there. They know that because they've seen it so many times. It's not innate knowledge; they've experienced it. A newborn baby can't do any of that.
A
It's
just
you
know
flailing,
just
trying
to
figure
out
what
it
is
versus
everything
else
in
the
home.
So
that's
my
take
on
a
natus
and
there's
another
question
about
dendrites.
Oh
so
I
was
just
looking
mark
mark
Brown,
which
is
a
bit
king,
just
saying
he
can't
join
that's
a
bummer,
because
I
was
looking
forward
to
discussing
some
things
with
him,
but
all
right.
So
let
me
talk
about
this
still
about
dendrites,
so
mic
2.0
on
the
forum
was
asking
about
then
right.
So
so
I'll
try
and
break
this
down.
A: I'm drawing here. Let's say we've got some dendrites up here; I'll label these better. These are called apical dendrites. Typically, what Jeff has told me is that a pyramidal neuron will have four or five dendritic branches coming off of the cell body, something like that, and each one will branch four to five times. And we've got the axon down here. So there's the axon, and this drawing isn't perfect, but there are the apical dendrites, and then all these others.
A
All
these
down
here
that
are
not
apical
to
sort
of
go
off.
Laterally
are
called
basal
dendrites.
We
don't
use
that
term
up
a
lot
because
we
we
split
them
up.
We
say
okay
up
into
this
first
branch.
These
are
proximal,
so
all
the
close
ones.
That's
that's
the
what
proximal
means
those
are
gonna.
Those
are
proximal
dendrites
and
all
these
other
ones
with
colors.
You
might
have
dirt
all
these
other
ones
out
here
beyond
that
all
these
other
basal
dendrites
are
are
distal.
A
So
so
that's
the
difference
and
we're
not
talking
you
don't
talk
a
lot
about
apical
dendrites.
We
know
those
of
those
would
be
necessary,
which
we
start
talked
about
talking
about
creating
about
attention
and
and
creativity
and
stuff
like
that.
That
will
come
into
play.
But
right
now
our
theory.
We
really
focus
on
the
proximal
and
distal
dendrites
within
the
hole
within
the
basal
section
of
the
of
the
pyramidal
neuron.
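
A minimal sketch of how that taxonomy tends to show up in code. The class and field names here are illustrative, not NuPIC's actual API; the point is simply that the proximal zone and the distal (basal) segments play different roles, while the apical segments sit unused for now:

    class HTMNeuron:
        """Toy HTM neuron, assuming segments are sets of presynaptic cell IDs."""

        def __init__(self):
            self.proximal_synapses = set()  # close to the soma: feed-forward input
            self.distal_segments = []       # beyond the first branch: lateral context
            self.apical_segments = []       # feedback from above: not modeled yet

        def is_predictive(self, active_cells, threshold=10):
            # Distal segments don't fire the cell; enough active synapses on
            # any one segment just depolarize it (put it in a predictive state).
            return any(len(seg & active_cells) >= threshold
                       for seg in self.distal_segments)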
A
So
another
question:
how
does
HTM
model
neurons?
Are
they
selling
machines
or
classical
neural
networks,
so
I
would
call
them
their
client
their
neural
networks,
definitely
because
they
require
all
about
the
weights.
It'll
call
them
weights,
we
call
them
permanence
is
but
it's
basically
a
heavy
and
learning
type
of
neural
network
which
deep
learning
systems
are
not
heavy
in
systems
and
most
current
people
that
are
working
on
a
GI
and
deep
learning
arena.
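
The Hebbian flavor he's describing looks roughly like the following: each synapse carries a permanence rather than a signed weight, permanences are nudged up when the presynaptic input was active and down when it wasn't, and a synapse only counts as connected once its permanence crosses a threshold. The constants are illustrative, not NuPIC's defaults:

    CONNECTED = 0.5   # permanence threshold for a connected synapse
    PERM_INC = 0.05   # reinforcement for active presynaptic inputs
    PERM_DEC = 0.01   # decay for inactive ones

    def hebbian_update(segment, active_inputs):
        """segment: dict of presynaptic input index -> permanence (0.0..1.0)."""
        for idx in segment:
            if idx in active_inputs:
                segment[idx] = min(1.0, segment[idx] + PERM_INC)  # fired together
            else:
                segment[idx] = max(0.0, segment[idx] - PERM_DEC)  # decays apart

    def connected(segment):
        return {idx for idx, perm in segment.items() if perm >= CONNECTED}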
A
We
we
try,
we
just
take
input
like
one
after
the
other
and
and
we're
not
doing
any
timing
and
there's
a
video
about
this.
Where
I
talk
to
Jeff
about
spike
timing
and
oscillations
and
stuff
on
our
YouTube
page
I
think
it's
called
exact
timing
and
oscillation,
so
you
can
search
our
YouTube
page
for
that
and
we
do
discuss
that.
We
don't
have
we
don't
model
I,
don't
know
how
do
I
say
this?
A
All
of
our
all
HTM
Narns
are
pyramidal
neurons
and
we
do
the
way
we
group
them
is:
will
group
them
into
layers
or,
and
we
may
eventually
inside
of
a
layer.
We
might
have
different
groups
for
different
like
grid
cell
modules
or
something
like
that,
but
but
but
they're
all
pyramidal
neurons.
There
are
other
types
of
neurons
that
are
incorporated
in
the
model
that
are
inhibitory
neurons,
but
we
don't
model
those
directly.
We
don't
create
instances
of
inhibitory
neurons.
A
What
we
do
is
what
we've
we've
noticed:
the
effect
of
inhibitory,
neurons
from
experimental
neuroscience
and
we
just
modeled
the
effects
of
inhibitory
neurons.
So
we
only
directly
model
pyramidal,
HTM
nons
and
the
inhibition
is
done
in
a
small,
simplified
way
based
on
what
we
think
we
know,
inhibition
does
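
One concrete example of modeling the effect rather than the cells: in the spatial pooler, inhibition shows up as a k-winners-take-all step in which the columns with the highest input overlap stay active and everything else is silenced. A sketch (the 2% sparsity figure is typical for HTM, but treat the details as illustrative):

    import numpy as np

    def global_inhibition(overlaps, sparsity=0.02):
        """Keep only the top `sparsity` fraction of columns active.
        No inhibitory neurons are instantiated; this one step stands in
        for their net effect on the region."""
        k = max(1, int(len(overlaps) * sparsity))
        winners = np.argsort(overlaps)[-k:]   # indices of the k largest overlaps
        active = np.zeros(len(overlaps), dtype=bool)
        active[winners] = True
        return active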
I hope I answered that. The last question is: how do you measure whether changes to HTM are useful?
A
So
example
is
if
one
of
our
researchers
reason
neuroscience
paper
on
a
new
way
to
do
something,
and
it
was
it
was
implemented
in
HTM.
Had
you
test
that
those
changes
have
were
useful?
Well,
we
we've
got
lots
and
lots
of
tests
that
we
run
and
we
do
benchmarks.
So
at
this
point
we
don't
change
the
core
of
HTM
Theory,
often
at
all
like
I,
don't
remember
the
last
time
we
changed
one
of
the
core.
A
Now
in
new
pic,
the
the
core
code,
algorithms
haven't
changed
in
years
as
far
as
I
know.
Now
we
do
all
of
our
messing
around
with
that
theory
and
our
research
repositories,
because
we
have
the
flexibility
to
do
that
there,
because
we
can.
We
know
that
you
told
the
community
don't
depend
on
this
stuff,
because
it's
it's
it's
flux,
it's
in
flux,
but
nupoc
is
stable.
Those
algorithms
don't
change,
but
in
the
research
repositories,
that's
where
we
do
all
of
those
new
experiments
like
a
new
way
to
do.
A: A new way to do synaptogenesis, say, would be done in the research repository, and we would fully vet it out: logically, so everyone agrees that yes, that's the right way to do it, and prototypically, through experiments and demonstrations that it actually works the way they think it's going to work. And even then, it's going to take a lot of vetting to get it into the official NuPIC code base. OK, the last thing Mark Brown was asking about: I was thinking about this, and I don't know if I can answer it, Mark.
A
If
I
can
answer
this
mark,
he
has
how
his
field
of
HTML
recognized
a
complicated
pattern.
That's
represented
to
the
entire
ensemble
at
the
same
time,
and
and
I
keep
thinking
about.
You
know
the
spatial
cooler,
perhaps
because
each
column
in
the
spatial
cooler
is
just
looking
at
a
portion,
a
section
of
the
input
and
is
only
after
learning
after
a
good
amount
of
learning
is
tuned
to
look
at
just
specific
features
in
that
portion
that
it's
looking
at
so
it's
it
has
an
affinity
towards
specific
features.
A
But
it's
it's
sort
of
hard
to
say
how
that
works
or
how
it
recognizes
a
complicated
pattern.
I
mean
it's
the
distal
segments
and
synapses
that
tie
those
facial
features
and
all
is
represented
in
lots
of
different
many
columns
together
over
time.
So
what
ties
that
pattern
together
are
the
distal
connections,
while
the
proximal
connections
are
what
sort
of
lock
in
what
many
columns
are
going
to
respond
to
what
spatial
features
and
it's
hard
to
answer
your
questions
about.
A: It's hard to answer your question about a partial match versus a completely novel thing; I don't know. I think I could put together some experiments, but honestly I don't think I have time to do that. You're welcome to, though.
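
To make the proximal side of that concrete: each column samples only a subset of the input, and its overlap score says how strongly its learned feature matches the current pattern. A toy sketch (sizes and thresholds are illustrative, not NuPIC's defaults):

    import numpy as np

    rng = np.random.default_rng(42)
    INPUT_SIZE, N_COLUMNS, POOL = 1024, 256, 64

    # Each column's proximal dendrite sees only a random portion of the input.
    pools = [rng.choice(INPUT_SIZE, POOL, replace=False) for _ in range(N_COLUMNS)]
    perms = [rng.uniform(0.3, 0.7, POOL) for _ in range(N_COLUMNS)]

    def column_overlaps(input_bits):
        """For each column, count active input bits on its *connected*
        proximal synapses; learning sharpens this into a feature affinity.
        input_bits: numpy array of 0/1, length INPUT_SIZE."""
        scores = np.zeros(N_COLUMNS)
        for c in range(N_COLUMNS):
            connected = pools[c][perms[c] >= 0.5]
            scores[c] = input_bits[connected].sum()
        return scores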
OK, let me address some questions now. Mike is obviously watching. He says that with regards to innateness, it sounds like I lean towards mostly learning and not much baked in; he's referencing Gary Marcus's book.
A: I'm not saying that a baby's mind is a completely blank slate, but we're leaning towards not baking anything in until we know we have to, and at this point we don't know that we have to. Any pre-optimization, which is what I would call trying to bake in these ideas of innateness, is just that: a pre-optimization. With the sensorimotor model that we're working on, we're trying to make it work without those things.
A
There
is
something
general
enough
in
neocortex
that
we
can
start
that
it
can
start
learning
sensory
input
based
on
motor
motor
commands
and
put
together
models
of
things
in
reality
without
having
innate
knowledge
of
characteristics
of
the
Universal
reality.
As
you
might
say,
ok
I
have
other
things
to
go
on.
How
does
the
HTM
model
neurons
are
being
learning
Oh,
Mike
I'm,
going
to
respond
to
you
probably
later
on
the
forum,
so
I'm
not
taking
up
all
my
time
on
on
just
your
questions,
because
there's
some
other
things
I
want
to
talk
about.
A
So
let
me
go
on
to
this
MIT
a
GI
course,
and
this
is
sort
of
just
a
personal
thing,
I'm
interested
in
I'm
going
to
share
my
screen.
There's
a
curse
hanging
out.
Hopefully
you
can
see
this
so
here's
the
MIT,
a
GI
course
you
can
see
it
at
AGI
that
mit.edu
and
they've
got
some
interesting
guest
speakers.
Oh
there's
another
Stephen
Wolfram,
the
video
Stephen
Wolfram
just
came
out
so
I
need
to
watch
that
I've
watched
all
mixed
up.
A
Stephen
Wolfram
and
I've
been
a
bit
critical
of
this
on
Twitter,
just
because
there's
only
one
neuroscientist
involved
here
and
that's
Lisa,
Feldman
Barrett,
so
I,
mostly
I'm
critical,
because
I
wanted
Jeff
to
be
in
this
panel.
That
would
and
I
said
so
to
Lex
Friedman
and
you
know,
I,
don't
know
why
Jeff
isn't
in
this
list
of
people,
because
we
have
a
lot
to
say
about
AGI
and
how
we
think
intelligence
works
in
the
brain,
and
that
seems
to
be
what
a
lot
of
these
guest
speakers
are.
A: So yes, we totally agree; so why aren't they talking about the work that we've been doing for such a long time? That's basically my critique. My plan, since we were not invited to be a part of this conversation, is to write, not necessarily a rebuttal, but a response to these guest speakers, probably as a series of blog posts that will be posted to numenta.org.
A
Don't
know,
I
really
liked
Lisa
Feldman
Barrett's
talk
actually
and
which
is
interesting,
because
I
got
into
an
argument
with
her
on
Twitter
beforehand
about
emotions
and
I,
didn't
think
she
agreed
with
me,
but
I
think
she
does
I
think
we
do
agree.
That's
just
that's
the
thing
about
Twitter.
You
know
you
never
really
know,
but
her
talk
was
was
I.
A
Thought
really
went
towards
the
HTM
way
of
thinking
about
about
intelligence
in
the
brain
which
I
really
liked
so
I
encourage
you
guys
to
watch
these,
even
though
it
doesn't
includes
Jeff,
pointed
Jeff's
point
of
view.
I
would
love
to
give
him
MIT
feedback
on
these
lectures
and
if
you
guys
want
to
do
that-
and
that
would
be
awesome.
A
So
let
me
talk
real
quick
about
some
documentation.
You
guys
may
or
may
not
know
about
bammy.
So
I
have
a
personal
mission
at
Numenta
that
I
don't
know
if
I'm
gonna,
cheat
or
not
I
want
to
turn
this
whole
bammy
thing.
This
bammy
document,
which
is
right
now
just
a
series
of
PDFs
into
a
web
document
that
basically
says
here's
how
to
build
HTM
here's
how
to
build
HTM
s
because
there's
a
lot
of
people
trying
to
build
HTM
and
anybody
that
has
questions
about
it.
I
always
point
them
to
bammy.
A
But
the
way
this
document
is
structured
is
sort
of
how
to
build
HTM.
But
at
the
same
time
it
reads
like
a
scientific
technical
document
and
I
think
that
I
I
think
we
can
do
better
I'd
like
to
make
it
better
I'd
like
to
make
it
a
completely
web
resource,
so
that
we
can
reference
sections
to
each
other,
better,
so
I'm
trying
to
work
on
that
and
I'm
interested.
A
If
anyone
has
feedback,
if
you've
read
Bamie-
and
you
have
critiques
about
it,
if
you
like
PDFs
and
you're
like
no,
don't
make
it
non
PDF,
because
that's
my
plan
I,
don't
want
it
to
be
PDF
anymore,
then
you
should.
Let
me
know,
because
that's
what
I'm
gonna
try
and
do
I'm
fighting
some
I'm
fighting
about
that
right
now,
I
would
like
it
to
be
a
web
resource.
I
think
it
would
be
much
consumed
much
more
often,
and
it
would
be
easier
for
me
to
link
to
documentation
when
somebody
says
hey.
A
How
do
you
make
a
spatial
Pooler
I,
don't
have
to
say
download
this
PDF
look
at
section
5-1
and
paragraph
2
I
can
just
say:
here's
link,
which
is
much
better.
Okay.
What
else?
Ok,
getting
rid
of
slack
does
do
I,
don't
think
we
need
a
chatroom
anymore.
Now
that
we
have
discourse.
Discourses
is
just
amazing:
I
love
our
forum.
It's
such
a
good
communications,
medium,
I,
really,
don't
think
we
needed
to
chat
room.
A
It's
just
one
other
place
that
I'd
forget
to
look
in
and
and
go
find
people
are
trying
to
talk
to
you
so
I
think
I'm
gonna
get
rid
of
slack
I
already
got
rid
of
git
er
I,
don't
think
I
got
rid
of
it
rid
of
it,
but
you
know
maybe
I
did
but
I
think
I'm
gonna
get
rid
of
them.
So
all
communication
will
come
to
HTM
forum.
There
will
be
no
chat
if
you
want
to
have
chat,
I
mean
it's
pretty
much.
A: So I would say, go there. I'm looking for feedback if anybody's thinking "oh please, no," but honestly, hardly anybody is ever in there chatting, and usually I forget to go there too. All right, the last thing I'd like to talk about (thanks, Luiz, for sticking through 20 minutes of me rambling): people have been asking about how we use Unity with HTM, and Luiz has done pretty much all the work on that.
C: Unity is C# and the algorithms are Python, so they don't talk to each other very easily. The way I usually implement it: I use Unity to do all the UI and to drive the environment, everything that's done from the environment's point of view, and I create a Python command-line interface to the algorithms; then the C# side communicates with the Python process via standard I/O.
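
The Python side of that pattern can be as small as a read-a-line, answer-a-line loop. This sketch, including the command names and the run_step helper, is hypothetical, not the protocol of Numenta's actual project:

    import sys

    def run_step(arg):
        return "result-for-" + arg  # hypothetical stand-in for the real algorithm call

    def main():
        # Unity (C#) writes one command per line to stdin and reads one
        # reply per line from stdout: the simplest bridge between runtimes.
        for line in sys.stdin:
            cmd, _, arg = line.strip().partition(" ")
            if cmd == "ping":
                print("pong", flush=True)        # flush: the C# side is waiting
            elif cmd == "step":
                print(run_step(arg), flush=True)
            elif cmd == "quit":
                break

    if __name__ == "__main__":
        main()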
C: The first step: this is an internal project that we have, one we used for one of our papers (I forget the exact name, the layers-and-columns paper), where we ran experiments with a robotic hand trying to identify objects. You have different 3D objects and you have this hand, and we try to identify the object. So we use Unity to run those experiments.
C: For example, one thing we did is a spatial pooler interface: we created a command-line interface to the spatial pooler. This is pure Python, and it just does that; this is the whole interface. It creates a spatial pooler with the given parameters, then passes in some training data, and then just runs inference. The actual code is basically: create a spatial pooler with all those parameters and just fit it.
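
The core of such a spatial pooler interface, stripped of the I/O plumbing, might look like this. The import path and compute signature match NuPIC from around the time of this hangout, but treat the parameter values as illustrative rather than the project's actual configuration:

    import numpy as np
    from nupic.algorithms.spatial_pooler import SpatialPooler

    # Create a spatial pooler with its parameters (illustrative values)...
    sp = SpatialPooler(inputDimensions=(1024,),
                       columnDimensions=(2048,),
                       globalInhibition=True)

    def fit(encoding):
        """Training pass: compute active columns with learning on.
        encoding: numpy array of 0/1, length 1024."""
        active = np.zeros(2048, dtype="uint32")
        sp.compute(encoding, True, active)
        return active.nonzero()[0]

    def infer(encoding):
        """Inference pass: the same computation with learning off."""
        active = np.zeros(2048, dtype="uint32")
        sp.compute(encoding, False, active)
        return active.nonzero()[0]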
C: So that's basically it; the interface is a stdio back-and-forth between Unity's C# and Python. This is, for example, how you read the content, and so forth; this is the whole interface between them. You can actually find it here; it's in one of the different projects that we created for fun at our internal hackathon, and you can see the interfacing here. So it's accessible; you can look at it today and use it in an actual process, with an actual algorithm.
C: On the Unity side, this is a wrapper around the Python process. We use this Python-runner class that I created: I give it the Python script that I want to use and attach methods as its handlers. Every time you receive something, the handler gets called with the information; for example, I send training data, or I receive a line and can just read its actual content. So this is basically the whole interface.
C: Note that this is a blocking call, so you probably want to do it in a coroutine so it doesn't block your UI; you run it as a separate coroutine so the UI stays responsive. So this is how our interface works: first, install Python; then create a Python command-line interface to the algorithm; then use this class to talk to the Python process, and there you have it.
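
Since the C# coroutine details don't translate directly here, this is the same "keep the blocking read off the UI loop" idea expressed from the host side in Python: spawn the command-line interface as a subprocess and read its stdout on a background thread. The script name is hypothetical:

    import subprocess, threading, queue

    proc = subprocess.Popen(
        ["python", "sp_interface.py"],          # hypothetical CLI script
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    replies = queue.Queue()

    def reader():
        # The blocking readline lives on its own thread, mirroring the
        # Unity coroutine: the main (UI) loop never waits on the pipe.
        for line in proc.stdout:
            replies.put(line.strip())

    threading.Thread(target=reader, daemon=True).start()

    proc.stdin.write("ping\n")
    proc.stdin.flush()
    print(replies.get())                        # -> "pong" once the child replies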
B: Got it, I'll try talking like this. I just saw that you were streaming, so I thought I'd tune in. I'm sort of newer to HTM; I've watched a few of the videos, so I have a more rudimentary question: I'm just starting to grasp the idea of minicolumns, and I realized that the terminology has changed, which has been making it harder.
A: So this is something I would like to talk more to Jeff about, because you can see minicolumns in some places. I know that in mouse cortex you can actually see them: you can see that the cells sort of reach through the layers in these structures, and they're not all exactly lined up, but you can group them like that. In other areas you can't see them, yet they're still there, and I don't know exactly how that works.
A
Is
it's
not
if
so,
let's,
if
you
look
at
a
column
and
all
of
these
layers,
there's
some
layers
that
have
many
columns
and
I'm
not
going
to
I,
don't
know
which
ones
exactly
which
some
layers
will
have
the
columns
and
the
mini
columns
will
span
layer
like
they'll.
Continue
through
you
know,
layer,
3,
layer,
4,
oh
those
are
the
right
layers,
but
they'll
go
through
different
layers,
and
so
we
can
see
examples
of
that
and
we
don't
know
exactly
what
the
functionality
of
that
is.
A
Although
that's
a
research
topic,
I
didn't
talking
about
ball,
often
and
there's
some
layers
that
don't
seem
to
have
many
common
functionality
which
meet,
which
we
assume
means
they're,
not
doing
something.
That's
like
spatial
cooling,
because
we
know
that
the
big
columns
are
enables
spatial
pulling.
You
can't
do
spatial
blowing
up
without
these
mini
column
structures,
so
they
may
be
doing
something
else.
A
We
think
they're
doing
another
type
of
cooling
type
of
temporal
cooling
over
and
generally
you'll
have
like
the
cooling
layer
on
top
of
one
or
two
other
layers
that
might
have
have
many
columns
and
we're
still
trying
to
figure
out
exactly
why
now
physiologically
I,
don't
know,
that's
something
that
I'm
I've
been
wanting
to
ask
Jeff.
But
every
time
I
I
go
to
the
office.
A
It
makes
you
think
that
there's
there's
a
bunch
of
axons
that
are,
you
know
that
are
they're
reaching
from
the
bottom
somehow
or
from
wherever
they're
coming
from
and
they're
like
throwing
up
through
here
and
making
synapses
on
their
way
through
and
those
these
are
like
proxy.
These
are
the
proximal
ones
and
they
go
to
some
area
and
there's
other
axons
up
here
that
are
making
all
these
distal
connections,
and
maybe
there's
some
organization
to
these.
But
jeff
has
told
me
more
than
once
this.
This
isn't
actually
how
it
works.
A
It's
not
like
there's
a
bunch
of
axons
like
like
growing
up
through
this
area
and
making
these
connections,
and
he
hinted
that
that
had
something
to
do
with
with
other
types
of
neurons
that
were
controlling
alpha
I'm
gonna
have
to
ask
ya
so
I'll.
Take
a
note
on
that,
because
I
think
that's
an
interesting
question.
That
I've
been
wondering
too
a
good
question.
Definitely.