From YouTube: Q&A with Jeff Hawkins (2014 Fall NuPIC Hackathon)
Description
Hackathon guests chat with Jeff Hawkins about HTM theory.
A: We did this last time and I really enjoyed it. I thought it was nice, where we just kind of had a Q&A session with Jeff and anybody who was interested. We started off with the new temporal pooling concepts at that point. But for this, what I'd like to see is anybody who has a question about HTM that they don't understand, even if it's a beginner-level question, intermediate, whatever. Jeff's really good at explaining things.
A: Maybe in person, you.
B: There we go. And I'm not sure at times. I feel, you know, it's going to seem silly to say this, but I'm not an expert in everything about this. There are a lot of questions here that I would not be able to answer, especially when it comes to implementation stuff. So I'm looking around for some friendly, demented faces that might be able to help. You're it, Matt. You're the only one.
E: About the... how is the time series implemented in it?
B: I destroyed it. This is a really good question, actually. So let me try to address it at a couple of levels and see if I catch what you're interested in. First of all, the model we like, the HTM temporal memory model, is a predictive model. It says, at any given point, it's going to predict what's going to occur next. Now, there's actually no time in the system.
B: It's just saying: this is what's going to occur next. And if we don't give it anything next, the system just sits there in that state. That's the way the HTM implementation is. What it does is predict what's going to happen next. So when something happens, when X happens, it can say that was right or wrong, and then it can predict the next thing. So it's an inference model.
B: So if you're trying to infer, like you're trying to learn sequences and then recognize sequences, you actually don't need to encode time itself. All you need to do is make the next prediction and ask: does the next thing that comes in match it? It's just predicting one data point at a time into the future, but it does that in a very sophisticated way. So that's for inference: I don't need to actually encode time per se in that kind of model, you follow? You just need to encode transitions, right?
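The idea Jeff describes can be sketched as a tiny transition memory: it stores what follows what, has no notion of absolute time, and at each step checks the new input against its prediction. This is an illustrative first-order toy of my own, not NuPIC's high-order temporal memory.

```python
# Toy transition memory: learns "what follows what", with no notion of
# absolute time. Illustrative only -- NuPIC's temporal memory is high-order
# and uses sparse distributed representations, not single symbols.
class TransitionMemory:
    def __init__(self):
        self.transitions = {}   # element -> set of elements seen next
        self.predicted = set()  # current prediction

    def learn(self, sequence):
        """Store every transition in the sequence."""
        for prev, nxt in zip(sequence, sequence[1:]):
            self.transitions.setdefault(prev, set()).add(nxt)

    def step(self, element):
        """Feed one element; return whether it matched the prediction."""
        matched = element in self.predicted
        self.predicted = self.transitions.setdefault(element, set())
        return matched

tm = TransitionMemory()
tm.learn("ABCD")
# The first element can never be predicted; the rest match the learned sequence.
results = [tm.step(x) for x in "ABCD"]   # [False, True, True, True]
```

Note that nothing here records when an element arrived, only what tends to follow it, which is exactly the distinction Jeff draws next.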
E: Oh, and is that the vector, that sequence, which is coming in, in parallel, from a good number...
B: Yeah. Imagine you've got a whole series of records, right. Each record is sequential in time, and it's just learning that sequence, right. I'm trying to make a distinction, which you may not be making, that we don't actually encode time itself. You don't say: oh, this one occurred 30 milliseconds after that one.
B: You know, there's no sense of absolute time. Remember.
B: Okay, so a series of records are fed into the model and it just learns those transitions. It learns that sequence. And then, when you feed in another series, it basically matches it against all the previous things it's learned, at once, and it's constantly trying to predict what's going to occur next. So from an inference point of view, that's all you need to do: you just need to learn the sequence.
B: I see a hand over there. There's a separate problem, which comes back to playing back sequences, recalling sequences on their own, like a motor behavior, and we haven't really dealt with that yet, right. So none of the stuff we have right now actually plays back sequences. You could do it artificially, but that would require some sort of... you know, the system on its own has to generate the next event, and the next event after that. All we're doing is inference, prediction, anomaly detection.
B: You wait for the data to come in, one at a time. Did that help you? No?
E: I mean, it was one of the presentations, the one we saw Friday. Oh, okay. There were a number of dots, and I'm curious if those dots were part of the time series.
B: Okay, yeah. So there's a proximal synapse stratum with dendrites, and this gets the feed-forward pattern, right, and that's not encoding time. And then, for encoding sequences, it's these distal dendrites here, the ones we call coincidence detectors, right, and these are getting connections from the other cells nearby. So basically, this cell is learning to predict when it is going to be active, based on the other cells' activity nearby.
B: So it says, you know: if I recognize the state a moment before I become active, then I become predicted, I'm primed to fire. These are encoding the sequences; this is encoding the transitions, these cells. This is all about learning, predicting, when the cell is going to become active, based on some very sparse, high-order state that says: I'm at the 23rd note in this melody.
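The coincidence-detector role of a distal segment can be sketched very simply: a cell becomes predictive when any one of its segments sees enough previously active cells among its connections. The segment contents, cell names, and threshold below are my own illustrative assumptions, not NuPIC's data structures.

```python
# Sketch of a distal-dendrite "coincidence detector": a cell enters the
# predictive state when enough of one segment's presynaptic cells were
# active on the previous step. Cell names and threshold are illustrative.
def predictive(segments, prev_active, threshold=2):
    """A cell is predicted if any distal segment matches >= threshold
    previously-active cells among its connections."""
    return any(len(seg & prev_active) >= threshold for seg in segments)

# One cell with two segments, each learned in a different temporal context.
segments = [{"c1", "c2", "c3"}, {"c7", "c8", "c9"}]
in_context_1 = predictive(segments, {"c1", "c2"})        # True
in_context_2 = predictive(segments, {"c8", "c9", "c4"})  # True
no_context = predictive(segments, {"c1", "c7"})          # False: no single segment matches
```

Having several segments is what lets one cell predict its own activity in several unrelated contexts, which is the point Jeff makes next.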
B: I recognize that; I'm part of the next note, that type of thing.
B: Remember, you've got a huge number of cells nearby, and at any point in time, due to feed-forward input, a very sparse number of cells become active. And as soon as they become active, they say: oh, who was active nearby recently? I'm going to try to make connections to them, and that becomes my prediction. It basically says: if I can see those things nearby, then...
B: Maybe even more. Well, this is a separate topic. If we start talking about, in biological brains, the time scales at which we can hold things in our head, or keep things active, sort of a temporary memory, that's kind of a separate problem, a separate topic too. We're not really dealing with that.
B: There's a series of other things I was trying to get to in my answer earlier, about real time, like: how would I play back something with proper cadence, like a melody? How does the brain encode that? Those are separate questions. But right at the moment, the simplest thing is to say: look, where do we learn the transitions in time? Where is the sequence stored? It's stored here. Each cell is learning, on its own, when it becomes active.
B: What are the things that predict it? And it can do that in hundreds of contexts. And that cell, any cell, of course, is reused over and over again in many different representations, in many different temporal contexts. But it's always the combination which is unique. That was part of Subutai's talk: you've only got so many cells, and if only some percentage is active, they're going to be used over and over and over again in many, many different contexts, but it's the combination which is unique. So that's why the whole memory is distributed. This guy doesn't know what anyone else is doing; he just says: when I become active, what are the contexts in which I become active? So if this cell was participating in a melody, it could be participating in many different melodies, at many different points in those melodies, and it's recognized with different states in all those different melodies. In some sense, I think that answers your question. I think that's it.
A: Yeah, here at a meet-up.
D: You're saying that, like, yeah, the primitive subcortical areas already know how to do this stuff, and it grows, it just...
B: Makes associative connections. Well, the basics are subcortical. A region of cortex is sitting on top of a whole bunch of other stuff, and it doesn't really know anything about it. You have to imagine neural tissue is dumb, right; there are no smarts anywhere. So this neural tissue is sitting there, and it doesn't literally know where it is in the hierarchy.
B: Here's, say, two regions in the cortical hierarchy, and then here's some old-brain stuff down here. You know, a layer 5 cell here can project both down to the region below it, and it can also project down to the old brain. It basically says: I can try to control anything below me, right. And this guy's doing behavior, and he's trying to control it. So it's done in a hierarchy. But the key to the whole motor thing is in the hierarchy.
B: At least it felt that way to me. It was understanding that this region of cortex is modeling everything underneath it, not knowing what it is. It tries to build a model of how all this stuff behaves, and then it projects its layer 5 cells to the cells that are generating behavior down there, and auto-associatively links to them. So it's basically saying: I don't know what you're doing down here.
B: I don't even know what I'm projecting to, but I can auto-associatively link to control someone else's behavior. So this region can control this guy, and this one can control this thing, auto-associatively. I can walk through it more, but I'd have to dig out that talk. That was the big insight for me: if you can do this auto-associative linking, then this guy models the behavior of this thing, then it learns to control it, and then it can control it in new, novel ways.
B: The part I'm struggling with right now... well, not right now; the part I'm struggling with when I think about this is this. What we know is that here's cortex. Let's call that cortex.
B: It's okay, I just think about colors, I like it. But in any case, we have a pretty good idea how this guy can learn to control this behavior down here, whether it's other cortical regions or other things. So I have a pretty good stance on how this guy can learn to control that, and I have a pretty good sense of how it can learn to generate sequences of these things. What I don't know yet, and what I'm struggling with, is: how does it decide what to do?
B: Which is literally what it means: the lower stuff. Basically, they're just a group on their own. There's a whole bunch of little structures; they have a whole bunch of different names, but they get grouped together because they're kind of small and all together. And what happens, what the general belief is, is that the cortex presents to the basal ganglia all the...
B: ...things you could do, the union of them all, right. And the basal ganglia is going to choose one, through some magical mechanism, and tell back to the cortex: execute that one. So there's a lot of existing psychology and other theory about how we evaluate behaviors, how we choose behaviors, and how we learn behaviors. Anyway, this pathway is really complicated.
B: It runs through all these little regions, and then it goes back to the thalamus, and it's really messy. And although there's this basic idea that the basal ganglia decides which behavior to execute, there's really almost no real detail you can stick your teeth into and say: oh, that explains it. It's just kind of mysterious. And so what I struggle with is: how is it that we, who are trying to model cortex...
B: How is it that we want to... what is the desirable outcome? What kind of problems can we set up, where we can structure something so that I have a problem, or I can pick the right answer? It's just...
B: I can't even articulate it very well, but we sort of have a lot of the mechanisms in place. At least in my head they're in place; they're not coded up. But then there's this idea of how do you generate... you know, what is the function you're trying to optimize?
B: How are we going to do that? Do we have to do it? What kind of subcortical structure do we have to design to do this? Because apparently the cortex itself doesn't do that. It's that kind of stuff. There's also another problem which I'm struggling with.
B: If I look at the layers in a region of cortex: you've got 2/3, you have 4, you have 5, and you've got 6 down here. And the cells that generate the behavior are in layer 5.
B: Those are the ones that project subcortically, but they're complex. They also project up the hierarchy as well. They get their input from layer 3; they also receive input, feedback, from layer 6 to the layer 5 cells. There's a lot of circuitry here that I don't understand, and I wish I did understand.
B: And you read the literature, and I can't even get some basic things. Like, there's a very common one: you'll see "layer 3 projects to layer 5". You can read that in any neuroscience textbook. Well, you say: okay, where do they project to, the distal synapses or the proximal ones?
B: I can't find that. I don't really understand the relationship between layer 3 and layer 5 exactly. It looks to me like layers 5 and 6 are almost repeating what layers 4 and 3 do: the feed-forward input that comes into a region goes into layer 6, then projects to layer 5, and the cell responses here mimic the ones we see in 4 and 3. It looks like there's almost a parallel path here.
D: Following up on the basal ganglia, yeah: a naive implementation would be just an objective function, right, some generic reward and punishment, yeah. Do you see that working, putting that in?
B: Well, obviously we'd like to be able to do that, right. We'd just like to do something really simple. Like, today, we don't really have emotional states in our models at all; we don't plan on having them. But just something really simple: we could turn on learning, or speed up learning, slow it down, things like that, sure. I mean, yeah. We don't have to model the basal ganglia. I have no intention of modeling the basal ganglia; it does all this weird stuff, it's got all kinds of stuff in it.
B: It's like modeling twenty little things. But still, it's...
B: I can't really talk about it too intelligently. You know, a lot of the pieces are in place, and it feels like: oh, it can't be that hard. And it really can't be; nothing in the cortex is really that hard.
B: These are all stupid neurons, and they're all just doing simple things. But anyway, I've been playing around with different models and different connectivity between these layers, understanding 5 and 6 versus 4 and 2/3, and trying to figure out exactly what's going on here. And I've got pieces of it; I've got a lot of pieces. We have half of it, I think, it feels like.
B: I don't know, this could be a very long one. Is it something short?
I: It's short, yeah.
B: Really? Yeah, okay. You want to give it a shot? Yeah, go for it. I have trouble believing it's really that short.
I: Feed-forward input, and they all have approximately the same response to that input, okay, varying very slightly. But we know that this cell here, the inhibitory cell, has a faster response to the same input than any of these guys. Okay, so that means it's going to have a higher slope than any of the individual cells.
I: And somewhere over here, before that signal reaches it, the same thing happens in another column, okay, and that sends the same type of signal back this way, and that means there's no activation in here. Yes? Okay. So these cells are now protected from being hit by an inhibition wave, yes, from elsewhere. Yes? Right.
B: Well, what basically happens is that all these cells still try to inhibit one another, so they burst. But then, very quickly, one of them wins out, because there are still those inhibitory cells that can inhibit the others, so one of them is going to win. So we basically get a burst, followed by a single cell being active, okay.
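The burst-then-settle behavior Jeff describes can be sketched as a tiny column rule: if some cell was correctly predicted, only it fires; otherwise every cell bursts and fast inhibition leaves a single winner. Picking the lowest-index cell as winner is an illustrative assumption; the real tie-break is the inhibitory race.

```python
# Sketch of burst-then-settle in one column: a predicted input stays sparse;
# an unpredicted input makes every cell fire briefly, then inhibition leaves
# one winner. The lowest-index tie-break is an illustrative assumption.
def column_activate(cells, predicted):
    """Return the set of cells that end up active in this column."""
    hits = [c for c in cells if c in predicted]
    if hits:
        return set(hits)      # correctly predicted: stay sparse
    burst = set(cells)        # unpredicted: every cell fires (the burst)
    winner = min(burst)       # fast inhibition leaves a single winner
    return {winner}

cells = [0, 1, 2, 3]
predicted_case = column_activate(cells, predicted={2})    # {2}
burst_case = column_activate(cells, predicted=set())      # settles to {0}
```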
I: Basically, yeah, but it's a differential thing. But these guys all fire, effectively, for the whole rest of the timestep, okay, but they're not learning. So what's the point?
B: No, but they are, because they're learning the feed-forward, right.
I: Okay, okay.
B: No, because there's still this mechanism that says: when a cell fires, it inhibits all those neighbors, right. What happens is that the column activation occurs first; before that happens, all of them fire, but then, once one cell fires, it inhibits its neighbors. So this whole bursting idea is not something that's stable. It's temporary.
B: But the dendritic action potentials do. I think it's like 30 milliseconds, or 30 millivolts. It's a big jump, yeah.
I: Correct, but in the implementation, that's missing, right? So I don't think that's...
B: It's okay; it works. The other implication of this is that if you have bursting cells, if you have the column bursting, then all of these cells here learn the feed-forward that matched the inhibitory cell, yeah, okay. And so they tend to converge, in their own dendritic signatures, to whatever the inhibitory cell, the column cell, represents.
I: So in layer 4, all right, you have columns here, okay, yeah, right, with multiple cells in them, okay. And in the case where, let's say it's a NuPIC-sized region of 2,000 columns, 32 cells per column, okay. In that case, when you have correctly predicted transitions everywhere, yes, you have 40 cells active.
B: Virgo, I'm going to challenge you. I don't think you're heading towards explaining how layer 5 works. So far we're talking about temporal pooling and how temporal pooling is occurring in layer 3, and we can talk about that, but so far, you and I, this is exactly what we've laid out before.
B: So I don't see any difference here. But we can go into talking about temporal pooling and exactly how it works, because we're in the process of doing that right now. But I don't see how you're going to bring this back to layer 5 and the behavior question that I originally raised.
I: The signal is so poor that those cells are basically going to be connected to several of these things that don't change, for example, okay. And they're also going to inhibit their neighbors, so that their neighbors aren't learning any of this current set of transitions. Okay, so...
I: Okay, what happens is that when you have sparseness, you get temporal pooling in layer 3, okay? When you...
B: That's not necessarily true, and we can debate that, because it's not clear. The way we're propagating between layer 4 and layer 3 is through the inhibitory cells, and it's not clear that those actually go in the other direction. But it might.
I: But layer 5 is doing the same thing, except it's getting predictive input from layer 3, okay. And what it's trying to do is generate behavior that maintains the sparseness in layer 3, okay, right? So it's generating behavior, and it learns to associate whatever it's getting from layer 3, which is...
B: Well, I guess in general I would agree with that. In general, the whole idea here is that when you have sparse representations, it means you predicted the input correctly. That is what you want to use to propagate, to do pooling on at the next level. If you can't do that, you pass the noise up to the next step. I wrote about that in On Intelligence; it's part of this whole theory here. Yeah, the question...
B: The question is exactly how that works in layer 5; that's not clear. It's not clear that the same sort of propagation occurs, in the columnar sense, down to layer 5, or whether it's doing its own thing, and it's not clear what the layer 5 proximal inputs are. So what we're modeling is that layer 3 can be the sort of predictive signal for it. So I think this is a much longer conversation.
B: I don't think we should turn this little session into that. I'm not against talking about this, sure, but...
B: I agree 100%. That's what I wrote about in On Intelligence: the goal of behavior is, in some sense, to make predictions come true, or to make things more predictable, yeah. But I still have the question about how you evaluate multiple behaviors, and what kind of system is designed to do all that. I still come back to the same thing.
I: My suggestion is that the answer is that, in the same way that this is sparse, if this is sparse, okay, yeah, then layer 5 is getting a similar sort of input, either sparse or non-sparse.
I: And it's also trying to maintain this sparse, stable representation coming from layer 3. And so its representation, when everything is working, is a sequence of very sparse representations in layer 5 that are generating very specific behaviors.
B: Feedback. I agree, although the details there are the hard part, and that's where I'm struggling. I mean, the basic idea, I agree with you on. But I think that this is...
J: Do the layer 5 cells get input on apical dendrites from layer 1 as well?
B: All the regions above, we know, could be more stable than this guy. And so, as I go through sequences here in layer 5 (this is through time), these are sparse distributed representations, sequences, in layer 5. As I go through them, these layer 5 cells have apical dendrites going up to layer 1; they're all getting feedback from the region above. And essentially, what will happen is, as I go through these sequences, there's a stable representation in layer 1 coming from above. So this is like a higher-level concept.
B: It's stable, and I'm going through sequences here, so every one of these cells can be associated with this stable representation. So in the future, if I just present this stable pattern here, it will invoke a union of layer 5 cells to become depolarized. And what that union represents, in some sense, is all the possible states of all the sequences associated with this thing.
B: I've basically associated that with multiple sequences here. You can think of these as the different steps I take to get to the kitchen: in one case from the dining room, from my bedroom, from the basement, whatever. And when you say "kitchen", you invoke all of those at once. You essentially invoke all the potential behaviors that led to this more stable pattern here. And then, when an input comes in, it's one particular one. That input might be: I'm in the bedroom. And it'll pick out one of those.
B: It'll basically say: all these guys are depolarized, so it's a big union of depolarization. But now I get a very specific instance: here I am. And you'll essentially start playing back that motor sequence from that place. So I basically say "go to the kitchen" (I usually use this as an analogy): there's a high-level behavior I want to achieve, I invoke a union of all the behaviors that could ever possibly do that, and then I have a specific input that comes in and says: you are here.
B: It's like saying: you're at this note in this melody, and I know exactly and immediately what to do to play it out. It's sort of a union of these things, and you can walk through the depolarizations and so on. So this I'm pretty confident about: you're invoking a union of behaviors, sequences of behaviors, and then, when a particular input comes in, you just know exactly what to do, and it just follows that behavior.
B: As long as this thing is stable here, it just follows that. So that's what's going on with these layer 5 cells projecting up into layer 1. And a similar thing is happening with layer 3 cells projecting up into layer 1. What's going on there? That's sort of saying: I'm expecting you to see something, I'm expecting you to perceive something. Like: look for the dog, or, you know, look for the cat. And you invoke, essentially, a union of layer 3 cells that are all depolarized.
B: That could be all the different possible patterns that you might evoke with that thing. So now, when an input comes in, even if it's ambiguous, it will be interpreted in that context. You'll immediately go to that sparse representation. You come in and say: I'm not really sure what this is; but because you've depolarized those cells, you'll come immediately to the sparse representation that says: yep, there's a cat. And you'll be biased toward seeing the cat.
B: So this basic idea of feedback, from a stability to a changing pattern, is happening in both places. It's happening to layer 3 cells, and it's happening to layer 5 cells, again sort of suggesting this parallel construction between layers 3 and 5, where layer 3 is really a sensory pattern and layer 5 is a motor pattern. And so this one is invoking a sequence of motor behaviors, and this one is essentially invoking a sequence of inferences, if you will.
B: So all of this is one of those pieces that is clearly happening, but I still don't have the whole picture wrapped up yet. I'm still struggling to put all the pieces together, and it's a multi-modal problem with lots of different edges to it. So I think, maybe, unless we want to keep talking about this...
G: Questions? Okay, yeah. Jeff, can you talk a little bit about your understanding of inhibitory cells and their role in temporal pooling?
B: Yeah, let me just think about that. We're still working out exactly how we think temporal pooling is working, so that's the hardest part for me to jump into. Well, how about I just give you a brief introduction to some inhibitory cell stuff, and then we can talk about some of what Virgo just mentioned here. So, just as a general rule, 75% of the cells in the cortex are excitatory cells and 25% are inhibitory. And, as another general rule...
B: It is generally believed that the inhibitory cells are not where learning occurs, and there are a lot of reasons to suggest this. You're shaking your head; you don't have to believe it, but this is the general neuroscience belief. It could be wrong, right. We kind of subscribe to it; there's a lot of biological evidence to suggest...
B: ...this is true, in the sense that when you look at connections between excitatory cells, they're very, very sparse. It's very rare to see an excitatory cell make multiple synapses on another one, but it's really common with inhibitory cells. If I look at a column of excitatory cells, which I'll draw as brown here, and then you look at an inhibitory cell, which I'll draw as green, the inhibitory cell might make many, many synapses onto all these guys, in ways that would just completely shut them down.
B: There's no integration; there's nothing going on there. The inhibitory cells impact a neuron in three places.
B: They can form synapses on the cell body itself, which excitatory synapses never do; only inhibitory synapses do that. Excitatory synapses are always on dendrites. They can also form synapses at the very beginning of the axon: there's a little place at the beginning of the axon where the actual action potentials start, and they can form synapses there, which is very strong. It's basically saying: I don't care how depolarized you are, you're not going to generate a spike.
B: It's not going to propagate; you're just not going to get one. So these are multiple kinds of synapses from inhibitory cells. They don't seem to be carrying any information other than just "shut down". And the third place they occur is along the dendrites, typically even the far, distal dendrites, and I believe those are actually changing the threshold of the distal dendrites, but we don't model that. So this is a way that the cortex can turn up predictions and turn down predictions.
B: Not enough prediction, or I'm doing too much. Anyway, so inhibitory cells are generally believed not to be information-carrying, and there are a lot of reasons for that. There are many classes of inhibitory cells; it depends on who's counting, but you might think of, like, five of them, or something like that.
B: The most common one is the kind I've just shown you here: when a cell becomes active, it activates an inhibitory cell, and that tries to shut down everybody in some region nearby, very strong, very fast. We rely on this for our sparse activation; this is essentially how sparse activation occurs. Very well documented.
B: There are other inhibitory cells which have a very strong columnar aspect to them, and so they literally define these bundles of inhibitory axons, literally surrounding the minicolumns, and there are two classes of those. We're relying on them, in a very fuzzy way, to say, basically, well, what they seem to be doing. Some people have said that these cells inhibit the inhibitory cells. So this column's cell would inhibit the other ones, which essentially says: no one's inhibited.
B: All these cells here are not inhibited. This is what would lead to the columnar activations, because you have fast inhibition of inhibitory cells in a columnar fashion. So those are the general ideas, and there are lots of other flavors, which are complex, with other things going on. So, as I just said, we rely on these two basic ideas predominantly: the spatial pooling function of selecting the column, and then quickly selecting one of the cells to win.
B: Just like Virgo was talking about. But we think what happens is: even if the inhibitory column activation occurs quickly, one of these cells will start firing, and as soon as it starts firing, it will then inhibit the other guys. So now, you wanted something specific in terms of temporal pooling. All right, I'm dancing around it, because I'm not sure what I'm going to say about that. We are still working through temporal pooling.
B: We are still trying to figure out how we want it to work and how we think it would work, and we've been modeling it mostly on layers 3 and 4. So we're basically saying here: if I have an unpredicted input, that's the easy case. If I have an unpredicted input in layer 4, I get bursting in columns. We believe that actually just propagates straight up into layer 3, and so you get bursting columns here too; it's like nothing's happened.
B: Then you have a very sparse activation, and then another sparse activation, and another sparse activation. Now you want that to lead to a stable pattern up here, and this circle is just saying that, basically, you want to have as many of these changing patterns here map into that stable pattern. That's the basic idea.
B: We started off, and this is totally just, you know, work in progress, so just bear with us; I might tell you something completely different a month from now. And maybe I'll be able to work your question about inhibition back in.
B: Let me just start here. One question is: which cells am I going to be trying to pool? So when I say temporal pooling, we're talking about cells that have a proximal synapse here, right here, and we want this cell to learn multiple patterns down here: pattern A, pattern B, pattern C, pattern D, things like that, right. And so...
B: ...we wanted to have one cell in one of these sparse columns, one of the, you know, sparse activation columns, one cell in each column. We're now working towards an idea where it actually might be more dense than that: we might have, like, one cell in all the different columns, or a more distributed, less sparse representation. There are some reasons for that. And so there will be a bunch of cells up here that are essentially staying on throughout; basically, they're winners with hysteresis.
B
They just keep going until someone tells them to stop, and the signal that tells you to stop would be a bursting column. A bursting column says: you stop, because I got a new, unpredicted input. But as long as I have a predicted input, I can keep these cells going and keep learning. Now, where do the inhibitory cells come into play there? At a very simple level:
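The persist-until-reset rule described here — pooled cells stay on through predicted inputs and are shut off by a bursting column — can be sketched as a small update function. This is a hypothetical illustration of the stated rule, not code from NuPIC:

```python
def update_pooled_cells(active_pooled, new_winners, input_was_predicted):
    """Toy sketch of the 'keep going until a burst tells you to stop' rule.

    active_pooled:       set of pooled cells currently staying on (hysteresis)
    new_winners:         cells driven by the current feed-forward input
    input_was_predicted: False means the lower layer burst (unpredicted input)
    """
    if input_was_predicted:
        # Predicted input: existing pooled cells persist and new winners join.
        return active_pooled | new_winners
    # A bursting column is the stop signal: reset to just the new winners.
    return set(new_winners)

state = set()
state = update_pooled_cells(state, {1, 2}, True)   # cells 1, 2 turn on
state = update_pooled_cells(state, {3}, True)      # 1, 2 persist; 3 joins
state = update_pooled_cells(state, {9}, False)     # burst resets to {9}
```

The design choice being illustrated is that stability comes for free from hysteresis, so only the unpredicted case needs an explicit mechanism.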
B
We're going to rely on these guys again here, because we essentially say: well, these guys will allow a very sparse representation to be active, and there'll be a bunch of cells that are winners, and they can initially shut down anybody nearby. But you can still have quite a few cells; it's just going to be a sparse representation. I'm not sure we've thought about inhibitory cells anywhere else yet in that context, so maybe you had some specific thing you wanted to ask.
B
So in general, the rule we're working with so far is that when a column bursts, it's always temporary — it's a transitory state. It's a temporary state where all these cells fire, but as soon as they start firing, one guy is going to win out. It's a winner-take-all because of the inhibition. So it's always a quick thing: I burst, and then I settle on a sparse one. That's always going to happen, and that would happen in layer three too.
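The burst-then-settle behavior described here can be sketched as a winner-take-all over the cells of one minicolumn. This is a toy illustration of the rule as stated (hypothetical names and a made-up "bias" standing in for distal segment activity), not the NuPIC algorithm:

```python
def resolve_burst(column_cells, predictive_cells, bias):
    """Toy winner-take-all sketch for one bursting minicolumn.

    If some cells in the column were predictive, they fire and inhibit the
    rest — no burst. Otherwise every cell fires briefly (the transient
    burst), and the cell with the highest bias (e.g. the best-matching
    distal segment) wins out; inhibition suppresses the others.
    """
    predicted = [c for c in column_cells if c in predictive_cells]
    if predicted:
        return predicted                      # predicted input: no burst
    burst = list(column_cells)                # transient: all cells fire
    winner = max(burst, key=lambda c: bias.get(c, 0))
    return [winner]                           # inhibition leaves one winner

cells = ["c0", "c1", "c2", "c3"]
assert resolve_burst(cells, {"c2"}, {}) == ["c2"]                 # predicted
assert resolve_burst(cells, set(), {"c1": 5, "c3": 2}) == ["c1"]  # burst
```

The burst is never a stable state in this sketch — it only exists inside the function call, which always returns a sparse result.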
F
That's not really a follow-on, but it deals with layer four. So does sensory data come into layer four twice, through both proximal and distal input, or is that just an implementation detail in the code?
B
Well, at the moment we'd say that's an implementation detail. At the moment, as we're starting to model layer four, we do not have sensory data coming onto the distal dendrites.
J
It tries to predict — given the current sensory signal and the motor command you're about to make — what the next sensory signal will be. So currently we do feed both the sensory and the motor into the distal.
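The prediction just described — next sensory input as a function of current sensory input plus the impending motor command — can be sketched as a learned transition table. This is a deliberately minimal stand-in (a lookup table, hypothetical names), not the distal-dendrite mechanism itself:

```python
class SensorimotorPredictorSketch:
    """Toy sketch of sensorimotor prediction: given the current sensory
    input and the motor command about to be issued, predict the next
    sensory input. A lookup table, not the NuPIC model."""

    def __init__(self):
        self.transitions = {}  # (sensory, motor) -> next sensory

    def learn(self, sensory, motor, next_sensory):
        # Record one observed (state, action) -> outcome transition.
        self.transitions[(sensory, motor)] = next_sensory

    def predict(self, sensory, motor):
        # Returns None for unseen (sensory, motor) pairs.
        return self.transitions.get((sensory, motor))

# "I'm looking at an eye, I'm about to move right, I predict another eye."
model = SensorimotorPredictorSketch()
model.learn("eye", "move_right", "eye")
model.learn("eye", "move_down", "nose")
```

Note the keys are non-high-order states ("eye"), not sequence-dependent states ("eye reached by looking up from the nose") — which is the distinction Jeff draws below about what should feed the distal synapses.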
B
Yeah, yeah — I'm sorry, maybe I missed the intent of the question. So you have a cell here. It's getting feed-forward input, which we'll think of as sensory, right? And then on the distal side you're getting motor and, I guess, you're getting sensory as well. Here's the interesting thing about it.
B
What we didn't want to do initially is have other layer four cells connect here, because other layer four cells represent a high-order state, and that's what we need in layer three. If you want to predict there, you want to predict from high-order state to high-order state to high-order state. But here you don't want to do that, because it's not a high-order pattern.
B
And we know that there are motor commands coming into the cortex through the thalamus, just like the sensory data, but we don't know where they go. I haven't been able to find anything about that, so we're predicting they're going to go there, onto the distal synapses. That's our prediction; hopefully we'll be able to find some literature which proves that right or wrong, but that's what we're predicting there. And the idea behind this, roughly, is we said:
B
Okay — and I don't know what the state of our code is, but I wrote a little note earlier this year — what you want to feed into these distal synapses is not necessarily the raw sensory data. You want to represent sort of the columnar representation here. You want to say: I'm looking at an eye, I'm about to move to the right, and I predict I'll see another eye. And I don't want to be at some high-order state.
B
I don't want to be saying: oh, I'm at an eye that I got to by looking up from the nose. I want it to be: that's just an eye, and I look to the right. And so my original thinking was that you wouldn't want to take this raw sensory data; you'd really want to have a columnar representation of it, and that could be coming from layer six. You could go back and read that email, but a lot of this stuff is in flux here. So this sort of fits the model; it says:
B
Okay, this is a non-high-order state of the current input, this is the motor command, and then we can predict what's going to happen next. There are some complications here. We do know that even though 65 percent of the connections come from here — and we believe the motor connections are a lot of these as well — there are a few connections from other layer four cells. Not a lot, but there are some. We're starting to think about what those might be doing.
B
So, you know, that's the state of the model. But this number here, and this here, makes a lot of sense, and the fact that these cells are not highly interconnected makes a lot of sense. So that all fits with the theory, but there are a couple of things we don't understand exactly. We don't understand why some of these connections are there, and we haven't worked through all the details of how this is working, so we don't really model this.
B
The theory says you have to have multiple cells per column; the bursting, all that stuff, is the same. The theory says, though, that the distal dendrites seem to be getting input from both a motor command and a non-high-order spatial state. And the things we don't understand exactly are, like: why are these other connections here? Some small percentage are representing, presumably, high-order state — why are they going there?
B
Maybe we actually have a speculation about why they're happening. And we don't really know what's going on in layer six, but it's nice to know that it's at least getting input mostly from layer six and not from other layer four cells. If it were all from other layer four cells, we'd be in trouble — this wouldn't work. So we can speculate about what layer six is sort of representing, and these layer six cells are simple cells, so we can say: okay, it's getting some non-high-order state from layer six, and layer —
B
Six, by the way, gets direct feed-forward sensory input. So it's sort of like layer six is doing spatial pooling just like layer —
B
Four is, and it's passing that representation up. So there's a bunch of stuff we don't really understand, but overall it hangs together really well — I mean, overall this is very, very nice. Again, there are a lot of things we speculated here that could turn out to be different and would have shot the whole thing to hell, but most of them fit pretty well; there are a few things that don't fit very well.
H
So if you think about voice, right — we've got different temporal scales that, you know, the brain is paying attention to, right? So you've got phonemes, you've got vowels and consonants, and then you've got —
B
If I could create a humongous, monstrous, super-monstrous V1 — or A1, the auditory one — it could learn a lot more than the ones we have; you could basically do more in that first region before you go up to the next region. But it turns out that — how do I describe this — in theory, you could build a single-layer neocortex that does everything, but it would take forever to train.
B
It would have to be exposed to everything, and you'd have to have, you know, the amount of memory required to do this. But if you had an infinite amount of time — or a very, very large amount of time, eons — and you had all this training data, you could basically build the monstrous, huge V1, or whatever you want, and you could learn everything. But what happens is you can't do that. So what the cortex does — here's the general idea.
B
The cells can only learn within certain regions, and they try to keep building longer and longer sequences, until there's no more memory here. So these guys learn as much as they can, and then you just have a pooling over this: you represent these sequences as more stable patterns, and those become the inputs to the next guy. And the next guy does the same thing — he's trying to learn, and he learns longer sequences.
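The hierarchy idea just described — a lower region learns the short sequences it has capacity for, replaces each with a stable pattern, and that stream of stable patterns becomes the input to the next region — can be sketched as a chunking step. This is a hypothetical, greedy longest-match illustration, not how any cortical region or NuPIC network actually does it:

```python
def pool_stream(stream, known_sequences):
    """Toy sketch of one level of the hierarchy: replace each recognized
    short sequence with its stable token; the token stream is what the
    next region sees. Greedy longest-match chunking, illustrative only.

    known_sequences: dict mapping tuple-of-elements -> stable token
    """
    out, i = [], 0
    while i < len(stream):
        # Try the longest known sequences first.
        for seq, token in sorted(known_sequences.items(),
                                 key=lambda kv: -len(kv[0])):
            if tuple(stream[i:i + len(seq)]) == seq:
                out.append(token)          # stable pattern for the sequence
                i += len(seq)
                break
        else:
            out.append(stream[i])          # unknown element passes through
            i += 1
    return out

# A lower region that has learned two short sequences; the next region
# up sees two stable tokens instead of six raw elements.
known = {("g", "u", "d"): "GOOD", ("d", "a", "y"): "DAY"}
tokens = pool_stream(list("gudday"), known)
```

Stacking this function on its own output would then let the next level learn sequences of tokens — longer effective sequences with the same per-level memory, which is the trade the hierarchy makes.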
B
Up here and in here? Well, it depends on how much memory you have and how complex the world is. I don't think it's like: oh, words are at this level, and concepts are at this level, and phonemes are at this level. It's not going to be like that. It's going to be wherever they kind of naturally max out. And the hierarchy basically makes it solvable in real time with limited amounts of memory, but you do lose some capability by using a hierarchy.
B
So the short answer to your question is: there won't be a sharp break for things like you and I think about, like words or phonemes. And I can give you lots of examples. Say, here's the phrase "good morning" — is that one word or two words, right? You know, it depends: if you hear it enough, it becomes one word; if you haven't heard it much, it's two words. So —
B
Okay, right. If I just sit here and go [makes a sound] — that's not a sequence! Okay, don't think about it that way. Imagine you could break it up into little phonetic sound transitions, all right? Then don't worry about how long it takes to make those transitions; it's just the number of transitions that are, you know, comprised in this auditory stream, and then it'll just parse them up however it can.
B
No, I don't think so. I don't think so. Remember, the cortex doesn't know anything. It's just looking at sequences, and this auditory cortex — or whatever cortex you're talking about — is no different than any other cortex. It doesn't know what this is. All it can do is look at the statistics of the data and say: all right, what are the repeatable sequences here? How many of them can I learn before I run out of memory? How long a sequence can I learn before I run out of memory? And then — so, I could imagine:
B
For example, imagine a language we came up with that has very few phonemes, and they're very simple, and the words have, you know, a perfect architecture to them — Esperanto or something, I don't know what it is. You would end up with a very different representation here than if I had some other thing where I had 50,000 phonemes and it just makes it up as it goes.
B
It just says: I'm just going to try to learn as much as I can, as much as I can, as much as I can, and that's it. I don't think there's anybody sitting there going: oh, this is language, here's a phoneme, and here's the word. And I gave you an example — you know, a word like "good day." Is it really one word or two words, right? You know, and you start out —
B
It sounds like two words — it's "good day" — but then it becomes one word. And same with phonemes: they all kind of move together. You learn phrases, you know, and there's no clear break here. We impose those things after the fact to try to describe what language is. That's what linguists do, right? They say: oh, these are words, and these are nouns. And this is why linguists have such hard trouble understanding language — they want to put that structure on it, when in reality that structure doesn't really exist.
B
Right. I think the first word — you know, the second word my daughter learned was "strawberry," because she loved strawberries. And she didn't think of it as, you know — it was just a thing. It wasn't, in her mind: no, that's a three-syllable word. No, she didn't — it was just one thing, strawberry, and she probably didn't make a connection to any other words. Over time this all gets sorted out. But I don't think the visual world is broken up —
B
That way. The tactile, somatosensory world is not broken up that way. The auditory world is not broken up that way. We impose language structure after the fact. We don't learn to speak by grammar rules; we learn to speak just by repeating what someone says and hearing it again and again. Only later —
B
Language is what works — it's what works, right? I mean, same thing with music, right? You can break down music and say: oh, you know, these are different types of phrases. If you take music theory, you know, these are different sorts of resolutions and chord progressions and different types of dissonance and so on. But in reality, the first musicians didn't do any of that. They just started playing stuff that they liked to hear, and that was that. You know, then later we can go back and go: oh no —
B
That is a such-and-such progression, and you resolve the seventh from the tonic, and blah blah blah. You know, but they didn't start — no one started that way. And language is like that too: we just started talking and making sounds and noises, and then later we came back and said, let's put some structure on that.
B
So I think Matt wants me to wrap up, so I'll leave it there.
B
I was talking to the — what did you say? No, not yet — I had someone, I'm —
B
Okay, I can keep talking; you can just stop recording.