From YouTube: Jeff Hawkins on How To Model Neocortical Neurons
Description
Jeff Hawkins gives a talk about how he thinks about neurons and modeling them. This will be focused on internal engineers, but Jeff is fine with us live-streaming it as well.
Discussion at https://discourse.numenta.org/t/how-to-model-neurons-with-jeff-hawkins/6350
Ready? Yeah — so what we're going to do right now: we have a bunch of new people here, so we'll do this fairly informally. We thought about doing a set of little tutorials about neuroscience, and we figured a good place to start would be to talk about neurons and how they work. This can be very interactive — I haven't prepared anything, so you can direct it as we go. Okay. So it's really sort of two worlds.
[In the artificial neuron,] the inputs all get summed together somehow — that's supposed to be the signal — and then there's typically some sort of nonlinear output function on top of that, and that's how you get the output of the thing. So each of these inputs has some scalar value that comes in; it gets multiplied by a weight, the results get summed together, you put that through some nonlinear function, and you get another scalar output. That's your classic point neuron model.
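To make the contrast that follows concrete, here is a minimal sketch of the classic point neuron just described — scalar inputs multiplied by weights, summed, and passed through a nonlinearity. (This is illustrative; the sigmoid is just one common choice of output function, and all names are made up for this sketch.)

```python
import math

def point_neuron(inputs, weights, bias=0.0):
    """Classic point neuron: sum(w_i * x_i) -> nonlinearity -> scalar out."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes to (0, 1)
```

Note that the weights here can be any real number, positive or negative, and both inputs and output are high-precision scalars — exactly the assumptions the talk goes on to question.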
Now contrast that to biology. In the brain there are many different types of neurons; they're not all the same.
Eighty percent of the neurons in the brain are excitatory neurons — one way to think of it is that's where most of the processing occurs — and then eighty percent of those are pyramidal cells. The other 20% are called stellate cells, though there's a view that the spiny stellate cells are really a subset of pyramidal cells.
So this is a pyramidal neuron, and one way to think about it: it has been argued that the majority of the neurons that aren't pyramidal cells are really just like pyramidal cells like this. The next biggest category is called spiny stellate cells; they're essentially pyramidal cells, but they don't have this apical dendrite — they don't have that little pointy-hat thing as much — and there may be evidence that they're just a derivative of the pyramidal cell and work along the same lines. Okay, so I'll just focus on the pyramidal cell.
Now, there are a lot of things we could talk about here — the differences between these two — but let's just talk about the synapses for a moment. Pyramidal cells have lots of synapses: thousands and thousands. Some may have up to thirty thousand synapses at most; nobody really knows — it's estimated at somewhere between three and ten thousand, something like that.
And all the excitatory synapses are out along the dendrites; there are none on the cell body — the ones there are inhibitory. Now, one thing about these synapses: over here, in the point neuron, a synapse can have a plus or a minus value — it can be positive or negative. Here, in biology, these are only pluses. You don't get any negative synapses.
There is a second set of synapses which are negative — inhibitory synapses — and I'll show those in green. The inhibitory synapses are very different: they work differently, and they're in different places. There are three classes of them, generally. Some lie along here on the — well, sorry, I should tell you this first: this is called the axon and these are the dendrites, and of course the axon goes out, branches, and connects to other neurons.
I'm just going to speed along here. Most of the synapses are excitatory — far more — but the inhibitory synapses in general look different and are believed to behave differently, and so to a first-order approximation you could say that learning occurs at the excitatory synapses, and the inhibitory synapses are not learning anything; they're there for regulation and function. So, for example, the way the output of a cell happens: in a neuron you get this spike. It's a single spike, and spikes are stereotyped.
There are a few exceptions to this — there are a few places where there are dendro-dendritic interactions — but for today let's just say every cell has an axon, whether it's an inhibitory cell or an excitatory cell, and the inhibitory cells make inhibitory connections onto other cells. So another way to think about it: the inhibitory synapses are more like regulation, or plumbing, or code — they're not where most of the learning occurs.
So learning occurs at the excitatory synapses, but there's a complex function of how these things interact, and it's largely based on the inhibitory cells — how the inhibitory cells connect and control timing. So I was going to give an example: here's what happens to the voltage of one of these cells.
This doesn't really matter much from a modeling point of view, but if you look at the internal voltage of this cell, the voltage is maintained down at about minus 60 millivolts — that's the standard resting point — and then when it gets inputs it comes up a bit, and when it reaches a certain threshold you get this spike, and then it goes back down again. Okay, so that's what's going on here: it's summing these inputs in some way, and then it spikes.
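The voltage behavior just described — resting near −60 mV, rising with input, spiking at a threshold, then dropping back down — is roughly what a leaky integrate-and-fire model captures. Here's a minimal sketch; the parameters are illustrative round numbers, not measured values:

```python
def simulate_lif(input_current, v_rest=-60.0, v_thresh=-50.0,
                 v_reset=-60.0, leak=0.1):
    """Leaky integrate-and-fire: integrate input, leak toward rest,
    emit a spike and reset whenever the threshold is crossed."""
    v = v_rest
    spikes, trace = [], []
    for t, i_in in enumerate(input_current):
        v += i_in - leak * (v - v_rest)  # integrate input, leak to rest
        if v >= v_thresh:                # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                  # voltage goes back down again
        trace.append(v)
    return spikes, trace
```

With zero input the voltage just sits at rest; with steady input it ramps up until it crosses threshold and fires.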
It's complicated — you could argue there are many dozens of types of neurons, with different learning rules and different functions and so on. So it's a very complex thing; it's not like there's just one type of neuron. You can divide the neurons into excitatory and inhibitory, then you can divide those further, and so on — there are many classes of inhibitory neurons. Actually, you could argue there are really two basic classes of excitatory neurons, and as I've already said, they're almost the same.
Almost all the excitatory cells are either pyramidal or spiny stellate, and those are really the same. But on the inhibitory side there are a dozen different kinds of inhibitory neurons: they're functionally different, they have different morphologies — they look different — and they work on different rules. So there's much more variety going on among the inhibitory neurons, but they're only 20% of the neurons, and most people don't model any of that stuff — they don't know what it's doing — so they just focus on this kind of model of the excitatory cells.
So I think you can say that the idea of an individual cell having synapses that go both positive and negative is out the window — that can't happen in a biological cell. And for the most part these inhibitory synapses do not learn — there are exceptions, but I think that's the place to start. So you definitely don't get positive and negative weights on the same synapse. But that's not even the worst of the problem; we'll get to the worst of the problem in a moment.
So we have this very different world, where we have lots of excitatory synapses that can go from zero weight up to some value, and we have the 20% or so of synapses that are inhibitory and don't seem to learn at all — they're there for controlling things, controlling thresholds; they're like the code of the network. How do you implement boosting? How do you implement sparsity? So let's stop there — that's just the distribution of synapses.
Now let's talk about the values. Over here, in the point neuron world, we assume the weights are usually some sort of floating point value, and we also assume that the output of the thing is a floating point value with some number of digits of precision. What's going on over here, in biology? Well, you could say: I only get a spike on the output. And you might answer: well, yes, but the spike rate can change, so I can have a fast spike rate or a slow one.
So, for example, suppose someone is given a task: I'm going to show you a picture of a hippopotamus or a unicorn, and you have to push the right button or the left button depending on which you saw. You can do that really fast — fast enough that if you trace the path — here's the eyeball, here are the axons coming out of the eyeball, then it goes to this neuron, then to this neuron, then to this one — you can count up how many steps it takes before you push the button. And within those steps there isn't enough time to get a second spike. So on some cognitively difficult tasks you cannot be doing time-averaging of the rate. It's just not possible; there's not enough time to do it.
Several hundred milliseconds is very fast, and if you look at the neurons in, say, the visual cortex, a typical fast firing rate might be 50 Hz — so you've got about 20 milliseconds between spikes — and if you have 10 steps before you can push the button, that's 200 milliseconds. And there you go: you do the task in about 200 milliseconds, with no room for rate averaging.
That's not to say rate coding is never used — I'm not saying it doesn't happen; it does happen, and in places it appears to be important. When I exert a force with my muscle, there are spikes coming down to the muscle, and the spike rate corresponds to how tense the muscle is. And here's something I notice at night sometimes — I don't know if anyone else does this — I sleep with my head on its side, and often the pillow is pushing right here, and I can hear it.
That's right — I get this maybe once a month, when everything happens to line up just right, and it's really fun: you can tense your muscles, and what I'm hearing is the spike rate rising. But the point is that many cognitive tasks occur quicker than you have time to do rate coding. So I'm not saying there isn't rate coding, but there are a lot of things in the brain which don't rely on rate coding.
You can do hard things in so little time that there really isn't any way to get precision out of a rate — it's when the first spike occurs that's really important. So again, it's not to say there's no rate coding, but we said: we have to understand how a system can work where, in many situations, it can't rely on rate coding. That's what you see in the models we built, like the HTM temporal memory model.
That's one big difference. Then also look at the weights of the synapses. There's an assumption in these point neuron models that you have fairly high precision in the synapse. Real synapses aren't like that at all. A real synapse looks like this: you have a dendrite, and you take one of these little spines.
A spike comes in, traveling down the axon, and when it gets here there are these little vesicles. The vesicles spill out some chemicals into this little gap, and the chemicals diffuse across the gap to the receiving side here. What happens when the spike arrives depends on how many of these vesicles are released and picked up on the other side — the voltage changes here, and so you get a response.
It looks more like the difference between a synapse which is old and heavily learned, and one which is new and still learning, is something like permanence. So there are some synapses that are really big, like this — they stick around for a long time — but there are also synapses that are little bitty things, and they can come and go all the time. In fact, it's common.
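In Numenta's models this come-and-go behavior is captured by a scalar permanence per synapse: the effective weight is binary — connected or not — and learning moves the permanence across a threshold rather than tuning a high-precision weight. A minimal sketch (the threshold and increment values are illustrative, not the actual implementation's constants):

```python
CONNECTED_THRESHOLD = 0.5  # permanence above this => functionally connected

class Synapse:
    """Permanence-gated binary synapse: learning adjusts a scalar
    permanence, but the effective weight is just connected / not."""
    def __init__(self, permanence):
        self.permanence = permanence

    @property
    def connected(self):
        return self.permanence >= CONNECTED_THRESHOLD

    def reinforce(self, delta=0.1):
        # Hebbian-style strengthening, clamped to [0, 1]
        self.permanence = min(1.0, self.permanence + delta)

    def weaken(self, delta=0.1):
        # Unused synapses decay and can vanish entirely
        self.permanence = max(0.0, self.permanence - delta)
```

A synapse just below threshold "comes into existence" after one reinforcement; repeated weakening drives the permanence to zero, i.e. the synapse goes away.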
You can say a neuron can only form synapses with axons that pass somewhat near its dendrites. If there's an axon passing way out here, it can't make a synapse onto it; but if there's one coursing through like this, there are different places where it can make synapses. That's called a potential synapse. It has to be close — but not the closest. So we think there are some growth rules, which we'll talk about in a moment, but in the brain it has to be close.
Take a square millimeter — one millimeter by one millimeter, let's say, by a hundred microns, a tenth of a millimeter. In that little block you might have ten thousand neurons, and you might have hundreds of meters of axons and dendrites. It's all really super small — these things can be as small as a micron, a thousandth of a millimeter — and when you see pictures of it, it's just all packed.
B
They
can
reconstruct
it,
because
it's
totally
squashed
all
over
the
place
and
so
at
any
point
of
time,
I
know
I'm.
Can
it's
going
to
have
a
lot
of
other
axons
passing
nearby
and
it
can
make
connections,
but
the
accent
doesn't
go
close
connection
to
the
complicating
that
is
the
following.
If
if
there
are
appears
to
be,
if
there
are
active
synopsis
here
that
are
useful
towards
the
end
of
this
dendrite,
the
dendrites
will
continue
to
the
crop.
They'll
just
stop
the
branch
out
as
if
they're
going
to
try
to
find
more
connections.
So in software we don't have to do any of this. We can say our neurons can potentially connect to any other neuron. In the physical brain they can't do that — they have to be lucky to find the right things to connect to, and so they have to constantly try to find things they might connect to that, in a Hebbian-learning sense, are useful.
But we don't have to bother with that. Early on — as you mentioned — we modeled these potential synapses, and we said: okay, there are only so many neurons this dendrite can connect to. After a while we realized we don't need to do that anymore. Say there are 10,000 neurons: if I want to train one of these neurons, I'll just randomly pick from those, and we don't have to worry about the physical constraint of proximity.
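In software the proximity constraint can be dropped entirely: to wire up a dendritic segment you can just sample its potential inputs at random from the whole population. A minimal sketch of that idea (function and parameter names are made up for illustration):

```python
import random

def build_segment(population_size, sample_size, seed=0):
    """Pick a segment's potential input cells by uniform random sampling
    from the whole population, ignoring physical proximity (which the
    brain must respect, but software need not)."""
    rng = random.Random(seed)
    return rng.sample(range(population_size), sample_size)
```

For example, `build_segment(10_000, 20)` picks 20 distinct candidate presynaptic cells out of a population of 10,000.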
[Question:] What if an axon connects to multiple spines on the same dendrite? [Answer:] I don't remember the conversation we had about this, but basically it doesn't happen — exceptionally it does, but rarely. It certainly happens a bit with inhibitory synapses: an inhibitory neuron can make multiple connections onto the same cell, which makes sense when it's basically saying "I'm going to shut you down."
Say I want this cell to recognize pattern A and pattern B. The way we do this in our models: a cell over here will form connections to some subset of the cells over there — maybe it'll form connections to 20 of those cells out of, say, a thousand active ones (we'll talk about how that comes to be). So on one of its dendritic branches it might have 20 synapses, and when it sees that pattern, they all become active.
What happens if I activate a synapse right here? In the classic view, that one synapse says "oh, that's good" — it raises the voltage a little bit at the cell body, and if I can activate enough of them, the voltage at the cell body gets high enough and I get a spike. But here's what actually happens — this appeals to my engineering background.
These dendrites are like wires with leaky capacitance, so the effect of a synapse decreases as you move along here — the current leaks out through the membrane, and the signal is basically lost. So it looks like if I activate this distal synapse, nothing is going to happen at the cell body — or so little that it's barely worth it. And you could say: well, what if I activate these guys all around here? You look at it, and it's still hardly anything. It's like: well, these synapses can't be doing anything.
But take a section of this dendrite — the number typically used is 40 microns. In a stretch 40 microns long, if you can activate, let's say, 20 synapses — 20 active synapses, each one very stochastic, releasing a little bit of this and a little bit of that — then locally the voltage gets raised, and you generate a spike in the dendrite. And it's a different type of spike from this one.
It's not the same chemical composition as the spike over here — it works on the same principle, but it's different. This one is called a dendritic spike; that one is an axonal spike, or somatic spike. So individually, if I activate any one of these synapses, it won't do anything; but if I get 20 of them active, it generates a dendritic spike, and the spike travels along here and gets to the cell body.
And now it has a big effect at the cell body. So this becomes like a pattern recognizer: if I can recognize a pattern of 20 active inputs here, I can create a significant event at the cell body. And this can occur anywhere along the dendrites — there are lots of 40-micron segments — so if you activate 20 or so synapses on any one of them, you'd have an effect at the cell body. But if you just activate a few here and a few there, it wouldn't have any effect — or so little that I don't think we'd actually even see it.
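The 40-micron segment acting as a pattern recognizer can be sketched as a simple coincidence detector: it emits a dendritic spike only when enough of its synapses are active at once, and scattered activity does nothing. (The threshold of 20 comes from the talk; the code and names are an illustrative sketch, not Numenta's actual implementation.)

```python
SEGMENT_THRESHOLD = 20  # active synapses needed for a dendritic spike

def segment_spikes(segment_synapses, active_inputs):
    """True iff enough of this segment's synapses see active input
    at the same time -- i.e. the segment's pattern is recognized."""
    overlap = len(set(segment_synapses) & set(active_inputs))
    return overlap >= SEGMENT_THRESHOLD
```

A segment sampling 25 inputs spikes when 20 of them are active together, but stays silent for a handful of scattered hits — which is exactly the "few here, a few there" case above.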
But let's say this is a bit controversial — the question of how many synapses a single axon can make onto a cell. I've seen studies that say a particular axon will make one synapse; I've seen others that report a dozen synapses from one axon onto a cell; but very rarely do you see any of them close to one another.
Everything in the brain goes from a sparse pattern to a sparse pattern — sparse patterns invoking representations amongst other representations. That can happen within the same set of cells, like in a sequence memory, or across two populations in the brain. This is pretty much everything we talk about: most of the communication in the brain is like this, between sparse patterns. And it all works because each neuron — each segment — is recognizing a small sample of a sparse pattern.
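The reason a segment can safely sample only ~20 bits of a sparse pattern can be checked with a little combinatorics: the chance that a *different* random sparse pattern activates the same segment by accident is astronomically small. A sketch of that calculation (the parameters are the talk's illustrative numbers; the hypergeometric formula is standard):

```python
from math import comb

def false_match_probability(n, active, sampled, threshold):
    """Probability that a random sparse pattern (`active` of `n` bits on)
    shares >= `threshold` on-bits with a segment that sampled `sampled`
    bits of a different pattern -- i.e. a chance false match."""
    total = comb(n, active)
    hit = sum(comb(sampled, k) * comb(n - sampled, active - k)
              for k in range(threshold, min(sampled, active) + 1))
    return hit / total
```

With 20 active cells out of 1000 and a segment threshold of 15, the false-match probability is far below one in 10^20 — which is why subsampled sparse patterns don't get confused.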
So when these 20 cells fire out of this thousand, all the cells sampling them will have a dendritic spike that says: oh, I recognize that pattern. But an individual cell will participate in many different patterns, so I can't say what this particular cell "means." It might participate in pattern A and also participate in pattern D: one segment says "I'm going to recognize A here," and at another time another segment recognizes C.
So this cell participates in many different things, and it's the combination that really carries the meaning. You always have to look at the population — individual cells don't tell you much; it's a population code. So, we're almost done here, unless you want to talk more about this. Oh — I haven't talked about filling out the rest of this picture. Remember I said:
synapses close to the cell body can depolarize it enough to get an action potential. But nothing that happens locally out here will make the cell spike: when this occurs and I get this dendritic spike, you might think, oh, that'll make the cell fire too — but it never does. It just doesn't happen. What it does is raise the voltage at the soma quite a bit — not enough to make the cell fire.
Our model — basically the paper we labeled "thousands of synapses" — proposed, really for the first time, that this sub-threshold depolarization, which is not enough to make the cell spike, has an important effect. It's like a prediction. It's saying the cell is almost ready to fire; it's on the edge, it's primed, it's ready to go, and if it gets any input near the soma it's going to spike a little bit sooner than it would otherwise.
That's what people really thought — they said: okay, if one of these dendritic spikes isn't enough on its own, maybe several of them have to combine to drive the cell. I'm not aware of literature that explored that much further, and I don't believe it's happening — I know of no evidence that it's happening. It was pure speculation, because they were saying: we've got to have something happening here.
Our interpretation — which we believe is consistent with the neuroscience literature on this — is that these dendritic spikes, these NMDA spikes that occur, depolarize the cell almost enough to make it fire, but not enough to make it fire; the proximal inputs do make the cell fire. And then we said: well, what if the cell was depolarized because of this, and then the somatic action potential occurs? So now, here's a little timeline. When this input comes in here,
this thing would spike like that, and what we're saying is: if it was already depolarized, because it recognized a pattern first, then it's going to spike a little bit sooner than it would otherwise. And that has a big effect. It may not seem like a big effect — it may only be a few milliseconds — but now, one more digression. Remember I said 20% of the cells are inhibitory.
B
So
there's
a
bunch
of
cameras
out
here
all
packing
together
and
then
the
second
most
common
type
of
the
cell
or
the
most
common
type
of
inhibitory
cell
is
a
it's
called
a
basket
cell.
There's
an
immature
cells
cells
are
interspersed,
everybody
find
parameter
cells
as
basket
cells.
There's
fewer!
It's
not
one-to-one,
it's
more
like
a
five
to
one
ratio,
some
like
that
and
what
is
the
characteristic
of
the
basket
cell?
It's like saying: you're all quiet now, because this guy won. We actually believe that what's going on is that the first cell to fire drives the basket cells: that cell is not inhibited, but everyone else gets inhibited. So if you have a series of cells like this, and one of them spikes a little sooner than the others, it shuts down everybody else.
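The basket-cell effect just described — first spiker wins, anyone slightly later is silenced — can be sketched as a winner-take-all rule on first-spike times. (The inhibition delay constant and names are illustrative assumptions, not measured values.)

```python
def winners_after_inhibition(spike_times, inhibition_delay=1.0):
    """spike_times: {cell: time of first spike, in ms}. Cells spiking
    within `inhibition_delay` of the earliest spike get through before
    basket-cell inhibition kicks in; everyone later is shut down."""
    if not spike_times:
        return set()
    first = min(spike_times.values())
    return {c for c, t in spike_times.items()
            if t <= first + inhibition_delay}
```

So a cell that is a few milliseconds ahead (because it was depolarized by a prediction) ends up as the only survivor in its neighborhood — which is how this timing advantage turns into a sparse representation.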
I think the thing we came up with was understanding that this dendritic spike's goal is not to create the somatic spike, but to make the cell spike earlier — and then, using inhibition, that basically creates sparse representations. And then there's a whole minicolumn hypothesis, which we were talking about this morning. So let's just review. [Question:] Is there anything special about the refractory period of those cells? [Answer:] There's no exception there — okay, the refractory period.
The synapses also have to become active within a fairly short window — shorter than you might think; I believe it's on the order of five milliseconds. So I have to have these 20 synapses active within roughly five milliseconds, and they have to be timed properly. How do you get all of them to act within 5 milliseconds? That's really tricky.
A bunch of neurons are going to fire, and if they're going to be effective they have to spike at around the same time, so that they sum up over here. So again, it's a very precise timing thing. Even if they're spiking at, say, 50 Hertz — which is one spike every 20 milliseconds — if the spikes were randomly distributed, the chance of getting all 20 inside the same window would be vanishingly small.
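A back-of-envelope version of that argument: if each presynaptic cell fires one spike somewhere in each 20 ms period (50 Hz), independently and at random, the chance that all 20 land inside one particular 5 ms window is (5/20)^20 — below one in a trillion. As a sketch:

```python
def p_all_coincident(n_cells, window_ms, period_ms):
    """Chance that n independent spikes, each uniform over one firing
    period, all land inside the same given coincidence window."""
    p_one = window_ms / period_ms  # probability for a single spike
    return p_one ** n_cells
```

So random firing essentially never trips a 20-synapse segment by accident; coincident activation has to be arranged by the network.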
It's complicated, because there are places where rate coding is important, but there seem to be more places where what matters is what happens right after the first spikes arrive. So anyway — what are the things we have to pay attention to in our models? It's not totally clear, but I think the whole idea of the neuron making a prediction is essential to the cortex: almost 90 percent of the synapses on these neurons are for predictions.
B
They're
recognizing
patterns
for
predictions,
and
so
97
with
the
neuron
does,
is
try
and
predict
its
input,
and
so
it's
really
kind
of
hard
to
imagine
how
you
could
build
the
network
out
of
these
kind
of
neurons.
That
really
does
to
predict
the
right
way.
You
know
these
are
with
most
of
these
predictions
are
silent.
No
one
ever
knows
him
they're,
just
like
you,
it's
like.
If
something
wrong
happens,
you.
if something right happens, you don't — it just seems normal. There are other types of predictions too, but most of the prediction going on in the brain is silent. It doesn't actually leave the neurons; it's happening inside them. It's part of the fabric of the whole thing, woven deeply into how it's built, and it's hard to see which parts of it you could separate out.
Can you separate the predictive neuron from the active dendrites, the population codes, the sparsity, and so on? I think what we're trying to do right now is say: let's just focus on the sparsity aspect first; we're going to do that. Then let's go more towards binary synapses; let's go to learning by growing new synapses — changing the connections. Those are all the levers we can turn here, and ultimately the question is which is the right path to go down — but you guys know that.
Yeah — so when this cell actually emits a spike, a real somatic spike, what happens is that it also travels backwards. It comes down here and goes out the axon, but it also goes out here, and up here, and back up here. When the spike goes down the axon, it goes all the way to the end, everywhere.
At this moment in time — and the mechanism for doing that is a back-propagating action potential that reaches out to this section here. This segment knows it recently generated a dendritic spike; so if the cell then emits a somatic spike within, say, 100 milliseconds or so, and the segment sees the back-propagating action potential, it's golden: we're going to do synaptic plasticity.
You can intermix a whole bunch of different patterns right here. If I store one pattern, and then later a different one, nobody gets confused, because the sparse patterns are not going to overlap much — you can intermix patterns and they won't get confused. Of course, if you do that with a whole lot of them, eventually things do get confused, so there must be some capacity limit. But it's like a union property: I can have multiple patterns represented at once.
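The union property can be sketched directly with sets of active bit indices: OR several sparse patterns together, and each stored pattern can still be recognized with essentially no false positives, because random sparse patterns barely overlap. (The sizes and match threshold below are illustrative choices, not the model's actual parameters.)

```python
def make_union(patterns):
    """OR together several sparse patterns (sets of active bit indices)."""
    union = set()
    for p in patterns:
        union |= p
    return union

def contains(union, pattern, match_fraction=0.9):
    """A pattern 'is in' the union if most of its active bits are present."""
    return len(pattern & union) >= match_fraction * len(pattern)
```

With, say, ten 40-bit patterns in a 2048-bit space, every stored pattern matches the union, while a fresh random pattern overlaps it only by chance (a handful of bits) and is rejected.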
Is anybody saying that's not true? I'm not joking — it's crazy. There are people who don't want to believe in the importance of active dendrites; there are people who don't think these dendritic spikes are important. And they'll make the point: well, they can only work if the back-propagating action potential goes all the way out.
I don't remember who made this argument, but they point out that a back-propagating action potential typically will peter out and not reach the end of the dendrites. So they say it can't be very useful, because it doesn't reach the distal synapses. Then other people came back and said: oh, but we did the experiment, and if there was a recent depolarization, it does keep going. So they go back and forth about this.
They argue with each other about it. Okay — but it seems crazy to me that all the neural models in the world have no accounting for the thousands and thousands of synapses these cells have; no accounting of the fact that these distal synapses don't seem to sum at the cell body; no accounting of the fact that there are these dendritic spikes, which we didn't even know about until 20 years ago or so. The idea that these things aren't doing anything seems crazy.
[Question:] It's the same — it's constant. You don't want the cell firing and getting really excited about something it can already explain. What you're showing here is a very dynamic system that keeps recognizing things: action potentials fire, ions are moving, metabolism is happening. So how do you balance that — I don't want to waste my energy on something that I already know about.
B
What
I
put
in
my
book
I?
Don't
think
it
was
exactly
that.
Okay,
there
is
an
idea
called
predictive
coding
and
machine
learning
idea.
Predictive
coding
is
that
you
don't
want
to
transmit
errors.
So
if
you
predict
something,
then
you
don't
want
do
anything.
It's
like
what
you
predict
something
stop.
That's
not
what
I
said
my
work,
okay
and
I
think
that's
right.
What
happens
here
in
this
situation
is
if
I
settle
cells
becomes
active
in
our
temple
memory
and
I'm
not
going
to
go
through
this
today.
if I get an input and I didn't predict it — and this has been observed by neuroscientists — many more cells become activated. The way we would view that is as a union of things. Say I'm listening to a melody and I'm predicting the next note. If I get that next note, I will form a very sparse representation of that note in the context of that melody at that point — that's what it represents.
If the wrong note comes in, the cortex is still going to represent it, but it represents it with a whole bunch of cells becoming active — it's sort of like saying: well, it could be any song now; let's start over again and track from here. So you'll have much more activity on an unpredicted input, and much less activity when I'm predicting correctly. But we don't believe in the predictive coding notion that you don't pass on anything when the input is predicted.
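The activation rule just described can be sketched per minicolumn: if any cells in the column were predicted (depolarized) when its input arrives, only those become active; if none were, every cell in the column becomes active — the "burst" that acts as a union of all possibilities. (A minimal sketch in the spirit of the HTM temporal memory; names are illustrative, not Numenta's actual code.)

```python
def activate_column(cells, predicted):
    """Cells of one minicolumn that become active when its input arrives.

    cells:     all cells in the minicolumn
    predicted: set of cells that were depolarized (predictive state)
    """
    predicted_here = [c for c in cells if c in predicted]
    # Predicted cells win via fast inhibition; with no prediction,
    # the whole column bursts (unpredicted input -> high activity).
    return predicted_here if predicted_here else list(cells)
```

So a predicted input yields sparse activity (one cell per column), while an unpredicted input lights up every cell in the column — matching the higher activity observed for unpredicted stimuli.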
And that's what you see: when an unpredicted input occurs in the cortex — in an animal looking at a visual stimulus or anything — you get much higher activity, and in our model that's like a union of all the possibilities. So it's not like the cortex is silent — it's never silent; these things are always happening. It's just that if you look at these cell populations, the predicted case would be very sparse — very sparsely activated, very difficult to see.
[Question:] Do you have a sense of whether we should be trying not to use negative weights, or to learn negative weights? [Answer:] I just want to bring up that argument — let me give an alternate picture of the point neuron. People probably think of a point neuron as having its inhibitory part built in, and —
an inhibitory neuron has to shut down a lot of cells, so it's not associated with just one. And the second thing is, for most of these inhibitory cells — not all, but most — if I think about the connections from a pyramidal cell to an inhibitory cell, it's not that one particular pyramidal cell is paired with one particular inhibitory cell in a learned way; it's more like every pyramidal cell makes a connection to the inhibitory cell.
And inhibitory cells generally make connections to lots of pyramidal cells — whether it's bipolar cells, chandelier cells, or these basket cells, they make connections with many, many pyramidal cells. So it's not like you could assign one inhibitory weight to one particular neuron — it just doesn't look like that. So, you know, it's a very good question: how bad is it to leave out the inhibitory neurons? I think —
I think the more fundamental problem, in my mind, is that the point neuron has no concept of prediction — no concept of being primed before firing — and so it's very difficult to form sparse representations that are meaningful and reliable. We're doing it; we've shown it here:
B
that it learns transitions reliably in tens of thousands of cells, so it's really high capacity, and these cells just don't get confused. Every cell, so a neuron, could belong to hundreds of different patterns; it can participate in a hundred different patterns, and every one does it, and none of them get confused. It's always the population that matters. Any individual cell will be active in every fifty or so patterns, but the combinations can be unique into the trillions.
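The population-code arithmetic behind "every cell participates in many patterns, yet the codes stay distinct" is easy to check numerically. A rough sketch; the layer size, sparsity, and pattern count are assumed toy values:

```python
import random

random.seed(0)
N_CELLS, ACTIVE, N_PATTERNS = 2048, 40, 500   # toy sizes, ~2% sparsity

# Each stored pattern is a random sparse population of active cells.
patterns = [frozenset(random.sample(range(N_CELLS), ACTIVE))
            for _ in range(N_PATTERNS)]

# Every individual cell participates in many different patterns:
# on average N_PATTERNS * ACTIVE / N_CELLS of them.
avg_per_cell = sum(len(p) for p in patterns) / N_CELLS

# Yet any two population codes barely overlap, so nothing gets confused.
max_overlap = max(len(patterns[i] & patterns[j])
                  for i in range(N_PATTERNS) for j in range(i + 1, N_PATTERNS))

print(f"average patterns per cell: {avg_per_cell:.1f}")
print(f"worst-case overlap between two patterns: {max_overlap} of {ACTIVE} bits")
```

Each cell ends up in roughly ten patterns here, yet even the worst-case pairwise overlap stays far below the number of active bits, so the population as a whole stays unambiguous.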
B
So it's all worked out; the mathematics of it is nice, and nothing gets confused. Anyway, so I think, to me, one of the largest problems here is this idea that to build these sorts of associative networks that work really reliably, you need sparsity and high dimensionality. And I don't know how you separate out having a predictive neuron from not having a predictive state.
B
There's no prediction here: a cell is either active or not active; there's no other separate thing going on. You might have to have a separate population of cells to be the predictive cells, or something like that. You know, so that's one of my big worries about the point neuron. And then I think the whole idea of learning through rewiring is so much more powerful than learning through synaptic weight adjustment. And all these things play together in some sense, so it's difficult to know which part you can leave out at the moment.
B
As I mentioned, in certain situations that time span can be increased significantly, to hundreds of milliseconds, but most of the time it's very short; that's the typical function of the cell. And what's going on, and this is what we're surely going to talk about, these metabotropic receptors, is that under certain conditions this depolarization can last a long time: hundreds of milliseconds, maybe a second or two. But I think when we thought about the sequence memory, we said, okay,
B
it would be sort of our way of keeping track of where you were; the thought was it's the context, like our current sequence memory. If you have one wrong input, the whole thing gets lost, right? It's lost. And so maybe you could keep some synapses active for a longer period of time. I don't know; we explored that, and I can't remember where that ended up. There's an alternate way of doing that, too. The alternate way of doing that is you have another set of cells which are stable throughout the sequence.
B
There's the classifier as well: I'm classifying this thing, and that classification is the same throughout the entire sequence. That classifier, and that's called the temporal pooler, can say, hey, I'm listening to this melody; you know, it's the national anthem of Brazil. And if a wrong note comes in, I don't forget that I was listening to the national anthem right away, right? Maybe after three or four wrong notes I'll forget, but
B
then I'll lose the song, and then I'll hear it when it starts up again. So my guess is that's more likely what happens: the temporal pooler, which is saying, you know, I know what song I'm listening to, is just a stable representation for that, and I'm feeding that back onto this network, and we have a lot of those connections. In effect it says, yeah, you can get out of step a little bit, but I'm still
B
I think it's much more problematic; think about what's going to happen, you know. So now you're predicting. Imagine, like, imagine my prediction is active for three notes. That means if the next note in the sequence didn't occur, but the previous note occurred again, then I wouldn't notice it. You know, like a melody could just keep repeating the same note over and over again, and it would say, fine, I know
B
One solution, and it's how we model this, is you might have this: this is your temporal memory layer, and this is going to learn sequences, and so there's always a sparse pattern here; it's really good at memorizing sequences. Then, you know, you can have a temporal pooling layer, which is basically stable, sparse, and unique for the sequence. So as this sequence is changing, as we're moving along, this guy is stable, like the name of the song. And there's this, because I've mapped the multiple patterns here onto the one pattern here, and there's a limit to how many you can do that with.
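A minimal sketch of that two-layer arrangement; the hashing trick below is just a stand-in for learned sparse codes, and the melody and names are made up for illustration, so none of this is the actual implementation:

```python
import hashlib

def sdr(name, size=256, bits=8):
    """Toy deterministic SDR: hash a name into a small set of active bits."""
    digest = hashlib.sha256(name.encode()).digest()
    return frozenset(digest[i] % size for i in range(bits))

melody = ["C", "E", "G", "E", "C"]

# Temporal memory layer: the code for a note depends on its place in the
# sequence (its context), so the sparse pattern changes at every step.
tm_codes = [sdr(f"{note}|step{i}") for i, note in enumerate(melody)]

# Temporal pooling layer: one code, stable and unique for the whole sequence.
pooled = sdr("melody:anthem")

print("temporal memory codes all distinct:", len(set(tm_codes)) == len(tm_codes))
print("pooled code identical at every step:",
      all(pooled == sdr("melody:anthem") for _ in melody))
```

The lower layer's code is different for the same note in different positions, which is the sequence memory; the upper layer's code never moves, which is "the name of the song."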
But under this scenario, this guy can continue on saying, yeah, I know what melody this is, and you could be driving feedback back here and saying, you know, you're still listening to that same song. And so think about this feedback like it really is: apical dendrites. Here it's saying, you're still listening to the same song. It actually looks roughly like this: I have a pyramidal cell, and you have the apical dendrites up on top, and so this guy's saying, okay, I'm still telling you you're listening to this song, and these synapses here are saying, what's the next note? So the next note goes wrong, but this guy says, yeah, okay, that was wrong, but you're still in the melody here, and so there's still a bias on these cells about what the next note should be. That's how I think it's more likely to be solved in the brain, and we have models of this, and it works.
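The behavior being described, where song-level feedback lets the sequence survive one wrong note but not a run of them, can be sketched like this. The three-note tolerance is taken from the "three or four wrong notes" remark earlier; everything else is a toy assumption:

```python
def track_song(expected, heard, tolerance=3):
    """Keep the song-level context alive unless `tolerance` wrong notes
    arrive in a row; matching notes re-confirm the apical feedback."""
    wrong_run = 0
    in_song = True
    for exp, note in zip(expected, heard):
        if note == exp:
            wrong_run = 0        # feedback re-confirmed, bias restored
        else:
            wrong_run += 1
            if wrong_run >= tolerance:
                in_song = False  # context finally lost
    return in_song

song = ["C", "E", "G", "E", "C"]
print(track_song(song, ["C", "X", "G", "E", "C"]))  # True: one wrong note survives
print(track_song(song, ["C", "X", "X", "X", "C"]))  # False: three in a row is too many
```

The point is that the "am I still in this song" decision lives above the note-by-note prediction, so a single mismatch only perturbs the lower layer.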
B
This is actually a bottleneck, and that's an interesting thing here: there could be millions of elements here, hundreds of thousands of elements here, but you can't map a sequence, you can't map all of those, onto some pattern here. It's just too many. These cells have to recognize many different patterns up there that are part of the same song.
B
If you think about, like, trying to name the tune: if you listen to a melody, it's easier to name the tune if you're in the refrain, the part that repeats over and over; that element brings up the title of the song, like that. But if it's somewhere else in the song, I know that melody, I know it, but I can't think of the name of it. That's because this guy knows it, but this guy hasn't been trained on that part; this guy can't remember everything. So we learn the most common subsequences here, and that's the refrain.
B
So really, if you read the paper, "Why Neurons Have Thousands of Synapses", it talks about all of this, and Subutai did a really nice analysis in that paper about the mathematics of these synapses: why it looks like the brain requires about 20 of them, and he shows that's sort of optimal mathematically under a broad range of assumptions. So it's a really nice theoretical derivation of the biologically observed phenomenon, and I'd
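The rough shape of that argument can be checked numerically. A sketch under assumed toy sizes in the spirit of the paper, not its exact figures: a dendritic segment that subsamples only about 20 synapses from a much larger sparse pattern still recognizes it with essentially no false positives.

```python
import random

random.seed(1)
N_CELLS, PATTERN_SIZE = 2048, 40      # sparse pattern in a toy layer
SEGMENT_SYNAPSES, THRESHOLD = 20, 15  # segment subsamples; dendritic-spike threshold

pattern = set(random.sample(range(N_CELLS), PATTERN_SIZE))
# The dendritic segment connects to a random subset of the pattern's cells.
segment = set(random.sample(sorted(pattern), SEGMENT_SYNAPSES))

def segment_fires(active_cells):
    return len(segment & active_cells) >= THRESHOLD

print(segment_fires(pattern))   # True: the stored pattern always drives it

# Unrelated random sparse patterns essentially never reach the threshold.
false_positives = sum(
    segment_fires(set(random.sample(range(N_CELLS), PATTERN_SIZE)))
    for _ in range(10_000))
print(false_positives)          # 0
```

The expected overlap between the segment and a random pattern is well under one synapse, so a threshold of around 15 out of 20 leaves the segment both cheap and nearly unconfusable, which is the flavor of the optimality result described.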
B
say that covers, I think, everything I talked about today. It doesn't cover the inhibitory neurons too much, but it covers this plus a bunch of other stuff. That's it; it's an important paper. Have you already read it? If you haven't yet, okay, maybe next time we'll do the temporal memory.