From YouTube: Seeing is Believing Part 2 (NRM Feb 24, 2020)
Description
This is the 2nd part of Aries' talk. The first half is here: https://youtu.be/L8PzquwMV6A
Discuss at: https://discourse.numenta.org/t/numenta-research-meeting-feb-19-2020-part-2/7239
Paper: https://www.nature.com/articles/nn.4385
A: Okay, excellent. So, just to pick up where we left off.
A: Some connectivity data. So you're just starting, like, with an image: this is all rabies tracing data. If you don't know what rabies tracing is: basically, the rabies virus jumps monosynaptically in the retrograde direction. That means that if you inject it into a neuron, it will not go down the neuron's axon to the next neuron that it projects to, but will go up through the synapses, through the dendrites, sorry, to the neurons that project to it.
A: So if you inject this into V1 (this is just a general bolus injection into V1, not layer specific), you see, just to get, you know, an eyeball of the projection density: this is LGN, which is considered the main feedforward projection to V1, and, as you can see, it's a lot less dense than these other areas here.
So this is ACC, which I talked about a little bit last week, and this also has M2. So, M2 is this part?
A: No, sorry: M2 is this part, ACC is this part, and then here at the border you have this very dense innervation. We call it ACC in the mouse just because of vague anatomical homologs it has with primate cortex. It's mostly like a motor area slash association area; it's big, and there are probably a bunch of different areas that do different things in it. This is retrosplenial cortex, all the way in the posterior part of the brain; it's very close to V1.
A: This is the biggest input to V1 in general. And then here, just so you can see the diversity of inputs: the amygdala projects a little bit, the claustrum (which we just make fun of), part of the orbitofrontal cortex, and then, of course, there are these intriguing connections from the lateral entorhinal cortex and CA1 in the hippocampus. These are both very sparse, yeah.
C: [inaudible question]

A: No, I think that's mainly a technique thing. People haven't been able to do this kind of... you can't do rabies tracing; to just pick up the signal you'd use these older techniques, and it's not something that's been done. So it's still an open question: people haven't reported them, but nobody has said "we checked and we didn't find them," you know.
D: One thing that's not shown here, of course, is the effect of these connections, right. So it's, well...
D: ...that the majority of connections to V1 are not from LGN, but I believe that that's the primary driver of V1, yeah? And I assume this covers all layers of the cortex, so we're not discriminating about which layer when you do this?
D: Largely modulatory; some people call them non-drivers. So it's hard to put too much... it's more numerical than it is meaningful.
A: Sure, yeah, of course it requires some clarification. I don't know about retrosplenial and the other areas, but the projection from ACC to V1 is bistratified: it projects to layer 1 and layer 6.
A: What I was showing last time was obviously layer 1 imaging, because it's incredibly hard to do axon imaging in layer 6. And, yeah, there have been a number of old papers, and so...
D: That makes sense in some sense, if you think of that projection to layer 6. We've always thought of that as representing, you know, those sort of grid-cell areas down there, and you would want a motor connection to that. We think of that as coming, in the primate, from... not from motor cortex, or whether it would be from motor cortex too, but that...
D: ...going on there. You know, I'm just saying these are interesting things we can't speculate too much about. But sure, yeah. A couple of questions on the experimental technique: what do we know about the efficacy...
D: [inaudible]

A: Rabies does; that's why rabies is bad for you. Yeah, yeah, that's a good question. So, all this affinity and stuff depends on the serotype, basically the viral capsid and the receptors on the capsid, and I think the serotype that most people use is called SAD-B19. It's sort of a decent trade-off between what you said: it doesn't kill the cells too quickly, and it seems to be pretty...
D: [inaudible]

A: Then again, you know, there are all kinds of biases that people haven't described yet, right. So, we know many AAV serotypes like to avoid layer 4 for some reason, unless you restrict them so they'll only be able to express in layer 4, and that's how you get layer 4 expression. Is there any known...
A: Yeah, that's also a good question. So, you would imagine, naively, that the more... the more...
A: That's also true. So then, yeah: if that trade-off were perfect, then LGN and ACC would both be fairly displayed; it would be a fair representation. I don't really know; I don't think anyone really knows. But yeah, I could look into this, if you want to ask one more question.
A: Their projections, like, right: dendrites and axons... this...

B: So, a better...
A: This is just fluorescence quantification over slices, oops. And so retrosplenial is the strongest projection, then lateral V2, then auditory cortex, and then the fourth biggest is area 24b, which is the part of MCC that we talked about, yeah. I thought ACC was big; where is that? ACC slash M2. So, M2 is, so...
A: So we call it ACC slash M2 because we're not confident... all our counts don't include M2, so it's more like, hey, you know, a lot of caution there. So the thing...
D: You know, you get to the edge of the neocortical sheet. Literally, the entorhinal cortex and the hippocampus are on the edge of the neocortical sheet; they fold up underneath the brain. You can follow the surface of these cells around. And I think the retrosplenial cortex, at least in primates, it's under... it's in the center, you know, where V1 folds into the surface.
D: It's not a, quote, standard neocortical region; it's a sort of a hybrid something. I forget what we know about what it does. Maybe, Marcus, you remember, but it's...
A: It is a very interesting region. So, one, like I just mentioned, it has a granular and an agranular part, meaning it has or doesn't have a layer 4, and that sort of changes. In the mouse, I think the more lateral part is the agranular one, but I can be wrong. And connectivity-wise, it has very dense connectivity to the hippocampal formation.
D: Yeah, so it's like halfway between. So, you know, you go to the far end, you hit the hippocampus, and you get, you know, entorhinal cortex, you've got retrosplenial, and then you kind of transition to being more classic. So I just think, from an evolutionary point of view and from a functional point of view, we just...
A: In the mouse, the orbital cortex is just like a blob, but...
C: And, from how you responded to Flo's question: is it true that post-synaptic layer 4 is not represented in here?
A: Oh, that would be AAV. With rabies, we think it's represented, so we've...
A: Yeah, most likely. So, you know, it's a thing where you get all this labeling, because we have a local... a different fluorophore that's local, and we confirm that there's labeling that's mostly well distributed across layers, and we make sure it's in V1.
A: This is all monocular V1, by the way, probably with a little bit of binocular too. I'll ask Georg; I don't know how we classified the data. But next to his office he has a massive poster with all of these cell-type-specific and layer-specific rabies tracing results, so I'll try to at least get him to send a picture.
A: All right, so, anywho. We went over this last time; just as a reminder, we found a population of stimulus-predictive neurons that required experience to form. So, condition one is the block of the first two days, and they're not there then. And the visual neurons seem to be more... they don't seem to have any characteristic evolution with time, right. And they're about 50/50; they're both about seven percent of the cortical neurons.
A: I showed you the ACC axons and how they basically do the same thing, but I think I didn't dwell on this as long. So, basically, you see the same types of signals in the ACC axons in V1. Now, a subject of debate is: do we know that these synapses terminate in V1?
A: Are they just passing through? In the video, it looked like, you know, these little dots, these grains, which we think are boutons, and that's what other people think too. But, you know, always take everything you see in experimental results with a grain of salt; but you...
C: [inaudible]

A: ...in layer 1, and how much is just passing by. Of course, this is V1, so it would be weird to be passing through there, where...
A: When we show... just to see if these are visual or not, you know, you show the other grating. Remember, we had this condition where we flipped the grating on 10% of the trials. When you flip it, you show that this weird anticipatory component is still there, but the visual response, if there is one, is much attenuated, and vice versa, right: if you have an ax...
A: Right, okay. So we have this; you can ignore this, we can talk about this later if you want. We have this projection from ACC to V1, and one of the prerequisites...
A: ...for this to work is that you need the coordinate transformations to work, right. So you need to make sure that the projection from ACC to V1, for example, and all these projections, talk to the other cortical area in the language of that cortical area, right. So one thing, and this is in Marcus's paper...
C: [inaudible]

A: ...Marcus's paper is that, you know, you could inject two different fluorophores into adjacent areas of ACC and see if there's a retinotopic gradient in the projection to V1: is it all just mixed together, or is there a gradient? And, of course, oops, we found a gradient. So, for example, you inject a red and a green fluorophore into ACC, and you get red and green innervation adjacent to each other, and this is the quantification; you know, you can quantify this there.
A: ...thing to do, by the way. And so, another thing is that these predictions should be plastic, right: if they're learned, you should be able to reverse them and play with them. So this...
A: ...accidentally ended up being the main point of Marcus Leinweber's paper, which I will briefly talk through. So, his project: they had this tunnel, right, and it was like a 2D tunnel, and then...
A: It's not very hard, but it takes a while for the mice to learn to do this, because they're running on this little styrofoam ball that just keeps spinning around, so it takes them a while to learn to control that. So, in the first days it was very short, just to encourage them, and they get a reward.
A: You know, to get them to where... and then, over time, you could progressively make it longer. And what happened is that in this 2D tunnel you would introduce some perturbations in the angle of the tunnel, right. So imagine the...
A: ...degrees off on one side or the other, and then the mouse has to correct for that, right. Good. So Marcus imaged axons again in V1, same paradigm, but in left and right V1 as well. And what you see is that in each area, left or right V1, the axons are most responsive when the...
A: ...visual flow is contralateral, meaning that in left V1 the axons are most active when there's more visual flow on the right, which is what that part of the brain is looking at, and visual flow seems to be a signal that they like. So this could be via a perturbation or just by steering, right. And likewise for right V1; sorry, it likes the leftward visual flow that turning produces, right. So the way that you...
D: No, so we weren't perturbing, so...
A: ...symmetric, right. Right, right, so...
A: It turns out, and this ended up being the main part of the paper, that he accidentally reversed the sign of the coupling, such that when the mouse would be trying to turn, you know, turning right, you get left visual...
A: ...prediction error, yeah. So what they...
A: On the first day, of course, the statistics here... the preference of these axons was the same, right, if you have left, right, and regular visual flow. But over time it reversed. Basically, this is saying that the ACC axons didn't really care; so this is not about the motor input they're getting, it's about the visual flow input. And you would need this...
A: If they were predicting visual flow, it would need to be about the statistics of the environment, basically. And yeah, so, I mean, this is just a tuning index. It shows that it doesn't affect performance: if you have normal visual flow, the performance stays the same. This is on the y-axis; sorry, performance is here. And then this is the tuning preference, and it's sort of uniform in the beginning, then it changes, but the performance is more or less the same.
A: This is the performance of the mouse, right, how good it is; it's quantified as the percentage of time spent within 30 degrees of the target. Okay. And this is the tuning preference of the individual axons: do they prefer contraversive or ipsiversive visual flow, right. So negative would mean they're changing from contraversive to ipsiversive, yeah. And then, when you...
A: But compared to normal visual flow, right: they still learn, right; like, one of them didn't, but... the tuning preference changes, it becomes negative, so it goes to the ipsilateral flow, right.
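The two measures being discussed can be sketched in a few lines of Python. This is an illustrative toy, not the paper's analysis code; the function names and the exact form of the tuning index are assumptions, and only the 30-degree performance criterion and the contraversive/ipsiversive sign convention come from the talk:

```python
def performance(heading_errors_deg):
    """Behavioral performance as described in the talk: the fraction of
    time the mouse spends within 30 degrees of the target heading."""
    within = [abs(err) <= 30.0 for err in heading_errors_deg]
    return sum(within) / len(within)

def tuning_index(resp_contra, resp_ipsi):
    """Toy signed flow preference for one axon: +1 means purely
    contraversive visual flow, -1 purely ipsiversive, 0 untuned."""
    return (resp_contra - resp_ipsi) / (resp_contra + resp_ipsi)

# An axon that starts out preferring contraversive flow...
print(tuning_index(8.0, 2.0))    # 0.6
# ...and after training with reversed coupling prefers ipsiversive flow.
print(tuning_index(2.0, 8.0))    # -0.6
print(performance([0, 10, 50, -20]))  # 0.75
```

The point of the plot described above is then simply that the tuning index of the population flips sign over days while the performance number stays roughly flat.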
A: So it's not just about topology, yeah; so each of the... yeah, yeah, sorry, right. So I think I showed you this video at the end last time. So this, we're talking about the prediction error: this is when the expected grating is there, and this is the tunnel, right, so at time zero the expected grating happened.
A: This is a very ubiquitous thing we found across all mice, except one mouse, and, you know, just because biology, right. So...
A: ...the other grating ten percent of the time, and we just didn't show anything ten percent of the time, right. And this is all isoluminant; this wasn't a brightness-change effect. And you find that omitting the stimulus drove by far the biggest responses in V1, if you compare this to the mean response to the gratings.
C: [inaudible]

A: And, as I mentioned previously, the omission doesn't drive any changes in the ACC axons; it's only local in V1, whereas the predictions, if you want to call them that, and other visual signals, are in the axons. So this would support, and we have other data with sensorimotor mismatches that supports, that the error computation...
A: ...if that's what you want to call it, happens locally in V1, in layer 2/3 specifically. We don't really see it in layer 5. Because the... the argument...
A: It doesn't prove it; it supports the argument, right. It's consistent with the argument, right, yeah. Well, of course it could be, but we've...
A: ...this is pretty much what my project has done, and they've added some of Marcus Leinweber's awesome work, and we sort...
D: [inaudible]

A: Alexander, who's in Lisa Giocomo's lab at Stanford now, if you guys want to talk to him, and Bill Wyatt, who's at UCLA, had this amazing paper where they looked at sensorimotor learning and experience. They had mice that were dark-reared, and then they only learned... they had them learn to run on the ball, but the visual flow had nothing to do with their running.
A: ...for you to read; it's like the most thorough thing I've ever seen in my life. But, sort of, they did a bunch of cell-type-specific imaging and perturbations: optogenetic, and DREADDs chemogenetic perturbation. So you had...
A: ...a chemical that selectively silences or activates one cell type. And they found this sort of classic SST-VIP circuit; somatostatin, sorry, and vasoactive intestinal peptide, right. So this is what, like, Adam Kepecs and a couple of other people have found, and consider to be a very canonical type of circuit in cortex, at least in layer 2/3, right, where you have pyramidal neurons inhibited by somatostatin neurons.
A: So, for example, if you inhibit somatostatin neurons here, you don't get a mismatch signal when the mismatch occurs. Sorry, and, trying to understand that, right; yeah, please. So, you know, this was all done with sensorimotor mismatch. So this is just: the mouse is running, and there's visual flow coupled to that, and then you interrupt that visual flow while the mouse is still running, right.
A: That's the sensorimotor mismatch paradigm, and normally it looks like this, the population average, right. So this is just the signal here. So, hold on, that red circle on the diagonal to the left: is that either knocked out or deactivated? Yeah, so this is the expressing cells. Yes, sorry, this is ChrimsonR. It's Chrimson... wait.
D: What is the overall learning from this paper? I mean, like, what's beyond the details, right?
A: Right, yeah. So the learning is that somatostatin neurons seem to be visually driven; they don't seem to have much of a motor response. VIP neurons and NPY neurons seem to be more motor driven. And by having these two layers, you know, you can invert where the...
A: Good question; you can see this in this diagram here.
A: A top-down input, let's say, like, a motor prediction, if it were just a motor input, can innervate either the apical... sorry, the distal dendritic tree, or one of these interneuron types, right. Let's say it's... well.
A: Well, let's say it's a long-range projection, so it's excitatory. So let's say this is VIP: this would inhibit the SST, for example. Yes. So this gives you a way to have two different signs for your prediction error, the positive and the negative sign, meaning it's the prediction minus the input, or the input minus the prediction, right. That's the idea: with a synaptic circuit like this, then you can use the same... with an input...
A: ...a neuron here would be inhibited by the visual input, right. And then, of course, if you arrange it differently, you would have it excited by the visual input, right, just because of where the top-down innervation, if you want to call it that, happens to go in that neuron, that little microcircuit of one neuron, two, or several more SST and VIP cells. Is everyone following this? I think you've probably...
D: You know, "following" is two different things in my world. You can follow the details of his experiments, and you can just walk down every little point here; but then you can also just keep track of the big picture. Like, yeah, yeah, you know, okay, yes, we showed that this exists just like this, but what's the point of it all? It's like, I mean...
A: Oh, absolutely, yeah. I'm saying, if you imagine you can do it, the bigger picture is going to be too simple to describe; it's going to be like, yeah, this is 90 percent of what we understood, but, like, ten percent is something we don't really understand, something much subtler, which you need to understand the actual circuit for. Yeah, I don't think this is the case here, but yeah, in general. Now, so, right. So the...
A: This was from Pierre's review with Tom Mrsic-Flogel. It's a great review; highly recommend it.
A: ...it inhibits the pyramidal neurons, and then the bottom-up stimulus excites them, and this would be what they call a type 1 prediction error. Which is, you know: if your prediction matches your sensory input, of course, it should be silent; but this would look like what a visually evoked response would look like, right. So we're arguing it's not necessarily just, like, the feedforward connectivity plus all the pooling and stuff that layer 4 might do...
A: ...this is actually a subtraction with a prediction. So, if you're seeing something...
A: ...you see very, very little, exactly, and that's one reason why we think layer 2/3 is very sparse, right: because your brain mostly gets it right. And you have the other type of error signal, which is, you know, if the prediction excites the distal dendrites and then the bottom-up sensory input inhibits the neuron, then you're subtracting... you know, you're doing prediction minus input, right. So you have these two types of error signals, and...
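The two signed error channels being described, input minus prediction and prediction minus input, can be sketched as follows. This is a toy illustration of the idea, not the actual circuit model; the rectification stands in for the fact that firing rates cannot go negative:

```python
def relu(x):
    # Firing rates cannot go below zero, so each error channel is rectified.
    return max(0.0, x)

def prediction_errors(bottom_up, prediction):
    """Two signed error channels, as in the microcircuit being described:
    a positive error (input minus prediction, an unexpected stimulus) and
    a negative error (prediction minus input, an omitted stimulus)."""
    positive = relu(bottom_up - prediction)
    negative = relu(prediction - bottom_up)
    return positive, negative

print(prediction_errors(1.0, 1.0))  # (0.0, 0.0): prediction matches input, silence
print(prediction_errors(1.0, 0.0))  # (1.0, 0.0): unexpected input
print(prediction_errors(0.0, 1.0))  # (0.0, 1.0): omission drives the other channel
```

When prediction and input agree, both channels are silent, which matches the point above about the sparseness of layer 2/3: the brain mostly gets it right, so the error neurons mostly have nothing to say.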
A: Somewhere here there must be an internal representation, because you can't only have predictions. Why can't you only have predictions? Well, I mean, you could, theoretically; if you're an engineer, you could come up with a little way that, you know, from prediction errors and predictions you could recover the state, right, of course. But the internal-representation neurons seem to be something... we're using this mainly to explain what we see in layer 5, and what we've seen in layer 5 is very dense activity. It's probably still considered...
D: ...the internal model, because your predictions are coming from elsewhere. In your experiment, you're studying the long-range connections, and I was arguing in the last meeting, I was saying: we think most of these predictions are occurring locally, and there's certainly a lot of local prediction. And is that what that is? Is that your internal model? Yes?
C: ...circuit is essentially computing the bottom-up input minus the top-down prediction, whereas the right circuit does the exact opposite, right. So you get two different kinds of errors, and what the circuit underneath is then essentially doing, because you're subtracting them from each other, is creating, like, a difference of the two. So I'm not exactly... so, yeah.
A: Right, yeah; I don't remember what Gary's point was with feeding that back. So let's say you're subtracting two types of error signals: say, something that looks like a sensory signal and something that looks like a predictive signal. If you subtract them, subtract two errors, what is the computational benefit of that?
A: It would mean that you... yeah, I know, right, so, well...
C: If they cancel out, that means the average is zero; but in general it wouldn't cancel out. In general, one would be higher, if you're thinking scalar numbers; in general, one would be higher than the other, right. Just like in deep learning: you know, you have a node that projects to many other nodes, and you accumulate the gradients, add up all the gradients, and that gives you an overall delta for that node. So in that particular case we would not...
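The deep-learning analogy here, a node that fans out to many nodes and sums the gradients flowing back, can be sketched like this. It is a generic backpropagation fragment, not anything specific to the cortical circuit under discussion:

```python
def accumulated_delta(weights, downstream_grads):
    """Backprop through a fan-out node: the overall delta for the node is
    the weighted sum of the error signals from every node it projects to.
    The individual errors need not cancel; their sum gives one net update."""
    return sum(w * g for w, g in zip(weights, downstream_grads))

# Three downstream errors that partially disagree in sign still combine
# into a single overall delta for the upstream node.
delta = accumulated_delta([1.0, 1.0, 1.0], [0.5, -0.2, 0.1])
print(delta)
```

The design point matches the speaker's: summing many signed errors generally does not yield zero, and the residual is exactly the useful update signal.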
C: It's not two different modalities; it's the same modality. It's just that, you know, it's not two things: there are hundreds of these things, and they're all noisy and they're all error-prone themselves. So, by averaging together all of the errors, you get an overall more accurate estimate. Okay, this is not a detailed thing where this particular...
A: ...prediction was wrong, yeah. So that... so that...
A: Yeah, so let's say it inhibits this one and it potentiates this one; so, because it feeds back on itself, which...
A: ...cancels out. And those kinds of circuits exist a lot throughout the cortex; there are these handshake protocols all the time, right. So when an area projects from one area to the next, it also silences the incoming area, and the step after. So if you have strong projections from layer 4 to layer 2/3, there's also going to be a projection to inhibitory neurons that then shut down layer 4.
A
If
you're,
projecting
from
layer,
2
3
to
layer,
5,
there's
going
to
be
some
inhibitory
neurons
that
then
shut
down
the
inputting
layer,
two
three
nodes,
meaning
whatever
input
you
are
getting
you're
saying.
Thank
you
very
much
and
I'll
shut
up.
So
I
think
what
I
mean,
what
yeah
so
in
the
lineup.
What
this
could
be
is
it's
saying?
A: Okay, I just calculated this difference here, and this is what I think I'll be seeing in the next delta-t, right. Right, and then, yeah, and then you're comparing that. So this could be, like, something about the dynamics of how the... I mean.
A: ...made some interesting argument about, you know, predictions essentially being computed through different phases, through thalamocortical projections, like using the temporal difference between the two, and essentially the brain is switching between, you know, prediction and, you know, stimulus input, and those neurons are essentially computing the temporal difference between those two phases. There are no phases here, but, of course, there are temporal dynamics in these signals, and I don't think we can tell what this looks like unless we make some basic...
A: ...drawing, right; that's not a microcircuit. Like, a microcircuit unfolds in time; there are actual spiking neurons, right, yeah.
D: I think it's interesting to try to reconcile the whole predictive-coding world with our understanding of what we study here. And I've never really liked one aspect of predictive coding, where people talk about what's passed up the cortical hierarchy being the error. I just don't believe that; I don't think that's right. But it's possibly partially right. So just let me see if I can reconcile our kinds of views of the world.
D: I believe that a cortical column on its own, you know, a millimeter-square cortical column, is a complete sensorimotor system. Every cortical column has a motor output; every cortical column has some sort of input, which can be sensory or from other regions. The brain has to learn a model of the world. Now, when we talk about rats running in mazes, that's a very particular type of model, right.
D: We think more generally about models of objects and things you see, and so on. And so you've got this model of the world in here, and the model has to be a predictive model. So if you just look at a single cortical column, it's going to be making predictions, it's going to be generating motor behavior; it's basically building some model of what it observes, in its entirety. It doesn't require other parts, in general.
D: They don't... as I've talked about, like, the neurons' internal depolarization. And then you have, okay, I have another cortical column someplace else, and these have to communicate with each other, right? All right, so maybe this part of the cortex can tell this guy what to expect. We've always talked about it here as a sort of voting: it's like, what I know can influence what you know, and what you know... we can always come to a consensus about this. You could just think of voting, perhaps, as a type of prediction. It's like saying...
D: ...but the many cortical columns, all these long-range connections in the cortex, are basically voting to try to reach a consensus that they're observing the same thing. So let's say I'm touching this pen, and I'm looking at the pen, and I'm feeling its temperature, and maybe it's even making a sound. All those have to be united into the perception of this pen, and so everybody's voting: the hands, the touch areas, are saying, "I think I'm touching a pen."
D: We haven't talked about it as a prediction; we've been talking about it as reaching a consensus about what should be expected at any moment in time. And there's a dynamic, a temporal aspect: somebody's knowledge may precede what the other one senses. So then you could look at it as a prediction, right: I'm predicting, I'm...
D: "I think I've got this, and therefore you should see this," and, you know, "I'm seeing this, and therefore you should feel that." So you can think of... but these, of course, can't be silent; these are long-range projections, they have to be axons that are active, whereas maybe ninety percent of the predictions that occur here do not. And so we could think of maybe our voting circuitry as sort of these predictions you're talking about here.
A: Absolutely. And what you just mentioned, like, with long-range predictions: that's exactly what we think, or suspect, is going on. It's like this kind of "I'm expecting you to see this, because I have this type of information," yeah. And it's also like it concurrently says, "this is still happening; what's yours?"
D: Same thing... I'm just pointing out that when we talk about voting between columns, that is, in some sense, a type of prediction. It's absolutely... even if it could be vague; in our case, the voting can be ambiguous. It could be like, "I don't know what it is; it could be A, B, or C; all those are possible." So then it's kind of harder to call it a prediction; it's more like "I have uncertainty."
D: These are the possibilities of the things that might be happening. So we've talked about how this voting... essentially, everybody could be ambiguous, like the different parts of your skin that are touching the coffee cup are ambiguous, but together they know it's a coffee cup. An individual finger can't know that, but together they can. And so each finger has uncertainty, and it's more like this union of possibilities, and then you narrow down to the only common thing.
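The "union of possibilities narrowed to the only common thing" can be sketched as a set intersection. This is a toy rendering of the voting idea, with made-up object names; the real proposal involves sparse neural representations, not Python sets:

```python
def vote(candidate_sets):
    """Each column (here, each finger) contributes an ambiguous set of
    objects it might be sensing; voting keeps only what is common to all."""
    consensus = set(candidate_sets[0])
    for candidates in candidate_sets[1:]:
        consensus &= candidates
    return consensus

# Each finger alone is ambiguous, but together they agree on one object.
finger_a = {"coffee cup", "bowl", "pen"}
finger_b = {"coffee cup", "stapler", "pen"}
finger_c = {"coffee cup", "bowl"}
print(vote([finger_a, finger_b, finger_c]))  # {'coffee cup'}
```

No individual set identifies the object; the consensus emerges only from the intersection, which is the sense in which a single finger "can't know" but the columns together can.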
A: And, like, yeah: I know I'm moving my arm; I need to know that it's going to end up here, because I need to have that confidence. I need to, like, plan ahead to be able to do that thing, and...
D: ...my finger, at this distance, what I know I'm going to feel, or something along those lines... it's falling apart, but... so then, at that point in time, you say, "yeah, well, you should be seeing that," or something like that. But that's a dif... I mean, there are a couple of things going on there. I don't know if we're actually passing... I'm not sure what actual information you're passing there. Am I passing, like, the motor information?
D: It's more like, I think, I'm passing "here's what I'm expecting, given my movements," because the eye region and the somatosensory region can't talk to each other in terms of their individual...
A: I want to point out that there are two kinds of transformations happening here. So, one thing is, of course, yeah, the topology needs to map, because otherwise you...
A
I
mean
you're.
Not
this
model
does
not
presume
that
there's
some
kind
of
prediction
error
being
passed
down
right
there.
The
error
is
computed
locally,
with
respect
to
the
model
that
is
locally
within
that.
A: But that doesn't mean that, like, that voting signal, right, as we refer to it, is understood directly by that local model. So there's some kind of transformation and incorporation, which is why the actual error is produced locally. So, like, what you saw in the omission response: that there's no...
D: ...an error is propagated; I'm with you on that. I don't think error is propagated, and that's the core hypothesis of the predictive-coding scheme. You could be passing uncertainty, which is kind of like an error. You could be saying, "I know this is an X," or "I don't know; it could be X, Y, or Z," in which case you could be... you know. So you might think that that's an error, but that's not... I can... I can...
C
You would be passing, here's exactly what I think it is, and you would pass that on even if it could be two...
A
...other types of motor coordinates and motor frames, in which you would need to have a type of prediction, such as, you know, visual flow, for example, and things that are coming from...
D
Well, yeah, but, you know, it's classically viewed as controlling saccades in the primate, but my understanding is that the superior colliculus is actually a much more broad-based, primitive motor system. Yeah, it also does head movements...
D
I wouldn't be surprised if it's responsible for running things in the mouse. It's a primitive sensorimotor system, and more hardwired. So it's not limited to vision and saccades; it's a large part of it.
D
You know, one of the things, sorry to jump all around, but another thing: when you talk about prediction, always remember that neurons don't know anything, and a cortical region doesn't know anything. It's just getting some inputs and just trying to model those inputs. It has no idea what they represent. And if one cortical column is getting input from this bit of sensory input, and another from a different bit, they don't know that about each other. So how do they communicate?
D
What could they possibly tell each other? They can't assume anything about the knowledge of the other one. The only thing they can assume, the only way our voting works, the only way these long-range connections work, is that we can assume they're both observing the same thing in the world, the same structure in the world, even though they're observing it differently.
D
Oh, I was expecting this visual input and I got that input, so I calculate some error, a visual error. I can't share that with anyone, because it doesn't mean anything to anybody else. That's got to be handled internally, because the language of visual error is not a language that's shared with another column. But everybody could say...
D
...I have ambiguity about what I'm viewing. Because we're all doing the same thing, even though I don't know how you're viewing it, you may be doing it through vision, I may be viewing it through touch, someone else through hearing, we're all sensing the same thing. At least we can try to vote on that, because even though I don't know how you're sensing it, we're all agreeing that this is the same thing. So that's how I think about these long-range connections. To me, an error cannot be...
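The voting scheme described here, where columns cannot share their private errors but can share hypotheses about the object they are all sensing, can be sketched as set intersection. This is a toy illustration with made-up object names, not Numenta's actual implementation:

```python
# Toy sketch of cortical-column "voting": each column keeps its own
# private model and error, but shares only its object hypotheses.

def vote(columns):
    """Intersect every column's candidate-object set.

    Each column senses the object differently (vision, touch, hearing),
    but all are assumed to be observing the same thing in the world.
    """
    return set.intersection(*(c["candidates"] for c in columns))

# Hypothetical columns with illustrative candidate sets.
vision_column = {"modality": "vision", "candidates": {"mug", "bowl", "can"}}
touch_column = {"modality": "touch", "candidates": {"mug", "can"}}
hearing_column = {"modality": "hearing", "candidates": {"mug", "bell"}}

print(vote([vision_column, touch_column, hearing_column]))  # {'mug'}
```

Note that the columns never exchange a "visual error" or a "touch error", only object identities they could all agree on.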
E
...my error. But it would have some symmetry with how, for example, the retina is sort of outputting diffs, outputting how things have changed, if you interpret the error as being how things changed from what they were before...
E
You could format that as an error, and...
A
...a projection from V1 to ACC, and their layers are different. So, well, there's two broad classes of...
D
...cortical projections: ones that are asymmetric and ones that are symmetric, in terms of layer by layer. We try to make that distinction, because there's a lot of connections, like, you know, maybe corticothalamic projections, or, I don't know, layer three to layer four or something like that, or backwards to layer one. Those are asymmetric in terms of the anatomy. That clearly has to do something with some hierarchical structure, and that's the classic Felleman and Van Essen.
D
The voting ones, those are the voting connections. They tend to be everywhere; they go all over the place, lacking hierarchy, lacking topology. They're basically...
D
...the same thing. Whereas if you have these asymmetrical connections, which is what Felleman and Van Essen focus on, then that's clearly some more hierarchical type of structure, which, we think, is related to the compositionality. But I just want to make that distinction, because it took me many years to figure out the difference between those two. So when we talk about voting, we're tending to talk about the common-layer, iso-laminar, layer-to-similar-layer connections, not these asymmetrical connections; that's surely a different thing.
A
So you see this flat line in ACC in the projection, but, you know, that doesn't refute that there's a projection from V1 to ACC in another layer that says, hey, this is the error, which ACC then has to use. One thing I didn't focus on so much, and probably should have, is that these predictions have to happen in this spatial context. They aren't spatially invariant predictions; it's not like you're just sequencing.
A
Visual, like...
A
But yeah, right.
A
The first experiment I ever did, my project was going to be to investigate the projection from CA1 to entorhinal cortex, which is much sparser than this.
A
...or anything here. It's like a tunnel, like the other one, with a bunch of different textures on it, and the mouse would just run, right. So these are LEC/CA1 axons, and you can't really disambiguate them, because, you know, you have to make an injection, and you know it's going to take...
A
...see, into V1, layer 1. Okay, so...
A
And what I did is, I typically just turned off the projectors. It's not a very clean experiment. Let's...
A
Like, the whole brain is super active in the dark. And what this, I mean, this is noisy, but I saw this in a few other cells, there seems to be something like a phase precession, not a precession, where with each trial there's an increasing uncertainty of where you are. And I don't know if you've seen these sorts of daylight entrainment, circadian clock entrainment things.
A
Sorry, what's going on? Okay, it cannot only be visual, because, well...
A
It has to be. But, you see, there's discrimination between the different textures, so there is a visual component over there, absolutely. But it still happens over here when you reset it, right? Yeah, it's the same. So it's doing the same thing, but when you turn off the lights, when you give it uncertainty about its location, it drifts. Yeah, and...
A
...see this in our recordings from CA1 too. Where are they...
D
You know, I do this little thing, which probably a lot of experimentalists would just shudder at, but I imagine myself being a rat running along this maze. So I see these patterns going by, and what would I think? I'd think that I'm on a linear track and I'm feeling how far along I am, or...
D
Oh, it's, you know, it's gotten the pattern better and better. And in this case, if you don't find place cells, it might be because the rat doesn't think of it as a place. It's just saying, okay, I'm keeping track of what's coming up next. It's more like listening to a melody: yeah, okay, I've got this pattern, and I have to progress the melody by...
D
...I'm walking down the hallway and how far down I am. It might be easier just to imagine going: squares, then circles, then the little squares, the circles, now I'm done, sure. And that's why I might not find place cells, because it's all like counting.
A
When we find place cells in other experiments, it's because of how we show the environment. So in VR, in these linear, one-dimensional corridors, people do find place cells all the time. Yeah.
A
...where you basically see if it's temporal or spatial, right. And if it's temporal, if you align the zero here to be, sorry, if you align the activity to be after the offset of the previous grating, or of the previous stimulus in general, and if it were only temporal, then they would line up, right, if you match for speed. So basically these peaks would coincide, right, but they don't; they spread out.
A
So imagine you align your responses either on the grating, because, you know, we take like a mean subtraction of what's happening before, and we see the evoked response, right. So you...
A
...that's about to be seen, and that would be time zero; or you align it on the previous stimulus, and that would be time minus two, how far you've traveled from the previous one, how much time has progressed, yeah, exactly right. And if you bin it by slow and fast traversals, you would see that, for example, if it's temporal, it wouldn't matter if the traversals are slow or fast, you know, because it's a time axis. Yeah, exactly, so.
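The speed-binning logic being described can be sketched numerically: if a response is locked to elapsed time, its latency after the previous stimulus is the same for slow and fast traversals; if it is locked to distance, the latency scales with travel time. All numbers here are made-up illustrations, not values from the experiment:

```python
# Toy model: latency from previous-stimulus offset to response peak,
# under a purely temporal vs. a purely spatial coding hypothesis.

def peak_latency(distance_to_peak, speed, temporal_delay, is_temporal):
    """Return latency in seconds from the previous stimulus to the peak."""
    if is_temporal:
        return temporal_delay          # fixed elapsed time, speed-invariant
    return distance_to_peak / speed    # fixed distance, so latency ~ 1/speed

for speed in (0.2, 0.4):  # slow vs. fast traversal speed (m/s), assumed
    print(speed,
          peak_latency(0.5, speed, 1.0, True),    # temporal hypothesis
          peak_latency(0.5, speed, 1.0, False))   # spatial hypothesis
```

Under the temporal hypothesis the latencies coincide across speed bins; under the spatial one they spread out, which is the diagnostic being discussed.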
A
You know, that's also, yeah, but that's also kind of like a space within the... yeah.
A
They can. And the fact that it's linear, right, I mean, it would be better if this were a two-dimensional virtual environment, so yeah. And, I mean, literally the simplest thing you could do is a sort of conjunction of visual stimulus and time, and in one dimension, that's space, right? I mean, I don't see another way.
A
These... oh, absolutely, right. But, you know, this is all consistent with what other people have shown before. I'm...
D
...can be propagated, right. And I think that I'm just going to stick with it at the moment; I haven't learned...
D
My thing is, it's more of a fundamental principle: how could a bunch of neurons know what another bunch of neurons' error would be? My assumption is that when the cortex is built, there isn't specific knowledge in the connectivity between them, because that has to be determined primarily by genetics, the long-range connectivity. But what those axons represent, how to interpret them, that can't be built in. This is a founding assumption.
D
You really can't know where those signals are coming from and what they represent. It has to be a message that can be interpreted locally, with my own knowledge of the world. So when you're sending me an error that you calculated, an error in vision, I can't necessarily understand what that means; I don't know what vision is. It has to be in the language of the object, a shared language...
D
Just because they become active, it's not that they're visually selective. Every one of these signals is a sparse representation of activity; it's not a single axon. So how do you interpret that as a representation? Yes, it's going to be visually selective, or visually activated, but what's its interpretation? How do I interpret that set of active axons at any point in time? They're meaningful only... this is like a general principle of the brain.
D
They're only meaningful to the local circuit that generated them. Once you take a bunch of axons and project them someplace elsewhere in the brain, the one receiving them has no idea what the hell they mean.
A
...that you could assume that, if this is something that lands on the distal dendrite, sort of the apical dendrite of pyramidal cells, the problem also would be that this is like the other things I get locally, right.
D
But predictive coding is the idea that I have an expectation, which is some sparse activation; an actual input comes in; I somehow calculate the difference, and that difference would be in the local language of the region. And if I'm going to send that to someone, I have to understand how the receiving region can get meaning out of it, not just that there was an error, but exactly how that receiving region can take advantage of the error.
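The predictive-coding step being questioned, a locally computed difference between an expected sparse activation and the actual input, can be sketched with sets of active-cell indices. This is a minimal illustration of the idea, not anyone's actual circuit model:

```python
# Minimal sketch of a locally computed prediction error over sparse
# activations, each represented as a set of active cell indices.

def local_error(expected, actual):
    """Split the mismatch into unpredicted and omitted activity.

    The result is phrased in this region's private cell indices, so it
    is meaningful only to the local circuit that holds `expected`.
    """
    return {
        "unpredicted": actual - expected,  # cells active but not predicted
        "omitted": expected - actual,      # cells predicted but silent
    }

expected = {3, 17, 42, 99}  # predicted active cells (illustrative indices)
actual = {3, 17, 56}        # cells actually driven by the input

err = local_error(expected, actual)
print(sorted(err["unpredicted"]), sorted(err["omitted"]))  # [56] [42, 99]
```

Shipping `err` to another region would be meaningless without the local model that defines what each index stands for, which is exactly the objection raised in this exchange.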
A
...mean, I think it's something that is implicit in... I think it's implicit.
A
...can't think of one. But it seems to happen in the top-down projection, so it should be... theoretically, it could happen in the forward projection, if you want to think of it that way, yeah. And then another thing, in the last line, is basically the idea that we shouldn't think of this as a hierarchy, at least not in the classical sense, because we find all these, you know... yeah. And I was very happy to read that in your Thousand Brains paper, yeah.
A
And then again, it's symmetric, right. Both areas send prediction errors and predictions to each other, which would need to happen for this whole thing to work. Yeah.
D
Again, I think we have to be really careful, because there are inter-areal connections which are laminar-equivalent, and then there's these other ones which are more classic hierarchical, and I think they have different meanings and different ways of being interpreted. So we have to be careful when you say top-down. I can agree that V2 projecting to V1, that type of thing, could be much more in that line.
D
That's going to be different than V1 projecting to S1, which we see, you know, we see those kinds of connections, those...
D
...you're observing can inform V1, like, hey, maybe you should be looking for this thing. You know, and that's why we see these effects from almost every modality...
D
...else. But V1 only gets... no one else, you know. V1 gets sensory input through the thalamus from the eyes; it doesn't get it from anywhere else.