From YouTube: Performance-optimized hierarchical models predict neural responses in higher visual cortex
A: Okay, you guys, let me just do this right. Let me get the right thing; that button didn't work. Anyway, we're about to start. I just wanted to be streaming right now; as soon as Marcus is ready, I'm going to switch over to him on my screen. You can ignore that agenda; that's just what I'm planning on working on today. Hello everybody, I'm the guy sitting behind the desk. For those of you who don't know, this journal club is just about to start, and I just wanted to jump in here. Let me turn this off.
A: Hello, hello, folks. Can you hear me? Let me see if the chat works in here. We do have a chat room, and it's supposed to show up there. There we go. Hello, Marcus.
B: Part of why I'm talking about this paper is that we've spent a lot of years focusing on the ways that deep networks are different from cortex, and there are many. But right now we're at a time when it is actually useful for us to focus on the ways that they're the same, because if we want to apply insights from neuroscience to convolutional neural networks, for example, then it's suddenly really useful to know whether they're the same, or whether there are similarities between them. So anyway, background.
B: So this paper is "Performance-optimized hierarchical models predict neural responses in higher visual cortex." It's from the DiCarlo lab at MIT; the co-first authors were Dan Yamins and Ha Hong. This paper made kind of a splash back in 2014, and since then, for example, Yamins has gotten a faculty position at Stanford, where he now runs a lab that does this kind of research. So you could say it made an impact. It has an impressive result.
B: Does it approximate what's happening in cortex, yes or no? That's kind of a top-down question. And then there's a more bottom-up question about the units: are they using kind of the same building blocks? Are these networks built on similar building blocks as cortical networks? The answer to this question can vary by task and by model, and this paper, I would say, votes pretty strongly that, at the very least, convolutional neural networks are doing something quite similar to cortex.
B: Sounds like we're not on the same page; it sounds like you think a little differently about it. The rest of this doesn't depend on that; that was me justifying it again.

C: It depends what the goals are. If the goal is to have deep learning systems that approximate what's going on in experimental results, maybe very closely, then that might be true, right. But, for example, we're not trying to do that at all. We're trying to use insights from neuroscience to improve machine intelligence, which is going in a different direction, if that makes sense.
B: This is what I'm talking about: we want to improve convolutional neural networks using insights from neuroscience. One of the observations from neuroscience is that representations in the brain are sparse. Is that applicable to convolutional neural networks? Will they improve if we add sparsity? I'm saying that the answer to that question depends on how similar the two are. Maybe.
B: Just that they approached this task: they trained this network to be good at classifying static images, just having an image flashed and classifying from that. And it suggests pretty strongly, as I'll get into, that it's really doing something quite similar to the cortex.
B: It's the visual hierarchy: V1, V2, V4, then IT. But here this is just where I'm disclaiming that they haven't even attempted to solve other things, like video or sensorimotor object recognition or anything like that. So there's no claim at all that this network is doing all of it; there's tons that it's missing, and in some ways that's why he's forming a lab, to try to attack these other problems.
E: But we can begin to ask: does it approximate cortex, right? We know that there are dramatic differences between the CNN and the cortex. You just look at, you know, the number of levels of a CNN that you have to invest in. That's one of my main points.
B: These are four layers. Just simple, but okay.
B: Using just linear combinations of units from up here. So, two things to point out. First, the model never encounters the neural data; these are two separate worlds. The model is just learning to recognize objects, and the neural data is over here. Second, you would not expect single units to match single units.
B: The thing is, okay: let's say my IT is my model. Let's say we're comparing my IT to your IT. You wouldn't expect single units to have the same tunings. You would expect, though, to be able to do linear combinations: maybe linear combinations of, like, 30 of your particular neurons could predict one of mine.
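To make that concrete, here is a minimal sketch (not the paper's exact fitting procedure) of the linear-mapping idea: predict one recorded "neuron" from a linear combination of model units, using cross-validated ridge regression. The arrays are synthetic stand-ins for the model's top-layer activations and a recorded IT site.

```python
# Minimal sketch: predict one neuron's responses from a linear combination
# of model units. Synthetic data stands in for model features and recordings.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
model_features = rng.normal(size=(1000, 128))   # 1000 images x 128 model units
neuron_response = model_features @ rng.normal(size=128) + rng.normal(size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    model_features, neuron_response, random_state=0)

# Regularized linear regression; the penalty is chosen by cross-validation.
mapping = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X_train, y_train)
print("held-out R^2:", mapping.score(X_test, y_test))
```

Held-out images are the important part: the linear weights are fit on one set of images and judged on another, so a high score means the model units really do span the neuron's tuning.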
E: Not to me at all. I mean, I think one of the foundations of neuroscience is that when they look at these response properties themselves, they expect to see similar responses across different animals, you know, macaque to macaque. They don't expect "I can find you the equivalent of this animal's neuron by doing a linear combination of that animal's neurons."
B: Yeah, I think here I should talk about the training and testing, what they trained on and what they tested on, because it's a little bit different from what you normally expect in supervised learning, and I think there's a little bit of a barrier right now in getting that across. So, what they trained on. First of all, yes: what they trained the neural network on.
B: So they trained the top layer of this model to support linear classification of these nine categories. This is a little wonky, but you can think of it as putting a fifth layer up here temporarily, with a temporary unit for each category, back-propagating through it, and then just stripping off that top layer. And that last part, stripping off the top layer, is interesting, because during testing they now use a different set of categories.
B: They're not testing whether the network can recognize those categories. They give it a whole new data set: a whole new set of objects, a whole new set of backgrounds, all sorts of things; they even render it in different software. And now they train kind of a new temporary top layer, but just that top layer: they train this new one to linearly classify these new objects.
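As a rough sketch of that train-then-strip-then-refit procedure, with made-up shapes and sklearn's LogisticRegression standing in for whatever linear classifier they actually used: the frozen network's top-layer features are computed for the new images, and only a new linear readout is fit on the new categories.

```python
# Rough sketch: the network is frozen; only a new linear readout is trained
# on categories the network never saw. Synthetic stand-in arrays throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 256))   # frozen top-layer features, new images
labels = rng.integers(0, 8, size=500)    # labels for 8 brand-new categories

readout = LogisticRegression(max_iter=1000)
readout.fit(features, labels)            # the only thing being trained
```

If the representation is good, this single linear layer is enough; nothing below it gets updated for the new categories.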
D: The argument is that whatever you're using there, that is the most complex thing these networks can do with the representation layer. And so if you were talking about transfer learning at all in these networks, then you would talk about it in the context of the highest level.
B: Okay. So the idea that IT encodes object identity is not quite right. The idea instead is that IT encodes the input in a really useful way, a way that makes classification easy. That's why you can classify object identity from IT responses, but that's not to suggest that that's specifically what it's doing. So now I can talk about the model in a little more detail, and I've translated this into a picture that's a little more neuroscience-flavored.
B: This is kind of the machine learning version of it, and this is the equivalent version where I talk about cortical columns and such. So I'll talk about this briefly. Here I'm showing the image, basically, and in each of these, the same filter is being applied to different parts of the image; and here is a cortical column (they never use the term cortical column in the paper; this is me introducing an interpretation). Here I'm just using the standard mapping of CNNs onto cortical columns, the idea that every one of these has these groups of units. But this has the additional change that the whole thing is multiple parallel models, and I'll explain that now; I was just introducing that side of the board.
B: So this model simulates level skipping: the observation that V1 will skip levels, V2 will skip, everything will skip, and sometimes the signal goes through multiple levels of processing before it reaches IT. The way they simulate all of that is they have a set of these models that are trained separately, these smaller convolutional neural networks: here's a two-layer network, here's a three-layer network, here's a one-layer network, and they actually train these separately.
B: So, boosting. If you open a machine learning textbook, it has a certain meaning involving training an ensemble of parallel models, and I'm talking about it briefly just because I think we might come back to it; we might use something like this when we start training parallel models. It's a useful idea. Briefly, what they do is they train this network, and it performs better on some examples than on others.
B: They train the first one; it does a really good job of classifying some examples and a poor job of classifying others. So when they train the next one, they weight the examples: they reward the network a little bit more for being able to recognize the objects that the previous one was bad at recognizing. And they proceed to basically train every model so that it gives preference to the examples that the previous ones weren't doing such a good job on.
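Here is a minimal sketch of that reweighting idea, not the paper's exact procedure: each new model upweights the examples the previous one got wrong. The data is synthetic, and sklearn decision trees stand in for the small convolutional networks.

```python
# Boosting-style reweighting: later models emphasize examples earlier models
# missed. Decision trees are stand-ins for the paper's small networks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
weights = np.full(len(y), 1.0 / len(y))   # start with uniform example weights
models = []

for _ in range(3):                        # three models, like the parallel networks
    model = DecisionTreeClassifier(max_depth=2, random_state=0)
    model.fit(X, y, sample_weight=weights)
    models.append(model)
    missed = model.predict(X) != y
    weights[missed] *= 2.0                # emphasize what this model got wrong
    weights /= weights.sum()              # renormalize to a distribution
```

Textbook boosting (AdaBoost) sets the upweighting factor from the weighted error rate rather than using a fixed 2.0; the fixed factor here just keeps the sketch short.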
B: The other piece is the multiple parallel models trained on the same input. They simulate this; it's kind of their stand-in for level skipping. They just think of it as: this is a two-layer network, this is a three-layer network, this is a one-layer network, and then they all feed into a fourth layer that takes it all in. And this fourth layer is also trained separately; they just concatenate the outputs.
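The wiring is easy to misread from a figure, so here is a minimal PyTorch sketch of the shape of it (assumed layer sizes, not the actual HMO architecture): parallel branches of different depths process the same image, and their outputs are concatenated and fed to one final layer.

```python
# Parallel conv branches of different depths; outputs concatenated into a
# final layer. Sizes are illustrative, not the paper's actual architecture.
import torch
import torch.nn as nn

def branch(n_layers: int) -> nn.Sequential:
    """A small conv stack: (conv -> relu -> pool) repeated n_layers times."""
    layers, channels = [], 3
    for _ in range(n_layers):
        layers += [nn.Conv2d(channels, 16, kernel_size=3, padding=1),
                   nn.ReLU(),
                   nn.AdaptiveAvgPool2d(8)]
        channels = 16
    return nn.Sequential(*layers, nn.Flatten())

branches = nn.ModuleList(branch(n) for n in (1, 2, 3))  # 1-, 2-, 3-layer nets
final_layer = nn.Linear(3 * 16 * 8 * 8, 9)  # concatenated features -> 9 classes

x = torch.randn(4, 3, 64, 64)               # a batch of four RGB images
features = torch.cat([b(x) for b in branches], dim=1)
logits = final_layer(features)
```

Keeping the branches and the final layer as separate modules mirrors the fact that they were trained separately rather than end to end.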
B: The three means this: in this model I just drew three by default everywhere. It might be like 42, as in 42 features, and it's saying: here's where this feature arose on the image, here's where another one is, here's where another one is. Now, the parallel models could all just be identical types of models trained differently, but they also experiment with different hyperparameters. They were simulating the idea that some neurons are different from others, that their activation thresholds are different.
B: They vary the hyperparameters across these as well.

C: Oh yeah, on boosting, when we were talking about it: the first one has some examples that it doesn't work well with, and then you train the next one to fix those. Is that successive refinement, or do you kind of rotate things around, so that everything is not just referenced to the first one and correcting all of its deficiencies?
B: Sort of. I mean, like, different groups of minicolumns. Okay, think of multiple cortical columns processing the same patch, like you mentioned in S1: you could have the slowly adapting and the rapidly adapting cortical columns processing the same patch with different types of sensors.
E: Well, maybe not. And also, then we should be able to correlate that with the size of N. If N in this case turns out to be two hundred, well, does it still work? I don't think so. Maybe it would work, but it's also a stretch to make that assumption; there's no biological basis for saying that.

B: Okay, I just wanted to say what they've done: they really do have multiple models that have different sets of parameters.
E: I think there's a fundamental disconnect between how these operate, but we can throw the idea around. I just said there are a lot of things that can be said like that. My experience has been that when people try to make correlations between artificial networks and the brain, they get very sloppy about these details and then just hand-wave: oh yeah, this is like this, that is like that.
C: So this is a very complex setup, you know, pretty intricate from a machine learning standpoint, and typically, if you wanted to train a ten-category network to do well, you wouldn't need to do all of this stuff. Did they motivate this? Did they try simpler systems? Why this particular complex architecture?
B: So anyway, I don't think the assessment that they trained this until it matched the neural data is right. In the first figure of this paper, they compare a bunch of different models, where on the bottom is the categorization performance, so the test performance on this task, and on the y-axis they check how much of the variance in single units they can explain. And first they went with simple models.
B: Simple three-layer models. This one on the left is just single ones of these, where they just take these, and they find this general trend: the better the network performs at this categorization task, the better the network is at explaining, at predicting, neural activity. And I want to step back and just repeat that, because it's really interesting to keep in mind.
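In spirit, that figure is one point per candidate model, categorization accuracy against explained variance, plus a trend measure across models. A toy sketch with made-up numbers, just to show the computation:

```python
# Toy sketch of the figure-1-style trend: one (accuracy, explained variance)
# point per model, then a rank correlation. All numbers are invented.
import numpy as np
from scipy.stats import spearmanr

accuracy = np.array([0.31, 0.40, 0.48, 0.55, 0.63])       # per-model test accuracy
explained_var = np.array([0.12, 0.18, 0.22, 0.30, 0.35])  # per-model neural fit

rho, p_value = spearmanr(accuracy, explained_var)
print(f"rank correlation: {rho:.2f} (p = {p_value:.3f})")
```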
B: And the way they improved the fit to the neural data was by adding levels. So even the title of the paper, if you have a look, is essentially implying that they were trying to make a four-layer network perform object recognition really well, and when I say perform really well, it's this.
E: I don't find this result surprising at all, starting with that. If you've got two systems that can both solve this object classification problem, then at some point, right before you do the classification, they're going to have some common basis set. How could they not? And that tells you nothing about the details underneath, about how you got there. It's almost a given that it's going to have to do that. It's not even surprising that some linear combination of these cells produces the same results.
C: I feel similarly, because if we say, you know, IT is going to solve this problem, there's going to be some simple way to take the IT representation and classify from it, let's say a linear classification. And the definition of solving it is that you can take a linear combination of these outputs and classify.
E: The brain is doing a much more complex problem; it's the whole sensorimotor learning problem with these objects. But if you just give it a flashed image at inference, it can't use any of that machinery at that point. As you point out, it doesn't do moving things, rotating objects; it doesn't understand how optics change over time, and so on. So some of this is extracting features.
D: It adds higher-level detail to the classification, and one might intuitively expect that to be useful. Maybe the entire extent to which that maps onto biology is that, well, in biology there's also level skipping; it's not a strictly hierarchical organization, so higher areas also have access to low-level information, at least to a certain extent. And so maybe that is the extent of what is being shown here: that these systems are working on the basis of, you know, a partial hierarchy.
B: Yes, but I want to add, or just put a little extra emphasis on, the easy explanation: that IT is representing the input in a basis that makes it easy to linearly separate things, even novel objects, even objects that you've never seen before. I think this suggests that that's a pretty good interpretation of what IT is doing, or at least part of what IT is used for.
E: And go back to what cells they're recording from here. Classically, when people record from cells in these areas, the vast majority of cells, many cells, do not show any correlations; they don't see a correlation to the inputs, and those cells get ignored, right?
E: If you don't know that stuff, it's really easy to misinterpret this. They see this correlation, but maybe over fifty percent of the cells they could measure from were ignored because they didn't seem to correlate with the changing stimulus.
C: Suppose you have some really complex system here, a DNN, and then you have a linear system here, with a set of weights, that can then classify on top of it, and it does a good job of that, yeah. And then you have some other complex system, let's call it the monkey, and you have another set of weights that can classify the same data set well.
E: We can fool ourselves into thinking we're capturing what's going on in the brain. I think it's a separate question. This is the way people want to build artificial neural networks: what can we take from the brain to make them better? But I don't think this is any evidence that this is how the brain works. I think what it's really showing is just that you've reduced the problem to having classifiers of a certain type on both sides.
E: We know that these layers are so much more than that. We have knowledge saying that, you know, V1 is not the whole story, and we have this whole theory about every column and every layer, about what it's doing, and it's much more complicated. So we're just looking at a certain degenerate case, where you've already trimmed the brain down: you're just flashing an image at it, and you're only looking at some of the cells, because the cells that don't fit your model get dropped.
E: Maybe. They're just finding these correlations, and you can maybe disagree about how they find these relations. There's one simple set of things that convolutional neural networks can do, which is a tiny subset of what actual brains do, and within that they can find some correlation between these different levels. But finding that doesn't really tell you anything at all about how our brain works.
B: They didn't focus very much on that detail. So keep in mind that they had sort of a two-part training thing, where one type of training was building a model like this and seeing how it does, and the second thing they were doing was trying a bunch of different arrangements of these models, having different numbers of levels, having different hyperparameters. And that second kind of training was like an outer loop; they weren't using backpropagation for that outer loop.
B: And only these orange dots are that, yeah. So they were just saying, somewhat interestingly: if we just aim for IT fitting, we also improve our categorization performance. That's not their main result, though. And you can tell that they anticipated a lot of pushback: they did the random-model control, they did the V1 comparison, I swear they did.
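Something like that outer loop can be sketched as a gradient-free search over architecture settings. Here `train_and_score` is a hypothetical stand-in for the inner, backprop-trained loop, and random search stands in for whatever optimizer they actually used:

```python
# Two-part training: an inner loop trained with backprop (stubbed out here)
# and an outer loop that searches hyperparameters without gradients.
import random

def train_and_score(depth, n_filters, threshold):
    # Hypothetical stand-in: train a model with backprop, return validation
    # accuracy. Stubbed with a random number so the sketch runs end to end.
    return random.random()

random.seed(0)
best_score, best_config = -1.0, None
for _ in range(50):                  # outer loop: sample configs, keep the best
    config = {"depth": random.choice([1, 2, 3]),
              "n_filters": random.choice([16, 32, 64]),
              "threshold": random.uniform(0.0, 1.0)}
    score = train_and_score(**config)
    if score > best_score:
        best_score, best_config = score, config
print("best config:", best_config)
```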
E: It gets input from V1, but those inputs are not identical; they come through separate channels and are processed differently. So IT is getting both; it's getting two very different types of input. It's not like the inputs are just converging together; they're separate pathways, processed differently.
B: Those are details; I just wanted to make sure we're clear. We're just echoing: these are four-layer convolutional neural networks, which, yes, involve parallel models, but they're four-layer convolutional neural networks that can perform object recognition, within certain kinds of boundary conditions, at levels comparable to humans. And so anyway, the idea that convolutional neural networks always require hundreds of layers doesn't hold up here.
E: Especially if you assume that the data has to travel up to IT through each level of the hierarchy: there isn't time for any of the neurons to spike more than once. So you're not getting a frequency or rate-coded signal at all; it's more like a sort of binary population code. And that's a very important piece of data, because it tells you that on these timescales the brain can't be using average firing rates.
D: So we don't know that that's playing a role; we don't know. You could tell, if you wanted to be sure: you'd essentially separate the two, and then you could vary the stimulus presentation and probe the different speeds, with the feed-forward component presumably being very fast.
E: These are also just two variations on a theme, different ways of slicing it, if you think about it. So, going back to your first question from the beginning, which was how to bring in neuroscience principles: was there a conclusion or a recommendation?
D: I won't need a lot of space. So the idea is, you have a monkey brain, right, so that is two halves, right, and you want to record from IT, say on this side. So you have your electrode here, right; that's where you're recording, in IT. And you have stimuli on the screen, and the problem is, for these fast kinds of image flashes, there are really two things that are happening. For one, there is the contralateral visual field, so the image over here, right.
D: So let's call this the white part of the image, right, being perceived by the eye and being processed here as the feed-forward signal. So sometime after showing the image here, you will get a response: this is where you have your stimulus, and some 70 milliseconds later or so, you will start to see some neurons firing. But the problem is there's also another pathway; there's also the other side of the image, right, and this part goes in.
D: It has a feed-forward response on the other side, and then there are top-down signals, right, from PFC or other higher structures, and the only way to disentangle this is actually to cut the anterior commissure. So you split these brain halves apart, so that now they're only minimally connected. And there were a couple of experiments that actually did this.
D: It's a very neat setup, because now you can decide to show a stimulus only on one side. You introduce a screen, so the monkey cannot see across it, and now you can show a stimulus only on this side, and then what you will get is only the top-down response, because feed-forward information is not going to arrive here through this part.
D: In fact, all the responses that you're going to get now are only top-down responses, and you do see them: suddenly there's a signal here, and you know it must have come through the feedback pathway, whereas the early response was the feed-forward one. The problem with that time window, from like 70 to 140 milliseconds, is that it mixes the two different responses, because this one reaches IT at, I would have to recheck, something like 70 milliseconds.
D: And then there's like some 30 to 40 milliseconds where it's the feed-forward path alone. So somewhere, I don't know, maybe at a hundred and ten milliseconds, the top-down starts getting in. And so if you're now averaging from 70 to 170 milliseconds, right, you are recording the feed-forward response, which is awesome, that part is a pure filter like this, but you're also intermixing it with all the top-down signals that are in play.
D: The other problem is also that... so I just wanted to point out the latter one and how people have addressed it in physiology. If you wanted to do that, you would have to do it with a selectively contralateral stimulus, to show: this is the pure feed-forward response. That part is like a feed-forward filter, which is the convolutional-neural-network part, if you want to draw that analogy. If what you mean is the feed-forward properties of the visual system and feed-forward convolutional networks, then you should make a setup where you can isolate that.
A: So you just watched a paper review led by Marcus Lewis here at Numenta headquarters in Redwood City. If you liked this content, please like the video, give me a thumbs up, and subscribe to our channel; that would be great. I try to stream as much live stuff as I can from the office, whether it be research meetings or paper reviews like this one. There will probably be more this week; I will be in the office Friday, probably streaming something then. So, appreciate everybody.