Description
Invariance vs. Equivariance (presented by Marcus Lewis)
http://dicarlolab.mit.edu/sites/dicarlolab.mit.edu/files/pubs/dicarlo%20and%20cox%202007.pdf
Discussion at https://discourse.numenta.org/t/numenta-research-invariance-vs-equivariance-jan-24-2020/7115
The second word isn't one we use super often. One person who has kind of popularized the term is Geoff Hinton with capsules; he talked about equivariance a lot. Equivariance, pretty much interchangeable with covariance, is where, when the input is changing, the representation is also changing, but in kind of a controlled way, whereas invariance is where, when your input is changing, the output isn't.
Now I think it's best to just talk about this in terms of computational principles, in terms of just deducing how these systems probably work, what the cortex broadly is doing. I'm going to talk about it in terms of kind of this conventional view of the visual hierarchy, because it's easy to make the point there. Other people have made the point; I'll be pointing at papers where they write really well about it, but I think the principles of this can carry over.
This conventional view: I've even made it as ridiculous as possible by turning all of this into a simple feed-forward network, which is obviously not the case. But it's what I'm going to use to make this point, as a proof of concept, even though a lot of cortical processing doesn't work this way.
Getting an invariant object representation is something this paper framed: DiCarlo and Cox, 2007. I do have the paper up on the screen; I'm just going to zoom in on one figure, because where I drew these things in 2D, it's better to have the mental image from their pretty charts, where they draw in 3D.
These planes that they draw in various shapes are really symbolizing manifolds; let's just say manifolds that the neural population occupies as an object changes, as an object rotates or moves closer, or if certain properties change, if the head becomes larger, or the nose changes a little.
This is showing the various population activations that can occur for a given object. I'm really only pointing to this so that when I'm showing these 2D things, you have a better mental image; you can't really draw this, and you also have to keep in mind that it's standing in for something that's even higher-dimensional. So here, they have this theory that one way to understand what the visual hierarchy is doing...
...V1, V2, V4, up through IT, is that they're performing something they call manifold flattening. Take the input, let's say it's just an object: as I move it, as I rotate it, as I move this marker around, what would happen in their model is that in V1 the population activity is going to kind of move along this line, move along one of those planes, kind of hopping around here. You can see I'm showing unit one, unit two; these are like the activations of two neurons, but it's really a schematic picture.
It's standing in for a larger population. So as this is moving, what I'm really going to build up to is that even in the area of cortex that many people think of as encoding object identity, inferotemporal cortex, IT, even there, changes to the object are going to cause the activation to move along this line, or along this plane, along this high-dimensional manifold.
The reason comes down to, at some point, an object label. The visual areas are projecting all over the place, like IT to your language centers. If the part of your cortex that's trying to come up with a word to speak needs to name something, it probably is receiving input from IT, and it just needs an obvious readout. So when you need to produce the name of an object, your language regions basically have a linear classifier on the axons.
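The "linear classifier on the axons" idea can be sketched in a few lines. This is not from the talk or from any Numenta code; it's a minimal illustration, with made-up sizes, labels, and a made-up `name_object` helper, of how a downstream region could name an object from an IT-like population vector using nothing but a dot product per label and an argmax.

```python
import numpy as np

# Hypothetical sketch: a downstream "naming" region reads object identity
# off an IT-like population vector with a single linear classifier.
# All sizes and labels are invented for illustration.
rng = np.random.default_rng(0)

n_neurons = 100                       # size of the IT-like population
labels = ["marker", "mug", "phone"]

# One weight vector per label; a real readout would be learned.
W = rng.standard_normal((len(labels), n_neurons))

def name_object(population_activity):
    # A linear readout is just one dot product per label, then argmax.
    scores = W @ population_activity
    return labels[int(np.argmax(scores))]

# An activity pattern lying near the "marker" weight vector gets named.
prototype_marker = W[0] / np.linalg.norm(W[0])
print(name_object(prototype_marker))
```

The point of the sketch is how cheap the readout is: if the upstream representation is already (approximately) linearly separable, labeling needs no further machinery.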
So the brain can get labels. You would call it an invariant object representation, but it's not so much this picture of temporally stable, invariant object representations; it's task-driven. It's like, when you need to name something: okay, I'm going to run the classifier and figure it out; okay, the word I need to say is "marker."
And so the ability to label: there's a challenge to it. Yes, every single point could be labeled; they're all unique to that object. But there are too many of them, and so it's just impossible. It seems like you'd have to say: well, how do I make it so labeling is possible? You have to reduce the number of these changing inputs that are all referring to the object.
So if you treat it as having to connect to all of these patterns, that's just not feasible, that's just too much. If you can come up with a neural mechanism, though, for actually detecting whether this is on a certain side of a line, a certain side of a hyperplane, I mean, that's what this needs. But to do that...
To have a many-to-one mapping, you can't have as many points in the IT curve as you can in the V1 manifold. I mean, there's no magic formula that says: oh, you know, I have all these points; to get them onto that flatter manifold I have to somehow collapse them, I guess.
That's the requirement of some level of temporal pooling. It's a way of saying: I have multiple points, and I map them to fewer points; otherwise you run out of points. Whether that's a hard requirement, I think the key issue here sort of asserts itself in that argument. But if you do it, then that to me is what it means to create that invariance: it turns a multiple, a many, into fewer, a many-to-one mapping.
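The many-to-one collapse being described can be made concrete with a toy example. The "views" below are invented binary patterns standing in for points on a lower-level manifold, and pooling-by-union is only an illustrative stand-in, not Numenta's actual temporal pooling algorithm.

```python
# Hypothetical sketch of a many-to-one (pooling) mapping: several distinct
# input patterns, views of the same object, all map onto one stable
# pooled representation. Patterns and the union scheme are invented.

marker_views = [
    frozenset({1, 5, 9}),      # the marker seen from one angle
    frozenset({2, 5, 12}),     # rotated
    frozenset({1, 7, 12}),     # moved closer
]

# Every view maps to the same pooled representation: many points in,
# one point out.
pooled_marker = frozenset().union(*marker_views)
mapping = {view: pooled_marker for view in marker_views}

print(len(mapping), len(set(mapping.values())))  # 3 inputs, 1 output
```

A downstream labeler then only ever has to recognize the single pooled code, which is what makes labeling feasible.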
In our models, we have an interesting thing here. You know, we have these manifolds all discontinuous, right? But if I think about the way the temporal memory represents sequences, or even the way we represent objects in the cortical column, those points, you would consider them like locations, you know, the location representations.
They're assignments, in some sense, that together, each one of them, are unique to the object. But if I look at the elements in sequence memory, I can't look at two elements and say they're next to each other in any kind of way. It's just like these points are at random, and so even there, there's an interesting problem, because if there's truth to...
...this smoother thing which doesn't change as often, then these multiple points here map onto some sort of smooth representation. I think this idea, you know, that's part of our theories; it's not how people think about it. People think about these manifolds as being more sort of continuous. I used to think that, but I don't anymore; I think those may be the wrong manifolds.
The manifold, if you want to call it that, in a column is not continuous at all; it's a bunch of disjoint points, and we have to sort of turn it into a manifold before we send it off to someone else. That's why I just want to make that point here. I think that's a key issue that, if we write about it, is not really reflected in how others write about it; it's another reason this sort of invariance step matters.
Very much so. You just highlighted a part I thought was really well written; I just liked the way they worded this, and it really goes with what you just said, or it really connects to it. So, when they're talking about how this manifold flattening might occur, they're talking about ways it could happen: "The third potentially key idea is that time implicitly supervises manifold flattening." The information density of those words is great.
You're watching something as things are changing in front of you: cars are passing, a car's door is opening, all sorts of manipulations are happening. The car is sort of moving through a manifold, and just by watching how it evolves over time, its various degrees of freedom, you're just going to see them. So time, yes: watching how an image changes over time spells out the various degrees of freedom. The car door opens, the car moves.
These things are physically connected to each other. It's almost like a supervisor saying: this is connected, this is the same thing, this is the same thing, this is the same thing. I'm getting all these different inputs, but I'm not saying each one is something different; I'm trying to basically, you know, in an unsupervised way, figure out that this is the same thing.
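The "time as implicit supervisor" idea can be sketched as a toy grouping rule. Everything here is invented for illustration: the 2D "frames" stand in for high-dimensional inputs, and the distance threshold stands in for whatever real mechanism detects an abrupt change; it is a cartoon of the idea, not a learning algorithm.

```python
# Cartoon of time acting as an implicit supervisor: successive inputs that
# change only a little are grouped as "the same thing"; a large jump starts
# a new group. Vectors and the threshold are invented.

stream = [
    (0.0, 0.0),   # car, frame 1
    (0.1, 0.0),   # car, slightly rotated
    (0.1, 0.2),   # car, door opening
    (5.0, 5.0),   # gaze jumps to a tree
    (5.1, 5.0),   # tree sways
]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

groups = []
group_id = 0
for i, frame in enumerate(stream):
    if i > 0 and dist(frame, stream[i - 1]) > 1.0:  # abrupt change: new group
        group_id += 1
    groups.append(group_id)

print(groups)  # [0, 0, 0, 1, 1]: frames linked by time share a group
```

Temporal continuity supplies the "same thing" labels that no external teacher ever provides.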
How can I make it so? I think that's where that context comes in. Yes, two things: one is temporal evolution, and that could be just me looking at, you know, the object and turning it around, or it could be the car door opening. Those are two very different things: one is different poses, the other is actual degrees of freedom of the object changing its state, sometimes, right?
So that's another complication here, and I wonder about it quite a bit. When people think about vision, they talk about images as if the input is the entire image, but it's never that, never. So I'm not sure; I can't read into this too much, but when the paper says something about "the image," it makes me wonder: are they recognizing that fact, or do they think V1 is doing entire-image processing?
Last week, Matt and Flo, mostly Matt, a little bit of Flo, were presenting the framework for vision, and a lot of the conversation started centering around this: how does it achieve invariance? So now I'm starting to gather that maybe part of what you have in mind when you say this is sort of like how this blue line becomes this blue line; maybe that's kind of what you're getting at.
But in other situations you can. So it's like everybody tries to do this as much as possible, and in many situations a single, what, a column in the cortex is sufficient for doing it, but in other situations it's not. So it doesn't say that it always happens; it just says everybody tries to do as much as possible, and that's what it means: it's a many-to-one, it's not a continuum. Yeah, so, an interesting distinction between the words invariance and equivariance; I don't know that it's commonly made.
I'm okay; for now I'll agree with that, though I want to think about it more. Keep in mind that sometimes small regions of this are going to be stretched out to be much larger up here. Sometimes tiny changes in the pixels are actually really, really important for distinguishing semantics, so sometimes little things are going to become longer up here. So there's a lot of stretching too, yeah...
...through this process. But okay, now I'll bring up maybe an obvious point of why this is relevant. If cortical columns are really representing object identity, why hasn't anybody noticed? It seems like, at least with the cartoon version of object identity, if you just go for a basic one, the first time anyone ever did a multi-unit recording in V1 in monkeys, they would have realized: whoa, this is representing objects, object identities.
Sorry, I've forgotten... there's this correlation, and this has been observed. Cells do one of two things: either they're silent or they're genuinely firing, but it's very, very sparse, and I think an individual neuron there can't label the object; it has to be looked at as a population code. So the reason nobody has noticed is just that they were looking and having trouble finding cells responding to anything, so when they find a cell, it seems...
...like you try to attend to different things, as opposed to just looking at something, right? Because if you're truly attending to a different thing, then literally the whole thing will change, and all of your representations will change too. So the stability, where you can perceive something as one thing, where you have a stable representation even while you're attending to its different components... right now, when I'm not attentive, it's like looking at you: my percept of you stays a stable percept of you.
That stability of my perception has to be somewhere in the stability of the neurons, I think, and so that's what you're looking for. You would see some level of that even in V1 if you want, and again, if I want to do it in V1, I have to make sure that the thing I'm attending to is small in the original space, not a big thing, because that's part of how you can get object invariance for a small thing, but not the whole scene. So this is right.
So yeah, I mean I haven't found any papers that just, like, obviously refuted it. My suspicion is that a number of labs have gone and done some quick exploratory experiments, just a sanity test to make sure there are no, like, object-identity cells in V1, and then they go on and work on something that they're going to be able to get published. So I feel like with these kinds of things, maybe if you talk to the right people, they can tell you: oh no, I've actually tried that.
Ten years ago or something, when we did the sequence memory, we realized that the sequence memory did not require any pooling to work at all. It just didn't require it; like, wow. The key insight there was that every single point in the sequence has a unique ID, a unique representation. Like, wow, who would have thought of that? That's amazing. So now everything is classified as different; the whole thing works without any object ID or any stabilization.
But I can sometimes name a song. So if I can name the song, I don't think I label every single chord of the song. I've made this argument multiple times: it's easy, because we're supposed to name a melody if we hear the refrain of the chorus, because that part is repeated over and over again. If I'm trying to remember the rest of that song, I can't remember it; I know this line, I know this line, but I can't name it.
I was going to say, you're describing the problem of having to learn a bunch of patterns and map them onto a smaller thing, and it's better if you could somehow set it up as, like, you know, a linear mapping, where everything on one side of this line... this line does not take a large capacity to represent, if you have some neural mechanism for figuring out whether...
...you could get to these curves, so that you can do that. I mean, I come at it from a different perspective; I think of it from this image, not that one, because I've got a bunch of points mapping onto a few ones. And so the way I think about it, it's like: oh, now I realize that these points, if I were just to look at them, I would look at all the location points, the points that represent the points on a coffee table, or the points that represent the knobs on it, and I might...
...know they'd be evenly dispersed. There's nothing there; I'm looking at the points, the points being which neurons are active in the green dot versus which are active in the black dot. There's nothing to tell you that they're adjacent, that they're on a manifold. But what you're trying to do, then, is the outcome of this: let's say I have a thousand dots here; I'm going to try to come up with an output of this that has fewer dots.
It's never going to be on a continuous manifold; I've never seen that anywhere. It's just fewer dots, and so some of these dots are going to map to this one, and some of them are not. Maybe this one doesn't map at all; maybe that one comes in more than once, because I couldn't do anything like pooling on it. But the point is, in the end, I have fewer dots here, and there was a many-to-one mapping from a set of these dots onto them. And if I was really lucky, and again, these dots are not neurons...
...that would be lucky. But the point of the process is just to try to get to as few as possible, and then the next region wants to do the same kind of thing with its input. How that relates exactly to this classifier, you know, and this sort of manifold picture, I don't really know. Again, I have trouble imagining the idea of continuous manifolds in binary space; it just doesn't work in my head. It's like, you know, it seems like points are right next to each other.
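The point about binary space can be made concrete with a quick check. The sizes below are invented but in the usual HTM-style sparse regime: randomly chosen SDRs have essentially no overlap, so nothing in the codes themselves marks two consecutive sequence elements as "adjacent" on any manifold.

```python
import random

# Randomly chosen SDRs: 40 active cells out of 2048 (sizes are arbitrary,
# just chosen to be in a sparse regime). Consecutive elements of a
# "sequence" overlap no more than unrelated ones, so there is no
# neighborhood structure to read off the patterns.
random.seed(0)
N, W = 2048, 40

def random_sdr():
    # one SDR = a random set of W active cells out of N
    return frozenset(random.sample(range(N), W))

sequence = [random_sdr() for _ in range(5)]

consecutive_overlap = len(sequence[0] & sequence[1])
unrelated_overlap = len(sequence[0] & sequence[4])
# Expected overlap of two random SDRs is W*W/N, under one bit here.
print(consecutive_overlap, unrelated_overlap)
```

Both overlaps come out near zero, which is the "bunch of disjoint points" picture: adjacency in time is invisible in the bit patterns themselves.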
Is it that, when you're drawing these individual neurons firing and saying there's nothing to separate them... well, there's not physically, but the activations themselves, you know, themselves create a virtual network that converges onto one, or onto a smaller number of neurons. So I think part of the problem, when you think of the manifold because of the screen, is that the manifold kind of comes out of it later.
It's just straight geometry, you know; you just cut the space with some kind of hyperplane. But if you think of it in terms of a graph, these things are separable. They are not physically; I mean, you can't disentangle them, or you can only align them in aggregate, but yeah. But yes, this is the same as an SDR; I say all these cells come on together, right? But you could, I mean, part of the thing is we're trying to imagine this by looking at it as a manifold.
...a graph there. So maybe, but in my eye, at the moment at least, we haven't thought about it that way, and I don't know if it's true. When I think about these points, these SDRs, I think of them literally as almost random. I mean, the way we did the sequence memory, they were to a large extent random; they were chosen out of columns. And therefore there seems to be nothing you could do by looking at those patterns, at different SDRs, to know... there's nothing.
So that might work, but my caution about that, and I hope my years of work on this say something, is that when you just start to say, okay, we don't need to worry about how the neurons are doing this, let's think about it from some mathematical perspective... I think you have to think about the neurons. You have to visualize that; we have to be understanding what they are telling us.
Yeah, what about the temporal aspect of all of this? The fact that these representations, these SDRs... well, it's like they're stable in time; you think of them as, overall, these representations map to those representations, if you want. But in both of those places, right, activity is continually evolving, even while a certain percept is constant.
There are going to be parts that are going to be very nicely stable, because they respond to, you know, maybe oriented receptive fields and whatnot, and since their inputs are stable, they're going to be stable. What I'm talking about is not the lower sensory areas, where you will see a lot of that stability when you fix the sensory input. What I'm talking about is the higher areas, like prefrontal cortex in a working memory task, where the activity is not stable.
I'm building up these models, both permanent and temporary models of the world. That's what we really do most of the time. I mean, just looking at a car over and over again, I already learned that car. So basically the brain wants to attend to the new things that I have not already learned as a group, so I can build this temporary representation of what's going on right now.
So in those situations, I would see changes in prefrontal cortex almost as rapid as the changes in V1, and that would be the normal situation. But I'd have to then say: now here's a picture, a famous photograph, you know, and I want you to scan over it and look at it, attend to any particular part, I mean, right?
We do see very stable firing in delay-period working memory, sustained working memory, under a bunch of very particular conditions: like, one item, with pointer-directed attention, so no intervening task. So I agree with that, right? There are lots of other things that make it, you know, complicated, but...
Even despite all that complication, when you run something like this thing I mentioned, where you really want to look for the memory code: like, I've shown you two objects, right, two items, and your task is, for some reason, because of the task, to remember them. The interesting thing is I can read out that object identity even though there is no stable firing of, you know, this fixed configuration of cells.
I've always been impressed that at times you can catch yourself doing this: you have this perception that you're always perceiving things, but in reality, often when you're thinking about something, you do all these things in your life without ever being conscious of them. Like, I'll pick up, you know, something on the counter while I'm doing this, and really, if I ask myself... oh.
...always in the context of invariance and equivariance, and I think that's been interesting with respect to what the representation of an object is. And yeah, it's fair to say it's, you know, not one thing; it's very likely that it's many things in there. You know, somehow the brain makes use of these two covariant representations, yeah, sure. But I also wanted to point out that even those particular points on that manifold, even those are not really ever static, right? So...
I mean, it would be really interesting to see if, under all the right controls, for example, there's no novelty and it's pure repetition, there's no change in the presentation, whether the representations would, from trial to trial, be different at different times. Our theory says maybe it's the same representation...
...hopefully even the same SDR, you know. If they were different, that would be the interesting observation: whether they're not always the same, whether things might change under a controlled situation. If that were really true, that would be interesting; that's what I would look at at this point.
Let's talk about the noise that has meaning and the noise that doesn't. There is the noise that we observe just because of our inability to properly control things, and because we can't jump back in time, so we can't exactly repeat a trial, except in computational neuroscience. There we can, because computers can be set to the same state, right? In biology we never can.
It's a little uncomfortable that there's not evidence for it; maybe we're still right about it. The final point I was making was that, computationally, it's sort of a weak thing to vote on. The one way we actually do use the object identity in our models is that it's the thing all the cortical columns can agree on; they can help each other by voting on it. Now, I've made this case for multiple years.
I'll bring up a figure from 2017. Here I showed that when you're voting on object identity... here I'm showing two little toy objects with five features. This one is feature A, feature A, feature A, feature A, feature A; this one is a similar object: feature A, feature B, feature C, and so on. If I take five sensors, like a hand, put it down on this object, and all they're doing is voting on an object identity...
This hand will still think it might be sensing this other object, because it's not doing any sort of... all it's doing is voting on object identity; it has no location information. If, instead, they were voting on something covariant, like an object at a certain pose, an object at a certain location relative to a sensor, that's much more powerful. With this one touch, it senses it immediately, sort of like what you experience when you touch things with your hand: you sense them immediately.
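The toy-objects argument can be reconstructed as a tiny voting example. The object names and feature layouts below are invented to match the description: columns that vote only on object identity cannot tell the two objects apart, while columns voting on a covariant hypothesis, the object plus where on it their feature sits, settle it in one touch.

```python
# Hypothetical reconstruction of the voting figure: two toy objects,
# each with a feature at five locations (layouts invented).
objects = {
    "obj1": ["A", "A", "A", "A", "A"],
    "obj2": ["A", "B", "C", "A", "A"],
}

sensed = ["A", "A", "A", "A", "A"]   # five sensors land on obj1

# Identity-only voting: each column keeps every object that contains its
# feature anywhere, then the columns intersect their candidate sets.
identity_votes = [
    {name for name, feats in objects.items() if s in feats} for s in sensed
]
identity_result = set.intersection(*identity_votes)
print(identity_result)   # ambiguous: both objects survive the vote

# Covariant voting: each column votes on (object, location-of-my-feature),
# so it demands its feature at its own relative location.
covariant_votes = [
    {name for name, feats in objects.items() if feats[i] == s}
    for i, s in enumerate(sensed)
]
covariant_result = set.intersection(*covariant_votes)
print(covariant_result)  # disambiguated in a single touch
```

The identity-only vote stays ambiguous because every sensed feature occurs on both objects; adding the covariant location information is what makes the single-touch inference work.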
And when I, like, when I talk about a marker or whatever, the point is that it's like: I'm sensing a marker, and the marker is right here. That second piece of information is the important part. Yes, you might have a canonical orientation; that's probably a part of this.
It's like, you know, the pose... the distance to me doesn't seem to be important; it just, like, disappears somehow when I attend to that. And so maybe we could stick to just the pose part, but even that is still a very, very difficult thing going on, because there are too many poses. So, okay.
Think of the sensor relative to a specific object. This is our Columns Plus model; this is this layer, and this is something I coded a couple of years ago. So this is the location, this is the location of the sensor, this is multiple grid cell modules, and I had some fun with the visualization.
In the visualization I called it the claustrum, because the claustrum projects to layer six of multiple areas, but I was just making stuff up; who knows.
The point is, part of me thought that I was eventually going to take this and insert it up here and get rid of the object ID, but for now I was putting it somewhere else...
...just so I could present the idea to everyone. And so the way I had it happen was they voted on the location of the object in another reference frame, the body's reference frame, or the way I phrased it here, the location of the body relative to a specific object, similar to how grid cells track the location of your body relative...
...to an environment: it's your location relative to it. So it's the distance, and when you're talking about distance, you're converting it, you're converting to saying, like, okay, this is this far away, and then you have this X-Y component that you still need to represent. It's all six dimensions, but "pose" usually covers that, yeah. In a lot of computer vision, "pose" usually refers to location and orientation; that's how Jeff Lichtman uses it, for example. And so yes, we might divide these up differently and handle them in different ones.