Description
For details and discussion, go to https://discourse.numenta.org/t/connecting-hintons-capsules-to-numenta-research/6160
This morning, Marcus is planning on discussing capsules on the whiteboard, connecting them to our work.
Here are 3 Hinton capsules papers and 1 talk.
2011 Paper: http://www.cs.toronto.edu/~hinton/absps/transauto6.pdf
2017 Paper: http://www.cs.toronto.edu/~hinton/absps/DynamicRouting.pdf
2018 Paper: http://www.cs.toronto.edu/~hinton/absps/EMcapsules.pdf
2014 Talk: https://www.youtube.com/watch?v=rTawFwUvnLE
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
So anyway, it's sort of a checkpoint meeting, sort of a conversation promoter, and we'll see what happens. I mean, there's no big new revelation here. There are some nice parallels that I can draw, and some nice things I can point out where there are subtle differences that are interesting. So, yeah, the capsules. There are these three papers that have come out, or at least three papers from Hinton's group about them.
C
So capsules: I'll just go into what a capsule is. Briefly, it's a conceptually simple thing that leaves a lot unconstrained. Every paper does a new take on them, and each notes some of its differences from the previous one; a capsule isn't a very constrained thing right now. In these papers a capsule generally has 8 to 16 cells, and Hinton has specifically said that it may line up with the minicolumn.
C
He thinks of it as possibly one minicolumn. A typical layer of cells consists of something like 32 capsules; they changed it up in different networks, but that's a pretty typical number, and similarly with the 8 to 16. The point is, this is just giving you ballpark estimates of these numbers. So, a capsule network: in about 30 seconds I'll jump in and talk about what is inside a capsule, but just briefly, a capsule network.
C
It's very much the same as how I've sometimes pitched the idea that, in our model, the layer 4 / layer 6 network is learning these primitive features, and learning them really well, learning what they look like from every viewing angle, and then the remainder of the network is arranging those features into compositions. Capsules are arranged pretty much like that, which is an interesting point, because some of Hinton's older papers tried to get away from that, but with capsules he kind of jumps back into it.
C
So there's a set of interesting discussion points to be had there. But okay, a capsule: what is happening inside one of these little squares? I drew four cells here, but sometimes it's 8, sometimes 16. A capsule represents a feature and what they call instantiation parameters, and that's a broad term.
C
It's essentially about deforming the feature into something, and in some of their networks this is the kind of thing that pops out. It changes with the setup of the experiment and the setup of the network, but in general a capsule represents some thing that's parameterized. It may be just that thing's location and pose, or you can think of old models like geons.
C
You can imagine having these cylinders that can be warped into arbitrary shapes, like handles of coffee cups, and you can imagine the instantiation parameters being where all of that happens. So it's a flexible idea that can become different things based on what's plugged into it, what you insert into it, how it's trained. So, one thing about how these are trained, one of the discussions I kind of want to bring up, because it shows where they went in a slightly different direction
C
than I think we would have, if we were building out this theory. The first paper, the first time they trained these capsules:
C
They essentially fed in location information. They passed in, essentially, "when the object is at this sensor location, here's what you sense; when the object is right here, here's what you see," and trained the network on that. It kind of gave it that pose information for free, and they justified it back then, in that paper. And by the way, as I noted here, it incorporated movement into learning; it didn't incorporate movement into inference.
C
Well, that's just how they justified giving it this for free; that was their motivation for it. The motivation was that actual living beings do have movement information. They do have location information; they know how they move around. So in the first paper they did that unapologetically. In later papers they were proud of the fact that they'd gotten rid of it, proud of the fact that they were no longer doing that.
C
It's like the capsules kind of moved from representing things like this to representing things like this: rather than representing what happens as I move around, they represent more like warping features into different shapes, which is, I don't know, an interesting thing to point out. And in the subsequent papers they were...
E
If I'm reading that right: the whole concept behind our grid cells work is that you have a complex mechanism that you need in order to represent location, and it has to be based on movement; all these things are necessary. So it sounds like they didn't include that in this. They were just sort of assuming they have this information and going from there. Is that right?
E
That's what I'd expect out of Hinton. He doesn't really do these neural models, and they're not really accurate neural models. So when you said he's using these minicolumns, I slipped into thinking about minicolumns, and yet that's not really correct at all. He just said: okay, we're going to have a bunch of variables represent these things, and that's fine; call them neurons. It just confused me to frame all the stuff in that way.
C
In the second and third papers, these capsules basically used a form of weight sharing: they did these convolutional capsules, where this one square symbolizes a single capsule, but it was applied over a limited part of the image, the input. So the capsule was actually replicated however many times, twenty times or something like that, using the same filters, using weight sharing, basically. So they did bring convolution back into this in the second paper.
C
Yeah, so the way they do this in these papers is that at the top they put a final array of capsules that you could think of as grandmother capsules, really. It's just sort of like having one cell for each learned object, except now it's one capsule per learned object, and that's just what they do at the top of the network.
C
The rest of it is just this training process that learns whatever it learns, and that's still kind of magical. But yeah, at the top they definitely have what I called grandmother capsules, and those are fully connected. So it's kind of convolutional going up, with both aspects of convolution: the receptive fields are getting bigger, and there's weight sharing. Okay, I'll talk about this part. So, one term that was new to me when I first jumped into capsule stuff:
C
It was the word "routing," and I can talk about what it is and why they call it that. So, each level of these capsules is arranging parts into wholes. You can think of it like: parts here might be, you know, pen caps and a marker cap, and then: oh, there's a marker. Given the fact that I see a marker cap right here,
C
there is a marker right here. And you could come up with other examples where there's ambiguity, like: I see a chair right here, therefore there's this conference room right here, or there's the other conference room. That comes up when you talk about environments. You can describe it in language like: I see a chair here, therefore I'm in this room, versus I see a chair, therefore I'm in that room.
C
But the point is, there's ambiguity between "I see a chair" and "now I want to use that to figure out where I am." The term routing comes from the way they phrased it; it's not the way we phrased it, but it might be the best way, who knows. This capsule has figured out that there's a chair right here; now it needs to decide where to send its output. Does it send that output to the conference-room capsule?
C
So it's deciding where to send its output: do I vote for this conference room, or do I vote for that conference room? The way they describe this is that the capsule figures out where to send its output, and they call that routing; but that really is the process of narrowing down the parent objects. Yeah.
C
So, to do this, they have a set of, you could call them cells, that are gating these various connections. I drew just a couple of example ones: here's a cell, here's a cell. And this process is called, yes, routing, and it's a sort of dynamic, recurrent, back-and-forth process of: all right, I see a set of object parts, and those are going to vote on objects. Like, I've seen eyes, so they're going to vote that there's a face; I see a pen cap, so that's going to vote that there's a marker there.
C
Well, I would say the whole network is set up to do object composition; routing is the act of inferring how these parts fit into the wholes, into the compositional objects. So yeah, it fits into the mechanism for object composition. So these vote on the things up here; to the extent that they agree, these cells let the input through, and to the extent they disagree, these cells inhibit the inputs. Then another round of voting occurs, and it goes through this recurrent process.
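That vote / agree / inhibit loop is, in the 2017 paper, the dynamic routing procedure. Here is a minimal NumPy sketch of the idea; the names (`u_hat`, `route`) and shapes are my own shorthand, not code from the paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    # Shrink a vector so its length lies in [0, 1); length encodes presence probability.
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

def route(u_hat, n_iters=3):
    """Routing-by-agreement. u_hat[i, j, :] is the vote from lower capsule i
    to parent capsule j. Returns one output vector per parent."""
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over parents
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum of votes
        v = squash(s)                                         # parent outputs
        b += (u_hat * v[None]).sum(axis=-1)                   # agreement strengthens the route
    return v
```

Parents whose incoming votes agree end up with long output vectors; parents whose votes cancel end up with short ones, which is the "inhibit the inputs" effect described above.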
D
Yeah, the basic point is it's a settling process. In the chair analogy, the chair could belong to conference room A or B; you don't really know until you look at all of the other chairs, and all of the other objects, and see what both rooms have. So this is like a k-means process, or an expectation-maximization process: just a clustering / settling process where each room tries to explain the various chairs based on, you know, its model.
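That clustering-style settling can be illustrated with a toy EM-style loop, where two hypothetical "rooms" compete to explain observed "chairs" at positions on a line. The numbers and the 1-D setup are invented for illustration:

```python
import numpy as np

# Observed "chairs" (part observations) at 1-D locations, and two candidate
# "rooms" (parent hypotheses) with initial guesses for their centers.
chairs = np.array([0.9, 1.1, 5.0, 5.2])
rooms = np.array([0.0, 4.0])

for _ in range(5):  # EM-style settling
    # E-step: responsibility of each room for each chair (soft assignment).
    d = (chairs[:, None] - rooms[None, :]) ** 2
    r = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    # M-step: each room re-estimates its center from the chairs it explains.
    rooms = (r * chairs[:, None]).sum(axis=0) / r.sum(axis=0)
```

After a few iterations each room has "claimed" the chairs it explains best, which is the commitment behavior discussed later in the conversation.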
C
Well, what this directly corresponds to in our model is that these cells right here that are doing the gating play the same role as what we sometimes put in layer 5, the displacement cells. Basically, as you recognize a compositional object, a narrowing occurs in layer 5.
C
I see a chair; I remember all the places where I've seen that chair, and that activates the union of displacements. Now, I see other things in the room as well, and those kind of settle: oh, I think, oh, I'm in this room. So I guess the answer to your question is that our voting mechanism, the thing we have that happens in layers 2/3, is a form of this. What's different here is that this is not voting just on object identity; it's also voting on the object's pose.
C
And in our world, that's like displacements themselves: you learn the transformation between one reference frame and another. So then, once you know where you are in one of those reference frames, you can figure out where you are in the other.
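That transformation-between-frames idea can be sketched in 2-D, where a pose is a rotation plus a translation; the frames and numbers here are invented for illustration:

```python
import numpy as np

def rot(theta):
    # 2-D rotation matrix.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Pose of an object in the room frame, and of a feature in the object frame.
R_obj, t_obj = rot(0.5), np.array([2.0, 3.0])
R_feat, t_feat = rot(0.2), np.array([0.4, 0.0])

# Composing the two transforms gives the feature's pose in the room frame:
# knowing where you are in one frame tells you where you are in the other.
R_room = R_obj @ R_feat
t_room = R_obj @ t_feat + t_obj
```

The composed transform agrees with applying the two transforms one after the other, which is what lets a learned displacement carry a location from one reference frame into another.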
C
Yeah, I'll talk about this; I'll put that up in a minute. Okay, so within the capsule: Jeff asked how 16 cells can represent a pose, etc. What exactly is going on inside the capsule changed with each paper, which is just to say that this is very much an open area for research. I wrote here: instantiation parameters, representation, and interaction. So what the cells are and how they interact is an open research area; it changes with every paper.
C
So in the initial paper it was like: there's a cell that indicates, you know, is the feature present, is there a coffee cup at all, and then there are nine cells that represent the orientation of the cup. For this paper they fed in that orientation, and it was fed in as nine numbers; you can call them cells, call them numbers, and that was basically how they made it work in that paper. In 2017 they went to sort of a different extreme.
C
They just give it 16 cells, which is enough cells to represent a four-by-four matrix, but they didn't do anything to make it a matrix. Each of these capsules has 16 cells, and it's going to learn to use them however it learns to use them. And the clever thing they did here was to no longer have a separate special cell: they no longer have the "present" cell.
C
So it's like: if there's absolutely a coffee cup there, then the sum of the squares of these cells is 1, and otherwise it's a shorter vector, if the object is less likely to be present. And in this one, you know, so what happens when this module talks to this module?
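The length-as-probability trick from the 2017 paper (their "squash" nonlinearity) can be checked numerically. This is a sketch of that function, not their exact training code:

```python
import numpy as np

def squash(s, eps=1e-9):
    # Scale a capsule's output vector so its length lies in [0, 1):
    # long inputs -> length near 1 (feature almost certainly present),
    # short inputs -> length near 0 (feature likely absent).
    # The direction, which carries the instantiation parameters, is preserved.
    sq = np.sum(s ** 2)
    return (sq / (1.0 + sq)) * s / np.sqrt(sq + eps)

strong = squash(np.array([3.0, 4.0, 0.0, 0.0]))  # confident detection
weak = squash(np.array([0.1, 0.0, 0.0, 0.0]))    # weak detection
```

So presence probability and pose share the same 16 numbers: the length answers "is it there?", the direction answers "how is it configured?".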
C
They then went and constrained it more. They got away from the clever idea of using the vector-length trick; they found that it was making the routing not work as well as they wanted it to. So they went back to having a special cell that encodes the probability that the coffee cup is there, and they still use these 16 cells that are connected to each other, but they constrained it more.
C
Such that, well, consider what happens in matrix multiplication. Say you have these four cells right here, and these input cells over here. Matrix multiplication, for this cell, involves taking these weights and these specific cells and multiplying the weights by the activity of those cells. The point of this is that this cell has no concern about those other weights.
C
They didn't feed in matrices, but they made it so that the connectivity of these is essentially a matrix multiplication. What's interesting is that that means there's some weight sharing between these cells: this cell and this cell are going to share the same set of weights, applied to two different inputs, which sounds magical if you think in terms of cells, but then they're...
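The matrix-multiplication wiring described here, where each lower capsule's pose is multiplied by a learned matrix to produce a vote for each parent, might be sketched like this. The shapes and names are my guesses at the 2018 setup, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: each lower capsule i holds a 4x4 pose matrix poses[i],
# and a learned transform W[i, j] maps it to a vote for parent capsule j:
#   votes[i, j] = poses[i] @ W[i, j]
# The W entries are weights (learned, fixed at inference); the poses are activities.
n_in, n_out = 3, 2
poses = rng.standard_normal((n_in, 4, 4))
W = rng.standard_normal((n_in, n_out, 4, 4))

# One vote per (part, parent) pair, computed as a batched matrix product.
votes = np.einsum('iab,ijbc->ijac', poses, W)
```

The "weight sharing between cells" remark corresponds to every row of `poses[i]` being multiplied by the same matrix `W[i, j]`: the 16 output cells of a vote reuse one set of weights across different slices of the input.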
C
The type of thing that we would concern ourselves with is bringing to the table things like actual neural representations of location, actual neural representations of orientation, and then: if you use these biologically plausible forms of these representations, how does that change the network, the mechanism for performing the transformations? So yeah, they're living in this world where they're just playing around with stuff, and we might be able to bring something to the table involving other representations of these things.
D
One fundamental difference, if I remember correctly, is that they can't really represent unions. They have to make a commitment as to what it is, and so this settling process, if there's any ambiguity between two objects, is still going to pick the most likely one and settle on it, so that subsequent layers just have to live with that inference, whereas I think we can...
D
We can do a better job representing ambiguity and uncertainty through unions, and that, in turn, may make our column voting and settling process much faster as well, because the neurons can represent unions and look at intersections. I think that's maybe one way the stuff we've worked out is more powerful.
C
On that part you're definitely right: they let this part settle, then they lock that in, then they figure out this part. They have no back and forth. Yeah, knowing what's going on up here doesn't help the lower layers.
C
Yeah, the feedback only goes down to these cells; it doesn't go down to those cells, because of all the gating, basically. Okay, last part. One interesting difference, and this is debatable, I have some green text in parentheses that gives the alternate view, but in capsules, and this is also a way Hinton changed over the years, it seems the location of the part relative to the whole, the location of the coffee cup handle relative to the cup, is stored in the weights of this network, but there are never any...
C
There are never cells representing that. There are never cells representing the location of the handle in the space of the cup, or the location of the logo in the space of the cup. These layers are representing, like: suppose this is the cup, and these are its parts, the cylinder, the handle, the logo. This layer is just representing where the cylinder, the logo, and the handle are.
C
You know, it's relative to the viewer: it's the spatial relationship between the viewer and that feature, and this is the spatial relationship between the cup as a whole and the viewer. And there's nothing in between; there's no neural population representing where the handle is relative to the cup.
C
It's represented, quote-unquote, in weights; it's represented in synapses; it's learned. But there are never cells representing it, whereas we have our displacement cells, which do represent it. And that's just an interesting thing to note, since early Hinton papers emphasized the importance of representing object-based features and then learning objects as arrangements of those. But this network never represents those relations in activity: it is inferring them, but it's only putting them in the weights.
C
On the other hand, these cells that are performing the gating are sort of representing that. They're representing, like: is this the handle of a coffee cup, or is this the handle of a briefcase? And in that sense they are kind of representing it, by figuring out which of these arrows is the correct one. But it's a little different; it's like turning something on and off, rather than representing it in its fullness.
C
So I guess, yeah, the things I wanted to bring up were just that we can map these to various versions of our models. One interesting thing was that the bottom level is much different from what the rest of the system is doing. Who knows, maybe somehow you could take these multiple levels of capsules and make it more of a recurrent process.
E
A very high-level question, and maybe anyone can answer this. You know, our whole work, my whole life, is sort of figuring out how it is the brain learns its sensorimotor model. The movements are just incredibly important to how we learn, and so, for example, the whole grid cell representation allows you to plan movements and execute movements.
E
You know, it's a model that's involved in motor behavior, and I'm trying to understand where they're coming from with this. Are they trying to solve that same problem? Clearly they're trying to do better at sort of flash inference, and they're trying to include locations to do better at that. But do they recognize they need to go to this full sensorimotor model, and is this on the path there? Is that where they're headed with this? I guess that's my question.
C
I don't think they're putting this big emphasis on movement. They're not putting that in there, but they're trying to capture more of the realities of the world, really the structure of the world: the fact that, you know, objects are composed of parts.
C
Yes, well, I would say it's more about describing what's in a scene. It's more about taking an image, they're starting with images right now, and saying: okay, there is this object right here, and there's that object right there. They can actually describe the scene in terms of a set of capsules, rather than just outputting something big like "there's a cat."
E
Even though we're not moving, the world is moving. So are they trying to get to the same end goal we're trying to get to? If so, the more the merrier; should we collaborate with them or work with them? I'm just struggling with that.
E
You know, it just seems like, you know, movement through the world, our bodies, and, for example, how this applies to somatosensory inference with fingers and hands and things like that: do they talk about that at all? There's a real sort of mismatch; they're taking this image, always taking an image.
D
D
What they're saying is that by representing these relative poses and relative positions of things, your end network is going to be much more robust to changes. So if you rotate things in the image, for example, a lot of existing networks will break, but this one won't, because it isn't married to that. You know, if you scale the images to be larger or smaller, this one won't break, because it knows how things change as you change scale. You don't have to explicitly train it on that.
E
That's based on grid cells, which is a really beautiful concept that Nature has come up with, and that's intimately tied to movement. And then Nick can explain how you can grasp something with your hand, which is not like a picture at all, and understand it. And I guess, you know, I'm just trying to decide how much time to invest in this. Do we go, like, "hey Geoff, can we come visit you and talk about our work"? And, you know, how receptive would they be to these ideas?
E
It's funny you say that, because with the capsules they took a very theoretical approach, which initially, one could argue, did not directly move the needle on, say, object recognition; I mean, capsules are not winning benchmarks. It seems to me that they are taking a more or less theory-driven approach to this.
E
To me, you know, so much of what we do comes from a neuroscience point of view. Again, I want to be clear I'm not taking a stand here; I'm just talking about the theory side of our work, from the neuroscience point of view. We have to decide whether we really want to, you know, collaborate or get involved with these people more, or whatever, and I don't have a good sense for that. I don't know whether it'd be a waste of time or not.
E
We're constrained by the neurobiology, and almost none of these people really know the neurobiology at all; it doesn't use much neurobiology at all. And so, you know, in my mind this is not just another made-up theory: these are highly constrained theories that we've proposed, and therefore we have high confidence that they're correct.
E
Whereas if you just make things up, you know, any mechanism might do something. Yeah, I can see where they come from, from that perspective. So, you know, my impression from my past interactions with Geoff is that he really didn't care about the neuroscience and didn't know much about it, and that's his prerogative. That's my impression from the past.
E
He didn't seem to express much knowledge about it, but also not much interest in it. And so, until the capsule things came along, I just sort of said: yeah, okay, more deep learning stuff; it's not going to help us understand how the brain works. Now you've got this sort of intermediate representation here, which is much closer to what we're doing, and so I said, well, maybe things have changed. Maybe there is some common ground; maybe we can learn from them.
C
I don't know; he didn't talk as if he'd heard of us. Okay, well, when I was summarizing this, I left out one thing I meant to summarize, which kind of goes to that question. I think one thing we've done that they wouldn't: they got away from feeding in locations to the bottom-most layers, whereas we would not try to get away from that.