From YouTube: Seeing is Believing Part 1 (NRM Feb 19, 2020)
Description
This research meeting was split into two parts. This is the 2nd part of the research meeting, but the 1st part of Aries' talk.
First part of this research meeting: https://youtu.be/3fdl9O7WTHM
Second half of Aries' talk: https://youtu.be/3THc5dN-2Wg
Discuss at: https://discourse.numenta.org/t/numenta-research-meeting-feb-19-2020-part-2/7239
Paper: https://www.nature.com/articles/nn.4385
It creates — and it has access to — let's say, yes, we have like an internal world, extra space, this internal space. And let's say there's something interesting in the external world and we want to interface with it, right. So we can use our eyes, or noses, or sense of touch and all these things to feed back to the brain. Wait, let me put it this way: these things are our reach to the outside world, and this person also has access to its own internal states. So how does it do this? How are we, you know, able to do anything at all, right?
So the prevailing theory — I mean, at least experimentally, it goes back at least to the 50s, to Hubel and Wiesel's work — is that we have this little Lego-block model, which I'm sure all of you are familiar with: this idea that, okay, let's say there's a hierarchy in cortex, and that, for example, the primary visual cortex just looks at a little thing.
It has like a little receptive field somewhere, and it really likes, say, a little bar or edge in one direction or another direction. Then you pool a number of these cells and you can get a more complex representation: when these two cells are active you get like a little angle, and now the line will look like something; and then you can just arbitrarily superimpose these things and you get more complex shapes, and then, you know, it goes on like this.
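As a rough illustration of that Lego-block picture (my own toy example, not anything from the talk or the paper), here is a minimal Python sketch in which two hypothetical oriented "simple cells" feed a downstream unit that only responds to their conjunction, i.e. one pooling step of the classic hierarchy:

    import numpy as np

    # Two toy "simple cell" filters: a vertical and a horizontal 3x3 edge detector.
    vertical = np.array([[-1, 0, 1],
                         [-1, 0, 1],
                         [-1, 0, 1]], dtype=float)
    horizontal = vertical.T

    def simple_cell_response(patch, rf):
        """Rectified dot product of an image patch with a receptive field."""
        return max(0.0, float(np.sum(patch * rf)))

    def corner_cell_response(patch):
        """Toy downstream unit: active only when both oriented cells are active,
        i.e. it signals a conjunction (a corner-like feature)."""
        v = simple_cell_response(patch, vertical)
        h = simple_cell_response(patch, horizontal)
        return min(v, h)

    # A patch containing a corner (bright lower-right quadrant).
    patch = np.zeros((3, 3))
    patch[1:, 1:] = 1.0
    print(corner_cell_response(patch))      # > 0: both orientations present
    print(corner_cell_response(np.eye(3)))  # zero: no such conjunction here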
We have things like the fusiform face area, where cells respond to faces — there are also cells for tools and stuff. But the thing is, a lot — actually the majority, I would say — of experimental data on how we seem to behave in the world isn't really consistent with this idea. We seem to fill in a lot of things that are not really there in the outside world, and I don't think you can explain that just by saying, oh, this is...
So — and one example of this is sensory illusions, visual illusions being the most compelling. So maybe there's some truth to this, but what about going the other way? A more efficient way to do this would be to say: okay, we have this knowledge about the world — how do we compare it to the sort of fine-grained detail we get from the world? So this...
So this also means that we detect patterns where there aren't necessarily any, and this, I think, is a very basic cornerstone of how we work as a species. As many of you know, one of the founding fathers of the idea of mental models is Kenneth Craik, who was a contemporary of Turing, and he detailed this idea of carrying around a small-scale model of reality that you can use sort of like a little physics model.
So let's see how we might implement this. It's all a lovely idea, but can we look at a sort of model of how this could work? One old notion is the notion of efference copies. This comes from engineering, and it's an old idea in neuroscience as well. That's what allows me to do things like close my eyes and reach for my water bottle...
...without, you know, having the sensory feedback — so I need to have a 3D model of the world to do this. A simple way you can do this is just a basic control-theory scheme: you produce a movement, and that movement generates sensory feedback, but you also have an efference copy. This is basically just a copy of the motor output, but the difference is that it's transformed into the coordinates of the sensory input — otherwise it won't work. Then you can compare those two and get a little error if there's a mismatch, and you can use that error to update your output online, or you can use it to learn something about the environment.
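Here is a minimal sketch of that control-theory loop, with a made-up linear forward model just to show the structure: the efference copy is the motor command pushed through the forward model into sensory coordinates, and the error is its mismatch with the actual feedback. The matrix F and the toy world function are assumptions for illustration only:

    import numpy as np

    # Hypothetical dimensions: 2-D motor command, 3-D sensory space.
    F = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])          # forward model: motor -> predicted sensation

    def step(motor_cmd, world):
        """One cycle of the comparator: act, predict, sense, compare."""
        efference_copy = F @ motor_cmd       # prediction in sensory coordinates
        sensation = world(motor_cmd)         # actual sensory feedback
        error = sensation - efference_copy   # mismatch signal
        return efference_copy, sensation, error

    # Toy "world": behaves like the forward model plus a small perturbation.
    def world(motor_cmd):
        return F @ motor_cmd + np.array([0.0, 0.1, 0.0])

    pred, sens, err = step(np.array([1.0, -0.5]), world)
    print(err)   # nonzero only where the world deviated from the prediction;
                 # this error can drive an online correction or a model update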
Right — so let's say you're looking at V1. Yeah, right, so we're looking at V1. Are the predictions coming into V1 only distally, from other cortical areas, or are they generated in V1 itself? Yeah, sorry. And then, of course, the other component is error signals. So these should be — because neurons can't take the absolute value of something, at least that's not known to be a computation they can do —
you need to have positive and negative types, meaning: is your prediction stronger than your sensory input, or is your sensory input stronger? Because otherwise you'd just have, like, zero everywhere. And ideally they should be directional. The idea is that it shouldn't just be "oh, there's an error here, deal with it" — the error should tell you what part of your prediction is wrong. Like: that word was wrong because it's referring to a color and it should refer to this, rather than just "that was the wrong word." I wouldn't call those vectors so much as semantic parts: which parts of the representation are incorrect.
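To make the "positive versus negative, and directional" point concrete, here is a small sketch (my own illustration, not the speaker's model) in which prediction and input are vectors over features and the mismatch is split into two rectified, element-wise error populations, so both the sign and the offending feature are preserved:

    import numpy as np

    def prediction_errors(prediction, sensory_input):
        """Split the mismatch into two non-negative populations, element-wise,
        since a single neuron can't signal an absolute difference directly."""
        prediction = np.asarray(prediction, dtype=float)
        sensory_input = np.asarray(sensory_input, dtype=float)
        positive_err = np.maximum(sensory_input - prediction, 0.0)  # input > prediction
        negative_err = np.maximum(prediction - sensory_input, 0.0)  # prediction > input
        return positive_err, negative_err

    # Toy feature vector (e.g. responses to a few stimulus attributes).
    pred = np.array([0.8, 0.2, 0.5])
    inp  = np.array([0.3, 0.2, 0.9])
    pos, neg = prediction_errors(pred, inp)
    print(pos)  # [0.  0.  0.4] -> feature 2 was under-predicted
    print(neg)  # [0.5 0.  0. ] -> feature 0 was over-predicted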
Responses — there's a population response, which is the feedback mismatch — and this is not explained just by the visual halt itself. If the visual flow isn't coupled to running — it's a playback running at, like, a different speed, randomly — and this halt happens, you don't really see anything at all. That's called the playback halt; it's the playback condition.
So what my PhD project was, was to take this idea and sort of take it to a different domain where you could actually see predictions — because if you're making sensorimotor predictions, the predictions will almost overlap with the input unless you're using very high temporal resolution; you know, they will be concurrent with it, for them to work. So you want something that's slower, where the prediction precedes it — and that, I think, is the notion of space. You're absolutely right — slower, yeah, or noisier, let's say. So you...
So unfortunately I couldn't — I mean, the video was lost — show you a mouse running down the virtual tunnel, but this is what the side of the tunnel looks like. These are textures along the top, and we would have two different visual stimuli — you know, gratings — that would appear once the animal reaches a certain point. So once the animal runs there — you know, it's not massive.
You know, the movement is coupled, right, and then once the animal is here, this grating flashes on and stays there for the rest of the traversal — just because we want to know exactly when it sees it. And these things are all, you know, spatial cues: basically things that hopefully help it know how far along it is.
There's nothing to learn there, there's no — so what we did is we changed the variability of the grating in position five over the experiments. So these were the first four rows — sorry — and then we did the experiment in blocks of two days, and we recorded chronically, so from the same neurons, over eight days, which was the total experiment. So...
No — you could; just remember, we only had Bs. So when you say two sessions, I'm imagining the mouse running down, seeing a B, and then also one time where you don't show it — but that's not what's shown here. So there's a two-day block per condition — condition four, condition five — so did you do two days of condition four and then two days of condition five? No.
What we've done is bin activity to space, right — zero to 100 — and you can look at the traces of two neurons. These are two different neurons on one traversal, and one is the B-preferring neuron — you can see it's orientation-sensitive; the B-preferring neuron's response sort of comes up and then expires, you know.
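A minimal sketch of that spatial binning, assuming you have a fluorescence trace and a position signal sampled together; the function name, bin count, and toy traversal are mine, not from the paper:

    import numpy as np

    def bin_by_position(trace, position, n_bins=100, track_length=100.0):
        """Average a neuron's activity trace into spatial bins along the tunnel
        (position runs from 0 to track_length on each traversal)."""
        edges = np.linspace(0.0, track_length, n_bins + 1)
        bin_idx = np.clip(np.digitize(position, edges) - 1, 0, n_bins - 1)
        binned = np.zeros(n_bins)
        for b in range(n_bins):
            in_bin = bin_idx == b
            binned[b] = trace[in_bin].mean() if in_bin.any() else np.nan
        return binned

    # Toy example: 1000 samples of one traversal.
    position = np.linspace(0, 100, 1000)
    trace = np.exp(-((position - 60.0) ** 2) / 50.0)        # cell "likes" position ~60
    print(np.nanargmax(bin_by_position(trace, position)))   # peaks near bin 60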
So yeah, what we do is we look at, you know, the stack, and then we draw a region of interest around the body of the neuron. We try to deal with the neuropil stuff — maybe automatically — and we're just taking the mean of that over a number of frames, and that's our traces. Then we do some other low-pass filtering stuff on it to remove noise and so on, because of movement, right — there are all kinds of artifacts that can be there, right.
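A rough sketch of that extraction step, assuming the imaging data is a frames-by-height-by-width array and each neuron has a boolean ROI mask; the Butterworth cutoff here is arbitrary and only stands in for the low-pass filtering mentioned above:

    import numpy as np
    from scipy.signal import butter, filtfilt

    def roi_trace(stack, roi_mask, fs=30.0, cutoff_hz=2.0):
        """Mean fluorescence inside an ROI for every frame, then low-pass filtered.

        stack    : array of shape (n_frames, height, width)
        roi_mask : boolean array of shape (height, width) around one cell body
        fs       : imaging frame rate in Hz
        """
        raw = stack[:, roi_mask].mean(axis=1)          # one value per frame
        b, a = butter(2, cutoff_hz / (fs / 2.0))       # 2nd-order Butterworth low-pass
        return filtfilt(b, a, raw)                     # zero-phase filtering

    # Toy usage with random data standing in for an imaging stack.
    stack = np.random.rand(300, 64, 64)
    mask = np.zeros((64, 64), dtype=bool)
    mask[30:34, 30:34] = True
    trace = roi_trace(stack, mask)
    print(trace.shape)   # (300,) -> one filtered value per frame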
In this domain, it's necessary to check whether these predictions are there in the beginning, when the mouse hasn't had time to learn the environment very well. You can do the same analysis — the mean over the conditions, or the mean of one condition — and you don't see these predictive responses in the beginning, which is the sanity check.
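One way to phrase that sanity check in code, purely as an illustration: quantify "predictive" activity as the mean response in the spatial bins just before the grating position and compare an early session with a late one. The array shapes and the index function are assumptions, not the paper's analysis:

    import numpy as np

    def predictive_index(binned_responses, grating_bin, window=10):
        """Mean activity in the spatial bins just ahead of the grating position.
        binned_responses : array (n_trials, n_bins) of position-binned activity
        """
        pre = binned_responses[:, grating_bin - window:grating_bin]
        return pre.mean()

    # Compare the first vs. the last recording day for one chronically tracked cell.
    rng = np.random.default_rng(0)
    early = rng.random((40, 100)) * 0.1
    late = early.copy()
    late[:, 50:60] += 0.5          # pretend a pre-grating ramp appears with learning
    print(predictive_index(early, grating_bin=60))   # low on day 1
    print(predictive_index(late, grating_bin=60))    # higher after learning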
So, okay, cool — we see these things, these predictions, in V1. And so the next follow-up question is: where do they come from, right? There are loads of huge projections into V1 from all kinds of cortical areas, especially from ACC and retrosplenial cortex, and those are, in terms of the number of boutons or even the number of fibers, much bigger than the projections from the LGN. So...
Okay, so just to finish up this part: we see basically the same picture — V1 receives signals, predictive and visual responses. The thing is, the visual response is this one here, and this was the predicted response; as I said, they're never purely onset-responsive after learning — there's a predictive component to them all around — whereas in the beginning you only get sort of visual-like things. This is in condition one; this is condition four, right.
Is there some advantage, like — or is it just the loop? I don't know. But the fact that these become predictive, and that even the visually evoked ones carry a prediction — that part I'm not sure how to explain. Okay, I mean, there are a lot of potentially explanatory things, but I don't have a good clean one, if anyone has one.
So — that it all happened in V1, and stuff like this, is a pattern that links back. This is one that would require — I'm just making the distinction between different types of prediction. This is one that has happened over a long period of time; it's a higher-order sequence that's learned, maybe somewhere else in the brain, and it's leading to a visual prediction, but it's not the visual cortex itself modeling the world. I'm just saying it doesn't seem like our model would directly apply to this situation.
It could be other signals into V1 — there's a general belief, a lot of people don't believe it's learned in V1, right, and we think it happens someplace else — but it's small scale in time and space, and that would qualify here, right. So I'm just trying to map our models onto this and saying: our models only require some projection that we can use to make predictions, but again, we're not trying to model this exactly.
...of that prediction — well, as a good scientist, you have to be cautious, right. So, of course, when we think about where predictions could come from locally, a big one is layer 5, layer 6, and we're like, yeah — you know, this is just one potential source of predictions; you could probably also get a few from feedback projections. So...