From YouTube: Numenta Research Meeting | Apr 26, 2019
Description
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
If I was in this room and I was trying to track where the door was — okay, that's my goal — because, I mean, it can change at any point in time, so there's always a mental image of where the doors might be. And now, if I was facing the door straight ahead and I start turning and walking, the other door is still over there, over there. So it went, like, the other way — the vector to where the doors are, you know.
The vector to the door, when I'm in space — it basically just assumed we keep it the same way, and that seems really obvious. If there's no gravity — say I'm just floating in space, there is no gravity — what should my direction-of-self be relative to the door location? And the answer is — well, I think that's correct — it might just keep whatever you had before, and then it would flip as soon as you have a sense again.
Or which way was the quickest way. So very quickly it goes from "the quickest way is going this way" to "the quickest way is going that way," and it just flipped on me. So it's that sense of what's the quickest direction to make that movement. I just think it's a different way of thinking about it, and it might be okay.
Subutai asked us to talk — we talked about that paper on the right, and then you and I had some time discussing some of the topics of that, and so I just want to go through that, with Subutai here, a little bit more explicitly than we talked about before. Okay, so the whole goal here.
I'm thinking more like: I have a model of the coffee cup, and the location of the thing I'm going to sense is this part of the cup. To get the model of the coffee cup, I need to know the location on the cup. What I actually sense will depend on my distance from that, but I'm thinking the model itself does not incorporate that distance. The model itself is just in the reference frame of the cup, I suppose.
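A minimal sketch of this idea in code — assuming a toy features-at-locations representation (the dictionary, feature names, and lookup helper are all hypothetical, not anything from the discussion): the model stores what to expect at each location in the cup's own reference frame, and the lookup never consults the observer's distance.

```python
# Hypothetical sketch: an object model defined purely in the object's
# own reference frame. Locations are (x, y) points on the cup; the
# observer's distance is NOT part of the model.
cup_model = {
    (0.0, 0.0): "smooth_ceramic",   # side of the cup
    (0.0, 5.0): "rim_edge",         # top rim
    (3.0, 2.0): "handle_curve",     # handle
}

def predicted_feature(model, location_on_object):
    """Look up what we expect to sense at a location on the object.
    The lookup never consults observer position or distance."""
    return model.get(location_on_object, "unknown")
```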
This is — that's the thing I'm doing: this is all on the coffee cup, touching this all over the coffee cup, but while what I know is the location on the coffee cup, what I'm actually going to sense depends on the distance from here as well. So I'm breaking it down a little bit further, and I think that was one of the key things. Now, this is the way — whatever — I think that's part of what this exercise is about: trying to form a fuller definition of the problem.
We said that the way to solve a problem is you define it better and better and better, and eventually your definition defines the answer. So I'm breaking this out, trying to be really explicit about it, especially since I proposed a solution for that. And then we can't forget about the state of the object.
Now, if I'm going to predict what's at this location on the cell phone, I have to know what state the cell phone is in. And, you know, what about color? The color I'm perceiving on an object — the actual input is going to vary depending on the state of the world around me, and there are a bunch of question marks here.
If nothing else, I know I need to incorporate the state of the object, as in my icons-on-the-screen example. So then we have this assumption that this prediction is occurring in layer 4 — in the papers we've all written, the input is coming into layer 4 and that's where the prediction occurs. Marcus and I talked about this, and we talked about the different possibilities about how — where are these five things, where is this information arriving in layer 4 to make these predictions? That's one of the classic examples of what we've been using in our papers.
There is, of course, layer 4 itself, which has connections — we talked about those learning sequences — but you pointed out last time where they say even layer 4 cells could be subdivided into different pools, so maybe there's more than one population of cells here. And what do I think about that, by the way — is it minicolumns? Because minicolumns divide layer 4 into subsets, and —
There are other layers projecting into layer 4 — so, 6a. Typically people say that's only two to five percent of the connections onto a layer 4 cell; layer 4 cells have a lot of local connections. But, you know, I bet if I go look at the Human Brain Project website, I'll see a whole bunch of cell types there. So there are other things coming in here; we don't really know them, and there's a possibility.
I don't know — we opened up the possibility, given the theory, that not all these predictions — not everything we predict — is happening in every column. So I'm just trying to define the problem better here, and one of the reasons I've sort of learned to break out the distance to the object is that I now have a hypothesis that this is one of the things the projection from 6a could be doing. This is not — it's this thing here.
It doesn't really — it has nothing to do with the actual model of the object; it's more or less just a low-level function. The solution here has nothing to do with the exact object I'm viewing, just how far away I am from the object. Whereas what I predict based on my orientation, or what I predict based on my location — those are all highly specific to the model of the object. I could change this without really —
— the same could be said about orientation. I've often argued that, you know, when I change my orientation — my thumb on this — I'm really not taking advantage of the orientation. I could calculate this independently of the object, as long as I know what the feature is; I can just rotate it. So there's a complex set of things here, but one of them —
— is that, you know, we could be given a scale through the thalamus, and that would be both motor scale and sensory scale. Accomplishing this — the distance to the object, or the scale of the object — is really the same thing. So again, that would be the way of handling a small coffee cup and a big coffee cup.
So the point of this exercise here is to move closer to having a fuller definition of the problem. To the extent that we can answer all these questions — exactly the things we need to know, and where are the possible places this information could be coming from — that is essentially the problem statement of: how do you do sensorimotor inference and learning?
So this is just to summarize some of the things we talked about last week. It's almost like nothing new here, but it's good to say it again, put it down in writing again, maybe tweak how I have it, and be very explicit about all the unknowns we have. So that's all I want to say about this topic.
Note that all I'm saying is these are requirements to predict. We know the system can predict, and if it's going to predict, this information must be represented somehow — whether it's learned or whether it's a systemic process. Learned would be, okay, what specific feature is at some location on the object; a systemic process would be something like "that'll change the scale" — I don't need to know the specific feature to change the scale. But the point I'm trying to make is: these are the requirements.
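The learned-versus-systemic split might be sketched like this — a hypothetical toy (the table, names, and coordinates are invented) where the feature-at-location table is learned per object, while rescaling is a uniform transform that needs no knowledge of which feature is where:

```python
def change_scale(location, scale):
    """Systemic process: rescale a location without knowing which
    feature is there. Works uniformly for any object."""
    x, y = location
    return (x * scale, y * scale)

# Learned knowledge: which feature sits at which location (hypothetical).
learned = {(1.0, 2.0): "logo_edge"}

def predict(location, scale):
    """Prediction combines the systemic scale transform with the
    learned feature-at-location mapping."""
    x, y = location
    # Map the sensed location back into the learned model's scale.
    canonical = (x / scale, y / scale)
    return learned.get(canonical, "unknown")
```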
Here's what we think about the circuitry, which is helpful to think about, and then we go into all kinds of crazy ideas about exactly how this could map onto those requirements. You know, the things we've talked about in the past: we have 6b; we also have the whole idea that there was this other sensory input coming into layer 2 and a small portion of layer 5, and I've talked in the past about that maybe establishing minicolumns.
And then Marcus and I talked about the idea that the minicolumns themselves may be the reference vector that's talked about in this paper. And so now you might think of things like this: you might say, oh, I have a grid cell location, I have a set of orientation modules, and then I have a reference vector module, and somehow, between those three things, I have all the information I need. There's a —
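One way to picture the three pieces together — a hedged, made-up sketch (the state layout and update rule are illustrative, not the proposed mechanism): a location phase, an orientation, and a reference vector, where turning updates the orientation and rotates the reference vector but leaves the location phase alone.

```python
import math

def compose_state(location_phase, orientation_rad, reference_vector):
    """Bundle the three hypothetical module states into one representation."""
    return {
        "location": location_phase,
        "orientation": orientation_rad,
        "reference": reference_vector,
    }

def rotate_reference(state, dtheta):
    """Turning by dtheta updates orientation and rotates the reference
    vector, leaving the location phase untouched."""
    theta = state["orientation"] + dtheta
    x, y = state["reference"]
    c, s = math.cos(dtheta), math.sin(dtheta)
    return compose_state(state["location"], theta, (c * x - s * y, s * x + c * y))
```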
The way I like to solve these problems is you define the problem as explicitly and as cleanly and deeply as you possibly can, and then you constrain yourself to these physical pieces and try to make them come together somehow. And as I think about this, and we talk about it, I think it's really important to understand that maybe — maybe, actually — we think this thing is clearly derived from sensory input, but perhaps that's not the case, and perhaps they are actually just changing as you move.
And he did point out that some of these are more active than others; there's another coding that's going on there. So one possibility is that the whole way you represent the unique location is wrong: we've been assuming that there are multiple grid cell modules, and now, when you look at the cells together —
And then we talked about whether — like what this paper does with orientations: basically you take a one-dimensional orientation module with a 2D reference vector, and now you can somehow create a 3D orientation. So the thought was, hey, can we take a 1D module plus a reference vector and turn it into a 3D location? And one of the problems with that — I'm just saying all this to complete it — one of the problems is —
But I think it's an intuition that's been gnawing at me for a long time: that perhaps the way we think about location — as multiple groups of modules — is not correct, and that there may really be just one of these, with an additional coding on top of it, combined with some sort of reference that runs through the whole thing, and that that is somehow sufficient to do what we need to do — because that's more what the anatomy looks like. It doesn't —
Okay, so this predicting of input requires knowing these five things. The first one — this is relying on there being some notion of, like, where am I sensing right now? What part of the coffee cup am I sensing? So, a notion of what's the current location. That is one way to do this, and it might be right, but I don't think we've ruled out the other approach, which doesn't ever represent this. The other approach is a little bit more inspired by — well, it's a little bit more similar to computer vision.
Everything is stored around — there's no notion of, when I want to take a view — a CAD program says, oh, show me what this looks like from here. There's no stored knowledge about what it looked like from any particular location. They take the model of the object, and then from that they can derive what you would see from that distance.
— elements, and now I'm going to build a composition, then compositions of compositions, then compositions of compositions of compositions, and it gets harder and harder. So, for example, I started to say, okay, the coffee cup — I've got the letters on the logo. Maybe my elements are the letters on the logo, and then it's, okay, now I'm learning a composition, which is the name "Numenta."
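The letters-to-logo-to-cup idea can be sketched as nested features-at-locations — purely illustrative structures (the coordinates and names are made up):

```python
# Purely illustrative: a composition is features at locations, where a
# "feature" may itself be another composed model (a nested dict).
logo = {(0.0, 0.0): "N", (1.0, 0.0): "U"}   # letters at locations on the logo
cup = {(2.0, 3.0): logo}                     # the logo at a location on the cup

def depth(model):
    """Count the levels of composition in a nested model."""
    if not isinstance(model, dict):
        return 0          # a bare element, e.g. a letter
    return 1 + max(depth(child) for child in model.values())
```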
The right question is: what does the model of the object look like? If you look at the model of the object in computer vision systems — every CAD system models an object as features at locations, independent of where the eventual lighting source will be, or the eventual camera will be, or the eventual anything. That is the logical way of going about this, and so the camera position is not even part of the model of anything; it's handled separately.
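A tiny sketch of the CAD-style separation (the projection formula is a stand-in, not a claim about any real system): the model stores features at 3D locations with no camera in it, and what a camera sees is derived on demand.

```python
# The object model: features at 3D locations, no camera anywhere in it.
object_model = [
    ("rim", (0.0, 0.0, 5.0)),
    ("base", (0.0, 0.0, 0.0)),
]

def apparent_size(true_size, feature_pos, camera_pos):
    """Derive how big a feature looks from a given camera position.
    Simple inverse-distance shrinking; the model itself never stores this."""
    dx = feature_pos[0] - camera_pos[0]
    dy = feature_pos[1] - camera_pos[1]
    dz = feature_pos[2] - camera_pos[2]
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    return true_size / dist
```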
— features and different types of movements, and so on. It's really, really hard to imagine any kind of knowledge about a specific thing — about a model of the world — being specifically dependent on the position I've seen it from. It's really hard to imagine that could possibly work. Okay, this is the problem we have to solve from this.
I'm excited about the 6a thing, because I think this could really do that — that whole conversation just fits perfectly with that, yeah. And the same — you know, if you're going to do this thing, you also have to figure out how you handle the whole issue of movement relative to this thing. Like, how do I — yeah, you have to solve that problem too.
There are so many things — there are so many things that depend on having a model that is independent of your position, independent of your particular sensor, independent of everything — that it's really hard to accept a model that doesn't have that. It's not just one problem; the problem I talked about, the compositionality issue — that's going to be a problem.
— a really, really strong case: such a fundamentally sound base to start with. And the fact that we haven't figured out exactly how to do this is not a reason to abandon it; it's the reason to focus on it, and to write out what aspects of this we don't understand and how to address them. Well, that's a perfect example of this: how I'm going to deal with scale, both of time and space and motor.
Okay — because all the time, I think, if I attend to you and then attend to the student, I automatically calculate — there's something intuitive, sort of inherent, and that's a displacement. I agree with that, but that doesn't define — that's not my model of the object. I don't say, oh, you two are at this position apart when I'm here, and at that distance apart from there — it doesn't matter where I am, I get the same displacement whether I'm over here or over there. That's, to me, the trick. Okay, so I'm agreeing with that, but I don't —
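The displacement point is easy to check numerically — a toy sketch (coordinates hypothetical): the displacement between two locations comes out the same no matter which observer-centered frame you compute it in.

```python
def displacement(a, b):
    """Displacement between two locations; independent of the observer."""
    return (b[0] - a[0], b[1] - a[1])

def as_seen_from(observer, point):
    """The same point expressed in an observer-centered frame."""
    return (point[0] - observer[0], point[1] - observer[1])

# Two people standing at fixed spots in the room.
p1, p2 = (2.0, 3.0), (5.0, 7.0)
# Two different places I might be observing from.
here, there = (0.0, 0.0), (10.0, -4.0)
```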
A nice problem that we can totally avoid having to solve, if we do this approach — as Subutai brought up — is: how do you perform 3D path integration? Like, how do you say, all right, I'm at the left corner, now I'm moving forward in cup space, now I'm moving back in cup space — integrating the third dimension? It seems to be necessary with the other approach, or you have to somehow account for it. Here, that's not necessary — here, I just updated the orientation, yeah.
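For contrast, classic 2D path integration — the thing the first approach would have to extend to a third dimension — looks roughly like this (a textbook sketch, not Numenta's implementation):

```python
import math

def path_integrate(pos, heading_rad, forward_dist):
    """Classic 2D path integration: move forward along the current
    heading and return the new allocentric position. A 3D version
    would have to integrate pitch/elevation as well."""
    x, y = pos
    return (x + forward_dist * math.cos(heading_rad),
            y + forward_dist * math.sin(heading_rad))
```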
And so, like Mike was saying, I have to add it — it's something that's there from the start, and I don't explain how I got it. That's the trick. Now, it's obvious that if I have a model of the room, it's 3D, and that helps, right? Because then I can even show you — I can show you a two-dimensional —
Like here, you're saying: I'm learning what this object looks like by moving my eye back and forth, without knowing the depth of these features — isn't that what you're saying? You just said I don't need to know the depth of these features for this to work. You just said, oh, I don't need to know that this is further away than that, because you just look at each feature — feature by feature — in some sort of, you know —
Then you're figuring out, like, okay — now you're figuring out that, okay, I am right here in the space of this ellipse. So this is one child object, and you're figuring out your location in that space. For another child object — it might be the handle — you're figuring out where you are in that space.
You just said a moment ago that the advantage of the second approach is: I don't need to know the depth of the features when they come in. That's what I was reacting to. What you said was: I don't need to know the depth of the features of the cup; I can just —
There's still a huge — even though you're factoring out one dimension, you still have to learn it from every angle, and you have to do that for every component. For every component, there's some set of things you have to learn — the projection on the eye from every possible angle — yeah, and then, once you have that, you can create 3D compositions of that. And, as you say, you're representing the relative positions of these sub-objects, or whatever it lets you learn, but —
— is you have a reference frame for each one, and you're learning relative 3D position. Which one? Yes — it seems to me the only difference is what we're calling an object or a feature. Here, in the top one, you would still need to learn, you know, how something feels — what the sensation of every point on an object is from every location. I really don't see the distinction; you still have 3D reference frames.
This is not — I mean, you still have 3D reference frames, and you still need to do transformations and all this stuff in the bottom one, right? Yes. So what is the distinction? Isn't it just the scale of the thing that you know at every orientation? Because here, for everything, you have to know what it feels like at every orientation; here, you're saying, with what we know when —
What is the fundamental difference between these? I keep coming back to it — I don't understand: what is the model of the object? To me, that's the important part. How do I represent this structure? Is there a fundamental difference in representing the structure? Yeah — what sends me off is when you say I have to remember what this thing looks like from every different position in the world. That's like — I can't do that; that seems wrong. But — well, like —
Even if you have a 3D model of things, you still have to deal with occlusions — one sub-object occludes the other object. And to me, you're not saying you're going to learn every single possible occlusion in advance; there's going to be some model of how objects occlude one another. So that's, again, more similar to the CAD model.
But it seems like, even when I look at a very complex object, it doesn't feel to me like I'm saying, oh, there's a whole bunch of little sub-objects here. It's like the logo becomes its own sub-object, and then the coffee table becomes its own sub-object, and so I can incorporate the coffee cup into the model of this room, because it's glued to the table at some spot, and I don't have to — I don't —
So you would have learned not only how that thing looks at every orientation, but how it feels — you've made that, yes. But now that you've done all of that, you still need all of the machinery: you need all of the relative positions of things, and occlusions, and how these sub-objects are going to rotate.
— that come next to each other are somehow — I'm making this up — voting on the entire thing. Like, I'm seeing a "D," I'm seeing an "O," I'm seeing a "G," but somehow together we know that that's "dog." We don't do that, but it couldn't be — the point is, there's a spatial aspect to it too, so there could be a second learning step that I have to accommodate. So I can learn once with a sequence, then you go to sequences of sequences, and then —
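The letter-voting idea might be sketched like this — a deliberately toy model (the lexicon and encoding are invented): each observed letter-at-position eliminates inconsistent words, and the intersection converges.

```python
# Invented toy lexicon: word -> letters at positions.
lexicon = {"dog": ["d", "o", "g"], "dot": ["d", "o", "t"]}

def vote(observations):
    """observations: list of (position, letter). Keep only the words
    consistent with every observation — a crude voting/intersection."""
    candidates = set(lexicon)
    for pos, letter in observations:
        candidates = {w for w in candidates
                      if pos < len(lexicon[w]) and lexicon[w][pos] == letter}
    return candidates
```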
How many research staff? Just a few right now — I mean, you saw them in the meeting, pretty much. We do have interns coming and going all the time; we currently don't have any interns, but we usually have, like, two or three, so there are always intern opportunities. You can look at numenta.com — slash careers, I think — if you're interested in that. And I think we are hiring a machine learning engineer right now; you can look at our website about that — I think that's posted as well. I'm going —