From YouTube: Full Vision of HTM 2D Object Recognition Project
Description
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
So let's talk about test definitions, I guess! If anyone is interested, there's a project that I'm trying to coordinate here. What I'm trying to do is encourage the community, the HTM community (I used to have a black marker here), to build one of these three-layer column networks.
Okay, and if you watched the recent video we did (was it yesterday? No, no! It was two days ago, on Wednesday) when I was in the office with Jeff, we talked about this being equivalent to layer 4 and, I think, layer 6a: communication this way, communication that way, input this way, output that way, and this being an object pooling layer. So this layer is going to identify an object.
A persistent, I would say persistent over time, stable, the word is stable, object representation. So this layer is going to move very quickly; there are going to be a lot of jumps in the patterns here as input comes in. This layer, as we're attending to an object, is going to be stable. So this incorporates two papers: what we call the Columns paper and then the Columns Plus paper.
The Columns paper defines sort of this space, and the Columns Plus paper describes the mechanisms that we think are in place within these systems. And so what I'm trying to encourage here with this object recognition project is to do really, really simple 2D object recognition. So we can define an object space. I'm going to call this an object space, an object can exist in it, and I'm actually going to give it, you know, an X and a Y dimension.
Okay, and we're going to have points in this space that we can define with (x, y), so a very discrete space, and each one of these locations in the space may have a feature. Let's just call this feature A; maybe you can see feature A over here, and maybe there's a feature B over here. So you can have features at these locations, and the idea is that we want to try to create one of these column structures with three layers that, when given sensory input from this object space, can identify what object is being sensed. So we'll have, like, a library of objects.
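As a concrete sketch of what that object space might look like (the names here are my own, not from any existing HTM codebase): an object can be nothing more than a mapping from discrete (x, y) locations to feature labels.

```python
# A minimal sketch of a discrete 2D object space. An "object" is a dict
# mapping discrete (x, y) locations to feature labels; empty locations
# simply have no entry. Names are hypothetical.
object_1 = {
    (0, 0): "A",   # feature A at the origin
    (2, 1): "B",   # feature B elsewhere in the space
    (2, 2): "A",
}

def feature_at(obj, x, y):
    """Return the feature at (x, y), or None if the location is empty."""
    return obj.get((x, y))

print(feature_at(object_1, 0, 0))  # A
print(feature_at(object_1, 1, 1))  # None (empty location)
```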
So this is maybe object one right here; that's how it's defined. But we'll also have a library of them: there will be objects two, three, four, five, and we'll label them, and they'll look like things.
Do what you need to do to train on them, and then the test would be to grab an object out of the space they trained on, load it into the object space, give them one location to place the agent, and ask it: what's the object? If it knows it, it wins. You know, that's the goal, immediately. If it knows immediately, with one touch, great; but then, if it doesn't, we give it a movement path. We're initially going to predefine the movement path; it's probably just going to be a randomly predefined movement path.
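A randomly predefined movement path like the one described could be generated once, with a fixed seed, so every competitor walks the exact same locations. A sketch (function and parameter names are assumptions, not part of any spec):

```python
import random

def make_movement_path(width, height, steps, seed=42):
    """Predefine a random movement path: a fixed list of (x, y) locations.
    Seeding the RNG makes the path identical for every competitor."""
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(height)) for _ in range(steps)]

# Every agent gets the same path through a 10x10 object space.
path = make_movement_path(10, 10, steps=5)
print(path)
```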
Yeah, the object pool is meant to be layer 2/3, that's correct, catch64. And so then we'll give every agent, every competitor or whatever, the same movement path through this space and see how many touches it takes, how many sensations it takes over time, to identify which object in the library it's touching.
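Scoring could then just count how many touches along the shared path an agent needs before it names the right object. A sketch, with a trivial stand-in agent that guesses from the (location, feature) pairs it has seen so far (everything here is hypothetical, not the actual HTM mechanism):

```python
def touches_to_identify(agent_guess, path, true_object, library):
    """Count sensations along the shared movement path until the agent's
    guess matches the true object's label. Returns None if it never does."""
    seen = []
    for step, loc in enumerate(path, start=1):
        seen.append((loc, true_object["features"].get(loc)))
        if agent_guess(seen, library) == true_object["name"]:
            return step
    return None

def simple_agent(seen, library):
    """Toy agent: guess the first library object consistent with every
    (location, feature) pair sensed so far."""
    for obj in library:
        if all(obj["features"].get(loc) == feat for loc, feat in seen):
            return obj["name"]
    return None

library = [
    {"name": "obj1", "features": {(0, 0): "A", (1, 0): "B"}},
    {"name": "obj2", "features": {(0, 0): "A", (1, 0): "A"}},
]
path = [(0, 0), (1, 0)]
# obj2 needs a second touch: the first touch is ambiguous between the two.
print(touches_to_identify(simple_agent, path, library[1], library))  # 2
```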
So I want to create a test like that. In the end, you know, the motivation behind this is not to create a puzzle or a benchmark that no one else can solve.
A
That's
not
what
I'm
trying
to
do
I
think
I
would
structure
Toth
totally
differently
if
I
was
trying
to
create
a
benchmark
that
only
HTM
would
be
good
at
that's,
not
what
I'm
trying
to
do.
What
I'm
trying
to
do
is
create
a
a
puzzle,
a
a
challenge
for
you
in
the
community
that
are
interested
in
building
HTM
systems
to
solve
this.
So
here's
the
here's,
an
even
closer
look
at
the
idea
so
as
I
say
we're
in
this
object.
I know this is sort of strict, but we're going to give you a north sensor, an east sensor, a south sensor, and a west sensor. We could start off with just one, right, but that seems sort of boring. Let's try it with multiple sensors, because the idea is that each one of these sensors is going to be represented as a cortical column with these layers, so the sensory input is going to be fed into this layer.
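The four sensors could simply sample the cells adjacent to the agent's position, one reading per would-be cortical column. A sketch under my own naming, not an existing API:

```python
# Offsets for the four sensors relative to the agent's location.
SENSOR_OFFSETS = {
    "north": (0, 1),
    "east":  (1, 0),
    "south": (0, -1),
    "west":  (-1, 0),
}

def sense(obj_features, agent_x, agent_y):
    """Return one feature reading per sensor; each reading would feed a
    separate cortical column. None means an empty location."""
    return {
        name: obj_features.get((agent_x + dx, agent_y + dy))
        for name, (dx, dy) in SENSOR_OFFSETS.items()
    }

features = {(0, 1): "A", (1, 0): "B"}
print(sense(features, 0, 0))
# {'north': 'A', 'east': 'B', 'south': None, 'west': None}
```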
It's going to activate some minicolumns, which is going to help identify where it is, or all the places in the space it might be, and this will select all the possible objects that it's learned so far, the union of them. And maybe we pop out, you know, object A at ninety percent, object B at eight percent, whatever. So we should, hopefully, maybe, we're going to try to get, you know, probabilities like that after each touch.
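One way to picture that union of possible objects narrowing touch by touch is set filtering over the (location, feature) pairs sensed so far. This is only a toy stand-in for what the layers actually compute, not the minicolumn mechanism itself:

```python
def candidate_objects(library, sensed):
    """Return the union of learned objects still consistent with every
    (location, feature) pair sensed so far."""
    return {
        name for name, feats in library.items()
        if all(feats.get(loc) == feat for loc, feat in sensed)
    }

library = {
    "cup":  {(0, 0): "A", (1, 0): "B"},
    "bowl": {(0, 0): "A", (1, 0): "A"},
}

# After one touch both objects remain possible; the second disambiguates.
print(sorted(candidate_objects(library, [((0, 0), "A")])))  # ['bowl', 'cup']
print(sorted(candidate_objects(library, [((0, 0), "A"), ((1, 0), "B")])))  # ['cup']
```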
Each one of these sensors is going to have its own model of a column, with all of this mechanism happening at the same time, so they're all going to get different sensory input; this one is going to have its own over here. And then, to pull it all together, and this is like the pie-in-the-sky vision, pull it all together: we're going to have these layers all talking to each other, voting, because that really is what is going to make one touch be powerful enough to recognize an object.
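In the crudest possible sketch, that voting could be each column proposing its own candidate-object set and the columns settling on the intersection, which is what lets four simultaneous sensations resolve an object in a single touch. This is a toy model, not the lateral-connection mechanism the Columns paper actually describes:

```python
def vote(column_candidates):
    """Intersect each column's candidate-object set. With four columns
    sensing at once, one touch can already pin down a single object."""
    result = None
    for candidates in column_candidates:
        result = set(candidates) if result is None else result & set(candidates)
    return result or set()

# Each of the four columns sensed a different feature of the same object
# and proposes the objects consistent with what it alone sensed.
votes = [
    {"cup", "bowl", "plate"},  # north column
    {"cup", "bowl"},           # east column
    {"cup", "plate"},          # south column
    {"cup"},                   # west column
]
print(vote(votes))  # {'cup'}
```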