From YouTube: HTM Hackers' Hangout - July 5, 2019
Description
HTM Hackers’ Hangout is a live monthly Google Hangout held for our online community. Anyone is free to join in the discussion either by connecting directly to the hangout or commenting on the YouTube video during the live stream.
More info on all these topics at https://discourse.numenta.org/t/htm-hackers-hangout-july-5-2019/6240
A: Hello and welcome to HTM Hackers' Hangout; it's July 5th. We do this pretty regularly, every month, so thanks for joining, everybody. We have a few people watching online. Just a reminder: if you're watching live, give the video a thumbs up. That would be helpful; YouTube really likes it when people thumbs-up a video. Even if you're watching later, give it a thumbs up.
A: This format is going to change, because this Hangouts On Air thing is going away. YouTube just told me as soon as I hit broadcast that it's no longer going to be around, so things will change next time; there's something called YouTube Webcam. We'll see how that goes next month when we try to do this. I just want to take a moment and say welcome to new subscribers.
A: I know I've gotten a lot more traffic and subscribers since that Lex Fridman interview with Jeff, so if anybody watching came from there, thank you for subscribing and watching. I do a lot of live streams on this channel, which you are free to ignore if you like, and this is one of them. We do this once a month, and it allows us to talk with the community, so I always invite the community to actually join.
A: We've got Marty, which is the teapot, and Marcus from Numenta is joining, so we'll have some discussions with them. Otherwise, there is a forum post, which I'll put in chat right now; I think this is it, yeah, and that sort of has a summary of what we're going to talk about. So we will get right into that. Once we get through this, we'll open it up to anybody who has joined and wants to talk, and I'll be watching chat.
A: If you have any questions in chat, I'll address those once we get past these main topics. The first thing I want to talk about is Numenta's research direction. Over the past year we have been leaning more towards machine learning and the application of what we've understood about, particularly, the neocortical circuit, sequence learning, spatial pooling, even the old stuff.
A: You know, the stuff that came out in 2013, spatial pooling and temporal memory, and how to apply that to current machine learning systems, specifically deep networks. We're continuing to go in that direction, and we're getting traction there, which is great. So, since we're getting that traction, we're going to... it looks like my bitrate is a little low, so you guys might get some shaky video. Sorry about that.
A: Wait, sorry, Mark says, "Please repeat this when chat is live; may be worth dropping in weekly to introduce the Lincoln name change." I'm sorry, I don't know what you're talking about, Mark. Anyway, we're moving towards machine learning, so all of our new research stuff, our live streams, and everything are going to be focused towards the application of HTM ideas. Sorry, that happens; no window cam in this layout. So you're going to see the research meetings focused towards machine learning.
A: On the machine learning side, we have a project called nupic.torch, which is basically about creating machine learning architectures that use the ideas of HTM, particularly sparsity. As we move along, we're going to continue to create new models, and you can see there are models in here if you want to take a look at them right now: you've got a sparse CNN, and more will probably be coming. The thing I wanted to point out is that Luiz has recently... sorry, now we have a firetruck.
A: Okay, so Luiz has recently added more examples, and the cool thing is that they are Jupyter notebooks, so we can take advantage of that in this new Colab environment that Google has made available. All of these examples can be opened up in one of these Colab environments (let me try to make this bigger), and you can run this on a GPU, which is crazy, right?
A: So this is an example of how to use this model. It will run all the training; it'll do the training and the testing and then give you error rates, introduce random noise, and all this stuff, and you can just sit here and watch it working without overloading your local CPU. You can do it all on Google's servers, which is outstanding.
A: The next thing: if you're following our forum right now at HTM Forum, I've noticed a big increase in spammers. If you notice spammers on the forum, you can flag the post; in the options under each post there's a bar with a little flag, and if you think something is spam, please flag it. They're getting really good at this, like crazy good.
A
But
if
you're,
if
you
keep
up
with
you
know,
what's
going
on
in
machine
learning
and
I
can't
tell
if
they're,
just
dirty
tricks
or
if
they're,
like
really
smart
applications
of
the
latest
NLP
machine
learning
tactics.
You
know
because
someone,
a
new
post
will
pop
up
and
it'll,
seem
completely
reasonable
and
they'll
be
a
link
to
something
that
also
seems
reasonable.
A
Here
too,
you
know
so
they're
there
they're
posting
without
any
explicit
links
to
get
you
to
like
click
bait,
they're,
not
posting
a
direct
click,
bait
I
think
they're
posting
initially
to
get
trust
in
the
system,
because
the
forum
software
has
this
trust
system
and
I
think
they're
doing
this
to
see
real
content
so
that
in
the
future
they
can,
they
can
do
the
clay
and
the
spam
stuff.
So
just
keep
an
eye
out
for
that.
If
something
seems
weird,
let
me
know,
I
mean
I'm,
always
looking,
but
sometimes
I
miss
it.
A: Sometimes I miss the spammers, and I almost missed one just recently. Okay, so there's that, quickly. I don't want to talk too much about Twitch versus YouTube, but I'm going to be live streaming on YouTube now for the foreseeable future; I think that's where the audience is. This, obviously, is a YouTube live stream. If you go back and look at the history, almost all my live streams recently have been done on Twitch and then moved over to YouTube as just a video, but I've been investigating YouTube live streaming, and I...
A: ...think everything I did on Twitch I can do on YouTube, so I'm going to sort of move my process over. I have some technical tasks to achieve, you know, updating my tooling to apply it to YouTube, like the chat interactions and stuff like that, but it'll all be back, just like on Twitch. Which is nice, because most workplaces allow YouTube access, but they do not allow Twitch access.
A: Brown, thanks for doing that. The last thing is that I'm still working on Building HTM Systems; that's the project I'm currently streaming heavily, and we're doing spatial pooling stuff, so we're partially through it. We've got to finish up active duty cycles, and we're going to do overlap duty cycles soon, which I haven't even talked about or explained at all. So I'm still working through that. And with that: hello, Lucas, welcome.
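For anyone following along with Building HTM Systems: an active duty cycle is just a moving average of how often a mini-column has been active, and overlap duty cycles track the same statistic for how often the column's input overlap cleared a threshold. A minimal sketch of the moving-average update; the function name and the period value are illustrative, not taken from any particular implementation.

```python
def update_duty_cycle(duty_cycle, active, period=1000):
    """Moving-average duty-cycle update, as used in spatial pooler sketches.

    duty_cycle: previous duty-cycle estimate in [0, 1]
    active:     1 if the mini-column was active this timestep, else 0
    period:     averaging window; larger means a slower-moving estimate
    """
    return (duty_cycle * (period - 1) + active) / period

# A column active on every timestep drifts toward a duty cycle of 1.0.
dc = 0.0
for _ in range(10):
    dc = update_duty_cycle(dc, active=1, period=10)
```

Columns whose duty cycles fall too low are then typically boosted or given new synapses, which is what keeps the whole population in use.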
A: Okay, with that, we'll get right into questions from the forum and respond to some of those. Maybe I'll share this at least so you guys can see; I think this'll work. If anybody's watching, here's the forum thread we were just talking about, with the summary. BitKing was asking about a summary of capsules versus HTM, and he caught on this thing we were talking about, like grandmother cells, and I don't quite understand that. I thought, Marcus, when we were talking about grandmother capsules...
A: So I can address a couple of those things. (Go for it.) So I guess, since we're on the topic of grandmother capsules, I can talk about that first. Yeah, the notion of grandmother cells: the idea that there are single neurons that represent things, like you have a neuron devoted to when you see your grandmother, or something like that, and the idea that that's happening in the brain, etc.
A: I understand the concern there: it seems silly, the idea that you even have a cell devoted to each concept, or that that's how learning occurs. Do you just grow a grandmother cell, and have a coffee cup cell? Similarly with capsules: that is almost definitely a hack of sorts. What is going on at the top of a capsule network is a hack, but whatever; what's going on between the bottom and the top is not a hack.
A: It's actually quite nice. Yes, at the top the capsules are sort of grandmother-y, but all the ones in between layers are more like reusable components. It is almost analogous to the spatial pooler's mini-columns, where the spatial pooler learns these coarse features, each mini-column being sort of a coarse feature; in that sense these capsules are devoted to one feature, or one type of component.
A: The part that is nice: suppose you have a thousand capsules in a network. Say you have a layer of capsules, a population of cells, a thousand capsules. It needs to learn a set of a thousand features, basically, that are useful for representing the input, so it's using the statistics of the objects to pick a good set of features. That's all, really.
A: If you were to build a more advanced system that, instead of using this fully supervised approach that's passing in something at the top, were a little more similar to a reinforcement learning system, where the task of the system is to identify objects, but it does that more as a behavior, more as an output; if instead the network were like this big black recurrent box...
A: ...you could basically remove the hack. What I'm thinking here is: once you make the system more and more of a black box, where somehow internally it is figuring out that that's a grandmother or that's a coffee cup, and then it acts in a way, it speaks the word "grandmother" or something like that, that can be one approach to removing the hack. So it's like, yeah, grandmother capsules are a temporary...
A: If you're asked to identify what's in a photo, you don't answer the question by having the person decode your brain; you answer the question by speaking the answer. So basically there doesn't have to be a grandmother capsule in your brain, because you have another way of reading it out: through speaking.
A: My point of view is that grandmother cells, grandmother capsules, all that, is a placeholder hack to bootstrap the whole field of research, but eventually it will go away. So that was a long answer to just one part of that. So yeah, a summary of capsules versus our model...
A: I would say (of course, there have been multiple hours of research meetings where I talked about this recently) one specific part I would point to is in the most recent research meeting, capsules part two, which was last Monday. Toward the end of the meeting, off to the right side of the whiteboard...
A
If
you
go
to
the
video
you'll,
see
on
the
right
side
of
the
board,
I
have
these
just
this
axis
of
yes-no,
yes-no
and
and
I
talked
about
capsules
versus
our
model
and
in
the
the
the
thing
that
is
varying
between.
That
is
the
underlying
assumptions
about
wonder
what
neurons
can
do
and
what
a
small
number
of
neurons
can
do
in
capsules.
A: ...you wouldn't want these capsules to be devoted to individual features or individual objects, because there are so many cells required for representing this other stuff. To avoid being wasteful, you want a capsule to represent many different types of features, and that really starts to change the architecture of the whole thing, or the motivation, the principles of the system. Well, once you have this...
A
You
know
this
extra-large
capsule
that
is
representing,
pose
and
scale
or
whatnot,
and
then
it's
also
representing,
like
object,
identity
at
this
point,
because
it's
so
bigger
you
now,
rather
than
having
it
devote
to
one
object,
you
want
to
devote
it
to
a
lot
of
objects.
It
starts
to
seem.
Well,
you
could
call
it
a
cortical
column.
It
starts
to
seem
it's
just
different
and
and
spirit,
and
so
like
at
the
core.
A: Otherwise, on the board I tried to draw this sort of continuous space of how many cells it requires to represent a location, an orientation, and everything. If it's very few, at one end of the spectrum you get capsules; if it's a lot, you get something more like our model. And maybe there are things in between; maybe it's actually something in between. I also made it clear in that meeting that...
A: ...we don't know the answer to these questions about how many neurons it takes to represent a pose. I'm just saying that if it takes a lot, then our model is kind of predicated on that; if it doesn't, capsules. There's a lot we don't know about neural tissue; there's a lot that could be going on. Maybe cells are more powerful, more capable of representing more than we think. These are all open questions; neuroscience still has a lot to figure out.
A
Okay,
thanks
for
the
summary
now
I
hesitate
to
move
on
to
a
completely
other
topic,
because
yeah
I
think
I
think
we
should
I.
Think
you've
said
enough
about
the
about
the
capsule
discussion.
There
was
a
very
high-level
question
from
Barnett
talking
about
object,
object,
recognition,
specifically
higher-level
abstracted
objects,
and
so,
for
example,
we
always
talk
about
coffee
cup
like
this
particular
coffee
cup,
but
there's
a
difference
between
my
coffee
cup
that
I
use
every
day
and
then
just
the
general
idea
of
a
coffee
cup.
A: Do we have anything to say about that at this point? To add to the piece, he also gave me the idea of apples versus fruit: an apple is a fruit, a banana is a fruit, but corn is a vegetable. He's really talking about abstract concepts and how real physical objects can be classified in these abstract ways.
A: We don't have anything official on this topic. I have little thoughts about instances versus classes, this idea that you have models of coffee cups and then you have various instances of coffee cups, but I don't have anything at this point; it'd just be me making stuff up.
A: The only thing that comes to mind is that that's the type of thing that is almost easy for certain types of really simple systems. If you just have associations, like you have "fruit", and you have "apple" associated with your representation for fruit, you can draw these graphs. That's, I don't know, a simplistic approach to it.
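The kind of simple association structure being described can be made concrete with a tiny graph, where each concept just links to the concepts it is directly associated with. All the names here are illustrative; this is only a sketch of the "simplistic approach", not anything Numenta has built.

```python
# A toy association graph: each concept maps to the set of concepts
# it is directly associated with.
associations = {
    "apple": {"fruit"},
    "banana": {"fruit"},
    "corn": {"vegetable"},
    "fruit": {"food"},
    "vegetable": {"food"},
}

def is_a(concept, category):
    """Follow association links to see whether a concept reaches a category."""
    seen = set()
    frontier = [concept]
    while frontier:
        node = frontier.pop()
        if node == category:
            return True
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(associations.get(node, ()))
    return False
```

This handles "an apple is a fruit" trivially, which is exactly the point being made: plain association answers that class of question without any notion of location or orientation.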
A: It's just associating representations with each other, and in some ways that class of problems is not challenging; or rather, there are approaches to it that are simple. I don't know if those approaches are right, whereas the types of things we've been approaching are things that can't be solved through simple associations like that, having to bring in these notions like location and orientation. Yeah, I mean, I don't know if this is what he's describing...
A: It's just not something that automatically happens, but you can make associations, or at least you can say, well, it looks sort of like this thing; my eye can match it to the sense I get when I'm looking at this type of thing or that type of thing. But it's not as simple as just performing what we would call a union over everything that we know of.
B: Right now, the research direction we're working on is trying to combine machine learning with the HTM principles we've developed over the last years, and so we're starting from the beginning. We have a bucket list of a lot of things we want to tackle, and the first thing we're going to solve is sparsity. Right now, machine learning models are mostly dense, and we believe they should be sparse, as the brain is.
B: So right now, that's the work we're doing: we're trying to apply sparsity to several machine learning models and see if the properties we have in sparse distributed representations, like robustness and efficiency, translate to machine learning models as well. That's a very short summary of it. Then we'll probably move on to other problems, like continuous learning, encoding, and structural plasticity. That's something we're working on right now, and so we have a large bucket list of things.
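The robustness property mentioned here is easy to demonstrate: sparse distributed representations are compared by the overlap of their active bits, and because only a small fraction of bits are active, a match survives even when a fair fraction of those bits are corrupted. A small sketch; the 2048-bit, 40-active sizing is a commonly cited HTM example, and the noise level is illustrative.

```python
import random

def overlap(sdr_a, sdr_b):
    """Number of active bits two SDRs share (SDRs as sets of active indices)."""
    return len(sdr_a & sdr_b)

random.seed(42)
n, w = 2048, 40                       # 2048 bits, 40 active (~2% sparsity)
original = set(random.sample(range(n), w))

# Corrupt the SDR: move 10 of its 40 active bits to other positions.
noisy = set(random.sample(sorted(original), w - 10))
noisy |= set(random.sample(sorted(set(range(n)) - original), 10))
```

Even with a quarter of the active bits moved, the two representations still share 30 of 40 bits, far more than two random SDRs of this sparsity would share by chance, so a downstream matcher with a reasonable threshold still recognizes them as the same thing.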
D: Thanks, guys. One thing that I often wonder about during the research meetings is when any of you talk about child objects. You have this combination of features, and you call it child objects, but when I'm trying to make sense of HTM, I see features as points: the smallest amount of sensory information that any of our sensors can get.
D: If you're considering visual information, I consider the smallest point that the retina can detect, which is then channeled through whatever means towards our neocortex, where the computation starts. The same for our skin: when we touch something with the tips of our fingers, there are probably hundreds, maybe thousands, of receptors on one fingertip. But okay, I'll cut it short. The idea is: when you guys talk about child objects, are those already combinations of inputs, or is a feature just that, a point sensed?
A: In that context, we call them child objects once we've gotten up to the level of talking about compositionality. The child objects here are the things talked about in the columns paper, "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World", and the follow-up paper, "Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells". I remember that title because I'm the first author. In those, we talk about objects, and...
A: It's really building a map: at this location I sensed this, at that location I sensed that. You can think of the input in these cases as being whatever is coming in on the sensory patch; you can think of it, if you want, as just a set of zeros and ones of some kind.
A: Like a big bitmap of input bits; that's one way to think about it. So we typically think of it as: the set of input bits comes in, the spatial pooler maps it to some sparse set of mini-columns, and basically that's what a quote-unquote "feature" is. A child object is: what feature do I sense at this location, what feature do I sense at that location? So you were asking: is it point by point, or is it larger than a point?
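The mapping just described, a set of input bits in and a sparse set of mini-columns out, can be sketched in a few lines. This is a deliberately stripped-down illustration (no learning, no boosting, and the wiring is made up), not the full spatial pooler algorithm.

```python
def spatial_pool(input_bits, connections, num_active):
    """Map a set of active input bits to a sparse set of mini-columns.

    input_bits:  set of indices of ON bits in the input
    connections: list where connections[c] is the set of input bits
                 mini-column c has connected synapses to
    num_active:  how many winning mini-columns to keep active
    """
    overlaps = [len(conn & input_bits) for conn in connections]
    # Activate the mini-columns with the highest overlap; this sparse
    # set of winners is the quote-unquote "feature".
    ranked = sorted(range(len(connections)),
                    key=lambda c: overlaps[c], reverse=True)
    return set(ranked[:num_active])

# Four mini-columns, each watching a few input bits (illustrative wiring).
connections = [{0, 1, 2}, {2, 3, 4}, {5, 6, 7}, {0, 6, 7}]
active_columns = spatial_pool({0, 1, 2, 3}, connections, num_active=2)
```

The real algorithm additionally learns the wiring by adjusting synapse permanences toward inputs that the winning columns overlap, but the overlap-and-winner step above is the core of the input-to-feature mapping.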
A: We talk in terms of cortical columns. A cortical column has a receptive field, a set of inputs: a set of receptors on the finger, or retinal ganglion cells from the retina. That's just like this big 2D input, and you put a spatial pooler over that set of input bits; that output is what we call a feature. And so the child objects are: what feature do I sense at this location, what feature do I sense at that location, etc.?
A
Let's
call
that
a
child
object
and
and
now
in
order
to
learn
this
higher
object
like
the
full
pen,
I'm
gonna,
I'm,
gonna,
rearrange
these
child
objects
like
the
pen
top
and
the
pen
bottom
into
a
into
a
compositional
object,
and
so,
when
we
talk
about
something
that
spans
an
area,
that's
where
we
use
our
child
object.
Parent
object
mechanism;
okay,
great
thanks,
Marcus,
I'm,
gonna,
move
on
to
Marty,
who
also
had
a
question
to
ask
you're
ready.
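The child/parent idea above can be sketched as plain data: a child object is a map from locations to the features sensed there, and a parent object places child objects at offsets. All the feature names and coordinates here are illustrative, and real HTM models represent both features and locations as neural activity rather than labels.

```python
# A "child object": a map from locations to the feature sensed there.
pen_top = {(0, 0): "clip", (0, 1): "cap_tip"}
pen_bottom = {(0, 0): "barrel", (0, 1): "point"}

def compose(children):
    """Build a parent object by placing child objects at offsets."""
    parent = {}
    for offset, child in children:
        for (x, y), feature in child.items():
            # Shift each child location into the parent's reference frame.
            parent[(x + offset[0], y + offset[1])] = feature
    return parent

# The full pen: pen_top at the origin, pen_bottom two units below it.
pen = compose([((0, 0), pen_top), ((0, 2), pen_bottom)])
```

The key move is the coordinate shift: the same child object can be reused at different offsets inside many parents, which is what makes the representation compositional.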
A: A lot of our work already goes into thinking about the core problems the brain has to solve, for example, thinking about what vision is. What must the brain be doing to solve the problem of vision? Just thinking about that problem in the abstract. And then it's a whole other set of work to map it onto cortical columns and onto neural populations.
A: So the one reaction I would give is that we're still going to be thinking about the core problems of vision, the core problems of, you know, learning sequences, and what is likely to come back to HTM is just new things we learn from thinking about those problems. We'll be spending, I think, less time focusing on biological neurons... or, no, that's not the right way to say it.
B: Yeah, can I add something? I think it's also an interesting way to validate the theories we have right now. We don't just think about applications, extending to deep models and to the benchmarks we use in machine learning; it's also an opportunity for us to validate what we have done in HTM. So right now I'm working on sparsity, but we have Jeremy, who is working on HTM applied to continuous learning, so I think there is...
B: ...there is a path between HTM and machine learning, and we are going down that path. It's not a straight line; we're going back and forth, experimenting with SDRs or experimenting with temporal memory, but I think it's been fruitful to see how the theory applies to larger data sets and larger models. I also believe we're going to learn a lot from that, and some of it can come back and somehow help us on the brain part, the neuroscience research.
A
Thanks
Lucas
I'm
excited
about
this
direction
because
for
the
longest
time
for
years
and
years
since
we
open
sourced
one
of
the
big
arguments,
critical
arguments
against
HTM
and
what
we're
doing
was
what
value
are
you
adding
to
current
system?
So
what
can
we
do
right
now?
That's
that's
valuable
and
the
answer
has
always
been
well
little
here
and
there,
if
you
can
figure
out
how
to
do
it
and
the
direction
that
we're
going
right
now
is
basically
to
take
everything.
A: ...we've learned and find out where we can add the most value to current systems with the least amount of effort, and I think that could make a big impact. It would finally get us a little bit of attention for the massive amount of research that we've done over the past 10 to 15 years, and bring it to the deep learning and machine learning world. So I'm excited about that; I think it's going to be enlightening.
A: Okay, I think we'll wrap it up. Thanks, everyone, for joining HTM Hackers' Hangout, July 5th. Just a reminder for all of you watching: please give the video a thumbs up and subscribe to our YouTube channel. It's encouraging, and it helps validate what I'm doing, basically doing these live streams and trying to be as transparent as possible, for you guys, all the community, and for my bosses, because it lets them know that what we're doing here is valuable.