From YouTube: Numenta Research Meeting - May 1, 2019 | artificial general intelligence, computational neuroscience
Description
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
So it's basically inspired by the SDR stuff, and I don't want to spend a huge amount of time on it because it's not directly relevant to what we're doing today. But it's about some interesting things. It's mostly taking these associative networks and showing that if you add sparsity, you can get better robustness and higher capacity.
But the basic idea is to show that the properties we talked about also hold here, and so that's the way he's creating these associative networks. Typically, with an associative network, you have an input population and then some hidden population of cells, and they're all sort of interconnected, and they learn to associate patterns.
Well, like in our sequence memory, I would call that a type of auto-associative network, but there isn't even a hidden layer. It's just the same neurons connecting back to the same neurons.
Yeah, so what he does is he creates these on the fly. Let's say you have some input pattern coming in and you want to learn it: he'll create a set of H new hidden neurons that form sparse connections to the input pattern, and that set of H neurons represents that pattern.
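A minimal NumPy sketch of that allocation step, as I understand it. The population sizes `N` and `H`, the subsample size `K`, and the pattern sparsity are illustrative made-up numbers, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100   # size of the binary input population (assumed)
H = 20    # hidden neurons allocated per stored pattern (assumed)
K = 5     # active input bits each hidden neuron subsamples (assumed)

def store(pattern_bits):
    """Allocate H new hidden neurons, each forming binary connections
    to a random subsample of the pattern's active input bits."""
    weights = np.zeros((H, N), dtype=np.int8)
    for h in range(H):
        picked = rng.choice(pattern_bits, size=K, replace=False)
        weights[h, picked] = 1
    return weights

# store one sparse pattern with 10 active input bits
pattern = rng.choice(N, size=10, replace=False)
W = store(pattern)
```

Each row of `W` is one newly allocated hidden neuron; together the H rows stand for the stored pattern.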
There's also an inhibitory version of the network, but, okay, let me finish this first. Each hidden neuron here randomly subsamples from the input population, and it's just binary weights, so it just creates these connections. Then they project back to the same input neurons that they receive projections from, and that's the excitatory version. So the idea is that, with this pattern, suppose you only invoke three of these bits up there.
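The excitatory recall from a partial cue could be sketched like this (same made-up sizes as before; the firing threshold `theta` is my own hypothetical parameter, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)
N, H, K = 100, 20, 5            # input size, hidden neurons, synapses each (assumed)

# store one pattern: each hidden neuron subsamples K of its 10 active bits
pattern = rng.choice(N, size=10, replace=False)
W = np.zeros((H, N), dtype=np.int8)
for h in range(H):
    W[h, rng.choice(pattern, size=K, replace=False)] = 1

# present a partial cue containing only 3 of the pattern's bits
cue = np.zeros(N, dtype=np.int8)
cue[pattern[:3]] = 1

# hidden neurons with enough overlap with the cue fire ...
theta = 1                       # hypothetical firing threshold
active_hidden = (W @ cue) >= theta

# ... and project excitation back onto the input bits they sampled
recalled = W[active_hidden].sum(axis=0) > 0
```

Because every hidden neuron only connects to bits of the stored pattern, anything recalled this way lies inside that pattern.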
And then here's the inhibitory version, where not only do you have these excitatory connections back, but you also have inhibitory connections back to the inactive neurons. So each hidden neuron is basically voting for its pattern: it's saying, put a plus one for my pattern's bits and a minus one for everything else.
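A toy sketch of that plus-one/minus-one voting, simplified so that each hidden neuron votes over the full input rather than a subsample (my simplification, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100                                   # input population size (assumed)
pattern = rng.choice(N, size=10, replace=False)

# one hidden neuron's vote vector: +1 for its pattern's bits, -1 for all others
vote = np.full(N, -1, dtype=np.int8)
vote[pattern] = 1

# several hidden neurons voting together: sum the votes, keep positive totals
votes = np.tile(vote, (5, 1))             # 5 hidden neurons storing the same pattern
recalled = votes.sum(axis=0) > 0
```

With the inhibitory minus-one votes, non-pattern bits end up with negative totals and are suppressed, so only the pattern's bits survive.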
And then, let's see, he can do stuff like this: this is a stored pattern, you give it a corrupted version of the input, and it can then recall the pattern. And then he has some nice mathematical analysis of the capacity of the networks, the probability of false positives, and the probability of false negatives.
A project about networks: taking basically Hopfield networks, making them sparse, and seeing how many patterns can be stored. The inspiration for this was that he pointed out that our layer 2/3, our object layer, our output layer, is all connected up like Hopfield networks, but it's sparse. And basically I think he had a lot of charts that were, basically...
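A toy sketch of what "making a Hopfield network sparse" could look like: standard Hebbian storage, then a random sparse mask on the weights. The sizes, the density, and the masking scheme are my own assumptions for illustration, not his actual experiment:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64          # neurons (assumed)
P = 3           # stored patterns (assumed)
density = 0.2   # fraction of weights kept after sparsification (assumed)

patterns = rng.choice([-1, 1], size=(P, N))

# standard Hopfield Hebbian weights, then a random symmetric sparse mask
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0)
mask = rng.random((N, N)) < density
mask = np.triu(mask, 1)
mask = mask | mask.T            # keep the weight matrix symmetric
W *= mask

def recall(state, steps=5):
    """Synchronous sign updates from a (possibly corrupted) state."""
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

# flip a few bits of a stored pattern and run the dynamics
probe = patterns[0].copy()
probe[:5] *= -1
out = recall(probe)
```

Sweeping `density` and `P` and measuring how often `out` matches the stored pattern is the kind of capacity experiment being described.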
I remember the coding networks failed miserably, because there was no way of representing a particular input in different contexts; there's no context, so you just lose your sequence. And anyway, they feed back directly to the same cells, but they don't activate the cells, right? They just depolarize them, which I think is the same issue there. Yeah.
Making these models a little more advanced so that this population can become, yeah, this is the area, yeah. And a lot of new issues occur; a lot of new ridiculous things can happen with this network. If you remove this top layer, you get this phenomenon, like, I have some slides that I created to demonstrate this once, but you get this weird phenomenon where a super common feature can sort of act as a wildcard.
I'm trying to see if it's a standard trick or not. I think almost all networks I can think of that work well have it as a separate population. Like with autoencoders, you have the input, then the encoding, and then you have a separate decoding layer, so that's a pure feed-forward system. Then RNNs: you also have an input, then you have a recurrent population typically, and then you decode it out. But even so, you still have that separate population.