From YouTube: Cases for Sparse Activations
Description
Marcus talks about sparsity in neural networks across Deep Learning and HTM, and Jeff talks about building bridges between the two spaces.
Divide the board into two not-quite-equal halves: the biologically based and the computational cases for sparse activation, that is, for having only a small subset of the neurons active at any point in time, whether the neurons are biological or artificial. So the basic biological case, of course, is that activation in cortex is sparse, and therefore we have a proof of concept that a system like this can work with sparse representations. And not only can it work, that's the way cortex does it; it's a proof of concept we might want to mimic.
So the nuance I wanted to bring up here is an axis I'm drawing: the axis of to what degree you believe that deep networks, artificial deep networks, capture some truth about the cortex. And I would say, look, I worded this a little bit carefully. You can believe this: you can believe that deep artificial networks capture some truths about the cortex and that they leave out a ton of truth as well.
Are the units, the individual neurons, of deep networks analogous to neurons in the brain? Or are the individual units in a deep neural network analogous to groups of neurons in the brain? Maybe multiple neurons actually act as one; maybe each perceptron unit is actually implemented by a population of cortical neurons. And there's an easy way, just using some of our neural mechanisms, that I could show that. For example, here's just a one-layer network.
These individual units of this network could be implemented, say, by a ring of pyramidal cells. You could have a ring where a bump of activity represents a scalar. Or, very similarly, we have something like the scalar encoder that we used before, which uses a population code to represent a scalar by moving a bump of activity along it.
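The bump-of-activity idea can be put in a few lines. This is a minimal illustration, not Numenta's actual scalar encoder; the function name and parameter values are invented for the example.

```python
def encode_scalar(value, min_val=0.0, max_val=100.0, n_bits=40, w=5):
    """Encode a scalar as a binary population code: a contiguous
    'bump' of w active bits whose position tracks the value."""
    if not (min_val <= value <= max_val):
        raise ValueError("value out of range")
    # Leftmost index of the bump, scaled across the available positions.
    span = n_bits - w
    start = round((value - min_val) / (max_val - min_val) * span)
    return [1 if start <= i < start + w else 0 for i in range(n_bits)]
```

Nearby scalars get overlapping bumps, so the code carries similarity information for free.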
They have similarities even though there's no direct analog between the units; that's okay, yeah. When I saw this chart you explained, I thought it was upside down, in the sense that if I were to think about the levels of analogy between deep learning networks and the cortex, I would say: okay, at the bottom, there's no analogy at all.
Then the next level: deep networks capture some of the truth about neurons, the way you said deep networks capture some of the truth about the cortex. And then I think the brain does this really sophisticated composition of computations, so the strongest level would be the overall computation being analogous. I'm just putting that out there; I'm not sure why you thought it was the other way. Well, I mean, I feel like a lot of people see it like this.
There are three levels, like: is the overall computation the same? Okay, these two systems might do the same computation in fundamentally different ways. Then figure out the representation and the algorithm, whatever the terminology is. I think mine sort of mirrors that: the idea of figuring out what the algorithm of the brain is. You could implement it using, say, probabilistic programming; you could use all these things. And I'm kind of going up in that direction, yeah.
The computation. So it's like, if I'm adding more and more detail, I would start with neurons, then the overall computation. I just wanted to point out that you could look at it that way. Please, yeah. I would have done it differently, but it doesn't matter, as long as we understand what you're trying to learn something from.
To add to what you said: the way I'd describe these networks, particularly convolutional networks, is that they model maybe 1% of the synapses. There's tons of circuitry that's not modeled at all. So it's not so much that they're not analogous; they might be analogous to 1% of what's there, and there's all of this other stuff that's not captured at all.
I mean, the point I was ready to make now is that if you argue that, if you say, neuron activity in cortex is sparse, therefore activity in deep networks should be sparse, I feel like that applies only if you believe this first level. Yes, if you believe this. So the way I've drawn this, these are more and more intense levels of belief. I'm not framing it the way you framed it; that's also another way. But it's saying, okay, they're similar in some hand-wavy way, in some general way.
There's a discussion that comes up all the time about the many active dendrites, where people say, well, you know, you could model a neuron with, say, fifty point neurons, because each point neuron would represent a segment of the dendritic tree. So a lot of people argue that the neuron model is actually equivalent to a small network of point neurons; that's the network argument. Oh yeah, no, I'm with you.
You might try to make the neurons binary if you believe this level, but you might do something totally different if you believe something down here. And it just feels like a useful thing to keep in mind: it is easy to take these punchlines, like "neural activity is sparse", but what conclusion you draw from that depends on where you are on this axis.
So I can go through one through five pretty quickly here. I just tried to break this into multiple pieces, and I started with something pretty conservative and just said: everything might still work with sparse activation. Everything? Oh yeah, everything. Yes, networks should still work with sparse activation, and if they do, there's a lot to be gained from that.
And with dropout, they randomly remove some percentage of the units, but that's all during training; there's no sparse activation once it's finished. They basically kill cells randomly. Does that make it more robust? It makes it more robust, but it's not really sparse activation. And in autoencoders there's no sparsity either; it's just a small number of units. Well, you're basically forcing it to code down. Yeah, but there's no sparsity; it's a smaller representation. Still, there's a difference.
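To make the distinction concrete, here is a toy sketch in plain Python (illustrative names, inverted-scaling variant of dropout): dropout zeroes random units only while training, so at inference the layer is fully dense again.

```python
import random

def dropout(activations, p=0.5, training=True):
    """Randomly zero a fraction p of units during training only;
    at inference every unit stays active (inverted scaling)."""
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1 - p)
            for a in activations]
```

Nothing here is sparse once training ends, which is the point being made above.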
There's a big difference in denseness: sparse is different from small versus big. You start off with something large and you make it smaller and smaller, but it's not sparse; it's still dense, just smaller. I mean, yeah, it could have had zeros all across there, and that would look like a sparse layer, I think.
Right, that doesn't make it a sparse representation; it's still dense, just with a smaller number of dimensions. Okay, so it's not a sparse representation, just a smaller one. Absolutely. In that sense it says: I'm trying to find the top K, the most important features, and I'm discarding everything else.
So I'm compressing the space down. Correct, right. But you're saying that doesn't count as sparsity, because it starts off with a large space and then takes a subset of it; it's projecting down to a smaller number of dimensions, right? Okay, so you're drawing a distinction between projection and sparsity. Sparsity, to me, means you have mostly zeros, and you can trivially game that: you can take something that's three-dimensional, add a thousand zeros, and just keep them always zero. So yes, it's sparse, but it's not really capturing the spirit of sparsity, right.
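The distinction being drawn here — projecting to fewer dimensions versus keeping the dimensionality but zeroing most components — can be shown with a toy sketch (illustrative functions; truncation stands in for a learned projection):

```python
def project_down(x, n_out):
    """Dimensionality reduction: fewer numbers, but every surviving
    coordinate is still nonzero -- a smaller *dense* representation."""
    return x[:n_out]  # stand-in for a learned projection

def top_k_sparsify(x, k):
    """Sparsity in the sense used here: same dimensionality,
    but mostly zeros -- only the k strongest components survive."""
    thresh = sorted((abs(v) for v in x), reverse=True)[k - 1]
    return [v if abs(v) >= thresh else 0.0 for v in x]
```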
Which brings us to, okay, another case to be made. This is the one I probably understand the least about, how it really works, but: enforcing sparsity through something like k-winners changes the tuning of units, and that may be beneficial. So if you make it so that a neuron doesn't activate because k-winners suppresses it, it wants to go learn something else, and that changes your tuning curves.
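One way this tuning change is often implemented is k-winner-take-all with boosting: units that rarely win get their competition score boosted, nudging chronically suppressed units to tune themselves to other inputs. A rough sketch of the idea, not Numenta's exact formulation:

```python
import math

def k_winners_boosted(x, duty_cycles, k, boost_strength=1.5):
    """k-winner-take-all with boosting. duty_cycles tracks how often
    each unit has been winning; units below the target duty cycle get
    boosted scores, so chronic losers get a chance to win and retune."""
    target = k / len(x)
    boosted = [v * math.exp(boost_strength * (target - d))
               for v, d in zip(x, duty_cycles)]
    thresh = sorted(boosted, reverse=True)[k - 1]
    out = [v if b >= thresh else 0.0 for v, b in zip(x, boosted)]
    # Update the running duty cycles with this round's winners.
    duty = [0.99 * d + 0.01 * (1.0 if o != 0.0 else 0.0)
            for d, o in zip(duty_cycles, out)]
    return out, duty
```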
Yeah, and now to throw in a couple of things that are obvious to some of us: sparsity makes pooling easier, probably. And maybe I should say, here I've drawn a picture of what I mean by pooling. I'm using it the way we usually describe it, which is only vaguely related to the way the machine learning community describes it. So here at time step one, maybe these input units are active, and we want this output unit to connect to them. Then at a different time step — well, there's no explicit time here, just a different input, whatever, like a year later.
So at some later point these other inputs are active, and, you know, if there's some supervisor that knows this output unit should be active, it connects to those as well, and there's no disruption, no confusion between the weights. That's just one core building block, and you might encourage more of that kind of thing in deep networks by making the representation sparse. And then also, on a related note: unions.
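The no-interference property of sparse pooling can be illustrated with binary synapses on a single output unit (toy sizes and thresholds, chosen for the example):

```python
pattern_t1 = set(range(0, 5))    # sparse input active at one time step
pattern_t2 = set(range(50, 55))  # a different sparse input, much later

synapses = set()        # binary synapses of one pooling output unit
synapses |= pattern_t1  # learn the first pattern
synapses |= pattern_t2  # learn the second; nothing stored is disturbed

def responds(active_inputs, threshold=4):
    """The unit fires if enough of its synapses see active inputs."""
    return len(synapses & active_inputs) >= threshold
```

Because the two patterns barely overlap in a large sparse space, learning the second one cannot corrupt recognition of the first.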
So the idea is that an input could be ambiguous, and so you have to maintain multiple representations and continue activating multiple representations up through the network. It's not super clear to me whether that'll work if these are dense and you're just adding them up.
I don't know what's going to happen if you add them up. If you think of each of these — say you represent a cat with a dense vector of numbers, like fifty-five, ten, three, and you represent a dog with a different vector, and you add those vectors up — what will happen? It doesn't seem intuitively sound to me that this will work.
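With sparse binary codes the union story looks much better: each member can still be recognized inside the union with near certainty, which is hard to arrange with dense real-valued sums. A toy sketch with made-up sizes:

```python
import random

random.seed(0)
N, w = 1000, 20  # 2%-sparse binary codes: 20 active bits out of 1000

def sparse_code():
    return frozenset(random.sample(range(N), w))

cat, dog, fish = sparse_code(), sparse_code(), sparse_code()
union = cat | dog  # ambiguous input: "cat or dog"

def member(union_code, code, fraction=0.9):
    """A code counts as present if most of its bits are in the union."""
    return len(union_code & code) >= fraction * len(code)
```

A random unrelated code shares almost no bits with the union, so false positives are vanishingly unlikely at this sparsity.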
There are ways they do it; they can even do it with dense binary codes, but it gets worse and worse the more you union, of course. The vital piece they have is that, on top of it all, there's an auto-associative memory that's able to take a portion of a code and activate the entire code.
Now, one thing I want to point out here — I think people have probably thought about this, but we're not going to get this for free; we'll have to do just a little bit of work. Okay, k-winners: if we're always enforcing exactly K winners, we might not get unions at all, right? So it might be that we want exactly K winners during learning, but then during inference maybe we let that k-winners be a little less strict.
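That train-versus-inference asymmetry might look something like this sketch (the names and the relaxation factor are invented for illustration): exact top-k while learning, a relaxed threshold at inference so unions of patterns can stay active.

```python
def k_winners_train(x, k):
    """Learning: enforce exactly the k strongest activations."""
    thresh = sorted(x, reverse=True)[k - 1]
    return [v if v >= thresh else 0.0 for v in x], thresh

def winners_inference(x, thresh, relax=0.8):
    """Inference: keep anything above a fraction of the learned
    threshold, so an ambiguous input can activate a union of codes."""
    return [v if v >= relax * thresh else 0.0 for v in x]
```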
Okay, and now the final part is where I bring our other algorithms into it, like temporal memory and everything else. We have a bag of tricks based on forming associations between sparse representations. We're really good at learning these kinds of things, including bidirectional associations.
So, like here, I've depicted temporal memory: you have a sparse code associated with another sparse code, associated with another sparse code, and of course you know what temporal memory does. And here I've sort of depicted physical space, which could be thought of as grid cell space, and associating those is sensory-motor, yeah.
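The sparse-code-to-sparse-code association at the heart of that picture can be sketched as a simple bitwise transition memory (a toy stand-in for temporal memory, not the real algorithm):

```python
transitions = {}  # presynaptic bit -> bits it has learned to predict

def learn_transition(code_a, code_b):
    """Associate sparse code A with its successor B, bit by bit."""
    for bit in code_a:
        transitions.setdefault(bit, set()).update(code_b)

def predict(code_a, fraction=0.5):
    """Bits voted for by enough of A's bits become the prediction."""
    votes = {}
    for bit in code_a:
        for nxt in transitions.get(bit, ()):
            votes[nxt] = votes.get(nxt, 0) + 1
    need = fraction * len(code_a)
    return {b for b, v in votes.items() if v >= need}
```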
This is such a fundamental operation, associating these codes with those codes, and then these sparse codes eventually mean "this feature at that location". And here I've drawn just a quick picture of voting: having a bunch of sparse codes that represent object identity and having them vote with each other. So once we have sparsity, it kind of opens the door to introducing these things.
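Voting over sparse object codes reduces, in the simplest reading, to intersecting candidate sets across columns (toy data, illustrative names):

```python
# Each "column" holds the set of objects consistent with what it has
# sensed so far; voting keeps only candidates every column agrees on.
column_1 = {"coffee_cup", "stapler", "phone"}
column_2 = {"coffee_cup", "phone"}
column_3 = {"coffee_cup", "soda_can"}

def vote(*candidate_sets):
    return set.intersection(*candidate_sets)
```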
There's no obvious, straightforward way, or at least I don't know it. Like, my mental model of what the networks are doing is that they're taking this high-dimensional input space and finding the shape of the boundaries of the classes: this is the class of coffee cups, and if a point is somewhere in this region of the high-dimensional space, it's a coffee cup. It's kind of inherently doing generalization.
It's kind of inherently saying this big region is coffee cups. And these mechanisms, to me, feel a lot like building graphs and connecting them to each other through associations: taking nodes, taking these representations, these codes, and connecting them to each other, and kind of building a big graph of locations to features — this feature, then the next feature, A to B to C to D, and so on.
Does any artificial neural network learn that way? If I go back to perceptrons: you've got these multi-layer networks that are connected, and basically they're doing what you said here. They're taking a series of inputs that are labeled, and by using the labels and the backpropagation learning algorithm, you end up with a representation up here that is more separable, an input to a classifier. But that's all it's doing; there's nothing else. It's just taking a bunch of labeled things and rearranging them.
Basically, if you want to call it that: 3D models of objects in the world, including how they behave, and this is all based on a small sensory patch in particular, not a big thing. So, to illustrate: I'm not going to put a whole image in here; over here I just put a small sensory patch, just one little piece of the retina, and I also have a motor signal.
With movement, it's able to learn complex structure and behaviors. And then over here we have something completely different, a classifier. So the challenge we have is that even if we understand a hundred percent what's going on here, it is so different from what's going on over there that it's kind of hard to take what we've learned here — our bag of tricks, if you want to call it that — and apply it over there. I didn't know whether that would even be possible. Then this idea was proposed about a year ago.
He said, you know: originally I thought we would have to go build the whole thing to start implementing this, but I think we can take some of the things we've learned from here and apply them to here. And when he said that, I was struck. He said: oh, I think we can take the basic idea — sparsity. You know, sparsity is sort of a substrate of this whole thing; all the mechanisms here depend on it.
If sparsity is the substrate and we can carry that substrate over, maybe we can make these things work over there. So the idea was: we don't have to convince the world to build these big structures all at once. We can just start with this one thing and prove that this one component, this one little thing, really improves things over there, even if nothing else from here carries over yet, and then we move these other ideas and bring them over too. Maybe we can; I don't know, I don't know if it's true, but I think that was the idea.
If we can make the case that sparsity really makes a huge difference over here, then people might pay a little bit more attention than they do to people who just say the biology makes it important. They might start asking: what else can happen over here? What else do I need to know about that we don't know about? Now, I don't think all of these principles can be brought over into that world, but I think one can be, and I'm hoping that's the case.
Maybe a couple more. But I still don't think the engineers who end up building these deep learning networks will look at these features and say: oh, let's go start building cortical columns, which do so many more things than any of this stuff ever did. So it's an interesting question how many of these principles can be brought over here. I started off not knowing whether any could. Sparsity? Great, that's going to be super good. Maybe we can bring over some more; I don't know.
It's true that for sparsity we are basically focusing on existing problems in existing networks and showing that they work better, at least for sparsity. We are saying: you can take an existing deep network with an existing problem, and you can make it better by adding sparsity into it. We're not trying to create a whole new domain of AI here.
We're trying to take on existing problems and existing benchmarks, trying to show that we beat these things in various ways. So to me, that means you're working on the exact same problems AI researchers are already working on, on the exact same data sets. At some point, to keep adding this stuff, you're going to have to be on different data sets, yeah.
But I think it's going to be a hard break. I think at some point we either keep working on these same problems that everyone else is working on, or there's going to be some sort of break where we say: you know what, we're not doing those problems anymore, and now we're doing something else.
But I still feel it's going to be hard. Marcus, it sounded like you were suggesting that we add more and more things gradually, and I just don't know if that works. I do think it has to come over here, but to me that feels like a jump; it doesn't feel like a continuum. At some point there's a break, a point where we say we're going to do some new problems.
About CNNs: it was shown early on that you can approximate any function with networks of linear sums of the inputs passed through nonlinearities — piecewise, you can get exponentials, Gaussians, anything, with those units. I'm just saying that was one seductive part of it, although being able to approximate any function is not the same as being Turing equivalent.
With one hidden layer you can map any static function, right? Okay — static. But that was a seductive part of it: the fact that the hidden layer looks like a complete representation of the space. What I see happening here is: well, that's fine, but there's a lot of power in the topology of the network.
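The "any static function with one hidden layer" point rests on the fact that a handful of ReLU-style units can build a localized bump, and sums of bumps can approximate anything. A sketch of that construction (illustrative: one triangular bump from three ReLU units):

```python
def relu(x):
    return max(0.0, x)

def bump(x, left, right, height=1.0):
    """A triangular bump on [left, right] built from three ReLU units
    in a single hidden layer; summing many such bumps approximates
    any static 1-D function to arbitrary precision."""
    mid = (left + right) / 2
    slope = height / (mid - left)
    return (slope * relu(x - left)
            - 2 * slope * relu(x - mid)
            + slope * relu(x - right))
```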
You have a representation from the neurophysiology that works. Okay, it's based in data; it's based in what we know about neurophysiology. I'm just saying, as a bridge: if you're going to make the claim that these networks can't do what we're doing, you're going to have to say what this thing has that they don't, not just what it apparently could do.
So the answer to that question isn't sparsity; sparsity is just the low-level substrate, right? It's really not the argument. The argument is learning three-dimensional models of the world, including the behaviors of objects, and learning three-dimensional models of my body, and so on. And to represent the structure of the world in a model, you need to have reference frames; you need the concept of a location in a reference frame; you need the concept of an orientation. Those are the constraints.
You have to have the concept of the state of the reference frame, to represent the state of the object, which has different forms and shapes and so on. It all builds upon movement. These are the things a cortical column has to do. I wouldn't start with sparsity: you know, the whole book so far talks about what cortical columns do, and I didn't mention sparsity once. I didn't need to mention sparsity; that's not the key thing.
The key thing is the reference frames, the locations, orientations, and movements. You're learning this model of the world — a three-dimensional model of the world, with movements, and how to get someplace and manipulate things. That basically incorporates all of robotics and everything. So to me, this sparsity thing — we care about it, but it's a wedge. All it's saying is: hey, you know, we can do this one thing we learned from the brain; look how much better it makes these things.
And they have the orientation, and they have where you are. All the knowledge of the world that my fingertip knows is in this column; any column can model complete objects. These reference frames are in here; the orientation is in here. We spent a lot of time talking about where these cells are, in the different layers, and how we're representing these things.
There's the motor signal and the sensory input, and all this modeling is occurring in here. It's a very sophisticated modeling function that has all these parts; you just can't get away from that. They have to be there; they're not someplace else. And there are 50,000 copies of these things. So this is what we have to convince people of.
No, I think, yeah — how we build bridges between these two things is an interesting question. But I think at least the line we need to end up at is: this is not just a dynamic CNN or whatever. This is a system that has reference frames and orientations and different types of state in it, and so on; it's a very complex function.
That function builds up layer by layer, but the neurons — you know, the reference-frame neurons, the grid cells and place cells — behave according to one set of principles, and the orientation cells, the head-direction equivalents, according to a different set of principles. They're all neurons, but they perform the function in different ways.
Let's say what we're doing right now is walking this way, and we say: look, we want to show you something. We're bringing this easy thing called sparsity, and look how great it is; look how it makes your network so much better. Now why don't you follow me back over here and I'll show you the rest of it.
I think we need both. We need the "see, this stuff actually does help", and then the "oh, by the way, look at the rest of this", so that maybe they'll have the epiphany that this is what they need and they need to move over here. Because there's no question in my mind that the world will end up over here; our job is figuring out how to convince people.
It's going to happen. So anyway, it's going to be both: we're going to show you that it helped here, and we're going to somehow get you to start thinking about these things and building them, and you'll realize that, hey, you have to do this. I mean, that's what our framework paper was about: this is the framework for intelligence, and you need this framework; you've got to have it; there's no other way of doing it.
What we had mapped out is to see whether there are other things we can bring over beyond sparsity that still work in the domain of artificial neural networks. You know, you mentioned active dendrites. Is it possible to add active dendrites to these networks and show additional benefits while still solving the same problems they're solving today? They're not solving sensory-motor problems today, at least not in any mainstream way, so eventually we will have to move from these problems to a different set of problems.
Well, maybe people's dialect changes over time, and maybe the number of people using it changes over time, so you could have a continuous learning system that you don't have to retrain at any point; it retrains as people use it. That would be the same problem, just with better performance through continuous learning. I'm not sure how beneficial that is. And out there in the community there's one-shot learning, so I'm just wondering if that's a step toward it: could one-shot learning really come from sparsity and the dendritic model?
We understand how one-shot learning occurs over here: this doesn't do gradient descent, you know, backpropagation. Over here you just form the synapses once on a dendrite, and from then on you can recall things, and we understand how neurons do that. Over there it's not clear you can. Actually, you might be able to: if the representations are sparse and you have active dendrites on all of this, you might be able to learn something new in one shot.
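The one-shot mechanism described here — form a few synapses once, with no gradient descent — can be sketched with sparse binary codes (toy associative store; names are invented for illustration):

```python
memory = {}  # presynaptic bit -> postsynaptic bits formed in one shot

def store_once(key_code, value_code):
    """A single presentation forms the synapses; nothing is iterated,
    and disjoint sparse keys cannot interfere with one another."""
    for bit in key_code:
        memory.setdefault(bit, set()).update(value_code)

def recall(key_code):
    counts = {}
    for bit in key_code:
        for out in memory.get(bit, ()):
            counts[out] = counts.get(out, 0) + 1
    need = len(key_code) // 2 + 1  # a majority of key bits must agree
    return {b for b, n in counts.items() if n >= need}
```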
They just don't really want to think about it. So that's why I feel like we're coming over this way, walking into their world, and saying: you know, you don't really care about neuroscience, but let me help you with the thing you're already doing. That's why I think it goes this way first, and then we try to get them to follow us back. To really convince the world, you have to show it working really well.
You have to show that it does something they can't do today. For me it's more like: oh, of course, I understand, it must be this way. But maybe other people don't think that way, and maybe we need to show it to them working, piece by piece. I'd hoped it would be an inspiration for people — that without us proving it to them, they'd honestly understand it, like: oh, I get it. But maybe instead we'll really have to show them, problem by problem, that it works best.
One of the things you're doing with the sparsity work is basically showing: you can add robustness we didn't have before, and computational efficiency we didn't have in the network before. I mentioned one-shot learning because I've kind of tracked that literature; people down here say, well, I'll have a smaller training set, one-tenth the size, and I can still train — that's part of that problem. And if you have a representation that, as you say, ideally works instantly, you know, you capture the generality of the schemas.
People would jump now — no, I'm saying there are two possibilities here. Either we can sort of incrementally move people in this direction by showing them systems that keep working better and better and better, or there's a point at which more and more people sort of realize that this is the place they need to start focusing on. It's like...
So again, maybe another way of looking at it: we need to bring along the engineering world with more and more working results, and then we also need to excite people. You know, take Hinton: he's talking about capsules, and he's trying to jump — he was trying to jump from here. He doesn't know where to, but he's trying to make that jump; he's out there. Capsules aren't it, you know; they're just an incremental improvement on these things. But there are people trying, and it used to be a lot of them.