Description
We have a visitor who recently finished her PhD at Purdue, and will be starting as a professor at Yale in August. She has an upcoming paper to be published in Nature called "Towards Spike-based Machine Intelligence with Neuromorphic Computing". She will be discussing this work with us at our Numenta Research Meeting.
We can start comparing the brain to the artificial now, of course. Deep learning has very much fueled everything, and we have state-of-the-art results on several cognitive applications, right, whether you are looking at perception tasks or at active planning tasks, so artificial intelligence is very much there. But if you look back onto the basic ideas of the brain, you know, neuroscience, computational neuroscience, has had several interesting research efforts done to understand what the brain does, but still, in that aspect, much about the brain remains unexplored.
So in this figure on the left, what we tried to bring out was: of all the neuroscience research effort that has been done, what are the key foundational observations that can be attributed to the remarkable cognitive capability that the brain has? What we essentially found was, one, the vast connectivity in the brain.
Now, what do I mean by the temporal processing ability? Information in the brain is being encoded and computed in the form of sparse events, right. So there is sparsity in the way the neurons are communicating; there is no constant overall activity, and that sparsity of information exchange is also part of what makes the brain that efficient. And, you know, very recently we conducted an analysis of just this.
Of all these three attributes, it is probably the temporal aspect of processing through sparse, event-driven communication, as well as the causality in the way the brain encodes and decodes information, that is giving you that efficiency.
There are foundational principles here similar to what the computer has. For example, every computer system today is based upon silicon transistors, right. It's a MOS transistor, metal-oxide-semiconductor; the standard MOSFET is basically the computing unit which is enabling every kind of technology today. So there is a MOSFET.
Now this MOSFET can be organized into different circuits so you can perform any operation, whether it is an AND operation or an OR operation, and, you know, an amalgamation of all of these will give you some kind of multiply operation, an add operation, et cetera. So for processing, you take these MOS transistors, put them together in an interesting way, and you implement circuits.
These circuits now have to be put into a processor, right, which will do your standard computing. While the computing is done by the processor, which you can think of as, you know, the neuronal units, there is also memory, which keeps track of what exactly needs to be stored, fetched, and decoded when you're performing any of the operations.
So if you look at your standard computer, it breaks into two main parts: you have a memory unit and you have a processing unit, right, and the interaction between these two is what builds the entire system. Okay, now the main issue: if you just compare with the brain, in the brain memory and compute are co-located, right, I mean it's all intertwined with each other. But in this standard von Neumann case, memory and processing are separate. Everything today is a von Neumann computing engine where the memory and the processor constantly exchange data
with each other, so this is a constraint, right: depending upon the bandwidth of the bus, the amount of data you can push into the memory, or push through the processor, is going to be constrained. This constraint, coming from the memory hierarchy that separates the memory and the processing unit, is known as the memory wall bottleneck, and this is what leads to inefficiency. If you look at neuronal architectures, on the other hand, you don't have that separation.
I was just trying to say that deep learning is also organized in a hierarchy, and in deep learning networks the hierarchical organization is what lets them learn these different feature representations of the input, and that has also become key. So, you know, at this point in time, in order to handle large-dataset recognition problems, I think going with the convolutional hierarchy is probably, you know, the state of the art.
A little bit of history of how neuromorphic computing came to be: Carver Mead, who proposed the field, essentially wanted to emulate biological neural circuits with transistors. So if you take a transistor, which is the standard silicon element, this transistor operates in different regimes. A transistor is nothing but a switch: it has an on state and it has an off state. But, you know, depending on which regime you operate it in, it can act as a switch, or it can have other behaviors.
You know, because we want to implement this efficiently: if we make the network sparse, using event-based communication, using event-driven activity, then in that case, you know, energy efficiency will be obtained. Because right now, unlike a standard artificial neural network, where each and every neuron computes every time any input comes, this neuron only computes when an event arrives.
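The arithmetic behind that efficiency argument can be sketched in a few lines (a toy illustration with made-up layer sizes and a 5% firing rate; none of these numbers are from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 1024, 256
weights = rng.normal(size=(n_in, n_out))

# Dense ANN layer: every input drives a full matrix-vector product,
# costing n_in * n_out multiply-accumulates regardless of the input.
x = rng.normal(size=n_in)
dense_out = x @ weights
dense_macs = n_in * n_out

# Event-driven SNN layer: only active (spiking) inputs trigger work.
# Each binary spike just adds its weight row into the accumulator.
spikes = rng.random(n_in) < 0.05          # ~5% of inputs fire this step
event_out = weights[spikes].sum(axis=0)   # accumulate rows for spiking inputs
event_macs = int(spikes.sum()) * n_out    # far fewer ops, and additions only

print(dense_macs, event_macs)  # event-driven does roughly 5% of the work
```

The comparison ignores memory traffic and control overhead; it only counts the arithmetic that sparsity lets you skip.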
As I said, you know, neuromorphic computing wasn't at first about artificial intelligence; it was just about emulating biological neural circuits, and only later on did artificial intelligence come into the picture, and the two were not done for the same reasons. Under the blue parts here, all these innovations are motivated towards energy efficiency, like the fact of distributed memory and processing.
People are also working on other elements that can possibly help. So besides using these silicon transistors, researchers started looking at different new materials that can probably do better at, you know, computing; one such emerging device is the memristor. Memristors are essentially passive components which can act as memory elements. It is an analog way of storing: instead of having a bunch of transistors together, which is what you need to realize SRAM cells, a single memristive element can hold a value.
Okay, in the 1990s GPUs were formally introduced. So in the 80s and 90s we were mostly working on CPUs, where you had, you know, single-core processing, but then GPUs were introduced with the idea that, for certain applications, you do not need this complex single-core processing. Maybe if I make my cores very simple, where they only need to do simple operations, but I can then multiply them and do parallel processing, then that way I will have much more efficiency.
I think the way to go forward with hardware, tapping into all the efficiency benefits, would be, if you want to implement a spiking neural network in the best, you know, distributed manner, to have this hybrid of global synchronous and local asynchronous communication, the communication being, I mean, you know, each component sending some information to the other.
Right now, for the last few years, the effort has been: can we even train spiking deep networks, right, with an extreme convolutional hierarchy, let's say for ImageNet recognition or CIFAR classification? But then we have to go beyond these vision tasks. We have to do reinforcement learning, which is completely, you know, unexplored in the SNN domain. I think only very recently, in the year 2019, some spike-based reinforcement learning work appeared.
Right now, what people in this active research area are trying to see is, as I said earlier: is silicon the best way to implement any of this?
with a little bit of memory and compute co-located. Now the second thing we are looking at is: can we have algorithm-hardware co-design? Because right now, you know, whatever algorithms you build, you cannot have these two separate, with, of course, the algorithm going in one direction and the hardware going in the other direction.
If you look at what it takes, a dramatic change in hardware is a long process, and of course we know, from a research point of view, when you look at emerging devices, that is still research, right, I mean you don't know. But the hope is that algorithm work will progress much faster, and today, with spiking neural networks, that is the key challenge: whether we can address it, at least with the current hardware that we have.
Okay, so if you look at the timeline on my right, in the 1990s it was Wolfgang Maass who started looking at using spiking neurons as computational units in a neural network. Till then neural networks were there, but you would use a nonlinear functionality like, you know, a sigmoid. The question became: can we use this spike functionality instead?
Then, you know, in 2012 AlexNet was released, and from there on the deep learning revolution happened. Spiking, on the other hand: if you see the main proposals, after the proposal of spiking neurons as computational units, the main thing people would do was try to see if you could get a gradient descent version of training for them.
The current active research areas are, you know: one, can we use spiking neural networks to understand brain mechanisms? Then, how do we extend spiking networks to state-of-the-art tasks? We want to see if we can go beyond vision, go from, you know, static classification to reinforcement learning or continual learning tasks within a spiking neural network, utilizing this extra temporal dimension you have in order to, you know, address some of these aspects.
Secondly, one thing is that we are using rate coding mechanisms, because today we don't really have datasets that give you spikes, the spiking data; everything is an image, right. So that's why we're using some sort of coding mechanism to convert image data into spike data.
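One common form of such a conversion is Poisson rate coding, where each pixel fires with probability proportional to its intensity. A minimal sketch (the random image and timestep count here are made up for illustration; this is a generic encoding, not the speaker's specific pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy 4x4 "image" with intensities in [0, 1]; in practice this
# would be, e.g., a normalized MNIST digit.
image = rng.random((4, 4))

def poisson_encode(img, timesteps=100):
    """Rate-code an intensity image into a binary spike train.

    At each timestep a pixel emits a spike with probability equal to
    its intensity, so brighter pixels fire at a higher average rate.
    """
    return (rng.random((timesteps,) + img.shape) < img).astype(np.uint8)

spikes = poisson_encode(image, timesteps=500)

# The empirical firing rate recovers the pixel intensity.
rates = spikes.mean(axis=0)
print(np.abs(rates - image).max())  # small, given enough timesteps
```

Note the trade-off the talk alludes to: the coding is lossy for short spike trains, and the "temporal" axis here carries no real scene dynamics, unlike an event-camera recording.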
But a dataset recorded with an event camera captures changes in the environment's activity and gives back events exactly when they happen, so there the temporal relation is very much intact in the dataset. Using that kind of dataset in order to do image processing and learning, if it works, would be even more interesting, and that could be very, very efficient, right.
What about videos? Patterns within videos are still pixelated frame data, right, so we have to come up with a way to extract out the right spike signatures from those videos, such that, you know, if, let's say, in a video something is moving, you need to extract out that object-movement-related information.
We also need to develop good training algorithms for these systems. In fact, the problem is that while these neuromorphic chips are present, we cannot easily use them to develop training algorithms on their side; training there is restrictive. You want to go towards batch size 1, the event-by-event regime, but GPUs are not good at doing that: as soon as you reduce the batch size to 1, you will see that the GPU power consumption per sample is pretty high, and it essentially renders the
GPU useless. So if you have these frameworks, like PyTorch or Caffe or TensorFlow, and you want to add a proper spiking, you know, network algorithm, can we utilize these already available frameworks, the already available functionalities for convolutions or, you know, any normal operations?
Interestingly, we use integrate-and-fire dynamics in order to model the neuron. So you are doing the same thing: the input spikes get multiplied with the weights and get summed before you push the result through the output. Now, the neuron has a functionality which is denoted by that signal there; it's nothing but the membrane potential, which acts as the activation function, the blue line there. The membrane potential keeps on increasing whenever a spike arrives at the input side.
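Those membrane dynamics can be sketched in a few lines (a leaky integrate-and-fire variant; the leak, threshold, and input values here are illustrative, not from the talk):

```python
import numpy as np

def lif_neuron(input_current, leak=0.95, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over time.

    The membrane potential leaks toward zero, integrates the incoming
    current each step, and emits a spike (then resets) on crossing the
    threshold -- the hard reset is the discontinuity discussed next.
    """
    v = 0.0
    potentials, spikes = [], []
    for i in input_current:
        v = leak * v + i          # leaky integration of the input
        if v >= threshold:        # threshold crossing -> spike
            spikes.append(1)
            v = 0.0               # hard reset after firing
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(potentials), np.array(spikes)

# Constant drive: the neuron charges up, fires, resets, and repeats.
pot, spk = lif_neuron(np.full(50, 0.3))
print(spk.sum())  # several regularly spaced spikes
```

The rising blue line in the figure corresponds to `v` between spikes; each reset produces the sawtooth shape typical of integrate-and-fire traces.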
So this is how, you know, the neuronal membrane dynamics work. And, interestingly enough, you see that the thresholding functionality is what accounts for the discontinuity in the neuron's transfer function. So you cannot exactly use standard gradient descent mechanisms and, you know, expect that it will go well, because there is a discontinuity there which will be difficult to differentiate in your standard gradient descent mechanisms.
then you will have a smaller weight update. And what do I mean by the correlation? That the post fires after the pre, right. So if the causal effect is present, where the post fires after the pre, then there will be a positive update, a potentiation; and if causality is absent, that means there is a depression: when the post fires before the pre, you will have a negative update. So this is the spike-timing-dependent learning rule.
with lower intensity. So what exactly is the thing you're trying to achieve with the approximate gradient in training? Let's say you have this neural network; you do standard forward propagation and backpropagation, right. Doing forward propagation, you will get all the activation values and you will calculate the error.
This error will then be propagated back to calculate the gradients. Now, for doing the weight updates, if you employ the chain rule, this quantity F-prime comes in, which is nothing but the derivative of the activation function. The gradient of the activation unit actually comes into play when you are calculating any weight update through the chain rule, and in the case which I showed you earlier, that derivative is ill-defined at the spike discontinuity.
So you use an approximate version of F-prime. That's what binary networks do: they don't differentiate the hard threshold directly; they go through some approximation, you know, using a straight-through estimator or something. So here, given that we have a temporal correlation, can we come up with a novel way of approximating that function such that we are able to integrate all the temporal constraints while doing gradient descent? All the efforts, from our group and across every group today doing gradient-based spiking network design, are trying to come up with such approximations.
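One common flavor of such an approximation is a surrogate gradient: keep the hard threshold in the forward pass, but back-propagate through a smooth stand-in. A sketch (the triangular shape and its width are one design choice among several, not the speaker's specific method):

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Hard threshold: the actual spiking nonlinearity (non-differentiable)."""
    return (v >= threshold).astype(float)

def surrogate_grad(v, threshold=1.0, width=0.5):
    """Smooth stand-in for the spike derivative in the backward pass.

    A triangular bump centered on the threshold; the true derivative is
    zero almost everywhere with a delta at the threshold, so this gives
    gradient descent something usable near the firing boundary.
    """
    return np.maximum(0.0, 1.0 - np.abs(v - threshold) / width)

v = np.linspace(0.0, 2.0, 9)
print(spike_forward(v))    # step function in the forward pass
print(surrogate_grad(v))   # nonzero gradient only near the threshold
```

In a full training loop this pairing is usually wrapped in a custom autograd function, with the membrane dynamics unrolled over time so the temporal credit assignment the talk mentions can flow through.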
And so, as I said, okay, right: if you want the nonlinear functionality, let's say you have a nonlinear functionality like this, you do not need a separate unit. You can utilize a transistor in a different regime in order to implement that nonlinear functionality. Okay, and this gives a versatile element which can be used in a spiking model.
People have the physical crossbar implementation, and they are also trying to emulate logical crossbars, so doing crossbar operations logically rather than physically. This means you have a memory element sitting at a cross point which will help you do, say, OR logic when you need to do an OR operation. Take, for instance, if I have an SNN.
Okay, so now let's see: the horizontal lines are the inputs and the vertical lines are the outputs, okay. So you send in a voltage on the horizontal line, and at each cross point there is a resistive element. From Ohm's law we know that when a voltage is applied across a resistor, it gives you some current.
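That analog dot product can be checked numerically: Ohm's law at each cross point plus current summation along each column (the conductance and voltage values below are made up for illustration):

```python
import numpy as np

# Conductance matrix G: one resistive element per cross point
# (rows = horizontal input lines, columns = vertical output lines).
G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.1]])   # illustrative conductances

# Input voltages applied on the three horizontal lines.
V = np.array([0.3, 0.7, 0.5])

# Ohm's law gives I = V * G at each cross point; Kirchhoff's current
# law sums the currents down each column, so the column currents are
# exactly a vector-matrix product, computed in one analog step.
I = V @ G
print(I)  # column currents = weighted sums of the inputs
```

This is why crossbars are attractive for the weighted-sum stage of a spiking (or artificial) neuron: the multiply-accumulate happens in the physics rather than in sequential digital logic.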
You know, for human-perception tasks, maybe in the final classifier it does not make sense to have a spike-based softmax; maybe there you keep the standard softmax. So you have to know which approaches are more efficient for accuracy and which for energy efficiency; all of that has to be taken in. So with that, this paper just tries to make the case for neuromorphic computing, but we have to come at it with a holistic perspective from both sides.
All right, thank you for joining the livestream. If you liked this content, please give it a thumbs up and subscribe to the channel. It helps, you know: it communicates to my bosses that what I'm doing here is useful. Stay tuned for another livestream in less than an hour; I'm going to be back online doing a live-coding, building-HTM-systems livestream. I am taking off because I've got to go eat lunch first, so see you guys in less than an hour.