From YouTube: Numenta On Intelligence Podcast, Episode #13: Subutai Ahmad on Applying HTM Ideas to Deep Learning

Description

After a summer hiatus, we are back with a new episode of the Numenta On Intelligence Podcast.

Host Matt Taylor talks to Numenta VP of Research Subutai Ahmad about the effort he has been leading to apply Numenta research and HTM principles to deep learning systems.

Paper link: https://arxiv.org/abs/1903.11257

0:51 Why is Numenta looking at deep learning?
2:43 What are the inherent problems in deep learning and how can neuroscience help?
3:06 Continuous learning
3:48 Catastrophic forgetting: “If you have really sparse representations and a better neuron model and a predictive learning system you can learn continuously without forgetting the old stuff.”
5:11 What does high dimensionality mean in deep learning and neural networks?
6:34 Why does sparsity help? (see the overlap sketch after the timestamps)
11:23 Other types of sparsity: dendrites are tiny, sparse computing devices
14:47 Another type of sparsity: Neurons are independent sparse computing devices
15:34 Numenta’s paper on sparsity: How Can We Be So Dense? The Benefits of Using Highly Sparse Representations
19:34 The surprising benefit of combining sparse activations AND sparse weights, a rarity in the machine learning community (see the code sketch after the timestamps)
20:52 Benchmarks and models we’re working with: MNIST, CIFAR-10, VGG-19
24:22 What does the future of research look like at Numenta?
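
The 5:11 and 6:34 segments rest on a property of sparse high-dimensional representations that Numenta has written about elsewhere: two random sparse binary vectors almost never overlap significantly, so sparse patterns are robust to noise and interference. Here is a minimal back-of-envelope sketch in Python; the function name `p_overlap_at_least` and the parameter choices are illustrative, not from the episode:

```python
# Probability that a random k-sparse binary vector of dimension n
# overlaps a fixed k-sparse vector in at least theta positions.
# (Standard counting argument: choose b of the k active bits and
# k - b of the n - k inactive bits, divided by all C(n, k) choices.)
from math import comb

def p_overlap_at_least(n: int, k: int, theta: int) -> float:
    total = comb(n, k)
    return sum(
        comb(k, b) * comb(n - k, k - b)
        for b in range(theta, k + 1)
    ) / total

# Example: 2000 units, 2% active (k = 40). The chance that a random
# pattern falsely matches on 20 or more bits prints as vanishingly small.
print(p_overlap_at_least(2000, 40, 20))
```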
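The 19:34 segment describes combining sparse activations with sparse weights, the central idea of the paper linked above. Below is a minimal PyTorch-style sketch of that combination, assuming a k-winners step for activations and a fixed random mask for weights; the names `k_winners` and `SparseLinear` are illustrative and are not Numenta's nupic.torch API:

```python
import torch
import torch.nn as nn

def k_winners(x: torch.Tensor, k: int) -> torch.Tensor:
    """Sparse activations: keep the k largest values per sample, zero the rest."""
    topk = torch.topk(x, k=k, dim=1)
    mask = torch.zeros_like(x)
    mask.scatter_(1, topk.indices, 1.0)
    return x * mask

class SparseLinear(nn.Linear):
    """Sparse weights: a linear layer whose zero pattern is fixed at init."""

    def __init__(self, in_features: int, out_features: int,
                 weight_density: float = 0.5):
        super().__init__(in_features, out_features)
        # Keep roughly weight_density of the weights; zero and freeze the rest.
        mask = (torch.rand_like(self.weight) < weight_density).float()
        self.register_buffer("weight_mask", mask)
        with torch.no_grad():
            self.weight *= self.weight_mask

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-apply the mask so pruned weights stay zero during training.
        return nn.functional.linear(x, self.weight * self.weight_mask, self.bias)

# Usage: a hidden layer with ~50% nonzero weights and ~10% active units.
layer = SparseLinear(784, 256, weight_density=0.5)
x = torch.randn(32, 784)
h = k_winners(torch.relu(layer(x)), k=26)  # at most ~10% of 256 units active
print((h != 0).float().mean())             # fraction of active units
```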