27 Dec 2022
In this research meeting, Jeff gave a synopsis of the Complementary Learning Systems Theory presented in the paper “What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated” by Dharshan Kumaran, Demis Hassabis and James McClelland.
Paper (2016): https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(16)30043-2
Other paper mentioned:
“Sparseness Constrains the Prolongation of Memory Lifetime via Synaptic Metaplasticity” (2008): https://academic.oup.com/cercor/article/18/1/67/319707
- - - - -
Numenta has developed breakthrough advances in AI technology that enable customers to achieve 10-100X improvement in performance across broad use cases, such as natural language processing and computer vision. Backed by two decades of neuroscience research, we developed a framework for intelligence called The Thousand Brains Theory. By leveraging these discoveries and applying them to AI systems, we’re able to deliver extreme performance improvements and unlock new capabilities.
Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://numenta.com/news-digest/
Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter
Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta
Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/
Our Website:
https://numenta.com/
- 6 participants
- 52 minutes
10 Jun 2022
Guest speaker Massimo Caccia introduces a simple baseline for task-agnostic continual reinforcement learning (TACRL). He first gives an overview of continual learning, reinforcement learning, and TACRL. He then goes through empirical findings that show how different TACRL methods can be just as performant as common task-aware and multi-task methods.
Papers:
“Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline”: https://arxiv.org/abs/2205.14495
"Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments": https://www.frontiersin.org/articles/10.3389/fnbot.2022.846219/full
- - - - -
Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.
Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://numenta.com/news-digest/
Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter
Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta
Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/
Our Website:
https://numenta.com/
- 4 participants
- 1:16 hours
5 Apr 2022
Subutai Ahmad gives a tutorial on the voting mechanisms in cortical columns developed by Numenta and answers questions from the team.
Whiteboard photo: https://tinyurl.com/5apr-whiteboard
Columns paper "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World": https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
- 7 participants
- 1:55 hours
18 Mar 2022
Subutai reviews the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" and compares it to our dendrites paper "Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments".
Paper: https://arxiv.org/abs/1701.06538
Dendrites Paper: https://arxiv.org/abs/2201.00042
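The core of the sparsely-gated layer is top-k gating: only the k experts with the largest gate logits are evaluated for a given input. A minimal sketch (the paper's noisy gating and load-balancing terms are omitted; the weights are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k_gating(x, w_gate, k=2):
    """Keep the top-k expert logits, softmax over just those, zero the rest."""
    logits = x @ w_gate                       # one logit per expert
    top = np.argsort(logits)[-k:]             # indices of the k largest logits
    gates = np.zeros_like(logits)
    gates[top] = np.exp(logits[top] - logits[top].max())
    gates[top] /= gates[top].sum()            # renormalize over selected experts
    return gates

n_experts, d = 8, 16
w_gate = rng.normal(size=(d, n_experts))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

x = rng.normal(size=d)
gates = top_k_gating(x, w_gate, k=2)
# Only the k selected experts are evaluated -- the source of the layer's efficiency.
y = sum(g * (x @ experts[i]) for i, g in enumerate(gates) if g > 0)
print(np.count_nonzero(gates))  # 2
```

The parallel Subutai draws is that active dendrites also route each input through a sparse, context-dependent subset of the network's capacity.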
- 7 participants
- 1:15 hours
11 Feb 2022
Heiko Hoffmann gives an overview of the “Neural Descriptor Fields” paper. He first explains how Neural Descriptor Fields (NDFs) represent key points on a 3D object relative to its position and pose, and how NDFs can be used to recover an object’s position and pose. He then discusses the paper’s simulation and robot-experiment results and highlights the paper’s useful concepts and limitations.
In the second half of the meeting, Karan Grewal presents the “Vector Neurons” paper. He first gives a quick review of the core concepts and terminology of the paper. Then he looks into the structure of the paper’s SO(3)-equivariant neural networks in detail and how the networks represent object pose and rotation. Lastly, Karan goes over the results of object classification and image reconstruction and points out a few shortcomings.
“Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation” by Anthony Simeonov et al.: https://arxiv.org/abs/2112.05124
“Vector Neurons: A General Framework for SO(3)-Equivariant Networks” by Congyue Deng et al.: https://arxiv.org/abs/2104.12229
Datasets mentioned:
Shapenet: https://shapenet.org/taxonomy-viewer
ModelNet40: https://3dshapenets.cs.princeton.edu/
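SO(3) equivariance, the property both papers build on, means that rotating the input rotates the output by the same rotation: f(Rx) = R f(x). A quick numerical check using the centroid of a point cloud, a trivially equivariant feature (illustrative only, far simpler than the networks in the papers):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    """Random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # canonicalize column signs
    if np.linalg.det(q) < 0:          # ensure det = +1 (rotation, not reflection)
        q[:, 0] *= -1
    return q

def centroid(points):
    """A trivially SO(3)-equivariant feature: f(R @ X) = R @ f(X)."""
    return points.mean(axis=0)

X = rng.normal(size=(100, 3))         # a point cloud
R = random_rotation()

lhs = centroid(X @ R.T)               # rotate the input, then compute the feature
rhs = R @ centroid(X)                 # compute the feature, then rotate it
print(np.allclose(lhs, rhs))          # True: the feature is equivariant
```

Vector Neurons generalize this: every intermediate feature is a list of 3D vectors that transforms this way, so the whole network commutes with rotation.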
- 7 participants
- 1:08 hours
22 Dec 2021
Numenta Research Intern Abhiram Iyer presents the paper “Learning Physical Graph Representations from Visual Scenes” by D. Bear et al.
He first gives some context and an overview of physical scene graphs. He then explains the pipeline for building these graphs in the deep learning system: (1) feature extraction, (2) graph pooling, (3) graph vectorization, and (4) graph construction. Lastly, he goes through the results from the paper, the caveats, and his main takeaways.
Paper: https://proceedings.neurips.cc/paper/2020/file/4324e8d0d37b110ee1a4f1633ac52df5-Paper.pdf
- 4 participants
- 1:21 hours
15 Nov 2021
In part two, visiting scientist Jeremy Forest continues his overview of the plasticity mechanisms in the brain and focuses on activity levels in neurons rather than dendritic spines.
He talks about how memory ensembles are dynamic and how neurons encode signals. He then explores different behavioral expressions of memory and how context impacts plasticity mechanisms in neurons. He makes the case that all these mechanisms interact on widely different timescales, and that timescale should be considered when developing deep learning networks.
Part One: https://youtu.be/gbzMt4-3YhY
- 5 participants
- 54 minutes
10 Nov 2021
In part one of the presentation, visiting scientist Jeremy Forest gives a brief overview of the plasticity mechanisms in the brain. He goes over how neurons interact and change over time with different plasticity events in dendritic spines, and covers topics such as molecular mechanisms, synaptic activations, and structural plasticity.
Throughout the presentation, Jeremy points out the biological mechanisms, such as time-scale interactions, that could potentially be modeled in AI systems and bring many benefits, such as continual learning and efficiency.
In part two, Jeremy discusses the dynamic neuronal activities that impact plasticity mechanisms.
Part Two: https://youtu.be/rLpb3J4AHXE
- 5 participants
- 44 minutes
18 Oct 2021
Guest speakers Johannes Leugering and Pascal Nieters talk about their work on neural computation with dendritic plateau potentials. Johannes first frames the problem of sequence processing and makes the case that a neural model based on active dendrites and dendritic plateau potentials would help solve it. Pascal then explains their recent work on the computations in a neural model with segmented dendrites and one with stochastic synapses. He concludes the presentation by discussing the implications of this model, and the team follows with questions and discussion.
Preprint v4: https://www.biorxiv.org/content/10.1101/690792v4.abstract
Paper at NICE workshop: https://dl.acm.org/doi/10.1145/3381755.3381763
Presentation at NICE workshop: https://www.youtube.com/watch?v=qLaq1m0xVuQ
Presentation at the Computational Cognition Workshop: https://www.youtube.com/watch?v=kVyY776m1PM
Other papers mentioned:
“Functional clustering of dendritic activity during decision-making”: https://elifesciences.org/articles/46966
"Embedded ensemble encoding hypothesis: The role of the “Prepared” cell": https://onlinelibrary.wiley.com/doi/full/10.1002/jnr.24240
"Local glutamate-mediated dendritic plateau potentials change the state of the cortical pyramidal neuron": https://journals.physiology.org/doi/abs/10.1152/jn.00734.2019
"Compartmentalized dendritic plasticity and input feature storage in neurons": https://www.nature.com/articles/nature06725
"Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses": https://elifesciences.org/articles/60936
0:00 Johannes Leugering
30:30 Pascal Nieters
55:25 Q&A
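As a toy picture of the gating role a plateau potential plays (the parameters and dynamics here are simplified assumptions, not the authors' model): a dendritic segment that recognizes its synaptic pattern opens a prolonged plateau, during which otherwise ordinary input can drive somatic spikes:

```python
import numpy as np

def run(inputs, segment_weights, seg_threshold=2.0, plateau_len=5):
    """Toy neuron: a dendritic segment that detects a synaptic pattern opens a
    plateau potential that gates somatic spiking for the next few timesteps."""
    plateau = 0                            # timesteps remaining in current plateau
    spikes = []
    for x in inputs:                       # x: binary synaptic activity vector
        if segment_weights @ x >= seg_threshold:
            plateau = plateau_len          # pattern detected: (re)start the plateau
        # the soma fires only while a plateau holds it near threshold
        spikes.append(1 if (plateau > 0 and x.sum() >= 1) else 0)
        plateau = max(0, plateau - 1)
    return spikes

weights = np.array([1.0, 1, 1, 0, 0])      # the segment listens to synapses 0-2
stream = [np.array(v) for v in
          ([0, 0, 0, 1, 0],                # input alone: no spike
           [1, 1, 1, 0, 0],                # segment's pattern: plateau opens, spike
           [0, 0, 0, 1, 0],                # plateau still open: same input now spikes
           [0, 0, 0, 0, 0])]               # no input: silent
print(run(stream, weights))                # [0, 1, 1, 0]
```

The plateau's multi-timestep duration is what lets this kind of model bridge delays between sequence elements, which is the core of the talk's argument.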
- 4 participants
- 1:26 hours
30 Sep 2021
Drawing inspiration from the Thousand Brains Theory of Intelligence, guest speakers Tim Verbelen and Toon Van de Maele from Ghent University share their recent work on learning object identity and pose representations from pixel observations.
0:00 Introduction
3:42 Active Inference
18:40 Visual Foraging
26:40 Cortical Column Networks
44:28 Q&A
➤ Paper - https://arxiv.org/abs/2108.11762
➤ Blog post - https://thesmartrobot.github.io/2021/08/26/thousand-brains.html
➤ For more information on The Smart Robot: https://thesmartrobot.github.io/
Abstract
Although modern object detection and classification models achieve high accuracy, these are typically constrained in advance on a fixed train set and are therefore not flexible enough to deal with novel, unseen object categories. Moreover, these models most often operate on a single frame, which may yield incorrect classifications in case of ambiguous viewpoints. In this paper, we propose an active inference agent that actively gathers evidence for object classifications, and can learn novel object categories over time. Drawing inspiration from the Thousand Brains Theory of Intelligence, we build object-centric generative models composed of two information streams, a what- and a where-stream. The what-stream predicts whether the observed object belongs to a specific category, while the where-stream is responsible for representing the object in its internal 3D reference frame. In this talk, we will present our models and some initial results both in simulation and on a real-world robot.
Bio
Tim Verbelen received his M.Sc. and Ph.D. degrees in Computer Science Engineering at Ghent University in 2009 and 2013 respectively. Since then, he has been working as a senior researcher for Ghent University and imec. His main research interests include perception and control for autonomous systems using deep learning techniques and high-dimensional sensors such as camera, lidar and radar. In particular, he is active in the domain of representation learning and reinforcement learning, inspired by cognitive neuroscience theories such as active inference.
Toon Van de Maele received his M.Sc. degree in Computer Science Engineering at Ghent University in June 2019. Since then, he has been working on a Ph.D. degree on learning representations for 3D scenes at Ghent University. His main interest lies in the combination of deep learning approaches for robotic perception, using biologically-inspired techniques.
- - - - -
Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.
Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://tinyurl.com/NumentaNewsDigest
Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter
Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta
Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/
Our Website:
https://numenta.com/
0:00 Introduction
3:42 Active Inference
18:40 Visual Foraging
26:40 Cortical Column Networks
44:28 Q&A
➤ Paper - https://arxiv.org/abs/2108.11762
➤ Blog post - https://thesmartrobot.github.io/2021/08/26/thousand-brains.html
➤ For more information on The Smart Robot: https://thesmartrobot.github.io/
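The core idea of actively gathering evidence across viewpoints can be illustrated with a minimal sequential Bayesian update. The per-view likelihoods below are hypothetical numbers for illustration only, not the paper's generative model:

```python
# Hypothetical per-viewpoint likelihoods p(observation | category) for 3 categories.
# A single ambiguous view is not decisive; accumulating views sharpens the posterior.
likelihoods = [
    [0.40, 0.35, 0.25],  # view 1: ambiguous between categories 0 and 1
    [0.50, 0.20, 0.30],  # view 2
    [0.60, 0.15, 0.25],  # view 3
]

posterior = [1 / 3] * 3  # uniform prior over the 3 categories
for lik in likelihoods:
    posterior = [p * l for p, l in zip(posterior, lik)]  # Bayes update
    total = sum(posterior)
    posterior = [p / total for p in posterior]           # renormalize

best = max(range(3), key=lambda i: posterior[i])
print(best)  # → 0 (category 0 wins after three views)
```

Each additional view multiplies into the posterior, so ambiguity from any single frame is resolved over time.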
- 7 participants
- 1:04 hours
29 Sep 2021
Marcus Lewis frames the problem of knowledge transfer among cortical columns in the Thousand Brains Theory and explores potential solutions. He first explains how, at a high level, Numenta's model tackles this problem by having cortical columns communicate a description of an object horizontally through lateral connections.
Marcus then explains how this "horizontal description of an object" mechanism suggests a different mindset for the Thousand Brains Theory. Influenced by Douglas Hofstadter’s book “Gödel, Escher, Bach,” he states that a cortical column fundamentally needs to learn to represent descriptions of objects so that, given a description of a novel object, it can make predictions and recognize that object no matter where the description comes from. The column can also memorize those descriptions, but this memorizing functionality is secondary to being able to describe an object in the first place.
Jeff then dives into what information about the object is needed for columns to communicate and vote on what they’re sensing. He makes the case that columns communicate locally through lateral connections in the neocortex and work in parallel with each other. The team then asks questions and discusses.
- 6 participants
- 48 minutes
27 Sep 2021
Marcus Lewis reviews a few papers from Dana Ballard and highlights some insights related to object modeling and reference frames in the Thousand Brains Theory.
Marcus first gives an overview of what “animate vision” is, as outlined in Ballard’s papers, and defines optic flow. Marcus then makes a case for using a world-centric, viewer-oriented location relative to a fixation point to represent objects and depth.
In the second part of his presentation, he compares Numenta’s previous sensorimotor research (where the motor command is received by the system) with Ballard’s sensorimotor “animate vision” system (where the motor command is generated by the system) for object modeling. He evaluates whether the two sensorimotor frameworks lead to different object modeling solutions and discusses the opportunities that could stem from Ballard’s framework.
Papers by Dana Ballard:
➤ “Animate Vision” (1990): https://www.sciencedirect.com/science/article/abs/pii/0004370291900804
➤ “Eye Fixation and Early Vision: Kinetic Depth” (1988): https://ieeexplore.ieee.org/document/590033
➤ “Reference Frames for Animate Vision” (1989): https://www.ijcai.org/Proceedings/89-2/Papers/124.pdf
➤ “Principles of Animate Vision” (1992): https://www.sciencedirect.com/science/article/abs/pii/104996609290081D
➤ “Deictic Codes for the Embodiment of Cognition” (1997): https://www.cs.utexas.edu/~dana/bbs.pdf
Papers by Numenta:
➤ “Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells”: https://www.frontiersin.org/articles/10.3389/fncir.2019.00022/full
➤ “A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex”: https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full
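The idea of a world-centric, viewer-oriented location relative to a fixation point can be made concrete with a toy geometric sketch (my own construction for illustration, not taken from Ballard's papers): express an object's position in a frame whose origin is the fixation point and whose axes align with the gaze direction.

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def fixation_frame_coords(eye, fixation, point, world_up=(0.0, 0.0, 1.0)):
    """Coordinates of `point` in a frame centered on the fixation point,
    with one axis along the gaze direction (eye -> fixation).
    Toy sketch: ignores the degenerate case of gaze parallel to world_up."""
    forward = norm(sub(fixation, eye))        # gaze axis
    right = norm(cross(forward, world_up))
    up = cross(right, forward)
    d = sub(point, fixation)                  # offset from the fixation point
    return [dot(d, right), dot(d, up), dot(d, forward)]

# An object 1 unit beyond the fixation point, along the gaze line:
print(fixation_frame_coords([0, 0, 0], [5, 0, 0], [6, 0, 0]))  # → [0.0, 0.0, 1.0]
```

Because the frame is anchored to the fixation point rather than the eye, nearby object locations stay stable under small eye movements, which is one of the advantages Ballard highlights.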
- 5 participants
- 1:36 hours
9 Sep 2021
Continuing last week’s meeting, Jeff first shares his new ideas about voting. We used to think that voting between cortical columns communicates only the object ID of the things you attend to, but he now hypothesizes that, at a minimum, voting communicates object ID, object state, and location/orientation relative to the body.
Jeff then describes what a model (created by cortical columns) is and the characteristics of a model introduced in our previous papers. As our knowledge continues to expand, our understanding of what a model is has also evolved. We used to think that models in a column were based on grid cell metric reference frames, but we now deduce that vector cells are involved too. We hypothesize that models are represented using vector cell modules, not grid cell modules as we previously thought, while grid cell reference frames are used to determine movement from one location to another. The team then asks questions and discusses.
➤ Columns paper: https://numenta.com/neuroscience-research/research-publications/papers/a-theory-of-how-columns-in-the-neocortex-enable-learning-the-structure-of-the-world/
➤ Frameworks Paper: https://numenta.com/neuroscience-research/research-publications/papers/a-framework-for-intelligence-and-cortical-function-based-on-grid-cells-in-the-neocortex/
Other papers mentioned:
➤ “Neuronal vector coding in spatial cognition”: https://www.nature.com/articles/s41583-020-0336-9
➤ “Population coding of saccadic eye movements by neurons in the superior colliculus”: https://www.nature.com/articles/332357a0
- 8 participants
- 1:44 hours
8 Sep 2021
Anshuman Mishra talks about algorithmic speedups via locality sensitive hashing and reviews papers on bio-inspired hashing, specifically LSH inspired by fruit flies.
He first gives an overview of what algorithmic speedups are, why they are useful and how we can use them. He then dives into a specific technique called locality sensitive hashing (LSH), covering the motivations for these hash algorithms and how they work. Lastly, Anshuman discusses the potential biological relevance of these hash mechanisms, looking at the paper “A neural algorithm for a fundamental computing problem,” which outlines a fruit-fly-inspired version of LSH that uses sparse projections, expands dimensionality, and applies a winner-take-all mechanism.
Paper reviewed: “A Neural Algorithm for a Fundamental Computing Problem” by Dasgupta et al. : https://www.science.org/doi/abs/10.1126/science.aam9868
0:00 Overview
1:11 Algorithmic Speedups
14:28 Locality Sensitive Hashing
45:54 Bio-inspired Hashing
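The three ingredients of the fly-inspired scheme can be sketched in a few lines (toy dimensions and data; the real algorithm in Dasgupta et al. is calibrated to the fly's actual circuit):

```python
import random

def fly_hash(x, expand_dim=64, proj_per_unit=3, top_k=4, seed=0):
    """Toy fruit-fly-style LSH: project the input up to a much higher
    dimension through a sparse binary random matrix, then keep only the
    top-k activations (winner-take-all) as a sparse binary tag."""
    rng = random.Random(seed)
    # Each expanded unit sums a few randomly chosen input coordinates
    # (the sparse binary projection).
    conn = [rng.sample(range(len(x)), proj_per_unit) for _ in range(expand_dim)]
    acts = [sum(x[i] for i in idx) for idx in conn]
    winners = sorted(range(expand_dim), key=lambda j: acts[j], reverse=True)[:top_k]
    tag = [0] * expand_dim
    for j in winners:
        tag[j] = 1
    return tag

a = [0.9, 0.8, 0.1, 0.2, 0.7, 0.1]
b = [0.9, 0.7, 0.1, 0.2, 0.8, 0.1]   # similar to a
c = [0.1, 0.1, 0.9, 0.8, 0.1, 0.9]   # dissimilar to a

overlap = lambda u, v: sum(ui & vi for ui, vi in zip(u, v))
print(overlap(fly_hash(a), fly_hash(b)), overlap(fly_hash(a), fly_hash(c)))
```

Similar inputs tend to share winners (high tag overlap), while dissimilar inputs activate different units, which is the locality-sensitivity property the paper exploits.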
- 8 participants
- 1:05 hours
3 Sep 2021
In continuation of the last two meetings, in which Subutai discussed voting in the Thousand Brains Theory, Jeff looks at the overall theory and focuses on the missing elements: things we haven't implemented, things that have changed due to new insights, and things that are still unknown.
He first presents the two core ideas proposed in our Columns paper (2017), and then walks through a list of shortcomings of the hypotheses and simulation. He then gives an overview of our Frameworks paper (2019) where we hypothesized that objects are modeled based on grid cell modules. He highlights how the paper solved a few issues in the Columns paper but points out that there are still previous shortcomings and problems that we haven't addressed.
He wraps up the meeting by discussing the ways we’re currently trying to address these problems in our research. He proposes new ways of thinking about how we model objects, how cells represent location and orientation of the sensor and how columns detect movements and path integration. He expands these ideas further in the next meeting.
➤ Columns paper: https://numenta.com/neuroscience-research/research-publications/papers/a-theory-of-how-columns-in-the-neocortex-enable-learning-the-structure-of-the-world/
➤ Columns+ paper: https://numenta.com/neuroscience-research/research-publications/papers/locations-in-the-neocortex-a-theory-of-sensorimotor-object-recognition-using-cortical-grid-cells/
➤ Frameworks Paper: https://numenta.com/neuroscience-research/research-publications/papers/a-framework-for-intelligence-and-cortical-function-based-on-grid-cells-in-the-neocortex/
- 9 participants
- 1:27 hours
30 Aug 2021
Continuing last week’s research meeting, Subutai Ahmad explains voting in the Thousand Brains Theory.
In this research meeting, he explains how inference is much faster with multiple columns, as columns share information through long-range sparse connections to agree on what the object is. He goes over the simulation results presented in our “Columns” paper and shows that as the number of cortical columns in the network increases, the number of touches needed to recognize an object rapidly decreases, making inference much quicker.
Finally, he talks about how the Thousand Brains Theory rethinks the notion of hierarchy in the neocortex. Instead of the classic view of using hierarchy to assemble features into a recognized object, the theory states that the neocortex uses hierarchy to vote across levels and sensory modalities, and rapidly reach consensus on the objects being sensed.
Columns paper "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World": https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
Part One: https://youtu.be/XUpmN_CLOZc
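The speedup from voting can be illustrated with a toy candidate-intersection model (an illustration of the idea only, not the network simulation from the paper): each column keeps the set of objects consistent with what it has sensed, and columns "vote" by intersecting their candidate sets after every simultaneous touch.

```python
# Each object is a set of (location, feature) pairs in a tiny made-up world.
objects = {
    "mug":  {(0, "rim"), (1, "handle"), (2, "base")},
    "bowl": {(0, "rim"), (1, "curve"),  (2, "base")},
    "cup":  {(0, "rim"), (1, "handle"), (2, "stem")},
}

def touches_to_recognize(target, n_columns):
    """Number of simultaneous touches until only the target remains."""
    features = sorted(objects[target])           # (location, feature) pairs
    candidates = set(objects)
    for step in range(0, len(features), n_columns):
        batch = features[step:step + n_columns]  # one touch per column
        for loc_feat in batch:
            candidates &= {o for o in objects if loc_feat in objects[o]}
        if candidates == {target}:
            return step // n_columns + 1
    return len(features)  # may stay ambiguous in this toy world

print(touches_to_recognize("mug", n_columns=1))  # → 3
print(touches_to_recognize("mug", n_columns=3))  # → 1
```

With one column the network must touch every location in sequence; with three columns sensing different locations at once, the intersection disambiguates the object in a single step, mirroring the trend in the paper's simulations.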
- 6 participants
- 1:28 hours
25 Aug 2021
Subutai Ahmad goes over voting in the Thousand Brains Theory.
In the first of two research meetings, he lays the groundwork for understanding how columns vote in the theory by unpacking the ideas in our "Columns" paper. First, he presents the hypothesis of the paper on how cortical columns learn predictive models of sensorimotor sequences. Then, he explains the mechanisms behind a single cortical column and how it learns complete objects by sensing different locations and integrating inputs over time. In the next research meeting, he will review voting across multiple columns.
Columns paper "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World": https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
Other paper mentioned: “The columnar organization of the neocortex” - https://academic.oup.com/brain/article-pdf/120/4/701/17863573/1200701.pdf
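A single column's learning and inference loop can be sketched as follows (a toy illustration of feature-at-location integration, not the paper's neural implementation):

```python
class Column:
    """Toy single-column model: learns objects as feature-at-location maps,
    then narrows down candidates by integrating one sensation per step."""

    def __init__(self):
        self.memory = {}  # object name -> {location: feature}

    def learn(self, name, feature_map):
        self.memory[name] = dict(feature_map)

    def infer(self, sensations):
        """sensations: iterable of (location, feature) pairs; returns the
        set of learned objects consistent with all of them."""
        candidates = set(self.memory)
        for loc, feat in sensations:
            candidates = {o for o in candidates
                          if self.memory[o].get(loc) == feat}
        return candidates

col = Column()
col.learn("mug",  {0: "rim", 1: "handle", 2: "base"})
col.learn("bowl", {0: "rim", 1: "curve",  2: "base"})

print(col.infer([(0, "rim")]))                 # two candidates remain (ambiguous)
print(col.infer([(0, "rim"), (1, "handle")]))  # only the mug remains
```

Each new sensation prunes the candidate set, which is how a single column can recognize a complete object over time even though it only ever senses one location at once.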
- 6 participants
- 1:05 hours
18 Aug 2021
Subutai Ahmad reviews the biology behind active dendrites and explains how Numenta models them. He first presents an overview of active dendrites in pyramidal neurons by describing various experimental findings. He describes the impact of dendrites on the computation performed by neurons, and some of the learning (plasticity) rules that have been discovered. He shows how all this forms the substrate for the HTM neuron, proposing that dendritic computation is the basis for prediction and very flexible context integration in neural networks.
Papers:
Bartlett Mel, Neural Computation 1992: https://direct.mit.edu/neco/article/4/4/502/5650/NMDA-Based-Pattern-Discrimination-in-a-Modeled
Poirazi, Brannon & Mel, Neuron, 2003: https://pubmed.ncbi.nlm.nih.gov/12670427/
Numenta Neurons Paper 2016: https://www.frontiersin.org/articles/10.3389/fncir.2016.00023/full
Numenta Columns Paper 2017: https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
“Predictive Coding of Novel versus Familiar Stimuli in the Primary Visual Cortex”: https://www.biorxiv.org/content/10.1101/197608v1
“Continuous online sequence learning with an unsupervised neural network model”: https://direct.mit.edu/neco/article/28/11/2474/8502/Continuous-Online-Sequence-Learning-with-an#.WC4U8TKZMUE
‘Unsupervised real-time anomaly detection for streaming data”: https://www.sciencedirect.com/science/article/pii/S0925231217309864
“Active properties of neocortical pyramidal neuron dendrites”: https://pubmed.ncbi.nlm.nih.gov/23841837/
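A heavily simplified sketch of the dendritic computation described in the talk: each dendritic segment acts as an independent coincidence detector, and a segment matching enough active inputs fires a dendritic (NMDA) spike that puts the cell into a predictive state. The population sizes and thresholds below are illustrative, not values from the papers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_segments, synapses_per_segment, nmda_threshold = 1000, 5, 20, 12

# Each dendritic segment samples a random subset of the input population.
segments = [rng.choice(n_inputs, size=synapses_per_segment, replace=False)
            for _ in range(n_segments)]

def predictive_state(active_inputs, segments, threshold):
    """True if any segment overlaps the active inputs enough to fire a
    dendritic (NMDA) spike, depolarizing the cell into a predictive state."""
    active = set(active_inputs)
    return any(len(active.intersection(seg)) >= threshold for seg in segments)

# An input pattern that strongly overlaps segment 0 triggers a prediction:
active = list(segments[0][:15])
print(predictive_state(active, segments, nmda_threshold))  # True
```

This captures the key asymmetry from the talk: proximal input drives the cell, while each distal segment only needs a small coincident subpattern to depolarize it.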
- 8 participants
- 1:30 hours
16 Jul 2021
We reviewed two papers in this research meeting. First, Numenta intern Jack Schenkman reviewed the paper “Multiscale representation of very large environments in the hippocampus of flying bats” by Eliav et al. The paper proposes a multiscale neuronal encoding scheme of place cells for spatial perception. The team then raised a few questions and discussed.
Next, our researcher Ben Cohen reviewed the paper “Representational drift in primary olfactory cortex” by Schoonover et al. The paper shows that single-neuron firing rate responses to odor in the anterior piriform cortex are stable within a day, but continuously drift over time. The team then discussed the notion of representational drift in the context of Numenta’s work.
“Multiscale representation of very large environments in the hippocampus of flying bats” by Eliav et al.: https://science.sciencemag.org/content/372/6545/eabg4020
“Representational drift in primary olfactory cortex” by Schoonover et al.: https://www.nature.com/articles/s41586-021-03628-7
Columns paper mentioned: https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
- 8 participants
- 1:34 hours
9 Jun 2021
Numenta’s intern Akash Velu discusses the project he has worked on over the course of his time at Numenta. Building on key components of the Thousand Brains Theory of Intelligence, his work focuses on multi-task reinforcement learning using dendritic networks to achieve strong performance. In this presentation, he talks about the challenges, codebase, and results obtained across various environments. He then explores next steps for the project, such as experimenting with different dendrite configurations and incorporating more elements of sparsity.
- 7 participants
- 53 minutes
17 May 2021
In this research meeting, Marcus Lewis discusses the importance of explaining grid cell distortions, and to generate discussion he proposes a possible explanation, showing some results from an experiment he conducted. He hypothesized that an animal localizes by detecting distance from various points of boundaries and those points “vote” on the location. The weight of the votes is determined by nearness, and distortions occur when the animal's idealized map differs from the actual environment. The team then discusses the hypothesis and raises further questions.
Jeff then explores the possible processes and mechanisms that underlie reference frame transformations in the neocortex. He describes a few problems with a previous hypothesis he proposed about reference frame transformations in the thalamus and further explores the role of the thalamus. He then suggests a relationship between reference frame transformations and temporal memory.
Papers from Marcus’ presentation:
“Framing the grid: effect of boundaries on grid cells and navigation” (2016) by John O’Keefe et al.: https://physoc.onlinelibrary.wiley.com/doi/full/10.1113/JP270607
"The hippocampus as a predictive map” (2017) by Stachenfeld et al.: https://www.nature.com/articles/nn.4650
“Flexible modulation of sequence generation in the entorhinal–hippocampal system” (2021) by McNamee et al.: https://www.nature.com/articles/s41593-021-00831-7?proof=t
0:00 Marcus Lewis on Grid Cell Distortions
43:08 Jeff Hawkins on Reference Frame Transformations
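To make the voting idea in Marcus's hypothesis concrete, here is an entirely hypothetical toy version (not from his experiment): candidate locations are scored by how well they explain sensed distances to boundary landmarks, with nearer landmarks given heavier votes. All coordinates and the weighting function are illustrative:

```python
import numpy as np

# Three landmark points on the walls of a 1x1 box, and the sensed distance
# to each (0.5 to all three, consistent with the center of the box).
landmarks = np.array([[0.0, 0.5], [1.0, 0.5], [0.5, 0.0]])
measured = np.array([0.5, 0.5, 0.5])

# Candidate locations on a grid over the box.
candidates = np.array([[x, y] for x in np.linspace(0, 1, 11)
                              for y in np.linspace(0, 1, 11)])

def vote(candidates, landmarks, measured):
    """Each landmark votes on each candidate; votes are weighted by nearness."""
    d = np.linalg.norm(candidates[:, None, :] - landmarks[None, :, :], axis=2)
    error = np.abs(d - measured)               # mismatch with sensed distance
    weight = 1.0 / (1.0 + d)                   # nearer landmarks count more
    score = (weight * np.exp(-error ** 2 / 0.01)).sum(axis=1)
    return candidates[np.argmax(score)]

print(vote(candidates, landmarks, measured))   # [0.5 0.5], the true location
```

In this framing, distortions would arise when the weights or the idealized landmark positions differ from the actual environment, biasing the winning vote.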
- 3 participants
- 1:04 hours
5 May 2021
Karan Grewal reviews the paper "Self-Organization in a Perceptual Network" from 1988, and argues that the use of Hebbian learning rules (1) is equivalent to performing principal components analysis (PCA), and (2) maximizes the mutual information between the input and output of each unit in a standard neural network, more commonly referred to as the InfoMax principle.
“Self-Organization in a Perceptual Network" by Ralph Linsker: https://ieeexplore.ieee.org/document/36
Other resources mentioned:
• “Linear Hebbian learning and PCA” by Bruno Olshausen: https://redwood.berkeley.edu/wp-content/uploads/2018/08/handout-hebb-PCA.pdf
• “Theoretical Neuroscience" textbook by Dayan & Abbott: https://mitpress.mit.edu/books/theoretical-neuroscience
• “Representation Learning with Contrastive Predictive Coding” by van den Oord et al.: https://arxiv.org/abs/1807.03748
• “Learning deep representations by mutual information estimation and maximization” by Hjelm et al.: https://arxiv.org/abs/1808.06670
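The Hebbian-learning-equals-PCA claim can be checked numerically with Oja's rule, a stabilized Hebbian update (this is a standard textbook demonstration in the spirit of the Olshausen handout above, not code from Linsker's paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data whose first principal component lies along [1, 1].
cov = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Oja's rule: a Hebbian update (lr * y * x) plus a decay term (-lr * y^2 * w)
# that keeps the weight norm bounded.
w = rng.normal(size=2)
lr = 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)

# Compare against the leading eigenvector of the sample covariance (PCA).
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
pc1 = eigvecs[:, np.argmax(eigvals)]
alignment = abs(w @ pc1) / np.linalg.norm(w)
print(alignment)  # should be close to 1: the Hebbian unit found PC1
```

The single-unit case shown here is the piece of the argument that connects Hebbian plasticity to the direction of maximum variance, which is also the direction that maximizes mutual information under Linsker's Gaussian-noise assumptions.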
- 8 participants
- 44 minutes
28 Apr 2021
In this research meeting, joined by Rosanne Liu, Jason Yosinski, and Mitchell Wortsman from ML Collective, Subutai Ahmad explains the properties of small-world structures and how they can be helpful in Numenta’s research.
Subutai first discusses different network types and the concept of small-world structures by reviewing the paper “Collective Dynamics of ‘Small-World’ Networks” by Watts & Strogatz. He then evaluates the efficiency of these structures and how they are helpful in non-physical networks by looking at Jon Kleinberg’s paper “Navigation in a Small World.” Subutai also addresses how small-world structures would apply to machine learning by using concepts from the paper “Graph Structure of Neural Networks” by Jiaxuan You et al. Lastly, the team discusses how small-world structures relate to Numenta’s research, such as sparsity and cortical columns.
“Collective Dynamics of ‘Small-World’ Networks” by Watts & Strogatz: https://www.nature.com/articles/30918
“Navigation in a Small World” by Kleinberg: https://www.nature.com/articles/35022643
“Graph Structure of Neural Networks” by Jiaxuan You et al.: https://arxiv.org/abs/2007.06559
"Small-World Brain Networks" by Bassett & Bullmore: https://journals.sagepub.com/doi/10.1177/1073858406293182
More information on ML Collective: https://mlcollective.org/
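A minimal pure-Python sketch of the Watts–Strogatz construction discussed in the meeting, showing the signature small-world effect: rewiring a few edges into random shortcuts collapses the average path length while clustering stays high. Graph size and rewiring probability are illustrative:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, rng):
    """Ring lattice with k neighbors per node (k/2 on each side); each
    clockwise edge is rewired to a random target with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n); adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i] and old in adj[i]:
                    adj[i].remove(old); adj[old].remove(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    n, total = len(adj), 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def avg_clustering(adj):
    """Mean fraction of each node's neighbor pairs that are themselves linked."""
    cs = []
    for u, nbrs in adj.items():
        nb, d = list(nbrs), len(nbrs)
        if d < 2:
            cs.append(0.0); continue
        links = sum(1 for a in range(d) for b in range(a + 1, d)
                    if nb[b] in adj[nb[a]])
        cs.append(2 * links / (d * (d - 1)))
    return sum(cs) / len(cs)

rng = random.Random(0)
ring = watts_strogatz(200, 8, 0.0, rng)   # regular lattice: no shortcuts
sw = watts_strogatz(200, 8, 0.1, rng)     # a few random shortcuts
print(avg_path_length(ring), avg_path_length(sw))  # path length drops sharply
print(avg_clustering(ring), avg_clustering(sw))    # clustering stays high
```

This is the regime the discussion connects to cortex: mostly local wiring (cheap, highly clustered) plus sparse long-range projections that keep any two regions only a few hops apart.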
- 9 participants
- 1:21 hours
19 Apr 2021
Previously, Jeff Hawkins has discussed the possibility that reference frame rotations might occur locally in a cortical column. In this research meeting, he proposes an alternate possibility that movement and sensed features are translated in the thalamus. Using vision as an example, Jeff gives an overview of the thalamus and discusses the role and mechanism that thalamocortical cells might play.
Jeff Hawkins on Reference Frame Transformation in the Thalamus: https://youtu.be/eQv-MjnTodM
- 3 participants
- 45 minutes
7 Apr 2021
Our research intern Alex Cuozzo discusses the book Sparse Distributed Memory by Pentti Kanerva. He first explores a few concepts related to high-dimensional vectors mentioned in the book, such as rotational symmetry and the distribution of distances. He then talks about the key properties of the Sparse Distributed Memory model and how it relates to a biological one. Lastly, he gives his thoughts and explores some follow-up work that aims to convert dense factors to sparse distributed activations.
Sources:
➤ “Sparse Distributed Memory” by Pentti Kanerva: https://mitpress.mit.edu/books/sparse-distributed-memory
➤ “An Alternative Design for a Sparse Distributed Memory” by Louis Jaeckel: https://ntrs.nasa.gov/citations/19920001073
➤ “A Class of Designs for a Sparse Distributed Memory” by Louis Jaeckel: https://ntrs.nasa.gov/api/citations/19920002426/downloads/19920002426.pdf
➤ "Comparison between Kanerva's SDM and Hopfield-type neural networks" by James Keeler: https://www.sciencedirect.com/science/article/abs/pii/0364021388900262
➤ "Notes on implementation of sparsely distributed memory" by James Keeler et al: https://www.semanticscholar.org/paper/Notes-on-implementation-of-sparsely-distributed-Keeler-Denning/a818801315dbeaf892197c5f08c8c8779871fd82
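One property of high-dimensional binary spaces that Kanerva's model relies on can be verified in a few lines: Hamming distances between random vectors concentrate tightly around n/2, so almost every point is "half the space away" from almost every other. This is a standard demonstration, not code from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # dimensionality of binary vectors
vecs = rng.integers(0, 2, size=(200, n))   # 200 random binary vectors

# Hamming distance between every distinct pair of vectors.
idx = np.triu_indices(200, k=1)
dists = (vecs[idx[0]] != vecs[idx[1]]).sum(axis=1)

print(dists.mean() / n)   # ~0.5: typical distance is about n/2
print(dists.std() / n)    # ~0.016: distances cluster tightly around n/2
```

This concentration is why the model can use a modest activation radius around an address and still be confident that unrelated memories almost never collide.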
- 5 participants
- 1:17 hours
31 Mar 2021
We started this research meeting by responding to a few questions posted on the HTM forum. The HTM Forum is our open source discussion group. It is a great place to ask questions related to Numenta’s work and find interesting projects that people in the community are working on. Join HTM Forum today: https://discourse.numenta.org/
Subutai Ahmad reviews the paper “Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization” by Masse, Grant and Freedman. He first explains his motivations behind reading this paper based on Numenta’s previous work on dendrites and continuous learning. He then highlights the various network architectures simulated in the experiment and the results presented in the paper (i.e. accuracy for each network). Finally, Subutai gives his thoughts and the team discusses the results.
Paper: https://www.pnas.org/content/115/44/E10467
Other paper mentioned:
“Continuous Online Sequence Learning with an Unsupervised Neural Network Model”: https://numenta.com/neuroscience-research/research-publications/papers/continuous-online-sequence-learning-with-an-unsupervised-neural-network-model/
0:00 Answering Questions from HTM Forum
7:12 Paper Review
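The gating mechanism Subutai walks through can be caricatured in a few lines of Python: each task is assigned a fixed random binary mask over the hidden units, so different tasks train largely non-overlapping subnetworks and interfere less. This is an illustrative sketch of the idea only, with our own hypothetical names and sizes, not the paper's implementation:

```python
import numpy as np

def make_task_gates(n_tasks, n_hidden, keep_frac=0.5, seed=1):
    """One fixed random binary mask per task (context-dependent gating)."""
    rng = np.random.default_rng(seed)
    return (rng.random((n_tasks, n_hidden)) < keep_frac).astype(float)

def gated_forward(x, W, gate):
    """Hidden activity is masked by the current task's gate."""
    h = np.maximum(0.0, x @ W)   # ReLU hidden layer
    return h * gate              # only ~keep_frac of units participate

gates = make_task_gates(n_tasks=3, n_hidden=8)
x = np.ones(4)
W = np.ones((4, 8))
h0 = gated_forward(x, W, gates[0])   # task 0's subnetwork
h1 = gated_forward(x, W, gates[1])   # task 1 uses a different subset of units
```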
- 8 participants
- 1:09 hours
24 Mar 2021
Through the lens of Numenta's Thousand Brains Theory, Marcus Lewis reviews the paper “How to represent part-whole hierarchies in a neural network” by Geoffrey Hinton. Focusing on parts of the GLOM model presented in the paper, he bridges Numenta's theory to GLOM and highlights the similarities and differences between each model's voting mechanisms, structure, and use of neural representations. Finally, Marcus explores the idea of GLOM handling movement.
Paper: https://arxiv.org/abs/2102.12627
Other resources mentioned:
Numenta "Thousand Brains" voting alternate version (2017):
http://numenta.github.io/htmresearch/documents/location-layer/Hello-Multi-Column-Location-Inference.html
"Receptive field structure varies with layer in the primary visual cortex" by Martinez et al.: https://www.nature.com/articles/nn1404
"A Multiplexed, Heterogeneous, and Adaptive Code for Navigation in Medial Entorhinal Cortex" by Hardcastle et al: https://www.sciencedirect.com/science/article/pii/S0896627317302374
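The voting comparison is easier to picture with a toy consensus step: each "column" holds an embedding vector and repeatedly moves toward the mean of all columns, so they settle on a shared vector, loosely echoing GLOM's island-forming agreement. A deliberately minimal sketch of this flavor of voting, not GLOM itself:

```python
import numpy as np

def consensus_step(embeddings, rate=0.5):
    """Each column moves toward the mean of all columns (a crude 'vote')."""
    mean = embeddings.mean(axis=0, keepdims=True)
    return embeddings + rate * (mean - embeddings)

rng = np.random.default_rng(0)
cols = rng.normal(size=(5, 4))        # 5 columns, each with a 4-dim embedding
for _ in range(20):
    cols = consensus_step(cols)
spread = np.ptp(cols, axis=0).max()   # near zero: columns agree on one vector
```

Each step halves the deviation from the mean, so after 20 steps the columns are effectively identical.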
- 8 participants
- 1:24 hours
8 Mar 2021
In this research meeting, our research intern Alex Cuozzo reviews some notable papers and explains high level concepts related to learning rules in machine learning. Moving away from backpropagation with gradient descent, he talks about various attempts at biologically plausible learning regimes which avoid the weight transport problem and use only local information at the neuron level. He then moves on to discuss work which infers a learning rule from weight updates, and further work using machine learning to create novel optimizers and local learning rules.
Papers / Talks mentioned (in order of presentation):
• "Random synaptic feedback weights support error backpropagation for deep learning" by Lillicrap et al.: https://www.nature.com/articles/ncomms13276
• Talk: A Theoretical Framework for Target Propagation: https://www.youtube.com/watch?v=xFb9N4Irj40
• "Decoupled Neural Interfaces using Synthetic Gradients" by DeepMind: https://arxiv.org/abs/1608.05343
• Talk: Brains@Bay Meetup (Rafal Bogacz): https://youtu.be/oXyQU0aScq0?t=246
• "Predictive Coding Approximates Backprop along Arbitrary Computation Graphs" by Millidge et al: https://arxiv.org/abs/2006.04182
• "Identifying Learning Rules From Neural Network Observables" by Nayebi et al: https://arxiv.org/abs/2010.11765
• "Learning to learn by gradient descent by gradient descent" by Andrychowicz et al: https://arxiv.org/abs/1606.04474
• "On the Search for New Learning Rules for ANNs" by Bengio et al: https://www.researchgate.net/publication/225532233_On_the_Search_for_New_Learning_Rules_for_ANNs
• "Learning a Synaptic Learning Rule" by Bengio et al: https://www.researchgate.net/publication/2383035_Learning_a_Synaptic_Learning_Rule
• "Evolution and design of distributed learning rules" by Runarsson et al: https://ieeexplore.ieee.org/document/886220
• "The evolution of a generalized neural learning rule" by Orchard et al: https://ieeexplore.ieee.org/document/7727815
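The first paper in the list, feedback alignment, is simple enough to sketch: the backward pass routes the output error through a fixed random matrix instead of the transposed forward weights, sidestepping the weight transport problem. A minimal two-layer regression sketch under our own assumed sizes and learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 16, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
B2 = rng.normal(scale=0.5, size=(n_out, n_hid))   # fixed random feedback path

X = rng.normal(size=(64, n_in))
y = X @ rng.normal(size=(n_in, n_out))            # a simple regression target

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

loss_before = loss()
lr = 0.01
for _ in range(500):
    h = np.tanh(X @ W1)
    err = h @ W2 - y                       # output error
    delta_h = (err @ B2) * (1.0 - h**2)    # error routed through B2, not W2.T
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)
loss_after = loss()
```

Despite the "wrong" feedback weights, the forward weights tend to align with B2 over training and the loss still drops.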
- 8 participants
- 60 minutes
1 Mar 2021
In this research meeting, we invited Gideon Kowadlo from Cerenaut.ai to talk about modelling of the hippocampus together with the neocortex for few-shot learning and beyond.
Abstract:
In mammalian brains, the neocortex and hippocampus are complementary modules that interact. Their interaction is known to be crucial in the formation of declarative memory, as well as being important for working memory and executive control. Computational modelling of the hippocampus and of its interaction with the neocortex is of great importance for better understanding the neocortex itself and animal intelligence, and for building more intelligent machines. A standard framework for hippocampal modelling is CLS. It captures an ability to learn distinct events rapidly. CLS has been tested on toy datasets, showing fast learning of specific examples, but not generalisation. In ML, the inverse is true. The standard approach to few-shot learning considers learning of categories, showing generalisation, but not instance learning (e.g. of a particular tree), which is important for realistic agents. In addition, few-shot learning in ML is predominantly ‘short term’, without permanent incorporation of knowledge of new categories. We will describe an extension to CLS, a novel Artificial Hippocampal Algorithm (AHA), which overcomes the above limitations.
"Unsupervised One-shot Learning of Both Specific Instances and Generalised Classes with a Hippocampal Architecture" paper: https://arxiv.org/abs/2010.15999
"One-shot learning for the long term: consolidation with an artificial hippocampal algorithm" paper: https://arxiv.org/abs/2102.07503
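The complementary-systems setup in the abstract can be caricatured with two learners: a hippocampus-like store that memorizes specific instances in one shot, and a neocortex-like learner that slowly accumulates class prototypes. This is our own toy construction for intuition, not the AHA model; all names and data are hypothetical:

```python
import numpy as np

class FastInstanceMemory:
    """One-shot store: keeps every example, recalls the nearest exact instance."""
    def __init__(self):
        self.items, self.labels = [], []
    def store(self, x, label):
        self.items.append(np.asarray(x, float))
        self.labels.append(label)
    def recall(self, x):
        d = [np.linalg.norm(x - m) for m in self.items]
        return self.labels[int(np.argmin(d))]

class SlowPrototypeLearner:
    """Slow statistical learner: a running mean per class (generalisation)."""
    def __init__(self, lr=0.1):
        self.proto, self.lr = {}, lr
    def update(self, x, label):
        x = np.asarray(x, float)
        if label not in self.proto:
            self.proto[label] = x.copy()
        else:
            self.proto[label] += self.lr * (x - self.proto[label])
    def classify(self, x):
        return min(self.proto, key=lambda c: np.linalg.norm(x - self.proto[c]))

fast = FastInstanceMemory()
slow = SlowPrototypeLearner()
for x, label in [([0, 0], "a"), ([0, 1], "a"), ([5, 5], "b"), ([5, 6], "b")]:
    fast.store(x, label)   # remembers each specific instance
    slow.update(x, label)  # drifts slowly toward the class mean
```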
- 5 participants
- 1:02 hours
22 Feb 2021
Our research intern Akash Velu gives an overview of continual reinforcement learning, following the ideas from the paper “Towards Continual Reinforcement Learning: A Review and Perspectives” by Khetarpal et al. He first goes over the basics of reinforcement learning (RL) and discusses why RL is a good setting in which to study continual learning. He then covers the different aspects of continual RL and the various approaches to solving continual RL problems, and touches on the potential for neuroscience to inform the development of continual RL algorithms.
Paper: https://arxiv.org/abs/2012.13490
- 7 participants
- 58 minutes
22 Feb 2021
Marcus Lewis further elaborates on ideas outlined in “The Tolman-Eichenbaum Machine” paper in a continuation of the Feb 15 research meeting. He first gives a quick review of the grid cell module presented in the paper, then outlines two extreme scenarios for the mechanisms within the module to address the team’s skepticism about a multi-scale grid cell readout.
“The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation” by James Whittington, et al.: https://www.sciencedirect.com/science/article/pii/S009286742031388X
Feb 15 research meeting: https://youtu.be/N6I3M3pof5A
- 3 participants
- 31 minutes
17 Feb 2021
Michaelangelo Caporale reviews and evaluates a continual learning scenario called OSAKA, outlined in the paper “Online Fast Adaption and Knowledge Accumulation: A New Approach to Continual Learning.” He first gives an overview of the scenario and goes through the algorithms and methodologies in depth. The team then discusses whether this is a good scenario that Numenta can use to test for continual learning.
Paper: https://arxiv.org/abs/2003.05856
- 7 participants
- 47 minutes
15 Feb 2021
Marcus Lewis reviews the paper “The Tolman-Eichenbaum Machine” by James Whittington et al. He first connects and compares the paper to the grid cell module in Numenta's “Locations in the Neocortex” paper. Marcus then gives a high-level summary of the paper and highlights two aspects: how grid cells and place cells interact, and how place cells can represent novel sensory-location pairs. The team then discusses the multiple grid cell modules and mechanisms presented in the paper.
“The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation” by James Whittington, et al.: https://www.sciencedirect.com/science/article/pii/S009286742031388X
Papers mentioned:
“Locations in the Neocortex: A Theory of Sensory Recognition Using Cortical Grid Cells” by Jeff Hawkins, et al.: https://www.frontiersin.org/articles/10.3389/fncir.2019.00022/full
“What is a Cognitive Map? Organizing Knowledge for Flexible Behavior” by Timothy Behrens, et al.: https://www.sciencedirect.com/science/article/pii/S0896627318308560
“A Stable Hippocampal Representation of a Space Requires its Direct Experience” by Clifford Kentros, et al.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3167555/
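The multi-module grid-cell discussion is easier to follow with a toy code: each module represents position only as a phase modulo its own period, yet a handful of modules with coprime periods pins down location over a much larger range (the least common multiple of the periods). An illustrative sketch with periods chosen by us:

```python
# Toy multi-module grid code: each "module" keeps pos modulo its own period.
periods = [3, 4, 5]                      # pairwise coprime module scales

def encode(pos):
    """Each module stores only the phase of pos within its period."""
    return [pos % p for p in periods]

def decode(phases, max_pos=60):
    """Find the unique position consistent with every module's phase."""
    for pos in range(max_pos):
        if all(pos % p == ph for p, ph in zip(periods, phases)):
            return pos
    return None

code = encode(37)   # no single module knows the position, together they do
```

With periods 3, 4 and 5, positions are unambiguous over a range of 60 even though no module spans more than 5.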
- 5 participants
- 1:26 hours
10 Feb 2021
Karan Grewal gives an overview of the paper “Continual Lifelong Learning with Neural Networks: A Review” by German Parisi et al. He first explains three main areas of current continual learning approaches. Then, he outlines four research areas that the authors argue will be crucial to developing lifelong learning agents.
In the second part, Jeff Hawkins discusses new ideas and improvements on our previous "Frameworks" paper. He proposes a more refined grid cell module in which each layer of minicolumns contains a 1D voltage-controlled oscillating module that represents movement in a particular direction. Jeff first explains the mechanisms within each column and how anchoring occurs in grid cell modules. He then gives an overview of displacement cells and deduces that if there are 1D grid cell modules, it is very likely that there are 1D displacement cell modules. Furthermore, he makes the case that the mechanisms for orientation cells are analogous to those of grid cells. He argues that each minicolumn is driven by various 1D modules representing orientation and location, and that together these give rise to a classic grid cell / orientation cell module.
“Continual Lifelong Learning with Neural Networks: A Review” by German Parisi et al.: https://www.sciencedirect.com/science/article/pii/S0893608019300231
"A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex" paper: https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full
0:00 Continual Lifelong Learning Paper Review
40:55 Jeff Hawkins on Grid Cell Modules
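Jeff's 1D module idea can be caricatured as a phase accumulator: movement projected onto the module's preferred direction advances a phase modulo the module's scale (path integration), and a recognized landmark can reset, or anchor, that phase. A toy sketch under our own assumptions, not a model from the talk:

```python
class OneDGridModule:
    """Toy 1D grid module: a phase that integrates movement modulo its scale."""
    def __init__(self, scale, direction=(1.0, 0.0)):
        self.scale = scale          # spatial period of this module
        self.direction = direction  # preferred direction (unit vector)
        self.phase = 0.0
    def move(self, dx, dy):
        # Project the movement onto this module's preferred direction.
        step = dx * self.direction[0] + dy * self.direction[1]
        self.phase = (self.phase + step) % self.scale
    def anchor(self, phase):
        # A recognized sensory landmark resets (anchors) the phase.
        self.phase = phase % self.scale

m = OneDGridModule(scale=5.0)
m.move(3.0, 10.0)   # only the x-component drives this module: phase = 3
m.move(4.0, 0.0)    # 3 + 4 = 7 wraps to 2 within scale 5

a = OneDGridModule(scale=5.0)
a.anchor(7.5)       # anchoring also wraps: phase = 2.5
```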
- - - - -
Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.
Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://tinyurl.com/NumentaNewsDigest
Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter
Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta
Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/
Our Website:
https://numenta.com/
- 10 participants
- 1:37 hours
3 Feb 2021
Building upon his prior grid cell model, Marcus Lewis explores ways to transform an array of 1D grid cell oscillators to 2D grid cell modules. He evaluates and explains one possible deterministic mapping technique that could be used to achieve this.
In the second part, Jeff Hawkins attempts to explain the physical structure of minicolumns and how they might interact. He proposes a new mechanism that introduces the idea of a voltage-controlled oscillator in dendrites for dimensional movement in grid cells. With this mechanism, a grid cell could represent much more complex interactions, possibly explaining how the cells make connections to each other.
Research meeting on Marcus's new grid cell model: https://youtu.be/7tF-ofr7VUo
Paper Jeff referenced on oscillatory interference: https://www.jneurosci.org/content/31/45/16157.short
Paper Subutai mentioned on gap junctions between pyramidal cells: https://www.sciencedirect.com/science/article/pii/B9780128034712000138
0:00 Marcus Lewis on deterministic mapping
20:06 Jeff Hawkins on voltage controlled oscillators in dendrites
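The voltage-controlled-oscillator idea relates to the classic oscillatory interference account of grid cells (the paper Jeff referenced above). A minimal 1D sketch, with all parameters made up for illustration rather than taken from the meeting:

```python
import numpy as np

# Oscillatory interference sketch: an oscillator whose frequency shifts
# with running speed beats against a baseline theta rhythm, and the
# interference envelope peaks at regular spatial intervals, giving
# grid-like firing fields along a 1D track. Parameters are illustrative.
dt = 0.001
t = np.arange(0.0, 10.0, dt)        # 10 s of simulated running
v = 0.2                             # constant running speed (m/s)
f0 = 8.0                            # baseline theta frequency (Hz)
beta = 5.0                          # speed-to-frequency gain (Hz per m/s)

baseline = np.cos(2 * np.pi * f0 * t)
vco = np.cos(2 * np.pi * (f0 + beta * v) * t)   # velocity-controlled oscillator
interference = baseline + vco       # beats at beta * v = 1 Hz

position = v * t
# With these numbers the envelope peaks once per second of running,
# i.e. every v / (beta * v) = 1 / beta = 0.2 m of travel.
```

The design point is that the spatial period (1/beta here) depends only on the speed-to-frequency gain, not on running speed, which is what makes the interference pattern a usable position code.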
- 5 participants
- 46 minutes
25 Jan 2021
Jeff Hawkins explores the relationship between the thalamus and the neocortex. He also examines whether reference frame transformation can occur in the thalamus. He first gives an overview of the Thousand Brains Theory, specifically how reference frame transformations happen in the brain. He then explores feedforward pathways in the cortex. Jeff proposes that the anatomy and physiology of the thalamus suggest that thalamocortical relay cells might be implementing a multiplexor. Lastly, he discusses a few open issues with the theory, and the team offers various perspectives and ideas.
- 5 participants
- 1:00 hours
20 Jan 2021
Subutai Ahmad goes through the framework and results of a study he conducted on dimensionality and sparsity in deep learning networks. Using the GSC dataset, he explores the relationship between dimensionality and the size and accuracy of sparse networks. He also assesses whether there are scaling laws for sparsity in deep learning, similar to the mathematical properties of sparse distributed representations in the brain.
“How Can We Be So Dense? The Benefits of Using Highly Sparse Representations” paper: https://arxiv.org/abs/1903.11257
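As a rough sketch of the sparse representations involved, the paper builds on a k-winner-take-all activation: only the k most active units in a layer fire, and the rest are zeroed. This is a generic version for illustration; the actual implementation in the paper also boosts under-used units, which is omitted here:

```python
import numpy as np

def k_winners(x, k):
    """Keep the k largest activations in x, zero out the rest.

    A minimal k-winner-take-all sketch; the paper's version also
    tracks duty cycles and boosts under-used units.
    """
    out = np.zeros_like(x)
    top = np.argpartition(x, -k)[-k:]   # indices of the k largest entries
    out[top] = x[top]
    return out

x = np.array([0.1, 0.9, 0.3, 0.7, 0.2])
print(k_winners(x, 2))   # only 0.9 and 0.7 survive; the rest are zeroed
```

Fixing k directly controls the sparsity of the layer, which is what makes it possible to study accuracy and size as a function of sparsity.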
- 5 participants
- 42 minutes
20 Jan 2021
Last month, Marcus Lewis proposed an alternate view of grid cells that enables the creation of maps of novel environments and objects on a predictive basis. In this meeting, he extends his grid cell proposal to a machine learning model. The agent in this model uses spatial displacements and its grid cells (for self-location) to create a grid cell representation of the attended location. The agent then goes through a series of attention shifts to activate the proper grid cell representations in order to reconstruct object vector cells. Marcus then describes how this model can contribute to explanations that can be tested against empirical grid cell results.
Dec 21, 2020 Research Meeting (Using Grid Cells as a Predictive-Enabling Basis): https://youtu.be/7tF-ofr7VUo
Dec 23, 2020 Research Meeting (Informal Follow-up): https://youtu.be/uZ4qC2SltXA
- 2 participants
- 41 minutes
6 Jan 2021
Lucas Souza continues his discussion on machine learning benchmarks and environments. In this meeting, he reviews the paper “Rearrangement: A Challenge for Embodied AI”. The paper proposes a set of benchmarks that captures many of the challenges the AI community needs to overcome to move towards human-level sensorimotor intelligence. He discusses how goals can be specified, a taxonomy for categorizing different types of agents and environments, and some examples of benchmarks that follow the proposed structure. The team then discusses how to translate the machine learning benchmark / environment to Numenta's work.
“Rearrangement: A Challenge for Embodied AI” by Dhruv Batra, et al.: https://arxiv.org/abs/2011.01975
Lucas Souza on iGibson Environment and Benchmark - December 14, 2020: https://youtu.be/feteCs80bIQ?t=4170
- 4 participants
- 1:10 hours
23 Dec 2020
Marcus Lewis discusses and elaborates on some of the ideas he explored and proposed in a previous meeting on using grid cells as a prediction-enabling basis. He first details the frameworks of the grid cell module and neural network. The team then asks questions about the technique used and evaluates its constraints and potential. This is an informal continuation of the research meeting on December 21, 2020.
Dec 21's meeting: https://youtu.be/7tF-ofr7VUo
- 4 participants
- 1:11 hours
21 Dec 2020
Marcus Lewis presents an unsupervised learning technique that represents inputs using magnitudes and phases in relation to grid cells. He proposes an alternate view of grid cells that enables the creation of maps of novel environments and objects on a predictive basis.
Marcus first gives an overview of his assumptions and the core pieces of the algorithm, and provides examples to support his viewpoint on grid cells in minicolumns. He then presents a simulation and shows how the technique could be implemented in artificial neural networks and in biological tissue.
- 5 participants
- 1:30 hours
16 Dec 2020
Subutai Ahmad, Lucas Souza and Karan Grewal give a recap of the Conference on Neural Information Processing Systems (NeurIPS) 2020, which was held virtually on Dec 6-12.
Subutai first gives his impressions of the conference, highlighting the positives and negatives. He then gives an overview of three papers from the conference focused on contrastive learning. Karan Grewal shares highlights from the workshops he attended, specifically talks on the community being hyper-focused on achieving benchmark performance. Karan also highlights a panel session he attended on the growing concern about bias in machine learning. Lastly, Lucas shares his experience at the conference, highlighting the tutorials, workshops and industry talks.
Papers Subutai mentioned:
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
https://proceedings.neurips.cc//paper_files/paper/2020/hash/70feb62b69f16e0238f741fab228fec2-Abstract.html
Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning
https://proceedings.neurips.cc//paper_files/paper/2020/hash/f3ada80d5c4ee70142b17b8192b2958e-Abstract.html
LoCo: Local Contrastive Representation Learning
https://proceedings.neurips.cc//paper_files/paper/2020/hash/7fa215c9efebb3811a7ef58409907899-Abstract.html
- 6 participants
- 1:12 hours
14 Dec 2020
Michaelangelo Caporale presents a summary of two papers that apply self-attention to vision tasks in neural networks. He first gives an overview of the architecture of using self-attention to learn models and compares it with RNNs. He then dives into the attention mechanism used in each paper, specifically the local attention method in “Stand-Alone Self-Attention in Vision Models” and the global attention method in “An Image is Worth 16x16 Words”. Lastly, the team discusses inductive biases in these networks, potential tradeoffs, and how the networks can learn efficiently with these mechanisms from the data they are given.
Next, Lucas Souza gives a breakdown of a potential machine learning environment and benchmark Numenta could adopt: Interactive Gibson. This simulation environment provides fully interactive scenes, allowing researchers to train and evaluate agents on tasks such as object recognition and navigation.
“Stand-Alone Self-Attention in Vision Models” by Prajit Ramachandran, et al.: https://arxiv.org/abs/1906.05909
“An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” by Alexey Dosovitskiy, et al.: https://arxiv.org/abs/2010.11929
iGibson website: http://svl.stanford.edu/igibson/
0:00 Michaelangelo Caporale on Self-Attention in Neural Networks
1:09:30 Lucas Souza on iGibson Environment and Benchmark
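For context, both papers build on the standard scaled dot-product self-attention mechanism. A generic single-head sketch (not the papers' specific local or patch-based variants; shapes and weights are arbitrary for illustration):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.

    x: (seq_len, d_model); wq/wk/wv: (d_model, d_head) projections.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ v                               # weighted sum of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 "patches", 8-dim embeddings
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (4, 8)
```

Each output position is a data-dependent weighted average of all positions, which is the contrast with an RNN's fixed sequential processing that the meeting draws.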
- 8 participants
- 1:32 hours
9 Dec 2020
Jeff Hawkins reviews the paper “Grid Cell Firing Fields in A Volumetric Space” by Roddy Grieves et al. He first goes through the premise of the paper, in which the authors recorded grid cells in rats as they moved through a 2D arena and a 3D maze. The team then explores different ways grid cell modules can encode high-dimensional information. Lastly, Marcus discusses a talk by Benjamin Dunn showing simultaneous recordings from over 100 neurons in a grid cell module.
Paper reviewed: https://www.biorxiv.org/content/10.1101/2020.12.06.413542v1
Marcus’s paper: https://www.biorxiv.org/content/10.1101/578641v2
Talk by Benjamin Dunn: https://www.youtube.com/watch?v=Hlzqvde3h0M
- 4 participants
- 1:35 hours
2 Dec 2020
Niels Leadholm, a visiting researcher, discusses the research he has done during his time at Numenta. His work builds on and extends the Thousand Brains Theory of Intelligence into the visual domain. Most machine learning networks stumble when integrating sequential samples across an image if the sequence does not follow a stereotyped pattern, whereas our brains do this effortlessly. Continuing the introduction he gave in the October 5, 2020 research meeting, Niels explores how grid cell-based path integration in a cortical network can enable reliable recognition of visual objects given an arbitrary sequence of inputs.
If you want to follow Niels' work, you can follow him on Twitter (@neuro_AI).
Interested in being a visiting researcher at Numenta? Apply to our Visiting Scholar Program here: https://numenta.com/company/careers-and-team/careers/visiting-scholar-program/
- 6 participants
- 55 minutes
30 Nov 2020
Jeff Hawkins explains how introspection can be a helpful tool in neuroscience research. He first gives an overview of what a column in the neocortex needs to know to recognize an object: it must simultaneously infer the object’s identity, location, orientation, and scale. The team then extensively discusses how scaling works in the columns. Lastly, Jeff proposes four possible explanations for how the neocortex can represent objects at different orientations.
- 6 participants
- 1:18 hours
23 Nov 2020
Karan Grewal reviews the paper “Gated Linear Networks” by Veness, Lattimore, Budden et al., 2020. He first gives an overview of the backpropagation-free neural architecture proposed in the paper, then draws parallels to Numenta’s current research, and the team discusses why these models are successful at continual learning tasks.
Link to paper: https://arxiv.org/abs/1910.01526
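The core mechanism of a Gated Linear Network can be sketched with a single neuron: a halfspace "context function" on the side information selects one of several weight vectors, the neuron geometrically mixes incoming probability estimates, and learning is a purely local logistic-loss update with no backpropagation. A minimal sketch, with illustrative dimensions, learning rate, and toy data (not the paper's experiments):

```python
import numpy as np

def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))
def logit(p): return np.log(p / (1.0 - p))

class GatedLinearNeuron:
    """One neuron of a Gated Linear Network: halfspace gating picks a weight
    vector per input, and the update is local (no backpropagated gradients)."""
    def __init__(self, n_in, side_dim, n_hyperplanes=4, lr=0.1, rng=None):
        rng = rng or np.random.default_rng(0)
        self.hyperplanes = rng.standard_normal((n_hyperplanes, side_dim))
        self.weights = np.full((2 ** n_hyperplanes, n_in), 1.0 / n_in)
        self.lr = lr

    def context(self, z):
        bits = (self.hyperplanes @ z > 0).astype(int)    # which side of each plane
        return int(bits @ (2 ** np.arange(len(bits))))   # index into weight table

    def predict(self, p, z):
        return sigmoid(self.weights[self.context(z)] @ logit(p))

    def update(self, p, z, target):
        c = self.context(z)
        out = sigmoid(self.weights[c] @ logit(p))
        self.weights[c] -= self.lr * (out - target) * logit(p)  # local gradient step
        return out

rng = np.random.default_rng(1)
neuron = GatedLinearNeuron(n_in=3, side_dim=2, rng=rng)
for _ in range(500):
    z = rng.standard_normal(2)                    # side information
    target = float(z[0] > 0)                      # toy label
    p = rng.uniform(0.2, 0.8, 3)                  # base predictions to mix
    neuron.update(p, z, target)
```

Because each context selects its own weight vector, different regions of input space are learned by different parameters, which is one intuition for why these models resist catastrophic forgetting.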
- 6 participants
- 48 minutes
16 Nov 2020
Lucas Souza discusses few-shot learning and relevant benchmarks used to measure performance in these settings. This is part of a series of research meetings aimed at reviewing common training paradigms and benchmarks in machine learning, and their relation to the Thousand Brains Theory of Intelligence.
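Few-shot benchmarks are typically organized as N-way, k-shot episodes: a small support set drawn from previously unseen classes is used to adapt the model, and a query set from the same classes is used to score it. A minimal episode sampler (the dataset structure here is hypothetical, not a specific benchmark's API):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=5, rng=None):
    """Sample one N-way, k-shot episode from a dict mapping
    class label -> list of examples."""
    rng = rng or random.Random(0)
    classes = rng.sample(sorted(dataset), n_way)          # pick N classes
    support, query = [], []
    for label in classes:
        examples = rng.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]   # k per class to adapt
        query += [(x, label) for x in examples[k_shot:]]     # held out to score
    return support, query

toy = {c: [f"{c}_{i}" for i in range(20)] for c in "ABCDEFG"}
support, query = sample_episode(toy)
print(len(support), len(query))  # 5 25
```

Accuracy averaged over many such episodes is the standard headline number on benchmarks like Omniglot and miniImageNet.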
- 5 participants
- 59 minutes
19 Oct 2020
We invited guest speaker Viviane Clay from the University of Osnabrück to talk about her research on learning sparse and meaningful representations through embodiment. In the first part, she explores how these types of representations of the world are learned in an embodied setting by training a deep reinforcement learning agent on a 3D navigation task with RGB images as the main sensory input. She then discusses how the model learns a sparse encoding of high-dimensional visual inputs without explicitly enforcing sparsity, and what the possible hypotheses for this phenomenon are.
In the second part, she covers her ongoing work on extracting concepts by identifying a minimal set of co-occurring activations that represents an object in a curiosity-driven learning setting. These concepts can be used to improve sample efficiency and performance on downstream tasks, such as object classification or the full reinforcement learning task.
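One way to make the emergent-sparsity observation concrete is to measure the population sparsity of a layer's activations, i.e. the fraction of units inactive for each input. A minimal sketch over synthetic ReLU-style activations (not Viviane's actual model):

```python
import numpy as np

def population_sparsity(acts, threshold=0.0):
    """Fraction of unit activations at or below threshold
    (rows = inputs, columns = units)."""
    return float((acts <= threshold).mean())

rng = np.random.default_rng(0)
# ReLU over zero-mean noise: roughly half the activations end up at exactly 0
acts = np.maximum(rng.standard_normal((100, 256)), 0)
sparsity = population_sparsity(acts)
print(sparsity)
```

An embodied agent whose activations are far sparser than this ReLU baseline, without any sparsity penalty in the loss, is the phenomenon whose possible causes the talk discusses.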
- 6 participants
- 1:36 hours
14 Oct 2020
Subutai Ahmad and Jeff Hawkins clarify aspects of active dendrites, and some of the history behind them, for other members of our research group. Using two key papers as a backdrop, they highlight issues such as basic dendritic integration, technological advances for stimulation of synapses, the controversy around and meaning of dendritic spikes, and the importance of understanding temporal dynamics. The team then discusses how to better understand dendrites from a machine learning perspective.
Papers mentioned:
'The Decade of the Dendritic NMDA Spike': https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5643072/
'Active Properties of Neocortical Pyramidal Neuron Dendrites': https://pubmed.ncbi.nlm.nih.gov/23841837/
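From a machine learning perspective, one common abstraction (a simplification, not the biophysical models in these papers) treats distal dendritic segments as context detectors: the best-matching segment produces an NMDA-spike-like event that modulates, rather than directly drives, the unit's feedforward response. A hypothetical sketch:

```python
import numpy as np

def active_dendrite_unit(x, context, w_ff, segments):
    """Feedforward drive gated by the strongest dendritic segment match.
    x: feedforward input; context: distal input; segments: one weight
    row per dendritic segment."""
    ff = float(w_ff @ x)                        # proximal / feedforward drive
    seg = float(np.max(segments @ context))     # strongest distal segment match
    return ff * (1.0 / (1.0 + np.exp(-seg)))    # dendritic gate scales the output

rng = np.random.default_rng(0)
x, ctx = rng.standard_normal(10), rng.standard_normal(8)
w_ff = rng.standard_normal(10)
segments = rng.standard_normal((5, 8))          # 5 dendritic segments
y = active_dendrite_unit(x, ctx, w_ff, segments)
```

The max over segments captures the nonlinearity discussed in the meeting: a segment only matters when its synapses are co-activated strongly enough to dominate.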
- 6 participants
- 1:14 hours
5 Oct 2020
Niels Leadholm, a visiting researcher, discusses some ideas for further research on how to apply the object recognition implemented in Numenta's 2019 paper “Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells” to images. In particular, the research would explore whether the strengths of object reference frames and grid-cell encoding can be leveraged in an image-based setting.
Read paper here: https://www.frontiersin.org/articles/10.3389/fncir.2019.00022/full
- 4 participants
- 27 minutes
23 Sep 2020
Jeff Hawkins brainstorms some ideas on minicolumns, continuing a concept he presented in Numenta’s July 27, 2020 research meeting. In that meeting, he hypothesized that minicolumns represent movement vectors. Here, Jeff suggests a new mechanism for computing reference frame transformations that ties into his minicolumn hypothesis: the movement vectors of the minicolumns’ upper layers are allocentric, while those of the lower layers are egocentric.
July 27, 2020 Research Meeting: https://www.youtube.com/watch?v=yceJeKf-ad4
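The egocentric/allocentric distinction can be made concrete with the standard rotation between the two coordinate frames (a generic sketch, not Numenta's proposed cortical mechanism):

```python
import numpy as np

def ego_to_allocentric(move_ego, heading):
    """Rotate a body-centered (egocentric) movement vector into world
    (allocentric) coordinates, given the agent's heading in radians."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s],
                  [s, c]])
    return R @ np.asarray(move_ego)

# "forward" while facing 90 degrees (north) becomes movement along +y
allo = ego_to_allocentric([1.0, 0.0], np.pi / 2)
print(allo.round(6))  # [0. 1.]
```

The hypothesis in the meeting is that something equivalent to this rotation happens between the lower (egocentric) and upper (allocentric) layers of a minicolumn.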
- 6 participants
- 56 minutes
21 Sep 2020
In this week’s research meeting, Marcus Lewis presents his ‘Eigen-view’ of grid cells and connects ideas from three underlying papers to Numenta’s research. He discusses how grid cell maps can be described in terms of eigenvectors, evaluating them both through the Fourier transform (for continuous space) and through its graph analogue, spectral graph theory (for 2D graphs).
Marcus’s whiteboard presentation: https://miro.com/app/board/o9J_klMK6P0=/
Papers referenced:
► "Prediction with directed transitions: complex eigen structure, grid cells and phase coding" by Changmin Yu, Timothy E.J. Behrens, Neil Burgess - https://arxiv.org/abs/2006.03355
► “The hippocampus as a predictive map” by Kimberly L. Stachenfeld, Matthew M. Botvinick, Samuel J. Gershman - https://www.biorxiv.org/content/10.1101/097170v4
► “Grid Cells Encode Local Positional Information” by Revekka Ismakov, Omri Barak, Kate Jeffery, Dori Derdikman - https://www.cell.com/current-biology/fulltext/S0960-9822(17)30771-6
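The eigenvector view can be illustrated with the successor-representation construction from the Stachenfeld et al. paper: for a random walk on a ring, the transition matrix is circulant, so the SR's eigenvectors are Fourier modes, i.e. periodic, grid-like patterns over the space. A minimal sketch with illustrative parameters:

```python
import numpy as np

n = 32
# random-walk transition matrix on a ring of n states
T = np.zeros((n, n))
for i in range(n):
    T[i, (i - 1) % n] = T[i, (i + 1) % n] = 0.5

gamma = 0.95
SR = np.linalg.inv(np.eye(n) - gamma * T)   # successor representation

# T is circulant, so the SR's eigenvectors are Fourier modes: the top one
# is constant, and the next ones are sinusoids over the ring
evals, evecs = np.linalg.eigh((SR + SR.T) / 2)
print(evecs[:, -2].round(2))                # a low-frequency sinusoid
```

This is the bridge between the "Fourier transform" and "spectral graph theory" views in the presentation: on a regular graph they coincide, and on irregular 2D graphs the graph eigenvectors generalize the Fourier modes.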
- 7 participants
- 1:31 hours
16 Sep 2020
Karan Grewal reviews the paper “Accurate Representation for Spatial Cognition Using Grid Cells” by Nicole Sandra-Yaffa Dumont and Chris Eliasmith. He first gives an overview of semantic pointers and then discusses the use of grid cells in spatial representations.
Link to paper: https://cognitivesciencesociety.org/cogsci20/papers/0562/0562.pdf
- 5 participants
- 33 minutes
16 Sep 2020
Niels Leadholm, a visiting researcher, discusses his (recently de-anonymized) PhD research on hierarchical feature binding and robust machine vision. He first explores the problem of robust machine vision and his motivation for developing a deep-learning architecture using a biologically inspired approach. Many current AI systems are vulnerable to adversarial examples. Niels explains how the characteristics of “feature binding,” which occurs in the primate brain, can be implemented in machine learning systems to enhance robustness.
If you want to follow Niels’ work, you can follow him on Twitter (@neuro_AI).
Interested in being a visiting researcher at Numenta? Apply to our Visiting Scholar Program here: https://numenta.com/company/careers-and-team/careers/visiting-scholar-program/
- 5 participants
- 45 minutes
9 Sep 2020
This week, we invited Max Bennett to discuss his recently published model of cortical columns, sequences with precise time scales, and working memory. His work builds on and extends our past work in several interesting directions. Max explains his unusual background, and then discusses the key elements of his paper.
Link to his paper: https://www.frontiersin.org/articles/10.3389/fncir.2020.00040/full#h1
- 3 participants
- 1:16 hours
24 Aug 2020
Kevin Hunter does a recap on the Hot Chips 32 conference, which was held virtually on Aug 16-18. He highlights presentations and talks that focus on the latest processor innovations and machine learning processing.
- 4 participants
- 45 minutes
19 Aug 2020
Marcus Lewis discusses how continual learning presents a dilemma of memory vs generalization. He also presents an idea that quick few-shot learning (e.g. MAML) may offer a different, and biologically plausible way of solving this dilemma.
- 8 participants
- 1:13 hours
17 Aug 2020
Jeff Hawkins reviews the new paper "Neuronal vector coding in spatial cognition" by Andrej Bicanski and Neil Burgess. The paper reviews the many types of cells involved in spatial navigation and memory. Jeff then ties the paper to The Thousand Brains Theory of Intelligence, using it as a launch point for discussion on how the neocortex makes transformations of reference frames.
Neuronal vector coding in spatial cognition, Andrej Bicanski and Neil Burgess
https://www.nature.com/articles/s41583-020-0336-9
- 4 participants
- 49 minutes
10 Aug 2020
In today's meeting, Jeff Hawkins gives some brief comments on the book, "Human Compatible" by Stuart Russell. Then Michaelangelo sparks a discussion on why dimensionality is important in The Thousand Brains Theory.
- 8 participants
- 1:05 hours
5 Aug 2020
In our previous research meeting, Subutai reviewed three different papers on continual learning models. In today's short research meeting, Karan reviews a paper from 1991 that he points out was referenced by all three. The paper, "Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks" (http://axon.cs.byu.edu/~martinez/classes/678/Presentations/Dean.pdf), was one of the first to use sparse representations in continual learning.
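The core intuition behind that 1991 paper — sparse, semi-distributed representations overlap far less than dense ones, so new learning disturbs fewer of the units an old memory depends on — can be illustrated numerically. This is a sketch of the intuition only, not code from the paper; the vector sizes and sparsity levels are made up:

```python
import random

def random_binary_vector(n, active):
    """Binary vector of length n with `active` randomly chosen 1-bits."""
    v = [0] * n
    for i in random.sample(range(n), active):
        v[i] = 1
    return v

def overlap(a, b):
    """Number of positions where both vectors are active."""
    return sum(x & y for x, y in zip(a, b))

random.seed(0)
n = 1000
# Dense: half the units active; sparse: 2% of units active.
dense = [random_binary_vector(n, 500) for _ in range(2)]
sparse = [random_binary_vector(n, 20) for _ in range(2)]

# Expected overlap of two random patterns is active^2 / n:
# ~250 shared bits for the dense pair, ~0.4 for the sparse pair,
# so updates to one sparse pattern barely touch the other.
print(overlap(dense[0], dense[1]), overlap(sparse[0], sparse[1]))
```

The same arithmetic is why sparse distributed representations tolerate interference in HTM-style models: the chance that two random sparse patterns collide on many bits falls off very fast with sparsity.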
- 5 participants
- 23 minutes
3 Aug 2020
In this meeting, Subutai discusses three recent papers and models (OML, ANML, and Supermasks) on continual learning. The models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on some standard benchmarks. We discuss some of the relationships to HTM theory and neuroscience.
Papers discussed:
1. Meta-Learning Representations for Continual Learning (http://arxiv.org/abs/1905.12588)
2. Learning to Continually Learn (http://arxiv.org/abs/2002.09571)
3. Supermasks in Superposition (http://arxiv.org/abs/2006.14769)
- 7 participants
- 1:06 hours
29 Jul 2020
This research meeting contains a couple of short topics presented by Subutai Ahmad and Jeff Hawkins. But first, the research team tries something new. After the previous meeting, there were a few comments and questions posted, and the team decided to address them live, for the first 18 minutes.
Then, Subutai discusses the aperture problem and discusses a paper, "Contribution of feedforward, lateral and feedback connections to the classical receptive field center and extra-classical receptive field surround of primate V1 neurons." (http://europepmc.org/article/med/17010705)
Next, Jeff briefly discusses a recent paper that supports the notion that head direction cells are a type of path integration, "Constant Sub-second Cycling between Representations of Possible Futures in the Hippocampus" by Kay et al., Cell, May 2020. (https://www.sciencedirect.com/science/article/abs/pii/S0092867420300611)
- 7 participants
- 1:06 hours
27 Jul 2020
In this research meeting, Jeff Hawkins presents several new ideas, in a continuation of a recent concept he presented: "minicolumn is a movement vector."
- 5 participants
- 1:35 hours
15 Jul 2020
In this research meeting Subutai and Karan focus on reviewing 4 related meta-learning papers. Subutai (after an initial surprise reveal) summarizes MAML, a core meta-learning technique, by @chelseabfinn et al, and a simpler variant, Reptile, by Alex Nichol et al. Karan reviews two probabilistic/Bayesian variants of MAML by Tom Griffiths et al.
Papers: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (https://arxiv.org/abs/1703.03400), On First-Order Meta-Learning Algorithms (https://arxiv.org/abs/1803.02999), Recasting Gradient-Based Meta-Learning as Hierarchical Bayes (https://arxiv.org/abs/1801.08930), and Reconciling meta-learning and continual learning with online mixtures of tasks (https://arxiv.org/abs/1812.06080).
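MAML's two-level structure — an inner gradient step that adapts to each task, and an outer update that moves the initialization so that post-adaptation loss is low — can be sketched on a toy family of quadratic tasks. This is a hedged illustration with made-up tasks and hyperparameters, using a numerical outer gradient in place of the paper's second-order backpropagation:

```python
# Hypothetical toy task family: task i has loss (theta - a_i)^2.
def task_loss(theta, a):
    return (theta - a) ** 2

def task_grad(theta, a):
    return 2.0 * (theta - a)

def meta_loss(theta, tasks, alpha):
    """MAML objective: average loss AFTER one inner gradient step per task."""
    total = 0.0
    for a in tasks:
        adapted = theta - alpha * task_grad(theta, a)  # inner adaptation step
        total += task_loss(adapted, a)
    return total / len(tasks)

tasks = [-2.0, 0.5, 3.0]
alpha, beta, theta = 0.1, 0.05, 10.0   # inner LR, outer LR, initialization
for _ in range(500):
    # Outer update: numerical gradient of the post-adaptation loss.
    eps = 1e-5
    g = (meta_loss(theta + eps, tasks, alpha)
         - meta_loss(theta - eps, tasks, alpha)) / (2 * eps)
    theta -= beta * g
# theta converges toward the initialization that adapts fastest,
# which for this symmetric quadratic family is the mean of the a_i (0.5).
```

Reptile, the first-order variant mentioned above, replaces the gradient-through-a-gradient with a simpler move of the initialization toward each task's adapted parameters.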
- 9 participants
- 1:04 hours
2 Jul 2020
In this research meeting, guest Stephen Omohundro gave a fascinating talk on GPT-3, the new massive OpenAI Natural Language Processing model. He reviewed the network architecture, training process, and results in the context of past work. There was extensive discussion on the implications for NLP and for Machine Intelligence / AGI.
Link to GPT-3 paper: https://arxiv.org/abs/2005.14165
Link to slides from this presentation: https://www.slideshare.net/numenta/openais-gpt-3-language-model-guest-steve-omohundro
- 9 participants
- 1:41 hours
24 Jun 2020
In this research meeting, guest presenters from the Neuromorphic AI Lab at the University of Texas at San Antonio presented to the Numenta research team. Dr. Dhireesha Kudithipudi, Professor & Lab Director, and her team have been working on HTM-related projects for nearly 7 years. Today, she and her student Abdullah M. Zyarah reviewed their recent paper, "Neuromorphic System for Spatial and Temporal Information Processing," published in IEEE Transactions on Computers 2019. The paper describes a complete memristor-based implementation of HTM spatial pooling and temporal memory. Their system learns continuously and achieves impressive improvements in latency and power consumption.
Neuromorphic System for Spatial and Temporal Information Processing:
https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9109629
- 5 participants
- 1:18 hours
22 Jun 2020
In the first part of the meeting Jeff discusses grid cells formed via oscillatory systems, the Bush & Burgess model of ring attractors, and how this idea might be overlaid onto cortical columns.
Starting at 34:00 Subutai switches gears quite a bit, and discusses a new paradigm for achieving AGI via meta-meta learning, by reviewing Jeff Clune’s 2019 paper “AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence” (https://arxiv.org/abs/1905.10985). We discuss the prospects for meta-learning AGI, and meta-learning Numenta’s neuroscience based approach.
- 5 participants
- 1:37 hours
8 Jun 2020
Ares Fisher discusses dendrites in machine learning, and reviews the paper “Improved Expressivity Through Dendritic Neural Networks” by Wu et al.
Paper: https://proceedings.neurips.cc/paper/2018/file/e32c51ad39723ee92b285b362c916ca7-Paper.pdf
- 7 participants
- 1:27 hours
3 Jun 2020
In this short research meeting, Marcus raises some questions about Jeff’s brainstorming session on June 1.
Link to June 1 brainstorming session: https://www.youtube.com/watch?v=uQpX-MnAJqU
- 4 participants
- 32 minutes
1 Jun 2020
Jeff Hawkins brainstorms how sensorimotor models might be built up from purely sensory data, how this might fit into a cortical column, and the importance of magnocellular and parvocellular cells.
For a continuation of this discussion, view the next video: https://www.youtube.com/watch?v=jQCtuK9XbTE
- 3 participants
- 28 minutes
27 May 2020
Subutai gives a basic overview of Quantization in Neural Networks, and then reviews the paper “And the Bit Goes Down: Revisiting the Quantization of Neural Networks” by Stock et al., 2020.
http://arxiv.org/abs/1907.05686
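For background on the basic idea of quantization that Subutai's overview covers, here is a minimal sketch of uniform affine quantization of a weight vector. Note this is generic background only: the Stock et al. paper itself uses product quantization of weight blocks, not this scalar scheme, and the bit widths here are arbitrary:

```python
def quantize(weights, num_bits=8):
    """Uniform affine quantization: map floats to integers in [0, 2^b - 1].
    Background sketch only; the paper reviewed uses product quantization."""
    lo, hi = min(weights), max(weights)
    levels = (1 << num_bits) - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the integer codes."""
    return [qi * scale + lo for qi in q]

w = [-0.5, -0.1, 0.0, 0.2, 0.5]
q, scale, lo = quantize(w, num_bits=4)
w_hat = dequantize(q, scale, lo)
# Max reconstruction error is bounded by half a quantization step.
assert max(abs(a - b) for a, b in zip(w, w_hat)) <= scale / 2 + 1e-12
```

The paper's key question is a refinement of this picture: rather than minimizing per-weight reconstruction error, it minimizes the error of the layer's *outputs*, which is why it can push compression much further.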
- 6 participants
- 39 minutes
20 May 2020
Lucas Souza does a “trip report” on the ICLR 2020 conference, which was held remotely. He focuses on papers related to neuroscience, deep learning theory, pruning and sparsity.
- 9 participants
- 1:03 hours
18 May 2020
Jeff Hawkins reviews the thalamic inputs to the various layers, and discusses their importance on minicolumns, representing features vs movements, and a surprising finding regarding simple and complex cells. Discussion ensues.
- 5 participants
- 54 minutes
28 Apr 2020
Numenta Research Meeting, April 20, 2020. In this meeting, Jeff suggests that grid cell encoding of large location spaces can't happen just by superimposing multiple grid cell modules, and proposes a temporal-memory-like SDR encoding of location instead.
- 5 participants
- 50 minutes
22 Apr 2020
Numenta Research Meeting, April 22, 2020. Ares reviews several plasticity mechanisms, including developmental plasticity, various forms of Hebbian plasticity, eligibility traces, homeostatic plasticity, and the impact of neuromodulators. In the second part, Jeff briefly reviews a few findings and facts related to optical recordings of grid cells.
- 6 participants
- 1:31 hours
18 Mar 2020
Florian Fiebig discusses his attendance at COSYNE 2020.
Discussion at https://discourse.numenta.org/t/cosyne-2020-recap-numenta-research-mar-9/7268
Part 1: https://youtu.be/BLBPqIOyMgo
Part 2: https://youtu.be/JM5DE2BChT0
- 7 participants
- 50 minutes
16 Mar 2020
This paper describes a model of how an animal might use grid cells, place cells, and border cells to navigate in complex environments. It was an excellent summary of existing ideas and it introduced several things we were not aware of that could be important for understanding how a cortical column works.
Read paper at https://onlinelibrary.wiley.com/doi/10.1002/hipo.23147
Discuss at https://discourse.numenta.org/t/navigating-with-grid-and-place-cells-in-cluttered-environments-paper-review/7296
- 8 participants
- 1:21 hours
4 Mar 2020
Lucas and Marcus continue discussions on attention and transformers. Jeff adds the idea of myelin to the conversation.
- 8 participants
- 50 minutes
2 Mar 2020
Marcus Lewis on attention. He reviews current papers on Transformers and relates them to HTM with Jeff.
Recurrent models of visual attention
https://arxiv.org/abs/1406.6247
Attention is all you need
https://arxiv.org/abs/1706.03762
- 8 participants
- 1:10 hours
24 Feb 2020
This is the 2nd part of Ares' talk. The first half is here: https://youtu.be/L8PzquwMV6A
Discuss at: https://discourse.numenta.org/t/numenta-research-meeting-feb-19-2020-part-2/7239
Paper: https://www.nature.com/articles/nn.4385
- 5 participants
- 1:18 hours
19 Feb 2020
Numenta Research Meeting - Feb 19, 2020
Discussion at: https://discourse.numenta.org/t/numenta-research-meeting-feb-25-2020/7238
Paper: https://www.cell.com/neuron/fulltext/S0896-6273(16)30707-3
- 5 participants
- 33 minutes
19 Feb 2020
This research meeting was split into two parts. This is the 2nd part of the research meeting, but the 1st part of Ares' talk.
First part of this research meeting: https://youtu.be/3fdl9O7WTHM
Second half of Aries' talk: https://youtu.be/3THc5dN-2Wg
Discuss at: https://discourse.numenta.org/t/numenta-research-meeting-feb-19-2020-part-2/7239
Paper: https://www.nature.com/articles/nn.4385
- 4 participants
- 43 minutes
12 Feb 2020
Continuing conversation about traveling waves (see NRM from Feb 12).
- https://www.nature.com/articles/nature08010
- https://www.ncbi.nlm.nih.gov/pubmed/22072668
Discussion at https://discourse.numenta.org/t/implications-of-traveling-waves-nrm-feb-12-2020/7188
- 4 participants
- 1:15 hours
12 Feb 2020
Jeff Hawkins will talk about some papers he is reading on traveling theta waves, and how they might work in primary sensory cortex.
- https://www.nature.com/articles/nature08010
- https://www.ncbi.nlm.nih.gov/pubmed/22072668
- 5 participants
- 50 minutes
10 Feb 2020
Numenta Research Meeting, with Marcus Lewis presenting a writeup on Backprop-Trained Permanences. See it at https://github.com/mrcslws/nupic.research/blob/backprop-structure/projects/backprop_structure/documents/backprop-permanences/backprop-permanences.pdf
Discussion at https://discourse.numenta.org/t/backprop-trained-permanences-nrm-feb-10-2020/7166.
- 5 participants
- 25 minutes
5 Feb 2020
Some discussion of Local Field Potential (LFP) from Florian, probably some random discussion of other things.
https://discourse.numenta.org/t/numenta-research-meeting-feb-5-2020/7147
- 6 participants
- 35 minutes
24 Jan 2020
Florian's ideas after reading the following paper:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3972729/
A Hybrid Oscillatory Interference/Continuous Attractor Network Model of Grid Cell Firing
Daniel Bush and Neil Burgess
Grid cells in the rodent medial entorhinal cortex exhibit remarkably regular spatial firing patterns that tessellate all environments visited by the animal. Two theoretical mechanisms that could generate this spatially periodic activity pattern have been proposed: oscillatory interference and continuous attractor dynamics. Although a variety of evidence has been cited in support of each, some aspects of the two mechanisms are complementary, suggesting that a combined model may best account for experimental data. The oscillatory interference model proposes that the grid pattern is formed from linear interference patterns or “periodic bands” in which velocity-controlled oscillators integrate self-motion to code displacement along preferred directions. However, it also allows the use of symmetric recurrent connectivity between grid cells to provide relative stability and continuous attractor dynamics. Here, we present simulations of this type of hybrid model, demonstrate that it generates intracellular membrane potential profiles that closely match those observed in vivo, addresses several criticisms aimed at pure oscillatory interference and continuous attractor models, and provides testable predictions for future empirical studies.
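The oscillatory interference mechanism in the abstract above can be sketched numerically: a baseline oscillator and a velocity-controlled oscillator beat against each other, and the beat envelope peaks at fixed *spatial* intervals (1/beta) no matter how fast the animal runs. This is a 1-D toy illustration with made-up parameters, not the Bush & Burgess model itself:

```python
import math

def envelope(t, speed, beta):
    # cos(2*pi*f0*t) + cos(2*pi*(f0 + beta*speed)*t) beats with
    # envelope 2*|cos(pi*beta*speed*t)|; we track the envelope directly.
    return abs(math.cos(math.pi * beta * speed * t))

def field_positions(speed, beta=0.5, duration=10.0, dt=0.001):
    """Positions x = speed*t where the beat envelope peaks (firing fields)."""
    ts = [i * dt for i in range(int(duration / dt))]
    es = [envelope(t, speed, beta) for t in ts]
    return [speed * ts[i] for i in range(1, len(ts) - 1)
            if es[i] >= es[i - 1] and es[i] > es[i + 1]]

# Field spacing is ~1/beta = 2.0 for both speeds: the temporal beat
# slows down exactly as much as the animal speeds up.
print(field_positions(1.0)[:3], field_positions(2.0)[:3])
```

The speed-invariance of the spatial period is the essential property: the velocity-controlled oscillator integrates self-motion, so the interference pattern is anchored to space rather than time.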
- 3 participants
- 1:37 hours
24 Jan 2020
Invariance vs. Equivariance (presented by Marcus Lewis)
http://dicarlolab.mit.edu/sites/dicarlolab.mit.edu/files/pubs/dicarlo%20and%20cox%202007.pdf
Discussion at https://discourse.numenta.org/t/numenta-research-invariance-vs-equivariance-jan-24-2020/7115
- 4 participants
- 1:11 hours
15 Jan 2020
This live-stream was delayed because of an internet outage. I am reposting the recorded video in full here. The previous live stream video will be removed.
Florian's ideas after reading the following paper:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3972729/
A Hybrid Oscillatory Interference/Continuous Attractor Network Model of Grid Cell Firing
Daniel Bush and Neil Burgess
Grid cells in the rodent medial entorhinal cortex exhibit remarkably regular spatial firing patterns that tessellate all environments visited by the animal. Two theoretical mechanisms that could generate this spatially periodic activity pattern have been proposed: oscillatory interference and continuous attractor dynamics. Although a variety of evidence has been cited in support of each, some aspects of the two mechanisms are complementary, suggesting that a combined model may best account for experimental data. The oscillatory interference model proposes that the grid pattern is formed from linear interference patterns or “periodic bands” in which velocity-controlled oscillators integrate self-motion to code displacement along preferred directions. However, it also allows the use of symmetric recurrent connectivity between grid cells to provide relative stability and continuous attractor dynamics. Here, we present simulations of this type of hybrid model, demonstrate that it generates intracellular membrane potential profiles that closely match those observed in vivo, addresses several criticisms aimed at pure oscillatory interference and continuous attractor models, and provides testable predictions for future empirical studies.
Another paper mentioned: https://www.researchgate.net/publication/259456051_Theta_phase_precession_of_grid_and_place_cell_firing_in_open_environments
- 10 participants
- 1:05 hours
8 Jan 2020
Florian's ideas after reading the following paper:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3972729/
A Hybrid Oscillatory Interference/Continuous Attractor Network Model of Grid Cell Firing
Daniel Bush and Neil Burgess
Grid cells in the rodent medial entorhinal cortex exhibit remarkably regular spatial firing patterns that tessellate all environments visited by the animal. Two theoretical mechanisms that could generate this spatially periodic activity pattern have been proposed: oscillatory interference and continuous attractor dynamics. Although a variety of evidence has been cited in support of each, some aspects of the two mechanisms are complementary, suggesting that a combined model may best account for experimental data. The oscillatory interference model proposes that the grid pattern is formed from linear interference patterns or “periodic bands” in which velocity-controlled oscillators integrate self-motion to code displacement along preferred directions. However, it also allows the use of symmetric recurrent connectivity between grid cells to provide relative stability and continuous attractor dynamics. Here, we present simulations of this type of hybrid model, demonstrate that it generates intracellular membrane potential profiles that closely match those observed in vivo, addresses several criticisms aimed at pure oscillatory interference and continuous attractor models, and provides testable predictions for future empirical studies.
Another paper mentioned: https://www.researchgate.net/publication/259456051_Theta_phase_precession_of_grid_and_place_cell_firing_in_open_environments
- 7 participants
- 1:17 hours
18 Dec 2019
NeurIPS 2019 Conference Recap from Numenta. Discussion at https://discourse.numenta.org/t/numenta-research-meeting-dec-18-2019/6928
- 9 participants
- 58 minutes
16 Dec 2019
From Florian:
In response to Jeff's contemplation of MCs in 1-dimensional terms, I'm (re)reading a bunch of grid-cell papers with respect to their apparent dimensionality and what happens during projections into lower space (such as projections from 2D grid cells in navigation tasks on a linear track):
https://www.ncbi.nlm.nih.gov/pubmed/26898777
I also want to take a few minutes to talk about the non-Hebbian behavioral timescale plasticity involved in the spontaneous or targeted creation/remapping of place cells, described by Bittner and Milstein:
https://www.ncbi.nlm.nih.gov/pubmed/28883072
- 7 participants
- 1:35 hours
9 Dec 2019
Jeff Hawkins will lead this research meeting. Topic is dimensionality in grid cells and the Thousand Brains Theory of Intelligence.
Discussion and photos of the white board at https://discourse.numenta.org/t/numenta-research-meeting-dec-9-2019/6898
- 3 participants
- 47 minutes
4 Dec 2019
Marcus Lewis will draw a connection between the "Sparse Manifold Transform" paper and Numenta's general "location" idea.
http://papers.nips.cc/paper/8251-the-sparse-manifold-transform
Discussion at https://discourse.numenta.org/t/numenta-research-meeting-dec-4-2019/6881
- 6 participants
- 1:17 hours
20 Nov 2019
Topic is "Does sparsity help Continual Learning?"
Hosted by Vincenzo Lomonaco.
- 9 participants
- 1:18 hours
13 Nov 2019
Numenta Research Meeting, Nov 13, 2019.
with Marcus Lewis
Tracking synapse usefulness with a “permanence”.
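The "permanence" idea as used in HTM is roughly this: each potential synapse carries a scalar in [0, 1] that is incremented when the synapse's input was active at the right time and decremented otherwise, and the synapse only counts as connected once the scalar crosses a threshold. A minimal sketch of that mechanism (illustrative parameter values, not Marcus's backprop-trained variant):

```python
class Synapse:
    """Potential synapse whose usefulness is tracked by a permanence scalar."""
    CONNECTED_THRESHOLD = 0.5

    def __init__(self, permanence=0.3):
        self.permanence = permanence

    @property
    def connected(self):
        # Only synapses above threshold participate in the cell's activity.
        return self.permanence >= self.CONNECTED_THRESHOLD

    def update(self, presynaptic_active, increment=0.1, decrement=0.02):
        # Reinforce synapses whose input was active; decay the rest.
        if presynaptic_active:
            self.permanence = min(1.0, self.permanence + increment)
        else:
            self.permanence = max(0.0, self.permanence - decrement)

s = Synapse()
for _ in range(5):
    s.update(presynaptic_active=True)
# After repeated co-activity the synapse crosses threshold and connects.
```

The asymmetric increment/decrement gives slow forgetting and fast learning; the thresholded binary "connected" state is what makes the resulting connectivity sparse.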
- 7 participants
- 1:24 hours
8 Nov 2019
Florian Fiebig will talk about the (struggling) persistent activity theory of working memory.
- 5 participants
- 1:16 hours
4 Nov 2019
Paper Discussion: Hierarchical organization of cortical and thalamic connectivity (https://www.nature.com/articles/s41586-019-1716-z)
There may be other topics.
- 5 participants
- 50 minutes
28 Oct 2019
Moving forward on the STP model to show the first application (beyond merely matching electrophysiology), a working memory model by Mongillo, Barak & Tsodyks (https://science.sciencemag.org/content/319/5869/1543). Free PDF access through here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.352.9618&rep=rep1&type=pdf
- 6 participants
- 59 minutes
23 Oct 2019
I'll probably touch on both these papers, although the first one is more essential reading than the second.
- Cortical mechanisms of action selection: the affordance competition hypothesis http://www.cisek.org/pavel/Pubs/Cisek2007.pdf
- Resynthesizing behavior through phylogenetic refinement https://link.springer.com/content/pdf/10.3758%2Fs13414-019-01760-1.pdf
What's interesting to me in these models is the similarity between "affordances" in Cisek's models and "objects" in ours.
- 9 participants
- 1:07 hours
18 Oct 2019
I'll just present some observations that will be helpful if you want to dive in deeper someday. Most networks / objective functions can be translated into the language of variational inference, and doing so often provides useful insights. I'll show an example: how Gaussian dropout can be described in this language, and how this tells us something interesting about quantization. (This observation comes from the variational dropout paper http://papers.nips.cc/paper/5666-variational-dropout-and-the-local-reparameterization-trick)
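As a concrete starting point for the Gaussian-dropout discussion, here is a minimal sketch. The N(1, p/(1-p)) noise variance follows the standard Gaussian-dropout formulation; this is not necessarily the talk's exact derivation.

```python
import numpy as np

def gaussian_dropout(x, p=0.5, rng=None):
    """Gaussian dropout: instead of zeroing units with probability p,
    multiply each unit by noise ~ N(1, p/(1-p)). The output mean matches
    the input, and the noise variance matches Bernoulli dropout's."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = p / (1.0 - p)
    noise = rng.normal(loc=1.0, scale=np.sqrt(alpha), size=x.shape)
    return x * noise

# With p = 0.5 the multiplicative noise is N(1, 1): mean preserved, unit variance.
rng = np.random.default_rng(0)
y = gaussian_dropout(np.ones(100_000), p=0.5, rng=rng)
```

Treating the noise scale alpha as a learnable per-weight quantity is the step that connects this to variational inference in the linked paper.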
- 7 participants
- 53 minutes
9 Oct 2019
- 5 participants
- 1:31 hours
27 Sep 2019
The very interesting and recently published paper at ICLR2019 studying the impact of sparsity in the context of Continual Learning:
https://openreview.net/forum?id=Bkxbrn0cYX
Related: Continual Learning via Neural Pruning
Siavash Golkar, Michael Kagan, Kyunghyun Cho https://arxiv.org/abs/1903.04476
- 6 participants
- 2:07 hours
20 Sep 2019
Paper review: https://arxiv.org/abs/1804.02464 "Differentiable plasticity: training plastic neural networks with backpropagation"
It aims to draw a connection between the work we are doing on structural plasticity through Hebbian learning and continual learning.
Will possibly review a 2nd paper: "Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity"
https://openreview.net/forum?id=r1lrAiA5Ym
Subutai, time permitting, will go over "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties"
https://www.nature.com/articles/338334a0
- 6 participants
- 54 minutes
18 Sep 2019
https://link.springer.com/article/10.1007/s10827-019-00729-1 from visiting scientist Florian Fiebig. He says:
It's a brief six-page paper, and I think it can serve as a neat introduction to the kinds of spiking neural networks and the style of model thinking about the cortical microcircuit that I was working on for my PhD.
The main idea in short:
Many Hebbian learning rules violate Dale's principle (a neuron cannot be both excitatory and inhibitory; all its axons release the same neurotransmitter) because dynamic synaptic weight learning may change the sign of an individual connection. Using the example of a reduced cortical microcircuit originally built as an attractor model of working memory, we show how biological cortex might instead learn negative correlations through a di-synaptic circuit involving double bouquet cells (DBCs). These cells are notable for being distributed regularly across the cortical surface, each innervating the whole minicolumn below it without affecting neighboring columns: "Indeed, disregarding some exceptions, there appears to be one DBC horsetail per minicolumn"
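A toy sketch of the sign-flip problem the paper addresses (illustrative only, not the paper's model): under a covariance-style Hebbian rule, anti-correlated pre/post activity drives an initially excitatory weight negative, which is exactly the Dale's-principle violation.

```python
import numpy as np

def covariance_hebbian(w, pre, post, lr=0.5):
    """Covariance Hebbian rule averaged over a batch of pre/post activity
    pairs. Anti-correlated activity makes dw negative, so nothing stops an
    initially positive (excitatory) weight from crossing zero."""
    dw = lr * np.mean((pre - pre.mean()) * (post - post.mean()))
    return w + dw

# Anti-correlated activity: when pre fires, post is silent, and vice versa.
pre = np.array([1.0, 0.0, 1.0, 0.0])
post = np.array([0.0, 1.0, 0.0, 1.0])
w = 0.1  # starts excitatory
for _ in range(5):
    w = covariance_hebbian(w, pre, post)
```

The paper's proposal is that biology keeps the direct connection non-negative and routes the learned negative correlation through an interposed inhibitory cell (the DBC) instead.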
- 5 participants
- 1:56 hours
16 Sep 2019
Marcus talks about sparsity in neural networks across Deep Learning and HTM, and Jeff talks about building bridges between the two spaces.
- 6 participants
- 60 minutes
13 Sep 2019
Paper review: https://www.researchgate.net/publication/26753678_Holtmaat_A_Svoboda_K_Experience-dependent_structural_synaptic_plasticity_in_the_mammalian_brain_Nat_Rev_Neurosci_10_647-658
Holtmaat A, Svoboda K. Experience-dependent structural synaptic plasticity in the mammalian brain.
Also: https://www.ncbi.nlm.nih.gov/pubmed/26354919
- 3 participants
- 45 minutes
5 Sep 2019
(Previous version had missing content) Jeff talks and asks a lot of questions.
- 8 participants
- 1:14 hours
30 Aug 2019
Yes, we're reviewing our own paper. :P Two newer Numenta hires are going to review our latest theoretical neuroscience paper. This is mainly for the benefit of the new hires, to help them fully understand the Thousand Brains Theory of Intelligence.
- 7 participants
- 1:51 hours
14 Aug 2019
With Numenta founder Jeff Hawkins.
Aug 14 Numenta Research Meeting
- 8 participants
- 1:29 hours
7 Aug 2019
This is just a recap of the Deep Learning portion of the event.
A recap by Lucas Souza, Numenta Research Engineer.
Numenta Research Meeting - Aug 7 2019
Discuss at https://discourse.numenta.org/t/deep-learning-reinforcement-learning-summer-school-2019-recap/6434/2
- 7 participants
- 1:05 hours
22 Jul 2019
"Temporal Memory via Recurrent Sparse Memory-like models" - topic from Jeremy Gordon https://twitter.com/onejgordon
Discuss at https://discourse.numenta.org/t/temporal-memory-via-rsm-like-models/6345
- 4 participants
- 28 minutes
10 Jul 2019
We have a visiting scholar Theivendiram Pranavan from National University of Singapore who'll be talking about his work on unsupervised continuous machine learning.
- 6 participants
- 37 minutes
8 Jul 2019
Live Numenta Research Meeting. Adding scale-invariance to our model.
- 4 participants
- 1:12 hours
1 Jul 2019
Marcus is further investigating Capsules and how they might inform HTM (and vice versa?)
- 6 participants
- 1:01 hours
17 Jun 2019
For details and discussion, go to https://discourse.numenta.org/t/connecting-hintons-capsules-to-numenta-research/6160
This morning, Marcus is planning on discussing capsules on the whiteboard, connecting them to our work.
Here are 3 Hinton capsules papers and 1 talk.
2011 Paper: http://www.cs.toronto.edu/~hinton/absps/transauto6.pdf
2017 Paper: http://www.cs.toronto.edu/~hinton/absps/DynamicRouting.pdf
2018 Paper: http://www.cs.toronto.edu/~hinton/absps/EMcapsules.pdf
2014 Talk: https://www.youtube.com/watch?v=rTawFwUvnLE
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 5 participants
- 54 minutes
17 Jun 2019
Discussion at https://discourse.numenta.org/t/icml-2019-recap/6161
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 4 participants
- 34 minutes
31 May 2019
Orientation and object composition and reference frames.
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 3 participants
- 1:17 hours
31 May 2019
Courage and wit have served thee well.
Thou hast been promoted to the next level.
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 3 participants
- 9 minutes
29 May 2019
My broadcasting software crashed in the middle of this stream, so I had to cut it up into pieces. Sorry about the previous botched video. This one has sound throughout.
-- Watch live at https://www.twitch.tv/rhyolight_
- 7 participants
- 1:25 hours
22 May 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 4 participants
- 1:01 hours
20 May 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 4 participants
- 1:14 hours
13 May 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 5 participants
- 19 minutes
10 May 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 4 participants
- 1:05 hours
10 May 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 4 participants
- 40 minutes
1 May 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 5 participants
- 25 minutes
26 Apr 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 4 participants
- 1:08 hours
24 Apr 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 5 participants
- 36 minutes
22 Apr 2019
Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
- 5 participants
- 1:06 hours
1 Apr 2019
Numenta Research Meeting - neuroscience / artificial intelligence (AI) / neocortex oscillations https://gist.github.com/rhyolight/59dcd4f5810a00b001697abd70452411 -- Watch live at https://www.twitch.tv/rhyolight_
- 3 participants
- 23 minutes
22 Jan 2019
Matt and Florian will present their interpretation of paper "Deep Predictive Learning: A Comprehensive Model of Three Visual Streams" as described here: https://discourse.numenta.org/t/deep-predictive-learning-a-comprehensive-model-of-three-visual-streams/3076
- 8 participants
- 1:26 hours