Numenta / Numenta Journal Club


These are all the meetings we have in "Numenta Journal Club" (part of the organization "Numenta"). Click into individual meeting pages to watch the recording and search or read the transcript.

27 Dec 2022

In this research meeting, Jeff gave a synopsis of the Complementary Learning Systems Theory presented in the paper “What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated” by Dharshan Kumaran, Demis Hassabis and James McClelland.

Paper (2016): https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(16)30043-2

Other paper mentioned:
“Sparseness Constrains the Prolongation of Memory Lifetime via Synaptic Metaplasticity” (2008): https://academic.oup.com/cercor/article/18/1/67/319707
- - - - -
Numenta has developed breakthrough advances in AI technology that enable customers to achieve 10-100X improvement in performance across broad use cases, such as natural language processing and computer vision. Backed by two decades of neuroscience research, we developed a framework for intelligence called The Thousand Brains Theory. By leveraging these discoveries and applying them to AI systems, we’re able to deliver extreme performance improvements and unlock new capabilities.

Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://numenta.com/news-digest/

Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter

Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta

Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/

Our Website:
https://numenta.com/
  • 6 participants
  • 52 minutes

13 Sep 2022

Guest speaker Burak Gurbuz talked about his recent work with Constantine Dovrolis, presented at ICML 2022: “NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks.” He started the presentation with an overview of the biological aspects of continual learning, then introduced NISPA and shared experimental results.

Paper: https://arxiv.org/abs/2206.09117
  • 5 participants
  • 60 minutes

25 Aug 2022

The neurons we use in today’s deep learning systems are extremely simple compared to their biological counterparts. In this meeting, guest speaker Toviah Moldwin explored the abilities of a single biological neuron with extended dendrites and synapses and introduced a potential biological model of plasticity that might enable better learning in neural networks.

Work done by: Toviah Moldwin, Menachem Kalmenson, Li Shay Azran, and Idan Segev from the Edmond and Lily Safra Center for Brain Sciences (ELSC) at the Hebrew University of Jerusalem

Topics covered & related papers:
1/ Biophysical Perceptron
➤ “Perceptron Learning and Classification in a Modeled Cortical Pyramidal Cell”: https://www.frontiersin.org/articles/10.3389/fncom.2020.00033/full

2/ The Gradient Clusteron (G-Clusteron)
➤ “The gradient clusteron: A model neuron that learns to solve classification tasks via dendritic nonlinearities, structural plasticity, and gradient descent”: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009015

3/ Fixed Point - Learning Rate (FPLR) Framework for Calcium

4/ The Calcitron

5/ Hierarchical Heterosynaptic Plasticity
➤ "Asymmetric voltage attenuation in dendrites enables hierarchical heterosynaptic plasticity”: https://www.biorxiv.org/content/10.1101/2022.07.07.499166v1
  • 4 participants
  • 1:06 hours

10 Jun 2022

Guest speaker Massimo Caccia introduces a simple baseline for task-agnostic continual reinforcement learning (TACRL). He first gives an overview of continual learning, reinforcement learning, and TACRL. He then goes through empirical findings that show how different TACRL methods can be just as performant as common task-aware and multi-task methods.

Papers:
“Task-Agnostic Continual Reinforcement Learning: In Praise of a Simple Baseline”: https://arxiv.org/abs/2205.14495
"Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments": https://www.frontiersin.org/articles/10.3389/fnbot.2022.846219/full
  • 4 participants
  • 1:16 hours

18 Mar 2022

Subutai reviews the paper "Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer" and compares it to our dendrites paper "Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments".

Paper: https://arxiv.org/abs/1701.06538
Dendrites Paper: https://arxiv.org/abs/2201.00042
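
To make the routing idea concrete for readers skimming the archive, here is a minimal, hypothetical sketch of a sparsely-gated mixture-of-experts layer in the spirit of the first paper; all sizes and names below are illustrative, not taken from either paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, k = 8, 16, 16, 2

# Each expert is a simple linear map; the gate is one more linear map.
experts = [rng.normal(scale=0.1, size=(d_in, d_out)) for _ in range(n_experts)]
W_gate = rng.normal(scale=0.1, size=(d_in, n_experts))

def moe_forward(x):
    """Route x through only the top-k experts, weighted by softmaxed gate scores."""
    scores = x @ W_gate                        # one score per expert
    top = np.argsort(scores)[-k:]              # indices of the k best experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # softmax over the selected experts only
    # Only the chosen experts run, which is where the compute savings come from.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

print(moe_forward(rng.normal(size=d_in)).shape)   # (16,)
```

The key design point, as in the paper, is that the layer's parameter count grows with the number of experts while the per-input compute stays roughly constant.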
  • 7 participants
  • 1:15 hours

11 Feb 2022

Heiko Hoffmann gives an overview of the “Neural Descriptor Fields” paper. He first goes over how a Neural Descriptor Field (NDF) represents key points on a 3D object relative to its position and pose, and how NDFs can be used to recover an object’s position and pose. He then discusses the paper’s simulation and robot-experiment results and highlights its useful concepts and limitations.

In the second half of the meeting, Karan Grewal presents the “Vector Neurons” paper. He first gives a quick review of its core concepts and terminology. He then examines in detail the structure of the paper’s SO(3)-equivariant neural networks and how they represent object pose and rotation. Lastly, Karan goes over the object classification and image reconstruction results and points out a few shortcomings.
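
As a pocket reference for the second talk, here is a small sketch (ours, not code from the paper) of the SO(3) equivariance property that Vector Neurons build on: when features are lists of 3D vectors and a linear layer only mixes channels, rotating the input is the same as rotating the output.

```python
import numpy as np

rng = np.random.default_rng(0)
C_in, C_out = 5, 7

V = rng.normal(size=(C_in, 3))        # C_in vector-valued features
W = rng.normal(size=(C_out, C_in))    # channel-mixing weights; nothing touches xyz

# Random rotation via QR decomposition, with the determinant fixed to +1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))

rotate_then_map = W @ (V @ R.T)
map_then_rotate = (W @ V) @ R.T
print(np.allclose(rotate_then_map, map_then_rotate))   # True: the layer is equivariant
```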

“Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation” by Anthony Simeonov et al.: https://arxiv.org/abs/2112.05124

“Vector Neurons: A General Framework for SO(3)-Equivariant Networks” by Congyue Deng et al. https://arxiv.org/abs/2104.12229

Datasets mentioned:
Shapenet: https://shapenet.org/taxonomy-viewer
ModelNet40: https://3dshapenets.cs.princeton.edu/
  • 7 participants
  • 1:08 hours

27 Sep 2021

Marcus Lewis reviews a few papers from Dana Ballard and highlights some insights related to object modeling and reference frames in the Thousand Brains Theory.

Marcus first gives an overview of what “animate vision” is, as outlined in Ballard’s papers, and defines optic flow. Marcus then makes a case for using a world-centric, viewer-oriented location relative to a fixation point to represent objects and depth.

In the second part of his presentation, he looks at Numenta’s previous sensorimotor research (where the motor command is received by the system) and Ballard’s sensorimotor “animate vision” system (where the motor command is generated by the system) for object modeling. He evaluates whether the two sensorimotor frameworks lead to different object-modeling solutions and discusses the opportunities that could stem from Ballard’s framework.

Papers by Dana Ballard:
➤ “Animate Vision” (1990): https://www.sciencedirect.com/science/article/abs/pii/0004370291900804
➤ “Eye Fixation and Early Vision: Kinetic Depth” (1988): https://ieeexplore.ieee.org/document/590033
➤ “Reference Frames for Animate Vision” (1989): https://www.ijcai.org/Proceedings/89-2/Papers/124.pdf
➤ “Principles of Animate Vision” (1992): https://www.sciencedirect.com/science/article/abs/pii/104996609290081D
➤ “Deictic Codes for the Embodiment of Cognition” (1997): https://www.cs.utexas.edu/~dana/bbs.pdf

Papers by Numenta:
➤ “Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells”: https://www.frontiersin.org/articles/10.3389/fncir.2019.00022/full
➤ “A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex”: https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full
  • 5 participants
  • 1:36 hours

8 Sep 2021

Anshuman Mishra talks about algorithmic speedups via locality-sensitive hashing and reviews papers on bio-inspired hashing, specifically LSH inspired by fruit flies.

He first gives an overview of what algorithmic speedups are, why they are useful, and how we can use them. He then dives into a specific technique called locality-sensitive hashing (LSH), covering the motivations for using these types of hash algorithms and how they work. Lastly, Anshuman talks about the potential biological relevance of these hashing mechanisms. He looks at the paper “A neural algorithm for a fundamental computing problem”, which outlines a fruit-fly-inspired version of LSH that uses sparse random projections, expands dimensionality, and applies a winner-take-all mechanism.
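
For the curious, a hedged sketch of the fly-inspired algorithm as described above: a sparse random binary projection into a much higher-dimensional space, followed by a winner-take-all step that keeps only the top responses. The sizes below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, k = 50, 2000, 100            # input dim, expanded dim, number of winners

# Sparse binary projection: each expansion unit samples ~10% of the input dims.
proj = (rng.random((m, d)) < 0.1).astype(float)

def fly_hash(x):
    """Return a sparse binary tag: the indices of the k largest projections."""
    activity = proj @ x
    tag = np.zeros(m, dtype=bool)
    tag[np.argsort(activity)[-k:]] = True     # winner-take-all
    return tag

x = rng.random(d)
x_near = x + 0.01 * rng.normal(size=d)
x_far = rng.random(d)

overlap = lambda a, b: int((a & b).sum())
print(overlap(fly_hash(x), fly_hash(x_near)))   # large: similar inputs collide
print(overlap(fly_hash(x), fly_hash(x_far)))    # small: dissimilar inputs rarely do
```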

Paper reviewed: “A Neural Algorithm for a Fundamental Computing Problem” by Dasgupta et al. : https://www.science.org/doi/abs/10.1126/science.aam9868

0:00 Overview
1:11 Algorithmic Speedups
14:28 Locality Sensitive Hashing
45:54 Bio-inspired Hashing
  • 8 participants
  • 1:05 hours

25 Aug 2021

Subutai Ahmad goes over voting in the Thousand Brains Theory.

In the first of two research meetings, he lays the groundwork for understanding how columns vote in the theory by unpacking the ideas in our "Columns" paper. First, he presents the hypothesis of the paper on how cortical columns learn predictive models of sensorimotor sequences. Then, he explains the mechanisms behind a single cortical column and how it learns complete objects by sensing different locations and integrating inputs over time. In the next research meeting, he will review voting across multiple columns.
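
As a toy illustration of the idea (ours, far simpler than the model in the paper), a column can be thought of as accumulating (location, feature) sensations and keeping only the objects consistent with all of them; voting across columns then intersects these candidate sets.

```python
# Hypothetical learned object models: sets of (location, feature) pairs.
learned_objects = {
    "mug":  {("rim", "curved"), ("side", "smooth"), ("handle", "loop")},
    "bowl": {("rim", "curved"), ("side", "smooth"), ("base", "flat")},
    "box":  {("side", "flat"), ("corner", "edge"), ("base", "flat")},
}

def column_candidates(sensations):
    """Objects whose model contains every sensed (location, feature) pair."""
    return {name for name, model in learned_objects.items() if sensations <= model}

# One column senses two features; the object is still ambiguous.
col_1 = column_candidates({("rim", "curved"), ("side", "smooth")})
# A second column touching a different part disambiguates after voting.
col_2 = column_candidates({("handle", "loop")})
print(col_1, col_2, col_1 & col_2)   # {'mug', 'bowl'} {'mug'} {'mug'}
```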

Columns paper "A Theory of How Columns in the Neocortex Enable Learning the Structure of the World": https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full

Other paper mentioned: “The columnar organization of the neocortex” - https://academic.oup.com/brain/article-pdf/120/4/701/17863573/1200701.pdf
  • 6 participants
  • 1:05 hours

18 Aug 2021

Subutai Ahmad reviews the biology behind active dendrites and explains how Numenta models them. He first presents an overview of active dendrites in pyramidal neurons by describing various experimental findings. He describes the impact of dendrites on the computation performed by neurons, and some of the learning (plasticity) rules that have been discovered. He shows how all this forms the substrate for the HTM neuron, proposing that dendritic computation is the basis for prediction and very flexible context integration in neural networks.
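
A toy sketch of the dendritic-prediction idea (our simplification, not the full HTM implementation): each dendritic segment is a pattern detector over a sparse binary input, and a strong enough match on any segment puts the cell into a predictive state.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_segments, synapses_per_segment, theta = 512, 4, 20, 12

# Each segment stores the indices of the presynaptic cells it samples.
segments = [rng.choice(n_inputs, synapses_per_segment, replace=False)
            for _ in range(n_segments)]

def is_predicted(active_cells):
    """The cell is predictive if any segment has at least theta active synapses."""
    active = np.zeros(n_inputs, dtype=bool)
    active[active_cells] = True
    return any(active[seg].sum() >= theta for seg in segments)

# Context matching segment 0 triggers a prediction; random context does not.
print(is_predicted(segments[0][:15]))                          # True (15 >= 12)
print(is_predicted(rng.choice(n_inputs, 15, replace=False)))   # almost surely False
```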

Papers:
Bartlett Mel, Neural Computation 1992: https://direct.mit.edu/neco/article/4/4/502/5650/NMDA-Based-Pattern-Discrimination-in-a-Modeled
Poirazi, Brannon & Mel, Neuron, 2003: https://pubmed.ncbi.nlm.nih.gov/12670427/

Numenta Neurons Paper 2016: https://www.frontiersin.org/articles/10.3389/fncir.2016.00023/full
Numenta Columns Paper 2017: https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full

“Predictive Coding of Novel versus Familiar Stimuli in the Primary Visual Cortex”: https://www.biorxiv.org/content/10.1101/197608v1
“Continuous online sequence learning with an unsupervised neural network model”: https://direct.mit.edu/neco/article/28/11/2474/8502/Continuous-Online-Sequence-Learning-with-an#.WC4U8TKZMUE
“Unsupervised real-time anomaly detection for streaming data”: https://www.sciencedirect.com/science/article/pii/S0925231217309864
“Active properties of neocortical pyramidal neuron dendrites”: https://pubmed.ncbi.nlm.nih.gov/23841837/

  • 8 participants
  • 1:30 hours

16 Jul 2021

We reviewed two papers in this research meeting. First, Numenta intern Jack Schenkman reviewed the paper “Multiscale representation of very large environments in the hippocampus of flying bats” by Eliav et al. The paper proposes a multiscale neuronal encoding scheme of place cells for spatial perception. The team then raised and discussed a few questions.

Next, our researcher Ben Cohen reviewed the paper “Representational drift in primary olfactory cortex” by Schoonover et al. The paper shows that single-neuron firing-rate responses to odors in the anterior piriform cortex are stable within a day but drift continuously over time. The team then discussed the notion of representational drift in the context of Numenta’s work.

“Multiscale representation of very large environments in the hippocampus of flying bats” by Eliav et al.: https://science.sciencemag.org/content/372/6545/eabg4020

“Representational drift in primary olfactory cortex” by Schoonover et al.: https://www.nature.com/articles/s41586-021-03628-7

Columns paper mentioned: https://www.frontiersin.org/articles/10.3389/fncir.2017.00081/full
  • 8 participants
  • 1:34 hours

5 May 2021

Karan Grewal reviews the paper "Self-Organization in a Perceptual Network" from 1988 and argues that the use of Hebbian learning rules (1) is equivalent to performing principal component analysis (PCA) and (2) maximizes the mutual information between the input and output of each unit in a standard neural network, a property more commonly referred to as the InfoMax principle.
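
To see the Hebbian-PCA link numerically, here is a small sketch with synthetic data using Oja's rule, the classic normalized Hebbian update closely related to the rules discussed in the talk: a single linear unit's weight vector converges to the input's first principal component.

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2D data with most of its variance along the (1, 1) direction.
cov = np.array([[3.0, 2.0], [2.0, 3.0]])
X = rng.multivariate_normal([0, 0], cov, size=5000)

w, lr = rng.normal(size=2), 0.01
for x in X:
    y = w @ x
    w += lr * y * (x - y * w)    # Oja's rule: Hebbian term plus implicit normalization

top_pc = np.linalg.eigh(cov)[1][:, -1]   # true first principal component
print(abs(w @ top_pc))                   # ~1.0: w has aligned with the top PC
```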

“Self-Organization in a Perceptual Network" by Ralph Linsker: https://ieeexplore.ieee.org/document/36

Other resources mentioned:
• “Linear Hebbian learning and PCA” by Bruno Olshausen: https://redwood.berkeley.edu/wp-content/uploads/2018/08/handout-hebb-PCA.pdf
• “Theoretical Neuroscience" textbook by Dayan & Abbott: https://mitpress.mit.edu/books/theoretical-neuroscience
• “Representation Learning with Contrastive Predictive Coding” by van den Oord et al.: https://arxiv.org/abs/1807.03748
• “Learning deep representations by mutual information estimation and maximization” by Hjelm et al.: https://arxiv.org/abs/1808.06670
  • 8 participants
  • 44 minutes

28 Apr 2021

In this research meeting, joined by Rosanne Liu, Jason Yosinski, and Mitchell Wortsman from ML Collective, Subutai Ahmad explains the properties of small-world structures and how they can be helpful in Numenta’s research.

Subutai first discusses different network types and the concept of small-world structures by reviewing the paper “Collective Dynamics of ‘Small-World’ Networks” by Watts & Strogatz. He then evaluates the efficiency of these structures and how they are helpful in non-physical networks by looking at Jon Kleinberg’s paper “Navigation in a Small World.” Subutai also addresses how small-world structures would apply to machine learning by using concepts from the paper “Graph Structure of Neural Networks” by Jiaxuan You et al. Lastly, the team discusses how small-world structures relate to Numenta’s research, such as sparsity and cortical columns.
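
For readers who want to reproduce the headline Watts-Strogatz result, a quick sketch using the networkx library (parameters are our own choices): a little random rewiring collapses the average path length while clustering stays high.

```python
import networkx as nx

n, k = 1000, 10   # nodes, neighbors per node in the ring lattice

for p in (0.0, 0.1, 1.0):   # rewiring probability: regular -> small-world -> random
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
    print(f"p={p}: clustering={nx.average_clustering(G):.3f}, "
          f"avg path length={nx.average_shortest_path_length(G):.2f}")
```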

“Collective Dynamics of ‘Small-World’ Networks” by Watts & Strogatz: https://www.nature.com/articles/30918
“Navigation in a Small World” by Kleinberg: https://www.nature.com/articles/35022643
“Graph Structure of Neural Networks” by Jiaxuan You et al.: https://arxiv.org/abs/2007.06559
"Small-World Brain Networks" by Bassett & Bullmore: https://journals.sagepub.com/doi/10.1177/1073858406293182

More information on ML Collective: https://mlcollective.org/
  • 9 participants
  • 1:21 hours

7 Apr 2021

Our research intern Alex Cuozzo discusses the book Sparse Distributed Memory by Pentti Kanerva. He first explores a few concepts related to high-dimensional vectors mentioned in the book, such as rotational symmetry and the distribution of distances. He then talks about the key properties of the Sparse Distributed Memory model and how it relates to a biological one. Lastly, he gives his thoughts and explores some follow-up work that aims to convert dense vectors to sparse distributed activations.
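
To make the model concrete, here is a compact, illustrative Sparse Distributed Memory in the sense of Kanerva's book (all parameters are arbitrary): random hard addresses, a Hamming-radius activation rule, counter-based writes, and majority-vote reads.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, radius = 256, 2000, 115      # word length, hard locations, activation radius

addresses = rng.integers(0, 2, size=(m, n))   # fixed random hard addresses
counters = np.zeros((m, n), dtype=int)        # the memory's storage counters

def active(addr):
    """Hard locations within the Hamming radius of the query address."""
    return (addresses != addr).sum(axis=1) <= radius

def write(addr, word):
    counters[active(addr)] += 2 * word - 1    # +1 for a 1 bit, -1 for a 0 bit

def read(addr):
    return (counters[active(addr)].sum(axis=0) > 0).astype(int)

word = rng.integers(0, 2, size=n)
write(word, word)                             # autoassociative store
noisy = word.copy()
noisy[rng.choice(n, 20, replace=False)] ^= 1  # corrupt 20 of 256 bits
print((read(noisy) == word).mean())           # 1.0: the stored word is recovered
```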

Sources:
➤ “Sparse Distributed Memory” by Pentti Kanerva: https://mitpress.mit.edu/books/sparse-distributed-memory
➤ “An Alternative Design for a Sparse Distributed Memory” by Louis Jaeckel: https://ntrs.nasa.gov/citations/19920001073
➤ “A Class of Designs for a Sparse Distributed Memory” by Louis Jaeckel: https://ntrs.nasa.gov/api/citations/19920002426/downloads/19920002426.pdf
➤ "Comparison between Kanerva's SDM and Hopfield-type neural networks" by James Keeler: https://www.sciencedirect.com/science/article/abs/pii/0364021388900262
➤ "Notes on implementation of sparsely distributed memory" by James Keeler et al: https://www.semanticscholar.org/paper/Notes-on-implementation-of-sparsely-distributed-Keeler-Denning/a818801315dbeaf892197c5f08c8c8779871fd82
  • 5 participants
  • 1:17 hours

31 Mar 2021

We started this research meeting by responding to a few questions posted on the HTM forum. The HTM Forum is our open source discussion group. It is a great place to ask questions related to Numenta’s work and find interesting projects that people in the community are working on. Join HTM Forum today: https://discourse.numenta.org/

Subutai Ahmad reviews the paper “Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization” by Masse, Grant and Freedman. He first explains his motivations behind reading this paper based on Numenta’s previous work on dendrites and continuous learning. He then highlights the various network architectures simulated in the experiment and the results presented in the paper (i.e. accuracy for each network). Finally, Subutai gives his thoughts and the team discusses the results.

Paper: https://www.pnas.org/content/115/44/E10467

Other paper mentioned:
“Continuous Online Sequence Learning with an Unsupervised Neural Network Model”: https://numenta.com/neuroscience-research/research-publications/papers/continuous-online-sequence-learning-with-an-unsupervised-neural-network-model/

0:00 Answering Questions from HTM Forum
7:12 Paper Review
  • 8 participants
  • 1:09 hours

24 Mar 2021

Through the lens of Numenta's Thousand Brains Theory, Marcus Lewis reviews the paper “How to represent part-whole hierarchies in a neural network” by Geoffrey Hinton. By focusing on parts of the GLOM model presented in the paper, he bridges Numenta's theory to GLOM and highlights the similarities and differences between each model's voting mechanisms, structure, and use of neural representations. Finally, Marcus explores the idea of GLOM handling movement.

Paper: https://arxiv.org/abs/2102.12627

Other resources mentioned:
Numenta "Thousand Brains" voting alternate version (2017):
http://numenta.github.io/htmresearch/documents/location-layer/Hello-Multi-Column-Location-Inference.html
"Receptive field structure varies with layer in the primary visual cortex" by Martinez et al.: https://www.nature.com/articles/nn1404
"A Multiplexed, Heterogeneous, and Adaptive Code for Navigation in Medial Entorhinal Cortex" by Hardcastle et al: https://www.sciencedirect.com/science/article/pii/S0896627317302374
  • 8 participants
  • 1:24 hours

8 Mar 2021

In this research meeting, our research intern Alex Cuozzo reviews some notable papers and explains high level concepts related to learning rules in machine learning. Moving away from backpropagation with gradient descent, he talks about various attempts at biologically plausible learning regimes which avoid the weight transport problem and use only local information at the neuron level. He then moves on to discuss work which infers a learning rule from weight updates, and further work using machine learning to create novel optimizers and local learning rules.
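
As one concrete example from the list below, here is a hedged sketch of feedback alignment (the first paper listed, by Lillicrap et al.): errors are propagated through a fixed random matrix B rather than the transposed forward weights, sidestepping the weight-transport problem yet still supporting learning.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out, lr = 10, 32, 1, 0.01

W1 = rng.normal(scale=0.5, size=(d_hid, d_in))
W2 = rng.normal(scale=0.5, size=(d_out, d_hid))
B = rng.normal(scale=0.5, size=(d_hid, d_out))   # fixed random feedback weights
target_W = rng.normal(size=(d_out, d_in))        # a linear teacher to regress onto

errs = []
for _ in range(5000):
    x = rng.normal(size=(d_in, 1))
    h = np.tanh(W1 @ x)
    y = W2 @ h
    e = y - target_W @ x                 # output error
    delta_h = (B @ e) * (1 - h**2)       # backprop would use W2.T @ e, not B @ e
    W2 -= lr * e @ h.T
    W1 -= lr * delta_h @ x.T
    errs.append(float(e**2))

print(np.mean(errs[:100]), np.mean(errs[-100:]))   # the error should drop substantially
```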

Papers / Talks mentioned (in order of presentation):
• "Random synaptic feedback weights support error backpropagation for deep learning" by Lillicrap et al.: https://www.nature.com/articles/ncomms13276
• Talk: A Theoretical Framework for Target Propagation: https://www.youtube.com/watch?v=xFb9N4Irj40
• "Decoupled Neural Interfaces using Synthetic Gradients" by DeepMind: https://arxiv.org/abs/1608.05343
• Talk: Brains@Bay Meetup (Rafal Bogacz) : https://youtu.be/oXyQU0aScq0?t=246
• "Predictive Coding Approximates Backprop along Arbitrary Computation Graphs" by Millidge et al: https://arxiv.org/abs/2006.04182
• "Identifying Learning Rules From Neural Network Observables" by Nayebi et al: https://arxiv.org/abs/2010.11765
• "Learning to learn by gradient descent by gradient descent" by Andrychowicz et al: https://arxiv.org/abs/1606.04474
• "On the Search for New Learning Rules for ANNs" by Bengio et al: https://www.researchgate.net/publication/225532233_On_the_Search_for_New_Learning_Rules_for_ANNs
• "Learning a Synaptic Learning Rule" by Bengio et al: https://www.researchgate.net/publication/2383035_Learning_a_Synaptic_Learning_Rule
• "Evolution and design of distributed learning rules" by Runarsson et al: https://ieeexplore.ieee.org/document/886220
• "The evolution of a generalized neural learning rule" by Orchard et al: https://ieeexplore.ieee.org/document/7727815
  • 8 participants
  • 60 minutes

22 Feb 2021

Our research intern Akash Velu gives an overview of continual reinforcement learning, following the ideas from the paper “Towards Continual Reinforcement Learning: A Review and Perspectives” by Khetarpal et al. He first goes over the basics of reinforcement learning (RL) and discusses why RL is a good setting in which to study continual learning. He then covers the different aspects of continual RL, the various approaches to solving continual RL problems, and touches upon the potential for neuroscience to inform the development of continual RL algorithms.

Paper: https://arxiv.org/abs/2012.13490
  • 7 participants
  • 58 minutes

22 Feb 2021

Marcus Lewis further elaborates on and discusses ideas outlined in “The Tolman-Eichenbaum Machine” paper, continuing the Feb 15 research meeting. He first gives a quick review of the grid cell module presented in the paper, then outlines two extreme scenarios for the mechanisms within the module to address the team's skepticism of a multi-scale grid cell readout.

“The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation” by James Whittington, et al.: https://www.sciencedirect.com/science/article/pii/S009286742031388X

Feb 15 research meeting: https://youtu.be/N6I3M3pof5A
  • 3 participants
  • 31 minutes

17 Feb 2021

Michaelangelo Caporale reviews and evaluates a continual learning scenario called OSAKA, outlined in the paper “Online Fast Adaption and Knowledge Accumulation: A New Approach to Continual Learning.” He first gives an overview of the scenario and goes through the algorithms and methodologies in depth. The team then discusses whether this is a good scenario that Numenta can use to test for continual learning.

Paper: https://arxiv.org/abs/2003.05856
  • 7 participants
  • 47 minutes

15 Feb 2021

Marcus Lewis reviews the paper “The Tolman-Eichenbaum Machine” by James Whittington et al. He first connects and compares the paper to the grid cell module in Numenta's “Locations in the Neocortex” paper. Marcus then gives a high-level summary of the paper and highlights two aspects: how grid cells and place cells interact, and how place cells can represent novel sensory-location pairs. The team then discusses the multiple grid cell modules and mechanisms presented in the paper.

“The Tolman-Eichenbaum Machine: Unifying Space and Relational Memory through Generalization in the Hippocampal Formation” by James Whittington, et al.: https://www.sciencedirect.com/science/article/pii/S009286742031388X

Papers mentioned:
“Locations in the Neocortex: A Theory of Sensorimotor Object Recognition Using Cortical Grid Cells” by Jeff Hawkins, et al.: https://www.frontiersin.org/articles/10.3389/fncir.2019.00022/full

“What is a Cognitive Map? Organizing Knowledge for Flexible Behavior” by Timothy Behrens, et al. https://www.sciencedirect.com/science/article/pii/S0896627318308560

“A Stable Hippocampal Representation of a Space Requires its Direct Experience” by Clifford Kentros, et al.: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3167555/
  • 5 participants
  • 1:26 hours

10 Feb 2021

Karan Grewal gives an overview of the paper “Continual Lifelong Learning with Neural Networks: A Review” by German Parisi et al. He first explains three main areas of current continual learning approaches. Then, he outlines four research areas that the authors argue will be crucial to developing lifelong learning agents.

In the second part, Jeff Hawkins discusses new ideas and improvements on our previous "Frameworks" paper. He proposes a more refined grid cell module in which each layer of minicolumns contains a 1D voltage-controlled oscillating module that represents movement in a particular direction. Jeff first explains the mechanisms within each column and how anchoring occurs in grid cell modules. He then gives an overview of displacement cells and deduces that if we have 1D grid cell modules, it is very likely that there are 1D displacement cell modules. Furthermore, he makes the case that the mechanisms for orientation cells are analogous to those of grid cells. He argues that each minicolumn is driven by various 1D modules that represent orientation and location, and that these are the forces behind a classic grid cell / orientation cell module.
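
A toy reading of the 1D-module idea (our illustration, not Jeff's model): each module keeps a single phase that path-integrates movement along its preferred direction, wrapping around at that module's own scale.

```python
import numpy as np

directions = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])  # preferred axes
scales = np.array([1.0, 1.0, 1.5])                                 # module periods
phases = np.zeros(3)                                               # current phases

def move(displacement):
    """Path-integrate a 2D displacement into every 1D module's phase."""
    global phases
    phases = (phases + directions @ displacement / scales) % 1.0

move(np.array([0.3, 0.4]))
print(phases)   # each module advances by its projected, scale-normalized motion
```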

“Continual Lifelong Learning with Neural Networks: A Review” by German Parisi, et al.. : https://www.sciencedirect.com/science/article/pii/S0893608019300231
"A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex" paper: https://www.frontiersin.org/articles/10.3389/fncir.2018.00121/full

0:00 Continual Lifelong Learning Paper Review
40:55 Jeff Hawkins on Grid Cell Modules
  • 10 participants
  • 1:37 hours

6 Jan 2021

Lucas Souza continues his discussion of machine learning benchmarks and environments. In this meeting, he reviews the paper “Rearrangement: A Challenge for Embodied AI”, which proposes a set of benchmarks that captures many of the challenges the AI community needs to overcome to move towards human-level sensorimotor intelligence. He discusses how goals can be specified, presents a taxonomy for categorizing different types of agents and environments, and gives examples of benchmarks that follow the proposed structure. The team then discusses how to translate the machine learning benchmark / environment to Numenta's work.

“Rearrangement: A Challenge for Embodied AI” by Dhruv Batra, et al.: https://arxiv.org/abs/2011.01975

Lucas Souza on iGibson Environment and Benchmark - December 14, 2020: https://youtu.be/feteCs80bIQ?t=4170
  • 4 participants
  • 1:10 hours

14 Dec 2020

Michaelangelo Caporale presents a summary of two papers that apply self-attention to vision tasks in neural networks. He first gives an overview of self-attention architectures and compares them with RNNs. He then dives into the attention mechanism used in each paper, specifically the local attention method in “Stand-Alone Self-Attention in Vision Models” and the global attention method in “An Image is Worth 16x16 Words”. Lastly, the team discusses inductive biases in these networks, potential tradeoffs, and how the networks can learn efficiently from the given data with these mechanisms.
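
As a pocket reference while reading the two papers, here is a minimal sketch of standard scaled dot-product self-attention (the global form used by ViT; local attention simply restricts each query's keys to a neighborhood). Dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 6, 8

X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))

Q, K, V = X @ Wq, X @ Wk, X @ Wv
scores = Q @ K.T / np.sqrt(d)                       # pairwise similarities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
out = weights @ V
print(out.shape)   # (6, 8): each position is a weighted mix of all value vectors
```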

Next, Lucas Souza gives a breakdown of a potential machine learning environment and benchmark Numenta could adopt: Interactive Gibson. This simulation environment provides fully interactive scenes, allowing researchers to train and evaluate agents on tasks such as object recognition and navigation.

“Stand-Alone Self-Attention in Vision Models” by Prajit Ramachandran, et al.: https://arxiv.org/abs/1906.05909
“An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale” by Alexey Dosovitskiy, et al.: https://arxiv.org/abs/2010.11929
iGibson website: http://svl.stanford.edu/igibson/

0:00 Michaelangelo Caporale on Self-Attention in Neural Networks
1:09:30 Lucas Souza on iGibson Environment and Benchmark
  • 8 participants
  • 1:32 hours

9 Dec 2020

Jeff Hawkins reviews the paper “Grid Cell Firing Fields in A Volumetric Space” by Roddy Grieves et al. He first goes through the premise of the paper, in which the authors recorded grid cells in rats as the animals moved through a 2D arena and a 3D maze. The team then explores different ways grid cell modules can encode high-dimensional information. Lastly, Marcus discusses a talk by Benjamin Dunn showing simultaneous recordings from over 100 neurons in a grid cell module.

Paper reviewed: https://www.biorxiv.org/content/10.1101/2020.12.06.413542v1
Marcus’s paper: https://www.biorxiv.org/content/10.1101/578641v2
Talk by Benjamin Dunn: https://www.youtube.com/watch?v=Hlzqvde3h0M
  • 4 participants
  • 1:35 hours

23 Nov 2020

Karan Grewal reviews the paper “Gated Linear Networks” by Veness, Lattimore, Budden et al., 2020. He first gives an overview of the new backpropagation-free neural architecture proposed in the paper, then he draws parallels to Numenta’s current research and the team discusses how these models are successful in continual learning tasks.
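
A hedged single-neuron sketch of the gated geometric mixing that Gated Linear Networks use (our simplification; the full model adds bias inputs, weight clipping, and many layers): side information selects one weight vector via random hyperplane gating, probabilities are mixed in logit space, and the update is purely local, with no backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_planes, lr = 4, 3, 0.1

hyperplanes = rng.normal(size=(n_planes, n_inputs))     # the context (gating) function
weights = np.ones((2**n_planes, n_inputs)) / n_inputs   # one weight row per context

logit = lambda p: np.log(p / (1 - p))
sigmoid = lambda a: 1 / (1 + np.exp(-a))

def step(p_in, z, target):
    """Predict from input probabilities p_in, then update only the gated row."""
    c = int("".join(str(b) for b in (hyperplanes @ z > 0).astype(int)), 2)
    p = sigmoid(weights[c] @ logit(p_in))
    weights[c] -= lr * (p - target) * logit(p_in)       # local logistic-loss step
    return p

hits = []
for _ in range(1000):
    z = rng.normal(size=n_inputs)                        # side information
    target = float(z.sum() > 0)
    p_in = sigmoid(z + 0.5 * rng.normal(size=n_inputs))  # weak base predictions
    hits.append((step(p_in, z, target) > 0.5) == bool(target))
print(np.mean(hits[-200:]))   # well above chance once the gated rows settle
```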

Link to paper: https://arxiv.org/abs/1910.01526
  • 6 participants
  • 48 minutes

16 Sep 2020

Karan Grewal reviews the paper “Accurate Representation for Spatial Cognition Using Grid Cells” by Nicole Sandra-Yaffa Dumont & Chris Eliasmith. He first gives an overview of semantic pointers and then discusses the use of grid cells in spatial representations.

Link to paper: https://cognitivesciencesociety.org/cogsci20/papers/0562/0562.pdf

  • 5 participants
  • 33 minutes

17 Aug 2020

Jeff Hawkins reviews the new paper "Neuronal vector coding in spatial cognition" by Andrej Bicanski and Neil Burgess. The paper reviews the many types of cells involved in spatial navigation and memory. Jeff then ties the paper to The Thousand Brains Theory of Intelligence, using it as a launch point for discussion on how the neocortex makes transformations of reference frames.

Neuronal vector coding in spatial cognition, Andrej Bicanski and Neil Burgess
https://www.nature.com/articles/s41583-020-0336-9
  • 4 participants
  • 49 minutes
cells
topics
margins
border
discussion
review
vaguely
vector
neuroscientists
rats
youtube image

5 Aug 2020

In our previous research meeting, Subutai reviewed three different papers on continual learning models. In today's short research meeting, Karan reviews a paper from 1991 that, he points out, was referenced by all three. The paper, "Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks" (http://axon.cs.byu.edu/~martinez/classes/678/Presentations/Dean.pdf), was one of the first to propose sparse, semi-distributed representations as a remedy for catastrophic forgetting in continual learning.
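
The core intuition is easy to demonstrate: sparser hidden codes overlap less, so new learning disturbs fewer old representations. A toy sketch (not French's exact activation-sharpening algorithm):

    import numpy as np

    def k_winners(a, k):
        # Keep only the k most active units; zero the rest.
        out = np.zeros_like(a)
        top = np.argsort(a)[-k:]
        out[top] = a[top]
        return out

    rng = np.random.default_rng(0)
    h1, h2 = rng.random(100), rng.random(100)
    dense = np.sum((h1 > 0.5) & (h2 > 0.5))                           # ~25 shared units
    sparse = np.sum((k_winners(h1, 5) > 0) & (k_winners(h2, 5) > 0))  # usually 0
    print(dense, sparse)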
  • 5 participants
  • 23 minutes
distributed
cognitive
sparser
generalizing
roughly
representation
connectionism
sparsity
theory
forget
youtube image

3 Aug 2020

In this meeting Subutai discusses three recent papers and models (OML, ANML, and Supermasks) on continual learning. The models exploit sparsity, gating, and sparse sub-networks to achieve impressive results on some standard benchmarks (a toy sketch of the supermask idea follows the paper list below). We discuss some of the relationships to HTM theory and neuroscience.

Papers discussed:
1. Meta-Learning Representations for Continual Learning (http://arxiv.org/abs/1905.12588)
2. Learning to Continually Learn (http://arxiv.org/abs/2002.09571)
3. Supermasks in Superposition (http://arxiv.org/abs/2006.14769)
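
As promised above, a toy sketch of the supermask idea: the weights stay frozen at their random initialization, and only a score per weight is learned to select a binary mask. Names and shapes are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 32))    # frozen random weights, never trained

    def masked_forward(x, scores, keep=0.5):
        # Only `scores` are learned; they choose which frozen weights to keep.
        k = int(keep * W.size)
        threshold = np.sort(scores.ravel())[-k]
        mask = (scores >= threshold).astype(W.dtype)
        return (W * mask) @ x

    y = masked_forward(rng.standard_normal(32), scores=rng.random(W.shape))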
  • 7 participants
  • 1:06 hours
representations
inference
brains
gradually
htms
topics
analyze
memorizing
cells
mammal
youtube image

15 Jul 2020

In this research meeting Subutai and Karan review four related meta-learning papers. Subutai (after an initial surprise reveal) summarizes MAML, a core meta-learning technique by Chelsea Finn (@chelseabfinn) et al., and a simpler first-order variant, Reptile, by Alex Nichol et al. (sketched below). Karan reviews two probabilistic/Bayesian variants of MAML by Tom Griffiths et al.

Papers: Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (https://arxiv.org/abs/1703.03400), On First-Order Meta-Learning Algorithms (https://arxiv.org/abs/1803.02999), Recasting Gradient-Based Meta-Learning as Hierarchical Bayes (https://arxiv.org/abs/1801.08930), and Reconciling meta-learning and continual learning with online mixtures of tasks (https://arxiv.org/abs/1812.06080).
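
The Reptile update mentioned above is simple enough to sketch in a few lines; `inner_sgd` is a hypothetical placeholder for a few ordinary gradient steps on one task:

    import numpy as np

    def reptile_step(theta, task, inner_sgd, inner_steps=5, eps=0.1):
        theta_task = theta.copy()
        for _ in range(inner_steps):
            theta_task = inner_sgd(theta_task, task)  # plain SGD on one task
        # Move the meta-parameters a fraction of the way toward the adapted ones.
        return theta + eps * (theta_task - theta)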
  • 9 participants
  • 1:04 hours
mammal
help
showing
features
studying
careful
brain
lab
meta
imagenet
youtube image

22 Apr 2020

Numenta Research Meeting, April 22, 2020. Aris reviews several plasticity mechanisms, including developmental plasticity, various forms of Hebbian plasticity, eligibility traces, homeostatic plasticity, and impact of neuromodulators. In the second part, Jeff briefly reviews a few findings and facts related to optical recordings of grid cells.
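
A toy sketch of two of the mechanisms covered: a Hebbian update gated by an eligibility trace, which a later neuromodulatory signal converts into a lasting weight change (all constants illustrative):

    import numpy as np

    def plasticity_step(w, trace, pre, post, modulator, lr=0.01, decay=0.9):
        # The eligibility trace tags recently co-active synapses (Hebbian term);
        # a delayed neuromodulatory signal turns those tags into weight change.
        trace = decay * trace + np.outer(post, pre)
        w = w + lr * modulator * trace
        return w, trace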
  • 6 participants
  • 1:31 hours
lifelong
memory
brain
cognitive
remembers
neurobiology
forgetting
learning
plasticity
refine
youtube image

16 Mar 2020

This paper describes a model of how an animal might use grid cells, place cells, and border cells to navigate in complex environments. It is an excellent summary of existing ideas and it introduced several things we were not aware of that could be important for understanding how a cortical column works.

Read paper at https://onlinelibrary.wiley.com/doi/10.1002/hipo.23147
Discuss at https://discourse.numenta.org/t/navigating-with-grid-and-place-cells-in-cluttered-environments-paper-review/7296
  • 8 participants
  • 1:21 hours
hippocampus
discussion
navigating
traversing
neural
orientation
point
plan
grid
rat
youtube image

23 Oct 2019

I'll probably touch on both of these papers, although the first one is more essential reading than the second.

- Cortical mechanisms of action selection: the affordance competition hypothesis http://www.cisek.org/pavel/Pubs/Cisek2007.pdf
- Resynthesizing behavior through phylogenetic refinement https://link.springer.com/content/pdf/10.3758%2Fs13414-019-01760-1.pdf

What I find interesting in these models is the similarity between "affordances" in Cisek's models and "objects" in our models.
  • 9 participants
  • 1:07 hours
discussion
research
brains
refinement
having
author
sex
evolutionary
dr
referred
youtube image

7 Oct 2019

  • 6 participants
  • 1:32 hours
cortex
neuroscientists
neural
understanding
introduce
briefly
mind
carefully
review
chat
youtube image

27 Sep 2019

A very interesting paper, recently published at ICLR 2019, studying the impact of sparsity in the context of continual learning:
https://openreview.net/forum?id=Bkxbrn0cYX

Related: Continual Learning via Neural Pruning
Siavash Golkar, Michael Kagan, Kyunghyun Cho https://arxiv.org/abs/1903.04476
  • 6 participants
  • 2:07 hours
present
momenta
podcast
episodes
soon
conference
twitch
chats
consultation
watching
youtube image

20 Sep 2019

Paper review: https://arxiv.org/abs/1804.02464 "Differentiable plasticity: training plastic neural networks with backpropagation"

It aims to connect the work we are doing on structural plasticity through Hebbian learning with continual learning.

Will possibly review a 2nd paper: "Backpropamine: training self-modifying neural networks with differentiable neuromodulated plasticity"
https://openreview.net/forum?id=r1lrAiA5Ym

Subutai, time permitting, will go over "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties"

https://www.nature.com/articles/338334a0
  • 6 participants
  • 54 minutes
soon
expecting
plans
meet
watching
streaming
thinking
disrupt
research
regulated
youtube image

18 Sep 2019

https://link.springer.com/article/10.1007/s10827-019-00729-1 from visiting scientist Florian Fiebig. He says:

It's a brief six-page paper, and I think it can serve as a neat introduction to the kinds of spiking neural network models of the cortical microcircuit I worked on for my PhD.

The main idea in short:
Many Hebbian learning rules violate Dale's principle (a neuron cannot be both excitatory and inhibitory; all of its axons release the same neurotransmitter) because dynamic synaptic weight learning may change the sign of an individual connection. Using the example of a reduced cortical microcircuit originally built as an attractor model of working memory, we show how biological cortex might instead learn negative correlations through a di-synaptic circuit involving double bouquet cells (DBCs). These cells are distributed with striking regularity across the cortical surface and innervate the whole minicolumn below them without affecting neighboring columns: "Indeed, disregarding some exceptions, there appears to be one DBC horsetail per minicolumn"
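
A toy sketch of that idea: rather than letting a Hebbian rule drive an excitatory weight negative, the learned negative influence arrives di-synaptically through an inhibitory interneuron, so every individual weight keeps its sign. All names and values are illustrative:

    def effective_drive(pre_rate, w_direct, w_pre_to_dbc, w_dbc_to_post):
        # Dale-respecting: every weight stays non-negative; the "negative"
        # correlation is carried by the double bouquet cell (DBC) relay.
        assert min(w_direct, w_pre_to_dbc, w_dbc_to_post) >= 0
        dbc_rate = w_pre_to_dbc * pre_rate       # excitatory drive onto the DBC
        return w_direct * pre_rate - w_dbc_to_post * dbc_rate  # DBC inhibits post

    # Learning increases w_dbc_to_post to encode a negative correlation,
    # instead of flipping the sign of a direct excitatory synapse.
    print(effective_drive(pre_rate=10.0, w_direct=0.2,
                          w_pre_to_dbc=1.0, w_dbc_to_post=0.5))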
  • 5 participants
  • 1:56 hours
neuron
conductance
electrophysiological
synaptic
circuit
postsynaptic
plasticity
gradually
spiking
experimenters
youtube image

13 Sep 2019

  • 3 participants
  • 45 minutes
plasticity
cortex
neuroscientist
neuronal
neuroscience
structural
synapses
structure
brain
research
youtube image

30 Aug 2019

Yes, we're reviewing our own paper. :P Two newer Numenta hires are going to review our latest theoretical neuroscience paper. This is mainly for the benefit of the new hires, to help them fully understand the Thousand Brains Theory of Intelligence.
  • 7 participants
  • 1:51 hours
brain
neuroscientists
thinking
cortex
discussed
understanding
introduction
mindset
suggests
structure
youtube image

19 Aug 2019

  • 5 participants
  • 41 minutes
sparse
neural
sparsely
regularization
sparsity
brain
intelligence
subtle
techniques
l0
youtube image

16 Aug 2019

Numenta Journal Club - Aug 16, 2019

Weight Agnostic Neural Networks:

https://arxiv.org/abs/1906.04358

This is a fairly new paper (Jun 11) that reinforces the implication of the Lottery Ticket Hypothesis: the weights of a network don't matter as much as people think, and much of the importance lies in the structure of the network.

Discussion at https://discourse.numenta.org/t/weight-agnostic-neural-networks/6467
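
A sketch of the paper's evaluation protocol, as we understand it: tie every connection in a fixed topology to one shared weight and measure performance across several values of that weight; `evaluate` is a hypothetical placeholder:

    import numpy as np

    def wann_score(topology, evaluate, weights=(-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)):
        # A good topology should perform well regardless of the exact weight.
        scores = [evaluate(topology, shared_weight=w) for w in weights]
        return float(np.mean(scores)), float(np.min(scores))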
  • 6 participants
  • 1:04 hours
neural
minimal
intelligent
sparse
understand
sufficiently
representation
agnostic
tuning
biases
youtube image

9 Aug 2019

  • 7 participants
  • 35 minutes
sparser
computing
structure
gpus
clusters
capacity
matrix
brain
bits
papers
youtube image

23 Jul 2019

Paper review of "Learning distant cause and effect using only local and immediate credit assignment" (https://arxiv.org/abs/1905.11589)

Discuss at https://discourse.numenta.org/t/paper-review-of-recurrent-sequence-memory/6357
  • 6 participants
  • 1:05 hours
memory
neural
insights
predictive
idea
representation
machine
recurrence
incubators
company
youtube image

19 Jul 2019

For this Friday’s journal club, we will be looking at a recently published paper from Bruno Olshausen’s group at the Redwood Center for Theoretical Neuroscience. It proposes an algorithm that is more plausible in terms of biological constraints. It also ties in well with the latest discussions on continual learning, providing an ingenious and elegant approach to the catastrophic forgetting problem in multitask learning.

Here is the link to the paper: https://arxiv.org/abs/1902.05522
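
A minimal sketch of the superposition idea, assuming elementwise ±1 context binding (one of the variants in the paper); illustrative, not the authors' code:

    import numpy as np

    rng = np.random.default_rng(0)
    d = 512
    contexts = [rng.choice([-1.0, 1.0], size=d) for _ in range(3)]
    tasks = [rng.standard_normal(d) for _ in range(3)]   # stand-in parameters

    W = sum(t * c for t, c in zip(tasks, contexts))      # store in superposition

    t0_hat = W * contexts[0]   # unbind task 0; the other tasks act as noise
    print(np.corrcoef(t0_hat, tasks[0])[0, 1])  # ~0.58 with three stored models

Interference grows with the number of stored models, which is what limits how many tasks one set of weights can hold.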
  • 5 participants
  • 44 minutes
adaptive
neural
behavior
trained
accuracy
recency
simulated
heuristics
catastrophic
randomize
youtube image

17 Jul 2019

Subutai will quickly review Elon Musk & Neuralink's new paper and we will discuss. https://www.biorxiv.org/content/10.1101/703801v1
  • 6 participants
  • 33 minutes
neuroscientists
neuroscientist
neural
neuroscience
scientists
neuron
research
discussion
musk
tweeted
youtube image

12 Jul 2019

We have a visitor who recently finished her PhD at Purdue, and will be starting as a professor at Yale in August. She has an upcoming paper to be published in Nature called "Towards Spike-based Machine Intelligence with Neuromorphic Computing". She will be discussing this work with us at our Numenta Research Meeting.
  • 5 participants
  • 1:54 hours
currently
screens
livestream
ready
presentation
cognitive
introduced
eventually
host
supercomputer
youtube image

10 Jun 2019

Review of a paper on bioRxiv: https://www.biorxiv.org/content/10.1101/657114v1

Layer 6 ensembles can selectively regulate the behavioral impact and layer-specific representation of sensory deviants

Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
  • 3 participants
  • 43 minutes
experimental
suggesting
conclusion
project
partly
somatosensory
anticipating
insights
processing
sailing
youtube image

8 Jun 2019

Review of the paper “Scalable training of artificial neural networks” (https://www.nature.com/articles/s41467-018-04316-3) and how it relates to our ongoing research on applying sparsity to neural networks.

Additional papers reviewed to set the background for the discussion:

1) Rethinking the Value of Network Pruning (https://arxiv.org/abs/1810.05270): structured pruning using several different approaches, reinitialize remaining weights to random values.

2) The Lottery Ticket Hypothesis (https://arxiv.org/abs/1803.03635): Finding Sparse, Trainable Neural Networks: unstructured pruning based on the magnitude of final weights, set remaining weights to initial values.

3) Deconstructing Lottery Tickets (https://arxiv.org/abs/1905.01067): Zeros, Signs, and the Supermask: unstructured pruning based on the magnitude of final weights or the magnitude increase, set weights to constants with same sign as previous initial values.

Structured pruning usually refers to changing the network architecture, e.g. removing a filter or a layer.
Unstructured pruning is “sparsifying”: killing individual connections by setting their weights to zero and freezing them.
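
A minimal sketch of unstructured magnitude pruning in the sense used above (illustrative):

    import numpy as np

    def magnitude_prune(W, sparsity=0.9):
        # Zero the smallest-magnitude weights and return the frozen mask.
        k = int(sparsity * W.size)
        threshold = np.sort(np.abs(W).ravel())[k]
        mask = (np.abs(W) >= threshold).astype(W.dtype)
        return W * mask, mask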


Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
  • 4 participants
  • 43 minutes
pruning
consider
issue
tending
process
smart
tuning
model
finished
mara
youtube image

6 Jun 2019

Broadcasted live on Twitch -- Watch live at https://www.twitch.tv/rhyolight_
  • 6 participants
  • 1:52 hours
fmri
eeg
microscopic
frequencies
researchers
experimentally
implanted
probes
observed
cortical
youtube image