Numenta / Brains@Bay

These are all the meetings we have in "Brains@Bay" (part of the organization "Numenta"). Click into individual meeting pages to watch the recording and search or read the transcript.

13 Apr 2022

Follow-up Q&A and slides will be posted soon - check the meetup page for updates.
Slides: https://numenta.com/resources/videos/brainsatbay-neuromodulators-and-ai
Link to meetup: https://www.meetup.com/Brains-Bay/events/284481247/

In this Brains@Bay meetup, we discussed how the principles of neuromodulators in the brain can lead to more flexible and robust machine learning systems.

First, Srikanth Ramaswamy (Newcastle University) gave an overview of the biological organizing principles of neuromodulators in adaptive cognition and highlighted the competition and cooperation across neuromodulators.

Then, Jie Mei (the Brain and Mind Institute) discussed ongoing research on bio-inspired mechanisms of neuromodulatory function in DNNs, and proposed a computational framework to inspire new architectures of “neuromodulation-aware” DNNs.

Finally, Thomas Miconi (ML Collective) talked about his work on evolving neural networks, endowed with plastic connections and reward-based neuromodulation, based on a computational neuroscience framework.
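The talks above do not come with reference code, but as a rough, hypothetical sketch of the kind of mechanism reward-based neuromodulation refers to, the Python/NumPy snippet below gates a plain Hebbian weight update with a global reward-like signal (all sizes, rates, and the reward schedule are invented for illustration, not taken from the speakers' models).

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))   # toy synaptic weights
eta = 0.05                                      # base learning rate

def step(x, reward, W):
    """One update: a Hebbian outer product scaled by a neuromodulatory signal.

    `reward` plays the role of a global neuromodulator: when it is zero the
    synapses do not change, when it is large the same pre/post coincidences
    produce large changes.
    """
    y = np.tanh(W @ x)                      # postsynaptic activity
    dW = eta * reward * np.outer(y, x)      # reward-gated Hebbian term
    return y, W + dW

x = rng.normal(size=n_in)
for t in range(10):
    reward = 1.0 if t % 3 == 0 else 0.0     # sparse, phasic "neuromodulation"
    y, W = step(x, reward, W)

print(W.round(3))
```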

To learn more about Brains@Bay, visit our Meetup page: https://www.meetup.com/Brains-Bay/

0:00 Introduction
1:39 Srikanth Ramaswamy: A primer on neuromodulatory systems
33:19 Jie Mei: Implementing multi-scale neuromodulation in artificial neural networks
1:07:54 Thomas Miconi: How to evolve your own lab rat
1:33:22 Discussion + Q&A
- - - - -
Numenta is leading the new era of machine intelligence. Our deep experience in theoretical neuroscience research has led to tremendous discoveries on how the brain works. We have developed a framework called the Thousand Brains Theory of Intelligence that will be fundamental to advancing the state of artificial intelligence and machine learning. By applying this theory to existing deep learning systems, we are addressing today’s bottlenecks while enabling tomorrow’s applications.

Subscribe to our News Digest for the latest news about neuroscience and artificial intelligence:
https://numenta.com/news-digest/

Subscribe to our Newsletter for the latest Numenta updates:
https://tinyurl.com/NumentaNewsletter

Our Social Media:
https://twitter.com/Numenta
https://www.facebook.com/OfficialNumenta
https://www.linkedin.com/company/numenta

Our Open Source Resources:
https://github.com/numenta
https://discourse.numenta.org/

Our Website:
https://numenta.com/
  • 5 participants
  • 1:57 hours

15 Dec 2021

In this Brains@Bay meetup, we focused on how sensorimotor learning can lead to more flexible and robust machine learning systems. To learn more about Brains@Bay, visit our Meetup page: https://www.meetup.com/Brains-Bay/

Richard Sutton (DeepMind and University of Alberta) first talked about the increasing role of sensorimotor experience in AI and proposed a minimal architecture for an intelligent agent that is entirely grounded in experience.

Next, Clément Moulin-Frier (Flowers Laboratory) presented an evolutionary and developmental perspective on open-ended skill acquisition in humans and machines, highlighting the key role of intrinsically motivated exploration in the generation of behavioral regularity and diversity.

Finally, Viviane Clay (Numenta and University of Osnabrück) talked about the capabilities that emerge from letting deep neural networks learn more like biological brains - through sensorimotor interaction with the world.

Follow-up Q&A and slides will be posted soon - check the meetup page for updates.

Link to meetup and talk abstracts: https://www.meetup.com/Brains-Bay/events/282182527/

0:00 Introduction
2:50 Richard Sutton: The Increasing Role of Sensorimotor Experience in Artificial Intelligence
40:59 Clément Moulin-Frier: Open-ended Skill Acquisition in Humans and Machines: An Evolutionary and Developmental Perspective
1:13:05 Viviane Clay: The Effect of Sensorimotor Learning on the Learned Representations in Deep Neural Networks
1:31:33 Discussion + Q&A
  • 7 participants
  • 2:05 hours

7 Jun 2021

To learn more about Brains@Bay, visit our Meetup page: https://www.meetup.com/Brains-Bay/

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. In this Brains@Bay meetup, we focused on grid cells and how they act as an inspiration for machine learning architectures.

Numenta Senior Researcher Marcus Lewis started us off with a basic overview of grid cells and then dove into using grid cells and other cells to quickly form spatial memories.
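For readers unfamiliar with grid cells, the short NumPy sketch below shows a textbook idealization (not taken from the talk) of a single grid cell's firing map: summing three plane waves whose wave vectors are 60 degrees apart yields the characteristic hexagonal firing pattern over 2-D space. The spacing and environment size are arbitrary.

```python
import numpy as np

spacing = 0.5                                   # grid period in metres (arbitrary)
angles = np.deg2rad([0, 60, 120])
ks = [(4 * np.pi / (np.sqrt(3) * spacing)) * np.array([np.cos(a), np.sin(a)])
      for a in angles]

xs = np.linspace(0, 2, 200)
X, Y = np.meshgrid(xs, xs)
pos = np.stack([X, Y], axis=-1)                  # positions in a 2 m x 2 m environment

rate = sum(np.cos(pos @ k) for k in ks) / 3.0    # idealised grid-cell activity
rate = np.clip(rate, 0, None)                    # rectify to a firing rate
print(rate.shape, round(float(rate.max()), 2))
```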

Next, James Whittington, Postdoctoral Research Associate at the University of Oxford, talked about unifying space and relational memory in the hippocampal formation.

Finally, Kimberly Stachenfeld, Research Scientist at DeepMind, presented a relational view of grid cells in which grid cells represent geometry over graphs.

Professor Tim Behrens from the University of Oxford joined us for the discussion.

Link to meetup: https://www.meetup.com/BraIns-Bay/events/278434338/

0:00 Introduction
2:02 Marcus Lewis: Grid cell intro + Quickly forming structured memories
34:32 James Whittington: Generalisation in the hippocampal formation
1:03:17 Kimberly Stachenfeld: Representation Learning with Grid Cells
1:46:16 Discussion + Q&A
  • 6 participants
  • 2:36 hours

14 Apr 2021

To learn more about Brains@Bay, visit our Meetup page: https://www.meetup.com/BraIns-Bay/

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. In this meetup, neuroscientist and entrepreneur Jeff Hawkins, co-founder of Numenta, talks about his new book A Thousand Brains: A New Theory of Intelligence.

This is a fireside chat between Numenta researcher Lucas Souza and Jeff, covering the main aspects of the theory, what it represents for neuroscience and machine learning, and how we can incorporate these breakthrough ideas into our existing learning algorithms. It is followed by a Q&A.

Link to meetup: https://www.meetup.com/Brains-Bay/events/277296472/

0:00 A Fireside Chat with Jeff Hawkins
1:27:26 Q&A

  • 10 participants
  • 2:03 hours

18 Nov 2020

To learn more about Brains@Bay, visit our Meetup page: https://www.meetup.com/Brains-Bay/

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. In this meetup, we discuss alternatives to backpropagation in neural networks.

From the neuroscience side, Prof. Rafal Bogacz (University of Oxford) discusses the viability of backpropagation in the brain and the relationship between predictive coding networks and backpropagation.

Then, Sindy Löwe (University of Amsterdam) presents her latest research on self-supervised representation learning, where she shows networks can learn by optimizing the mutual information between representations at each layer of a model in isolation.
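No code accompanies the talk, but the minimal PyTorch sketch below illustrates the general idea of training each layer against its own local objective so that no gradients flow between layers. The layer sizes and the per-layer reconstruction loss here are invented placeholders; Löwe's work uses a mutual-information-based objective instead.

```python
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(32, 32) for _ in range(3)])
decoders = nn.ModuleList([nn.Linear(32, 32) for _ in range(3)])  # local heads
opts = [torch.optim.SGD(list(l.parameters()) + list(d.parameters()), lr=1e-2)
        for l, d in zip(layers, decoders)]

x = torch.randn(64, 32)
h = x
for layer, dec, opt in zip(layers, decoders, opts):
    inp = h.detach()                          # no gradient flows to earlier layers
    out = torch.relu(layer(inp))
    loss = ((dec(out) - inp) ** 2).mean()     # layer-local objective
    opt.zero_grad()
    loss.backward()                           # backprop stays inside this layer
    opt.step()
    h = out                                   # pass activations (not gradients) forward
```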

Finally, Jack Kendall (RAIN Neuromorphics) shows how equilibrium propagation can be used to train end-to-end analog networks, which can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.

Link to meetup: https://www.meetup.com/Brains-Bay/events/274459844/

0:00 Introduction
4:06 Rafal Bogacz (University of Oxford)
38:59 Sindy Löwe (University of Amsterdam)
1:11:02 Jack Kendall (RAIN Neuromorphics)
1:36:42 Discussion
  • 7 participants
  • 2:14 hours

26 Aug 2020

To learn more about Brains@Bay, visit our Meetup page: https://www.meetup.com/BraIns-Bay/

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. In this meetup, we focus on the role of active dendrites in learning from a neuroscience and a computational perspective.

First, Matthew Larkum (Larkum Lab) gives us an overview of the main advances in active dendrite research over the last two decades.

Then, Ilenna Jones (Kording Lab) presents her exciting new research on the computational power of biological dendritic trees.

Finally, Blake Richards (LiNC Lab) argues that dendrites solve an implementation problem for brains but are not necessary at the algorithmic level, and that their importance for ML is largely with respect to neuromorphic chips.

Link to meetup: https://www.meetup.com/BraIns-Bay/events/272662486/

0:00 Matthew Larkum - The Role of Active Dendrites in Learning
45:55 Ilenna Jones - Can single neurons solve MNIST?
1:15:08 Blake Richards - The Role of Dendrites in Machine Learning
1:44:17 Discussion / Q&A
  • 7 participants
  • 2:15 hours

3 Jun 2020

To learn more about Brains@Bay, visit our Meetup page https://www.meetup.com/BraIns-Bay

Brains@Bay Meetups focus on how neuroscience can inspire us to create improved artificial intelligence and machine learning algorithms. In this special edition we focus on the function of lateral connections (connections between neurons within a level). Long-range lateral connections are ubiquitous in the neocortex and cannot be explained by pure feedforward models. In this meetup, researchers from the Allen Institute for Brain Science discuss their recently published paper on modeling these connections.

The speakers are Stefan Mihalas, Ramakrishnan Iyer, and Brian Hu. The title of their talk is “Lateral Connections can Perform Contextual Integration in Cortical and Convolutional Neural Networks." They present a network model of cortical computation in which the lateral connections from surrounding neurons enable each neuron to integrate contextual information from features in the surround. They show that adding these connections to deep convolutional networks in an unsupervised manner makes them more robust to noise in the input image and leads to better classification accuracy under noise.
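As a loose illustration of this idea (not the published model, which derives its lateral weights from feature statistics in an unsupervised manner rather than using random weights), the PyTorch sketch below adds a recurrent lateral term to a convolutional feature map so that each unit receives contextual input from its spatial surround. All layer sizes and the number of settling steps are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Feedforward features, standing in for the output of some conv layer.
x = torch.randn(1, 3, 32, 32)
feedforward = nn.Conv2d(3, 16, kernel_size=3, padding=1)

# Lateral kernel: connects each unit to units in a spatial neighbourhood;
# zero the centre tap so the lateral term only carries surround information.
lateral = nn.Conv2d(16, 16, kernel_size=5, padding=2, bias=False)
with torch.no_grad():
    lateral.weight[:, :, 2, 2] = 0.0

f = F.relu(feedforward(x))              # purely feedforward response
h = f
for _ in range(5):                      # a few recurrent settling steps
    h = F.relu(f + 0.1 * lateral(h))    # add contextual input from the surround
```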

Link to paper: https://www.frontiersin.org/articles/10.3389/fncom.2020.00031

Link to meetup: https://www.meetup.com/BraIns-Bay/events/270695139/
  • 8 participants
  • 1:14 hours

11 May 2020

To learn more about Brains@Bay, visit our Meetup page https://www.meetup.com/BraIns-Bay

Brains@Bay Meetups are designed to bring together the fields of neuroscience and artificial intelligence. Speakers for this meetup have been selected from each discipline to provide unique views of the topic of Predictive Processing. The speakers for this event are Avi Pfeffer & Georg Keller.

Modern neuroscience has gained an increasing appreciation that predicting aspects of the environment, like the sensory consequences of action, is a core component of intelligent computation. In recent decades, many models, such as Hierarchical Temporal Memory, have been proposed to explain how the brain generates predictions. In this event, our speakers will talk about one family of such models: predictive processing.

Predictive processing states that the brain uses internal models of the world to continually make predictions about events in the environment. These predictions are compared with sensory input to generate prediction errors, which can be used to update the brain's internal models.
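The speakers' own models are much richer, but the minimal NumPy sketch below (all dimensions and rates arbitrary, not drawn from either talk) captures the loop described above: an internal model generates a prediction, the mismatch with sensory input becomes a prediction error, and that error updates both the internal state and, more slowly, the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

n_sense, n_latent = 16, 4
G = rng.normal(scale=0.1, size=(n_sense, n_latent))  # generative "internal model"
r = np.zeros(n_latent)                               # internal state

x = rng.normal(size=n_sense)                         # current sensory input
for _ in range(50):
    prediction = G @ r
    error = x - prediction            # prediction error signal
    r += 0.1 * (G.T @ error)          # fast inference: explain the current input
    G += 0.01 * np.outer(error, r)    # slow learning: improve the internal model

print(round(float(np.linalg.norm(x - G @ r)), 3))
```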

0:00 Georg Keller - Predictive Processing in the Neocortex
42:01 Avi Pfeffer - Building Long-Lived AI Systems Using Predictive Processing
1:17:40 Q&A/ Discussion
  • 9 participants
  • 1:47 hours

31 Oct 2019

Link to meetup: https://www.meetup.com/BraIns-Bay/events/265724515/

Meeting starts at 30:00.

Welcome to another session of Brains@Bay! This time we will be talking about Hebbian learning, one of the main mechanisms by which the brain is thought to learn from new experiences.

As always, we will start with a perspective from neuroscience, then move on to understand how these concepts can be applied to machine learning. Our meetup is a journal club, so expect a lot of very interesting discussions during the presentations.

We've changed the location for this next event to make it more comfortable for everyone - we've partnered with UCSC and will hold the event at UCSC Silicon Valley Campus. Please see the details on the meetup page.

Agenda:

** 06:30 - 07:10: Networking and Pizza Time!

** 07:10 - 07:50: Hebbian Learning in Neuroscience - Florian Fiebig, PhD

Title: Bayesian-Hebbian learning in spike-based cortical network simulations.

Many Hebbian learning rules exist, serving as a variety of local learning rules that both explain neuroscientific findings and shape various networks in machine learning. In a biophysical model of spiking cortical neural networks, we leverage abstracted processes of synaptic and nonsynaptic changes to derive a local, online Hebbian learning rule for spiking neurons inspired by Bayesian statistics. The resulting spiking implementation of a Bayesian Confidence Propagation Neural Network (BCPNN) is capable of explaining a broad array of neuroscientific findings and performing a variety of cognitive tasks. It may also be interesting to ML investigators puzzled by the spiking nature of neuronal computation, and by the problem of mapping existing learning rules onto a network that uses discrete spikes for information propagation and unsupervised online learning. SLIDES: https://prezi.com/view/Hf1MBr8cfXSNs8LeVDqt/
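As a very rough, rate-based caricature of this Bayesian-Hebbian idea (the actual BCPNN model operates on spikes with cascaded synaptic traces; every constant below is arbitrary), one can keep running estimates of pre-, post-, and co-activation probabilities and express weights as the log odds of co-activation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post, tau = 6, 3, 0.05
eps = 1e-4                                 # keeps the logarithms finite
p_i = np.full(n_pre, eps)                  # presynaptic activation estimate
p_j = np.full(n_post, eps)                 # postsynaptic activation estimate
p_ij = np.full((n_pre, n_post), eps)       # co-activation estimate

for _ in range(1000):
    x = (rng.random(n_pre) < 0.2).astype(float)    # binary presynaptic activity
    y = (rng.random(n_post) < 0.2).astype(float)   # binary postsynaptic activity
    p_i += tau * (x - p_i)                          # exponential running means
    p_j += tau * (y - p_j)
    p_ij += tau * (np.outer(x, y) - p_ij)

w = np.log(p_ij / np.outer(p_i, p_j))      # weights: log co-activation odds
b = np.log(p_j)                            # biases: log prior activation
print(w.round(2))
```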

** 07:50 - 08:30: Hebbian Learning in Machine Learning - Thomas Miconi, PhD

Title: Differentiable plasticity: training self-modifying networks for fast learning with Hebbian plasticity

Neural networks are usually trained over long periods with massive amounts of data. By contrast, animals and humans can learn complex information very quickly. This ability for fast lifelong learning results from (largely Hebbian) synaptic plasticity, carefully tuned by evolution. Here we show that plastic networks can be trained to learn quickly and efficiently, using standard gradient descent instead of evolution. The trained networks can quickly learn high-dimensional information and use it to solve a task. Our results show that differentiable plasticity may provide a powerful novel approach to the learning-to-learn problem.
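The abstract describes the mechanism at a high level; the minimal NumPy sketch below shows the structure of the forward pass of such a plastic layer. In the actual method the fixed weights, the per-connection plasticity coefficients, and the plasticity rate are trained by gradient descent across episodes while the Hebbian trace changes within an episode; here they are just random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8
W = rng.normal(scale=0.1, size=(n, n))      # slow, trained weights
alpha = rng.normal(scale=0.1, size=(n, n))  # trained plasticity coefficients
eta = 0.1                                   # trained plasticity rate
hebb = np.zeros((n, n))                     # fast, within-episode Hebbian trace

x = rng.normal(size=n)
for _ in range(20):
    y = np.tanh((W + alpha * hebb) @ x)             # plastic effective weights
    hebb = (1 - eta) * hebb + eta * np.outer(y, x)  # running Hebbian update
    x = y
```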

** 08:30 - 09:00 Discussions and Wrap-up

0:00 Starting soon...
30:51 Florian Fiebig - Hebbian Learning in Neuroscience
1:20:00 Thomas Miconi - Hebbian Learning in Machine Learning
  • 13 participants
  • 2:07 hours

20 Aug 2019

Some presentation slides posted on the meetup page: https://www.meetup.com/BraIns-Bay/events/263945823/

0:00 Subutai Ahmad - Sparsity in the Neocortex
24:05 Lucas Souza - Literature Review
43:46 Hattie Zhou - Deconstructing Lottery Tickets
1:19:00 Gordon Wilson - Sparsity in Hardware
  • 12 participants
  • 2:00 hours

11 Jul 2019

What is Continual Learning? Borrowing from a recently released survey [1], "Continual learning (CL) is a particular machine learning paradigm where the data distribution and learning objective changes through time, or where all the training data and objective criteria are never available at once".
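As a bare-bones illustration of this setting (toy tasks and hyperparameters are invented, and nothing here mitigates forgetting), the PyTorch sketch below trains one model on a sequence of tasks whose data distribution shifts over time, with no access to earlier tasks' data, and then measures how performance on earlier tasks degrades:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

tasks = []
for task_id in range(3):                    # the data distribution shifts per task
    w = torch.randn(10, 1)
    x = torch.randn(256, 10) + task_id      # shifted inputs
    y = x @ w
    tasks.append((x, y))

for task_id, (x, y) in enumerate(tasks):    # tasks arrive one at a time
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    # after finishing a task, check how much earlier tasks were forgotten
    with torch.no_grad():
        errs = [loss_fn(model(xo), yo).item() for xo, yo in tasks[: task_id + 1]]
    print(f"after task {task_id}: losses on seen tasks = {[round(e, 3) for e in errs]}")
```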

We will try to answer a few questions by reviewing the existing literature:
- How is it defined in the literature?
- What are the most common approaches?
- How do current approaches relate to biology and neuroscience?
- What are the limitations and open questions?
- What are future research directions?

As a reminder, our meetups are journal clubs, so we invite participants to read on the topic beforehand and join the discussion.

https://www.meetup.com/BraIns-Bay/events/262647238/

Review papers:
[1] Continual Learning for Robotics (https://arxiv.org/abs/1907.00182)
[2] Continual Lifelong Learning with Neural Networks: A Review (https://arxiv.org/abs/1802.07569)

Additional Literature:
[3] Overcoming Catastrophic Forgetting in Neural Networks (https://arxiv.org/abs/1612.00796)
[4] Continual Learning via Neural Pruning (https://arxiv.org/abs/1903.04476)
[5] Superposition of Many Models Into One (https://arxiv.org/abs/1902.05522)

0:00 Lucas Souza
1:18:09 Sam Heiserman
  • 19 participants
  • 1:54 hours