A primary goal of neuroscience is to understand how the brain works: not in artificial laboratory tasks, but when it is using its full capabilities to thrive amid the rigors of the natural environment. Neuroscience has made enormous progress by examining how the brain performs simplified tasks, but these tasks do not expose the richly adaptive dynamics the brain must deploy in a changing world. The current neuroscientific understanding of the brain is therefore missing fundamental ingredients. We seek to fill this gap, providing a new paradigm for the conduct of behavioral neuroscience and an unprecedented opportunity to observe the neural computations that solve a complex natural task. We record the activity of many neurons in multiple areas of the mouse brain while the animal forages in a virtual reality environment, and we develop mathematical models to make sense of the complex data. This research will provide a unique training opportunity for undergraduate and graduate students in both computational and experimental neuroscience, and it will advance society's goals of understanding the biology of healthy and disordered brains, with the ultimate hope of repairing neurological problems.
Experimenters will train mice to forage in a virtual reality environment while recording activity from neurons in four brain areas involved in vision and navigation: visual cortex, entorhinal cortex, posterior parietal cortex, and hippocampus. State-of-the-art analysis techniques will be used to describe the mouse's behavior and to discover neural representations of the internal models that express the animal's beliefs about quantities that cannot be directly observed in the sense data. Finally, we will uncover how neural representations of critical task variables are communicated and transformed across brain areas, guided by the hidden-variable dynamics of the behavioral model. Together, these experiments, theory, and analyses will provide an unprecedented, system-wide understanding of neural computation, ranging from the scale of individual neurons up to a multi-region system. A key feature of the approach is the pervasive influence of theory, both in structuring experiments and in dictating analyses. Since the great strength of the human brain is its ability to comprehend the hidden structure of the world, this approach takes an essential step toward unraveling the mysteries of cognition.
Simons Collaboration on the Global Brain
Imagine catching a firefly. There are multiple steps: seeing the firefly, estimating its location in between flashes, deciding when to move to catch it, and then moving. Each step engages different brain areas—for example, seeing the firefly activates visual processing regions whereas moving activates motor systems—yet how these areas interact to produce an action based on sensory experience is a mystery. We have designed an experiment to get at these interactions using, it turns out, virtual fireflies. Working in monkeys, we train animals to forage through a virtual environment for flashing specks of light. As in real life, such a virtual task engages a variety of brain regions, including those involved in sensory and perceptual processing, navigation, decision-making, and movement. While previous work has studied each of these brain regions individually, we have taken the research one step further to study many regions at the same time. Using sophisticated recording technology, we simultaneously monitor the electrical activity of neurons in each brain region, allowing us to observe the flow of information from brain region to brain region as the animal performs the virtual foraging task. Because even the simplest tasks require the precise coordination of many neural networks distributed across many different brain areas, this approach will be broadly applicable to studying many tasks the brain performs.
In parallel experiments in freely moving macaques, we also explore real-world foraging. The brain evolved complex recurrent networks to enable flexible behavior in a dynamic and uncertain world, but its computational strategies and underlying mechanisms remain poorly understood. We have planned experiments to uncover the network basis of neural computations in foraging, an ethologically relevant behavior that involves sensory integration, spatial navigation, memory, and complex decision-making. We will use large-scale electrical recordings from multiple relevant, interconnected areas (visual cortical area V4, area 7a, entorhinal cortex, hippocampus, parahippocampal gyrus, and prefrontal cortex) of freely behaving macaques. To track the neural network computations used in these ethologically relevant, natural tasks, we will exploit recent advances in both statistical data analysis and theories of neural computation. First, to characterize behavior, we will model relationships between task-relevant sensory, motor, and internal variables using graphical modeling. Animal behavior will be modeled in the framework of partially observable Markov decision processes (POMDPs), and these models will provide predictions about which variables the animals use and how those variables interact. Second, once we have modeled the behaviorally relevant variables, we will use modern data analysis techniques to identify them in the patterns of neuronal responses, extracting the low-dimensional, task-relevant signals from the high-dimensional population activity. The time series of these low-dimensional neural representations will then be used to analyze the transformation and flow of signals between brain areas, using measures such as directed information. Finally, we will compare these neural analyses to predictions from the normative models of the foraging task.
We hypothesize that neural representations of sensory and internal variables will exhibit the same causal and temporal relationships manifested in the behavioral model. By combining, for the first time, normative modeling, selective dimensionality reduction of neural population signals, and quantification of directed information flow, we will be able to identify the transformations within and between key brain areas that enact neural computations during complex natural tasks. The team project aims to produce a transformative view of distributed neural population coding, unifying ethologically crucial computations across multiple neural systems.
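The analysis pipeline described above, reducing high-dimensional population activity to low-dimensional task signals and then measuring directed interactions between areas, can be illustrated with a toy simulation. The sketch below is purely illustrative: it uses PCA as the dimensionality-reduction step and a Granger-style predictability gain as a simple stand-in for directed information, and the simulated "areas", latent variable, and parameters are all invented for this example, not the project's actual data or methods.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2000

# Hypothetical 1-D latent task variable (e.g., a belief about target location)
latent = np.cumsum(rng.normal(size=T)) * 0.1

# Simulated population activity: "area A" encodes the latent now, while
# "area B" encodes it at a 5-sample delay (so information flows A -> B)
lag = 5
wA, wB = rng.normal(size=30), rng.normal(size=30)
A = np.outer(latent, wA) + rng.normal(size=(T, 30))
B = np.outer(np.roll(latent, lag), wB) + rng.normal(size=(T, 30))

def top_pc(X):
    """Score on the first principal component (a low-dimensional readout)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]

a, b = top_pc(A), top_pc(B)

def granger_gain(x, y, lag):
    """Variance of future y explained by past x beyond past y alone
    (a Granger-style stand-in for directed information)."""
    past_y, past_x, fut = y[:-lag], x[:-lag], y[lag:]
    def resid_var(preds):
        P = np.column_stack(preds + [np.ones(len(fut))])
        beta, *_ = np.linalg.lstsq(P, fut, rcond=None)
        return np.var(fut - P @ beta)
    return resid_var([past_y]) - resid_var([past_y, past_x])

# The A -> B gain should exceed the B -> A gain, recovering the
# direction of information flow built into the simulation
print(granger_gain(a, b, lag), granger_gain(b, a, lag))
```

In this sketch the latent variable is a random walk, so each area's own past is already predictive of its future; the directed measure asks what the *other* area's past adds on top of that, which is what distinguishes signal flow from shared encoding.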
In everyday life, we often experience activation of more than one of our sensory systems at a time. For example, listening to a person's voice while watching their lips move is a fundamental interaction that underlies speech perception and social communication. Similarly, as we move through the world and interact with our environment, signals from our visual and balance (vestibular) systems must work together seamlessly and consistently. This process is known as 'multisensory integration'. Our lab characterizes how the brain performs optimal multisensory integration, both when it works normally and when it is disrupted in conditions such as autism spectrum disorder.
Vestibular and Multisensory Cues in Spatial Navigation
The 2014 Nobel Prize in Physiology or Medicine for the discovery of 'place' cells in the hippocampus and 'grid' cells in the entorhinal cortex shed light on a brain system responsible for encoding spatial representations of the environment. Yet how multisensory cues control and define the properties of these cells remains poorly understood. Vestibular sensory information has long been recognized as playing an important role in spatial orientation and navigation. In recent years, virtual reality (VR) has become a popular tool for exploring navigational processes, even though vestibular cues are in conflict when rodents are head-fixed. Like the commercial flight-simulator industry, which has long known that complete immersion in VR requires a combination of physical linear and angular accelerations with large-field optic flow, we explore how visual, vestibular, and motor-related cues collectively control the spatial properties of the navigation circuit in both macaques and rodents.
Dissecting Circuit Computations in the Cerebellar Cortex
With a focus on an internal model computation of gravity in the caudal cerebellar vermis, part of the vestibulocerebellum, we aim to understand the role of individual cell types and synapses in cerebellar computation. We take advantage of a new molecular genetic toolkit developed in the Sillitoe lab to explore the cell-specific, neural-circuit properties of a functionally important and experimentally tractable computation in the caudal vermis: inference of an internal estimate of orientation relative to gravity, which is critical for spatial orientation, posture, and navigation. To test this experimentally, we have devised an in vivo model that enables us to manipulate chemical communication at the main cerebellar synapses. Our model uses the Cre/loxP genetic approach to conditionally block the expression of a vesicular glutamate transporter molecule in climbing fibers and unipolar brush cells of the mouse cerebellum. We hypothesize that these two excitatory synapses provide spatial and temporal modulation of cerebellar output during behavior. Our main goal is to test whether and how loss of signaling at these synapses results in computational deficits that impair signals critical for computing gravity, and thus for orienting and navigating in a natural environment. We will use the experimental data to convert our previous gravity-sensing, Bayesian, high-level computation into a circuit-level, anatomically correct model that implements the internal model computations, and this model will then be used to test predictions of how synapse-specific information shapes the function of the whole circuit. Our predictions will be validated in vivo by neuronal spike recordings in behaving mice. Collectively, these experiments aim to systematically dissect the cellular mechanisms that underlie cerebellar computation by creating a mechanistic, conceptual structure-computation-function diagram of the canonical cerebellar circuit and its targets in vivo.
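As a loose illustration of the kind of internal-model computation described above, the toy example below fuses a noisy canal-like angular-velocity signal with a noisy otolith-like tilt signal, using a one-dimensional Kalman filter to estimate head orientation relative to gravity. Every signal and parameter here is invented for illustration; this is a minimal sketch of sensor fusion for gravity estimation, not the lab's actual Bayesian model or the cerebellar circuit implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.01, 1000

# True head tilt (radians) driven by a smooth angular-velocity profile
t = np.arange(T) * dt
omega_true = 0.5 * np.sin(2 * np.pi * 0.5 * t)   # rad/s
tilt_true = np.cumsum(omega_true) * dt

# Noisy sensors: a canal-like velocity signal and an otolith-like tilt signal
canal = omega_true + rng.normal(0.0, 0.05, T)
otolith = tilt_true + rng.normal(0.0, 0.2, T)

# 1-D Kalman filter: predict tilt by integrating the canal signal,
# then correct the prediction with the otolith measurement
q = (0.05 * dt) ** 2   # process noise variance (integrated canal noise)
r = 0.2 ** 2           # otolith measurement noise variance
tilt_hat, P = 0.0, 1.0
est = np.empty(T)
for k in range(T):
    tilt_hat += canal[k] * dt                    # predict via integration
    P += q
    K = P / (P + r)                              # Kalman gain
    tilt_hat += K * (otolith[k] - tilt_hat)      # correct with otolith
    P *= 1.0 - K
    est[k] = tilt_hat

err_fused = np.sqrt(np.mean((est - tilt_true) ** 2))
err_otolith = np.sqrt(np.mean((otolith - tilt_true) ** 2))
# The fused estimate should beat either sensor alone: integrating the
# canal signal drifts, and the raw otolith signal is noisy
print(err_fused, err_otolith)
```

The point of the sketch is the division of labor: velocity signals alone accumulate drift, position-like signals alone are noisy, and a Bayesian observer weighting each by its reliability recovers a tilt estimate better than either source, which is the essence of an internal model of gravity.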
Both Human and Animal Models
Our neurophysiology studies are done in rodents and monkeys, but we also study critical aspects of perception and spatial navigation in human subjects. These human studies both refine behavioral tasks for use in animal experiments and allow direct quantitative comparisons with data collected from patients suffering from sensory, motor, and cognitive disorders.