Current Projects

Check out what we're working on

Real-time analysis of neural data

Together with Eva Naumann’s lab, we’ve developed improv (paper), a software platform for designing and orchestrating adaptive experiments. By analyzing data in real time, we can measure, model, and manipulate neural activity as new data arrive. We’ve shown how these tools, in conjunction with holographic photostimulation, could in principle map the functional connectivity of large circuits in a few hours (paper, expanded version). More recently, we’ve worked on methods for fast dimensionality reduction and modeling of neural populations in real time (paper).

A whole-brain zebrafish activity map showing the distribution of motion-sensitive neurons, color-coded by preferred motion direction.
Concept for the closed-loop pipeline. Neural data from the zebrafish are collected in the form of images, preprocessed, and analyzed in real time. Targets for optical stimulation are then chosen based on the results of this analysis, creating adaptive experiments that test causal hypotheses.
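
As a rough illustration of the analysis step in this loop, here is a minimal sketch of online dimensionality reduction on streaming data, using scikit-learn’s IncrementalPCA. This is not improv’s actual API; the data source, shapes, and update schedule are all made up for illustration:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
n_neurons, n_components, batch_size = 500, 10, 50

def acquire_batch():
    """Hypothetical stand-in for frames arriving from the microscope."""
    return rng.poisson(2.0, size=(batch_size, n_neurons)).astype(float)

ipca = IncrementalPCA(n_components=n_components)

for step in range(100):
    X = acquire_batch()          # (batch_size, n_neurons) activity matrix
    ipca.partial_fit(X)          # update the low-dimensional model online
    z = ipca.transform(X[-1:])   # latest population state, (1, n_components)
    # ...stimulation targets for the next step would be chosen from z here...
```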

Animal vocalizations

Vocalization is a complex behavior that underlies vocal communication and vocal learning, and is central to understanding the foundations of human linguistic competency and musicality. Despite their prominence across a wide range of disciplines, vocalizations are often quantified in ad hoc, species-specific ways. Fortunately, recent advances in machine learning allow high-dimensional data to be compressed in a data-dependent manner, yielding low-dimensional encodings that minimize information loss. We use one such method, the variational autoencoder (VAE), to perform dimensionality reduction on the vocalizations and vocal learning behavior of several model organisms: laboratory mice, zebra finches, and marmosets. Together with Richard Mooney’s lab, we use latent representations of these species’ vocal behavior to reproduce and extend existing results in a species-agnostic manner, offering a unified view of vocal variability and learning on timescales ranging from milliseconds within individual syllables to days (paper, paper).

a. Syllable VAE: Segmented syllables from adult zebra finch song are projected to a low-dimensional space, then reconstructed from that space using a VAE. b. Shotgun VAE: The VAE is trained on 20 ms segments of adult zebra finch song to model variability on millisecond timescales. Shown are songs projected into the latent space using these shorter segments.
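
The core computation in both panels is the standard VAE: an encoder maps each spectrogram to a posterior over latent variables, and a decoder reconstructs the spectrogram from a sample. A minimal PyTorch sketch, assuming flattened 128 × 128 spectrograms and a 32-dimensional latent space for illustration (the published models are convolutional; everything here is schematic):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyllableVAE(nn.Module):
    """Toy fully connected VAE for flattened 128x128 spectrograms."""
    def __init__(self, n_pixels=128 * 128, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)        # posterior mean
        self.logvar = nn.Linear(256, z_dim)    # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_pixels))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

x = torch.rand(8, 128 * 128)                   # a batch of spectrograms
recon, mu, logvar = SyllableVAE()(x)
print(vae_loss(recon, x, mu, logvar))
```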

Efficient coding in the retina

How does the retina, which receives roughly one gigabit per second of visual information, compress that into something small enough to transmit down an optic nerve with a capacity of one megabit per second, three orders of magnitude lower? One answer, proposed by Horace Barlow half a century ago, is that the nervous system attempts to minimize redundancy, maximizing mutual information between the world and the brain’s representation of it while minimizing metabolic costs. This theory makes a number of testable predictions, including the well-known finding that retinal ganglion cells should respond to either increases or decreases (but not both) in light levels within small regions of visual space, their receptive fields.
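
In one standard textbook formulation (a generic version of the objective, not quoted from any particular paper), the efficient coding problem is to choose the encoding of stimuli into neural responses that maximizes transmitted information net of a metabolic penalty:

```latex
% X: stimulus; R: neural response; lambda: the price of metabolic cost.
\max_{p(r \mid x)} \; I(X; R) \;-\; \lambda \, \mathbb{E}\!\left[\mathrm{cost}(R)\right]
```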

Working together with Greg Field’s lab, we’ve shown that patterns of alignment between different mosaics (the tiled collections of receptive fields belonging to each cell type) can also be explained using efficient coding theory. This work builds on experimental findings from the Field lab (paper) and led to further, surprising theoretical results (paper). In short, the most information-efficient receptive field arrangements are determined both by the level of noise in the system and by the statistics of natural images.
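
A toy version of the noise dependence, in the spirit of classic efficient coding analyses (Atick and Redlich’s whitening-plus-Wiener-filter argument) rather than the specific models in these papers: given a 1/f² natural-image power spectrum, the information-optimal linear filter shifts from band-pass (decorrelating) at low noise to low-pass (smoothing) at high noise:

```python
import numpy as np

f = np.linspace(0.1, 10, 200)       # spatial frequency (cycles/degree)
S = 1.0 / f**2                      # natural-image power spectrum, ~1/f^2

for noise in (0.01, 1.0):           # low vs. high input noise power
    K = np.sqrt(S) / (S + noise)    # whitening filter x Wiener noise suppression
    print(f"noise={noise}: optimal filter peaks at {f[np.argmax(K)]:.2f} cyc/deg")
```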

Most recently, we’ve looked at what happens to these mosaics as the number of neurons available for coding changes. We found that larger numbers of available neurons lead to greater diversity in functionally defined cell types, starting with small, temporally smoothing receptive fields and progressing toward larger, temporally “differentiating” receptive fields.

Autoencoding whole-brain dynamics

Functional magnetic resonance imaging (fMRI) is one of the most popular modalities in human and clinical neuroscience, as it allows researchers to investigate relationships between high-level cognitive functions, brain activity patterns, and experimental variables of interest. Traditional fMRI analyses take a mass-univariate approach, in which a general linear model (GLM) is fit to each small volume pixel (“voxel”) independently and researchers correct for an inflated false positive rate post hoc. This method has been widely adopted for its simplicity and its ability to produce separate spatial brain maps capturing the inferred effects of experimental variables on brain-wide activity. However, it fails to account for the rich spatial and temporal structure inherent to the modality. In recent work, we’ve explored the idea of nesting variational autoencoder (VAE) methods inside a generalized additive model (GAM) framework to model entire brain volumes together (paper). This approach better accounts for the spatial dependencies of fMRI data while still generating separate, interpretable spatial maps capturing the inferred effects of experimental variables on whole-brain dynamics. In collaboration with Kevin LaBar’s lab, we’re working to extend this approach to characterize the spatio-temporal brain dynamics underlying transitions between emotional states in health and in disease.

A) VAE-GAM model schematic: brain volumes containing the signal of interest are compressed to a lower-dimensional representation by an encoder network. Sampled latents are then fed through a decoder network to yield a base map and separate spatial effect maps. Each effect map is scaled by a potentially nonlinear gain modeled with a Gaussian process, and variance is modeled separately on a per-voxel basis. B) Sample effect maps for VAE-GAM and GLM: effect maps for a visual stimulation task dataset analyzed using the proposed VAE-GAM approach (top) vs. the traditional GLM approach.
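
For concreteness, the mass-univariate baseline described above amounts to an independent least-squares fit at every voxel. A minimal sketch with made-up shapes and a single task regressor (real pipelines also convolve regressors with a hemodynamic response function and correct for multiple comparisons afterwards):

```python
import numpy as np

T, V = 200, 5000                     # timepoints, voxels
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(T),                 # intercept
                     rng.standard_normal(T)])    # one task regressor
Y = rng.standard_normal((T, V))                  # BOLD time series per voxel

# One GLM per voxel, solved in a single batched least-squares call:
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)     # beta has shape (2, V)
effect_map = beta[1]                 # per-voxel effect of the task regressor
```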