Typical neuroscience experiments start by assuming we know the set of variables that drive neural activity. But what if neurons are tuned to variables we would never have guessed? What if, as with social interaction, the stimulus set is too complex to be boiled down to a few dimensions? With Jeff Beck, we’re developing models that infer stimulus categories directly from data, allowing us to “tag” images and movies based on neural responses.
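To give a flavor of the idea (though not our actual model), here’s a minimal sketch: given a hypothetical matrix of trial-averaged neural responses, even a simple mixture model can assign each stimulus a data-driven category label, with no hand-picked tuning variables in sight.

```python
# Illustrative sketch only, not our model: infer stimulus "categories"
# by clustering neural responses rather than assuming categories up front.
# `responses` is a hypothetical (n_stimuli x n_neurons) matrix of
# trial-averaged firing rates; here it's just random placeholder data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
responses = rng.normal(size=(500, 80))

gmm = GaussianMixture(n_components=10, covariance_type="diag", random_state=0)
tags = gmm.fit_predict(responses)   # one data-driven category label per stimulus

# Each stimulus (image or movie clip) now carries a "tag" inferred purely
# from the population response.
print(np.bincount(tags))
```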
Most neuroscience experiments begin by stripping away as much of the complexity of the real world as they can afford to. But when the phenomena of interest are our social interactions — who we trust, who we fight with, who we love — there’s only so much complexity you can remove. In P[λ]ab, we’re studying the ways in which humans and other primates make strategic social decisions in real time by recording from the brain as pairs of individuals play dynamic games.
By modeling these interactions, we’re able to generate realistic samples of actual play, as well as characterize players’ strategies.
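As a toy illustration of what “modeling play” can mean (far simpler than the models we actually use), the sketch below fits a logistic-regression policy to one player’s simulated choices in a repeated matching-pennies game, reads off the weights as a crude summary of their strategy, and then samples synthetic play from the fitted policy.

```python
# Toy sketch, not our model: fit a simple history-dependent policy to one
# player's choices in matching pennies, then sample synthetic play from it.
# All data here are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
T = 1000
opp = rng.integers(0, 2, size=T)                  # opponent's choices
player = np.roll(opp, 1)                          # "copy the opponent's last move"...
player[0] = rng.integers(0, 2)
player = np.where(rng.random(T) < 0.1, 1 - player, player)  # ...plus some noise

# Features: each player's move on the previous trial.
X = np.column_stack([np.roll(player, 1), np.roll(opp, 1)])[1:]
y = player[1:]

policy = LogisticRegression().fit(X, y)
print("policy weights:", policy.coef_)            # how strongly each lag drives choices

# Generate a realistic-looking sample of play from the fitted policy.
p = policy.predict_proba(X)[:, 1]
synthetic = (rng.random(len(p)) < p).astype(int)
```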
Thanks to advances in microscopy and calcium indicators, it’s now possible to collect terabytes of data in a single experiment. But that increase in data volume comes at the cost of increased processing time. Fortunately, recent work on preprocessing algorithms for imaging data, along with methods for characterizing cell responses and inferring the functional relationships between them, has made it possible to envision a real-time pipeline for neural data analysis.
Together with Eva Naumann’s lab, we’re working to develop a fully integrated online analysis platform that will facilitate closed-loop, all-optical control in the larval zebrafish. This is work in progress, so stay tuned!
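As a rough sketch of what a single closed-loop step might look like (not the platform itself), the snippet below pulls ROI fluorescence out of each incoming frame, maintains a slowly adapting baseline to compute ΔF/F, and triggers feedback whenever activity crosses a threshold; frame_stream, roi_masks, and trigger_stimulus are hypothetical placeholders.

```python
# Rough closed-loop sketch with hypothetical placeholders; a real pipeline
# also involves registration, source extraction, and much more.
import numpy as np

def run_closed_loop(frame_stream, roi_masks, trigger_stimulus,
                    alpha=0.01, threshold=0.5):
    """frame_stream: iterable of 2D arrays; roi_masks: iterable of boolean masks."""
    baseline = None
    for frame in frame_stream:
        # Mean fluorescence within each ROI for this frame.
        f = np.array([frame[mask].mean() for mask in roi_masks])
        if baseline is None:
            baseline = f.copy()
            continue
        # Slowly adapting baseline gives a per-frame dF/F estimate.
        baseline = (1 - alpha) * baseline + alpha * f
        dff = (f - baseline) / baseline
        # Close the loop: act as soon as any ROI crosses threshold.
        active = np.flatnonzero(dff > threshold)
        if active.size:
            trigger_stimulus(active)
```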
Vocalization is a complex behavior that underlies vocal communication and vocal learning, and it is central to the study of human linguistic competency and musicality. Despite its prominence in a wide range of disciplines, vocal behavior is often quantified in an ad hoc, species-specific manner. Fortunately, recent advances in machine learning have produced techniques that compress high-dimensional data in a data-dependent manner, resulting in low-dimensional encodings that minimize information loss. We use one such method, the variational autoencoder (VAE), to perform dimensionality reduction on the vocalizations and vocal learning behavior of several model organisms: laboratory mice, zebra finches, and marmosets. Together with Richard Mooney’s lab, we use latent representations of these species’ vocal behavior to reproduce and extend existing results in a species-agnostic manner, offering a unified view of vocal variability and learning on timescales ranging from single millisecond-scale syllables to changes unfolding across days.
Two-dimensional embedding of sounds made by an adult zebra finch on a single day. Mouse over points to see spectrograms, which represent sounds with a vertical frequency axis and a horizontal time axis. The amorphous shape on the left consists of innate calls, while the lines on the right show the bird’s highly stereotyped learned song. Thank you to Samuel Brudner for collecting recordings.
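For the curious, here’s a stripped-down sketch of the core VAE machinery behind embeddings like the one above, assuming flattened 128×128 spectrogram patches as input; the model we actually use is more elaborate, but the encoder/decoder pair, the reparameterization trick, and the ELBO objective are the same ingredients.

```python
# Minimal VAE sketch (PyTorch) for spectrogram patches; placeholder data,
# simplified architecture, and no training loop shown.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectrogramVAE(nn.Module):
    def __init__(self, n_pixels=128 * 128, latent_dim=32, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_pixels, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_pixels))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample the latent while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Reconstruction error plus KL divergence from the standard normal prior.
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Usage with placeholder data: each row is one flattened spectrogram.
vae = SpectrogramVAE()
x = torch.rand(16, 128 * 128)
x_hat, mu, logvar = vae(x)
elbo_loss(x, x_hat, mu, logvar).backward()
# `mu` is the low-dimensional latent representation used for embeddings.
```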
Where we look speaks volumes about what we’re thinking. For over a century, psychologists and neurobiologists have used the movements of the eyes and measurements of pupil size to study the mind, but the need for experimental control has limited our ability to study eye movements in naturalistic settings. In P[λ]ab, we are pairing new advances in eye tracking technology with methods in computer vision and machine learning to tackle the challenge of studying eye movements in real-world settings, with applications ranging from the treatment of acute fear to the way we view art.
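As one small ingredient (and emphatically not the whole pipeline), the sketch below estimates pupil center and size from a hypothetical close-up eye image using simple thresholding in OpenCV; eye.png is a placeholder filename.

```python
# Rough pupil-detection sketch with OpenCV; "eye.png" is a placeholder
# close-up eye image, and the threshold value is ad hoc.
import cv2

def find_pupil(gray):
    """Return (x, y, radius) of the darkest roughly circular blob, or None."""
    blurred = cv2.GaussianBlur(gray, (7, 7), 0)
    # The pupil is the darkest region; invert so it becomes the foreground.
    _, mask = cv2.threshold(blurred, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    (x, y), r = cv2.minEnclosingCircle(biggest)
    return x, y, r

img = cv2.imread("eye.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
result = find_pupil(gray)
if result is not None:
    x, y, r = result
    print(f"pupil center: ({x:.0f}, {y:.0f}), radius: {r:.0f} px")
```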