Typical neuroscience experiments start by assuming we know the set of variables that drive neural activity. But what if neurons are tuned to variables we would never have guessed? What if, as with social interaction, the stimulus set is too complex to be boiled down to a few dimensions? With Jeff Beck, we’re developing models that infer stimulus categories directly from data, allowing us to “tag” images and movies based on the neural responses they evoke.
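The core idea can be sketched in a toy form: if stimuli fall into latent categories, those categories should be recoverable by clustering the neural response vectors they evoke, with no stimulus labels needed. Below is a minimal, illustrative version using a hand-rolled k-means on synthetic data; the actual models are far richer, and every name and number here is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: responses of 20 neurons to 60 stimuli drawn from two
# latent stimulus categories (unknown to the clustering step below).
responses = np.vstack([
    rng.normal(0.0, 1.0, size=(30, 20)),
    rng.normal(3.0, 1.0, size=(30, 20)),
])

def kmeans(X, k, n_iter=50):
    """Minimal k-means: group stimuli by their neural response vectors."""
    centers = X[[0, len(X) - 1]]  # deterministic init for the sketch
    for _ in range(n_iter):
        # assign each stimulus to its nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned responses
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# "Tag" each stimulus with its inferred category.
tags = kmeans(responses, k=2)
```

With well-separated categories, the inferred tags recover the two hidden stimulus groups without ever seeing the stimuli themselves.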
Most neuroscience experiments begin by stripping away as much of the complexity of the real world as they can afford to. But when the phenomena of interest are our social interactions — who we trust, who we fight with, who we love — there’s only so much complexity you can remove. In P[λ]ab, we’re studying the ways in which humans and other primates make strategic social decisions in real time by recording from the brain as pairs of individuals play dynamic games.
By modeling these interactions, we’re able to generate realistic samples of actual play, as well as characterize players’ strategies.
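A generative model of play can be sketched very simply: model a player's next choice as a logistic function of recent history, then sample forward to produce synthetic games. The weights, game, and opponent policy below are all illustrative placeholders, not the lab's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical strategy for a repeated binary-choice game: the player's
# next choice depends (via logistic weights) on their own and the
# opponent's previous choices. Weights are illustrative, not fitted.
w_self, w_opp, bias = 0.8, -1.2, 0.1

def play(n_trials=1000):
    """Sample synthetic play from the strategy model (a generative model)."""
    a, b = 1, 0                 # previous choices: player, opponent
    choices = []
    for _ in range(n_trials):
        # probability the player chooses option 1 on this trial
        p = 1.0 / (1.0 + np.exp(-(w_self * a + w_opp * b + bias)))
        a_new = int(rng.random() < p)
        b = int(rng.random() < 0.5)   # toy opponent: plays uniformly
        a = a_new
        choices.append(a)
    return np.array(choices)

sample = play()
```

Once such a model is fit to real behavior, the same sampling loop produces realistic synthetic play, and the fitted weights characterize the player's strategy.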
Thanks to advances in microscopy and calcium indicators, it’s now possible to collect terabytes of data in a single experiment. But that increase in data volume comes at the cost of increased processing time. Fortunately, recent work on preprocessing algorithms for imaging data, along with methods for characterizing cell responses and inferring the functional relationships between them, has made it possible to envision a real-time pipeline for neural data analysis.
Together with Eva Naumann’s lab, we’re working to develop a fully integrated online analysis platform that will facilitate closed-loop, all-optical control in the larval zebrafish. This is work in progress, so stay tuned!
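The closed-loop idea can be sketched in a few lines: as frames stream in, extract fluorescence from a region of interest, compute ΔF/F against a running baseline, and issue a command when activity crosses threshold. This is a toy sketch with made-up thresholds and synthetic frames, not the platform itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def dff(f, baseline):
    """Fractional fluorescence change relative to a running baseline."""
    return (f - baseline) / baseline

def closed_loop(frames, roi_mask, threshold=0.5):
    """Toy online loop: per frame, extract ROI fluorescence, update a
    running baseline, and record a trigger whenever ΔF/F crosses the
    threshold (a real system would send an optical-control command)."""
    baseline = None
    triggers = []
    for t, frame in enumerate(frames):
        f = frame[roi_mask].mean()
        baseline = f if baseline is None else 0.99 * baseline + 0.01 * f
        if dff(f, baseline) > threshold:
            triggers.append(t)
    return triggers

# Synthetic movie: quiet frames with a calcium transient at frames 50–55.
frames = np.full((100, 8, 8), 1.0) + rng.normal(0, 0.01, size=(100, 8, 8))
frames[50:56] += 2.0
roi = np.zeros((8, 8), bool)
roi[2:6, 2:6] = True

events = closed_loop(frames, roi)
```

The design point is latency: everything in the loop must run within one frame interval, which is what makes fast preprocessing algorithms a prerequisite for closed-loop control.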
Where we look speaks volumes about what we’re thinking. For over a century, psychologists and neurobiologists have used the movements of the eyes and measurements of pupil size to study the mind, but the need for experimental control has limited our ability to study eye movements in naturalistic settings. In P[λ]ab, we are pairing recent advances in eye tracking technology with methods in computer vision and machine learning to tackle the challenge of studying eye movements in real-world settings, with applications ranging from the treatment of acute fear to the way we view art.
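As a flavor of the computer-vision side, the simplest possible pupil localizer treats the pupil as the darkest region of an eye image and takes the centroid of thresholded pixels. Real trackers fit ellipses, model corneal glints, and handle occlusion; this sketch, with its synthetic image and made-up threshold, is illustrative only.

```python
import numpy as np

def pupil_center(frame, dark_thresh=50):
    """Toy pupil localization: threshold dark pixels and return their
    centroid as (x, y). Illustrative only; not a production tracker."""
    ys, xs = np.nonzero(frame < dark_thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic eye image: bright background with a dark disc at (40, 25).
img = np.full((60, 80), 200, dtype=np.uint8)
yy, xx = np.ogrid[:60, :80]
img[(xx - 40) ** 2 + (yy - 25) ** 2 <= 36] = 10

cx, cy = pupil_center(img)
```

Running the same estimate frame by frame on head-mounted camera video is what turns raw images into a gaze signal usable in the wild.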