## Modeling other minds:
### A neuroengineering approach

John Pearson

[pearsonlab.github.io/duke-CCN-talk-2017](https://pearsonlab.github.io/duke-CCN-talk-2017)

A social brain?

Mars et al. (PNAS 2014)

But how do you turn this...

...into this?

## You knew there was a catch

- How far can we distill social interaction?
- Trial averaging is out.
- Our (statistical) models are limiting our thinking.
### Today's plan: Social neuroscience from the outside in

- Learning "social space" from spiking ([arXiv](https://arxiv.org/abs/1512.01408))
- Inferring goals from complex motions ([arXiv](https://arxiv.org/abs/1702.07319))
- Single-trial analysis of neural spiking (in prep)
> What I cannot create, I do not understand.
>
> — Richard Feynman
### A reverse engineering approach

- Work "outside-in"
- Focus on computational constraints
- "Structured black box" modeling

How do neurons see the world?

Freiwald, Tsao, Livingstone (Nature Neuroscience, 2009)

But what if we use this?

Adams, Pearson, and Platt (in prep)

Xin Chen

Jeff Beck

Let's run an experiment

Let's imagine a model

### We are not the first

- Gallant lab (fMRI) ([Huth 2012](http://www.sciencedirect.com/science/article/pii/S0896627312009348), [Stansbury 2013](http://www.sciencedirect.com/science/article/pii/S0896627313005503))
- Continuous latent states ([Park 2014](http://www.nature.com/neuro/journal/v17/n10/abs/nn.3800.html), [Buesing 2014](http://papers.nips.cc/paper/5339-clustered-factor-analysis-of-multineuronal-spike-data), [Archer 2015](https://arxiv.org/abs/1511.07367), [Park 2015](http://papers.nips.cc/paper/5790-unlocking-neural-population-non-stationarities-using-hierarchical-dynamics-models))
- Discrete latent states ([Escola 2011](http://www.mitpressjournals.org/doi/abs/10.1162/NECO_a_00118#.WNqSexLythE), [Putzky 2014](http://papers.nips.cc/paper/5338-a-bayesian-model-for-identifying-hierarchically-organised-states-in-neural-population-activity))
- ...and many more
### So what's different?

- Previous models: latents capture *internal* dynamics
  - latents can be driven by stimuli
  - but vary across presentations of the same stimulus
- Our model: latents capture *stimulus* dynamics
  - each stimulus frame has a set of binary tags
  - tags follow a Hidden (semi-)Markov Model (see the sketch below)
  - latents are *the same* for repeated stimulus presentations
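A minimal NumPy sketch of the tag process just described: each of $K$ binary tags flips state according to a simple two-state Markov chain. The transition probability and sizes are toy assumptions, and the semi-Markov duration structure is omitted for brevity.

```python
# Toy sketch: K binary stimulus tags, each evolving as an independent
# two-state Markov chain over T frames. (The actual model uses a hidden
# semi-Markov process; p_stay and the dimensions are illustrative only.)
import numpy as np

rng = np.random.default_rng(0)
T, K = 500, 3                      # stimulus frames, number of binary tags
p_stay = 0.95                      # probability a tag keeps its current value

z = np.zeros((T, K), dtype=int)
z[0] = rng.integers(0, 2, size=K)  # random initial tags
for t in range(1, T):
    stay = rng.random(K) < p_stay
    z[t] = np.where(stay, z[t - 1], 1 - z[t - 1])
```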

Let's put that in math

$$
\begin{align}
N_{tu} &\sim \mathrm{Poisson}(\Lambda_{tu}\cdot\theta) \\
\theta &\sim \mathrm{Gamma}(s, s) \\
\Lambda_{tu} &= \lambda_{0u} \prod_{k=1}^K (\lambda_{zuk})^{z_{tk}} \prod_{r=1}^R (\lambda_{xur})^{x_{tr}}
\end{align}
$$

firing rate = baseline * latents * externals * noise
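To make the generative model above concrete, here is a toy NumPy sketch that builds the multiplicative rate $\Lambda_{tu}$ from a baseline and per-tag gains and then draws Poisson counts. All parameter values are assumptions, and the external-regressor term is omitted for brevity.

```python
# Sketch of the generative model above: baseline rates times per-tag gain
# factors times a Gamma(s, s) overdispersion term, then Poisson counts.
# Parameter values (and the sharing structure of theta) are toy assumptions;
# external regressors x are omitted for brevity.
import numpy as np

rng = np.random.default_rng(1)
T, U, K = 500, 20, 3                    # time bins, units, binary tags
z = rng.integers(0, 2, size=(T, K))     # tag matrix (from the tag model above)

lam0 = rng.gamma(2.0, 2.0, size=U)              # baseline rate per unit
lam_z = rng.lognormal(0.0, 0.5, size=(U, K))    # multiplicative gain per tag
theta = rng.gamma(5.0, 1.0 / 5.0, size=(T, U))  # Gamma(s, s) noise, s = 5

Lambda = lam0 * np.prod(lam_z[None, :, :] ** z[:, None, :], axis=2)
N = rng.poisson(Lambda * theta)         # spike counts, shape (T, U)
```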

And in diagram form

### Model fitting

We have $p(N|Z, \Theta)$, with $Z$ the latent variables and $\Theta$ the model parameters.

We want
$$ p(Z, \Theta | N) \propto p(N|Z, \Theta)\, p(Z)\, p(\Theta) $$
but computing Bayes' rule exactly is too hard!
> Do you want the wrong answer to the right question or the right answer to the wrong question? I think you want the former.
>
> — David Blei
### Variational Bayesian (VB) Inference

- Replace the true posterior $p$ with an *approximate* posterior $q$
- Minimize the "distance" $KL(q \Vert p)$ between the approximate and true posteriors
- Equivalent to maximizing the evidence lower bound (ELBO) on $\log p(N)$ (see the sketch below)
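As a hedged illustration of what "maximizing the ELBO" means (not the talk's actual inference code), here is a Monte Carlo ELBO estimate for a toy Poisson model with a Gaussian prior on the log-rate and a Gaussian $q$.

```python
# Toy illustration of the ELBO: for Poisson counts N with a Gaussian prior
# on the log-rate z and a Gaussian q(z), estimate
#   ELBO = E_q[log p(N | z)] - KL(q(z) || p(z))  <=  log p(N).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
N = rng.poisson(5.0, size=50)              # toy spike counts

mu0, sd0 = 0.0, 2.0                        # prior on z = log firing rate
mu_q, sd_q = 1.6, 0.1                      # a candidate approximate posterior q

z = rng.normal(mu_q, sd_q, size=2000)      # samples from q
loglik = poisson.logpmf(N[None, :], np.exp(z)[:, None]).sum(axis=1)
kl = np.log(sd0 / sd_q) + (sd_q**2 + (mu_q - mu0)**2) / (2 * sd0**2) - 0.5

elbo = loglik.mean() - kl                  # maximize this over (mu_q, sd_q)
print(elbo)
```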

Experiment I: Synthetic data

Experiment II: Parietal Cortex

Roitman and Shadlen (J. Neuro., 2002)

Experiment III: Temporal Cortex

McMahon et al. (PNAS, 2014)

Face, monkey, and body part cells!

Experiment III: Temporal Cortex

Experiment III: Temporal Cortex

Viewpoint selectivity!

### What did we do?

- Given spike counts, *what features drive firing?*
- Give each stimulus frame multiple binary "tags"
- The model recovers features from even modest data sizes when the signal is strong
- The goal is to look for patterns that **suggest new experiments.**

Mind reading 101

Shariq Iqbal

Caroline Drucker

Jean-Francois Gariépy

Michael Platt

Penalty Shot

Complexity tax

  • each trial a different length
  • how to average, align?
  • need to "reduce" dynamics

Real trials

### Our approach

- Borrow from control theory and time series analysis
- Structured black-box models (the pieces make sense)
- Neural networks for flexible fitting

Modeling I

Observed positions at each time ($y_t$): $$ y_t = \begin{bmatrix} y_{goalie} & x_{puck} & y_{puck} \end{bmatrix}^\top $$
Control inputs ($u_t$) drive changes in observed positions: $$y_{t + 1} = y_t + v_{max} \sigma(u_t)$$
Goal: predict control inputs from trial history: $$u_t = F(y_{1:t})$$
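A small sketch of these dynamics, assuming $\sigma$ is a tanh-style squashing nonlinearity and using a toy $v_{max}$ and random stand-in controls (both assumptions):

```python
# Sketch of the observation dynamics above: positions accumulate
# velocity-limited moves driven by the control signal. sigma is assumed
# to be tanh-like and v_max is a toy constant, not the task's actual value.
import numpy as np

v_max = 0.05
T = 100

u = np.random.default_rng(3).normal(0.0, 1.0, size=(T, 3))  # stand-in controls
y = np.zeros((T + 1, 3))           # columns: y_goalie, x_puck, y_puck
for t in range(T):
    y[t + 1] = y[t] + v_max * np.tanh(u[t])
```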

Modeling II

Assumption: PID control
$$ u_t = u_{t-1} + L\,(g_t - y_{t-1}) + \epsilon_t $$
  • linear control gain: $L$
  • goal (set point): $g_t$
  • error signal: $e_t \equiv g_t - y_{t-1}$
  • control noise: $\epsilon_t$ (one step of this rule is sketched below)
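One step of the control rule above, as a sketch; the gain matrix $L$, noise scale, and example goal are illustrative assumptions:

```python
# One step of the control rule: nudge the control toward the current goal
# by a linear gain L, plus Gaussian noise. L, noise_sd, and the example
# inputs are toy assumptions.
import numpy as np

rng = np.random.default_rng(4)
L = np.diag([0.8, 0.8, 0.8])       # linear control gains (one per coordinate)
noise_sd = 0.05

def control_step(u_prev, g_t, y_prev):
    """u_t = u_{t-1} + L (g_t - y_{t-1}) + eps_t"""
    err = g_t - y_prev             # error signal e_t
    eps = rng.normal(0.0, noise_sd, size=3)
    return u_prev + L @ err + eps

u = control_step(np.zeros(3), g_t=np.array([0.2, 1.0, -0.1]), y_prev=np.zeros(3))
```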

Modeling III

Goal model:
$$ \log p(g) = -\beta E(g|s) - \log Z \\ E(g|s) = \sum_t \left[ \frac{1}{2} \Vert g_t - g_{t-1}\Vert^2 + U(g_t, s_t)\right] $$
How do we interpret this?
  • Goals minimize an "energy"
  • "Potential" $U$ captures player interaction

Modeling IV

  • $U$ is a problem
  • What if $U$ were just quadratic?
  • Model the density $e^{-U}$ as a *mixture* of normals (sketched below)
  • Use a Generative Adversarial Network
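A sketch of the mixture-of-normals idea: if $e^{-U}$ is a Gaussian mixture, then $U$ is minus the log of that mixture density. The weights, means, and covariances below are toy assumptions; in the full model these would be learned, not fixed by hand.

```python
# Potential U defined so that exp(-U(g)) is a two-component Gaussian mixture.
# All mixture parameters here are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal

weights = np.array([0.6, 0.4])
means = [np.array([0.5, 0.5, 0.0]), np.array([-0.5, 0.5, 0.0])]
covs = [0.05 * np.eye(3), 0.05 * np.eye(3)]

def U(g):
    """U(g) = -log of a Gaussian-mixture density evaluated at g."""
    dens = sum(w * multivariate_normal.pdf(g, m, c)
               for w, m, c in zip(weights, means, covs))
    return -np.log(dens)

print(U(np.array([0.4, 0.5, 0.0])))
```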

GANsplaining

Generator

Discriminator
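To make the generator/discriminator split concrete, here is a minimal generic GAN pair in PyTorch; the layer sizes, sample dimension, and losses are illustrative assumptions, not the architecture used in the talk.

```python
# Generic GAN sketch: a generator maps noise to candidate samples, and a
# discriminator scores how "real" they look. Purely illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

noise_dim, data_dim, hidden = 8, 2, 64

# Generator: noise -> candidate sample
G = nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU(),
                  nn.Linear(hidden, data_dim))

# Discriminator: sample -> logit for "this is real"
D = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU(),
                  nn.Linear(hidden, 1))

z = torch.randn(16, noise_dim)   # batch of noise draws
fake = G(z)                      # generator proposes samples
logits = D(fake)                 # discriminator scores them

# Discriminator wants to label these as fake (0); generator wants real (1).
loss_D_fake = F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
loss_G = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
```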

Our model

Model fitting

Variational Bayes autoencoder
  • Encoding model:
    • goals: cGAN
    • control: PID + Gaussian noise

  • Decoding model:

It fits!

Generated Trials

Generated trials

A sample trial

Inferred goals

Value function

### What did we do?

- Dynamic control tasks let us leverage motor behavior to study cognitive and social decisions.
- Structured black-box models allow us to carve behavior into interpretable pieces.
- We inferred a value function capable of explaining behavior in terms of goals.

Predicting final target

A potential training signal

A potential training signal


|              | DMPFC | DLPFC |
|--------------|-------|-------|
| Win > Loss   | 33%   | 25%   |
| Both effects | 15%   | 9%    |
### Conclusions

- Capturing social behavior in the lab is challenging
- But we can get traction by
  - working "outside-in": from sensory and motor to intermediate signals
  - leveraging rich models: matching our analyses to our questions

Same behavior, different mechanisms

Adams, Watson, Pearson, and Platt (2012)

Foraging, for instance

Pearson, Watson, and Platt (2014)

Probing the model

Initial goal distribution