Resolving Non-identifiability Mitigates Bias in Models of Neural Tuning and Functional Coupling

Kavli Affiliate: Kristofer Bouchard, Loren Frank, Christoph Kirst

| Authors: Pratik Sachdeva, Ji Hyun Bak, Jesse A Livezey, Loren Frank, Christoph Kirst, Sharmodeep Bhattacharyya and Kristofer Bouchard

| Summary:

In the brain, all neurons are driven by the activity of other neurons, some of which may be simultaneously recorded, but most of which are not. As such, models of neuronal activity need to account for both simultaneously recorded neurons and the influences of unmeasured neurons. This can be done by including model terms for observed external variables (e.g., tuning to stimuli) as well as terms for latent sources of variability. Determining the influence of groups of neurons on each other, relative to these other influences, is important for understanding brain function. The parameters of statistical models fit to data are commonly used to gain insight into the relative importance of those influences, so scientific interpretation of such models hinges upon unbiased parameter estimates. However, inference bias is rarely evaluated, and its sources are poorly understood. Through extensive numerical study and analytic calculation, we show that common inference procedures and models are typically biased. We demonstrate that accurate parameter selection before estimation resolves model non-identifiability and mitigates bias. In diverse neurophysiology data sets, we found that the contributions of coupling to other neurons are often overestimated, while tuning to exogenous variables is underestimated, by common methods. We explain the heterogeneity in observed biases across data sets in terms of data statistics. Finally, counter to common intuition, we found that model non-identifiability contributes to bias, not variance, making it a particularly insidious form of statistical error. Together, our results identify the causes of statistical bias in common models of neural data, provide inference procedures to mitigate that bias, and reveal and explain the impact of those biases in diverse neural data sets.
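
The "selection before estimation" idea described above can be illustrated with a small sketch. The example below is a minimal, hypothetical Python demonstration (using numpy and scikit-learn; it is not the authors' code or pipeline): a Lasso is used only to select which tuning and coupling coefficients are nonzero, and the selected coefficients are then re-estimated by unpenalized least squares, which removes the shrinkage bias of the penalized fit. The simulated data, linear model form, and regularization strength are all illustrative assumptions.

```python
# Minimal sketch of "selection before estimation" (illustrative assumptions,
# not the paper's actual models or settings).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_samples, n_tuning, n_coupling = 500, 5, 20

# Design matrix: exogenous "tuning" features plus activity of other neurons.
X = rng.normal(size=(n_samples, n_tuning + n_coupling))

# Sparse ground-truth weights: tuning to stimuli and coupling to a few neurons.
beta = np.zeros(n_tuning + n_coupling)
beta[:n_tuning] = rng.normal(scale=1.0, size=n_tuning)       # tuning terms
beta[n_tuning:n_tuning + 3] = rng.normal(scale=1.0, size=3)  # true coupling

# Observed activity: linear model plus unexplained (latent) variability.
y = X @ beta + rng.normal(scale=0.5, size=n_samples)

# 1) Naive approach: use the Lasso's shrunken coefficients directly.
lasso = Lasso(alpha=0.1).fit(X, y)

# 2) Selection before estimation: keep only the Lasso-selected support,
#    then refit those coefficients with unpenalized least squares.
support = np.flatnonzero(lasso.coef_)
refit = LinearRegression().fit(X[:, support], y)
beta_refit = np.zeros_like(beta)
beta_refit[support] = refit.coef_

# Compare estimation error against the ground truth on this realization.
print("mean |coef - truth|, naive lasso :", np.abs(lasso.coef_ - beta).mean())
print("mean |coef - truth|, refit (S->E):", np.abs(beta_refit - beta).mean())
```

On simulated data like this, the refit coefficients are typically closer to the ground truth than the directly penalized ones, because the penalty is used only to decide which terms belong in the model, not to estimate their magnitudes.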
