Inter-animal transforms as a guide to model-brain comparison

Abstract

Accurately measuring the similarity between different animals’ neural responses is a crucial step towards evaluating deep neural network (DNN) models of the brain. Under what transform class are animals likely to be similar to one another, and how much neural data must be collected to obtain an accurate similarity estimate? Using model variability as a proxy for inter-animal variability, we find that where similarity is measured has critical implications for the suitable transform class. Specifically, we observe high linear mappability between pre-ReLU activations, but find that post-ReLU activations require a simple non-linear mapping class that combines logistic regression with linear regression. With our approach, we estimate that measuring inter-animal variability requires collecting neural data for at least 500 stimuli and 300 neurons recorded from the same hypercolumn, providing a prescription for future experimental data that can adjudicate between models.
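For concreteness, below is a minimal sketch of one way such a combined logistic-plus-linear mapping could work for a single post-ReLU target unit: a logistic classifier predicts whether the unit is active at all, and a linear regressor, fit only on the active stimuli, predicts its magnitude. This is an illustrative assumption, not the paper’s implementation; all function names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def fit_relu_map(X, y):
    """Fit the sketched mapping for one post-ReLU target unit.
    X: (stimuli, source units); y: (stimuli,) target responses.
    Assumes y contains both active (> 0) and inactive (== 0) stimuli."""
    active = y > 0
    # Logistic regression predicts whether the target unit is active.
    gate = LogisticRegression(max_iter=1000).fit(X, active.astype(int))
    # Linear regression, fit only on active stimuli, predicts the magnitude.
    amp = LinearRegression().fit(X[active], y[active])
    return gate, amp

def predict_relu_map(gate, amp, X):
    """Combined prediction: zero where the gate says 'inactive',
    otherwise the rectified linear prediction."""
    on = gate.predict(X).astype(bool)
    y_hat = np.zeros(X.shape[0])
    y_hat[on] = np.maximum(amp.predict(X[on]), 0.0)
    return y_hat

# Toy usage on synthetic data, sized like the abstract's estimate
# (500 stimuli, 300 source neurons):
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 300))
y = np.maximum(X @ rng.standard_normal(300), 0.0)  # synthetic post-ReLU target
y_hat = predict_relu_map(*fit_relu_map(X, y), X)
```

In this sketch the classifier decides whether a unit fires and the regressor decides how much, which is what keeps the mapping class only mildly non-linear relative to plain linear regression.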

Publication
CCN 2022, Cosyne 2023
Javier Sagastuy-Brena