Cognition Workshop 11/29/23: Marlene Cohen

Title: Reformatting neural representations to facilitate sensation, cognition, and action

Marlene Cohen, Professor of Neurobiology, University of Chicago

Abstract: Visually guided tasks have three hallmark components: perception (sensing the external world), cognition (making an inference or decision in service of a goal while ignoring irrelevant information), and action (planning a behavior based on that inference). Traditionally, these computations have been assumed to be performed by separate neural populations that communicate flexibly to achieve cognitive flexibility. But evidence that interactions between brain areas are flexible enough to enable the wide range of visually guided behaviors performed by humans and animals has been limited. Here, we consider a non-modular possibility: signals related to perception, cognition, and action are integrated in a single neural population in visual cortex, and flexibility is instantiated via modest modulations of neural responses that rotate population activity to drive appropriate inferences and actions.

To test this hypothesis, we trained monkeys to make veridical (continuous) estimates of the 3D curvature of objects that varied in task-relevant and task-irrelevant parameters while we recorded simultaneously from populations of neurons in primary visual cortex (V1) and a mid-level visual area (V4). We discovered that populations of V1 and V4 neurons have two ingredients necessary to instantiate cognitive flexibility: they 1) robustly encode both task-relevant and task-irrelevant visual information and 2) contain a stable representation of the subject's curvature inference amid irrelevant stimulus variation. The same V4 population has a third ingredient: it encodes which of several possible actions the subject will use to report their inference by rotating, or reformatting, the representation of the stimulus to align with a putative readout axis. This reformatting can be accomplished via modest gain changes, such as those associated with surround modulation, attention, or task switching.
Finally, we find that premotor areas contain the same ingredients, but that their representations are reformatted to enable a different causal impact on behavior. Our results suggest that visual cortex contains all the ingredients needed to mediate cognitively flexible, visually guided behavior, supporting a new, non-modular mechanism for cognitive flexibility.
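The idea that modest gain changes can rotate a population representation toward a readout axis can be illustrated with a toy simulation. The sketch below is purely conceptual and is not the authors' analysis code: the population size, gain magnitude, and readout axis are all hypothetical, chosen only to show that multiplying each neuron's response by a modest gain (here, at most ±30%) changes the direction of the population activity vector so that it aligns more closely with a fixed readout axis.

```python
import numpy as np

# Hypothetical illustration (not the speaker's analysis code): elementwise
# gain modulation of a neural population vector rotates it in state space,
# increasing its alignment with a fixed "readout" axis.

rng = np.random.default_rng(0)

n_neurons = 50
r = rng.normal(1.0, 0.2, size=n_neurons)        # baseline population response
readout = rng.normal(0.0, 1.0, size=n_neurons)  # putative readout axis
readout /= np.linalg.norm(readout)              # make it unit length

# Modest gains (loosely analogous to attention- or task-related modulation):
# boost neurons weighted positively on the readout axis, suppress the rest.
gain = 1.0 + 0.3 * np.sign(readout)
r_mod = gain * r                                # "reformatted" response

def angle_to_readout(v):
    """Angle in degrees between a population vector and the readout axis."""
    cos = np.dot(v, readout) / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(angle_to_readout(r))      # before gain modulation
print(angle_to_readout(r_mod))  # after: rotated toward the readout axis
```

The point of the toy example is that no rewiring is required: the same neurons, with only small multiplicative changes, produce a population vector whose projection onto the readout axis is larger, which is the sense in which "reformatting" via gain changes can route the same sensory information toward different inferences or actions.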
