Cognition Workshop 05/15/24: John Veillette

Title: An integrative framework for evaluating theories of motor awareness

John Veillette, doctoral student in the Nusbaum Lab, Department of Psychology, University of Chicago

Abstract: Our bodies are the interface through which we execute all behaviors, and the conscious experience of volitionally controlling our bodies constitutes one of the most basic forms of self-awareness. While an empirically grounded neurocognitive account of this sense of agency would have broad medical (e.g., for psychiatric and motor disorders), legal (e.g., in the interpretation of mens rea), and ethical implications (e.g., autonomy in emerging interface technologies and prosthetics), theories of motor awareness have proven inherently difficult to test. In addition to inheriting all the falsifiability issues recently discussed in the broader consciousness literature, theories of motor awareness depend on additional auxiliary assumptions from competing accounts of motor control, which may themselves have ambiguous neuroanatomical specifications. We propose a framework that integrates recent methodological advances in physical and biomechanical simulation, robotics, and human-computer interaction to jointly elicit falsifiable predictions from a broad class of (pairs of) models of motor awareness and control. We present a proof-of-concept MRI experiment in which we use this general approach to test a previously intractable hypothesis, while also illustrating the remaining methodological obstacles to applying the framework in practice. This discussion motivates our ongoing work to eliminate such barriers through the development of novel statistical methods, in service of our ultimate aim: automatically benchmarking hundreds of models of motor control and awareness, each specified with biomechanical and neuroanatomical precision at the level of individual subjects.

Cognition Workshop 05/01/24: Henry Jones

Title: Storage in working memory recruits a modality-independent pointer system

Henry Jones, doctoral student in the Awh/Vogel Lab, Department of Psychology, University of Chicago

Abstract: Prominent theories of working memory (WM) have proposed that distinct systems may support the storage of different types of information. For example, distinct dorsal and ventral stream brain regions are activated during the storage of spatial and object information in visual WM. Although feature-specific activity is likely critical to WM storage, we hypothesize that a content-independent indexing process may also play a role. Specifically, spatiotemporal pointers may be required for the sustained indexing and tracking of items in space and time, even as their features change within an unfolding event. Past evidence for such a content-independent pointer operation includes the finding that signals tracking the number of individuated representations in WM (load) generalize across colors, orientations, and conjunctions of those features. However, a common allocation of spatial attention, or overlapping orientation and color codes in early visual cortex, could mimic a generalizable signal. Here, I will present a stronger demonstration of content-independence. In Experiment 1, I dissociate WM load from a common confound, spatial attention: by manipulating the two independently, we find signals that are selectively sensitive to WM load. In Experiment 2, I replicate the previous finding of load-signal generalization and then provide a stronger demonstration of content-independence by using a pair of features that are as cortically disparate as possible, color and motion coherence, applying representational similarity analysis (RSA) to simultaneously track feature-specific and content-independent load signals. Extending these observations, in the final experiment we apply similar analytic approaches to demonstrate a common load signature across the auditory and visual sensory modalities, while controlling for modality-specific neural activity and the spatial extent of covert attention. Our findings suggest that content-independent pointers may play a fundamental role in the storage of information in working memory, and may contribute to its overall limited capacity.
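For readers unfamiliar with RSA, a minimal sketch of its logic follows. This is not the lab's pipeline; the pattern matrix, channel count, and load conditions are hypothetical stand-ins.

```python
# Minimal RSA sketch: correlate a neural RDM with a model RDM for WM load.
# All data here are simulated stand-ins, not the study's recordings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns = rng.normal(size=(4, 64))   # hypothetical: 4 load conditions x 64 channels

# Neural RDM: pairwise correlation distance between condition patterns
neural_rdm = pdist(patterns, metric="correlation")

# Model RDM for a load code: dissimilarity grows with the difference in load
loads = np.array([[1.0], [2.0], [3.0], [4.0]])
model_rdm = pdist(loads, metric="euclidean")

# Rank-correlate the two RDMs; a reliably positive rho indicates a load signal
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"load-model fit: rho={rho:.2f} (p={p:.3f})")
```

The same model RDM can be fit to patterns from different stimulus sets (e.g., color versus motion) to ask whether the load code generalizes across content.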

Cognition Workshop 04/24/24: Cambria Revsine

Title: The Memorability of Voices Is Consistent and Predictable

Cambria Revsine, doctoral student in the Bainbridge Lab, Department of Psychology, University of Chicago

Abstract: Memorability, or the likelihood that an item is remembered, is an intrinsic stimulus property that is highly consistent across viewers. In other words, people tend to remember and forget the same faces, scenes, objects, and more. However, memorability research has until now been limited to the visual domain. In this talk, I will present the first exploration of auditory memorability, in which we investigated whether this consistency in what individuals remember extends to speakers’ voices, and if so, what makes a voice memorable. Across three experiments, over 3,000 online participants heard a sequence of different speakers from a large-scale voice database saying the same sentences. Participants indicated whenever they heard a repeated voice clip (Exps. 1 and 2) or a repeated speaker saying a different sentence (Exp. 3). We found that participants were significantly consistent in their memory performance for voice clips, and for speakers across different utterances. Next, we tested regression models of voice memorability incorporating both low-level properties (e.g., pitch) and high-level properties measured in a separate experiment (e.g., perceived confidence). The final models, which contained primarily low-level predictors, were significantly predictive and held up under out-of-sample cross-validation. These results provide the first evidence that people are similar in their memory for speakers’ voices, regardless of what the speaker is saying, and that this memory performance can be reliably predicted by a mix of voice features.
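As an illustration of this kind of analysis, the sketch below fits a cross-validated ridge regression predicting a memorability score from voice features. The features, sample size, and effect sizes are invented for the example; this is not the study's code.

```python
# Hedged sketch of cross-validated regression on voice features.
# Predictors and data are hypothetical, not the study's actual variables.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_voices = 500
# Hypothetical predictors: low-level (pitch, duration) and high-level (rated confidence)
X = rng.normal(size=(n_voices, 3))
memorability = 0.5 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n_voices)

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))
r2 = cross_val_score(model, X, memorability, cv=10, scoring="r2")
print(f"out-of-sample R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```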

Cognition Workshop 04/03/24: Pınar Toptas

Title: Long-Range Interactions between Hippocampus and Prefrontal Cortex via Beta Oscillations in Olfactory Rule-Reversal Learning

Pınar Toptas, doctoral student in the Yu Lab, Department of Psychology, University of Chicago

Abstract: Everyday life requires animals to learn the associations between sensory cues and their outcomes so that they can respond appropriately given a context. For instance, while the fresh scent of roses in a flower bouquet triggers the response of smelling, the same scent in a rose-flavored dessert triggers the response of eating. How do brains learn a new behavioral response to a sensory cue (e.g., an odor) that already has an associated response leading to a desired outcome? The literature suggests that long-range communication among sensory cortices, the prefrontal cortex, and the hippocampus is essential for complex olfactory associative memories and the decisions that depend on them. Local field potentials in these regions show high coherence in beta-range (12-30 Hz) oscillations while animals perform a familiar olfactory associative memory task. We aim to expand this knowledge by investigating the role of beta coherence in learning novel sensory associations while still remembering familiar ones. To this end, we recorded neural activity from the medial prefrontal cortex and dorsal hippocampus of six rats as they practiced novel and familiar olfactory associative memory tasks. Behavioral analyses identified distinct decision strategies, which may point toward different states of task representation. Ongoing analyses aim to characterize beta coherence across these regions of interest for the different clusters of task representations.
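For concreteness, beta-band coherence between two channels can be computed roughly as below; the sampling rate, signals, and region labels are simulated stand-ins rather than the lab's data or code.

```python
# Illustrative beta-band (12-30 Hz) coherence between two simulated LFP channels.
import numpy as np
from scipy.signal import coherence

fs = 1000                                 # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Simulated LFPs sharing a 20 Hz component (mPFC and dorsal hippocampus stand-ins)
shared = np.sin(2 * np.pi * 20 * t)
lfp_mpfc = shared + rng.normal(scale=1.0, size=t.size)
lfp_dhpc = shared + rng.normal(scale=1.0, size=t.size)

f, cxy = coherence(lfp_mpfc, lfp_dhpc, fs=fs, nperseg=1024)
beta = (f >= 12) & (f <= 30)
print(f"mean beta coherence: {cxy[beta].mean():.2f}")
```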

Cognition Workshop 03/20/24: Euan Prentis

Title: Segmenting experience into generalizable predictive knowledge

Euan Prentis, doctoral student in the Bakkour Lab, Department of Psychology, University of Chicago

Abstract: Human experience unfolds gradually over time. To make effective decisions, it is therefore necessary to predict outcomes that may occur at distant points in the future. By learning which events generally follow from one another – a process termed predictive learning – we can infer which actions will bring us to the best futures and effectively arbitrate between choice options. A challenge of decision making in the real world is that events are complex, composed of numerous changing features, so predictive learning must generalize across events as features change. The present research probes how generalizable predictive representations are learned. Using a combination of computational modeling and eye tracking, we demonstrate that successful generalization may be achieved by learning at the level of features (feature-based learning) rather than whole events (conjunctive learning). In particular, an inductive bias that segments learning into semantic categories may promote more accurate feature-based learning and better choices.
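The contrast between conjunctive and feature-based learning can be made concrete with a toy delta-rule sketch; this is a schematic illustration, not the study's actual computational model.

```python
# Toy contrast between feature-based and conjunctive predictive learners.
from collections import defaultdict

ALPHA = 0.5
w_feature = defaultdict(float)        # feature-based learner: one weight per feature
v_conjunction = defaultdict(float)    # conjunctive learner: one value per whole event

def train(event, outcome):
    """event is a tuple of features, e.g., ('red', 'circle')."""
    prediction = sum(w_feature[f] for f in event)
    for f in event:                   # delta rule, credit shared across features
        w_feature[f] += ALPHA * (outcome - prediction) / len(event)
    v_conjunction[event] += ALPHA * (outcome - v_conjunction[event])

for _ in range(50):                   # 'red' predicts the outcome; shape is irrelevant
    train(("red", "circle"), 1.0)
    train(("blue", "square"), 0.0)

novel = ("red", "square")             # novel conjunction of familiar features
print("feature-based prediction:", round(sum(w_feature[f] for f in novel), 2))  # ~0.5
print("conjunctive prediction:  ", v_conjunction[novel])                        # 0.0
```

The feature-based learner transfers value to the never-seen red square; the conjunctive learner, having stored only whole events, predicts nothing for it.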

Cognition Workshop 03/06/24: Jin Ke

Title: Tracking human everyday affective experiences from brain dynamics

Jin Ke, research specialist in the Rosenberg Lab, Department of Psychology, University of Chicago

Abstract: From the excitement of reuniting with a long-lost friend to the anger of being treated unfairly, our daily lives are colored by a diverse range of affective experiences. How does the human brain give rise to the richness of these experiences in naturalistic contexts? In this talk, I will present two fMRI projects that address this question using data collected as people watch movies and rest quietly. I will first demonstrate that dynamic functional connectivity tracks moment-to-moment fluctuations in affective experience as participants watch different movies. Results reveal that these brain network dynamics encode generalizable representations of emotional arousal, but not valence. I will next show evidence that functional connectivity observed in the absence of any explicit task encodes aspects of ongoing affective and cognitive states. In particular, functional connectivity patterns measured during rest predict the dimensions, topics, and linguistic sentiment of spontaneous thoughts during mind-wandering. Taken together, these results suggest that we can track everyday naturalistic affective experiences from brain dynamics using a combination of behavioral experiments, neuroimaging, and language modeling.
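Schematically, a dynamic functional connectivity analysis of this kind might resemble the sliding-window sketch below; the BOLD matrix, window parameters, and arousal ratings are simulated placeholders rather than the study's data or methods.

```python
# Sliding-window functional connectivity related to a per-window affect rating.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
bold = rng.normal(size=(600, 10))     # hypothetical: 600 TRs x 10 regions
win, step = 30, 10
starts = range(0, bold.shape[0] - win + 1, step)

# One FC estimate per window: upper triangle of the regional correlation matrix
iu = np.triu_indices(bold.shape[1], k=1)
dfc = np.array([np.corrcoef(bold[s:s + win].T)[iu] for s in starts])

arousal = rng.normal(size=dfc.shape[0])   # stand-in per-window arousal ratings
# Rank-correlate each edge's time course with arousal across windows
edge_rho = np.array([spearmanr(dfc[:, e], arousal)[0] for e in range(dfc.shape[1])])
print(f"strongest edge-arousal correlation: |rho|={np.abs(edge_rho).max():.2f}")
```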

Cognition Workshop 02/07/24: Emma Megla

Title: Drawings reveal changes in object memory, but not spatial memory, across time

Emma Megla, doctoral student in the Bainbridge Lab, Department of Psychology, University of Chicago

Abstract: Time has a powerful sway over our memories. We better remember an experience that lingered than one we barely glimpsed, and better remember what occurred minutes ago than days ago. Yet despite more than a century of research mapping the effect of time on memory, including Ebbinghaus’ famous ‘forgetting curve’, little work has pinpointed the changes in memory content that drive memory performance across time. What features of a memory (e.g., spatial accuracy) actually change while an experience is encoded and retained? Here, we leveraged 869 drawings of scenes made from memory after variable encoding durations (Experiment 1) and retention intervals (Experiment 2). Through crowdsourced scoring of these drawings by thousands of participants (scoring the objects recalled, the presence of false objects, and spatial accuracy), we quantified how the content of memory changes across time at the feature level. We find that whereas the number of objects recalled from a scene, including the number of false objects, is highly dependent on time, spatial memory is largely precise after just 100 msec of encoding or after a week of retaining the memory. We also find that the location and meaning of an object predict when it will be recalled across encoding, whereas an object's saliency predicts when it will be forgotten.

Cognition Workshop 01/24/24: Christine Coughlin

Title: Development of memory mechanisms supports an adaptive extension of knowledge

Christine Coughlin, Assistant Professor, Department of Psychology, University of Illinois Chicago

Abstract: Memories for past events form the basis of our life story. However, they also serve as building blocks through which we acquire and extend knowledge. By combining information across different experiences, we are often able to extend knowledge beyond what we directly observe. My talk will present a sample of studies examining when and how children develop the ability to use memories as building blocks for knowledge extension. Across several experiments, I will show how memories are used to guide future event imagination and reasoning at different ages. Using a combination of computational modeling and functional MRI, I will then link the development of knowledge extension to the maturation of the hippocampus and frontoparietal cortex. The overarching goal of this work is to uncover the neurocognitive mechanisms that support children’s developing ability to use memories for constructive, adaptive purposes.

Cognition Workshop 01/10/24: Ziwei Zhang

Title: Brain network dynamics predict surprise dynamics

Ziwei Zhang, doctoral student in the Rosenberg Lab, Department of Psychology, University of Chicago

Abstract: We experience surprise when reality conflicts with our expectations. When we encounter such expectation violations in psychological tasks and daily life, are we experiencing completely different forms of surprise? Or is surprise a fundamental psychological process with shared neural bases across contexts? To address this question, I will introduce a new brain network model, the surprise edge-fluctuation-based predictive model (EFPM), which predicts surprise in an adaptive learning task from functional magnetic resonance imaging (fMRI) data. I will demonstrate that the same model generalizes to predict surprise from fMRI data collected as a separate group of individuals watched suspenseful basketball games. Furthermore, I will show evidence that the surprise EFPM uniquely predicts surprise, capturing expectation violations better than models built from other brain networks, fMRI measures, and behavioral metrics. Together, these results suggest that shared neurocognitive processes underlie surprise across contexts and that distinct experiences can be translated into the common space of brain dynamics.
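In published edge-time-series work, "edge fluctuations" refer to the moment-to-moment co-fluctuation of each pair of regions, which is a natural building block for such a model. A minimal sketch of that computation follows (on simulated data; the EFPM's actual feature-selection and prediction stages are not shown).

```python
# Edge time series: per-TR co-fluctuation of each region pair (simulated data).
import numpy as np
from scipy.stats import zscore

rng = np.random.default_rng(4)
bold = rng.normal(size=(300, 8))      # hypothetical: 300 TRs x 8 regions

z = zscore(bold, axis=0)
iu = np.triu_indices(bold.shape[1], k=1)
# Product of z-scored signals for each region pair, at every time point
edge_ts = z[:, iu[0]] * z[:, iu[1]]   # shape: (TRs, n_edges)

# Moment-to-moment co-fluctuation amplitude (root sum of squares across edges)
rss = np.sqrt((edge_ts ** 2).sum(axis=1))
print(edge_ts.shape, rss[:5].round(2))
```

Averaging each edge's time series over time recovers the corresponding Pearson correlation, so this decomposition unwraps static functional connectivity into frame-by-frame fluctuations.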

Cognition Workshop 11/29/23: Marlene Cohen

Title: Reformatting neural representations to facilitate sensation, cognition, and action

Marlene Cohen, Professor of Neurobiology, University of Chicago

Abstract: Visually guided tasks have three hallmark components: perception (sensing the external world), cognition (making an inference or decision in service of a goal while ignoring irrelevant information), and action (planning a behavior based on that inference). Traditionally, these computations have been assumed to be performed by separate neural populations that communicate flexibly to achieve cognitive flexibility. But evidence that interactions between brain areas are flexible enough to enable the wide range of visually guided behaviors performed by humans and animals has been limited. Here, we consider a non-modular possibility: signals related to perception, cognition, and action are integrated within a single neural population in visual cortex, and flexibility is instantiated via modest modulations of neural responses that rotate population activity to drive appropriate inferences and actions. To test this hypothesis, we trained monkeys to make veridical (continuous) estimates of the 3D curvature of objects that varied in task-relevant and irrelevant parameters while we recorded simultaneously from populations of neurons in primary visual cortex (V1) and a mid-level visual area (V4). We discovered that populations of V1 and V4 neurons have two ingredients necessary to instantiate cognitive flexibility: they 1) robustly encode both task-relevant and irrelevant visual information and 2) contain a stable representation of the subject’s curvature inference amid irrelevant stimulus variation. The same V4 population has a third ingredient: it encodes which of several possible actions the subject will use to report their inference, by rotating or reformatting the representation of the stimulus to align with a putative readout axis. This reformatting can be accomplished via modest gain changes, such as those associated with surround modulation, attention, or task switching. Finally, we find that premotor areas contain the same ingredients, but that their representations are reformatted to enable a different causal impact on behavior. Together, our results suggest that visual cortex contains all the ingredients needed to mediate cognitively flexible, visually guided behavior, supporting a new, non-modular mechanism for cognitive flexibility.
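The "reformatting via gain changes" idea admits a simple numerical illustration: multiplying each neuron's response by a modest gain can rotate the population's stimulus-coding axis toward a fixed readout axis. The sketch below is a toy check of that geometry, with all vectors randomly generated rather than derived from the recordings.

```python
# Toy check: a modest per-neuron gain rotates the encoding axis toward a readout axis.
import numpy as np

rng = np.random.default_rng(5)
n = 200                               # neurons (hypothetical)
w = rng.normal(size=n)                # stimulus-encoding axis (made up)
r = rng.normal(size=n)                # downstream readout axis (made up)
if w @ r < 0:                         # flip sign so the initial angle is acute
    r = -r

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Modest gain change: boost neurons whose encoding and readout weights agree
# in sign, attenuate the rest (gains stay within [0.8, 1.2])
gain = 1 + 0.2 * np.sign(w * r)
w_reformatted = gain * w              # per-neuron gain scales the encoding weights

print(f"alignment before: {cosine(w, r):.3f}")
print(f"alignment after:  {cosine(w_reformatted, r):.3f}")  # larger
```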