Cognition Workshop 03/20/24: Euan Prentis

Title: Segmenting experience into generalizable predictive knowledge

Euan Prentis, doctoral student in the Bakkour Lab, Department of Psychology, University of Chicago

Abstract: Human experience unfolds gradually over time. To make effective decisions, it is therefore necessary to predict outcomes that may occur at distant points in the future. By learning which events generally follow from one another – a process termed predictive learning – humans can infer which actions will bring them to the best futures, and effectively arbitrate between choice options. A challenge of decision making in the real world is that events are complex, composed of numerous changing features. Predictive learning must be generalized across events as features change. The present research probes how generalizable predictive representations are learned. Using a combination of computational modeling and eye tracking, we demonstrate that successful generalization may be achieved by learning at the level of features (feature-based learning) rather than events (conjunctive learning). In particular, an inductive bias that segments learning into semantic categories may promote more accurate feature-based learning, and better choice.

Cognition Workshop 03/06/24: Jin Ke

Title: Tracking human everyday affective experiences from brain dynamics

Jin Ke, research specialist in the Rosenberg Lab, Department of Psychology, University of Chicago

Abstract: From the excitement of reuniting with a long-lost friend to the anger of being treated unfairly, our daily lives are colored by a diverse range of affective experiences. How does the human brain give rise to the richness of these experiences in naturalistic contexts? In this talk, I will present two fMRI projects that address this question using data collected as people watch movies and rest quietly. I will first demonstrate that dynamic functional connectivity tracks moment-to-moment fluctuations in affective experience as participants watch different movies. Results reveal that these brain network dynamics encode generalizable representations of emotional arousal, but not valence. I will next show evidence that functional connectivity observed in the absence of any explicit task encodes aspects of ongoing affective and cognitive states. In particular, functional connectivity patterns measured during rest predict the dimensions, topics, and linguistic sentiment of spontaneous thoughts during mind-wandering. Taken together, these results suggest that we can track everyday naturalistic affective experiences from brain dynamics using a combination of behavioral experiments, neuroimaging, and language modeling.

Cognition Workshop 02/07/24: Emma Megla

Title: Drawings reveal changes in object memory, but not spatial memory, across time

Emma Megla, doctoral student in the Bainbridge Lab, Department of Psychology, University of Chicago

Abstract: Time has a powerful sway over our memories. We better remember an experience that lingered than one we barely glimpsed, and better remember what occurred minutes ago than days ago. Despite centuries of mapping the relationship between time and memory, including Ebbinghaus’ famous ‘forgetting curve’, little research has pinpointed the changes in memory content that drive memory performance across time. What features (e.g., spatial accuracy) of a memory are actually changing while an experience is encoded and retained in memory? Here, we leveraged 869 drawings of scenes made from memory after variable encoding (Experiment 1) and retention of that memory (Experiment 2). Through crowdsourced scoring of these drawings by thousands of participants—scoring the objects recalled, the presence of false objects, and spatial accuracy—we were able to quantify how the content of memory changes across time on the feature level. We find that whereas the number of objects recalled from a scene, including the number of false objects, is highly dependent on time, spatial memory is largely precise after just 100 msec of encoding or after one week of retaining the memory. We also find that the location and meaning of an object predict when it will be recalled across encoding, whereas an object’s saliency predicts when it will be forgotten.

Cognition Workshop 01/24/24: Christine Coughlin

Title: Development of memory mechanisms supports an adaptive extension of knowledge

Christine Coughlin, Assistant Professor, Department of Psychology, University of Illinois Chicago

Abstract: Memories for past events form the basis of our life story. However, they also serve as building blocks through which we acquire and extend knowledge. By combining information across different experiences, we are often able to extend knowledge beyond what we directly observe. My talk will present a sample of studies examining when and how children develop the ability to use memories as building blocks for forms of knowledge extension. Across several experiments, I will show how memories are used to guide future event imagination and reasoning at different ages. Using a combination of computational modeling and functional MRI, I will then link the development of knowledge extension to the maturation of the hippocampus and frontoparietal cortex. The overarching goal of this work is to uncover the neurocognitive mechanisms that support children’s developing ability to use memories for constructive, adaptive purposes.

Cognition Workshop 01/10/24: Ziwei Zhang

Title: Brain network dynamics predict surprise dynamics

Ziwei Zhang, doctoral student in the Rosenberg Lab, Department of Psychology, University of Chicago

Abstract: We experience surprise when reality conflicts with our expectations. When we encounter such expectation violations in psychological tasks and daily life, are we experiencing completely different forms of surprise? Or is surprise a fundamental psychological process with shared neural bases across contexts? To address this question, I will introduce a new brain network model, the surprise edge-fluctuation-based predictive model (EFPM). This model predicts surprise in an adaptive learning task from functional magnetic resonance imaging (fMRI) data. I will demonstrate that the same brain network model generalizes to predict surprise from fMRI data as a separate group of individuals watched suspenseful basketball games. Furthermore, I will show evidence that the surprise EFPM uniquely predicts surprise, capturing expectation violations better than models built from other brain networks, fMRI measures, and behavioral metrics. Together these results suggest that shared neurocognitive processes underlie surprise across contexts and that distinct experiences can be translated into the common space of brain dynamics.

Cognition Workshop 11/29/23: Marlene Cohen

Title: Reformatting neural representations to facilitate sensation, cognition, and action

Marlene Cohen, Professor of Neurobiology, University of Chicago

Abstract: Visually guided tasks have three hallmark components: perception (sensing the external world), cognition (making an inference or decision in service of a goal while ignoring irrelevant information), and action (planning a behavior based on that inference). Traditionally, these computations have been assumed to be performed by separate neural populations that communicate flexibly to achieve cognitive flexibility. But evidence that interactions between brain areas are flexible enough to enable the wide range of visually guided behaviors performed by humans and animals has been limited. Here, we consider a non-modular possibility: signals related to perception, cognition, and action are integrated in a single neural population in visual cortex, and flexibility is instantiated via modest modulations of neural responses that rotate population activity to drive appropriate inferences and actions. To test this hypothesis, we trained monkeys to make veridical (continuous) estimates of the 3D curvature of objects that varied in task-relevant and irrelevant parameters while we recorded simultaneously from populations of neurons in primary visual cortex (V1) and mid-level visual area V4. We discovered that populations of V1 and V4 neurons have two ingredients necessary to instantiate cognitive flexibility: they 1) robustly encode both task-relevant and irrelevant visual information and 2) contain a stable representation of the subject’s curvature inference amid irrelevant stimulus variation. The same V4 population has a third ingredient: it encodes which of several possible actions the subject will use to report their inference by rotating or reformatting the representation of the stimulus to align with a putative readout axis. This reformatting can be accomplished via modest gain changes, such as those associated with surround modulation, attention, or task switching. Finally, we find that premotor areas contain the same ingredients, but that these representations are reformatted to enable a different causal impact on behavior. Our results suggest that visual cortex contains all the ingredients to mediate cognitively flexible visually guided behavior, and they support a new, non-modular mechanism for cognitive flexibility.

Cognition Workshop 11/08/23: Qiongwen (Jovie) Cao

Title: Moral conviction and metacognition shape neural response during sociopolitical decision-making

Qiongwen (Jovie) Cao, doctoral student in the Decety Lab, Department of Psychology, University of Chicago

Abstract: Moral conviction has significant social and political implications, but how it is incorporated into the valuation and decision-making process remains underexplored. In the current project, we examined the brain responses associated with support for sociopolitical issues that vary on moral conviction during decision-making. Participants (N = 45) underwent fMRI scanning while, on each trial, choosing between two groups of hypothetical political protesters. Results demonstrated that stronger moral conviction was related to faster and more consistent decisions. The aINS, ACC, and lPFC were among the regions whose hemodynamic responses tracked the moral conviction level of a decision. This is in line with the conceptual framework that moral conviction incorporates cognitive and affective dimensions. Metacognitive sensitivity, measured in a separate perceptual task, correlated with parametric effects of moral conviction on hemodynamic response in that these effects were more pronounced among individuals with poorer metacognitive sensitivity. Mean support ratings of the protesters were positively associated with brain activity in regions including the vmPFC and amygdala, suggesting that brain regions in the valuation circuit are modulated by support. These findings provide novel evidence regarding the neural basis of support and moral conviction during sociopolitical decisions.

Cognition Workshop 11/01/23: William X. Q. Ngiam

Title: Towards an integrative framework on visual attention and working memory: a few pointers

William X. Q. Ngiam, postdoc in the Awh/Vogel lab, University of Chicago

Abstract: What is visual working memory (VWM)? Our growing field has many varying definitions, measures, and models – so much so that I think researchers may be talking past each other. In this talk, I will share how I am trying to get researchers to rethink the poorly defined theoretical framework in our field. I start with my own work examining a long-standing debate – whether the units of representation in VWM are objects or features. With this, I illustrate that this binarized debate may not actually be grounded in fundamental theoretical differences. Then, to help rethink these debates, I offer a ‘theory map’ – a common space to discuss phenomena, compare models, and integrate theories. I will end with some recent work from our lab pursuing an exciting novel mechanism – content-independent pointers – that has the potential to link cognitive and neural models of working memory. My hope is that this talk will inspire steps in our department towards slower empirical research with a focus on theory development.

Cognition Workshop 10/11/23: Anna Corriveau

Title: Consequences of sustained attention’s floodlight for recognition memory

Anna Corriveau, doctoral student in the Rosenberg Lab, Department of Psychology, University of Chicago

Abstract: Attention is often described as a spotlight in that it selectively enhances processing of relevant or salient information. However, it is not clear whether the spotlight metaphor applies to sustained attention, which fluctuates over time. Specifically, when presented with both task-relevant and task-irrelevant information, do moments of high attention act as a spotlight, selectively increasing processing for task-relevant stimuli? Or, rather, does a high attentional state act more like a floodlight, increasing processing for both task-relevant and task-irrelevant stimuli? To investigate this, we tested how changes in sustained attention state impact recognition memory for stimuli as a function of task-relevance. Across multiple studies, we find that high sustained attention predicts better memory for both task-relevant and task-irrelevant stimuli, lending support to a floodlight model of sustained attention. This work further characterizes the relationship between sustained attention and memory and highlights a key difference between sustained attention and other aspects of attention.

Cognition Workshop 10/04/23: Riley Tucker

Title: ‘Eyes’ on the Street: How Computer Vision and Cognitive Psychology Can Help Us Get the Gist of Neighborhood Environmental Design and Explain Crime

Riley Tucker, postdoc in the Berman Lab and the Mansueto Institute of Urban Innovation, University of Chicago

Abstract: For half a century, scholars of crime have theorized that the spatial attributes and layouts of places shape how likely people are to take action against perceived crime threats. Specifically, people are expected to show heightened territoriality in areas with visually open, unobstructed spaces that are aesthetically pleasing. However, measures of these constructs have proved elusive, so studies testing this idea have generally been limited to data measuring the presence of objects, such as vegetation or trash, that impede sight-lines or degrade aesthetic value. This represents a substantial limitation, as research in cognitive psychology suggests that people make behavioral decisions by rapidly assessing the ‘gist’ of entire scenes rather than scanning specific objects. By training an image recognition AI to rate several forms of scene gist for 168k georeferenced Google Streetview images, this project introduces a strategy for measuring aesthetic value and natural surveillance quality across Chicago neighborhoods. Using data from Chicago’s 311 system to measure how likely neighborhood residents are to report man-made incivilities, this study explores the relationship between neighborhood-level visual characteristics, territoriality, and crime.