Cognition Workshop 01/15/25: Tesnim Arar

Title: Aging and Metacognitive Updating: Can We Improve Self-Awareness?

Tesnim Arar, doctoral student in the Gallo Lab, Department of Psychology, University of Chicago

Abstract: Metacognition, or knowledge of one’s cognitive processes and abilities, is a critical component of self-regulation. Given this link between metacognition and behavior, researchers have argued that enhancing the accuracy of our metacognitive beliefs is an antecedent to improving behavior, and this may be especially true for older adults. Yet despite the potential value of harnessing metacognitive beliefs to mitigate the negative effects of aging on behavior, one vital question remains unanswered: How does aging affect metacognition? One hypothesis is that aging spares metacognitive monitoring (the ability to evaluate one’s cognitive performance on laboratory or everyday tasks in real time) but impairs older adults’ capacity to update self-representations in memory. Another hypothesis is that older adults’ inaccurate metacognitive beliefs, when found, arise not from impaired memory processes but rather from a positivity bias: a tendency to avoid incorporating recent task experiences into one’s self-representation because these experiences are presumably more negative. Here, we evaluate both of these hypotheses via a feedback paradigm. Younger and older adults completed a cognitive battery and received feedback on their performance, presented as percentiles. We then examined whether this task-specific feedback induced updates in their everyday metacognitive beliefs at various delays. We found evidence that both groups can update and improve their metacognitive beliefs for up to two weeks, though several factors moderated this effect.
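
To make the updating idea concrete, here is one hypothetical way to score how far a post-feedback self-estimate moves toward the feedback percentile; the index and variable names are illustrative assumptions, not the study's actual analysis.

```python
def updating_index(pre_estimate, feedback, post_estimate):
    """1.0 = belief fully updated to the feedback percentile; 0.0 = no change.

    All three arguments are percentiles (0-100). This scoring rule is a
    hypothetical illustration, not the measure used in the study.
    """
    initial_gap = feedback - pre_estimate
    if initial_gap == 0:
        return 1.0  # already calibrated; nothing to update
    return (post_estimate - pre_estimate) / initial_gap

# A participant who believed they were at the 80th percentile, learned they
# scored at the 50th, and later reported the 60th percentile:
print(updating_index(pre_estimate=80, feedback=50, post_estimate=60))  # ~0.67
```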

Time: 01/15/25 3:30 PM

Location: Biopsychological Sciences Building atrium

If you have any questions, requests, or concerns, please contact Nakwon Rim (nwrim [at] uchicago [dot] edu) or Cambria Revsine (crevsine [at] uchicago [dot] edu).

Cognition Workshop 12/04/24: Dr. Alex Koch

Title: What is perceived diversity?

Alex Koch, Assistant Professor, The University of Chicago Booth School of Business

Abstract: Institutions and organizations place importance on signaling that their workforce is diverse. This signaling requires knowing which indicator(s) stakeholders rely on to perceive diversity. Does perceived diversity depend on the number of groups represented in a work unit? We contrast this richness indicator with several indicators that relate to the evenness of the workers’ distribution across the represented groups. In Experiment 1, the richness of a work unit predicted its perceived diversity independently of several evenness indicators. In Experiments 2a-c, the richer of two work units appeared more diverse, despite all evenness indicators suggesting the opposite. This result generalized from fictional to real groups, and from groups of beings to types of things. Experiment 3 replicated Experiment 2a with a larger sample of work units. Experiments 4 and 5 flipped the effect. Instead of being informed about the richness and evenness of two work units, people experienced both work units by encountering worker after worker. Sequentially experiencing the less rich but more even work unit involved more switches between represented groups, which, in turn, made that unit seem more diverse. Experiment 6 showed a second boundary condition: people did not perceive the richer of two work units as more diverse when the other work unit was substantially more even. Overall, institutions and organizations can effectively signal diversity through statements that emphasize the number of groups represented in their workforce.
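
For readers unfamiliar with the richness/evenness distinction, the sketch below computes both for a toy work unit. The abstract does not specify which evenness indicators were tested; Pielou's normalized Shannon entropy is used here purely as one common example.

```python
# Assumed, illustrative indicators: richness = number of represented groups;
# evenness = Pielou's J (Shannon entropy normalized by its maximum).
import math
from collections import Counter

def richness(workers):
    """Number of distinct groups represented in the work unit."""
    return len(set(workers))

def pielou_evenness(workers):
    """Shannon entropy of the group distribution divided by log(richness),
    so 1.0 means workers are spread perfectly evenly across groups."""
    counts = Counter(workers)
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    r = len(counts)
    return h / math.log(r) if r > 1 else 0.0

unit_a = ["A"] * 6 + ["B", "C", "D"]        # richer (4 groups) but uneven
unit_b = ["A"] * 3 + ["B"] * 3 + ["C"] * 3  # less rich (3 groups) but even

print(richness(unit_a), round(pielou_evenness(unit_a), 2))  # 4 0.72
print(richness(unit_b), round(pielou_evenness(unit_b), 2))  # 3 1.0
```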

Time: 12/04/24 3:30 PM

Location: Biopsychological Sciences Building atrium

If you have any questions, requests, or concerns, please contact Nakwon Rim (nwrim [at] uchicago [dot] edu) or Cambria Revsine (crevsine [at] uchicago [dot] edu).

Cognition Workshop 11/13/24: Woohyeuk (Leo) Chang

Title: Colored Word, Is It Visual or Verbal?

Woohyeuk (Leo) Chang, doctoral student in the Awh/Vogel Lab, Department of Psychology, University of Chicago

Abstract: Visual working memory (VWM) and verbal working memory have often been treated as distinct processes. However, recent research suggests a potential overlap between these two forms of memory. For instance, letters and words, despite their verbal associations, can elicit similar contralateral delay activity (CDA), a load-sensitive electrophysiological signature of VWM that is typically associated with visual stimuli (e.g., colored squares). Here, by leveraging a multivariate load decoding technique and representational similarity analysis, we re-analyzed data from Rajsic et al. (2019) and reconfirmed the presence of a generalized load signal across stimulus types, as well as distinct content-based signals. To further test this finding, we ran a modified version of the original task in which we removed the perceptual differences between the visual and verbal working memory task conditions by using colored words. Our results once again demonstrated a generalized load signal across task conditions, while also allowing us to track the specific content being actively maintained. Thus, our results strengthen the case for a unified mechanism underlying working memory load that is independent of content.
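
The cross-condition logic can be illustrated with a minimal decoding sketch: train a classifier on memory load in one stimulus condition and test it in the other, where above-chance transfer indicates a generalized load signal. The synthetic data and parameters below are assumptions for illustration, not the actual EEG pipeline.

```python
# Minimal sketch of cross-condition load decoding on fake delay-period EEG
# (trials x channels), where response amplitude scales with memory load.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 30

def simulate_condition(load_effect=0.3):
    loads = rng.integers(1, 3, n_trials)  # set size 1 or 2 per trial
    X = rng.normal(size=(n_trials, n_channels)) + loads[:, None] * load_effect
    return X, loads

X_visual, y_visual = simulate_condition()  # e.g., colored squares
X_verbal, y_verbal = simulate_condition()  # e.g., colored words

# Train on one condition, test on the other: above-chance accuracy is the
# signature of a load signal that generalizes across stimulus types.
clf = LogisticRegression().fit(X_visual, y_visual)
print("cross-condition decoding accuracy:", clf.score(X_verbal, y_verbal))
```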

Time: 11/13/24 3:30 PM

Location: Biopsychological Sciences Building atrium

If you have any questions, requests, or concerns, please contact Nakwon Rim (nwrim [at] uchicago [dot] edu) or Cambria Revsine (crevsine [at] uchicago [dot] edu).

Cognition Workshop 10/30/24: Brady Roberts

Title: Intrinsic memorability of symbols: Visual features, processing efficiency, or both?

Brady Roberts, Post-doctoral scholar in the Bainbridge Lab, Department of Psychology, University of Chicago

Abstract: Recent work has begun to evaluate the memorability of everyday visual symbols as a new way to understand how abstract concepts are processed in memory. Symbols were previously found to be highly memorable, especially relative to words, but it remained unclear what was driving their heightened memorability. In this exploratory, conversational presentation, I will offer evidence that memorable visual attributes as well as processing efficiency may both play a role in symbol memory. In the first section, I will detail a study in which we explored which features predict memory for conventional symbols (e.g., !@#$%). We then used an artificial image generator to form novel symbols while accentuating or downplaying predictive features, creating sets of memorable and forgettable symbols, respectively. In a separate study, we tested memory for conventional symbols in a group of individuals with aphantasia (the inability to form mental images). Based on the results of these two studies, I will review arguments for why visual attributes, processing efficiency, or perhaps both might drive the intrinsic memorability of symbols.

Time: 10/30/24 3:30 PM

Location: Biopsychological Sciences Building atrium

If you have any questions, requests, or concerns, please contact Nakwon Rim (nwrim [at] uchicago [dot] edu) or Cambria Revsine (crevsine [at] uchicago [dot] edu).

Cognition Workshop 10/23/24: Xiaohan (Hannah) Guo

Title: What Makes Co-Speech Gestures Memorable?

Xiaohan (Hannah) Guo, doctoral student in the Bainbridge Lab, Department of Psychology, University of Chicago

Abstract: Adults consistently remember, and forget, certain visual stimuli (particular static images or dance movements, for example) regardless of their personal experience with those stimuli. Here, we ask whether this phenomenon, dubbed the memorability effect, extends to a different type of visual stimulus: co-speech gestures, the hand movements speakers spontaneously produce when they talk. Investigating which kinds of gestures are memorable, and how the memorability effect interacts across modalities, is important for understanding how gesture can be incorporated into instruction to promote student learning. We ask, first, whether there is a memorability effect for gesture and, if so, whether semantic meaning features and/or visual form features are responsible for this effect. We created 360 10-second audiovisual stimuli by video recording 20 actors producing unscripted natural speech and gestures as they pretended to explain Piagetian liquid conservation to a child. We then tested online participants’ memories using a study-test paradigm for video, audio, and audiovisual versions of the stimuli. Participants showed memory consistencies in all three conditions, and memorability for gesture+speech (audiovisual stimuli) was predicted by the memorability of both its gesture and speech components. We then quantified features of the gestural stimuli using three methods: trained coders, automatic computational analysis, and online crowd-sourcing. Gestures were more memorable when they were more informative, and were most memorable when originally produced with memorable speech. These findings provide insight into the impact that multimodal communication has on memory, and offer a basis for investigating whether increasing the memorability of instruction can promote student understanding.
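
The "memory consistency" analyses in this literature are typically split-half correlations; the sketch below shows that logic on simulated recognition data. The participant count, splitting procedure, and statistic are assumptions for illustration, not the authors' code.

```python
# Split-half consistency: randomly split participants, compute each
# stimulus's hit rate in each half, and correlate the two score vectors.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_subjects, n_stimuli = 100, 360  # 360 matches the stimulus count above

# Fake recognition data (1 = hit, 0 = miss) with stimulus-level structure.
true_memorability = rng.beta(2, 2, n_stimuli)
hits = rng.binomial(1, true_memorability, size=(n_subjects, n_stimuli))

order = rng.permutation(n_subjects)
half1, half2 = order[: n_subjects // 2], order[n_subjects // 2 :]
rho, _ = spearmanr(hits[half1].mean(axis=0), hits[half2].mean(axis=0))
print(f"split-half consistency (Spearman rho): {rho:.2f}")
# In practice the split is repeated many times and the average rho reported.
```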

Time: 10/23/24 3:30 PM

Location: Biopsychological Sciences Building atrium

If you have any questions, requests, or concerns, please contact Nakwon Rim (nwrim [at] uchicago [dot] edu) or Cambria Revsine (crevsine [at] uchicago [dot] edu).

Cognition Workshop 10/02/24: Nakwon Rim

Title: Natural scenes are more compressible and less memorable than man-made scenes

Nakwon Rim, doctoral student in the Berman Lab, Department of Psychology, University of Chicago

Abstract: Compressing information from the environment to fit our processing capacity is an essential function of cognition. However, some environments may be easier for us to compress than others, which bears directly on how cognitively taxing an environment is. Here, we investigate this environmental variation in compressibility in the visual domain along the dimension of naturalness. Across three human experiments, we quantified the compressibility of natural and man-made scenes using spatial frequency and edge sharpness. Aligning with previous work on the benefits of interacting with nature, natural scenes were more compressible than man-made scenes. Furthermore, we used memorability as a behavioral proxy for how much information from a scene is processed into memory. Matching the compressibility result, we found that natural scenes were less memorable. Finally, we trained a neural network to predict the naturalness of scenes and replicated the results in a large-scale scene database of more than 100,000 images. Our results offer insight into cognitive processing and human-environment interaction, most notably the beneficial effects of interacting with nature.
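
The two image statistics named in the abstract can be approximated as follows; the exact definitions, the cutoff, and the toy images are assumptions, so this is a sketch of the idea rather than the paper's pipeline.

```python
# Illustrative proxies: proportion of Fourier energy at high spatial
# frequencies, and mean gradient magnitude as a stand-in for edge sharpness.
# Higher values on both suggest a scene that is harder to compress.
import numpy as np

def high_freq_energy(img, cutoff=0.25):
    """Fraction of spectral energy beyond `cutoff` (radius normalized to
    1.0 at the Nyquist frequency)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2) : h - h // 2, -(w // 2) : w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return power[radius > cutoff].sum() / power.sum()

def edge_sharpness(img):
    """Mean gradient magnitude; denser, sharper edges raise this value."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt(gx**2 + gy**2)))

smooth = np.outer(np.hanning(128), np.hanning(128))    # smooth, nature-like toy
clutter = np.random.default_rng(2).random((128, 128))  # cluttered, man-made-like
for name, img in [("smooth", smooth), ("clutter", clutter)]:
    print(name, round(high_freq_energy(img), 3), round(edge_sharpness(img), 3))
```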

Time: 10/02/24 3:30 PM

Location: Biopsychological Sciences Building atrium

If you have any questions, requests, or concerns, please contact Nakwon Rim (nwrim [at] uchicago [dot] edu) or Cambria Revsine (crevsine [at] uchicago [dot] edu).

Cognition Workshop 05/15/24: John Veillette

Title: An integrative framework for evaluating theories of motor awareness

John Veillette, doctoral student in the Nusbaum Lab, Department of Psychology, University of Chicago

Abstract: Our bodies are the interface through which we execute all behaviors, and the conscious experience of volitionally controlling our bodies constitutes one of the most basic forms of self-awareness. While an empirically grounded neurocognitive account of this sense of agency would have broad medical (e.g., for psychiatric and motor disorders), legal (e.g., in the interpretation of mens rea), and ethical implications (e.g., autonomy in emerging interface technologies and prosthetics), theories of motor awareness have proven inherently difficult to test. In addition to inheriting all the falsifiability issues recently discussed in the broader consciousness literature, theories of motor awareness must depend on additional auxiliary assumptions from competing accounts of motor control, which may themselves have ambiguous neuroanatomical specifications. We propose a framework that integrates recent methodological advances in physical and biomechanical simulation, robotics, and human-computer interaction to jointly elicit falsifiable predictions from a broad class of (pairs of) models of motor awareness and control. We present a proof-of-concept MRI experiment in which we use this general approach to test a previously intractable hypothesis, while also illustrating the remaining methodological obstacles to applying our proposed framework in practice. This discussion motivates our ongoing work to eliminate such barriers through the development of novel statistical methods, facilitating our ultimate aim of automatically benchmarking hundreds of models of motor control and awareness, all specified with biomechanical and neuroanatomical precision at the level of individual subjects.

Cognition Workshop 05/01/24: Henry Jones

Title: Storage in working memory recruits a modality-independent pointer system

Henry Jones, doctoral student in the Awh/Vogel Lab, Department of Psychology, University of Chicago

Abstract: Prominent theories of working memory (WM) have proposed that distinct working memory systems may support the storage of different types of information. For example, distinct dorsal and ventral stream brain regions are activated during the storage of spatial and object information in visual WM. Although feature-specific activity is likely critical to WM storage, we hypothesize that a content-independent indexing process may also play a role. Specifically, spatiotemporal pointers may be required for the sustained indexing and tracking of items in space and time, even while features change, within an unfolding event. Past evidence for such a content-independent pointer operation includes the finding that signals tracking the number of individuated representations in WM (load) generalize across colors, orientations, and conjunctions of those features. However, a common allocation of spatial attention, or overlapping orientation and color codes in early visual cortex, could mimic a generalizable signal. Here, I will present a stronger demonstration of content-independence. In Experiment 1, I dissociate WM load from a common confound, spatial attention. By independently manipulating the two, we found signals that were selectively sensitive to WM load. In Experiment 2, I replicate the previous finding of load-signal generalization, after which I provide a stronger demonstration of content-independence by using a pair of features that are as cortically disparate as possible: color and motion coherence. We use representational similarity analysis (RSA) to simultaneously track both content-specific and content-independent load signals. Extending these observations, in the final experiment we apply similar analytic approaches to demonstrate a common load signature across auditory and visual sensory modalities, while controlling for modality-specific neural activity and the spatial extent of covert attention. Our findings suggest that content-independent pointers may play a fundamental role in the storage of information in working memory, and may contribute to its overall limited capacity.
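
The RSA logic described here can be sketched compactly: build a neural representational dissimilarity matrix (RDM) over conditions and correlate it with separate model RDMs for load and for content. The synthetic patterns and condition labels below are illustrative assumptions, not the study's data.

```python
# Four conditions crossing feature type and load:
# (color, load 1), (color, load 2), (motion, load 1), (motion, load 2).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

features = np.array([0.0, 0.0, 1.0, 1.0])
loads = np.array([1.0, 2.0, 1.0, 2.0])

# Fake condition-mean neural patterns in which load drives the signal.
rng = np.random.default_rng(5)
patterns = rng.normal(size=(4, 30)) + loads[:, None] * 0.5

neural_rdm = pdist(patterns, metric="correlation")
load_rdm = pdist(loads[:, None], metric="euclidean")
content_rdm = pdist(features[:, None], metric="euclidean")

# A content-independent load signal shows up as a better fit for the load
# model RDM than for the content model RDM.
for name, model in [("load", load_rdm), ("content", content_rdm)]:
    rho, _ = spearmanr(neural_rdm, model)
    print(f"{name} model fit (Spearman rho): {rho:.2f}")
```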

Cognition Workshop 04/24/24: Cambria Revsine

Title: The Memorability of Voices Is Consistent and Predictable

Cambria Revsine, doctoral student in the Bainbridge Lab, Department of Psychology, University of Chicago

Abstract: Memorability, or the likelihood that an item is remembered, is an intrinsic stimulus property that is highly consistent across viewers. In other words, people tend to remember and forget the same faces, scenes, objects, and more. However, stimulus memorability research has until now been limited to the visual domain. In this talk, I will present the first exploration of auditory memorability, in which we investigated whether this consistency in what individuals remember extends to speakers’ voices and, if so, what makes a voice memorable. Across three experiments, over 3,000 online participants heard a sequence of different speakers from a large-scale voice database saying the same sentences. Participants indicated whenever they heard a repeated voice clip (Exps. 1 and 2) or a repeated speaker speaking a different sentence (Exp. 3). We found that participants were significantly consistent in their memory performance for voice clips, and for speakers across different utterances. Next, we tested regression models of voice memorability incorporating both low-level properties (e.g., pitch) and high-level properties measured in a separate experiment (e.g., perceived confidence). The final models, which contained primarily low-level predictors, were significantly predictive and cross-validated out of sample. These results provide the first evidence that people are similar in their memory for speakers’ voices, regardless of what the speaker is saying, and that this memory performance can be reliably predicted from a mix of voice features.
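
As a sketch of the modeling step, the snippet below fits a regression from voice features to memorability scores and evaluates it out of sample; the features echo examples from the abstract (pitch, perceived confidence), but the data, model family, and fold count are assumptions.

```python
# Cross-validated regression from (fake) voice features to (fake)
# per-speaker memorability scores.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_voices = 300
X = np.column_stack([
    rng.normal(200, 50, n_voices),  # e.g., mean pitch (Hz), a low-level feature
    rng.normal(0, 1, n_voices),     # e.g., rated confidence, a high-level feature
])
y = 0.002 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.1, n_voices)

# Out-of-sample R^2 from 5-fold cross-validation guards against overfitting.
scores = cross_val_score(Ridge(), X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", round(float(scores.mean()), 2))
```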

Cognition Workshop 04/03/24: Pınar Toptas

Title: Long-Range Interactions between Hippocampus and Prefrontal Cortex via Beta Oscillations in Olfactory Rule-Reversal Learning

Pınar Toptas, doctoral student in the Yu Lab, Department of Psychology, University of Chicago

Abstract: Everyday life requires animals to learn associations between sensory cues and their outcomes in order to respond appropriately in context. For instance, while the fresh scent of roses in a flower bouquet triggers the response of smelling, the same scent in a rose-flavored dessert triggers the response of eating. How does the brain learn a new behavioral response to a sensory cue (e.g., an odor) that already has an associated response leading to a desired outcome? The literature suggests that long-range communication among sensory cortices, the prefrontal cortex, and the hippocampus is essential for complex olfactory associative memories and the decisions that depend on them. Local field potentials in these regions show high coherence in beta-range (12-30 Hz) oscillations while animals engage in a familiar olfactory associative memory task. We aim to expand this knowledge by investigating the role of beta coherence in learning novel sensory associations while still remembering familiar ones. To investigate this question, we recorded neural activity from the medial prefrontal cortex and dorsal hippocampus of six rats as they practiced novel and familiar olfactory associative memory tasks. Behavioral analyses identified different decision strategies, which may point toward different states of task representation. Future analyses aim to characterize the nature of beta coherence across these regions of interest for the different clusters of task representations.
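
For readers unfamiliar with the coherence measure, the sketch below estimates magnitude-squared coherence between two synthetic LFP channels and averages it over the beta band; the signals, sampling rate, and window length are assumptions for illustration, not the recorded data.

```python
# Beta-band (12-30 Hz) coherence between two simulated LFP channels that
# share a common 20 Hz component plus independent noise.
import numpy as np
from scipy.signal import coherence

fs = 1000                             # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)          # 10 s of signal
rng = np.random.default_rng(4)
shared = np.sin(2 * np.pi * 20 * t)   # shared beta-range oscillation
lfp_mpfc = shared + rng.normal(0, 1, t.size)  # "medial prefrontal" channel
lfp_hpc = shared + rng.normal(0, 1, t.size)   # "dorsal hippocampus" channel

f, cxy = coherence(lfp_mpfc, lfp_hpc, fs=fs, nperseg=1024)
beta = (f >= 12) & (f <= 30)
print("mean beta coherence:", round(float(cxy[beta].mean()), 2))
```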