Cognition Workshop 04/13: Ziwei Zhang

Cognitive state fluctuations impact learning in different contexts

We are constantly learning from the world around us. How do changes in our cognitive and attentional states impact this process? I will describe two projects examining relationships between internal state fluctuations and two forms of learning: statistical learning, an automatic and fundamental learning process, and adaptive learning, a noisy and dynamic one. In the first project, we examined the consequences of sustained attention fluctuations for statistical learning. Participants completed an online continuous performance task with shape stimuli. Unbeknownst to participants, we manipulated what they saw in real time by inserting visual regularities (a sequence of three regular shapes) into the task trial stream when their response times suggested that they were in especially high or low attentional states. Demonstrating that attentional state impacts statistical learning, we observed greater evidence for learning of the regular sequence encountered in the high vs. the low attentional state. In the second project, we reanalyzed an openly available fMRI dataset collected as participants performed an adaptive learning task in which they learned to make accurate predictions about the location of a falling object in a noisy and dynamically changing environment. Individual differences in a brain network signature of sustained attention predicted individual learning style: individuals with network signatures of stronger attention showed learning styles more like that of a normative model. In addition, trial-to-trial fluctuations in a distinct network signature of working memory predicted learning performance, such that trials on which participants showed a network signature of stronger working memory were followed by closer alignment between human and model predictions on the next trial. Together, these studies reveal consequences of sustained attention and working memory fluctuations for learning in different contexts.
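
To give a flavor of what a normative adaptive learner looks like in this kind of predictive-inference setting, here is a minimal, illustrative sketch, not the model used in the reanalyzed dataset: a delta-rule learner whose learning rate increases when a large prediction error suggests the environment has changed. The hazard rate, noise level, and outcome range below are hypothetical parameters chosen only for illustration.

```python
# Illustrative sketch of a simplified normative adaptive learner (assumed parameters;
# not the specific model or dataset from the talk). Predictions follow a delta rule
# whose learning rate rises when a changepoint appears likely.
import numpy as np
from scipy.stats import norm

def simulate_adaptive_learner(outcomes, hazard=0.1, noise_sd=10.0, outcome_range=300.0):
    predictions = []
    pred = outcomes[0]
    for y in outcomes:
        predictions.append(pred)
        err = y - pred
        # Likelihood of the outcome if no changepoint occurred (Gaussian noise around pred)
        like_stay = norm.pdf(y, loc=pred, scale=noise_sd)
        # Likelihood if a changepoint occurred (outcome could land anywhere in the range)
        like_change = 1.0 / outcome_range
        # Changepoint probability combines the prior hazard rate with the two likelihoods
        cpp = hazard * like_change / (hazard * like_change + (1 - hazard) * like_stay)
        # Learning rate: update fully after a likely changepoint, slowly otherwise
        lr = cpp + (1 - cpp) * 0.1
        pred = pred + lr * err
    return np.array(predictions)

# Toy environment: the hidden mean jumps occasionally; outcomes are noisy observations
rng = np.random.default_rng(0)
means = np.repeat(rng.uniform(0, 300, size=5), 40)
outcomes = rng.normal(means, 10.0)
model_preds = simulate_adaptive_learner(outcomes)
```

In the same spirit as the human-model alignment measure described above, one could compare participants' trial-by-trial predictions against a model trace like `model_preds`.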

Cognition Workshop 04/06: Chong Zhao

Stable attentional control demands across individuals despite extensive learning

Classic models of expertise propose that when first learning a task, success is primarily determined by the individual’s attention and working memory abilities. However, as skill develops, performance becomes less dependent on attentional control and relies more on acquired long-term memory structures for the task. Here, we tested whether individual differences in attentional control ability continued to predict long-term memory performance for picture sequences even after participants showed large learning gains for the sequences across multiple repetitions. In Experiments 1-3, subjects performed a location source memory task in which they were presented with a sequence of 30 objects shown in one of four quadrants, or 30 centrally positioned objects with an external black square in one of the four quadrants, and were then tested on each item’s position. We then repeated the procedure with the same object sequences, such that each subject was shown and tested on the same sequence five times. We replicated prior findings of a relationship between attentional control and overall memory accuracy. Interestingly, we discovered that individual differences in attentional control continued to predict memory accuracy across all repetitions. In Experiment 4, we sought to replicate our finding with verbal materials: participants were asked to memorize 45 word pairs and complete cued-recall tests as the memory measure. We replicated the correlation between attentional control and overall memory accuracy, as well as the stable attentional control demands even after extensive learning of the word pairs. Together, these results suggest that developing expertise does not eliminate the contribution of attentional control ability to long-term memory, but that expert task performance may instead reflect more optimized attentional control.

Cognition Workshop 03/09: Hayoung Song

Large-scale neural dynamics in a low-dimensional state space reflect cognitive and attentional dynamics

Cognition and attention arise from the adaptive coordination of neural systems in response to internal and external demands. However, the low-dimensional latent subspace that underlies large-scale neural dynamics, and how these dynamics relate to cognitive and attentional states, remain unknown. We conducted functional magnetic resonance imaging as human participants performed attention tasks, watched comedy sitcom episodes and an educational documentary, and rested. Whole-brain dynamics traversed a common set of latent states that spanned two gradient axes of functional brain organization, with global synchrony among functional networks modulating state transitions. Neural state transitions were time-locked to narrative event boundaries and to changes in cognitive task demands, and reflected attentional states in both task and naturalistic contexts. Together, these findings demonstrate that traversals along the low-dimensional gradients reflect cognitive and attentional dynamics in diverse contexts.
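
As a rough illustration of how one might extract a low-dimensional state space from network time series, here is a toy stand-in using PCA and k-means on simulated data; this is not the gradient-based analysis described above, and all dimensions and parameters are hypothetical.

```python
# Toy stand-in (assumed data and methods, not the analysis in the talk): project
# network-level fMRI time series onto two latent axes and label recurring states.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ts = rng.normal(size=(600, 25))            # hypothetical: 600 time points x 25 functional networks

# Two latent axes standing in for gradient axes of functional brain organization
axes = PCA(n_components=2).fit_transform(ts)

# Discrete latent states as clusters in the 2-D latent space
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(axes)

# State transitions: time points where the latent state label changes
transitions = np.flatnonzero(np.diff(states) != 0) + 1
print(f"{len(transitions)} state transitions across {ts.shape[0]} time points")
```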

Cognition Workshop 02/23: Dr. Jorge Morales

The Psychophysics of Subjectivity

What is perception about? A traditional and intuitive answer is that it is about the world out there: the external environment and the objects that populate it. However, this picture of perception leaves out an essential aspect of experience and its targets: our own subjectivity. We experience not only what’s objectively out there, but also the point of view from which we encounter it; we perceive not only what there is, but also what’s missing; and we can become aware not only of the external world, but also of our own internal mental states. These forms of subjectivity play a central role in a long and rich philosophical tradition, but they have been notoriously challenging to study scientifically. In this talk, I will explore a new approach to the study of subjectivity. By exploiting experimental designs from vision science, I’ll show how we can make progress on, and even solve, centuries-old philosophical puzzles by demonstrating that our subjective point of view leaves psychophysical traces in rapid, automatic visual processing. We’ll also see how patterns of visual attention reveal that our visual systems process absences in similar ways to how they process more ordinary, present objects. Finally, by developing computational models of introspection, we can determine when reports of our own subjective experience are reliable and when they are not. In summary, perception is about the world, but also about our place in it.

Cognition Workshop 02/09: Dr. Angela Radulescu

Towards naturalistic reinforcement learning in health and disease 

Adaptive decision-making relies on our ability to organize experience into useful representations of the environment. This ability is critical in the real world: each person’s experience is dynamic and continuous, and no two situations we encounter are exactly the same. In this talk, I will first show that attention and memory contribute to inferring a set of features of the environment relevant for learning and decision-making (i.e., a “state representation”). I will then present results from ongoing work attempting to understand how such inference can take place in naturalistic environments. One line of work leverages virtual reality in combination with eye-tracking to study which features of naturalistic scenes guide goal-directed search. A second study examines the role of language in providing a prior for which features are relevant for decision-making. A third thread focuses on how mood biases attention to different features of a decision. I will conclude with a discussion of the potential of naturalistic reinforcement learning as a model of mental health dynamics.
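
As a rough illustration of the kind of feature-based learning a "state representation" supports, here is a generic sketch under an assumed toy task structure, not the models or tasks from the talk: a learner that values multi-feature options by summing learned feature weights and updates only the features of the option it chose.

```python
# Generic feature-based reinforcement learning sketch (assumed task structure and
# parameters; not the specific models discussed in the talk). Each option is a set
# of feature indices; its value is the sum of learned feature weights.
import numpy as np

def feature_rl(trials, n_features, alpha=0.3, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(n_features)                      # learned weight per feature
    choices = []
    for options, reward_fn in trials:
        values = np.array([w[list(opt)].sum() for opt in options])
        probs = np.exp(beta * values)
        probs /= probs.sum()                      # softmax choice rule
        choice = rng.choice(len(options), p=probs)
        reward = reward_fn(options[choice])
        # Delta-rule update applied only to the chosen option's features
        delta = reward - values[choice]
        for f in options[choice]:
            w[f] += alpha * delta
        choices.append(choice)
    return w, choices

# Toy task: three options per trial, each defined by one color and one shape feature;
# only feature 0 ("red") predicts reward, so its weight should grow with learning.
options = [(0, 3), (1, 4), (2, 5)]
trials = [(options, lambda opt: float(0 in opt)) for _ in range(100)]
weights, _ = feature_rl(trials, n_features=6)
```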

Cognition Workshop 1/26: Andrew Stier

ALBATROSS: fAst fiLtration BAsed geomeTRy via stOchastic Sub-Sampling

What are the intrinsic patterns that shape distributions of biological data? Often, we use analysis techniques that assume our data come from a flat Euclidean space (e.g., PCA, tSNE, and eigenvector decomposition). However, biological data often look as though they were sampled from a space with some curvature. In this talk, I will build some intuition for curved spaces, particularly spaces with negative curvature (i.e., hyperbolic spaces), and discuss why these geometries might be better frameworks than Euclidean, or flat, space for analyzing biological data. In addition, I will introduce a new statistical topological data analysis (TDA) protocol to detect geometric structure in biological data and determine whether your data come from a curved or flat space. This statistical protocol reduces TDA’s memory requirements and makes it possible for scientists with modest computing resources to infer the underlying geometry of their data. Finally, I will demonstrate this protocol by mapping the topology of functional correlations for the entire human cortex, something that was previously infeasible.
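
As a rough illustration of how stochastic sub-sampling can keep a geometry check tractable, here is a minimal sketch of one standard diagnostic, Gromov's four-point delta-hyperbolicity, estimated from random point quadruples instead of all O(n^4) combinations. This is not the ALBATROSS protocol itself, only an example of the sub-sampling idea.

```python
# Illustrative sketch only (not the ALBATROSS protocol): probe whether a point cloud
# looks negatively curved by estimating Gromov's four-point delta on random quadruples.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sampled_delta_hyperbolicity(X, n_samples=20000, rng=None):
    """Estimate delta / diameter from random quadruples of points.

    Values near 0 suggest tree-like (negatively curved) structure; larger values
    are more consistent with flat geometry.
    """
    rng = np.random.default_rng(rng)
    D = squareform(pdist(X))            # pairwise Euclidean distances
    n = D.shape[0]
    quads = rng.integers(0, n, size=(n_samples, 4))
    w, x, y, z = quads.T
    # The three pairwise distance sums of the four-point condition
    s = np.stack([D[w, x] + D[y, z],
                  D[w, y] + D[x, z],
                  D[w, z] + D[x, y]], axis=1)
    s.sort(axis=1)
    delta = 0.5 * (s[:, 2] - s[:, 1])   # half the gap between the two largest sums
    return delta.max() / D.max()

# Example: a flat Gaussian cloud as a baseline for comparison
X_flat = np.random.default_rng(0).normal(size=(500, 10))
print(f"normalized delta (flat Gaussian cloud): {sampled_delta_hyperbolicity(X_flat, rng=1):.3f}")
```

Sub-sampling quadruples keeps the memory and compute cost roughly linear in the number of samples drawn, rather than quartic in the number of data points.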

Cognition Workshop 01/12: Dr. Mina Cikara

Causes and consequences of coalitional cognition

What is a group? How do we know to which groups we belong? How do we assign others to groups? A great deal of theorizing across the social sciences has conceptualized ‘groups’ as synonymous with ‘categories’; however, there are a number of limitations to this approach, particularly for making predictions about novel intergroup contexts or about how intergroup dynamics will change over time. Here I present two projects that offer alternative frameworks for thinking about these questions. First, I review some recent work elucidating the cognitive processes that give rise to the inference of coalitions (even in the absence of category labels). Then I’ll discuss an ongoing project on the effects of social group reference dependence, which falls out of coalitional reasoning, on hate crimes in the U.S. between 1990 and 2010.

Cognition Workshop 12/01: Dr. Rebecca Keogh

Understanding and measuring visual imagery in congenital aphantasia (absent visual imagery) and its relation to other cognitive functions

Visual imagery is our ability to ‘see with the mind’s eye’, and the vividness with which people report being able to visualise varies substantially, with some people reporting incredibly strong, lifelike imagery while others report very weak imagery. A recently identified group (congenital aphantasia) report not experiencing any visual imagery at all. Due to its inherently private nature, one of the main hurdles to overcome in visual imagery research is objectively and reliably measuring individual differences in the ability to visualise. In my presentation I will report on some behavioural (binocular rivalry) and physiological (skin conductance and pupillometry) measures that can be used to index visual imagery strength in the general population, as well as the lack of visual imagery in congenital aphantasia. I will then discuss how cortical excitability might drive individual differences in visual imagery strength, touching on our recent findings showing that a less excitable visual cortex produces the strongest visual imagery. Lastly, I will talk about how individual differences in visual imagery ability may influence a range of cognitive functions, specifically assessing the relationship between visual imagery and memory in congenital aphantasia.

Cognition Workshop 11/17: Dr. Emily Cowan

How consolidation supports adaptive memories

We rely on our ability to recall the past to guide behavior in the present. However, since we cannot remember everything we encounter, it is adaptive for memory systems to prioritize retaining salient, goal-relevant information. Memories are thought to be stabilized in the brain as they become supported by distributed neocortical networks, facilitated by interactions between the hippocampus and cortex, particularly during periods of sleep. My research focuses on understanding the adaptive nature of consolidation processes, examining how consolidation not only prioritizes the retention of goal-relevant memories, but also reorganizes the way memories are represented within and across brain regions. Through such transformation, memories for related but distinct experiences can become integrated, leading to an abstracted trace that can be flexibly generalized to future experiences. In this talk I’ll present three studies examining how consolidation supports such adaptive prioritization and transformation processes, using behavioral measures and functional neuroimaging methods including task-based and resting-state functional connectivity and multivariate pattern analyses. I’ll present data showing sleep-dependent changes in the organization of memories in cortical regions and along the long axis of the hippocampus, as well as work examining the scale of cortical regions that undergo such experience-dependent changes in service of selectively retaining novel information.

Cognition Workshop 11/3: Max Kramer & Andrew Savoy

What makes something memorable? Analyzing the Memorability of Objects

Max Kramer

A growing body of research has demonstrated that certain stimuli are consistently remembered more often than others, even across large heterogeneous populations (Isola et al., 2011), leading many to ask, “what makes something memorable?” This consistency in what is remembered and what is forgotten has been hypothesized to reflect an intrinsic and measurable property of stimuli known as memorability (Bainbridge, 2019). In attempting to determine why we remember certain things and forget others, some researchers have hypothesized that the most memorable stimuli are the most atypical or distinctive (Valentine, 1991), while others suggest that the most typical items are most often remembered (Bainbridge, Dilks, & Oliva, 2017; Bainbridge & Rissman, 2018). Here, we use THINGS, a hierarchical naturalistic object image database that systematically samples concrete object concepts, to determine whether the most typical or the most atypical items are most often remembered. We collect behavioral ratings of memorability from 13,946 AMT participants and compare them to three different measures of typicality: behavioral typicality ratings capturing human intuition, similarity scores across the dimensions of an object space associated with THINGS, and similarities across features in a deep neural network. We find a spread of memorability that persists across all levels of THINGS and a bias towards the most typical items being most often remembered, though there are counterexamples across the dataset. These results run counter to decades of research in memory, suggesting potential targets for future analyses.
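
As a rough illustration of how an embedding-based typicality score can be related to memorability, here is a minimal sketch on simulated data; the array shapes, embedding dimensionality, and memorability scores are hypothetical and this is not the THINGS analysis pipeline.

```python
# Illustrative sketch (assumed data, not the analyses from the talk): score each item's
# typicality as its mean cosine similarity to all other items in an embedding space,
# then correlate typicality with per-item memorability.
import numpy as np
from scipy.stats import spearmanr

def typicality_from_embeddings(E):
    """Typicality of each item = mean cosine similarity to every other item."""
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sim = E @ E.T
    np.fill_diagonal(sim, np.nan)          # exclude self-similarity
    return np.nanmean(sim, axis=1)

# Hypothetical inputs: item embeddings (e.g., a 49-dimensional object space) and
# per-item memorability scores (e.g., hit rates from a recognition memory task)
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 49))
memorability = rng.uniform(0.3, 0.95, size=200)

rho, p = spearmanr(typicality_from_embeddings(embeddings), memorability)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

A positive rank correlation in data like these would correspond to the typicality bias described above, while a negative one would favor the distinctiveness account.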

 

*****************************************************

Perception and evaluation of courtship song in female songbirds during mate choice 

Andrew Savoy

Mate choice is a complex psychological and behavioral process. It is also a central agent of the evolutionary theory of sexual selection. Insights from cognitive neuroscience are essential for understanding mate choice but are largely absent from studies of it. I study courtship display preferences and the corresponding neural signals of perception and evaluation. For this workshop I will present my rationale and methodology for testing two hypotheses pertaining to female zebra finch responses to courtship song: one regarding temporal regularity in song and the other regarding song familiarity. This research has the potential to meaningfully extend our knowledge of sexual selection mechanisms while also deepening our neurobiological understanding of fundamental cognitive processes.