Physiological and psychosocial correlates of “good sleep”: Implications for cognition, health, and aging
What makes your sleep “good”? Few wake intrusions? Falling asleep as soon as your head hits the pillow? Waking up refreshed and ready for your day? All of the above? Science is still grappling with the answers to these questions, yet we do know that a period of sleep helps us think, learn, and remember better, and that specific neural changes during sleep support human cognition. To date, my research program has examined how these neural features and specific changes in our body 1) help us define “good” sleep and 2) support cognition. In this talk, I will review this body of work and identify future directions aligned with this research trajectory. Additionally, data suggest that 35% of people do not get the recommended amount of sleep at night. This widespread sleeplessness comes with significant costs to our overall health, yet the burden of sleep loss does not fall on everyone equally. I will discuss disparities in sleep health and access, their historical roots, and ongoing and future projects that address these topics.
The Visual Mandela Effect as evidence for shared and specific false memories across people
The Mandela Effect is an internet phenomenon describing shared and consistent false memories for specific icons in popular culture. The Visual Mandela Effect (VME) is a Mandela Effect specific to visual icons (e.g., the Monopoly Man is falsely remembered with a monocle) and has not yet been empirically quantified or tested. In Experiment 1, we demonstrate that certain images from popular iconography elicit consistent, specific false memories. In Experiment 2, using eye-tracking methods, we find no attentional or visual differences that drive this phenomenon. There is no clear difference in the natural visual experience of these images (Experiment 3), and these VME errors also occur spontaneously during recall (Experiment 4). These results demonstrate that there are certain images for which people consistently make the same false-memory error, despite the majority of their visual experience being of the canonical image.
The Development of Decision-Making Across Diverse Cultural Contexts
The human behavioral repertoire is uniquely diverse, with an unmatched flexibility that has allowed our species to flourish in every ecology on the planet. Despite its importance, the roots of this behavioral diversity — and how it manifests across development and contexts — remain largely unexplored. I argue that a full account of human behavior requires a cross-cultural, developmental approach that systematically examines how environmental variability shapes behavioral processes. In this talk, I use the development of decision-making across diverse contexts as a window into the relationship between the socioecological environment and behavior. First, I present the results of a cross-cultural investigation of risk and time preferences among children in India, Argentina, the United States, and the Ecuadorian Amazon, suggesting that market integration and related socioecological shifts lead to the development of more risk-seeking and future-oriented preferences. Second, I present the early results of a five-culture investigation into the ontogeny of social preferences — namely, trustworthiness, forgiveness, and fairness. Taken together, these studies help elucidate the developmental origins of behavioral diversity across diverse contexts, and underscore the utility of interdisciplinary research for explaining human behavior.
Cognitive state fluctuations impact learning in different contexts
We are constantly learning from the world around us. How do changes in our cognitive and attentional states impact this process? I will describe two projects examining relationships between internal state fluctuations and two forms of learning: statistical learning, an automatic and fundamental process, and adaptive learning, a noisy and dynamic one. In the first project, we examined the consequences of sustained attention fluctuations for statistical learning. Participants completed a continuous performance task with shape stimuli online. Unbeknownst to participants, we manipulated what they saw in real time by inserting visual regularities (a sequence of three regular shapes) into the task trial stream when their response times suggested that they were in especially high or low attentional states. Demonstrating that attentional state impacts statistical learning, we observed greater evidence for learning of the regular sequence encountered in the high vs. the low attentional state. In the second project, we reanalyzed an openly available fMRI dataset collected as participants performed an adaptive learning task in which they learned to make accurate predictions about the location of a falling object in a noisy and dynamically changing environment. Individual differences in a brain network signature of sustained attention predicted individual learning style, with individuals with network signatures of stronger attention showing a learning style more like that of a normative model. In addition, trial-to-trial fluctuations in a distinct network signature of working memory predicted learning performance, such that trials on which participants showed a network signature of stronger working memory were followed by closer alignment between human and model predictions on the next trial. Together, these studies reveal consequences of sustained attention and working memory fluctuations for learning in different contexts.
Stable attentional control demands across individuals despite extensive learning
Classic models of expertise propose that when first learning a task, success is primarily determined by the individual’s attention and working memory ability. However, as skill develops, performance becomes less dependent on attentional control and loads more on acquired long-term memory structures for the task. Here, we tested whether individual differences in attentional control ability continued to predict long-term memory performance for picture sequences even after participants showed massive learning gains for the sequence via multiple repetitions. In Experiments 1–3, subjects performed a location source memory task in which they were presented with a sequence of 30 objects shown in one of four quadrants, or 30 centrally positioned objects with an external black square in one of the four quadrants, and then were tested on each item’s position. We then repeated the procedure with the same object sequences, such that each subject was shown and tested on the same sequence five times. We replicated the prior findings of a relationship between attentional control and overall memory accuracy. Interestingly, we discovered that individual differences in attentional control continued to predict memory accuracy across all repetitions. In Experiment 4, we sought to replicate our finding with verbal materials: participants were asked to memorize 45 word pairs and perform cued-recall tasks as the memory measure. We replicated the correlation between attentional control and overall memory accuracy, as well as the stable attentional control demands even with extensive learning of word pairs. Together, these results suggest that developing expertise does not eliminate the contribution of attentional control ability to long-term memory, but may instead be accompanied by more optimized attentional control during expert task performance.
Large-scale neural dynamics in a low-dimensional state space reflect cognitive and attentional dynamics
Cognition and attention arise from the adaptive coordination of neural systems in response to internal and external demands. However, the low-dimensional latent subspace that underlies large-scale neural dynamics, and the relationship of these dynamics to cognitive and attentional states, remain unknown. We conducted functional magnetic resonance imaging as human participants performed attention tasks, watched comedy sitcom episodes and an educational documentary, and rested. Whole-brain dynamics traversed a common set of latent states that spanned two gradient axes of functional brain organization, with global synchrony among functional networks modulating state transitions. Neural state transitions were time-locked to narrative event boundaries and changes in cognitive task demands, and reflected attentional states in both task and naturalistic contexts. Together, these results demonstrate that traversals along the low-dimensional gradients reflect cognitive and attentional dynamics in diverse contexts.
The Psychophysics of Subjectivity
What is perception about? A traditional and intuitive answer is that it is about the world out there—the external environment and the objects that populate it. However, this picture of perception leaves out an essential aspect of experience and its targets: our own subjectivity. We experience not only what’s objectively out there, but also the point of view from which we encounter it; we perceive not only what there is, but also what’s missing; and we can become aware not only of the external world, but also of our own internal mental states. These forms of subjectivity play a central role in a long and rich philosophical tradition, but they have been notoriously challenging to study scientifically. In this talk, I will explore a new approach to the study of subjectivity. By exploiting experimental designs from vision science, I’ll show how we can make progress on—and even solve—centuries-old philosophical puzzles, by demonstrating that our subjective point of view leaves psychophysical traces in rapid, automatic visual processing. We’ll also see how patterns of visual attention reveal that our visual systems process absences in similar ways to how they process more ordinary, present objects. Finally, by developing computational models of introspection, we can determine when reports of our own subjective experience are reliable and when they are not. In summary, perception is about the world, but also about our place in it.
Towards naturalistic reinforcement learning in health and disease
Adaptive decision-making relies on our ability to organize experience into useful representations of the environment. This ability is critical in the real world: each person’s experience is dynamic and continuous, and no two situations we encounter are exactly the same. In this talk, I will first show that attention and memory contribute to inferring a set of features of the environment relevant for learning and decision-making (i.e. a “state representation”). I will then present results from ongoing work attempting to understand how such inference can take place in naturalistic environments. One line of work leverages virtual reality in combination with eye-tracking to study what features of naturalistic scenes guide goal-directed search. A second study examines the role of language in providing a prior for which features are relevant for decision-making. And a third thread focuses on how mood biases attention to different features of a decision. I will conclude with a discussion of the potential of naturalistic reinforcement learning as a model of mental health dynamics.
ALBATROSS: fAst fiLtration BAsed geomeTRy via stOchastic Sub-Sampling
What are the intrinsic patterns that shape distributions of biological data? Often, we use analysis techniques that assume our data come from a flat Euclidean space (e.g., PCA, t-SNE, and eigenvector decomposition). However, biological data often look as though they have been sampled from a space with some curvature. In this talk, I will build some intuition for curved spaces, particularly spaces with negative curvature, i.e., hyperbolic spaces, and discuss why these geometries might be better frameworks than Euclidean (flat) space for analyzing biological data. In addition, I will introduce a new statistical topological data analysis (TDA) protocol to detect geometric structure in biological data and determine whether your data come from a curved or flat space. This statistical protocol reduces TDA’s memory requirements and makes it possible for scientists with modest computing resources to infer the underlying geometry of their data. Finally, I will demonstrate this protocol by mapping the topology of functional correlations for the entire human cortex, something that was previously infeasible.
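The memory savings from stochastic sub-sampling can be illustrated in miniature. The sketch below is a generic illustration of the sub-sampling idea only, not the ALBATROSS protocol itself: the function names, the choice of statistic (a median pairwise distance rather than a persistent-homology summary), and all parameter values are invented for this example. The point it demonstrates is that repeatedly computing a geometric statistic on small random sub-samples keeps memory at O(k²) for sub-sample size k, instead of O(n²) for the full point cloud.

```python
import math
import random

def pairwise_distances(points):
    """All pairwise Euclidean distances within one sub-sample (O(k^2) memory)."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dists.append(math.dist(points[i], points[j]))
    return dists

def subsampled_statistic(points, stat, k=50, n_repeats=20, seed=0):
    """Estimate a geometric summary by averaging `stat` over many small
    random sub-samples, so memory scales with k, not len(points)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_repeats):
        sample = rng.sample(points, k)
        estimates.append(stat(pairwise_distances(sample)))
    return sum(estimates) / len(estimates), estimates

# Toy point cloud: 1,000 points on the unit circle.
rng = random.Random(1)
cloud = [(math.cos(t), math.sin(t))
         for t in (rng.uniform(0, 2 * math.pi) for _ in range(1000))]
median = lambda xs: sorted(xs)[len(xs) // 2]
est, per_sample = subsampled_statistic(cloud, median, k=50, n_repeats=20)
```

The spread of `per_sample` across repeats also gives a bootstrap-style handle on the estimate’s stability, which is the kind of statistical control a sub-sampling protocol needs before drawing geometric conclusions.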
Causes and consequences of coalitional cognition
What is a group? How do we know to which groups we belong? How do we assign others to groups? A great deal of theorizing across the social sciences has conceptualized ‘groups’ as synonymous with ‘categories’; however, this approach has a number of limitations, particularly for making predictions about novel intergroup contexts or about how intergroup dynamics will change over time. Here I present two projects that offer alternative frameworks for thinking about these questions. First, I review some recent work elucidating the cognitive processes that give rise to the inference of coalitions (even in the absence of category labels). Then, I’ll discuss an ongoing project on the effects of social group reference dependence (which falls out of coalitional reasoning) on hate crimes in the U.S. between 1990 and 2010.