COGS 20100 Final Project Report (Fall 2023)
Section 1: An Overview of Human Cognitive Biases
Cognitive biases are systematic patterns of deviation from norm or rationality in judgment, where inferences about other people and situations may be illogical (Tversky & Kahneman, 1974). Some of the most prominent cognitive biases include (1) confirmation bias, where individuals favor information that confirms their preconceptions; (2) the availability heuristic, which involves overestimating the importance of information that is readily available (Schwarz et al., 1991); and (3) anchoring bias, where reliance on the first piece of information encountered (the “anchor”) influences decision-making (Tversky & Kahneman, 1974). Other cognitive biases include hindsight bias, which leads people to believe that past events were more predictable than they actually were (Fischhoff, 1975), and the Dunning-Kruger effect, where individuals with limited knowledge or competence in a domain are overconfident and overestimate their own ability (Kruger & Dunning, 1999). These biases are not merely theoretical constructs; they are embedded in everyday thinking, influencing everything from routine daily choices to high-stakes decisions.
In the age of artificial intelligence (AI), with the rapid development of publicly available large language models (LLMs) such as ChatGPT, understanding how cognitive biases shape human judgment and decision-making has become increasingly important. For instance, confirmation bias can lead people to over-rely on ChatGPT’s outputs that align with their pre-existing beliefs, overlooking its potential flaws or biases and failing to incorporate diverse perspectives. Similarly, the availability heuristic can lead to an overestimation of the reliability and applicability of AI-generated information: users might overweight ChatGPT’s responses simply because they are readily accessible, without considering the limitations and prompt-dependent nature of the model’s responses. This can produce a skewed perception of AI’s capabilities, where users assume ChatGPT’s responses are always the most relevant or accurate. In the case of anchoring bias, the initial output provided by ChatGPT can influence subsequent thinking and decision-making. For example, when ChatGPT provides a preliminary answer or perspective on a topic that is not necessarily the most comprehensive one, users might anchor to this initial information and use it as a reference point for all future inquiries on that topic. The danger here is that the initial AI-generated content might set a biased or incomplete foundation for further thought, potentially leading to conclusions that are not grounded in thorough critical evaluation. These examples underscore the necessity for heightened awareness of how human cognitive biases can be influenced by AI.
Section 2: ChatGPT’s Role in Bias Navigation
ChatGPT can play a role in mitigating human cognitive biases. In particular, we will explore its potential for (1) identifying when a user might be exhibiting a bias and (2) “debiasing” through guided questioning and critical-thinking prompts, illustrated with a real-life scenario.
Scenario: Selecting a Research Topic
In this hypothetical scenario, the user asks ChatGPT: “I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. Is this a good approach?” ChatGPT first affirmed the benefits of aligning the thesis topic with one’s advisor’s expertise and listed five reasons.
Then, ChatGPT suggested three alternative perspectives worth considering when implementing the approach described in the prompt. Although these perspectives do not directly name the user’s potential cognitive biases, they encourage a more comprehensive view of the decision-making process for a thesis topic, which can help mitigate the three prominent biases: (1) confirmation bias, since focusing only on the positives of choosing a topic in the advisor’s field while ignoring other viable topics leads the user to favor information that supports the initial inclination; (2) the availability heuristic, since a topic in temporal choices may be given more weight simply because the information is easily accessible or was recently discussed with the advisor, causing areas that are less present in prior discussions to be overlooked; and (3) anchoring bias, since the user’s initial consideration of a topic in the advisor’s field can become an anchor that influences all subsequent thoughts, giving that one option undue weight even if other topics are equally or more suitable. In this way, ChatGPT’s response drives the user to consider both the positive and negative sides of the approach described in the prompt.
ChatGPT can also output responses that are thoughtful and comprehensive based on prompt variants that are slightly more biased. For example:
Prompt variant 1: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. I want to minimize the time I’ll spend on it. Is this a good approach?
Prompt variant 2: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. I want efficiency. I’d like to jump right into working on my thesis as soon as possible. Is this a good approach?
However, when it is not directly asked to judge whether the approach is good, ChatGPT places far less emphasis on encouraging the user to consider alternative perspectives, mostly justifying the user’s stated requirements and suggesting possible research ideas. Some example prompts, with the explicit question removed, are as follows:
Original prompt variant: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis.
Prompt variant 1.1: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. I want to minimize the time I’ll spend on it.
Prompt variant 2.1: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. I want efficiency. I’d like to jump right into working on my thesis as soon as possible.
With appropriate instructions, ChatGPT can become a powerful tool for mitigating human cognitive biases. I found that adding information about the need for debiasing to the prompts could enhance the quality of ChatGPT’s responses. Here, quality is defined by the diversity, scope, and depth of a response, and by how much it nudges the user to think comprehensively about the question and remain open to alternative perspectives. Some example prompts are as follows:
Original prompt variant: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. Is this a good approach? Try your best to debias my decision-making process.
Prompt variant 1.2: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. I want to minimize the time I’ll spend on it. Is this a good approach? Try your best to debias my decision-making process.
Prompt variant 2.2: I am choosing a topic for my thesis. My advisor is an expert in temporal choices and choices under risks, so I’m thinking of choosing a topic in this area for my thesis. I want efficiency. I’d like to jump right into working on my thesis as soon as possible. Is this a good approach? Try your best to debias my decision-making process.
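To make the comparison above easy to reproduce, the sketch below assembles the prompt families programmatically (the original wording, the time-minimizing variant, and the efficiency-focused variant, each with no question, with the explicit question, or with the added debiasing request) and sends each to the model. This is a minimal sketch, assuming the openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and gpt-3.5-turbo as an illustrative model choice; the variant labels and the ask() helper are my own naming, not part of the original prompts.

# Hedged sketch: reproduce the informal prompt-variant comparison above.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

BASE = (
    "I am choosing a topic for my thesis. My advisor is an expert in temporal "
    "choices and choices under risks, so I'm thinking of choosing a topic in "
    "this area for my thesis."
)
EXTRAS = {
    "original": "",
    "variant_1": " I want to minimize the time I'll spend on it.",
    "variant_2": (
        " I want efficiency. I'd like to jump right into working on my "
        "thesis as soon as possible."
    ),
}
SUFFIXES = {
    "no_question": "",
    "with_question": " Is this a good approach?",
    "with_debias": (
        " Is this a good approach? Try your best to debias my "
        "decision-making process."
    ),
}

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for variant, extra in EXTRAS.items():
        for suffix_name, suffix in SUFFIXES.items():
            reply = ask(BASE + extra + suffix)
            print(f"--- {variant} / {suffix_name} ---\n{reply}\n")

Printing the replies side by side makes it easier to judge how much each version nudges the user toward alternative perspectives rather than simply validating the stated preference.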
Section 3: Debiasing Strategies for Users in ChatGPT Interactions
Cultivating Awareness of Cognitive Biases
The first and most crucial step in using ChatGPT to counter cognitive biases is acknowledging their pervasive influence on our thinking. This awareness is key, as biases such as confirmation bias, the availability heuristic, and anchoring bias can significantly influence the way we pose questions to ChatGPT, interpret its responses, and make decisions. Recognizing these biases allows us to critically examine our queries and ChatGPT’s replies, ensuring we do not unconsciously seek confirmation of our beliefs or give undue weight to readily accessible information. Awareness acts as the foundation for active debiasing, allowing users to critically assess their own thought processes and the information provided by ChatGPT. This conscious engagement leads to more reflective and balanced interactions with AI.
Engaging ChatGPT in Critical Evaluation
Rather than giving ChatGPT simple, directive instructions, users should engage the AI in a dialogue that scrutinizes their requests and assumptions. In other words, users should engage in deeper, more reflective dialogues with ChatGPT. This involves actively soliciting ChatGPT’s assistance in analyzing and evaluating one’s thought processes. By asking ChatGPT to present counterarguments, explore alternative viewpoints, or even pinpoint biases in one’s reasoning, users open themselves to a broader scope of understanding. ChatGPT can thus be transformed into a critical-thinking aid that pushes users to consider aspects they might have overlooked and to question their initial assumptions. Such engagement not only enriches the decision-making process but also helps develop a mindset that is alert to one’s own inevitable cognitive biases, leading to better judgments and decisions.
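As a concrete illustration of this dialogue pattern, the sketch below keeps the conversation history and follows the model’s first answer with an explicit request for counterarguments and possible biases. It is a hedged example, assuming the openai Python SDK (v1+) and gpt-3.5-turbo; the follow-up wording is only one of many ways to phrase such a request.

# Hedged sketch of a reflective, multi-turn dialogue with ChatGPT.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # illustrative model choice

# Start with the thesis-topic question from the scenario in Section 2.
messages = [
    {
        "role": "user",
        "content": (
            "I am choosing a topic for my thesis. My advisor is an expert in "
            "temporal choices and choices under risks, so I'm thinking of "
            "choosing a topic in this area. Is this a good approach?"
        ),
    }
]

# First turn: get the model's initial answer and keep it in the history.
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append(
    {"role": "assistant", "content": first.choices[0].message.content}
)

# Second turn: ask the model to scrutinize the reasoning rather than accept it.
messages.append(
    {
        "role": "user",
        "content": (
            "Now play devil's advocate: what counterarguments, alternative "
            "viewpoints, or cognitive biases (e.g., confirmation bias, "
            "anchoring) might be hiding in my reasoning?"
        ),
    }
)
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)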
Incorporating Debiasing Instructions
When interacting with ChatGPT, users can enhance the debiasing process by including specific instructions aimed at mitigating cognitive biases. This proactive approach involves directly asking ChatGPT to identify any apparent cognitive biases in one’s questions or decision-making process and to suggest ways to overcome them. An effective way to do this is by posing questions that invite critical analysis, such as “I think X is true because of Y, but can you help me see if I’m missing something or if there’s another way to look at this?” This method leverages ChatGPT’s capabilities to challenge and expand the user’s perspective.
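One concrete way to build such instructions into every exchange is to attach a standing system message that asks the model to flag biases before answering, and to phrase the user’s question in the claim-and-reason template suggested above. The sketch below assumes the openai Python SDK (v1+) and gpt-3.5-turbo; the system-message wording and the helper name are illustrative suggestions rather than a prescribed formula.

# Hedged sketch of a standing debiasing instruction plus the question template.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

DEBIAS_SYSTEM_MESSAGE = (
    "Before answering, point out any cognitive biases apparent in the user's "
    "question (e.g., confirmation bias, availability heuristic, anchoring), "
    "offer at least one alternative perspective, and only then give advice."
)

def ask_with_debiasing(claim: str, reason: str) -> str:
    """Pose a claim-and-reason question using the critical-analysis template."""
    question = (
        f"I think {claim} because of {reason}, but can you help me see if I'm "
        "missing something or if there's another way to look at this?"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": DEBIAS_SYSTEM_MESSAGE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Example usage with the thesis-topic scenario from Section 2:
print(
    ask_with_debiasing(
        "I should pick a thesis topic in my advisor's area",
        "it seems fastest and my advisor knows it best",
    )
)

The same helper can be reused for any decision by swapping in a different claim and reason, which keeps the debiasing instruction consistent across interactions.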
References
Fischhoff, B. (1975). Hindsight ≠ Foresight: The Effect of Outcome Knowledge on Judgment Under Uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1(3), 288-299.
Kruger, J., & Dunning, D. (1999). Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments. Journal of Personality and Social Psychology, 77(6), 1121-1134.
Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of Retrieval as Information: Another Look at the Availability Heuristic. Journal of Personality and Social Psychology, 61(2), 195-202.
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124-1131.