Researcher ORCID Identifier

0009-0008-2779-4337

Graduation Year

2025

Date of Submission

12-2001

Document Type

Campus Only Senior Thesis

Degree Name

Bachelor of Arts

Department

Psychology

Reader 1

Gabriel Cook

Terms of Use & License Information

Terms of Use for work posted in Scholarship@Claremont.

Rights Information

© 2025 Emma Pan

Abstract

As Artificial Intelligence (AI) becomes increasingly common, it is important to examine its influence on human cognition. Two experiments are proposed to investigate whether racially biased outputs from AI chatbots influence users' explicit racial attitudes and whether warning labels can mitigate this effect after one-time or repeated exposure. Drawing on schema theory, Experiment 1 examines the effect of a single exposure to biased AI-generated article summaries on explicit bias and whether a warning label disrupts this process. Drawing on the Elaboration Likelihood Model, Experiment 2 tests whether the warning label's effectiveness declines across repeated sessions due to desensitization and cognitive fatigue. The findings are expected to demonstrate that biased content can reinforce prejudice through familiar cognitive pathways and that warning labels, while initially effective, lose impact over time. These proposed results have important implications for theories of persuasion and human-AI interaction, and they underscore the need for user-centered interventions in the responsible design of AI chatbots.

This thesis is restricted to the Claremont Colleges current faculty, students, and staff.
