Date of Award

Fall 2020

Degree Type

Open Access Dissertation

Degree Name

Psychology, PhD

School

School of Social Science, Politics, and Evaluation

Advisor/Supervisor/Committee Chair

Kathy Pezdek

Dissertation or Thesis Committee Member

Andrew R. A. Conway

Dissertation or Thesis Committee Member

Lise Abrams

Dissertation or Thesis Committee Member

John Dunlosky

Terms of Use & License Information

Terms of Use for work posted in Scholarship@Claremont.

Rights Information

© Copyright Erica Abed, 2020 All rights reserved.

Subject Categories

Cognitive Psychology

Abstract

Low performers tend to greatly overestimate their performance on a task, whereas high performers slightly underestimate their performance, a pattern known as the unskilled-unaware effect (Kruger & Dunning, 1999). Because assessment of one's own skill (monitoring) impacts future decisions, such as selecting information to re-study (control), low performers may be disadvantaged in both what they know and what they are likely to learn. Although most research has attempted to reduce metacognitive errors in low performers by training cognitive ability (e.g., teaching them to perform better on a task), training metacognitive ability may be both more efficient and more likely to transfer to other tasks. In light of recent findings suggesting that the unskilled-unaware effect results from a true metacognitive error, this dissertation tests two methods for reducing overconfidence (Experiment 1) and improving monitoring accuracy (Experiment 2) in high and low performers.

Experiment 1 tested whether answering easy rather than hard questions prior to taking a medium-difficulty test reduced trial-by-trial overconfidence in low performers. This hypothesis was not supported. Global, but not local, judgments were affected by the difficulty of a preceding task. Experiment 2 tested whether training and feedback improved metacognitive monitoring accuracy, especially for low performers, whose monitoring accuracy is relatively poor. This hypothesis was also not supported. Across all performance quartiles and experimental conditions, monitoring accuracy remained consistent for trial-by-trial confidence judgments.

Taken together with previous research, results from both experiments indicate that when making global judgments at the end of a task, people rely on various sources of information, including perceptions of task difficulty, to inform their metacognitive judgments. By contrast, when making trial-by-trial judgments, people are more likely to rely on information specific to the question itself (e.g., information from memory or gut feelings), rather than information about the task, to inform their confidence judgments.