Researcher ORCID Identifier
0009-0003-1089-6904
Graduation Year
2025
Date of Submission
4-2025
Document Type
Campus Only Senior Thesis
Degree Name
Bachelor of Arts
Department
Mathematical Sciences
Reader 1
Professor Mark Huber
Rights Information
© 2025 Alexander C Nasoni
Abstract
Board games have long served as foundational testbeds in Reinforcement Learning (RL) research, offering structured environments in which to train, test, and benchmark agents. By abstracting key elements of real-world decision-making, such as strategic planning, resource management, uncertainty, and competition, these games provide a simplified yet meaningful platform for experimentation. As a result, board games have become a gold standard for fair algorithmic comparison in RL. This thesis investigates several approaches to training a Deep Q-Network (DQN) agent to play the game of Checkers, examining four distinct learning setups: training against a random agent, against a Minimax agent, against a curriculum-based ensemble of opponents, and through pure self-play. The study details the design decisions involved in modeling the Markov Decision Process (MDP) environment, constructing opponent strategies, and structuring agent training, and it analyzes the experimental results to assess the strengths and limitations of each method.
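For readers unfamiliar with DQN, the following is a minimal sketch of the temporal-difference update shared by all four training setups named in the abstract, assuming a PyTorch implementation; the network architecture, the 32-square board encoding, the action-space size, and the hyperparameters shown here are illustrative assumptions, not the thesis's actual design.

# Minimal DQN update sketch (assumed PyTorch setup; sizes are illustrative).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a flattened Checkers board encoding to one Q-value per action."""
    def __init__(self, n_inputs=32, n_actions=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def dqn_loss(online, target, batch, gamma=0.99):
    """Standard DQN loss: fit Q(s, a) to r + gamma * max_a' Q_target(s', a')."""
    s, a, r, s_next, done = batch  # tensors sampled from a replay buffer
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target(s_next).max(dim=1).values
        td_target = r + gamma * (1.0 - done) * q_next  # no bootstrap at terminal states
    return nn.functional.mse_loss(q_sa, td_target)

Across the four setups, only the source of the opponent's moves in the environment changes (random, Minimax, a curriculum ensemble, or the agent's own policy in self-play); the update rule itself stays the same.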
Recommended Citation
Nasoni, Alex, "DQN Reinforcement Learning Approaches for the Combinatorially Complex game of Checkers" (2025). CMC Senior Theses. 4019.
https://scholarship.claremont.edu/cmc_theses/4019
This thesis is restricted to the Claremont Colleges current faculty, students, and staff.