Date of Award
2025
Degree Type
Open Access Dissertation
Degree Name
Information Systems and Technology, PhD
Program
Center for Information Systems and Technology
Advisor/Supervisor/Committee Chair
Yan Li
Dissertation or Thesis Committee Member
Wallace Chipidza
Dissertation or Thesis Committee Member
Hengwei Zhang
Terms of Use & License Information
Rights Information
© 2025 Yang Zhong
Keywords
deep reinforcement learning, facility location, maximum coverage location problem, operations research, site selection
Abstract
The Maximal Covering Location Problem (MCLP) is a classic Combinatorial Optimization Problem (COP) in spatial optimization and operations research, predominantly used for strategic public facility placement. The model’s objective is to determine the optimal locations for a fixed number of facilities so as to serve the largest possible demand within a desired service distance. In its original form, the MCLP is formulated over discrete facility candidate points. However, in many practical situations, facilities can be positioned anywhere in a continuous region: in environmental planning or wireless network design, for example, prospective facility placements may lie in open areas rather than at predetermined spots. This continuous variant, known as the Continuous MCLP (C-MCLP) or Planar Maximal Covering Location Problem (PMCLP), presents unique computational challenges. In this thesis, I propose a novel hybrid approach that combines a custom Candidate Location Set (CLS) generation technique with a Deep Reinforcement Learning (DRL) model to address the C-MCLP. Unlike previous studies that apply DRL to the discrete MCLP, my approach offers a streamlined solution by discretizing the problem model and making sequential decisions on facility placement within a given convex continuous region while maximizing the total covered demand. The model’s architecture explicitly accounts for complex spatial interactions between facility locations and demand points, enabling it to optimize placement decisions through iterative training. The proposed method was evaluated against solutions from a commercial solver (CPLEX) and a heuristic method (a Genetic Algorithm, GA). Results demonstrate that my DRL model effectively solves the C-MCLP: it identifies better solutions than the GA-based heuristic and achieves faster computation times than the solver-based (CPLEX) solution.
This work advances the application of deep reinforcement learning in spatial optimization and offers a new perspective on solving location covering problems in continuous spaces. The primary contributions of this dissertation include the development of a novel methodology for solving a variant of a well-known combinatorial optimization problem, providing both theoretical advancement in spatial optimization and practical implications for urban planning, emergency management, and other domains where optimal location decisions are crucial. Future research directions include applying DRL to other MCLP variants and spatial optimization models, with a focus on dynamic constraints and uncertainty in real-world scenarios.
ISBN
9798315738411
Recommended Citation
Zhong, Yang. (2025). DeepGridMCLP: A Deep Reinforcement Learning Approach to Solve the Maximal Covering Location Problem with Facilities in Continuous Regions. CGU Theses & Dissertations, 981. https://scholarship.claremont.edu/cgu_etd/981.