Learning Admissible Heuristics with Neural Networks

  • Author / Creator
    Li, Tianhua
  • Machine learning has been used to solve single-agent search problems. One of its applications is to guide search algorithms with learned heuristics. However, it is difficult to provide quality guarantees for heuristics learned by a neural network: the learned heuristic can be inadmissible, and a search algorithm guided by an inadmissible heuristic may return suboptimal paths.
    This thesis introduces empirical methods to guarantee the admissibility (non-overestimation) of heuristics learned by neural networks and discusses their application as a compression technique. Because supervised learning alone cannot easily guarantee admissibility, it is achieved through the combination of three ideas: learning a classifier, adjusting the classifier quantile, and taking the minimum value of an ensemble of neural networks. The quantile and ensemble methods can learn admissible heuristics either on their own or together. These methods compress a pattern database (PDB) heuristic by several orders of magnitude with lower loss than DIV compression. Although the average heuristic value from the neural networks can be no greater than that of the original PDB, it is significantly higher than that of a comparably sized PDB obtained from the original PDB by DIV compression. (A minimal sketch of the ensemble-minimum idea follows this record.)

  • Subjects / Keywords
  • Graduation date
    Fall 2022
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-zckd-fg22
  • License
    This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
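
Illustrative sketch (not from the thesis): the snippet below demonstrates one of the three ideas named in the abstract, taking the elementwise minimum over an ensemble of learned heuristic estimators. All names here are hypothetical, and the estimators are stand-ins that add bounded symmetric noise to the true cost; the thesis instead trains neural networks on PDB values.

    import numpy as np

    # Hypothetical stand-ins for the trained networks in the thesis: each
    # "estimator" returns the true cost plus bounded noise, so any single
    # one may overestimate (i.e., be inadmissible) on some states.
    def make_dummy_estimator(seed):
        rng = np.random.default_rng(seed)
        def estimate(costs):
            return costs + rng.uniform(-1.0, 1.0, size=costs.shape)
        return estimate

    ensemble = [make_dummy_estimator(s) for s in range(5)]

    def ensemble_min_heuristic(costs):
        # Elementwise minimum over the ensemble's predictions: the minimum
        # of several independent estimates is far less likely to exceed
        # the true cost-to-go than any single estimate is.
        predictions = np.stack([est(costs) for est in ensemble])
        return predictions.min(axis=0)

    # Demo on a batch of states whose true costs-to-go are 0..9.
    true_costs = np.arange(10, dtype=float)
    h = ensemble_min_heuristic(true_costs)
    print("overestimates:", int((h > true_costs).sum()), "of", len(h))

Under the stand-in assumptions (independent, symmetric, continuous noise), the minimum of five estimates overestimates the true cost of a given state with probability roughly (1/2)^5, which illustrates why an ensemble minimum is far less likely to be inadmissible than any single network.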