
Leveraging Large Language Models for Speeding Up Local Search Algorithms for Computing Programmatic Best Responses

  • Author / Creator
    Sadmine, Quazi Asif
  • Abstract
    Despite advantages over neural representations, such as generalizability and interpretability, programmatic representations of hypotheses and strategies face significant challenges. Algorithms that write programs encoding hypotheses for supervised learning problems, or strategies for playing games, must search very large and discontinuous spaces---the spaces that programming languages induce. Previous studies have introduced self-play algorithms that learn programs encoding game-playing strategies by computing a sequence of approximate best responses against target strategies. In this dissertation, we introduce a new approach that leverages the ability of large language models (LLMs) to write computer programs to provide initial candidate solutions in the programmatic space for best responses. These candidates can themselves be best responses, or serve as seeds from which the search for a best response begins. Empirical results in three games that are challenging for programmatic representations show that LLMs can speed up local search and facilitate the synthesis of strategies.
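    The core idea in the abstract---warm-starting a local search with an LLM-proposed candidate instead of a cold start---can be sketched in miniature. The snippet below is an illustrative toy, not the dissertation's method: the "game" is a one-dimensional payoff over integer strategies, `llm_seed` is a hypothetical stand-in for a program proposed by an LLM, and the search is plain hill climbing. All names and the payoff function are assumptions made for illustration.

    ```python
    def payoff(s, target=42):
        """Toy utility of playing integer strategy s against a fixed target strategy.

        In the real setting, this would be the payoff of a candidate program
        evaluated in self-play against the target strategy.
        """
        return -(s - target) ** 2

    def local_search(start, lo=0, hi=100):
        """Hill-climb over neighboring strategies; return (best response, steps taken)."""
        current, steps = start, 0
        while True:
            neighbors = [n for n in (current - 1, current + 1) if lo <= n <= hi]
            best = max(neighbors, key=payoff)
            if payoff(best) <= payoff(current):
                return current, steps  # local optimum: an approximate best response
            current, steps = best, steps + 1

    def llm_seed():
        """Hypothetical stand-in for an LLM proposing a near-best-response candidate."""
        return 40

    cold_response, cold_steps = local_search(0)           # cold start from a default strategy
    warm_response, warm_steps = local_search(llm_seed())  # warm start from the LLM's candidate
    ```

    Both runs reach the same best response, but the warm-started search does so in far fewer iterations, which is the speedup the abstract reports; the LLM's value here is a good starting point in an otherwise large, discontinuous space.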

  • Subjects / Keywords
  • Graduation date
    Fall 2024
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-13mv-af19
  • License
    This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.