The Baseline Approach to Agent Evaluation

  • Author / Creator
    Davidson, Joshua
  • Abstract
    Efficient, unbiased estimation of agent performance is essential for drawing statistically significant conclusions in multi-agent domains with high outcome variance. Naive Monte Carlo estimation is often insufficient, as it can require a prohibitive number of samples, especially when evaluating slow-acting agents. Classical variance reduction techniques typically require careful encoding of domain knowledge or are intrinsically complex. In this work, we introduce the baseline method for creating unbiased estimators in zero-sum, multi-agent, high-variance domains. We provide two examples of estimators created using this approach: one that leverages computer agents in self-play, and another that utilizes existing player data. We show empirically that these baseline estimators are competitive with state-of-the-art techniques for efficient evaluation in variants of computer poker, a zero-sum domain with notably high outcome variance. Additionally, we demonstrate how simple, yet effective, baseline estimators can be created and deployed in domains where efficient evaluation techniques are currently non-existent.
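    The idea sketched in the abstract can be illustrated with a toy simulation. The snippet below is a hedged, hypothetical sketch (not the thesis's actual implementation): a hand's payoff mixes high-variance "luck of the deal" with a small skill edge, and a baseline value with known zero expectation is subtracted from each payoff. Because the baseline's expectation is known, the corrected estimator stays unbiased while most of the deal-driven variance cancels. All names (`play_hand`, `baseline_value`, `estimate`) and the toy payoff model are illustrative assumptions.

    ```python
    import random

    def play_hand(deal, rng):
        # Agent's payoff on one hand: shared luck from the deal,
        # plus a small skill edge (0.05) and some play-level noise.
        return deal + 0.05 + rng.gauss(0, 0.1)

    def baseline_value(deal):
        # Baseline: the value a reference strategy would earn on the
        # same deal. Constructed here so its expectation over deals
        # is known to be exactly zero.
        return deal

    def estimate(n_hands, use_baseline, seed=0):
        # Monte Carlo estimate of the agent's expected payoff per hand.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_hands):
            deal = rng.gauss(0, 5.0)  # high-variance luck of the draw
            payoff = play_hand(deal, rng)
            if use_baseline:
                # Subtracting a zero-mean baseline leaves the estimator
                # unbiased but cancels the deal-driven variance.
                payoff -= baseline_value(deal)
            total += payoff
        return total / n_hands
    ```

    Both `estimate(n, False)` and `estimate(n, True)` converge to the true edge of 0.05, but the baseline-corrected version does so with far fewer hands, since its per-sample variance is only the residual play noise rather than the full luck of the deal.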

  • Graduation date
    Spring 2014
  • Degree
    Master of Science
  • License
    This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.