




This file is in the following communities:
Faculty of Graduate Studies and Research

This file is in the following collections:
Theses and Dissertations

Using Response Functions for Strategy Training and Evaluation (Open Access)


Subjects / Keywords
Artificial Intelligence
Imperfect Information
Strategy Evaluation
Game Theory
Degree grantor
University of Alberta
Author or creator
Davis, Trevor
Supervisor and department
Bowling, Michael (Computing Science)
Examining committee member and department
Bowling, Michael (Computing Science)
Szafron, Duane (Computing Science)
Buro, Michael (Computing Science)
Department of Computing Science

Date accepted
Graduation date
Degree
Master of Science
Degree level
Master's
Extensive-form games are a powerful framework for modeling sequential multi-agent interactions. In extensive-form games with imperfect information, Nash equilibria are generally used as a solution concept, but computing a Nash equilibrium can be intractable in large games. Instead, a variety of techniques are used to find strategies that approximate Nash equilibria. Traditionally, an approximate Nash equilibrium strategy is evaluated by measuring the strategy's worst-case performance, or exploitability. However, because exploitability fails to capture how likely the worst case is to be realized, it provides only a limited picture of strategy strength, and there is extensive empirical evidence that exploitability can correlate poorly with one-on-one performance against a variety of opponents. In this thesis, we introduce a class of adaptive opponents called pretty-good responses that exploit a strategy but have only limited exploitative power. By playing a strategy against a variety of counter-strategies created with pretty-good responses, we get a more complete picture of strategy strength than that offered by exploitability alone. In addition, we show how standard no-regret algorithms can be modified to learn strategies that are strong against adaptive opponents. We prove that this technique can produce optimal strategies for playing against pretty-good responses. We empirically demonstrate the effectiveness of the technique by finding static strategies that are strong against Monte Carlo opponents who learn by sampling our strategy, including the UCT Monte Carlo tree search algorithm.
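The exploitability measure discussed in the abstract can be illustrated with a minimal sketch in a zero-sum matrix game rather than a full extensive-form game. Here rock-paper-scissors stands in as a toy example; the function name `exploitability` and the sample strategies are illustrative, not taken from the thesis:

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (win = +1, loss = -1).
A = np.array([
    [ 0, -1,  1],   # rock   vs rock / paper / scissors
    [ 1,  0, -1],   # paper
    [-1,  1,  0],   # scissors
])

def exploitability(strategy, payoff):
    """Worst-case loss of `strategy` against a best-responding opponent.

    The opponent observes `strategy` and picks the column that minimizes
    the row player's expected payoff; exploitability is how far that
    worst-case value falls below the game value (0 for a symmetric
    zero-sum game like rock-paper-scissors).
    """
    expected = strategy @ payoff      # row player's payoff per opponent action
    return 0.0 - expected.min()      # game value minus worst-case value

uniform = np.array([1/3, 1/3, 1/3])   # the equilibrium strategy
biased = np.array([0.5, 0.25, 0.25])  # over-plays rock

print(exploitability(uniform, A))  # 0.0  -- unexploitable
print(exploitability(biased, A))   # 0.25 -- paper punishes the rock bias
```

The biased strategy illustrates the abstract's point: a best responder fully realizes the 0.25 worst case, but a weaker adaptive opponent (a "pretty-good response") may not, which is why exploitability alone gives an incomplete picture of strategy strength.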
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication
Trevor Davis, Neil Burch, and Michael Bowling. Using response functions to measure strategy strength. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI), 2014.

File Details

File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 532,883 bytes
Last modified: 2016-06-24 17:45:36-06:00
Filename: Davis_Trevor_R_201506_MSc.pdf
Original checksum: a04c1aa70a49ff6560a1e28758830dd9
Well formed: true
Valid: true
File title: Introduction
Page count: 76