

This file is in the following communities:

Faculty of Graduate Studies and Research


This file is in the following collections:

Theses and Dissertations

State Evaluation and Opponent Modelling in Real-Time Strategy Games (Open Access)


Other title
Machine Learning
Artificial Intelligence
Real-Time Strategy
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Erickson, Graham KS
Supervisor and department
Buro, Michael (Computing Science)
Examining committee member and department
Buro, Michael (Computing Science)
Schaeffer, Jonathan (Computing Science)
Musilek, Petr (Electrical and Computer Engineering)
Department of Computing Science

Degree
Master of Science
Designing competitive Artificial Intelligence (AI) systems for Real-Time Strategy (RTS) games often requires a large amount of expert knowledge, resulting in hard-coded rules for the AI system to follow. However, aspects of an RTS agent can be learned from human replay data. In this thesis, we present two ways in which information relevant to AI system design can be learned from replays, using the game StarCraft for experimentation. First, we examine the problem of constructing build-order game payoff matrices from replay data by clustering the build-orders observed in real games. Each cluster can be regarded as a strategy, and the resulting matrix can be populated with the game results from the replay data. The matrix can be used both to examine the balance of a game and to find which strategies are effective against which others. Next, we look at state evaluation and opponent modelling. We identify features that are important for predicting which player will win a given match, and learn model weights from replays using logistic regression. We also present a metric for estimating player skill, computed by comparing player performance against a battle-simulation baseline; these skill estimates can themselves be used as features in the predictive model. Testing the model on human replay data yields prediction accuracy above 70% in later game states. Additionally, our player skill estimation technique is tested on data from a StarCraft AI system tournament, showing correlation between skill estimates and tournament standings.
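To illustrate the prediction approach the abstract describes, the following is a minimal sketch of a logistic-regression win predictor trained by gradient descent. It is not the thesis's actual implementation: the feature set, loss optimization, and data pipeline here are simplified assumptions, and the feature names in the comments are hypothetical examples.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=1000):
    """Learn logistic-regression weights by gradient descent.

    X: one row per game state, columns are features (hypothetical
       examples: army value difference, economy, map control).
    y: 1 if player 1 went on to win that game, else 0.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)           # predicted win probability
        grad_w = X.T @ (p - y) / len(y)  # gradient of mean log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict_win_prob(X, w, b):
    """Return the model's win probability for each game state."""
    return sigmoid(X @ w + b)
```

Given feature vectors extracted at fixed points in each replay, the learned probability can be thresholded at 0.5 to predict the winner, matching the evaluation style reported in the abstract.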
Permission is hereby granted to the University of Alberta Libraries to reproduce single copies of this thesis and to lend or sell such copies for private, scholarly or scientific research purposes only. Where the thesis is converted to, or otherwise made available in digital form, the University of Alberta will advise potential users of the thesis of these terms. The author reserves all other publication and other rights in association with the copyright in the thesis and, except as herein before provided, neither the thesis nor any substantial portion thereof may be printed or otherwise reproduced in any material form whatsoever without the author's prior written permission.
Citation for previous publication
G. Erickson and M. Buro, "Global state evaluation in StarCraft," in Tenth Artificial Intelligence and Interactive Digital Entertainment Conference, 2014, in press.

File Details

File format: pdf (Portable Document Format)
Mime type: application/pdf
File size: 713,154 bytes
Last modified: 2015-10-12 11:35:09-06:00
Filename: Erickson_Graham_KS_2014Sept_MSc.pdf
Original checksum: 816f43a5bbab702a10c9f312fcf3458e
Well formed: true
Valid: true
File title: Introduction
Page count: 76