ERA

Permanent link (DOI): https://doi.org/10.7939/R3GT5FV8T

Communities

This file is in the following communities:

Graduate Studies and Research, Faculty of

Collections

This file is in the following collections:

Theses and Dissertations

Search, Abstractions and Learning in Real-Time Strategy Games (Open Access)

Descriptions

Subject/Keyword
Heuristic Search
Real-Time Strategy games
Machine Learning
Abstractions
Type of item
Thesis
Degree grantor
University of Alberta
Author or creator
Barriga Richards, Nicolas A
Supervisor and department
Buro, Michael (Computing Science)
Examining committee member and department
Ontanon, Santiago (Drexel University)
Mueller, Martin (Computing Science)
Ray, Nilanjan (Computing Science)
Bulitko, Vadim (Computing Science)
Buro, Michael (Computing Science)
Department
Department of Computing Science
Date accepted
2017-09-27T11:48:24Z
Graduation date
Fall 2017
Degree
Doctor of Philosophy
Degree level
Doctoral
Abstract
Real-time strategy (RTS) games are war-simulation video games in which players perform several simultaneous tasks, such as gathering and spending resources, building a base, and controlling units in combat against an enemy force. RTS games have recently drawn the interest of the game AI research community due to their interesting sub-problems and the availability of professional human players. Large state and action spaces make standard adversarial search techniques impractical. Sampling the action space can lead to strong tactical performance in smaller scenarios, but does not scale to the sizes used in commercial RTS games. Using state and/or action abstractions supports solid strategic decision making, but tactical performance suffers because of the simplifications the abstractions introduce. Combining both techniques is not straightforward, due to the real-time constraints involved. We first present Puppet Search, a search framework that employs scripts as action abstractions. It produces agents with strong strategic awareness as well as adequate tactical performance; the tactical performance comes from incorporating sub-problem solutions, such as pathfinding and build-order search, into the scripts. We then split the available computation time between this strategic search and NaiveMCTS, a strong tactical search algorithm that samples the low-level action space. This second search refines the output of the first by reassigning actions to units engaged in combat with the opponent's units. Finally, we present a deep convolutional neural network (CNN) that can accurately predict Puppet Search output in a fraction of the time, leaving more time available for tactical search. Experiments in StarCraft: Brood War show that Puppet Search outperforms its component scripts, while in microRTS it also surpasses other state-of-the-art agents.
Further experimental results show that the combined Puppet Search/NaiveMCTS algorithm achieves higher win rates than either of its two components alone and than other state-of-the-art microRTS agents. Replacing Puppet Search with a CNN yields even higher performance. To the best of our knowledge, this is the first successful application of a convolutional network to play a full RTS game on standard-sized game maps; previous work has focused on sub-problems, such as combat, or on very small maps. We propose that further work focus on partial observability and on CNNs for tactical decision making. Finally, we explore possible uses in other game genres and potential applications in the game development process itself, such as playtesting and game balancing.
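The two-level scheme described in the abstract — spend part of each frame's time budget on a strategic search over script choices, then refine combat units' actions with a tactical search in the remaining time — can be sketched as follows. This is an illustrative stand-in only, not the thesis code: all names (`Script`, `State`, `frame_actions`, `tactical_refine`) are hypothetical, the strategic phase is reduced to a one-ply evaluation over scripts rather than a full Puppet Search, and the tactical phase is a placeholder for NaiveMCTS.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Script:
    """Stand-in for a scripted strategy (abstract action)."""
    name: str
    value: float
    def evaluate(self, state):          # stand-in for look-ahead evaluation
        return self.value
    def actions(self, state):           # one abstract action per unit
        return {u: f"{self.name}-move" for u in state.units}

@dataclass
class State:
    units: list
    in_combat: set = field(default_factory=set)
    def combat_units(self):
        return [u for u in self.units if u in self.in_combat]

def tactical_refine(state, unit):
    """Placeholder for a tactical sampler such as NaiveMCTS."""
    return "attack-nearest"

def frame_actions(state, scripts, budget_s=0.05, strategic_frac=0.5):
    """Split one frame's budget: choose the best script (strategic phase),
    then reassign actions to units in combat (tactical phase)."""
    t0 = time.monotonic()

    # Phase 1: strategic choice among scripts, within its share of the budget.
    strat_deadline = t0 + budget_s * strategic_frac
    best, best_val = scripts[0], float("-inf")
    for s in scripts:
        if time.monotonic() >= strat_deadline:
            break
        v = s.evaluate(state)
        if v > best_val:
            best, best_val = s, v
    actions = best.actions(state)

    # Phase 2: tactical refinement for combat units with the remaining time.
    frame_deadline = t0 + budget_s
    for u in state.combat_units():
        if time.monotonic() >= frame_deadline:
            break
        actions[u] = tactical_refine(state, u)
    return actions
```

Units not in combat keep the script-assigned action, so the strategic layer still dictates the overall plan while the tactical layer only overrides decisions where precise unit control matters most.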
Language
English
DOI
doi:10.7939/R3GT5FV8T
Rights
This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for the purpose of private, scholarly or scientific research. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.
Citation for previous publication
Nicolas A. Barriga, Marius Stanescu, and Michael Buro. Building placement optimization in real-time strategy games. In Workshop on Artificial Intelligence in Adversarial Real-Time Games, AIIDE, 2014.
Nicolas A. Barriga, Marius Stanescu, and Michael Buro. Parallel UCT search on GPUs. In IEEE Conference on Computational Intelligence and Games (CIG), 2014.
Nicolas A. Barriga, Marius Stanescu, and Michael Buro. Puppet Search: Enhancing scripted behaviour by look-ahead search with applications to Real-Time Strategy games. In Eleventh Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), pages 9–15, 2015.
Nicolas A. Barriga, Marius Stanescu, and Michael Buro. Combining strategic learning and tactical search in Real-Time Strategy games. Accepted for presentation at the Thirteenth Annual AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE), 2017.
Nicolas A. Barriga, Marius Stanescu, and Michael Buro. Game tree search based on non-deterministic action scripts in real-time strategy games. IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG), 2017.

File Details

Date Uploaded
Date Modified
2017-09-27T17:48:24.758+00:00
Characterization
File format: pdf (PDF/A)
Mime type: application/pdf
File size: 6778991 bytes
Last modified: 2017-11-08 16:42:44-07:00
Filename: main.pdf
Original checksum: befa7ee59f7ec0c56aefca637e5f163c