- Fall 2019: Artificial agents have been shown to learn to communicate when needed to complete a cooperative task. Some level of language structure (e.g., compositionality) has been found in the learned communication protocols. This observed structure is often the result of specific environmental pressures...
- Spring 2016: Games have been used as a testbed for artificial intelligence research since the earliest conceptions of computing itself. The twin goals of defeating human professional players at games, and of solving games outright by creating an optimal computer agent, have helped to drive practical ...
- Fall 2013: Given nothing but a generative model of the environment, Monte Carlo Tree Search techniques have recently shown spectacular results on domains previously thought to be intractable. In this thesis we try to develop generic techniques for temporal abstraction inside MCTS that would allow the...
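The planning setting this abstract describes, where MCTS needs only a generative model to sample moves and outcomes, can be illustrated with a minimal UCT sketch on a toy take-away game (players alternately remove 1 or 2 stones; whoever takes the last stone wins). The game, constants, and function names here are illustrative assumptions, not from the thesis:

```python
import math
import random

def legal_moves(pile):
    """In this toy take-away game, a player removes 1 or 2 stones."""
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile, self.parent, self.move = pile, parent, move
        self.children = []
        self.untried = legal_moves(pile)
        self.wins = 0.0    # wins for the player who moved INTO this node
        self.visits = 0

def rollout(pile):
    """Random playout; returns 1 if the player who just moved wins."""
    winner, mover = 1, 0          # 0 = player to move at this position
    while pile:
        pile -= random.choice(legal_moves(pile))
        winner, mover = mover, 1 - mover
    return 1.0 if winner == 1 else 0.0

def uct_search(pile, iters=4000, c=1.4):
    root = Node(pile)
    for _ in range(iters):
        node = root
        # Selection: descend with UCB1 while fully expanded.
        while not node.untried and node.children:
            node = max(node.children,
                       key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # Expansion: add one untried child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, node, m)
            node.children.append(child)
            node = child
        # Simulation from the new node, then backpropagation,
        # flipping the win/loss perspective at each level.
        r = rollout(node.pile)
        while node:
            node.visits += 1
            node.wins += r
            r = 1.0 - r
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move
```

Notice that the searcher never inspects the game's rules beyond `legal_moves` and random playouts; that is the "generative model only" interface the abstract refers to. From a pile of 4, the search should settle on taking 1 stone, leaving the opponent a losing pile of 3.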
- Spring 2020: Reinforcement learning (RL) is a powerful learning paradigm in which agents can learn to maximize sparse and delayed reward signals. Although RL has had many impressive successes in complex domains, learning can take hours, days, or even years of training data. A major challenge of contemporary...
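The paradigm this abstract describes, learning from a sparse, delayed reward, can be sketched with tabular Q-learning on a toy corridor task: the agent starts at one end, only reaching the far end pays +1, and every intermediate step pays nothing. The environment and hyperparameters here are illustrative assumptions, not from the thesis:

```python
import random

GOAL = 4                # states 0..4 on a line; +1 reward only at the goal
ACTIONS = (-1, +1)      # left, right

def step(s, a):
    """Deterministic corridor dynamics with a terminal goal state."""
    s2 = min(max(s + a, 0), GOAL)
    done = s2 == GOAL
    return s2, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(GOAL + 1)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy (random tie-break so the agent can't get stuck)
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2, r, done = step(s, ACTIONS[a])
            # TD update toward r + gamma * max_a' Q(s', a')
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

The delayed reward is visible in the learned values: the greedy action is "right" in every state, and the value of moving right from the start approaches gamma cubed (about 0.729), the +1 goal reward discounted over the four steps needed to reach it.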
- Spring 2016: Game theoretic solution concepts, such as Nash equilibrium strategies that are optimal against worst-case opponents, provide guidance in finding desirable autonomous agent behaviour. In particular, we wish to approximate solutions to complex, dynamic tasks, such as negotiation or bidding in...
- Fall 2015: Extensive-form games are a powerful framework for modeling sequential multi-agent interactions. In extensive-form games with imperfect information, Nash equilibria are generally used as a solution concept, but computing a Nash equilibrium can be intractable in large games. Instead, a variety of...
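One standard family of iterative methods in this literature is regret minimization, which can be sketched with regret matching in self-play on rock-paper-scissors: each player plays actions in proportion to accumulated positive regret, and the time-averaged strategies approach the game's Nash equilibrium (uniform play). The game, seeding, and iteration count are illustrative assumptions, not from the thesis:

```python
ACTIONS = 3                          # rock, paper, scissors
# PAYOFF[a][b]: payoff to a player choosing a against an opponent choosing b
PAYOFF = [[ 0, -1,  1],
          [ 1,  0, -1],
          [-1,  1,  0]]

def strategy(regrets):
    """Regret matching: play actions in proportion to positive regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def selfplay(iters=100000):
    # Seed player 0 with a slight regret bias so the deterministic
    # expected-update dynamics move away from the trivial fixed point.
    regrets = [[1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy(regrets[p]) for p in (0, 1)]
        for p in (0, 1):
            opp = strats[1 - p]
            # expected utility of each pure action vs the opponent's mix
            util = [sum(PAYOFF[a][b] * opp[b] for b in range(ACTIONS))
                    for a in range(ACTIONS)]
            ev = sum(strats[p][a] * util[a] for a in range(ACTIONS))
            for a in range(ACTIONS):
                regrets[p][a] += util[a] - ev
                strat_sum[p][a] += strats[p][a]
    # the AVERAGE strategy, not the current one, converges to equilibrium
    return [[s / iters for s in strat_sum[p]] for p in (0, 1)]
```

The key property being illustrated is that the current strategies oscillate, while the averages converge: this "average of no-regret play approximates equilibrium" idea underlies the counterfactual-regret methods used for large extensive-form games.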