Joint Level Generation and Translation Using Gameplay Videos
-
- Author / Creator
- Mirgati, Seyyede Negar
-
Procedural Content Generation via Machine Learning (PCGML) faces a significant hurdle that sets it apart from other ML problems such as image or text generation: limited annotated data. For example, many existing methods for level generation via machine learning require a secondary representation beyond level images. However, the current methods for obtaining such representations are laborious and time-consuming, which compounds the limited data problem.
In this work, we aim to address the limited game level data problem by utilizing gameplay videos of human-annotated games to train a novel multi-tail framework that performs level translation and generation simultaneously. The translation tail of our framework converts gameplay video frames into an equivalent secondary representation, while its generation tail produces novel level segments. Evaluation results and comparisons between our framework and baselines suggest that combining the level generation and translation tasks can improve performance on both. Additionally, we have conducted experiments to evaluate the generalizability of our model across different scenarios. Our findings represent a possible solution to the problem of limited annotated level data, and we demonstrate the potential for future iterations of our model to generalize to unseen games.
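The multi-tail idea described in the abstract can be illustrated with a minimal sketch: one shared encoder feeds two task-specific heads, one mapping video-frame features to a secondary (tile-level) representation and one producing novel level segments. All names, dimensions, and the use of plain linear layers here are illustrative assumptions, not the thesis's actual architecture or training procedure.

```python
import numpy as np

# Hedged sketch of a multi-tail model: a shared encoder feeds two
# task-specific tails. Sizes and layer choices are assumptions made
# for illustration only.
rng = np.random.default_rng(0)

FRAME_DIM, HIDDEN_DIM, TILE_VOCAB = 64, 32, 16

W_enc = rng.standard_normal((FRAME_DIM, HIDDEN_DIM)) * 0.1    # shared encoder
W_trans = rng.standard_normal((HIDDEN_DIM, TILE_VOCAB)) * 0.1 # translation tail
W_gen = rng.standard_normal((HIDDEN_DIM, TILE_VOCAB)) * 0.1   # generation tail

def encode(features):
    """Shared representation used by both tails."""
    return np.tanh(features @ W_enc)

def translate(frame_features):
    """Translation tail: gameplay frame features -> tile-type logits."""
    return encode(frame_features) @ W_trans

def generate(latent_features):
    """Generation tail: latent features -> novel level-segment logits."""
    return encode(latent_features) @ W_gen

frame = rng.standard_normal(FRAME_DIM)
trans_logits = translate(frame)
gen_logits = generate(frame)
print(trans_logits.shape, gen_logits.shape)
```

Because both tails share the encoder, a gradient from either task's loss updates the shared weights, which is one plausible mechanism for the cross-task improvement the abstract reports.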
- Subjects / Keywords
-
- Graduation date
- Fall 2024
-
- Type of Item
- Thesis
-
- Degree
- Master of Science
-
- License
- This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.