
Controllable 3D Character Motion Generation

  • Author / Creator
    Mu, Yuxuan
  • 3D character animation plays a pivotal role in video games and film production. One crucial aspect that captivates audiences is the visual appeal and fluidity of character motion in these digital media. However, creating realistic character motion requires significant effort and resources, including access to Motion Capture (MoCap) studios and skilled animation artists. The ability to automatically generate 3D character motion on demand would greatly empower the production of 3D digital content. Moreover, by learning to generate lifelike character motions, we can gain deeper insight into the physical motor behavior of real humans, which could ultimately lead to digital dynamic clones of ourselves. In this thesis, we take a step toward solving this problem: generating lively character motion as desired. We present controllable 3D character motion generation techniques that create human-like motion following versatile instructions, such as natural-language commands, motion examples, gamepad signals, and terrain environments.

    We begin by presenting a novel generative framework for general text-to-motion generation. A neural network learns to generate the corresponding 3D skeletal animation from a given descriptive natural-language sentence. The network learns not only the text-conditioned data distribution but also an unconditional motion prior. By exploiting this learned generative prior, the method can produce robust, lifelike motion and extend any given motion clip. We then integrate the text-to-motion framework with an off-the-shelf stylization method to generate even more expressive motions. Furthermore, we develop a hierarchical reinforcement learning system that enables the digital character to perform specific motions with spontaneous interaction in a physically simulated environment. Two policies are trained end-to-end: one explicitly leverages expert motion databases through retrieval, and the other drives the simulated character with torques. Agents trained in the physical simulator inherently learn to interact with, and remain resilient to, perturbations from the environment.
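    The pairing of a text-conditioned distribution with an unconditional motion prior, as described above, resembles classifier-free guidance sampling. The sketch below illustrates that general idea only, not the thesis's actual implementation; the function and parameter names (guided_denoise, w) are hypothetical.

```python
import numpy as np

def guided_denoise(eps_cond_fn, eps_uncond_fn, x_t, text_emb, w=2.5):
    """Blend a conditional and an unconditional denoiser prediction.

    The unconditional term acts as a motion prior that keeps samples
    plausible on its own; the text-conditioned term steers content.
    A guidance weight w > 1 amplifies the text condition
    (classifier-free guidance style).
    """
    eps_uncond = eps_uncond_fn(x_t)            # motion prior only
    eps_cond = eps_cond_fn(x_t, text_emb)      # text-conditioned
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy check with stand-in denoisers on a (frames, joints * 3) pose array.
x = np.zeros((4, 9))
out = guided_denoise(lambda x, c: np.ones_like(x),
                     lambda x: np.zeros_like(x),
                     x, text_emb=None, w=2.0)
```

    Setting w = 0 falls back to the pure motion prior, which is consistent with how a single network modeling both distributions can also extend motion clips without any text condition.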

  • Subjects / Keywords
  • Graduation date
    Fall 2024
  • Type of Item
    Thesis
  • Degree
    Master of Science
  • DOI
    https://doi.org/10.7939/r3-wfhj-1036
  • License
    This thesis is made available by the University of Alberta Library with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.