Generating motion for 3D avatars is a common task across several disciplines. It is a complex process, and the most common methods fall back on a large bank of animation files to cover every situation. This approach is out of reach for most end users because of the effort and expense of creating those animation files.
This project puts forward a novel method of motion generation that requires a far smaller bank of animation files than the conventional approach. It would achieve this through a multi-step process that begins by using the initial kinematic rig to generate desired next positions for each of its points. These desired positions would be fed into a system based on a Generative Adversarial Network (GAN), trained on relevant keyframes, to generate new positions. Separately, the system would employ a fuzzy search algorithm to match contextual world data against a database of profiles. As an alternative, this proposal also considers replacing the fuzzy search with deep Q-learning.
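The fuzzy search step could take many forms; as a minimal illustrative sketch (the encoding of "contextual world data" as a feature vector, the profile names, and the use of cosine similarity are all assumptions, not part of the proposal), matching a context against a profile database might look like:

```python
import numpy as np

def fuzzy_match(context_vector, profile_db):
    """Return the name of the profile whose feature vector is closest
    to the observed contextual world data, by cosine similarity.

    context_vector: (d,) numeric encoding of the current world context
    profile_db:     dict mapping profile name -> (d,) feature vector
    """
    def cosine(a, b):
        # Small epsilon guards against division by zero for null vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(profile_db, key=lambda name: cosine(context_vector, profile_db[name]))
```

A context strongly aligned with one profile's vector would select that profile, with the similarity score giving the "fuzziness" of the match.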
The final step would be to layer both sets of positions together onto the initial rig, taking inspiration from traditional animation layering techniques.
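The layering step could be sketched as a weighted blend of the two generated pose deltas onto the starting rig. This is only an illustration under assumed conventions: the proposal does not fix a blend rule, and the joint-array layout, delta representation, and weights here are all hypothetical.

```python
import numpy as np

def layer_poses(base_pose, gan_offsets, context_offsets, w_gan=0.6, w_ctx=0.4):
    """Blend two generated pose deltas onto the starting rig.

    base_pose:       (n_joints, 3) joint positions of the initial kinematic rig
    gan_offsets:     (n_joints, 3) position deltas from the GAN stage
    context_offsets: (n_joints, 3) position deltas from the profile-matching stage
    w_gan, w_ctx:    illustrative blend weights (sum to 1 here, but need not)
    """
    return base_pose + w_gan * gan_offsets + w_ctx * context_offsets
```

In practice traditional animation layering often blends per joint or per channel rather than with global weights; the uniform weights above are just the simplest instance.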
This project will seek to validate its results by introducing a new metric, the Fréchet Motion Distance, inspired by the existing Fréchet Inception Distance. This metric will allow generated motion to be compared quantitatively against known real motion.
After graduating from Staffordshire University in 2019 with a BSc in Computer Game Design and Production, I gained experience developing software for companies such as ADT and Xerox.
I returned to academic life in 2020 to earn a master's degree from Kingston and have since progressed to a PhD.