Sensor-based Dance Coherent Action Generation Model using Deep Learning Framework


Hanzhen Jiang
Yingdong Yan

Abstract

Dance Coherent Action Generation, sometimes referred to as "Motion Synthesis", has become a popular research task in recent years: generating movements and actions for computer-generated characters in a simulated environment. Motion synthesis algorithms use motion-sensor data to produce physically believable, visually compelling, and contextually appropriate movement. The Dance Coherent Action Generation Model (DCAM) is a generative deep learning framework for producing aesthetically pleasing movements from small amounts of data. By learning an internal representation of motion dynamics, DCAM can synthesize long movement sequences in which coherent patterns are created through latent space interpolation. The framework also provides a mechanism for varying the amplitude of the generated motion, allowing for more realistic and expressive results. The proposed model obtained 93.79% accuracy, 93.79% precision, 97.75% recall, and 92.92% F1 score. DCAM balances imitation and creativity by producing novel outputs from limited input data, and it can be trained in an unsupervised manner or fine-tuned with sparse supervision. Furthermore, the framework is easily extended to handle multiple layers of abstraction and can be personalized to a particular type of movement, enabling highly individualized outputs.
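The abstract does not specify DCAM's architecture, but the latent space interpolation and amplitude control it describes can be illustrated with a minimal sketch. The example below assumes a generic sequence autoencoder over sensor-derived pose features; the module names (MotionEncoder, MotionDecoder), dimensions, and the simple scalar amplitude control are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the paper's method): blend two
# motion clips in latent space and rescale the amplitude of the result.
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """Encodes a motion sequence (T frames of joint features) into a latent vector."""
    def __init__(self, feat_dim=63, latent_dim=128):
        super().__init__()
        self.gru = nn.GRU(feat_dim, latent_dim, batch_first=True)

    def forward(self, x):                 # x: (batch, T, feat_dim)
        _, h = self.gru(x)                # h: (1, batch, latent_dim)
        return h.squeeze(0)               # (batch, latent_dim)

class MotionDecoder(nn.Module):
    """Decodes a latent vector back into a fixed-length motion sequence."""
    def __init__(self, feat_dim=63, latent_dim=128, seq_len=120):
        super().__init__()
        self.seq_len = seq_len
        self.gru = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, feat_dim)

    def forward(self, z):                 # z: (batch, latent_dim)
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1)
        h, _ = self.gru(z_seq)
        return self.out(h)                # (batch, seq_len, feat_dim)

def interpolate_motion(encoder, decoder, clip_a, clip_b, alpha=0.5, amplitude=1.0):
    """Interpolate two clips in latent space, then scale the decoded motion."""
    with torch.no_grad():
        z_a, z_b = encoder(clip_a), encoder(clip_b)
        z = (1.0 - alpha) * z_a + alpha * z_b      # latent space interpolation
        motion = decoder(z)
        return motion * amplitude                  # simple amplitude control

# Usage with random stand-ins for sensor-derived pose sequences:
enc, dec = MotionEncoder(), MotionDecoder()
clip_a = torch.randn(1, 120, 63)   # 120 frames, 21 joints x 3 coordinates
clip_b = torch.randn(1, 120, 63)
blended = interpolate_motion(enc, dec, clip_a, clip_b, alpha=0.3, amplitude=0.8)
print(blended.shape)               # torch.Size([1, 120, 63])
```

In this sketch, sliding the interpolation weight alpha between 0 and 1 traces a path through the latent space between the two source clips, which is one way coherent intermediate movements can be produced; the amplitude factor here is a post-hoc scaling, whereas the paper's mechanism may act inside the model.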

Article Details

Section
Special Issue - Scalable Dew Computing for future generation IoT systems