Advances in Human Motion Generation and Stylization

The field of human motion generation and stylization is advancing rapidly, with a focus on improving the realism and diversity of generated motion. Recent work has explored event cameras, whose high temporal resolution and resistance to motion blur make them well suited to conditioning realistic motion synthesis. In parallel, models increasingly incorporate stylistic attributes and multi-modal inputs, enabling more nuanced, context-dependent motion synthesis. Together, these advances point toward more sophisticated and realistic human motion generation, with applications in animation, robotics, and video production. Noteworthy papers include EvAnimate, which leverages event streams to animate static human images, and StyleMotif, which generates motion conditioned on both content and style drawn from multiple modalities. LoRA-MDM and Video Motion Graphs likewise demonstrate promising results in motion stylization and generation.
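To make the low-rank stylization idea behind LoRA-MDM concrete, the sketch below shows a generic LoRA-style linear layer: a frozen base weight plus a trainable low-rank update. This is a minimal illustration of the general LoRA technique, assuming nothing about the paper's actual architecture; all names, shapes, and ranks here are illustrative.

```python
import numpy as np

class LoRALinear:
    """A linear map with a frozen base weight W and a trainable
    low-rank update B @ A (rank r much smaller than the layer width).

    Illustrative sketch of the generic LoRA idea, not the LoRA-MDM
    implementation.
    """

    def __init__(self, d_in, d_out, rank=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in))       # frozen base weight
        self.A = 0.01 * rng.standard_normal((rank, d_in)) # trainable down-projection
        self.B = np.zeros((d_out, rank))                  # trainable up-projection,
                                                          # zero-init so the update starts at 0

    def __call__(self, x):
        # Output = frozen base projection + low-rank "style" adaptation.
        return self.W @ x + self.B @ (self.A @ x)


layer = LoRALinear(d_in=8, d_out=8, rank=2)
x = np.ones(8)
# With B zero-initialized, the LoRA branch contributes nothing initially,
# so the layer behaves exactly like the frozen base model.
assert np.allclose(layer(x), layer.W @ x)
```

Only `A` and `B` would be trained per style, which is why low-rank adapters are attractive for stylization: each style costs a small fraction of the base model's parameters.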

Sources

EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation

Dance Like a Chicken: Low-Rank Stylization for Human Motion Diffusion

Video Motion Graphs

EGVD: Event-Guided Video Diffusion Model for Physically Realistic Large-Motion Frame Interpolation

StyleMotif: Multi-Modal Motion Stylization using Style-Content Cross Fusion
