Advances in 3D Human Avatar Generation and Animation

The field of 3D human avatar generation and animation is advancing rapidly, with a focus on creating highly realistic and customizable avatars. Researchers are exploring new methods for generating high-quality textures, animating facial expressions, and building drivable 3D head avatars from limited input data. Recent work has also enabled photorealistic 3D head avatars from a single image, with improved generalization and realism, and there is growing interest in using diffusion models and other machine learning techniques to improve the accuracy and efficiency of avatar generation and animation. Notable papers in this area include SMPL-GPTexture, which presents a pipeline for generating high-resolution textures for 3D human avatars; THUNDER, which introduces an analysis-by-audio-synthesis supervision mechanism for training 3D talking head avatars with accurate lip-sync; SEGA, which proposes an approach for creating drivable 3D Gaussian head avatars from a single image; and FaceCraft4D, which generates high-quality, animatable 4D avatars from a single image.
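To make the "drivable 3D Gaussian head avatar" idea concrete, below is a minimal, illustrative sketch of the common pattern these works build on: a set of 3D Gaussians defined in a canonical head space, with a small network that offsets their positions given expression coefficients (e.g., 3DMM/FLAME-style parameters). The class name, shapes, and the deformation MLP are assumptions for exposition, not the architecture of SEGA or any specific paper.

```python
# Illustrative sketch of a drivable 3D Gaussian head avatar (assumed
# design, not a specific paper's method): canonical per-Gaussian
# parameters plus an expression-conditioned deformation network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DrivableGaussianHead(nn.Module):
    def __init__(self, num_gaussians: int = 10_000, expr_dim: int = 50):
        super().__init__()
        # Per-Gaussian parameters in canonical (neutral-expression) space.
        self.means = nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)
        self.log_scales = nn.Parameter(torch.zeros(num_gaussians, 3))
        self.quats = nn.Parameter(
            torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(num_gaussians, 1)
        )
        self.colors = nn.Parameter(torch.rand(num_gaussians, 3))
        self.opacity_logits = nn.Parameter(torch.zeros(num_gaussians, 1))
        # Hypothetical deformation MLP: maps expression coefficients to
        # per-Gaussian position offsets that "drive" the avatar.
        self.deform = nn.Sequential(
            nn.Linear(expr_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_gaussians * 3),
        )

    def forward(self, expr: torch.Tensor) -> dict:
        """expr: (expr_dim,) expression coefficients for one frame."""
        offsets = self.deform(expr).view(-1, 3)
        return {
            "means": self.means + offsets,        # deformed positions
            "scales": self.log_scales.exp(),      # keep scales positive
            "rotations": F.normalize(self.quats, dim=-1),  # unit quaternions
            "colors": self.colors.clamp(0.0, 1.0),
            "opacities": torch.sigmoid(self.opacity_logits),
        }


# Usage: drive the canonical avatar with one expression vector. In a
# full pipeline the returned dict would feed a differentiable Gaussian
# rasterizer, and photometric losses on the render would train both the
# Gaussians and the deformation network.
model = DrivableGaussianHead()
gaussians = model(torch.zeros(50))
print(gaussians["means"].shape)  # torch.Size([10000, 3])
```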

Sources

SMPL-GPTexture: Dual-View 3D Human Texture Estimation using Text-to-Image Generation Models

Supervising 3D Talking Head Avatars with Analysis-by-Audio-Synthesis

SEGA: Drivable 3D Gaussian Head Avatar from a Single Image

ExFace: Expressive Facial Control for Humanoid Robots with Diffusion Transformers and Bootstrap Training

3D Gaussian Head Avatars with Expressive Dynamic Appearances by Compact Tensorial Representations

FaceCraft4D: Animated 3D Facial Avatar Generation from a Single Image

Shape-Guided Clothing Warping for Virtual Try-On

Bringing Diversity from Diffusion Models to Semantic-Guided Face Asset Generation

Text-based Animatable 3D Avatars with Morphable Model Alignment

3DV-TON: Textured 3D-Guided Consistent Video Try-on via Diffusion Models

Bolt: Clothing Virtual Characters at Scale
