The field of 3D avatar generation and animation is advancing rapidly, with a clear trend toward greater realism, editability, and control throughout avatar creation and manipulation. Recent work focuses on overcoming limitations in deformation flexibility and on synthesizing lifelike animation, particularly for facial expressions, hair dynamics, and lighting conditions. Innovations in 3D Gaussian Splatting (3DGS) and the integration of 3D Morphable Models (3DMM) with texture maps are at the forefront, enabling more precise control over facial attributes and expressions while preserving individual identity. There is also growing emphasis on synthesizing talking head videos that accurately capture complex facial dynamics and hair movement, and on generating personalized avatars from a single portrait with continuous, disentangled latent spaces for intuitive attribute manipulation.
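For context, the 3DMM-based methods in this line of work build on the standard linear morphable-model formulation, in which a face mesh is a mean shape plus identity and expression offsets (notation here follows the common convention; the specific basis matrices vary by model):

$$
S(\boldsymbol{\alpha}, \boldsymbol{\beta}) = \bar{S} + B_{\mathrm{id}}\,\boldsymbol{\alpha} + B_{\mathrm{exp}}\,\boldsymbol{\beta}
$$

Here $\bar{S}$ is the mean shape and $B_{\mathrm{id}}$, $B_{\mathrm{exp}}$ are identity and expression bases; because $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ can be edited independently, expressions can be manipulated while the identity coefficients, and thus the individual's likeness, stay fixed.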
Noteworthy papers include:
- A novel approach that enhances the editability and animation control of 3D head avatars through 3D Gaussian Splatting, offering improved illumination control and flexible texture editing (a sketch of the 3DGS primitive parameterization follows this list).
- UniAvatar, which introduces comprehensive motion and lighting control for lifelike audio-driven talking head generation, outperforming existing methods on both fronts.
- DEGSTalk, a method for realistic talking face synthesis that preserves long hair, improving realism and synthesis quality on complex facial dynamics.
- PERSE, a method for creating personalized 3D generative avatars from a single portrait, enabling continuous and disentangled facial attribute editing while preserving identity.
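To make the 3DGS entries above concrete, the sketch below shows how a single Gaussian primitive is typically parameterized in splatting-based avatars: a mean position plus an anisotropic covariance factored into rotation and scale (opacity and color are omitted). This is the generic formulation from the original 3DGS work, not code from any paper listed here, and the function names are illustrative.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    q = np.asarray(q, dtype=float)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

def gaussian_covariance(scale, quat):
    """Build the anisotropic covariance Sigma = R S S^T R^T used in 3DGS.

    Factoring Sigma into a rotation R and a diagonal scale S keeps it
    positive semi-definite during optimization, which is why editing and
    deformation methods act on (scale, rotation) rather than on Sigma.
    """
    R = quat_to_rotmat(quat)
    S = np.diag(np.asarray(scale, dtype=float))
    return R @ S @ S.T @ R.T

# Example: an ellipsoidal splat stretched along x, rotated 90 deg about z.
sigma = gaussian_covariance(scale=[0.10, 0.02, 0.02],
                            quat=[np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
```

Because each splat is just this small parameter set, avatar methods can deform, relight, or retexture a head by updating per-Gaussian parameters directly, which is the flexibility the editability-focused papers above exploit.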