The field of 3D human avatar generation and animation is advancing rapidly, with a focus on creating highly realistic and customizable avatars. Researchers are exploring new methods for generating high-quality textures, animating facial expressions, and building drivable 3D head avatars from limited input data. Recent work has also enabled photorealistic 3D head avatars to be reconstructed from a single image, with improved generalization and realism, and there is growing interest in using diffusion models and other machine learning techniques to improve the accuracy and efficiency of avatar generation and animation. Notable papers in this area include SMPL-GPTexture, which presents a novel pipeline for generating high-resolution textures for 3D human avatars; THUNDER, which introduces a new supervision mechanism for training 3D talking head avatars with accurate lip-sync; SEGA, which proposes a novel approach for creating drivable 3D Gaussian head avatars from a single image; and FaceCraft4D, which generates high-quality, animatable 4D avatars from a single image.