Recent research in scalable vector graphics (SVG) generation and sign language production (SLP) has advanced significantly, with a shared focus on the quality, efficiency, and semantic fidelity of generated content. In SVG generation, models increasingly pair component-based representations with autoregressive transformers, decoding a graphic as a sequence of discrete drawing commands to improve both computational efficiency and output quality (a minimal sketch of this decoding loop appears below). These models also benefit from new large-scale, diverse datasets that include color information, which is crucial for realistic SVG creation. In addition, integrating large language models (LLMs) with learnable semantic tokens is proving to be a promising direction for generating complex vector graphics with improved semantic alignment and fewer occlusion artifacts. In SLP, novel diffusion frameworks model the relative positions of joints rather than their absolute coordinates, improving the naturalness and accuracy of generated sign poses; by disentangling and controlling attributes of these joint associations, they produce more realistic and semantically consistent sign language videos. Overall, the field is moving toward more sophisticated, data-driven, and semantically aware models that push the boundaries of what is possible in both domains.
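To make the autoregressive formulation concrete, here is a minimal sketch of decoder-only next-command prediction over a tokenized SVG vocabulary. Everything in it (the vocabulary size, the `SVGDecoder` class, the special tokens) is an illustrative assumption, not the architecture of any specific published model.

```python
# Illustrative sketch: decoder-only autoregressive generation of SVG commands.
# Vocabulary, dimensions, and class names are assumptions for exposition.
import torch
import torch.nn as nn

VOCAB_SIZE = 512   # assumed: discretized SVG commands + coordinate/color bins
MAX_LEN = 256      # assumed maximum command-sequence length
BOS, EOS = 0, 1    # assumed special tokens

class SVGDecoder(nn.Module):
    """Causal transformer that predicts the next SVG command token."""
    def __init__(self, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        self.tok = nn.Embedding(VOCAB_SIZE, d_model)
        self.pos = nn.Embedding(MAX_LEN, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB_SIZE)

    def forward(self, tokens):  # tokens: (B, T) integer ids
        T = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(T, device=tokens.device))
        # Additive causal mask: -inf above the diagonal blocks future tokens.
        mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        return self.head(self.blocks(x, mask=mask))  # (B, T, VOCAB_SIZE)

@torch.no_grad()
def sample(model, temperature=0.8):
    """Sample one command sequence token by token until EOS."""
    seq = torch.tensor([[BOS]])
    for _ in range(MAX_LEN - 1):
        logits = model(seq)[:, -1] / temperature
        nxt = torch.multinomial(logits.softmax(-1), 1)
        seq = torch.cat([seq, nxt], dim=1)
        if nxt.item() == EOS:
            break
    return seq.squeeze(0)  # ids to be detokenized back into <path> commands

model = SVGDecoder()
print(sample(model).shape)
```

Decoding a graphic as a flat token sequence is what lets such models reuse standard transformer machinery; the component-based variants differ mainly in how the vocabulary groups primitives.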
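The "learnable semantic tokens" idea can be illustrated in the spirit of soft prompting: a small set of trainable embeddings is prepended to the input of a frozen backbone, and only those embeddings are optimized. The toy backbone, the dimensions, and the `SemanticTokenAdapter` name below are assumptions for exposition, not the interface of any actual LLM.

```python
# Illustrative sketch: trainable "semantic" embeddings prepended to a frozen
# backbone, in the spirit of soft prompting. All names are hypothetical.
import torch
import torch.nn as nn

class SemanticTokenAdapter(nn.Module):
    def __init__(self, backbone, d_model=256, n_semantic=8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():  # freeze pretrained weights
            p.requires_grad = False
        # Only these embeddings are trained; the intent is that they learn to
        # encode graphics-level semantics (e.g. object identity, layering).
        self.semantic = nn.Parameter(torch.randn(n_semantic, d_model) * 0.02)

    def forward(self, text_emb):  # text_emb: (B, T, d_model)
        B = text_emb.size(0)
        sem = self.semantic.unsqueeze(0).expand(B, -1, -1)
        return self.backbone(torch.cat([sem, text_emb], dim=1))

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(256, 8, batch_first=True), num_layers=2
)
adapter = SemanticTokenAdapter(backbone)
print(adapter(torch.randn(2, 10, 256)).shape)  # (2, 18, 256): 8 semantic + 10 text
```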
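On the SLP side, the following sketches one DDPM-style training step in which a pose is first re-expressed as parent-to-child offsets (bone vectors), so the diffusion model learns a distribution over relative joint positions rather than absolute coordinates. The chain-shaped skeleton, linear noise schedule, and MLP denoiser are deliberate simplifications, not the design of any particular published framework.

```python
# Illustrative sketch: one diffusion training step over *relative* joint
# positions (bone vectors). Skeleton, schedule, and denoiser are assumptions.
import torch
import torch.nn as nn

J = 21                              # assumed number of joints
PARENT = [0] + list(range(J - 1))   # assumed chain skeleton: parent of joint i

def to_relative(joints):            # joints: (B, J, 3) absolute positions
    """Re-express a pose as parent-to-child offsets (bone vectors)."""
    rel = joints - joints[:, PARENT]
    rel[:, 0] = joints[:, 0]        # root joint keeps its absolute position
    return rel

T_STEPS = 1000
betas = torch.linspace(1e-4, 0.02, T_STEPS)    # standard linear beta schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative noise level

denoiser = nn.Sequential(           # toy epsilon-predictor over flattened pose
    nn.Linear(J * 3 + 1, 256), nn.SiLU(), nn.Linear(256, J * 3)
)

def training_step(joints):          # joints: (B, J, 3)
    """Noise the relative pose at a random timestep; predict the noise."""
    x0 = to_relative(joints).flatten(1)                 # (B, J*3)
    t = torch.randint(0, T_STEPS, (x0.size(0),))
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].unsqueeze(1)
    xt = ab.sqrt() * x0 + (1 - ab).sqrt() * eps         # forward noising
    pred = denoiser(torch.cat([xt, t.float().unsqueeze(1) / T_STEPS], dim=1))
    return nn.functional.mse_loss(pred, eps)            # epsilon-prediction loss

print(training_step(torch.randn(8, J, 3)).item())
```

Operating on offsets makes bone lengths and local articulation explicit in the representation, which is one intuition behind why relative-position diffusion yields more natural sign poses than diffusing absolute coordinates.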