Recent research in low-resource language translation and adversarial attacks on language models has advanced considerably. There is a growing focus on developing translation models for underrepresented languages, particularly those in the Austroasiatic and Indian-subcontinent language families. These models leverage techniques such as transfer learning and data augmentation to overcome the challenges posed by limited available data. In parallel, there is a notable shift toward evaluating and improving the robustness of language models, especially for minority languages, through the development of adversarial attack methods. These methods probe a model's resilience against subtle input perturbations that can lead to incorrect predictions. The integration of visual-similarity features into adversarial text generation for languages such as Tibetan is a novel approach that underscores the need for script- and context-specific solutions in language technology. Overall, the field is progressing toward more inclusive and robust language models, with a strong emphasis on the unique challenges of low-resource and minority languages.
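
As a rough illustration of how visual similarity can drive character-level adversarial perturbations, the minimal sketch below swaps characters for visually confusable counterparts drawn from a small hand-written map. The map, the function names, and the sampling strategy are hypothetical and not taken from any of the surveyed papers; Latin-script confusables are used for readability, whereas an attack on Tibetan text would instead use a table of confusable Tibetan glyph pairs.

```python
import random

# Hypothetical map of visually confusable character pairs (illustration only;
# a real attack would derive such pairs from glyph rendering or embedding similarity).
CONFUSABLE = {
    "o": ["0", "\u03bf"],   # Latin o -> digit zero, Greek omicron
    "l": ["1", "I"],        # Latin l -> digit one, capital I
    "a": ["\u0430"],        # Latin a -> Cyrillic a
    "e": ["\u0435"],        # Latin e -> Cyrillic e
}

def perturb(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Replace a small fraction of characters with visually similar ones."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in CONFUSABLE and rng.random() < rate:
            out.append(rng.choice(CONFUSABLE[ch]))
        else:
            out.append(ch)
    return "".join(out)

if __name__ == "__main__":
    original = "please translate this sentence"
    adversarial = perturb(original, rate=0.3)
    print(original)
    print(adversarial)  # looks nearly identical to a reader, but the codepoints differ
```

In a black-box evaluation of this kind, the perturbed input would be fed to the target model and the attack counted as successful if the prediction changes while a human reader still perceives the original text.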