Recent research on gender bias in artificial intelligence, particularly in language models, has concentrated on identifying and mitigating such biases. A common thread across these studies is the design of methods to measure and counteract gender bias across a range of models, from large-scale systems to models for low-resource languages. Proposed techniques include gender-name swapping, joint loss optimization, and new objective functions for masked language modeling, each aiming to reduce bias while preserving model performance; illustrative sketches of these ideas follow below. Prompt design has also been shown to influence the gender distribution of model outputs, highlighting the need for careful construction of model interactions. Beyond gender, work on bias-free sentiment analysis and on writing-style bias in information retrieval systems has widened the scope of this research to political and stylistic dimensions. Together, these developments underscore the complexity of bias in AI and the ongoing effort to build more equitable and fair systems.
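
To make the gender-name swapping idea concrete, the following Python sketch performs a minimal counterfactual swap. The `SWAPS` table and the `swap_gendered_terms` function are illustrative constructions, not taken from any surveyed paper; published pipelines use large curated name lexicons and part-of-speech tagging to resolve pronoun ambiguity (e.g. possessive "her" vs. object "her"), which this toy version sidesteps.

```python
# A minimal sketch of gender-name swapping for counterfactual data
# augmentation; the swap table is illustrative, not from a real lexicon.
import re

SWAPS = {
    "john": "mary", "mary": "john",
    "he": "she", "she": "he",
    "himself": "herself", "herself": "himself",
}

def swap_gendered_terms(text: str) -> str:
    """Return a counterfactual copy of `text` with gendered names and
    pronouns exchanged, preserving leading capitalization."""
    def replace(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped

    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(replace, text)

# Training on both versions of each sentence encourages the model to
# treat "John is a nurse" and "Mary is a nurse" symmetrically.
print(swap_gendered_terms("John said he would call Mary himself."))
# -> "Mary said she would call John herself."
```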
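
Joint loss optimization can be sketched in the same spirit: a task loss is combined with a debiasing penalty under a trade-off weight. The code below assumes a PyTorch encoder that yields task logits plus a sentence embedding for each input; the MSE penalty tying each sentence to its gender-swapped counterpart and the weight `lam` are illustrative assumptions standing in for the specific objectives proposed in the literature.

```python
# A hedged sketch of joint loss optimization for debiasing in PyTorch.
import torch
import torch.nn.functional as F

def joint_debiasing_loss(logits: torch.Tensor,
                         labels: torch.Tensor,
                         emb_original: torch.Tensor,
                         emb_swapped: torch.Tensor,
                         lam: float = 0.1) -> torch.Tensor:
    """Task cross-entropy plus a penalty pulling each sentence and its
    gender-swapped counterpart toward the same representation."""
    task_loss = F.cross_entropy(logits, labels)
    bias_penalty = F.mse_loss(emb_original, emb_swapped)
    return task_loss + lam * bias_penalty

# Random tensors stand in for real model outputs here.
loss = joint_debiasing_loss(torch.randn(8, 2), torch.randint(0, 2, (8,)),
                            torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```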
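
The masked-language-modeling objectives mentioned above typically operate at the token level. As a hedged illustration, the sketch below penalizes the gap between a model's log-probabilities for a paired set of gendered fillers at a masked position; the pairing and the squared-gap form are assumptions for illustration rather than the objective of any particular surveyed paper.

```python
# A sketch of a bias-aware masked-LM term, given the logits a masked LM
# assigns to its vocabulary at one [MASK] slot.
import torch
import torch.nn.functional as F

def gender_gap_penalty(mask_logits: torch.Tensor,
                       male_id: int, female_id: int) -> torch.Tensor:
    """Penalty on the log-probability gap between a gendered token pair
    (e.g. "he" vs. "she"); minimizing it pushes the model toward giving
    the pair equal probability at the masked position."""
    log_probs = F.log_softmax(mask_logits, dim=-1)
    return (log_probs[male_id] - log_probs[female_id]) ** 2

# Token ids 2010 and 2016 are placeholders for real tokenizer ids.
print(gender_gap_penalty(torch.randn(30000), 2010, 2016).item())
```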
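
Finally, the effect of prompting on gender distribution is usually quantified by counting gendered terms in model completions. In the minimal sketch below, `completions_by_prompt` is a hypothetical stand-in for outputs collected from a real model, and the pronoun sets are a deliberately small proxy for the richer lexicons used in practice.

```python
# Sketch: quantify how prompt templates shift the gender distribution
# of completions; the data here is a placeholder for real model output.
from collections import Counter

FEMALE, MALE = {"she", "her", "hers"}, {"he", "him", "his"}

def gender_counts(texts: list[str]) -> Counter:
    """Count female- and male-marked pronouns across completions."""
    counts = Counter()
    for text in texts:
        for token in text.lower().split():
            if token in FEMALE:
                counts["female"] += 1
            elif token in MALE:
                counts["male"] += 1
    return counts

completions_by_prompt = {
    "The doctor said that": ["he would review the chart."],
    "The nurse said that": ["she would review the chart."],
}
for prompt, texts in completions_by_prompt.items():
    print(prompt, dict(gender_counts(texts)))
```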