Research on digital democracy and AI is evolving rapidly, with growing attention to transparency and accountability in online platforms and language models. Recent work has examined how content moderation practices affect the spread of misinformation and the manipulation of public discourse. New datasets and research initiatives are yielding insights into the dynamics of online social phenomena, including the diffusion of conspiracy theories and the role of language in shaping political bias. Studies of large language models have further shown that these models can exhibit significant political biases, and that the biases vary with the language of inquiry. Taken together, the field is placing increasing emphasis on transparency, accountability, and fairness in AI systems and online platforms.

Noteworthy papers include:
- A Dataset of the Representatives Elected in France During the Fifth Republic, which provides a comprehensive database for analyzing how political representation in France has evolved.
- What Large Language Models Do Not Talk About: An Empirical Study of Moderation and Censorship Practices, which investigates how large language models moderate and suppress content, and argues for greater transparency and diversity in AI systems.