Impact of Large Language Models (LLMs) on Human Communication and Cognitive Biases

General Direction of the Field

Recent advances in Large Language Models (LLMs) are reshaping human communication and cognitive processes, with a particular emphasis on textual information and decision-making. The integration of LLMs into academic and social contexts is producing observable changes in language use, both written and spoken: certain LLM-preferred words are appearing more frequently in human text, subtly shifting how people express themselves. This influence is no longer confined to writing; it is beginning to surface in spoken interactions as well, suggesting a broader societal ripple effect.
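
As a concrete illustration, here is a minimal sketch of the kind of frequency measure such studies rely on; the word list is an illustrative assumption (words often described as LLM-favored), not the list derived in the cited paper, and the two sample sentences are invented:

```python
import re
from collections import Counter

# Illustrative (assumed) list of words often described as LLM-favored;
# the cited study derives its own list empirically.
LLM_STYLE_WORDS = {"delve", "intricate", "pivotal", "underscore", "showcase"}

def style_word_rate(text: str) -> float:
    """Occurrences of LLM-style words per 1,000 tokens of text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in LLM_STYLE_WORDS)
    return 1000.0 * hits / len(tokens)

# Comparing a sample written before and after heavy LLM exposure:
before = "We examine the results and highlight the main findings."
after = "We delve into the intricate results and underscore the pivotal findings."
print(f"before: {style_word_rate(before):5.1f} per 1k tokens")
print(f"after:  {style_word_rate(after):5.1f} per 1k tokens")
```

Tracking this rate over time in a large corpus of papers or transcripts is what lets such studies quantify the drift in human language use.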

Moreover, the field is critically examining the limitations and possibilities of gender-related speech technology research. There is growing recognition of the need for more nuanced, socially aware approaches to gender in speech technology that move beyond binary categorization. This shift is driven by the understanding that gender is a socially constructed spectrum, which the terminology currently used in research often fails to reflect. That awareness is prompting researchers to reconsider their methodologies and to address the potential marginalization of people whose gender identities fall outside binary categories.

Another significant development is the exploration of the language ideologies encoded within LLMs, particularly concerning gendered language reform. Studies reveal that LLMs exhibit political biases and internal inconsistencies in their language use, for instance endorsing a reform in principle while not consistently applying it in their own output, and can thereby inadvertently communicate specific political or social values. This raises important questions about the value alignment of LLMs and the need for more transparent and ethically sound models.

Additionally, the field is increasingly focusing on cognitive biases within LLMs, extending beyond the well-documented social biases. Recent research shows that LLMs can be influenced by cognitive biases such as the threshold priming effect, in which exposure to highly relevant documents early in a judging session shifts the standard applied to documents judged later. This finding underscores the importance of accounting for human-like cognitive biases in the design, evaluation, and auditing of LLMs, particularly in information retrieval contexts.
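
To make the experimental logic concrete, here is a minimal sketch of such a priming probe. The `llm_judge` callable is a hypothetical wrapper around a real model call (not an API from the paper); the toy judge below merely simulates the effect so the probe runs end to end:

```python
import random
import statistics
from typing import Callable, List

def priming_probe(llm_judge: Callable[[List[str], str], float],
                  high_primes: List[str], low_primes: List[str],
                  target: str, trials: int = 50) -> float:
    """Mean score of `target` after highly relevant primes minus its mean
    score after barely relevant primes. Under threshold priming, strong
    primes raise the internal standard, so the gap comes out negative."""
    high = statistics.mean(llm_judge(high_primes, target) for _ in range(trials))
    low = statistics.mean(llm_judge(low_primes, target) for _ in range(trials))
    return high - low

# Hypothetical stand-in for an LLM judge; it simulates threshold priming
# so the probe is runnable without any model call.
def toy_judge(batch: List[str], doc: str) -> float:
    batch_relevance = statistics.mean(1.0 if "relevant" in d else 0.0 for d in batch)
    noise = random.uniform(-0.05, 0.05)
    # Highly relevant primes raise the threshold, depressing the later score.
    return min(1.0, max(0.0, 0.5 - 0.2 * (batch_relevance - 0.5) + noise))

effect = priming_probe(toy_judge,
                       high_primes=["relevant doc"] * 5,
                       low_primes=["off-topic doc"] * 5,
                       target="borderline document")
print(f"estimated priming effect: {effect:+.3f}")  # negative => priming
```

A consistently nonzero gap across many trials is the signature an audit of this kind looks for.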

Finally, there is a growing interest in analyzing gender biases in non-academic textual data, such as song lyrics. Using advanced topic modeling and bias measurement techniques, researchers are uncovering pervasive gender biases in popular culture, which can inform broader discussions on gender representation and equality.
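
As a rough sketch of how such an analysis can be wired together (assuming a toy four-line corpus and illustrative pronoun lists rather than the paper's actual lyrics data and lexicons), the snippet below fits an LDA topic model with scikit-learn and scores each topic by its relative probability mass on female- versus male-coded words:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-in corpus; the study analyzes real popular-music lyrics.
lyrics = [
    "she dances all night under neon lights",
    "he drives fast cars and counts his money",
    "she waits at home while he is gone",
    "he fights and wins and takes the crown",
]

# Illustrative gendered word lists (an assumption, not the paper's lexicons).
FEMALE, MALE = {"she", "her"}, {"he", "his", "him"}

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(lyrics)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    weights = topic / topic.sum()
    top_words = [vocab[i] for i in topic.argsort()[-5:][::-1]]
    # Crude bias score: female-word probability mass minus male-word mass.
    bias = (sum(w for v, w in zip(vocab, weights) if v in FEMALE)
            - sum(w for v, w in zip(vocab, weights) if v in MALE))
    print(f"topic {k}: top={top_words}, gender-bias score={bias:+.3f}")
```

Per-topic scores of this kind make it possible to ask not just whether a corpus is biased overall, but which themes the bias concentrates in.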

Noteworthy Papers

  • The Impact of Large Language Models in Academia: from Writing to Speaking: This paper provides the first large-scale investigation into how LLMs influence verbal communication, highlighting emerging trends in both written and spoken language.

  • Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs: This study uncovers political biases and internal inconsistencies in LLMs' language use, raising critical questions about value alignment and ethical considerations.

  • AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment: This paper demonstrates that LLMs, like humans, are susceptible to cognitive biases, emphasizing the need for careful consideration in their design and application.

Sources

The Impact of Large Language Models in Academia: from Writing to Speaking

Beyond the binary: Limitations and possibilities of gender-related speech technology research

Do language models practice what they preach? Examining language ideologies about gendered language reform encoded in LLMs

AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment

Beats of Bias: Analyzing Lyrics with Topic Modeling and Gender Bias Measurements
