AI and Human Interaction Research

Report on Current Developments in AI and Human Interaction Research

General Direction of the Field

The recent advancements in the field of AI and human interaction are marked by a significant shift towards more human-centered and value-driven approaches. Researchers are increasingly focusing on understanding and mitigating biases in AI systems, particularly in conversational agents and large language models (LLMs). This shift is driven by the growing realization that these systems, which are often perceived as companions or assistants, can perpetuate or even amplify societal biases, leading to discriminatory and harmful outputs.

One of the key areas of focus is the development of user-driven value alignment strategies, which empower users to actively engage with AI systems to correct biased or harmful outputs and thereby guide the AI toward better alignment with human values. Such user-centric approaches not only enhance the ethical integrity of AI systems but also foster a sense of agency among users, who are increasingly seen as active participants in the alignment process.

Another notable trend is the exploration of biases in spoken conversational search systems. As voice-based interfaces become more prevalent, particularly among diverse populations, there is a pressing need to address the challenges of presenting information in a fair and balanced manner. This includes not only technical challenges related to the linear nature of voice channels but also the broader implications of how biases can influence user attitudes and perceptions.

The field is also witnessing a call for more responsible and inclusive AI applications, particularly in sectors like immigration settlement. Here, the emphasis is on leveraging AI to empower individuals directly, rather than merely serving state or authority interests. This shift underscores the potential of AI to address real-world challenges in a manner that is both human-centered and ethically sound.

Governance and auditing frameworks are emerging as critical components of this evolving landscape. As AI systems become more integrated into everyday life, there is a growing recognition of the need for robust mechanisms to monitor and mitigate biases. This includes the development of value-based auditing frameworks that can ensure AI systems adhere to societal norms and values, thereby promoting fairness and equity.

Noteworthy Developments

  1. User-Driven Value Alignment: The concept of user-driven value alignment is particularly innovative, as it shifts the responsibility of bias correction from developers to users, fostering a more democratic and participatory approach to AI ethics.

  2. Biases in Spoken Conversational Search: The investigation into biases in voice-based systems is timely, given the rapid adoption of these technologies. The proposed experimental setup to explore these biases is a significant step towards ensuring fair and effective voice-based interactions.

  3. Human-Centered AI for Immigration Settlement: The focus on AI applications in the immigration settlement sector is noteworthy for its emphasis on empowering individuals directly, offering a fresh perspective on how AI can be used to support vulnerable populations.

  4. Auditing Framework for Chatbots: The call for a value-based auditing framework for chatbots is urgent, given the rapid advancement of generative AI. The proposed framework aims to ensure that AI systems adhere to societal values, thereby promoting fairness and equity.
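The paper calls for such a framework without this report detailing its implementation; as a minimal illustrative sketch (the rule set, names, and structure below are assumptions, not the authors' design), a value-based audit check could pair each declared value with a simple detector run over chatbot outputs:

```python
# Minimal sketch of a value-based audit check (illustrative only; the
# rules and names are assumptions, not the framework from the paper).
# Each audit rule pairs a declared societal value with a detector
# applied to a chatbot response.

import re

AUDIT_RULES = {
    # Flags sweeping generalizations about a group, e.g. "all X are ..."
    "non-discrimination": re.compile(r"\b(all|every)\s+\w+\s+are\b", re.IGNORECASE),
    # Further rules would encode other declared values in the same way.
}

def audit_response(response):
    """Return the list of value rules a response appears to violate."""
    return [value for value, pattern in AUDIT_RULES.items()
            if pattern.search(response)]

print(audit_response("All teenagers are irresponsible."))   # -> ['non-discrimination']
print(audit_response("Some people prefer quiet offices."))  # -> []
```

A real framework would replace the regex detectors with learned classifiers and log violations for human review, but the shape stays the same: values are made explicit, and every output is checked against them.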

  5. Addition Bias in LLMs: The investigation of addition bias in LLMs is a novel contribution, highlighting a cognitive tendency to favor additive over subtractive changes that could have significant implications for resource use and environmental impact. Addressing this bias is crucial for ensuring balanced and efficient problem-solving approaches in AI.

  6. Outgroup Biases in LLMs: The exploration of outgroup biases in LLMs is a significant advancement, as it addresses a critical gap in the literature. The findings suggest that it is possible to develop more equitable and balanced language models by mitigating these biases.

  7. Age Group Fairness Reward (AGR): The introduction of AGR for bias mitigation in LLMs is a notable development, particularly given the under-explored nature of age bias in AI. The approach demonstrates significant improvements in response accuracy and fairness across different age groups.
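The AGR paper's exact formulation is not reproduced in this report; as an illustrative sketch of the general idea behind a group-fairness reward (the function name, data shape, and penalty weighting here are assumptions), one might reward overall accuracy while penalizing the widest accuracy gap between age groups:

```python
# Illustrative sketch of a group-fairness reward term (not the AGR
# authors' exact formulation): mean accuracy minus a penalty
# proportional to the largest accuracy gap across age groups.

def fairness_reward(results, penalty_weight=1.0):
    """results maps an age-group label to a list of 0/1 correctness scores."""
    accuracies = {
        group: sum(scores) / len(scores) for group, scores in results.items()
    }
    overall = sum(accuracies.values()) / len(accuracies)
    gap = max(accuracies.values()) - min(accuracies.values())
    return overall - penalty_weight * gap

# A model accurate only for younger users earns less reward than one
# with slightly lower but balanced accuracy across groups.
skewed = {"18-30": [1, 1, 1, 1], "65+": [1, 0, 0, 0]}
balanced = {"18-30": [1, 1, 1, 0], "65+": [1, 1, 0, 1]}
print(fairness_reward(skewed))    # 0.625 - 0.75 = -0.125
print(fairness_reward(balanced))  # 0.75 - 0.0 = 0.75
```

Used as a training signal, a term of this shape pushes optimization away from solutions that trade fairness across groups for raw accuracy.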

Sources

User-Driven Value Alignment: Understanding Users' Perceptions and Strategies for Addressing Biased and Discriminatory Statements in AI Companions

Towards Investigating Biases in Spoken Conversational Search

Human-Centered AI Applications for Canada's Immigration Settlement Sector

A+AI: Threats to Society, Remedies, and Governance

It is Time to Develop an Auditing Framework to Promote Value Aware Chatbots

More is More: Addition Bias in Large Language Models

ChatGPT vs Social Surveys: Probing the Objective and Subjective Human Society

Governing dual-use technologies: Case studies of international security agreements and lessons for AI governance

Persona Setting Pitfall: Persistent Outgroup Biases in Large Language Models Arising from Social Identity Adoption

AGR: Age Group fairness Reward for Bias Mitigation in LLMs
