Advances in Human-Centered AI and Explainability

The field of artificial intelligence is moving toward a more human-centered approach, with an emphasis on explainability and transparency. Recent research stresses the importance of aligning AI systems with human values and of providing understandable explanations for AI decisions. A key challenge in this area is developing explainability methods that remain effective for complex deep foundation models.

Noteworthy papers in this area include:

A Multi-Layered Research Framework for Human-Centered AI, which presents a three-layered framework for establishing a structured explainability paradigm.

Intrinsic Barriers to Explaining Deep Foundation Models, which examines the fundamental characteristics of deep foundation models and the limitations of current explainability methods.

A Framework for the Assurance of AI-Enabled Systems, which proposes a claims-based framework for risk management and assurance of AI systems.

Sources

A Survey for What Developers Require in AI-powered Tools that Aid in Component Selection in CBSD

From Teacher to Colleague: How Coding Experience Shapes Developer Perceptions of AI Tools

A Multi-Layered Research Framework for Human-Centered AI: Defining the Path to Explainability and Trust

Explainability for Embedding AI: Aspirations and Actuality

Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds

Bare Minimum Mitigations for Autonomous AI Development

A Conceptual Framework for AI-based Decision Systems in Critical Infrastructures

Quality of explanation of xAI from the perspective of Italian end-users: Italian version of System Causability Scale (SCS)

On Developers' Self-Declaration of AI-Generated Code: An Analysis of Practices

A Framework for the Assurance of AI-Enabled Systems

Intrinsic Barriers to Explaining Deep Foundation Models
