The field of artificial intelligence is moving toward a more human-centered approach, with an emphasis on explainability and transparency. Recent research has stressed the importance of aligning AI systems with human values and of providing understandable explanations for AI decisions. A key challenge in this area is developing effective explainability methods, particularly for complex deep foundation models.
Noteworthy papers in this area include:

- A Multi-Layered Research Framework for Human-Centered AI, which presents a three-layered framework for establishing a structured explainability paradigm.
- Intrinsic Barriers to Explaining Deep Foundation Models, which examines the fundamental characteristics of deep foundation models and the limitations of current explainability methods.
- A Framework for the Assurance of AI-Enabled Systems, which proposes a claims-based framework for risk management and assurance of AI systems.