AI Governance and Compliance

Report on Current Developments in AI Governance and Compliance

General Direction of the Field

Recent advances in AI governance and compliance focus primarily on integrating regulatory frameworks with technological innovation to ensure ethical and sustainable AI deployment. There is a clear shift toward transparent, accountable, and interoperable systems that align with global standards and regulations. This movement is driven by the need to harmonize diverse regulatory approaches, particularly around the European Union's AI Act and other international standards.

One key trend is the development of open-source AI models, known as open foundation models, and their potential impact on defense and industrial sectors. These models are being scrutinized for whether they enhance or complicate defense priorities such as supplier diversity, cybersecurity, and innovation. The debate extends to whether such models should be regulated at all, and how any regulation would affect national security and industrial capability.

Another significant area of development is the adaptation of AI technologies to comply with sustainability regulations, particularly the EU Taxonomy. Researchers are exploring how AI can be leveraged to automate and streamline the assessment of business processes against sustainability criteria, thereby aiding companies in achieving regulatory compliance more efficiently.
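As a rough illustration of what automated Taxonomy screening could look like, the sketch below flags which simplified criteria a business-process description touches. The criterion names and keyword lists are illustrative placeholders, not the actual EU Taxonomy; the papers surveyed here envisage NLP models rather than keyword matching, and any flagged match would still go to a human reviewer.

```python
# Hypothetical sketch of screening business-process descriptions
# against simplified sustainability criteria. Criterion names and
# keywords are placeholders, NOT the real EU Taxonomy technical
# screening criteria.

TAXONOMY_CRITERIA = {
    "climate_mitigation": ["renewable energy", "emission reduction",
                           "energy efficiency"],
    "circular_economy": ["recycling", "reuse", "waste reduction"],
}

def screen_process(description: str) -> dict[str, bool]:
    """Flag which (simplified) criteria a process description mentions."""
    text = description.lower()
    return {
        criterion: any(keyword in text for keyword in keywords)
        for criterion, keywords in TAXONOMY_CRITERIA.items()
    }

report = screen_process(
    "The plant switched to renewable energy and introduced a recycling line."
)
# Each flagged criterion is a candidate for human confirmation,
# not an automatic compliance verdict.
```

The point of the sketch is the workflow shape: machine-assisted pre-screening that narrows the search space, with final compliance judgments left to assessors.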

Furthermore, there is a growing emphasis on the use of knowledge graphs and semantic technologies to map and align AI requirements with regulatory standards. This approach aims to reduce ambiguity and enhance the consistency of compliance claims across different organizations and sectors. It also addresses the challenges faced by small and medium-sized enterprises (SMEs) and public sector bodies in navigating complex regulatory landscapes.
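The mapping idea can be made concrete with a tiny triple-based graph. The example below uses SKOS-style mapping predicates (`skos:closeMatch`, `skos:relatedMatch`) to link AI Act requirements to standard clauses; the specific clause identifiers are illustrative assumptions, not verified cross-references, and a real system would use an RDF store rather than a Python set.

```python
# Minimal knowledge-graph sketch linking regulatory requirements to
# standard clauses. Identifiers below are illustrative placeholders,
# not verified citations of the EU AI Act or ISO/IEC 42001.

# Triples: (subject, predicate, object)
graph = {
    ("aia:Art13-Transparency", "skos:closeMatch", "iso:42001-7.4"),
    ("aia:Art14-HumanOversight", "skos:relatedMatch", "iso:42001-9.2"),
    ("aia:Art13-Transparency", "rdfs:label", "Transparency obligations"),
}

def mappings_for(concept: str) -> list[tuple[str, str]]:
    """Return (predicate, target) pairs mapping a concept to standards."""
    return sorted(
        (p, o) for s, p, o in graph
        if s == concept and p.startswith("skos:")
    )

# A compliance tool can then answer: which standard clauses cover this
# requirement, and how close is the alignment (closeMatch vs relatedMatch)?
result = mappings_for("aia:Art13-Transparency")
```

Encoding the *strength* of each alignment in the predicate is what lets different organizations make consistent, comparable compliance claims from the same graph.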

Lastly, the responsible AI (RAI) community is critically examining its own tools and artifacts, such as Model Cards and Transparency Notes. There is growing recognition that these tools must not only serve the interests of technology companies but also effectively support external oversight and protect end-users from potential AI harms. This involves rethinking the design and governance of RAI artifacts to foster more collaborative and proactive regulatory environments.
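One barrier external stakeholders report is that artifacts like Model Cards often omit the fields oversight actually depends on. The sketch below, under the assumption of a simplified card schema invented here for illustration, checks an artifact for such gaps; the required-field list is not drawn from any standard.

```python
# Hedged sketch: auditing a Model Card-style artifact for fields that
# external reviewers (regulators, civil society) typically need.
# The schema and required-field list are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    intended_use: str = ""
    known_limitations: str = ""
    evaluation_data: str = ""
    contact_for_redress: str = ""  # often absent, yet key for oversight

def missing_oversight_fields(card: ModelCard) -> list[str]:
    """List empty fields that would hinder external review of the model."""
    required = ["intended_use", "known_limitations",
                "evaluation_data", "contact_for_redress"]
    return [name for name in required if not getattr(card, name)]

card = ModelCard(model_name="demo-classifier",
                 intended_use="Spam filtering")
gaps = missing_oversight_fields(card)
# gaps == ["known_limitations", "evaluation_data", "contact_for_redress"]
```

Making such checks machine-readable is one way artifacts designed by companies could be verified against the needs of external stakeholders rather than only internal ones.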

Noteworthy Developments

  • Open Foundation Models and Defense Priorities: The exploration of open foundation models' impact on defense and industrial sectors is particularly innovative, offering insights into the intersection of AI technology and national security.
  • AI and Sustainability Compliance: The development of AI tools to automate compliance with the EU Taxonomy is noteworthy for its potential to significantly streamline sustainable business practices.
  • Mapping AI Requirements with Regulatory Standards: The use of open knowledge graphs to align AI requirements with regulatory standards is a promising approach that enhances clarity and consistency in compliance efforts.
  • Responsible AI Artifacts and External Oversight: The study on the perceived barriers to the effective use of RAI artifacts by external stakeholders is crucial for refining these tools to better support regulatory and civil oversight.

These developments highlight the dynamic and multifaceted nature of AI governance and compliance, emphasizing the need for continuous innovation and adaptation to meet the evolving challenges of AI deployment.

Sources

Defense Priorities in the Open-Source AI Debate: A Preliminary Assessment

Unlocking Sustainability Compliance: Characterizing the EU Taxonomy for Business Process Management

An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards

Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders

Catalog of General Ethical Requirements for AI Certification