Enhancing AI Security: MITRE’s AI Incident Sharing Initiative
In an era where artificial intelligence (AI) is increasingly integrated into critical systems, the need for robust security measures has never been greater. Recognizing this urgency, MITRE’s Center for Threat-Informed Defense has collaborated with more than 15 companies to enhance community knowledge of threats and defenses for AI-enabled systems. This collaboration culminated in the launch of the AI Incident Sharing initiative, an effort to improve collective awareness of real-world AI incidents and to coordinate responses across the AI landscape.
The AI Incident Sharing Initiative
Launched as part of the Center’s Secure AI project, which began in June 2024, the AI Incident Sharing initiative is designed to facilitate the rapid and protected sharing of information regarding attacks or accidents involving AI-enabled systems. By creating a platform for organizations to share anonymized incident data, the initiative aims to foster a community of knowledge that can respond more effectively to emerging threats.
The AI incident sharing website and submission form are available online, allowing organizations to contribute to a growing database of real-world incidents. This collaborative approach builds on two years of incident-sharing efforts within the broader MITRE ATLAS community, enhancing the speed and efficiency of incident characterization and sharing.
Expanding the ATLAS Threat Framework
In parallel with the AI Incident Sharing initiative, the Secure AI collaboration has extended the ATLAS threat framework to address the evolving adversarial landscape specific to generative AI-enabled systems. Similar to the well-known MITRE ATT&CK framework, ATLAS serves as a community knowledge base that security professionals, developers, and operators can utilize to protect AI-enabled systems.
The project has introduced several new case studies and attack techniques focused on generative AI, enriching the public ATLAS knowledge base. Additionally, it has provided new methods for mitigating attacks on AI systems. This initiative builds on previous collaborations, such as the partnership with Microsoft, which aimed to enhance the ATLAS knowledge base with a focus on generative AI, with updates released in November 2023.
Collaborators and Community Engagement
The Secure AI project has attracted a diverse range of collaborators, including industry leaders such as AttackIQ, BlueRock, Booz Allen Hamilton, CATO Networks, Citigroup, Cloud Security Alliance, CrowdStrike, FS-ISAC, Fujitsu, HCA Healthcare, HiddenLayer, Intel, JPMorgan Chase Bank, Microsoft, Standard Chartered, and Verizon Business. This broad coalition underscores the importance of a unified approach to AI security across various sectors.
Douglas Robbins, vice president at MITRE Labs, emphasized the significance of this initiative, stating, “As public and private organizations of all sizes and sectors continue to incorporate AI into their systems, the ability to manage potential incidents is essential. Standardized and rapid information sharing about incidents will allow the entire community to improve the collective defense of such systems and mitigate external harms.”
A Trusted Community of Contributors
Under the MITRE ATLAS AI Incident Sharing initiative, a community of trusted contributors will receive protected, anonymized data on real-world AI incidents occurring across operational AI-enabled systems. Organizations interested in contributing can submit incidents via the public incident-sharing site; upon submission, they will be considered for membership in the trusted community of data receivers. This shared information will enable more data-driven risk intelligence and analysis at scale, benefiting the entire community.
MITRE’s Broader Information-Sharing Efforts
MITRE is no stranger to fostering public-private partnerships for information sharing. The organization operates several such initiatives, including the publicly available Common Vulnerabilities and Exposures (CVE) list, which catalogs publicly disclosed cybersecurity vulnerabilities on behalf of the Cybersecurity and Infrastructure Security Agency (CISA). Additionally, MITRE manages the Aviation Safety Information Analysis and Sharing (ASIAS) database, which aims to identify and prevent hazards in aviation through shared safety information.
Recently, MITRE also announced the full release of the EMB3D Threat Model, which includes new mitigations to aid in identifying threats and implementing customized security measures for embedded devices. The complete public release of the EMB3D Threat Model is now accessible online, featuring tiered mitigation guidance and alignment with ISA/IEC 62443-4-2 standards.
Conclusion
As AI continues to permeate systems across sectors, effective incident management and threat awareness are paramount. MITRE’s AI Incident Sharing initiative represents a significant step toward a collaborative environment for sharing critical information about AI-related incidents. By leveraging the collective knowledge of industry leaders and security professionals, the initiative aims to strengthen the security posture of AI-enabled systems, ultimately contributing to a safer digital landscape for all.