
The Cloud Security Alliance, a respected non-profit founded in 2008 to advance cloud security assurance, has unveiled its Artificial Intelligence Controls Matrix (AICM), a quiet revolution in trustworthy AI.
It has come at a time when generative AI and large language models are moving quickly into every sector. These systems can transform business, but they can also fail, or be made to fail. Because of this, trust becomes the measure of success.
The AICM is a vendor-agnostic control framework built to help organizations manage AI-specific risks, secure systems, and build AI that can be trusted. It is also designed for the realities of cloud environments.
Released in July 2025, it provides a structured, measurable set of controls to promote responsible AI development, deployment, and use.
Purpose and Scope
The framework is grounded in the CSA’s long and broad experience with cloud security. It builds on the Cloud Controls Matrix, adapting its principles to the new governance needs fueled by AI's meteoric rise. The goal is to give firms a way to assess and manage AI-specific risks, align with international standards, and build security in across the AI lifecycle.
It covers 18 security domains and 243 control objectives. Some domains are familiar: Identity and Access Management, Data Security and Privacy, Governance, Risk and Compliance.
Others are AI-specific: Model Security, AI Supply Chain Management, Transparency, and Accountability. Together, they address conventional security concerns and the vulnerabilities that are unique to AI systems.
The Five Pillars
The AICM is structured around five pillars:
Control type identifies whether a control addresses AI-specific risks, applies to both AI and cloud, or is cloud-only.
Control applicability and ownership map responsibilities across the AI service stack. These roles include the Cloud Service Provider, Model Provider, Orchestrated Service Provider, and Application Provider.
Architectural relevance links each control to components of the GenAI stack: physical, network, compute, storage, application, and data layers.
Lifecycle relevance ensures controls are tied to phases from preparation and development to deployment, delivery, and retirement.
Threat category addresses nine areas, including model manipulation, data poisoning, sensitive data disclosure, model theft, insecure supply chains, and loss of governance.
This structure makes the controls adaptable and auditable. It also helps businesses assign clear accountability.
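To make the five pillars concrete, the sketch below shows one way an organization might record this metadata for each control in its own tooling. The field names and types are illustrative assumptions for internal use, not the CSA's published schema.

```python
from dataclasses import dataclass, field

# Illustrative record for one AICM control objective.
# Field names are assumptions for internal tooling, not CSA's official data format.
@dataclass
class AICMControl:
    control_id: str                 # e.g. "A&A-02"
    title: str
    control_type: str               # "AI-specific", "AI and cloud", or "cloud-only"
    owning_roles: list[str] = field(default_factory=list)         # e.g. Cloud Service Provider, Model Provider
    architectural_layers: list[str] = field(default_factory=list) # physical, network, compute, storage, application, data
    lifecycle_phases: list[str] = field(default_factory=list)     # preparation, development, deployment, delivery, retirement
    threat_categories: list[str] = field(default_factory=list)    # e.g. model manipulation, data poisoning
```

Keeping all five dimensions on the same record is what lets a team filter the catalogue by role, layer, phase, or threat when scoping an assessment.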
Examples in Practice
One example is control A&A-02, Independent Assessments. It calls for annual, independent audits against relevant standards, applies to all major AI service roles, and spans from physical infrastructure to the data layer. It mitigates risks like model manipulation, data poisoning, and governance failures, and it is relevant across the entire AI lifecycle.
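Rendering A&A-02 from that description, and scoping a catalogue by role and lifecycle phase, might look like the sketch below. The plain-dict layout and the helper function are assumptions made for illustration; only the attributes come from the control as described above.

```python
# A&A-02 as a plain dict, populated from the description above.
# The layout itself is an assumption for illustration, not CSA's data format.
aicm_aa_02 = {
    "control_id": "A&A-02",
    "title": "Independent Assessments",
    "requirement": "Annual, independent audits against relevant standards",
    "owning_roles": ["Cloud Service Provider", "Model Provider",
                     "Orchestrated Service Provider", "Application Provider"],
    "architectural_layers": ["physical", "network", "compute",
                             "storage", "application", "data"],
    "lifecycle_phases": ["preparation", "development", "deployment",
                         "delivery", "retirement"],
    "threat_categories": ["model manipulation", "data poisoning",
                          "loss of governance"],
}

def controls_in_scope(catalogue, role, phase):
    """Select controls that name a given role and lifecycle phase."""
    return [c for c in catalogue
            if role in c["owning_roles"] and phase in c["lifecycle_phases"]]

# Example: which controls would a Model Provider need to evidence during deployment?
print(controls_in_scope([aicm_aa_02], "Model Provider", "deployment"))
```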
Other controls focus on governance and ethics. These include regular AI impact assessments covering ethical, societal, legal, and security effects. There are also bias and fairness checks, ethics committees, and explainability requirements.
The framework’s mapping shows how controls apply to foundation and fine-tuned models, prompt and training data, orchestration services, plugin integrations, caching and monitoring services, user sessions, and AI applications. This layered approach recognizes the shared responsibility model. Each role in the AI supply chain has defined obligations.
Supporting Components
The AICM is not only a list of controls. It includes an AI-specific Consensus Assessment Initiative Questionnaire (AI-CAIQ), a structured set of questions for self-assessments or third-party evaluations that supports the upcoming STAR Level 1 Self-Assessment for AI.
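As a rough illustration of how AI-CAIQ-style answers could be tallied during a self-assessment, the snippet below counts yes/no/not-applicable responses per domain to highlight weak areas. The answer format and question IDs are assumptions; the actual questionnaire defines its own structure (the domain names are drawn from the matrix).

```python
from collections import Counter

# Hypothetical AI-CAIQ-style answers: (domain, question_id, answer).
# Question IDs here are placeholders, not the questionnaire's actual content.
answers = [
    ("Model Security", "MS-01.1", "yes"),
    ("Model Security", "MS-01.2", "no"),
    ("Data Security and Privacy", "DSP-03.1", "yes"),
    ("AI Supply Chain Management", "SCM-02.1", "n/a"),
]

def summarize(answers):
    """Tally yes/no/n-a responses per domain to spot gaps before an audit."""
    summary = {}
    for domain, _question, answer in answers:
        summary.setdefault(domain, Counter())[answer] += 1
    return summary

for domain, counts in summarize(answers).items():
    print(f"{domain}: {dict(counts)}")
```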
Implementation guidelines and auditing guidelines will follow. These will show how to apply and verify each control and will be aligned to the taxonomy of AI service roles.
The framework is mapped to other standards, including BSI's AI C4 catalogue, NIST AI 600-1, and ISO/IEC 42001. Mappings to the EU AI Act, plus reverse mappings to identify gaps, are expected soon.
Part of a Broader Ecosystem
The AICM is a core part of CSA’s approach to trustworthy AI. Alongside it is the AI Trustworthy Pledge, a voluntary commitment to safety, transparency, ethics, and privacy. Organizations signing the pledge can display a digital badge and join a public list of supporters. The pledge is a first step for those not yet ready for full assessment.
The final stage is the STAR for AI program. This will provide independent certification against the AICM controls. The STAR model is already well-known in cloud security, so extending it to AI offers a familiar path for assurance.
A Progressive Journey
The CSA sees the process in three steps. First, take the pledge. Second, implement the AICM. Third, seek STAR for AI certification or attestation. This structure allows firms at different stages of maturity to engage with the framework.
The release of the AICM is a step toward embedding trust into AI from the start. The controls are detailed. They span the technology stack, the lifecycle, and the threat landscape. They are open, consensus-driven, and designed to evolve as AI changes.
For organizations, the work now is to study the matrix, map it to their own AI services, and close the gaps. This is deliberate work. It builds systems that are not only powerful, but also secure, compliant, and accountable.
In a fast-moving field, that steadiness may prove decisive.