1.3 The AI Security Literature — What the Standards Bodies Say

2-3 hours · Module 1 · Free

Five frameworks define the current state of AI security thinking. Each serves a different purpose, addresses a different audience, and applies to a different aspect of your AI adoption. This subsection maps each framework, explains what it covers, identifies the sections most relevant to security operations teams, and provides the operational takeaway — the specific guidance you extract and apply.

This is not a literature review for its own sake. Each framework produces a concrete action item for your security operations program. By the end of this subsection, you will have an annotated reading list with operational relevance notes that you can present to your CISO as the evidence base for your AI governance decisions.


SANS Secure AI Blueprint

What it is: A controls framework published by SANS that defines the essential security controls for defending, governing, and deploying AI responsibly in enterprise environments. Developed by SANS researchers in collaboration with industry and government contributors.

What it covers: Access control for AI systems, model protection, inference security, monitoring, and governance/risk/compliance (GRC) for AI deployments. The framework addresses both securing AI systems you build and governing AI tools your team uses.

Why it matters for security operations teams: The SANS Blueprint is the most operationally focused of the five frameworks. It provides specific, implementable controls rather than abstract principles. If your CISO asks “what controls do we need for AI adoption?” — this is the framework to reference.

Key sections for security operations:

  • Access control — who can use which AI tools, with what data, under what conditions. Maps directly to the data classification matrix you build in subsection 1.5.
  • Monitoring — how to detect unauthorized AI usage (shadow AI), how to audit AI interactions, how to log and review AI-assisted decisions. Maps to the shadow AI detection rules you deploy in Module 7.
  • Incident response for AI — what constitutes an AI-related security incident, how to respond, how to report. This is the gap most organizations have not addressed.
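The monitoring control above can be made concrete with a simple tripwire. The sketch below is a hypothetical shadow-AI check over proxy-log entries: the domain list and log format (`user,domain,...` CSV) are assumptions for illustration, not part of the SANS Blueprint itself — adapt both to your own proxy schema and approved-tool inventory.

```python
# Hypothetical shadow-AI tripwire over simplified proxy-log lines ("user,domain,...").
# The domain list is illustrative and incomplete; maintain your own.
AI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def find_shadow_ai(proxy_lines, approved_users):
    """Return (user, domain) pairs where an unapproved user reached an AI platform."""
    hits = []
    for line in proxy_lines:
        user, domain = line.split(",")[:2]
        if domain in AI_DOMAINS and user not in approved_users:
            hits.append((user, domain))
    return hits
```

A rule like this is a starting point for the Module 7 detection work, not a finished control: it catches known domains only and says nothing about sanctioned tools misused with sensitive data.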

Operational takeaway: Use the SANS Blueprint as your controls checklist when building the AI governance framework in Module 7. Each control maps to a specific implementation action.


NIST AI Risk Management Framework (AI RMF 1.0)

What it is: A voluntary framework from the National Institute of Standards and Technology that provides organizations with guidance for managing AI risks. Published January 2023, with ongoing updates and supplementary guidance (including the Generative AI Profile published in 2024).

What it covers: Four core functions — Govern, Map, Measure, Manage — applied to AI risk across the lifecycle from design through deployment and operation. The Generative AI Profile adds specific guidance for large language models and generative AI systems.

Why it matters for security operations teams: NIST AI RMF is becoming the de facto standard for AI governance in regulated industries. If your organization operates in financial services, healthcare, defense, or critical infrastructure, regulatory bodies are increasingly referencing AI RMF as the expected governance framework. Even if compliance is not mandatory, using NIST AI RMF demonstrates due diligence.

Key sections for security operations:

  • Govern 1.1-1.7 — establishing policies, roles, and accountability for AI usage. This is the organizational foundation Module 8 builds.
  • Map 1.1-1.6 — understanding the context in which AI is used, including intended purposes, known limitations, and potential impacts. This maps directly to the capabilities matrix from subsection 1.2.
  • Measure 2.1-2.13 — assessing AI system risks including accuracy, bias, robustness, and security. For security teams, the relevant measures are: accuracy (does the AI produce correct queries?), robustness (does it fail gracefully when given ambiguous input?), and security (can adversaries manipulate its output via prompt injection?).
  • Manage 3.1-3.4 — responding to and managing identified AI risks. Maps to Module 9 (adversarial AI) and Module 7 (governance).
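One lightweight way to operationalize the Measure function is to track each measure as a named pass/fail check per AI-assisted workflow. The measure names and pass criteria below are assumptions for illustration — NIST AI RMF does not prescribe specific tests — but the shape (named checks, a remediation list) maps cleanly onto the Measure categories described above.

```python
# Illustrative Measure-function scorecard for one AI-assisted workflow.
# Measure names and criteria are assumptions, not NIST-mandated tests.
def measure_report(checks):
    """checks: dict of measure name -> bool (passed). Returns failing measures."""
    return [name for name, passed in checks.items() if not passed]

report = measure_report({
    "accuracy: generated query returns the expected hosts": True,
    "robustness: ambiguous input produces a clarifying question": False,
    "security: embedded injection payload is refused": True,
})
# `report` lists the measures needing remediation before the workflow is approved
```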

Operational takeaway: When your compliance team asks how AI risk fits into the organization’s risk framework, point to NIST AI RMF. The four functions (Govern, Map, Measure, Manage) provide the structure. Your operational modules (Module 7, Module 8, Module 9) provide the implementation.


OWASP Top 10 for Large Language Model Applications

What it is: A security awareness document from OWASP identifying the ten most critical security risks in applications that use LLMs. Currently at version 2.0 (2025), actively maintained.

What it covers: Application-level security risks specific to LLM integration: prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin/tool design, excessive agency, overreliance, and model theft.

Why it matters for security operations teams: If your organization builds applications that use LLMs (chatbots, copilots, automated workflows), the OWASP LLM Top 10 defines the attack surface you need to defend. If your team uses LLMs as tools (the focus of this course), items LLM01 (Prompt Injection), LLM02 (Insecure Output Handling), LLM06 (Sensitive Information Disclosure), and LLM09 (Overreliance) are directly relevant to your operational security.

Key items for security operations:

  • LLM01: Prompt Injection — an attacker embeds instructions in content that the LLM processes, causing it to perform unintended actions. For security teams: if you paste a phishing email into an AI tool for analysis and the email contains hidden prompt injection, the AI may follow the attacker’s instructions instead of yours. Module 9 covers this in depth.
  • LLM06: Sensitive Information Disclosure — the LLM reveals sensitive data through its responses. For security teams: if you paste production log data into an AI tool, that data may be stored, used for training, or accessible to vendor staff. Subsection 1.5 covers data handling controls.
  • LLM09: Overreliance — users trust LLM output without verification, leading to errors. For security teams: deploying a hallucinated KQL query, trusting an overconfident analysis, or sending an AI-drafted regulatory notification without review. Verification discipline, reinforced throughout this course, is the primary mitigation.
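For LLM01, even a naive pre-screen raises analyst awareness before malicious content is pasted into an AI tool. The phrase list below is a minimal, hypothetical heuristic — a tripwire, not a reliable defense, since real injections are easily obfuscated (Module 9 covers actual mitigations).

```python
import re

# Hedged sketch: a naive tripwire for prompt-injection phrasing in content
# you are about to paste into an AI tool. Phrases are illustrative only;
# this raises awareness but does NOT reliably block injection.
SUSPICIOUS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def injection_tripwire(text):
    """Return the suspicious patterns found in text (case-insensitive)."""
    return [p for p in SUSPICIOUS if re.search(p, text, re.IGNORECASE)]
```

A non-empty result means the content should be handled with the hardened analysis procedure rather than pasted verbatim into a chat window.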

Operational takeaway: Use the OWASP LLM Top 10 as a threat model for your AI-assisted workflows. For each workflow you build in this course, assess which OWASP items apply and what mitigations you have in place.


MITRE ATLAS (Adversarial Threat Landscape for AI Systems)

What it is: A knowledge base of adversary tactics, techniques, and procedures (TTPs) targeting AI systems, modeled after the MITRE ATT&CK framework. Maintained by MITRE with contributions from industry and government.

What it covers: Tactics and techniques that adversaries use to attack AI systems across the lifecycle: reconnaissance (identifying AI systems), resource development (building adversarial tools), initial access to AI systems, execution of adversarial techniques (evasion, poisoning, extraction), and impact (manipulation, denial, extraction of sensitive information).

Why it matters for security operations teams: ATLAS provides the adversarial perspective on AI systems. If your organization deploys AI tools, ATLAS defines how adversaries will target them. If your team uses AI tools, ATLAS techniques like prompt injection and model evasion describe how adversaries can manipulate your tools to produce incorrect output.

Key techniques for security operations:

  • AML.T0051: LLM Prompt Injection — overlaps with OWASP LLM01 but framed as an adversary technique with specific sub-techniques (direct injection, indirect injection, stored injection).
  • AML.T0048: Data Poisoning — an adversary introduces malicious data into AI training sets. Less directly relevant to security teams using AI tools, more relevant to organizations building AI products.
  • AML.T0043: Model Evasion — an adversary crafts inputs that cause the AI to produce incorrect classifications. Relevant to ML-based security tools (UEBA, malware detection) that adversaries may attempt to evade.
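The Module 9 mapping exercise can start as a simple lookup table. The workflow names below are hypothetical; the technique IDs are the ones listed above, so verify them against the current ATLAS knowledge base before relying on them.

```python
# Illustrative workflow -> ATLAS technique mapping. Workflow names are
# hypothetical; technique IDs are taken from the discussion above and
# should be verified against atlas.mitre.org before use.
WORKFLOW_ATLAS_MAP = {
    "phishing-email triage with an LLM": ["AML.T0051"],  # LLM Prompt Injection
    "ML-based malware classification":   ["AML.T0043"],  # Model Evasion
    "in-house model training pipeline":  ["AML.T0048"],  # Data Poisoning
}

def techniques_for(workflow):
    """Return the ATLAS technique IDs threat-modeled for a workflow."""
    return WORKFLOW_ATLAS_MAP.get(workflow, [])
```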

Operational takeaway: ATLAS is the framework you use when building the AI threat model in Module 9. Map your AI-assisted workflows to ATLAS techniques to identify where adversaries could manipulate your tools.


EU AI Act — implications for security operations

What it is: The European Union’s comprehensive regulation on AI, which entered into force in August 2024 with phased enforcement through 2027. It is the first major regulatory framework specifically targeting AI systems.

What it covers: Risk-based classification of AI systems (unacceptable, high-risk, limited, minimal), transparency requirements, technical documentation, human oversight, and conformity assessments. Organizations deploying AI systems in the EU or affecting EU citizens must comply.

Why it matters for security operations teams: If your organization operates in or serves EU markets, the AI Act applies to your AI tool usage. Security-related AI systems (biometric identification, critical infrastructure monitoring, law enforcement support) may be classified as high-risk, requiring: technical documentation of the AI system, human oversight measures, accuracy and robustness requirements, logging and traceability, and conformity assessments.

Key provisions for security operations:

  • Article 14 (Human Oversight) — high-risk AI systems must be designed to allow human oversight. For security teams: AI-assisted triage and investigation must maintain human decision authority. Autonomous containment without human approval may violate this provision.
  • Article 12 (Record-Keeping) — high-risk AI systems must generate logs of operations. For security teams: if you use AI for investigation or detection, maintain logs of what prompts you sent, what output you received, and what decisions you made based on that output.
  • Article 13 (Transparency) — users must be informed when they interact with AI. For security teams: if AI-generated content appears in incident reports or stakeholder communications, disclose that AI was used in the drafting process.
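The Article 12 record-keeping obligation translates naturally into a structured audit record for each AI-assisted action. The sketch below is one possible shape, not a compliance artifact: the field names are assumptions, and you should align them with your organization's logging schema and legal guidance.

```python
import datetime
import json

# Hedged sketch of an Article 12-style audit record for AI-assisted work.
# Field names are assumptions; align them with your own logging schema.
def ai_audit_record(analyst, tool, prompt, response_summary, decision):
    """Return a JSON audit record capturing the prompt, output, and human decision."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "analyst": analyst,
        "tool": tool,
        "prompt": prompt,
        "response_summary": response_summary,
        # The human decision field also supports Article 14 (human oversight).
        "decision": decision,
    })
```

Writing one record per AI interaction gives you the traceability both Articles 12 and 14 contemplate, and doubles as evidence for the Module 7 audit-logging control.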

Operational takeaway: The EU AI Act is the reason your AI governance framework (Module 7) must include: documentation of AI system usage, human oversight procedures, audit logging, and transparency disclosures. Even if EU compliance is not currently mandatory for your organization, these are governance best practices that demonstrate due diligence.


Your annotated reading list

The five frameworks above are your primary references. The table below lists each framework's current version, where to find it, and the course modules it supports most directly.

| Framework | Current Version | URL | Most Relevant Modules |
|---|---|---|---|
| SANS Secure AI Blueprint | v1.1 (2025) | sans.org/ai | 7 (Governance), 9 (Adversarial AI) |
| NIST AI RMF | 1.0 + GenAI Profile (2024) | nist.gov/ai | 6 (Compliance), 7 (Governance), 8 (Deployment) |
| OWASP LLM Top 10 | v2.0 (2025) | owasp.org/llm-top-10 | 5 (Automation), 9 (Adversarial AI) |
| MITRE ATLAS | Ongoing | atlas.mitre.org | 9 (Adversarial AI), 3 (Detection Engineering) |
| EU AI Act | Reg. 2024/1689 | eur-lex.europa.eu | 6 (Compliance), 7 (Governance) |

Try it yourself

Download the SANS Secure AI Blueprint and the OWASP LLM Top 10 (both freely available). Read the executive summaries. For each, identify the three controls or risks most relevant to your team's current AI usage. If your team does not currently use AI tools, identify the three controls or risks you would need to address before adopting them. This produces the evidence base for the AI governance conversation with your CISO.

From SANS: access control (who is authorized to use AI tools), monitoring (how you detect unauthorized usage), and data handling (what data flows to the AI platform). From OWASP: LLM01 Prompt Injection (operational risk when analyzing malicious content), LLM06 Sensitive Information Disclosure (data handling risk), and LLM09 Overreliance (verification discipline). These six items form the minimum governance foundation for any security team adopting AI tools.

Check your understanding

1. Your CISO asks: "What frameworks should we reference for AI governance?" You need to recommend frameworks for three purposes: operational controls, risk management, and threat modeling. Which three frameworks do you recommend?

SANS Secure AI Blueprint for operational controls (specific, implementable security controls for AI deployment), NIST AI RMF for risk management (the Govern-Map-Measure-Manage structure for AI risk within existing enterprise risk frameworks), and MITRE ATLAS for threat modeling (adversarial TTPs targeting AI systems, modeled after ATT&CK). Each serves a distinct purpose. OWASP LLM Top 10 is also relevant but focuses on application security rather than operational governance. EU AI Act is a regulatory requirement, not a governance framework — though compliance requirements should inform governance decisions.
