Lab 16 Advanced

Build Your AI Governance Framework

Time: 90–120 minutes. Modules: 6 and 7.

Objective

Use AI to draft three governance documents for AI use in your security operations: an Acceptable Use Policy, a risk register entry, and a data classification guide. Review and adapt each for your organisation.

Required: Access to Claude and an understanding of your organisation's existing governance structure.


Step 1: Generate the AI Acceptable Use Policy

Role: You are a security governance specialist drafting an 
acceptable use policy for AI tools in security operations.

Context: Our security team (5 analysts) uses AI tools for:
- Investigation triage and analysis
- Detection rule development
- Incident response documentation
- Security automation scripting
- Compliance and policy drafting

Our environment: Microsoft 365, Sentinel, Defender XDR.
We handle data classified up to [CONFIDENTIAL].
Regulatory requirements: GDPR, [add your applicable regulations].

Task: Draft an AI Acceptable Use Policy covering:
1. Approved AI tools and their authorised use cases
2. Data classification rules — what can and cannot be input to AI
3. Prohibited uses (specific examples)
4. Output review requirements before deployment
5. Incident reporting for AI misuse or data exposure
6. Training requirements for AI tool users
7. Review cadence for this policy

Constraints: Practical and enforceable, not aspirational. 
Each section should be specific enough that a SOC analyst 
knows exactly what they can and cannot do.

Review and adapt: check each section against your existing governance documents, replace the placeholders with your actual tools and regulations, and confirm that every rule is enforceable with your current tooling before circulating the draft for approval.


Step 2: Generate the AI risk register entry

Role: You are conducting a risk assessment for AI tool adoption 
in security operations.

Context: [Same environment description as Step 1]

Task: Produce a risk register entry covering:
1. Risk ID and title
2. Risk description (specific to AI in security operations)
3. Likelihood and impact assessment (use a 5x5 matrix)
4. Current controls (what you already have)
5. Residual risk after controls
6. Risk treatment plan (accept, mitigate, transfer, avoid)

Assess these specific risks:
- Sensitive data exposure through AI prompts
- AI hallucination leading to incorrect investigation conclusions
- Over-reliance on AI reducing analyst skill development
- AI-generated code containing vulnerabilities
- Shadow AI use by team members using unapproved tools
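Before pasting the risks above into the register, it helps to agree on how the 5x5 matrix turns likelihood and impact into a rating. A minimal scoring sketch; the band thresholds here are illustrative assumptions, not a standard, so align them with your organisation's risk methodology:

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Score a risk on a 5x5 matrix: score = likelihood x impact, then band it.

    Band thresholds are illustrative assumptions -- adjust to your methodology.
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if score >= 20:
        band = "Critical"
    elif score >= 12:
        band = "High"
    elif score >= 6:
        band = "Medium"
    else:
        band = "Low"
    return score, band

# Example: sensitive data exposure through AI prompts,
# assessed as likelihood 3 (possible) and impact 4 (major).
print(risk_score(3, 4))  # -> (12, 'High')
```

Scoring the same risk before and after controls gives you the inherent and residual ratings the register entry asks for.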

Step 3: Generate the data classification guide for AI tools

Role: You are creating a practical data classification guide 
for security analysts using AI tools.

Task: Create a decision matrix that tells analysts exactly what 
data they can input to AI tools, with specific examples:

For each data classification level, provide:
1. Can this data be input to AI tools? (Yes/No/Conditional)
2. Specific examples of data at this level
3. Required sanitisation steps before input (if conditional)
4. What to do if this data is accidentally input to an AI tool
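For the conditional levels, the guide's sanitisation steps can be partly automated. A minimal sketch of a pre-prompt sanitiser; the patterns, placeholders, and internal-domain suffixes are assumptions for illustration, so extend them for your environment:

```python
import re

# Illustrative redaction patterns -- assumptions, not a complete set.
# Order matters: IPs first so they are not half-matched by later rules.
PATTERNS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:[A-Za-z0-9-]+\.)+(?:internal|corp|local)\b"), "[HOSTNAME]"),
]

def sanitise(text: str) -> str:
    """Replace common identifiers with placeholders before AI input."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitise("Alert from 10.20.30.40 reported by j.smith@example.com on web01.corp"))
# -> Alert from [IP] reported by [EMAIL] on [HOSTNAME]
```

A script like this supports the guide but does not replace it: analysts still need the decision matrix to judge whether sanitised data is allowed at all.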

Step 4: Review all three documents together

Verify consistency across the three documents: the classification levels in the data classification guide should match the data rules in the Acceptable Use Policy, the approved tools listed in the policy should match the scope of the risk register entry, and the incident reporting steps should be identical wherever they appear.

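One quick consistency check is to confirm that every classification level named in the data classification guide also appears in the other two documents. A minimal sketch; the level names and sample text below are placeholders for illustration:

```python
# Placeholder classification levels -- replace with those from your guide.
LEVELS = ["PUBLIC", "INTERNAL", "CONFIDENTIAL"]

def missing_levels(document_text: str) -> list[str]:
    """Return the classification levels not mentioned in a document."""
    upper = document_text.upper()
    return [level for level in LEVELS if level not in upper]

aup = "Analysts may input PUBLIC and INTERNAL data; CONFIDENTIAL is prohibited."
print(missing_levels(aup))  # -> []
```

A non-empty result flags a document that silently ignores a level, which is exactly the kind of gap the joint review should catch.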

Verification checklist