Compliance & Policy Generation
Compliance documentation is the task security professionals most often postpone — not because it is unimportant, but because it is tedious. Writing an Acceptable Use Policy from scratch takes a full day. Mapping the 93 ISO 27001:2022 Annex A controls to your environment takes a week. Claude compresses both: not by generating generic policies, but by producing structured drafts that you refine with organisational context.
Workflow 1: Policy drafting
Claude produces well-structured security policies that cover the required sections of a given framework. The key is providing the organisational context that makes a policy specific to your environment rather than generic.
Generic prompt (produces generic policy):
Write an information security policy.
Contextualised prompt (produces deployable draft):
<task>Draft an Acceptable Use Policy for AI tools.</task>
<organisation>
UK-based engineering company. 500 employees. M365 E5 environment.
Currently no formal AI policy. Employees are using Claude, ChatGPT,
and Copilot informally. Engineering team uses AI for CAD analysis.
Finance team uses AI for report drafting. SOC team uses Claude for
log analysis and KQL generation.
</organisation>
<framework>ISO 27001:2022 A.5.10 (Acceptable use of information and
other associated assets)</framework>
<requirements>
Sections: Purpose, Scope, Approved AI Tools (with data classification
restrictions per tool), Prohibited Uses, Data Handling Requirements
(what can/cannot be entered into AI tools), Monitoring and Audit,
Exceptions Process, Consequences for Violation.
Must address: shadow AI (unapproved tools), data classification
(what sensitivity levels can be processed by which tools), personal
vs corporate accounts, and retention of AI-generated outputs.
</requirements>
<tone>Professional, enforceable, clear to non-technical staff.</tone>
Claude produces a complete policy draft with all required sections. You review for: organisational accuracy (are the approved tools correct?), legal compliance (does it meet UK employment law requirements for monitoring disclosure?), and practical enforceability (can IT actually enforce this?). The draft that takes 4-6 hours to write takes 30 minutes to review and refine.
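Part of that review can be automated. The sketch below, a minimal example assuming the section headings listed in the `<requirements>` block above, checks that a drafted policy actually contains every required section before a human spends time on it. The sample draft text is illustrative.

```python
# Check a Claude-drafted policy for the required sections before review.
# Section names follow the <requirements> prompt; the draft is a sample.

REQUIRED_SECTIONS = [
    "Purpose", "Scope", "Approved AI Tools", "Prohibited Uses",
    "Data Handling Requirements", "Monitoring and Audit",
    "Exceptions Process", "Consequences for Violation",
]

def missing_sections(policy_text: str) -> list[str]:
    """Return the required section headings absent from the draft."""
    lowered = policy_text.lower()
    return [s for s in REQUIRED_SECTIONS if s.lower() not in lowered]

# Illustrative draft that is missing its second half.
draft = "1. Purpose\n2. Scope\n3. Approved AI Tools\n4. Prohibited Uses"
print(missing_sections(draft))
```

A non-empty result means the draft goes back to Claude for completion before anyone reads it line by line.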
Workflow 2: Compliance gap analysis
Upload your current control matrix and the target framework. Claude identifies the gaps.
<task>Perform a gap analysis.</task>
<current_state>
I am uploading our current ISO 27001 Statement of Applicability (SoA)
as a CSV. Each row has: Control ID, Control Name, Implementation Status
(Implemented, Partially Implemented, Not Implemented), Evidence Reference.
</current_state>
<target>ISO 27001:2022 Annex A — all 93 controls</target>
<output>
For each control that is "Not Implemented" or "Partially Implemented":
1. The control requirement (what it asks for)
2. The gap (what is missing from our current implementation)
3. Recommended remediation action (specific, not generic)
4. Estimated effort (Low/Medium/High)
5. Priority (based on risk — which gaps create the most exposure?)
Return as a table sorted by Priority (highest first).
</output>
Upload the CSV. Claude processes the SoA against its knowledge of ISO 27001:2022 and produces a prioritised gap analysis. This is the first draft of your remediation roadmap — review the priorities against your organisation’s actual risk appetite and business context.
Limitation: Claude’s knowledge of ISO 27001:2022 is based on its training data. It covers the standard well, but verify specific control requirements against the official standard text. Claude may paraphrase a requirement slightly differently from the official wording — which matters for audit purposes.
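Before uploading, it can help to pre-filter the SoA so Claude only receives the rows that need analysis. A minimal sketch, assuming the column names described in the `<current_state>` block above; the sample rows are illustrative:

```python
# Pre-filter a Statement of Applicability CSV to the controls that need
# gap analysis. Column names match the <current_state> description.
import csv
import io

NEEDS_REVIEW = {"Not Implemented", "Partially Implemented"}

def controls_needing_review(csv_text: str) -> list[dict]:
    """Return only the SoA rows whose status requires a gap analysis."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if row["Implementation Status"] in NEEDS_REVIEW]

sample = (
    "Control ID,Control Name,Implementation Status,Evidence Reference\n"
    "A.5.1,Policies for information security,Implemented,DOC-001\n"
    "A.8.16,Monitoring activities,Partially Implemented,SENT-029\n"
    "A.5.26,Response to incidents,Not Implemented,\n"
)
gaps = controls_needing_review(sample)
print([row["Control ID"] for row in gaps])  # ['A.8.16', 'A.5.26']
```

Filtering first keeps the prompt focused and makes it easy to confirm that every flagged control appears in Claude's output table.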
Workflow 3: Risk assessment documentation
Claude accelerates risk assessment by generating structured risk entries from your informal risk notes.
<task>Document this risk for a risk register.</task>
<risk>
Employees are using personal Claude/ChatGPT accounts to process
company data. No approved AI tools policy exists. Multiple instances
of source code, customer data, and financial reports pasted into
free-tier AI accounts.
</risk>
<format>
Risk ID: [I will assign]
Risk Title: [concise name]
Risk Description: [detailed description of the threat]
Likelihood: [1-5 with justification]
Impact: [1-5 with justification]
Risk Rating: [Likelihood × Impact]
Current Controls: [what is currently in place]
Recommended Controls: [what should be implemented]
Control Owner: [leave blank]
Target Date: [leave blank]
Residual Risk: [expected rating after controls implemented]
Framework Mapping: [NIST CSF, ISO 27001 control references]
</format>
Claude produces a complete risk register entry. The likelihood and impact assessments are reasonable starting points — but they must be validated by someone who understands your organisation’s specific risk context. Claude cannot assess whether your CEO considers AI data leakage a career-ending issue or a minor concern — that political context determines the actual impact score.
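The arithmetic in the register, by contrast, should never depend on the model. A minimal sketch, assuming the 1-5 scales and field names from the `<format>` block above (the scores shown are illustrative), validates the scores and derives the ratings locally:

```python
# Validate likelihood/impact scores and derive risk ratings locally,
# so Rating = Likelihood x Impact is always consistent in the register.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    title: str
    likelihood: int          # 1-5, validated below
    impact: int              # 1-5
    residual_likelihood: int # expected score after controls
    residual_impact: int

    def __post_init__(self):
        for score in (self.likelihood, self.impact,
                      self.residual_likelihood, self.residual_impact):
            if not 1 <= score <= 5:
                raise ValueError(f"score {score} outside the 1-5 scale")

    @property
    def rating(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual_rating(self) -> int:
        return self.residual_likelihood * self.residual_impact

risk = RiskEntry("Shadow AI data leakage", likelihood=4, impact=4,
                 residual_likelihood=2, residual_impact=4)
print(risk.rating, risk.residual_rating)  # 16 8
```

If Claude's draft entry states a rating that disagrees with the recomputed product, that is a signal the draft was edited inconsistently, not that the arithmetic changed.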
Workflow 4: Audit preparation
Claude helps you prepare the narrative — what you will say. But auditors want evidence — what you can show. For every control, prepare both: the Claude-drafted explanation of how the control is implemented (what you say) AND the actual evidence (Sentinel screenshots, policy documents, access review exports) that proves the implementation (what you show). Claude drafts the first. You gather the second. Both are required.
Before a compliance audit, you need to prepare evidence, update documentation, and anticipate auditor questions. Claude helps with all three.
<task>Prepare for an ISO 27001 surveillance audit.</task>
<scope>The audit covers: A.5.1 (Information Security Policies),
A.8.2 (Privileged Access Rights), A.8.16 (Monitoring Activities),
and A.5.26 (Response to Information Security Incidents).</scope>
<our_evidence>
- A.5.1: Updated information security policy (last reviewed January 2026)
- A.8.2: PAM solution deployed, quarterly access reviews documented
- A.8.16: Sentinel with 29 detection rules, daily alert triage documented
- A.5.26: 3 incident reports from 2026 (AiTM, BEC, Token Replay)
</our_evidence>
<output>
For each control:
1. What the auditor will ask (3-5 likely questions)
2. What evidence to present
3. Potential weaknesses the auditor may probe
4. How to address those weaknesses in the meeting
</output>
This is audit rehearsal. Claude generates the questions an auditor would ask based on the control requirements and your evidence. Prepare answers for each before the audit meeting.
Try it yourself
Claude identifies genuine gaps (missing sections, outdated references) and occasionally flags items that are actually adequate. The gap identification is 80-90% accurate. The recommendations are structured and specific. You will need to verify framework-specific wording against the official standard text for audit purposes.
Check your understanding
1. Claude drafts an ISO 27001 policy section and cites "A.5.10 — Acceptable use of information." Should you include the citation directly in the final policy?
Key takeaways
Policy drafting is Claude’s highest-ROI compliance use case. A day of writing becomes 30 minutes of review.
Provide organisational context. Generic prompts produce generic policies. Specific context (industry, size, regulatory environment, current state) produces deployable drafts.
Verify framework citations. Claude references the correct standard 90% of the time. The other 10% matters for audits.
Risk assessments need human calibration. Claude produces reasonable likelihood/impact scores. Your organisation’s risk appetite and political context determine the actual ratings.
Workflow 5: Framework cross-mapping
Many organisations need to comply with multiple frameworks simultaneously — ISO 27001 AND NIST CSF AND SOC 2. Claude maps controls across frameworks, identifying overlaps and unique requirements.
<task>Map our ISO 27001:2022 controls to NIST CSF 2.0 and SOC 2.</task>
<controls>
I am uploading our ISO 27001 Statement of Applicability (CSV).
For each implemented control, identify:
1. The equivalent NIST CSF 2.0 function/category/subcategory
2. The equivalent SOC 2 Trust Services Criteria
3. Whether the ISO control fully satisfies both, or whether
additional evidence is needed for NIST/SOC 2
</controls>
<output>
Table with columns: ISO Control | NIST CSF Mapping | SOC 2 Mapping | Coverage Assessment (Full/Partial/Gap)
Then a summary: how many controls map cleanly across all three,
how many have gaps, and what the top 5 gaps are.
</output>
This cross-mapping typically takes a compliance specialist 2-3 weeks. Claude produces a first-draft mapping in minutes. The draft requires expert review — Claude may incorrectly map controls that have similar but not identical requirements across frameworks — but it removes the blank-page stage of the mapping work.
Where Claude gets cross-mapping wrong: Frameworks use different terminology for similar concepts. Claude sometimes maps based on keyword similarity rather than actual requirement alignment. Example: ISO A.8.16 (Monitoring activities) and NIST DE.CM (Security continuous monitoring) sound similar but have different scope requirements. Claude may mark this as “Full” when the actual mapping is “Partial” because your ISO monitoring implementation does not cover all NIST DE.CM subcategories. Verify each “Full” mapping against the actual framework text.
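That verification step can be driven from the mapping table itself. A minimal sketch that summarises coverage and extracts every "Full" mapping for manual checking; the rows below are illustrative placeholders, not a real ISO/NIST/SOC 2 mapping:

```python
# Summarise a cross-mapping table and list every "Full" mapping for
# mandatory manual verification. The rows are illustrative only.
from collections import Counter

mapping = [
    {"iso": "A.5.1",  "nist": "GV.PO-01", "soc2": "CC1.1", "coverage": "Full"},
    {"iso": "A.8.16", "nist": "DE.CM-01", "soc2": "CC7.2", "coverage": "Partial"},
    {"iso": "A.8.2",  "nist": "PR.AA-05", "soc2": "CC6.3", "coverage": "Full"},
    {"iso": "A.5.26", "nist": "RS.MA-01", "soc2": "CC7.4", "coverage": "Gap"},
]

summary = Counter(row["coverage"] for row in mapping)
to_verify = [row["iso"] for row in mapping if row["coverage"] == "Full"]
print(dict(summary))   # {'Full': 2, 'Partial': 1, 'Gap': 1}
print(to_verify)       # ['A.5.1', 'A.8.2']
```

The `to_verify` list becomes the checklist for comparing each "Full" claim against the actual framework text.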
Workflow 6: Quarterly policy review acceleration
Policies require periodic review — typically annually or quarterly. Claude accelerates the review by identifying sections that need updating.
<task>Review this policy for currency and completeness.</task>
<policy>[paste or upload the full policy document]</policy>
<review_criteria>
1. Are there references to deprecated technologies or products?
(e.g., "Azure Active Directory" should now be "Entra ID")
2. Are there references to outdated regulatory requirements?
3. Are there sections that no longer reflect our actual practices?
4. Are there gaps — areas that should be covered but are not?
(Compare against: ISO 27001:2022, NIST CSF 2.0, UK GDPR)
5. Is the language clear and enforceable?
</review_criteria>
<output>
For each finding:
- Section/paragraph reference
- Issue (what is wrong or missing)
- Recommended change (specific wording or addition)
- Priority (High/Medium/Low)
</output>
Claude produces a structured review report. For a 20-page policy, this takes 5 minutes. Manually reviewing the same policy against three frameworks takes a full day. The output is a prioritised action list that goes directly into the policy update workflow.
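The first review criterion, deprecated product names, is mechanical enough to pre-scan locally before the Claude review. A minimal sketch, assuming a small terminology map seeded with the example from the criteria above (extend it for your own estate):

```python
# First-pass scan for deprecated product names in a policy document.
# The terminology map is a starting point; extend it as renames occur.
import re

DEPRECATED = {
    r"Azure Active Directory": "Microsoft Entra ID",
    r"Azure AD\b": "Microsoft Entra ID",
}

def scan_policy(text: str) -> list[tuple[str, str]]:
    """Return (deprecated pattern, replacement) pairs found in the text."""
    return [(pattern, replacement)
            for pattern, replacement in DEPRECATED.items()
            if re.search(pattern, text)]

policy = "All staff accounts are managed in Azure Active Directory."
print(scan_policy(policy))
```

Anything the scan catches is a guaranteed finding; the Claude review then concentrates on the judgement-heavy criteria (practice drift, gaps, enforceability).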
Try it yourself
Claude typically identifies 5-15 issues in a policy that has not been reviewed recently. Common findings: references to deprecated product names (Azure AD → Entra ID), missing sections for new technology areas (AI governance, cloud security), outdated regulatory references, and vague language that is not enforceable. The findings are 80-90% valid. The remaining 10-20% are false positives where Claude misinterprets context. Overall: the review takes 5 minutes and produces a genuine improvement list.
Check your understanding
2. Claude maps ISO 27001 A.8.16 (Monitoring) to NIST CSF DE.CM and marks it as "Full coverage." Is this reliable?