Threat Intelligence & Detection Engineering
Detection engineering is where SOC analysts create lasting value — not triaging today’s alerts, but building the rules that catch tomorrow’s attacks. Claude accelerates every phase: mapping techniques to MITRE ATT&CK, writing KQL detection logic, generating threat briefings for stakeholders, and documenting rules for the team.
Workflow 1: MITRE ATT&CK mapping
Every detection rule should map to a MITRE ATT&CK technique. Claude knows the framework well — the ATT&CK matrix and its technique descriptions are well represented in its training data.
The mapping prompt:
I am building a detection rule that identifies inbox rules
created with financial keyword targeting (invoice, payment,
bank, wire, transfer) from non-corporate IPs.
Map this detection to:
1. The most specific MITRE ATT&CK technique and sub-technique
2. The tactic it falls under
3. Related techniques the attacker likely also uses (the
preceding and following techniques in the attack chain)
Format: Technique ID — Name — Tactic — Description of relevance
Claude returns: T1114.003 (Email Forwarding Rule) under Collection, with related techniques T1078.004 (Valid Accounts: Cloud) preceding it and T1534 (Internal Spearphishing) following it. This gives you the MITRE mapping for the rule and the attack chain context for its documentation.
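The detection described above can be sketched in KQL. This is a hedged sketch, not a deployable rule: it assumes the OfficeActivity table in Sentinel, and the keyword list, field names, and corporate IP range are illustrative placeholders you would replace with your own values.

```kql
// Sketch: inbox rules created with financial keywords from non-corporate IPs.
// Assumptions: OfficeActivity table; CorporateRanges is a placeholder for
// your real egress ranges.
let FinancialKeywords = dynamic(["invoice", "payment", "bank", "wire", "transfer"]);
let CorporateRanges = dynamic(["203.0.113.0/24"]);  // placeholder range
OfficeActivity
| where Operation in ("New-InboxRule", "Set-InboxRule")
| where Parameters has_any (FinancialKeywords)
| where not(ipv4_is_in_any_range(ClientIP, CorporateRanges))
| project TimeGenerated, UserId, ClientIP, Operation, Parameters
```

The `has_any` check against the raw Parameters blob is deliberately loose; a production rule would parse the rule conditions before matching keywords.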
Batch mapping: When you have a set of detection rules that need MITRE mapping, provide them all in one prompt:
Map each of these 5 detection rules to MITRE ATT&CK:
1. [Rule description]
2. [Rule description]
3. [Rule description]
4. [Rule description]
5. [Rule description]
For each: Technique ID, Sub-technique (if applicable), Tactic, and a one-line justification.
Return as a table.
Workflow 2: Detection rule generation
Claude generates KQL detection rules from a natural-language description of the attack technique. The key is specificity — tell Claude what the attack looks like in the logs, not just the attack name.
Weak prompt:
Write a detection rule for BEC.
Strong prompt:
<technique>
BEC vendor payment diversion — the attacker replies to an existing
email thread about an invoice, changing the bank account details.
The reply comes from the compromised internal mailbox but from
a different IP than the user's normal sign-in IP.
</technique>
<log_source>EmailEvents table in Microsoft Sentinel</log_source>
<detection_logic>
Detect: a reply email (Subject starts with "Re:") sent by an
internal user, where the SenderIPv4 for the reply differs from
the SenderIPv4 of the previous message in the same thread from
the same user. This indicates the reply was sent from a different
location — potentially by an attacker using the compromised mailbox.
</detection_logic>
<rule_requirements>
- Schedule: 15 min / Lookback: 20 min
- Severity: Medium
- MITRE: T1534 (Internal Spearphishing)
- Entity mapping: Account → SenderFromAddress
- Include tuning notes (expected false positives, recommended exclusions)
</rule_requirements>
Claude produces a deployable detection rule with the KQL, entity mapping, and tuning notes. Review the query logic, test against 30 days of data, and deploy.
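The kind of query Claude produces for this prompt looks roughly like the sketch below. It is an assumption-laden illustration, not the definitive output: the internal domain is a placeholder, and the thread correlation (stripping "Re:" prefixes and matching subjects) is simplified and needs testing against your tenant's data.

```kql
// Sketch: reply sent from a different IP than the sender's previous
// message in the same thread. "yourcompany.com" is a placeholder.
let replies = EmailEvents
    | where TimeGenerated > ago(20m)
    | where Subject startswith "Re:"
    | where SenderFromDomain == "yourcompany.com";
let priors = EmailEvents
    | where TimeGenerated > ago(30d)
    | where SenderFromDomain == "yourcompany.com"
    | project PriorSubject = Subject, SenderFromAddress,
              PriorIP = SenderIPv4, PriorTime = TimeGenerated;
replies
| extend ThreadSubject = trim_start(@"(?i)(re:\s*)+", Subject)
| join kind=inner (priors) on SenderFromAddress
| where PriorSubject has ThreadSubject and PriorTime < TimeGenerated
| where PriorIP != SenderIPv4
| project TimeGenerated, SenderFromAddress, Subject, SenderIPv4, PriorIP
```

Thread matching on subject alone will produce false positives for generic subjects; correlating on InternetMessageId headers is more robust if your log source exposes them.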
Workflow 3: IOC analysis and enrichment
When you receive IOCs from a threat intelligence feed, incident, or peer report, Claude accelerates the initial analysis.
<iocs>
IPs: 203.0.113.91, 198.51.100.44, 203.0.113.105
Domains: northgate-voicemail[.]com, login-northgate[.]com, secure-northgate[.]net
Email: notifications@northgate-voicemail[.]com
</iocs>
<task>
For each IOC:
1. Categorise: phishing infrastructure, C2, exfiltration, or unknown
2. Identify patterns (naming convention, infrastructure relationship)
3. Generate KQL queries to hunt for each IOC in our Sentinel workspace
4. Suggest Defender for Endpoint custom indicators (block/alert)
</task>
Claude produces: a categorised IOC table, pattern analysis (all domains impersonate “northgate” — indicating targeted campaign against Northgate Engineering), hunting queries for each IOC type, and indicator creation guidance.
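A hunting query for these IOCs might look like the sketch below (defanged brackets removed). Table and column names assume Sentinel's Microsoft 365 Defender connector; adapt them to your workspace.

```kql
// Sketch: hunt for the campaign IOCs in email telemetry over 30 days.
let BadIPs = dynamic(["203.0.113.91", "198.51.100.44", "203.0.113.105"]);
let BadDomains = dynamic(["northgate-voicemail.com", "login-northgate.com",
                          "secure-northgate.net"]);
EmailEvents
| where TimeGenerated > ago(30d)
| where SenderIPv4 in (BadIPs) or SenderFromDomain in (BadDomains)
| project TimeGenerated, RecipientEmailAddress, SenderFromAddress,
          SenderIPv4, Subject, DeliveryAction
```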
Limitation: Claude cannot check IOC reputation databases (VirusTotal, AbuseIPDB) unless it uses web search. For reputation checks, use your threat intelligence platform. Claude handles the analysis, pattern recognition, and query generation.
IOC pattern recognition — where Claude excels:
Claude identifies patterns across IOCs that humans may miss when processing a long list manually. Given 20 IPs, Claude spots: shared ASN ownership (10 IPs belong to the same hosting provider), geographic clustering (all IPs resolve to the same country), sequential allocation (IP range suggests the attacker purchased a block), and timing patterns (if timestamps are provided — all IPs first appeared within the same 48-hour window).
Given 15 domains, Claude identifies: naming conventions (all contain the target company name with common typosquat patterns), registration timing (if WHOIS data is provided), shared registrar or name server infrastructure, and domain generation algorithm (DGA) patterns if the domains appear randomly generated.
The workflow is: receive IOCs → Claude categorises and identifies patterns → you verify against your TI platform (VirusTotal, AbuseIPDB, Shodan) → Claude generates hunting queries → you run the queries in Sentinel → Claude analyses the results. Claude handles the analytical heavy lifting. Your TI tools provide the ground truth. Neither replaces the other.
Generating Defender custom indicators from IOCs:
For each confirmed malicious IOC, generate the Defender for
Endpoint custom indicator specification:
Format per indicator:
- Indicator type: IP / Domain / URL / File Hash
- Indicator value: [the IOC]
- Action: Block / Alert / Allowed
- Title: [descriptive name following naming convention]
- Description: [what this indicator relates to]
- Severity: High / Medium / Low
- MITRE technique: [relevant technique]
- Recommended expiry: [based on IOC type and threat actor patterns]
Claude produces indicator specifications you can paste directly into the Defender portal (Settings → Endpoints → Indicators) or deploy via API, cutting indicator creation from 2-3 minutes of manual portal entry per IOC to a single batch deployment of the whole set.
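Before (or immediately after) deploying block indicators, it is worth checking whether any endpoint already contacted the infrastructure. A hedged advanced-hunting sketch, reusing the example domains above:

```kql
// Sketch: any endpoint traffic to the IOC domains in the last 30 days?
// DeviceNetworkEvents uses Timestamp in advanced hunting
// (TimeGenerated if queried via Sentinel).
let BadDomains = dynamic(["northgate-voicemail.com", "login-northgate.com",
                          "secure-northgate.net"]);
DeviceNetworkEvents
| where Timestamp > ago(30d)
| where RemoteUrl has_any (BadDomains)
| summarize FirstSeen = min(Timestamp), LastSeen = max(Timestamp),
            Events = count() by DeviceName, RemoteUrl
```

A non-empty result turns the indicator deployment into an incident: those devices need triage, not just blocking.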
Workflow 4: Threat briefing generation
Weekly or monthly threat briefings for management need to be concise, relevant, and free of jargon. Claude transforms your technical notes into stakeholder-ready briefings.
<task>Draft a weekly threat briefing for the CISO.</task>
<this_week>
- 2 phishing campaigns targeting the company (blocked by transport rules)
- 1 credential stuffing attempt against 12 accounts (blocked by CA policy)
- Sentinel rule tuning: 3 rules adjusted for false positive reduction
- New detection rule deployed: financial keyword inbox rule monitoring
- Industry alert: CISA advisory on AiTM phishing targeting UK engineering firms
</this_week>
<format>
1-page briefing. Sections: Threats Blocked, Detection Improvements,
Industry Context, Recommended Actions. No technical jargon.
Written for a non-technical CISO who needs to brief the board.
</format>
Workflow 5: Detection rule documentation
Every detection rule needs documentation: what it detects, why it matters, the expected false positive rate, tuning guidance, and the MITRE mapping. This documentation is tedious to write. Claude generates it from the rule itself.
Document this detection rule for the SOC runbook:
[paste KQL rule]
Include:
- Rule name (follow naming convention: Severity-Tactic-Description)
- What it detects (plain English)
- Why it matters (attack context)
- MITRE ATT&CK mapping
- Expected false positive rate and sources
- Tuning guidance (what to adjust for your environment)
- Triage procedure (when this rule fires, what should the analyst do first?)
- Entity mapping
- Schedule and lookback
Claude produces complete rule documentation that goes directly into your SOC runbook. The documentation that analysts skip because it takes 20 minutes per rule now takes 2 minutes of review.
Try it yourself
Claude's MITRE mapping is typically correct for well-known techniques. The false positive assessment is reasonable but generic — you will need to adjust based on your environment's specific noise profile. The triage procedure is operationally sound for 80% of cases. The remaining 20% requires your environment-specific knowledge (which systems to check, which teams to contact). Overall: the documentation is a strong first draft that needs 5 minutes of localisation.
Knowledge checks
Check your understanding
1. Claude generates a detection rule and maps it to MITRE ATT&CK T1078 (Valid Accounts). Should you trust the mapping?
Key takeaways
Claude maps to MITRE ATT&CK reliably — verify sub-technique specificity.
Detection rule generation needs specific input. Describe the attack in log terms, not abstract terms. Tell Claude what the attack looks like in the data.
IOC analysis works for pattern recognition and query generation. Use your TI platform for reputation checks.
Rule documentation is the highest-ROI use case. It turns a 20-minute task into a 2-minute review. Every rule in your environment should have documentation — Claude makes this feasible.
Workflow 6: Building a threat briefing from raw intelligence
When a new threat advisory lands (CISA alert, Microsoft security blog, vendor threat report), you need to assess relevance to your environment and communicate it to stakeholders. Claude transforms raw intelligence into actionable briefings.
The threat briefing prompt:
<advisory>
CISA Advisory AA26-079A: AiTM phishing campaigns targeting
UK engineering and manufacturing sectors. Threat actors using
residential proxy infrastructure and EvilProxy phishing kits.
IOCs: [list of IPs and domains]
Affected products: Microsoft 365, Entra ID
</advisory>
<our_environment>
UK engineering company. 500 users. M365 E5.
Defender for Office 365 P2. Microsoft Sentinel.
CAE strict mode not yet deployed.
Token protection not yet deployed.
</our_environment>
<task>
Produce:
1. Relevance assessment (1-5 scale with justification)
2. Current exposure (which of our controls would catch this, which would miss it)
3. Recommended actions (prioritised, with effort estimate)
4. KQL hunting queries to check for indicators in our environment
5. CISO briefing paragraph (3-4 sentences, no jargon)
</task>
Claude produces a structured threat assessment that would take 2-3 hours to compile manually. The relevance score and exposure analysis are particularly valuable — they answer the CISO’s first question: “Does this affect us?”
Verifying the output: Claude’s assessment of your controls is based on what you told it about your environment. If you omitted a control (e.g., you have CAE but forgot to mention it), Claude’s exposure analysis will be wrong. The quality of the threat assessment is directly proportional to the accuracy of the environment description you provide.
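One of the hunting queries Claude might propose for an AiTM advisory is a sign-in anomaly check. This is a hedged sketch of one angle only — AiTM proxies satisfy MFA, so the signal is location and IP anomalies on successful MFA sign-ins, not MFA failures. Thresholds and lookback are illustrative.

```kql
// Sketch: users with successful sign-ins from more than one country
// in 14 days - a coarse AiTM/session-theft indicator to triage manually.
SigninLogs
| where TimeGenerated > ago(14d)
| where ResultType == 0
| extend Country = tostring(LocationDetails.countryOrRegion)
| summarize Countries = make_set(Country), IPs = make_set(IPAddress)
    by UserPrincipalName
| where array_length(Countries) > 1
```

Expect noise from travelling users and VPNs; this query is a triage starting point, not a detection rule.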
Workflow 7: Detection rule testing methodology
Claude generates detection rules — but how do you test them before deployment? Claude also generates the test plan.
<rule>[paste the KQL detection rule]</rule>
<task>
Generate a test plan for this rule:
1. What does a true positive look like? (describe the log event pattern)
2. What does a false positive look like? (describe legitimate activity that matches)
3. Write a KQL query that searches for historical true positives
in the last 30 days
4. Write a KQL query that estimates the false positive rate
5. Recommend a tuning threshold based on the estimated noise level
</task>
This produces a structured test plan that you execute in your Sentinel workspace before enabling the rule. The false positive estimation query is particularly valuable — it tells you how many alerts per day to expect, which determines whether the rule is operationally viable.
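A false positive estimation query typically follows the shape below: run the candidate rule's logic over 30 days and summarise matches per day. The sketch uses the inbox-rule detection from Workflow 1 as a stand-in for your rule's query body.

```kql
// Sketch: project daily alert volume for a candidate rule over 30 days.
// The filter lines stand in for your rule's actual logic.
OfficeActivity
| where TimeGenerated > ago(30d)
| where Operation in ("New-InboxRule", "Set-InboxRule")
| summarize MatchesPerDay = count() by bin(TimeGenerated, 1d)
| summarize AvgPerDay = avg(MatchesPerDay), PeakPerDay = max(MatchesPerDay)
```

If the projected average exceeds what your analysts can triage, the rule needs tighter conditions or a higher threshold before deployment.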
The rule lifecycle with Claude:
- Generate — Claude writes the KQL from your technique description
- Document — Claude produces the rule documentation (MITRE mapping, tuning notes)
- Test — Claude generates the test plan (true positive, false positive, threshold)
- You execute — Run the test queries in Sentinel, review results, adjust thresholds
- Deploy — Enable the rule with the tested configuration
- Tune — After 7 days, paste the alert data back to Claude for tuning recommendations
Each step takes 2-5 minutes of Claude interaction. The complete rule lifecycle (from technique to production rule) takes 1-2 hours instead of a full day.
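For the tuning step, the alert data you paste back to Claude can come from a query like this hedged sketch; the rule name is a hypothetical placeholder following the Severity-Tactic-Description convention mentioned earlier.

```kql
// Sketch: 7 days of alert volume for one rule, to feed back for tuning.
SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertName == "Medium-Collection-FinancialKeywordInboxRule"  // placeholder
| summarize Alerts = count() by bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```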
Check your understanding
2. You ask Claude to assess a threat advisory against your environment. Claude says "your environment is fully protected against this technique." Should you trust this assessment?