OD1.7 The Attacker's Decision Matrix
You've classified attackers by constraint profile (OD1.3) and identified their objectives from telemetry diagnostics (OD1.2). This sub connects them: the decision matrix is how the attacker's objective, constraints, and reconnaissance findings combine into a specific operational plan — and how you reverse-engineer that plan from investigation evidence to predict the next move.
Operational Objective
Every operational decision an attacker makes — which infrastructure to build, which access method to use, which credentials to target, how fast to move — is driven by the intersection of three inputs: what they want (objective), what they have (constraints), and what they've learned (reconnaissance). Understanding the decision matrix lets you do something powerful during investigations: given what you've observed, predict what the attacker will do next.
Learning Objectives
By the end of this sub you will be able to:
- Explain the four outputs of the decision matrix (infrastructure design, access method selection, movement strategy, objective execution plan) and trace how each is driven by the attacker's objective, constraints, and reconnaissance. This matters because the matrix is the analytical framework that connects every concept in M1 into a single predictive model.
- Reverse-engineer an attacker's operational plan from investigation evidence using the four-step process (infer objective from targets, infer constraints from tools/pace, infer reconnaissance from targeting specificity, predict next move). The same reverse-engineering approach was used by Microsoft DART to reconstruct the Midnight Blizzard campaign timeline within the first 48 hours. This matters because the reverse-engineering process is what turns M1's theoretical frameworks into actionable investigation technique.
- Use AI to accelerate adversary profiling during live investigations by feeding first-phase evidence into a structured prompt that produces an adversary profile, prediction, and leadership brief. This matters because the matrix analysis competes with a dozen urgent tasks during an incident — AI acceleration compresses 30-60 minutes of structured analysis into minutes.
Figure OD1.7 — The decision matrix. Three inputs (objective, constraints, reconnaissance) produce four operational outputs. During investigations, you reverse the matrix: observe the outputs, infer the inputs, predict the next output.
Attackers don't pick techniques randomly
Every decision is driven by the intersection of objective, constraints, and reconnaissance.
The ransomware affiliate doesn't choose credential stuffing because it's their favourite technique. They choose it because they have breached credentials from reconnaissance, their timeline is 48 hours, and credential stuffing is the fastest path to access. The espionage operator doesn't choose AiTM phishing because it's the most sophisticated option. They choose it because their target has MFA enabled, their timeline is unlimited, and AiTM bypasses MFA while providing the session token needed for quiet, persistent access.
Understanding this reasoning lets you predict during investigations: given what you've observed, what would a rational attacker do next? Not mind-reading — operational logic. Rational actors facing the same constraints make similar decisions.
The four outputs of the matrix
Infrastructure design
Driven primarily by budget and risk tolerance.
Low budget + high risk tolerance = commodity infrastructure: a VPS, a leaked C2 framework, a free domain. Disposable. Total cost under $50. High budget + low risk tolerance = layered infrastructure: aged domains, redundant redirectors, CDN-fronted C2, fallback channels. Total cost in the thousands. The infrastructure investment is proportional to the expected campaign duration.
Defensive translation: infrastructure sophistication tells you the expected campaign duration. Commodity = days. Layered = months. This directly affects your containment strategy.
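The budget/risk mapping above can be sketched as a simple lookup. This is a minimal illustration, not part of the course material — the tier labels and containment hints paraphrase the text, and the middle-ground case is an assumption:

```python
def infrastructure_profile(budget: str, risk_tolerance: str) -> dict:
    """Map budget + risk tolerance to an expected infrastructure tier.

    Duration estimates come from the text (commodity = days,
    layered = months); everything else is illustrative.
    """
    if budget == "low" and risk_tolerance == "high":
        return {
            "tier": "commodity",          # a VPS, leaked C2, free domain
            "expected_duration": "days",
            "containment_hint": "fast, aggressive containment is low-risk",
        }
    if budget == "high" and risk_tolerance == "low":
        return {
            "tier": "layered",            # aged domains, redirectors, CDN-fronted C2
            "expected_duration": "months",
            "containment_hint": "expect fallback channels; contain all layers at once",
        }
    # Mixed profiles: don't commit to a containment strategy yet
    return {"tier": "mixed", "expected_duration": "uncertain",
            "containment_hint": "gather more evidence before committing"}

print(infrastructure_profile("low", "high")["expected_duration"])  # days
```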
Access method selection
The decision most directly shaped by all three inputs.
Consider the decision tree for an M365-heavy target: breached credentials available + legacy auth not blocked → credential stuffing (fast, cheap). Legacy auth blocked + MFA is standard push → MFA fatigue (cheap, moderate probability). MFA enforced + conditional access blocks unfamiliar devices → AiTM phishing (more expensive, bypasses MFA). Citrix on the perimeter with a known CVE → vulnerability exploitation (technically demanding, no user interaction).
Each branch produces different telemetry. The access method tells you what the attacker prioritized — speed, stealth, reliability, or capability.
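The decision tree above can be written directly as code. A minimal sketch — the dictionary keys are illustrative names, not a standard schema, and the branch order mirrors the text:

```python
def select_access_method(target: dict) -> str:
    """Walk the M365 access-method decision tree described above.

    Keys ('breached_creds', 'legacy_auth_blocked', 'mfa',
    'conditional_access', 'perimeter_cve') are illustrative.
    """
    if target.get("breached_creds") and not target.get("legacy_auth_blocked"):
        return "credential stuffing"          # fast, cheap
    if target.get("mfa") == "push":
        return "MFA fatigue"                  # cheap, moderate probability
    if target.get("mfa") == "enforced" and target.get("conditional_access"):
        return "AiTM phishing"                # more expensive, bypasses MFA
    if target.get("perimeter_cve"):
        return "vulnerability exploitation"   # no user interaction
    return "insufficient reconnaissance"

print(select_access_method({"breached_creds": True, "legacy_auth_blocked": False}))
# credential stuffing
```

Reading the branches as a defender works in reverse: observe the access method in telemetry, then infer which branch conditions must have been true.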
Movement strategy
Driven by objective and timeline.
Ransomware: fast toward DCs, backup systems, file servers. Many systems in hours. Dense authentication telemetry. Espionage: may not move at all if the initial compromise provides access to the target data. When they do move, one hop per day during business hours.
Defensive translation: fast, broad movement = financial or destructive. No movement or targeted, slow movement = intelligence. Movement toward build infrastructure = access.
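The tempo-to-objective translation lends itself to a rough classifier. The thresholds below are illustrative assumptions chosen to match the text (many systems in hours vs. one hop per day), not calibrated values:

```python
def classify_movement(systems_per_hour: float, hops_per_day: float) -> str:
    """Rough mapping from lateral-movement tempo to objective family.

    Thresholds are illustrative. Direction matters too: movement
    toward build infrastructure suggests an access objective
    regardless of tempo (not modeled here).
    """
    if systems_per_hour >= 2:
        # Dense authentication telemetry, many systems in hours
        return "financial or destructive"
    if hops_per_day <= 1:
        # No movement, or one careful hop per day
        return "intelligence (or access)"
    return "indeterminate - keep collecting authentication telemetry"
```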
Objective execution plan
Ransomware: GPO/PsExec deployment across the domain. Espionage: collection cadence — regular, automated exfiltration. BEC: wait for the right transaction to redirect. Sabotage: trigger the destructive payload. The execution plan determines the last possible detection point.
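Since the execution plan determines the last possible detection point, that mapping can be kept as a lookup. Phase names paraphrase the text and are illustrative:

```python
# Objective -> the final observable action before impact (illustrative phrasing)
LAST_DETECTION_POINT = {
    "ransomware": "GPO/PsExec mass-deployment staging",
    "espionage":  "regular automated exfiltration cadence",
    "bec":        "payment-redirect setup on the right transaction",
    "sabotage":   "destructive payload staging",
}

def last_chance_to_detect(objective: str) -> str:
    return LAST_DETECTION_POINT.get(objective, "unknown - profile the attacker first")
```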
Worked example — two attackers, one target
Same target: a mid-size engineering firm with 865 endpoints, M365 E5, Sentinel, Conditional Access with MFA, legacy auth still permitted for a handful of accounts, three employees' credentials in breach databases, and a Citrix NetScaler on the perimeter.
Attacker A — Ransomware affiliate. Low budget, short time, medium capability, high risk tolerance. The matrix produces: credential stuff the breached passwords against legacy auth → harvest GAL from Exchange → internal phishing to IT admin → LSASS dump for domain admin creds → PsExec to DC and backup → delete shadow copies, disable backup agents, exfiltrate 50GB, deploy ransomware via GPO. Six detection points in 48 hours.
Attacker B — State intelligence. High budget, unlimited time, high capability, low risk tolerance. The matrix produces: build dedicated C2 with aged domains and CDN fronting → AiTM phishing targeting the engineering director's assistant → capture session token → OAuth consent grant for custom app with Mail.Read and Files.Read.All → collect engineering documents weekly via Graph API → maintain access for months. Four detection opportunities, each significantly harder than the ransomware scenario.
Same target. Different constraints. Completely different operational plans. Different detection challenges.
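The two worked profiles can be encoded to show the matrix running forward. This sketch hard-codes only the two profiles from the example — phase names paraphrase the text, and anything outside those profiles deliberately returns an empty plan:

```python
from dataclasses import dataclass

@dataclass
class Constraints:
    budget: str          # low / medium / high
    timeline: str        # hours / days / months
    capability: str      # low / medium / high
    risk_tolerance: str  # low / medium / high

def run_matrix(objective: str, c: Constraints, recon: set) -> list[str]:
    """Forward matrix for the two worked profiles above (illustrative)."""
    # Attacker A - ransomware affiliate: low budget, short time, high risk tolerance
    if objective == "financial" and "breached_creds" in recon and c.timeline in ("hours", "days"):
        return [
            "credential stuffing via legacy auth",
            "GAL harvest + internal phishing to IT admin",
            "LSASS dump for domain admin creds",
            "PsExec to DC and backup systems",
            "exfiltrate, then deploy ransomware via GPO",
        ]
    # Attacker B - state intelligence: high budget, unlimited time, low risk tolerance
    if objective == "intelligence" and c.timeline == "months":
        return [
            "aged-domain C2 with CDN fronting",
            "AiTM phishing for a session token",
            "OAuth consent grant (Mail.Read, Files.Read.All)",
            "weekly collection via Graph API",
        ]
    return []  # profile not covered by this sketch

plan = run_matrix("financial", Constraints("low", "days", "medium", "high"), {"breached_creds"})
print(len(plan))  # 5
```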
Reverse-engineering the matrix during investigations
Four steps. Works backwards from evidence to plan to prediction.
Step 1 — Infer objective from targets. What systems has the attacker accessed? Executive mailboxes = intelligence. Backup infrastructure = financial. Build pipeline = access.
Step 2 — Infer constraints from tools and pace. Commodity tools or custom? Fast or slow? The operational profile reveals budget, timeline, capability, and risk tolerance.
Step 3 — Infer reconnaissance from targeting specificity. Did they target specific people by name? Know the security stack? Exploit a specific vulnerability on the first attempt? The target-specific knowledge reveals pre-attack intelligence depth.
Step 4 — Run the matrix forward. Given inferred objective + constraints + reconnaissance depth, what would a rational attacker do next? That prediction tells you what to look for, protect, and contain.
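The four steps above can be sketched as a rule-based function. The evidence keys and the inference rules are illustrative simplifications of the text, not a standard schema:

```python
def reverse_engineer(evidence: dict) -> dict:
    """The four reverse-engineering steps as rough rules (illustrative)."""
    profile = {}
    # Step 1 - infer objective from targets
    targets = evidence.get("targets", set())
    if "backup" in targets or "dc" in targets:
        profile["objective"] = "financial"
    elif "executive_mailbox" in targets:
        profile["objective"] = "intelligence"
    elif "build_pipeline" in targets:
        profile["objective"] = "access"
    # Step 2 - infer constraints from tools and pace
    profile["budget"] = "low" if evidence.get("tooling") == "commodity" else "high"
    profile["timeline"] = "short" if evidence.get("tempo") == "fast" else "long"
    # Step 3 - infer reconnaissance from targeting specificity
    profile["recon_depth"] = "deep" if evidence.get("targeting") == "by_name" else "shallow"
    # Step 4 - run the matrix forward: predict the next move
    if profile.get("objective") == "financial":
        profile["prediction"] = "protect backups and DCs next"
    elif profile.get("objective") == "intelligence":
        profile["prediction"] = "watch for OAuth grants and slow exfiltration"
    else:
        profile["prediction"] = "monitor build and identity infrastructure"
    return profile

p = reverse_engineer({"targets": {"backup"}, "tooling": "commodity", "tempo": "fast"})
print(p["objective"], "-", p["prediction"])
```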
The key insight: partial evidence is sufficient
The four constraint dimensions correlate. Two dimensions assessed with confidence often give you all four:
Quiet + custom tooling → long timeline, intelligence/access objective. Loud + commodity tools → short timeline, financial objective. Targeted single-mailbox access + slow pace → intelligence, high capability.
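Encoding only the correlations stated above makes the "two dimensions give you all four" point concrete. A minimal sketch — profiles outside the stated correlations return an honest "unknown" rather than a guess:

```python
def infer_from_two_dimensions(noise: str, tooling: str) -> dict:
    """Fill in unobserved dimensions from two observed ones.

    Encodes only the correlations stated in the text; anything
    else is left as 'unknown'.
    """
    if noise == "quiet" and tooling == "custom":
        return {"timeline": "long", "objective": "intelligence/access"}
    if noise == "loud" and tooling == "commodity":
        return {"timeline": "short", "objective": "financial"}
    return {"timeline": "unknown", "objective": "unknown"}
```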
Using AI to accelerate the reverse-engineering
During a live investigation, the matrix analysis competes with containment, evidence collection, and leadership communication. Feed first-phase evidence into Claude with this structure:
AI-Accelerated Adversary Profiling Prompt:
I'm conducting a live investigation. Evidence from the first 4 hours:
[Paste: timeline of events with timestamps, systems, accounts, tools]
Build an adversary operational profile:
1. OBJECTIVE: What does the target selection suggest?
2. CONSTRAINTS: What do tools, pace, and OPSEC suggest about
budget, timeline, capability, risk tolerance?
3. RECONNAISSANCE: What did the attacker already know?
4. PREDICTION: What will they do in the next 12-24 hours?
What systems should we prioritize for containment?
5. BRIEF: One-paragraph leadership summary.
The LLM produces a first-draft profile in minutes. The analyst validates, refines, and acts on it.
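During an incident it helps to keep the prompt as a template so evidence can be dropped in quickly. A minimal sketch — this is plain string templating, not an LLM API call, and the function name is illustrative:

```python
def build_profiling_prompt(timeline_evidence: str) -> str:
    """Assemble the adversary-profiling prompt from first-phase evidence.

    Section headings mirror the prompt in this sub.
    """
    return (
        "I'm conducting a live investigation. Evidence from the first 4 hours:\n"
        f"{timeline_evidence}\n\n"
        "Build an adversary operational profile:\n"
        "1. OBJECTIVE: What does the target selection suggest?\n"
        "2. CONSTRAINTS: What do tools, pace, and OPSEC suggest about "
        "budget, timeline, capability, risk tolerance?\n"
        "3. RECONNAISSANCE: What did the attacker already know?\n"
        "4. PREDICTION: What will they do in the next 12-24 hours? "
        "What systems should we prioritize for containment?\n"
        "5. BRIEF: One-paragraph leadership summary.\n"
    )

print(build_profiling_prompt("09:12 credential stuffing against legacy auth")[:50])
```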
Hands-on Exercise — Decision Matrix Construction
Objective: Practice the reverse-engineering process on a published incident report, then validate with AI.
Prerequisites: A detailed published incident report (Mandiant M-Trends, CrowdStrike case study, Microsoft DART blog, or CISA advisory with technical detail).
STEP 1 — Read only the FIRST section of the report
Stop before the report reveals the full campaign. Extract: what systems were accessed, what tools were used, what was the operational tempo, how noisy was the operation.
STEP 2 — Classify using the matrix inputs
Objective: financial / intelligence / disruption / access
Budget: low / medium / high
Timeline: hours / days / months
Capability: low / medium / high
Risk tolerance: high / medium / low
Write one sentence for each with evidence.
STEP 3 — Predict BEFORE reading the rest
Based on your classification, write:
a. What the attacker will target next
b. How they will move (fast/broad or slow/targeted)
c. What techniques they will use
d. What the final objective will be
STEP 4 — Read the rest of the report and compare
How accurate was your prediction? Where did it diverge? What additional evidence would have improved it?
STEP 5 — AI validation
Paste the first-section evidence into Claude with the adversary profiling prompt from this sub. Compare the LLM's profile and prediction with yours. Did the LLM catch anything you missed? Did you catch anything the LLM missed?
Success criteria: You've classified the attacker from partial evidence, predicted the remaining campaign phases, and compared your prediction against the full report. By the third report, the profiling process becomes automatic.
Challenge: Run the exercise on two reports from different attacker types (e.g., one ransomware, one espionage). Compare how the matrix produces different operational plans from different constraint profiles against similar targets.
You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.