In this module
OD0.2 How Attackers Think Differently from Defenders
You've investigated incidents and seen attacker behavior in telemetry. You know attackers don't follow playbooks the way defenders do. This sub names the specific cognitive differences between offensive and defensive thinking — not as an abstract exercise, but as a practical tool for anticipating attacker decisions during live investigations.
Operational Objective
During an investigation, you're reconstructing what the attacker did. But reconstruction is backward-looking — you're reading events after they happened. To contain an active threat, you need to anticipate the next step. Anticipation requires thinking the way the attacker thinks: in objectives and constraints, not controls and compliance.
Every investigation hits a moment where you've established what the attacker did but need to predict what they'll do next. You've found the compromised workstation, identified the credential theft, seen the lateral movement. The attacker is still in the network. Where do they go from here? The answer depends on how the attacker thinks — and that's fundamentally different from how defenders think. This sub maps five cognitive differences and gives you a practical method for switching perspectives during a live investigation.
Learning Objectives
By the end of this sub you will be able to:
- Name the five cognitive differences between defensive and offensive thinking and explain how each one changes investigation and containment decisions. The same differences explain why Microsoft's threat intelligence team restructured around "attacker intent" rather than "alert category" after tracking Nobelium — because category-based analysis missed the campaign connections that intent-based analysis revealed. This matters because switching perspective during an active investigation lets you predict the attacker's next step instead of only reconstructing their last one.
- Apply the three-step perspective switching method (reframe as decision → identify constraint profile → predict next step) to attacker actions observed in telemetry. This matters because prediction enables pre-positioning — you can build a detection query or containment action for the attacker's next move before they take it.
- Classify an attacker's constraint profile (time-pressured/noisy vs patient/quiet) from the telemetry you've already collected, and use that classification to predict whether the attacker will prioritize speed or stealth in their next action. This matters because the constraint profile determines the response urgency — a ransomware operator under time pressure requires immediate containment, while a persistent access operator requires careful scoping before any action that might alert them.
Figure OD0.2 — The five cognitive differences. Defenders think backward from events. Attackers think forward toward objectives. Switching to the attacker's perspective during an investigation lets you predict next steps instead of only reconstructing past ones.
The five differences
Difference 1 — Controls vs gaps
Defenders inventory what they have. Attackers inventory what's missing.
When a security team assesses its posture, it lists controls: MFA is deployed, EDR is on every endpoint, Sentinel ingests these log sources, Conditional Access policies enforce these requirements. The assessment answers "what protection do we have?" and the implicit assumption is that more controls means more security.
An attacker assessing the same environment asks the inverse question: what protection is missing, misconfigured, or bypassable? MFA is deployed — but does it protect against token theft? EDR is installed — but does it detect process injection via direct syscalls? Sentinel ingests Windows Security events — but does it ingest Sysmon?
The practical implication: during an active investigation, stop asking "how did they get past our controls?" and start asking "which gaps in our controls is this attacker using?" The first question is backward-looking. The second tells you where the attacker will move next.
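The gap inventory can be sketched as a set difference: the attacker's question is what an environment's attack surface contains that the deployed controls do not actually cover. This is a minimal illustrative sketch — the control names, coverage sets, and attack-surface entries are hypothetical placeholders, not a real coverage model.

```python
# Hypothetical sketch: the defender lists controls; the attacker computes
# what those controls leave uncovered. All names below are illustrative.
deployed = {"mfa", "edr", "security-event-ingestion"}

# What each deployed control actually covers (assumed, simplified).
covers = {
    "mfa": {"password spray", "credential replay"},
    "edr": {"known malware", "common injection"},
    "security-event-ingestion": {"logon anomalies"},
}

# Techniques the attacker might use against this environment (assumed).
attack_surface = {
    "password spray", "known malware",
    "token theft",                 # MFA deployed, but tokens still stealable
    "direct-syscall injection",    # EDR deployed, but this bypass not covered
    "process telemetry gaps",      # Security events ingested, Sysmon is not
}

# Defender's question: what do we have?  -> `deployed`
# Attacker's question: what's missing?   -> the set difference below
covered = set().union(*(covers[c] for c in deployed))
gaps = attack_surface - covered
print(sorted(gaps))
```

The point of the sketch is the direction of the question: the defender's inventory is `deployed`; the attacker's inventory is `gaps`, and the gaps are where the attacker moves next.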
Difference 2 — Alerts vs objectives
Defenders react to what the SIEM surfaces. Attackers plan toward what they need to achieve.
The SOC analyst's day is structured around the alert queue. Events arrive, get triaged, get investigated, get closed. The analyst is reactive by design — the SIEM decides what deserves attention.
The attacker's day is structured around an objective. Get domain admin credentials. Access the finance share. Deploy ransomware. Exfiltrate the customer database. Everything the attacker does serves the objective. If a technique is blocked, they switch techniques. If a tool is detected, they switch tools. The objective doesn't change — only the path to it.
During an investigation, switching to the attacker's objective-driven perspective lets you ask: "Given what this attacker has done so far, what's their likely objective, and what do they need to do next to achieve it?" That question is more useful for containment than "what alert should I triage next?"
Difference 3 — Events vs decisions
Defenders reconstruct what happened from logs. Attackers make forward-looking decisions in real time.
Investigations are fundamentally retrospective. You're reading events that already occurred, building a timeline, identifying what the attacker did. The timeline is accurate but it's always behind — you're reading history while the attacker writes the next chapter.
The attacker is making decisions: "I've compromised this workstation. I have local admin. I can see the domain controller on the network. Do I dump LSASS now or wait until after business hours? Do I move laterally via RDP or WinRM? Do I target the file server first or go straight for the DC?"
Every decision is shaped by what the attacker has observed: what security tools they've encountered, how the network is structured, what credentials they've collected, how much time they think they have. If you understand the decision logic, you can read the events in your timeline and predict the next decision.
Difference 4 — Compliance vs constraints
Defenders benchmark against standards. Attackers work within operational limits.
The defender's world is organized around compliance: ISO 27001 controls, CIS benchmarks, framework requirements. The question is "do we meet the standard?"
The attacker's world is organized around constraints: budget, time, tooling capability, risk of detection. A ransomware crew with 72 hours makes different decisions from a state-sponsored group with six months of patience. A lone operator using open-source tools makes different decisions from a team with custom implants and zero-days.
Understanding the attacker's constraints is directly actionable. Noisy, fast-moving activity with commodity tools (Mimikatz, PsExec, batch scripts) — that's a ransomware operator under time pressure. Expect rapid lateral movement and prepare for ransomware deployment. Slow, careful activity with custom tooling and long sleep intervals — that's a patient adversary. Expect persistence mechanisms in unusual locations and prepare for data exfiltration over weeks.
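The speed/noise heuristic above can be expressed as a small classifier. This is a sketch under stated assumptions: the indicator fields and the 72-hour threshold are illustrative (the 72-hour figure echoes the ransomware-crew example above), not calibrated values from any product or dataset.

```python
# Hypothetical sketch: classify an attacker's constraint profile from
# coarse timeline indicators. Field names and thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class Timeline:
    duration_hours: float     # first to last observed attacker action
    commodity_tools: bool     # Mimikatz, PsExec, batch scripts observed?
    hidden_persistence: bool  # persistence carefully hidden in unusual locations?

def constraint_profile(t: Timeline) -> str:
    """Map timeline indicators to one of the two profiles from the text."""
    noisy = t.commodity_tools and not t.hidden_persistence
    fast = t.duration_hours < 72  # illustrative threshold: days, not weeks
    if fast and noisy:
        # Expect rapid lateral movement; prepare for ransomware deployment.
        return "time-pressured/noisy"
    # Expect hidden persistence; prepare for slow exfiltration over weeks.
    return "patient/quiet"

print(constraint_profile(Timeline(6, True, False)))    # commodity tools, hours
print(constraint_profile(Timeline(400, False, True)))  # custom tooling, weeks
```

A real classification would weigh more signals than two booleans and a duration, but the design choice stands: the profile is derived from telemetry you already have, not from attribution.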
Difference 5 — Categories vs paths
Defenders classify events by type. Attackers chain techniques by path.
When a defender sees an event, the first instinct is to categorize it: what ATT&CK technique? What tactic? What severity? Categorization is useful for metrics and reporting, but it treats each event as a member of a class rather than a step in a sequence.
When an attacker runs a technique, they're thinking about the path: "I used this technique to get this result, which enables this next technique, which gets me closer to this objective." The technique is a means, not an end.
Defenders who think in paths instead of categories detect campaigns. Instead of "we detected a T1003.001 credential dump," the analyst thinks "credential dump is the third step in an access chain — what were steps one and two, and what's step four?" That reframing connects the event to the campaign and predicts the next move.
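Thinking in paths can be mechanized as a lookup against an expected chain: given an observed step, ask what the steps before and after it should be. The chain below is an illustrative simplification — real campaigns branch and loop — but it captures the "what were steps one and two, and what's step four?" reframing.

```python
# Hypothetical sketch: treat an observed technique as a position in a path,
# not a member of a category. The chain stages below are illustrative.
ACCESS_CHAIN = [
    "initial access",    # e.g. phishing payload execution
    "local privilege",   # e.g. UAC bypass or local exploit
    "credential dump",   # e.g. T1003.001 LSASS access
    "lateral movement",  # e.g. RDP/WinRM with stolen credentials
    "objective action",  # e.g. staging, encryption, exfiltration
]

def neighbors(observed_step: str) -> tuple:
    """Given an observed step, return the (previous, next) steps to hunt for."""
    i = ACCESS_CHAIN.index(observed_step)
    prev_step = ACCESS_CHAIN[i - 1] if i > 0 else None
    next_step = ACCESS_CHAIN[i + 1] if i + 1 < len(ACCESS_CHAIN) else None
    return prev_step, next_step

# A credential dump is mid-chain: hunt backward for the privilege step,
# forward for the movement step.
print(neighbors("credential dump"))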
The perspective switching method
Three steps you can apply during any active investigation to shift from reconstruction to prediction.
Step 1 — Reframe the event as a decision. Replace "the attacker used T1003.001" with "the attacker needed credentials and chose LSASS dumping because it produces plaintext credentials immediately, unlike Kerberoasting which requires offline cracking time." The reason reveals the operational logic.
Step 2 — Identify the constraint profile. Is the attacker moving fast and noisy, or slow and careful? Fast + noisy = time-pressured, likely ransomware. Slow + quiet = patient, likely espionage or persistent access. The constraint profile predicts the attacker's next priority.
Step 3 — Predict the next step. Given the objective and the constraints, what does the attacker need next? Credentials → movement target. Movement → persistence or objective. Persistence → data access. Data access → staging and exfiltration. Pre-position detection or containment at the predicted step.
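The three steps above reduce to a small prediction table plus a response posture chosen by constraint profile. This is a minimal sketch of the method as described, assuming the Step 3 chain (credentials → movement → persistence → data access → staging/exfiltration); the profile strings and wording are illustrative.

```python
# Hypothetical sketch of Step 3's prediction chain: map the attacker's last
# satisfied need to their likely next need, then set response urgency from
# the constraint profile (Step 2). All strings are illustrative.
NEXT_STEP = {
    "credentials": "select a lateral movement target",
    "movement": "establish persistence or act on the objective",
    "persistence": "gain data access",
    "data access": "stage and exfiltrate",
}

def predict(last_step: str, profile: str) -> str:
    nxt = NEXT_STEP.get(last_step, "unknown — widen the timeline")
    urgency = ("contain immediately"
               if profile == "time-pressured/noisy"
               else "scope quietly before any action that might alert them")
    return f"predicted next step: {nxt}; response posture: {urgency}"

print(predict("credentials", "time-pressured/noisy"))
print(predict("movement", "patient/quiet"))
```

The table is deliberately coarse: the value of the method is pre-positioning a detection query or containment action at the predicted step, and a coarse prediction made before the attacker moves beats a precise one made after.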
Hands-on Exercise — Perspective Switching on Your Last Investigation
Objective: Apply the perspective switching method to a real investigation timeline from your environment.
Prerequisites: The timeline from your most recent confirmed-true-positive investigation, or a significant incident you were involved in. You need the sequence of attacker actions identified during the investigation.
Success criteria: You've reframed at least 3 attacker actions as decisions, classified the constraint profile with evidence, and made a prediction for the next step with reasoning.
STEP 1 — Select the investigation
Pull the timeline from your most recent confirmed compromise. If you don't have one available, use the Northgate scenario from OD0.1 (credential harvest → token replay → persistence).
STEP 2 — Reframe each action as a decision
For each attacker action in the timeline, write two lines:
a. Defender framing: "T1003.001 — LSASS credential dump on DESKTOP-042"
b. Attacker framing: "After landing on the workstation, the attacker needed credentials to move laterally. They chose LSASS dumping because it produces plaintext credentials immediately."
STEP 3 — Classify the constraint profile
Review the full timeline. Answer:
- Speed: fast (hours) or slow (days/weeks)?
- Noise: commodity tools (Mimikatz, PsExec) or custom tooling?
- Persistence: quick/disposable or carefully hidden?
Write one sentence: "This attacker's constraint profile is [X] because [evidence from the timeline]."
STEP 4 — Predict the next step
For the LAST known attacker action in the timeline, predict what the next step would have been. Write: "Given [objective] and [constraint profile], the attacker's next step would be [prediction] because [reasoning]."
STEP 5 — Validate the prediction
Check whether the investigation found evidence of the predicted next step. If yes: the perspective switching method worked. If no: either the prediction was wrong (review the reasoning) or the investigation didn't look for it (that's an investigation gap).
Challenge: If your investigation involved multiple compromised systems, do the perspective switch for each lateral movement decision. At each hop, ask: "Why did the attacker choose THIS system and not another?" The answer reveals their target prioritization — which you'll study in depth in M7 (Lateral Movement as Operational Maneuver).
You should be able to do the above without referring back to this sub. If you can't, re-read the relevant difference (Differences 1–5) or the perspective switching method section.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.