How Attackers Plan Operations
What this module is
Every campaign starts with a plan. The attacker has an objective — steal data, deploy ransomware, establish persistent access, sabotage operations. They have constraints — budget, time, capability, risk tolerance. They have intelligence about the target — technology stack, employee information, public-facing infrastructure, breached credentials. The plan connects the objective to the target through the constraints.
Module 1 teaches the planning stage of offensive operations in enough depth that you can reverse-engineer the plan from investigation evidence. When you find a Cobalt Strike beacon with a default malleable profile running discovery commands within minutes of landing, you can classify the attacker (ransomware affiliate, low budget, short timeline, high risk tolerance), predict their next move (credential access via LSASS dump, then lateral movement to the domain controller and backup systems), and set your response priorities (protect backups now, contain laterally, investigate the initial access vector).
When you find an OAuth consent grant for an unrecognised application with Mail.Read permissions on the CFO's mailbox, with a 47-day access pattern matching the CFO's meeting schedule — same classification framework, completely different profile. Intelligence objective, high capability, low risk tolerance, long timeline. Different response: scope covertly, don't isolate prematurely, audit the full extent before tipping off the attacker.
The same four-step methodology (Observe → Classify → Predict → Act) works for both. Module 1 teaches you that methodology — and the operational knowledge that makes the classification accurate.
What you'll be able to do after this module
Build an Operational Profile within the first hour of any investigation. Given partial evidence — a beacon on one endpoint, three discovery commands, one lateral movement attempt — you'll classify the attacker's objective, budget, timeline, and risk tolerance. Two dimensions assessed with confidence often give you all four, because the constraint dimensions correlate. Quiet + custom tooling → long timeline, intelligence objective. Loud + commodity tools → short timeline, financial objective.
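The constraint-dimension correlation can be sketched as a simple two-input lookup. This is a hypothetical illustration of the idea, not the module's canonical matrix — the function name, dimension labels, and mappings are invented for the example.

```python
# Hypothetical sketch of constraint-dimension correlation: infer the
# unobserved dimensions (timeline, objective, budget, risk tolerance)
# from the two that partial evidence usually gives you first
# (noise level and tooling). The mapping is illustrative only.

def classify(noise: str, tooling: str) -> dict:
    """Return an inferred operational profile from two observed dimensions."""
    if noise == "quiet" and tooling == "custom":
        # Low-noise custom tooling implies patience and investment.
        return {"timeline": "long", "objective": "intelligence",
                "budget": "high", "risk_tolerance": "low"}
    if noise == "loud" and tooling == "commodity":
        # Commodity tools used loudly imply speed over stealth.
        return {"timeline": "short", "objective": "financial",
                "budget": "low", "risk_tolerance": "high"}
    # Mixed signals: don't guess — gather more evidence.
    return {"timeline": "unknown", "objective": "unknown",
            "budget": "unknown", "risk_tolerance": "unknown"}

print(classify("loud", "commodity")["objective"])  # financial
```

The point of the sketch is the shape of the reasoning, not the table itself: two confident observations constrain the other two dimensions because real operators rarely mix profiles.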
Predict the attacker's next move. The decision matrix (OD1.7) connects the classification to a prediction: what will they target next, how fast will they move, what techniques will they use, and what's their exit strategy if detected. The prediction isn't clairvoyance — it's operational logic. Rational actors facing the same constraints make similar decisions.
Write a one-paragraph leadership brief. The classification produces a brief that answers leadership's four questions: who is doing this (adversary class), what do they want (objective), how serious is it (capability + timeline), and what are we doing about it (response priorities). A non-technical executive can read it and make decisions — whether to invoke the IR retainer, notify regulators, or communicate to the board.
Detect reconnaissance and timing patterns. You'll know how to detect low-and-slow password sprays using cross-account correlation (OD1.6), identify business-hours espionage by timing anomalies in high-value account authentication (OD1.8), and spot the handoff signature between an access broker and a ransomware affiliate in investigation telemetry (OD1.9).
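The cross-account correlation idea behind low-and-slow spray detection fits in a few lines: per-account failure counts stay under any threshold, but grouping failures by source IP across all accounts exposes the spray. A minimal Python sketch over an assumed log shape (ip, account, outcome) — not the module's KQL, which runs against real sign-in telemetry.

```python
from collections import defaultdict

# Minimal sketch of the threshold gap: each account sees only one or two
# failures (under any per-account alert threshold), but grouping failed
# sign-ins by source IP across *all* accounts reveals the spray.
# The event shape (ip, account, outcome) is assumed for illustration.

def detect_spray(events, min_accounts=10):
    """Return source IPs whose failures span many distinct accounts."""
    failed_accounts = defaultdict(set)
    for ip, account, outcome in events:
        if outcome == "failure":
            failed_accounts[ip].add(account)
    return {ip: len(accts) for ip, accts in failed_accounts.items()
            if len(accts) >= min_accounts}

# One failure per account from a single IP: invisible per-account,
# obvious once correlated.
events = [("203.0.113.7", f"user{i}", "failure") for i in range(25)]
events += [("198.51.100.2", "user1", "failure")]  # isolated typo, ignored
print(detect_spray(events))  # {'203.0.113.7': 25}
```

The same grouping logic is what a SIEM `summarize ... by source IP` expresses; the sketch just makes the correlation explicit.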
Who this module is for
Module 1 is free. It's designed for any security practitioner who investigates incidents, triages alerts, or writes detection rules — and wants to understand the operational logic behind the attacks they're seeing.
SOC analysts will get the most immediate value from the constraint profiling (OD1.3) and the timing diagnostics (OD1.8). The first-hour classification — is this ransomware or espionage? — changes your escalation decision and your containment approach. That decision is usually made on instinct. After this module, it's made on evidence.
Detection engineers will find the reconnaissance detection subs (OD1.5, OD1.6) directly actionable. The KQL queries for spray detection, dormant account activation, and timing anomaly analysis can be deployed into your SIEM this week.
IR practitioners will find the decision matrix (OD1.7) and the Operational Profile (OD1.12) directly applicable to live investigations. The four-step methodology produces a structured adversary classification from partial evidence — the kind of output that turns the first chaotic hour of an incident into a directed investigation.
Security managers will find the leadership brief template (OD1.12) useful for structuring the communication that happens between the technical team and the executives making business decisions during an incident.
Module structure
Twelve content subs followed by a summary and a knowledge check. Estimated completion: 6-8 hours.
The module flows in three arcs:
Arc 1 — The attacker's operational model (OD1.1–1.4). The offensive lifecycle, how objectives drive decisions, the four constraints that shape every operation, and how risk tolerance determines the noise level. These four subs give you the classification framework.
- OD1.1 — The offensive lifecycle. Six phases from target selection through objective execution. Three detection windows (pre-attack, active, damage) and where each module in the course maps to the lifecycle.
- OD1.2 — Target selection and objective mapping. Four objectives (financial, intelligence, disruption, access) with investigation telemetry showing what each looks like in your SIEM. Three diagnostics (target, pace, noise) for first-hour classification.
- OD1.3 — Constraint analysis. Budget, time, capability, risk tolerance. Five constraint profiles (ransomware affiliate, RaaS operator, state espionage, access broker, insider) with the operational indicators for each.
- OD1.4 — Risk tolerance and OPSEC. The four noise levels (loud, visible, quiet, silent) with side-by-side telemetry comparison — the same operation performed by a loud attacker (70 seconds, 3 alerts) and a quiet attacker (3 days, zero alerts).
Arc 2 — Reconnaissance, decisions, and timing (OD1.5–1.9). How the attacker gathers intelligence, makes operational decisions, chooses timing, and organises their team.
- OD1.5 — Passive reconnaissance. Five categories of publicly available information. The self-assessment methodology using DNS, CT logs, breach databases, and AI-accelerated interpretation.
- OD1.6 — Active reconnaissance. The threshold gap — why per-entity detection rules miss distributed sprays. KQL for spray detection, slow-scan correlation, and cloud enumeration resilience assessment.
- OD1.7 — The attacker's decision matrix. Three inputs (objective, constraints, reconnaissance) → four outputs (infrastructure, access method, movement strategy, execution plan). Worked example: two attackers with different constraints against the same target producing completely different campaigns.
- OD1.8 — Operational timing. Four timing strategies (maximum-impact window, business-hours blending, response-gap exploitation, event-aligned). Investigation examples: the Kaseya Friday deployment, the espionage operator matching the CFO's meeting schedule, the lateral movement 12 minutes after shift handover.
- OD1.9 — Team structures and attacker roles. The cybercrime supply chain (IAB → affiliate → RaaS operator). The handoff signature and how to detect it. State vs criminal tradecraft consistency. KQL for dormant broker access.
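The dormant-access indicator behind the handoff signature (OD1.9) reduces to a gap analysis over sign-in timestamps: access the broker established goes quiet, then suddenly authenticates when it is sold on. A hypothetical sketch — the 30-day threshold, account names, and log shape are assumptions for illustration.

```python
from datetime import datetime, timedelta

# Sketch of the dormant-access indicator: flag sign-ins preceded by a
# long silence on the same account. The 30-day gap threshold and the
# (account, timestamp) log shape are illustrative assumptions.

def dormant_activations(signins, gap=timedelta(days=30)):
    """Return (account, timestamp) pairs for sign-ins after a dormant gap."""
    last_seen = {}
    flagged = []
    for account, ts in sorted(signins, key=lambda e: e[1]):
        prev = last_seen.get(account)
        if prev is not None and ts - prev >= gap:
            flagged.append((account, ts))
        last_seen[account] = ts
    return flagged

signins = [
    ("svc-backup", datetime(2024, 1, 5)),  # broker establishes access
    ("svc-backup", datetime(2024, 3, 2)),  # 57 days later: possible handoff
    ("alice", datetime(2024, 2, 1)),
    ("alice", datetime(2024, 2, 2)),       # normal daily use, not flagged
]
print(dormant_activations(signins))  # flags only svc-backup's March sign-in
```

On real telemetry the same logic needs a lookback longer than the gap threshold, or every account looks "newly active" at the start of the window.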
Arc 3 — Campaign patterns and the Operational Profile (OD1.10–1.12). Real campaign analysis and the capstone methodology.
- OD1.10 — Documented campaigns: ransomware. The six-phase ransomware sequence with KQL at each phase and a detection priority stack. Credential access (Phase 3) is your highest-value detection investment.
- OD1.11 — Documented campaigns: espionage and supply chain. Collection cadences, espionage persistence mechanisms, supply chain behavioral baselines. The opposite of ransomware — 30-60 day pattern analysis instead of hourly correlation.
- OD1.12 — The defender's Operational Profile. The capstone. The four-step methodology (Observe → Classify → Predict → Act) applied to a worked NE investigation with specific telemetry, the reasoning at each step, and the leadership brief.
- OD1.13 — Module summary.
- OD1.14 — Check my knowledge. Ten scenario-based questions.
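The timing diagnostics in OD1.8 rest on a simple baseline idea: learn an account's usual sign-in hours, then flag authentications outside them. A minimal sketch under an assumed hour-of-day log shape — real analysis would use the full 30-day window the exercises call for, and a rarity threshold rather than strict never-seen.

```python
from collections import Counter

# Sketch of the hour-of-day baseline behind business-hours anomaly
# detection: learn which hours an account normally authenticates in,
# then flag sign-ins at hours never seen in the baseline. The input
# shape (lists of sign-in hours) and the zero-history rule are
# illustrative simplifications.

def anomalous_hours(baseline_hours, new_hours):
    """Return new sign-in hours never observed in the baseline window."""
    seen = Counter(baseline_hours)
    return [h for h in new_hours if seen[h] == 0]

# A high-value account signs in 09:00-17:00 for 30 days;
# a 03:00 authentication stands out immediately.
baseline = [h for _ in range(30) for h in range(9, 18)]
print(anomalous_hours(baseline, [10, 14, 3]))  # [3]
```

This is the inverse of the ransomware diagnostics: instead of correlating events within an hour, you compare one event against weeks of rhythm — which is why espionage detection needs the long baseline.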
What you need
Module 1 is free and doesn't require a lab environment. The exercises use your own SIEM and published incident reports.
For the hands-on exercises: access to your SIEM with at least 30 days of authentication logs (Entra ID sign-in logs or equivalent). If you don't have SIEM access, every exercise includes a "published report" alternative that uses publicly available incident reports instead of your own data.
For AI-accelerated analysis: access to Claude or another LLM. Several exercises include structured prompts for feeding investigation data into an LLM for pattern analysis (spray detection, timing anomalies, adversary profiling). The LLM isn't required — the exercises work without it — but it demonstrates how AI accelerates the operational analysis this module teaches.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.