In this module
OD1.12 The Defender's Operational Profile
You've learned the offensive lifecycle (OD1.1), objectives (OD1.2), constraints (OD1.3), risk tolerance (OD1.4), reconnaissance (OD1.5-1.6), the decision matrix (OD1.7), timing (OD1.8), team structures (OD1.9), and documented campaign patterns (OD1.10-1.11). This sub combines everything into a four-step methodology you apply during the first hours of every investigation.
Operational Objective
You're one hour into an active investigation. You have incomplete evidence, time pressure, and leadership asking "how bad is this?" The Operational Profile is a structured process that takes whatever evidence you have and produces two outputs: a classification of the adversary (who are we dealing with?) and a prediction of their next steps (what are they going to do?). Both drive your response.
Learning Objectives
By the end of this sub you will be able to:
- Produce an Operational Profile within the first hours of an investigation using the four-step methodology (Observe → Classify → Predict → Act) from whatever evidence is available. This is the methodology Microsoft DART applies to every engagement to produce an initial threat assessment before the full investigation is complete. This matters because response decisions can't wait for complete information — the Operational Profile gives you a defensible assessment from partial evidence.
- Write a one-paragraph leadership brief that answers: who is doing this (adversary class), what do they want (objective), how serious is it (capability and timeline), and what are we doing about it (response priorities). This matters because the brief converts your technical analysis into business decisions — and leadership's decisions (authorize emergency spending, invoke IR retainer, notify regulators, communicate to the board) depend on the quality of that brief.
Figure OD1.12 — The four-step Operational Profile. Applied during the first hours of every investigation. Partial evidence is sufficient — two dimensions assessed with confidence often give you all four.
Step 1 — Observe
Gather available evidence without interpreting it yet. Four factual questions.
What systems has the attacker accessed? Every system, account, and data store the evidence shows. The target list reveals the objective (OD1.2).
What tools has the attacker used? Every tool, technique, and artifact. Commodity or custom? Known families or novel implants? The tooling reveals budget and capability (OD1.3).
How fast are they moving? Time between observed events. Minutes between phases, or days? The tempo reveals the timeline constraint (OD1.3, OD1.8).
How noisy are they? Known-detectable tools? Bulk operations? Or carefully blending with legitimate activity? The noise level reveals risk tolerance (OD1.4).
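The four observation questions can be captured as a minimal structured record so nothing gets interpreted prematurely. This is a sketch; the field names and example values are illustrative, not a standard schema from the course:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Observation:
    """Step 1 — raw evidence, recorded without interpretation yet."""
    systems_accessed: list[str] = field(default_factory=list)  # every system, account, data store
    tools_used: list[str] = field(default_factory=list)        # tools, techniques, artifacts
    event_gap_minutes: float | None = None                     # time between observed events
    noisy_indicators: list[str] = field(default_factory=list)  # detections, bulk ops, known-bad tools

# Hypothetical first-hour record (mirrors the worked example later in this sub):
obs = Observation(
    systems_accessed=["DESKTOP-NGE042", "t.ashworth account"],
    tools_used=["rundll32.exe loading unsigned DLL from %TEMP%"],
    event_gap_minutes=3,
    noisy_indicators=[],  # one endpoint, one connection — quiet
)
print(obs)
```

Keeping observation separate from classification in the data model enforces the same discipline the step demands: record what you saw, not what you think it means.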
Step 2 — Classify
Map observations to four dimensions.
Objective. Financial (backups, payment systems, extortion data), intelligence (exec communications, strategic docs, IP), disruption (critical systems, damage potential), or access (trust relationships, build pipelines, vendor tools).
Budget. Low (commodity tools, shared infrastructure), medium (some customization, dedicated infrastructure), or high (custom tooling, zero-day capability).
Timeline. Hours (ransomware sprint), days-to-weeks (targeted financial or access), or months-to-years (espionage or supply chain).
Risk tolerance. High (noisy, fast, accepts detection), medium (somewhat careful), or low (deliberate stealth, minimal telemetry).
The dimensions combine into a constraint profile that maps to an adversary class: opportunistic criminal, ransomware affiliate, professional criminal group, access broker, state-sponsored operator, or insider.
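One way to make the dimensions-to-class mapping explicit is a small rule table. The boundaries below are a simplified heuristic reading of the taxonomy above, not an authoritative model — real classification weighs evidence quality, not just labels:

```python
def adversary_class(budget: str, timeline: str, risk_tolerance: str) -> str:
    """Map a constraint profile to a likely adversary class.

    Inputs use the module's coarse labels:
      budget: "low" | "medium" | "high"
      timeline: "hours" | "days" | "months"
      risk_tolerance: "high" | "medium" | "low"
    The rules are illustrative, not a complete matrix.
    """
    if timeline == "hours" and risk_tolerance == "high":
        return "ransomware affiliate"
    if budget == "low" and risk_tolerance == "high":
        return "opportunistic criminal"
    if timeline == "months" and budget == "high" and risk_tolerance == "low":
        return "state-sponsored operator"
    if timeline == "days" and budget == "medium":
        return "professional criminal group"
    return "unclassified — gather more evidence"

# Custom tooling, long timeline, deliberate stealth:
print(adversary_class("high", "months", "low"))  # state-sponsored operator
```

When no rule fires, the honest output is "unclassified" — a partial profile with two confident dimensions is still useful, as the next section shows.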
Step 3 — Predict
Run the decision matrix forward.
Next target. Financial → backup systems. Intelligence → additional exec mailboxes. Access → trust relationships. Disruption → critical infrastructure.
Pace. If tempo has been hours, next phase is imminent. If days, you have time to prepare.
Techniques. Commodity tools suggest known techniques with documented detection. Custom tools suggest novel approaches you may need to hunt for.
Exit strategy. High risk tolerance = push through and try to achieve objective before containment. Low risk tolerance = go quiet or withdraw to protect access and tools.
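Running the matrix forward can be sketched as a lookup keyed by the Step 2 classification. The prediction strings paraphrase the bullets above; the function shape is my own, not part of the course:

```python
# Objective → most likely next target (paraphrasing the module's decision matrix).
PREDICTION_MATRIX = {
    "financial":    "backup systems",
    "intelligence": "additional executive mailboxes",
    "access":       "trust relationships",
    "disruption":   "critical infrastructure",
}

def predict(objective: str, tempo: str, tooling: str, risk_tolerance: str) -> dict:
    """Forward-run the decision matrix from a classified profile."""
    return {
        "next_target": PREDICTION_MATRIX[objective],
        "pace": "imminent" if tempo == "hours" else "time to prepare",
        "techniques": ("known techniques, documented detections"
                       if tooling == "commodity" else "novel — hunting required"),
        "exit_if_detected": ("push through to objective"
                             if risk_tolerance == "high" else "go quiet or withdraw"),
    }

print(predict("financial", "hours", "commodity", "high"))
```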
Step 4 — Act
The classification and prediction produce actionable response decisions.
Containment priorities. The objective determines what to protect first. Financial → isolate backup systems. Intelligence → revoke tokens and audit OAuth grants. Disruption → protect critical systems. Access → audit trust relationships.
Evidence collection focus. Short-timeline, commodity-tool attackers leave dense, recent evidence. Long-timeline, custom-tool attackers leave sparse evidence over weeks — expand your search window.
Scope assessment. Single-actor = contained scope. Supply-chain = may affect your entire customer base. Handoff signature (OD1.9) = scope is about to change.
Leadership brief. The classification produces a one-paragraph brief answering: who (adversary class), what (objective), how serious (capability + timeline), what we're doing (response priorities).
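The four brief questions can be rendered from the profile with a simple template. The wording here is a hypothetical format, not a mandated one — adapt it to your organization's voice:

```python
def leadership_brief(adversary: str, objective: str, capability: str,
                     timeline: str, priorities: str) -> str:
    """Render who / what / how serious / what we're doing as one paragraph."""
    return (f"We are responding to a likely {adversary} whose objective appears to be "
            f"{objective}. Capability assessment: {capability}. Expected timeline: "
            f"{timeline}. Immediate priorities: {priorities}.")

brief = leadership_brief(
    "ransomware affiliate",
    "financial extortion",
    "commodity tools, moving rapidly",
    "24-48 hours to deployment if not contained",
    "isolate backups, contain lateral movement, close the initial access vector",
)
print(brief)
```

A template keeps the brief complete under time pressure: if any argument is hard to fill in, that gap is itself a finding worth stating.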
The profile in practice — worked example
Here's the Operational Profile applied to a live investigation at Northgate Engineering. The SOC received three alerts at 14:30 on a Tuesday; this is what the analyst had at the one-hour mark.
Observe (14:30–15:00):
Systems accessed: DESKTOP-NGE042 (Tom Ashworth's workstation), one outbound connection to 185.220.101.42 (unknown IP, residential proxy range). Tools used: Sysmon Event 1 shows rundll32.exe loading a DLL from C:\Users\t.ashworth\AppData\Local\Temp\. Sysmon Event 3 shows outbound HTTPS to the unknown IP from the same process. Sysmon Event 7 shows the DLL was unsigned. Time between events: 3 minutes from DLL load to outbound connection. Noise level: one endpoint, one connection, no bulk operations — quiet.
Classify (15:00–15:15):
Objective: uncertain at this stage. One endpoint compromised, no lateral movement yet, no data access observed. Could be any objective. Budget: the DLL is custom (unsigned, unknown hash on VirusTotal) — this isn't commodity malware. Medium-to-high budget. Timeline: uncertain — only 3 minutes of activity observed. Risk tolerance: low noise, custom tooling, no rapid enumeration — this looks quiet and careful. Low risk tolerance.
Partial profile: custom tooling + quiet operation → long timeline, intelligence or access objective. This is probably not ransomware. An affiliate would have started running discovery commands within minutes of landing.
Predict (15:15–15:20):
Given the quiet profile: the attacker will not move laterally immediately. They'll establish persistence first — likely an OAuth consent grant, a scheduled task with a long interval, or a registry run key. They'll wait hours or days before the next action. They may access Tom's mailbox or OneDrive before moving to other systems.
Where to look next: check Tom's M365 activity for OAuth consent grants created in the same session. Check for scheduled task creation on DESKTOP-NGE042. Check for new inbox forwarding rules.
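A first triage pass over an exported M365 unified audit log can be sketched in a few lines. This assumes a JSON-lines export; the operation names are common audit values (e.g. inbox-rule cmdlets and app-consent events) but verify them against your tenant's actual export before relying on the filter:

```python
import json

# Operations commonly associated with mailbox/OAuth persistence — verify against
# your tenant's audit schema; this list is illustrative, not exhaustive.
SUSPECT_OPS = {"Consent to application.", "New-InboxRule", "Set-InboxRule",
               "UpdateInboxRules"}

def flag_persistence(audit_lines: list[str], user: str) -> list[dict]:
    """Return audit records for `user` whose Operation suggests persistence."""
    hits = []
    for line in audit_lines:
        rec = json.loads(line)
        if (rec.get("UserId", "").lower() == user.lower()
                and rec.get("Operation") in SUSPECT_OPS):
            hits.append(rec)
    return hits

# Hypothetical exported record (fictional tenant):
sample = [json.dumps({"UserId": "t.ashworth@northgate.example",
                      "Operation": "New-InboxRule",
                      "CreationTime": "2024-06-11T14:41:00"})]
for hit in flag_persistence(sample, "t.ashworth@northgate.example"):
    print(hit["Operation"], hit["CreationTime"])
```

Because the containment posture is covert, this kind of read-only audit query is preferable to anything that touches the endpoint.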
Act (15:20–15:30):
Containment priority: do NOT isolate the endpoint yet. If this is a patient adversary, isolating the endpoint tips them off and they may activate a backup persistence mechanism you haven't found. Instead: silently audit Tom's account for OAuth grants, forwarding rules, and unusual M365 activity. Monitor the outbound connection for additional C2 traffic. Expand the investigation to a 30-day lookback — if this is an espionage operator, the initial compromise may have happened weeks ago and the DLL execution today may be a second-stage deployment, not the initial access.
Evidence focus: 30-day sign-in history for Tom's account (look for the actual initial access). OAuth consent grants (persistence). M365 unified audit log for mailbox and file access. Sysmon Event 22 (DNS queries from the DLL process — reveals the C2 domain).
Leadership brief: "We've detected a likely targeted compromise of one user account using custom malware. The attacker is operating quietly with non-commodity tooling, consistent with a targeted intelligence or access operation — not ransomware. We're conducting covert scoping to determine the full extent before containment. No evidence of lateral movement or data exfiltration at this time. The investigation scope is expanding to 30 days of account activity. We'll provide an update within 4 hours."
That brief gives leadership the threat class (targeted, not commodity), the current assessment (not ransomware, no immediate crisis), the response approach (covert scoping, not immediate containment), and the timeline for the next update. It lets the executive decide whether to invoke the IR retainer or wait for the 4-hour update.
The leadership brief is the Operational Profile's external output. It translates technical classification into business language:
"We're responding to a likely ransomware affiliate who gained access through compromised VPN credentials purchased from an access broker. The attacker is using commodity tools and moving rapidly — consistent with a 48-72 hour timeline. They've reached the credential access phase and are targeting domain administrator accounts. Our immediate priorities are isolating backup systems, containing lateral movement, and verifying the initial access vector is closed. We expect the attacker to attempt ransomware deployment within the next 24-48 hours if not contained. The IR retainer has been activated."
That paragraph contains: adversary class (ransomware affiliate + broker), access method (VPN credentials), capability assessment (commodity tools, rapid), current phase (credential access), response priorities (backups, containment, access closure), timeline (24-48 hours), and escalation status (IR retainer activated). A non-technical executive can read it and make decisions.
STEP 1 — Observe (read the first section of the report only)
Systems accessed: ___
Tools used: ___
Time between events: ___
Noise level: ___
STEP 2 — Classify
Objective: financial / intelligence / disruption / access
Evidence: ___
Budget: low / medium / high
Evidence: ___
Timeline: hours / days-to-weeks / months-to-years
Evidence: ___
Risk tolerance: high / medium / low
Evidence: ___
Adversary class: ___
STEP 3 — Predict (BEFORE reading the rest of the report)
Next target: ___
Expected pace: ___
Likely techniques: ___
Exit strategy if detected: ___
STEP 4 — Write the leadership brief
One paragraph: who, what, how serious, what we're doing.
STEP 5 — Validate
Read the rest of the report. How accurate was your classification? How accurate was your prediction? What would have improved the profile?
STEP 6 — AI acceleration
Paste the first-section evidence into Claude:
"I'm building an Operational Profile during a live investigation.
Evidence from the first 4 hours:
[paste observations]
Produce:
1. Classification (objective, budget, timeline, risk tolerance)
2. Adversary class
3. Prediction (next target, pace, techniques, exit strategy)
4. One-paragraph leadership brief"
Compare the AI profile with yours. The combination is typically better than either alone.
Hands-on Exercise — Build an Operational Profile
Objective: Apply the four-step methodology to a real or published incident and produce an Operational Profile with a leadership brief.
Prerequisites: A published incident report with a detailed timeline, or your most recent confirmed investigation.
Success criteria: You've completed all four steps with evidence, written a one-paragraph brief, and validated your prediction against the full report.
Challenge: Repeat the exercise on three different reports (one ransomware, one espionage, one supply chain or BEC). By the third profile, the four-step process should feel automatic — you read first-phase evidence and the classification forms naturally. That automaticity is the goal of Module 1.
You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.