In this module
OD1.2 Target Selection and Objective Mapping
You've investigated incidents and assessed whether they were targeted or opportunistic. You know that ransomware operators want money and espionage operators want data. This sub goes deeper — it maps how the attacker's objective determines every operational decision in the campaign, and shows you how to identify the objective from the telemetry you've already collected.
Operational Objective
You're two hours into an active investigation. The attacker has compromised a workstation and moved laterally. Do you prioritize protecting backups, monitoring mailboxes, auditing trust relationships, or isolating critical infrastructure? The answer depends on the attacker's objective — and you can determine it from three diagnostics applied to the telemetry you already have.
Learning Objectives
By the end of this sub you will be able to:
- Classify an attack as opportunistic or targeted based on the initial access method, tooling, and operational pace — the same classification that determines the M-Trends dwell time difference (opportunistic: days, targeted: months). This matters because the classification changes your threat model: opportunistic attacks are covered by standard detection rules; targeted attacks are designed to evade them.
- Identify the attacker's objective (financial, intelligence, disruption, or access) from three diagnostics: what they're targeting, how fast they're moving, and how noisy they are. This matters because the objective determines your response priority — protecting backups for ransomware vs preserving evidence for espionage vs auditing downstream trust for supply chain compromise.
Figure OD1.2 — Four objectives, four response priorities. The attacker's objective determines which systems they target, how fast they move, and what you need to protect first.
Opportunistic vs targeted
Before the objective, there's a prior question: is this operation opportunistic or targeted?
Opportunistic attackers cast wide nets. They scan the internet for exposed RDP, unpatched Exchange servers, misconfigured cloud storage. They don't care who you are — they care that you have a vulnerability they can exploit. The initial access is automated. The post-exploitation follows a playbook: deploy ransomware, steal credentials for resale, install cryptominers.
Targeted attackers choose their victim before the operation begins. They research the organisation, identify the people with access to the objective, study the technology stack, and design an access method for that specific environment. The operation is custom-built.
The distinction matters because the operational patterns are different. Opportunistic attacks are louder — automated scanning, commodity malware, known exploit chains. Your existing detection rules probably cover them reasonably well. Targeted attacks are quieter — custom tooling, hand-crafted phishing, careful operational security. They're designed to evade the detection rules that catch opportunistic attacks.
When you're investigating, one of the first diagnostic questions is: did the attacker know who we were before they attacked, or did they find us by scanning? The answer changes your threat assessment, your containment priorities, and your remediation scope.
The four objectives
Every offensive operation serves one of four objectives. The objective determines every downstream decision.
Financial — ransomware, BEC, fraud
The attacker wants money. Ransomware operations encrypt data and demand payment. BEC operations compromise mailboxes and redirect financial transactions. Fraud operations steal credentials for resale or commit wire fraud.
Financial operations have tight timelines. Ransomware crews typically move from initial access to deployment in 24–72 hours. BEC operators may be more patient — monitoring mailbox conversations for weeks looking for the right invoice to redirect — but they're still on a timeline measured in weeks, not months.
The operational profile is distinctive. Financial attackers target backup systems early (to prevent recovery), email infrastructure (to monitor communications), and finance department systems (to access payment processes). They move laterally toward high-value targets quickly. They accept detection risk in the final stages because the objective is imminent.
Here's what the financial objective looks like in investigation telemetry. You're two hours into an active investigation at Northgate Engineering. The evidence so far:
Investigation evidence — financial objective indicators:
14:22 Compromised account authenticates to SRV-NGE-BKP01 (backup server)
→ Why backup first? Recovery prevention. The attacker wants to
ensure the victim can't restore from backup after encryption.
14:35 vssadmin delete shadows /all /quiet on SRV-NGE-BKP01
→ Shadow copy deletion. No legitimate admin runs this outside
a documented maintenance window.
14:41 Compromised account authenticates to SRV-NGE-FS01 (file server)
→ Data staging for double-extortion. The attacker will exfiltrate
high-value files before encryption.
14:55 Large outbound HTTPS transfer to mega.nz from SRV-NGE-FS01
→ Data exfiltration to MEGA. Financial operators use consumer
cloud storage because it's fast, free, and rarely blocked by web proxies.
Diagnosis: backup targeting + shadow copy deletion + data exfiltration
to consumer cloud storage = financial objective (ransomware with
double extortion). Response: protect remaining backup systems NOW.
Defensive translation: if you see lateral movement toward backup infrastructure within hours of initial access, your containment priority is protecting the backups. If you see mailbox rules being created on finance department accounts, you're looking at BEC. The target selection tells you the objective, and the objective tells you the response priority.
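The backup-first pattern above can be expressed as a triage check. This is a minimal Python sketch, not a production detection rule — the event schema, field names, and indicator lists are illustrative assumptions, and a real implementation would read from your SIEM rather than an inline list.

```python
# Hypothetical, simplified event records — the schema is an assumption,
# not any specific SIEM's format.
events = [
    {"time": "14:22", "type": "auth",    "host": "SRV-NGE-BKP01", "detail": "logon"},
    {"time": "14:35", "type": "process", "host": "SRV-NGE-BKP01",
     "detail": "vssadmin delete shadows /all /quiet"},
    {"time": "14:55", "type": "network", "host": "SRV-NGE-FS01",
     "detail": "outbound https mega.nz 4.2GB"},
]

BACKUP_HOSTS = {"SRV-NGE-BKP01"}                # from asset inventory, assumed known
RECOVERY_TAMPER = ("vssadmin delete shadows",   # shadow copy deletion
                   "wbadmin delete catalog")    # backup catalog deletion
CONSUMER_CLOUD = ("mega.nz", "anonfiles", "transfer.sh")  # illustrative list

def financial_indicators(events):
    """Return the financial-objective indicators present in a slice of telemetry."""
    hits = []
    for e in events:
        if e["type"] == "auth" and e["host"] in BACKUP_HOSTS:
            hits.append("backup system access")
        if e["type"] == "process" and any(t in e["detail"] for t in RECOVERY_TAMPER):
            hits.append("recovery prevention")
        if e["type"] == "network" and any(d in e["detail"] for d in CONSUMER_CLOUD):
            hits.append("exfil to consumer cloud")
    return hits

print(financial_indicators(events))
# Three independent indicators together point at ransomware with double extortion.
```

Any one of these indicators alone is ambiguous; it's the combination inside a short window that makes the financial diagnosis.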
Intelligence — espionage, IP theft
The attacker wants information. The operation is designed to maintain persistent, quiet access to high-value data sources — executive mailboxes, R&D file shares, strategic planning documents, M&A materials. The timeline is months to years.
Espionage operators move slowly, often waiting days between actions. They use minimal tooling — sometimes just native OS commands and legitimate remote access tools. They avoid mass credential harvesting in favour of targeted credential theft. Their persistence mechanisms are designed for longevity: long-sleep-timer beacons, web shells in rarely-audited locations, dormant scheduled tasks.
Here's what the intelligence objective looks like when you finally discover it — typically weeks or months after the initial compromise:
Investigation evidence — intelligence objective indicators:
Entra ID audit log (60-day lookback):
Day 1: OAuth consent grant for "Productivity Analytics" app
Permissions: Mail.Read, Files.Read.All, Calendars.Read
Granted by: p.sharma@northgateeng.com (exec assistant)
→ Persistence. This app has ongoing API access to Priya's
mailbox, OneDrive, and calendar — survives password reset.
Office 365 unified audit log (60-day lookback):
Every Tue + Thu, 09:15-10:05:
MailItemsAccessed on r.okafor@northgateeng.com (CISO mailbox)
via "Productivity Analytics" app
Folders accessed: Inbox, "Board Materials", "Incident Reports"
→ Collection cadence. Same folders, same days, same window.
Aligned with CISO's recurring Tuesday/Thursday leadership
meetings when the assistant legitimately accesses the mailbox.
Every 2nd Monday:
FileAccessed on SharePoint: /sites/Engineering/Shared Documents/
via "Productivity Analytics" app
→ Engineering document collection following the sprint cycle.
No lateral movement. No credential dumping. No discovery commands.
No LSASS access. No Sysmon alerts. No EDR alerts.
Diagnosis: OAuth persistence + regular collection cadence + targeted
high-value data (board materials, incident reports, engineering docs)
= intelligence objective. Dwell time: 47 days before discovery.
Defensive translation: espionage operators produce minimal telemetry per unit of time. Individual events look like normal administration. The signal is in the pattern over weeks: the same account accessing the same executive mailbox at the same time, via the same application. Time-series analysis catches espionage. Individual alert triage doesn't.
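The "pattern over weeks" signal can be surfaced with a simple cadence count: bucket access events by weekday and hour, and flag any bucket that repeats across weeks. A minimal sketch, assuming timestamps extracted from an audit log export — the data, threshold, and bucketing granularity are illustrative.

```python
from collections import Counter
from datetime import datetime

# Hypothetical mailbox-access timestamps pulled from a 60-day audit log
# export — dates are illustrative, not real data.
accesses = [
    "2024-03-05 09:15", "2024-03-07 09:20",
    "2024-03-12 09:18", "2024-03-14 09:25",
    "2024-03-19 09:16", "2024-03-21 09:30",
]

def cadence(timestamps, min_repeats=3):
    """Bucket events by (weekday, hour). A bucket that repeats across weeks
    is a collection cadence — the signal espionage leaves in aggregate,
    invisible to single-event triage."""
    buckets = Counter()
    for ts in timestamps:
        t = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        buckets[(t.strftime("%A"), t.hour)] += 1
    return {slot: n for slot, n in buckets.items() if n >= min_repeats}

print(cadence(accesses))
# Every Tuesday and Thursday around 09:00 — the recurring-window pattern
# from the Northgate evidence above.
```

Each individual event here would pass alert triage; only the aggregation over weeks exposes the cadence.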
Disruption — sabotage, wiper, DDoS
The attacker wants to cause damage. Wiper malware destroys disks. Sabotage operations corrupt critical systems or data. DDoS attacks overwhelm services.
Disruption operations are the fastest to execute and the hardest to detect before impact. The attacker may spend weeks positioning — mapping the environment, identifying critical systems, staging their payload — but execution is measured in minutes.
Defensive translation: the detection opportunity is in the staging phase, not the execution phase. Data staging, payload pre-positioning, service enumeration, backup destruction — these preparation activities precede the destructive act by hours to days. If you're looking for the wiper, you're too late. If you're looking for the pre-wiper reconnaissance pattern, you have a detection window.
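That staging window can be hunted with a simple correlation: several distinct preparation behaviours inside one short window. A hedged sketch — the indicator names, the 24-hour window, and the threshold of three are assumptions, not calibrated values.

```python
from datetime import datetime, timedelta

# Hypothetical staging indicators with illustrative timestamps.
STAGING = {"service_enum", "payload_drop", "backup_delete", "mass_host_discovery"}

observed = [
    ("2024-04-01 22:10", "service_enum"),
    ("2024-04-01 23:40", "payload_drop"),
    ("2024-04-02 01:05", "backup_delete"),
]

def staging_alert(events, window_hours=24, threshold=3):
    """Alert when >= threshold DISTINCT staging behaviours occur inside one
    window — the pre-wiper pattern, caught before execution rather than after."""
    parsed = sorted((datetime.strptime(t, "%Y-%m-%d %H:%M"), kind)
                    for t, kind in events if kind in STAGING)
    window = timedelta(hours=window_hours)
    for i, (start, _) in enumerate(parsed):
        kinds = {k for t, k in parsed[i:] if t - start <= window}
        if len(kinds) >= threshold:
            return True
    return False

print(staging_alert(observed))
# Three distinct staging behaviours in under 24 hours: the detection
# window before the destructive act.
```

Requiring *distinct* behaviours rather than a raw event count keeps one noisy scanner from tripping the alert on its own.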
Access — supply chain, MSP compromise, staging
The attacker wants your access, not your data. Your organisation is a waypoint. Supply chain compromises use your software update mechanism to reach your customers. MSP compromises use your management tools to access your clients' environments.
Access operations are patient and strategic. The attacker maintains quiet access for months, waiting for the right moment to leverage your trust relationships. They avoid disrupting your operations because they need your systems functional and trusted.
Defensive translation: access operations target trust relationships, management interfaces, and deployment mechanisms. If you see anomalous activity on your build pipeline, remote management tools, or federated identity systems, the attacker may not be targeting you — they may be targeting everyone who trusts you. The impact assessment expands from your organisation to your entire partner ecosystem.
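The expanded impact assessment can be driven from an inventory of trust-bearing systems. A minimal sketch, assuming such an inventory exists — the system names and downstream mappings are illustrative placeholders.

```python
# Hypothetical inventory mapping each trust-bearing system to who
# depends on it — names are illustrative, not a real topology.
TRUST_SURFACES = {
    "build-pipeline": ["customers via signed updates"],
    "rmm-console":    ["managed clients"],
    "idp-federation": ["federated partners"],
}

def downstream_impact(compromised_systems):
    """Expand the impact assessment from this organisation to everyone
    who trusts it — the defining scope change of an access operation."""
    impacted = []
    for system in compromised_systems:
        impacted.extend(TRUST_SURFACES.get(system, []))
    return impacted

print(downstream_impact(["build-pipeline", "rmm-console"]))
# ['customers via signed updates', 'managed clients']
```

The point of maintaining the mapping in advance is speed: when an access operation is suspected, the downstream scope question should take minutes, not days.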
Using the objective during investigations
Three diagnostics you can apply in the first hours of any investigation.
What are they targeting? The systems the attacker has accessed reveal the objective. Executive mailboxes = intelligence. Backup systems = financial. Build pipelines = access. Industrial control systems = disruption.
How fast are they moving? Hours between phases = financial or disruption. Days between phases = intelligence or access. The pace reveals the timeline pressure.
How noisy are they? Known malware and obvious artifacts = opportunistic financial. Custom tooling with minimal footprint = targeted intelligence or access. The noise level reveals capability and risk tolerance.
These three diagnostics — target, pace, noise — give you an operational profile within the first hours of an investigation. Here are the diagnostics applied to two real scenarios side by side:
Scenario A:
Target: Backup server (SRV-NGE-BKP01), domain controller (SRV-NGE-DC01)
Pace: Initial access → backup server in 6 hours
Noise: PsExec (known tool), Mimikatz (known tool), SharpHound (known tool)
→ Diagnosis: FINANCIAL (ransomware). Fast, loud, targeting recovery infra.
→ Priority: Protect backups. Contain laterally. Expect encryption within 24hr.
Scenario B:
Target: CFO mailbox (via OAuth app), Engineering SharePoint
Pace: Initial access → first collection 3 days later
Noise: No tools detected. Access via legitimate Graph API calls.
→ Diagnosis: INTELLIGENCE (espionage). Slow, quiet, targeting strategic data.
→ Priority: Scope silently. Audit OAuth grants. DON'T isolate yet —
premature containment destroys evidence of the full collection scope.
The diagnostics produce different response strategies within the first hour. That's the operational value — you don't need to identify the threat group or the malware family to make response decisions. The target, pace, and noise tell you enough.
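The target/pace/noise triage can be sketched as a coarse decision function. The target categories and the 24-hour pace threshold are illustrative assumptions, not calibrated rules — real triage weighs evidence rather than pattern-matching labels.

```python
def classify(target, hours_between_phases, commodity_tooling):
    """Coarse sketch of the target/pace/noise triage. Categories and the
    24-hour threshold are illustrative, not calibrated values."""
    fast = hours_between_phases < 24
    if target in {"backups", "finance_mailbox"} and fast:
        return "financial"
    if target in {"critical_infrastructure", "ot_systems"}:
        return "disruption"
    if target in {"build_pipeline", "rmm", "federation"}:
        return "access"
    if target in {"exec_mailbox", "rnd_shares"} and not fast and not commodity_tooling:
        return "intelligence"
    return "undetermined"  # collect more telemetry before committing

# Scenario A: backup server in 6 hours, PsExec/Mimikatz/SharpHound
print(classify("backups", 6, commodity_tooling=True))         # financial
# Scenario B: CFO mailbox via OAuth, first collection 3 days in, no tools
print(classify("exec_mailbox", 72, commodity_tooling=False))  # intelligence
```

Note the deliberate "undetermined" branch: when the three diagnostics disagree, the right answer in the first hour is more telemetry, not a forced classification.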
Hands-on Exercise — Objective Profiling
Objective: Apply the three diagnostics to a real or published incident and identify the attacker's objective.
Prerequisites: A completed investigation timeline or a published incident report.
STEP 1 — Select an incident
Use the same report from OD1.1, or select a new one.
Recommended: search "ransomware incident report" or
"APT campaign analysis" on vendor blogs.
STEP 2 — Apply the three diagnostics
a. What did the attacker target?
List the systems/data the attacker accessed.
Map to objective: backups→financial, mailboxes→intelligence,
trust relationships→access, critical systems→disruption.
b. How fast did they move?
Measure time from initial access to objective execution.
Hours = financial/disruption. Days/weeks = intelligence/access.
c. How noisy were they?
List the tools and techniques observed.
Commodity (Mimikatz, PsExec, Cobalt Strike defaults) = financial.
Custom/minimal = intelligence or access.
STEP 3 — State the objective
Write: "The attacker's primary objective is [X] because:
targets = [evidence], pace = [evidence], noise = [evidence]."
STEP 4 — Determine response priority
Based on the objective, what should the response team
prioritize? Match against:
Financial → protect backups, contain laterally
Intelligence → preserve evidence, assess collection scope
Disruption → protect critical infrastructure, detect staging
Access → audit trust relationships, assess downstream impact
Success criteria: You've applied all three diagnostics with evidence and stated the objective with reasoning.
Challenge: Find a report where the attacker's objective changed mid-campaign (e.g., an espionage operator who pivoted to ransomware after being detected). How do the three diagnostics show the pivot?
You should be able to do the following without referring back to this sub. If you can't, re-read the relevant section above.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.