OD0.1 The Gap Between Alerts and Campaigns
You triage security alerts. You've investigated incidents where multiple alerts turned out to be related. You know that attackers don't operate in single-technique bursts — they chain activity across systems and time. This sub doesn't introduce attack chains as a concept. It shows you the specific cognitive gap that causes experienced analysts to handle campaign components as isolated events — and what that gap costs in dwell time.
Operational Objective
Your SOC processes hundreds of alerts per week. Each one is triaged against a severity matrix, investigated as an independent event, and closed with a disposition. The process works for isolated incidents — a failed brute force, a blocked phishing email, a known false positive. It fails for coordinated campaigns because the campaign is spread across multiple alerts, multiple systems, and multiple hours. No single alert looks critical. The campaign is critical.
Most SOC teams are structured around individual alert handling: an analyst picks up an alert, investigates it, makes a disposition, and moves to the next one. The metrics reinforce this — mean time to triage, alerts closed per shift, SLA compliance. Nothing in that workflow asks the question that matters: "Is this alert connected to anything else in the last 24 hours?" This sub demonstrates the gap with three alert sequences from Northgate Engineering that were triaged independently and missed the campaign connecting them.
Learning Objectives
By the end of this sub you will be able to:
- Identify the campaign-level detection gap in standard SOC triage workflows — the same gap that allowed the SolarWinds SUNBURST campaign to operate undetected for months across environments with functional alert pipelines. This matters because every SOC has the technology to correlate events across systems; the gap is that analysts don't know what campaign patterns look like, so they don't ask the correlation question.
- Apply a three-step retrospective correlation method (shared attributes → hypothesized chain → sequence validation) to a set of closed alerts from your own environment. This matters because the most dangerous campaigns in your history may already be in your closed-alert queue, triaged correctly as individual events and never connected.
- Distinguish between alerts that are coincidentally similar and alerts that are operationally connected by the timing, sequencing, and logical progression of attacker operations. This matters because false correlation wastes investigation time — the goal is not to connect everything, but to recognize the patterns that indicate coordinated activity.
Figure OD0.1 — Three alerts, three systems, one campaign. Each alert was triaged correctly in isolation. The campaign connecting them was never identified. The attacker operated on the application server for fourteen days before a separate investigation found the persistence.
Tuesday afternoon, three alerts in the queue
It's 15:30 and your SOC queue has three open alerts from the last twelve hours. Different systems, different severities, different rule categories. You triage them in order.
The first alert fired at 09:14. A Sigma rule for suspicious PowerShell execution triggered on DESKTOP-NGE042 — Tom Ashworth's workstation. The command line shows Invoke-WebRequest downloading a file from a cloud storage URL. You check the URL against threat intel feeds. Clean. You check Tom's role. IT administrator. You check whether he's run similar commands before. He has — twice this month, both for legitimate admin scripts. You close the alert as a false positive with a note: "Admin script download, matches prior pattern."
The second alert fired at 11:47. An unusual sign-in location for t.ashworth@northgateeng.com. The sign-in came from an IP address in the same country but not from the office or Tom's home. Successful authentication, MFA satisfied, no risk flags from Entra ID. You check whether Tom is travelling or using a VPN. His calendar shows he's in the office today, but the IP could be a mobile hotspot or a VPN provider. Low severity. You close it as likely benign: "Possible VPN or mobile network. No suspicious activity following sign-in."
The third alert fired at 14:22. A new scheduled task created on SRV-NGE-APP01, one of Northgate Engineering's application servers. The task name is WindowsUpdateCleanup and it runs a PowerShell script from C:\ProgramData\. You check the server's patch schedule. Patching ran over the weekend. Two similar tasks exist from previous patch cycles. Low severity. You close it as operational: "Post-patching cleanup task. Consistent with patching cycle."
Three alerts. Three reasonable dispositions. Every one of them defensible in a triage review.
What actually happened
The same three events, read as a campaign instead of as isolated alerts.
The PowerShell alert on Tom's workstation wasn't an admin script. It was the execution stage of a spearphishing attack. The Invoke-WebRequest command downloaded an in-memory loader that harvested Tom's credentials from the LSASS process and exfiltrated them to attacker infrastructure. The URL was clean on threat intel because the attacker used a legitimate cloud storage service — the same one Tom actually uses for admin work, which is why the command matched his prior pattern.
The unusual sign-in at 11:47 wasn't Tom. It was the attacker replaying Tom's stolen access token from their own infrastructure. The sign-in appeared to satisfy MFA because the token was captured post-authentication — the attacker intercepted the session after Tom completed MFA, so the token carried the MFA claim. The IP address was an anonymizing proxy in the same country, deliberately chosen to avoid the "impossible travel" detection rule.
The scheduled task on the application server wasn't from patching. It was the attacker's persistence mechanism, installed using Tom's domain admin credentials harvested three hours earlier. The task name WindowsUpdateCleanup was chosen to mimic legitimate Windows maintenance tasks. The PowerShell script in C:\ProgramData\ was a reverse shell that beaconed to the attacker's C2 infrastructure every four hours.
One campaign. Three phases: credential harvest at 09:14, access replay at 11:47, persistence installation at 14:22. Each phase produced an alert. Each alert was triaged correctly in isolation. The campaign was never identified. The attacker operated on the application server for fourteen days.
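One concrete takeaway from the persistence phase: scheduled-task creation leaves a log artifact you can hunt retrospectively. Here is a minimal Sentinel sketch, assuming Windows Security Event 4698 is collected into the SecurityEvent table; the SRV- prefix mirrors this scenario's server naming and is an assumption, not a convention your environment necessarily follows.

SecurityEvent
| where EventID == 4698                      // scheduled task created
| where Computer startswith "SRV-"           // hypothetical server naming filter
// TaskName lives in the raw XML; SubjectUserName may be empty for some events
| extend TaskName = extract(@'<Data Name="TaskName">(.*?)</Data>', 1, EventData)
| project TimeGenerated, Computer, SubjectUserName, TaskName

A WindowsUpdateCleanup task created outside a patching window, by an interactive admin account, is exactly the kind of candidate this course teaches you to correlate.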
Why the triage was correct and the outcome was wrong
Every disposition was defensible. The problem isn't the analyst's judgment on individual alerts — it's the workflow that never asks whether the alerts are connected.
The PowerShell execution DID match Tom's prior pattern. The sign-in WAS from the same country. The scheduled task DID look like a patching artifact. No individual alert was severe enough to trigger escalation. The triage framework worked exactly as designed — for isolated events.
It broke because the framework doesn't ask: "Are these events related?"
Relating them requires a different kind of knowledge. You need to understand that an attacker who harvests credentials will replay them within hours — not days. You need to know that token replay from a different IP is the expected follow-on to credential theft, not a coincidence. You need to recognize that persistence installation on a high-value server within the same operational window as credential theft and token replay is the completion of an access chain, not a maintenance task.
That knowledge doesn't come from better detection rules. Your rules fired correctly — all three alerts generated. It comes from understanding how offensive operations work: why attackers chain these specific activities in this specific sequence, on this specific timeline. That's what this course teaches.
The campaign-level detection gap
The gap isn't technology. Your SIEM can correlate. The gap is knowledge — knowing what campaign patterns look like so you ask the correlation question in the first place.
Your SIEM can query across tables. Sentinel can create multi-stage analytics rules. Splunk can correlate events across sourcetypes and time ranges. The technology to detect campaigns exists in every modern SIEM.
The gap is this: to build campaign-level detections, you need to know what campaigns look like. To triage alerts as campaign components, you need to understand the operational logic that connects them. To anticipate the attacker's next step during an investigation, you need to understand how offensive operations are planned and executed.
This is the gap between Detection Engineering (which teaches you to detect individual techniques) and this course (which teaches you to detect the campaign connecting them). A learner who completes both can write the individual detection rules AND build the correlation logic that connects them into campaign-level alerts.
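To make the "technology exists" point concrete, here is a minimal multi-stage sketch in Sentinel KQL that pairs a suspicious-execution alert with a follow-on sign-in for the same account within six hours. The alert-name filter, the six-hour window, and the identity normalization are all assumptions to adapt to your environment.

let window = 6h;
let ExecAlerts = SecurityAlert
    | where TimeGenerated > ago(1d)
    | where AlertName has "PowerShell"       // hypothetical rule-name filter
    // The first entity is often, not always, the account; verify against your schema
    | extend Account = tolower(tostring(parse_json(Entities)[0].Name))
    | project ExecTime = TimeGenerated, Account, ExecAlert = AlertName;
SigninLogs
| where TimeGenerated > ago(1d)
// Join on the local part of the UPN; identity mapping varies by tenant
| extend Account = tolower(tostring(split(UserPrincipalName, "@")[0]))
| project SigninTime = TimeGenerated, Account, IPAddress
| join kind=inner ExecAlerts on Account
| where SigninTime between (ExecTime .. (ExecTime + window))  // replay follows harvest
| project Account, ExecAlert, ExecTime, SigninTime, IPAddress

The query itself is trivial. Knowing that "execution alert, then sign-in from a new IP within hours" is a pattern worth querying for is the knowledge gap this sub describes.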
The retrospective correlation method
Three steps you can apply today to your closed-alert queue. No new tools required — just a different question.
Step 1 — Shared attributes. Given a set of alerts from the same day, list what they have in common: same user identity, same credential, same time window, same target environment, logical sequence from one to the next. In the Northgate Engineering example: same user identity (t.ashworth), escalating privilege across systems (workstation → cloud → server), 5-hour operational window.
Step 2 — Hypothesized chain. For each alert, ask: "If this were a campaign phase, what would have preceded it and what would follow it?" The PowerShell execution as credential theft → the next step would be credential replay from a different source. The unusual sign-in as credential replay → the next step would be lateral movement or persistence. The scheduled task as persistence → the preceding step would be privileged access to the server.
Step 3 — Sequence validation. Check whether the hypothesized chain matches the observed alert sequence and timing. Credential theft at 09:14, credential replay at 11:47 (2.5 hours later — within the expected operational tempo), persistence at 14:22 (2.5 hours after replay — consistent with an attacker establishing stable access before proceeding). The sequence matches. The timing matches. The logical progression matches.
If the timing is wrong — persistence before credential theft — or the logical sequence doesn't hold, the correlation may be coincidental. Not every set of alerts is a campaign. The method helps you identify the ones that are.
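Step 3's timing check can be approximated directly in the SIEM. A minimal Sentinel sketch that lists one account's alerts in order with the gap between consecutive alerts; the account value is this scenario's, and the entity parsing is an assumption about your schema.

SecurityAlert
| where TimeGenerated > ago(1d)
| extend Account = tolower(tostring(parse_json(Entities)[0].Name))
| where Account == "t.ashworth"              // candidate identity from Step 1
| order by TimeGenerated asc
| extend GapFromPrevious = TimeGenerated - prev(TimeGenerated)
| project TimeGenerated, AlertName, GapFromPrevious

Gaps of minutes to a few hours between phases match hands-on-keyboard tempo; gaps of days, or phases in an impossible order, weaken the chain hypothesis.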
STEP 1 — Export recent closed alerts
Open your SIEM (Sentinel, Splunk, Elastic, or your ticketing system). Pull all alerts closed in the last 7 days. Filter to medium and low severity only — high-severity alerts are usually investigated thoroughly. The campaign gap lives in the medium/low alerts that get triaged quickly and closed.
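If your alerts live in Sentinel, a minimal export sketch for this step; the closed-state names are assumptions, since status values vary by connector, so verify them against your own data.

SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertSeverity in ("Medium", "Low")
| where Status in ("Resolved", "Dismissed")  // assumed closed states; verify locally
| project TimeGenerated, AlertName, AlertSeverity, Status, Entities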
STEP 2 — Group by identity
Sort the alerts by user identity (username, email, or SID). Look for any user who appears in 3+ alerts within the same 24-hour window. These are your correlation candidates.
If using Sentinel KQL (the entity parsing and day-bin grouping below are approximations; adjust to your schema):

SecurityAlert
| where TimeGenerated > ago(7d)
| where AlertSeverity in ("Medium", "Low")
// Entities is a JSON array; the first entry is often, but not always,
// the account entity. Filter by entity Type if your schema differs.
| extend AccountName = tolower(tostring(parse_json(Entities)[0].Name))
| summarize AlertCount = count(), Alerts = make_list(AlertName)
    by AccountName, bin(TimeGenerated, 1d)   // 1d bins approximate a 24-hour window
| where AlertCount >= 3
| order by AlertCount desc
STEP 3 — Apply the three-step method
For each candidate group:
a. List shared attributes (same user, same time window, same systems, logical sequence)
b. Hypothesize the campaign chain — if these were campaign phases, what connects them?
c. Validate the sequence — do the timing and logical progression match an offensive operational pattern?
STEP 4 — Document your findings
For each group you evaluated, write one sentence: "These alerts [are / are not] likely campaign components because [specific reason]."

Hands-on Exercise — Retrospective Alert Correlation
Objective: Apply the three-step retrospective correlation method to your own closed-alert queue and determine whether any recently closed alerts represent campaign components that were triaged independently.
Prerequisites: Access to your SIEM or ticketing system with at least 7 days of closed alerts. You need: timestamp, affected system/user, alert rule name, severity, and disposition for each alert. No lab VMs or offensive tools required.
Success criteria: You've evaluated at least 2 alert groups. For each, you've documented shared attributes, a hypothesized chain, and a sequence validation. You've made a judgment call on whether the group represents a campaign or coincidence, with reasoning.
Challenge: If you found a group that looks like it could be a campaign, open the original investigation notes for each alert. Did the analyst who triaged them mention the other alerts? If not, that's the gap this course is about.
You should be able to meet the three learning objectives above without referring back to this sub. If you can't, re-read the Tuesday scenario and the retrospective correlation method.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.