OD1.11 Documented Campaigns — Espionage and Supply Chain

6-8 hours · Module 1 · Free
What you already know

OD1.10 covered ransomware — fast, loud, predictable. This sub covers the opposite: espionage and supply chain campaigns that operate over months with minimal footprint. The detection challenge is fundamentally different — you're looking for patterns in normal-looking activity over 30-60 day windows, not dense telemetry clusters over hours.

Operational Objective

The attacker accesses the CFO's mailbox every Tuesday and Thursday morning. Each access event looks completely normal. The collection cadence is visible only when you analyse access patterns over 60 days and look for mechanical regularity from an unfamiliar device. This sub teaches you to detect the operational patterns that make espionage and supply chain campaigns visible despite producing almost no individual alerts.

Learning Objectives

By the end of this sub you will be able to:

  • Detect espionage collection cadences using long-window (30-60 day) access-pattern analysis in KQL and AI-assisted pattern detection. Volt Typhoon's operations against US critical infrastructure used exactly this pattern — regular, disciplined access during business hours that blended with legitimate use for months. This matters because espionage campaigns are invisible to standard alert-based detection; they require time-series analysis that most SOCs don't perform.
  • Identify supply chain positioning activity by establishing behavioral baselines for trusted software and detecting deviations in build pipeline, update distribution, or trust relationship usage. The SolarWinds SUNBURST campaign operated for 9+ months before detection precisely because the malicious activity used the trusted Orion update mechanism. This matters because supply chain campaigns exploit the one thing you're designed not to question — your trusted software.
ESPIONAGE vs RANSOMWARE — OPPOSITE OPERATIONAL PATTERNS

RANSOMWARE
  Timeline:  48–72 hours
  Telemetry: dense, clustered
  Detection: per-event rules, hourly correlation
  Signal:    volume and velocity

ESPIONAGE
  Timeline:  months to years
  Telemetry: sparse, normal-looking
  Detection: 30–60 day access-pattern analysis
  Signal:    mechanical regularity over weeks

Figure OD1.11 — Opposite operational patterns. Ransomware detection uses hourly correlation for dense telemetry. Espionage detection uses 30-60 day pattern analysis for sparse, normal-looking events.


Espionage — the collection cadence

The defining characteristic is mechanical regularity: same resource, same time, same device, repeating over weeks with the precision of an automated process.

The operator accesses the target mailbox every Tuesday and Thursday morning. They download updated strategy documents every two weeks. They check the R&D SharePoint when new designs are uploaded, following the engineering team's sprint cycle. Each individual event looks normal. The cadence is visible only over 30-60 day windows.

Why espionage operators maintain cadence

The cadence is operationally rational. The target's data updates on a predictable schedule (board materials monthly, financial projections quarterly, engineering designs per sprint). The operator's intelligence consumer expects regular deliveries. And maintaining a consistent pattern reduces the risk of ad-hoc access at unusual times that might trigger anomaly detection.

Accessing the mailbox at 09:15 every Tuesday looks like a habit. Accessing it at 03:00 on a random Wednesday looks like a compromise. The cadence itself is a stealth mechanism.
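The regularity signal can be quantified before any tooling is involved: compute the gaps between successive accesses and measure how tightly they cluster. A minimal Python sketch; the timestamps and the 0.5 threshold are illustrative assumptions, not values from real telemetry:

```python
from datetime import datetime
from statistics import mean, pstdev

def regularity_score(timestamps):
    """Coefficient of variation of the gaps between accesses.
    Near 0 means mechanical cadence; around 1 or above means
    human-irregular access."""
    gaps = [(b - a).total_seconds()
            for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None  # not enough events to call it a pattern
    return pstdev(gaps) / mean(gaps)

# Illustrative data: mailbox access every Tuesday and Thursday at 09:15.
accesses = [datetime(2024, 1, d, 9, 15)
            for d in (2, 4, 9, 11, 16, 18, 23, 25, 30)]
score = regularity_score(accesses)
# A Tue/Thu cadence alternates 2-day and 5-day gaps, so the score is
# low but nonzero (about 0.43 here); truly ad-hoc human access tends
# to score well above 0.5 (an assumption to tune on your own data).
print(round(score, 2))
```

Running this per account/device pair over a 60-day export gives a ranked list of the most mechanical access patterns to hand to an analyst.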

Detection: long-window access-pattern analysis

Detection KQL — mailbox access cadence over 60 days:

OfficeActivity
| where TimeGenerated > ago(60d)
| where Operation in ("MailboxLogin", "MailItemsAccessed")
| summarize
    AccessCount = count(),
    DistinctDays = dcount(startofday(TimeGenerated)),
    DayOfWeekPattern = make_set(dayofweek(TimeGenerated)),
    DistinctDevices = dcount(ClientInfoString),
    DistinctIPs = dcount(ClientIP)
    by UserId, bin(TimeGenerated, 7d)
| where DistinctDevices > 1
| order by AccessCount desc

The DayOfWeekPattern column reveals weekly cadence. An account showing access exclusively on Tuesdays and Thursdays from a device different from its normal workstation warrants investigation.

For precise cadence detection, feed the raw access data into Claude:

AI prompt for collection cadence detection:

I'm analysing 60 days of mailbox access events for an executive
account to detect potential espionage collection cadence.

[Paste: timestamp, client IP, device, operation, folder accessed]

1. What are the normal access patterns (days, hours, devices)?
2. Are there accesses using a different device or IP than normal?
3. Do anomalous accesses target specific folders consistently?
4. Is there mathematical regularity (every 48hrs, every Tue/Thu)?
5. Assessment: normal behaviour, collection cadence, or inconclusive?
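Before handing the raw data to an AI assistant, a deterministic pre-check can surface the obvious cases (question 4 in the prompt above). This sketch flags any device whose accesses concentrate on at most two weekdays; the device names, event shape, and thresholds are illustrative assumptions:

```python
from collections import defaultdict
from datetime import datetime

DAY = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def weekday_concentration(events):
    """events: list of (device, datetime). Returns devices whose
    accesses land on at most 2 distinct weekdays (with at least
    6 accesses), a crude proxy for a Tue/Thu collection cadence."""
    by_device = defaultdict(list)
    for device, ts in events:
        by_device[device].append(DAY[ts.weekday()])
    flagged = {}
    for device, days in by_device.items():
        if len(days) >= 6 and len(set(days)) <= 2:
            flagged[device] = sorted(set(days))
    return flagged

# Illustrative data: normal varied use vs. a Tue/Thu-only device.
events = [("LAPTOP-CFO", datetime(2024, 1, d, h))
          for d, h in [(1, 9), (2, 14), (3, 10), (4, 16), (5, 11), (8, 9)]]
events += [("UNKNOWN-DEV", datetime(2024, 1, d, 9))
           for d in (2, 4, 9, 11, 16, 18)]
print(weekday_concentration(events))
# Only UNKNOWN-DEV is flagged: its accesses fall on Tue/Thu only.
```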

Persistence mechanisms — the durable artifacts

Espionage persistence is designed for months. The mechanisms are chosen for longevity, not speed:

OAuth consent grants with Mail.Read or Files.Read.All permissions survive password resets, session revocations, and MFA changes. The attacker registers a custom application (or abuses an existing one), obtains consent from the compromised account, and the application has persistent API access to the user's data. The consent grant appears in Entra ID audit logs when created — but most organizations don't monitor consent events. A consent grant created 4 months ago for an unrecognized application with Mail.Read permissions on an executive account is exactly what espionage persistence looks like.
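The audit described here reduces to a filter over exported consent-grant records. A sketch, assuming the grants have already been pulled out of the Entra ID audit log into plain records; the field names and the RISKY_SCOPES list are illustrative assumptions, not the actual log schema:

```python
from datetime import datetime

RISKY_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Files.Read.All",
                "Files.ReadWrite.All", "MailboxSettings.ReadWrite"}

def risky_consents(grants, known_apps, now=None):
    """Flag consent grants for unrecognized apps holding risky
    scopes, with the grant's age in days. Old grants on executive
    accounts match the espionage persistence pattern."""
    now = now or datetime.utcnow()
    hits = []
    for g in grants:  # each g: app, user, scopes, granted_at
        if g["app"] in known_apps:
            continue
        risky = RISKY_SCOPES & set(g["scopes"])
        if risky:
            age = (now - g["granted_at"]).days
            hits.append((g["user"], g["app"], sorted(risky), age))
    return hits

# Illustrative records: one sanctioned app, one unrecognized app.
grants = [
    {"app": "Contoso CRM", "user": "cfo@example.com",
     "scopes": ["User.Read"], "granted_at": datetime(2024, 5, 1)},
    {"app": "MailSyncHelper", "user": "cfo@example.com",
     "scopes": ["Mail.Read", "offline_access"],
     "granted_at": datetime(2024, 2, 1)},
]
print(risky_consents(grants, known_apps={"Contoso CRM"},
                     now=datetime(2024, 6, 1)))
```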

M365 forwarding rules to external addresses survive session revocation and password reset. The attacker creates an inbox rule that BCC's all incoming mail to an external address. The rule persists in the mailbox configuration, not in the session. Your IR team revokes the session, resets the password, enforces new MFA — and the forwarding rule continues operating because nobody checked for it.
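The corresponding check is mechanical: does any rule action deliver mail outside the organization's own domains? A sketch over exported inbox-rule definitions; the rule structure shown is an illustrative assumption:

```python
def external_forwards(rules, internal_domains):
    """Flag inbox rules whose ForwardTo/RedirectTo targets fall
    outside the organization's own domains: the persistence that
    survives password resets and session revocation."""
    flagged = []
    for rule in rules:  # each rule: owner, name, action lists
        for action in ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo"):
            for addr in rule.get(action, []):
                domain = addr.rsplit("@", 1)[-1].lower()
                if domain not in internal_domains:
                    flagged.append((rule["owner"], rule["name"], addr))
    return flagged

# Illustrative rules: an internal forward, then an external one
# with the sparse rule name (".") often seen in real tradecraft.
rules = [
    {"owner": "cfo@example.com", "name": "Travel",
     "ForwardTo": ["assistant@example.com"]},
    {"owner": "cfo@example.com", "name": ".",
     "ForwardTo": ["archive-sync@protonmail.com"]},
]
print(external_forwards(rules, internal_domains={"example.com"}))
# Only the external protonmail.com forward is flagged.
```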

Dormant scheduled tasks with 72-hour or weekly intervals avoid beaconing detection tuned for hourly callbacks. A task that executes a PowerShell script every Sunday at 03:00 produces one event per week — invisible to any beaconing analysis using a 24-hour window.
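Catching the weekly task means widening the beacon-analysis window. This sketch takes 60 days of per-process execution timestamps and flags anything firing at a near-constant interval of 48 hours or more; the process names and tolerance values are illustrative assumptions:

```python
from datetime import datetime, timedelta

def slow_beacons(executions, min_interval_h=48, tolerance_h=1):
    """executions: {process: [datetime, ...]}. Flags processes whose
    runs repeat at a near-constant interval of min_interval_h or
    more, the cadence a 24-hour beaconing window can never see."""
    flagged = {}
    for proc, times in executions.items():
        gaps = [b - a for a, b in zip(sorted(times), sorted(times)[1:])]
        if len(gaps) < 3:
            continue  # need several repeats to call it a cadence
        hours = [g.total_seconds() / 3600 for g in gaps]
        if (min(hours) >= min_interval_h
                and max(hours) - min(hours) <= tolerance_h):
            flagged[proc] = round(sum(hours) / len(hours), 1)
    return flagged

# Illustrative telemetry: a daily updater (too frequent to flag)
# and a script running every Sunday at 03:00.
runs = {
    "updater.exe": [datetime(2024, 1, 1, 12) + timedelta(days=d)
                    for d in range(28)],
    "wsyncsvc.ps1": [datetime(2024, 1, 7, 3) + timedelta(weeks=w)
                     for w in range(8)],
}
print(slow_beacons(runs))
# Only wsyncsvc.ps1 is flagged, at a 168-hour (weekly) interval.
```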

Web shells in rarely audited application directories (/aspnet_client/, /owa/auth/, /ecp/healthcheck/) provide HTTP-based access that blends with legitimate web traffic. The shell is accessed during business hours and the HTTP requests look like normal application usage.

The persistence mechanisms are often the most detectable artifact — because establishing persistence requires a visible action (consent grant creation, forwarding rule creation, scheduled task registration) even if subsequent use is quiet. Detection should focus on the creation events, not the usage events.

Supply chain — the trust weaponization

The attacker compromises your trusted software's update mechanism. Every downstream customer receives the payload through the channel they're designed not to question.

The positioning phase

The attacker needs access to the build pipeline, source repository, or distribution mechanism. This requires a targeted compromise of the software vendor's development environment — which itself may take months of quiet espionage operations. The positioning phase is an espionage campaign against the vendor that precedes the supply chain attack against the vendor's customers.

Consider the SolarWinds SUNBURST timeline. The attackers compromised SolarWinds' build environment in early 2020. They spent months understanding the Orion build process — where the source code lived, how builds were compiled, how updates were signed and distributed. They injected malicious code into the Orion source that only activated under specific conditions (after a 12-14 day dormancy period, only if the host domain wasn't a security vendor's lab, only if specific security tools weren't running). The positioning phase was invisible because it happened inside SolarWinds' infrastructure, not the downstream customers'.

The 3CX supply chain attack in March 2023 followed a similar pattern — the attackers compromised the 3CX build pipeline and distributed a trojanised desktop client through the legitimate auto-update mechanism. EDR products at several customers flagged the behavior, but many organizations suppressed the alerts because they came from a trusted, signed application.

The distribution phase

Once positioned, the attacker injects malicious code into a legitimate update. The update is signed with the vendor's certificate, distributed through the vendor's update mechanism, and installed by the customer's patch management process. The customer's security tools trust the software because it's signed by a trusted vendor.

The distribution is the attacker's leverage multiplier. SolarWinds Orion had approximately 18,000 customers who installed the trojanised update. 3CX had approximately 600,000 organizations using their software. One compromise of one build pipeline → thousands of compromised customers. The effort-to-impact ratio is why state-sponsored actors invest months in the positioning phase.

Detection: behavioral baselines for trusted software

You can't signature-detect a supply chain compromise because the payload arrives inside trusted, signed software. The binary hash matches the vendor's published hash. The certificate is valid. The update channel is legitimate. Every traditional detection mechanism is satisfied.

Detection requires a fundamentally different approach: establishing a behavioral baseline for what the trusted software normally does — which network connections it makes, which processes it spawns, which files it accesses, which DNS queries it generates — and alerting when the behavior deviates from the baseline.

// Baseline: what network connections does a trusted application normally make?
// Replace the process name with any critical business application in your environment
DeviceNetworkEvents
| where TimeGenerated > ago(90d)
| where InitiatingProcessFileName == "SolarWinds.BusinessLayerHost.exe"
| summarize DistinctDestinations = make_set(RemoteUrl),
    DestCount = dcount(RemoteUrl)
    by bin(TimeGenerated, 7d)
// Review the weekly destination sets. If a new domain appears that
// wasn't in the previous 12 weeks, investigate. SUNBURST's C2 domain
// (avsvmcloud.com) would have appeared as a new destination in the
// week the trojanised update was first installed.

A more practical approach for organizations without per-application network baselines:

// Detect any process making first-time outbound connections to
// domains not seen in the previous 90 days
let KnownDestinations = DeviceNetworkEvents
    | where TimeGenerated between (ago(90d) .. ago(7d))
    | where isnotempty(RemoteUrl)
    | distinct RemoteUrl;
DeviceNetworkEvents
| where TimeGenerated > ago(7d)
| where isnotempty(RemoteUrl)
| where RemoteUrl !in (KnownDestinations)
| where InitiatingProcessFileName !in~ ("chrome.exe", "msedge.exe",
    "firefox.exe", "outlook.exe", "teams.exe")
| summarize FirstSeen = min(TimeGenerated),
    ConnectionCount = count()
    by InitiatingProcessFileName, RemoteUrl
| order by ConnectionCount desc

This surfaces any non-browser process that started connecting to a new domain in the last week. The results will need tuning (new legitimate services, CDN changes), but a trusted application suddenly connecting to a previously unseen domain is exactly the SUNBURST signal.
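Where the advanced-hunting tables aren't available, the same first-seen logic works over exported connection logs. A minimal sketch; the log record shape is an illustrative assumption (avsvmcloud.com is the real SUNBURST C2 domain mentioned above):

```python
def first_seen_destinations(history, recent, browser_procs=frozenset(
        ("chrome.exe", "msedge.exe", "firefox.exe",
         "outlook.exe", "teams.exe"))):
    """history/recent: iterables of (process, destination) pairs.
    Returns destinations a non-browser process contacted in the
    recent window but never in the history window, with counts."""
    known = {dest for _, dest in history}
    hits = {}
    for proc, dest in recent:
        if proc.lower() in browser_procs or dest in known:
            continue
        hits[(proc, dest)] = hits.get((proc, dest), 0) + 1
    return hits

# Illustrative logs: 90 days of normal vendor traffic, then one
# week containing a never-before-seen destination.
history = [("SolarWinds.BusinessLayerHost.exe", "api.solarwinds.com")] * 50
recent = [("SolarWinds.BusinessLayerHost.exe", "api.solarwinds.com"),
          ("SolarWinds.BusinessLayerHost.exe", "avsvmcloud.com"),
          ("chrome.exe", "news.example.org")]
print(first_seen_destinations(history, recent))
# Only the trusted process's new destination survives the filters.
```

As with the KQL version, the output needs tuning for CDN churn and newly adopted services.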

STEP 1 — Espionage cadence check (20 minutes)
   Run the mailbox access cadence KQL from this sub against
   3 executive accounts in your environment.
   For each, check:
   - Does DistinctDevices show access from more than one device?
   - Is there a regular cadence from a non-primary device?
   - Feed any suspicious patterns into Claude with the cadence
     detection prompt for deeper analysis.

STEP 2 — Persistence artifact audit (15 minutes)
   Check for espionage-style persistence on executive accounts:
// OAuth consent grants with sensitive permissions:
AuditLogs
| where TimeGenerated > ago(90d)
| where OperationName == "Consent to application"
| extend AppName = tostring(TargetResources[0].displayName)
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| extend Permissions = tostring(AdditionalDetails)
| project TimeGenerated, Actor, AppName, Permissions

// Mail forwarding rules to external addresses:
OfficeActivity
| where TimeGenerated > ago(90d)
| where Operation in ("New-InboxRule", "Set-InboxRule")
| where Parameters has "ForwardTo"
    or Parameters has "RedirectTo"
    or Parameters has "ForwardAsAttachmentTo"
| project TimeGenerated, UserId, Parameters
   If either query returns results for executive accounts,
   investigate immediately — these are the most common
   espionage persistence mechanisms.

STEP 3 — Trusted application baseline (15 minutes)
   Pick one critical business application (backup agent, RMM tool,
   or similar). Query its network connections over 90 days.
   Document the normal destination set. Would you detect a new
   destination that appeared tomorrow?

Hands-on Exercise — Long-Window Pattern Analysis

Objective: Run long-window access-pattern analysis against high-value accounts in your environment and establish behavioral baselines for one trusted application.

Prerequisites: Access to your SIEM with 60+ days of Office 365 activity logs or equivalent. Access to network connection logs for at least one critical business application.

Success criteria: You've checked 3 executive accounts for access cadence anomalies, audited for espionage persistence artifacts, and established a network baseline for one trusted application.

Challenge: If you found OAuth consent grants on executive accounts, check when they were created, what permissions they have, and whether the application is recognized. An unrecognized app with Mail.Read permissions on the CFO's account created 6 months ago is exactly what espionage persistence looks like.


Next
OD1.12 — The Defender's Operational Profile. The capstone sub for M1. Everything you've learned — lifecycle, objectives, constraints, reconnaissance, timing, team structures, campaign patterns — combined into a four-step methodology you apply during the first hours of every investigation.
Checkpoint — before moving on

You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.

1. Detect an espionage collection cadence using long-window KQL analysis and explain why standard hourly/daily correlation windows miss it. (§ Detection: long-window access-pattern analysis)
2. Identify the persistence mechanisms espionage operators use for month-long access and explain which creation events are detectable. (§ Persistence mechanisms)
3. Explain why supply chain detection requires behavioral baselines rather than signatures and describe how to establish a baseline for one trusted application. (§ Supply chain — the trust weaponization)

You're reading the free modules of offensive-security-for-defenders

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
