TR0.4 The NE Attack Timeline
Figure TR0.4 — CHAIN-HARVEST extended across three environments. The original attack (cloud BEC) is joined by an endpoint compromise and a Linux database pivot. Each environment crossing represents a triage intervention point — and a point where single-environment triage fails.
The original CHAIN-HARVEST: cloud only
The CHAIN-HARVEST attack as documented in the Detection Engineering and Incident Response courses was a cloud-native attack chain: password spray → AiTM token capture → inbox rule → MFA registration → mailbox access → BEC email. The entire chain operated within the M365 cloud environment. Rachel’s triage and containment were cloud-only: KQL queries against SigninLogs and OfficeActivity, session revocation via Graph PowerShell, and password reset through Entra ID.
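Rachel's cloud-only triage can be sketched in KQL. This is an illustrative query against the standard Sentinel SigninLogs schema; the 24-hour window and the projected columns are assumptions about what matters for triage, not the course's exact query.

```kql
// Illustrative sketch only — standard SigninLogs schema;
// the time window and projection are assumptions to tune.
SigninLogs
| where TimeGenerated > ago(24h)
| where UserPrincipalName =~ "j.morrison@northgateeng.com"
| project TimeGenerated, IPAddress, Location, AppDisplayName,
          ResultType, AuthenticationRequirement, UserAgent
| order by TimeGenerated asc
```

Reading the results in time order surfaces the spray-then-replay pattern: a run of failures from many IPs, then a success from an IP and user agent the account has never used.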
This scenario worked because the attacker’s objective was financial fraud via BEC — an objective achievable entirely within the cloud email environment. The attacker did not need to touch an endpoint or a server to achieve their goal.
The extended CHAIN-HARVEST: cloud → Windows → Linux
Now consider a more sophisticated attacker with a different objective. The initial access is identical — AiTM phishing captures j.morrison’s session token. The cloud phase proceeds as before: inbox rule, MFA registration, mailbox access. But this attacker’s objective is not BEC. It is intellectual property theft from Northgate Engineering’s engineering database, which runs on a RHEL server (SRV-NGE-BRS-DB01) in the Bristol data centre.
The cloud session token gives the attacker access to j.morrison’s M365 environment, including OneDrive. The attacker places a malicious DLL in j.morrison’s OneDrive sync folder. When j.morrison’s workstation (DESKTOP-NGE042) syncs OneDrive, the DLL is pulled to the local machine. A scheduled task that j.morrison created months ago to process OneDrive files triggers the DLL — and now the attacker has code execution on a Windows endpoint.
At 14:30, the Defender for Endpoint alert fires: “Suspicious DLL loaded by scheduled task.” The cloud attack has crossed the boundary into the Windows environment. The triage responder now faces a multi-environment incident.
At 14:45, the attacker uses the endpoint access to dump LSASS memory (the standard credential theft technique covered in IR04). j.morrison’s workstation has local admin privileges — a common but dangerous configuration. The LSASS dump reveals cached credentials for a service account (svc-dbadmin@northgateeng.com) that j.morrison’s workstation uses to connect to the engineering database for automated report generation.
At 15:12, the attacker uses the stolen service account credentials to SSH from DESKTOP-NGE042 to SRV-NGE-BRS-DB01. The Linux auth.log records the SSH session. The attacker initiates a database export — 4.2 GB of engineering drawings and specifications — to an external staging server.
The attack has now traversed all three environments: cloud (identity compromise) → Windows (endpoint compromise, credential theft) → Linux (database access, data exfiltration). A triage responder who only triages the cloud alert misses the endpoint compromise. A responder who only triages the endpoint alert misses the database exfiltration. Only a responder who follows the attack across all three environments understands the true scope.
Triage intervention points
Each boundary crossing creates a triage intervention point — a moment where the responder can detect the attack’s expansion and execute containment in the new environment.
Intervention point 1: Cloud → Windows (14:30). The Defender for Endpoint alert fires when the malicious DLL executes. The triage responder sees a cloud compromise (j.morrison’s AiTM session from 08:14) AND an endpoint alert on j.morrison’s workstation. The scorecard question “Is the scope beyond a single entity?” immediately connects the two alerts. The entity correlation is straightforward: same user (j.morrison), cloud compromise precedes endpoint compromise by 6 hours, OneDrive sync is the delivery mechanism.
The triage action at this intervention point: isolate DESKTOP-NGE042 via Defender for Endpoint (network isolation preserves evidence while stopping lateral movement), capture memory (the malicious DLL and any C2 connection are in memory), and escalate the incident from “cloud compromise — BEC” to “cloud compromise with endpoint pivot — scope unknown.”
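The entity correlation at intervention point 1 can be sketched as a KQL join. Column names follow the standard SigninLogs and DeviceProcessEvents schemas; the risk filter, the UPN suffix construction, and the time window are assumptions to illustrate the pattern.

```kql
// Sketch: link a risky cloud sign-in to subsequent endpoint process
// activity for the same user. Filters and the UPN suffix are assumptions.
let suspect_signins = SigninLogs
    | where TimeGenerated > ago(24h)
    | where RiskLevelDuringSignIn == "high"
    | project SigninTime = TimeGenerated, UserPrincipalName, IPAddress;
DeviceProcessEvents
| where TimeGenerated > ago(24h)
| extend UserPrincipalName = strcat(AccountName, "@northgateeng.com")
| join kind=inner suspect_signins on UserPrincipalName
| where TimeGenerated > SigninTime   // endpoint activity after the cloud compromise
| project SigninTime, EndpointTime = TimeGenerated, DeviceName,
          FileName, ProcessCommandLine
```

The `TimeGenerated > SigninTime` filter encodes the scorecard logic directly: endpoint activity that postdates the cloud compromise for the same user is what turns two alerts into one incident.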
Intervention point 2: Windows → Linux (15:12). The Linux auth.log records the SSH session from DESKTOP-NGE042 to SRV-NGE-BRS-DB01 using the svc-dbadmin credentials. If the responder isolated DESKTOP-NGE042 at intervention point 1 (14:30), this SSH connection is blocked by the network isolation — the attack stops here. If isolation was not executed at intervention point 1, the responder now faces a Linux triage in addition to the cloud and Windows triage.
The triage action at this intervention point: check auth.log for the SSH session details (source IP, authentication method, timing), check running processes on SRV-NGE-BRS-DB01 for active database export commands, check network connections for outbound transfers to external IPs, and execute containment (block the source IP via iptables, disable the svc-dbadmin account, capture the running process state).
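The auth.log check can be sketched as follows. The log line below is a fabricated example of the standard sshd "Accepted" format; on the live server the same extraction runs against /var/log/auth.log, and the containment commands appear as comments because they require root.

```shell
# Minimal sketch: pull the source IP and account out of an sshd
# "Accepted" line. The sample line is fabricated for illustration.
line='Mar 14 15:12:03 SRV-NGE-BRS-DB01 sshd[4211]: Accepted password for svc-dbadmin from 10.20.30.42 port 50514 ssh2'

user=$(echo "$line" | sed -n 's/.*Accepted [a-z-]* for \([^ ]*\) from.*/\1/p')
src_ip=$(echo "$line" | sed -n 's/.*from \([0-9.]*\) port.*/\1/p')
echo "user=$user src_ip=$src_ip"

# On the live server (root required), the equivalent triage and containment:
#   grep 'Accepted' /var/log/auth.log | grep svc-dbadmin
#   ps -eo pid,user,args | grep -E 'pg_dump|mysqldump'   # export tools are assumptions
#   ss -tnp state established                            # outbound transfers
#   iptables -A INPUT -s "$src_ip" -j DROP
#   usermod --lock svc-dbadmin
```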
The missed intervention. If the triage responder triaged only the cloud alert at 08:14 and revoked j.morrison’s cloud sessions (the correct cloud containment), but did not check for endpoint compromise, the attacker retains the endpoint access. The OneDrive-delivered DLL is already on the workstation and executing independently of the cloud session. Cloud containment does not stop the endpoint attack. The attacker proceeds from endpoint to Linux to data exfiltration — and the SOC believes the incident is contained because the cloud triage was correct within its scope.
Evidence available at each stage
Understanding what evidence exists at each stage — and where it lives — is the foundation of cross-environment triage.
Cloud evidence (08:11-13:08). SigninLogs (spray attempts, token replay, IP addresses, device fingerprints). AuditLogs (MFA method registration, conditional access evaluation). OfficeActivity (inbox rule creation, mailbox access, email send events). All queryable via KQL in Sentinel within minutes of the event occurring. Retention: 30-90 days depending on workspace configuration. This evidence is NOT volatile in the traditional sense — it persists in cloud storage. But it IS time-sensitive: the audit log entries that show the attacker’s MFA registration are only useful if the triage responder knows to look for them before the attacker’s session blends into the background noise of normal authentication activity.
Windows evidence (14:30-14:45). DeviceProcessEvents (DLL execution, LSASS dump attempt). DeviceFileEvents (malicious DLL written to OneDrive sync folder). DeviceLogonEvents (local and remote sessions). DeviceNetworkEvents (C2 communication if the DLL establishes one). These tables are queryable in Sentinel and Defender XDR Advanced Hunting. On the endpoint itself: the malicious DLL exists on disk (until the attacker deletes it), the scheduled task exists in the task scheduler (until modified), and the LSASS dump output exists in memory or on disk (until overwritten or deleted). Memory is the critical volatile artifact — it contains the DLL’s execution state, any injected code, and the C2 configuration if one exists.
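Hunting the Windows evidence described above can be sketched against the standard Advanced Hunting schema. The indicator strings are common LSASS-dump tells (comsvcs MiniDump, procdump); treat the filter as an assumption to tune, not a complete detection.

```kql
// Sketch: LSASS access attempts on the affected device.
// Indicator strings are illustrative, not exhaustive.
DeviceProcessEvents
| where DeviceName =~ "DESKTOP-NGE042"
| where TimeGenerated > ago(24h)
| where ProcessCommandLine has_any ("lsass.exe", "comsvcs", "procdump")
| project TimeGenerated, AccountName, FileName,
          ProcessCommandLine, InitiatingProcessFileName
```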
Linux evidence (15:12 onward). auth.log records the SSH session (timestamp, source IP, authentication method, username). bash_history (if the attacker does not clear it) records the database export command. /proc/PID/ for the active database export process shows the command line, open files, and network connections. The process state is volatile — it disappears when the process completes or is killed. The auth.log and bash_history are semi-persistent — they survive until log rotation or deliberate clearing.
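The /proc inspection can be sketched as below, demonstrated against the shell's own PID so the commands are safe to run anywhere; on the server you would substitute the PID of the export process found via ps.

```shell
# Sketch: what /proc/<pid>/ exposes for a live process. Uses this
# shell's own PID ($$) so the commands are harmless to run.
pid=$$

# Command line the process was started with (NUL-separated on disk)
cmdline=$(tr '\0' ' ' < /proc/$pid/cmdline)
echo "cmdline: $cmdline"

# Working directory and executable path
echo "cwd: $(readlink /proc/$pid/cwd)"
echo "exe: $(readlink /proc/$pid/exe)"

# Open file descriptors: open files and sockets for the process
ls /proc/$pid/fd
```

Capturing these before killing the process matters because everything under /proc/PID/ vanishes the moment the process exits.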
What the investigation team needs from triage
The triage handoff for this cross-environment incident must include findings from all three environments. The 15-minute triage report (taught in TR7.3) for this scenario covers: the cloud compromise (confirmed AiTM, j.morrison, 08:14, session revoked at 08:19), the endpoint compromise (confirmed DLL execution via OneDrive sync, DESKTOP-NGE042, isolated at 14:35, memory dump captured), the Linux compromise (probable database exfiltration, SRV-NGE-BRS-DB01, svc-dbadmin, SSH session from DESKTOP-NGE042, containment executed at 15:18), the evidence preserved per environment, the containment actions taken per environment, and the outstanding scope questions (has the attacker accessed any OTHER system using the svc-dbadmin credentials? Are any other users’ OneDrive folders seeded with malicious files?).
This report gives the investigation team a complete starting point. They do not need to re-triage any environment. They begin the deep investigation — forensic analysis of the DLL, full LSASS dump analysis, database access audit, lateral movement scope assessment — with the triage responder’s findings as their foundation.
Worked artifact: Cross-environment triage summary
Incident ID: INC-2026-0227-EXT
Classification: TRUE POSITIVE — Cross-environment compromise
Confidence: Confirmed (>95%)
Severity: CRITICAL (active data exfiltration from production database)
Cloud phase (M365): AiTM token replay for j.morrison@northgateeng.com at 08:14 from 185.220.101.42 (Tor). Inbox rule and MFA registration as persistence. Session revoked at 08:19. Password reset at 08:25.
Windows phase (Endpoint): Malicious DLL executed via OneDrive sync on DESKTOP-NGE042 at 14:30. LSASS dump at 14:45 — svc-dbadmin credentials stolen. Endpoint network-isolated via Defender at 14:35. Memory dump captured at 14:38.
Linux phase (Server): SSH from DESKTOP-NGE042 to SRV-NGE-BRS-DB01 at 15:12 using svc-dbadmin. Database export command detected in active processes. Source IP blocked via iptables at 15:18. svc-dbadmin account disabled at 15:20. Process state captured via /proc before termination.
Evidence preserved: Cloud sign-in/audit logs (KQL snapshot). Windows memory dump + KAPE collection. Linux /proc state + auth.log + bash_history snapshot.
Outstanding scope questions: (1) Other systems accessed by svc-dbadmin? (2) Other OneDrive folders seeded with DLLs? (3) Volume of data exported before containment? (4) Attacker’s external staging server — any additional exfiltration channels?
Try it: identify the boundary crossings in your environment
Think about your organisation’s infrastructure. Where are the boundary crossings that an attacker could exploit? Cloud identity → endpoint access (via OAuth token, OneDrive sync, Intune push). Endpoint → server (via stored credentials, SSH keys, admin shares). On-prem AD → Azure (via Azure AD Connect sync, ADFS). For each crossing, ask: would my triage process detect the crossing? Or would I triage the alert in one environment and miss the pivot to the next?
The triage intervention points in the CHAIN-HARVEST timeline
The CHAIN-HARVEST extended timeline provides five specific intervention points where triage and containment would have interrupted the attack chain. Understanding these points is critical for the triage responder because they define the time windows in which each containment action is effective:
Intervention Point 1 — Cloud detection (08:14-08:30). The Entra ID Protection alert fires on the anomalous token. If the triage responder classifies and revokes the session within 15 minutes, the attacker loses cloud access before establishing any persistence (MFA registration occurred at 09:22 — 68 minutes later). At this point, the incident scope is limited to one cloud identity. Containment: session revocation + password reset. No endpoint or Linux involvement.
Intervention Point 2 — Persistence detection (09:22-10:00). If intervention point 1 is missed, the attacker registers a fraudulent MFA method at 09:22. The AuditLogs show this registration. If the triage responder runs Query 2 (MFA changes) at any point before the attacker pivots to the endpoint, removing the MFA method prevents re-authentication after any later containment action. Without this removal, the attacker re-enters after every password reset and session revocation. This is why the 5-query triage pack includes the MFA check as Query 2, not Query 5 — it is time-critical.
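A sketch of what a Query-2-style MFA check might look like. The operation-name filter follows the standard Entra ID AuditLogs schema; the exact query in the course's triage pack may differ.

```kql
// Sketch: recent security-info (MFA) registrations for the user.
// Operation names follow the standard Entra ID AuditLogs schema.
AuditLogs
| where TimeGenerated > ago(24h)
| where OperationName has "security info"
| where tostring(TargetResources[0].userPrincipalName) =~ "j.morrison@northgateeng.com"
| project TimeGenerated, OperationName, Result,
          ActorIP = tostring(InitiatedBy.user.ipAddress)
```

Any registration the user cannot vouch for is removed before (or together with) the session revocation, closing the re-entry path described above.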
Intervention Point 3 — BEC preparation detection (08:47-14:00). The inbox rule is created at 08:47. Between 09:30 and 13:00, the attacker reads 47 emails from the CEO’s communication history. If the triage responder detects the inbox rule (Query 3) and the mailbox access pattern (Query 5) at any point during this window, the BEC email can be prevented. The attacker has not yet sent the fraudulent wire transfer request — they are still in the preparation phase. Removing the inbox rule and revoking the session prevents the BEC. After 14:00, the risk increases because the attacker has sufficient financial intelligence to craft a convincing BEC email.
Intervention Point 4 — Endpoint compromise detection (14:30-15:00). The malicious DLL activates its C2 beacon (Cobalt Strike in this scenario) at 14:30. Defender for Endpoint generates the alert. If the triage responder isolates the endpoint within 30 minutes (by 15:00), the attacker cannot pivot to Linux servers. The beacon has active C2 but has not yet been used for credential theft or lateral movement. Containment: device isolation + cloud containment from points 1-2. Investigation scope: one endpoint + one cloud identity.
Intervention Point 5 — Lateral movement detection (15:12+). The attacker opens an SSH session to the Linux database server. If this connection is not detected and contained, the attacker has access to the engineering database — the ultimate target. Detection depends on: Linux auth.log forwarding to Sentinel (if not forwarded, the SSH session is invisible to the cloud-side triage), or the Windows endpoint’s connection table showing an SSH session to an internal Linux server (visible in Command 2 of the 10-command triage).
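The second detection path, the Windows endpoint's own connection table, can be sketched in Advanced Hunting. Filtering on port 22 alone is a deliberately naive assumption; SSH on a non-standard port would evade it.

```kql
// Sketch: outbound SSH from a Windows workstation, which is unusual
// for most users. Port 22 alone is a naive filter to tune.
DeviceNetworkEvents
| where DeviceName =~ "DESKTOP-NGE042"
| where TimeGenerated > ago(6h)
| where RemotePort == 22 and ActionType == "ConnectionSuccess"
| project TimeGenerated, RemoteIP, RemotePort,
          InitiatingProcessFileName, InitiatingProcessCommandLine
```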
Each missed intervention point EXPANDS the scope. Point 1 missed: the attacker plants persistence. Point 2 missed: the attacker has re-entry capability. Point 3 missed: the attacker has BEC intelligence. Point 4 missed: the attacker has endpoint access. Point 5 missed: the attacker has database access. The triage responder who catches Point 1 resolves the incident in 15 minutes with zero business impact. The responder who catches Point 5 resolves the incident in hours with cross-environment containment and potential data exposure.
The MITRE ATT&CK framework documents this cross-environment reality: Initial Access in the cloud (credential phishing), Execution and Persistence on the endpoint (DLL sideloading, scheduled tasks), Credential Access bridging environments (LSASS dump provides AD credentials enabling both cloud and endpoint access), Lateral Movement crossing from endpoint to server (RDP, SSH, WMI), and Impact in any environment (encryption, exfiltration, BEC). The NE CHAIN-HARVEST timeline includes all phases across all three environments — learning to triage this timeline teaches the analyst to FOLLOW the attacker across boundaries rather than stopping at the first environment.
This is why triage speed matters. Not because speed is inherently valuable, but because speed determines which intervention point the responder reaches — and earlier intervention points produce smaller incidents with less damage.
The myth: We are a cloud-first organisation. Our data is in M365 and Azure. Endpoint and Linux triage are legacy skills for on-prem environments.
The reality: Cloud-first does not mean cloud-only. Users access cloud resources from endpoints — those endpoints run Windows. Backend services run on Linux VMs in Azure. Developers access cloud infrastructure from workstations that cache credentials locally. Every cloud environment has an endpoint attack surface (the devices accessing it) and often a Linux attack surface (the compute running behind it). The attacker who compromises a cloud identity through AiTM and then pivots to the endpoint via OneDrive sync operates across your “cloud-only” boundary. Triage capability must match the attacker’s movement, not your architectural label.
Troubleshooting
“I see the cloud compromise but cannot access the endpoint to triage it.” If you have Defender for Endpoint in your Sentinel workspace, you can triage the endpoint remotely via KQL — query DeviceProcessEvents, DeviceFileEvents, and DeviceNetworkEvents for the affected device. If you have Defender Live Response, you can execute triage commands directly on the endpoint. If neither is available, escalate to the endpoint team with your cloud triage findings and the specific entity (device name, user, timestamp) they need to investigate.
“The attack timeline spans 7 hours — how do I triage in 15 minutes?” You do not triage the entire 7-hour timeline in 15 minutes. You triage the CURRENT state in 15 minutes: is the attacker still active? What evidence is volatile right now? What containment action stops the current threat? The full timeline reconstruction is the investigation team’s job. Your triage report provides the investigation team with: the scope (which environments are affected), the current threat status (contained or active), and the evidence you preserved.