TR1.9 Cross-Environment Evidence Correlation
Figure TR1.9 — Three correlation methods. Timestamps connect events chronologically. IP addresses connect events across network boundaries. Entity mapping connects events to the same user or device despite different naming conventions per environment.
Correlation method 1: timestamps
The most intuitive correlation: events in different environments that occur within a narrow time window are likely related. The CHAIN-HARVEST extended timeline shows the cloud compromise at 08:14, the endpoint compromise at 14:30 (just over six hours later), and the Linux pivot at 15:12 (42 minutes after the endpoint compromise). The 42-minute gap between endpoint and Linux is highly suggestive of a single attacker progressing through the kill chain — credential theft on the endpoint requires time to dump, crack, and use.
Timestamp challenges. Different environments may have different time zones, different clock synchronisation accuracy, and different timestamp granularity. Cloud logs use UTC. Windows event logs use the local system time (which should be UTC on servers but may be local time on workstations). Linux logs use the system’s configured timezone. Before correlating timestamps, normalise all times to UTC.
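Sentinel tables already carry TimeGenerated in UTC, but timestamps lifted from a workstation's local-time logs need converting before they join the timeline. A minimal KQL sketch using datetime_local_to_utc() — the date and timezone below are illustrative assumptions, not values from the scenario:

```
// Illustrative: convert a local-time workstation timestamp to UTC.
// 'Europe/London' is an assumed timezone — use the workstation's actual zone.
print LocalTime = datetime(2025-03-10 14:30:00),
      UtcTime   = datetime_local_to_utc(datetime(2025-03-10 14:30:00), 'Europe/London')
```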
Clock drift. Systems without NTP synchronisation may drift seconds or minutes from the correct time. A 2-minute drift between the Windows endpoint clock and the Linux server clock means events that appear 5 minutes apart may actually be 3 minutes or 7 minutes apart. Check NTP status on both systems during triage: w32tm /query /status on Windows, timedatectl on Linux. Document the drift in the triage report so the investigation team can adjust their timeline.
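Once the drift is documented, it can be corrected arithmetically before correlating. A minimal KQL sketch, assuming triage found the endpoint clock running 2 minutes fast:

```
// Assumption: triage measured the endpoint clock as 2 minutes ahead of
// the reference clock. Subtract the drift so endpoint events align.
let endpointDrift = 2m;
DeviceProcessEvents
| extend CorrectedTime = TimeGenerated - endpointDrift
| project CorrectedTime, TimeGenerated, DeviceName, FileName
```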
```
// Illustrative reconstruction — table and field names follow the standard
// Sentinel / Defender schema; the account filter is the CHAIN-HARVEST example.
union
    (SigninLogs
     | where UserPrincipalName == "j.morrison@northgateeng.com"
     | project TimeGenerated, Source = "Cloud-SignIn",
               Detail = strcat(ResultType, " from ", IPAddress)),
    (DeviceProcessEvents
     | where AccountName == "j.morrison"
     | project TimeGenerated, Source = "Endpoint-Process",
               Detail = strcat(FileName, " ", ProcessCommandLine)),
    (DeviceLogonEvents
     | where AccountName == "j.morrison"
     | project TimeGenerated, Source = "Endpoint-Logon",
               Detail = strcat(LogonType, " from ", RemoteIP))
| sort by TimeGenerated asc
```
This KQL union merges cloud sign-in events and endpoint process/logon events into a single chronological timeline. The result shows the attack progression: cloud compromise → endpoint DLL execution → endpoint credential theft → lateral movement attempts — all in a single view.
For Linux evidence that is not in Sentinel (auth.log from a server without syslog forwarding), the triage responder must manually add the Linux events to the timeline. Export auth.log entries for the relevant window, normalise timestamps to UTC, and insert them into the timeline between the cloud and endpoint events.
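One way to fold the manually extracted auth.log entries into the Sentinel timeline is a datatable literal unioned with the endpoint query. A sketch with a hypothetical entry (timestamp already normalised to UTC; the date is illustrative):

```
// Hypothetical auth.log entry, hand-transcribed and normalised to UTC.
let LinuxEvents = datatable(TimeGenerated:datetime, Source:string, Detail:string)[
    datetime(2025-03-10 15:12:00), "Linux-auth.log",
    "Accepted password for j.morrison from 10.1.1.42 port 50214 ssh2"
];
union
    LinuxEvents,
    (DeviceProcessEvents
     | where AccountName == "j.morrison"
     | project TimeGenerated, Source = "Endpoint",
               Detail = strcat(FileName, " ", ProcessCommandLine))
| sort by TimeGenerated asc
```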
Correlation method 2: IP addresses
The same IP address appearing in multiple environments’ logs confirms the attacker moved between them. The CHAIN-HARVEST attacker’s Tor exit node (185.220.101.42) appears in SigninLogs (cloud sign-in source). If the same IP appears in CommonSecurityLog (firewall logs for the same time window), the attacker also probed the network perimeter from the same infrastructure.
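Checking one IP across cloud and firewall logs can be a single union. A sketch using standard Sentinel schema fields (CommonSecurityLog exposes the source address as SourceIP):

```
// Illustrative — the IP is the CHAIN-HARVEST Tor exit node from the scenario.
let attackerIP = "185.220.101.42";
union
    (SigninLogs
     | where IPAddress == attackerIP
     | project TimeGenerated, Source = "SigninLogs", Detail = UserPrincipalName),
    (CommonSecurityLog
     | where SourceIP == attackerIP
     | project TimeGenerated, Source = "Firewall", Detail = DeviceAction)
| sort by TimeGenerated asc
```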
More commonly, the INTERNAL IP of a compromised system appears as a lateral movement source. DESKTOP-NGE042’s internal IP (10.1.1.42) appearing in the Linux auth.log as the SSH source IP confirms the endpoint-to-Linux pivot. This is the correlation that proves the cross-environment attack chain — without it, the cloud compromise and the Linux SSH session could be unrelated events.
NAT and proxy challenges. If the organisation uses a web proxy, all outbound connections appear to originate from the proxy IP in external-facing logs. The triage responder must look at the proxy’s internal logs to determine which endpoint made the connection. Similarly, if Linux servers are behind a NAT, the SSH source IP may be the NAT device rather than the true source endpoint. Check for X-Forwarded-For headers in web proxy logs and for source port differentiation behind NAT devices.
Correlation method 3: entity mapping
The same user appears with different identifiers in different environments:
Entra ID / M365: UserPrincipalName format: j.morrison@northgateeng.com
Active Directory / Windows: SAM account format: NORTHGATE\j.morrison or just j.morrison
Linux: Username format: j.morrison or jmorrison (varies by provisioning)
The triage responder must maintain a mental (or documented) mapping between these formats. A KQL join between SigninLogs and DeviceProcessEvents requires extracting the username from the UPN: extend AccountName = tostring(split(UserPrincipalName, "@")[0]).
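That extraction can then anchor the join itself. A sketch, assuming the standard schema field names and a 24-hour window to keep the join bounded:

```
// Illustrative — extract the account name from the UPN, then join cloud
// sign-ins to endpoint process events for the same account.
SigninLogs
| where TimeGenerated > ago(24h)
| extend AccountName = tostring(split(UserPrincipalName, "@")[0])
| join kind=inner (
    DeviceProcessEvents
    | where TimeGenerated > ago(24h)
    | project AccountName, DeviceName, FileName, ProcessTime = TimeGenerated
) on AccountName
| project TimeGenerated, UserPrincipalName, IPAddress, DeviceName, FileName, ProcessTime
```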
For service accounts, the mapping may be less obvious. The cloud service principal sp-dbsync@northgateeng.com may correspond to the Linux account svc-dbadmin on the database server, provisioned months ago by a different team with a different naming convention. The triage responder who does not make this connection treats the cloud compromise and the Linux access as separate incidents.
Worked artifact: Entity mapping table for NE triage
| Cloud (UPN) | Windows (SAM) | Linux (username) | Role |
| --- | --- | --- | --- |
| j.morrison@northgateeng.com | NORTHGATE\j.morrison | j.morrison | Engineering |
| svc-dbsync@northgateeng.com | NORTHGATE\svc-dbadmin | svc-dbadmin | Service account |
| r.williams@northgateeng.com | NORTHGATE\r.williams | r.williams | Security |

Build this mapping for your environment before an incident occurs, and refer to it during triage to connect cross-environment events to the same entity. The pre-built mapping eliminates an identity resolution step that otherwise consumes 5-10 minutes of the 15-minute triage window.
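The same mapping can live in the workspace as a KQL datatable (or a Sentinel watchlist) so the lookup happens inline during triage. A sketch using the NE rows above:

```
// Illustrative inline entity map — in production this would typically be a
// Sentinel watchlist rather than a hard-coded datatable.
let EntityMap = datatable(Upn:string, SamAccount:string, LinuxUser:string, Role:string)[
    "j.morrison@northgateeng.com", @"NORTHGATE\j.morrison",  "j.morrison",  "Engineering",
    "svc-dbsync@northgateeng.com", @"NORTHGATE\svc-dbadmin", "svc-dbadmin", "Service account",
    "r.williams@northgateeng.com", @"NORTHGATE\r.williams",  "r.williams",  "Security"
];
SigninLogs
| join kind=inner EntityMap on $left.UserPrincipalName == $right.Upn
| project TimeGenerated, UserPrincipalName, SamAccount, LinuxUser, IPAddress
```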
Try it: build your entity mapping table
For your environment, identify 5 users who have accounts in multiple systems (cloud + endpoint, or endpoint + Linux, or all three). Document: their cloud UPN, their Windows SAM name, their Linux username (if applicable), and any service accounts associated with their role. This table takes 10 minutes to build and saves 10 minutes during every cross-environment triage.
Building the correlation during triage, not after
The most common correlation failure occurs when the triage responder treats cloud triage and endpoint triage as separate workflows. The cloud triage identifies an anomalous sign-in. The endpoint triage identifies a suspicious process. But the responder does not connect the two — classifying and containing the cloud compromise without checking whether the attacker moved to an endpoint, or triaging the endpoint without checking whether the attacker’s session originated from a cloud identity compromise.
The correlation must happen DURING triage, not during the investigation phase afterward. Three practical techniques:
Technique 1: IP pivot. When the cloud triage reveals an anomalous IP (185.220.101.42 in the CHAIN-HARVEST example), immediately check whether that IP appears in DeviceNetworkEvents or DeviceLogonEvents for any endpoint in the environment:
```
// Illustrative reconstruction — the IP is the CHAIN-HARVEST Tor exit node;
// table and field names follow the standard Defender advanced hunting schema.
let attackerIP = "185.220.101.42";
union
    (DeviceNetworkEvents
     | where RemoteIP == attackerIP
     | project TimeGenerated, DeviceName, Evidence = strcat("Network: ", ActionType)),
    (DeviceLogonEvents
     | where RemoteIP == attackerIP
     | project TimeGenerated, DeviceName, Evidence = strcat("Logon: ", LogonType))
| sort by TimeGenerated asc
```
If this query returns results, the attacker has connected to an endpoint from the same infrastructure used for the cloud compromise. The endpoint is now in scope for triage.
Technique 2: User pivot. When the cloud triage identifies a compromised user (j.morrison@northgateeng.com), immediately check whether that user has recent authentication events on endpoints:
```
// Illustrative reconstruction — summarise recent endpoint logons for the
// compromised account (standard Defender schema field names).
DeviceLogonEvents
| where TimeGenerated > ago(48h)
| where AccountName == "j.morrison"
| summarize Logons = count(), FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
    by DeviceName, LogonType, RemoteIP
| sort by LastSeen desc
```
Every device where the compromised user authenticated is a potential target — the attacker may have used the stolen credentials to access those devices. Prioritise devices where the logon type is “RemoteInteractive” (RDP) or “Network” (SMB/WMI) from an unusual source IP.
Technique 3: Timeline pivot. When the cloud compromise has a known timestamp (08:14 in the CHAIN-HARVEST example), check for endpoint events in the window following the cloud compromise:
```
// Illustrative reconstruction — anchor the window to the cloud compromise.
// Only the time (08:14) is given in the example; the date is a placeholder
// to substitute with your incident's full datetime.
let cloudCompromise = datetime(2025-01-01 08:14);  // placeholder date — replace
DeviceProcessEvents
| where TimeGenerated between (cloudCompromise .. cloudCompromise + 12h)
| where AccountName == "j.morrison"
| project TimeGenerated, DeviceName, FileName, ProcessCommandLine, InitiatingProcessFileName
| sort by TimeGenerated asc
```
Suspicious process creation by the compromised user within hours of the cloud compromise — especially on endpoints the user does not normally access — indicates the attacker has pivoted from cloud to endpoint.
These three pivots (IP, user, timeline) take 2-3 minutes total and immediately reveal whether the incident is cloud-only or cross-environment. The cloud-only incident (the attacker accessed cloud resources but did not pivot to an endpoint) requires only cloud containment. The cross-environment incident requires cloud AND endpoint containment — and potentially Linux containment if the endpoint pivoted further.
At NE, Rachel mandates that all cloud identity triage includes the user pivot query as a standard step — even if the cloud evidence shows no indication of endpoint involvement. The query takes 10 seconds to run and returns either zero results (confirming cloud-only scope) or device names that immediately expand the scope. The 10-second investment prevents the investigation team from discovering the endpoint compromise days later — after the attacker has had additional time to operate on the endpoint.
Why correlation urgency is increasing
Cross-environment correlation is becoming more time-critical because attackers compress their kill chains. Mandiant’s M-Trends 2026 report found that the median handoff from initial access brokers to ransomware operators dropped to 22 seconds — meaning the attacker who compromises a cloud identity can hand off to a specialised operator who begins lateral movement almost instantly. The triage responder no longer has hours between the cloud alert and the endpoint compromise. The two events may be minutes apart.
Double extortion compounds this urgency. Mandiant and Google Cloud found that 77% of 2025 ransomware intrusions involved confirmed data exfiltration — up from 57% in 2024. The attacker who moves from cloud to Windows to Linux is not just encrypting files. They are staging and exfiltrating data from every environment they touch. A triage responder who correlates the cloud and endpoint events but misses the Linux pivot may contain the encryption while the exfiltration continues from an uncontained server. The average breach cost has reached $4.88 million (IBM 2025), and organisations with mature incident response plans — including structured triage with cross-environment correlation — reduce that cost by an average of $1.76 million.
The correlation methodology from this subsection directly reduces dwell time. Every minute the triage responder spends recognising the boundary crossing between environments is a minute the investigation team does not need to spend reconstructing the attack path from scratch. The entity mapping table, the timestamp normalisation practice, and the KQL union query are operational tools that produce immediate time savings during the incident — not theoretical exercises completed after the fact.
The myth: If all logs are in Sentinel, the SIEM automatically correlates events across environments. The analyst just reads the incident.
The reality: Sentinel’s entity mapping and incident grouping help — alerts referencing the same user or IP may be grouped into a single incident. But Sentinel only correlates what it is configured to correlate. Its entity mapping requires consistent field mapping across analytics rules: if the rules for cloud events map UserPrincipalName from SigninLogs while the rules for endpoint events map AccountName from DeviceProcessEvents (and OfficeActivity rules map UserId), Sentinel may not recognise them as the same entity, and the correlation fails silently. The detection engineering team must ensure entity mapping consistency across all analytics rules (covered in Detection Engineering DE2). Without that consistency, Sentinel creates separate incidents for the same attack chain — one cloud incident for the sign-in anomaly and one endpoint incident for the process alert — and the triage responder must manually recognise that both incidents reference the same attack. And if Linux logs are not forwarded to Sentinel (a common gap for organisations that treat Linux as “not the SOC’s responsibility”), those events are invisible to Sentinel entirely. The triage responder must verify the correlation manually and add evidence from sources outside Sentinel’s visibility.
Troubleshooting
“The timestamps do not align — the Linux event appears BEFORE the Windows event that should have caused it.” Check for clock drift between the systems. If the Windows endpoint’s clock runs 3 minutes fast, a credential theft logged at 15:14 (Windows time) actually occurred at about 15:11 (real time) — before the SSH session logged at 15:12 (Linux time), restoring the expected cause-and-effect order. Document the observed drift and adjust the timeline accordingly.
“I found the same IP in both cloud and Linux logs, but the Linux system does not forward logs to Sentinel.” This is the most common correlation gap. The triage responder must SSH to the Linux system (or request the Linux admin to run commands) and manually extract the relevant log entries. The correlation is performed manually — grep auth.log for the IP, compare timestamps with the cloud evidence, and add the findings to the triage report. Module TR4 covers the specific Linux triage commands for this scenario.
The triage correlation checklist
Every triage should include a cross-environment check, regardless of whether the initial alert suggests cross-environment activity. The checklist takes 2 minutes and prevents scope underestimation:
For cloud alerts: Run the user pivot query (TR1.9 Technique 2) — check which endpoints the compromised user authenticated to in the last 48 hours. Run the IP pivot query (TR1.9 Technique 1) — check whether the attacker’s IP appears in endpoint or firewall logs. If either query returns results: the scope has expanded to endpoints. Begin endpoint triage (TR3) on the identified systems.
For endpoint alerts: Check whether the compromised endpoint’s user has cloud identity alerts in the same time window. Check whether the compromised endpoint has connections to Linux servers (Command 2 network connections — SSH connections to internal IPs). If cloud alerts exist: the attack may have started in the cloud and pivoted to the endpoint. If Linux connections exist: the attack may have pivoted from the endpoint to Linux servers.
For Linux alerts: Check whether the SSH source IP corresponds to a known Windows endpoint (cross-reference with the ARP table and DHCP records). Check whether the authenticating user has cloud identity alerts. If the SSH source is a compromised Windows endpoint: the attack is a multi-hop chain (cloud → Windows → Linux).
This bidirectional checking makes it far less likely that a cross-environment attack escapes detection, regardless of which environment generates the initial alert. The checking adds 2 minutes to the triage but prevents the investigation team from discovering uncontained systems days later.
The correlation documentation format: When cross-environment correlation is found, document it in the triage report using a simple chain notation: “Attack chain: Cloud (j.morrison AiTM 08:14) → Windows (DESKTOP-NGE042 beacon 14:30) → Linux (SRV-NGE-BRS-DB01 SSH 15:12). Containment required in all three environments.” This chain notation immediately communicates the scope to the investigation team and ensures containment is planned for every environment in the chain.