1. You arrive at a compromised Windows workstation. The user is at their desk and the system is running. In what order should you capture evidence per the volatility hierarchy?
Tier 1 first: running processes and network connections (PowerShell, 30 seconds). Tier 2: memory dump (WinPMem, 2-5 minutes). Tier 3: event logs and disk artifacts (KAPE, 3-5 minutes). This sequence preserves the most volatile evidence first — process state and network connections change in seconds, memory changes in minutes, event logs persist for hours.
Start with the memory dump — it captures everything including processes and network state. Then collect event logs.
Start with KAPE — it collects everything automatically. Then do the memory dump.
Correct. Tier 1 (processes, connections) takes 30 seconds and captures the most volatile data. Starting with the memory dump (Tier 2) is tempting because it captures more data, but it takes 2-5 minutes — during which Tier 1 data may change. The 30-second Tier 1 capture is insurance: if the memory dump fails or the system reboots during the dump, you still have the process list and connection table. KAPE (Tier 3) collects data that persists for hours — no urgency.
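On a Linux host the same Tier 1 capture can be sketched as below (the Windows sequence in the answer uses PowerShell and WinPMem; the case-folder path and exact commands here are illustrative, not prescribed by the playbook):

```shell
# Tier 1 sketch: processes and connections first, because they change in seconds.
CASE=$(mktemp -d /tmp/IR.XXXXXX)                    # illustrative case folder
date -u +"%Y-%m-%dT%H:%M:%SZ" > "$CASE/capture-start.txt"   # timestamp the capture
ps auxww > "$CASE/processes.txt"                    # running processes (Tier 1)
(ss -tunap || netstat -tunap) > "$CASE/connections.txt" 2>/dev/null  # live connections (Tier 1)
echo "Tier 1 captured in $CASE - proceed to memory dump (Tier 2)"
```

The point of the sketch is the ordering: these commands finish in seconds, so running them before the memory dump costs almost nothing and insures against a failed dump.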
2. An attacker's OAuth consent grant in Entra ID survives which of the following containment actions?
Password reset AND session revocation. The OAuth grant gives the malicious application its own tokens, independent of the user's password or session. Only explicitly revoking the OAuth consent (removing the enterprise application) stops the application's access.
Session revocation only — a password reset invalidates the OAuth grant because it is tied to the user's credentials.
Neither — both password reset and session revocation terminate the OAuth grant.
Correct. OAuth consent grants are independent of the user's password and session. The grant authorises the application to access the user's data using the application's own client credentials and the consent. Resetting the user's password and revoking their session does not affect the application's authorisation. The triage responder must specifically check for and revoke malicious OAuth grants during the cloud preservation sequence (TR1.2).
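The revocation step can be scripted against Microsoft Graph. A minimal sketch, assuming you already hold a Graph access token with `DelegatedPermissionGrant.ReadWrite.All` and have identified the grant's object ID (e.g. via `GET /v1.0/oauth2PermissionGrants`); the function name is illustrative:

```shell
# Delete a delegated OAuth consent grant by its object ID (Microsoft Graph v1.0).
# This is the step that password reset and session revocation do NOT cover.
revoke_grant() {
    local grant_id="$1" token="$2"
    curl -sS -X DELETE \
        -H "Authorization: Bearer $token" \
        "https://graph.microsoft.com/v1.0/oauth2PermissionGrants/$grant_id"
}
# usage: revoke_grant "<grantObjectId>" "$GRAPH_TOKEN"
```

Deleting the grant stops the delegated access; the answer's stronger option, removing the enterprise application (its service principal), revokes everything tied to that application at once.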
3. A Linux server has been compromised and you suspect the attacker loaded a rootkit as a kernel module. The output of `lsmod` shows only expected modules. Does this prove there is no rootkit?
Yes — if the rootkit were loaded, it would appear in the lsmod output because lsmod reads directly from the kernel.
No. A kernel module rootkit can hide itself from lsmod by manipulating the kernel's module list. The rootkit IS in the kernel but has removed its own entry from the list that lsmod reads. A LiME memory dump analysed with Volatility3's linux.lsmod plugin bypasses this hiding technique because it reads the kernel data structures directly from memory rather than through the kernel's (compromised) reporting interface.
No — but you should check /proc/modules instead, which shows the true module list.
Correct. A kernel rootkit that hooks the kernel's module reporting functions can hide from both lsmod and /proc/modules — they both read from the same kernel data structure that the rootkit has modified. The only way to find the hidden module is to analyse a memory dump with an external tool (Volatility3) that reads the raw kernel memory rather than asking the compromised kernel to report its own state. This is why memory acquisition is critical even on Linux.
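The hiding mechanism can be illustrated with a toy model (plain text files, no real rootkit): the memory view is the ground truth, the hooked `lsmod` view has the rootkit's entry filtered out, and diffing the two reveals the hidden module, which is exactly what comparing Volatility3 output against live `lsmod` output achieves. The module names here are invented.

```shell
# Ground truth: modules actually resident in kernel memory (what Volatility3 sees).
printf '%s\n' ext4 overlay evil_rk | sort > /tmp/memory_view.txt
# Hooked interface: the rootkit filters its own entry before lsmod sees the list.
grep -v '^evil_rk$' /tmp/memory_view.txt > /tmp/lsmod_view.txt
# Lines only in the memory view = modules hidden from the live system.
comm -13 /tmp/lsmod_view.txt /tmp/memory_view.txt
```

In a real case, the equivalent diff is between `linux.lsmod` output from the LiME dump and the `lsmod` output captured on the live host.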
4. Active ransomware encryption is in progress on a file server. The triage responder wants to capture a memory dump before isolating the server. Is this the correct sequence?
Yes — the memory dump captures the ransomware binary's decryption key, which may allow file recovery.
No. Active encryption = active damage. Every minute spent dumping memory is a minute of additional files encrypted. CONTAIN FIRST: network-isolate the server to stop the encryption from spreading to network shares and prevent C2 communication. The ransomware process continues encrypting local files, but the blast radius is limited to the isolated server. AFTER isolation, capture the memory dump — the process is still running (isolation does not stop local processes), so memory evidence is still available.
It depends on how many files have been encrypted — if most are already encrypted, there is no urgency to contain.
Correct. The preservation decision tree (TR1.5): active damage → contain first. Network isolation stops the encryption from spreading while preserving the running state (processes, memory, disk). The memory dump can be captured after isolation because isolation does not power off the server — it only blocks network traffic. The 5-minute memory dump during active encryption could mean 5 minutes of additional encrypted files — potentially hundreds of engineering documents at NE's file modification rate.
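The cost of dumping first is simple arithmetic. A sketch with a hypothetical modification rate (NE's real rate would come from file-server metrics):

```shell
FILES_PER_MIN=60   # hypothetical: files the ransomware encrypts per minute
DUMP_MINUTES=5     # worst-case memory dump duration from the answer
ADDITIONAL=$((FILES_PER_MIN * DUMP_MINUTES))
echo "$ADDITIONAL additional files encrypted if the dump runs before isolation"
```

At this assumed rate the dump-first sequence sacrifices roughly 300 files, consistent with the "hundreds of engineering documents" estimate above.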
5. You are correlating events across environments. The cloud sign-in log shows an AiTM session from 185.220.101.42 at 08:14 UTC. The Linux auth.log shows an SSH session from 10.1.1.42 at 15:12 local time (the Linux server is configured for BST, UTC+1). Are these events temporally consistent with the same attacker?
Yes, but only after time normalisation. 15:12 BST = 14:12 UTC. The cloud event at 08:14 UTC and the Linux event at 14:12 UTC are 5 hours 58 minutes apart. This is temporally consistent with the CHAIN-HARVEST extended timeline: the attacker compromises the cloud identity, pivots to the endpoint (takes hours), then SSH to Linux from the endpoint (10.1.1.42 is DESKTOP-NGE042's IP). The 10.1.1.42 IP confirms the endpoint-to-Linux pivot. Time normalisation is critical: without converting BST to UTC, the gap would appear to be nearly 7 hours and could be misread as too long for a single intrusion.
No — 08:14 and 15:12 are 7 hours apart, which is too long for a single attacker's session.
Yes — the IP addresses match (185.220.101.42 and 10.1.1.42 are the same network).
Correct. Always normalise timestamps to UTC before correlation. The 5:58 gap between cloud compromise and Linux pivot is consistent with a multi-phase attack where the endpoint phase (credential theft, lateral movement preparation) takes several hours. The IP addresses are NOT the same network — 185.220.101.42 is the external Tor exit node and 10.1.1.42 is an internal endpoint IP. The IP correlation here is: the Linux SSH source (10.1.1.42) is the compromised Windows endpoint, not the external attacker IP. Two different correlation data points: timestamp (temporal consistency) and IP (internal pivot source).
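The normalisation step can be done mechanically rather than by hand. A sketch assuming GNU `date` with tzdata installed (the date 2026-04-06 is taken from the exercise timeline and falls inside British Summer Time):

```shell
# Convert the Linux server's local timestamp (Europe/London, BST) to UTC
# before correlating it with the cloud sign-in log.
LOCAL='2026-04-06 15:12'
UTC_TIME=$(TZ=UTC date -d "TZ=\"Europe/London\" $LOCAL" +%H:%M)
echo "$LOCAL Europe/London = $UTC_TIME UTC"
```

Letting the timezone database resolve BST/GMT avoids the classic off-by-one-hour error around DST transitions.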
6. A Docker container running on a production Linux server shows signs of compromise. The container orchestrator has a restart policy of `always`. What is the urgency of evidence collection?
Low — the container is production-critical and should not be interrupted. Collect evidence during the next maintenance window.
HIGH. If the container crashes or is restarted (by the orchestrator, a deployment, or the attacker), the writable layer — containing all of the attacker's modifications — is destroyed. `docker diff` shows the changes NOW. After restart, the container starts from the clean image and all forensic evidence in the writable layer is gone. Capture `docker diff`, `docker inspect`, `docker logs`, and `docker cp` any suspicious files IMMEDIATELY.
Medium — container layers are stored on the host filesystem and survive container restarts.
Correct. Container writable layers are ephemeral, which makes this the single most time-critical evidence capture in containerised environments. Unlike a Windows endpoint, where a reboot destroys memory but preserves disk artifacts (event logs, registry, prefetch), a container restart destroys BOTH memory AND the writable layer: the new container starts from the original image with a clean layer, and all attacker modifications (dropped files, modified configurations, installed tools) exist only in the current container's writable layer. The `always` restart policy compounds the urgency: any crash, error, or failed health check, including failures caused by the attacker's own actions, triggers an automatic restart that wipes that evidence. Treat container evidence collection with the same urgency as memory collection on a physical server: capture it NOW because it may not exist in 30 seconds. `docker diff` is the most time-critical command because it shows everything the attacker changed; capture it before anything triggers a restart.
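The destruction mechanics can be demonstrated without Docker at all, using plain directories as a toy model of an image and a writable layer (all paths and filenames here are invented):

```shell
# Toy model: image = read-only base, layer = per-container writable layer.
IMAGE=$(mktemp -d); LAYER=$(mktemp -d)
echo 'shipped config' > "$IMAGE/app.conf"   # baked into the image
echo 'attacker tool'  > "$LAYER/xmrig"      # exists ONLY in the writable layer
ls "$LAYER"                                 # evidence currently present
rm -rf "$LAYER"; LAYER=$(mktemp -d)         # "restart": same image, fresh empty layer
ls -A "$LAYER" | wc -l                      # 0 entries: the evidence no longer exists
```

The image survives the "restart" untouched, but everything written since container start is gone, which is why `docker diff` must run before anything can trigger a restart.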
7. Your organisation has Entra ID P2 licences. An incident is discovered 25 days after initial access. The triage responder opens the investigation. How many days of sign-in log data remain in native Entra ID retention?
5 days. Entra ID P1/P2 retains sign-in logs for 30 days natively. The incident occurred 25 days ago, so 30 minus 25 equals 5 days of remaining retention. The sign-in log entries from the first day of the incident are 25 days old and will expire in 5 days. The triage responder must snapshot the relevant sign-in entries to the case folder IMMEDIATELY because they will be permanently deleted in 5 days. If the triage takes 3 days and the investigation begins on day 4, only 1 day of native sign-in data remains. If Sentinel is ingesting sign-in logs, the data is also available in the SigninLogs table per the workspace retention configuration — but the triage responder should not ASSUME Sentinel has the data without verifying.
25 days — Entra retains logs for the life of the P2 licence.
0 days — sign-in logs are only retained for 7 days.
Correct. Cloud evidence retention is calendar-based. The triage responder must calculate the remaining retention for every evidence source during triage. Five days of remaining retention means the evidence snapshot is URGENT — not because the attacker is active, but because the evidence itself will disappear on schedule regardless of the investigation's progress.
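The retention arithmetic generalises to any calendar-based evidence source; a sketch:

```shell
RETENTION_DAYS=30   # native Entra ID P1/P2 sign-in log retention
AGE_DAYS=25         # days elapsed since the earliest relevant sign-in
REMAINING=$((RETENTION_DAYS - AGE_DAYS))
echo "$REMAINING day(s) before the earliest sign-in entries expire"
```

Running this calculation for every evidence source during triage turns "snapshot urgency" from a gut feeling into a number.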
8. You find /etc/ld.so.preload on a Linux server containing a path to an unknown shared library. What does this indicate?
Almost certainly a userspace rootkit. /etc/ld.so.preload forces the dynamic linker to load the specified shared library BEFORE all others, for every dynamically linked program on the system. This allows the attacker to hook any function — including the functions used by ps, ls, netstat, and other commands the triage responder relies on. The rootkit can hide processes, files, and network connections from the triage commands. Legitimate software essentially never uses /etc/ld.so.preload. Any entry in this file should be treated as a DEFINITIVE compromise indicator. The triage responder should capture a memory dump (LiME) because the hooked commands may produce unreliable output — the memory dump provides the unfiltered truth that the rootkit cannot hide from.
A performance optimisation — some applications preload libraries for faster startup.
A configuration file that lists libraries to exclude from loading.
Correct. /etc/ld.so.preload is the Linux equivalent of a kernel rootkit in terms of impact — it hooks every dynamically linked program's function calls. The key triage implication: if this file has an entry, the output of ps, ls, ss, and other triage commands may be unreliable because the rootkit's hooked library is intercepting their system calls and filtering the results. Memory analysis (Volatility3 on a LiME dump) provides the reliable view because it reads kernel data structures directly rather than through the compromised dynamic linker.
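The check itself is one line of triage scripting. A sketch using a helper function (the function name and the simulated entry are illustrative; the real check targets /etc/ld.so.preload):

```shell
# Flag any non-empty ld.so.preload-style file: clean systems leave it absent or empty.
check_preload() {
    if [ -s "$1" ]; then
        echo "ALERT: $1 has entries - treat as a compromise indicator"
    else
        echo "clean"
    fi
}
check_preload /etc/ld.so.preload                        # the real triage check
echo '/usr/local/lib/libhide.so' > /tmp/ld.so.preload   # simulated rootkit entry
check_preload /tmp/ld.so.preload                        # demonstrates the ALERT path
```

Because the preload rootkit can hook `ls` and `cat` themselves, run the real check from a statically linked binary or against a memory dump when an entry is suspected.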
9. During cloud triage, you run the user pivot query and discover that the compromised user (j.morrison) successfully authenticated to 3 different endpoints in the last 24 hours. What is the triage implication?
No additional action needed — the user legitimately uses multiple devices.
All 3 endpoints are now in scope for triage. The attacker who compromised j.morrison's cloud identity may have used those credentials to access any of the 3 endpoints. Each endpoint must be checked for indicators of compromise — at minimum, run Command 2 (network connections) to check for C2 connections, and Command 1 (process list) to check for suspicious processes. The scope of the incident has expanded from 1 cloud identity to 1 cloud identity plus 3 endpoints. If any endpoint shows compromise indicators, the scope expands further based on that endpoint's connections and cached credentials. This scope expansion is why the cross-environment correlation from TR1.6 must happen DURING triage — discovering the 3 endpoints a week later means the attacker has had a week of uncontained endpoint access.
Check only the endpoint the user was using at the time of the cloud compromise.
Correct. Cross-environment correlation during triage is essential because it reveals the SCOPE of the incident. A cloud compromise that remains cloud-only is containable with cloud actions alone. A cloud compromise that has expanded to 3 endpoints requires cloud containment PLUS endpoint triage and potential containment on each endpoint. The user pivot query from TR1.6 takes 10 seconds and may triple the incident scope — better to know during triage than to discover during investigation.
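The user pivot reduces to "list distinct endpoints where this user authenticated". A toy sketch over a simulated CSV export (DESKTOP-NGE042 comes from the scenario; the other hostnames, users, and times are invented; in production this is a query against the sign-in or device logon tables):

```shell
cat > /tmp/auth.csv <<'EOF'
timestamp,user,endpoint
2026-04-06T09:01Z,j.morrison,DESKTOP-NGE042
2026-04-06T11:30Z,j.morrison,DESKTOP-NGE017
2026-04-06T13:45Z,a.smith,DESKTOP-NGE003
2026-04-06T16:02Z,j.morrison,SRV-FILE01
EOF
# Distinct endpoints the compromised user touched = the new triage scope.
awk -F, '$2 == "j.morrison" { print $3 }' /tmp/auth.csv | sort -u
```

Each hostname the pivot returns becomes a triage target in its own right.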
10. A Linux server has journald configured with Storage=volatile. The server experiences an unplanned reboot during the investigation. What journal evidence is lost?
ALL journal evidence. When journald is configured with Storage=volatile, logs are stored in memory (/run/log/journal/), not on disk. An unplanned reboot erases all memory contents, including the journal. Every authentication event, service state change, kernel message, and application log captured by journald is permanently lost. This is why the triage responder checks journald storage mode during Linux triage (TR1.4): if Storage=volatile, the journal logs are Tier 1-2 volatile evidence that must be captured IMMEDIATELY. The triage command `journalctl --since "2026-04-06" > /IR/journal.txt` exports the current journal to a persistent file before any containment action (such as a reboot) can destroy it. Post-incident, the recommendation is always to change journald to Storage=persistent — the disk space cost is minimal and the forensic benefit is substantial.
Only the last hour of journal data — the rest was already rotated to disk.
No journal evidence is lost — journald always persists to /var/log/journal/.
Correct. The journald Storage setting is the single most important configuration for Linux evidence persistence. Volatile mode means all journal data lives in RAM — every reboot is a complete evidence wipe. Many Linux distributions ship with volatile as the default. The triage responder who does not check this setting may assume the journal survives reboot (as it does with persistent storage) and delay capture — only to discover after the reboot that all evidence is gone.
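The storage-mode check is cheap enough to run in every Linux triage. A quick first approximation (with `Storage=auto`, the common default, the journal persists only when /var/log/journal exists; confirm against the `Storage=` line in /etc/systemd/journald.conf):

```shell
# Determine whether the journal will survive a reboot.
if [ -d /var/log/journal ]; then
    JOURNAL_MODE=persistent   # on disk: survives reboot
else
    JOURNAL_MODE=volatile     # RAM only (/run/log/journal): export before any reboot,
                              # e.g. journalctl --since "2026-04-06" > /IR/journal.txt
fi
echo "journald storage: $JOURNAL_MODE"
```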
11. During cross-environment correlation, you discover that the attacker's cloud IP (185.220.101.42) does NOT appear in any endpoint DeviceNetworkEvents. Does this prove the attack did not reach any endpoints?
No. The absence of the EXTERNAL attacker IP on endpoints does not prove the attack is cloud-only. The attacker may have pivoted to endpoints using INTERNAL mechanisms that never involve the external IP: token-based access (the stolen cloud token accesses OneDrive/SharePoint and the payload reaches the endpoint via the user's sync client, not via a direct connection from the external IP); phishing payload delivery (the attacker sends a malicious email to the compromised mailbox, the user opens it on their endpoint, and execution occurs locally); or VPN access (the attacker connects to the corporate VPN with the stolen credentials, so subsequent endpoint connections originate from the VPN IP range, not the original external IP). The IP pivot is ONE correlation method. Always supplement with the user pivot (check where the compromised user authenticated) and the timeline pivot (check for endpoint events after the cloud compromise timestamp).
Yes — if the IP is not in endpoint logs, the attacker did not reach endpoints.
Probably — but check one more time with a wider time range.
Correct. The IP pivot is a POSITIVE indicator (if found, the connection is confirmed) but not a NEGATIVE indicator (if not found, the connection is not excluded), because the attacker may pivot through mechanisms that never expose their original IP. This is why TR1.6 teaches THREE correlation methods, IP, user, and timeline, and requires all three to be checked: any single method can produce a false negative, while the combination provides the most reliable scope assessment. The triage responder who relies on only one correlation method risks underestimating the incident scope, discovering additional compromised systems days later when the investigation team runs the queries the triage responder skipped.
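The false-negative behaviour is easy to demonstrate on a toy endpoint log (simulated data; the device and user come from the scenario, the rows themselves are invented): the IP pivot returns zero hits even though the user pivot correctly flags the endpoint.

```shell
cat > /tmp/devnet.csv <<'EOF'
timestamp,device,user,remote_ip
2026-04-06T09:30Z,DESKTOP-NGE042,j.morrison,10.1.1.7
2026-04-06T10:05Z,DESKTOP-NGE042,j.morrison,172.16.0.9
EOF
grep -c '185\.220\.101\.42' /tmp/devnet.csv || true   # IP pivot: 0 hits (false negative)
grep -c 'j\.morrison' /tmp/devnet.csv                 # user pivot: endpoint IS in scope
```

The attacker never touched the endpoint from their external IP, yet the compromised identity did, which only the user pivot reveals.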