TR1.13 Check My Knowledge

· Module 1 · Free


1. You arrive at a compromised Windows workstation. The user is at their desk and the system is running. In what order should you capture evidence per the volatility hierarchy?

Tier 1 first: running processes and network connections (PowerShell, 30 seconds). Tier 2: memory dump (WinPMem, 2-5 minutes). Tier 3: event logs and disk artifacts (KAPE, 3-5 minutes). This sequence preserves the most volatile evidence first — process state and network connections change in seconds, memory changes in minutes, and event logs and disk artifacts persist for days.
Start with the memory dump — it captures everything including processes and network state. Then collect event logs.
Start with KAPE — it collects everything automatically. Then do the memory dump.
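The tier ordering can be sketched as a simple sort by volatility. The tier numbers and time estimates come from the answer above; the task-list structure itself is illustrative, not part of any tool:

```python
# Order triage tasks so the most volatile evidence is captured first.
# Tiers and timings follow the volatility hierarchy in the answer.
triage_tasks = [
    {"task": "event logs and disk artifacts (KAPE)", "tier": 3, "minutes": 5},
    {"task": "memory dump (WinPMem)", "tier": 2, "minutes": 5},
    {"task": "processes and network connections (PowerShell)", "tier": 1, "minutes": 0.5},
]

collection_order = sorted(triage_tasks, key=lambda t: t["tier"])
for step, t in enumerate(collection_order, start=1):
    print(f"{step}. Tier {t['tier']}: {t['task']}")
```

Sorting on the tier key guarantees Tier 1 always runs before the memory dump, regardless of the order the tasks were listed in.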

2. An attacker's OAuth consent grant in Entra ID survives which of the following containment actions?

Password reset AND session revocation. The OAuth grant gives the malicious application its own tokens, independent of the user's password or session. Only explicitly revoking the OAuth consent (removing the enterprise application) stops the application's access.
Session revocation only — a password reset invalidates the OAuth grant because it is tied to the user's credentials.
Neither — both password reset and session revocation terminate the OAuth grant.

3. A Linux server has been compromised and you suspect the attacker loaded a rootkit as a kernel module. The output of `lsmod` shows only expected modules. Does this prove there is no rootkit?

Yes — if the rootkit were loaded, it would appear in the lsmod output because lsmod reads directly from the kernel.
No. A kernel module rootkit can hide itself from `lsmod` by manipulating the kernel's module list. The rootkit IS in the kernel but has removed its own entry from the list that `lsmod` reads. A LiME memory dump analysed with Volatility 3's `linux.lsmod` plugin bypasses this hiding technique because it reads the kernel data structures directly from memory rather than through the kernel's (compromised) reporting interface.
No — but you should check /proc/modules instead, which shows the true module list.

4. Active ransomware encryption is in progress on a file server. The triage responder wants to capture a memory dump before isolating the server. Is this the correct sequence?

Yes — the memory dump captures the ransomware binary's decryption key, which may allow file recovery.
No. Active encryption = active damage. Every minute spent dumping memory is a minute of additional files encrypted. CONTAIN FIRST: network-isolate the server to stop the encryption from spreading to network shares and prevent C2 communication. The ransomware process continues encrypting local files, but the blast radius is limited to the isolated server. AFTER isolation, capture the memory dump — the process is still running (isolation does not stop local processes), so memory evidence is still available.
It depends on how many files have been encrypted — if most are already encrypted, there is no urgency to contain.

5. You are correlating events across environments. The cloud sign-in log shows an AiTM session from 185.220.101.42 at 08:14 UTC. The Linux auth.log shows an SSH session from 10.1.1.42 at 15:12 local time (the Linux server is configured for BST, UTC+1). Are these events temporally consistent with the same attacker?

Yes, but only after time normalisation. 15:12 BST = 14:12 UTC. The cloud event at 08:14 UTC and the Linux event at 14:12 UTC are 5 hours 58 minutes apart. This is temporally consistent with the CHAIN-HARVEST extended timeline: the attacker compromises the cloud identity, pivots to the endpoint (which takes hours), then SSHes to Linux from the endpoint (10.1.1.42 is DESKTOP-NGE042's IP). The 10.1.1.42 source IP confirms the endpoint-to-Linux pivot. Time normalisation is critical — without converting BST to UTC, the events appear almost 7 hours apart and the endpoint-to-Linux pivot is misplaced on the timeline by an hour.
No — 08:14 and 15:12 are 7 hours apart, which is too long for a single attacker's session.
Yes — the IP addresses match (185.220.101.42 and 10.1.1.42 are the same network).
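The normalisation can be checked mechanically by attaching explicit UTC offsets before subtracting. A minimal sketch with Python's standard library — the timestamps come from the question; the calendar date is arbitrary since only the time of day matters here:

```python
from datetime import datetime, timedelta, timezone

# Normalise both events to UTC before comparing. BST is UTC+1.
BST = timezone(timedelta(hours=1))

cloud_signin = datetime(2026, 4, 6, 8, 14, tzinfo=timezone.utc)  # 08:14 UTC
linux_ssh = datetime(2026, 4, 6, 15, 12, tzinfo=BST)             # 15:12 BST

# Subtracting aware datetimes converts through UTC automatically.
gap = linux_ssh - cloud_signin
print(gap)  # 5:58:00
```

Comparing the naive wall-clock times instead would give 6 hours 58 minutes — an hour of error that distorts every downstream timeline decision.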

6. A Docker container running on a production Linux server shows signs of compromise. The container orchestrator has a restart policy of `always`. What is the urgency of evidence collection?

Low — the container is production-critical and should not be interrupted. Collect evidence during the next maintenance window.
HIGH. If the container crashes or is restarted (by the orchestrator, a deployment, or the attacker), the writable layer — containing all of the attacker's modifications — is destroyed. `docker diff` shows the changes NOW. After restart, the container starts from the clean image and all forensic evidence in the writable layer is gone. Capture `docker diff`, `docker inspect`, `docker logs`, and `docker cp` any suspicious files IMMEDIATELY.
Medium — container layers are stored on the host filesystem and survive container restarts.

7. Your organisation has Entra ID P2 licences. An incident is discovered 25 days after initial access. The triage responder opens the investigation. How many days of sign-in log data remain in native Entra ID retention?

5 days. Entra ID P1/P2 retains sign-in logs for 30 days natively. The incident occurred 25 days ago, so 30 minus 25 equals 5 days of remaining retention. The sign-in log entries from the first day of the incident are 25 days old and will expire in 5 days. The triage responder must snapshot the relevant sign-in entries to the case folder IMMEDIATELY because they will be permanently deleted in 5 days. If the triage takes 3 days and the investigation begins on day 4, only 1 day of native sign-in data remains. If Sentinel is ingesting sign-in logs, the data is also available in the SigninLogs table per the workspace retention configuration — but the triage responder should not ASSUME Sentinel has the data without verifying.
25 days — Entra retains logs for the life of the P2 licence.
0 days — sign-in logs are only retained for 7 days.
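The retention arithmetic is simple but worth making explicit, because every day of triage delay consumes what remains. A sketch using the figures from the answer (30-day native retention, 25-day-old incident):

```python
# Remaining native retention = retention window - incident age.
RETENTION_DAYS = 30   # Entra ID P1/P2 native sign-in log retention
incident_age_days = 25

remaining = RETENTION_DAYS - incident_age_days
print(f"{remaining} days of first-day sign-in entries remain")  # 5 days

# Each day of triage delay erodes the window further.
for triage_day in range(1, remaining + 1):
    print(f"after triage day {triage_day}: {remaining - triage_day} days left")
```

By the end of the loop the window hits zero — which is why the answer says to snapshot the relevant entries immediately rather than wait for the full investigation to start.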

8. You find /etc/ld.so.preload on a Linux server containing a path to an unknown shared library. What does this indicate?

Almost certainly a userspace rootkit. /etc/ld.so.preload forces the dynamic linker to load the specified shared library BEFORE all others, for every dynamically linked program on the system. This allows the attacker to hook any function — including the functions used by ps, ls, netstat, and other commands the triage responder relies on. The rootkit can hide processes, files, and network connections from the triage commands. Legitimate software essentially never uses /etc/ld.so.preload. Any entry in this file should be treated as a DEFINITIVE compromise indicator. The triage responder should capture a memory dump (LiME) because the hooked commands may produce unreliable output — the memory dump provides the unfiltered truth that the rootkit cannot hide from.
A performance optimisation — some applications preload libraries for faster startup.
A configuration file that lists libraries to exclude from loading.

9. During cloud triage, you run the user pivot query and discover that the compromised user (j.morrison) successfully authenticated to 3 different endpoints in the last 24 hours. What is the triage implication?

No additional action needed — the user legitimately uses multiple devices.
All 3 endpoints are now in scope for triage. The attacker who compromised j.morrison's cloud identity may have used those credentials to access any of the 3 endpoints. Each endpoint must be checked for indicators of compromise — at minimum, run Command 2 (network connections) to check for C2 connections, and Command 1 (process list) to check for suspicious processes. The scope of the incident has expanded from 1 cloud identity to 1 cloud identity plus 3 endpoints. If any endpoint shows compromise indicators, the scope expands further based on that endpoint's connections and cached credentials. This scope expansion is why the cross-environment correlation from TR1.6 must happen DURING triage — discovering the 3 endpoints a week later means the attacker has had a week of uncontained endpoint access.
Check only the endpoint the user was using at the time of the cloud compromise.
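The user pivot reduces to a set operation: from the compromised account's sign-in records, derive the unique endpoints now in scope. A sketch over hypothetical sample records — the field names and device names (other than DESKTOP-NGE042, which appears earlier in this module) are invented for illustration and do not reflect a real SigninLogs schema:

```python
# Illustrative user pivot: which endpoints did the compromised
# account authenticate to? Every unique device enters triage scope.
signins = [
    {"user": "j.morrison", "device": "DESKTOP-NGE042"},
    {"user": "j.morrison", "device": "LAPTOP-A11"},
    {"user": "j.morrison", "device": "DESKTOP-NGE042"},  # repeat sign-in
    {"user": "j.morrison", "device": "SRV-FILE01"},
    {"user": "a.other", "device": "DESKTOP-X99"},        # unrelated user
]

compromised = "j.morrison"
in_scope = sorted({s["device"] for s in signins if s["user"] == compromised})
print(f"{len(in_scope)} endpoints in scope: {in_scope}")
```

The set comprehension deduplicates repeat sign-ins, so the output is the scope list itself: three endpoints, each requiring at minimum the process-list and network-connection checks.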

10. A Linux server has journald configured with Storage=volatile. The server experiences an unplanned reboot during the investigation. What journal evidence is lost?

ALL journal evidence. When journald is configured with Storage=volatile, logs are stored in memory (/run/log/journal/), not on disk. An unplanned reboot erases all memory contents, including the journal. Every authentication event, service state change, kernel message, and application log captured by journald is permanently lost. This is why the triage responder checks journald storage mode during Linux triage (TR1.4): if Storage=volatile, the journal logs are Tier 1-2 volatile evidence that must be captured IMMEDIATELY. The triage command `journalctl --since "2026-04-06" > /IR/journal.txt` exports the current journal to a persistent file before any containment action (such as a reboot) can destroy it. Post-incident, the recommendation is always to change journald to Storage=persistent — the disk space cost is minimal and the forensic benefit is substantial.
Only the last hour of journal data — the rest was already rotated to disk.
No journal evidence is lost — journald always persists to /var/log/journal/.

11. During cross-environment correlation, you discover that the attacker's cloud IP (185.220.101.42) does NOT appear in any endpoint DeviceNetworkEvents. Does this prove the attack did not reach any endpoints?

No. The absence of the EXTERNAL attacker IP on endpoints does not prove the attack is cloud-only. The attacker may have pivoted to endpoints using INTERNAL mechanisms that do not involve the external IP: token-based access (the stolen cloud token accesses OneDrive/SharePoint, and the user syncs files to their endpoint — the attacker's payload arrives via the sync client, not via a direct connection from the external IP), phishing payload delivery (the attacker sent a malicious email to the compromised user's mailbox, and the user opened it on their endpoint — the execution occurs locally), or VPN access (the attacker used the stolen credentials to connect to the corporate VPN, and their subsequent endpoint connections originate from the VPN IP range, not the original external IP). The IP pivot is ONE correlation method. Always supplement with the user pivot (check where the compromised user authenticated) and the timeline pivot (check for endpoint events after the cloud compromise timestamp).
Yes — if the IP is not in endpoint logs, the attacker did not reach endpoints.
Probably — but check one more time with a wider time range.
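The three pivots can be expressed as three independent filters over the same endpoint events, which makes the answer's point concrete: the IP pivot returns nothing while the user and timeline pivots still flag the endpoint. The event records below are hypothetical sample data; the IPs, user, and timestamps echo the scenarios in this module:

```python
from datetime import datetime, timezone

compromise_time = datetime(2026, 4, 6, 8, 14, tzinfo=timezone.utc)
attacker_ip = "185.220.101.42"
compromised_user = "j.morrison"

# Sample endpoint network events — note the external attacker IP
# never appears; the connection is the internal pivot at 14:12 UTC.
endpoint_events = [
    {"user": "j.morrison", "remote_ip": "10.1.1.42",
     "time": datetime(2026, 4, 6, 14, 12, tzinfo=timezone.utc)},
]

ip_hits = [e for e in endpoint_events if e["remote_ip"] == attacker_ip]
user_hits = [e for e in endpoint_events if e["user"] == compromised_user]
timeline_hits = [e for e in endpoint_events if e["time"] > compromise_time]

print(len(ip_hits), len(user_hits), len(timeline_hits))  # 0 1 1
```

Relying on the IP pivot alone would declare the endpoint clean; the user and timeline pivots catch what it misses.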