A Northgate Engineering workstation triggered a Defender alert for suspicious PowerShell activity at 14:22. The SOC lead requests memory capture. The workstation is a VMware vSphere VM running Windows 11 with HVCI enabled and Tamper Protection enforced by Intune. You have local admin credentials on the guest and vCenter access with full snapshot permissions. Which acquisition method should you use, and why?
WinPmem from inside the guest, because it's the fastest method and you have admin credentials. HVCI and Tamper Protection are unlikely to interfere with a signed acquisition tool.
Hypervisor-based acquisition via vCenter memory-inclusive snapshot. You have the permissions, the VM is on vSphere, and this method produces zero guest footprint and zero smear. WinPmem would likely fail because HVCI blocks the driver load and Tamper Protection is policy-enforced (can't toggle locally). The hypervisor method bypasses both constraints entirely and produces a higher-fidelity image.
WinPmem after temporarily disabling HVCI via bcdedit and rebooting the workstation. The reboot clears the security constraint, and the capture can proceed normally.
AVML, because it operates in userspace and doesn't require a kernel driver, avoiding the HVCI constraint. AVML works on Windows the same way it works on Linux.
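The hypervisor-based option above can be sketched with VMware's govc CLI, whose `snapshot.create` subcommand takes `-m` to include guest memory. The VM and snapshot names below are illustrative assumptions, not values from the scenario.

```python
# Hypothetical sketch: memory-inclusive snapshot via VMware's govc CLI.
# VM name and snapshot name are invented for illustration.
import shlex

def memory_snapshot_cmd(vm_name: str, snap_name: str) -> list[str]:
    """Build a govc command that snapshots the VM including guest memory (-m)."""
    return ["govc", "snapshot.create", "-vm", vm_name, "-m=true", snap_name]

cmd = memory_snapshot_cmd("NGE-WS-042", "ir-capture-1422utc")
print(shlex.join(cmd))  # run against vCenter with GOVC_URL/credentials configured
```

The resulting snapshot's `.vmem` file can then be pulled from the datastore for analysis, with no tool ever touching the guest.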
You captured a 128 GB memory image from a production server using WinPmem. The capture took 14 minutes. When you run windows.pslist and windows.psscan, you find 12 PIDs that appear in psscan but not in pslist. Six of those PIDs have ExitTime values that predate the capture window. The other six have no ExitTime. What do the two groups represent?
All twelve are rootkit-hidden processes. A rootkit used DKOM to unlink them from the active process list, but the pool allocations remain. The six with exit times are processes the rootkit terminated after hiding, and the six without are still running but hidden.
The six with exit times are pool remnants — processes that terminated normally before the capture and whose pool allocations haven't been reclaimed yet. They're legitimate kernel memory management artifacts, not evidence of hiding. The six without exit times are likely acquisition smear artifacts — processes that exited during the 14-minute capture window. The active process list was captured before they exited (so pslist missed them), but their pool allocations were captured after they exited (so psscan found them with partially torn EPROCESS fields). On a 128 GB capture taking 14 minutes, six smear-related disagreements is within expected range for a busy server.
All twelve are acquisition smear artifacts from the 14-minute capture window. The six with exit times exited early in the capture; the six without exited late. None are forensically significant.
The twelve represent a mix of rootkit activity and smear, and there's no way to distinguish them without additional analysis. The pslist/psscan comparison alone is insufficient to determine the cause.
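The pslist/psscan triage described in the options reduces to a set comparison keyed on ExitTime. A minimal sketch, using invented sample PIDs rather than real plugin output:

```python
# Minimal sketch of the pslist/psscan triage described above.
# PIDs and ExitTime values are invented sample data, not from a real capture.
from datetime import datetime

capture_start = datetime(2026, 4, 19, 14, 22)

pslist_pids = {4, 88, 400, 652}
# psscan output: pid -> ExitTime (None if the EPROCESS shows no exit time)
psscan = {
    4: None, 88: None, 400: None, 652: None,
    1200: datetime(2026, 4, 19, 13, 50),   # exited before capture: pool remnant
    1344: None,                            # no ExitTime: candidate smear artifact
}

scan_only = {pid: t for pid, t in psscan.items() if pid not in pslist_pids}
pool_remnants = [pid for pid, t in scan_only.items() if t and t < capture_start]
smear_candidates = [pid for pid, t in scan_only.items() if t is None]

print("pool remnants:", pool_remnants)        # terminated pre-capture, pool not yet reclaimed
print("smear candidates:", smear_candidates)  # likely exited during the capture window
```

Anything in `scan_only` with an ExitTime *after* capture start, or with torn EPROCESS fields, would warrant the closer look the last option calls for.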
You captured memory from a Northgate Engineering finance workstation at 15:30. You forgot to hash the image on the source workstation before transferring it to the analysis workstation via SCP. The source workstation was reimaged at 16:00 and is no longer available. You have the image on the analysis workstation. What should you do?
The image is forensically compromised because the chain of custody is broken. Without a source-side hash, you can't prove the image wasn't modified during transfer. Discard it and note the acquisition failure in the incident record.
Hash the image on the analysis workstation now — it still proves the file hasn't changed since it arrived at the workstation. Document in the acquisition record that the source-side hash was not calculated, the image was transferred without source-hash verification, and the available hash covers workstation-to-report integrity only. Proceed with analysis. The honest documentation of the procedural gap is defensible; pretending the gap doesn't exist is not. Discarding the only image of a reimaged system wastes evidence that may still produce valid findings.
Hash the image on the analysis workstation and record it as if it were the source-side hash. The image arrived via SCP, which uses encryption and integrity checking, so the transfer was effectively hash-verified. There's no need to note the gap.
Contact the SOC lead and request a second capture from the workstation. Memory evidence should always be captured with dual hashing — one at source, one at destination. Without both, the evidence is inadmissible.
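The "hash now, document the gap" approach can be sketched as a hashing routine plus an acquisition record that states exactly what the hash covers. File path and record fields here are illustrative assumptions.

```python
# Sketch: hash the image on the analysis workstation and record the hash's
# scope honestly. Paths and record fields are illustrative.
import hashlib
import json
import os
import tempfile

def sha256_file(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# throwaway file standing in for the memory image
fd, img = tempfile.mkstemp()
os.write(fd, b"memory image bytes")
os.close(fd)

record = {
    "image": img,
    "sha256_analysis_workstation": sha256_file(img),
    "source_side_hash": None,  # not calculated -- documented, not hidden
    "note": "Hash covers workstation-to-report integrity only; "
            "source-side hash was not calculated before transfer.",
}
print(json.dumps(record, indent=2))
os.remove(img)
```

The explicit `None` plus the scoping note is what makes the record defensible: the gap is on the record, not papered over.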
You need to capture memory from a Northgate Engineering RHEL 8.9 database server that triggered anomalous outbound traffic alerts. The server is a physical host (no hypervisor). Your IR toolkit has pre-built LiME modules for Ubuntu 22.04 kernels. The server runs kernel 4.18.0-513.24.1.el8_9.x86_64. You have root SSH access and AVML on your toolkit USB. What is the correct acquisition approach?
Compile LiME on the server by installing kernel-devel and gcc, then load the compiled module. This is the highest-fidelity acquisition method and should be used whenever possible.
Use AVML. Your pre-built LiME modules are for Ubuntu 22.04 — they won't load on a RHEL 8.9 kernel due to version mismatch. Compiling LiME on the production database server requires installing development packages (kernel-devel, gcc), which modifies the server's package state during an active incident and increases the forensic footprint. AVML is a single static binary that reads /proc/kcore without kernel-module compilation or version coupling. The fidelity tradeoff (userspace read vs kernel-level read) is acceptable given the feasibility and footprint advantages.
Use the Ubuntu 22.04 LiME module from your toolkit. LiME modules are cross-distribution compatible — the same .ko file works on Ubuntu and RHEL as long as the kernel version is similar (both are 5.x-series).
Skip memory acquisition entirely. Without a matching LiME module, memory forensics isn't possible on this server. Focus on disk and network evidence instead.
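The LiME-vs-AVML decision hinges on an exact kernel-release match, since a LiME module is compiled against one specific kernel. A sketch, assuming the common `lime-<uname -r>.ko` naming convention; the toolkit contents are invented:

```python
# Sketch: decide between a pre-built LiME module and AVML by matching the
# target's kernel release against the toolkit. Naming convention
# (lime-<uname -r>.ko) is an assumption; toolkit contents are invented.
def pick_acquisition(kernel_release: str, toolkit_modules: list[str]) -> str:
    wanted = f"lime-{kernel_release}.ko"
    if wanted in toolkit_modules:
        return f"LiME ({wanted})"           # exact kernel match: module will load
    return "AVML (/proc/kcore, no module)"  # no match: static userspace fallback

toolkit = ["lime-5.15.0-91-generic.ko"]     # built for an Ubuntu 22.04 kernel
print(pick_acquisition("4.18.0-513.24.1.el8_9.x86_64", toolkit))
```

Anything short of an exact match fails at module load, which is why the fallback, not on-box compilation, is the right move mid-incident.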
After capturing memory from a Northgate Engineering workstation during an incident, your analysis reveals a process with a 256 KB private RWX allocation that contains only zeroed bytes. The process is present in both pslist and psscan with consistent EPROCESS fields. Sentinel logs show the process had active outbound connections to a known C2 IP during the investigation window. What should you report?
The process had a private RWX region that was empty at capture time. No memory-based evidence is available for this process. The C2 connection is noted from Sentinel but the memory analysis is inconclusive.
The process contained a 256 KB private RWX allocation with zeroed content, consistent with deliberate memory wiping prior to or during capture. The structural metadata (EPROCESS fields, pslist/psscan consistency) confirms the process existed and was active. Sentinel corroborates with C2 connection data. The finding is: evidence of process execution with active C2 communication and deliberate memory wiping — the wipe itself is an indicator of anti-forensic intent. Check the pagefile for pre-wipe content at the same virtual address range. The zeroed region is a finding, not a dead end.
The zeroed region is a smear artifact from the live capture. The pages were being written to during the capture and the tool read zeros because the page was being modified at the exact moment WinPmem read it. This is expected acquisition behavior on a busy system.
The process is likely legitimate. Private RWX allocations can be used by just-in-time compilers (JIT) in browsers and .NET applications. The zeroed content means the JIT code was garbage-collected before capture. The C2 connection is a coincidence — investigate the destination IP separately.
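The "zeroed region as a finding" check is mechanically simple: every byte in the allocation is zero. A sketch with synthetic region bytes; real analysis would read the range from the image:

```python
# Sketch: flag a private RWX region whose content is entirely zero, as in the
# finding above. The region bytes here are synthetic stand-ins.
def is_wiped(region: bytes) -> bool:
    """True if the region is non-empty and every byte is zero."""
    return len(region) > 0 and region.count(0) == len(region)

rwx_region = bytes(256 * 1024)  # 256 KB of zeroed bytes, like the allocation above
print(is_wiped(rwx_region))     # a wiped region is a finding, not a dead end
```

A genuinely unused-but-committed region can also read as zeros, which is why the corroboration (RWX protection, C2 telemetry, pagefile check) carries the finding, not the zeros alone.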
You're writing the acquisition record for a WinPmem capture taken during an incident. The report will state: "The SHA-256 hash confirms that the memory image is a forensically sound copy of the target system's RAM at 14:22 UTC on 19 April 2026." Is this claim defensible?
Yes — the SHA-256 hash is the gold standard for forensic integrity. If the hash matches at both source and destination, the image is a verified forensic copy of RAM.
No — the claim conflates file integrity with source fidelity. The SHA-256 hash proves the file wasn't modified after capture; it doesn't prove the file is a bit-identical copy of RAM at any single moment. WinPmem's live capture introduces smear, meaning the image describes RAM during the acquisition window (e.g., 14:22 to 14:29), not at 14:22 precisely. The defensible claim is: "The SHA-256 hash confirms file integrity of the acquisition output. The image reflects the state of the target's RAM during the acquisition window of approximately 14:22-14:29 UTC, subject to acquisition smear inherent in live-system capture."
Partially — the claim is correct for the hash portion but should include a disclaimer about smear being possible. Adding "subject to normal acquisition artifacts" to the end of the sentence makes it defensible.
No — the correct claim would reference the source-side and destination-side hashes separately. "The SHA-256 hash" is ambiguous and doesn't specify which hash is being cited.
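The window-based wording can be generated from the capture start time and duration, which keeps reports from hard-coding an instant that live capture can't actually support. Times below are the ones from the question; the truncated-hash display is a formatting choice, not a requirement.

```python
# Sketch: derive the defensible claim, tying the image to an acquisition
# *window* rather than a single instant. Times match the question's scenario.
from datetime import datetime, timedelta

def acquisition_claim(start: datetime, duration_min: int, sha256: str) -> str:
    end = start + timedelta(minutes=duration_min)
    return (f"SHA-256 {sha256[:12]}... confirms file integrity of the "
            f"acquisition output. The image reflects RAM during the window "
            f"{start:%H:%M}-{end:%H:%M} UTC, subject to acquisition smear.")

print(acquisition_claim(datetime(2026, 4, 19, 14, 22), 7, "deadbeef" * 8))
```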
A Northgate Engineering workstation has been running for eight days under normal office load (Outlook, Chrome, ERP client). You capture memory and want to know whether collecting the pagefile adds investigative value. The workstation has 16 GB RAM and an 8 GB pagefile. Which statement is most accurate?
Collect the pagefile. Eight days of runtime under office load with 16 GB RAM means significant paging activity. LSASS credential material, browser session data, email buffers, and application heap pages may have been evicted to the pagefile and not paged back in. Volatility 3's --swap flag can resolve these paged-out regions during analysis, surfacing evidence the RAM capture alone misses. The collection cost is 10-15 minutes (VSS shadow copy). Skip only if the system is about to be reimaged in the next few minutes and you genuinely can't fit the collection.
Skip the pagefile. With 16 GB RAM on a standard office workstation, memory pressure is low and paging is minimal. The pagefile mostly contains idle background service pages, not forensically relevant data. Collection is not worth the time.
Collect the pagefile only if the investigation involves credential theft. For other investigation types (malware analysis, data exfiltration), pagefile data is too fragmented to produce actionable findings.
Collect the pagefile but analyse it separately from the RAM image using string searches and carving tools. Volatility 3 cannot use pagefile data during plugin-based analysis; the --swap flag is for Linux swap only.
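A pagefile-aware analysis run can be sketched as a command builder. The `--swap` option name follows the wording in the first option above; the exact flag spelling may differ across Volatility 3 releases (check `vol -h` on your version), and the file paths are invented:

```python
# Sketch: build a Volatility 3 invocation that includes the collected pagefile.
# The --swap flag name follows the module text; verify it against your
# Volatility 3 version. Paths are invented examples.
def vol_cmd(image: str, pagefile: str, plugin: str) -> list[str]:
    return ["vol", "-f", image, "--swap", pagefile, plugin]

print(" ".join(vol_cmd("ws.raw", "pagefile.sys", "windows.pslist")))
```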
You're preparing to start MF2 (Process Injection). Your Target-Win baseline was captured two weeks ago. Since then, you've used Target-Win to test WinPmem, browse the web, and install a text editor. The MF1-clean-baseline snapshot is still available. What should you do before starting MF2's attack?
Proceed with the current VM state. The baseline image from two weeks ago is the reference — it doesn't matter that the VM has drifted. The comparison will show the attack changes plus some noise from your testing, which you can filter out manually.
Revert Target-Win to the MF1-clean-baseline snapshot before running the attack. The comparison between your baseline image and the attack-modified capture only works cleanly if the VM is in the exact state the baseline was captured from. Two weeks of drift (installed software, web browsing history, additional processes) adds noise to the comparison that's indistinguishable from attack artifacts without manual review. Revert, confirm the VM matches the baseline state, then run the MF2 attack.
Capture a new baseline from the current VM state and use that as the MF2 reference. The old baseline is stale — two weeks of Windows updates and your testing have changed the system. A fresh baseline is more accurate.
Start MF2 without reverting but capture memory before and after the attack. The "before" capture from the current state serves as the baseline, making the two-week-old baseline unnecessary.
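The revert step can be sketched with govc's `snapshot.revert` subcommand, assuming a vSphere-hosted lab; the VM and snapshot names match the module text:

```python
# Sketch: revert Target-Win to the MF1-clean-baseline snapshot before the MF2
# attack, via VMware's govc CLI (assumed available in the lab).
def revert_cmd(vm: str, snapshot: str) -> list[str]:
    return ["govc", "snapshot.revert", "-vm", vm, snapshot]

print(" ".join(revert_cmd("Target-Win", "MF1-clean-baseline")))
```

After the revert, confirm the VM boots to the baseline state (no editor, no browsing history) before launching the attack, so the only delta in the comparison is the attack itself.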
You've set up the lab and captured your first clean baselines.
MF0 built the three-VM lab and established the memory forensics landscape. MF1 taught acquisition with WinPmem and LiME, integrity verification, and chain of custody. From here, you execute attacks and investigate what they leave behind.
8 attack modules (MF2–MF9) — process injection, credential theft, fileless malware, persistence, kernel drivers, Linux rootkits, timeline construction, and a multi-stage capstone
You run every attack yourself from Kali against your target VMs, then capture memory and investigate your own attack's artifacts with Volatility 3