In this module
MF1.6 Acquisition Verification and Integrity
From MF1.2-1.5 you know how to capture memory from Windows, Linux, and the hypervisor, and how to collect the pagefile and swap alongside the RAM image. Each sub included a hash step. This sub explains why that hash step matters, what it actually proves, how it differs from disk-image hashing, and how to build the chain-of-custody documentation that makes every subsequent analysis finding defensible.
Hashing a memory image is not the same thing as hashing a disk image, and the chain-of-custody documentation for memory evidence serves a different function than the chain-of-custody for a hard drive in an evidence bag. Practitioners who treat the two as interchangeable produce acquisition records that don't answer the questions opposing counsel actually asks — and they discover this gap during cross-examination, not during the investigation.
The fundamental difference: a disk image hash proves that the copy is bit-identical to the source at the time of imaging. A memory image hash proves that the copy is bit-identical to the file WinPmem (or LiME, or the hypervisor) produced — but it does not prove that the file is a bit-identical copy of RAM at any single moment, because smear means it never was. The hash proves file integrity, not source fidelity. Both matter, but they prove different things, and the acquisition record must describe what each proof covers.
This sub builds the acquisition record — the document that ties together the who, what, when, where, how, and hash chain for every memory image you capture. MF0.8 established the legal standards (ACPO, CPR 35, Daubert) that the record must satisfy. This sub makes those standards operational: a concrete template you fill in for every acquisition, with the specific fields that make each standard's requirements traceable.
Deliverable:
- A completed acquisition record template that you can reuse for every capture in this course and in production
- An understanding of what the SHA-256 hash actually proves for memory images versus disk images
- The four structural checks that confirm an image is structurally sound (not just hash-matching)
- The documentation discipline that makes the entire chain from capture to analysis to reporting defensible
What the hash actually proves for memory images
This section corrects a misconception that causes practitioners to overclaim in reports. The hash proves file integrity, not source fidelity — and the distinction matters under cross-examination.
When you run the hash command immediately after WinPmem finishes:
```
certutil -hashfile image.raw SHA256
```

The hash proves one thing: the file on disk at that moment has this specific byte sequence. When you copy the file to the analysis workstation and re-hash, a matching hash proves the copy is byte-identical to the original file.
The chain of hashes proves that the file you're analysing is the same file the tool produced.
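The two hash steps — at source, then at destination — can be scripted in any language; a minimal Python sketch is below. The file paths are placeholders, and streaming in chunks matters because a memory image is often larger than the workstation's own RAM.

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so a multi-GB image never loads into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage (paths are placeholders for your evidence store):
#   source_hash = sha256_file("image.raw")            # at source, right after capture
#   dest_hash   = sha256_file("/evidence/image.raw")  # at destination, after transfer
#   assert source_hash == dest_hash, "transfer altered the file — investigate before analysis"
```

A match between the two digests proves exactly what the section above says: the analysis copy is byte-identical to the file the tool produced, nothing more.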
What the hash does not prove is that the file is a faithful representation of RAM at any specific moment. MF1.1 established why: smear means the image describes a range of moments, not one moment.
Two consecutive captures of the same system produce different .raw files with different hashes — not because either capture failed, but because memory changed between them. The hash of a memory image is not the equivalent of the hash of a disk image, where two images of the same unchanged disk produce the same hash.
This distinction matters in reports. "The SHA-256 hash confirms the integrity of the memory image" is correct. "The SHA-256 hash confirms the image is a faithful copy of the target's RAM" is an overclaim — it's a faithful copy of what the tool captured, which is a smear-affected read of RAM. The acquisition record should state: "SHA-256 hash calculated at source confirms file integrity of the acquisition output. The image represents the state of physical memory during the acquisition window [start time] to [end time], subject to acquisition smear inherent in live-system capture." That phrasing is defensible. The shorter version is not.
The four structural checks
This section goes beyond hashing. A structurally sound image is one that both hashes correctly and parses correctly — the two checks are independent.
A hash match tells you the file wasn't modified during transfer. It doesn't tell you the capture completed correctly. A partial capture (WinPmem killed mid-write, disk ran out of space, LiME module crashed during acquisition) produces a valid file with a valid hash — it's just shorter than it should be and truncated at a random physical offset. The hash of a truncated image is perfectly correct for what the file contains; it just doesn't contain all of RAM.
Four structural checks confirm acquisition quality.

Size check — the image file size must equal the target's configured physical RAM. For a 4 GB VM, the file is 4,294,967,296 bytes. For a 128 GB server, the file is 137,438,953,472 bytes. A smaller file is a partial capture. A larger file is unusual but possible with some acquisition formats that include metadata headers (LiME format includes segment headers; raw format does not).
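The expected size is plain arithmetic — configured RAM in GiB times 1024³ bytes. A one-function sketch (assumes raw format with no metadata headers):

```python
def expected_raw_size(ram_gib):
    """Expected raw-image size in bytes for a target with ram_gib GiB of configured RAM."""
    return ram_gib * 1024**3

print(expected_raw_size(4))    # 4294967296   — the 4 GB VM
print(expected_raw_size(128))  # 137438953472 — the 128 GB server
```

Compare the result against `os.path.getsize()` on the image; any shortfall means a partial capture.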
Profile identification — Volatility 3 must identify the operating system. windows.info for Windows images, linux.banner for Linux. If Volatility 3 can't identify the OS, either the image is corrupt, the format is wrong, or the symbol pack is missing for the target's OS build. Profile identification failure on a correctly-sized image usually means a symbol-pack issue, not an acquisition issue.
Kernel address validation — the Kernel Base reported by windows.info should start with 0xfffff8 on 64-bit Windows. A value of 0x00000000 means the kernel wasn't located. An implausible value (e.g., a user-space address) means the image is structurally damaged at the kernel offset.
Process list sanity — windows.pslist (or linux.pslist) should return a reasonable number of processes. A clean Windows 11 boot produces 80-150 processes. Zero processes means the kernel structures weren't resolved. An unusually small number (under 20) on a system that should have more may indicate structural damage or a partial capture that truncated the active process list.
All four checks are quick — under two minutes for a 4 GB image. Run them on every capture before proceeding to analysis. A capture that fails any check should be re-acquired if the source system is still available.
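The three Volatility-dependent checks lend themselves to a small wrapper script. This is a sketch, not a production tool: it assumes the Volatility 3 `vol` entry point is on PATH and that the `windows.info` and `windows.pslist` output layouts match current releases — verify both against your installed version before relying on the parsing.

```python
import re
import subprocess
import sys

def vol_output(image, plugin):
    """Run a Volatility 3 plugin and return its stdout (assumes the `vol` CLI)."""
    return subprocess.run(["vol", "-f", image, plugin],
                          capture_output=True, text=True).stdout

def kernel_base_ok(info_output):
    """Kernel address check: a 64-bit Windows kernel base should start 0xfffff8."""
    m = re.search(r"Kernel Base\s+(0x[0-9a-fA-F]+)", info_output)
    return bool(m) and m.group(1).lower().startswith("0xfffff8")

def process_count(pslist_output):
    """Process-list sanity: count rows that begin with numeric PID/PPID columns."""
    return sum(1 for line in pslist_output.splitlines()
               if re.match(r"\s*\d+\s+\d+\s", line))

if __name__ == "__main__":
    image = sys.argv[1]
    info = vol_output(image, "windows.info")
    print("profile identified:", "Windows" in info)   # profile identification check
    print("kernel base ok:", kernel_base_ok(info))    # kernel address validation
    n = process_count(vol_output(image, "windows.pslist"))
    print("process count:", n, "(expect roughly 80-150 on a clean Windows 11 boot)")
```

Run it against every capture, record the four results in the acquisition record's post-capture fields, and re-acquire on any failure while the source is still available.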
The acquisition record — a concrete template
This section provides the template you'll use for every capture. It's not a theoretical framework — it's a form with specific fields that satisfy ACPO, CPR 35, and Daubert requirements simultaneously.
The acquisition record is a single document per captured image. Every field is completed at the time of acquisition, not reconstructed later. The record lives alongside the image in the evidence store and is referenced in any report that cites findings from the image.
Header fields. Case reference (if assigned). Date and time of acquisition start (UTC). Date and time of acquisition end (UTC). Operator name and role. Authorisation reference (who authorised the capture — ticket number, verbal authorisation from the SOC lead with their name, written request from legal). Trigger condition (which of MF0.1's four categories applies: active attack, post-incident review, proactive hunting, or baseline capture).
Target fields. Hostname. IP address at time of acquisition. Asset tag or serial number (if physical). Operating system and version (as reported by the system, not as assumed). RAM size (configured, not used). Hypervisor (if virtualised — product, version, host). Environmental notes (was the system under active attack during capture? Were alerts firing? Was the user logged in? Any concurrent activity that may affect the image?).
Method fields. Acquisition tool and version (e.g., "WinPmem mini x64 RC2, build date 2024-01-12"). Acquisition method rationale (why this tool and not an alternative — reference the fidelity/footprint/feasibility analysis from MF1.1). Output format (raw, LiME, ELF, crashdump). Output file path and filename. Additional files collected (pagefile, hibernation file, swap — each with its own hash).
Integrity fields. SHA-256 hash calculated at source immediately after capture. SHA-256 hash calculated at destination after transfer. Hash match confirmed (yes/no — if no, note the discrepancy and resolution). Evidence store location (where the original is stored). Working copy location (where the analysis copy is stored). Write-protection method (how the original is protected from modification — read-only permissions, write-blocked storage, sealed evidence bag for physical media).
Post-capture fields. Structural verification results (size check, profile identification, kernel address, process list count). Anomalies noted (anything unusual about the capture — partial completion, error messages, interrupted transfer, environmental interference). Signature of operator.
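One way to keep the record machine-checkable as well as human-readable is a small structure whose fields mirror the template above. A sketch — the field names are illustrative, not a standard, and a plain-text form in the evidence store serves equally well:

```python
from dataclasses import dataclass, field

@dataclass
class AcquisitionRecord:
    # Header fields
    case_reference: str
    start_utc: str
    end_utc: str
    operator: str
    authorisation: str
    trigger: str                 # active attack / post-incident / hunting / baseline
    # Target fields
    hostname: str
    os_version: str              # as reported by the system, not as assumed
    ram_bytes: int               # configured, not in use
    # Method fields
    tool: str                    # e.g. tool name and version string
    output_format: str           # raw / LiME / ELF / crashdump
    # Integrity fields
    sha256_source: str
    sha256_destination: str
    # Post-capture fields
    structural_checks: dict = field(default_factory=dict)
    anomalies: str = ""

    def hash_match(self) -> bool:
        """The yes/no integrity field: source and destination hashes agree."""
        return self.sha256_source == self.sha256_destination
```

Serialised to JSON alongside the image, a record like this can be diffed, validated, and cited from the report without retyping.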
This procedure uses the capture you performed in MF1.2. You'll fill in the acquisition record template for that image. By the end, you have a completed record that you can reuse as a template for every subsequent capture.
The situation. You captured memory from a Northgate Engineering server at 03:15 during an active incident. In the urgency, you forgot to hash the image on the source system before transferring it to the analysis workstation. You have the image on the analysis workstation and you can hash it there. The source server has since been reimaged by the business. You cannot re-acquire.
The choice. Hash the image on the analysis workstation and use that hash as the chain-of-custody anchor. Or note the missing source-side hash in the acquisition record and proceed with the analysis, flagging the gap.
The correct call. Both — and the documentation matters more than the hash. Hash the image on the analysis workstation now (it's still useful for proving the file hasn't changed since it arrived at the workstation). But document in the acquisition record that the source-side hash was not calculated, that the image was transferred to the analysis workstation without hash verification at the source, and that the single hash covers workstation-to-report integrity only, not source-to-workstation integrity. This honest documentation is defensible: "the operator acknowledges a procedural gap and has documented what the available hash does and doesn't prove." The alternative — pretending the workstation hash is a source hash — is not defensible if the gap is discovered during review.
The operational lesson. Missing a procedural step is recoverable through documentation. Hiding a procedural gap is not. Opposing counsel's strongest attack isn't "you missed the hash" — it's "you missed the hash and didn't tell us." The acquisition record is where you tell them.
The myth. The SHA-256 hash is the ultimate proof of evidence integrity. If the hash at source matches the hash at destination, the evidence is forensically sound and the chain of custody is intact. No further verification is needed.
The reality. The hash proves file integrity — the bytes didn't change during transfer. It does not prove acquisition quality. A partial capture (half the RAM, truncated mid-write) has a perfectly valid hash that matches at both ends. A capture from the wrong system (you accidentally captured a staging server instead of the target) has a perfectly valid hash chain. A capture where the pagefile was needed but not collected has a perfectly valid RAM-image hash — it's just missing evidence.
File integrity is one of four verification requirements. The other three — size validation (does the file size match the target's RAM?), profile identification (can Volatility 3 identify the OS?), and structural sanity (does the process list look reasonable?) — catch the failure modes that hashing misses. An acquisition record that says "hash verified" but omits the structural checks leaves three classes of acquisition failure undetected. All four checks together take under two minutes. There's no reason to skip any of them.
Try it — Complete an acquisition record for your Target-Win baseline and verify all four structural checks
Setup. Your Target-Win baseline image from MF1.2 on the analysis workstation. The hashes you recorded during MF1.2. A blank document for the acquisition record.
Task. Fill in every field of the acquisition record template from this sub's "concrete template" section for your MF1.2 baseline capture. Then run the four structural checks and record the results in the post-capture fields.
Expected result. A completed acquisition record with all fields populated. All four structural checks pass: size = 4,294,967,296 bytes, profile = Windows 11, Kernel Base starts with 0xfffff8, process count 80-150. Both hashes present and matching. The document is a reusable template — save it and copy it for every subsequent capture in the course.
If your result doesn't match. If you can't fill in the authorisation field, use "self-authorised baseline capture for course lab" — this is legitimate for lab work. If any structural check fails, re-acquire before proceeding; the baseline must be structurally sound. If you don't have the source-side hash from MF1.2, document the gap honestly per this sub's Decision Point — then make a habit of hashing at source for every future capture.
You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.
You've set up the lab and captured your first clean baselines.
MF0 built the three-VM lab and established the memory forensics landscape. MF1 taught acquisition with WinPmem and LiME, integrity verification, and chain of custody. From here, you execute attacks and investigate what they leave behind.
- 8 attack modules (MF2–MF9) — process injection, credential theft, fileless malware, persistence, kernel drivers, Linux rootkits, timeline construction, and a multi-stage capstone
- You run every attack yourself — from Kali against your target VMs, then capture memory and investigate your own attack's artifacts with Volatility 3
- MF9 Capstone — multi-stage chain (initial access → privilege escalation → credential theft → persistence → data staging), three checkpoint captures, complete investigation report
- The lab pack — PoC kernel driver and LKM rootkit source code, setup scripts, 21 exercises, 7 verification scripts, investigation report templates
- Cross-platform coverage — Windows and Linux memory analysis in one course, with the timeline module integrating evidence from both