MF1.7 Smear Detection and Acquisition Quality
From MF1.1 you know smear exists — the image describes a range of moments, not one moment. From MF1.6 you know the four structural checks that confirm basic acquisition integrity. This sub goes deeper: how to detect smear in a specific image, how to assess whether the smear is bad enough to affect your analysis, and what to do when it is.
Every software-based memory capture has smear. The question isn't whether your image has it — it does — but whether the smear affects the structures you're analysing. A process list that disagrees with itself by one exited process is analytically harmless; a kernel pool where half the allocations were written during a different scheduling epoch than the other half can produce false positives on pool-scan-based analysis that waste hours of investigation time.
Most practitioners skip smear assessment entirely. They run windows.pslist, get results, and proceed. If the results look reasonable, they assume the image is clean. If the results look strange, they suspect the attacker rather than the acquisition. Both assumptions are dangerous: the first misses smear artifacts that contaminate findings; the second sends the investigation chasing acquisition noise instead of attacker activity.
This sub teaches three smear detection techniques: process-list cross-validation (comparing pslist against psscan against pstree to find smear-induced disagreements), structure-field consistency checks (examining EPROCESS fields for impossible combinations that indicate mid-update capture), and timestamp-span analysis (measuring the acquisition's time footprint across the image to estimate the smear window). Each technique takes under five minutes to run. Together they tell you whether your image is clean enough for the analysis you need to do — or whether you need to re-acquire.
Deliverable: Ability to assess any memory image's smear level using three independent detection techniques, determine whether the detected smear affects the analysis you intend to perform, and make an informed re-acquire decision when smear exceeds acceptable levels.
Technique 1 — process-list cross-validation
This section gives you the fastest smear check. If pslist and psscan disagree on which processes exist, the image has detectable smear.
Volatility 3 offers three ways to enumerate processes, built on two independent data sources. windows.pslist walks the ActiveProcessLinks doubly-linked list starting from the System process. windows.psscan scans the pool for EPROCESS structures by pool tag, independent of the linked list. windows.pstree reconstructs the parent-child tree from the same linked list pslist walks — so the genuinely independent cross-check is pslist against psscan. On a smear-free image, all three methods produce the same set of PIDs. On a smeared image, they may disagree.
The disagreements take two forms. A process appears in psscan but not in pslist — the pool allocation was captured before the process was unlinked from the active list (or after it was linked but before the list head was captured). A process appears in pslist but its EPROCESS fields are partially zeroed when examined in detail — the active-list entry was captured before the kernel started tearing the structure down, but the structure's field region was captured after teardown began.
The procedure is straightforward. Run both enumerations and save the output:
vol -f image.raw windows.pslist > /tmp/pslist.txt
vol -f image.raw windows.psscan > /tmp/psscan.txt

Compare the PID columns. Any PID that appears in only one output is a smear candidate. On a clean lab VM captured with WinPmem, expect zero to two disagreements. On a production server captured under load, expect three to eight. More than ten disagreements on a moderately busy system suggests either very heavy system load during capture or a capture that took unusually long (disk bottleneck, perhaps). The disagreement count is your first-order smear metric.
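Scripting the comparison is straightforward. A minimal sketch, assuming the default Volatility 3 text output where the PID is the first column of every data row:

def pids(path):
    # Collect first-column PIDs from a saved Volatility 3 listing,
    # skipping the banner, progress, and header rows.
    found = set()
    with open(path) as f:
        for line in f:
            tok = line.split()
            if tok and tok[0].isdigit():
                found.add(int(tok[0]))
    return found

pslist = pids("/tmp/pslist.txt")
psscan = pids("/tmp/psscan.txt")
print("in psscan only:", sorted(psscan - pslist))
print("in pslist only:", sorted(pslist - psscan))
print("disagreement count:", len(pslist ^ psscan))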
Not every disagreement is smear. psscan can find terminated processes whose pool allocations haven't been reclaimed yet — those are legitimate pool remnants, not smear artifacts. To distinguish: check the process's ExitTime field. If it has an exit time that predates the capture window, it's a pool remnant. If it has no exit time but doesn't appear in pslist, it's likely a smear artifact (the process exited during capture and the list was captured before the exit but the structure was captured after).
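Continuing the sketch above, the triage can be scripted too — under the assumption (verify it against your own output) that each psscan row prints CreateTime and, for exited processes, ExitTime as YYYY-MM-DD HH:MM:SS strings, so two timestamps on a row mean the process exited and one means it never did. The full check also compares the ExitTime against the capture window; this sketch only tests for its presence:

import re

STAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

with open("/tmp/psscan.txt") as f:
    for line in f:
        tok = line.split()
        if not (tok and tok[0].isdigit()):
            continue
        if int(tok[0]) not in psscan - pslist:
            continue  # only triage the disagreements
        # CreateTime + ExitTime both present -> the process exited.
        n = len(STAMP.findall(line))
        kind = "pool remnant" if n >= 2 else "smear candidate"
        print(f"PID {tok[0]} ({tok[2]}): {kind}")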
Technique 2 — structure-field consistency checks
This section detects smear at the field level, inside individual kernel structures. This is the technique that catches the most insidious smear — the kind that looks like attacker activity.
An EPROCESS structure has dozens of fields that describe a process's state. Some of these fields are updated atomically under lock; others are updated independently. When a capture straddles a field update, the image contains a structure where some fields reflect the pre-update state and others reflect the post-update state. The combination may be impossible under normal kernel operation — and it's exactly the kind of anomaly that trips up analysts who don't check for smear.
The most common field-smear indicator is a thread-state mismatch. A thread can be in states like Running, Waiting, Terminated, Initialized, and others. A Running thread must have an associated CPU (the Affinity and IdealProcessor fields are populated). A thread marked Running with no CPU assignment is a smear artifact — the thread-state field was captured while it showed Running, but the CPU-assignment field was captured after the thread was descheduled. An analyst unfamiliar with smear might interpret this as a rootkit hiding CPU assignment, which sends the investigation in the wrong direction.
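The check itself reduces to a simple predicate over per-thread fields. The sketch below assumes you have already pulled State, Affinity, and IdealProcessor out of each ETHREAD/KTHREAD (for example via volshell) into plain dicts — the records are hypothetical input for illustration, not the output of a stock plugin:

# KTHREAD state values on modern Windows builds:
# 0 Initialized, 1 Ready, 2 Running, 3 Standby,
# 4 Terminated, 5 Waiting, 6 Transition, 7 DeferredReady
RUNNING = 2

def thread_state_mismatches(threads):
    # A thread recorded as Running but with no CPU assignment is the
    # field-smear signature described above.
    return [t for t in threads if t["State"] == RUNNING and t["Affinity"] == 0]

# Hypothetical exported records for illustration:
sample = [{"Tid": 1234, "State": 2, "Affinity": 0, "IdealProcessor": 0}]
print(thread_state_mismatches(sample))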
Another indicator: a process with ActiveThreads = 0 but ExitTime = 0 (not terminated). A running process always has at least one thread. Zero active threads with no exit timestamp is either a kernel-level manipulation (possible but rare) or a smear artifact where the thread-count field was captured after the last thread exited but the exit-timestamp field was captured before it was written. The distinction requires examining the context: does the process appear in the active list? Does psscan find it independently? Do network connections reference it? If the process has external evidence of having been running, the zero-thread count is smear, not a finding.
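The zero-thread indicator follows the same pattern, again over hypothetical exported EPROCESS fields, with ExitTime == 0 meaning no exit timestamp was ever written:

def zero_thread_anomalies(procs):
    # No active threads but no recorded exit: impossible in normal
    # operation, so either manipulation or field smear.
    return [p for p in procs if p["ActiveThreads"] == 0 and p["ExitTime"] == 0]

sample = [{"Pid": 4132, "ActiveThreads": 0, "ExitTime": 0}]
print(zero_thread_anomalies(sample))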
Technique 3 — timestamp-span analysis
This section measures the acquisition's time footprint across the image. The wider the span, the worse the potential smear.
Many kernel structures that track time — process creation, thread creation, network connection establishment — carry a timestamp. Any timestamp that falls inside the capture window proves the kernel was still creating and updating structures while the capture tool wrote pages out. The latest such timestamp approximates when the acquisition ended (the last pages written); measured against the recorded acquisition start time, it gives the acquisition's time span — the window during which the system was running while the capture was in progress.
The procedure: run windows.pslist and examine the CreateTime column. The System process (PID 4) has the earliest create time (boot time, not acquisition time — ignore it for span measurement). Look instead at the most recently created process in the list and compare its create time against the acquisition start time from your acquisition record. If the most recent process was created during the acquisition window, the image contains structures from different moments — which is the definition of smear.
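A minimal sketch of the span measurement, assuming default Volatility 3 text output (the first timestamp on each pslist row is CreateTime); the acquisition start value is a placeholder to replace with the time from your own acquisition record:

import re
from datetime import datetime

STAMP = re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

# Placeholder -- take the real value from your acquisition record.
acquisition_start = datetime(2024, 1, 15, 10, 0, 0)

created = []
with open("/tmp/pslist.txt") as f:
    for line in f:
        m = STAMP.search(line)  # first timestamp on the row = CreateTime
        if m:
            created.append(datetime.strptime(m.group(), "%Y-%m-%d %H:%M:%S"))

during_capture = [t for t in created if t >= acquisition_start]
print("processes created during capture:", len(during_capture))
if during_capture:
    print("span inside the capture window:", max(during_capture) - acquisition_start)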
A broader method: sweep additional timestamped structures across the image — process create times from windows.pslist plus network connection establishment times from the Created column of windows.netscan — and measure the range of timestamps that falls inside the capture window. That range is the full span of kernel activity captured while the tool ran. A span under 30 seconds on a 4 GB VM indicates a fast capture with minimal smear. A span over 5 minutes on a 128 GB server indicates a long capture with significant smear potential. The span itself doesn't tell you whether the smear affects your analysis — that depends on whether the structures you care about were modified during the span — but it gives you a bound on how much smear is theoretically possible.
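Reusing STAMP and acquisition_start from the sketch above, the sweep generalises to any saved plugin output — here assuming you have also saved windows.netscan output to /tmp/netscan.txt:

def stamps_in_window(paths, start):
    # Collect every timestamp at or after the recorded acquisition start,
    # across any set of saved plugin outputs.
    hits = []
    for path in paths:
        with open(path) as f:
            for line in f:
                for m in STAMP.findall(line):
                    t = datetime.strptime(m, "%Y-%m-%d %H:%M:%S")
                    if t >= start:
                        hits.append(t)
    return hits

window = stamps_in_window(["/tmp/pslist.txt", "/tmp/netscan.txt"], acquisition_start)
if window:
    print("span inside the capture window:", max(window) - acquisition_start)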
What to do when smear is bad
This section closes the loop: you've detected smear, now what? The answer depends on what you're analysing.
Smear is a quality metric, not a pass/fail gate. The question isn't "is there smear?" (there always is, on any live capture). The question is "does the smear affect the structures I need for this investigation?"
If your investigation is looking at a specific process (its command line, its loaded DLLs, its network connections), check that specific process for field consistency. If the process's EPROCESS fields are consistent and its handles are coherent, the smear didn't affect your target and you can proceed at the confidence tier the other modifiers support. If the process shows field inconsistencies, the finding needs a smear caveat in the report and the confidence tier drops.
If your investigation requires a complete process enumeration (every process that was running, with no missed PIDs), the pslist-vs-psscan disagreement count matters more. A disagreement count of zero to two means the enumeration is reliable. A count of five or more means some processes may have been captured mid-lifecycle, and the enumeration is "best available" rather than "complete." Document the disagreement count and the specific PIDs involved.
Re-acquisition is the fix when smear is unacceptable — but only if the source system is still available and still in the relevant state. If the system has been reimaged, you work with what you have and document the smear assessment honestly. The confidence tier framework from MF0.7 handles this: a finding from a heavily smeared image caps at medium confidence because the raw-memory verification modifier is weakened. The report states the smear assessment and its impact on each finding's confidence. That's defensible. Pretending the image is clean when it isn't is not.
Guided procedure — run the three checks on your baseline image
This procedure runs the three smear detection techniques against your MF1.2 or MF1.4 baseline image. On a clean lab VM, smear should be minimal — this exercise establishes your "low smear" baseline for comparison when you analyse attack-modified captures in later modules.
Decision scenario — heavy smear, one clean target process
The situation. You've captured memory from a Northgate Engineering finance workstation during an incident. The smear assessment shows eight pslist-vs-psscan disagreements, two thread-state mismatches, and a timestamp span of 4.5 minutes. The image is 32 GB (the workstation has 32 GB RAM). Your investigation is focused on one specific process — a suspicious PowerShell execution. The PowerShell process appears in both pslist and psscan with consistent EPROCESS fields and no thread-state mismatches.
The choice. Re-acquire (the system is still running but the business wants it back by end of day), or proceed with this image and document the smear assessment alongside the findings.
The correct call. Proceed. The smear is heavy for the image overall (eight disagreements, 4.5-minute span), but the specific process you're investigating is consistent across all three checks. Your analysis is scoped to that process, not to a full process enumeration. The smear affects processes you're not investigating and doesn't affect the one you are. Document: "image-wide smear assessment: 8 pslist/psscan disagreements, 2 thread-state mismatches, 4.5-minute timestamp span. Target process (PowerShell, PID XXXX) is consistent across all three checks — present in both pslist and psscan, all EPROCESS fields consistent, no thread-state mismatches. Smear assessment does not affect findings scoped to the target process."
The operational lesson. Smear is assessed per-finding, not per-image. A heavily smeared image can still support high-confidence findings for processes whose structures are consistent. The smear assessment in the report is scoped to the finding, not to the image as a whole. Discarding a usable image because the overall smear metrics look bad is as much a mistake as ignoring smear entirely.
Myth vs reality — enumeration disagreement means a bad image
The myth. Disagreement between process enumeration methods means the image is unreliable. Either the acquisition failed or an attacker manipulated the structures. Either way, findings from the image can't be trusted.
The reality. Pslist-psscan disagreement is the expected fingerprint of live-system acquisition smear, not evidence of compromise or acquisition failure. On any software-based capture of a running system, some structures will reflect different moments because the system was running during capture. The disagreement tells you smear exists (which you already knew from MF1.1) and gives you a rough magnitude.
The critical question is whether the disagreements affect the specific structures your investigation depends on. If your target process is consistent across methods, the disagreement in other processes is analytically irrelevant to your findings. If your target process is itself inconsistent, the finding gets a smear caveat and a confidence-tier adjustment — but the image still has value for every other process that is consistent.
Discarding an entire image because of expected acquisition artifacts wastes the capture and forces re-acquisition — which may not be possible if the source system was reimaged. Assess smear per-finding, not per-image. Document honestly. Let the confidence tier framework handle the defensibility.
Try it — Run the three smear checks against your baseline image and record the results
Setup. Your Target-Win baseline image from MF1.2 or MF1.4 on the analysis workstation. Volatility 3 ready.
Task. Execute all three techniques from the Guided Procedure: pslist-vs-psscan cross-validation (count disagreements), thread-state consistency (check for Running threads without CPUs or zero-thread processes without exit times), and timestamp-span measurement (most recent process create time minus acquisition start time). Record all three metrics.
Expected result. On a clean 4 GB lab VM: zero to two disagreements, zero thread-state mismatches, span under 60 seconds. Record these as your baseline smear profile. When you capture attack-modified images in MF2 onward, you'll compare against these numbers to detect whether increased smear correlates with attack activity or just busier system state.
If your result doesn't match. If disagreements exceed two, check whether the disagreeing PIDs have exit times (pool remnants, not smear). If thread-state mismatches appear, note them but don't re-acquire — a single mismatch on a clean VM is a mild artifact. If the span exceeds 90 seconds, the capture was slower than expected — check disk I/O and consider whether background processes (Defender scan, Windows Update) were running during capture.
You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.
You've set up the lab and captured your first clean baselines.
MF0 built the three-VM lab and established the memory forensics landscape. MF1 taught acquisition with WinPmem and LiME, integrity verification, and chain of custody. From here, you execute attacks and investigate what they leave behind.
- 8 attack modules (MF2–MF9) — process injection, credential theft, fileless malware, persistence, kernel drivers, Linux rootkits, timeline construction, and a multi-stage capstone
- You run every attack yourself — from Kali against your target VMs, then capture memory and investigate your own attack's artifacts with Volatility 3
- MF9 Capstone — multi-stage chain (initial access → privilege escalation → credential theft → persistence → data staging), three checkpoint captures, complete investigation report
- The lab pack — PoC kernel driver and LKM rootkit source code, setup scripts, 21 exercises, 7 verification scripts, investigation report templates
- Cross-platform coverage — Windows and Linux memory analysis in one course, with the timeline module integrating evidence from both