MF1.7 Smear Detection and Acquisition Quality

6 hours · Module 1 · Free
What you already know

From MF1.1 you know smear exists — the image describes a range of moments, not one moment. From MF1.6 you know the four structural checks that confirm basic acquisition integrity. This sub goes deeper: how to detect smear in a specific image, how to assess whether the smear is bad enough to affect your analysis, and what to do when it is.

Operational Objective

Every software-based memory capture has smear. The question isn't whether your image has it — it does — but whether the smear affects the structures you're analysing. A process list that disagrees with itself by one exited process is analytically harmless; a kernel pool where half the allocations were written during a different scheduling epoch than the other half can produce false positives on pool-scan-based analysis that waste hours of investigation time.

Most practitioners skip smear assessment entirely. They run windows.pslist, get results, and proceed. If the results look reasonable, they assume the image is clean. If the results look strange, they suspect the attacker rather than the acquisition. Both assumptions are dangerous: the first misses smear artifacts that contaminate findings; the second sends the investigation chasing acquisition noise instead of attacker activity.

This sub teaches three smear detection techniques: process-list cross-validation (comparing pslist against psscan against pstree to find smear-induced disagreements), structure-field consistency checks (examining EPROCESS fields for impossible combinations that indicate mid-update capture), and timestamp-span analysis (measuring the acquisition's time footprint across the image to estimate the smear window). Each technique takes under five minutes to run. Together they tell you whether your image is clean enough for the analysis you need to do — or whether you need to re-acquire.

Deliverable: Ability to assess any memory image's smear level using three independent detection techniques, determine whether the detected smear affects the analysis you intend to perform, and make an informed re-acquire decision when smear exceeds acceptable levels.

Estimated completion: 40 minutes
Three smear detection techniques:
  • Process-list cross-validation — pslist vs psscan vs pstree; disagreements = smear (1–2 diffs: normal, 5+ diffs: heavy smear); time: ~2 minutes.
  • Structure-field consistency — EPROCESS field checks; impossible combinations = mid-update capture; thread-state mismatches; time: ~3 minutes.
  • Timestamp-span analysis — earliest vs latest timestamp in kernel structures; span = smear window (<30 s: clean, >5 min: heavy); time: ~2 minutes.
Figure 1.7.1 — Each technique detects a different class of smear. Use all three on any image where analysis results look suspicious or where the capture took longer than expected.

Technique 1 — process-list cross-validation

This section gives you the fastest smear check. If pslist and psscan disagree on which processes exist, the image has detectable smear.

Volatility 3 offers three ways of enumerating processes. windows.pslist walks the ActiveProcessLinks doubly-linked list starting from the System process. windows.psscan scans the pool for EPROCESS structures by pool tag, independent of the linked list. windows.pstree reconstructs the parent-child tree from the same linked list that pslist walks — so the genuinely independent comparison is pslist against psscan. On a smear-free image, all three methods produce the same set of PIDs. On a smeared image, they may disagree.

The disagreements take two forms. A process appears in psscan but not in pslist — the pool allocation was captured before the process was unlinked from the active list (or after it was linked but before the list head was captured). A process appears in pslist but its EPROCESS fields are partially zeroed when examined in detail — the active-list entry was captured before the kernel started tearing the structure down, but the structure's field region was captured after teardown began.

The procedure is straightforward. Run both enumerations and save the output:

vol -f image.raw windows.pslist > /tmp/pslist.txt
vol -f image.raw windows.psscan > /tmp/psscan.txt

Compare the PID columns. Any PID that appears in only one output is a smear candidate. On a clean lab VM captured with WinPmem, expect zero to two disagreements. On a production server captured under load, expect three to eight. More than ten disagreements on a moderately busy system suggests either very heavy system load during capture or a capture that took unusually long (disk bottleneck, perhaps). The disagreement count is your first-order smear metric.
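The comparison itself is a set difference over the PID columns. A minimal sketch, not part of Volatility: it assumes the saved outputs use the default table layout with the PID as the first whitespace-separated column — adjust the field index if your output differs.

```python
def extract_pids(lines):
    """Collect the PID column (assumed to be the first field) from
    saved Volatility 3 table output, skipping headers and banners."""
    pids = set()
    for line in lines:
        fields = line.split()
        if fields and fields[0].isdigit():
            pids.add(int(fields[0]))
    return pids

def smear_candidates(pslist_lines, psscan_lines):
    """PIDs present in exactly one enumeration are smear candidates."""
    return extract_pids(pslist_lines) ^ extract_pids(psscan_lines)

# Hypothetical excerpts of the two saved outputs:
pslist_out = ["PID   PPID  ImageFileName",
              "4     0     System",
              "656   4     services.exe"]
psscan_out = pslist_out + ["3412  656   notepad.exe"]
print(sorted(smear_candidates(pslist_out, psscan_out)))  # → [3412]
```

In practice you would pass `open('/tmp/pslist.txt').readlines()` and the psscan equivalent; the disagreement count — `len(...)` of the result — is the first-order smear metric.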

Not every disagreement is smear. psscan can find terminated processes whose pool allocations haven't been reclaimed yet — those are legitimate pool remnants, not smear artifacts. To distinguish: check the process's ExitTime field. If it has an exit time that predates the capture window, it's a pool remnant. If it has no exit time but doesn't appear in pslist, it's likely a smear artifact — the list region was captured after the process was unlinked, but the EPROCESS pool region was captured before the ExitTime field was written.

Technique 2 — structure-field consistency checks

This section detects smear at the field level, inside individual kernel structures. This is the technique that catches the most insidious smear — the kind that looks like attacker activity.

An EPROCESS structure has dozens of fields that describe a process's state. Some of these fields are updated atomically under lock; others are updated independently. When a capture straddles a field update, the image contains a structure where some fields reflect the pre-update state and others reflect the post-update state. The combination may be impossible under normal kernel operation — and it's exactly the kind of anomaly that trips up analysts who don't check for smear.

The most common field-smear indicator is a thread-state mismatch. A thread can be in states like Running, Waiting, Terminated, Initialized, and others. A Running thread must have an associated CPU (the Affinity and IdealProcessor fields are populated). A thread marked Running with no CPU assignment is a smear artifact — the thread-state field was captured while it showed Running, but the CPU-assignment field was captured after the thread was descheduled. An analyst unfamiliar with smear might interpret this as a rootkit hiding CPU assignment, which sends the investigation in the wrong direction.

Another indicator: a process with ActiveThreads = 0 but ExitTime = 0 (not terminated). A running process always has at least one thread. Zero active threads with no exit timestamp is either a kernel-level manipulation (possible but rare) or a smear artifact where the thread-count field was captured after the last thread exited but the exit-timestamp field was captured before it was written. The distinction requires examining the context: does the process appear in the active list? Does psscan find it independently? Do network connections reference it? If the process has external evidence of having been running, the zero-thread count is smear, not a finding.
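Both field-level indicators reduce to simple predicates once the plugin output is parsed into rows. A hedged sketch — the dict keys used here (`state`, `ideal_cpu`, `threads`, `exit_time`) are illustrative names for values you extract yourself, not actual Volatility column names:

```python
def running_without_cpu(threads):
    """Indicator 1: a thread reported Running but with no CPU
    assignment — its state and CPU fields straddle the capture."""
    return [t["tid"] for t in threads
            if t["state"] == "Running" and t["ideal_cpu"] is None]

def zero_threads_no_exit(processes):
    """Indicator 2: a process with zero active threads but no exit
    timestamp — thread count and ExitTime straddle the capture."""
    return [p["pid"] for p in processes
            if p["threads"] == 0 and p["exit_time"] is None]

procs = [
    {"pid": 4,    "threads": 150, "exit_time": None},                  # normal
    {"pid": 2216, "threads": 0,   "exit_time": "2024-03-01 09:58:12"}, # terminated cleanly
    {"pid": 5100, "threads": 0,   "exit_time": None},                  # smear indicator
]
print(zero_threads_no_exit(procs))  # → [5100]
```

A PID flagged this way still needs the external-evidence check from the paragraph above — active list, psscan, network connections — before you label it smear rather than finding.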

Extended context — KPROCESS and KTHREAD field-smear patterns for advanced analysis

At the kernel level, EPROCESS embeds a KPROCESS, which contains the thread list head and scheduling state. KTHREAD structures linked from the KPROCESS contain per-thread scheduling, priority, and quantum information. Smear at the KTHREAD level can produce quantum-remaining values that exceed the maximum quantum for the thread's priority class, wait-reason codes that don't match the thread's state, and stack-base addresses that point into freed pool regions. These patterns are subtle and only matter when the investigation involves kernel-level analysis (MF6 covers this in depth). For most MF1-level work, the process-list cross-validation and thread-state mismatch checks are sufficient.

Technique 3 — timestamp-span analysis

This section measures the acquisition's time footprint across the image. The wider the span, the worse the potential smear.

Every kernel structure that tracks time — process creation, thread creation, network connection establishment, handle creation — carries a timestamp. Among the timestamps that fall within the capture window, the earliest approximates when the acquisition started (the first pages written) and the latest approximates when it ended (the last pages written). The difference is the acquisition's time span — the window during which the system was running while the capture was in progress.

The procedure: run windows.pslist and examine the CreateTime column. The System process (PID 4) has the earliest create time (boot time, not acquisition time — ignore it for span measurement). Look instead at the most recently created process in the list and compare its create time against the acquisition start time from your acquisition record. If the most recent process was created during the acquisition window, the image contains structures from different moments — which is the definition of smear.

A more precise method: widen the timestamp sample beyond process creation. windows.netscan reports a Created time for each network connection, and thread create times add further data points. The range across all of these timestamps gives you the full span of kernel activity captured in the image. A span under 30 seconds on a 4 GB VM indicates a fast capture with minimal smear. A span over 5 minutes on a 128 GB server indicates a long capture with significant smear potential. The span itself doesn't tell you whether the smear affects your analysis — that depends on whether the structures you care about were modified during the span — but it gives you a bound on how much smear is theoretically possible.
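The span calculation itself is trivial once the timestamps are collected; the only subtlety is discarding timestamps that predate the capture (boot-era create times). A minimal sketch, with the acquisition start time taken from your acquisition record:

```python
from datetime import datetime

def timestamp_span(timestamps, acq_start):
    """Span of kernel timestamps inside the capture window.
    Boot-era timestamps (before acq_start) are excluded; the
    remainder bound the theoretical smear window."""
    in_window = [t for t in timestamps if t >= acq_start]
    if not in_window:
        return None  # nothing was timestamped during capture
    return max(in_window) - min(in_window)

fmt = "%Y-%m-%d %H:%M:%S"
acq = datetime.strptime("2024-03-01 10:00:00", fmt)  # from the acquisition record
stamps = [datetime.strptime(s, fmt) for s in (
    "2024-02-28 09:12:44",   # boot-era create time — ignored
    "2024-03-01 10:00:05",
    "2024-03-01 10:00:41")]
print(timestamp_span(stamps, acq))  # → 0:00:36
```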

What to do when smear is bad

This section closes the loop: you've detected smear, now what? The answer depends on what you're analysing.

Smear is a quality metric, not a pass/fail gate. The question isn't "is there smear?" (there always is, on any live capture). The question is "does the smear affect the structures I need for this investigation?"

If your investigation is looking at a specific process (its command line, its loaded DLLs, its network connections), check that specific process for field consistency. If the process's EPROCESS fields are consistent and its handles are coherent, the smear didn't affect your target and you can proceed at the confidence tier the other modifiers support. If the process shows field inconsistencies, the finding needs a smear caveat in the report and the confidence tier drops.

If your investigation requires a complete process enumeration (every process that was running, with no missed PIDs), the pslist-vs-psscan disagreement count matters more. A disagreement count of zero to two means the enumeration is reliable. A count of five or more means some processes may have been captured mid-lifecycle, and the enumeration is "best available" rather than "complete." Document the disagreement count and the specific PIDs involved.

Re-acquisition is the fix when smear is unacceptable — but only if the source system is still available and still in the relevant state. If the system has been reimaged, you work with what you have and document the smear assessment honestly. The confidence tier framework from MF0.7 handles this: a finding from a heavily smeared image caps at medium confidence because the raw-memory verification modifier is weakened. The report states the smear assessment and its impact on each finding's confidence. That's defensible. Pretending the image is clean when it isn't is not.

Guided Procedure — Assess smear in your Target-Win baseline capture

This procedure runs the three smear detection techniques against your MF1.2 or MF1.4 baseline image. On a clean lab VM, smear should be minimal — this exercise establishes your "low smear" baseline for comparison when you analyse attack-modified captures in later modules.

Step 1 — Process-list cross-validation. Run `vol -f target-win-baseline.raw windows.pslist > /tmp/pslist.txt` and `vol -f target-win-baseline.raw windows.psscan > /tmp/psscan.txt`. Extract the PID columns from both outputs. Count PIDs that appear in only one output.
Expected output: Zero to two disagreements on a clean lab VM. Most disagreements on a clean system are terminated system processes whose pool allocations haven't been reclaimed (check `ExitTime` to confirm).
If it fails: More than five disagreements on a clean VM → the capture happened while the system was busier than expected (background updates, Defender scan, etc.). Not a problem for baseline purposes but worth noting. If `psscan` returns dramatically more PIDs than `pslist`, many terminated processes are in the pool — normal on a system that's been running for a while.
Step 2 — Thread-state consistency check. Run `vol -f target-win-baseline.raw windows.threads` and look for threads in `Running` state with no CPU assignment, or processes with `ActiveThreads = 0` and no `ExitTime`. These are the two primary field-smear indicators.
Expected output: No thread-state mismatches on a clean lab VM. All `Running` threads have CPU assignments. All processes with zero active threads have exit timestamps. This is your "no field smear" reference point.
If it fails: A thread-state mismatch on a clean VM is a mild smear artifact — note it but don't re-acquire. If multiple mismatches appear, the capture was taken during a system load spike. Consider re-capturing from a quieter VM state for a cleaner baseline.
Step 3 — Timestamp-span measurement. From the `pslist` output, find the most recently created process (highest `CreateTime` that falls within the capture window). Compare against the acquisition start time from your MF1.6 acquisition record. The difference is the timestamp span.
Expected output: A span under 30 seconds for a 4 GB VM captured to SSD. Under 90 seconds for a 4 GB VM captured to spinning disk. The span is the theoretical maximum smear window — actual smear is usually less because not every structure was modified during the span.
If it fails: A span over 2 minutes on a 4 GB VM suggests either very slow disk I/O or significant system activity during capture. Check the output drive's speed and whether background processes were running heavy workloads during acquisition.
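The three metrics from the procedure can be rolled into one rough verdict. The thresholds below are the heuristic cut-offs this sub uses for a small lab VM, not hard rules — scale them up for larger captures:

```python
def smear_verdict(disagreements, mismatches, span_seconds):
    """Classify overall image smear from the three metrics.
    Thresholds are heuristics for a ~4 GB lab VM."""
    if disagreements <= 2 and mismatches == 0 and span_seconds < 30:
        return "low"       # clean-baseline territory
    if disagreements >= 5 or span_seconds > 300:
        return "heavy"     # assess per-finding before trusting enumerations
    return "moderate"

print(smear_verdict(1, 0, 12))    # → low
print(smear_verdict(8, 2, 270))   # → heavy
```

Even a "heavy" verdict is image-wide, not per-finding: a target process that is consistent across all three checks can still support its findings.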
Decision Point

The situation. You've captured memory from a Northgate Engineering finance workstation during an incident. The smear assessment shows eight pslist-vs-psscan disagreements, two thread-state mismatches, and a timestamp span of 4.5 minutes. The image is 32 GB (the workstation has 32 GB RAM). Your investigation is focused on one specific process — a suspicious PowerShell execution. The PowerShell process appears in both pslist and psscan with consistent EPROCESS fields and no thread-state mismatches.

The choice. Re-acquire (the system is still running but the business wants it back by end of day), or proceed with this image and document the smear assessment alongside the findings.

The correct call. Proceed. The smear is heavy for the image overall (eight disagreements, 4.5-minute span), but the specific process you're investigating is consistent across all three checks. Your analysis is scoped to that process, not to a full process enumeration. The smear affects processes you're not investigating and doesn't affect the one you are. Document: "image-wide smear assessment: 8 pslist/psscan disagreements, 2 thread-state mismatches, 4.5-minute timestamp span. Target process (PowerShell, PID XXXX) is consistent across all three checks — present in both pslist and psscan, all EPROCESS fields consistent, no thread-state mismatches. Smear assessment does not affect findings scoped to the target process."

The operational lesson. Smear is assessed per-finding, not per-image. A heavily smeared image can still support high-confidence findings for processes whose structures are consistent. The smear assessment in the report is scoped to the finding, not to the image as a whole. Discarding a usable image because the overall smear metrics look bad is as much a mistake as ignoring smear entirely.

Compliance Myth: "If pslist and psscan disagree, the image is compromised and can't be used"

The myth. Disagreement between process enumeration methods means the image is unreliable. Either the acquisition failed or an attacker manipulated the structures. Either way, findings from the image can't be trusted.

The reality. Pslist-psscan disagreement is the expected fingerprint of live-system acquisition smear, not evidence of compromise or acquisition failure. On any software-based capture of a running system, some structures will reflect different moments because the system was running during capture. The disagreement tells you smear exists (which you already knew from MF1.1) and gives you a rough magnitude.

The critical question is whether the disagreements affect the specific structures your investigation depends on. If your target process is consistent across methods, the disagreement in other processes is analytically irrelevant to your findings. If your target process is itself inconsistent, the finding gets a smear caveat and a confidence-tier adjustment — but the image still has value for every other process that is consistent.

Discarding an entire image because of expected acquisition artifacts wastes the capture and forces re-acquisition — which may not be possible if the source system was reimaged. Assess smear per-finding, not per-image. Document honestly. Let the confidence tier framework handle the defensibility.

Next

MF1.8 — Anti-Acquisition Techniques. You've covered what goes right. MF1.8 covers what goes wrong when the attacker doesn't want you to capture memory: malware that shuts itself down when it detects acquisition tools or a virtualised environment, memory wiping that destroys evidence before you can read it, and the operational responses when acquisition fails or the image is deliberately degraded.

Try it — Run the three smear checks against your baseline image and record the results

Setup. Your Target-Win baseline image from MF1.2 or MF1.4 on the analysis workstation. Volatility 3 ready.

Task. Execute all three techniques from the Guided Procedure: pslist-vs-psscan cross-validation (count disagreements), thread-state consistency (check for Running threads without CPUs or zero-thread processes without exit times), and timestamp-span measurement (most recent process create time minus acquisition start time). Record all three metrics.

Expected result. On a clean 4 GB lab VM: zero to two disagreements, zero thread-state mismatches, span under 60 seconds. Record these as your baseline smear profile. When you capture attack-modified images in MF2 onward, you'll compare against these numbers to detect whether increased smear correlates with attack activity or just busier system state.

If your result doesn't match. If disagreements exceed two, check whether the disagreeing PIDs have exit times (pool remnants, not smear). If thread-state mismatches appear, note them but don't re-acquire — a single mismatch on a clean VM is a mild artifact. If the span exceeds 90 seconds, the capture was slower than expected — check disk I/O and consider whether background processes (Defender scan, Windows Update) were running during capture.

Checkpoint — before moving on

You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.

1. Name the three smear detection techniques and state what each one measures. (§ Techniques 1, 2, and 3)
2. Given an image with eight pslist/psscan disagreements but a target process that's consistent across all checks, state whether you'd re-acquire or proceed, and why. (§ What to do when smear is bad + Decision Point)
3. Explain in one sentence why pslist-psscan disagreement is an expected acquisition artifact rather than evidence of compromise. (§ Technique 1 + Compliance Myth)

You've set up the lab and captured your first clean baselines.

MF0 built the three-VM lab and established the memory forensics landscape. MF1 taught acquisition with WinPmem and LiME, integrity verification, and chain of custody. From here, you execute attacks and investigate what they leave behind.

  • 8 attack modules (MF2–MF9) — process injection, credential theft, fileless malware, persistence, kernel drivers, Linux rootkits, timeline construction, and a multi-stage capstone
  • You run every attack yourself — from Kali against your target VMs, then capture memory and investigate your own attack's artifacts with Volatility 3
  • MF9 Capstone — multi-stage chain (initial access → privilege escalation → credential theft → persistence → data staging), three checkpoint captures, complete investigation report
  • The lab pack — PoC kernel driver and LKM rootkit source code, setup scripts, 21 exercises, 7 verification scripts, investigation report templates
  • Cross-platform coverage — Windows and Linux memory analysis in one course, with the timeline module integrating evidence from both