In this module
MF0.3 The Memory Forensics Workflow and Documentation Standard
You know what memory is (MF0.1) and what lives in it (MF0.2). You know how to investigate — you follow leads, check systems, run plugins. What you don't have yet is a defensible methodology — a systematic, repeatable workflow that produces the same depth of analysis across every engagement and survives adversarial review. That's what this sub builds.
A memory image arrives. You have Volatility 3, the relevant symbols, and an understanding of what memory contains. What do you do next?
Without a disciplined workflow, investigators drift between plugins, follow interesting findings without systematic coverage, and produce inconsistent analyses — one engagement goes deep on process analysis and skips network connections; the next does the opposite. Inconsistency between investigations is the failure mode legal counsel, auditors, and clients notice. Not any individual finding — the absence of a repeatable methodology.
This sub establishes the six-phase workflow used throughout the course and the documentation standard every module applies. The workflow isn't a rigid script; it's a systematic coverage guarantee.
Deliverable: Working knowledge of the six-phase Memory Forensics Workflow (Acquire → Identify → Enumerate → Analyse → Correlate → Conclude), the purpose and typical outputs of each phase, the documentation discipline that records what was done at each step, and the checklist that determines when a phase is complete.
Figure 0.3.1 — The six-phase Memory Forensics Workflow. Phases are iterative but coverage is mandatory. Documentation is strictly linear — the audit trail records every phase entry, every return, every new finding that prompted the return.
The six phases — Acquire, Identify, Enumerate, Analyse, Correlate, Conclude — aren't proprietary to Ridgeline. They draw on established DFIR methodology adapted for memory forensics, where the categories of evidence (from MF0.2) are distinct from disk forensics. Every subsequent module in the course references these phases explicitly, and MF9's capstone demonstrates the full workflow against a realistic scenario.
Phase 1 — Acquire
Capture the memory image with verified integrity and complete chain of custody. Phase 1 produces the artefact everything else depends on — get it wrong here and no amount of downstream analysis saves the investigation.
The first phase captures the memory image. Acquisition is covered in depth in the next module (MF1); at the workflow level, phase 1 produces three outputs. The memory image itself — a bit-for-bit capture of physical RAM, with format dependent on the acquisition tool (WinPmem raw, DumpIt .raw, FTK Imager .mem, LiME, AVML, hypervisor .vmem). The integrity evidence — cryptographic hashes (MD5 and SHA-256 as standard) computed at acquisition, verified at receipt, logged in the case record. The acquisition record — method, tool, tool version, operator identity, acquisition timestamp, target system identification, and any notable events during acquisition.
Phase 1 is complete when the image is in the analyst's possession with verified hashes matching the acquisition record, the acquisition method is fully documented, chain of custody is signed and filed, and any acquisition anomalies (tool warnings, partial failures, timing variations) are recorded. Phase 1's documentation is typically the longest per-investigation record because it's what makes everything subsequent admissible.
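The integrity check described above can be scripted so receipt verification is repeatable. A minimal sketch, assuming a raw image file and a SHA-256 value copied from the acquisition record; the file names and helper names here are illustrative, not from a real case:

```python
# Sketch: verify an acquired image's SHA-256 against the acquisition record.
# Streams the file in chunks so multi-GB images never need to fit in memory.
import hashlib

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Compute the SHA-256 of a file by streaming it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verify(path: str, recorded: str) -> bool:
    """Compare the computed hash against the acquisition record's value."""
    computed = sha256_of(path)
    match = computed == recorded.lower()
    print(f"{path}: {'VERIFIED' if match else 'MISMATCH'} ({computed})")
    return match
```

The same pattern extends to MD5 by swapping the constructor; running both at receipt and logging the result is what closes the loop with the acquisition record.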
Phase 2 — Identify
Confirm what the image actually is. Labelling errors happen; hypervisor snapshots get mixed up; evidence transfers sometimes send the wrong file. Phase 2 catches these before they propagate through the entire investigation.
The second phase establishes what the memory image actually is. An image arrives labelled "Windows 10 finance workstation NE-FIN-014" — your job in phase 2 is to verify that labelling and extract the specifics needed for analysis.
The technical work is profile selection and verification. Volatility 3 requires symbols that match the OS, version, and kernel build of the captured system. On Windows, this means identifying the exact build number (e.g. 19045.5131 for a specific Windows 10 22H2 patch level) and confirming Volatility 3's symbol store has matching symbols. On Linux, it means identifying the exact kernel version and using a pre-built profile or building symbols from the kernel's DWARF debug information. Profile mismatch is the most common cause of incorrect memory analysis — offsets that don't match the actual image produce output that looks correct but reports fields from the wrong memory locations.
$ vol -f NE-FIN-014-mem.raw windows.info
# Expected output (truncated):
#
# Variable Value
# Kernel Base 0xfffff80614400000
# DTB 0x1aa000
# Symbols file:///...Windows/10.0.19045.5131.json.xz
# Is64Bit True
# IsPAE False
# PrimaryProcessor 0
# KernelFileVersion 10.0.19045.5131
# KdVersionBlock 0xfffff80614c13710
# Major/Minor 15.19045
# MachineType 34404
# KeNumberProcessors 4
# SystemTime 2026-03-15 11:47:23 UTC
Build 19045.5131 is Windows 10 22H2; Volatility 3 auto-selected matching symbols. Kernel pointers are plausible (both start 0xfffff806..., which is kernel-space on x86_64). System time confirms the image was captured on 15 March 2026. Phase 2 complete.
If the output showed gibberish or impossible values (kernel base at 0x00000000, KeNumberProcessors at 0, system time in 1970), the profile is wrong and analysis can't proceed until it's corrected.
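Those plausibility checks can be captured as a small gate before analysis proceeds. A sketch only: the dictionary keys and the year threshold are assumptions made for illustration, not parsed Volatility output:

```python
# Sketch: plausibility checks on phase 2 identification output.
# The field names mirror windows.info variables; thresholds are heuristics.
KERNEL_SPACE_MIN = 0xFFFF800000000000  # canonical kernel-space start on x86_64

def profile_sane(info: dict) -> list[str]:
    """Return a list of red flags; an empty list means phase 2 can proceed."""
    flags = []
    if info.get("KernelBase", 0) < KERNEL_SPACE_MIN:
        flags.append("kernel base not in kernel space")
    if info.get("KeNumberProcessors", 0) < 1:
        flags.append("zero processors reported")
    if info.get("SystemTimeYear", 0) < 2000:
        flags.append("system time implausible (epoch-era)")
    return flags
```

Each flag corresponds to one of the "impossible values" above; any non-empty result means the profile must be corrected before phase 3.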
Phase 3 — Enumerate
The exhaustive inventory. Not "what's interesting" — that's phase 4. Phase 3 is every process, every thread, every network connection, every handle. Completeness here is what makes phase 4's "interesting" findings defensible as "these are the anomalies in a fully-enumerated set" rather than "these are the first things I noticed."
Phase 3 establishes what's in the image at the level of complete object lists. The outputs are CSV files, lists, and summary counts that give you the full picture before any prioritisation.
For Windows, phase 3 typically runs windows.pslist (active process list), windows.psscan (pool-scan for processes including hidden), windows.pstree (parent-child relationships), windows.cmdline (command lines for every process), windows.dlllist (loaded DLLs per process), windows.handles (open handles), windows.netscan (network connections), windows.modules (kernel modules), windows.ssdt (system call table), and windows.callbacks (kernel callbacks). Each produces output saved to the case file. For Linux: linux.pslist, linux.pstree, linux.bash (bash history), linux.lsmod, linux.tty_check, linux.check_syscall, linux.hidden_modules, and the socket enumeration plugins.
The critical activity is reconciliation. For object categories with multiple discovery methods — most notably processes — compare results from each method and investigate discrepancies.
# Windows: enumerate processes three ways, reconcile
$ vol -f NE-FIN-014-mem.raw windows.pslist | wc -l # 142
$ vol -f NE-FIN-014-mem.raw windows.psscan | wc -l # 144
$ vol -f NE-FIN-014-mem.raw windows.pstree | wc -l # 142
# psscan found 2 processes pslist didn't.
# Diff the PIDs:
$ comm -13 <(vol -f ... windows.pslist | awk '{print $3}' | sort) \
<(vol -f ... windows.psscan | awk '{print $3}' | sort)
#
# PIDs 7842 and 11334 appear only in psscan — DKOM-hidden candidates.
# Flag for phase 4.
The three-way discrepancy is itself evidence. Two processes appearing in psscan but not pslist are DKOM-hidden candidates — their EPROCESS structures exist in kernel memory but were unlinked from the active process list. They don't get skipped just because they don't show up in the default plugin; phase 3's reconciliation is what surfaces them.
Phase 3 is complete when every object category has been enumerated with all applicable methods, reconciliation has been performed and discrepancies identified, outputs are saved, and you have a written list of anomalies to investigate in phase 4.
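The reconciliation step generalises beyond the shell one-liner: given PID sets from any two enumeration methods, the discrepancies fall out as set differences. A sketch, with the candidate PIDs taken from the worked example above; in practice the sets come from parsed plugin output:

```python
# Sketch: reconcile PID sets from two enumeration methods (e.g. pslist vs
# psscan). Anything psscan finds that pslist doesn't is a hiding candidate;
# the reverse direction catches pool-scan misses.
def reconcile(pslist_pids: set[int], psscan_pids: set[int]) -> dict:
    return {
        "hidden_candidates": sorted(psscan_pids - pslist_pids),  # DKOM suspects
        "scan_misses": sorted(pslist_pids - psscan_pids),        # psscan gaps
    }
```

Both directions of the difference go in the phase 3 record; an empty reconciliation is itself a finding worth logging.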
Phase 4 — Analyse
The deep-dive phase. Phase 3 produced a list of anomalies, outliers, and objects warranting investigation. Phase 4 systematically examines each one. This is where raw-memory verification happens for findings that will appear in the report.
Phase 4 is where investigation occurs. For an investigation with five flagged processes, phase 4 produces five individual process analyses, each examining VAD tree, loaded modules, handles, network connections, threads, and any other object category relevant to the question at hand.
The raw-first principle applies here. When a plugin reports a process contains an injected DLL, phase 4 verifies by extracting the memory region and examining its structure directly. When a plugin reports suspicious connections, phase 4 verifies by walking the TCP endpoint structures and confirming the reported addresses match raw memory. The verification isn't performed for every finding — that takes too long — but for findings decisive for the investigation's conclusions.
# Deep-dive the RWX region in PID 4872 flagged by phase 3 malfind
$ vol -f NE-FIN-014-mem.raw windows.vadinfo --pid 4872 --address 0x7ff8a2100000
# Expected output:
# VAD start: 0x7ff8a2100000
# VAD end: 0x7ff8a2118000
# Protection: PAGE_EXECUTE_READWRITE
# Type: VadNone (private)
# File: - (no file backing)
# Commit: 24 pages
# Extract the region and verify the PE structure
$ vol -f NE-FIN-014-mem.raw windows.vadinfo --pid 4872 \
--address 0x7ff8a2100000 --dump
# Extracted: pid.4872.vad.0x7ff8a2100000-0x7ff8a2118000.dmp
# Raw verification
$ xxd pid.4872.vad.0x7ff8a2100000-0x7ff8a2118000.dmp | head -1
# 00000000: 4d5a 9000 0300 0000 0400 0000 ffff 0000  MZ..............
Four independent signals confirm the finding: the VAD protection is RWX, the allocation is private, the first bytes are the MZ header (4d 5a), and no file backs the region. Confidence tier is HIGH. The PE header was read from raw memory, not trusted from the plugin's parse. The phase 4 finding is ready for phase 5 correlation.
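The same raw verification can be done programmatically once the region is dumped: confirm the MZ magic, follow `e_lfanew` at offset 0x3C, and check for the `PE\0\0` signature. A sketch of that check; the offsets come from the PE file format, the helper name is ours:

```python
# Sketch: verify a dumped VAD region contains a PE image, reading the raw
# bytes directly rather than trusting a plugin's parse.
import struct

def looks_like_pe(blob: bytes) -> bool:
    """MZ magic at offset 0, then 'PE\\0\\0' at the offset e_lfanew points to."""
    if len(blob) < 0x40 or blob[:2] != b"MZ":
        return False
    (e_lfanew,) = struct.unpack_from("<I", blob, 0x3C)  # DOS header field
    return len(blob) >= e_lfanew + 4 and blob[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
```

A positive result here is the "verified against raw memory at offset X" entry the phase 4 record needs; a negative result on a region malfind flagged is itself worth investigating.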
Phase 5 — Correlate
Memory evidence alone proves what happened in memory. Phase 5 connects memory findings to disk evidence, event logs, network telemetry, and other hosts — producing the unified timeline that supports the investigation's conclusions.
Phase 5 takes the phase 4 findings and correlates them against independent evidence sources. Memory timestamps are cross-referenced with event log timestamps; C2 addresses from memory are checked against firewall logs; command lines from memory are matched to PowerShell Script Block Logging; process creation times from EPROCESS are compared to Security event 4688. The output is a unified timeline and a conflict log noting any discrepancies between sources and their resolution.
# Phase 5 reconciliation notes — worked example:
#
# Memory: PID 4872 CreateTime 2026-03-15 08:42:47
# Security: Event 4688 at 2026-03-15 08:42:47 ✓ match
#
# Memory: C2 connection 203.0.113.47:443 from PID 4872
# Firewall: Outbound 203.0.113.47:443 at 08:43:21 ✓ match
# (Firewall 3s later than memory — firewall stamps post-SYN-ACK,
# memory stamps kernel endpoint creation. No discrepancy after
# accounting for protocol timing.)
#
# Memory: Decoded cmdline for PID 4872
# PS Log: Event 4104 — encoded script ✓ partial
# (PowerShell log has encoded form only. Decoded form is in memory
# ONLY. Memory is the decisive source for the actual command.)
#
# Unified timeline: timeline.csv
Phase 5 does something phase 4 can't: it determines which findings are independently supported by multiple sources (high defensibility) and which rely on memory alone (still valid, but worth flagging in the report). The decoded PowerShell command above is memory-only evidence — that doesn't weaken the finding, but the report should note that disk logs contain only the encoded form and memory is the source for the decoded one.
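The correlation logic sketched in the notes above, matching each memory finding against external sources within a small time tolerance and flagging memory-only events, might look like this. The event shape and the 5-second tolerance are illustrative choices, not a standard:

```python
# Sketch: merge timestamped memory findings with external evidence into one
# timeline, marking which findings are independently corroborated.
from datetime import datetime, timedelta

def correlate(memory_events: list[dict], external_events: list[dict],
              tolerance: timedelta = timedelta(seconds=5)) -> list[dict]:
    """Each event is {'key': str, 'time': datetime}. A memory event is
    corroborated if any external event shares its key within tolerance."""
    out = []
    for ev in memory_events:
        matches = [x for x in external_events
                   if x["key"] == ev["key"]
                   and abs(x["time"] - ev["time"]) <= tolerance]
        out.append({**ev, "corroborated": bool(matches)})
    return sorted(out, key=lambda e: e["time"])
```

The tolerance is what absorbs protocol-timing offsets like the firewall's post-SYN-ACK stamp above; the `corroborated: False` entries are the memory-only findings to flag in the report.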
Phase 6 — Conclude
Produce the report. The final phase is where methodology translates into written findings that can survive adversarial review. If the earlier phases were done properly, phase 6 is straightforward — if they weren't, phase 6 is when the gaps become obvious.
The concluding phase produces the report. Executive summary, findings at their confidence tier, limitations and known unknowns explicitly stated, methodology section referencing every prior phase's documentation, and appendices containing raw extracts and verification data. The report is structured for its audience — a CISO brief is different from a court report which is different from a peer-review technical deliverable — but the underlying methodology section remains constant.
Phase 6 includes adversarial self-review. Before releasing the report, ask: did I enumerate completely? Did I verify raw memory for decisive findings? Did I correlate before concluding? Are my confidence tiers honest? Are the limitations I've stated the real limitations? If any answer is uncertain, the report isn't ready.
What makes the sequence defensible
The six phases aren't a checklist to tick through. The sequence itself — not any individual phase — is what produces a defensible finding. This section makes that explicit because every subsequent module builds on it.
Any investigator can run windows.malfind and report an injection. What makes that finding survive adversarial review is the phase context around it: the identification evidence in phase 2 proves the correct profile was selected; the enumeration evidence in phase 3 proves every object category was covered, with reconciliation documented; the analysis evidence in phase 4 proves raw-memory verification was performed; the correlation evidence in phase 5 proves memory wasn't taken at face value but was confirmed against independent sources; and the concluding review in phase 6 proves the investigator scrutinised their own work before releasing the report.
A single command produces a finding. The sequence produces a forensic result.
The situation. You've completed phase 3 enumeration on NE-FIN-014 and flagged the RWX region in PID 4872 as a phase 4 target. Phase 4 raw-memory verification will take around 40 minutes: read the raw bytes at the VAD's start address, verify the MZ header against the plugin's report, walk the PE structure manually, compare the section layout against the plugin's parse. The CISO has asked for conclusions within a one-hour window. The plugin output is clear and unambiguous; skipping the raw verification would save 40 minutes.
The choice. Accept the plugin's parse at face value and proceed straight to phase 5, or perform the raw-memory verification at the cost of the CISO's deadline.
The correct call. Perform the raw verification, every time for any finding that will appear in the report. Volatility 3 is rarely wrong on a basic malfind — but raw verification is what converts a plugin-reported finding into an investigator-verified finding. The report's phase 4 record will read "finding verified against raw memory at offset X" rather than "plugin reported finding." Under adversarial review that distinction is the difference between a supportable claim and a dismissable one.
The operational lesson. The one-hour window is a communication problem with the CISO, not a methodology problem. Tell the CISO the verified analysis takes 90 minutes and deliver a defensible report. An unverified finding delivered in 60 minutes is more expensive than a verified finding delivered in 90, because the unverified finding may fail in the proceeding that follows.
The myth. The report contains the findings. Each finding has evidence. If the evidence is clear and the reasoning is sound, the reader has what they need. Separately documenting the methodology — "I ran these plugins, I reconciled them this way, I verified this finding against raw memory" — is paperwork that duplicates what the findings already show.
The reality. Findings documentation answers what you found. Methodology documentation answers how you found it and what else you looked for. The second question is the one that matters under adversarial review.
An opposing expert's first line of attack isn't "your finding is wrong." It's "how do we know you'd have found the alternatives if they existed?" A report that documents three findings, each with strong evidence, fails that challenge if it can't demonstrate the enumeration was exhaustive. "You found DKOM-hidden PID 7842 — did you find PID 7843 if it existed? How would you know? What did you check?" If the methodology isn't documented, the answer is effectively "trust me." Trust-me doesn't survive cross-examination.
Methodology documentation is how you prove the absence of alternative explanations. Without it, even correct findings become supportable opinions rather than demonstrable conclusions.
The documentation standard
Documentation is produced as each phase runs, not reconstructed afterward. The discipline is contemporaneous recording — what you did, when, what you saw, what you decided.
The documentation standard isn't a template. It's a discipline: every phase produces its own record as it runs, and every record is written contemporaneously. An investigator who tries to reconstruct documentation from memory after the investigation ends produces documentation that doesn't match what actually happened. Subtle omissions, forgotten sequence details, rationalisations of decisions that weren't the real reasons at the time — all of these corrupt retrospective records, and none of them corrupt contemporaneous ones.
The practical discipline is straightforward. Open the case log before phase 1 begins. Write the phase header. Write what you did, what you saw, what you decided. Timestamp each entry. When you move to the next phase, write the next phase header. When something in phase 4 sends you back to phase 3, write the return, enumerate what was missed, document what you found, then return to phase 4 with a timestamp. The log records the investigation as it happened, not as you remember it.
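A contemporaneous log can be as simple as an append-only list of entries stamped at write time, never edited after the fact. A minimal sketch (the class and entry format are illustrative, not a prescribed tool):

```python
# Sketch: an append-only case log. Entries are timestamped when written,
# which is what makes the record contemporaneous rather than reconstructed.
from datetime import datetime, timezone

class CaseLog:
    def __init__(self) -> None:
        self.entries: list[str] = []

    def record(self, phase: int, text: str) -> None:
        """Append one entry; the stamp reflects when the entry was written."""
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.entries.append(f"[{stamp}] phase {phase}: {text}")
```

The only write path appends; returns to an earlier phase are recorded as new entries under that phase number, exactly as the discipline above describes.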
The per-phase content of the record — what goes in the phase 1 log versus the phase 4 log — is the Artifact Footer's extended reference below.
Try it: build a phase-by-phase record for a past investigation
Take any memory analysis you've done before — even a quick triage against a test image. Write up the documentation retrospectively following the six-phase standard.
Open a blank document. For phase 1, write what you remember: source of the image, how it was acquired, by whom, when. For phase 2, record what OS and build you identified and how you confirmed it. For phase 3, list every plugin you ran with arguments and any reconciliation you performed. For phase 4, write up any finding you remember investigating: what was analysed, what you verified, what you concluded. For phase 5, record any correlation against external sources. For phase 6, note what you actually reported.
Now look at what you can't remember or reconstruct accurately. Those gaps are the methodology you would not have been able to defend under adversarial review. They're also the specific disciplines contemporaneous documentation would have captured. The exercise makes the value of the standard visceral — every gap you find in reconstruction is a gap you'd have had in a report.
By the end of MF1, you'll have set up the lab and captured your first clean baselines.
MF0 builds the three-VM lab and establishes the memory forensics landscape. MF1 teaches acquisition with WinPmem and LiME, integrity verification, and chain of custody. From there, you execute attacks and investigate what they leave behind.
- 8 attack modules (MF2–MF9) — process injection, credential theft, fileless malware, persistence, kernel drivers, Linux rootkits, timeline construction, and a multi-stage capstone
- You run every attack yourself — from Kali against your target VMs, then capture memory and investigate your own attack's artifacts with Volatility 3
- MF9 Capstone — multi-stage chain (initial access → privilege escalation → credential theft → persistence → data staging), three checkpoint captures, complete investigation report
- The lab pack — PoC kernel driver and LKM rootkit source code, setup scripts, 21 exercises, 7 verification scripts, investigation report templates
- Cross-platform coverage — Windows and Linux memory analysis in one course, with the timeline module integrating evidence from both
Cancel anytime