MF0.11 Module Summary
The core argument
The module rests on one claim: memory forensics is not a specialist discipline you reach for when disk analysis fails; it is an operational baseline for any investigation that touches credentialed-user compromise, fileless malware, living-off-the-land execution, or kernel-level attacker tradecraft. The alternative — disk and EDR telemetry alone — produces conclusions that hold only if the attacker cooperated by writing to the filesystem. Modern attackers increasingly don't, and the investigations that matter most are the ones where disk tells you nothing.
That claim reshapes how you think about SOC procedures, incident response workflows, and the boundary between routine triage and serious investigation. The four mandatory acquisition triggers from MF0.1 — credentialed-user compromise suspected, EDR firing without corresponding disk evidence, attribution required beyond "a compromise occurred," and incident likely to recur without root cause — are the conditions that move memory from optional to required. Every subsequent module assumes you've internalised those triggers and are prepared to defend memory acquisition as a standard step rather than a specialist exception.
What each subsection established
- MF0.1 made the case for memory forensics and the four categories of attack where memory evidence is decisive.
- MF0.2 gave you the three-category taxonomy — volatile structures, ephemeral data, transient artefacts — that classifies any memory finding and constrains the claim language appropriate to each.
- MF0.3 established the six-phase workflow (Acquire → Identify → Enumerate → Analyse → Correlate → Conclude) and the documentation standard that makes findings defensible.
- MF0.4 explained physical vs virtual memory, paging, page tables, and ASLR — the mechanical foundations that underlie every plugin and every raw-memory analysis.
- MF0.5 oriented you to the two primary analysis tools, Volatility 3 and MemProcFS, with the decision rule for when each is correct.
- MF0.6 introduced WinDbg as the kernel-debugger perspective that validates Volatility findings for high-stakes reports.
- MF0.7 established the three-tier confidence hierarchy (high / medium / low) and the four reliability modifiers that determine tier assignment.
- MF0.8 covered the legal context — UK CPR 35 and ACPO principles, US Daubert standards, EU electronic evidence rules, best-evidence considerations, and the reporting language that matches each confidence tier.
- MF0.9 set up the three-VM lab (Target-Win, Target-Linux, Kali) plus the analysis workstation that every subsequent module depends on.
- MF0.10 verified the lab build end-to-end with the first memory capture.
The sequence is not arbitrary. Each subsection depends on the ones before. The confidence framework in MF0.7 requires the workflow in MF0.3 to have produced the multi-method discovery evidence it assesses. The legal reporting language in MF0.8 requires the confidence tiers from MF0.7 to assign the appropriate hedge. The lab verification in MF0.10 exercises the workstation built in MF0.9 against a real capture, confirming the infrastructure before MF1 starts depending on it. A learner who tried MF0.10 without MF0.9 would have nothing to capture from; a learner who tried MF0.8 without MF0.7 would not have tier language to hedge with.
The three operational disciplines that matter most
Out of everything MF0 has covered, three disciplines transfer across every subsequent module and every real investigation you'll conduct.
The first is acquisition discipline. Memory acquisition is not a consequence-free action — it alters the system being investigated, competes with operational business needs, and produces evidence that is permanent even if the incident turns out to be benign. But the alternative, skipping acquisition on ambiguous cases, produces investigations that cannot answer the questions memory forensics exists to answer. The four mandatory triggers from MF0.1 are the bright line. Inside the triggers, acquire. Outside them, don't. The discipline is recognising when a trigger fires and executing acquisition immediately, not negotiating with operational pressure.
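The bright-line rule above can be sketched as a simple decision function. The trigger names and the `should_acquire` helper below are illustrative placeholders, not course artefacts:

```python
# Illustrative sketch of the MF0.1 bright-line rule: if any mandatory
# trigger fires, memory acquisition is required; otherwise it is not.
MANDATORY_TRIGGERS = {
    "credentialed_user_compromise",   # credentialed-user compromise suspected
    "edr_without_disk_evidence",      # EDR firing with no corresponding disk evidence
    "attribution_required",           # attribution needed beyond "a compromise occurred"
    "recurrence_without_root_cause",  # incident likely to recur without root cause
}

def should_acquire(observed_conditions: set[str]) -> bool:
    """Return True if any mandatory acquisition trigger has fired."""
    return bool(MANDATORY_TRIGGERS & observed_conditions)
```

The point of the sketch is the shape of the rule: the decision is a set-membership check, not a negotiation.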
The second is contemporaneous documentation. The documentation standard from MF0.3 — acquisition record, identification record, enumeration record with reconciliation, per-finding analysis record with confidence tier, correlation record, report — is not produced retrospectively. It's produced as each phase runs. An investigator who tries to reconstruct documentation at the end of the case produces documentation that doesn't match what actually happened. The discipline is adding an entry every time you take an action, not at the end when the facts have faded.
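The contemporaneous principle — an entry per action, recorded at the time of the action — can be sketched as an append-only log. The record fields here are illustrative, not the course's documentation schema:

```python
import datetime

class ContemporaneousLog:
    """Append-only action log: entries are timestamped as they happen
    and never rewritten afterwards (illustrative sketch)."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def record(self, phase: str, action: str) -> dict:
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "phase": phase,
            "action": action,
        }
        self._entries.append(entry)   # append only; no edits in place
        return entry

    @property
    def entries(self) -> tuple:
        # Expose a read-only view so the record cannot be reordered later.
        return tuple(self._entries)
```

The design choice that matters is append-only: the timestamp is captured when the action happens, so the log cannot drift from what actually occurred.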
The third is confidence-matched reporting language. The MF0.7 framework and the MF0.8 legal reporting conventions combine into one rule: the claim you make in the report must match the evidence you have. High-confidence findings warrant direct assertions. Medium-confidence findings require explicit hedges. Low-confidence findings need careful qualification or relegation to evidence-of-interest status. The practitioner who asserts medium-confidence findings as though they were high-confidence produces a report whose strongest claims are tainted once the weakest claim is exposed. The practitioner who hedges high-confidence findings as though they were medium understates what the evidence supports. Both failures are correctable, and both are less common among practitioners who have internalised the tier-to-language mapping.
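The tier-to-language rule can be sketched as a lookup. The phrasing templates below are illustrative placeholders, not the MF0.8 reporting conventions verbatim:

```python
# Illustrative tier-to-language mapping: claim strength in the report
# must match the confidence tier assigned during analysis.
TIER_LANGUAGE = {
    "high":   "The evidence shows that {finding}.",
    "medium": "The evidence is consistent with {finding}.",
    "low":    "Noted as evidence of interest: {finding}.",
}

def report_claim(tier: str, finding: str) -> str:
    """Render a finding with hedging matched to its confidence tier."""
    try:
        template = TIER_LANGUAGE[tier]
    except KeyError:
        raise ValueError(f"unknown confidence tier: {tier!r}")
    return template.format(finding=finding)
```

A lookup like this makes the failure mode from the paragraph above mechanical to avoid: the tier is assigned once in analysis, and the report language follows from it rather than from the writer's instinct.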
What you have built
At the end of MF0, you have a working three-VM lab on isolated host-only networking: Target-Win (Windows 11 victim), Target-Linux (Ubuntu 22.04 victim for MF7), and Kali (attacker). Each target VM has a clean-baseline snapshot. You have an analysis workstation — host OS or separate VM — with Volatility 3 in a Python virtualenv, MemProcFS installed, WinDbg configured, and a supporting toolchain for hex analysis, pattern matching, and memory carving. You have the methodology — the six-phase workflow, the confidence tier framework, the legal reporting conventions — as working habits rather than concepts you've read about. The MF0.10 lab verification confirmed that the infrastructure produces clean captures that Volatility 3 parses correctly.
The lab and the methodology are the infrastructure. MF1 onward is the analysis depth that the infrastructure supports, starting with memory acquisition in depth.
How MF1 extends what MF0 established
MF1 is about memory acquisition in depth — the phase that MF0 treated as "pre-performed, record reviewed" becomes the subject matter. You'll cover the acquisition problem (smear, order of volatility, why perfect acquisition doesn't exist), Windows acquisition with WinPmem at production quality, Linux acquisition with LiME and AVML, hypervisor-based acquisition via VMware .vmem files (the gold standard for the lab environment — this is how most course captures happen), pagefile and swap as memory-adjacent evidence, acquisition verification and integrity, smear detection, and anti-acquisition techniques attackers use. The final subsection of MF1 asks you to capture clean baselines of both Target-Win and Target-Linux, documented to the course standard — these baselines become the reference captures that MF2 onwards compares against.
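Acquisition verification, one of MF1's topics, reduces at its core to hashing the capture at acquisition time and re-verifying that hash before analysis. A minimal sketch using the standard library (the function names and paths are hypothetical, not MF1's tooling):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a capture file through SHA-256 in chunks; captures are
    far too large to read into memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_capture(path: Path, recorded_hash: str) -> bool:
    """Re-hash the capture and compare against the hash recorded in the
    acquisition record at capture time."""
    return sha256_file(path) == recorded_hash
```

The hash recorded at acquisition time belongs in the acquisition record; any later mismatch means the capture you are analysing is not the capture you took.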
After MF1, the course pivots into the attack-capture-analyse loop. MF2 runs Metasploit reflective DLL injection against Target-Win, captures memory before and after, and analyses what the injection produced in the VAD tree. MF3 runs Mimikatz credential theft and analyses LSASS. MF4 runs PowerShell fileless malware and analyses in-memory payloads. MF5 establishes scheduled task and WMI persistence and analyses the memory-resident persistence artefacts. MF6 loads a proof-of-concept kernel driver and analyses driver objects, callbacks, and rootkit tradecraft. MF7 does the Linux equivalent — SSH brute force, LKM rootkit, and the Linux kernel-memory structures. MF8 constructs multi-source timelines from the captures accumulated across MF2-MF7. MF9 is the capstone: a multi-stage attack chain (phishing-to-persistence) with three-checkpoint capture and a full reference report.
The total arc is substantial. MF0 is around 35,000 words establishing foundations; the full course extends to roughly ten times that. The depth is what justifies the Specialist tier, and the depth is what produces the difference between "I can run Volatility" and "I can produce memory forensics evidence that survives cross-examination."
Moving on
When you start MF1, you'll begin by extending the acquisition discipline from MF0.1's mandatory triggers into the full acquisition methodology — tool selection, environment considerations, operational constraints, and the detailed workflow that MF0.3's six-phase Acquire phase compresses into a single line. The lab you built in MF0.9 becomes MF1's working environment: WinPmem captures from Target-Win, LiME captures from Target-Linux, hypervisor captures via VMware snapshots, all against the same three VMs you verified in MF0.10.
Make sure your MF0.9 build is solid before starting MF1. Revert Target-Win to its clean-baseline snapshot. Confirm network isolation passes the six-test matrix from MF0.9. If anything about the lab feels uncertain, re-run MF0.10's verification once more. The methodology discipline is cumulative; infrastructure gaps compound across modules.
Summary in one paragraph
Memory forensics is an operational baseline for modern DFIR, not a specialist exception. The four mandatory acquisition triggers define when it applies. The six-phase workflow produces defensible analysis. The three-tier confidence framework grades findings. The legal reporting conventions match language to evidence. The three-VM lab and analysis workstation you've built execute the methodology; MF0.10 confirmed the infrastructure works end-to-end. MF1 builds on MF0 the same way MF0 builds on your prior DFIR experience — by taking the established principles and extending them into the technical depth that real investigations require.