Evidence Reliability and Confidence Assessment
Figure WF0.6 — Evidence reliability hierarchy. High-confidence artifacts prove claims directly and are reportable as standalone findings (with stated confidence). Lower-confidence artifacts require corroboration from independent sources. All critical findings in a forensic report should have at least two independent artifact sources.
What "proves" means in forensic context
Forensic proof is not mathematical proof. When an examiner states that Prefetch "proves" program execution, they mean that the existence of a Prefetch file for a specific executable, created by the SysMain/Superfetch service, with execution timestamps and a run count, establishes program execution to the standard required for the examiner's professional opinion. The proof is probabilistic — there are theoretical scenarios where a Prefetch file could exist without the program having executed (filesystem corruption, deliberate fabrication by a sophisticated attacker with system-level access) — but these scenarios are sufficiently unlikely and detectable that the Prefetch evidence supports a reliable conclusion.
This is the standard for forensic findings: the evidence supports the conclusion to a level where the examiner can state their professional opinion with a defined confidence level, having considered and documented alternative explanations. The finding is not "absolute proof" — it is "evidence supporting the conclusion at high/moderate/low confidence, with the following alternative explanations considered and assessed."
The confidence level framework used in this course has four levels. High confidence means the artifact directly establishes the claimed fact through a mechanism that is well-understood, reliably produced by the operating system, and resistant to casual manipulation. Moderate-high confidence means the artifact strongly supports the claim with minor caveats that should be noted but do not materially reduce reliability. Moderate confidence means the artifact supports the claim under stated conditions — the examiner must verify those conditions and document them. Low confidence means the artifact is consistent with the claim but does not prove it — additional corroboration is required.
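To make the framework concrete, here is a minimal sketch of the four levels as a Python structure. The enum and helper names are this sketch's own invention, not part of any forensic tool or standard.

```python
from enum import Enum

class Confidence(Enum):
    # Labels mirror the course framework; descriptions paraphrase it.
    HIGH = "directly establishes the fact; resistant to casual manipulation"
    MODERATE_HIGH = "strongly supports the claim; minor caveats to note"
    MODERATE = "supports the claim under stated, verified conditions"
    LOW = "consistent with the claim; corroboration required"

def standalone_reportable(level: Confidence) -> bool:
    """Per the corroboration standard later in this module, only a
    high-confidence artifact stands alone as a reportable finding."""
    return level is Confidence.HIGH
```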
Filesystem artifact reliability
MFT existence (file existed): High confidence. An MFT record for a file proves the file existed on the volume. The record was created by NTFS when the file was created. An MFT record cannot be fabricated through normal system operations — it requires raw disk manipulation. Caveat: an MFT record marked "not in use" proves the file existed at some point but was deleted. The record does not prove the file exists currently.
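For readers working raw, a minimal sketch of the in-use check on a raw FILE record follows. It assumes a standard record starting with the "FILE" signature and reads the header flags field at offset 0x16 (bit 0 = record in use, bit 1 = directory).

```python
import struct

def mft_record_in_use(record: bytes) -> bool:
    """Return True if a raw MFT FILE record is allocated (file still
    exists), False if it is marked not-in-use (file was deleted)."""
    if record[:4] != b"FILE":
        raise ValueError("not an MFT FILE record")
    flags = struct.unpack_from("<H", record, 0x16)[0]  # header flags field
    return bool(flags & 0x0001)  # bit 0 = record in use
```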
MFT $FN timestamps (creation time): High confidence. The $FILE_NAME creation timestamp is set by the NTFS kernel driver at file creation and is not modifiable through the Windows API. It provides a reliable creation time for the file on this volume. Caveat: if the file was copied to this volume, the $FN creation timestamp reflects the copy time, not the original file's creation time. The $SI creation timestamp may preserve the original creation time (depending on the copy method), creating the counterintuitive situation where $SI Created is earlier than $FN Created — which looks like timestomping but is actually a legitimate copy artifact.
MFT $SI timestamps (modification, creation): Moderate confidence. $SI timestamps are set and updated by the operating system but are modifiable by any user-mode application with file write access. They are reliable in the absence of anti-forensic activity, but they must be cross-checked against $FN timestamps and USN Journal entries for critical findings.
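A hypothetical triage helper for the $SI/$FN cross-check is sketched below. The rule labels and the round-timestamp heuristic are illustrative, and any hit still needs the USN Journal corroboration described next.

```python
from datetime import datetime

def assess_creation_times(si_created: datetime, fn_created: datetime) -> str:
    """Cross-check $STANDARD_INFORMATION vs $FILE_NAME creation times."""
    if si_created < fn_created:
        # Legitimate copy to this volume, or a backdated $SI (timestomp).
        # USN Journal entries and copy-method context disambiguate the two.
        return "anomaly: $SI Created precedes $FN Created (copy or timestomp)"
    if si_created.microsecond == 0:
        # NTFS records 100ns precision; some timestomping utilities write
        # whole-second values. A weak indicator on its own.
        return "weak indicator: zeroed sub-second precision in $SI"
    return "no $SI/$FN creation anomaly"
```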
USN Journal entries (file operations): Moderate-high confidence. USN Journal entries are created by the NTFS driver to record file change events. Each entry has a timestamp, a reason code identifying the operation type, and file/parent references. The journal is append-only during normal operation — entries cannot be selectively edited. Caveat: the journal has a maximum size and wraps, discarding the oldest entries. An attacker with administrator privileges can delete the entire journal with fsutil usn deletejournal. The absence of USN Journal entries therefore does not prove the absence of file operations; an unexpectedly empty or truncated journal may itself be evidence of anti-forensic activity.
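Reason codes are bit flags OR'd together per entry, so a single entry can record several operations. A decoding sketch using a subset of the documented winioctl.h constants:

```python
# Subset of USN reason-code constants from Windows winioctl.h.
USN_REASONS = {
    0x00000001: "DATA_OVERWRITE",
    0x00000002: "DATA_EXTEND",
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x00001000: "RENAME_OLD_NAME",
    0x00002000: "RENAME_NEW_NAME",
    0x80000000: "CLOSE",
}

def decode_reason(mask: int) -> list[str]:
    """Expand a USN reason bitmask into operation names."""
    return [name for bit, name in USN_REASONS.items() if mask & bit]

print(decode_reason(0x80000100))  # ['FILE_CREATE', 'CLOSE']
```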
$I30 index slack (deleted filenames): Moderate confidence. Deleted filename entries in directory index slack provide evidence that files existed in a directory. The evidence is reliable when present but its absence is not meaningful — index compaction and reallocation can remove entries without anti-forensic intent. The recovery is opportunistic, not guaranteed.
Execution artifact reliability
Prefetch (program execution): High confidence. A Prefetch file is created by the SysMain service when an executable runs for the first time. Subsequent executions update the file with new timestamps and increment the run count. The Prefetch file records the last 8 execution timestamps (Windows 8+), the total run count, and the files/directories referenced during execution. Prefetch is the strongest single-source evidence of program execution. Caveats: Prefetch can be disabled (check PrefetchParameters registry value). Prefetch files can be deleted. On Windows Server, Prefetch is disabled by default. The run count and last execution time are from the last file update, which may not reflect the very latest execution if the file wasn't flushed before collection.
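A quick live-system check of the PrefetchParameters value mentioned above can look like the sketch below. For dead-box work, read the same value from the offline SYSTEM hive with a hive parser instead.

```python
import winreg

def prefetch_status() -> str:
    """Read EnablePrefetcher from a live system's registry."""
    key_path = (r"SYSTEM\CurrentControlSet\Control\Session Manager"
                r"\Memory Management\PrefetchParameters")
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
        value, _ = winreg.QueryValueEx(key, "EnablePrefetcher")
    # 0 = disabled, 1 = application launch only, 2 = boot only, 3 = both
    states = {0: "disabled", 1: "application only",
              2: "boot only", 3: "boot and application"}
    return states.get(value, f"unknown ({value})")
```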
Amcache (program installation/first execution): Moderate-high confidence. Amcache records program metadata including SHA1 hash, file path, publisher, and version. On Windows 10+, entries are created when a program is first executed or installed via an installer. The SHA1 hash is independently verifiable against threat intelligence databases. Caveats: the exact trigger for Amcache population has changed across Windows versions and is not fully documented by Microsoft. Some researchers have found that Amcache entries can be created for executables that were not actually run (e.g., executables scanned by Defender). The Amcache is a moderate-high source — strong supporting evidence, excellent for hash identification, but best corroborated with Prefetch for execution timing.
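When verifying an Amcache hash against the file on disk, note that independent researchers report the Amcache SHA1 is computed over at most the first 31,457,280 bytes of the file (and stored with a leading "0000"). A comparison sketch under that assumption:

```python
import hashlib

def amcache_style_sha1(path: str) -> str:
    """Compute a SHA1 comparable to an Amcache FileId value.
    Assumption (per published Amcache research, not Microsoft docs):
    only the first 31,457,280 bytes of the file are hashed."""
    LIMIT = 31_457_280
    h = hashlib.sha1()
    with open(path, "rb") as f:
        h.update(f.read(LIMIT))  # read() stops at LIMIT bytes
    return h.hexdigest()

# Compare against the Amcache FileId with its leading "0000" stripped.
```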
Shimcache (program in execution path): Moderate on Windows 10+ (with execution flag), low on pre-Windows 10. On Windows 10 and 11, the Shimcache includes an execution flag that indicates whether the program was actually executed. On Windows 7, Shimcache presence does not definitively prove execution — only that the Application Compatibility subsystem evaluated the executable. On all versions, Shimcache is populated on system shutdown, not in real time — entries reflect the state at last shutdown, which may miss executables that ran and were deleted between shutdowns. Shimcache is best used as corroborating evidence alongside Prefetch, not as a standalone execution proof.
BAM/DAM (execution with user attribution): Moderate-high confidence (when available). BAM entries include the full executable path, the execution timestamp, and the user SID — providing user-level execution attribution that Prefetch and Amcache lack. Caveats: BAM was introduced in Windows 10 1709 and its behavior has changed across versions. Not all Windows 10/11 builds maintain BAM entries. Check whether the BAM registry key exists and contains data on the evidence system before relying on it.
UserAssist (GUI program execution): Moderate confidence. UserAssist records programs launched through the Windows shell (Explorer, Start Menu, desktop shortcuts). The run count and last run time are encoded in registry values with ROT13-encoded value names. Caveats: UserAssist only captures GUI launches — programs executed from the command line, via scheduled tasks, or via services are not recorded. The run count has known accuracy issues on some Windows versions. UserAssist proves interactive user execution, which is valuable for establishing user intent, but it does not capture all program execution.
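Decoding the value names is trivial; the path in this example is illustrative:

```python
import codecs

encoded = "P:\\Jvaqbjf\\Flfgrz32\\pzq.rkr"  # raw UserAssist value name
print(codecs.decode(encoded, "rot13"))      # C:\Windows\System32\cmd.exe
```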
User activity artifact reliability
ShellBags (folder navigation): Moderate-high confidence. ShellBags record folders the user navigated to in Windows Explorer. Each entry includes the folder path (encoded as shell item IDs), access timestamps, and view settings. ShellBags are created by Explorer — they prove the user opened the folder in the shell. Caveats: ShellBags persist after the folder is deleted, after the drive is removed, and after the user profile is partially cleaned. A ShellBag entry proves the folder was accessed at some point — not that it was accessed at a specific time (the timestamps indicate first/last access but may be updated by view settings changes, not just navigation). ShellBag timestamps should be corroborated with Event Log authentication records or MFT timestamps.
LNK files (file access): Moderate-high confidence. LNK files in the Recent folder are created automatically when a file is opened. Each LNK contains the target file's path, timestamps, size, volume information, and machine identifier. LNK files persist after the target file is deleted. Caveats: LNK files prove a file was opened — they do not prove the file was read, modified, or exfiltrated. A user who double-clicked a file by accident and immediately closed it still generates an LNK file. LNK files can be deleted by clearing the Recent folder. The machine identifier in LNK files can link a file to the specific computer it was accessed from — useful for proving cross-device activity.
Jump Lists (application-specific file access): Moderate-high confidence. Jump Lists record files recently accessed by specific applications, identified by AppID. Caveats: Jump Lists combine pinned items (user-initiated) and recent items (automatic). The distinction matters — a pinned item proves intent, a recent item proves access. Jump Lists can be cleared by the user.
Your investigation has Prefetch evidence showing a credential tool executed (run count 1, loaded samlib.dll and wdigest.dll) and Shimcache evidence with execution flag True, but NO Amcache entry for the tool. The tool's executable is still on disk. What confidence level do you assign?
Your options:
(A) CONFIRMED — Prefetch and Shimcache both confirm execution. Two sources is sufficient.
(B) HIGH — Prefetch and Shimcache confirm execution (two independent sources agreeing), but the missing Amcache entry is worth noting. The absence could mean: Amcache was cleared (check for cleanup indicators), the tool ran in a way that avoided Amcache registration (rare but possible with certain execution methods), or an Amcache parsing error (re-parse with a different tool). Document the finding as HIGH with a note about the Amcache absence and any investigation into why it's missing.
The correct approach is B. Two sources confirming is strong evidence (HIGH), but the missing third source warrants documentation. The Amcache absence might reveal additional anti-forensic activity that is itself a finding.
Try It — Build a Confidence Assessment for a Finding
Scenario: You need to prove that the user "j.morrison" accessed the file \\SRV-NGE-FS01\Engineering\Manufacturing\Proprietary_Specs_Q3.xlsx on March 10, 2026. You have the following evidence:
1. ShellBag entry in j.morrison's UsrClass.dat showing \\SRV-NGE-FS01\Engineering\Manufacturing with last access timestamp March 10.
2. LNK file in j.morrison's Recent folder pointing to Proprietary_Specs_Q3.xlsx on \\SRV-NGE-FS01 with target timestamps from the file server.
3. Jump List entry for Excel (AppID: 1b4dd67f29cb1962) showing Proprietary_Specs_Q3.xlsx as a recent document.
4. Security Event Log on SRV-NGE-FS01 showing Event ID 4624 logon for j.morrison at 09:47:22 on March 10 from the user's workstation IP.
Assessment exercise:
- What confidence level does each artifact provide independently?
- Which artifacts corroborate each other?
- What does the combined evidence prove that no single artifact proves alone?
- What alternative explanations should be considered and documented?
The combined evidence provides high confidence: four independent sources consistently showing j.morrison accessed the Engineering\Manufacturing folder (ShellBag), opened the specific file (LNK), used Excel to view it (Jump List), and authenticated to the file server at the relevant time (Event Log). No single artifact is as strong as the combination.
The corroboration standard
This course uses a consistent corroboration standard for forensic findings: every critical finding in a report should be supported by at least two independent artifact sources. A critical finding is any conclusion that determines scope (what was accessed), attribution (who did it), timeline (when it happened), or impact (what was the consequence).
A single high-confidence artifact source is sufficient for a reportable finding — an examiner can state "Prefetch evidence confirms execution of toolname.exe" as a standalone finding with high confidence. But a critical finding that drives the investigation's conclusion — "the subject exfiltrated 4.7GB of proprietary data between March 1 and March 15 via OneDrive" — should be supported by multiple sources: SRUM data showing OneDrive upload volumes, USN Journal entries showing file copy operations to the OneDrive sync folder, ShellBag entries showing navigation to the restricted folders, and MFT timestamps establishing the file creation timeline.
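A minimal sketch of that standard as a checklist structure; the field names are this sketch's own, and the example mirrors the OneDrive scenario above.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    critical: bool  # determines scope, attribution, timeline, or impact?
    sources: list[str] = field(default_factory=list)

    def meets_standard(self) -> bool:
        # Critical findings need at least two independent artifact sources;
        # a non-critical finding can stand on one high-confidence source.
        required = 2 if self.critical else 1
        return len(set(self.sources)) >= required

exfil = Finding(
    claim="4.7GB exfiltrated via OneDrive, March 1-15",
    critical=True,
    sources=["SRUM", "USN Journal", "ShellBags", "MFT timestamps"],
)
assert exfil.meets_standard()
```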
The corroboration requirement is not about distrust of individual artifacts. It is about building findings that survive scrutiny. When opposing counsel challenges one artifact source — "isn't it true that ShellBag timestamps can be inaccurate?" — the examiner responds "yes, which is why this finding is based on ShellBag evidence corroborated by LNK file evidence, Jump List evidence, and authentication Event Logs. Each source independently confirms the access, and together they establish the finding at high confidence."
The myth: Tool output from an industry-standard forensic tool constitutes evidence. If MFTECmd reports a file with a specific timestamp, that timestamp is evidence. If RegRipper reports a registry value, that value is evidence. The tool's output is the finding.
The reality: Tool output is data. Evidence is data that has been validated, interpreted in context, assessed for reliability, and documented with its provenance and limitations. MFTECmd reporting a $SI timestamp is data — the examiner interpreting that timestamp as the file creation time after verifying it against $FN timestamps and USN Journal entries, confirming the OS version context, checking for timestomping indicators, and documenting the reliability assessment is evidence. The difference is the examiner's analysis and professional judgment applied to the raw data. A forensic report that presents tool output without analysis is a data dump, not an examination report. The value the examiner adds — and the standard they are held to — is the interpretation, correlation, and confidence assessment that transforms data into evidence.
Troubleshooting
"Multi-artifact corroboration sounds ideal but impractical — I don't have time to check every finding against multiple sources." You don't corroborate every data point. You corroborate critical findings — the conclusions that drive the investigation's outcome. A typical investigation report has 5-15 critical findings. Corroborating each against 2-3 sources adds hours, not days, to the analysis. The alternative — presenting uncorroborated findings that collapse under challenge — costs far more in credibility and case outcome.
"What if the corroborating sources conflict?" Conflicts are informative, not problematic. If the MFT timestamp says the file was created on March 15 but the USN Journal shows a FILE_CREATE entry on March 28, you have discovered either timestomping, a copy operation, or a tool parsing discrepancy. Investigating the conflict often reveals evidence that a single-source analysis would have missed entirely. Document the conflict, analyze the cause, and report the assessment. A finding that acknowledges and resolves a conflict is stronger than a finding that never encountered one.
"Some investigations only have limited artifacts available — the system was reimaged, the logs were cleared, the MFT was partially overwritten." Limited evidence is reality, not a methodology failure. Assess what you have, rate your confidence based on the available sources, and document what is missing and why. A finding stated at moderate confidence with documented limitations is honest and defensible. A finding stated at high confidence when the evidence only supports moderate is dishonest and will be exposed under scrutiny.
You've built the foundations of artifact-level forensic analysis.
WF0 gave you the taxonomy, NTFS architecture, and the five-step methodology. WF1 took you inside the MFT at the binary level — every attribute, every timestamp, every edge case. From here, every artifact category gets the same raw-first treatment.
- WF2–WF10: every major Windows artifact decoded at binary level — USN Journal, Prefetch, Amcache, Shimcache, ShellBags, LNK, Jump Lists, SRUM, Event Logs, and the Registry hives
- INC-NE-2026-0915 (WF13) — Insider data exfiltration capstone. Work the complete investigation from USB history to OneDrive exfiltration evidence
- INC-NE-2026-1022 (WF14) — Ransomware capstone. Three-host triage (FIN01 → IT03 → FS01) across the 72-hour attack chain
- The lab pack — 25+ realistic evidence files in 10 formats, simulated KAPE triage pre-populated, both capstones deployable to your own VM
- Anti-forensic detection methodology — defeat timestomping, log clearing, and Prefetch deletion with cross-artifact correlation