LX0.11 Module Summary

3-4 hours · Module 0 · Free

This module established the complete evidence model for Linux incident response — the mental map that every subsequent module builds on.

The evidence architecture differs fundamentally from Windows. Linux has no registry, Prefetch, unified Event Log, or MFT. Instead, evidence is distributed across the filesystem hierarchy: log files in /var/log, process state in /proc, configuration in /etc, user artifacts in /home, and volatile data in /tmp, /dev/shm, and /var/run. Investigation success depends on correlating evidence across multiple independent sources rather than parsing a single rich artifact.
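A minimal sketch of that distribution — simply confirming the primary evidence locations named above exist on the system at hand (paths are standard, but always verify per distribution):

```shell
# Enumerate the core Linux evidence locations. On a given distribution some
# may be symlinks (/var/run -> /run) or absent (minimal container images).
for dir in /var/log /proc /etc /home /tmp /dev/shm /var/run; do
    if [ -d "$dir" ]; then
        echo "present: $dir"
    else
        echo "missing: $dir"
    fi
done
```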

The transparency trade-off. What Linux lacks in centralized forensic infrastructure, it compensates for with pervasive transparency. Every process is visible as a directory in /proc. Every configuration is a readable text file. Every kernel parameter is exposed through pseudo-filesystems. A skilled investigator extracts more from native Linux tools than many Windows investigators extract from commercial forensic suites — but only if they know where to look.
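As a sketch of that transparency, the commands below inspect a live process entirely through /proc — here the current shell (`$$`), but the same entries exist for any PID:

```shell
# Inspect a running process through /proc. Each entry answers a forensic
# question; all of this is available with no tooling beyond coreutils.
pid=$$
tr '\0' ' ' < "/proc/$pid/cmdline"; echo   # command line (NUL-separated on disk)
readlink "/proc/$pid/exe"                  # the binary actually executing
readlink "/proc/$pid/cwd"                  # current working directory
ls -l "/proc/$pid/fd" | head               # open files and sockets
```

The `exe` symlink is particularly valuable: it remains resolvable even if the attacker deleted the binary from disk after launching it.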

Evidence is organized by investigation question. Five core questions drive every investigation: who authenticated (auth.log, wtmp, journal), what commands were executed (.bash_history, auditd, /proc/[pid]/cmdline), what was changed (filesystem timestamps, package manager logs, /etc modifications), how they persisted (authorized_keys, cron, systemd, ld.so.preload), and what they stole (network logs, bash history, auditd file access records).
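A hedged sketch mapping one native command to each question. Paths follow Debian/Ubuntu conventions; on RHEL substitute /var/log/secure, and any command may be absent on a minimal image, hence the guards:

```shell
# One command per investigation question (illustrative, not exhaustive).
last -f /var/log/wtmp 2>/dev/null | head || true   # who authenticated
tail ~/.bash_history 2>/dev/null || true           # what commands were executed
ls -lt --time=ctime /etc | head                    # what was changed (recent ctime)
ls /etc/cron.* /var/spool/cron 2>/dev/null || true # how they persisted
# what they stole: ausearch over auditd file-access records, if auditd is running
```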

Volatility determines collection order. Memory and /proc data are the most volatile — lost when a process terminates or the system reboots. /dev/shm and /tmp (tmpfs) are lost on reboot. Log files survive reboots but are subject to rotation. Filesystem artifacts are the most persistent. Collection must proceed from most volatile to least volatile — collecting in the wrong order means the most valuable evidence is destroyed before you reach it.
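The ordering can be sketched as a script skeleton. OUT is a scratch directory here; in the field it would be external media or a network share, and the steps that need extra tooling (LiME, dc3dd) appear as comments because they require per-target builds and devices:

```shell
# Collection in order of volatility, most volatile first.
OUT=$(mktemp -d)
# 1. Memory:  insmod lime.ko "path=$OUT/mem.lime format=lime"  (most volatile)
# 2. /proc:   per-process state, before anything terminates
ls /proc > "$OUT/proc_listing.txt"
# 3. tmpfs:   /dev/shm and /tmp, lost on reboot
ls -la /tmp /dev/shm > "$OUT/tmpfs_listing.txt" 2>/dev/null || true
# 4. Logs:    survive reboot, but rotation can overwrite them
cp -a /var/log "$OUT/var_log" 2>/dev/null || true
# 5. Disk:    image last — e.g. dc3dd if=/dev/sda of=$OUT/disk.img hash=sha256
echo "collected into $OUT"
```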

Three environments, three approaches. Bare-metal servers allow classical forensic collection (write-blocked disk imaging, USB-based memory acquisition). Cloud VMs add a second evidence plane (cloud audit trails) and use API-based disk snapshots. Containers are ephemeral — evidence must be collected before the container restarts, using docker export, kubectl cp, and orchestrator logs.
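For the container case, a sketch of collection before a restart destroys the evidence. The names (webapp, api-0) are hypothetical, and each command is guarded so the sketch degrades gracefully where no runtime is present:

```shell
# Container evidence collection — run before the container restarts.
docker export webapp > webapp_fs.tar 2>/dev/null || true       # container filesystem
docker logs webapp > webapp_stdout.log 2>&1 || true            # runtime-captured stdout/stderr
kubectl cp default/api-0:/tmp ./api-0-tmp 2>/dev/null || true  # copy a path out of a pod
kubectl logs api-0 --previous > api-0_prev.log 2>&1 || true    # logs from the prior instance
```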

The attacker’s workflow creates the evidence trail. Initial access → reconnaissance → persistence → privilege escalation → objective execution. Each step leaves artifacts in predictable locations. Anti-forensic techniques (log truncation, history deletion, timestamp manipulation, rootkits) each have counter-techniques: cross-correlation across multiple log sources, ctime analysis, and memory forensics.
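The ctime counter-technique can be demonstrated in a few lines. `touch -t` stands in for an attacker's timestamp manipulation: it rewrites atime/mtime, but ctime is kernel-controlled and cannot be set from userspace, so a backdated mtime paired with a recent ctime is a red flag:

```shell
# ctime analysis against timestamp manipulation (GNU stat syntax).
f=$(mktemp)
touch -t 202001010000 "$f"            # backdate mtime to 2020-01-01
stat -c 'mtime=%y  ctime=%z' "$f"     # ctime still records the real change time
rm -f "$f"
```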

Distribution differences change evidence paths. Debian/Ubuntu uses auth.log, apt, AppArmor. RHEL/CentOS uses secure, dnf/yum, SELinux. Alpine uses minimal logging, apk, no MAC framework. Amazon Linux adds SSM agent as an alternative access path. The first command on any system is cat /etc/os-release — it determines which evidence paths apply.
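That first command feeds directly into path selection, as in this sketch (the case arms mirror the mapping above):

```shell
# Identify the distribution first; every downstream evidence path depends on it.
. /etc/os-release                      # defines ID, VERSION_ID, PRETTY_NAME
echo "distro: $ID ${VERSION_ID:-}"
case "$ID" in
    debian|ubuntu)           AUTH_LOG=/var/log/auth.log ;;
    rhel|centos|fedora|amzn) AUTH_LOG=/var/log/secure ;;
    alpine)                  AUTH_LOG=/var/log/messages ;;  # minimal syslog, if enabled
    *)                       AUTH_LOG="" ;;
esac
echo "auth log path: ${AUTH_LOG:-unknown}"
```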

The toolkit is free-first. UAC for triage collection, LiME for memory acquisition, dc3dd for disk imaging, Sleuth Kit for filesystem analysis, plaso for timeline generation, Volatility 3 for memory forensics, journalctl/ausearch/grep for log analysis. Commercial alternatives (Velociraptor, AXIOM Cyber) noted where they provide significant advantages.
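A free-first triage sketch. The UAC and LiME lines are comments because they require the tools on the target (the UAC invocation shown is the commonly documented profile form — treat it as an assumption and check your version); the final sweep runs with nothing but coreutils and grep:

```shell
# Free-first triage sketch.
# ./uac -p ir_triage /evidence                          # UAC triage profile (assumed invocation)
# insmod lime.ko "path=/evidence/mem.lime format=lime"  # LiME memory capture
grep -rhs 'Failed password' /var/log | head || true     # quick sweep for SSH brute force
```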

Security architecture generates evidence. SELinux and AppArmor produce audit logs when the attacker violates policies. Linux capabilities determine what a non-root process can do. Namespaces isolate container evidence. Auditd (when configured) records every system call — the most comprehensive evidence source on Linux. The more security layers are active, the more evidence the attacker’s actions generate.
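Two of those layers can be sketched in a few lines. The auditctl/ausearch pair is illustrative (auditd must be installed and running, and the key name is hypothetical); the capability check reads /proc directly and runs anywhere:

```shell
# Security layers as evidence sources.
# auditctl -w /etc/passwd -p wa -k ident_change   # watch writes to /etc/passwd
# ausearch -k ident_change --start today          # pull the matching audit events
grep CapEff /proc/self/status                     # effective capability mask of this process
```

A full-ffffffff-style CapEff mask on a process that should be unprivileged is itself an investigative lead.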

The lab is where you practice. A forensic workstation with analysis tools and at least one target VM provide the minimum environment. The full Northgate Engineering infrastructure (bastion, web server, database server, Kubernetes cluster, CI/CD runner) supports all scenario modules. Build the minimum lab before proceeding to LX1.

What comes next. LX1 (Evidence Collection and Triage) teaches you to collect the evidence described in this module — using UAC, live response commands, and cloud-specific collection methods. The collection follows the order of volatility: memory first, /proc second, volatile filesystems third, log files fourth, and disk image last. Everything you learned about where evidence lives in this module determines where you look in LX1.

