LX0.1 Why Linux IR Is Different

3-4 hours · Module 0 · Free
Operational Objective
The Platform Question: your monitoring system flags suspicious activity on a Linux web server — high CPU, outbound connections to an unknown IP. You open a terminal. Everything you know about investigation comes from Windows: registry, Prefetch, Event Log, MFT. None of those exist here. The evidence model is fundamentally different, and applying Windows instincts to a Linux investigation means you miss artifacts, misinterpret timestamps, and draw incorrect conclusions. You need a new mental model — not a translation table, but a ground-up understanding of how Linux stores, organizes, and exposes evidence to investigators.
Deliverable: The ability to articulate why Linux IR requires a distinct methodology, map the structural differences between Windows and Linux evidence architectures, and identify the Linux-specific advantages (transparency through /proc, distributed evidence resilience) that compensate for the lack of centralized forensic artifacts.
⏱ Estimated completion: 30 minutes

Why Linux Incident Response Is Fundamentally Different from Windows

The evidence architecture divide

If you have investigated incidents on Windows, you carry a mental model of where evidence lives. The registry records program execution, service configuration, user activity, and system state changes. The NTFS Master File Table records every file creation, modification, and deletion with nanosecond precision. Prefetch files record the first and last eight execution times of every program. The Windows Event Log provides a structured, indexed record of authentication events, process creation, service changes, and security policy modifications. LSASS holds cached credentials in memory. The AMSI interface captures script execution content. The WMI repository records persistent event subscriptions.

None of these exist on Linux.

That statement is not an exaggeration or a simplification. Linux does not have a registry — configuration is stored in text files scattered across /etc, /home, and application-specific directories. Linux does not have Prefetch — there is no built-in mechanism that records which programs were executed, when, and how many times. Linux does not have a Master File Table that records every file operation with nanosecond timestamps — the ext4 filesystem records four timestamps per inode, but the most forensically valuable one (crtime, the creation time) was not even exposed to userspace tools until kernel 4.11 in 2017, and many forensic tools still do not parse it correctly. Linux does not have a unified Event Log — log data is split across syslog, journald, auditd, application-specific log files, and kernel ring buffers, each with different formats, different retention policies, and different levels of tamper resistance.
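
The four ext4 timestamps mentioned above can be inspected directly with GNU coreutils `stat`; a minimal sketch (the temp file is purely illustrative — `%w` prints `-` when the kernel or filesystem does not expose creation time):

```shell
# Print all four inode timestamps for a file in one line.
# %x = atime (access), %y = mtime (content modified),
# %z = ctime (inode changed), %w = crtime/birth (may be "-")
f=$(mktemp)   # throwaway file, purely for illustration
stat -c 'atime=%x | mtime=%y | ctime=%z | crtime=%w' "$f"
rm -f "$f"
```

Whether `crtime` appears depends on the filesystem, the kernel, and the coreutils version — which is exactly why forensic tools that assume its presence mis-handle it.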

This is not a criticism of Linux. The operating system was designed for different priorities — transparency, composability, and administrator control rather than centralized audit and forensic capability. But the consequence for investigators is significant: the evidence model you learned on Windows does not transfer to Linux. You must build a new mental model from the ground up.

What Linux gives you instead

Linux compensates for the lack of centralized forensic infrastructure with something Windows does not provide: pervasive transparency. Every running process is visible as a directory in /proc. Every open file handle, network connection, and memory mapping is exposed as a pseudo-file. The kernel publishes its internal state through /sys and /proc in real time. Configuration is stored in human-readable text files that can be examined, compared, and versioned without special tools.

This transparency means that a skilled Linux investigator can extract information that would require specialized forensic tools on Windows. On Windows, you need a tool like Volatility to examine process memory mappings. On Linux, you can read /proc/[pid]/maps directly. On Windows, you need a tool to enumerate network connections from a memory dump. On Linux, you can read /proc/net/tcp and /proc/net/tcp6 as text files. On Windows, you need a registry parser to determine which services are configured to start at boot. On Linux, you can list the files in /etc/systemd/system/ and read them with cat.
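
As a concrete sketch of what "reading /proc/net/tcp as a text file" means in practice: the address fields are hex-encoded, with the IPv4 address stored little-endian per byte, and bash's base-16 arithmetic is enough to decode them without any external parser (state 0A is LISTEN, 01 is ESTABLISHED):

```shell
# Decode local_address:port for each /proc/net/tcp row using only
# bash. Example: hex 0100007F:0016 decodes to 127.0.0.1:22.
tail -n +2 /proc/net/tcp | while read -r _ local _ state _; do
  hexip=${local%:*}; hexport=${local#*:}
  # IPv4 is stored little-endian: the last hex byte pair is the
  # first octet of the dotted-quad address
  printf '%d.%d.%d.%d:%d state=%s\n' \
    "$((16#${hexip:6:2}))" "$((16#${hexip:4:2}))" \
    "$((16#${hexip:2:2}))" "$((16#${hexip:0:2}))" \
    "$((16#$hexport))" "$state"
done
```

The same layout applies to the remote-address column, and /proc/net/tcp6 uses the equivalent scheme for IPv6 — this is the raw data that ss and netstat present to you.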

The transparency extends to the kernel itself. The /proc/sys/ hierarchy exposes hundreds of kernel parameters as readable files. Network stack configuration, memory management policies, filesystem behavior, security module settings — all readable with cat. An investigator who wants to know whether IP forwarding is enabled (a lateral movement indicator — the attacker may have configured the server as a network pivot) reads a single file:

# Check if the compromised system is configured for IP forwarding
# IP forwarding enabled means the system can route traffic between
# network interfaces — a strong lateral movement indicator
cat /proc/sys/net/ipv4/ip_forward
# Output: 0 = disabled (normal for a server)
#         1 = enabled (suspicious unless this is a router/gateway)

# Check the persistent configuration (survives reboot)
grep -r "ip_forward" /etc/sysctl.conf /etc/sysctl.d/
# If net.ipv4.ip_forward = 1 appears in a config file, someone
# enabled forwarding deliberately — check the file's modification
# timestamp to determine when

On Windows, determining IP forwarding status requires querying the registry (HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\IPEnableRouter) with a registry parser. On Linux, it is a cat command. The information is the same. The access method is fundamentally simpler — but only if you know where to look.

The trade-off: distributed evidence requires correlation

The transparency advantage comes with a significant trade-off. On Windows, a single artifact often tells a complete story. A Prefetch file tells you that a program executed, when it executed (up to 8 timestamps), how many times it executed, and what files it loaded during execution. That is four investigation questions answered by one artifact.

On Linux, there is no equivalent single artifact. Instead, you must correlate evidence across multiple sources: the filesystem timestamps show when a file was created, the authentication logs show who was logged in at that time, the audit logs (if configured) show which process accessed the file, and the bash history (if not deleted) shows what commands were run. Each source provides a fragment. The investigation assembles the fragments into a timeline.

This correlation requirement means Linux investigation is slower when evidence sources are sparse (no auditd, deleted bash history, rotated logs) and potentially more thorough when evidence sources are rich (comprehensive auditd rules, intact logs, preserved memory). The investigator’s skill is knowing which sources to check, in what order, and how to correlate findings across them.
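
A minimal sketch of that correlation — pulling timestamps from independent sources into one working file for manual alignment. The suspect path and log locations are illustrative; a real timeline requires normalizing the different timestamp formats before sorting:

```shell
SUSPECT=/tmp/payload.sh   # hypothetical artifact under investigation
{
  # Source 1: filesystem timestamps of the suspect file
  stat -c 'FS  mtime  %y  %n' "$SUSPECT" 2>/dev/null
  stat -c 'FS  ctime  %z  %n' "$SUSPECT" 2>/dev/null
  # Source 2: successful authentications (Debian path; RHEL: /var/log/secure)
  grep -h 'Accepted' /var/log/auth.log 2>/dev/null | tail -3
  # Source 3: shell history per user (no timestamps unless
  # HISTTIMEFORMAT was set — position in file is the only ordering)
  tail -5 /home/*/.bash_history 2>/dev/null
} > /tmp/timeline_fragment.txt
wc -l /tmp/timeline_fragment.txt
```

Each fragment on its own proves little; the mtime of the payload next to an Accepted login at the same minute is what turns fragments into a narrative.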

EVIDENCE MODEL COMPARISON: WINDOWS vs LINUX

WINDOWS — CENTRALIZED                                      | LINUX — DISTRIBUTED
Registry → centralized system state + execution evidence   | /etc/* + /home/* → distributed config files (plaintext)
Event Log → structured, indexed, unified audit trail       | syslog + journald + auditd → fragmented, multiple formats
Prefetch → program execution history (8 timestamps)        | No equivalent → must correlate timestamps + logs + auditd
NTFS MFT → every file op with nanosecond precision         | ext4 inodes → 4 timestamps, second precision (ns since 4.x)
LSASS → cached credentials in protected memory             | /proc → live process state as readable pseudo-files
Amcache / Shimcache → execution artifact persistence       | auditd (if configured) → syscall-level audit recording
Rich, centralized, tool-dependent, opaque                  | Transparent, distributed, correlation-dependent

Key insight: neither model is superior — they require different investigation approaches.
Figure LX0.1: Evidence model comparison showing the structural difference between Windows (centralized artifacts parsed with specialized tools) and Linux (distributed evidence correlated across multiple sources using native commands).

The filesystem hierarchy as an evidence map

On Windows, the investigator’s mental model is organized around artifacts: “I need the registry, I need the event logs, I need the Prefetch files, I need the MFT.” The artifacts are well-defined, well-documented, and parsed by standard forensic tools.

On Linux, the investigator’s mental model must be organized around the filesystem hierarchy, because every piece of evidence — every log file, every configuration file, every process artifact — exists as a file in a predictable location within the hierarchy. Understanding the hierarchy is understanding where evidence lives.

The key directories from an investigator’s perspective are not the same directories a system administrator prioritizes. A sysadmin thinks about /etc as “where I configure things.” An investigator thinks about /etc as “where I find what the attacker changed.” A sysadmin thinks about /var/log as “where logs rotate out.” An investigator thinks about /var/log as “the primary evidence source for authentication, service activity, and system events — and the first thing the attacker tries to delete.”

The following directories are the investigator’s primary evidence sources. Each is covered in depth in LX0.2 (Where Evidence Lives), but the overview here establishes the mental model:

# The five directories an investigator checks first on any Linux system
# Run these commands to get an immediate sense of what evidence exists

# 1. Authentication and system events — the primary log directory
ls -la /var/log/auth.log* /var/log/secure* /var/log/syslog* 2>/dev/null
# Shows: which authentication logs exist, how many rotated copies,
# file sizes (empty = truncated by attacker), timestamps (last write)

# 2. Process state — what is running RIGHT NOW
ls /proc/ | grep -E '^[0-9]+$' | wc -l
# Shows: number of running processes. Compare against ps output
# later — a discrepancy indicates a rootkit hiding processes

# 3. System configuration — what the attacker may have modified
stat /etc/passwd /etc/shadow /etc/sudoers /etc/ssh/sshd_config 2>/dev/null | grep -E 'File:|Modify:'
# Shows: modification timestamps of critical config files. Recent
# modifications during the compromise window = attacker activity

# 4. User artifacts — command history and SSH keys
ls -la /home/*/.bash_history /root/.bash_history 2>/dev/null
ls -la /home/*/.ssh/authorized_keys /root/.ssh/authorized_keys 2>/dev/null
# Shows: bash history files (empty/missing = anti-forensics) and
# SSH authorized_keys (modified = persistence mechanism deployed)

# 5. Volatile staging areas — attacker tools and payloads
ls -laR /tmp/ /dev/shm/ 2>/dev/null | head -30
# Shows: files in world-writable directories. Attackers stage
# payloads in /tmp and /dev/shm. /dev/shm is RAM-backed and
# contents are lost on reboot — collect immediately

Worked artifact — Initial triage notes:

Adapt this template for your own investigations. Record these findings in the first five minutes of any Linux IR engagement.

System: WEBSRV-NGE01 (Ubuntu 22.04 LTS) — 198.51.100.10
Triage timestamp: 2026-03-28T03:22:00Z
Investigator: [Your name]

Log status: auth.log present, 847KB (not truncated). 4 rotated copies. syslog present. Journal active (systemd).
Process count: /proc shows 193 processes. ps shows 193. No discrepancy (rootkit unlikely).
Config modifications: /etc/passwd modified 2026-03-28T03:19:41Z (within compromise window — investigate). /etc/shadow modified same time. /etc/ssh/sshd_config not recently modified.
User artifacts: /home/a.patel/.bash_history present but empty (0 bytes, modified 03:18 — likely truncated by attacker). /root/.ssh/authorized_keys modified 03:20 (SSH key persistence — investigate).
Staging areas: /dev/shm contains hidden directory .cache/ with binary file worker (1.2MB). /tmp contains payload.sh (4.3KB, created 03:19).

Initial assessment: Active compromise. Attacker gained access ~03:17, modified system accounts, deployed SSH key persistence, and staged tools in /dev/shm. Volatile evidence collection is urgent — /dev/shm contents lost on reboot.

The /proc filesystem: Linux’s unique investigative advantage

/proc deserves specific attention because it is the single most important evidence source during a live Linux investigation — and it has no equivalent on Windows. It is not a real filesystem. It is a virtual filesystem generated by the kernel in real time, exposing the state of every running process, every network connection, every mounted filesystem, and hundreds of kernel parameters.

When you run ps aux, the ps command reads /proc to generate its output. When you run netstat or ss, those commands read /proc/net/ to generate their output. An attacker who installs a rootkit can hook the library calls that ps and netstat use, making their processes and connections invisible to those commands — but the underlying /proc entries still exist and can be read directly.

This is why reading /proc directly is a critical skill: it bypasses the user-space tools that a rootkit can compromise. The investigator who runs ps auxf and sees nothing suspicious may miss three hidden attacker processes. The investigator who enumerates /proc directly finds all processes the kernel knows about — including the ones the rootkit is hiding. This technique is the foundation of rootkit detection covered in LX12 (Memory Forensics), but the principle applies from the first moment you touch a compromised Linux system.

# Direct /proc enumeration — rootkit-resistant process discovery
# This reads the kernel's process list directly, bypassing any
# userspace hooks that a rootkit might have installed

for pid in /proc/[0-9]*/; do
  p=$(basename "$pid")
  cmdline=$(cat /proc/$p/cmdline 2>/dev/null | tr '\0' ' ')
  exe=$(readlink -f /proc/$p/exe 2>/dev/null)
  user=$(stat -c '%U' /proc/$p 2>/dev/null)
  echo "PID=$p USER=$user EXE=$exe CMD=$cmdline"
done > /tmp/proc_enum.txt

# Compare process count: /proc vs ps
proc_count=$(ls -d /proc/[0-9]*/ 2>/dev/null | wc -l)
ps_count=$(ps -e --no-headers | wc -l)
echo "Direct /proc count: $proc_count"
echo "ps command count: $ps_count"
# If proc_count > ps_count, processes are being hidden from ps
# This is a STRONG indicator of a userspace rootkit

Myth: “Linux servers don’t need the same forensic attention as Windows because Linux is inherently more secure.”

Reality: Linux servers are compromised through the same fundamental vectors as Windows systems — weak credentials, unpatched vulnerabilities, misconfigured services, and supply chain attacks. The difference is not in vulnerability but in visibility: most Linux servers lack the EDR agents, centralized logging, and forensic tooling that Windows environments have. The attacker who compromises a Linux web server often operates in an environment with near-zero detection capability. The investigation methodology must compensate for this — using native Linux evidence sources (filesystem timestamps, /proc, log files, auditd if configured) rather than relying on commercial detection tools that may not be deployed.

Decision points: when Linux evidence is richer than Windows

There are specific investigation scenarios where Linux evidence is actually richer than Windows evidence — not despite the distributed model, but because of it.

Process introspection during live response. On Windows, examining a running process’s memory mappings, open files, network connections, and environment variables requires specialized tools (Process Explorer, Handle, or a memory dump analyzed with Volatility). On Linux, every one of these is a readable file in /proc/[pid]/: maps (memory mappings), fd/ (open file descriptors as symlinks), net/tcp (network connections), environ (environment variables), cmdline (command-line arguments), exe (symlink to the actual binary). An investigator with only cat and ls can extract more process-level detail from a live Linux system than from a live Windows system without specialized tools.
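
A sketch of that introspection against a single PID — PID 1 is used as a safe illustration; several of these entries (environ, maps, and often exe) are readable only by the process owner or root:

```shell
pid=1
echo "binary:  $(readlink /proc/$pid/exe 2>/dev/null)"   # actual on-disk binary
echo "cmdline: $(tr '\0' ' ' < /proc/$pid/cmdline)"      # full command line
echo "cwd:     $(readlink /proc/$pid/cwd 2>/dev/null)"   # working directory
ls -l /proc/$pid/fd/ 2>/dev/null | head -5               # open file descriptors
head -3 /proc/$pid/maps 2>/dev/null                      # memory mappings
tr '\0' '\n' < /proc/$pid/environ 2>/dev/null | head -3  # environment (root only)
```

Note that exe is a symlink to the binary as the kernel knows it — if the attacker deleted the binary after launch, readlink shows the path suffixed with "(deleted)", and the file content can still be recovered from /proc/$pid/exe while the process runs.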

Configuration visibility. On Windows, determining the exact configuration of a service requires parsing multiple registry hives, examining Group Policy results, and potentially querying WMI. On Linux, every service configuration is a readable text file — cat /etc/ssh/sshd_config shows the complete SSH configuration. cat /etc/systemd/system/malicious.service shows the complete service definition the attacker created. Nothing is hidden behind a binary database format.

Log tamper detection. On Windows, the Event Log uses a binary format (EVTX) that can be tampered with using specialized tools and the tampering may be difficult to detect. On Linux, the systemd journal uses a binary format with internal checksumming — individual entry modification corrupts the journal’s integrity checks, making tampering detectable. The investigator can verify journal integrity and know whether the log has been modified.
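
Integrity verification is a single command — `journalctl --verify` is standard systemd; note that Forward Secure Sealing, which upgrades the check from structural to cryptographic, must have been enabled beforehand with `journalctl --setup-keys`:

```shell
# Verify every journal file's internal consistency. Each intact file
# reports PASS; a tampered or corrupted file is named explicitly.
if command -v journalctl >/dev/null 2>&1; then
  journalctl --verify 2>&1 | tail -20
else
  echo "journalctl not present (non-systemd host)"
fi
```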

Troubleshooting: common mistakes when transitioning from Windows to Linux IR

Looking for artifacts that do not exist. The most common mistake: searching for a “Linux registry” or “Linux Prefetch” equivalent. There is none. Do not waste time looking for centralized execution evidence — instead, correlate filesystem timestamps, bash history, and auditd records (if available).

Trusting ps and ss output unconditionally. On Windows, Task Manager and netstat are generally trusted (though they can be manipulated). On Linux, ps and ss are the first tools an attacker hooks with a rootkit. Always supplement standard commands with direct /proc reads, especially when rootkit presence is suspected.

Ignoring the systemd journal. Investigators who are familiar with syslog check /var/log/auth.log and /var/log/syslog but forget about journalctl. The journal contains the same events but in a tamper-resistant binary format. If the attacker truncated the plaintext log files, the journal may still contain the evidence.
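
If journald is running, events truncated from the plaintext files can usually be recovered from the journal directly — `_COMM=` is standard journalctl match syntax; the time window below is illustrative:

```shell
# Query sshd events from the binary journal — survives truncation
# of the plaintext /var/log/auth.log
journalctl _COMM=sshd --no-pager 2>/dev/null | tail -20

# Restrict to a compromise window and keep full metadata as JSON
journalctl _COMM=sshd --since "03:00" --until "04:00" \
  --output=json --no-pager 2>/dev/null | tail -5
```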

Not checking log rotation policy first. On Windows, Event Log retention is configured per log and is typically weeks to months. On Linux, log rotation is configured per file in /etc/logrotate.d/ and may be as short as 4 rotations of weekly files — only 28 days of history. If you do not check the rotation policy first, you may assume evidence is unavailable when it actually exists in a rotated compressed file, or you may assume you have months of history when you actually have only weeks.
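
Establishing that evidence horizon takes three commands — the logrotate config path is the Debian convention, and the search pattern is illustrative; `zgrep` reads plain and gzip-compressed logs alike, so the entire retained history can be searched in one pass:

```shell
cat /etc/logrotate.d/rsyslog 2>/dev/null   # rotation policy (Debian path)
ls -la /var/log/auth.log* 2>/dev/null      # surviving rotated copies

# Search current + rotated logs together without decompressing to disk
zgrep -h 'Accepted publickey' /var/log/auth.log* 2>/dev/null | tail -5
```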

Try it: On any Linux system you have access to (a VM, a cloud instance, your WSL installation), run the five-directory check from the command block above. Then run the /proc direct enumeration command. Compare the /proc process count against ps -e --no-headers | wc -l. On a clean system, they should match. On a rootkitted system, /proc shows more processes than ps. You are now reading the same evidence sources you will use in every investigation in this course. Save the output — it is your first baseline for comparison if the system is ever compromised.

Beyond this investigation

The Windows-to-Linux transition is the most common challenge for investigators expanding their capability. The architectural differences described in this subsection apply to every investigation scenario in this course — from SSH brute force (LX4) to container compromise (LX9) to cloud VM analysis (LX10). The investigators who struggle are the ones who keep looking for Windows-equivalent artifacts. The investigators who succeed are the ones who learn the Linux evidence model on its own terms.

The key principle: on Windows, you look for specific artifacts. On Linux, you look for specific files and correlate across multiple sources. The evidence is there — it is just organized differently.

Check your understanding:

  1. Why does Linux lack a direct equivalent to Windows Prefetch, and what investigation technique replaces it?
  2. What is /proc and why is reading it directly more reliable than using commands like ps and netstat during a compromise investigation?
  3. An attacker writes a payload to /dev/shm/payload.elf. Why is this location significant from a forensic perspective?
  4. Which directory is the primary evidence source for authentication events on a Debian-based Linux system?

You're reading the free modules of this course

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
