TR1.4 Linux Evidence Volatility

· Module 1 · Free
Operational Objective
The Linux Evidence Difference: Linux volatile evidence lives in fundamentally different locations than Windows evidence. The /proc virtual filesystem provides real-time process and kernel state — but only while the system is running. Container layers exist only while the container runs — a restart destroys the attacker's modifications. Kernel modules loaded by a rootkit disappear from lsmod output if the rootkit hides itself. auth.log rotation follows cron schedules that vary by distribution. The triage responder who approaches Linux with a Windows mindset will look in the wrong places, use the wrong tools, and miss the evidence that matters.
Deliverable: The Linux evidence volatility map, the 10-command triage sequence using native commands, the LiME memory acquisition procedure, and the container-specific evidence capture workflow.
Estimated completion: 30 minutes
LINUX EVIDENCE VOLATILITY — /proc, memory, containers, logs

LIVE-ONLY (Tier 1-2): /proc filesystem · kernel modules · network state (ss) · memory (LiME). Gone on reboot — capture immediately.
CONTAINER-EPHEMERAL (Tier 2): container writable layer · env vars · runtime config · mounted secrets. Gone on container restart.
LOG-ROTATION (Tier 3): auth.log · syslog · daemon.log · journald · application logs. Rotation: daily to weekly by distro.
ATTACKER ANTI-FORENSICS (any tier): LD_PRELOAD rootkit hides processes from ps · log truncation (> /var/log/auth.log) · timestomping (touch -t) · kernel-module rootkit hides from lsmod · bash_history clearing.

Figure TR1.4 — Linux evidence volatility. Three primary categories plus attacker anti-forensics that can destroy evidence at any tier. Container-ephemeral evidence is unique to Linux — it does not exist on Windows endpoints.

Tier 1-2: live-only evidence

The /proc virtual filesystem

/proc is the single most valuable evidence source for Linux triage. It is a virtual filesystem — not stored on disk but generated by the kernel in real time. Every running process has a directory at /proc/PID/ containing:

/proc/PID/exe — a symbolic link to the actual executable. Even if the attacker deletes the binary from disk, /proc/PID/exe points to the deleted file and the kernel retains the binary in memory. This is the triage responder’s best defence against binary deletion: the attacker removed their tool from /tmp/backdoor, but /proc/PID/exe still shows (deleted) with the original path, and cp /proc/PID/exe /IR/recovered_binary copies the in-memory binary to the evidence folder.
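
This pattern is worth sweeping for systematically. The sketch below flags every process whose backing executable has been unlinked (the /IR/ evidence path is an assumption from this module; the recovery line is left commented):

```shell
# Flag processes whose on-disk binary was deleted while still running.
# The kernel marks the symlink target with "(deleted)".
for pid in /proc/[0-9]*; do
    exe=$(readlink "$pid/exe" 2>/dev/null) || continue
    case "$exe" in
        *'(deleted)'*)
            echo "PID $(basename "$pid") running deleted binary: $exe"
            # Recover the in-memory copy to the evidence mount:
            # cp "$pid/exe" "/IR/recovered_$(basename "$pid")"
            ;;
    esac
done
```

Run as root to cover all processes; as an unprivileged user you only see your own.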

/proc/PID/cmdline — the full command line used to launch the process. Reveals attacker commands including arguments, flags, and target files. A cryptominer masquerading as [kworker/0:0] in the ps output has its real command line in /proc/PID/cmdline.
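
Note that cmdline's arguments are NUL-separated, so a plain cat prints them run together. Translating NULs to spaces makes the output readable, and comparing it against /proc/PID/comm (the short process name ps displays) exposes the masquerade. A harmless demonstration against the current shell:

```shell
# cmdline is NUL-separated — render it readably, then compare with comm.
pid=$$  # example target: this shell
echo "comm:    $(cat /proc/$pid/comm)"
echo "cmdline: $(tr '\0' ' ' < /proc/$pid/cmdline)"
```

A process whose comm says kworker but whose cmdline shows a miner binary fails this comparison instantly.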

/proc/PID/fd/ — open file descriptors. Shows every file, socket, and pipe the process has open. An exfiltration tool with an open socket to an external IP is visible here even if the process name is innocuous.

/proc/PID/maps — memory map. Shows loaded libraries, including any injected shared objects (LD_PRELOAD attacks). A rootkit loaded via LD_PRELOAD appears in the maps of every process it hooks.
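
A quick triage sweep over maps can look for shared objects loaded from writable locations — a common (though not the only) LD_PRELOAD signature. The path list here is an assumption; tune it to your environment:

```shell
# Flag processes with shared objects mapped from world-writable paths.
for pid in /proc/[0-9]*; do
    if grep -qE '/(tmp|dev/shm|var/tmp)/.*\.so' "$pid/maps" 2>/dev/null; then
        echo "PID $(basename "$pid"): suspicious library mapping"
        grep -E '/(tmp|dev/shm|var/tmp)/.*\.so' "$pid/maps"
    fi
done
```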

The /proc timing sensitivity. /proc is updated in real time by the kernel. The data you read at 14:30:00 may differ from the data at 14:30:01 — a process that terminates between the two reads disappears from /proc entirely. The capture script below iterates through all PIDs sequentially, taking 5-15 seconds to complete. During this iteration, processes may start or stop. The capture is a BEST-EFFORT snapshot, not an atomic point-in-time image. For an atomic process snapshot, a memory dump (LiME) captures the process table as it existed at the exact moment the dump began — no processes can appear or disappear during the dump because it captures the kernel’s data structures directly.

The practical implication: the /proc capture script provides a GOOD snapshot that is sufficient for triage classification. The memory dump provides the PERFECT snapshot that the investigation team uses for deep analysis. Both are valuable. Capture both when possible — the /proc script takes 15 seconds and the memory dump takes minutes, so starting the /proc capture while waiting for LiME to load is the optimal workflow.

All /proc content is destroyed on reboot. It cannot be captured as a static copy — it must be read while the system is running.

Triage capture:

# Capture /proc state for all processes
mkdir -p /IR/proc_state
for pid in /proc/[0-9]*; do
    p=$(basename $pid)
    mkdir -p /IR/proc_state/$p
    cat $pid/cmdline > /IR/proc_state/$p/cmdline 2>/dev/null
    ls -la $pid/exe > /IR/proc_state/$p/exe_link 2>/dev/null
    cat $pid/status > /IR/proc_state/$p/status 2>/dev/null
    ls -la $pid/fd/ > /IR/proc_state/$p/fd_list 2>/dev/null
done

Memory (LiME)

LiME (Linux Memory Extractor) is a loadable kernel module that dumps physical memory to a file. It is the Linux equivalent of WinPMem. Unlike Windows memory acquisition tools, LiME must be compiled for the target kernel version — or pre-compiled for the specific distribution and kernel. Pre-staging LiME on production servers (or having pre-compiled modules for each kernel version in the go bag) is essential.

# LiME memory acquisition
insmod lime-$(uname -r).ko "path=/IR/memdump.lime format=lime"

The dump file size equals physical RAM. On a server with 64 GB RAM, this takes 5-10 minutes and requires 64 GB of free space on the output device. Write to an external drive or network mount — not the local disk.
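
Once the dump completes, hash it immediately so the investigation team can later prove the file was not altered in transit — a standard chain-of-custody step (paths follow the assumed /IR/ evidence layout from above):

```shell
# Record an integrity hash alongside the dump, if it exists at the
# assumed path from the acquisition step above.
if [ -f /IR/memdump.lime ]; then
    sha256sum /IR/memdump.lime | tee /IR/memdump.lime.sha256
fi
# Later verification by the investigation team:
# sha256sum -c /IR/memdump.lime.sha256
```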

If LiME is not pre-staged and cannot be loaded, capture what you can from /proc (processes, network, open files) and document that memory was not acquired. The investigation team will work with available evidence.

Network connections

# Active connections with process ownership
ss -tlnp  # TCP listening
ss -tnp   # TCP established
ss -ulnp  # UDP listening

The -p flag shows the owning process. An established connection to an external IP owned by an unexpected process (a Python script running from /tmp, a process named [kworker] with network connections) is a strong triage indicator. Cross-reference external IPs against the cloud triage findings — if the same IP appears in both the cloud sign-in logs and the Linux network connections, the cross-environment attack is confirmed.
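
To build that cross-reference list quickly, strip the port from the peer column. This is an IPv4-oriented sketch — IPv6 peers contain colons and need extra handling:

```shell
# Unique remote IPv4 peers from established TCP connections.
# ss -tn prints established sockets; column 5 is "Peer Address:Port".
if command -v ss >/dev/null; then
    ss -tn | awk 'NR>1 {print $5}' | sed 's/:[^:]*$//' | sort -u
fi
```

Feed the resulting list into the cloud sign-in log search from the cloud triage module.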

Loaded kernel modules

# Currently loaded kernel modules
lsmod > /IR/modules.txt
# Check for recently loaded modules (potential rootkit)
ls -lt /lib/modules/$(uname -r)/extra/ 2>/dev/null

A rootkit loaded as a kernel module can hide itself from lsmod. The triage check: compare the lsmod output against the expected module list for a clean system of the same distribution and version. Any unexpected module warrants investigation. If a rootkit IS hiding from lsmod, it will still be visible in a LiME memory dump analysed with Volatility3’s linux.lsmod plugin — another reason the memory dump is critical.
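
The baseline comparison can be scripted with comm. The baseline file here is an assumption — capture it once from a clean reference build of the same distribution and kernel version:

```shell
# Modules loaded on the live system but absent from a known-good baseline.
# /IR/modules_baseline.txt is assumed: one module name per line, sorted.
if command -v lsmod >/dev/null && [ -f /IR/modules_baseline.txt ]; then
    lsmod | awk 'NR>1 {print $1}' | sort > /tmp/modules_current.txt
    comm -23 /tmp/modules_current.txt /IR/modules_baseline.txt
fi
```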

Tier 2: container-ephemeral evidence

Containerised workloads add a volatility tier that traditional Linux systems do not have. A Docker container’s writable layer — the filesystem modifications made since the container started — is destroyed on container restart. Kubernetes pods are designed to be ephemeral; a CrashLoopBackOff or a rolling update destroys the current pod’s state.

Docker container evidence

# List running containers
docker ps --no-trunc

# Files modified in the container (attacker's changes)
docker diff CONTAINER_ID

# Container configuration (env vars, mounted volumes, network)
docker inspect CONTAINER_ID

# Container logs (stdout/stderr since start)
docker logs CONTAINER_ID --timestamps

# Copy a file out of the container for analysis
docker cp CONTAINER_ID:/tmp/suspicious_file /IR/container_evidence/

docker diff is the most valuable container triage command. It shows every file added (A), changed (C), or deleted (D) in the container’s writable layer since it was created from the image. An attacker who dropped a reverse shell in /tmp/ or modified /etc/passwd to add a user shows up immediately in the diff output.
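
On a busy container the diff output can run to hundreds of lines, so a triage filter helps. The high-risk path list below is an assumption — adjust it to the workload:

```shell
# Filter docker-diff-formatted lines: keep additions (A) and changes (C)
# under high-risk paths, drop routine cache/log churn.
flag_risky_changes() {
    awk '$1 ~ /^[AC]$/ && $2 ~ /^\/(tmp|etc|root)\//'
}
# Usage: docker diff CONTAINER_ID | flag_risky_changes
```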

Kubernetes pod evidence

# Pod status and recent events
kubectl describe pod POD_NAME -n NAMESPACE

# Pod logs
kubectl logs POD_NAME -n NAMESPACE --timestamps

# Exec into pod for live triage
kubectl exec -it POD_NAME -n NAMESPACE -- /bin/sh

# Check service account token (potential lateral movement credential)
kubectl exec POD_NAME -n NAMESPACE -- cat /var/run/secrets/kubernetes.io/serviceaccount/token

The service account token check is critical: a compromised pod with a service account token can query the Kubernetes API, potentially accessing secrets, creating new pods, or escalating privileges within the cluster. This is the container equivalent of finding cached domain admin credentials on a Windows workstation.
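
The token itself is a JWT: its middle dot-separated segment is base64url-encoded JSON that names the namespace and service account, so decoding it tells you exactly which identity the attacker holds. A small helper (illustrative sketch; assumes a base64 binary with -d support):

```shell
# Decode a JWT payload (middle segment, base64url with padding stripped).
jwt_payload() {
    seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
    # Restore base64 padding to a multiple of 4 before decoding
    while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
    printf '%s' "$seg" | base64 -d
}
# Usage inside the pod:
# jwt_payload "$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"
```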

Tier 3: log-rotation evidence

auth.log / secure

The authentication log records SSH logins, sudo usage, PAM authentication events, and account creation. Rotation schedule: typically weekly on Debian/Ubuntu (logrotate), daily on some RHEL configurations. Check /etc/logrotate.d/rsyslog or /etc/logrotate.d/syslog for the actual rotation schedule on the target system.

# Current auth log + rotated copies
cat /var/log/auth.log > /IR/auth.log
cat /var/log/auth.log.1 > /IR/auth.log.1 2>/dev/null
zcat /var/log/auth.log.2.gz > /IR/auth.log.2 2>/dev/null

journald

systemd’s journal provides structured logging with metadata. If configured with Storage=persistent (in /etc/systemd/journald.conf), logs survive reboot and are retained per the configured size limit. If configured with Storage=volatile (default on some distributions), logs are stored in memory and lost on reboot — making them Tier 1-2 volatile.

# Check journald storage mode
grep Storage /etc/systemd/journald.conf

# Export journal entries for the incident window
journalctl --since "2026-04-06 00:00" --until "2026-04-06 12:00" > /IR/journal_export.txt

# SSH-specific entries
journalctl _SYSTEMD_UNIT=sshd.service --since "2026-04-06 00:00" > /IR/sshd_journal.txt

bash_history

User command history — what commands the attacker typed. Located at ~/.bash_history for each user. The attacker can (and often does) clear history: history -c && rm ~/.bash_history. The triage responder should capture bash_history immediately — before the attacker’s anti-forensics execute.

# Capture bash history for all users
for home in /home/* /root; do
    user=$(basename $home)
    cp $home/.bash_history /IR/bash_history_${user} 2>/dev/null
done

Linux evidence sources beyond the basics

The /proc, memory, containers, and logs covered above are the core Linux evidence categories. Several additional evidence sources deserve triage-level awareness:

SSH authorized_keys. Every user account’s ~/.ssh/authorized_keys file lists the public keys that can authenticate without a password. An attacker who adds their own SSH public key to a user’s authorized_keys has persistence that survives password changes — the key-based authentication does not require the password. The triage responder should capture all authorized_keys files:

for home in /home/* /root; do
    user=$(basename $home)
    if [ -f "$home/.ssh/authorized_keys" ]; then
        cp "$home/.ssh/authorized_keys" /IR/authorized_keys_${user}
        # Check for recently added keys
        stat "$home/.ssh/authorized_keys" >> /IR/ssh_key_timestamps.txt
    fi
done

At NE, the CHAIN-DRIFT attacker added an SSH key to the svc-dbadmin authorized_keys on the database server. The key provided persistent access that survived three password rotations before an investigation discovered it during a comprehensive credential audit. The triage responder who captures authorized_keys files during triage enables the investigation team to identify this persistence mechanism immediately.

Cron jobs (system and user). Beyond the user-level crontab checked in the basic triage, system-level cron jobs exist in /etc/crontab, /etc/cron.d/, /etc/cron.daily/, /etc/cron.hourly/, and /var/spool/cron/crontabs/. An attacker who creates a system cron job has persistence that runs regardless of which user is logged in — and system cron jobs are less frequently audited than user crontabs.

# Capture ALL cron configurations
cat /etc/crontab > /IR/crontab_system.txt
ls -la /etc/cron.d/ > /IR/cron_d_listing.txt
for f in /etc/cron.d/*; do cat "$f" >> /IR/cron_d_contents.txt; done
ls -la /var/spool/cron/crontabs/ > /IR/spool_crontabs.txt 2>/dev/null

Systemd timers. Modern Linux distributions use systemd timers as an alternative to cron. An attacker who creates a systemd timer service has persistence that is managed by systemd — and systemd timers support more complex scheduling than cron. List all timers: systemctl list-timers --all > /IR/systemd_timers.txt. Check for recently created timer units: find /etc/systemd/system/ /usr/lib/systemd/system/ -name "*.timer" -mtime -7 > /IR/recent_timers.txt.

LD_PRELOAD and library injection. The LD_PRELOAD environment variable forces the dynamic linker to load a specified shared library before all others. An attacker who sets LD_PRELOAD to their malicious library can hook any function in any dynamically linked program — a userspace rootkit. Check for LD_PRELOAD:

# Check for LD_PRELOAD persistence
cat /etc/ld.so.preload > /IR/ld_preload.txt 2>/dev/null
grep -r "LD_PRELOAD" /etc/environment /etc/profile /etc/profile.d/ > /IR/ld_preload_env.txt 2>/dev/null
# Check running processes for LD_PRELOAD
for pid in /proc/[0-9]*; do
    grep -q "LD_PRELOAD" "$pid/environ" 2>/dev/null && echo "PID $(basename "$pid") has LD_PRELOAD"
done > /IR/ld_preload_procs.txt 2>/dev/null

If /etc/ld.so.preload contains any entry, it is almost certainly malicious — legitimate software does not use this file. The triage responder should flag this as a DEFINITIVE compromise indicator.
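
A related cross-view check catches userspace rootkits that hook directory listing: enumerate PIDs directly from /proc and compare against what ps reports. This is a heuristic, not proof — short-lived processes cause benign one-off differences, so treat a mismatch that persists across repeated runs as the indicator:

```shell
# Cross-view PID comparison: /proc enumeration vs ps output.
# A PID present only in /proc is a candidate hidden process.
if command -v ps >/dev/null; then
    ps -e -o pid= | tr -d ' ' | sort > /tmp/ps_pids.txt
    ls /proc | grep -E '^[0-9]+$' | sort > /tmp/proc_pids.txt
    comm -13 /tmp/ps_pids.txt /tmp/proc_pids.txt
fi
```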

Container image layer analysis. For Docker environments, the docker history IMAGE_ID command shows every layer in the container’s image — revealing when and how the image was built. If the attacker pushed a modified image to the container registry and the orchestrator pulled it, the history shows the modification. Compare the running container’s image layers against the known-good image from the registry: docker inspect --format='{{.Image}}' CONTAINER_ID gives the image hash, which should match the hash of the expected image version. A mismatch indicates the container is running a modified or attacker-supplied image.
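
That hash comparison can be wrapped in a tiny helper so it is applied consistently across containers. The image IDs in the usage line are placeholders — the expected value comes from your registry records:

```shell
# Compare the observed image ID against the expected one from the registry.
check_image() {
    observed="$1"; expected="$2"
    if [ "$observed" = "$expected" ]; then
        echo "image OK"
    else
        echo "MISMATCH: running $observed, expected $expected"
    fi
}
# Usage (IDs are placeholders):
# check_image "$(docker inspect --format='{{.Image}}' CONTAINER_ID)" "sha256:abc..."
```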

Kubernetes secrets and RBAC. In Kubernetes environments, the triage responder should check: which secrets the compromised pod has access to (kubectl get secrets -n NAMESPACE), what RBAC permissions the pod’s service account has (kubectl auth can-i --list --as=system:serviceaccount:NAMESPACE:SA_NAME), and whether any cluster-level resources have been modified recently (kubectl get clusterrolebindings -o yaml | grep -A5 "creationTimestamp" | head -20). A pod with ClusterAdmin permissions is the Kubernetes equivalent of a domain admin compromise — the attacker can access any resource in the cluster.

The 5-minute Linux triage capture

Minute 0-1 (Tier 1 — native commands):

ps auxf > /IR/processes.txt
ss -tnp > /IR/connections.txt
last -i > /IR/logins.txt
w > /IR/who.txt

Minute 1-3 (Tier 1-2 — /proc and memory): Start LiME if available. While it runs, capture /proc state and kernel modules.

insmod lime-$(uname -r).ko "path=/IR/memdump.lime format=lime" &
lsmod > /IR/modules.txt

Run the /proc capture script from above.

Minute 3-5 (Tier 2-3 — containers and logs): If containers are running: docker ps, docker diff, docker logs. Capture auth.log, bash_history, crontab, and systemd services.

cat /var/log/auth.log > /IR/auth.log
crontab -l > /IR/crontab_root.txt 2>/dev/null
for user in /home/* /root; do
    u=$(basename $user)
    crontab -l -u $u > /IR/crontab_${u}.txt 2>/dev/null
    cp $user/.bash_history /IR/bash_history_${u} 2>/dev/null
done
systemctl list-unit-files --type=service --state=enabled > /IR/services.txt
find /tmp /dev/shm /var/tmp -type f -mtime -1 > /IR/recent_tmp.txt

Worked artifact: Linux triage collection script

Save as linux_triage.sh, make executable, run as root from an external USB or NFS mount. All output goes to /IR/ (create this directory on the external mount, not on the compromised system’s disk).

Try it: check your Linux server's log retention

SSH to a Linux server you manage and check: cat /etc/logrotate.d/rsyslog (or syslog). How many rotated copies are kept? What is the rotation frequency? Then check: grep Storage /etc/systemd/journald.conf. Is journald persistent or volatile? If volatile, auth events are lost on reboot. These two checks tell you how far back your Linux evidence extends and whether a reboot destroys your authentication logs.

Compliance Myth: "Linux servers do not need memory acquisition — logs are sufficient"

The myth: Linux forensics is log-based. auth.log, syslog, and application logs provide complete evidence. Memory acquisition is a Windows technique.

The reality: Log-based forensics misses fileless attacks, in-memory rootkits, decrypted payloads, and active network connections that are not logged. A kernel module rootkit that hides processes from ps and connections from ss is invisible in any log — but visible in a LiME memory dump analysed with Volatility3. An attacker who clears bash_history and truncates auth.log eliminates all log evidence — but the commands they typed and the connections they established remain in memory until the next reboot. Memory acquisition on Linux is not common in practice because LiME requires pre-staging. This course makes it standard practice.

Troubleshooting

“LiME is not compiled for this kernel version.” This is the most common Linux memory acquisition failure. LiME must match the running kernel. Pre-compile for every kernel version deployed in your environment, or use AVML (Microsoft’s Acquire Volatile Memory for Linux), which reads physical memory through /dev/crash or /proc/kcore without requiring a loadable kernel module. AVML is deployable on any Linux system without pre-compilation — making it the preferred fallback when a matching LiME module is not available. If neither tool is available, maximise /proc capture and log collection — these are the best alternatives when memory acquisition is not possible. The immutable infrastructure pattern (containers, auto-scaling groups) adds urgency: the orchestrator may replace the server before the responder arrives, destroying all evidence. When container compromise is suspected, capture docker diff, docker inspect, and docker logs IMMEDIATELY — the writable layer has the shortest lifespan of any Linux evidence category.

“The attacker cleared bash_history and truncated auth.log.” Check for rotated copies: auth.log.1, auth.log.2.gz. Check journald if configured with persistent storage. Check the Sentinel copy of the logs if a syslog forwarder was configured. If all local log evidence is destroyed, the memory dump (if captured before clearing) may still contain the attacker’s commands in the bash process’s memory space — Volatility3’s linux.bash plugin recovers command history from memory.

“I do not have root access to the Linux server.” You need root (or a user with sudo) to capture /proc state for all processes, acquire memory with LiME, and read auth.log. If you cannot obtain root access during triage, document the limitation and escalate to the Linux admin team with specific requests: “Run these 5 commands as root and send me the output files.” The commands from the 5-minute triage sequence do not modify the system — they only read.

Beyond this investigation: Linux evidence volatility connects to **Practical Linux IR** (where LiME dumps are analysed with Volatility3 in depth, and /proc captures are correlated with disk forensics), **Detection Engineering** (where syslog forwarding to Sentinel ensures log evidence survives local clearing), and **SOC Operations** (where the syslog forwarding architecture determines which Linux evidence is available for triage queries in Sentinel).
