LX1.8 Container and Kubernetes Evidence Collection

3-4 hours · Module 1 · Free

Container Evidence Collection: Racing the Restart Clock

Learning objective: Master the complete evidence collection workflow for Docker containers and Kubernetes pods — from detecting that a container is compromised through capturing the ephemeral filesystem, collecting runtime and orchestrator logs, and preserving evidence from persistent volumes. Understand the unique forensic challenges of container evidence: filesystem ephemerality, namespace isolation, and the separation between container-layer and host-layer evidence.

Why Container Collection Is Different

Container evidence collection violates the fundamental assumption of forensics: that evidence persists until collected. On a bare-metal server, evidence exists on a physical disk that survives reboots, power cycles, and — if you do nothing — indefinitely. On a container, the writable filesystem layer exists only while the container is running. When a Kubernetes pod is terminated and rescheduled, the new pod starts from the pristine base image with no trace of what happened in the previous instance. The attacker’s web shell, the modified configuration files, the bash history, the downloaded tools — all erased by a container restart.

This creates an urgency that does not exist in other investigation environments. On a bare-metal server, you can wait hours before beginning collection — the evidence is stable. On a container, you may have minutes before a liveness probe failure, an autoscaler decision, or a manual restart destroys the evidence.

Docker Container Collection — Complete Workflow

The Docker evidence collection workflow captures three categories of evidence: the container filesystem (what the attacker modified), the container logs (what the application recorded), and the container metadata (how the container was configured).

# Step 1: Confirm the container is still running
docker ps --filter name=compromised-app
# If the container is not running, skip to "Dead Container Recovery"

# Step 2: Capture container metadata (configuration, env vars, mounts)
docker inspect compromised-app > evidence/docker_inspect.json

# Step 3: Capture container logs (stdout/stderr — application output)
docker logs --timestamps compromised-app > evidence/docker_logs.txt 2>&1

# Step 4: Identify what the attacker changed (diff from base image)
docker diff compromised-app > evidence/docker_diff.txt
# Output: A = added, C = changed, D = deleted
# Every entry is a file the attacker created or modified

# Step 5: Export the complete container filesystem
docker export compromised-app > evidence/container_filesystem.tar
# This captures the merged view — base image + writable layer

# Step 6: Copy specific directories for targeted analysis
docker cp compromised-app:/var/log/ evidence/container_var_log/
docker cp compromised-app:/tmp/ evidence/container_tmp/
docker cp compromised-app:/etc/crontab evidence/container_crontab 2>/dev/null
docker cp compromised-app:/root/.bash_history evidence/container_bash_history 2>/dev/null

# Step 7: Capture the running process state inside the container
docker exec compromised-app ps auxf > evidence/container_processes.txt 2>/dev/null
docker exec compromised-app ss -tlnp > evidence/container_network.txt 2>/dev/null
docker exec compromised-app cat /etc/resolv.conf > evidence/container_dns.txt 2>/dev/null

# Step 8: Check for container escape indicators
docker exec compromised-app ls -la /var/run/docker.sock 2>/dev/null && \
  echo "WARNING: Docker socket mounted inside container" > evidence/escape_indicator.txt
docker exec compromised-app cat /proc/1/cgroup > evidence/container_cgroup.txt 2>/dev/null

The docker diff output is uniquely valuable for container forensics. Because the base image is known and immutable (it is the image you deployed), every file that appears as Added or Changed in the diff is either legitimate application activity (log files, cache files, session data) or attacker activity. Separate the two by comparing against expected application behavior. A PHP web shell appearing as an Added file in the web root is clearly attacker activity. A modified /etc/passwd appearing as a Changed file indicates the attacker created a new user account.
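A quick triage pass can surface the entries most likely to be attacker activity. The filter below is a sketch using a fabricated sample; in a real investigation, point it at the evidence/docker_diff.txt captured in Step 4 and tune the path list to your application's expected write locations.

```shell
# Illustrative triage filter: surface diff entries under directories where
# legitimate application writes are unusual. The sample input is fabricated;
# substitute evidence/docker_diff.txt from Step 4 in practice.
cat > /tmp/docker_diff_sample.txt <<'EOF'
A /var/www/html/shell.php
C /etc/passwd
A /var/log/nginx/access.log
EOF
grep -E '^[ACD] /(etc|root|usr/s?bin|var/www)' /tmp/docker_diff_sample.txt
# Matches the web shell and the modified /etc/passwd; the nginx access log
# (normal application activity) is filtered out.
```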

Dead Container Recovery

If the container has already been terminated, your evidence options narrow significantly but do not disappear entirely.

Container runtime logs: Docker stores container logs even after termination (until the container is removed with docker rm). If the container was terminated but not removed: docker logs <container-id> > evidence/dead_container_logs.txt.

Docker data directory: The container’s writable layer persists on the host filesystem in /var/lib/docker/ until the container is removed. The location depends on the storage driver: /var/lib/docker/overlay2/<layer-id>/diff/ for overlay2 (the default on modern Docker installations). The layer ID is in the container’s inspect output. If the container was terminated but not removed, you can browse the writable layer directly on the host.
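If the container still exists, `docker inspect --format '{{.GraphDriver.Data.UpperDir}}' <container>` returns the writable-layer path directly. The same value can also be recovered offline from the Step 2 inspect capture; a minimal sketch, using a fabricated compact-JSON fragment (real inspect output is pretty-printed, so adjust the pattern accordingly):

```shell
# Extract the overlay2 writable-layer path from a saved inspect capture.
# The JSON fragment below is a fabricated stand-in for evidence/docker_inspect.json.
cat > /tmp/inspect_sample.json <<'EOF'
{"GraphDriver":{"Name":"overlay2","Data":{"UpperDir":"/var/lib/docker/overlay2/ab12cd34/diff"}}}
EOF
UPPER=$(grep -o '"UpperDir":"[^"]*"' /tmp/inspect_sample.json | cut -d'"' -f4)
echo "$UPPER"
# Archive the layer directly on the host (requires root):
# sudo tar -C "$UPPER" -cf evidence/writable_layer.tar .
```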

Persistent volumes: Any data stored on Docker volumes (docker volume ls) or bind mounts survives container termination. Check the container’s inspect output for Mounts — any volume or bind mount paths still contain their data on the host filesystem.
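The mount sources can be enumerated the same way, from either a live `docker inspect` or the saved capture. A sketch with a fabricated Mounts fragment; each Source path printed still holds data on the host after the container is gone:

```shell
# Pull mount source paths from a saved inspect capture. Fabricated sample;
# substitute evidence/docker_inspect.json from Step 2 in practice.
cat > /tmp/mounts_sample.json <<'EOF'
{"Mounts":[{"Type":"volume","Source":"/var/lib/docker/volumes/appdata/_data","Destination":"/data"},
{"Type":"bind","Source":"/srv/app-config","Destination":"/etc/app"}]}
EOF
grep -o '"Source":"[^"]*"' /tmp/mounts_sample.json | cut -d'"' -f4
# Prints each host-side path worth collecting:
#   /var/lib/docker/volumes/appdata/_data
#   /srv/app-config
```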

Container image: Even if the container is gone, the image it was running from is still on the host (unless explicitly removed). docker images lists available images. docker history <image> shows the image build layers. Compare the image against the known-good image from your container registry — if the image was tampered with (the attacker modified the image and redeployed), the layer digests will differ.
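The tamper check itself is a straight digest comparison. A sketch with placeholder digest values; on a live host the local value comes from docker images --digests and the known-good value from your registry:

```shell
# Compare the cached image digest against the known-good registry digest.
# Both values are placeholders for illustration.
LOCAL_DIGEST="sha256:1111aaaa"   # placeholder - from: docker images --digests
KNOWN_GOOD="sha256:2222bbbb"     # placeholder - from the container registry
if [ "$LOCAL_DIGEST" = "$KNOWN_GOOD" ]; then
  echo "image matches the registry digest"
else
  echo "DIGEST MISMATCH - image may have been tampered with and redeployed"
fi
```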

Kubernetes Pod Collection

Kubernetes adds an orchestration layer above Docker/containerd. The collection workflow must capture evidence from both the pod (container-level) and the cluster (orchestrator-level).

# Pod-level evidence
kubectl describe pod suspicious-pod -n production > evidence/pod_describe.txt
kubectl logs suspicious-pod -n production --all-containers > evidence/pod_logs.txt 2>&1
kubectl logs suspicious-pod -n production --previous > evidence/pod_logs_previous.txt 2>/dev/null
kubectl cp production/suspicious-pod:/var/log/ evidence/pod_var_log/
kubectl cp production/suspicious-pod:/tmp/ evidence/pod_tmp/

# Exec into the pod for live volatile collection (no -t flag: allocating a
# TTY breaks output redirection to a file)
kubectl exec suspicious-pod -n production -- ps auxf > evidence/pod_processes.txt 2>/dev/null
kubectl exec suspicious-pod -n production -- ss -tlnp > evidence/pod_network.txt 2>/dev/null
kubectl exec suspicious-pod -n production -- cat /proc/1/cgroup > evidence/pod_cgroup.txt 2>/dev/null

# Cluster-level evidence
kubectl get events -n production --field-selector involvedObject.name=suspicious-pod --sort-by='.lastTimestamp' > evidence/k8s_events.txt
kubectl get pod suspicious-pod -n production -o yaml > evidence/pod_yaml.txt

# Service account token (was it stolen for RBAC abuse?)
kubectl exec suspicious-pod -n production -- cat /var/run/secrets/kubernetes.io/serviceaccount/token > evidence/sa_token.txt 2>/dev/null

# Kubernetes audit log (if enabled — cluster admin must provide access)
# Location varies: /var/log/kubernetes/audit.log, or cloud provider's log service

The kubectl logs --previous flag retrieves logs from the previous container instance in the same pod — if the container crashed and restarted, the previous instance’s logs may contain the evidence of the attacker’s activity that triggered the crash.

The service account token collection (/var/run/secrets/kubernetes.io/serviceaccount/token) is critical for detecting RBAC abuse. If the attacker stole the pod’s service account token, they can make API calls to the Kubernetes control plane — creating pods, reading secrets, modifying deployments. The audit log records these API calls, but you need the token to correlate them with the compromised pod.
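Because the token is a standard JWT, its payload (the second dot-separated, base64url-encoded field) can be decoded offline to confirm which service account and namespace it belongs to. A sketch using a fabricated token; a collected token from evidence/sa_token.txt decodes the same way:

```shell
# Decode a JWT's payload. The token is fabricated here for illustration;
# real service account tokens carry signed claims including the
# "sub" (system:serviceaccount:<namespace>:<name>) field.
CLAIMS='{"sub":"system:serviceaccount:production:default"}'
TOKEN="header.$(printf '%s' "$CLAIMS" | base64 | tr -d '\n=' | tr '+/' '-_').sig"
PAYLOAD=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore base64 padding before decoding
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
printf '%s' "$PAYLOAD" | base64 -d
# prints: {"sub":"system:serviceaccount:production:default"}
```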

Container Escape Detection

Container escape — where the attacker breaks out of the container to the host — fundamentally changes the investigation scope. What was a container-level incident becomes a host-level incident, potentially affecting every container on the node.

Evidence of container escape:

Docker socket mount: ls -la /var/run/docker.sock inside the container. If the Docker socket is mounted into the container, the attacker can issue Docker commands against the host’s Docker daemon — creating new privileged containers, accessing other containers’ filesystems, and executing commands on the host. This is the most common container escape vector.
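A simple presence check makes this concrete. The snippet below prints a reachable-API warning when the socket exists; the curl invocation shown in the warning is one well-known way an attacker drives the host daemon through a mounted socket:

```shell
# Check for an exposed Docker socket. If present, the daemon's REST API is
# reachable over the unix socket - no docker CLI needed.
SOCK=/var/run/docker.sock
if [ -S "$SOCK" ]; then
  echo "Docker socket exposed - the daemon API is reachable, e.g.:"
  echo "  curl --unix-socket $SOCK http://localhost/containers/json"
else
  echo "no Docker socket at $SOCK"
fi
```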

Privileged container: check docker inspect output for "Privileged": true. A privileged container has nearly unrestricted access to the host kernel — the attacker can load kernel modules, access host devices, and modify host filesystems.

Excessive capabilities: check for CAP_SYS_ADMIN, CAP_SYS_PTRACE, CAP_NET_ADMIN in the container’s capability set. These capabilities enable escape paths.
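Both checks can be run against the Step 2 inspect capture. A sketch using a fabricated compact-JSON fragment (real inspect output is pretty-printed; on a live host, docker inspect --format '{{.HostConfig.Privileged}}' answers the first question directly):

```shell
# Check a saved inspect capture for privileged mode and risky added
# capabilities. Fabricated fragment; use evidence/docker_inspect.json.
cat > /tmp/hostconfig_sample.json <<'EOF'
{"HostConfig":{"Privileged":true,"CapAdd":["SYS_ADMIN","NET_ADMIN"]}}
EOF
grep -q '"Privileged":true' /tmp/hostconfig_sample.json && echo "privileged container"
grep -oE 'SYS_ADMIN|SYS_PTRACE|NET_ADMIN' /tmp/hostconfig_sample.json
```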

Host process visibility: from inside the container, run cat /proc/1/cgroup. If the output shows the host's cgroup hierarchy rather than container-scoped paths (docker, kubepods, or containerd IDs), the process is running outside the container's cgroup context, a strong indicator that the container boundary has been breached.
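The heuristic can be automated with a pattern match on the cgroup file. A sketch over a fabricated sample line; in practice, run it against the collected container_cgroup.txt or pod_cgroup.txt:

```shell
# Heuristic check of /proc/1/cgroup content: container-scoped paths contain
# docker/kubepods/containerd identifiers; their absence suggests host context.
# The sample line is fabricated for illustration.
cat > /tmp/cgroup_sample.txt <<'EOF'
0::/system.slice/docker-3f2a1b9c.scope
EOF
if grep -qE 'docker|kubepods|containerd' /tmp/cgroup_sample.txt; then
  echo "cgroup paths look container-scoped"
else
  echo "WARNING: no container-scoped cgroup paths - possible host context"
fi
```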

Try it: Practice the complete Docker collection workflow. Run a test container: docker run -d --name forensic-test nginx:latest. Make a change inside it: docker exec forensic-test bash -c "echo 'test-evidence' > /tmp/attacker-file.txt". Now run the full collection sequence from Step 1–8 above. Examine the docker diff output — you should see /tmp/attacker-file.txt as an Added file. Export the filesystem and verify you can find the file in the tar archive: tar tf evidence/container_filesystem.tar | grep attacker. Clean up: docker rm -f forensic-test.

Beyond This Investigation

Container evidence collection is the foundation for LX9 (Container Compromise), which investigates a complete container breach scenario including escape to the host and lateral movement through the Kubernetes API. The collection techniques in this subsection capture the evidence that LX9’s analysis examines in depth.

Check your understanding:

  1. A compromised container was restarted by Kubernetes 5 minutes ago. What evidence from the previous container instance is still accessible?
  2. What does the output of docker diff represent, and why is it more useful for container forensics than a full filesystem listing?
  3. You find /var/run/docker.sock mounted inside a compromised container. What can the attacker do with this, and what evidence should you look for on the host?
  4. The Kubernetes audit log shows create pod and get secrets API calls using the compromised pod’s service account token. What does this indicate about the scope of the compromise?
