LX1.8 Container and Kubernetes Evidence Collection
Container Evidence Collection: Racing the Restart Clock
Learning objective: Master the complete evidence collection workflow for Docker containers and Kubernetes pods — from detecting that a container is compromised through capturing the ephemeral filesystem, collecting runtime and orchestrator logs, and preserving evidence from persistent volumes. Understand the unique forensic challenges of container evidence: filesystem ephemerality, namespace isolation, and the separation between container-layer and host-layer evidence.
Why Container Collection Is Different
Container evidence collection violates the fundamental assumption of forensics: that evidence persists until collected. On a bare-metal server, evidence exists on a physical disk that survives reboots, power cycles, and — if you do nothing — indefinitely. On a container, the writable filesystem layer exists only while the container is running. When a Kubernetes pod is terminated and rescheduled, the new pod starts from the pristine base image with no trace of what happened in the previous instance. The attacker’s web shell, the modified configuration files, the bash history, the downloaded tools — all erased by a container restart.
This creates an urgency that does not exist in other investigation environments. On a bare-metal server, you can wait hours before beginning collection — the evidence is stable. On a container, you may have minutes before a liveness probe failure, an autoscaler decision, or a manual restart destroys the evidence.
Docker Container Collection — Complete Workflow
The Docker evidence collection workflow captures three categories of evidence: the container filesystem (what the attacker modified), the container logs (what the application recorded), and the container metadata (how the container was configured).
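The collection sequence can be scripted end to end. The following is a hedged sketch rather than a canonical tool: the container name, the evidence directory, the exact step ordering, and the DRYRUN guard (which prints each command instead of executing it) are all illustrative assumptions.

```shell
#!/usr/bin/env bash
# Sketch of an 8-step Docker evidence collection script.
# Assumptions: container name, evidence dir, and DRYRUN guard are illustrative.
set -u
CID="${CID:-forensic-test}"     # target container (assumption)
EVID="${EVID:-evidence}"        # evidence output directory (assumption)
DRYRUN="${DRYRUN:-1}"           # 1 = print commands only; 0 = execute for real

run() {
  # Print each collection step (dry run) or execute it.
  if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else eval "$@"; fi
}

mkdir -p "$EVID"
run "docker pause $CID"                                       # 1. freeze process state
run "docker inspect $CID > $EVID/inspect.json"                # 2. configuration metadata
run "docker top $CID > $EVID/processes.txt"                   # 3. process listing
run "docker diff $CID > $EVID/filesystem_diff.txt"            # 4. added/changed/deleted files
run "docker logs $CID > $EVID/container_logs.txt 2>&1"        # 5. stdout/stderr logs
run "docker export $CID -o $EVID/container_filesystem.tar"    # 6. full filesystem export
run "docker unpause $CID"                                     # 7. resume (or stop, per policy)
run "sha256sum $EVID/* > $EVID/hashes.sha256"                 # 8. hash for chain of custody
```

Run with DRYRUN=0 against a real container to execute; the default dry run lets you review the command sequence first.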
The docker diff output is uniquely valuable for container forensics. Because the base image is known and immutable (it is the image you deployed), every file that appears as Added or Changed in the diff is either legitimate application activity (log files, cache files, session data) or attacker activity. Separate the two by comparing against expected application behavior. A PHP web shell appearing as an Added file in the web root is clearly attacker activity. A modified /etc/passwd appearing as a Changed file indicates the attacker created a new user account.
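This triage can be automated with simple pattern matches over the saved diff output. A minimal sketch — the sample diff lines and the "suspicious path" patterns are illustrative assumptions, not a complete detection rule set:

```shell
# Sample `docker diff` output (A=added, C=changed, D=deleted) -- illustrative.
cat > diff_sample.txt <<'EOF'
C /etc/passwd
A /var/www/html/shell.php
A /var/log/nginx/access.log
D /tmp/build.lock
EOF

# Flag additions under the web root (possible web shells)...
WEBROOT_ADDS=$(grep -E '^A /var/www' diff_sample.txt)
# ...and changes to account files (possible new user accounts).
ACCOUNT_CHANGES=$(grep -E '^C /etc/(passwd|shadow|group)' diff_sample.txt)

echo "$WEBROOT_ADDS"       # A /var/www/html/shell.php
echo "$ACCOUNT_CHANGES"    # C /etc/passwd
```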
Dead Container Recovery
If the container has already been terminated, your evidence options narrow significantly but do not disappear entirely.
Container runtime logs: Docker stores container logs even after termination (until the container is removed with docker rm). If the container was terminated but not removed: docker logs <container-id> > evidence/dead_container_logs.txt.
Docker data directory: The container’s writable layer persists on the host filesystem in /var/lib/docker/ until the container is removed. The location depends on the storage driver: /var/lib/docker/overlay2/<layer-id>/diff/ for overlay2 (the default on modern Docker installations). The layer ID is in the container’s inspect output. If the container was terminated but not removed, you can browse the writable layer directly on the host.
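On a live host the writable-layer path can be read with docker inspect --format '{{.GraphDriver.Data.UpperDir}}' &lt;container-id&gt;. The snippet below pulls the same field out of a saved inspect JSON with sed, which also works when you only have dead-container evidence files; the sample JSON fragment and layer ID are illustrative assumptions:

```shell
# Sample fragment of a saved `docker inspect` output (layer ID is illustrative).
cat > inspect_sample.json <<'EOF'
{"GraphDriver":{"Name":"overlay2","Data":{"UpperDir":"/var/lib/docker/overlay2/abc123/diff"}}}
EOF

# Extract the writable-layer path offline, without a running daemon.
UPPERDIR=$(sed -n 's/.*"UpperDir":"\([^"]*\)".*/\1/p' inspect_sample.json)
echo "$UPPERDIR"   # /var/lib/docker/overlay2/abc123/diff
```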
Persistent volumes: Any data stored on Docker volumes (docker volume ls) or bind mounts survives container termination. Check the container’s inspect output for Mounts — any volume or bind mount paths still contain their data on the host filesystem.
Container image: Even if the container is gone, the image it was running from is still on the host (unless explicitly removed). docker images lists available images. docker history <image> shows the image build layers. Compare the image against the known-good image from your container registry — if the image was tampered with (the attacker modified the image and redeployed), the layer digests will differ.
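The digest comparison itself is a simple set difference over the two layer lists. A sketch — the digest values below are fabricated placeholders; on a live host the local list would come from docker inspect --format '{{json .RootFS.Layers}}' on the image:

```shell
# Layer digests for the local image vs. the registry's known-good copy
# (one digest per line, sorted; values are illustrative placeholders).
printf 'sha256:aaa\nsha256:bbb\n' > local_layers.txt
printf 'sha256:aaa\nsha256:ccc\n' > registry_layers.txt

# Any digest unique to either side indicates tampering or a stale image.
TAMPERED=$(comm -3 local_layers.txt registry_layers.txt | tr -d '\t' | sort | xargs)
echo "$TAMPERED"   # sha256:bbb sha256:ccc
```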
Kubernetes Pod Collection
Kubernetes adds an orchestration layer above Docker/containerd. The collection workflow must capture evidence from both the pod (container-level) and the cluster (orchestrator-level).
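As with the Docker workflow, the pod-level and orchestrator-level steps can be scripted. This is a hedged sketch: the pod name, namespace, output paths, step selection, and the DRYRUN guard are illustrative assumptions, not the course's canonical script.

```shell
#!/usr/bin/env bash
# Sketch of a Kubernetes pod evidence collection script.
# Assumptions: pod name, namespace, output dir, and DRYRUN guard are illustrative.
set -u
POD="${POD:-compromised-pod}"   # target pod (assumption)
NS="${NS:-default}"             # namespace (assumption)
EVID="${EVID:-evidence/k8s}"    # evidence output directory (assumption)
DRYRUN="${DRYRUN:-1}"           # 1 = print commands only; 0 = execute for real

run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else eval "$@"; fi; }

mkdir -p "$EVID"
# Pod level: spec, current and previous-instance logs, service account token.
run "kubectl get pod $POD -n $NS -o yaml > $EVID/pod_spec.yaml"
run "kubectl logs $POD -n $NS > $EVID/pod_logs.txt"
run "kubectl logs $POD -n $NS --previous > $EVID/pod_logs_previous.txt"
run "kubectl exec $POD -n $NS -- cat /var/run/secrets/kubernetes.io/serviceaccount/token > $EVID/sa_token"
# Orchestrator level: events, pod placement, and RBAC bindings for correlation.
run "kubectl get events -n $NS --sort-by=.lastTimestamp > $EVID/events.txt"
run "kubectl describe pod $POD -n $NS > $EVID/pod_describe.txt"
run "kubectl get rolebindings,clusterrolebindings -A -o yaml > $EVID/rbac.yaml"
```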
The kubectl logs --previous flag retrieves logs from the previous container instance in the same pod — if the container crashed and restarted, the previous instance’s logs may contain the evidence of the attacker’s activity that triggered the crash.
The service account token collection (/var/run/secrets/kubernetes.io/serviceaccount/token) is critical for detecting RBAC abuse. If the attacker stole the pod’s service account token, they can make API calls to the Kubernetes control plane — creating pods, reading secrets, modifying deployments. The audit log records these API calls, but you need the token to correlate them with the compromised pod.
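The correlation itself amounts to filtering audit events by the service account user name that the stolen token authenticates as. A sketch — the sample audit records and the app-sa account name are illustrative assumptions:

```shell
# Sample Kubernetes audit log (JSON, one event per line; illustrative records).
cat > audit_sample.jsonl <<'EOF'
{"verb":"create","objectRef":{"resource":"pods"},"user":{"username":"system:serviceaccount:default:app-sa"}}
{"verb":"get","objectRef":{"resource":"secrets"},"user":{"username":"system:serviceaccount:default:app-sa"}}
{"verb":"list","objectRef":{"resource":"pods"},"user":{"username":"system:serviceaccount:kube-system:deployment-controller"}}
EOF

# The compromised pod's token authenticates as this service account user.
SA="system:serviceaccount:default:app-sa"
SA_CALLS=$(grep -c "\"username\":\"$SA\"" audit_sample.jsonl)
echo "$SA_CALLS"   # 2
```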
Container Escape Detection
Container escape — where the attacker breaks out of the container to the host — fundamentally changes the investigation scope. What was a container-level incident becomes a host-level incident, potentially affecting every container on the node.
Evidence of container escape:
Docker socket mount: ls -la /var/run/docker.sock inside the container. If the Docker socket is mounted into the container, the attacker can issue Docker commands against the host’s Docker daemon — creating new privileged containers, accessing other containers’ filesystems, and executing commands on the host. This is the most common container escape vector.
Privileged container: check docker inspect output for "Privileged": true. A privileged container has nearly unrestricted access to the host kernel — the attacker can load kernel modules, access host devices, and modify host filesystems.
Excessive capabilities: check for CAP_SYS_ADMIN, CAP_SYS_PTRACE, CAP_NET_ADMIN in the container’s capability set. These capabilities enable escape paths.
Host process visibility: from inside the container, cat /proc/1/cgroup — if the output shows the host’s cgroup hierarchy rather than a container-scoped one, the container is not properly isolated from the host, which indicates either an escape or a dangerously weak isolation configuration.
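These indicators can be triaged from inside a suspect container with a short script. A sketch under stated assumptions: the CAP_SYS_ADMIN test uses capability bit 21 from the Linux capability numbering (capabilities(7)), the checks degrade gracefully when /proc is unavailable, and the output wording is illustrative.

```shell
# Quick escape-indicator triage, intended to run inside the suspect container.
SOCK=no; [ -S /var/run/docker.sock ] && SOCK=yes       # Docker socket mounted?
echo "docker.sock mounted: $SOCK"

# Effective capability mask (hex) from the kernel, if /proc is readable.
CAPS=$(awk '/^CapEff/ {print $2}' /proc/self/status 2>/dev/null)
if [ -n "$CAPS" ]; then
  echo "CapEff: $CAPS"
  # CAP_SYS_ADMIN is capability bit 21 in the Linux numbering.
  if [ $(( (0x$CAPS >> 21) & 1 )) -eq 1 ]; then echo "CAP_SYS_ADMIN present"; fi
fi

# PID 1's cgroup: host-style paths here suggest weak or broken isolation.
head -1 /proc/1/cgroup 2>/dev/null || echo "no /proc/1/cgroup available"
```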
Try it: Practice the complete Docker collection workflow. Run a test container: docker run -d --name forensic-test nginx:latest. Make a change inside it: docker exec forensic-test bash -c "echo 'test-evidence' > /tmp/attacker-file.txt". Now run the full collection sequence from Step 1–8 above. Examine the docker diff output — you should see /tmp/attacker-file.txt as an Added file. Export the filesystem and verify you can find the file in the tar archive: tar tf evidence/container_filesystem.tar | grep attacker. Clean up: docker rm -f forensic-test.
Beyond This Investigation
Container evidence collection is the foundation for LX9 (Container Compromise), which investigates a complete container breach scenario including escape to the host and lateral movement through the Kubernetes API. The collection techniques in this subsection capture the evidence that LX9’s analysis examines in depth.
Check your understanding:
- A compromised container was restarted by Kubernetes 5 minutes ago. What evidence from the previous container instance is still accessible?
- What does the output of docker diff represent, and why is it more useful for container forensics than a full filesystem listing?
- You find /var/run/docker.sock mounted inside a compromised container. What can the attacker do with this, and what evidence should you look for on the host?
- The Kubernetes audit log shows create pod and get secrets API calls using the compromised pod’s service account token. What does this indicate about the scope of the compromise?