LX1.3 Remote and Cloud-Specific Collection

3-4 hours · Module 1 · Free

Remote Collection, Cloud Snapshots, and Container Evidence

Learning objective: Master the collection techniques specific to each deployment environment: SSH-based remote collection for bare-metal and VM servers, cloud API-based collection for AWS/Azure/GCP VMs, and container-specific collection for Docker and Kubernetes environments. Understand the forensic implications of each method — what evidence it preserves, what it modifies, and what it cannot reach.

SSH-Based Remote Collection

Most Linux investigations begin with remote access. The server is in a data centre, a cloud region, or a customer’s network. You are at your desk. The connection method is SSH — and SSH has forensic implications that the investigator must understand.

When you SSH into a compromised system, your connection creates evidence:

Your login appears in auth.log: Accepted publickey for investigator from 192.0.2.10 port 52341 ssh2. Your session appears in wtmp (visible via last). Your session appears in utmp (visible via who). If the system records SSH sessions in the journal, your connection appears there too. These entries are evidence of your collection activity, not the attacker’s activity. Document the time you connected so that your entries can be distinguished from the attacker’s entries during analysis.
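One lightweight way to make that documentation automatic is to timestamp every collection action as you perform it. A minimal sketch — the evidence/ directory and collection_log.txt filename are assumptions for illustration, not part of any standard tooling:

```shell
# Record collection activity with UTC timestamps so your own entries
# can later be separated from the attacker's activity during analysis.
# (evidence/collection_log.txt is a hypothetical path.)
mkdir -p evidence

log_action() {
    # ISO 8601 UTC timestamp followed by a free-text action description
    printf '%s  %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" >> evidence/collection_log.txt
}

log_action "SSH connection opened to target as investigator"
log_action "Collected process listing (ps auxf)"

cat evidence/collection_log.txt
```

Keeping this log on the forensic workstation (never on the target) gives you a contemporaneous record to cite when explaining your own entries in auth.log and wtmp.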

The collection approach for remote SSH access:

# From your forensic workstation — collect volatile data first
# Pipe output directly to your workstation to avoid writing to
# the compromised filesystem

# Collect running processes
ssh investigator@target "ps auxf" > evidence/processes.txt

# Collect network connections
ssh investigator@target "ss -tnp && echo '---' && cat /proc/net/tcp" > evidence/network.txt

# Collect /proc data for all processes
ssh investigator@target "for p in /proc/[0-9]*/; do echo PID=\$(basename \$p); cat \$p/cmdline 2>/dev/null | tr '\0' ' '; echo; readlink -f \$p/exe 2>/dev/null; echo ---; done" > evidence/proc_data.txt

# Copy critical log files to your workstation
scp investigator@target:/var/log/auth.log* evidence/logs/
scp investigator@target:/var/log/syslog* evidence/logs/
scp -r investigator@target:/var/log/journal/ evidence/logs/journal/

# Copy UAC output (if you ran UAC on the target)
scp -r investigator@target:/tmp/uac-output/ evidence/uac/

The key technique: pipe output directly to your forensic workstation. Each ssh command runs on the remote system and the output streams to a local file. This avoids writing collection output to the compromised disk — the output never touches the target filesystem. For file transfers, scp copies files from the remote system to your workstation.
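Once output lands on the workstation, it is worth hashing it immediately so later integrity checks have a baseline. A sketch, assuming the evidence/ layout used in the commands above (the demo file below stands in for real collected output):

```shell
# Build a SHA-256 manifest of everything collected so far.
# Re-running `sha256sum -c` later verifies nothing changed in storage.
mkdir -p evidence
printf 'demo\n' > evidence/processes.txt   # stand-in for real ssh output

# Hash every evidence file except the manifest itself
( cd evidence && find . -type f ! -name manifest.sha256 -exec sha256sum {} + > manifest.sha256 )

# Verify the manifest immediately to confirm it is usable
( cd evidence && sha256sum -c manifest.sha256 )
```

Hashing at collection time, rather than at report time, means any later discrepancy can be narrowed to the storage period rather than the transfer.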

Cloud VM Collection — The No-Login Approach

Cloud environments provide a collection capability that does not exist for bare-metal servers: you can snapshot the disk without logging into the VM. This is the forensically cleanest collection method because it captures the disk state without any modification to the running system.

AWS EC2 disk snapshot:

# Get the instance ID and volume ID
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=WEBSRV-NGE01" \
  --query "Reservations[].Instances[].{ID:InstanceId,Vols:BlockDeviceMappings[].Ebs.VolumeId}" \
  --output table

# Create a snapshot of the root volume
aws ec2 create-snapshot \
  --volume-id vol-0abc123def456789 \
  --description "IR-2026-0402 forensic snapshot WEBSRV-NGE01" \
  --tag-specifications "ResourceType=snapshot,Tags=[{Key=Case,Value=IR-2026-0402},{Key=Investigator,Value=j.morrison}]"

# Monitor snapshot progress
aws ec2 describe-snapshots --snapshot-ids snap-0abc123 \
  --query "Snapshots[].{State:State,Progress:Progress}"

# Collect CloudTrail events for the instance (30 days)
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=ResourceName,AttributeValue=i-0abc123 \
  --start-time "2026-03-03T00:00:00Z" \
  --end-time "2026-04-02T23:59:59Z" \
  --output json > evidence/cloudtrail_events.json

# Collect VPC Flow Logs (if enabled)
aws ec2 describe-flow-logs \
  --filter "Name=resource-id,Values=eni-0abc123" \
  --output json > evidence/flow_log_config.json

Azure VM disk snapshot:

# Create a snapshot of the OS disk
az snapshot create \
  --name "IR-2026-0402-WEBSRV-NGE01-osdisk" \
  --resource-group NorthgateEng-RG \
  --source "/subscriptions/SUB_ID/resourceGroups/NorthgateEng-RG/providers/Microsoft.Compute/disks/WEBSRV-NGE01-osdisk" \
  --tags Case=IR-2026-0402 Investigator=j.morrison

# Collect Azure Activity Log (90 days retention)
az monitor activity-log list \
  --resource-id "/subscriptions/SUB_ID/resourceGroups/NorthgateEng-RG/providers/Microsoft.Compute/virtualMachines/WEBSRV-NGE01" \
  --start-time "2026-03-03T00:00:00Z" \
  --output json > evidence/azure_activity_log.json

GCP Compute Engine disk snapshot:

# Create a snapshot
gcloud compute disks snapshot WEBSRV-NGE01-disk \
  --zone=europe-west2-a \
  --snapshot-names=ir-2026-0402-websrv-nge01 \
  --description="IR-2026-0402 forensic snapshot"

# Collect Cloud Audit Logs
gcloud logging read \
  'resource.type="gce_instance" AND resource.labels.instance_id="123456789"' \
  --limit=1000 --format=json > evidence/gcp_audit_log.json

After the snapshot is created, attach it to a forensic analysis VM as a secondary (read-only) disk. This gives you full access to the filesystem without modifying the original evidence. The snapshot is immutable — it represents the disk state at the moment of creation.
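The attach step itself is also API-driven. A sketch of the AWS version — the snapshot ID, availability zone, volume ID, and instance ID below are placeholders that must be replaced with real values from your environment:

```shell
# Create a new EBS volume from the forensic snapshot
# (snap-0abc123 and eu-west-2a are placeholders)
aws ec2 create-volume \
  --snapshot-id snap-0abc123 \
  --availability-zone eu-west-2a \
  --tag-specifications "ResourceType=volume,Tags=[{Key=Case,Value=IR-2026-0402}]"

# Attach the new volume to the forensic analysis VM as a secondary device
# (vol-0new123 is the VolumeId returned by create-volume; i-0forensicvm
# is the forensic VM's instance ID)
aws ec2 attach-volume \
  --volume-id vol-0new123 \
  --instance-id i-0forensicvm \
  --device /dev/sdf

# On the forensic VM, mount read-only. The device may appear as /dev/xvdf1.
# For ext4, noload additionally skips journal replay, avoiding any write.
sudo mount -o ro,noexec,noload /dev/xvdf1 /mnt/evidence
```

The availability zone must match the forensic VM's zone, or the attach call will fail — EBS volumes cannot cross zones.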

Alongside the disk snapshot, the cloud audit trail is a second evidence plane that does not exist for bare-metal servers. CloudTrail, Azure Activity Log, and GCP Cloud Audit Logs record every API call made against the VM: who started it, who stopped it, who changed the security group, who attached a new disk, who modified the IAM role. If the attacker used stolen cloud credentials to modify the VM’s configuration, the evidence is in the cloud audit trail — not on the VM itself.
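Once the CloudTrail export is on disk, a first triage pass is simply extracting who did what, when. A sketch using only standard tools, run here against a tiny synthetic sample — real lookup-events output carries the same EventName/Username/EventTime fields inside an Events array:

```shell
# Synthetic stand-in for the real evidence/cloudtrail_events.json export
mkdir -p evidence
cat > evidence/cloudtrail_events.json <<'EOF'
{"Events":[
 {"EventName":"StopInstances","Username":"attacker-role","EventTime":"2026-03-28T02:11:09Z"},
 {"EventName":"AuthorizeSecurityGroupIngress","Username":"attacker-role","EventTime":"2026-03-28T02:12:40Z"}
]}
EOF

# Pull out event names as a quick frequency-sorted triage list
grep -o '"EventName":"[^"]*"' evidence/cloudtrail_events.json | sort | uniq -c
```

A proper JSON tool is preferable for deep analysis, but a grep pass like this surfaces anomalous API calls (security-group changes, IAM modifications) in seconds.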

Container Evidence Collection

Container evidence collection is a race against time. The container may be restarted, rescheduled, or terminated at any moment — by the orchestrator’s health checks, by autoscaling, or by the attacker.

Docker container collection:

# Is the container still running?
docker ps | grep suspicious-container

# Collect container metadata
docker inspect suspicious-container > evidence/container_inspect.json

# Collect container logs (stdout/stderr)
docker logs suspicious-container > evidence/container_logs.txt 2>&1

# See what files changed from the base image
docker diff suspicious-container > evidence/container_diff.txt

# Export the full container filesystem
docker export suspicious-container > evidence/container_filesystem.tar

# Copy specific files out of the container
docker cp suspicious-container:/var/log/ evidence/container_logs/
docker cp suspicious-container:/tmp/ evidence/container_tmp/
docker cp suspicious-container:/etc/crontab evidence/container_crontab

The docker diff command is uniquely valuable for container forensics. It shows every file that was added (A), changed (C), or deleted (D) from the base image. Since the base image is known-good (it is the image you deployed), every change in the diff is either legitimate application activity or attacker activity. This is a far cleaner evidence set than bare-metal forensics, where attacker files are mixed with hundreds of thousands of system files.
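In practice you rarely read the raw diff top to bottom; you separate additions and deletions first, since added files are the highest-value triage targets. A sketch against a synthetic diff file — real docker diff output uses the same one-letter prefix format:

```shell
# Synthetic stand-in for the saved evidence/container_diff.txt
mkdir -p evidence
cat > evidence/container_diff.txt <<'EOF'
C /etc
C /etc/crontab
A /tmp/.hidden
A /tmp/.hidden/kworker
D /var/log/nginx/access.log
EOF

# Split the diff by change type: A = added, D = deleted
grep '^A ' evidence/container_diff.txt > evidence/diff_added.txt
grep '^D ' evidence/container_diff.txt > evidence/diff_deleted.txt

cat evidence/diff_added.txt
```

Added files under /tmp or /dev/shm and deleted log files, as in this sample, are classic attacker patterns worth reviewing first.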

Kubernetes pod collection:

# Is the pod still running?
kubectl get pods -n production | grep suspicious-pod

# Collect pod metadata
kubectl describe pod suspicious-pod -n production > evidence/pod_describe.txt

# Collect pod logs (all containers in the pod)
kubectl logs suspicious-pod -n production --all-containers > evidence/pod_logs.txt

# Copy files out of the running pod
kubectl cp production/suspicious-pod:/var/log/ evidence/pod_logs/
kubectl cp production/suspicious-pod:/tmp/ evidence/pod_tmp/

# Collect Kubernetes events for the pod
kubectl get events -n production --field-selector involvedObject.name=suspicious-pod > evidence/k8s_events.txt

# Collect Kubernetes audit log (if enabled — check with cluster admin)
# Location varies by cluster configuration

Try it: If you have AWS CLI configured, practice the snapshot workflow on a test instance: create a snapshot, wait for it to complete, then create a volume from the snapshot and attach it to a forensic VM. Mount it read-only: mount -o ro,noexec /dev/xvdf1 /mnt/evidence/. You have just performed the no-login disk collection workflow for a cloud VM. If you use Docker, practice the container collection workflow: docker run -d --name test-container nginx, then run the docker inspect, docker diff, docker logs, and docker export commands above.

Beyond This Investigation

Remote SSH collection is the default for LX4–LX8 scenarios. Cloud API collection is the default for LX10 (Cloud VM Compromise). Container collection is the default for LX9 (Container Compromise). LX11 (Lateral Movement) combines all three — the attacker moves from a container to the host to the cloud API, and the investigator collects evidence from all three environments.

Check your understanding:

  1. When you SSH into a compromised system to collect evidence, what entries does your connection create in the system’s logs?
  2. Why is a cloud disk snapshot forensically cleaner than running dd on the live system?
  3. What does docker diff show, and why is it more useful for investigation than listing all files in the container?
  4. A Kubernetes pod was terminated 5 minutes ago and a new pod has been scheduled. What evidence from the old pod is still available?
