4.7 Defender for Containers and Kubernetes

14-18 hours · Module 4

SC-200 Exam Objective

Domain 2 — Configure Protections and Detections: "Configure cloud workload protections in Microsoft Defender for Cloud."

Introduction

Containers and Kubernetes represent a different compute model from virtual machines. Instead of long-running servers with persistent state, containers are ephemeral — they start, run a workload, and are replaced by new instances. This ephemeral nature creates unique security challenges: a container compromised by an attacker may be destroyed and replaced before you can investigate it. The container image it ran from may contain vulnerabilities that affect every instance deployed from it. The Kubernetes orchestration layer introduces its own attack surface through the API server, pod configurations, and cluster networking.

Defender for Containers provides three layers of protection for containerised workloads: image scanning (find vulnerabilities in container images before deployment), runtime protection (detect suspicious behaviour in running containers), and Kubernetes audit log analysis (detect suspicious API calls and configuration changes at the orchestration layer).


Container image scanning

Before a container runs, it exists as an image stored in a container registry (Azure Container Registry, or ACR). Defender for Containers scans images in ACR for known vulnerabilities in the base OS packages and application dependencies.

When scanning occurs: Images are scanned when pushed to the registry (new image upload), on a regular schedule (catching newly disclosed CVEs in previously scanned images), and when pulled for deployment (the scan results are available in the deployment pipeline). This provides three checkpoints: development time (push), ongoing (scheduled rescan), and deployment time (pull).

Scan results show the CVE identifier, the affected package, the installed version, the fixed version (if a patch exists), and the CVSS score. Results are integrated into the Defender for Cloud recommendations: “Container registry images should have vulnerability findings resolved.” Each finding links to the specific image, layer, and package.

// Container image vulnerabilities in Sentinel
SecurityRecommendation
| where TimeGenerated > ago(1d)
| where RecommendationDisplayName has "container" and RecommendationDisplayName has "vulnerability"
| extend ImageName = tostring(Properties.resourceDetails.ResourceName)
| extend Severity = tostring(Properties.severity)
// map severity to a sortable rank: alphabetical order would put Low before Medium
| extend SeverityRank = case(Severity == "High", 1, Severity == "Medium", 2, Severity == "Low", 3, 4)
| order by SeverityRank asc
| project TimeGenerated, Severity, ImageName, RecommendationDisplayName

Admission control extends image scanning to deployment prevention. Using Azure Policy for Kubernetes, you can create policies that prevent deployment of images with high-severity vulnerabilities. This shifts security left — vulnerable images are blocked at the orchestration layer before they run, not detected after deployment.


Runtime protection for containers

The Defender sensor (deployed as a DaemonSet on AKS nodes) monitors running containers for suspicious behaviour: process execution anomalies (a web server container executing /bin/bash is unusual), network anomalies (a container connecting to known malicious IPs), file system anomalies (writing to system directories in a read-only container), and privilege escalation (a container gaining host-level capabilities).

Container-specific alert types:

“Container with a sensitive volume mount” — a container mounted a host directory containing sensitive system files, potentially allowing container escape (breaking out of the container to access the host OS).

“Anomalous process execution in container” — a process that has never been seen in this container image is executing. This detects attackers who gain shell access in a container and run exploration commands (whoami, id, cat /etc/passwd).

“Docker socket mounted in container” — a container has access to the Docker socket (/var/run/docker.sock), which allows it to create, start, and manage other containers on the host. This is a critical container escape vector — an attacker with Docker socket access can deploy a privileged container that accesses the host OS.

“Communication with suspicious IP” — a container is communicating with a known C2 server or cryptocurrency mining pool.
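The sensitive-volume and Docker-socket alerts above fire on pod specifications like the following sketch. This is a hypothetical manifest for illustration only (names are invented); it combines two of the escape vectors in one pod and should never be deployed:

```yaml
# Hypothetical pod spec combining two escape vectors that
# Defender for Containers alerts on: a privileged container
# and a Docker-socket hostPath mount. Do NOT deploy this.
apiVersion: v1
kind: Pod
metadata:
  name: risky-pod            # illustrative name
spec:
  containers:
    - name: app
      image: nginx:latest
      securityContext:
        privileged: true     # full access to host kernel and devices
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock   # the host's Docker daemon socket
```

Either of the two highlighted settings on its own is enough to trigger an alert; together they give a compromised container a near-trivial path to the host.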

Investigation challenges with containers: Containers are ephemeral. By the time you investigate an alert, the compromised container may have been terminated and replaced by a new instance. The investigation focuses on the container image (is the vulnerability in the image itself?), the Kubernetes deployment (was the pod configuration insecure?), and the cluster audit log (did the attacker manipulate the Kubernetes API?). Runtime evidence from the container’s execution is captured by the Defender sensor and available in Sentinel, even if the container no longer exists.


Kubernetes audit log analysis

Defender for Containers analyses the Kubernetes API server audit log to detect suspicious cluster management operations: creating privileged pods, modifying RBAC bindings to escalate permissions, deploying pods with host network access, creating service accounts with elevated permissions, and accessing Kubernetes secrets.

The Kubernetes audit log is the equivalent of the Azure Activity Log for Kubernetes — it records every API call to the cluster. Defender for Containers applies ML-based anomaly detection and rule-based detection to identify API calls that indicate attacker activity.

Alert examples:

“New high privileges role detected” — a new ClusterRole or ClusterRoleBinding with administrative permissions was created. This detects attackers who compromise a service account and escalate to cluster-admin.

“Container running in privileged mode” — a pod was deployed with the privileged flag, which gives the container full access to the host OS. Legitimate workloads rarely need privileged mode — its presence often indicates a container escape attempt or a misconfigured deployment.

“Access to Kubernetes dashboard” — the Kubernetes dashboard was accessed from an unexpected IP or with unexpected credentials. The dashboard provides full cluster management — accessing it is equivalent to having admin access to the cluster.

// Kubernetes security alerts
SecurityAlert
| where TimeGenerated > ago(7d)
| where ProviderName == "Azure Security Center"
| where AlertType has "K8S" or AlertType has "Kubernetes"
// ExtendedProperties is stored as a string: parse it before indexing
| extend Props = todynamic(ExtendedProperties)
| extend ClusterName = tostring(Props["Cluster name"])
| extend PodName = tostring(Props["Pod name"])
| project TimeGenerated, AlertName, AlertSeverity, ClusterName, PodName
| order by TimeGenerated desc
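Beyond the curated alerts, the raw audit trail itself can be queried if the cluster's diagnostic settings route the kube-audit category to the workspace. A sketch, assuming the classic AzureDiagnostics routing (the table and column names depend on your diagnostic configuration):

```kql
// Raw AKS kube-audit events: assumes the "kube-audit" diagnostic
// category is exported to AzureDiagnostics in this workspace
AzureDiagnostics
| where TimeGenerated > ago(1d)
| where Category == "kube-audit"
| extend Audit = parse_json(log_s)        // audit record is JSON in log_s
| where tostring(Audit.verb) in ("create", "update", "delete")
| project TimeGenerated,
    Verb = tostring(Audit.verb),
    Resource = tostring(Audit.objectRef.resource),
    RequestUser = tostring(Audit.user.username)
| order by TimeGenerated desc
```

This is useful in investigations when you need the full API history, not only the events that matched a detection rule.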

Container escape: the critical threat model

Container escape — breaking out of a container to access the host operating system — is the most severe container security threat because it bypasses all container isolation. An attacker who escapes a container gains host-level access, which typically means access to all other containers on the same node, the Kubernetes kubelet, and potentially the cluster’s secrets and service account tokens.

Common escape vectors detected by Defender for Containers:

Privileged containers: running a container with the --privileged flag gives it full access to the host kernel, device files, and network stack. This is equivalent to running a process as root on the host. Defender for Containers alerts on privileged pod deployment because legitimate workloads almost never need privileged mode.

Docker socket mounting: mounting /var/run/docker.sock inside a container gives it the ability to control the Docker daemon on the host — creating, starting, and stopping containers. An attacker with Docker socket access can deploy a new privileged container that mounts the host filesystem, then read any file on the host (including Kubernetes secrets, SSH keys, and credentials).

HostPath volumes: mounting host filesystem paths (/, /etc, /var) inside a container exposes the host’s files to the container process. An attacker who gains code execution in a container with a / hostPath mount can read and write any file on the host.

Kernel exploitation: containers share the host’s kernel. A kernel vulnerability can be exploited from within a container to gain host-level access. Unlike VMs (which each have their own kernel), containers rely on the shared kernel’s security boundaries — a kernel bug breaks all isolation.

Container Escape Risk Indicators

| Indicator | Severity | Why it matters |
| --- | --- | --- |
| Privileged container deployed | High | Full host access — no isolation |
| Docker socket mounted | Critical | Can control all containers on host |
| HostPath volume (/) mounted | High | Full host filesystem access |
| Container running as root | Medium | Higher impact if escape occurs |
| Host network mode | Medium | Direct host network access |
| Host PID namespace | Medium | Can see and signal host processes |

Detection priority: Docker socket mount and privileged container are the two highest-risk indicators. Both provide near-trivial container escape paths. If Defender for Containers alerts on either of these in production, investigate immediately — they should not exist in production workloads unless explicitly approved with documented justification.

Pod security standards and admission control

Prevention is more effective than detection for container security. Pod security standards define which pod configurations are acceptable, and admission control enforces these standards at deployment time.

Kubernetes Pod Security Standards define three levels: Privileged (no restrictions — suitable only for system-level pods like the Defender sensor DaemonSet), Baseline (prevents the most dangerous configurations — blocks privileged pods, Docker socket mounts, host network/PID/IPC namespaces), and Restricted (most hardened — additionally requires non-root execution, read-only root filesystem, and explicit seccomp profiles).
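In upstream Kubernetes, these levels are applied per namespace through Pod Security Admission labels. A minimal sketch (the namespace name is illustrative):

```yaml
# Enforce the Baseline standard in this namespace and emit
# warnings for anything that would fail Restricted
apiVersion: v1
kind: Namespace
metadata:
  name: payments             # illustrative name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```

With these labels, a privileged pod submitted to the namespace is rejected by the API server at admission time, while pods that merely fall short of Restricted still deploy but generate a warning.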

Azure Policy for Kubernetes enforces these standards in AKS. When a pod deployment violates the policy (attempts to deploy a privileged pod in a namespace governed by the Baseline standard), the Kubernetes API server rejects the deployment. This prevents insecure configurations from ever running — a stronger security posture than detecting them after deployment.

Defender for Containers generates recommendations when pod security standards are not enforced: “Kubernetes cluster pods should only use approved host network and port range,” “Container should not share the host process ID namespace,” “Privileged containers should be avoided.” Implementing these recommendations through Azure Policy prevents the container escape vectors described above.


Supply chain security: from image registry to runtime

Container supply chain security covers the entire lifecycle from base image selection through build, storage, deployment, and runtime.

Base image selection: Use trusted base images from verified publishers (Microsoft Container Registry, Docker Official Images). Avoid using arbitrary images from public registries without vulnerability scanning.

Build pipeline scanning: Integrate image scanning into the CI/CD pipeline. Defender for Containers scans images when pushed to ACR, but catching vulnerabilities earlier (in the build pipeline, before the image is even pushed) reduces the time between vulnerability introduction and detection.
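One way to shift scanning into the build stage is to run an open-source scanner such as Trivy before the push. A hedged sketch in Azure DevOps pipeline syntax (the variable names and the choice of scanner are assumptions, not the only option):

```yaml
# Hypothetical Azure DevOps step: fail the build if the freshly
# built image has High/Critical CVEs, before it reaches ACR.
# Assumes the trivy CLI is installed on the build agent and
# $(imageName) is defined as a pipeline variable.
steps:
  - script: |
      docker build -t $(imageName):$(Build.BuildId) .
      trivy image --exit-code 1 --severity HIGH,CRITICAL \
        $(imageName):$(Build.BuildId)
    displayName: Build and scan image before push
```

The non-zero exit code stops the pipeline, so a vulnerable image never reaches the registry, which is an earlier checkpoint than the on-push scan.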

Registry security: Restrict who can push images to ACR using RBAC. Enable content trust (image signing) to ensure only verified images can be pulled for deployment. Configure ACR diagnostic logging to detect unauthorized push/pull operations.
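With ACR diagnostic logging enabled, push and pull operations can be reviewed in the workspace. A sketch, assuming the resource-specific ContainerRegistryRepositoryEvents table is populated (column availability depends on your diagnostic settings):

```kql
// Recent image pushes to ACR: assumes ACR resource logs are
// exported to this workspace in resource-specific mode
ContainerRegistryRepositoryEvents
| where TimeGenerated > ago(7d)
| where OperationName == "Push"
| project TimeGenerated, Repository, Identity, OperationName
| order by TimeGenerated desc
```

Reviewing pushes against the list of identities allowed to publish images is a quick way to spot an unauthorized push.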

Runtime monitoring: Even with a secure supply chain, runtime monitoring is necessary because: containers may be deployed from unscanned registries (bypassing the pipeline), base image vulnerabilities may be disclosed after deployment, and attackers may modify running containers through exploit chains.


Container investigation methodology

When investigating a container security alert, the methodology differs from VM investigation because containers are ephemeral.

Step 1: Preserve the evidence. The container that triggered the alert may already be terminated. Check whether it still exists (kubectl get pods). If it does, do not terminate it yet — capture the container’s logs, processes, and network connections first. If it is already gone, the Defender sensor data in Sentinel is your primary evidence source.

Step 2: Investigate the image. The container ran from a specific image tag. Check the image’s vulnerability scan results in ACR. If the image has critical CVEs, the vulnerability may have been the exploitation vector. Check the image’s build history: who built it, when, from which base image, and whether it was modified after the initial build.

Step 3: Investigate the pod configuration. Check the Kubernetes deployment manifest: was the pod running as privileged? Did it mount sensitive volumes? Was it using a service account with excessive permissions? The pod configuration reveals whether the deployment itself was insecure (configuration problem) or whether the attacker gained access through a runtime vulnerability (exploitation problem).

Step 4: Check lateral movement. If the attacker escaped the container, check for activity on the host node and other containers on the same node. Check the Kubernetes audit log for API calls made using the pod’s service account token — an attacker who compromises a pod may use its service account to query secrets, list pods, or create new deployments.

Container security on the SC-200 exam

The exam tests conceptual understanding of container security, not deep Kubernetes administration. Know: (1) image scanning finds vulnerabilities before deployment, (2) runtime protection detects suspicious container behaviour, (3) Kubernetes audit log analysis detects API-level attacks, (4) containers are ephemeral so investigation uses the sensor data in Sentinel rather than the container itself, (5) privileged pods and Docker socket mounts are high-severity findings because they enable container escape.


AKS security baseline and hardening

Microsoft publishes an AKS security baseline that maps to the CIS Kubernetes Benchmark. Defender for Containers assesses AKS clusters against this baseline and generates recommendations for non-compliant configurations.

Critical AKS hardening recommendations:

Enable Azure RBAC for Kubernetes authorization (replaces the legacy Kubernetes RBAC with Azure-native identity management).

Enable pod security admission (enforces pod security standards at the namespace level).

Disable the Kubernetes dashboard in production (it is a cluster management interface that attackers target).

Enable network policies (restricts pod-to-pod communication to only necessary paths, limiting lateral movement within the cluster).

Enable audit logging on the API server (provides the audit trail that Defender for Containers analyses for suspicious API calls).

Image pull policy: Configure pods to use imagePullPolicy: Always in production deployments. This ensures that the image is pulled from the registry (where it is scanned) rather than using a locally cached version that may not reflect the latest scan results. Combined with admission control that blocks vulnerable images, this ensures every running container has passed the vulnerability assessment.
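The pull-policy recommendation is a one-line setting in the pod template. A minimal sketch (deployment and image names are illustrative):

```yaml
# Fragment of a Deployment pod template: always pull from the
# registry so the scanned image, not a stale cached copy, runs
spec:
  containers:
    - name: web                                  # illustrative
      image: myregistry.azurecr.io/web:1.4.2     # illustrative
      imagePullPolicy: Always
```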


Container forensics: preserving evidence from ephemeral workloads

When a security alert fires for a container, you have a limited window to collect evidence before the container is replaced by the Kubernetes scheduler. If the container crashes or is terminated, a new pod is created from the same image — but the runtime state (processes, network connections, files created during execution) is lost.

Evidence collection priority for live containers:

First: capture container logs (kubectl logs <pod-name> --all-containers). Logs persist in Kubernetes for the log retention period even after the container terminates, but capturing them while the container is live ensures nothing is lost.

Second: capture the running process list (kubectl exec <pod-name> -- ps aux). This shows what is currently executing in the container — including any attacker processes.

Third: capture the network connections (kubectl exec <pod-name> -- netstat -tlnp or ss -tlnp). This shows active connections to C2 servers or other suspicious endpoints.

Fourth: capture the file system diff. If the container image is immutable (read-only root filesystem), any files the attacker created are in temporary mount points. List these to identify dropped tools, web shells, or exfiltrated data staging.

If the container is already terminated: rely entirely on the Defender sensor data in Sentinel (which captured runtime events before termination) and the Kubernetes audit log (which recorded all API interactions). The sensor data includes process execution events, network connections, and file operations — providing investigation evidence even after the container no longer exists.

Try it yourself

If your lab has an AKS cluster with Defender for Containers enabled, navigate to Defender for Cloud → Workload protections → Containers and review the dashboard: image vulnerability count, runtime alerts, and Kubernetes audit findings. If no AKS cluster exists (common in lab environments), review the container protection documentation — the exam tests concepts, not hands-on AKS management.

What you should observe

The container dashboard shows: container registries with image scan results, AKS clusters with their protection status, and any security alerts from runtime monitoring or Kubernetes audit analysis. In a lab without containers, the dashboard shows "no resources" — which is itself useful information (you know where to look when containers are deployed).


Knowledge check

Check your understanding

1. A container image in your ACR has a critical CVE in its base OS. The image is deployed to 15 pods in production. What Defender for Containers features address this?

Answer: Image scanning detected the CVE and created a security recommendation. To prevent future deployments of vulnerable images, configure an admission control policy (Azure Policy for Kubernetes) that blocks deployment of images with critical CVEs. For the 15 existing pods: update the image (patch the base OS, rebuild, push to ACR), then trigger a rolling deployment to replace the running pods with the patched image. Runtime protection monitors the existing vulnerable pods for exploitation attempts while you remediate.

Incorrect options: "Immediately terminate all 15 pods"; "Only runtime protection is relevant — image scanning is retrospective"; "The CVE is in the base OS — Defender for Containers only scans application code".