4.8 Security Alerts: Investigation and Remediation

14-18 hours · Module 4

SC-200 Exam Objective

Domain 3 — Manage Incident Response: "Investigate and remediate alerts and incidents identified by Microsoft Defender for Cloud workload protections."

Introduction

Subsections 4.5 through 4.7 covered the detection capabilities of each CWP plan. This subsection teaches you what to do when those detections fire. Security alerts from Defender for Cloud follow the same investigation methodology you learned in Modules 1-3 — triage, investigate, contain, remediate, document — but cloud-specific alerts require cloud-specific context and response actions.

A cryptocurrency mining alert on a VM requires different containment than a phishing alert in email. A SQL injection alert against a database requires different investigation than a compromised identity in Entra ID. This subsection bridges the gap between your existing investigation skills and the cloud-specific scenarios that Defender for Cloud surfaces.


Cloud security alert anatomy

Every Defender for Cloud security alert includes a consistent set of fields that drive your investigation.

Defender for Cloud Alert Structure

Field | What it tells you | Investigation use
Alert name | The threat detected | Identifies the alert type and expected investigation path
Severity | High / Medium / Low / Informational | Determines triage priority
Affected resource | Which Azure resource is involved | Identifies the investigation target (VM, storage, SQL, etc.)
Resource group / subscription | Organisational context | Production vs dev, business unit ownership
MITRE ATT&CK tactics | Where in the kill chain | Maps the alert to the attack phase — early or late stage
Kill chain intent | Pre-attack / Initial access / Execution / etc. | Cloud-specific kill chain mapping
Description | What was detected and why it is suspicious | The narrative explanation of the finding
Remediation steps | How to respond | Defender for Cloud's recommended response actions
Related entities | IPs, accounts, processes, files involved | The entities to investigate — pivot points for deeper analysis

The cloud kill chain: mapping attacks to alert types

Cloud attacks follow a kill chain that differs from traditional endpoint attacks. Understanding this chain helps you interpret what each alert means in the context of a broader attack.

CLOUD KILL CHAIN — ATTACK PHASES AND ALERT MAPPING
① Recon: port scanning, service enumeration, exposed storage/APIs
② Initial Access: brute force SSH/RDP, stolen credentials, exploited vulnerability
③ Execution: crypto mining, web shell deployment, malicious script
④ Persistence: new user account, OAuth app registration, scheduled task/cron
⑤ Privilege Escalation: role assignment, subscription admin, managed identity abuse
⑥ Data Access: storage download, SQL data export, key vault access
Figure 4.6: The cloud kill chain. Each phase maps to specific Defender for Cloud alert types. Alerts in the early phases (recon, initial access) indicate an attack in progress but may not have caused damage yet. Alerts in later phases (execution, data access) indicate the attacker has already gained a foothold — containment is urgent.

Early-stage alerts (phases 1-2) indicate an attack in progress that may not have succeeded yet. A brute-force SSH alert means someone is trying to guess credentials — they may not have succeeded. A port scan detection means someone is mapping your infrastructure — they have not yet exploited anything. Response priority: block the source IP, check whether the attack succeeded (did any brute-force credential guess work? Check authentication logs), and harden the target (enable JIT, restrict NSG rules).
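For a Linux VM, checking whether any brute-force attempt succeeded can be sketched with a query like the following. This assumes the VM forwards its auth facility to the Syslog table in Sentinel via the Azure Monitor agent; the sshd message strings and 24-hour window are illustrative:

```kusto
// Did any SSH brute-force attempt succeed? Compare failed vs accepted logins
Syslog
| where TimeGenerated > ago(24h)
| where ProcessName == "sshd"
| extend Outcome = case(
    SyslogMessage has "Failed password", "Failure",
    SyslogMessage has "Accepted", "Success",
    "Other")
| where Outcome != "Other"
| summarize Attempts = count() by Outcome, HostName
```

A large failure count with zero successes suggests the brute force did not land; any "Success" row from the attacker's IP changes the response from hardening to full compromise investigation.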

Mid-stage alerts (phases 3-4) indicate the attacker has gained access and is operating in your environment. A cryptocurrency mining alert means a VM is already compromised and running attacker software. A web shell detection means the attacker has a persistent backdoor in your web application. Response priority: isolate the resource (stop lateral movement), investigate the entry point (how did they get in?), and eradicate the compromise (remove the malware, close the vulnerability).

Late-stage alerts (phases 5-6) indicate the attacker is accessing or exfiltrating data. A suspicious role assignment means the attacker is escalating privileges. A storage account access from a suspicious IP means data may be leaving your environment. Response priority: contain immediately (revoke access, rotate keys), assess data exposure (what was accessed?), and begin incident reporting.
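For the key vault access case in phase 6, a sketch of the exposure assessment might look like this, assuming Key Vault diagnostic logs are forwarded to the Log Analytics AzureDiagnostics table (operation names shown are the standard Key Vault audit values):

```kusto
// Key Vault reads during the incident window — who read secrets, from where?
AzureDiagnostics
| where TimeGenerated > ago(24h)
| where ResourceProvider == "MICROSOFT.KEYVAULT"
| where OperationName in ("SecretGet", "KeyGet", "CertificateGet")
| summarize Reads = count() by CallerIPAddress, OperationName, Resource
| order by Reads desc
```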


Cloud alert investigation workflow

CLOUD ALERT INVESTIGATION — FOUR-STEP WORKFLOW
① Cloud context: resource, subscription, network
② Threat analysis: kill chain stage, MITRE mapping
③ Contain + remediate: isolate, rotate, block, patch
④ Posture improvement: fix root cause, update CSPM
Figure 4.7: Cloud alert investigation adds step 4 (posture improvement) that does not exist in standard endpoint IR. After containing the immediate threat, fix the underlying misconfiguration that enabled it — so the same attack vector cannot be exploited again.

Step 1: Cloud context. Before investigating the alert technically, understand the resource’s context. Is this a production or development resource? Is it internet-facing? What subscription and resource group does it belong to (who owns it)? What other resources can it access (network topology, managed identity permissions)? This context determines the investigation’s urgency and scope.

In Sentinel, enrich the alert with resource metadata:

// Enrich a Defender for Cloud alert with resource context
SecurityAlert
| where TimeGenerated > ago(24h)
| where ProviderName == "Azure Security Center"
| where AlertSeverity == "High"
| extend Props = todynamic(ExtendedProperties)   // ExtendedProperties is stored as a JSON string
| extend ResourceId = tostring(Props["Compromised entity"])
| extend SubscriptionId = tostring(Props["Subscription id"])
| extend ResourceGroup = tostring(Props["Resource group"])
| extend AttackTactic = tostring(Props["Kill chain intent"])
| project TimeGenerated, AlertName, AlertSeverity,
    ResourceId, SubscriptionId, ResourceGroup, AttackTactic
| order by TimeGenerated desc

Step 2: Threat analysis. Examine the alert details: what exactly was detected? Check the kill chain intent (which attack phase), the MITRE ATT&CK tactic mapping, and the alert description. For a cryptocurrency mining alert, the execution phase means the VM is already compromised. For a brute-force alert, the initial access phase means the attack may be in progress but may not have succeeded.

Check the related entities: IPs (where is the attack coming from?), processes (what is running on the VM?), accounts (which identity was used?), and files (were any files dropped?). These entities are your pivot points for deeper investigation in MDE or Sentinel.
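In Sentinel, the related entities live in the alert's Entities column as a JSON array string. A sketch for expanding them into one row per entity (the alert ID is a placeholder you would take from the alert you are investigating):

```kusto
// Expand the related entities of a single alert into one row per entity
SecurityAlert
| where SystemAlertId == "<alert-id>"       // placeholder — use the ID of the alert under investigation
| mv-expand Entity = todynamic(Entities)    // Entities is stored as a JSON array string
| project EntityType = tostring(Entity.Type), EntityDetails = Entity
```

Each resulting row (IP, account, process, file) is a pivot point for a follow-up query in MDE advanced hunting or Sentinel.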

Step 3: Contain and remediate. Cloud response actions differ by resource type.

For compromised VMs: isolate the VM (network isolation through MDE, or NSG lockdown — block all inbound and outbound traffic except your investigation IP), investigate using the MDE device timeline (same techniques as Module 2.5), collect the investigation package if needed, and decide whether to clean or reimage. If the VM was compromised through a vulnerability, patch that vulnerability on all similar VMs before bringing the compromised one back online.
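To check for lateral movement or C2 traffic before and after isolation, a sketch against the MDE connector tables in Sentinel might look like this (the device name is hypothetical):

```kusto
// Outbound connections from the compromised VM — look for C2 or mining pools
DeviceNetworkEvents
| where TimeGenerated > ago(7d)
| where DeviceName startswith "vm-web-01"   // hypothetical device name
| where RemoteIPType == "Public"
| summarize Connections = count() by RemoteIP, RemotePort, InitiatingProcessFileName
| order by Connections desc
```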

For compromised storage accounts: rotate all access keys immediately (this invalidates any stolen keys), revoke all SAS tokens, review the storage account access log to determine what data was accessed, and assess the data exposure scope.
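The access review can be sketched against the StorageBlobLogs table, assuming diagnostic logging is enabled on the storage account (the account name is hypothetical):

```kusto
// What was read from the storage account, by whom, and how much?
StorageBlobLogs
| where TimeGenerated > ago(24h)
| where AccountName == "stprodfiles01"      // hypothetical account name
| where OperationName == "GetBlob"
| summarize Reads = count(), BytesRead = sum(ResponseBodySize)
    by CallerIpAddress, AuthenticationType
| order by BytesRead desc
```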

For compromised SQL databases: reset the compromised database credentials, review the database audit log for data access, check for SQL injection indicators in application logs, and patch the application vulnerability that enabled the injection.
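The database audit review might be sketched as follows, assuming SQL auditing is configured to send the SQLSecurityAuditEvents category to Log Analytics (the dynamic column names follow the AzureDiagnostics flattening convention):

```kusto
// SQL statements executed during the incident window, by principal and client IP
AzureDiagnostics
| where TimeGenerated > ago(24h)
| where Category == "SQLSecurityAuditEvents"
| where action_name_s == "BATCH COMPLETED"
| project TimeGenerated, server_principal_name_s, client_ip_s, statement_s
| order by TimeGenerated desc
```

Look for statements containing injection artefacts (stacked queries, UNION SELECT, comment sequences) from the client IP in the alert.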

For compromised App Service: check for web shells in the application’s file system, review the application deployment history for unauthorized deployments, and check the application’s outbound network connections for C2 communication.
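A quick web shell triage can be sketched against AppServiceHTTPLogs, assuming HTTP logging is enabled for the app; the script extensions are illustrative of common web shell targets:

```kusto
// Suspicious POSTs to script files — a common web shell access pattern
AppServiceHTTPLogs
| where TimeGenerated > ago(7d)
| where CsMethod == "POST"
| where CsUriStem endswith ".php" or CsUriStem endswith ".aspx"
    or CsUriStem endswith ".jsp"
| summarize Hits = count() by CsUriStem, CIp, ScStatus
| order by Hits desc
```

Repeated POSTs to a single unfamiliar script path from one IP are a strong web shell indicator; compare the path against the known deployment history.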

Step 4: Posture improvement. This step distinguishes cloud IR from traditional IR. After containing the immediate threat, address the root cause at the posture level. If the VM was compromised through a brute-force SSH attack, enable JIT VM access across all VMs (not just the compromised one). If the storage account was accessed through a leaked key, implement Entra ID-based access instead of key-based access. If the SQL database was attacked through SQL injection, implement a Web Application Firewall and fix the application code.

Check the Defender for Cloud recommendations for the affected resource — there is likely a recommendation that would have prevented the attack if it had been implemented. Document this in the incident report: “Attack vector: brute-force SSH. Root cause: SSH port permanently open. Prevention: JIT VM access (Defender for Cloud recommendation SC-5002). Action: JIT now enabled across all production subscriptions.”
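The open recommendations for a resource can also be pulled programmatically via Azure Resource Graph (this query runs in Azure Resource Graph Explorer, not Log Analytics; the resource name filter is hypothetical):

```kusto
// Unhealthy Defender for Cloud assessments for the affected resource
securityresources
| where type == "microsoft.security/assessments"
| extend statusCode = tostring(properties.status.code),
    recommendation = tostring(properties.displayName),
    affectedResource = tostring(properties.resourceDetails.Id)
| where statusCode == "Unhealthy"
| where affectedResource has "vm-web-01"    // hypothetical resource name
| project recommendation, affectedResource
```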


Worked examples: common cloud alert investigations

Example 1: Cryptocurrency mining on a VM.

Alert: “Digital currency mining related behavior detected” (High severity, Execution phase).

Investigation: Check the device timeline in MDE for the mining process (often xmrig, t-rex, or similar). Identify the process tree — how did the miner get onto the VM? Common entry points: compromised SSH credentials (check authentication logs), exploited vulnerability in a web application running on the VM (check web server logs), or malicious container deployed in a container runtime on the VM.
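A hunting sketch for miner activity across the fleet, using the MDE advanced hunting tables in Sentinel (the binary names and command-line strings are common miner indicators, not an exhaustive list):

```kusto
// Hunt for known miner binaries and mining-pool command-line indicators
DeviceProcessEvents
| where TimeGenerated > ago(7d)
| where FileName in~ ("xmrig", "xmrig.exe", "t-rex", "t-rex.exe")
    or ProcessCommandLine has_any ("stratum+tcp", "stratum+ssl", "--donate-level")
| project TimeGenerated, DeviceName, FileName, ProcessCommandLine,
    InitiatingProcessFileName, InitiatingProcessCommandLine
```

The InitiatingProcess columns answer the entry-point question: a miner spawned by sshd points to compromised credentials, one spawned by a web server process points to an exploited application.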

Containment: Kill the mining process, isolate the VM if lateral movement is suspected, and check for persistence mechanisms (cron jobs, systemd services, modified startup scripts).

Posture improvement: Patch the vulnerability that enabled access, enable JIT for SSH, implement adaptive application controls to prevent execution of non-allowlisted binaries.

Example 2: Suspicious Azure Resource Manager operations.

Alert: “Suspicious Azure Resource Manager operation” (Medium severity, Privilege Escalation / Persistence phase).

Investigation: Check the Azure Activity Log for the resource management operations that triggered the alert. Common suspicious operations: creating a new user account with admin permissions, assigning a subscription-level role to an unknown identity, creating a new virtual machine in an unusual region (attackers deploy crypto miners in regions with cheaper compute), or modifying network security groups to open management ports.

// Azure Resource Manager operations during investigation window
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue has_any ("roleAssignments/write",
    "virtualMachines/write", "networkSecurityGroups/write",
    "storageAccounts/listKeys/action")
| where Caller !in ("known-admin@northgateeng.com", "deploy-pipeline@northgateeng.com")
| project TimeGenerated, Caller, OperationNameValue,
    ResourceGroup, ActivityStatusValue, CallerIpAddress
| order by TimeGenerated desc
Expected Output — Suspicious ARM Operations

Time | Caller | Operation | Status | IP
03:14 | unknown@ext.com | roleAssignments/write | Succeeded | 198.51.100.77
03:16 | unknown@ext.com | virtualMachines/write | Succeeded | 198.51.100.77
03:18 | unknown@ext.com | networkSecurityGroups/write | Succeeded | 198.51.100.77
Attack sequence: At 03:14, an unknown identity assigned itself a subscription-level role (privilege escalation). At 03:16, they created a new VM (likely for crypto mining — check the VM's region and size). At 03:18, they modified NSG rules (likely opening ports for C2 or mining pool communication). This is a complete cloud attack sequence: credential compromise → privilege escalation → resource deployment → network modification. Response: revoke the role assignment, delete the unauthorized VM, revert the NSG changes, investigate how the identity was compromised.

Example 3: Anomalous storage access.

Alert: “Access from an unusual location to a storage account” (Medium severity, Data Access phase).

Investigation: Check the storage account diagnostic logs for the specific access operation. What blob or container was accessed? How much data was downloaded? Was the authentication method a storage key (possible key compromise) or Entra ID (possible identity compromise)?
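The authentication-method question can be answered directly from StorageBlobLogs — the AuthenticationType column distinguishes key, SAS, and Entra ID (OAuth) access. A sketch, with an illustrative suspicious IP:

```kusto
// How was the storage account accessed from the suspicious IP — key, SAS, or Entra ID?
StorageBlobLogs
| where TimeGenerated > ago(24h)
| where CallerIpAddress startswith "198.51.100."   // IP from the alert (illustrative)
| project TimeGenerated, OperationName, Uri, AuthenticationType, StatusText
| order by TimeGenerated desc
```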

If key-based: rotate the storage keys immediately. Search code repositories and configuration files for hardcoded keys that may have been exposed.

If identity-based: investigate the identity through Entra ID sign-in logs (Module 1/3). Check for anomalous sign-in patterns. Revoke the identity’s access to the storage account.
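The sign-in pattern review can be sketched against the SigninLogs table (the UPN is hypothetical — use the identity from the alert's related entities):

```kusto
// Sign-in history for the suspect identity — unusual locations, IPs, or failures?
SigninLogs
| where TimeGenerated > ago(14d)
| where UserPrincipalName =~ "user@northgateeng.com"   // hypothetical UPN
| summarize SignIns = count() by Location, IPAddress, AppDisplayName, ResultType
| order by SignIns desc
```

A sudden cluster of sign-ins from a new location or IP, especially with ResultType 0 (success) after a run of failures, supports the identity-compromise hypothesis.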


Alert suppression and tuning

Not every Defender for Cloud alert requires investigation. Some alerts are false positives caused by legitimate operations that match suspicious patterns. Alert suppression rules filter out known false positive patterns without disabling the underlying detection.

Create suppression rules for recurring false positives: a specific IP address that triggers alerts because it is a legitimate monitoring service, a specific process that triggers execution alerts because it is a known administrative tool, or a specific storage access pattern that triggers anomalous access alerts because it is a scheduled backup operation.

Configure suppression rules in Defender for Cloud → Security alerts → Suppression rules. Each rule specifies the alert type, the entity to match (IP, resource name, process name), and the suppression action (dismiss the alert automatically). Suppression rules are logged — you can audit which alerts were suppressed and verify the rules remain appropriate.

Suppress cautiously

Every suppression rule creates a blind spot. A suppression rule for a specific IP address means that IP will never generate alerts — even if it is compromised. Review suppression rules quarterly to verify they are still appropriate. Remove rules for IPs, accounts, or services that no longer exist.

Try it yourself

Navigate to Defender for Cloud → Security alerts in the Azure portal. If any alerts exist in your lab environment, click into one and examine the alert structure: the severity, the MITRE ATT&CK mapping, the affected resource, the description, and the recommended remediation steps. If no alerts exist, generate a test alert by running the Defender for Servers detection test on your lab VM (the test generates a benign "Sample alert" that validates the detection pipeline). Investigate the test alert using the four-step workflow: cloud context, threat analysis, containment, posture improvement.

What you should observe

The alert detail page shows all the fields from the anatomy table above. The MITRE ATT&CK tactic mapping places the alert in the kill chain. The recommended remediation steps provide specific actions. For a test alert, the remediation is "this is a test — no action needed." For real alerts, the remediation steps are specific to the alert type and affected resource.


Knowledge check

Check your understanding

1. A Defender for Cloud alert shows "Digital currency mining related behavior detected" on a production VM. The alert's kill chain intent is "Execution." What does this tell you about the attack stage?

Correct answer: The VM is already compromised and the attacker is executing their payload. Execution is a mid-stage kill chain phase — the attacker has already gained initial access (phase 2) and is now running software on the VM (phase 3). The mining software is actively consuming compute resources. Immediate containment is required: kill the process, isolate the VM, investigate the entry point. Do not wait — every minute the miner runs costs compute resources and indicates an attacker has control of the system.

Incorrect options:
- The attack is in its early stages — monitor but no containment needed
- The alert is informational — mining is not a security threat
- The VM is being scanned for mining vulnerabilities but not yet compromised

2. After containing a VM compromised through brute-force SSH, what posture improvement step prevents the same attack on other VMs?

Correct answer: Enable JIT VM access across all production subscriptions. JIT closes SSH port 22 by default, eliminating the brute-force attack surface entirely. This is the posture improvement step — it addresses the root cause (permanently open SSH port) rather than just the symptom (one compromised VM). Additionally, review and implement the Defender for Cloud recommendation for JIT, which may have existed before the incident. Document in the incident report: the recommendation existed, it was not implemented, and the attack exploited the gap.

Incorrect options:
- Block the attacker's IP address in the NSG
- Change the SSH password on the compromised VM
- Install a stronger antivirus on all VMs

3. The Azure Activity Log shows an unknown identity created a VM at 03:16 in the Brazil South region. Your organisation only operates in UK South and West Europe. What does this suggest?

Correct answer: An attacker has compromised Azure credentials and is deploying a VM (likely for cryptocurrency mining) in a region where your organisation does not normally operate — specifically to avoid detection by analysts who only monitor UK South and West Europe activity. Brazil South is a common target for crypto mining VMs because of compute availability. Immediate actions: investigate the compromised identity (how did they get Azure credentials?), delete the unauthorized VM, revoke the role assignment, and implement Azure Policy to restrict VM deployment to authorised regions only.

Incorrect options:
- A developer is testing in a new region — probably fine
- Azure automatically deployed a VM for load balancing
- This is a false positive — Defender for Cloud is being too sensitive