5.10 Cross-Product Investigation: Copilot-Assisted Workflow

12-16 hours · Module 5


SC-200 Exam Objective

Domain 3 — Manage Incident Response: "Investigate complex attacks, such as multi-stage, multi-domain, and lateral movement." "Investigate incidents by using agentic AI, including embedded Copilot for Security."

Introduction

Subsections 3.9 and 4.10 demonstrated cross-product investigations using manual techniques — writing KQL union queries, manually correlating data across tables, and building investigation timelines by hand. This subsection demonstrates the same investigation methodology with Copilot assistance, showing how the AI-assisted workflow reduces investigation time while producing the same quality of output.

The scenario is a multi-stage attack that spans identity (Entra ID), email (Defender for Office 365), endpoint (Defender for Endpoint), and cloud infrastructure (Defender for Cloud) — requiring cross-product investigation across all four domains.


The scenario: multi-stage attack across four domains

Three correlated alerts in the Defender XDR incident queue:

Alert 1 (Entra ID Protection): “Anomalous sign-in from new country” — admin@northgateeng.com signed in from Brazil at 02:47 UTC.

Alert 2 (Defender for Endpoint): “Suspicious process on PROD-WEB-01” — PowerShell downloading content from an external URL at 03:12 UTC.

Alert 3 (Defender for Cloud): “Suspicious Azure Resource Manager operation” — new VM created in Brazil South at 03:18 UTC.


Manual workflow: how long it takes

Using the techniques from Modules 1-4, the manual cross-product investigation follows these steps:

1. Open the incident (1 min).
2. Read each of the three alerts in detail (5 min).
3. Write a KQL query to check the admin’s sign-in history (3 min).
4. Write a KQL query to trace the PowerShell execution on PROD-WEB-01 (5 min).
5. Write a KQL query to check the Azure Activity Log for resource management operations (3 min).
6. Correlate the findings manually: build a chronological timeline linking the identity compromise to the endpoint activity to the cloud resource deployment (10 min).
7. Assess the scope: what data was accessed, what resources were created, what persistence mechanisms exist (10 min).
8. Draft the incident report (30 min).

Total manual time: approximately 67 minutes.


Copilot-assisted workflow: the same investigation, faster

Step 1: Incident context (1 minute).

Open the incident. Read Copilot’s automatic summary in the Defender XDR panel.

Copilot summary: “Multi-stage attack targeting admin@northgateeng.com. The admin account was compromised via credential theft (anomalous sign-in from Brazil at 02:47 UTC). The compromised credentials were used to access PROD-WEB-01, where PowerShell downloaded a payload from an external URL at 03:12 UTC. The attacker then used Azure Resource Manager to create a new VM in Brazil South at 03:18 UTC — likely for cryptocurrency mining. Three alerts are correlated across Entra ID Protection, Defender for Endpoint, and Defender for Cloud.”

Analyst validation: Check the sign-in time (02:47 — confirmed in SigninLogs), the PowerShell alert (03:12 — confirmed in DeviceProcessEvents), and the ARM operation (03:18 — confirmed in AzureActivity). Summary is accurate.
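This validation step can be done with a quick spot-check query per source table. A minimal sketch for the SigninLogs check, using the standard Sentinel schema (repeat the same pattern against DeviceProcessEvents and AzureActivity for the other two timestamps):

```kql
// Spot-check sketch: confirm the 02:47 sign-in before trusting the summary.
// The 24-hour window stands in for "today" in the worked example.
SigninLogs
| where UserPrincipalName == "admin@northgateeng.com"
| where TimeGenerated > ago(24h)
| where Location == "BR" // ISO country code for Brazil
| project TimeGenerated, IPAddress, RiskLevelDuringSignIn, AppDisplayName
```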

Step 2: Deep investigation with Copilot KQL (5 minutes).

Prompt: “Generate a KQL query that creates a unified timeline of all activity by admin@northgateeng.com across SigninLogs, DeviceProcessEvents, and AzureActivity between 02:00 and 04:00 UTC today.”

Copilot generates a union query across three tables, projecting TimeGenerated, Source, Action, and Detail columns. The analyst validates the query structure, runs it, and reviews the chronological timeline.
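A hedged sketch of the kind of union query Copilot produces here. The per-table projections are illustrative, not Copilot's exact output, and the placeholder datetimes stand in for the real incident window:

```kql
let account = "admin@northgateeng.com";
let startTime = datetime(2025-01-01 02:00); // placeholder: incident window start
let endTime   = datetime(2025-01-01 04:00); // placeholder: incident window end
union
    (SigninLogs
     | where UserPrincipalName == account
     | project TimeGenerated, Source = "Entra ID", Action = "Sign-in",
               Detail = strcat(Location, ", risk: ", RiskLevelDuringSignIn)),
    (DeviceProcessEvents
     | where InitiatingProcessAccountUpn == account
     | project TimeGenerated, Source = "Endpoint", Action = ActionType,
               Detail = ProcessCommandLine),
    (AzureActivity
     | where Caller == account
     | project TimeGenerated, Source = "ARM", Action = OperationNameValue,
               Detail = ResourceGroup)
| where TimeGenerated between (startTime .. endTime)
| order by TimeGenerated asc
```

The validation point is the union structure itself: confirm every relevant table is included, because any table missing from the union is silently missing from the timeline.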

Copilot-Generated Cross-Product Timeline
| Time | Source | Action | Detail |
| --- | --- | --- | --- |
| 02:47 | Entra ID | Sign-in | Brazil, MFA satisfied, high risk |
| 03:05 | Entra ID | Sign-in | Brazil, Azure Resource Manager |
| 03:10 | Endpoint | RDP login | PROD-WEB-01 from attacker IP |
| 03:12 | Endpoint | PowerShell | IEX(New-Object Net.WebClient).Download... |
| 03:14 | Endpoint | Process creation | xmrig.exe spawned from PowerShell |
| 03:18 | ARM | VM create | Brazil South, Standard_D4s_v3 |
| 03:19 | ARM | NSG modify | Allow all inbound on port 3333 |
| 03:22 | ARM | listKeys | prod-storage-account |

Three attack objectives revealed in one timeline: (1) Endpoint compromise — PowerShell downloaded and executed xmrig (crypto miner) on PROD-WEB-01. (2) Cloud resource abuse — a new VM deployed in Brazil for additional mining capacity. (3) Data access preparation — storage account keys listed for potential data exfiltration.

Prompt: “Explain each step in the timeline and map to MITRE ATT&CK techniques.”

Copilot maps: sign-in = T1078 (Valid Accounts), RDP = T1021.001 (Remote Desktop Protocol), PowerShell download = T1059.001 (PowerShell) + T1105 (Ingress Tool Transfer), xmrig = T1496 (Resource Hijacking), VM creation = T1578.002 (Create Cloud Instance), NSG modification = T1562.007 (Disable or Modify Cloud Firewall), key listing = T1552 (Unsecured Credentials).

Analyst validation: mappings are accurate. The analyst notes that T1562.007 is particularly significant — the attacker opened mining pool port 3333, which confirms the crypto mining objective.

Step 3: Scope assessment (3 minutes).

Prompt: “Based on the timeline, summarise the scope of this incident. What was compromised, what was accessed, what resources were created, and what is still at risk?”

Copilot summary: “Compromised: admin@northgateeng.com identity + PROD-WEB-01 server. Resources created: 1 VM in Brazil South (Standard_D4s_v3, ~$0.19/hr = $4.56/day). Data at risk: storage account keys were listed — the attacker may have the keys and could access prod-storage-account data at any time. Active threat: xmrig is running on PROD-WEB-01, consuming production compute. The Brazil VM is likely also mining. The storage account keys are unrotated.”

Analyst validation: confirms the assessment matches the evidence. Adds: “The admin account may have access to other Azure subscriptions — check for activity in other subscriptions from the same IP.”

Step 4: Report draft (3 minutes).

Prompt: “Draft a concise incident report for the CISO with: Executive Summary (3 sentences), Attack Timeline (table), Scope Assessment, Immediate Actions Required, and Longer-Term Recommendations.”

Copilot generates a complete report. The analyst edits: corrects the estimated compute cost (adds the PROD-WEB-01 mining cost to the Brazil VM cost), adds the storage key rotation as the most urgent action, and adds a recommendation for conditional access requiring compliant device for ARM access.

Total Copilot-assisted time: approximately 15 minutes.


The comparison

Cross-Product Investigation: Manual vs Copilot
| Phase | Manual | Copilot | Savings |
| --- | --- | --- | --- |
| Incident summary | 6 min | 1 min | 83% |
| KQL investigation | 11 min | 5 min | 55% |
| Scope assessment | 10 min | 3 min | 70% |
| Report drafting | 30 min | 3 min | 90% |
| Correlation + analysis | 10 min | 3 min | 70% |
| TOTAL | 67 min | 15 min | ~78% |

The analyst's expertise is present in both workflows. The manual workflow requires the analyst to write KQL, correlate findings, and draft the report. The Copilot workflow requires the analyst to validate summaries, verify KQL, assess scope, and edit the report. The expertise is applied differently (generating vs validating) but is equally necessary.

When Copilot cannot help with cross-product investigation

Copilot is most effective when the data exists in connected plugins and the attack pattern is recognisable. It is less effective in three scenarios:

Novel attack techniques. If the attack uses a technique not represented in Copilot’s training data or threat intelligence, Copilot may not recognise the pattern. The analyst’s experience and pattern recognition remain the primary detection mechanism for novel attacks.

Third-party data correlation. If the investigation requires correlating with third-party data that is not in a Copilot plugin (Palo Alto firewall logs, CrowdStrike alerts, custom application logs), Copilot cannot query that data. The analyst must query it directly through Sentinel KQL and manually incorporate the findings.

Operational context. Copilot does not know that PROD-WEB-01 is a customer-facing production server that cannot be taken offline without a change management process. It does not know that the admin account belongs to the CTO who is presenting to the board tomorrow. It does not know that the storage account contains data subject to regulatory requirements. The analyst provides this context — and it changes the response decisions.


Evidence gap identification

One of Copilot’s most underappreciated capabilities in cross-product investigations is identifying evidence gaps — data that should exist but does not appear in the investigation.

Prompt: “Based on the attack timeline, are there any evidence gaps — data sources that should have recorded activity but show no relevant events?”

Copilot analyses the timeline and identifies: “The attacker accessed the storage account at 03:22 using listed keys. However, the StorageBlobLogs table shows no download events after the key listing. This could mean: (1) The attacker has not yet used the keys to download data — the keys are still valid and represent an ongoing risk. (2) Storage diagnostic logging is not configured for this storage account — downloads may have occurred but were not recorded. (3) The attacker used the keys from a different path that does not appear in the current query scope.”

This gap analysis is valuable because it identifies what you don’t know — which is often more important than what you do know. An investigation that concludes “no data was exfiltrated” when storage diagnostic logging was not configured is a false conclusion. Copilot’s gap identification catches this.

Prompt follow-up: “How can I determine which of these three explanations is correct?”

Copilot suggests: “Check the storage account diagnostic settings to determine if blob read logging is enabled. If logging is disabled, you cannot confirm or deny data access through logs — you must assess the risk based on the fact that the attacker has the keys and could access data at any time. If logging is enabled, broaden the query time window and check for access from any IP (not just the known attacker IP, in case the attacker used a different egress point).”
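If logging turns out to be enabled, the "any IP" broadening can be sketched as a StorageBlobLogs query. The storage account name below is illustrative, and key-based access is singled out because the attacker holds the account keys:

```kql
// Summarise blob reads by caller IP, not just the known attacker IP.
StorageBlobLogs
| where AccountName == "prodstorageaccount" // illustrative name for prod-storage-account
| where TimeGenerated > ago(7d)
| where OperationName in ("GetBlob", "ListBlobs")
| where AuthenticationType == "AccountKey" // matches the stolen-key access path
| summarize Requests = count(), DistinctBlobs = dcount(Uri)
    by CallerIpAddress, bin(TimeGenerated, 1h)
| order by TimeGenerated asc
```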


Copilot-assisted latency analysis

Cross-product investigations often reveal timing patterns that indicate the attacker’s sophistication and automation level. Copilot can analyse the timing between attack steps to provide this insight.

Prompt: “Analyse the time gaps between each step in the attack timeline. What does the timing tell us about the attacker?”

Copilot response: “The time between initial sign-in (02:47) and first RDP to PROD-WEB-01 (03:10) is 23 minutes — this suggests manual operation, not automation. The time between PowerShell download (03:12) and xmrig execution (03:14) is 2 minutes — this suggests a pre-built payload, not manual compilation. The time between VM creation (03:18) and NSG modification (03:19) is 1 minute — this suggests the attacker used a pre-written script or ARM template for the cloud resource deployment. The time between VM deployment and storage key listing (03:22) is 4 minutes — the attacker pivoted from resource abuse to data access quickly, suggesting they had a defined objective list.”
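The gap computation is easy to reproduce and re-run as the timeline grows. A minimal sketch using the worked-example events, where the 2025-01-01 date is a placeholder for "today":

```kql
// Compute the minute gap between consecutive timeline events.
let AttackTimeline = datatable(TimeGenerated: datetime, Source: string, Action: string)
[
    datetime(2025-01-01 02:47), "Entra ID", "Sign-in",
    datetime(2025-01-01 03:05), "Entra ID", "Sign-in (ARM)",
    datetime(2025-01-01 03:10), "Endpoint", "RDP login",
    datetime(2025-01-01 03:12), "Endpoint", "PowerShell download",
    datetime(2025-01-01 03:14), "Endpoint", "xmrig execution",
    datetime(2025-01-01 03:18), "ARM", "VM create",
    datetime(2025-01-01 03:19), "ARM", "NSG modify",
    datetime(2025-01-01 03:22), "ARM", "listKeys",
];
AttackTimeline
| order by TimeGenerated asc
| extend GapMinutes = datetime_diff('minute', TimeGenerated, prev(TimeGenerated))
| project TimeGenerated, Source, Action, GapMinutes
```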

This analysis helps you understand the attacker’s methodology: semi-automated with manual reconnaissance and pre-built tooling. This assessment is relevant for the incident report (indicating a moderately sophisticated threat actor) and for detection engineering (the 23-minute gap between sign-in and lateral movement is a detectable window where conditional access or anomalous sign-in detection could have intervened).


When Copilot misses: novel attack patterns

In the worked example, the attack pattern (credential theft → endpoint compromise → crypto mining + data access) is well-known and well-represented in Copilot’s training data. Copilot performs well because it has seen thousands of similar patterns.

For genuinely novel attack techniques, Copilot’s performance degrades. If the attacker used a supply chain compromise to inject malicious code into a legitimate Azure DevOps pipeline, Copilot may not recognise the attack pattern because pipeline compromise is less common in its training data than credential theft.

Indicators that Copilot may be missing a novel pattern:

Copilot’s incident summary seems incomplete — it describes some alerts but does not connect them into a coherent narrative. This may mean the attack pattern does not match any known pattern in Copilot’s training.

Copilot’s MITRE ATT&CK mapping seems forced — the technique assignments feel generic rather than specific to the observed activity. This may mean Copilot is mapping to the closest known technique rather than recognising a new one.

Copilot’s recommended investigation steps seem standard — they follow a generic investigation playbook rather than targeting the specific anomaly. This may mean Copilot does not understand the unique aspects of the attack.

When you notice these indicators, shift to manual investigation techniques from Modules 1-4. Copilot is still useful for KQL generation and data retrieval, but the analytical reasoning shifts from AI-assisted to human-driven. The analyst’s expertise — the ability to recognise patterns that do not match any template — is the capability that Copilot cannot replicate for novel threats.


Copilot-assisted post-incident review

After the investigation is complete and the incident is contained, Copilot supports the post-incident review — the process of extracting lessons learned and improving defences.

Prompt: “Based on the complete investigation session, identify: (1) where our detections caught the attack, (2) where our detections missed the attack, and (3) what new detections should be created to catch this attack earlier.”

Copilot analyses the session context and responds: “Detections that worked: Entra ID Protection detected the anomalous sign-in at 02:47 (3 minutes after it occurred). Defender for Endpoint detected the suspicious PowerShell at 03:12 (immediate). Detections that were late: Defender for Cloud flagged the suspicious ARM operation at 03:18 but the alert took 15 minutes to correlate into the incident — the attacker had already deployed the VM. Detection gaps: no rule detected the transition from identity compromise to endpoint access (the 23-minute window between sign-in and RDP). Recommended new detection: create an analytics rule that correlates risky sign-ins with RDP connections to servers from the same IP within 30 minutes.”
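The recommended detection at the end of that response can be sketched as a scheduled analytics rule query. This version assumes the Defender for Endpoint DeviceLogonEvents table is streamed to the workspace; Copilot's generated query will differ in detail:

```kql
// Risky sign-in followed within 30 minutes by an RDP logon from the same IP.
SigninLogs
| where RiskLevelDuringSignIn in ("medium", "high")
| project SigninTime = TimeGenerated, UserPrincipalName, IPAddress
| join kind=inner (
    DeviceLogonEvents
    | where LogonType == "RemoteInteractive" // RDP
    | project LogonTime = TimeGenerated, DeviceName, RemoteIP
  ) on $left.IPAddress == $right.RemoteIP
| where LogonTime between (SigninTime .. SigninTime + 30m)
| project UserPrincipalName, IPAddress, SigninTime, DeviceName, LogonTime
```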

This post-incident analysis — which takes a senior analyst 1-2 hours to produce in a formal post-incident review — is drafted by Copilot in 30 seconds. The analyst validates, refines, and uses it as the starting point for the formal review document.


Detection gap analysis from cross-product timelines

The cross-product timeline from the worked example above reveals detection gaps — time windows where the attacker operated undetected. These gaps are improvement opportunities.

The 23-minute gap: Between the anomalous sign-in (02:47) and the RDP connection to PROD-WEB-01 (03:10), the attacker established access to a production server with no detection; the first endpoint alert did not fire until the PowerShell detection at 03:12. This gap exists for two reasons: the anomalous sign-in alert was generated but not yet triaged (it was sitting in the queue), and no analytics rule correlates sign-in anomalies with subsequent RDP connections.

The storage key gap: The attacker listed storage keys at 03:22. No alert fired for this operation because the default Defender for Cloud analytics do not flag listKeys operations as suspicious. This gap exists because key listing is a common administrative operation — but key listing by an identity that signed in anomalously 35 minutes earlier (02:47) is highly suspicious in context.

Using Copilot to close gaps: For each identified gap, ask Copilot to generate the analytics rule that would close it. “Create a Sentinel analytics rule that fires when: a user with a risky sign-in (risk level medium or high) lists storage account keys within 60 minutes of the sign-in.” Copilot generates the KQL join query, the analyst validates and deploys it.
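A hedged sketch of the join query described in that prompt. The ARM operation name and risk levels follow the standard schema; validate Copilot's actual output against it:

```kql
// Risky sign-in followed within 60 minutes by a storage listKeys operation.
SigninLogs
| where RiskLevelDuringSignIn in ("medium", "high")
| project SigninTime = TimeGenerated, UserPrincipalName
| join kind=inner (
    AzureActivity
    | where OperationNameValue =~ "Microsoft.Storage/storageAccounts/listKeys/action"
    | project KeyListTime = TimeGenerated, Caller, ResourceGroup, CallerIpAddress
  ) on $left.UserPrincipalName == $right.Caller
| where KeyListTime between (SigninTime .. SigninTime + 60m)
| project UserPrincipalName, SigninTime, KeyListTime, ResourceGroup, CallerIpAddress
```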

This is the detection engineering feedback loop: investigations reveal gaps → Copilot generates detections to close gaps → future attacks are caught earlier. Each incident makes the detection surface larger and more effective. Copilot accelerates both sides of the loop — the investigation that finds the gap and the detection engineering that closes it.


Copilot-generated remediation checklists

After the investigation timeline is complete and the scope is assessed, Copilot can generate a structured remediation checklist that ensures no containment or eradication step is missed.

Prompt: “Based on this investigation, generate a prioritised remediation checklist. For each item, indicate: the action, the urgency (immediate / within 24h / within 7d), the responsible team, and verification criteria.”

Copilot output (example):

Immediate: Rotate all storage account keys for prod-storage-account. Team: Cloud Infrastructure. Verify: old keys return 403 on access attempt.

Immediate: Reset admin@northgateeng.com password and revoke all sessions. Team: Identity Operations. Verify: SigninLogs shows no new sign-ins from attacker IP after reset.

Immediate: Kill xmrig.exe process on PROD-WEB-01. Team: SOC. Verify: DeviceProcessEvents shows no xmrig activity after termination.

Within 24h: Delete the unauthorized VM in Brazil South (rg-brazil-temp). Team: Cloud Infrastructure. Verify: VM no longer appears in Azure resource list.

Within 24h: Revert NSG rule changes made by the attacker. Team: Cloud Infrastructure. Verify: NSG rules match the approved baseline.

Within 24h: Delete any OAuth applications created by the attacker. Team: Identity Operations. Verify: no unauthorized apps in Entra ID app registrations.

Within 7d: Implement conditional access requiring compliant device for ARM access. Team: Identity Operations. Verify: sign-in from non-compliant device is blocked.

Within 7d: Implement Azure Policy restricting VM creation to approved regions. Team: Cloud Governance. Verify: VM creation attempt in Brazil South is denied.

This checklist — which would take 15 minutes to write manually — is generated in seconds. The analyst validates each item against the investigation findings, adjusts priorities based on operational context, and distributes to the responsible teams. The verification criteria are particularly valuable: they define “done” for each remediation action, enabling the SOC to confirm that every action was completed successfully.
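Several of the verification criteria can themselves be expressed as KQL checks that the SOC can re-run until they pass. A minimal sketch for the identity item, with placeholder values for the attacker IP and reset time:

```kql
let attackerIP = "203.0.113.50";            // placeholder attacker IP
let resetTime = datetime(2025-01-01 05:00); // placeholder reset timestamp
SigninLogs
| where IPAddress == attackerIP and TimeGenerated > resetTime
| summarize SigninsAfterReset = count()
// The item verifies only when SigninsAfterReset is 0.
```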

Try it yourself

Pick a recent multi-alert incident from your Defender XDR queue (ideally one with 3+ correlated alerts). Investigate it twice: once manually (using the techniques from Modules 1-4), and once with Copilot assistance. Compare the time taken for each approach and the quality of the final output. Where did Copilot provide the most value? Where did you still need to work manually?

What you should observe

Copilot provides the most value in the summary and report phases (where it processes large amounts of data into narrative). It provides moderate value in the KQL generation phase (faster than writing from scratch, but requires validation). It provides the least value in the scope assessment phase (where operational context and analyst judgement dominate). The final report should be comparable in quality regardless of which approach you used — the difference is the time invested.


Knowledge check

Check your understanding

1. Copilot generates a cross-product timeline that shows 8 events across 3 data sources. You notice the timeline is missing storage account access events that you know should be there. What is the most likely cause?

Correct answer: The Copilot-generated KQL query may not include the storage log table. Check the generated query: if it unions SigninLogs, DeviceProcessEvents, and AzureActivity but does not include StorageBlobLogs, storage access events will be missing. Ask Copilot to add StorageBlobLogs to the union query. Alternatively, the storage diagnostic logs may not be configured to send to your Sentinel workspace; check the data connector configuration.

Incorrect options:
- Storage events are never available in cross-product queries.
- Copilot intentionally omits storage data for privacy.
- Storage events take 24 hours to appear.