5.5 Embedded Copilot in Sentinel
Domain 1 — Manage a SOC Environment: Copilot in Sentinel supports workspace management, analytics rule creation, and hunting — core SOC environment tasks.
Introduction
The Sentinel embedded experience brings Copilot capabilities directly into the SIEM workflow. While the Defender XDR embedded experience focuses on incident investigation (reactive), the Sentinel embedded experience focuses on detection engineering and threat hunting (proactive) — creating analytics rules, building hunting queries, and optimising workspace operations.
KQL query generation in Sentinel
The Sentinel experience includes Copilot integration in the Logs blade, the Analytics rule editor, and the Hunting interface. In each, a natural language bar accepts investigation questions and generates KQL.
Logs blade — investigation queries. When investigating in Sentinel, describe your investigation question in natural language: “Show me all Azure activity by admin@northgateeng.com in the last 48 hours where the operation was a role assignment or user creation.” Copilot generates the KQL against the AzureActivity, SigninLogs, or other relevant tables.
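The generated query for a prompt like this typically takes the following shape. This is an illustrative sketch, not Copilot's exact output: the operation name filter and projected columns are assumptions to validate against your schema, and the user-creation half of the question would likely be answered from the AuditLogs table rather than AzureActivity.

```kusto
// Sketch of the KQL Copilot might generate for the role-assignment part of the prompt.
// Operation names and columns are assumptions; validate against your workspace schema.
AzureActivity
| where TimeGenerated > ago(48h)
| where Caller =~ "admin@northgateeng.com"
| where OperationNameValue has "roleAssignments/write"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, ResourceGroup, CallerIpAddress
| order by TimeGenerated desc
```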
The advantage over the Defender XDR Advanced Hunting Copilot: Sentinel’s Copilot has access to all tables in your workspace — including custom tables, third-party data (Syslog, CEF, custom connectors), and Sentinel-specific tables (SecurityAlert, SecurityIncident, ThreatIntelligenceIndicator). Defender XDR’s Copilot is limited to the Defender XDR schema.
Analytics rule editor — detection logic. When creating a scheduled analytics rule, Copilot can generate the KQL detection query from a natural language description of what you want to detect.
Example prompt: “Detect when a user creates an inbox forwarding rule to an external domain within 30 minutes of a sign-in from an unusual country.”
Copilot generates a KQL query that joins CloudAppEvents (inbox rule creation) with SigninLogs (unusual sign-in) by user identity, with a time window constraint. The analyst validates the query logic, sets the rule parameters (frequency, lookback, severity, entity mapping), and activates the rule.
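A plausible shape for that generated detection is sketched below. The risk-level filter as a proxy for "unusual country", the ActionType value, and the RawEventData column extraction are all assumptions to validate against your data before activating anything.

```kusto
let lookback = 1d;
let window = 30m;
// Proxy for "sign-in from an unusual country": sign-ins Entra ID flagged as risky.
let RiskySignins = SigninLogs
    | where TimeGenerated > ago(lookback)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | project UserPrincipalName, SigninTime = TimeGenerated, Location;
CloudAppEvents
| where TimeGenerated > ago(lookback)
| where ActionType == "New-InboxRule"
| extend UserPrincipalName = tostring(RawEventData.UserId)
| join kind=inner RiskySignins on UserPrincipalName
// Inbox rule created within 30 minutes of the risky sign-in.
| where TimeGenerated between (SigninTime .. (SigninTime + window))
// (External-domain check on the forwarding target omitted here for brevity.)
| project UserPrincipalName, SigninTime, RuleCreated = TimeGenerated, Location
```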
This capability is transformative for detection engineering: analysts who understand the detection concept but struggle with complex KQL joins and time window logic can describe the detection in natural language and get a working query to refine.
The generated query is a starting point, not a production-ready rule. Before activating, you still need to:

- Validate the KQL logic.
- Test it against historical data (does it return expected results for known incidents?).
- Tune thresholds (does it generate too many false positives?).
- Configure entity mapping (map users, IPs, and devices for investigation context).
- Set the rule frequency and lookback period.

Copilot accelerates the query creation step — the tuning and validation steps still require analyst expertise.
Hunting query assistance
In the Sentinel Hunting interface, Copilot helps with two specific tasks: generating hunting hypotheses and translating hypotheses into hunting queries.
Hypothesis generation: Prompt Copilot with a broad hunting objective: “I want to hunt for signs of credential harvesting in my environment.” Copilot suggests specific hypotheses: “Check for new OAuth application registrations with suspicious redirect URIs,” “Look for bulk sign-in failures followed by a successful sign-in from a different IP,” “Search for users who accessed more mailboxes than their historical baseline.” Each hypothesis translates into a specific hunting query.
Query generation: For each hypothesis, Copilot generates the hunting KQL. The analyst runs the query, evaluates the results, and bookmarks interesting findings for further investigation. This hypothesis-to-query cycle is the same hunting methodology you will learn in Module 10 (Threat Hunting) — Copilot accelerates the query writing step.
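As an illustration, the second hypothesis above ("bulk sign-in failures followed by a successful sign-in from a different IP") might translate into a hunting query like this sketch. The thresholds (20 failures, 1-hour bins, 24-hour lookback) are arbitrary starting points to tune.

```kusto
// Hypothesis: many failures for a user, then a success from an IP not seen in the failures.
let failures = SigninLogs
    | where TimeGenerated > ago(24h)
    | where ResultType != "0"
    | summarize FailCount = count(), FailIPs = make_set(IPAddress)
        by UserPrincipalName, bin(TimeGenerated, 1h)
    | where FailCount > 20;
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType == "0"
| join kind=inner failures on UserPrincipalName
// Keep successes from an IP that does not appear in the failure set.
| where not(set_has_element(FailIPs, IPAddress))
| project UserPrincipalName, SuccessTime = TimeGenerated, SuccessIP = IPAddress, FailCount, FailIPs
```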
Incident enrichment in Sentinel
When investigating a Sentinel incident, Copilot provides the same incident summary capability as in Defender XDR — but with access to all Sentinel data sources. If your Sentinel workspace ingests third-party firewall logs, custom application logs, and cloud provider audit logs, Copilot’s incident summary can incorporate data from these sources that the Defender XDR Copilot cannot see.
Copilot also assists with entity enrichment: given a user, IP, or domain entity from an incident, Copilot queries threat intelligence feeds, checks the entity’s history in your workspace, and provides an enrichment summary: “This IP has appeared in 3 previous incidents in the last 30 days,” “This domain is listed in the Microsoft Threat Intelligence database as associated with APT29,” “This user has triggered 7 analytics rules in the last 90 days.”
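An analyst can run the same kind of lookup manually. The sketch below checks a suspect IP against ingested threat intelligence; the IP address is a placeholder for the entity under investigation.

```kusto
// Does this IP appear in the workspace's threat intelligence indicators?
let suspectIp = "203.0.113.50";  // placeholder: the entity from the incident
ThreatIntelligenceIndicator
| where TimeGenerated > ago(30d)
| where NetworkIP == suspectIp or NetworkSourceIP == suspectIp
| project TimeGenerated, Description, ThreatType, ConfidenceScore
```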
Workbook query assistance
Sentinel workbooks use KQL to visualise security data. When building or modifying workbooks, Copilot generates the KQL for visualisation queries: “Create a query that shows the top 10 countries by failed sign-in count in the last 30 days, formatted for a bar chart.” Copilot generates the KQL with the appropriate summarisation and projection for the workbook visualisation type.
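For the prompt above, the generated query would look something like this sketch (the country field extraction from LocationDetails is an assumption about the schema):

```kusto
// Top 10 countries by failed sign-in count, shaped for a bar chart visualisation.
SigninLogs
| where TimeGenerated > ago(30d)
| where ResultType != "0"
| summarize FailedSignIns = count() by Country = tostring(LocationDetails.countryOrRegion)
| top 10 by FailedSignIns
| render barchart
```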
This capability is particularly valuable for SOC teams that maintain operational dashboards — the analyst describes what they want to see, and Copilot generates the query that produces it.
Natural language log exploration
Beyond structured KQL generation, Copilot enables conversational log exploration — a fundamentally different way to interact with security data. Instead of writing precise queries, you ask questions in natural language and Copilot translates them into KQL, runs the query, and presents the results with explanation.
Exploration workflow example:
Analyst: “Were there any unusual sign-in patterns in the last 24 hours?”
Copilot analyses SigninLogs, identifies patterns that deviate from historical baselines, and responds: “I found 3 anomalous patterns: (1) User r.khan signed in from 4 countries in 6 hours — impossible travel. (2) Service account svc-backup authenticated 247 times in 1 hour — 10x above its daily average. (3) User admin@northgateeng.com signed in from Brazil at 02:47 — first sign-in from this country for this user.”
Analyst: “Tell me more about the admin sign-in from Brazil. What did the account do after signing in?”
Copilot queries AzureActivity and CloudAppEvents for the admin account after 02:47 UTC and presents the timeline of subsequent operations.
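Behind that conversational answer sits a query along these lines. The date is a placeholder (the scenario only gives a time of 02:47), and CloudAppEvents would be queried similarly with its own account columns.

```kusto
// Timeline of Azure operations by the admin account after the suspicious sign-in.
let suspectUser = "admin@northgateeng.com";
let signinTime = datetime(2025-01-15 02:47);  // placeholder date for the 02:47 UTC sign-in
AzureActivity
| where TimeGenerated between (signinTime .. (signinTime + 4h))
| where Caller =~ suspectUser
| project TimeGenerated, OperationNameValue, ActivityStatusValue, ResourceGroup, CallerIpAddress
| order by TimeGenerated asc
```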
This conversational pattern is how many analysts naturally think about investigation — asking questions, not writing code. Copilot bridges the gap between investigative thinking and data querying. Over time, analysts who use this conversational approach also learn KQL from seeing the generated queries — each conversation teaches them the KQL equivalent of their natural language question.
Analytics rule optimisation
Beyond creating new analytics rules, Copilot assists with optimising existing rules. Common optimisation tasks:
Reducing false positives. Describe the false positive pattern to Copilot: “This analytics rule for brute force detection generates 50 alerts per day, but 90% are from our VPN concentrator IP 10.0.1.254. How can I exclude this IP without weakening the detection?” Copilot modifies the KQL to add a where clause excluding the known-good IP while preserving the detection logic for other sources.
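The modification is usually a single added where clause, as in this sketch (the table, threshold, and brute-force shape are illustrative; the exclusion line is the change Copilot makes):

```kusto
// Brute-force style detection with the known-good VPN concentrator excluded.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| where IPAddress != "10.0.1.254"   // exclusion added: known-good VPN concentrator
| summarize Failures = count() by UserPrincipalName, IPAddress
| where Failures > 10
```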
Improving detection coverage. “This rule detects inbox rule creation but only checks for rules forwarding to @gmail.com. How can I broaden it to detect forwarding to any external domain?” Copilot modifies the filter from a specific domain check to a check against the organisation’s domain list — forwarding to any domain not in the list triggers the rule.
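The broadened filter might look like the sketch below. The domain list, the ActionType value, and the extraction of the forwarding destination from the rule parameters are all assumptions to verify against real events.

```kusto
// Flag inbox rules forwarding to any domain outside the organisation's list.
let orgDomains = dynamic(["northgateeng.com"]);  // placeholder domain list
CloudAppEvents
| where TimeGenerated > ago(1d)
| where ActionType == "New-InboxRule"
| extend Params = tostring(RawEventData.Parameters)          // assumption: destination lives in the rule parameters
| extend DestDomain = tolower(extract(@"@([\w\.\-]+)", 1, Params))
| where isnotempty(DestDomain) and DestDomain !in (orgDomains)
```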
Performance tuning. “This rule takes 4 minutes to execute and times out during peak ingestion. How can I optimise the query performance?” Copilot analyses the query for performance bottlenecks: missing time filters, expensive joins, unnecessary data scanning, and suboptimal operator ordering. It suggests specific modifications that reduce execution time.
Example outcome for a rule that joins SigninLogs with CloudAppEvents:
| Metric | Original Query | Copilot-Optimised |
|---|---|---|
| Execution time | 4 minutes (timeout risk) | 8 seconds |
| Data scanned | Full SigninLogs + full CloudAppEvents | 1h of each, pre-filtered |
| Detection quality | Baseline | Equivalent (added risk filter improves signal) |
| False positive rate | Higher (includes all sign-ins) | Lower (risky sign-ins only) |
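The typical optimisations Copilot suggests follow a small set of patterns: bound the scan window first, filter on cheap columns before expensive string operations, and prefer term-indexed operators. A sketch, using an illustrative SSH brute-force query over Syslog:

```kusto
// Before (scans the full table with an expensive substring match):
//   Syslog | where SyslogMessage contains "Failed password" | summarize ...
// After: time-bounded, narrowed, and using 'has' (term-indexed, cheaper than 'contains').
Syslog
| where TimeGenerated > ago(1h)                 // bound the scan window first
| where Facility == "auth"                      // narrow on a cheap column before string search
| where SyslogMessage has "Failed password"
| summarize Attempts = count() by HostName,
    SourceIP = extract(@"from ([\d\.]+)", 1, SyslogMessage)
| where Attempts > 50                           // illustrative threshold
```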
Incident comment enrichment
When investigating Sentinel incidents, Copilot can generate enriched comments that document investigation findings directly in the incident. Instead of manually typing investigation notes, prompt Copilot: “Summarise what I have found so far in this investigation and add it as a comment to the incident.”
Copilot generates a structured comment containing the investigation summary, affected entities, key findings, and current status — then posts it to the incident timeline. This creates a running investigation log that other analysts can read, supporting shift handoffs and collaborative investigation.
Shift handoff pattern: At the end of a shift, the analyst prompts: “Summarise the current state of this investigation for the next shift. Include: what we know, what we don’t know yet, and recommended next steps.” Copilot generates a handoff brief that the incoming analyst can read in 2 minutes, instead of spending 15 minutes working through raw investigation notes.
Copilot in the unified security operations platform
Microsoft is progressively merging Sentinel into the Defender XDR portal as the “unified security operations platform.” As this convergence continues, Copilot capabilities that were previously separate (Defender XDR Copilot for incidents, Sentinel Copilot for KQL and detection) are merging into a single experience. The practical implication: you will increasingly access Sentinel Copilot capabilities from within the Defender portal rather than navigating to the separate Sentinel experience.
For the SC-200 exam, understand both the current state (separate embedded experiences in each portal) and the direction (unified experience in the Defender portal). Questions may reference either the legacy separate experience or the unified experience.
Automation rule assistance
Sentinel automation rules are lightweight logic that runs when an incident is created or updated — changing severity, assigning to analysts, adding tags, or triggering playbooks. Copilot assists with creating automation rules from natural language descriptions.
Prompt: “Create an automation rule that: when an incident is created with severity High and the title contains ‘BEC’, assign it to the BEC investigation team and add the tag ‘BEC-priority’.”
Copilot translates this into the automation rule configuration: trigger (incident created), conditions (severity = High AND title contains “BEC”), actions (assign to specified analyst group, add tag “BEC-priority”). The analyst reviews the rule logic, tests it against a sample incident, and activates it.
This capability is particularly useful for SOC teams that maintain dozens of automation rules for different incident types. Instead of manually configuring each rule through the UI (selecting conditions from dropdown menus, adding actions one at a time), the analyst describes the desired behaviour and Copilot generates the configuration.
Prompt for complex automation: “Create an automation rule that runs when a Defender for Cloud alert with severity High is correlated into an incident. The rule should: change the incident severity to Critical if the alert type contains ‘cryptocurrency mining’, add the tag ‘cloud-compromise’, assign to the cloud security team, and trigger the ‘cloud-incident-notification’ playbook.”
Copilot generates the multi-condition, multi-action rule. The analyst validates each condition and action, then activates. Without Copilot, this complex rule takes 10-15 minutes to configure through the UI; with Copilot, the configuration is generated in seconds and validated in 2 minutes.
Data connector troubleshooting with Copilot
When a Sentinel data connector stops ingesting data, troubleshooting can be time-consuming — checking connector health, verifying permissions, examining ingestion latency, and comparing expected vs actual data volumes. Copilot accelerates this process.
Prompt: “The Microsoft 365 Defender data connector has not ingested data in the last 4 hours. What should I check?”
Copilot queries the connector health status and provides a diagnostic checklist: “Check the connector status in Sentinel → Data connectors (is it showing ‘Connected’?). Verify the service principal permissions have not expired. Check the Log Analytics workspace health (is the workspace accepting data?). Check the SecurityIncident and SecurityAlert tables — when was the last event ingested? If other connectors are working but the M365 Defender connector is not, the issue is connector-specific. If all connectors are affected, the issue is workspace-level.”
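The "when was the last event ingested?" step of that checklist is a one-line manual query:

```kusto
// Last ingested event per Defender XDR table, and how stale each table is.
union withsource=TableName SecurityAlert, SecurityIncident
| summarize LastEvent = max(TimeGenerated) by TableName
| extend MinutesSinceLast = datetime_diff("minute", now(), LastEvent)
```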
Prompt follow-up: “The connector shows ‘Connected’ but the SecurityAlert table has no data since 08:00. Other tables are receiving data.”
Copilot narrows the diagnosis: “If the connector is connected and other tables receive data, the most likely cause is: (1) No alerts were generated by Defender XDR in the last 4 hours (check the Defender portal incident queue directly). (2) The alert data type was deselected in the connector configuration (check Data connectors → Microsoft Defender XDR → Configuration → select the data types to sync). (3) A latency spike in the M365 Defender → Sentinel data pipeline (check Azure Service Health for any ongoing incidents).”
This diagnostic assistance saves 15-20 minutes of manual troubleshooting by guiding the analyst directly to the most likely causes in priority order.
Workspace health monitoring with Copilot
Beyond individual connector troubleshooting, Copilot supports proactive workspace health monitoring.
Daily health check prompt: “Give me a health summary of my Sentinel workspace: data ingestion volume trends for the last 7 days, any data connectors that have stopped ingesting, any analytics rules that failed to execute, and any tables with zero ingestion in the last 24 hours.”
Copilot queries the workspace health tables (Usage, SentinelHealth, _LogOperation) and returns a structured health report: “Ingestion volume: 12.4 GB/day average (stable, +2% week-over-week). Connector issues: the Palo Alto Syslog connector has not ingested data since 14:00 yesterday (possible network issue). Failed analytics rules: rule ‘Brute force SSH’ failed at 03:00 due to query timeout — consider optimising the query. Zero-ingestion tables: the AWSCloudTrail table has had zero events for 48 hours (was the AWS connector disconnected?).”
This health check takes Copilot 15 seconds. Running the equivalent checks manually — querying Usage tables, checking each connector status, reviewing the analytics rule health log — takes 20-30 minutes. As a daily start-of-shift routine, the Copilot health check ensures the SOC team knows the workspace is healthy before they begin investigating alerts that depend on complete data ingestion.
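One component of that report, the ingestion volume trend, corresponds to a query over the Usage table along these lines (Quantity is reported in MB, so the conversion to GB is approximate):

```kusto
// Daily billable ingestion volume per table over the last 7 days.
Usage
| where TimeGenerated > ago(7d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1000, 2) by DataType, bin(TimeGenerated, 1d)
| order by TimeGenerated desc, IngestedGB desc
```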
Try it yourself
If Copilot is enabled in your Sentinel workspace, navigate to Sentinel → Logs and use the Copilot bar to generate a query: "Show me the top 10 users with the most failed sign-in attempts in the last 7 days." Evaluate the generated KQL: does it use the correct table (SigninLogs)? Does it filter for failures correctly (ResultType != "0")? Does it summarise by user and sort by count? Run the query and review the results. Then try the Analytics rule assistant: navigate to Analytics → Create rule and describe a detection scenario to Copilot.
What you should observe
The KQL generation should produce a runnable query that returns a table of users ranked by failed sign-in count. Common validation points: the table name should be SigninLogs (not AADSignInEventsBeta or another table), the failure filter should use ResultType (not a boolean "success" field), and the summarisation should use summarize with count(). The analytics rule assistant should generate a detection query that you can refine and test before activating.
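A reference query matching those validation points, to compare against Copilot's generated KQL:

```kusto
// Top 10 users by failed sign-in count in the last 7 days.
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0"          // "0" is success; anything else is a failure
| summarize FailedAttempts = count() by UserPrincipalName
| top 10 by FailedAttempts
```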
Knowledge check
Check your understanding
1. What is the key advantage of Copilot in Sentinel over Copilot in Defender XDR for investigation?