5.2 Security Copilot Architecture and Setup
Domain 3 — Manage Incident Response: "Investigate incidents by using agentic AI, including embedded Copilot for Security." Understanding the architecture enables effective use of both standalone and embedded experiences.
Introduction
Security Copilot exists in two forms: the standalone experience (a dedicated portal at securitycopilot.microsoft.com) and embedded experiences (Copilot capabilities integrated directly into Microsoft security products like Defender XDR, Sentinel, and Entra ID). Both use the same underlying AI engine and the same grounding data — the difference is the interface and the context.
This subsection covers the architecture that powers both experiences: the standalone portal, the consumption-based pricing model (Security Compute Units), the plugin architecture that connects Copilot to data sources, the role-based access model, and the setup process.
The standalone experience
The standalone portal (securitycopilot.microsoft.com) provides a chat-like interface where you interact with Copilot through natural language prompts. Each interaction happens within a session — a persistent context that remembers previous prompts and responses within the same investigation. Sessions are the Copilot equivalent of an investigation notebook: you build context over multiple prompts, and Copilot uses the accumulated context to provide increasingly relevant responses.
Session workflow for investigation:
Start a new session for each investigation (or use a promptbook that creates a structured session automatically). The first prompt establishes the context: “I’m investigating incident INC-2026-0321 in Defender XDR. Summarise the incident.” Copilot retrieves the incident data through the Defender XDR plugin and provides a summary. Subsequent prompts build on this context:
- “What are the MITRE ATT&CK techniques involved?”
- “Generate a KQL query to find all sign-ins from the attacker IP 203.0.113.47 in the last 7 days.”
- “Draft an executive summary of this incident for the CISO.”
Each prompt in the session builds on the previous context. Copilot remembers that you are investigating INC-2026-0321, that the attacker IP is 203.0.113.47, and that the attack involved specific techniques. This session persistence is what makes multi-step investigations productive — you do not need to re-establish context with every prompt.
Session management: Sessions can be saved, shared with team members (for collaborative investigation), and pinned for reference. When you share a session, the recipient sees all prompts and responses — providing a complete record of the AI-assisted investigation. This is useful for shift handoffs: “I started investigating this incident on day shift. Here’s my Copilot session with findings so far.”
Security Compute Units (SCUs): the consumption model
Security Copilot uses a consumption-based pricing model measured in Security Compute Units (SCUs). Each prompt consumes SCUs based on the complexity of the request and the volume of data processed. Simple prompts (explain this alert) consume fewer SCUs than complex prompts (analyse this 500-line PowerShell script and explain every function).
| Prompt Type | Typical SCU Cost | Example |
|---|---|---|
| Simple explanation | Low (1-2 SCUs) | "What does this alert mean?" |
| Incident summary | Medium (3-5 SCUs) | "Summarise incident INC-321" |
| KQL generation | Medium (2-4 SCUs) | "Write a query for failed sign-ins from Russia" |
| Script analysis | High (5-10 SCUs) | "Analyse this obfuscated PowerShell script" |
| Full investigation promptbook | High (15-30 SCUs) | Multi-step investigation sequence |
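The rough cost ranges in the table above can be turned into a simple planning aid. The sketch below is a hypothetical estimator, not a Security Copilot API: the prompt categories and SCU ranges come from the table, while the function and its names are illustrative.

```python
# Illustrative SCU budget estimator. The (min, max) ranges mirror the
# table above; nothing here calls a real Security Copilot interface.
SCU_RANGES = {
    "simple_explanation": (1, 2),
    "incident_summary": (3, 5),
    "kql_generation": (2, 4),
    "script_analysis": (5, 10),
    "promptbook": (15, 30),
}

def estimate_scu(workload: dict[str, int]) -> tuple[int, int]:
    """Return a (min, max) SCU estimate for a daily mix of prompt types."""
    lo = sum(SCU_RANGES[kind][0] * count for kind, count in workload.items())
    hi = sum(SCU_RANGES[kind][1] * count for kind, count in workload.items())
    return lo, hi

# A day with 10 incident summaries, 5 KQL generations, and 2 promptbook runs:
daily = {"incident_summary": 10, "kql_generation": 5, "promptbook": 2}
print(estimate_scu(daily))  # → (70, 130)
```

Running this kind of estimate against a typical day's workload gives a first-order input to the capacity-planning discussion later in this section.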
Provisioning model: You provision SCU capacity in the Azure portal (Microsoft Security Copilot → Capacity). The capacity is billed hourly whether or not it is used. This means you should right-size the capacity based on your team’s actual usage patterns — over-provisioning wastes money, under-provisioning creates bottlenecks during high-activity periods (active incidents where multiple analysts need Copilot simultaneously).
Plugin architecture: connecting Copilot to data
Plugins are the connectors that give Copilot access to data sources. Without plugins, Copilot is a general-purpose AI with no access to your security environment. With plugins, it can query your incidents, search your logs, analyse your alerts, and access threat intelligence.
Microsoft first-party plugins include:
- Microsoft Defender XDR (incidents, alerts, Advanced Hunting, device data)
- Microsoft Sentinel (log queries, analytics rules, incidents, hunting)
- Microsoft Entra ID (sign-in data, user risk, conditional access)
- Microsoft Purview (audit data, compliance status)
- Microsoft Defender for Cloud (security alerts, recommendations)
- Microsoft Intune (device compliance, configuration status)
- Microsoft Defender Threat Intelligence (threat actor profiles, IOC lookups, vulnerability data)
Third-party plugins extend Copilot to non-Microsoft data sources. Plugins exist for ServiceNow (ITSM ticket integration), Splunk (SIEM data from non-Microsoft environments), and other security tools. The plugin SDK allows organisations to build custom plugins that connect Copilot to internal APIs, proprietary data sources, or specialised tools.
Plugin management: Plugins are enabled and configured in the Copilot settings (Security Copilot → Settings → Plugins). Not all plugins are enabled by default — the administrator selects which plugins to activate based on the organisation’s data sources and security requirements. Each plugin requires appropriate permissions — enabling the Defender XDR plugin requires that the user has access to Defender XDR data.
Role-based access for Copilot
Security Copilot has its own RBAC model that determines who can use Copilot and what they can do.
Copilot Owner — can manage Copilot settings, enable/disable plugins, manage SCU capacity, view usage analytics, and use all Copilot features. Typically assigned to the SOC manager or security operations lead.
Copilot Contributor — can use all Copilot investigation features (prompts, sessions, promptbooks, embedded experiences) but cannot manage settings or plugins. Assigned to SOC analysts who use Copilot for daily investigation work.
The Copilot role is separate from the underlying data access permissions. A Copilot Contributor who does not have access to Sentinel data cannot use the Sentinel plugin — even though the plugin is enabled by the Owner. Copilot respects the user’s existing data permissions: it never shows data the user would not be able to access through the native portal.
This is a critical governance point. Copilot accesses data as the authenticated user, not as a privileged service account. If analyst A does not have access to Insider Risk Management data, Copilot will not return IRM data when analyst A uses it — even if the IRM plugin is enabled. Copilot inherits the user's permissions. This ensures that AI-assisted investigation does not create a privilege escalation path.
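The governance rule above reduces to an intersection check: a plugin returns data only if it is both enabled by the Owner and backed by the user's own data permissions. The following sketch is purely illustrative (the function and plugin names are invented for this example), but it captures the logic.

```python
# Hypothetical sketch of the access rule described above. A plugin is
# usable only if the Copilot Owner enabled it AND the authenticated user
# already has permission to the underlying data source.
def copilot_can_access(plugin: str,
                       enabled_plugins: set[str],
                       user_permissions: set[str]) -> bool:
    return plugin in enabled_plugins and plugin in user_permissions

enabled = {"defender_xdr", "sentinel", "irm"}          # set by the Owner
analyst_a = {"defender_xdr", "sentinel"}               # no IRM data access

print(copilot_can_access("irm", enabled, analyst_a))           # → False
print(copilot_can_access("defender_xdr", enabled, analyst_a))  # → True
```

Note that enabling the IRM plugin tenant-wide does nothing for analyst A: the user's missing data permission is the binding constraint, which is exactly why Copilot cannot become a privilege escalation path.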
Setting up Security Copilot
The setup process requires three steps: provision capacity, configure plugins, and assign roles.
Step 1: Provision SCU capacity. In the Azure portal, create a Security Copilot capacity resource. Select the Azure region, set the number of SCUs per hour, and deploy. The capacity becomes available within minutes.
Step 2: Configure plugins. In the Security Copilot portal (Settings → Plugins), enable the plugins for your data sources. At minimum, enable the Defender XDR and Sentinel plugins — these are the primary investigation data sources. Enable additional plugins (Entra ID, Purview, Defender for Cloud, Threat Intelligence) as needed.
Step 3: Assign roles. In Microsoft Entra ID, assign the Copilot Owner role to the SOC lead and the Copilot Contributor role to SOC analysts. Verify that each analyst also has the appropriate data access permissions for the plugins they will use.
Verification: After setup, each analyst should test Copilot with a simple prompt that requires data access: “Show me the most recent high-severity incident in Defender XDR.” If the response includes actual incident data, the setup is complete. If it returns a generic response or an error, check the plugin configuration and the user’s data access permissions.
Standalone vs embedded: when to use each
The standalone portal and embedded experiences access the same AI engine with the same grounding data. The choice between them depends on the task.
Use the standalone portal when:
- You need a persistent session for a multi-step investigation (the standalone session provides better context persistence than the embedded panel).
- You need to query across products in a single session (the standalone portal has all plugins available simultaneously).
- You need to generate a comprehensive report from investigation findings (the standalone session accumulates all context for report generation).
- You need to run a promptbook (promptbooks run in the standalone experience).
Use the embedded experience when:
- You are triaging an incident in the Defender XDR portal and want a quick summary (the embedded panel is right there, with no portal switching).
- You need an explanation of a specific alert while reviewing it (one click on “Explain” in the alert detail).
- You need a KQL query while working in Advanced Hunting or Sentinel Logs (the Copilot bar is built into the query editor).
- You need a quick identity or device assessment while reviewing entities in an incident.
The typical daily workflow uses both: start in the embedded experience for triage (quick summary, alert explanation), switch to the standalone for deep investigation (multi-step session with accumulated context), and return to the embedded experience for execution (running KQL queries in Advanced Hunting, taking response actions through the Defender portal).
Multi-tenant and MSSP considerations
Organisations with multiple tenants (parent company + subsidiaries, each with their own M365 tenant) and Managed Security Service Providers (MSSPs) managing multiple customer tenants have specific Copilot considerations.
SCU capacity per tenant. Each tenant requires its own SCU capacity provision. An MSSP managing 10 customer tenants must provision SCUs in each tenant (or use a delegated access model — if supported — to query customer data from the MSSP’s own Copilot instance). This cost structure means Copilot adoption for MSSPs requires clear value justification per customer.
Cross-tenant data isolation. Copilot processes data within the tenant boundary. An MSSP analyst using Copilot in Customer A’s tenant sees only Customer A’s data. There is no cross-tenant data leakage — the isolation is enforced at the authentication and plugin level. However, insights from one customer’s investigation cannot be automatically applied to another customer’s investigation (unless the analyst manually transfers the knowledge).
Delegated access. Azure Lighthouse and Defender multi-tenant management allow MSSPs to access customer tenants through delegated permissions. When using Copilot with delegated access, the same RBAC constraints apply — the MSSP analyst’s Copilot output is limited to the data their delegated permissions allow.
MSSP best practice: Create standardised promptbooks for common investigation types across all customer tenants. This ensures consistent investigation quality regardless of which analyst handles which customer. Customise promptbooks per customer only when the customer’s environment requires it (specific data sources, specific compliance requirements, specific escalation procedures).
Capacity planning: right-sizing SCUs for your SOC
SCU capacity planning depends on three factors: team size, investigation volume, and usage pattern.
Team size. Each analyst using Copilot concurrently consumes SCU capacity. A rough guideline: 1 SCU per 2-3 concurrent analysts during normal operations. During active incidents, when all analysts may use Copilot simultaneously, provision 1 SCU per analyst.
Investigation volume. A SOC that investigates 10 incidents per day consumes more SCUs than one that investigates 2 per day. Track monthly SCU consumption for the first 3 months after deployment, then right-size based on actual usage rather than estimates.
Usage pattern. If your SOC operates 8x5 (business hours only), SCU consumption is concentrated in 8 hours. If your SOC operates 24x7, consumption is spread across all hours but nighttime volume is typically lower. Consider dynamic scaling: higher capacity during business hours, lower capacity overnight.
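A dynamic-scaling plan can be sanity-checked with simple arithmetic. The SCU counts and hour split below are illustrative assumptions, not recommendations; the point is to see how a day/night split changes total provisioned SCU-hours (which is what hourly billing charges for).

```python
# Illustrative dynamic capacity plan: higher provisioning during business
# hours, lower overnight. All figures are assumptions for the example.
business_hours_scu = 3   # provisioned 08:00-18:00
overnight_scu = 1        # provisioned for the remaining hours
business_hours = 10
overnight_hours = 24 - business_hours

daily_scu_hours = (business_hours_scu * business_hours
                   + overnight_scu * overnight_hours)
monthly_scu_hours = daily_scu_hours * 30

print(daily_scu_hours, monthly_scu_hours)  # → 44 1320
```

Compare this against a flat 3-SCU 24x7 provision (3 × 24 × 30 = 2160 SCU-hours) and the overnight scale-down saves roughly 40% of the billed capacity.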
Cost estimation example: A 4-analyst SOC investigating 8 incidents per day, using Copilot for triage, investigation, and reporting. Estimated consumption: 2-3 SCU/hour during business hours, 1 SCU/hour overnight. Monthly cost: approximately $400-600/month (varies by region and pricing tier). Compare this to the time saved: if Copilot saves each analyst 2 hours per day, that is 8 analyst-hours per day × 22 working days = 176 analyst-hours per month. At a loaded analyst cost of $50/hour, the time saved is worth $8,800/month. The ROI is clear even at conservative estimates.
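The ROI arithmetic in the example above can be reproduced directly. The inputs (2 hours saved per analyst per day, 22 working days, $50/hour loaded cost) are the section's own illustrative assumptions, not measured figures.

```python
# Reproducing the worked ROI example above, using the section's own
# illustrative inputs.
analysts = 4
hours_saved_per_analyst_per_day = 2
working_days_per_month = 22
loaded_cost_per_hour = 50  # USD

hours_saved = (analysts * hours_saved_per_analyst_per_day
               * working_days_per_month)
value_of_time_saved = hours_saved * loaded_cost_per_hour

print(hours_saved)          # → 176
print(value_of_time_saved)  # → 8800
```

Even if the actual time savings were half the assumed figure, the value of analyst time recovered would still comfortably exceed the estimated SCU spend.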
Plugin dependency chain
Some Copilot capabilities depend on multiple plugins working together. Understanding the dependency chain prevents confusion when a capability does not work as expected.
Example: cross-product entity enrichment requires the Defender XDR plugin (for alert data), the Entra ID plugin (for identity data), the Sentinel plugin (for log data), and the Threat Intelligence plugin (for IOC enrichment). If any one plugin is disabled or the user lacks permissions for that plugin’s data source, the enrichment is incomplete — Copilot returns what it can access but silently omits data from the inaccessible plugin.
Diagnosis: If a Copilot response seems incomplete (it describes sign-in data but not alert history, or alert history but not identity risk), check which plugins are enabled and whether the user has the required permissions for each. The response quality is only as good as the weakest link in the plugin chain.
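The diagnosis step above amounts to a set difference: which required plugins are not actually usable for this user? The sketch below is a hypothetical diagnostic aid with invented names, but it makes the "silently omitted data" failure mode concrete.

```python
# Hypothetical diagnostic for the dependency chain described above: given
# the plugins an enrichment needs and the ones this user can actually use,
# list the data sources that will be silently missing from the response.
REQUIRED_FOR_ENTITY_ENRICHMENT = {
    "defender_xdr",          # alert data
    "entra_id",              # identity data
    "sentinel",              # log data
    "threat_intelligence",   # IOC enrichment
}

def missing_sources(required: set[str], usable: set[str]) -> set[str]:
    return required - usable

usable = {"defender_xdr", "sentinel"}  # Entra ID and TI unavailable
print(sorted(missing_sources(REQUIRED_FOR_ENTITY_ENRICHMENT, usable)))
# → ['entra_id', 'threat_intelligence']
```

In this scenario the enrichment would include alert and log data but omit identity risk and IOC context, matching the incomplete-response symptom described above.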
Try it yourself
If Security Copilot is available in your environment, navigate to securitycopilot.microsoft.com. Check Settings → Plugins to see which plugins are enabled. Start a new session and try: "What security incidents occurred in my environment in the last 24 hours?" Evaluate the response: does it reference actual incidents from your Defender XDR portal? If yes, the Defender XDR plugin is working correctly. If it provides a generic response about what security incidents are, the plugin is not connected or you lack the necessary permissions.
What you should observe
A working Copilot setup returns environment-specific data: incident numbers, alert names, affected entities, and timestamps from your actual tenant. A non-working setup returns generic security knowledge or an error message indicating the plugin could not be reached. The distinction demonstrates the grounding concept from subsection 5.1 — without connected plugins, Copilot is just a general-purpose AI.
Knowledge check
Check your understanding
1. What determines what data Security Copilot can access when an analyst uses it?
2. You start a Copilot session to investigate an incident. After 5 prompts, you switch to investigating a different incident. Should you start a new session?