5.9 Copilot Governance, Plugins, and Data Security

12-16 hours · Module 5


SC-200 Exam Objective

Domain 1 — Manage a SOC Environment: Understanding Copilot governance supports the security operations environment management domain. The exam tests awareness of data handling, access controls, and organisational policies around AI-assisted investigation.

Introduction

Deploying an AI assistant that can query your entire security data estate introduces governance questions that your organisation must address: who can use Copilot, what data it can access, where the data goes, how usage is monitored, and what policies govern AI-generated investigation output. This subsection covers the governance framework for Security Copilot.


Data security and privacy model

Your data stays yours. Security Copilot processes prompts and generates responses using your organisation’s security data. This data is not used to train Microsoft’s models — your incident data, alert details, and investigation findings do not become part of the LLM’s training set. Each prompt-response cycle is isolated to your tenant.

Data residency. Copilot processes data in the Azure region where your SCU capacity is provisioned. If you provision capacity in UK South, prompts and responses are processed in UK South. Data may transit through Microsoft’s AI infrastructure during processing but is not stored outside your designated region after the response is generated.

Session data retention. Copilot sessions (the prompts and responses) are stored in your tenant and subject to your organisation’s data retention policies. Sessions can be deleted by users or administrators. Session data is accessible through the Copilot portal and via the management API for audit and compliance purposes.

Prompt logging and audit. Every Copilot interaction is logged in the Microsoft 365 audit log — who used Copilot, what they asked, and when. This audit trail supports compliance review and enables monitoring for inappropriate use. The audit events appear in the same unified audit log you investigated in Module 3.

// Monitor Copilot usage in Sentinel
CloudAppEvents
| where Timestamp > ago(7d)
| where Application == "Security Copilot"
| summarize PromptCount = count(),
    UniqueUsers = dcount(AccountId),
    AvgPromptsPerUser = round(todouble(count()) / dcount(AccountId), 1)
    by bin(Timestamp, 1d)
| order by Timestamp desc

Plugin security and management

Plugins determine what data Copilot can access. Each plugin represents a data source — and each data source has its own security implications.

First-party plugin security. Microsoft plugins (Defender XDR, Sentinel, Entra ID) access data through the user’s existing Microsoft Entra ID authentication. The user’s RBAC permissions determine what data the plugin returns. Enabling a plugin does not grant new data access — it makes existing access available through the Copilot interface.

Third-party plugin risks. Third-party plugins connect Copilot to non-Microsoft data sources through APIs. When a third-party plugin is enabled, Copilot sends data (the user’s prompt and potentially context data from the session) to the third-party API. This creates a data flow outside your Microsoft tenant that must be evaluated for data security and privacy implications.

Before enabling a third-party plugin, assess: what data the plugin receives (the full prompt? session context?), where the third party processes it (which jurisdiction?), how long the third party retains it (do they store your prompt data?), and what data protection agreement is in place.

Custom plugin governance. Custom plugins developed by your organisation should follow the same security review process as any API integration: code review, security testing, data flow documentation, and access control verification. A custom plugin that connects Copilot to an internal database must be reviewed for the same data exposure risks as any other internal API.

Plugin management best practices: Enable only the plugins your team needs. Disable third-party plugins until they pass a security review. Review the enabled plugin list quarterly. Monitor plugin usage through the audit log — a plugin that is never used should be disabled to reduce the attack surface.
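The last of these checks, spotting plugins that are enabled but never invoked, reduces to a set difference once usage data is exported from the audit log. A minimal Python sketch; the plugin names and usage events below are illustrative stand-ins, not real audit-log schema:

```python
# Illustrative plugin inventory: names as they appear in the Copilot settings page.
enabled_plugins = {"Defender XDR", "Microsoft Sentinel", "Entra ID", "ThreatFeedX"}

# Illustrative usage events exported from the audit log as (user, plugin) pairs.
usage_events = [
    ("analyst1", "Defender XDR"),
    ("analyst2", "Microsoft Sentinel"),
    ("analyst1", "Defender XDR"),
]

used_plugins = {plugin for _, plugin in usage_events}

# Plugins enabled but never invoked: candidates for disabling at the next review.
unused_plugins = sorted(enabled_plugins - used_plugins)
print(unused_plugins)  # ['Entra ID', 'ThreatFeedX']
```

The same pattern works for role assignments: diff the list of people holding Copilot roles against current SOC staff to find stale access.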


Organisational policies for AI-assisted investigation

Your organisation needs policies that govern how Copilot is used in security operations. These policies address questions that technology alone cannot answer.

Output validation policy. “All Copilot-generated content (incident summaries, KQL queries, report drafts) must be validated by the analyst before being included in official incident documentation.” This prevents unvalidated AI output from becoming official investigation records.

Incident report policy. “Copilot-generated report drafts may be used as starting points, but the final incident report must be reviewed and approved by a senior analyst or the incident commander.” This ensures quality control on the most important investigation deliverable.

Data handling policy. “Copilot sessions that contain investigation data (user names, IP addresses, attack details) must be deleted within 30 days of investigation closure unless required for legal proceedings.” This limits the retention of sensitive data in Copilot’s session storage.

Access policy. “Copilot Contributor access is restricted to certified SOC analysts. Copilot Owner access is restricted to the SOC operations manager. Access is reviewed quarterly.” This prevents unauthorised use of the AI assistant.

Training policy. “All analysts must complete Security Copilot training (including this module) before receiving Copilot Contributor access. Training covers prompting techniques, output validation, and the limitations of AI-generated content.” This ensures analysts know how to use Copilot effectively and safely.

Copilot governance is not optional

An AI assistant that can query your security data, generate investigation narratives, and draft incident reports is a powerful capability — but also a governance responsibility. Without policies governing validation, data handling, and access, Copilot introduces risks: unvalidated AI output in official reports, excessive data retention in session logs, and uncontrolled access to investigation capabilities. The governance framework ensures Copilot is a controlled, auditable tool — not an unmanaged risk.


Usage monitoring and optimisation

Monitor Copilot usage to ensure it is being used effectively and to right-size SCU capacity.

Usage metrics to track: prompts per analyst per day (is the team using Copilot?), prompt types (investigation, KQL generation, report drafting — which capabilities are most used?), SCU consumption (are you over- or under-provisioned?), and session count vs investigation count (are analysts using sessions per investigation, or mixing multiple investigations in one session?).
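These metrics are simple aggregations once the prompt log is exported. A minimal sketch using an invented log of (analyst, session, investigation) tuples; the field names are assumptions for illustration, not a real export schema:

```python
from collections import Counter

# Illustrative prompt log: one (analyst, session_id, investigation_id) per prompt.
prompts = [
    ("a1", "s1", "inc1"), ("a1", "s1", "inc1"), ("a1", "s2", "inc2"),
    ("a2", "s3", "inc3"), ("a2", "s3", "inc4"),  # two investigations in one session
]

prompts_per_analyst = Counter(analyst for analyst, _, _ in prompts)
sessions = {session for _, session, _ in prompts}
investigations = {inv for _, _, inv in prompts}

# A ratio well below 1.0 suggests analysts are mixing investigations in one session.
session_to_investigation = len(sessions) / len(investigations)
print(prompts_per_analyst["a1"], session_to_investigation)  # 3 0.75
```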

Under-utilisation indicators: A low prompt count per analyst suggests the team is not comfortable with Copilot or does not see the value. Address this with additional training and by sharing successful use cases. The investigation time comparison from subsection 5.7 is a compelling adoption motivator.

Over-utilisation concerns: Analysts who rely on Copilot for tasks they should do manually (basic KQL that they should know by heart, alert types they should recognise without AI explanation) may be developing a dependency that weakens their core skills. Copilot should accelerate expert work, not replace fundamental learning.


Copilot output classification policy

Your organisation should define how Copilot-generated content is classified and handled in official documentation.

Draft classification. All raw Copilot output is classified as “Draft” — it requires analyst validation before use. This classification applies to incident summaries, KQL queries, MITRE technique mappings, and report drafts. No Copilot output goes into an official document without analyst review and approval.

Approved classification. After analyst validation (and senior analyst or incident commander approval for incident reports), the content is classified as “Approved.” The approval process should document: who validated the output, what changes were made, and the date of approval. This creates an audit trail that distinguishes AI-generated content from analyst-authored content.

Evidence classification. Copilot output should not be used as primary investigation evidence. The primary evidence is the raw data (alert details, log entries, KQL query results). Copilot output is an analysis tool — it helps you interpret the evidence, but the evidence itself comes from the data sources, not from the AI.

Regulatory implications. Some regulatory frameworks (GDPR, DORA, NIS2) may require organisations to document when AI is used in security decision-making. Your incident report should note when Copilot was used to assist the investigation — not because this weakens the investigation quality, but because regulatory transparency may require it. Draft a standard footnote: “This investigation was assisted by Microsoft Security Copilot. All AI-generated content was validated by the investigating analyst before inclusion.”


Incident report approval workflow with Copilot

When Copilot generates a report draft, the approval workflow should follow your standard IR documentation process with one additional step: AI output validation.

Step 1: Copilot generates the draft. The analyst prompts Copilot to produce an incident report from the investigation session context.

Step 2: Analyst validates every claim. Each factual statement in the draft is cross-checked against the raw investigation data. Any hallucinated details (data volumes, user counts, IP addresses that do not appear in the evidence) are corrected or removed.

Step 3: Analyst adds operational context. Copilot does not know your organisation’s business context, impact assessments, or policy requirements. The analyst adds: business impact statement, affected stakeholders, communication timeline, and organisation-specific remediation steps.

Step 4: Senior review. The incident commander or senior analyst reviews the final report. At this stage, the report is a human-authored document that was accelerated by AI — the quality standard is identical to a fully manual report.

Step 5: Distribution. The approved report is distributed to stakeholders (CISO, compliance team, affected business units) through normal channels.
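The claim-by-claim check in step 2 can be partially automated as an evidence cross-check. A minimal sketch, assuming the analyst supplies the draft text and the set of IPs that actually appear in the raw query results (both values here are illustrative):

```python
import re

# IP addresses that actually appear in the raw investigation evidence.
evidence_ips = {"203.0.113.50", "198.51.100.7"}

# An illustrative Copilot-generated draft paragraph.
draft = ("Data was exfiltrated to 203.0.113.50 and a second host, "
         "192.0.2.99, was contacted during the incident.")

# Extract every IPv4-shaped value asserted in the draft.
claimed_ips = set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", draft))

# IPs claimed in the draft but absent from the evidence: likely hallucinations
# that the analyst must correct or remove before the draft progresses.
unverified = sorted(claimed_ips - evidence_ips)
print(unverified)  # ['192.0.2.99']
```

A script like this flags candidates for review; it does not replace the analyst reading every sentence against the evidence.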


SCU cost optimisation

SCU capacity is billed hourly whether or not it is used. Optimising SCU provisioning prevents overspending while ensuring capacity is available during incidents.

Baseline capacity. Provision enough SCUs for normal daily operations: analyst count × average prompts per analyst per hour. For a 3-analyst SOC with moderate Copilot usage (5-10 prompts per analyst per hour), 2-3 SCUs is typically sufficient.
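The sizing arithmetic can be captured in a small helper. Note that the per-SCU prompt throughput used here is an illustrative planning assumption, not a published Microsoft figure; calibrate it against your own consumption data:

```python
import math

def baseline_scus(analysts: int, prompts_per_analyst_hour: float,
                  prompts_per_scu_hour: float = 10.0) -> int:
    """Rough SCU baseline: expected hourly prompt volume divided by an
    assumed per-SCU hourly throughput, rounded up, minimum one unit."""
    expected = analysts * prompts_per_analyst_hour
    return max(1, math.ceil(expected / prompts_per_scu_hour))

# 3 analysts at ~8 prompts/hour each -> 24 prompts/hour -> 3 SCUs.
print(baseline_scus(3, 8))  # 3
```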

Incident surge capacity. During active incidents, multiple analysts may use Copilot simultaneously. Implement a documented process for temporarily scaling SCU capacity: the incident commander authorises the scale-up, the Copilot Owner adjusts the capacity in the Azure portal, and the capacity is scaled back after the incident.

Scheduled scaling. If your SOC has predictable demand patterns (higher during business hours, lower overnight), consider adjusting SCU capacity on a schedule — higher during peak hours, lower or minimum during off-hours. This requires automation (Azure CLI or REST API) but can reduce costs by 30-40%.
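The schedule logic itself is trivial; the peak window and capacity numbers below are assumptions to adjust for your SOC, and applying the returned target (via Azure CLI or the REST API) is left to your automation pipeline:

```python
def scheduled_capacity(hour_utc: int, peak: int = 3, off_peak: int = 1) -> int:
    """Target SCU count for a given hour: peak capacity during an assumed
    08:00-18:00 business window, minimum capacity overnight."""
    return peak if 8 <= hour_utc < 18 else off_peak

print(scheduled_capacity(10), scheduled_capacity(22))  # 3 1
```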

Usage review. Monthly, review SCU consumption data: what percentage of provisioned capacity was actually used? If average utilisation is below 30%, reduce the provisioned capacity. If utilisation regularly exceeds 80%, increase it to prevent bottlenecks.
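The review thresholds translate directly into a right-sizing rule that can run against monthly consumption data:

```python
def rightsize(avg_utilisation: float) -> str:
    """Apply the monthly review thresholds: reduce capacity below 30%
    average utilisation, increase above 80%, otherwise hold."""
    if avg_utilisation < 0.30:
        return "reduce"
    if avg_utilisation > 0.80:
        return "increase"
    return "hold"

print(rightsize(0.25), rightsize(0.85), rightsize(0.55))  # reduce increase hold
```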


Compliance implications of AI-assisted investigation

Using AI in security operations has compliance implications that your governance framework must address.

Data processing. Copilot processes security data (potentially including personal data — user names, email addresses, IP addresses) through Microsoft’s AI infrastructure. This processing should be documented in your Record of Processing Activities (ROPA) under GDPR if you process EU personal data. The processing basis is typically “legitimate interest” (security operations), but the documentation requirement still applies.

Automated decision-making. If Copilot’s output directly influences decisions that affect individuals (investigating an employee, determining data breach notification), GDPR Article 22 requirements for automated decision-making may apply. The human-in-the-loop validation model (analyst validates all output before acting) generally satisfies this requirement — but document the validation process to demonstrate that decisions are not fully automated.

Cross-border data transfers. If your SCU capacity is in a different region from your data subjects, Copilot processing may constitute a cross-border data transfer. Provision SCUs in the same region as your primary data to avoid cross-border transfer complexity.

Audit trail. Maintain records of Copilot usage in investigations: which investigations used Copilot, what prompts were submitted, what output was generated, and what validation was performed. The Copilot audit events in the M365 audit log provide the technical audit trail. Your incident reports provide the operational audit trail.


AI incident response playbook

Create a documented playbook for the case where Copilot produces incorrect output that is not caught before it influences investigation decisions. This playbook addresses a scenario that will eventually occur: an analyst acts on a Copilot recommendation without adequate validation and the action is incorrect.

Playbook steps:

Identify the error: what did Copilot state that was incorrect? What action was taken based on the incorrect output? What was the actual correct information?

Assess impact: did the incorrect action cause harm? Examples: isolating the wrong device (service disruption), reporting incorrect data exposure scope (misleading CISO), blocking a legitimate IP (disrupting business partner access).

Remediate: reverse the incorrect action if possible. Correct any reports or communications that included the incorrect information. Notify affected stakeholders of the correction.

Root cause analysis: why was the incorrect output not caught during validation? Was the validation checklist not followed? Was the error type unusual or particularly convincing? Does the validation process need strengthening?

Process improvement: update the validation checklist based on the error type. Add the error pattern to the team’s “Copilot known issues” list. Share the learning in the next team meeting.

This playbook normalises the fact that AI errors will occur — the goal is not zero errors (impossible with current AI) but rapid detection, minimal impact, and continuous process improvement.


Copilot in regulated industries

Organisations in regulated industries (financial services, healthcare, government, critical infrastructure) face additional governance requirements for AI-assisted security operations.

Financial services (FCA, PRA regulated): Regulators expect firms to maintain explainability in security decisions. When Copilot assists an investigation, the firm must be able to explain how the investigation conclusion was reached — which means documenting both the Copilot output and the analyst’s validation reasoning. The “black box” concern is addressed by the human-in-the-loop model: the decision is the analyst’s (explainable), Copilot’s contribution is documented in the session log (auditable).

Healthcare (NHS, HIPAA): Patient data may flow through Copilot sessions when investigating incidents involving healthcare systems. Ensure SCU capacity is provisioned in a region that complies with data residency requirements. Document the data flow in your Data Protection Impact Assessment (DPIA) for the Copilot deployment.

Government (OFFICIAL, SECRET): Government classifications may restrict which data can be processed by AI services. Verify that the Copilot processing environment meets the required classification level. For UK government, Copilot processing on Microsoft’s UK Azure regions meets OFFICIAL requirements; higher classifications may require additional controls or may not be compatible with cloud-based AI processing.

Critical infrastructure (NIS2): The NIS2 Directive requires critical infrastructure organisations to implement appropriate security measures including incident handling. AI-assisted incident response is compatible with NIS2 requirements provided the human-in-the-loop model is maintained and documented. The key NIS2 requirement: “the capability to detect, respond to, and recover from security incidents” — Copilot accelerates this capability without changing the fundamental requirement for human oversight.


Quarterly Copilot access review

Conduct a quarterly review of Copilot access and usage to ensure governance compliance.

Review checklist:

Role assignments: who has Copilot Owner access? Who has Contributor access? Are all assignments still appropriate? Have any team members left or changed roles? Remove access for departed or transferred staff.

Plugin configuration: which plugins are enabled? Are any plugins enabled that are not being used (disable them to reduce the data access surface)? Have any new plugins been added since the last review (verify they passed the security assessment)?

Usage patterns: which analysts are using Copilot most/least? Are there analysts with access who never use it (consider removing access or providing additional training)? Are any analysts using Copilot for tasks outside their investigation scope (potential policy violation)?

SCU consumption: is the provisioned capacity right-sized based on actual usage? Are there periods of over-provisioning (wasted cost) or under-provisioning (bottlenecks)?

Session retention: are completed investigation sessions being deleted per the data handling policy? Are any sessions older than the retention period (30/60/90 days per policy)?

Document the review findings and any corrective actions taken. This quarterly review creates the governance evidence trail that auditors and regulators expect.
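The session-retention item in the checklist lends itself to a scripted check. A sketch assuming an exported inventory of sessions with investigation closure dates; the session IDs and dates are illustrative:

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # per the data handling policy

# Illustrative session inventory: (session_id, investigation_closed_on).
sessions = [
    ("s1", date(2024, 1, 5)),
    ("s2", date(2024, 3, 1)),
]

def overdue(inventory, today, retention_days=RETENTION_DAYS):
    """Sessions whose investigation closed more than the retention period
    ago: these should already have been deleted under the policy."""
    cutoff = today - timedelta(days=retention_days)
    return [sid for sid, closed in inventory if closed < cutoff]

print(overdue(sessions, today=date(2024, 3, 10)))  # ['s1']
```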

Try it yourself

If you have Copilot Owner access, navigate to Security Copilot → Settings and review: the enabled plugins, the SCU capacity configuration, and the role assignments. Check the audit log for Copilot usage events: who has used Copilot, how many prompts, and what types. If your organisation has not defined governance policies for Copilot yet, draft a one-page policy covering output validation, data handling, access control, and training requirements.

What you should observe

The Settings page shows the enabled plugins, the provisioned SCU capacity, and the tenant-level configuration options. The audit log should show Copilot interaction events if the team is actively using it. If no usage events exist, the team may not be aware of Copilot's availability or may lack the training to use it effectively.


Knowledge check

Check your understanding

1. A third-party vendor offers a Copilot plugin that connects to their threat intelligence feed. What should you assess before enabling it?

Incorrect options: “No assessment needed — Microsoft vets all plugins”; “Only check the plugin’s functionality, not data handling”; “Third-party plugins are automatically blocked.”

Answer: Assess the data flow: what data does the plugin send to the third party (prompt content, session context, entity data?), where does the third party process and store the data, what is their data retention policy, and what contractual data protection agreements are in place. Third-party plugins create data flows outside your tenant — each plugin must be evaluated as a data sharing arrangement, not just a technical integration.