Module 5 — Check My Knowledge (20 questions)
1. What is "grounding" in the context of Security Copilot?
The mechanism that connects the LLM to your organisation's actual security data through plugins. When you ask Copilot about an incident, the orchestration layer retrieves the incident data from Defender XDR and includes it in the prompt context. The LLM generates a response based on your actual data, not generic training knowledge.
Training the model on your organisation's data
Installing the Copilot agent on endpoints
Restricting Copilot to only security topics
Grounding is retrieval, not training. Copilot does not modify the model with your data. It retrieves your data at query time through plugins.
2. Copilot generates a KQL query. What should you do before running it?
Validate the table names, field names, operators, filter values, and time windows. Common Copilot errors include wrong field values (country name vs code), incorrect operators, and hallucinated field names. Validation takes 2 minutes and prevents incorrect investigation conclusions.
Run it immediately — Copilot queries are always correct
Rewrite it from scratch
Ask Copilot to verify its own query
Analyst validation is non-negotiable. Copilot-generated KQL is usually correct but not always. A wrong query produces wrong results and wrong conclusions.
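As a minimal illustration of what validation catches, assuming a Sentinel workspace with the standard Entra ID SigninLogs schema (the IP value is from the RFC 5737 documentation range, and the broken lines are a hypothetical Copilot output, not a real example):

```kusto
// Plausible-looking but broken Copilot output:
//   SigninLogs
//   | where Location == "United Kingdom"   // wrong value type: Location holds ISO codes, not names
//   | where IPAddr == "203.0.113.10"       // hallucinated field name: the schema field is IPAddress

// Validated version — field names, filter values, and time window checked against the schema:
SigninLogs
| where TimeGenerated > ago(7d)            // confirm the window matches the investigation scope
| where Location == "GB"                   // ISO 3166-1 country code, not the country name
| where IPAddress == "203.0.113.10"
| project TimeGenerated, UserPrincipalName, IPAddress, ResultType
```

Each correction here maps to one of the common error classes above: wrong filter value, hallucinated field name, unconfirmed time window.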
3. What determines what data Security Copilot can access for a specific analyst?
The analyst's own RBAC permissions combined with the enabled plugins. Copilot accesses data as the authenticated user — it inherits their permissions. A plugin must be enabled AND the user must have access to the underlying data. Copilot never bypasses RBAC.
All data in the tenant — Copilot has full access
Only the plugins enabled by the Owner
Only data from the current session
Two-layer access model: plugins + user permissions. Both must be satisfied.
4. A junior analyst says: "Copilot can investigate incidents, so I don't need to learn KQL." Is this correct?
Incorrect. Copilot generates KQL — but if the analyst cannot validate whether the generated KQL is correct, they cannot trust the results. Copilot amplifies expertise; it does not replace it. Without KQL knowledge (Module 6), the analyst cannot verify Copilot's output, cannot identify subtle query errors, and cannot iterate effectively when results are unexpected.
Correct — Copilot eliminates the need for KQL
Partially correct — basic KQL is sufficient
Correct — KQL is only needed when Copilot is offline
Copilot is a force multiplier for skilled analysts. Without the underlying skill, there is nothing to multiply.
5. Copilot's incident report states "the attacker exfiltrated 500 MB of data." You check the investigation evidence and find no volume figure. What happened?
Copilot hallucinated the data volume. The specific figure does not exist in the evidence. Remove the unsupported claim and replace it with what the evidence actually shows. Every specific claim in a Copilot report must trace to specific evidence.
The 500 MB is probably from a data source you have not checked
Copilot has access to data you cannot see
The figure is an estimate — include it with a disclaimer
Hallucination in incident reports is dangerous. Unsupported claims can mislead management, create legal liability, and undermine investigation credibility.
6. What is the primary benefit of promptbooks over individual prompts?
Consistency. Promptbooks ensure every analyst follows the same investigation sequence, producing the same structured output regardless of experience level.
Promptbooks use fewer SCUs
Promptbooks guarantee correct output
Promptbooks bypass data access permissions
Standardised investigation quality across the team. Promptbooks do not reduce cost, guarantee accuracy, or bypass permissions.
7. Which Copilot embedded experience provides incident summary, alert explanation, guided response, and script analysis?
Defender XDR. The Defender portal's embedded Copilot provides the most comprehensive incident investigation capabilities: automatic incident summary, per-alert explanation, response recommendations, inline script analysis, and KQL generation in Advanced Hunting.
Sentinel — all investigation features are there
The standalone portal only
Entra ID
Defender XDR has the richest embedded experience for incident investigation. Sentinel focuses on detection engineering and hunting. Entra ID focuses on identity risk. The standalone portal provides all capabilities but requires navigating to a separate site.
8. What advantage does the Sentinel embedded Copilot have over the Defender XDR embedded Copilot?
Access to all data sources in the Sentinel workspace, including custom tables, third-party logs, and Sentinel-specific tables. Defender XDR Copilot is limited to the Defender XDR schema.
Sentinel Copilot is faster
Sentinel Copilot does not require SCUs
Sentinel Copilot can take automated actions
Sentinel is the unified data lake. Copilot in Sentinel can query everything in the workspace — including non-Microsoft data that Defender XDR cannot see.
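A sketch of the kind of query only the Sentinel side can run, using the standard CommonSecurityLog table that CEF-connected third-party devices write to (the vendor string and IP are hypothetical — check your own connectors):

```kusto
// Third-party firewall telemetry lives outside the Defender XDR schema,
// but Sentinel Copilot can generate — and you can validate — queries against it:
CommonSecurityLog
| where TimeGenerated > ago(24h)
| where DeviceVendor == "Palo Alto Networks"   // hypothetical vendor value
| where DestinationIP == "203.0.113.10"
| summarize Connections = count() by SourceIP, bin(TimeGenerated, 1h)
```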
9. You provision 3 SCUs per hour for your SOC team. During a major incident, all 3 analysts are using Copilot simultaneously and experiencing slow responses. What should you do?
Increase the SCU capacity temporarily. SCU capacity can be scaled up in the Azure portal. During high-activity periods (active incidents with multiple analysts), increase the provisioned SCUs to handle concurrent demand. Scale back down after the incident to control costs.
Wait — Copilot will queue the requests automatically
Restrict Copilot to one analyst during incidents
SCU capacity cannot be changed after provisioning
SCU capacity is elastic — scale up during high-demand periods, scale down during normal operations. Right-sizing prevents both bottlenecks and overspending.
10. Your organisation's Copilot governance policy should include which elements?
Output validation requirements (all Copilot output verified before use), data handling rules (session deletion timelines), access control (who gets Owner vs Contributor roles), training requirements (analysts trained before access), and plugin management (third-party plugin security review). These policies ensure Copilot is a controlled, auditable tool.
Only access control — other elements are unnecessary
Microsoft manages governance — no organisational policy needed
Governance is only needed for third-party plugins
Comprehensive governance covers validation, data handling, access, training, and plugin management. Microsoft provides the technology platform. Your organisation provides the governance framework.
11. Which prompting approach produces the best investigation output from Copilot?
A specific prompt with context, task, scope, and format: "Summarise incident INC-321, focusing on the attack timeline and data exposure, with MITRE ATT&CK mappings." Specific prompts produce specific, actionable responses. Vague prompts produce generic responses.
Short prompts — "Tell me about this incident"
Multi-paragraph prompts with extensive background
Prompts copied from other analysts without modification
Specificity drives quality. 30 seconds of prompt crafting saves 5 minutes of output refinement.
12. Copilot's guided response recommends isolating a device. The device is a production database server. What do you do?
Evaluate the recommendation against operational context. Isolating a production database server would cause a service outage. Consider alternatives: restrict network access to essential ports only, block the attacker IP specifically, or coordinate with the database team before isolation. Copilot recommends based on security best practice. You decide based on operational reality.
Isolate immediately — Copilot recommendations are correct
Ignore the recommendation — Copilot does not understand production
Ask Copilot if it is safe to isolate
Copilot provides security-optimal recommendations without operational context. The analyst provides operational context and adapts the recommendation. Neither ignoring nor blindly following is correct.
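One way to ground that decision in evidence is to scope what the attacker IP actually touched on the server before choosing between isolation and targeted blocking. A sketch against the Defender XDR Advanced Hunting schema (device name and IP are hypothetical):

```kusto
// Which ports did the attacker IP reach on the database server, and when?
// A single blocked connection attempt argues for a targeted IP block;
// sustained successful sessions strengthen the case for isolation.
DeviceNetworkEvents
| where Timestamp > ago(24h)
| where DeviceName == "sql-prod-01.contoso.com"
| where RemoteIP == "203.0.113.10"
| summarize Events = count(), FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
    by RemotePort, ActionType
```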
13. An analyst uses Copilot to investigate two different incidents in the same session. Copilot's responses start referencing entities from the first incident when answering questions about the second. What is the problem?
Context bleeding between investigations. Sessions maintain all previous context. When two investigations share a session, Copilot's responses may confuse entities, timelines, and findings from both investigations. The fix: one session per investigation. Start a new session for the second incident.
A bug in Copilot — report to Microsoft
The incidents are actually related
Copilot has a memory limit — old context is corrupted
This is expected behaviour, not a bug. Sessions persist all context. Mixed-context sessions produce lower-quality responses. One session per investigation.
14. A Copilot-generated incident summary says "7 alerts correlated across 4 products." You count only 5 alerts in the incident. What do you do?
Investigate the discrepancy. Check the incident in the Defender portal — are there alerts that have been resolved or removed since Copilot generated the summary? Were 2 alerts merged into existing alerts? Or did Copilot hallucinate the count? Do not include the "7 alerts" figure in your report until you can verify it. If only 5 alerts exist, the correct count is 5.
Trust Copilot — it has access to more data
Use 7 in the report — Copilot counted correctly
The 2 extra alerts are in a different system
Any discrepancy between Copilot output and the raw data must be investigated before the output is used. The raw data is the ground truth.
15. Where does Copilot save the most time in a typical investigation?
Report drafting (~85% time savings). Generating a structured incident report from investigation data is the most time-consuming manual task (30-60 minutes) and the task where Copilot provides the most dramatic acceleration (30 seconds for the draft, 5 minutes for editing). Incident summary and script analysis also provide significant savings.
KQL query writing
Alert triage
Response action execution
Report drafting has the largest absolute time saving because it is the most time-consuming manual task. KQL generation and alert triage also save significant time, but the base manual time is shorter for those tasks.
16. Your organisation's data is processed in the UK South Azure region. Where does Copilot process your prompts?
In the Azure region where your SCU capacity is provisioned. If you provisioned SCUs in UK South, prompts are processed in UK South. Data may transit through Microsoft's AI infrastructure during processing but is not stored outside your designated region.
Always in the US — all AI processing is US-based
In the region closest to the analyst's location
Data residency does not apply to AI services
Data residency is controlled by the SCU capacity location. Organisations with data sovereignty requirements should provision SCUs in their required region.
17. Copilot's script analysis identifies a Base64-encoded PowerShell command that downloads content from "attacker.com." What do you do with this finding?
Validate the analysis by manually decoding the Base64 string to confirm the URL. Then check "attacker.com" against threat intelligence. If confirmed malicious, add the domain to your IOC list, check if any other devices communicated with this domain, and include it as evidence in the incident investigation. Copilot's analysis is the starting point — validation confirms it.
Block the domain immediately without validation
Ignore it — Copilot may have misidentified the URL
Report the script to Microsoft
Validate first, then act. Copilot's script analysis is usually accurate, but manual verification of the decoded content takes 30 seconds and confirms the finding. Blocking a domain based on unvalidated AI output could block a legitimate service.
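Once the decoded URL is confirmed manually, the "check other devices" step above can be sketched as an Advanced Hunting query, assuming the Defender XDR schema ("attacker.com" stands in for the confirmed-malicious domain):

```kusto
// Scope the exposure: which other devices communicated with the domain,
// and when did contact start? FirstSeen anchors the incident timeline.
DeviceNetworkEvents
| where Timestamp > ago(30d)
| where RemoteUrl has "attacker.com"
| summarize FirstSeen = min(Timestamp), LastSeen = max(Timestamp),
    Connections = count() by DeviceName
| order by FirstSeen asc
```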
18. You are investigating in both the Defender XDR portal and Sentinel. Which embedded Copilot should you use for each task?
Defender XDR Copilot for incident summary, alert explanation, guided response, and script analysis (reactive investigation of specific incidents). Sentinel Copilot for KQL generation against all data sources, analytics rule creation, and hunting queries (proactive detection and investigation requiring third-party data). Use each where its data access and capabilities match the task.
Use only Defender XDR — it has all capabilities
Use only Sentinel — it has broader data access
Use the standalone portal for everything
Each embedded experience has strengths. Defender XDR excels at incident investigation. Sentinel excels at cross-source KQL and detection engineering. Using both in their areas of strength produces the best results.
19. A third-party vendor offers a Copilot plugin. What should you assess before enabling it?
The data flow: what data the plugin sends to the third party, where that data is processed and stored, the vendor's retention policy, and the contractual data protection agreements in place. Third-party plugins create data flows outside your tenant that must be assessed as data sharing arrangements.
Only the plugin's functionality
Microsoft vets all plugins — no assessment needed
Only check the pricing
Third-party plugins are not vetted by Microsoft for data security. Each plugin must be assessed by your organisation.
20. After completing this module, what is the most accurate description of Security Copilot's role in security operations?
Copilot is a force multiplier that accelerates expert analysts. It summarises incidents in seconds, generates KQL queries, analyses scripts, and drafts reports. But it does not replace analyst expertise — every output requires validation, every recommendation requires contextual evaluation, and every investigation decision remains the analyst's responsibility. Same quality output, dramatically less time. Expert required.
Copilot replaces the need for human analysts
Copilot is a search engine for security data
Copilot is only useful for report writing
Force multiplier for experts. Same quality, less time. Expert required. This is the complete, accurate description of Copilot's operational role.