5.12 Check My Knowledge

12-16 hours · Module 5

Module 5 — Check My Knowledge (20 questions)

1. What is "grounding" in the context of Security Copilot?

The mechanism that connects the LLM to your organisation's actual security data through plugins. When you ask Copilot about an incident, the orchestration layer retrieves the incident data from Defender XDR and includes it in the prompt context. The LLM generates a response based on your actual data, not generic training knowledge.
Training the model on your organisation's data
Installing the Copilot agent on endpoints
Restricting Copilot to only security topics

2. Copilot generates a KQL query. What should you do before running it?

Validate the table names, field names, operators, filter values, and time windows. Common Copilot errors include wrong field values (country name vs code), incorrect operators, and hallucinated field names. Validation takes 2 minutes and prevents incorrect investigation conclusions.
Run it immediately — Copilot queries are always correct
Rewrite it from scratch
Ask Copilot to verify its own query
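
The validation steps above can be sketched against a real table. This is an illustrative example (not from the module): the "draft" shows the error patterns named in the answer, and the corrected query assumes the standard Sentinel SigninLogs schema.

```kusto
// Copilot's draft (do not run as-is):
//   SigninLogs
//   | where Country == "Russia"        // hallucinated column — SigninLogs has no "Country"
//   | where TimeGenerated > ago(30)    // missing time unit — ago() needs a timespan like 7d
//
// After validating table, columns, values, and time window:
SigninLogs
| where TimeGenerated > ago(7d)                 // explicit, correct time window
| where Location == "RU"                        // real column; ISO country code, not a name
| summarize FailedAttempts = countif(ResultType != "0") by UserPrincipalName
```

Two minutes with the schema reference catches all three classes of error before they distort the investigation.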

3. What determines what data Security Copilot can access for a specific analyst?

The analyst's own RBAC permissions combined with the enabled plugins. Copilot accesses data as the authenticated user — it inherits their permissions. A plugin must be enabled AND the user must have access to the underlying data. Copilot never bypasses RBAC.
All data in the tenant — Copilot has full access
Only the plugins enabled by the Owner
Only data from the current session

4. A junior analyst says: "Copilot can investigate incidents, so I don't need to learn KQL." Is this correct?

Incorrect. Copilot generates KQL — but if the analyst cannot validate whether the generated KQL is correct, they cannot trust the results. Copilot amplifies expertise; it does not replace it. Without KQL knowledge (Module 6), the analyst cannot verify Copilot's output, cannot identify subtle query errors, and cannot iterate effectively when results are unexpected.
Correct — Copilot eliminates the need for KQL
Partially correct — basic KQL is sufficient
Correct — KQL is only needed when Copilot is offline

5. Copilot's incident report states "the attacker exfiltrated 500 MB of data." You check the investigation evidence and find no volume figure. What happened?

Copilot hallucinated the data volume. The specific figure does not exist in the evidence. Remove the unsupported claim and replace it with what the evidence actually shows. Every specific claim in a Copilot report must trace to specific evidence.
The 500 MB is probably from a data source you have not checked
Copilot has access to data you cannot see
The figure is an estimate — include it with a disclaimer

6. What is the primary benefit of promptbooks over individual prompts?

Consistency. Promptbooks ensure every analyst follows the same investigation sequence, producing the same structured output regardless of experience level.
Promptbooks use fewer SCUs
Promptbooks guarantee correct output
Promptbooks bypass data access permissions

7. Which Copilot embedded experience provides incident summary, alert explanation, guided response, and script analysis?

Defender XDR. The Defender portal's embedded Copilot provides the most comprehensive incident investigation capabilities: automatic incident summary, per-alert explanation, response recommendations, inline script analysis, and KQL generation in Advanced Hunting.
Sentinel — all investigation features are there
The standalone portal only
Entra ID

8. What advantage does the Sentinel embedded Copilot have over the Defender XDR embedded Copilot?

Access to all data sources in the Sentinel workspace, including custom tables, third-party logs, and Sentinel-specific tables. Defender XDR Copilot is limited to the Defender XDR schema.
Sentinel Copilot is faster
Sentinel Copilot does not require SCUs
Sentinel Copilot can take automated actions
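
A sketch of what that broader access means in practice: Sentinel's embedded Copilot can generate KQL against any workspace table, including custom logs from third-party sources. The table and column names below are hypothetical, following Sentinel's `_CL` / `_s` custom-log naming convention; the IP is a documentation placeholder.

```kusto
// Hypothetical third-party firewall custom table — outside the Defender XDR schema,
// so only the Sentinel embedded Copilot can query it:
FirewallLogs_CL
| where TimeGenerated > ago(24h)
| where DestinationIP_s == "203.0.113.10"       // example attacker IP
| summarize Hits = count() by SourceIP_s
```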

9. You provision 3 SCUs per hour for your SOC team. During a major incident, all 3 analysts are using Copilot simultaneously and experiencing slow responses. What should you do?

Increase the SCU capacity temporarily. SCU capacity can be scaled up in the Azure portal. During high-activity periods (active incidents with multiple analysts), increase the provisioned SCUs to handle concurrent demand. Scale back down after the incident to control costs.
Wait — Copilot will queue the requests automatically
Restrict Copilot to one analyst during incidents
SCU capacity cannot be changed after provisioning

10. Your organisation's Copilot governance policy should include which elements?

Output validation requirements (all Copilot output verified before use), data handling rules (session deletion timelines), access control (who gets Owner vs Contributor roles), training requirements (analysts trained before access), and plugin management (third-party plugin security review). These policies ensure Copilot is a controlled, auditable tool.
Only access control — other elements are unnecessary
Microsoft manages governance — no organisational policy needed
Governance is only needed for third-party plugins

11. Which prompting approach produces the best investigation output from Copilot?

A specific prompt with context, task, scope, and format: "Summarise incident INC-321, focusing on the attack timeline and data exposure, with MITRE ATT&CK mappings." Specific prompts produce specific, actionable responses. Vague prompts produce generic responses.
Short prompts — "Tell me about this incident"
Multi-paragraph prompts with extensive background
Prompts copied from other analysts without modification

12. Copilot's guided response recommends isolating a device. The device is a production database server. What do you do?

Evaluate the recommendation against operational context. Isolating a production database server would cause a service outage. Consider alternatives: restrict network access to essential ports only, block the attacker IP specifically, or coordinate with the database team before isolating. Copilot recommends based on security best practice. You decide based on operational reality.
Isolate immediately — Copilot recommendations are correct
Ignore the recommendation — Copilot does not understand production
Ask Copilot if it is safe to isolate

13. An analyst uses Copilot to investigate two different incidents in the same session. Copilot's responses start referencing entities from the first incident when answering questions about the second. What is the problem?

Context bleeding between investigations. Sessions maintain all previous context. When two investigations share a session, Copilot's responses may confuse entities, timelines, and findings from both investigations. The fix: one session per investigation. Start a new session for the second incident.
A bug in Copilot — report to Microsoft
The incidents are actually related
Copilot has a memory limit — old context is corrupted

14. A Copilot-generated incident summary says "7 alerts correlated across 4 products." You count only 5 alerts in the incident. What do you do?

Investigate the discrepancy. Check the incident in the Defender portal — are there alerts that have been resolved or removed since Copilot generated the summary? Were 2 alerts merged into existing alerts? Or did Copilot hallucinate the count? Do not include the "7 alerts" figure in your report until you can verify it. If only 5 alerts exist, the correct count is 5.
Trust Copilot — it has access to more data
Use 7 in the report — Copilot counted correctly
The 2 extra alerts are in a different system

15. Where does Copilot save the most time in a typical investigation?

Report drafting (~85% time savings). Generating a structured incident report from investigation data is the most time-consuming manual task (30-60 minutes) and the task where Copilot provides the most dramatic acceleration (30 seconds for the draft, 5 minutes for editing). Incident summary and script analysis also provide significant savings.
KQL query writing
Alert triage
Response action execution

16. Your organisation's data is processed in the UK South Azure region. Where does Copilot process your prompts?

In the Azure region where your SCU capacity is provisioned. If you provisioned SCUs in UK South, prompts are processed in UK South. Data may transit through Microsoft's AI infrastructure during processing but is not stored outside your designated region.
Always in the US — all AI processing is US-based
In the region closest to the analyst's location
Data residency does not apply to AI services

17. Copilot's script analysis identifies a Base64-encoded PowerShell command that downloads content from "attacker.com." What do you do with this finding?

Validate the analysis by manually decoding the Base64 string to confirm the URL. Then check "attacker.com" against threat intelligence. If confirmed malicious, add the domain to your IOC list, check if any other devices communicated with this domain, and include it as evidence in the incident investigation. Copilot's analysis is the starting point — validation confirms it.
Block the domain immediately without validation
Ignore it — Copilot may have misidentified the URL
Report the script to Microsoft
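
The "check if any other devices communicated with this domain" step can be sketched as an Advanced Hunting query against the standard Defender XDR DeviceNetworkEvents table. "attacker.com" is the placeholder domain from the question; decode the Base64 first with a trusted offline tool (PowerShell's -EncodedCommand is Base64 over UTF-16LE text) to confirm the URL before hunting on it.

```kusto
// Scope the IOC: which devices contacted the confirmed-malicious domain, and when?
DeviceNetworkEvents
| where RemoteUrl has "attacker.com"
| summarize FirstSeen = min(Timestamp), LastSeen = max(Timestamp),
            Connections = count() by DeviceName
```

Any device beyond the original host that appears here widens the incident scope and becomes part of the evidence.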

18. You are investigating in both the Defender XDR portal and Sentinel. Which embedded Copilot should you use for each task?

Defender XDR Copilot for incident summary, alert explanation, guided response, and script analysis (reactive investigation of specific incidents). Sentinel Copilot for KQL generation against all data sources, analytics rule creation, and hunting queries (proactive detection and investigation requiring third-party data). Use each where its data access and capabilities match the task.
Use only Defender XDR — it has all capabilities
Use only Sentinel — it has broader data access
Use the standalone portal for everything

19. A third-party vendor offers a Copilot plugin. What should you assess before enabling it?

The data flow: what data the plugin sends to the third party, where that data is processed and stored, the vendor's retention policy, and the contractual data protection agreements in place. Third-party plugins create data flows outside your tenant that must be assessed as data sharing arrangements.
Only the plugin's functionality
Microsoft vets all plugins — no assessment needed
Only check the pricing

20. After completing this module, what is the most accurate description of Security Copilot's role in security operations?

Copilot is a force multiplier that accelerates expert analysts. It summarises incidents in seconds, generates KQL queries, analyses scripts, and drafts reports. But it does not replace analyst expertise — every output requires validation, every recommendation requires contextual evaluation, and every investigation decision remains the analyst's responsibility. Same quality output, dramatically less time. Expert required.
Copilot replaces the need for human analysts
Copilot is a search engine for security data
Copilot is only useful for report writing