5.3 Prompting Security Copilot: Techniques and Promptbooks

12-16 hours · Module 5


SC-200 Exam Objective

Domain 3 — Manage Incident Response: "Investigate incidents by using agentic AI, including embedded Copilot for Security." Effective prompting is the skill that determines whether Copilot produces investigation-quality output or generic responses.

Introduction

The quality of Copilot’s output depends directly on the quality of your prompts. A vague prompt produces a vague response. A specific, well-structured prompt produces a specific, actionable response. This subsection teaches you the prompting techniques that consistently produce investigation-quality output and introduces promptbooks — pre-built sequences of prompts that automate common investigation workflows.


The anatomy of an effective security prompt

Effective security prompts share four characteristics: they specify the context (what you are investigating), the task (what you want Copilot to do), the scope (what data to include or exclude), and the format (how you want the response structured).

Prompt Quality Comparison

| Prompt | Quality | Why |
| --- | --- | --- |
| "Tell me about this incident" | Poor | No context (which incident?), no task (tell what?), no scope |
| "Summarise incident INC-2026-0321" | Better | Context (specific incident), task (summarise), but no scope or format |
| "Summarise incident INC-2026-0321, focusing on the attack timeline, affected entities, and data exposure. Include MITRE ATT&CK technique mappings." | Excellent | Context + task + scope (timeline, entities, data) + format (MITRE mapping) |
The specificity principle: The more specific your prompt, the more useful the response. Generic prompts produce generic responses. Investigation-specific prompts produce investigation-quality responses. Invest 30 seconds in crafting a good prompt to save 5 minutes in refining a poor response.

Context prompts establish what you are working on. Always start a session with a context-setting prompt: “I am investigating incident INC-2026-0321 from Defender XDR. The incident involves a compromised user account (j.morrison@northgateeng.com) that was used to exfiltrate data from SharePoint.” This gives Copilot the investigation frame — every subsequent prompt is interpreted within this context.

Task prompts tell Copilot what to do. Common task types for security operations:

Summarise — “Summarise the alerts in this incident, ordered by timestamp.” Copilot produces a narrative summary of each alert, its severity, affected entities, and the chronological sequence.

Explain — “Explain what the alert ‘Suspicious inbox forwarding rule’ means and why it is significant in a BEC investigation.” Copilot provides a contextual explanation linked to the attack technique.

Generate — “Generate a KQL query that finds all emails sent by j.morrison@northgateeng.com to external recipients between March 18 09:00 and March 18 12:00 UTC.” Copilot writes the KQL query.

Analyse — “Analyse this PowerShell script and explain what each function does. Identify any malicious indicators.” Copilot breaks down the script section by section.

Recommend — “Based on the investigation findings so far, recommend the next three investigation steps.” Copilot suggests actions based on the session context.

Draft — “Draft an executive summary of this incident for the CISO, including the attack timeline, data exposure assessment, and remediation actions taken.” Copilot produces a report draft.
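To make the Generate task above concrete, here is one plausible shape of the query Copilot might return for the external-email prompt. This is a sketch, not Copilot's actual output; the EmailEvents table and its columns (SenderFromAddress, RecipientEmailAddress, AttachmentCount) come from the Defender XDR Advanced Hunting schema, so verify the field names in your tenant.

```kusto
// Sketch: emails from the compromised account to external recipients
// within the investigation window (Defender XDR EmailEvents schema)
EmailEvents
| where Timestamp between (datetime(2026-03-18 09:00:00) .. datetime(2026-03-18 12:00:00))
| where SenderFromAddress =~ "j.morrison@northgateeng.com"
| where RecipientEmailAddress !endswith "@northgateeng.com"
| project Timestamp, RecipientEmailAddress, Subject, AttachmentCount
| order by Timestamp asc
```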

Scope constraints narrow the response to what you need. Without scope constraints, Copilot may produce a comprehensive but overwhelming response. “Focus only on the authentication events” tells Copilot to exclude email, file, and other data from the response. “Limit to the time window 09:00-12:00 UTC” prevents Copilot from including irrelevant events outside the investigation window.

Format instructions control the response structure. “Present the findings as a numbered timeline” produces a chronological list. “Format as a table with columns: Time, Source, Action, Entity” produces a structured table. “Write this as a paragraph suitable for an executive summary” produces prose. Matching the format to your use case (timeline for investigation, table for data analysis, prose for reports) makes the output immediately usable.


Prompting patterns for common SOC tasks

Pattern 1: Incident triage prompt sequence.

Prompt 1: “Summarise incident INC-2026-0321 from Defender XDR. Include: number of alerts, severity, affected users, affected devices, and attack techniques.”

Prompt 2: “What are the MITRE ATT&CK techniques involved? Explain what each technique means in the context of this incident.”

Prompt 3: “Based on the alerts and techniques, is this a true positive that requires investigation, or could it be a false positive? Explain your reasoning.”

This three-prompt sequence replaces 15-20 minutes of manual alert review with 2-3 minutes of Copilot-assisted triage.

Pattern 2: KQL query generation and iteration.

Prompt 1: “Write a KQL query against the SigninLogs table that finds all successful sign-ins for j.morrison@northgateeng.com from IP addresses outside the United Kingdom in the last 7 days. Include the IP address, country, city, app name, and risk level.”
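A query Copilot might generate for Prompt 1 could look like the following sketch. It assumes the Microsoft Sentinel SigninLogs schema; the ResultType success value and the ISO country code "GB" are assumptions to validate against your own data.

```kusto
// Sketch: successful non-UK sign-ins for the user in the last 7 days
SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName =~ "j.morrison@northgateeng.com"
| where ResultType == "0"  // "0" = successful sign-in (assumption)
| extend Country = tostring(LocationDetails.countryOrRegion)
| where Country != "GB"
| project TimeGenerated, IPAddress, Country,
          City = tostring(LocationDetails.city),
          AppDisplayName, RiskLevelDuringSignIn
| order by TimeGenerated asc
```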

Prompt 2 (after running the query and reviewing results): “The query returned 3 sign-ins from the US. Modify the query to also check CloudAppEvents for any MailItemsAccessed events from these same IP addresses within 1 hour of each sign-in.”

Prompt 3 (iterating): “The join is returning duplicates. Fix the query to deduplicate by adding a distinct clause on the TimeGenerated and IPAddress columns.”
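After the Prompt 2 and Prompt 3 refinements, the final query might resemble this sketch. It assumes SigninLogs and CloudAppEvents are available in the same workspace (which requires the Defender XDR data connector in Sentinel); column names are assumptions to verify.

```kusto
// Sketch: correlate foreign sign-ins with MailItemsAccessed events
// from the same IP within 1 hour, deduplicated per Prompt 3
SigninLogs
| where TimeGenerated > ago(7d)
| where UserPrincipalName =~ "j.morrison@northgateeng.com"
| where ResultType == "0"
| where tostring(LocationDetails.countryOrRegion) != "GB"
| join kind=inner (
    CloudAppEvents
    | where ActionType == "MailItemsAccessed"
    | project MailAccessTime = Timestamp, IPAddress
) on IPAddress
| where MailAccessTime between (TimeGenerated .. (TimeGenerated + 1h))
| distinct TimeGenerated, IPAddress
```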

This iterative pattern mirrors how an analyst writes KQL manually — start with a basic query, refine based on results, and iterate until the query produces the exact data needed. Copilot accelerates each iteration.

Pattern 3: Script analysis.

Prompt: “Analyse the following PowerShell script. Explain what each section does, identify any obfuscation techniques used, identify any indicators of compromise (domains, IPs, file paths, registry keys), and assess the overall intent of the script. Here is the script: [paste script]”

Copilot breaks the script into sections, deobfuscates encoded strings, identifies C2 domains or malicious URLs, explains the execution flow, and assesses the malicious intent. This is one of Copilot’s strongest capabilities — script analysis that would take a skilled analyst 20-30 minutes takes Copilot 30 seconds.

Pattern 4: Investigation report draft.

Prompt: “Based on everything we have discussed in this session, draft an incident report with the following sections: Executive Summary (3 sentences), Attack Timeline (chronological table), Data Exposure Assessment, Response Actions Taken, Root Cause Analysis, and Recommendations for Prevention. Use a professional tone suitable for the CISO.”

Copilot generates a complete report draft using the session context. The analyst reviews, edits for accuracy, adds organisational context that Copilot does not know, and finalises. Time saved: 30-60 minutes of report writing.


Promptbooks: pre-built investigation workflows

Promptbooks are sequences of prompts packaged as reusable workflows. Instead of typing individual prompts for common investigation patterns, you run a promptbook that executes the entire sequence automatically, presenting the results at each step.

Built-in promptbooks include:

Incident investigation — summarise the incident, list affected entities, map MITRE techniques, recommend investigation steps, and generate a report draft.

Vulnerability impact assessment — given a CVE, identify which devices in your environment are affected, assess the exposure level, and recommend remediation priority.

Suspicious script analysis — analyse a script, identify malicious indicators, explain the attack technique, and suggest detection rules.

User compromise assessment — given a username, check sign-in anomalies, recent alerts, device health, and data access patterns.

Custom promptbooks can be created by the SOC team for organisation-specific investigation patterns. If your team has a standard BEC investigation procedure (check sign-ins → check inbox rules → check MailItemsAccessed → check file downloads → assess data exposure), encode this as a custom promptbook. Every analyst runs the same procedure, ensuring consistent investigation quality.
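One step of the BEC procedure above ("check inbox rules") could sit behind a promptbook prompt that generates hunting KQL along these lines. This is a sketch only; the New-InboxRule ActionType value and the RawEventData filter are assumptions based on how Exchange Online operations commonly surface in CloudAppEvents.

```kusto
// Sketch: recent inbox-rule creation tied to the investigated account
CloudAppEvents
| where Timestamp > ago(48h)
| where ActionType == "New-InboxRule"
| where RawEventData has "j.morrison@northgateeng.com"
| project Timestamp, ActionType, IPAddress, RawEventData
```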

Promptbooks standardise investigation quality

The biggest benefit of promptbooks is not speed — it is consistency. When every analyst follows the same investigation sequence (via the promptbook), the investigation quality does not depend on the individual analyst's experience level. A junior analyst running the BEC investigation promptbook follows the same steps as a senior analyst, producing the same structured output. The senior analyst still reviews the output, but the investigation thoroughness is guaranteed by the promptbook structure.


Session management best practices

One session per investigation. Do not mix investigations in a single session. Context bleeding between investigations degrades response quality and creates confusing investigation records.

Name sessions descriptively. “INC-2026-0321 — BEC investigation j.morrison” is findable. “Session 47” is not. Session names become the investigation record.

Save and share sessions. When handing off an investigation to the next shift, share the session. The recipient sees all prompts, responses, and investigation findings — a complete handoff without a separate briefing document.

Pin important sessions. Pin sessions for ongoing investigations or reference material. Pinned sessions appear at the top of the session list for quick access.

Delete completed sessions. After an investigation is complete and the findings are documented in the incident report, delete the Copilot session to reduce data exposure. Sessions contain investigation details (user names, IP addresses, attack techniques) that should not persist indefinitely.


System capabilities: what you can ask Copilot to do

Security Copilot exposes specific capabilities through its plugin system. Understanding which capabilities are available helps you craft prompts that produce better results because you are asking Copilot to use a known capability rather than hoping it can figure out what you want.

Available capability categories:

Incident management — get incident details, list alerts in an incident, get incident status, summarise incident. Use these for triage and investigation context.

Threat intelligence — look up an IP, domain, file hash, or URL in the Microsoft Threat Intelligence database. Get threat actor profiles, related IOCs, and attack technique descriptions. Use these for entity enrichment during investigation.

KQL query generation — generate KQL from natural language, explain an existing KQL query, optimise a slow query. Use these for investigation data access and detection engineering.

Script analysis — analyse PowerShell, Python, bash, JavaScript, or VBScript. Deobfuscate encoded content. Identify malicious indicators. Use these when alerts include suspicious command lines or scripts.

Identity analysis — get user risk assessment, check sign-in history, evaluate conditional access outcomes. Use these for identity investigation.

File analysis — get file reputation, check for known malware hashes, identify file behaviour. Use these when alerts include suspicious files.

Natural language report generation — draft incident reports, executive summaries, investigation timelines, and remediation recommendations. Use these for documentation.

Prompt tip: Reference capabilities explicitly when your prompt is ambiguous. “Use threat intelligence to check IP 203.0.113.47” is clearer than “What do you know about 203.0.113.47?” The first explicitly invokes the TI plugin. The second might produce general knowledge rather than TI-grounded data.


Prompt chaining: building complex investigations from simple prompts

Complex investigations are not solved by a single prompt. They are solved by a chain of prompts where each prompt builds on the previous response. Effective prompt chaining follows the investigation logic — each prompt answers a question that leads to the next question.

Chain example for a compromised account investigation:

Chain 1: “Summarise incident INC-2026-0321.” → Establishes the investigation context.

Chain 2: “The incident involves j.morrison. Show the user’s sign-in history for the last 48 hours with IP addresses, countries, and risk levels.” → Identifies the authentication anomaly.

Chain 3: “Sign-in from 203.0.113.47 at 09:22 is suspicious. Check this IP against threat intelligence.” → Enriches the suspicious entity.

Chain 4: “The IP is associated with known AiTM infrastructure. Generate a KQL query to show all activity by j.morrison from this IP.” → Scopes the post-compromise activity.

Chain 5: “The query shows inbox rule creation and file downloads. Draft the data exposure section of the incident report.” → Begins documentation.

Each prompt is simple and focused. The chain builds a complete investigation. The session maintains context between prompts — Copilot remembers that you are investigating j.morrison, that 203.0.113.47 is the attacker IP, and that the attack technique is AiTM. This accumulated context makes each subsequent response more relevant and specific.
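The query behind Chain 4 might look like the following sketch. The CloudAppEvents table and its ActionType and Application columns come from the Defender XDR schema; the RawEventData filter on the user's UPN is an assumption.

```kusto
// Sketch: all cloud app activity by the user from the attacker IP
CloudAppEvents
| where Timestamp > ago(48h)
| where IPAddress == "203.0.113.47"
| where RawEventData has "j.morrison@northgateeng.com"
| summarize Actions = count() by ActionType, Application
| order by Actions desc
```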


Creating custom promptbooks

Custom promptbooks encode your team’s investigation procedures as reusable Copilot workflows. Creating a custom promptbook follows this process:

Step 1: Document the investigation procedure. Write out the steps your team follows for a specific investigation type (BEC, ransomware, insider risk, cloud compromise). Each step becomes a prompt in the promptbook.

Step 2: Translate steps to prompts. Convert each investigation step into an effective Copilot prompt with context, task, scope, and format specifications. Test each prompt individually to ensure it produces useful output.

Step 3: Parameterise the prompts. Replace specific values (incident IDs, user names, IP addresses) with parameters that the analyst fills in when running the promptbook. “Summarise incident {incident_id}” becomes a reusable template.

Step 4: Test the complete sequence. Run the promptbook against a real (or realistic test) incident. Verify that each step produces useful output and that the session context accumulates correctly across the prompt sequence.

Step 5: Publish and train. Publish the promptbook to the team library. Train analysts on when to use it (which investigation types), how to fill in the parameters, and how to validate the output at each step.

Maintenance: Review and update custom promptbooks quarterly. As your investigation procedures evolve (new data sources, new attack techniques, new response actions), update the prompts to match. A promptbook that references deprecated tables or outdated procedures is worse than no promptbook.


Prompt anti-patterns: what not to do

Learning what does not work saves time on trial and error.

Anti-pattern 1: Asking Copilot to validate its own output. “Is this KQL query correct?” Copilot almost always says “yes” — it does not self-critique effectively. Instead, describe the problem: “This query returns zero results but I expect matches. The field LocationDetails.countryOrRegion might use ISO codes instead of country names. Fix the filter.”
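The corrected filter from that example might look like this sketch, assuming the column can hold either an ISO 3166 code or a full country name:

```kusto
SigninLogs
| extend Country = tostring(LocationDetails.countryOrRegion)
// Exclude both the ISO code and the display name so the filter works
// regardless of which form the column holds
| where Country !in ("GB", "United Kingdom")
```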

Anti-pattern 2: Overly broad prompts. “Tell me everything about security.” Produces a generic essay. Instead, be specific: “Summarise the 3 highest-severity incidents in my Defender XDR queue from the last 24 hours.”

Anti-pattern 3: Prompts without context. “Is this IP malicious?” Without specifying which IP, which product to check, or what context the IP appeared in, Copilot produces a generic response. Instead: “Check IP 203.0.113.47 against Microsoft Threat Intelligence. This IP appeared as the source of a risky sign-in for admin@northgateeng.com at 02:47 UTC today.”

Anti-pattern 4: Multi-task prompts. “Summarise the incident, generate a KQL query for sign-ins, check the user’s risk level, and draft a report.” Copilot may handle all four tasks, but the quality of each is lower than if they were individual prompts. Instead, chain single-task prompts — each prompt gets full attention and context.

Anti-pattern 5: Assuming Copilot remembers between sessions. Copilot maintains context within a session but not between sessions. Starting a new session and referencing “the incident we investigated yesterday” produces a blank context. Instead, re-establish context: provide the incident ID, the key findings, and the current question.


Building a team prompt library

Over time, your SOC team will discover prompts that consistently produce high-quality output. Capture these in a shared prompt library — a documented collection of proven prompts for common investigation tasks.

Library structure: Organise by investigation phase (triage prompts, investigation prompts, KQL prompts, report prompts) and by incident type (BEC prompts, endpoint compromise prompts, cloud security prompts). Each entry includes: the prompt text, an example output, validation notes (what to check), and version history (when the prompt was last tested and updated).

Library maintenance: Test prompts quarterly against current Copilot behaviour — Microsoft updates Copilot regularly, and a prompt that worked perfectly 6 months ago may produce different output today. Update prompts when they degrade, and add new prompts when new investigation patterns emerge.

The prompt library becomes a training resource for new analysts: “Here are the 20 prompts our team uses daily. Start with these, learn what they produce, and build your own variations as you gain experience.”

Try it yourself

If Security Copilot is available, try the incident triage prompt sequence from Pattern 1. Start a new session, set the context (pick any recent incident from your Defender XDR queue), and run the three-prompt sequence. Evaluate the output: is the summary accurate? Are the MITRE ATT&CK mappings correct? Does the true positive / false positive assessment align with your own assessment? Then try the KQL generation pattern (Pattern 2) — ask Copilot to write a query, run it, and iterate. How many iterations does it take to get a correct, useful query?

What you should observe

The incident summary should be accurate (verify against the actual incident data in the Defender portal). The MITRE mappings should be correct (verify against the MITRE ATT&CK matrix). The KQL query may need 1-2 iterations to be fully correct — common issues include wrong field names, incorrect filter values, or overly broad time windows. The iterative pattern demonstrates the human-in-the-loop workflow: Copilot generates, you validate, you request corrections, Copilot refines.


Knowledge check

Check your understanding

1. You want Copilot to generate a KQL query for your investigation. Which prompt produces the best result?

The most specific prompt: "Write a KQL query against the SigninLogs table that finds all successful sign-ins for j.morrison@northgateeng.com from IP addresses not in the United Kingdom, in the last 7 days. Include columns: TimeGenerated, IPAddress, LocationDetails.countryOrRegion, AppDisplayName, RiskLevelDuringSignIn. Order by TimeGenerated ascending." This specifies the table, the filter conditions, the time window, the output columns, and the sort order. Copilot generates the exact query needed.
A short prompt: "Write a KQL query for sign-ins"
A directive: "Generate KQL"
A context-free request: "Show me suspicious activity"

2. What is the primary benefit of promptbooks over individual prompts?

Consistency. Promptbooks ensure every analyst follows the same investigation sequence, producing the same structured output regardless of individual experience level. A junior analyst running a BEC investigation promptbook follows the same steps as a senior analyst. Speed is a secondary benefit — the primary value is that promptbooks standardise investigation quality across the team.
Promptbooks use fewer SCUs than individual prompts
Promptbooks do not require data access permissions
Promptbooks guarantee correct output without validation