1.6 Building Your AI Operations Foundation
The preceding subsections established the knowledge: what AI does, where it fits in security operations, what the standards say, how to evaluate tools, and how to handle data. This subsection converts that knowledge into operational infrastructure. You will configure your AI workspace, establish the core methodology, and execute your first AI-assisted security workflow.
This is the bridge between understanding and execution. When you complete this subsection, you will have a configured, tested AI operations environment ready for the operational modules (C2-C10) that follow.
Setting up your security operations workspace
A workspace (called a “Project” in Claude, a “Custom GPT” or “Project” in ChatGPT, a “Gem” in Gemini) provides persistent context that carries across conversations. Every conversation within the workspace inherits the system prompt, reference documents, and configuration you define. Without a workspace, you re-explain your environment, your conventions, and your preferences in every conversation. With a workspace, every conversation starts with that context already loaded.
Create a workspace with the following configuration:
Workspace name: Security Operations
System prompt:
You are assisting a security operations analyst with investigation,
detection engineering, and operational documentation.
Environment context:
- Organization: [your org name or Northgate Engineering for training]
- SIEM: [Microsoft Sentinel / Splunk / Elastic / other]
- EDR: [Defender for Endpoint / CrowdStrike / SentinelOne / other]
- Identity: [Entra ID / Okta / other]
- Primary query language: [KQL / SPL / other]
Output requirements:
- All KQL/SPL queries include inline comments explaining each operator
- All detection rules include: MITRE ATT&CK mapping, entity mapping,
severity rationale, tuning guidance, and false positive assessment
- IR report sections follow the structure: Executive Summary → Timeline
→ Technical Findings → Impact Assessment → Recommendations
- Compliance mappings reference: NIST CSF 2.0, ISO 27001:2022, SOC 2
Operational constraints:
- Never state inferences as facts. Label conclusions as conclusions.
- Include verification steps for every recommendation.
- State minimum required RBAC role before every portal action.
- Include blast radius for every configuration change.
- Use US English.
Reference documents to upload:
Upload these documents to the workspace’s knowledge base (Project Knowledge in Claude, Project files in ChatGPT). The AI will reference them in every conversation.
| Document | Purpose | Where to get it |
|---|---|---|
| Your IR report template | AI mirrors your reporting format | Your existing template, or build one in Module 4 |
| Your detection rule standard | AI follows your rule naming, documentation, and testing conventions | Your existing standard, or build one in Module 3 |
| Your KQL/SPL style guide | AI follows your query formatting and naming conventions | If you do not have one, the system prompt above is sufficient |
| Your incident classification matrix | AI assigns correct severity and category | Your existing matrix |
| Sanitization reference | AI understands your fictional replacement names | Use the Northgate Engineering values from this course |
If you do not have these documents yet: That is expected. Modules 3 and 4 produce them. For now, configure the workspace with the system prompt only. Add documents as you produce them throughout the course.
Verify: Start a new conversation in the workspace. Ask: “What is my SIEM platform and what query language should you use?” The AI should respond with the details from your system prompt. If it does not, the system prompt is not applied — check the workspace configuration.
The investigation feedback loop
This loop is the core operational methodology for AI-assisted security work. Every investigation module in this course (C2) and every detection engineering task (Module 3) follows this pattern.
The loop:
Describe the scenario to the AI. Provide the alert, the log data, or the technique description. Use structured prompts (XML tags recommended) to separate context from task from constraints.
AI generates an artifact. A KQL query, an analysis, a report section, a detection rule, a script. This is the first draft — not the final product.
You execute and verify. Run the query in your SIEM. Review the analysis against the evidence. Test the detection rule against historical data. Run the script in dev. The AI generates. You validate.
You provide the results back to the AI. Paste the query results. Describe what you found. Explain what was wrong or unexpected. The AI now has your real-world feedback.
AI generates the follow-up. A refined query, a deeper analysis, an adjusted rule. Each iteration incorporates what you found in the previous cycle.
Repeat until the task is complete. Most investigations take 4-8 cycles. Most detection rules take 2-4 cycles. Each cycle: 2-3 minutes.
Why this is faster than working alone: An experienced analyst spends 60-70% of investigation time writing and debugging queries. AI eliminates that bottleneck. Your time shifts from writing queries to analyzing results — which is the higher-value activity. The query is the means. The analysis is the end. AI handles the means so you focus on the end.
Why this is safer than autonomous AI: The human executes every query, reviews every result, and makes every decision. AI generates and analyzes. The human acts. There is no point in the loop where AI takes an action on your environment without your explicit execution. This is the human-in-the-loop principle applied to every interaction.
Structured prompting for security contexts
General-purpose prompting guides teach generic techniques. Security operations prompting has specific requirements that general guides do not address.
The structured prompt template for security tasks separates context, task, constraints, and output format into explicitly tagged sections, so the model never has to infer which sentences are background and which are instructions.
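A minimal version of the template (the tag names are illustrative; any consistent set works, and the scenario details are sample values, not part of the course's canonical template):

```
<context>
Microsoft Sentinel, Entra ID. Investigating a brute-force alert in the
Northgate Engineering training tenant.
</context>

<task>
Write a KQL query that returns accounts with more than 20 failed
sign-ins in the last 24 hours.
</task>

<constraints>
Exclude IPs on the TrustedIPs watchlist. Inline comments on every
operator. Label inferences as inferences, not facts.
</constraints>

<output_format>
One KQL query block, followed by a one-paragraph explanation of what
it does and how to tune the threshold.
</output_format>
```

Everything inside `<context>` is processed as background and everything inside `<task>` as the instruction, which is exactly the boundary-setting described next.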
Why XML tags matter: Tags create explicit boundaries between different types of information. The AI processes each tagged section with its intended purpose rather than parsing your intent from an unstructured paragraph. In testing, XML-tagged prompts produce more precise output with fewer iterations than equivalent unstructured prompts — particularly for complex tasks with multiple constraints.
Security-specific prompting rules:
- Always specify the query language and target platform. “Write a KQL query” is better than “write a query.” “Write a KQL query for Sentinel” is better than “write a KQL query” because it constrains the table schema.
- Describe attacks in log terms, not abstract terms. “Detect impossible travel” is vague. “Detect when the same UPN has SigninLogs entries from two different countries within 60 minutes, excluding known VPN IPs from the TrustedIPs watchlist” is specific. Specific input produces specific output.
- Include what you have already tried. “I ran this query and it returned 5,000 results, most of which are false positives from the VPN. Refine the query to exclude VPN traffic.” Iteration builds on context.
- State the audience for documentation. “Write the executive summary for the CISO. No jargon. Business impact focus.” This changes the tone, vocabulary, and level of detail.
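The impossible-travel rule above shows what that level of specificity buys: the description maps almost line-for-line onto a query. A sketch, assuming a standard Sentinel `SigninLogs` schema (where `Location` holds the country code) and a `TrustedIPs` watchlist whose `SearchKey` column holds the IP:

```kql
// Same UPN seen in two different countries within 60 minutes,
// excluding known VPN egress IPs on the TrustedIPs watchlist
let trusted = _GetWatchlist('TrustedIPs')
    | project IPAddress = tostring(SearchKey);
SigninLogs
| where TimeGenerated > ago(1d)
| join kind=leftanti trusted on IPAddress        // drop trusted VPN IPs
| project TimeGenerated, UserPrincipalName, IPAddress, Location
| sort by UserPrincipalName asc, TimeGenerated asc
| extend PrevLocation = prev(Location),
         PrevTime     = prev(TimeGenerated),
         PrevUser     = prev(UserPrincipalName)
| where UserPrincipalName == PrevUser
    and Location != PrevLocation
    and TimeGenerated - PrevTime < 60m
```

A prompt written at the vague "detect impossible travel" level leaves every one of those decisions (time window, exclusion list, comparison column) to the AI's guesswork.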
The verification discipline: Output → Verify → Deploy
This discipline applies to every AI output in every module of this course. It is the single practice that separates safe, effective AI usage from dangerous overreliance.
Output: AI generates the artifact — query, analysis, report section, rule, script.
Verify: You check the output against ground truth.
| Output type | Verification method |
|---|---|
| KQL/SPL query | Run in lab environment. Check table names against schema browser. Spot-check 5-10 results. |
| Log analysis | Verify timestamps, IPs, and usernames against source data. Confirm analytical conclusions are supported by evidence. |
| Detection rule | Test against 30 days of historical data. Review false positive volume. Verify entity mapping fields exist. |
| IR report section | Cross-reference every factual claim against the investigation evidence. Remove inferences presented as facts. |
| PowerShell/Python script | Run with -WhatIf or dry-run flag. Review for credential handling. Test error scenarios. |
| Policy/compliance document | Verify framework citations against the current framework text. Confirm organizational context is accurate. |
Deploy: Only after verification passes, deploy to production — enable the rule, send the report, execute the script, publish the policy.
Never skip verification. The AI output that reads most confidently may be the output most in need of checking (Pattern 4 from subsection 1.1). The verification step takes 5 minutes per output. Recovering from a deployed, unverified output takes hours.
Your first AI-assisted security workflow
Execute this exercise in your configured workspace. It produces your first AI-assisted artifact and validates that your workspace, methodology, and verification discipline work end-to-end.
Try it yourself
In your Security Operations workspace, use the structured prompt template to ask for a KQL query that finds accounts with repeated failed sign-ins over the last 24 hours, summarized by user and source IP, excluding IPs on your TrustedIPs watchlist.
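A prompt in the structured format that fits this exercise (the threshold and watchlist name are illustrative; adjust them to your environment):

```
<context>
Microsoft Sentinel workspace with Entra ID SigninLogs connected.
Training environment: Northgate Engineering.
</context>

<task>
Write a KQL query that finds accounts with more than 10 failed
sign-ins in the last 24 hours, summarized by user and source IP.
</task>

<constraints>
Exclude IPs on the TrustedIPs watchlist. Inline comments explaining
each operator. Flag any assumption you make about the schema.
</constraints>
```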
Then:
- Review the generated query. Are the table and column names correct for your Sentinel workspace?
- Run the query in Sentinel. Does it execute? Does it return results?
- If it errors, identify what is wrong (table name, column name, syntax) and ask the AI to correct it with the specific error message.
- If it executes, review 5-10 results. Do they make sense? Are the timestamps logical? Are the IP addresses real?
This is one cycle of the investigation feedback loop. The exercise validates your workspace configuration, your prompting approach, and your verification discipline.
The AI should produce a functional SigninLogs query with: a TimeGenerated filter, a ResultType filter for failed sign-ins, a summarize by UPN and IP, a threshold filter, and a left anti-join or has_any check against the TrustedIPs watchlist. Common issues to verify: the watchlist name must match your actual watchlist (if you have one), the ResultType values must match the failed sign-in codes in your environment (commonly 50126, 50053, 530032), and the column names must match the standard Sentinel schema (UserPrincipalName, IPAddress, ResultType, TimeGenerated). If the query references columns that do not exist, this is Pattern 1 hallucination: correct it and note what the AI got wrong. This calibrates your verification expectations for the rest of the course.
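One plausible shape for that query (a sketch, not the only correct answer; verify the ResultType codes and the watchlist name against your own tenant before trusting the results):

```kql
// Accounts with >10 failed sign-ins in 24h, excluding trusted IPs
let trusted = _GetWatchlist('TrustedIPs')
    | project IPAddress = tostring(SearchKey);
SigninLogs
| where TimeGenerated > ago(24h)                  // time window
| where ResultType in ("50126", "50053")          // failed sign-ins; verify codes in your tenant
| join kind=leftanti trusted on IPAddress         // drop watchlisted IPs
| summarize FailedCount = count() by UserPrincipalName, IPAddress
| where FailedCount > 10                          // threshold; tune to your baseline
```

If the AI's version differs structurally (for example, has_any instead of a left anti-join), that is fine as long as it executes and the exclusion actually works on your data.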
Check your understanding
1. You ask AI to analyze sign-in logs you uploaded and it concludes: "The attacker exfiltrated 47 emails containing sensitive financial data." Your investigation evidence shows 47 MailItemsAccessed events but no data about the content of those emails. Is the AI's statement appropriate for an IR report?
2. You configure a Security Operations Project with a detailed system prompt. You start a new conversation and ask for a KQL query. The AI ignores your system prompt constraints (no inline comments, wrong query language). What is the most likely cause?