1.6 Building Your AI Operations Foundation

2-3 hours · Module 1 · Free


The preceding subsections established the knowledge: what AI does, where it fits in security operations, what the standards say, how to evaluate tools, and how to handle data. This subsection converts that knowledge into operational infrastructure. You will configure your AI workspace, establish the core methodology, and execute your first AI-assisted security workflow.

This is the bridge between understanding and execution. When you complete this subsection, you will have a configured, tested AI operations environment ready for the operational modules (Modules 2-10) that follow.


Setting up your security operations workspace

A workspace (called a “Project” in Claude, a “Custom GPT” or “Project” in ChatGPT, a “Gem” in Gemini) provides persistent context that carries across conversations. Every conversation within the workspace inherits the system prompt, reference documents, and configuration you define. Without a workspace, you re-explain your environment, your conventions, and your preferences in every conversation. With a workspace, every conversation starts with that context already loaded.

Create a workspace with the following configuration:

Workspace name: Security Operations

System prompt:

You are assisting a security operations analyst with investigation,
detection engineering, and operational documentation.

Environment context:
- Organization: [your org name or Northgate Engineering for training]
- SIEM: [Microsoft Sentinel / Splunk / Elastic / other]
- EDR: [Defender for Endpoint / CrowdStrike / SentinelOne / other]
- Identity: [Entra ID / Okta / other]
- Primary query language: [KQL / SPL / other]

Output requirements:
- All KQL/SPL queries include inline comments explaining each operator
- All detection rules include: MITRE ATT&CK mapping, entity mapping,
  severity rationale, tuning guidance, and false positive assessment
- IR report sections follow the structure: Executive Summary → Timeline
  → Technical Findings → Impact Assessment → Recommendations
- Compliance mappings reference: NIST CSF 2.0, ISO 27001:2022, SOC 2

Operational constraints:
- Never state inferences as facts. Label conclusions as conclusions.
- Include verification steps for every recommendation.
- State minimum required RBAC role before every portal action.
- Include blast radius for every configuration change.
- Use US English.

Reference documents to upload:

Upload these documents to the workspace’s knowledge base (Project Knowledge in Claude, Project files in ChatGPT). The AI will reference them in every conversation.

| Document | Purpose | Where to get it |
|---|---|---|
| Your IR report template | AI mirrors your reporting format | Your existing template, or build one in Module 4 |
| Your detection rule standard | AI follows your rule naming, documentation, and testing conventions | Your existing standard, or build one in Module 3 |
| Your KQL/SPL style guide | AI follows your query formatting and naming conventions | If you do not have one, the system prompt above is sufficient |
| Your incident classification matrix | AI assigns correct severity and category | Your existing matrix |
| Sanitization reference | AI understands your fictional replacement names | Use the Northgate Engineering values from this course |

If you do not have these documents yet: That is expected. Modules 3 and 4 produce them. For now, configure the workspace with the system prompt only. Add documents as you produce them throughout the course.

Verify: Start a new conversation in the workspace. Ask: “What is my SIEM platform and what query language should you use?” The AI should respond with the details from your system prompt. If it does not, the system prompt is not applied — check the workspace configuration.


The investigation feedback loop

The investigation feedback loop is the core operational methodology for AI-assisted security work. Every investigation workflow in this course (Module 2) and every detection engineering task (Module 3) follows this pattern.

The loop:

  1. Describe the scenario to the AI. Provide the alert, the log data, or the technique description. Use structured prompts (XML tags recommended) to separate context from task from constraints.

  2. AI generates an artifact. A KQL query, an analysis, a report section, a detection rule, a script. This is the first draft — not the final product.

  3. You execute and verify. Run the query in your SIEM. Review the analysis against the evidence. Test the detection rule against historical data. Run the script in dev. The AI generates. You validate.

  4. You provide the results back to the AI. Paste the query results. Describe what you found. Explain what was wrong or unexpected. The AI now has your real-world feedback.

  5. AI generates the follow-up. A refined query, a deeper analysis, an adjusted rule. Each iteration incorporates what you found in the previous cycle.

  6. Repeat until the task is complete. Most investigations take 4-8 cycles. Most detection rules take 2-4 cycles. Each cycle: 2-3 minutes.
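One cycle of this refinement can be sketched in KQL. Column names follow the standard Sentinel SigninLogs schema; the VPN range shown is a hypothetical placeholder, not a value from this course:

```
// Cycle 1 — first draft from the AI: broad failed sign-in hunt
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"                  // non-zero ResultType = failed sign-in
| summarize FailureCount = count() by UserPrincipalName, IPAddress
| where FailureCount > 10

// Cycle 2 — after your feedback ("most hits are our VPN egress range"):
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"
| where not(ipv4_is_in_range(IPAddress, "203.0.113.0/24"))   // hypothetical VPN range
| summarize FailureCount = count() by UserPrincipalName, IPAddress
| where FailureCount > 10
```

The second query exists only because you ran the first one and reported what you saw. That is the loop in miniature.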

Why this is faster than working alone: An experienced analyst spends 60-70% of investigation time writing and debugging queries. AI eliminates that bottleneck. Your time shifts from writing queries to analyzing results — which is the higher-value activity. The query is the means. The analysis is the end. AI handles the means so you focus on the end.

Why this is safer than autonomous AI: The human executes every query, reviews every result, and makes every decision. AI generates and analyzes. The human acts. There is no point in the loop where AI takes an action on your environment without your explicit execution. This is the human-in-the-loop principle applied to every interaction.


Structured prompting for security contexts

General-purpose prompting guides teach generic techniques. Security operations prompting has specific requirements that general guides do not address.

The structured prompt template for security tasks:

<context>
[Your environment: SIEM, EDR, identity provider, relevant configurations]
[The specific scenario: what happened, what you know so far, what data you have]
</context>

<task>
[What you need the AI to produce: a query, an analysis, a report section,
a detection rule, a script]
</task>

<constraints>
[Output requirements: query language, time range, specific tables,
 columns to include/exclude, format requirements]
[Verification requirements: what verification steps to include]
[Audience: technical analyst, CISO, regulator, employee]
</constraints>

<output_format>
[How to structure the response: table, narrative, code block,
 decision tree, checklist]
</output_format>

Why XML tags matter: Tags create explicit boundaries between different types of information. The AI processes each tagged section with its intended purpose rather than parsing your intent from an unstructured paragraph. In testing, XML-tagged prompts produce more precise output with fewer iterations than equivalent unstructured prompts — particularly for complex tasks with multiple constraints.

Security-specific prompting rules:

  • Always specify the query language and target platform. “Write a KQL query” is better than “write a query.” “Write a KQL query for Sentinel” is better than “write a KQL query” because it constrains the table schema.
  • Describe attacks in log terms, not abstract terms. “Detect impossible travel” is vague. “Detect when the same UPN has SigninLogs entries from two different countries within 60 minutes, excluding known VPN IPs from the TrustedIPs watchlist” is specific. Specific input produces specific output.
  • Include what you have already tried. “I ran this query and it returned 5,000 results, most of which are false positives from the VPN. Refine the query to exclude VPN traffic.” Iteration builds on context.
  • State the audience for documentation. “Write the executive summary for the CISO. No jargon. Business impact focus.” This changes the tone, vocabulary, and level of detail.
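The "log terms" bullet above is worth seeing end to end. A prompt as specific as the impossible-travel example should yield a query roughly like this sketch. It assumes the Sentinel SigninLogs schema with Location holding a country code and a TrustedIPs watchlist keyed on IP; adjust both for your environment:

```
// Sketch: same UPN signing in from two different countries within 60 minutes
let trustedIPs = _GetWatchlist('TrustedIPs') | project IP = tostring(SearchKey);
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType == "0"                        // successful sign-ins only
| where IPAddress !in (trustedIPs)               // drop known VPN egress IPs
| project TimeGenerated, UserPrincipalName, IPAddress, Country = Location
| sort by UserPrincipalName asc, TimeGenerated asc
| extend PrevUser = prev(UserPrincipalName),
         PrevCountry = prev(Country),
         PrevTime = prev(TimeGenerated)
| where UserPrincipalName == PrevUser and Country != PrevCountry
| where datetime_diff("minute", TimeGenerated, PrevTime) <= 60
```

Note how every clause traces back to a phrase in the specific prompt: the vague "detect impossible travel" gives the AI nothing to anchor any of these operators to.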

The verification discipline: Output → Verify → Deploy

This discipline applies to every AI output in every module of this course. It is the single practice that separates safe, effective AI usage from dangerous overreliance.

Output: AI generates the artifact — query, analysis, report section, rule, script.

Verify: You check the output against ground truth.

| Output type | Verification method |
|---|---|
| KQL/SPL query | Run in lab environment. Check table names against schema browser. Spot-check 5-10 results. |
| Log analysis | Verify timestamps, IPs, and usernames against source data. Confirm analytical conclusions are supported by evidence. |
| Detection rule | Test against 30 days of historical data. Review false positive volume. Verify entity mapping fields exist. |
| IR report section | Cross-reference every factual claim against the investigation evidence. Remove inferences presented as facts. |
| PowerShell/Python script | Run with -WhatIf or dry-run flag. Review for credential handling. Test error scenarios. |
| Policy/compliance document | Verify framework citations against the current framework text. Confirm organizational context is accurate. |
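The dry-run pattern in the script row deserves a concrete shape. This is a minimal Python sketch, not a real containment tool: disable_account is a hypothetical action and the identity-provider call is a placeholder comment, but the structure — a flag that prints intent instead of executing — is the pattern to insist on in every AI-generated script:

```python
import argparse

def disable_account(upn: str, dry_run: bool) -> str:
    """Disable a user account, or report the intended action in dry-run mode.

    The real directory call is a placeholder here (hypothetical, not a real API).
    """
    if dry_run:
        return f"[DRY RUN] Would disable account: {upn}"
    # A real implementation would call your identity provider's API here.
    return f"Disabled account: {upn}"

def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Containment action with a dry-run safety flag")
    parser.add_argument("upn", help="UserPrincipalName of the account to act on")
    parser.add_argument("--dry-run", action="store_true",
                        help="Print the intended action without executing it")
    args = parser.parse_args(argv)
    print(disable_account(args.upn, args.dry_run))

if __name__ == "__main__":
    main()
```

When you ask AI for an operational script, ask for the dry-run flag explicitly, and run with it first — that is the Verify step applied to code.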

Deploy: Only after verification passes, deploy to production — enable the rule, send the report, execute the script, publish the policy.

Never skip verification. AI output that reads most confidently may be the output most in need of checking (Pattern 4 from subsection 1.1). The verification step takes 5 minutes per output. The recovery from deploying an unverified output takes hours.


Your first AI-assisted security workflow

Execute this exercise in your configured workspace. It produces your first AI-assisted artifact and validates that your workspace, methodology, and verification discipline work end-to-end.

Try it yourself

In your Security Operations workspace, run this prompt:

<context>
I am a security analyst investigating a potential brute force attack
against our organization's Entra ID environment. I have access to
SigninLogs in Microsoft Sentinel.
</context>

<task>
Write a KQL query that identifies brute force sign-in attempts
in the last 24 hours.
</task>

<constraints>
- Table: SigninLogs
- Time range: last 24 hours
- Threshold: more than 10 failed attempts per user per source IP
- Include: UserPrincipalName, IPAddress, failure count,
  first and last attempt timestamps, ResultType
- Exclude: any IP in a watchlist called "TrustedIPs"
  (handle the case where the watchlist may not exist)
- Inline comments explaining each operator
</constraints>

<output_format>
KQL code block with inline comments.
After the query: one paragraph explaining what the results mean
and what investigation steps to take next.
</output_format>

Then:

  1. Review the generated query. Are the table and column names correct for your Sentinel workspace?
  2. Run the query in Sentinel. Does it execute? Does it return results?
  3. If it errors, identify what is wrong (table name, column name, syntax) and ask the AI to correct it with the specific error message.
  4. If it executes, review 5-10 results. Do they make sense? Are the timestamps logical? Are the IP addresses real?

This is one cycle of the investigation feedback loop. The exercise validates your workspace configuration, your prompting approach, and your verification discipline.

The AI should produce a functional SigninLogs query with: a TimeGenerated filter, a ResultType filter for failed sign-ins, a summarize by UPN and IP, a threshold filter, and a left anti-join or has_any check against the TrustedIPs watchlist.

Common issues to verify:

  • The watchlist name must match your actual watchlist (if you have one).
  • The ResultType values for failed sign-ins must match your environment (commonly 50126, 50053, 530032).
  • The column names should match the standard Sentinel schema (UserPrincipalName, IPAddress, ResultType, TimeGenerated).

If the query references columns that do not exist, this is Pattern 1 hallucination — correct it and note what the AI got wrong. This calibrates your verification expectations for the rest of the course.
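For calibration, a query of roughly this shape satisfies the constraints in the prompt. Treat it as a sketch, not the canonical answer: the ResultType codes are the common ones, and if no TrustedIPs watchlist exists in your workspace, drop the let statement and the join rather than expecting KQL to degrade gracefully:

```
let trusted = _GetWatchlist('TrustedIPs')
    | project IPAddress = tostring(SearchKey);      // watchlist key column
SigninLogs
| where TimeGenerated > ago(24h)                    // last 24 hours
| where ResultType in ("50126", "50053", "530032")  // common failed sign-in codes
| summarize FailureCount = count(),
            FirstAttempt = min(TimeGenerated),
            LastAttempt  = max(TimeGenerated)
    by UserPrincipalName, IPAddress, ResultType
| where FailureCount > 10                           // threshold from the prompt
| join kind=leftanti trusted on IPAddress           // exclude trusted IPs
```

If the AI's version differs from this shape, that is not automatically wrong — run both against your data and compare results. The verification, not the sketch, is the ground truth.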

Check your understanding

1. You ask AI to analyze sign-in logs you uploaded and it concludes: "The attacker exfiltrated 47 emails containing sensitive financial data." Your investigation evidence shows 47 MailItemsAccessed events but no data about the content of those emails. Is the AI's statement appropriate for an IR report?

  • Yes — the AI correctly identified the exfiltration
  • No
  • Remove the statement entirely

Answer: No. The AI stated an inference as a fact. The evidence shows 47 mail items were accessed — not that they contained "sensitive financial data." The content of the accessed emails is unknown unless you performed a content search. The correct statement for an IR report is: "The compromised account accessed 47 email items during the unauthorized session. Content analysis is required to determine the sensitivity of the accessed data." AI routinely escalates findings beyond what the evidence supports. Review every factual statement in AI-generated documentation and confirm it is directly supported by the log evidence.

2. You configure a Security Operations Project with a detailed system prompt. You start a new conversation and ask for a KQL query. The AI ignores your system prompt constraints (no inline comments, wrong query language). What is the most likely cause?

  • The conversation was started outside the project
  • The AI cannot follow system prompts reliably
  • System prompts only work on Enterprise plans

Answer: The conversation was started outside the project — in the general chat area rather than within the Security Operations project. Verify the workspace configuration. Check: are you in the correct project? Is the system prompt saved and applied? If the system prompt is long, the AI may deprioritize later instructions — move the most critical constraints (query language, comment style) to the top of the prompt. If the system prompt is correctly applied and the AI still ignores constraints, explicitly restate the ignored constraint in your message: "Use KQL, not SPL. Include inline comments on every line."
