Detection Engineering

For SOC Analysts and Security Engineers Building Detection Programs in Microsoft Sentinel and Defender XDR

The Mission: Build the Machine That Finds Threats Automatically

Your SOC has analytics rules. Most are templates, never tuned, covering less than 10% of the ATT&CK framework. This course builds the detection engineering practice that closes that gap. You will threat model a realistic organization, build 44+ production KQL detection rules across 6 multi-phase attack chains, test and tune every rule against real telemetry patterns, deploy using detection-as-code, and produce the coverage reports that justify the program to leadership. Every rule is built for Microsoft Sentinel and Defender XDR — the detection surface your organization actually runs.

DETECTION ENGINEERING — ATTACK CHAIN COVERAGE

- CHAIN-HARVEST — AiTM → BEC → financial fraud. 5 detection points: phishing, token theft, inbox rule, email collection, wire fraud.
- CHAIN-MESH — Ransomware via SD-WAN lateral movement. 7 detection points: VPN compromise to pre-encryption indicators.
- CHAIN-ENDPOINT — Watering hole → Cobalt Strike → crown jewels. 7 detection points: drive-by, beacon, LSASS, PtH, recon, file share, C2 exfil.
- CHAIN-PRIVILEGE — Insider PIM abuse → app registration persistence. 5 detection points: role activation, app creation, mailbox access, exfil, cover tracks.
- CHAIN-DRIFT — Config change exploit. 4 detection points: CA gap → spray → persist → exfil.
- CHAIN-FACTORY — Physical USB theft. 5 detection points: USB → exec → recon → copy → exfil.

44+ rules · 6 attack chains · 12 modules · 30–40 hours. From under 10% ATT&CK coverage to defensible, risk-prioritized detection.

The detection engineering lifecycle

Every detection rule in this course follows the same production-grade lifecycle — the same process used by mature detection engineering teams:

1. Threat model — identify the techniques that matter for your organization, not generic ATT&CK coverage. Crown jewel analysis drives prioritization.

2. Design the rule — hypothesis, ATT&CK mapping, data source, KQL logic, entity mapping, severity rationale, false positive analysis. The specification comes before the code.

3. Build and test — write the KQL, validate against 30 days of historical data, simulate the attack technique, measure false positive rate before deployment.

4. Deploy to production — Sentinel analytics rule configuration: frequency, lookback, trigger threshold, entity mapping, alert grouping, automation rules.

5. Tune and maintain — monthly threshold review, false positive classification, watchlist management, rule health metrics. The rule that fires on day one is not the rule that runs on day 90.

6. Measure coverage — ATT&CK heatmaps, detection coverage percentage, MTTD, TP rate. The metrics that justify the program to leadership.
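One pass through this lifecycle produces a rule in a consistent shape: the specification as a comment header, then the detection logic. The sketch below is hypothetical — the rule name, thresholds, and exclusion notes are illustrative, not a rule from the course — but the `OfficeActivity` table and its columns follow the standard Sentinel Office 365 connector schema:

```kql
// Rule:       Suspicious inbox forwarding rule creation
// Hypothesis: After mailbox compromise, attackers create forwarding rules
//             to siphon mail (ATT&CK T1114.003)
// Data:       OfficeActivity (Office 365 connector, Exchange workload)
// Severity:   Medium — external forwarding is rare but legitimate for some users
// FP notes:   Exclude known automation accounts via watchlist before production
OfficeActivity
| where TimeGenerated > ago(1d)
| where OfficeWorkload == "Exchange"
| where Operation in ("New-InboxRule", "Set-InboxRule")
| where Parameters has_any ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo")
| project TimeGenerated, UserId, Operation, ClientIP, Parameters
```

The header is not decoration: it is the design artifact from step 2, kept next to the code so the tuning review in step 5 can see the original hypothesis and false positive analysis.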

Who this course is for

SOC analysts moving into detection engineering. You triage alerts and run KQL queries. Now you want to build the rules that generate those alerts. You need the methodology, the architecture, and the production discipline — not another KQL tutorial.

Security engineers in Microsoft environments. You have Sentinel deployed and Defender XDR licensed. Your analytics rules are mostly templates, never tuned. You need a systematic approach to building a detection library that covers your actual threat landscape.

Detection engineers migrating to Microsoft. You know detection engineering from Splunk, Elastic, or CrowdStrike. Now you need the Microsoft implementation: KQL syntax, Sentinel analytics rule architecture, Defender XDR custom detections, and the Microsoft data model.

Security team leads building a detection program. Your CISO wants "improved detection capability" and you need the roadmap: threat modeling, prioritized backlog, sprint cadence, coverage metrics, and the board report that demonstrates progress.

Who this course is not for

Complete beginners to KQL. This course writes production KQL detection rules. If you have not written a KQL query before, start with our Mastering KQL course, then return here. You need to be comfortable with where, summarize, join, let, and ago() before building detection rules.

Beginners to security operations. This course assumes you know what a SOC does, how alerts become incidents, and what MITRE ATT&CK is. If you are new to security: start with our M365 Security Operations course or SOC Operations course.

Threat hunters looking for ad-hoc query techniques. Threat hunting is proactive and hypothesis-driven — our Practical Threat Hunting course covers that discipline. Detection engineering builds the automated rules that run 24/7 after the hunter goes home. Complementary skills, different courses.

Built on a realistic organization — not abstract examples

Every detection rule in this course is built for Northgate Engineering — an 810-person precision manufacturing company with 11 offices, full-mesh SD-WAN, hybrid AD + Entra ID, Microsoft E5 licensing, 18 GB/day Sentinel ingestion, and all the architectural debt that real organizations carry: Server 2016 systems that cannot be upgraded, manufacturing workstations with limited EDR, Linux servers without Defender coverage, and 8 users eligible for Global Admin when 3 would suffice. The starting detection posture: 23 analytics rules — 12 templates, 11 basic custom — covering 7.2% of the ATT&CK framework. Your job is to build the detection program that closes that gap.

What you will be able to do

After completing this course, you will be able to:

1. Threat model your organization's detection priorities using crown jewel analysis, ATT&CK coverage assessment, and risk-based gap scoring — producing a prioritized detection backlog that your team can execute in 2-week sprints.

2. Build 44+ production KQL detection rules across the full ATT&CK kill chain: initial access (phishing, credential attacks, drive-by), persistence (inbox rules, scheduled tasks, app registrations), lateral movement (RDP, WMI, Pass-the-Hash across SD-WAN), collection and exfiltration (email, SharePoint, USB, C2 channels), and impact (pre-encryption ransomware indicators).

3. Test every rule before deployment using historical data validation, attack simulation, false positive estimation, and threshold optimization — so the rule that hits production has a known TP rate and a documented FP baseline.

4. Tune detection rules systematically by classifying false positives (benign TP, environmental FP, logic FP), managing watchlists and exclusions, running monthly tuning reviews, and tracking rule health metrics.

5. Deploy detection-as-code with Git version control, CI/CD pipelines (GitHub Actions → Sentinel), pull request review processes, and documentation standards — eliminating the "edit in portal" governance failure.

6. Produce coverage reports and program metrics for leadership: ATT&CK heatmaps, detection coverage percentage, MTTD, TP/FP rates, cost per detection, and the 90-day board report that justifies continued investment.

7. Run detection engineering as a sustainable practice with 2-week sprint cadence, cross-team collaboration, managed SOC partner integration, and compliance evidence generation.
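As one concrete illustration of the watchlist discipline in point 4, an exclusion can live in a managed watchlist rather than hard-coded IP literals. A sketch — the watchlist name, column, and threshold here are hypothetical, while `_GetWatchlist` and `SearchKey` are the standard Sentinel mechanism:

```kql
// Suppress known corporate egress IPs in a failed sign-in rule,
// keeping the exclusion list maintainable outside the rule logic.
let CorpEgress =
    _GetWatchlist('CorporateEgressIPs')
    | project IPAddress = tostring(SearchKey);
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"                 // non-zero ResultType = failed sign-in
| where IPAddress !in (CorpEgress)
| summarize Failures = count() by UserPrincipalName, IPAddress
| where Failures > 20
```

Centralizing exclusions this way is what makes the monthly tuning review auditable: the watchlist changes, not the rule.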

Course at a glance

Modules: 12 (DE0–DE11)

Production detection rules: 44+ KQL rules with full specifications

Attack chains: 6 multi-phase scenarios (AiTM, ransomware, insider, physical, config drift, endpoint)

Estimated duration: 30–40 hours (self-paced)

Format: Written content — production KQL, SVG diagrams, interactive simulations, worked artifacts, knowledge checks

Free content: DE0–DE1 (2 modules) — no account required

Paid content: DE2–DE11 (10 modules) — Premium or Team subscription

Environment required: Microsoft Sentinel + Defender XDR (or Advanced Hunting access)

Built by Ridgeline Cyber

Ridgeline Cyber Defence builds security operations training grounded in hands-on operational experience. The team behind this course runs CSOC operations across on-prem, Splunk, and Microsoft 365 security stacks, Cisco and Palo Alto networks, and managed SOC partnerships.


Experience spans detection engineering, incident response, and DFIR investigation across on-prem, M365, and Linux environments — including leading cyber incident response engagements, deploying security controls, and managing Governance, Risk and Compliance (GRC) operations.

The detection rules in this course are grounded in that operational work. The techniques, thresholds, false positive patterns, and tuning decisions are drawn from real detection engineering programs, adapted for training.

Technical requirements

Microsoft Sentinel workspace: Required for deploying detection rules. A free M365 E5 developer tenant with an Azure free subscription provides Sentinel with 5 GB/day free ingestion and all Defender XDR data connectors. DE0.6 provides complete lab setup instructions.

Defender XDR Advanced Hunting: Required for cross-product KQL queries. Included with any M365 E5 or Defender product license. Available in the developer tenant.

KQL proficiency: You need to be comfortable writing KQL queries — where, summarize, join, let, make-series, ago(). If you are new to KQL, complete our Mastering KQL course first. If you can write a multi-table join with time-windowed correlation, you are ready.
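The readiness bar is roughly this: a multi-table join with time-windowed correlation. The sketch below follows the standard Entra ID `SigninLogs` schema; the one-hour window and the threshold of 10 targeted accounts are illustrative, not tuned values:

```kql
// Sources that failed against many accounts and then succeeded within
// the same window — a rough spray-then-success correlation.
let window = 1h;
let SprayingIPs =
    SigninLogs
    | where TimeGenerated > ago(window)
    | where ResultType != "0"              // non-zero ResultType = failure
    | summarize TargetedUsers = dcount(UserPrincipalName) by IPAddress
    | where TargetedUsers > 10;
SigninLogs
| where TimeGenerated > ago(window)
| where ResultType == "0"                  // successful sign-in
| join kind=inner SprayingIPs on IPAddress
| project TimeGenerated, UserPrincipalName, IPAddress, TargetedUsers
```

If you can read this and see both the correlation logic and where it would generate false positives, you are ready for the course.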

Git (recommended for DE10): The detection-as-code module uses Git for version control and GitHub Actions for CI/CD. Basic Git proficiency (clone, commit, push, pull request) is sufficient. Not required for other modules.

No third-party tools required. All detection rules are built in KQL and deployed to native Microsoft platforms. No Sigma, no Elastic, no Splunk. If you also use those platforms, the detection engineering methodology transfers — but the rules are written for Microsoft.

How to get the most from this course

Recommended pace: 1–2 modules per week. Each module takes 2–4 hours of focused study. The course is designed for 6–10 weeks of part-time learning alongside a full-time role.

Phase 1 (DE0–DE1) is sequential. DE0 introduces the NE Training Universe and the detection gap. DE1 teaches rule architecture. Complete both before building rules.

Phase 2 (DE2–DE8) follows a logical progression but individual modules can be prioritized based on your threat landscape. If credential attacks are your top risk: start with DE3–DE4. If ransomware is your concern: jump to DE8. The 90-day roadmap in DE2 helps you sequence your priorities.

Phase 3 (DE9–DE11) requires the rule library. Tuning, operations, and the capstone require rules from Phase 2 to exist. Complete at least DE3–DE5 before starting Phase 3.

Deploy the rules. Reading detection rules is not the same as deploying them. Set up your Sentinel workspace and deploy every rule to your development environment. The testing and tuning discipline only develops through practice.

Support and community

Questions about course content: training@ridgelinecyber.com

Billing and account management: Self-service via your account page or training@ridgelinecyber.com

LinkedIn: Follow Ridgeline Cyber for operational security content, course updates, and new module announcements

X: @RidgelineCyber

Course Syllabus

Three phases. Twelve modules. DE0–DE1 are free — no account required.

Phase 2 — Building the Detection Library

DE2: Threat Modeling for Detection Prioritization — Threat modeling specific to Northgate Engineering: manufacturing IP theft, defense contract exposure, financial fraud, ransomware. ATT&CK coverage assessment from the 7.2% baseline. Risk-based gap scoring (threat relevance × data availability × impact severity). Building the detection backlog. The 90-day detection roadmap: Month 1 credential attacks, Month 2 persistence and movement, Month 3 collection, exfiltration, and validation.

DE3: Detecting Initial Access — Phishing detection beyond Safe Links (EmailUrlInfo patterns, sender reputation). Credential stuffing and password spray (spray ratio calculation, threshold tuning). Token theft and session hijacking (deviceId mismatch, non-interactive refresh anomalies). Drive-by and watering hole (C2 connection detection). Physical and removable media (USB mount + file type filtering for manufacturing environments). Valid account compromise (behavioral baselines). 6–8 production KQL rules.

DE4: Detecting Credential Attacks and Identity Threats — Password spray detection at scale (distributed proxy sprays). AiTM session token theft (session anomalies, device state mismatch). MFA fatigue and MFA bypass (push bombing detection, suspicious MFA registration). Impossible travel vs. VPN (making location rules work with 120 remote workers and 10 offices). PIM role activation anomalies (scope creep detection). Service principal and workload identity threats (app-registration-as-persistence). 8–10 production KQL rules.

DE5: Detecting Persistence and Execution — Mailbox rule persistence (New-InboxRule, forwarding rules, legitimate vs. attacker patterns). Scheduled task and service creation (schtasks, sc.exe, WMI subscriptions). Registry run key persistence (baselining legitimate entries). PowerShell and script execution (encoded commands, download cradles, AMSI bypass). Account creation and modification (net user /add, group membership). OAuth application persistence (consent grants, delegated vs. application permissions). 8–10 production KQL rules.

DE6: Detecting Discovery and Defense Evasion — Reconnaissance command sequences (time-windowed sequence detection for nltest, net group, whoami chains). LDAP and directory enumeration (admin group queries, IT admin baseline vs. attacker recon). Security tool tampering (Defender tamper protection bypass, service stops, AMSI disabling). Audit log manipulation (event log clearing, audit policy changes). Configuration change monitoring (CA policy modifications, authentication method changes — the CHAIN-DRIFT detection point with exploitation correlation). 6–8 production KQL rules.

DE7: Detecting Collection and Exfiltration — Email collection and forwarding (MailItemsAccessed Sync vs. Bind, transport rule abuse). SharePoint and OneDrive bulk access (volume baselines per user, distinguishing collaboration from exfiltration). File share access anomalies (bulk read from engineering shares, CAD file type filtering). USB and physical exfiltration (manufacturing USB exemption logic). Cloud storage exfiltration (personal cloud uploads, Graph API data transfer). DNS and C2 channel exfiltration (beaconing patterns, data size anomalies). 8–10 production KQL rules.

DE8: Detecting Lateral Movement and Impact — RDP lateral movement (user-to-device access mapping: "this user has never RDP'd to this server"). WMI and WinRM execution (wmiprvse.exe child process spawning). Pass-the-Hash and Pass-the-Ticket (NTLM from unexpected sources, Kerberos anomalies). Cross-site movement in SD-WAN (correlating logon events with firewall allow rules across the mesh — Edinburgh VPN → Bristol hub → Sheffield spoke). Credential dumping (LSASS access patterns). Pre-encryption ransomware indicators (NRT rule: vssadmin, bcdedit, mass file modification — the 15-minute detection window). 8–10 production KQL rules.

Phase 3 — Operations and Mastery

DE9: Testing, Tuning, and False Positive Management — Testing before deployment (historical data validation, Atomic Red Team simulation, test plan template). Threshold optimization (the threshold curve: too low = analyst drowning, too high = miss the attack). False positive classification (benign TP, environmental FP, rule logic FP — different fixes for each). Watchlists and exclusions (corporate IPs, VPN egress, service accounts — preventing exclusion creep). Tuning cadence and rule health metrics (monthly review: dead rules, noisy rules, over-restrictive rules, rule health dashboard).

DE10: Detection-as-Code and Program Operations — Detection-as-code with Git (ARM/Bicep/YAML templates, version control, pull request review — why "edit in portal" is a governance failure). CI/CD for detection rules (GitHub Actions: lint → validate → deploy to dev → test → promote to production). Rule documentation standard (hypothesis, ATT&CK mapping, entity mapping, FP analysis, response procedure, tuning plan). Coverage reporting and ATT&CK heatmaps (Navigator layers from Sentinel rules, monthly reports for CISO and CFO). The detection engineering sprint (2-week cadence: 2–4 new rules, 2–4 tuned, 1 coverage report). Cross-team collaboration (IT resistance, managed SOC integration, GRC evidence).

DE11: Capstone — Building the NE Detection Program — The board mandate: 90 days to demonstrate meaningful improvement. Three detection sprints: credential attacks, persistence and movement, collection and exfiltration. All 6 attack chains replayed against the complete detection library. Full ATT&CK coverage assessment (before and after). The 90-day board report (baseline coverage, current coverage, rules deployed, TP/FP rates, MTTD improvement, cost, remaining risk, 12-month roadmap). Full alert queue triage simulation (30–40 alerts from all deployed rules — TP/FP/benign marking with tuning recommendations).

Prerequisites

Required:

KQL proficiency — you can write multi-table joins, time-windowed aggregations, and entity correlation queries. If this query is comfortable, you are ready:

```kql
SigninLogs
| where TimeGenerated > ago(1h)
| summarize count() by UserPrincipalName, IPAddress
| where count_ > 10
```

Understanding of Microsoft security stack — you know what Sentinel, Defender XDR, Entra ID Protection, and the unified incident queue do. Completion of our M365 Security Operations course or equivalent operational experience.

Recommended but not required:

Experience with threat hunting (the Practical Threat Hunting course). Threat hunting produces the hypotheses that become detection rules. Prior hunting experience accelerates DE2 and DE3.

Familiarity with MITRE ATT&CK framework — technique IDs, tactic categories, and the Navigator tool. DE2 uses ATT&CK extensively for coverage assessment.

Usage rights and disclaimer

Course materials: Licensed for individual professional development. You may deploy detection rules from this course in your organization's production Sentinel workspace. You may not redistribute course content, share account credentials, or republish course materials.

Detection rules: All KQL detection rules are provided as-is for deployment in your environment. Test every rule against your environment's data before enabling in production. Thresholds, entity mappings, and exclusions require environment-specific tuning. Ridgeline Cyber Defence is not responsible for false positives, false negatives, or operational impact from deployed rules.

Fictional environment: All scenarios use the fictional Northgate Engineering environment. Any resemblance to real organizations, incidents, or individuals is coincidental. IP addresses use RFC 5737 documentation ranges.

Version and changelog

Current version: 1.0  |  Last updated: 2026

2026 — v1.0: Course launch. 12 modules (DE0–DE11). 44+ production KQL detection rules. 6 attack chains. Interactive components: scenario engine, alert simulator, network mesh visualizer. Full content standard compliance.

This course is actively maintained. Detection rules are updated as the Microsoft security platform evolves and new attack techniques emerge. Check this page for version updates.