DE0.14 Check My Knowledge
These questions test whether you have internalized the concepts from DE0. They are scenario-based — they require applying what you learned to realistic situations, not recalling definitions. If you can answer these without referring back to the subsections, you are ready for DE1.
Question 1 — Coverage assessment
Your Sentinel workspace has 55 active analytics rules. Your security vendor reports "above-average detection posture." Your CISO asks you to quantify the detection coverage. You map each rule to ATT&CK techniques and find 18 distinct techniques covered. Your threat-relevant technique set is 130. What is your coverage percentage, and how does it compare to Northgate Engineering's baseline?
Answer: 18 / 130 = 13.8% coverage. This is slightly above Northgate Engineering's baseline of 10.3% (15 / 145) — but both are in the same range. Having 55 rules instead of 23 did not meaningfully improve coverage because many of the additional rules detected the same techniques through different methods. The headline number (55 rules) masks the reality (13.8% technique coverage). The vendor's "above-average" claim is likely based on rule count, not technique coverage — a vanity metric.
Question 2 — Detection failure layers
After a BEC incident, the post-incident review finds that a Sentinel analytics rule for "suspicious inbox rule creation" existed and was enabled. The rule's KQL queries OfficeActivity for New-InboxRule events where the rule action is "ForwardTo" or "RedirectTo." The attacker created a rule that moved emails to the RSS Subscriptions folder — not a forwarding rule. Which layer of detection failure does this represent?
Answer: Layer 2 — the rule exists but does not fire. The rule queries the correct data source (OfficeActivity) and the correct event type (New-InboxRule), but the KQL filter only matches forwarding actions (ForwardTo, RedirectTo). The attacker used a MoveToFolder action, which the rule's logic does not cover. The rule exists, the technique executes, and the rule does not fire because the KQL does not match the specific attack variant. The fix is to broaden the rule to detect ALL inbox rule creation from risky sessions, regardless of the rule's action type — or to build a separate rule specifically for folder-move rules to financial keyword folders.
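The broadened rule can be sketched in KQL. This is a minimal illustration, not a production rule: field names follow the standard OfficeActivity schema, but the lookback window and the decision to drop the action-type filter entirely are assumptions you would tune against your own data.

```kql
// Sketch: alert on ANY new inbox rule, regardless of action type.
// A MoveToFolder rule (like the attacker's RSS Subscriptions rule)
// matches this just as a ForwardTo/RedirectTo rule does.
OfficeActivity
| where TimeGenerated > ago(1h)
| where Operation == "New-InboxRule"
| extend RuleParams = parse_json(Parameters)   // action type lives in Parameters
| project TimeGenerated, UserId, ClientIP, RuleParams
```

Dropping the action filter raises volume, so in practice you would pair this with a risk signal (e.g., a risky sign-in for the same user) rather than alerting on every inbox rule in isolation.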
Question 3 — CHAIN-HARVEST detection opportunity
In the CHAIN-HARVEST walkthrough (DE0.2), the AiTM attacker used a residential proxy IP in the same country as the victim. The impossible travel rule did not fire. What alternative detection approach would identify the stolen session token?
Answer: Session token anomaly detection. Query SigninLogs for the interactive sign-in (the legitimate auth through the AiTM proxy) and then query AADNonInteractiveUserSignInLogs for subsequent token refresh events. The detection signal: a non-interactive token refresh from a different IP address than the interactive sign-in that created the session, where the device details (browser, OS, device compliance state) do not match the user's baseline. This detects AiTM regardless of geographic proximity because it looks at the session behavior, not the sign-in location.
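A hedged sketch of that correlation in KQL. Joining on UserPrincipalName alone (as below) is a simplifying assumption — it can pair events from different sessions, so a real rule would use a stronger correlation key and compare device details as well. Time windows are illustrative.

```kql
// Sketch: non-interactive token refresh from a different IP than the
// interactive sign-in that created the session.
let interactive =
    SigninLogs
    | where TimeGenerated > ago(1d)
    | project UserPrincipalName, InteractiveIP = IPAddress,
              SigninTime = TimeGenerated;
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(1d)
| join kind=inner interactive on UserPrincipalName
| where IPAddress != InteractiveIP                       // IP mismatch
| where TimeGenerated between (SigninTime .. SigninTime + 6h)
| project UserPrincipalName, InteractiveIP,
          RefreshIP = IPAddress, RefreshTime = TimeGenerated
```

Because the signal is the mismatch between where the session was created and where the token is being used, it fires even when the AiTM proxy sits in the victim's own country.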
Question 4 — Metrics and leadership reporting
The CFO asks: "We are spending $2,700 per month on Sentinel. Our detection rules generated 12 true positive incidents last month. Is that a good return on investment?" How do you answer?
Answer: Cost per detection: $2,700 / 12 = $225 per threat detected. Whether that is "good" depends on what those incidents would have cost if undetected. If the average incident cost for your industry is $100,000+ (per published breach-cost reports), then $225 to detect and contain each threat before it escalates represents significant ROI. The more informative answer: "Our detection program identified 12 threats last month at $225 each. Without these rules, those threats would have operated undetected — our CHAIN-HARVEST analysis showed that similar attacks go undetected for 3+ hours, enough time for BEC, data exfiltration, or ransomware staging. As we add more rules and improve technique coverage, the same $2,700 will detect more threats — cost per detection will decrease while the value increases."
Question 5 — Template rules vs. custom rules
A colleague argues: "We should enable all 200+ Microsoft template analytics rules in the Sentinel content hub. That will give us comprehensive coverage without the effort of building custom rules." What are the two primary risks of this approach?
Answer: First, false positive flood. Template rules use generic thresholds designed for the broadest possible customer base. In your specific environment, many will fire on legitimate activity — IT admin PowerShell automation, field engineer travel patterns, manufacturing USB usage. Within weeks, analysts learn to ignore the noisy rules, which degrades response to ALL rules (Layer 3 detection failure). Second, false sense of coverage. Having 200+ rules creates the appearance of comprehensive detection, but many templates detect the same techniques through different methods, template logic may not match the specific attack variants targeting your environment, and no template has been validated against your historical data. Assumed coverage is not confirmed coverage. The correct approach: enable templates selectively based on your threat model, and tune each one for your environment before trusting it.
Question 6 — Maturity assessment
Your organization has 35 analytics rules. You added 8 of them after your last incident. The rules are created directly in the Sentinel portal (no Git, no version control). You do not track TP/FP rates. You measure "rules deployed" as the program metric. What maturity level is this, and what is the single most impactful improvement?
Answer: Level 1 — Reactive. Rules are added after incidents (reactive trigger). No version control. No FP tracking. Vanity metrics (rule count). The single most impactful improvement is starting FP classification. Adding a TP/FP/BTP tag to each incident at closure (2 seconds of effort) provides the data needed to identify which rules need tuning (highest FP rate), which rules are effective (highest TP rate), and enables two real metrics (TP rate and FP rate) that were previously unmeasured. This one change converts unmeasured rules into data-informed rules and unlocks the tuning cycle that defines Level 2.
Question 7 — Microsoft detection surface
You need to detect the following scenario: a user's account is compromised via AiTM (identity risk signal in SigninLogs), and within 60 minutes, the compromised account sends an internal email containing financial keywords to executive recipients (EmailEvents). Should this be a Sentinel analytics rule or a Defender XDR custom detection? Why?
Answer: Sentinel analytics rule. The detection requires a cross-source join between SigninLogs (identity domain) and EmailEvents (email domain) with a time-windowed correlation. While both tables are available in Defender XDR Advanced Hunting, Sentinel provides: configurable query frequency (including NRT for near-real-time detection), automation rules that can trigger containment playbooks (disable the account, revoke sessions), and incident grouping that correlates the identity alert with the email alert into a single incident for the SOC. The cross-source correlation is the core advantage of building in Sentinel.
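The core of that cross-source correlation can be sketched in KQL. The keyword list, risk-level filter, and 2-hour lookback are illustrative assumptions; schemas follow the standard SigninLogs and EmailEvents tables as ingested into Sentinel.

```kql
// Sketch: risky sign-in followed within 60 minutes by an internal email
// with financial keywords from the same account.
let riskySignins =
    SigninLogs
    | where TimeGenerated > ago(2h)
    | where RiskLevelDuringSignIn in ("medium", "high")
    | project SenderUPN = tolower(UserPrincipalName),
              RiskTime = TimeGenerated;
EmailEvents
| where TimeGenerated > ago(2h)
| where Subject has_any ("invoice", "payment", "wire", "bank details")  // assumed keyword list
| extend SenderUPN = tolower(SenderFromAddress)
| join kind=inner riskySignins on SenderUPN
| where TimeGenerated between (RiskTime .. RiskTime + 60m)
| project SenderUPN, Subject, RecipientEmailAddress,
          RiskTime, EmailTime = TimeGenerated
```

In Sentinel this query becomes a scheduled (or NRT) analytics rule with Account and IP entity mapping, which is what lets automation rules revoke the session when it fires.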
Question 8 — NE Training Universe
CHAIN-MESH involves ransomware lateral movement from Edinburgh to Sheffield via the Bristol hub. What architectural characteristic of Northgate Engineering enables this cross-site movement, and what detection must span the movement path?
Answer: The full-mesh SD-WAN with permissive inter-site firewall rules. RDP is allowed between corporate VLANs at all sites for IT support convenience. The attacker uses this to move: Edinburgh (VPN entry) → Bristol (hub, domain controller access) → Sheffield (target, legacy Server 2016). The detection must correlate: new VPN session for the compromised user (SigninLogs or VPN logs) → RDP connection to Bristol server (DeviceNetworkEvents, DeviceLogonEvents) → WMI or RDP execution on Sheffield server from Bristol (DeviceProcessEvents, DeviceLogonEvents). This cross-site chain requires joining events across multiple device entities with a shared user entity — the kind of multi-hop correlation that templates do not provide.
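A sketch of that multi-hop join in KQL. Everything environment-specific here is an assumption for illustration: the VPN app name, the "BRI-"/"SHF-" device naming conventions, the process names, and the 4-hour windows would all need to match the real estate.

```kql
// Sketch: VPN entry -> RDP logon on a Bristol host -> suspicious
// execution on a Sheffield host, chained on the shared account.
let vpn =
    SigninLogs
    | where AppDisplayName has "VPN"                 // assumed VPN app name
    | project Account = tolower(UserPrincipalName), VpnTime = TimeGenerated;
let rdpBristol =
    DeviceLogonEvents
    | where LogonType == "RemoteInteractive"
    | where DeviceName startswith "BRI-"             // assumed Bristol naming
    | project Account = tolower(AccountName),
              BristolHost = DeviceName, RdpTime = TimeGenerated;
vpn
| join kind=inner rdpBristol on Account
| where RdpTime between (VpnTime .. VpnTime + 4h)
| join kind=inner (
    DeviceProcessEvents
    | where DeviceName startswith "SHF-"             // assumed Sheffield naming
    | where InitiatingProcessFileName in~ ("wmiprvse.exe", "mstsc.exe")
    | project Account = tolower(AccountName),
              SheffieldHost = DeviceName, ExecTime = TimeGenerated
  ) on Account
| where ExecTime between (RdpTime .. RdpTime + 4h)
```

The structure is the point: three tables, one shared user entity, two time-windowed hops — exactly the correlation shape no generic template ships with.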
Question 9 — Detection engineering lifecycle
A threat hunter on your team discovers that several Northgate Engineering users have been targeted by MFA push bombing (repeated MFA prompts that the user eventually accepts out of fatigue). The hunter shows you the KQL query that identified the pattern. What are the next three steps in the detection engineering lifecycle?
Answer: Step 1 (Design): Write the rule specification — ATT&CK mapping (T1621 Multi-Factor Authentication Request Generation), data source (SigninLogs, AADNonInteractiveUserSignInLogs), KQL logic (count MFA prompts denied by the user within a rolling 30-minute window, fire when count exceeds threshold), entity mapping (Account, IP), severity (High — MFA bypass leads directly to account compromise), FP analysis (legitimate MFA failures from app updates, device changes), and response procedure (contact user, verify session, revoke if unauthorized). Step 2 (Build): Write the KQL, test in Advanced Hunting against historical data. Step 3 (Test): Validate against the historical MFA bombing events the hunter found — does the rule fire? Estimate FP rate by running the rule against 30 days of data — how many non-malicious MFA denial clusters does it match?
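The Step 1 KQL logic can be sketched as a simple windowed count. The threshold of 5 is an illustrative assumption to be tuned against the 30-day FP estimate from Step 3; ResultType 500121 is the Entra ID code for a failed strong-authentication request, but verify the codes present in your own tenant's logs before relying on it.

```kql
// Sketch: users who denied/failed 5+ MFA prompts in a 30-minute bin.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType == "500121"    // strong auth failed (check codes in your tenant)
| summarize DeniedPrompts = count(), SourceIPs = make_set(IPAddress)
    by UserPrincipalName, bin(TimeGenerated, 30m)
| where DeniedPrompts >= 5        // assumed threshold; tune per FP analysis
```

Fixed bins are a simplification of the spec's "rolling 30-minute window"; a production rule might use a shorter query frequency with an overlapping lookback to approximate the rolling window.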
Question 10 — The program argument
Rachel (CISO) has a meeting with David (CEO) and Sarah (CFO) in 2 weeks. She needs to present the case for dedicating 40-50% of the security engineer's time to detection engineering for the next 90 days. What three data points from this module make the strongest case?
Answer: First, the coverage gap: "Our current analytics rules cover 10.3% of the attack techniques relevant to our industry. An attacker operating in the remaining 89.7% generates no alerts." This is specific, quantified, and alarming. Second, the CHAIN-HARVEST walkthrough: "A standard AiTM phishing attack — the most common attack type against organizations like ours — traversed five phases over nearly four hours without triggering a single alert from our existing rules. The data was in our SIEM. No rule looked at it." This converts the abstract percentage into a concrete scenario the CEO can understand. Third, the ROI argument: "Detection engineering builds rules against data we already pay to ingest. Our Sentinel cost is $2,700/month. Adding detection rules does not increase that cost — it extracts more value from the existing investment. In 90 days, we target 35% coverage with measurable MTTD improvement and a quarterly metrics report." This addresses the CFO's cost concern directly.