Module 9 — Check My Knowledge (20 questions)
1. What are the four analytics rule types in Sentinel?
Scheduled (custom KQL on a timer), NRT (near-real-time, ~1 minute evaluation), Microsoft Security (pass-through incidents from Defender products), and Anomaly (ML-based behavioural detection). Scheduled rules are the primary detection mechanism; NRT suits critical, high-fidelity detections; Microsoft Security passes through Defender product alerts; Anomaly flags deviations from behavioural baselines.
High, Medium, Low, Informational
Alert, Incident, Automation, Hunting
Static, Dynamic, Adaptive, Manual
Scheduled, NRT, Microsoft Security, Anomaly. Each serves a specific detection use case.
2. Your rule runs every hour with a 5-minute lookback. What problem does this create?
A 55-minute detection gap per hour. The rule only evaluates 5 minutes of data out of every 60 minutes. Fix: set the lookback to 60-75 minutes to match the schedule interval.
Duplicate alerts
High compute cost
No problem
Schedule-lookback mismatch creates detection gaps. Always set the lookback to match or slightly exceed the schedule interval.
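The lookback (query period) is set in the rule's scheduling settings rather than in KQL, but the effect can be made explicit in the query's time filter. A minimal sketch, assuming a brute-force detection over SecurityEvent, for a rule that runs every 60 minutes:

```kusto
// Hedged sketch: a rule scheduled every 60 minutes should use a lookback
// of ~65 minutes so consecutive runs overlap slightly and nothing is missed.
SecurityEvent
| where TimeGenerated > ago(65m)      // lookback slightly exceeds the interval: no gap
| where EventID == 4625               // failed logon
| summarize FailedLogons = count() by Account, IpAddress
| where FailedLogons > 20             // threshold is an assumed example value
```

The small overlap can produce occasional duplicate alerts at the window boundary, which alert grouping then absorbs; a gap, by contrast, silently drops detections.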
3. When should you use an NRT rule instead of a scheduled rule?
When sub-minute detection latency is critical and the detection is high-fidelity (near-zero false positives). Examples: security log cleared, honeytoken activation, high-confidence TI match. NRT rules run every ~1 minute but have KQL limitations — use scheduled rules for complex queries.
Always — NRT is better than scheduled
Only for Microsoft Security alerts
Never — NRT is deprecated
NRT for critical, high-fidelity, simple detections. Scheduled for everything else.
4. A brute-force rule generates 30 alerts from 2 different IPs. You want one incident per attacking IP. What alert grouping do you configure?
Group alerts with matching entities, using IP as the grouping entity. This creates 2 incidents (one per IP), each containing the related alerts. The analyst investigates each attacking IP independently.
No grouping (30 incidents)
Group all into one incident
Disable the rule
Entity-based grouping on IP. One incident per distinct attacking source.
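Grouping itself is configured in the rule's incident settings, but it only works if each alert carries the attacking IP as a mapped entity. A hedged sketch of the underlying query, with thresholds as assumed example values:

```kusto
// Hedged sketch: each result row carries the attacking IP, which the rule
// maps as the IP entity; alert grouping on "matching IP entities" then
// yields one incident per distinct source.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"                          // failed sign-in
| summarize Attempts = count(),
            TargetedAccounts = make_set(UserPrincipalName, 50)
          by IPAddress
| where Attempts > 25
```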
5. Which entity types should you map for an inbox forwarding rule detection?
Account (victim), IP (attacker source), and Mailbox (external forwarding destination). Three entities covering who was affected, where the attack came from, and where data is being exfiltrated.
Only Account
File and Process
Host and DNS
Account + IP + Mailbox cover the investigation needs for email-based attacks.
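A hedged sketch of the detection query these mappings sit on top of, assuming the OfficeActivity table; the Parameters parsing is an assumption and may need adjusting per tenant:

```kusto
// Hedged sketch: detect inbox rules that forward or redirect mail.
// UserId maps to Account, ClientIP to IP, and the forwarding address
// to the Mailbox entity.
OfficeActivity
| where TimeGenerated > ago(1d)
| where Operation in ("New-InboxRule", "Set-InboxRule")
| extend Params = parse_json(Parameters)
| mv-expand Params
| where Params.Name in ("ForwardTo", "ForwardingSmtpAddress", "RedirectTo")
| project TimeGenerated, UserId, ClientIP, Destination = tostring(Params.Value)
```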
6. What are the three incident classifications in Sentinel?
True Positive (confirmed threat), False Positive (benign activity incorrectly flagged), and Benign Positive (real event but expected/authorised, e.g., pen test). Plus Undetermined for inconclusive investigations.
High, Medium, Low
Threat, Noise, Ignore
Confirmed, Possible, Rejected
TP, FP, BP (+ Undetermined). Classification drives SOC metrics and rule tuning.
7. What is the purpose of investigation bookmarks?
Bookmarks capture specific KQL query results as evidence within an incident. They persist beyond the investigation session and are visible to other analysts reviewing the incident. Bookmarked entities appear in the investigation graph alongside alert entities, enriching the visual investigation with analyst-discovered evidence.
Bookmarks save frequently used KQL queries
Bookmarks tag incidents for follow-up
Bookmarks schedule query execution
Evidence preservation within incidents. Persistent, shareable, and visible in the investigation graph.
8. How do automation rules differ from playbooks?
Automation rules are no-code, instant actions (assign, tag, change severity, suppress, trigger playbooks). Playbooks are Logic Apps workflows for complex, multi-step response (call APIs, send notifications, isolate devices, reset passwords). Automation rules decide what to do; playbooks do it. They are commonly used together: the automation rule triggers the playbook.
They are the same thing
Automation rules replace playbooks
Playbooks are for reporting only
Automation rules = simple, instant, no-code. Playbooks = complex, multi-step, Logic Apps. Combined: automation rule triggers playbook.
9. A rule consistently generates false positives for a build server. How do you suppress the noise while tuning the rule?
Create an automation rule that auto-closes incidents matching the analytics rule name AND the build server host entity. Set an expiration date to prevent the suppression from becoming permanent. Then tune the analytics rule's KQL to exclude the build server. Remove the automation rule once tuning is complete.
Disable the analytics rule
Delete the incidents
Lower severity to Informational
Tactical suppression via automation rule with expiration + rule tuning. Never disable the entire rule.
10. Your "User Compromise Containment" playbook needs to revoke user sessions and reset passwords. Which playbook trigger type should it use?
Alert trigger. The playbook needs the specific Account entity from the alert to perform user-specific actions. The alert trigger provides entity data directly. The incident trigger provides the incident object, requiring additional parsing to extract the entity.
Incident trigger
Scheduled trigger
Manual trigger only
Alert trigger for entity-specific actions. Incident trigger for incident-level actions.
11. How long does UEBA need to build behavioural baselines?
14-21 days minimum. UEBA needs at least two weeks of historical data to establish meaningful baselines for anomaly detection. Enable UEBA early in your Sentinel deployment.
Immediately
24 hours
90 days
14-21 days minimum. Longer baselines produce more accurate detection.
12. What problem does ASIM solve?
ASIM normalises data from multiple sources into a common schema. One analytics rule written against the ASIM schema detects threats across all sources (SigninLogs, SecurityEvent, Syslog, etc.) without source-specific KQL. Without ASIM, the same detection must be duplicated for each source.
ASIM improves query performance
ASIM encrypts data
ASIM replaces DCRs
Cross-source normalisation. One rule, many sources.
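As a sketch, the brute-force detection from earlier questions written once against the ASIM authentication schema; the imAuthentication parser (deployed with ASIM) unifies SigninLogs, SecurityEvent, Syslog, and other sources:

```kusto
// Hedged sketch: one detection, many sources. EventResult, TargetUsername,
// and SrcIpAddr are normalised ASIM authentication schema fields.
imAuthentication
| where TimeGenerated > ago(1h)
| where EventResult == "Failure"
| summarize Failures = count() by TargetUsername, SrcIpAddr
| where Failures > 20
```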
13. What is the recommended alert grouping for a brute-force detection rule?
Group alerts with matching IP entities. Each unique attacking IP generates one incident containing all related alerts. This creates manageable, entity-correlated incidents rather than one incident per failed logon (too many) or one incident for all attacks (too broad).
No grouping
Group all into one incident
Alert grouping is not configurable
Entity-based grouping on IP for network detections. On Account for user-based detections.
14. An analytics rule has a 45% false positive rate. What is your first step?
Analyse the false positive incidents to identify the common benign pattern. Then modify the KQL query to exclude that pattern. Test the modified query against historical data to verify true positives are still detected. Deploy the updated rule.
Disable the rule permanently
Ignore the false positives
Delete all incidents
Analyse, exclude, test, deploy. Fix the rule, do not delete it or ignore the problem.
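The analysis step can start from the SecurityIncident table. A hedged sketch that measures per-rule false positive rates from closed incidents (SecurityIncident logs every update, hence the arg_max to keep only each incident's final state):

```kusto
// Hedged sketch: per-rule FP rate over the last 30 days of closed incidents.
SecurityIncident
| where TimeGenerated > ago(30d)
| where Status == "Closed"
| summarize arg_max(TimeGenerated, Classification, Title) by IncidentNumber
| summarize Total = count(),
            FalsePositives = countif(Classification startswith "FalsePositive")
          by Title
| extend FPRate = round(100.0 * FalsePositives / Total, 1)
| order by FPRate desc
```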
15. You have bi-directional sync for Defender XDR AND a Microsoft Security rule for Defender for Endpoint alerts. What happens?
Duplicate incidents. Both mechanisms create Sentinel incidents from the same alerts. Disable the Microsoft Security rule — bi-directional sync is the preferred mechanism for Defender XDR products.
No issue
Only sync creates incidents
Only the rule creates incidents
Duplicates. Choose one mechanism per product.
16. What severity should you assign to a "security log cleared" detection rule?
High. Clearing the security log is almost always an attacker covering their tracks — near-zero false positive rate and high investigation urgency. The analyst should investigate within 1 hour. This event also warrants an NRT rule for fastest detection.
Medium
Low
Informational
High severity + NRT rule. Security log cleared = evidence destruction = immediate investigation.
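The detection itself is a one-liner, which is exactly why it suits an NRT rule. A minimal sketch:

```kusto
// Hedged sketch: Windows Event ID 1102 is written when the Security event
// log is cleared. Simple enough for NRT, high-fidelity enough for High severity.
SecurityEvent
| where EventID == 1102
| project TimeGenerated, Computer, Account, Activity
```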
17. What is the purpose of custom details in an analytics rule?
Custom details attach additional context fields from the KQL output to the alert. They appear in the incident detail pane and are available to automation rules and playbooks. Well-configured custom details enable 30-second triage — the analyst sees all relevant context without running additional queries.
Custom details change the alert severity
Custom details create entity mappings
Custom details schedule playbook execution
Contextual enrichment for fast triage. Custom details = the "why this alert matters" context that accelerates investigation.
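Custom details are mapped in the rule configuration to columns the query projects, so the KQL must surface the context explicitly. A hedged sketch where FailureCount, FirstAttempt, and LastAttempt would be mapped as custom details:

```kusto
// Hedged sketch: project the triage context so the rule can attach it
// as custom details — the analyst sees scope and timeline at a glance.
SigninLogs
| where TimeGenerated > ago(1h)
| where ResultType != "0"
| summarize FailureCount = count(),
            FirstAttempt = min(TimeGenerated),
            LastAttempt = max(TimeGenerated)
          by UserPrincipalName, IPAddress
| where FailureCount > 20
```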
18. What is the UEBA investigation priority score?
A numerical score (0-10) assigned to each entity based on: the number and severity of detected anomalies, deviation from peer group behaviour, and involvement in recent incidents. High scores (>7) indicate entities that warrant proactive investigation even without a specific incident trigger.
The order in which incidents are investigated
The severity of the analytics rule
The number of incidents assigned to an analyst
Entity-level behavioural risk score. Used for proactive hunting and investigation enrichment.
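For the proactive hunting use, a hedged sketch against the BehaviorAnalytics table (column names are my assumption of the UEBA schema and worth verifying in your workspace):

```kusto
// Hedged sketch: surface entities whose UEBA investigation priority
// exceeds 7 for proactive review, even without an incident trigger.
BehaviorAnalytics
| where TimeGenerated > ago(7d)
| where InvestigationPriority > 7
| project TimeGenerated, UserName, ActivityType, InvestigationPriority, SourceIPAddress
| order by InvestigationPriority desc
```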
19. Where do anomaly rule detections appear?
In the Anomalies table — not in SecurityAlert and not as incidents by default. To create incidents from anomaly detections, build a scheduled analytics rule that queries the Anomalies table and filters for the specific anomaly types and confidence thresholds you want to investigate.
In the incident queue automatically
In the SecurityAlert table
In the SigninLogs table
Anomalies table. Not auto-incident. Build a scheduled rule to create incidents from high-confidence anomalies.
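A hedged sketch of such a scheduled rule; the field names and threshold are assumptions — check the RuleName and Score columns in your own Anomalies table before using:

```kusto
// Hedged sketch: promote only high-confidence anomalies of a chosen
// type to incidents. The RuleName filter is a placeholder.
Anomalies
| where TimeGenerated > ago(1h)
| where RuleName has "sign-in"        // placeholder filter for the anomaly type
| where Score > 0.8                   // assumed confidence threshold
| project TimeGenerated, RuleName, Score, UserName, Entities
```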
20. What is the monthly detection engineering review?
A scheduled review that evaluates: new threat intelligence, MITRE ATT&CK coverage gaps, analytics rule false positive rates, rules with zero firings, new data sources requiring rules, and the plan for next month's rule development. This 2-hour monthly investment maintains detection coverage as the threat landscape and environment evolve.
A review of analyst performance
A review of Sentinel licensing costs
A review of data connector health
The continuous improvement review that prevents detection decay. Threat modelling, coverage analysis, rule tuning, and gap-driven development.
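One review input can be gathered directly from the workspace. A hedged sketch that lists alert volume per analytics rule: rules configured in Sentinel but absent from this output have fired zero times and are candidates for review.

```kusto
// Hedged sketch: 30-day alert volume per rule. The ProductName filter
// restricts to Sentinel analytics rule alerts.
SecurityAlert
| where TimeGenerated > ago(30d)
| where ProductName == "Azure Sentinel"
| summarize AlertCount = count() by AlertName
| order by AlertCount desc
```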