9.3 Near-Real-Time (NRT) and Microsoft Security Rules
Introduction
Scheduled rules are the workhorse, but two other rule types serve specific purposes: NRT rules for the fastest possible detection, and Microsoft Security rules for pass-through incident creation from Defender products. Understanding when each is appropriate prevents both detection gaps (using scheduled when NRT is needed) and operational noise (using NRT when scheduled is sufficient).
NRT rules: sub-minute detection
NRT rules run approximately every minute. They evaluate the most recent data as it arrives, with no manual schedule or lookback configuration — the system manages the evaluation window automatically.
What NRT rules support: Basic KQL queries with filtering, projection, and simple aggregation. Entity mapping. Alert enrichment with custom details. MITRE ATT&CK mapping. All the same incident settings as scheduled rules.
What NRT rules do NOT support: Some time-windowed KQL functions behave differently because the lookback is system-managed. Joins across large time windows are unreliable (the evaluation window is very short). Complex aggregations that require historical baselines do not work well — use scheduled rules for baseline deviation detection.
When to use NRT:
- The event is so critical that even a 5-minute delay is unacceptable.
- The event is high-fidelity (very low false positive rate); you cannot afford NRT rule noise flooding the queue.
- The detection is simple enough to express without joins or historical comparisons.
Production NRT examples:
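The original query for this example is not shown in this copy; the following is a minimal sketch, assuming the SecurityEvent table is populated via the Windows Security Events connector:

```kql
// Windows security event log cleared (Event ID 1102)
SecurityEvent
| where EventID == 1102
| project TimeGenerated, Computer, Account, Activity
```

Map Account to the Account entity and Computer to the Host entity when configuring the rule.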
An attacker clearing the security log is covering their tracks — every minute of delay gives them more time to complete the attack undetected. This event has near-zero false positive rate in most environments.
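A sketch of a honeytoken sign-in detection; the watchlist name "Honeytokens" and its UserPrincipalName column are illustrative assumptions:

```kql
// Any sign-in attempt against a honeytoken account, successful or failed
let honeytokens = _GetWatchlist('Honeytokens') | project UserPrincipalName;
SigninLogs
| where UserPrincipalName in~ (honeytokens)
| project TimeGenerated, UserPrincipalName, IPAddress, ResultType, AppDisplayName
```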
A honeytoken account has no legitimate users. Any sign-in attempt (successful or failed) indicates an attacker who has discovered the account — likely through directory enumeration or credential harvesting.
NRT rule KQL limitations
NRT rules evaluate a system-managed time window (approximately the last minute of data). This affects which KQL operators you can use effectively.
Supported and effective in NRT rules: where filtering, project column selection, extend computed columns, summarize within the current window, join with small lookup tables (watchlists), has, contains, in operators. These work well because they operate on the small dataset in the current window.
Not effective or unreliable in NRT rules: ago() with large windows (the system controls the window, not your query), join between two large tables over historical time ranges, make-series (requires historical data points), prev() and next() (window too small for sequence analysis), arg_max() over historical data. For detections requiring these operators, use scheduled rules.
Practical limitation: no historical comparison. An NRT rule cannot compare “current event” to “30-day baseline” because the query window only contains the last minute of data. Baseline deviation detections must be scheduled rules with appropriate lookback periods.
NRT to scheduled conversion pattern. If you write an NRT rule that turns out to be too noisy or too complex, convert it to a scheduled rule:
- Copy the KQL query from the NRT rule.
- Add a `where TimeGenerated > ago(5m)` filter (replacing the system-managed window).
- Create a new scheduled rule with 5-minute frequency and 5-minute lookback.
- Disable the NRT rule.
The detection latency increases from ~1 minute to ~5 minutes, but you gain access to the full KQL language.
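Applied to the log-cleared example from earlier, the converted scheduled query might look like this; the explicit lookback filter is the only change:

```kql
// Scheduled version: explicit 5-minute lookback replaces the NRT system-managed window
SecurityEvent
| where TimeGenerated > ago(5m)
| where EventID == 1102
| project TimeGenerated, Computer, Account, Activity
```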
NRT rule operational considerations
NRT resource consumption. NRT rules consume more system resources per detection than scheduled rules because they execute roughly 60 times as often (every minute versus every hour). Limit NRT rules to 5-10 in a workspace. More than that degrades query performance for all rules.
NRT monitoring. Check NRT rule execution health:
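The original health query is not shown in this copy; one way to check rule execution health is the SentinelHealth table (health monitoring must be enabled, and the exact SentinelResourceType value should be verified in your workspace):

```kql
// Recent non-successful analytics rule executions
SentinelHealth
| where TimeGenerated > ago(1h)
| where SentinelResourceType == "Analytics rule"   // value assumed; confirm in your workspace
| where Status != "Success"
| project TimeGenerated, SentinelResourceName, Status, Description
```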
NRT rule failures are more critical than scheduled rule failures — the sub-minute detection gap is the reason you deployed NRT. If an NRT rule fails, the high-priority detection it provides is lost until the rule resumes.
Microsoft Security rules: detailed configuration
Step-by-step creation: Sentinel → Analytics → Create → Microsoft incident creation rule.
Step 1: Name the rule: “Create incidents from Defender for Cloud — High severity.”
Step 2: Select the Microsoft security service: Defender for Cloud.
Step 3: Filter by severity: select “High” and “Medium” (optionally exclude Low and Informational to reduce noise).
Step 4: Filter by alert name: optionally include or exclude specific alert names. Example: include only alerts containing “suspicious” or “malware” — excluding informational alerts about posture recommendations.
Step 5: Enable the rule.
When to use name filters: Defender for Cloud generates both security alerts (real threats) and recommendation alerts (posture suggestions). If you only want incidents for actual threats, filter by name or severity to exclude recommendation-type alerts.
Per-product Microsoft Security rule recommendations:
Entra ID Protection: create a Microsoft Security rule filtering for Medium and High risk detections. Entra ID Protection does not sync through the Defender XDR connector, so this is the only way to create incidents from identity risk detections.
Defender for IoT: if you have IoT devices, create a Microsoft Security rule for Defender for IoT alerts. IoT alerts do not flow through Defender XDR.
Microsoft Defender for Cloud: use either the bi-directional sync on the connector (Module 8.2) OR a Microsoft Security rule — not both.
NRT vs scheduled: the decision framework
| Factor | Use NRT | Use Scheduled |
|---|---|---|
| Detection latency requirement | Seconds matter | Minutes/hours acceptable |
| Query complexity | Simple filter/project | Complex joins, baselines |
| Expected alert volume | Very low (< 5/day) | Any volume |
| False positive rate | Near-zero | Acceptable with tuning |
| KQL features needed | Basic operators | Full KQL language |
Do NOT use NRT for: Brute-force detection (aggregation over time), baseline deviation (historical comparison), cross-table correlation (complex joins), or any detection that generates more than a few alerts per day. NRT rules that fire frequently create alert fatigue and consume disproportionate system resources.
Microsoft Security rules
Microsoft Security rules create Sentinel incidents from alerts generated by Microsoft security products. They are the simplest rule type — no KQL, no schedule, no lookback. They filter incoming SecurityAlert events by provider and severity.
Configuration: Sentinel → Analytics → Create → Microsoft incident creation rule. Select the source product (Microsoft Defender XDR, Defender for Cloud, Entra ID Protection, etc.). Filter by severity (optionally: only create incidents for High/Medium alerts). Filter by name pattern (optionally: only create incidents for alerts matching a specific pattern).
The bi-directional sync overlap. If you enabled bi-directional incident sync for Defender XDR (Module 8.3), Microsoft Security rules for Defender products are redundant. Both mechanisms create Sentinel incidents from the same alerts — resulting in duplicates.
Resolution: Choose one mechanism per product.
For Defender XDR products (MDE, MDO, MDI, MDA): use bi-directional sync (recommended). Disable Microsoft Security rules for these products.
For Defender for Cloud: use either bi-directional sync on the connector (if enabled) or a Microsoft Security rule (if you want severity filtering). Do not enable both.
For Entra ID Protection: use a Microsoft Security rule (Entra ID Protection does not have bi-directional sync through the Defender XDR connector).
Anomaly rules: ML-based detection
Anomaly rules are pre-built by Microsoft using machine learning models. They detect behavioural deviations from established baselines: a user who accesses 10x more files than normal, a device that connects to 50 unique external IPs when its baseline is 5, or an account that is active at unusual hours.
You do not write anomaly rules. They are provided as templates in the Analytics rule templates. You can enable, disable, and adjust the sensitivity threshold (higher sensitivity = more detections but more false positives, lower sensitivity = fewer detections but fewer false positives).
Anomaly rules generate anomaly records, not alerts. The output is written to the Anomalies table, not SecurityAlert. By default, anomaly rules do not create incidents. To generate incidents from anomalies: create a scheduled analytics rule that queries the Anomalies table and filters for the specific anomaly types you want to investigate.
Anomaly rule customisation
While you cannot write custom anomaly rules, you can adjust the pre-built rules to fit your environment.
Customising thresholds: Each anomaly rule has a sensitivity slider. At default sensitivity, the rule generates anomalies for behaviour that deviates significantly from the baseline. Increase sensitivity to catch subtler deviations (more anomalies, more false positives). Decrease sensitivity to focus on extreme outliers only (fewer anomalies, fewer false positives).
Customising scope: Some anomaly rules allow you to exclude specific users, IPs, or applications from evaluation. Use exclusions for: known service accounts with inherently variable behaviour, automated tools that generate legitimate behavioural anomalies, and test accounts used during development.
Customising the baseline period: Some rules allow you to set the baseline calculation window (e.g., 14 days vs 30 days). A shorter baseline adapts faster to behaviour changes but is more sensitive to short-term fluctuations. A longer baseline provides more stable baselines but is slower to adapt when a user’s legitimate behaviour changes (new role, new project).
Available anomaly rule templates
Microsoft provides anomaly rule templates across several categories. Key templates to evaluate for your environment:
Authentication anomalies: Anomalous sign-in activity (unusual location, device, or time), anomalous failed sign-in count, anomalous sign-in to a rare application. These complement your scheduled brute-force and impossible travel rules by detecting subtler patterns that threshold-based rules miss.
Data access anomalies: Anomalous data access volume (bulk download or copy), first-time access to a sensitive resource, anomalous SharePoint or OneDrive activity. These detect data exfiltration that develops gradually — too slow for threshold-based rules but obvious to ML-based baseline comparison.
Privileged activity anomalies: Anomalous administrative operations, anomalous group membership changes, anomalous role assignments. These detect privilege escalation that bypasses your scheduled admin-role-assignment rules — for example, an attacker who adds themselves to a non-standard but privileged group that your rule does not explicitly monitor.
Network anomalies: Anomalous outbound data transfer, anomalous connection to rare external IP, anomalous DNS query volume. These detect C2 communication and data exfiltration at the network level.
Building a blended detection strategy: rules + anomalies + UEBA
The most effective detection posture combines all three detection mechanisms.
Scheduled rules catch known threat patterns with high precision. You define exactly what to detect. Low false positive rate when well-tuned. But they only catch what you explicitly define — novel attack techniques that do not match any rule are invisible.
Anomaly rules catch unknown patterns by detecting deviations from established behaviour. They find threats that do not match any specific rule — but generate more false positives because not every behavioural deviation is malicious.
UEBA aggregates all anomaly detections into entity-level risk scores. It tells you which users and entities warrant investigation — even without a specific alert.
The blended model: Scheduled rules generate most incidents (80%). Anomaly rules feed UEBA and generate targeted incidents for high-confidence anomalies (15%). UEBA-driven proactive hunting catches the remaining 5% — the threats that bypass both rule-based and anomaly-based detection. Together, the three mechanisms provide layered coverage that no single approach can match.
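A sketch of the anomalies-to-incidents pattern described earlier (a scheduled rule querying the Anomalies table); the Score threshold and the RuleName filter are illustrative assumptions:

```kql
// Promote only high-confidence anomalies of specific types to incidents
Anomalies
| where Score >= 7                // assumed confidence threshold; tune per environment
| where RuleName has "sign-in"    // illustrative filter for authentication anomalies
| project TimeGenerated, RuleName, Score, Description
```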
This pattern gives you control over which anomalies generate incidents and at what confidence threshold — preventing the UEBA system from flooding the incident queue with low-confidence detections.
NRT production deployment patterns
Beyond the basic examples from earlier in this subsection, these NRT patterns address high-value production scenarios.
NRT: Conditional access policy disabled or modified.
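The original query is not shown in this copy; a hedged sketch using the Entra audit log (requires the AuditLogs table via the Entra ID connector):

```kql
// Conditional access policy updated or deleted
AuditLogs
| where OperationName in ("Update conditional access policy", "Delete conditional access policy")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, OperationName, Actor, Result
```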
An attacker who compromises a Global Admin will often disable conditional access policies to remove MFA requirements before pivoting. Sub-minute detection is critical — every second the policy is disabled, the attacker can access resources without MFA.
NRT: New federation trust created.
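The original query is not shown in this copy; a sketch based on common community detections for this technique (verify the exact OperationName values in your tenant's audit log):

```kql
// Domain federation settings changed (possible Golden SAML persistence)
AuditLogs
| where OperationName in ("Set federation settings on domain", "Set domain authentication")
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| project TimeGenerated, OperationName, Actor, TargetResources
```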
Federation trust modification is a persistence technique used in advanced attacks (Golden SAML). It allows the attacker to forge authentication tokens for any user. This event has near-zero legitimate occurrence outside of initial tenant configuration.
NRT: Mass email deletion (evidence destruction).
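The original query is not shown in this copy; a sketch using the OfficeActivity table, with an assumed deletion threshold (NRT supports summarize within its current window):

```kql
// Mass soft/hard deletes in a single mailbox within the NRT window
OfficeActivity
| where Operation in ("SoftDelete", "HardDelete")
| summarize DeleteCount = count() by UserId
| where DeleteCount > 50   // assumed threshold; tune per environment
```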
An attacker who has compromised a mailbox may mass-delete emails to remove evidence of their access (phishing emails they sent, reply chains they intercepted). NRT detection catches this within seconds.
Fusion detections: multi-signal correlation
Sentinel includes a built-in Fusion detection engine that correlates alerts from multiple Microsoft security products to identify multi-stage attacks. Fusion is a special analytics rule type that you enable but do not configure — it uses ML to identify attack patterns across Defender products and Sentinel detections.
How Fusion works: Fusion analyses alerts from: Entra ID Protection (risky sign-ins), Defender for Cloud Apps (anomalous activities), Defender for Endpoint (endpoint alerts), Defender for Office 365 (email alerts), and Sentinel analytics rules. It identifies combinations of alerts that, individually, might be low-confidence but together indicate a coordinated attack.
Example Fusion detection: A risky sign-in alert (Entra ID Protection) + inbox rule creation (Sentinel analytics rule) + data download from SharePoint (Defender for Cloud Apps) — individually, each might be Medium severity. Fusion correlates them into a single High-severity multi-stage incident because the combination matches a known BEC attack pattern.
Enabling Fusion: Navigate to Sentinel → Analytics → Rule templates → filter by type “Fusion” → enable “Advanced multistage attack detection.” Fusion generates incidents in the queue with the provider “Azure Sentinel Fusion Engine” and includes all correlated alerts as evidence.
Fusion limitations: Fusion only correlates alerts from Microsoft products and Sentinel analytics rules — it does not analyse raw log data. If your analytics rules miss a detection stage, Fusion cannot include it in the correlation. Comprehensive analytics rules (this module) feed better Fusion detections.
Try it yourself
Create an NRT rule for security log cleared: use the Event ID 1102 query above. Map the Account and Host entities. Set severity to High. Enable the rule. If you have access to a Windows server in your lab, clear the security log manually (from an elevated command prompt: wevtutil cl Security) and verify the NRT rule fires within 2 minutes. This end-to-end test validates your fastest detection path.
What you should observe
The NRT rule fires within 1-2 minutes of the log clear event. An incident appears in the queue with High severity, the account that cleared the log mapped as an entity, and the computer name mapped as a host entity. This is significantly faster than a scheduled rule with a 5-minute interval.
Knowledge check
Check your understanding
1. You have bi-directional sync enabled for Defender XDR and a Microsoft Security rule that creates incidents from Defender for Endpoint alerts. What happens?