TH2.15 KQL Anti-Patterns That Produce False Negatives
4-5 hours · Module 2 · Free
Operational Objective
A false negative in hunting is worse than no hunt at all: it creates documented assurance that a technique was searched for and not found, when in reality the query was wrong. This subsection catalogs the KQL mistakes behind those false negatives — queries that look correct, run without errors, and return results that silently miss the threat.
Deliverable: Recognition of the seven most common KQL anti-patterns in hunting and the corrected patterns for each.
⏱ Estimated completion: 25 minutes
The query ran. The results were wrong.
KQL does not throw errors when your logic is flawed. A query with a wrong filter, a mismatched join, or a truncated result set executes successfully and returns data. The data looks plausible. The analyst draws conclusions from it. The conclusions are wrong because the data is incomplete or filtered incorrectly.
These anti-patterns are not beginner mistakes. They catch experienced analysts because they are subtle — the query looks right until you examine exactly what it misses.
Anti-pattern 4: Filtering before join loses context
```kql
// WRONG: Pre-filtering the enrichment table too aggressively
let suspects = ...;  // suspect user list
AuditLogs
| where TimeGenerated > ago(7d)
| where OperationName == "Consent to application"  // only consent
| join kind=inner suspects on ...
// Misses: MFA registration, role assignment, CA policy change
// The filter restricts to one operation when the hunt needs all

// CORRECT: Join first, then examine all activity
AuditLogs
| where TimeGenerated > ago(7d)
| join kind=inner suspects on ...
| summarize Operations = make_set(OperationName, 20)
    by tostring(InitiatedBy.user.userPrincipalName)
// See ALL audit operations by suspect users
// Then filter to the interesting ones based on what you find
```
Anti-pattern 5: Baseline contamination
```kql
// WRONG: Baseline overlaps with detection window
let baseline = SigninLogs
    | where TimeGenerated > ago(30d)  // Includes the last 7 days!
    | summarize KnownIPs = make_set(IPAddress) by UserPrincipalName;
SigninLogs
| where TimeGenerated > ago(7d)
| join baseline on UserPrincipalName
| where not(IPAddress in (KnownIPs))
// The attacker's IP from the last 7 days is IN the baseline
// because the baseline includes the detection window
// The attacker's IP is classified as "known" — false negative

// CORRECT: Gap window separates baseline from detection
let baseline = SigninLogs
    | where TimeGenerated between (ago(37d) .. ago(7d))  // Ends 7 days ago
    | summarize KnownIPs = make_set(IPAddress) by UserPrincipalName;
// TH1.10 covers this in detail — the gap prevents contamination
```
Anti-pattern 6: has vs contains vs == for dynamic fields
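A minimal sketch of how the three operators diverge on a dynamic field. The `datatable` and its values are hypothetical, chosen only to show the matching behavior; parse the dynamic property with `tostring()` before applying string operators:

```kql
// Hypothetical sample — illustrative values, not production data
let sample = datatable(TargetResources: dynamic) [
    dynamic({"displayName": "PowerShell"}),
    dynamic({"displayName": "powershell.exe"}),
    dynamic({"displayName": "Windows PowerShell ISE"})
];
sample
| extend Name = tostring(TargetResources.displayName)
| extend EqExact   = Name == "powershell",   // case-sensitive exact: matches no row here
         HasTerm   = Name has "powershell",  // whole-term, case-insensitive: all three rows
         HasPart   = Name has "shell",       // "shell" is not a whole term: matches no row
         Substring = Name contains "shell"   // substring, case-insensitive: all three rows
```

The false negative comes from `==` on a field whose casing varies, or `has` with a partial term. `contains` is the safest choice when you are unsure whether the value is a whole term, at some performance cost.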
Anti-pattern 7: Time zone confusion in off-hours analysis
```kql
// WRONG: Filtering by UTC hour when your users are in GMT+0
SigninLogs
| where hourofday(TimeGenerated) < 6 or hourofday(TimeGenerated) > 22
// TimeGenerated is UTC. For UK users this is correct in winter (GMT = UTC)
// but wrong in summer (BST = UTC+1): 23:00 BST = 22:00 UTC
// A sign-in at 23:30 BST (22:30 UTC) falls outside the filter — missed
// A sign-in at 06:30 BST (05:30 UTC) falls inside it — incorrectly flagged

// CORRECT: Adjust for the user's time zone
SigninLogs
| extend LocalHour = hourofday(
    datetime_utc_to_local(TimeGenerated, "Europe/London"))
| where LocalHour < 6 or LocalHour > 22
// datetime_utc_to_local handles DST automatically
// For multi-timezone organizations, join with a user→timezone lookup
```
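The user-to-timezone lookup mentioned in the comment can be sketched as follows. The `UserTimezones` table, its column names, and its values are hypothetical placeholders; in practice this mapping would come from HR data or a watchlist:

```kql
// Hypothetical per-user timezone mapping — illustrative names and values
let UserTimezones = datatable(UserPrincipalName: string, Tz: string) [
    "alice@contoso.com", "Europe/London",
    "bob@contoso.com",   "Australia/Sydney"
];
SigninLogs
| join kind=inner UserTimezones on UserPrincipalName
| extend LocalHour = hourofday(datetime_utc_to_local(TimeGenerated, Tz))
| where LocalHour < 6 or LocalHour > 22
// Each sign-in is now evaluated against that user's own local clock
```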
Figure TH2.15 — Seven KQL anti-patterns that produce false negatives. Each runs without errors. Each misses threats. The peer review checklist from TH1.15 catches these before they close a hunt incorrectly.
Try it yourself
Exercise: Audit your existing queries for anti-patterns
Review the hunt queries you wrote during the TH1 and TH2 exercises. Check each against the seven anti-patterns:
1. Did you use == for string comparisons? Switch to =~ or has.
2. Could any query hit the 10,000-row limit without summarize? Add aggregation.
3. Did you use inner join where leftouter would preserve more context?
4. Did you pre-filter enrichment tables before joining?
5. Does your baseline overlap with the detection window?
6. Did you filter dynamic columns without parsing first?
7. Did you use UTC hours for off-hours analysis?
Fix any anti-patterns found. This review builds the habit that TH1.15 (QA) formalizes.
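For item 3, a minimal sketch of the difference, using a hypothetical enrichment table (names and values are illustrative):

```kql
// Hypothetical enrichment table with only a few rows
let vipUsers = datatable(UserPrincipalName: string, Tier: string) [
    "ceo@contoso.com", "VIP"
];
SigninLogs
| where TimeGenerated > ago(7d)
| join kind=leftouter vipUsers on UserPrincipalName
// kind=inner would keep only VIP sign-ins and silently drop everything else;
// leftouter keeps every sign-in, with Tier populated only for VIP users
```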
⚠ Compliance Myth: "If the query runs without errors, the results are correct"
The myth: A query that executes successfully produces valid results. Syntax correctness implies logical correctness.
The reality: KQL does not validate logic. A case-sensitive comparison on an inconsistently capitalized field returns partial results with no warning. A truncated result set looks identical to a complete one. A contaminated baseline window produces plausible but wrong results. The query executes. The data looks reasonable. The conclusion is wrong. Anti-pattern awareness is the individual defense — there is no automated check. The peer review process from TH1.15 is the organizational defense.
Extend this awareness
New anti-patterns emerge as M365 log schemas change. Microsoft periodically modifies column names, data types, and nesting structures across their security log tables. A query that worked correctly against last month's schema may fail silently against this month's schema if a column was renamed or a value format changed. Before running a campaign module's queries in production, verify the column names and value formats against the current Microsoft Learn documentation for each table. TH0.10 (data sources) provides the reference, but the canonical source is always the current Microsoft documentation.
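One way to run this check from the query window itself is the `getschema` operator, which lists a table's current columns and types (shown here against `SigninLogs` as an example):

```kql
// List current column names and types before trusting a saved hunt query
SigninLogs
| getschema
| project ColumnName, ColumnType
```

Comparing this output against the columns a saved query references catches renamed or retyped columns before they produce a silent miss.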
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.