TH0.9 Common Hunting Myths

3-4 hours · Module 0 · Free
Operational Objective
Misconceptions about threat hunting prevent organizations from starting, cause programs to be structured incorrectly, and lead to premature abandonment when expectations are not met. This subsection dismantles the seven most common myths with evidence and operational reality — giving you the counterarguments for the objections you will hear when proposing or defending a hunting program.
Deliverable: Prepared responses to the seven objections most commonly raised against hunting programs — backed by evidence, not opinion.
⏱ Estimated completion: 20 minutes

The objections you will face

Every security capability has to survive scrutiny. Hunting is no different. The following myths are not strawmen — they are real objections raised by real SOC managers, CISOs, and CFOs when hunting programs are proposed. Each one contains a kernel of truth that makes it persuasive, and a fundamental error that makes it wrong.

Myth 1: “You need a dedicated threat hunting team”

The kernel of truth: Dedicated hunters produce more consistent output than analysts who hunt between alert triage sessions. A dedicated team can develop deeper expertise and run more complex campaigns.

The error: Dedicated teams are the ideal end-state, not the prerequisite. Most organizations that hunt effectively started with a rotational model — one senior analyst spending 4–8 hours per week on hunting duty while the rest of the team handles alerts. The rotational model is sufficient for 12 structured campaigns per year, each producing documented findings and at least one detection rule. That output — 12 new detections, measurable coverage improvement, documented negative findings — justifies the program and builds the case for eventually dedicating more resources. Starting with “we need to hire a team” before proving the value is backwards. Start with hours. Prove value. Then grow.

Myth 2: “You need threat intelligence to hunt”

The kernel of truth: Threat intelligence improves hunt hypothesis quality. A hypothesis derived from a specific TI report about a specific threat actor targeting your industry is more likely to produce findings than a generic hypothesis about “suspicious activity.”

The error: Threat intelligence is one of six hypothesis sources described in TH1. The others — ATT&CK coverage gaps, prior incident findings, environmental changes, detection rule failures, and peer community sharing — require no TI subscription. The ATT&CK coverage analysis in TH3 produces a hunt backlog using only your own detection data and the public ATT&CK framework. An organization with zero threat intelligence budget can run every campaign in this course by hypothesizing from coverage gaps and prior incidents. TI makes hunting better. It is not required to start.
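
The ATT&CK-gap hypothesis source can be illustrated against your own alert history. The sketch below is a hedged approximation, not a query from the course: it assumes a Sentinel workspace where the standard SecurityAlert table is populated, and the 90-day window is an arbitrary choice.

```kql
// TI-free hypothesis source: which ATT&CK tactics rarely produce alerts?
// Tactics with few or zero alerts are candidates for the hunt backlog.
SecurityAlert
| where TimeGenerated > ago(90d)
| mv-expand TacticRaw = split(Tactics, ",")
| extend Tactic = trim(" ", tostring(TacticRaw))
| summarize Alerts = count(), DistinctRules = dcount(AlertName) by Tactic
| order by Alerts asc
```

Tactics at the top of this list — or missing from it entirely — are exactly the coverage gaps that require no TI subscription to identify.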

Myth 3: “If our EDR/XDR is good enough, we do not need to hunt”

The kernel of truth: Modern EDR and XDR platforms (Defender XDR, CrowdStrike Falcon, SentinelOne) have extensive built-in detection capabilities, including behavioral analysis, machine learning models, and cloud-delivered protection that evolves continuously.

The error: TH0.1 and TH0.4 address this in depth, but the summary: built-in detections are tuned for high confidence across millions of tenants. They deliberately do not alert on behaviors that are ambiguous at scale — and those ambiguous behaviors are exactly where sophisticated attackers operate. Additionally, three of the five dominant M365 attack categories (TH0.5) operate entirely in the cloud identity and application plane, outside the EDR’s visibility. AiTM session hijacking, OAuth consent abuse, and living-off-the-cloud produce zero endpoint artifacts. The world’s best EDR cannot detect what it cannot see. Cloud-plane hunting requires cloud data sources that EDR does not collect.

Run this to see the split in your own environment:

13
// Cloud vs endpoint event volume: where is the attack surface?
union
    (SigninLogs | where TimeGenerated > ago(7d)
    | summarize Count = count() | extend Source = "Cloud: SigninLogs"),
    (AuditLogs | where TimeGenerated > ago(7d)
    | summarize Count = count() | extend Source = "Cloud: AuditLogs"),
    (CloudAppEvents | where TimeGenerated > ago(7d)
    | summarize Count = count() | extend Source = "Cloud: CloudAppEvents"),
    (DeviceProcessEvents | where TimeGenerated > ago(7d)
    | summarize Count = count() | extend Source = "Endpoint: ProcessEvents")
| project Source, Count
// The cloud tables often contain more events than endpoint tables,
// but receive less hunting attention; that imbalance is the gap

Myth 4: “Hunting is just running queries and hoping to find something”

The kernel of truth: Ad hoc hunting — opening the Advanced Hunting console and running random queries without a hypothesis — does look like “running queries and hoping.” And it produces poor results, because without a hypothesis, there is no way to know whether empty results mean “no threat” or “wrong query.”

The error: Structured hunting is hypothesis-driven, not random. The Hunt Cycle in TH1 starts with a specific, testable hypothesis derived from a concrete source. The hypothesis defines the data sources, the query logic, and the success criteria. The analyst is not hoping to find something — they are testing a prediction. The difference between ad hoc querying and structured hunting is the same as the difference between wandering a building and conducting a building search: methodology, systematization, and documentation.
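
To make the contrast concrete, here is a hedged sketch of what a hypothesis-scoped query looks like. The hypothesis, the time windows, and the join logic are illustrative assumptions, not course material; the table and column names follow the standard SigninLogs schema.

```kql
// Hypothesis: a compromised account produces successful sign-ins from
// autonomous systems the user never used in the preceding 30 days.
// Success criteria: zero rows = documented negative finding;
// any row = a lead to triage, not a confirmed compromise.
let baseline = SigninLogs
    | where TimeGenerated between (ago(44d) .. ago(14d))
    | where ResultType == "0"
    | distinct UserPrincipalName, AutonomousSystemNumber;
SigninLogs
| where TimeGenerated > ago(14d)
| where ResultType == "0"
| join kind=leftanti baseline on UserPrincipalName, AutonomousSystemNumber
| summarize NewASNs = dcount(AutonomousSystemNumber),
            FirstSeen = min(TimeGenerated) by UserPrincipalName
| order by NewASNs desc
```

Note how the hypothesis fixes everything in advance: the data source, the baseline window, and what an empty result means.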

Myth 5: “We tried hunting and did not find anything — it does not work for us”

The kernel of truth: Some hunting programs are shut down after 3–6 months because every campaign returned negative findings. If the organization expected hunting to discover compromises weekly, the reality of mostly-negative findings feels like failure.

The error: TH0.7 covers this in depth. Negative findings are not failures — they are documented evidence that a specific threat was searched for and not found. The value of negative findings includes uncertainty reduction, baseline establishment, detection rule validation, and compliance documentation. More importantly, if hunting consistently finds nothing, the explanation is one of three possibilities: the organization genuinely has no active compromises (unlikely for any M365 tenant over a 6-month period), the hunts are targeting the wrong techniques (fix the backlog prioritization), or the queries are not sensitive enough (fix the query methodology). The answer to “we found nothing” is to refine the program, not to shut it down.

Myth 6: “Hunting is only for large enterprises”

The kernel of truth: Large enterprises have more data, more attack surface, more threat actor attention, and more resources to dedicate to hunting. The highest-profile hunting programs (those described at conferences and in vendor case studies) tend to be at Fortune 500 scale.

The error: The detection gap exists in every M365 environment, regardless of size. A 500-user organization with Defender XDR and Sentinel has the same structural limitation as a 50,000-user organization: detection rules cover a fraction of relevant techniques, and the rest is unmonitored. The difference is scale, not applicability. A small organization running one hunt campaign per month — using the techniques in this course against their own Sentinel workspace — produces the same structural benefits as a large organization’s dedicated hunt team: coverage improvement, dwell time compression, and detection rule production. The queries are the same. The data sources are the same. The methodology is identical. The only difference is the volume of data and the number of findings — and smaller environments are often easier to hunt in because the baselines are cleaner and anomalies are more visible.
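
The "cleaner baselines" claim is easy to check against your own tenant. This is a hedged sketch assuming standard SigninLogs columns; the 30-day window is arbitrary.

```kql
// How big is the baseline you would have to learn?
// Smaller entity counts mean anomalies stand out more readily.
SigninLogs
| where TimeGenerated > ago(30d)
| summarize DistinctUsers = dcount(UserPrincipalName),
            DistinctIPs = dcount(IPAddress),
            DistinctApps = dcount(AppDisplayName)
```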

Myth 7: “AI will replace threat hunting”

The kernel of truth: AI-powered security tools (Microsoft Security Copilot, vendor-specific AI assistants) are improving rapidly. They can summarize incidents, suggest investigation steps, generate KQL queries, and identify patterns in large datasets faster than human analysts.

The error: AI tools accelerate hunting. They do not replace it. An AI assistant can generate a KQL query from a natural language hypothesis — but it cannot formulate the hypothesis. It can identify statistical anomalies in a dataset — but it cannot determine whether the anomaly represents a business trip, a VPN change, or an account takeover. It can summarize results — but it cannot decide whether to escalate to IR or document a negative finding. The judgment that hunting requires — contextual analysis, environmental knowledge, risk assessment, escalation decisions — is exactly the capability that current AI tools do not reliably provide. AI makes the analyst faster. It does not make the analyst unnecessary. TH2 includes techniques for using AI assistants during hunt campaigns to accelerate query construction and result analysis — as tools, not replacements.

SEVEN HUNTING MYTHS — WHAT THEY MISS

  • "Need a team": start with hours, not headcount
  • "Need TI": ATT&CK gaps are a free hypothesis source
  • "EDR is enough": cloud attacks are invisible to EDR
  • "Just queries": hypothesis-driven, not random
  • "Found nothing": negative findings have value
  • "Only for big cos": the gap exists at every scale
  • "AI replaces": AI accelerates, not replaces

Each myth contains a kernel of truth that makes it persuasive — and a fundamental error that makes it wrong. Knowing both the truth and the error prepares you for the conversation.

Figure TH0.9 — Seven common hunting myths and their corrections. Each myth is a real objection you will encounter when proposing or defending a hunting program.

Try it yourself

Exercise: Identify the objections in your organization

Which of the seven myths have you heard (or would expect to hear) from leadership, peers, or your own internal skepticism? For each one, write a two-sentence response using the evidence from this module — coverage ratio data from TH0.1, dwell time data from TH0.2, ROI data from TH0.7, or the structural limitation arguments from TH0.4.

If the objection is "we cannot afford dedicated hunting hours," the response is not "hunting is important" (that is opinion). The response is: "12 hunt campaigns per year cost approximately $7,680 in analyst time and produce 12+ new detection rules that provide permanent automated coverage. The program pays for itself the first time it compresses dwell time on one intrusion that rules would have missed." That is evidence. Evidence wins the argument.
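
The $7,680 figure decomposes as follows; the inputs are assumptions for illustration (8 analyst hours per campaign at a fully loaded $80/hour), not numbers from your finance team.

```kql
// Back-of-envelope cost of a rotational hunting program
print CampaignsPerYear = 12, HoursPerCampaign = 8, HourlyRate = 80
| extend AnnualCostUSD = CampaignsPerYear * HoursPerCampaign * HourlyRate
// 12 * 8 * 80 = 7,680
```

Substitute your own rates; the argument survives as long as twelve campaigns cost less than one extended-dwell-time intrusion.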

⚠ Compliance Myth: "Our compliance framework does not require threat hunting — so we do not need to do it"

The myth: Threat hunting is not explicitly named in our compliance requirements (PCI DSS, HIPAA, SOC 2, ISO 27001). If it is not required, it is not necessary.

The reality: Most compliance frameworks require “proactive monitoring,” “continuous improvement of detection capabilities,” or “regular assessment of security controls” — which hunting satisfies directly. PCI DSS v4.0 Requirement 11.4 requires penetration testing, and Requirement 11 as a whole expects ongoing improvement of testing and monitoring. ISO 27001 Annex A Control A.5.25 requires assessment of information security events. SOC 2 CC7.2 requires monitoring for anomalies. Hunting produces documentation that satisfies all of these requirements more convincingly than alert triage alone — because it demonstrates proactive security activity rather than reactive response. The question is not “does our framework require hunting?” It is “does our framework require us to actively look for threats?” The answer is almost always yes.

Extend this analysis

Myths evolve as the industry matures. Three years ago, "what is threat hunting?" was a common question. Today, the question is "how do we make it work with limited resources?" The shift indicates that hunting is moving from novelty to operational expectation. If you encounter new objections not covered here — particularly around AI, automation, or managed detection services replacing hunting — evaluate them against the detection pyramid from TH0.3. The question is always: does this alternative address the known-unknown layer? If it does not (and most automation does not — it operates in the known-known layer), hunting remains necessary.


References Used in This Subsection

  • PCI Security Standards Council. “PCI DSS v4.0.” Requirement 11.4, Requirement 12.10.
  • ISO/IEC 27001:2022. Annex A Control A.5.25 (Assessment and Decision on Information Security Events).
  • AICPA. “SOC 2 Trust Services Criteria.” CC7.2 (System Monitoring).
  • Course cross-references: TH0.1 (coverage ratio), TH0.2 (dwell time), TH0.4 (structural limitations), TH0.7 (ROI), TH1 (hypothesis sources), TH3 (ATT&CK coverage analysis)
