Module 10 — Check My Knowledge (20 questions)
1. What is the fundamental difference between detection and hunting?
Detection is automated and reactive (rules fire on known patterns). Hunting is manual and proactive (analysts search for unknown threats without an alert trigger). Neither replaces the other — detection catches known threats, hunting finds unknown ones.
Detection is faster
Hunting replaces detection
They are the same thing
Detection = automated, reactive. Hunting = manual, proactive. Both essential.
2. What are the three hunting approaches?
Hypothesis-driven (test a theory from TI), indicator-driven (search for specific IOCs), and analytics-driven (explore data for statistical anomalies).
Manual, automated, hybrid
Network, endpoint, identity
Quick, standard, deep
Hypothesis-driven, indicator-driven, analytics-driven.
3. Which hunting pattern finds an attacker's custom tool?
Rare event discovery. Custom attacker tools execute only once or twice across the environment — they are inherently rare. Search for processes with very low execution counts.
Temporal anomaly
IOC matching
Statistical outlier
Rare event discovery — attacker tools are, by definition, rare in your environment.
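A rare-event hunt of this kind could be sketched in KQL. This is a minimal sketch, assuming the DeviceProcessEvents table (Defender for Endpoint connector) is ingested; the 30-day window and the threshold of 2 executions are illustrative choices, not fixed guidance.

```kql
// Stack process executions across the estate and surface the rarest.
DeviceProcessEvents
| where Timestamp > ago(30d)
| summarize ExecutionCount = count(), Hosts = dcount(DeviceName) by FileName
| where ExecutionCount <= 2          // inherently rare — candidate custom tooling
| order by ExecutionCount asc
```

Anything that ran once or twice on one or two hosts is worth a closer look; most results will be benign one-offs, but a custom attacker tool lives in exactly this tail.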
4. A hunt query returns zero results. Is the hunt a failure?
No. A negative finding confirms the absence of that specific threat for the searched period. Document the result and close the hunt. Negative findings are valuable — they contribute to understanding the threat landscape.
Yes — always find something
Re-run with broader criteria
Change the hypothesis
Negative findings are valuable. A hunt confirming "no compromise" is a successful hunt.
5. What is the purpose of hunting bookmarks?
Bookmarks capture specific query results as persistent evidence from hunting sessions. They survive beyond the session, can be promoted to incidents, and appear in the investigation graph. They are the evidence chain of the hunting process.
Bookmarks save favourite queries
Bookmarks schedule hunting sessions
Bookmarks create analytics rules
Evidence capture and preservation. Persistent, promotable to incidents, visible in investigation graph.
6. When should you use Livestream?
During active investigations — to monitor for the attacker's next move in real time. Livestream continuously runs a query against incoming data and surfaces results immediately. Use it after identifying an attacker IP or compromised account to watch for further activity.
For permanent detection — replace all scheduled rules
Only for compliance reporting
Livestream is deprecated
Real-time monitoring during active hunts and investigations. Not for permanent detection (use scheduled/NRT rules).
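A query pasted into a Livestream session might look like the sketch below. The IP address is a placeholder for an attacker IP identified during the investigation; Livestream supplies the rolling time window itself, so no `ago()` filter is needed.

```kql
// Watch for any further sign-in activity from a known attacker IP (placeholder value).
SigninLogs
| where IPAddress == "203.0.113.7"
| project TimeGenerated, UserPrincipalName, AppDisplayName, ResultType
```

If the attacker reappears, the analyst sees it immediately and can act; for permanent coverage the same logic belongs in a scheduled or NRT rule.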
7. A threat advisory reports IOCs from 8 months ago. Your Analytics tier retains 90 days. How do you hunt?
Create a search job. Search jobs query archived data asynchronously. Use the IOCs as filter criteria. Results are stored in a _SRCH table for standard KQL analysis.
Standard KQL query with ago(8months)
The data is gone
Restore archived data first
Search job for archived data. No restoration needed — search jobs query archives directly.
8. What should a successful hunt produce?
Two outputs: an incident (for immediate investigation and response) and an analytics rule (to detect the same pattern automatically in the future). The incident handles the current threat. The rule prevents future occurrences from requiring manual hunting.
A bookmark only
A report to management
Nothing — the hunt itself is the output
Incident + analytics rule. Hunting feeds detection engineering.
9. The MITRE ATT&CK blade shows T1136.003 with no coverage. What do you do?
Hunt for it: write a KQL query searching for cloud account creation events that do not originate from your HR provisioning process. If found: promote to incident and create a detection rule. If not found: document the negative finding and create a rule anyway to detect future occurrences.
Ignore it
Wait for Content Hub
Only address it next year
Hunt first, then build a rule — close the gap immediately and permanently.
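The hunt for T1136.003 (Create Cloud Account) could be sketched against Entra ID audit logs as below. The excluded service account is a hypothetical placeholder — substitute your actual HR provisioning identities.

```kql
// Cloud account creation not initiated by the expected provisioning identity.
AuditLogs
| where OperationName == "Add user"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| where Actor !in ("svc-hr-provisioning@contoso.com")   // placeholder provisioning account
| project TimeGenerated, Actor, TargetResources
```

Confirmed hits get promoted to an incident; either way, the same query becomes the basis of a scheduled analytics rule that closes the coverage gap permanently.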
10. When should you use notebooks instead of KQL?
When analysis requires capabilities beyond KQL: machine learning, network graph analysis, time series decomposition, or external API integration during analysis. For standard hunting queries, KQL is faster and simpler.
Always — notebooks are superior
Only for compliance
Notebooks are deprecated
Notebooks for advanced analysis beyond KQL. KQL for daily hunting.
11. What is the recommended hunting cadence for a solo SOC operator?
4 hours per fortnight — 2 hunts per month. Rotate between hypothesis-driven, IOC-based, MITRE gap, and UEBA review hunts monthly. Consistency matters more than volume.
Full-time hunting
Once per quarter
Only when triggered by incidents
4 hours per fortnight with consistent monthly rotation across all approaches.
12. What are the four components of a good hunting hypothesis?
What (the specific technique/threat), Why (the intelligence motivating the hunt), Where (data sources and time range), and How (the observable evidence that confirms the hypothesis).
Who, what, when, where
Detect, contain, eradicate, recover
Source, destination, protocol, action
What, Why, Where, How — specific, motivated, targeted, and testable.
13. How does hunting feed detection engineering?
Every confirmed hunting finding should produce a new analytics rule that detects the same pattern automatically. This converts unknown threats (requiring manual hunting) into known detections (handled by automated rules). Over time, the most productive hunting queries migrate to permanent detection — reducing the organisation's reliance on manual hunting for those specific threats.
Hunting and detection are separate
Hunting replaces detection
Detection findings drive hunting only
Hunting → analytics rule. Unknown becomes known. The feedback loop continuously improves both capabilities.
14. What does Livestream NOT support?
Livestream does not create incidents, trigger playbooks, or run permanently. It is a temporary, manual monitoring tool for active hunting sessions. For permanent detection with automated response, use scheduled or NRT analytics rules.
Running KQL queries
Real-time monitoring
Filtering by entity
No incidents, no playbooks, no permanence. Livestream is real-time but temporary.
15. What is stacking (frequency analysis) in hunting?
Count occurrences of an attribute and examine the distribution. The most common values are normal. The least common values are interesting hunting targets — they may be attacker tools, unusual user agents, or rare process names.
Running multiple queries simultaneously
Layering multiple data sources
Building analytics rules in order
Frequency analysis — count and examine the distribution. Rare values are hunting targets.
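Stacking is a one-line pattern in KQL. A minimal sketch, assuming SigninLogs is ingested and using user agents as the stacked attribute:

```kql
// Count sign-ins per user agent; the bottom of the distribution is the hunting target.
SigninLogs
| where TimeGenerated > ago(14d)
| summarize Count = count() by UserAgent
| order by Count asc
```

The same `summarize ... by` pattern works for process names, parent-child pairs, DNS domains, or any other attribute worth stacking.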
16. Where do search job results appear?
In a new table with the _SRCH suffix (e.g., SigninLogs_SRCH). This table appears in the workspace and supports full KQL — including join, summarize, and all other operators. Results are retained for 30 days.
In the original table
In the incident queue
In a downloadable file
_SRCH table — full KQL available, 30-day retention.
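Once the search job completes, its results table is queried like any other. A sketch assuming the job was run against SigninLogs, with placeholder IOC IPs from the advisory:

```kql
// Query the search job results table with standard KQL.
SigninLogs_SRCH
| where IPAddress in ("198.51.100.4", "198.51.100.9")   // placeholder IOC IPs
| summarize SignIns = count() by UserPrincipalName, IPAddress
```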
17. What is a hypothesis backlog?
A prioritised list of hunting hypotheses — generated from threat intelligence, MITRE coverage gaps, incident findings, and UEBA anomalies. When a hunting session starts, the analyst picks the highest-priority hypothesis from the backlog rather than inventing one on the spot.
A list of completed hunts
A queue of incidents
A collection of KQL queries
Prioritised hypothesis list. Ensures hunting is systematic, not ad-hoc.
18. A hunting bookmark reveals a confirmed threat requiring password reset. What do you do?
Promote the bookmark to an incident. The incident triggers the formal response process: containment (password reset via playbook), investigation, classification, and closure. An incident provides documentation, automation, and accountability that a standalone bookmark does not.
Reset the password directly — no incident needed
Add more bookmarks first
Wait for the next hunting session
Promote to incident for formal response with documentation and automation.
19. What is the target threat confirmation rate for a hunting programme?
10-25%. Below 10% suggests hypotheses are too vague or data coverage is insufficient. Above 25% suggests you are only hunting for easy-to-find threats. A healthy programme finds real threats in roughly 1 out of 5 hunts — the other 4 confirm the environment is clean for those hypotheses.
100% — every hunt should find something
0% — hunts should only validate
50%
10-25%. Not too low (vague hypotheses), not too high (only easy targets).
20. What does the complete Sentinel operational model look like after Modules 7-10?
Workspace (M7) → Data (M8) → Detection (M9) → Hunting (M10). Together: configured workspace with governance, comprehensive data coverage, automated detection for known threats, proactive hunting for unknown threats, automated response via playbooks, operational dashboards via workbooks, behavioural analysis via UEBA, data normalisation via ASIM, and continuous improvement via the detection-hunting feedback loop.
Deploy Sentinel and wait for alerts
Configure connectors and enable rules
Install Content Hub and hope for the best
M7→M8→M9→M10: workspace, data, detection, hunting. A complete, continuously improving security operations capability.