TH1.17 Check My Knowledge

3-4 hours · Module 1 · Free

1. An analyst writes the following hypothesis: "There might be compromised accounts in our M365 environment." Using the four hypothesis quality criteria from TH1.1, which properties does this hypothesis fail?

It fails specificity (no technique, behavior, or indicator named), testability (no data source identified, no observable defined), and actionability (if "confirmed," what does the analyst do — investigate every account?). The hypothesis is also not meaningfully falsifiable — there is no state of the data that would refute it, because "might be" cannot be disproven. A corrected version: "If AiTM phishing has compromised accounts, then AADNonInteractiveUserSignInLogs will contain token refresh events from IPs not in users' 30-day interactive baseline." That version names the technique, the data source, the observable, and the baseline.
It only fails specificity. The hypothesis is testable because the analyst can query SigninLogs, and it is actionable because compromised accounts should be disabled.
The hypothesis is acceptable for an initial exploration. Specificity can be added after the first query reveals patterns.
It fails only groundedness — it is not derived from a credible source like threat intelligence or ATT&CK coverage gaps.
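The corrected hypothesis in the first answer can be expressed as a query sketch. This assumes the Sentinel tables SigninLogs and AADNonInteractiveUserSignInLogs are ingested; the IncomingTokenType filter is an assumption worth verifying against your workspace schema before relying on it.

```kql
// Sketch: flag non-interactive token refresh events from IPs outside the
// user's 30-day interactive sign-in baseline. Column names follow the
// standard Entra ID schema; adjust to your workspace.
let lookback = 30d;
let baselineIPs = SigninLogs
    | where TimeGenerated between (ago(lookback + 1d) .. ago(1d))
    | summarize KnownIPs = make_set(IPAddress) by UserPrincipalName;
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(1d)
// IncomingTokenType filter is illustrative; verify the column and value.
| where IncomingTokenType =~ "refreshToken"
| join kind=inner baselineIPs on UserPrincipalName
| where not(set_has_element(KnownIPs, IPAddress))
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName, ResultType
```

Either an empty result set (hypothesis not supported) or a set of flagged users (leads to enrich) is a meaningful outcome, which is what makes the corrected version testable.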

2. You are scoping a hunt for inbox rule manipulation (TH5 campaign). You decide to query CloudAppEvents for "New-InboxRule" operations. A colleague suggests also querying AuditLogs, EmailEvents, and SigninLogs "while you are at it." What is the correct response, and why?

Add all suggested tables — more data sources produce more thorough results.
Start with CloudAppEvents for the inbox rule hunt — it is the minimum table required to test this specific hypothesis. Do not add tables speculatively. If step 2 or 3 results indicate a need to check sign-in data (to correlate inbox rule creation with authentication anomalies) or email data (to check for phishing delivery before the rule was created), add those tables in enrichment or pivot queries. Adding them from the start increases query complexity, execution time, and noise without improving the hypothesis test. If the colleague's suggestions represent separate hypotheses (sign-in anomalies, phishing delivery), add those to the backlog as separate hunts.
Query all tables simultaneously but use separate queries for each to maintain clarity.
The colleague is wrong — only CloudAppEvents is relevant to inbox rules. The other tables are never useful for this hunt.
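A minimal step 2 query for this hunt, per the correct response above, might look like the following. It is a sketch against the Defender XDR CloudAppEvents schema; the 30-day window is illustrative.

```kql
// Minimum table needed to test the inbox rule hypothesis: CloudAppEvents.
// Sign-in and email tables are added later, only if results warrant a pivot.
CloudAppEvents
| where Timestamp > ago(30d)
| where ActionType == "New-InboxRule"
| project Timestamp, AccountDisplayName, AccountObjectId, IPAddress, RawEventData
```

Keeping the first query this narrow makes the hypothesis test cheap to run and its results easy to read; pivots to SigninLogs or EmailEvents become separate, deliberate queries.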

3. Your step 2 query returns 10,000 rows — the maximum result limit in Advanced Hunting. What does this mean for your hunt, and what should you do?

The hunt is complete — 10,000 results is a comprehensive dataset to analyze.
The query returned too many results. The hypothesis should be abandoned as too noisy.
The data is truncated — you are seeing the first 10,000 rows, not the full result set. The actual number of matching events may be 10,000, 50,000, or 500,000 — you do not know. Mitigation: add a summarize operation to aggregate results (count by user, group by indicator) before hitting the row limit, or narrow the time window to reduce volume per query, or add additional filters to the where clause to reduce the result set. Do not analyze truncated data as if it were complete.
Run the query in Sentinel Log Analytics instead, which has a higher row limit.
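The summarize mitigation described in the correct answer can be sketched as follows, using the inbox rule hunt as a running example. The aggregation shape and the set-size cap are illustrative.

```kql
// Aggregate before hitting the row cap: one row per account instead of
// one row per event, so truncation no longer hides matching events.
CloudAppEvents
| where Timestamp > ago(30d)
| where ActionType == "New-InboxRule"
| summarize EventCount = count(),
            FirstSeen = min(Timestamp),
            LastSeen  = max(Timestamp),
            SourceIPs = make_set(IPAddress, 50)
    by AccountDisplayName
| order by EventCount desc
```

The aggregated view also tells you the true scale of the activity, which the truncated raw view could not.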

4. During analysis, you find a user with sign-ins from a new IP in Romania (geographic anomaly). No other enrichment dimension shows anomalies — the sign-in was during business hours, no MFA change, no inbox rules, no file downloads, and the user is a sales representative who travels internationally. Using the confidence model from TH1.4, what is the appropriate conclusion?

Escalate to IR — any sign-in from an unexpected country warrants investigation.
Low confidence — document and close. One dimension (geographic) shows an anomaly, but four other dimensions (temporal, behavioral, correlated, user context) do not support compromise. The user's role (sales, international travel) provides a plausible legitimate explanation. Document: "User X — new IP from Romania. Analysis: sales role with international travel. No behavioral anomalies post-sign-in. No correlated indicators. Assessed as legitimate travel." This result becomes a known-false-positive pattern: if a detection rule later triggers on similar activity for this user, add their travel or VPN patterns to its exclusion list.
Medium confidence — investigate further by contacting the user's manager to confirm travel.
No finding — geographic anomalies alone are never meaningful.
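Each enrichment dimension in the correct answer is a short pivot query. A sketch for the MFA-change dimension, assuming Sentinel AuditLogs is ingested; the UPN is a placeholder and the OperationName filter is an assumption to verify against your tenant's audit events.

```kql
// Pivot: did the user register or change an authentication method
// after the anomalous sign-in? Empty result = no MFA anomaly.
let suspectUser = "user@example.com";  // placeholder UPN
AuditLogs
| where TimeGenerated > ago(7d)
| where tostring(TargetResources[0].userPrincipalName) =~ suspectUser
// filter values are illustrative; verify against your audit log entries
| where OperationName has_any ("authentication method", "StrongAuthentication")
| project TimeGenerated, OperationName, Result
```

Analogous pivots against CloudAppEvents (inbox rules, file downloads) fill in the remaining dimensions before the confidence call is made.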

5. A hunt produces no evidence of the hypothesized technique across the full scope. The analyst concludes "the hunt found nothing" and does not complete the documentation. What organizational value was lost?

No value was lost — the hunt genuinely found nothing.
Five forms of value were lost. First: the negative finding itself — documented proof that the technique was searched for and not found reduces organizational uncertainty. Second: compliance evidence — the documented hunt satisfies audit requirements for proactive monitoring. Third: detection rule conversion — the hunt query, now validated against real data, should be deployed as a scheduled analytics rule for permanent coverage. Fourth: baseline data — the hunt produced information about normal activity patterns that future hunts and anomaly detection can reference. Fifth: institutional knowledge — the query chain, the analysis reasoning, and the environmental context observed during the hunt exist only in the analyst's memory and fade unless recorded. All five are recoverable only if the hunt record is completed.
Only the detection rule was lost — the analyst should at least convert the query before closing.
The only loss is the compliance documentation — the audit team needed evidence of proactive hunting.

6. You are converting a hunt query to a Sentinel analytics rule. The hunt query used a 30-day baseline comparison with a let statement that computes per-user IP baselines across all users. In production, this query times out when run as a scheduled rule every hour. What is the recommended approach?

Increase the analytics rule timeout — Sentinel should support longer-running queries.
Abandon the detection rule — if the query is too complex for scheduled execution, the technique cannot be automated.
Pre-compute the baseline. Move the per-user IP baseline computation to a daily scheduled function or watchlist update. The function runs once per day (when timeout constraints are less critical because the baseline does not need real-time updates) and stores the results. The detection rule joins against the pre-computed baseline rather than recalculating it on every run. This separates the expensive baseline computation from the real-time detection query, keeping both within execution limits. Alternative: simplify the rule to catch the most obvious variants (new country, not new IP) and continue periodic hunting for the subtle variants.
Change the rule frequency from every hour to every 24 hours to allow more query execution time.
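The pre-computed baseline pattern can be sketched as a detection query that joins a Sentinel watchlist. The watchlist name, its columns, and the JSON encoding of the IP set are all assumptions for illustration; a daily scheduled job (e.g. a Logic App) would maintain the watchlist.

```kql
// Detection rule: join against a baseline maintained out-of-band,
// instead of recomputing 30 days of per-user baselines every hour.
// Watchlist "UserIPBaseline" (illustrative name) is assumed to hold
// UserPrincipalName and KnownIPs (a JSON array stored as text).
let baseline = _GetWatchlist("UserIPBaseline")
    | project UserPrincipalName, KnownIPs = todynamic(KnownIPs);
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(1h)
| join kind=inner baseline on UserPrincipalName
| where not(set_has_element(KnownIPs, IPAddress))
| project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
```

The hourly query now does only a cheap lookup-and-compare, while the expensive 30-day aggregation runs once per day where its runtime does not matter.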

7. An analyst generates a hypothesis from a threat intelligence report describing a new AiTM toolkit variant. Before scoping the hunt, what should the analyst verify first?

Verify that the TI report is from a credible source (Microsoft, CISA, vendor with good track record).
Verify that the AiTM variant targets their industry sector.
Verify that the data sources required to test the hypothesis are ingested in their environment. A hypothesis about AiTM token replay requires AADNonInteractiveUserSignInLogs. If that table is not ingested, the hypothesis is valid but untestable — the action is "enable ingestion of this table" rather than "hunt for this technique." Testability is the prerequisite that prevents wasted hunt hours on hypotheses the environment cannot answer.
Verify that no existing detection rule already covers this variant.
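A testability check like the one described above can be as simple as confirming the required table exists and has recent data:

```kql
// Pre-hunt check: does the environment ingest the table the hypothesis
// needs? An error (table not found) or zero rows means the hypothesis is
// currently untestable here, and the action is to enable ingestion.
AADNonInteractiveUserSignInLogs
| where TimeGenerated > ago(7d)
| summarize Events = count(), Newest = max(TimeGenerated)
```

Running this before scoping costs seconds and prevents a hunt plan built on data that is not there.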

8. During a hunt, the analyst's step 2 query produces 3 suspicious results. One appears to be a high-confidence compromise — new IP, new MFA registration, inbox rule creation within 4 hours. The analyst plans to complete steps 3–4 (enrichment and pivot) for all 3 results before escalating. Is this the correct approach?

Yes — the analyst should complete the full analysis before escalating to ensure the finding is accurate.
No. The high-confidence result (3+ correlated indicators) should be escalated to IR immediately. Do not wait for the remaining analysis to complete. An active compromise has a ticking clock — every hour of delay is an additional hour of attacker access. Escalate result 1 now with the evidence available. Continue analyzing results 2 and 3 in parallel with the IR response. The remaining hunt work and the IR investigation can proceed simultaneously.
No — the analyst should stop the hunt entirely and switch to incident response for all 3 results.
The analyst should verify the finding with a second analyst before escalating to avoid false escalations.