
IR0.5 Module Summary

5 minutes · Module 0 · Free
What you already know

You have worked through four content subsections. This one is a short recap. It exists so that when you come back to IR0 in three months to refresh, you can read a single page and have the module back in memory. It is not a replacement for the content subsections themselves — it is a map back into them if you need one.

Operational Objective
Four subsections produce four distinct things you should be carrying forward into IR1 and the rest of the course. The incident shape from IR0.1 — the four environments and the way a real attack crosses them. The reasoning pattern from IR0.2 — the five-step chain and the three-statement evidence discipline. The framework vocabulary from IR0.3 — six CSF 2.0 Functions, current NIST guidance, and the operational-plus-framework pattern for report writing. The toolkit overview from IR0.4 — six tool categories, all covered by free tools, and detection engineering as the highest-leverage next skill. This subsection is a concise restatement of the four, to confirm the scaffolding is in place before IR1.
Deliverable: A consolidated view of what IR0 established. Use it as a reference; if any of the four items below feels unfamiliar, re-read the relevant subsection before moving to IR1.
Estimated completion: 10 minutes

Incident shape (IR0.1)

Real Microsoft-stack incidents do not stay in one environment. The investigation pattern has to match the attack pattern.

Modern Microsoft-stack incidents cross four environments — Exchange Online, Entra ID, Windows endpoint, and the lateral surface of file servers and domain controllers. The example you worked through in IR0.1 was Claire, the finance manager at Northgate Engineering, who clicked an AiTM phishing link at 14:31 and whose incident reached four environments inside ninety minutes. The cloud side completed the BEC fraud in thirty minutes. The endpoint side was still developing hours later. Each environment holds a specific class of evidence that does not exist anywhere else — mailbox audit trails in Exchange Online, authentication and token events in Entra ID, process and memory state on the endpoint, lateral authentication on the servers.

The consequence for investigation practice is that single-environment investigations produce containment plans the attacker routes around. A cloud-led investigation that misses the endpoint leaves the credential theft and the beacon in place, and the incident recurs three weeks later through a different user. An endpoint-led investigation that misses the cloud leaves in place the inbox rule forwarding invoice emails to the attacker and the OAuth app the attacker consented to, a cloud-persistence mechanism that survives the endpoint rebuild. Cross-plane investigation is not a style preference; it is what the evidence model requires.
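The evidence model above can be sketched as a simple coverage check. The environment names and evidence classes come from IR0.1; the function and dictionary names are illustrative, not course code.

```python
# Evidence classes unique to each environment, per the IR0.1 incident shape.
EVIDENCE_BY_ENVIRONMENT = {
    "exchange_online": "mailbox audit trails",
    "entra_id": "authentication and token events",
    "windows_endpoint": "process and memory state",
    "lateral_servers": "lateral authentication on file servers and DCs",
}

def coverage_gaps(investigated: set[str]) -> dict[str, str]:
    """Return the evidence classes a partial investigation never sees."""
    return {env: evidence for env, evidence in EVIDENCE_BY_ENVIRONMENT.items()
            if env not in investigated}

# A cloud-led investigation that skips the endpoint and the servers
# leaves process/memory state and lateral authentication unexamined.
gaps = coverage_gaps({"exchange_online", "entra_id"})
```

Whatever ends up in `gaps` is exactly the evidence an attacker's persistence can hide behind, which is the point of the cross-plane requirement.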

Current threat landscape figures that matter for how you investigate: global median dwell time rose to 14 days in 2025; voice phishing is now the most common initial vector for cloud-specific compromises at 23%; AiTM attacks grew 146% year-over-year through 2024; token theft detections are running around 39,000 per day in Microsoft's data; and the handoff time between initial-access brokers and secondary threat groups collapsed from eight-plus hours in 2022 to a median of 22 seconds in 2025. Investigation has to be fast, cross-plane, and grounded in current tradecraft rather than the 2022 threat model.

Reasoning pattern (IR0.2)

The mental model the rest of the course applies on every artefact, every module, every investigation.

Investigations are run as a cycle of five steps. Hypothesis — a specific, testable proposition about what happened. Evidence — the specific artefact or artefacts that would prove or disprove the hypothesis. Extract — run the tool and note the delta between what you expected and what the tool returned. Interpret — three statements for every finding: what it proves, what it does not prove, what investigation step it leads to. Next step — the next hypothesis, prioritized by which one would change the scope of the investigation the most if confirmed. The chain cycles until the incident is scoped.
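The cycle above can be expressed as a loop, which makes the termination condition explicit: the chain runs until no hypothesis would change the scope of the investigation. All four callables here are illustrative stand-ins for analyst work, not course code.

```python
def run_chain(hypothesis, gather, interpret, next_hypothesis):
    """The IR0.2 five-step chain as a loop: hypothesis -> evidence/extract
    -> interpret -> next step, cycling until the incident is scoped."""
    findings = []
    while hypothesis is not None:
        evidence = gather(hypothesis)           # Evidence + Extract
        findings.append(interpret(evidence))    # Interpret (three statements)
        hypothesis = next_hypothesis(findings)  # Next step: biggest scope change
    return findings

# One-iteration walk-through with trivial stand-ins:
findings = run_chain(
    "Claire's session token was replayed from attacker infrastructure",
    gather=lambda h: f"sign-in log entries testing: {h}",
    interpret=lambda ev: {"proves": ev, "does_not_prove": "", "next": ""},
    next_hypothesis=lambda fs: None,  # stop after one cycle for the demo
)
```

The design point is that `next_hypothesis` sees all findings so far, matching the rule that the next hypothesis is prioritized by how much it would change the investigation's scope if confirmed.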

The three-statement discipline is the hardest habit to build and the most important. Every finding gets three explicit sentences. "Proves X" stays narrow — only what the specific artefact literally records. "Does not prove Y" names the adjacent claims responders commonly collapse to. "Next step Z" is the specific pivot to the next hypothesis. The discipline prevents over-interpretation, which is the most common investigation error. Reports built from three-statement findings are defensible in audit and in court. Reports that collapse to conclusions are not.
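The three-statement discipline can be made structural rather than habitual: record every finding in a shape that forces all three statements to be written. The field and class names below are illustrative, not course code.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    """One finding, recorded with the IR0.2 three-statement discipline."""
    artefact: str        # what was extracted, and from where
    proves: str          # narrow: only what the artefact literally records
    does_not_prove: str  # the adjacent claim responders commonly collapse to
    next_step: str       # the specific pivot to the next hypothesis

finding = Finding(
    artefact="Entra ID sign-in log, 14:31, Claire's account",
    proves="A session token was issued from an unfamiliar IP at 14:31.",
    does_not_prove="That the attacker accessed the mailbox or the endpoint.",
    next_step="Query mailbox audit for inbox-rule creation after 14:31.",
)
```

Because every field is required, a finding cannot be recorded without its "does not prove" and "next step", which is exactly the over-interpretation guard the discipline exists to provide.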

Four anti-patterns emerge when the chain is not explicit. The artefact collector runs every tool and interprets nothing. The premature concluder jumps from one data point to a declared compromise. The tool-first investigator chooses a tool before stating what hypothesis it tests. The single-source investigator covers the cloud or the endpoint but not both. Each one is caught by running the chain explicitly.

Framework vocabulary (IR0.3)

Short because the work is what matters; the vocabulary exists so you can write reports that audit-adjacent readers can parse.

NIST SP 800-61 Revision 2 was withdrawn by NIST in April 2025 and replaced by Revision 3. The old four-phase lifecycle — preparation, detection and analysis, containment and eradication, post-incident — is superseded. Rev 3 aligns incident response to the six Functions of the NIST Cybersecurity Framework 2.0 — Govern, Identify, Protect, Detect, Respond, Recover. Detect, Respond, and Recover run concurrently during an active incident rather than sequentially, and Govern runs continuously underneath all of them. The work has not changed; the vocabulary has.

This course teaches Detect (the validation and scoping layer that converts an alert into a declared incident), Respond (investigation, analysis, containment, eradication, reporting — the bulk of the course), and Recover (from the responder's angle — validating restored systems are clean, producing the findings that drive program-level recovery). Govern, Identify, and Protect are referenced where they intersect with responder work.

For report writing, the pattern is operational language plus framework mapping. Write the operational narrative first — what happened, in plain English that a CFO can read. Add a short framework mapping block at the end — the CSF 2.0 Functions and Subcategories the findings correspond to, which is what the audit reader needs. Reports that use only one or the other are legible to one audience and opaque to the other; combining them is current best practice.
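The operational-plus-framework pattern can be sketched as a report template: narrative first, mapping block last. The function name is illustrative, and the CSF 2.0 Subcategory identifiers shown are examples chosen for this sketch, not a mapping prescribed by the course.

```python
def render_report(narrative: str, csf_mapping: dict[str, list[str]]) -> str:
    """Operational narrative first; short CSF 2.0 mapping block at the end."""
    lines = [narrative, "", "Framework mapping (NIST CSF 2.0):"]
    for function, subcategories in csf_mapping.items():
        lines.append(f"  {function}: {', '.join(subcategories)}")
    return "\n".join(lines)

report = render_report(
    narrative=("An attacker phished a finance manager, took over her "
               "mailbox, and attempted invoice fraud before containment."),
    csf_mapping={"Detect": ["DE.AE-02"], "Respond": ["RS.AN-03", "RS.MI-01"]},
)
```

The CFO reads the first block and stops; the audit reader skips to the last block. One document serves both audiences, which is the point of the combined pattern.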

Toolkit overview (IR0.4)

Six categories of tools, all free, all production-grade; detailed installation comes in IR1.

Collection is KAPE and Velociraptor. Endpoint analysis is the Eric Zimmerman Tools suite. Memory forensics is Volatility 3 and WinPMem. Cloud investigation is KQL (in Sentinel and Defender XDR), Purview audit, and Microsoft Graph PowerShell. Correlation is Sentinel and Defender XDR tying cloud and endpoint evidence together. Native response is PowerShell 7 for the live-response, containment, and automation work that connects the other five categories.
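The six categories can be written down as an inventory, which makes the "all covered by free tools" claim checkable at a glance. The dictionary name and key spellings are illustrative.

```python
# The six IR0.4 tool categories, each covered by free, production-grade tools.
FREE_TOOLKIT = {
    "collection": ["KAPE", "Velociraptor"],
    "endpoint_analysis": ["Eric Zimmerman Tools"],
    "memory_forensics": ["Volatility 3", "WinPMem"],
    "cloud_investigation": ["KQL", "Purview audit", "Microsoft Graph PowerShell"],
    "correlation": ["Sentinel", "Defender XDR"],
    "native_response": ["PowerShell 7"],
}

# No category is empty: every technique in the course has a free tool.
assert all(FREE_TOOLKIT.values())
```

This is also a useful self-test before IR1: if you cannot reproduce the six keys from memory, re-read IR0.4.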

Paid alternatives exist — Magnet AXIOM Cyber, Binalyze AIR, Defender XDR Live Response, Splunk — and they add workflow convenience where team size and case volume justify the cost. They do not add forensic capability. Every technique in the course works with the free tools.

After the course, the highest-leverage adjacent skill for most IR practitioners is detection engineering. Every investigation produces findings that current detection rules did not catch; without detection engineering, those findings sit in reports unused. With detection engineering, each finding becomes a new rule, and the rules compound — every rule you write reduces the rate of future incidents of that class. The other adjacent disciplines — threat hunting, deeper memory forensics, network forensics, IR program leadership — are worth pursuing when the symptoms in your environment match them, but for most mid-level SOC analysts with IR responsibilities, detection engineering is the right next investment.

Next — IR1

IR0 gave you the foundations. IR1 makes them real. You install KAPE and validate it against a test triage collection. You install the Eric Zimmerman Tools suite and confirm each parser runs against a known artefact. You install Volatility 3, install Microsoft Graph PowerShell, connect to a Microsoft 365 developer tenant, and run a test KQL query. By the end of IR1 you have a working forensic workstation that can collect, parse, and analyze evidence from any Windows endpoint or Microsoft 365 environment you have access to. IR2 onwards applies that toolkit against real evidence.

If any of the four IR0 foundations above feels shaky — if the five-step chain is not yet automatic, if the cross-plane picture is not yet intuitive, if the Rev 3 vocabulary is still fuzzy, if you cannot yet list the six tool categories from memory — go back to the relevant subsection now. The work in IR1 through IR19 assumes IR0 is internalized. An extra hour here saves ten hours of confusion later.
