In this module

IR0.1 How Real Incidents Actually Unfold

5 minutes · Module 0 · Free
What you already know

You've worked alerts. You've pivoted from a sign-in log to a process tree, from a mail event to a device timeline, from a Defender XDR incident to the underlying tables. You know the difference between triage and investigation, and you've done at least some of the latter. This subsection doesn't teach you what an incident is. It shows you, end to end, the incident shape this course is built to investigate — so the rest of the modules have something concrete to connect back to.

Operational Objective
A real Microsoft-stack incident does not stay in one environment. In the time it takes you to finish triaging the first alert, the attacker has already crossed from email to identity, from identity to endpoint, and from endpoint to lateral movement. A responder who investigates only the cloud side sees the stolen session and the inbox rule but misses the loader on disk and the credentials in memory. A responder who investigates only the endpoint side sees the beacon and the credential dump but misses the fraudulent payment email that went out forty minutes ago. Both produce containment plans that leave half of the attacker in place. The attacker returns through the unremediated channel, and the incident recurs. This course teaches the unified investigation because the attacker is a unified operator, and half an answer is not an answer.
Deliverable: A mental model of what a current Microsoft-stack incident looks like end to end — the four environments it touches, the order they're touched in, and the evidence that lives in each one.
Estimated completion: 20 minutes
ONE INCIDENT, FOUR ENVIRONMENTS, NINETY MINUTES

EXCHANGE ONLINE (cloud email)
  14:31 phishing arrives · 14:31 user clicks link · 14:32 AiTM proxy page · 14:46 inbox rule created · 14:58 BEC reply sent
  Evidence lives in: MailItemsAccessed, UnifiedAuditLog, MessageTrace, inbox rules
  Retention: 180d (E5)

ENTRA ID (cloud identity)
  14:32 token captured · 14:36 sign-in succeeds · 14:37 MFA satisfied · 14:42 session replayed · 15:18 OAuth consent
  Evidence lives in: SigninLogs, AuditLogs, IdentityRiskEvents, CA evaluation logs
  Retention: 30-180d

WINDOWS ENDPOINT (user workstation)
  15:09 loader executes · 15:10 beacon in memory · 15:14 LSASS accessed · 15:21 hash harvested · 15:22 event log clear
  Evidence lives in: Prefetch, AmCache, $MFT, $UsnJrnl, event logs, registry, memory, Sysmon
  Retention: hours to days

LATERAL + SERVERS (file, DC, backup)
  15:28 NTLM relay · 15:34 file server auth · 15:41 data staged · 15:52 archive created · 16:01 exfil to C2
  Evidence lives in: 4624 / 4648 / 4672, WinRM / WMI logs, Sysmon network, DC audit, share audit
  Retention: varies wildly

Ninety minutes from click to data exfiltration. The investigation has to cover all four columns or the containment plan is incomplete.

Figure IR0.1 — One AiTM-triggered incident, ninety minutes, four environments. The timestamps are from a real reconstruction (sanitized and replayed in the Northgate Engineering scenario used throughout this course).

A real Wednesday afternoon

The narrative below is how incidents actually unfold in a Microsoft-stack environment. Follow the timestamps. Notice how quickly the attacker crosses the boundaries your team's tooling may treat as separate.

It's 14:31 on a Wednesday. A finance manager at Northgate Engineering — call her Claire — clicks a link in an email that looks like a Microsoft 365 password expiry notice. The page she lands on is a perfect replica of the Microsoft login. She types her credentials. A push notification arrives on her phone. She approves it. Everything looks normal. She closes the tab and goes back to invoice work.

Here's what actually happened. The page was an AiTM proxy — an attacker-controlled server sitting between Claire and Microsoft's real login, forwarding every keystroke in both directions. Her password went to the attacker. Her MFA approval went to the attacker. Most importantly, the session cookie Microsoft issued — the thing that proves Claire is authenticated — went to the attacker too. The cookie is valid for hours. If the attacker harvested the refresh token as well, it's valid for days.

Four minutes later — 14:36 — the attacker replays Claire's session from a residential IP in Bucharest and signs in to her mailbox as her. The sign-in succeeds. From Entra ID's perspective, this is Claire. Her session cookie is valid, her MFA is satisfied (because the attacker has the token, not because Claire approved anything new), and the conditional access policy allows the request because the user is known and the application is known. Entra ID Protection flags the sign-in as low-risk — residential IPs are common now and the attacker's IP reputation is clean.
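The replay pattern in this sign-in is detectable in principle because two requests share one session from two geographies within minutes. A minimal sketch of that heuristic follows; the record shape and field names are illustrative stand-ins invented for this example, not the real Entra ID SigninLogs schema.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified sign-in records (documentation-range IPs).
signins = [
    {"user": "claire", "time": datetime(2025, 1, 15, 14, 32),
     "ip": "203.0.113.10", "country": "GB", "session_id": "sess-01"},
    {"user": "claire", "time": datetime(2025, 1, 15, 14, 36),
     "ip": "198.51.100.7", "country": "RO", "session_id": "sess-01"},
]

def flag_session_replay(records, window=timedelta(hours=1)):
    """Flag pairs of sign-ins that reuse one session identifier from two
    different countries within a short window, the token-replay signature
    described above. A heuristic sketch, not a production detection."""
    flagged = []
    seen = {}  # session_id -> earlier records for that session
    for rec in sorted(records, key=lambda r: r["time"]):
        for prev in seen.get(rec["session_id"], []):
            if (rec["country"] != prev["country"]
                    and rec["time"] - prev["time"] <= window):
                flagged.append((prev, rec))
        seen.setdefault(rec["session_id"], []).append(rec)
    return flagged

hits = flag_session_replay(signins)
```

The point of the sketch is the join key: the detection hinges on correlating by session, not by user, which is exactly the dimension risk scoring was slow to weigh in this incident.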

The attacker reads eleven minutes of Claire's recent email. Then, at 14:46, they create an inbox rule. The rule moves any email containing "invoice," "payment," or "PO" to a folder named RSS Feeds and marks it as read. RSS Feeds is a default folder Claire rarely looks at. From this moment, the attacker reads Claire's invoice mail silently and Claire's unread count never ticks up.
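Rules with this shape (finance keywords, a rarely-viewed target folder, mark-as-read) can be triaged mechanically. A hedged sketch follows; the rule dict is a hypothetical stand-in for a parsed New-InboxRule audit record, and the folder and keyword lists are assumptions for illustration.

```python
# Default folders users rarely open; a common BEC hiding spot (illustrative list).
SUSPICIOUS_FOLDERS = {"rss feeds", "rss subscriptions", "conversation history", "archive"}
FINANCE_KEYWORDS = {"invoice", "payment", "po"}

def score_inbox_rule(rule):
    """Return the reasons this rule looks like BEC tradecraft.
    `rule` is a hypothetical simplified audit-record shape."""
    reasons = []
    if rule.get("move_to_folder", "").lower() in SUSPICIOUS_FOLDERS:
        reasons.append("moves mail to a rarely-viewed default folder")
    if {k.lower() for k in rule.get("keywords", [])} & FINANCE_KEYWORDS:
        reasons.append("filters on finance keywords")
    if rule.get("mark_as_read"):
        reasons.append("marks matches as read, hiding them from the user")
    return reasons

# The 14:46 rule from the narrative, expressed in this toy shape:
rule = {"move_to_folder": "RSS Feeds",
        "keywords": ["invoice", "payment", "PO"],
        "mark_as_read": True}
reasons = score_inbox_rule(rule)
```

Each reason maps to one element of the attacker's tradecraft in the paragraph above; a rule that trips all three is worth escalating on its own.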

Twenty-three minutes in, at 14:54, the attacker finds what they came for — a pending payment to a supplier. At 14:58 they draft a reply asking the supplier to update the remittance bank details before this week's payment run. The email sends from Claire's real mailbox, through Exchange Online's normal infrastructure, to the supplier's real contact. No malware. No attachment. A perfectly written reply from a compromised real-user account about a real business transaction.

That's the cloud side of the attack. In ninety minutes the supplier will reply with the new bank details. In forty-eight hours the payment run will clear.

Meanwhile, something else has been happening on Claire's laptop.

The endpoint side of the same attack

The endpoint thread runs in parallel with the cloud thread — not after it. If your investigation treats them sequentially, the second half of the attack completes while you're writing up the first half.

The phishing page didn't just harvest Claire's credentials. It included a secondary payload — a small JavaScript hook that, when Claire's browser rendered the page, triggered an automatic download of a file named Payment_Confirmation_Q4.exe into her Downloads folder. Modern browsers warn about this. Claire clicked through the warning because the page context told her she needed the file.

Claire didn't execute the file. The attacker is patient. At 15:06, roughly ten minutes after the BEC reply to the supplier went out, the attacker sends a follow-up email from Claire's own compromised account to Claire herself, subject "RE: Urgent — please review this before 15:00," asking her to open the file she downloaded earlier. Claire, seeing an email apparently from herself about something urgent, opens the file.

At 15:09 the loader executes. It downloads a second-stage Cobalt Strike beacon that loads reflectively into the memory of explorer.exe and never touches disk as a standalone file. The loader exits. The beacon stays resident inside a legitimate process. From here, the incident is no longer purely a cloud incident. The attacker has a foothold on an endpoint.

At 15:10 the beacon phones home to a domain-fronted C2. At 15:14 it touches LSASS and harvests Claire's cached NTLM hash, along with hashes for two service accounts that had logged into this workstation within the last twenty-four hours. At 15:22 the attacker clears the Security and System event logs — a step that leaves its own evidence (the Event Log Service clearing event, 1102, is itself logged) but removes most of what was in the logs before.
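That 1102 event is the anchor for reconstructing what the clearing destroyed: it timestamps the loss of coverage and names the account that did it. A small sketch of pulling clear events out of parsed records; the record shape here is a simplified stand-in invented for this example, not real EVTX output.

```python
def find_log_clears(events):
    """Return (time, actor) for every event ID 1102, 'the audit log was
    cleared'. Everything logged before each hit should be treated as lost
    and recovered from other sources (Sysmon, EDR telemetry, memory)."""
    return [(e["time"], e.get("actor", "unknown"))
            for e in events if e["event_id"] == 1102]

# Illustrative records: routine activity, then the 15:22 clear from the narrative.
events = [
    {"event_id": 4624, "time": "15:05", "actor": "NE\\claire"},
    {"event_id": 4688, "time": "15:09", "actor": "NE\\claire"},
    {"event_id": 1102, "time": "15:22", "actor": "NE\\claire"},
]
clears = find_log_clears(events)
```

The design point is that the clear event bounds your evidence window: anything you need from before 15:22 has to come from a source the attacker didn't wipe.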

At 15:28 the attacker uses one of the harvested hashes to NTLM-relay into a file server. At 15:41 they're enumerating a folder called Projects\Q4-Financials\Draft-Forecasts\. At 15:52 they create a RAR archive. At 16:01 the archive egresses over HTTPS to a cloud storage endpoint that most egress controls will read as legitimate traffic.

From click to data exfiltration: ninety minutes. Four environments. One attacker. The BEC fraud completed in thirty minutes. The endpoint compromise was still developing hours later.
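The four threads above only make sense when merged onto one clock. A sketch of that merge follows, using illustrative event tuples keyed to the timestamps in Figure IR0.1; the tuple shape and source labels are assumptions of this example, not any tool's output format.

```python
from datetime import datetime

# Hypothetical per-environment extracts, one list per evidence plane.
cloud_mail = [("14:46", "exchange", "inbox rule created"),
              ("14:58", "exchange", "BEC reply sent")]
identity   = [("14:36", "entra",    "session replay sign-in"),
              ("15:18", "entra",    "OAuth consent granted")]
endpoint   = [("15:09", "endpoint", "loader executes"),
              ("15:14", "endpoint", "LSASS accessed")]
lateral    = [("15:28", "lateral",  "NTLM relay to file server"),
              ("16:01", "lateral",  "exfil to C2")]

def unified_timeline(*streams):
    """Merge per-environment event streams into one chronological incident
    timeline, the cross-plane view this section argues for."""
    merged = [event for stream in streams for event in stream]
    return sorted(merged, key=lambda e: datetime.strptime(e[0], "%H:%M"))

timeline = unified_timeline(cloud_mail, identity, endpoint, lateral)
```

Read the merged output top to bottom and the interleaving is obvious: identity compromise, then mailbox abuse, then endpoint execution, then lateral movement, with no single stream showing more than a quarter of the story.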

What a one-sided investigation looks like

This is where most investigations go wrong. Notice the specific evidence that each one-sided investigation misses and what that means for containment.

Suppose the alert that lands in your queue is the Entra ID Protection alert for the 14:42 session replay — low severity, flagged hours late because risk scoring took time to catch up. The analyst who picks it up investigates the cloud side. They find the anomalous sign-in, confirm the AiTM indicators (unusual user-agent, token replay pattern), identify the inbox rule, pull UnifiedAuditLog for the mailbox, confirm the BEC reply to the supplier. They recommend containment: revoke sessions, disable the inbox rule, force a password reset, notify the supplier of the fraud.

That's the cloud investigation. It's accurate. The containment plan is correct for what it covers.

What it misses: the loader on disk, the beacon in memory, the harvested hashes, the pivot to the file server, the staged archive, the exfiltrated data. Three weeks later, the attacker uses the service account hash they still hold (Claire's password reset didn't affect the service account) to authenticate to a different workstation, re-establish a beacon, and resume operations. The incident recurs. The post-mortem blames the password reset for not being broad enough — when the real failure was investigating only the cloud surface of a cross-plane attack.

The mirror case is an endpoint-led investigation. Suppose the alert is a Defender for Endpoint "Suspicious activity in LSASS" detection at 15:14. The analyst who picks it up investigates the endpoint. They find the loader, the reflective beacon, the LSASS access, the log clearing. They isolate the workstation, rebuild it, rotate Claire's password.

What they miss: the AiTM token theft that preceded the endpoint compromise by forty minutes, the inbox rule that is still filing invoice emails where the attacker can read them, the OAuth app the attacker consented to at 15:18 that survives any session revocation. Next week the supplier sends the new bank details back to Claire's mailbox. The rule files them in RSS Feeds. The attacker reads them. The payment goes out. The endpoint investigation was accurate; the containment plan was correct for what it covered; and half the attacker stayed in the environment.

What's changed in the last three years

Modern incidents look the way the NE example looks because attacker tradecraft changed. A few numbers matter for how you investigate.

The identity attack picture in 2025 is not the one you trained against in 2022. Microsoft has reported a 146% rise in adversary-in-the-middle phishing attacks over the previous year, with token theft detections reaching roughly 39,000 per day at the time of their 2024 Digital Defense Report. Identity-based attacks rose another 32% in the first half of 2025. What hasn't changed is that password-based attacks still dominate raw volume. What has changed is that the successful attacks, the ones that produce incidents, are increasingly session hijacking and post-authentication abuse against accounts that already have MFA.

For cloud-specific compromises, Mandiant's M-Trends 2026 reporting identifies voice phishing as the most common initial vector (23% of cloud intrusions), followed by third-party compromise (17%), stolen credentials (16%), email phishing (15%), and insider threats (14%). Note what isn't there. On-premises vulnerability exploitation, the classic top initial-access vector globally, drops to 6% in cloud-specific attacks. The attack shape that reaches a Microsoft 365 tenant is different from the attack shape that reaches a perimeter VPN.

Global median dwell time rose to 14 days in 2025 from 11 days in 2024. Most of that increase is driven by long-tail espionage and identity-fraud operations (North Korean IT workers, for example, average 122 days of undetected presence). Ransomware dwell time, by contrast, dropped to a median of 9 days — because ransomware operators want to complete their objective quickly, and because the target of modern ransomware is no longer the data but the recovery capability. Current ransomware tradecraft systematically destroys identity providers, virtualisation management planes, and backup infrastructure before encrypting anything. The point is that by the time the encryption starts, there's nothing for you to roll back to.

One more data point worth internalising. The handoff time between initial-access brokers and the secondary threat groups they sell to has collapsed from a median of 8+ hours in 2022 to a median of 22 seconds in 2025. That means the alert your team categorises as "probably initial access, not urgent" is often, by the time it's triaged, already the secondary group's entry point into the environment.

The point of these numbers is not to scare you. It's to explain why the investigation approach this course teaches looks the way it does. Investigations have to be faster because detection-to-handoff is faster. Investigations have to cross the cloud-endpoint boundary because the attack shape does. Investigations have to preserve volatile evidence early because the retention windows are short and the attacker's log-clearing is fast.

Why cross-plane matters mechanically, not just philosophically

The argument for cross-plane investigation isn't an aesthetic preference. It's mechanical — the evidence you need to answer the question "what did the attacker do" lives in different places depending on where they did it.

Each of the four environments in Figure IR0.1 holds a specific kind of evidence that doesn't exist anywhere else. Exchange Online holds message content, mailbox audit trails, and transport metadata — nowhere else can you see what the attacker read, what inbox rules they created, or what emails they sent. Entra ID holds authentication records, conditional access evaluations, token issuance, and OAuth consent — nowhere else can you see how the session was obtained, which MFA method was satisfied, and which apps were granted what permissions. The Windows endpoint holds process execution, file system activity, registry changes, and memory state — nowhere else can you see what code ran, what it loaded, what it wrote, or what it's still running. The lateral surface — domain controllers, file servers, backup infrastructure — holds authentication events, share access, and administrative activity that let you trace the attacker's movement beyond the first endpoint.
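The mapping in the paragraph above can be carried into an investigation as a working checklist. A sketch follows, with the source names copied from the text; the dictionary structure and the `coverage_gaps` helper are assumptions of this example, scaffolding rather than any product's schema.

```python
# The four-environment evidence map from this section, expressed as data.
EVIDENCE_MAP = {
    "exchange_online":  ["MailItemsAccessed", "UnifiedAuditLog",
                         "MessageTrace", "inbox rules"],
    "entra_id":         ["SigninLogs", "AuditLogs", "IdentityRiskEvents",
                         "CA evaluation logs"],
    "windows_endpoint": ["Prefetch", "AmCache", "$MFT", "$UsnJrnl",
                         "event logs", "registry", "memory", "Sysmon"],
    "lateral_servers":  ["4624/4648/4672", "WinRM/WMI logs",
                         "Sysmon network", "DC audit", "share audit"],
}

def coverage_gaps(examined):
    """Return the environments an investigation has not touched at all:
    the blank columns a one-sided investigation leaves behind."""
    return sorted(env for env, sources in EVIDENCE_MAP.items()
                  if not set(sources) & set(examined))

# A cloud-only investigation, per the one-sided example earlier:
gaps = coverage_gaps(["SigninLogs", "UnifiedAuditLog", "inbox rules"])
```

Running the helper against a cloud-only evidence list reports the endpoint and lateral columns as untouched, which is precisely the scoping gap the one-sided investigations above failed to document.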

No single environment is sufficient. Exchange Online tells you the BEC reply was sent but not where the credentials came from. Entra ID tells you the session was hijacked but not what the attacker did with mailbox access. The endpoint tells you the beacon is resident but not which identity the attacker is using to log in elsewhere. The lateral surface tells you a file server was touched but not how the attacker got there.

Cross-plane investigation isn't a philosophy. It's what the evidence model requires. The containment decisions you make at the end — which sessions to revoke, which accounts to reset, which endpoints to isolate, which OAuth apps to purge — are only as good as the evidence you collected across all four environments.

The hard question this course prepares you for

Most of the course is about the techniques that answer this question. The question itself is short.

When the incident lands on your desk, you have to answer: what did the attacker actually do, and what do I contain first? That is the work. Everything else — tools, procedures, queries, artifacts — is scaffolding that makes the answer possible.

This course teaches the reasoning that produces the answer (IR0.2), the vocabulary you'll need to describe it to auditors and regulators (IR0.3), and the tools you'll use to extract it from the environment (IR0.4 and the rest of the course). The nineteen modules after IR0 walk through the techniques against realistic evidence from the same Northgate Engineering environment this narrative introduced. The incident changes from module to module. The four-environment, cross-plane investigation pattern doesn't.

Decision Point

The situation. It's 15:15. The Entra ID Protection alert from 14:42 just reached your queue after risk-scoring delay. You've confirmed the session hijack in SigninLogs, confirmed the inbox rule in UnifiedAuditLog, confirmed the BEC reply sent to the supplier. The endpoint has not been examined. The CISO is on the phone asking what to contain.

The choice. You can (a) recommend cloud-side containment only — revoke sessions, disable the inbox rule, force a password reset — while the endpoint investigation continues in parallel, (b) hold all containment until the endpoint picture is clear so containment is not piecemeal, (c) recommend full containment across both planes on the current cloud-side evidence, including endpoint rebuild, or (d) isolate the endpoint immediately, contain cloud-side on the current evidence, and document the scoping gaps in the investigation record.

The correct call. (d). The cloud evidence is strong enough to justify cloud-side containment now — a stolen session and an attacker-created inbox rule are not ambiguous findings, and leaving the session live while you investigate gives the attacker more time in the mailbox. Endpoint isolation is justified on a lower bar because endpoint isolation is reversible — worst case you isolate a clean endpoint for an hour and restore it. Option (b) holds containment while the BEC fraud continues. Option (a) leaves the endpoint attacker in place if the endpoint is compromised, which for AiTM-triggered incidents is more likely than not. Option (c) is defensible but heavier than needed — isolation gets you the protective effect of a rebuild without committing to the rebuild work.

The operational lesson. In a cross-plane incident, partial containment based on the evidence you actually have is almost always correct. Waiting for complete scope before containing gives the attacker working time. Over-containing commits you to remediation work that may not be necessary. The responder's judgment is calibrating how much containment the current evidence supports, and documenting the scoping gaps so the next investigation hour is focused on closing them.

Compliance Myth: "Our managed SOC handles investigations, so we don't need deep in-house IR skills"

The myth. The MSSP retainer provides full IR coverage. The in-house team does triage, the MSSP investigates, we're covered.

The reality. MSSPs investigate based on their playbooks and their generic visibility into your environment. They don't know that the affected user is your CFO, that the IP they're about to allowlist is your VPN provider's exit node, or that the "anomalous" SharePoint access pattern is how your M&A team works on every deal. Your environmental context — who matters, what business processes look normal, which actions require executive sign-off — is what transforms a generic investigation into an accurate scoping and response. MSSPs handle technical investigation. They don't handle contextual judgment. That's the in-house responder's job, and without those skills in-house the MSSP investigates in a vacuum. The findings reach the CISO's desk without the judgment that makes them actionable.

Next

IR0.2 — How investigators think. You've seen the incident shape. IR0.2 gives you the reasoning pattern that turns evidence into answers — hypothesis-driven investigation, the five-step chain experienced responders run on every artifact, and the three-statement discipline that prevents the most common investigation error. This is the mental model the rest of the course applies.

Try it: map a real incident you've worked across the four environments

Pick the most recent real investigation from your experience and classify it column by column

Setup. Pick the most recent real investigation you've worked — one where you had visibility of both cloud and endpoint, or one where you wished you had. If you haven't worked a cross-plane incident yet, use a tabletop or a write-up from a vendor blog. You don't need tooling for this exercise, only memory or notes.

Task. For each of the four environments in Figure IR0.1 (Exchange Online, Entra ID, Windows endpoint, lateral + servers), write down three things: (1) what attacker activity likely occurred in that environment, (2) what evidence you examined, and (3) what evidence you didn't examine. Take five minutes per column, twenty minutes total.

Expected result. A sheet of four columns. Most responders, doing this honestly, discover that their investigation examined one column thoroughly, touched a second, and left the other two blank. That pattern (thorough in one, touched in a second, blank in the other two) is the exact skill gap this course is built to close. If your investigation covered all four columns equally, either your past case was narrower than the scenario above, or your organization's visibility is exceptional. Good. This course will still add depth to each column.

Debugging branch. If you can't fill in what evidence "likely existed" in a column you didn't examine, that's the signal that the column is the one to learn first. Note it. It'll tell you where your most productive hours in the course are. If you can't distinguish between "we didn't look" and "we had no visibility," that's a question for IR2 (evidence acquisition) — there are sources you can enable now that will exist for the next investigation.

Checkpoint — before moving on

You should be able to do the following without looking back at the text. If you can't, the sections to re-read are noted.

1. Without looking at Figure IR0.1, name the four environments a modern Microsoft-stack incident touches and the one kind of evidence that lives in each and nowhere else. (see: Why cross-plane matters mechanically)
2. Explain in one sentence why an investigation that thoroughly covers two environments and ignores the other two typically produces a containment plan the attacker routes around, referencing a specific example from the NE scenario. (see: What a one-sided investigation looks like)
3. State in plain English — the version you'd use with your CISO — why partial containment based on current cross-plane evidence is usually better than waiting for complete scope before containing. (see: Decision Point)