TH0.2 The Dwell Time Gap

3-4 hours · Module 0 · Free
Operational Objective
Dwell time is the number of days between initial compromise and detection. It is the metric that determines whether an intrusion is a containable incident or an organizational catastrophe. This subsection maps what attackers accomplish at each stage of undetected access in an M365 environment — and establishes why compressing dwell time through proactive hunting is the highest-leverage investment a security operation can make.
Deliverable: The ability to calculate your organization's dwell time baseline from Sentinel incident data, articulate the cost of each additional day of undetected access, and use dwell time data to justify hunting investment to leadership.
⏱ Estimated completion: 25 minutes

Ten days

Mandiant’s M-Trends 2024 report — the industry benchmark for intrusion statistics — reported a global median dwell time of 10 days. That number has improved dramatically from the 416-day median reported a decade ago. Security operations have gotten better. Detection has gotten faster. But 10 days is the median, which means half of all investigated intrusions had dwell times longer than that. And the number only includes intrusions that were eventually detected. The ones nobody found are not in the dataset.

Ten days does not sound catastrophic. It is. Here is what a competent attacker accomplishes in a Microsoft 365 environment in ten days of undetected access.

Hours 0–24: the persistence window

The attacker has a valid session — obtained through AiTM phishing, credential stuffing, an access broker purchase, or a compromised partner account. Their first priority is not data theft. It is survival. They need to ensure that when someone resets the compromised password or revokes the session, they can get back in.

In the first 24 hours, a competent M365 attacker typically does four things.

They register a new MFA method. An authenticator app registration, a phone number addition, or a FIDO key enrollment on the compromised account. This is a normal user operation. Entra ID logs it in AuditLogs as “User registered security info.” It generates no alert in most environments because users register MFA methods every day — new employees, device replacements, app reinstalls. The attacker’s registration is invisible in the noise.
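Finding these registrations is a matter of pulling them out of the noise and correlating them with other signals. A minimal sketch in KQL, assuming the Entra ID audit logs flow into Sentinel's AuditLogs table (the operation name matches current Entra ID logging; verify against your connector version):

```kusto
// Hunt: new MFA method registrations in the last 30 days
// Every row here is probably legitimate - the value is in correlating
// registrations with first-seen IPs or risky sign-ins on the same account
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "User registered security info"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName)
| summarize Registrations = count(), Methods = make_set(ResultDescription)
    by Actor, bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```

The output is a baseline, not an alert list: pivot any registration that lands within hours of an anomalous sign-in for the same account.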

They create inbox rules. Rules that redirect emails containing “password reset,” “security alert,” “suspicious,” “unauthorized,” or “verify your identity” to Deleted Items, RSS Feeds, or a hidden folder. When the security team sends a “was this you?” email or Entra ID sends a sign-in notification, the legitimate user never sees it. The attacker bought time. This is the same technique the IR course covers in Module 13 (AiTM) and Module 14 (BEC) — from the investigation side. From the hunting side, TH5 teaches you to find these rules proactively.
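A hedged sketch of that proactive rule hunt, assuming the Office 365 connector's OfficeActivity table (the keyword and action lists are illustrative starting points, not a complete set):

```kusto
// Hunt: inbox rules that hide security-related mail
OfficeActivity
| where TimeGenerated > ago(30d)
| where Operation in ("New-InboxRule", "Set-InboxRule", "UpdateInboxRules")
| extend Params = tostring(Parameters)
// Rule matches security-notification keywords...
| where Params has_any ("password", "security", "suspicious",
    "unauthorized", "verify")
// ...and deletes or hides the message
| where Params has_any ("DeleteMessage", "MoveToFolder", "Deleted Items", "RSS")
| project TimeGenerated, UserId, Operation, ClientIP, Params
```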

They consent to an OAuth application. An app with Mail.ReadWrite and Files.ReadWrite.All that accesses data through the Graph API without requiring the user’s password. This access path survives password resets. It survives session revocation. It survives MFA re-enrollment. The app authenticates with its own credentials, not the user’s. Most organizations do not monitor OAuth consent events. TH6 covers this in detail.
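A starting point for surfacing those consents in KQL, assuming Entra ID audit logs in the AuditLogs table (the permission strings checked here are the two named above; a production hunt would cover a broader scope list):

```kusto
// Hunt: OAuth consents granting mail or file scopes
AuditLogs
| where TimeGenerated > ago(30d)
| where OperationName == "Consent to application"
| extend Actor = tostring(InitiatedBy.user.userPrincipalName),
         App = tostring(TargetResources[0].displayName)
// The granted permissions appear inside the TargetResources payload
| where tostring(TargetResources) has_any ("Mail.ReadWrite", "Files.ReadWrite.All")
| project TimeGenerated, Actor, App, Result
```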

They enumerate the directory. Who has Global Admin? What groups control access to sensitive resources? What conditional access policies are enforced? Where are the gaps? This enumeration uses standard Graph API calls — the same calls that legitimate applications make thousands of times per day. The attacker now has a map of the environment and knows exactly where the security controls are strong and where they are weak.
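Enumeration can sometimes be surfaced by volume rather than by individual calls. A sketch, assuming the optional MicrosoftGraphActivityLogs table is enabled (it is high-volume and off by default, and the threshold below is illustrative, not tuned):

```kusto
// Hunt: one identity sweeping many directory-related Graph endpoints
MicrosoftGraphActivityLogs
| where TimeGenerated > ago(7d)
| where RequestUri has_any ("/users", "/groups", "/directoryRoles",
    "/policies/conditionalAccessPolicies")
| summarize DistinctEndpoints = dcount(RequestUri), Requests = count()
    by UserId, bin(TimeGenerated, 1h)
| where DistinctEndpoints > 50
// Baseline this threshold against your own tenant before treating hits as leads
```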

None of this generated a high-confidence alert. Every action was a legitimate M365 operation. The attacker is now persistent, informed, and invisible.

Days 2–5: the reconnaissance phase

With persistence established, the attacker shifts to understanding what this environment contains and where the value is.

They read email. Not random email — targeted email. Financial conversations, executive communications, vendor contracts, M&A discussions, customer lists, intellectual property. They search for keywords: “wire transfer,” “bank details,” “confidential,” “board,” “acquisition.” In a BEC operation, they are looking for an active financial transaction they can intercept. In a data theft operation, they are building a target list. In an espionage operation, they are reading everything from specific executives.

They explore SharePoint and OneDrive. Document libraries with sensitive data. Engineering specifications. Customer databases. HR files. Financial reports. The exploration uses normal file access APIs — the same APIs that users and applications use legitimately. A detection rule that alerts on “user accessed SharePoint” would fire thousands of times per hour. The attacker’s access is statistically identical to legitimate access unless you analyze the pattern: a single user accessing dozens of document libraries across multiple sites within a few hours is not normal behavior, even if each individual access event looks routine.
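That pattern analysis can be expressed directly: instead of alerting on individual accesses, count distinct sites per user per window. A sketch against OfficeActivity (the 4-hour window and the threshold of 10 sites are illustrative assumptions to baseline, not tuned values):

```kusto
// Hunt: one user touching unusually many SharePoint sites in a short window
OfficeActivity
| where TimeGenerated > ago(7d)
| where Operation in ("FileAccessed", "FilePreviewed")
| where isnotempty(Site_Url)
| summarize Sites = dcount(Site_Url), Files = dcount(OfficeObjectId)
    by UserId, bin(TimeGenerated, 4h)
| where Sites > 10
| order by Sites desc
```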

In hybrid environments, they probe the boundary between cloud and on-premises. Can the compromised cloud credentials access the VPN? Is Azure AD Connect synchronizing passwords bidirectionally? Are there pass-through authentication agents that could be exploited? The pivot from cloud to on-premises — or from on-premises to cloud — crosses a monitoring boundary that many SOCs have not bridged. The cloud team monitors cloud. The infrastructure team monitors on-prem. The attacker moves between them.

Days 5–10: objective execution

By day five, the attacker has persistent access through multiple channels, a map of the environment, and a list of valuable targets. Now they execute.

For BEC operators, this is when they insert themselves into financial conversations. They have been reading invoice threads for days. They know the vendor names, the payment amounts, the approval chains. They send an email — from the compromised account or from a lookalike domain — with updated bank details. Or they create a forwarding rule that copies incoming invoices to an external address, modify the bank details, and forward the modified version to the finance team. The FBI’s IC3 reported $2.9 billion in BEC losses in 2023. Every dollar of that loss required dwell time — the attacker needed days inside the mailbox to understand the organization’s financial processes well enough to execute the fraud convincingly.

For data theft operators, this is the exfiltration window. They download SharePoint document libraries using sync or bulk download. They export mailbox contents. They use OneDrive sync to copy files to an external device. The exfiltration uses Microsoft’s own services — SharePoint download APIs, OneDrive sync, email forwarding. Network-level monitoring sees traffic to Microsoft-owned domains on standard ports. There is nothing to block at the network level because the destination is legitimate. The use of the destination is not.
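Because the network layer sees nothing abnormal, the hunt has to run on the audit layer: download volume per user per hour. A sketch, again assuming OfficeActivity, with an illustrative threshold you should replace with your own baseline:

```kusto
// Hunt: bulk download volume that network monitoring cannot flag
OfficeActivity
| where TimeGenerated > ago(7d)
| where Operation in ("FileDownloaded", "FileSyncDownloadedFull")
| summarize Downloads = count(), Sites = dcount(Site_Url)
    by UserId, bin(TimeGenerated, 1h)
| where Downloads > 500
| order by Downloads desc
```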

For ransomware affiliates, days 5–10 are staging. They have already escalated privileges, mapped the Active Directory structure, identified backup systems, and begun disabling or corrupting backup processes. The encryption event — the moment the organization discovers the intrusion — is the last step, not the first. By the time the ransom note appears, every domain-joined system is compromised and the backups may already be destroyed. The IR course covers this in Module 13 (BlackSuit ransomware). The hunting course covers the pre-encryption indicators in TH12 — the window where you can still stop it.

Dwell time: what the attacker accomplishes

Day 1 (Persistence): MFA method registered; inbox rules created; OAuth app consented; directory enumerated. Remediation: hours.

Day 2–5 (Reconnaissance): email triage and keyword search; SharePoint/OneDrive mapping; privilege and group discovery; hybrid boundary probing. Remediation: days.

Day 5–10 (Execution): BEC wire fraud; data exfiltration; ransomware staging; impersonation campaigns. Remediation: weeks.

Day 10+ (Entrenchment): identity infrastructure; backdoor accounts; complete data access; dormant persistence. Remediation: months.

Hunting intervenes in the first two phases, before the attacker achieves their objective.

Figure TH0.2 — Attacker progression over dwell time. Remediation cost compounds non-linearly. Hunting finds the attacker in the persistence or reconnaissance phase — before BEC fraud, before data exfiltration, before ransomware encryption.

Beyond day 10

Dwell time cost does not increase linearly. It compounds.

Day 1: one compromised account, one or two persistence mechanisms. Reset the password, revoke the sessions, check for inbox rules and OAuth apps. A few hours of work.

Day 10: multiple compromised accounts, persistence through OAuth and MFA registration and inbox rules, data accessed or exfiltrated from multiple sources, possible hybrid pivot. Full investigation across all evidence sources. Days of remediation work. Possible regulatory notification.

Day 90: the attacker may have compromised the identity infrastructure itself. Azure AD Connect, federation services, certificate authorities. They may have created backdoor accounts that survive a tenant-wide password reset. They may have exfiltrated the organization’s most sensitive data across multiple channels. Full remediation may require rebuilding identity infrastructure from scratch. External IR engagement. Regulatory notification certain. The incident response effort measured in months.

Nation-state operators and advanced ransomware affiliates deliberately maximize dwell time. A 90-day dwell time is not a detection failure they stumbled into — it is an operational objective. The longer they remain undetected, the deeper their access and the more complete their objectives. The only countermeasure that compresses dwell time for these adversaries is proactive hunting — going looking for evidence of their presence before any rule fires.

How hunting compresses dwell time

When you run the authentication anomaly hunt from TH4 against the last 30 days of sign-in logs, you are searching for compromised accounts that entered your environment during that window and were not detected by any rule. If the hunt identifies a compromised account at day 3 of the attacker’s access, you have compressed the dwell time from whatever-it-would-have-been (10 days? 30? 90?) to 3 days.

At day 3, the attacker has persistence but has not executed their objective. Remediation is containable — revoke sessions, reset credentials, remove the inbox rules, revoke the OAuth consent, check for lateral movement. A few hours of focused incident response.

At day 30, the same attacker may have completed a $47,000 wire fraud, exfiltrated 200 GB of customer data, or positioned ransomware across every endpoint. Remediation now involves forensic investigation, legal consultation, regulatory notification, customer notification, and potentially an external IR retainer engagement.

Hunting did not prevent the initial compromise. It compressed the window between compromise and discovery — and that compression is the difference between a minor incident and a major breach.

Measure your own dwell time

Your Sentinel incident data contains the evidence. This query approximates your median dwell time by measuring the gap between the earliest evidence of attacker activity and the moment the incident was created:

```kusto
// Your dwell time baseline: measure it, do not assume it
SecurityIncident
| where TimeGenerated > ago(180d)
| where Status == "Closed"
// SecurityIncident logs one row per incident update - keep only the
// latest row per incident so each intrusion is counted once
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| extend EarliestEvidence = FirstActivityTime
| extend IncidentCreated = CreatedTime
| where isnotempty(EarliestEvidence)
| extend DwellDays = datetime_diff(
    'day', IncidentCreated, EarliestEvidence)
// Days between first attacker activity and incident creation
| where DwellDays >= 0 and DwellDays < 365
// Filter obvious data quality issues
| summarize
    MedianDwell = percentile(DwellDays, 50),
    P75Dwell = percentile(DwellDays, 75),
    P90Dwell = percentile(DwellDays, 90),
    IncidentCount = count()
// P90 is the number that matters: the long-tail intrusions
// where the attacker had extended access
```

This is a lower bound. It only includes intrusions your rules eventually caught. The intrusions that no rule detected — the ones currently active in your environment — are not in this dataset. The difference between this measured dwell time and the true dwell time (including undetected intrusions) is the hunting opportunity.

If you have fewer than 20 closed incidents in 180 days, that is itself informative. Either your environment is rarely targeted (unlikely for any M365 tenant), or your detection layer is not generating incidents for compromises that are occurring. Both possibilities strengthen the case for hunting.
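A quick sanity check for that incident count, deduplicating the per-update rows that SecurityIncident emits so each incident is counted once:

```kusto
// How many distinct closed incidents feed the baseline at all?
SecurityIncident
| where TimeGenerated > ago(180d) and Status == "Closed"
| summarize ClosedIncidents = dcount(IncidentNumber)
```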

Try it yourself

Exercise: Calculate your dwell time baseline

Run the query above against your Sentinel workspace. Record three numbers: median (P50), 75th percentile (P75), and 90th percentile (P90).

The P90 should concern you most. It represents the long-tail — the intrusions where the attacker had extended undetected access. If your P90 is above 30 days, your detection layer has a significant responsiveness gap that hunting directly addresses.

Compare your median to Mandiant's latest M-Trends benchmark (10 days global median in the 2024 report). If yours is higher, your detections are slower than industry average. If yours is lower, your detection engineering is effective for the threats it covers — but the undetected intrusions (the ones not in this dataset) may have dwell times far longer.

You will use this baseline in TH14 (Building a Hunt Program) to measure whether hunting is compressing dwell time over time.

⚠ Compliance Myth: "Our MTTR is under 4 hours — our detection capability is strong"

The myth: Fast mean time to respond (MTTR) proves the SOC is effective. If we respond quickly to incidents, we are detecting threats effectively.

The reality: MTTR measures how fast you respond after an alert fires. It says nothing about how long the attacker was present before the alert fired. A SOC with a 2-hour MTTR and a 30-day median dwell time responds quickly to incidents it eventually detects — but the attacker had 30 days of unmonitored access before that response began. MTTR without dwell time is half the picture. The metric that matters is mean time to detect (MTTD), measured from the earliest attacker activity to first detection. MTTD can only be reduced by better detection rules or proactive hunting.
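Both halves of the picture can come from the same incident data. A sketch that puts median MTTR and median detection delay side by side (assuming the SecurityIncident columns used above; FirstActivityTime may be empty for some alert products, which this filters out):

```kusto
// MTTR (response speed) next to detection delay (dwell) on the same incidents
SecurityIncident
| where TimeGenerated > ago(180d)
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status == "Closed"
    and isnotempty(FirstActivityTime) and isnotempty(ClosedTime)
| extend RespondHours = datetime_diff('hour', ClosedTime, CreatedTime),
         DetectDays  = datetime_diff('day', CreatedTime, FirstActivityTime)
| summarize MedianMTTRHours = percentile(RespondHours, 50),
            MedianDetectDays = percentile(DetectDays, 50),
            Incidents = count()
```

If MedianMTTRHours is low while MedianDetectDays is high, that is the gap this subsection describes: fast response to slow detection.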

The intrusions you will never measure

There is a category of intrusion that never appears in any dwell time statistic: the ones where the attacker achieved their objective and left without being detected at all. The BEC operator who intercepted one wire transfer and disappeared. The data theft operator who exfiltrated a customer database and sold it on a dark web marketplace. The competitor who read executive emails about an upcoming acquisition and used the intelligence commercially.

These intrusions are discovered months or years later — if ever — through downstream consequences: the customer data appears in a breach notification from another source, the wire transfer is flagged during an audit, the competitor’s suspiciously well-timed market move prompts an investigation. By then, the forensic evidence may be beyond your log retention window.

Hunting does not guarantee you will find these intrusions. But it is the only operational activity that looks for them. Detection rules wait for a pattern. Hunting goes looking. The intrusions you never discover are the ones you never looked for.

Extend this analysis

Dwell time benchmarks vary by industry and region. Financial services organizations typically report shorter dwell times than healthcare or manufacturing — regulatory pressure and security investment explain much of the difference. When presenting dwell time data to leadership, use sector-specific benchmarks from the Mandiant M-Trends report, the CrowdStrike Threat Hunting Report, or the IBM X-Force Threat Intelligence Index. A 15-day median in healthcare may be above average for that sector; the same number in financial services would be below. Context determines whether the number is alarming or acceptable — and hunting is justified in both cases because the undetected intrusions exist regardless of the benchmark.

