
OD1.6 Active Reconnaissance — Probing Without Being Caught

6-8 hours · Module 1 · Free
What you already know

You've seen port scans in your firewall logs. You've investigated brute-force alerts and credential-spray detections. This sub explains why those detections catch the amateur but miss the professional — and why the gap isn't a technology problem. It's a threshold problem that the attacker understands better than you do.

Operational Objective

Your detection rules for active reconnaissance use thresholds: five failed logins in five minutes, a hundred ports scanned in ten seconds. The attacker knows your thresholds and designs their probing to stay just below. The result: active reconnaissance that generates log entries you can see but alerts you never receive.

This sub walks through the specific probing techniques professional attackers use, shows why your current thresholds miss them, and teaches you how to build detection that works against distributed, low-and-slow reconnaissance — including AI-assisted analysis of spray patterns.

Learning Objectives

By the end of this sub you will be able to:

  • Explain the threshold gap — why per-entity detection rules (per account, per IP) fail against distributed reconnaissance that stays below every individual threshold while being clearly visible in aggregate. The same gap allowed Midnight Blizzard's password spray against Microsoft's corporate tenant to succeed — the spray was distributed across enough IPs and slow enough that no per-entity threshold fired. This matters because most SOCs have never tested whether their reconnaissance detection catches professional-grade distributed probing.
  • Detect low-and-slow password sprays using cross-account correlation over 7-day windows, including AI-assisted analysis of exported authentication failures. This matters because password spraying against M365 tenants is the single most common active reconnaissance technique in the current threat landscape, and standard brute-force rules miss it by design.
  • Assess your cloud enumeration resilience — whether user enumeration and spraying succeed but are unexploitable because phishing-resistant MFA makes the findings useless. This matters because you can't prevent cloud tenant enumeration (the APIs are public by design), so your defense must be resilience, not prevention.
ACTIVE RECONNAISSANCE — THE THRESHOLD GAP

YOUR DETECTION THRESHOLDS
  5 failed logins / account / 5 min → alert
  100 ports scanned in 10 sec → alert
  10 auth failures / IP / 5 min → block
  Designed for: brute force, fast scans

ATTACKER'S ACTUAL RATE
  1 attempt / account / hour → no alert
  1 port / target / minute → no alert
  Rotating IPs, 1 attempt each → no block
  Designed for: staying below every threshold

THE GAP: Your thresholds detect fast, noisy reconnaissance.
Professional reconnaissance is slow and distributed. Every event is sub-threshold.
The logs exist. The events are recorded. Your rules just don't correlate them.

Figure OD1.6 — The threshold gap. Your detection is calibrated for fast, noisy reconnaissance. Professional reconnaissance stays below every per-entity threshold. The events are in your logs — your rules don't connect them.


Your thresholds were designed for the wrong attacker

Per-entity thresholds catch the amateur. The professional distributes across enough entities that no single one triggers.

Every security product ships with reconnaissance detection thresholds: account lockout after five failed attempts, brute-force alerting after ten failures in five minutes, port scan detection after a hundred SYN packets in ten seconds. These catch the script kiddie running Hydra at maximum speed.

The professional attacker knows your thresholds. One attempt per account per hour, from a rotating pool of residential proxy IPs, with randomised timing. That's 24 passwords tested per account per day from IPs that change with every request. No individual account hits lockout. No individual IP triggers the rate limit. No individual event is anomalous.

But in aggregate, across your entire tenant, someone is testing 1,000 accounts at one password per hour. 1,000 failed authentications spread across 1,000 accounts and 1,000 IPs over 60 minutes. The aggregate pattern is a coordinated spray. Each individual event is indistinguishable from a user mistyping their password.

The attack is invisible to per-entity rules and visible only to cross-entity correlation over extended time windows.
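The gap is easy to demonstrate. A minimal sketch with synthetic data (the account and IP names are made up): per-entity counting sees nothing, while counting distinct failing accounts across the tenant sees the spray immediately.

```python
# Synthetic distributed spray: 1,000 accounts, one failure each, one IP each,
# all inside a single hour. The shape described above; all names are made up.
from collections import Counter

events = [{"account": f"user{i}", "ip": f"ip{i}"} for i in range(1000)]

THRESHOLD = 5  # e.g. five failures per account (or per IP) per window

per_account = Counter(e["account"] for e in events)
per_ip = Counter(e["ip"] for e in events)

# Per-entity rules: no single account and no single IP reaches the threshold.
assert max(per_account.values()) < THRESHOLD
assert max(per_ip.values()) < THRESHOLD

# Cross-entity correlation: count distinct failing accounts in the window.
distinct_failing_accounts = len(per_account)
print(distinct_failing_accounts)  # 1000 -- unmistakably coordinated
```

The same per-entity checks that clear every individual account and IP are exactly the checks your shipped detection rules run.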

Password spraying — the most common active cloud reconnaissance

The combination of breached email lists, common password patterns, and legacy authentication endpoints makes spraying operationally cheap and technically simple.

The attacker starts with valid email addresses (from passive reconnaissance — OD1.5), selects one password to test (Spring2026!, Company123!, Welcome1!), and configures their spray tool to attempt one authentication per account per day through residential proxies. After 24 hours, one password has been tested against every account. After a week, seven passwords. In a 500-account organisation, at least one user typically has a password matching the top 20 patterns.

The authentication endpoint matters. Smart attackers target legacy endpoints — Exchange ActiveSync, IMAP, POP3, Exchange Web Services — that may not trigger Entra ID risk policies, may not be subject to smart lockout, and may not produce the same sign-in log entries. If your organisation hasn't blocked legacy authentication, you have spray-vulnerable endpoints.

What the successful spray looks like. When the spray succeeds, there's no failed-login alert because the login succeeded. The sign-in log shows a successful authentication from an unfamiliar device and location. That's it. The alert, if one fires, is "unfamiliar sign-in properties" — medium severity.

Your detection needs two layers: cross-account correlation for the spray pattern (many accounts, one failure each, distributed IPs) and post-authentication anomaly detection for the successful credential (unfamiliar device + immediate discovery commands = compromised account).

Using AI to detect spray patterns. Export your failed authentication events for the past 7 days. Feed them into Claude:

Prompt for AI-assisted spray detection:

I'm analysing 7 days of failed authentication events from our M365
tenant to detect low-and-slow password spray patterns.

[Paste: timestamp, username, source IP, result code, app used]

Analyse for spray indicators:
1. Accounts with exactly 1 failure each from unique IPs — how many?
2. Are failures distributed evenly (hourly/daily) or clustered?
3. Do source IPs belong to residential proxy ranges or cloud hosting?
4. Pattern in which auth endpoint was used (legacy vs modern)?
5. Did any accounts subsequently have a successful login from an
   unfamiliar device within 48 hours of the failure?

Assessment: coordinated spray, organic typo noise, or inconclusive?

This analysis takes an analyst hours across thousands of events. The LLM identifies the cross-account pattern in minutes.
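The core indicators can also be scripted directly. A hypothetical sketch (the field names `user` and `ip` are assumptions about your export format, not a real schema) that computes the single-failure ratio from the prompt's indicator 1:

```python
# Hypothetical non-AI version of the same analysis. Field names ('user', 'ip')
# are assumptions about your export format, not a real log schema.
from collections import Counter

def spray_indicators(events):
    """Score a batch of failed-auth events for the distributed-spray shape."""
    per_user = Counter(e["user"] for e in events)
    per_ip = Counter(e["ip"] for e in events)
    single_failure_users = sum(1 for n in per_user.values() if n == 1)
    return {
        "accounts": len(per_user),
        "single_failure_accounts": single_failure_users,  # indicator 1
        "distinct_ips": len(per_ip),
        "max_failures_per_ip": max(per_ip.values()),
    }

# Synthetic week of spray: 200 accounts, one failure each, rotating IPs.
spray = [{"user": f"u{i}", "ip": f"ip{i}"} for i in range(200)]
stats = spray_indicators(spray)
print(stats)
# A high single_failure_accounts/accounts ratio with one failure per IP is
# the coordinated-spray signature; organic typos cluster per user and per IP.
```

The LLM's value is in the judgment calls the script can't make: proxy-range attribution, timing-distribution assessment, and the overall coordinated-vs-organic verdict.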

Network probing — the slow scan you never detect

One port per target per minute, from rotating IPs, over three days. Your firewall logs every connection. No threshold fires.

Fast scans are trivially detected. Slow scans — one port per target per minute, hundreds of targets, rotating source IPs, randomised timing — take days instead of seconds. Each individual connection attempt is indistinguishable from a legitimate client.

Over three days, the attacker identifies every open port on every external IP, captures service banners revealing software versions, identifies TLS certificates revealing internal hostnames, and finds the forgotten development server that IT deployed two years ago.
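The timing works out as a back-of-envelope calculation (illustrative only, assuming probes against one target are serialised at the stated rate while targets are scanned in parallel):

```python
# Illustrative arithmetic for the slow scan: one port per target per minute,
# targets probed in parallel, so wall-clock time scales with ports per target.
def slow_scan_hours(ports_per_target, minutes_between_probes=1.0):
    return ports_per_target * minutes_between_probes / 60

top_1000 = slow_scan_hours(1000)
print(round(top_1000, 1))  # 16.7 -- under a day for the top 1,000 ports
# Randomised timing (e.g. 1-5 minute jitter between probes) stretches this to
# the multi-day window above, which is why 72-hour correlation is required.
```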

Defensive translation: volume-based detection catches fast scans. Time-window correlation catches slow scans if the window is long enough — 72 hours minimum. The trade-off: longer windows produce more false positives from legitimate scanners (Shodan, Censys, crawlers). Baseline your normal external scanning traffic and alert on deviations from baseline.

KQL for slow-scan detection:

// Detect distributed port scanning correlated over a full 72-hour window.
// Note: no hourly bin -- summarising per hour would re-create the short
// per-window threshold that slow scans are designed to stay below.
CommonSecurityLog
| where TimeGenerated > ago(72h)
| where DeviceAction in ("Deny", "Drop")
| summarize
    DistinctPorts = dcount(DestinationPort),
    DistinctSourceIPs = dcount(SourceIP),
    TotalAttempts = count()
    by DestinationIP
| where DistinctPorts > 20 and DistinctSourceIPs > 5
| order by DistinctPorts desc

Cloud tenant enumeration — the probing you can't prevent

Microsoft's public APIs let attackers confirm your tenant exists, enumerate valid usernames, and discover your federation configuration — without producing a single sign-in event.

The GetCredentialType endpoint accepts a username and returns whether it exists, what authentication methods are configured, and whether the tenant uses federation — all before any authentication attempt. Tools like o365creeper and trevorspray automate this. The attacker feeds in potential usernames (from LinkedIn names + email format) and receives a validated target list for the spray.

You cannot prevent this at the API level. The defensive response is resilience, not prevention: assume the attacker already knows your tenant exists and which users have accounts. Enforce phishing-resistant MFA so enumeration is useless. Block legacy authentication so the spray has no vulnerable endpoint. Your defense isn't preventing the reconnaissance — it's making the findings unexploitable.

Credential stuffing — the silent successful authentication

Unlike spraying (which guesses passwords), credential stuffing uses known passwords from breach databases. If the password was reused, the authentication succeeds on the first attempt.

The successful credential stuff produces one log entry: a successful sign-in from an unfamiliar device. Not a failed login. Not a spray pattern. A single successful authentication that looks exactly like a user logging in from a new phone.

Your detection is post-authentication behavioral analysis. After authentication succeeds, does the account immediately run discovery commands? Access unusual data? Create inbox rules or OAuth consent grants? The authentication itself is undetectable as malicious — the behavior following it is where your detection lives.
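As a sketch, that post-authentication layer reduces to one correlation: a success from an unfamiliar device, followed within a short window by discovery-style activity. The event schema and operation labels below are hypothetical, not a real audit-log taxonomy; substitute the operations your own logs actually record.

```python
from datetime import datetime, timedelta

# Hypothetical labels for discovery-style activity; map these to the audit
# operations your own tenant actually records.
DISCOVERY_OPS = {"inbox_rule_created", "oauth_consent_granted", "bulk_mail_read"}

def flag_post_auth(signin, activities, window=timedelta(hours=1)):
    """signin: {'ts','user','known_device'}; activities: [{'ts','user','op'}]."""
    if signin["known_device"]:
        return False  # familiar device: nothing anomalous about the sign-in
    cutoff = signin["ts"] + window
    ops = {a["op"] for a in activities
           if a["user"] == signin["user"] and signin["ts"] <= a["ts"] <= cutoff}
    return bool(ops & DISCOVERY_OPS)

signin = {"ts": datetime(2026, 3, 1, 9, 0), "user": "alice",
          "known_device": False}
acts = [{"ts": datetime(2026, 3, 1, 9, 12), "user": "alice",
         "op": "inbox_rule_created"}]
print(flag_post_auth(signin, acts))  # True: new device + immediate discovery
```

Either condition alone is noise: new devices are common, and inbox rules are legitimate. The conjunction inside a tight window is the signal.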

// STEP 1 — Document your current thresholds (10 minutes)
// Answer for your environment:
//   Failed-login lockout: ___ attempts in ___ minutes
//   Per-IP rate limit: ___ attempts in ___ minutes
//   Port scan detection: ___ connections in ___ seconds
//
// For each, calculate the attacker's sub-threshold capacity:
//   Attempts per hour while staying below threshold: ___
//   Attempts per day (x24): ___
//   Attempts per week (x168): ___
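The Step 1 arithmetic as a small helper (illustrative only; plug in your own thresholds):

```python
# Sub-threshold capacity: how many attempts per hour/day/week an attacker can
# make while staying one attempt below a per-entity threshold.
def sub_threshold_capacity(max_attempts, window_minutes):
    per_hour = (max_attempts - 1) * (60 / window_minutes)
    return per_hour, per_hour * 24, per_hour * 168

# Example: lockout at 5 attempts per 5 minutes, per account.
print(sub_threshold_capacity(5, 5))  # (48.0, 1152.0, 8064.0)
```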

// STEP 2 — Check for spray patterns (20 minutes)
// Export 7 days of failed authentication events.
// In Sentinel:
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0"
| summarize
    FailCount = count(),
    DistinctIPs = dcount(IPAddress),
    DistinctAccounts = dcount(UserPrincipalName)
    by bin(TimeGenerated, 1h)
| where DistinctAccounts > 50 and DistinctIPs > 20
| order by DistinctAccounts desc
// If this returns rows: you may have active spray activity.
// Export the detailed events for those hours and feed into
// Claude with the spray detection prompt from this sub.
//
// If this returns nothing: either no spray is active, or
// the spray is slower than 50 accounts/hour. Widen the
// bin to 4h or 24h and reduce the thresholds.

// STEP 3 — Check legacy authentication exposure (5 minutes)
SigninLogs
| where TimeGenerated > ago(30d)
| where ClientAppUsed in ("Exchange ActiveSync", "IMAP4", "POP3",
    "MAPI Over HTTP", "Authenticated SMTP", "Exchange Web Services",
    "Other clients")
| summarize count() by ClientAppUsed, UserPrincipalName
| order by count_ desc
// If this returns results: those endpoints are spray-vulnerable.
// Each account using legacy auth is a target the attacker can
// spray without triggering modern auth protections.

// STEP 4 — Check enumeration resilience (5 minutes)
// In Entra ID → Security → Conditional Access:
// Is there a policy requiring phishing-resistant MFA for all users?
// If yes: user enumeration and spraying findings are unexploitable.
// If no: document this as the highest-priority gap.

Hands-on Exercise — Reconnaissance Detection Gap Assessment

Objective: Assess your detection coverage against professional-grade distributed reconnaissance and identify specific gaps.

Prerequisites: Access to your SIEM with authentication logs (Entra ID sign-in logs or equivalent). Access to your Conditional Access policies.

Success criteria: You've documented your threshold capacities, checked for active spray patterns, identified legacy auth exposure, and assessed enumeration resilience.

Challenge: If you found active legacy authentication in Step 3, identify which users and which applications require it. For each, determine: can this application be migrated to modern authentication? If not, can a Conditional Access policy restrict legacy auth to specific IPs or compliant devices? This is the single highest-impact remediation for spray resilience.


Next
OD1.7 — The Attacker's Decision Matrix. How constraints, reconnaissance findings, and the objective combine into an operational plan — and how you reverse-engineer that plan from investigation evidence to predict the attacker's next move.
Checkpoint — before moving on

You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.

1. Explain why per-entity detection thresholds fail against distributed reconnaissance and what detection approach is required instead. (§ Your thresholds were designed for the wrong attacker)
2. Detect a low-and-slow password spray using cross-account correlation in KQL or AI-assisted analysis, and distinguish it from organic authentication failures. (§ Password spraying + Hands-on Exercise)
3. Explain why cloud tenant enumeration can't be prevented and what defensive posture (phishing-resistant MFA + legacy auth blocking) makes the enumeration findings unexploitable. (§ Cloud tenant enumeration)

You're reading the free modules of offensive-security-for-defenders

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
