
NF0.10 What Network Forensics Cannot Do — Honest Limits

8 hours · Module 0 · Free
What you already know
The previous nine subs built the case for network evidence as an investigation backbone — it survives when disk, memory, endpoint logs, and identity evidence are gone, encrypted, or tampered with. That's the pitch. This sub is the honest counter-balance. Every evidence source has blind spots; network evidence has specific ones you need to name before you reach for network investigation as the primary tool. If you pitch network evidence to the CISO as the answer to every investigation, the first case it fails on will cost you credibility you don't get back.
Operational Objective
Investigators who overcommit to one evidence source fail at the first case where that source has a gap. Network evidence has five concrete gaps you need to name, and a sixth structural limit you need to understand before you recommend NSM as the investigation backbone. Each gap has a compensating evidence source (endpoint, identity, memory, file-system) that fills it. The goal of this sub is to give you the honest vocabulary for when network evidence is the primary tool, when it's the supporting tool, and when it's not useful at all — so that you can recommend the right evidence posture to the CISO without overselling one source.
Deliverable: A personal decision framework for which evidence source you'd lead with against each of five common attack categories. Not a checklist — a short written statement you could defend to a CISO who asks why you prioritized network evidence over endpoint (or vice versa) for a specific case.
Estimated completion: 25-35 minutes
Network Evidence Gaps and Complementary Sources — five gaps where network alone doesn't answer the question:

  • GAP 1 — What ran on disk: process execution, command lines, file writes, persistence → endpoint / EDR
  • GAP 2 — Inside the encrypted payload: HTTP body content over TLS, file content in exfiltration → TLS inspection / endpoint
  • GAP 3 — Identity and intent: who authenticated as whom, why they accessed X → identity logs / EDR
  • GAP 4 — Host-local activity: local privilege escalation, disk-based persistence → endpoint / host logs
  • GAP 5 — In-memory state: injected code, credentials, rootkits, kernel compromise → memory forensics
  • LIMIT — Visibility boundaries: no sensor = no evidence, retention expiry → architecture decision

The investigator's posture: lead with what fits the question. Network evidence is the primary source for external-facing attacks, C2, exfiltration, and detection-evasion cases. Endpoint, memory, and identity evidence lead for local privilege, credential theft, and authentication-scope cases. Overcommitment to any single evidence source is the failure mode. Lead with what fits; supplement with the rest.

Figure 0.10.1 — Five gaps in network evidence and the complementary sources that fill them. Each gap names a specific question network evidence cannot answer and the evidence source that does.

Gap 1 — Network evidence doesn't tell you what ran on the host

The network sees what left the host and what came back. It does not see what happened between — the process that launched, the registry key the malware wrote, the scheduled task it set, the DLL it loaded into a legitimate process. When the question is "what executed on this system," network evidence is supporting material at best.

Consider a ransomware case where you have full PCAP of the attacker's C2 channel. You can reconstruct the beacon timing, identify the Cobalt Strike JA3 fingerprint, map the domain-fronting infrastructure. What you cannot reconstruct is the sequence of powershell.exe spawns, the exact command line the attacker used to disable Defender, the registry key they added for persistence, or the exact moment the encryption process started.
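The beacon-timing reconstruction the network does support can be sketched in a few lines. This is an illustrative example, not a production detector: the timestamps stand in for `ts` values pulled from Zeek's conn.log for one source/destination pair, and the threshold interpretation is a rule of thumb.

```python
from statistics import mean, stdev

# Connection start times (epoch seconds) for one src/dst pair, as you
# might pull from Zeek conn.log's ts field. Values are illustrative.
beacon_ts = [1000.0, 1060.2, 1119.8, 1180.1, 1240.0, 1299.9]

# Inter-arrival intervals between successive connections.
intervals = [b - a for a, b in zip(beacon_ts, beacon_ts[1:])]

# A low coefficient of variation (stdev / mean) suggests machine-regular
# beaconing rather than human-driven browsing.
cv = stdev(intervals) / mean(intervals)
print(f"mean interval: {mean(intervals):.1f}s, CV: {cv:.3f}")
```

Note what this establishes and what it doesn't: the cadence of the C2 channel, but nothing about the commands carried inside it.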

Endpoint evidence fills this gap. Defender XDR's DeviceProcessEvents table, Sysmon Event ID 1, or EDR process-tree captures all record exactly what executed, when, and with what parameters. Network evidence tells you the attacker was in the house; endpoint evidence tells you which rooms they searched. An IR report that describes the attack without process-level detail leaves the board asking "but what did they actually do on the machines" — a question the network alone cannot answer.
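The kind of host-execution detail the endpoint layer records can be seen in a Sysmon Event ID 1 (process creation) record. The record below is a simplified, hand-written illustration of the exported XML shape, not a real capture; the parsing approach is the point.

```python
import xml.etree.ElementTree as ET

# A single Sysmon Event ID 1 (process creation) record, as exported to
# XML. Simplified illustration -- real records carry many more fields.
record = """<Event>
  <System><EventID>1</EventID></System>
  <EventData>
    <Data Name="Image">C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe</Data>
    <Data Name="CommandLine">powershell.exe -enc SQBFAFgA</Data>
    <Data Name="ParentImage">C:\\Windows\\System32\\cmd.exe</Data>
  </EventData>
</Event>"""

root = ET.fromstring(record)
fields = {d.get("Name"): d.text for d in root.iter("Data")}

# This is the host-execution detail the wire never carries: which binary
# ran, with what command line, spawned by what parent process.
print(fields["ParentImage"], "->", fields["Image"])
print("cmdline:", fields["CommandLine"])
```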

The practical consequence for the investigator. When you're brought into an incident where the key question is process execution, privilege escalation, or host-local activity, pull endpoint evidence first. Use network evidence to confirm what happened externally (the C2 channel existed, the exfiltration traversed the wire) but let the endpoint evidence answer the host-level questions. Leading with network evidence for a host-execution question costs investigation time and often misses the answer.

Gap 2 — TLS shields the payload, not the metadata

TLS is the internet's default encryption. It protects HTTP request bodies, response payloads, and application-layer data. The network investigator sees everything about the connection — timing, size distribution, certificate, fingerprint — but not what was sent or received inside it. Without TLS inspection, a significant fraction of modern C2 and exfiltration is opaque to network-only investigation.

What you can see in an HTTPS session. Source and destination IP and port. TLS version and cipher suite. Server Name Indication (SNI) from the ClientHello. Certificate chain and issuer. JA3/JA4 client fingerprint. Connection duration. Bytes transferred each way. Packet-size distribution over time.

What you cannot see. The HTTP method (GET, POST, PUT). The URI path. Request and response headers. Request body (including credential submissions, file uploads, API requests). Response body (including extracted data, command output, exfiltrated files). Cookie values. Session tokens.
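The visible/invisible split maps directly onto what Zeek's ssl.log records. The entry below is an illustrative dict in the shape of Zeek's JSON output; the `ja3` field assumes the JA3 package is loaded, and all values are invented for the example.

```python
# One ssl.log entry as a dict, in the shape of Zeek's JSON output
# (LogAscii::use_json=T). Values are illustrative.
entry = {
    "id.resp_h": "203.0.113.40", "id.resp_p": 443,
    "version": "TLSv12",
    "cipher": "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "server_name": "files.example-storage.net",  # SNI from the ClientHello
    "ja3": "e7d705a3286e19ea42f587b344ee6865",    # requires the ja3 package
}

# Everything below is connection *metadata* -- visible without decryption.
visible = {k: entry[k] for k in ("server_name", "version", "cipher", "ja3")}
print(visible)

# The HTTP method, URI, headers, and bodies inside this session appear
# nowhere on the wire without TLS inspection.
```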

The consequence for investigation. If an attacker exfiltrates 2GB of engineering files over an HTTPS POST to a legitimate-looking cloud storage service, the network shows you the connection, the timing, the volume, and the destination — but not the file names, not the content, not the authorization headers that would identify which account did the upload. For high-sensitivity cases (legal proceedings, customer-data breaches, intellectual-property theft), the network evidence establishes that something happened but cannot establish what was taken.
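What the wire can establish — volume, timing, destination — is a straightforward aggregation over conn.log. A minimal sketch, with invented byte counts standing in for `orig_bytes` values (bytes the internal host sent):

```python
from collections import defaultdict

# (resp_h, orig_bytes) pairs pulled from Zeek conn.log -- bytes the
# internal host *sent* to each destination. Values are illustrative.
conns = [
    ("203.0.113.40", 1_900_000_000),  # large outbound transfer
    ("203.0.113.40", 150_000_000),
    ("198.51.100.7", 12_000),
]

sent = defaultdict(int)
for resp_h, orig_bytes in conns:
    sent[resp_h] += orig_bytes

# Volume and destination are answerable from the wire; file names,
# account, and content are not (Gap 2).
for host, total in sorted(sent.items(), key=lambda kv: -kv[1]):
    print(f"{host}: {total / 1e9:.2f} GB sent")
```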

TLS inspection (covered in NF4.7) closes this gap at the cost of architectural complexity and privacy tradeoffs. For environments without TLS inspection, endpoint evidence fills the gap — Defender XDR's DeviceNetworkEvents combined with process-level file-access logs shows the exfiltration from the host perspective even when the wire-level content is encrypted. A network-only investigator who ignores Gap 2 will overstate confidence about what was exfiltrated.

Gap 3 — Identity and intent live above the network layer

Network evidence shows you the connection. Identity evidence shows you who authenticated as whom to establish that connection. For authentication-scope investigations — account compromise, credential theft, privilege abuse, insider threat — identity logs are the primary source, and network evidence is supplementary.

In an M365 tenant compromise, the network sees the TLS connection to login.microsoftonline.com. It doesn't tell you which user completed MFA, whether the session was interactive or service-principal, which conditional-access policies fired, whether the sign-in risk score elevated, or which OAuth scopes were granted. Entra sign-in logs answer those questions directly — and the network connection, by itself, tells you almost nothing about the identity layer.

This is the reason NF14's capstone (INC-NE-2026-0830) includes the cautious framing around identity continuity across the MFA reset. Even when the investigation is explicitly network-only because Defender XDR is unavailable, the report has to acknowledge that identity-scope claims (the pre-reset and post-reset activity is the same actor) rest partly on inference from network indicators and partly on evidence thread analysis that identity logs would have made direct. An honest investigation flags what the evidence can and cannot establish, and identity is the gap most commonly over-claimed by network-only investigators.

The compensating evidence source is Entra (or equivalent identity platform) sign-in and audit logs. For privilege-abuse cases, the identity system's native telemetry is primary; network evidence supports the timeline but does not establish the identity-layer claims.
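The identity-layer questions above are answered by filtering the sign-in logs directly. A sketch over records in the shape of an Entra sign-in export (field names follow the Microsoft Graph signIn resource; the records and the filter criteria are illustrative, not a detection rule):

```python
# Simplified records in the shape of Entra sign-in log exports.
signins = [
    {"userPrincipalName": "a.chen@contoso.com", "isInteractive": True,
     "conditionalAccessStatus": "success", "ipAddress": "203.0.113.9"},
    {"userPrincipalName": "a.chen@contoso.com", "isInteractive": False,
     "conditionalAccessStatus": "notApplied", "ipAddress": "198.51.100.77"},
]

# Who, interactive or not, which conditional-access outcome: answered
# here directly. The wire only shows TLS to the login endpoint.
suspicious = [s for s in signins
              if not s["isInteractive"]
              and s["conditionalAccessStatus"] != "success"]
for s in suspicious:
    print(s["userPrincipalName"], "non-interactive from", s["ipAddress"])
```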

Gap 4 — Host-local activity leaves no network footprint

Some attack techniques operate entirely inside the host — no network traffic, no external communication, nothing for the sensor to see. For these techniques, network evidence is not just incomplete, it's irrelevant.

Examples. A local privilege-escalation via token manipulation — the attacker already has code execution as a low-privilege user, they inject into or impersonate a higher-privilege process, no packets cross the sensor. A disk-based persistence mechanism that waits for reboot — no network activity during the setup. A rootkit that hooks kernel callbacks to hide its presence — all the interesting work is in-kernel. A data-at-rest theft where the attacker copies files to a USB device or burns them to a DVD — no network involvement at all.

For these cases, endpoint forensics and memory forensics are the primary sources. The network investigator needs to recognize when the question is host-local so they can correctly scope their contribution to the investigation — "the network evidence is consistent with the endpoint findings but cannot independently establish the host-local chain" is an honest framing that supports the investigation rather than over-extending network evidence.

The corollary. If an attacker's entire operation is host-local, a network-only investigation will miss the attack entirely. The mitigation isn't better network forensics — it's layered evidence sources so the host-local activity gets caught by endpoint or memory investigation even when the network layer is silent.

Gap 5 — In-memory state doesn't transit the wire

Memory-resident attacks — reflective DLL loading, process injection, Cobalt Strike's sleep-mask obfuscation, Mimikatz credential extraction from LSASS — leave specific artefacts in RAM that never appear on the wire. Credentials held in memory, injected code regions, hidden kernel structures: all invisible to network sensors.

The practical example. Mimikatz extracts cached credentials from LSASS.EXE memory on a compromised workstation. The attacker then uses those credentials in a subsequent authentication, which is visible on the network (Kerberos ticket request, or NTLM challenge-response). But the extraction itself — the moment the credentials were stolen — produces no network traffic. If the investigator only has network evidence, they can establish that the stolen credentials were used but not when or how they were stolen.
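The inference the network-only investigator is left with can be made concrete: the earliest ticket request for the stolen account bounds when the credentials were first *used*, never when they were extracted. A sketch over entries in the shape of Zeek's kerberos.log (values illustrative):

```python
# Entries in the shape of Zeek kerberos.log. Values are illustrative.
krb = [
    {"ts": 1755500000.0, "client": "a.chen/CONTOSO",
     "service": "krbtgt/CONTOSO", "request_type": "AS"},
    {"ts": 1755503600.0, "client": "svc-backup/CONTOSO",
     "service": "cifs/fileserver", "request_type": "TGS"},
]

# The wire shows *use* of credentials (ticket requests), not the theft.
# This gives an upper bound on the extraction time, nothing more.
stolen = "svc-backup/CONTOSO"
first_use = min(e["ts"] for e in krb if e["client"] == stolen)
print(f"first network-visible use of {stolen}: ts={first_use}")
```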

Memory forensics (covered in Ridgeline's Applied Memory Forensics course) fills this gap. A memory capture preserves the in-RAM state at the moment of capture — injected code regions, process handles, cached credentials, kernel structures. Combined with network evidence of subsequent authentication, memory forensics establishes the full credential-theft chain. Without memory evidence, the network-only investigator has to infer the extraction from the subsequent use, which is weaker evidence and more vulnerable to alternative explanations.

The structural limit — visibility is an architecture decision, not an investigation one

The five gaps above are about what network evidence intrinsically cannot see. There's a sixth limit that's different in kind: network evidence only exists where sensors were deployed and only for as long as retention policy kept it. This is a property of the architecture, not the evidence type, but it shapes every investigation.

If your sensor was deployed at the internet perimeter but not at the DMZ boundary, and the attack pivoted laterally from the DMZ to internal systems without crossing the perimeter, you have no network evidence of the lateral movement. If your perimeter sensor captured the traffic but your retention policy rotated the PCAP after 7 days and the incident is investigated at day 14, the PCAP is gone.

These aren't gaps in what network evidence can see — they're gaps in what your specific deployment actually saw and retained. They compound the intrinsic gaps: a network-only investigator with a limited sensor deployment has both the five technique-level gaps AND the architectural gaps from their specific NSM posture. When scoping what network evidence can contribute to an investigation, always check the sensor map against the incident's geography and the retention policy against the incident's timeline before committing to a network-led approach.
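The retention check is simple arithmetic worth doing explicitly before committing to a network-led approach. A sketch with hypothetical retention periods and dates — substitute your own deployment's values:

```python
from datetime import date, timedelta

# Hypothetical parameters: check these against your own deployment.
retention = {"pcap": 7, "zeek_logs": 90, "netflow": 365}  # days retained
incident_start = date(2026, 8, 16)
investigation_day = date(2026, 8, 30)

# For each source: does the retention window still cover the incident?
for source, days in retention.items():
    oldest_available = investigation_day - timedelta(days=days)
    covered = oldest_available <= incident_start
    print(f"{source}: {'covered' if covered else 'ROTATED AWAY'} "
          f"(oldest available: {oldest_available})")
```

In this hypothetical, the PCAP is already gone at day 14 while the Zeek metadata survives — exactly the day-7 rotation scenario above.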

NF13 covers this in depth — sensor placement, capture scaling, retention planning. For NF0 the takeaway is simpler: when you pitch network evidence as the investigation backbone, you're implicitly pitching the architecture that produces it. Network-led investigation without the underlying NSM architecture is a house built on sand.

Guided Procedure — Decide which evidence to lead with for five common cases
Step 1. Case 1 — AiTM credential phishing. The user clicked a phishing link, completed MFA through an attacker proxy, and the attacker captured the session token. Which evidence leads the investigation?
Expected output: Network evidence leads for the initial access phase (DNS resolution of the phishing domain, TLS session to the proxy, JA3 fingerprint). Identity evidence leads for the session-capture analysis (which token was stolen, what scopes, what conditional-access context). Endpoint evidence is supplementary — it confirms the user's browser initiated the session but doesn't change the primary findings.
If it fails: If you led with endpoint, you probably over-weighted the user's local activity: they just clicked a link, and a routine browser action produces no unusual endpoint signal. AiTM is an identity-plus-network case; endpoint is supplementary.
Step 2. Case 2 — Ransomware encryption event. The endpoint has been encrypted, event logs cleared, Defender XDR disabled. Which evidence leads?
Expected output: Network evidence leads because endpoint evidence is compromised. The attacker's C2 channel, lateral movement, and staging activity all transit the wire and are recoverable from Zeek/Suricata/PCAP. Endpoint evidence contributes where it survives — backup Defender data retained centrally, non-compromised endpoints that observed the attacker's pivot. Memory forensics may contribute if the endpoint is still running.
If it fails: If you insisted on endpoint-led despite the endpoint being compromised, you've misunderstood the evidence-integrity situation. When endpoint evidence is gone, encrypted, or tampered with, network evidence's survivability becomes the investigation's primary asset — which is the thesis of this whole course.
Step 3. Case 3 — Local privilege escalation via token theft. A non-admin user somehow gained local admin on their workstation. Which evidence leads?
Expected output: Endpoint and memory evidence lead. The token-theft technique produces no network traffic — it's a host-local operation against LSASS or a running process. Network evidence may show nothing related to the escalation itself; it might contribute later if the attacker uses the elevated privilege to reach network resources. Leading with network evidence on this case produces a thin investigation.
If it fails: If you insisted on network-led, re-read Gap 4. Some attacks are host-local; network evidence cannot see them directly. Recognizing the class of attack determines which evidence to lead with.
Step 4. Case 4 — Data exfiltration via HTTPS to a cloud-storage service. The attacker uploaded files to Dropbox using credentials compromised earlier. Which evidence leads?
Expected output: Mixed posture. Network evidence establishes that the upload occurred, the volume, the timing, the destination, and (via TLS fingerprinting) some properties of the client. But network evidence alone cannot tell you which files were uploaded, which account was used, or what the Dropbox API authorized. Endpoint evidence (file-access logs, process-network correlation, upload client behavior) fills the content gap. Identity evidence (Dropbox audit logs, if obtainable) establishes the account scope. Lead with whichever gives the cleanest answer to the specific question — for volume and timing, network; for content attribution, endpoint; for account scope, identity.
If it fails: If you claimed network evidence alone establishes the exfiltration scope, you've overstated. Re-read Gap 2. TLS opacity is specific and non-negotiable without TLS inspection.
Step 5. Case 5 — OAuth application persistence in M365. An attacker registered a malicious OAuth application with admin-consent-equivalent permissions, and the app is now accessing mail and files via client_credentials grant. Which evidence leads?
Expected output: Identity evidence leads — this is fundamentally an identity-layer attack. Entra audit logs show the application registration, the consent grant, the service-principal activity. Network evidence supplements: TLS connections to `graph.microsoft.com` using the application's credentials, the JA3 fingerprint of the attacker's tool, and the connection pattern that distinguishes service-principal activity from user activity. Network-only investigation would see the Graph API traffic but could not establish the identity-layer attribution.
If it fails: If you led with network, re-read Gap 3. Identity is where the attack lives. Network evidence is supporting context for an identity-scope attack.
Decision point

Your CISO asks: "Should we invest in network forensics capability, or should we put the budget into expanding Defender XDR deployment to the remaining 15% of endpoints?"

The knee-jerk response is to advocate for the capability you've been studying — network forensics is what this course is about, so network forensics must be the answer.

The correct response depends on what gaps the organization has. If Defender XDR is already deployed to 85% of endpoints and the remaining 15% are non-critical (test VMs, lab systems, edge devices), the marginal return on expanding Defender is low. Network forensics adds a layer that survives endpoint compromise and covers attack classes Defender doesn't catch well — external-facing attacks, domain-fronted C2, DNS tunnelling. Recommend the network investment.

If the 15% gap covers critical systems — a segment of finance endpoints, the engineering workstations that hold IP, the servers in the DMZ — expanding Defender closes a visible control gap that attackers will find. Network forensics is valuable but cannot replace endpoint visibility on high-value systems. Recommend closing the Defender gap first; then revisit the network investment.

The operational lesson: the correct evidence strategy is what fits the organization's current gaps, not what fits your specialization. An investigator who always advocates for their favorite tool misses the wider architectural picture. An investigator who recommends based on gap analysis earns credibility across investigation cycles.

Compliance Myth: "If the network sees everything, endpoint visibility is redundant"

Network-only investigation sells well — the premise that every attack crosses the wire at some point, so comprehensive network sensors should give you everything you need. The problem is that "crosses the wire at some point" is both true and misleading. Yes, the attacker's C2 beacon transits the network. Yes, the exfiltration leaves over HTTPS. But the five gaps above are real and they don't close because the sensor is well-placed. TLS is still opaque at the payload layer. Host-local privilege escalation still produces no network traffic. Identity-scope attacks still live in the identity system's logs, not in the network.

The right mental model: network evidence is one leg of a three-legged stool (endpoint, network, identity). Memory forensics is the fourth leg for sophisticated cases. Organizations that commit to one evidence source build a brittle investigation capability that fails at the first incident where the chosen source has a gap. Organizations that maintain layered evidence sources handle the range of attack classes without falling over when one layer is degraded.

The myth matters most when making architecture decisions. A CISO who believes "network sees everything" underfunds endpoint and identity telemetry. The first major incident — almost certainly one that has a significant identity-scope or endpoint-scope component — reveals the gap under time pressure. The investigation then has to work around the gap rather than through it. Expensive lesson.

Next
NF0.11 — Interactive lab. You've worked through the ten content subs that establish why network evidence matters, what it sees, and what it doesn't see. The lab brings the investigation methodology from NF0.4 into contact with real Zeek logs. You'll profile a capture, identify anomalies, and produce a first-pass investigation outcome — the skill every subsequent NF module assumes.
Try it: match each of five common incident types to the evidence source you'd lead with

Setup. A single page of notes, your own view. Use the Guided Procedure above as a starting point but write your own statements rather than copying.

Task. For each of these five incident types, write one sentence stating which evidence source you'd lead with and one sentence on what the other sources contribute. The incidents: (1) a public-facing web application compromise with SQL injection, (2) an insider threat where a privileged admin is suspected of copying customer data to a personal cloud storage account, (3) a Cobalt Strike beacon discovered on an engineering workstation with unknown entry point, (4) an Entra OAuth consent-phishing campaign against finance staff, (5) a ransomware incident where endpoint agents were disabled before encryption.

Expected result. Your answers should, roughly, match: (1) Web access logs and endpoint evidence lead; network evidence contributes timing and the attacker's external IP pattern. (2) Endpoint and cloud-service audit logs lead; network evidence contributes the volume and destination confirmation. (3) Network evidence leads for the C2 channel and entry-point identification; endpoint evidence contributes the on-host persistence and process chain. (4) Identity evidence leads; network evidence contributes the phishing-infrastructure trail and the token-use pattern. (5) Network evidence leads because endpoint evidence is compromised; surviving endpoint telemetry and identity evidence contribute where they remain intact.

Debugging branch. If you led every case with network evidence, you've committed the exact overclaim this sub is about — reread the five gaps. If you led every case with endpoint, you're probably coming from a pure-EDR background and haven't internalised the cases where endpoint evidence is gone, compromised, or irrelevant. The right answer varies by case; it's the gap analysis that gives you the right answer, not a default preference.

Checkpoint — before moving on

You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.

1. Name five attack scenarios where network evidence is the primary investigation source and five where it is supplementary. Defend each classification in one sentence. (§ Gap 1 through § Gap 5)
2. Explain to a CISO why investing only in network forensics without layered endpoint and identity telemetry produces a brittle investigation capability. (§ Compliance Myth)
3. Distinguish the five intrinsic gaps (what network evidence cannot see) from the structural limit (what your specific deployment actually saw and retained). (§ The structural limit)

You've built the sensor and mapped the evidence landscape.

NF0 established why network evidence matters when every other source is compromised. NF1 built your Zeek + Suricata sensor with the 10 investigation query patterns. From here, every module teaches protocol-specific investigation against real attack scenarios.

  • DNS deep dive (NF3) — tunnelling detection, DGA analysis, passive DNS infrastructure mapping, and the INC-NE-2026-0227 AiTM phishing DNS trail
  • Protocol analysis (NF4–NF7) — HTTP/HTTPS, SMB lateral movement, SSH tunnelling, and email protocol investigation with Zeek metadata and PCAP
  • Detection and hunting (NF8–NF11) — Suricata rule writing, C2 beacon detection with JA3, NetFlow analytics, and proactive network threat hunting
  • NSM architecture (NF13) — production sensor deployment at 1–10 Gbps with Arkime, Security Onion, and enterprise storage planning
  • INC-NE-2026-0830 capstone (NF14) — multi-stage investigation using only network evidence: phishing → domain-fronted C2 → lateral movement → DNS tunnel exfiltration