In this module

NF0.13 Check My Knowledge

8 hours · Module 0 · Free
Test your understanding of the NF0 module. Each question targets a specific concept. If you get one wrong, the explanation tells you which subsection to revisit.
1. During a ransomware investigation, the disk is encrypted and event logs were cleared before encryption. Which network evidence source provides the independent timeline that neither disk nor log analysis can?
A. Suricata alerts, because they triggered on the ransomware deployment
B. Full PCAP, because it contains the encrypted files before encryption
C. Zeek conn.log, because connection timestamps come from the sensor's clock, not the compromised host
D. NetFlow, because it records which files were transferred
Answer: C. Zeek conn.log timestamps come from the NSM sensor's system clock, providing an independent timeline that the attacker can't manipulate from the compromised host. Suricata alerts may not cover the full timeline. PCAP doesn't contain disk files. NetFlow records connections, not file contents. Review NF0.1 § Scenario 4.
2. Your organization monitors a 1 Gbps internet link. You need to retain network evidence for 90 days. Which evidence type provides the best investigation value within a practical storage budget?
A. Full PCAP — approximately 90 TB for 90 days
B. Zeek metadata logs — approximately 1.35 TB for 90 days
C. Suricata alerts only — minimal storage
D. NetFlow — approximately 180 GB for 90 days
Answer: B. Zeek metadata provides structured, queryable logs at ~15 GB/day (1.35 TB for 90 days) — answering 90% of investigation questions at 1% of full PCAP storage cost. Suricata alerts alone can't support investigation (alert-only monitoring). NetFlow lacks protocol detail. Full PCAP at 90 TB is impractical for 90-day retention. Review NF0.2 § Type 2.
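The storage figures above can be sanity-checked with back-of-envelope arithmetic. This sketch assumes roughly 10% average utilization on the 1 Gbps link (an assumption, not stated in the module — it lands near the ~90 TB PCAP figure quoted) and takes the ~15 GB/day Zeek and ~2 GB/day NetFlow rates from the answer text:

```python
# Back-of-envelope 90-day storage estimate for a 1 Gbps monitored link.
LINK_GBPS = 1.0
UTILIZATION = 0.10   # assumed ~10% average utilization (not from the module)
DAYS = 90

# Full PCAP: every byte on the wire is written to disk.
pcap_bytes_per_day = LINK_GBPS * UTILIZATION * 1e9 / 8 * 86400
pcap_tb = pcap_bytes_per_day * DAYS / 1e12

# Zeek metadata: ~15 GB/day per the answer text.
zeek_tb = 15 * DAYS / 1e3

# NetFlow records: ~2 GB/day, consistent with the ~180 GB figure.
netflow_gb = 2 * DAYS

print(f"Full PCAP : {pcap_tb:.0f} TB")   # ~97 TB, same ballpark as ~90 TB
print(f"Zeek logs : {zeek_tb:.2f} TB")   # 1.35 TB
print(f"NetFlow   : {netflow_gb} GB")    # 180 GB
```

The two-orders-of-magnitude gap between full PCAP and Zeek metadata is the whole argument for metadata-first retention.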
3. What is the core principle of Network Security Monitoring (NSM)?
A. Collect network evidence continuously so it exists before the incident is detected
B. Deploy IDS sensors to alert on known threats in real time
C. Capture full packets on every network segment for maximum visibility
D. Respond to network alerts within a defined SLA
Answer: A. NSM is a preparation capability, not a response capability. The data must exist before anyone knows an incident is occurring. Reactive capture can't travel backward in time. IDS deployment (B) is one component of NSM, not the principle. Full PCAP everywhere (C) is impractical and unnecessary. SLA response (D) is incident response, not monitoring. Review NF0.3 § The Core Principle.
4. In the six-step Network Investigation Methodology, at which step should you first use Wireshark for PCAP analysis?
A. Step 1 — Scope, to understand the traffic patterns
B. Step 2 — Identify, to find sessions of interest
C. Step 3 — Correlate, to link network and endpoint evidence
D. Step 4 — Reconstruct, after sessions of interest have been identified from metadata
Answer: D. Wireshark is a reconstruction tool. Steps 1-3 use Zeek metadata and Suricata alerts to identify and narrow the sessions of interest. Wireshark at Step 4 examines targeted PCAP for specific sessions — not the full capture. Opening Wireshark at Steps 1-2 means scrolling through millions of packets without direction. Review NF0.4 § Step 4 and NF0.5 § Wireshark.
5. How do Zeek and Suricata work together during an investigation?
A. Suricata generates Zeek logs when it detects threats
B. Zeek replaces Suricata for organizations that need metadata instead of alerts
C. Suricata alerts identify known-bad traffic; Community ID links those alerts to Zeek's conn.log, dns.log, and ssl.log for full session context
D. Zeek forwards suspicious connections to Suricata for deep packet inspection
Answer: C. Suricata and Zeek are complementary. Suricata detects known threats via signatures. Zeek provides structured metadata about every connection. Community ID is a standardized flow hash that both tools generate, allowing you to pivot from a Suricata alert to the same session's details in Zeek logs. They don't generate each other's output or forward traffic between them. Review NF0.5 § Suricata.
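The pivot works because both tools hash a flow's 5-tuple the same way. A minimal sketch of the Community ID v1 computation for IPv4 TCP/UDP (per the published spec; in practice you would use the `communityid` package or the values the tools emit, and this assumes the default seed of 0):

```python
import base64
import hashlib
import socket
import struct

def community_id_v1(saddr, sport, daddr, dport, proto=6, seed=0):
    """Community ID v1 for an IPv4 TCP/UDP flow (minimal sketch, seed 0)."""
    src, dst = socket.inet_aton(saddr), socket.inet_aton(daddr)
    # Canonical ordering: smaller (address, port) endpoint first, so both
    # directions of the same flow hash to the same ID.
    if (src, sport) > (dst, dport):
        src, dst, sport, dport = dst, src, dport, sport
    data = struct.pack("!H", seed) + src + dst + struct.pack("!BBHH", proto, 0, sport, dport)
    return "1:" + base64.b64encode(hashlib.sha1(data).digest()).decode()

# A Suricata alert (client -> server) and the Zeek conn.log entry
# (however Zeek oriented the flow) produce the identical ID:
cid_alert = community_id_v1("10.0.0.5", 49812, "203.0.113.7", 443)
cid_conn  = community_id_v1("203.0.113.7", 443, "10.0.0.5", 49812)
print(cid_alert == cid_conn)   # True — one key joins alert and metadata
```

Because the ID is direction-independent, a single `grep`/join on the `community_id` field pulls the matching conn.log, dns.log, and ssl.log rows for any alert.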
6. A workstation shows 847 connections to the same external IP over 72 hours, each lasting 0.3-0.5 seconds with 300-500 bytes transferred, at approximately 60-second intervals. Which baseline dimension is most anomalous?
A. Volume — 300-500 bytes per connection is unusually low
B. Timing — the consistent 60-second interval indicates automated periodic communication, not human-driven activity
C. Destination — connecting to an external IP is inherently suspicious
D. Protocol — the connection uses an unusual port
Answer: B. The defining anomaly is the consistent 60-second interval over 72 hours. Human-driven traffic is irregular. Automated periodic connections at a fixed interval are the signature of C2 beaconing. Low byte count (A) is notable but not the primary signal. External connections (C) are normal for any host. The question doesn't mention unusual ports (D). Review NF0.6 § Guided Procedure Step 2.
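The timing anomaly can be quantified directly from conn.log timestamps. One common approach (a sketch, not the module's prescribed method) is the coefficient of variation of the inter-connection intervals — near zero means machine-regular beaconing:

```python
import random
import statistics

def beacon_score(timestamps):
    """Coefficient of variation of inter-connection intervals.
    Near 0 = machine-regular timing (C2 beacon candidate);
    human-driven traffic is far more irregular."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.stdev(intervals) / statistics.mean(intervals)

# Synthetic stand-in for the scenario: 847 connections at ~60 s intervals
# with slight jitter, as a real beacon would show.
random.seed(1)
t, beacon = 0.0, []
for _ in range(847):
    t += 60 + random.uniform(-1, 1)   # 60 s ± 1 s jitter
    beacon.append(t)

print(f"beacon CV = {beacon_score(beacon):.3f}")   # close to 0 → periodic
```

Real implants often add deliberate jitter, so thresholds need tuning, but an interval distribution this tight over 72 hours is not produced by a human at a keyboard.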
7. You can deploy one NSM sensor. Which position provides the highest-value evidence for most investigation scenarios?
A. Internet egress — captures C2, exfiltration, and initial access traffic
B. Core switch — captures lateral movement between VLANs
C. DMZ — captures attacks against public-facing services
D. Server VLAN — captures access to high-value targets
Answer: A. The egress sensor sees all external traffic — C2 channels, exfiltration, initial access via phishing, and malware downloads. These are the highest-priority investigation questions for most incidents. The core switch (B) adds lateral movement visibility but misses external traffic. DMZ (C) covers only public-facing services. Server VLAN (D) covers only server access. Review NF0.7 § Sensor Position 1.
8. What is the most critical evidence integrity requirement for network forensics?
A. Using SHA-512 instead of SHA-256 for PCAP hashing
B. Encrypting stored PCAP files to prevent tampering
C. Storing PCAP on write-once media
D. NTP synchronisation on the sensor — incorrect timestamps prevent correlation with endpoint evidence
Answer: D. If the sensor's clock is desynchronised, every timestamp in every log file and PCAP is wrong. This prevents correlation with endpoint evidence and can cause the investigator to associate the wrong network activity with the wrong endpoint events. SHA-256 vs SHA-512 (A) is irrelevant — both provide adequate integrity verification. Encryption (B) prevents reading, not tampering. Write-once media (C) is impractical for continuous capture. Review NF0.9 § Timestamp Integrity.
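A toy illustration (not from the module) of why drift is fatal: cross-source correlation typically pairs events within a small tolerance window, and a few minutes of sensor clock offset empties that window entirely:

```python
def correlate(endpoint_ts, network_ts, tolerance=2.0):
    """Return network events within ±tolerance seconds of an endpoint event."""
    return [t for t in network_ts if abs(t - endpoint_ts) <= tolerance]

# Endpoint log shows a process launch at t=1000 (epoch seconds, trusted clock).
endpoint_event = 1000.0

# The sensor saw the matching connection one second later...
synced = correlate(endpoint_event, [950.0, 1001.0, 1200.0])

# ...but with a +300 s clock drift, the same packet is stamped 1301.
drifted = correlate(endpoint_event, [1250.0, 1301.0, 1500.0])

print(synced)    # [1001.0] — correlation succeeds
print(drifted)   # []      — 300 s of drift breaks correlation entirely
```

Hash choice and storage media matter for evidence handling, but neither can compensate for timestamps that point investigators at the wrong events.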

You've built the sensor and mapped the evidence landscape.

NF0 established why network evidence matters when every other source is compromised. NF1 built your Zeek + Suricata sensor with the 10 investigation query patterns. From here, every module teaches protocol-specific investigation against real attack scenarios.

  • DNS deep dive (NF3) — tunnelling detection, DGA analysis, passive DNS infrastructure mapping, and the INC-NE-2026-0227 AiTM phishing DNS trail
  • Protocol analysis (NF4–NF7) — HTTP/HTTPS, SMB lateral movement, SSH tunnelling, and email protocol investigation with Zeek metadata and PCAP
  • Detection and hunting (NF8–NF11) — Suricata rule writing, C2 beacon detection with JA3, NetFlow analytics, and proactive network threat hunting
  • NSM architecture (NF13) — production sensor deployment at 1–10 Gbps with Arkime, Security Onion, and enterprise storage planning
  • INC-NE-2026-0830 capstone (NF14) — multi-stage investigation using only network evidence: phishing → domain-fronted C2 → lateral movement → DNS tunnel exfiltration