In this module
NF1.1 Sensor Architecture and Deployment Models
You've set up VMs before — for labs, for testing, for DFIR analysis. You understand that a network sensor needs a capture interface that sees traffic. NF0.7 covered where to place sensors in the network. This sub covers how the sensor itself is built — the VM architecture, the deployment models, and the design decisions that determine whether your sensor produces reliable evidence.
A network sensor isn't a single tool — it's a system. The capture interface receives traffic. Zeek parses protocols and produces metadata logs. Suricata inspects traffic against signature rules. The storage backend retains logs and optionally PCAP. The management layer handles log rotation, rule updates, and health monitoring. Each component has configuration decisions that affect the evidence quality.
Building the sensor wrong means discovering gaps in the middle of an investigation — Zeek wasn't parsing HTTP/2, Suricata rules were 6 months stale, the PCAP disk filled up and overwrote the evidence you needed. This sub covers the architecture of the sensor system and the three deployment models you'll encounter in production, then positions the lab sensor you'll build in NF1.2-NF1.9.
Deliverable: The sensor architecture — capture interface, processing engines (Zeek + Suricata), storage, and management — and the three deployment models (standalone VM, dedicated appliance, integrated platform). The design decisions for the course lab sensor.
Figure NF1.1 — The NSM sensor architecture. Traffic enters through the capture interface, is processed by Zeek (metadata) and Suricata (signatures) simultaneously, stored as structured logs, and managed for availability and currency. This course uses the standalone VM model — you build the sensor from a clean Linux installation.
The Four Components
Every NSM sensor, regardless of deployment model, has the same four components. Understanding each one prevents configuration mistakes that create evidence gaps.
The capture interface is the network interface that receives traffic. In a production deployment, this is connected to a SPAN port on a switch or a network TAP. The capture interface runs in promiscuous mode — it accepts all frames, not just those addressed to it — and has no IP address assigned. An interface with an IP address generates its own traffic (ARP, DHCP, NTP), which contaminates the capture. The capture interface is receive-only.
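Promiscuous mode can be verified from a script by reading the interface flags out of sysfs. A minimal sketch, assuming a Linux sysfs layout; the interface name is a placeholder, but IFF_PROMISC (0x100) is the real flag bit from Linux's `<linux/if.h>`.

```python
# Check whether a capture interface is in promiscuous mode by reading
# its flags word from sysfs. IFF_PROMISC (0x100) is defined in <linux/if.h>.
from pathlib import Path

IFF_PROMISC = 0x100  # promiscuous-mode bit in the interface flags word

def is_promisc(flags_hex: str) -> bool:
    """flags_hex is the hex string read from /sys/class/net/<iface>/flags."""
    return bool(int(flags_hex, 16) & IFF_PROMISC)

def check_capture_interface(iface: str) -> bool:
    # e.g. /sys/class/net/eth1/flags contains a value like "0x1303"
    flags = Path(f"/sys/class/net/{iface}/flags").read_text().strip()
    return is_promisc(flags)
```

A check like this belongs in the health-monitoring layer: a capture interface that silently drops out of promiscuous mode produces a partial capture that looks healthy.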
In your lab, the capture interface works differently. You won't have a SPAN port on your home network. Instead, you'll replay pre-captured PCAP files through Zeek and Suricata using their offline analysis modes. The tools process the PCAP exactly as they would process live traffic — the same logs, the same alerts, the same output. The difference is timing: live capture is real-time, PCAP replay is batch processing. The investigation methodology is identical.
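The two offline modes can be driven from a small wrapper. A sketch, assuming the lab's file layout; `zeek -r` and `suricata -r` are the tools' actual offline-analysis flags, while the file and directory names are placeholders.

```python
# Build the offline-analysis command lines for replaying one PCAP
# through both engines. "zeek -r" and "suricata -r" are the real
# offline modes; run the results with subprocess.run() on the sensor.
from pathlib import Path

def replay_commands(pcap: str, log_dir: str) -> list[list[str]]:
    p = Path(pcap)
    if p.suffix not in (".pcap", ".pcapng"):
        raise ValueError(f"not a capture file: {pcap}")
    return [
        # Zeek writes its logs (conn.log, dns.log, ...) to the CWD
        ["zeek", "-r", str(p)],
        # Suricata writes eve.json and fast.log under -l <dir>
        ["suricata", "-r", str(p), "-l", log_dir],
    ]
```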
The processing engines are Zeek and Suricata, running simultaneously on the same traffic. Zeek parses protocols and produces structured metadata logs. Suricata matches traffic against signature rules and produces alerts. Both tools can capture from the same interface using AF_PACKET: each tool receives a full copy of the traffic, and within each tool, cluster (fanout) mode load-balances packets across its worker threads without duplication. In the lab, both tools process the same PCAP file.
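For live capture, the AF_PACKET side of this is configured per tool. A hedged fragment of the relevant `suricata.yaml` section; the interface name and cluster-id are placeholders for your environment.

```yaml
# suricata.yaml (fragment) - AF_PACKET capture with kernel fanout.
# Interface name and cluster-id are placeholders.
af-packet:
  - interface: eth1
    cluster-id: 99              # fanout group id, unique per tool
    cluster-type: cluster_flow  # keep each flow on one worker thread
    defrag: yes
```

`cluster_flow` hashes on the flow tuple so a worker thread always sees both directions of a connection, which stateful signature matching depends on.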
Storage holds the Zeek logs, Suricata alerts, and optionally full PCAP. Storage sizing is the critical capacity planning decision. Zeek metadata at 1 Gbps produces approximately 10-15 GB/day. Suricata EVE JSON alerts are typically under 1 GB/day. Full PCAP at 1 Gbps produces approximately 10 TB/day. Your retention requirements (from NF0.3) determine the total storage needed.
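The per-day figures translate into a simple capacity calculation. A sketch using the numbers above; the 90-day retention period is an example value, not a course requirement — substitute whatever NF0.3 produced for your environment.

```python
# Storage sizing from per-day log volumes. All inputs in GB/day;
# the retention period comes from your NF0.3 requirements.
def storage_gb(zeek_gb_day: float, suricata_gb_day: float,
               pcap_gb_day: float, retention_days: int) -> float:
    return (zeek_gb_day + suricata_gb_day + pcap_gb_day) * retention_days

# Metadata-only sensor at 1 Gbps: ~15 GB/day Zeek + ~1 GB/day EVE JSON
print(storage_gb(15, 1, 0, 90))       # prints 1440 (GB, ~1.4 TB)
# Full PCAP at 1 Gbps adds ~10 TB/day (10_000 GB)
print(storage_gb(15, 1, 10_000, 1))   # prints 10016 -- one day dwarfs 90 days of metadata
```

The asymmetry is the point: 90 days of metadata costs less than a single day of full PCAP, which is why most deployments retain metadata long-term and PCAP briefly, if at all.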
Management covers NTP synchronisation, rule updates, log rotation, and health monitoring. NTP is the most critical management requirement — NF0.9 covered why. Rule updates (suricata-update) need to run at least daily. Log rotation prevents the disk from filling. Health monitoring catches sensor failures before they create evidence gaps.
Three Deployment Models
Standalone VM is what you'll build in this module. Zeek and Suricata installed on a Linux VM (Ubuntu 24.04), processing PCAP files for analysis. This model works for labs, small networks, and organisations that want to start with NSM without dedicated hardware. Production deployments on a standalone VM can handle up to approximately 1 Gbps of sustained traffic on modern hardware (4+ cores, 16+ GB RAM).
Dedicated appliance is purpose-built hardware optimised for packet capture at high throughput. Corelight (the commercial Zeek vendor) sells appliances that handle 10-100 Gbps. Open-source equivalents use high-performance NICs with hardware offload (DPDK, PF_RING ZC) on server-class hardware. This model is for enterprise deployments where the traffic volume exceeds what a VM can handle.
Integrated platform bundles Zeek, Suricata, Elasticsearch, Kibana, and a management UI into a single deployable package. Security Onion is the most widely deployed open-source example. SELKS (Suricata + ELK + Scirius) is another. These platforms reduce deployment complexity but add overhead and abstraction. Module NF13 covers Security Onion for production NSM.
This course uses the standalone VM model. The skills transfer directly to any deployment model — the Zeek logs, Suricata alerts, and query patterns are identical regardless of whether the sensor is a VM, an appliance, or a platform.
Lab Sensor Design
The sensor you build in NF1.2-NF1.9 is designed for learning, not production throughput. The architecture is the same; the scale is different.
Your lab sensor is an Ubuntu 24.04 VM with Zeek and Suricata installed. It processes PCAP files from the NF course lab packs — pre-captured traffic from the Northgate Engineering scenarios. Each module's lab pack includes PCAP files that you'll replay through your sensor to generate Zeek logs and Suricata alerts for investigation.
The VM requirements are modest: 2 CPU cores, 4 GB RAM, 40 GB disk. This handles PCAP replay comfortably. If you plan to run the sensor on live traffic from your home network (optional, not required), increase to 4 cores and 8 GB RAM.
The design decisions for the lab sensor prioritise learning over performance. Zeek will run with default protocol analyzers enabled (all of them, even ones you won't use in every module). Suricata will run with the full ET Open ruleset. Log rotation will retain 30 days. These settings would be tuned differently in production (disable unused analyzers, tune rule thresholds, adjust retention) — but for learning, you want maximum visibility.
You're deciding whether to build a dedicated sensor VM or install Zeek and Suricata on your existing Ubuntu VM from the IR or Linux IR course.
A dedicated VM keeps your sensor environment clean — no interference from other tools, clear disk usage tracking, and the ability to snapshot the sensor state at known-good configurations. If something breaks, you restore the snapshot rather than debugging.
Installing on an existing VM saves disk space and setup time. If your existing VM has 40+ GB free and 4+ GB RAM, it handles both roles. The risk is tool conflicts — a Zeek update that breaks something might affect your other course work.
For this course, a dedicated VM is recommended. The sensor is a long-lived system that you'll use for 14 modules. Keeping it separate reduces the risk of configuration drift affecting your investigation data.
A VM handles NSM for networks up to approximately 1 Gbps sustained throughput on modern hardware. Most mid-sized organisations (500-2000 users) have internet egress links well under 1 Gbps average utilisation. A properly configured VM with 4 cores, 16 GB RAM, and sufficient storage runs Zeek and Suricata without packet loss at these traffic levels.
Dedicated hardware becomes necessary above 1 Gbps — the bottleneck is usually the vSwitch (virtual switch) that passes traffic to the VM's virtual NIC. Physical NICs with hardware offload (DPDK, PF_RING ZC) bypass this limitation. For 10+ Gbps environments, dedicated hardware is essentially required.
For your lab, a VM with 2 cores and 4 GB RAM handles PCAP replay of any capture size. The processing isn't real-time constrained — Zeek can take 10 seconds to process a PCAP file that represents 1 hour of traffic. The output is identical.
Try it: Verify your virtualisation environment
Setup. Open your hypervisor of choice (VMware Workstation, VirtualBox, or Hyper-V).
Task. Verify you can create a new VM with the required specifications: 2 cores, 4 GB RAM, 40 GB disk. If you've already downloaded Ubuntu 24.04, attach the ISO and boot to the installer screen (don't install yet — NF1.2 covers the installation with specific configuration for the sensor role).
Expected result. The hypervisor accepts the VM configuration and the Ubuntu installer boots. If using VMware, the VM type should be "Linux / Ubuntu 64-bit."
Debugging branch. If virtualisation features are disabled: enable VT-x/AMD-V in your BIOS/UEFI settings. If VMware Workstation isn't available: VirtualBox (free, cross-platform) is the closest alternative. If you're on a Mac with Apple Silicon: use UTM (free) with the ARM64 Ubuntu image.
You've built the sensor and mapped the evidence landscape.
NF0 established why network evidence matters when every other source is compromised. NF1 built your Zeek + Suricata sensor with the 10 investigation query patterns. From here, every module teaches protocol-specific investigation against real attack scenarios.
- DNS deep dive (NF3) — tunnelling detection, DGA analysis, passive DNS infrastructure mapping, and the INC-NE-2026-0227 AiTM phishing DNS trail
- Protocol analysis (NF4–NF7) — HTTP/HTTPS, SMB lateral movement, SSH tunnelling, and email protocol investigation with Zeek metadata and PCAP
- Detection and hunting (NF8–NF11) — Suricata rule writing, C2 beacon detection with JA3, NetFlow analytics, and proactive network threat hunting
- NSM architecture (NF13) — production sensor deployment at 1–10 Gbps with Arkime, Security Onion, and enterprise storage planning
- INC-NE-2026-0830 capstone (NF14) — multi-stage investigation using only network evidence: phishing → domain-fronted C2 → lateral movement → DNS tunnel exfiltration