In this module
PT1.6 Linux VM Build with auditd
You've used Linux before — at least enough to navigate the command line, edit files, and manage services. This sub builds the Linux target VM that serves as the environment for reverse shells, privilege escalation, credential access on Linux, and the Caldera C2 server used in the capstone.
Step 1: Create the VM
VirtualBox
New VM wizard:
Name: PT-LINUX01
ISO Image: Ubuntu Server 24.04 LTS ISO
Type: Linux
Version: Ubuntu (64-bit)
☑ Skip Unattended Installation
Memory: 2048 MB
Processors: 1
Disk: 20 GB (dynamic)
After creation: Settings → Network → Adapter 2 → Enable → Host-only Adapter → select your host-only network.
Hyper-V
Name: PT-LINUX01
Generation: Generation 2
Memory: 2048 MB (Dynamic Memory enabled)
Network: Default Switch
Disk: 20 GB
ISO: Ubuntu Server 24.04 LTS
After creation: Settings → Add Hardware → Network Adapter → PurpleTeamLab. Under Security → uncheck Secure Boot (Ubuntu uses a different boot certificate than Windows — Secure Boot on Gen 2 Hyper-V blocks Ubuntu unless you change the template to "Microsoft UEFI Certificate Authority" or disable it).
VMware
ISO: Ubuntu Server 24.04 LTS
Guest OS: Linux → Ubuntu 64-bit
Name: PT-LINUX01
Disk: 20 GB
Memory: 2048 MB
Processors: 1
Network Adapter 1: NAT
Network Adapter 2: Host-only (add via Customize Hardware)
Step 2: Install Ubuntu Server
Start the VM. It boots from the ISO.
The Ubuntu Server installer is text-based. Navigate with arrow keys and Enter.
- Language — English (or your preference)
- Installer update — if prompted to update the installer, choose "Continue without updating" (faster; you'll update the system after install)
- Keyboard — select your layout
- Type of install — choose "Ubuntu Server" (not minimized)
- Network connections — leave defaults. The NAT adapter gets an IP via DHCP. You'll configure the internal adapter manually after install.
- Proxy — leave blank unless you're behind a corporate proxy
- Mirror — leave the default Ubuntu mirror
- Storage — choose "Use an entire disk" → select the virtual disk → confirm. Leave LVM defaults.
- Storage summary — review and confirm "Done", then "Continue" on the destructive action warning
- Profile setup:
Your name: labadmin
Your server's name: pt-linux01
Pick a username: labadmin
Password: (set a password you'll remember)
- Upgrade to Ubuntu Pro — skip (select "Skip for now")
- SSH Setup — check "Install OpenSSH server". This lets you SSH from your host machine, which is much easier than working in the VM console.
- Featured snaps — don't select any, continue
- Installation — wait for it to complete (5–10 minutes)
- Reboot — when prompted, press Enter. If it says "Please remove the installation medium", just press Enter again (the hypervisor usually detaches the ISO automatically).
After reboot, log in at the console with labadmin and your password.
Step 3: Update the system and install prerequisites
# Update package lists and upgrade everything
sudo apt update && sudo apt upgrade -y
# Install tools you'll need throughout the course
sudo apt install -y net-tools curl wget git python3 python3-pip python3-venv openssh-server
Step 4: Set a static IP on the internal network
First, identify the internal adapter:
# List all network interfaces
ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> ...
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> ... ← NAT (internet)
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> ... ← Internal (lab network)
The adapter names vary by hypervisor. Common names: enp0s8 (VirtualBox), eth1 (Hyper-V), ens34 (VMware). The first adapter (enp0s3, eth0, or ens33) is NAT; the second is internal. Confirm by checking which one has a DHCP address:
ip addr show | grep "inet "
inet 127.0.0.1/8 scope host lo
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
enp0s3 has the DHCP address (10.0.2.15) — that's NAT. The second adapter (enp0s8) has no IP yet — that's the internal one.
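The manual comparison above can be scripted. A sketch that flags any interface without an IPv4 address; the sample output is embedded here so the snippet runs anywhere, but on the VM you would feed in the live `ip link show` and `ip addr show | grep "inet "` output instead:

```shell
# Find interfaces with no IPv4 address -- candidates for the internal adapter.
# Sample output embedded below; on the VM, pipe in the real ip commands instead.
links='1: lo: <LOOPBACK,UP,LOWER_UP>
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP>
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP>'
addrs='inet 127.0.0.1/8 scope host lo
inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3'

echo "$links" | awk -F': ' '{print $2}' | while read -r ifc; do
  # An interface that never appears in the inet list has no IPv4 address yet
  if ! echo "$addrs" | grep -q "[[:space:]]$ifc\$"; then
    echo "no IPv4 yet: $ifc"
  fi
done
```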
Create a Netplan configuration for the internal adapter:
sudo nano /etc/netplan/99-lab-internal.yaml
Enter the following (replace enp0s8 with your actual internal adapter name):
network:
  version: 2
  ethernets:
    enp0s8:
      addresses:
        - 10.0.0.20/24
Important: YAML is indentation-sensitive. Use exactly 2 spaces per level, no tabs. If you get a parse error, check the indentation.
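Stray tabs are the most common cause of that parse error. A quick pre-flight check you can run before `netplan apply` (this sketch writes a sample file to /tmp for illustration; on the VM, point the grep at your real /etc/netplan file):

```shell
# Write a sample config (spaces only), then scan it for tab characters
cat > /tmp/99-lab-internal.yaml <<'EOF'
network:
  version: 2
  ethernets:
    enp0s8:
      addresses:
        - 10.0.0.20/24
EOF

if grep -q "$(printf '\t')" /tmp/99-lab-internal.yaml; then
  echo "tabs found -- fix indentation before netplan apply"
else
  echo "no tabs -- indentation uses spaces"
fi
```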
Apply the configuration:
sudo netplan apply
If you see a warning about permissions, it's safe to ignore in a lab. Verify:
ip addr show enp0s8 | grep "inet "
inet 10.0.0.20/24 brd 10.0.0.255 scope global enp0s8
Test connectivity to the Windows machines:
ping -c 2 10.0.0.10 # Windows endpoint
ping -c 2 10.0.0.1     # Domain controller
Both should return replies. If not, check that all three VMs are on the same internal/host-only network in the hypervisor settings.
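Pinging hosts one by one gets tedious as the lab grows. A small loop (assuming the 10.0.0.0/24 lab addressing used above) reports each host on one line:

```shell
# Ping each lab host once and summarize reachability
for host in 10.0.0.10 10.0.0.1; do
  if ping -c 1 -W 2 "$host" >/dev/null 2>&1; then
    echo "reachable:   $host"
  else
    echo "UNREACHABLE: $host"
  fi
done
```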
Step 5: Install and configure auditd
auditd is the Linux equivalent of Sysmon — it produces audit records for system calls, file access, process execution, and network activity.
# Install auditd and its dispatcher plugins
sudo apt install -y auditd audispd-plugins
Verify the service is running:
sudo systemctl status auditd
● auditd.service - Security Auditing Service
     Loaded: loaded (/lib/systemd/system/auditd.service; enabled; preset: enabled)
     Active: active (running) since Fri 2026-04-25 14:45:01 UTC; 10s ago
If the status shows "inactive" or "failed", start it:
sudo systemctl enable auditd
sudo systemctl start auditd
Step 6: Install the Neo23x0 audit rules
The default auditd rules log almost nothing useful for detection. The Neo23x0 ruleset adds rules for the specific system calls, file accesses, and process executions that attack techniques produce.
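Before downloading the full set, it helps to see the anatomy of a single rule. A minimal sketch; the path and key name here are illustrative, not taken from the Neo23x0 set:

```shell
# One auditd watch rule, piece by piece (path and key name are illustrative):
#   -w /etc/hosts        watch this path
#   -p wa                log writes (w) and attribute changes (a)
#   -k lab_hosts_watch   tag matching events so `ausearch -k lab_hosts_watch` finds them
rule='-w /etc/hosts -p wa -k lab_hosts_watch'
echo "$rule"
# On the VM you would load it live with: sudo auditctl $rule
```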
# Download the Neo23x0 Linux audit rules
sudo curl -o /etc/audit/rules.d/audit.rules \
"https://raw.githubusercontent.com/Neo23x0/auditd/master/audit.rules"
# Restart auditd to load the new rules
sudo systemctl restart auditd
Verify the rules loaded:
sudo auditctl -l | head -15
-w /etc/shadow -p rwxa -k shadow_access
-w /etc/passwd -p rwxa -k passwd_access
-w /etc/sudoers -p rwxa -k sudoers_access
-w /etc/sudoers.d -p rwxa -k sudoers_access
-a always,exit -F arch=b64 -S execve -k exec_cmd
-a always,exit -F arch=b64 -S connect -k net_connect
-w /usr/bin/wget -p x -k exec_download
-w /usr/bin/curl -p x -k exec_download
-w /usr/bin/ssh -p x -k exec_ssh
The -k flag is the detection key — it tags events for easy searching. When you search for key="shadow_access" in the audit log, you find every access to /etc/shadow.
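Because every rule carries a key, you can get a quick per-key event count straight from the raw log. A sketch against sample log lines (made up for illustration); on the VM you would replace the here-doc with `sudo cat /var/log/audit/audit.log`:

```shell
# Count audit events per detection key (sample lines below are illustrative)
cat <<'EOF' | grep -o 'key="[^"]*"' | sort | uniq -c | sort -rn
type=SYSCALL msg=audit(1745588553.112:892): comm="cat" exe="/usr/bin/cat" key="shadow_access"
type=SYSCALL msg=audit(1745588554.201:893): comm="curl" exe="/usr/bin/curl" key="exec_download"
type=SYSCALL msg=audit(1745588555.330:894): comm="cat" exe="/usr/bin/cat" key="shadow_access"
EOF
```

With this sample input, shadow_access counts twice and exec_download once, sorted busiest-first, which is a fast way to spot noisy rules worth tuning.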
Troubleshooting: if auditctl -l returns "No rules" or an error:
# Check the rules file for syntax errors
sudo auditctl -R /etc/audit/rules.d/audit.rules
# Errors will show the line number and reason
# If the file is corrupt, re-download
sudo curl -o /etc/audit/rules.d/audit.rules \
"https://raw.githubusercontent.com/Neo23x0/auditd/master/audit.rules"
sudo systemctl restart auditd
Step 7: Generate a test event and find it
# Read /etc/shadow — this triggers the shadow_access audit rule
sudo cat /etc/shadow > /dev/null
Now find the event in the audit log:
sudo ausearch -k shadow_access --start recent
----
time->Fri Apr 25 14:52:33 2026
type=SYSCALL msg=audit(1745588553.112:892): arch=c000003e syscall=257
success=yes exit=3 a0=ffffff9c a1=7f8a2100000 a2=0 a3=0
items=1 ppid=4872 pid=4891 auid=1000 uid=0 gid=0 euid=0
comm="cat" exe="/usr/bin/cat"
key="shadow_access"
type=PATH msg=audit(1745588553.112:892): item=0
name="/etc/shadow" inode=524289 dev=08:01 mode=0100640
ouid=0 ogid=42
Key fields for detection:
- comm="cat" — the command that ran
- exe="/usr/bin/cat" — the full binary path
- key="shadow_access" — the detection tag (this is what your rules search for)
- uid=0 — ran as root (via sudo)
- auid=1000 — the original login UID, i.e. which user initiated this via sudo (1000 is typically the first non-root user, labadmin)
When an attacker reads /etc/shadow to extract password hashes (T1003.008), this is the event your Linux detection rule will match on.
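To pull just the detection-relevant fields out of a raw record, splitting on spaces is enough for a first pass. A sketch using the sample SYSCALL line from above:

```shell
# Extract the fields your detections key on from one raw SYSCALL record
line='type=SYSCALL msg=audit(1745588553.112:892): auid=1000 uid=0 comm="cat" exe="/usr/bin/cat" key="shadow_access"'
# One token per line, then keep only the interesting field=value pairs
echo "$line" | tr ' ' '\n' | grep -E '^(auid|uid|comm|exe|key)='
```

This prints one field per line (auid, uid, comm, exe, key), which is handy when eyeballing `ausearch` output or piping records into a quick triage script.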
If ausearch returns nothing:
# Check the audit log directly
sudo tail -20 /var/log/audit/audit.log | grep shadow
# Check auditd is actually logging
sudo auditctl -s
# Look for "enabled 1" — if it shows "enabled 0", auditd is loaded but not enforcing
Step 8: Install Caldera
MITRE Caldera is the adversary emulation framework used for chain emulation in the capstone (Module 14). Install it now so it's ready when you need it.
# Clone Caldera
cd /opt
sudo git clone https://github.com/mitre/caldera.git --recursive
sudo chown -R labadmin:labadmin /opt/caldera
# Create a Python virtual environment
cd /opt/caldera
python3 -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
This takes 3–5 minutes. If pip install fails with dependency errors, try:
pip install --upgrade pip
pip install -r requirements.txt
Verification — start Caldera briefly to confirm it works:
cd /opt/caldera
source venv/bin/activate
python3 server.py --insecure &
sleep 15
curl -s http://localhost:8888 | head -3
<!DOCTYPE html>
<html>
<head>
If curl returns HTML, Caldera is working. Stop it for now:
kill %1
deactivate
You won't use Caldera until Module 14 — this just confirms the install is good.
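The fixed `sleep 15` is a guess; on a slower VM Caldera can take longer to come up. A polling helper is more reliable (the function name `wait_for_http` is my own, not part of Caldera):

```shell
# Poll a URL until it answers, instead of sleeping for a fixed time
wait_for_http() {
  url=$1
  tries=${2:-30}   # default: up to 30 one-second attempts
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fs "$url" >/dev/null 2>&1; then
      echo "up after ${i} attempt(s)"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timed out after ${tries} attempts"
  return 1
}

# Usage on the VM, after `python3 server.py --insecure &`:
#   wait_for_http http://localhost:8888 && curl -s http://localhost:8888 | head -3
```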
Step 9: Create a test user
Create a low-privilege user account for attack scenarios where the attacker starts with limited access:
# Create a low-privilege user
sudo useradd -m -s /bin/bash labuser
echo "labuser:LabUser2026!" | sudo chpasswd
# Verify the account works
sudo su - labuser -c "whoami && id"
labuser
uid=1001(labuser) gid=1001(labuser) groups=1001(labuser)
Step 10: Enable SSH from the host
If you haven't already, test SSH access from your host machine. This is much more convenient than working in the VM console:
# From your host machine (PowerShell or terminal)
ssh labadmin@10.0.0.20
If SSH connects, you can work in a proper terminal with copy/paste support. If it fails, check that the SSH service is running on the Linux VM:
# On the Linux VM
sudo systemctl status ssh
# If not running:
sudo systemctl enable ssh
sudo systemctl start ssh
Step 11: Snapshot
# VirtualBox (from host)
VBoxManage snapshot "PT-LINUX01" take "Clean-Auditd-Baseline"
# Hyper-V (from host)
Checkpoint-VM -Name "PT-LINUX01" -SnapshotName "Clean-Auditd-Baseline"
VMware: right-click → Snapshot → Take Snapshot → name: Clean-Auditd-Baseline
Verification checklist
☐ Ubuntu Server VM running as pt-linux01
☐ System updated (apt update && apt upgrade completed)
☐ Two network interfaces present
☐ Static IP 10.0.0.20 on internal adapter
☐ Can ping Windows endpoint (10.0.0.10) and DC (10.0.0.1)
☐ auditd service running
☐ Neo23x0 rules loaded (auditctl -l shows rules with -k keys)
☐ Test event found in audit log (shadow_access key)
☐ Caldera installed and starts successfully
☐ labuser account created and works
☐ SSH accessible from host machine
☐ VM snapshot "Clean-Auditd-Baseline" taken
You've built the lab and understand the validation gap.
Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.
- 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
- Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
- Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
- Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
- Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course