In this module
PT0.4 The Toolkit and What Comes Next
You've used at least some of the tools in this course — Sysmon, KQL, maybe Atomic Red Team or Sigma. This sub maps the complete toolset, shows each one in action, and names the adjacent skills that make the course easier but aren't prerequisites.
Tools as plumbing, techniques as curriculum
This course is structured around ATT&CK techniques, not tools. A technique sub teaches T1003.001 LSASS dumping — the attack, the telemetry, the detection, the tuning. The tools that make it possible are the plumbing. Here's how each one looks in practice.
Atomic Red Team — running the attack
Atomic Red Team is the primary way you execute techniques in the lab. One command, one technique, one test number:
# Run T1003.001 LSASS Memory — Test 1 (Mimikatz variant)
Invoke-AtomicTest T1003.001 -TestNumbers 1

Output:
PathToAtomicsFolder = C:\AtomicRedTeam\atomics
Executing test: T1003.001-1 Dump LSASS.exe Memory using ProcDump
Command: C:\AtomicRedTeam\atomics\T1003.001\bin\procdump.exe -ma lsass.exe
C:\Windows\Temp\lsass_dump.dmp
Exit code: 0
Done executing test: T1003.001-1 Dump LSASS.exe Memory using ProcDump

That's the attack. It ran procdump.exe against lsass.exe and produced a memory dump. Sysmon Event 10 fired. Now you check whether your detection rule caught it.
The course cites specific test numbers per technique sub — T1003.001 -TestNumbers 1, T1059.001 -TestNumbers 3, etc. Each test is a specific variant of the technique. You can also run techniques manually when the course calls for it (not every variant has an Atomic test).
Sysmon — the telemetry source
Sysmon produces the raw events your rules match on. Here's the Sysmon Event 10 that the Atomic Red Team test above generated:
{
  "EventID": 10,
  "SourceImage": "C:\\AtomicRedTeam\\atomics\\T1003.001\\bin\\procdump.exe",
  "TargetImage": "C:\\Windows\\System32\\lsass.exe",
  "GrantedAccess": "0x1FFFFF",
  "SourceUser": "NORTHGATE\\t.ashworth",
  "UtcTime": "2026-04-22 14:32:01.445"
}

The key fields for detection: TargetImage (what was accessed), GrantedAccess (what access rights were requested), and SourceImage (the process that did the accessing). The Sigma rule you saw in PT0.1 matches on these fields.
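To make the field logic concrete, here's a minimal Python sketch of what a rule over those three fields checks — illustrative only, not course code (the function name and the event dict are made up for this example; the access-mask and path checks mirror the PT0.1 rule's intent):

```python
# Sketch of the LSASS-access detection logic: suspicious when a
# non-system process opens lsass.exe with broad access rights.
# Field names mirror Sysmon Event 10; everything else is illustrative.

SYSTEM_PREFIXES = (
    "C:\\Windows\\System32\\",
    "C:\\Program Files\\Windows Defender\\",
)

def is_suspicious_lsass_access(event: dict) -> bool:
    """True when a non-system process opened lsass.exe with broad rights."""
    if not event.get("TargetImage", "").lower().endswith("\\lsass.exe"):
        return False
    granted = event.get("GrantedAccess", "").lower()
    if "0x1fffff" not in granted and "0x10" not in granted:
        return False
    return not event.get("SourceImage", "").startswith(SYSTEM_PREFIXES)

event = {
    "EventID": 10,
    "SourceImage": "C:\\AtomicRedTeam\\atomics\\T1003.001\\bin\\procdump.exe",
    "TargetImage": "C:\\Windows\\System32\\lsass.exe",
    "GrantedAccess": "0x1FFFFF",
}
print(is_suspicious_lsass_access(event))  # → True
```

The same procdump event that fired Sysmon Event 10 above would match; the same access from a System32-resident process would not.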
Sysmon is configured in Module 1 with the SwiftOnSecurity baseline. On Linux, auditd fills the same role. Here's the equivalent auditd record for a credential access technique on Linux:
type=SYSCALL msg=audit(1713794521.445:892): arch=c000003e syscall=257
success=yes exit=3 a0=ffffff9c a1=7f8a2100000 a2=0 a3=0
items=1 ppid=4872 pid=4891 uid=0 gid=0 euid=0
comm="cat" exe="/usr/bin/cat"
key="shadow_access"
type=PATH msg=audit(1713794521.445:892): item=0
name="/etc/shadow" inode=524289 dev=08:01 mode=0100640

The auditd key="shadow_access" field is the detection anchor — it fires when anything reads /etc/shadow. The course configures these keys in Module 1.
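For illustration, here's how that key= anchor (and the offending binary) can be pulled out of a raw SYSCALL record with a few lines of Python — a sketch only; in practice you'd query by key with ausearch rather than parse records by hand:

```python
import re

# Pull the detection-anchor fields out of a raw auditd SYSCALL record.
# The record string mirrors the example above (abbreviated); the ad-hoc
# parsing is a sketch, not how the course or ausearch processes logs.
record = (
    'type=SYSCALL msg=audit(1713794521.445:892): arch=c000003e syscall=257 '
    'success=yes exit=3 comm="cat" exe="/usr/bin/cat" key="shadow_access"'
)

# auditd records are space-separated name=value pairs; values may be quoted.
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', record))
key = fields["key"].strip('"')
exe = fields["exe"].strip('"')

print(key, exe)  # → shadow_access /usr/bin/cat
```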
Sigma — the canonical detection format
Every detection in the course is Sigma-first. Here's the LSASS multi-variant rule from PT0.1, plus its conversions to all three SIEM platforms the course covers:
title: LSASS Memory Access - Credential Dumping (Multi-Variant)
id: a4f2c8e0-91d7-4b5a-8c3e-6d9f0e1a2b34
logsource:
  category: process_access
  product: windows
detection:
  selection:
    TargetImage|endswith: '\lsass.exe'
    GrantedAccess|contains:
      - '0x10'
      - '0x1FFFFF'
  filter_system:
    SourceImage|startswith:
      - 'C:\Windows\System32\'
      - 'C:\Program Files\Windows Defender\'
  condition: selection and not filter_system
level: critical
// Sentinel KQL — via MDE connector
DeviceProcessEvents
| where TimeGenerated > ago(1h)
| where FileName =~ "lsass.exe"
| where InitiatingProcessFileName !in~ (
"MsMpEng.exe", "csrss.exe", "services.exe"
)
| project TimeGenerated, DeviceName,
InitiatingProcessFileName, AccountName
// Defender XDR Advanced Hunting
// Note: Timestamp, not TimeGenerated
DeviceProcessEvents
| where Timestamp > ago(1h)
| where FileName == "lsass.exe"
| where InitiatingProcessFileName !in~ (
"MsMpEng.exe", "csrss.exe", "services.exe"
)
| project Timestamp, DeviceName,
InitiatingProcessFileName, AccountName
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
EventCode=10 TargetImage="*\\lsass.exe"
| where NOT match(SourceImage, "^C:\\\\Windows\\\\System32\\\\")
| table _time, Computer, SourceImage, GrantedAccess, SourceUser
| sort - _time
One rule, four platforms. The tabs above are the pattern you'll see in every technique sub from Module 2 onward. Sigma is canonical — the platform queries are conversions. If you use a platform the course doesn't show (CrowdStrike, SumoLogic, etc.), the Sigma rule converts to your platform via sigma-cli or pySigma.
You can also convert programmatically:
# Convert Sigma to KQL with sigma-cli (microsoft365defender backend;
# the output also runs in Sentinel via the MDE connector)
sigma convert -t microsoft365defender -p sysmon lsass-credential-dump.yml
# Convert Sigma to Splunk SPL
sigma convert -t splunk -p sysmon lsass-credential-dump.yml

VECTR — tracking what you've tested
VECTR is where you log the results of every technique test. Here's what an entry looks like after running T1003.001:
VECTR — Technique Test Result
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Technique: T1003.001 — LSASS Memory
Date: 2026-04-22
Target: DESKTOP-NGE042 (Windows 11)
Tool used: Atomic Red Team (Test 1 — procdump variant)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Detection results:
Sentinel: DETECTED — alert fired at 14:32:05 (MTTD: 4s)
Defender XDR: DETECTED — alert fired at 14:32:03 (MTTD: 2s)
Splunk: DETECTED — alert fired at 14:32:10 (MTTD: 9s)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
False positives observed: 2
1. VeeamAgent.exe — environmental FP (excluded by path+hash)
2. SCCM CcmExec.exe — environmental FP (excluded by parent)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Verdict: PASS (3/3 SIEMs detected)
Next action: Run comsvcs.dll variant (Test 2)

Every technique sub produces one of these entries. By Module 14, you have sixty-one entries — a complete evidence trail of what you've tested and what the results were.
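The MTTD figures in the entry are nothing exotic — alert time minus execution time, per SIEM. A minimal Python sketch reproducing the numbers above (the variable names are illustrative; this is not the VECTR API):

```python
from datetime import datetime

# Recompute the MTTD figures and verdict from the VECTR entry above.
# Timestamps are rounded to whole seconds; names are illustrative.
executed_at = datetime(2026, 4, 22, 14, 32, 1)
alerts = {
    "Sentinel": datetime(2026, 4, 22, 14, 32, 5),
    "Defender XDR": datetime(2026, 4, 22, 14, 32, 3),
    "Splunk": datetime(2026, 4, 22, 14, 32, 10),
}

# MTTD per SIEM = seconds from execution to alert
mttd = {siem: int((t - executed_at).total_seconds()) for siem, t in alerts.items()}

# PASS only when every SIEM fired
verdict = "PASS" if len(alerts) == 3 and all(alerts.values()) else "PARTIAL"

print(mttd)     # → {'Sentinel': 4, 'Defender XDR': 2, 'Splunk': 9}
print(verdict)  # → PASS
```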
ATT&CK Navigator — visualising coverage
The Navigator is browser-based. It shows your coverage as a colour-coded heatmap. Here's what the colour coding means:
ATT&CK Navigator — Coverage Status
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GREEN Validated — tested in last 90 days, rule fired, tuned
YELLOW Partial — some variants missed, or FP rate > 5%
ORANGE Deployed but untested — rule exists, never validated
RED No coverage — no rule, or rule confirmed broken
GREY Out of scope — technique not relevant
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

You update the Navigator after each module's coverage report. The quarterly assessment presents the heatmap to leadership.
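Navigator heatmaps are just JSON layer files, so the colour scheme above can be generated rather than clicked together. A stripped-down sketch (the status-to-colour mapping is this course's convention; the hex values are illustrative, and real layer files carry additional fields such as a versions block, omitted here):

```python
import json

# Build a minimal ATT&CK Navigator layer encoding the coverage colours.
# Colour hex values and the layer name are illustrative choices.
STATUS_COLOURS = {
    "validated": "#2e7d32",   # green  — tested, fired, tuned
    "partial":   "#f9a825",   # yellow — variants missed or FP rate > 5%
    "untested":  "#ef6c00",   # orange — deployed, never validated
    "none":      "#c62828",   # red    — no rule or rule broken
    "oos":       "#9e9e9e",   # grey   — out of scope
}

def make_layer(results: dict) -> str:
    """results maps an ATT&CK techniqueID to a status keyword."""
    layer = {
        "name": "Detection coverage",
        "domain": "enterprise-attack",
        "techniques": [
            {"techniqueID": tid, "color": STATUS_COLOURS[status]}
            for tid, status in results.items()
        ],
    }
    return json.dumps(layer, indent=2)

print(make_layer({"T1003.001": "validated", "T1059.001": "partial"}))
```

Load the resulting file into the Navigator's "Open Existing Layer" dialog to render the heatmap.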
Per-module attack tooling — what each looks like
The foundation tools land in Module 1. Per-module attack tools are introduced as techniques require them. Here's a preview of what each looks like in practice:
Mimikatz (Module 7 — Credential Access):
# Direct LSASS dump — the variant most rules are written for
mimikatz.exe "privilege::debug" "sekurlsa::logonpasswords" "exit"

Impacket secretsdump.py (Module 7 — DCSync):
# Remote credential dump via DCE/RPC — no local Sysmon event
secretsdump.py NORTHGATE/admin@192.0.2.10 -just-dc-ntlm

BloodHound collection (Module 8 — Discovery):
# Collect AD structure for attack-path mapping
Import-Module SharpHound.ps1
Invoke-BloodHound -CollectionMethod All -OutputDirectory C:\Temp\

AADInternals (Module 2 — M365 Initial Access):
# Extract access token for M365 — used in token replay attacks
$token = Get-AADIntAccessTokenForMSGraph
Get-AADIntTenantDetails -AccessToken $token

Evilginx2 (Module 14 — Capstone CHAIN-HARVEST):
# AiTM phishing proxy — captures session cookies through MFA
evilginx2 -p ./phishlets -debug
# Configure phishlet for Microsoft 365 login
phishlets hostname o365 login.northgateeng.com
phishlets enable o365Each tool is installed with step-by-step instructions in the module that first uses it. You don't install everything on day one — you install each piece as the course reaches it.
Adjacent skills — demonstrated
These make the course easier but aren't prerequisites. Here's what "comfortable" looks like for each:
KQL — can you read this query without pausing?
DeviceProcessEvents
| where TimeGenerated > ago(24h)
| where InitiatingProcessFileName in~ ("powershell.exe", "pwsh.exe")
| where ProcessCommandLine has_any ("-enc", "FromBase64String", "IEX")
| summarize Count = count() by DeviceName, AccountName
| where Count > 5

If you read that and understood it — you're fine. If KQL is newer, the course annotates every query line by line.
Sigma — can you read this rule without pausing?
detection:
  selection:
    EventID: 1
    Image|endswith: '\rundll32.exe'
    CommandLine|contains: 'comsvcs'
  condition: selection

If you understood what that matches on — you're fine. If Sigma is newer, the course teaches it by example across sixty-one rules.
PowerShell — can you read this without Googling?
Get-WinEvent -FilterHashtable @{LogName='Security'; Id=4624} |
Where-Object { $_.Properties[8].Value -eq 10 } |
Select-Object TimeCreated, @{N='User';E={$_.Properties[5].Value}}

Bash — can you read this without Googling?
ausearch -k shadow_access --start today |
aureport -f --summary |
head -20

If most of these read naturally, the course flows. If some are new, you'll learn them in context — the course uses them repeatedly and they become familiar by Module 3 or 4.
What Module 1 asks of you
Module 1 builds the lab. It's free. By the end you've confirmed telemetry flows from all four environments into all three SIEMs by firing T1059.001 (PowerShell execution) and checking each SIEM:
# The Module 1 smoke test — fire T1059.001
Invoke-AtomicTest T1059.001 -TestNumbers 1

Then verify in each SIEM:
// Sentinel — check the event landed
DeviceProcessEvents
| where TimeGenerated > ago(15m)
| where InitiatingProcessFileName =~ "powershell.exe"
| where ProcessCommandLine has "-enc"
| project TimeGenerated, DeviceName, ProcessCommandLine
// Defender XDR Advanced Hunting — same check
DeviceProcessEvents
| where Timestamp > ago(15m)
| where InitiatingProcessFileName == "powershell.exe"
| where ProcessCommandLine contains "-enc"
| project Timestamp, DeviceName, ProcessCommandLine
index=windows sourcetype="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational"
EventCode=1 Image="*\\powershell.exe" CommandLine="*-enc*"
| table _time, Computer, CommandLine
| head 5
If all three return results, your lab is working. If any is empty, Module 1's troubleshooting guide covers the pipeline diagnostics.
The lab is yours permanently. The paid content builds on it — but the foundation is free.
You've built the lab and understand the validation gap.
Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.
- 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
- Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
- Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
- Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
- Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course
Cancel anytime