1. Your SOC triages three alerts over 6 hours: a medium-severity PowerShell alert on a workstation, a low-severity unusual sign-in for the same user, and a low-severity new scheduled task on a server. Each analyst handles their alert independently and closes it. Two weeks later, an IR investigation discovers persistent attacker access on the server. What was the primary failure?
The detection rules were too weak — they should have generated high-severity alerts a
The rules fired correctly. The severity was appropriate for each individual event. The problem wasn't detection quality — it was the absence of correlation between related events.
The alerts were triaged as isolated events without checking for campaign-level correlation b
Correct. Each alert was handled independently. No analyst checked whether the three alerts — same user identity, escalating privilege across systems, 6-hour operational window — represented a coordinated campaign. The correlation gap is cognitive, not technical.
The SIEM lacked multi-table correlation capability c
Modern SIEMs (Sentinel, Splunk, Elastic) all support multi-table correlation. The capability exists. The correlation rule didn't — because writing campaign-level correlation rules requires understanding how campaigns are structured.
The SOC needed a faster triage SLA d
Faster triage processes each alert individually more quickly — it doesn't connect related alerts into campaigns. This is the 15-minute SLA myth: triage speed and campaign detection are different capabilities.
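The correlation gap in this scenario can be closed with a rule that groups alerts by identity rather than triaging each in isolation. A minimal sketch, assuming illustrative alert fields and thresholds (the 6-hour window and 2-host minimum mirror the scenario, not any specific SIEM schema):

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative alerts mirroring the scenario: same user, three systems,
# one operational window. Field names are assumptions, not a SIEM schema.
ALERTS = [
    {"time": datetime(2024, 5, 1, 9, 0),   "user": "jdoe", "host": "WS-042", "rule": "Suspicious PowerShell"},
    {"time": datetime(2024, 5, 1, 11, 30), "user": "jdoe", "host": "AZ-AD",  "rule": "Unusual sign-in"},
    {"time": datetime(2024, 5, 1, 14, 45), "user": "jdoe", "host": "SRV-07", "rule": "New scheduled task"},
]

def correlate_by_identity(alerts, window=timedelta(hours=6), min_hosts=2):
    """Flag a possible campaign when one identity triggers alerts on
    multiple systems inside a single operational window."""
    by_user = defaultdict(list)
    for a in alerts:
        by_user[a["user"]].append(a)
    campaigns = []
    for user, items in by_user.items():
        items.sort(key=lambda a: a["time"])
        span = items[-1]["time"] - items[0]["time"]
        hosts = {a["host"] for a in items}
        if span <= window and len(hosts) >= min_hosts:
            campaigns.append({"user": user, "hosts": sorted(hosts),
                              "rules": [a["rule"] for a in items]})
    return campaigns

print(correlate_by_identity(ALERTS))
```

The same grouping logic translates directly into a SIEM correlation query; the point is that each alert's individual severity never needs to change — the campaign signal comes from the join on identity and time.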
2. An attacker dumps LSASS on a compromised workstation. Thinking as the attacker, what is the most likely NEXT operational decision?
Exfiltrate the credentials to external infrastructure a
Exfiltrating credentials is unnecessary — the attacker uses them directly from the compromised environment. Sending credentials externally adds detection risk (network telemetry) without operational benefit.
Delete the LSASS dump file to cover tracks b
Evidence destruction may happen eventually, but it's not the immediate next step. The attacker's priority after credential acquisition is to USE the credentials for their objective — movement or privilege escalation — before worrying about cleanup.
Use the harvested credentials to move laterally to a higher-value target c
Correct. The attacker dumped LSASS to acquire credentials. Credentials are operational resources — fuel for movement. The next decision is WHERE to move: which system has higher privileges, access to the objective, or a path to domain admin. The credential dump was a means, not an end.
Establish persistence on the current workstation d
Persistence on the initial workstation is typically established in the first few minutes post-compromise — before the LSASS dump. By the time the attacker is harvesting credentials, they're preparing to move, not to stay.
3. You're allocating 40 hours of detection engineering time. Which investment produces the most durable detection value?
5 campaign-correlation rules that detect operational patterns from your last 3 confirmed incidents a
Correct. Campaign-correlation rules detect at the TTP level — the top of the Pyramid of Pain. Operational patterns are the most expensive thing for the attacker to change. These rules survive infrastructure rotations, tool changes, and technique substitutions. They detect the next campaign, not just the last one.
Integration of a threat intelligence feed adding 500 IOCs per week b
IOC feeds detect at the bottom of the Pyramid of Pain. The indicators change in hours to days. The feed detects the attacker's last campaign's infrastructure — which they've already rotated. Automate IOC ingestion, but don't spend engineering time on it.
20 new technique-level Sigma rules for ATT&CK techniques not currently covered c
Technique-level rules are valuable (the Purple Teaming layer), but they detect individual events, not campaigns. If your current gap is campaign correlation (which Module 0 established as the common gap), technique-level rules won't close it.
Tuning existing rules to reduce false positive rate from 80% to 20% d
Tuning is important for SOC efficiency, but it improves existing technique-level detection — it doesn't add campaign-level detection capability. If the question is "which investment produces the most durable detection VALUE," campaign-correlation rules add a capability that didn't exist before.
4. You observe fast, noisy attacker activity with commodity tools (Mimikatz, PsExec, batch scripts for lateral movement). The attacker is moving rapidly across the domain. Based on the attacker's constraint profile, what is the most likely objective?
Long-term espionage — the attacker will establish quiet persistence a
Espionage operators are patient and quiet. They use custom tooling, long sleep intervals, and minimal lateral movement. Noisy, fast-moving activity with commodity tools is the opposite of the espionage constraint profile.
Supply chain compromise — the attacker will pivot to downstream customers b
Supply chain operators need sustained access to the target's development or deployment infrastructure. They operate carefully to maintain access over weeks or months. Fast, noisy movement suggests a short timeline, not a patient supply chain operation.
Insider threat — the activity is from a compromised internal account c
Insider threats typically don't use commodity attack tools like Mimikatz and PsExec — they use their existing legitimate access. The tooling and movement pattern indicate an external attacker, not an insider.
Ransomware deployment — the attacker is under time pressure d
Correct. Fast movement, commodity tools, noisy lateral movement across the domain — this is the ransomware operator constraint profile. They have a short window (typically 24-72 hours) before the victim's IR team responds. Speed matters more than stealth. Expect backup destruction, shadow copy deletion, and ransomware staging to follow rapidly. Prioritize protecting backups and isolating critical systems.
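The "expect backup destruction" prediction is directly actionable: the common pre-encryption commands are well known and can be flagged in process telemetry. A minimal sketch — the function and event shape are illustrative, but the command patterns (`vssadmin delete shadows`, `wmic shadowcopy delete`, `bcdedit ... recoveryenabled no`, `wbadmin delete catalog`) are real ransomware tradecraft:

```python
import re

# Patterns for common ransomware pre-encryption commands (backup and
# recovery destruction). The matching function is an illustrative
# sketch, not a specific EDR rule format.
BACKUP_DESTRUCTION = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.I),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.I),
    re.compile(r"bcdedit(\.exe)?\s+.*recoveryenabled\s+no", re.I),
    re.compile(r"wbadmin(\.exe)?\s+delete\s+catalog", re.I),
]

def is_backup_destruction(cmdline: str) -> bool:
    """True if a process command line matches a known backup-destruction pattern."""
    return any(p.search(cmdline) for p in BACKUP_DESTRUCTION)

print(is_backup_destruction("vssadmin.exe Delete Shadows /All /Quiet"))
```

In the scenario above, this class of detection is the tripwire you want armed before the operator reaches the staging phase — it fires minutes before encryption, not after.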
5. A defender says: "We block Cobalt Strike beacons with our network signatures, so we're protected from C2." What's wrong with this claim?
Cobalt Strike is no longer used by attackers a
Cobalt Strike is still widely used. The claim's problem isn't about Cobalt Strike's prevalence — it's about the assumption that blocking one framework blocks C2 capability.
Cobalt Strike is one of many C2 frameworks, and malleable profiles can disguise its traffic as legitimate web requests b
Correct. Blocking Cobalt Strike's default beacon signature blocks one tool's default configuration. Attackers use malleable C2 profiles to disguise Cobalt Strike traffic as legitimate web requests (mimicking Outlook, Teams, or CDN traffic). And Cobalt Strike is one of dozens of frameworks — Sliver, Mythic, Havoc, Brute Ratel all provide equivalent C2 capability with different signatures. Blocking a tool is not blocking a capability.
Network signatures can't detect encrypted traffic c
While encryption does complicate signature detection (TLS-encrypted C2 bypasses most content-based signatures), the broader problem is that blocking one framework doesn't block C2 capability. The attacker switches frameworks, not just encryption.
The defender should block at the endpoint, not the network d
Endpoint detection is important, but the claim's fundamental flaw is equating "block one tool" with "block the capability." The attacker can switch C2 frameworks. Detection should target C2 behavioral patterns (beaconing intervals, data ratios, session characteristics) that persist across frameworks.
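One of the behavioral patterns mentioned above — beaconing intervals — can be scored without any framework signature. A minimal sketch, assuming per-destination connection timestamps in seconds and illustrative thresholds: the coefficient of variation of inter-connection intervals is near zero for automated check-ins and high for human browsing.

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Coefficient of variation of inter-connection intervals to one
    destination. Values near 0 mean highly regular, beacon-like timing;
    human-driven traffic scores much higher. Thresholds are illustrative."""
    ts = sorted(timestamps)
    deltas = [b - a for a, b in zip(ts, ts[1:])]
    if len(deltas) < 3:
        return None  # too few samples to judge regularity
    m = mean(deltas)
    return pstdev(deltas) / m if m else None

# Check-ins every ~60 s with small jitter score well under 0.1.
regular = [0, 60, 119, 181, 240, 299, 360]
print(beacon_score(regular))
```

Because the score keys on timing rather than content, it survives a framework swap from Cobalt Strike to Sliver or Mythic — though operators can raise the jitter setting to blunt it, which is why it's one signal among several, not a standalone verdict.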
6. Your threat intelligence platform delivers 500 IOCs per week. Your detection engineering team spends 10 hours per week managing IOC-based rules. According to the Pyramid of Pain framework, what should change?
Increase the IOC feed to 1,000 per week for better coverage a
More IOCs at the same pyramid level doesn't improve campaign detection. It increases the volume of bottom-of-pyramid indicators that the attacker changes in hours. More is not better when the fundamental approach is targeting disposable indicators.
Reduce the feed to only high-confidence IOCs b
Filtering IOCs by confidence improves signal quality but doesn't change the pyramid level. High-confidence IOCs are still IOCs — the attacker still changes them easily. The 10 hours per week is still invested at the bottom of the pyramid.
Automate IOC ingestion fully and redirect the 10 engineering hours to campaign-correlation rules c
Correct. IOC matching should be automated — let the threat intel platform push IOCs directly into the SIEM with no manual engineering effort. The 10 hours freed up should be invested in campaign-correlation rules at the TTP level, where detection value persists across campaigns. You keep the IOC coverage (automated) and ADD campaign coverage (engineered).
Replace IOC feeds with YARA rules for better file-level detection d
YARA rules detect at the tool level (middle of the pyramid) — better than hash-based IOCs, but still not at the campaign level. The engineering time should go to the top of the pyramid, not the middle.
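The division of labor argued above — automate the bottom of the pyramid, engineer the top — rests on IOC matching being a zero-engineering set lookup. A minimal sketch with illustrative feed and event shapes (not any specific TI platform's API):

```python
# Automated side of the pipeline: refresh a lookup set from the feed and
# flag matching events. No per-indicator engineering effort is needed;
# this is what should run unattended while analysts write campaign rules.
def build_ioc_index(feed):
    """Normalize feed indicators into a constant-time lookup set."""
    return {ioc["value"].lower() for ioc in feed}

def match_iocs(events, ioc_index):
    """Flag events whose indicator appears in the current index."""
    return [e for e in events if e["indicator"].lower() in ioc_index]

feed = [{"value": "evil.example"},
        {"value": "5f4dcc3b5aa765d61d8327deb882cf99"}]
events = [{"indicator": "evil.example",   "host": "WS-042"},
          {"indicator": "benign.example", "host": "WS-007"}]
print(match_iocs(events, build_ioc_index(feed)))
```

Rebuilding the index on each feed delivery is the whole maintenance burden — which is exactly why the 10 weekly engineering hours belong in correlation logic instead.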
7. During an active incident, you identify that the attacker has compromised a domain user account and moved laterally to a file server containing financial data. Applying the perspective-switching framework, what question should you ask FIRST?
"Based on the attacker's operational profile so far, is their objective likely ransomware, data theft, or persistent access — and what does that tell me about their next step?" a
Correct. The attacker's constraint profile (fast/noisy vs slow/quiet) predicts their objective, which determines their next step. A ransomware operator on the file server will stage for encryption. A data theft operator will stage for exfiltration. A persistent access operator will establish persistence before touching data. The answer to this question determines your containment priority.
"Which ATT&CK techniques did the attacker use to get here?" b
Technique classification is useful for reporting and for checking whether your detection rules fired. But during an active incident, the priority question is "what happens next?" not "what category was the last step?" Technique classification is backward-looking; objective prediction is forward-looking.
"What data has been accessed on the file server?" c
Knowing what was accessed matters for impact assessment, but it's not the FIRST question during an active incident. The first question is what the attacker will do NEXT — because the answer determines your containment action. If they're staging for ransomware, protect backups NOW. If they're staging for exfiltration, monitor egress NOW.
"Has the attacker compromised any other accounts?" d
Scoping the compromise (which accounts, which systems) is important but it's an investigation task. During an active incident, the priority is predicting the next step so you can pre-position containment. Once you've predicted and acted, then scope the full compromise.
8. What is the primary reason this course uses pre-generated campaign telemetry datasets instead of a live attack lab (like the Purple Teaming course)?
The offensive techniques are too dangerous to run in a lab a
The Purple Teaming course safely runs these same offensive techniques in a lab, so danger isn't the reason. The skill being developed is different — analysis of multi-system, multi-day campaign telemetry, not single-technique observation.
Datasets are cheaper than maintaining a lab environment b
Cost is not the driver. The PT course uses a lab because technique-level observation requires firing the technique and watching the telemetry. The OD course uses datasets because campaign-level analysis requires processing realistic volumes of multi-system, multi-day data — which is an analysis skill, not an execution skill.
The course doesn't cover hands-on offensive techniques c
The course covers offensive techniques — but from the analytical perspective, not the execution perspective. The learner doesn't need to fire the technique because the skill being developed is campaign analysis, not technique validation.
Campaign detection is an analysis skill requiring realistic multi-system, multi-day telemetry — which datasets provide more effectively than a self-generated lab d
Correct. Campaign detection requires processing telemetry from multiple systems over realistic timeframes (hours to days) with legitimate baseline noise mixed in. A self-generated lab produces clean, single-technique telemetry — useful for PT but not for developing the messy, real-world analysis skill that campaign detection demands. Datasets simulate the actual investigation experience.