6-8 hours · Module 1 · Free
Check My Knowledge — How Attackers Plan Operations
Ten scenario-based questions. Select the best answer for each.
1. An attacker compromised an M365 account three weeks ago. Since then, they've only accessed the CFO's shared mailbox on Tuesday and Thursday mornings between 09:00 and 10:30. No lateral movement, no discovery commands, no additional account compromises. What is the most likely objective?

   A) Financial — they're waiting for the right invoice to redirect
   B) Intelligence — they're running a regular collection cadence against executive communications
   C) Access — they're positioning for supply chain exploitation
   D) Disruption — they're mapping the environment before deploying a wiper

   Correct: B. The strict timing discipline (same days, same hours, same target) and three-week duration without escalation indicate a collection cadence — the hallmark of an intelligence objective. Financial attackers (BEC) typically act within days. Disruption and access objectives require broader environmental access.
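Timing discipline like this can be surfaced directly from mailbox-audit timestamps. A minimal sketch, assuming a flat list of access timestamps (the threshold and synthetic data are illustrative, not part of the scenario):

```python
from collections import Counter
from datetime import datetime

def cadence_concentration(timestamps):
    """Fraction of accesses falling in the single most common
    (weekday, hour) slot — values near 1.0 indicate a rigid schedule."""
    slots = Counter((t.weekday(), t.hour) for t in timestamps)
    return max(slots.values()) / len(timestamps)

# Synthetic Tuesday/Thursday 09:00 accesses over three weeks
hits = [datetime(2024, 5, d, 9, 15) for d in (7, 9, 14, 16, 21, 23)]
print(cadence_concentration(hits))  # -> 0.5 (all hits land in just two slots)
```

A human's ad-hoc mailbox use spreads across many weekday/hour slots; a scripted collection job concentrates in one or two, which is what a high concentration score flags.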
2. During an investigation, you observe that the initial access used a zero-day exploit against your VPN appliance, but post-exploitation used Mimikatz with default parameters and PsExec with default service names. What does this skill discontinuity suggest?

   A) The attacker was highly skilled but chose commodity tools to save time
   B) The initial access was conducted by a different actor (likely an IAB) than the post-exploitation (likely a ransomware affiliate who purchased the access)
   C) The attacker's capability degraded during the operation due to tool failures
   D) This is normal — attackers routinely mix sophisticated and commodity techniques

   Correct: B. A zero-day exploit represents high budget and high capability. Default Mimikatz and PsExec represent low-to-medium capability. This skill discontinuity is the handoff signature — a high-capability initial access broker sold access to a lower-skill affiliate. The investigation should assess whether the broker has access to other organisations via the same zero-day.
3. Your SIEM shows 200 failed authentication attempts across 200 unique user accounts over 48 hours. Each account has exactly one failure. Source IPs are distributed across residential proxy ranges. Your brute-force detection rule (5 failures per account in 5 minutes) didn't fire. What are you observing?

   A) A distributed denial-of-service attack against your authentication infrastructure
   B) Normal authentication noise from employees with typos
   C) A low-and-slow password spray — one password tested against 200 accounts, designed to stay below per-account alerting thresholds
   D) An initial access broker testing credentials from a breach database

   Correct: C. One failure per account, distributed IPs, residential proxies, and 48-hour distribution are textbook password-spray indicators. The spray is designed to stay below your per-account threshold (5 failures in 5 minutes). Detection requires cross-account correlation: 200 unique accounts with single failures from residential proxy ranges within a 48-hour window. Check whether any of those 200 attempts succeeded.
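Cross-account correlation of this kind is straightforward to sketch. Assuming a flattened auth log of (timestamp, account, source IP, success) tuples — the field layout and thresholds are illustrative, not taken from any specific SIEM:

```python
from collections import Counter
from datetime import datetime

def detect_spray(events, min_accounts=50, max_failures_per_account=2):
    """Flag a spray: many distinct accounts, each with only one or two
    failures — the inverse of a per-account brute-force signature."""
    per_account = Counter(acct for _, acct, _, ok in events if not ok)
    low_and_slow = [a for a, n in per_account.items()
                    if n <= max_failures_per_account]
    return len(low_and_slow) >= min_accounts

# 200 accounts, one failure each, scattered source IPs (synthetic)
spray = [(datetime(2024, 5, 1, 2, i % 60), f"user{i}",
          f"203.0.113.{i % 250}", False) for i in range(200)]
# One account hammered 200 times — classic brute force, not a spray
brute = [(datetime(2024, 5, 1, 2, i % 60), "admin",
          "198.51.100.7", False) for i in range(200)]

print(detect_spray(spray), detect_spray(brute))  # True False
```

A production rule would also bound the time window (e.g. the 48-hour span from the scenario) and enrich source IPs against residential-proxy ASN lists; this sketch shows only the core cross-account pivot.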
4. A ransomware incident is discovered at 07:00 Monday. The encryption started at 22:00 Friday. Forensic analysis shows the attacker moved from initial access to encryption in approximately 52 hours. Why did the attacker choose this timing?

   A) The attacker works business hours in their timezone
   B) Friday evening deployment maximises encryption time before Monday-morning discovery, exploiting reduced weekend staffing and delayed response capability
   C) The 52-hour timeline was dictated by the ransomware's technical deployment requirements
   D) The timing was coincidental — ransomware operators deploy when ready, not on specific schedules

   Correct: B. Friday evening deployment is a deliberate timing choice. The attacker gains 36-48 hours of uncontested encryption time over the weekend, when security staffing is minimal, response capability is reduced, and executives who authorise incident response spending may be unreachable. This is the documented ransomware timing pattern.
5. You're building an Operational Profile during an investigation. Evidence so far: the attacker used a custom C2 framework you've never seen before, moved between systems at a rate of one hop per day, and has only accessed the R&D file share containing product roadmap documents. How would you classify this adversary's risk tolerance?

   A) High — they're using an unknown tool which is risky
   B) Medium — the pace is moderate
   C) Low — custom tooling (avoids known signatures), slow pace (avoids temporal clustering), and targeted data access (avoids unnecessary exposure) all indicate an operator prioritising stealth to protect long-term access
   D) Cannot be determined from this evidence

   Correct: C. Every observed behaviour prioritises stealth: custom C2 avoids signature-based detection, one hop per day avoids temporal correlation alerts, and accessing only the specific target (R&D roadmap) avoids generating unnecessary log entries. This is a low-risk-tolerance operator protecting access they invested significant effort to establish — consistent with an intelligence or access objective.
6. Which of the following is the most impactful finding from a passive reconnaissance self-assessment?

   A) Your DNS records reveal that you use Microsoft 365
   B) A job posting mentions "experience with CrowdStrike Falcon required"
   C) 47 employee email/password pairs appear in a recent breach database, and your organisation doesn't enforce phishing-resistant MFA
   D) Certificate transparency logs reveal an internal hostname (staging.company.com)

   Correct: C. Breached credentials with reusable passwords provide direct access without exploitation. If those credentials work against your M365 tenant (no phishing-resistant MFA), the attacker doesn't need phishing, exploits, or any other technique — they authenticate as a legitimate user. The other findings inform the attack plan but don't provide direct access.
7. During an active ransomware investigation, you detect `vssadmin delete shadows /all` executing on a backup server at 03:00. No prior alerts fired for discovery, credential access, or lateral movement. What does this tell you about your detection coverage?

   A) Your detection is working — you caught the backup deletion
   B) You missed the discovery, credential access, and lateral movement phases entirely — your detection caught only the staging phase, which is the last containment window before encryption
   C) The attacker skipped the earlier phases and went directly to backup destruction
   D) The attacker used a zero-day that bypassed your earlier detection rules

   Correct: B. Backup deletion at 03:00 on a server means the attacker has already completed discovery (they know where the backups are), credential access (they have credentials that work on the backup server), and lateral movement (they reached the backup server from their initial foothold). Your detection rules for those earlier phases either don't exist or didn't fire. You're in the last containment window — contain immediately, then investigate why you missed the preceding 24-48 hours of activity.
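Detection for this staging phase usually reduces to command-line pattern matching over process-creation events. A minimal sketch — the pattern list covers common shadow-copy destruction command lines and is illustrative, not exhaustive:

```python
import re

# Common pre-encryption staging command lines (illustrative, not exhaustive)
STAGING_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.I),
    re.compile(r"wmic(\.exe)?\s+shadowcopy\s+delete", re.I),
    re.compile(r"wbadmin(\.exe)?\s+delete\s+catalog", re.I),
]

def is_staging_command(cmdline: str) -> bool:
    """True if a process command line matches a known staging pattern."""
    return any(p.search(cmdline) for p in STAGING_PATTERNS)

print(is_staging_command("vssadmin delete shadows /all /quiet"))  # True
print(is_staging_command("vssadmin list shadows"))                # False
```

As the explanation above notes, this alert fires in the last containment window, so it should page immediately rather than queue for triage.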
8. An attacker's operational profile shows: low budget, short timeline, medium capability, high risk tolerance. Which campaign type does this most closely match?

   A) State-sponsored espionage
   B) Ransomware affiliate
   C) Supply chain positioning
   D) Initial access broker

   Correct: B. Low budget (commodity tools), short timeline (24-72 hour ransomware sprint), medium capability (follows playbooks with some adaptation), and high risk tolerance (accepts noise to achieve speed) is the canonical ransomware affiliate profile. State-sponsored operations have high budget and low risk tolerance. Supply chain operations have long timelines. IABs have medium timelines and focus on access, not objective execution.
9. Why are supply chain compromises nearly impossible to detect at the distribution phase?

   A) The malicious code is encrypted and can't be scanned
   B) Supply chain attacks only target organisations without security monitoring
   C) The poisoned update is delivered through the vendor's legitimate, trusted update mechanism — signed by the vendor, expected by the customer, installed by the normal update process
   D) The attacker deletes evidence of the compromise during distribution

   Correct: C. The distribution phase uses the vendor's own trusted delivery mechanism. The update is signed with the vendor's code-signing certificate, delivered through the expected update channel, and installed by the same process as every legitimate update. There's no phishing, no exploitation, no anomalous access — just a normal software update that happens to contain malicious code. Detection shifts to post-installation behavioural monitoring: does the trusted application suddenly make connections or spawn processes it's never done before?
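Post-installation behavioural monitoring reduces to comparing a trusted application's activity against its own historical baseline. A minimal sketch, where the process name and destinations are hypothetical:

```python
# Historical baseline: destinations each trusted process has contacted before
baseline = {
    "vendor_updater.exe": {"updates.vendor.example", "cdn.vendor.example"},
}

def new_destinations(process: str, observed: list[str],
                     baseline: dict[str, set[str]]) -> set[str]:
    """Return destinations this process has never contacted before."""
    return set(observed) - baseline.get(process, set())

alerts = new_destinations(
    "vendor_updater.exe",
    ["updates.vendor.example", "203.0.113.50"],  # second entry is anomalous
    baseline,
)
print(sorted(alerts))  # ['203.0.113.50']
```

The same set-difference approach applies to spawned child processes or loaded modules: the signal is not that the update is malicious, but that a trusted binary's behaviour diverged from its own history.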
10. You need to brief your CISO within 30 minutes of an incident being declared. Using the Operational Profile methodology, what's the minimum you need to deliver?

   A) Full threat intelligence attribution identifying the specific threat group
   B) Complete forensic analysis of all compromised systems
   C) A one-paragraph brief: adversary class (not group name), objective, what they've done, predicted next steps, and current containment priorities
   D) A detailed technical report of all observed indicators of compromise

   Correct: C. The Operational Profile produces a leadership brief within the first hours: who (adversary class from constraint profiling), what (objective from target analysis), how bad (predicted next steps from the decision matrix), and what we're doing (containment priorities). Full attribution and forensic analysis take days or weeks. The CISO needs actionable understanding now — the Operational Profile delivers it.

You're reading the free modules of offensive-security-for-defenders

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.
