The Ransomware Race
Incident Brief
Incident ID: SOC-2026-05-001
Date/Time: Saturday, 10 May 2026, 02:14 UTC
Alert Source: DET-SOC-019 (Ransomware pre-encryption indicators) — NRT rule
Severity: Critical
Assigned to: You (on-call)
At 02:14 UTC on a Saturday morning, your phone wakes you. The PagerDuty alert reads:
CRITICAL: DET-SOC-019 — Ransomware pre-encryption on 3 devices
Devices: SVR-FILE-01, SVR-FILE-02, SVR-SQL-01
Command: vssadmin delete shadows /all /quiet
Account: svc-backup (service account)
The alert was triggered by the near-real-time (NRT) detection rule. VSS shadow copy deletion is executing simultaneously on all 3 servers under the svc-backup service account.
You are the only analyst on call. The SOC Manager’s phone is your escalation.
Your Investigation
This is a time-critical scenario. The adversary is deleting backup shadow copies — encryption typically follows within 30-60 minutes. Every minute counts.
Phase 1: Immediate Response (10 minutes)
1. You are reading the PagerDuty alert on your phone. What are your first 3 actions before you even open a laptop? (Think: communication, containment authority, information gathering.)
2. You open your laptop and access Sentinel. Run DET-SOC-019’s query. Confirm the scope: is the VSS deletion limited to the 3 devices, or expanding?
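A scope check can be run in advanced hunting along these lines (table and column names follow the standard Microsoft Defender advanced-hunting schema; DET-SOC-019’s exact logic is not shown in this brief, so treat this as a sketch, not the production rule):

```kql
// Sketch: all shadow-copy deletion attempts in the last hour, per device.
// Covers both the vssadmin and wmic variants of the technique.
DeviceProcessEvents
| where Timestamp > ago(1h)
| where (FileName =~ "vssadmin.exe" and ProcessCommandLine has_all ("delete", "shadows"))
    or (FileName =~ "wmic.exe" and ProcessCommandLine has "shadowcopy")
| summarize FirstSeen = min(Timestamp), Attempts = count()
    by DeviceName, InitiatingProcessAccountName, ProcessCommandLine
| order by FirstSeen asc
```

A result set larger than the 3 alerted devices means the deployment is already expanding and containment must widen accordingly.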
3. The svc-backup account is a domain service account. What does this tell you about the adversary’s access level? What is the worst-case scope?
Phase 2: Containment Decision (15 minutes)
4. You need to decide: isolate the 3 affected servers immediately via Defender for Endpoint, or wait for more investigation? What are the trade-offs? (Consider: these are file servers and a SQL server — isolation will impact business operations.)
5. You call the SOC Manager at 02:20. They authorise server isolation. After isolating the 3 servers, what do you check next?
Additional information: After isolation, you discover:
- svc-backup’s password was last changed 14 months ago
- svc-backup has local admin rights on 47 servers (it is the backup service account)
- DeviceLogonEvents shows svc-backup authenticated to the 3 affected servers from DESKTOP-MKT-042 (a marketing workstation) at 02:08
- DESKTOP-MKT-042 shows Cobalt Strike beacon indicators (DeviceProcessEvents: rundll32.exe loading a DLL from C:\ProgramData)
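The lateral-movement finding can be cross-checked with a hunting query such as the following (standard advanced-hunting schema assumed):

```kql
// Sketch: everywhere svc-backup has logged on in the last 24 hours,
// and from where. RemoteDeviceName exposes the source of network logons.
DeviceLogonEvents
| where Timestamp > ago(24h)
| where AccountName =~ "svc-backup"
| where LogonType in ("Network", "RemoteInteractive")
| summarize Logons = count(), FirstSeen = min(Timestamp), LastSeen = max(Timestamp)
    by DeviceName, RemoteDeviceName, RemoteIP
| order by FirstSeen asc
```

Any logon source other than the legitimate backup infrastructure (here, DESKTOP-MKT-042) is adversary activity and feeds directly into the containment scope.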
6. The adversary is on DESKTOP-MKT-042 with Cobalt Strike. They have the svc-backup credentials (local admin on 47 servers). Only 3 servers have been hit so far. What is your expanded containment plan?
7. Should you disable the svc-backup account? What breaks if you do? What breaks if you do not?
Phase 3: Scope and Impact Assessment (30 minutes)
8. How did DESKTOP-MKT-042 get compromised? Check DeviceProcessEvents for the past 7 days. Look for initial access indicators: phishing payload execution, LOLBin download chain, or drive-by download.
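One way to hunt for the initial-access chain is to look for Office applications or browsers spawning script hosts and LOLBins on the workstation (the parent/child pairs below are an illustrative starting point, not an exhaustive set):

```kql
// Sketch: suspicious parent/child process chains on the patient-zero
// workstation over the past 7 days.
DeviceProcessEvents
| where Timestamp > ago(7d)
| where DeviceName =~ "DESKTOP-MKT-042"
| where InitiatingProcessFileName in~ ("winword.exe", "excel.exe", "outlook.exe", "msedge.exe", "chrome.exe")
| where FileName in~ ("powershell.exe", "cmd.exe", "wscript.exe", "cscript.exe", "mshta.exe", "rundll32.exe", "certutil.exe")
| project Timestamp, InitiatingProcessFileName, FileName, ProcessCommandLine
| order by Timestamp asc
```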
9. Map the full attack timeline: initial compromise → credential theft → lateral movement → pre-encryption. What detection rules should have fired at each stage?
10. The 3 servers were isolated before encryption began. Check DeviceFileEvents: did any file encryption (mass FileRenamed events) start before isolation?
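A mass-rename check might look like this (the 100-renames-per-5-minutes threshold is illustrative, not a tuned value):

```kql
// Sketch: bursts of FileRenamed events on the three servers, bucketed
// into 5-minute windows. Sustained bursts suggest encryption had begun.
DeviceFileEvents
| where Timestamp > ago(6h)
| where DeviceName in~ ("SVR-FILE-01", "SVR-FILE-02", "SVR-SQL-01")
| where ActionType == "FileRenamed"
| summarize Renames = count() by DeviceName, bin(Timestamp, 5m)
| where Renames > 100
| order by Timestamp asc
```

An empty result supports the conclusion that isolation beat encryption; any hits change the incident from "near miss" to "partial encryption" and the recovery plan accordingly.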
11. What is the data exposure assessment? What data was on SVR-FILE-01, SVR-FILE-02, and SVR-SQL-01?
Phase 4: Recovery and Documentation (30 minutes)
12. The servers were isolated before encryption. What is your recovery plan? Can you simply un-isolate them, or is additional remediation required?
13. Write the executive summary. The CEO will read this on Monday morning. They need to know: what happened, what the impact was, what was done, and what residual risk remains.
14. What hardening failures enabled this attack? List at least 3 specific controls from Module 9 that would have prevented or detected the attack earlier.
15. Write the PIR action items. What specific improvements should be implemented within 30 days?
Solution Walkthrough
Reveal Phase 1 answers
Q1: (1) Call the SOC Manager — you need containment authority for servers, and you need someone else aware in case this escalates. (2) Open Sentinel on your phone/laptop to assess scope. (3) Do NOT wait to investigate before containing — this is a race condition. Pre-encryption indicators mean encryption is imminent.
Q3: svc-backup is a domain service account with local admin on 47 servers. The adversary has domain-level service account credentials. Worst case: the adversary can deploy ransomware to all 47 servers simultaneously. The 3 current targets may be the first wave of a broader deployment.
Reveal Phase 2 answers
Q4: Isolate immediately. Trade-off: file servers and SQL offline = business disruption. But the alternative is encrypted file servers and SQL = catastrophic data loss. The business disruption from isolation is hours; the business disruption from encryption is days to weeks. Isolate, then investigate.
Q6: Expanded containment: (1) Isolate DESKTOP-MKT-042 immediately. (2) Disable svc-backup account. (3) Isolate ALL 47 servers where svc-backup has admin rights as a precaution, OR at minimum isolate any server showing svc-backup logon events in the past 24 hours. (4) Check for other compromised accounts — the adversary likely dumped credentials from the marketing workstation.
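The "at minimum" option in (3) can be scoped with a query like this (assumes servers follow the SVR- naming convention seen in the alert):

```kql
// Sketch: every server svc-backup has touched in the last 24 hours.
// These are the candidates for precautionary isolation.
DeviceLogonEvents
| where Timestamp > ago(24h)
| where AccountName =~ "svc-backup"
| where DeviceName startswith "SVR-"   // naming convention assumed
| distinct DeviceName
```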
Q7: Disabling svc-backup breaks the backup service across 47 servers. Not disabling it gives the adversary persistent admin access to 47 servers. Disable it. Backups are a lower priority than preventing ransomware deployment. Coordinate with IT to create a new backup service account with a fresh password after the incident is contained.
Reveal Phase 4 answers
Q14: Hardening failures: (1) svc-backup password not rotated in 14 months (Module 9: service account credential rotation). (2) svc-backup has local admin on 47 servers — excessive privilege (Module 9: least-privilege service accounts). (3) No Credential Guard on DESKTOP-MKT-042 (Module 9: endpoint hardening — DET-SOC-016 would have detected the credential dump if LSASS access was not protected). (4) No ASR rule blocking Cobalt Strike DLL sideload pattern (Module 9: ASR rules).