TR0.11 Interactive Lab: Your First Triage

· Module 0 · Free
Operational Objective
Practice Before Production: The triage scorecard from TR0.6 is a framework. This lab tests whether you can apply it under time pressure with incomplete information — the same conditions you face during a real alert triage. Six alerts are presented with the context available to a triage responder: the alert title, the affected entity, the source data, and the results of the first triage query. You classify each alert using the scorecard, then compare your classification against the reference answer.
Deliverable: Completed triage classifications for 6 alerts with documented scorecard reasoning. Self-assessment against reference answers to identify decision patterns.
Estimated completion: 30-45 minutes
INTERACTIVE LAB — YOUR FIRST TRIAGE EXERCISE
6 alerts — 4 TP, 1 FP, 1 BTP — classify each using the triage scorecard

Figure TR0.11 — The first triage lab. Six alerts from the NE environment with mixed classifications.

Lab instructions

For each alert below, apply the 8-question triage scorecard from TR0.6. Score each question, calculate the total, determine the classification (FP, probable TP, or confirmed TP), and decide the immediate action. Write your reasoning before checking the reference answer. The reasoning is more important than the classification — two analysts can reach the same classification through different reasoning, and the reasoning reveals gaps in the triage process.

Alert 1: Impossible travel — s.patel

Alert: Entra ID Protection — Impossible travel. s.patel@northgateeng.com authenticated from London (09:02) and from Singapore (09:08). s.patel is a software developer based in the Edinburgh office. No VPN documented. No travel request in the system. The sign-in from Singapore used a Windows 10 device with a different device fingerprint than s.patel’s known laptop.

Triage query result: SigninLogs shows 0 previous sign-ins from this Singapore IP for any NE user. The IP resolves to a residential ISP in Singapore.
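A minimal sketch of this first triage query, assuming the standard Sentinel SigninLogs schema — the IP literal is a placeholder, since the alert text does not list the Singapore address:

```kql
// Tenant-wide history check for the suspect IP (placeholder value).
let suspectIp = "203.0.113.50";   // substitute the Singapore IP from the alert
SigninLogs
| where TimeGenerated > ago(30d)
| where IPAddress == suspectIp
| summarize SignIns = count(), Users = dcount(UserPrincipalName),
            FirstSeen = min(TimeGenerated)
// Zero rows back = no prior tenant history for this IP, matching the result above.
```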

Reference answer

Score: Q1: YES (+3, different device fingerprint + residential Singapore IP with no history). Q2: NO (0, single user). Q3: ACTIVE (+3, session from Singapore is current). Q4: PROBABLE (+1, s.patel has access to source code repositories). Q5: IMMEDIATE (+2, active session). Q6: MEDIUM (+1, developer access to code). Q7: UNCERTAIN (+1, source code may contain customer data). Q8: HIGH confidence.

Total: 11 — Probable TP. Two simultaneous sessions from two continents with different devices, 6 minutes apart, from a residential IP with no history in the tenant. The legitimate explanation (VPN) is excluded by the residential ISP classification and lack of VPN documentation.

Action: Revoke s.patel’s sessions immediately. Preserve sign-in logs (last 48h). Check for post-authentication activity from the Singapore IP (mailbox access, file downloads, OAuth grants). Escalate to IR team.

Alert 2: Inbox rule creation — d.wright

Alert: Custom Sentinel rule — Suspicious inbox rule. d.wright@northgateeng.com created an inbox rule redirecting emails containing “bank details” and “payment confirmation” to the Deleted Items folder. Rule created at 11:45 from 198.51.100.10 (Bristol office egress IP).

Triage query result: SigninLogs shows d.wright authenticated from Bristol office IP at 08:30, normal browser user-agent, MFA completed via Authenticator push. No anomalous sign-ins in the last 30 days. d.wright works in the accounts payable department.
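The triage result above can be cross-checked against the Exchange audit log. A hedged sketch, assuming the Sentinel OfficeActivity schema:

```kql
// Pull the rule-creation event so its source IP and parameters can be
// compared with d.wright's authenticated session.
OfficeActivity
| where TimeGenerated > ago(1d)
| where UserId =~ "d.wright@northgateeng.com"
| where Operation in ("New-InboxRule", "Set-InboxRule")
| project TimeGenerated, Operation, ClientIP, Parameters
```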

Reference answer

Score: Q1: NO (0, no anomalous sign-in activity — the rule was created from a legitimate session). Q2: NO (0, single user). Q3: ACTIVE (+3, the rule is currently active and filtering emails). Q4: YES (+2, accounts payable + financial keyword filtering). Q5: SOON (+1, rule is filtering but not exfiltrating). Q6: HIGH (+2, financial process disruption risk). Q7: NO (0, no personal data exposure yet). Q8: MEDIUM confidence — the rule COULD be legitimate (user filtering spam about bank details) but the Deleted Items destination is suspicious.

Total: 8 — Borderline probable TP. The authentication looks clean (no AiTM indicators), but the inbox rule matches the BEC persistence pattern from CHAIN-HARVEST exactly. The critical question: did d.wright create this rule themselves (BTP — user managing their inbox), or did an attacker with d.wright’s session create it (TP)?

Action: Do NOT delete the rule yet (preserve the evidence). Check OfficeActivity for the exact timestamp of rule creation and compare the session ID with d.wright’s authenticated session. Contact d.wright directly to verify: “Did you create an inbox rule at 11:45?” If d.wright confirms: BTP, document and close. If d.wright denies: TP, begin full cloud triage.

Alert 3: Credential access — SRV-NGE-BRS005

Alert: Defender for Endpoint — Suspicious credential access. Process rundll32.exe loaded comsvcs.dll on SRV-NGE-BRS005 (Bristol file server) at 02:22. Command line includes MiniDump. Running as SYSTEM.

Triage query result: DeviceProcessEvents shows the parent process is cmd.exe, parent of cmd.exe is svchost.exe (PID 876, service: Schedule). No remote logon events to this server in the last 60 minutes. The scheduled task “SystemHealthCheck” was created 3 days ago.
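A sketch of the confirmation query, assuming the Defender for Endpoint DeviceProcessEvents schema as streamed to Sentinel:

```kql
// Confirm the comsvcs.dll MiniDump execution and surface the parent chain.
DeviceProcessEvents
| where TimeGenerated > ago(24h)
| where DeviceName startswith "SRV-NGE-BRS005"
| where FileName =~ "rundll32.exe"
| where ProcessCommandLine has "comsvcs.dll" and ProcessCommandLine has "MiniDump"
| project TimeGenerated, AccountName, ProcessCommandLine,
          InitiatingProcessFileName, InitiatingProcessParentFileName
```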

Reference answer

Score: Q1: YES (+3, LSASS dump via comsvcs.dll MiniDump — textbook credential theft technique). Q2: UNKNOWN (+2, server compromise may enable lateral movement). Q3: ACTIVE (+3, the dump was just executed). Q4: YES (+2, credential dump captures all cached credentials including domain admin if present). Q5: IMMEDIATE (+2, stolen credentials enable immediate lateral movement). Q6: HIGH (+2, file server with potential domain admin creds). Q7: PROBABLE (+1). Q8: HIGH confidence — comsvcs.dll MiniDump is not a legitimate Windows operation.

Total: 15 — Confirmed TP. This is an active LSASS credential dump executed via a scheduled task that was planted 3 days ago (persistence). The attacker has been in the environment for at least 3 days and is now harvesting credentials for lateral movement.

Action: Network-isolate SRV-NGE-BRS005 via Defender for Endpoint IMMEDIATELY — the attacker may be about to use the dumped credentials. Capture memory dump before isolation takes effect. Escalate to IR as CRITICAL. The 3-day-old scheduled task indicates the attacker has persistence elsewhere — scope the investigation beyond this single server.

Alert 4: Failed sign-ins — t.chen

Alert: Custom Sentinel rule — Brute force detected. 12 failed sign-in attempts for t.chen@northgateeng.com from IP 203.0.113.99 between 14:00 and 14:05. All failures: incorrect password.

Triage query result: SigninLogs shows 0 successful authentications from 203.0.113.99. The IP resolves to a hosting provider (OVH). t.chen has not authenticated since 13:30 (legitimate session from Manchester office). No MFA registration changes for t.chen.
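A minimal sketch of the outcome check, assuming the standard SigninLogs schema (ResultType "0" is a successful sign-in; other codes are failures, e.g. 50126 for a bad password):

```kql
// Count attempts, successes, and failures from the attacking IP.
SigninLogs
| where TimeGenerated > ago(1d)
| where IPAddress == "203.0.113.99"
| summarize Attempts = count(),
            Successes = countif(ResultType == "0"),
            Failures = countif(ResultType != "0")
    by UserPrincipalName
```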

Reference answer

Score: Q1: NO (0, only failed attempts — no successful compromise). Q2: NO (0, only t.chen targeted, though check if same IP targeted others). Q3: HISTORICAL (0, attack failed — no active session). Q4: NO (0, no data access occurred). Q5: STANDARD (0, no urgency — attack failed). Q6: LOW (0, no compromise occurred). Q7: NO (0). Q8: HIGH confidence — 12 failures from a hosting provider IP with 0 successes is a failed brute force.

Total: 0 — False positive (or more precisely, a detected-and-failed attack). The attack was real but unsuccessful. No compromise occurred.

Action: Close with documentation. Note the IP 203.0.113.99 for threat intelligence — check if this IP targeted other NE accounts (spray pattern). If the same IP shows up targeting multiple accounts, escalate the pattern even though individual attacks failed. Feed the IP to the detection engineering team for watchlist consideration.
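The spray-pattern check from the action above can be sketched as (same schema assumptions):

```kql
// Did 203.0.113.99 target accounts beyond t.chen?
SigninLogs
| where TimeGenerated > ago(7d)
| where IPAddress == "203.0.113.99"
| summarize TargetedAccounts = dcount(UserPrincipalName),
            Attempts = count(),
            Successes = countif(ResultType == "0")
```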

Alert 5: Penetration test — a.patel

Alert: Defender for Identity — Suspected Kerberoasting. a.patel@northgateeng.com requested Kerberos TGS tickets for 15 service accounts in 2 minutes at 10:30 from DESKTOP-NGE-IT03.

Triage query result: IdentityLogonEvents confirms the Kerberoasting pattern. a.patel is a senior IT administrator. A change ticket (CHG-2026-0415) authorises penetration testing by a.patel from 10:00-12:00 today on DESKTOP-NGE-IT03.
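A hedged sketch of the window verification — the lab cites IdentityLogonEvents; the column names used here (AccountUpn, Protocol) follow the Defender for Identity advanced-hunting schema and are assumptions, not the lab's exact query:

```kql
// Bound the Kerberos ticket-request burst so FirstSeen/LastSeen can be
// compared against the CHG-2026-0415 window (10:00-12:00).
IdentityLogonEvents
| where TimeGenerated > ago(1d)
| where AccountUpn =~ "a.patel@northgateeng.com"
| where Protocol == "Kerberos"
| summarize Requests = count(),
            FirstSeen = min(TimeGenerated), LastSeen = max(TimeGenerated)
```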

Reference answer

Score: Q1: YES (+3, Kerberoasting pattern is clear). Q2: YES (+3, 15 service accounts targeted). Q3: ACTIVE (+3). Q4: YES (+2, service account credentials at risk). Q5: STANDARD (0, authorised test). Q6: LOW (0, authorised test). Q7: NO (0). Q8: HIGH confidence.

Total: 11 — but classification: BENIGN TRUE POSITIVE. The detection is correct (Kerberoasting occurred), the activity is real (not a false positive), but it is authorised. The change ticket CHG-2026-0415 explicitly covers this activity.

Action: Close as BTP. Reference CHG-2026-0415 in the closure documentation. Verify the testing window (10:00-12:00) matches the alert timestamp (10:30 — within window). If the activity occurred OUTSIDE the authorised window, reclassify as TP and escalate.

Alert 6: Outbound connection — DESKTOP-NGE042

Alert: Custom Sentinel rule — Beaconing pattern detected. DESKTOP-NGE042 (j.morrison’s workstation) is making HTTP connections to 45.155.205.99 every 60 seconds (±3 seconds). Connection started at 14:30. Consistent packet size (312 bytes).

Triage query result: DeviceNetworkEvents confirms 45 connections in 45 minutes. The destination IP has no DNS record. The process making the connections is rundll32.exe loading msedge_update.dll from C:\Users\j.morrison\AppData\Local\Temp.
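A sketch of the cadence check, assuming the DeviceNetworkEvents schema — prev() works here because the sort serializes the row set:

```kql
// Measure inter-arrival time between connections to the suspected C2 IP,
// and count how many devices are talking to it.
DeviceNetworkEvents
| where TimeGenerated > ago(2h)
| where RemoteIP == "45.155.205.99"
| sort by TimeGenerated asc
| extend DeltaSeconds = datetime_diff("second", TimeGenerated, prev(TimeGenerated))
| summarize Connections = count(),
            AvgDeltaSeconds = avg(DeltaSeconds),
            Devices = dcount(DeviceName)
```

The Devices count directly answers the "any other device connecting to 45.155.205.99" question from the reference action.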

Reference answer

Score: Q1: YES (+3, 60-second beaconing with consistent jitter and size = C2 communication pattern). Q2: UNKNOWN (+2, check for other devices connecting to same IP). Q3: ACTIVE (+3, beaconing is ongoing right now). Q4: PROBABLE (+1, j.morrison’s workstation may cache credentials). Q5: IMMEDIATE (+2, active C2 = attacker has remote access). Q6: HIGH (+2, C2 on an engineering workstation). Q7: UNCERTAIN (+1). Q8: HIGH confidence — rundll32.exe loading a DLL named msedge_update.dll from Temp with 60-second beaconing is textbook Cobalt Strike.

Total: 14 — Probable/Confirmed TP. This is an active C2 beacon. The attacker has persistent remote access to j.morrison’s workstation via a DLL sideloading technique masquerading as a Microsoft Edge update.

Action: Network-isolate DESKTOP-NGE042 via Defender for Endpoint immediately. Capture memory dump (the beacon configuration, including the C2 infrastructure details, is in memory). Do NOT delete the DLL — it is evidence. Escalate as CRITICAL. Check DeviceNetworkEvents for any other device connecting to 45.155.205.99.

Why practice speed matters

Attacker dwell times are decreasing. Mandiant’s reporting shows median dwell time at approximately 10 days — but that is a median across all incidents including those discovered months after compromise. For targeted attacks against organisations with detection capability, the window from initial access to objective completion is measured in hours. CrowdStrike’s breakout time metric (initial access to lateral movement) shows the fastest adversaries operating in under 2 minutes. Palo Alto’s Unit 42 reports that median time to exfiltration compressed from 9 days to 2 days between 2023 and 2024. The triage scorecard takes 10-15 minutes. The attacker’s kill chain takes hours. Practising the scorecard until it is reflexive — until you can classify an alert without consulting the reference card — is what makes the 15-minute triage window achievable under production pressure.

Scoring the lab: performance metrics for self-assessment

After completing all 6 alerts, calculate your performance across four dimensions:

Dimension 1: Classification accuracy. How many of your classifications matched the reference answers? Target: 5/6 correct. Score: assign 1 point per correct classification. If you scored 4/6 or below, identify which alert type you misclassified and review the relevant subsection (TR0.2 for classification definitions, TR0.6 for scorecard application).

Dimension 2: Scorecard consistency. For each alert, compare your scorecard score against the reference. A discrepancy of 1-2 points per question is acceptable (different analysts weigh ambiguous evidence differently). A discrepancy of 3+ points on any question indicates you may be over- or under-weighing that evidence category. Focus on Q1 (evidence of compromise) and Q3 (active vs historical) — these two questions have the highest impact on the final classification.

Dimension 3: Speed. Time yourself on the 6 alerts. Target: 30-45 minutes total. FP alerts should take 3-5 minutes each (read context, confirm no indicators, close). Probable TP alerts should take 8-10 minutes (full scorecard, containment planning). If any individual alert took longer than 12 minutes, identify the bottleneck: was it reading the alert context (improve by pre-studying alert types in TR2-TR3), running the scorecard (improve by memorising the 8 questions), or deciding on containment (improve by studying the containment playbooks in TR2.5 and TR3.6)?

Dimension 4: Documentation quality. Review your triage notes for each alert. Could another analyst read your notes and understand: what you checked, what you found, why you classified the alert the way you did, and what actions you took? If your notes are sparse (“Checked — looks fine — closed”), they fail the documentation standard. The triage report format from TR0.5 provides the template: classification summary, findings per query, containment actions, evidence preserved, outstanding questions.

Common lab mistakes to watch for:

Alert 2 (inbox rule): destroying evidence before the classification is settled. Deleting the suspicious rule feels like containment, but the rule's creation metadata is what links (or fails to link) an attacker's session to the persistence mechanism. Verify with the user first; remove the rule only after the evidence is preserved.

Alert 4 (brute force): classifying a FAILED attack as a TP requiring full IR mobilisation. The attack was real but it FAILED — 12 bad-password attempts, zero successes, no compromise. The correct response is documentation, monitoring, and watchlisting the source IP, not full incident response. The distinction between "the attacker tried and failed" and "the attacker tried and succeeded" is one of the most important triage skills.

Alert 5 (penetration test): closing an authorised-activity alert as BTP without verifying the authorisation. Authorised tests are BTPs — but only if you VERIFY the change ticket and confirm the alert timestamp falls inside the approved window. An unverified dismissal risks closing a genuine Kerberoasting attack that happens to coincide with scheduled testing.

Self-assessment

Review your 6 classifications against the reference answers. For each alert where your classification matched: what reasoning did you use? Did you follow the scorecard, or did you classify by pattern recognition? For each alert where your classification differed: which scorecard question did you answer differently? Was your answer defensible with the available data, or did you miss a data point?

The goal is not 6/6 correct on the first attempt. The goal is a consistent triage methodology that produces defensible classifications — even when you are unsure, the scorecard process ensures you check the right data before deciding.

Compliance Myth: "Lab exercises do not prepare you for real triage"

The myth: Lab alerts are unrealistic. Real alerts are messy, ambiguous, and context-dependent. Practising on lab alerts gives false confidence.

The reality: These 6 alerts are drawn from real incident patterns at NE — sanitised and adapted, but structurally identical to production alerts. Alert 1 mirrors a real AiTM follow-up. Alert 3 mirrors a real credential theft technique documented in CHAIN-PRIVILEGE. Alert 6 mirrors real C2 beaconing from CHAIN-ENDPOINT. The ambiguity in Alerts 2 and 4 is deliberately realistic — many production alerts are genuinely ambiguous, and the triage scorecard’s value is precisely in handling ambiguity through structured questions rather than guesswork.

Beyond this investigation: These alert patterns connect to **Detection Engineering** (the rules that generated these alerts are production KQL covered in DE3-DE8), **Practical Incident Response** (the investigation phase that follows triage for Alerts 1, 2, 3, and 6), and **Mastering KQL** (the triage queries that answer Q1-Q3 for each alert).

The queue management discipline

Processing the 6-alert queue correctly requires queue management discipline that many junior analysts lack initially. Three rules that Rachel enforces at NE:

Rule 1: Process in priority order. The alerts are presented in severity order (HIGH first, then MEDIUM, then LOW). Process them in this order. Do not skip to Alert 4 (the failed brute force) because it looks easy and closeable. The HIGH-severity alerts demand attention first because the potential damage from a delayed HIGH is greater than the delay cost of keeping a LOW alert waiting.

Rule 2: Complete each alert before starting the next. Do not open Alert 2 while Alert 1 is still in progress. Each alert requires focused attention — context reading, query interpretation, scorecard scoring, and containment planning (if TP). Splitting attention between two alerts increases the risk of misclassification on both. The exception: if a CRITICAL alert arrives while triaging a MEDIUM, pause the MEDIUM (document where you stopped), triage the CRITICAL, then return to the MEDIUM.

Rule 3: Document as you go, not after. The triage report is not a post-triage documentation task — it is a DURING-triage record. When you run Query 1 and see an anomalous IP, document it immediately in the incident comment: “Query 1: anomalous IP 185.220.101.42 (Tor exit, Romania) at 08:14. Device fingerprint mismatch.” When you score Q1 on the scorecard, document the score and reasoning immediately. At the end of the triage, the report is already complete — you do not need to remember what you found 10 minutes ago. This habit is what separates professional triage documentation from ad-hoc notes that the investigation team cannot interpret.

The shift handoff scenario. At NE, the SOC operates across shifts. If Analyst A triages Alert 1 but does not complete it before shift end, Analyst B must continue from where Analyst A stopped. The incident comments are the handoff mechanism — if Analyst A documented as they went, Analyst B reads the comments and continues from the last documented step. If Analyst A did not document, Analyst B starts the triage from scratch — duplicating 10-15 minutes of work. The document-as-you-go discipline is not just good practice for the investigation team handoff; it is essential for the shift handoff within the SOC.

You're reading the free modules of this course

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
