TR0.3 Three Environments, One Methodology
Figure TR0.3 — The Triage Trinity applied to three environments. The methodology (classify, preserve, contain) is identical. The tools, evidence sources, and containment actions differ per environment. Cross-environment incidents require executing the Trinity in all affected environments simultaneously.
The Triage Trinity
Every triage — regardless of environment — follows three phases in a fixed sequence.
Phase 1 — Classify. Determine whether the alert represents a true incident, a false positive, a benign true positive, or an indeterminate finding. This phase answers the question: “Is this real?” The tools differ per environment (KQL queries in the cloud, process analysis on Windows, log review on Linux), but the decision framework from TR0.2 applies identically. The triage scorecard asks the same 8 questions whether the alert originates from Entra ID, Defender for Endpoint, or a Linux auth.log parser.
Phase 2 — Preserve. Capture volatile evidence before it degrades. This phase answers the question: “What evidence do I need to save right now?” The volatility hierarchy from TR1 governs priority: memory first, then processes and network state, then logs. The collection tools differ dramatically between environments — Graph PowerShell exports cloud sign-in data, KAPE collects Windows volatile artifacts, LiME captures Linux memory — but the priority logic is identical: capture the most volatile evidence first.
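The priority logic reads like a sort key. Below is a minimal Python sketch of volatile-first ordering; the rank values and names are illustrative assumptions, not the course's official hierarchy:

```python
# Sketch: order evidence sources so the most perishable artifacts are
# captured first. Ranks are illustrative assumptions based on the
# hierarchy described above (memory first, then process and network
# state, then logs).
VOLATILITY_RANK = {
    "memory": 0,         # lost on reboot or power loss
    "processes": 1,      # running state changes constantly
    "network_state": 1,  # connections close within seconds
    "logs": 2,           # persist on disk, rotate slowly
}

def preservation_order(evidence_sources):
    """Return evidence sources sorted most-volatile-first."""
    return sorted(evidence_sources, key=lambda s: VOLATILITY_RANK.get(s, 99))

print(preservation_order(["logs", "memory", "network_state", "processes"]))
# memory comes out first, logs last
```

The same ordering applies whether the capture tool is Graph PowerShell, KAPE, or LiME; only the implementation of each capture step changes.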
Phase 3 — Contain. Execute the minimum action required to stop the attacker’s progress while preserving evidence for the investigation team. This phase answers the question: “How do I stop this without destroying the evidence?” Containment actions are environment-specific: session revocation in the cloud, device network isolation on Windows, iptables block on Linux. Each action has a blast radius — legitimate access that breaks when containment is applied. The containment decision framework from TR8 applies identically across environments.
The sequence is fixed because each phase depends on the previous one. Containment before classification may execute unnecessary actions on a false positive (disabling a legitimate user’s account, isolating a production server). Containment before preservation may destroy the evidence the investigation needs (rebooting a server to “fix” it, which destroys memory and running process state). Classification before preservation is acceptable ONLY when the classification is confident and quick (under 5 minutes) — otherwise, begin preservation in parallel with classification for probable true positives.
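The sequencing rule, including the one documented exception, can be sketched as a small decision function. The function and threshold names are mine, chosen to mirror the text; this is an illustration of the logic, not course tooling:

```python
# Sketch: the fixed classify -> preserve -> contain sequence, with the
# one exception described above: if classification of a probable true
# positive will take more than ~5 minutes, start preservation in
# parallel rather than waiting.
CLASSIFY_FAST_THRESHOLD_MIN = 5  # the "confident and quick" cutoff

def triage_plan(probable_true_positive: bool, est_classify_minutes: int):
    """Return the ordered phases for this alert."""
    if probable_true_positive and est_classify_minutes > CLASSIFY_FAST_THRESHOLD_MIN:
        # Slow classification: preserve volatile evidence while classifying.
        return ["classify+preserve (parallel)", "contain"]
    # Normal case: strict sequence, each phase gated on the previous one.
    return ["classify", "preserve", "contain"]

print(triage_plan(probable_true_positive=True, est_classify_minutes=20))
print(triage_plan(probable_true_positive=False, est_classify_minutes=2))
```

Either branch still ends with containment last — the sequence never puts containment ahead of classification or preservation.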
Same questions, different data sources
The triage scorecard asks 8 questions. The data source for each answer changes per environment, but the question is identical.
Question 1: “Is there evidence of compromise beyond the initial alert?” In the cloud, this means querying SigninLogs for additional anomalous sign-ins, AuditLogs for configuration changes, and OfficeActivity for unusual data access. On Windows, this means checking for additional suspicious processes, unexpected network connections, new scheduled tasks, or modified registry autorun keys. On Linux, this means checking auth.log for additional access attempts, crontab for new entries, systemd for new services, and /tmp and /dev/shm for dropped files.
The question is identical. The KQL query, the PowerShell command, and the bash command that answer it are different. Modules TR2, TR3, and TR4 provide the specific commands for each environment. The scorecard provides the question sequence that tells you WHICH commands to run WHEN.
Question 2: “Is the scope beyond a single entity?” In the cloud: did the same IP authenticate as other users? Did the compromised OAuth application access other tenants? On Windows: did the attacker’s process spawn child processes on other devices? Are there lateral movement indicators (RDP sessions, WMI executions, SMB connections) to other endpoints? On Linux: did the compromised account SSH to other servers? Are there signs of the same attacker on other hosts in the same subnet?
Again, same question, different evidence sources. The triage scorecard keeps the analyst focused on the investigation logic while the environment-specific toolkits provide the technical implementation.
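One way to picture "same question, different evidence sources" is as a lookup table keyed by question then environment. The structure below paraphrases questions 1 and 2 from the text; the dictionary layout and names are illustrative, not the scorecard's actual format:

```python
# Sketch: the scorecard question is fixed; only the evidence source
# per environment changes. Entries paraphrase Q1 and Q2 above.
EVIDENCE_SOURCES = {
    "Q1: compromise beyond the initial alert?": {
        "cloud":   ["SigninLogs", "AuditLogs", "OfficeActivity"],
        "windows": ["processes", "network connections",
                    "scheduled tasks", "registry autorun keys"],
        "linux":   ["auth.log", "crontab", "systemd units",
                    "/tmp and /dev/shm"],
    },
    "Q2: scope beyond a single entity?": {
        "cloud":   ["same IP across users", "OAuth app tenant reach"],
        "windows": ["lateral movement indicators: RDP, WMI, SMB"],
        "linux":   ["outbound SSH from the account", "same-subnet hosts"],
    },
}

def sources_for(question: str, environment: str):
    """Same question, different evidence source per environment."""
    return EVIDENCE_SOURCES[question][environment]

print(sources_for("Q1: compromise beyond the initial alert?", "linux"))
```

The scorecard supplies the keys of the outer dictionary; TR2-TR4 supply the commands that read each listed source.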
Why single-environment triage fails
Most analysts are trained in one environment. The Windows specialist runs PowerShell and collects event logs. The cloud analyst writes KQL and reviews sign-in data. The Linux admin checks auth.log and running processes. Each analyst is competent in their domain — and blind outside it.
Single-environment triage fails when the incident crosses boundaries, which modern attacks routinely do. CHAIN-HARVEST at NE started in the cloud (AiTM phishing), and the triage was performed entirely in cloud logs. If the attacker had also deployed a payload on j.morrison’s Windows endpoint (a common AiTM follow-up), cloud-only triage would have classified the cloud compromise correctly but missed the endpoint compromise entirely. The investigation team would have revoked the cloud session and considered the incident contained — while the attacker maintained access via the endpoint backdoor.
CHAIN-MESH started on-prem (VPN login from a compromised credential), moved through Windows (lateral movement via RDP and NTLM), and reached the Sheffield manufacturing server. An analyst who only triaged the VPN alert would see an anomalous login and potentially close it as a VPN misconfiguration. The lateral movement to the manufacturing server — where the ransomware was deployed — would go undetected until the encryption began.
The cross-environment triage methodology in TR6 teaches the full multi-environment approach. But the foundation starts here: the Triage Trinity applies to EVERY environment, and when an alert in one environment shows indicators of boundary crossing (cloud sign-in followed by endpoint activity, Windows compromise followed by SSH to Linux), the responder must execute the Trinity in ALL affected environments.
The environment handoff problem
In organisations with siloed teams (cloud security team, endpoint team, network team, Linux admin team), triage responsibility often falls through the gap between teams. The cloud team triages the Entra ID alert and passes it to the endpoint team: “We see a compromised cloud session — can you check the user’s workstation?” The endpoint team receives the request, adds it to their queue, and begins their own triage — but they do not have the cloud context (which IP the attacker used, what data they accessed, what persistence they established in the cloud). The endpoint triage starts from zero rather than building on the cloud team’s findings.
The Triage Trinity addresses this by producing a structured triage report at the end of Phase 3 that includes: what was found (classification and confidence), what evidence was preserved (per environment), what containment was executed (per environment), and what remains unknown (scope questions the next team must answer). This report is the handoff artifact — it ensures the receiving team starts from the cloud team’s findings rather than repeating the cloud triage and wasting the first 15 minutes.
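The four elements of that report map naturally onto a structured record. The sketch below uses field names that are an illustrative reading of the text, not the course's official handoff schema:

```python
# Sketch of the structured handoff report produced at the end of
# Phase 3. Field names are illustrative; the four elements come from
# the text: classification/confidence, evidence preserved, containment
# executed, and open scope questions for the receiving team.
from dataclasses import dataclass, field, asdict

@dataclass
class TriageHandoffReport:
    classification: str                                       # e.g. "true positive"
    confidence: str                                           # e.g. "high"
    evidence_preserved: dict = field(default_factory=dict)    # per environment
    containment_executed: dict = field(default_factory=dict)  # per environment
    open_scope_questions: list = field(default_factory=list)  # for the next team

report = TriageHandoffReport(
    classification="true positive",
    confidence="high",
    evidence_preserved={"cloud": ["SigninLogs export (CSV)"]},
    containment_executed={"cloud": ["session revocation"]},
    open_scope_questions=["Was a payload deployed on the user's endpoint?"],
)
print(asdict(report))  # serialisable for the receiving team's ticket
```

Because the open questions travel with the findings, the receiving team starts their triage at the boundary the first team reached, not at zero.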
The tool stack overview
Each environment has a primary toolkit for triage. These are introduced here and taught in depth in Modules TR2-TR4.
Cloud toolkit. KQL is the primary triage language for M365 and Azure environments. Five pre-built triage queries (covered in TR2.1) answer the 5 most urgent cloud triage questions in under 10 minutes. Microsoft Graph PowerShell provides programmatic containment actions (session revocation, conditional access emergency policies, OAuth application control). Azure CLI extends triage to Azure IaaS workloads (VM process listing, NSG review, activity log analysis).
Windows toolkit. PowerShell provides the baseline triage capability on every Windows system — process enumeration, network connections, scheduled tasks, service listing, event log queries, and registry checks require no additional tooling. KAPE provides rapid volatile evidence collection — a 5-minute acquisition that captures memory, event logs, prefetch, MFT, and registry hives. Sysinternals (Process Explorer, Autoruns, TCPView) provides live visual assessment for analysts who prefer GUI tools. Velociraptor enables fleet-wide triage when the scope question from the scorecard indicates potential multi-system compromise. MemProcFS provides rapid memory analysis by mounting a memory dump as a filesystem — faster than running individual Volatility plugins for triage-level questions.
Linux toolkit. Native commands provide the baseline — ps, ss, last, lsof, find, stat, and inspection of /proc/<PID>/ require no installation and work on every Linux system. LiME is the only tool that must be pre-staged for memory acquisition (a kernel module loaded via insmod). Volatility3 provides 4 triage-relevant plugins (pslist, pstree, netscan, bash) for post-acquisition memory analysis. kubectl and the Docker CLI provide container-specific triage when the compromised system runs containerised workloads.
Network toolkit. KQL against CommonSecurityLog in Sentinel provides firewall and proxy log analysis without accessing network devices directly. tcpdump provides packet capture for active C2 identification on Linux-based network appliances.
Every tool in this list answers a triage question within 5 minutes. If a tool requires 30 minutes to produce useful output, it belongs in the investigation phase (covered in Practical Incident Response and Practical Linux IR), not the triage phase.
Try it: map your current triage capability
For each environment in your organisation (cloud, Windows, Linux), answer: (1) Which tools can you access within 5 minutes of receiving an alert? (2) Can you collect volatile evidence (memory, processes, network state) in that environment? (3) Can you execute containment actions (session revoke, device isolate, account disable) in that environment? (4) Do you have pre-built triage queries or commands ready to run, or do you write them from scratch each time?
If any environment has gaps — no volatile evidence collection, no containment capability, no pre-built triage queries — that is the environment where a cross-boundary incident will stall. Modules TR2-TR4 address each environment’s toolkit specifically. The triage scripts in TR9.4 provide the pre-built “go bag” that eliminates the “write from scratch each time” problem.
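The four questions above amount to a per-environment gap check. A minimal sketch, with illustrative key names standing in for however your organisation records this:

```python
# Sketch: the four capability questions as a per-environment check.
# Any missing capability flags the environment where a cross-boundary
# incident will stall. Key names are illustrative.
def capability_gaps(env: dict):
    checks = {
        "tool access within 5 minutes": env.get("tool_access_5min", False),
        "volatile evidence collection": env.get("volatile_collection", False),
        "containment actions": env.get("containment", False),
        "pre-built triage queries": env.get("prebuilt_queries", False),
    }
    return [name for name, ok in checks.items() if not ok]

# Hypothetical assessment of a Linux estate:
linux = {"tool_access_5min": True, "volatile_collection": False,
         "containment": True, "prebuilt_queries": False}
print(capability_gaps(linux))
# flags the two gaps: volatile collection and pre-built queries
```

Running the same check for cloud and Windows gives a three-row capability map — the lowest-scoring row is where TR2-TR4 and the TR9.4 scripts pay off first.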
The tool stack per environment
Each environment requires different tools, but the methodology (classify, preserve, contain) is identical. The tool stack for each environment is summarised below and detailed in TR2-TR4:
Cloud (M365 / Entra ID): Query tools: KQL in Sentinel Logs blade or Defender XDR Advanced Hunting. The 5-query triage pack (TR2.1) and the 3-query email triage (TR2.3) use KQL exclusively. No additional software needed beyond browser access to the Sentinel or Defender portal. Containment tools: PowerShell with the Microsoft Graph module (Revoke-MgUserSignInSession, Update-MgUser), Entra portal (conditional access policy activation, enterprise application management), Exchange Online PowerShell (Remove-InboxRule). Preservation tools: KQL queries export results to CSV. Cloud evidence is API-accessible and does not require physical media.
Windows (endpoints and Active Directory): Query tools: Native PowerShell (Get-CimInstance, Get-NetTCPConnection, Get-WinEvent), Sysinternals Process Explorer and Autoruns, Defender Live Response for remote access. Containment tools: Defender for Endpoint device isolation (portal or API), Active Directory PowerShell (Disable-ADAccount, Set-ADAccountPassword), Group Policy for emergency configurations. Preservation tools: WinPMem for memory acquisition, KAPE for volatile artifact collection, MemProcFS for rapid memory analysis. All portable — run from USB or network share without installation. Analysis tools: Eric Zimmerman’s Tools (EvtxECmd, RECmd, PECmd, MFTECmd) for parsing KAPE output, Hayabusa for SIGMA-based event log analysis, Timeline Explorer for CSV review.
Linux (servers and containers): Query tools: Native commands (ps, ss, last, journalctl, find), /proc filesystem for live process state, Docker and Kubernetes CLI for container environments. Containment tools: iptables for network blocking, systemctl for service management, account management via passwd/usermod, SSH key revocation. Preservation tools: LiME for memory acquisition (requires pre-compiled kernel module), native commands for log and configuration snapshots, docker diff/inspect/logs for container evidence. Analysis tools: Volatility3 for memory analysis, standard text processing (grep, awk, jq) for log analysis.
The analyst proficiency self-assessment for this course: rate yourself 1-5 on each environment (Cloud KQL, Windows PowerShell, Linux native commands). The environment where you score lowest is where this course challenges you most and where you gain the most capability. The cross-training investment is approximately 20 hours per analyst to achieve triage-level competency in an unfamiliar environment — 5 hours of content study plus 15 hours of lab practice. At NE, Rachel measured the return: after cross-training, zero triage delays occurred due to environment-specific skill gaps — every analyst could triage every alert regardless of environment.
The tool diversity is significant — the triage responder who is expert in cloud KQL but unfamiliar with Linux commands (or vice versa) cannot triage across all three environments. This course teaches all three tool stacks to the level needed for triage classification. Deep analysis with each tool is covered in the environment-specific courses (Mastering KQL for cloud, Practical Incident Response for Windows, Practical Linux IR for Linux).
The environment handoff
When triage reveals that an attack crosses environments (cloud compromise leading to endpoint access, endpoint compromise leading to Linux lateral movement), the triage responder must execute the triage methodology in EACH environment the attacker touched. The cloud triage (TR2) identifies the initial access. The Windows triage (TR3) identifies the endpoint compromise. The Linux triage (TR4) identifies the server-side impact.
The handoff between environments occurs at the entity correlation point (TR1.6): the same user, the same IP, or the same timestamp connecting events across environments. The triage responder does not need to complete a full triage in one environment before starting the next — the preservation actions for each environment can run in parallel (cloud log snapshots + Windows memory dump + Linux log collection all happening simultaneously if the responder has access to all three environments).
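That parallel preservation pattern can be sketched with stub workers. In practice each worker would invoke the environment's toolkit (a KQL export, KAPE, LiME); the function names and return strings here are placeholders:

```python
# Sketch: preservation actions per environment run simultaneously once
# the entity correlation point links them. Workers are stubs standing
# in for the real per-environment collection steps.
from concurrent.futures import ThreadPoolExecutor

def preserve_cloud():   return "cloud: sign-in logs exported"
def preserve_windows(): return "windows: memory dump captured"
def preserve_linux():   return "linux: logs and /proc state collected"

def preserve_all_environments():
    """Run per-environment preservation in parallel, not sequentially."""
    tasks = [preserve_cloud, preserve_windows, preserve_linux]
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(t) for t in tasks]
        return [f.result() for f in futures]

for line in preserve_all_environments():
    print(line)
```

Classification and containment still proceed per environment in sequence; only the evidence capture is safe to parallelise, because it changes nothing on the systems.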
At NE, the CHAIN-HARVEST extended attack crossed all three environments in sequence: cloud (AiTM at 08:14) to Windows (beacon activation at 14:30) to Linux (SSH lateral movement at 15:12, UTC-adjusted). Rachel ran the cloud triage pack first (because the alert fired on the cloud event), identified the endpoint entity from the cloud sign-in logs (device fingerprint and IP correlation), ran the Windows triage on the identified endpoint, and then checked the Linux systems from the endpoint's ARP table and connection list. Total triage time across all three environments: 25 minutes. Total containment time: 8 minutes. Total handoff to IR: 38 minutes from initial alert — well within the 60-minute window.
The myth: Our SOC monitors M365 and Sentinel. Endpoint and Linux triage is someone else’s responsibility. We triage cloud alerts and escalate everything else.
The reality: Attackers pivot across environments because defenders are siloed in them. An attacker who gains access through a cloud identity (your alert) and immediately pivots to an endpoint (not your alert) exploits the gap between your cloud SOC and the endpoint team. By the time the endpoint team receives your escalation, triages it with their tools, and responds, the attacker has moved to the Linux database server — which neither team is monitoring. Cross-environment triage capability in a single responder (or a single team with cross-environment skills) eliminates the handoff delay that attackers exploit. This course builds that capability.
Troubleshooting
“I do not have access to all three environments.” Many analysts have Sentinel access (cloud logs) but not RDP access to endpoints or SSH access to Linux servers. Module TR9 covers remote triage automation that executes environment-specific commands through Sentinel playbooks and Velociraptor without requiring direct system access. For now, identify which environments you CAN access directly and which require coordination with another team — the handoff artifact from the Triage Trinity ensures the receiving team gets a complete context even when you cannot triage their environment yourself.
“Our Linux servers are managed by a different team that does not do security.” This is the most common cross-environment gap. The Linux admin team manages availability and performance but does not monitor for security events. Module TR4 teaches Linux triage with commands that any Linux administrator already knows (ps, ss, last, find) — the difference is knowing WHAT to look for, not learning new tools. Consider cross-training the Linux team on the 10-command triage sequence from TR4.1 so they can execute initial triage when you cannot access the server directly.