TR1.10 Live Response Scripting and Automation

· Module 1 · Free
Operational Objective
The Repeatability Problem: Manual triage — typing commands interactively during an incident — produces inconsistent results. Different analysts collect different artifacts, in different orders, with different levels of completeness. Scripts execute the IDENTICAL procedure regardless of who runs them, ensuring every triage collects the same artifacts, in the same order, with the same integrity verification. This subsection teaches how to build, test, version-control, and deploy live-response scripts that become your triage standard operating procedure.
Deliverable: The Windows PowerShell triage script, the Linux Bash triage script (introduced in TR4.10 and formalised here), Velociraptor deployment for fleet collection, and Defender Live Response for remote endpoint triage.
Estimated completion: 30 minutes
[Figure: LIVE RESPONSE AUTOMATION — FROM MANUAL TO FLEET. Stages: MANUAL (inconsistent, slow, error-prone) → SCRIPTED (consistent, fast, documented) → VELOCIRAPTOR (fleet-wide, queued, centralised) → DEFENDER LIVE RESPONSE (remote, no SSH, cloud-managed).]

Figure TR1.10 — Live response maturity progression. Manual commands are unrepeatable. Scripts ensure consistency. Velociraptor scales to fleets. Defender Live Response provides remote access without SSH/RDP infrastructure.

The Windows PowerShell triage script

# === RIDGELINE WINDOWS TRIAGE SCRIPT v2.0 ===
# Run as Administrator. Output to USB or network share.
param(
    [string]$OutputPath = "E:\IR\${env:COMPUTERNAME}_$(Get-Date -Format yyyyMMdd_HHmmss)"
)
$ErrorActionPreference = "SilentlyContinue"
New-Item -ItemType Directory -Force -Path $OutputPath | Out-Null
$log = "$OutputPath\00_collection_log.txt"

function Collect {
    param([string]$Name, [scriptblock]$Command)
    $start = Get-Date
    try {
        # Capture pipeline output first: several scriptblocks write their own
        # file via Export-Csv, so only redirect when the command actually
        # emits objects (otherwise Out-File would truncate the CSV just written)
        $result = & $Command
        if ($result) { $result | Out-File "$OutputPath\$Name" -Encoding UTF8 }
        $elapsed = [math]::Round(((Get-Date) - $start).TotalMilliseconds)
        "  OK: $Name (${elapsed}ms)" | Tee-Object -FilePath $log -Append
    } catch {
        "  FAIL: $Name ($($_.Exception.Message))" | Tee-Object -FilePath $log -Append
    }
}

"[*] Ridgeline Windows Triage v2.0 — $env:COMPUTERNAME — $(Get-Date -Format o)" | Out-File $log
"[*] Collector: $env:USERNAME" | Out-File $log -Append

# Tier 1: Volatile
Collect "01_processes.csv" { Get-CimInstance Win32_Process | Select-Object ProcessId,ParentProcessId,Name,CommandLine,ExecutablePath,CreationDate | Export-Csv "$OutputPath\01_processes.csv" -NoTypeInformation }
Collect "02_connections.csv" { Get-NetTCPConnection | Where-Object State -eq 'Established' | Select-Object LocalAddress,LocalPort,RemoteAddress,RemotePort,OwningProcess,@{N='Process';E={(Get-Process -Id $_.OwningProcess -EA 0).Name}} | Export-Csv "$OutputPath\02_connections.csv" -NoTypeInformation }
Collect "03_sessions.txt" { query user 2>$null }
Collect "03_smb_sessions.csv" { Get-SmbSession | Select-Object ClientComputerName,ClientUserName,NumOpens,SecondsExists | Export-Csv "$OutputPath\03_smb_sessions.csv" -NoTypeInformation }

# Tier 2: Semi-volatile
Collect "04_services.csv" { Get-Service | Where-Object StartType -eq 'Automatic' | Select-Object Name,DisplayName,Status,@{N='Path';E={(Get-CimInstance Win32_Service -Filter "Name='$($_.Name)'").PathName}} | Export-Csv "$OutputPath\04_services.csv" -NoTypeInformation }
Collect "04_tasks.csv" { schtasks /query /fo csv /v }
Collect "05_autoruns.txt" { if (Test-Path "E:\tools\autorunsc64.exe") { & E:\tools\autorunsc64.exe -a * -c -h -nobanner } }
Collect "06_local_users.csv" { Get-LocalUser | Select-Object Name,Enabled,LastLogon,PasswordLastSet | Export-Csv "$OutputPath\06_local_users.csv" -NoTypeInformation }
Collect "06_local_admins.txt" { Get-LocalGroupMember -Group "Administrators" }

# Tier 3: Recent events
Collect "07_security_4624.csv" { Get-WinEvent -FilterHashtable @{LogName='Security';Id=4624} -MaxEvents 100 | Select-Object TimeCreated,@{N='LogonType';E={$_.Properties[8].Value}},@{N='Account';E={$_.Properties[5].Value}},@{N='SourceIP';E={$_.Properties[18].Value}} | Export-Csv "$OutputPath\07_security_4624.csv" -NoTypeInformation }
Collect "07_security_4625.csv" { Get-WinEvent -FilterHashtable @{LogName='Security';Id=4625} -MaxEvents 100 | Select-Object TimeCreated,@{N='Account';E={$_.Properties[5].Value}},@{N='SourceIP';E={$_.Properties[19].Value}} | Export-Csv "$OutputPath\07_security_4625.csv" -NoTypeInformation }
Collect "08_powershell_history.txt" { Get-Content "$env:APPDATA\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt" }
Collect "09_dns_cache.txt" { Get-DnsClientCache | Format-Table -AutoSize }
Collect "10_arp_table.txt" { Get-NetNeighbor | Format-Table -AutoSize }

# Hash all evidence
Get-ChildItem $OutputPath | ForEach-Object { Get-FileHash $_.FullName -Algorithm SHA256 } | Out-File "$OutputPath\99_hashes.txt"
"[*] Complete: $OutputPath ($((Get-ChildItem $OutputPath | Measure-Object).Count) files)" | Out-File $log -Append

Defender Live Response for remote triage

Microsoft Defender for Endpoint includes Live Response — a remote command-line session to any Defender-enrolled endpoint. No SSH, no RDP, no network connectivity beyond the Defender agent’s cloud channel required.

Key Live Response commands for triage:

# Connect via Defender portal: Security.microsoft.com → Endpoints → Device → Live Response
# Collect running processes
processes
# Collect network connections
connections
# Collect the triage script output
putfile triage.ps1  # Upload the script from the Live Response library to the endpoint
run triage.ps1 -parameters "-OutputPath C:\IR"  # Execute
getfile C:\IR\99_hashes.txt  # Download results (getfile takes a single file path, not a directory)

Live Response advantage: the analyst does NOT need network access to the endpoint. The commands travel through Defender’s cloud relay — the endpoint can be on a different network, behind a VPN, or on a remote site. For fleet triage: Live Response scripts can be pushed to multiple endpoints simultaneously via Defender’s Live Response API.

Velociraptor for cross-platform fleet collection

Velociraptor provides fleet-wide volatile evidence collection across Windows AND Linux endpoints from a single console. The key volatile evidence artifacts:

# Windows volatile evidence
- Windows.System.Pslist        # Running processes
- Windows.Network.Netstat      # Active connections
- Windows.Sys.Users            # Local user accounts
- Windows.Forensics.Prefetch   # Recent execution history
- Windows.System.Services      # Running services

# Linux volatile evidence
- Linux.Sys.Pslist             # Running processes
- Linux.Network.Netstat        # Active connections
- Linux.Ssh.AuthorizedKeys     # SSH key audit
- Linux.Persistence.CronJobs   # Scheduled persistence
- Linux.Sys.SystemdServices    # Service listing

Deploying Velociraptor for an incident (if not already deployed):

# Generate server and client configuration interactively
velociraptor config generate -i
# Deploy the client to Windows endpoints via Group Policy or SCCM
# Deploy the client to Linux servers via SSH
# Ad-hoc local query shown below; fleet-wide hunts are launched from the server GUI or API
velociraptor query "SELECT * FROM Artifact.Windows.System.Pslist()" --format json

The Velociraptor hunt runs on all enrolled endpoints simultaneously — capturing the volatile state across the entire fleet within minutes. For a 500-endpoint environment, this is the difference between 500 individual SSH/RDP sessions (days of work) and a single Velociraptor hunt (minutes of work).

Try it: run the PowerShell triage script on a test workstation
  1. Copy the script to a USB drive
  2. Open an elevated PowerShell on a test workstation
  3. Run: .\triage.ps1 -OutputPath E:\IR\test
  4. Check the output: how many files were collected? What is the total size?
  5. Verify: open 99_hashes.txt — does every file have a SHA256 hash?
  6. Compare with the Linux triage script from TR4.10 — the structure is deliberately parallel
Compliance Myth: "Scripts might crash the system or corrupt evidence"

The myth: Running a triage script on a production system during an incident is risky — the script might consume excessive resources, cause a crash, or corrupt the evidence it is trying to collect.

The reality: A well-tested triage script is SAFER than manual commands. Manual triage risks: typing errors (running rm instead of cat), inconsistent collection (forgetting to capture network connections), and unrecorded actions (no log of what was executed). A tested script: executes exactly the same commands every time, includes error handling (try/catch), logs every action with timestamps, and has been validated on test systems before the incident. The key requirement is TESTING — the script must be tested on representative systems during preparation, not deployed untested during an incident. Version-control the script in a Git repository and test after every modification.
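The testing requirement above can be partially automated. A minimal sketch of a post-change smoke test in bash, assuming the triage output has been run on a lab machine and copied somewhere readable — the file names follow the Ridgeline script's layout, and `smoke_test` itself is illustrative, not part of the course toolkit:

```shell
#!/usr/bin/env bash
# Hypothetical smoke test: point it at a triage output directory from a
# test run and check the invariants the script is supposed to guarantee.
smoke_test() {
    local out="$1" rc=0
    # The log, the Tier-1 artifacts, and the hash manifest must exist and be non-empty
    for f in 00_collection_log.txt 01_processes.csv 02_connections.csv 99_hashes.txt; do
        [ -s "$out/$f" ] || { echo "MISSING or EMPTY: $f"; rc=1; }
    done
    # Report the overall artifact count for comparison against previous runs
    echo "$(find "$out" -type f | wc -l) files collected"
    return $rc
}
```

Run it after every modification and before committing the change to the Git repository; a non-zero exit means the edit broke an invariant the incident-time collection depends on.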

Cross-platform triage toolkit organisation

The triage toolkit must be pre-staged on external media and contain tools for ALL three environments. The recommended structure: a windows directory containing KAPE, WinPMEM, Autoruns, and the TR3.1 PowerShell triage script; a linux directory containing AVML, the TR4.1 bash triage script, Cat-Scale, THOR Lite, and pre-compiled LiME modules per kernel version; and a cloud directory containing the TR2.1 5-query KQL pack as an executable PowerShell script and a Graph API collection script for offline environments.

Include a docs directory with the chain-of-custody template from TR1.8 and the triage report template from TR0.10. Include a hashes.txt manifest with SHA256 hashes of every tool — verify these hashes before using any tool during an incident to prevent supply chain tampering.
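The manifest generation and pre-use verification can be sketched with GNU coreutils — `regen_manifest`, `verify_toolkit`, and the `/mnt/toolkit` mount point are illustrative names, not part of the course toolkit:

```shell
#!/usr/bin/env bash
# Sketch of hashes.txt maintenance for the pre-staged toolkit media,
# assuming GNU sha256sum is available on the analysis workstation.
regen_manifest() {
    # Rebuild the manifest after a toolkit update (everything except hashes.txt itself)
    ( cd "$1" && find . -type f ! -name hashes.txt -exec sha256sum {} + > hashes.txt )
}

verify_toolkit() {
    # Refuse the toolkit if any tool no longer matches its recorded hash
    if ( cd "$1" && sha256sum --quiet -c hashes.txt ); then
        echo "toolkit integrity OK"
    else
        echo "HASH MISMATCH: do not use this toolkit" >&2
        return 1
    fi
}
```

Usage: `regen_manifest /mnt/toolkit` after each quarterly update, `verify_toolkit /mnt/toolkit` before any tool is executed during an incident.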

Toolkit maintenance cadence: update quarterly with new tool versions, fresh LiME compilations for current kernel versions, updated KAPE targets and modules, and new THOR Lite signatures. After each update: regenerate the hashes.txt manifest and distribute the updated toolkit to all SOC analysts. The toolkit version should be recorded in every triage report so the investigation team knows which tool versions produced the evidence.

Error handling in production triage scripts: a single command failure should NOT abort the entire collection. Wrap each artifact collection in a try-catch (PowerShell) or conditional execution (bash) that logs the failure and continues to the next artifact. The collection log records which artifacts succeeded and which failed, enabling the triage responder to document limitations in the triage report. A partially successful collection (8 of 10 artifacts collected, 2 failed due to permission denied) is far more valuable than an aborted collection (0 artifacts because the script crashed on artifact 3).
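The bash side of this pattern can be sketched as a `collect` wrapper that mirrors the PowerShell `Collect` function — the artifact names follow the same numbering, but this is an illustrative fragment, not the TR4.10 script itself:

```shell
#!/usr/bin/env bash
# Illustrative bash collection wrapper: log OK/FAIL per artifact,
# never abort the run because one command failed.
OUT="${OUT:-/tmp/ir}"
LOG="$OUT/00_collection_log.txt"
mkdir -p "$OUT"

collect() {
    local name="$1"; shift
    local rc=0
    # stdout becomes the artifact file; stderr goes to the collection log
    "$@" > "$OUT/$name" 2>>"$LOG" || rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "  OK: $name" | tee -a "$LOG"
    else
        # A failed artifact is a documented limitation, not a fatal error
        echo "  FAIL: $name (exit $rc)" | tee -a "$LOG"
    fi
}

collect 01_processes.txt ps auxww
collect 02_connections.txt ss -tunap
collect 03_sessions.txt who -a
```

If `ss` is missing on a minimal server, that single artifact is logged as FAIL and the remaining collections still run — exactly the partial-success behaviour the paragraph above describes.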

For fleet triage: deploy the script to multiple endpoints in parallel using SSH (Linux), PSRemoting (Windows), or Defender Live Response (Windows, no network dependency). The parallel execution window should be synchronised — start all collections within a 2-minute window so the evidence represents approximately the same point in time across the fleet. Staggered collection (endpoint 1 at 15:00, endpoint 5 at 15:45) creates timeline inconsistencies that complicate cross-endpoint correlation.
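The SSH variant of that synchronised push can be sketched as follows — the `hosts.txt` file, the `triage.sh` script path, and pre-staged key-based authentication are all assumptions:

```shell
#!/usr/bin/env bash
# Sketch: start a collection on every Linux host in a hosts file within
# seconds of each other, so the fleet snapshot shares one time window.
fleet_collect() {
    local hosts_file="$1" script="$2"
    [ -f "$hosts_file" ] && [ -f "$script" ] || { echo "usage: fleet_collect <hosts_file> <script>" >&2; return 1; }
    while read -r host; do
        # Stream the script over stdin (nothing persists on the endpoint)
        # and background each session so all hosts start near-simultaneously
        ssh -o ConnectTimeout=10 "$host" 'sudo bash -s' < "$script" \
            > "collect_${host}.log" 2>&1 &
    done < "$hosts_file"
    wait    # block until every endpoint's collection has finished
    echo "fleet collection complete: $(wc -l < "$hosts_file") hosts"
}
```

Per-host console output lands in `collect_<hostname>.log`; the collected artifact directories themselves are pulled back in a second pass (scp/rsync), keeping the collection window tight.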

Troubleshooting

“The PowerShell script is blocked by execution policy.” Run with bypass: powershell -ExecutionPolicy Bypass -File .\triage.ps1. Or set the execution policy for the current session: Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass.

“Defender Live Response is not available for this endpoint.” Live Response requires Defender for Endpoint Plan 2 (P2) and must be enabled in the Defender settings. Check: Security settings → Endpoints → Advanced features → Live Response. If the endpoint is not enrolled in Defender: use remote PowerShell (Enable-PSRemoting) or SSH for remote triage.

Beyond this investigation: Live response scripting connects to TR4.10 (where the Linux triage script and fleet automation are covered in depth), SOC Operations (where the triage scripts are maintained, version-controlled, and tested as operational tools), Detection Engineering DE10 (where detection-as-code principles are applied to triage automation — scripts reviewed, tested, and deployed through a pipeline), and M365 Security Operations (where Defender Live Response is covered in the context of endpoint investigation).

You're reading the free modules of this course

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
