OD1.4 Risk Tolerance and Operational Security
You've seen incidents that were discovered within hours and incidents that had months of dwell time before detection. You've triaged alerts from noisy attackers who left artifacts everywhere and quiet attackers who left almost nothing. This sub explains why those differences exist — the attacker's risk tolerance is a deliberate operational choice, not an accident.
Operational Objective
During an investigation, the attacker's noise level is one of the first things you can observe. Most defenders treat it as a static characteristic — "this attacker is noisy." In reality, noise is a diagnostic. A noisy attacker and a quiet attacker aren't just different skill levels. They're making different operational choices driven by different risk tolerances, which are driven by different objectives.
The noise level tells you the attacker's intent and timeline within the first hour of an investigation. This sub teaches you the four noise levels, why each exists, and how to translate the noise classification into response priorities.
Learning Objectives
By the end of this sub you will be able to:
- Classify an attacker's operational noise level (loud, visible, quiet, silent) from the first hour of investigation telemetry. This matters because the noise level is the fastest diagnostic for response urgency — loud means you're in the final phase and need immediate containment, quiet means you may have a long-dwell compromise that requires careful evidence preservation.
- Explain why noise is a deliberate operational choice driven by risk tolerance and objective, not by skill level — the same insight that explains why Volt Typhoon (state-sponsored, near-silent) and LockBit affiliates (commodity tools, loud deployment) can both be highly effective despite operating at opposite ends of the noise spectrum. This matters because defenders who treat noise as a skill indicator underestimate quiet attackers and waste resources over-investigating loud ones.
- Identify detection coverage gaps across the noise spectrum in your own environment and explain why campaign-level correlation becomes essential at the "quiet" level where individual events look legitimate. This matters because most detection programmes cover the "visible" level well and the "quiet" level poorly — that gap is where the most damaging attackers operate.
Figure OD1.4 — The noise spectrum. Attacker noise is a function of risk tolerance and objective, not skill. Reading the noise level is a first-hour diagnostic for response prioritization.
Why noise is a choice
The natural assumption is that noisy attackers are unskilled and quiet attackers are skilled. That's wrong.
A ransomware operator in the deployment phase doesn't care about stealth. The encryption is about to start. Every second spent being careful is a second the SOC might detect the staging and prevent deployment. Speed beats stealth when the objective is imminent and irreversible.
An espionage operator accessing the CFO's mailbox every Thursday at 3 AM doesn't need to be fast. Their objective is ongoing collection — data that updates weekly, plans that evolve over months. Being detected means losing access that took weeks to establish. Every unnecessary command is a risk that could end the operation.
Both attackers are making rational risk-tolerance decisions. Both might have identical technical capability. The noise difference comes from the objective, not the skill.
The four noise levels
Loud — accepts detection
Loud operations happen when the attacker is in the final phase or when detection doesn't matter.
Ransomware encryption is the canonical example — by the time the alerts fire, the damage is in progress. Wiper deployment is similar. Mass credential dumping is loud because the attacker needs credentials quickly for lateral movement and is willing to generate alerts because the window between detection and response is their operational space. If the SOC takes 30 minutes to investigate and 60 minutes to contain, the attacker has 90 minutes of productive operation after the alert fires.
Here's what loud looks like in your SIEM — this is a 15-minute window during a ransomware staging phase:
21:30:14 SRV-NGE-BKP01 vssadmin delete shadows /all /quiet
21:30:22 SRV-NGE-BKP01 wmic shadowcopy delete
21:30:45 SRV-NGE-BKP01 sc stop veeam
21:31:08 SRV-NGE-DC01 gpupdate /force (deploying ransomware GPO)
21:31:22 SRV-NGE-FS01 copy \\SRV-NGE-DC01\SYSVOL\...\encrypt.exe C:\Windows\Temp\
21:31:35 SRV-NGE-FS02 copy \\SRV-NGE-DC01\SYSVOL\...\encrypt.exe C:\Windows\Temp\
21:31:48 SRV-NGE-SQL01 copy \\SRV-NGE-DC01\SYSVOL\...\encrypt.exe C:\Windows\Temp\
...12 more systems in the next 3 minutes...
21:35:00 DESKTOP-NGE* encrypt.exe executing across 47 endpoints simultaneously

Fifteen minutes. Shadow copy deletion, backup service stop, GPO push, ransomware binary distribution, mass execution. Every single event generates an alert. The attacker doesn't care — by the time you've opened the first ticket, encryption is running on 47 systems.
Defensive translation: when you see loud activity — mass process creation, bulk file operations, service stopping across multiple systems — you're in the final phase. Response priority: contain immediately. Isolate network segments. Kill the GPO. Investigate later.
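The "contain immediately" call depends on recognizing loud markers fast. Here is a minimal sketch of that recognition, assuming a simplified event schema (`host`, `ts`, `cmd`) rather than any real SIEM's fields:

```python
from datetime import datetime, timedelta

# Commands that are destructive or prep for destruction. Seeing one at all is
# high-signal; seeing several inside one short window is final-phase "loud".
LOUD_MARKERS = (
    "vssadmin delete shadows",
    "wmic shadowcopy delete",
    "sc stop",
)

def is_final_phase(events, window=timedelta(minutes=15), min_hits=2):
    """True if several destructive markers land inside one short window."""
    hits = sorted(
        (e for e in events if any(m in e["cmd"].lower() for m in LOUD_MARKERS)),
        key=lambda e: e["ts"],
    )
    return any(
        sum(1 for h in hits if timedelta(0) <= h["ts"] - first["ts"] <= window) >= min_hits
        for first in hits
    )
```

Run against the staging window above, the first two events alone (shadow copy deletion and `wmic shadowcopy delete`, eight seconds apart) are enough to trip the final-phase flag.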
Visible — detectable with existing rules
Visible operations use known tools and techniques that produce recognizable signatures.
Cobalt Strike with a default malleable profile. PowerShell Invoke-Mimikatz. PsExec for lateral movement. The attacker isn't being careless — they're being economical. Custom tooling costs time to develop. Commodity tooling works against the majority of targets. The risk calculation: "If this target detects us, we move to the next target. The cost of detection is low because the operation is cheap to reproduce."
Defensive translation: visible operations are the validation test for your detection programme. If your rules don't catch default-configuration Cobalt Strike, you have a coverage gap against the most common attack tooling. This is where technique-level detection (Purple Teaming's domain) pays off directly.
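Technique-level detection at this tier can be as blunt as command-line substring signatures, which is both why it works against default tooling and why it fails one level down. A hedged sketch, with an assumed command-line event format:

```python
# Map substring signatures of well-known commodity tools to the technique they
# indicate. Matching on command-line substrings is deliberately naive: that is
# exactly the level of detection the quiet tier defeats by avoiding the tools.
VISIBLE_SIGNATURES = {
    "sharphound": "BloodHound AD collection",
    "mimikatz": "credential dumping",
    "psexec": "lateral movement",
    "cobaltstrike": "C2 framework",
}

def match_visible(command):
    """Return (signature, technique) pairs found in the command line."""
    cmd = command.lower()
    return [(sig, tech) for sig, tech in VISIBLE_SIGNATURES.items() if sig in cmd]
```

A default Mimikatz invocation matches immediately; the quiet attacker's `net group "Domain Admins" /domain` sails through with zero matches, which is the coverage gap the next section addresses.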
Quiet — blends with legitimate traffic
Quiet operations use legitimate tools and normal-looking behavior to avoid triggering rules.
The attacker uses net.exe instead of BloodHound for AD enumeration. They use native RDP instead of Cobalt Strike for lateral movement. They schedule activity during business hours. They use LOLBins — signed, trusted, expected Microsoft binaries — for every action they can.
Here's what quiet looks like — the same investigation phase (discovery + credential access) performed by a quiet attacker versus a loud one:
LOUD attacker (ransomware affiliate, T+1hr):
09:14:02 SharpHound.exe -c All → BloodHound collection
09:14:45 mimikatz.exe "sekurlsa::logonpasswords" → LSASS credential dump
09:15:12 PsExec \\SRV-NGE-DC01 cmd.exe → lateral movement
→ Three alerts fire in 70 seconds. All three are in your rule set.
QUIET attacker (espionage operator, spread over 3 days):
Mon 10:32 net group "Domain Admins" /domain → AD enumeration (legitimate command)
Tue 14:15 dir \\SRV-NGE-FS01\finance$ → share enumeration (legitimate command)
Wed 09:45 comsvcs.dll MiniDump via rundll32.exe → LSASS dump (LOLBin, no Mimikatz signature)
Wed 10:02 mstsc /v:SRV-NGE-DC01 → RDP (native Windows tool, no PsExec)
→ Zero alerts fire. Every command is a legitimate Windows binary.
Every action is during business hours. The gap between events is measured in days, not seconds.

The quiet attacker achieved the same outcome — domain admin credentials and access to the DC — without triggering a single alert. Each individual event is indistinguishable from an IT admin doing their job.
Defensive translation: quiet operations defeat technique-level detection because the individual events look legitimate. This is where campaign-level detection becomes necessary. The individual net.exe execution is legitimate. The pattern of net.exe followed by share enumeration followed by comsvcs.dll LSASS dump followed by RDP to the DC — spread over three days, from one account that isn't a domain admin — is a campaign. It's detectable through multi-day correlation even when each individual event is benign. This is the core capability this course builds.
Silent — below the detection surface
Silent operations produce minimal or no telemetry visible to standard monitoring.
Firmware implants don't appear in Sysmon logs. Supply chain poisoning in a build pipeline looks like a normal software build. Dormant persistence that activates once a month doesn't trigger beaconing detection tuned for hourly callbacks. Silent operations require high capability and high budget — the most advanced attacker class.
Defensive translation: detecting silent operations requires baseline integrity monitoring (has a firmware hash changed?), build pipeline verification (does the deployed binary match the source?), and anomaly detection (why did this account authenticate with a certificate we don't recognize?). This is advanced threat hunting territory, not SOC alerting.
Reading the noise during investigations
Within the first hour of investigation, you can classify the noise level. That classification tells you:
If loud: You're in the late stages. The objective is happening now or just completed. Contain immediately, investigate later.
If visible: The operation is in progress with known tools. The attacker may not know they've been detected. Silently assess scope before containment — identify all compromised systems before acting. You have time.
If quiet: You've detected something subtle — a behavioral anomaly or correlation alert. The attacker values their access. Preserve evidence meticulously. This may be a long-dwell compromise. Focus on determining how long the attacker has been present and what they've accessed.
If silent (suspected): You have an anomaly or an intelligence tip but no clear evidence. Threat hunt. Build hypotheses about where a silent adversary would operate and look for residual traces they can't fully eliminate.
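The four branches above can be collapsed into a first-hour triage heuristic. The inputs and the mapping below are illustrative assumptions, not calibrated detection logic:

```python
# First-hour triage: map coarse observables from the initial investigation to
# a noise level and the response priority this sub associates with it.
def classify_noise(destructive_activity, known_tool_signatures, lolbin_only_activity):
    """Return (noise level, response priority) from first-hour observables."""
    if destructive_activity:
        return ("loud", "contain immediately, investigate later")
    if known_tool_signatures:
        return ("visible", "silently assess scope, then contain")
    if lolbin_only_activity:
        return ("quiet", "preserve evidence, determine dwell time")
    return ("silent (suspected)", "threat hunt from hypotheses")
```

The ordering matters: destructive activity overrides everything else, because a loud final phase can coexist with weeks of earlier quiet activity that you investigate after containment.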
Hands-on Exercise — Noise Spectrum Classification
Objective: Classify alerts from your own environment on the noise spectrum and identify where your detection coverage has gaps.
Prerequisites: Access to your SIEM with at least 7 days of alerts. No lab VMs required.
Success criteria: You've classified 10 alerts, answered the three questions for each, and documented at least one noise-level gap.
STEP 1 — Pull 10 recent alerts
Select 10 true-positive or suspected-true-positive alerts from the past week. Include a mix of severities.
STEP 2 — Classify each alert on the noise spectrum
For each alert, assign a noise level:
LOUD — mass operations, bulk activity, final-phase behavior
VISIBLE — known tools, default signatures, documented techniques
QUIET — legitimate tools used unusually, behavioral anomaly
SILENT — you won't find this in your alert queue (by definition)
Most alerts will be VISIBLE. That's expected — your detection rules are built for the VISIBLE level.
STEP 3 — For each alert, answer three questions
a. What noise level would the attacker need to achieve this same objective without triggering this specific alert?
b. What's the simplest change to drop one noise level lower? (e.g., replace Mimikatz with comsvcs.dll MiniDump → drops from VISIBLE to QUIET)
c. Do you have detection coverage at that lower noise level?
STEP 4 — Document the gap
Write: "My detection covers [X] of 4 noise levels. The gap is at the [QUIET/SILENT] level because [specific reason — e.g., no behavioral correlation rules for legitimate tool abuse]."
Challenge: For the alerts you classified as VISIBLE, identify the specific tool or configuration that made them detectable. Then search your SIEM for the same activity performed with legitimate tools (the QUIET equivalent). If you find it, you've discovered where a quiet attacker would operate undetected in your environment.
You should be able to do the following without referring back to this sub. If you can't, the sections to re-read are noted.
You're reading the free modules of offensive-security-for-defenders
The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts.