ES0.7 The Deployment Sequence That Matters

· Module 0 · Free
Operational Objective
The natural instinct after a gap assessment is to enable everything immediately. All ASR rules to block mode. Cloud protection to maximum. Compliance policies enforcing conditional access. Sysmon deployed fleet-wide. The result of this approach is predictable: phone calls from every department, LOB applications breaking, users locked out of email, and an IT operations team that reverts every change within 48 hours. The endpoint security project dies in its first week. Deployment sequence is not an administrative convenience — it is an engineering requirement. Each control has prerequisites, dependencies, and a blast radius that determine when it can safely be deployed. Getting the sequence right is the difference between an endpoint security architecture that strengthens over 90 days and a failed project that gets rolled back on day 3.
Deliverable: A phased deployment sequence for endpoint security controls, with prerequisites and success criteria per phase, designed to build protection incrementally without disrupting business operations.
Estimated completion: 25 minutes
DEPLOYMENT SEQUENCE — 90-DAY ENDPOINT SECURITY ENGINEERING

Phase 1: Visibility (Weeks 1-2) — Complete onboarding (100%); ASR rules to audit mode; device health dashboard; AV cloud protection → High+; baseline metrics collected. Zero blast radius.
Phase 2: Prevention (Weeks 3-6) — Safe ASR rules → block; compliance → CA (pilot); LSASS protection → block; LAPS deployment; Credential Guard (pilot). Graduated enforcement.
Phase 3: Detection (Weeks 7-10) — Custom detections (20+); AIR configuration tuned; hunting query library; careful ASR rules → block; Sentinel integration. Build detection capability.
Phase 4: Readiness (Weeks 11-12) — Sysmon deployment; audit policy hardening; PowerShell logging; servers + Linux onboard; collection tools staged. Ensure evidence exists.
Phase 5: Optimize (Ongoing) — Vulnerability management; monitoring dashboards; governance framework; automation playbooks; validation testing. Continuous improvement.

Figure ES0.7 — The 90-day deployment sequence. Phase 1 creates visibility with zero blast radius. Phase 2 deploys prevention with graduated enforcement. Phase 3 builds detection capability. Phase 4 ensures forensic readiness. Phase 5 is the ongoing optimization cycle. Each phase has success criteria that must be met before advancing.

Phase 1: Visibility first, enforcement later

The first two weeks focus exclusively on visibility. Every action in Phase 1 has zero production impact — nothing is blocked, nothing is enforced, nothing changes the user experience. The goal is to establish a baseline: what does the environment look like before you start changing it?

Complete MDE onboarding to 100%. Resolve the 85 devices that are not onboarded — the co-managed SCCM devices, the BYOD exclusions, the offline devices, the conflicting AV instances. You cannot secure what you cannot see. 100% onboarding is the prerequisite for every subsequent phase.

Deploy all ASR rules in audit mode across the entire fleet. Audit mode generates event data showing what WOULD have been blocked if the rules were in block mode — without actually blocking anything. This data is essential: after 2-4 weeks of audit data collection, you can identify which rules generate zero false positives (safe to enforce immediately) and which rules trigger on legitimate LOB applications (require exclusions before enforcement).
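The audit-to-enforcement triage described above can be sketched as a simple bucketing function. The thresholds, rule names, and event counts below are illustrative assumptions, not product values:

```python
# Classify ASR rules from audit-mode event data into enforcement buckets.
# Thresholds are illustrative assumptions -- tune them to your environment.
def classify_asr_rule(audit_events: int, distinct_apps: int) -> str:
    if audit_events == 0:
        return "safe: move to block mode now"
    if distinct_apps <= 3:
        return "careful: build exclusions, then block"
    return "hold: investigate before any enforcement"

# Hypothetical audit totals per rule: (events in 2 weeks, distinct apps)
audit_data = {
    "Block credential stealing from LSASS": (0, 0),
    "Block Office applications from creating child processes": (120, 2),
    "Block Win32 API calls from Office macros": (340, 12),
}

for rule, (events, apps) in audit_data.items():
    print(f"{rule}: {classify_asr_rule(events, apps)}")
```

The zero-event rules become the Phase 2 "safe set"; everything else waits for exclusions or further analysis.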

Raise AV cloud protection from default to High+. This is a low-risk change that improves detection of unknown malware. High+ extends the cloud analysis time for suspicious files and blocks files that the cloud ML models have not yet classified as safe. The user-visible impact is occasional 10-second delays when opening unknown files for the first time — most users will not notice.

Build the device health dashboard (Module ES2). KQL queries that monitor onboarding coverage, sensor health, AV update status, and ASR audit events. This dashboard is your engineering control panel for the next 88 days.
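The dashboard itself lives in KQL in the portal, but the coverage math behind it is straightforward. A minimal sketch over an exported device inventory; the field names (`onboarded`, `sensor_health`) are assumptions about your export format, not an MDE schema:

```python
# Compute onboarding coverage and sensor health distribution from a
# device-inventory export. Field names are assumptions, not an MDE API schema.
from collections import Counter

def coverage_report(devices: list[dict]) -> dict:
    total = len(devices)
    onboarded = sum(1 for d in devices if d["onboarded"])
    health = Counter(d["sensor_health"] for d in devices if d["onboarded"])
    return {
        "total": total,
        "onboarded_pct": round(100 * onboarded / total, 1) if total else 0.0,
        "health": dict(health),
    }

fleet = [
    {"onboarded": True, "sensor_health": "Active"},
    {"onboarded": True, "sensor_health": "Misconfigured"},
    {"onboarded": False, "sensor_health": None},
]
print(coverage_report(fleet))  # onboarded_pct 66.7 -- short of the Phase 1 gate
```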


Phase 1 success criteria (must be met before advancing to Phase 2): MDE onboarded on 100% of active fleet devices (excluding documented decommissioned hardware). All 18+ ASR rules in audit mode on 100% of managed devices. AV cloud protection at High+ on 100% of managed devices. Device health dashboard operational with daily monitoring. Two weeks of ASR audit data collected. Baseline metrics documented (exposure score, ASR audit event volume per rule, device health distribution).
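Encoding the exit criteria as an explicit gate makes the go/no-go decision mechanical rather than negotiable. A sketch, assuming the metrics are collected into a plain dict:

```python
# Encode the Phase 1 exit criteria as an explicit go/no-go gate.
# Criteria mirror the text above; the data structure is an assumption.
PHASE1_GATE = {
    "onboarding_pct": lambda v: v == 100.0,
    "asr_audit_coverage_pct": lambda v: v == 100.0,
    "cloud_protection_high_plus_pct": lambda v: v == 100.0,
    "dashboard_operational": lambda v: v is True,
    "audit_data_weeks": lambda v: v >= 2,
}

def gate_check(metrics: dict) -> list[str]:
    """Return the list of failed criteria; an empty list means go."""
    return [name for name, ok in PHASE1_GATE.items() if not ok(metrics.get(name, 0))]

current = {
    "onboarding_pct": 97.2,          # 85 devices still unresolved
    "asr_audit_coverage_pct": 100.0,
    "cloud_protection_high_plus_pct": 100.0,
    "dashboard_operational": True,
    "audit_data_weeks": 2,
}
print(gate_check(current))  # ['onboarding_pct'] -> no-go until resolved
```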

These criteria are not suggestions. If Phase 1 success criteria are not met, Phase 2 will create problems: ASR rules moved to block mode without audit data will generate unknown false positives. Compliance policies enforced without 100% onboarding will lock out devices the SOC cannot investigate. The visibility phase exists specifically to prevent these downstream failures.

Phase 2: Prevention with graduated enforcement

Weeks 3-6 deploy prevention controls using the audit-first, graduated enforcement methodology. The ASR audit data from Phase 1 drives every enforcement decision.

Start with the “safe set” — ASR rules that generate minimal false positives across most environments. “Block credential stealing from LSASS” rarely triggers on legitimate software. “Block abuse of exploited vulnerable signed drivers” targets kernel-mode driver exploitation. “Block untrusted and unsigned processes that run from USB” blocks USB-based malware delivery. Move these to block mode across the fleet. Monitor for one week. If no legitimate business impact occurs, proceed.

Deploy LAPS. This is a hardening control with significant lateral movement prevention value and low blast radius. Each endpoint gets a unique, randomly generated local administrator password stored in Entra ID (or on-prem AD). Existing scripts or tools that use the shared local admin password will break — identify these during Phase 1 and update them to use LAPS-managed passwords.

Connect compliance policies to conditional access for a pilot group. Start with IT and security staff — people who understand why they might be temporarily blocked and know how to remediate their device. Expand to standard users after the pilot validates the workflow.

What happens when you get the sequence wrong

The deployment sequence is not theoretical — the consequences of wrong sequencing are specific and measurable.

ASR before onboarding (wrong). You deploy ASR rules to all Intune-managed devices. But 85 devices are not onboarded to MDE — the ASR rules still apply (they are Intune policies, not MDE features), but MDE has no telemetry from those devices. If an ASR rule blocks a legitimate application on a device that is not onboarded, the block event does not appear in MDE Advanced Hunting — you cannot see the false positive, cannot troubleshoot it, and the user reports “my application stopped working” without any security telemetry to diagnose the cause. Onboarding first ensures ASR audit data is visible before enforcement decisions are made.

Detection rules before AV tuning (wrong). You build custom detection rules for credential access techniques. But AV is at default cloud protection, which means unknown credential dumping tools may execute without AV intervention. The detection rule fires AFTER the tool runs — the detection is reactive. If AV were at High+ cloud protection, the tool might be blocked BEFORE execution — making the detection redundant for that specific instance. AV tuning first ensures the prevention layer catches what it can; detection rules catch what prevention misses.

Compliance enforcement before remediation (wrong). You enable the CA policy “require compliant device” without first checking how many devices are compliant. 47 devices fail the compliance check. 47 users are immediately blocked from M365 resources. 47 help desk tickets. 47 angry managers. The remediation-first approach (ES3.1) identifies and fixes the 47 non-compliant devices before enforcement, reducing the enforcement-day disruption to near zero.

Decision Point

ASR audit data shows that “Block Office applications from creating child processes” would block a legitimate LOB application that launches a helper process from Excel. Do you (A) keep the rule in audit mode permanently, (B) create an exclusion for the LOB application and move to block mode, or (C) keep the rule in warn mode? Option B is correct. An exclusion that allows a specific known application to bypass the rule preserves the rule’s protection for all other scenarios. The exclusion must be documented (application name, path, business justification, review date) in the ASR exclusion register. Keeping the rule in audit or warn mode permanently means the rule provides zero prevention value — every attacker who uses Office-to-PowerShell chains succeeds. One exclusion for one known application is a better security outcome than no enforcement at all.
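The exclusion register entry the Decision Point calls for can be as simple as a structured record. A sketch with hypothetical application details — the dataclass layout is illustrative, not a prescribed format:

```python
# A minimal ASR exclusion register entry with the fields the Decision Point
# requires. Application details are hypothetical; store entries wherever
# your governance documentation lives.
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AsrExclusion:
    rule: str
    application: str
    path: str
    justification: str
    review_date: date

entry = AsrExclusion(
    rule="Block Office applications from creating child processes",
    application="FinanceHelper (hypothetical LOB app)",
    path=r"C:\Program Files\FinanceHelper\helper.exe",
    justification="Launches a helper process from Excel for month-end close",
    review_date=date(2026, 6, 30),
)
print(asdict(entry)["rule"])
```

The review date matters: an exclusion without a scheduled review becomes a permanent, forgotten hole in the rule.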

Phases 3-5: Detection, readiness, and optimization

Phase 3 (Weeks 7-10): Detection. Build custom detection rules covering the highest-priority ATT&CK techniques: credential dumping (LSASS access by non-system processes), lateral movement (PsExec service creation, WMI remote execution), persistence (scheduled task creation from unusual parents, registry run key modification), and defense evasion (process injection, AMSI bypass indicators). Configure AIR at the appropriate automation level per alert type. Build the hunting query library. Move the “careful set” ASR rules (Office child process blocking, script execution blocking) to block mode with the exclusions identified during audit.

Phase 4 (Weeks 11-12): Forensic readiness. Deploy Sysmon with the SwiftOnSecurity baseline to all endpoints. Configure advanced audit policies (process creation with command-line, logon events, privilege use). Enable PowerShell ScriptBlock logging. Onboard Windows servers and Linux servers to MDE. Stage KAPE and Velociraptor for rapid collection. Validate that forensic evidence is being generated by executing a test scenario and confirming the expected log entries exist.

Phase 5 (Ongoing): Optimization. Operationalize vulnerability management (Defender Vulnerability Management recommendations, remediation tracking, exception management). Build monitoring dashboards (executive and engineering views). Create governance documentation (policy, exception register, change management, compliance mapping). Deploy automation playbooks (auto-collect evidence on high-severity alerts, auto-isolate on ransomware with confidence thresholds). Run validation testing (Atomic Red Team tests mapped to deployed controls).

Try it: build your 90-day deployment plan

Using the gap assessment from ES0.6, assign each gap to a phase:

Phase 1 (Visibility): Any control that can be deployed in audit or monitoring mode with zero production impact. ASR audit mode, device health monitoring, baseline metric collection.
Phase 2 (Prevention): Controls that prevent attacks. ASR block mode (graduated), LAPS, Credential Guard, compliance→CA enforcement. Ordered by impact × inverse blast radius.
Phase 3 (Detection): Controls that detect attacks. Custom detection rules, hunting queries, AIR tuning, Sentinel integration.
Phase 4 (Readiness): Controls that ensure evidence exists. Sysmon, audit policies, PowerShell logging, collection tool staging.
Phase 5 (Optimization): Controls that require ongoing management. Vulnerability remediation, dashboards, governance, automation.

For each item, estimate the effort (hours) and identify the owner (security team, IT operations, or joint). The total should be achievable within the available engineering hours over 90 days. If it is not, reduce scope — cut Phase 5 items before Phase 2 items. Prevention is more valuable than optimization.
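The prioritization rule above (prevention before optimization, ordered by impact and inverse blast radius, trimmed to the hour budget) can be sketched as a greedy selection. All scores and hour figures are illustrative assumptions:

```python
# Rank gap items and trim the plan to the available engineering hours.
# Phase 5 items drop before Phase 2 items because prevention outranks
# optimization; scores and hours are illustrative assumptions.
def plan(items: list[dict], budget_hours: int) -> list[dict]:
    # Sort by phase first (earlier phases survive cuts), then by
    # impact x inverse blast radius within a phase.
    ranked = sorted(
        items,
        key=lambda i: (i["phase"], -i["impact"] / i["blast_radius"]),
    )
    selected, used = [], 0
    for item in ranked:
        if used + item["hours"] <= budget_hours:
            selected.append(item)
            used += item["hours"]
    return selected

gaps = [
    {"name": "ASR block mode (safe set)", "phase": 2, "impact": 9, "blast_radius": 2, "hours": 16},
    {"name": "LAPS", "phase": 2, "impact": 8, "blast_radius": 2, "hours": 12},
    {"name": "Automation playbooks", "phase": 5, "impact": 5, "blast_radius": 1, "hours": 40},
]
for item in plan(gaps, budget_hours=40):
    print(item["name"])  # the Phase 5 item is cut, not the Phase 2 items
```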

Compliance Myth: "We need to enable all security controls before our next audit"

The myth: The audit is in 6 weeks. We need everything configured by then. Deploy all controls at once to meet the deadline.

The reality: Deploying all controls simultaneously creates an unstable environment that you cannot troubleshoot. If 8 policy changes are applied in the same week and users report issues, you cannot isolate which change caused the problem. The audit deadline does not change the engineering reality: controls must be tested before enforcement.

What the audit actually requires: demonstrate that you have an endpoint security program with a documented plan, that controls are being deployed according to a risk-prioritized schedule, and that each deployed control is validated. An audit that finds “15 controls planned, 8 deployed and validated, 7 in progress with documented timeline” passes. An audit that finds “15 controls deployed simultaneously, 6 reverted due to production issues, 3 in unknown state” does not.

Troubleshooting

“Our IT operations team wants 4 weeks of change management approval for each phase.” Negotiate a single change approval for the entire 90-day plan with phase-specific go/no-go gates. Present the plan as a single project with five phases, each requiring a brief approval checkpoint. This is more efficient than 15 individual change requests and gives IT operations visibility into the full scope.

“We are in the middle of an incident — should we skip the phased approach and enable everything now?” During an active incident, yes — enable the controls that directly address the threat. If the incident involves LSASS credential dumping, enable the LSASS ASR rule in block mode immediately. If it involves lateral movement via PsExec, enable the “Block process creations originating from PsExec and WMI commands” ASR rule. Post-incident, return to the phased approach for the remaining controls. Emergency deployment of targeted controls during an incident is justified. Blanket deployment of all controls during an incident is still risky.

Your Phase 1 ASR audit data shows that "Block Win32 API calls from Office macros" would block 340 events per week across 12 unique applications. Should you move this rule to block mode in Phase 2?

(A) Yes — 340 blocks per week means the rule is providing high protection value. Move to block mode immediately and handle complaints as they come.
(B) No — 340 false positives per week is too many. Keep the rule in audit mode permanently.
(C) It depends on the analysis of the 12 applications. Investigate each: are they legitimate LOB applications that need macro API access, or generic Office add-ins that can be updated or replaced?
(D) Move the rule to warn mode as a compromise — users can click through the warning for legitimate use while being alerted to potentially malicious activity.

Option C is correct. If, say, 330 of the 340 events come from a single LOB application used by the finance team, create an exclusion for that application and move to block mode — one exclusion eliminates 330 of the 340 events, and the rule now blocks the remaining patterns, which may be malicious. The decision is not "block or audit" — it is "exclude the known legitimate, block everything else." The audit data tells you what to exclude, the exclusion register documents the business justification, and the remaining protection value justifies keeping the rule in block mode.
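The exclusion arithmetic in the correct answer can be checked directly. Application names and the per-app split below are hypothetical, chosen to total 340 weekly events:

```python
# After excluding the dominant legitimate application, how many audit
# events remain to investigate? App names and counts are hypothetical.
def remaining_after_exclusions(events_per_app: dict, excluded: set) -> int:
    return sum(v for app, v in events_per_app.items() if app not in excluded)

audit = {"finance-lob-app": 330, "addin-a": 4, "addin-b": 3, "addin-c": 3}
assert sum(audit.values()) == 340  # matches the weekly total in the scenario

print(remaining_after_exclusions(audit, {"finance-lob-app"}))  # 10
```

One documented exclusion removes about 97% of the noise while the rule keeps blocking everything else in the environment.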

You're reading the free modules of this course

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
