ES0.11 Interactive Lab: Endpoint Security Assessment

· Module 0 · Free
Interactive Lab
This lab applies the endpoint security assessment methodology from this module to Northgate Engineering's environment. You will evaluate NE's current endpoint security posture across all five layers of the stack, score the posture against the maturity model, identify the top 10 gaps, and prioritize the deployment sequence. The lab simulates the assessment process that every endpoint security engineer performs before beginning a configuration project.
Deliverable: A completed gap assessment, maturity score, and prioritized deployment roadmap for NE's endpoint security architecture.
Estimated completion: 20 minutes
[Figure: lab workflow diagram — STEP 1: Assess 5 layers (score each layer 0-2) → STEP 2: Identify top 10 gaps (rank by security impact) → STEP 3: Build deployment plan (assign gaps to 5 phases).]

Figure ES0.11 — Lab workflow: assess the five layers, identify the gaps, and build the deployment plan.

Lab scenario

You are the newly assigned endpoint security engineer at Northgate Engineering. The CISO has approved a 90-day endpoint security improvement project and needs a gap assessment, maturity score, and deployment plan by end of week. The following data represents NE’s current state.

NE endpoint security data

Fleet composition: 865 Windows endpoints (Windows 11 23H2), 12 Windows servers (Server 2022), 6 RHEL servers, 2 Ubuntu web servers, 340 iOS devices, 180 Android devices. Management: Intune (primary), SCCM (co-managed, 120 devices), GPO (AD-joined servers).

MDE status: Onboarded 780/865 Windows endpoints (90.2%). 8/12 Windows servers onboarded (unified agent). 0/8 Linux servers onboarded. Sensor health: 743 active, 37 inactive or intermittent. Zero custom detection rules. AIR at default semi-automated level. No indicators configured. Live response: basic mode only (advanced not enabled).
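The onboarding percentages above follow directly from the fleet counts. A minimal sketch of the coverage arithmetic (all counts come from the lab data; the helper function is just illustrative):

```python
# Onboarding coverage from the NE fleet data: 780/865 Windows endpoints,
# 8/12 Windows servers, 0/8 Linux servers onboarded to MDE.
def pct(onboarded: int, total: int) -> float:
    """Onboarding coverage as a percentage, rounded to one decimal."""
    return round(100 * onboarded / total, 1)

print(f"Windows endpoints: {pct(780, 865)}%")          # 90.2%
print(f"Windows servers:   {pct(8, 12)}%")             # 66.7%
print(f"Linux servers:     {pct(0, 8)}%")              # 0.0%
print(f"Windows endpoints not onboarded: {865 - 780}")  # 85
```

The 85 un-onboarded Windows endpoints and 8 unmonitored Linux servers reappear later as visibility gaps in Step 2.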

AV configuration: Defender AV active on all MDE-onboarded devices. Cloud protection: default level. Block at First Sight: enabled (default). PUA detection: not configured. Scan schedule: default (no custom schedule). Exclusions: 3 directory-level exclusions created during initial deployment (C:\Program Files\NE-ERP, C:\SQLData, D:\Backups) — none documented, no review dates.

ASR rules: No ASR rules configured through Intune or GPO. All rules effectively disabled.

Compliance and CA: Intune compliance policies assigned: require BitLocker, require firewall enabled, require Defender AV active. Non-compliant devices: 47. Conditional access: no policy references device compliance or MDE risk level. Non-compliant devices access all M365 resources normally.

Hardening: BitLocker enabled fleet-wide. LAPS: not deployed. Credential Guard: not enabled. CIS benchmarks: not applied. Security baselines: not assigned in Intune. Local admin password: same across all endpoints (set during image deployment, 2023).

Forensic readiness: Audit policy: default (basic audit). PowerShell ScriptBlock logging: not configured. Sysmon: not deployed. Event log sizes: default (20MB Security log). No collection tools pre-staged. No event forwarding.

Vulnerability management: Defender Vulnerability Management enabled (included in E5). 847 open recommendations. 12 critical vulnerabilities across 340 devices. Exposure score: 68/100 (high exposure). No remediation workflow. No exception management.

Step 1: Layer assessment

For each layer, assign a score based on the data above:

Layer 1 — Hardening (0-2): BitLocker is the only hardening control deployed. LAPS absent. Credential Guard absent. CIS benchmarks not applied. Shared local admin password.

Layer 2 — Prevention (0-2): AV running at default. Zero ASR rules. No WDAC. No exploit protection hardening. No Controlled Folder Access.

Layer 3 — Detection (0-2): MDE built-in alerts only. Zero custom detections. Zero hunting activity. No Sentinel cross-workload correlation active for endpoint data.

Layer 4 — Response (0-2): AIR at default. Live response basic only. No pre-built response scripts. No indicator management process. No documented isolation workflow.

Layer 5 — Forensic Readiness (0-2): Default audit policies. No PowerShell logging. No Sysmon. No collection tools. No event forwarding.

Use the data above to score each layer. Record your scores and calculate the total out of 10. NE should score approximately 1-2 out of 10: Layer 1 earns partial credit for fleet-wide BitLocker, Layer 4 arguably earns partial credit for basic AIR being available, and every other layer scores zero.
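One way to keep the scoring honest is to record it as data and compute the total. A minimal sketch, using one defensible reading of NE's state (Layer 1 gets a single point for BitLocker, everything else scores 0 — your own reading may differ):

```python
# Hypothetical scoring helper for the five-layer maturity model (0-2 per layer,
# total out of 10). Layer names come from the lab; the per-layer scores below
# are one defensible assessment of NE's data, not an official answer key.
LAYERS = ["Hardening", "Prevention", "Detection", "Response", "Forensic Readiness"]

def maturity_score(scores: dict[str, int]) -> int:
    """Sum per-layer scores (each 0-2) into a total out of 10."""
    assert set(scores) == set(LAYERS), "score every layer exactly once"
    assert all(0 <= s <= 2 for s in scores.values()), "each score must be 0-2"
    return sum(scores.values())

ne_scores = {
    "Hardening": 1,           # BitLocker fleet-wide, but no LAPS/Credential Guard/baselines
    "Prevention": 0,          # default AV, zero ASR rules, no WDAC
    "Detection": 0,           # built-in alerts only, zero custom detections
    "Response": 0,            # default AIR, basic live response only
    "Forensic Readiness": 0,  # default audit policy, no Sysmon, no enhanced logging
}
print(f"NE maturity: {maturity_score(ne_scores)}/10")  # → NE maturity: 1/10
```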

Step 2: Top 10 gaps

Rank the gaps by security impact — the gap that enables the most damaging attack scenarios ranks highest. Consider the compound effect of multiple gaps (e.g., no LAPS + no LSASS ASR rule = environment-wide credential reuse).
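The compound effect can be made explicit with a small scoring model. This is a sketch invented for this lab, not a formula from the course: each gap carries an assumed base impact, and pairs of gaps that enable the same attack chain add a bonus when both are open.

```python
# Illustrative compound-gap scoring. Base impacts and the bonus value are
# assumptions for the exercise; the structure is the point: the LAPS + LSASS
# pair scores higher together than either gap's base impact suggests alone.
base_impact = {
    "no_laps": 8,
    "no_lsass_asr": 7,
    "no_custom_detections": 6,
    "no_ca_enforcement": 5,
}

# Pairs that compound: no LAPS + no LSASS ASR rule = environment-wide credential reuse.
compound_pairs = {frozenset({"no_laps", "no_lsass_asr"}): 5}

def total_impact(open_gaps: set[str]) -> int:
    """Base impact of each open gap, plus a bonus for each fully open compound pair."""
    score = sum(base_impact[g] for g in open_gaps)
    score += sum(bonus for pair, bonus in compound_pairs.items() if pair <= open_gaps)
    return score

print(total_impact({"no_laps", "no_lsass_asr"}))          # 8 + 7 + 5 = 20
print(total_impact({"no_laps", "no_custom_detections"}))  # 8 + 6 = 14
```

The compound pair outranks any two unrelated gaps of similar base impact, which is why it sits at the top of the suggested ranking below.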

Suggested ranking (you may adjust based on your analysis):

  1. No LAPS deployed (enables environment-wide lateral movement from any single compromised endpoint)
  2. Zero ASR rules in block mode (no prevention layer for the most common initial access and execution techniques)
  3. AV cloud protection at default (reduced detection of unknown and zero-day malware)
  4. No custom detection rules (no detection for targeted attacks, environment-specific patterns)
  5. Compliance not enforcing CA (compromised or non-compliant devices access all M365 resources)
  6. No Credential Guard (LSASS credentials accessible to any process with appropriate privileges)
  7. No Sysmon or enhanced logging (investigation evidence will be insufficient)
  8. 85 devices not onboarded to MDE (blind spots in the fleet)
  9. No vulnerability remediation workflow (847 open recommendations, 12 critical vulnerabilities)
  10. Linux servers not in MDE (8 servers with no endpoint security monitoring)

Step 3: Deployment plan

Assign each gap to the appropriate phase:

Phase 1 (Weeks 1-2, Visibility): Complete MDE onboarding to 100%. Deploy all ASR rules in audit mode. Raise AV cloud protection to High+. Build device health dashboard.

Phase 2 (Weeks 3-6, Prevention): Deploy LAPS. LSASS ASR rule to block mode. Safe ASR rules to block. Compliance→CA for pilot group. Enable Credential Guard (pilot).

Phase 3 (Weeks 7-10, Detection): Build 20+ custom detection rules. Configure AIR levels. Enable advanced live response. Build hunting query library. Careful ASR rules to block with exclusions.

Phase 4 (Weeks 11-12, Readiness): Deploy Sysmon. Configure audit policies. Enable PowerShell logging. Onboard Linux servers. Stage collection tools.

Phase 5 (Ongoing, Optimization): Operationalize vulnerability management. Build monitoring dashboards. Create governance documentation. Deploy automation playbooks.
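Structuring the plan as data makes it easy to verify that every gap lands in exactly one phase. A minimal sketch, with one reading of the plan above as an assumption: the ASR gap is assigned to Phase 2, where block mode begins, even though audit-mode deployment starts in Phase 1 and the "careful" rules reach block mode in Phase 3.

```python
# Hypothetical gap-to-phase mapping mirroring the deployment plan text. The
# assignments are one defensible reading, not an official answer key.
phases = {
    "Phase 1 (Weeks 1-2, Visibility)": [
        "85 devices not onboarded to MDE",
        "AV cloud protection at default",
    ],
    "Phase 2 (Weeks 3-6, Prevention)": [
        "No LAPS deployed",
        "Zero ASR rules in block mode",
        "Compliance not enforcing CA",
        "No Credential Guard",
    ],
    "Phase 3 (Weeks 7-10, Detection)": [
        "No custom detection rules",
    ],
    "Phase 4 (Weeks 11-12, Readiness)": [
        "No Sysmon or enhanced logging",
        "Linux servers not in MDE",
    ],
    "Phase 5 (Ongoing, Optimization)": [
        "No vulnerability remediation workflow",
    ],
}

# Sanity check: all ten gaps assigned, no gap assigned twice.
assigned = [gap for gaps in phases.values() for gap in gaps]
assert len(assigned) == len(set(assigned)) == 10, "each gap in exactly one phase"

for phase, gaps in phases.items():
    print(phase)
    for gap in gaps:
        print(f"  - {gap}")
```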

Lab self-assessment

Review your completed assessment against these criteria:

Did you identify the compound risk of no LAPS + no LSASS protection as the highest-priority gap? This compound gap enables the entire lateral movement phase of any attack chain.

Did your deployment plan put visibility actions (audit mode, monitoring) before enforcement actions (block mode, CA)? The audit-first methodology prevents deployment failures.

Did you account for the 85 devices not yet onboarded to MDE in Phase 1? You cannot assess or protect what you cannot see.

Did your Phase 2 include LAPS before most ASR rules? LAPS has lower blast radius and higher lateral movement prevention value than most individual ASR rules.

In your gap analysis, you ranked "no custom detection rules" as gap #4, but another analyst ranked it #2 (above ASR rules). What is the strongest argument for prioritizing ASR deployment over custom detections?

A. ASR rules are easier to deploy than custom detections, so they should go first for quick wins.
B. ASR rules prevent attacks; custom detections detect them after they begin.
C. ASR rules are free while custom detections require engineering time, so the cost argument favors ASR first.
D. Custom detections generate too many false positives to deploy early; they should wait until the team has more experience with MDE.

Answer: B. An ASR rule in block mode that stops LSASS credential dumping eliminates the attack at the prevention layer, with no detection, triage, investigation, or response required. A custom detection rule that alerts on LSASS credential dumping fires only after the credentials have been stolen, forcing analyst triage, investigation, and response while the attacker uses the stolen credentials. Prevention eliminates the incident entirely; detection initiates an incident response workflow. Every attack prevented by ASR is an attack your SOC does not need to detect, investigate, or respond to, so prevention reduces the detection workload. Deploy prevention first, then build detections for the techniques that bypass prevention.

You're reading the free modules of this course

The full course continues with advanced topics, production detection rules, worked investigation scenarios, and deployable artifacts. Premium subscribers get access to all courses.
