Module 8: Connect Logs to Microsoft Sentinel

14-18 hours · Manage a Security Operations Environment (40-45%)

Module 7 built the workspace. This module fills it with data.

A Sentinel workspace with no data connectors is an empty database — it has analytics rules with nothing to evaluate, hunting queries with no logs to search, and a health dashboard showing zero ingestion. Data connectors are the pipelines that feed security data from every source in your environment into the workspace where it becomes queryable, detectable, and actionable.

The SC-200 exam tests data connectors heavily within the "Manage a Security Operations Environment" domain. Questions cover: which connector type to use for a given data source, how to deploy Syslog and CEF collectors, how Data Collection Rules filter and transform data at ingestion, how to troubleshoot connectors that stop sending data, and how to validate that data is arriving in the expected tables. This module covers all of these.
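Validation comes up repeatedly in that list, and it always reduces to the same move: query the workspace and confirm events are landing. As an illustrative sketch (table names will vary with the connectors you enable), a single KQL query can show every populated table and how fresh its data is:

```kql
// Confirm data is arriving: events per table over the last 24 hours.
// 'withsource' adds a column naming the table each row came from.
union withsource=TableName *
| where TimeGenerated > ago(24h)
| summarize Events = count(), Latest = max(TimeGenerated) by TableName
| order by Events desc
```

A table that should be receiving data but is absent from the results, or whose Latest timestamp is hours old, is the starting point for the troubleshooting workflow in 8.9.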

Prerequisites

Complete Module 7 (Configure Your Microsoft Sentinel Environment) before starting this module. You need a functioning Sentinel workspace with health monitoring enabled. The concepts from Module 7 — log tiers, retention policies, key tables, and workspace architecture — are referenced throughout this module. If you set up the M365 developer tenant and Azure subscription in Module 0, you have the lab environment needed for the hands-on exercises.

What you will be able to do after completing this module

After completing this module, you will be able to:

- Design an ingestion strategy that prioritises data sources by detection value per cost.
- Connect all Microsoft first-party services (Entra ID, Azure Activity, Microsoft 365) with one-click connectors.
- Configure the Defender XDR connector with bi-directional incident sync and selective table ingestion.
- Deploy the Azure Monitor Agent to Windows hosts and configure security event collection through Data Collection Rules.
- Deploy a log forwarder for CEF and Syslog data from network devices, firewalls, and Linux servers.
- Build Data Collection Rules that filter, transform, and route data to reduce ingestion volume and cost.
- Configure custom log ingestion via the Logs Ingestion API for bespoke data sources.
- Troubleshoot and validate every connector type.
- Optimise ingestion cost at the connector level, before data reaches the workspace.

How this module is structured

8.1 — Ingestion Strategy and Connector Architecture. The decision framework for which data sources to connect first, the four connector categories, and the architecture that determines how data flows from source to workspace.

8.2 — Microsoft First-Party Connectors. One-click connectors for Entra ID, Azure Activity, Microsoft 365, and Defender for Cloud. Configuration, verification, and the tables each connector populates.

8.3 — Connecting Microsoft Defender XDR. The Defender XDR connector in depth: bi-directional incident sync, Advanced Hunting table selection, the XDR data tier, and the operational model for Sentinel + Defender XDR integration.

8.4 — Connecting Windows Hosts to Sentinel. Azure Monitor Agent (AMA) deployment, Data Collection Rules for Windows Security Events, collection level selection (All Events, Common, Minimal), and Windows event forwarding at scale.
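Once AMA is sending Windows Security Events, the choice between collection levels is easier to make with data in hand. A hedged sketch of the kind of query 8.4 uses to see which event IDs dominate volume on connected hosts (assumes the SecurityEvent table is populated):

```kql
// Which Windows event IDs account for most of the ingested volume?
// Helps decide between the All Events, Common, and Minimal levels.
SecurityEvent
| where TimeGenerated > ago(24h)
| summarize Count = count() by EventID, Activity
| top 10 by Count
```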

8.5 — Common Event Format (CEF) Connectors. CEF architecture, log forwarder deployment, field mapping, and connecting network security devices (firewalls, IDS/IPS, proxies) that output CEF.
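CEF events from the log forwarder land in the CommonSecurityLog table with fields already mapped to named columns (DeviceVendor, DeviceProduct, and so on). An illustrative per-device health check along the lines 8.5 walks through:

```kql
// Verify each CEF source is reporting and see how recent its data is.
CommonSecurityLog
| where TimeGenerated > ago(1h)
| summarize Events = count(), Latest = max(TimeGenerated)
    by DeviceVendor, DeviceProduct
```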

8.6 — Syslog Data Sources. Syslog connector deployment, facility and severity filtering, the difference between CEF and Syslog, and connecting Linux servers and network infrastructure.
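Plain Syslog lands in the Syslog table, where facility and severity arrive as columns. That makes it straightforward to verify the forwarder is sending only what your facility/severity selection intends, as in this illustrative sketch:

```kql
// Break down incoming Syslog by facility and severity to confirm
// the DCR's facility/severity filters are working as intended.
Syslog
| where TimeGenerated > ago(1h)
| summarize Events = count() by Facility, SeverityLevel
| order by Events desc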

8.7 — Data Collection Rules: Filter, Transform, Route. DCR architecture, ingestion-time transformations, KQL-based filtering, column removal, data routing to multiple destinations, and the cost impact of transformation.
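The heart of 8.7 is the ingestion-time transformation: a KQL statement attached to a DCR, where `source` stands for the incoming stream and anything the query drops is never billed. A minimal sketch, assuming Windows Security Events (the event IDs and the dropped column are illustrative choices, not a recommendation):

```kql
// Ingestion-time transformation (a DCR's transformKql property).
// 'source' is the incoming stream; rows and columns removed here
// never reach the workspace, so they are never billed.
source
| where EventID !in (4634, 4647)   // drop noisy logoff events (example)
| project-away EventData           // remove a verbose XML column (example)
```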

8.8 — Custom Logs and API Ingestion. The Logs Ingestion API, custom table creation, DCR-based ingestion for bespoke data sources, and when to use custom logs vs standard connectors.
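Custom tables created for the Logs Ingestion API carry the `_CL` suffix, and validating them looks like validating any other table. `MyAppAudit_CL` below is a hypothetical table name used purely for illustration:

```kql
// Spot-check a custom table populated via the Logs Ingestion API.
// 'MyAppAudit_CL' is a hypothetical name; yours will differ.
MyAppAudit_CL
| where TimeGenerated > ago(24h)
| take 10
```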

8.9 — Connector Troubleshooting and Validation. Systematic troubleshooting for each connector type, validation queries, the SentinelHealth table for connector monitoring, and common failure patterns with resolution steps.
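With health monitoring enabled (Module 7), the SentinelHealth table records connector health events, which gives troubleshooting a concrete starting point. A sketch of the pattern 8.9 builds on (column names as documented for SentinelHealth):

```kql
// Surface recent data connector failures from the health table.
SentinelHealth
| where TimeGenerated > ago(7d)
| where SentinelResourceType == "Data connector"
| where Status != "Success"
| summarize Failures = count(), Latest = max(TimeGenerated)
    by SentinelResourceName, Description
```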

8.10 — Ingestion Cost Optimisation at the Connector Level. Reducing ingestion volume at the source through collection level selection, DCR filtering, table selection, and connector-specific optimisation techniques.
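Cost optimisation starts with knowing which tables cost the most. The Usage table reports ingested volume per data type (Quantity is in MB, and IsBillable separates free from billed ingestion), so a query like this illustrative sketch identifies the connectors worth tuning first:

```kql
// Rank billable tables by ingested volume over the last 30 days.
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| order by IngestedGB desc
```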

8.11 — Building the Complete Ingestion Pipeline. Putting it all together: the phased deployment plan, validation at each stage, the operational checklist for a production-ready ingestion pipeline, and the ongoing monitoring regime.

8.12 — Module Summary. Key takeaways, skills checklist, SC-200 exam objectives covered.

8.13 — Check My Knowledge. 20 scenario-based questions covering all subsections.

Sections in this module