10.9 Workbooks and Security Reporting
Introduction
Required role: Microsoft Sentinel Contributor to create and edit workbooks. Microsoft Sentinel Reader to view shared workbooks.
Workbooks are Sentinel’s reporting and visualisation layer. They are Azure Monitor workbooks integrated into the Sentinel portal — interactive dashboards that display KQL query results as charts, tables, heatmaps, and time series. While analytics rules detect threats and automation rules handle response, workbooks show the SOC how the operation is performing: how many incidents were handled, which attack types are trending, where detection gaps exist, and how individual analysts are performing.
Workbook architecture
A workbook is a collection of visualisation tiles arranged on a canvas. Each tile runs a KQL query and displays the results in a chosen format: bar chart, line chart, pie chart, table, heatmap, tile, or markdown text. Tiles can be parameterised — allowing the viewer to filter the entire workbook by time range, severity, analyst, or data source.
Content Hub workbooks. Many Content Hub solutions include pre-built workbooks. The Entra ID solution includes a sign-in analysis workbook. The Defender XDR solution includes an incident overview workbook. These are excellent starting points — install them, review the KQL, and customise for your environment.
Custom workbooks. For operational dashboards that reflect your specific SOC workflow, create custom workbooks. Navigate to Sentinel → Workbooks → Add workbook.
The SOC operational dashboard
Every SOC needs a primary operational dashboard that answers these questions at a glance.
How many incidents are open? By severity and status.
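A sketch of the kind of query this tile could run, assuming the standard SecurityIncident table (which stores one row per incident update, hence the arg_max deduplication):

```kusto
// Latest state per incident, then open counts by severity and status
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status in ("New", "Active")
| summarize OpenIncidents = count() by Severity, Status
```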
What is the incident trend? Daily incident creation over the last 30 days, broken down by severity.
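A possible tile query, again deduplicating SecurityIncident before bucketing by creation day:

```kusto
// Daily incident creation over 30 days, split by severity (sketch)
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where CreatedTime > ago(30d)
| summarize Incidents = count() by bin(CreatedTime, 1d), Severity
```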
Visualise as a stacked bar chart. An upward trend in High-severity incidents demands immediate attention. A steady or downward trend indicates effective detection tuning and threat mitigation.
What are the top attack types? Incident classification breakdown — what threats is the SOC finding?
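One way to sketch this tile: group closed true positives by incident title, which usually mirrors the analytics rule name and so approximates "attack type":

```kusto
// Top confirmed threat types by incident title (sketch)
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status == "Closed" and Classification == "TruePositive"
| summarize Count = count() by Title
| top 10 by Count
```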
How is the team performing? MTTT (mean time to triage) and MTTR (mean time to resolve) by analyst.
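A sketch of the per-analyst metrics, approximating triage time as creation to first modification and resolution time as creation to closure:

```kusto
// MTTT and MTTR per analyst (sketch; triage is approximated)
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status == "Closed"
| extend Analyst = tostring(Owner.assignedTo)
| extend TriageMin  = datetime_diff("minute", FirstModifiedTime, CreatedTime),
         ResolveHrs = datetime_diff("hour", ClosedTime, CreatedTime)
| summarize MTTT_minutes = avg(TriageMin), MTTR_hours = avg(ResolveHrs) by Analyst
```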
Where are the detection gaps? MITRE ATT&CK coverage — which techniques have analytics rules and which do not. This visualisation requires querying the analytics rule configuration (covered in subsection 10.11, the detection engineering lifecycle).
Analyst workload distribution
Balance the SOC workload across the team with a workload distribution tile.
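A sketch of the workload tile, producing the OldestIncident_hours column discussed below:

```kusto
// Open incidents per analyst, with the age of their oldest incident
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status in ("New", "Active")
| extend Analyst = tostring(Owner.assignedTo)
| summarize OpenIncidents = count(),
            OldestIncident_hours = datetime_diff("hour", now(), min(CreatedTime))
  by Analyst, Severity
```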
Visualise as a stacked bar chart (incidents per analyst, coloured by severity). Analysts with disproportionately high workload need help — either through reassignment or by identifying automation opportunities for their most common incident types. The OldestIncident_hours column flags analysts who have stale incidents that need attention.
Rule effectiveness dashboard
Track which analytics rules deliver the most value to the SOC.
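One way to compute per-rule effectiveness, grouping by incident title as a proxy for the rule name. The 75%/40% effectiveness thresholds are illustrative — set your own:

```kusto
// True positive rate per rule, bucketed into High/Medium/Low effectiveness
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status == "Closed"
| summarize Total = count(),
            TP = countif(Classification == "TruePositive") by Title
| extend TPRate = round(100.0 * TP / Total, 1)
| extend Effectiveness = case(TPRate >= 75, "High", TPRate >= 40, "Medium", "Low")
| order by TPRate desc
```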
Display as a table with conditional formatting: green background for High effectiveness, yellow for Medium, red for Low. This tile drives the monthly detection review (subsection 10.11) — rules with Low effectiveness are candidates for tuning or retirement.
Threat category breakdown
Show which attack categories are targeting your environment — essential for both SOC awareness and executive reporting.
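A sketch using the SecurityAlert Tactics column, which holds a comma-separated list of MITRE tactics:

```kusto
// Alert volume per MITRE ATT&CK tactic (sketch)
SecurityAlert
| where TimeGenerated > ago(30d)
| mv-expand Tactic = split(Tactics, ",")
| extend Tactic = trim(" ", tostring(Tactic))
| where isnotempty(Tactic)
| summarize Alerts = count() by Tactic
| order by Alerts desc
```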
Visualise as a horizontal bar chart or donut chart. The distribution reveals your threat profile: if 60% of true positives are “Initial Access” (credential attacks), your environment is primarily targeted through identity compromise — prioritise identity detection and MFA hardening. If “Execution” dominates, endpoint threats are the primary vector — prioritise endpoint detection and application control.
Geographic threat map
For environments with external-facing sign-ins, a geographic heatmap shows where attacks originate.
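A sketch against SigninLogs, extracting country and coordinates from the LocationDetails dynamic column for the map tile:

```kusto
// Failed sign-ins by country, with coordinates for the map visualisation
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0"   // non-zero result = failed sign-in
| extend Country   = tostring(LocationDetails.countryOrRegion),
         Latitude  = toreal(LocationDetails.geoCoordinates.latitude),
         Longitude = toreal(LocationDetails.geoCoordinates.longitude)
| summarize FailedSignIns = count(), avg(Latitude), avg(Longitude) by Country
| order by FailedSignIns desc
```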
Visualise as a map tile (Azure Monitor workbooks support map visualisations with latitude/longitude). Countries with disproportionately high attack volumes that are not business-relevant locations may warrant geographic blocking via conditional access policy — an actionable insight directly from the workbook.
Workbook parameter patterns
Parameters make workbooks interactive and reusable. These patterns cover the most common needs.
Time range parameter: Add as the first parameter on every workbook. Default: 30 days for operational dashboards, 7 days for investigation dashboards. All tiles reference this parameter: | where TimeGenerated > {TimeRange:start}.
Severity filter parameter: Multi-select dropdown: High, Medium, Low, Informational. Allows analysts to focus on specific severity levels. All incident tiles add: | where Severity in ({SeverityFilter}).
Analyst filter parameter: Dropdown populated from a KQL query: SecurityIncident | extend Analyst = tostring(Owner.assignedTo) | distinct Analyst. Enables per-analyst performance views. SOC managers use this to review individual analyst metrics.
Data source parameter: Multi-select: Identity, Endpoint, Email, Network, Custom. Maps to tags added by automation rules (subsection 10.6). Enables filtering all tiles by detection category.
Weekly trend analysis
A weekly trend tile shows whether the SOC’s operational posture is improving or degrading.
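A sketch computing all three series per week over a 90-day window:

```kusto
// Weekly incident volume, MTTR, and true positive rate (sketch)
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where CreatedTime > ago(90d)
| summarize Incidents  = count(),
            MTTR_hours = avg(datetime_diff("hour", ClosedTime, CreatedTime)),
            TPRate     = round(100.0 * countif(Classification == "TruePositive")
                               / countif(isnotempty(Classification)), 1)
  by Week = startofweek(CreatedTime)
| order by Week asc
```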
Visualise as a multi-line chart: one line for incident volume, one for MTTR, one for TP rate. Healthy trends: stable or declining incident volume (indicating effective prevention), declining MTTR (faster response), and stable or increasing TP rate (better detection quality). Unhealthy trends: rising incident volume without corresponding true positives (more noise), rising MTTR (overwhelmed team), declining TP rate (degrading rule quality).
Executive reporting workbook
The executive audience needs different information: risk posture summary, incident trend over time, compliance metrics, and cost efficiency. Build a separate workbook for executives with high-level KPIs.
Key executive metrics:
Total incidents detected this month. Percentage resolved within SLA. Mean time to contain (from incident creation to containment action — not closure). Top 3 threat categories. Cost per incident (from Module 8.10). Month-over-month trend.
Workbook parameters for executive use: Default the time range to 30 days (executives review monthly). Add a “Department” parameter if incidents can be attributed to business units. Remove technical detail — executives need outcomes, not KQL.
Executive KPI definitions
Define these KPIs clearly for executive reporting. Each KPI should have: a definition, the target value, the current value (from KQL), and a trend indicator (improving/stable/degrading).
Security posture score: Percentage of High and Medium-severity incidents resolved within SLA. Target: 95%+. Calculation: closed incidents within SLA ÷ total closed incidents × 100. Trend: compare this month to the 3-month rolling average.
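The calculation above could be sketched as follows; the 4-hour and 24-hour resolution SLAs are illustrative values matching the SLA table later in this subsection:

```kusto
// SLA-compliant resolution rate for High/Medium incidents (sketch)
let ResolutionSLA = dynamic({"High": 4, "Medium": 24});  // hours, adjustable
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status == "Closed" and Severity in ("High", "Medium")
| extend ResolveHrs = datetime_diff("hour", ClosedTime, CreatedTime)
| extend SlaHrs = toint(ResolutionSLA[Severity])
| summarize PostureScore = round(100.0 * countif(ResolveHrs <= SlaHrs) / count(), 1)
```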
Threat detection effectiveness: Number of True Positive incidents per month. A rising number can indicate either improving detection or increasing threat activity — context matters. Pair with the false positive rate to distinguish: rising TP with stable FP = more threats detected. Rising TP with rising FP = noisier environment.
Mean time to contain (MTTC): Time from incident creation to the first containment action (device isolation, password reset, session revocation). More meaningful than MTTR for executives because containment stops the threat — closure may include documentation time that does not affect security posture. Target: under 2 hours for High severity.
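A rough sketch, approximating the first containment action as the incident's first modification. A precise MTTC needs a recorded containment timestamp — for example, a playbook writing a comment or custom log entry when isolation or a password reset runs:

```kusto
// Approximate MTTC for High-severity incidents (sketch; see caveat above)
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Severity == "High" and isnotempty(FirstModifiedTime)
| extend ContainMin = datetime_diff("minute", FirstModifiedTime, CreatedTime)
| summarize MTTC_hours = round(avg(ContainMin) / 60.0, 2)
```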
SLA tracking tile
Define service level agreements for incident response and track compliance.
| Severity | Triage SLA | Containment SLA | Resolution SLA |
|---|---|---|---|
| High | 30 minutes | 2 hours | 4 hours |
| Medium | 2 hours | 8 hours | 24 hours |
| Low | 8 hours | 24 hours | 72 hours |
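A sketch of the triage SLA compliance calculation, with SLA minutes matching the table above and triage time approximated as creation to first modification:

```kusto
// Triage SLA compliance per severity (sketch)
let TriageSLA = dynamic({"High": 30, "Medium": 120, "Low": 480});  // minutes
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| where Status == "Closed"
| extend TriageMin = datetime_diff("minute", FirstModifiedTime, CreatedTime)
| extend SlaMin = toint(TriageSLA[Severity])
| where isnotnull(SlaMin)
| summarize CompliancePct = round(100.0 * countif(TriageMin <= SlaMin) / count(), 1)
  by Severity
```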
Display as a colour-coded table. Green for compliance above 95%. Yellow for 80-95%. Red for below 80%.
Workbook maintenance schedule
Workbooks degrade if not maintained. Queries break when tables are renamed, KQL functions change, or new data sources require updated tiles.
Monthly: Verify all tiles load without errors. Check that query time ranges and parameters produce current data. Update any hardcoded values (thresholds, IP ranges, user lists) that may have changed.
Quarterly: Review whether the workbook tiles still match SOC reporting needs. Remove tiles that nobody reads. Add tiles for new use cases identified in the quarterly detection review. Update the executive workbook with any new KPIs management has requested.
After major changes: When you deploy new analytics rules, add new data connectors, or restructure the automation rule library, review the workbooks for impact. A new data source may need a new data source health tile. New analytics rules may change the rule effectiveness metrics.
Data source health workbook
Complement the SOC operational dashboard with a data source health workbook that shows connector status (from Module 8.9).
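A sketch using per-table ingestion freshness as a proxy for connector health. The table list and the 60/240-minute thresholds are examples — substitute your connected sources and expected latencies:

```kusto
// Last record per table, bucketed into Healthy / Delayed / Down
union isfuzzy=true withsource=TableName SigninLogs, SecurityAlert, SecurityEvent
| summarize LastRecord = max(TimeGenerated) by TableName
| extend MinutesSinceLast = datetime_diff("minute", now(), LastRecord)
| extend Health = case(MinutesSinceLast <= 60,  "Healthy",
                       MinutesSinceLast <= 240, "Delayed",
                       "Down")
```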
Display as a colour-coded table: green for Healthy, yellow for Delayed, red for Down. This workbook is the shift-start health check from Module 7.11 in visual form — the first thing every analyst checks at the start of their shift.
Workbook sharing and access control
Shared workbooks are saved to the Sentinel workspace and visible to all users with Sentinel Reader (or above) permissions. Use for: SOC operational dashboards, team performance reports, and data source health monitors.
Private workbooks are saved to the individual user and not shared. Use for: personal investigation templates, draft dashboards under development.
Workbook templates. Save a polished workbook as a template in Content Hub or as a gallery template — this allows other team members (or other Sentinel deployments) to install your workbook as a starting point.
Detection coverage heatmap
The most valuable workbook tile for detection engineering: a heatmap showing MITRE ATT&CK technique coverage — which techniques have active rules firing and which are silent.
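A sketch of the firing side of the coverage picture, assuming your analytics rules populate the SecurityAlert Techniques column (comma-separated):

```kusto
// Alert counts and distinct rules per ATT&CK technique (sketch)
SecurityAlert
| where TimeGenerated > ago(90d)
| mv-expand Technique = split(Techniques, ",")
| extend Technique = trim(" ", tostring(Technique))
| where isnotempty(Technique)
| summarize Alerts = count(), Rules = dcount(AlertName) by Technique
| order by Alerts desc
```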
Visualise as a heatmap or bar chart. Techniques with zero true positives may indicate: rules that are not detecting real threats (tune or retire), techniques that are not being used against your environment (expected — not every technique is relevant), or detection gaps where no rule exists (create one).
Alert funnel analysis
Track the journey from raw alerts to closed incidents to understand SOC efficiency and rule quality.
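A sketch of the funnel stages, counted over the same 30-day window:

```kusto
// Alert → incident → true positive funnel (sketch)
let timeframe = 30d;
union
  (SecurityAlert
     | where TimeGenerated > ago(timeframe)
     | summarize Count = count() | extend Stage = "1. Alerts"),
  (SecurityIncident
     | where CreatedTime > ago(timeframe)
     | summarize Count = dcount(IncidentNumber) | extend Stage = "2. Incidents"),
  (SecurityIncident
     | summarize arg_max(TimeGenerated, *) by IncidentNumber
     | where CreatedTime > ago(timeframe) and Classification == "TruePositive"
     | summarize Count = count() | extend Stage = "3. True positives")
| project Stage, Count
| order by Stage asc
```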
Display as a funnel visualisation. The narrowing from alerts → incidents → true positives shows where volume is filtered at each stage. A wide funnel (many alerts, few true positives) indicates noisy rules. A narrow funnel (few alerts, most are true positives) indicates well-tuned detection.
Workbook design best practices
Use parameters for interactivity. Add time range, severity, and analyst parameters to every operational workbook. Parameters let each viewer filter to their area of responsibility without creating separate workbooks.
Group tiles logically. Top row: summary KPIs (total open incidents, MTTT, MTTR). Middle: trend charts (incident volume over time, severity distribution). Bottom: detail tables (individual incidents, rule performance). This mirrors the analyst workflow: scan summary → identify trends → drill into detail.
Limit to 6-8 tiles per tab. Workbooks with 20+ tiles load slowly and overwhelm the viewer. Use tabs (workbook groups) to organise: “Overview” tab with KPIs, “Rule Performance” tab with per-rule metrics, “Analyst Performance” tab with per-person metrics, “Health” tab with connector status.
Use conditional formatting. Colour-code values: green for healthy/good (MTTR within SLA, connector healthy), yellow for warning (MTTR approaching SLA, connector delayed), red for critical (MTTR exceeded, connector down). The analyst should be able to spot problems in 2 seconds by scanning colours — not by reading numbers.
Pin the workbook to the Azure dashboard. For the primary SOC operational workbook, pin it to the Azure portal dashboard so it appears immediately when analysts open the portal — no navigation required.
Incident volume forecasting
Add a tile that projects future incident volume based on historical trends.
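A sketch using KQL's built-in time series forecasting. The series extends 14 days past now with empty bins, which series_decompose_forecast fills with predicted values:

```kusto
// 14-day incident volume forecast from 90 days of history (sketch)
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber
| make-series Incidents = count() default = 0
    on CreatedTime from ago(90d) to now() + 14d step 1d
| extend Forecast = series_decompose_forecast(Incidents, 14)
```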
Rising forecasts indicate: new analytics rules generating more incidents (expected after rule deployment), a genuine increase in threat activity, or degrading rule quality (more false positives). Falling forecasts indicate: effective rule tuning, reduced threat activity, or connector failures reducing data volume (investigate).
Try it yourself
Create a workbook with three tiles: (1) open incidents by severity (bar chart), (2) daily incident trend over 30 days (line chart), and (3) data source health table. Add a time range parameter. Save as a shared workbook. This is the minimum viable SOC dashboard — the starting point that you refine over time as you identify additional visualisation needs.
What you should observe
The workbook renders KQL results as interactive visualisations. The time range parameter filters all tiles simultaneously. In a lab with few incidents, the charts may be sparse — the structure and queries are the deliverable, not the data volume. In production, these tiles provide at-a-glance SOC situational awareness.
Compliance mapping
NIST CSF: DE.AE-1 (Baseline of operations established), PR.DS-1 (Data-at-rest is protected). ISO 27001: A.8.15 (Logging), A.8.16 (Monitoring activities). SOC 2: CC7.2 (Monitor system components). Every configuration in this subsection contributes to the logging and monitoring controls that auditors verify.
Check your understanding
1. Your SOC manager asks for a monthly report showing which MITRE ATT&CK techniques your analytics rules cover and which have gaps. What do you build?