1.2 Why GRC Programs Fail: Case Studies

2-3 hours · Module 1 · Free
Operational Objective
The Diagnosis Problem: before you can build a GRC program that works, you need to identify when one is not working — and specifically why. Each failure mode has a different root cause and requires a different correction.
Deliverable: A failure mode diagnosis of your organization's current GRC program — which of the four patterns (compliance trap, documentation trap, tool trap, audit-driven trap) apply, with specific evidence for each.
⏱ Estimated completion: 25 minutes reading + 15 minutes exercise

Why GRC Programs Fail: Case Studies

Beyond the four failure modes

Module G0 introduced the four failure modes: the compliance trap, the documentation trap, the tool trap, and the audit-driven trap. This subsection examines each in depth through composite case studies drawn from common organizational patterns. These are not single real organizations — they are composites that represent patterns observed across hundreds of GRC implementations. If your organization resembles one of these patterns, the diagnosis applies.

Four failure modes: what each optimizes for vs. what it ignores

1. Compliance trap: optimizes for audit pass rate; ignores control effectiveness and actual risk reduction.
2. Documentation trap: optimizes for policy count and document completeness; ignores usability, maintainability, and employee compliance.
3. Tool trap: optimizes for platform features and automation capability; ignores program design, process maturity, and actual use.
4. Audit-driven trap: optimizes for annual audit readiness; ignores continuous assurance, leaving 11 months of unmonitored risk.

Figure 1.2: Four GRC failure modes. Each optimizes for a proxy metric (audit pass rate, policy count, platform features, annual readiness) while ignoring what actually matters (control effectiveness, usability, program design, continuous assurance).

The purpose of these case studies is not to alarm you. It is to give you a diagnostic framework. Before you can build a GRC program that works, you need to be able to identify when a program is not working, and specifically why. Each failure mode has a different root cause and requires a different correction. Treating a documentation trap problem with a tool purchase (the tool trap) makes both problems worse.


Case study 1: The certified but breached organization

Situation. A 300-person financial technology company achieved ISO 27001 certification eighteen months ago. The certification was driven by a customer requirement — their largest client, a tier-one bank, required ISO 27001 as a contractual condition. The implementation was led by an external consultant who produced the ISMS documentation, conducted the risk assessment, and prepared the organization for the certification audit. The audit was successful. The certificate was issued. The consultant's engagement ended.

Fourteen months after certification, the organization suffered a business email compromise. An attacker gained access to the CFO's mailbox through a credential phishing attack, monitored internal communications for three weeks, and then redirected a $340,000 vendor payment to a fraudulent account. The funds were unrecoverable.

What the GRC program said. The risk register listed "phishing attacks" as a risk with a likelihood of "Medium" and an impact of "Medium." Treatment: "Anti-phishing training provided to all employees annually." The Information Security Policy stated: "Users must not click on suspicious links." The access control policy stated: "Multi-factor authentication is required for all users." The Statement of Applicability showed Annex A.8.5 (Secure authentication) as "Implemented" and Annex A.6.3 (Information security awareness, education and training) as "Implemented."

Micro-audit: Find the failures before reading the diagnosis

Based on the situation and GRC documentation above, identify at least three governance failures that could have enabled the breach. Do not read ahead until you have your answer.

Reveal your diagnosis, then read the full analysis below

Common findings participants identify: (1) The risk rating was not data-driven — "Medium" likelihood with no supporting threat data. (2) The policy says "MFA required for all users" but there is no verification that the technical implementation matches the policy. (3) "Anti-phishing training provided annually" measures delivery, not effectiveness — there is no phishing simulation data. (4) The SoA says "Implemented" with no evidence of ongoing effectiveness monitoring. (5) The risk assessment has not been updated in 18 months despite a changing threat landscape. If you identified three or more, you are reading GRC documentation the way an incident investigator does — looking for the gap between what is documented and what is real.

What was actually happening. MFA was configured in Entra ID conditional access — but the policy had five exclusions: the CEO, CFO, and COO (who found MFA "disruptive to their workflow"), two service accounts used by the accounting software (which did not support modern authentication), and all users connecting from the office IP range (because the IT manager believed users on the corporate network were "already authenticated by the firewall"). The anti-phishing training was a twenty-minute annual video from a third-party provider that employees clicked through without watching — the LMS showed 94% completion, which satisfied the compliance evidence requirement, but nobody measured whether the training changed behavior. There was no phishing simulation program. There were no mailbox audit log reviews. There was no monitoring for suspicious inbox rules, mail forwarding changes, or anomalous mailbox access patterns. The "Medium" likelihood rating for phishing was not based on any data — it was the consultant's estimate during the initial risk assessment eighteen months earlier and had never been reviewed, updated, or challenged despite the industry experiencing a significant increase in AiTM phishing campaigns during that period.

Expand for Deeper Context

The failure mode. This is the compliance trap in its purest form. The organization passed the certification audit because the documentation was correct. The policy said MFA was required. The SoA said the control was implemented. The auditor sampled a subset of evidence and found it satisfactory — the conditional access policy existed, users were listed as MFA-enabled, and the training LMS showed completion rates above 90%. But the implementation had gaps that the compliance program was structurally incapable of detecting because it measured documentation existence, not control effectiveness.

The MFA exclusions were not documented in the risk register as accepted risks — they were informal decisions made by the IT manager under pressure from executives. The training completion rate looked like compliance evidence, but it measured button-clicking, not learning. The risk assessment had not been updated since initial certification, so it did not reflect the evolving threat landscape. The surveillance audit scheduled for month 12 might have caught some of these gaps — but the breach occurred in month 14.

What a working program would have done. A continuous monitoring program would have detected the MFA exclusions within the first month — a weekly or monthly query against conditional access policy configurations and Entra ID sign-in logs showing which users authenticated without MFA challenge. The results would have revealed five accounts signing in without MFA, triggering an immediate investigation and remediation. The risk assessment would have been updated after each threat intelligence briefing showing increased AiTM phishing activity in the fintech sector — the "Medium" likelihood would have been revised to "High" based on external data, triggering a governance decision to invest in additional controls. The phishing training would have been supplemented with simulation data showing actual click rates — if the finance team clicks simulated phishing at 12%, the "training provided" treatment is demonstrably ineffective and needs replacement. The access control policy would have specified the exact conditional access policy ID that implements the MFA requirement, making it auditable against the technical configuration — any exclusion not documented in the risk register would be an immediate nonconformity.
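The monthly enforcement check described above can be sketched as a short script. This is a hypothetical illustration, not a vendor tool: it assumes sign-in events have been exported to a CSV with `user` and `mfa_result` columns (real Entra ID exports use different field names, so adapt the schema to your actual log export).

```python
import csv
from collections import defaultdict

def mfa_enforcement_report(signin_csv_path):
    """Summarize MFA enforcement from an exported sign-in log.

    Assumes a CSV with 'user' and 'mfa_result' columns, where
    mfa_result is 'challenged' or 'not_challenged' (hypothetical
    schema; map these to the fields in your real export).
    """
    total = 0
    unchallenged = defaultdict(int)  # user -> sign-ins without MFA
    with open(signin_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["mfa_result"] != "challenged":
                unchallenged[row["user"]] += 1
    challenged = total - sum(unchallenged.values())
    rate = 100.0 * challenged / total if total else 0.0
    return {
        "total_signins": total,
        "enforcement_rate_pct": round(rate, 1),
        # Any account listed here that is not a documented, risk-accepted
        # exclusion is a finding -- exactly the gap in Case Study 1.
        "accounts_without_mfa": sorted(unchallenged),
    }
```

Run monthly, the five excluded accounts in the case study would have appeared in `accounts_without_mfa` on the very first report, fourteen months before the breach.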

The lesson. Certification proves that the documentation was correct at the time of audit. It does not prove that the controls are working between audits. The gap between the two is where breaches live. A working GRC program closes that gap through continuous monitoring of control effectiveness — not more documentation, not more policies, not more audit frequency, but ongoing verification that what the documentation says is actually happening.

The Red Line. ISO 27001 Clause 9.1 requires the organization to "evaluate the information security performance and the effectiveness of the information security management system." Most organizations satisfy this with an annual management review and an internal audit. The clause actually requires ongoing evaluation of effectiveness — the annual management review is the minimum reporting cadence, not the minimum evaluation cadence. If you only evaluate control effectiveness once a year, you are operating on a 12-month feedback delay. The MFA exclusions in Case Study 1 existed for at least 14 months without detection. A monthly effectiveness check — a simple query showing MFA enforcement rate — would have caught them in month 1.

Case study 2: The fifty-policy organization

Situation. A 500-person professional services firm has been operating a GRC program for four years. The program was built incrementally — each compliance requirement, audit finding, and incident response triggered the creation of a new policy or procedure. The document library now contains 53 policies, 31 procedures, 12 standards, and 8 guidelines — 104 governance documents in total. The GRC team of two people spends approximately 40% of their time on document maintenance — reviewing, updating, chasing approvals for policy updates, and reconciling contradictions that auditors identify.

The symptoms. When the most recent ISO 27001 surveillance audit sampled the Remote Working Policy, the auditor asked the IT manager (the process owner) to describe the policy's key requirements. The IT manager had not read the policy. He described the actual remote working practices, which differed from the documented policy in three significant ways: the policy required VPN for all remote access, but the organization had migrated to zero-trust network access six months ago and VPN was no longer used; the policy required encrypted laptops, which was correct, but specified BitLocker while the organization had switched to FileVault for the MacBook fleet; and the policy required daily backups to the on-premises server, but backups now went to Azure Blob Storage. The auditor raised a minor nonconformity for "documented procedures not reflecting actual practice."

An employee in the finance team reported a suspicious email to the IT helpdesk. The helpdesk technician looked for the incident reporting procedure but found three documents that appeared relevant: "Incident Response Policy" (a high-level governance document describing roles and responsibilities), "Security Incident Reporting Procedure" (a step-by-step process written two years ago that referenced a ticketing system the organization no longer used), and "Phishing Response Guidelines" (a one-page document created after the last phishing exercise that contradicted the Incident Response Policy on escalation paths). The three documents described different escalation paths. The technician escalated to their line manager instead, who forwarded the email to the IT director, who forwarded it to the external managed security provider. By the time the phishing email was investigated, four hours had elapsed. In a BEC scenario, four hours is enough for the attacker to monitor communications, identify a pending payment, and redirect it.

The GRC team's response to the audit finding was to update the Remote Working Policy. Their response to the incident response confusion was to create a fourth document — "Phishing Incident Quick Reference Card." The document library grew to 105 documents.

The failure mode. This is the documentation trap. The GRC team measures program maturity by document count and coverage. Each new requirement or finding triggers a new document rather than an update to an existing one. The result is a document library that nobody can navigate, nobody reads completely, and nobody can maintain consistently. Contradictions emerge because multiple documents address overlapping topics without a clear hierarchy. Process owners cannot comply with policies they have not read, and they have not read them because there are too many documents and the relevant one is not findable.

The documentation trap is self-reinforcing. The more documents the GRC team produces, the more out-of-date documents they have to maintain. Version control becomes impossible across 104 documents. Employees ignore policies because they cannot find the relevant one, let alone determine which version is current when three documents appear to address their situation. Auditors sample policies at random and find inconsistencies because no human can maintain consistency across that volume. The GRC team responds by producing more documentation to address the findings, deepening the cycle. The 40% maintenance overhead will grow toward 60% as the library continues to expand, leaving the GRC team with no capacity for actual risk management or program improvement.

You can identify a documentation-trap program by a simple test: pick any policy and ask the process owner whether they have read it in the last six months. If they have not — or if they do not know which policy applies to their process — the policy is shelf-ware. It satisfies the auditor's sampling but does not govern anything.

What a working program would have done. A well-designed policy framework starts with a minimum viable document set — typically 8-12 core policies for an organization of this size, with procedures consolidated under their parent policies rather than scattered across a separate library. Each policy is short (5-10 pages maximum), specific, and owned by a named individual who is accountable for its accuracy and currency. The hierarchy is clear and enforced: policies set requirements, standards define technical specifications, procedures describe how to comply, guidelines provide optional advice. When a new requirement emerges, the first question is "which existing document does this belong in?" — not "which new document do we need to create?" When technology changes (VPN to zero-trust, BitLocker to FileVault, on-premises backup to Azure), the policy owner updates the policy within the same change management cycle that implements the technology change — not eighteen months later when an auditor notices the discrepancy.

The phishing response confusion would not have occurred because there would be one Incident Response Policy with a phishing section, not three overlapping documents. The policy would reference the current ticketing system, the current escalation path, and the current response SLA. When the ticketing system changed, the policy would be updated as part of the change.
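One way to keep a small policy set current is to treat the policy register itself as data rather than a shelf of documents. A minimal sketch with hypothetical names and dates: each policy carries a named owner, a last-review date, and a review interval, and a simple weekly check flags anything overdue, so a missed June review surfaces in July rather than at the next audit.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Policy:
    name: str
    owner: str
    last_reviewed: date
    review_interval_days: int = 365  # annual review by default

def overdue_policies(register, today=None):
    """Return policies past their review date, oldest review first."""
    today = today or date.today()
    late = [p for p in register
            if today - p.last_reviewed > timedelta(days=p.review_interval_days)]
    return sorted(late, key=lambda p: p.last_reviewed)

# Illustrative register: a core set of 8-12 short policies is checkable
# like this; the 104-document library in Case Study 2 is not.
register = [
    Policy("Information Security Policy", "CISO", date(2025, 1, 10)),
    Policy("Remote Working Policy", "IT Manager", date(2023, 6, 1)),
    Policy("Incident Response Policy", "SOC Lead", date(2024, 11, 5)),
]
```

The same register entry is the natural hook for change management: when the technology a policy references changes, the change ticket resets `last_reviewed` only after the owner has actually updated the document.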

Module G2 builds this framework from scratch. If your organization has an existing policy library that resembles this case study, Module G2 also covers how to consolidate and rationalize an overgrown document set — merging overlapping documents, retiring obsolete ones, and establishing the governance cadence that prevents regrowth.

Branching decision: The helpdesk phishing response

You are the helpdesk technician in Case Study 2. An employee has just reported a suspicious email. You have found three contradicting documents. The email appears to contain a link to a credential harvesting page. What do you do?

Option A: Follow the Incident Response Policy (the most senior document) and escalate to the CISO.

Option B: Follow the most recent document (Phishing Response Guidelines) and forward the email to the managed security provider.

Option C: Ignore all three documents. Block the sender, warn the employee not to click, and log a ticket.

Reveal: The downstream impact of each choice

Option A follows the governance hierarchy but introduces delay. The CISO is in a meeting. The escalation sits in their inbox for two hours. By the time they see it and forward it to the SOC, the attacker has had three hours inside the compromised mailbox. The Incident Response Policy was written for major incidents — it routes through senior leadership because it covers data breaches, regulatory notifications, and board communication. A single phishing email does not need CISO involvement.

Option B gets the email to a security analyst faster, but the Phishing Response Guidelines are a one-page document created after the last phishing exercise by a different team. The managed security provider receives the forwarded email but has no context — they do not know who reported it, which user received it, or whether anyone clicked the link. They investigate from scratch.

Option C is what most helpdesk technicians actually do when the documentation is confusing. It contains the immediate threat (sender blocked, user warned) but loses the investigation opportunity — no analyst examines the email headers to identify other recipients, no IOCs are extracted, no detection rule is created. If 15 other employees received the same email, they are still at risk.

The correct answer is that all three options are suboptimal — and that is the point. Contradicting documentation forces people to improvise. The real fix is Module G2: one Incident Response Policy with a clear phishing section that specifies exactly who to contact (SOC, not CISO), what information to provide (reporter, recipient, email headers, whether link was clicked), and what SLA applies (triage within 15 minutes for suspected active phishing). When one document gives one clear answer, the helpdesk technician does not need to choose between three.

Case study 3: The expensive platform with empty dashboards

Situation. A 2,000-person technology company purchased a GRC platform for $180,000 per year (three-year contract, total commitment $540,000). The purchase was motivated by the GRC team's frustration with spreadsheet-based compliance management — particularly the difficulty of managing evidence collection across multiple frameworks (ISO 27001, SOC 2, and GDPR) and the time spent producing reports for the quarterly risk committee.

The platform vendor demonstrated impressive capabilities: automated evidence collection, real-time compliance dashboards, cross-framework mapping, risk heat maps, audit management workflows, and board-level reporting. The GRC director approved the purchase based on a projected 50% reduction in manual effort and "audit-ready compliance at all times."

The implementation took eight months — three months longer than planned because the vendor's "best practice" templates did not match the organization's existing control structure, and the GRC team had to map their controls to the vendor's taxonomy. The risk register was migrated from the existing spreadsheet. The policy library was uploaded. Framework requirements were loaded from the vendor's content library. The platform was declared "live."

The symptoms. Twelve months after go-live, the GRC team reports that the platform has not reduced their workload. The risk register in the platform contains the same 47 risks that were in the spreadsheet — none have been added, updated, or closed since migration, because the workflow for updating risks in the platform is more complex than editing the spreadsheet. The evidence collection workflow sends automated email reminders to control owners every month, but 60% of evidence requests are overdue because control owners do not understand what evidence the platform is asking for — the evidence descriptions were written by the vendor's implementation consultant using generic language that does not match the organization's terminology or processes. The compliance dashboards are technically functional but the data they display is meaningless because the underlying data is stale — the dashboards show 47 risks at the ratings they had eighteen months ago, and evidence completion at 40% because most control owners have stopped responding to the automated reminders.

The quarterly risk committee still receives a PowerPoint presentation manually assembled by the GRC manager because the platform's reporting module does not produce output in the format the committee chair expects, and customizing the platform's reports requires the vendor's professional services team at $1,200 per day.

The GRC team now maintains three systems: the platform (because the license is paid and the auditor was told evidence would be in the platform), their original spreadsheets (for day-to-day risk management because the spreadsheet is faster and more flexible), and PowerPoint (for committee reporting because the platform's reports are not fit for purpose). The platform has increased their workload by approximately 30%, not reduced it by 50%.

The failure mode. This is the tool trap. The platform was purchased to solve problems that were actually program design problems, not tooling problems. The real issues were: the risk register was not being maintained because there was no defined review cadence, no trigger-based update process, and no clear ownership per risk — the spreadsheet was not the problem. Evidence collection was manual because the program had not defined what specific evidence was needed for each control, who was responsible for producing it, and at what frequency — the lack of automation was not the problem. Reporting was ad hoc because the program had not defined what the committee needed to see, in what format, at what level of detail — the presentation format was not the problem.

The platform automated the execution of these broken processes. Automated reminders for evidence that nobody understands how to produce are worse than no reminders — they create notification fatigue, erode trust in the platform, and give control owners a legitimate reason to disengage. Automated dashboards displaying stale data are worse than no dashboards — they create a false impression of visibility while actually masking the fact that the program is not being maintained.

What a working program would have done. The program design comes first. Define the risk review cadence and risk ownership model — each risk has a named owner who reviews it quarterly and updates it when the risk landscape changes. Define the evidence requirements for each control — what data, from which system, at what frequency, in what format, and who is responsible for producing it. Use terminology that the control owner understands — not "provide evidence of control A.8.5 operating effectiveness" but "provide a screenshot of the conditional access policy showing MFA enforcement for all users, plus a query result showing sign-in events where MFA was not challenged in the past 30 days." Define the reporting requirements — what the committee needs to see, in what format, at what cadence, with what decision requests. Operate the program manually for six to twelve months. Then, with clear requirements based on operational experience, evaluate whether a platform would improve efficiency — and if so, evaluate platforms against your actual workflow requirements, not against the vendor's demonstration.
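The evidence definitions described above can be captured as structured data instead of prose buried in a platform. A sketch using hypothetical control IDs, owners, and instructions: each requirement states what to collect, in the owner's own terminology, at what cadence, and a small function lists what is due on any given day.

```python
from datetime import date

# Hypothetical evidence catalogue -- control IDs, owners, and wording are
# illustrative, not taken from any real framework mapping.
EVIDENCE_REQUIREMENTS = [
    {
        "control": "A.8.5 Secure authentication",
        "owner": "identity-team",
        "frequency_days": 30,
        "instruction": ("Screenshot of the conditional access policy showing "
                        "MFA enforcement, plus a sign-in query showing events "
                        "where MFA was not challenged in the past 30 days."),
        "last_collected": date(2025, 3, 1),
    },
    {
        "control": "A.8.16 Monitoring activities",
        "owner": "soc-team",
        "frequency_days": 90,
        "instruction": ("Export of active detection rules with last trigger "
                        "date and false positive rate for the quarter."),
        "last_collected": date(2025, 1, 15),
    },
]

def evidence_due(requirements, today):
    """Return (owner, control) pairs whose evidence is due for collection."""
    return [(r["owner"], r["control"]) for r in requirements
            if (today - r["last_collected"]).days >= r["frequency_days"]]
```

Because the requirement is explicit about source system, format, and frequency, a platform purchased later only has to automate this table; it does not have to invent the process.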

Module G15 covers GRC tooling evaluation — deliberately positioned late in the course, after you have built the program that the tool will support. By that point, you know exactly what you need the tool to do because you have been doing it manually. Your selection criteria are based on operational experience, not vendor demonstrations. Your implementation will succeed because you are automating a process that already works, not hoping the tool will create a process that does not exist.


Case study 4: The annual panic

Situation. A 150-person software company has maintained SOC 2 Type II attestation for three years. The observation period runs from January to December. Every October, the GRC function (one person, part-time alongside other security responsibilities) enters "audit preparation mode." For eight to ten weeks, all other security work stops while the GRC analyst collects evidence, updates the risk register, reviews and re-approves policies that should have been reviewed during the year, chases control owners for attestations they should have provided quarterly, and assembles the evidence package for the CPA firm.

The symptoms. During the October-December preparation period, the security team does not deploy the three new detection rules that were planned for Q4, does not complete the vulnerability management improvements that were promised after the penetration test, does not finish the Sentinel automation project that has been in progress since July, and does not investigate two medium-severity alerts that turned out to be benign but sat untriaged for nine days because the GRC analyst was the same person who normally triages alerts.

The evidence package, when assembled, reveals gaps. Two policies were due for annual review in June and were not reviewed — the GRC analyst was on leave for two weeks in June and the reviews were not delegated. Three control owners cannot produce evidence for the controls they are responsible for because they did not know they were responsible — the control ownership was documented in the SoA but never communicated to the individuals named. The risk register was last updated in January when the previous year's audit closed — it does not reflect the ransomware attempt in April (which was detected and contained but never recorded as a risk event), the new cloud migration that started in August (which changed the architecture and therefore the risk landscape), or the third-party vendor breach in September (which should have triggered a supply chain risk reassessment).

The GRC analyst spends November producing retroactive evidence — writing policy review notes dated June with the review actually conducted in November, asking control owners to produce screenshots "as if" they had been monitoring all year, and updating the risk register with entries that describe the current state but imply continuous maintenance. The CPA firm's auditor, experienced with small-company attestations, asks pointed questions about the consistency of evidence timing but ultimately issues a clean report because the controls are in fact operating — the evidence just was not being collected.

The failure mode. This is the audit-driven trap. The GRC program exists for the audit. Between audits, it is dormant. The annual panic is not a temporary staffing problem — it is a structural feature of a program designed around the audit cycle rather than around continuous operations. And the retroactive evidence production is not a minor shortcut — it is misrepresentation of the program's operational state to the auditor and, by extension, to every customer who relies on the SOC 2 report.

The audit-driven trap is the most expensive failure mode when measured in opportunity cost. Eight to ten weeks of full-time audit preparation each year consumes approximately 15-20% of the security team's annual capacity. That capacity could be spent on actual risk reduction — deploying detection rules, completing vulnerability management improvements, building automation. Instead, it is consumed by creating retroactive evidence of governance activities that should have been continuous. The security program is measurably weaker because the GRC program consumes the capacity that should be improving it.

What a working program would have done. Continuous operations eliminate the annual panic entirely. The risk register is updated when risks change — the April ransomware attempt triggers an immediate risk register update (new entry: "ransomware pre-encryption detected, existing controls effective but response time of 4.5 hours exceeds target SLA of 1 hour"). Policy reviews happen on their defined schedule — they are tracked in a shared calendar with automated reminders two weeks before the review date, and if the primary reviewer is unavailable, a delegate is pre-assigned. Control evidence is collected at defined intervals throughout the year — monthly evidence snapshots for high-risk controls, quarterly for medium-risk controls. The evidence collection is distributed across control owners with clear instructions on what to provide (not "evidence of control effectiveness" but "export the Sentinel analytics rule summary showing active rules, last trigger date, and false positive rate for Q2").

When the auditor arrives in January, the evidence package is assembled in two to three days by extracting the already-collected monthly and quarterly evidence from the evidence repository. There is no "preparation" because the program is always in a state that can be audited. The GRC analyst's capacity is available year-round for actual risk management and program improvement. The security team deploys detection rules on schedule. Vulnerability management improvements are completed. Automation projects finish. Alerts are triaged within SLA. The GRC program enables security operations instead of competing with them for capacity.
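The reminder-and-delegate mechanism from the working-program description can be sketched in a few lines. The schema is hypothetical: each scheduled review names a policy, a due date, a primary reviewer, a pre-assigned delegate, and an availability flag, and the check emits reminders two weeks out, reassigning to the delegate when the primary is away, so a two-week absence in June never silently skips a review.

```python
from datetime import date

def review_reminders(reviews, today):
    """Return (policy, assignee, due_date) for reviews due within 14 days.

    reviews: list of dicts with 'policy', 'due', 'reviewer', 'delegate',
    and 'reviewer_available' keys (hypothetical schema). The delegate is
    pre-assigned, avoiding the missed June reviews of Case Study 4.
    """
    out = []
    for r in reviews:
        days_left = (r["due"] - today).days
        if 0 <= days_left <= 14:
            assignee = r["reviewer"] if r["reviewer_available"] else r["delegate"]
            out.append((r["policy"], assignee, r["due"]))
    return out
```

Run daily from a scheduler, this replaces "audit preparation mode" with a steady trickle of small, assigned tasks.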


The common thread

All four case studies share the same root cause: the GRC program was designed around outputs (documents, evidence, audit results) rather than around the operating system that produces those outputs. The certified-but-breached organization had documentation but not control effectiveness monitoring. The fifty-policy organization had volume but not usability. The platform organization had automation but not program design. The annual-panic organization had compliance but not continuity.

The rest of this module, and the rest of this course, focuses on building the operating system. The outputs — policies, risk registers, compliance evidence, audit results, board reports — emerge naturally from a well-designed operating system. You do not produce them separately. They are the byproduct of doing governance, risk management, and compliance correctly.

Try it yourself: Diagnose your program

For each failure mode, assess whether your organization shows symptoms. Be honest — most organizations exhibit at least one pattern.

For each diagnostic question, record Yes, No, or Partial.

Compliance trap: Do you know the MFA enforcement rate in your environment right now (not what the policy says, but the actual percentage from sign-in logs)?
Compliance trap: When was the last time a compliance activity directly led to discovering and closing a security gap?
Documentation trap: Can every employee find and name the policy that governs their specific role's security obligations?
Documentation trap: How many policies does your organization have? Can you list them all without checking?
Tool trap: If you have a GRC platform, what percentage of the team's daily GRC work actually happens inside it?
Tool trap: Was the GRC platform purchased before or after the program's processes were defined?
Audit-driven trap: How many weeks before the audit does "audit preparation" begin?
Audit-driven trap: If the auditor arrived unannounced today, how long would it take to produce the evidence package?
Reveal: Interpreting your answers

Compliance trap indicators: If you cannot state the MFA enforcement rate from operational data (only from the policy document), or if compliance activities have never directly led to discovering a security gap, you are likely measuring documentation rather than effectiveness.

Documentation trap indicators: If employees cannot find or name their relevant policies, the policies are shelf-ware regardless of their quality. If you have more than 15-20 policies for an organization under 500 people, you likely have volume without usability.

Tool trap indicators: If less than 70% of daily GRC work happens in the platform, the team is maintaining parallel systems. If the platform was purchased before processes were defined, it was configured around vendor defaults rather than your actual needs.

Audit-driven trap indicators: If audit preparation takes more than two weeks, the program is not continuously producing evidence. If an unannounced audit would take more than a few days to respond to, the evidence base is not current.
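If you want to tally the exercise mechanically, a minimal sketch follows. The scoring weights, and the convention that a "yes" answer marks a symptom, are assumptions made for illustration, not part of the course material; several of the diagnostic questions above are open-ended and would need rephrasing as yes/no symptom checks first.

```python
# Hypothetical self-assessment tally: "yes" to a symptom question counts
# fully, "partial" counts half. These weights are illustrative assumptions.
SCORES = {"yes": 1.0, "partial": 0.5, "no": 0.0}

def diagnose(answers):
    """answers: (failure_mode, 'yes'|'partial'|'no') pairs -> {mode: 0..1 score}."""
    totals, counts = {}, {}
    for mode, answer in answers:
        totals[mode] = totals.get(mode, 0.0) + SCORES[answer]
        counts[mode] = counts.get(mode, 0) + 1
    return {mode: totals[mode] / counts[mode] for mode in totals}

answers = [
    ("compliance trap", "yes"),        # cannot state MFA rate from logs
    ("compliance trap", "partial"),
    ("documentation trap", "no"),
    ("documentation trap", "no"),
]
print(diagnose(answers))  # compliance trap: 0.75, documentation trap: 0.0
```

A score near 1.0 for any mode means that pattern's correction, not a generic fix, is the priority.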

If you recognize your organization in any of these case studies, that is not a failure — it is a diagnosis. The most common response to recognizing a failure mode is to feel defensive or to blame the previous GRC lead, the consultant who built the program, or the budget that was not provided. Resist that impulse. Every GRC program starts imperfect. Many programs that are currently in a failure mode were built with good intentions by competent people under real constraints. The difference between a program that improves and one that stagnates is whether the people responsible can accurately diagnose the current state and take systematic action to improve it. This course provides the systematic approach. Your honest assessment of where you are today is the starting point.

Compliance Myth: "Compliance equals security"

The myth: Compliance equals security

The reality: Compliance frameworks define minimum acceptable controls — not optimal security posture. An organization can be fully compliant with ISO 27001 and still be breached because the framework does not mandate specific detection rules, threat hunting programs, or incident response testing. Compliance is the floor. Security capability is the ceiling. The gap between them is where attackers operate.

Decision point

NE's GRC quarterly report shows green status across all frameworks, yet the SOC handled 3 Severity 2 incidents in the same quarter. The board asks: "If we are compliant, why are we attacked?" Which interpretation is correct?

  • Compliance means no attacks — question the SOC's detection capability.
  • Compliance confirms controls are in place; security outcomes depend on whether those controls are effective.
  • Compliance is just paperwork with no security relationship.
  • Any incident should change compliance status to amber.

The second interpretation is correct. The incidents were detected and contained BECAUSE the controls worked. A more accurate report: "Controls detected and contained 3 incidents, including a BEC attempt intercepted before financial loss — demonstrating that compliance is producing security outcomes." Compliance is not immunity; it is capability.

You know what GRC actually is.

G0 oriented you to the discipline. G1 made the case that governance is an operating system, not a documentation exercise — the shift from "we wrote the policy" to "the policy operates every day." Now you build the operating system.

  • 15 operational modules — policy framework, risk management, compliance operations, audit management, vendor risk, data governance, and sector-specific governance
  • External audit management playbook — the protocol for making audits a structured event instead of a firefight
  • Policy framework templates — every policy your organization actually needs, with the structure that survives audit and operates in practice
  • Risk register operations — how to make the risk register a decision-making instrument instead of a spreadsheet
  • Sector governance (G16) — the specific compliance obligations for financial services, healthcare, public sector, and manufacturing