Audit & Compliance Manual

Control Intelligence

Coverage analysis, control relationships, and higher-signal views that help teams manage a control estate as it grows.

Audience: Compliance leads and framework owners
Focus: Control analysis and operating insight
Status: Public manual

Scope

Control intelligence is about understanding whether a control set is coherent, reusable, and mature enough to carry more than one obligation. This guide keeps the public-safe analysis model and strips private implementation details.

SSOT Document — Single Source of Truth
Audience: System operators, IT administrators, L1/L2 support staff
Last Updated: 2026-04-17

Overview

Control Intelligence is an automated recommendation engine that helps compliance operators manage their programs more effectively. It identifies gaps in your compliance coverage, suggests which policies should govern which controls, detects potentially duplicate controls across frameworks, and provides a health score for each program. All scoring is computed from your existing data — requirement mappings, cross-framework equivalencies, evidence bindings, and policy-control relationships — without external dependencies, AI models, or LLM calls.

Getting Started

Prerequisites

  • Meridian.view permission to read program health, gap categories, and suggestions.
  • Meridian.manage permission to accept or dismiss suggestions.
  • At least one active compliance program with controls, policies, and framework requirement mappings. The more complete your data, the more useful the suggestions.

See manual/programs.md for compliance program setup and framework binding. See manual/controls.md for control management and requirement mappings. See manual/policies.md for policy configuration.

Reading Program Health

Requesting Program Health (Step-by-Step)

When to use: When you want an overall picture of how well-connected a compliance program is — before an audit, during a periodic review, or after a batch of changes.

Steps:

1. Request program health for the program you want to review.
2. The response includes:
   • health_score — An integer from 0 to 100.
   • total_controls — Number of controls in the program.
   • total_policies — Number of policies visible to the program.
   • gaps — Detailed gap categories (see section 3.3).
   • suggestion_count — Number of pending (unresolved) suggestions.

Result: A complete snapshot of program health computed fresh from current data. No caching — every request reflects the latest state.

Understanding the Health Score

The health score starts at 100 and applies penalties for each gap found:

| Gap Type | Penalty per Item | Why It Matters |
| --- | --- | --- |
| Control without governing policy | 3 points | Governance gap — no answer to “what policy governs this control?” |
| Control without evidence | 2 points | Proof gap — nothing to demonstrate the control operates |
| Control without requirement mappings | 2 points | Relevance gap — the control isn’t linked to any compliance requirement |
| Unmapped framework requirement | 1 point | Coverage gap — a requirement in your program’s frameworks has no control |
| Orphan policy (no controls linked) | 1 point | Stale policy — exists but doesn’t govern anything |

The score is capped at 0 (it cannot go negative).

A score of 100 means every control has a governing policy, evidence, and requirement mappings; every framework requirement is mapped to at least one control; and no policies are orphaned.

A program with zero controls returns a health score of 100 (no penalties can be applied). This is intentional — an empty program has no gaps, just no content.
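The deduction rule above can be sketched in a few lines of Python. This is an illustrative sketch, not the actual implementation; the function name and gap keys are assumptions (the keys mirror the gap-category identifiers used later in this guide):

```python
# Hypothetical sketch of the documented health-score formula.
# Penalty weights follow the table above.
PENALTIES = {
    "controls_without_policy": 3,
    "controls_without_evidence": 2,
    "controls_without_requirements": 2,
    "requirements_without_controls": 1,
    "policies_without_controls": 1,
}

def health_score(gap_counts: dict) -> int:
    """Start at 100, subtract per-item penalties, and floor at 0."""
    deductions = sum(PENALTIES[gap] * count for gap, count in gap_counts.items())
    return max(0, 100 - deductions)

# An empty program has no gaps, so it scores 100.
print(health_score({}))                                # 100
print(health_score({"controls_without_policy": 10}))   # 70
```

Note how ten ungoverned controls alone cost 30 points, which is why that gap is worth triaging first.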

Interpreting the score:

| Range | Interpretation |
| --- | --- |
| 90-100 | Well-connected program. Minor gaps, if any. |
| 70-89 | Reasonable shape. Review the gap categories and address the highest-penalty items. |
| 50-69 | Significant gaps. Likely missing governance links (controls without policy) or evidence. |
| Below 50 | Major structural problems. Prioritize linking controls to policies first (3-point penalty each). |

Gap Categories

Controls without policy (controls_without_policy)

These controls have no governing policy assigned. This is the highest-penalty gap (3 points each), because it leaves no answer to “what policy governs this control?”

Fix: Use the suggested-policies endpoint (section 4) to find the right policy, or manually set the control’s governing policy.

Controls without evidence (controls_without_evidence)

No evidence bindings exist for these controls. The control may be implemented, but there is nothing in the system to prove it.

Fix: Bind evidence to the control via evidence collection or manual upload. See manual/evidence.md.

Controls without requirements (controls_without_requirements)

These controls are not mapped to any framework requirement. They may be redundant, or they may have been added manually without being linked to a requirement.

Fix: Map the control to the appropriate framework requirements. See manual/controls.md section on mapping controls to requirements.

Policies without controls (policies_without_controls)

Policies that exist in the program’s scope but do not govern or map to any control. They may be unused, outdated, or simply not yet linked.

Fix: Use the suggested-controls-for-policy endpoint (section 4.6) or manually link controls to the policy.

Requirements without controls (requirements_without_controls)

Framework requirements in your program’s bound frameworks that are not addressed by any control. These are compliance gaps — requirements your program is supposed to cover but does not.

Fix: Create controls for these requirements, or map existing controls to them. Use gap analysis for a framework-by-framework view. See manual/gap-analysis.md.

Potential duplicates (potential_duplicates)

Pairs of controls with fit scores at or above 85%, indicating they may cover the same ground. This is especially common in multi-framework programs (e.g., SOC 2 + ISO 27001) where similar controls may have been created independently for each framework.

Each pair includes:
  • The two controls (with IDs, refs, and titles).
  • The fit score (percentage).
  • A reason string describing why they matched.

Fix: Review the pair and decide whether to consolidate. This is an informational finding — the system does not automatically merge controls.
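The flagging rule is a simple threshold filter over pairwise fit scores. A minimal sketch, assuming the pair scores are already computed (the helper name and data shape are hypothetical):

```python
# Pairs at or above this fit score are surfaced in the program-health
# duplicate category, per the documentation above.
DUPLICATE_THRESHOLD = 85

def flag_duplicates(pair_scores: dict) -> list:
    """pair_scores maps (control_a, control_b) tuples to fit percentages."""
    return [pair for pair, score in pair_scores.items()
            if score >= DUPLICATE_THRESHOLD]

print(flag_duplicates({("AC-1", "A.9.1"): 91, ("AC-2", "CM-3"): 60}))
# [('AC-1', 'A.9.1')]
```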

Working with Suggestions

How Suggestions Are Generated

When you request suggested policies for a control (or suggested controls for a policy), the scoring engine evaluates every eligible candidate using five signals:

| Signal | Max Points | What It Measures |
| --- | --- | --- |
| Requirement Overlap | 35 | How many framework requirements are shared between the control and the policy’s existing controls. Cross-framework mappings expand the comparison — ISO-to-SOC 2 equivalencies count, weighted by mapping confidence (high = 1.0, medium = 0.7, low = 0.4). |
| Framework Domain | 25 | Whether the control and policy operate in the same compliance domain, based on the categories of their mapped requirements. |
| Governance Peers | 20 | How many “peer” controls (those sharing requirements with yours) are already governed by this policy. If 5 of 6 access controls are governed by the Access Control Policy, the 6th probably should be too. |
| Evidence Overlap | 10 | Whether the control shares evidence sources (same connector type + test name) with the policy’s other controls. |
| Text Similarity | 10 | Keyword overlap (Jaccard similarity) between control text (title, description, implementation notes) and policy text (title, content). This is a tiebreaker, not a primary signal. |

The composite fit score is the sum of all five signals, capped at 100.

Only candidates with a fit score of 40 or above are returned. Below that threshold, the overlap is too tenuous to be useful.
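Putting the table together, the composite score can be sketched as follows. The signal maxima and the 40-point cutoff come from this guide; the individual scoring functions are not public, so the jaccard helper below only illustrates the text-similarity tiebreaker, and all names are assumptions:

```python
# Illustrative composite fit score; not the actual scoring engine.
SIGNAL_MAX = {
    "requirement_overlap": 35,
    "framework_domain": 25,
    "governance_peers": 20,
    "evidence_overlap": 10,
    "text_similarity": 10,
}
MIN_FIT = 40  # candidates below this threshold are not returned

def jaccard(a: set, b: set) -> float:
    """Keyword overlap used by the text-similarity tiebreaker."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def composite_fit(signals: dict) -> int:
    """Clamp each signal to its documented maximum, sum, and cap at 100."""
    total = sum(min(score, SIGNAL_MAX[name]) for name, score in signals.items())
    return min(100, total)

score = composite_fit({
    "requirement_overlap": 30,
    "framework_domain": 20,
    "governance_peers": 15,
    "evidence_overlap": 5,
    "text_similarity": round(10 * jaccard({"access", "control"},
                                          {"access", "policy"})),
})
print(score, score >= MIN_FIT)  # 73 True
```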

Interpreting Fit Percentages

| Score Range | Label | What to Do |
| --- | --- | --- |
| 90-100 | Strong fit | Likely correct. Review the signal breakdown and accept. |
| 75-89 | Good fit | Probably right. Worth checking the detail strings for each signal. |
| 60-74 | Moderate fit | Some overlap. Use operator judgment — verify the policy actually governs this compliance domain. |
| 40-59 | Weak fit | Tenuous connection. Probably not the right match unless a specific signal explains it. |
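The banding above can be expressed as a small lookup. A sketch for illustration (the sub-threshold label is an assumption, since such candidates are simply never returned):

```python
def fit_label(score: int) -> str:
    """Map a fit percentage to the operator-facing label from the table above."""
    if score >= 90:
        return "Strong fit"
    if score >= 75:
        return "Good fit"
    if score >= 60:
        return "Moderate fit"
    if score >= 40:
        return "Weak fit"
    return "Below threshold (not returned)"

print(fit_label(94))  # Strong fit
print(fit_label(41))  # Weak fit
```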

Getting Suggested Policies for a Control (Step-by-Step)

When to use: When a control has no governing policy and you want to find the best match.

Steps:

1. Request suggested policies for the control.
2. Optional query parameters:
   • limit (1-20, default 5) — Maximum number of suggestions to return.
   • include_dismissed (default false) — Set to true to include previously dismissed suggestions.
3. Review the response, which includes:
   • The control’s ID, ref, and title.
   • A list of suggestions, each containing:
     • fit_score — The composite percentage.
     • signals — Full breakdown with score, max, and detail string for each of the five signals.
   • dismissed_count — Number of suggestions hidden because they were previously dismissed.

Result: Suggestions are persisted as IntelligenceSuggestion rows. Pending suggestions have their scores updated on each request. Accepted and dismissed suggestions retain their original scores.
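The refresh semantics can be sketched as follows. The Suggestion class and refresh helper are hypothetical stand-ins for the persisted IntelligenceSuggestion rows:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    status: str      # "pending", "accepted", or "dismissed"
    fit_score: int   # composite fit percentage at last scoring

def refresh(suggestion: Suggestion, recomputed_score: int) -> Suggestion:
    """Pending suggestions pick up the fresh score on each request."""
    if suggestion.status == "pending":
        suggestion.fit_score = recomputed_score
    # Accepted and dismissed suggestions retain their original scores.
    return suggestion
```

This is why an accepted suggestion's recorded score reflects the data at the moment it was accepted, not the current state.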

Accepting a Suggestion (Step-by-Step)

When to use: When you agree with a suggestion and want to create the governance link.

Steps: 1. Review the suggestion’s signal breakdown to confirm it makes sense.

Result: For a control-to-policy suggestion:
  • A PolicyControlMapping row is created (if one does not already exist).
  • The suggestion status changes to accepted with your user ID and timestamp.
  • An audit log entry (intelligence.suggestion.accept) records the action with full context.
  • The response includes mapping_created showing exactly what was linked.

For a control-to-control suggestion:
  • The suggestion is marked accepted (informational only).
  • No mapping is created — the operator decides how to consolidate.

This is a real change to the control’s governance. To undo it, manually edit the control to remove the governing policy.

Dismissing a Suggestion (Step-by-Step)

When to use: When a suggestion is irrelevant and you do not want to see it again.

Steps:

1. Dismiss the suggestion, optionally adding a note explaining why it is not a match.

Result:
  • The suggestion status changes to dismissed with your user ID and timestamp.
  • An audit log entry (intelligence.suggestion.dismiss) records the action.
  • The suggestion will not appear in future requests unless include_dismissed=true is passed.

Dismissal is conditionally permanent. If the underlying data changes significantly enough that the recomputed fit score differs by more than 10 points from the score at the time of dismissal, the suggestion automatically resets to pending and will reappear. This ensures that material changes (e.g., new requirement mappings, changed evidence bindings) prompt re-evaluation. Small fluctuations (≤10 points) leave the dismissal intact.
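The reset rule can be sketched as a drift check (a hypothetical helper; field names are illustrative):

```python
def maybe_reopen(status: str, dismissed_score: int, recomputed_score: int) -> str:
    """Reset a dismissed suggestion to pending only on a material score change."""
    if status == "dismissed" and abs(recomputed_score - dismissed_score) > 10:
        return "pending"   # material change: re-surface for re-evaluation
    return status          # small fluctuations (<= 10 points) keep the dismissal

print(maybe_reopen("dismissed", 62, 71))  # dismissed (drift of 9)
print(maybe_reopen("dismissed", 62, 75))  # pending (drift of 13)
```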

Getting Suggested Controls for a Policy (Step-by-Step)

When to use: When a policy exists but you want to find controls it should govern (the reverse of section 4.3).

Steps:

1. Request suggested controls for the policy.
2. Optional query parameters: limit (1-20, default 5), include_dismissed (default false).
3. Review the response and the signal breakdown for each suggestion.
4. Accept or dismiss suggestions using the same endpoints as sections 4.4 and 4.5.

Getting Related Controls (Step-by-Step)

When to use: When you want to find controls similar to a given control — for deduplication, consolidation, or cross-framework alignment.

Steps:

1. Request related controls for the control.
2. Optional query parameters: limit (1-20, default 5), include_dismissed (default false).
3. Review the response and the signal breakdown for each suggestion.

Result: The scoring for control-to-control comparison replaces the “governance peers” signal with a “requirement colocation” signal (4 points per shared requirement, up to 20). Pairs with scores at or above 85% are also flagged in the program health duplicate category.

Accepting a related-control suggestion is informational — it does not create any mapping or merge.
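Assuming the signal works as described (4 points per shared requirement, capped at 20), requirement colocation reduces to a set intersection:

```python
def requirement_colocation(reqs_a: set, reqs_b: set) -> int:
    """4 points per framework requirement shared by both controls, max 20."""
    return min(20, 4 * len(reqs_a & reqs_b))

# Two shared requirements -> 8 points (requirement refs are illustrative).
print(requirement_colocation({"CC6.1", "CC6.2", "A.9.2"}, {"CC6.1", "A.9.2"}))  # 8
```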

Merging Duplicate Controls (Step-by-Step)

When to use: When you’ve confirmed that two controls are duplicates (e.g., the potential duplicates section in program health shows them with 85%+ fit score) and you want to consolidate into one.

Steps:

1. Decide which control to keep as the canonical (primary) and which is the duplicate.
2. Merge the duplicate into the canonical.
3. Review the response. The migrated object shows what was moved:
   • Requirement mappings — Framework requirements reassigned from duplicate to canonical. Already-existing mappings on the canonical are deduplicated (not double-counted).
   • Evidence bindings — Evidence artifacts bound to the duplicate are now bound to the canonical.
   • Policy mappings — Policy-control associations moved to the canonical.
   • Risk mappings — Risk-control associations moved to the canonical.
   • Tests — Control tests reassigned to the canonical.
   • SOX records — Narrative controls, walkthrough controls, and account-assertion links moved.
   • Tasks — Compliance tasks referencing the duplicate now reference the canonical.
   • Alerts — Alert records referencing the duplicate now reference the canonical.
   • Action items — MAP action items referencing the duplicate now reference the canonical.
   • Suggestions removed — Intelligence suggestions referencing the duplicate are deleted.

Result:
  • The canonical control inherits any missing fields from the duplicate (description, implementation notes, monitoring summary, governing policy). If the canonical already has these fields, the duplicate’s values are ignored.
  • An audit log entry (intelligence.controls.merge) records the action with both control IDs, the migration summary, and your note.

This is irreversible. The duplicate cannot be un-merged. All its relationships have been migrated or deleted. The only record of the duplicate’s original associations is the audit log entry and the migration summary in the response.

Validation rules:
  • You cannot merge a control with itself.
  • You cannot merge a control that is already in "merged" status.
  • Both controls must belong to the specified program.
  • Requires Meridian.manage permission.
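Those rules can be sketched as pre-merge checks. This is a hedged illustration only: the Control type and its field names are assumptions, not the real data model, and the permission check is omitted:

```python
from dataclasses import dataclass

@dataclass
class Control:
    id: str
    program_id: str
    status: str  # e.g. "active" or "merged"

def validate_merge(canonical: Control, duplicate: Control, program_id: str) -> None:
    """Raise ValueError if the documented merge preconditions are violated."""
    if canonical.id == duplicate.id:
        raise ValueError("cannot merge a control with itself")
    if "merged" in (canonical.status, duplicate.status):
        raise ValueError("cannot merge a control already in 'merged' status")
    if program_id not in (canonical.program_id,) or duplicate.program_id != program_id:
        raise ValueError("both controls must belong to the specified program")

# Passes silently when all preconditions hold.
validate_merge(Control("c1", "p1", "active"), Control("c2", "p1", "active"), "p1")
```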

Common Scenarios

Scenario: Triage a New Program’s Health Score

Situation: You just set up a compliance program with controls and policies but the health score is 42.

Steps:

1. Call program health to get the gap breakdown.
2. Start with controls_without_policy (3 points each — the highest penalty).
3. For each ungoverned control, call suggested-policies. Accept the strong-fit matches.
4. Move to controls_without_evidence (2 points each). Bind evidence or configure connectors.
5. Move to controls_without_requirements (2 points each). Map controls to framework requirements.
6. Address requirements_without_controls by creating new controls or mapping existing ones.
7. Clean up policies_without_controls by linking them to controls or archiving unused policies.
8. Re-check the health score. It should be significantly higher.

Expected Outcome: Health score improves as gaps are closed. A fully connected program reaches 100.

Scenario: Consolidate Duplicate Controls Across Frameworks

Situation: The potential duplicates section shows 3 pairs of controls with 90%+ fit scores — your SOC 2 and ISO 27001 programs created equivalent controls independently.

Steps:

1. For each pair, review the signal breakdown to understand what they share.
2. Decide which control to keep as the canonical (primary).
3. Merge the duplicate into the canonical.
4. Review the migration summary in the response to confirm what was moved.
5. Repeat for each duplicate pair.
6. Re-check program health to confirm the duplicates are resolved.

Expected Outcome: Fewer controls to maintain. All requirement mappings, evidence, policies, tests, and other associations from the duplicates are consolidated onto the canonical controls. The duplicate controls remain as read-only tombstones with “merged” status. The duplicate category count drops to zero.

Scenario: Policy Suggestion Seems Wrong

Situation: A control gets a 72% fit score for a policy that clearly does not govern it.

Steps:

1. Read the signal breakdown. Identify which signal is inflating the score.
2. If requirement overlap is high: check whether the control’s requirements genuinely overlap with the policy’s other controls, or if the cross-framework mappings are too broad.
3. If governance peers is high: look at which peer controls are governed by this policy — one of them might be incorrectly assigned.
4. Dismiss the suggestion with a note explaining why it is wrong.

Expected Outcome: The suggestion is hidden. If the underlying data changes enough that the score shifts by more than 10 points, the suggestion will re-surface as pending for re-evaluation. Otherwise, it stays dismissed. The note provides context for future operators.

Scenario: No Suggestions Returned for a Control

Situation: A control has no governing policy, but the suggested-policies endpoint returns an empty list.

Steps:

1. Check whether the control has any requirement mappings (controls_without_requirements in program health).
2. If no requirements: map the control to its framework requirements first. The scoring engine needs requirements to compute overlap.
3. If requirements exist: check whether any policies have controls in overlapping domains. If the control is in a unique domain, there may not be a matching policy yet — create one.
4. Remember that the 40-point threshold filters out weak matches; every candidate policy may simply have scored below 40.

Expected Outcome: After adding requirements, re-requesting suggestions should yield results.

Permissions Reference

| Permission | Grants |
| --- | --- |
| Meridian.view | View program health, gap categories, all suggestion endpoints (read-only) |
| Meridian.manage | Accept suggestions (creates governance links), dismiss suggestions, merge duplicate controls |

FAQ

Q: Why did this policy score 94%? A: Check the signal breakdown in the response. A 94% likely means strong requirement overlap (30+/35), same compliance domain (20+/25), and most peer controls already governed by this policy (15+/20). The detail string on each signal explains exactly what matched and how many items contributed to the score.

Q: Why is my policy not showing up as a suggestion? A: The most common causes are that the control lacks requirement mappings (the scoring engine needs requirements to compute overlap) or that the policy’s fit score fell below the 40-point threshold. Pass include_dismissed=true to check whether the suggestion was previously dismissed. See the “No Suggestions Returned for a Control” scenario for a triage sequence.

Q: Can I undo a dismissed suggestion? A: There is no manual un-dismiss action. However, if the underlying data changes significantly (new requirement mappings, changed evidence bindings, etc.) such that the recomputed fit score differs by more than 10 points from the original, the suggestion automatically resets to pending and reappears. You can view dismissed suggestions at any time by passing include_dismissed=true.

Q: Can I undo a merge? A: No. Merging is irreversible. All relationships are migrated or deleted, and the duplicate control becomes a permanent “merged” tombstone. Review the duplicate pair carefully before merging. The audit log records the full migration summary for traceability.

Q: Why is my health score low? A: Check the gap categories. The most expensive gap is controls without a governing policy at 3 points each. A program with 10 such controls loses 30 points immediately. Prioritize: link controls to policies first, then add evidence and requirement mappings, then address unmapped requirements and orphan policies.

Q: How often are suggestions recomputed? A: On every request. There is no caching — scores are computed fresh from current data. If you add a new requirement mapping, the next suggestion request will reflect it. Pending suggestions have their fit scores updated on each request; accepted and dismissed suggestions retain their original scores.

Q: Does this use AI? A: No. All scoring is computed from your existing data — requirement mappings, cross-framework equivalencies, evidence bindings, and policy-control relationships — without external dependencies, AI models, or LLM calls.

Q: What happens if I accept a suggestion for a control that already has a governing policy?

Q: Are suggestions shared across users? A: Yes. Suggestions are persisted per program and scoped by the source-target entity pair. If one operator views suggestions for a control, those suggestion rows are visible to any other operator with Meridian.view on the same account. Accepting or dismissing a suggestion affects all users.

Related Documentation

  • functional/evidence.md — Evidence system and evidence-control bindings
  • manual/controls.md — Control management, requirement mappings, and status lifecycle
  • manual/gap-analysis.md — Per-framework coverage matrix and audit readiness
  • manual/policies.md — Policy configuration and policy-control mappings
  • manual/programs.md — Compliance program setup and framework binding