Audit & Compliance Manual

Audit Reports

Audit outputs, review packages, and the reporting layer teams use to communicate current audit state clearly.

Audience: Audit leads and stakeholders
Focus: Audit-ready reporting
Status: Public manual

Scope

Audit reporting has to be current, explainable, and aligned with the actual evidence state. This public guide keeps the operator-facing reporting model while excluding private export mechanics.

Overview

Meridian generates two reports:

  1. Audit Report — a per-cycle artifact summarizing findings and remediation for a single audit.
  2. Compliance Readiness Report — a program-level posture snapshot for leadership.

Both reports return structured data that the frontend renders in a print-friendly layout. Use your browser’s print function to export them as PDFs for now (native PDF generation is planned for WS-21).

Audit Report

When to use it

  • Audit read-outs to leadership or the engagement team.
  • Pre-handoff packaging before marking an audit cycle complete.
  • Post-audit retrospectives tracking remediation against findings.
  • Identifying repeat findings across cycles.

Access

Navigate to an audit cycle detail page and click View Report, or go directly to the relevant workflow.

Requires Meridian.audit permission. Auditors and account admins have access; view-only users and compliance managers (without audit permission) do not.

Sections

Summary metrics — four large-number cards at the top:

  • Total findings
  • Material weaknesses
  • Repeat findings (pulled out because they indicate prior remediations that failed)
  • Overdue MAPs

Remediation Status — a one-line status breakdown: open / in progress / implemented / verified / closed / no MAP. The “no MAP” count is the number of findings WITHOUT an attached MAP and should be zero before you mark the audit complete.
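As a minimal sketch of the check behind the “no MAP” count (the field names here are assumptions for illustration, not Meridian’s actual schema):

```python
# Hypothetical finding records; field names are illustrative only.
findings = [
    {"ref": "F-1", "map_id": "MAP-10"},
    {"ref": "F-2", "map_id": None},   # no MAP attached
    {"ref": "F-3", "map_id": "MAP-12"},
]

def no_map_count(findings):
    """Number of findings without an attached MAP; should be zero before completion."""
    return sum(1 for f in findings if not f.get("map_id"))

print(no_map_count(findings))  # 1 -> one finding still needs a MAP
```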

Repeat Findings — a separately highlighted section listing every finding with is_repeat = true. Each entry links back to the finding detail page and shows its MAP status. Non-empty repeat sections are a red flag — they mean a prior cycle’s remediation did not stick.
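The filter behind this section can be sketched as follows, assuming the boolean `is_repeat` field named above (other field names are illustrative):

```python
def repeat_findings(findings):
    """Return findings flagged as repeats (is_repeat = true), e.g. for the report section."""
    return [f for f in findings if f.get("is_repeat")]

sample = [
    {"ref": "F-1", "is_repeat": False, "map_status": "closed"},
    {"ref": "F-2", "is_repeat": True,  "map_status": "in_progress"},
]
print([f["ref"] for f in repeat_findings(sample)])  # ['F-2']
```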

Findings by Classification — three sections:

  • Material Weakness — audit-blocking failures
  • Deficiency — meaningful gaps
  • Observation — minor issues worth noting

Each finding entry includes its ref, classification, materiality, MAP status, linked control, and description. MAPs that are overdue show an overdue badge inline.

Reading the MAP badges

Badge             Meaning
MAP: open         MAP exists, no work started
MAP: in_progress  Work in progress
MAP: implemented  Remediation is done; waiting on auditor verification
MAP: verified     Auditor has verified the fix
MAP: closed       Manager has closed the loop
NO MAP (red)      Finding has no MAP — blocks remediation tracking
overdue (red)     Target date passed while status is still open or in progress
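The overdue rule described in the last row can be sketched directly; record fields here are assumptions, not Meridian’s actual schema:

```python
from datetime import date

OPEN_STATES = {"open", "in_progress"}

def is_overdue(map_record, today):
    """Overdue: the target date has passed while status is still open or in progress."""
    return map_record["status"] in OPEN_STATES and map_record["target_date"] < today

late = {"status": "in_progress", "target_date": date(2026, 1, 1)}
done = {"status": "verified",    "target_date": date(2026, 1, 1)}
print(is_overdue(late, date(2026, 2, 1)), is_overdue(done, date(2026, 2, 1)))  # True False
```

Note that a verified or closed MAP never shows the overdue badge, even past its target date.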

Printing

Use your browser’s print function (the header includes a Print button) to generate a PDF. The layout is designed to paginate cleanly. Native PDF generation with branded templates is planned for WS-21.

Compliance Readiness Report

When to use it

  • Monthly leadership reporting on program posture.
  • Pre-audit readiness checks before starting a new cycle.
  • Tracking progress over time across frameworks, controls, evidence, risks, policies, training, and MAPs.
  • Board-level compliance updates.

Access

Navigate to a program detail page and click View Readiness Report, or go directly to the relevant workflow.

Requires Meridian.view — any authenticated user with access to the program can see it.

Sections

Framework Coverage — per-framework breakdown of covered/partial/gap requirements. A requirement is “covered” only if it has a full mapping to an implemented control in this program. A requirement mapped to a partially_implemented control counts as “partial”, not “covered” — this is intentional to avoid green-washing gaps.
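The coverage rule above can be sketched as a small classifier (field names like `mapping` and `control_status` are assumptions for illustration):

```python
def coverage_status(mappings):
    """Classify one requirement from its control mappings:
      covered - at least one full mapping to an implemented control
      partial - mapped, but no full mapping to an implemented control
      gap     - no mappings at all
    """
    if not mappings:
        return "gap"
    if any(m["mapping"] == "full" and m["control_status"] == "implemented"
           for m in mappings):
        return "covered"
    return "partial"

# A partially_implemented control keeps the requirement at "partial", never "covered".
print(coverage_status([{"mapping": "full", "control_status": "partially_implemented"}]))
```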

Control Summary — counts of controls by status plus test pass/fail totals.

Evidence Freshness — fresh / expiring_soon / stale breakdown based on evidence valid_until or, when not set, age relative to the 30-day freshness threshold.
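A sketch of that classification, assuming hypothetical `valid_until`/`collected_at` fields; the expiring_soon window used here is an assumption, not a documented cutoff:

```python
from datetime import date, timedelta

FRESHNESS_DAYS = 30  # the 30-day freshness threshold mentioned above

def freshness(evidence, today):
    """Classify evidence as fresh / expiring_soon / stale.

    Uses valid_until when set; otherwise falls back to age since collected_at.
    The expiring_soon window below is illustrative.
    """
    valid_until = evidence.get("valid_until")
    if valid_until is not None:
        if valid_until < today:
            return "stale"
        if valid_until <= today + timedelta(days=FRESHNESS_DAYS):
            return "expiring_soon"
        return "fresh"
    age_days = (today - evidence["collected_at"]).days
    return "stale" if age_days > FRESHNESS_DAYS else "fresh"

today = date(2026, 2, 1)
print(freshness({"valid_until": date(2026, 1, 15)}, today))                        # stale
print(freshness({"valid_until": None, "collected_at": date(2026, 1, 20)}, today))  # fresh
```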

Open Risks — count of open risks by level (critical / high / medium / low). Levels use residual_score when set, otherwise inherent_score.
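The residual-over-inherent fallback can be sketched as below (field names match the scores mentioned above; the record shape is otherwise assumed):

```python
def effective_score(risk):
    """Level is derived from residual_score when set, otherwise inherent_score."""
    residual = risk.get("residual_score")
    return residual if residual is not None else risk["inherent_score"]

risks = [
    {"inherent_score": 20, "residual_score": 8},     # residual wins
    {"inherent_score": 15, "residual_score": None},  # falls back to inherent
]
print([effective_score(r) for r in risks])  # [8, 15]
```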

Remediation (MAPs) — the same MAP stats widget shown on the program dashboard: total, by status, and overdue.

Open Findings — count of findings NOT in remediated or closed across all audit cycles in the program, broken down by classification.
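As a sketch of that count (status and classification values follow the wording above; the record shape is an assumption):

```python
from collections import Counter

CLOSED_STATES = {"remediated", "closed"}

def open_findings_by_classification(findings):
    """Count findings not yet remediated or closed, grouped by classification."""
    return Counter(f["classification"]
                   for f in findings if f["status"] not in CLOSED_STATES)

sample = [
    {"classification": "material_weakness", "status": "open"},
    {"classification": "deficiency",        "status": "remediated"},
    {"classification": "observation",       "status": "in_progress"},
]
print(dict(open_findings_by_classification(sample)))
```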

Policies — count of published policies and the acknowledgment rate as a proxy metric (see docs/functional/audit-reporting.md for the exact formula — it’s a proxy because Meridian doesn’t own the user directory).

Training — count of training requirements and completion rate, computed using the same proxy structure.

Reading the numbers

  • Zero requirements_gap across all frameworks is the goal.
  • Evidence stale > 0 is a red flag — stale evidence won’t pass an audit.
  • Open risks critical > 0 should drive immediate attention.
  • Open findings material_weakness > 0 typically blocks certification.
  • MAPs overdue > 0 signals slipping remediation.

Printing

Same as the audit report — browser print. Native PDF generation is WS-21.

Known Limitations

  • Proxy rates for policies and training. Without a canonical headcount from Portal, ack rates and completion rates are proxies, not true percentages. They’re still useful for trend-tracking, but don’t present them to an external auditor as “N% of staff are compliant.”
  • No historical snapshots. Reports reflect current state. There’s no “report as of 2026-01-01” yet. Historical snapshotting is deferred until there’s demand.
  • No PDF generation. Use browser print for now. WS-21 will add server-side PDF generation with branded templates.