Audit Workflow
Audit planning, request handling, operator coordination, and the cadence that keeps audit work from becoming a scramble.
Scope
Audit workflow is where evidence, requests, follow-up, and accountability all need to hold together. The public version keeps the operator guidance and leaves private control-plane detail behind.
This guide walks through running an internal audit cycle in Meridian — from creating the cycle through producing a finalized findings log.
Step 1: Create the Audit Cycle
You can reach the audit workspace two ways:
- Sidebar → Audits — the cross-program workspace at the relevant workflow. Pick a program from the selector to see its audit cycles, filter by status or type, and page through history. This is the everyday entry point.
- Programs → [Your Program] → Audits — the per-program drill-in at the relevant workflow. This is the “New Audit Cycle” entry point and the view the sidebar “Program View” button opens.
From the per-program view, click New Audit Cycle. Fill in:
- Name: e.g., 2026 Q1 Internal SOC 2 Audit
- Audit Type: internal, or external if an outside firm is conducting the audit — the cycle is still managed here, but external auditors get their own portal in WS-10
- Auditor Firm / Lead Auditor: only relevant for external cycles
- Start Date / End Date: optional period bounds. The audit year on finding refs is taken from the start date.
The cycle is created in planning state. The review queue is empty until you advance to fieldwork.
Step 2: Move to Fieldwork
When you’re ready to start examining controls, advance the cycle status from planning to fieldwork. The system auto-creates a control review row for every control in your program. This is your work queue.
Note: If you add new controls to the program AFTER fieldwork starts, they do NOT automatically get review rows. You’ll need to roll back to planning and re-advance, or wait for WS-09 to add an “add control to active audit” workflow.
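The planning-to-fieldwork transition can be sketched as follows. This is an illustrative model, not Meridian's actual API — the class and method names are assumptions. It captures the two behaviors described above: one review row is seeded per control at the moment of transition, and controls added afterward get no rows.

```python
from dataclasses import dataclass, field

@dataclass
class ControlReview:
    control_id: str
    status: str = "not_reviewed"

@dataclass
class AuditCycle:
    status: str = "planning"
    reviews: list = field(default_factory=list)

    def advance_to_fieldwork(self, program_controls):
        if self.status != "planning":
            raise ValueError(f"Invalid status transition: '{self.status}' -> 'fieldwork'")
        # One review row per control at the moment of transition;
        # controls added later do NOT get rows automatically.
        self.reviews = [ControlReview(c) for c in program_controls]
        self.status = "fieldwork"

cycle = AuditCycle()
cycle.advance_to_fieldwork(["AC-1", "AC-2", "CM-3"])
print(cycle.status, len(cycle.reviews))  # fieldwork 3
```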
Step 3: Review Controls
Open the Review Queue (from the audit detail page). Each row is a control to examine. Click a row to open the Control Review Workspace.
The workspace has three panels:
- Left: Control info — description, implementation notes, current control status
- Center: Evidence collected on this control (with freshness)
- Right: Discussion thread with the control owner
To start a review:
1. Click Claim Review. Status moves to under_review.
2. Examine the evidence. Click into individual evidence artifacts to review the underlying files or structured data.
3. Post questions or assertion challenges as you find issues. Use the comment type selector:
- Note: casual context
- Question: ask the control owner something
- Assertion Challenge: when you disagree with the control’s claimed state. These are highlighted prominently in the UI.
- Response: when responding to a previous comment
Requesting More Evidence
If the existing evidence isn’t enough, click New Request in the Evidence Requests panel. Describe what you need. The control owner sees the open request and uploads or links evidence to fulfill it.
When the control owner fulfills the request, it’s linked to a specific evidence record and becomes terminal. You can’t re-open a fulfilled or cancelled request — create a new request if you need more.
Cancelling a request: the compliance manager can cancel any open request. You can also cancel a request you opened yourself (e.g., if you later realize the evidence wasn’t needed). Cancelling is terminal — the request stays in the audit trail marked cancelled and cannot be re-opened. A confirmation dialog appears before cancel since it closes the thread.
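The request lifecycle above can be sketched as a small state machine: an open request can be fulfilled (linking a specific evidence record) or cancelled, and both outcomes are terminal. The class below is illustrative only — the names are assumptions, not Meridian's data model.

```python
class EvidenceRequest:
    TERMINAL = {"fulfilled", "cancelled"}

    def __init__(self, description):
        self.description = description
        self.status = "open"
        self.evidence_id = None

    def fulfill(self, evidence_id):
        if self.status in self.TERMINAL:
            raise ValueError(f"Request is {self.status} and cannot be re-opened")
        # Fulfilment links a specific evidence record and closes the thread.
        self.evidence_id = evidence_id
        self.status = "fulfilled"

    def cancel(self):
        if self.status in self.TERMINAL:
            raise ValueError(f"Request is {self.status} and cannot be re-opened")
        # Cancelled requests stay in the audit trail, marked cancelled.
        self.status = "cancelled"
```

If you need more evidence after a request is terminal, the only move is a new request, which is why both branches raise rather than reset.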
Rendering a Conclusion
Once you’ve examined the evidence and resolved any open assertions, render your conclusion in the bottom-left panel:
- Effective: The control is operating as intended
- Observation: Minor issue worth noting
- Deficiency: Meaningful gap
- Material Weakness: Audit-blocking failure
Add review notes (sampling approach, scope, exceptions), then click Render Reviewed. The status moves to reviewed.
If you need to amend a reviewed control later, click Re-open to bounce it back to under_review.
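Putting the claim / render / re-open moves together, the review statuses named in this step form a small transition table. The table below is inferred from the text and is a sketch, not the product's actual state machine.

```python
# Allowed review transitions, inferred from the statuses named above.
REVIEW_TRANSITIONS = {
    ("not_reviewed", "under_review"),  # Claim Review
    ("under_review", "reviewed"),      # Render Reviewed (conclusion required)
    ("reviewed", "under_review"),      # Re-open to amend
}

def transition(current, target):
    if (current, target) not in REVIEW_TRANSITIONS:
        raise ValueError(f"Invalid status transition: '{current}' -> '{target}'")
    return target

status = transition("not_reviewed", "under_review")  # claim
status = transition(status, "reviewed")              # render conclusion
status = transition(status, "under_review")          # re-open to amend
```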
Step 3b: Test Executions During Fieldwork (CW-23)
The Control Review Workspace now surfaces Test executions during this cycle above the evidence list. This shows every run of every control test that landed in the cycle’s fieldwork window — in-window runs are expanded, out-of-window runs are collapsed behind a “Show older runs” toggle.
Each execution row shows who ran it, when, the result, sample size/description, notes, and the linked evidence artifacts with primary/supporting roles. Evidence in the lower list is also badged from test execution when it was produced by a run, so you can quickly distinguish test-derived evidence from manual uploads.
If the control owner hasn’t recorded an execution in the window yet, ask them to — or record it yourself from the ControlDetail Tests tab while you’re in the workspace. Executions recorded mid-review show up on the next refresh.
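The in-window / out-of-window split the panel applies can be sketched as a simple date filter: runs inside the fieldwork window are shown expanded, older runs go behind the toggle. Field names here are illustrative assumptions.

```python
from datetime import date

def split_by_window(executions, window_start, window_end):
    """Split test executions into in-window (expanded) and older (collapsed)."""
    in_window = [e for e in executions if window_start <= e["ran_at"] <= window_end]
    older = [e for e in executions if e["ran_at"] < window_start]
    return in_window, older

runs = [
    {"id": 1, "ran_at": date(2026, 1, 10)},   # inside the Q1 fieldwork window
    {"id": 2, "ran_at": date(2025, 11, 2)},   # older run, collapsed
]
shown, collapsed = split_by_window(runs, date(2026, 1, 1), date(2026, 3, 31))
```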
Step 4: Create Findings
Conclusions other than effective typically result in a finding. From the workspace, click New Finding. The form pre-fills the control and links the finding to this review.
Fill in:
- Title: Concise statement of the issue
- Description: Full description — issue, impact, evidence reviewed, recommendation
Click Create. The finding gets an auto-generated reference like F-2026-001 (sequential within the audit cycle).
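The reference format can be sketched as below: the year comes from the cycle start date (per Step 1), and the sequence is per-cycle. This naive counter is for illustration only — it races under concurrent creates, which is presumably why the backend retries allocation (see the finding_ref error in the table at the end of this doc).

```python
def next_finding_ref(cycle_start_year: int, existing_refs: list) -> str:
    """Illustrative: F-{year}-{seq}, sequential within the audit cycle."""
    seq = len(existing_refs) + 1
    return f"F-{cycle_start_year}-{seq:03d}"

refs = []
refs.append(next_finding_ref(2026, refs))  # F-2026-001
refs.append(next_finding_ref(2026, refs))  # F-2026-002
```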
Editing Findings
Findings start in draft and can be freely edited until they’re finalized. You can move them through draft → under_discussion if you want to formally signal “the control owner is responding to this” before locking.
Finalizing a Finding
When the finding is settled and ready to lock, click Finalize. Once finalized, the title, description, classification, and materiality cannot be changed. This protects the audit record from silent rewrites. Status can still progress: final → remediated → closed (or back to final if remediation fails verification).
A confirmation dialog appears before finalization since it’s irreversible for the text fields.
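The finalization rule above — text fields frozen, status still free to progress — can be sketched like this. The field and status names follow the text; the class itself is a hypothetical model, not Meridian's.

```python
class Finding:
    LOCKED = {"title", "description", "classification", "materiality"}
    FLOW = {"final": {"remediated"}, "remediated": {"closed", "final"}}

    def __init__(self, title, description):
        self.fields = {"title": title, "description": description}
        self.status = "draft"

    def finalize(self):
        self.status = "final"

    def edit(self, name, value):
        # Text fields are frozen once the finding is finalized.
        if self.status not in ("draft", "under_discussion") and name in self.LOCKED:
            raise ValueError("Finding is final")
        self.fields[name] = value

    def advance(self, target):
        # Status can still progress: final -> remediated -> closed,
        # or back to final if remediation fails verification.
        if target not in self.FLOW.get(self.status, set()):
            raise ValueError(f"Invalid status transition: '{self.status}' -> '{target}'")
        self.status = target
```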
Step 5: Move to Reporting
In reporting state, you finalize all findings and prepare the audit report. The audit detail page has a View Report button in the header that opens the structured audit report at the relevant workflow — findings grouped by classification, MAP status, repeat findings, and cycle metrics. Non-auditors do not see the button because the backend endpoint requires Meridian.audit.
Step 5b: Generate the Evidence Package (CW-23)
Before you close the cycle, generate the package you’ll hand to the auditor. Scroll to the Evidence Package panel on AuditDetail and click Generate package. The backend:
- Walks the whole cycle — controls, reviews, tests, executions, evidence, findings.
- Builds a deterministic canonical manifest (same state, same bytes, every time).
- Computes a SHA-256 manifest_hash over the canonical bytes. This is what the auditor references in their workpaper.
- Zips the manifest + every evidence file under evidence/{id}/(unknown).
- Writes the bundle to storage and inserts a package row.
Generation is synchronous. Small cycles finish in seconds; larger ones can take a minute — the panel shows a long-running warning at 30 seconds.
Click Preview manifest to expand the manifest tree inline. Every control block shows its governance snapshot, the reviews, the tests with executions, the evidence entries (with short content_hash), and the findings. If something is missing or wrong, fix it in the UI and regenerate — the second generation returns a new row with a new hash only if the data actually changed.
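The "same state, same bytes, same hash" property can be sketched with canonical JSON: sorted keys and fixed separators make the serialization deterministic, so the SHA-256 only changes when the data does. This assumes a JSON canonicalization — Meridian's actual canonical form may differ.

```python
import hashlib
import json

def manifest_hash(state: dict) -> str:
    # Sorted keys + fixed separators => identical bytes for identical state.
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()

a = manifest_hash({"controls": ["AC-1"], "findings": ["F-2026-001"]})
b = manifest_hash({"findings": ["F-2026-001"], "controls": ["AC-1"]})
assert a == b  # key order doesn't change the hash; changed data would
```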
Share the package:
- Click Issue share link. The modal defaults to a 7-day expiration and hard-caps at 30 days. Anything above 14 days shows a yellow “long-lived tokens widen the breach window” warning.
- Add a note (e.g. “Q1 2026 external auditor”) for your own records.
- Click Issue share link. You’ll see the full URL with a big warning: This URL is shown only once. Click Copy and paste it somewhere durable immediately.
- Send the URL to the auditor via your usual channel.
- The auditor opens the URL — no login — and the ZIP streams to their browser. Every download writes an audit log entry with the remote IP and user agent, and the token’s last_used_at updates.
The panel lists every active share token with badges:
- Not yet used — issued but never opened.
- Downloaded recently — opened in the last 24 hours (auditor is working on it).
- Last used {date} — older downloads.
Click Revoke to kill a token before its expiration; revokes are idempotent. Issue a fresh link if the auditor needs more time.
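The expiry rules above (7-day default, 30-day hard cap, warning above 14 days) can be sketched as a small validator. This is a hypothetical helper for illustration, not Meridian's API.

```python
def validate_expiry(days: int = 7):
    """Return (days, warn) per the share-link rules: cap 30, warn above 14."""
    if days > 30:
        raise ValueError("Expiration cannot exceed 30 days")
    warn = days > 14  # UI shows the "long-lived tokens" warning
    return days, warn

print(validate_expiry())    # (7, False)  -- the default
print(validate_expiry(21))  # (21, True)  -- long-lived, warned
```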
Step 6: Complete the Audit
When all reviews are reviewed AND all findings are finalized, advance the cycle to complete. The system checks multiple conditions and shows specific errors if any fail.
Test coverage gate (CW-23.8): before the old review / finding checks, the backend now also checks that every test on every control in the program has at least one non-superseded execution during the cycle’s fieldwork window. If any tests are missing coverage, the transition is blocked with a structured error listing the missing tests. The UI opens an Override modal:
- Lists every missing test (control ref + test name).
- Requires a free-text override reason of at least 20 characters after trim.
- Live character counter; Submit button disabled until the minimum is met.
Provide a real reason (“Connector X was down 2026-02-03 to 2026-02-10, manual coverage documented in ticket LIN-4221” is useful; “n/a” is not). On submit, the cycle closes and an audit_cycle.closure_override audit log entry records the reason plus the full missing-tests snapshot at override time. Future you — and the auditor — can see exactly what was skipped and why.
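The override-reason gate is simple to state precisely: at least 20 characters after trimming whitespace. A sketch of that check (the function name is an assumption):

```python
def valid_override_reason(reason: str) -> bool:
    """True when the trimmed reason meets the 20-character minimum."""
    return len(reason.strip()) >= 20

print(valid_override_reason("n/a"))  # False -- too short to be useful
print(valid_override_reason(
    "Connector X was down 2026-02-03 to 2026-02-10; see LIN-4221"
))  # True
```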
A complete audit cycle is immutable. No further reviews, comments, evidence requests, findings, or status changes can be made. The cycle is the audit record of record.
Filters and Triage
The Review Queue supports filters:
- By conclusion: any of the four values
- By reviewer: see who is working on what
The Findings Log supports filters by classification and status to triage what needs attention.
Common Operator Errors
| You see | What it means | What to do |
|---|---|---|
| Invalid status transition: 'X' -> 'Y' | The state machine doesn’t allow that move | Check the lifecycle in the functional doc |
| Cannot enter reporting: N control review(s) not yet reviewed | Not all reviews done | Filter the queue by not_reviewed and finish them |
| Cannot complete: N finding(s) not finalized | Some findings still draft/under_discussion | Filter findings by draft and finalize each |
| A conclusion is required when status is 'reviewed' | You tried to render reviewed without picking a conclusion | Pick one in the dropdown |
| Finding is final ... | You tried to edit a locked finding’s text | Status changes are still allowed; create a new finding if the text needs to change |
| Evidence does not belong to the same program as this control review | Fulfilling a request with evidence from another program | Pick evidence from the correct program |
| Evidence is for a different control than this review | Fulfilling with evidence tagged to a different control | Use program-level evidence or evidence tagged to the correct control |
| Only the requesting auditor or a compliance manager can cancel this evidence request | You tried to cancel someone else’s evidence request | Ask the original auditor or a manager to cancel |
| Fulfilling an evidence request requires Meridian.manage | Auditor tried to fulfill a request | Only compliance managers can fulfill — post the evidence and have the manager link it |
| Could not allocate a unique finding_ref after multiple retries — please retry the request | Rare finding_ref race (two concurrent creates in the same cycle) | Retry the request. If persistent, investigate duplicate creators |
| Cannot post comments on a completed audit cycle | The cycle is closed | Open a new cycle if you need to add to the audit record |
Tips
- Claim controls before reviewing them. Setting a reviewer makes it clear who is working on what and helps avoid duplicate effort.
- Use assertion challenges for real disagreements, not just questions. Reserve them for moments where you want to formally flag “the evidence does not support this claim.” They’re highlighted in the UI for a reason.
- Finalize findings sooner rather than later. The longer a finding sits in draft, the more likely the text drifts. Locking forces a clean record.
- Re-open a review if you find new information — don’t avoid amending. The audit trail captures all changes, and re-opening keeps the conclusion accurate.