Understanding Your Score
TruthVouch assessments automatically calculate dimension-based scores that measure your organization’s AI maturity across five areas: monitoring, compliance, governance, transparency, and operations. This page explains how scores are calculated and what they mean.

Overall Score
Your Overall AI Maturity Score is a weighted average of five core dimensions:
| Dimension | What It Measures |
|---|---|
| Monitoring | How well you observe and alert on AI outputs; detection and observability capabilities |
| Compliance | Regulatory posture, policies, audits, compliance training completion |
| Governance | Policy discipline, configuration controls, decision boundaries, agent autonomy |
| Transparency | Model cards, explainability artifacts, trust center publication, audit trails |
| Operations | Incident response, supply-chain mapping, DR drill cadence, team readiness |
Each dimension is scored on a scale of 0–5, and the overall maturity level is the weighted average.
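The weighted average above can be sketched as follows. The weights used here are illustrative equal defaults for the five dimensions, not TruthVouch’s actual weighting, which is not specified on this page.

```python
# Illustrative sketch of the overall score as a weighted average of
# the five dimensions. DEFAULT_WEIGHTS is a hypothetical equal split.

DEFAULT_WEIGHTS = {
    "Monitoring": 0.20,
    "Compliance": 0.20,
    "Governance": 0.20,
    "Transparency": 0.20,
    "Operations": 0.20,
}

def overall_score(dimension_scores: dict, weights: dict = DEFAULT_WEIGHTS) -> float:
    """Weighted average of per-dimension scores (each on a 0-5 scale)."""
    total_weight = sum(weights.values())
    weighted = sum(dimension_scores[d] * w for d, w in weights.items())
    return round(weighted / total_weight, 2)

scores = {"Monitoring": 3.5, "Compliance": 4.0, "Governance": 2.5,
          "Transparency": 3.0, "Operations": 3.5}
print(overall_score(scores))  # 3.3
```

Because the weights sum to 1.0 by default, the result stays on the same 0–5 scale as the individual dimensions.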
Dimension Scoring
Each dimension measures specific aspects of your AI maturity and is scored 0–5:
1. Monitoring (0–5)
How well you observe and alert on AI outputs:
- Real-time AI output scanning (hallucination detection, PII detection, anomaly detection)
- Alert infrastructure (routing, escalation, dashboards)
- Observability and logging (centralized logs, query traceability, response attribution)
- Integration with AI systems (SDKs, APIs, manual review processes)
- Score 0: No monitoring; manual log collection only
- Score 5: Continuous automated detection with real-time alerts and centralized observability
2. Compliance (0–5)
Regulatory posture and compliance readiness:
- Industry compliance frameworks mapped (GDPR, SOX, HIPAA, EU AI Act, ISO 42001)
- Compliance checklist completion and gap tracking
- Audit preparation and evidence collection
- Compliance training program completion
- Score 0: No formal compliance posture; unaware of applicable frameworks
- Score 5: All frameworks mapped, gaps closed, audit-ready documentation, training complete
3. Governance (0–5)
Policy discipline and control enforcement:
- Formal AI governance policy (documented, approved, communicated)
- Policy evaluation against AI decisions (manual or automated)
- Configuration controls for agent autonomy and action boundaries
- Risk assessment and approval workflows for AI deployments
- Score 0: No formal policy; ad-hoc decision-making
- Score 5: Comprehensive policies, continuous evaluation, clear autonomy boundaries, approval workflows
4. Transparency (0–5)
Explainability and public trust:
- Model cards and technical documentation
- Explainability artifacts (decision trails, source attribution)
- Trust center publication and public verification pages
- Cryptographic attestation and certification
- Score 0: No documentation; no public trust mechanisms
- Score 5: Comprehensive model cards, explainability artifacts, published trust center, cryptographic verification
5. Operations (0–5)
Operational readiness and incident response:
- Incident response procedures (classification, escalation, root cause analysis)
- Supply-chain mapping and risk assessment (vendors, models, data sources)
- Disaster recovery (backup, recovery testing, failover procedures)
- Team readiness (skills, training, on-call coverage)
- Score 0: No formal incident response; team gaps; no DR plan
- Score 5: Documented procedures, regular testing, trained team, comprehensive supply-chain visibility
Score Interpretation
Maturity Levels (0–5 scale)
| Score Range | Level | What It Means |
|---|---|---|
| 4.0–5.0 | Advanced | Best-in-class; comprehensive controls across all dimensions |
| 3.0–3.9 | Developed | Strong foundation; some gaps remain in specific areas |
| 2.0–2.9 | Managed | Basic processes; multiple areas need strengthening |
| 1.0–1.9 | Initial | Early stage; foundational work required |
| 0–0.9 | Ad-hoc | Little to no formal governance |
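The table above maps directly to a simple threshold lookup; a minimal sketch:

```python
# Map an overall 0-5 score to the maturity levels in the table above.
# Thresholds follow the documented ranges exactly.

def maturity_level(score: float) -> str:
    if score >= 4.0:
        return "Advanced"
    if score >= 3.0:
        return "Developed"
    if score >= 2.0:
        return "Managed"
    if score >= 1.0:
        return "Initial"
    return "Ad-hoc"

print(maturity_level(3.3))  # Developed
```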
Red Flags (Score < 2)
Scores below 2 in any dimension indicate:
- Monitoring < 2: No real-time detection; relying on manual processes
- Compliance < 2: Regulatory risk; unprepared for audits
- Governance < 2: No formal AI policy; uncontrolled AI deployments
- Transparency < 2: No audit trails or explainability; public trust issues
- Operations < 2: Team gaps; no formal incident response plan
Benchmarking
Your scores are compared against:
- Industry peers (Finance, Healthcare, Tech, etc.)
- Company size (1–10, 11–50, 51–250, 251–1000, 1000+)
- Geography (US, Europe, APAC)
See Benchmarks to understand your percentile ranking.
Scoring Methodology
Assessment Approach
TruthVouch uses two scoring methods:
- LLM-Based Scoring (Primary): An LLM evaluates your usage patterns and practices to generate rich narratives and nuanced scores across the five dimensions
- Deterministic Fallback: If LLM scoring is unavailable, a heuristic scoring engine evaluates your documented practices based on your assessment answers
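The primary/fallback flow described above can be sketched as follows. `llm_score` and `heuristic_score` are hypothetical placeholders, not TruthVouch APIs; the point is that a failed LLM call falls through to the deterministic engine.

```python
# Hypothetical sketch of primary LLM scoring with a deterministic
# fallback. Both scoring functions are stand-ins for illustration.

DIMENSIONS = ("Monitoring", "Compliance", "Governance",
              "Transparency", "Operations")

def llm_score(answers: dict) -> dict:
    raise RuntimeError("LLM scoring unavailable")  # simulate an outage

def heuristic_score(answers: dict) -> dict:
    # Deterministic rubric, e.g. counting documented practices per
    # dimension; a fixed placeholder score here for illustration.
    return {dim: 2.0 for dim in DIMENSIONS}

def score_assessment(answers: dict) -> dict:
    try:
        return llm_score(answers)
    except Exception:
        return heuristic_score(answers)

print(score_assessment({})["Monitoring"])  # 2.0
```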
Dimension Scoring
Each dimension is scored on a continuous scale of 0–5:
- 0–0.9: Minimal or no formal practices in this dimension
- 1.0–1.9: Basic awareness; some foundational elements in place
- 2.0–2.9: Established practices; most key controls present
- 3.0–3.9: Mature implementation; strong across most areas
- 4.0–5.0: Advanced, best-in-class implementation
Partial Credit
The assessment allows for granular scoring:
- Full score (5.0): All aspects mature and continuously improved
- High score (4.0–4.9): Strong across dimension with minor gaps
- Moderate score (3.0–3.9): Solid foundation with some targeted improvements needed
- Low score (1.0–2.9): Basic practices with significant gaps
- Minimal score (<1.0): Little to no formal practices
Confidence Scoring
The assessment includes a Confidence Score (0–100) for the overall maturity level:
- 90+: High confidence; comprehensive information; stable assessment
- 70–89: Moderate confidence; minor clarifications could refine the score
- 50–69: Lower confidence; a detailed audit is recommended for accuracy
- <50: Low confidence; consider external validation or consultant review
Low confidence indicates you may want additional assessment (interviews, an audit) before making major decisions.
Score Stability
Assessment scores are relatively stable but can fluctuate:
Month-to-Month Variation
- Within ±0.5 points: Natural variance from assessment interpretation
- ±1.0 point or more: Significant change; investigate what changed
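A minimal sketch of applying the variance thresholds above to two consecutive monthly scores. The label for changes between 0.5 and 1.0 points is an assumption, since the page documents only the two endpoints.

```python
# Classify a month-over-month score change against the documented
# thresholds: within 0.5 points is natural variance; 1.0 point or
# more warrants investigation.

def classify_change(previous: float, current: float) -> str:
    delta = abs(current - previous)
    if delta <= 0.5:
        return "natural variance"
    if delta < 1.0:
        return "notable change"  # assumption: between the two documented thresholds
    return "significant change; investigate"

print(classify_change(3.0, 3.3))  # natural variance
```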
Seasonal Patterns
- Scores may dip after compliance events (e.g., new regulations)
- Scores often improve after training programs are completed
- Scores can vary with team turnover or staffing changes
Improvement Tracking
Historical Comparison
After your second assessment (minimum 30 days later):
- See dimension-by-dimension progress
- Understand which improvements moved the needle
- Track overall maturity trajectory
Projected Trajectory
Based on your improvement rate:
- “At current pace, you’ll reach 4.5 Governance by Q3 2026”
- “You’ve plateaued at 3.0 Monitoring — likely need external help”
- “Fastest improvement: Operations (+0.5 points/quarter)”
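Projections like the examples above can be derived from a simple linear extrapolation of the improvement rate. This is an illustrative sketch, not TruthVouch’s actual projection model.

```python
import math

# Estimate how many quarters it takes to reach a target score at the
# current improvement pace; a zero or negative pace means a plateau.

def quarters_to_target(current: float, target: float, rate_per_quarter: float):
    """Quarters needed to reach target at the current pace (None if plateaued)."""
    if rate_per_quarter <= 0:
        return None  # plateaued: this pace will never reach the target
    return max(0, math.ceil((target - current) / rate_per_quarter))

print(quarters_to_target(3.5, 4.5, 0.5))  # 2
```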
Customization (Enterprise)
Enterprise customers can:
- Reweight dimensions (e.g., Compliance = 40% instead of 25%)
- Add custom questions (company-specific concerns)
- Adjust scoring rubrics (align with internal standards)
Contact your CSM to customize scoring.
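Reweighting one dimension implies renormalizing the others so the weights still sum to 100%. A hypothetical sketch, assuming the remaining weight is spread proportionally across the other dimensions:

```python
# Raise one dimension's weight and scale the rest proportionally so
# that all weights still sum to 1.0. Illustrative only.

def reweight(weights: dict, dim: str, new_weight: float) -> dict:
    remaining = 1.0 - new_weight
    others_total = sum(w for d, w in weights.items() if d != dim)
    return {d: (new_weight if d == dim else w / others_total * remaining)
            for d, w in weights.items()}

base = {d: 0.2 for d in ("Monitoring", "Compliance", "Governance",
                         "Transparency", "Operations")}
custom = reweight(base, "Compliance", 0.40)
print(round(custom["Monitoring"], 2))  # 0.15
```

With Compliance raised to 40%, the other four dimensions drop from 20% to 15% each.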
Related Topics
- Taking an Assessment — Step-by-step walkthrough
- Benchmarks — How you compare to peers
- Improvement Planning — Actions based on your scores
Next Steps
- Review your dimension scores — which area is weakest?
- Compare to benchmarks — where are you vs. peers?
- Read improvement recommendations — prioritize the top 3 gaps
- Plan actions — assign owners and timelines
- Re-assess in 90 days — measure progress