Key Concepts

TruthVouch uses domain-specific language to describe AI governance, compliance, and monitoring. Here are the core concepts you need to understand.

Truth Nuggets

Definition: Verified facts about your organization that serve as ground truth for all AI fact-checking.

Truth Nuggets are the foundation of everything TruthVouch does. They’re structured data points about your company — product names, leadership names, pricing, capabilities, policies, certifications — that you own and verify. Unlike external data sources, Truth Nuggets come directly from your organization and are always current.

Examples:

  • “Product: TruthVouch Shield detects hallucinations with high accuracy”
  • “CEO: Eyal Cohen, founded March 2026”
  • “Pricing: Professional tier is $1,199/month”
  • “SOC 2 Type II: Architecture designed for compliance, audit in progress”
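Conceptually, each example above is a structured, verifiable record. A minimal sketch of what such a record might look like (field names are illustrative, not the actual TruthVouch schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TruthNugget:
    """Illustrative structure for one verified organizational fact."""
    topic: str          # e.g. "pricing", "leadership", "certifications"
    claim: str          # the verified statement itself
    verified_on: date   # when your organization last confirmed it
    tags: list = field(default_factory=list)

nugget = TruthNugget(
    topic="pricing",
    claim="Professional tier is $1,199/month",
    verified_on=date(2026, 3, 1),
    tags=["pricing", "professional-tier"],
)
print(nugget.claim)  # Professional tier is $1,199/month
```

Because the record carries its own verification date and topic, downstream products can filter and re-check facts by category.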

Where it’s used: Every product uses Truth Nuggets — Shield compares LLM responses against them, Brand Intelligence measures brand accuracy against them, Compliance proves governance against them, and Content Certification verifies claims against them.

Learn more: How It Works — Step 1: Define Your Truth

Cross-Checks

Definition: Automated queries sent to AI engines to test what they say about you.

TruthVouch continuously asks AI engines factual questions about your organization based on your Truth Nuggets. For example, if your Truth Nugget says you’re “founded in 2026,” TruthVouch might query ChatGPT with “When was TruthVouch founded?” and compare the response against your fact.

Cross-checks run on schedules you define: every 24 hours (default), every 1 hour (high frequency), or custom intervals. Each cross-check generates a response that’s scored against your Truth Nuggets for accuracy.
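The founding-date example can be sketched as a simple query-and-compare loop (`query_engine` is a stand-in for a real engine API call, not the TruthVouch implementation):

```python
# Illustrative cross-check: ask an AI engine about a fact and flag
# the response if it does not contain the verified value.

def query_engine(engine: str, question: str) -> str:
    # Placeholder: a real implementation would call the engine's API.
    canned = {"ChatGPT": "TruthVouch was founded in 2024."}
    return canned.get(engine, "")

def cross_check(engine: str, question: str, expected: str) -> bool:
    """Return True if the engine's answer contains the expected fact."""
    answer = query_engine(engine, question)
    return expected.lower() in answer.lower()

ok = cross_check("ChatGPT", "When was TruthVouch founded?", "2026")
print(ok)  # False: the canned answer says 2024, contradicting the nugget
```

In practice the comparison is semantic rather than a substring match (see Hallucinations below), but the scheduling and scoring flow follows this shape.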

Where it’s used: Shield uses cross-checks to monitor public AI engines (ChatGPT, Gemini, Perplexity, Claude). Brand Intelligence uses them to track how your brand is represented.

Learn more: Monitor What AI Says About Your Brand

Hallucinations

Definition: When an AI system generates information that contradicts your verified facts.

A hallucination isn’t necessarily a “false statement” in a general sense — it’s specifically a response from an AI engine that contradicts something you know to be true about your organization. For example, if your Truth Nugget says you were founded in 2026, but ChatGPT says you were founded in 2024, that’s a hallucination in the TruthVouch system.

Hallucinations are detected using AI-powered semantic analysis, which compares meaning rather than exact word matching.
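A toy illustration of why the comparison must be semantic rather than lexical: the two statements below agree, yet differ in wording. Production systems would use embedding models or an entailment classifier; the synonym table here is purely illustrative.

```python
# Lexical matching misses paraphrases; semantic comparison catches them.
SYNONYMS = {"established": "founded", "launched": "founded"}

def normalize(text: str) -> set:
    """Reduce a statement to a set of canonical words (toy normalizer)."""
    words = text.lower().replace(",", "").split()
    return {SYNONYMS.get(w, w) for w in words}

fact = "TruthVouch was founded in March 2026"
claim = "TruthVouch was established in March 2026"

# Lexically different, semantically matching:
print(normalize(fact) == normalize(claim))  # True
```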

Detection method: AI-powered verification across 7 AI models

Learn more: What is TruthVouch

Alerts

Definition: Notifications triggered when a hallucination is detected.

When Shield detects that an AI engine has generated a hallucination about your organization, it creates an alert. Alerts have severity levels:

  • Critical: Hallucinations about funding status, company closure, CEO changes, major product changes
  • High: Incorrect product features, outdated pricing, wrong founding date
  • Medium: Minor inaccuracies about capabilities or claims
  • Low: Small details or outdated but less material information

You can configure alert routing — email, Slack, Teams, PagerDuty, webhooks — and snooze or dismiss alerts. Critical alerts go to executives by default.
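A severity-to-channel routing table might be modeled like this (channel names and the default fallback are illustrative, not TruthVouch configuration keys):

```python
# Illustrative alert routing: map severity levels to delivery channels.
ROUTES = {
    "critical": ["email:exec-team", "pagerduty"],   # executives by default
    "high": ["slack:#brand-alerts", "email:marketing"],
    "medium": ["slack:#brand-alerts"],
    "low": ["digest"],  # batched into a periodic summary
}

def route_alert(severity: str) -> list:
    """Return the channels an alert of this severity should reach."""
    return ROUTES.get(severity, ["digest"])

print(route_alert("critical"))  # ['email:exec-team', 'pagerduty']
```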

Learn more: For CEOs & Founders

Corrections

Definition: Automatically generated verified information deployed to AI engines to fix hallucinations.

When a hallucination is detected, TruthVouch generates a “Neural Fact Sheet” — a structured correction document based on your Truth Nuggets that corrects the misinformation. This correction is submitted to the AI provider’s feedback mechanisms or knowledge update systems.

Corrections are deployed automatically. Each deployment is idempotent (safe to re-submit) and recorded in a tamper-proof audit trail, so every change is traceable.

Learn more: How It Works — Step 3: Detect & Protect

Trust Score

Definition: A 0-100 score measuring how accurately AI systems represent your organization.

Your overall Trust Score aggregates across all monitored AI engines and cross-checks. It reflects:

  • Percentage of cross-checks that accurately represent you
  • Severity of detected hallucinations
  • Recency of corrections
  • Trend over time

A score of 95-100 means AI engines consistently get facts right about you. A score below 70 indicates systematic misrepresentation that needs attention.
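As a toy illustration, the inputs listed above might combine into a single 0-100 score like this (the real weighting is not documented; the penalty values are invented):

```python
# Toy Trust Score aggregation: start from cross-check accuracy,
# then subtract penalties for open hallucinations by severity.
SEVERITY_PENALTY = {"critical": 20, "high": 10, "medium": 5, "low": 1}

def trust_score(accurate: int, total: int, open_hallucinations: list) -> float:
    base = 100.0 * accurate / total
    penalty = sum(SEVERITY_PENALTY[s] for s in open_hallucinations)
    return max(0.0, min(100.0, base - penalty))

# 95 of 100 cross-checks accurate, one open high-severity hallucination:
print(trust_score(95, 100, ["high"]))  # 85.0
```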

Trust Scores are broken down by:

  • Per-engine (ChatGPT, Gemini, Claude, Perplexity, etc.)
  • Per-topic (product knowledge, leadership, pricing, claims)
  • Historical trends (is accuracy improving or worsening?)

Learn more: Monitor What AI Says About Your Brand

Firewall (Governance Gateway)

Definition: A proxy that governs every LLM call in your applications in real time.

The Governance Firewall is an inline proxy that sits between your application and LLM providers (or between your LLM provider and end users). Every request and response passes through the Firewall, where governance policies are enforced:

  • Policy enforcement — Block or allow LLM calls based on declarative governance rules
  • PII masking — Remove sensitive data from prompts and responses
  • Injection detection — Detect prompt injection attempts
  • Content safety — Screen for harmful content
  • Audit logging — Record every request, response, and decision for compliance
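To make one of these concrete, here is a toy PII-masking pass of the kind a governance proxy might apply to prompts before they reach the LLM (the patterns are illustrative, not the Firewall's actual rules):

```python
import re

# Replace sensitive substrings with typed placeholders.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(mask_pii("Contact jane@example.com, SSN 123-45-6789"))
# Contact <EMAIL>, SSN <SSN>
```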

The Firewall can be deployed:

  • As an SDK — a 3-line code integration
  • As a self-hosted proxy — On-premise for air-gapped environments
  • Via the Sentinel Agent — Monitor employee AI tools at the network layer
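The SDK deployment mode could be sketched as a client that routes every LLM call through a policy check before forwarding it (all names here are hypothetical; consult the actual TruthVouch SDK documentation for the real API):

```python
# Hypothetical inline-governance client: every completion request
# passes a policy check before it would reach the LLM provider.
class GovernedLLM:
    def __init__(self, policy: dict):
        self.policy = policy

    def complete(self, prompt: str) -> str:
        if any(term in prompt.lower() for term in self.policy["blocked_terms"]):
            return "[blocked by governance policy]"
        # A real client would forward the prompt to the LLM provider here.
        return f"(response to: {prompt})"

llm = GovernedLLM(policy={"blocked_terms": ["ssn"]})
print(llm.complete("What is my SSN?"))  # [blocked by governance policy]
```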

Learn more: AI Governance and Govern Employee AI Usage

Sentinel Agent

Definition: An endpoint agent deployed on employee workstations to govern AI tool usage.

The Sentinel Agent is a lightweight binary (~10MB, no runtime dependencies) that intercepts employee AI tool traffic at the network layer. It monitors every employee interaction with ChatGPT, Copilot, Claude, Cursor, Perplexity, and other AI tools.

Sentinel enforces the same governance policies as the Firewall:

  • Allow/block specific AI tools by domain
  • Apply DLP (Data Loss Prevention) rules to mask PII
  • Enforce content safety policies
  • Generate per-user and per-team cost attribution
  • Log every interaction for audit trails
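Per-user cost attribution, for example, amounts to aggregating intercepted request metadata by user (the record shape below is illustrative):

```python
from collections import defaultdict

# Toy attribution: sum token usage per user from intercepted requests.
requests = [
    {"user": "alice", "tool": "chatgpt", "tokens": 1200},
    {"user": "bob", "tool": "copilot", "tokens": 800},
    {"user": "alice", "tool": "claude", "tokens": 500},
]

tokens_by_user = defaultdict(int)
for r in requests:
    tokens_by_user[r["user"]] += r["tokens"]

print(dict(tokens_by_user))  # {'alice': 1700, 'bob': 800}
```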

Sentinel is deployed via:

  • Windows: MSI installer, Group Policy, or Intune
  • macOS: PKG installer, Jamf, or Mosyle

Learn more: Govern Employee AI Usage

Compliance Scan

Definition: Automated assessment of your AI systems against regulatory frameworks.

A compliance scan inventories your AI systems and checks them against requirements from one or more regulatory frameworks (EU AI Act, ISO 42001, GDPR, HIPAA, SOC 2, etc.). Each scan:

  • Auto-discovers new AI systems you’re using
  • Maps to regulations — Associates each system with applicable frameworks and articles
  • Pulls control evidence — Runs queries against infrastructure connectors (AWS, Azure, GitHub, etc.)
  • Identifies gaps — Lists specific requirements you’re not meeting
  • Generates remediation tasks — Auto-creates tasks in Jira or ServiceNow to fix gaps
  • Exports audit reports — PDF, OSCAL, or NDJSON format ready for auditors

A full compliance scan across 50+ regulations takes under 20 minutes.
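The gap-identification step boils down to set difference between required and implemented controls. A minimal sketch (the control IDs and framework requirements here are made up for illustration):

```python
# Illustrative gap check: which required controls are not implemented?
REQUIRED = {
    "EU AI Act": {"risk-assessment", "human-oversight", "logging"},
    "SOC 2": {"logging", "access-control"},
}

def find_gaps(implemented: set, frameworks: list) -> dict:
    """Map each framework to its sorted list of missing controls."""
    return {
        fw: sorted(REQUIRED[fw] - implemented)
        for fw in frameworks
        if REQUIRED[fw] - implemented
    }

gaps = find_gaps({"logging", "access-control"}, ["EU AI Act", "SOC 2"])
print(gaps)  # {'EU AI Act': ['human-oversight', 'risk-assessment']}
```

Each entry in the result would then seed a remediation task in Jira or ServiceNow.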

Learn more: Comply with EU AI Act

GEO (Generative Engine Optimization)

Definition: Optimizing your web presence to be understood and represented accurately by AI engines.

GEO is like SEO but for AI. Instead of optimizing for Google’s search algorithm, you’re optimizing for AI engines to understand your organization correctly. A GEO audit scores your website across 12 dimensions:

  • Factual density — How many verifiable facts are on your pages?
  • Structured data — Are product specs, leadership, pricing marked up with schema.org?
  • Answer blocks — Do you provide TL;DR sections for key questions?
  • Internal linking — Are related facts linked to each other?
  • Content freshness — When was this last updated?
  • Heading hierarchy — Are facts organized in scannable sections?
  • Citation friendliness — Are claims backed by sources?
  • And 5 more dimensions…

GEO recommendations are ranked by impact and effort, and you can benchmark your site against 3-5 named competitors.
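One simple way the per-dimension scores above could roll up into an overall audit score is an average (the weighting here is an assumption, and the dimension scores are invented):

```python
# Toy GEO roll-up: average per-dimension scores into one audit score.
scores = {
    "factual_density": 72,
    "structured_data": 40,
    "answer_blocks": 55,
    "content_freshness": 90,
}

audit_score = sum(scores.values()) / len(scores)
print(audit_score)  # 64.25
```

Low-scoring dimensions (here, structured data) would surface first in the impact-ranked recommendations.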

Learn more: Monitor What AI Says About Your Brand

Content Certification

Definition: Cryptographic trust certificates for AI-generated content.

Content Certification lets you issue signed trust badges for AI-generated content. You submit content (blog post, product description, legal notice), TruthVouch extracts every factual claim, verifies them against your Truth Nuggets, and issues a cryptographically signed certificate with a TrustScore (0-100).

The certificate includes:

  • TrustScore — 0-100 score based on claim verification
  • Verified claims — List of facts verified as correct
  • Unverified claims — Facts not found in your Truth Nuggets
  • Timestamp — When the certificate was issued
  • Signature — Cryptographic proof the badge is genuine

Badges auto-revoke if underlying Truth Nuggets are updated and scores drift beyond a threshold.
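The issue-and-verify flow can be sketched with a signed payload. Real systems would typically use asymmetric signatures (e.g. Ed25519) so verifiers never hold the signing key; HMAC keeps this example dependency-free, and all field names are illustrative:

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # illustrative only; never hardcode real keys

def issue_certificate(content_id: str, trust_score: int) -> dict:
    """Sign a certificate payload over its canonical JSON form."""
    payload = {"content_id": content_id, "trust_score": trust_score}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return payload

def verify_certificate(cert: dict) -> bool:
    """Recompute the signature and compare in constant time."""
    body = json.dumps(
        {k: v for k, v in cert.items() if k != "signature"}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["signature"])

cert = issue_certificate("blog-post-42", 97)
print(verify_certificate(cert))  # True
```

Tampering with any field (say, inflating the TrustScore) invalidates the signature, which is what makes the badge trustworthy.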

Learn more: Certify AI-Generated Content

LLM Response Cache

Definition: Automatic caching that returns results for repeated queries in under 1ms, cutting LLM latency and cost by eliminating redundant LLM calls.
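A minimal sketch of the idea: key the cache on a hash of the (model, prompt) pair, and only call the LLM on a miss (this is a generic pattern, not the TruthVouch implementation):

```python
import hashlib

class ResponseCache:
    """Return stored responses for repeated (model, prompt) pairs."""

    def __init__(self):
        self._store = {}
        self.hits = 0

    def _key(self, model: str, prompt: str) -> str:
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get_or_call(self, model: str, prompt: str, call_llm) -> str:
        key = self._key(model, prompt)
        if key in self._store:
            self.hits += 1          # served from cache, no LLM call
            return self._store[key]
        self._store[key] = call_llm(prompt)
        return self._store[key]

cache = ResponseCache()
fake_llm = lambda p: f"answer:{p}"   # stand-in for a real LLM call
cache.get_or_call("gpt", "hi", fake_llm)
cache.get_or_call("gpt", "hi", fake_llm)  # repeat: served from cache
print(cache.hits)  # 1
```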

Neural Fact Sheet

Definition: A structured correction document submitted to AI providers when a hallucination is detected.

When Shield detects a hallucination, it automatically generates a Neural Fact Sheet — a structured correction document containing:

  • Corrected facts (from your Truth Nuggets)
  • Context and reasoning
  • Source attribution
  • Timestamp and signature

This document is submitted to the AI provider’s feedback mechanisms (ChatGPT feedback, Google Business Profile, etc.) so their training systems can be updated. Unlike manual corrections, Neural Fact Sheets are:

  • Automatically generated from your Truth Nuggets
  • Idempotent (safe to re-submit multiple times)
  • Tamper-proof (cryptographically signed)
  • Auditable (full history available)
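The idempotency property above can be sketched by deriving a deterministic ID from the sheet's content, so re-submitting the same correction is a no-op (the field names and ID scheme are illustrative):

```python
import hashlib
import json

def sheet_id(sheet: dict) -> str:
    """Deterministic ID from the sheet's canonical JSON form."""
    body = json.dumps(sheet, sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest()[:12]

submitted = set()

def submit(sheet: dict) -> bool:
    """Return True if newly submitted, False if already on record."""
    sid = sheet_id(sheet)
    if sid in submitted:
        return False
    submitted.add(sid)
    return True

sheet = {"corrected_fact": "Founded March 2026", "source": "truth-nugget:founding"}
print(submit(sheet), submit(sheet))  # True False
```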

Next Steps