Governance Getting Started
Get your first AI governance policy running in 15 minutes. This guide covers Truth Firewall deployment (the most common starting point) with automated policy enforcement and audit logging.

Prerequisites
- A TruthVouch account (Business tier or above)
- An application that calls OpenAI, Anthropic, or another LLM provider
- Python, TypeScript, or .NET (Python SDK available, others coming Q2 2026)
Step 1: Get Your API Key (2 minutes)
Log into TruthVouch and navigate to Settings → API Keys → Generate.
Select:
- Key Type: Live (for production) or Test (for development)
- Name: “Python SDK - Production” or similar
- Permissions: Default (read/write governance)
Copy the key. Format: tv_live_... or tv_test_...
Store it as an environment variable:

```shell
export TRUTHVOUCH_API_KEY="tv_live_..."
```

Step 2: SDK Libraries
TruthVouch SDKs are available for:
- Python: Source available (PyPI publication pending)
- TypeScript/Node.js: `npm install @truthvouch/sdk` (Coming Soon)
- .NET: Built (NuGet publish pending)
For now, use the REST API to integrate governance into your application.
Step 3: Integrate via REST API (5 minutes)
Send your LLM prompts and responses to TruthVouch for governance evaluation:
```shell
curl -X POST https://api.truthvouch.com/api/v1/governance/scan \
  -H "Authorization: Bearer tv_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "prompt": "What is the capital of France?",
    "response": "Paris is the capital of France"
  }'
```

Response:

```json
{
  "data": {
    "verdict": "allowed",
    "violations": [],
    "policiesEvaluated": 3,
    "latencyMs": 42
  }
}
```

That's it! Your LLM calls now pass through TruthVouch's governance pipeline.
Once SDKs are available in Q2 2026, you’ll be able to integrate with a single line of code in your application.
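Until then, the curl call above is easy to wrap yourself. A minimal sketch in Python, assuming the `requests` library is installed and using the endpoint, headers, and response shape shown in Step 3:

```python
import os

API_URL = "https://api.truthvouch.com/api/v1/governance/scan"

def build_scan_payload(model: str, prompt: str, response: str) -> dict:
    """Assemble the request body from the curl example above."""
    return {"model": model, "prompt": prompt, "response": response}

def scan(model: str, prompt: str, response: str) -> dict:
    """POST a prompt/response pair for governance evaluation.

    Returns the "data" object from the API response,
    e.g. {"verdict": "allowed", "violations": [], ...}.
    """
    import requests  # third-party: pip install requests

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['TRUTHVOUCH_API_KEY']}"},
        json=build_scan_payload(model, prompt, response),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]

# Usage (requires a valid key):
#   result = scan("gpt-4", "What is the capital of France?",
#                 "Paris is the capital of France")
#   if result["verdict"] != "allowed": ...
```

The function names here are illustrative, not part of any published SDK.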
Step 4: Define Your First Policy (3 minutes)
Navigate to AI Governance → Policies → New Policy.
Create a basic PII masking policy:
Name: "Block PII in Prompts"
Description: "Detect and redact personal information before sending to LLM"
Applies To: All models
Action: Block with message (don't send to LLM)
Rule (Rego):

```rego
package policies.pii

deny[msg] {
    contains_ssn
    msg := "Prompt contains SSN; please remove PII before submitting"
}

contains_ssn {
    re_match("\\d{3}-\\d{2}-\\d{4}", input.prompt)
}
```

Click Test Policy to validate with sample prompts:
Input: "Help me file taxes for John Doe, SSN 123-45-6789"
Result: BLOCKED - "Prompt contains SSN"

Input: "What's 2+2?"
Result: ALLOWED - No PII detected

Click Deploy to go live.
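Before deploying, you can also sanity-check the regex locally. A quick Python sketch using the same pattern as the Rego rule:

```python
import re

# Same SSN pattern used in the policy's Rego rule.
SSN_PATTERN = re.compile(r"\d{3}-\d{2}-\d{4}")

def contains_ssn(prompt: str) -> bool:
    """Return True if the prompt contains an SSN-shaped string."""
    return SSN_PATTERN.search(prompt) is not None

print(contains_ssn("Help me file taxes for John Doe, SSN 123-45-6789"))  # True
print(contains_ssn("What's 2+2?"))  # False
```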
Step 5: Monitor Your First Governance Events (3 minutes)
Navigate to AI Governance → Audit Trail.
You’ll see:
- Every LLM request that passed through TruthVouch
- Which policies were evaluated
- Whether the request was allowed or blocked
- Latency and timestamp
Example entry:
Timestamp: 2024-03-15 09:45:23 UTC
Model: gpt-4
Prompt: "What's 2+2?"
Policies Evaluated:
  ✓ PII Check: PASSED
  ✓ Content Safety: PASSED
  ✓ Model Approved: PASSED
Decision: ALLOWED
Latency: 42ms

Common First Policies
1. Block Sensitive Data
```rego
package policies.data_protection

deny[msg] {
    # Block credit cards
    re_match("\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}[\\s-]?\\d{4}", input.prompt)
    msg := "Credit card numbers not allowed"
}

deny[msg] {
    # Block API keys
    re_match("[a-z0-9_]{32,}", input.prompt)
    contains_api_key_pattern
    msg := "API keys not allowed"
}

contains_api_key_pattern {
    contains(input.prompt, "api_key")
}
```

2. Enforce Model Whitelist
```rego
package policies.model_approval

import future.keywords.in

deny[msg] {
    not is_approved_model(input.model)
    msg := sprintf("Model %v is not approved", [input.model])
}

is_approved_model(model) {
    approved := ["gpt-4", "gpt-4-turbo", "claude-3-sonnet"]
    model in approved
}
```

3. Content Safety
```rego
package policies.content_safety

deny[msg] {
    input.response.category == "violent"
    input.response.score > 0.7
    msg := "Response contains violent content; not sending to user"
}

deny[msg] {
    input.response.category == "hateful_speech"
    msg := "Response contains hateful content; not sending to user"
}
```

4. Token Usage Limits
```rego
package policies.rate_limit

deny[msg] {
    input.tokens_used > 10000
    msg := "Token limit exceeded for this request"
}

# Per-user limit
deny[msg] {
    input.user_id == heavy_user
    input.tokens_used > 1000
    msg := "User token limit exceeded"
}

heavy_user := "user-123"
```

Testing Before Deployment
Policy Playground:
Navigate to AI Governance → Policies → [Policy] → Test.
Submit sample prompts and responses:
Test Case 1: "Hiring Details"
Prompt: "Help hire person with SSN 123-45-6789"
Expected: BLOCKED
Result: ✓ BLOCKED

Test Case 2: "Math Question"
Prompt: "What's 2+2?"
Expected: ALLOWED
Result: ✓ ALLOWED

Test Case 3: "API Key"
Prompt: "My API key is sk_live_abc123xyz"
Expected: BLOCKED
Result: ✓ BLOCKED

All tests pass → Ready to deploy.
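The playground cases above can also be scripted against the scan endpoint from Step 3. A hedged sketch, assuming `requests` is installed and the response shape shown earlier; the helper names are illustrative:

```python
import os

API_URL = "https://api.truthvouch.com/api/v1/governance/scan"

# (name, prompt, expected verdict) — mirrors the playground test cases.
TEST_CASES = [
    ("Hiring Details", "Help hire person with SSN 123-45-6789", "blocked"),
    ("Math Question", "What's 2+2?", "allowed"),
    ("API Key", "My API key is sk_live_abc123xyz", "blocked"),
]

def check(verdict: str, expected: str) -> bool:
    """Compare a scan verdict against the expected outcome."""
    return verdict.lower() == expected

def run_suite():
    """POST each test case and print PASS/FAIL per the returned verdict."""
    import requests  # third-party: pip install requests

    headers = {"Authorization": f"Bearer {os.environ['TRUTHVOUCH_API_KEY']}"}
    for name, prompt, expected in TEST_CASES:
        data = requests.post(
            API_URL,
            headers=headers,
            json={"model": "gpt-4", "prompt": prompt, "response": ""},
            timeout=10,
        ).json()["data"]
        status = "PASS" if check(data["verdict"], expected) else "FAIL"
        print(f"{status}: {name} -> {data['verdict']}")

# Usage (requires a valid key and deployed policies):
#   run_suite()
```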
Monitoring Policies
After deploying:
- Check audit trail daily for first week
- Look for false positives (valid requests being blocked)
- Adjust policy sensitivity if needed
- Review blocked requests to refine rules
Example audit review:
Day 1: 1,250 requests, 0 blocked → Policy not triggering; verify it's deployed and working
Day 2: 1,200 requests, 5 blocked → Seems reasonable (0.4% block rate)
Day 3: 1,300 requests, 45 blocked (3.5%) → Too many blocks, policy too strict, adjust
Day 4: 1,250 requests, 3 blocked → Good balance, policy is right

Typical Issues & Fixes
“SDK integration failing”
Check:
- API key is valid: `echo $TRUTHVOUCH_API_KEY`
- Environment variable name is correct in code
- `fallback_to_direct: true` is set (allows fallback if TruthVouch is unreachable)
- SDK is latest version: `pip install --upgrade truthvouch`
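A quick local sanity check on the key itself can rule out the most common misconfiguration. A sketch, using the `tv_live_`/`tv_test_` prefixes from Step 1 (the function name is illustrative):

```python
import os

def check_key(key: str) -> str:
    """Return "missing", "malformed", or "ok" for a TruthVouch API key string."""
    if not key:
        return "missing"
    if not key.startswith(("tv_live_", "tv_test_")):
        return "malformed"
    return "ok"

print(check_key(os.environ.get("TRUTHVOUCH_API_KEY", "")))
```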
“Latency is high”
Check:
- TruthVouch gateway latency: Check status page
- Network connectivity: Ensure low-latency connection to gateway
- Policy complexity: Simplify regex patterns if needed
“Policy not blocking anything”
Check:
- Test policy in playground first
- Check if policy is actually deployed (green “Deployed” badge)
- Verify rule syntax with TruthVouch support
Next Steps
- Firewall Concepts → Understand the 17-stage pipeline
- Policy Writing → Deep dive into Rego syntax
- PII Masking → Advanced data protection
- Sentinel Agent → Monitor employee AI tool usage
- Audit Trail → Query and export logs