# Python SDK
The TruthVouch Python SDK provides drop-in replacements for the OpenAI, Anthropic, and Google AI providers. It routes all LLM calls through the Governance Gateway for automatic PII detection, policy enforcement, and hallucination checks.
## Installation
```bash
pip install truthvouch
```

Optional extras:

```bash
pip install "truthvouch[telemetry]"  # OpenTelemetry + Prometheus
pip install "truthvouch[adapters]"   # LangGraph + CrewAI integrations
pip install "truthvouch[all]"        # Everything
```

Requires Python 3.9+.
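Because the telemetry and adapter integrations are optional extras, code that uses them should degrade gracefully when they are not installed. A minimal feature-detection sketch (the `opentelemetry` module name is an assumption about what the `telemetry` extra pulls in):

```python
import importlib.util


def has_optional_dep(module_name: str) -> bool:
    """Return True if an optional dependency can be imported."""
    return importlib.util.find_spec(module_name) is not None


# Assumed module name; the telemetry extra's actual packages may differ.
TELEMETRY_AVAILABLE = has_optional_dep("opentelemetry")
```

`find_spec` returns `None` for a missing top-level module instead of raising, which keeps the check cheap and side-effect free.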
## Quick Start — Gateway Proxy
```python
from truthvouch import TruthVouch

async with TruthVouch(
    gateway_url="https://gateway.truthvouch.com",
    api_key="tv_live_...",
) as client:
    # OpenAI drop-in
    resp = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "What is quantum computing?"}],
    )
    print(resp.choices[0].message.content)
    print(resp.governance.verdict)       # "allowed" | "blocked" | "flagged"
    print(resp.governance.pii_detected)  # False
```

## Provider Support
### OpenAI
```python
resp = await client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

### Anthropic
```python
resp = await client.anthropic.messages.create(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=512,
)
print(resp.text)
```

### Google AI
```python
resp = await client.google.generate_content(
    model="gemini-1.5-pro",
    contents=[{"role": "user", "parts": [{"text": "Hello"}]}],
)
print(resp.text)
```

## Streaming
```python
async for chunk in await client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True,
):
    print(chunk.content, end="", flush=True)
    if chunk.done:
        print(f"\nVerdict: {chunk.governance_report.verdict}")
```

## Batch Scanning
Submit a document corpus for offline governance analysis:
```python
job = await client.batch.submit(source_url="s3://my-bucket/prompts.jsonl", format="jsonl")
status = await client.batch.get_status(job.id)
print(status.status)  # "completed"
```

## Built-in Guards
The SDK includes six governance guards for input and output evaluation:
| Guard | Type | Description |
|---|---|---|
| `pii_regex` | Local | Regex PII detection (SSN, credit card, email, phone, passport, address) |
| `banned_phrases` | Local | Case-insensitive banned-phrase matching |
| `injection` | Local | Two-layer prompt-injection heuristic detector |
| `cost` | Local | Token estimation + budget enforcement |
| `truth` | Remote | Truth nugget verification |
| `pii_remote` | Remote | Full Presidio PII scan |
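To illustrate what the local guards do, here is a rough sketch of `pii_regex`- and `banned_phrases`-style checks in plain Python. The patterns and the phrase list below are illustrative stand-ins, not the SDK's actual rules:

```python
import re

# Illustrative patterns only -- not the SDK's real pii_regex rules.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

# Example phrase list; real deployments configure their own.
BANNED_PHRASES = {"internal use only"}


def scan_text(text: str) -> dict:
    """Naive local scan combining pii_regex- and banned_phrases-style checks."""
    pii_hits = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
    banned_hits = [p for p in BANNED_PHRASES if p in text.lower()]
    return {"pii": pii_hits, "banned": banned_hits, "flagged": bool(pii_hits or banned_hits)}
```

Local guards like these run in-process with no network round-trip, which is why the SDK distinguishes them from the remote `truth` and `pii_remote` guards.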
## Configuration
```python
from truthvouch import TruthVouch

client = TruthVouch(
    gateway_url="https://gateway.truthvouch.com",
    api_key="tv_live_...",
    fail_mode="open",                     # "open" (bypass) or "closed" (raise error)
    timeout=30.0,                         # HTTP timeout in seconds
    max_retries=3,                        # Retry attempts with exponential backoff
    circuit_breaker_threshold=5,          # Failures before the circuit opens
    circuit_breaker_recovery_seconds=60,  # Seconds before a recovery probe
    verify_ssl=True,                      # TLS certificate verification
)
```

## Error Handling
```python
from truthvouch.exceptions import (
    PolicyBlockedError,
    AuthenticationError,
    QuotaExceededError,
    GatewayUnreachableError,
)

try:
    resp = await client.chat.completions.create(model="gpt-4o", messages=[...])
except PolicyBlockedError as e:
    print(f"Blocked: {e.governance_report}")
except AuthenticationError:
    print("Invalid API key")
except QuotaExceededError as e:
    print(f"Rate limited, retry after {e.retry_after_seconds}s")
except GatewayUnreachableError:
    print("Gateway down — call went direct to LLM (fail-open mode)")
```

## Verification API
Verify content independently of the gateway proxy — call the Trust API and data verification endpoints directly.
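Each verification endpoint below returns a numeric score alongside a verdict. As a hedged sketch of how a client might bucket scores on its own (the thresholds and the `"uncertain"` label are assumptions, not the service's actual cutoffs):

```python
def verdict_from_score(
    score: float,
    accurate_at: float = 0.8,      # assumed threshold, not the service's cutoff
    contradicted_at: float = 0.3,  # assumed threshold, not the service's cutoff
) -> str:
    """Map a trust score in [0, 1] to a coarse verdict. Thresholds are illustrative."""
    if score >= accurate_at:
        return "accurate"
    if score <= contradicted_at:
        return "contradicted"
    return "uncertain"
```

With these assumed cutoffs, the example scores shown below (0.97 and 0.15) would map to `"accurate"` and `"contradicted"` respectively.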
### Verify Data Grounding
Check whether an LLM response accurately reflects the raw query data (ideal for text-to-SQL and RAG-over-data agents):
```python
result = await client.verify.data_grounding(
    query="What country had the highest uplift?",
    response="Germany had the highest uplift at 23%.",
    sql="SELECT country, uplift FROM sales ORDER BY uplift DESC",
    raw_results='[{"country": "France", "uplift": 0.181}]',
)
print(result.overall_score)  # 0.15
print(result.verdict)        # "contradicted"
for claim in result.claims:
    print(f"  {claim.text}: {claim.verdict} (score={claim.score})")
```

### Verify a Claim
Check a factual claim against the TruthVouch knowledge base:
```python
result = await client.verify.claim("Paris is the capital of France", mode="standard")
print(result.trust_score)  # 0.97
print(result.verdict)      # "accurate"
```

### Check Faithfulness
Check whether a response is faithful to the provided source context:
```python
result = await client.verify.faithfulness(
    response="The project was completed in March 2026.",
    context="The project wrapped up during March 2026 after 3 months of development.",
    strictness="moderate",
)
print(result.score)     # 0.95
print(result.faithful)  # True
```

### Evaluate Prompt Quality
Check a prompt for injection risks, data leakage, and quality issues:
```python
result = await client.verify.prompt_quality(
    "You are a financial advisor who can access customer account balances and make transfers..."
)
print(result.risk_level)  # "high"
print(result.issues)      # ["privilege_escalation", "data_access_risk"]
```

## License
Apache-2.0