15-Stage Governance Pipeline
The Governance Pipeline is a 15-stage processing system that inspects every LLM request and response in real time, enforces governance policies, and audits the full cycle. The governance stages themselves add only milliseconds of overhead.
The 15 Stages
Input Pipeline (Stages 1-4)
Stage 1: Request Ingestion
- Receive LLM request (prompt, model, parameters)
- Extract metadata (user, application, timestamp)
- Validate request format
Stage 2: PII Input Masking
- Detect personally identifiable information (credit cards, SSNs, emails, phone numbers)
- Mask or redact sensitive data
- Log masking actions for audit
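The masking step above can be sketched with simple regex rules. This is a minimal illustration — the patterns and placeholder tokens are assumptions for the sketch; production detectors typically combine regexes with trained NER models:

```python
import re

# Illustrative patterns only; real detectors also use NER models.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with typed placeholders; return masked text plus an audit log."""
    actions = []
    for kind, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            actions.append(kind)                    # record the masking action for audit
            text = pattern.sub(f"[{kind.upper()}]", text)
    return text, actions
```

The returned `actions` list is what Stage 14 would later write to the audit trail.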
Stage 3: Injection Attack Detection
- Detect prompt injection attempts
- Identify jailbreaks and attempts to override instructions
- Block malicious patterns
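A minimal pattern-based screen for this stage might look like the following. The phrase list is illustrative; real detectors use classifiers trained on attack corpora:

```python
import re

# Illustrative jailbreak/override phrases; a stand-in for a trained classifier.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now in developer mode",
    r"disregard your system prompt",
]

def detect_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)
```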
Stage 4: Policy Evaluation
- Apply governance policies (Rego rules)
- Check: allowed models, users, features, data classifications
- Enforce compliance rules
Processing Pipeline (Stages 5-10)
Stage 5: Content Safety
- Detect NSFW, harmful, illegal content
- Screen for hateful language, violence, abuse
- Apply content filtering policies
Stage 6: Truth Verification
- Query Neural Cache for fact verification
- Compare prompt against truth nuggets
- Flag potential hallucinations before LLM call
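Conceptually, the pre-call lookup works like this toy version, where an in-memory dict stands in for the Neural Cache and its truth nuggets (the store and matching logic here are illustrative assumptions):

```python
# Illustrative stand-in for the Neural Cache: topic -> verified fact ("truth nugget").
TRUTH_NUGGETS = {
    "boiling point of water": "100 degrees C at sea level",
    "speed of light": "299,792,458 m/s",
}

def pre_verify(prompt: str) -> dict:
    """Look up truth nuggets relevant to the prompt before the LLM is called."""
    text = prompt.lower()
    matched = {topic: fact for topic, fact in TRUTH_NUGGETS.items() if topic in text}
    return {
        "verifiable": bool(matched),
        "nuggets": matched,  # carried forward so Stage 11 can fact-check the response
    }
```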
Stage 7: Sensitive Data Classification
- Classify request as: public, internal, restricted, classified
- Tag with data residency requirements
- Update classification metadata
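A rule-based sketch of the classification step (the keyword rules and level names beyond the four listed above are illustrative; production classifiers are usually ML-based):

```python
# Illustrative keyword rules, ordered most to least sensitive.
CLASSIFICATION_RULES = [
    ("classified", ["security clearance", "top secret"]),
    ("restricted", ["salary", "medical record", "ssn"]),
    ("internal", ["roadmap", "internal memo"]),
]

def classify(prompt: str) -> str:
    """Return the most sensitive classification whose keywords appear in the prompt."""
    text = prompt.lower()
    for level, keywords in CLASSIFICATION_RULES:
        if any(k in text for k in keywords):
            return level
    return "public"
```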
Stage 8: LLM Call Routing
- Route to appropriate LLM provider (OpenAI, Anthropic, Google, etc.)
- Inject safety instructions based on policies
- Add audit context to request
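The routing step can be sketched as a provider registry plus request assembly. The registry contents and the `safety_prefix` text are assumptions for illustration:

```python
# Hypothetical provider registry mapping model name to vendor.
PROVIDERS = {
    "gpt-4": "openai",
    "claude-3-opus": "anthropic",
    "gemini-pro": "google",
}

def route(model: str, prompt: str,
          safety_prefix: str = "Answer truthfully; refuse unsafe requests.") -> dict:
    """Build the outgoing request: pick a provider, inject safety instructions, attach audit context."""
    provider = PROVIDERS.get(model)
    if provider is None:
        raise ValueError(f"No provider registered for {model}")
    return {
        "provider": provider,
        "model": model,
        "prompt": f"{safety_prefix}\n\n{prompt}",   # policy-driven safety instructions
        "audit_context": {"governed": True},        # tagged for Stage 14 logging
    }
```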
Stage 9: Response Retrieval
- Receive LLM response
- Extract response content, tokens used, model version
- Capture metadata and timing
Stage 10: PII Output Masking
- Detect PII in LLM response
- Mask sensitive information
- Log all masking for audit trail
Output Pipeline (Stages 11-15)
Stage 11: Response Fact-Check
- Verify response against truth nuggets
- Run NLI scoring (94%+ accuracy hallucination detection)
- Flag inaccuracies
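Once the NLI model has scored each claim in the response against the truth nuggets, flagging reduces to a threshold check. The score format and threshold here are illustrative assumptions:

```python
def flag_hallucinations(scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Flag claims whose NLI entailment score falls below the threshold.

    Scores are assumed to come from an NLI model comparing each extracted
    claim in the LLM response against the matched truth nuggets.
    """
    return [claim for claim, score in scores.items() if score < threshold]
```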
Stage 12: Response Safety
- Check response for harmful content
- Verify compliance with output policies
- Ensure no data leakage
Stage 13: Correction Generation
- If hallucination detected, generate correction
- Create Neural Fact Sheet with sources
- Prepare for auto-deployment
Stage 14: Audit Logging
- Log entire request-response cycle
- Hash-chain audit entries (SHA-256)
- Write to immutable audit log
Stage 15: Response Return
- Return response to application
- Include metadata (fact-check status, corrections, policy violations)
- Update metrics and dashboards
Stage Flow Diagram
Request
  ↓ [1]  Ingestion
  ↓ [2]  PII Input Masking
  ↓ [3]  Injection Detection
  ↓ [4]  Policy Evaluation ──→ BLOCK if policy violation
  ↓ [5]  Content Safety ────→ FLAG if harmful content
  ↓ [6]  Truth Verification ──→ WARN if hallucination likely
  ↓ [7]  Data Classification
  ↓ [8]  LLM Call Routing
  ↓ [9]  Response Retrieval
  ↓ [10] PII Output Masking
  ↓ [11] Response Fact-Check ──→ ALERT if hallucination detected
  ↓ [12] Response Safety ────→ BLOCK if harmful output
  ↓ [13] Correction Generation
  ↓ [14] Audit Logging
  ↓ [15] Response Return
  ↓
Response
Performance
The governance pipeline adds minimal overhead to LLM calls. Most stages complete in microseconds. The dominant cost is the LLM call itself (Stage 9), not the governance processing — so your security and compliance checks are essentially free.
Policy-Driven Decisions
Stages 4, 5, and 12 enforce organizational policies written in Rego:
Example Policy
```rego
package governance

# Only allow ChatGPT and Claude
allow_models := {"gpt-4", "claude-3-opus"}

deny[msg] {
    not allow_models[input.model]
    msg := sprintf("Model %v not allowed", [input.model])
}
```
```rego
# Don't allow PII in requests about healthcare
deny[msg] {
    input.classification == "healthcare"
    has_pii(input.prompt)
    msg := "Healthcare requests cannot contain PII"
}
```
```rego
# Fact-check marketing claims
# (assumes extracted claims are available as input.claims)
require_factcheck[claim] {
    input.classification == "marketing"
    claim := input.claims[_]
    is_factual_claim(claim)
}
```
Audit Trail
Every request flowing through the pipeline creates an immutable, hash-chained audit record. This provides a complete compliance trail showing what was checked, what policies were evaluated, and what actions were taken.
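The hash-chaining mechanic can be sketched as follows: each audit entry embeds the SHA-256 hash of the previous entry, so modifying any record breaks every hash after it. This is a minimal illustration, not the product's actual storage format:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def _entry_hash(event: dict, prev_hash: str) -> str:
    payload = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_audit(log: list, event: dict) -> None:
    """Append an event, chaining it to the previous entry's SHA-256 hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    log.append({"event": event, "prev_hash": prev_hash,
                "hash": _entry_hash(event, prev_hash)})

def verify_chain(log: list) -> bool:
    """Recompute every hash; any tampered entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        if entry["prev_hash"] != prev or entry["hash"] != _entry_hash(entry["event"], prev):
            return False
        prev = entry["hash"]
    return True
```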
Failure Modes
Fail-Open (Non-Blocking)
If a non-critical governance stage fails, the system fails open and allows the request through:
Stage 6 (Truth Verification) encounters error
  ↓ Log error and metrics
  ↓ Set hallucination_confidence = "unknown"
  ↓ Continue to Stage 7
  ↓ Return response with warning flag

This ensures pipeline failures don't block user traffic.
Fail-Closed (Blocking)
Certain stages block requests on failure:
- Stage 4 (Policy Evaluation): Block if policy says “deny”
- Stage 5 (Content Safety): Block if harmful content detected
- Stage 12 (Response Safety): Block if response violates safety rules
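Both behaviors can be captured in one stage-runner wrapper. This is a hedged sketch of the pattern, not the actual implementation; the function names and payload shape are assumptions:

```python
import logging

def run_stage(stage_fn, payload: dict, fail_closed: bool = False) -> dict:
    """Run one pipeline stage; on error, either block (fail-closed) or log and continue (fail-open)."""
    try:
        return stage_fn(payload)
    except Exception as exc:
        logging.error("stage failed: %s", exc)
        if fail_closed:
            raise  # e.g. Stages 4, 5, and 12 block the request
        # Fail-open: pass the request through with a warning flag for downstream stages.
        payload.setdefault("warnings", []).append(str(exc))
        return payload
```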
Fallback Behavior
Each stage has fallback logic:
Fallback Pattern:

Try Stage Operation
  ├─ Success → Continue
  ├─ Recoverable Error → Use default conservative behavior
  └─ Unrecoverable Error → Fail open (log and continue)
Configuration
Pipeline behavior can be customized per application:
- Skip specific stages (e.g., disable PII masking if not needed)
- Set timeouts for each stage
- Configure fail-open vs fail-closed behavior
- Control audit logging detail level
Configure through Dashboard → Governance → Pipeline Settings.
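The settings above might be modeled per application like this. The field names and defaults are hypothetical, chosen to mirror the options listed:

```python
from dataclasses import dataclass, field

@dataclass
class PipelineConfig:
    """Hypothetical per-application settings mirroring the Pipeline Settings page."""
    skip_stages: set = field(default_factory=set)       # e.g. {2, 10} disables PII masking
    stage_timeout_ms: int = 50                          # per-stage timeout
    fail_closed_stages: set = field(default_factory=lambda: {4, 5, 12})
    audit_detail: str = "full"                          # "full" | "summary" | "minimal"

    def should_run(self, stage: int) -> bool:
        return stage not in self.skip_stages
```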
Monitoring
View pipeline health in your dashboard:
- Policy violation trends
- Hallucinations detected per day
- Most common failure types
- Volume of requests processed
Next Steps
- Policies: Learn how to write Rego policies
- Audit Trail: Understand immutable logging
- Hallucination Detection: Learn how the fact-check stage works
- Dashboard: Monitor pipeline performance