Governance Pipeline

The Governance Pipeline is a multi-stage processing system that inspects every LLM request and response in real time, enforces your policies on it, and audits the full exchange. All stages complete in milliseconds, adding negligible overhead to your LLM calls.

Three Processing Phases

Phase 1: Input Protection

Before any request reaches an LLM, the pipeline:

  • Masks sensitive data — Detects and redacts personally identifiable information (credit cards, SSNs, emails, phone numbers) from prompts
  • Blocks injection attacks — Identifies prompt injection attempts, jailbreaks, and malicious patterns
  • Enforces organizational policies — Evaluates requests against your governance rules (allowed models, users, features, data classifications)
  • Screens for harmful content — Detects NSFW, hateful, violent, or illegal content and applies content filtering policies
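The Phase 1 steps above can be sketched in code. This is an illustrative Python sketch, not the product's implementation: the PII patterns, injection markers, and `protect_input` function are all hypothetical stand-ins for the pipeline's detectors.

```python
import re

# Illustrative PII patterns; the real pipeline's detectors are more robust.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Illustrative injection markers (stand-ins for pattern-based detection).
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
]

def mask_pii(prompt: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}_REDACTED]", prompt)
    return prompt

def check_injection(prompt: str) -> bool:
    """Return True if a known injection marker appears in the prompt."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def protect_input(prompt: str, allowed_models: set, model: str) -> dict:
    """Run the input-protection steps in order: policy, injection, masking."""
    if model not in allowed_models:
        return {"action": "block", "reason": "model_not_allowed"}
    if check_injection(prompt):
        return {"action": "block", "reason": "prompt_injection"}
    return {"action": "allow", "prompt": mask_pii(prompt)}
```

A blocked request never reaches the LLM; an allowed request goes forward with its sensitive spans already redacted.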

Phase 2: Processing

During the LLM interaction, the pipeline:

  • Classifies data sensitivity — Tags requests as public, internal, restricted, or classified, along with any data residency requirements
  • Routes intelligently — Directs requests to the appropriate LLM provider (OpenAI, Anthropic, Google, and more) with safety instructions injected based on policies
  • Captures metadata — Records response content, token usage, model version, and timing for audit and analytics
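To make the routing step concrete, here is a hedged sketch of tier-based provider selection with policy-driven safety instructions. The tier table, provider names in it, and `route` function are illustrative assumptions, not the product's actual routing logic.

```python
from dataclasses import dataclass

@dataclass
class RoutingDecision:
    provider: str
    model: str
    system_prefix: str  # safety instructions injected per policy

# Hypothetical policy table: which providers may handle each sensitivity
# tier, and which safety instructions get prepended to the prompt.
TIER_POLICY = {
    "public":     {"providers": ["openai", "anthropic", "google"], "prefix": ""},
    "internal":   {"providers": ["openai", "anthropic"],
                   "prefix": "Do not reveal internal data."},
    "restricted": {"providers": ["anthropic"],
                   "prefix": "Treat all content as confidential."},
}

def route(tier: str, preferred: str, model: str) -> RoutingDecision:
    """Honor the caller's preferred provider when policy allows it,
    otherwise fall back to the first provider permitted for the tier."""
    policy = TIER_POLICY[tier]
    provider = preferred if preferred in policy["providers"] else policy["providers"][0]
    return RoutingDecision(provider=provider, model=model,
                           system_prefix=policy["prefix"])
```

A restricted request preferring a disallowed provider is transparently re-routed; the injected prefix travels with the request as its safety instruction.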

Phase 3: Output Verification

After the LLM responds, the pipeline:

  • Verifies factual accuracy — Compares responses against your truth nuggets using AI-powered hallucination detection
  • Checks output safety — Screens responses for harmful content, data leakage, and policy violations
  • Generates corrections — When hallucinations are detected, automatically creates Neural Fact Sheets with sourced corrections
  • Masks output PII — Detects and redacts sensitive information in LLM responses
  • Logs for compliance — Creates a tamper-proof audit record of the entire request-response cycle
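The fact-verification step can be gestured at with a toy checker. The real pipeline uses AI-powered hallucination detection against your truth nuggets; this sketch substitutes naive keyword matching, and the function names and record shapes are hypothetical.

```python
def find_contradictions(response: str, truth_nuggets: dict) -> list:
    """truth_nuggets maps a fact keyword to its approved wording.
    Flag any keyword mentioned in the response whose approved
    wording is absent (a crude stand-in for semantic comparison)."""
    issues = []
    lowered = response.lower()
    for key, approved in truth_nuggets.items():
        if key.lower() in lowered and approved.lower() not in lowered:
            issues.append({"fact": key, "expected": approved})
    return issues

def draft_fact_sheet(issues: list) -> dict:
    """Sketch of a correction record with sourced values
    (the product calls these Neural Fact Sheets)."""
    return {"corrections": [
        {"claim": issue["fact"], "sourced_value": issue["expected"]}
        for issue in issues
    ]}
```

When the checker flags a claim, the correction record carries the approved wording so downstream consumers can surface it alongside the response.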

Performance

The governance pipeline adds minimal overhead to LLM calls: governance stages complete in milliseconds, while the LLM call itself typically takes hundreds of milliseconds or more. The dominant cost is the model, not the governance processing, so your security and compliance checks add little to end-to-end latency.
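You can verify this relationship yourself by timing the two costs separately. The sketch below uses a trivial stand-in check and a simulated LLM call; both functions are hypothetical placeholders, not the pipeline's API.

```python
import time

def timed(fn, *args):
    """Return (result, elapsed milliseconds) for a single call."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000

def governance_check(prompt: str) -> bool:
    # Stand-in for the pipeline's in-process checks (CPU-bound, fast).
    return "password" not in prompt.lower()

def fake_llm_call(prompt: str) -> str:
    # Stand-in for network round-trip plus model inference latency.
    time.sleep(0.05)
    return "response"

_, gov_ms = timed(governance_check, "summarize this report")
_, llm_ms = timed(fake_llm_call, "summarize this report")
```

In this simulation, as in production, the model call dominates the total latency.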

Policy-Driven Decisions

Multiple stages enforce organizational policies written in a declarative policy language. You can define rules that control:

  • Which LLM models are allowed
  • Who can access which AI capabilities
  • How sensitive data is classified and handled
  • When fact-checking is required (e.g., for marketing claims)
  • Content safety thresholds

Full documentation and examples are available in the policy authoring guide.

Audit Trail

Every request flowing through the pipeline creates an immutable, tamper-proof audit record. This provides a complete compliance trail showing what was checked, what policies were evaluated, and what actions were taken.
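One common way to make an audit log tamper-evident is hash chaining: each record embeds the hash of the previous record, so altering any entry invalidates every later hash. The sketch below shows that general technique; the product's actual mechanism is not specified here, and the class and field names are illustrative.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each record is chained to its predecessor
    by a SHA-256 hash, so any modification is detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._last_hash = self.GENESIS

    def append(self, entry: dict) -> str:
        record = {"entry": entry, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.records.append(record)
        self._last_hash = digest
        return digest

    def verify(self) -> bool:
        """Re-derive every hash; any edit to an entry, or any break
        in the chain, makes verification fail."""
        prev = self.GENESIS
        for record in self.records:
            body = {"entry": record["entry"], "prev": record["prev"]}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev"] != prev or record["hash"] != digest:
                return False
            prev = record["hash"]
        return True
```

Because each hash covers both the entry and the previous hash, an attacker cannot rewrite one record without recomputing the entire chain after it.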

Configuration

Pipeline behavior can be customized per application:

  • Enable or disable specific processing steps (e.g., skip PII masking if not needed)
  • Set timeouts for each phase
  • Control audit logging detail level

Configure through Dashboard > Governance > Pipeline Settings.
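Conceptually, per-application settings are overrides layered on organization-wide defaults. This is an illustrative sketch of that merge; the setting names mirror the options above but are hypothetical, and actual configuration happens in the dashboard, not in code.

```python
# Hypothetical organization-wide defaults for the three setting groups.
DEFAULT_CONFIG = {
    "steps": {"pii_masking": True, "injection_check": True, "fact_check": True},
    "timeouts_ms": {"input": 50, "processing": 30000, "output": 200},
    "audit_detail": "full",  # "full" | "summary" | "minimal"
}

def app_config(overrides: dict) -> dict:
    """Merge per-app overrides onto the defaults without mutating them
    (shallow merge within each settings section)."""
    config = {k: (dict(v) if isinstance(v, dict) else v)
              for k, v in DEFAULT_CONFIG.items()}
    for section, values in overrides.items():
        if isinstance(values, dict):
            config[section].update(values)
        else:
            config[section] = values
    return config
```

An app that doesn't handle personal data could, for example, disable PII masking and reduce audit verbosity while inheriting every other default.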

Monitoring

View pipeline health in your dashboard:

  • Policy violation trends
  • Hallucinations detected per day
  • Most common failure types
  • Volume of requests processed

Next Steps