4 Independent Layers · BLOCK-Capable · Not Detection-Only

Verify Every Output
Before It Reaches Users

Four independent verification layers. A response must pass all four — bypassing one does not bypass the others.

Most AI guardrails detect-and-alert after bad output ships. G8KEPR's Verification Engine is BLOCK-capable — sitting at middleware position 14, after generation and before delivery. Conversation integrity, source grounding, tool-call structure, and output constraints — each evaluated independently with explicit failure modes.

4-Layer Pipeline
BLOCK + SUGGEST Actions
Citation Grounding
Two-Pass Schema
Hash-Chain Drift Detect
4
Verification layers
independent · all required
6
Failure modes
PASSED · WARNED · FAILED · …
4
Action priorities
ALLOW · WARN · BLOCK · SUGGEST
5
Constraint types
numerical · logical · …
15
Path prefixes
MCP · API · audit · gateway
53
API routes
verification sub-router
#14
Middleware position
after gen · before deliver
BLOCK
Capable, not detect-only
fail-closed Stage 1 paths

What is the Verification Engine?

The last line of defense between AI outputs and your users

Output Validation at Scale

Every AI response, API output, and MCP tool result passes through the Verification Engine — after generation, before delivery. Four independent layers validate structure, grounding, tool-call shape, and output constraints. Bypass-resistant by design.

Output Structure Check
JSON Schema, type checking, required fields, format validation
PII Detection & Redaction
Auto-detect SSN, credit cards, emails, phone numbers — redact before output (sketched in code below)
Hallucination Detection
Citation matching against actual source material — catches invented URLs, figures, and policies
Compliance Enforcement
Audit-grade evidence for EU AI Act Art. 12 / 13, NIST AI RMF, ISO 42001
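
The production detectors behind these checks aren't published on this page. As a rough illustration of the redact-before-output idea in the PII step, here is a minimal Python sketch; the patterns and the redact_pii helper are hypothetical simplifications, not G8KEPR's actual rules.

    import re

    # Hypothetical, simplified PII patterns; real detectors are broader.
    PII_PATTERNS = {
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone": re.compile(r"\b\d{3}[ .-]\d{3}[ .-]\d{4}\b"),
    }

    def redact_pii(text: str) -> tuple[str, list[str]]:
        """Replace detected PII with typed placeholders; report what fired."""
        findings = []
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                findings.append(label)
                text = pattern.sub(f"[REDACTED:{label.upper()}]", text)
        return text, findings

    clean, hits = redact_pii("Reach me at jane@example.com or 555-123-4567.")
    # clean == "Reach me at [REDACTED:EMAIL] or [REDACTED:PHONE]."
    # hits  == ["email", "phone"]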

The Problem: Unverified AI Outputs

AI systems hallucinate confidently, leak PII, violate business rules, and emit malformed data. Without an intercept-and-block stage between generation and delivery, these errors reach users and damage trust.

Hallucinated Facts
LLMs confidently state false information as fact
PII Leakage
AI outputs accidentally include sensitive user data
Malformed Data
Wrong types, missing fields, invalid formats break downstream systems
Compliance Violations
EU AI Act requires output validation; fines reach up to 7% of global revenue

How Verification Works

Four independent layers catch errors before they reach users

Verification Pipeline
1. AI Generates Output
LLM response, API result, or MCP tool output ready for validation
2. The 4 Verification Layers
Independent · all required · each emits PASSED / WARNED / FAILED / SKIPPED / ERROR / TIMEOUT
L1
Conversation Integrity
Prompt injection in output, message-sequence consistency, tool-invocation authorization, hash-chain drift detection.
L2
Source Grounding
Traces factual claims to source docs / tool outputs. Catches hallucinated URLs, invented figures, fake citations. Optional NLI entailment.
L3
Structural (Tool Verification)
Validates LLM-directed tool calls match registered schema, parameter ranges, approved-list. Injection cross-check on generated tool args.
L4
Output Constraint Validation
PII scan + redact, dangerous-code detection, length/format limits, confidence threshold, custom org constraints (numerical / logical / temporal / structural / Python).

Bypassing one layer does not bypass the others. A response must pass all four.

3. Validated Output Delivered
Clean, verified response reaches end user with full audit trail
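
In code terms, the all-four-required rule can be pictured like this. A minimal Python sketch: LayerResult, the layer callables, and the rule that WARNED and SKIPPED outcomes still deliver are illustrative assumptions, not the engine's actual interface.

    from enum import Enum
    from typing import Callable

    class LayerResult(Enum):
        PASSED = "PASSED"; WARNED = "WARNED"; FAILED = "FAILED"
        SKIPPED = "SKIPPED"; ERROR = "ERROR"; TIMEOUT = "TIMEOUT"

    def run_pipeline(output: str,
                     layers: dict[str, Callable[[str], LayerResult]]) -> dict:
        results = {}
        for name, check in layers.items():
            try:
                results[name] = check(output)      # each layer judged independently
            except TimeoutError:
                results[name] = LayerResult.TIMEOUT
            except Exception:
                results[name] = LayerResult.ERROR
        # A response must pass all four: one FAILED / ERROR / TIMEOUT blocks delivery.
        # (Treating WARNED and SKIPPED as deliverable is an assumption of this sketch.)
        ok = {LayerResult.PASSED, LayerResult.WARNED, LayerResult.SKIPPED}
        return {"deliver": all(r in ok for r in results.values()),
                "layers": results}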
Constraint Enforcer

Action Priority System

When multiple constraints fire on the same response, the highest-priority action wins. A BLOCK from one constraint overrides a WARN from another. Constraints are AND-combined — a response must satisfy all of them. The resolution rule is sketched in code after the list.

P0
ALLOW
Explicit pass — response compliant with all constraints
P1
WARN
Log violation, deliver response (visibility without blocking)
P2
BLOCK
Reject response, return error to caller
P3
SUGGEST
Block AND return a proposed compliant alternative — uniquely G8KEPR
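
In code terms, the resolution rule is a max over an ordered enum. A minimal sketch; Action and resolve are illustrative names, not the constraints/enforcer.py API:

    from enum import IntEnum

    class Action(IntEnum):
        ALLOW = 0    # P0: explicit pass
        WARN = 1     # P1: log violation, still deliver
        BLOCK = 2    # P2: reject, return error to caller
        SUGGEST = 3  # P3: block AND return a proposed compliant alternative

    def resolve(fired: list[Action]) -> Action:
        """Highest-priority action among all fired constraints wins."""
        return max(fired, default=Action.ALLOW)

    assert resolve([Action.WARN, Action.BLOCK]) == Action.BLOCK  # BLOCK beats WARN
    assert resolve([]) == Action.ALLOW                           # nothing fired

Because SUGGEST carries the highest priority, a proposed compliant alternative always wins over a plain block when both fire.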
Layer 4 — Custom Rules

Constraint Framework

Five constraint types let an organization express any business rule. Version-controlled per org with full audit trail. For JSON outputs, enum values are injected into the prompt before generation AND validated via structured-output schemas — inject-then-validate reduces blocks at runtime. The five types are sketched in code after the list.

Numerical
Min / max bounds, decimal precision
Logical
If field X is present, field Y must not be
Temporal
Timestamps must fall within valid ranges
Structural
Field cardinality, nesting depth, JSON schema
Custom (Python)
User-defined predicate functions executed at validation time
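
A minimal sketch of how the five types might look as predicate factories. The names below are illustrative, not the actual constraints/enforcer.py interface:

    from datetime import datetime
    from typing import Any, Callable

    Check = Callable[[dict[str, Any]], bool]

    def numerical(field: str, lo: float, hi: float) -> Check:
        return lambda out: field in out and lo <= out[field] <= hi

    def logical(if_present: str, must_be_absent: str) -> Check:
        return lambda out: if_present not in out or must_be_absent not in out

    def temporal(field: str, earliest: str, latest: str) -> Check:
        return lambda out: (datetime.fromisoformat(earliest)
                            <= datetime.fromisoformat(out[field])
                            <= datetime.fromisoformat(latest))

    def structural(required: set[str]) -> Check:
        return lambda out: required <= out.keys()

    def custom(predicate: Check) -> Check:   # user-defined Python predicate
        return predicate

    def satisfies_all(out: dict[str, Any], checks: list[Check]) -> bool:
        return all(check(out) for check in checks)   # AND-combined

    checks = [
        numerical("confidence", 0.0, 1.0),
        logical("internal_note", "customer_visible"),
        structural({"answer", "confidence"}),
        custom(lambda out: not str(out.get("answer", "")).startswith("As an AI")),
    ]
    assert satisfies_all({"answer": "42", "confidence": 0.9}, checks)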
Detection-Only Tools Don't Cover This

What G8KEPR Adds That Generic AI Guardrails Don't

Five capabilities that exist in the platform — not in detect-and-alert tools, not in schema-only validators, not in the LLM itself.

4 Independent Layers

Conversation Integrity → Source Grounding → Structural → Output Constraint. Each emits PASSED / WARNED / FAILED / SKIPPED / ERROR / TIMEOUT. Failure in one is not failure in all.

no bypass cascade · verification_engine/
Single-layer guardrails fail open on bypass.

SUGGEST Action

Most validators only block. SUGGEST blocks AND returns a proposed compliant alternative the caller can accept — turning rejection into self-healing.

priority 3 · constraints/enforcer.py
Detection-only tools just alert.

Two-Pass Schema

Enum values injected into the LLM prompt BEFORE generation (steers the model) and validated via structured-output schema after. Reduces block frequency at the source.

inject + validate · function_calling · tool_use
Validators only check after generation.

Citation Grounding

Traces factual claims back to actual source documents and tool outputs. Catches hallucinated URLs (that do not exist in source), invented financial figures, and fake citations.

L2 source check · grounding_verifier.py
Confidence scores cannot catch invented sources.

Hash-Chain Drift Detect

Conversation state is hash-chained. Tampering with prior turns breaks downstream hashes — the engine sees the manipulation, not just the symptom.

L1 integrity · guardrails_provenance.py
Output-only validators see only the last response.
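
A minimal sketch of the chaining idea; the actual record format in guardrails_provenance.py is not reproduced here:

    import hashlib

    def chain(turns: list[str]) -> list[str]:
        """Each turn's hash covers the turn text AND the previous hash."""
        hashes, prev = [], "genesis"
        for turn in turns:
            prev = hashlib.sha256(f"{prev}|{turn}".encode()).hexdigest()
            hashes.append(prev)
        return hashes

    recorded = chain(["user: refund policy?", "assistant: 30 days.", "user: ok"])
    tampered = chain(["user: refund policy!", "assistant: 30 days.", "user: ok"])
    # Editing turn 0 changes every hash from that point on:
    # recorded[0] != tampered[0], and the break propagates to recorded[2] != tampered[2].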

Verification Use Cases

Protect your users from AI errors — in every scenario

LLM Response Validation

Verify ChatGPT, Claude, and Gemini responses before showing to users. Detect hallucinations, enforce output formats, redact accidental PII leakage, and block off-topic responses.

Hallucination detection
Format enforcement
PII redaction

MCP Tool Output Verification

Validate results from MCP tools (file reads, database queries, API calls) before AI agents process them. Prevent data poisoning and ensure tool outputs match expected schemas.

Database result validation
File content scanning
API response checks

API Response Validation

Verify your API responses before sending to clients. Enforce schemas, detect data quality issues, and ensure compliance with API contracts and SLAs.

Schema enforcement
Data quality checks
Contract validation

Verification Engine FAQs

Everything you need to know about output verification

What exactly does the Verification Engine verify, and where does it run?

Every AI-generated response, API output, and MCP tool result that traverses one of 15 intercepted path prefixes. Validation runs at middleware position 14 — after generation, before delivery — across four independent layers: Conversation Integrity (Layer 1), Source Grounding (Layer 2), Structural / Tool Verification (Layer 3), and Output Constraint Validation (Layer 4). Each layer emits one of six explicit failure modes (PASSED, WARNED, FAILED, SKIPPED, ERROR, TIMEOUT). A response must pass all four layers — bypassing one does not bypass the others.

Questions about implementing verification?

Talk to our verification experts →

Verification Results Generate Control Evidence For

Every layer score, hallucination risk, citation accuracy, and schema-validity record is written to verification_results and chained into the platform audit log. Mappings are pre-built — auditors get exports, not spreadsheets. A sketch of such a record follows the mapping list.

EU AI Act
Art. 12 — Record Keeping
EU AI Act
Art. 13 — Transparency
NIST AI RMF
Govern · Map · Measure · Manage
ISO 42001
AI Management System
SOC 2 Type II
CC7.2 — System Monitoring
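
As an illustration only (the actual verification_results columns and control mappings belong to the platform and are not shown here), a chained evidence row might look like:

    import hashlib, json
    from datetime import datetime, timezone

    def evidence_record(layer_scores: dict, prev_hash: str) -> dict:
        """Hypothetical shape: layer scores plus a hash link to the prior record."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "layer_scores": layer_scores,   # e.g. hallucination risk, schema validity
            "prev_hash": prev_hash,         # chains this row into the audit log
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        return record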

Subject to independent audit and attestation. G8KEPR provides the technical controls and evidence — your auditor issues the certification.

EU AI Act Compliant

Protect Every Output
Starting Today

Four independent layers, four enforcement priorities (incl. SUGGEST), two-pass schema enforcement, and citation-grounded hallucination detection — at middleware position 14, before delivery.

Schema validation
Data sanitization
Hallucination checks
EU AI Act ready

No credit card required • 30-day free trial • Full verification pipeline