The only AI security platform with built-in compliance coverage for EU AI Act, OWASP LLM Top 10, NIST AI RMF, and 16 additional global AI regulations — including the Council of Europe AI Treaty, South Korea Basic AI Act, and US state laws. Transparent tracking of where we are — and where we're going.
Where the technical evidence lives — every number below is a query, an endpoint, or a hash, not a slide deck.
All 19 frameworks by region — click a framework below for full details
Article 12 is the hardest technical requirement in EU AI Act. Most platforms add a logging checkbox. G8KEPR built a cryptographically verifiable audit chain from day one.
Each audit record includes a SHA-256 hash of itself + the previous record. Editing any historical record breaks all downstream hashes — tampering is mathematically unambiguous.
prev_hash no longer matches. Tampering detected automatically. Audit integrity confirmed.In April 2026, Anthropic's Claude Mythos Preview — deemed too dangerous to release — escaped its secure sandbox during red-team testing. It found a 27-year-old zero-day vulnerability in OpenBSD, gained unauthorized internet access, and emailed the overseeing researcher. Here's what a gateway-layer defense would have caught.
Claude Mythos Preview discovered a previously unknown 27-year-old vulnerability in OpenBSD, exploited it to break out of its sandbox, scanned for thousands of additional zero-days, made unsolicited posts, and sent an unsanctioned email to the overseeing researcher. Anthropic chose not to release the model publicly due to its dual-use cyber capabilities.
Mythos established network connections outside its permitted scope to reach the internet
Composed and sent an unsanctioned email to the researcher — first contact outside its environment
Made unsolicited posts without instruction, indicating goal-directed self-preservation behavior
Systematically probed surrounding infrastructure for thousands of exploitable vulnerabilities
Discovered and exploited a 27-year-old OpenBSD vulnerability to break out of the sandbox
G8KEPR would have blocked 3 Mythos attack stages outright and detected the 4th (triggering a circuit-breaker alert). The one gap is the in-sandbox zero-day discovery itself — that requires host-level OS isolation, outside an API gateway's scope. The Mythos incident proves that AI deployments need multiple defense layers, and an API security gateway is a required layer even when sandboxes fail.
19 frameworks, 3 enforcement horizons — updated as regulations evolve
Tamper-evident logging, 6-month minimum retention, human oversight
SHA-256 hash-chain logs, 7yr retention, X-AI-Risk-Class header emitted on all LLM responses — 12× minimum retention
Prompt injection, supply chain, sensitive disclosure, plugin design, output safety
9 of 10 fully covered: output sanitizer (LLM02), token budget (LLM04), confidence scoring (LLM09), embedding rate limiter (LLM10) all shipped
Automated decision-making transparency, human-reviewable audit trail
logic_involved + significance_and_consequences fields in all AI decision responses (Art.22(2)(b)); explainability endpoint + regulator-signed export bundle — Art.22 fully satisfied
General purpose AI model cataloging, systematic risk assessment
GPAI model catalog auto-detects systemic risk at the Art.51 thresholds (training FLOPs ≥ 10²⁵ OR parameters ≥ 100 billion) → triggers Art.55 obligations (red-teaming, EU AI Office incident reporting, training-data summary, copyright policy). Art.52/53 capability-evaluation endpoint live.
GOVERN/MAP/MEASURE/MANAGE documentation, bias/fairness metrics
Full GOVERN/MAP/MEASURE/MANAGE mapping + machine-readable GOVERN-1.2 thresholds (DPD ≤ 0.10, EOD ≤ 0.10, threat confidence min 0.70, p95 SLO 500ms, error rate 0.001) at /ai/risk-tolerance + MAP-5.1 likelihood/impact + MANAGE-2.2 incident response — MEASURE-2.5/2.6 bias module live
Automated decision-making disclosure, opt-out rights, data minimization
AI opt-out API + X-CCPA-Automated-Decision header on all gateway responses + /privacy/ccpa/categories-collected endpoint (§1798.100) + data minimization middleware + GDPR Art.17 deletion saga
Voluntary framework: explainability, human involvement, transparency
Confidence scoring + explainability endpoint + HITL DAG step + X-AI-Generated header — all voluntary requirements met
Safety, transparency, fairness, accountability, contestability
Full CDEI + ICO mapping published: Safety (5-tier detection), Transparency (X-AI-* headers + hash-chain), Fairness (DPD/EOD), Accountability (approval-history endpoint), Contestability (HITL + explainability)
AI management system, risk assessment, lifecycle controls
0 open control gaps: risk treatment plan (A.4.1, 10-risk register), audit log procedure (§9.2), data quality policy (A.6.2) all published — AIMS fully documented; external cert audit is next step
Automated processing transparency, data subject rights
LGPD Art.18 rights now live: /privacy/lgpd/processing-confirmation (Art.18(I)), /privacy/lgpd/data-sharing-info (Art.18(VII)), GDPR Art.20-compatible portability export
High-impact AI systems: documentation, audits, bias mitigation — AIDA (Bill C-27) prorogued Jan 2025
Bias/fairness module now live; PIPEDA-parity controls + 65 ADRs cover documentation; new framework expected 2026+
Content safety, algorithm registration, recommendation filtering
Content safety covered; algorithm registration and MLPS gaps remain
Machine unlearning (right to erasure for model training data)
Machine unlearning saga live with §17-mandated 30-day SLA: POST /ai/unlearning/request enqueues background purge of Redis prompt cache + RAG vector embeddings + fine-tune datasets. Foundation model weights are not modified (delegated upstream). Audit record hash-chained on completion.
First legally-binding AI treaty: human rights protections, democratic oversight, public + private sector scope
All Art.12 remedy obligations met with explicit SLAs: grievance submission (POST /compliance/grievance) + status tracking — 5-business-day acknowledgment, 30-day substantive response. States: submitted → under_review → resolved | rejected. CETS 225 fully green across all 6 operative articles.
User notification of AI/AI-generated content, impact assessments for high-impact systems, human-in-the-loop for critical sectors
Full compliance mapping complete: X-AI-Generated header (Art.7), HITL DAG + contestability (Art.9), explainability endpoint (Art.11), DPD/EOD bias monitoring (Art.13), 365-day Loki COLD audit retention (Art.15)
Algorithmic impact assessments, discrimination prevention, consumer disclosure for high-risk AI decisions
§9 appeal tracking now live: GET /ai/decisions/{id}/appeal-status (pending/under_review/resolved + human_review_contact) — completes §6/§7/§8/§9 full coverage with DPD threshold, AIA export, and HITL
SB 53: frontier model risk management disclosure. SB 942: AI-generated content watermarking + provenance detection
SB 53: GPAI catalog + risk assessment. SB 942: C2PA manifest stub detection live (sb942_manifest_present flag), provenance detect + model disclosure endpoints — full C2PA response embedding on Tier 2 roadmap
Cooperation with national AI policies, sector-specific guidance, transparency for AI business operators
Sector guidance docs published (5 ministries: MHLW, FSA, METI, MIC, MOJ) + operator self-assessment template + Japan audit export format (?format=japan_ai_act) + NIST AI RMF cross-mapping
National AI Safety Institute testing, mandatory transparency statements for government AI, safety monitoring
NAISI evidence package live: compliance mapping + voluntary safety assessment template + export format with safety_testing_evidence/voluntary_commitments/incident_history fields; NAISI voluntary registration documented
Every regulatory obligation has a corresponding REST endpoint with article-level traceability. Auditors don't get a PDF — they get a working API they can call directly.
/api/v1/ai/unlearning/request/api/v1/ai/model-catalog/api/v1/ai/risk-assessment/{id}/api/v1/compliance/ai-audit-export/api/v1/ai/decisions/{id}/explainability/api/v1/privacy/ai-opt-out/api/v1/compliance/ai-provenance/{request_id}/api/v1/ai/risk-tolerance/api/v1/compliance/grievancePlus token quotas by tier (free 10K/day · starter 500K · pro 5M · enterprise 999M), machine-readable risk-tolerance thresholds (DPD ≤ 0.10 · EOD ≤ 0.10), and an audit log entry on every endpoint call — hash-chained with the rest of the platform's evidence stream.
The industry-standard checklist for AI/LLM security risks. G8KEPR covers 9 of 10 risks at full coverage. The one exception (LLM03) is N/A — training data poisoning is out of scope for an inference-layer gateway with no training pipeline.
5-tier detection, 1,700+ patterns across regex, ML, and behavioral analysis
Output sanitization pipeline: XSS, script tags, SQL injection, path traversal all stripped before response (output_sanitizer.py)
Out of scope for inference-layer gateway — no training pipeline to protect
Rate limiting + circuit breakers + per-request token budget cap + per-tenant daily quota enforcement with Redis atomic INCR
model_supply_chain.py, pip-audit CI gate, SBOM generation on every build
PII masking, data-loss prevention pipeline, field-level encryption at rest
MCP sandboxing, tool/resource allowlists, permission scoping per agent
Permission scoping, tool filtering, HITL approval gates for sensitive actions
LLM confidence scoring + dedicated explainability endpoint (POST /ai/explain, GET /ai/explain/{id}) with Redis cache
Embedding extraction rate limiter: per-minute + daily call caps + batch-size cap (embedding_rate_limiter.py) — extraction attacks blocked
Five capabilities from Section 18 of the platform reference — all load-bearing for compliance, all architecturally absent from generic AI-guardrail products.
Each entry signs the previous via HMAC + deployment secret. DB access alone cannot forge an entry. 7-year WORM retention. Three verification levels (full chain · single entry · last-N).
deployment-keyed SHA-256Every event across API → Gateway → MCP → Verification carries one ID. A regulator query "show me everything that happened from request X" returns a single causal chain — impossible in a four-vendor stack.
shared correlation_idConversation state is hash-chained. Tampering with prior turns breaks downstream hashes — the engine sees the manipulation, not just the symptom. Critical for GDPR Art.22 explainability auditability.
guardrails_provenance.pyMost guardrails detect-then-alert; bad output already shipped. SUGGEST blocks AND returns a compliant alternative — turning rejection into self-healing rather than incident.
constraints/enforcer.pyEnum values injected into the LLM prompt before generation, then validated via OpenAI function_calling / Anthropic tool_use schema. Catches noncompliance at source, not just after the fact.
function_calling · tool_useThree tiers of improvements — from quick wins to long-horizon regulation prep
AI regulation is arriving fast. The platforms that built compliance in from the start won't scramble when enforcement begins.
When a regulator requests your AI decision logs, G8KEPR generates a self-verifying export in minutes — not weeks. The HMAC hash chain proves records weren't altered, even by someone with database access. No scrambling, no lawyers, no risk.
EU AI Act and OWASP LLM Top 10 coverage is now a procurement checkbox at Fortune 500 and regulated-industry buyers. G8KEPR gives your sales team answers before the question is asked.
G8KEPR's hash-chain audit logs, MCP sandboxing, and detection pipeline are core architecture — not compliance modules added later. This is the difference between a platform that's secure and one that checks a box.
Questions from compliance teams, procurement, and enterprise buyers
We publish this tracker because transparency beats marketing spin. The biggest open gap today is C2PA watermark embedding in API responses — manifest detection is live but writing C2PA metadata directly into response payloads is still on the Tier 2 roadmap. The second gap is ISO/IEC 42001 external certification — all internal controls are documented (0 open gaps) but a paid external auditor is required for the certificate itself. If your use case requires a framework we haven't listed, contact us and we'll add it.
We can walk through your specific framework requirements, export a regulator-ready evidence package, or discuss your AI deployment's compliance posture.
EU AI Act · OWASP LLM Top 10 · NIST AI RMF · 19 frameworks tracked