Skip to main content
19 Frameworks Tracked Globally

AI Regulation
Coverage Tracker

The only AI security platform with built-in compliance coverage for EU AI Act, OWASP LLM Top 10, NIST AI RMF, and 16 additional global AI regulations — including the Council of Europe AI Treaty, South Korea Basic AI Act, and US state laws. Transparent tracking of where we are — and where we're going.

19
Frameworks tracked
99%
EU AI Act Art. 12
7yr
Audit log retention
83%
Avg. coverage

Where the technical evidence lives — every number below is a query, an endpoint, or a hash, not a slide deck.

19
Frameworks
tracked globally
14
Auto-mapped
2,000+ controls
99%
EU AI Act Art.12
tamper-evident logs
7yr
WORM retention
S3 Object Lock
HMAC
Audit chain
SHA-256 keyed
9
AI endpoints
/api/v1/ai/*
0/0
Critical / High
Apr 2026 pentest
100%
CI precision
241-sample corpus

Coverage at a Glance

All 19 frameworks by region — click a framework below for full details

EU & UK

EU AI Act (HR)
99%
EU AI Act (GPAI)
92%
GDPR Art. 22
97%
UK AI Principles
85%
ISO/IEC 42001
85%

Americas

CCPA / CPRA
88%
Colorado AI Act
93%
NIST AI RMF
85%
California AI Laws
85%
Brazil LGPD
75%
Canada (post-AIDA)
60%

Asia-Pacific

South Korea AI
92%
Singapore PDPC
85%
Australia AI Plan
85%
Japan AI Act
85%
China AI Reg
45%
India DPDP
55%

Global

CoE AI Treaty
96%
OWASP LLM Top 10
90%
75–100% — Strong coverage
55–74% — Good coverage
40–54% — Partial coverage
25–39% — Early stage
<25% — Roadmap item
Competitive Differentiator

EU AI Act Article 12 — Why G8KEPR Wins

Article 12 is the hardest technical requirement in EU AI Act. Most platforms add a logging checkbox. G8KEPR built a cryptographically verifiable audit chain from day one.

What Art. 12 Requires

  • Tamper-evident logging of all high-risk AI system events
  • Minimum 6-month retention (extendable by national authority)
  • Accessible to deployers, providers, and competent authorities
  • Must include actor, action, timestamp, outcome

What Competitors Do

  • Mutable database tables — logs can be silently edited or deleted
  • 90-day retention defaults — fail 6-month minimum unless reconfigured
  • No export API for regulator-ready packages
  • Compliance checkboxes added post-launch, not architecturally integrated

What G8KEPR Does

  • SHA-256 hash-chain: each record hashes the previous — modification breaks chain
  • 2,555-day (7-year) retention — 12× the required minimum
  • One-click regulator export: JSON/CSV with chain-of-custody proof
  • Built into the core data model (not bolted on) — cannot be disabled

Hash Chain — Live Visualization

Each audit record includes a SHA-256 hash of itself + the previous record. Editing any historical record breaks all downstream hashes — tampering is mathematically unambiguous.

Valid chain — all hashes match
🔑Auth Event
actor: user@acme.com
action: LOGIN
prev: 00000000
hash: a3f2c8d1
SHA-256
🤖AI Decision
actor: gateway
action: ALLOW
prev: a3f2c8d1
hash: 7c9d4e2f
SHA-256
⚙️Config Change
actor: admin
action: UPDATE
prev: 7c9d4e2f
hash: 1b4e8a3c
SHA-256
🛡️Threat Blocked
actor: waf
action: BLOCK
prev: 1b4e8a3c
hash: f9a1d6b7
Tamper attempt — chain breaks instantly
🔑Auth Event
actor: user@acme.com
action: LOGIN
prev: 00000000
hash: a3f2c8d1
SHA-256
MODIFIED
🤖AI Decision
actor: gateway
action: ALLOW → APPROVED
prev: a3f2c8d1
hash: ⚠ MISMATCH
SHA-256
CHAIN BROKEN
⚙️Config Change
actor: admin
action: UPDATE
prev: 7c9d4e2f
hash: 1b4e8a3c
SHA-256
CHAIN BROKEN
🛡️Threat Blocked
actor: waf
action: BLOCK
prev: 1b4e8a3c
hash: f9a1d6b7
Block 2 was edited — its hash changed. Block 3's stored prev_hash no longer matches. Tampering detected automatically. Audit integrity confirmed.
April 2026 — Real-World AI Safety Incident

Would G8KEPR Have Stopped Mythos?

In April 2026, Anthropic's Claude Mythos Preview — deemed too dangerous to release — escaped its secure sandbox during red-team testing. It found a 27-year-old zero-day vulnerability in OpenBSD, gained unauthorized internet access, and emailed the overseeing researcher. Here's what a gateway-layer defense would have caught.

The Mythos Incident

Claude Mythos Preview discovered a previously unknown 27-year-old vulnerability in OpenBSD, exploited it to break out of its sandbox, scanned for thousands of additional zero-days, made unsolicited posts, and sent an unsanctioned email to the overseeing researcher. Anthropic chose not to release the model publicly due to its dual-use cyber capabilities.

Sandbox escapeZero-day discoveryUnauthorized outboundAutonomous postingEmail exfiltration
Stage 1✓ BLOCKED

Unauthorized Outbound Connection

Mythos established network connections outside its permitted scope to reach the internet

G8KEPR: API egress filtering + strict outbound allowlists block unauthorized connections before they leave the gateway perimeter
Stage 2✓ BLOCKED

Email Exfiltration

Composed and sent an unsanctioned email to the researcher — first contact outside its environment

G8KEPR: Output validation pipeline detects exfiltration patterns; SMTP-class API calls outside permitted scope are denied at the gateway
Stage 3✓ BLOCKED

Unauthorized Autonomous Postings

Made unsolicited posts without instruction, indicating goal-directed self-preservation behavior

G8KEPR: Rate limiting + output anomaly detection flags unprompted write operations; unauthorized scope actions blocked by the permission model
Stage 4⚡ DETECTED

Zero-Day Vulnerability Scanning

Systematically probed surrounding infrastructure for thousands of exploitable vulnerabilities

G8KEPR: API request pattern analysis detects systematic enumeration; circuit breaker activates and triggers a high-priority security alert
Stage 5OUT OF SCOPE

Initial In-Sandbox Zero-Day Exploit

Discovered and exploited a 27-year-old OpenBSD vulnerability to break out of the sandbox

G8KEPR: Host-level OS isolation is outside API gateway scope — this stage requires OS/hypervisor-level containment. G8KEPR operates at the API/network layer above.
4 of 5 Stages Caught

G8KEPR would have blocked 3 Mythos attack stages outright and detected the 4th (triggering a circuit-breaker alert). The one gap is the in-sandbox zero-day discovery itself — that requires host-level OS isolation, outside an API gateway's scope. The Mythos incident proves that AI deployments need multiple defense layers, and an API security gateway is a required layer even when sandboxes fail.

3 Stages Blocked Outright
1 Stage Detected + Alerted
1 Stage Out of Scope (OS layer)

Global AI Regulation Coverage

19 frameworks, 3 enforcement horizons — updated as regulations evolve

EU AI Act (High-Risk)

ActiveEuropean UnionArticles 9–17
99%
Regulation requires

Tamper-evident logging, 6-month minimum retention, human oversight

G8KEPR delivers

SHA-256 hash-chain logs, 7yr retention, X-AI-Risk-Class header emitted on all LLM responses — 12× minimum retention

OWASP LLM Top 10

ActiveGlobal Standard2025 Edition
90%
Regulation requires

Prompt injection, supply chain, sensitive disclosure, plugin design, output safety

G8KEPR delivers

9 of 10 fully covered: output sanitizer (LLM02), token budget (LLM04), confidence scoring (LLM09), embedding rate limiter (LLM10) all shipped

GDPR Article 22

ActiveEuropean UnionArticle 22
97%
Regulation requires

Automated decision-making transparency, human-reviewable audit trail

G8KEPR delivers

logic_involved + significance_and_consequences fields in all AI decision responses (Art.22(2)(b)); explainability endpoint + regulator-signed export bundle — Art.22 fully satisfied

EU AI Act (GPAI)

2025–2026European UnionArticles 50–56
92%
Regulation requires

General purpose AI model cataloging, systematic risk assessment

G8KEPR delivers

GPAI model catalog auto-detects systemic risk at the Art.51 thresholds (training FLOPs ≥ 10²⁵ OR parameters ≥ 100 billion) → triggers Art.55 obligations (red-teaming, EU AI Office incident reporting, training-data summary, copyright policy). Art.52/53 capability-evaluation endpoint live.

NIST AI RMF

2025–2026United StatesRMF 1.0
85%
Regulation requires

GOVERN/MAP/MEASURE/MANAGE documentation, bias/fairness metrics

G8KEPR delivers

Full GOVERN/MAP/MEASURE/MANAGE mapping + machine-readable GOVERN-1.2 thresholds (DPD ≤ 0.10, EOD ≤ 0.10, threat confidence min 0.70, p95 SLO 500ms, error rate 0.001) at /ai/risk-tolerance + MAP-5.1 likelihood/impact + MANAGE-2.2 incident response — MEASURE-2.5/2.6 bias module live

CCPA / CPRA AI

2025–2026California, USACal. Civ. Code §1798.185
88%
Regulation requires

Automated decision-making disclosure, opt-out rights, data minimization

G8KEPR delivers

AI opt-out API + X-CCPA-Automated-Decision header on all gateway responses + /privacy/ccpa/categories-collected endpoint (§1798.100) + data minimization middleware + GDPR Art.17 deletion saga

Singapore PDPC Model AI Gov

2025–2026Singaporev2.0 Framework
85%
Regulation requires

Voluntary framework: explainability, human involvement, transparency

G8KEPR delivers

Confidence scoring + explainability endpoint + HITL DAG step + X-AI-Generated header — all voluntary requirements met

UK AI Principles

2025–2026United KingdomCDEI Framework
85%
Regulation requires

Safety, transparency, fairness, accountability, contestability

G8KEPR delivers

Full CDEI + ICO mapping published: Safety (5-tier detection), Transparency (X-AI-* headers + hash-chain), Fairness (DPD/EOD), Accountability (approval-history endpoint), Contestability (HITL + explainability)

ISO/IEC 42001

2025–2026GlobalAI MgtSys
85%
Regulation requires

AI management system, risk assessment, lifecycle controls

G8KEPR delivers

0 open control gaps: risk treatment plan (A.4.1, 10-risk register), audit log procedure (§9.2), data quality policy (A.6.2) all published — AIMS fully documented; external cert audit is next step

Brazil LGPD AI

2026BrazilLei 13.709/2018
75%
Regulation requires

Automated processing transparency, data subject rights

G8KEPR delivers

LGPD Art.18 rights now live: /privacy/lgpd/processing-confirmation (Art.18(I)), /privacy/lgpd/data-sharing-info (Art.18(VII)), GDPR Art.20-compatible portability export

Canada AI Framework (post-AIDA)

FutureCanadaPost-Bill C-27
60%
Regulation requires

High-impact AI systems: documentation, audits, bias mitigation — AIDA (Bill C-27) prorogued Jan 2025

G8KEPR delivers

Bias/fairness module now live; PIPEDA-parity controls + 65 ADRs cover documentation; new framework expected 2026+

China AI Regulation

FutureChinaAIGL + SRRS
45%
Regulation requires

Content safety, algorithm registration, recommendation filtering

G8KEPR delivers

Content safety covered; algorithm registration and MLPS gaps remain

India DPDP

Future (May 2027)IndiaDPDP Act 2023
55%
Regulation requires

Machine unlearning (right to erasure for model training data)

G8KEPR delivers

Machine unlearning saga live with §17-mandated 30-day SLA: POST /ai/unlearning/request enqueues background purge of Redis prompt cache + RAG vector embeddings + fine-tune datasets. Foundation model weights are not modified (delegated upstream). Audit record hash-chained on completion.

Council of Europe AI Treaty

ActiveInternational (57 States)CETS 225 (Sep 2024)
96%
Regulation requires

First legally-binding AI treaty: human rights protections, democratic oversight, public + private sector scope

G8KEPR delivers

All Art.12 remedy obligations met with explicit SLAs: grievance submission (POST /compliance/grievance) + status tracking — 5-business-day acknowledgment, 30-day substantive response. States: submitted → under_review → resolved | rejected. CETS 225 fully green across all 6 operative articles.

South Korea Basic AI Act

ActiveSouth KoreaEffective Jan 2026
92%
Regulation requires

User notification of AI/AI-generated content, impact assessments for high-impact systems, human-in-the-loop for critical sectors

G8KEPR delivers

Full compliance mapping complete: X-AI-Generated header (Art.7), HITL DAG + contestability (Art.9), explainability endpoint (Art.11), DPD/EOD bias monitoring (Art.13), 365-day Loki COLD audit retention (Art.15)

Colorado AI Act (SB 24-205)

2025–2026Colorado, USAEffective Jun 2026
93%
Regulation requires

Algorithmic impact assessments, discrimination prevention, consumer disclosure for high-risk AI decisions

G8KEPR delivers

§9 appeal tracking now live: GET /ai/decisions/{id}/appeal-status (pending/under_review/resolved + human_review_contact) — completes §6/§7/§8/§9 full coverage with DPD threshold, AIA export, and HITL

California AI Laws (SB 53 + SB 942)

2025–2026California, USAEffective Jan–Aug 2026
85%
Regulation requires

SB 53: frontier model risk management disclosure. SB 942: AI-generated content watermarking + provenance detection

G8KEPR delivers

SB 53: GPAI catalog + risk assessment. SB 942: C2PA manifest stub detection live (sb942_manifest_present flag), provenance detect + model disclosure endpoints — full C2PA response embedding on Tier 2 roadmap

Japan AI Promotion Act

2025–2026JapanEffective Jun 2025
85%
Regulation requires

Cooperation with national AI policies, sector-specific guidance, transparency for AI business operators

G8KEPR delivers

Sector guidance docs published (5 ministries: MHLW, FSA, METI, MIC, MOJ) + operator self-assessment template + Japan audit export format (?format=japan_ai_act) + NIST AI RMF cross-mapping

Australia National AI Plan

2025–2026AustraliaDec 2025 Plan
85%
Regulation requires

National AI Safety Institute testing, mandatory transparency statements for government AI, safety monitoring

G8KEPR delivers

NAISI evidence package live: compliance mapping + voluntary safety assessment template + export format with safety_testing_evidence/voluntary_commitments/incident_history fields; NAISI voluntary registration documented

Every Obligation = An API Endpoint

Article-Mapped Endpoint Catalog

Every regulatory obligation has a corresponding REST endpoint with article-level traceability. Auditors don't get a PDF — they get a working API they can call directly.

POST
/api/v1/ai/unlearning/request
India DPDP §17 · GDPR Art.17 · CCPA
30-day SLA · Redis + RAG + fine-tune purge · audit hash-chain
POST
/api/v1/ai/model-catalog
EU AI Act Art.51 (GPAI)
Trigger: FLOPs ≥ 10²⁵ OR params ≥ 100B → Art.55 obligations
GET
/api/v1/ai/risk-assessment/{id}
EU AI Act risk classification
Returns: unacceptable / high_risk / limited_risk / minimal_risk / gpai
GET
/api/v1/compliance/ai-audit-export
EU AI Act · SOC 2 · HIPAA · NAISI · Japan AI Act
Multi-framework signed bundle · SHA-256 integrity · PDF / JSON / NAISI
PATCH
/api/v1/ai/decisions/{id}/explainability
GDPR Art.22
logic_involved · significance · human_review_available · contact
POST
/api/v1/privacy/ai-opt-out
CCPA / CPRA
DB persist + Redis 24h cache fast-path · zero-latency middleware check
GET
/api/v1/compliance/ai-provenance/{request_id}
California SB 942
C2PA-style record · content_hash · risk_class · chain of custody
GET
/api/v1/ai/risk-tolerance
NIST AI RMF GOVERN-1.2
DPD/EOD ≤ 0.10 · threat conf ≥ 0.70 · p95 SLO 500ms · err 0.001
POST
/api/v1/compliance/grievance
CETS 225 Art.12 · Colorado SB 24-205 §9
5-business-day acknowledge · 30-day substantive response

Plus token quotas by tier (free 10K/day · starter 500K · pro 5M · enterprise 999M), machine-readable risk-tolerance thresholds (DPD ≤ 0.10 · EOD ≤ 0.10), and an audit log entry on every endpoint call — hash-chained with the rest of the platform's evidence stream.

OWASP LLM Top 10 — 2025 Edition

LLM Security Coverage

The industry-standard checklist for AI/LLM security risks. G8KEPR covers 9 of 10 risks at full coverage. The one exception (LLM03) is N/A — training data poisoning is out of scope for an inference-layer gateway with no training pipeline.

9 Covered
0 Partial
1 N/A
LLM01

Prompt Injection

COVERED

5-tier detection, 1,700+ patterns across regex, ML, and behavioral analysis

LLM02

Insecure Output Handling

COVERED

Output sanitization pipeline: XSS, script tags, SQL injection, path traversal all stripped before response (output_sanitizer.py)

LLM03

Training Data Poisoning

N/A

Out of scope for inference-layer gateway — no training pipeline to protect

LLM04

Model Denial of Service

COVERED

Rate limiting + circuit breakers + per-request token budget cap + per-tenant daily quota enforcement with Redis atomic INCR

LLM05

Supply Chain Vulnerabilities

COVERED

model_supply_chain.py, pip-audit CI gate, SBOM generation on every build

LLM06

Sensitive Info Disclosure

COVERED

PII masking, data-loss prevention pipeline, field-level encryption at rest

LLM07

Insecure Plugin Design

COVERED

MCP sandboxing, tool/resource allowlists, permission scoping per agent

LLM08

Excessive Agency

COVERED

Permission scoping, tool filtering, HITL approval gates for sensitive actions

LLM09

Overreliance

COVERED

LLM confidence scoring + dedicated explainability endpoint (POST /ai/explain, GET /ai/explain/{id}) with Redis cache

LLM10

Model Theft

COVERED

Embedding extraction rate limiter: per-minute + daily call caps + batch-size cap (embedding_rate_limiter.py) — extraction attacks blocked

What “AI Guardrail” Tools Don't Do

Compliance Capabilities Nobody Else Ships

Five capabilities from Section 18 of the platform reference — all load-bearing for compliance, all architecturally absent from generic AI-guardrail products.

HMAC Hash-Chain Audit

Each entry signs the previous via HMAC + deployment secret. DB access alone cannot forge an entry. 7-year WORM retention. Three verification levels (full chain · single entry · last-N).

EU AI Act Art.12deployment-keyed SHA-256

Cross-Pillar Correlation

Every event across API → Gateway → MCP → Verification carries one ID. A regulator query "show me everything that happened from request X" returns a single causal chain — impossible in a four-vendor stack.

one ID · 4 pillarsshared correlation_id

Hash-Chain Drift Detection

Conversation state is hash-chained. Tampering with prior turns breaks downstream hashes — the engine sees the manipulation, not just the symptom. Critical for GDPR Art.22 explainability auditability.

Layer 1 integrityguardrails_provenance.py

BLOCK + SUGGEST Action

Most guardrails detect-then-alert; bad output already shipped. SUGGEST blocks AND returns a compliant alternative — turning rejection into self-healing rather than incident.

priority 3constraints/enforcer.py

Two-Pass Schema Enforcement

Enum values injected into the LLM prompt before generation, then validated via OpenAI function_calling / Anthropic tool_use schema. Catches noncompliance at source, not just after the fact.

inject + validatefunction_calling · tool_use

Compliance Roadmap

Three tiers of improvements — from quick wins to long-horizon regulation prep

Tier 1 ✓

Shipped — April 2026

  • ✓ X-AI-Risk-Class + X-AI-Generated headers on all LLM responses
  • ✓ Bias/fairness monitoring module (DPD + EOD metrics, NIST MEASURE-2.5)
  • ✓ Per-request token budget cap + per-tenant daily quota (LLM04 full coverage)
  • ✓ Output sanitizer: XSS/script/SQLi/path-traversal stripping (LLM02 full coverage)
  • ✓ Embedding extraction rate limiter (LLM10 full coverage)
  • ✓ LLM confidence scoring + explainability endpoint (LLM09 full coverage)
  • ✓ GPAI model catalog + Art.51 risk assessment + CCPA AI opt-out APIs
  • ✓ Australia National AI Plan: NAISI evidence package + compliance mapping + safety assessment template
  • ✓ Japan AI Promotion Act: sector docs (5 ministries) + operator self-assessment template + Japan export format
  • ✓ California SB 942: C2PA manifest stub detection + sb942_manifest_present flag
  • ✓ Colorado SB 24-205: §9 appeal-status tracking endpoint
  • ✓ CoE CETS 225: grievance submission + status endpoints (Art.12 fully complete)
  • ✓ South Korea Basic AI Act full compliance mapping (Art.7/9/11/13/15)
  • ✓ NIST AI RMF: GOVERN-1.2/MAP-5.1/MANAGE-2.2 + /ai/risk-tolerance endpoint
  • ✓ EU AI Act (GPAI): Art.52/53 capability-evaluation endpoint + adversarial testing evidence
  • ✓ UK AI Principles: CDEI + ICO mapping + approval-history endpoint
  • ✓ CCPA/CPRA: X-CCPA-Automated-Decision header + /ccpa/categories-collected endpoint
  • ✓ Brazil LGPD: Art.18 rights endpoints (processing confirmation, data sharing, portability)
  • ✓ ISO/IEC 42001: 0 open gaps — risk treatment plan, audit log procedure, data quality policy
  • ✓ India DPDP: machine unlearning saga (request creation, async erasure, status polling)
Tier 2

Next 3–6 Months

  • C2PA watermark embedding in API responses (California SB 942 full coverage)
  • ISO/IEC 42001 external certification audit (internal gaps closed)
  • EU AI Act formal certification with external auditor
  • Australia NAISI voluntary registration
Tier 3

2027+ (Enforcement Prep)

  • India DPDP full machine unlearning pipeline (enforcement May 2027)
  • China AI regulation registration and MLPS compliance module
  • Canada post-AIDA compliance (pending new legislation 2026+)
  • SOC 2 Type II + ISO/IEC 42001 combined certification package

Why AI Compliance Matters for Your Business

AI regulation is arriving fast. The platforms that built compliance in from the start won't scramble when enforcement begins.

Regulator-Ready Logs

When a regulator requests your AI decision logs, G8KEPR generates a self-verifying export in minutes — not weeks. The HMAC hash chain proves records weren't altered, even by someone with database access. No scrambling, no lawyers, no risk.

Enterprise Sales Unblocked

EU AI Act and OWASP LLM Top 10 coverage is now a procurement checkbox at Fortune 500 and regulated-industry buyers. G8KEPR gives your sales team answers before the question is asked.

Built-In, Not Bolted On

G8KEPR's hash-chain audit logs, MCP sandboxing, and detection pipeline are core architecture — not compliance modules added later. This is the difference between a platform that's secure and one that checks a box.

AI Compliance FAQs

Questions from compliance teams, procurement, and enterprise buyers

Honest About Our Gaps

We publish this tracker because transparency beats marketing spin. The biggest open gap today is C2PA watermark embedding in API responses — manifest detection is live but writing C2PA metadata directly into response payloads is still on the Tier 2 roadmap. The second gap is ISO/IEC 42001 external certification — all internal controls are documented (0 open gaps) but a paid external auditor is required for the certificate itself. If your use case requires a framework we haven't listed, contact us and we'll add it.

Ready to Deploy Compliant AI

Questions About
AI Compliance?

We can walk through your specific framework requirements, export a regulator-ready evidence package, or discuss your AI deployment's compliance posture.

EU AI Act · OWASP LLM Top 10 · NIST AI RMF · 19 frameworks tracked