Cryptographically immutable decision records, live LLM replay engine, human correction loop, and EU AI Act compliance reports. Infrastructure, not a dashboard.
AI systems make critical decisions in production. When they fail, teams can't trace, debug, or prove compliance.
Which prompt was used? What model version? Logs are scattered and incomplete.
You can't re-run a failed decision with different parameters. Every debugging session starts from scratch.
The EU AI Act requires auditable records. Most teams have nothing to show regulators.
LoopGrid sits underneath your AI apps. SDK-first, API-driven, cryptographically verifiable.
Every decision SHA-256 hashed and chained to the previous one. Tamper-evident ledger.
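The chaining idea itself fits in a few lines of plain Python. This is a conceptual sketch, not LoopGrid's internals; which fields are hashed and how the payload is canonicalized are assumptions here.

# Conceptual sketch of a tamper-evident hash chain (illustrative only;
# the exact fields LoopGrid hashes are an assumption).
import hashlib
import json

def chain_hash(decision: dict, previous_hash: str) -> str:
    # Canonicalize the decision payload, then bind it to the previous entry.
    payload = json.dumps(decision, sort_keys=True).encode()
    return hashlib.sha256(previous_hash.encode() + payload).hexdigest()

genesis = "0" * 64
h1 = chain_hash({"decision_type": "customer_support_reply"}, genesis)
h2 = chain_hash({"decision_type": "refund_approval"}, h1)
# Altering the first record changes h1, which invalidates h2 and every hash after it.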
Fork past decisions with real LLM API calls. Compare outputs side-by-side.
Attach corrections as immutable ground truth for systematic learning.
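The corrections API isn't shown on this page. The sketch below assumes a hypothetical attach_correction method on the client from the SDK example further down; the method name and parameters are assumptions, not confirmed SDK surface.

# Hypothetical sketch -- attach_correction and its parameters are assumptions.
# Intent: a reviewer pins the correct output to the original decision
# as an immutable ground-truth record.
grid.attach_correction(
    decision_id=decision["decision_id"],  # returned by grid.record_decision(...)
    corrected_output={"response": "You were double-charged; a refund is on its way."},
    reviewer="support-lead@example.com",
)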
EU AI Act Article 12/14/9 mapping. JSON + printable HTML for auditors.
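grid.compliance_report() appears in the SDK example below; the return shape sketched here is illustrative only, since the real schema isn't documented on this page. For reference, Article 12 covers record-keeping, Article 14 human oversight, and Article 9 risk management.

# Illustrative shape only -- the field names below are assumptions, not the real schema.
report = grid.compliance_report()
# {
#   "article_12_record_keeping":  {...},  # hash-chained decision log
#   "article_14_human_oversight": {...},  # corrections attached by reviewers
#   "article_9_risk_management":  {...},  # replay and integrity evidence
# }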
Minimal SDK. Hash chain built in. Live replay or simulation. Python and JavaScript.
from loopgrid import LoopGrid

grid = LoopGrid(service_name="support-agent")

# Record every AI decision (cryptographically hashed)
decision = grid.record_decision(
    decision_type="customer_support_reply",
    input={"message": "I was charged twice"},
    model={"provider": "openai", "name": "gpt-4"},
    output={"response": "Your account looks fine."}
)

# Replay with live LLM re-execution
replay = grid.create_replay(
    decision_id=decision["decision_id"],
    overrides={"prompt": {"template": "support_v2"}}
)

# Verify ledger integrity
grid.verify_integrity()  # {"valid": True, "total": 1, ...}

# Generate EU AI Act compliance report
grid.compliance_report()  # Article 12, 14, 9 mapping
Open source. Self-hosted. EU AI Act ready. No vendor lock-in.