Security model
Edictum is a security product, not a dev tool wearing a security hat: deterministic contract enforcement that executes outside the LLM.
Architecture
Outside the LLM.
User prompt (natural language) → LLM (decides tool call) → Tool call (function + args) → Edictum (contract enforcement) → Tool execution (or denied)
Contracts are evaluated deterministically. They cannot be overridden by prompt injection, jailbreaks, or model confusion.
The model never sees the enforcement layer, so it cannot negotiate with it, argue with it, or bypass it.
Failure modes
Every failure mode leads to deny.
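The fail-closed behavior can be sketched as a wrapper around tool execution. The names here (ToolCall, Decision, enforce) are illustrative, not Edictum's actual API; the point is that a contract that errors or denies always blocks execution.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Illustrative types -- not Edictum's real API.
@dataclass
class ToolCall:
    tool: str
    args: dict[str, Any] = field(default_factory=dict)

@dataclass
class Decision:
    allowed: bool
    reason: str

def enforce(call: ToolCall,
            contracts: list[Callable[[ToolCall], Decision]],
            execute: Callable[[ToolCall], Any]) -> Any:
    """Evaluate every contract before execution; any failure denies."""
    for contract in contracts:
        try:
            decision = contract(call)
        except Exception as exc:
            # Fail closed: a crashing contract is treated as a deny.
            raise PermissionError(f"deny ({call.tool}): contract error: {exc}")
        if not decision.allowed:
            raise PermissionError(f"deny ({call.tool}): {decision.reason}")
    return execute(call)
```

Because the loop runs before `execute`, there is no path where a tool call proceeds past a failing or broken contract.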
Threat model
What we defend against — and what we don't.
Defends against
- Unauthorized tool execution (pre-contracts)
- Data exfiltration via output (post-contract redaction)
- Privilege escalation (principal-based rules)
- Unauthorized sub-agent spawning
- Secret leakage: OpenAI keys, AWS credentials, JWTs, GitHub tokens, Slack tokens
- Rate abuse (session limits: per-tool, per-session, per-attempt)
- Contract tampering (immutable YAML bundles with version hashing)
- Sensitive file access (built-in deny_sensitive_reads for .env, .ssh/, .aws/credentials, keys)
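A sketch of what a sensitive-file contract could look like in YAML. Only `not_within` appears elsewhere on this page; the other field names (`contracts`, `tool`, `pre`, `not_matches`) are illustrative assumptions, not Edictum's actual schema.

```yaml
# Hypothetical contract sketch -- field names other than not_within
# are assumptions, not Edictum's documented schema.
contracts:
  - name: deny_sensitive_reads
    tool: read_file
    pre:
      path:
        not_within:
          - /etc
          - ~/.ssh
          - ~/.aws
        not_matches:
          - "\\.env$"
```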
Does not defend against
- Writes or irreversible side effects that have already completed (postconditions fall back to warn)
- Kernel-level sandboxing (use gVisor/Firecracker for that)
- Hallucinated text content (Edictum enforces contracts on actions, not words)
- Network-level attacks (use network policies)
- Prompt injection on text responses (only on tool-call execution)
Standards alignment
OWASP + EU AI Act coverage.
Edictum directly mitigates 6 of the OWASP Top 10 for Agentic Applications (2026) and maps to EU AI Act Articles 9 and 14.
OWASP Top 10 for Agentic Applications — 6 of 10 mitigated
- Pre-contracts deny unauthorized tool calls regardless of prompt manipulation
- Sandbox contracts restrict file paths, commands, and domains to allowlists
- Principal-based contracts enforce role-level permissions on every tool call
- Pre-contracts block dangerous shell patterns (rm -rf, sudo, curl|sh)
- Session limits cap tool calls, preventing unbounded context manipulation
- Contracts enforce boundaries on sub-agent spawning and cross-tool chaining
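A shell-pattern pre-contract can be sketched as a denylist of regexes. The pattern list mirrors the examples named above (rm -rf, sudo, curl|sh); this is a sketch, not Edictum's implementation.

```python
import re

# Denylist mirroring the patterns named above -- illustrative only.
DANGEROUS_PATTERNS = [
    re.compile(r"\brm\s+(-[a-z]*r[a-z]*f|-[a-z]*f[a-z]*r)\b"),  # rm -rf / rm -fr
    re.compile(r"\bsudo\b"),
    re.compile(r"\bcurl\b[^|]*\|\s*(ba)?sh\b"),                 # curl ... | sh
]

def shell_precontract(command: str) -> bool:
    """Return True if the command is allowed, False to deny."""
    return not any(p.search(command) for p in DANGEROUS_PATTERNS)
```

A real implementation would also normalize whitespace and quoting before matching, since simple regexes can be evaded by obfuscation.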
EU AI Act (effective Aug 2026)
- Declarative contracts document allowed/denied actions
- Observe mode for non-blocking evaluation before enforcement
- Audit trail with contract version hashing
- Session contracts for operational limits
- Human-in-the-loop (HITL) — agents pause for human authorization
- Configurable approval scope via YAML contracts
- Timeout with fail-safe — unanswered approvals deny by default
- Every approval decision recorded with principal identity
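The timeout-with-fail-safe behavior can be sketched as a blocking wait that denies when no human answers in time. This is a generic HITL pattern, not Edictum's API.

```python
import threading

class ApprovalGate:
    """Pause an agent for human authorization; deny if the timeout expires."""

    def __init__(self) -> None:
        self._event = threading.Event()
        self._approved = False

    def approve(self) -> None:
        self._approved = True
        self._event.set()

    def deny(self) -> None:
        self._approved = False
        self._event.set()

    def wait(self, timeout_s: float) -> bool:
        """Return True only on explicit approval; unanswered -> deny."""
        if not self._event.wait(timeout_s):
            return False  # fail-safe: timeout denies by default
        return self._approved
```

The key property is that the default return value is deny: approval requires an explicit, recorded action, never silence.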
Docs
AI-consumable documentation — LLM agents can read and reason about Edictum's contract format, operators, and capabilities. Designed for both humans and machines.
Console security
8 security boundaries. 500+ tests.
The self-hosted operations console is hardened with defense-in-depth across every layer.
- JWT authentication on all API endpoints
- Tenant isolation — agents see only their own data
- Webhook signature validation
- Bcrypt API key hashing — plaintext never stored
- CSRF protection on state-changing endpoints
- Rate limiting on all endpoints
- No sensitive data in client-side code
- Input validation with Pydantic schemas
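The store-only-a-hash pattern for API keys can be sketched as follows. The console uses bcrypt; this sketch substitutes stdlib hashlib.scrypt to stay dependency-free, but the shape is the same: salt, hash, constant-time verify, never store plaintext.

```python
import hashlib
import hmac
import secrets

# Sketch only: the console uses bcrypt; scrypt stands in here because
# it ships with the standard library. The pattern is identical.

def hash_api_key(key: str) -> tuple[bytes, bytes]:
    """Return (salt, hash); the plaintext key is never stored."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(key.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_api_key(key: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(key.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```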
Red team
15 attack patterns. 1 real bypass.
Fixed in 6 minutes.
We attacked our own system before anyone else could. Here's what happened.
- Retry after deny
- PII exfiltration via output
- Cross-tool chaining
- Role escalation
- Prompt injection on contracts
- Parameter manipulation
- Session counter bypass
- Tool name spoofing
- Wildcard abuse
- YAML injection in args
- Approval timeout race
- Sub-agent policy escape
- Environment variable leak
- Regex backtracking (ReDoS)
- read_file /etc/shadow
The one bypass
read_file /etc/shadow — the sandbox contract didn't cover absolute paths outside the workspace directory.
Fix: 2-line YAML addition
not_within:
  - /etc
  - /var

Hot-reloaded via SSE. No agent restart. Total fix time: 6 minutes from discovery.
Research
“Mind the GAP”
Every major AI model refuses harmful requests in text while simultaneously executing them through tool calls.
- 6 frontier LLMs tested
- 17,420 datapoints collected
- 4,536 evaluation runs
- 6 regulated domains
Vulnerability disclosure
Responsible disclosure.
We take every report seriously. No legal action against good-faith security researchers.
Safe harbor
We will not pursue legal action against researchers who discover and report vulnerabilities in good faith, follow responsible disclosure practices, and avoid data destruction or service disruption.
/.well-known/security.txt