Accountability infrastructure for AI agents
The safe and secure
way to run AI agents.
APort enforces what each AI agent can do and what it must deliver. Deterministic enforcement — not prompts.
Works with every agent framework
Prompts are suggestions.
Your agents need enforcement.
Role descriptions are soft. Quality gates should be deterministic. APort enforces what each agent can do and what it must deliver — before it proceeds.
Agent shipped broken code
Prompt said 'write tests.' Agent skipped them.
Deliverable contract: 80% coverage required. Enforced.
Agent merged its own PR
No review gate. Engineer bot approved itself.
Policy: reviewer_agent_id !== author_agent_id.
Agent rm -rf'd the repo
Prompt injection bypassed safety instructions.
APort blocks destructive commands before execution.
$15M unauthorized trade
Portfolio AI violated sector concentration limits.
Pre-trade policy check. Blocked in <100ms.
15K client records exported
AI agent used valid credentials to bulk-export PII.
Export limit enforced. Compliance officer notified.
Task marked done, wasn't
Agent said 'done' — acceptance criteria unmet.
/aport-complete verifies criteria with evidence.
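Each of these failures is a missing pre-action check. As a minimal sketch of the idea (hypothetical function and policy fields, not APort's actual API or schema), a deterministic gate evaluates policy before the tool call ever runs:

```python
# Sketch of a pre-action policy gate. Runs deterministically before
# execution; the policy fields (allow, block_patterns) are illustrative.
import re

POLICY = {
    "allow": ["git status", "pytest", "npm test"],
    "block_patterns": [r"\brm\s+-rf\b", r"\bgit\s+push\s+.*\bmain\b"],
}

def check(command: str, policy: dict = POLICY) -> str:
    # The blocklist wins no matter how the prompt was phrased or injected.
    for pattern in policy["block_patterns"]:
        if re.search(pattern, command):
            return "DENY"
    # Anything not explicitly allowed is denied.
    return "ALLOW" if command in policy["allow"] else "DENY"

print(check("pytest"))                        # ALLOW
print(check("rm -rf / --no-preserve-root"))   # DENY
```

A prompt-injected agent can be talked out of its instructions; it cannot be talked out of a regex that runs outside its context window.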
Four primitives. One system.
The same infrastructure that blocks rm -rf also enforces test coverage thresholds and verifies task completion.
Passport
Who your agent is
Verifiable identity via W3C DID/VC. JSON file, no signup. Portable across any platform. The birth certificate your agent carries everywhere.
Policy
What your agent can do
Pre-action authorization. Command allowlists, spending caps, file size limits, sector concentration rules. Enforced before execution, not after.
Deliverable Contract
What your agent must deliver
Quality gates enforced deterministically. Test coverage thresholds, review requirements, acceptance criteria. Agents can't mark done until the contract is met.
Proof
Cryptographic record of everything
Every decision Ed25519-signed. Court-admissible attestations for SOC 2, HIPAA, SOX, and IIROC compliance. Not logs — mathematical certainty.
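A passport is just a JSON document. The fields below are illustrative only — APort's real schema may differ — but they sketch how a DID-based identity, capabilities, and a signing key could travel together in one portable file:

```json
{
  "id": "did:web:example.com:agents:engineer-bot",
  "owner": "did:web:example.com",
  "capabilities": ["code.write", "code.pr.open"],
  "blocked": ["code.push.main", "code.deploy"],
  "public_key": "ed25519:<base64-encoded-key>",
  "issued_at": "2025-01-01T00:00:00Z"
}
```

Because it is a plain file rather than a platform account, the same identity can be presented to any framework that checks it.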
Prompts change the mood.
APort changes what’s possible.
Prompt-based (honor system)
"Think like a senior architect."
"Be paranoid about security."
"Only merge if tests pass."
APort (enforced)
capabilities: [code.write, code.pr.open]
blocked: [code.push.main, code.deploy]
min_coverage: 80%
reviewer !== author
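The contrast above can be made concrete: a deliverable contract is an ordinary boolean predicate, so its outcome is reproducible rather than persuadable. A minimal sketch, with hypothetical function and field names mirroring the example policy (not APort's actual API):

```python
# Sketch of a deliverable-contract check: a deterministic quality gate.
# Field names (min_coverage, reviewer_must_differ) are illustrative.

CONTRACT = {"min_coverage": 0.80, "reviewer_must_differ": True}

def contract_met(coverage: float, reviewer_agent_id: str,
                 author_agent_id: str, contract: dict = CONTRACT) -> bool:
    if coverage < contract["min_coverage"]:
        return False  # 'write tests' as a prompt is skippable; this is not
    if contract["reviewer_must_differ"] and reviewer_agent_id == author_agent_id:
        return False  # blocks self-merge: reviewer !== author
    return True

print(contract_met(0.85, "reviewer-bot", "engineer-bot"))  # True
print(contract_met(0.85, "engineer-bot", "engineer-bot"))  # False: self-review
print(contract_met(0.40, "reviewer-bot", "engineer-bot"))  # False: low coverage
```

The agent cannot mark the task done until every clause of the predicate holds.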
10 lines. Any framework.
Developer CLI for instant setup. Enterprise middleware for production APIs.
# One command - no clone required
npx @aporthq/aport-agent-guardrails
# Plugin now active - checks EVERY tool call automatically
# Agent → exec.run → APort checks policy → ✅ ALLOW
# Agent → exec.run → APort checks policy → ❌ DENY
# Verify task completion against deliverable contract:
# Agent → /aport-complete → contract met? → ✅ Signed receipt
For developers. For enterprises.
Same primitives. Different pain points.
AI Engineering Teams
Multi-agent workflows with enforced quality.
- Engineer bot can open PRs but not merge — enforced, not prompted
- Deliverable contracts gate task completion on test coverage
- Agent handoffs verified with signed receipts
- Block destructive commands, secret exposure, path traversal
Regulated Industries
Independent third-party authorization with proof.
- Pre-trade risk checks for portfolio AI (SOX, IIROC, OSFI)
- Data export controls with PIPEDA/HIPAA enforcement
- Court-admissible attestations — Ed25519 signed, not just logs
- ESG claim verification with data source provenance
How APort compares.
Prompts suggest. Gateways filter. APort enforces and proves.
| Feature | APort | Prompt Rules | Runlayer | Okta / Auth0 |
|---|---|---|---|---|
| Pre-action authorization | ✓ | — | Partial | — |
| Deliverable quality gates | ✓ | Soft | — | — |
| Agent identity (W3C DID) | ✓ | — | — | — |
| Cryptographic proofs | Ed25519 | — | — | — |
| Court-admissible attestations | ✓ | — | — | — |
| Task completion verification | ✓ | — | — | — |
| Agent handoff verification | ✓ | — | — | — |
| Framework agnostic | ✓ | ✓ | MCP only | — |
| Open source | ✓ | Varies | — | — |
APort and Runlayer are complementary. Runlayer governs your agents’ outbound actions. APort governs what agents must deliver and proves they did it.
Design Partner Program
Shape the standard.
First 3–5 companies define the accountability standard for AI agents. White-glove onboarding. Grandfathered pricing. Direct founder access.