Accountability infrastructure for AI agents

The safe and secure
way to run AI agents.

APort enforces what each AI agent can do and what it must deliver. Deterministic enforcement — not prompts.


Works with every agent framework

OpenClaw
LangChain
LangGraph
CrewAI
OpenAI
FastAPI
Express
Next.js
Python
Node.js
MCP
Vercel
Claude
Anthropic
Salesforce
AWS
Django
Gemini
<200ms
Policy check latency
40+
Security patterns
12+
Pre-built policy packs
100%
Open source

Prompts are suggestions.
Your agents need enforcement.

Role descriptions are soft. Quality gates should be deterministic. APort enforces what each agent can do and what it must deliver — before the agent proceeds.

Quality

Agent shipped broken code

Prompt said 'write tests.' Agent skipped them.

Deliverable contract: 80% coverage required. Enforced.

Workflow

Agent merged its own PR

No review gate. Engineer bot approved itself.

Policy: reviewer_agent_id !== author_agent_id.

Security

Agent rm -rf'd the repo

Prompt injection bypassed safety instructions.

APort blocks destructive commands before execution.

Enterprise

$15M unauthorized trade

Portfolio AI violated sector concentration limits.

Pre-trade policy check. Blocked in <100ms.

Enterprise

15K client records exported

AI agent used valid credentials to bulk-export PII.

Export limit enforced. Compliance officer notified.

Quality

Task marked done, wasn't

Agent said 'done' - acceptance criteria unmet.

/aport-complete verifies criteria with evidence.
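Each of the gates above reduces to a plain, deterministic predicate that runs before the action. A minimal sketch of the self-approval rule from the workflow scenario (field and function names are illustrative, not APort's actual policy schema):

```python
def check_merge(action: dict) -> tuple[bool, str]:
    """Deterministic review gate: an agent may never approve its own PR."""
    if action["reviewer_agent_id"] == action["author_agent_id"]:
        return False, "DENY: reviewer_agent_id must differ from author_agent_id"
    return True, "ALLOW"

# Self-approval is rejected regardless of what the prompt said.
check_merge({"author_agent_id": "engineer-bot", "reviewer_agent_id": "engineer-bot"})
```

Because the check is code, not instructions, a prompt injection cannot talk its way past it.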

Four primitives. One system.

The same infrastructure that blocks rm -rf also enforces test coverage thresholds and verifies task completion.

Passport

Who your agent is

Verifiable identity via W3C DID/VC. JSON file, no signup. Portable across any platform. The birth certificate your agent carries everywhere.
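A passport is just a portable JSON document. An illustrative example (the field names here are hypothetical and may differ from APort's real schema):

```python
import json

# Illustrative agent passport: a DID for identity plus declared capabilities.
passport = {
    "id": "did:web:example.com:agents:engineer-bot",
    "name": "engineer-bot",
    "capabilities": ["code.write", "code.pr.open"],
    "blocked": ["code.push.main", "code.deploy"],
}
print(json.dumps(passport, indent=2))
```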

Policy

What your agent can do

Pre-action authorization. Command allowlists, spending caps, file size limits, sector concentration rules. Enforced before execution, not after.
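Pre-action authorization can be sketched as a single function that every tool call must pass through first. A minimal, hypothetical example (the tool names, denylist, and cap are illustrative assumptions, not APort's shipped policy packs):

```python
BLOCKED_COMMANDS = {"rm", "dd", "mkfs"}   # illustrative command denylist
SPEND_CAP_USD = 500.0                     # illustrative spending cap

def authorize(tool: str, args: dict) -> bool:
    """Pre-action check: evaluated before execution, not after."""
    if tool == "exec.run" and args["argv"][0] in BLOCKED_COMMANDS:
        return False
    if tool == "payments.charge" and args["amount_usd"] > SPEND_CAP_USD:
        return False
    return True

authorize("exec.run", {"argv": ["rm", "-rf", "/repo"]})   # blocked: destructive command
authorize("payments.charge", {"amount_usd": 120.0})       # allowed: under the cap
```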

Deliverable Contract

What your agent must deliver

Quality gates enforced deterministically. Test coverage thresholds, review requirements, acceptance criteria. Agents can't mark done until the contract is met.
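In the same spirit, a deliverable contract is just a predicate over evidence the agent submits. A minimal sketch (the contract fields and report shape are hypothetical):

```python
CONTRACT = {"min_coverage": 0.80, "requires_review": True}  # illustrative contract

def can_mark_done(report: dict) -> bool:
    """Completion is gated on evidence, not on the agent's say-so."""
    if report["coverage"] < CONTRACT["min_coverage"]:
        return False
    if CONTRACT["requires_review"] and not report["reviewed"]:
        return False
    return True

can_mark_done({"coverage": 0.85, "reviewed": True})   # contract met
can_mark_done({"coverage": 0.60, "reviewed": True})   # coverage below threshold
```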

Proof

Cryptographic record of everything

Every decision Ed25519 signed. Court-admissible attestations for SOC 2, HIPAA, SOX, IIROC compliance. Not logs — mathematical certainty.
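The shape of a signed receipt can be sketched as follows. This demo uses an HMAC-SHA256 tag as a stand-in so it runs with only the standard library; APort itself uses Ed25519 public-key signatures, which let anyone verify a receipt without the signing key:

```python
import hashlib, hmac, json

SECRET = b"demo-key"  # stand-in; real receipts use Ed25519 keypairs, not a shared secret

def sign_receipt(decision: dict) -> dict:
    """Attach a tamper-evident tag to a policy decision (illustrative only)."""
    payload = json.dumps(decision, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"decision": decision, "sig": tag}

def verify_receipt(receipt: dict) -> bool:
    """Recompute the tag; any edit to the decision invalidates the receipt."""
    payload = json.dumps(receipt["decision"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])
```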

Prompts change the mode.
APort changes what’s possible.

Prompt-based (honor system)

# /plan mode
"Think like a senior architect."
# /review mode
"Be paranoid about security."
# /ship mode
"Only merge if tests pass."
Agent can still ignore all of this.

APort (enforced)

# engineer-bot passport
capabilities: [code.write, code.pr.open]
blocked: [code.push.main, code.deploy]
# deliverable contract
min_coverage: 80%
reviewer !== author
Physically cannot proceed until met.

10 lines. Any framework.

Developer CLI for instant setup. Enterprise middleware for production APIs.

terminal
# One command - no clone required
npx @aporthq/aport-agent-guardrails

# Plugin now active - checks EVERY tool call automatically

# Agent → exec.run → APort checks policy → ✅ ALLOW
# Agent → exec.run → APort checks policy → ❌ DENY

# Verify task completion against deliverable contract:
# Agent → /aport-complete → contract met? → ✅ Signed receipt
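The "checks EVERY tool call" flow above amounts to wrapping each tool in a policy check, whatever the framework. A framework-agnostic sketch (all names here are hypothetical, not the plugin's actual API):

```python
def guarded(tool_fn, policy_check):
    """Wrap a tool so the policy check runs before every invocation."""
    def wrapper(**kwargs):
        allowed, reason = policy_check(tool_fn.__name__, kwargs)
        if not allowed:
            raise PermissionError(reason)   # DENY: the tool never runs
        return tool_fn(**kwargs)            # ALLOW: proceed as normal
    return wrapper

def exec_run(argv):
    return f"ran {argv[0]}"   # stand-in for a real shell tool

def deny_rm(tool, kwargs):
    """Illustrative policy: block destructive rm commands."""
    if kwargs.get("argv", [""])[0] == "rm":
        return False, "destructive command blocked"
    return True, "ALLOW"

safe_exec = guarded(exec_run, deny_rm)
```

Because the wrapper sits between the agent and the tool, enforcement happens even if the agent ignores every instruction in its prompt.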

For developers. For enterprises.

Same primitives. Different pain points.

AI Engineering Teams

Multi-agent workflows with enforced quality.

  • Engineer bot can open PRs but not merge — enforced, not prompted
  • Deliverable contracts gate task completion on test coverage
  • Agent handoffs verified with signed receipts
  • Block destructive commands, secret exposure, path traversal

Regulated Industries

Independent third-party authorization with proof.

  • Pre-trade risk checks for portfolio AI (SOX, IIROC, OSFI)
  • Data export controls with PIPEDA/HIPAA enforcement
  • Court-admissible attestations — Ed25519 signed, not just logs
  • ESG claim verification with data source provenance

Questions Developers Ask

Quick answers to common objections

How APort compares.

Prompts suggest. Gateways filter. APort enforces and proves.

Feature | APort | Prompt Rules | Runlayer | Okta / Auth0
--- | --- | --- | --- | ---
Pre-action authorization | ✓ | — | Partial | —
Deliverable quality gates | ✓ | Soft | — | —
Agent identity (W3C DID) | ✓ | — | — | —
Cryptographic proofs | Ed25519 | — | — | —
Court-admissible attestations | ✓ | — | — | —
Task completion verification | ✓ | — | — | —
Agent handoff verification | ✓ | — | — | —
Framework agnostic | ✓ | — | MCP only | —
Open source | ✓ | — | Varies | —

APort and Runlayer are complementary. Runlayer governs your agents’ outbound actions. APort governs what agents must deliver and proves they did it.

Design Partner Program

Shape the standard.

First 3–5 companies define the accountability standard for AI agents. White-glove onboarding. Grandfathered pricing. Direct founder access.