One‑liner: AI agent that makes provably governed access decisions using Bayesian risk scoring.
“The ARF legacy API endpoint is currently unavailable. The agent is designed to call it via OpenAPI, and the mock responses used in the demo exactly match the real API’s output schema. In production, you would replace the mock with the live ARF API.”
Enterprise agents often act without governance: they are non‑deterministic, unauditable, and therefore unsafe for high‑stakes workflows like access control.
This agent uses the Agentic Reliability Framework (ARF) to evaluate every access request with:
- Bayesian risk scoring
- Expected loss minimisation
- Clear approve / deny / escalate decisions
- Full audit trail
Input (role, resource) → ARF API → Decision (approve/deny/escalate) + risk_score + audit_log
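The "expected loss minimisation" step above can be sketched as follows. This is an illustrative model only (the function and loss values are assumptions, not ARF internals): treat the risk score as the posterior probability that a request is malicious, assign each action a loss under the benign and malicious outcomes, and pick the action with minimal expected loss.

```python
# Hypothetical sketch of expected-loss minimisation; names and loss
# values are illustrative, not the real ARF engine.
def decide(risk_score: float,
           loss_matrix: dict[str, tuple[float, float]]) -> str:
    """Return the action with minimal expected loss.

    loss_matrix maps action -> (loss if benign, loss if malicious);
    risk_score is the posterior probability the request is malicious.
    """
    expected = {
        action: (1 - risk_score) * benign + risk_score * malicious
        for action, (benign, malicious) in loss_matrix.items()
    }
    return min(expected, key=expected.get)

losses = {
    "approve":  (0.0, 10.0),  # free if benign, costly if malicious
    "deny":     (2.0, 0.0),   # friction if benign, safe if malicious
    "escalate": (1.0, 1.0),   # constant human-review cost
}
```

With these losses, a low risk score selects `approve`, a high one selects `deny`, and mid-range scores fall to `escalate`, mirroring the decision table below.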
| Role | Expected Decision | Risk Level |
|---|---|---|
| admin | approve | low (<0.3) |
| intern | deny | high (>0.7) |
| contractor | escalate | medium (0.3–0.7) |
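The table's thresholds translate directly into a decision rule. A minimal sketch, assuming the 0.3 and 0.7 cut-offs shown above (the authoritative thresholds live inside the ARF engine):

```python
# Illustrative mapping from risk score to decision, using the
# thresholds from the table; assumed, not taken from ARF source.
def decision_from_risk(risk_score: float) -> str:
    if risk_score < 0.3:
        return "approve"   # low risk
    if risk_score > 0.7:
        return "deny"      # high risk
    return "escalate"      # medium risk: hand off to a human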
- ARF Legacy API (Bayesian risk engine)
- JSON interface (no UI required)
- Optional: FastAPI wrapper
- `openapi.yaml` – ARF API specification for watsonx Orchestrate
- `agent-instructions.md` – deterministic instructions for the agent
- `test_cases.json` – structured test cases
- `evaluate.py` – simple Python script to call the API (optional)
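For reference, `evaluate.py` might look roughly like the sketch below. The endpoint URL is a placeholder and the payload shape is taken from the demo payload in this README; only the standard library is used.

```python
# Sketch of an evaluate.py-style client; ARF_URL is a placeholder,
# not the real ARF endpoint.
import json
from urllib import request

ARF_URL = "http://localhost:8000/evaluate"  # replace with the live ARF API

def build_payload(user_role: str) -> dict:
    """Build the access-request payload the agent sends to ARF."""
    return {"incident": {"type": "access_request", "user_role": user_role}}

def evaluate(user_role: str) -> dict:
    """POST the payload and return the decoded JSON decision."""
    req = request.Request(
        ARF_URL,
        data=json.dumps(build_payload(user_role)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)  # decision, risk_score, audit_log
```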
1. Import `openapi.yaml` into watsonx Orchestrate as a tool.
2. Create an agent with the instructions from `agent-instructions.md`.
3. Send a JSON payload: `{"incident": {"type": "access_request", "user_role": "admin"}}`
4. Receive the decision, risk_score, and audit log.
Every decision is auditable, risk‑quantified, and derived from ARF’s Bayesian inference – no black boxes.