Route inference across LLM providers. Track cost per request.
Updated Feb 17, 2026 - Go
The shared AI foundation for TYPO3: one LLM setup for every extension on your site.
Governed AI routing and execution control plane with policy enforcement, provider abstraction, auditability, and fallback orchestration.
Core Elixir primitives for building reliable self-hosted inference clients, provider adapters, transport boundaries, and operational controls for private AI runtimes across local, edge, and dedicated infrastructure.
A.I.L. (AI Intelligence Layer) is a modular platform for managing, validating, and executing AI workflows with strict architectural boundaries. It provides prompt versioning, provider abstraction, reliability controls, and observability for scalable, controlled AI integration.
Foundational Elixir runtime library providing deterministic CLI subprocess orchestration, normalized event and payload semantics, provider profile contracts, registry plumbing, and shared support modules for the nshkr AI SDK runtime-stack re-architecture.