Python SDK for an Agent AI observability, monitoring, and evaluation framework. Features include agent, LLM, and tool tracing; multi-agent system debugging; a self-hosted dashboard; and advanced analytics with timeline and execution-graph views.
This project aims to build a Coding Agent for the energy market. Based on the user’s goals and the tools provided, the coding agent intelligently selects and executes the appropriate actions to complete the task.
Developer-focused LLM experimentation and debugging server for testing prompts, tools, MCP integrations, traces, and scheduled AI workflows across models.
A lightweight, local-first observability proxy and dashboard designed to intercept, log, and trace LLM interactions. OpenInspector acts as a transparent middleman, offering full visibility into agentic workflows, tool executions, and latency metrics without requiring you to change a single line of your application code.
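The intercept-and-log idea behind such a proxy can be sketched in plain Python. This is not OpenInspector's actual code; the handler classes, the fake upstream, and the in-memory `LOG` list are all hypothetical stand-ins, using only the standard library. The application only swaps its base URL for the proxy's address, which is how "no application code changes" is achieved:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG = []  # in-memory trace store (a real tool would persist and visualize this)

def make_proxy(upstream):
    """Build a handler that forwards POSTs to `upstream` and logs each exchange."""
    class Proxy(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            req = urllib.request.Request(
                upstream + self.path, data=body,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()
            # Record the full request/response pair, then pass the
            # response through unchanged (the "transparent middleman").
            LOG.append({"path": self.path,
                        "request": body.decode(),
                        "response": payload.decode()})
            self.send_response(200)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

        def log_message(self, *args):  # silence default stderr logging
            pass
    return Proxy

class FakeUpstream(BaseHTTPRequestHandler):
    """Stand-in for the real LLM endpoint."""
    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.dumps({"choices": [{"text": "ok"}]}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass

# Start the fake upstream, then the proxy in front of it (ephemeral ports).
up = HTTPServer(("127.0.0.1", 0), FakeUpstream)
threading.Thread(target=up.serve_forever, daemon=True).start()
proxy = HTTPServer(("127.0.0.1", 0),
                   make_proxy(f"http://127.0.0.1:{up.server_port}"))
threading.Thread(target=proxy.serve_forever, daemon=True).start()

# The "application": unchanged except that its base URL points at the proxy.
req = urllib.request.Request(
    f"http://127.0.0.1:{proxy.server_port}/v1/completions",
    data=json.dumps({"prompt": "hi"}).encode(),
    headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    answer = json.loads(resp.read())

up.shutdown()
proxy.shutdown()
```

After the round trip, `answer` is the upstream's reply and `LOG` holds one entry with the full prompt and response, which is the raw material for latency metrics and tool-call traces.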
A Python proof-of-concept for tracing multi-turn Agent-to-Agent (A2A) conversations as a single unified MLflow trace for LLM observability and evaluation.
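The "single unified trace" technique amounts to opening one root span for the whole conversation and nesting each turn's span under it. The sketch below uses a hand-rolled `Tracer`/`Span` pair as a stand-in for a real tracing backend (MLflow's actual API is `mlflow.start_span`); all names here are hypothetical:

```python
import contextlib
import itertools

_ids = itertools.count(1)

class Span:
    """One unit of work: a conversation, or a single A2A turn."""
    def __init__(self, name, parent=None):
        self.span_id = next(_ids)
        self.name = name
        self.parent_id = parent.span_id if parent else None
        self.children = []

class Tracer:
    """Tracks a stack of open spans so nesting falls out of `with` blocks."""
    def __init__(self):
        self.spans = []
        self._stack = []

    @contextlib.contextmanager
    def start_span(self, name):
        parent = self._stack[-1] if self._stack else None
        span = Span(name, parent)
        if parent:
            parent.children.append(span)
        self.spans.append(span)
        self._stack.append(span)
        try:
            yield span
        finally:
            self._stack.pop()

tracer = Tracer()

# One root span for the whole A2A conversation; every turn nests under it,
# so the multi-turn exchange shows up as a single trace tree.
with tracer.start_span("a2a_conversation") as root:
    for turn in ["agent_a:ask", "agent_b:answer", "agent_a:follow_up"]:
        with tracer.start_span(turn):
            pass  # real code would record the LLM call and its I/O here
```

The payoff is that an evaluation tool can walk from `root` to every turn, rather than stitching together disconnected per-turn traces.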
LLM Tracker is a platform for analyzing and visualizing LLM usage and costs across your projects. It provides a simple interface to monitor your expenses and understand how your different models are being used.