# ZeroEval Documentation

## Docs

- [Introduction](https://docs.zeroeval.com/autotune/introduction.md): Version, track, and optimize every prompt your agent uses
- [Optimization](https://docs.zeroeval.com/autotune/prompts/optimization.md): Use feedback on production traces to generate and validate better prompts
- [API Reference](https://docs.zeroeval.com/autotune/reference.md): REST API for managing prompts, versions, and deployments
- [Python](https://docs.zeroeval.com/autotune/sdks/python.md): Track and version prompts in Python with ze.prompt()
- [TypeScript](https://docs.zeroeval.com/autotune/sdks/typescript.md): Track and version prompts in TypeScript with ze.prompt()
- [API Reference](https://docs.zeroeval.com/feedback/api-reference.md): REST API for submitting and retrieving feedback
- [Introduction](https://docs.zeroeval.com/feedback/human-feedback.md): Collect feedback from reviewers in the dashboard or from end users via the API
- [Introduction](https://docs.zeroeval.com/feedback/introduction.md): Attach human and AI feedback to your agent interactions to drive quality improvements
- [Python](https://docs.zeroeval.com/feedback/python.md): Submit completion feedback from Python to power prompt optimization
- [TypeScript](https://docs.zeroeval.com/feedback/typescript.md): Submit completion feedback from TypeScript to power prompt optimization
- [CLI](https://docs.zeroeval.com/integrations/cli.md): Manage traces, prompts, judges, and optimization from your terminal
- [Introduction](https://docs.zeroeval.com/integrations/introduction.md): Ways to integrate ZeroEval beyond the core SDKs
- [MCP](https://docs.zeroeval.com/integrations/mcp.md): Connect AI agents to ZeroEval via the Model Context Protocol
- [Skills](https://docs.zeroeval.com/integrations/skills.md): Set up ZeroEval from inside Cursor, Claude Code, Codex, and other coding agents
- [Calibration](https://docs.zeroeval.com/judges/calibration.md): Correct judge evaluations to improve accuracy over time
- [Introduction](https://docs.zeroeval.com/judges/introduction.md): AI evaluators that give automated feedback on your agent's interactions
- [Multimodal Evaluation](https://docs.zeroeval.com/judges/multimodal-evaluation.md): Evaluate screenshots and images with LLM judges
- [Pulling Evaluations](https://docs.zeroeval.com/judges/pull-evaluations.md): Retrieve judge evaluations via SDK or REST API
- [API Reference](https://docs.zeroeval.com/tracing/api-reference.md): REST API for ingesting and querying spans, traces, and sessions
- [Introduction](https://docs.zeroeval.com/tracing/introduction.md): Capture every step your AI agent takes so you can debug, evaluate, and optimize
- [OpenTelemetry](https://docs.zeroeval.com/tracing/opentelemetry.md): Send traces to ZeroEval via the OpenTelemetry Protocol (OTLP)
- [Integrations](https://docs.zeroeval.com/tracing/sdks/python/integrations.md): Automatic instrumentation for popular AI/ML frameworks
- [Reference](https://docs.zeroeval.com/tracing/sdks/python/reference.md): Complete API reference for the Python SDK
- [Setup](https://docs.zeroeval.com/tracing/sdks/python/setup.md): Get started with ZeroEval tracing in Python applications
- [Integrations](https://docs.zeroeval.com/tracing/sdks/typescript/integrations.md): Automatic tracing for popular AI/ML libraries
- [Reference](https://docs.zeroeval.com/tracing/sdks/typescript/reference.md): Complete API reference for the TypeScript SDK
- [Setup](https://docs.zeroeval.com/tracing/sdks/typescript/setup.md): Get started with ZeroEval tracing in TypeScript and JavaScript applications

## OpenAPI Specs

- [openapi](https://docs.zeroeval.com/api-reference/openapi.json)

Built with [Mintlify](https://mintlify.com).