feat: AI prompt management dashboard and enhanced span inspectors#3244
## Walkthrough

Implements end-to-end prompt management: adds Prompt and PromptVersion Prisma models and migrations; extends the ClickHouse LLM metrics schema and ingestion with prompt fields; introduces PromptService, PromptPresenter, API routes (list, resolve, promote, override lifecycle, generations), and ClickHouse queries; adds frontend pages and components (prompts list, prompt detail, filters, dashboard integration, span inspectors, editors); enriches events and metrics with prompt telemetry; and integrates prompts into worker indexing, the SDK (define/resolve prompts), CLI/MCP tools, the resource catalog, and related schemas and types across the codebase.
## Prompt management

Define prompts in your code with `prompts.define()`, then manage versions and overrides from the dashboard without redeploying. The prompts list page shows each prompt with its current version, model, override status, and a usage sparkline over the last 24 hours.
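To make the workflow concrete, here is a minimal, self-contained sketch of the define-then-resolve pattern. `definePrompt` and its `resolve` method are hypothetical stand-ins for the SDK's `prompts.define()` and `prompt.resolve()`; the real API surface, option names, and dashboard-driven override behavior may differ.

```typescript
// Hypothetical stand-in for prompts.define()/prompt.resolve():
// a prompt is a named template whose {{variables}} are filled in
// at resolve time. Field names here are assumptions for illustration.
type PromptDefinition = {
  name: string;
  model: string;
  template: string; // e.g. "Summarize {{topic}} in a {{tone}} tone."
};

type ResolvedPrompt = {
  name: string;
  model: string;
  content: string;
};

function definePrompt(def: PromptDefinition) {
  return {
    // Substitute {{variable}} placeholders with the supplied values;
    // unknown variables are left in place rather than dropped.
    resolve(variables: Record<string, string>): ResolvedPrompt {
      const content = def.template.replace(
        /\{\{(\w+)\}\}/g,
        (_match, key: string) => variables[key] ?? `{{${key}}}`,
      );
      return { name: def.name, model: def.model, content };
    },
  };
}

const summarize = definePrompt({
  name: "summarize",
  model: "gpt-4o-mini",
  template: "Summarize {{topic}} in a {{tone}} tone.",
});

const resolved = summarize.resolve({
  topic: "the release notes",
  tone: "neutral",
});
// resolved.content === "Summarize the release notes in a neutral tone."
```

In the real feature, the template content would come from the active prompt version (or an override) fetched from the server at resolve time, which is what allows editing prompts from the dashboard without a redeploy.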
From the prompt detail page you can promote versions and manage overrides; changes take effect the next time `prompt.resolve()` is called.

## AI span inspectors
Every AI SDK operation now gets a custom inspector in the run trace view:
- `ai.generateText` / `ai.streamText` — Shows model, token usage, cost, the full message thread (system prompt, user message, assistant response), and linked prompt details
- `ai.generateObject` / `ai.streamObject` — Same as above, plus the JSON schema and the structured output
- `ai.toolCall` — Shows tool name, call ID, and input arguments
- `ai.embed` — Shows model and the text being embedded

For generation spans linked to a prompt, a "Prompt" tab shows the prompt metadata, the input variables passed to `resolve()`, and the template content from the prompt version. All AI span inspectors include a compact timestamp and duration header.
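The cost figure an inspector displays can be derived from the span's token usage and the synced per-model pricing. The sketch below shows one plausible way to do that; the type and field names (`inputPerMTok`, `promptTokens`, etc.) are assumptions for illustration, not the PR's actual schema.

```typescript
// Hypothetical sketch: derive a displayed cost from token usage and
// per-million-token pricing. Field names are assumptions, not the
// real ClickHouse/pricing schema from this PR.
type ModelPricing = {
  inputPerMTok: number; // USD per 1M input (prompt) tokens
  outputPerMTok: number; // USD per 1M output (completion) tokens
};

type TokenUsage = {
  promptTokens: number;
  completionTokens: number;
};

function estimateCostUsd(usage: TokenUsage, pricing: ModelPricing): number {
  const input = (usage.promptTokens / 1_000_000) * pricing.inputPerMTok;
  const output = (usage.completionTokens / 1_000_000) * pricing.outputPerMTok;
  return input + output;
}

// e.g. 1,200 prompt tokens and 300 completion tokens
// at $0.15 / $0.60 per million tokens:
const cost = estimateCostUsd(
  { promptTokens: 1_200, completionTokens: 300 },
  { inputPerMTok: 0.15, outputPerMTok: 0.6 },
);
// cost ≈ 0.00036 USD
```

Keeping pricing in a separate synced table (as the commit history's "llm pricing sync" suggests) means cost can be recomputed when prices change, instead of being frozen at ingestion time.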
## Other improvements

- … `@window-splitter/state` to fix snapshot restoration
- Fixed `<div>` inside `<p>` DOM nesting warnings in span titles and chat messages

## Screenshots