Extended tracing to all tool calls #38
Conversation
Pull Request Overview
This PR extends tracing capabilities to all tool calls by introducing a centralized LLM tracing wrapper that ensures consistent observability across the AI chat system.
- Introduces `LLMTracingWrapper.ts` to wrap LLM calls with tracing metadata
- Updates all tool implementations to use the new `callLLMWithTracing` function
- Refactors direct `LLMClient.call()` usage to include proper tracing context
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| `LLMTracingWrapper.ts` | New wrapper module providing centralized LLM call tracing functionality |
| Multiple tool files | Updated to use `callLLMWithTracing` instead of direct `LLMClient.call()` |
| `AgentRunner.ts` | Added tracing to agent progress summarization |
| `BUILD.gn` | Added new tracing wrapper module to build configuration |
```ts
      messages,
      systemPrompt: prompt.systemPrompt,
      temperature: 0.3,
      options: { retryConfig: { maxRetries: 3 } }
```
The `retryConfig` parameter has been moved inside an `options` object, which changes the API structure from the original flat configuration. Verify this nesting is compatible with the structure the underlying client expects.
Suggested change:

```diff
-      options: { retryConfig: { maxRetries: 3 } }
+      retryConfig: { maxRetries: 3 }
```
```ts
      provider: llmCallConfig.provider,
      model: llmCallConfig.model,
      messages: llmCallConfig.messages,
      systemPrompt: llmCallConfig.systemPrompt || '',
```
Defaulting to empty string when systemPrompt is undefined may not be appropriate for all LLM providers. Consider preserving the undefined value to let the underlying LLM client handle it according to provider requirements.
Suggested change:

```diff
-      systemPrompt: llmCallConfig.systemPrompt || '',
+      systemPrompt: llmCallConfig.systemPrompt,
```
```ts
      messages: llmCallConfig.messages,
      systemPrompt: llmCallConfig.systemPrompt || '',
      temperature: llmCallConfig.temperature,
      ...llmCallConfig.options
```
Spreading `llmCallConfig.options` at the root level conflicts with the expected API structure, where these options should be nested under `options`. Any key inside `options` that collides with a sibling parameter would silently overwrite it, causing unexpected behavior.
Suggested change:

```diff
-      ...llmCallConfig.options
+      options: llmCallConfig.options
```