feat(core): Support truncation for LangChain integration request messages #18157

nicohrubec merged 5 commits into develop
node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
Bug: LLM Prompts: Inconsistent Truncation, Exceeding Limits
Plain LLM string prompts aren't truncated when converted to messages, creating an inconsistency with chat model messages. The extractLLMRequestAttributes function wraps prompts into the same {role, content} message format as chat models but doesn't apply truncateGenAiMessages before stringifying, so large prompts can exceed the byte limit while chat messages are properly truncated.
packages/core/src/utils/langchain/utils.ts, lines 253 to 257 (commit 02d2b96)
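The shape of the fix can be illustrated with a minimal sketch. All names here (truncateGenAiMessagesSketch, extractLLMPromptAttribute, MAX_GEN_AI_MESSAGES_BYTES) are hypothetical stand-ins, not the actual Sentry internals; the sketch only shows the principle the review asks for: truncate the wrapped prompt messages before stringifying, the same way chat model messages are handled.

```typescript
// Illustrative message shape; the real SDK types may differ.
type GenAiMessage = { role: string; content: string };

// 20KB byte budget mentioned in the review (assumed value for this sketch).
const MAX_GEN_AI_MESSAGES_BYTES = 20 * 1024;

const byteSize = (msgs: GenAiMessage[]): number =>
  Buffer.byteLength(JSON.stringify(msgs), 'utf8');

// Hypothetical truncation helper: drop the oldest messages until the
// serialized payload fits the budget; if a single message is still too
// large, roughly halve its content (character-based simplification).
function truncateGenAiMessagesSketch(messages: GenAiMessage[]): GenAiMessage[] {
  let result = [...messages];
  while (result.length > 1 && byteSize(result) > MAX_GEN_AI_MESSAGES_BYTES) {
    result = result.slice(1);
  }
  if (result.length === 1 && byteSize(result) > MAX_GEN_AI_MESSAGES_BYTES) {
    const only = result[0];
    result = [{ ...only, content: only.content.slice(0, MAX_GEN_AI_MESSAGES_BYTES / 2) }];
  }
  return result;
}

// The reported bug: plain string prompts were wrapped into {role, content}
// objects but stringified without truncation. The fix is to truncate first.
function extractLLMPromptAttribute(prompts: string[]): string {
  const messages: GenAiMessage[] = prompts.map(content => ({ role: 'user', content }));
  return JSON.stringify(truncateGenAiMessagesSketch(messages));
}
```

With this ordering, an oversized plain prompt is cut down to the byte budget before serialization, matching the behavior of the chat model path.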
Bug: Truncation Inconsistency Causes Oversized LLM Data
extractLLMRequestAttributes converts prompts to messages but doesn't apply truncation the way extractChatModelRequestAttributes does. This creates inconsistent behavior: chat model messages are truncated to the byte limit, but LLM string prompts are not, potentially sending oversized data that exceeds the intended 20KB limit.
packages/core/src/utils/langchain/utils.ts, lines 254 to 259 (commit a42c548)
This PR adds truncation support for LangChain integration request messages. All inputs are already normalized to arrays of messages, so no case distinction between strings and message arrays is needed here.
Adds tests verifying the behavior for (1) simple string inputs and (2) conversations given as arrays of strings.
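The normalization the description relies on can be sketched as follows. This is an illustrative stand-in (normalizeInput is a hypothetical name, not the integration's actual function): a single string prompt and an array of strings both end up as the same array-of-messages shape, which is why one truncation path can cover both test cases.

```typescript
// Illustrative message shape for this sketch.
type Message = { role: string; content: string };

// Hypothetical normalization: wrap a lone string, then map every prompt
// into a {role, content} message. After this step, truncation only ever
// sees an array of messages, regardless of the original input shape.
function normalizeInput(input: string | string[]): Message[] {
  const prompts = typeof input === 'string' ? [input] : input;
  return prompts.map(content => ({ role: 'user', content }));
}
```

Because both input shapes converge here, tests for "simple string" and "array of strings" exercise the same downstream truncation logic.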
Closes #18018