Commit 6153a56

cursoragent and claude committed

Fix streaming message capture order

Move input message extraction in the streaming ModelRequestNode wrapper to run after the original stream context exits, matching non-streaming behavior so post-execution instructions are captured.

Co-Authored-By: gpt-5.3-codex-high <[email protected]>

1 parent 49dc6f4 · commit 6153a56

File tree

1 file changed: +3 −3 lines


sentry_sdk/integrations/pydantic_ai/patches/graph_nodes.py

Lines changed: 3 additions & 3 deletions
@@ -91,13 +91,13 @@ async def wrapped_model_request_stream(self: "Any", ctx: "Any") -> "Any":
 
     # Create chat span for streaming request
     with ai_client_span(None, model, model_settings) as span:
-        if messages:
-            _set_input_messages(span, messages)
-
         # Call the original stream method
         async with original_stream_method(self, ctx) as stream:
            yield stream
 
+        if messages:
+            _set_input_messages(span, messages)
+
        # After streaming completes, update span with response data
        # The ModelRequestNode stores the final response in _result
        model_response = None
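The fix relies on how an async-generator wrapper orders its work: anything placed after the inner `async with ... yield` block runs only once the caller has finished consuming the stream and the inner context has exited, so messages appended during streaming are visible at capture time. A minimal sketch of that ordering, using hypothetical stand-in names rather than Sentry's actual helpers:

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def original_stream():
    # Stand-in for original_stream_method(self, ctx).
    events.append("stream opened")
    yield "stream"
    events.append("stream closed")

@asynccontextmanager
async def wrapped_model_request_stream():
    # Stand-in for the patched wrapper: state is captured only after
    # the inner stream context has fully exited (the "+" lines in the
    # diff), not before it is entered (the "-" lines).
    events.append("span opened")
    async with original_stream() as stream:
        yield stream
    events.append("messages captured")  # runs after the stream closes

async def main():
    async with wrapped_model_request_stream() as s:
        events.append(f"consumed {s}")

asyncio.run(main())
print(events)
```

Running this shows "messages captured" ordered after both the consumer's work and the inner stream's teardown, which is exactly why moving the `_set_input_messages` call below the `async with` block picks up post-execution instructions.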
