# AI Integration Test Failure
Date: 2026-03-12T04:11:32.400Z
Platform: python
Framework: all
Workflow Run: https://github.com/getsentry/sentry-python/actions/runs/22986069880
## Summary
| Metric | Value |
|---|---|
| Total Tests | 176 |
| Passed | 78 |
| Failed | 98 |
| Skipped | 0 |
| Duration | 363.18s |
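The summary totals are internally consistent; a quick sanity check using the numbers copied from the table above:

```python
# Sanity-check the summary table: passed + failed + skipped should equal
# the total, and the overall pass rate follows directly.
total, passed, failed, skipped = 176, 78, 98, 0

assert passed + failed + skipped == total
print(f"pass rate: {passed / total:.1%}")  # -> pass rate: 44.3%
```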
## Results by Framework
### FAILED - python/anthropic
| Test | Status | Duration |
|---|---|---|
| Basic LLM Test (sync, streaming) | ✗ failed | 2.94s |
| Basic LLM Test (sync, blocking) | ✓ passed | 1.61s |
| Basic LLM Test (async, streaming) | ✗ failed | 1.63s |
| Basic LLM Test (async, blocking) | ✓ passed | 1.57s |
| Multi-Turn LLM Test (sync, streaming) | ✗ failed | 23.47s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 35.38s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 21.98s |
| Multi-Turn LLM Test (async, blocking) | ✗ failed | 24.27s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 2.40s |
| Basic Error LLM Test (sync, blocking) | ✓ passed | 2.46s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 2.47s |
| Basic Error LLM Test (async, blocking) | ✓ passed | 2.44s |
| Vision LLM Test (sync, streaming) | ✗ failed | 19.07s |
| Vision LLM Test (sync, blocking) | ✓ passed | 18.95s |
| Vision LLM Test (async, streaming) | ✗ failed | 24.87s |
| Vision LLM Test (async, blocking) | ✗ failed | 24.02s |
| Long Input LLM Test (sync, streaming) | ✗ failed | 24.00s |
| Long Input LLM Test (sync, blocking) | ✗ failed | 23.99s |
| Long Input LLM Test (async, streaming) | ✗ failed | 24.52s |
| Long Input LLM Test (async, blocking) | ✓ passed | 11.94s |
| Conversation ID LLM Test (sync, streaming) | ✗ failed | 24.14s |
| Conversation ID LLM Test (sync, blocking) | ✗ failed | 24.05s |
| Conversation ID LLM Test (async, streaming) | ✗ failed | 22.99s |
| Conversation ID LLM Test (async, blocking) | ✓ passed | 49.18s |
### FAILED - python/google-genai
| Test | Status | Duration |
|---|---|---|
| Basic Embeddings Test (sync, blocking) | ✗ failed | 1.99s |
| Basic Embeddings Test (async, blocking) | ✗ failed | 1.05s |
| Basic LLM Test (sync, streaming) | ✗ failed | 1.11s |
| Basic LLM Test (sync, blocking) | ✗ failed | 1.69s |
| Basic LLM Test (async, streaming) | ✗ failed | 1.64s |
| Basic LLM Test (async, blocking) | ✗ failed | 1.25s |
| Multi-Turn LLM Test (sync, streaming) | ✗ failed | 1.74s |
| Multi-Turn LLM Test (sync, blocking) | ✗ failed | 1.13s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 1.68s |
| Multi-Turn LLM Test (async, blocking) | ✗ failed | 1.41s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 1.47s |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 1.46s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 1.43s |
| Basic Error LLM Test (async, blocking) | ✗ failed | 1.59s |
| Vision LLM Test (sync, streaming) | ✗ failed | 1.72s |
| Vision LLM Test (sync, blocking) | ✗ failed | 1.16s |
| Vision LLM Test (async, streaming) | ✗ failed | 1.65s |
| Vision LLM Test (async, blocking) | ✗ failed | 1.70s |
| Long Input LLM Test (sync, streaming) | ✗ failed | 1.15s |
| Long Input LLM Test (sync, blocking) | ✗ failed | 1.39s |
| Long Input LLM Test (async, streaming) | ✗ failed | 1.27s |
| Long Input LLM Test (async, blocking) | ✗ failed | 1.75s |
| Conversation ID LLM Test (sync, streaming) | ✗ failed | 1.63s |
| Conversation ID LLM Test (sync, blocking) | ✗ failed | 1.15s |
| Conversation ID LLM Test (async, streaming) | ✗ failed | 1.31s |
| Conversation ID LLM Test (async, blocking) | ✗ failed | 1.57s |
### FAILED - python/langchain
| Test | Status | Duration |
|---|---|---|
| Basic Embeddings Test (sync, blocking) | ✗ failed | 4.84s |
| Basic Embeddings Test (async, blocking) | ✗ failed | 2.53s |
| Basic LLM Test (sync, streaming) | ✓ passed | 4.73s |
| Basic LLM Test (sync, blocking) | ✓ passed | 4.41s |
| Basic LLM Test (async, streaming) | ✓ passed | 3.98s |
| Basic LLM Test (async, blocking) | ✓ passed | 4.78s |
| Multi-Turn LLM Test (sync, streaming) | ✓ passed | 29.76s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 28.38s |
| Multi-Turn LLM Test (async, streaming) | ✓ passed | 22.55s |
| Multi-Turn LLM Test (async, blocking) | ✓ passed | 25.59s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 1.10s |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 1.14s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 1.21s |
| Basic Error LLM Test (async, blocking) | ✗ failed | 1.49s |
| Vision LLM Test (sync, streaming) | ✓ passed | 2.95s |
| Vision LLM Test (sync, blocking) | ✓ passed | 3.00s |
| Vision LLM Test (async, streaming) | ✓ passed | 3.04s |
| Vision LLM Test (async, blocking) | ✓ passed | 2.74s |
| Long Input LLM Test (sync, streaming) | ✓ passed | 4.38s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 3.69s |
| Long Input LLM Test (async, streaming) | ✓ passed | 2.93s |
| Long Input LLM Test (async, blocking) | ✓ passed | 3.32s |
| Conversation ID LLM Test (sync, streaming) | ✓ passed | 9.96s |
| Conversation ID LLM Test (sync, blocking) | ✓ passed | 10.49s |
| Conversation ID LLM Test (async, streaming) | ✓ passed | 10.26s |
| Conversation ID LLM Test (async, blocking) | ✓ passed | 11.14s |
### FAILED - python/langgraph
| Test | Status | Duration |
|---|---|---|
| Basic Agent Test (sync) | ✗ failed | 7.67s |
| Basic Agent Test (async) | ✗ failed | 7.63s |
| Tool Call Agent Test (sync) | ✗ failed | 12.93s |
| Tool Call Agent Test (async) | ✗ failed | 10.02s |
| Tool Error Agent Test (sync) | ✗ failed | 4.14s |
| Tool Error Agent Test (async) | ✗ failed | 3.76s |
| Vision Agent Test (sync) | ✗ failed | 4.18s |
| Vision Agent Test (async) | ✗ failed | 3.99s |
| Long Input Agent Test (sync) | ✗ failed | 27.42s |
| Long Input Agent Test (async) | ✗ failed | 41.15s |
| Conversation ID Agent Test (sync) | ✗ failed | 5.37s |
| Conversation ID Agent Test (async) | ✗ failed | 11.24s |
### FAILED - python/litellm
| Test | Status | Duration |
|---|---|---|
| Basic Embeddings Test (sync, blocking) | ✓ passed | 6.16s |
| Basic Embeddings Test (async, blocking) | ✗ failed | 6.17s |
| Basic LLM Test (sync, streaming) | ✓ passed | 5.91s |
| Basic LLM Test (sync, blocking) | ✓ passed | 4.71s |
| Basic LLM Test (async, streaming) | ✗ failed | 5.45s |
| Basic LLM Test (async, blocking) | ✗ failed | 4.73s |
| Multi-Turn LLM Test (sync, streaming) | ✓ passed | 27.91s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 21.74s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 31.14s |
| Multi-Turn LLM Test (async, blocking) | ✗ failed | 28.91s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 2.52s |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 2.60s |
| Basic Error LLM Test (async, streaming) | ✗ failed | 2.55s |
| Basic Error LLM Test (async, blocking) | ✗ failed | 2.60s |
| Vision LLM Test (sync, streaming) | ✓ passed | 5.31s |
| Vision LLM Test (sync, blocking) | ✓ passed | 4.18s |
| Vision LLM Test (async, streaming) | ✗ failed | 4.80s |
| Vision LLM Test (async, blocking) | ✗ failed | 5.52s |
| Long Input LLM Test (sync, streaming) | ✓ passed | 6.20s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 5.56s |
| Long Input LLM Test (async, streaming) | ✗ failed | 4.71s |
| Long Input LLM Test (async, blocking) | ✗ failed | 5.69s |
| Conversation ID LLM Test (sync, streaming) | ✓ passed | 12.00s |
| Conversation ID LLM Test (sync, blocking) | ✓ passed | 13.56s |
| Conversation ID LLM Test (async, streaming) | ✗ failed | 13.10s |
| Conversation ID LLM Test (async, blocking) | ✗ failed | 11.73s |
### PASSED - python/manual
| Test | Status | Duration |
|---|---|---|
| Basic Agent Test (sync) | ✓ passed | 724ms |
| Basic Agent Test (async) | ✓ passed | 678ms |
| Tool Call Agent Test (sync) | ✓ passed | 677ms |
| Tool Call Agent Test (async) | ✓ passed | 675ms |
| Tool Error Agent Test (sync) | ✓ passed | 678ms |
| Tool Error Agent Test (async) | ✓ passed | 675ms |
| Vision Agent Test (sync) | ✓ passed | 676ms |
| Vision Agent Test (async) | ✓ passed | 679ms |
| Long Input Agent Test (sync) | ✓ passed | 676ms |
| Long Input Agent Test (async) | ✓ passed | 679ms |
| Conversation ID Agent Test (sync) | ✓ passed | 679ms |
| Conversation ID Agent Test (async) | ✓ passed | 687ms |
| Basic Embeddings Test (sync, blocking) | ✓ passed | 761ms |
| Basic Embeddings Test (async, blocking) | ✓ passed | 768ms |
| Basic LLM Test (sync, blocking) | ✓ passed | 675ms |
| Basic LLM Test (async, blocking) | ✓ passed | 696ms |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 675ms |
| Multi-Turn LLM Test (async, blocking) | ✓ passed | 685ms |
| Vision LLM Test (sync, blocking) | ✓ passed | 685ms |
| Vision LLM Test (async, blocking) | ✓ passed | 681ms |
| Long Input LLM Test (sync, blocking) | ✓ passed | 685ms |
| Long Input LLM Test (async, blocking) | ✓ passed | 681ms |
| Conversation ID LLM Test (sync, blocking) | ✓ passed | 680ms |
| Conversation ID LLM Test (async, blocking) | ✓ passed | 682ms |
### FAILED - python/openai
| Test | Status | Duration |
|---|---|---|
| Basic Embeddings Test (sync, blocking) | ✓ passed | 2.91s |
| Basic Embeddings Test (async, blocking) | ✓ passed | 1.93s |
| Basic LLM Test (sync, streaming) | ✗ failed | 3.17s |
| Basic LLM Test (sync, blocking) | ✓ passed | 2.77s |
| Basic LLM Test (async, streaming) | ✗ failed | 2.89s |
| Basic LLM Test (async, blocking) | ✓ passed | 3.18s |
| Multi-Turn LLM Test (sync, streaming) | ✗ failed | 24.09s |
| Multi-Turn LLM Test (sync, blocking) | ✓ passed | 21.60s |
| Multi-Turn LLM Test (async, streaming) | ✗ failed | 19.82s |
| Multi-Turn LLM Test (async, blocking) | ✓ passed | 23.33s |
| Basic Error LLM Test (sync, streaming) | ✗ failed | 598ms |
| Basic Error LLM Test (sync, blocking) | ✗ failed | 606ms |
| Basic Error LLM Test (async, streaming) | ✗ failed | 600ms |
| Basic Error LLM Test (async, blocking) | ✗ failed | 601ms |
| Vision LLM Test (sync, streaming) | ✗ failed | 1.88s |
| Vision LLM Test (sync, blocking) | ✗ failed | 1.83s |
| Vision LLM Test (async, streaming) | ✗ failed | 2.02s |
| Vision LLM Test (async, blocking) | ✗ failed | 2.51s |
| Long Input LLM Test (sync, streaming) | ✗ failed | 2.90s |
| Long Input LLM Test (sync, blocking) | ✓ passed | 2.98s |
| Long Input LLM Test (async, streaming) | ✗ failed | 2.76s |
| Long Input LLM Test (async, blocking) | ✓ passed | 2.40s |
| Conversation ID LLM Test (sync, streaming) | ✗ failed | 13.33s |
| Conversation ID LLM Test (sync, blocking) | ✓ passed | 13.69s |
| Conversation ID LLM Test (async, streaming) | ✗ failed | 9.63s |
| Conversation ID LLM Test (async, blocking) | ✓ passed | 12.42s |
### FAILED - python/openai-agents
| Test | Status | Duration |
|---|---|---|
| Basic Agent Test (async) | ✓ passed | 5.71s |
| Tool Call Agent Test (async) | ✗ failed | 9.37s |
| Tool Error Agent Test (async) | ✗ failed | 6.36s |
| Vision Agent Test (async) | ✗ failed | 4.33s |
| Long Input Agent Test (async) | ✗ failed | 31.79s |
| Conversation ID Agent Test (async) | ✓ passed | 7.25s |
### FAILED - python/pydantic-ai
| Test | Status | Duration |
|---|---|---|
| Basic Agent Test (async) | ✓ passed | 9.55s |
| Tool Call Agent Test (async) | ✗ failed | 7.13s |
| Tool Error Agent Test (async) | ✗ failed | 6.09s |
| Vision Agent Test (async) | ✓ passed | 5.79s |
| Long Input Agent Test (async) | ✗ failed | 33.73s |
| Conversation ID Agent Test (async) | ✓ passed | 8.12s |
## Failed Test Details
#### python/langgraph - Basic Agent Test (sync)
Error: 1 check(s) failed:
Child span (gen_ai.chat, id: ab834129) should have gen_ai.agent.name attribute
#### python/langgraph - Basic Agent Test (async)
Error: 1 check(s) failed:
Child span (gen_ai.chat, id: 9c228c16) should have gen_ai.agent.name attribute
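Most of the langgraph failures reduce to the same missing-attribute check. A minimal sketch of that kind of validation, assuming spans are plain dicts with an `op` and a `data` mapping (the harness's actual span shape may differ):

```python
def missing_agent_name(spans):
    """Return ids of gen_ai.* spans whose data lacks gen_ai.agent.name."""
    return [
        span["id"]
        for span in spans
        if span.get("op", "").startswith("gen_ai.")
        and "gen_ai.agent.name" not in span.get("data", {})
    ]

# Example mirroring the failures above: one span is missing the attribute.
spans = [
    {"id": "ab834129", "op": "gen_ai.chat", "data": {}},
    {"id": "9c228c16", "op": "gen_ai.chat",
     "data": {"gen_ai.agent.name": "test-agent"}},
]
print(missing_agent_name(spans))  # -> ['ab834129']
```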
#### python/langgraph - Tool Call Agent Test (sync)
Error: 4 check(s) failed:
Attribute validation failed:
Span 81e7cedd: Attribute 'gen_ai.tool.type' must exist but is missing
Span a7845334: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 9b4b6995) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 81e7cedd) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 9bf5c327) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: a7845334) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: a9b8a02f) should have gen_ai.agent.name attribute
Tool call "add" should have argument "a"
Tool call "add" should have argument "b"
Tool call "multiply" should have argument "a"
Tool call "multiply" should have argument "b"
Tool "add" should have type "function" but has "undefined"
Tool "add" output should equal 8 but is {"content":"8","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"add","id":"None","tool_call_id":"call_I31twT7dtUVVtXkvjF0V59nW","artifact":"None","status":"success"}
Tool "multiply" should have type "function" but has "undefined"
Tool "multiply" output should equal 32 but is {"content":"32","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"multiply","id":"None","tool_call_id":"call_ElGuvwfcZBW9K3ZVWNttNDnR","artifact":"None","status":"success"}
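The output mismatches above show the integration recording the entire serialized LangChain `ToolMessage` rather than the bare return value. A hedged illustration of unwrapping such a payload before comparison (`tool_output` is a hypothetical helper, not part of the SDK):

```python
import json

# The recorded value from the failing "add" check, verbatim from the report.
recorded = json.loads(
    '{"content":"8","additional_kwargs":{},"response_metadata":{},'
    '"type":"tool","name":"add","id":"None",'
    '"tool_call_id":"call_I31twT7dtUVVtXkvjF0V59nW","artifact":"None",'
    '"status":"success"}'
)

def tool_output(value):
    """Unwrap a serialized ToolMessage to its content; pass scalars through."""
    if isinstance(value, dict) and value.get("type") == "tool":
        return value.get("content")
    return value

print(tool_output(recorded))  # -> 8
```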
#### python/langgraph - Tool Call Agent Test (async)
Error: 4 check(s) failed:
Attribute validation failed:
Span 8c665828: Attribute 'gen_ai.tool.type' must exist but is missing
Span abb2ae84: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 89f06f56) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 8c665828) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: bf128c9e) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: abb2ae84) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 952c423d) should have gen_ai.agent.name attribute
Tool call "add" should have argument "a"
Tool call "add" should have argument "b"
Tool call "multiply" should have argument "a"
Tool call "multiply" should have argument "b"
Tool "add" should have type "function" but has "undefined"
Tool "add" output should equal 8 but is {"content":"8","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"add","id":"None","tool_call_id":"call_92Sazmv4rHHTcJdQWhz1c2Pw","artifact":"None","status":"success"}
Tool "multiply" should have type "function" but has "undefined"
Tool "multiply" output should equal 32 but is {"content":"32","additional_kwargs":{},"response_metadata":{},"type":"tool","name":"multiply","id":"None","tool_call_id":"call_LBkCVoaaemCQIiESoTSMIjGR","artifact":"None","status":"success"}
#### python/langgraph - Tool Error Agent Test (sync)
Error: 4 check(s) failed:
Attribute validation failed:
Span 942fa1c1: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: b61aa10a) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 942fa1c1) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: b14bae19) should have gen_ai.agent.name attribute
Tool call "read_file" should have argument "path"
Tool span should have an error indicator (status=error, data.error, data.exception, gen_ai.tool.error, or tags.error)
#### python/langgraph - Tool Error Agent Test (async)
Error: 4 check(s) failed:
Attribute validation failed:
Span 8c76d5f0: Attribute 'gen_ai.tool.type' must exist but is missing
Child span (gen_ai.chat, id: 99296a7d) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: 8c76d5f0) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 806b0c6c) should have gen_ai.agent.name attribute
Tool call "read_file" should have argument "path"
Tool span should have an error indicator (status=error, data.error, data.exception, gen_ai.tool.error, or tags.error)
#### python/langgraph - Vision Agent Test (sync)
Error: 2 check(s) failed:
Child span (gen_ai.chat, id: afc6442e) should have gen_ai.agent.name attribute
Messages should not contain raw base64 data (should be redacted)
#### python/langgraph - Vision Agent Test (async)
Error: 2 check(s) failed:
Child span (gen_ai.chat, id: 8fe207fa) should have gen_ai.agent.name attribute
Messages should not contain raw base64 data (should be redacted)
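The base64 checks expect image payloads to be scrubbed from recorded messages. A minimal sketch of such redaction, assuming any sufficiently long base64 run should be replaced by a placeholder (the run-length threshold and marker text are illustrative assumptions):

```python
import base64
import re

# Illustrative threshold: treat any 64+ character base64 run as binary data.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{64,}")

def redact_base64(text, marker="[Blob substitute]"):
    """Replace long base64 runs in a message with a redaction marker."""
    return BASE64_RUN.sub(marker, text)

blob = base64.b64encode(b"\x89PNG" * 100).decode()
message = f"describe this image: {blob}"
redacted = redact_base64(message)
print("[Blob substitute]" in redacted)      # -> True
print(BASE64_RUN.search(redacted) is None)  # -> True
```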
#### python/langgraph - Long Input Agent Test (sync)
Error: 1 check(s) failed:
Child span (gen_ai.chat, id: ae4b21fa) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: a53a1b01) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: a8920536) should have gen_ai.agent.name attribute
#### python/langgraph - Long Input Agent Test (async)
Error: 1 check(s) failed:
Child span (gen_ai.chat, id: 86b0d044) should have gen_ai.agent.name attribute
Child span (gen_ai.execute_tool, id: b34d252c) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 9e461e9b) should have gen_ai.agent.name attribute
#### python/langgraph - Conversation ID Agent Test (sync)
Error: 1 check(s) failed:
Child span (gen_ai.chat, id: aa65b496) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 90d7b322) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: bd8a9478) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 8ebcc4f3) should have gen_ai.agent.name attribute
#### python/langgraph - Conversation ID Agent Test (async)
Error: 1 check(s) failed:
Child span (gen_ai.chat, id: b3f90cdf) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: bab99b60) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: a3823f0b) should have gen_ai.agent.name attribute
Child span (gen_ai.chat, id: 9f60de7f) should have gen_ai.agent.name attribute
#### python/openai-agents - Tool Call Agent Test (async)
Error: 1 check(s) failed:
Should have gen_ai.output.messages or gen_ai.response.text
Should have gen_ai.output.messages or gen_ai.response.text
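This check asserts that a chat span records its result in at least one of two attributes. A hedged sketch of the predicate, assuming span data is a flat dict of attributes (the example attribute values are made up for illustration):

```python
def has_output(data):
    """True if a span exposes structured messages or plain response text."""
    return bool(
        data.get("gen_ai.output.messages") or data.get("gen_ai.response.text")
    )

print(has_output({"gen_ai.response.text": "8 * 4 = 32"}))  # -> True
print(has_output({"gen_ai.request.model": "gpt-4o"}))      # -> False
```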
#### python/openai-agents - Tool Error Agent Test (async)
Error: 1 check(s) failed:
Should have gen_ai.output.messages or gen_ai.response.text
#### python/openai-agents - Vision Agent Test (async)
Error: 1 check(s) failed:
Messages should not contain raw base64 data (should be redacted)
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
#### python/openai-agents - Long Input Agent Test (async)
Error: 2 check(s) failed:
Should have gen_ai.output.messages or gen_ai.response.text
Message should be trimmed (length 25667 > 20000)
Message should be trimmed (length 25667 > 20000)
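The trimming failures compare recorded message length against a 20000-character cap (the 25667 figure comes from the report). A minimal sketch of the expected truncation, with the cap and the ellipsis convention as assumptions:

```python
MAX_LEN = 20000  # cap implied by the failing check above

def trim_message(text, max_len=MAX_LEN):
    """Truncate long messages so the recorded payload stays within the cap."""
    if len(text) <= max_len:
        return text
    return text[: max_len - 3] + "..."

long_input = "x" * 25667  # length reported by the failing check
print(len(trim_message(long_input)))  # -> 20000
```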
#### python/pydantic-ai - Tool Call Agent Test (async)
Error: 1 check(s) failed:
Should have gen_ai.output.messages or gen_ai.response.text
Should have gen_ai.output.messages or gen_ai.response.text
#### python/pydantic-ai - Tool Error Agent Test (async)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py", line 48, in <module>
asyncio.run(main())
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py", line 41, in main
result = await agent.run("Please read the file at /nonexistent/file.txt and tell me what it contains. Use the read_file tool.")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/pydantic_ai/patches/agent_run.py", line 129, in wrapper
reraise(*exc_info)
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
raise value
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/pydantic_ai/patches/agent_run.py", line 119, in wrapper
result = await original_func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/agent/abstract.py", line 259, in run
async with self.iter(
^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 231, in __aexit__
await self.gen.athrow(value)
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/agent/__init__.py", line 707, in iter
async with graph.iter(
^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 231, in __aexit__
await self.gen.athrow(value)
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 270, in iter
async with GraphRun[StateT, DepsT, OutputT](
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 423, in __aexit__
await self._async_exit_stack.__aexit__(exc_type, exc_val, exc_tb)
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 754, in __aexit__
raise exc_details[1]
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 735, in __aexit__
cb_suppress = cb(*exc_details)
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 981, in _unwrap_exception_groups
raise exception
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 750, in _run_tracked_task
result = await self._run_task(t_)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/graph.py", line 782, in _run_task
output = await node.call(step_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_graph/beta/step.py", line 253, in _call_node
return await node.run(GraphRunContext(state=ctx.state, deps=ctx.deps))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 593, in run
async with self.stream(ctx):
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/contextlib.py", line 217, in __aexit__
await anext(self.gen)
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 607, in stream
async for _event in stream:
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 744, in _run_stream
async for event in self._events_iterator:
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 705, in _run_stream
async for event in self._handle_tool_calls(ctx, tool_calls):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 760, in _handle_tool_calls
async for event in process_tool_calls(
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1011, in process_tool_calls
async for event in _call_tools(
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1161, in _call_tools
if event := await handle_call_or_result(coro_or_task=task, index=index): # pyright: ignore[reportArgumentType]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1112, in handle_call_or_result
(await coro_or_task) if inspect.isawaitable(coro_or_task) else coro_or_task.result()
^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_agent_graph.py", line 1204, in _call_tool
tool_result = await tool_manager.handle_call(tool_call)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_tool_manager.py", line 153, in handle_call
return await self._call_function_tool(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_tool_manager.py", line 290, in _call_function_tool
tool_result = await self._call_tool(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/pydantic_ai/patches/tools.py", line 159, in wrapped_call_tool
result = await original_call_tool(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_tool_manager.py", line 212, in _call_tool
return await self.toolset.call_tool(name, args_dict, ctx, tool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/toolsets/combined.py", line 90, in call_tool
return await tool.source_toolset.call_tool(name, tool_args, ctx, tool.source_tool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/toolsets/function.py", line 383, in call_tool
return await tool.call_func(tool_args, ctx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_function_schema.py", line 56, in call
return await run_in_executor(function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/pydantic_ai/_utils.py", line 83, in run_in_executor
return await run_sync(wrapped_func)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/anyio/to_thread.py", line 63, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2502, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/.venv/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 986, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/pydantic-ai-1.52.0-sentry-local/test-tool-error-agent-test-async.py", line 34, in read_file
raise Exception("FileNotFoundError: The file '/nonexistent/file.txt' does not exist")
Exception: FileNotFoundError: The file '/nonexistent/file.txt' does not exist
python/pydantic-ai - Long Input Agent Test (async)
Error: 2 check(s) failed:
Should have gen_ai.output.messages or gen_ai.response.text
Message should be trimmed (length 25667 > 20000)
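The failed check expects over-long message payloads to be capped at 20000 characters before they are attached to the span. A minimal sketch of what such trimming could look like (hypothetical helper, not the integration's actual implementation; the limit and marker are assumed from the check's output):

```python
MAX_MESSAGE_LENGTH = 20_000  # limit the failing check enforces

def trim_message(text: str, limit: int = MAX_MESSAGE_LENGTH,
                 marker: str = "... [truncated]") -> str:
    """Trim an over-long gen_ai message attribute to `limit` characters,
    ending with a visible truncation marker."""
    if len(text) <= limit:
        return text
    return text[: limit - len(marker)] + marker
```

With this in place, the 25667-character payload from the failing run would come out at exactly 20000 characters.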
python/google-genai - Basic Embeddings Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-sync-blocking.py", line 20, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic Embeddings Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-embeddings-test-async-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
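Both embeddings failures come from the test scripts constructing `genai.Client()` without credentials, so the error surfaces deep inside `_api_client.py`. A pre-flight check along these lines (hypothetical helper; the environment variable names are assumed, not taken from the harness) would fail fast with a clearer skip message instead:

```python
import os

def resolve_google_api_key(env=os.environ):
    """Return the API key to pass to genai.Client(api_key=...),
    or None when neither assumed environment variable is set."""
    return env.get("GOOGLE_API_KEY") or env.get("GEMINI_API_KEY")

# Usage sketch: check before building the client so a missing key is
# reported as a configuration problem, not a ValueError in the SDK.
# key = resolve_google_api_key()
# if key is None:
#     raise SystemExit("no Google API key configured; skipping test")
# client = genai.Client(api_key=key)
```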
python/langchain - Basic Embeddings Test (sync, blocking)
Error: 2 check(s) failed:
Should have exactly 1 AI span(s) but found 2
Token usage validation failed:
input_tokens must exist
total_tokens must exist
gen_ai.response.model is missing (optional but recommended)
python/langchain - Basic Embeddings Test (async, blocking)
Error: 2 check(s) failed:
Should have exactly 1 AI span(s) but found 2
Token usage validation failed:
input_tokens must exist
total_tokens must exist
gen_ai.response.model is missing (optional but recommended)
python/litellm - Basic Embeddings Test (async, blocking)
Error: 2 check(s) failed:
Should have exactly 1 AI span(s) but found 0
Should have at least one embedding span
python/anthropic - Basic LLM Test (sync, streaming)
Error: 3 check(s) failed:
Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/anthropic - Basic LLM Test (async, streaming)
Error: 3 check(s) failed:
Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/anthropic - Multi-Turn LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py", line 100, in <module>
main()
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py", line 62, in main
with client.messages.stream(**kwargs) as stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 167, in __enter__
raw_stream = self.__api_request()
^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXYgdsgbM4mqR64QMBj'}
python/anthropic - Multi-Turn LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-streaming.py", line 101, in <module>
asyncio.run(main())
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-streaming.py", line 38, in main
async with client.messages.stream(**kwargs) as stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 317, in __aenter__
raw_stream = await self.__api_request
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXYf29BzdeXhr81ByP1'}
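The anthropic streaming failures above are all HTTP 429s against the org's 5-requests-per-minute limit rather than integration bugs. One way the harness could absorb these would be jittered exponential backoff, sketched below (hypothetical helper, not part of the test runner; `RateLimitError` stands in for `anthropic.RateLimitError`):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for anthropic.RateLimitError (HTTP 429)."""

def call_with_backoff(request, max_attempts=5, base_delay=2.0):
    """Retry `request` on rate-limit errors, sleeping roughly
    base_delay * 2**attempt seconds (plus jitter) between attempts."""
    for attempt in range(max_attempts):
        try:
            return request()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the 429 to the caller
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

With `base_delay=2.0` the retries land at roughly 2s, 4s, 8s, ... which would space a 4-request test burst under a 5-per-minute cap.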
python/anthropic - Multi-Turn LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-blocking.py", line 83, in <module>
asyncio.run(main())
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-multi-turn-llm-test-async-blocking.py", line 38, in main
response = await client.messages.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 606, in _sentry_patched_create_async
return await _execute_async(f, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 594, in _execute_async
reraise(*exc_info)
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
raise value
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 589, in _execute_async
result = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 2331, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXZYVm1DoHNoCxFqpEK'}
python/anthropic - Basic Error LLM Test (sync, streaming)
Error: 2 check(s) failed:
Should have at least 1 AI span(s) but found 0
Should have at least one AI span but found none
python/anthropic - Basic Error LLM Test (async, streaming)
Error: 2 check(s) failed:
Should have at least 1 AI span(s) but found 0
Should have at least one AI span but found none
python/anthropic - Vision LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-sync-streaming.py", line 51, in <module>
main()
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-sync-streaming.py", line 40, in main
with client.messages.stream(**kwargs) as stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 167, in __enter__
raw_stream = self.__api_request()
^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXaRPQ8AGggp7Cm7WNM'}
python/anthropic - Vision LLM Test (async, streaming)
Error: 3 check(s) failed:
Should have at least one chat/completion span
Should have at least one chat or agent span
Should have at least one chat or agent span
python/anthropic - Vision LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-blocking.py", line 46, in <module>
asyncio.run(main())
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-vision-llm-test-async-blocking.py", line 41, in main
response = await client.messages.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 606, in _sentry_patched_create_async
return await _execute_async(f, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 594, in _execute_async
reraise(*exc_info)
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
raise value
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 589, in _execute_async
result = await f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 2331, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXbLQHpa3GgBgq8gQ3i'}
python/anthropic - Long Input LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-streaming.py", line 48, in <module>
main()
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-streaming.py", line 37, in main
with client.messages.stream(**kwargs) as stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 167, in __enter__
raw_stream = self.__api_request()
^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXcBxpMAwQqNPcsuyxY'}
python/anthropic - Long Input LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-blocking.py", line 42, in <module>
main()
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-long-input-llm-test-sync-blocking.py", line 37, in main
response = client.messages.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 568, in _sentry_patched_create_sync
return _execute_sync(f, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 556, in _execute_sync
reraise(*exc_info)
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
raise value
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 551, in _execute_sync
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_utils/_utils.py", line 282, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 950, in create
return self._post(
^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXcDDDsG9GghLYTTGD9'}
python/anthropic - Long Input LLM Test (async, streaming)
Error: 2 check(s) failed:
Should have at least one chat/completion span
Should have at least one chat or agent span
python/anthropic - Conversation ID LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-sync-streaming.py", line 125, in <module>
main()
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-sync-streaming.py", line 38, in main
with client.messages.stream(**kwargs) as stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 167, in __enter__
raw_stream = self.__api_request()
^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXdyEQN9ALBo4nvEJiJ'}
python/anthropic - Conversation ID LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-sync-blocking.py", line 101, in <module>
main()
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-sync-blocking.py", line 38, in main
response = client.messages.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 568, in _sentry_patched_create_sync
return _execute_sync(f, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 556, in _execute_sync
reraise(*exc_info)
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/utils.py", line 1785, in reraise
raise value
File "/home/runner/work/sentry-python/sentry-python/sentry_sdk/integrations/anthropic.py", line 551, in _execute_sync
result = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_utils/_utils.py", line 282, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/resources/messages/messages.py", line 950, in create
return self._post(
^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1364, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1137, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXdyqN6Zjg4R5jCuLo5'}
python/anthropic - Conversation ID LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-async-streaming.py", line 126, in <module>
asyncio.run(main())
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.12.12/x64/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/test-conversation-id-llm-test-async-streaming.py", line 39, in main
async with client.messages.stream(**kwargs) as stream:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/lib/streaming/_messages.py", line 317, in __aenter__
raw_stream = await self.__api_request
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1992, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/anthropic-0.77.0-sentry-local/.venv/lib/python3.12/site-packages/anthropic/_base_client.py", line 1777, in request
raise self._make_status_error_from_response(err.response) from None
anthropic.RateLimitError: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': "This request would exceed your organization's rate limit of 5 requests per minute (org: 71b5149e-9209-4114-b2c5-1d9512fa3a80, model: claude-haiku-4-5-20251001). For details, refer to: https://docs.claude.com/en/api/rate-limits. You can see the response headers for current usage. Please reduce the prompt length or the maximum tokens requested, or try again later. You may also contact sales at https://www.anthropic.com/contact-sales to discuss your options for a rate limit increase."}, 'request_id': 'req_011CYxXdxDNzz1CRQpEoQynv'}
python/google-genai - Basic LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-sync-streaming.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
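Every google-genai failure below is this same environment issue: `genai.Client()` was constructed without credentials, so the SDK raises before any request is made. Per the error message, the client needs either `api_key` (Google AI API) or `vertexai`, `project`, and `location` (Vertex AI). A hedged sketch of a fail-fast guard the test scripts could use follows; the env var names are assumptions based on common convention, not confirmed from this harness:

```python
import os


def resolve_genai_api_key():
    """Return an API key for genai.Client(api_key=...), or raise early
    with a clearer message than the SDK's ValueError."""
    # GOOGLE_API_KEY / GEMINI_API_KEY are conventional names; the exact
    # variable this harness injects is an assumption.
    key = os.environ.get("GOOGLE_API_KEY") or os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "No Google AI API key found; set GOOGLE_API_KEY, or pass "
            "vertexai=True with project/location for Vertex AI."
        )
    return key


# In the test scripts this would replace the bare constructor call:
# client = genai.Client(api_key=resolve_genai_api_key())
```

Since every google-genai test fails identically at client construction, the fix is in the workflow's secret/env configuration rather than the integration code.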
python/google-genai - Basic LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-sync-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-async-streaming.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-llm-test-async-blocking.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
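Every google-genai failure below is the same `ValueError` at client construction: `genai.Client()` is being called with neither an `api_key` nor the Vertex AI (`vertexai`, `project`, `location`) arguments, which suggests the API key never reached the test environment. A minimal guard sketch, assuming the key would be exported as `GOOGLE_API_KEY` (the variable name and helper below are illustrative, not taken from the test scripts):

```python
import os

def client_kwargs() -> dict:
    """Return the kwargs genai.Client() needs, failing fast if the key is absent."""
    api_key = os.environ.get("GOOGLE_API_KEY")
    if not api_key:
        # Surfacing the missing secret here gives a clearer signal than the
        # ValueError raised later inside google.genai._api_client.
        raise RuntimeError(
            "GOOGLE_API_KEY is not set; genai.Client() will raise "
            "'Missing key inputs argument!' as seen in the tracebacks above"
        )
    return {"api_key": api_key}

# Usage sketch: client = genai.Client(**client_kwargs())
```

Checking for the secret in the workflow (or in a test fixture) before spawning the per-test scripts would turn 98 identical tracebacks into one actionable message.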
python/google-genai - Multi-Turn LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-sync-streaming.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Multi-Turn LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-sync-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Multi-Turn LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-async-streaming.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Multi-Turn LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-multi-turn-llm-test-async-blocking.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic Error LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic Error LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic Error LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-async-streaming.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Basic Error LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-basic-error-llm-test-async-blocking.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Vision LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-sync-streaming.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
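Every google-genai failure in this run is the same `ValueError` raised during `genai.Client()` construction: the harness supplied neither an `api_key` nor the `vertexai`/`project`/`location` trio. A minimal sketch of resolving credentials before constructing the client, so the run fails fast with an actionable message (the `GOOGLE_API_KEY`/`GEMINI_API_KEY` and `GOOGLE_CLOUD_PROJECT`/`GOOGLE_CLOUD_LOCATION` variable names are assumptions about how the harness is configured; the exact variables it uses are not shown in this log):

```python
import os


def resolve_genai_credentials(env=os.environ):
    """Return kwargs for genai.Client(), or raise before the SDK does.

    Mirrors the two modes named in the ValueError above:
    - Google AI API: api_key
    - Google Cloud (Vertex AI): vertexai + project + location
    """
    # Hypothetical env var names; adjust to whatever the CI secrets expose.
    api_key = env.get("GOOGLE_API_KEY") or env.get("GEMINI_API_KEY")
    if api_key:
        return {"api_key": api_key}

    project = env.get("GOOGLE_CLOUD_PROJECT")
    location = env.get("GOOGLE_CLOUD_LOCATION")
    if project and location:
        return {"vertexai": True, "project": project, "location": location}

    raise RuntimeError(
        "No GenAI credentials: set GOOGLE_API_KEY (or GEMINI_API_KEY), "
        "or GOOGLE_CLOUD_PROJECT plus GOOGLE_CLOUD_LOCATION for Vertex AI."
    )
```

The test scripts would then call `genai.Client(**resolve_genai_credentials())` at line 21/22, surfacing a missing-secret misconfiguration in the workflow rather than a per-test SDK traceback.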
python/google-genai - Vision LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-sync-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Vision LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-async-streaming.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Vision LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-vision-llm-test-async-blocking.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Long Input LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-sync-streaming.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Long Input LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-sync-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Long Input LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-async-streaming.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Long Input LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-long-input-llm-test-async-blocking.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Conversation ID LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-sync-streaming.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Conversation ID LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-sync-blocking.py", line 21, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Conversation ID LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-async-streaming.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
python/google-genai - Conversation ID LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/test-conversation-id-llm-test-async-blocking.py", line 22, in <module>
client = genai.Client(
^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 426, in __init__
self._api_client = self._get_api_client(
^^^^^^^^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/client.py", line 474, in _get_api_client
return BaseApiClient(
^^^^^^^^^^^^^^
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/google-genai-1.61.0-sentry-local/.venv/lib/python3.12/site-packages/google/genai/_api_client.py", line 690, in __init__
raise ValueError(
ValueError: Missing key inputs argument! To use the Google AI API, provide (`api_key`) arguments. To use the Google Cloud API, provide (`vertexai`, `project` & `location`) arguments.
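Every google-genai failure above is the same root cause: `genai.Client(...)` is constructed without credentials, so the library raises before any request is made. A minimal sketch of the credential check it performs, with illustrative env-var names (`GOOGLE_API_KEY`, `GOOGLE_CLOUD_PROJECT`, `GOOGLE_CLOUD_LOCATION` are the conventional ones, but whether this harness uses them is an assumption):

```python
import os

def resolve_genai_credentials(env=None):
    """Mirror of the google-genai client's credential check (illustrative
    sketch, not the library's actual code).

    Returns ("api_key", key) for the Google AI API, or
    ("vertexai", project, location) for the Google Cloud API, and raises the
    same ValueError seen in the tracebacks when neither is configured.
    """
    env = os.environ if env is None else env
    if env.get("GOOGLE_API_KEY"):
        return ("api_key", env["GOOGLE_API_KEY"])
    if env.get("GOOGLE_CLOUD_PROJECT") and env.get("GOOGLE_CLOUD_LOCATION"):
        return ("vertexai", env["GOOGLE_CLOUD_PROJECT"], env["GOOGLE_CLOUD_LOCATION"])
    raise ValueError(
        "Missing key inputs argument! Provide `api_key`, or "
        "`vertexai`, `project` & `location`."
    )
```

The fix is on the workflow side: export the key (or Vertex project/location) into the environment of the generated test scripts before they call `genai.Client(api_key=...)`.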
python/langchain - Basic Error LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 13, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/langchain - Basic Error LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 13, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/langchain - Basic Error LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-streaming.py", line 14, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/langchain - Basic Error LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/langchain-1.2.8-sentry-local/test-basic-error-llm-test-async-blocking.py", line 14, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
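All Basic Error tests (for langchain, and the same pattern recurs for litellm and openai below) crash at `import respx` before any assertion runs, so the generated virtualenvs are missing the HTTP-mocking dependency rather than the SDK misbehaving. The real fix is adding `respx` to the venv's test requirements; as an interim sketch (a workaround, not the framework's current behavior), the scripts could skip cleanly instead of crashing:

```python
import importlib.util
import sys

def require_or_skip(module_name):
    """Exit with a skip message instead of a hard crash when an optional
    test dependency such as respx is absent.

    Sketch only: the proper fix is installing respx into the generated venv.
    """
    if importlib.util.find_spec(module_name) is None:
        print(f"SKIP: missing optional dependency {module_name!r}")
        sys.exit(0)

require_or_skip("json")  # stdlib module, always importable: no skip happens
```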
python/litellm - Basic LLM Test (async, streaming)
Error: 3 check(s) failed:
Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Basic LLM Test (async, blocking)
Error: 3 check(s) failed:
Should have exactly 1 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Multi-Turn LLM Test (async, streaming)
Error: 3 check(s) failed:
Should have exactly 3 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Multi-Turn LLM Test (async, blocking)
Error: 3 check(s) failed:
Should have exactly 3 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Basic Error LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 16, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/litellm - Basic Error LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 16, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/litellm - Basic Error LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-streaming.py", line 17, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/litellm - Basic Error LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/litellm-1.81.6-sentry-local/test-basic-error-llm-test-async-blocking.py", line 17, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/litellm - Vision LLM Test (async, streaming)
Error: 3 check(s) failed:
Should have at least one chat/completion span
Should have at least one chat or agent span
Should have at least one chat or agent span
python/litellm - Vision LLM Test (async, blocking)
Error: 3 check(s) failed:
Should have at least one chat/completion span
Should have at least one chat or agent span
Should have at least one chat or agent span
python/litellm - Long Input LLM Test (async, streaming)
Error: 2 check(s) failed:
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Long Input LLM Test (async, blocking)
Error: 2 check(s) failed:
Should have at least one chat/completion span
Should have at least one chat or agent span
python/litellm - Conversation ID LLM Test (async, streaming)
Error: 4 check(s) failed:
Should have exactly 4 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one AI span
Should have at least one chat or agent span
python/litellm - Conversation ID LLM Test (async, blocking)
Error: 4 check(s) failed:
Should have exactly 4 AI span(s) but found 0
Should have at least one chat/completion span
Should have at least one AI span
Should have at least one chat or agent span
python/openai - Basic LLM Test (sync, streaming)
Error: 2 check(s) failed:
Attribute validation failed:
Span bc7a5b66: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span bc7a5b66: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
python/openai - Basic LLM Test (async, streaming)
Error: 2 check(s) failed:
Attribute validation failed:
Span 8cc8174c: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 8cc8174c: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
python/openai - Multi-Turn LLM Test (sync, streaming)
Error: 3 check(s) failed:
Attribute validation failed:
Span 8f6487a1: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 8f6487a1: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span 83c7e8af: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 83c7e8af: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span 9500d05d: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 9500d05d: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
Input token progression failed: tokens should increase with each turn
python/openai - Multi-Turn LLM Test (async, streaming)
Error: 3 check(s) failed:
Attribute validation failed:
Span 87e529e7: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 87e529e7: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span b69a374c: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span b69a374c: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span 89bf4b79: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 89bf4b79: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
Input token progression failed: tokens should increase with each turn
python/openai - Basic Error LLM Test (sync, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-streaming.py", line 12, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/openai - Basic Error LLM Test (sync, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-sync-blocking.py", line 12, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/openai - Basic Error LLM Test (async, streaming)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-streaming.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-streaming.py", line 13, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/openai - Basic Error LLM Test (async, blocking)
Error: Test execution failed: Command failed: /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/.venv/bin/python /home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-blocking.py
Traceback (most recent call last):
File "/home/runner/work/_actions/getsentry/testing-ai-sdk-integrations/285c012e522f241581534dfc89bd99ec3b1da4f6/runs/python/openai-2.16.0-sentry-local/test-basic-error-llm-test-async-blocking.py", line 13, in <module>
import respx
ModuleNotFoundError: No module named 'respx'
python/openai - Vision LLM Test (sync, streaming)
Error: 3 check(s) failed:
Attribute validation failed:
Span a52e278a: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span a52e278a: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/openai - Vision LLM Test (sync, blocking)
Error: 1 check(s) failed:
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/openai - Vision LLM Test (async, streaming)
Error: 3 check(s) failed:
Attribute validation failed:
Span bb36a508: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span bb36a508: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
python/openai - Vision LLM Test (async, blocking)
Error: 1 check(s) failed:
Messages should not contain raw base64 data (should be redacted)
Messages should contain '[Blob substitute]' marker indicating binary content was redacted
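The four Vision failures above all expect inline base64 image payloads in captured messages to be replaced by a `[Blob substitute]` marker. A hypothetical sketch of such a redaction pass (the marker string comes from the failing checks; the data-URL regex and the length threshold are assumptions, not the SDK's actual implementation):

```python
import re

# Match data URLs carrying a non-trivial base64 payload; 32 chars is an
# arbitrary threshold so short non-blob strings are left alone.
_DATA_URL = re.compile(r"data:[\w/+.-]+;base64,[A-Za-z0-9+/=]{32,}")

def redact_blobs(text):
    """Replace inline base64 image payloads with the expected marker."""
    return _DATA_URL.sub("[Blob substitute]", text)
```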
python/openai - Long Input LLM Test (sync, streaming)
Error: 1 check(s) failed:
Attribute validation failed:
Span a07e5d62: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span a07e5d62: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
python/openai - Long Input LLM Test (async, streaming)
Error: 1 check(s) failed:
Attribute validation failed:
Span acc0db67: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span acc0db67: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
python/openai - Conversation ID LLM Test (sync, streaming)
Error: 2 check(s) failed:
Attribute validation failed:
Span a17aed3d: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span a17aed3d: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span 98878626: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 98878626: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span a016cce3: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span a016cce3: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span a7b0ff26: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span a7b0ff26: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
python/openai - Conversation ID LLM Test (async, streaming)
Error: 2 check(s) failed:
Attribute validation failed:
Span 98e67d14: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 98e67d14: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span 9629be94: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 9629be94: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span a521ce06: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span a521ce06: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Span 9b4009a0: Attribute 'gen_ai.usage.input_tokens' must exist but is missing
Span 9b4009a0: Attribute 'gen_ai.usage.output_tokens' must exist but is missing
Token usage validation failed:
input_tokens must exist
output_tokens must exist
total_tokens must exist
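The openai failures share one pattern: `gen_ai.usage.*` attributes are missing only on streaming spans. The OpenAI Chat Completions API omits the usage block from streamed responses unless `stream_options={"include_usage": True}` is set (a real API parameter; whether these generated test scripts set it is an assumption). A sketch of the request shape and of extracting usage from the final chunk, using plain dicts rather than the SDK's typed objects:

```python
def streaming_request_kwargs(model, messages):
    """Chat Completions kwargs that keep token usage in streaming mode.

    With plain stream=True the usage block is omitted entirely, which would
    produce exactly the missing gen_ai.usage.input_tokens/output_tokens
    attributes reported above.
    """
    return {
        "model": model,
        "messages": messages,
        "stream": True,
        "stream_options": {"include_usage": True},
    }

def usage_from_chunks(chunks):
    """Return the usage block from the last chunk that carries one.

    Sketch over dict-shaped chunks; with include_usage set, the API sends a
    final chunk whose usage field is populated while earlier chunks carry None.
    """
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):
            usage = chunk["usage"]
    return usage
```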
This issue was automatically created by the AI Integration Testing framework.