## Request

Add the Google Gemini Generative AI API as a supported platform in One CLI (`one add google-ai` or `one add gemini-ai`).
## Current state

One CLI has a `gemini` platform, but it maps to Dialogflow CX / Vertex AI Agent endpoints (conversation agents, app management). It does NOT include the Generative AI API endpoints that power the Gemini Flash, Flash-Lite, and Pro models.
The endpoints we need:

| Endpoint | Method | Path | Use case |
|---|---|---|---|
| `generateContent` | POST | `/v1beta/models/{model}:generateContent` | Text generation, classification, extraction |
| `generateContent` (vision) | POST | `/v1beta/models/{model}:generateContent` | PDF/image understanding (native vision) |
| `countTokens` | POST | `/v1beta/models/{model}:countTokens` | Token counting before API calls |
| `listModels` | GET | `/v1beta/models` | Discover available models |
Auth: API key (simplest) or OAuth2 (for Vertex AI).
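For reference, the API-key flow against these endpoints is minimal. A sketch using only the standard library, assuming the key is in a `GEMINI_API_KEY` environment variable (the URL and payload shapes follow the public v1beta REST API):

```python
"""Sketch: calling generateContent on the v1beta REST API directly."""
import json
import os
import urllib.request

BASE_URL = "https://generativelanguage.googleapis.com/v1beta"


def build_request(model: str, prompt: str, api_key: str) -> tuple[str, bytes]:
    """Build the generateContent URL and JSON body for a plain text prompt."""
    url = f"{BASE_URL}/models/{model}:generateContent?key={api_key}"
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, json.dumps(payload).encode()


def generate(model: str, prompt: str) -> str:
    """POST the request and pull the generated text out of the response."""
    url, body = build_request(model, prompt, os.environ["GEMINI_API_KEY"])
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # The first candidate's first text part carries the generated output.
    return data["candidates"][0]["content"]["parts"][0]["text"]
```

This is exactly the auth, retry, and logging boilerplate that a One CLI platform would absorb.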
## Why this matters
We're building a multi-model pipeline where Gemini handles high-volume, cost-sensitive tasks (classification at $0.04/M tokens, PDF vision extraction at $0.08/M) while Claude handles quality-sensitive composition. Having Gemini available through One CLI would give us:
- Managed auth — API key management via `one add`, not manual `~/.secrets/` files
- Rate limit handling — One CLI's built-in retry/backoff
- Observability — action execution logging
- Consistency — same `one --agent actions execute` pattern for all LLM providers
- Sync integration — could use Gemini in `transform` steps for content classification during sync
## Current workaround

Direct API calls via the `google.generativeai` Python SDK, with the API key stored at `~/.secrets/gemini/api-key.txt`. Works, but bypasses One CLI's auth management and observability.
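The workaround looks roughly like this. A sketch, assuming the `google-generativeai` package is installed; `load_api_key` and `classify` are illustrative helpers, not part of any existing tooling:

```python
"""Sketch of the current direct-SDK workaround, bypassing One CLI."""
from pathlib import Path


def load_api_key(path: str = "~/.secrets/gemini/api-key.txt") -> str:
    """Read the API key from the plain-text secrets file (hypothetical helper)."""
    return Path(path).expanduser().read_text().strip()


def classify(text: str) -> str:
    """Run a classification prompt on the cheapest model tier."""
    # Imported lazily so this module loads even where the SDK is absent.
    import google.generativeai as genai

    genai.configure(api_key=load_api_key())
    model = genai.GenerativeModel("gemini-2.5-flash-lite")
    resp = model.generate_content(f"Classify this document:\n\n{text}")
    return resp.text
```

Every consumer of this pattern re-implements its own key loading and error handling, which is the gap `one add google-ai` would close.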
## Models we use

- `gemini-2.5-flash-lite` — classification, routing ($0.075/M in)
- `gemini-2.5-flash` — extraction, PDF vision ($0.15/M in)
- `gemini-2.5-pro` — complex table extraction ($1.25/M in, rare)
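In the pipeline, task-to-model routing over these tiers is a small lookup. A sketch; the task names and the `route()` helper are hypothetical, only the model IDs and their roles come from the list above:

```python
"""Illustrative task-to-model routing over the three Gemini tiers."""

# Cheapest tier that handles each task class, per the list above.
MODEL_BY_TASK = {
    "classification": "gemini-2.5-flash-lite",
    "routing": "gemini-2.5-flash-lite",
    "extraction": "gemini-2.5-flash",
    "pdf_vision": "gemini-2.5-flash",
    "complex_tables": "gemini-2.5-pro",  # rare, most expensive tier
}


def route(task: str) -> str:
    """Pick the model for a task, defaulting to the mid-tier Flash."""
    return MODEL_BY_TASK.get(task, "gemini-2.5-flash")
```

Defaulting unknown tasks to the mid tier keeps surprises cheap without silently dropping to the lowest-quality model.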