feat: add MiniMax M2.7 and M2.7-highspeed as LLM providers #184

octo-patch wants to merge 1 commit into sqlchat:main from
Conversation
Add MiniMax AI models (MiniMax-M2.7 and MiniMax-M2.7-highspeed) as alternative LLM providers via their OpenAI-compatible API. Users can select MiniMax models in the settings UI and configure the endpoint to https://api.minimax.io.

Changes:
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model definitions
- Add MiniMax model options to the settings UI model selector
- Add MiniMax setup documentation to README (EN + CN)
- Add vitest test framework with unit and integration tests
- Add test scripts to package.json
Pull request overview
Adds MiniMax M2.7 and M2.7-highspeed as selectable LLM providers (via the existing OpenAI-compatible request path), and introduces a Vitest-based test suite plus documentation updates to support configuration and validation.
Changes:
- Add `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` model definitions (with non-zero temperature defaults).
- Expose MiniMax models in the Settings UI model selector.
- Add Vitest config + unit/integration tests; add test scripts and document MiniMax setup.
Reviewed changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| vitest.config.ts | Adds Vitest configuration and `@` alias for tests. |
| tests/unit/model.test.ts | Unit tests for model presence/lookup and config expectations. |
| tests/integration/minimax.test.ts | External integration tests against MiniMax's OpenAI-compatible API (skipped without env var). |
| src/utils/model.ts | Registers MiniMax models in the model registry used by request handling. |
| src/components/OpenAIApiConfigView.tsx | Adds MiniMax options to the model radio list. |
| package.json | Adds Vitest dependency and test scripts. |
| README.md | Documents MiniMax setup (key + endpoint). |
| README.zh-CN.md | Chinese documentation for MiniMax setup. |
| "ts-node": "^10.9.1", | ||
| "typescript": "^4.9.5" | ||
| "typescript": "^4.9.5", | ||
| "vitest": "^4.1.0" | ||
| }, |
package.json adds a new dev dependency (vitest), but pnpm-lock.yaml is not updated. Since this repo uses pnpm and commits the lockfile, installs/CI won’t pick up the new dependency deterministically until the lockfile is regenerated and committed.
```ts
const API_KEY = process.env.MINIMAX_API_KEY;
const BASE_URL = "https://api.minimax.io/v1";
```
This integration test uses process.env.MINIMAX_API_KEY and hard-codes BASE_URL to include /v1, while the app/docs use OPENAI_API_KEY + a base endpoint host (and then force the request path to /v1/chat/completions). For consistency and to avoid tests being unexpectedly skipped, consider accepting OPENAI_API_KEY (or both) and constructing the URL the same way as production (base host + /v1/...).
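The reviewer's suggestion can be sketched as follows; note that the fallback to `OPENAI_API_KEY` and the `OPENAI_API_ENDPOINT` variable name are assumptions about this repo's conventions, not code from the PR:

```typescript
// Sketch of the review suggestion: accept either env var and build the
// request URL from a base host, mirroring the production request path.
const API_KEY = process.env.MINIMAX_API_KEY ?? process.env.OPENAI_API_KEY;
const BASE_HOST = process.env.OPENAI_API_ENDPOINT ?? "https://api.minimax.io";
// new URL(path, base) forces the same /v1/chat/completions path production
// uses, regardless of trailing slashes on the configured host.
const CHAT_URL = new URL("/v1/chat/completions", BASE_HOST).toString();
```

With this, the test is skipped only when neither key is present, and any endpoint mismatch with production is impossible by construction.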
```ts
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const text = decoder.decode(value, { stream: true });
  const lines = text.split("\n").filter((l) => l.startsWith("data: "));
  for (const line of lines) {
    const data = line.slice(6);
    if (data === "[DONE]") continue;
    try {
      const json = JSON.parse(data);
      const content = json.choices?.[0]?.delta?.content;
      if (content) {
        fullText += content;
        chunks++;
      }
    } catch {
      // skip incomplete JSON
    }
  }
}
```
The streaming test parses each `reader.read()` chunk by splitting on newlines and then `JSON.parse`-ing each `data:` line, but SSE frames/JSON payloads can be split across chunk boundaries. Silently swallowing `JSON.parse` errors can drop content and make this test flaky; buffer incomplete lines between reads (or reuse an SSE parser like eventsource-parser) so only complete events are parsed.
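One way to implement the buffering the reviewer suggests (a sketch, not the PR's code): carry the trailing partial line across reads and only hand complete `data:` payloads to `JSON.parse`:

```typescript
// Sketch: buffer partial SSE lines across reads so JSON.parse only ever
// sees complete "data: " payloads.
function createSSELineBuffer() {
  let buffer = "";
  return function feed(chunkText: string): string[] {
    buffer += chunkText;
    const lines = buffer.split("\n");
    // The last element may be an incomplete line; keep it for the next read.
    buffer = lines.pop() ?? "";
    return lines
      .filter((l) => l.startsWith("data: "))
      .map((l) => l.slice(6));
  };
}
```

In the test loop, each decoded chunk is passed to `feed(...)`; a payload split across two reads is emitted exactly once, after its terminating newline arrives, so the `catch`-and-skip branch becomes unnecessary.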
Summary
Add MiniMax AI models as alternative LLM providers for SQL Chat via their OpenAI-compatible API:
Both models support 204,800 token context windows and are priced competitively ($0.3-0.6/M input tokens, $1.2-2.4/M output tokens).
Changes
- src/utils/model.ts: Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model definitions with temperature clamped to 0.01 (MiniMax requires temperature > 0)
- src/components/OpenAIApiConfigView.tsx: Add MiniMax model radio buttons to the settings UI
- README.md / README.zh-CN.md: Add MiniMax setup instructions (API key + endpoint configuration)
- vitest.config.ts + tests/: Add vitest test framework with 9 unit tests and 3 integration tests
- package.json: Add test scripts (test, test:unit, test:integration)

How to Use
Set environment variables:
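The original environment-variable block did not survive the page capture. Based on the README setup described above (an API key plus the https://api.minimax.io endpoint), it plausibly looked something like the following; the variable names, especially `OPENAI_API_ENDPOINT`, are assumptions to be checked against the README:

```shell
# Hypothetical variable names; check the README for the exact ones.
export OPENAI_API_KEY="your-minimax-api-key"        # MiniMax key, used via the OpenAI-compatible path
export OPENAI_API_ENDPOINT="https://api.minimax.io" # base host, without a /v1 suffix
```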
Or enter the API key and endpoint in the Settings UI (when `NEXT_PUBLIC_ALLOW_SELF_OPENAI_KEY=true`).

Test Plan