feat: add MiniMax M2.7 and M2.7-highspeed as LLM providers #184

Open

octo-patch wants to merge 1 commit into sqlchat:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax AI models as alternative LLM providers for SQL Chat via their OpenAI-compatible API:

  • MiniMax-M2.7: Peak Performance. Ultimate Value. Master the Complex.
  • MiniMax-M2.7-highspeed: Same performance, faster and more agile.

Both models support a 204,800-token context window and are competitively priced ($0.3-0.6/M input tokens, $1.2-2.4/M output tokens).

Changes

  • src/utils/model.ts: Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model definitions with temperature clamped to 0.01 (MiniMax requires temperature > 0)
  • src/components/OpenAIApiConfigView.tsx: Add MiniMax model radio buttons to the settings UI
  • README.md / README.zh-CN.md: Add MiniMax setup instructions (API key + endpoint configuration)
  • vitest.config.ts + tests/: Add vitest test framework with 9 unit tests and 3 integration tests
  • package.json: Add test scripts (test, test:unit, test:integration)
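The model entries themselves are small. A minimal sketch of what the MiniMax definitions might look like (the `Model` field names and the `getModel` helper follow a generic registry shape and are assumptions, not the repo's exact interface):

```typescript
// Hypothetical shape of a model registry entry; not the repo's exact interface.
interface Model {
  name: string;
  temperature: number;
  contextWindow: number;
}

// MiniMax rejects temperature === 0, so the default is clamped to a small positive value.
const MINIMAX_MIN_TEMPERATURE = 0.01;

const minimaxModels: Model[] = [
  { name: "MiniMax-M2.7", temperature: MINIMAX_MIN_TEMPERATURE, contextWindow: 204_800 },
  { name: "MiniMax-M2.7-highspeed", temperature: MINIMAX_MIN_TEMPERATURE, contextWindow: 204_800 },
];

// Lookup helper mirroring a getModel-style accessor.
function getModel(name: string): Model | undefined {
  return minimaxModels.find((m) => m.name === name);
}
```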

How to Use

Set environment variables:

OPENAI_API_KEY=your-minimax-api-key
OPENAI_API_ENDPOINT=https://api.minimax.io

Or enter the API key and endpoint in the Settings UI (when NEXT_PUBLIC_ALLOW_SELF_OPENAI_KEY=true).

Test Plan

  • 9 unit tests pass (model config, getModel lookup, temperature validation)
  • 3 integration tests pass (basic chat, streaming, highspeed model)
  • No breaking changes to existing models

Add MiniMax AI models (MiniMax-M2.7 and MiniMax-M2.7-highspeed) as
alternative LLM providers via their OpenAI-compatible API. Users can
select MiniMax models in the settings UI and configure the endpoint
to https://api.minimax.io.

Changes:
- Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model definitions
- Add MiniMax model options to the settings UI model selector
- Add MiniMax setup documentation to README (EN + CN)
- Add vitest test framework with unit and integration tests
- Add test scripts to package.json

Copilot AI left a comment


Pull request overview

Adds MiniMax M2.7 and M2.7-highspeed as selectable LLM providers (via the existing OpenAI-compatible request path), and introduces a Vitest-based test suite plus documentation updates to support configuration and validation.

Changes:

  • Add MiniMax-M2.7 and MiniMax-M2.7-highspeed model definitions (with non-zero temperature defaults).
  • Expose MiniMax models in the Settings UI model selector.
  • Add Vitest config + unit/integration tests; add test scripts and document MiniMax setup.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.

Summary per file:

  • vitest.config.ts: Adds Vitest configuration and an @ alias for tests.
  • tests/unit/model.test.ts: Unit tests for model presence/lookup and config expectations.
  • tests/integration/minimax.test.ts: External integration tests against MiniMax's OpenAI-compatible API (skipped without the env var).
  • src/utils/model.ts: Registers MiniMax models in the model registry used by request handling.
  • src/components/OpenAIApiConfigView.tsx: Adds MiniMax options to the model radio list.
  • package.json: Adds the Vitest dependency and test scripts.
  • README.md: Documents MiniMax setup (key + endpoint).
  • README.zh-CN.md: Chinese documentation for MiniMax setup.


Comment on lines 97 to 100

      "ts-node": "^10.9.1",
 -    "typescript": "^4.9.5"
 +    "typescript": "^4.9.5",
 +    "vitest": "^4.1.0"
    },

Copilot AI Mar 20, 2026


package.json adds a new dev dependency (vitest), but pnpm-lock.yaml is not updated. Since this repo uses pnpm and commits the lockfile, installs/CI won’t pick up the new dependency deterministically until the lockfile is regenerated and committed.

Comment on lines +3 to +5
const API_KEY = process.env.MINIMAX_API_KEY;
const BASE_URL = "https://api.minimax.io/v1";


Copilot AI Mar 20, 2026


This integration test uses process.env.MINIMAX_API_KEY and hard-codes BASE_URL to include /v1, while the app/docs use OPENAI_API_KEY + a base endpoint host (and then force the request path to /v1/chat/completions). For consistency and to avoid tests being unexpectedly skipped, consider accepting OPENAI_API_KEY (or both) and constructing the URL the same way as production (base host + /v1/...).
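One way to apply that suggestion, sketched under the assumption that `chatCompletionsUrl` is a new helper introduced for the test (it does not exist in the PR):

```typescript
// Sketch of the review suggestion: build the request URL the same way
// production does (base host + /v1/chat/completions), so the test and the
// app cannot drift apart. `chatCompletionsUrl` is an illustrative name.
function chatCompletionsUrl(baseHost: string): string {
  // Trim any trailing slash so the joined path never doubles up.
  return `${baseHost.replace(/\/+$/, "")}/v1/chat/completions`;
}

// The API key would then fall back across both variables, e.g.:
// const API_KEY = process.env.MINIMAX_API_KEY ?? process.env.OPENAI_API_KEY;
```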

Comment on lines +63 to +82
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  const text = decoder.decode(value, { stream: true });
  const lines = text.split("\n").filter((l) => l.startsWith("data: "));
  for (const line of lines) {
    const data = line.slice(6);
    if (data === "[DONE]") continue;
    try {
      const json = JSON.parse(data);
      const content = json.choices?.[0]?.delta?.content;
      if (content) {
        fullText += content;
        chunks++;
      }
    } catch {
      // skip incomplete JSON
    }
  }
}

Copilot AI Mar 20, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The streaming test parses each reader.read() chunk by splitting on newlines and calling JSON.parse on each data: line, but SSE frames and JSON payloads can be split across chunk boundaries. Silently swallowing JSON.parse errors can drop content and make this test flaky; buffer incomplete lines between reads (or reuse an SSE parser such as eventsource-parser) so only complete events are parsed.
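The buffering fix could be sketched roughly like this (`createSSEAccumulator` and its shape are illustrative, not code from the PR):

```typescript
// Sketch of line-buffered SSE parsing: carry any incomplete trailing line
// over to the next chunk so JSON.parse only ever sees complete `data:`
// payloads. Illustrative names, not taken from the PR.
function createSSEAccumulator() {
  let buffer = "";
  const contents: string[] = [];

  return {
    // Feed one decoded network chunk; only complete lines are parsed.
    push(chunk: string) {
      buffer += chunk;
      const lines = buffer.split("\n");
      buffer = lines.pop() ?? ""; // keep the (possibly partial) last line
      for (const line of lines) {
        if (!line.startsWith("data: ")) continue;
        const data = line.slice(6);
        if (data === "[DONE]") continue;
        const json = JSON.parse(data); // safe: the line is complete
        const content = json.choices?.[0]?.delta?.content;
        if (content) contents.push(content);
      }
    },
    result: () => contents.join(""),
  };
}
```

With this shape, a JSON payload split mid-token across two reads is reassembled before parsing instead of being dropped.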
