
feat: add MiniMax as LLM provider #543

Open
octo-patch wants to merge 1 commit into di-sukharev:master from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax as a first-class LLM provider with MiniMax-M2.7, MiniMax-M2.5, and MiniMax-M2.5-highspeed models
  • New MiniMaxEngine extending OpenAiEngine using OpenAI-compatible API at api.minimax.io/v1
  • Full setup wizard integration: provider display, API key URL, recommended model, dynamic model fetching
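The engine subclass described above could be sketched as follows. Only the class names `MiniMaxEngine`/`OpenAiEngine`, the endpoint `api.minimax.io/v1`, the low-temperature default, and think-tag stripping come from this PR; the stub base class, constructor shape, and method name are assumptions standing in for the real implementation:

```typescript
// Stub standing in for the real OpenAiEngine; the actual base class
// wraps the OpenAI SDK and is more involved.
class OpenAiEngine {
  constructor(
    protected config: {
      baseURL: string;
      apiKey: string;
      model: string;
      temperature: number;
    },
  ) {}
}

// MiniMax exposes an OpenAI-compatible API, so the engine mainly needs
// to point at the MiniMax endpoint and post-process responses.
class MiniMaxEngine extends OpenAiEngine {
  constructor(apiKey: string, model = "MiniMax-M2.5") {
    super({
      baseURL: "https://api.minimax.io/v1", // OpenAI-compatible endpoint
      apiKey,
      model,
      temperature: 0, // low temperature for deterministic commit messages
    });
  }

  // MiniMax reasoning models may wrap chain-of-thought in <think>…</think>;
  // strip those tags so only the final commit message remains.
  stripThinkTags(text: string): string {
    return text.replace(/<think>[\s\S]*?<\/think>/g, "").trim();
  }
}
```

Usage: `new MiniMaxEngine(apiKey).stripThinkTags("<think>…</think>feat: add provider")` yields just the commit message.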

Changes

| File | Change |
| --- | --- |
| `src/engine/minimax.ts` | New MiniMax engine with think-tag stripping, low temperature for deterministic output |
| `src/commands/config.ts` | MINIMAX enum value, model list, API key URL, recommended model, validator |
| `src/utils/engine.ts` | Engine factory case for MiniMax |
| `src/utils/errors.ts` | Billing URL and recommended model for error recovery |
| `src/utils/modelCache.ts` | `fetchMiniMaxModels()` with `/v1/models` endpoint, cache integration |
| `src/commands/setup.ts` | Provider display name, listed in OTHER_PROVIDERS |
| `README.md` | Added minimax to provider list and config example |
| `test/unit/minimax.test.ts` | 32 unit tests covering all integration points |
| `test/unit/minimax-integration.test.ts` | 3 integration tests (M2.7, M2.5-highspeed, auth error) |
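The dynamic model fetching with caching listed above might look roughly like this. Only the `fetchMiniMaxModels` name, the `/v1/models` endpoint, and the 7-day cache window are from the PR; the cache shape, the `isFresh` helper, and the response-body typing are assumptions (the response shape follows the OpenAI-compatible `GET /v1/models` convention of `{ data: [{ id }] }`):

```typescript
interface ModelCache {
  models: string[];
  fetchedAt: number; // epoch milliseconds
}

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// A cache entry is usable if it is younger than the 7-day window.
function isFresh(cache: ModelCache | null, now = Date.now()): boolean {
  return cache !== null && now - cache.fetchedAt < SEVEN_DAYS_MS;
}

// Return cached models when fresh; otherwise hit the OpenAI-compatible
// /v1/models endpoint and let the caller refresh the cache.
async function fetchMiniMaxModels(
  apiKey: string,
  cache: ModelCache | null,
): Promise<string[]> {
  if (isFresh(cache)) return cache!.models;

  const res = await fetch("https://api.minimax.io/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`MiniMax /v1/models failed: ${res.status}`);

  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```

The cache check is kept as a small pure helper so the expiry logic can be unit-tested without touching the network.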

Usage

oco config set OCO_AI_PROVIDER=minimax OCO_API_KEY=<your_minimax_api_key>

Or use the interactive setup wizard:

oco setup
# Select "Other providers..." → "MiniMax (M2.7, M2.5, fast inference)"

Test plan

  • 32 unit tests pass (npm run test -- test/unit/minimax.test.ts)
  • 3 integration tests pass with live MiniMax API (MINIMAX_API_KEY=... npm run test -- test/unit/minimax-integration.test.ts)
  • Existing tests pass with no regressions (test/unit/config.test.ts, test/unit/removeContentTags.test.ts)
  • Manual: oco setup → select MiniMax → generate commit message

Add MiniMax (M2.7, M2.5, M2.5-highspeed) as a first-class LLM provider
using the OpenAI-compatible API at api.minimax.io/v1.

- New MiniMaxEngine extending OpenAiEngine with think-tag stripping
- Full setup wizard integration (provider selection, model list, API key URL)
- Dynamic model fetching via /v1/models endpoint with 7-day cache
- Error handling with billing URL and model suggestions
- README updated with MiniMax provider config example
- 32 unit tests + 3 integration tests
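The error-recovery behavior mentioned above (billing URL plus recommended-model suggestion) could be sketched like this. The actual billing URL and the mapping from error codes to hints live in `src/utils/errors.ts` and are not shown in this PR, so both are placeholders here; only "MiniMax-M2.5" as a recommendable model comes from the description:

```typescript
// Placeholder values; the real billing URL and recommended model are
// defined in src/utils/errors.ts and may differ.
const MINIMAX_BILLING_URL = "<minimax billing page URL>";
const MINIMAX_RECOMMENDED_MODEL = "MiniMax-M2.5";

// Map a failed HTTP status to a user-facing recovery hint, or null if
// there is nothing actionable to suggest.
function recoveryHint(status: number): string | null {
  if (status === 401) {
    return `Invalid MiniMax API key. Check your key and billing at ${MINIMAX_BILLING_URL}.`;
  }
  if (status === 404) {
    return `Unknown model. Try the recommended model ${MINIMAX_RECOMMENDED_MODEL}.`;
  }
  return null;
}
```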
@di-sukharev (Owner)

Hello @octo-patch, please merge master into your branch to solve the conflict and I will merge it.
