
MiniMax M2.7 Now Live on TheRouter — 7 Models, 204K Context, Agent-Native Reasoning

MiniMax joins TheRouter as our 10th direct provider, bringing 7 models optimized for agentic workflows — from the M2.7 flagship with interleaved thinking chains to Highspeed variants delivering ~100 tokens/sec.


About MiniMax

Founded in 2022, MiniMax has rapidly grown into a global AI powerhouse serving over 236 million individual users and 214,000+ enterprise clients across 200+ countries and regions. The company's mission — “co-creating intelligence with everyone” — drives a full-stack multimodal platform spanning text, speech, video, music, and image generation.

What sets MiniMax apart is its agent-native approach: the M2 model series is engineered from the ground up for multi-tool orchestration, task decomposition, and long-horizon planning — making it a natural fit for the agentic AI workflows that are becoming the industry standard.

What's New

We're adding MiniMax as a direct API provider with dedicated infrastructure and 3-tier failover routing. Here are the 7 models now available:

Model                      Context   Speed        Pricing (input / output)
minimax/m2.7               204K      ~60 tok/s    $0.40 / $1.60 per MTok
minimax/m2.7-highspeed     204K      ~100 tok/s   $0.80 / $3.20 per MTok
minimax/m2.5               204K      ~60 tok/s    $0.40 / $1.60 per MTok
minimax/m2.5-highspeed     204K      ~100 tok/s   $0.80 / $3.20 per MTok
minimax/m2.1               204K      ~60 tok/s    $0.40 / $1.60 per MTok
minimax/m2.1-highspeed     204K      ~100 tok/s   $0.80 / $3.20 per MTok
minimax/m2                 204K      ~60 tok/s    $0.40 / $1.60 per MTok

M2.7: The Agent-Native Flagship

M2.7 is MiniMax's most capable model, purpose-built for agentic workflows. Key capabilities include:

  • Interleaved thinking chains — M2.7 reasons step-by-step within its response, decomposing complex tasks into manageable sub-problems before synthesizing a solution.
  • Multi-tool orchestration — Native function calling with the standard OpenAI tools parameter. M2.7 excels at selecting the right tool, chaining calls, and handling tool output in multi-step workflows.
  • 204K context window — Process entire codebases, long documents, or extended conversation histories without truncation.
  • Strong coding performance — Competitive on full-stack software development, from React frontends to Python backends, with reliable instruction following.
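Because M2.7 uses the standard OpenAI tools parameter, wiring its tool calls to local functions works the same as with any OpenAI-compatible model. Here is a minimal sketch: the `get_weather` tool schema and the `dispatch_tool_call` helper are illustrative examples, not part of TheRouter's API — the schema is what you would pass as `tools` in the request, and the dispatcher routes the model's emitted call back to your own code.

```python
import json

# Hypothetical tool schema in the standard OpenAI `tools` format,
# which M2.7 accepts via native function calling.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch_tool_call(name, arguments_json, registry):
    """Route a model-emitted tool call (name + JSON arguments string)
    to a matching local Python function."""
    args = json.loads(arguments_json)
    return registry[name](**args)

# Local implementation the tool call maps onto (stubbed for illustration).
registry = {"get_weather": lambda city: f"Sunny in {city}"}
```

In a real agent loop, you would pass `tools=tools` in the `chat.completions.create` call, read `response.choices[0].message.tool_calls`, dispatch each call as above, and append the results as `role: "tool"` messages before the next turn.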

Highspeed Variants: Speed When It Matters

Every M2 model (M2.7, M2.5, M2.1) has a -highspeed variant that delivers approximately 100 tokens/sec — roughly 1.7x the standard ~60 tok/s — at 2x the price. Choose Highspeed when:

  • Building real-time coding assistants or chat UIs
  • Running interactive agent loops where latency compounds
  • Powering customer-facing applications with strict response time SLAs

Standard variants remain the better choice for batch processing, background tasks, and cost-sensitive workloads where throughput matters more than latency.
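Since every Highspeed variant follows the same `-highspeed` naming suffix, the routing decision can live in one line of application code. The `pick_variant` helper below is a hypothetical convenience function, not part of any SDK:

```python
def pick_variant(base_model: str, interactive: bool) -> str:
    """Return the -highspeed variant for latency-sensitive work,
    the standard variant for batch or background jobs."""
    return f"{base_model}-highspeed" if interactive else base_model
```

For example, a chat UI could call `pick_variant("minimax/m2.7", interactive=True)` while a nightly summarization job passes `interactive=False` and pays half the price.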

3-Tier Failover Routing

MiniMax models on TheRouter benefit from automatic multi-provider failover:

  1. Priority 0: MiniMax direct API (api.minimax.io) — lowest latency, full feature support
  2. Priority 1: AWS Bedrock — enterprise-grade fallback with AWS SLA guarantees
  3. Priority 2: SiliconFlow — additional redundancy via optimized inference platform

If the primary provider returns an error or times out, TheRouter automatically retries on the next available provider — no code changes needed. Your application stays up even when individual providers go down.

Quick Start

Start using MiniMax M2.7 in under a minute:

from openai import OpenAI

# Point the OpenAI SDK at TheRouter's OpenAI-compatible endpoint
client = OpenAI(
    base_url="https://api.therouter.ai/v1",
    api_key="YOUR_THEROUTER_KEY",
)

response = client.chat.completions.create(
    model="minimax/m2.7",
    messages=[
        {"role": "system", "content": "You are an expert software engineer."},
        {"role": "user", "content": "Build a REST API with FastAPI that handles user authentication with JWT tokens."},
    ],
    max_tokens=2048,
)
print(response.choices[0].message.content)

No MiniMax API key needed — your TheRouter key handles everything. Works with any OpenAI-compatible SDK, Cursor, Continue, and other IDE integrations.
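For the real-time use cases where the Highspeed variants shine, you will usually want streaming, which follows the standard OpenAI `stream=True` convention. A minimal sketch of the request payload (the prompt content here is illustrative); pass it to the client from the quick start as `client.chat.completions.create(**request)` and iterate over the returned chunks:

```python
# Streaming request payload in the standard OpenAI Chat Completions shape;
# the -highspeed variant is chosen for its ~100 tok/s output rate.
request = {
    "model": "minimax/m2.7-highspeed",
    "messages": [
        {"role": "user", "content": "Refactor this function for readability."},
    ],
    "stream": True,
    "max_tokens": 1024,
}
```

With streaming enabled, each chunk carries a delta in `chunk.choices[0].delta.content`, so tokens can be rendered as they arrive instead of waiting for the full completion.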

Get Started Today

MiniMax M2.7 and all 7 models are live now. Create an account and start building in minutes.