
GPT-5.5 & GPT-5.5 Pro Now Available on TheRouter

OpenAI has released GPT-5.5 (codenamed “Spud”) — its latest frontier model, with breakthrough capabilities in agentic coding, computer use, and research. Both gpt-5.5 and gpt-5.5-pro are live on TheRouter.


TheRouter adds day-one support for OpenAI GPT-5.5 and GPT-5.5 Pro. GPT-5.5: $5/$30 per MTok input/output, 1M context window; excels at coding, research, data analysis, and computer use. GPT-5.5 Pro: $30/$180 per MTok, higher accuracy for complex tasks. Benchmarks (GPT-5.5 Pro): SWE-Bench Pro 58.6%, Terminal-Bench 2.0 82.7% (SOTA). Model IDs: openai/gpt-5.5, openai/gpt-5.5-pro.

GPT-5.5 — Frontier Agentic Intelligence

  • 1M context window — process massive codebases, research papers, and long documents in a single request.
  • Agentic coding — purpose-built for autonomous multi-step coding workflows, debugging, and code generation.
  • Computer use — native ability to interact with desktop environments, browsers, and terminal sessions.
  • Research & data analysis — advanced reasoning across structured and unstructured data, with strong synthesis capabilities.
  • $5 / $30 per MTok (input/output) — competitive pricing for a frontier-class model.

GPT-5.5 Pro — Maximum Accuracy

  • Higher accuracy on complex tasks — extended compute for problems that demand deeper reasoning and precision.
  • Same capabilities as GPT-5.5 — agentic coding, computer use, research, and 1M context — with more thorough reasoning.
  • $30 / $180 per MTok (input/output) — for workloads where accuracy is more important than cost.

Benchmarks

Benchmark            GPT-5.5   GPT-5.5 Pro
SWE-Bench Pro        —         58.6%
Terminal-Bench 2.0   —         82.7% (SOTA)

GPT-5.5 Pro achieves state-of-the-art on Terminal-Bench 2.0 at 82.7% and scores 58.6% on SWE-Bench Pro, demonstrating strong real-world coding and agentic task performance.

Pricing

Model         Input      Output      Context
GPT-5.5       $5/MTok    $30/MTok    1M
GPT-5.5 Pro   $30/MTok   $180/MTok   1M

GPT-5.5 offers strong frontier capabilities at accessible pricing. GPT-5.5 Pro trades higher cost for maximum accuracy on demanding workloads.
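The per-MTok rates above translate directly into a per-request dollar cost. A minimal sketch — the estimate_cost helper is hypothetical (not part of TheRouter's API), with rates taken from the pricing table:

```python
# $ per million tokens (input, output), from the pricing table above.
PRICES = {
    "openai/gpt-5.5": (5.0, 30.0),
    "openai/gpt-5.5-pro": (30.0, 180.0),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the approximate dollar cost of one request."""
    in_rate, out_rate = PRICES[model]
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

For example, a request that fills the full 1M-token context and generates 100K output tokens costs about $8 on GPT-5.5, six times that on GPT-5.5 Pro.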

How to Use It

Use the standard model names — TheRouter handles routing automatically:

# Global endpoint
curl https://api.therouter.ai/v1/chat/completions \
  -H "Authorization: Bearer $THE_ROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.5",
    "messages": [{"role": "user", "content": "Explain how transformers work"}],
    "max_tokens": 4096
  }'

# China endpoint
curl https://airouter-api.mizone.me/v1/chat/completions \
  -H "Authorization: Bearer $THE_ROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-5.5",
    "messages": [{"role": "user", "content": "Explain how transformers work"}],
    "max_tokens": 4096
  }'

For GPT-5.5 Pro, use openai/gpt-5.5-pro. Both models are available on the Global endpoint (api.therouter.ai) and the China endpoint (airouter-api.mizone.me).
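The same request can be issued from Python using only the standard library. This sketch mirrors the curl calls above; the build_payload and chat helper names are illustrative (not part of any SDK), and it assumes the endpoint accepts the same OpenAI-style JSON body shown in the examples:

```python
import json
import os
import urllib.request

def build_payload(model: str, prompt: str, max_tokens: int = 4096) -> dict:
    # Mirrors the JSON body from the curl examples above.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(model: str, prompt: str,
         base_url: str = "https://api.therouter.ai/v1") -> str:
    # Swap base_url for https://airouter-api.mizone.me/v1 on the China endpoint.
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['THE_ROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Switching to GPT-5.5 Pro is just `chat("openai/gpt-5.5-pro", ...)` — nothing else in the request changes.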

Getting Started

Already on TheRouter? Just set the model to openai/gpt-5.5 or openai/gpt-5.5-pro — no other changes needed.


Questions? Reach out on GitHub.