Provider profile: OpenAI — OpenAI offers the widest selection of production-ready models, from the GPT-4o multimodal family and o-series reasoning models to specialized variants for image generation, audio, and embeddings.
- ✓ 51+ models — the broadest production-ready portfolio in the industry
- ✓ GPT-4o multimodal family with vision, audio, and real-time capabilities
- ✓ o-series reasoning models for math, code, and complex problem solving
- ✓ Responses API support for Cursor IDE and agent frameworks
Quickstart
from openai import OpenAI

client = OpenAI(
    base_url="https://api.therouter.ai/v1",
    api_key="YOUR_THEROUTER_KEY",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Summarize the key ideas in quantum computing"}],
    max_tokens=512,
)

print(response.choices[0].message.content)
Frequently Asked Questions
Does TheRouter support the OpenAI Responses API?
Yes. TheRouter supports both the Chat Completions API and the Responses API format used by Cursor IDE and newer agent frameworks. Both formats are automatically detected and routed correctly.
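A minimal sketch of the difference between the two formats, assuming the `openai/gpt-4o` routing prefix from the quickstart (the helper names here are illustrative, not part of any SDK):

```python
def chat_completions_body(model: str, prompt: str) -> dict:
    """Chat Completions format: a list of role-tagged messages."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def responses_body(model: str, prompt: str) -> dict:
    """Responses API format: the prompt goes in a single `input` field."""
    return {"model": model, "input": prompt}


# Both bodies target the same routed model; TheRouter detects the format.
chat = chat_completions_body("openai/gpt-4o", "Hello")
resp = responses_body("openai/gpt-4o", "Hello")
```

Either body can be sent to TheRouter; no client-side flag is needed to select the format.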
Which OpenAI models are available?
All major production models: GPT-4o, GPT-4o mini, GPT-4.1, GPT-4.1 mini, o3, o4-mini, o4-mini-high, and more. Run GET /v1/models with your TheRouter API key to see the current list.
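A hedged sketch of that lookup using only the standard library, assuming the usual `{"data": [{"id": ...}]}` shape of a `/v1/models` response and the `openai/` ID prefix from the quickstart:

```python
import json
import urllib.request


def openai_model_ids(models_payload: dict) -> list:
    """Filter a /v1/models payload down to OpenAI-prefixed model IDs."""
    return sorted(m["id"] for m in models_payload.get("data", [])
                  if m["id"].startswith("openai/"))


def fetch_models(api_key: str, base_url: str = "https://api.therouter.ai/v1") -> dict:
    """GET /v1/models with your TheRouter key in the Authorization header."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Filtering a sample payload (no network needed):
sample = {"data": [{"id": "openai/gpt-4o"}, {"id": "openai/o3"}, {"id": "meta/llama-3"}]}
```

In practice you would call `openai_model_ids(fetch_models("YOUR_THEROUTER_KEY"))` to get the live list.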
How does TheRouter handle OpenAI rate limits?
TheRouter manages request queuing and can fall back to alternative providers when OpenAI rate limits are hit, depending on your routing configuration.
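The server-side queuing and fallback behave roughly like this client-side sketch (the `send` callable and the use of `RuntimeError` as a stand-in for a 429 response are assumptions for illustration; TheRouter does this for you when fallback routing is configured):

```python
import time


def call_with_fallback(send, models, retries_per_model=2, base_delay=0.1):
    """Retry a model with exponential backoff, then fall back to the next one.

    `send(model)` is a hypothetical callable that performs one request and
    raises on a rate-limit response.
    """
    last_error = None
    for model in models:
        for attempt in range(retries_per_model):
            try:
                return send(model)
            except RuntimeError as err:  # stand-in for a 429 rate-limit error
                last_error = err
                time.sleep(base_delay * (2 ** attempt))
    raise last_error


# Simulated provider: gpt-4o is rate-limited, gpt-4o-mini answers.
calls = []
def flaky(model):
    calls.append(model)
    if model == "openai/gpt-4o":
        raise RuntimeError("429 Too Many Requests")
    return f"answered by {model}"

result = call_with_fallback(flaky, ["openai/gpt-4o", "openai/gpt-4o-mini"])
```

The order of the `models` list corresponds to the priority order in your routing configuration.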
Can I use GPT-4o Vision through TheRouter?
Yes. TheRouter's multimodal validation layer passes image_url content blocks directly to OpenAI in the native format. No image conversion needed.
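A minimal sketch of such a message, using OpenAI's native content-block shape (the helper name and the example URL are illustrative only):

```python
def vision_message(text: str, image_url: str) -> dict:
    """Build a user message with an image_url content block in OpenAI's
    native format; TheRouter forwards it unchanged."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }


msg = vision_message("What is in this picture?", "https://example.com/cat.png")
```

Pass `[msg]` as the `messages` argument of the quickstart's `client.chat.completions.create(...)` call with a vision-capable model such as `openai/gpt-4o`.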