Provider profile: Google
Gemini — multimodal intelligence at 1M+ token context
Google's Gemini family pushes multimodal frontiers with 1M-token context windows, native image and audio understanding, and deep integration with Google's research infrastructure via Vertex AI.
- ✓ Gemini 2.5 Pro with 1M+ token context and native multimodal understanding
- ✓ Strong performance on coding, reasoning, and long-document analysis
- ✓ Native image, audio, and video input via Vertex AI
- ✓ Deep Google research infrastructure for reliability
Quickstart

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.therouter.ai/v1",
    api_key="YOUR_THEROUTER_KEY",
)

response = client.chat.completions.create(
    model="google/gemini-2.5-pro",
    messages=[{"role": "user", "content": "Analyze this code for security vulnerabilities"}],
    max_tokens=1024,
)

print(response.choices[0].message.content)
```

Models
Frequently Asked Questions
Which Gemini models are available on TheRouter?
TheRouter provides Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 2.0 Flash, and several other Gemini variants, all accessed via the standard `google/model-name` identifier format.
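Switching between variants is just a change of the model string in the quickstart call. A minimal sketch of validating that an identifier follows the `google/model-name` convention (the helper `is_gemini` is illustrative, not part of any TheRouter SDK):

```python
# Model IDs named in the FAQ above; all share the "google/" provider prefix.
GEMINI_MODELS = [
    "google/gemini-2.5-pro",
    "google/gemini-2.5-flash",
    "google/gemini-2.0-flash",
]

def is_gemini(model_id: str) -> bool:
    """Return True if model_id is a Gemini model in provider/model-name form."""
    provider, _, name = model_id.partition("/")
    return provider == "google" and name.startswith("gemini")
```

Any of these strings can be passed as the `model` argument to `client.chat.completions.create` unchanged.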
Does TheRouter support Gemini's 1M token context?
Yes. TheRouter passes long-context requests to Vertex AI unchanged. Note that prompts exceeding 200K tokens may incur long-context pricing.
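To anticipate the 200K-token pricing boundary before sending a request, you can make a rough client-side estimate. A sketch under one stated assumption: the ~4 characters-per-token ratio is a crude heuristic for English text, not an exact tokenizer.

```python
LONG_CONTEXT_THRESHOLD = 200_000  # tokens; per the FAQ, long-context pricing may apply above this

def rough_token_count(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English prose.
    return len(text) // 4

def may_incur_long_context_pricing(prompt: str) -> bool:
    """Flag prompts whose estimated token count exceeds the pricing threshold."""
    return rough_token_count(prompt) > LONG_CONTEXT_THRESHOLD
```

For precise counts, use the model's own tokenizer rather than this estimate.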
How does TheRouter connect to Google's models?
TheRouter routes Google model requests through a dedicated Vertex AI provider service in us-central1. Each request is a direct API call — no intermediaries.
Can I send images to Gemini through TheRouter?
Yes. TheRouter's multimodal pipeline fetches image URLs, converts them to base64-encoded inline data, and sends the result to Vertex AI in the required format. Supported formats include JPEG, PNG, GIF, and WebP, among others.
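On the client side, images are supplied using the OpenAI-compatible content-parts message shape (a text part plus an `image_url` part); TheRouter handles the Vertex AI conversion. A minimal sketch — the helper name `image_message` and the example URL are illustrative:

```python
def image_message(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-compatible multimodal user message: text part + image part."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

# Pass the result in the messages list of client.chat.completions.create,
# e.g. messages=[image_message("Describe this image", "https://example.com/photo.png")]
```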