AI Agent Examples

Ready-to-use patterns for AI agents that call TheRouter.ai programmatically.

Basic Chat Completion

The simplest agent interaction: send a message, get a response.

Python
import openai

client = openai.OpenAI(
    base_url="https://api.therouter.ai/v1",
    api_key="sk-or-your-key",
)

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Summarize this document."}],
    temperature=0.3,
    max_tokens=500,
)

print(response.choices[0].message.content)
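In production, transient failures (rate limits, dropped connections) are common. One way to harden the call above is a small stdlib-only retry helper; the function name and backoff values here are illustrative, not TheRouter recommendations:

```python
import time

def with_retry(fn, retries=3, base_delay=1.0):
    """Call fn(); on failure, back off exponentially and try again."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

You would wrap the completion call as, for example, `with_retry(lambda: client.chat.completions.create(...))`. Catching only the SDK's retryable exceptions (rate limit, connection, server errors) is tighter than the bare `except Exception` shown here.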

Tool Calling Agent Loop

A multi-step agent loop that defines tools, detects tool calls, executes them, and feeds the results back until the model returns a final answer (capped here at five steps).

Python
import json
import openai

client = openai.OpenAI(
    base_url="https://api.therouter.ai/v1",
    api_key="sk-or-your-key",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

for _ in range(5):  # cap agent steps to avoid a runaway tool loop
    response = client.chat.completions.create(
        model="anthropic/claude-sonnet-4-5-20250514",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

    choice = response.choices[0]
    messages.append(choice.message)

    if choice.finish_reason == "tool_calls":
        for tc in choice.message.tool_calls:
            # Execute the tool in your runtime (result hardcoded here for brevity)
            result = {"temperature": "22°C", "condition": "Sunny"}
            messages.append({
                "role": "tool",
                "tool_call_id": tc.id,
                "content": json.dumps(result),
            })
    else:
        print(choice.message.content)
        break
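The hardcoded result above stands in for real tool execution. In practice, agents usually dispatch on the tool name through a registry. A minimal sketch (the registry and the get_weather body are illustrative, not part of TheRouter's API):

```python
import json

def get_weather(city: str) -> dict:
    # Stand-in implementation; a real tool would call a weather API.
    return {"city": city, "temperature": "22°C", "condition": "Sunny"}

TOOL_REGISTRY = {"get_weather": get_weather}

def execute_tool_call(name: str, arguments_json: str) -> str:
    """Look up the tool, parse its JSON arguments, and return a JSON result string."""
    fn = TOOL_REGISTRY.get(name)
    if fn is None:
        return json.dumps({"error": f"unknown tool: {name}"})
    try:
        args = json.loads(arguments_json)
        return json.dumps(fn(**args))
    except (json.JSONDecodeError, TypeError) as exc:
        return json.dumps({"error": str(exc)})
```

Inside the loop, each tool call would then be handled with something like `execute_tool_call(tc.function.name, tc.function.arguments)`. Returning errors as JSON (rather than raising) lets the model see the failure and recover.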

Model Fallbacks

Pass TheRouter's models array (via the SDK's extra_body, since it is not a standard OpenAI parameter) to automatically try backup models when the primary is unavailable.

Python
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-5-20250514",
    messages=[{"role": "user", "content": "Explain quantum computing."}],
    extra_body={
        "models": [
            "anthropic/claude-sonnet-4-5-20250514",
            "openai/gpt-4o",
            "google/gemini-2.0-flash",
        ],
        "route": "fallback",
    },
)
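If you prefer to control fallback client-side instead of relying on the route parameter, the same idea is a loop over candidate models. A sketch, where call_model is a stand-in for a wrapper around client.chat.completions.create:

```python
def first_successful(models, call_model):
    """Try each model in order; return (model, result) from the first that succeeds."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model)
        except Exception as exc:
            last_error = exc  # remember the failure and move on
    raise RuntimeError(f"all models failed: {last_error}")
```

Server-side fallback is usually preferable (one request, one bill), but a client-side loop gives you custom logic, such as skipping models that failed recently.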

Streaming Response

Stream tokens as they are generated for real-time agent output.

Python
stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku."}],
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    delta = chunk.choices[0].delta if chunk.choices else None
    if delta and delta.content:
        print(delta.content, end="", flush=True)
    if hasattr(chunk, "usage") and chunk.usage:
        print(f"\nTokens: {chunk.usage.total_tokens}")
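Agents often need the complete text after streaming finishes as well; accumulating deltas into a buffer is straightforward. A sketch over chunk objects shaped like the SDK's (note the guard for usage-only chunks, which carry an empty choices list):

```python
def collect_stream(chunks):
    """Accumulate streamed content deltas into the final message text."""
    parts = []
    for chunk in chunks:
        # Usage-only chunks have no choices; skip them.
        delta = chunk.choices[0].delta if chunk.choices else None
        if delta and delta.content:
            parts.append(delta.content)
    return "".join(parts)
```

You can combine this with the printing loop above by appending to the buffer in the same pass that prints each delta.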

Structured JSON Output

Constrain the model to return valid JSON for reliable agent parsing.

Python
import json

response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "system", "content": "Extract entities as JSON."},
        {"role": "user", "content": "Apple released the iPhone 16 in Cupertino."},
    ],
    response_format={"type": "json_object"},
)

data = json.loads(response.choices[0].message.content)
# {"entities": [{"name": "Apple", "type": "ORG"}, ...]}
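The json_object mode guarantees syntactically valid JSON, but not any particular shape. A light validation step guards downstream parsing; the expected "entities" key here mirrors the example above and is otherwise an assumption:

```python
import json

def parse_entities(raw: str) -> list:
    """Parse the model's JSON output and validate the shape the agent expects."""
    data = json.loads(raw)
    entities = data.get("entities")
    if not isinstance(entities, list):
        raise ValueError(f"expected an 'entities' list, got: {type(entities).__name__}")
    # Keep only well-formed entity records.
    return [e for e in entities if isinstance(e, dict) and "name" in e and "type" in e]
```

On a ValueError, a common agent pattern is to retry the request with the error message appended, giving the model a chance to correct its output.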

System Prompt Best Practice

When building agents, include TheRouter model IDs directly in system prompts or tool descriptions so the agent can reference the models available to it. Query GET /v1/models to discover the current model catalog dynamically.
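The catalog endpoint can be fetched with any HTTP client. A stdlib sketch, assuming the response follows the OpenAI-style {"data": [{"id": ...}]} shape:

```python
import json
import urllib.request

def extract_model_ids(payload: dict) -> list:
    """Pull model IDs out of an OpenAI-style model list payload."""
    return [m["id"] for m in payload.get("data", [])]

def list_model_ids(base_url="https://api.therouter.ai/v1", api_key="sk-or-your-key"):
    """Fetch GET /v1/models and return the available model IDs."""
    req = urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))
```

If you are already using the OpenAI SDK client from the earlier examples, its models.list() method should hit the same endpoint.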