# Create Responses

Create a unified Responses API result with optional tool use and reasoning.

`POST /v1/responses`

## Request Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | Required | Target model identifier. |
| `input` | string \| array | Required | Input text or message-item array. |
| `tools` | array | Optional | Tool definitions available to the model. |
| `tool_choice` | string \| object | Optional | How tools are selected during generation. |
| `reasoning` | object | Optional | Reasoning controls for supported models. |
| `stream` | boolean | Optional | If `true`, returns a streaming SSE response. Defaults to `false`. |
| `store` | boolean | Optional | Whether to store the response for multi-turn conversations. Defaults to `true`. |
| `metadata` | object | Optional | Arbitrary key-value pairs to attach to the response. |
| `previous_response_id` | string | Optional | ID of a previous response to continue a multi-turn conversation. |

## Request Example

```bash
curl -X POST https://api.therouter.ai/v1/responses \
  -H "Authorization: Bearer $THEROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4.5",
    "input": "Summarize this ticket in 2 bullets."
  }'
```
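The same request can be built from Python. The sketch below attaches a hypothetical `get_weather` function tool to show where `tools` and `tool_choice` fit in the payload; the tool name and JSON Schema are illustrative assumptions, not part of the API itself.

```python
import json

# Build the request body. The tool definition here (a "function" tool with a
# JSON Schema for its parameters) is an assumed shape for illustration.
payload = {
    "model": "anthropic/claude-sonnet-4.5",
    "input": "What's the weather in Paris?",
    "tools": [
        {
            "type": "function",
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    "tool_choice": "auto",
}

body = json.dumps(payload)

# To send it (requires the `requests` package and a valid API key):
# import os, requests
# r = requests.post(
#     "https://api.therouter.ai/v1/responses",
#     headers={
#         "Authorization": f"Bearer {os.environ['THEROUTER_API_KEY']}",
#         "Content-Type": "application/json",
#     },
#     data=body,
# )
```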

## Response Fields

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `id` | string | Required | Unique response identifier. |
| `object` | string | Required | Always `"response"`. |
| `model` | string | Required | Model used for the response. |
| `created_at` | number | Required | Unix timestamp (seconds) when the response was created. |
| `status` | string | Required | Response status: `"completed"`, `"incomplete"`, or `"failed"`. |
| `output` | array | Required | Array of output items (messages, tool calls). |
| `incomplete_details` | object \| null | Optional | Details when `status` is `"incomplete"`. Contains a `reason` field (`"max_output_tokens"` or `"content_filter"`). |
| `usage` | object | Required | Token usage: `input_tokens`, `output_tokens`, `total_tokens`. |

## Response Example

```json
{
  "id": "resp_abc",
  "object": "response",
  "model": "anthropic/claude-sonnet-4.5",
  "created_at": 1709145600,
  "status": "completed",
  "output": [
    {"type": "message", "role": "assistant", "content": [{"type": "output_text", "text": "- User requests..."}]}
  ],
  "incomplete_details": null,
  "usage": {"input_tokens": 18, "output_tokens": 24, "total_tokens": 42}
}
```
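Since `output` is an array of items and each message carries a list of content parts, extracting the assistant's text takes a short walk over both levels. A minimal sketch, using the example response above (the helper name `extract_text` is illustrative, not part of the API):

```python
import json

def extract_text(response: dict) -> str:
    """Concatenate all output_text parts from the response's output items."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip tool calls and other non-message items
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)

# The example response body from above, trimmed to the fields used here:
resp = json.loads("""
{
  "id": "resp_abc",
  "status": "completed",
  "output": [
    {"type": "message", "role": "assistant",
     "content": [{"type": "output_text", "text": "- User requests..."}]}
  ]
}
""")
print(extract_text(resp))  # - User requests...
```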

## Streaming

Set `stream: true` to receive the response as server-sent events (SSE). Events are emitted in this order:

```text
event: response.created
event: response.in_progress
event: response.output_item.added
event: response.output_text.delta    # repeated for each text chunk
event: response.output_text.done
event: response.output_item.done
event: response.completed            # final event with full response object
```
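A client typically tracks the current `event:` name and accumulates text from the delta events. The sketch below assumes each `response.output_text.delta` event carries a JSON `data:` payload with a `"delta"` string; the event list above names the events but not their payload shape, so that shape is an assumption.

```python
import json

def accumulate_text(sse_lines):
    """Collect text from response.output_text.delta events in an SSE stream."""
    parts, event = [], None
    for line in sse_lines:
        line = line.strip()
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:") and event == "response.output_text.delta":
            payload = json.loads(line[len("data:"):].strip())
            parts.append(payload.get("delta", ""))
    return "".join(parts)

# Canned stream for illustration (payloads are assumed shapes):
stream = [
    "event: response.output_text.delta",
    'data: {"delta": "- User "}',
    "",
    "event: response.output_text.delta",
    'data: {"delta": "requests..."}',
    "",
    "event: response.completed",
    'data: {"id": "resp_abc"}',
]
print(accumulate_text(stream))  # - User requests...
```

In a real client the lines would come from the open HTTP connection rather than a canned list, but the event/data parsing is the same.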
## Notes

The Responses API is recommended for new integrations that need tool use and structured outputs.