Text Generation
Balance quality, latency, and cost for production generation workflows.
Prompt + Generation
TypeScript
```typescript
const result = await client.callModel({
  model: "openai/gpt-4o-mini",
  input: [{ role: "user", content: "Write a concise changelog entry." }],
  temperature: 0.2,
  max_output_tokens: 120,
});
console.log(result.items);
```
Generation Controls
| Name | Type | Required | Description |
|---|---|---|---|
| `temperature` | number | No | Controls randomness; lower values produce more deterministic output. |
| `presence_penalty` | number | No | Penalizes tokens and topics that have already appeared, discouraging repetition. |
| `frequency_penalty` | number | No | Penalizes tokens in proportion to how often they have appeared. |
| `max_output_tokens` | integer | No | Upper bound on the number of output tokens. |
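As a sketch, the controls above can be combined in a single request payload. The shape mirrors the `client.callModel` example earlier on this page; the numeric values are illustrative assumptions, not tuned recommendations:

```typescript
// Illustrative payload combining the generation controls above.
// Field names follow the earlier example; the values are assumptions.
const request = {
  model: "openai/gpt-4o-mini",
  input: [{ role: "user", content: "Write a concise changelog entry." }],
  temperature: 0.2,       // low randomness for operational text
  presence_penalty: 0.1,  // mild push away from repeating topics
  frequency_penalty: 0.2, // mild push away from repeating tokens
  max_output_tokens: 120, // hard upper bound on output length
};
```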
Output Validation
validate.ts
```typescript
if (!result.items.some((item) => item.type === "text")) {
  throw new Error("Expected text output item");
}
```
Quality controls
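Beyond the inline check, the validation can be centralized in a small helper. The item shape here (`type` and `text` fields) is an assumption based on the example above, not a documented contract:

```typescript
// Hypothetical result-item shape; field names mirror the example above.
type OutputItem = { type: string; text?: string };

// Return the first text item's content, or throw if the result has none.
function firstText(items: OutputItem[]): string {
  const item = items.find((i) => i.type === "text" && typeof i.text === "string");
  if (!item) throw new Error("Expected text output item");
  return item.text as string;
}
```

Call sites can then work with a plain string instead of re-checking item types at every use.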
Combine lower temperature with explicit format instructions for stable operational text outputs.
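For example, a format-pinning system message paired with a low temperature can be sketched as follows; the request shape mirrors the earlier example and the specific instruction text is illustrative:

```typescript
// Sketch: pin the output format in a system message and keep temperature low.
// The payload shape follows the earlier example; values are assumptions.
const stableRequest = {
  model: "openai/gpt-4o-mini",
  input: [
    { role: "system", content: "Reply with exactly one sentence, past tense, no markdown." },
    { role: "user", content: "Write a changelog entry for the CSV export feature." },
  ],
  temperature: 0.1,      // keep phrasing consistent across runs
  max_output_tokens: 80, // short operational text
};
```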