Text Generation

Balance quality, latency, and cost for production generation workflows.

Prompt + Generation

TypeScript
// Request a short, low-temperature completion and inspect the returned items.
const result = await client.callModel({
  model: "openai/gpt-4o-mini",
  input: [{ role: "user", content: "Write a concise changelog entry." }],
  temperature: 0.2,
  max_output_tokens: 120,
});

console.log(result.items);

Generation Controls

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `temperature` | number | No | Controls randomness in sampling; lower values produce more deterministic output. |
| `presence_penalty` | number | No | Penalizes tokens that have already appeared, discouraging topic repetition. |
| `frequency_penalty` | number | No | Penalizes tokens in proportion to how often they have already appeared. |
| `max_output_tokens` | integer | No | Upper bound on the number of output tokens generated. |
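
Before sending a request, it can help to clamp controls to sane ranges so a typo does not produce an invalid request. The sketch below is a hypothetical helper, not part of any SDK, and the ranges (0–2 for `temperature`, -2 to 2 for the penalties) are assumptions borrowed from common provider conventions rather than limits documented here.

```typescript
// Hypothetical helper: clamp generation controls to assumed ranges
// before attaching them to a request.
interface GenerationControls {
  temperature?: number;
  presence_penalty?: number;
  frequency_penalty?: number;
  max_output_tokens?: number;
}

function clampControls(controls: GenerationControls): GenerationControls {
  const clamp = (value: number, min: number, max: number) =>
    Math.min(max, Math.max(min, value));
  return {
    ...controls,
    ...(controls.temperature !== undefined && {
      temperature: clamp(controls.temperature, 0, 2),
    }),
    ...(controls.presence_penalty !== undefined && {
      presence_penalty: clamp(controls.presence_penalty, -2, 2),
    }),
    ...(controls.frequency_penalty !== undefined && {
      frequency_penalty: clamp(controls.frequency_penalty, -2, 2),
    }),
    ...(controls.max_output_tokens !== undefined && {
      // Token counts must be positive integers.
      max_output_tokens: Math.max(1, Math.floor(controls.max_output_tokens)),
    }),
  };
}

console.log(clampControls({ temperature: 3, max_output_tokens: 120.7 }));
```

The clamped object can then be spread into the `client.callModel` call shown above.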

Output Validation

validate.ts
// Fail fast when the response contains no text item.
if (!result.items.some((item) => item.type === "text")) {
  throw new Error("Expected text output item");
}
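
The check above can be extended into a small extraction helper. This is a sketch: the `OutputItem` shape (a `type` discriminator plus an optional `text` field) is an assumption about what `result.items` contains, not a documented schema.

```typescript
// Assumed item shape, mirroring the result.items array used above.
interface OutputItem {
  type: string;
  text?: string;
}

// Concatenate all text items, throwing when none is present.
function extractText(items: OutputItem[]): string {
  const textItems = items.filter((item) => item.type === "text");
  if (textItems.length === 0) {
    throw new Error("Expected text output item");
  }
  return textItems.map((item) => item.text ?? "").join("");
}

console.log(extractText([{ type: "text", text: "Fixed retry logic." }]));
```
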

Quality Controls

Combine a lower temperature with explicit format instructions to keep operational text output stable across requests.
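
A minimal sketch of that combination: a request builder that bakes a format instruction into the prompt alongside a low temperature. The one-line changelog format and the `changelogRequest` helper are illustrative assumptions; the request shape mirrors the `callModel` example above.

```typescript
// Hypothetical builder: pair an explicit output format with a low
// temperature so repeated runs stay consistent.
function changelogRequest(summary: string) {
  return {
    model: "openai/gpt-4o-mini",
    input: [
      {
        role: "user",
        content:
          'Write a one-line changelog entry in the form "type: description" ' +
          `for this change: ${summary}`,
      },
    ],
    temperature: 0.2,
    max_output_tokens: 120,
  };
}

const request = changelogRequest("retry failed uploads with backoff");
console.log(request.input[0].content);
```

The resulting object can be passed directly to `client.callModel`.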