Models
Browse available models, compare pricing, and check capabilities.
Image model used in ChatGPT.
Claude 3.5 Haiku offers enhanced speed, coding accuracy, and tool use. Engineered for real-time applications, it delivers the quick response times essential for dynamic tasks such as chat interactions and immediate coding suggestions. This makes it well suited to environments that demand both speed and precision, such as software development, customer service bots, and data management systems.
Claude Haiku 4.5 delivers near-frontier performance for a wide range of use cases, and stands out as one of the best coding and agent models, with the right speed and cost to power free products and high-volume user experiences. Use cases: Powering free tier user experiences: Claude Haiku 4.5 delivers near-frontier performance at a cost and speed that makes powering free agent products and agentic use cases economically viable at scale. Real-time experiences: Claude Haiku 4.5's speed is ideal for real-time applications like customer service agents and chatbots where response time is critical. Coding sub-agents: Use Claude Haiku 4.5 to power sub-agents, enabling multi-agent systems that tackle complex refactors, migrations, and large feature builds with quality and speed. Financial sub-agents: Use Claude Haiku 4.5 to monitor thousands of data streams, tracking regulatory changes, market signals, and portfolio risks to preemptively adapt compliance and trading systems at previously impossible scales. Research sub-agents: Perform parallel analyses across multiple data sources while maintaining fast response times. Ideal for rapid business intelligence, competitive analysis, and real-time decision support. Business tasks: Claude Haiku 4.5 is capable of producing and editing office files like slides, documents, and spreadsheets. It also offers improved support for strategy and campaign planning, business analysis, and brainstorming.
Claude Opus 4 is Anthropic's most intelligent model and is state-of-the-art for coding and agent capabilities, especially agentic search. It excels for customers needing frontier intelligence: Advanced coding: Independently plan and execute complex development tasks end-to-end. It adapts to your style and maintains high code quality throughout. AI agents: Enable agents to tackle complex, multi-step tasks that require peak accuracy. Agentic search and research: Connect to multiple data sources to synthesize comprehensive insights across repositories. Long-horizon tasks and complex problem solving (virtual collaborator): Unlock new use cases involving long-horizon tasks that require memory, sustained reasoning, and long chains of actions. Content creation: Create human-quality content with natural prose. Produce long-form creative content, technical documentation, marketing copy, and frontend design mockups.
Claude Opus 4.1 is Anthropic's most intelligent model and an industry leader for coding and agent capabilities, especially agentic search. It excels for customers needing frontier intelligence: Advanced coding: Independently plan and execute complex development tasks end-to-end. It adapts to your style, thoughtfully plans and pivots, and maintains high code quality throughout. Long-horizon tasks and complex problem solving (virtual collaborator): Unlock new use cases involving long-horizon tasks that require memory, sustained reasoning, and long chains of actions. AI agents: Enable agents to tackle complex, multi-step tasks that require peak accuracy. Agentic search and research: Connect to multiple data sources to synthesize comprehensive insights across repositories. Content creation: Create human-quality content with natural prose. Produce long-form creative content, technical documentation, marketing copy, and frontend design mockups. Memory and context management: Incorporates memory capabilities that allow it to effectively summarize and reference previous interactions.
The next generation of Anthropic's most intelligent model, Claude Opus 4.5 is an industry leader across coding, agents, computer use, and enterprise workflows. Use cases: Coding: Opus 4.5 can confidently deliver multi-day software development projects in hours, working independently with the technical depth and taste to create efficient and straightforward solutions. It has improved performance across programming languages, with better planning and architecture choices, making it the ideal model for enterprise developers. Agents: Claude Opus 4.5, paired with our advanced tool use capabilities, enables more capable agents with new behaviors. Computer use: Our best computer-using model yet, Claude Opus 4.5 navigates new experiences with confident, consistent approaches that deliver more human-like browsing, enabling better web QA, workflow automation, and advanced user experiences. Enterprise workflows: Opus 4.5 can power agents that manage sprawling professional projects from start to finish. It better leverages memory to maintain context and consistency across files, alongside a step-change improvement in creating spreadsheets, slides, and docs. Financial analysis: Opus 4.5 connects the dots across complex information systems (regulatory filings, market reports, internal data), making sophisticated predictive modeling and proactive compliance possible. Cybersecurity: Opus 4.5 brings professional-grade analysis to security workflows, correlating logs, vulnerability databases, and threat intelligence for proactive threat detection and automated incident response.
Claude Opus 4.6 is the next generation of our most intelligent model, and the world's best model for coding, enterprise agents, and professional work. Use cases include: Agents: Opus 4.6 is the world's best model for agentic workflows, orchestrating complex tasks across dozens of tools with industry-leading reliability. It proactively spins up subagents, parallelizes work, and drives tasks forward with minimal oversight. Coding: Opus 4.6 is the world's best coding model, excelling at long-horizon projects, complex implementations, and large-scale codebases. It handles the full lifecycle from architecture to deployment, so senior engineers can delegate their most complex work with confidence. Enterprise workflows: Opus 4.6 sets the standard for enterprise workflows, powering agents that manage sprawling projects end-to-end with professional polish, domain awareness, and industry-leading performance on spreadsheets, slides, and docs. Financial analysis: Opus 4.6 is Anthropic's most capable model for financial workflows, surfacing insights that would take analysts days to compile. It handles the nuance and precision that compliance-sensitive work demands. Cybersecurity: Opus 4.6 delivers the deepest reasoning for security workflows, catching subtle patterns and complex attack vectors with unmatched accuracy. Computer use: Opus 4.6 is our most capable computer-use model for complex workflows, bringing deep reasoning to multi-step tasks that span multiple applications and require planning and judgment.
Claude Sonnet 4 balances impressive performance for coding with the right speed and cost for high-volume use cases: Coding: Handle everyday development tasks with enhanced performance, powering code reviews, bug fixes, API integrations, and feature development with immediate feedback loops. AI Assistants: Power production-ready assistants for real-time applications, from customer support automation to operational workflows that require both intelligence and speed. Efficient research: Perform focused analysis across multiple data sources while maintaining fast response times. Ideal for rapid business intelligence, competitive analysis, and real-time decision support. Large-scale content: Generate and analyze content at scale with improved quality. Create customer communications, analyze user feedback, and produce marketing materials with the right balance of quality and throughput.
Claude Sonnet 4.5 is our most capable model to date for building real-world agents and handling complex, long-horizon tasks, balancing the right speed and cost for high-volume use cases: Long-running agents: Power production-ready assistants for multi-step, real-time applications, from customer support automation to complex operational workflows that require peak accuracy, intelligence, and speed. Coding: Handle everyday development tasks with enhanced performance, or plan and execute complex software projects spanning hours or days, with the ability to save, maintain, and reference information across multiple sessions. Cybersecurity: Deploy agents that autonomously patch vulnerabilities before exploitation, shifting from reactive detection to proactive defense. Financial analysis: Conduct entry-level financial analysis, deliver advanced predictive analysis, or preemptively develop intelligent risk management strategies that leverage best-in-class domain knowledge. Computer use: Claude Sonnet 4.5 is our most accurate model for computer use, enabling developers to direct Claude to use computers the way people do. Research: Perform focused analysis across multiple data sources, turning expert analysis into final deliverables. Ideal for complex problem solving, rapid business intelligence, and real-time decision support.
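The Claude models above are typically exercised through a chat-style API. As an illustrative sketch only, here is how a minimal Anthropic-style Messages request body could be assembled; the model id `claude-haiku-4-5`, the prompt, and the helper name are assumptions for illustration, and no network call is made:

```python
# Hedged sketch: build a minimal Messages-style request body.
# The model id below is an assumption; check the catalog for the exact string.
import json

def build_messages_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """Assemble a chat request body; no HTTP call happens here."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_messages_request("claude-haiku-4-5", "Triage this support ticket in one line.")
print(json.dumps(payload, indent=2))
```

Swapping the model id is the only change needed to move the same workload between the Haiku, Sonnet, and Opus tiers described above.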
Claude Sonnet 4.6 delivers frontier intelligence at scale—built for coding, agents, and enterprise workflows.
Workhorse model for all daily tasks. Strong overall performance and low latency supports real-time applications. Suitable for chat interactions, content generation, and general-purpose AI tasks.
Google's cost-effective Gemini model to support high throughput. Optimized for the most price-sensitive use cases while maintaining solid quality for everyday tasks.
Best for balancing reasoning and speed. Gemini 2.5 Flash offers thinking capabilities with strong performance across coding, math, and reasoning tasks at an efficient price point.
Image generation model built on Gemini 2.5 Flash with conversational, multi-turn editing capabilities. Supports text and image input/output for creative workflows.
Most balanced Gemini model for low latency use cases. Optimized for high-volume, cost-sensitive workloads with strong quality at minimal cost.
Most balanced Gemini model for low latency use cases. A preview snapshot of Gemini 2.5 Flash Lite optimized for speed and cost efficiency.
Strong overall performance and low latency. A preview snapshot of Gemini 2.5 Flash with balanced reasoning and speed.
Strongest Gemini model quality, especially for code and complex prompts. Features advanced reasoning with thinking capabilities and excels at multi-step problem solving, code generation, and mathematical reasoning.
Google's agentic workhorse model, bringing near-Pro agentic, coding, and multimodal intelligence with more balanced cost and speed. Ideal for production workloads that need strong reasoning at high throughput.
Google's standard model upgraded for rapid creative workflows with image generation and conversational, multi-turn editing capabilities. Supports both text and image output for creative and design tasks.
Google's most powerful agentic and coding model with the best multimodal understanding capabilities. Excels at complex reasoning, code generation, and multi-step problem solving across modalities.
Google's latest image generation model with text and image input/output. Supports combining up to 14 input images into one output image, with conversational multi-turn editing capabilities.
Designed for high-volume, cost-sensitive traffic, Gemini 3.1 Flash Lite delivers a massive quality leap over previous Lite generations while matching the core performance of Gemini 2.5 Flash. Ideal for real-time applications requiring low latency at minimal cost.
Google's most powerful agentic and coding model. It features a 1M token context window with complex multimodal understanding capabilities. Gemini 3.1 Pro excels at advanced reasoning, multi-step agentic tasks, and complex problem solving across text, code, images, and video.
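Even a 1M-token window like the one mentioned above benefits from a pre-flight budget check. Below is a rough sketch using the common ~4-characters-per-token heuristic; the window constant, reserved-output figure, and helper names are assumptions for illustration, and a provider's count-tokens endpoint should be used for exact numbers:

```python
# Rough pre-flight context check using the ~4 chars/token rule of thumb.
# This is a heuristic sketch, not an official tokenizer.

CONTEXT_WINDOW = 1_000_000  # assumed 1M-token window, per the blurb above

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = 8_192) -> bool:
    """Check whether the prompt plus reserved output tokens fit the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("short prompt"))   # prints True
print(fits_in_context("x" * 8_000_000))  # prints False: ~2M tokens exceeds 1M
```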
Previous generation image generation model.
Cost-efficient version of GPT Image 1.
State-of-the-art image generation model.
OpenAI's smartest non-reasoning model. Excels at instruction following and tool calling with broad knowledge across domains. Features a 1M token context window and low latency.
Smaller, faster version of GPT-4.1. Excels at instruction following and tool calling with a 1M token context window and low latency without a reasoning step.
Fastest, most cost-efficient version of GPT-4.1. Excels at instruction following and tool calling with a 1M token context window and minimal latency.
OpenAI's versatile, high-intelligence flagship model. Accepts text and image inputs, produces text outputs including structured outputs. Best model for most tasks outside reasoning-heavy use cases.
GPT-4o model capable of audio inputs and outputs.
Fast, affordable small model for focused tasks. Accepts text and image inputs, produces text outputs. Ideal for fine-tuning and cost-efficient workloads.
Smaller audio-capable GPT-4o model.
Smaller realtime model for text and audio workflows.
Speech-to-text model powered by GPT-4o mini.
Text-to-speech model powered by GPT-4o mini.
Realtime text and audio model from the GPT-4o family.
Speech-to-text model powered by GPT-4o.
Transcription model that identifies who is speaking when.
OpenAI's intelligent reasoning model for coding and agentic tasks with configurable reasoning effort. Features a 400K context window and 128K max output.
A faster, cost-efficient version of GPT-5 for well-defined tasks. Features reasoning token support with a 400K context window and 128K max output at a fraction of the cost.
Fastest, most cost-efficient version of GPT-5. Great for summarization and classification tasks with reasoning token support. Features a 400K context window and 128K max output.
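The GPT-5 family entries above mention configurable reasoning effort. Here is a hedged sketch of building a Responses-style request body with an effort knob; the field names and effort strings follow OpenAI's documented Responses API shape, but verify them against current docs before relying on them, and note that no request is actually sent:

```python
# Sketch: request body with a configurable reasoning-effort setting.
# Effort level names are assumptions to verify against current API docs.

VALID_EFFORTS = ("minimal", "low", "medium", "high")

def build_reasoning_request(model: str, prompt: str, effort: str = "medium") -> dict:
    """Build a Responses-style body with a reasoning-effort setting."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {VALID_EFFORTS}, got {effort!r}")
    return {
        "model": model,
        "input": prompt,
        "reasoning": {"effort": effort},
    }

req = build_reasoning_request("gpt-5-mini", "Classify this log line.", effort="low")
print(req["reasoning"]["effort"])  # prints low
```

Lower effort trades reasoning depth for the speed and cost profile the Mini and Nano blurbs above describe.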
Version of GPT-5 optimized for agentic coding in Codex.
OpenAI's previous flagship reasoning model for coding and agentic tasks with configurable reasoning effort. Features a 400K context window and 128K max output.
Version of GPT-5.1 optimized for agentic coding in Codex.
Smaller, more cost-effective version of GPT-5.1-Codex.
Version of GPT-5.1 Codex optimized for long-running tasks.
OpenAI's best model for coding and agentic tasks across industries. Features a 400K context window with 128K max output, reasoning token support, and state-of-the-art long-context reasoning.
Intelligent coding model optimized for long-horizon, agentic coding tasks.
Most capable agentic coding model to date.
Best intelligence at scale for agentic, coding, and professional workflows.
Version of GPT-5.4 that produces smarter and more precise responses.
Audio inputs and outputs with the Chat Completions API.
Best voice model for audio in, audio out with Chat Completions.
Cost-efficient version of GPT Audio.
Model capable of realtime text and audio inputs and outputs.
Best voice model for audio in, audio out.
Cost-efficient version of GPT Realtime.
xAI's previous flagship model with 131K context window. Strong general-purpose performance with function calling and structured output support.
Cost-efficient reasoning model from xAI with 131K context window. Ideal for tasks requiring reasoning at lower cost with function calling and structured output support.
xAI's most powerful reasoning model with 256K token context window. Excels at complex reasoning, coding, and multi-step problem solving with function calling and structured outputs.
xAI's fastest model with 2M token context window. Optimized for speed without reasoning overhead, supporting text and image inputs with function calling and structured outputs.
xAI's latest fast reasoning model with 2M token context window. Combines speed with strong reasoning capabilities, supporting text and image inputs with function calling and structured outputs.
xAI's specialized coding model with reasoning capabilities. Optimized for code generation, analysis, and debugging tasks with function calling and structured outputs.
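Several of the xAI entries above note structured output support. The sketch below attaches a JSON schema to an OpenAI-compatible chat request; the `response_format` shape is a common convention rather than a guarantee for any specific provider, and the schema, helper, and model id are illustrative:

```python
# Sketch: request schema-constrained output from an OpenAI-compatible API.
# The response_format shape is an assumed convention; confirm in provider docs.
import json

def build_structured_request(model: str, prompt: str, schema: dict, name: str) -> dict:
    """Attach a JSON schema so the model must reply with matching JSON."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": name, "schema": schema, "strict": True},
        },
    }

incident_schema = {
    "type": "object",
    "properties": {"severity": {"type": "string"}, "summary": {"type": "string"}},
    "required": ["severity", "summary"],
}
req = build_structured_request("grok-4", "Summarize this incident.", incident_schema, "incident")
print(json.dumps(req, indent=2))
```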
OpenAI's powerful reasoning model that pushes the frontier across coding, math, science, and visual perception. Excels in complex queries requiring multi-faceted analysis. Succeeded by GPT-5.
Most powerful deep research model.
Version of o3 with more compute for better, more precise responses. Best for complex reasoning tasks where accuracy is paramount.
Fast, cost-efficient reasoning model with a 200K context window. Ideal for tasks requiring reasoning at lower cost. Succeeded by GPT-5 Mini.
Faster, more affordable deep research model.
Flagship video generation with synced audio.
Most advanced synced-audio video generation.
Text-to-speech model optimized for speed.
Text-to-speech model optimized for quality.
General-purpose speech recognition model.