AI Models Comparison

Compare popular large language models across providers, pricing, capabilities, and performance.

Last updated: 2025-02-11

Models compared: GPT-4o (OpenAI), Claude Opus 4 (Anthropic), Gemini 2.5 Pro (Google), LLaMA 3.1 405B (Meta), Mistral Large (Mistral AI), DeepSeek V3 (DeepSeek), Sonar Pro (Perplexity).
General

| Feature | GPT-4o | Claude Opus 4 | Gemini 2.5 Pro | LLaMA 3.1 405B | Mistral Large | DeepSeek V3 | Sonar Pro |
|---|---|---|---|---|---|---|---|
| Provider | OpenAI | Anthropic | Google | Meta | Mistral AI | DeepSeek | Perplexity |
| Release Date | May 2024 | May 2025 | Mar 2025 | Jul 2024 | Feb 2024 | Dec 2024 | Feb 2025 |
| Open Source | No | No | No | Yes (open weights) | No | Yes (open weights) | No |
| Parameters | Undisclosed | Undisclosed | Undisclosed | 405B | Undisclosed | 671B (MoE) | Undisclosed |
Context & Tokens

| Feature | GPT-4o | Claude Opus 4 | Gemini 2.5 Pro | LLaMA 3.1 405B | Mistral Large | DeepSeek V3 | Sonar Pro |
|---|---|---|---|---|---|---|---|
| Max Context Window | 128K | 200K | 1M | 128K | 128K | 128K | 200K |
| Max Output Tokens | 16K | 32K | 65K | 4K | 8K | 8K | 8K |
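Context limits are counted in tokens, not characters, and every model uses its own tokenizer. As a rough pre-flight sanity check, you can estimate token counts with the common heuristic of ~4 characters per token for English text (an approximation, not an exact count):

```python
# Rough check that a prompt plus the reserved output budget fits a model's
# context window. The chars/4 heuristic is a crude English-text estimate;
# real counts depend on each model's tokenizer.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_output_tokens: int, context_window: int) -> bool:
    """True if estimated prompt tokens plus the output budget fit the window."""
    return estimate_tokens(prompt) + max_output_tokens <= context_window

# 128K-token window, as listed for GPT-4o, LLaMA 3.1 405B, Mistral Large,
# and DeepSeek V3 in the table above.
print(fits_context("word " * 1000, max_output_tokens=16_000, context_window=128_000))
```

For anything cost- or safety-critical, use the provider's own tokenizer or token-counting endpoint instead of this heuristic.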
Pricing (per 1M tokens)

| Price (USD) | GPT-4o | Claude Opus 4 | Gemini 2.5 Pro | LLaMA 3.1 405B | Mistral Large | DeepSeek V3 | Sonar Pro |
|---|---|---|---|---|---|---|---|
| Input Price | $2.50 | $15.00 | $1.25 | Free / Varies | $2.00 | $0.27 | $3.00 |
| Output Price | $10.00 | $75.00 | $10.00 | Free / Varies | $6.00 | $1.10 | $15.00 |
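Per-1M-token prices translate to per-request costs with simple arithmetic. A minimal sketch (the token counts below are made-up examples, not measurements):

```python
# Cost of a single request given per-1M-token prices, as quoted in the
# pricing table above.
def request_cost(input_tokens: int, output_tokens: int,
                 input_price: float, output_price: float) -> float:
    """Prices are USD per 1M tokens; returns the request cost in USD."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# GPT-4o from the table: $2.50 input / $10.00 output per 1M tokens.
cost = request_cost(input_tokens=2_000, output_tokens=500,
                    input_price=2.50, output_price=10.00)
print(f"${cost:.4f}")  # 2000*2.50/1e6 + 500*10.00/1e6 = $0.0100
```

Note how asymmetric pricing changes the picture: at Claude Opus 4's $75.00 output rate, the same 500 output tokens alone cost $0.0375, several times the entire GPT-4o request.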
Capabilities

| Capability | GPT-4o | Claude Opus 4 | Gemini 2.5 Pro | LLaMA 3.1 405B | Mistral Large | DeepSeek V3 | Sonar Pro |
|---|---|---|---|---|---|---|---|
| Vision (Image Input) | Yes | Yes | Yes | No | No | No | – |
| Function / Tool Calling | Yes | Yes | Yes | Yes | Yes | Yes | – |
| Code Generation | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Structured Output (JSON) | Yes | Yes | Yes | Varies by host | Yes | Yes | Yes |
| System Prompts | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Streaming | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Fine-tuning Available | Yes | No | No | Yes (open weights) | – | Yes (open weights) | No |

“–” = not confirmed for this model.
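Several of the capabilities above surface as fields in a single request body on the OpenAI-style chat-completions APIs that many of these providers expose or are compatible with. A minimal sketch of such a body; the model id and the `get_weather` tool are illustrative placeholders, not real identifiers:

```python
import json

# Sketch of an OpenAI-style chat-completions request body exercising three
# capabilities from the table: system prompts, streaming, and tool calling.
# The model id and tool schema are hypothetical examples.
payload = {
    "model": "example-model",          # placeholder, not a real model id
    "stream": True,                    # request token-by-token streaming
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [{                        # hypothetical tool definition
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

print(json.dumps(payload, indent=2))
```

Providers that are not OpenAI-compatible (Anthropic's Messages API, for example) express the same ideas with different field names, so check each vendor's API reference before reusing this shape.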
Benchmarks

| Benchmark | GPT-4o | Claude Opus 4 | Gemini 2.5 Pro | LLaMA 3.1 405B | Mistral Large | DeepSeek V3 | Sonar Pro |
|---|---|---|---|---|---|---|---|
| MMLU Score | 88.7% | ~90% | 90.0% | 88.6% | 84.0% | 88.5% | N/A |
| HumanEval (Code) | 90.2% | ~93% | 89.0% | 89.0% | 81.0% | 82.6% | N/A |