AI Models Comparison
Compare popular large language models across providers, pricing, capabilities, and performance.
Last updated: 2025-02-11
| Feature | OpenAI | Anthropic | Google | Meta | Mistral AI | DeepSeek | Perplexity |
|---|---|---|---|---|---|---|---|
| **General** | | | | | | | |
| Release Date | May 2024 | May 2025 | Mar 2025 | Jul 2024 | Feb 2024 | Dec 2024 | Feb 2025 |
| Open Source | No | No | No | Yes | No | Yes | No |
| Parameters | Undisclosed | Undisclosed | Undisclosed | 405B | Undisclosed | 671B MoE | Undisclosed |
| **Context & Tokens** | | | | | | | |
| Max Context Window | 128K | 200K | 1M | 128K | 128K | 128K | 200K |
| Max Output Tokens | 16K | 32K | 65K | 4K | 8K | 8K | 8K |
| **Pricing (per 1M tokens)** | | | | | | | |
| Input Price | $2.50 | $15.00 | $1.25 | Free / Varies | $2.00 | $0.27 | $3.00 |
| Output Price | $10.00 | $75.00 | $10.00 | Free / Varies | $6.00 | $1.10 | $15.00 |
| **Capabilities** | | | | | | | |
| Vision (Image Input) | | | | | | | |
| Function / Tool Calling | | | | | | | |
| Code Generation | | | | | | | |
| Structured Output (JSON) | | | | | | | |
| System Prompts | | | | | | | |
| Streaming | | | | | | | |
| Fine-tuning Available | | | | | | | |
| **Benchmarks** | | | | | | | |
| MMLU Score | 88.7% | ~90% | 90.0% | 88.6% | 84.0% | 88.5% | N/A |
| HumanEval (Code) | 90.2% | ~93% | 89.0% | 89.0% | 81.0% | 82.6% | N/A |
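To make the pricing rows concrete, here is a minimal sketch of a per-request cost estimate built from the per-1M-token prices in the table. The `PRICING` dict, its provider-name keys, and the 10K-input / 2K-output workload are illustrative assumptions, not official model identifiers; Meta is omitted because its "Free / Varies" price depends on the hosting provider, and real invoices can differ with prompt caching, batching, and tiered discounts, which the table does not capture.

```python
# Minimal sketch: estimate per-request USD cost from the table's
# per-1M-token prices. Keys are provider names from the table, not
# official model identifiers; Meta is omitted ("Free / Varies").
PRICING = {
    "OpenAI":     (2.50, 10.00),
    "Anthropic":  (15.00, 75.00),
    "Google":     (1.25, 10.00),
    "Mistral AI": (2.00, 6.00),
    "DeepSeek":   (0.27, 1.10),
    "Perplexity": (3.00, 15.00),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Cost = input_tokens/1M * input price + output_tokens/1M * output price."""
    input_price, output_price = PRICING[provider]
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example workload (assumed): a 10K-token prompt with a 2K-token reply.
for provider in PRICING:
    print(f"{provider:>10}: ${request_cost(provider, 10_000, 2_000):.4f}")
```

At this example workload the spread is about 60x: DeepSeek lands near $0.005 per request and Anthropic near $0.30, which is why output-heavy workloads are dominated by the output price column rather than the input one.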