Actual OpenRouter prices + 2026 agentic benchmarks

OpenAI GPT-5.5

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| OpenAI | $5.00 | $30.00 | $35.00 |
| Azure | $5.00 | $30.00 | $35.00 |

Claude Opus 4.7

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| — | $5.00 | $25.00 | $30.00 |
| Bedrock | $5.00 | $25.00 | $30.00 |
| Anthropic | $5.00 | $25.00 | $30.00 |

Claude Sonnet 4.6

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| Anthropic | $3.00 | $15.00 | $18.00 |
| Bedrock | $3.00 | $15.00 | $18.00 |
| — | $3.00 | $15.00 | $18.00 |
| Azure | $3.00 | $15.00 | $18.00 |

DeepSeek V4 Pro

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| DeepSeek | $0.435 | $0.870 | $1.30 |
| GMICloud | $1.39 | $2.78 | $4.18 |
| AtlasCloud | $1.70 | $3.40 | $5.10 |
| SiliconFlow | $1.74 | $3.48 | $5.22 |
| Together | $2.10 | $4.40 | $6.50 |

Qwen 3.6 Plus

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| Alibaba | $0.325 | $1.95 | $2.27 |

Kimi K2.6

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| Io Net | $0.740 | $3.49 | $4.23 |
| DeepInfra | $0.750 | $3.50 | $4.25 |
| Parasail | $0.800 | $3.50 | $4.30 |
| Moonshot | $0.950 | $4.00 | $4.95 |
| Cloudflare | $0.950 | $4.00 | $4.95 |
| Together | $1.20 | $4.50 | $5.70 |

MiniMax M2.7

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| Fireworks | $0.300 | $1.20 | $1.50 |
| Together | $0.300 | $1.20 | $1.50 |
| Minimax | $0.300 | $1.20 | $1.50 |
| SambaNova | $0.600 | $2.40 | $3.00 |

GLM-5.1

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| Chutes | $1.05 | $3.50 | $4.55 |
| DeepInfra | $1.05 | $3.50 | $4.55 |
| Z.AI | $1.40 | $4.40 | $5.80 |
| Together | $1.40 | $4.40 | $5.80 |
| SiliconFlow | $1.40 | $4.40 | $5.80 |
| Venice | $1.75 | $5.50 | $7.25 |

Grok 4.3

| Provider | Input ($/1M) | Output ($/1M) | Combined |
|---|---|---|---|
| xAI | $1.25 | $2.50 | $3.75 |
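
Prices on OpenRouter drift, so tables like the ones above are worth re-checking against the live `/api/v1/models` endpoint, which reports per-token prices as decimal strings (`pricing.prompt` / `pricing.completion`). A minimal sketch of the conversion to the $/1M figures used here; the sample payload below is illustrative, not real pricing data:

```python
# Convert OpenRouter-style per-token pricing into $/1M-token figures.
# Field names follow OpenRouter's public /api/v1/models response shape;
# the sample payload below is illustrative, not a real quote.

def per_million(models_payload: dict) -> dict:
    """Map model id -> (input $/1M, output $/1M, combined $/1M)."""
    table = {}
    for m in models_payload["data"]:
        p_in = float(m["pricing"]["prompt"]) * 1_000_000
        p_out = float(m["pricing"]["completion"]) * 1_000_000
        table[m["id"]] = (round(p_in, 4), round(p_out, 4), round(p_in + p_out, 4))
    return table

# Illustrative payload mirroring the schema (made-up values):
sample = {"data": [{"id": "example/model",
                    "pricing": {"prompt": "0.000003", "completion": "0.000015"}}]}
rates = per_million(sample)  # {'example/model': (3.0, 15.0, 18.0)}
```

In a real check you would fetch the payload with a GET to `https://openrouter.ai/api/v1/models` and diff the result against the tables.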

Agentic benchmarks (2026)

| Model | SWE-Ver | SWE Pro | MCP | Term | Fin | Coding | Notes |
|---|---|---|---|---|---|---|---|
| OpenAI GPT-5.5 | 82.6% | 58.6% | — | 82.7% | — | 1520 | Terminal-Bench 82.7%, Expert-SWE 73.1% |
| Claude Opus 4.7 | 87.6% | 64.3% | 77.3% | — | 64.4% | 1548 | #1 SWE-Pro 64.3%, MCP-Atlas 77.3%, GPQA 94.2% |
| Claude Sonnet 4.6 | 79.6% | 53.4% | 75.8% | — | 60.1% | 1530 | 79.6% SWE-Ver at ~50% of the Opus price. Arena 1530 |
| DeepSeek V4 Pro | ~66% | — | — | — | — | — | ~66% SWE-Ver at 10x cheaper than frontier |
| Qwen 3.6 Plus | — | — | — | — | — | — | No public agentic scores yet |
| Kimi K2.6 | ~70% | — | — | — | — | — | ~70% SWE-Ver. 100 sub-agents, 1500 parallel calls |
| MiniMax M2.7 | ~72.5% | — | — | — | — | — | ~72.5% SWE-Ver. Best overall value |
| GLM-5.1 | — | 58.4% | — | — | — | — | SWE-Pro 58.4%, best among non-OpenAI/Anthropic |
| Grok 4.3 | — | — | — | — | — | — | No public agentic benchmark scores yet |

• Claude Opus 4.7: #1 on SWE-bench Pro (64.3%), best tool invocation (MCP-Atlas 77.3%), +14% multi-step vs Opus 4.6. Premium pick for Codex, refactoring, features.
• Claude Sonnet 4.6: 79.6% SWE-Verified at ~50% of the Opus price. Ideal for daily work: PR reviews, fixes, small features.
• MiniMax M2.7: ~72.5% SWE-Ver at an unbeatable price. Autonomous coding + continuous improvement.
• DeepSeek V4 Pro: ~66% SWE-Ver at rock-bottom price. 1.6T MoE, 49B params. Exploration, questions.
• OpenAI GPT-5.5: Terminal-Bench 82.7% (best), Expert-SWE 73.1%. Shell commands, debugging, file work.

| Model | Combined /100K | Combined /1M | Recommended use |
|---|---|---|---|
| OpenAI GPT-5.5 | $3.50 | $35.00 | Ultra-premium |
| Claude Opus 4.7 | $3.00 | $30.00 | Premium |
| Claude Sonnet 4.6 | $1.80 | $18.00 | Premium |
| DeepSeek V4 Pro | $0.13 | $1.30 | Budget |
| Qwen 3.6 Plus | $0.23 | $2.27 | Budget |
| Kimi K2.6 | $0.42 | $4.23 | Everyday |
| MiniMax M2.7 | $0.15 | $1.50 | Budget |
| GLM-5.1 | $0.46 | $4.55 | Everyday |
| Grok 4.3 | $0.38 | $3.75 | Everyday |
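
The "Combined /1M" column is simply the input and output rates summed; real spend depends on the input:output mix of your traffic, which in agentic use is usually input-heavy. A quick sketch of the per-request arithmetic, with hypothetical token counts:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_per_m: float, out_per_m: float) -> float:
    """Dollar cost of one request given $/1M-token rates."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# Hypothetical agentic turn: 40K tokens of context in, 2K tokens out,
# using the Sonnet 4.6 and Opus 4.7 rates from the tables above.
sonnet = request_cost(40_000, 2_000, 3.00, 15.00)  # 0.12 + 0.03 ≈ $0.15
opus = request_cost(40_000, 2_000, 5.00, 25.00)    # 0.20 + 0.05 ≈ $0.25
```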
Recommended strategy:
• Default: Sonnet 4.6, for ~90% of tasks
• Complex: Opus 4.7, when Sonnet gets stuck
• Fallback: GPT-5.5, for shell work and debugging
• Budget: MiniMax/DeepSeek, for repetitive tasks
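
The four tiers above can be sketched as a routing function. The task labels and the escalation flag are hypothetical scaffolding, and the returned names are shorthand, not real OpenRouter model slugs:

```python
# Sketch of the default/complex/fallback/budget routing strategy above.
# Task labels and model names are illustrative shorthand (assumptions),
# not real OpenRouter slugs.

BUDGET_TASKS = {"batch-rename", "changelog", "lint-fix"}  # repetitive work
SHELL_TASKS = {"shell", "debug"}

def pick_model(task: str, sonnet_stuck: bool = False) -> str:
    if task in BUDGET_TASKS:
        return "minimax-m2.7"   # budget tier (or deepseek-v4-pro)
    if task in SHELL_TASKS:
        return "gpt-5.5"        # fallback tier: shell, debugging
    if sonnet_stuck:
        return "opus-4.7"       # escalate when Sonnet gets stuck
    return "sonnet-4.6"         # default: ~90% of tasks
```

In practice the `sonnet_stuck` signal would come from a failed attempt (tests still red, agent looping), not be passed up front.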