DeepSeek V4 Pro (Reasoning, Max Effort)
Description
DeepSeek-V4-Pro-Max is the maximum reasoning effort mode of DeepSeek-V4-Pro, a 1.6T-parameter MoE model with 49B activated parameters and a 1M-token context window. It introduces a hybrid attention architecture combining Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) for dramatically improved long-context efficiency, requiring only 27% of single-token inference FLOPs and 10% of KV cache compared with DeepSeek-V3.2 at 1M-token context. The model also incorporates Manifold-Constrained Hyper-Connections (mHC) for stable signal propagation and is trained with the Muon optimizer for faster convergence. Pre-trained on more than 32T tokens, V4-Pro-Max significantly advances open-source knowledge capabilities, achieves top-tier performance in coding benchmarks, and bridges the gap with leading closed-source models on reasoning and agentic tasks.
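The efficiency claim above is easiest to see as back-of-envelope arithmetic. The sketch below is illustrative only: the baseline per-token costs for DeepSeek-V3.2 are hypothetical placeholders, and only the stated ratios (27% of single-token inference FLOPs, 10% of KV cache at 1M-token context) come from the description.

```python
# Illustrative savings calculation; only the 0.27 / 0.10 ratios are from the model card.
CONTEXT_TOKENS = 1_000_000

# Hypothetical per-token baseline costs for DeepSeek-V3.2 (arbitrary units).
v32_flops_per_token = 100.0
v32_kv_units_per_token = 100.0

# Ratios stated for V4-Pro-Max relative to V3.2 at 1M-token context.
FLOPS_RATIO = 0.27
KV_RATIO = 0.10

v4_flops_per_token = v32_flops_per_token * FLOPS_RATIO
v4_kv_total = v32_kv_units_per_token * CONTEXT_TOKENS * KV_RATIO
v32_kv_total = v32_kv_units_per_token * CONTEXT_TOKENS

print(f"V4 per-token FLOPs: {v4_flops_per_token:.0f} (baseline {v32_flops_per_token:.0f})")
print(f"V4 KV cache at 1M ctx: {v4_kv_total:.2e} (baseline {v32_kv_total:.2e})")
```

At 1M tokens the KV-cache reduction dominates: a 10x smaller cache is what makes the long context practical to serve, while the FLOPs ratio governs per-token latency.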
Capability radar
Science uses a reasoning proxy when dedicated science benchmarks are not available.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 28 | 64.0 | LS |
| Code Ranking | 19 | 81.0 | AA |
| General Ranking | 11 | 89.0 | AA |
| Science | 16 | 86.0 | AA |
Benchmark scores (LLM Stats)
Agents
Biology
Code
Factuality
Finance
General
Math
AA evaluation indices
LLM Stats category scores
Pricing
Speed
Available providers
(Internal LS units)

| Provider | Input price | Output price |
|---|---|---|
| DeepSeek | 1.7M | 3.5M |
| DeepInfra | 1.7M | 3.5M |