DeepSeek V4 Pro (Reasoning, Max Effort)
Description
DeepSeek-V4-Pro-Max is the maximum reasoning effort mode of DeepSeek-V4-Pro, a 1.6T-parameter MoE model with 49B activated parameters and a 1M-token context window. It introduces a hybrid attention architecture combining Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) for dramatically improved long-context efficiency, requiring only 27% of single-token inference FLOPs and 10% of KV cache compared with DeepSeek-V3.2 at 1M-token context. The model also incorporates Manifold-Constrained Hyper-Connections (mHC) for stable signal propagation and is trained with the Muon optimizer for faster convergence. Pre-trained on more than 32T tokens, V4-Pro-Max significantly advances open-source knowledge capabilities, achieves top-tier performance on coding benchmarks, and narrows the gap with leading closed-source models on reasoning and agentic tasks.
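The stated efficiency ratios can be turned into quick back-of-the-envelope numbers. The sketch below simply scales a normalized DeepSeek-V3.2 baseline by the 27% FLOPs and 10% KV-cache figures quoted above; the baseline values are placeholders, not real measurements.

```python
# Illustrative arithmetic from the ratios stated on this page:
# 27% of single-token inference FLOPs and 10% of KV cache versus
# DeepSeek-V3.2 at 1M-token context. Baseline inputs are normalized
# placeholders, not measured values.

def relative_cost(baseline_flops: float, baseline_kv: float,
                  flops_ratio: float = 0.27, kv_ratio: float = 0.10):
    """Scale a V3.2 baseline by the published V4-Pro-Max ratios."""
    return baseline_flops * flops_ratio, baseline_kv * kv_ratio

# With both baselines normalized to 1.0, the outputs are just the ratios.
flops, kv = relative_cost(1.0, 1.0)
print(flops, kv)  # → 0.27 0.1
```

So at 1M-token context, a request that previously needed a full KV cache would need only a tenth of that memory under the quoted figures, which is the main lever behind the long-context pricing and speed claims below.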
Capability Radar Chart
Science: when no dedicated science benchmark is available, the score is estimated using reasoning capability as a proxy.
Leaderboard Rankings
Benchmark Scores (LLM Stats)
Agents
Biology
Code
Factuality
Finance
General
Math
AA Evaluation Index
LLM Stats Category Scores
Pricing
Speed
Available Providers
(Prices in LS internal pricing units)

| Provider | Input Price | Output Price |
|---|---|---|
| DeepSeek | 1.7M | 3.5M |
| DeepInfra | 1.7M | 3.5M |
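The table above can be used for a rough per-request cost estimate. The sketch below assumes the listed prices are LS internal pricing units per 1M tokens and that "M" is part of the unit label; both the denominator and the `request_cost` helper are assumptions for illustration, not documented behavior of any provider API.

```python
# Rough cost sketch, ASSUMING the table's "1.7M" / "3.5M" values are
# per-1M-token prices in LS internal pricing units ("M"). The page does
# not spell out the denominator, so treat the result as illustrative.

PRICES = {
    # provider -> (input price per 1M tokens, output price per 1M tokens)
    "DeepSeek":  (1.7, 3.5),
    "DeepInfra": (1.7, 3.5),
}

def request_cost(provider: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request, in the table's "M" units (hypothetical helper)."""
    in_price, out_price = PRICES[provider]
    return (input_tokens / 1_000_000) * in_price \
         + (output_tokens / 1_000_000) * out_price

# Example: an 800k-token prompt with a 20k-token reply.
print(request_cost("DeepSeek", 800_000, 20_000))
```

Under these assumptions the example request costs 0.8 × 1.7 + 0.02 × 3.5 = 1.43 "M" units; since both listed providers share the same rates, the provider choice does not change the result here.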