DeepSeek V4 Pro (Reasoning, Max Effort)
Description
DeepSeek-V4-Pro-Max is the maximum reasoning effort mode of DeepSeek-V4-Pro, a 1.6T-parameter MoE model with 49B activated parameters and a 1M-token context window. It introduces a hybrid attention architecture combining Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA) for dramatically improved long-context efficiency, requiring only 27% of single-token inference FLOPs and 10% of KV cache compared with DeepSeek-V3.2 at 1M-token context. The model also incorporates Manifold-Constrained Hyper-Connections (mHC) for stable signal propagation and is trained with the Muon optimizer for faster convergence. Pre-trained on more than 32T tokens, V4-Pro-Max significantly advances open-source knowledge capabilities, achieves top-tier performance in coding benchmarks, and bridges the gap with leading closed-source models on reasoning and agentic tasks.
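The efficiency claims above are ratios against DeepSeek-V3.2 at 1M-token context. A minimal sketch of what they imply in practice, using only the stated percentages — the baseline KV-cache size per token is a hypothetical placeholder, not a published figure:

```python
# Illustrative arithmetic for the stated efficiency ratios:
# 27% of single-token inference FLOPs and 10% of KV cache
# vs. DeepSeek-V3.2 at 1M-token context.

CONTEXT_TOKENS = 1_000_000

# Hypothetical baseline KV-cache footprint per token (placeholder value).
baseline_kv_bytes_per_token = 70 * 1024  # assumed 70 KiB/token

baseline_kv_total = CONTEXT_TOKENS * baseline_kv_bytes_per_token
v4_kv_total = baseline_kv_total * 0.10   # "10% of KV cache"

flops_ratio = 0.27                       # "27% of inference FLOPs"

print(f"Baseline KV cache: {baseline_kv_total / 2**30:.1f} GiB")
print(f"V4-Pro KV cache:   {v4_kv_total / 2**30:.1f} GiB")
print(f"FLOPs per token:   {flops_ratio:.0%} of baseline")
```

Whatever the true baseline is, the relative savings scale the same way: a 10x smaller KV cache is what makes the 1M-token window practical to serve.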
Capability Radar
When no dedicated science benchmark is available, the Science score is estimated using a reasoning proxy.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 28 | 64.0 | LS |
| Code Ranking | 19 | 81.0 | AA |
| General Ranking | 11 | 89.0 | AA |
| Science | 16 | 86.0 | AA |
Benchmark Scores (LLM Stats)
[Chart: per-category scores for Agents, Biology, Code, Factuality, Finance, General, and Math, shown as the AA evaluation index and the LLM Stats category scores]
Pricing
Speed
Available Providers
Prices are in LS internal units.
| Provider | Input Price | Output Price |
|---|---|---|
| DeepSeek | 1.7M | 3.5M |
| DeepInfra | 1.7M | 3.5M |
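A minimal sketch of estimating per-request cost from the table above, assuming the listed prices are per 1M tokens in the page's LS internal units; the function name and the example token counts are illustrative, not from any provider documentation:

```python
# Hypothetical cost estimate: prices assumed to be per 1M tokens,
# matching the "1.7M" / "3.5M" entries in the provider table.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float = 1.7,
                 out_price_per_m: float = 3.5) -> float:
    """Return the cost of one request, in the same units as the price table."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# e.g. a 120k-token prompt with an 8k-token answer:
cost = request_cost(120_000, 8_000)
print(f"{cost:.3f}")  # 0.204 + 0.028 = 0.232
```

Because output tokens cost roughly twice as much as input tokens here, long reasoning traces at max effort dominate the bill even for large prompts of comparable size.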