DeepSeek R1 Distill Llama 70B
Tags: DeepSeek · Llama · Open Weight · MIT License (Commercial OK)
Description
DeepSeek-R1 is DeepSeek's first-generation reasoning model, built atop DeepSeek-V3 (671B total parameters, 37B activated per token) and trained with large-scale reinforcement learning (RL) to strengthen its chain-of-thought and reasoning capabilities. This 70B model is a distilled variant: a dense Llama-based model fine-tuned on reasoning data generated by DeepSeek-R1, delivering strong performance on math, code, and multi-step reasoning tasks.
Release Date
2025-01-20
Parameters
70.6B
Context Length
131K
Modalities
text
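The 131K context length bounds the combined prompt and completion size. A minimal budgeting sketch, assuming 131K means 131,072 tokens and using a hypothetical prompt token count (real counts come from the model's tokenizer):

```python
# Sketch: fitting a request into the model's 131K-token context window.
CONTEXT_LIMIT = 131_072  # assumed exact value behind the "131K" figure

def max_output_tokens(prompt_tokens: int, limit: int = CONTEXT_LIMIT) -> int:
    """Tokens left for the completion after the prompt is counted."""
    return max(limit - prompt_tokens, 0)

print(max_output_tokens(100_000))  # 31072 tokens remain for the answer
```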
Capability Radar
| Axis | Score |
|---|---|
| General | 34 |
| Coding | 19 |
| Reasoning | 62 |
| Science (est.) | 30 |
| Agents | 0 |
| Multimodal | 0 |
Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Code Ranking | 365 | 17.0 | AA |
| General Ranking | 323 | 34.0 | AA |
| Math Reasoning | 136 | 65.0 | AA |
| Science | 326 | 33.0 | AA |
Benchmark Scores (LLM Stats)
| Category | Benchmark | Score |
|---|---|---|
| Biology | GPQA | 65.2% (SR) |
| Code | LiveCodeBench | 57.5% (SR) |
| Math | MATH-500 | 94.5% (SR) |
| Math | AIME 2024 | 86.7% (SR) |
AA Evaluation Indices
| Metric | Score |
|---|---|
| Math Index | 53.7 |
| Intelligence Index | 16.0 |
| Coding Index | 11.4 |
| MATH-500 | 0.9 |
| MMLU-Pro | 0.8 |
| AIME | 0.7 |
| AIME 25 | 0.5 |
| GPQA | 0.4 |
| SciCode | 0.3 |
| IFBench | 0.3 |
| LiveCodeBench | 0.3 |
| Tau2 | 0.2 |
| LCR | 0.1 |
| Terminal-Bench Hard | 0.0 |
LLM Stats Category Scores
| Category | Score |
|---|---|
| Math | 90 |
| Reasoning | 80 |
| Biology | 70 |
| Chemistry | 70 |
| Physics | 70 |
| Code | 60 |
| General | 60 |
Pricing
Input Price: $0.70 / 1M tokens
Output Price: $1.05 / 1M tokens
Blended Price (3:1 input:output): $0.787 / 1M tokens
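The blended price is a weighted average of the input and output prices at the stated 3:1 input-to-output token ratio, which can be checked directly:

```python
# Blended price at a 3:1 input:output token ratio.
input_price = 0.70   # $ per 1M input tokens
output_price = 1.05  # $ per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.4f} / 1M tokens")  # $0.7875, shown as $0.787 on the card
```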
Speed
Output Speed: 43.5 tokens/s
Time to First Token: 0.38 s
Time to Answer: 46.36 s
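These three figures are consistent under a simple latency model: end-to-end time is the time to first token plus output tokens divided by throughput. A sketch, assuming the 46.36 s figure corresponds to roughly 2,000 output tokens (an assumption, not stated on the card):

```python
# Rough end-to-end latency model: TTFT + output_tokens / throughput.
TTFT = 0.38  # time to first token, seconds
TPS = 43.5   # output throughput, tokens/s

def time_to_answer(output_tokens: int) -> float:
    """Estimated seconds from request to full answer."""
    return TTFT + output_tokens / TPS

# ~2,000 output tokens reproduces the card's Time to Answer figure.
print(round(time_to_answer(2000), 2))  # 46.36
```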
Available Providers
No provider data available.