DeepSeek R1 Zero

DeepSeek · Open Weight · MIT License · Commercial OK

Description

DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning performance. Through RL, DeepSeek-R1-Zero naturally developed numerous powerful and interesting reasoning behaviors. However, it encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, DeepSeek introduced DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.

Release Date
2025-01-20
Parameters
671.0B
Context Length
No data available
Modalities
No data available

Capability Radar

general: 60
coding: 50
reasoning: 90
science (est.): 60
agents: 0
multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

No ranking data available

Benchmark Scores (LLM Stats)

Biology

GPQA: 73.3% (SR)

Code

LiveCodeBench: 50.0% (SR)

Math

MATH-500: 95.9% (SR)
AIME 2024: 86.7% (SR)
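The per-benchmark scores above can be combined into rough per-category aggregates. A minimal sketch in Python, assuming a simple unweighted mean within each category; the site's actual aggregation method is not stated, and the grouping below just follows the category headings on this page:

```python
# Benchmark scores as listed on this page (percent).
# Category grouping follows the page's own headings; the
# unweighted mean is an assumption, not the site's method.
SCORES = {
    "Biology": {"GPQA": 73.3},
    "Code": {"LiveCodeBench": 50.0},
    "Math": {"MATH-500": 95.9, "AIME 2024": 86.7},
}

def category_mean(category: str) -> float:
    """Unweighted mean of a category's benchmark scores, rounded to 1 decimal."""
    values = SCORES[category].values()
    return round(sum(values) / len(values), 1)

for name in SCORES:
    print(f"{name}: {category_mean(name)}")
```

Note that this gives 91.3 for Math, while the category table below lists 90, so the site presumably applies a different weighting or folds in additional signals.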

AA Evaluation Indices

No AA evaluation data available

LLM Stats Category Scores

Math: 90
Reasoning: 80
Biology: 70
Chemistry: 70
Physics: 70
General: 60
Code: 50

Pricing

No pricing data available

Speed

No speed data available

Available Providers


No provider data available

External Sources