Qwen2.5 Coder Instruct 32B

Alibaba · Qwen · Open Weight · Apache 2.0 · Commercial OK

Description

Qwen2.5-Coder is a specialized coding model trained on 5.5 trillion tokens of code data, supporting 92 programming languages with a 128K context window. It excels in code generation, completion, repair, and multi-language programming tasks while maintaining strong performance in mathematics and general capabilities.

Release Date: 2024-11-11
Parameters: 32.0B
Context Length: 32K (the 128K figure in the description refers to YaRN-extended context)
Modalities: text

Capability Radar

general: 27
coding: 29
reasoning: 37
science: 29 (est.)
agents: 0
multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain            Rank    Score   Source
Code              #256    32.0    AA
General           #339    33.0    AA
Math Reasoning    #208    45.0    AA
Reasoning         #23     83.0    LS
Science           #361    30.0    AA

Benchmark Scores (LLM Stats)

Code

HumanEval: 92.7% (SR)
LiveCodeBench: 31.4% (SR)

Finance

MMLU: 75.1% (SR)
TruthfulQA: 54.2% (SR)
MMLU-Pro: 50.4% (SR)
TheoremQA: 43.1% (SR)

General

MBPP: 0.90 / 100 (SR)
MMLU-Redux: 77.5% (SR)
ARC-C: 70.5% (SR)
BigCodeBench-Full: 49.6% (SR)
BigCodeBench-Hard: 27.0% (SR)

Language

Winogrande: 80.8% (SR)

Math

GSM8k: 91.1% (SR)
MATH: 57.2% (SR)

Reasoning

HellaSwag: 83.0% (SR)

AA Evaluation Indices

Intelligence Index: 12.9
MATH 500: 0.8
MMLU-Pro: 0.6
GPQA: 0.4
LiveCodeBench: 0.3
SciCode: 0.3
AIME: 0.1
HLE: 0.0

LLM Stats Category Scores (LS internal units)

Language: 70
Math: 70
Reasoning: 70
Code: 60
Finance: 60
General: 60
Healthcare: 60
Legal: 60
Physics: 40

Pricing

Input Price: Free
Output Price: Free
Blended Price (3:1): Free
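The blended price weights input and output token prices at the 3:1 ratio named above. A minimal sketch of that weighting, assuming the common convention of three parts input to one part output (the non-zero prices below are hypothetical; this model is listed as free on all three lines):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend per-token prices at a 3:1 input:output ratio.

    Assumes the usual convention: three units of input tokens are
    weighted against one unit of output tokens.
    """
    return (3 * input_price + 1 * output_price) / 4

# This model is listed as free, so the blend is also free:
print(blended_price(0.0, 0.0))   # 0.0

# Hypothetical prices (per 1M tokens) to show the weighting:
print(blended_price(1.0, 3.0))   # (3*1.0 + 3.0) / 4 = 1.5
```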

Speed

Tokens/sec: 0.0
Time to First Token: 0.00s
Time to Answer: 0.00s
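The speed figures above follow the standard streaming metrics: time to first token (TTFT) is the delay before the first token arrives, and tokens/sec is throughput over the rest of the stream. A minimal sketch computing both from per-token arrival timestamps (the helper name and timestamps are illustrative, not from any provider API):

```python
def stream_metrics(start: float, token_times: list[float]) -> tuple[float, float]:
    """Compute (TTFT, tokens/sec) from a request start time and
    per-token arrival timestamps, all in seconds.

    TTFT is the gap from request start to the first token; throughput
    counts the tokens after the first over the time they took.
    """
    ttft = token_times[0] - start
    duration = token_times[-1] - token_times[0]
    tokens_per_sec = (len(token_times) - 1) / duration if duration > 0 else 0.0
    return ttft, tokens_per_sec

# Illustrative run: request at t=0.0, first token after 0.5s,
# then one token every 0.1s.
ttft, tps = stream_metrics(0.0, [0.5, 0.6, 0.7, 0.8, 0.9])
print(ttft, tps)  # TTFT 0.5s, ~10 tokens/s
```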

Available Providers

No provider data available

External Sources