
Qwen2.5 Coder Instruct 7B

AlibabaQwenOpen WeightApache 2.0 · Commercial OK

Description

Qwen2.5-Coder is a specialized coding model trained on 5.5 trillion tokens of code-heavy data and supporting 92 programming languages. The series supports context windows up to 128K tokens, though this listing reports 33K for the 7B instruct release. It excels at code generation, completion, and repair while maintaining strong performance in math and general tasks, and performs well on multi-language coding and code-reasoning benchmarks.

Release Date: 2024-09-19
Parameters: 7.0B
Context Length: 33K
Modalities: text

Capability Radar

general: 20 · coding: 13 · reasoning: 29 · science (est.): 21 · agents: 0 · multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain            Rank    Score   Source
Code              #399    14.0    AA
General           #424    23.0    AA
Math Reasoning    #250    35.0    AA
Reasoning         #58     63.0    LS
Science           #418    21.0    AA

Benchmark Scores (LLM Stats)

Code

HumanEval: 88.4% (SR)
Aider: 55.6% (SR)
LiveCodeBench: 18.2% (SR)

Finance

MMLU-Base: 68.0% (SR)
MMLU: 67.6% (SR)
TruthfulQA: 50.6% (SR)
MMLU-Pro: 40.1% (SR)
TheoremQA: 34.0% (SR)

General

MBPP: 0.83 / 100 (SR)
MMLU-Redux: 66.6% (SR)
ARC-C: 60.9% (SR)
BigCodeBench: 41.0% (SR)

Language

Winogrande: 72.9% (SR)

Math

GSM8k: 83.9% (SR)
MATH: 46.6% (SR)
STEM: 34.0% (SR)

Reasoning

HellaSwag: 76.8% (SR)
CRUXEval-Input-CoT: 56.5% (SR)
CRUXEval-Output-CoT: 56.0% (SR)

AA Evaluation Indices

Intelligence Index: 10.0
MATH-500: 0.7
MMLU-Pro: 0.5
GPQA: 0.3
SciCode: 0.1
LiveCodeBench: 0.1
AIME: 0.1
HLE: 0.0

LLM Stats Category Scores

General: 60
Language: 60
Math: 60
Reasoning: 60
Code: 50
Finance: 50
Healthcare: 50
Legal: 50
Physics: 30

Pricing

Input Price: Free
Output Price: Free
Blended Price (3:1): Free
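
The "3:1" blend above presumably weights input tokens three times as heavily as output tokens, a common convention for comparing per-token pricing. A minimal sketch of that weighted average (the 3:1 interpretation is an assumption based on the label, not confirmed by this page):

```python
def blended_price(input_price: float, output_price: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of per-token prices.

    A 3:1 blend weights input tokens 3x output tokens
    (assumed from the "(3:1)" label on this page).
    """
    total_weight = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total_weight

# This model is listed as free, so the blend is trivially 0:
print(blended_price(0.0, 0.0))  # 0.0

# With hypothetical prices of $0.30/M input and $0.60/M output tokens:
print(blended_price(0.30, 0.60))  # 0.375
```

For a free model every term is zero, so the formula only matters when comparing against paid alternatives.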

Speed

Tokens/sec: 0.0 tokens/s
Time to First Token: 0.00 s
Time to Answer: 0.00 s
(0.0 values indicate no measured data is reported for this model)

Available Providers

(LS internal units)

No provider data available

External Sources