Qwen3.5 27B (Reasoning)
Alibaba / Qwen · Open Weight · Apache 2.0 · Commercial OK
Description
Qwen3.5-27B is a multimodal dense foundation model with 27 billion parameters. It combines strong reasoning, coding, multilingual, long-context, and visual understanding performance in a production-friendly open-weight package with a native 262K context window.
Release date
2026-02-24
Parameters
27.0B
Context length
262K
Modalities
image, text, video
Capability radar
- general: 38
- coding: 36
- reasoning: 86
- science (est.): 57
- agents: 60
- multimodal: 80

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 51 | 57.0 | LS |
| Code Ranking | 84 | 65.0 | AA |
| General Ranking | 43 | 80.0 | AA |
| Multimodal Ranking | 60 | 70.0 | LS |
| Reasoning | 54 | 67.0 | LS |
| Science | 63 | 68.0 | AA |
Benchmark scores (LLM Stats)

3d
- SUNRGBD: 0.35 / 100
- Hypersim: 0.13 / 100

Agents
- τ²-bench: 79.0%
- BFCL-V4: 68.5%
- AndroidWorld_SR: 64.2%
- WideSearch: 61.1%
- BrowseComp: 61.0%
- FullStackBench (en): 60.1%
- TIR-Bench: 59.8%
- FullStackBench (zh): 57.4%
- OSWorld-Verified: 56.2%
- VITA-Bench: 41.9%
- Terminal-Bench 2.0: 41.6%
- DeepPlanning: 22.6%

Biology
- GPQA: 85.5%

Chemistry
- SuperGPQA: 65.6%

Code
- SWE-Bench Verified: 72.4%

Communication
- Multi-Challenge: 60.8%

Embodied
- EmbSpatialBench: 0.84 / 100

Finance
- MMLU-Pro: 86.1%
- MMLU-ProX: 82.2%

General
- IFEval: 95.0%
- MMLU-Redux: 93.2%
- C-Eval: 90.5%
- MAXIFE: 88.0%
- Global PIQA: 87.5%
- MMMLU: 85.9%
- MMMU: 82.3%
- Include: 81.6%
- MMStar: 81.0%
- LiveCodeBench v6: 80.7%
- IFBench: 76.5%
- MMMU-Pro: 75.0%
- LongBench v2: 60.6%
- NOVA-63: 58.1%
- SimpleVQA: 0.56 / 100

Grounding
- RefCOCO-avg: 0.91 / 100
- ScreenSpot Pro: 70.3%
- RefSpatialBench: 0.68 / 100

Healthcare
- VideoMMMU: 82.3%
- SlakeVQA: 80.0%
- MedXpertQA: 62.4%
- PMC-VQA: 62.4%

Image To Text
- OCRBench: 89.4%

Language
- LingoQA: 82.0%
- WMT24++: 77.6%

Long Context
- MLVU: 85.9%
- LVBench: 73.6%
- AA-LCR: 66.1%
- MMLongBench-Doc: 0.60 / 100

Math
- HMMT 2025: 92.0%
- HMMT25: 89.8%
- MathVista-Mini: 87.8%
- DynaMath: 87.7%
- MathVision: 86.0%
- CodeForces: 0.81 / 3000
- PolyMATH: 71.2%
- Humanity's Last Exam: 48.5%

Multimodal
- VLMsAreBlind: 96.9%
- V*: 93.7%
- AI2D: 92.9%
- MMBench-V1.1: 92.6%
- OmniDocBench 1.5: 88.9%
- VideoMME (w/ subtitles): 87.0%
- VideoMME (w/o subtitles): 82.8%
- CC-OCR: 81.0%
- CharXiv-R: 79.5%
- MVBench: 74.6%
- MMVU: 73.3%
- BabyVision: 44.6%
- ZEROBench-Sub: 0.36 / 100
- Nuscene: 15.2%
- ZEROBench: 0.10 / 100

Reasoning
- CountBench: 0.98 / 100
- HallusionBench: 70.0%
- BrowseComp-zh: 62.1%
- ERQA: 60.5%
- Seal-0: 47.2%
- OJBench: 40.1%

Spatial Reasoning
- RealWorldQA: 83.7%

Vision
- ODinW: 41.1%
AA evaluation indices
- Intelligence Index: 42.1
- Coding Index: 34.9
- Tau: 20.9
- GPQA: 0.9
- IFBench: 0.8
- LCR: 0.7
- SciCode: 0.4
- Terminal-Bench Hard: 0.3
- HLE: 0.2
LLM Stats category scores
- Biology: 90
- Instruction Following: 90
- Structured Output: 80
- Text-to-Image: 80
- Video: 80
- Chemistry: 80
- Embodied: 80
- Finance: 80
- General: 80
- Grounding: 80
- Image To Text: 80
- Language: 80
- Legal: 80
- Math: 80
- Physics: 80
- Spatial Reasoning: 70
- Vision: 70
- Economics: 70
- Frontend Development: 70
- Healthcare: 70
- Long Context: 70
- Multimodal: 70
- Reasoning: 70
- Tool Calling: 60
- Agents: 60
- Code: 60
- Communication: 60
- Search: 60
- Spatial: 20
- 3d: 20
Pricing
- Input price: $0.30 / 1M tokens
- Output price: $2.40 / 1M tokens
- Blended price (3:1): $0.825 / 1M tokens
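The 3:1 blended price is a token-weighted average of the input and output prices, assuming three input tokens per output token. A minimal sketch of the arithmetic (the function name is ours, not part of any pricing API):

```python
def blended_price(input_price: float, output_price: float,
                  input_ratio: float = 3.0, output_ratio: float = 1.0) -> float:
    """Token-weighted average price per 1M tokens for a given input:output mix."""
    total = input_ratio + output_ratio
    return (input_ratio * input_price + output_ratio * output_price) / total

# Figures listed above: $0.30 input, $2.40 output, 3:1 mix
print(round(blended_price(0.30, 2.40), 3))  # 0.825
```

This reproduces the listed $0.825 / 1M tokens: (3 × 0.30 + 1 × 2.40) / 4 = 0.825.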
Speed
- Throughput: 87.6 tokens/s
- Time to first token: 1.40 s
- Response time: 24.23 s
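The three speed figures fit a simple streaming-latency model: total response time is the time to first token plus the output length divided by throughput. This is our reading of how the numbers relate, and the ~2,000-token output length is inferred from the figures rather than stated on the page:

```python
def response_time(ttft_s: float, output_tokens: int, tokens_per_s: float) -> float:
    """End-to-end latency: time to first token plus streaming time for the rest."""
    return ttft_s + output_tokens / tokens_per_s

# The listed 24.23 s is consistent with a ~2,000-token output
# at 87.6 tokens/s after a 1.40 s first token:
print(round(response_time(1.40, 2000, 87.6), 2))  # 24.23
```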
Available providers (internal LS units)

| Provider | Input price | Output price |
|---|---|---|
| Novita | 300K | 2.4M |