
Qwen3.5 27B (Reasoning)

Alibaba · Qwen · Open Weight · Apache 2.0 · Commercial OK

Description

Qwen3.5-27B is a dense multimodal foundation model with 27 billion parameters. It combines strong reasoning, coding, multilingual, long-context, and visual-understanding performance in a production-friendly open-weight package with a native 262K-token context window.

Release date: 2026-02-24
Parameters: 27.0B
Context length: 262K
Modalities: image, text, video

Capability radar

general: 38
coding: 36
reasoning: 86
science (est.): 57
agents: 60
multimodal: 80

Science uses a reasoning proxy when dedicated science benchmarks are not available.

Rankings

Domain               Rank   Score   Source
Agents & Tools       #51    57.0    LS
Code Ranking         #84    65.0    AA
General Ranking      #43    80.0    AA
Multimodal Ranking   #60    70.0    LS
Reasoning            #54    67.0    LS
Science              #63    68.0    AA

Benchmark scores (LLM Stats)

3D

SUNRGBD: 0.35 / 100 (Aut.)
Hypersim: 0.13 / 100 (Aut.)

Agents

τ²-bench: 79.0% (Aut.)
BFCL-V4: 68.5% (Aut.)
AndroidWorld_SR: 64.2% (Aut.)
WideSearch: 61.1% (Aut.)
BrowseComp: 61.0% (Aut.)
FullStackBench (en): 60.1% (Aut.)
TIR-Bench: 59.8% (Aut.)
FullStackBench (zh): 57.4% (Aut.)
OSWorld-Verified: 56.2% (Aut.)
VITA-Bench: 41.9% (Aut.)
Terminal-Bench 2.0: 41.6% (Aut.)
DeepPlanning: 22.6% (Aut.)

Biology

GPQA: 85.5% (Aut.)

Chemistry

SuperGPQA: 65.6% (Aut.)

Code

SWE-Bench Verified: 72.4% (Aut.)

Communication

Multi-Challenge: 60.8% (Aut.)

Embodied

EmbSpatialBench: 0.84 / 100 (Aut.)

Finance

MMLU-Pro: 86.1% (Aut.)
MMLU-ProX: 82.2% (Aut.)

General

IFEval: 95.0% (Aut.)
MMLU-Redux: 93.2% (Aut.)
C-Eval: 90.5% (Aut.)
MAXIFE: 88.0% (Aut.)
Global PIQA: 87.5% (Aut.)
MMMLU: 85.9% (Aut.)
MMMU: 82.3% (Aut.)
Include: 81.6% (Aut.)
MMStar: 81.0% (Aut.)
LiveCodeBench v6: 80.7% (Aut.)
IFBench: 76.5% (Aut.)
MMMU-Pro: 75.0% (Aut.)
LongBench v2: 60.6% (Aut.)
NOVA-63: 58.1% (Aut.)
SimpleVQA: 0.56 / 100 (Aut.)

Grounding

RefCOCO-avg: 0.91 / 100 (Aut.)
ScreenSpot Pro: 70.3% (Aut.)
RefSpatialBench: 0.68 / 100 (Aut.)

Healthcare

VideoMMMU: 82.3% (Aut.)
SlakeVQA: 80.0% (Aut.)
MedXpertQA: 62.4% (Aut.)
PMC-VQA: 62.4% (Aut.)

Image To Text

OCRBench: 89.4% (Aut.)

Language

LingoQA: 82.0% (Aut.)
WMT24++: 77.6% (Aut.)

Long Context

MLVU: 85.9% (Aut.)
LVBench: 73.6% (Aut.)
AA-LCR: 66.1% (Aut.)
MMLongBench-Doc: 0.60 / 100 (Aut.)

Math

HMMT 2025: 92.0% (Aut.)
HMMT25: 89.8% (Aut.)
MathVista-Mini: 87.8% (Aut.)
DynaMath: 87.7% (Aut.)
MathVision: 86.0% (Aut.)
CodeForces: 0.81 / 3000 (Aut.)
PolyMATH: 71.2% (Aut.)
Humanity's Last Exam: 48.5% (Aut.)

Multimodal

VLMsAreBlind: 96.9% (Aut.)
V*: 93.7% (Aut.)
AI2D: 92.9% (Aut.)
MMBench-V1.1: 92.6% (Aut.)
OmniDocBench 1.5: 88.9% (Aut.)
VideoMME (w/ sub.): 87.0% (Aut.)
VideoMME (w/o sub.): 82.8% (Aut.)
CC-OCR: 81.0% (Aut.)
CharXiv-R: 79.5% (Aut.)
MVBench: 74.6% (Aut.)
MMVU: 73.3% (Aut.)
BabyVision: 44.6% (Aut.)
ZEROBench-Sub: 0.36 / 100 (Aut.)
Nuscene: 15.2% (Aut.)
ZEROBench: 0.10 / 100 (Aut.)

Reasoning

CountBench: 0.98 / 100 (Aut.)
Hallusion Bench: 70.0% (Aut.)
BrowseComp-zh: 62.1% (Aut.)
ERQA: 60.5% (Aut.)
Seal-0: 47.2% (Aut.)
OJBench: 40.1% (Aut.)

Spatial Reasoning

RealWorldQA: 83.7% (Aut.)

Vision

ODinW: 41.1% (Aut.)

AA evaluation indices

Intelligence Index: 42.1
Coding Index: 34.9
Tau2: 0.9
GPQA: 0.9
IFBench: 0.8
LCR: 0.7
SciCode: 0.4
TerminalBench Hard: 0.3
HLE: 0.2

LLM Stats category scores

Biology: 90
Instruction Following: 90
Structured Output: 80
Text-to-image: 80
Video: 80
Chemistry: 80
Embodied: 80
Finance: 80
General: 80
Grounding: 80
Image To Text: 80
Language: 80
Legal: 80
Math: 80
Physics: 80
Spatial Reasoning: 70
Vision: 70
Economics: 70
Frontend Development: 70
Healthcare: 70
Long Context: 70
Multimodal: 70
Reasoning: 70
Tool Calling: 60
Agents: 60
Code: 60
Communication: 60
Search: 60
Spatial: 20
3D: 20

Pricing

Input price: $0.30 / 1M tokens
Output price: $2.40 / 1M tokens
Blended price (3:1): $0.825 / 1M tokens
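The blended figure is a 3:1 weighted average of the input and output prices. A minimal sketch of that arithmetic, assuming the usual 3-input-tokens-per-output-token convention (the `blended_price` helper is illustrative, not part of any provider API):

```python
# Blended $/1M-token price for a workload with `ratio` input tokens
# per output token: a weighted average of the two listed prices.
def blended_price(input_per_m: float, output_per_m: float, ratio: float = 3.0) -> float:
    return (ratio * input_per_m + output_per_m) / (ratio + 1.0)

# With the listed prices: (3 * 0.30 + 2.40) / 4 = 0.825
print(round(blended_price(0.30, 2.40), 3))
```

This reproduces the $0.825 blended figure shown above.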

Speed

Throughput: 87.6 tokens/s
Time to first token: 1.40 s
Time to answer: 24.23 s
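The three speed figures fit a simple first-order latency model, total time ≈ time-to-first-token + output tokens / throughput. The ~2,000-token reply length below is inferred from the numbers above, not a published figure:

```python
# First-order latency model: total answer time is the time to first token
# plus decoding time at a constant tokens-per-second rate.
def time_to_answer(n_output_tokens: int, ttft_s: float = 1.40, tokens_per_s: float = 87.6) -> float:
    return ttft_s + n_output_tokens / tokens_per_s

# (24.23 - 1.40) * 87.6 ≈ 2000, so the measured answer was roughly
# 2,000 output tokens long (an inference from the figures above).
print(round(time_to_answer(2000), 2))  # ≈ 24.23
```

The same model lets you estimate latency for other reply lengths, e.g. a 500-token answer would take about 7 seconds at these rates.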

Available providers

(internal LS units)
Provider   Input price   Output price
Novita     300K          2.4M

External sources