Qwen3.5 122B A10B (Non-reasoning)
Alibaba · Qwen · Open Weight · Apache 2.0 · Commercial OK
Description
Qwen3.5-122B-A10B is a multimodal Mixture-of-Experts model with 122 billion total parameters and 10 billion activated parameters. It combines strong reasoning, coding, long-context, and visual understanding performance with production-friendly efficiency and a native 262K context window.
Release date
2026-02-24
Parameters
122.0B
Context length
262K
Modalities
image, text, video
Capability radar
| Capability | Score |
|---|---|
| general | 31 |
| coding | 32 |
| reasoning | 83 |
| science (est.) | 53 |
| agents | 60 |
| multimodal | 80 |

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 45 | 58.0 | LS |
| Code Ranking | 116 | 57.0 | AA |
| General Ranking | 121 | 63.0 | AA |
| Multimodal Ranking | 61 | 70.0 | LS |
| Reasoning | 53 | 68.0 | LS |
| Science | 108 | 59.0 | AA |
Benchmark scores (LLM Stats)
All scores below are automated ("Aut.") results.

| Category | Benchmark | Score |
|---|---|---|
| 3d | SUNRGBD | 0.36 / 100 |
| 3d | Hypersim | 0.13 / 100 |
| Agents | t2-bench | 79.5% |
| Agents | BFCL-V4 | 72.2% |
| Agents | AndroidWorld_SR | 66.4% |
| Agents | BrowseComp | 63.8% |
| Agents | FullStackBench (en) | 62.6% |
| Agents | WideSearch | 60.5% |
| Agents | FullStackBench (zh) | 58.7% |
| Agents | OSWorld-Verified | 58.0% |
| Agents | TIR-Bench | 53.2% |
| Agents | Terminal-Bench 2.0 | 49.4% |
| Agents | VITA-Bench | 33.6% |
| Agents | DeepPlanning | 24.1% |
| Biology | GPQA | 86.6% |
| Chemistry | SuperGPQA | 67.1% |
| Code | SWE-Bench Verified | 72.0% |
| Communication | Multi-Challenge | 61.5% |
| Embodied | EmbSpatialBench | 0.84 / 100 |
| Finance | MMLU-Pro | 86.7% |
| Finance | MMLU-ProX | 82.2% |
| General | MMLU-Redux | 94.0% |
| General | IFEval | 93.4% |
| General | C-Eval | 91.9% |
| General | Global PIQA | 88.4% |
| General | MAXIFE | 87.9% |
| General | MMMLU | 86.7% |
| General | MMMU | 83.9% |
| General | MMStar | 82.9% |
| General | Include | 82.8% |
| General | LiveCodeBench v6 | 78.9% |
| General | MMMU-Pro | 76.9% |
| General | IFBench | 76.1% |
| General | SimpleVQA | 0.62 / 100 |
| General | LongBench v2 | 60.2% |
| General | NOVA-63 | 58.6% |
| Grounding | RefCOCO-avg | 0.91 / 100 |
| Grounding | ScreenSpot Pro | 70.4% |
| Grounding | RefSpatialBench | 0.69 / 100 |
| Healthcare | VideoMMMU | 82.0% |
| Healthcare | SlakeVQA | 81.6% |
| Healthcare | MedXpertQA | 67.3% |
| Healthcare | PMC-VQA | 63.3% |
| Image To Text | OCRBench | 92.1% |
| Language | LingoQA | 80.8% |
| Language | WMT24++ | 78.3% |
| Long Context | MLVU | 87.3% |
| Long Context | LVBench | 74.4% |
| Long Context | AA-LCR | 66.9% |
| Long Context | MMLongBench-Doc | 0.59 / 100 |
| Math | HMMT 2025 | 91.4% |
| Math | HMMT25 | 90.3% |
| Math | MathVista-Mini | 87.4% |
| Math | MathVision | 86.2% |
| Math | DynaMath | 85.9% |
| Math | CodeForces | 0.85 / 3000 |
| Math | PolyMATH | 68.9% |
| Math | Humanity's Last Exam | 47.5% |
| Multimodal | VLMsAreBlind | 96.7% |
| Multimodal | AI2D | 93.3% |
| Multimodal | V* | 93.2% |
| Multimodal | MMBench-V1.1 | 92.8% |
| Multimodal | OmniDocBench 1.5 | 89.8% |
| Multimodal | VideoMME (w/ sub.) | 87.3% |
| Multimodal | VideoMME (w/o sub.) | 83.9% |
| Multimodal | CC-OCR | 81.8% |
| Multimodal | CharXiv-R | 77.2% |
| Multimodal | MVBench | 76.6% |
| Multimodal | MMVU | 74.7% |
| Multimodal | BabyVision | 40.2% |
| Multimodal | ZEROBench-Sub | 0.36 / 100 |
| Multimodal | Nuscene | 15.4% |
| Multimodal | ZEROBench | 0.09 / 100 |
| Reasoning | CountBench | 0.97 / 100 |
| Reasoning | BrowseComp-zh | 69.9% |
| Reasoning | Hallusion Bench | 67.6% |
| Reasoning | ERQA | 62.0% |
| Reasoning | Seal-0 | 44.1% |
| Reasoning | OJBench | 39.5% |
| Spatial Reasoning | RealWorldQA | 85.1% |
| Vision | ODinW | 44.5% |
AA evaluation indices
| Index | Score |
|---|---|
| Intelligence Index | 35.9 |
| Coding Index | 31.6 |
| Tau2 | 0.8 |
| GPQA | 0.8 |
| LCR | 0.6 |
| IFBench | 0.5 |
| SciCode | 0.4 |
| Terminal-Bench Hard | 0.3 |
| HLE | 0.1 |
LLM Stats category scores
| Category | Score |
|---|---|
| Biology | 90 |
| Structured Output | 80 |
| Text-to-image | 80 |
| Video | 80 |
| Chemistry | 80 |
| Embodied | 80 |
| Finance | 80 |
| General | 80 |
| Grounding | 80 |
| Healthcare | 80 |
| Image To Text | 80 |
| Instruction Following | 80 |
| Language | 80 |
| Legal | 80 |
| Math | 80 |
| Physics | 80 |
| Spatial Reasoning | 70 |
| Vision | 70 |
| Economics | 70 |
| Frontend Development | 70 |
| Long Context | 70 |
| Multimodal | 70 |
| Reasoning | 70 |
| Tool Calling | 60 |
| Agents | 60 |
| Code | 60 |
| Communication | 60 |
| Search | 60 |
| Spatial | 20 |
| 3d | 20 |
Pricing
Input price: $0.4 / 1M tokens
Output price: $3.2 / 1M tokens
Blended price (3:1): $1.1 / 1M tokens
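The blended price is the input and output prices mixed at a 3:1 input-to-output token ratio. A minimal sketch of that arithmetic (the helper name is ours; the prices and ratio come from this page):

```python
def blended_price(input_price: float, output_price: float, ratio: int = 3) -> float:
    """Blend per-1M-token prices, weighting input `ratio` times the output."""
    return (ratio * input_price + output_price) / (ratio + 1)

# $0.4 input, $3.2 output at a 3:1 mix -> $1.1 per 1M tokens
print(round(blended_price(0.4, 3.2), 2))  # 1.1
```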
Speed
Throughput: 146.3 tokens/s
Time to first token: 1.23 s
Time to answer: 1.23 s
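These two numbers give a back-of-the-envelope estimate of end-to-end latency: time to first token plus output length divided by throughput. A rough sketch (the function name is ours; real latency varies by provider and load):

```python
def estimated_latency(output_tokens: int,
                      ttft: float = 1.23,
                      tokens_per_sec: float = 146.3) -> float:
    """Approximate seconds until the full response: TTFT + decode time."""
    return ttft + output_tokens / tokens_per_sec

# ~1,000 output tokens at the measured rates -> about 8.1 s
print(round(estimated_latency(1000), 1))
```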
Available providers
(LS internal units)
| Provider | Input price | Output price |
|---|---|---|
| Novita | 400K | 3.2M |