
Qwen3 VL 235B A22B (Reasoning)

Alibaba · Qwen · Open Weight · Apache 2.0 · Commercial OK

Description

Qwen3-VL-235B-A22B-Thinking is the most powerful vision-language model in the Qwen series: a 236B-parameter MoE model built for reasoning-enhanced multimodal understanding. Key capabilities:

- Visual Agent: operates PC/mobile GUIs, recognizes on-screen elements, invokes tools.
- Visual Coding: generates Draw.io/HTML/CSS/JS from images and videos.
- Advanced Spatial Perception: 2D and 3D grounding for spatial reasoning and embodied AI.
- Long Context & Video Understanding: native 256K context, expandable to 1M; handles hours-long video with second-level indexing.
- Enhanced Multimodal Reasoning: excels in STEM/math with causal analysis.
- Upgraded Visual Recognition: celebrities, anime, products, landmarks, flora/fauna.
- Expanded OCR: 32 languages; robust to low light, blur, and tilt.

Architecture innovations include Interleaved-MRoPE positional embeddings, DeepStack multi-level ViT feature fusion, and Text-Timestamp Alignment for precise video temporal modeling.
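Most providers serve this model behind an OpenAI-compatible chat API where images and text are mixed as content parts in a single user message. The sketch below only builds such a request payload (no network call); the model ID string and the image URL are assumptions for illustration, so check your provider's documentation for the exact identifier.

```python
# Sketch of a multimodal chat-completions payload in the OpenAI-compatible
# style many providers use. The model ID below is an assumed example.

def build_vision_request(model: str, image_url: str, question: str) -> dict:
    """Build a chat payload mixing one image and one text question."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }],
    }

payload = build_vision_request(
    "qwen3-vl-235b-a22b-thinking",    # assumed model ID; verify with your provider
    "https://example.com/chart.png",  # placeholder image URL
    "What trend does this chart show?",
)
print(payload["messages"][0]["content"][0]["type"])  # image_url
```

The same content-parts structure extends to video understanding on providers that accept video inputs, typically via a different part type.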

Release Date
2025-09-23
Parameters
236.0B
Context Length
131K
Modalities
image, text, video

Capability Radar

general: 42
coding: 38
reasoning: 86
science (est.): 51
agents: 70
multimodal: 100

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain              Rank   Score  Source
Agents & Tools       #24    66.0  LS
Code Ranking        #171    47.0  AA
General Ranking     #146    59.0  AA
Math Reasoning       #49    89.0  AA
Multimodal Ranking   #64    67.0  LS
Reasoning            #37    75.0  LS
Science             #137    56.0  AA

Benchmark Scores (LLM Stats)

3D

Objectron: 0.71 / 100 (SR)
BLINK: 67.1% (SR)
ARKitScenes: 0.54 / 100 (SR)
SUNRGBD: 0.35 / 100 (SR)
Hypersim: 0.11 / 100 (SR)

Agents

SIFO: 0.77 / 100 (SR)
BFCL-v3: 71.9% (SR)
SIFO-Multiturn: 0.71 / 100 (SR)
OSWorld-G: 0.68 / 100 (SR)
OSWorld: 38.1% (SR)

Chemistry

SuperGPQA: 64.3% (SR)

Code

Design2Code: 0.93 / 100 (SR)

Communication

MM-MT-Bench: 8.50 / 100 (SR)
WritingBench: 86.7% (SR)
Multi-IF: 79.1% (SR)

Creativity

Creative Writing v3: 85.7% (SR)

Embodied

EmbSpatialBench: 0.84 / 100 (SR)
RoboSpatialHome: 0.74 / 100 (SR)

Factuality

SimpleQA: 44.4% (SR)

Finance

MMLU: 90.6% (SR)
MMLU-Pro: 83.8% (SR)
MMLU-ProX: 80.6% (SR)

General

MMLU-Redux: 93.7% (SR)
IFEval: 88.2% (SR)
MMMU (val): 80.6% (SR)
Include: 80.0% (SR)
LiveBench 2024-11-25: 79.6% (SR)
MMStar: 78.7% (SR)
LiveCodeBench v6: 70.1% (SR)
MMMU-Pro: 69.3% (SR)
SimpleVQA: 0.61 / 100 (SR)

Grounding

ScreenSpot: 95.4% (SR)
RefCOCO-avg: 0.92 / 100 (SR)
RefSpatialBench: 0.70 / 100 (SR)
ScreenSpot Pro: 61.8% (SR)

Healthcare

VideoMMMU: 80.0% (SR)

Image To Text

OCRBench: 87.5% (SR)
OCRBench-V2 (en): 66.8% (SR)
OCRBench-V2 (zh): 63.5% (SR)

Instruction Following

MIABench: 0.93 / 100 (SR)

Language

CharadesSTA: 63.5% (SR)

Long Context

MLVU: 83.8% (SR)
LVBench: 63.6% (SR)
MMLongBench-Doc: 0.56 / 100 (SR)

Math

AIME 2025: 89.7% (SR)
MathVista-Mini: 85.8% (SR)
MathVerse-Mini: 0.85 / 100 (SR)
HMMT25: 77.4% (SR)
MathVision: 74.6% (SR)
Humanity's Last Exam: 13.6% (SR)

Multimodal

DocVQA (test): 96.5% (SR)
MMBench-V1.1: 90.6% (SR)
InfoVQA (test): 89.5% (SR)
AI2D: 89.2% (SR)
CC-OCR: 81.5% (SR)
MuirBench: 80.1% (SR)
VideoMME (w/o sub.): 79.0% (SR)
CharXiv-R: 66.1% (SR)
VisuLogic: 0.34 / 100 (SR)
ZEROBench-Sub: 0.28 / 100 (SR)
ZEROBench: 0.04 / 100 (SR)

Reasoning

ZebraLogic: 97.3% (SR)
CountBench: 0.94 / 100 (SR)
Hallusion Bench: 66.7% (SR)
ERQA: 52.5% (SR)

Spatial Reasoning

RealWorldQA: 81.3% (SR)

Vision

ODinW: 43.2% (SR)

AA Evaluation Indices

Math Index: 88.3
Intelligence Index: 27.6
Coding Index: 20.9
AIME 25: 0.9
MMLU-Pro: 0.8
GPQA: 0.8
LiveCodeBench: 0.6
LCR: 0.6
IFBench: 0.6
Tau2: 0.5
SciCode: 0.4
TerminalBench Hard: 0.1
HLE: 0.1

LLM Stats Category Scores

Communication: 3
Multimodal: 100
Writing: 90
Creativity: 90
Structured Output: 80
Text-to-image: 80
Video: 80
Embodied: 80
Finance: 80
Grounding: 80
Healthcare: 80
Instruction Following: 80
Language: 80
Legal: 80
Math: 80
Spatial Reasoning: 70
Tool Calling: 70
Vision: 70
General: 70
Image To Text: 70
Long Context: 70
Reasoning: 70
Agents: 60
Chemistry: 60
Economics: 60
Physics: 60
3D: 40
Factuality: 40

Pricing

Input Price: $0.84 / 1M tokens
Output Price: $6.175 / 1M tokens
Blended Price (3:1): $2.174 / 1M tokens
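The blended price is a weighted average of input and output prices at a 3:1 input-to-output token ratio. A minimal sketch of the calculation, using the card's per-million-token prices:

```python
# Blended price: weighted average of input/output prices per 1M tokens.
# The 3:1 weighting matches the "Blended Price (3:1)" row on this card.

def blended_price(input_price: float, output_price: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted average price per 1M tokens for a given input:output mix."""
    total = input_weight + output_weight
    return (input_weight * input_price + output_weight * output_price) / total

price = blended_price(0.84, 6.175)  # prices from this card
print(round(price, 3))  # 2.174
```

With the card's prices, (3 × 0.84 + 6.175) / 4 = 2.17375, which rounds to the listed $2.174 / 1M tokens.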

Speed

Tokens/sec: 30.0
Time to First Token: 1.35 s
Time to Answer: 68.10 s
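These three numbers are roughly consistent under a simple latency model: time to answer ≈ time to first token + output tokens ÷ throughput. This decomposition is an assumption about how the measurement works, not LS's stated methodology, but it lets us back out the approximate output length of the measurement prompt:

```python
# Rough latency model (an assumption, not LS's documented methodology):
# time_to_answer ≈ time_to_first_token + output_tokens / tokens_per_sec

TTFT = 1.35             # seconds, from this card
THROUGHPUT = 30.0       # tokens/s, from this card
TIME_TO_ANSWER = 68.10  # seconds, from this card

# Implied output length of the measurement under this model:
implied_tokens = (TIME_TO_ANSWER - TTFT) * THROUGHPUT
print(implied_tokens)
```

Under this model the measurement would correspond to roughly 2,000 output tokens, a plausible length for a reasoning-mode response with a visible thinking trace.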

Available Providers

(prices in LS internal units)

Provider    Input Price  Output Price
DeepInfra          450K          3.5M
Novita             980K          4.0M

External Sources