Qwen3 235B A22B 2507 (Reasoning)
Description
Qwen3-235B-A22B-Thinking-2507 is a state-of-the-art thinking-enabled Mixture-of-Experts (MoE) model with 235B total parameters, of which 22B are activated per token. It has 94 layers and 128 experts (8 activated per token), and natively supports a 262,144-token (256K) context length. This version delivers significantly improved reasoning performance, achieving state-of-the-art results among open-source thinking models on logical reasoning, mathematics, science, coding, and academic benchmarks. Key enhancements include markedly better general capabilities (instruction following, tool usage, text generation), stronger 256K long-context understanding, and increased thinking depth. The model supports only thinking mode and automatically wraps its reasoning in `<think>` tags.
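Since every completion includes the reasoning inside `<think>` tags, downstream code usually needs to separate the chain of thought from the final answer. A minimal sketch of such a splitter (the helper name `split_thinking` is illustrative, not part of any Qwen API; it also handles the case where only a closing `</think>` appears because the opening tag was supplied by the chat template):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a thinking-mode completion into (reasoning, answer).

    Handles both a full <think>...</think> block and completions that
    contain only a closing </think> (opening tag emitted elsewhere,
    e.g. by the chat template).
    """
    m = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if m:
        return m.group(1).strip(), text[m.end():].strip()
    # No opening tag: everything before </think> is reasoning.
    head, sep, tail = text.partition("</think>")
    if sep:
        return head.strip(), tail.strip()
    # No tags at all: treat the whole text as the answer.
    return "", text.strip()

out = "<think>262,144 tokens is 256K.</think>The native context is 256K tokens."
reasoning, answer = split_thinking(out)
```

Stripping the reasoning this way is also useful when chaining turns, since prior reasoning is typically not fed back into the conversation history.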
Capability Radar
Note: the Science axis uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 10 | 72.0 | LS |
| Code Ranking | 123 | 55.0 | AA |
| General Ranking | 153 | 58.0 | AA |
| Math Reasoning | 19 | 95.0 | AA |
| Reasoning | 98 | 33.0 | LS |
| Science | 100 | 62.0 | AA |
Sources: LS = LLM Stats; AA = Artificial Analysis.
Benchmark Scores (LLM Stats)
(Per-category score chart; the scores themselves were not captured in this export. Categories: Agents, Biology, Chemistry, Code, Communication, Creativity, Finance, General, Math, Reasoning.)
(Additional charts not captured in this export: AA Evaluation Indices, LLM Stats Category Scores, Pricing, Speed.)
Available Providers
Prices are given in LS internal units.
| Provider | Input Price | Output Price |
|---|---|---|
| Fireworks | 300K | 3.0M |
| Novita | 300K | 3.0M |
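The table lists prices in LS internal units, which are not defined on this page. A minimal cost estimator under one possible (unconfirmed) reading, that the units are micro-dollars per one million tokens, so 300K would correspond to $0.30 per 1M input tokens and 3.0M to $3.00 per 1M output tokens:

```python
def cost_usd(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Estimate request cost in USD from LS internal-unit prices.

    ASSUMPTION (not confirmed by the source): internal units are
    micro-dollars per 1M tokens, i.e. 300K -> $0.30 per 1M tokens.
    """
    # micro-dollars per 1M tokens -> dollars per single token
    usd_per_input_token = in_price / 1e6 / 1e6
    usd_per_output_token = out_price / 1e6 / 1e6
    return (input_tokens * usd_per_input_token
            + output_tokens * usd_per_output_token)

# Example: 10K input + 2K output tokens at the listed 300K / 3.0M rates
estimate = cost_usd(10_000, 2_000, 300_000, 3_000_000)
```

If the unit assumption is wrong, only the two division factors change; the per-token accounting is the same.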