
Kimi K2 0905

Kimi · Proprietary

Description

Kimi K2 0905 is the September update of Kimi K2 0711. It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It supports long-context inference up to 256k tokens, extended from the previous 128k. This update improves agentic coding with higher accuracy and better generalization across scaffolds, and enhances frontend coding with more aesthetic and functional outputs for web, 3D, and related tasks. The model is trained with a novel stack incorporating the MuonClip optimizer for stable large-scale MoE training.
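As a rough illustration of the sparsity described above, the active-parameter fraction follows directly from the stated figures. The 1T / 32B numbers come from this card; the arithmetic below is illustrative only and says nothing about Moonshot AI's actual routing.

```python
# Sketch: active-parameter fraction of the MoE model described above.
# Figures (1T total, 32B active per forward pass) are from the model card;
# the computation is simple illustrative arithmetic.
total_params = 1_000_000_000_000   # 1T total parameters
active_params = 32_000_000_000     # 32B active per forward pass

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # Active per token: 3.2%
```

In other words, each token activates only about 3.2% of the model's weights, which is what keeps per-token compute far below that of a dense 1T-parameter model.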

Release Date
2025-09-05
Parameters
1.0T
Context Length
262K
Modalities
text

Capability Radar

general: 43
coding: 39
reasoning: 61
science (est.): 47
agents: 0
multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain | Rank | Score | Source
Code Ranking | #140 | 52.0 | AA
General Ranking | #140 | 60.0 | AA
Math Reasoning | #156 | 58.0 | AA
Science | #199 | 48.0 | AA

Benchmark Scores (LLM Stats)

Biology

GPQA: 75.8% (SR)

Code

HumanEval: 94.5% (SR)

Finance

MMLU: 90.2% (SR)
MMLU-Pro: 82.5% (SR)

Math

MATH: 89.1% (SR)
AIME 2024: 72.0% (SR)

AA Evaluation Indices

Math Index: 57.3
Intelligence Index: 30.9
Coding Index: 25.9
MMLU-Pro: 0.8
GPQA: 0.8
Tau2: 0.7
LiveCodeBench: 0.6
AIME 25: 0.6
LCR: 0.5
IFBench: 0.4
SciCode: 0.3
TerminalBench Hard: 0.2
HLE: 0.1

LLM Stats Category Scores

Code: 90
Finance: 90
Healthcare: 90
Language: 90
Legal: 90
Biology: 80
Chemistry: 80
General: 80
Math: 80
Physics: 80
Reasoning: 80

Pricing

Input Price: $0.60 / 1M tokens
Output Price: $2.50 / 1M tokens
Blended Price (3:1): $1.075 / 1M tokens
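The blended figure follows from weighting input and output token prices 3:1, a common convention for summarizing API cost (the weighting is inferred from the "3:1" label on this page):

```python
# Blended price at a 3:1 input:output token ratio, using the
# per-million-token prices listed above.
input_price = 0.6    # $ per 1M input tokens
output_price = 2.5   # $ per 1M output tokens

blended = (3 * input_price + 1 * output_price) / 4
print(f"${blended:.3f} / 1M tokens")  # $1.075 / 1M tokens
```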

Speed

Tokens/sec: 19.3
Time to First Token: 1.70s
Time to Answer: 1.70s
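The throughput and first-token figures above can be combined into a rough end-to-end latency estimate for a streamed response. This is a simple first-token-plus-decode model; real serving latency varies with load, and the 500-token response length is an arbitrary example, not a figure from this card.

```python
# Rough latency estimate from the speed figures above:
# total_time ≈ time_to_first_token + remaining_tokens / tokens_per_sec.
ttft = 1.70   # time to first token, seconds
tps = 19.3    # decode throughput, tokens/second

def estimated_latency(n_tokens: int) -> float:
    """Estimated seconds to stream an n-token response after the request is sent."""
    return ttft + max(n_tokens - 1, 0) / tps

print(f"{estimated_latency(500):.1f}s")  # ~27.6s for a 500-token reply
```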

Available Providers

(LS internal units)
Provider | Input Price | Output Price
Novita | 600K | 2.5M

External Sources