
MiniCPM-SALA

OpenBMB · Open Weight · Apache 2.0 · Commercial OK

Description

MiniCPM-SALA (Sparse Attention and Linear Attention) is a 9B-parameter hybrid model built from a MiniCPM-4.0 checkpoint via continual training (~2T tokens, roughly 25% of the cost of training from scratch). It interleaves InfLLM-V2 sparse-attention layers (25% of layers) with Lightning Attention linear-attention layers (75%), reaching up to 3.5x faster inference than dense-attention baselines at 256K tokens. With HyPE (Hybrid Positional Encoding), which applies NoPE in the sparse layers, the model extrapolates to 2048K tokens despite a 520K training length, enabling 1M-token inference on consumer GPUs such as the RTX 5090.

Release Date: 2026-02-11
Parameters: 9.5B
Context Length: —
Modalities: —

Capability Radar

general: 70
coding: 100
reasoning: 80
science: 60 (est.)
agents: 0
multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

No ranking data available

Benchmark Scores (LLM Stats)

Code

HumanEval: 95.1% (SR)

Finance

MMLU-Pro: 67.0% (SR)

General

MBPP: 0.89 / 100 (SR)
CMMLU: 81.5% (SR)
IFEval: 76.3% (SR)
LiveCodeBench v5: 60.5% (SR)
LiveCodeBench v6: 52.0% (SR)
MRCR 64K (2-needle): 29.8% (SR)
MRCR 128K (2-needle): 28.6% (SR)
MRCR 64K (4-needle): 20.6% (SR)
MRCR 128K (4-needle): 19.6% (SR)
MRCR 64K (8-needle): 16.6% (SR)
MRCR 128K (8-needle): 10.1% (SR)

Language

BBH: 81.5% (SR)

Long Context

RULER 64K: 92.7% (SR)
RULER 128K: 89.4% (SR)
RULER 512K: 87.1% (SR)
RULER 1000K: 86.3% (SR)
RULER 2048K: 81.6% (SR)
NoLiMa 32K: 54.5% (SR)
NoLiMa 64K: 43.0% (SR)
NoLiMa 128K: 23.9% (SR)

Math

AIME 2024: 83.8% (SR)
AIME 2025: 78.3% (SR)

AA Evaluation Indices

No AA evaluation data available

LLM Stats Category Scores

Code: 100
Structured Output: 80
Instruction Following: 80
Language: 80
Math: 80
Reasoning: 80
Finance: 70
General: 70
Healthcare: 70
Legal: 70

Pricing

No pricing data available

Speed

No speed data available

Available Providers


No provider data available

External Sources