MiniCPM-SALA
OpenBMB · Open Weight · Apache 2.0 · Commercial OK
Description
MiniCPM-SALA (Sparse Attention and Linear Attention) is a 9B-class hybrid model (9.5B parameters) built from a MiniCPM-4.0 checkpoint via continual training on ~2T tokens, roughly 25% of the cost of training from scratch. Its layer stack interleaves InfLLM-V2 sparse-attention layers (25%) with Lightning Attention linear-attention layers (75%), delivering up to a 3.5x inference speedup over dense-attention baselines at 256K tokens. Combining HyPE (Hybrid Positional Encoding) with NoPE (no positional encoding) in the sparse layers, the model extrapolates to 2048K tokens despite a 520K-token training length, enabling 1M-token inference on consumer GPUs such as the RTX 5090.
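As a rough illustration of the hybrid design, here is a minimal sketch of a layer stack that places one sparse-attention layer after every three linear-attention layers, matching the 25%/75% split described above. Everything in it is an assumption for illustration, not the released architecture: the real InfLLM-V2 selects relevant KV blocks dynamically rather than using the fixed causal window shown here, Lightning Attention uses a chunked causal recurrence rather than this non-causal kernel trick, and the HyPE/NoPE positional scheme is omitted entirely.

```python
# Minimal sketch of a 25% sparse / 75% linear hybrid attention stack.
# All module names, dimensions, and attention bodies are illustrative
# assumptions; they are NOT the MiniCPM-SALA implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimplifiedLinearAttention(nn.Module):
    """O(n) attention via the kernel trick: softmax(QK^T)V is approximated
    by phi(Q)(phi(K)^T V), so no n x n score matrix is ever formed.
    (Non-causal for brevity; Lightning Attention is causal and chunked.)"""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map
        kv = torch.einsum("bnd,bne->bde", k, v)      # d x d state, O(n) in seq len
        z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(1)) + 1e-6)
        return self.out(torch.einsum("bnd,bde,bn->bne", q, kv, z))


class SimplifiedSparseAttention(nn.Module):
    """Stand-in for a sparse layer: exact softmax attention restricted to a
    causal local window, so each query reads a bounded slice of the KV cache.
    (InfLLM-V2 instead selects KV blocks dynamically per query.)"""
    def __init__(self, dim: int, window: int = 4):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        self.window = window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        n = x.shape[1]
        idx = torch.arange(n, device=x.device)
        keep = (idx[None, :] <= idx[:, None]) & (idx[:, None] - idx[None, :] < self.window)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        scores = scores.masked_fill(~keep, float("-inf"))
        return self.out(scores.softmax(dim=-1) @ v)


class HybridBlock(nn.Module):
    """Pre-norm residual block wrapping either attention flavor."""
    def __init__(self, dim: int, sparse: bool):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = SimplifiedSparseAttention(dim) if sparse else SimplifiedLinearAttention(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.attn(self.norm(x))


def build_hybrid_stack(dim: int = 64, n_layers: int = 8, sparse_every: int = 4) -> nn.Sequential:
    # One sparse layer per group of `sparse_every` => 25% sparse, 75% linear.
    return nn.Sequential(*[
        HybridBlock(dim, sparse=(i % sparse_every == sparse_every - 1))
        for i in range(n_layers)
    ])


if __name__ == "__main__":
    model = build_hybrid_stack()
    x = torch.randn(2, 16, 64)        # (batch, seq_len, dim)
    print(model(x).shape)             # torch.Size([2, 16, 64])
```

The point of this placement is that most layers run in linear time with constant-size state, while the periodic sparse layers keep exact token-level retrieval over a bounded set of keys and values; that combination is what lets the stack scale to 256K+ contexts at a fraction of dense-attention cost.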
Release Date
2026-02-11
Parameters
9.5B
Context Length
—
Supported Modalities
—
Capability Radar
general: 70 · coding: 100 · reasoning: 80 · science: 60 (estimated) · agents: 0 · multimodal: 0
Note: the science score is estimated from reasoning ability as a proxy, since no dedicated science benchmark results are available.
Leaderboard Rankings
No ranking data available.
Benchmark Scores (LLM Stats)
Code
HumanEval: 95.1% (self-reported)
Finance
MMLU-Pro: 67.0% (self-reported)
General
MBPP: 89.0% (self-reported; displayed as "0.89 / 100" in the source)
CMMLU: 81.5% (self-reported)
IFEval: 76.3% (self-reported)
LiveCodeBench v5: 60.5% (self-reported)
LiveCodeBench v6: 52.0% (self-reported)
MRCR 64K (2-needle): 29.8% (self-reported)
MRCR 128K (2-needle): 28.6% (self-reported)
MRCR 64K (4-needle): 20.6% (self-reported)
MRCR 128K (4-needle): 19.6% (self-reported)
MRCR 64K (8-needle): 16.6% (self-reported)
MRCR 128K (8-needle): 10.1% (self-reported)
Language
BBH: 81.5% (self-reported)
Long Context
RULER 64K: 92.7% (self-reported)
RULER 128K: 89.4% (self-reported)
RULER 512K: 87.1% (self-reported)
RULER 1000K: 86.3% (self-reported)
RULER 2048K: 81.6% (self-reported)
NoLiMa 32K: 54.5% (self-reported)
NoLiMa 64K: 43.0% (self-reported)
NoLiMa 128K: 23.9% (self-reported)
Math
AIME 2024: 83.8% (self-reported)
AIME 2025: 78.3% (self-reported)
AA Evaluation Index
No AA evaluation data available.
LLM Stats Category Scores
Code: 100
Structured Output: 80
Instruction Following: 80
Language: 80
Math: 80
Reasoning: 80
Finance: 70
General: 70
Healthcare: 70
Legal: 70
Pricing
No pricing data available.
Speed
No speed data available.
Available Providers
No provider data available.