Sarvam 30B (high)
Sarvam · Open Weight · Apache 2.0 · Commercial OK
Description
Sarvam-30B is an open-weight 30B-parameter Mixture-of-Experts reasoning model from Sarvam AI, trained from scratch and optimized for Indian languages, coding, and conversational workloads. It uses 128 sparse experts with roughly 2.4B active parameters per token and Grouped Query Attention, and was pre-trained on 16 trillion tokens spanning code, mathematics, multilingual, and web data.
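To make the "128 sparse experts, ~2.4B active parameters per token" figure concrete, here is a minimal top-k routing sketch of a Mixture-of-Experts layer. This is illustrative only, not Sarvam's implementation; the number of active experts per token (`TOP_K`) and the toy hidden size are assumptions, since the card does not state them.

```python
# Illustrative MoE top-k routing sketch (NOT Sarvam's actual code).
# Only TOP_K of NUM_EXPERTS experts run per token, which is why the
# active parameter count (~2.4B) is far below the total (30B).
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 128   # from the model card
TOP_K = 8           # assumption: active experts per token is not stated
D_MODEL = 64        # toy hidden size for the sketch

# One tiny linear "expert" per slot (real experts are feed-forward blocks).
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02
           for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((D_MODEL, NUM_EXPERTS)) * 0.02

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router                    # router scores, shape (NUM_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)
print(out.shape, f"active experts: {TOP_K}/{NUM_EXPERTS}")
```

With `TOP_K = 8`, only 8/128 of the expert parameters participate in each forward pass, matching the general idea of a small active fraction of a large total.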
Release Date
2026-03-06
Parameters
30.0B
Context Length
—
Modalities
—
Capability Radar
General: 11
Coding: 10
Reasoning: 63
Science (est.): 37
Agents: 40
Multimodal: 0
Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 90 | 36.0 | LS |
| Code Ranking | 436 | 8.0 | AA |
| General Ranking | 431 | 21.0 | AA |
| Science | 297 | 36.0 | AA |
Benchmark Scores (LLM Stats)
Agents
- BrowseComp: 35.5% (SR)

Biology
- GPQA: 66.5% (SR)

Code
- HumanEval: 92.1% (SR)
- SWE-Bench Verified: 34.0% (SR)

Creativity
- Arena-Hard v2: 49.0% (SR)

Finance
- MMLU: 85.1% (SR)
- MMLU-Pro: 80.0% (SR)

General
- MBPP: 0.93 / 100 (SR)
- LiveCodeBench v6: 70.0% (SR)

Math
- MATH-500: 97.0% (SR)
- AIME 2025: 96.7% (SR)
- HMMT25: 74.2% (SR)
- HMMT 2025: 73.3% (SR)
- Beyond AIME: 58.3% (SR)
AA Evaluation Indices
Intelligence Index: 12.3
Coding Index: 7.9
GPQA: 0.6
Tau2: 0.3
IFBench: 0.3
SciCode: 0.2
HLE: 0.1
Terminal-Bench Hard: 0.0
LCR: 0.0
LLM Stats Category Scores
Finance: 80
Healthcare: 80
Language: 80
Legal: 80
Math: 80
Biology: 70
Chemistry: 70
General: 70
Physics: 70
Reasoning: 70
Code: 60
Writing: 50
Creativity: 50
Agents: 40
Search: 40
Frontend Development: 30
Pricing
Input Price: Free
Output Price: Free
Blended Price (3:1): Free
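The "Blended Price (3:1)" row follows a common leaderboard convention: a weighted average that assumes three input tokens per output token. A minimal sketch of that convention, using hypothetical per-million-token prices (Sarvam-30B itself is listed as free):

```python
# Blended price under the 3:1 input:output convention (an assumption
# about how the listing computes it; prices below are hypothetical).
def blended_price(input_price: float, output_price: float) -> float:
    """Weighted average price assuming 3 input tokens per output token."""
    return (3 * input_price + 1 * output_price) / 4

# Hypothetical example: $0.20/M input, $0.60/M output.
print(f"${blended_price(0.20, 0.60):.2f}/M blended")  # $0.30/M blended
```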
Speed (LS internal units)
Throughput: 168.7 tokens/s
Time to First Token: 1.23s
Time to Answer: 13.09s
Available Providers
No provider data available.
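As a sanity check on the speed figures above, generation time is time-to-answer minus time-to-first-token, so the number of generated tokens in the benchmark run can be estimated from the reported throughput (assuming the answer is a single uninterrupted generation):

```python
# Back-of-the-envelope check of the reported speed figures.
tokens_per_sec = 168.7     # throughput from the card
ttft_s = 1.23              # time to first token
time_to_answer_s = 13.09   # total time to answer

generation_time_s = time_to_answer_s - ttft_s       # 11.86 s of decoding
approx_tokens = generation_time_s * tokens_per_sec
print(f"~{approx_tokens:.0f} tokens generated")      # ~2001 tokens
```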