
Sarvam 105B (high)

Sarvam · Open Weight · Apache 2.0 · Commercial OK

Description

Sarvam-105B is Sarvam AI's flagship open-source Mixture-of-Experts reasoning model built for complex reasoning, coding, and agentic workflows. It uses 128 sparse experts with Multi-head Latent Attention for efficient long-context inference and was pre-trained on 12 trillion tokens spanning code, mathematics, multilingual, and web data.
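To illustrate the sparse-expert idea mentioned above, here is a minimal sketch of top-k expert routing in a Mixture-of-Experts layer. The expert count of 128 comes from the description; the top-k value of 2 and the routing details are generic assumptions for illustration, not Sarvam's published router configuration.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(token_logits, top_k=2):
    """Select the top_k experts for one token and renormalize their gate weights."""
    probs = softmax(token_logits)
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    total = sum(probs[i] for i in ranked)
    return [(i, probs[i] / total) for i in ranked]

random.seed(0)
num_experts = 128  # matches the 128 sparse experts stated in the description
logits = [random.gauss(0, 1) for _ in range(num_experts)]  # stand-in router logits
chosen = route(logits, top_k=2)

# Each token activates only top_k of 128 experts; gate weights sum to 1,
# which is why the per-token compute is far below the full 105B parameters.
assert len(chosen) == 2
assert abs(sum(w for _, w in chosen) - 1.0) < 1e-9
```

Only the selected experts run for a given token, which is how a 105B-parameter model keeps per-token inference cost closer to that of a much smaller dense model.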

Release Date: 2026-03-06
Parameters: 105.0B
Context Length:
Modalities:

Capability Radar

General: 16
Coding: 12
Reasoning: 74
Science: 44 (est.)
Agents: 50
Multimodal: 10

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain             Rank   Score   Source
Agents & Tools       74    50.0   LS
Code Ranking        428     9.0   AA
General Ranking     344    32.0   AA
Science             202    47.0   AA

Benchmark Scores (LLM Stats)

Agents

BrowseComp: 49.5% (SR)

Biology

GPQA: 78.7% (SR)

Code

SWE-Bench Verified: 45.0% (SR)

Creativity

Arena-Hard v2: 71.0% (SR)

Finance

MMLU: 90.6% (SR)
MMLU-Pro: 81.7% (SR)

General

IFEval: 84.8% (SR)
LiveCodeBench v6: 71.7% (SR)

Math

MATH-500: 98.6% (SR)
AIME 2025: 96.7% (SR)
HMMT 2025: 85.8% (SR)
Beyond AIME: 69.1% (SR)
Humanity's Last Exam: 11.2% (SR)

AA Evaluation Indices

Intelligence Index: 18.2
Coding Index: 9.8
GPQA: 0.7
Tau2: 0.5
IFBench: 0.3
SciCode: 0.3
HLE: 0.1
Terminal-Bench Hard: 0.0
LCR: 0.0

LLM Stats Category Scores

Finance: 90
Healthcare: 90
Language: 90
Legal: 90
Structured Output: 80
Biology: 80
Chemistry: 80
General: 80
Instruction Following: 80
Math: 80
Physics: 80
Writing: 70
Creativity: 70
Reasoning: 70
Agents: 50
Code: 50
Frontend Development: 50
Search: 50
Vision: 10

Pricing

Input Price: Free
Output Price: Free
Blended Price (3:1): Free
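The blended price above weights input and output token prices 3:1, reflecting an assumed typical input:output token ratio. A minimal sketch of that calculation (the 3:1 weighting is from the table label; the paid-model prices in the second example are hypothetical):

```python
def blended_price(input_price, output_price, input_weight=3, output_weight=1):
    """Weighted average of per-token prices, assuming a 3:1 input:output ratio."""
    total = input_weight + output_weight
    return (input_price * input_weight + output_price * output_weight) / total

# Sarvam 105B is listed as free on both sides, so the blend is trivially 0:
assert blended_price(0.0, 0.0) == 0.0

# Hypothetical paid model at $3/M input and $15/M output tokens:
assert blended_price(3.0, 15.0) == 6.0  # (3*3 + 15*1) / 4 = $6/M blended
```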

Speed

Tokens/sec: 105.5
Time to First Token: 1.24s
Time to Answer: 20.19s
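The three speed figures are mutually consistent if time to answer equals time to first token plus decode time. Working backward (assuming a simple TTFT + decode model, which is how such dashboards typically measure), the benchmark response was roughly 2,000 tokens:

```python
ttft = 1.24    # time to first token, seconds
total = 20.19  # time to answer, seconds
tps = 105.5    # decode throughput, tokens/s

# Decode time is what remains after the first token appears.
decoded_tokens = (total - ttft) * tps
assert round(decoded_tokens) == 1999  # ~2,000 tokens of output
```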

Available Providers


No provider data available

External Sources