Mistral Large (Feb '24)
Mistral · Open Weight · Apache 2.0 · Commercial OK
Description
Mistral Large 3 (675B Instruct 2512) is a state-of-the-art, general-purpose, multimodal granular Mixture-of-Experts model with 41B active parameters and 675B total parameters, trained from scratch on 3,000 H200 GPUs. This is the instruct post-trained version in FP8, fine-tuned for instruction following, making it well suited to chat, agentic, and instruction-based use cases. The no-loss FP8 version reduces resource requirements and can be deployed on a single node of B200s or H200s. Designed for reliability and long-context comprehension, it is engineered for production-grade assistants, retrieval-augmented systems, scientific workloads, and complex enterprise workflows.
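The parameter and deployment claims above can be sanity-checked with simple arithmetic. A minimal sketch, assuming 1 byte per parameter in FP8 and 141 GB of HBM per H200 with 8 GPUs per node (these hardware figures are assumptions, not from this card):

```python
# Back-of-the-envelope sizing for the FP8 checkpoint described above.
# Assumptions (not from the card): 1 byte per parameter in FP8,
# 141 GB of HBM per H200, 8 GPUs per node; KV cache and activations ignored.

TOTAL_PARAMS_B = 675   # total parameters, billions (from the card)
ACTIVE_PARAMS_B = 41   # active parameters per token, billions (from the card)

active_fraction = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
weights_gb = TOTAL_PARAMS_B * 1.0          # FP8: ~1 GB per billion params
node_hbm_gb = 8 * 141                      # one 8x H200 node

print(f"active fraction per token: {active_fraction:.1%}")
print(f"FP8 weights: ~{weights_gb:.0f} GB vs {node_hbm_gb} GB node HBM")
```

Only about 6% of the weights are active per token, and the ~675 GB of FP8 weights fit within a single 8-GPU H200 node's HBM, consistent with the deployment note above.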
Release Date
2024-02-26
Parameters
675.0B
Context Length
128K
Modalities
image, text
Capability Radar
| Axis | Score |
|---|---|
| general | 21 |
| coding | 18 |
| reasoning | 23 |
| science (est.) | 24 |
| agents | 0 |
| multimodal | 75 |
Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Code Ranking | 355 | 19.0 | AA |
| General Ranking | 410 | 24.0 | AA |
| Math Reasoning | 288 | 25.0 | AA |
| Science | 404 | 23.0 | AA |
Benchmark Scores (LLM Stats)
| Category | Benchmark | Score |
|---|---|---|
| Biology | GPQA | 43.9% (SR) |
| Code | LiveCodeBench | 34.4% (SR) |
| Factuality | SimpleQA | 23.8% (SR) |
| General | MMMLU | 85.5% (SR) |
| Math | AMC_2022_23 | 52.0% (SR) |
AA Evaluation Indices
| Index | Score |
|---|---|
| Intelligence Index | 9.9 |
| MATH 500 | 0.5 |
| MMLU-Pro | 0.5 |
| GPQA | 0.4 |
| SciCode | 0.2 |
| LiveCodeBench | 0.2 |
| HLE | 0.0 |
| AIME | 0.0 |
LLM Stats Category Scores
| Category | Score |
|---|---|
| Language | 90 |
| Math | 70 |
| General | 50 |
| Reasoning | 50 |
| Biology | 40 |
| Chemistry | 40 |
| Physics | 40 |
| Code | 30 |
| Factuality | 20 |
Pricing
Input Price: $4 / 1M tokens
Output Price: $12 / 1M tokens
Blended Price (3:1): $6 / 1M tokens
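The blended price follows directly from the listed per-token prices, weighting input and output tokens 3:1:

```python
# How the blended price above is derived: a 3:1 input:output token mix,
# using the per-million-token prices listed on this card.

INPUT_PRICE = 4.0    # $ per 1M input tokens
OUTPUT_PRICE = 12.0  # $ per 1M output tokens

blended = (3 * INPUT_PRICE + 1 * OUTPUT_PRICE) / 4
print(f"blended price (3:1): ${blended:.0f} / 1M tokens")  # $6
```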
Speed
Tokens/sec: 0.0 tokens/s
Time to First Token: 0.00s
Time to Answer: 0.00s
Available Providers
| Provider | Input Price | Output Price |
|---|---|---|
| Mistral AI | 500K | 1.5M |
Prices are given in LS internal units.