Llama 3.3 Instruct 70B
Meta | Llama | Open Weight | Llama 3.3 Community License Agreement
Description
Llama 3.3 is a multilingual large language model optimized for dialogue use cases across multiple languages. It is a pretrained and instruction-tuned generative model with 70 billion parameters that outperforms many open-source and closed chat models on common industry benchmarks. Llama 3.3 supports a context length of 128K (131,072) tokens and is designed for commercial and research use in multiple languages.
Release Date
2024-12-06
Parameters
70.0B
Context Length
131,072 tokens (128K)
Modalities
text
Capability Radar
- General: 30
- Coding: 19
- Reasoning: 28
- Science (est.): 32
- Agents: 80
- Multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Code Ranking | 357 | 18.0 | AA |
| General Ranking | 280 | 38.0 | AA |
| Math Reasoning | 280 | 26.0 | AA |
| Science | 334 | 32.0 | AA |
Benchmark Scores (LLM Stats)
| Category | Benchmark | Score | Source |
|---|---|---|---|
| Biology | GPQA | 50.5% | SR |
| Code | HumanEval | 88.4% | SR |
| Finance | MMLU | 86.0% | SR |
| Finance | MMLU-Pro | 68.9% | SR |
| General | IFEval | 92.1% | SR |
| General | MBPP EvalPlus | 87.6% | SR |
| General | BFCL v2 | 77.3% | SR |
| Math | MGSM | 91.1% | SR |
| Math | MATH | 77.0% | SR |
AA Evaluation Indices
- Intelligence Index: 14.5
- Coding Index: 10.7
- Math Index: 7.7
- MATH-500: 0.8
- MMLU-Pro: 0.7
- GPQA: 0.5
- IFBench: 0.5
- AIME: 0.3
- LiveCodeBench: 0.3
- Tau2: 0.3
- SciCode: 0.3
- LCR: 0.1
- AIME 25: 0.1
- HLE: 0.0
- Terminal-Bench Hard: 0.0
LLM Stats Category Scores
- Structured Output: 90
- Code: 90
- Instruction Following: 90
- Tool Calling: 80
- Finance: 80
- Healthcare: 80
- Language: 80
- Legal: 80
- Math: 80
- Reasoning: 80
- General: 70
- Biology: 50
- Chemistry: 50
- Physics: 50
Pricing
Input Price: $0.585 / 1M tokens
Output Price: $0.71 / 1M tokens
Blended Price (3:1): $0.616 / 1M tokens
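The blended price is a weighted average of input and output prices at the stated 3:1 input-to-output token ratio. A minimal sketch of that arithmetic, using the per-million-token prices from the table above:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average price per 1M tokens for a given input:output ratio."""
    total = input_weight + output_weight
    return (input_weight * input_per_m + output_weight * output_per_m) / total

# Llama 3.3 70B prices from the table above ($ / 1M tokens)
print(round(blended_price(0.585, 0.71), 3))  # ~0.616
```

(3 × 0.585 + 1 × 0.71) / 4 = 0.616, matching the blended figure above.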
Speed
Tokens/sec: 92.9
Time to First Token: 0.57 s
Time to Answer: 0.57 s
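A common back-of-the-envelope latency model combines the two speed figures above: total time is roughly time-to-first-token plus output tokens divided by throughput. A sketch under that assumption (the 500-token answer length is an illustrative value, not from the source):

```python
def estimated_latency(ttft_s: float, tokens_per_s: float, output_tokens: int) -> float:
    """Rough end-to-end latency: time to first token plus streaming time."""
    return ttft_s + output_tokens / tokens_per_s

# Using the speed figures above, a 500-token answer would take roughly:
print(round(estimated_latency(0.57, 92.9, 500), 2))  # ~5.95 s
```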
Available Providers
No provider data available.