
Phi-3.5-mini-instruct

Microsoft · Phi · Open Weight · MIT · Commercial OK

Description

Phi-3.5-mini-instruct is a 3.8B-parameter model that supports a context window of up to 128K tokens, with improved multilingual capabilities across more than 20 languages. It underwent additional training and safety post-training to strengthen instruction following, reasoning, math, and code generation. Well suited to memory- or latency-constrained environments, it is released under the MIT license.
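With a 128K-token window, a practical question is whether a long prompt will fit before sending it to the model. The sketch below is a minimal, illustrative budget check: the ~4-characters-per-token ratio is a common English-text heuristic, not the model's actual tokenizer, so real counts will differ.

```python
# Rough context-budget check against Phi-3.5-mini-instruct's 128K-token window.
# The chars-per-token ratio is a heuristic assumption, not the real tokenizer.

CONTEXT_LIMIT = 131_072  # 128K tokens


def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Return a rough token estimate for `text`."""
    return max(1, round(len(text) / chars_per_token))


def fits_in_context(prompt: str, reserved_for_output: int = 1024) -> bool:
    """True if the prompt plus an output budget likely fits in the window."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_LIMIT


print(fits_in_context("Summarize this short note."))  # True
```

For production use, replace the heuristic with the model's own tokenizer so the count is exact rather than approximate.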

Release Date
2024-08-23
Parameters
3.8B
Context Length
128K
Modalities
text

Capability Radar

general: 60
coding: 60
reasoning: 60
science: 26 (est.)
agents: 0
multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain: Reasoning · Rank: #51 · Score: 69.0 · Source: LS

Benchmark Scores (LLM Stats)

Biology

GPQA: 30.4% (SR)

Code

RepoQA: 77.0% (SR)
HumanEval: 62.8% (SR)

Creativity

Social IQa: 74.7% (SR)
Arena Hard: 37.0% (SR)

Finance

MMLU: 69.0% (SR)
TruthfulQA: 64.0% (SR)
MMLU-Pro: 47.4% (SR)

General

ARC-C: 84.6% (SR)
PIQA: 81.0% (SR)
OpenBookQA: 79.2% (SR)
MBPP: 0.70 / 100 (SR)
MMMLU: 55.4% (SR)

Language

BoolQ: 78.0% (SR)
MEGA XStoryCloze: 73.5% (SR)
BIG-Bench Hard: 69.0% (SR)
Winogrande: 68.5% (SR)
MEGA XCOPA: 63.1% (SR)
MEGA TyDi QA: 62.2% (SR)
MEGA MLQA: 61.7% (SR)
MEGA UDPOS: 46.5% (SR)
SQuALITY: 24.3% (SR)

Long Context

RULER: 84.1% (SR)
Qasper: 41.9% (SR)
GovReport: 25.9% (SR)
QMSum: 21.3% (SR)
SummScreenFD: 16.0% (SR)

Math

GSM8k: 86.2% (SR)
MATH: 48.5% (SR)
MGSM: 47.9% (SR)

Reasoning

HellaSwag: 69.4% (SR)

AA Evaluation Indices

No AA evaluation data available

LLM Stats Category Scores

Psychology: 70
Reasoning: 70
Code: 60
Creativity: 60
Finance: 60
General: 60
Healthcare: 60
Language: 60
Legal: 60
Math: 60
Physics: 60
Long Context: 50
Writing: 40
Biology: 30
Chemistry: 30
Summarization: 20

Pricing

No pricing data available

Speed

No speed data available

Available Providers


No provider data available

External Sources