
Phi-3.5-MoE-instruct

Microsoft · Phi · Open Weight · MIT · Commercial OK

Description

Phi-3.5-MoE-instruct is a mixture-of-experts model with ~42B total parameters (6.6B active per token) and a 128K context window. It excels at reasoning, math, coding, and multilingual tasks, outperforming larger dense models on many benchmarks. It underwent a thorough safety post-training process (SFT + DPO) and is released under the MIT license. This model is well suited to scenarios where both efficiency and high performance are required, particularly multilingual or reasoning-intensive tasks.

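Because the weights are open under MIT, the model can be pulled straight into standard tooling. The snippet below is a minimal usage sketch, assuming the Hugging Face transformers library and the microsoft/Phi-3.5-MoE-instruct checkpoint; the prompt and generation settings are illustrative and not taken from this page.

```python
# Minimal sketch: load Phi-3.5-MoE-instruct and run one chat turn.
# Assumes the `transformers` and `torch` packages and enough accelerator memory
# for the ~42B parameters (only ~6.6B are active per token at inference).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-MoE-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
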
Release Date: 2024-08-23
Parameters: ~42B total (6.6B active)
Context Length: 128K tokens
Modalities: Text

Capability Radar

general: 70
coding: 70
reasoning: 70
science: 34 (est.)
agents: 0
multimodal: 0

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain: Reasoning · Rank: #21 · Score: 84.0 · Source: LS

Benchmark Scores (LLM Stats)

Biology

GPQA: 36.8% (SR)

Code

RepoQA: 85.0% (SR)
HumanEval: 70.7% (SR)

Creativity

Social IQa: 78.0% (SR)
Arena Hard: 37.9% (SR)

Finance

MMLU: 78.9% (SR)
TruthfulQA: 77.5% (SR)
MMLU-Pro: 45.3% (SR)

General

ARC-C: 91.0% (SR)
OpenBookQA: 89.6% (SR)
PIQA: 88.6% (SR)
MBPP: 0.81 / 100 (SR)
MMMLU: 69.9% (SR)

Language

BoolQ: 84.6% (SR)
MEGA XStoryCloze: 82.8% (SR)
Winogrande: 81.3% (SR)
BIG-Bench Hard: 79.1% (SR)
MEGA XCOPA: 76.6% (SR)
MEGA TyDi QA: 67.1% (SR)
MEGA MLQA: 65.3% (SR)
MEGA UDPOS: 60.4% (SR)
SQuALITY: 24.1% (SR)

Long Context

RULER: 87.1% (SR)
Qasper: 40.0% (SR)
GovReport: 26.4% (SR)
QMSum: 19.9% (SR)
SummScreenFD: 16.9% (SR)

Math

GSM8k: 88.7% (SR)
MATH: 59.5% (SR)
MGSM: 58.7% (SR)

Reasoning

HellaSwag: 83.8% (SR)

AA Evaluation Indices

No AA evaluation data available

LLM Stats Category Scores

Psychology: 80
Code: 70
Finance: 70
General: 70
Healthcare: 70
Language: 70
Legal: 70
Math: 70
Reasoning: 70
Creativity: 60
Long Context: 60
Physics: 60
Writing: 40
Biology: 40
Chemistry: 40
Summarization: 20

Pricing

No pricing data available

Speed

No speed data available

Available Providers

No provider data available

External Sources