Phi-3.5-vision-instruct

Microsoft · Phi · Open Weight · MIT · Commercial OK

Description

Phi-3.5-vision-instruct is a 4.2B-parameter open multimodal model with up to 128K context tokens. It emphasizes multi-frame image understanding and reasoning, boosting performance on single-image benchmarks while enabling multi-image comparison, summarization, and even video analysis. The model underwent safety post-training for improved instruction-following, alignment, and robust handling of visual and text inputs, and is released under the MIT license.
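To make the multi-image capability concrete, here is a minimal sketch of how a multi-frame chat prompt is typically assembled for the Phi-3 vision family, where numbered placeholders such as `<|image_1|>` stand in for frames passed separately to the processor. The helper function name and the example question are illustrative, not part of any official API.

```python
# Sketch of the Phi-3.5-vision multi-image chat prompt format (assumed from
# the Phi-3 vision family conventions). Placeholders <|image_1|>, <|image_2|>,
# ... are 1-based and correspond to image frames supplied alongside the text.

def build_prompt(question: str, num_images: int) -> str:
    """Build a single-turn user prompt referencing num_images frames."""
    placeholders = "".join(f"<|image_{i}|>\n" for i in range(1, num_images + 1))
    return f"<|user|>\n{placeholders}{question}<|end|>\n<|assistant|>\n"

# Example: ask the model to compare two frames.
prompt = build_prompt("Summarize how the chart changes between these frames.", 2)
print(prompt)
```

The actual images would be passed to the model's processor in the same order as the placeholders; the string above only defines how the text side of the multimodal input is laid out.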

Release Date
2024-08-23
Parameters
4.2B
Context Length
128K tokens
Modalities
Text, Image

Capability Radar

General: 40
Coding: 0
Reasoning: 40
Science: 34 (est.)
Agents: 0
Multimodal: 70

Science uses a reasoning proxy when dedicated science benchmarks are unavailable.

Rankings

Domain | #Rank | Score | Source
Multimodal | 30 | 80.0 | LS

Benchmark Scores (LLM Stats)

General

MMMU: 43.0% (SR)

Image To Text

TextVQA: 72.0% (SR)

Math

ScienceQA: 91.3% (SR)
MathVista: 43.9% (SR)
InterGPS: 36.3% (SR)

Multimodal

POPE: 86.1% (SR)
MMBench: 81.9% (SR)
ChartQA: 81.8% (SR)
AI2D: 78.1% (SR)

AA Evaluation Indices

No AA evaluation data available

LLM Stats Category Scores

Vision: 70
Image To Text: 70
Multimodal: 70
Reasoning: 70
General: 40
Healthcare: 40
Math: 40

Pricing

No pricing data available

Speed

No speed data available

Available Providers

No provider data available

External Sources