Phi-3.5-vision-instruct
Microsoft · Phi · Open Weight · MIT · Commercial OK
Description
Phi-3.5-vision-instruct is a 4.2B-parameter open multimodal model with up to 128K context tokens. It emphasizes multi-frame image understanding and reasoning, improving performance on single-image benchmarks while enabling multi-image comparison, summarization, and video analysis. The model underwent safety post-training for improved instruction following, alignment, and robust handling of visual and text inputs, and is released under the MIT license.
Release Date
2024-08-23
Parameters
4.2B
Context Length
128K tokens
Modalities
Text, Image
Capability Radar
- General: 40
- Coding: 0
- Reasoning: 40
- Science (est.): 34
- Agents: 0
- Multimodal: 70
Science uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Multimodal Ranking | 30 | 80.0 | LS |
Benchmark Scores (LLM Stats)
| Category | Benchmark | Score | Source |
|---|---|---|---|
| General | MMMU | 43.0% | SR |
| Image To Text | TextVQA | 72.0% | SR |
| Math | ScienceQA | 91.3% | SR |
| Math | MathVista | 43.9% | SR |
| Math | InterGPS | 36.3% | SR |
| Multimodal | POPE | 86.1% | SR |
| Multimodal | MMBench | 81.9% | SR |
| Multimodal | ChartQA | 81.8% | SR |
| Multimodal | AI2D | 78.1% | SR |
AA Evaluation Indices
No AA evaluation data available
LLM Stats Category Scores
- Vision: 70
- Image To Text: 70
- Multimodal: 70
- Reasoning: 70
- General: 40
- Healthcare: 40
- Math: 40
Pricing
No pricing data available
Speed
No speed data available
Available Providers
No provider data available