Microsoft · 4 local models

Phi Models: Small AI That Thinks Big

Microsoft's Phi family proves that size is not everything. Phi-4 Mini at just 3.8B parameters matches or beats many 7B models on reasoning benchmarks. If you have a device with limited RAM (8-16GB), Phi models give you the most intelligence per gigabyte.

Developer: Microsoft
Models: 4
Size Range: 3.8B – 14B
RAM Range: 7 – 22 GB

Key Features

Best quality-per-parameter in small sizes
Strong reasoning for its size
Phi-4 Mini rivals 7B models at 3.8B
Runs well on low-RAM devices

All Phi Models

| Model | Size | Quant | VRAM | Min RAM | Best For | Quality |
|---|---|---|---|---|---|---|
| Phi-3 Mini 3.8B | 3.8B | Q4_K_M | 3.2 GB | 7 GB | Coding, Chat | 70 |
| Phi-4 Mini 3.8B | 3.8B | Q4_K_M | 3.2 GB | 7 GB | Coding, Chat | 72 |
| Phi-3 Medium 14B | 14B | Q4_K_M | 11 GB | 20 GB | Coding, Quality | 89 |
| Phi-4 14B | 14B | Q4_K_M | 11.5 GB | 22 GB | Coding, Quality | 92 |

Device Compatibility

Which Phi models can run on each device class, based on minimum RAM requirements.

| Model | iPhone | Air | Pro | Studio | Mini |
|---|---|---|---|---|---|
| Phi-3 Mini 3.8B (3.8B) | Possible | Possible | Excellent | Excellent | Excellent |
| Phi-4 Mini 3.8B (3.8B) | Possible | Possible | Excellent | Excellent | Excellent |
| Phi-3 Medium 14B (14B) | No | Possible | Possible | Excellent | Possible |
| Phi-4 14B (14B) | No | Possible | Possible | Excellent | Possible |

RAM Requirements

Phi-3 Mini 3.8B: 3.2 GB (min 7 GB)
Phi-4 Mini 3.8B: 3.2 GB (min 7 GB)
Phi-3 Medium 14B: 11 GB (min 20 GB)
Phi-4 14B: 11.5 GB (min 22 GB)
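The figures above follow a rough rule of thumb for Q4_K_M quantization: weights plus runtime buffers land around 0.8 bytes per parameter, and a comfortable minimum of system RAM is about twice that (OS, context cache, headroom). The sketch below is a back-of-the-envelope estimate with factors fitted to this page's table, not an official sizing formula.

```python
def q4_footprint(params_billion: float) -> tuple[float, float]:
    """Rough Q4_K_M sizing: (model footprint in GB, suggested min system RAM in GB).

    Rule of thumb fitted to the table above: ~0.8 bytes per parameter for the
    quantized weights plus runtime buffers, and ~2x that as total system RAM.
    These factors are our assumption, not a Microsoft specification.
    """
    vram_gb = params_billion * 0.8
    min_ram_gb = vram_gb * 2
    return round(vram_gb, 1), round(min_ram_gb, 1)

print(q4_footprint(3.8))   # roughly matches Phi-4 Mini's 3.2 GB / 7 GB
print(q4_footprint(14.0))  # roughly matches Phi-4 14B's 11.5 GB / 22 GB
```

The estimate runs slightly under the published numbers for the Mini models, which is expected: small models carry proportionally more fixed overhead (tokenizer, context buffers) than large ones.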

Frequently Asked Questions

Is Phi-4 Mini really as good as 7B models?
On reasoning and math benchmarks, yes. Phi-4 Mini 3.8B matches Llama 3.1 8B on several tasks while using half the RAM. For pure chat quality, 7B models still have a slight edge.
What RAM does Phi need?
Phi-4 Mini Q4 needs just 7 GB of RAM; Phi-4 14B needs 22 GB. The Mini model is perfect for 8 GB MacBook Airs and iPhones.
How does Phi compare to Gemma?
Phi-4 Mini (3.8B) vs Gemma 2 2B: Phi wins clearly. Phi-4 14B vs Gemma 2 27B: Gemma wins on quality but needs more RAM. Pick Phi when RAM is tight.
