Llama 3.1 8B Instruct
Llama / 8B / Q4_K_M / ~6.5 GB
Best for: Chat, Coding · Popularity: 94/100
Perf: ~53.0 tok/s · first token ~0.6s
Best for chat and coding. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run llama3.1:8b-instruct-q4_K_M
AI model recommendations for the MacBook Air M4 with up to 32 GB of RAM. This configuration comfortably runs models up to 14B parameters with fast inference.
The M4 is Apple's latest chip, with its most powerful Neural Engine yet. With up to 32 GB of unified memory, the MacBook Air M4 delivers the fastest inference speeds in the Air lineup, making 14B models practical for everyday use.
Based on our analysis, all 8 recommended models run well on this configuration. The sweet spot for the MacBook Air with Apple M4 is 7B–14B parameter models with Q4_K_M quantization, which offers the best trade-off between quality and inference speed.
Qwen / 9B / Q4_K_M / ~7 GB
Best for: Quality, Coding, Reasoning · Popularity: 86/100
Perf: ~47.7 tok/s · first token ~0.7s
Best for quality, coding, and reasoning. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run qwen3.5:9b-instruct-q4_K_M
Qwen / 8B / Q4_K_M / ~6.5 GB
Best for: Chat, Coding · Popularity: 88/100
Perf: ~53.0 tok/s · first token ~0.6s
Best for chat and coding. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run qwen3:8b-q4_K_M
Qwen / 7B / Q4_K_M / ~5.5 GB
Best for: Coding · Popularity: 85/100
Perf: ~59.8 tok/s · first token ~0.6s
Best for coding. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run qwen2.5-coder:7b-q4_K_M
Mistral / 7B / Q4_K_M / ~5.5 GB
Best for: Chat, Coding · Popularity: 90/100
Perf: ~59.8 tok/s · first token ~0.6s
Best for chat and coding. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run mistral:7b-instruct-q4_K_M
Qwen / 7B / Q4_K_M / ~5.5 GB
Best for: Chat, Coding · Popularity: 86/100
Perf: ~59.8 tok/s · first token ~0.6s
Best for chat and coding. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run qwen2.5:7b-instruct-q4_K_M
DeepSeek / 7B / Q4_K_M / ~5.5 GB
Best for: Reasoning, Coding · Popularity: 77/100
Perf: ~59.8 tok/s · first token ~0.6s
Best for reasoning and coding. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run deepseek-r1:7b-qwen-distill-q4_K_M
LFM2 / 8B / Q4_K_M / ~6 GB
Best for: Local agents, tool calling, fast chat · Popularity: 75/100
Perf: ~53.0 tok/s · first token ~0.6s
Best for local agents, tool calling, and fast chat. A strong fit for 24 GB of RAM, balancing speed and quality.
ollama run liquidai/lfm2:8b-a1b-instruct-q4_K_M
With 24 GB of RAM and the Apple M4 chip, we recommend Llama 3.1 8B Instruct for the best balance of speed and quality. The Apple M4 handles 7B–14B parameter models well.
The MacBook Air with Apple M4 is available in 16 GB, 24 GB, and 32 GB configurations. For most AI workloads, 24 GB provides good headroom. A 7B model typically needs 4–5 GB of free RAM, while a 14B model needs 8–10 GB.
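As a rough sanity check, those figures can be plugged into a few lines of shell. This is a sketch, not a measurement: the 4–5 GB and 8–10 GB numbers come from the estimates above, and the 8 GB reserve for macOS and other apps is an assumption.

```shell
# Rough fit check for a 24 GB MacBook Air M4.
total_ram_gb=24      # installed unified memory
os_headroom_gb=8     # assumed reserve for macOS and other apps
model_ram_gb=5       # upper estimate for a 7B model at Q4_K_M (see above)

free_gb=$((total_ram_gb - os_headroom_gb))
if [ "$free_gb" -ge "$model_ram_gb" ]; then
  echo "7B model fits: ${free_gb} GB free >= ${model_ram_gb} GB needed"
else
  echo "7B model is a tight fit: only ${free_gb} GB free"
fi
```

By the same arithmetic, a 14B model at 8–10 GB also fits within the resulting 16 GB of free memory, though with less room left over for long contexts.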
The Apple M4 in the MacBook Air achieves approximately 53 tokens per second with optimized models, such as 8B models at Q4_K_M quantization, with a time to first token of roughly 0.6 seconds.
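Those two numbers give a quick back-of-envelope latency estimate: total time ≈ time to first token + tokens ÷ tokens per second. A sketch using the ~0.6 s and ~53 tok/s figures quoted above (the 500-token reply length is an arbitrary example):

```shell
# Estimate wall-clock time for one reply at the quoted speeds.
tok_per_s=53         # ~53 tokens/second on the M4 (see above)
first_token_s=0.6    # ~0.6 s to first token
reply_tokens=500     # assumed reply length

awk -v tps="$tok_per_s" -v ttft="$first_token_s" -v n="$reply_tokens" \
  'BEGIN { printf "~%.1f s for a %d-token reply\n", ttft + n / tps, n }'
```

At these rates, a 500-token reply takes roughly 10 seconds of wall-clock time.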
Yes. Ollama runs natively on Apple Silicon, including the Apple M4. You can install it in minutes and run models like Llama 3.1 8B Instruct locally. Our wizard recommends the best models for your exact Apple M4 configuration and available RAM.
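For reference, a minimal setup might look like the following. This assumes Homebrew is installed; the installer from ollama.com works just as well, and the `ollama serve` step is unnecessary if you use the menu-bar app.

```shell
brew install ollama                      # install the Ollama runtime
ollama serve &                           # start the local server in the background
ollama run llama3.1:8b-instruct-q4_K_M   # pull the model (~6.5 GB) and start chatting
```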
Use our interactive wizard to test different RAM configurations and priorities for your specific Apple M4 setup.
Open ModelFit Wizard →