ModelFit

Find the best local AI model for your machine

Llama 3.1 8B Instruct
Llama · 8B · Q4_K_M · Fit: OK

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run llama3.1:8b-instruct-q4_K_M

Qwen2.5 7B Instruct
Qwen · 7B · Q4_K_M · Fit: OK

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run qwen2.5:7b-instruct-q4_K_M

Gemma 2 9B Instruct
Gemma · 9B · Q4_K_M · Fit: OK

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run gemma2:9b-instruct-q4_K_M

Llama 3.2 3B Instruct
Llama · 3B · Q4_K_M · Fit: Excellent

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run llama3.2:3b-instruct-q4_K_M

Qwen2.5 Coder 7B
Qwen · 7B · Q4_K_M · Fit: OK

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run qwen2.5-coder:7b-q4_K_M

Mistral 7B Instruct
Mistral · 7B · Q4_K_M · Fit: OK

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run mistral:7b-instruct-q4_K_M

Phi-3 Mini 3.8B
Phi · 3.8B · Q4_K_M · Fit: Excellent

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run phi3:mini-q4_K_M

Qwen2.5 3B Instruct
Qwen · 3B · Q4_K_M · Fit: Excellent

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run qwen2.5:3b-instruct-q4_K_M

Gemma 2 2B Instruct
Gemma · 2B · Q4_K_M · Fit: Excellent

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run gemma2:2b-instruct-q4_K_M

Qwen2.5 1.5B Instruct
Qwen · 1.5B · Q4_K_M · Fit: Excellent

This model matches your mixed workflow and is a strong fit for 16 GB RAM.

Run with Ollama:
ollama run qwen2.5:1.5b-instruct-q4_K_M
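The "strong fit for 16 GB RAM" claims above can be sanity-checked with a rough back-of-envelope calculation: Q4_K_M is a mixed 4-bit quantization that averages a little under 5 bits per weight in GGUF files. The sketch below is not ModelFit's actual scoring logic; the bits-per-weight figure and the overhead multiplier (for KV cache and runtime buffers) are assumptions chosen for illustration.

```python
# Rough memory estimate for a Q4_K_M-quantized model.
# BITS_PER_WEIGHT is an assumed average; real GGUF files vary
# slightly by architecture and tensor mix.
BITS_PER_WEIGHT = 4.85  # assumption, not an official constant

def est_gib(params_billions: float, overhead: float = 1.2) -> float:
    """Approximate resident size in GiB: quantized weights plus a
    fudge factor for KV cache and runtime buffers (a guess)."""
    weight_bytes = params_billions * 1e9 * BITS_PER_WEIGHT / 8
    return weight_bytes * overhead / 2**30

# Parameter counts taken from the model cards above.
models = {
    "Llama 3.1 8B": 8.0,
    "Gemma 2 9B": 9.0,
    "Llama 3.2 3B": 3.0,
    "Qwen2.5 1.5B": 1.5,
}
for name, billions in models.items():
    print(f"{name}: ~{est_gib(billions):.1f} GiB")
```

Even the largest 9B model lands well under 16 GiB by this estimate, which is consistent with every card above being rated at least "OK"; the smaller 2B–4B models leave far more headroom, matching their "Excellent" ratings.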