Find the best local AI model for your machine

Top pick: Llama 3.1 8B Instruct
Llama · 8B · Q4_K_M
A strong general-purpose fit for a mixed workflow; the Q4_K_M build (roughly 4.9 GB) leaves plenty of headroom in 16 GB RAM.
ollama run llama3.1:8b-instruct-q4_K_M
Llama · 8B · Q4_K_M
Meta's general-purpose 8B instruct model; a balanced default for chat, writing, and light coding that runs comfortably in 16 GB RAM at Q4_K_M (roughly 4.9 GB).
ollama run llama3.1:8b-instruct-q4_K_M
Qwen · 7B · Q4_K_M
Alibaba's Qwen2.5 7B Instruct; strong general and multilingual performance, with a Q4_K_M footprint of roughly 4.7 GB that fits easily in 16 GB RAM.
ollama run qwen2.5:7b-instruct-q4_K_M
Gemma · 9B · Q4_K_M
Google's Gemma 2 9B; the largest model on this list, trading some headroom (roughly 5.4 GB at Q4_K_M) for stronger output quality while still fitting in 16 GB RAM.
ollama run gemma2:9b-instruct-q4_K_M
Llama · 3B · Q4_K_M
Meta's Llama 3.2 3B Instruct; a fast, lightweight option (roughly 2 GB at Q4_K_M) for quick chat and summarization when you want to keep most of your RAM free.
ollama run llama3.2:3b-instruct-q4_K_M
Qwen Coder · 7B · Q4_K_M
Qwen2.5-Coder 7B, tuned specifically for code generation and editing rather than general chat; roughly 4.7 GB at Q4_K_M.
ollama run qwen2.5-coder:7b-q4_K_M
Mistral · 7B · Q4_K_M
Mistral 7B Instruct; an efficient, well-established general model with a Q4_K_M footprint of roughly 4.4 GB.
ollama run mistral:7b-instruct-q4_K_M
Phi · 3.8B · Q4_K_M
Microsoft's Phi-3 Mini (3.8B); punches above its size on reasoning tasks and needs only about 2.3 GB at Q4_K_M.
ollama run phi3:mini-q4_K_M
Qwen · 3B · Q4_K_M
Qwen2.5 3B Instruct; a compact general model (roughly 1.9 GB at Q4_K_M) for everyday tasks on a tighter memory budget.
ollama run qwen2.5:3b-instruct-q4_K_M
Gemma · 2B · Q4_K_M
Google's Gemma 2 2B; very small (roughly 1.6 GB at Q4_K_M) and quick to load, suited to simple chat and drafting.
ollama run gemma2:2b-instruct-q4_K_M
Qwen · 1.5B · Q4_K_M
Qwen2.5 1.5B Instruct; the smallest option here (roughly 1 GB at Q4_K_M), best for fast, simple completions on almost any machine.
ollama run qwen2.5:1.5b-instruct-q4_K_M
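The "fits in 16 GB RAM" claims above come down to simple arithmetic: parameter count times average bits per weight. A minimal sketch, assuming Q4_K_M averages roughly 4.85 bits per weight (the helper name and the exact bits-per-weight figure are illustrative assumptions, not values reported by the tool above):

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float = 4.85) -> float:
    """Rough weight size in GB for a quantized model.

    4.85 bits per weight is an assumed average for Q4_K_M-style
    quantization; the true figure varies by model and tensor mix.
    """
    return params_billions * bits_per_weight / 8

# An 8B model at Q4_K_M works out to just under 5 GB of weights,
# leaving room for the KV cache, runtime, and OS inside 16 GB.
llama_8b = quantized_size_gb(8.0)
qwen_1_5b = quantized_size_gb(1.5)
```

Note that this covers weights only; the KV cache grows with context length, so a long-context session needs a few extra GB on top of these figures.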