The RTX 4090 remains one of the best GPUs for local AI inference. With 24 GB of GDDR6X and roughly 104 tokens per second on an 8B Q4_K_M model, it handles everything from small chat models to 32B-parameter reasoning models with ease. Still the gold standard for serious AI enthusiasts.
| GPU | VRAM | Speed (8B Q4_K_M) | Bandwidth | Price |
|---|---|---|---|---|
| RTX 3090 | 24 GB | 87 tok/s | 936 GB/s | $900 |
| RTX 5080 | 16 GB | 94 tok/s | 960 GB/s | $999 |
| RTX 5090 | 32 GB | 145 tok/s | 1792 GB/s | $2,499 |
| RTX 4090 | 24 GB | 104 tok/s | 1008 GB/s | $2,574 |
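As a rough sanity check on the speed column: single-GPU decoding is largely memory-bound, since every generated token has to stream the quantized weights from VRAM, so tokens per second is roughly capped at memory bandwidth divided by model size. The sketch below is a back-of-the-envelope estimate only (it ignores KV-cache reads and kernel overhead) and uses the ~6.5 GB Q4_K_M weight size quoted for the 8B models below; real-world figures such as ~104 tok/s on the RTX 4090 land below this ceiling.

```python
# Back-of-the-envelope decode ceiling: bandwidth (GB/s) / weights read per token (GB).
# Assumes a purely memory-bound decode that streams all quantized weights once per token.
bandwidth_gbps = {"RTX 3090": 936, "RTX 4090": 1008, "RTX 5080": 960, "RTX 5090": 1792}
weights_gb = 6.5  # approximate size of an 8B model at Q4_K_M (figure quoted in the cards below)

for gpu, bw in bandwidth_gbps.items():
    print(f"{gpu}: <= ~{bw / weights_gb:.0f} tok/s theoretical ceiling")
```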
Llama / 8B / Q4_K_M / ~6.5 GB
Best for: Chat, Coding · Pop: 94/100
Perf: ~104.0 tok/s · first token ~0.3s
Fits in 24 GB VRAM with room to spare. Best for chat, coding on RTX 4090.
ollama run llama3.1:8b-instruct-q4_K_M
Qwen / 9B / Q4_K_M / ~7 GB
Best for: Quality, Coding, Reasoning · Pop: 86/100
Perf: ~94.1 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for quality, coding, reasoning on RTX 4090.
ollama run qwen3.5:9b-instruct-q4_K_M
Qwen / 8B / Q4_K_M / ~6.5 GB
Best for: Chat, Coding · Pop: 88/100
Perf: ~104.0 tok/s · first token ~0.3s
Fits in 24 GB VRAM with room to spare. Best for chat, coding on RTX 4090.
ollama run qwen3:8b-q4_K_M
LFM2 / 24B / Q4_K_M / ~14 GB
Best for: Local AI agents, privacy-first tool calling, MCP workflows · Pop: 80/100
Perf: ~40.9 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for local AI agents, privacy-first tool calling, MCP workflows on RTX 4090.
ollama run liquidai/lfm2:24b-a2b-instruct-q4_K_M
Llama / 8B / Q5_K_M / ~8 GB
Best for: Chat, Coding · Pop: 82/100
Perf: ~89.4 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for chat, coding on RTX 4090.
ollama run llama3.1:8b-instruct-q5_K_M
Gemma / 9B / Q4_K_M / ~7 GB
Best for: Chat, Coding · Pop: 81/100
Perf: ~94.1 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for chat, coding on RTX 4090.
ollama run gemma2:9b-instruct-q4_K_M
Qwen / 14B / Q4_K_M / ~11 GB
Best for: Coding, Quality · Pop: 84/100
Perf: ~64.6 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for coding, quality on RTX 4090.
ollama run qwen3:14b-q4_K_M
Qwen / 14B / Q4_K_M / ~11 GB
Best for: Coding, Chat · Pop: 80/100
Perf: ~64.6 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for coding, chat on RTX 4090.
ollama run qwen2.5:14b-instruct-q4_K_M
Qwen / 14B / Q4_K_M / ~11 GB
Best for: Coding · Pop: 79/100
Perf: ~64.6 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for coding on RTX 4090.
ollama run qwen2.5-coder:14b-q4_K_M
Mistral / 12B / Q4_K_M / ~9.5 GB
Best for: Chat, Translation · Pop: 78/100
Perf: ~73.7 tok/s · first token ~0.4s
Fits in 24 GB VRAM with room to spare. Best for chat, translation on RTX 4090.
ollama run mistral-nemo:12b-q4_K_M
With 24 GB of VRAM, the RTX 4090 can run models up to 32B parameters. Top recommendations include Llama 3.1 8B Instruct, Qwen3.5 9B Instruct, and Qwen3 8B.
The RTX 4090 achieves about 104 tokens per second with Qwen3 8B at Q4_K_M quantization; smaller models run faster and larger models run slower.
24 GB of VRAM is excellent for local AI. You can comfortably run models up to 32B parameters with room left for KV cache. All 10 of our top 10 recommended models run at full speed.
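If you want to sanity-check fit for a particular model and context length yourself, the sketch below adds an fp16 KV-cache estimate on top of the quoted Q4_K_M weight size. The per-token KV-cache formula (2 × layers × KV heads × head dim × 2 bytes) is the standard one for GQA models, but the layer/head numbers used here are the commonly published figures for Llama 3.1 8B and should be treated as illustrative assumptions, not measured values.

```python
# Rough VRAM fit check: quantized weights + fp16 KV cache + a little runtime overhead.
# The weight size comes from the card above; architecture numbers are assumptions.
VRAM_GB = 24

def kv_cache_gb(n_layers, n_kv_heads, head_dim, context_tokens, bytes_per_elem=2):
    """fp16 KV cache: K and V tensors for every layer, per token."""
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * context_tokens / 1e9

weights_gb = 6.5                                          # Llama 3.1 8B Instruct, Q4_K_M
cache_gb = kv_cache_gb(32, 8, 128, context_tokens=8192)   # ~1.1 GB at 8k context
overhead_gb = 1.0                                         # CUDA context, buffers (rough guess)

total = weights_gb + cache_gb + overhead_gb
print(f"Estimated use: {total:.1f} GB of {VRAM_GB} GB -> fits: {total < VRAM_GB}")
```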
Install Ollama from ollama.com, then run models directly. For example: ollama run llama3.1:8b-instruct-q4_K_M. Ollama automatically detects your NVIDIA GPU and uses CUDA acceleration.
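Beyond the CLI, you can also drive a model programmatically once it has been pulled. The snippet below is a minimal sketch that calls Ollama's local REST API, assuming the Ollama server is running on its default port (11434) and the model tag from the example above has already been downloaded.

```python
# Minimal sketch: generate text via Ollama's local HTTP API (stdlib only).
# Assumes `ollama run llama3.1:8b-instruct-q4_K_M` (or `ollama pull`) has already
# fetched the model and the Ollama server is listening on localhost:11434.
import json
import urllib.request

payload = {
    "model": "llama3.1:8b-instruct-q4_K_M",
    "prompt": "Explain the KV cache in one sentence.",
    "stream": False,   # return a single JSON object instead of streamed chunks
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

With streaming left at its default, the same endpoint returns newline-delimited JSON chunks instead of a single response, which is what you want for interactive chat UIs.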
Use our interactive wizard to compare models across Apple Silicon and NVIDIA GPUs.
Open ModelFit Wizard →