Hugging Face local models

SmolLM: The Tiniest Local AI Model

Hugging Face's SmolLM proves that AI can run on virtually anything. At just 360M parameters and a 1 GB minimum RAM requirement, SmolLM is the go-to choice for ultra-constrained devices — older iPhones, entry-level Macs, or embedded applications where every megabyte counts.
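To give a concrete sense of what "running locally" looks like at this size, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name (HuggingFaceTB/SmolLM2-360M-Instruct) and the generation settings are assumptions to verify against the model card, not details stated on this page.

    # Minimal sketch: chat with SmolLM2 360M on-device via transformers.
    # The model ID below is an assumed instruct checkpoint; confirm it on the Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "HuggingFaceTB/SmolLM2-360M-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    messages = [{"role": "user", "content": "Summarize in one line: local models keep data on-device."}]
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
    output_ids = model.generate(input_ids, max_new_tokens=64)
    print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))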

Developer: Hugging Face
Models: 1
Size Range: 0.36B – 0.36B
RAM Range: 1 GB

Key Features

Ultra-tiny at 360M parameters
Runs on any device with 1GB free RAM
Among the fastest local models thanks to its tiny footprint
Good for basic text tasks on edge devices

All SmolLM Models

Model | Size | Quant | VRAM | Min RAM | Best For | Quality | Ollama
SmolLM2 360M | 0.36B | Q4_K_M | 0.5 GB | 1 GB | Chat, Embedded | 38 |
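If you prefer running the Q4 quantized build through Ollama, the sketch below uses the official ollama Python client. The model tag smollm2:360m is an assumption; check the Ollama model library for the exact name before pulling.

    # Minimal sketch, assuming Ollama is installed and a smollm2:360m tag exists.
    import ollama

    response = ollama.chat(
        model="smollm2:360m",  # assumed tag for the quantized 360M build
        messages=[{"role": "user", "content": "Classify as positive or negative: 'Great battery life.'"}],
    )
    print(response["message"]["content"])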

Device Compatibility

Which SmolLM models can run on each device class, based on minimum RAM requirements.

Model | iPhone | Air | Pro | Studio | Mini
SmolLM2 360M (0.36B) | Excellent | Excellent | Excellent | Excellent | Excellent

RAM Requirements

SmolLM2 360M: 0.5 GB VRAM, minimum 1 GB system RAM

Frequently Asked Questions

What can SmolLM actually do?
Basic text completion, simple Q&A, and short summaries. Do not expect complex reasoning or long conversations — SmolLM is for edge cases where running any model at all is the goal.
Is SmolLM useful or just a demo?
It is genuinely useful for specific tasks: text classification, simple extraction, basic chat on devices that cannot run anything larger. Think of it as the minimum viable AI.
