
LFM2 Models: Liquid AI for Agentic Workflows

Liquid AI's LFM2 brings a fresh architecture to local AI. Unlike traditional transformers, LFM2 uses liquid neural networks that excel at tool use and agentic tasks. If you need a local model that can reliably call APIs and follow structured workflows, LFM2 is worth a look.

Developer: Liquid AI
Models: 2
Size Range: 8B – 24B
RAM Range: 8 – 16 GB

Key Features

Novel liquid neural network architecture
Optimized for agentic tool use
Efficient inference on Apple Silicon
Good for structured tasks and API integration
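
Since the family is tuned for tool calling, here is a minimal sketch of what a tool-calling request to a locally served model could look like, using the payload shape of Ollama's `/api/chat` endpoint. The model tag `lfm2:8b` and the `get_weather` tool are illustrative assumptions, not published identifiers.

```python
def build_tool_chat_request(model, user_message, tools):
    """Build a chat payload in the Ollama /api/chat format with tool
    definitions. POST this as JSON to http://localhost:11434/api/chat
    when an Ollama server is running."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": tools,
        "stream": False,
    }

# Hypothetical tool definition in the OpenAI-style function schema
# that Ollama's tool-calling API accepts.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Fetch current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = build_tool_chat_request(
    "lfm2:8b", "What's the weather in Tokyo?", [weather_tool]
)
```

A tool-capable model responds with a `tool_calls` entry naming the function and its arguments, which your agent loop then executes and feeds back as a `tool` message.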

All LFM2 Models

| Model | Size | Quant | VRAM | Min RAM | Best For | Quality |
|---|---|---|---|---|---|---|
| LFM2 8B-A1B Instruct | 8B | Q4_K_M | 6 GB | 8 GB | Local agents, tool calling, fast chat | 78 |
| LFM2 24B-A2B Instruct | 24B | Q4_K_M | 14 GB | 16 GB | Local AI agents, privacy-first tool calling, MCP workflows | 85 |

Device Compatibility

Which LFM2 models can run on each device class, based on minimum RAM requirements.

| Model | iPhone | Air | Pro | Studio | Mini |
|---|---|---|---|---|---|
| LFM2 8B-A1B Instruct (8B) | Possible | Possible | Excellent | Excellent | Excellent |
| LFM2 24B-A2B Instruct (24B) | No | Possible | Possible | Excellent | Possible |
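
Ratings like these usually reduce to a headroom rule: a device below the model's minimum RAM can't run it, one with ample headroom runs it comfortably, and anything in between is marginal. A sketch of such a rule; the thresholds are assumptions, and real ratings also weigh memory bandwidth and thermals, so this will not reproduce the table exactly.

```python
def compatibility(device_ram_gb, model_min_ram_gb):
    """Illustrative headroom rule: below minimum -> "No",
    roughly 2x the minimum or more -> "Excellent",
    otherwise -> "Possible". Thresholds are assumed."""
    if device_ram_gb < model_min_ram_gb:
        return "No"
    if device_ram_gb >= 2 * model_min_ram_gb:
        return "Excellent"
    return "Possible"

# e.g. an 8 GB iPhone against the 24B model's 16 GB minimum:
print(compatibility(8, 16))   # "No"
# a 32 GB Mac Studio against the same model:
print(compatibility(32, 16))  # "Excellent"
```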

RAM Requirements

LFM2 8B-A1B Instruct: 6 GB VRAM (min 8 GB RAM)
LFM2 24B-A2B Instruct: 14 GB VRAM (min 16 GB RAM)

Frequently Asked Questions

What makes LFM2 different from other models?
LFM2 uses a liquid neural network architecture instead of standard transformers. This makes it particularly good at tool use, structured output, and agentic workflows where reliability matters.
What RAM does LFM2 need?
LFM2 8B needs 8 GB of RAM and LFM2 24B needs 16 GB. Both are efficient enough for a MacBook Air or Mac mini.
Is LFM2 good for general chat?
It works for chat but shines on structured tasks. For general conversation, Qwen or Llama models at similar sizes offer better conversational quality.
