LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. The family includes four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. Compared to the previous generation, LFM2 trains 3x faster, and it delivers 2x faster decode and prefill speed on CPU than similarly sized models. The architecture is a new hybrid of multiplicative gates and short convolutions, combining double-gated short-range convolution blocks with grouped query attention blocks. The models are optimized for flexible deployment on CPU, GPU, and NPU hardware in smartphones, laptops, and vehicles. LFM2 outperforms similarly sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capability, with support for English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. Thanks to their small size, these models are particularly suited to agentic tasks, data extraction, retrieval-augmented generation, creative writing, and multi-turn conversations; they are best fine-tuned for narrow use cases rather than knowledge-intensive or programming-heavy tasks.
LiquidAI
available local models on Mirai:
Name                            Quantisation   Size
LFM2-1.2B                       No             1.2B
LFM2-2.6B                       No             2.6B
LFM2-350M                       No             350M
LFM2-700M                       No             700M
LFM2.5-1.2B-Instruct            No             1.2B
LFM2.5-1.2B-Instruct-MLX-4bit   4-bit (MLX)    1.2B
LFM2.5-1.2B-Instruct-MLX-8bit   8-bit (MLX)    1.2B
LFM2.5-1.2B-Thinking            No             1.2B
LFM2-1.2B-4bit                  4-bit          1.2B
LFM2-1.2B-8bit                  8-bit          1.2B
LFM2-2.6B-4bit                  4-bit          2.6B
LFM2-2.6B-8bit                  8-bit          2.6B
LFM2-350M-4bit                  4-bit          350M
LFM2-350M-8bit                  8-bit          350M
LFM2-700M-4bit                  4-bit          700M
LFM2-700M-8bit                  8-bit          700M
LFM2.5-1.2B-Thinking-4bit       4-bit          1.2B
LFM2.5-1.2B-Thinking-8bit       8-bit          1.2B
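The table lists each checkpoint's parameter count and quantization, which together roughly determine the weight footprint on device (parameters x bits per weight / 8). A minimal sketch of that estimate, assuming unquantized checkpoints are stored in 16-bit precision (an assumption, not stated in the catalog):

```python
# Rough on-device memory estimate for a checkpoint: parameters x bits per weight.
# Illustrative only: real files also include quantization metadata, any layers
# kept at higher precision, and tokenizer/config files.

def approx_weight_size_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bits_per_weight / 8 / 1e9

# A few catalog entries, with bit widths inferred from the model names;
# 16-bit for unquantized checkpoints is an assumed default.
catalog = {
    "LFM2-350M":      (0.35e9, 16),
    "LFM2-350M-4bit": (0.35e9, 4),
    "LFM2-1.2B":      (1.2e9, 16),
    "LFM2-1.2B-4bit": (1.2e9, 4),
    "LFM2-2.6B-8bit": (2.6e9, 8),
}

for name, (params, bits) in catalog.items():
    print(f"{name}: ~{approx_weight_size_gb(params, bits):.2f} GB")
```

Under these assumptions, the 4-bit 1.2B checkpoint needs roughly a quarter of the memory of its unquantized counterpart (about 0.6 GB vs 2.4 GB of weights), which is what makes the quantized variants attractive for smartphones.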