LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. The family consists of four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. They train 3x faster than the previous generation and deliver 2x faster decode and prefill speed on CPU compared to similarly sized competitors. LFM2 features a new hybrid architecture combining multiplicative gates and short convolutions, enabling flexible deployment across CPU, GPU, and NPU hardware in smartphones, laptops, and vehicles. The models support eight languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. They are trained with knowledge distillation, large-scale supervised fine-tuning on downstream tasks, custom direct preference optimization, and iterative model merging. Due to their small size, LFM2 models are recommended for fine-tuning on narrow use cases such as agentic tasks, data extraction, retrieval-augmented generation, creative writing, and multi-turn conversations, rather than for knowledge-intensive tasks or programming-focused applications.
LiquidAI
available local models on Mirai:
Name                            Quantisation   Size
LFM2-1.2B                       No             1.2B
LFM2-2.6B                       No             2.6B
LFM2-350M                       No             350M
LFM2-700M                       No             700M
LFM2.5-1.2B-Instruct            No             1.2B
LFM2.5-1.2B-Instruct-MLX-4bit   No             1.2B
LFM2.5-1.2B-Instruct-MLX-8bit   No             1.2B
LFM2.5-1.2B-Thinking            No             1.2B
LFM2-1.2B-4bit                  No             1.2B
LFM2-1.2B-8bit                  No             1.2B
LFM2-2.6B-4bit                  No             2.6B
LFM2-2.6B-8bit                  No             2.6B
LFM2-350M-4bit                  No             350M
LFM2-350M-8bit                  No             350M
LFM2-700M-4bit                  No             700M
LFM2-700M-8bit                  No             700M
LFM2.5-1.2B-Thinking-4bit       No             1.2B
LFM2.5-1.2B-Thinking-8bit       No             1.2B
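When picking a checkpoint for a given device, a rough guide is the nominal weight footprint: parameters x bits per weight / 8 bytes, ignoring activations, KV cache, and runtime overhead. The sketch below illustrates this arithmetic; the parameter counts are taken from the names above, while the bit widths (16-bit for unquantized, 4/8-bit for the quantized variants) are assumptions inferred from the checkpoint names, not figures from this catalogue:

```python
# Rough on-device memory estimate for a few LFM2 checkpoints.
# Assumption: weight footprint ~= parameters * bits / 8 bytes,
# ignoring activations, KV cache, and runtime overhead.

CHECKPOINTS = {
    # name: (parameters, assumed bits per weight)
    "LFM2-350M": (350e6, 16),
    "LFM2-700M": (700e6, 16),
    "LFM2-1.2B": (1.2e9, 16),
    "LFM2-2.6B": (2.6e9, 16),
    "LFM2-1.2B-4bit": (1.2e9, 4),
    "LFM2-1.2B-8bit": (1.2e9, 8),
    "LFM2-2.6B-4bit": (2.6e9, 4),
    "LFM2-2.6B-8bit": (2.6e9, 8),
}

def weight_gb(name: str) -> float:
    """Approximate weight size in gigabytes (1 GB = 1e9 bytes)."""
    params, bits = CHECKPOINTS[name]
    return params * bits / 8 / 1e9

def fits(name: str, budget_gb: float) -> bool:
    """True if the checkpoint's weights fit within a memory budget."""
    return weight_gb(name) <= budget_gb
```

By this estimate, LFM2-1.2B-4bit needs roughly 0.6 GB of weights while the unquantized LFM2-2.6B needs about 5.2 GB, which is why the 4-bit and 8-bit variants exist for phone-class devices.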