LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. The family consists of four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters. Compared to the previous generation, the models train 3x faster, and they deliver 2x faster decode and prefill speed on CPU than similarly sized competitors. LFM2 features a new hybrid architecture that combines multiplicative gates with short convolutions, enabling flexible deployment across CPU, GPU, and NPU hardware in smartphones, laptops, and vehicles. The models support eight languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. They are trained with knowledge distillation, large-scale supervised fine-tuning on downstream tasks, custom direct preference optimization, and iterative model merging. Due to their small size, LFM2 models are best suited to fine-tuning on narrow use cases such as agentic tasks, data extraction, retrieval-augmented generation, creative writing, and multi-turn conversations, rather than to knowledge-intensive tasks or programming-focused applications.