LFM2-350M

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: No

Precision: float16

Size: 350M

Source: Hugging Face

LFM2 is a new generation of hybrid models developed by Liquid AI, designed specifically for edge AI and on-device deployment. The family includes four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters that achieve 3x faster training compared to the previous generation and 2x faster decode and prefill speed on CPU compared to competing models.

LFM2 features a new hybrid architecture with multiplicative gates and short convolutions, combining 10 double-gated short-range convolution blocks with 6 grouped-query attention blocks. The models outperform similarly sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities. They support eight languages (English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish) with a 32,768-token context length.

The models are particularly well suited to agentic tasks, data extraction, retrieval-augmented generation, creative writing, and multi-turn conversations. Liquid AI recommends fine-tuning LFM2 on narrow use cases to maximize performance and notes that the models are not recommended for knowledge-intensive tasks or programming-heavy applications. The training approach involved knowledge distillation from the LFM1-7B teacher model, large-scale supervised fine-tuning, custom direct preference optimization, and iterative model merging.

1. Choose a framework.
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift
3. Set your Mirai API key.
4. Apply the code.
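Step 2 above can be declared in a `Package.swift` manifest. This is a minimal sketch: the package URL comes from the steps above, but the branch pin, target name, and the "Uzu" product name are assumptions to check against the repository's own manifest.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp", // placeholder app name
    dependencies: [
        // Mirai SDK dependency via Swift Package Manager.
        // Pinning to "main" is an assumption; prefer a tagged release if one exists.
        .package(url: "https://github.com/trymirai/uzu-swift", branch: "main"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            // The "Uzu" product name is an assumption; verify it in the
            // package's own Package.swift before building.
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

Alternatively, in Xcode the same dependency can be added through File > Add Package Dependencies using the URL above.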
