LFM2-2.6B

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: No

Precision: float16

Size: 2.6B

Source: Hugging Face

LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. The family includes four post-trained checkpoints with 350M, 700M, 1.2B, and 2.6B parameters that achieve 3x faster training compared to the previous generation, with 2x faster decode and prefill speed on CPU compared to similar models. LFM2 features a new hybrid architecture with multiplicative gates and short convolutions, combining double-gated short-range convolution blocks with grouped query attention blocks. The models are optimized for flexible deployment on CPU, GPU, and NPU hardware for smartphones, laptops, and vehicles. LFM2 outperforms similarly-sized models across multiple benchmark categories including knowledge, mathematics, instruction following, and multilingual capabilities, supporting English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. Due to their small size, these models are particularly suited for agentic tasks, data extraction, retrieval-augmented generation, creative writing, and multi-turn conversations, and are recommended for fine-tuning on narrow use cases rather than knowledge-intensive or programming-heavy tasks.

1. Choose a framework.
2. Run the following command to install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift
3. Set your Mirai API key (Get API Key).
4. Apply the code.
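The SPM step above can be sketched as a `Package.swift` dependency entry. This is a minimal sketch, not the official snippet: the product name "Uzu" and the `main` branch are assumptions; check the uzu-swift README for the exact product/module names and the recommended version pin.

```swift
// swift-tools-version:5.9
// Package.swift — sketch of adding the Mirai SDK (uzu-swift) as a dependency.
// NOTE: the product name "Uzu" and branch "main" are assumptions; consult
// the repository's README for the exact values.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        // The repository URL comes from step 2 above.
        .package(url: "https://github.com/trymirai/uzu-swift.git", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

In Xcode, the same dependency can instead be added via File > Add Package Dependencies using the repository URL.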
