LFM2-1.2B

Run locally on Apple devices with Mirai

Type: Local
From: LiquidAI
Quantisation: No
Precision: float16
Size: 1.2B
Source: Hugging Face

LFM2 is a new generation of hybrid models developed by Liquid AI specifically designed for edge AI and on-device deployment. The model comes in four sizes with 350M, 700M, 1.2B, and 2.6B parameters, offering a new standard in quality, speed, and memory efficiency. LFM2 features a hybrid architecture with multiplicative gates and short convolutions, combining 10 double-gated short-range convolution blocks with 6 grouped query attention blocks. The model achieves 3x faster training compared to its previous generation and provides 2x faster decode and prefill speed on CPU compared to competing models. LFM2 outperforms similarly-sized models across multiple benchmark categories including knowledge, mathematics, instruction following, and multilingual capabilities. It runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles. The model supports eight languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish, and is particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
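To make the hybrid layout above concrete, the sketch below enumerates a 16-block stack with the stated 10 double-gated short-convolution blocks and 6 grouped query attention blocks. The specific interleaving shown is an assumed example for illustration, not the model's published layer order.

```swift
// Illustrative only: the block counts (10 short-convolution + 6 attention)
// come from the description above, but this particular interleaving order
// is an assumption, not the published LFM2 layer order.
enum LFM2Block {
    case gatedShortConv          // double-gated short-range convolution block
    case groupedQueryAttention   // grouped query attention block
}

// One plausible 16-block stack: 10 convolution blocks, 6 attention blocks.
let lfm2Stack: [LFM2Block] = [
    .gatedShortConv, .gatedShortConv, .groupedQueryAttention,
    .gatedShortConv, .gatedShortConv, .groupedQueryAttention,
    .gatedShortConv, .gatedShortConv, .groupedQueryAttention,
    .gatedShortConv, .gatedShortConv, .groupedQueryAttention,
    .gatedShortConv, .groupedQueryAttention,
    .gatedShortConv, .groupedQueryAttention,
]

assert(lfm2Stack.filter { $0 == .gatedShortConv }.count == 10)
assert(lfm2Stack.filter { $0 == .groupedQueryAttention }.count == 6)
```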

1. Choose framework
2. Add the Mirai SDK to your project with Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift (see the Package.swift sketch after this list)
3. Set your Mirai API key (Get API Key)
4. Apply code (a hedged usage sketch follows below)
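For step 2, a minimal Package.swift showing where the uzu-swift dependency goes is sketched below. The version requirement, platform versions, and the product name "Uzu" are assumptions; match them to the actual uzu-swift release and README.

```swift
// swift-tools-version:5.9
// Minimal sketch of adding uzu-swift via SPM. The version requirement,
// platforms, and product name "Uzu" are assumptions; check the uzu-swift
// README for the real values.
import PackageDescription

let package = Package(
    name: "LFM2Demo",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/trymirai/uzu-swift.git", from: "0.1.0"),
    ],
    targets: [
        .executableTarget(
            name: "LFM2Demo",
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        ),
    ]
)
```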

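The code sample under step 4 did not load on this page. As a placeholder, here is a rough sketch of what running LFM2-1.2B on-device could look like; every identifier below (the Uzu module, UzuEngine, downloadModel, createSession, run) is a hypothetical stand-in, not confirmed uzu-swift API. Use the snippet generated in the Mirai console, or the uzu-swift README, for the real calls.

```swift
import Uzu  // assumption: uzu-swift exposes a module named "Uzu"

// Every type and method name below is a hypothetical placeholder for the
// Mirai SDK surface, not confirmed uzu-swift API.
func runLFM2Demo() async throws {
    // Activate the SDK with the API key from step 3.
    let engine = UzuEngine(apiKey: "<YOUR_MIRAI_API_KEY>")

    // Fetch the on-device weights once (float16, roughly 2.4 GB for
    // 1.2B parameters), then open a local inference session.
    try await engine.downloadModel("LFM2-1.2B")
    let session = try engine.createSession("LFM2-1.2B")

    // Single-turn chat against the locally running model.
    let reply = try session.run(prompt: "Summarize LFM2 in one sentence.",
                                maxTokens: 128)
    print(reply)
}
```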