LFM2.5-1.2B-Thinking-8bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint8

Size: 1.2B

Source: Hugging Face

This model is a converted version of LiquidAI's LFM2.5-1.2B-Thinking model optimized for MLX format. It is a 1.2 billion parameter language model with thinking capabilities, quantized to 8-bit precision for efficient edge deployment. The model supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish, making it suitable for multilingual text generation tasks.

1. Choose a framework.
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift
3. Set your Mirai API key.
4. Apply the code.
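Adding the dependency in `Package.swift` might look like the sketch below. This is standard SPM syntax, but the version requirement and the product name (`Uzu`) are assumptions; check the uzu-swift repository for the exact values.

```swift
// swift-tools-version:5.9
import PackageDescription

// Sketch of a Package.swift that pulls in the Mirai uzu-swift SDK.
// The branch requirement and product name "Uzu" are assumptions.
let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/trymirai/uzu-swift", branch: "main"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```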
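Once the SDK is installed and the API key is set, running the model on-device might look roughly like the following sketch. All names here (`Uzu`, `MiraiEngine`, `loadModel`, `generate`) are illustrative assumptions, not the SDK's confirmed API; consult the uzu-swift repository for the actual interface.

```swift
import Uzu  // assumed module name for the Mirai uzu-swift SDK

// Hypothetical sketch of loading and prompting LFM2.5-1.2B-Thinking-8bit.
// Every type and method below is an assumption for illustration only.
let engine = MiraiEngine(apiKey: "YOUR_MIRAI_API_KEY")
let model = try await engine.loadModel("LFM2.5-1.2B-Thinking-8bit")
let reply = try await model.generate(prompt: "Explain quantisation in one sentence.")
print(reply)
```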
