LFM2.5-1.2B-Thinking-8bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint8

Precision: float16

Size: 1.2B

Source: Hugging Face

This model is a converted version of LiquidAI's LFM2.5-1.2B-Thinking model optimized for MLX format. It is a 1.2 billion parameter language model with thinking capabilities, quantized to 8-bit precision for efficient edge deployment. The model supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish, making it suitable for multilingual text generation tasks.

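To make the "quantized to 8-bit precision" claim concrete, here is a minimal sketch of generic affine uint8 quantization: float weights are mapped to integers 0–255 via a scale and offset, then mapped back at inference time. This illustrates the general technique only; it is not the exact quantization scheme MLX uses for this model.

```python
# Generic affine uint8 quantization sketch (illustrative, not the MLX scheme).
def quantize_uint8(weights):
    lo, hi = min(weights), max(weights)
    # One quantization step covers (hi - lo) / 255 of the float range.
    scale = (hi - lo) / 255 if hi != lo else 1.0
    q = [round((w - lo) / scale) for w in weights]  # integers in 0..255
    return q, scale, lo

def dequantize_uint8(q, scale, lo):
    # Recover approximate float values from the stored integers.
    return [v * scale + lo for v in q]

weights = [-0.51, 0.0, 0.27, 1.3]
q, scale, lo = quantize_uint8(weights)
restored = dequantize_uint8(q, scale, lo)

# Rounding error is bounded by one quantization step.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
print(q)        # stored uint8 values
print(restored) # approximate reconstruction of the originals
```

Storing each weight as one byte instead of two (float16) roughly halves the on-disk and in-memory footprint, which is what makes a 1.2B-parameter model practical for edge deployment.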