Type: Local
From: LiquidAI
Quantisation: uint4
Precision: No
Size: 1.2B
This model is a 4-bit quantized version of LFM2-1.2B, converted to MLX format for efficient inference on edge devices. LFM2 is a compact language model that supports multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. It is designed for text generation and offers a lightweight alternative to larger models, suitable for deployment on resource-constrained hardware while maintaining reasonable performance across these languages.