Type: Local
From: LiquidAI
Quantisation: uint4
Precision: No
Size: 1.2B
This is an MLX export of the LFM2.5-1.2B-Instruct model optimized for Apple Silicon inference. The model is a 1.2 billion parameter instruction-tuned language model quantized to 4-bit precision, resulting in a 628 MB footprint while maintaining a 128K context length. It supports multiple languages including English, Japanese, Korean, French, Spanish, German, Italian, Portuguese, Arabic, and Chinese, making it suitable for multilingual text generation tasks on edge devices.
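Since this is an MLX export, it can be loaded with the `mlx-lm` package on an Apple Silicon machine. A minimal sketch is below; the model identifier is a placeholder assumption (substitute the actual local path or hub id of this export), and the code will only run on Apple Silicon hardware:

```python
# Minimal usage sketch for an MLX 4-bit export, using the mlx-lm package.
# NOTE: the model id below is an assumption — replace it with the real
# path or hub id of this export. Requires Apple Silicon.
from mlx_lm import load, generate

model, tokenizer = load("LFM2.5-1.2B-Instruct-MLX-4bit")  # hypothetical id

prompt = "Summarise the benefits of on-device inference in two sentences."

# Instruction-tuned models expect their chat template to be applied.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

Because the weights are quantized to 4-bit, the roughly 628 MB footprint fits comfortably in unified memory on consumer Apple Silicon devices, which is what makes edge inference with the full 128K context practical.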