- **Type:** Local
- **From:** LiquidAI
- **Quantisation:** uint8
- **Precision:** No
- **Size:** 1.2B
This model is a converted version of LiquidAI's LFM2.5-1.2B-Thinking model optimized for MLX format. It is a 1.2 billion parameter language model with thinking capabilities, quantized to 8-bit precision for efficient edge deployment. The model supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish, making it suitable for multilingual text generation tasks.
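A minimal usage sketch with the `mlx-lm` package, which is the standard way to run MLX-format models like this one. The repository id below is an assumption inferred from the model name, not confirmed by this card; substitute the actual path. Running it requires Apple Silicon and `pip install mlx-lm`.

```python
# Minimal sketch: load the converted model and generate text with mlx-lm.
# The repo id is a hypothetical placeholder; replace with the real model path.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Thinking-MLX-8bit")  # assumed path

# Wrap the prompt in the model's chat template so the thinking-style
# formatting the model was trained with is applied.
messages = [{"role": "user", "content": "Explain 8-bit quantisation in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

Because the weights are quantised to 8-bit, the on-disk and in-memory footprint is roughly 1.3 GB for 1.2B parameters, which is what makes local, edge deployment practical.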