| Field | Value |
|---|---|
| Type | Local |
| From | LiquidAI |
| Quantisation | uint4 |
| Precision | No |
| Size | 1.2B |
This model is a 4-bit quantized version of LiquidAI/LFM2.5-1.2B-Thinking, converted to MLX format for efficient inference on edge devices. It is a 1.2-billion-parameter language model for text generation in multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model supports reasoning ("thinking") output and is optimized to run on Apple Silicon and other edge hardware through the MLX framework.
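A minimal usage sketch with the `mlx-lm` package, which is the usual way to run MLX-format models on Apple Silicon. The repository id below is an assumption (this card does not state the exact MLX repo name), so substitute the id of this model's actual MLX upload:

```python
# Sketch: generating text from the 4-bit MLX build with mlx-lm.
# Requires Apple Silicon and `pip install mlx-lm`.
from mlx_lm import load, generate

# Hypothetical repo id -- replace with this model's actual MLX repository.
model, tokenizer = load("mlx-community/LFM2.5-1.2B-Thinking-4bit")

prompt = "Explain 4-bit quantization in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(text)
```

Because the base model is a "thinking" variant, the generated output may include intermediate reasoning before the final answer, depending on the chat template applied by the tokenizer.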