| Property | Value |
|---|---|
| Type | Local |
| From | LiquidAI |
| Quantisation | uint8 |
| Precision | No |
| Size | 1.2B |
This model is an 8-bit quantized version of LiquidAI/LFM2-1.2B converted to MLX format for efficient inference on Apple Silicon devices. LFM2 is a compact language model designed for edge deployment, supporting multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model is optimized for text generation tasks and can be used with the mlx-lm library for fast inference on Mac hardware.
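A minimal generation sketch with mlx-lm (Apple Silicon only). The model path below is a placeholder for this repository's Hugging Face ID, which is an assumption here and may differ:

```python
# Requires: pip install mlx-lm  (runs on Apple Silicon via MLX)
from mlx_lm import load, generate

# Placeholder repo ID -- substitute this model's actual Hugging Face path.
model, tokenizer = load("path/to/LFM2-1.2B-8bit-mlx")

prompt = "Explain edge deployment in one sentence."
# Use the chat template when the tokenizer provides one.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(text)
```

The same workflow is available from the command line via `mlx_lm.generate --model <repo-id> --prompt "..."`.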