| Property | Value |
|---|---|
| Type | Local |
| From | LiquidAI |
| Quantisation | uint4 |
| Precision | No |
| Size | 2.6B |
This is a 4-bit quantized version of the LFM2-2.6B model converted to MLX format for efficient inference on edge devices. LFM2 is a compact language model designed for text generation tasks and supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model has been optimized for the MLX framework to enable fast and efficient execution on compatible hardware.
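
Since the conversion targets the MLX runtime, the model can be run with the `mlx-lm` package. The snippet below is a minimal sketch following the standard mlx-lm usage pattern; the repository path and generation parameters are illustrative assumptions, not part of this release.

```python
# Minimal sketch of running this conversion with mlx-lm.
# pip install mlx-lm
from mlx_lm import load, generate

# Placeholder path -- substitute the actual Hugging Face repo id
# or local directory of this 4-bit MLX conversion.
model, tokenizer = load("LFM2-2.6B-4bit-mlx")

prompt = "What is the capital of France?"

# Wrap the prompt with the model's chat template if one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

# Generate up to 256 tokens; verbose=True streams the output.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(response)
```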