| Field | Value |
|---|---|
| Type | Local |
| From | LiquidAI |
| Quantisation | uint8 |
| Precision | No |
| Size | 2.6B |
This model is an MLX-format conversion of LiquidAI's LFM2-2.6B, a 2.6 billion parameter language model quantized to 8-bit precision for efficient inference on edge devices. LFM2 is a Liquid Foundation Model designed for text generation across multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The conversion targets the MLX framework to enable fast, low-memory inference on Apple Silicon while preserving the original model's output quality.
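If the conversion is published on the Hugging Face Hub, it can be loaded with the `mlx-lm` package in the usual way. A minimal sketch, assuming Apple Silicon, `pip install mlx-lm`, and a placeholder repo id (substitute the actual Hub path of this conversion):

```python
# Requires Apple Silicon and the mlx-lm package: pip install mlx-lm
from mlx_lm import load, generate

# Hypothetical repo id -- replace with the real Hub path of this conversion.
model, tokenizer = load("mlx-community/LFM2-2.6B-8bit")

messages = [{"role": "user", "content": "Summarize what LFM2 is in one sentence."}]

# Use the model's chat template when the tokenizer defines one.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion; verbose=True streams tokens to stdout.
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```

Because `mlx` only runs on Apple Silicon, this snippet is illustrative rather than portable; on other hardware the same weights would need a different runtime.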