| Property | Value |
|---|---|
| Type | Local |
| From | LiquidAI |
| Quantisation | uint8 |
| Precision | No |
| Size | 700M |
This model is the MLX format version of LiquidAI's LFM2-700M, a 700 million parameter language model converted for efficient inference on Apple Silicon using the MLX framework. LFM2 is a Liquid Foundation Model designed for edge deployment and supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The 8-bit quantized version provides a balance between model performance and computational efficiency for text generation tasks.
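A minimal inference sketch using the `mlx-lm` package, which is the usual way to run MLX-format models on Apple Silicon. The repository path below is a placeholder — substitute the actual Hugging Face repo id or a local directory containing this converted model. This assumes `mlx-lm` is installed (`pip install mlx-lm`) and is intended as an illustration, not the model card's official snippet:

```python
# Sketch: text generation with an MLX-format model via mlx-lm.
# Requires Apple Silicon and `pip install mlx-lm`.
from mlx_lm import load, generate

# Placeholder repo id — replace with the real path to this LFM2-700M MLX model.
model, tokenizer = load("path/to/LFM2-700M-MLX-8bit")

prompt = "Explain what an edge-deployed language model is."

# Apply the chat template if the tokenizer provides one.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        tokenize=False,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

Equivalently, `mlx-lm` ships a CLI (`mlx_lm.generate --model <repo> --prompt "..."`) for quick testing without writing any code.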