| Type | From | Quantisation | Precision | Size |
|------|------|--------------|-----------|------|
| Local | LiquidAI | uint4 | No | 700M |
This model is a 4-bit quantized version of LiquidAI's LFM2-700M, converted to MLX format for efficient inference. LFM2-700M is a language model capable of text generation across multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. It has been optimized for edge deployment and can be served with the MLX framework for fast inference on Apple Silicon.
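A minimal sketch of loading the model with the `mlx-lm` package (`pip install mlx-lm`), which runs MLX-format models on Apple Silicon. The model path below is an assumption — substitute the actual repository ID or local directory holding the quantized weights:

```python
# Sketch: text generation with mlx-lm on Apple Silicon.
from mlx_lm import load, generate

# Hypothetical model path -- replace with the real repo ID or local folder.
model, tokenizer = load("path/to/LFM2-700M-4bit-mlx")

prompt = "Translate 'good morning' into French."
text = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(text)
```

Because the weights are stored in 4-bit, no extra quantization flags are needed at load time; `load` reads the quantization config saved alongside the model.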