| Type | From | Quantisation | Precision | Size |
|------|------|--------------|-----------|------|
| Local | LiquidAI | uint4 | No | 350M |
This model is a 4-bit quantized version of LiquidAI/LFM2-350M, converted to MLX format for efficient inference on Apple Silicon devices. LFM2-350M is a compact language model capable of text generation across multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model uses the Liquid Foundation Model architecture, optimized for edge deployment while maintaining strong performance characteristics.
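Since the card states the model targets MLX-format inference on Apple Silicon, a minimal usage sketch with the `mlx-lm` package follows. The repo id `mlx-community/LFM2-350M-4bit` is an assumption, not confirmed by this card; substitute the actual name of the quantized repository. This requires `pip install mlx-lm` and an Apple Silicon machine.

```python
# Minimal inference sketch using mlx-lm (Apple Silicon only).
# NOTE: the repo id below is hypothetical; replace it with the real
# MLX repository name for this 4-bit quantized model.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/LFM2-350M-4bit")  # hypothetical repo id

prompt = "Explain what 4-bit quantization does to a language model."
# Apply the model's chat template if one is defined in the tokenizer config.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )

text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```

The same generation can be run from the command line with `mlx_lm.generate --model <repo-id> --prompt "..."`, which is often the quickest way to smoke-test a converted checkpoint.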