| Type | From | Quantisation | Precision | Size |
|------|------|--------------|-----------|------|
| Local | LiquidAI | uint8 | No | 350M |
This is an 8-bit quantized version of the LFM2-350M model converted to MLX format for efficient inference on Apple Silicon devices. The model is based on LiquidAI's LFM2-350M and supports text generation across multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. It is designed for edge deployment with reduced memory requirements through 8-bit quantization while maintaining the capabilities of the original Liquid Foundation Model 2.
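Inference with MLX-format models typically goes through the `mlx-lm` package. The sketch below shows the common load-and-generate pattern; the repo id is a placeholder (substitute the actual Hugging Face repo id for this quantized conversion), and running it requires an Apple Silicon machine with `mlx-lm` installed.

```python
# Minimal usage sketch, assuming the `mlx-lm` package is installed.
# The repo id below is a placeholder for this quantized conversion.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2-350M")  # placeholder repo id
response = generate(model, tokenizer, prompt="Hello, how are you?", max_tokens=64)
print(response)
```

Because the weights are 8-bit quantized, memory use at load time is roughly a quarter of the float32 original, which is what makes on-device inference on a 350M-parameter model practical.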