| Type | From | Quantisation | Precision | Size |
| --- | --- | --- | --- | --- |
| Local | LiquidAI | uint8 | No | 1.2B |
MLX export of LFM2.5-1.2B-Instruct for Apple Silicon inference. This is a 1.2 billion parameter language model optimized for edge deployment with 8-bit quantization, supporting a 128K context length and trained for instruction-following across multiple languages including English, Japanese, Korean, French, Spanish, German, Italian, Portuguese, Arabic, and Chinese.
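A minimal sketch of running this export locally with the `mlx-lm` package (`pip install mlx-lm`), which requires an Apple Silicon Mac. The repo id passed to `load` is an assumption; substitute the actual path or Hub id of this MLX export.

```python
from mlx_lm import load, generate

# Hypothetical repo id for this 8-bit MLX export -- replace with the real one.
model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-MLX-8bit")

# Instruction-tuned models expect the chat template, not a raw string.
messages = [{"role": "user", "content": "Summarise MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion on-device (Metal); max_tokens caps the response length.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

The same export can also be tried from the command line with `mlx_lm.generate --model <repo-or-path> --prompt "..."`.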