This is an MLX export of the LFM2.5-1.2B-Instruct model optimized for Apple Silicon inference. The model is a 1.2 billion parameter instruction-tuned language model quantized to 4-bit precision, resulting in a 628 MB footprint while maintaining a 128K context length. It supports multiple languages including English, Japanese, Korean, French, Spanish, German, Italian, Portuguese, Arabic, and Chinese, making it suitable for multilingual text generation tasks on edge devices.
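For reference, a minimal sketch of running this export with the `mlx-lm` package (`pip install mlx-lm`) on an Apple Silicon Mac. The Hugging Face repo id below is an assumption based on the model name and publisher listed on this page; substitute the actual repo or a local path.

```python
# Sketch: generating text from the 4-bit MLX export with mlx-lm.
# The repo id is an assumption; mlx-lm also accepts a local model path.
MODEL_ID = "LiquidAI/LFM2.5-1.2B-Instruct-MLX-4bit"

def run(prompt: str, max_tokens: int = 256) -> str:
    # Imported lazily: the mlx backend only runs on Apple Silicon.
    from mlx_lm import load, generate

    model, tokenizer = load(MODEL_ID)  # downloads/loads weights + tokenizer
    messages = [{"role": "user", "content": prompt}]
    # Apply the model's chat template so the instruct model sees the
    # turn structure it was tuned on.
    templated = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
    return generate(model, tokenizer, prompt=templated, max_tokens=max_tokens)

if __name__ == "__main__":
    print(run("Summarize MLX in one sentence."))
```

The lazy import keeps the module importable on non-Apple hardware; `load` pulls the weights from the Hub on first use and caches them locally.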
LiquidAI
Available local models on Mirai:

| Name | Quantisation | Size |
|---|---|---|
| LFM2-1.2B | uint4 | 1.2B |
| LFM2-2.6B | uint4 | 2.6B |
| LFM2-350M | uint4 | 350M |
| LFM2-700M | uint4 | 700M |
| LFM2.5-1.2B-Instruct | uint4 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-4bit | uint4 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-8bit | uint4 | 1.2B |
| LFM2.5-1.2B-Thinking | uint4 | 1.2B |
| LFM2-1.2B-4bit | uint4 | 1.2B |
| LFM2-1.2B-8bit | uint4 | 1.2B |
| LFM2-2.6B-4bit | uint4 | 2.6B |
| LFM2-2.6B-8bit | uint4 | 2.6B |
| LFM2-350M-4bit | uint4 | 350M |
| LFM2-350M-8bit | uint4 | 350M |
| LFM2-700M-4bit | uint4 | 700M |
| LFM2-700M-8bit | uint4 | 700M |
| LFM2.5-1.2B-Thinking-4bit | uint4 | 1.2B |
| LFM2.5-1.2B-Thinking-8bit | uint4 | 1.2B |