This model is LiquidAI's LFM2.5-1.2B-Thinking converted to MLX format and quantized to 8-bit precision for efficient edge deployment. It is a 1.2-billion-parameter language model with thinking (reasoning) capabilities. The model supports multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish, making it suitable for multilingual text generation tasks.
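Because the weights are in MLX format, the model can be run locally on Apple silicon with the `mlx-lm` package. The snippet below is a minimal sketch, not an official usage recipe: the repository ID `LiquidAI/LFM2.5-1.2B-Thinking-MLX-8bit` is an assumed placeholder and should be replaced with the actual path or identifier of this converted model.

```python
# Minimal sketch: running the 8-bit MLX conversion with mlx-lm on Apple silicon.
# The repo ID below is an assumption; substitute the real model path or Hub ID.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Thinking-MLX-8bit")  # assumed repo ID

# Build a chat prompt; a thinking model typically emits its reasoning before the answer.
messages = [{"role": "user", "content": "Summarize the benefits of 8-bit quantization."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

text = generate(model, tokenizer, prompt=prompt, max_tokens=512)
print(text)
```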
LiquidAI models available locally on Mirai:
| Name | Quantization | Size |
|------|--------------|------|
| LFM2-1.2B | uint8 | 1.2B |
| LFM2-2.6B | uint8 | 2.6B |
| LFM2-350M | uint8 | 350M |
| LFM2-700M | uint8 | 700M |
| LFM2.5-1.2B-Instruct | uint8 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-4bit | uint8 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-8bit | uint8 | 1.2B |
| LFM2.5-1.2B-Thinking | uint8 | 1.2B |
| LFM2-1.2B-4bit | uint8 | 1.2B |
| LFM2-1.2B-8bit | uint8 | 1.2B |
| LFM2-2.6B-4bit | uint8 | 2.6B |
| LFM2-2.6B-8bit | uint8 | 2.6B |
| LFM2-350M-4bit | uint8 | 350M |
| LFM2-350M-8bit | uint8 | 350M |
| LFM2-700M-4bit | uint8 | 700M |
| LFM2-700M-8bit | uint8 | 700M |
| LFM2.5-1.2B-Thinking-4bit | uint8 | 1.2B |
| LFM2.5-1.2B-Thinking-8bit | uint8 | 1.2B |