This model is a 4-bit quantized version of LiquidAI/LFM2-350M, converted to MLX format for efficient inference on Apple Silicon devices. LFM2-350M is a compact language model capable of text generation in multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. It uses the Liquid Foundation Model (LFM2) architecture, which is optimized for edge deployment while maintaining strong performance.
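To illustrate why 4-bit quantization matters on-device, the arithmetic below estimates the weight footprint of a 350M-parameter model. The group size and fp16 scale/bias overhead follow MLX's default group-wise affine quantization settings (`group_size=64`); treat the result as a rough approximation of weight storage, not a measured on-disk or in-memory figure.

```python
def quantized_weight_mb(n_params: float, bits: int, group_size: int = 64) -> float:
    """Approximate weight storage for group-wise affine quantization.

    Each group of `group_size` weights stores one fp16 scale and one
    fp16 bias (4 bytes of overhead per group) on top of the packed bits.
    """
    packed = n_params * bits / 8              # packed weight bytes
    overhead = (n_params / group_size) * 4    # fp16 scale + bias per group
    return (packed + overhead) / 2**20        # bytes -> MiB

fp16_mb = 350e6 * 2 / 2**20                   # fp16 baseline, ~668 MiB
q4_mb = quantized_weight_mb(350e6, 4)         # ~188 MiB
print(f"fp16 ~ {fp16_mb:.0f} MiB, 4-bit ~ {q4_mb:.0f} MiB")
```

At roughly a 3.5x reduction, the weights fit comfortably in the unified memory of any Apple Silicon Mac or recent iPhone, which is the point of shipping the 4-bit conversion.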
Available local models on Mirai:

| Name | Quantization | Parameters |
|------|--------------|------------|
| LFM2-1.2B | uint4 | 1.2B |
| LFM2-2.6B | uint4 | 2.6B |
| LFM2-350M | uint4 | 350M |
| LFM2-700M | uint4 | 700M |
| LFM2.5-1.2B-Instruct | uint4 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-4bit | uint4 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-8bit | uint4 | 1.2B |
| LFM2.5-1.2B-Thinking | uint4 | 1.2B |
| LFM2-1.2B-4bit | uint4 | 1.2B |
| LFM2-1.2B-8bit | uint4 | 1.2B |
| LFM2-2.6B-4bit | uint4 | 2.6B |
| LFM2-2.6B-8bit | uint4 | 2.6B |
| LFM2-350M-4bit | uint4 | 350M |
| LFM2-350M-8bit | uint4 | 350M |
| LFM2-700M-4bit | uint4 | 700M |
| LFM2-700M-8bit | uint4 | 700M |
| LFM2.5-1.2B-Thinking-4bit | uint4 | 1.2B |
| LFM2.5-1.2B-Thinking-8bit | uint4 | 1.2B |
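Any of these models can be run locally through the `mlx-lm` package, which provides a simple load/generate API (Apple Silicon only, so this sketch is not portable to other hardware). The repo id below is illustrative, not confirmed by this card; point it at the actual MLX conversion you want to run:

```python
# Minimal sketch using mlx-lm (requires macOS on Apple Silicon:
# pip install mlx-lm). The repo id is a hypothetical placeholder.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2-350M")  # substitute the real MLX repo id

prompt = "Give me a one-sentence summary of the MLX framework."
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```

The same `load`/`generate` calls work for every entry in the table; only the repo id changes between the 4-bit and 8-bit variants.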