This model is a 4-bit quantized version of LiquidAI's LFM2-700M, converted to MLX format for efficient inference. LFM2-700M is a language model capable of text generation in multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model is optimized for edge deployment and can be used with the MLX framework for fast on-device inference on Apple Silicon.
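A minimal way to run a model like this locally is with the `mlx-lm` package (`pip install mlx-lm`) on an Apple Silicon machine. This is a sketch, not the card's official instructions: the repository id below is an assumption, so substitute this model's actual id.

```python
# Sketch: load and run a quantized MLX model with mlx-lm.
from mlx_lm import load, generate

# The repo id below is hypothetical; replace it with this model's real id.
model, tokenizer = load("LiquidAI/LFM2-700M")

# Generate a short completion from a plain-text prompt.
text = generate(model, tokenizer, prompt="Write a haiku about the sea.", max_tokens=50)
print(text)
```

`load` fetches and initializes the model and its tokenizer; `generate` runs autoregressive decoding up to `max_tokens` tokens.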
Available local models on Mirai:
| Name | Quantisation | Size |
|---|---|---|
| LFM2-1.2B | uint4 | 1.2B |
| LFM2-2.6B | uint4 | 2.6B |
| LFM2-350M | uint4 | 350M |
| LFM2-700M | uint4 | 700M |
| LFM2.5-1.2B-Instruct | uint4 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-4bit | uint4 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-8bit | uint4 | 1.2B |
| LFM2.5-1.2B-Thinking | uint4 | 1.2B |
| LFM2-1.2B-4bit | uint4 | 1.2B |
| LFM2-1.2B-8bit | uint4 | 1.2B |
| LFM2-2.6B-4bit | uint4 | 2.6B |
| LFM2-2.6B-8bit | uint4 | 2.6B |
| LFM2-350M-4bit | uint4 | 350M |
| LFM2-350M-8bit | uint4 | 350M |
| LFM2-700M-4bit | uint4 | 700M |
| LFM2-700M-8bit | uint4 | 700M |
| LFM2.5-1.2B-Thinking-4bit | uint4 | 1.2B |
| LFM2.5-1.2B-Thinking-8bit | uint4 | 1.2B |
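To illustrate what the `uint4` quantisation in the table means, here is a minimal sketch of affine 4-bit quantization in plain Python. The values and the per-tensor scheme are illustrative only; MLX's actual quantization uses grouped weights with per-group scales.

```python
# Sketch: affine uint4 quantization maps floats onto 16 integer levels (0..15)
# using a scale and zero point, trading precision for a 4-bit storage cost.

def quantize_uint4(values):
    """Map floats to 4-bit codes (0..15) plus a scale and zero point."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 15 or 1.0          # 4 bits -> 16 levels
    codes = [round((v - lo) / scale) for v in values]
    return codes, scale, lo

def dequantize_uint4(codes, scale, zero):
    """Recover approximate floats from the 4-bit codes."""
    return [zero + scale * c for c in codes]

weights = [-0.8, -0.1, 0.0, 0.25, 0.9]     # toy "weights", not real model data
codes, scale, zero = quantize_uint4(weights)
approx = dequantize_uint4(codes, scale, zero)

# Every code fits in 4 bits, and round-trip error is bounded by scale / 2.
assert all(0 <= c <= 15 for c in codes)
assert all(abs(a - w) <= scale / 2 + 1e-9 for a, w in zip(approx, weights))
```

The round-trip error bound of half a quantization step is why small models often remain usable at 4 bits: each weight moves by at most `scale / 2`.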