MLX export of LFM2.5-1.2B-Instruct for Apple Silicon inference. This is a 1.2 billion parameter language model optimized for edge deployment with 8-bit quantization, supporting a 128K context length and trained for instruction-following across multiple languages including English, Japanese, Korean, French, Spanish, German, Italian, Portuguese, Arabic, and Chinese.
LiquidAI
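For reference, below is a minimal sketch of running this export locally with the `mlx-lm` Python package on Apple Silicon. The Hugging Face repo id mirrors the 8-bit model name in the table below and is an assumption; substitute the actual repo path or a local directory containing the MLX export. The prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the 8-bit MLX export and generate a reply with mlx-lm.
# The repo id below is an assumption based on the model names listed in the
# table; point it at the real Hugging Face repo or a local export directory.
from mlx_lm import load, generate

model, tokenizer = load("LiquidAI/LFM2.5-1.2B-Instruct-MLX-8bit")

# Build the prompt with the model's chat template, then sample a short reply.
messages = [{"role": "user", "content": "Summarise MLX in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```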
available local models on Mirai:
| Name | Quantisation | Size |
| --- | --- | --- |
| LFM2-1.2B | uint8 | 1.2B |
| LFM2-2.6B | uint8 | 2.6B |
| LFM2-350M | uint8 | 350M |
| LFM2-700M | uint8 | 700M |
| LFM2.5-1.2B-Instruct | uint8 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-4bit | uint8 | 1.2B |
| LFM2.5-1.2B-Instruct-MLX-8bit | uint8 | 1.2B |
| LFM2.5-1.2B-Thinking | uint8 | 1.2B |
| LFM2-1.2B-4bit | uint8 | 1.2B |
| LFM2-1.2B-8bit | uint8 | 1.2B |
| LFM2-2.6B-4bit | uint8 | 2.6B |
| LFM2-2.6B-8bit | uint8 | 2.6B |
| LFM2-350M-4bit | uint8 | 350M |
| LFM2-350M-8bit | uint8 | 350M |
| LFM2-700M-4bit | uint8 | 700M |
| LFM2-700M-8bit | uint8 | 700M |
| LFM2.5-1.2B-Thinking-4bit | uint8 | 1.2B |
| LFM2.5-1.2B-Thinking-8bit | uint8 | 1.2B |
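The 4-bit and 8-bit entries above correspond to quantized MLX conversions of the base checkpoints. A minimal sketch of how such a variant can be produced with `mlx-lm`'s `convert` helper is shown below; the source repo id and output directory are illustrative assumptions, and the quantization settings shown are the library's documented defaults.

```python
# Minimal sketch: produce a quantized MLX export with mlx-lm's convert helper.
# The source repo id and output path are assumptions for illustration.
from mlx_lm import convert

convert(
    "LiquidAI/LFM2.5-1.2B-Instruct",            # Hugging Face checkpoint to convert
    mlx_path="lfm2.5-1.2b-instruct-mlx-4bit",   # output directory for the MLX export
    quantize=True,                              # enable weight quantization
    q_bits=4,                                   # 4-bit weights (use 8 for the 8-bit variant)
    q_group_size=64,                            # quantization group size (library default)
)
```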