LFM2.5-1.2B-Instruct-MLX-4bit

Run locally on Apple devices with Mirai

Type: Local
From: LiquidAI
Quantisation: uint4
Precision: float16
Size: 1.2B
Source: Hugging Face

This is an MLX export of the LFM2.5-1.2B-Instruct model optimized for Apple Silicon inference. The model is a 1.2 billion parameter instruction-tuned language model quantized to 4-bit precision, resulting in a 628 MB footprint while maintaining a 128K context length. It supports multiple languages including English, Japanese, Korean, French, Spanish, German, Italian, Portuguese, Arabic, and Chinese, making it suitable for multilingual text generation tasks on edge devices.
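As a rough sanity check on the stated footprint (an estimate, not a figure from the source): 1.2 billion parameters stored at 4 bits each work out to

```latex
1.2 \times 10^{9}\ \text{params} \times \frac{4\ \text{bits}}{8\ \text{bits/byte}} \approx 6.0 \times 10^{8}\ \text{bytes} \approx 600\ \text{MB}
```

which is consistent with the reported 628 MB; the remainder plausibly comes from per-group quantization scales and any tensors kept in higher precision (an assumption about the source of the difference, not stated on the page).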

1. Choose framework
2. Run the following command to install the Mirai SDK:
   SPM: https://github.com/trymirai/uzu-swift
3. Set Mirai API key
4. Apply code
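A minimal sketch of what step 2 looks like in a `Package.swift` manifest. The package name, platform minimums, version requirement, and product name (`Uzu`) are all assumptions for illustration; only the repository URL comes from the page above.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MiraiDemo",                      // hypothetical app name
    platforms: [.iOS(.v16), .macOS(.v13)],  // assumed minimum platforms
    dependencies: [
        // Mirai's uzu-swift SDK, as linked in step 2 above;
        // tracking `main` here because the release tag is not stated on the page
        .package(url: "https://github.com/trymirai/uzu-swift", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MiraiDemo",
            // product name "Uzu" is an assumption; check the SDK's own manifest
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```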
