LFM2.5-1.2B-Instruct-MLX-8bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint8

Precision: float16

Size: 1.2B

Source: Hugging Face

MLX export of LFM2.5-1.2B-Instruct for Apple Silicon inference. This 1.2-billion-parameter language model is optimized for edge deployment with 8-bit quantization, supports a 128K context length, and is trained for instruction following across multiple languages, including English, Japanese, Korean, French, Spanish, German, Italian, Portuguese, Arabic, and Chinese.

1. Choose a framework.
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift (see the sketch after this list).
3. Set your Mirai API key.
4. Apply the code.
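
For step 2, a minimal Package.swift sketch is shown below. The package name, platform versions, pinned SDK version, and product name ("Uzu") are assumptions and should be checked against the uzu-swift repository; in Xcode you can instead add the package URL through the package dependency dialog.

// swift-tools-version:5.9
// Minimal sketch: "LocalLLMApp", the platform versions, "from: 1.0.0", and the
// product name "Uzu" are assumptions — verify them against the uzu-swift manifest.
import PackageDescription

let package = Package(
    name: "LocalLLMApp",
    platforms: [.iOS(.v16), .macOS(.v13)],
    dependencies: [
        // Step 2: pull in the Mirai SDK (uzu-swift) via Swift Package Manager.
        .package(url: "https://github.com/trymirai/uzu-swift", from: "1.0.0")
    ],
    targets: [
        .executableTarget(
            name: "LocalLLMApp",
            dependencies: [
                .product(name: "Uzu", package: "uzu-swift")
            ]
        )
    ]
)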
