LFM2.5-1.2B-Thinking-4bit

Run locally on Apple devices with Mirai

Type: Local
From: LiquidAI
Quantisation: uint4
Precision: float16
Size: 1.2B
Source: Hugging Face

This model is a 4-bit quantized version of LiquidAI/LFM2.5-1.2B-Thinking converted to MLX format for efficient inference on edge devices. It is a 1.2 billion parameter language model capable of text generation across multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model features thinking capabilities and is optimized for running on Apple Silicon and other edge hardware through the MLX framework.

1. Choose framework
2. Run the following command to install the Mirai SDK:
   SPM: https://github.com/trymirai/uzu-swift
3. Set Mirai API key (Get API Key)
4. Apply code
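Step 2 adds the SDK through Swift Package Manager. A minimal Package.swift sketch of how that dependency could be declared; the repository URL comes from the step above, but the package name `MyApp`, the platform versions, and the product name `Uzu` are assumptions — verify the product name against the repository's own manifest:

```swift
// swift-tools-version: 5.9
// Hypothetical manifest: everything except the dependency URL is an assumption.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v16), .macOS(.v14)],
    dependencies: [
        // URL from the install step; pin to a release tag or version in real use
        .package(url: "https://github.com/trymirai/uzu-swift", branch: "main"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [
                // Product name is assumed; check the package's exported products
                .product(name: "Uzu", package: "uzu-swift"),
            ]
        ),
    ]
)
```

In Xcode the same dependency can instead be added via File → Add Package Dependencies with the URL above.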
