LFM2-1.2B-4bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint4

Precision: float16

Size: 1.2B

Source: Hugging Face

This model is a 4-bit quantized version of LFM2-1.2B converted to MLX format for efficient inference on edge devices. LFM2 is a compact language model that supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model is designed for text generation tasks and represents a lightweight alternative suitable for deployment on resource-constrained hardware while maintaining reasonable performance across diverse languages.

1. Choose framework
2. Run the following command to install the Mirai SDK (SPM): https://github.com/trymirai/uzu-swift
3. Set Mirai API key
4. Apply code
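Step 2 above installs the SDK via Swift Package Manager. A minimal sketch of how the dependency could be declared in a project's Package.swift follows; the version requirement, platform minimums, and the product name "Uzu" are assumptions, not confirmed from the repository, so check its manifest and releases before use:

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    // Assumed minimum platforms; adjust to match the SDK's requirements.
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        // Mirai's uzu-swift SDK, fetched via SPM.
        // "from: 0.1.0" is a placeholder, not a confirmed release tag.
        .package(url: "https://github.com/trymirai/uzu-swift", from: "0.1.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [
                // The product name "Uzu" is an assumption; check the
                // package's own Package.swift for the exported product.
                .product(name: "Uzu", package: "uzu-swift")
            ]
        )
    ]
)
```

Alternatively, in Xcode the same dependency can be added via File → Add Package Dependencies by pasting the repository URL.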
