LFM2-1.2B-8bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint8

Precision: float16

Size: 1.2B

Source: Hugging Face

This model is an 8-bit quantized version of LiquidAI/LFM2-1.2B converted to MLX format for efficient inference on Apple Silicon devices. LFM2 is a compact language model designed for edge deployment, supporting multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model is optimized for text generation tasks and can be used with the mlx-lm library for fast inference on Mac hardware.
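Since the description mentions the mlx-lm library, here is a minimal sketch of loading and sampling from the model with it. This assumes an Apple Silicon Mac with `pip install mlx-lm`; the Hugging Face repo id below is an assumption inferred from the model name, so substitute the actual path if it differs.

```python
# Assumed repo id, based on the model name shown on this page.
MODEL_ID = "LiquidAI/LFM2-1.2B-8bit"


def main() -> None:
    # Imported here so the sketch is readable on machines without MLX
    # (mlx-lm only runs on Apple Silicon).
    from mlx_lm import load, generate

    # Downloads the weights from Hugging Face on first use.
    model, tokenizer = load(MODEL_ID)

    prompt = "Translate to French: Hello, world."
    text = generate(model, tokenizer, prompt=prompt, max_tokens=64)
    print(text)


# Call main() on an Apple Silicon Mac to run inference.
```

The `load`/`generate` pair is the standard mlx-lm entry point for text generation; the same repo id also works with the `mlx_lm.generate` command-line tool.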

1. Choose framework
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift
3. Set your Mirai API key
4. Apply code
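For step 2, the SPM dependency can also be declared directly in a `Package.swift` manifest rather than through Xcode. This is a sketch under assumptions: the version requirement `0.1.0`, the product name `Uzu`, and the platform versions are all placeholders, not confirmed by this page; check the uzu-swift repository for the real values.

```swift
// swift-tools-version:5.9
// Minimal manifest adding the Mirai SDK via SPM.
// Version "0.1.0", product name "Uzu", and platforms are assumptions.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/trymirai/uzu-swift", from: "0.1.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

After resolving the package, import the SDK in your target and set the API key from step 3 before loading the model.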
