LFM2-2.6B-4bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint4

Precision: float16

Size: 2.6B

Source: Hugging Face

This is a 4-bit quantized version of the LFM2-2.6B model converted to MLX format for efficient inference on edge devices. LFM2 is a compact language model designed for text generation tasks and supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model has been optimized for the MLX framework to enable fast and efficient execution on compatible hardware.

1. Choose a framework.
2. Add the Mirai SDK to your project via Swift Package Manager: https://github.com/trymirai/uzu-swift
3. Set your Mirai API key.
4. Apply the code.
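For step 2, the SDK can be added as a Swift Package Manager dependency in your `Package.swift` manifest. This is a minimal sketch: the package URL comes from the steps above, but the version requirement, product name, and platform targets are assumptions — check the uzu-swift repository for the actual values.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",  // hypothetical app target name
    platforms: [.iOS(.v16), .macOS(.v14)],  // assumed minimum platforms; verify against the SDK's requirements
    dependencies: [
        // URL from the installation step; the version constraint is an assumption
        .package(url: "https://github.com/trymirai/uzu-swift", from: "1.0.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            // product name is an assumption; use the name exported by the package
            dependencies: [.product(name: "uzu", package: "uzu-swift")]
        )
    ]
)
```

Alternatively, in Xcode you can add the same URL via File → Add Package Dependencies without editing a manifest by hand.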
