LFM2-2.6B-8bit

Run locally on Apple devices with Mirai

Type: Local
From: LiquidAI
Quantisation: uint8
Precision: float16
Size: 2.6B
Source: Hugging Face

This model is an MLX-format conversion of LiquidAI's LFM2-2.6B, a 2.6 billion parameter language model quantized to 8-bit precision for efficient inference on edge devices. LFM2 is a Liquid Foundation Model designed for text generation across multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model has been optimized for the MLX framework to enable fast, low-memory inference while maintaining strong performance capabilities.

1. Choose a framework.
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift
3. Set your Mirai API key.
4. Apply the code.
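For step 2, the SDK can be added as an SPM dependency in your project's `Package.swift`. A minimal sketch follows; the package URL comes from the step above, while the tools version, platform versions, version requirement, and the product name "Uzu" are assumptions to check against the package's own manifest.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        // Mirai SDK via SPM (URL from the install step above);
        // the "from" version is an assumption
        .package(url: "https://github.com/trymirai/uzu-swift", from: "0.1.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            // Product name "Uzu" is an assumption; verify it in the
            // package's Package.swift before building
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

In Xcode, the same dependency can be added via File > Add Package Dependencies using the repository URL, without editing a manifest by hand.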
