LFM2-700M-4bit

Run locally on Apple devices with Mirai

Type: Local
From: LiquidAI
Quantisation: uint4
Precision: float16
Size: 700M
Source: Hugging Face

This model is a 4-bit quantized version of LiquidAI's LFM2-700M, converted to MLX format for efficient inference. LFM2-700M is a language model capable of text generation across multiple languages, including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model is optimized for edge deployment and can be used with the MLX framework for fast on-device inference on Apple Silicon.
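
As a quick way to try the model, the sketch below loads it with the `mlx-lm` Python package and generates a short completion. The Hugging Face repo id used here (`mlx-community/LFM2-700M-4bit`) is an assumption inferred from the model name, not confirmed by this card; substitute the actual repo id or a local path to the converted weights.

```python
# Minimal sketch: run the 4-bit MLX model with mlx-lm.
# Install first: pip install mlx-lm (requires an Apple Silicon Mac).
from mlx_lm import load, generate

# Hypothetical repo id, inferred from the model name; replace with the
# real Hugging Face repo or a local directory containing the MLX weights.
model, tokenizer = load("mlx-community/LFM2-700M-4bit")

# Format the request with the model's chat template before generating.
messages = [{"role": "user", "content": "Summarize what MLX is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# verbose=True streams tokens and prints generation speed as it runs.
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(response)
```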
