LFM2-700M-8bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint8

Precision: float16

Size: 700M

Source: Hugging Face

This model is the MLX format version of LiquidAI's LFM2-700M, a 700 million parameter language model converted for efficient inference on Apple Silicon using the MLX framework. LFM2 is a Liquid Foundation Model designed for edge deployment and supports multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The 8-bit quantized version provides a balance between model performance and computational efficiency for text generation tasks.
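For reference, models in MLX format can typically be run with the open-source mlx-lm tooling on an Apple Silicon Mac. The commands below are a minimal sketch: the repository name is illustrative (substitute the actual Hugging Face path of the 8-bit MLX conversion), and an Apple Silicon machine with the model weights available is assumed.

```shell
# Install the MLX language-model tooling (requires Apple Silicon)
pip install mlx-lm

# Generate text from a prompt; replace the model path with the
# actual MLX 8-bit conversion repository on Hugging Face
mlx_lm.generate --model LiquidAI/LFM2-700M \
  --prompt "Summarize the benefits of on-device inference." \
  --max-tokens 128
```

The same model can also be loaded programmatically via `mlx_lm.load` and `mlx_lm.generate` in Python when embedding inference in an application.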

