LFM2-700M-4bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint4

Size: 700M

Source: Hugging Face

This model is a 4-bit quantized version of LiquidAI's LFM2-700M converted to MLX format for efficient inference. LFM2-700M is a language model capable of text generation across multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. The model has been optimized for edge deployment and can be used with the MLX framework for fast inference on Apple Silicon and other platforms.
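Since the weights are in MLX format, one way to try the model is with the `mlx-lm` package on an Apple Silicon Mac. This is a minimal sketch, not the Mirai SDK path described below; the Hugging Face repo id in the `load()` call is an assumption — substitute the actual id of the 4-bit MLX conversion.

```python
# Minimal sketch: local inference with mlx-lm on Apple Silicon.
# Requires: pip install mlx-lm
from mlx_lm import load, generate

# Hypothetical repo id -- replace with the real 4-bit MLX conversion.
model, tokenizer = load("LiquidAI/LFM2-700M")

# The card lists multilingual support, so any of the listed languages works.
text = generate(model, tokenizer, prompt="Explain edge inference in one sentence.", max_tokens=64)
print(text)
```

The 4-bit (uint4) quantization keeps the 700M-parameter weights small enough for on-device inference, which is the point of the edge-deployment optimization mentioned above.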

1. Choose framework
2. Run the following command to install the Mirai SDK (SPM): https://github.com/trymirai/uzu-swift
3. Set Mirai API key (Get API Key)
4. Apply code
