LFM2-350M-8bit

Run locally on Apple devices with Mirai

Type: Local

From: LiquidAI

Quantisation: uint8

Precision: float16

Size: 350M

Source: Hugging Face

This is an 8-bit quantized version of the LFM2-350M model converted to MLX format for efficient inference on Apple Silicon devices. The model is based on LiquidAI's LFM2-350M and supports text generation across multiple languages including English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish. It is designed for edge deployment with reduced memory requirements through 8-bit quantization while maintaining the capabilities of the original Liquid Foundation Model 2.
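As a back-of-the-envelope illustration of the memory savings mentioned above, the following sketch compares the raw weight storage of a 350M-parameter model at float16 versus 8-bit. This is illustrative arithmetic only: a real MLX quantized checkpoint also stores per-group scales and biases, which add a small overhead on top of these numbers.

```python
# Rough memory-footprint estimate for a 350M-parameter model,
# comparing float16 weights to 8-bit quantized weights.
# Illustrative only: actual MLX quantized files also carry
# per-group scale/bias tensors, adding modest overhead.

PARAMS = 350_000_000

def weight_bytes(params: int, bits_per_weight: int) -> int:
    """Bytes needed to store `params` weights at the given bit width."""
    return params * bits_per_weight // 8

fp16_mb = weight_bytes(PARAMS, 16) / 1e6
int8_mb = weight_bytes(PARAMS, 8) / 1e6

print(f"float16 weights: ~{fp16_mb:.0f} MB")  # ~700 MB
print(f"uint8 weights:   ~{int8_mb:.0f} MB")  # ~350 MB
```

Halving the bits per weight halves the dominant memory cost, which is what makes the 8-bit variant attractive for on-device deployment.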

1. Choose framework
2. Run the following command to install the Mirai SDK:
   SPM: https://github.com/trymirai/uzu-swift
3. Set Mirai API key
4. Apply code
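For step 2, a minimal `Package.swift` that pulls in the SDK via Swift Package Manager might look like the sketch below. The version constraint, platform minimums, and product name are assumptions for illustration; check the uzu-swift repository for the actual values.

```swift
// swift-tools-version:5.9
// Minimal manifest sketch adding the Mirai uzu-swift SDK as a dependency.
// Version, platforms, and product name below are assumed, not confirmed.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v16), .macOS(.v13)],  // assumed minimum platforms
    dependencies: [
        // URL from the install step above
        .package(url: "https://github.com/trymirai/uzu-swift", from: "0.1.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            // Product name "Uzu" is a guess; use the name the package exports.
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

Alternatively, in Xcode the same dependency can be added via File → Add Package Dependencies using the repository URL.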
