Llama-3.2-1B-Instruct-4bit

Run locally on Apple devices with Mirai

Type: Local

From: Meta

Quantisation: uint4

Precision: float16

Size: 1B

Source: Hugging Face

This is Llama 3.2 1B Instruct converted to a 4-bit quantized format optimized for the MLX framework. The model is based on Meta's Llama 3.2 1B-parameter foundational large language model, fine-tuned to follow user instructions and engage in multi-turn conversations. It supports multiple languages, including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai. Quantization reduces the model's size while largely preserving quality, making it suitable for deployment on Apple Silicon and other resource-constrained environments through the MLX machine learning framework.

1. Choose a framework.
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift
3. Set your Mirai API key.
4. Apply the code.
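The steps above can be sketched in Swift. This is a minimal illustrative sketch only: the type and method names below (`MiraiEngine`, `loadModel`, `generate`) are assumptions, not the SDK's actual API — consult the uzu-swift repository for the real entry points before adapting it.

```swift
// Hypothetical sketch of running Llama-3.2-1B-Instruct-4bit on-device.
// NOTE: MiraiEngine, loadModel, and generate are ASSUMED names for
// illustration; check https://github.com/trymirai/uzu-swift for the real API.

import Foundation

// Step 2: the SDK is added via Swift Package Manager, e.g. in Package.swift:
//   .package(url: "https://github.com/trymirai/uzu-swift", branch: "main")

func runExample() async throws {
    // Step 3: authenticate with your Mirai API key (placeholder value).
    let engine = MiraiEngine(apiKey: "<YOUR_MIRAI_API_KEY>")

    // Load the locally stored 4-bit quantized model by its identifier.
    let model = try await engine.loadModel("Llama-3.2-1B-Instruct-4bit")

    // Step 4: run a simple instruction-following prompt.
    let reply = try await model.generate(prompt: "Summarize MLX in one sentence.")
    print(reply)
}
```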
