gemma-3-1b-it-4bit

Run locally on Apple devices with Mirai

Type: Local
From: Google
Quantisation: uint4
Precision: float16
Size: 1B
Source: Hugging Face

This is a 4-bit quantized version of Google's Gemma 3 1B instruction-tuned model converted to MLX format for efficient inference on Apple silicon devices. The model is optimized for text generation tasks and maintains the instruction-following capabilities of the original Gemma 3 1B model while reducing memory requirements through 4-bit quantization.

1. Choose a framework.
2. Install the Mirai SDK via Swift Package Manager: https://github.com/trymirai/uzu-swift
3. Set your Mirai API key.
4. Apply the code.
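Step 2 adds the SDK as a Swift Package Manager dependency. A minimal `Package.swift` sketch is shown below; the version requirement, platform minimums, and the `Uzu` product name are assumptions, so check the repository's README for the current values:

```swift
// swift-tools-version:5.9
// Package.swift — add uzu-swift as an SPM dependency.
// Version requirement and product name are assumptions; verify
// against https://github.com/trymirai/uzu-swift before use.
import PackageDescription

let package = Package(
    name: "MyApp",
    platforms: [.iOS(.v17), .macOS(.v14)],  // assumed minimum platforms
    dependencies: [
        .package(url: "https://github.com/trymirai/uzu-swift", from: "0.1.0")
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [.product(name: "Uzu", package: "uzu-swift")]
        )
    ]
)
```

In Xcode, the same dependency can be added via File → Add Package Dependencies by pasting the repository URL.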
