gemma-3-4b-it-4bit

Run locally on Apple devices with Mirai

Type: Local
From: Google
Quantisation: uint4
Precision: float16
Size: 4B
Source: Hugging Face

This is a quantized MLX format conversion of Google's Gemma 3 4B instruction-tuned model. The model was converted from the original google/gemma-3-4b-it using mlx-vlm and is optimized for running on Apple Silicon devices. It is a multimodal model capable of processing both images and text to generate text responses.

1. Choose framework.
2. Install the Mirai SDK via Swift Package Manager (SPM): https://github.com/trymirai/uzu-swift (a Package.swift sketch follows this list).
3. Set the Mirai API key; obtain one from Mirai first (a Keychain storage sketch follows this list).
4. Apply code.
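If you add the SDK through a Package.swift manifest rather than Xcode's package UI, the dependency entry looks roughly like the sketch below. The package name, platform versions, version requirement ("0.1.0"), and the product name "Uzu" are assumptions; use the exact values documented in the uzu-swift repository.

```swift
// swift-tools-version:5.9
// Minimal Package.swift sketch for pulling in the Mirai SDK via SPM.
// The version ("0.1.0") and product name ("Uzu") are placeholders; check
// https://github.com/trymirai/uzu-swift for the real values.
import PackageDescription

let package = Package(
    name: "GemmaDemo",
    platforms: [.iOS(.v17), .macOS(.v14)],
    dependencies: [
        .package(url: "https://github.com/trymirai/uzu-swift", from: "0.1.0")
    ],
    targets: [
        .executableTarget(
            name: "GemmaDemo",
            dependencies: [
                .product(name: "Uzu", package: "uzu-swift")
            ]
        )
    ]
)
```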
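Step 3 requires a Mirai API key. How the key is passed to the SDK is defined by uzu-swift and not shown on this page; the sketch below only illustrates one common way to keep the key out of source code on Apple platforms, by storing and reading it with the Security framework's Keychain APIs. The account label "mirai-api-key" is an arbitrary placeholder.

```swift
import Foundation
import Security

// Stores the Mirai API key in the Keychain instead of hard-coding it.
// Handing the key to the Mirai SDK itself is up to the uzu-swift API.
func saveMiraiAPIKey(_ key: String, account: String = "mirai-api-key") throws {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecValueData as String: Data(key.utf8)
    ]
    SecItemDelete(query as CFDictionary)          // replace any existing entry
    let status = SecItemAdd(query as CFDictionary, nil)
    guard status == errSecSuccess else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status))
    }
}

// Reads the key back, e.g. at app launch before configuring the SDK.
func loadMiraiAPIKey(account: String = "mirai-api-key") -> String? {
    let query: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrAccount as String: account,
        kSecReturnData as String: true,
        kSecMatchLimit as String: kSecMatchLimitOne
    ]
    var item: AnyObject?
    guard SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
          let data = item as? Data else { return nil }
    return String(data: data, encoding: .utf8)
}
```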
