gemma-3-4b-it-8bit

Run locally on Apple devices with Mirai

Type: Local

From: Google

Quantisation: uint8

Precision: float16

Size: 4B

Source: Hugging Face

This model is a conversion of Google's Gemma 3 4B instruction-tuned variant to MLX format, quantized to 8-bit precision. It is a multimodal model capable of processing both images and text inputs to generate text responses. The model was converted using mlx-vlm version 0.1.18 and is optimized for use with the MLX framework on Apple Silicon hardware.

1. Choose framework
2. Run the following command to install the Mirai SDK:
   SPM: https://github.com/trymirai/uzu-swift
3. Set your Mirai API key (Get API Key)
4. Apply code
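For step 2, the SDK can be added as a Swift Package Manager dependency. The sketch below shows a minimal `Package.swift` that pulls in the repository above; the product name `Uzu`, the target name, and the platform versions are assumptions for illustration, not confirmed by this page.

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyApp",  // hypothetical app target name
    platforms: [.iOS(.v17), .macOS(.v14)],  // assumed minimum platforms
    dependencies: [
        // Mirai SDK from the repository linked in step 2
        .package(url: "https://github.com/trymirai/uzu-swift", branch: "main"),
    ],
    targets: [
        .executableTarget(
            name: "MyApp",
            dependencies: [
                // product name "Uzu" is an assumption based on the repo name;
                // check the package manifest for the actual product
                .product(name: "Uzu", package: "uzu-swift"),
            ]
        ),
    ]
)
```

Alternatively, in Xcode the same dependency can be added via File → Add Package Dependencies using the URL above.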
