This is a quantized MLX-format conversion of Google's Gemma 3 4B instruction-tuned model, converted from the original google/gemma-3-4b-it with mlx-vlm and optimized for Apple Silicon. It is a multimodal model that accepts both images and text and generates text responses.
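As a rough sketch of how a conversion like this can be run locally, mlx-vlm ships a command-line generation entry point. The model path and prompt below are placeholders, and flag availability may vary across mlx-vlm versions, so treat this as an illustrative invocation rather than a guaranteed recipe:

```shell
# Install mlx-vlm (Apple Silicon Mac required for MLX acceleration)
pip install mlx-vlm

# Generate a response from an image + text prompt.
# Model and image paths are placeholders; adjust to your local setup.
python -m mlx_vlm.generate \
  --model mlx-community/gemma-3-4b-it-4bit \
  --prompt "Describe this image." \
  --image ./example.jpg \
  --max-tokens 100
```

Text-only prompts work the same way with the `--image` flag omitted.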
Available local models on Mirai:

| Name | Quantization | Size |
|------|--------------|------|
| gemma-3-1b-it | uint4 | 1B |
| gemma-3-27b-it | uint4 | 27B |
| gemma-3-4b-it | uint4 | 4B |
| gemma-3-1b-it-4bit | uint4 | 1B |
| gemma-3-1b-it-8bit | uint4 | 1B |
| gemma-3-27b-it-4bit | uint4 | 27B |
| gemma-3-27b-it-8bit | uint4 | 27B |
| gemma-3-4b-it-4bit | uint4 | 4B |
| gemma-3-4b-it-8bit | uint4 | 4B |