Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large 128K context window (32K for the 1B size), multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops, or custom cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone. The models accept text strings and images normalized to 896 by 896 resolution as input and generate text output of up to 8192 tokens.
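The instruction-tuned ("-it") variants listed below expect prompts wrapped in Gemma's chat turn markers. As a minimal sketch of that format (the `<start_of_turn>`/`<end_of_turn>` markers come from the Gemma model card; in practice a library's chat template should build this string for you):

```python
# Build a single-turn prompt in the Gemma instruction-tuned chat format.
# The turn markers are documented for Gemma; this helper is illustrative,
# not part of any particular runtime's API.

def gemma_prompt(user_message: str) -> str:
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_prompt("Summarize this article in one sentence."))
```

The model then generates the assistant turn after the trailing `<start_of_turn>model` marker, stopping at its own `<end_of_turn>`.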
Available local models on Mirai:

| Name                | Size | Quant. |
|---------------------|------|--------|
| gemma-3-1b-it       | 1B   | No     |
| gemma-3-27b-it      | 27B  | No     |
| gemma-3-4b-it       | 4B   | No     |
| gemma-3-1b-it-4bit  | 1B   | 4-bit  |
| gemma-3-1b-it-8bit  | 1B   | 8-bit  |
| gemma-3-27b-it-4bit | 27B  | 4-bit  |
| gemma-3-27b-it-8bit | 27B  | 8-bit  |
| gemma-3-4b-it-4bit  | 4B   | 4-bit  |
| gemma-3-4b-it-8bit  | 4B   | 8-bit  |
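As a rough guide to which variant fits a given machine, weight memory scales with parameter count times bytes per weight. The sketch below assumes unquantized weights are stored in bf16 (2 bytes/parameter), 8-bit at ~1 byte, and 4-bit at ~0.5 bytes; real usage is higher once the KV cache and runtime overhead are included, and these byte-per-weight figures are approximations, not Mirai-reported numbers.

```python
# Rough weight-memory estimates for the Gemma 3 variants listed above.
# Assumptions: bf16 = 2 bytes/param, 8-bit ~ 1 byte, 4-bit ~ 0.5 bytes.
# Excludes KV cache, activations, and runtime overhead.

MODELS = {
    "gemma-3-1b-it":       (1e9, 2.0),
    "gemma-3-27b-it":      (27e9, 2.0),
    "gemma-3-4b-it":       (4e9, 2.0),
    "gemma-3-1b-it-4bit":  (1e9, 0.5),
    "gemma-3-1b-it-8bit":  (1e9, 1.0),
    "gemma-3-27b-it-4bit": (27e9, 0.5),
    "gemma-3-27b-it-8bit": (27e9, 1.0),
    "gemma-3-4b-it-4bit":  (4e9, 0.5),
    "gemma-3-4b-it-8bit":  (4e9, 1.0),
}

def est_gib(params: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GiB."""
    return params * bytes_per_param / 2**30

for name, (params, bpp) in MODELS.items():
    print(f"{name}: ~{est_gib(params, bpp):.1f} GiB of weights")
```

By this estimate, gemma-3-27b-it-4bit needs roughly 13 GiB for weights alone, while gemma-3-4b-it-4bit fits in about 2 GiB, which is why the quantized variants are the practical choice on laptops and desktops.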