Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.
Available local models on Mirai:

| Name | Quantisation | Size |
| --- | --- | --- |
| gemma-3-1b-it | No | 1B |
| gemma-3-27b-it | No | 27B |
| gemma-3-4b-it | No | 4B |
| gemma-3-1b-it-4bit | 4-bit | 1B |
| gemma-3-1b-it-8bit | 8-bit | 1B |
| gemma-3-27b-it-4bit | 4-bit | 27B |
| gemma-3-27b-it-8bit | 8-bit | 27B |
| gemma-3-4b-it-4bit | 4-bit | 4B |
| gemma-3-4b-it-8bit | 8-bit | 4B |
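To pick a variant for a given machine, a rough rule of thumb is that the weights alone occupy roughly parameters × bits-per-weight ÷ 8 bytes (activations and KV cache add more on top). The sketch below applies that formula to a few of the models listed above; the 16-bit figure for the unquantised variants is an assumption for illustration, not a detail published by Mirai.

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Parameter counts from the table above; bits per weight inferred from the
# variant name, with 16-bit assumed for the unquantised models.
models = {
    "gemma-3-1b-it": (1, 16),
    "gemma-3-4b-it-4bit": (4, 4),
    "gemma-3-27b-it-8bit": (27, 8),
}

for name, (params_b, bits) in models.items():
    print(f"{name}: ~{weight_memory_gb(params_b, bits):.1f} GB of weights")
```

By this estimate, gemma-3-4b-it-4bit needs only about 2 GB for its weights and is comfortable on a laptop, while gemma-3-27b-it-8bit needs roughly 27 GB and is better suited to a workstation or cloud machine.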