Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large 128K context window (32K for the 1B size), multilingual support in over 140 languages, and is available in more sizes than previous versions. Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops, or custom cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone. The models accept text strings and images normalized to 896 by 896 resolution as input and generate text output of up to 8192 tokens.