Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built on extensive pretraining and post-training, Qwen3 delivers significant advances in reasoning, instruction following, agent capabilities, and multilingual support:

- Seamless switching, within a single model, between thinking mode (for complex logical reasoning, mathematics, and coding) and non-thinking mode (for efficient general-purpose dialogue), so users can trade reasoning depth for latency per request; a usage sketch follows this list.
- Significantly stronger reasoning, surpassing the previous QwQ and Qwen2.5 instruct models on mathematics, code generation, and commonsense logical reasoning.
- Strong performance in creative writing, role-playing, and multi-turn dialogue.
- Expertise in agent tasks: precise integration with external tools in both thinking and non-thinking modes, with leading performance among open-source models on complex agent benchmarks.
- Support for over 100 languages and dialects, with strong multilingual instruction following and translation.

Qwen3-4B specifically is a 4-billion-parameter causal language model with a native context length of 32,768 tokens, extensible to 131,072 tokens via YaRN scaling.