Qwen3-4B is a 4-billion-parameter causal language model from the latest generation of the Qwen series. It uniquely supports seamless switching between thinking mode, for complex logical reasoning, mathematics, and coding, and non-thinking mode, for efficient general-purpose dialogue, all within a single model. This dual-mode design delivers strong performance across scenarios without requiring separate models.

The model demonstrates significantly enhanced reasoning, surpassing both the previous QwQ model (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. Beyond reasoning, Qwen3-4B excels at human preference alignment for creative writing, role-playing, multi-turn dialogue, and instruction following, and it offers strong agent capabilities for precise integration with external tools. The model supports over 100 languages and dialects with advanced multilingual instruction following and translation.

Qwen3-4B natively supports a context length of 32,768 tokens, extensible to 131,072 tokens via YaRN scaling. Architecturally, it has 36 layers and uses grouped-query attention (GQA) with 32 query heads and 8 key-value heads, balancing output quality with inference efficiency.
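The mode switch is exposed through the chat template. Below is a minimal sketch using the Hugging Face transformers workflow; the `enable_thinking` flag follows the standard Qwen3 usage pattern, while the prompt text and generation settings here are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Explain grouped-query attention briefly."}]

# enable_thinking=True produces a reasoning trace before the answer;
# set it to False for efficient, direct dialogue from the same model.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
output = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[-1]:],
    skip_special_tokens=True,
)
print(output)
```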
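YaRN extension is configured through the model's RoPE scaling settings. The sketch below assumes that overriding the loaded config in code mirrors the usual config.json edit; it also inspects the architecture fields described above. The scaling factor comes from the ratio of target to native context length, 131072 / 32768 = 4.0.

```python
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "Qwen/Qwen3-4B"
config = AutoConfig.from_pretrained(model_id)

# Architecture fields described above (expected values per the model card).
print(config.num_hidden_layers)    # 36 layers
print(config.num_attention_heads)  # 32 query heads
print(config.num_key_value_heads)  # 8 key-value heads (GQA)

# Enable YaRN scaling to extend the native 32,768-token window
# toward 131,072 tokens: factor = 131072 / 32768 = 4.0.
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, torch_dtype="auto", device_map="auto"
)
```

Note that serving frameworks typically also need their own maximum-length setting raised to match, and that static RoPE scaling applies at all lengths, so it is usually enabled only when long-context processing is actually required.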