Type: Local
From: Alibaba
Quantisation: uint4
Precision: No
Size: 32B
Qwen3-32B-AWQ is a 32.8 billion parameter language model and the latest generation in the Qwen series, quantized to 4 bits with AWQ. It uniquely supports seamless switching between thinking mode, for complex logical reasoning, mathematics, and coding, and non-thinking mode, for efficient general-purpose dialogue, all within a single model. The model demonstrates significant improvements in reasoning capabilities, human preference alignment for creative writing and role-playing, agent capabilities for tool integration, and multilingual support across 100+ languages and dialects. Built on extensive pretraining and post-training, Qwen3-32B natively supports context lengths of 32,768 tokens and can extend to 131,072 tokens with YaRN RoPE scaling. The quantized version maintains strong performance relative to its full-precision counterpart while offering improved efficiency for deployment.
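To make the mode switch concrete, here is a minimal sketch using the standard Hugging Face transformers API. The `enable_thinking` flag on `apply_chat_template` follows the Qwen3 model card; the prompt text and generation settings are illustrative, not prescribed.

```python
# Minimal sketch: loading Qwen3-32B-AWQ and toggling thinking mode.
# Assumes the transformers library and a GPU large enough for the
# 4-bit AWQ weights; the prompt below is only an example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",  # AWQ weights load in their quantized 4-bit form
    device_map="auto",
)

messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]

# enable_thinking=True selects thinking mode (step-by-step reasoning
# emitted inside <think>...</think> tags); set it to False to get
# direct, non-thinking responses from the same model.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=4096)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[-1]:],
    skip_special_tokens=True,
))
```

For contexts beyond the native 32,768 tokens, the Qwen3 model card describes enabling YaRN via the model's `rope_scaling` configuration; the exact settings depend on the serving framework, so consult its documentation before relying on the extended 131,072-token window.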