Qwen3-32B-AWQ is a 32.8-billion-parameter language model from the latest generation of the Qwen series, quantized to 4 bits with AWQ (Activation-aware Weight Quantization). It supports seamless switching between a thinking mode for complex logical reasoning, mathematics, and coding, and a non-thinking mode for efficient general-purpose dialogue, all within a single model. The model shows significant improvements in reasoning, human-preference alignment for creative writing and role-playing, agent capabilities for tool integration, and multilingual support across 100+ languages and dialects. Built on extensive pretraining and post-training, Qwen3-32B natively supports a context length of 32,768 tokens, extensible to 131,072 tokens with YaRN RoPE scaling. The quantized version maintains strong performance relative to its full-precision counterpart while being substantially cheaper to deploy.
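The 131,072-token extended context mentioned above is reached through YaRN RoPE scaling rather than being enabled by default. As a hedged sketch (field names follow the Hugging Face `rope_scaling` config convention that Qwen's documentation describes; verify against the model card before use), the entry added to the model's `config.json` would look like this, where the factor 4.0 is simply 131,072 / 32,768:

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

A smaller factor (e.g. 2.0 for 65,536 tokens) is generally preferable when the full 128K window is not needed, since static YaRN scaling can slightly degrade short-context quality.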
Alibaba
available local models on Mirai:
| Name | Quantisation | Size |
| --- | --- | --- |
| Qwen2.5-Coder-0.5B-Instruct | uint4 | 0.5B |
| Qwen2.5-Coder-1.5B-Instruct | uint4 | 1.5B |
| Qwen2.5-Coder-14B-Instruct | uint4 | 14B |
| Qwen2.5-Coder-32B-Instruct | uint4 | 32B |
| Qwen2.5-Coder-3B-Instruct | uint4 | 3B |
| Qwen2.5-Coder-7B-Instruct | uint4 | 7B |
| Qwen3-0.6B | uint4 | 0.6B |
| Qwen3-0.6B-MLX-4bit | uint4 | 0.6B |
| Qwen3-0.6B-MLX-8bit | uint4 | 0.6B |
| Qwen3-1.7B | uint4 | 1.7B |
| Qwen3-1.7B-MLX-4bit | uint4 | 1.7B |
| Qwen3-1.7B-MLX-8bit | uint4 | 1.7B |
| Qwen3-14B | uint4 | 14B |
| Qwen3-14B-AWQ | uint4 | 14B |
| Qwen3-14B-MLX-4bit | uint4 | 14B |
| Qwen3-14B-MLX-8bit | uint4 | 14B |
| Qwen3-32B | uint4 | 32B |
| Qwen3-32B-AWQ | uint4 | 32B |
| Qwen3-32B-MLX-4bit | uint4 | 32B |
| Qwen3-4B | uint4 | 4B |