Qwen3-4B-MLX-4bit is the MLX-quantized version of Qwen3-4B, the latest generation of large language models from the Qwen series. It is a 4 billion parameter causal language model that supports seamless switching between thinking mode for complex logical reasoning, mathematics, and coding tasks, and non-thinking mode for efficient general-purpose dialogue. The model excels in reasoning capabilities, human preference alignment for creative writing and role-playing, agent capabilities with tool integration, and supports over 100 languages with strong multilingual instruction following and translation abilities. It natively supports context lengths of up to 32,768 tokens and can be extended to 131,072 tokens using YaRN scaling techniques.
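The 131,072-token extension mentioned above is opt-in. Per the upstream Qwen3 model card, YaRN long-context support is enabled by adding a `rope_scaling` block to the model's `config.json`; a factor of 4.0 scales the native 32,768-token window to roughly 131,072 tokens. A minimal sketch of that fragment (the factor value is the one suggested upstream and can be tuned down for shorter targets):

```json
{
  "rope_scaling": {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768
  }
}
```

Static YaRN applies the same scaling factor regardless of input length, which can slightly degrade quality on short texts, so the Qwen team recommends enabling it only when long contexts are actually needed.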
Alibaba
available local models on Mirai:
| Name                        | Quantisation | Size |
|-----------------------------|--------------|------|
| Qwen2.5-Coder-0.5B-Instruct | uint4        | 0.5B |
| Qwen2.5-Coder-1.5B-Instruct | uint4        | 1.5B |
| Qwen2.5-Coder-14B-Instruct  | uint4        | 14B  |
| Qwen2.5-Coder-32B-Instruct  | uint4        | 32B  |
| Qwen2.5-Coder-3B-Instruct   | uint4        | 3B   |
| Qwen2.5-Coder-7B-Instruct   | uint4        | 7B   |
| Qwen3-0.6B                  | uint4        | 0.6B |
| Qwen3-0.6B-MLX-4bit         | uint4        | 0.6B |
| Qwen3-0.6B-MLX-8bit         | uint4        | 0.6B |
| Qwen3-1.7B                  | uint4        | 1.7B |
| Qwen3-1.7B-MLX-4bit         | uint4        | 1.7B |
| Qwen3-1.7B-MLX-8bit         | uint4        | 1.7B |
| Qwen3-14B                   | uint4        | 14B  |
| Qwen3-14B-AWQ               | uint4        | 14B  |
| Qwen3-14B-MLX-4bit          | uint4        | 14B  |
| Qwen3-14B-MLX-8bit          | uint4        | 14B  |
| Qwen3-32B                   | uint4        | 32B  |
| Qwen3-32B-AWQ               | uint4        | 32B  |
| Qwen3-32B-MLX-4bit          | uint4        | 32B  |
| Qwen3-4B                    | uint4        | 4B   |