Qwen2.5-Coder is the latest series of code-specific large language models from Alibaba Cloud, available in six mainstream sizes from 0.5 to 32 billion parameters. Built on the strong Qwen2.5 foundation and trained on 5.5 trillion tokens spanning source code, text-code grounding data, and synthetic data, it brings significant improvements in code generation, code reasoning, and code fixing. The 32B variant has achieved state-of-the-art performance among open-source code LLMs, with coding abilities matching GPT-4o. The instruction-tuned 14B variant described here is a causal language model with 48 layers and grouped-query attention (40 query heads sharing 8 key-value heads), and it supports a context length of up to 128K tokens. Beyond coding, it retains strong capabilities in mathematics and general tasks, making it a comprehensive foundation for real-world applications such as code agents. Architecturally, it uses the standard Qwen2.5 transformer enhancements: RoPE positional embeddings, SwiGLU activation, and RMSNorm.
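For orientation, here is a minimal sketch of running the 14B instruct model through Hugging Face Transformers. The repo id `Qwen/Qwen2.5-Coder-14B-Instruct` is the public Hugging Face release; the prompt and generation settings are illustrative choices, not part of the Mirai listing.

```python
# Minimal sketch: load the 14B instruct model with Hugging Face Transformers.
# Assumes a GPU with enough memory (or accelerate's device_map offloading).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that merges two sorted lists."},
]
# The tokenizer ships a chat template that inserts the Qwen special tokens.
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Drop the prompt tokens so only the completion is decoded.
completion = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(completion)
```

The same pattern works for any of the Qwen2.5-Coder sizes in the table below; only the repo id changes.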
Alibaba
available local models on Mirai:
| Name | Quantisation | Size |
|------|--------------|------|
| Qwen2.5-Coder-0.5B-Instruct | No | 0.5B |
| Qwen2.5-Coder-1.5B-Instruct | No | 1.5B |
| Qwen2.5-Coder-14B-Instruct | No | 14B |
| Qwen2.5-Coder-32B-Instruct | No | 32B |
| Qwen2.5-Coder-3B-Instruct | No | 3B |
| Qwen2.5-Coder-7B-Instruct | No | 7B |
| Qwen3-0.6B | No | 0.6B |
| Qwen3-0.6B-MLX-4bit | No | 0.6B |
| Qwen3-0.6B-MLX-8bit | No | 0.6B |
| Qwen3-1.7B | No | 1.7B |
| Qwen3-1.7B-MLX-4bit | No | 1.7B |
| Qwen3-1.7B-MLX-8bit | No | 1.7B |
| Qwen3-14B | No | 14B |
| Qwen3-14B-AWQ | No | 14B |
| Qwen3-14B-MLX-4bit | No | 14B |
| Qwen3-14B-MLX-8bit | No | 14B |
| Qwen3-32B | No | 32B |
| Qwen3-32B-AWQ | No | 32B |
| Qwen3-32B-MLX-4bit | No | 32B |
| Qwen3-4B | No | 4B |
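Several of the Qwen3 entries above are MLX builds (4-bit and 8-bit) intended for Apple silicon. As a hedged sketch, such builds can be run with Apple's `mlx-lm` package; the repo id below is an assumed community upload and may differ from the exact artifact Mirai downloads.

```python
# Hedged sketch: run a 4-bit MLX build with the mlx-lm package
# (pip install mlx-lm). Runs on Apple-silicon Macs only.
from mlx_lm import load, generate

# Assumed community repo id for the 4-bit Qwen3-14B build; substitute the
# exact model identifier your runtime uses.
model, tokenizer = load("mlx-community/Qwen3-14B-4bit")

messages = [{"role": "user", "content": "Write a binary search in Python."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# max_tokens bounds the completion length; verbose=True streams it to stdout.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```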