Qwen2.5-Coder is the latest series of code-specific large language models from Alibaba Cloud, available in six sizes: 0.5B, 1.5B, 3B, 7B, 14B, and 32B parameters. This instruction-tuned 0.5B variant brings significant improvements over its predecessor CodeQwen1.5 in code generation, code reasoning, and code fixing, and was trained on 5.5 trillion tokens spanning source code, text-code grounding data, and synthetic data. The model maintains strong capabilities in mathematics and general tasks while serving as a foundation for real-world applications such as code agents. Architecturally, the 0.5B model is a causal language model with 24 transformer layers, grouped-query attention with 14 query heads and 2 key-value heads, and a full context length of 32,768 tokens. Built on the Qwen2.5 base, it incorporates RoPE positional embeddings, SwiGLU activations, and RMSNorm normalization to deliver efficient coding capabilities in a lightweight package.
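The snippet below is a minimal sketch of running this model through the Hugging Face transformers chat interface. The `Qwen/Qwen2.5-Coder-0.5B-Instruct` checkpoint ID comes from the upstream Qwen release, and the prompt is an illustrative placeholder, not anything Mirai-specific.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Upstream Hugging Face Hub ID for this model (from the Qwen release).
model_name = "Qwen/Qwen2.5-Coder-0.5B-Instruct"

# Load the instruction-tuned checkpoint and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Build a chat-formatted prompt for a small coding task.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and decode only the generated completion.
completion = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(completion)
```

Because the model is only 0.5B parameters, it runs comfortably on CPU or a laptop GPU; the larger variants listed below trade that small footprint for stronger completions.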
Alibaba
Available local models on Mirai:
| Name | Quantisation | Size |
|------|--------------|------|
| Qwen2.5-Coder-0.5B-Instruct | No | 0.5B |
| Qwen2.5-Coder-1.5B-Instruct | No | 1.5B |
| Qwen2.5-Coder-14B-Instruct | No | 14B |
| Qwen2.5-Coder-32B-Instruct | No | 32B |
| Qwen2.5-Coder-3B-Instruct | No | 3B |
| Qwen2.5-Coder-7B-Instruct | No | 7B |
| Qwen3-0.6B | No | 0.6B |
| Qwen3-0.6B-MLX-4bit | No | 0.6B |
| Qwen3-0.6B-MLX-8bit | No | 0.6B |
| Qwen3-1.7B | No | 1.7B |
| Qwen3-1.7B-MLX-4bit | No | 1.7B |
| Qwen3-1.7B-MLX-8bit | No | 1.7B |
| Qwen3-14B | No | 14B |
| Qwen3-14B-AWQ | No | 14B |
| Qwen3-14B-MLX-4bit | No | 14B |
| Qwen3-14B-MLX-8bit | No | 14B |
| Qwen3-32B | No | 32B |
| Qwen3-32B-AWQ | No | 32B |
| Qwen3-32B-MLX-4bit | No | 32B |
| Qwen3-4B | No | 4B |