Llama-3.1-8B-Instruct

Run locally on Apple devices with Mirai

Type

Local

From

Meta

Quantisation

No

Precision

float16

Size

8B

Source

Hugging Face

Meta Llama 3.1 is a collection of pretrained and instruction-tuned multilingual large language models available in 8B, 70B, and 405B sizes. The instruction-tuned versions are optimized for multilingual dialogue use cases and outperform many available open source and closed chat models on standard industry benchmarks. Llama 3.1 uses an optimized transformer architecture and employs supervised fine-tuning and reinforcement learning with human feedback to align with human preferences for helpfulness and safety. The model supports eight languages including English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai, with a 128k token context length. It was trained on approximately 15 trillion tokens of publicly available data with a knowledge cutoff of December 2023. The instruction-tuned models incorporate both human-generated and synthetic training data, with particular emphasis on reducing refusals to benign prompts while maintaining safety guardrails against harmful use cases. Llama 3.1 is designed to be deployed as part of comprehensive AI systems with additional safety measures, and developers should implement appropriate safeguards including content filtering and tool-use protections when building applications. The model can generate text and code, support tool use and function calling, and handle complex multilingual reasoning tasks.
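When prompting the instruction-tuned model directly (outside a runtime that applies the chat template for you), turns are delimited with Llama 3.1's special header tokens. A minimal sketch of assembling a single-turn prompt, based on Meta's published prompt format; in practice a library helper such as a tokenizer's chat-template method would do this for you:

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt from system and user text.

    Each message is wrapped in <|start_header_id|>role<|end_header_id|> and
    terminated with <|eot_id|>; the trailing assistant header cues the model
    to generate its reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt("You are a helpful assistant.", "Hello!")
print(prompt)
```

The model's completion ends with its own `<|eot_id|>` token, which a decoding loop would use as the stop condition.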

